Is modern computability theory "really" about algorithms?

I think it's important to take a historical perspective. There was a time not so long ago when computers as we know them now did not exist. At that stage, coming up with a precise definition of an algorithm or of a Turing machine was a major advance, allowing one to build the earliest modern computers and begin the revolution that we take for granted today. As actual computers became more powerful, interest shifted from the computable/uncomputable boundary to the feasible/infeasible boundary, where initially the definition of "feasible" was (roughly speaking) "polynomial time." So then we get the P = NP question and the birth of computational complexity theory as we know it today.

As computers became more powerful and more diverse, interest again shifted. People today are increasingly interested in parallel/distributed algorithms, cloud computing, SIMD architectures, etc. Datasets are so large that polynomial time does not cut it any more; people want linear-time or even sublinear-time algorithms.

So at the time of its invention, computability theory was about practical algorithms. The same goes for computational complexity theory and other subjects in computer science. But as technology advances, the definition of "practical" changes, so the classical subjects no longer line up so nicely with the interests of current practitioners. That does not mean that the classical subjects are no longer of interest, because fundamentally important mathematical concepts never go away. But they become more abstract, and it takes a broad perspective to see their motivation and to be able to tell which problems are still important today. For example, in my opinion, some of the most exciting developments in computability theory today are its unexpected connections with differential geometry, as described, for example, in this paper by Soare. This work is very far removed from "practical algorithms," but it illustrates how the study of fundamental mathematical concepts can reap unexpected dividends and is therefore worth pursuing even if immediate applications are not visible.
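As a concrete illustration of the last point about sublinear time (my own minimal sketch, not part of the answer above): the snippet below estimates the fraction of 1s in a huge array by inspecting only a fixed number of randomly chosen positions, so its running time does not grow with the size of the input. The function name and sample count are hypothetical choices for the example.

```python
import random

def estimate_fraction_of_ones(data, samples=1000):
    """Estimate the fraction of 1s in `data` by inspecting only `samples`
    randomly chosen positions -- O(samples) time, independent of len(data),
    hence sublinear in the input size."""
    hits = sum(data[random.randrange(len(data))] for _ in range(samples))
    return hits / samples

if __name__ == "__main__":
    big = [1 if i % 4 == 0 else 0 for i in range(10_000_000)]
    print(estimate_fraction_of_ones(big))  # close to 0.25
```

The trade-off, of course, is that the answer is only approximate; the point is simply that "sublinear" means the algorithm does not even read all of its input.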

— — — — — —

What kind of mathematical “discoveries” have enabled mankind to build modern computers?

The mathematical foundations of modern computer science began to be laid by Kurt Gödel with his incompleteness theorems (1931), in which he showed that there are limits to what can be proved and disproved within a formal system. This led to work by Gödel and others to define and describe such formal systems, including concepts such as the mu-recursive functions and the lambda-definable functions.

In 1936, Alan Turing and Alonzo Church independently introduced formalizations of the notion of an algorithm, with limits on what can be computed, and a "purely mechanical" model of computation. This became the Church-Turing thesis, a hypothesis about the nature of mechanical calculation devices such as electronic computers: any calculation that is possible at all can be performed by an algorithm running on a computer, provided that sufficient time and storage space are available.

In the same year, Turing also published his seminal work on Turing machines, abstract digital computing machines, including the universal Turing machine, a single machine capable of simulating any other. This work captured the principle of the modern computer and anticipated the stored-program concept that almost all modern computers use. These hypothetical machines were designed to determine, in precise mathematical terms, what can be computed, taking into account limitations on computing ability. A problem that some Turing machine can solve is said to be Turing computable; a system that can simulate any Turing machine is said to be Turing complete.

Up to and during the 1930s, electrical engineers were able to build electronic circuits to solve mathematical and logic problems, but most did so in an ad hoc manner, lacking theoretical rigor. This changed with Claude Elwood Shannon's 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits. Shannon had been exposed to George Boole's work in an undergraduate philosophy class and recognized that Boolean algebra could be used to arrange electromechanical relays (then used in telephone routing switches) to solve logic problems. This concept of using the properties of electrical switches to do logic is the basic idea underlying all electronic digital computers, and Shannon's thesis became the foundation of practical digital circuit design when it became widely known in the electrical engineering community during and after World War II.

Shannon went on to found the field of information theory with his paper A Mathematical Theory of Communication, which applied probability theory to the problem of how best to encode the information a sender wants to transmit. This work is one of the theoretical foundations of many areas of study, including data compression and cryptography.
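To make the abstract machine described above a little more concrete, here is a minimal Python sketch of a Turing-machine simulator. This is not Turing's own formalization, just an illustration under simple assumptions: the machine is given as a transition table, and the example rule table (a hypothetical one, not taken from the answer above) increments a binary number written on the tape.

```python
def run_turing_machine(tape, rules, state="start", head=0, blank="_", accept="halt"):
    """Run a Turing machine given as a dict:
    rules[(state, symbol)] = (new_state, symbol_to_write, move)  # move is -1 or +1
    Returns the final non-blank tape contents as a string."""
    tape = dict(enumerate(tape))  # sparse tape, unbounded in both directions
    while state != accept:
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# Example rule table: binary increment.
# Walk right to the end of the number, then carry 1s to 0s moving left.
rules = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt", "1", -1),
    ("carry", "_"): ("halt", "1", -1),
}

print(run_turing_machine("1011", rules))  # prints "1100"
```

The simulator itself is fixed; all of the behaviour lives in the rule table. That separation of a fixed machine from the "program" it runs is the sense in which a single universal machine anticipates the stored-program computer.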
