29
$\begingroup$

Perhaps my limited understanding of the subject is incorrect, but this is what I understand so far:

  • Functional programming is based on the lambda calculus, formulated by Alonzo Church.

  • Imperative programming is based on the Turing machine model, devised by Alan Turing, Church's student.

  • The lambda calculus is as powerful as the Turing machine, meaning they are equivalent in computational power.

If functional programming is based on the lambda calculus and not the Turing machine, then why are some (or all) functional languages described as Turing complete, and not "lambda complete" or something like that? Is the term "Turing completeness" special in any way to Turing machines, or is it just a word?

Lastly, if imperative languages are based on the Turing machine, and computers are basically Turing machines without the infinite memory, does that mean they perform better than functional programming languages on our modern PCs?

If that's the case, then what would be the equivalent of a lambda calculus machine?

I know these seem to be 3 separate questions, but they're all closely related, and each depends on the previous one being a valid question to begin with.

$\endgroup$
4
  • 7
    $\begingroup$ Not an answer, but note that not all versions of the lambda calculus are Turing complete. The simply typed lambda calculus, and the stronger versions underlying Coq and Agda based on termination checking, are not Turing complete (because they have decidable halting problems). Strongly typed languages like Haskell and SML get around this by allowing arbitrary recursion with a fixpoint combinator, a term with type (a -> a) -> a (a sketch follows these comments). $\endgroup$ Commented Jul 10, 2015 at 19:54
  • 1
    $\begingroup$ It is just so wrong to say "defined to be Turing complete". Can we please change the title? $\endgroup$ Commented Jul 4, 2016 at 11:11
  • $\begingroup$ @AndrejBauer Thank you for the title edit, but I'm curious as to why it's (defined as Turning Complete) wrong? Is it because it's an adjective? Would described be a better word than define? $\endgroup$ Commented Sep 16, 2016 at 11:21
  • 3
    $\begingroup$ @Abdul Well, the problem is the word "defined". If you say that "functional languages are defined as Turing complete", you're saying that either the definition of "functional language" or the definition of "Turing complete" states that functional languages are Turing complete. As a matter of fact, neither definition says that. $\endgroup$ Commented Jul 30, 2018 at 17:38
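
A minimal sketch of the fixpoint combinator mentioned in the first comment above (in Haskell; these definitions are illustrative, and the standard library ships an equivalent as Data.Function.fix):

```haskell
-- A term with type (a -> a) -> a: it reintroduces arbitrary
-- recursion into the typed language.
fix :: (a -> a) -> a
fix f = f (fix f)

-- Factorial without explicit recursion: take the fixpoint of a
-- non-recursive step function that receives itself as an argument.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n <= 0 then 1 else n * rec (n - 1))

main :: IO ()
main = print (factorial 10)  -- 3628800
```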

5 Answers

29
$\begingroup$

In a nutshell:

What characterizes imperative programming languages as close to Turing machines, and to usual computers such as PCs (themselves closer to random access machines (RAMs) than to Turing machines), is the concept of an explicit memory that can be modified to store intermediate results. It is an automata view of computation, with a concept of state (comprising both the finite-state control and the memory content) that can change as the computation proceeds.

Most other models are more abstract. Though they may express a computation as a succession of transformation steps applied to an original structure, these transformations take place in a sort of atemporal universe of mathematical meanings. This can preserve properties, such as referential transparency, that make mathematical analysis simpler. But it is more remote from natural physical models, which rely on the concept of memory.
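
As a small illustration of this contrast (a sketch in Haskell, chosen only for illustration; the names are mine, not from the answer), the same sum can be computed against an explicit, mutable memory cell, or as a pure transformation that keeps referential transparency:

```haskell
import Data.IORef

-- Imperative view: allocate an explicit memory location and
-- destructively update it as the computation proceeds.
sumImperative :: [Int] -> IO Int
sumImperative xs = do
  acc <- newIORef 0
  mapM_ (\x -> modifyIORef acc (+ x)) xs
  readIORef acc

-- Functional view: a succession of transformations of a value.
-- No cell is ever overwritten, so `sumFunctional xs` can be
-- replaced by its result anywhere (referential transparency).
sumFunctional :: [Int] -> Int
sumFunctional = foldr (+) 0

main :: IO ()
main = do
  print =<< sumImperative [1 .. 10]  -- 55
  print (sumFunctional [1 .. 10])    -- 55
```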

Thus there are no natural functional machines, except in a broader sense as explained below, since software is not really separable from hardware.

The reference to Turing as the yardstick of computability probably comes from the fact that his model, the Turing machine, was the closest to this physical-realizability constraint, which made it a more intuitive model of computation.

Further considerations:

There are many models of computation, which were designed to capture, in the most general possible way, the concept of a computation. They include Turing machines (themselves in many different flavors), the lambda calculus (flavors too), semi-Thue rewriting systems, partial recursive functions, and combinatory logic.

They all capture some aspects of the various techniques used by mathematicians to express or conduct computations. Most have been used to some extent as the basis of some programming language design (e.g., Snobol for rewriting systems, APL for combinators, Lisp/Scheme for the lambda calculus), and they can often be combined in diverse ways in modern programming languages.

One major result is that all these computation models were proved equivalent, which led to the Church-Turing thesis that no physically realizable model of computation can do more than any of these models. A model of computation is said to be Turing complete if it can be proved to be equivalent to one of these models, hence equivalent to all of them.

The name could have been different. The choice of the Turing machine (TM) as the reference is probably due to the fact that it is the simplest of these models, mimicking closely (though simplistically) the way a human computes, and fairly easy to implement (in a limited, finite form) as a physical device, to such an extent that Turing machines have been built with Lego sets. The basic idea requires no mathematical sophistication. It is probably the simplicity and realizability of the model that gave it this reference position.

At the time Alan Turing created his computing device, other proposals were on the table to serve as a formal definition of computability, a crucial issue for the foundations of mathematics (see the Entscheidungsproblem). Turing's proposal was considered by the experts of the time as the one most convincingly encompassing known work on what computability should be (see Computability and Recursion, R.I. Soare, 1996, section 3.2). The various proposals were proved equivalent, but Turing's was more convincing. [from comments by Yuval Filmus]

It should be noted that, from a hardware point of view, our computers are not Turing machines, but rather what are called random access machines (RAMs), which are also Turing complete.

Purely imperative languages (whatever that might mean) are probably the formalisms used for the most basic models, such as Turing machines, or the assembly language of computers (skipping its binary coding). Both are notoriously unreadable, and very hard to write significant programs in. Actually, even assembly language has some higher-level features to ease programming a bit, compared with direct use of machine instructions. Basic imperative models are close to the physical world, but not very usable.

This quickly led to the development of higher-level models of computation, which mixed in a variety of computational techniques, such as subprogram and function calls, naming of memory locations, scoping of names, quantification and dummy variables (already used in some form in mathematics and logic), and even very abstract concepts such as reflection (Lisp, 1958).

The classification of programming languages into paradigms such as imperative, functional, logic, and object-oriented is based on the preeminence of some of these techniques in the design of the language, and on the presence or absence of some computing features that enforce certain properties for programs or program fragments written in the language.

Some models are convenient for physical machines. Others are more convenient for a high-level description of algorithms, and which is best may depend on the type of algorithm being described. Some theoreticians even use nondeterministic specifications of algorithms, and even those can be translated into more conventional programming terms. But there is no mismatch problem, because we have developed sophisticated compiler/interpreter technology that can translate each model into another as needed (which is also the basis of the Church-Turing thesis).

Now, you should never look at your computer as raw hardware. It does contain boolean circuitry that does very elementary processing. But much of it is driven by micro-programs inside the computer that you never get to know about. Then you have the operating system, which makes your machine appear different from what the hardware alone does. On top of that you may have a virtual machine that executes byte code, and then a high-level language such as Pyva or Jathon, or Haskell, or OCaml, that can be compiled into byte code.

At each level you see a different computation model. It is very hard to separate the hardware level from the software level, and thus to assign a specific computational model to a machine. And since they are all inter-translatable, the idea of an ultimate hardware computation model is pretty much an illusion.

The lambda calculus machine does exist: it is a computer that can reduce lambda calculus expressions. And that is easily done.
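
As a minimal sketch of such a machine (in Haskell; the representation and names are my own illustration), here is an evaluator that repeatedly performs normal-order beta reduction. Capture-avoiding substitution is simplified by assuming all bound variable names are distinct:

```haskell
data Term = Var String | Lam String Term | App Term Term
  deriving Show

-- Substitute s for x in a term (simplified: assumes distinct
-- bound-variable names, so no capture can occur).
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- One normal-order step: reduce the leftmost-outermost redex.
step :: Term -> Maybe Term
step (App (Lam x b) a) = Just (subst x a b)
step (App f a)         = case step f of
                           Just f' -> Just (App f' a)
                           Nothing -> App f <$> step a
step (Lam x b)         = Lam x <$> step b
step (Var _)           = Nothing

-- Run the "machine": reduce until no redex remains.
normalize :: Term -> Term
normalize t = maybe t normalize (step t)

-- (\x. x) (\y. y) reduces to \y. y
main :: IO ()
main = print (normalize (App (Lam "x" (Var "x")) (Lam "y" (Var "y"))))
```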

About specialized machine architectures

Actually, complementing Peter Taylor's answer, and following up on the hardware/software intertwining, specialized machines have been produced to be better adapted to a specific paradigm, with their basic software written in a programming language based on that paradigm.

These include, for example:

  • Lisp machines, hardware built around the Lisp model of computation (the subject of Peter Taylor's answer);

  • Prolog machines, such as those of the Japanese Fifth Generation project, for the logic programming paradigm.
Fundamentally, these are also imperative hardware structures, but mitigated with special hardware features or micro-programmed interpreters to better adapt to the intended paradigm.

Actually, hardware specialized for specific paradigms does not seem to have ever been successful in the long run. The reason is that the compiling technology for implementing any paradigm on vanilla hardware became more and more effective, so that specialized hardware was not so much needed. In addition, hardware performance was improving fast, but the cost of improvement (including the evolution of basic software) was more easily amortized on vanilla hardware than on specialized hardware. Specialized hardware could not compete in the long run.

Nevertheless, and though I have no precise data on this, I would suspect that these ventures left some ideas that did influence the evolution of machines, memories, and instruction-set architectures.

$\endgroup$
7
  • 1
    $\begingroup$ The choice of the Turing machine as reference, to the extent that it's actually made, is motivated mostly historically: Turing was the first to come up with a satisfying definition of computability. $\endgroup$ Commented Jul 10, 2015 at 16:24
  • $\begingroup$ @YuvalFilmus And why was it more of a "satisfying definition of computability"? $\endgroup$
    – babou
    Commented Jul 10, 2015 at 16:32
  • 1
    $\begingroup$ That's what Gödel thought. Bob Soare has a few words to say on the matter here: cs.uchicago.edu/~soare/Publications/compute.ps. $\endgroup$ Commented Jul 10, 2015 at 17:38
  • $\begingroup$ @YuvalFilmus This is 46 pages. I mean, I am giving some reasons why it should be more satisfying. They may be naive. If there are more compelling ones that explain the success, it is worth mentioning them explicitly. $\endgroup$
    – babou
    Commented Jul 10, 2015 at 19:25
  • $\begingroup$ Look at Section 3.2. There were previous definitions of computability but they weren't convincing. Turing's was the first one which was convincing, at least for some key people. $\endgroup$ Commented Jul 10, 2015 at 20:35
24
$\begingroup$

Turing-complete is just a name. You can call it Abdul-complete if you want. Names are decided upon historically and are often named after the "wrong" people. It's a sociological process that has no clear criteria. The name has no meaning beyond its official semantics.

Imperative languages are not based on Turing machines. They are based on RAM machines. Your computer is a RAM machine. Turing machines are a nice theoretical model, but they are not a very good model of actual computers.

Programming languages based on other paradigms can be very successful, even though the underlying CPU doesn't support them natively; for example, printers run a stack language. There is more to programming than machine code.
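
For instance, the core of a stack language is tiny. A toy sketch in Haskell (my own syntax, not actual PostScript, the classic example of such a printer language):

```haskell
data Op = Push Int | Add | Mul

-- Each operation transforms an explicit operand stack.
exec :: [Int] -> Op -> [Int]
exec st       (Push n) = n : st
exec (a:b:st) Add      = (a + b) : st
exec (a:b:st) Mul      = (a * b) : st
exec _        _        = error "stack underflow"

run :: [Op] -> [Int]
run = foldl exec []

-- (2 + 3) * 4, written postfix: 2 3 add 4 mul
main :: IO ()
main = print (run [Push 2, Push 3, Add, Push 4, Mul])  -- [20]
```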

$\endgroup$
10
  • $\begingroup$ "Turing machines are a nice theoretical model, but they are not a very good model of actual computers." Except for the lack of infinite memory, what are other reasons that it isn't a good model for actual computers? Also, was I correct in thinking functional languages are based off of lambda calculus? $\endgroup$ Commented Jul 10, 2015 at 15:30
  • 5
    $\begingroup$ Turing machines don't have random access, whereas your CPU does. Functional languages are intimately related to the $\lambda$ calculus. $\endgroup$ Commented Jul 10, 2015 at 15:31
  • 11
    $\begingroup$ Imperative languages tend to allow array accesses in constant time, e.g. A[x] in C. Turing machines can't do this in constant time. That's why, even in theoretical computer science, the running time of algorithms is analyzed on the RAM machine model rather than on the Turing machine model. (See the sketch after these comments.) $\endgroup$ Commented Jul 10, 2015 at 15:34
  • 18
    $\begingroup$ Actually, Turing machines are a good model of actual computers … except that when Turing wrote his paper, "computer" was a job description for a human working with pen and paper. The read/write head is a model of a pen, the tape is a model of an infinite stack of sheets of paper (just cut them up into small strips and glue them together), the alphabet is a model of, well, our alphabet, and the finite transitions are a model of the limited number of rules one can keep in one's head. $\endgroup$ Commented Jul 10, 2015 at 17:01
  • 4
    $\begingroup$ That was the best insight I have ever had into why the hell Turing chose the Turing machine. I've always kind of wondered "why did he choose such a crappy model of computation?" I've always thought that if inventing the theory of computation had been left to me (god help us; we wouldn't be very far along) I would probably have chosen a better model of computation. Now I get where he was coming from, and it makes so much more sense. $\endgroup$
    – Jake
    Commented Jul 10, 2015 at 18:47
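
A small sketch of the constant-time access point from these comments (in Haskell; Data.Array gives O(1) indexing in the RAM spirit, while a list behaves more like a tape that must be traversed):

```haskell
import Data.Array

main :: IO ()
main = do
  let n   = 1000000 :: Int
      arr = listArray (0, n - 1) [0 ..]  -- contiguous memory, O(1) access
      lst = [0 .. n - 1] :: [Int]        -- sequential structure, O(i) access
  print (arr ! (n - 1))   -- one indexed access
  print (lst !! (n - 1))  -- walks past every earlier cell first
```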
12
$\begingroup$

Because "Turing-complete" just means "it can compute whatever a Turing machine can compute."

$\endgroup$
7
  • 1
    $\begingroup$ Turing-complete could also be named in honor of Turing (the person), who came up with the first philosophically satisfying definition of computability; or it could be named in honor of Turing's paper in which he describes this concept. $\endgroup$ Commented Jul 10, 2015 at 16:23
  • 1
    $\begingroup$ @YuvalFilmus: it could be named after Alan Turing's mum, but the assertion here is that it isn't ;-) $\endgroup$ Commented Jul 10, 2015 at 17:15
  • $\begingroup$ @YuvalFilmus Could be (though, as far as I'm aware, isn't). But where the term comes from is of only secondary importance. What matters here is what the term means. $\endgroup$ Commented Jul 11, 2015 at 1:19
  • 2
    $\begingroup$ This is short and sweet, but maybe a bit too short. What does a Turing machine "do"? Well, among the things they "do" is read and write tapes, which lambda expressions do not. Better would be "Turing-complete models of computation allow all computable functions to be computed." $\endgroup$ Commented Jul 11, 2015 at 18:14
  • $\begingroup$ @TheodoreNorvell I think your comment is similar to what I was thinking. I knew that lambda calculus and the Turing machine are equivalent in power, but different in mechanism (and now I learned there are others), and I was wondering if the term "Turing completeness" was in any way special to the Turing machine, or if it was just a name. $\endgroup$ Commented Jul 12, 2015 at 6:19
8
$\begingroup$

One of your questions doesn't seem to have been answered yet:

If that's the case, then what would be the equivalent of a lambda calculus machine?

A Lisp machine. Hardware designed specifically to fit the LISP model of computation. The Wikipedia article talks about commercial products, but my director of studies at university had a hand-built one in his office.

$\endgroup$
1
$\begingroup$

Functional languages in the form of the lambda calculus invented by Church were proven Turing complete. This is an actual mathematical proof that can be found in published scientific papers, obtained by "reducing" lambda calculus to operations/calculations on Turing machines. Around the time of Turing's 1936 paper and after, different "comprehensive" models of computation were proposed and in circulation. It was not immediately realized that they were all equivalent; the proof(s) that they were equivalent were published roughly in the late 1930s and 1940s, after Turing's paper.

The Turing machine is conceptually (but not functionally) simpler than the other models, and that is likely a significant part of the reason that Turing completeness is named after him. Other ideas such as the lambda calculus are more abstract and originated mainly in math/logic theory. Turing proposed a theoretical machine, and a "machine" is literally a "physical device". It is a remarkable conceptual construction that ties together/unifies two different worlds, the applied and the theoretical. It gives new abstract meaning to physical entities, e.g. "time and space". It is not a mere coincidence that mathematicians sometimes refer to the "technology", "machinery", or "devices" of proofs. Turing managed to ingeniously fuse all this in his conceptual invention. Its definition is quite simple, but analysis of it exhibits some of the most extraordinary emergent behavior ever seen in the history of scientific/mathematical thought. Turing was the first scientist/mathematician to grasp much of this significance/power/potential.

$\endgroup$
2
  • $\begingroup$ In other words, it might be said that Turing was the first to identify/"recognize" the significance of the Turing completeness phenomenon, and in turn CS "recognizes" him for this monumental accomplishment through the use of the term. $\endgroup$
    – vzn
    Commented Jul 11, 2015 at 20:14
  • $\begingroup$ Your reduction is the wrong way around. To prove Turing-completeness of some system $X$, you need to reduce Turing machine computation to that system. $\endgroup$ Commented Jul 11, 2015 at 23:43
