10

I always feel there is a disconnect between abstract machines (such as Turing machines) and computer architectures (including virtual machine architectures and the von Neumann architecture). So I would like to know: how are they related? How does one influence the other? References are also appreciated. Thanks.

3
  • 7
    Turing machines are a theoretical computer science model to reason about computability. Similarly, lambda calculus is a computer science model for computations, but one which has found practical applications in programming language design. While lambda calculus, Turing machines, and actual computers are equivalent to each other with respect to the things they can compute, they are completely different in how they work. Notably, these theoretical models of computation don't describe what real hardware can do efficiently.
    – amon
    Commented Feb 11, 2015 at 17:53
  • 2
    @amon It appears you've already written most of an answer, why let it "go to waste" in a comment?
    – user7043
    Commented Feb 11, 2015 at 17:57
  • As others pointed out, there are several mathematical models for "computers": some closer to languages (partial recursive functions, lambda calculus), some closer to hardware. If you want, you should look at RAM machines (Wikipedia link): they are closer to real hardware than Turing machines. Commented Feb 12, 2015 at 9:50

6 Answers

22

Turing machines and similar "machines" are models of computation; they are intended to investigate questions such as:

  • What can be computed
  • The complexity class of problems
  • Relations among complexity classes
  • The equivalence of various ways to compute something

For that purpose, the machine itself must be as simple as possible. Programmer convenience or pesky implementation concerns do not matter, since these are mathematical objects and only very few programs are ever written directly for them.

Conversely, virtual machine architectures and actual silicon-based machine architectures are centered on executing a given program. The machine is made more complicated than strictly necessary for the above concerns, and in turn it takes fewer (and more obvious) instructions to do interesting things. Not too complicated, as they still have to be comprehensible (and efficiently implementable), but more complicated.

So the two approaches are fundamentally at odds. Besides both being in the field of computer science, they don't have a lot to do with each other.

2
  • 1
    Thanks. But I found "Turing machines and universal Turing machines with analogy to virtual machines" in a course, which suggests some relation between them, but gives no details.
    – Tim
    Commented Feb 11, 2015 at 18:00
  • 4
    @Tim I'd guess that course just takes Turing machines as a starting point to introduce the concept of an abstract machine, then quickly moves on to more useful abstract machines.
    – user7043
    Commented Feb 11, 2015 at 18:28
4

The main relationship is that you can simulate the theoretical construct in the physical one.

Because the physical machine is capable of everything the theoretical one is, testing and analysis done on the theoretical machine can be recognized as applying to what is implementable in the real world.

The halting problem is a perfect example of something that was shown to be unsolvable on a Turing machine, and by that proof it can be known to be unsolvable on any real machine that abides by the rules of the Turing machine.
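
To sketch why (in Python, with a hypothetical `halts(program, arg)` oracle that nothing real provides): any candidate halting oracle can be defeated by a program built from the oracle itself.

```python
def make_paradox(halts):
    """Given a claimed halting oracle halts(program, arg) -> bool,
    build a program that the oracle must get wrong."""
    def paradox(program):
        if halts(program, program):  # oracle says paradox(paradox) halts?
            while True:              # ...then loop forever
                pass
        # oracle says it loops forever? ...then halt immediately
    return paradox

def naive_halts(program, arg):
    return True  # a (necessarily wrong) candidate: claims everything halts

paradox = make_paradox(naive_halts)
# paradox(paradox) would now loop forever, so naive_halts was wrong about it.
# The same construction defeats *every* candidate oracle, which is the proof.
```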

It's the difference between summing things by counting them physically and doing it on paper: it has been proven that the reality of counting follows the same rules as doing the summation on a piece of paper. So when you simulate the physical counting of things, your results are recognized as applicable to the real world; you know how much two candy bars will cost by mentally simulating the arithmetic, without needing to count out the physical money to get the result.

People are currently working out analysis and testing of a theoretical model known as a "Quantum Turing Machine" to see what facilities will be available with quantum computing machines. It makes sense to work with such models when the physical version is excessively expensive and rare, and current implementations are still very limited. The theoretical models show what we may be able to do once our physical implementations improve.

1

They are related in roughly the same way that the space shuttle is related to a balloon that you inflate with your breath and then let go of and watch fly away.

The fundamental principle of expelling something in one direction to propel something in the opposite direction is there.

That's where the similarities end.

0
1

I see the theoretical machines as bridging the gap between real world computation and mathematics. A Turing machine is powerful enough to simulate any real-world architecture or programming language, simple enough to be simulated easily, and, most importantly, simple enough to be the subject of reasonably straightforward mathematical reasoning and proofs.
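
"Simple enough to be simulated easily" can be made concrete. Here is a minimal single-tape Turing machine interpreter in Python; the rule encoding below is my own illustrative choice, not a standard one.

```python
from collections import defaultdict

def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a single-tape Turing machine.

    rules maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left) or +1 (right). A missing rule jams the
    machine with a KeyError; the state "halt" stops it cleanly.
    """
    cells = defaultdict(lambda: blank, enumerate(tape))
    head = 0
    for _ in range(max_steps):  # guard so a non-halting machine still stops
        if state == "halt":
            break
        state, cells[head], move = rules[(state, cells[head])]
        head += move
    return "".join(cells[i] for i in sorted(cells))

# A tiny example machine: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt",  "_", +1),
}
print(run_tm(flip, "10110"))  # -> 01001_
```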

1

It is important to know that the definition of computation is not "those things which computers do". Computation predates computers. Computers were given their name because they were created to assist the task of computation, not because they define it.

So the Turing Machine is not about how computers work. It is about whether or not a problem is computable - that is, solvable by a formal logical/mathematical process. It says nothing about how that process might be implemented. If it is computable, it can be solved by human beings with pencils and paper, given enough time; or with computers; or (this is the important thing) with any system that can be shown to be Turing complete.

So the Turing Machine does two very important things:

  1. Provides a test for the computability of any problem/task.
  2. Provides a test for any system to show if it can compute any computable task.

The first point enables us to think about problems without being distracted by real-world implementations. This is a good thing, because real hardware often distracts people with irrelevant detail (like "what happens if we run out of memory or storage space?"; Turing Machines have infinite resources). A provable theoretical solution can be developed for a Turing Machine, and then all somebody needs to do is translate it into something that will work on a given architecture.

The second point allows us to verify the capability of any implementation without having to run lots of different tests on it. If it can simulate a Turing Machine, it can do anything the Turing Machine can do. Since Turing Machines can compute anything computable, so can it.

Which means that the relationship between the Turing Machine and any genuinely practical computer architecture (even virtual ones) is just one thing: they can compute.

Von Neumann's architecture was an attempt to create a design template for effective general-purpose electronic digital computers. Turing's work provided the proof of its validity.

-1

If you think about it, architectures are abstract machines. They describe how a lump of carefully made silicon "should" behave. The difference between the architectures and Turing machines is more a matter of scale than a fundamental shift in approach.

The advantage of Turing machines is that there is a set of useful proofs that are very easy to do using a Turing machine. It is simple to prove that any machine powerful enough to simulate a Turing machine can solve any problem a Turing machine can (duh). However, it gets more interesting when you define a Computable function. It turns out that there are many compatible definitions of a computable function. If you can define all of your behavior as computable functions, you can be simulated in a Turing machine.
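
As an illustration of "many compatible definitions", here is a hedged sketch of one alternative model, a tiny register machine, in Python. The three-instruction encoding (INC, DECJZ, HALT) is invented for this example, not a standard format.

```python
def run_register_machine(program, registers):
    """Run a minimal register machine.

    Instructions (an illustrative encoding):
      ("INC", r)       increment register r, fall through
      ("DECJZ", r, t)  if register r is 0 jump to t, else decrement r
      ("HALT",)        stop
    """
    pc = 0
    while program[pc][0] != "HALT":
        op = program[pc]
        if op[0] == "INC":
            registers[op[1]] += 1
            pc += 1
        else:  # DECJZ
            if registers[op[1]] == 0:
                pc = op[2]
            else:
                registers[op[1]] -= 1
                pc += 1
    return registers

# add: move everything from register "b" into register "a"
add = [
    ("DECJZ", "b", 3),  # 0: if b == 0, jump to HALT
    ("INC", "a"),       # 1: a += 1
    ("DECJZ", "z", 0),  # 2: z is always 0, so this is an unconditional jump
    ("HALT",),          # 3
]
print(run_register_machine(add, {"a": 2, "b": 3, "z": 0}))
# -> {'a': 5, 'b': 0, 'z': 0}
```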

So let's say you have an architecture which directly supports LISP style programs, and another like the x86 which is more procedural. Your friend claims "LISP is more expressive, so you can write programs on this machine you could never write on your x86." This is brutal to counter (especially since you probably don't know enough LISP). However, you can abuse several abstract machines like the Turing machine:

  • Your LISP machine may be fancy, but everything it can do is reducible to lambda calculus (see the Church-numeral sketch after this list). Your friend nods eagerly. Lambda calculus is a bit of a cult thing for functional programmers.
  • My x86 may be fancy, but everything it can do is reducible to a register machine. Once again, no question from your friend. Registers are SO passé in modern computer theory!
  • Any register machine can be modeled as a Turing machine simulating that register machine. Now your friend wonders why you're harkening back to the punch-tape era.
  • And your lambda calculus machine can be reduced to a Turing machine as well. Your friend objects, but you point them at the Church-Turing thesis, and they hang their head in shame.
  • Thus my x86 box can do anything your fancy LISP based machine can do!
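
The "reducible to lambda calculus" step can be made tangible. Here is a small sketch in Python that mimics the lambda calculus by using only single-argument functions: Church numerals, where arithmetic is nothing but repeated function application.

```python
# Church numerals: the number n is "apply f to x, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications (for display only)."""
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # -> 5
```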

There are, of course, plenty of other examples. Conway's Game of Life was proven to be Turing complete, meaning it could theoretically do anything your computer can do. The easiest way to do this was to build a Turing machine in Life. I bring this up because this is a case of what you called an abstract machine being treated as a literal architecture! You can imagine how hard the claim of computability in Life would be without the aid of abstract models (I sure as heck am not modeling an x64, complete with cache peeking, just to prove Life is computable!)
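
For a sense of how little machinery Life itself needs, here is a minimal sketch of one Game of Life generation in Python; keeping live cells as a set of coordinates is my own choice for brevity.

```python
from itertools import product

def life_step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider: after 4 generations it has translated itself one cell diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))  # the same five cells as above, shifted by (1, 1)
```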


In the end, the big difference between architectures and abstract machines is that architectures are usually concerned with performance. Architectures want to know how fast you can do something. Abstract machines tend to be content with just knowing whether you can. Consider the Universal Constructor developed for von Neumann's cellular automaton. It was enough to prove that the UC could work, never mind that the authors never had enough computing power to actually see it run.

The price architectures pay for demonstrating how fast they can work is that it is often terribly difficult to prove that they can compute everything. For that, the architectures turn right back around and start using abstract machines.

1
  • 1
    Your given reasoning examples aren't technically correct - if you state that a Turing machine can do everything that a register machine or x86 machine can, it doesn't necessarily mean that an x86 machine can do everything that a register machine or a Turing machine can. As a counterexample, any finite automaton can also be reduced to a Turing machine but clearly isn't equivalent to lambda calculus or LISP. Directionality matters - if you want to state "my x86 box can do anything your fancy LISP based machine can do", then it would require a reduction from Turing to x86, not from x86 to Turing.
    – Peteris
    Commented Feb 12, 2015 at 5:55
