Models of Computation, Purpose and Types

Models of Computation:

Fundamental goals: to develop an intuitive appreciation of the significance of the concept 'model of computation', together with many interesting illustrations that mirror key features of realistic systems in the simplest possible setting.

A) Models of computation: purpose and types
B) Construction with ruler and compass
C) Systolic algorithms, example: sorting networks
D) Threshold logic, perceptrons and artificial neural networks
E) Grammars and languages: Chomsky’s hierarchy
F) Markov algorithms
G) Random access machines (RAM)
H) Programming languages, [un-]decidability

Models of computation: purpose and types

Nearly any plausible statement about computation is true in some models and false in others!

A precise definition of a model of computation is necessary when proving negative results.

Special-purpose models versus universal models of computation (which can simulate any other model).

Computability and algorithm are originally intuitive notions. They can remain intuitive as long as we only want to show that some specific result can be computed by following a specific algorithm. Almost always an informal description suffices to convince someone with the requisite background that a given algorithm computes a specified result. Everything changes if we wish to show that a desired result is not computable. The question arises immediately: 'What tools are we permitted to use?' Everything is computable with the aid of an oracle that knows the answers to all questions. The attempt to prove negative results about the nonexistence of certain algorithms forces us to agree on a precise definition of algorithm.

Mathematics includes many well-studied problems of the kind: what kinds of objects can be constructed, and what kinds of results can be computed, using a limited set of primitives and operations. For example, the question of which polynomial equations can be solved by radicals, that is, using addition, subtraction, multiplication, division and the extraction of roots, kept mathematicians busy for centuries. It was settled by Niels Henrik Abel (1802-1829) in 1826: the roots of polynomials of degree ≤ 4 can be expressed in terms of radicals; those of polynomials of degree ≥ 5 cannot, in general. In a similar vein, we briefly present the historically famous problem of geometric construction with ruler and compass, and show how small changes in the assumptions can drastically change the resulting set of constructible objects.
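For degree 2 the situation is the familiar one: the quadratic formula expresses the roots using only the permitted operations (the four arithmetic operations plus one root extraction). A minimal sketch in Python, assuming real roots and using a function name of our own choosing:

```python
import math

def quadratic_roots(p, q):
    """Roots of x^2 + p*x + q = 0 via the quadratic formula,
    assuming the discriminant is non-negative (real roots)."""
    disc = p * p - 4 * q       # discriminant p^2 - 4q
    r = math.sqrt(disc)        # the only root extraction needed
    return ((-p + r) / 2, (-p - r) / 2)

# x^2 - 5x + 6 = (x - 2)(x - 3): roots 3 and 2
print(quadratic_roots(-5, 6))  # -> (3.0, 2.0)
```

Abel's theorem says that no such formula, however complicated, exists for general polynomials of degree 5 or higher.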

Whenever the permitted tools are so limited that not all 'intuitively computable' objects can be obtained with those tools alone, we speak of a special-purpose model of computation. These models are of great practical interest, since the permitted tools are tailored to the particular characteristics of the objects to be computed and are therefore efficient. We present some illustrations close to hardware design.

From the theoretical point of view, though, universal models of computation are of prime interest. This concept arose from the natural question 'What can be computed by an algorithm, and what cannot?' It was studied during the 1930s by Emil Post (1897-1954), Alonzo Church (1903-1995), Alan Turing (1912-1954) and other logicians. They defined various formal models of computation, such as production systems, recursive functions, the lambda calculus and Turing machines, to capture the intuitive concept of 'computation by the application of precise rules'. All these different formal models of computation turned out to be equivalent. This fact greatly strengthens Church's thesis that the intuitive concept of algorithm is properly formalized by any one of these mathematical systems. The models of computation defined by these logicians in the 1930s are universal in the sense that they can compute anything computable by any other model, given unbounded resources of time and memory. This concept of a universal model of computation is a product of the 20th century that lies at the center of the theory of computation.

Standard universal models of computation were mainly designed to be conceptually simple: their primitive operations are chosen to be as weak as possible, as long as they retain the property of being universal computing systems in the sense that they can simulate any computation performed on any other machine. It usually comes as a surprise to beginners that the set of primitives of a universal computing machine can be very simple, as long as the machine possesses two essential ingredients: unbounded memory and unbounded time.

In this section we present two illustrations of universal models of computation:

a) Markov algorithms, which access data and instructions sequentially and might be the architecture of choice for computers that have only sequential memory.

b) Simple random access machines (RAMs), based on random access memory, whose architecture follows the von Neumann design of stored-program computers.
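To make the RAM model concrete, here is a minimal sketch of such a machine in Python. The instruction set (LOAD, ADD, SUB, STORE, JZERO, JUMP, HALT) and the single-accumulator design are illustrative assumptions, not a fixed standard; textbook RAM variants differ in their exact repertoires.

```python
def run_ram(program, memory):
    """Execute a program on a RAM with one accumulator.
    Each instruction is a pair (opcode, operand); operands
    are memory addresses or jump targets."""
    acc, pc = 0, 0
    while True:
        op, arg = program[pc]
        pc += 1
        if op == "LOAD":
            acc = memory[arg]            # accumulator := memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "SUB":
            acc -= memory[arg]
        elif op == "STORE":
            memory[arg] = acc            # memory[arg] := accumulator
        elif op == "JZERO":
            pc = arg if acc == 0 else pc # conditional jump
        elif op == "JUMP":
            pc = arg
        elif op == "HALT":
            return memory

# memory[2] := memory[0] + memory[1]
prog = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", 0)]
print(run_ram(prog, [3, 4, 0]))  # -> [3, 4, 7]
```

In a true stored-program machine the instructions would themselves reside in the same memory as the data; the separate `program` list here is a simplification for readability.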

Once one has learned to program the fundamental data-manipulation operations, such as moving and copying data and pattern matching, the claim that these primitive 'computers' are universal becomes believable. Most simulations of a powerful computer on a simple one share three features: they are straightforward in principle, they involve laborious coding in practice, and they explode the space and time requirements of the computation. The weakness of the primitives, desirable from the theoretical point of view, has the consequence that an operation as simple as integer addition becomes an exercise in programming. The aim of these illustrations is to support the idea that conceptually simple models of computation are, in theory, as powerful as much more elaborate models, such as a high-level programming language. Unbounded time and memory are the key that lets the snail eventually cover the same ground as the hare.
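The point that even integer addition becomes a programming exercise can be illustrated with a Markov algorithm: rules are tried in order, the first applicable rule rewrites the leftmost occurrence of its pattern, and the process restarts from the first rule; a terminal rule halts the computation. The interpreter below and the unary-addition rule set are our own sketch, not a standard formulation.

```python
def run_markov(rules, word, max_steps=10_000):
    """Run a Markov algorithm.
    rules: ordered list of (pattern, replacement, is_terminal)."""
    for _ in range(max_steps):
        for pat, rep, terminal in rules:
            if pat in word:
                # rewrite the leftmost occurrence only
                word = word.replace(pat, rep, 1)
                if terminal:
                    return word
                break  # restart scanning from the first rule
        else:
            return word  # no rule applies: the algorithm halts
    raise RuntimeError("step limit exceeded")

# Numbers in unary: '111' denotes 3. To add '111+11' (3 + 2),
# a single terminal rule erasing '+' suffices.
add_rules = [("+", "", True)]
print(run_markov(add_rules, "111+11"))  # -> '11111'
```

In unary, addition needs only one rule; operations such as multiplication or comparison require carefully ordered rule sets, which is exactly the kind of laborious coding the paragraph above describes.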

The theory of computability was developed in the 1930s and greatly expanded in the 1950s and 1960s. Its fundamental ideas have become part of the foundation that every computer scientist is expected to know. Computability theory, however, is not directly useful in practice: it is based on the concept 'computable in principle' and offers no concept of 'feasible in practice'. And feasibility, rather than 'possible in principle', is the touchstone of computer science. In the 1960s a theory of the complexity of computation was introduced, with the goal of partitioning the realm of the computable into complexity classes according to time and space requirements. This theory is still in full development and breaking new ground, in particular with (as yet) exotic models of computation such as quantum computing and DNA computing.
