## Turing Machine Networks

#### J. Romo

BSc Thesis, Department of Computer Science, University of York, 39 pages, 2019.

The importance of parallelism is difficult to overstate. As sequential
computers have struggled to meet modern computational demands, engineers
have begun to distribute computations across many processors,
dramatically accelerating existing algorithms. So prevalent a concept
demands modelling so that its power can be properly understood. Many
diverse models have been developed, ranging from pragmatic models like
BSP and LogP to abstract, theoretically involved ones like Parallel
Turing Machines.

It is unfortunate, then, that little effort has been made to connect
results for classical Turing Machines directly to parallel complexity
theory; the only major model doing so is the Parallel Turing Machine,
proposed by Wiedermann. This model, while well developed in its own
right, emulates only shared-memory parallelism and does not account for
message-passing distributed models like MapReduce. This makes it
difficult to bring the substantial theory surrounding Turing Machines to
bear on message-passing parallelism: models like BSP and LogP offer
little such theory, and the LOCAL, ASYNC and CONGEST models focus on
communication challenges instead. It is therefore harder to analyse
questions such as how much faster a problem can be solved in a parallel
message-passing environment than sequentially by a Turing Machine, a
question that arguably defines the efficacy of this approach to
parallelism.
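As a concrete illustration of the message-passing style these models target, the map–shuffle–reduce pattern behind MapReduce can be sketched in a few lines. This is an illustrative sketch only (the function names and single-process simulation are assumptions, not part of the thesis); in a real deployment each chunk would be mapped on a separate machine and the shuffle would be the network communication step.

```python
from collections import defaultdict

def map_phase(chunk):
    # Mapper: emit a (word, 1) pair for each word in this worker's chunk.
    return [(word, 1) for word in chunk.split()]

def shuffle(pairs):
    # Shuffle: group values by key. In a distributed run this is the
    # message-passing step, routing each key to its reducer.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: combine the grouped values, here by summing word counts.
    return {key: sum(values) for key, values in groups.items()}

chunks = ["to be or not", "to be"]  # input split across hypothetical workers
mapped = [pair for chunk in chunks for pair in map_phase(chunk)]
counts = reduce_phase(shuffle(mapped))
print(counts)  # e.g. counts["to"] == 2
```

The sequential baseline here scans all the input on one machine; the speedup question raised above asks how much the map and reduce phases gain by running on many machines at once, once communication costs are accounted for.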

This project aims to produce a model that resolves this conundrum,
linking Turing Machines and message-passing parallelism more strongly
and providing an environment in which substantial complexity results can
be derived. The project restricts itself to classical computation;
nonstandard models such as quantum computing are beyond its scope.