Computing Beyond Turing
Seventy years ago, in 1935, Alan Turing began his studies of mathematical logic and formulated the first parts of a theory that would become, as the "Turing machine", the foundation of our computers to this very day. Turing compared his universal calculating machine to a human computing a number, and restricted its principal application to "those problems which can be solved by human clerical labour, working to fixed rules, and without understanding".
This restriction, however, did not prevent computer scientists from soon promising an intelligent electronic brain that could simulate the entire spectrum of human thinking and thus eventually replace man himself (Marvin Minsky, Hans Moravec, Ray Kurzweil, Bill Joy et al.).
After 70 years of computer research and around 50 years of futile efforts to create "artificial intelligence", we are still confronted with the question of why the Turing machine seems capable neither of learning and adapting nor of mapping and simulating complex systems - and whether there could be computing beyond the Turing machine...
Without wanting to repeat the critical discussions of the AI approach from the 1960s and 70s, I would like to recall some principles of the Turing machine:
- The Turing machine follows classical mechanical (hierarchical) logic: it sequentially writes symbols, under program control, to an (imaginary) endless tape, starting from a defined beginning. It requires this single beginning as a hierarchical root for orientation in writing and reading (if the machine is switched off, it theoretically has to be reset to this beginning at every new start).
- The Turing machine is 'program driven': the representations it writes on its memory tape are defined unidirectionally by the program. The machine cannot work 'data driven', with the representations selecting fitting programs or even writing new ones.
- The Turing machine can only be in one state at a time. Although it can execute many programs, record and process many types of data, and thus assume infinitely many, though determined, states, it must still be in one single defined and thus non-ambivalent state before each step.
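The three principles above can be made concrete in a few lines. The following is a generic textbook-style simulator (my own illustrative sketch, not tied to any particular formulation in the text): a fixed transition table is the 'program', the head starts at a defined beginning of the tape, and the machine holds exactly one state and reads and writes exactly one cell per step.

```python
# Minimal Turing machine simulator: one head, one current state, a
# transition table fixed in advance (the 'program'), and a tape that is
# read and written strictly one cell per step.

def run_turing_machine(program, tape, state='start', steps=100):
    """program maps (state, symbol) -> (new_state, write_symbol, move)."""
    cells = dict(enumerate(tape))   # sparse tape; unwritten cells are blank '_'
    head = 0                        # the single, defined beginning
    for _ in range(steps):
        if state == 'halt':
            break
        symbol = cells.get(head, '_')
        # The program alone decides the next step - strictly 'program driven'.
        state, write, move = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == 'R' else -1
    return ''.join(cells[i] for i in sorted(cells)).strip('_')

# Example program: invert a string of 0s and 1s, then halt at the first blank.
invert = {
    ('start', '0'): ('start', '1', 'R'),
    ('start', '1'): ('start', '0', 'R'),
    ('start', '_'): ('halt',  '_', 'R'),
}
print(run_turing_machine(invert, '0110'))  # prints 1001
```

Note that the data on the tape can never reach back and alter the transition table: the asymmetry the second principle describes is built into the single lookup `program[(state, symbol)]`.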
But herein also lie the principal restrictions of this machine: since it operates within a single program logic at a time, it is 'logically closed'; it cannot see beyond the currently active logic domain and thus cannot adapt to the 'environment' of that domain. For the same reason it cannot integrate different logic domains. It therefore also lacks the fundamental preconditions for cognition as a thought process that integrates different logic domains and can synthesize new ones from within. Concepts like 'learning', 'innovation' or 'creativity' relate to this capability to integrate complexity (instead of replacing it with complication).
Since Kant, human thinking has been described in terms of 'synthetic' (intuitive) and 'analytic' (logical) operations. Only in combination do they constitute 'thinking'. That in our Western culture, to this day, only logic counts as 'correct thinking' is deplored not only by philosophers...
In medical terms we would describe a Turing machine (like any other logical mechanism) as a 'paranoid machine': a paranoid patient systematically relates all his observations (paranoia is therefore also called a 'systematic delusion') to a single cause - the intent to harm him. He structures his interpretations hierarchically under a single logical root. While this mode of thought is considered pathological in humans, it is the standard mode of Turing machines (and thus of today's entire computer generation, with the exception of artificial neural networks, which are not considered here) - in principle and continuously, even though with changing axioms (programs).
Thinking, however, is a complex non-hierarchical process in which we integrate different observations (in the widest sense, as representations in the form of relations) and synthesize new logic domains. We can do this because we exist in many states at the same time, our 'memory' needs no single beginning, and we can operate in 'data driven' (inductive) as well as 'program driven' (deductive) modes. Since the world never presents itself to us as a deductive whole but only as partial observations, which we connect into 'compositive wholes' (F. A. Hayek), cognition could be understood as an internal connection process in interaction with an environment.
If we expect computers to achieve similar feats, we expect more than a Turing machine can deliver, and we have to ask for the structural conditions that would enable a machine to have cognitive capabilities similar to 'thinking'. After futile attempts in the 1960s and 70s, closely associated with names like Gotthard Günther, Heinz von Foerster and the Biological Computer Lab (BCL) at the University of Illinois, the problem was considered unsolvable. Yet Gotthard Günther in particular had pointed the way to a potential solution with his 'polycontextural' architecture of logic ('kenogrammatics'), in which different logic domains can intersect and be connected.
The practical implementation of such an architecture finally seems to have succeeded: over the last years, the independent inventor Erez Elul has developed a completely new and unique approach (the 'Pile system'), which in 2005 moves into its experimental application stage. It apparently differs from a Turing machine in several essential points:
- Representation: The 'Elul machine' represents events not as data in the traditional sense but as relations, i.e. it records only the relations between parts of an arbitrary input sequence that make up the entire sequence as a whole. These relations are in turn related to the relations already represented in the system. This allows a lossless, generative and non-redundant representation that could be described as 'onto-genetic' rather than 'ontological': what is represented is the generative structure of the event and, at the same time, the generative history of the representing system.
- Logic: the Elul machine, unlike the Turing machine and other mechanisms in general, has a 'polylogic' structure: each of its objects requires two parents, instead of the single parent of hierarchical structures. The machine generates a theoretically infinite number of separate trees, of which two intersect in every object (and are thereby also fully interwoven). Therefore many beginning points exist in the system instead of just one. The path from a whole to its parts is as well defined as the path from any part to the respective wholes in which it appears.
- Objects: the objects of the system are exclusively (proprietary) addresses acting as connections that represent relations, from which data can be generated (metaphorically comparable to a computer game, where image data are likewise generated dynamically rather than stored physically). These self-connecting, complex objects are useful elements for a self-organizing, complex 'compositive' structure.
- Structure: the emerging structure is not hierarchical, yet it is layered: in every layer the information coded in an object grows exponentially, as each object represents the next higher order of the two objects it relates and whose orders it encodes. These connections form a power concatenation of relations generating data.
One can envision this system as a fully connected n-dimensional complex network or rhizome in which not only the nodes but also all links are addressable objects. All paths in this scalable network are fully defined (as in a tree structure), while the structure still allows arbitrary connections. In contrast to a complex network, where new nodes require exponentially increasing resources, the computational effort here grows only linearly, since the information coded in new nodes already grows exponentially. Since this information is represented only virtually, in a logical address space (as a path), and need not be stored physically in memory, the system remains scalable in principle. Any describable structure, complex or non-complex, dynamic or static, can be mapped.
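To make the idea of objects-as-relations more tangible, here is a deliberately simple sketch in the same spirit - my own illustrative construction, emphatically not the actual Pile implementation, whose internals are not public here. The class name `RelationStore` and all details are hypothetical. Every object is merely an interned address for a pair of parent addresses; data are never stored as such but regenerated from addresses, and an identical relation is never recorded twice.

```python
# Hypothetical sketch of a relation-based store: objects are addresses
# for pairs of parent addresses, interned so that an identical relation
# exists only once, and the original data can be regenerated from an
# address alone (nothing but relations is stored).

class RelationStore:
    def __init__(self):
        self.intern = {}    # key -> address; guarantees non-redundancy
        self.objects = []   # address -> ('leaf', symbol) or ('pair', a, b)

    def _get(self, key):
        """Return the address for key, creating the object if it is new."""
        if key not in self.intern:
            self.intern[key] = len(self.objects)
            self.objects.append(key)
        return self.intern[key]

    def leaf(self, symbol):
        """Intern a terminal symbol as an addressable object."""
        return self._get(('leaf', symbol))

    def relate(self, a, b):
        """Intern the relation of two existing objects (the 'two parents')."""
        return self._get(('pair', a, b))

    def encode(self, sequence):
        """Fold an input sequence into a single address, relation by relation."""
        addr = self.leaf(sequence[0])
        for sym in sequence[1:]:
            addr = self.relate(addr, self.leaf(sym))
        return addr

    def decode(self, addr):
        """Regenerate the original data from an address - no data was stored."""
        kind, *rest = self.objects[addr]
        if kind == 'leaf':
            return [rest[0]]
        a, b = rest
        return self.decode(a) + self.decode(b)

store = RelationStore()
first = store.encode("abab")
second = store.encode("abab")
assert first == second                        # identical input, identical address
assert store.decode(first) == list("abab")    # lossless regeneration from relations
```

The sketch captures only two of the claimed properties - non-redundant interning and generating data from addresses; the intersecting trees, many beginnings and exponential coding per layer described above go well beyond it.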
The machine is currently implemented on standard von Neumann hardware under a standard operating system (Windows). Whether it can in fact solve problems that are not in principle solvable by Turing machines remains to be proven in practical applications. The main theoretical arguments have been put forward, and test versions are available. If Pile does indeed provide the famous "juice" (Rodney Brooks) that computers have so far been lacking, the consequences could be immense...
Peter Krieg became known as a director of documentary films ('September Wheat', 'Machine Dreams' et al.), some of which (like the latter) investigated computing, chaos theory and cybernetics (with memorable appearances by Marvin Minsky and Heinz von Foerster). Since 1999 he has supported Erez Elul's development of the Pile system and currently incubates Pile Systems Inc. "The Paranoid Machine" is also the title of his forthcoming book on the subject, due to be published (in German) in 2005. He can be reached at firstname.lastname@example.org. Introductions and independent reviews of Pile are published at www.pilesys.com.