In the previous article we commented that the idea of artificially creating tools that do things reserved for human beings has been with us since ancient times. We also said that, until the beginning of the 20th century, all inventions were limited to imitating specific behaviors of people and animals.
Artificial intelligence seeks to create machines that do tasks that require the ability to think and, although the Turing machine (later called the universal Turing machine when its application was extended) was limited to following instructions, it laid the theoretical foundations for the creation of the first electronic computers.
The imitation game
Turing's second contribution to the field of Artificial Intelligence was his famous test. While many insisted that it was impossible for a machine to perform thinking or creative tasks, the mathematician decided to explore the theoretical possibility.
In Victorian times there was a parlor game called "The Imitation Game", in which players tried to guess the sex of a person from their answers to a series of questions. In Turing's version, the interrogators communicate with whatever is on the other side through a keyboard and a screen, without knowing whether the answers come from a person or a machine. If after a while the interrogators cannot tell that they are talking to a machine, then we can say that it is capable of thinking.
From theory to hardware
In another series of articles I have already told you the story of Claude Shannon, a person whose contributions should put him on a par with Newton or Einstein. If Turing envisioned a machine capable of following instructions, Shannon showed how to make it faster and more useful.
At the age of twenty-two, Shannon was hired as the operator of the Differential Analyzer, a machine that, using a mixture of analog components and electromechanical relays, solved equations. Over time he showed that it was possible to do the same with just relays, a series of interconnected switches that could turn each other on and off. The mathematical operations could be programmed according to how the switches were configured.
Since a switch admits only two positions, on or off (1 or 0), the new devices had to adopt binary arithmetic.
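The idea that on/off switches suffice for arithmetic can be illustrated with a small sketch (my own example, not from the original article): a half adder, the simplest circuit for adding two binary digits, built from nothing but the boolean operations that relays can implement.

```python
# Illustrative sketch: switches with two positions (1 or 0), combined
# with boolean logic, are enough to do binary arithmetic.
# A half adder computes the sum of two one-bit inputs.

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum_bit, carry_bit) for two binary digits."""
    sum_bit = a ^ b   # XOR: 1 when exactly one switch is on
    carry = a & b     # AND: 1 when both switches are on
    return sum_bit, carry

# 1 + 1 = 10 in binary: sum bit 0, carry bit 1
print(half_adder(1, 1))  # (0, 1)
print(half_adder(1, 0))  # (1, 0)
```

Chaining such adders bit by bit (feeding each carry into the next stage) gives addition of numbers of any size, which is essentially what Shannon showed could be built from relays.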
Later, the relays would be replaced first by vacuum tubes and then by transistors and microprocessors.
By the end of the Second World War there were several machines running on the ideas of Turing and Shannon, and they all shared the same problem: if you wanted the machine to do something else, you had to change the wiring configuration.
This is where a Hungarian immigrant who already had a reputation as a scientist steps in: John von Neumann.
Von Neumann had contributed to the United States war effort with his shock wave calculation methods (used by the Manhattan Project) and the invention of game theory. He had also studied self-reproducing machines and written on the mathematics of quantum mechanics.
His answer to the reprogramming problem sounds like something out of a creativity manual: divide the computer into two parts and assign a different function to each.
According to the von Neumann model, which is still used in computers, smartphones and other smart devices, the hardware is divided into:
- The central processing unit (CPU), which is in charge of applying the instructions of a program to the data.
- The memory, which stores both the data and the program containing the instructions on what to do with it.
The von Neumann architecture allowed a computer to perform more complex tasks, since the CPU only has to fetch an instruction, apply it to the data, and repeat the cycle until the program finishes.
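The fetch-execute cycle described above can be sketched as a toy program. This is a minimal illustration with a made-up four-instruction set (LOAD, ADD, STORE, HALT), not any real machine's architecture; the key point is that the program and the data live in the same memory.

```python
# Toy sketch of the von Neumann cycle with a hypothetical instruction
# set: program and data share one memory; the CPU fetches the next
# instruction, executes it, and repeats until it halts.

def run(memory):
    acc = 0   # accumulator register
    pc = 0    # program counter: address of the next instruction
    while True:
        op, arg = memory[pc]      # fetch the instruction
        pc += 1
        if op == "LOAD":          # execute it...
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory         # ...and repeat until done

# Memory holds both the program (cells 0-3) and the data (cells 4-6).
memory = {
    0: ("LOAD", 4),    # acc = memory[4]
    1: ("ADD", 5),     # acc += memory[5]
    2: ("STORE", 6),   # memory[6] = acc
    3: ("HALT", None),
    4: 2,              # data: first operand
    5: 3,              # data: second operand
    6: 0,              # data: result goes here
}
print(run(memory)[6])  # 5
```

Reprogramming this machine means writing different values into memory, not rewiring anything, which is exactly the advantage von Neumann's design brought.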
Turing, Shannon, and von Neumann laid the groundwork for machines capable of answering questions in the course of a normal conversation or performing creative tasks. But the right programs were still missing. That is the story we will tell in the next post.