A while ago, my colleague Pablinux told us about the letter in which the insufferable Elon Musk and other public figures asked for a pause on Artificial Intelligence research until measures can be taken to prevent its possible adverse effects. That gives me an excuse to talk about the real and imagined risks of Artificial Intelligence.
At the risk of making a fool of myself with failed predictions, Bill Gates style, I will start by saying that in my opinion the biggest risk right now is a bubble burst that will make the dot-com crash look like a mild shock.
The real and imagined risks of Artificial Intelligence
I agree with Pablinux that the letter contains more medieval obscurantism than scientific reasoning, although I do share the idea that legislation should be established to regulate the use of this technology's output. Still, we cannot deny that every technology has confused and scared people until it became familiar.
The projection of a train arriving at a station, in the early days of cinema, reportedly made people flee the room and, although there is a good deal of urban legend in the story, Orson Welles' radio version of The War of the Worlds caused quite a bit of panic among listeners who believed it was real.
In fact, this type of software regulation is nothing new. Financial regulatory authorities in many countries prohibit programs such as Photoshop from editing images of banknotes or checks.
In 1994 Tom Clancy published Debt of Honor. Considered an expert on defense issues, Clancy imagined an attack on the financial system of the United States: the attacker manipulated the expert systems of the trading firms into believing that a crisis was underway, unleashing a wave of selling that finally produced the crisis itself.
Before dismissing it as fiction, remember that in that same novel, seven years before the Twin Towers, Clancy anticipated that the United States could suffer attacks carried out with commercial airplanes.
Actually, the idea is not new. The 1983 film WarGames recounted how a teenager tricked the computer in charge of missile launches into thinking that the Russians were attacking.
Let's imagine that we hear a gallop approaching. Our first conclusion is that it is a horse, and nine times out of ten we will be right. But there is always the possibility that it is a zebra that escaped from the zoo. Doctors, astronauts and airplane pilots receive strict training thinking about zebras, that is, knowing what to do when an anomaly occurs. Artificial Intelligence models are trained with horses in mind.
A model like the one used by ChatGPT is based on the information existing in its knowledge base. The more times a piece of information is repeated, the greater the credibility the model assigns to it.
Since saving all the available information would require a lot of storage space, it keeps only what is relevant and then rebuilds the rest on request, using the structure that seems statistically most plausible. Hence, it often cites references that do not exist, simply because statistically it is likely that a document with that title and that content would exist.
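To make the idea concrete, here is a deliberately toy sketch (nothing like ChatGPT's actual internals, and the corpus is invented for illustration): a bigram model that stores only how often each word follows another and then "rebuilds" text by always choosing the statistically most frequent continuation. It will happily produce a plausible-sounding sentence that never appeared as such in the data.

```python
from collections import Counter, defaultdict

# Invented toy corpus: three short "documents" flattened into words.
corpus = (
    "the report cites a study . "
    "the report cites a survey . "
    "the report cites a study ."
).split()

# Store only the frequency of each word's followers, not the texts.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def continue_text(word, steps):
    """Greedily extend `word` with its most frequent follower."""
    out = [word]
    for _ in range(steps):
        counter = follows.get(out[-1])
        if not counter:
            break
        out.append(counter.most_common(1)[0][0])
    return " ".join(out)

# "study" follows "a" more often than "survey", so it wins,
# whether or not that exact combined sentence ever existed.
print(continue_text("the", 4))  # → the report cites a study
```

A real model works over learned continuous representations rather than raw counts, but the failure mode is the same in spirit: the output is whatever is statistically most plausible, not what was verified to exist.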
About zebras and dogs that don't bark
"Is there any other point to which you would wish to draw my attention?"
"To the curious incident of the dog in the night-time."
"The dog did nothing in the night-time."
"That was the curious incident."
Sir Arthur Conan Doyle
Another risk of Artificial Intelligence systems lies in what they do not do. And it is an important point to keep in mind.
In the 1980s, an Australian doctor surmised that the most common cause of ulcers was a bacterium. Since he did not have a great résumé, people laughed in his face until he was proven right. Like many other scientific discoveries (the rotation of the planets, the fact that taking more breaks makes you more productive), it ran contrary to the accepted wisdom of the moment.
But Artificial Intelligence models are built on the wisdom of the moment, on the knowledge about which there is consensus. Just as refrigeration, automobiles and food delivery have increased obesity rates, the availability of Artificial Intelligence tools can make us intellectually lazy and stifle innovation.
As you can see, there are enough things to worry about without being afraid of being enslaved by the machines. And we have not even talked about access to the source code and user privacy.