As the saying goes: for novelties, the classics. An allegory written four centuries before our era is ideal for understanding the limits of the new applications of Artificial Intelligence. I am referring to the "ChatGPT Cave", which is neither more nor less than an adaptation of Plato's famous allegory of the cave.
I have no objection to the use of artificial intelligence tools. In fact, I find they make the job much easier. But only as long as they are used by people with enough knowledge to evaluate their output.
For example, one can ask ChatGPT to write a WordPress plugin, but if one lacks knowledge of PHP, that plugin can cause serious security problems.
The allegory of the cave
Plato was a Greek philosopher who lived between the fifth and fourth centuries BC. He expressed his thoughts in the form of myths and allegories, the best known of which is that of the cave.
Presented in The Republic, the allegory imagines a group of people chained in a cavern. Behind them burns a fire that casts shadows on the wall in front of them. The shadows are the only thing they see, and they imagine them to be all that exists, ignorant of what lies beyond.
When one of the prisoners is released, he is able to see the world for what it really is and realizes how limited his experiences in the cave were.
According to Plato scholars, this allegory highlights that we all live our lives based on our own information and experiences, which are the equivalent of the shadows in the cave. Just as for the prisoners, true reality lies beyond our comprehension.
ChatGPT and its competitors have both admirers and detractors. But no one had given a technical explanation of its failures until an article published in The New Yorker by science fiction writer Ted Chiang.
To explain the flaws of language models, Chiang draws an analogy with what happens with image and audio files.
Recording and reproducing a digital file requires two steps: first encoding, in which the file is converted to a more compact format, and then decoding, the reverse process. Compression is called lossless when the restored file is identical to the original, or lossy when some information is lost forever. Lossy compression is applied to image, video, and audio files, and most of the time the loss goes unnoticed. When it is noticeable, it is called a compression artifact. Compression artifacts show up as blurring in images or a tinny sound in audio.
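The difference between the two kinds of compression can be sketched in a few lines of Python. This is a toy illustration of my own, not something from the article: zlib stands in for lossless compression, and coarse quantization of "pixel" values stands in for what a lossy format like JPEG does.

```python
import zlib

# Lossless: zlib round-trips the data exactly.
original = b"the quick brown fox jumps over the lazy dog " * 100
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)
assert restored == original  # the restored file is identical to the original
print(len(original), "->", len(compressed))  # far fewer bytes

# Lossy: quantize 8-bit pixel values down to 16 levels, a crude stand-in
# for JPEG. Decoding can only reconstruct an approximation.
pixels = [13, 77, 140, 201, 255]
quantized = [p // 16 for p in pixels]     # encode: keep only the coarse level
approx = [q * 16 + 8 for q in quantized]  # decode: estimate the original value
print(approx)
assert approx != pixels  # some information is lost forever
```

Running it shows the zlib round trip is exact, while the quantized pixels come back close to, but not equal to, the originals: that residual error is the compression artifact.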
Chiang uses the analogy of a blurry JPEG of the web to refer to language models, and it is fairly accurate: both compress information, keeping only “the important thing”. Language models generate, from large amounts of text data, a compact representation of the patterns and relationships between words and phrases.
From that representation, a new text is generated that tries, as far as possible, to match the original in content and meaning. The problem arises when there is not enough information on the web to generate the new text. This explains why ChatGPT can write a college-level essay but fails at simple operations with five-digit numbers.
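The arithmetic point can be made concrete with a short sketch (my illustration, not the author's): five-digit addition is an exact, mechanical procedure, the grade-school algorithm of adding digit by digit and carrying, which a few lines of code execute perfectly, while a model that merely interpolates text patterns can only approximate the answer.

```python
def add_digits(a: str, b: str) -> str:
    """Grade-school addition with carrying, digit by digit from the right."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)  # pad with leading zeros
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        carry, d = divmod(int(da) + int(db) + carry, 10)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_digits("24567", "98765"))  # -> 123332, exact every time
```

The algorithm is tiny and always exact; the point is that a lossy statistical summary of web text does not store this procedure, only examples of its results.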
Chiang concludes:
Even if it is possible to restrict large language models from engaging in fabrication, should we use them to generate web content? This would make sense only if our goal is to repackage information that is already available on the Web. Some companies exist to do just that; we usually call them content mills. Perhaps the blurriness of language models is useful to them, as a way of avoiding copyright infringement. Generally speaking, though, I would say that whatever is good for content mills is not good for people looking for information. The rise of this type of repackaging is what makes it harder for us to find what we are looking for online right now. The more that text generated by large language models gets published on the Web, the more the Web becomes a blurrier version of itself.
And, like the prisoners in the cave, our experience would be far smaller than what reality offers us.