What are the problems of AI — Part 2
Five problems inherent in generative AI that are important to know about
By Niccolò Maria Menozzi
Those who promote generative AI would like to season everything with it, but beware of indigestion. I have prepared a double article analyzing some of the most relevant issues. In this second part I cover the remaining points, followed by my conclusions.
Before reading this article, make sure you have read the first part, where the first three problems were analyzed. If you are already caught up, let’s proceed!
Problem 4 — AI is not always able to provide its sources
ChatGPT reports things in words of its own choosing, but it is not always able to point me to a source that accurately confirms those words. Its answers are a blend of content that neither reflects nor fully preserves the complexity of the originals it draws from.
Basing one’s knowledge solely on these artificial “meatballs” is worth no more than hearsay. It’s like relying on a friend-of-a-friend’s stories, bar chatter, or “I read about it on Facebook,” as Raffaele Gaito says in this video. By the way, if you understand Italian, watch the video to discover more of ChatGPT’s biases.
When doing research and citing sources, words matter. Getting ChatGPT to tell you what a certain author wrote is different from reading their exact words. Commentary on a source is not a substitute for the source itself: it does not preserve the source’s identity but expands it with the contributions of others, who may be more or less reliable.
The correctness of a piece of information also depends on the words used. Changing them can misrepresent the author’s communicative intent. Unfortunately, AI has no qualms about playing with words.
Today’s AI is murky. Losing touch with primary sources written by people, without knowing the mechanisms that govern these reworkings, is dangerous. We risk believing words put into the mouths of people who said nothing of the sort.
In some cases, part of the transparency problem has been addressed with the so-called Chain-of-Thought, a procedure that exposes to the user the “reasoning” that leads to an answer. Unfortunately, these explanations do not always correspond to the processes the AI actually carries out. This is discussed in this article by Anthropic, which summarizes some tests conducted on their AI, Claude.
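To make this concrete, here is a minimal sketch of Chain-of-Thought prompting. It assumes the official OpenAI Python SDK and an example model name (illustrative choices on my part, not the only way to do this); the same pattern applies to any chat-based LLM.

```python
# A minimal sketch of Chain-of-Thought prompting, assuming the official
# OpenAI Python SDK. The model name is an illustrative example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "A train leaves at 9:40 and arrives at 12:05. How long is the trip?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat model works here
    messages=[
        # Asking the model to spell out intermediate steps is the whole trick.
        {"role": "system", "content": "Reason step by step, then give the final answer."},
        {"role": "user", "content": question},
    ],
)

# The visible "reasoning" is itself generated text: it may not match the
# computation that actually produced the answer, as Anthropic's tests show.
print(response.choices[0].message.content)
```

The key point is in that last comment: the step-by-step text you get back is a generated artifact like any other, which is exactly why it cannot be taken as a faithful trace of what the model did internally.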
However, the problem of sources and the reliability of information is an old issue that predates AI. Experience and critical thinking remain essential tools that we should not give up, even in the face of AI’s convenience.
Problem 5 — AI copies itself and content will become increasingly artificial
The more content AIs produce compared to real people, the more likely it becomes that the AIs themselves will be trained on this “artificial food.” Imagine an AI training to imitate itself, starting from a base far less skilled than many competent writers. We could slip into a spiral of increasingly homogeneous and poorly nuanced content. At worst, as it turns out, content full of errors as well.
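To get a feel for this spiral, consider a toy simulation of my own (an illustrative sketch, not taken from any study cited here): we repeatedly fit a simple statistical model to samples drawn from its previous version, the crudest possible stand-in for retraining an AI on its own output.

```python
# Toy illustration of a model trained on its own output, generation after
# generation. A Gaussian stands in for "the model"; fitting it to its own
# samples stands in for "retraining on AI-generated content".
import numpy as np

rng = np.random.default_rng(42)

mean, std = 0.0, 1.0  # generation 0: the "human-made" distribution
for generation in range(1, 11):
    samples = rng.normal(mean, std, size=100)  # content produced this round
    mean, std = samples.mean(), samples.std()  # "retrain" on that content
    print(f"generation {generation:2d}: std = {std:.3f}")

# The spread tends to shrink over the generations: each cycle keeps a bit
# less of the original variety, much like recycled AI text losing nuance.
```

Real training pipelines are vastly more complex, but the mechanism the sketch isolates, with each generation inheriting only what the previous one managed to reproduce, is the same one behind the homogenization worry.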
Some will say: nothing new. On the Internet, writing correctly and clearly, in a genuine and original style, without wrong data or misleading simplifications, is far from widespread. The information we look for is very often already parceled out and approximated in some way. Videos, blogs, Wikipedia, etc… No exceptions.
People have always taken inspiration from the work of others (sometimes simply copying it) and have always selected information, straying from its original essence. Whether out of stylistic necessity, ignorance, distraction, or bad faith matters little. It happens in art, design, literature, and beyond. It has always happened, it happens, and it will continue to happen, with or without AI.
AI has very human flaws
Overall, we have analyzed five problems, also debunking some corollary points. Generative AI stands between users and creators, invents data or misrepresents it, is not omniscient, parcels out reality, cannot always provide its sources, and risks feeding on its own errors.
In all of this, AI truly is the child of the human being, because all of these traits also characterize many people and the work and training environments they run.
In my experience, in many contexts these have been recurring aspects of interactions with others and with the products of their intellectual and professional work. People are often distracted, make irrational decisions (or ones driven by motivations we do not understand), and are influenced by ideologies and biases. Sometimes they are simply not qualified enough to guarantee certain levels of quality. Removing ourselves from these dynamics, whether on the giving or the receiving end, is difficult for all of us.
Conclusions
The perspective just described is heartening, because it reminds us that these new tools are not disrupting the rules of a perfect reality. Far from it: the relationship between people and information was already complex and problematic before AI arrived. And perhaps that is the real, somewhat disheartening news.
Beyond the alarmism and sensationalism raised by the advent of AI, critical thinking remains the true compass for navigating these waters: an indispensable resource that must be cultivated by learning how these tools work. It has been true for many other media, and it will be true for AIs as well.
Regardless of the considerations made, the most worrying aspect that keeps emerging is the possible convergence of at least part of the population toward a centralized system for getting information. A single opaque intermediary should be distrusted for its many possible implications: political, ideological, identity-related, and so on.
However, this last point is more complex still. We have already seen similar phenomena with platforms such as Facebook and Twitter. Moreover, ChatGPT and AI Overviews are only part of this evolving technology. There are other work contexts in which similar tools, highly specialized for precise tasks, take on far less dystopian political connotations and clearly support human work. As always, there is no one-size-fits-all answer, and each context deserves its own analysis.
Finally, other possibilities must also be considered: is artificial intelligence a bubble? That is what I asked myself in this article, backing it up with some data.
If you are interested in finding out more about these technologies, contact us. We can offer you interesting insights and recommend the best software solutions for your projects. We also offer consultations on AI, integrations with LLMs, RAG systems, and other solutions. Who knows, maybe your problem can be solved even without AI!