What are the problems of AI — Part 1

Five problems inherent in generative AI that are important to know about

Generative AIs are appearing everywhere and look like the leading candidate to replace Google, on steroids: fast and convenient, with no ads and straight to the point. They are useful for many activities, but there is no shortage of pitfalls you need to know how to guard against. In this article, divided into two parts, we explore five of them.

For months, various topics have been crowding my head, settling there through articles read, videos watched and discussions with friends and colleagues. While writing the article Is artificial intelligence a bubble?, I realized there was more to say. In particular, after stumbling across this video from the SmartWorld YouTube channel, I decided to write more.

I have limited myself to five major points, combining personal experiences with ChatGPT and suggestions absorbed over the past few years. I am sure you can find evidence of what I write elsewhere as well, or perhaps you have already experienced it. Let’s start with just one critical aspect discussed by the guys at SmartWorld.

Problem 1 — AI takes views away from content creators

According to an article by Danny Goodwin published in Search Engine Land, BrightEdge’s May 2024–May 2025 report shows that the click-through rate (CTR) of Google searches dropped by 30 percent between 2024 and 2025. The cause has been identified as AI Overviews, the AI feature integrated at the top of the search engine’s results.

The user no longer has to choose a site and explore it. AI Overviews examines queries and provides a response, processing data collected from the internet. In summary: Google takes traffic away from content creators’ sites.

This is problematic for at least two reasons. Sites that monetize through advertising, which depends on clicks and views, will see their revenue decline. And all the sites that are no longer visited will see their brand awareness weaken, as fewer users come into direct contact with the distinctive elements of their image and content.

AI Overviews flattens information. For less curious users, it will become increasingly difficult to find out who actually solved their problem, because the AI acts as a mouthpiece. Sure, Google might still list its sources, but will users still have any interest in visiting them? It is an open question.

According to a statement by Danielle Coffey, president and CEO of News/Media Alliance, “links were the last redeeming quality of search that gave publishers traffic and revenue. Now Google just takes content by force and uses it with no return, the definition of theft. The DOJ remedies must address this to prevent continued domination of the internet by one company.”

Again according to BrightEdge, there is also some positive data. For example, sites ranked 20th or lower on Google saw a 400% increase in citations from AI Overviews. However, the key figure remains the number of visits to the sites, and BrightEdge did not say how many of these citations converted into user clicks. “Details” like this invite caution. BrightEdge’s business revolves around SEO, but the company also promotes tools with AI integrations, so a possible conflict of interest is worth keeping in mind. For all the details, read the report at the link above.

Will it be in Google’s interest to integrate solutions that effectively offset this change? We shall see what the future holds.

Problem 2 — AI invents data and information

AI has been wrong with me several times. In my spare time I do historical research: I read manuscripts, old books, academic papers and more. I have years of experience behind me, but I thought ChatGPT could speed up the research by providing the basic data needed to explore new topics.

For example, ChatGPT has made up or misspelled the names of authors and books. I make new attempts from time to time, but my trust cracked quickly. Some of my collaborators have experienced similar problems. To err is human and, it seems, in this case also software.

ChatGPT acts with nonchalance. It does not explicitly separate its own “inferences” from data backed by an established source. If interrogated, the AI may dance around the problem and deflect requests for more precise answers. Only when cornered by pressing questions does it correct its oversights or reveal its bluff. One must remain vigilant and skeptical, aware that the burden of proof should rest on the AI.

This phenomenon has been named AI hallucination (though some people dislike the term), and it happens because generative AI does not produce deterministic output. It cannot recognize content as “true” or “false”. Information is selected and processed to give the user output that is plausible and understandable by statistical probability. Under these rules, the margin of error does not take long to appear, and on some topics it is probably more likely than on others.
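To make “statistical probability” concrete, here is a minimal, purely illustrative sketch of the sampling step that makes the output non-deterministic. The candidate tokens and their scores below are invented, and a real model works over a vocabulary of tens of thousands of tokens, but the mechanism is the same in spirit: the model scores possible continuations and draws one at random, weighted by probability, so a plausible-sounding wrong answer is always a possible outcome.

```python
import math
import random

# Toy next-token scores (logits). These numbers are made up for
# illustration; a real model produces one score per vocabulary token.
logits = {"token_a": 2.0, "token_b": 1.2, "token_c": 0.4}

def sample_next_token(logits: dict, temperature: float = 1.0) -> str:
    """Softmax over the scores, then one weighted random draw."""
    scaled = [(tok, score / temperature) for tok, score in logits.items()]
    total = sum(math.exp(s) for _, s in scaled)
    r, cumulative = random.random(), 0.0
    for tok, s in scaled:
        cumulative += math.exp(s) / total
        if r < cumulative:
            return tok
    return scaled[-1][0]  # safety net for floating-point rounding

# "Asking the same question" ten times rarely gives ten identical picks.
print([sample_next_token(logits) for _ in range(10)])
```

Run it twice and the picks differ. Lowering the temperature makes the top-scoring token dominate, but no setting makes the procedure consult a source or check a fact: it only reshapes probabilities.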

Recently, AI hallucinations and how often they occur have been discussed in a New York Times article by Cade Metz and Karen Weise and in a Forbes article by Conor Murray. I recommend reading them to explore the topic further, perhaps peppered with comments from this Reddit thread, where the community discusses Metz and Weise’s article.

If you want to know how AI hallucinations work, read this article by Anthropic about Claude, their AI. The article is dense with information, but you can scroll directly to the dedicated section.

Of course, people make mistakes too. Even a blog article can report nonsense (will this same article be safe from some error or approximation? Perhaps not). The real problem, perhaps, is the flattening of information to a single voice, overpowering and encompassing all others.

Problem 3 — AI does not have unlimited knowledge

This brings us to another problem. The databases from which AIs draw, however large, are not boundless. Their ability to reach information has technical limitations. What data is considered, and whether and how it can be “digested” by the AI, are all aspects that affect the responses.

What can an AI “eat”? Databases, PDF documents, web pages scanned by data-collecting crawlers, videos, ebooks… AI only has access to limited portions of reality, depending on what it has been programmed to process. Even without AI, we as people only have access to portions of reality; it is always a game of sets and subsets. For example, we do not have access to CIA data, and this probably affects our perception of current geopolitics.
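One concrete, checkable example of those limits: many publishers now use robots.txt to tell AI crawlers to stay out, so entire sites never enter the pool of “edible” data. Here is a minimal sketch using only Python’s standard library; the domain is a placeholder, and GPTBot is the crawler user-agent that OpenAI documents.

```python
# Checks whether a site's robots.txt allows given crawlers to fetch a page.
# The domain below is a placeholder, not a real recommendation.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # downloads and parses the file

for agent in ("GPTBot", "Googlebot", "*"):
    allowed = parser.can_fetch(agent, "https://example.com/some-article")
    print(f"{agent}: {'may crawl' if allowed else 'blocked'}")
```

Whatever such a file forbids simply never reaches the model through that crawler: one small, mechanical illustration of why an AI’s “reality” is a subset.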

Paradoxically, AI’s faculties are more limited than those of most of us, because most people can access and enjoy content regardless of its format. AI is not “smarter” than we are; it beats us because it has more time to absorb information and is far faster at sharing it with those who ask.

My experience makes one point clear. There is information that, as a person, I am able to find on my own, but that ChatGPT does not know at all. And it is likely that there are many other fields of inquiry where these tools are still lacking.

This happens because much specialized or academic material is still the prerogative of paper books and articles, often unreachable from the web, or accessible only behind a paywall, through piracy, or through direct acquaintance between people. That does not mean it cannot change in the future.

For now, it is both problematic and limiting. The more knowledgeable I am about a topic, the more likely the AI is to fall short of my level of expertise. The less I know, the more likely I am to fall victim to its limitations, out of sheer trust and a distorted sense of its authority.

According to the third law of British writer Arthur C. Clarke, “any sufficiently advanced technology is indistinguishable from magic.” In a sense, generative AI is almost magical: many users do not know how it works. Its advent may drive a portion of the population to overconfidence in it, a “faith” in its truth driven by convenience, an attitude resistant to doubt that falls somewhere between acceptance of religious dogma and surrender to unfathomable, omniscient supernatural forces.

Too apocalyptic? Perhaps, it does sound rather catastrophic. Distorted, badly reworked, even maliciously slanted information has never spared newspapers, TV and other media. We are always the ones who decide which well to drink from, whether to trust blindly or to verify sources and reasoning. This is just the latest iteration of the topic.

So, some might ask: in the end, what difference does it make? Again, the real problem seems to lie in the unifying centralization inherent in AI’s structure: a single opaque system that risks replacing the voices of those who create the content, weakening their impact on the real world.

Here we are at the end of the “first half.” The article grew very long, and to avoid making it too heavy I divided it into two parts. If you want, take a short break, then read about the last two problems and my conclusions in the second part.