Microsoft and Google promise us a new era in which AI will search for information for us, sort it, and offer a single best answer. But like any new technology, this approach has its disadvantages. AI is used in almost every sphere today: whether you want it to write a poem or play games with you, it is everywhere (you can even try it out on platforms like Hellspin). In this article, we discuss the capabilities and limitations of search engines powered by artificial intelligence.
Generators Of Meaningless Text
Large language models often generate text that is meaningless or contains errors. These errors come in many forms, from inventing biographical details and fabricating scientific papers to giving wrong answers to questions like "What is heavier, 10 kg of iron or 10 kg of cotton?"
There are also more contextual errors. A model might, for example, advise a user who reports mental health problems to commit suicide, or reproduce bias: the algorithm echoes the racism and misogyny contained in the human-labeled data it was trained on.
These errors vary in scale and character, and the simplest of them are easy to fix. Some will point out, however, that correct answers are far more common, and that the Internet is already full of toxic and meaningless results that end up in search listings anyway.
There is no guarantee that we can get rid of errors entirely, and there is no reliable way to track how often they occur. Microsoft and Google can add disclaimers urging people to fact-check what the AI generates. But how well will that work?
The Problem Of The “Only Answer”
There is another obstacle. Search engines tend to offer a single, seemingly definitive answer. The problem has existed for more than a decade, ever since Google Search began displaying snippets, blocks of text shown above the search results. These snippets have repeatedly contained a variety of mistakes, both awkward and dangerous: from naming US presidents as KKK members to harmful advice, such as a claim that a person must be held down on the floor.
Researchers Chirag Shah and Emily M. Bender believe that the introduction of chatbots could make this problem worse. In addition, users do not always understand how AI works and may trust it too much. Meanwhile, the answers such search engines give are assembled from several sources, often without proper attribution. That experience is very different from a list of links, each of which invites users to click through and investigate on their own.
So far, these new approaches to search push users to consult fewer sources and trust the algorithm more.
Cultural Wars
This problem stems from the previous one but deserves its own category, since it can provoke political conflict and attract the attention of regulators. The issue is that as soon as you have a tool that gives peremptory answers to sensitive questions, it will annoy people who hold different opinions. And they will blame the developer for it.
Such "cultural wars" could be observed immediately after the launch of ChatGPT. In India, for example, the chatbot was criticized for bias because it tells jokes about Vishnu but not about Jesus or Muhammad.
There is also the problem of sourcing. Right now, AI Bing collects information from various sources and cites them in footnotes. But what makes a site trustworthy? Will Microsoft try to balance out political bias? Where will Google draw the line when judging whether a source is reliable?
Today, officials in the EU and the United States have taken an unprecedentedly combative stance toward Big Tech, and the bias of AI systems looks provocative.