Understanding the Impact of Large Language Models on Search Engines

Reading time: 13 minutes

Everything a man believes he has created has already been created. Not once, but countless times.

Thus Spoke Zarathustra, Nietzsche, F.

Time is a cultural concept, and what we consider to be innovations are merely products of our perception. Humans can only adapt to these presumed innovations for as long as they are relevant within their lifespan.

Most of what we know today can be attributed to the expertise of the ancient Greeks. Since the 3rd century BCE, we have known that the Earth revolves around the Sun.

Aristarchus of Samos’ heliocentric model was never accepted, as it contradicted the prevailing belief that the Earth was the center of the universe. It was not until nearly two millennia later that Galileo Galilei, an Italian scientist, reopened the debate and became the evangelist of the heliocentric model as we know it today.

Despite the time gap between Greek and Italian scientists, they shared the same views. However, it took centuries for the heliocentric model to become a universally accepted belief.

Generative AI is destined to follow the same playbook, possibly facing fewer constraints in cultural acceptance.

Innovation is interwoven with socioeconomic factors that delay its full adoption by the public. If you ask your parents or your friends about ChatGPT or Bard, you might be surprised to learn they likely know nothing about them.

While ChatGPT and Bard are language models that will reshape search engine business models and influence SEO strategies, they are unlikely to disrupt the digital world as we know it.

Introducing a large language model into a search engine’s onion-layered architecture comes with more drawbacks than benefits.

TL;DR of this post:

  1. Large language models (LLMs) don’t allow users to search or browse results; instead, they return definitive and largely indisputable answers.
  2. Large language models (LLMs) simplify annoying tasks for the average Internet user. Before their existence, an average programmer could have achieved the same outcomes with just a few lines of code.
  3. Integrating LLM-based features into search engines will not enhance the search experience. Responses generated from probability distributions are susceptible to biases introduced by the wording of the prompt.

Yet, we live in a world filled with questions that have no answers. Even after more than 2023 years, we still don’t know the answer to simple questions such as “what is love?” or “why are we born if we never asked to be?”

Do you really think that confining your questions to a chatbot will get you better answers for your life?

In this post, I’m going to explore the differences between search engines and large language models in an attempt to debunk the hype that surrounds them in the field of SEO and across the digital domain.

The “Crazy Race” to Language Models

In response to the launch of OpenAI’s ChatGPT, backed by Microsoft, Google recently announced its own conversational AI feature called Bard. The move followed a “code red” alert inside Alphabet, leading the company to regroup and devise a counterattack to the rising popularity of ChatGPT.

Let’s quickly recap what happened in the last few days.

  1. Google’s former AI-first strategy has turned into a PR disaster.
  2. Alphabet Inc.’s stock lost over $100 billion in market cap as Google failed to meet expectations at its “Live from Paris” event and the live stream video was set to private.
  3. Microsoft and OpenAI are now seen as potential competitors in the search market.
  4. Google’s inability to present its abilities and turn them into products has led to a loss of control over its narrative and a credibility issue.
  5. Market share jitters aside, Google still sits on a comfortable share of over 80% of the search market and retains control of more than half of the mobile operating system (OS) market.

What a Large Language Model Can & Can’t Do

A large language model (LLM) is an artificial intelligence system that uses deep learning techniques to understand and generate human language.

These models are designed to analyze and learn from vast amounts of text data, such as books, articles, and other documents, to understand the structure and meaning of language.

Such models operate in the realm of prediction: they are pre-trained with self-supervised learning (predicting the next token in a sequence) and are typically refined with supervised fine-tuning and human feedback. The quality of the predictions hinges significantly on the quality of the data used to train and fine-tune the models.
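
To make the idea of a probability distribution over words concrete, here is a minimal, purely illustrative sketch: a toy bigram model (an assumption for illustration, not how production LLMs are built) that learns next-word probabilities from a tiny corpus. Real LLMs use neural networks over subword tokens at a vastly larger scale, but the underlying objective, predicting the next token, is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "vast amounts of text data".
corpus = "the earth revolves around the sun and the moon revolves around the earth".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_word_distribution(word):
    """Return P(next word | current word) as a plain dict."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))
# {'earth': 0.5, 'sun': 0.25, 'moon': 0.25} -- a probability distribution, not a fact:
# the model only knows which words tended to co-occur in its training data.
```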

When applied to SEO, large language models can do a few interesting things:

  • Write meta descriptions: Applying a probability distribution to a short text is a well-bounded automation task that limits the spread of outliers. Submitting a page title to generate a meta description is therefore arguably the best task a large language model can undertake in SEO (see the sketch after this list).
  • Proofreading: You can ensure a text is readable, concise and free from errors.
  • Summarize long documents: Make it easier for the reader to get to the core of a large corpus of text. You could also use an LLM to generate FAQs for your blog post, provided the content is accurately fact-checked prior to publication.
  • Coding: You can safely ask an LLM to convert one programming language into another, or to help you fix an error in a script. When it comes to emulating complex languages, the LLM’s probability distribution works well.
  • Snippet generation: Just as with coding, it is simply a matter of computing probabilities that output quick snippets of code, such as structured data and hreflang annotations.
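
To illustrate the meta description use case, here is a minimal sketch. It assumes the official openai Python client (openai>=1.0) and an OPENAI_API_KEY environment variable; the model name and prompt are illustrative placeholders, not recommendations, and any output should still be reviewed by a human before publication.

```python
# pip install openai
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def draft_meta_description(page_title: str, max_chars: int = 155) -> str:
    """Ask the model for a meta description candidate based only on the page title."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name; use whichever model you have access to
        messages=[
            {"role": "system", "content": "You write concise, factual SEO meta descriptions."},
            {"role": "user", "content": (
                f"Write a meta description of at most {max_chars} characters "
                f"for a page titled: {page_title}"
            )},
        ],
    )
    return response.choices[0].message.content.strip()

print(draft_meta_description("Understanding the Impact of Large Language Models on Search Engines"))
```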

And to get a better understanding of what large language models are, we should change our mindset when thinking about them.

Here is a list of what Large Language Models were not designed to do:

  • Crawl a website: LLMs are designed to react to user prompts and are fed by training corpora collected ahead of time. Their substantial lack of proactivity, due to this strict reliance on prompts, prevents them from crawling websites.
  • Scrape the web: You are unlikely to achieve content segmentation using ChatGPT-like models. For the same reason as above, they won’t fetch and detect the <span> and <div> elements that are key to segmenting your page templates (see the sketch after this list).
  • Browse the web: They cannot help you automate keyword research or identify search intent at scale, as the output only reflects the probability of the best words to concatenate in a sequence. Although the list of terms may look exhaustive and on-target, it does not necessarily match actual search demand.
  • Content creation: Even though the output is readable, the millions of people using ChatGPT will end up producing largely the same content. Despite decent syntax and paragraph structure, it will likely lack the expertise needed to differentiate a value proposition. Not the greatest match between E-E-A-T and AI-generated content.
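
To illustrate the scraping point: detecting <div> and <span> elements is a job for an HTML parser that actually fetches the page, not for a language model. A minimal sketch, assuming the requests and beautifulsoup4 packages and a placeholder URL; only after this kind of extraction would you hand the text to an LLM for summarization or labelling.

```python
# pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

url = "https://example.com/"  # placeholder URL
html = requests.get(url, timeout=10).text

soup = BeautifulSoup(html, "html.parser")

# The parser, not the LLM, is what understands the page template:
# here we pull the text out of every <div> and <span> element.
segments = [
    el.get_text(strip=True)
    for el in soup.find_all(["div", "span"])
    if el.get_text(strip=True)
]

for segment in segments[:5]:
    print(segment)
# Only at this point would you pass `segments` to an LLM, e.g. to summarize or classify them.
```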

What a Search Engine Can & Can’t Do

A search engine is a software program that helps users find information on the internet by searching through a database of indexed web pages, documents, and other online content. When a user enters a search query into a search engine, the program uses complex algorithms and ranking systems to identify and display the most relevant results for that query.

Some of the main things that search engines can do include:

  • Crawl: Search engines can locate new content on the Web, then benchmark portions of it against age-related signals to flag duplicate content that may not be passed on to the next step.
  • Index: Search engines can store the information discovered during the crawl in large databases. Indexation involves a significant amount of data preprocessing in the index layer to minimize the volume of data that must be searched, which reduces latency and maximizes search relevance (see the sketch after this list).
  • Process queries: Search engines use a tech stack called a “Query Processor.” It parses a user’s query, breaks it down into keywords and phrases, and returns a reranked output tailored to that specific query. Spell-checking and personalizing results based on the user’s search history, device, and location are all models deployed in this pipeline, which currently requires running inference on multiple small models.
  • Crunch web advertising: Search engines run a real-time bidding ad engine that works closely with the query processor to match the demand behind a user’s search query and thereby monetize search interactions. The model needs to optimize conversion to earn revenue and drive up rates, so relevance is the hyper-optimized parameter.
  • Enable data analysis: Search engines can also provide data and insights into search trends and user behavior, which can be useful for businesses and marketers looking to optimize their online strategies (e.g., Google Trends).
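
To make the index and query-processing stages more tangible, here is a minimal, illustrative inverted index in Python (a drastic simplification of a real index layer): documents are preprocessed once into a term-to-document map, so at query time the engine only scores documents containing the query terms instead of scanning everything.

```python
from collections import defaultdict

documents = {
    1: "large language models generate text from a probability distribution",
    2: "search engines crawl index and rank web pages",
    3: "seo strategies adapt to changes in search rankings",
}

# Index stage: preprocess every document once into an inverted index (term -> doc ids).
inverted_index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.split():
        inverted_index[term].add(doc_id)

def process_query(query: str) -> list:
    """Query-processing stage: parse the query into terms and rank candidate documents
    by a crude relevance score (number of matching terms)."""
    terms = query.lower().split()
    scores = defaultdict(int)
    for term in terms:
        for doc_id in inverted_index.get(term, set()):
            scores[doc_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(process_query("search engines"))  # -> [2, 3]: only documents containing the terms are scored
```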

🔦BONUS

Google and Bing use very different hardware for query processing. While Google relies heavily on standard CPUs and its in-house TPUs, Bing pairs standard CPUs with FPGAs, which accelerate both ranking and AI workloads.

While search engines are incredibly powerful tools, there are still some things that they can’t do:

  • Provide 100% accurate or complete results: Search engines do their best to provide relevant and accurate results, but they are not perfect. Some information may be outdated or inaccurate, and some content may not be indexed at all.
  • Guarantee privacy: While search engines may claim to protect user privacy, they still collect a significant amount of personal information, including search history and location data. This information can be used for targeted advertising and other purposes, which may be concerning to some users.
  • Automate tasks that require human judgement: Search engines are great at finding information and providing results, but they can’t perform tasks that require human judgement, such as making subjective decisions. For example, a search engine may not be able to tell whether a piece of content is satire or serious.
  • Fully understand unstructured data: While search engines are great at indexing and reading structured data for semantic search, such as web pages and databases, they still struggle with unstructured data, such as images, videos, and audio files. Despite Google’s progress in this area, the machine learning models at its core are not yet enough to fully crack the challenge.

The Turning Point of Search Experience

As the search experience reaches a turning point in history, we find ourselves in a paradigm where the coexistence between large language models (LLMs) and search engines is marred by a few structural problems.

One of the major challenges with large language models is that they can’t say “I don’t know,” which means guesses are returned dressed up as truth. That said, search engines can be equally inaccurate and overestimated.
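
That behaviour follows directly from how generation works: at every step the model converts raw scores into a probability distribution over candidate tokens and picks from it, so it always emits a fluent continuation even when the scores are nearly tied. A toy illustration in plain Python, with made-up numbers:

```python
import math

def softmax(scores):
    """Turn arbitrary scores into a probability distribution that always sums to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates with nearly identical (i.e. very uncertain) scores.
candidates = ["1543", "1610", "1633"]
scores = [0.11, 0.10, 0.09]

probs = softmax(scores)
best = max(zip(candidates, probs), key=lambda pair: pair[1])
print(dict(zip(candidates, probs)))
print("model answers:", best[0])
# Even though the model is almost maximally unsure, decoding still emits a confident-looking
# answer; there is no built-in "I don't know" outcome unless you add one on top.
```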

Since there is no universal truth for a single question in this world, a state-of-the-art search engine should leave users with the freedom to think laterally and make their own judgment from a list of varied options in the search results.

On the other hand, large language models (LLMs) are good at generating prescriptive responses based on complex probability distributions learned from structured and unstructured data.

And where does all this data come from?

Whether it’s a search engine or a language model, data always originate from the same old source: our websites.

Large language models and search engines are mutually exclusive search stacks, which leads me to believe that language models should not be incorporated into any search engine.

Yet, most search engines are onboarding this innovation to drive a paradigm shift in the search domain.

The Innovator’s Dilemma

Google’s recent loss in market value is significant when set against the looming shadows on the horizon of the search engine market. The hardest part of the game for Google to thrive in the next decade is fine-tuning its current business model to align with the digital economy.

This is the Innovator’s Dilemma, which carries several implications tied to the new operating costs that Google might have to absorb.

Introducing even a lightweight large language model (LLM) in the search engine domain implies:

  • Continuous retraining and fine-tuning to keep the models updated with the tons of new content published daily.
  • Continuous testing on active users to ensure the system responds positively to search prompts.
  • Extra computing costs. ChatGPT queries cost about 7 times more than Google searches, at an estimated 2 cents per query. If ChatGPT-like LLMs were incorporated into every search, this could devastate the company’s economics, with the net income of its services segment dropping from $55.5 billion in 2022 to an estimated $19.5 billion (see the back-of-the-envelope sketch below).
(Figure: Google Search Cost Structure. Source: Semianalysis.com)
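
As a quick back-of-the-envelope check, using only the figures quoted above (the annual query volume is derived from them, not independently sourced):

```python
# Back-of-the-envelope check using only the figures quoted above.
cost_per_llm_query = 0.02                  # ~2 cents per ChatGPT-style query
cost_per_search = cost_per_llm_query / 7   # "about 7 times more than Google searches" ~= 0.29 cents
extra_cost_per_query = cost_per_llm_query - cost_per_search  # ~= 1.7 cents of added cost per query

income_2022 = 55.5e9   # net income quoted for 2022, in dollars
income_after = 19.5e9  # projected net income if LLMs answered searches
implied_queries = (income_2022 - income_after) / extra_cost_per_query

print(f"extra cost per query: ${extra_cost_per_query:.4f}")
print(f"implied annual query volume: {implied_queries / 1e12:.1f} trillion")
# ~= 2.1 trillion queries/year, i.e. a few billion searches a day -- roughly the scale at which
# Google operates, which is why even a couple of cents per query adds up to tens of billions.
```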

Expanding a bit on Google’s business model, we should note that the introduction of an LLM-based model will likely reduce ad revenue. This will push Google to adopt a new advertising model aimed at charging advertisers more and displaying fewer ads.

Alphabet and its followers could afford this, and Microsoft could use the existing model to offer conversational search at a loss for some time until Google is pressured to react and make the next move.

The big dilemma is usually in the innovator’s hands: is Google ready to get out of their comfort zone and eagerly test new solutions?

Google’s Response to ChatGPT

Google’s market followers have already begun to venture out with radical tests aimed at challenging the traditional search stack.

Microsoft’s integration of ChatGPT technology into Bing was a first-mover operation to capture a portion of Google’s market share.

When Alphabet raised its code red alert, it was subconsciously affected by a confirmation bias that led Google to announce its own ChatGPT rival.

This was part of a pre-emptive counterattack aimed at safeguarding Google’s market stance.

To put it in lighter terms:

It is as though Google were suffering from an “inferiority complex” toward the competition, playing out in a gritty counterattack to exorcise the anguish of seeing its monopoly fall apart.

Bard was announced as a feature based on a lightweight version of LaMDA, Google’s most advanced conversational model. As a smaller model, it requires less computing power, meaning the feature can serve more users at a lower cost.

Why didn’t Google deploy its full-sized LaMDA model instead? That might have beaten the competition straight away. Yet Alphabet couldn’t afford such a risky move because of the potentially massive impact on its gross margins.

If Google is playing defense on margins by going for the skinnier baby Bard, it means the company is still not ready for an abrupt shift in its business model, considering that roughly 80% of its revenue comes from ads.

And if Google is not prepared for a shift in its business model, the company will be moving ahead with the handbrake on as it envisions the next wave of AI-generated features.

Improving the Semantic Search Engine to Overcome Impostor Syndrome

Search is undergoing a seismic shift as user experience and monetization patterns are poised for abrupt change. This may render obsolete parts of the existing search stack, the layers we listed above as the primary capabilities of a modern search engine.

Wouldn’t it be more useful for Google to overcome their impostor syndrome regarding the competition by improving their semantic search engine to cement their brand positioning?

Fervently studying the competition to plan your next business move may not always yield positive results, especially if your brand has become a verb in everyday language (e.g., “let me Google it”).

I am not suggesting that Google should completely disregard what’s happening in the competitive arena. Instead, they should re-examine what Search genuinely stands for and carefully identify the fundamental differences and scopes concerning large language models (LLMs).

There is no such thing as true novelty in this world, as everything is an emulation of what has already existed.

Thus Spoke Zarathustra, Nietzsche, F.
