Everything a man believes to have been created has already been created. Not once, but countless times.
Thus Spoke Zarathustra, Nietzsche, F.
Time is a cultural concept, and what we consider to be innovations are merely products of our perception. Humans can only adapt to these presumed innovations for as long as they are relevant within their lifespan.
Most of what we know today can be attributed to the expertise of the ancient Greeks. Since the 3rd century BCE, we have known that the Earth revolves around the Sun.
Aristarchus of Samos’ heliocentric model was never accepted in his time, as it contradicted the belief that the Earth was the center of the universe. It took nearly two millennia before Galileo Galilei, an Italian scientist, reopened the debate and became the evangelist of the heliocentric model as we know it today.
Despite the time gap between the Greek and the Italian scientist, they shared the same views. Even so, it took centuries for the heliocentric model to become a universally accepted belief.
Generative AI is destined to follow the same playbook, possibly facing fewer constraints in cultural acceptance.
Innovation is interlaced with socioeconomic factors that delay its full embrace in the public domain. If you ask your parents or your friends about ChatGPT or Bard, you might be surprised to learn that they likely know nothing about them.
While ChatGPT and Bard are lightweight language models that will reshape search engine business models and influence SEO strategies, they are unlikely to disrupt the digital world as we know it.
Introducing a large language model into a search engine’s onion-layered architecture comes with more drawbacks than benefits.
TL;DR:
- Large language models (LLMs) constrain the way search is triggered. Language models don’t let users search or browse results; they return definitive, largely indisputable answers that lack opinions and first-hand expertise.
- LLMs are good at automating repetitive but narrow tasks. The longer an LLM has to generate for a single task, the higher the likelihood of producing outliers: the probability distribution over the next word becomes skewed, resulting in a higher proportion of errors.
- Integrating LLM-based features into search engines will not enhance the search experience. Machine learning models operate on contextual information, which is the fundamental reason we are still far from fully self-driving cars: the multitude of out-of-context problems makes it difficult to incorporate that knowledge into a model effectively.
Yet, we live in a world filled with questions that have no answers. Even after more than 2,023 years, we still don’t know the answer to simple questions such as “What is love?” or “Why are we brought into this world if we never asked to be?”
Do you think that restricting your questions to a chatbot will help you get better answers for your life?
In this post, I’m going to explore the differences between search engines and large language models in an attempt to debunk the hype that surrounds them in the field of SEO and across the digital domain.
The “Crazy Race” to Language Models
In response to the launch of OpenAI’s ChatGPT and its integration into Microsoft’s Bing, Google recently announced its own language model experience, called Bard. The move followed a “code red” inside Alphabet, leading the company to regroup and devise a counterattack to tackle ChatGPT’s rising popularity.
Let’s quickly recap what happened in the last few days.
- Google’s former AI-first strategy has turned into a PR disaster.
- Alphabet Inc.’s stock lost over $100 billion in market cap as Google failed to meet expectations at its “Live from Paris” event and the live stream video was set to private.
- Microsoft and OpenAI are now seen as potential competitors in the search market.
- Google’s inability to present its abilities and turn them into products has led to a loss of control over its narrative and a credibility issue.
- Market-share worries aside, Google still sits on a comfy 80% of the search market and retains full control over half the mobile operating system (OS) market.
What a Large Language Model Can & Can’t Do
A large language model (LLM) is an artificial intelligence system that uses deep learning techniques to understand and generate human language.
Earlier NLP models processed text one word at a time (see recurrent neural networks, RNNs), which was slow and limited in understanding the overall context.
At some point, though, Transformers broke onto the scene with their “self-attention” mechanism, which allows words to be processed in parallel and long-range dependencies to be captured more efficiently.
This revolutionized NLP tasks as it enabled quicker and more accurate responses to prompts based on the semantic context of sentences within a text.
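To make self-attention less abstract, here is a minimal sketch of the scaled dot-product attention at the heart of Transformers, written in plain NumPy with random toy weights. It is an illustration under stated assumptions, not a production model: real Transformers add multiple heads, learned projections, and positional encodings on top of this core operation.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    # Every token scores every other token in parallel (no sequential loop)
    scores = Q @ K.T / np.sqrt(d_k)                 # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # context-aware vectors

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```

Because the attention matrix is computed in one shot rather than word by word, long-range relationships are captured without the sequential bottleneck of RNNs.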
💡 In case you’re interested in digging deeper into language processing, I recommend checking out my full guide to NLP techniques and use cases for SEO.
These models operate in the realm of predictive analysis and are trained with self-supervised machine learning techniques, often refined with supervised fine-tuning. The accuracy of the predictions hinges significantly on the quality of the data used to train the models.
When applied to SEO, large language models can do a few interesting things:
| Task | Main Implications |
| --- | --- |
| Write meta descriptions | Applying a probability distribution to a short text limits the spread of outliers. Submitting a page title to generate a meta description is therefore one of the best tasks an LLM can undertake in SEO (see the sketch after this table). |
| Proofreading | You can ensure a text is readable, concise, and free from errors. |
| Summarize long documents | Make it easier for the reader to get to the core of a large corpus of text. You could also use an LLM to generate FAQs for your blog post, provided the content is accurately fact-checked prior to publication. |
| Coding | You can safely ask an LLM to convert one programming language into another, or to help you fix an error in a script. When it comes to emulating complex programming languages, the LLM’s probability distribution works well. |
| Snippet generation | Just as with coding, it’s simply a matter of computing a probability equation that outputs quick snippets of code, such as structured data and hreflang annotations. |
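To illustrate the first row of the table, here is a hedged sketch of meta-description generation using the pre-1.0 `openai` Python client and the `text-davinci-003` model available in early 2023. The prompt wording and parameters are illustrative assumptions, not a recommended production setup.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def meta_description(page_title: str) -> str:
    prompt = (
        "Write a meta description under 155 characters for a page titled: "
        f"{page_title}"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=60,
        temperature=0.3,  # a tight distribution limits outliers on short text
    )
    return response["choices"][0]["text"].strip()

print(meta_description("What a Large Language Model Can & Can't Do"))
```

Keeping the output short and the temperature low keeps the probability distribution on familiar ground, which is precisely why this task tops the table.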
And to get a better understanding of what large language models are, we should change our mindset when thinking about them:
Instead of saying, “GPT will answer my question” consider instead, “GPT will provide the most probable response”. This is a much better way to understand how the technology actually works.
— Danny Richman (@DannyRichman) February 14, 2023
Here is a list of what Large Language Models were not designed to do:
| Task | Main Implications |
| --- | --- |
| Crawl a website | LLMs are designed to react to user prompts and are fed by training pipelines built on vast databases of information. Their substantial lack of proactivity, due to a strict reliance on prompts, prevents them from crawling websites. |
| Provide citations | Large language models are trained as probability distributions over a predefined dataset. When asked for a citation, the model searches its training data for the most probable domain to use as a reference. Then, considering the prompt and response context, it estimates the most likely URL path to append to the selected domain. |
| Browse the web | They cannot help you automate keyword research or identify search intent at scale, as the output only reflects a probability of the best words to concatenate in a sequence. Although the list of terms may be exhaustive and on-target, the terms do not necessarily match actual search demand. |
| Content creation | Despite improving the readability of content, the millions of people using language models to write articles may end up producing similar content, lacking original opinions and expertise. Not the greatest recipe for helpful content deep-rooted in the E-E-A-T principles. |
If you ask a Transformer-based language model such as ChatGPT for keyword extraction, you won’t receive the actual keywords extracted from the text corpus. Instead, you’ll get the keywords that GPT believes to be the most likely, based on its own predictions from similar tasks.
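To see the difference, compare that with a classic extractive approach that actually reads the corpus. Here is a minimal sketch using scikit-learn’s TfidfVectorizer (one of many possible extractors, chosen purely for illustration): the keywords come deterministically from the text itself, not from a model’s guess at plausible keywords.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Large language models predict the next word in a sequence.",
    "Search engines crawl, index and rank documents for a query.",
]

# Score terms by how characteristic they are of each document
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()

for row, doc in zip(tfidf.toarray(), docs):
    top_terms = [t for _, t in sorted(zip(row, terms), reverse=True)[:3]]
    print(doc[:35], "->", top_terms)
```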
Ok. LLMs aren’t crawling your sites. They’re being fed data from other things that do. Like common crawl.
— Ryan Jones (@RyanJones) February 8, 2023
They aren’t storing your text. They’re embedding words as numbers and storing vectors of word relations. They just know probability of 1 word/phrase following another.
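Ryan’s point is easy to verify at home. This small sketch, assuming the Hugging Face transformers library and the openly available GPT-2 model (a stand-in for the larger models discussed here), prints the probability distribution over the next token: the model ranks continuations, it doesn’t look anything up.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The Earth revolves around the", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, seq_len, vocab_size)

# Probability distribution over the very next token
probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(probs, k=5)
for p, i in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(i))!r}: {p.item():.3f}")
```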
What a Search Engine Can & Can’t Do
A search engine is a software program that helps users find information on the internet by searching through a database of indexed web pages, documents, and other online content. When a user enters a search query into a search engine, the program uses complex algorithms and ranking systems to identify and display the most relevant results for that query.
Some of the main things that search engines can do include:
| Stage | Description |
| --- | --- |
| Crawl | Search engines locate new content on the web, then benchmark it against known content (using signals such as age) to filter out duplicates that should not be passed on to the next step. |
| Index | Search engines store the information discovered during the crawl in large databases. Content indexation happens after a significant amount of preprocessing in the index layer to minimize the volume of data that must be searched, which reduces latency and maximizes search relevance (see the toy sketch after this table). |
| Process queries | Search engines use a tech stack called a “query processor.” It parses a user’s query, breaks it down into keywords and phrases, and returns a reranked output tailored to that specific query. Spell-checking and personalizing results based on the user’s search history, device, and location are all models deployed in this pipeline, which currently requires running inference on multiple small models. |
| Crunch web advertising | Search engines leverage a real-time bidding ad engine. It works closely with the query processor to meet the demand of a tailored search query and thereby monetize potential search interactions. The model needs to optimize conversion to earn revenue and drive up rates, so relevance is the hyper-optimized parameter. |
| Enable data analysis | Search engines can also provide data and insights into search trends and user behavior, which can be useful for businesses and marketers looking to optimize their online strategies (e.g., Google Trends). |
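To ground the crawl → index → query stages above, here is a toy sketch of the core data structure of the index layer, an inverted index, with a naive query processor on top. It is a deliberately minimal illustration; real engines add ranking, sharding, and heavy preprocessing.

```python
from collections import defaultdict

pages = {
    "page1": "large language models predict the next word",
    "page2": "search engines crawl and index web pages",
    "page3": "language models and search engines use web data",
}

# Index: map each term to the set of documents containing it
inverted_index = defaultdict(set)
for url, text in pages.items():
    for term in text.split():
        inverted_index[term].add(url)

# Query processing: intersect the posting lists of the query terms
def search(query: str) -> set:
    postings = [inverted_index[t] for t in query.split()]
    return set.intersection(*postings) if postings else set()

print(search("language models"))  # {'page1', 'page3'}
```

Note how the engine retrieves documents that exist; it never generates an answer, which is the structural difference from a language model.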
🔦 BONUS
Google and Bing use very different hardware for query processing. While Google relies on a large fleet of standard CPUs and in-house TPUs, Bing pairs standard CPUs with FPGAs, which accelerate both ranking and AI workloads.
While search engines are incredibly powerful tools, there are still some things that they can’t do:
| Can’t Do | Description |
| --- | --- |
| Provide 100% accurate or complete results | Search engines do their best to provide relevant and accurate results, but they are not perfect. Some information may be outdated or inaccurate, and some content may not be indexed at all. |
| Guarantee privacy | While search engines may claim to protect user privacy, they still collect a significant amount of personal information, including search history and location data. This information can be used for targeted advertising and other purposes, which may concern some users. |
| Automate tasks that require human judgement | Search engines are great at finding information and providing results, but they can’t perform tasks that require human judgement, such as making subjective decisions. For example, a search engine may not be able to tell whether a piece of content is satire or serious. |
| Fully understand unstructured data | While search engines are great at indexing and reading structured data for semantic search, such as web pages and databases, they still struggle with unstructured data, such as images, videos, and audio files. Despite Google’s progress in this area, the machine learning models at its core haven’t fully cracked the challenge. |
The Turning Point of Search Experience
As the search experience reaches a turning point in history, we find ourselves in a paradigm where the coexistence of large language models (LLMs) and search engines is marred by a few structural problems.
One of the major challenges with large language models is that they can’t say “I don’t know,” which means responses are returned as pieces of truth in disguise. That said, search engines can be equally inaccurate and overestimated.
Since there is no universal truth for a single question in this world, a state-of-the-art search engine should leave users with the freedom to think laterally and make their own judgment from a list of varied options in the search results.
On the other hand, large language models (LLMs) are good at generating prescriptive responses based on complex probability distributions trained on structured and unstructured data.
And where do all these data come from?
Whether it’s a search engine or a language model, data always originate from the same old source: our websites.
Large language models and search engines are mutually exclusive search stacks, which leads me to believe that language models should not be incorporated into any search engine.
Yet, most search engines are onboarding this innovation to create a paradigm shift in the search domain.
The Innovator’s Dilemma
Google’s recent $100 billion market-cap loss is small compared to the shadows looming over the search engine market. The hardest part of the game for Google in the next decade will be fine-tuning its current business model to align with the digital economy.
This is the Innovator’s Dilemma, and it carries several implications tied to new operating costs that Google might have to support.
Introducing even a lightweight large language model (LLM) in the search engine domain implies:
- Continuous retraining and fine-tuning to keep the model updated with the tons of new content published daily.
- Continuous testing on active users to ensure the system responds positively to search prompts.
- Extra computing costs. ChatGPT queries cost an estimated 2 cents each, about 7 times more than a Google search. If ChatGPT-like LLMs were incorporated into search, this could devastate the company, with its net income for services dropping from $55.5 billion in 2022 to $19.5 billion (see the back-of-envelope sketch below).
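A back-of-envelope sketch of those figures (all numbers are the estimates quoted above, not official data) shows why the cost problem bites at Google’s scale:

```python
llm_cost_per_query = 0.02                       # USD, estimated ChatGPT cost
search_cost_per_query = llm_cost_per_query / 7  # ~0.3 cents for classic search

extra_cost = llm_cost_per_query - search_cost_per_query  # ~1.7 cents/query

net_income_2022 = 55.5e9       # USD, services segment
projected_net_income = 19.5e9  # USD, if LLMs answered every search

implied_queries = (net_income_2022 - projected_net_income) / extra_cost
print(f"Implied annual query volume: {implied_queries / 1e12:.1f} trillion")
# ~2.1 trillion queries/year, in the ballpark of Google's annual search volume
```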
Expanding a bit on Google’s business model, the introduction of an LLM-based model will likely reduce ad revenue, pushing Google toward a new advertising model that charges advertisers more while displaying fewer ads.
Alphabet could afford this, and Microsoft could use the existing model to offer conversational search at a loss for some time, until Google is pressured to react and make the next move.
The big dilemma is usually in the innovator’s hands: is Google ready to step out of its comfort zone and eagerly test new solutions?
Google’s Response to ChatGPT
Google’s market followers have already begun to venture out with radical tests aimed at challenging the traditional search stack.
Microsoft’s integration of ChatGPT into Bing was a first-mover operation to capture a portion of Google’s market share.
When Alphabet raised its code red, the company was arguably swayed by confirmation bias, which led Google to announce its own ChatGPT rival.
This was part of a pre-emptive counterattack aimed at safeguarding Google’s market position.
Translated into more humorous terms:
We're adding it to the search engine that you actually use.
— Mike King (@iPullRank) February 6, 2023
Btw we have other even better language modeling tech, but we're adults so we're not gonna give it to you yet because y'all don't know how to act.
It is as though Google were suffering from an “inferiority complex” about the competition, playing out in a gritty counterattack to exorcise the anguish of seeing its monopoly fall apart.
Bard was announced as a feature based on a lightweight version of LaMDA, Google’s most advanced conversational model. As a smaller model, it requires less computing power, meaning the feature can serve more users at a lower cost.
Why didn’t Google deploy its full-sized LaMDA model instead? That would have beaten the competition straight away. Yet Alphabet couldn’t afford such a risky move due to the potentially massive impact on its gross margins.
If Google is playing defense on margins by going for the skinnier baby Bard, the company is still not ready for an abrupt shift in its business model, considering that 80% of its revenue comes from ads.
And if Google is not prepared for a shift in its business model, the company will approach the next wave of AI-generated features with the handbrake on.
Improving the Semantic Search Engine to Overcome Impostor Syndrome
Search is undergoing a seismic shift as user experience and monetization patterns are poised for an abrupt change. This may render obsolete parts of the existing search stack, the very layers we listed above as the primary capabilities of a modern search engine.
Wouldn’t it be more useful for Google to overcome the impostor syndrome regarding the competition by improving its semantic search engine to cement its brand positioning?
Fervently studying the competition to plan your next business move may not always yield positive results, especially if your brand has become a verb in common language (e.g., “let me Google it”).
I am not suggesting that Google should completely disregard what’s happening in the competitive arena. Instead, it should re-examine what Search genuinely stands for and carefully identify the fundamental differences in scope between search and large language models (LLMs).
There is no such thing as true novelty in this world, as everything is an emulation of what has already existed.
Thus Spoke Zarathustra, Nietzsche, F.
FAQ
What are some good uses for an LLM?
Tasks requiring the manipulation of short-form text are optimal use cases for an LLM.
Examples include title tag and meta description generation, short-text summarization, translation, and even coding.
What are some bad uses for an LLM?
Tasks requiring human reasoning beyond plain execution.
Examples include keyword extraction, search intent clustering, topic clustering, and entity extraction.
Why don’t LLMs get citations right?
Because language models are large probability distributions trained on Web data. When asked for a citation, the model searches its training data for the most probable domain to use as a reference. Then, considering the prompt and response context, the model estimates the most likely URL path to add to the selected domain.
Why aren’t language models search engines?
Because they are statistical models that predict the next word in a sentence from your own input.
Answers are crafted from the user’s questions, and while this makes for a personalized search experience, it will always lack real opinions and expertise.