OpenAI’s ChatGPT Search officially launched and generative engine optimization (GEO) just became a lot more important.
Most of the major players in generative AI search – ChatGPT, Perplexity and Google’s Gemini – now combine real-time search with conversational capabilities.
What does this mean for the future of SEO?
If you want your brand to be part of the conversations that matter, it’s time to start thinking differently.
Here are five key trends in GEO that are redefining the future of search, plus how you can prepare.
1. The evolution of entities
Entities are (once again) changing how we think about search, and understanding their evolving role is key to staying visible.
Remember the phrase “things, not strings”?
When Google introduced its Knowledge Graph in 2012, it marked a major shift from simply matching a “string” of words in text to recognizing distinct “things,” or entities, like people, places, products and ideas.
This shift was the first step towards connecting information in a meaningful web of knowledge, bringing search engines closer to understanding information like a human would.
Now, with the rise of AI-powered search technology, entities have taken on an even greater role. They’re crucial to how AI interprets and prioritizes information.
Entities are connected through knowledge networks
Entities and their relationships are anchored within knowledge networks – structured collections like Google’s Knowledge Graph, Wikipedia, Wikidata and other trusted sources.
These networks define connections between entities and attributes, serving as a foundational reference that AI uses to understand context, assess credibility and determine relevance.
However, AI doesn’t just rely on these existing networks. Over time, it builds its own dynamic web of connections, developing a deeper understanding of how things relate to one another in context.
The role of entities in relevance
Think of entities as AI’s way of understanding “what something truly is.” AI recognizes the connections between entities and weaves them into a web that links ideas, context and real-world relevance.
By identifying these patterns, it can associate related topics, giving it the power to offer answers that feel cohesive and intuitive.
For example, say someone searches:
“What’s a good beginner-friendly bike for commuting in San Francisco?”
Instead of treating this as a series of unrelated words, AI interprets it by identifying key entities, attributes and the connections between them:
Bike: Product (entity).
San Francisco: Location (entity).
Beginner-friendly: Experience Level (attribute).
Commuting: Purpose (attribute).
Here, we have the entities “bike” and “San Francisco,” and supporting attributes like “beginner-friendly” and “commuting,” which give depth to the query.
AI recognizes that a beginner-friendly bike for San Francisco should handle hills easily and might have features like an upright design, easy gear shifting or electric assist.
By understanding these connections, AI doesn’t just pull a list of bikes.
It considers the context and intent, referencing trusted sources, recent reviews, customer sentiment and recommendations to surface options suited to the city’s terrain and the rider’s experience level.
Dig deeper: Entity SEO: The definitive guide
The role of entities in E-E-A-T
However, entities do more than link related information – they establish markers of experience, expertise, authority and trustworthiness (E-E-A-T).
Your brand, too, is an entity in this ecosystem.
Brands are recognized alongside other distinct “things,” and their authority and trustworthiness play a direct role in their visibility.
And especially for topics where accuracy is critical (think YMYL, or “Your Money or Your Life” topics), AI relies on these established connections to decide which sources to use.
With clear authority in their niche and connections to other recognized entities, brands can become the voices AI turns to, embedding them in conversations around key topics.
Dig deeper: Modern SEO: Packaging your brand and marketing for Google
2. LLMs and RAG: The tech behind AI-driven search
Entities’ growing importance in modern search is tied to how LLMs and retrieval-augmented generation (RAG) operate.
Understanding this technology helps tie in the “why” behind GEO.
How do LLMs work?
LLMs are trained on extensive datasets – everything from websites and forums to structured databases like Wikipedia and Wikidata – which gives them the ability to process and understand the complexities of human language.
Understanding natural language and intent: LLMs learn how words, phrases and ideas interact within different contexts, enabling them to interpret both the literal meaning and the deeper meaning behind queries. This allows them to generate intuitive, human-like responses.
Mapping entity relationships: Through entity recognition, LLMs learn to map connections between things. For example, “San Francisco” is recognized as a location linked to attributes like “hilly terrain” or “tech hubs.” These patterns help LLMs synthesize cohesive responses from a web of interrelated knowledge.
Generating contextually relevant answers: When processing a query, LLMs rely on their pre-trained knowledge to generate responses that consider both the explicit query and its broader context, aligning answers with the user’s intent.
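The “mapping entity relationships” step above can be sketched with toy embedding vectors. Real LLMs learn vectors with hundreds or thousands of dimensions from training data; the hand-made three-dimensional vectors below are assumptions for illustration, but the principle is the same: related entities sit close together in vector space.

```python
import math

# Hand-made "embeddings" for illustration only; real models
# learn these vectors from massive training datasets.
EMBEDDINGS = {
    "san francisco": [0.9, 0.8, 0.1],
    "hilly terrain": [0.85, 0.75, 0.2],
    "flat farmland": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

sf = EMBEDDINGS["san francisco"]
print(cosine(sf, EMBEDDINGS["hilly terrain"]))  # high: closely related
print(cosine(sf, EMBEDDINGS["flat farmland"]))  # low: weakly related
```

This geometric closeness is how “San Francisco” ends up associated with “hilly terrain” in generated answers without anyone hand-coding the link.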
Despite their strengths, LLMs face a critical limitation: their reliance on static, pre-trained knowledge.
They can produce outdated answers or “hallucinations” – responses that seem plausible but lack factual accuracy.
RAG powering real-time updates
RAG solves these challenges by giving AI real-time access to fresh information.
Instead of relying solely on pre-trained data, it retrieves relevant content as queries occur, weaving it together with the LLM’s existing knowledge. This ensures responses stay accurate, timely and grounded in real-world data.
How does RAG work?
According to Google, retrieval-augmented generation enhances traditional LLM workflows by combining three key processes: retrieval, augmentation and generation.
Retrieval: RAG enhances responses by querying pre-indexed, vectorized data from diverse sources like news articles, APIs, Wikipedia, Wikidata and UGC platforms like Reddit and Quora. Leveraging semantic search, it combines authoritative knowledge with current and emerging trends for a well-rounded understanding.
Augmentation: Retrieved information is seamlessly integrated with the LLM’s pre-trained knowledge, enriching the prompt context.
Generation: With this enhanced context, AI generates a response that is accurate and grounded in current reality, combining foundational insights with up-to-date information.
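The retrieve, augment and generate steps above can be sketched end to end. Everything here is deliberately simplified: the “index” is a handful of hand-written snippets, retrieval uses word overlap as a stand-in for the vectorized semantic search a real RAG system performs, and the final LLM call is left as a printed prompt.

```python
# Minimal RAG sketch: retrieval by word overlap, then prompt augmentation.
DOCUMENTS = [
    "San Francisco is known for steep hills and variable weather.",
    "Electric-assist bikes make hilly commutes easier for beginners.",
    "Sourdough bread is a San Francisco culinary tradition.",
]

def retrieve(query: str, docs: list, k: int = 2) -> list:
    """Rank documents by shared words with the query (stand-in for semantic search)."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def augment(query: str, snippets: list) -> str:
    """Build the enriched prompt an LLM would receive."""
    context = "\n".join(f"- {s}" for s in snippets)
    return f"Context:\n{context}\n\nQuestion: {query}"

query = "beginner bike for commuting in hilly San Francisco"
prompt = augment(query, retrieve(query, DOCUMENTS))
print(prompt)  # a real system would now pass this prompt to an LLM
```

The practical takeaway for GEO: only content that gets retrieved in the first step can shape the generated answer.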
Why this matters for GEO
LLMs build the foundation by understanding context, while RAG ensures what’s delivered is timely and accurate.
For brands, it’s no longer enough to publish content and hope for relevance.
Your content needs to be structured to integrate seamlessly into the databases and knowledge networks on which AI depends. Equally important is building credibility through associations with trusted sources, earning authoritative mentions and fostering real-time engagement.
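One concrete way to structure brand content for those knowledge networks is schema.org markup. A minimal sketch follows – every value is a placeholder, and which properties matter most will vary by brand – showing how `sameAs` links tie a brand entity to trusted sources like Wikipedia and Wikidata.

```python
import json

# Illustrative schema.org Organization markup; all values are placeholders.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Bikes Co.",
    "url": "https://www.example.com",
    "sameAs": [  # links that anchor the brand entity in knowledge networks
        "https://en.wikipedia.org/wiki/Example",
        "https://www.wikidata.org/wiki/Q0",
    ],
    "description": "Retailer of commuter bicycles.",
}

# Embedded as JSON-LD in a page's <head>:
print('<script type="application/ld+json">')
print(json.dumps(organization, indent=2))
print("</script>")
```

Markup like this doesn’t guarantee inclusion in any knowledge graph, but it makes the brand entity and its relationships unambiguous to the systems that build them.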
The goal is to become the go-to source of information AI consistently turns to.
How do you get there? It starts with entity optimization.
3. The new age of entity optimization
Entities are how AI makes sense of the world. But knowing their significance is just the beginning.
For your brand to thrive in the interconnected web of AI understanding, it needs to become a part of the story. Here’s how to get started.