As more people use GPT tools and Google’s market share continues to slide, I’ve had more and more people asking me about ‘GEO’ — Generative Engine Optimisation — and how they can get their content or brand showing up in tools like ChatGPT and Perplexity.
Whilst I can completely understand where this is coming from, a small part of me dies inside. Even after years and years of beating the drum that we 'optimise for users, not engines', when a new platform comes along the first question still comes from a manipulative mindset: how to chase an algorithm rather than understand what people want.
The Engine Is Not Your Audience
What we’ve learnt over the years is that these systems always evolve. And with every iteration, they get better at aligning with what people actually want rather than what’s been gamed into their results. The gap between the tricks and trust gets smaller each time.
Still, the temptation is understandable. A site might get a spike in visibility from something that seems to work well. But if the focus is on manipulating the system instead of creating something genuinely useful, it generally crashes pretty spectacularly. As a consultant, I literally can't afford short-term gains that end in crashes, because I'd be fired.
But this 'designing for the engine instead of the person' mindset is still here. Maybe it's human nature: how do I get the edge?
Whether it was Google in 2011 or GPT-5 in 2025, the mindset has been the same — find the flaw, fire your shot, and hope it lands. Like Luke Skywalker taking out the Death Star by nailing the exhaust port. Maybe I could coin a new phrase ‘DSEO’… Death Star Engine Optimisation.
And well…back in 2011, it worked. I remember one Saturday morning I bought an exact match domain, spun up a templated site with some basic copy, threw a few spammy links at it, and by lunchtime, I was ranking #1 and I’d made an affiliate sale by the end of the weekend. It was absurd. But the engines were simple, and it was too cheap and easy to cheat them.
Those days are over. Today's platforms, including Google search, are smarter, faster, and learning directly from how users interact with our content. Chasing the quirks of these models is a fool's game.
The Pace of Change Is Only Accelerating
Outside of some intense periods, we've generally become accustomed to Google algorithm updates every few months. There have been a few occasions where the bigger updates forced us to adapt, but we had time. We could run tests, observe patterns, reverse-engineer, reflect, and gradually shift strategies.
That’s gone.
With AI systems, we’re not dealing with quarterly updates anymore. These models are being retrained, fine-tuned, and reinforced at a rate we’ve never seen before.
These models adapt not only on a system basis, but per user, based on how people interact with their outputs. If someone skips a response, rephrases their question, or asks for a source, that’s feedback. The model learns. The loop is tighter, the turnaround faster. And that means even if you manage to ‘optimise’ your way into visibility today, you’ll drop out just as fast if the content doesn’t deliver value to the user.
Trying to stay one step ahead by guessing what the model wants is a race you’ll lose — because what the model wants is what people want. And people don’t reward fluff, misdirection, or content built purely to game a system. They reward usefulness.
Mid-Funnel Is the New Battleground
For years, content strategies have focused on two things: top-of-funnel visibility — chasing high-volume, broad awareness queries — and bottom-of-funnel conversion, optimising landing pages to generate leads or close sales. These stages have always been an easy sell for budget allocation: more traffic at the top, more conversions at the bottom.
But AI is collapsing both ends of that funnel.
Top-of-funnel needs are now being handled by AI systems like Google’s AI Mode and ChatGPT. These tools can answer ‘what is’ or ‘best X for Y’ queries instantly, often better than traditional web content ever could. They summarise, contextualise, and personalise responses (without needing to scroll through a load of banner ads) — making much of that informational content redundant.
Bottom-of-funnel queries — brand terms, pricing, transactional intent — are also being absorbed. AI Mode can surface these directly with links, brand panels, or instant answers. In many cases, branded content or PPC is more effective here than organic search.
This leaves the middle of the funnel — the messy middle — as the real opportunity. This is where users are weighing options, validating choices, comparing products, and building trust. These are the nuanced, high-intent, context-heavy queries that AI can’t fully answer — where people still need to explore, read, think, and decide.
If your content doesn’t help users make progress in that decision-making process — if it’s vague, shallow, or transparently self-serving — it won’t get surfaced. If an LLM can explain something better than you can (and do it in 5 seconds), you’re wasting your time. But if your content reflects real-world experience, user understanding, and helpful detail, you’ll stand out — not just to users, but to the systems themselves.
This is the shift. Away from volume, toward value. Away from content designed to rank, toward content designed to help.
So, What to Do Next?
Firstly, ignore the hype and don’t panic.
Forget trying to reverse-engineer language models or squeeze your way into GPT through tricks. Sorry, I can't help you with that, because the fundamentals haven't changed; they've just become more important and harder to fake.
Focus on creating content that earns its place — content that answers real questions, guides real decisions, and builds real trust. That means understanding your audience deeply, showing up in the messy middle with clear, in-depth advice, and being useful in a way that is difficult for others to replicate.
You don’t need to optimise for the engine. You need to optimise for the person the engine is trying to serve. The best strategies have always centred on understanding users, solving real problems, and helping people make confident decisions. The rise of AI and LLMs just reinforces that approach.