
Highlights from Elastic{ON} London 2026

Vectors, AI Models and Agents in the Spotlight

I recently attended Elastic{ON} London 2026, held at Convene Sancroft in Paternoster Square, just a stone’s throw from St Paul’s Cathedral.

Elastic runs these events around the world for developers, engineers and architects building applications on Elasticsearch. After the opening keynote, the agenda split into three tracks – search, observability and security.

Unsurprisingly, I spent most of my time in the search track.

There was plenty of discussion about AI, vector search and building intelligent applications on top of enterprise data. But rather than trying to summarise the whole day, here are three things that particularly stood out to me.

1. DiskBBQ – tackling the ‘RAM tax’ of vector search

One of the most interesting announcements was DiskBBQ, a new vector indexing format designed to address one of the biggest challenges in AI search systems: the cost of memory. The name comes from BBQ – Better Binary Quantization, a technique used to compress vectors so they can be stored and processed far more efficiently.

Historically, high-performance vector search – typically using HNSW indexes – has required large portions of the index to be kept in RAM in order to achieve fast query performance. That approach works well, but it creates what many people refer to as the “RAM tax” of vector search.

As datasets grow, the memory requirements – and therefore infrastructure costs – can increase dramatically.

DiskBBQ takes a different approach by using clustering and quantisation techniques that allow vector searches to run efficiently directly from disk, while still maintaining strong performance.
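To make the memory saving concrete, here is a toy illustration of binary quantisation, the idea behind the "BBQ" part of the name. This is not Elastic's actual DiskBBQ algorithm (which adds clustering and more sophisticated quantisation); it just shows how collapsing each float32 dimension to a single bit shrinks the index by 32x while still supporting a cheap distance comparison:

```python
import numpy as np

def binary_quantize(vectors):
    """Compress float vectors to 1 bit per dimension: positive -> 1, else 0."""
    return (vectors > 0).astype(np.uint8)

def hamming_distance(a, b):
    """Count of differing bits; a cheap proxy for vector distance."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(42)
docs = rng.standard_normal((1000, 128)).astype(np.float32)  # 128-dim float32 vectors
query = rng.standard_normal(128).astype(np.float32)

q_docs = binary_quantize(docs)    # 128 bits per vector instead of 128 * 32 bits
q_query = binary_quantize(query)

# Rank documents by Hamming distance to the quantised query.
distances = [hamming_distance(q_query, d) for d in q_docs]
top_10 = np.argsort(distances)[:10]
```

In practice the quantised candidates are usually re-scored against the original full-precision vectors to recover accuracy, which is part of why the technique can trade so much memory for so little relevance loss.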

If it delivers on its promise, the implications are significant. It could make large-scale semantic search and retrieval-augmented generation (RAG) applications much more cost-effective to run.

For organisations trying to scale AI-driven search without exploding infrastructure budgets, that’s a very welcome development.

2. The Jina AI acquisition – strengthening the AI stack

Another highlight was catching up on the impact of Elastic’s acquisition of Jina AI.

Jina has built a strong reputation for its work on embeddings, multilingual models and re-ranking, all of which play a critical role in improving search relevance in AI-powered systems.

Bringing that technology directly into the Elastic ecosystem should make it easier for developers to build high-quality AI search experiences without stitching together multiple external components.

In particular, Jina’s re-ranking models can help improve the precision of search results by reordering retrieved documents based on deeper semantic understanding.

For teams building RAG pipelines or semantic search applications, that kind of capability can make a noticeable difference to the quality of the final answer.
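The retrieve-then-rerank pattern those teams use is simple to sketch. The scoring function below is a deliberately crude stand-in (term overlap) for a real re-ranking model such as Jina's cross-encoders; the structure, not the scorer, is the point:

```python
def rerank(query, candidates, score_fn, top_k=5):
    """Reorder first-stage candidates by a (typically more expensive) relevance score."""
    scored = sorted(candidates, key=lambda doc: score_fn(query, doc), reverse=True)
    return scored[:top_k]

# Hypothetical stand-in for a re-ranking model's relevance score.
def overlap_score(query, doc):
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)

candidates = [
    "Elasticsearch hybrid search tutorial",
    "Cooking the perfect barbecue",
    "Semantic search with vector embeddings in Elasticsearch",
]
best = rerank("semantic search elasticsearch", candidates, overlap_score, top_k=2)
# best[0] is the document sharing all three query terms
```

Because the re-ranker only sees the handful of documents the first stage retrieved, it can afford deeper (and slower) semantic comparison than the initial retrieval pass.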

3. Elastic Agent Builder – turning search into conversation

The third highlight for me was Elastic Agent Builder, which aims to simplify the creation of AI agents that interact with enterprise data.

There’s a lot of hype around agents at the moment, but what Elastic showed was actually quite pragmatic.

The idea is to provide developers with a framework for building agents that can combine multiple capabilities – including semantic search, hybrid search, structured queries and external tools – to answer more complex questions.

Instead of simply retrieving documents, these agents can reason across datasets and generate answers, effectively turning a traditional search interface into a more conversational experience.
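The core pattern is a loop that dispatches sub-questions to different tools and gathers the evidence. The sketch below is illustrative only; the tool names and plan format are invented for the example and are not Agent Builder's actual API:

```python
from typing import Callable

# Hypothetical tool registry; real tools would hit Elasticsearch, a SQL store, etc.
TOOLS: dict[str, Callable[[str], str]] = {
    "semantic_search": lambda q: f"[semantic results for '{q}']",
    "structured_query": lambda q: f"[rows matching '{q}']",
}

def run_agent(question: str, plan: list[tuple[str, str]]) -> str:
    """Execute a simple plan: call each tool in turn and combine the evidence."""
    evidence = []
    for tool_name, tool_input in plan:
        result = TOOLS[tool_name](tool_input)
        evidence.append(f"{tool_name}: {result}")
    # A real agent would hand this evidence to an LLM to synthesise a final answer.
    return " | ".join(evidence)

answer = run_agent(
    "Which open roles mention Elasticsearch?",
    plan=[
        ("structured_query", "status = open"),
        ("semantic_search", "roles mentioning Elasticsearch"),
    ],
)
```

In a full agent the plan itself would be generated by a model rather than hard-coded, but the tool-dispatch skeleton stays the same.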

For organisations exploring AI assistants or knowledge agents, this could be a useful way to move toward conversational discovery without having to build an entire agent framework from scratch.

Other highlights from the day

In addition to those three takeaways, there were several other interesting sessions throughout the day.

A presentation from Reed.co.uk explored how their job platform has evolved from traditional keyword search toward intent-based retrieval, combining semantic search and hybrid techniques to improve job matching.

There was also a useful migration story describing the journey from self-managed clusters to Elastic Cloud, highlighting how organisations are simplifying infrastructure and scaling more easily in managed environments.

And finally, the session on stateless Elasticsearch architectures gave a glimpse of how Elastic is thinking about future scalability – separating compute and storage to improve performance and flexibility in cloud deployments.

Final thoughts

Events like Elastic{ON} are always a useful way to see how the search ecosystem is evolving.

What stood out to me this time was how clearly the conversation has shifted. Elasticsearch is no longer being positioned simply as a search engine, but increasingly as a platform for building AI-powered applications.

Vectors, embeddings, agents and retrieval are now all part of the same story.

And if there’s one underlying theme across all of it, it’s this:

Good AI still depends on good search.

– Martin

Crowd attending Elastic{ON} London 2026
An Elastic Cloud migration story at Elastic{ON} London 2026
