Why Lakebase Signals the Next Architectural Shift

I was in the room at the Databricks Summit in San Francisco when Lakebase was first announced. At the time, it felt bold, almost provocative, to talk about operational databases as ephemeral, serverless infrastructure running directly on the lake.

Now, Lakebase PostgreSQL is generally available on AWS and in beta on Azure, and the significance of that moment is much clearer. This isn’t just a new Databricks product reaching GA; it’s a signal that databases themselves must adapt to the realities of an AI-native world.

Lakebase isn’t simply “managed Postgres on Databricks.” It represents a fundamental shift in how databases are designed for an environment where applications, environments, and even databases are created and destroyed programmatically, often by agents rather than humans.

In the same way the lakehouse reshaped analytics by decoupling storage and compute, Lakebase applies that principle to OLTP, turning databases into lightweight, scalable compute running on open data lake storage. In the context of AI, especially agentic systems, that changes everything.

If adaptability is destiny in the age of AI, then ephemeral infrastructure is the enabler, and Lakebase is Databricks making that philosophy real for operational databases.

The real problem isn’t Postgres, it’s persistence

Most teams don’t struggle because Postgres is bad. They struggle because traditional databases assume permanence.

They assume:

  • fixed capacity
  • long-lived instances
  • careful human oversight
  • DBAs managing risk by limiting change

That model worked when applications were few, releases were slow, and infrastructure changed quarterly.

It breaks completely when:

  • applications are generated faster than humans can reason about them
  • environments are cloned continuously for testing, experimentation, and agents
  • AI systems need live, transactional state, not just historical analytics

In short: shared, static database infrastructure becomes the bottleneck.
Not compute.
Not models.
Not developers.
The database.

Lakebase is not “managed Postgres”; it’s a category shift

Lakebase introduces a fundamentally different mental model.

Instead of treating databases as heavyweight, carefully curated infrastructure, it treats them as lightweight compute running on top of open data lake storage.

  • Storage lives in the lake, in open formats
  • Compute spins up when needed
  • Scales instantly
  • Scales to zero when idle

Postgres remains fully compatible, but it is no longer the centre of gravity.

This is the same architectural leap the lakehouse made for analytics:

  • Decouple storage from compute → unlock scale, flexibility, and new economics.
  • Lakebase applies that principle to OLTP.

And in the context of AI, that isn’t a nice-to-have. It’s essential.

Why this matters specifically for AI and agents

AI doesn’t just change what we build. It changes the shape of systems.

We’re already seeing:

  • internal tools replacing SaaS
  • rapid “vibe coding” of applications
  • agent-driven workflows provisioning infrastructure programmatically

That future simply does not work if every database requires:

  • manual provisioning
  • capacity planning
  • DBA oversight
  • separate governance models

You cannot hire a DBA for every agent-generated app.

Lakebase makes a different assumption:

Databases should be cheap, fast, governed, and disposable.

That enables:

  • real-time feature serving directly on operational data
  • persistent memory for agents that stays consistent with the lakehouse
  • embedded analytics without ETL or fragile sync pipelines

Operational, analytical, and AI workloads finally live on the same foundation.
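The "disposable database" pattern above can be sketched in a few lines. This is a toy lifecycle sketch only: Lakebase instances are Postgres-compatible and provisioned by the platform, but Python's built-in in-memory SQLite is used here as a stand-in so the pattern is runnable anywhere. The function names and the agent task itself are invented for illustration.

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def ephemeral_db():
    """Stand-in for an ephemeral, per-task database.

    Real Lakebase instances are Postgres on lake storage; SQLite
    in-memory is used here purely to make the lifecycle runnable.
    """
    conn = sqlite3.connect(":memory:")  # "spin up": exists only for this task
    try:
        yield conn
    finally:
        conn.close()  # "scale to zero": no idle instance left behind

def run_agent_task(observations):
    """Each agent task gets isolated transactional state, then disposes of it."""
    with ephemeral_db() as db:
        db.execute("CREATE TABLE memory (step INTEGER, note TEXT)")
        db.executemany("INSERT INTO memory VALUES (?, ?)",
                       list(enumerate(observations)))
        db.commit()
        # The agent can query its own live, transactional state mid-task.
        (count,) = db.execute("SELECT COUNT(*) FROM memory").fetchone()
        return count

print(run_agent_task(["fetch data", "summarise", "act"]))  # → 3
```

The point of the pattern is that nothing persists after the task: no capacity to plan, no instance to babysit, no silo left behind.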

Speed is a consequence, not the point

Yes, the delivery metrics are impressive:

  • weeks reduced to days
  • months reduced to weeks
  • massive reductions in repo sprawl and pipeline maintenance

But speed is the outcome, not the innovation.

The real breakthrough is the removal of two structural bottlenecks:

1. cloning databases for development and testing
2. maintaining ETL just to keep operational and analytical data aligned

When every write is immediately available for analytics, without duplication, teams stop fighting their own architecture.
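Why is cloning cheap once storage is decoupled from compute? Because a clone shares the parent's data until something changes. The toy copy-on-write sketch below illustrates the idea; the class and its methods are invented for illustration and are not Lakebase's API.

```python
class Branch:
    """Toy copy-on-write branch: reads fall through to the parent,
    writes stay local. Illustrates why lake-backed database clones
    are cheap to create: nothing is copied until it changes.
    (Illustrative only; not the Lakebase API.)"""

    def __init__(self, parent=None):
        self.parent = parent
        self.local = {}  # only the keys this branch has written

    def read(self, key):
        if key in self.local:
            return self.local[key]
        if self.parent is not None:
            return self.parent.read(key)
        raise KeyError(key)

    def write(self, key, value):
        self.local[key] = value  # the parent is never touched

prod = Branch()
prod.write("orders", 100)

dev = Branch(parent=prod)   # "clone": O(1), no data copied
dev.write("orders", 999)    # experiment freely

print(dev.read("orders"))   # 999 — the branch sees its own write
print(prod.read("orders"))  # 100 — production is untouched
```

Creating the dev branch costs nothing up front, which is exactly what makes per-developer, per-test, or per-agent environments practical.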

Governance becomes an enabler, not a brake

One of the most under-appreciated aspects of Lakebase is where governance lives.

Because operational databases sit on the same platform:

  • access control is unified
  • auditing is consistent
  • compliance is inherited, not bolted on

For AI systems, this matters deeply.

Autonomous or semi-autonomous systems must act on:

  • trusted data
  • governed access
  • auditable decisions

Lakebase doesn’t remove governance. It bakes it into the default path. That’s the only way AI scales safely.

Database management becomes a data problem

This is where the shift becomes truly profound.

With Lakebase, database operations themselves become analytics.

  • Telemetry, metadata, and performance signals land in the lake.
  • They can be queried with SQL.
  • Analysed with ML.
  • Interpreted by agents.

Instead of asking:

“Which DBA do I call?”
The system asks:
“Which behaviour is an outlier?”

That’s how you manage thousands or millions of ephemeral databases.
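"Which behaviour is an outlier?" is an ordinary analytics question once telemetry lands in the lake. As a minimal sketch, here is a z-score check over fleet-wide latency metrics, the kind of query an agent could run instead of paging a DBA. The instance names, metric, and values are invented for illustration.

```python
from statistics import mean, stdev

def outlier_instances(latencies_ms, threshold=3.0):
    """Flag database instances whose latency is a statistical outlier.

    A toy z-score check over per-instance telemetry; real fleet
    management would use richer signals, but the shape is the same:
    database operations answered as a data problem.
    """
    mu = mean(latencies_ms.values())
    sigma = stdev(latencies_ms.values())
    return {name for name, v in latencies_ms.items()
            if sigma > 0 and abs(v - mu) / sigma > threshold}

# Synthetic fleet: 100 instances with similar latency, one misbehaving.
fleet = {f"db-{i:03d}": 12.0 + (i % 5) * 0.5 for i in range(100)}
fleet["db-042"] = 480.0

print(outlier_instances(fleet))  # → {'db-042'}
```

No human needed to know that db-042 existed; the anomaly surfaced from the data.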

What this enables in practice: real-world use cases

This architecture unlocks patterns that were previously impractical or too expensive:

  • Internal AI applications with their own isolated transactional state
  • Agent-driven workflows that spin up short-lived databases per task or experiment
  • Modernisation of legacy desktop and SQL Server estates without parallel stacks
  • Operational dashboards and decision systems built directly on live data
  • Product experimentation without cloning, copying, or risk to production

In all cases, the same theme applies:

infrastructure adapts to change, instead of resisting it.

From my perspective as a CDO

As a CDO, the most important question I ask of any platform is not how powerful it is, but what behaviours it enables by default.

Lakebase changes behaviour.

It makes it:

  • safer to experiment
  • cheaper to build
  • easier to govern
  • harder to create silos

Most importantly, it aligns infrastructure with the reality that change is now continuous, not episodic.

AI programmes don’t fail because teams lack ambition. They fail because the underlying systems were designed for a slower world.

Lakebase acknowledges that reality and designs for it.

Adaptability didn’t stop at mindset; it reached the database

“Adaptability is destiny” is often framed as a cultural idea.

Lakebase proves it’s also an architectural one.

Ephemeral infrastructure isn’t a trend. It’s the only model that survives contact with AI. And databases had to change too.

Bianca Stratulat
Chief Data Officer, Unifeye