Building A Space For Thinking


Over the past year, AI researchers have become obsessed with a phrase:

World models.

You see it everywhere:

  • Agents navigating Minecraft.
  • Simulated physics environments.
  • Virtual cities where AI learns to reason about space and cause.

Even serious money is flowing into the idea. Yesterday, Yann LeCun’s new company raised $1.03 billion to build world models. That’s a lot of zeros for something that sounds suspiciously like… a video game engine for intelligence. But the core idea is actually right:

If agents are going to operate autonomously, they need something more than prompts.

They need a world to reason inside. The problem is that most world-model discussions focus on physical worlds. But much of the work humans actually do is not physical; it's cognitive.

  • Strategy
  • Creativity
  • Product design
  • Transformation
  • Narrative building

These worlds are not made of objects and gravity. Yes, there is a 'physics' to these kinds of problems, but it is made of priorities, constraints, ideas, and meaning. Which leads to a slightly uncomfortable hypothesis.

Context alone is not enough. An agent also needs to understand how its world works.

Context Is Only Half the Game

Most AI systems today operate on a single trick:

Stuff enough context into the prompt and hope the model figures it out.

This works surprisingly well for small tasks. But the moment you move into serious thinking work — strategy papers, concepts, analytical reports — the system collapses into improvisation. Because context answers only one question:

What exists in the world?

But agents also need to know:

  • How the world behaves
  • What rules govern it
  • What entities exist
  • What their role is inside it

In other words:

They need a world model, not a context dump.

This is where things get interesting.

Because if you build an artificial world, you get to define the rules. And that means you can optimize the world for the kind of thinking you want to happen inside it.

So we built one.

We call it the Whitespace.

The Whitespace: A World for Thinking

The Whitespace is not a document, not another project workspace, certainly not a chat thread. It’s an artificial cognitive environment designed for strategy, creativity, and transformation. And it runs on three structural pillars — what we call the Three C’s:

Concept. Context. Constitution.

Together they form a domain-centric world model. Not a physics simulation, but a thinking substrate.

Context: The Fabric of the World

The first layer is the Context Fabric. This is where the world’s raw information lives. But instead of throwing everything into prompts, the Whitespace structures context into meaningful categories:

  • priorities
  • constraints
  • themes
  • domains
  • user context

Each context is processed into a distilled representation before it becomes part of the fabric. This means our agents don't operate on messy documents, but on structured meaning. The result is a living map of the environment: a world surface agents can orient themselves on.
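To make the idea concrete, here is a minimal sketch of a Context Fabric. The five categories come from the list above; everything else (class names, fields, and the placeholder distillation step) is an illustrative assumption, not our actual implementation.

```python
# Hypothetical sketch of the Context Fabric. The category names are from
# the article; the class shapes and the distill() step are assumptions.
from dataclasses import dataclass, field

CATEGORIES = {"priority", "constraint", "theme", "domain", "user_context"}

@dataclass
class ContextEntry:
    category: str        # one of CATEGORIES
    source: str          # raw input, e.g. a pasted document
    distilled: str = ""  # structured meaning, filled in by distill()

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

def distill(entry: ContextEntry) -> ContextEntry:
    """Stand-in for the distillation step: reduce a raw document to a
    compact representation before it joins the fabric."""
    entry.distilled = " ".join(entry.source.split())[:280]  # placeholder summarizer
    return entry

@dataclass
class ContextFabric:
    entries: list[ContextEntry] = field(default_factory=list)

    def add(self, category: str, source: str) -> None:
        # Every entry is distilled on the way in; raw text never joins the fabric.
        self.entries.append(distill(ContextEntry(category, source)))

    def by_category(self, category: str) -> list[ContextEntry]:
        return [e for e in self.entries if e.category == category]
```

The important design point is the gate in `add`: an agent querying the fabric only ever sees distilled entries, already sorted into a category it knows how to interpret.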

Concept: The World Reflects on Itself

But a world that only accumulates information becomes a library. Don’t get us wrong — libraries are useful.

But they don’t think.

That’s why the Whitespace includes the second layer: the Concept. The Concept is a versioned interpretation of what the work actually is. It answers questions like:

  • What are we building?
  • What patterns are emerging?
  • What is the strategic direction?

Unlike context, which stores facts, the Concept stores interpretation.

And it evolves.

Each revision is a new snapshot of understanding. Over time, the world doesn’t just collect knowledge. It develops perspective.
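The versioning described above can be sketched in a few lines. The three questions are from the article; the snapshot mechanics (append-only revisions, a `current` pointer) are an assumption about one reasonable way to build it.

```python
# Hypothetical sketch of the versioned Concept layer: each revision is an
# immutable snapshot of interpretation, and old snapshots are never replaced.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ConceptRevision:
    version: int
    what_we_are_building: str
    emerging_patterns: str
    strategic_direction: str

@dataclass
class Concept:
    revisions: list[ConceptRevision] = field(default_factory=list)

    def revise(self, building: str, patterns: str, direction: str) -> ConceptRevision:
        rev = ConceptRevision(len(self.revisions) + 1, building, patterns, direction)
        self.revisions.append(rev)  # appended, never overwritten
        return rev

    @property
    def current(self) -> ConceptRevision:
        return self.revisions[-1]
```

Because revisions are frozen and append-only, the world keeps its full history of understanding; an agent can read `current` for orientation or walk `revisions` to see how the perspective developed.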

Constitution: The Agent Understands Itself

Now we reach the third layer.

And arguably the most important one.

Because having a world model is still not enough; an agent must also understand who it is inside that world.

This is the role of the Constitution. Technically speaking, the Constitution is just a JSON object. Conceptually, it’s the identity layer of the agent.

The Constitution tells the agent:

  • what it is
  • what it can do
  • what tools it can use
  • what entities exist in the environment

We call that last piece the taxonomy: artifacts, ideas, contexts, tools and skills, other agents. The Constitution defines the ecosystem of the Whitespace and the agent's relationship to it. In other words: the agent doesn't just know the world, it also knows how it exists within that world.
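Since the article says the Constitution is "just a JSON object", a sketch is easy to give. The four concerns (identity, capabilities, tools, taxonomy) and the taxonomy entries are from the text; every key name and value below is illustrative, not our actual schema.

```python
# Hypothetical sketch of a Constitution. All keys and values are
# illustrative; only the four concerns and the taxonomy list come
# from the article.
import json

constitution = {
    "identity": "strategy agent inside the Whitespace",      # what it is
    "capabilities": ["draft concepts", "revise concepts"],   # what it can do
    "tools": ["context_fabric.search", "concept.revise"],    # what tools it can use
    "taxonomy": [                                            # entities in its world
        "artifacts", "ideas", "contexts", "tools and skills", "other agents",
    ],
}

# Because it is plain JSON, the Constitution round-trips cleanly: it can be
# serialized into a system prompt or stored alongside the agent.
serialized = json.dumps(constitution, indent=2)
restored = json.loads(serialized)
```

The appeal of keeping it as plain JSON is exactly this round-trip: the same object that configures the agent can be inspected, versioned, and handed to the model verbatim.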

Why Artificial Worlds Are Actually Easier

There’s a reason world-model research is exploding: Understanding the real world is incredibly hard. Physics. Society. Economics. Culture. It’s messy. But artificial worlds are different. We design the rules. Which means we can create worlds that are optimized for a specific kind of intelligence.

The Whitespace is one of those worlds. A world optimized not for physics.

But for thinking.
