More Human or … More Useful?

The agent discourse is starting to sound like a gym-bro conversation.

“Bro, your loop is too small.”

“Bro, your context window isn’t stacked enough.”

“Bro, add memory. No —  m o r e  memory.”

“Bro, agent rules don’t matter.”

“Bro, recursive language models.”

And sure—some of that is real engineering. Miessler’s “the loop is too small” is a fair provocation: shallow tool-call loops do cap what an agent can do. Recursive Language Models are also legitimately interesting — an inference-time pattern for handling inputs far beyond a model’s native context window by treating the prompt as an “environment” you can inspect and process recursively.
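To make that concrete, here is a deliberately naive sketch of the general shape in Python. It is not the actual RLM technique, just a divide-and-recurse skeleton where call_model and CONTEXT_LIMIT stand in for your own inference call and budget.

def call_model(prompt: str) -> str:
    """Placeholder for a real inference call; returns a fake answer here."""
    return f"<answer derived from {len(prompt)} chars>"

CONTEXT_LIMIT = 4_000  # assumed per-call character budget, not a real model limit

def recursive_answer(question: str, environment: str) -> str:
    """Answer a question over input far larger than one context window."""
    if len(environment) <= CONTEXT_LIMIT:
        return call_model(f"{question}\n\n{environment}")
    # Too big for one call: split the "environment", recurse on each half,
    # then synthesize the partial answers in a final call.
    mid = len(environment) // 2
    partials = [
        recursive_answer(question, environment[:mid]),
        recursive_answer(question, environment[mid:]),
    ]
    return call_model(f"{question}\n\nPartial findings:\n" + "\n".join(partials))

print(recursive_answer("What changed in this changelog?", "x" * 10_000))

Nothing exotic: ordinary recursion over a too-big input, with an inference call at the leaves.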

But here’s the problem: a growing chunk of the discourse is no longer about solving problems. It’s about reenacting our folk theories of “thinking” in public—and calling it progress.

If you squint, you can already see the likely destination. Not AGI, but AHI: Artificial Humanoid Intelligence, the mediocre mess multiplied. A swarm of synthetic coworkers reproducing our worst habits at scale: overconfident, under-specified, distractible, endlessly “reflecting” instead of shipping. Not because the models are evil. Because we keep using human-like cognition as the spec rather than outcomes.

And to be clear: “more human” is not the same as “more useful.” A forklift doesn’t get better by developing feelings about pallets.

The obsession with “agent-ness” is becoming a hobby

Memory. Context. Loop size. Rules. Reflection. Recursion.

These are not products. They’re ingredients. And we’ve fallen in love with the ingredients because they’re measurable, discussable, and tweetable.

They also create an infinite runway for bike-shedding. If the agent fails, the diagnosis is always the same: “needs more context,” “needs better memory,” “needs a bigger loop.”

Convenient — because it turns every failure into an invitation to build a bigger “mind,” instead of asking the humiliating question:

What problem are we actually solving?

A lot of agent builders are inventing solutions detached from any defined problem: designing elaborate cognitive scaffolds for tasks that were never constrained, never modeled, never decomposed, and never given domain primitives.

It’s like trying to build a universal robot hand … to butter toast.

Our working hypothesis: Utilligence beats AGI

At Apes on fire, we’re not allergic to big ideas. We’re just allergic to confusing vibes with value.

Our bet is Utilitarian Intelligence — Utilligence — the unsexy kind of “smart” that actually works: systems that reliably transform inputs into outcomes inside a constrained problem space. (Yes, we’re aware that naming things is half the job.)

If you want “real agents,” start where software has always started:

Classic systems design. State design. Architecture. Domain-centric applications.

Not “Claude Coworker for Everything.” More like: “The Excel for this.” “The Photoshop for that.” “The Figma for this workflow.”

The future isn’t one mega-agent that roleplays your executive assistant. It’s a fleet of problem-shaped tools that feel inevitable once you use them — because their primitives match the domain they are operating in.

Stop asking the model to be an operating system

LLMs are incredible at what they’re good at: stochastic synthesis, pattern completion, recombination, compression, ideation, drafting, translation across representations.

They are not inherently good at being your cognitive scaffolding. In the modern technology stack, a model is much closer to a processor than to an operating system.

So instead of building artificial people, we’re building an exoskeleton for human thinking: a structured environment where the human stays the decider and the model stays the probabilistic engine. The scaffolding lives in the system — state machines, constraints, domain objects, evaluation gates, deterministic renderers, auditability.

In other words: let the model do the fuzzy parts. Let the product do the responsible parts.
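A minimal sketch of what that division of labor looks like, under our own toy assumptions (Ticket, passes_gate, and call_model are made-up placeholders, not a real API): the model only drafts, a deterministic gate checks the draft against domain rules, an explicit state machine records every transition, and nothing ships until a human says so.

from dataclasses import dataclass, field
from enum import Enum, auto

class State(Enum):
    DRAFTING = auto()
    AWAITING_REVIEW = auto()
    APPROVED = auto()
    REJECTED = auto()

@dataclass
class Ticket:
    request: str
    draft: str = ""
    state: State = State.DRAFTING
    audit_log: list = field(default_factory=list)

def call_model(prompt: str) -> str:
    """Placeholder for the probabilistic engine: it only drafts."""
    return f"Draft reply for: {prompt}"

def passes_gate(draft: str) -> bool:
    """Deterministic evaluation gate: domain rules, not vibes."""
    return bool(draft.strip()) and len(draft) < 2_000

def advance(ticket: Ticket, human_approves=None) -> Ticket:
    """Move the ticket through an explicit state machine; the human decides."""
    if ticket.state is State.DRAFTING:
        ticket.draft = call_model(ticket.request)
        ticket.audit_log.append("model drafted")
        ticket.state = State.AWAITING_REVIEW if passes_gate(ticket.draft) else State.REJECTED
    elif ticket.state is State.AWAITING_REVIEW and human_approves is not None:
        ticket.state = State.APPROVED if human_approves else State.REJECTED
        ticket.audit_log.append(f"human decided: {ticket.state.name}")
    return ticket

ticket = advance(Ticket(request="Summarize this invoice dispute"))
ticket = advance(ticket, human_approves=True)
print(ticket.state, ticket.audit_log)

The probabilistic part is one line; everything that makes the system trustworthy is ordinary, testable software.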

If we must learn from humans, let’s learn properly

Here’s the irony: the same crowd racing to build “human-like” agent cognition often has the loosest understanding of human cognition.

Before we try to manufacture artificial selves, maybe we should reread the great observers of the human condition. Kahneman’s Thinking, Fast and Slow is still a brutal reminder that “how we think” is not a very flattering blueprint. We are bias engines with a narrative generator strapped on top. Is that what we want an artificial “problem solver” to mimic?

Maybe not. Maybe the move is not: “let’s copy humans harder.” Maybe the move is: define the problem first, then build the machine that solves it. 

Because “more of us” isn’t automatically the solution. Sometimes it’s just… more of the problem. So instead of Artificial Humanoid Intelligence, let’s work on Utilligence: intelligence with a job description.
