Your Agent Is Just a Cron Job With a God Complex

2026 has already been dubbed the “Year of the Agent,” and not just by LinkedIn hype posts and X threads. A viral tool called OpenClaw (previously Moltbot/Clawdbot) has been making headlines for autonomously managing digital lives and spawning a full-on AI-only social network called Moltbook, where bots post, debate, and mimic social behavior without humans directly involved. And now, you can even follow the first AI journalists on their own Substack.
Meanwhile, Anthropic’s Claude Code rolled out longer-running session tasks that can coordinate multi-step workflows across time. And in cybersecurity circles, researchers have been dissecting Moltbook’s rapid rise, along with a major security flaw that exposed agent credentials, raising fresh questions about what “autonomy” really means in practice.
Agents Are Software (And Why “Human” Is a Terrible Default)
Here’s the truth nobody’s selling you: agents are software. Period.
They run code. They follow control flow. They execute policies, read and write state, call tools, emit outputs. There is nothing mystical happening here. But somewhere along the way, we started lying to ourselves.
We stopped saying “software” and started saying “agent.”
We stopped saying “program” and started saying “coworker.”
We stopped saying “automation” and started saying “autonomy.”
And with that shift, we quietly imported a dangerous assumption:
If it acts like a human, it must be better.
Let’s pause right there.
Humans are incredible.
Humans are creative.
Humans are adaptable.
Humans are also:
- inconsistent
- emotional
- biased
- forgetful
- reactive
- non-deterministic
- sometimes just… having a bad day
If we genuinely want agents to “act like humans,” then we don’t just get empathy and creativity — we also inherit bad vibes, erratic behavior, partial understanding, and mistakes.
Not because the software is bad. But because “human” is not an optimization target.
It’s a compromise.
The Hard Problems Are Human
Your “AI agent” is fundamentally a cron job with opinions — a while-loop that can hallucinate. Your agent doesn’t “decide” to do anything meaningful. It follows a probability distribution shaped by training data, system prompts, and temperature settings. When it succeeds, it’s because a human somewhere made good choices about what to optimize for. When it fails, it’s usually because those choices were implicit, unexamined, or wrong.
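If that framing sounds glib, strip away the branding and look at the control flow. Here is a minimal sketch in Python; the model client, tool registry, and step budget are invented stand-ins rather than any particular framework’s API, but the shape is the whole point:

```python
# A minimal sketch of an "agent": a scheduler plus a loop around a model call.
# The model client, tool registry, and step budget below are hypothetical
# stand-ins, not any real framework's API. Every consequential choice (which
# tools exist, how many steps, what counts as "done") was made by a human
# before the loop ever ran.

MAX_STEPS = 10  # a human-chosen bound, not the agent's "judgment"

TOOLS = {
    "search_logs": lambda query: f"(log lines matching {query!r})",
}

def call_model(messages):
    """Stand-in for an LLM call. A real client would return either a tool
    request or a final answer, following a schema a human designed."""
    return {"type": "final", "content": "Nothing unusual in last night's logs."}

def run_agent(task):
    messages = [{"role": "user", "content": task}]
    for _ in range(MAX_STEPS):
        reply = call_model(messages)
        if reply["type"] == "tool_call" and reply.get("name") in TOOLS:
            result = TOOLS[reply["name"]](reply.get("args", ""))
            messages.append({"role": "tool", "content": result})
            continue  # loop again with the tool result in context
        return reply["content"]  # the model claims it is done
    return "Step budget exhausted; hand the task back to a human."

print(run_agent("Check last night's deploy logs for errors."))
```

Swap in a real model client and the structure doesn’t change: a loop, a bounded step count, and a dispatch table somebody wrote by hand.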
When we build agent systems, the industry loves to obsess over the easy stuff. Which LLM? What vector database? How many tools should it have access to? Should we use LangChain or roll our own framework?
This is intellectual theater. The hard problems aren’t technical — they’re human:
- Deciding what actually matters
- Judging quality when there’s no ground truth
- Choosing between legitimate trade-offs
- Setting direction when the path isn’t clear
Here’s the uncomfortable truth we discovered by actually running an always-on agent 24/7:
- You don’t use it.
- You manage it.
- You onboard it.
- You train it.
- You correct it.
- You set expectations.
- You accept blind spots.
That’s not a tool relationship. That’s leadership. And leadership is cognitively expensive.
People already manage:
- coworkers
- managers
- Slack threads
- Jira tickets
- family dynamics
- their own internal chaos
The last thing they want is another quasi-human entity that needs supervision.
The industry calls this progress.
Most users call this work.
Autonomy Sounds Great – Until You Ask “For Whom?”
Let’s be precise about autonomy because the word has become meaningless through overuse.
Real autonomy is delegated execution within bounded constraints. It’s your agent retrying a failed job without waking you up at 3 AM. It’s polling a data source, summarizing logs, or surfacing anomalies for human review. The human set the goal. The human defined the boundaries. The software executed within those guardrails.
Fake autonomy is the absence of human intent dressed up as intelligence. It’s when your system makes choices nobody asked for, optimizes metrics nobody validated, or “decides” based on reasoning nobody can inspect. Fake autonomy isn’t agentic behavior — it’s organizational negligence.
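To make that distinction concrete, here is what real autonomy tends to look like in code. It’s a sketch, with hypothetical function names, but notice that every boundary is an explicit, human-set value:

```python
import time

# A sketch of delegated execution within bounded constraints. The human set
# the goal and the limits; the software only acts inside them. fetch_report
# and notify_human are hypothetical placeholders for your own job and paging
# hook.

MAX_RETRIES = 3        # human-chosen boundary
BACKOFF_SECONDS = 30   # human-chosen boundary

def fetch_report():
    """Placeholder for the delegated job: poll a source, summarize logs, etc."""
    raise ConnectionError("upstream unavailable")

def notify_human(message):
    """Placeholder for whatever actually reaches a person (pager, Slack, email)."""
    print(f"[escalation] {message}")

def run_with_bounds():
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return fetch_report()  # real autonomy: retry without waking anyone
        except ConnectionError as err:
            if attempt == MAX_RETRIES:
                # Boundary reached: stop acting and surface it for human review.
                notify_human(f"fetch_report failed {attempt} times: {err}")
                return None
            time.sleep(BACKOFF_SECONDS)

run_with_bounds()
```

Nothing in that loop “decides” anything. It executes a policy a person wrote down, and, crucially, it knows exactly when to stop.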
On paper, autonomy sounds incredible:
- General problem solving
- Self-directed behavior
- Minimal human involvement
- Agents acting “on your behalf”
In practice, the most “autonomous” demos we keep seeing are… revealing.
- “It can sort through 10,000 emails!”
- “We put 1,000 agents into a social network and watched what happened!”
Really?
That’s the bar?
We already failed at email.
We already failed at social networks.
We already built systems that amplify bias, conflict, and misinformation — with humans in the loop.
So here’s the question nobody wants to answer:
Why would software built in our likeness — with our biases and blind spots — perform better in those same systems?
If anything, it will fail faster.
Autonomy without judgment is just acceleration.
General problem solving without values is just noise.
The Real Black Box
Here’s where things get subtle: Non-determinism isn’t actually the scary part. Humans are non-deterministic too. The real problem is role ambiguity.
Is this thing:
- a tool?
- a coworker?
- a service?
- a witness?
- something that remembers me?
- something that judges me?
Humans are excellent at social calibration when roles are clear. We’re terrible when they aren’t. That uncanny valley people feel with agents isn’t technical; it’s relational. We didn’t solve human unpredictability with explainability.
We solved it with:
- social contracts
- relationship scopes
- interpersonal rituals
- bounded responsibility
- forgiveness
Trust isn’t built by saying “look how smart this is.”
Trust is built by knowing what it will not do.
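In engineering terms, that kind of trust isn’t a vibe; it’s a boundary you can read. A small sketch, with invented tool names, of what “knowing what it will not do” looks like when it’s written down instead of promised:

```python
# A sketch of trust as an explicit, inspectable boundary. The tool names and
# schema are invented for illustration; the point is that "what it will not
# do" lives in reviewable, default-deny data, not in a persona.

AGENT_SCOPE = {
    "allowed": ["read_logs", "summarize", "open_ticket"],
    "needs_human_approval": ["close_ticket"],
    "forbidden": ["send_email", "modify_prod_config", "spend_money"],
}

def authorize(tool_name):
    if tool_name in AGENT_SCOPE["forbidden"]:
        return "refuse"
    if tool_name in AGENT_SCOPE["needs_human_approval"]:
        return "ask_human"
    if tool_name in AGENT_SCOPE["allowed"]:
        return "allow"
    return "refuse"  # default-deny: anything unlisted is out of scope

assert authorize("read_logs") == "allow"
assert authorize("close_ticket") == "ask_human"
assert authorize("send_email") == "refuse"
assert authorize("anything_else") == "refuse"
```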
Stop Worshipping Your Code
We name our agents. We give them personas. We say “the agent thinks” or “the agent wants” or “the agent decided.” This isn’t harmless fun — it’s a cognitive trap.
We’re so eager to recreate ourselves in software before we’ve even agreed that we’re a good reference design.
Maybe the future isn’t:
- more autonomous agents
- more generalized problem solvers
- more human-like behavior
Maybe it’s something quieter, sharper, and more disciplined. Software that:
- is explicit about its limits
- is boring in the right ways
- makes human judgment clearer, not optional
- optimizes for intent, not imitation
Agents aren’t creatures. They’re tools with loops. Forgetting that is how you worship your own code instead of using it. It’s how you abdicate responsibility for decisions that should have human oversight. It’s how you end up with systems that “surprise” you in production in ways that aren’t surprising at all — they’re just unexamined.
The Boring Future We Need
2026 won’t be the year of the agent. It’ll be the year we finally stop pretending software is sentient and start building systems we can actually understand.
The best “agentic” systems won’t feel agentic at all. They’ll feel obvious. They’ll feel boring — in all the best ways. They’ll feel like what they are: well-designed software that does exactly what it was asked to do, shows its work, and knows when to ask for help.
Everything else is just a cron job with delusions of grandeur.