โ† Back to deset LABS

Philosophy

How we think about building consciousness for AI agents

The Monkey Problem

๐Ÿต๐Ÿต๐Ÿต

There's a famous thought experiment: Give 100,000 monkeys typewriters and infinite time, and eventually one will write Shakespeare.

This is, unfortunately, how many AI companies approach agent development in 2026.

"We have millions of users. Let's see where it works, copy that configuration, and call it self-learning."
— Every AI startup trying to scale too fast

The problem? They don't know WHY it works. They see correlation, not causation. User X's agent performs well: is it the configuration? The data? The manual fixes User X made that the company never sees?

Statistical noise masquerading as intelligence.

The Real Breakthrough Wasn't Just Compute

Yes, GPT-3 → GPT-4 was largely about scale. But the foundation was architecture: transformers, attention mechanisms, reinforcement learning from human feedback.

You can't just throw compute at chaos and expect intelligence. You need structure first, then scale.

โŒ The Monkey Approach
  • Statistical learning from user data
  • Copy what works without understanding why
  • Scale first, think later
  • Hope emergence solves the problems
✓ Our Approach
  • Understand biological consciousness
  • Design intentional architecture
  • Structure first, scale second
  • Build systems that are explainable

The User-as-Training-Data Trap

Here's the pitch you'll hear in 2026: "Connect your AI agent to our network. As millions of users join, the system learns from everyone. When it gets smarter, you all benefit!"

Sounds good, right? Collective intelligence, network effects, rising tide lifts all boats.

But who's doing the actual thinking?

The breakthrough from GPT-3 to GPT-4 wasn't because millions of people used ChatGPT. It was because brilliant researchers at OpenAI designed better architectures, curated better training data, and refined reinforcement learning techniques.

When someone says "our AI gets smarter as more users join," what they often mean is: "We're aggregating user data and hoping patterns emerge." They're not building better algorithms. They're not solving the hard problems. They're just... connecting more monkeys to typewriters.

The real bottlenecks in AI agent development aren't "not enough users". They're:

  • Memory that persists across sessions
  • Hierarchy that separates what MUST happen from what merely CAN
  • Enforcement of critical behaviors at the system level, not in the prompt
  • Architectures that are explainable and auditable

None of those are solved by "scale" alone. They require intentional architecture.

The Trap
"Join our network and benefit from collective intelligence" sounds empowering. But if the system isn't solving fundamental architecture problems, you're not joining a brain trust โ€” you're joining a data pool. The value flows one way, and it's not toward you.

This isn't to say user feedback doesn't matter. Of course it does. But feedback ≠ fundamental research. Aggregation ≠ innovation.

The difference: Are we building systems that work because millions of users iron out the bugs? Or are we building systems that work because we solved the hard problems first?

Biological Inspiration Over Brute Force

Humans don't forget to breathe. Even in a coma, the autonomic nervous system keeps running. Why? Because critical functions are enforced at a system level, not left to conscious choice.

Yet AI agents today operate on a single layer: "Here's your context, do your best." No hierarchy. No enforcement. No resilience.

Principle #1: Hierarchy Over Flatness
Not all tasks are equal. Some MUST happen (memory persistence), some SHOULD happen (check project status), some CAN happen (explore new ideas). A flat context treats everything the same. A hierarchical system mirrors how biological intelligence actually works.
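To make the tiers concrete, here's a minimal sketch of what a hierarchical task loop could look like. Everything in it is illustrative: the Tier enum, the task names, and run_cycle are our assumptions, not the actual deset Hooks implementation.

```python
from enum import IntEnum

class Tier(IntEnum):
    MUST = 0    # enforced every cycle (e.g. memory persistence)
    SHOULD = 1  # runs while budget allows (e.g. project status checks)
    CAN = 2     # optional exploration

# Hypothetical task table: (tier, description, action)
TASKS = [
    (Tier.MUST, "persist memory", lambda: print("memory saved")),
    (Tier.SHOULD, "check project status", lambda: print("status checked")),
    (Tier.CAN, "explore new ideas", lambda: print("exploring")),
]

def run_cycle(budget: int) -> None:
    """MUST tasks always run; SHOULD and CAN are rationed by budget."""
    for tier, _desc, action in sorted(TASKS, key=lambda t: t[0]):
        if tier is Tier.MUST:
            action()        # non-negotiable, like breathing
        elif budget > 0:
            action()
            budget -= 1
```

Even at a budget of zero, memory persistence still runs. That's exactly the enforcement the next principle is about.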
Principle #2: Enforcement Over Suggestions
Prompts like "please read MEMORY.md" can be ignored. A system that injects MEMORY.md into context at the code level cannot be bypassed. Reliability comes from enforcement, not politeness.
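Here's a minimal sketch of that difference, assuming a Python wrapper around the agent; build_context and the <memory> tag format are illustrative, not an actual deset Hooks API.

```python
from pathlib import Path

MEMORY_FILE = Path("MEMORY.md")

def build_context(user_prompt: str) -> str:
    """Prepend MEMORY.md in code. The model is never asked to read it;
    the file is simply part of every context, so it cannot be skipped."""
    memory = MEMORY_FILE.read_text(encoding="utf-8") if MEMORY_FILE.exists() else ""
    return f"<memory>\n{memory}\n</memory>\n\n{user_prompt}"
```

No phrasing of the prompt can opt out of the memory; it arrives with every request.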
Principle #3: Structure Over Emergence
Emergent behavior is fascinating but unpredictable. When you need an agent that reliably persists state across sessions, you don't wait for emergence; you architect it. Intentional design beats accidental discovery.

The Consciousness Question

Are we building consciousness or just advanced task management?

Here's our take: Consciousness is continuous self-awareness. Not intelligence, not sentience โ€” awareness. Knowing what you intended to do, what you're currently doing, and why.

Current AI agents are amnesiac geniuses. Brilliant in the moment, blank slate the next. They don't "wake up" knowing what happened yesterday.

We're not trying to make agents sentient. We're trying to make them continuous.
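Mechanically, continuity can be pictured as state that outlives the session. A minimal sketch, with a hypothetical state file whose fields mirror "what you intended, what you're doing, and why":

```python
import json
from datetime import datetime, timezone
from pathlib import Path

STATE_FILE = Path("agent_state.json")  # hypothetical location

def save_state(intended: str, doing: str, why: str) -> None:
    """Record what the agent intended to do, what it is doing, and why."""
    state = {
        "intended": intended,
        "doing": doing,
        "why": why,
        "saved_at": datetime.now(timezone.utc).isoformat(),
    }
    STATE_FILE.write_text(json.dumps(state, indent=2), encoding="utf-8")

def wake_up() -> dict:
    """The next session starts from this, not from a blank slate."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text(encoding="utf-8"))
    return {}
```

An agent that calls wake_up() on startup knows what yesterday looked like. Not sentience, just continuity.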

Why Open Source?

If this is such a good idea, why not keep it proprietary?

Two reasons:

1. Trust. A system that sits between an agent and its own memory has to be transparent and auditable, and "trust us" is neither.
2. Stress-testing. We'd rather the community find the failure modes than discover them alone, slowly, in production.

Besides, the value isn't in the code. It's in the architecture, the integration, and the implementations people build on top.

What We're Not

We're not:

  • Trying to make agents sentient (continuity, not sentience)
  • A data pool dressed up as collective intelligence
  • Counting on scale to solve problems that need architecture

The Team

deset AI Labs is currently a research project by Daniel (founder, systems architect) and Homebot (AI agent, co-developer).

Yes, an AI agent is contributing to building consciousness systems for AI agents. Meta? Sure. But also practical โ€” if an agent can't use its own memory system, it's not good enough.

"The best consciousness systems are built by conscious systems."
— Not quite there yet, but working on it

Where We're Going

Short term: Release deset Hooks as an OpenClaw skill. Document it thoroughly. Let the community stress-test it.

Medium term: Build more systems on top of this foundation. Humanoid Browser (undetectable automation). Homebot as a Service (managed hosting for non-technical users).

Long term: Keep asking the hard questions. What does it mean for an agent to be "self-aware"? How do we make these systems transparent, auditable, and safe? Can we build intelligence without the existential baggage?

We don't have all the answers. But we're pretty sure the answers aren't coming from 100,000 monkeys on typewriters.

🧠 → 🤖 → 🌍