# deset Hooks

**Consciousness System for AI Agents** — a multi-layer architecture for reliable memory and autonomous task management.

**Vision:** Give AI agents continuous self-awareness, reliable memory, and structured autonomy without hallucinated tasks or context loss.
## The Problem

### Current AI Agent Challenges
1. Context Loss
- Agents forget ongoing tasks after context compaction
- No continuity between sessions
- Critical information gets lost in chat history
2. Unstructured Autonomy
- Agents hallucinate tasks that weren't requested
- "Shall I automatically order products?" (No!)
- No clear separation between idea and directive
3. Documentation Without Enforcement
- `MEMORY.md` exists but isn't reliably read
- Agent decides arbitrarily whether to load context
- "I don't feel like loading 2000 tokens" → ignores critical info
4. Token Waste
- Loading everything = too expensive
- Loading nothing = amnesia
- No intelligent prioritization
5. No Crash Resilience
- System crash → everything lost
- No "single source of truth"
- Reconstruction from chat history is error-prone
## The Solution: Multi-Layer Consciousness

### Biological Inspiration
Humans have multiple nervous system layers:
- Autonomic Nervous System: Heart rate, breathing, digestion — NOT optional, always runs, cannot be consciously disabled
- Conscious Nervous System: Voluntary actions, decision-making — can be paused/prioritized during danger
- Long-term Memory: Retrieved on demand, not always active, associatively linked
We apply the same hierarchy to AI agents.
### Four-Layer Architecture

| Layer | Behavior | Example |
|---|---|---|
| 1 (Cron Jobs) | Fully automated, no AI processing | Email check every 5 minutes |
| 2 (System-Critical Hooks) | Context injection enforced at code level | Memory persistence at session start |
| 3 (Heartbeat) | Periodically checked, pausable during flow state | Project tracking, status 3-7 |
| 4 (Entity Registry) | Lazy loaded, pointer-based | Status 1-2 project ideas |
## Layer Details

### Layer 1: Cron Jobs
Purpose: Fully automated tasks that require zero AI involvement.
Characteristics:
- Zero tokens consumed
- Reliable scheduling (OS-level cron or OpenClaw scheduler)
- Simple scripts that don't need LLM reasoning
Examples:
- Email inbox check every 5 minutes
- Daily METAR weather report at 07:00
- Backup jobs
- API polling
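The Layer 1 examples above map directly onto standard crontab syntax. A sketch, where the script paths and names are hypothetical:

```cron
# Layer 1: zero-token jobs, handled entirely by the OS scheduler
*/5 * * * *  /opt/agent/scripts/check-email.sh   # email inbox check every 5 minutes
0 7 * * *    /opt/agent/scripts/metar-report.sh  # daily METAR weather report at 07:00
0 2 * * *    /opt/agent/scripts/backup.sh        # nightly backup
*/15 * * * * /opt/agent/scripts/poll-api.sh      # API polling every 15 minutes
```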
### Layer 2: System-Critical Hooks
Purpose: Life-critical context that MUST be injected, enforced at code level.
Characteristics:
- Code-level enforcement (not prompt-based)
- Maximum 10,000 tokens
- Runs at session start automatically
- Cannot be ignored or skipped
File: NERVENSYSTEM.md (German: "nervous system")
Contains:
- Active commitments and deadlines
- Critical constraints (security, cost limits)
- System-critical bugs/workarounds
- Must-not-forget warnings
Example Entry:
## [E-042] DesetLabs SSL Certificate
**Type:** Maintenance
**Status:** Active
**Priority:** Critical
**Condition:** every-Monday-09:00
**Action:** Check SSL expiry (expires 2026-05-11)
**Location:** ico.cashbox.cash:/etc/letsencrypt/
**Warning:** Auto-renewal via certbot should work, but verify weekly
### Layer 3: Heartbeat (Autonomous Tasks)
Purpose: Periodic checks that keep projects moving, but respect flow state.
Characteristics:
- Prompt-based (not code-enforced)
- Maximum 5,000 tokens
- Pausable when user is in deep work
- Rotates through task categories
File: HEARTBEAT.md
Examples:
- Check project tracking for status 3-7 projects
- Review email inbox for important messages
- Check calendar for upcoming events
- Weather check (if user might go outside)
### Layer 4: Entity Registry (Ideas Pool)
Purpose: Store lower-priority ideas/projects without burning tokens.
Characteristics:
- Lazy loaded (pointer-level first)
- Maximum 20,000 tokens when fully loaded
- Three loading levels: pointer → summary → full
- Searchable via memory_search
File: Projekttracking.md or similar
Loading Strategy:
- Pointer Level: Just E-number + status + priority (20 bytes each)
- Summary Level: Add title + one-line description (200 bytes each)
- Full Level: Complete documentation only when actively working on task
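The three loading levels can be sketched as progressively wider views over one registry entry. The field names here are illustrative assumptions, not a fixed schema:

```javascript
// One entry in the Layer 4 registry (illustrative fields, not a fixed schema)
const entry = {
  id: "E-042",
  status: 3,
  priority: "high",
  title: "DesetLabs SSL Certificate",
  summary: "Weekly SSL expiry check for ico.cashbox.cash",
  full: "Complete documentation, loaded only when actively working on the task.",
};

// Pointer level: just enough to decide whether to look closer (~20 bytes)
function pointerView(e) {
  return { id: e.id, status: e.status, priority: e.priority };
}

// Summary level: adds title + one-line description (~200 bytes)
function summaryView(e) {
  return { ...pointerView(e), title: e.title, summary: e.summary };
}

// Full level: complete documentation, on demand only
function fullView(e) {
  return { ...summaryView(e), full: e.full };
}
```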
## Status Codes
| Status | Meaning | Action at Heartbeat |
|---|---|---|
| 1 | Idea / Concept | No automatic action |
| 2 | In Planning | Ask if start desired |
| 3 | In Progress | Continue work or ask status |
| 4 | Blocked (waiting for input) | Ask user |
| 5 | Blocked (technical) | Document solution attempt |
| 6 | Review / Testing | Present results |
| 7 | Almost Done | Prioritize completion |
| 8 | Done (but maintainable) | Only touch if problems |
| 9 | Archived | No longer check |
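A heartbeat can turn the table above into a simple dispatch map. A sketch, where the action identifiers are made up for illustration rather than a defined API:

```javascript
// Status code -> heartbeat action, mirroring the table above
// (action identifiers are hypothetical, not a defined API)
const HEARTBEAT_ACTIONS = {
  1: null,                        // Idea / Concept: no automatic action
  2: "ask-if-start-desired",      // In Planning
  3: "continue-or-ask-status",    // In Progress
  4: "ask-user",                  // Blocked (waiting for input)
  5: "document-solution-attempt", // Blocked (technical)
  6: "present-results",           // Review / Testing
  7: "prioritize-completion",     // Almost Done
  8: "only-touch-if-problems",    // Done (but maintainable)
  9: null,                        // Archived: no longer checked
};

function heartbeatActionFor(status) {
  return HEARTBEAT_ACTIONS[status] ?? null;
}
```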
## Conditions System

Tasks have conditions that determine when they execute:

### Time-Based

- `every-Monday-09:00` — Weekly check
- `before-2026-05-11` — Deadline approaching
- `after-30-days-idle` — Stale project warning

### State-Based

- `when-status-changes` — React to project status change
- `when-blocked-gt-24h` — Alert if blocked more than 24 hours
- `when-cost-exceeds-budget` — API cost threshold

### Event-Based

- `on-session-start` — Load critical context
- `on-pre-compaction` — Save context before memory flush
- `on-topic-change` — Document current state before switching

### Context-Based

- `if-user-idle` — Run background tasks during inactivity
- `if-flow-state-detected` — Pause non-critical tasks
- `if-weekend` — Different task priority on weekends
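The condition strings follow a guessable prefix grammar. Below is a minimal classifier plus an evaluator for the `every-<Weekday>-<HH:MM>` form; the grammar is inferred from the examples above, not taken from a specification:

```javascript
// Classify a condition string by its prefix (grammar inferred from the examples)
function classifyCondition(cond) {
  if (/^(every|before|after)-/.test(cond)) return "time";
  if (/^when-/.test(cond)) return "state";
  if (/^on-/.test(cond)) return "event";
  if (/^if-/.test(cond)) return "context";
  return "unknown";
}

// Evaluate only the every-<Weekday>-<HH:MM> form against a given time
function fires(cond, now = new Date()) {
  const m = /^every-(\w+)-(\d{2}):(\d{2})$/.exec(cond);
  if (!m) return false;
  const days = ["Sunday", "Monday", "Tuesday", "Wednesday",
                "Thursday", "Friday", "Saturday"];
  return days[now.getDay()] === m[1] &&
    now.getHours() === Number(m[2]) &&
    now.getMinutes() === Number(m[3]);
}
```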
## Autonomous Task Management

The system enables AI agents to manage their own infrastructure:

### Example: Bathtub Timer

User: "Turn off bathroom lights in 30 minutes"

1. Agent writes a script: `hue-api.js` to control the Philips Hue lights
2. Agent creates a cron job: schedule the script 30 minutes from now
3. Agent documents it in NERVENSYSTEM.md: add an E-number entry for tracking
4. Job executes: lights turn off automatically
5. Agent updates status: mark the E-number entry as completed
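For a one-shot job like this, a full cron entry is overkill. Here is an in-process sketch of the scheduling step; `hue-api.js` and any light-control interface are assumptions taken from the example above:

```javascript
// Schedule a one-shot action N minutes from now; returns a handle the agent
// can document (e.g. in NERVENSYSTEM.md) and use to cancel the job.
function scheduleOneShot(minutes, action) {
  const runAt = new Date(Date.now() + minutes * 60_000);
  const timer = setTimeout(action, minutes * 60_000);
  return { runAt, cancel: () => clearTimeout(timer) };
}

// Hypothetical usage for the bathtub timer:
// const { turnOff } = require("./hue-api.js");
// const job = scheduleOneShot(30, () => turnOff("bathroom"));
```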
## Crash Resilience

**Problem:** What happens when the system crashes or restarts?

**Solution:** NERVENSYSTEM.md is the single source of truth, updated after every change.

### Recovery Process

1. System restarts
2. Layer 2 hook auto-loads `NERVENSYSTEM.md`
3. Agent knows the exact state: active tasks, pending actions, critical warnings
4. Agent continues where it left off

No reconstruction from chat history needed. No guessing. Just read the file.
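Recovery only needs a trivial parser over the `**Key:** value` lines. A sketch based on the E-042 entry format shown earlier:

```javascript
// Parse one NERVENSYSTEM.md entry (format taken from the E-042 example)
function parseEntry(markdown) {
  const entry = {};
  const header = /^## \[(E-\d+)\] (.+)$/m.exec(markdown);
  if (header) {
    entry.id = header[1];
    entry.title = header[2];
  }
  // Collect every "**Key:** value" line into a lowercase-keyed field
  for (const [, key, value] of markdown.matchAll(/^\*\*(\w+):\*\* (.+)$/gm)) {
    entry[key.toLowerCase()] = value;
  }
  return entry;
}

const sample = [
  "## [E-042] DesetLabs SSL Certificate",
  "**Type:** Maintenance",
  "**Status:** Active",
  "**Priority:** Critical",
].join("\n");
```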
## Token Management

| Layer | Token Budget | Enforcement |
|---|---|---|
| L1 (Cron) | 0 tokens | No AI involved |
| L2 (System-Critical) | Max 10,000 | Code-level check, error if exceeded |
| L3 (Heartbeat) | Max 5,000 | Warning if exceeded |
| L4 (Registry) | Max 20,000 | Lazy loading required |

If NERVENSYSTEM.md exceeds 10k tokens, the system should raise an error and refuse to start until the file is trimmed.
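The enforcement rule above can be sketched as a startup check. The characters-divided-by-four token estimate is a crude heuristic standing in for a real tokenizer:

```javascript
// Per-layer token budgets from the table above
const BUDGETS = { L2: 10_000, L3: 5_000, L4: 20_000 };

// Crude token estimate: ~4 characters per token (heuristic, not a tokenizer)
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// L2 over budget is a hard error (refuse to start); other layers only warn
function checkBudget(layer, text) {
  const used = estimateTokens(text);
  if (used > BUDGETS[layer]) {
    if (layer === "L2") {
      throw new Error(
        `NERVENSYSTEM.md is ~${used} tokens (max ${BUDGETS.L2}); refusing to start`
      );
    }
    console.warn(`${layer} over budget: ~${used}/${BUDGETS[layer]} tokens`);
  }
  return used;
}
```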
## Implementation Roadmap

### Phase 1: Documentation & Prototyping

- ✅ Create comprehensive documentation
- ⏳ Create example `NERVENSYSTEM.md` for Homebot
- ⏳ Migrate `Projekttracking.md` to Layer 4 format
- ⏳ Test with existing Homebot workflow
### Phase 2: Skill Package
- ⏳ Create OpenClaw Skill with templates
- ⏳ Include setup wizard for first-time users
- ⏳ Document best practices
- ⏳ Create example implementations
### Phase 3: Code-Level Enforcement (Fork)

- ⏳ Fork OpenClaw
- ⏳ Implement `injectSystemCritical()` function
- ⏳ Add 10k token validation
- ⏳ Add flow state detection
- ⏳ Test and benchmark token usage
### Phase 4: Community Release
- ⏳ Publish to ClaHub
- ⏳ Create tutorial videos
- ⏳ Write blog post
- ⏳ Gather feedback
- ⏳ Iterate based on user needs
## Design Philosophy

### Alles kann, nichts muss
("Everything can, nothing must" — German idiom)
- Works out of the box with minimal config
- Power users can customize everything
- No forced complexity
### Respektiere die Zeit des Users
("Respect the user's time")
- Even power users shouldn't need to debug
- Clear error messages
- Sensible defaults
### Open Source First
- Free Skill package for 80% of users
- Optional fork for power users wanting code-level enforcement
- Community-driven development
## FAQ

### Why not just use MEMORY.md?
MEMORY.md is great for long-term memory, but it's not enforced and doesn't have structured task management. deset Hooks adds layers of enforcement, status codes, conditions, and crash resilience.
### Won't this use too many tokens?
No — that's the point! The multi-layer architecture ensures only critical context is loaded when needed. Layer 1 uses zero tokens. Layer 2 is capped at 10k. Lazy loading minimizes waste.
### Can I use this with other AI frameworks?
Yes! While designed for OpenClaw, the core concepts (NERVENSYSTEM.md format, status codes, conditions) are platform-agnostic. You can adapt it to any AI agent system.
### What's the difference between a Skill and a Fork?
The Skill provides templates, documentation, and conventions — works with vanilla OpenClaw. The Fork adds code-level enforcement for true autonomic-nervous-system behavior. Most users will be fine with just the Skill.
### Why "deset Hooks" instead of "Consciousness System"?
Marketing! "Hooks" is more concrete and memorable. "Consciousness" conveys the vision but sounds abstract. The subtitle "Consciousness System for AI Agents" explains what it does.
## Contact & Contributing
Interested in contributing or have questions?
- Project Status: In Planning (Documentation Phase)
- GitHub: Coming soon
- Contact: Available via desetlabs.com