Tired of Your AI Making the Same Mistakes?
AI that remembers what went wrong and learns not to repeat it.
Seven skills for failure-anchored memory.
For developers building autonomous agents with Claude Code, Cursor, or any LLM.
From Failure to Learning
Most AI forgets its failures. Same mistakes, session after session. Agentic skills fix that.
When something goes wrong, it gets recorded. When the same mistake happens again, it becomes a pattern. When the pattern is confirmed by multiple sources, it becomes a rule. That rule is then enforced automatically, forever.
AI learns from consequences, not instructions. Failures become constraints.
How It Works
The journey from failure to learning:
Technical Details: Eligibility Criteria
For a failure to become a constraint, it must meet these thresholds:
- Recurrence ≥ 3 — Seen at least 3 times
- Confirmations ≥ 2 — Verified by multiple sources
- Disconfirmation rate < 20% — Not frequently disputed
This prevents noise from becoming rules while capturing genuine patterns.
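The thresholds above can be sketched as a single predicate. This is an illustrative sketch, not the skill's actual implementation; the function name and signature are assumptions, but the numbers come straight from the criteria.

```python
def is_eligible(recurrence: int, confirmations: int, disconfirmations: int) -> bool:
    """Return True if a recorded failure qualifies for promotion to a constraint."""
    if recurrence < 3 or confirmations < 2:
        return False
    total = confirmations + disconfirmations
    # Disconfirmation rate is measured against all confirm/dispute events.
    rate = disconfirmations / total if total else 0.0
    return rate < 0.20

# Seen 4 times, confirmed 3 times, disputed once (25% rate): not yet a rule.
print(is_eligible(recurrence=4, confirmations=3, disconfirmations=1))  # False
```

Note that a single disconfirmation can block promotion until enough confirmations accumulate, which is exactly the noise filter the criteria describe.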
The 7 Skills
Ordered by your journey: start with failure tracking, build from there.
Quick Start: Just want one skill? Start with failure-memory.
It records mistakes and detects patterns—the foundation everything else builds on.
Start Here
failure-memory /fm
Records failures with R/C/D counters, detects recurring patterns. The foundation of learning from mistakes.
constraint-engine /ce
Generates constraints from recurring failures, enforces them at runtime. Failures become rules.
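The two skills above form the core loop: failure-memory keeps R/C/D counters, constraint-engine promotes eligible patterns to rules. A minimal sketch of that handoff, with an assumed record shape and rule format (the R/C/D counters follow the descriptions above; everything else is hypothetical):

```python
from dataclasses import dataclass

@dataclass
class FailureRecord:
    description: str
    recurrence: int = 0        # R: times the failure was observed
    confirmations: int = 0     # C: independent confirmations
    disconfirmations: int = 0  # D: times the pattern was disputed

    def observe(self) -> None:
        self.recurrence += 1

    def to_constraint(self):
        """Promote to an enforced rule once the eligibility thresholds are met."""
        total = self.confirmations + self.disconfirmations
        rate = self.disconfirmations / total if total else 0.0
        if self.recurrence >= 3 and self.confirmations >= 2 and rate < 0.20:
            return f"NEVER: {self.description}"
        return None

rec = FailureRecord("commit secrets to the repository", confirmations=2)
for _ in range(3):
    rec.observe()
print(rec.to_constraint())  # "NEVER: commit secrets to the repository"
```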
Supporting
context-verifier /cv
Verifies file integrity via SHA-256 hashes, detects unauthorized changes. Trust, but verify.
review-orchestrator /ro
Coordinates multi-perspective reviews (technical, creative, external). Different eyes see different things.
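The context-verifier idea is simple to illustrate: hash files once, then compare against the baseline later. This is a sketch under assumed names and paths, not the skill's actual interface:

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def detect_changes(baseline: dict[str, str], root: Path) -> list[str]:
    """Return files whose current hash no longer matches the recorded baseline."""
    return [name for name, digest in baseline.items()
            if sha256_of(root / name) != digest]

with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    target = root / "SKILL.md"
    target.write_text("version 1\n")
    baseline = {"SKILL.md": sha256_of(target)}  # record trusted state
    target.write_text("version 2\n")            # simulate an unauthorized edit
    changed = detect_changes(baseline, root)

print(changed)  # ['SKILL.md']
```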
Lifecycle
governance /gov
Manages constraint lifecycle, triggers 90-day reviews. Rules that evolve, not calcify.
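The 90-day review trigger reduces to a date comparison. The interval comes from the description above; the record shape is an assumption for illustration:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)

def review_due(created: date, today: date) -> bool:
    """A constraint is due for review once 90 days have elapsed since creation."""
    return today - created >= REVIEW_INTERVAL

# 91 days after creation: the review fires.
print(review_due(date(2025, 1, 1), date(2025, 4, 2)))  # True
```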
Advanced
safety-checks /sc
Validates model configs, enforces pinning, provides fallbacks. Circuit breakers for AI.
workflow-tools /wt
Detects infinite loops, evaluates parallel vs serial decisions. Keep things moving forward.
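One way to detect the kind of loop workflow-tools guards against is to count repeated agent steps and flag a stall when the same action recurs too often. The threshold and the string-based state representation here are illustrative assumptions:

```python
from collections import Counter

def detect_loop(steps: list[str], max_repeats: int = 3) -> bool:
    """Return True if any identical step recurs more than max_repeats times."""
    counts = Counter(steps)
    return any(n > max_repeats for n in counts.values())

trace = ["read config", "apply fix", "test fails", "apply fix", "test fails",
         "apply fix", "test fails", "apply fix"]
print(detect_loop(trace))  # True: "apply fix" appears 4 times
```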
OpenClaw Ecosystem
These skills integrate with the broader OpenClaw ecosystem.
When you record a failure with /fm record, it will flow to self-improving-agent for cross-session learning. When you generate a constraint with /ce generate, it will flow to proactive-agent for runtime enforcement. (OpenClaw ecosystem integration is coming soon.)
Together, they form a complete failure-to-enforcement loop. Your skills handle the what (detecting and recording failures), the ecosystem skills handle the how (learning and enforcing across sessions).
Technical Details: Three-Layer Architecture
The skills operate across three layers:
- Layer 1: SKILL.md — Portable instructions that work everywhere
- Layer 2: Workspace — Shared files (.learnings/, output/) compatible with ClawHub skills
- Layer 3: Automation — Claude Code hooks (future release)
This layered approach means the skills work today via SKILL.md instructions, while preparing for deeper automation integration.
Installation
Works with any LLM that supports skills or system prompts.
GitHub (Recommended)
# Clone the repository
git clone https://github.com/live-neon/skills.git
# Copy skills to Claude Code
cp skills/agentic/*/SKILL.md ~/.claude/skills/
ClawHub Coming Soon
ClawHub is the skill registry for OpenClaw-compatible agents. Skills will be available for one-command installation.
# Coming soon:
clawhub install leegitw/failure-memory
clawhub install leegitw/constraint-engine
clawhub search leegitw --tag agentic
Any LLM Agent
Copy the contents of any SKILL.md into your agent's system prompt or context. Works with Claude, GPT, Gemini, and others.
Related Tools
Failures become constraints. Constraints become identity.
Your agent's mistakes aren't just lessons—they're the building blocks of who it becomes. See how NEON-SOUL builds autonomous identity from lived experience, or explore PBD Skills for principle extraction.