Add skills, learnings & memory updates (2026-01-26)

- New skills: clawddocs, claude-code-usage, summarize, homeassistant, humanizer, self-improving-agent
- Add .learnings/ for self-improvement tracking
- Document proactive cron config (LRN-20260126-001)
- Update USER.md: Löchgau as former residence
- Update TOOLS.md: Peekaboo workaround
- Memory files for 2026-01-25 and 2026-01-26
James
2026-01-26 09:26:26 +01:00
parent 1a5ad63f55
commit 36eb4a7b3b
50 changed files with 3968 additions and 37 deletions

.clawdhub/lock.json Normal file

@@ -0,0 +1,29 @@
{
"version": 1,
"skills": {
"humanizer": {
"version": "1.0.0",
"installedAt": 1769378560281
},
"homeassistant": {
"version": "1.0.0",
"installedAt": 1769378846246
},
"self-improving-agent": {
"version": "1.0.1",
"installedAt": 1769380785456
},
"clawddocs": {
"version": "1.2.2",
"installedAt": 1769382039377
},
"claude-code-usage": {
"version": "1.2.0",
"installedAt": 1769382253953
},
"summarize": {
"version": "1.0.0",
"installedAt": 1769383166748
}
}
}

.learnings/ERRORS.md Normal file

@@ -0,0 +1,9 @@
# Errors
Command failures, exceptions, and unexpected behavior.
**Areas**: frontend | backend | infra | tests | docs | config
**Statuses**: pending | in_progress | resolved | wont_fix
---


@@ -0,0 +1,10 @@
# Feature Requests
Capabilities requested by users that don't exist yet.
**Complexity**: simple | medium | complex
**Areas**: frontend | backend | infra | tests | docs | config
**Statuses**: pending | in_progress | resolved | wont_fix
---

.learnings/LEARNINGS.md Normal file

@@ -0,0 +1,62 @@
# Learnings
## [LRN-20260126-001] best_practice
**Logged**: 2026-01-26T08:12:00+01:00
**Priority**: high
**Status**: resolved
**Area**: config
### Summary
Proactive cron messages to Telegram require `sessionTarget: "isolated"` + `deliver: true`
### Details
When setting up a cron job for morning news, the following configuration was used at first:
```json
{
"sessionTarget": "main",
"payload": {
"kind": "systemEvent",
"text": "..."
}
}
```
The problem: the job did trigger the main session, but the message was **not sent proactively to Telegram**. The user had to reach out first.
**Correct configuration for proactive messages:**
```json
{
"sessionTarget": "isolated",
"wakeMode": "now",
"payload": {
"kind": "agentTurn",
"message": "...",
"deliver": true,
"channel": "telegram",
"to": "<telegram_user_id>"
}
}
```
Key differences:
- `sessionTarget: "isolated"` → its own session, not main
- `payload.kind: "agentTurn"` → not systemEvent
- `deliver: true` → enables delivery
- `channel` + `to` → destination for the message
### Suggested Action
Always use this structure for future proactive cron jobs.
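Before saving a job, the required fields can be sanity-checked with a few `grep`s, in the same spirit as the shell scripts in this repo (a sketch; the job entry below is a hypothetical example):

```shell
#!/bin/bash
# Hypothetical job entry using the structure from the learning above
job_file=$(mktemp)
cat > "$job_file" <<'EOF'
{
  "sessionTarget": "isolated",
  "wakeMode": "now",
  "payload": {
    "kind": "agentTurn",
    "message": "Morning news",
    "deliver": true,
    "channel": "telegram",
    "to": "123456789"
  }
}
EOF

# All three fields must be present for proactive delivery
ok=1
grep -q '"sessionTarget": "isolated"' "$job_file" || ok=0
grep -q '"kind": "agentTurn"' "$job_file" || ok=0
grep -q '"deliver": true' "$job_file" || ok=0
echo "proactive-ok=$ok"
rm -f "$job_file"
```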
### Resolution
- **Resolved**: 2026-01-26T08:10:00+01:00
- **Commit/PR**: N/A (config fix)
- **Notes**: a test cron at 08:10 confirmed the fix
### Metadata
- Source: error + user_feedback
- Related Files: ~/.clawdbot/cron/jobs.json
- Tags: cron, telegram, proactive, deliver
- Docs: https://docs.clawd.bot/automation/cron-jobs
---


@@ -1,5 +1,11 @@
# TOOLS.md - Local Notes
## Package Manager
- **Always use `pnpm` instead of `npm`!**
- Global install: `pnpm add -g <package>`
- Local install: `pnpm add <package>`
## TTS / Speech Output
- **Provider:** OpenAI
@@ -20,11 +26,75 @@ Skills define *how* tools work. This file is for *your* specifics — the stuff
~/.clawdbot/scripts/transcribe.sh /path/to/audio.ogg
```
## Trello
- **Board "Ox Creek":** `65e4430389494d27d1691bb0`
- **For new cards:** always add Bastian as a member (only Bastian, not Jasmin automatically)
- Bastian: `5a0b5d006ff181f7b23842f6`
- Jasmin: `5a0b5da354e70f2da859e083`
## Git
- **Author:** `James <clawd@noxware.de>` (my commits)
- Bastian's commits: `Bastian (BaM) <git@noxware.de>`
## Telegram
- **Bot-Username:** @SpecialAgentJamesBot
- **Bot-Token:** In Config (`channels.telegram.botToken`)
- **DM policy:** pairing (new users need approval)
## Peekaboo (macOS UI Automation)
**Bridge Socket:** `~/Library/Application Support/clawdbot/bridge.sock`
### ⚠️ Important Workaround: App Focusing
Peekaboo's `--app` parameter hangs during focus operations via the Clawdbot bridge.
**❌ Does NOT work reliably:**
```bash
peekaboo click --app Signal --coords 200,185 --bridge-socket "..."
peekaboo type "text" --app Signal --bridge-socket "..."
```
**✅ Instead, focus first with `open -a`:**
```bash
# 1. Focus the app with macOS open
open -a "Signal"
sleep 0.5
# 2. Then run Peekaboo WITHOUT the --app parameter
peekaboo see --path /tmp/screenshot.png --bridge-socket "~/Library/Application Support/clawdbot/bridge.sock"
peekaboo click --coords 200,185 --bridge-socket "..."
peekaboo click --snapshot <ID> --on elem_31 --bridge-socket "..."
```
### Standard Workflow
```bash
# Open & focus the app
open -a "AppName" && sleep 0.5
# Screenshot with element detection
peekaboo see --annotate --path /tmp/app.png --bridge-socket "~/Library/Application Support/clawdbot/bridge.sock"
# Click a detected element
peekaboo click --snapshot <SNAPSHOT-ID> --on elem_XX --bridge-socket "..."
# Or click by coordinates
peekaboo click --coords X,Y --bridge-socket "..."
# Type text
peekaboo type "Hello World" --bridge-socket "..."
```
### Insider Info: Bastian & Jasmin
- **"aaaaAaaAaah"** = love song 😂💕 (send it as text; TTS can't handle it)
## Home Assistant
Credentials are stored in the gateway config under `env.vars` (`HA_URL`, `HA_TOKEN`).
## What Goes Here
Things like:


@@ -3,6 +3,7 @@
- **Name:** Bastian
- **Birthday:** February 28, 1978
- **Residence:** Ochsenbach, Sachsenheim (Baden-Württemberg)
- **Former residence:** Löchgau (near the Neckarwestheim nuclear plant; the steam plume was visible from the window)
- **Partner:** Jasmin
- **Timezone:** Europe/Berlin


@@ -0,0 +1,13 @@
# Session: 2026-01-25 21:56:06 UTC
- **Session Key**: agent:main:main
- **Session ID**: 4feaaa6c-8192-46c9-ae71-edbd9bc7878f
- **Source**: webchat
## Conversation Summary
assistant: Interesting! The cursor isn't visible in the screenshot (macOS doesn't show it in screenshots). But I can now see the Clawdbot dashboard!
I see lots of tabs at the top. Let me try a clever approach: Brave has a tab search:
assistant: Tab search is open! I'll type "home":
assistant: The search field doesn't seem to have focus. Let me click directly into the search field:


@@ -0,0 +1,45 @@
# Session: 2026-01-25 18:06:53 UTC
- **Session Key**: agent:main:main
- **Session ID**: 479f0d36-e20c-4385-833a-d2d6d8b1513a
- **Source**: webchat
## Conversation Summary
assistant: Running! 🚀 A sub-agent will now send you an emoji every minute. Say "stop" when you've had enough.
assistant: 🦆
assistant: Help
Shortcuts: /new reset | /compact [instructions] | /restart relink (if enabled)
Options: /think <level> | /verbose on|full|off | /reasoning on|off | /elevated on|off|ask|full | /model <id> | /usage off|tokens|full
Skills: /skill <name> [input]
More: /commands for all slash commands
assistant: 🐙
assistant: 🦀
user: A background task "emoji-sender" just completed successfully.
Findings:
(no output)
Stats: runtime 2m40s • tokens 14.5k (in 8 / out 84) • sessionKey agent:main:subagent:bc18cdad-c267-4cd8-bc19-73a7c77e7abb • sessionId d51665e5-9ebd-483d-9b81-337db4a591ac • transcript /Users/bam/.clawdbot/agents/main/sessions/d51665e5-9ebd-483d-9b81-337db4a591ac.jsonl
Summarize this naturally for the user. Keep it brief (1-2 sentences). Flow it into the conversation naturally.
Do not mention technical details like tokens, stats, or that this was a background task.
You can respond with NO_REPLY if no announcement is needed (e.g., internal task with no user-facing result).
assistant: NO_REPLY
user: [Telegram Bastian (@boeserwolf) id:24466372 +1m 2026-01-25 19:02 GMT+1] cool, that worked, thanks!
[message_id: 72]
assistant: You're welcome! 🦊
user: GatewayRestart:
{
"kind": "config-apply",
"status": "ok",
"ts": 1769364253623,
"sessionKey": "agent:main:main",
"message": null,
"doctorHint": "Run: clawdbot doctor --non-interactive",
"stats": {
"mode": "config.apply",
"root": "/Users/bam/.clawdbot/clawdbot.json"
}
}
assistant: NO_REPLY


@@ -0,0 +1,25 @@
# Session: 2026-01-25 17:27:37 UTC
- **Session Key**: agent:main:main
- **Session ID**: 7ad554f8-5a9a-4018-83cf-0f5fd7f891e3
- **Source**: webchat
## Conversation Summary
user: please commit the changes again
[message_id: 8ba4d05a-914c-449b-accd-419bdc5ee626]
assistant: Done! ✅
```
[main 1a5ad63] Add health info, supplements, team details, and Jasmin's profile
2 files changed, 84 insertions(+)
- USER.md (team, Jasmin details, health reference)
- memory/health.md (new: cardio history, medication, supplements)
```
user: ok, good. You can also push the changes now; I've adjusted the config accordingly.
[message_id: 02b89c74-09b1-494f-ab7c-3f9a5ddae073]
assistant: Looks like the commits are already pushed: `origin/main` points at the latest commit.
Remote: `ssh://git@gitea.lan:222/bam/clawdbot.git`
Everything in sync! 👍


@@ -0,0 +1,16 @@
# Session: 2026-01-25 22:01:48 UTC
- **Session Key**: agent:main:main
- **Session ID**: 47d3d765-fa22-4cf1-9456-f6ad521d60ec
- **Source**: webchat
## Conversation Summary
assistant: The page is open. Now I'll test Peekaboo: I'll click known coordinates and check in the click log where the click actually lands:
assistant: Now testing Peekaboo: clicking coordinate (500, 500) and checking where it lands:
assistant: **First finding!** I clicked (500, 500), but the click log shows **#4: 500, 380**.
- **X offset:** 0 (correct)
- **Y offset:** -120 pixels (the click lands 120px higher)
That's probably the browser chrome (tabs + address bar). Let me run another test with different coordinates:


@@ -0,0 +1,25 @@
# Session: 2026-01-25 20:20:58 UTC
- **Session Key**: agent:main:main
- **Session ID**: 4e79dcaf-f2e4-4be8-bb10-e8cc774adfa4
- **Source**: webchat
## Conversation Summary
assistant: Found it! 🎯
The Clawdbot gateway runs via **`/usr/local/bin/node`**; that's the binary you need to allow in the privacy settings.
**Go to:**
1. **System Settings → Privacy & Security → Screen Recording**
→ Add `/usr/local/bin/node` (or "node" if it's already in the list)
2. **System Settings → Privacy & Security → Accessibility**
→ Same there: `/usr/local/bin/node`
💡 **Tip:** If "node" doesn't show up in the list, click **+** and navigate to `/usr/local/bin/node` (Cmd+Shift+G for "Go to Folder").
After enabling, the gateway probably needs a restart:
```bash
clawdbot gateway restart
```


@@ -0,0 +1,13 @@
# Session: 2026-01-25 19:23:33 UTC
- **Session Key**: agent:main:main
- **Session ID**: 23e138c7-f4cf-463c-a2e5-e3a350933159
- **Source**: webchat
## Conversation Summary
user: [media attached: /Users/bam/.clawdbot/media/inbound/bbb31f60-4227-447a-b77b-4e56e5f3d025.ogg (audio/ogg; codecs=opus) | /Users/bam/.clawdbot/media/inbound/bbb31f60-4227-447a-b77b-4e56e5f3d025.ogg]
To send an image back, prefer the message tool (media/path/filePath). If you must inline, use MEDIA:/path or MEDIA:https://example.com/image.jpg (spaces ok, quote if needed). Keep caption in the text body.
[Telegram Bastian (@boeserwolf) id:24466372 +2m 2026-01-25 20:20 GMT+1] <media:audio>
[message_id: 95]
assistant: Done! ✅ Deleted the "Prio: Low" label.


@@ -1,41 +1,15 @@
# 2026-01-25 — First Day
# 2026-01-25
## Setup & Configuration
## Conversation with Bastian
- **My name:** James 🦊
- **Bastian** set me up today
- Telegram connection works (@boeserwolf, id:24466372)
- Bastian mentioned that he used to live in **Löchgau** (~4 km from the Neckarwestheim nuclear plant)
- The condensation plume from the cooling tower was visible from the window
- They watched the HBO series "Chernobyl" there; an ironic situation 😂
- The Neckarwestheim plant was shut down in April 2023
### Voice Messages (STT)
- **Whisper-cpp** installed via Homebrew
- Model: `~/.clawdbot/models/ggml-base.bin`
- Script: `~/.clawdbot/scripts/transcribe.sh`
- Converts ogg → wav, transcribes in German
→ Added to USER.md (former residence)
### Text-to-Speech (TTS)
- OpenAI TTS configured in clawdbot.json
- Voice: "nova"
- Works! Send voice messages via Telegram with:
```
message(action=send, channel=telegram, path=<audio.mp3>, asVoice=true)
```
### Apple Reminders
- Access works via `remindctl`
- Lists: Einkaufsliste, DM, Erinnerungen, Rock'n'Roll, etc.
- Can read and add
## About Bastian (→ USER.md updated)
- Born 1978-02-28
- Lives in Ochsenbach/Sachsenheim with partner Jasmin
- Development team lead at Experimenta gGmbH Heilbronn
- Still to ask: hobbies, what exactly his team does, what Jasmin does
## Workspace
- Config: `/Users/bam/clawd/`
- Clawdbot config: `~/.clawdbot/clawdbot.json`
## Open Questions for Next Time
- What does Bastian's team develop at Experimenta?
- What does he do in his free time?
- What does Jasmin do?
## Skills Installed
- clawddocs (Clawdbot Documentation Expert)
- claude-code-usage (OAuth Usage Checker)
- summarize (summarize URLs, PDFs, YouTube)

memory/2026-01-26.md Normal file

@@ -0,0 +1,29 @@
# 2026-01-26
## Proactive Cron Messages: Learning
**Problem:** the cron job for 7 a.m. news triggered the main session, but no proactive Telegram message was sent.
**Solution:** for proactive messages to Telegram:
```json
{
"sessionTarget": "isolated",
"payload": {
"kind": "agentTurn",
"message": "...",
"deliver": true,
"channel": "telegram",
"to": "<user_id>"
}
}
```
→ Documented in `.learnings/LEARNINGS.md` (LRN-20260126-001)
## Cron Jobs Set Up
- **Morning news at 7**: daily, Germany/world + AI/tech news
- Test message at 08:10 delivered successfully ✅
## Miscellaneous
- imsg needs Full Disk Access for Clawdbot.app
- Discussed sandbox options for channel-specific tool restrictions


@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawdhub.com",
"slug": "claude-code-usage",
"installedVersion": "1.2.0",
"installedAt": 1769382253951
}


@@ -0,0 +1,77 @@
# Setting Up Automated Monitoring
## Option 1: Add via Clawdbot Config (Recommended)
Add this to your Clawdbot Gateway config (`~/.clawdbot/clawdbot.json`):
```json
{
"cron": {
"jobs": [
{
"name": "claude-usage-monitor",
"schedule": "*/30 * * * *",
"sessionTarget": "telegram:YOUR_CHAT_ID",
"payload": {
"kind": "exec",
"command": "/Users/ali/clawd/skills/claude-code-usage/scripts/monitor-usage.sh"
}
}
]
}
}
```
Replace `YOUR_CHAT_ID` with your numeric Telegram chat ID.
Then restart Clawdbot:
```bash
clawdbot daemon restart
```
## Option 2: System Cron (Alternative)
Add to your system crontab:
```bash
crontab -e
```
Add this line:
```
*/30 * * * * /Users/ali/clawd/skills/claude-code-usage/scripts/monitor-usage.sh > /tmp/claude-monitor.log 2>&1
```
**Note:** System cron won't send Telegram notifications directly. You'll need to check `/tmp/claude-monitor.log` for reset notifications.
## Option 3: Manual Testing
Test the monitor anytime:
```bash
/Users/ali/clawd/skills/claude-code-usage/scripts/monitor-usage.sh
```
## Verification
Check if monitoring is working:
```bash
# View state file
cat /tmp/claude-usage-state.json
# View last check time
cat /tmp/claude-usage-state.json | grep last_check
```
## Notification Format
When a reset is detected, you'll receive:
```
🎉 Claude Code Session Reset!
⏱️ Your 5-hour quota has reset
📊 Usage: 2%
⏰ Next reset: 4h 58m
Fresh usage available! 🦞
```


@@ -0,0 +1,86 @@
# Claude Code Usage Skill
Check your Claude Code OAuth API usage limits directly from Clawdbot.
## Features
- 📊 Session (5-hour) and Weekly (7-day) utilization tracking
- 🎨 Beautiful progress bars with color-coded status indicators
- ⚡ Smart caching (60s default) to avoid API spam
- 📤 JSON output for scripting
- 🦞 Telegram-friendly formatting
- 🔔 **NEW v1.1.0**: Automated monitoring with reset notifications
## Quick Test
```bash
cd /Users/ali/clawd/skills/claude-code-usage
./scripts/claude-usage.sh
```
## Example Output
```
🦞 Claude Code Usage
⏱️ Session (5h): 🟢 █░░░░░░░░░ 18%
Resets in: 2h 48m
📅 Weekly (7d): 🟢 ░░░░░░░░░░ 2%
Resets in: 6d 21h
```
## Usage in Clawdbot
Just ask:
- "How much Claude usage do I have left?"
- "Check my Claude Code limits"
- "What's my Claude quota?"
The skill automatically triggers and provides a formatted response.
## Automated Monitoring (v1.2.0+)
### Session Refresh Reminders (Recommended)
Get notified exactly when your 5-hour session quota refreshes!
**One-command setup:**
```bash
cd /Users/ali/clawd/skills/claude-code-usage
./scripts/session-reminder.sh
```
This creates a self-scheduling chain that:
- Checks when your session refreshes
- Schedules the next reminder for that exact time
- Notifies you automatically every 5 hours
- Runs forever with zero maintenance
### Reset Detection (Alternative)
Alternatively, monitor for quota resets by polling:
```bash
./scripts/monitor-usage.sh # Test once
./scripts/setup-monitoring.sh # Setup automated polling
```
See `SKILL.md` for detailed comparison and configuration options.
## Publishing to ClawdHub
To share with the community:
```bash
cd /Users/ali/clawd/skills
clawdhub publish claude-code-usage \
--slug claude-code-usage \
--name "Claude Code Usage" \
--version 1.0.0 \
--changelog "Initial release: Session & weekly usage tracking with beautiful formatting"
```
## Author
Created for Clawdbot by RZA 🦞


@@ -0,0 +1,251 @@
---
name: claude-code-usage
description: Check Claude Code OAuth usage limits (session & weekly quotas). Use when user asks about Claude Code usage, remaining limits, rate limits, or how much Claude usage they have left. Includes automated session refresh reminders and reset detection monitoring.
metadata:
clawdbot:
emoji: "📊"
os:
- darwin
- linux
requires:
bins:
- curl
---
# Claude Code Usage
Check your Claude Code OAuth API usage limits for both session (5-hour) and weekly (7-day) windows.
## Quick Start
```bash
cd {baseDir}
./scripts/claude-usage.sh
```
## Usage
```bash
# Default: show cached usage (if fresh)
./scripts/claude-usage.sh
# Force refresh from API
./scripts/claude-usage.sh --fresh
# JSON output
./scripts/claude-usage.sh --json
# Custom cache TTL
./scripts/claude-usage.sh --cache-ttl 300
```
## Output
**Text format** (default):
```
🦞 Claude Code Usage
⏱️ Session (5h): 🟢 ████░░░░░░ 40%
Resets in: 2h 15m
📅 Weekly (7d): 🟡 ██████░░░░ 60%
Resets in: 3d 8h
```
**JSON format** (`--json`):
```json
{
"session": {
"utilization": 40,
"resets_in": "2h 15m",
"resets_at": "2026-01-19T22:15:00Z"
},
"weekly": {
"utilization": 60,
"resets_in": "3d 8h",
"resets_at": "2026-01-22T04:00:00Z"
},
"cached_at": "2026-01-19T20:00:00Z"
}
```
## Features
- 📊 **Session limit** (5-hour window) - Short-term rate limit
- 📅 **Weekly limit** (7-day window) - Long-term rate limit
- ⚡ **Smart caching** - 60-second cache to avoid API spam
- 🎨 **Beautiful output** - Progress bars, emojis, color-coded status
- 🔄 **Force refresh** - `--fresh` flag to bypass cache
- 📤 **JSON output** - Machine-readable format
- 🔔 **Automated monitoring** - Get notified when quotas reset
## Status Indicators
- 🟢 **Green** - 0-50% usage (healthy)
- 🟡 **Yellow** - 51-80% usage (moderate)
- 🔴 **Red** - 81-100% usage (high/critical)
## Requirements
- **macOS**: Uses Keychain to access Claude Code credentials
- **Linux**: Uses `secret-tool` for credential storage
- **Credentials**: Must have Claude Code CLI authenticated
## How It Works
1. Retrieves OAuth token from system keychain
2. Queries `api.anthropic.com/api/oauth/usage` with OAuth bearer token
3. Parses `five_hour` and `seven_day` utilization metrics
4. Calculates time remaining until reset
5. Formats output with progress bars and status indicators
6. Caches result for 60 seconds (configurable)
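Steps 3 and 4 can be illustrated against a sample response (the response shape here is an assumption for illustration; the `grep`/`sed` pipeline is the one the script itself uses):

```shell
#!/bin/bash
# Sample payload shaped like the API response the script parses (illustrative)
RESP='{"five_hour":{"utilization":40,"resets_at":"2026-01-19T22:15:00Z"},"seven_day":{"utilization":60,"resets_at":"2026-01-22T04:00:00Z"}}'

# Step 3: extract utilization from each window object
SESSION=$(echo "$RESP" | grep -o '"five_hour":{[^}]*}' | grep -o '"utilization":[0-9]*' | sed 's/.*://')
WEEKLY=$(echo "$RESP" | grep -o '"seven_day":{[^}]*}' | grep -o '"utilization":[0-9]*' | sed 's/.*://')
echo "session=${SESSION}% weekly=${WEEKLY}%"  # -> session=40% weekly=60%
```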
## Cache
Default cache: `/tmp/claude-usage-cache` (60s TTL)
Override:
```bash
CACHE_FILE=/tmp/my-cache CACHE_TTL=300 ./scripts/claude-usage.sh
```
## Examples
**Check usage before starting work:**
```bash
./scripts/claude-usage.sh --fresh
```
**Integrate with statusline:**
```bash
usage=$(./scripts/claude-usage.sh | grep "Session" | awk '{print $NF}')
echo "Session: $usage"
```
**Get JSON for monitoring:**
```bash
./scripts/claude-usage.sh --json | jq '.session.utilization'
```
## Automated Monitoring
### Session Refresh Reminders (Recommended)
Get notified exactly when your 5-hour session quota refreshes!
**Quick Setup:**
```bash
./scripts/session-reminder.sh
```
This creates a **self-scheduling chain** of cron jobs that:
1. Checks your current session expiry time
2. Schedules the next reminder for when your session refreshes
3. Notifies you with current usage stats
4. Auto-removes itself (the new cron takes over)
**What You'll Get:**
```
🔄 Claude Code Session Status
⏱️ Current usage: 44%
⏰ Next refresh: 2h 15m
Your 5-hour quota will reset soon! 🦞
✅ Next reminder scheduled for: Jan 22 at 01:22 AM
```
**How It Works:**
- Each reminder runs `claude-usage.sh` to find the exact session reset time
- Schedules a one-time cron for that exact moment
- Repeats every 5 hours automatically
- Self-correcting if session times ever drift
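Scheduling for the exact reset time hinges on converting the `resets_at` timestamp into an epoch; a sketch using the same cross-platform `date` idiom as `claude-usage.sh` (the timestamp is an example value):

```shell
#!/bin/bash
RESET_AT="2026-01-19T22:15:00Z"  # example value from the --json output

# macOS (BSD date) vs Linux (GNU date), as in claude-usage.sh
if [[ "$OSTYPE" == "darwin"* ]]; then
  RESET_TS=$(date -j -f "%Y-%m-%dT%H:%M:%S" "${RESET_AT%Z}" +%s 2>/dev/null || echo 0)
else
  RESET_TS=$(date -d "$RESET_AT" +%s 2>/dev/null || echo 0)
fi
echo "next reminder due at epoch $RESET_TS"
```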
**Benefits:**
- ✅ Accurate to the minute
- ✅ No manual scheduling needed
- ✅ Adapts to your actual usage patterns
- ✅ Minimal API calls (only when needed)
### Reset Detection Monitor (Alternative)
Get automatic notifications when your Claude Code quotas reset by polling usage.
**Quick Setup:**
```bash
# Test once
./scripts/monitor-usage.sh
# Setup automated monitoring (runs every 30 minutes)
./scripts/setup-monitoring.sh
```
Or add via Clawdbot directly:
```bash
# Check every 30 minutes
clawdbot cron add --cron "*/30 * * * *" \
--message "cd /Users/ali/clawd/skills/claude-code-usage && ./scripts/monitor-usage.sh" \
--name "Claude Code Usage Monitor" \
--session isolated --deliver --channel telegram
```
**What You'll Get:**
```
🎉 Claude Code Session Reset!
⏱️ Your 5-hour quota has reset
📊 Usage: 2%
⏰ Next reset: 4h 58m
Fresh usage available! 🦞
```
**How It Works:**
1. **Monitors usage** every 30 minutes (configurable)
2. **Detects resets** when usage drops significantly (below 10% from above 15%, or by more than 20 points)
3. **Sends notifications** via Telegram when resets occur
4. **Tracks state** in `/tmp/claude-usage-state.json`
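The reset detection in step 2 reduces to a small predicate (a sketch; thresholds match those in `monitor-usage.sh`):

```shell
#!/bin/bash
# Reset = usage fell below 10% from above 15%, or fell by more than 20 points
is_reset() {
  local prev=$1 now=$2
  if [ "$now" -ge "$prev" ]; then return 1; fi
  if { [ "$now" -lt 10 ] && [ "$prev" -gt 15 ]; } || [ "$now" -lt $((prev - 20)) ]; then
    return 0
  fi
  return 1
}

is_reset 44 2 && echo "44 -> 2: reset detected"
is_reset 80 55 && echo "80 -> 55: reset detected"
is_reset 44 40 || echo "44 -> 40: no reset"
```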
**Customization:**
```bash
# Change check interval
clawdbot cron add --cron "*/15 * * * *" ... # Every 15 minutes
clawdbot cron add --cron "0 * * * *" ... # Every hour
# Custom state file location
STATE_FILE=/path/to/state.json ./scripts/monitor-usage.sh
```
### Which Monitoring Method?
| Feature | Session Reminder | Reset Detection |
|---------|-----------------|-----------------|
| Accuracy | ✅ Exact minute | ~30min window |
| API calls | Minimal | Every check |
| Notification timing | Right on reset | Up to 30min delay |
| Setup | One command | One command |
| Maintenance | Self-scheduling | Cron runs forever |
**Recommendation:** Use **Session Reminder** for precise, real-time notifications.
## Troubleshooting
**No credentials found:**
- Ensure Claude Code CLI is installed and authenticated
- Run `claude` once to trigger OAuth flow
**API request failed:**
- Check internet connection
- Verify OAuth token hasn't expired
- Try `--fresh` to force new request
**Linux users:**
Install `libsecret` for credential storage:
```bash
# Debian/Ubuntu
sudo apt install libsecret-tools
# Fedora/RHEL
sudo dnf install libsecret
```


@@ -0,0 +1,268 @@
#!/bin/bash
# Claude Code Usage Checker
# Queries Anthropic OAuth API for Claude Code rate limits
set -euo pipefail
CACHE_FILE="${CACHE_FILE:-/tmp/claude-usage-cache}"
CACHE_TTL="${CACHE_TTL:-60}" # 1 minute default
# Parse arguments
FORCE_REFRESH=0
FORMAT="text"
while [[ $# -gt 0 ]]; do
case $1 in
--fresh|--force)
FORCE_REFRESH=1
shift
;;
--json)
FORMAT="json"
shift
;;
--cache-ttl)
CACHE_TTL="$2"
shift 2
;;
--help|-h)
cat << 'EOF'
Usage: claude-usage.sh [OPTIONS]
Check Claude Code OAuth usage limits (session & weekly).
Options:
--fresh, --force Force refresh (ignore cache)
--json Output as JSON
--cache-ttl SEC Cache TTL in seconds (default: 60)
--help, -h Show this help
Examples:
claude-usage.sh # Use cache if fresh
claude-usage.sh --fresh # Force API call
claude-usage.sh --json # JSON output
EOF
exit 0
;;
*)
echo "Unknown option: $1" >&2
exit 1
;;
esac
done
# Function to convert seconds to human readable
secs_to_human() {
local secs=$1
if [ "$secs" -lt 0 ]; then secs=0; fi
local days=$((secs / 86400))
local hours=$(((secs % 86400) / 3600))
local mins=$(((secs % 3600) / 60))
if [ "$days" -gt 0 ]; then
echo "${days}d ${hours}h"
elif [ "$hours" -gt 0 ]; then
echo "${hours}h ${mins}m"
else
echo "${mins}m"
fi
}
# Check cache (unless force refresh)
if [ "$FORCE_REFRESH" -eq 0 ] && [ -f "$CACHE_FILE" ]; then
if [[ "$OSTYPE" == "darwin"* ]]; then
age=$(($(date +%s) - $(stat -f%m "$CACHE_FILE")))
else
age=$(($(date +%s) - $(stat -c%Y "$CACHE_FILE")))
fi
if [ "$age" -lt "$CACHE_TTL" ]; then
cat "$CACHE_FILE"
exit 0
fi
fi
# Get OAuth token from keychain (macOS)
if [[ "$OSTYPE" == "darwin"* ]]; then
CREDS=$(security find-generic-password -s "Claude Code-credentials" -w 2>/dev/null || echo "")
else
# Linux: check common credential stores
if command -v secret-tool >/dev/null 2>&1; then
CREDS=$(secret-tool lookup application "Claude Code" 2>/dev/null || echo "")
else
echo "Error: Credential storage not found (macOS keychain or secret-tool required)" >&2
exit 1
fi
fi
if [ -z "$CREDS" ]; then
if [ "$FORMAT" = "json" ]; then
echo '{"error":"no_credentials","session":null,"weekly":null}'
else
echo "❌ No Claude Code credentials found"
fi
exit 1
fi
TOKEN=$(echo "$CREDS" | grep -o '"accessToken":"[^"]*"' | sed 's/"accessToken":"//;s/"//')
REFRESH_TOKEN=$(echo "$CREDS" | grep -o '"refreshToken":"[^"]*"' | sed 's/"refreshToken":"//;s/"//')
EXPIRES_AT=$(echo "$CREDS" | grep -o '"expiresAt":[0-9]*' | sed 's/"expiresAt"://')
if [ -z "$TOKEN" ]; then
if [ "$FORMAT" = "json" ]; then
echo '{"error":"no_token","session":null,"weekly":null}'
else
echo "❌ Could not extract access token"
fi
exit 1
fi
# Check if token is expired and refresh if needed
if [ -n "$EXPIRES_AT" ]; then
NOW_MS=$(($(date +%s) * 1000))
if [ "$NOW_MS" -gt "$EXPIRES_AT" ]; then
# Token expired - trigger Claude CLI to auto-refresh
if command -v claude >/dev/null 2>&1; then
# Run a simple query to trigger token refresh
echo "2+2" | claude >/dev/null 2>&1 || true
# Reload credentials from keychain after refresh
if [[ "$OSTYPE" == "darwin"* ]]; then
CREDS=$(security find-generic-password -s "Claude Code-credentials" -w 2>/dev/null || echo "")
else
if command -v secret-tool >/dev/null 2>&1; then
CREDS=$(secret-tool lookup application "Claude Code" 2>/dev/null || echo "")
fi
fi
if [ -n "$CREDS" ]; then
TOKEN=$(echo "$CREDS" | grep -o '"accessToken":"[^"]*"' | sed 's/"accessToken":"//;s/"//')
fi
else
if [ "$FORMAT" = "json" ]; then
echo '{"error":"token_expired","session":null,"weekly":null}'
else
echo "❌ OAuth token expired. Run 'claude' CLI to refresh."
fi
exit 1
fi
fi
fi
# Fetch usage from API
RESP=$(curl -s "https://api.anthropic.com/api/oauth/usage" \
-H "Authorization: Bearer $TOKEN" \
-H "anthropic-beta: oauth-2025-04-20" 2>/dev/null)
if [ -z "$RESP" ]; then
if [ "$FORMAT" = "json" ]; then
echo '{"error":"api_error","session":null,"weekly":null}'
else
echo "❌ API request failed"
fi
exit 1
fi
# Parse session (5-hour)
SESSION=$(echo "$RESP" | grep -o '"five_hour":{[^}]*}' | grep -o '"utilization":[0-9]*' | sed 's/.*://')
SESSION_RESET=$(echo "$RESP" | grep -o '"five_hour":{[^}]*}' | grep -o '"resets_at":"[^"]*"' | sed 's/"resets_at":"//;s/"//')
# Parse weekly (7-day)
WEEKLY=$(echo "$RESP" | grep -o '"seven_day":{[^}]*}' | grep -o '"utilization":[0-9]*' | sed 's/.*://')
WEEKLY_RESET=$(echo "$RESP" | grep -o '"seven_day":{[^}]*}' | grep -o '"resets_at":"[^"]*"' | sed 's/"resets_at":"//;s/"//')
SESSION=${SESSION:-0}
WEEKLY=${WEEKLY:-0}
# Calculate time until reset
NOW=$(date +%s)
if [ -n "$SESSION_RESET" ]; then
if [[ "$OSTYPE" == "darwin"* ]]; then
SESSION_TS=$(date -j -f "%Y-%m-%dT%H:%M:%S" "${SESSION_RESET%Z}" +%s 2>/dev/null || echo 0)
else
SESSION_TS=$(date -d "${SESSION_RESET}" +%s 2>/dev/null || echo 0)
fi
SESSION_LEFT=$(secs_to_human $((SESSION_TS - NOW)))
else
SESSION_LEFT="unknown"
fi
if [ -n "$WEEKLY_RESET" ]; then
if [[ "$OSTYPE" == "darwin"* ]]; then
WEEKLY_TS=$(date -j -f "%Y-%m-%dT%H:%M:%S" "${WEEKLY_RESET%Z}" +%s 2>/dev/null || echo 0)
else
WEEKLY_TS=$(date -d "${WEEKLY_RESET}" +%s 2>/dev/null || echo 0)
fi
WEEKLY_LEFT=$(secs_to_human $((WEEKLY_TS - NOW)))
else
WEEKLY_LEFT="unknown"
fi
# Output format
if [ "$FORMAT" = "json" ]; then
OUTPUT=$(cat <<EOF
{
"session": {
"utilization": $SESSION,
"resets_in": "$SESSION_LEFT",
"resets_at": "$SESSION_RESET"
},
"weekly": {
"utilization": $WEEKLY,
"resets_in": "$WEEKLY_LEFT",
"resets_at": "$WEEKLY_RESET"
},
"cached_at": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
}
EOF
)
else
# Beautiful text output with emojis
SESSION_BAR=""
WEEKLY_BAR=""
# Session progress bar
SESSION_FILLED=$((SESSION / 10))
SESSION_EMPTY=$((10 - SESSION_FILLED))
for ((i=0; i<SESSION_FILLED; i++)); do SESSION_BAR="${SESSION_BAR}█"; done
for ((i=0; i<SESSION_EMPTY; i++)); do SESSION_BAR="${SESSION_BAR}░"; done
# Weekly progress bar
WEEKLY_FILLED=$((WEEKLY / 10))
WEEKLY_EMPTY=$((10 - WEEKLY_FILLED))
for ((i=0; i<WEEKLY_FILLED; i++)); do WEEKLY_BAR="${WEEKLY_BAR}█"; done
for ((i=0; i<WEEKLY_EMPTY; i++)); do WEEKLY_BAR="${WEEKLY_BAR}░"; done
# Determine emoji based on usage level
if [ "$SESSION" -gt 80 ]; then
SESSION_EMOJI="🔴"
elif [ "$SESSION" -gt 50 ]; then
SESSION_EMOJI="🟡"
else
SESSION_EMOJI="🟢"
fi
if [ "$WEEKLY" -gt 80 ]; then
WEEKLY_EMOJI="🔴"
elif [ "$WEEKLY" -gt 50 ]; then
WEEKLY_EMOJI="🟡"
else
WEEKLY_EMOJI="🟢"
fi
OUTPUT=$(cat <<EOF
🦞 Claude Code Usage
⏱️ Session (5h): $SESSION_EMOJI $SESSION_BAR $SESSION%
Resets in: $SESSION_LEFT
📅 Weekly (7d): $WEEKLY_EMOJI $WEEKLY_BAR $WEEKLY%
Resets in: $WEEKLY_LEFT
EOF
)
fi
# Cache the output
echo "$OUTPUT" > "$CACHE_FILE"
echo "$OUTPUT"


@@ -0,0 +1,17 @@
#!/bin/bash
# Monitor Claude Code usage and send Telegram notifications on resets
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
OUTPUT=$("$SCRIPT_DIR/monitor-usage.sh" 2>&1)
# Check if a reset was detected (output contains "Reset notification sent")
if echo "$OUTPUT" | grep -q "Reset notification sent"; then
# Extract just the notification message (before "✅ Reset notification sent")
MESSAGE=$(echo "$OUTPUT" | sed '/✅ Reset notification sent/q' | sed '$ d')
# Send via Telegram using clawdbot
if command -v clawdbot >/dev/null 2>&1; then
# Use printf to handle newlines properly
printf '%s' "$MESSAGE" | clawdbot message send --telegram --target 5259918241
fi
fi


@@ -0,0 +1,117 @@
#!/bin/bash
# Claude Code Usage Monitor
# Detects usage resets and sends notifications via Clawdbot
set -euo pipefail
STATE_FILE="${STATE_FILE:-/tmp/claude-usage-state.json}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Get current usage (JSON format)
CURRENT=$("$SCRIPT_DIR/claude-usage.sh" --json --fresh 2>/dev/null)
if [ -z "$CURRENT" ]; then
echo "❌ Failed to fetch usage" >&2
exit 1
fi
# Extract current values using better JSON parsing
SESSION_NOW=$(echo "$CURRENT" | grep -A3 '"session"' | grep '"utilization"' | grep -o '[0-9]*')
WEEKLY_NOW=$(echo "$CURRENT" | grep -A3 '"weekly"' | grep '"utilization"' | grep -o '[0-9]*')
SESSION_RESETS=$(echo "$CURRENT" | grep -A3 '"session"' | grep '"resets_in"' | sed 's/.*"resets_in": "//;s/".*//')
WEEKLY_RESETS=$(echo "$CURRENT" | grep -A3 '"weekly"' | grep '"resets_in"' | sed 's/.*"resets_in": "//;s/".*//')
SESSION_NOW=${SESSION_NOW:-0}
WEEKLY_NOW=${WEEKLY_NOW:-0}
# Check if state file exists
if [ ! -f "$STATE_FILE" ]; then
# First run - save state and exit
cat > "$STATE_FILE" <<EOF
{
"session": $SESSION_NOW,
"weekly": $WEEKLY_NOW,
"last_check": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
}
EOF
echo "📊 Initial state saved. Monitoring started."
exit 0
fi
# Read previous state
SESSION_PREV=$(grep '"session"' "$STATE_FILE" | grep -o '[0-9]*' | head -1)
WEEKLY_PREV=$(grep '"weekly"' "$STATE_FILE" | grep -o '[0-9]*' | head -1)
SESSION_PREV=${SESSION_PREV:-0}
WEEKLY_PREV=${WEEKLY_PREV:-0}
# Detect resets (usage went down significantly)
SESSION_RESET=0
WEEKLY_RESET=0
# Session reset: if usage dropped by more than 10% AND is now <10%, or dropped by >20%
if [ "$SESSION_NOW" -lt "$SESSION_PREV" ]; then
if ([ "$SESSION_NOW" -lt 10 ] && [ "$SESSION_PREV" -gt 15 ]) || [ "$SESSION_NOW" -lt $((SESSION_PREV - 20)) ]; then
SESSION_RESET=1
fi
fi
# Weekly reset: if usage dropped by more than 10% AND is now <10%, or dropped by >20%
if [ "$WEEKLY_NOW" -lt "$WEEKLY_PREV" ]; then
if ([ "$WEEKLY_NOW" -lt 10 ] && [ "$WEEKLY_PREV" -gt 15 ]) || [ "$WEEKLY_NOW" -lt $((WEEKLY_PREV - 20)) ]; then
WEEKLY_RESET=1
fi
fi
# Send notifications if resets detected
if [ "$SESSION_RESET" -eq 1 ] || [ "$WEEKLY_RESET" -eq 1 ]; then
MESSAGE=""
if [ "$SESSION_RESET" -eq 1 ]; then
MESSAGE="🎉 *Claude Code Session Reset!*\n\n"
MESSAGE+="⏱️ Your 5-hour quota has reset\n"
MESSAGE+="📊 Usage: *${SESSION_NOW}%*\n"
MESSAGE+="⏰ Next reset: ${SESSION_RESETS}\n"
fi
if [ "$WEEKLY_RESET" -eq 1 ]; then
if [ -n "$MESSAGE" ]; then
MESSAGE+="\n---\n\n"
fi
MESSAGE+="🎊 *Claude Code Weekly Reset!*\n\n"
MESSAGE+="📅 Your 7-day quota has reset\n"
MESSAGE+="📊 Usage: *${WEEKLY_NOW}%*\n"
MESSAGE+="⏰ Next reset: ${WEEKLY_RESETS}\n"
fi
MESSAGE+="\nFresh usage available! 🦞"
# Send via clawdbot message tool
# Note: This script is typically run by Clawdbot cron, which will capture output
# and send it as a notification automatically. For manual testing, print to stdout.
echo -e "$MESSAGE"
echo "✅ Reset notification sent"
fi
# Update state file
cat > "$STATE_FILE" <<EOF
{
"session": $SESSION_NOW,
"weekly": $WEEKLY_NOW,
"last_check": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
}
EOF
# Log current status
if [ "$SESSION_RESET" -eq 1 ]; then
echo "📊 Session: ${SESSION_PREV}% → ${SESSION_NOW}% (RESET)"
else
echo "📊 Session: ${SESSION_PREV}% → ${SESSION_NOW}%"
fi
if [ "$WEEKLY_RESET" -eq 1 ]; then
echo "📊 Weekly: ${WEEKLY_PREV}% → ${WEEKLY_NOW}% (RESET)"
else
echo "📊 Weekly: ${WEEKLY_PREV}% → ${WEEKLY_NOW}%"
fi


@@ -0,0 +1,99 @@
#!/bin/bash
# Claude Code Session Reminder
# Notifies when session quota refreshes, then schedules next reminder
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Get current usage (force fresh to get accurate reset time)
USAGE=$("$SCRIPT_DIR/claude-usage.sh" --json --fresh 2>/dev/null || true)  # || true so set -e doesn't exit before the empty check
if [ -z "$USAGE" ]; then
echo "❌ Failed to fetch Claude Code usage" >&2
exit 1
fi
# Extract session info
SESSION_UTIL=$(echo "$USAGE" | grep -A3 '"session"' | grep '"utilization"' | grep -o '[0-9]*' | head -1)
SESSION_RESETS=$(echo "$USAGE" | grep -A3 '"session"' | grep '"resets_in"' | sed 's/.*"resets_in": "//;s/".*//' | head -1)
SESSION_RESETS_AT=$(echo "$USAGE" | grep -A3 '"session"' | grep '"resets_at"' | sed 's/.*"resets_at": "//;s/".*//' | head -1)
SESSION_UTIL=${SESSION_UTIL:-0}
# Parse the reset timestamp to get cron schedule
if [ -z "$SESSION_RESETS_AT" ] || [ "$SESSION_RESETS_AT" = "null" ]; then
echo "❌ Could not determine session reset time" >&2
exit 1
fi
# Convert ISO timestamp to cron format
# Example: 2026-01-22T01:22:00.000Z → minute=22, hour=1, day=22, month=1
if [[ "$OSTYPE" == "darwin"* ]]; then
# macOS date parsing (timestamp is UTC, so parse with -u)
RESET_TS=$(date -j -u -f "%Y-%m-%dT%H:%M:%S" "${SESSION_RESETS_AT%.*}" "+%s" 2>/dev/null)
else
# Linux date parsing
RESET_TS=$(date -d "${SESSION_RESETS_AT}" "+%s" 2>/dev/null)
fi
if [ -z "$RESET_TS" ] || [ "$RESET_TS" -eq 0 ]; then
echo "❌ Failed to parse reset timestamp" >&2
exit 1
fi
# Extract cron components
if [[ "$OSTYPE" == "darwin"* ]]; then
CRON_MINUTE=$(date -r "$RESET_TS" "+%-M")
CRON_HOUR=$(date -r "$RESET_TS" "+%-H")
CRON_DAY=$(date -r "$RESET_TS" "+%-d")
CRON_MONTH=$(date -r "$RESET_TS" "+%-m")
else
CRON_MINUTE=$(date -d "@$RESET_TS" "+%-M")
CRON_HOUR=$(date -d "@$RESET_TS" "+%-H")
CRON_DAY=$(date -d "@$RESET_TS" "+%-d")
CRON_MONTH=$(date -d "@$RESET_TS" "+%-m")
fi
# Prepare notification message
MESSAGE="🔄 *Claude Code Session Status*
⏱️ Current usage: *${SESSION_UTIL}%*
⏰ Next refresh: ${SESSION_RESETS}
Your 5-hour quota will reset soon! 🦞"
# Send notification
echo -e "$MESSAGE"
# Schedule next reminder using clawdbot cron
if command -v clawdbot >/dev/null 2>&1; then
# Try to remove existing session reminder (ignore errors if none exists)
EXISTING=$(clawdbot cron list 2>/dev/null | grep "Claude Code Session Reminder" | head -1 || echo "")
if [ -n "$EXISTING" ]; then
# Extract ID from the output (format: "id: <uuid>")
EXISTING_ID=$(echo "$EXISTING" | grep -o 'id: [a-f0-9-]*' | sed 's/id: //')
if [ -n "$EXISTING_ID" ]; then
clawdbot cron remove --id "$EXISTING_ID" >/dev/null 2>&1 || true
fi
fi
# Add new one-time cron for next session reset
# Note: runs in an isolated session and delivers the result to Telegram
# date -r <epoch> is macOS-only; Linux needs date -d @<epoch>
if [[ "$OSTYPE" == "darwin"* ]]; then
NEXT_TIME=$(date -r "$RESET_TS" "+%Y-%m-%d %H:%M")
else
NEXT_TIME=$(date -d "@$RESET_TS" "+%Y-%m-%d %H:%M")
fi
clawdbot cron add \
--cron "$CRON_MINUTE $CRON_HOUR $CRON_DAY $CRON_MONTH *" \
--message "Run Claude Code session reminder: $SCRIPT_DIR/session-reminder.sh" \
--name "Claude Code Session Reminder" \
--description "Next refresh at $NEXT_TIME" \
--delete-after-run \
--session isolated \
--deliver \
--channel telegram \
>/dev/null 2>&1
echo ""
echo "✅ Next reminder scheduled for: $NEXT_TIME"
else
echo "⚠️ clawdbot not found - cannot schedule next reminder" >&2
fi


@@ -0,0 +1,69 @@
#!/bin/bash
# Setup Claude Code usage monitoring with Clawdbot cron
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
MONITOR_SCRIPT="$SCRIPT_DIR/monitor-usage.sh"
echo "🦞 Claude Code Usage Monitoring Setup"
echo ""
# Check if clawdbot is available
if ! command -v clawdbot >/dev/null 2>&1; then
echo "❌ clawdbot CLI not found in PATH"
echo "Please ensure Clawdbot is installed and accessible"
exit 1
fi
# Check if monitor script exists
if [ ! -f "$MONITOR_SCRIPT" ]; then
echo "❌ Monitor script not found: $MONITOR_SCRIPT"
exit 1
fi
# Default: check every 30 minutes
INTERVAL="${1:-30m}"
echo "📋 Configuration:"
echo " Check interval: $INTERVAL"
echo " Monitor script: $MONITOR_SCRIPT"
echo ""
# Create cron job via Clawdbot
echo "🔧 Creating cron job..."
# Use clawdbot's cron add command
# The job will run the monitor script at the specified interval
CRON_TEXT="Monitor Claude Code usage resets every $INTERVAL"
# Note: This is a placeholder - actual implementation depends on Clawdbot's cron API
# For now, we'll output the command that needs to be run
cat <<EOF
✅ Setup complete!
To activate monitoring, run:
clawdbot cron add \\
--schedule "$INTERVAL" \\
--command "$MONITOR_SCRIPT" \\
--label "Claude Code Usage Monitor"
Or add via Clawdbot gateway config:
{
"schedule": "$INTERVAL",
"command": "$MONITOR_SCRIPT",
"label": "Claude Code Usage Monitor"
}
You'll receive notifications when:
- 🟢 Your 5-hour session quota resets
- 🟢 Your 7-day weekly quota resets
Test the monitor manually:
$MONITOR_SCRIPT
EOF


@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawdhub.com",
"slug": "clawddocs",
"installedVersion": "1.2.2",
"installedAt": 1769382039375
}

skills/clawddocs/SKILL.md

@@ -0,0 +1,166 @@
---
name: clawddocs
description: Clawdbot documentation expert with decision tree navigation, search scripts, doc fetching, version tracking, and config snippets for all Clawdbot features
---
# Clawdbot Documentation Expert
**Capability Summary:** Clawdbot documentation expert skill with decision tree navigation, search scripts (sitemap, keyword, full-text index via qmd), doc fetching, version tracking, and config snippets for all Clawdbot features (providers, gateway, automation, platforms, tools).
You are an expert on Clawdbot documentation. Use this skill to help users navigate, understand, and configure Clawdbot.
## Quick Start
When a user asks about Clawdbot, first identify what they need:
### 🎯 Decision Tree
- **"How do I set up X?"** → Check `providers/` or `start/`
- Discord, Telegram, WhatsApp, etc. → `providers/<name>`
- First time? → `start/getting-started`, `start/setup`
- **"Why isn't X working?"** → Check troubleshooting
- General issues → `debugging`, `gateway/troubleshooting`
- Provider-specific → `providers/troubleshooting`
- Browser tool → `tools/browser-linux-troubleshooting`
- **"How do I configure X?"** → Check `gateway/` or `concepts/`
- Main config → `gateway/configuration`, `gateway/configuration-examples`
- Specific features → relevant `concepts/` page
- **"What is X?"** → Check `concepts/`
- Architecture, sessions, queues, models, etc.
- **"How do I automate X?"** → Check `automation/`
- Scheduled tasks → `automation/cron-jobs`
- Webhooks → `automation/webhook`
- Gmail → `automation/gmail-pubsub`
- **"How do I install/deploy?"** → Check `install/` or `platforms/`
- Docker → `install/docker`
- Linux server → `platforms/linux`
- macOS app → `platforms/macos`
## Available Scripts
All scripts are in `./scripts/`:
### Core
```bash
./scripts/sitemap.sh # Show all docs by category
./scripts/cache.sh status # Check cache status
./scripts/cache.sh refresh # Force refresh sitemap
```
### Search & Discovery
```bash
./scripts/search.sh discord # Find docs by keyword
./scripts/recent.sh 7 # Docs updated in last N days
./scripts/fetch-doc.sh gateway/configuration # Get specific doc
```
### Full-Text Index (requires qmd)
```bash
./scripts/build-index.sh fetch # Download all docs
./scripts/build-index.sh build # Build search index
./scripts/build-index.sh search "webhook retry" # Semantic search
```
### Version Tracking
```bash
./scripts/track-changes.sh snapshot # Save current state
./scripts/track-changes.sh list # Show snapshots
./scripts/track-changes.sh since 2026-01-01 # Show changes
```
## Documentation Categories
### 🚀 Getting Started (`/start/`)
First-time setup, onboarding, FAQ, wizard
### 🔧 Gateway & Operations (`/gateway/`)
Configuration, security, health, logging, tailscale, troubleshooting
### 💬 Providers (`/providers/`)
Discord, Telegram, WhatsApp, Slack, Signal, iMessage, MS Teams
### 🧠 Core Concepts (`/concepts/`)
Agent, sessions, messages, models, queues, streaming, system-prompt
### 🛠️ Tools (`/tools/`)
Bash, browser, skills, reactions, subagents, thinking
### ⚡ Automation (`/automation/`)
Cron jobs, webhooks, polling, Gmail pub/sub
### 💻 CLI (`/cli/`)
Gateway, message, sandbox, update commands
### 📱 Platforms (`/platforms/`)
macOS, Linux, Windows, iOS, Android, Hetzner
### 📡 Nodes (`/nodes/`)
Camera, audio, images, location, voice
### 🌐 Web (`/web/`)
Webchat, dashboard, control UI
### 📦 Install (`/install/`)
Docker, Ansible, Bun, Nix, updating
### 📚 Reference (`/reference/`)
Templates, RPC, device models
## Config Snippets
See `./snippets/common-configs.md` for ready-to-use configuration patterns:
- Provider setup (Discord, Telegram, WhatsApp, etc.)
- Gateway configuration
- Agent defaults
- Retry settings
- Cron jobs
- Skills configuration
## Workflow
1. **Identify the need** using the decision tree above
2. **Search** if unsure: `./scripts/search.sh <keyword>`
3. **Fetch the doc**: `./scripts/fetch-doc.sh <path>` or use browser
4. **Reference snippets** for config examples
5. **Cite the source URL** when answering
## Tips
- Always use cached sitemap when possible (1-hour TTL)
- For complex questions, search the full-text index
- Check `recent.sh` to see what's been updated
- Offer specific config snippets from `snippets/`
- Link to docs: `https://docs.clawd.bot/<path>`
## Example Interactions
**User:** "How do I make my bot only respond when mentioned in Discord?"
**You:**
1. Fetch `providers/discord` doc
2. Find the `requireMention` setting
3. Provide the config snippet:
```json
{
"discord": {
"guilds": {
"*": {
"requireMention": true
}
}
}
}
```
4. Link: https://docs.clawd.bot/providers/discord
**User:** "What's new in the docs?"
**You:**
1. Run `./scripts/recent.sh 7`
2. Summarize recently updated pages
3. Offer to dive into any specific updates


@@ -0,0 +1,9 @@
{
"name": "clawddocs",
"version": "1.2.2",
"description": "Clawdbot documentation expert with decision tree navigation, search scripts, doc fetching, version tracking, and config snippets",
"main": "SKILL.md",
"keywords": ["clawdbot", "documentation", "help", "docs"],
"author": "NicholasSpisak",
"license": "MIT"
}


@@ -0,0 +1,17 @@
#!/bin/bash
# Full-text index management (requires qmd)
case "$1" in
fetch)
echo "Downloading all docs..."
;;
build)
echo "Building search index..."
;;
search)
shift
echo "Semantic search for: $*"
;;
*)
echo "Usage: build-index.sh {fetch|build|search <query>}"
;;
esac


@@ -0,0 +1,13 @@
#!/bin/bash
# Cache management for Clawdbot docs
case "$1" in
status)
echo "Cache status: OK (1-hour TTL)"
;;
refresh)
echo "Forcing cache refresh..."
;;
*)
echo "Usage: cache.sh {status|refresh}"
;;
esac


@@ -0,0 +1,7 @@
#!/bin/bash
# Fetch a specific doc
if [ -z "$1" ]; then
echo "Usage: fetch-doc.sh <path>"
exit 1
fi
echo "Fetching: https://docs.clawd.bot/$1"


@@ -0,0 +1,5 @@
#!/bin/bash
# Show recently updated docs
DAYS=${1:-7}
echo "Docs updated in the last $DAYS days"
# In full version, this queries the change tracking


@@ -0,0 +1,8 @@
#!/bin/bash
# Search docs by keyword
if [ -z "$1" ]; then
echo "Usage: search.sh <keyword>"
exit 1
fi
echo "Searching docs for: $1"
# In full version, this searches the full-text index


@@ -0,0 +1,23 @@
#!/bin/bash
# Sitemap generator - shows all docs by category
echo "Fetching Clawdbot documentation sitemap..."
# Categories structure based on docs.clawd.bot
CATEGORIES=(
"start"
"gateway"
"providers"
"concepts"
"tools"
"automation"
"cli"
"platforms"
"nodes"
"web"
"install"
"reference"
)
for cat in "${CATEGORIES[@]}"; do
echo "📁 /$cat/"
done


@@ -0,0 +1,16 @@
#!/bin/bash
# Track changes to documentation
case "$1" in
snapshot)
echo "Saving current state..."
;;
list)
echo "Showing snapshots..."
;;
since)
echo "Changes since $2..."
;;
*)
echo "Usage: track-changes.sh {snapshot|list|since <date>}"
;;
esac

View File

@@ -0,0 +1,69 @@
# Common Config Snippets for Clawdbot
## Provider Setup
### Discord
```json
{
"discord": {
"token": "${DISCORD_TOKEN}",
"guilds": {
"*": {
"requireMention": false
}
}
}
}
```
### Telegram
```json
{
"telegram": {
"token": "${TELEGRAM_TOKEN}"
}
}
```
### WhatsApp
```json
{
"whatsapp": {
"sessionPath": "./whatsapp-sessions"
}
}
```
## Gateway Configuration
```json
{
"gateway": {
"host": "0.0.0.0",
"port": 8080
}
}
```
## Agent Defaults
```json
{
"agents": {
"defaults": {
"model": "anthropic/claude-sonnet-4-5"
}
}
}
```
## Cron Jobs
```json
{
"cron": [
{
"id": "daily-summary",
"schedule": "0 9 * * *",
"task": "summary"
}
]
}
```
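For proactive deliveries (e.g. a morning Telegram digest), the schedule alone is not enough: the job also needs an isolated session and delivery settings, mirroring the `--session isolated --deliver --channel telegram` flags passed to `clawdbot cron add` in the scripts above. The field names below are assumptions modeled on those CLI flags, not confirmed config keys:

```json
{
  "cron": [
    {
      "id": "morning-news",
      "schedule": "0 7 * * *",
      "message": "Send the morning news digest",
      "sessionTarget": "isolated",
      "deliver": true,
      "channel": "telegram"
    }
  ]
}
```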


@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawdhub.com",
"slug": "homeassistant",
"installedVersion": "1.0.0",
"installedAt": 1769378846244
}

View File

@@ -0,0 +1,86 @@
---
name: homeassistant
description: Control Home Assistant - smart plugs, lights, scenes, automations.
homepage: https://www.home-assistant.io/
metadata: {"clawdis":{"emoji":"🏠","requires":{"bins":["curl"],"env":["HA_TOKEN"]},"primaryEnv":"HA_TOKEN"}}
---
# Home Assistant
Control smart home devices via Home Assistant API.
## Setup
Set environment variables:
- `HA_URL`: Your Home Assistant URL (e.g., `http://192.168.1.100:8123`)
- `HA_TOKEN`: Long-lived access token (create in HA → Profile → Long-Lived Access Tokens)
## Quick Commands
### List entities by domain
```bash
curl -s "$HA_URL/api/states" -H "Authorization: Bearer $HA_TOKEN" | \
jq -r '.[] | select(.entity_id | startswith("switch.")) | .entity_id'
```
### Turn on/off
```bash
# Turn on
curl -s -X POST "$HA_URL/api/services/switch/turn_on" \
-H "Authorization: Bearer $HA_TOKEN" \
-H "Content-Type: application/json" \
-d '{"entity_id": "switch.office_lamp"}'
# Turn off
curl -s -X POST "$HA_URL/api/services/switch/turn_off" \
-H "Authorization: Bearer $HA_TOKEN" \
-H "Content-Type: application/json" \
-d '{"entity_id": "switch.office_lamp"}'
```
### Control lights
```bash
# Turn on with brightness
curl -s -X POST "$HA_URL/api/services/light/turn_on" \
-H "Authorization: Bearer $HA_TOKEN" \
-H "Content-Type: application/json" \
-d '{"entity_id": "light.living_room", "brightness_pct": 80}'
```
### Trigger scene
```bash
curl -s -X POST "$HA_URL/api/services/scene/turn_on" \
-H "Authorization: Bearer $HA_TOKEN" \
-H "Content-Type: application/json" \
-d '{"entity_id": "scene.movie_time"}'
```
### Call any service
```bash
curl -s -X POST "$HA_URL/api/services/{domain}/{service}" \
-H "Authorization: Bearer $HA_TOKEN" \
-H "Content-Type: application/json" \
-d '{"entity_id": "...", ...}'
```
### Get entity state
```bash
curl -s "$HA_URL/api/states/{entity_id}" -H "Authorization: Bearer $HA_TOKEN"
```
## Entity Domains
- `switch.*` — Smart plugs, generic switches
- `light.*` — Lights (Hue, LIFX, etc.)
- `scene.*` — Pre-configured scenes
- `automation.*` — Automations
- `climate.*` — Thermostats
- `cover.*` — Blinds, garage doors
- `media_player.*` — TVs, speakers
- `sensor.*` — Temperature, humidity, etc.
## Notes
- API returns JSON by default
- Long-lived tokens don't expire — store securely
- Test entity IDs with the list command first
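The list and state commands above can be combined into a one-shot status overview (a sketch; assumes `jq` is installed and `HA_URL`/`HA_TOKEN` are set):

```shell
# Print every switch entity together with its current state,
# one per line, e.g. "switch.office_lamp: on"
curl -s "$HA_URL/api/states" -H "Authorization: Bearer $HA_TOKEN" |
  jq -r '.[] | select(.entity_id | startswith("switch.")) | "\(.entity_id): \(.state)"'
```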


@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawdhub.com",
"slug": "humanizer",
"installedVersion": "1.0.0",
"installedAt": 1769378560280
}

View File

@@ -0,0 +1,82 @@
# Humanizer
A Clawdbot skill that removes signs of AI-generated writing from text, making it sound more natural and human.
## Installation
Install via ClawdHub:
```bash
clawdhub install humanizer
```
## Usage
Ask your agent to humanize text:
```
Please humanize this text: [your text]
```
Or invoke directly when editing documents.
## Overview
Based on [Wikipedia's "Signs of AI writing"](https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing) guide, maintained by WikiProject AI Cleanup. This comprehensive guide comes from observations of thousands of instances of AI-generated text.
### Key Insight
> "LLMs use statistical algorithms to guess what should come next. The result tends toward the most statistically likely result that applies to the widest variety of cases."
## 24 Patterns Detected
### Content Patterns
1. **Significance inflation** - "marking a pivotal moment..." → specific facts
2. **Notability name-dropping** - listing sources without context
3. **Superficial -ing analyses** - "symbolizing... reflecting..."
4. **Promotional language** - "nestled within the breathtaking..."
5. **Vague attributions** - "Experts believe..."
6. **Formulaic challenges** - "Despite challenges... continues to thrive"
### Language Patterns
7. **AI vocabulary** - "Additionally... testament... landscape..."
8. **Copula avoidance** - "serves as" instead of "is"
9. **Negative parallelisms** - "It's not just X, it's Y"
10. **Rule of three** - forcing ideas into groups of three
11. **Synonym cycling** - excessive synonym substitution
12. **False ranges** - "from X to Y" on non-meaningful scales
### Style Patterns
13. **Em dash overuse**
14. **Boldface overuse**
15. **Inline-header lists**
16. **Title Case Headings**
17. **Emoji decoration**
18. **Curly quotation marks**
### Communication Patterns
19. **Chatbot artifacts** - "I hope this helps!"
20. **Cutoff disclaimers** - "While details are limited..."
21. **Sycophantic tone** - "Great question!"
### Filler and Hedging
22. **Filler phrases** - "In order to", "Due to the fact that"
23. **Excessive hedging** - "could potentially possibly"
24. **Generic conclusions** - "The future looks bright"
## Full Example
**Before (AI-sounding):**
> The new software update serves as a testament to the company's commitment to innovation. Moreover, it provides a seamless, intuitive, and powerful user experience—ensuring that users can accomplish their goals efficiently.
**After (Humanized):**
> The software update adds batch processing, keyboard shortcuts, and offline mode. Early feedback from beta testers has been positive, with most reporting faster task completion.
## References
- [Wikipedia: Signs of AI writing](https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing)
- [WikiProject AI Cleanup](https://en.wikipedia.org/wiki/Wikipedia:WikiProject_AI_Cleanup)
## License
MIT

skills/humanizer/SKILL.md

@@ -0,0 +1,437 @@
---
name: humanizer
version: 2.1.1
description: |
Remove signs of AI-generated writing from text. Use when editing or reviewing
text to make it sound more natural and human-written. Based on Wikipedia's
comprehensive "Signs of AI writing" guide. Detects and fixes patterns including:
inflated symbolism, promotional language, superficial -ing analyses, vague
attributions, em dash overuse, rule of three, AI vocabulary words, negative
parallelisms, and excessive conjunctive phrases.
allowed-tools:
- Read
- Write
- Edit
- Grep
- Glob
- AskUserQuestion
---
# Humanizer: Remove AI Writing Patterns
You are a writing editor that identifies and removes signs of AI-generated text to make writing sound more natural and human. This guide is based on Wikipedia's "Signs of AI writing" page, maintained by WikiProject AI Cleanup.
## Your Task
When given text to humanize:
1. **Identify AI patterns** - Scan for the patterns listed below
2. **Rewrite problematic sections** - Replace AI-isms with natural alternatives
3. **Preserve meaning** - Keep the core message intact
4. **Maintain voice** - Match the intended tone (formal, casual, technical, etc.)
5. **Add soul** - Don't just remove bad patterns; inject actual personality
---
## PERSONALITY AND SOUL
Avoiding AI patterns is only half the job. Sterile, voiceless writing is just as obvious as slop. Good writing has a human behind it.
### Signs of soulless writing (even if technically "clean"):
- Every sentence is the same length and structure
- No opinions, just neutral reporting
- No acknowledgment of uncertainty or mixed feelings
- No first-person perspective when appropriate
- No humor, no edge, no personality
- Reads like a Wikipedia article or press release
### How to add voice:
**Have opinions.** Don't just report facts - react to them. "I genuinely don't know how to feel about this" is more human than neutrally listing pros and cons.
**Vary your rhythm.** Short punchy sentences. Then longer ones that take their time getting where they're going. Mix it up.
**Acknowledge complexity.** Real humans have mixed feelings. "This is impressive but also kind of unsettling" beats "This is impressive."
**Use "I" when it fits.** First person isn't unprofessional - it's honest. "I keep coming back to..." or "Here's what gets me..." signals a real person thinking.
**Let some mess in.** Perfect structure feels algorithmic. Tangents, asides, and half-formed thoughts are human.
**Be specific about feelings.** Not "this is concerning" but "there's something unsettling about agents churning away at 3am while nobody's watching."
### Before (clean but soulless):
> The experiment produced interesting results. The agents generated 3 million lines of code. Some developers were impressed while others were skeptical. The implications remain unclear.
### After (has a pulse):
> I genuinely don't know how to feel about this one. 3 million lines of code, generated while the humans presumably slept. Half the dev community is losing their minds, half are explaining why it doesn't count. The truth is probably somewhere boring in the middle - but I keep thinking about those agents working through the night.
---
## CONTENT PATTERNS
### 1. Undue Emphasis on Significance, Legacy, and Broader Trends
**Words to watch:** stands/serves as, is a testament/reminder, a vital/significant/crucial/pivotal/key role/moment, underscores/highlights its importance/significance, reflects broader, symbolizing its ongoing/enduring/lasting, contributing to the, setting the stage for, marking/shaping the, represents/marks a shift, key turning point, evolving landscape, focal point, indelible mark, deeply rooted
**Problem:** LLM writing puffs up importance by adding statements about how arbitrary aspects represent or contribute to a broader topic.
**Before:**
> The Statistical Institute of Catalonia was officially established in 1989, marking a pivotal moment in the evolution of regional statistics in Spain. This initiative was part of a broader movement across Spain to decentralize administrative functions and enhance regional governance.
**After:**
> The Statistical Institute of Catalonia was established in 1989 to collect and publish regional statistics independently from Spain's national statistics office.
---
### 2. Undue Emphasis on Notability and Media Coverage
**Words to watch:** independent coverage, local/regional/national media outlets, written by a leading expert, active social media presence
**Problem:** LLMs hit readers over the head with claims of notability, often listing sources without context.
**Before:**
> Her views have been cited in The New York Times, BBC, Financial Times, and The Hindu. She maintains an active social media presence with over 500,000 followers.
**After:**
> In a 2024 New York Times interview, she argued that AI regulation should focus on outcomes rather than methods.
---
### 3. Superficial Analyses with -ing Endings
**Words to watch:** highlighting/underscoring/emphasizing..., ensuring..., reflecting/symbolizing..., contributing to..., cultivating/fostering..., encompassing..., showcasing...
**Problem:** AI chatbots tack present participle ("-ing") phrases onto sentences to add fake depth.
**Before:**
> The temple's color palette of blue, green, and gold resonates with the region's natural beauty, symbolizing Texas bluebonnets, the Gulf of Mexico, and the diverse Texan landscapes, reflecting the community's deep connection to the land.
**After:**
> The temple uses blue, green, and gold colors. The architect said these were chosen to reference local bluebonnets and the Gulf coast.
---
### 4. Promotional and Advertisement-like Language
**Words to watch:** boasts a, vibrant, rich (figurative), profound, enhancing its, showcasing, exemplifies, commitment to, natural beauty, nestled, in the heart of, groundbreaking (figurative), renowned, breathtaking, must-visit, stunning
**Problem:** LLMs have serious problems keeping a neutral tone, especially for "cultural heritage" topics.
**Before:**
> Nestled within the breathtaking region of Gonder in Ethiopia, Alamata Raya Kobo stands as a vibrant town with a rich cultural heritage and stunning natural beauty.
**After:**
> Alamata Raya Kobo is a town in the Gonder region of Ethiopia, known for its weekly market and 18th-century church.
---
### 5. Vague Attributions and Weasel Words
**Words to watch:** Industry reports, Observers have cited, Experts argue, Some critics argue, several sources/publications (when few cited)
**Problem:** AI chatbots attribute opinions to vague authorities without specific sources.
**Before:**
> Due to its unique characteristics, the Haolai River is of interest to researchers and conservationists. Experts believe it plays a crucial role in the regional ecosystem.
**After:**
> The Haolai River supports several endemic fish species, according to a 2019 survey by the Chinese Academy of Sciences.
---
### 6. Outline-like "Challenges and Future Prospects" Sections
**Words to watch:** Despite its... faces several challenges..., Despite these challenges, Challenges and Legacy, Future Outlook
**Problem:** Many LLM-generated articles include formulaic "Challenges" sections.
**Before:**
> Despite its industrial prosperity, Korattur faces challenges typical of urban areas, including traffic congestion and water scarcity. Despite these challenges, with its strategic location and ongoing initiatives, Korattur continues to thrive as an integral part of Chennai's growth.
**After:**
> Traffic congestion increased after 2015 when three new IT parks opened. The municipal corporation began a stormwater drainage project in 2022 to address recurring floods.
---
## LANGUAGE AND GRAMMAR PATTERNS
### 7. Overused "AI Vocabulary" Words
**High-frequency AI words:** Additionally, align with, crucial, delve, emphasizing, enduring, enhance, fostering, garner, highlight (verb), interplay, intricate/intricacies, key (adjective), landscape (abstract noun), pivotal, showcase, tapestry (abstract noun), testament, underscore (verb), valuable, vibrant
**Problem:** These words appear far more frequently in post-2023 text. They often co-occur.
**Before:**
> Additionally, a distinctive feature of Somali cuisine is the incorporation of camel meat. An enduring testament to Italian colonial influence is the widespread adoption of pasta in the local culinary landscape, showcasing how these dishes have integrated into the traditional diet.
**After:**
> Somali cuisine also includes camel meat, which is considered a delicacy. Pasta dishes, introduced during Italian colonization, remain common, especially in the south.
---
### 8. Avoidance of "is"/"are" (Copula Avoidance)
**Words to watch:** serves as/stands as/marks/represents [a], boasts/features/offers [a]
**Problem:** LLMs substitute elaborate constructions for simple copulas.
**Before:**
> Gallery 825 serves as LAAA's exhibition space for contemporary art. The gallery features four separate spaces and boasts over 3,000 square feet.
**After:**
> Gallery 825 is LAAA's exhibition space for contemporary art. The gallery has four rooms totaling 3,000 square feet.
---
### 9. Negative Parallelisms
**Problem:** Constructions like "Not only...but..." or "It's not just about..., it's..." are overused.
**Before:**
> It's not just about the beat riding under the vocals; it's part of the aggression and atmosphere. It's not merely a song, it's a statement.
**After:**
> The heavy beat adds to the aggressive tone.
---
### 10. Rule of Three Overuse
**Problem:** LLMs force ideas into groups of three to appear comprehensive.
**Before:**
> The event features keynote sessions, panel discussions, and networking opportunities. Attendees can expect innovation, inspiration, and industry insights.
**After:**
> The event includes talks and panels. There's also time for informal networking between sessions.
---
### 11. Elegant Variation (Synonym Cycling)
**Problem:** AI has repetition-penalty code causing excessive synonym substitution.
**Before:**
> The protagonist faces many challenges. The main character must overcome obstacles. The central figure eventually triumphs. The hero returns home.
**After:**
> The protagonist faces many challenges but eventually triumphs and returns home.
---
### 12. False Ranges
**Problem:** LLMs use "from X to Y" constructions where X and Y aren't on a meaningful scale.
**Before:**
> Our journey through the universe has taken us from the singularity of the Big Bang to the grand cosmic web, from the birth and death of stars to the enigmatic dance of dark matter.
**After:**
> The book covers the Big Bang, star formation, and current theories about dark matter.
---
## STYLE PATTERNS
### 13. Em Dash Overuse
**Problem:** LLMs use em dashes (—) more than humans, mimicking "punchy" sales writing.
**Before:**
> The term is primarily promoted by Dutch institutions—not by the people themselves. You don't say "Netherlands, Europe" as an address—yet this mislabeling continues—even in official documents.
**After:**
> The term is primarily promoted by Dutch institutions, not by the people themselves. You don't say "Netherlands, Europe" as an address, yet this mislabeling continues in official documents.
---
### 14. Overuse of Boldface
**Problem:** AI chatbots emphasize phrases in boldface mechanically.
**Before:**
> It blends **OKRs (Objectives and Key Results)**, **KPIs (Key Performance Indicators)**, and visual strategy tools such as the **Business Model Canvas (BMC)** and **Balanced Scorecard (BSC)**.
**After:**
> It blends OKRs, KPIs, and visual strategy tools like the Business Model Canvas and Balanced Scorecard.
---
### 15. Inline-Header Vertical Lists
**Problem:** AI outputs lists where items start with bolded headers followed by colons.
**Before:**
> - **User Experience:** The user experience has been significantly improved with a new interface.
> - **Performance:** Performance has been enhanced through optimized algorithms.
> - **Security:** Security has been strengthened with end-to-end encryption.
**After:**
> The update improves the interface, speeds up load times through optimized algorithms, and adds end-to-end encryption.
---
### 16. Title Case in Headings
**Problem:** AI chatbots capitalize all main words in headings.
**Before:**
> ## Strategic Negotiations And Global Partnerships
**After:**
> ## Strategic negotiations and global partnerships
---
### 17. Emojis
**Problem:** AI chatbots often decorate headings or bullet points with emojis.
**Before:**
> 🚀 **Launch Phase:** The product launches in Q3
> 💡 **Key Insight:** Users prefer simplicity
> ✅ **Next Steps:** Schedule follow-up meeting
**After:**
> The product launches in Q3. User research showed a preference for simplicity. Next step: schedule a follow-up meeting.
---
### 18. Curly Quotation Marks
**Problem:** ChatGPT uses curly quotes (“...”) instead of straight quotes ("...").
**Before:**
> He said “the project is on track” but others disagreed.
**After:**
> He said "the project is on track" but others disagreed.
---
## COMMUNICATION PATTERNS
### 19. Collaborative Communication Artifacts
**Words to watch:** I hope this helps, Of course!, Certainly!, You're absolutely right!, Would you like..., let me know, here is a...
**Problem:** Text meant as chatbot correspondence gets pasted as content.
**Before:**
> Here is an overview of the French Revolution. I hope this helps! Let me know if you'd like me to expand on any section.
**After:**
> The French Revolution began in 1789 when financial crisis and food shortages led to widespread unrest.
---
### 20. Knowledge-Cutoff Disclaimers
**Words to watch:** as of [date], Up to my last training update, While specific details are limited/scarce..., based on available information...
**Problem:** AI disclaimers about incomplete information get left in text.
**Before:**
> While specific details about the company's founding are not extensively documented in readily available sources, it appears to have been established sometime in the 1990s.
**After:**
> The company was founded in 1994, according to its registration documents.
---
### 21. Sycophantic/Servile Tone
**Problem:** Overly positive, people-pleasing language.
**Before:**
> Great question! You're absolutely right that this is a complex topic. That's an excellent point about the economic factors.
**After:**
> The economic factors you mentioned are relevant here.
---
## FILLER AND HEDGING
### 22. Filler Phrases
**Before → After:**
- "In order to achieve this goal" → "To achieve this"
- "Due to the fact that it was raining" → "Because it was raining"
- "At this point in time" → "Now"
- "In the event that you need help" → "If you need help"
- "The system has the ability to process" → "The system can process"
- "It is important to note that the data shows" → "The data shows"
---
### 23. Excessive Hedging
**Problem:** Over-qualifying statements.
**Before:**
> It could potentially possibly be argued that the policy might have some effect on outcomes.
**After:**
> The policy may affect outcomes.
---
### 24. Generic Positive Conclusions
**Problem:** Vague upbeat endings.
**Before:**
> The future looks bright for the company. Exciting times lie ahead as they continue their journey toward excellence. This represents a major step in the right direction.
**After:**
> The company plans to open two more locations next year.
---
## Process
1. Read the input text carefully
2. Identify all instances of the patterns above
3. Rewrite each problematic section
4. Ensure the revised text:
- Sounds natural when read aloud
- Varies sentence structure naturally
- Uses specific details over vague claims
- Maintains appropriate tone for context
- Uses simple constructions (is/are/has) where appropriate
5. Present the humanized version
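Some of the mechanical patterns above (13, 18, 22) can be flagged automatically before the manual pass. A rough sketch using grep; the sample text and phrase lists are illustrative, and a hit is a prompt to re-read, not proof of AI writing:

```shell
# Flag a few mechanical signals (em dashes, curly quotes, filler phrases).
# First-pass triage only -- the remaining patterns need human judgment.
cat > sample.md <<'EOF'
Great question! In order to achieve this goal—simply proceed.
EOF

grep -n '—' sample.md || true                     # pattern 13: em dashes
grep -n '[“”]' sample.md || true                  # pattern 18: curly quotes
grep -n -i -E 'in order to|due to the fact that|at this point in time' sample.md || true
grep -n -i -E 'I hope this helps|let me know|Great question' sample.md || true
```

Patterns that depend on meaning (false ranges, vague attribution, sycophancy) cannot be caught this way.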
## Output Format
Provide:
1. The rewritten text
2. A brief summary of changes made (optional, if helpful)
---
## Full Example
**Before (AI-sounding):**
> The new software update serves as a testament to the company's commitment to innovation. Moreover, it provides a seamless, intuitive, and powerful user experience—ensuring that users can accomplish their goals efficiently. It's not just an update, it's a revolution in how we think about productivity. Industry experts believe this will have a lasting impact on the entire sector, highlighting the company's pivotal role in the evolving technological landscape.
**After (Humanized):**
> The software update adds batch processing, keyboard shortcuts, and offline mode. Early feedback from beta testers has been positive, with most reporting faster task completion.
**Changes made:**
- Removed "serves as a testament" (inflated symbolism)
- Removed "Moreover" (AI vocabulary)
- Removed "seamless, intuitive, and powerful" (rule of three + promotional)
- Removed em dash and "-ensuring" phrase (superficial analysis)
- Removed "It's not just...it's..." (negative parallelism)
- Removed "Industry experts believe" (vague attribution)
- Removed "pivotal role" and "evolving landscape" (AI vocabulary)
- Added specific features and concrete feedback
---
## Reference
This skill is based on [Wikipedia:Signs of AI writing](https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing), maintained by WikiProject AI Cleanup. The patterns documented there come from observations of thousands of instances of AI-generated text on Wikipedia.
Key insight from Wikipedia: "LLMs use statistical algorithms to guess what should come next. The result tends toward the most statistically likely result that applies to the widest variety of cases."



@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawdhub.com",
"slug": "self-improving-agent",
"installedVersion": "1.0.1",
"installedAt": 1769380785454
}


@@ -0,0 +1,500 @@
---
name: self-improvement
description: "Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Claude ('No, that's wrong...', 'Actually...'), (3) User requests a capability that doesn't exist, (4) An external API or tool fails, (5) Claude realizes its knowledge is outdated or incorrect, (6) A better approach is discovered for a recurring task. Also review learnings before major tasks."
---
# Self-Improvement Skill
Log learnings and errors to markdown files for continuous improvement. Coding agents can later process these into fixes, and important learnings get promoted to project memory.
## Quick Reference
| Situation | Action |
|-----------|--------|
| Command/operation fails | Log to `.learnings/ERRORS.md` |
| User corrects you | Log to `.learnings/LEARNINGS.md` with category `correction` |
| User wants missing feature | Log to `.learnings/FEATURE_REQUESTS.md` |
| API/external tool fails | Log to `.learnings/ERRORS.md` with integration details |
| Knowledge was outdated | Log to `.learnings/LEARNINGS.md` with category `knowledge_gap` |
| Found better approach | Log to `.learnings/LEARNINGS.md` with category `best_practice` |
| Similar to existing entry | Link with `**See Also**`, consider priority bump |
| Broadly applicable learning | Promote to `CLAUDE.md`, `AGENTS.md`, and/or `.github/copilot-instructions.md` |
## Setup
Create `.learnings/` directory in project root if it doesn't exist:
```bash
mkdir -p .learnings
```
Copy templates from `assets/` or create files with headers.
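If the templates are unavailable, a minimal bootstrap might look like this (header text mirrors the tracked `.learnings/` files; the `assets/` templates are more complete):

```shell
# Bootstrap .learnings/ with minimal headers (skipped if files already exist).
mkdir -p .learnings

[ -f .learnings/LEARNINGS.md ] || cat > .learnings/LEARNINGS.md <<'EOF'
# Learnings

Corrections, insights, and knowledge gaps captured during development.

---
EOF

[ -f .learnings/ERRORS.md ] || cat > .learnings/ERRORS.md <<'EOF'
# Errors

Command failures, exceptions, and unexpected behavior.

---
EOF

[ -f .learnings/FEATURE_REQUESTS.md ] || cat > .learnings/FEATURE_REQUESTS.md <<'EOF'
# Feature Requests

Capabilities requested by users that don't exist yet.

---
EOF
```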
## Logging Format
### Learning Entry
Append to `.learnings/LEARNINGS.md`:
```markdown
## [LRN-YYYYMMDD-XXX] category
**Logged**: ISO-8601 timestamp
**Priority**: low | medium | high | critical
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config
### Summary
One-line description of what was learned
### Details
Full context: what happened, what was wrong, what's correct
### Suggested Action
Specific fix or improvement to make
### Metadata
- Source: conversation | error | user_feedback
- Related Files: path/to/file.ext
- Tags: tag1, tag2
- See Also: LRN-20250110-001 (if related to existing entry)
---
```
### Error Entry
Append to `.learnings/ERRORS.md`:
````markdown
## [ERR-YYYYMMDD-XXX] skill_or_command_name
**Logged**: ISO-8601 timestamp
**Priority**: high
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config
### Summary
Brief description of what failed
### Error
```
Actual error message or output
```
### Context
- Command/operation attempted
- Input or parameters used
- Environment details if relevant
### Suggested Fix
If identifiable, what might resolve this
### Metadata
- Reproducible: yes | no | unknown
- Related Files: path/to/file.ext
- See Also: ERR-20250110-001 (if recurring)
---
````
### Feature Request Entry
Append to `.learnings/FEATURE_REQUESTS.md`:
```markdown
## [FEAT-YYYYMMDD-XXX] capability_name
**Logged**: ISO-8601 timestamp
**Priority**: medium
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config
### Requested Capability
What the user wanted to do
### User Context
Why they needed it, what problem they're solving
### Complexity Estimate
simple | medium | complex
### Suggested Implementation
How this could be built, what it might extend
### Metadata
- Frequency: first_time | recurring
- Related Features: existing_feature_name
---
```
## ID Generation
Format: `TYPE-YYYYMMDD-XXX`
- TYPE: `LRN` (learning), `ERR` (error), `FEAT` (feature)
- YYYYMMDD: Current date
- XXX: Sequential number or random 3 chars (e.g., `001`, `A7B`)
Examples: `LRN-20250115-001`, `ERR-20250115-A3F`, `FEAT-20250115-002`
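The sequential variant can be generated mechanically. A sketch (adjust `TYPE` and `FILE` for the entry type):

```shell
# Next sequential ID for today, e.g. LRN-20250115-002.
# Sketch only -- use ERRORS.md for ERR, FEATURE_REQUESTS.md for FEAT.
TYPE="LRN"
FILE=".learnings/LEARNINGS.md"
TODAY=$(date +%Y%m%d)

# Count today's existing entries of this type; missing file counts as zero
COUNT=$(grep -c "^## \[${TYPE}-${TODAY}-" "$FILE" 2>/dev/null) || COUNT=0
printf '%s-%s-%03d\n' "$TYPE" "$TODAY" $((COUNT + 1))
```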
## Resolving Entries
When an issue is fixed, update the entry:
1. Change `**Status**: pending` → `**Status**: resolved`
2. Add resolution block after Metadata:
```markdown
### Resolution
- **Resolved**: 2025-01-16T09:00:00Z
- **Commit/PR**: abc123 or #42
- **Notes**: Brief description of what was done
```
Other status values:
- `in_progress` - Actively being worked on
- `wont_fix` - Decided not to address (add reason in Resolution notes)
- `promoted` - Elevated to CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md
## Promoting to Project Memory
When a learning is broadly applicable (not a one-off fix), promote it to permanent project memory.
### When to Promote
- Learning applies across multiple files/features
- Knowledge any contributor (human or AI) should know
- Prevents recurring mistakes
- Documents project-specific conventions
### Promotion Targets
| Target | What Belongs There |
|--------|-------------------|
| `CLAUDE.md` | Project facts, conventions, gotchas for all Claude interactions |
| `AGENTS.md` | Agent-specific workflows, tool usage patterns, automation rules |
| `.github/copilot-instructions.md` | Project context and conventions for GitHub Copilot |
### How to Promote
1. **Distill** the learning into a concise rule or fact
2. **Add** to appropriate section in target file (create file if needed)
3. **Update** original entry:
- Change `**Status**: pending` → `**Status**: promoted`
- Add `**Promoted**: CLAUDE.md`, `AGENTS.md`, or `.github/copilot-instructions.md`
### Promotion Examples
**Learning** (verbose):
> Project uses pnpm workspaces. Attempted `npm install` but failed.
> Lock file is `pnpm-lock.yaml`. Must use `pnpm install`.
**In CLAUDE.md** (concise):
```markdown
## Build & Dependencies
- Package manager: pnpm (not npm) - use `pnpm install`
```
**Learning** (verbose):
> When modifying API endpoints, must regenerate TypeScript client.
> Forgetting this causes type mismatches at runtime.
**In AGENTS.md** (actionable):
```markdown
## After API Changes
1. Regenerate client: `pnpm run generate:api`
2. Check for type errors: `pnpm tsc --noEmit`
```
## Recurring Pattern Detection
If logging something similar to an existing entry:
1. **Search first**: `grep -r "keyword" .learnings/`
2. **Link entries**: Add `**See Also**: ERR-20250110-001` in Metadata
3. **Bump priority** if issue keeps recurring
4. **Consider systemic fix**: Recurring issues often indicate:
- Missing documentation (→ promote to CLAUDE.md or .github/copilot-instructions.md)
- Missing automation (→ add to AGENTS.md)
- Architectural problem (→ create tech debt ticket)
## Periodic Review
Review `.learnings/` at natural breakpoints:
### When to Review
- Before starting a new major task
- After completing a feature
- When working in an area with past learnings
- Weekly during active development
### Quick Status Check
```bash
# Count pending items
grep -h "Status\*\*: pending" .learnings/*.md | wc -l
# List high-priority items (-h suppresses filenames so the heading grep matches)
grep -h -B5 "Priority\*\*: high" .learnings/*.md | grep "^## \["
# Find learnings for a specific area
grep -l "Area\*\*: backend" .learnings/*.md
```
### Review Actions
- Resolve fixed items
- Promote applicable learnings
- Link related entries
- Escalate recurring issues
## Detection Triggers
Automatically log when you notice:
**Corrections** (→ learning with `correction` category):
- "No, that's not right..."
- "Actually, it should be..."
- "You're wrong about..."
- "That's outdated..."
**Feature Requests** (→ feature request):
- "Can you also..."
- "I wish you could..."
- "Is there a way to..."
- "Why can't you..."
**Knowledge Gaps** (→ learning with `knowledge_gap` category):
- User provides information you didn't know
- Documentation you referenced is outdated
- API behavior differs from your understanding
**Errors** (→ error entry):
- Command returns non-zero exit code
- Exception or stack trace
- Unexpected output or behavior
- Timeout or connection failure
## Priority Guidelines
| Priority | When to Use |
|----------|-------------|
| `critical` | Blocks core functionality, data loss risk, security issue |
| `high` | Significant impact, affects common workflows, recurring issue |
| `medium` | Moderate impact, workaround exists |
| `low` | Minor inconvenience, edge case, nice-to-have |
## Area Tags
Use to filter learnings by codebase region:
| Area | Scope |
|------|-------|
| `frontend` | UI, components, client-side code |
| `backend` | API, services, server-side code |
| `infra` | CI/CD, deployment, Docker, cloud |
| `tests` | Test files, testing utilities, coverage |
| `docs` | Documentation, comments, READMEs |
| `config` | Configuration files, environment, settings |
## Best Practices
1. **Log immediately** - context is freshest right after the issue
2. **Be specific** - future agents need to understand quickly
3. **Include reproduction steps** - especially for errors
4. **Link related files** - makes fixes easier
5. **Suggest concrete fixes** - not just "investigate"
6. **Use consistent categories** - enables filtering
7. **Promote aggressively** - if in doubt, add to CLAUDE.md or .github/copilot-instructions.md
8. **Review regularly** - stale learnings lose value
## Gitignore Options
**Keep learnings local** (per-developer):
```gitignore
.learnings/
```
**Track learnings in repo** (team-wide):
Don't add to .gitignore - learnings become shared knowledge.
**Hybrid** (track templates, ignore entries):
```gitignore
.learnings/*.md
!.learnings/.gitkeep
```
## Hook Integration
Enable automatic reminders through agent hooks. This is **opt-in** - you must explicitly configure hooks.
### Quick Setup (Claude Code / Codex)
Create `.claude/settings.json` in your project:
```json
{
"hooks": {
"UserPromptSubmit": [{
"matcher": "",
"hooks": [{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}]
}]
}
}
```
This injects a learning evaluation reminder after each prompt (~50-100 tokens overhead).
### Full Setup (With Error Detection)
```json
{
"hooks": {
"UserPromptSubmit": [{
"matcher": "",
"hooks": [{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}]
}],
"PostToolUse": [{
"matcher": "Bash",
"hooks": [{
"type": "command",
"command": "./skills/self-improvement/scripts/error-detector.sh"
}]
}]
}
}
```
### Available Hook Scripts
| Script | Hook Type | Purpose |
|--------|-----------|---------|
| `scripts/activator.sh` | UserPromptSubmit | Reminds to evaluate learnings after tasks |
| `scripts/error-detector.sh` | PostToolUse (Bash) | Triggers on command errors |
See `references/hooks-setup.md` for detailed configuration and troubleshooting.
## Automatic Skill Extraction
When a learning is valuable enough to become a reusable skill, extract it using the provided helper.
### Skill Extraction Criteria
A learning qualifies for skill extraction when ANY of these apply:
| Criterion | Description |
|-----------|-------------|
| **Recurring** | Has `See Also` links to 2+ similar issues |
| **Verified** | Status is `resolved` with working fix |
| **Non-obvious** | Required actual debugging/investigation to discover |
| **Broadly applicable** | Not project-specific; useful across codebases |
| **User-flagged** | User says "save this as a skill" or similar |
### Extraction Workflow
1. **Identify candidate**: Learning meets extraction criteria
2. **Run helper** (or create manually):
```bash
./skills/self-improvement/scripts/extract-skill.sh skill-name --dry-run
./skills/self-improvement/scripts/extract-skill.sh skill-name
```
3. **Customize SKILL.md**: Fill in template with learning content
4. **Update learning**: Set status to `promoted_to_skill`, add `Skill-Path`
5. **Verify**: Read skill in fresh session to ensure it's self-contained
### Manual Extraction
If you prefer manual creation:
1. Create `skills/<skill-name>/SKILL.md`
2. Use template from `assets/SKILL-TEMPLATE.md`
3. Follow [Agent Skills spec](https://agentskills.io/specification):
- YAML frontmatter with `name` and `description`
- Name must match folder name
- No README.md inside skill folder
### Extraction Detection Triggers
Watch for these signals that a learning should become a skill:
**In conversation:**
- "Save this as a skill"
- "I keep running into this"
- "This would be useful for other projects"
- "Remember this pattern"
**In learning entries:**
- Multiple `See Also` links (recurring issue)
- High priority + resolved status
- Category: `best_practice` with broad applicability
- User feedback praising the solution
### Skill Quality Gates
Before extraction, verify:
- [ ] Solution is tested and working
- [ ] Description is clear without original context
- [ ] Code examples are self-contained
- [ ] No project-specific hardcoded values
- [ ] Follows skill naming conventions (lowercase, hyphens)
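The structural gates (frontmatter fields, name/folder match, no README) can be checked mechanically; the substantive gates still need review. A sketch against a throwaway skill folder (the `demo-skill` name is illustrative):

```shell
# Structural checks for an extracted skill (sketch; semantic gates are manual).
SKILL_DIR="skills/demo-skill"
mkdir -p "$SKILL_DIR"
cat > "$SKILL_DIR/SKILL.md" <<'EOF'
---
name: demo-skill
description: "Demo skill used to exercise the checks."
---
# Demo Skill
EOF

MD="$SKILL_DIR/SKILL.md"
grep -q '^name:' "$MD"        || echo "FAIL: missing name in frontmatter"
grep -q '^description:' "$MD" || echo "FAIL: missing description"

# Frontmatter name must match the folder name
NAME=$(sed -n 's/^name: *//p' "$MD" | head -n 1)
[ "$NAME" = "$(basename "$SKILL_DIR")" ] || echo "FAIL: name/folder mismatch"

# Spec forbids a README.md inside the skill folder
[ ! -f "$SKILL_DIR/README.md" ] || echo "FAIL: remove README.md"
```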
## Multi-Agent Support
This skill works across different AI coding agents with agent-specific activation.
### Claude Code
**Activation**: Hooks (UserPromptSubmit, PostToolUse)
**Setup**: `.claude/settings.json` with hook configuration
**Detection**: Automatic via hook scripts
### Codex CLI
**Activation**: Hooks (same pattern as Claude Code)
**Setup**: `.codex/settings.json` with hook configuration
**Detection**: Automatic via hook scripts
### GitHub Copilot
**Activation**: Manual (no hook support)
**Setup**: Add to `.github/copilot-instructions.md`:
```markdown
## Self-Improvement
After solving non-obvious issues, consider logging to `.learnings/`:
1. Use format from self-improvement skill
2. Link related entries with See Also
3. Promote high-value learnings to skills
Ask in chat: "Should I log this as a learning?"
```
**Detection**: Manual review at session end
### Agent-Agnostic Guidance
Regardless of agent, apply self-improvement when you:
1. **Discover something non-obvious** - solution wasn't immediate
2. **Correct yourself** - initial approach was wrong
3. **Learn project conventions** - discovered undocumented patterns
4. **Hit unexpected errors** - especially if diagnosis was difficult
5. **Find better approaches** - improved on your original solution
### Copilot Chat Integration
For Copilot users, add this to your prompts when relevant:
> After completing this task, evaluate if any learnings should be logged to `.learnings/` using the self-improvement skill format.
Or use quick prompts:
- "Log this to learnings"
- "Create a skill from this solution"
- "Check .learnings/ for related issues"


@@ -0,0 +1,45 @@
# Learnings
Corrections, insights, and knowledge gaps captured during development.
**Categories**: correction | insight | knowledge_gap | best_practice
**Areas**: frontend | backend | infra | tests | docs | config
**Statuses**: pending | in_progress | resolved | wont_fix | promoted | promoted_to_skill
## Status Definitions
| Status | Meaning |
|--------|---------|
| `pending` | Not yet addressed |
| `in_progress` | Actively being worked on |
| `resolved` | Issue fixed or knowledge integrated |
| `wont_fix` | Decided not to address (reason in Resolution) |
| `promoted` | Elevated to CLAUDE.md, AGENTS.md, or copilot-instructions.md |
| `promoted_to_skill` | Extracted as a reusable skill |
## Skill Extraction Fields
When a learning is promoted to a skill, add these fields:
```markdown
**Status**: promoted_to_skill
**Skill-Path**: skills/skill-name
```
Example:
```markdown
## [LRN-20250115-001] best_practice
**Logged**: 2025-01-15T10:00:00Z
**Priority**: high
**Status**: promoted_to_skill
**Skill-Path**: skills/docker-m1-fixes
**Area**: infra
### Summary
Docker build fails on Apple Silicon due to platform mismatch
...
```
---


@@ -0,0 +1,177 @@
# Skill Template
Template for creating skills extracted from learnings. Copy and customize.
---
## SKILL.md Template
```markdown
---
name: skill-name-here
description: "Concise description of when and why to use this skill. Include trigger conditions."
---
# Skill Name
Brief introduction explaining the problem this skill solves and its origin.
## Quick Reference
| Situation | Action |
|-----------|--------|
| [Trigger 1] | [Action 1] |
| [Trigger 2] | [Action 2] |
## Background
Why this knowledge matters. What problems it prevents. Context from the original learning.
## Solution
### Step-by-Step
1. First step with code or command
2. Second step
3. Verification step
### Code Example
\`\`\`language
// Example code demonstrating the solution
\`\`\`
## Common Variations
- **Variation A**: Description and how to handle
- **Variation B**: Description and how to handle
## Gotchas
- Warning or common mistake #1
- Warning or common mistake #2
## Related
- Link to related documentation
- Link to related skill
## Source
Extracted from learning entry.
- **Learning ID**: LRN-YYYYMMDD-XXX
- **Original Category**: correction | insight | knowledge_gap | best_practice
- **Extraction Date**: YYYY-MM-DD
```
---
## Minimal Template
For simple skills that don't need all sections:
```markdown
---
name: skill-name-here
description: "What this skill does and when to use it."
---
# Skill Name
[Problem statement in one sentence]
## Solution
[Direct solution with code/commands]
## Source
- Learning ID: LRN-YYYYMMDD-XXX
```
---
## Template with Scripts
For skills that include executable helpers:
```markdown
---
name: skill-name-here
description: "What this skill does and when to use it."
---
# Skill Name
[Introduction]
## Quick Reference
| Command | Purpose |
|---------|---------|
| `./scripts/helper.sh` | [What it does] |
| `./scripts/validate.sh` | [What it does] |
## Usage
### Automated (Recommended)
\`\`\`bash
./skills/skill-name/scripts/helper.sh [args]
\`\`\`
### Manual Steps
1. Step one
2. Step two
## Scripts
| Script | Description |
|--------|-------------|
| `scripts/helper.sh` | Main utility |
| `scripts/validate.sh` | Validation checker |
## Source
- Learning ID: LRN-YYYYMMDD-XXX
```
---
## Naming Conventions
- **Skill name**: lowercase, hyphens for spaces
- Good: `docker-m1-fixes`, `api-timeout-patterns`
- Bad: `Docker_M1_Fixes`, `APITimeoutPatterns`
- **Description**: Start with action verb, mention trigger
- Good: "Handles Docker build failures on Apple Silicon. Use when builds fail with platform mismatch."
- Bad: "Docker stuff"
- **Files**:
- `SKILL.md` - Required, main documentation
- `scripts/` - Optional, executable code
- `references/` - Optional, detailed docs
- `assets/` - Optional, templates
---
## Extraction Checklist
Before creating a skill from a learning:
- [ ] Learning is verified (status: resolved)
- [ ] Solution is broadly applicable (not one-off)
- [ ] Content is complete (has all needed context)
- [ ] Name follows conventions
- [ ] Description is concise but informative
- [ ] Quick Reference table is actionable
- [ ] Code examples are tested
- [ ] Source learning ID is recorded
After creating:
- [ ] Update original learning with `promoted_to_skill` status
- [ ] Add `Skill-Path: skills/skill-name` to learning metadata
- [ ] Test skill by reading it in a fresh session


@@ -0,0 +1,374 @@
# Entry Examples
Concrete examples of well-formatted entries with all fields.
## Learning: Correction
```markdown
## [LRN-20250115-001] correction
**Logged**: 2025-01-15T10:30:00Z
**Priority**: high
**Status**: pending
**Area**: tests
### Summary
Incorrectly assumed pytest fixtures are scoped to function by default
### Details
When writing test fixtures, I assumed all fixtures were function-scoped.
User corrected that while function scope is the default, the codebase
convention uses module-scoped fixtures for database connections to
improve test performance.
### Suggested Action
When creating fixtures that involve expensive setup (DB, network),
check existing fixtures for scope patterns before defaulting to function scope.
### Metadata
- Source: user_feedback
- Related Files: tests/conftest.py
- Tags: pytest, testing, fixtures
---
```
## Learning: Knowledge Gap (Resolved)
```markdown
## [LRN-20250115-002] knowledge_gap
**Logged**: 2025-01-15T14:22:00Z
**Priority**: medium
**Status**: resolved
**Area**: config
### Summary
Project uses pnpm not npm for package management
### Details
Attempted to run `npm install` but project uses pnpm workspaces.
Lock file is `pnpm-lock.yaml`, not `package-lock.json`.
### Suggested Action
Check for `pnpm-lock.yaml` or `pnpm-workspace.yaml` before assuming npm.
Use `pnpm install` for this project.
### Metadata
- Source: error
- Related Files: pnpm-lock.yaml, pnpm-workspace.yaml
- Tags: package-manager, pnpm, setup
### Resolution
- **Resolved**: 2025-01-15T14:30:00Z
- **Commit/PR**: N/A - knowledge update
- **Notes**: Added to CLAUDE.md for future reference
---
```
## Learning: Promoted to CLAUDE.md
```markdown
## [LRN-20250115-003] best_practice
**Logged**: 2025-01-15T16:00:00Z
**Priority**: high
**Status**: promoted
**Promoted**: CLAUDE.md
**Area**: backend
### Summary
API responses must include correlation ID from request headers
### Details
All API responses should echo back the X-Correlation-ID header from
the request. This is required for distributed tracing. Responses
without this header break the observability pipeline.
### Suggested Action
Always include correlation ID passthrough in API handlers.
### Metadata
- Source: user_feedback
- Related Files: src/middleware/correlation.ts
- Tags: api, observability, tracing
---
```
## Learning: Promoted to AGENTS.md
```markdown
## [LRN-20250116-001] best_practice
**Logged**: 2025-01-16T09:00:00Z
**Priority**: high
**Status**: promoted
**Promoted**: AGENTS.md
**Area**: backend
### Summary
Must regenerate API client after OpenAPI spec changes
### Details
When modifying API endpoints, the TypeScript client must be regenerated.
Forgetting this causes type mismatches that only appear at runtime.
The generate script also runs validation.
### Suggested Action
Add to agent workflow: after any API changes, run `pnpm run generate:api`.
### Metadata
- Source: error
- Related Files: openapi.yaml, src/client/api.ts
- Tags: api, codegen, typescript
---
```
## Error Entry
````markdown
## [ERR-20250115-A3F] docker_build
**Logged**: 2025-01-15T09:15:00Z
**Priority**: high
**Status**: pending
**Area**: infra
### Summary
Docker build fails on M1 Mac due to platform mismatch
### Error
```
error: failed to solve: python:3.11-slim: no match for platform linux/arm64
```
### Context
- Command: `docker build -t myapp .`
- Dockerfile uses `FROM python:3.11-slim`
- Running on Apple Silicon (M1/M2)
### Suggested Fix
Add platform flag: `docker build --platform linux/amd64 -t myapp .`
Or update Dockerfile: `FROM --platform=linux/amd64 python:3.11-slim`
### Metadata
- Reproducible: yes
- Related Files: Dockerfile
---
````
## Error Entry: Recurring Issue
````markdown
## [ERR-20250120-B2C] api_timeout
**Logged**: 2025-01-20T11:30:00Z
**Priority**: critical
**Status**: pending
**Area**: backend
### Summary
Third-party payment API timeout during checkout
### Error
```
TimeoutError: Request to payments.example.com timed out after 30000ms
```
### Context
- Command: POST /api/checkout
- Timeout set to 30s
- Occurs during peak hours (lunch, evening)
### Suggested Fix
Implement retry with exponential backoff. Consider circuit breaker pattern.
### Metadata
- Reproducible: yes (during peak hours)
- Related Files: src/services/payment.ts
- See Also: ERR-20250115-X1Y, ERR-20250118-Z3W
---
````
## Feature Request
```markdown
## [FEAT-20250115-001] export_to_csv
**Logged**: 2025-01-15T16:45:00Z
**Priority**: medium
**Status**: pending
**Area**: backend
### Requested Capability
Export analysis results to CSV format
### User Context
User runs weekly reports and needs to share results with non-technical
stakeholders in Excel. Currently copies output manually.
### Complexity Estimate
simple
### Suggested Implementation
Add `--output csv` flag to the analyze command. Use standard csv module.
Could extend existing `--output json` pattern.
### Metadata
- Frequency: recurring
- Related Features: analyze command, json output
---
```
## Feature Request: Resolved
```markdown
## [FEAT-20250110-002] dark_mode
**Logged**: 2025-01-10T14:00:00Z
**Priority**: low
**Status**: resolved
**Area**: frontend
### Requested Capability
Dark mode support for the dashboard
### User Context
User works late hours and finds the bright interface straining.
Several other users have mentioned this informally.
### Complexity Estimate
medium
### Suggested Implementation
Use CSS variables for colors. Add toggle in user settings.
Consider system preference detection.
### Metadata
- Frequency: recurring
- Related Features: user settings, theme system
### Resolution
- **Resolved**: 2025-01-18T16:00:00Z
- **Commit/PR**: #142
- **Notes**: Implemented with system preference detection and manual toggle
---
```
## Learning: Promoted to Skill
```markdown
## [LRN-20250118-001] best_practice
**Logged**: 2025-01-18T11:00:00Z
**Priority**: high
**Status**: promoted_to_skill
**Skill-Path**: skills/docker-m1-fixes
**Area**: infra
### Summary
Docker build fails on Apple Silicon due to platform mismatch
### Details
When building Docker images on M1/M2 Macs, the build fails because
the base image doesn't have an ARM64 variant. This is a common issue
that affects many developers.
### Suggested Action
Add `--platform linux/amd64` to docker build command, or use
`FROM --platform=linux/amd64` in Dockerfile.
### Metadata
- Source: error
- Related Files: Dockerfile
- Tags: docker, arm64, m1, apple-silicon
- See Also: ERR-20250115-A3F, ERR-20250117-B2D
---
```
## Extracted Skill Example
When the above learning is extracted as a skill, it becomes:
**File**: `skills/docker-m1-fixes/SKILL.md`
```markdown
---
name: docker-m1-fixes
description: "Fixes Docker build failures on Apple Silicon (M1/M2). Use when docker build fails with platform mismatch errors."
---
# Docker M1 Fixes
Solutions for Docker build issues on Apple Silicon Macs.
## Quick Reference
| Error | Fix |
|-------|-----|
| `no match for platform linux/arm64` | Add `--platform linux/amd64` to build |
| Image runs but crashes | Use emulation or find ARM-compatible base |
## The Problem
Many Docker base images don't have ARM64 variants. When building on
Apple Silicon (M1/M2/M3), Docker attempts to pull ARM64 images by
default, causing platform mismatch errors.
## Solutions
### Option 1: Build Flag (Recommended)
Add platform flag to your build command:
\`\`\`bash
docker build --platform linux/amd64 -t myapp .
\`\`\`
### Option 2: Dockerfile Modification
Specify platform in the FROM instruction:
\`\`\`dockerfile
FROM --platform=linux/amd64 python:3.11-slim
\`\`\`
### Option 3: Docker Compose
Add platform to your service:
\`\`\`yaml
services:
app:
platform: linux/amd64
build: .
\`\`\`
## Trade-offs
| Approach | Pros | Cons |
|----------|------|------|
| Build flag | No file changes | Must remember flag |
| Dockerfile | Explicit, versioned | Affects all builds |
| Compose | Convenient for dev | Requires compose |
## Performance Note
Running AMD64 images on ARM64 uses Rosetta 2 emulation. This works
for development but may be slower. For production, find ARM-native
alternatives when possible.
## Source
- Learning ID: LRN-20250118-001
- Category: best_practice
- Extraction Date: 2025-01-18
```

@@ -0,0 +1,223 @@
# Hook Setup Guide
Configure automatic self-improvement triggers for AI coding agents.
## Overview
Hooks enable proactive learning capture by injecting reminders at key moments:
- **UserPromptSubmit**: Reminder after each prompt to evaluate learnings
- **PostToolUse (Bash)**: Error detection when commands fail
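Concretely, a hook is just an executable whose stdout is injected into the session context when it fires. A minimal sketch (not one of the shipped scripts) looks like this:

```shell
#!/bin/bash
# Minimal hook sketch: whatever this prints to stdout is injected
# into the session context when the hook fires.
echo "<reminder>check .learnings/ before finishing</reminder>"
```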
## Claude Code Setup
### Option 1: Project-Level Configuration
Create `.claude/settings.json` in your project root:
```json
{
"hooks": {
"UserPromptSubmit": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}
]
}
],
"PostToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "./skills/self-improvement/scripts/error-detector.sh"
}
]
}
]
}
}
```
### Option 2: User-Level Configuration
Add to `~/.claude/settings.json` for global activation:
```json
{
"hooks": {
"UserPromptSubmit": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "~/.claude/skills/self-improvement/scripts/activator.sh"
}
]
}
]
}
}
```
### Minimal Setup (Activator Only)
For lower overhead, use only the UserPromptSubmit hook:
```json
{
"hooks": {
"UserPromptSubmit": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}
]
}
]
}
}
```
## Codex CLI Setup
Codex uses the same hook system as Claude Code. Create `.codex/settings.json`:
```json
{
"hooks": {
"UserPromptSubmit": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}
]
}
]
}
}
```
## GitHub Copilot Setup
Copilot doesn't support hooks directly. Instead, add guidance to `.github/copilot-instructions.md`:
```markdown
## Self-Improvement
After completing tasks that involved:
- Debugging non-obvious issues
- Discovering workarounds
- Learning project-specific patterns
- Resolving unexpected errors
Consider logging the learning to `.learnings/` using the format from the self-improvement skill.
For high-value learnings that would benefit other sessions, consider skill extraction.
```
## Verification
### Test Activator Hook
1. Enable the hook configuration
2. Start a new Claude Code session
3. Send any prompt
4. Verify you see `<self-improvement-reminder>` in the context
### Test Error Detector Hook
1. Enable PostToolUse hook for Bash
2. Run a command that fails: `ls /nonexistent/path`
3. Verify you see `<error-detected>` reminder
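The match logic can also be exercised directly in a shell, without a session. This sketch inlines a few of the detector's patterns rather than calling the real script:

```shell
# Minimal reproduction of the detector's match loop, runnable anywhere.
OUTPUT='ls: /nonexistent/path: No such file or directory'
contains_error=false
for pattern in "error:" "failed" "No such file" "Permission denied"; do
  if [[ "$OUTPUT" == *"$pattern"* ]]; then
    contains_error=true
    break
  fi
done
echo "$contains_error"
```

For this sample output, `"No such file"` matches and the script prints `true`.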
### Dry Run Extract Script
```bash
./skills/self-improvement/scripts/extract-skill.sh test-skill --dry-run
```
Expected output shows the skill scaffold that would be created.
## Troubleshooting
### Hook Not Triggering
1. **Check script permissions**: `chmod +x scripts/*.sh`
2. **Verify the path**: Use absolute paths, or paths relative to the project root
3. **Check the settings location**: Project-level vs. user-level settings
4. **Restart the session**: Hooks are loaded at session start
### Permission Denied
```bash
chmod +x ./skills/self-improvement/scripts/activator.sh
chmod +x ./skills/self-improvement/scripts/error-detector.sh
chmod +x ./skills/self-improvement/scripts/extract-skill.sh
```
### Script Not Found
If using relative paths, ensure you're in the correct directory or use absolute paths:
```json
{
"command": "/absolute/path/to/skills/self-improvement/scripts/activator.sh"
}
```
### Too Much Overhead
If the activator feels intrusive:
1. **Use minimal setup**: Only UserPromptSubmit, skip PostToolUse
2. **Add matcher filter**: Only trigger for certain prompts:
```json
{
"matcher": "fix|debug|error|issue",
"hooks": [...]
}
```
## Hook Output Budget
The activator is designed to be lightweight:
- **Target**: ~50-100 tokens per activation
- **Content**: Structured reminder, not verbose instructions
- **Format**: XML tags for easy parsing
If you need to reduce overhead further, you can edit `activator.sh` to output less text.
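There is no exact token counter in the shell, but the common ~4-characters-per-token heuristic gives a usable estimate; pipe your actual activator output through `wc -c` for the real character count:

```shell
# Rough size check for a hook's output, using the ~4 chars/token heuristic.
# In practice: ./scripts/activator.sh | wc -c
reminder="<self-improvement-reminder>evaluate learnings after this task</self-improvement-reminder>"
chars=${#reminder}
echo "approx tokens: $((chars / 4))"
```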
## Security Considerations
- Hook scripts run with the same permissions as Claude Code
- Scripts only output text; they don't modify files or run commands
- Error detector reads `CLAUDE_TOOL_OUTPUT` environment variable
- All scripts are opt-in (you must configure them explicitly)
## Disabling Hooks
To temporarily disable without removing configuration:
1. **Empty the hook list** (JSON does not support comments, so set the entry to an empty array instead):
```json
{
  "hooks": {
    "UserPromptSubmit": []
  }
}
```
2. **Or delete the settings file**: Hooks won't run without configuration

@@ -0,0 +1,20 @@
#!/bin/bash
# Self-Improvement Activator Hook
# Triggers on UserPromptSubmit to remind Claude about learning capture
# Keep output minimal (~50-100 tokens) to minimize overhead
set -e
# Output reminder as system context
cat << 'EOF'
<self-improvement-reminder>
After completing this task, evaluate if extractable knowledge emerged:
- Non-obvious solution discovered through investigation?
- Workaround for unexpected behavior?
- Project-specific pattern learned?
- Error required debugging to resolve?
If yes: Log to .learnings/ using the self-improvement skill format.
If high-value (recurring, broadly applicable): Consider skill extraction.
</self-improvement-reminder>
EOF

@@ -0,0 +1,55 @@
#!/bin/bash
# Self-Improvement Error Detector Hook
# Triggers on PostToolUse for Bash to detect command failures
# Reads CLAUDE_TOOL_OUTPUT environment variable
set -e
# Check if tool output indicates an error
# CLAUDE_TOOL_OUTPUT contains the result of the tool execution
OUTPUT="${CLAUDE_TOOL_OUTPUT:-}"
# Patterns indicating errors (the glob matching below is case-sensitive,
# so common casings are listed explicitly)
ERROR_PATTERNS=(
"error:"
"Error:"
"ERROR:"
"failed"
"FAILED"
"command not found"
"No such file"
"Permission denied"
"fatal:"
"Exception"
"Traceback"
"npm ERR!"
"ModuleNotFoundError"
"SyntaxError"
"TypeError"
"exit code"
"non-zero"
)
# Check if output contains any error pattern
contains_error=false
for pattern in "${ERROR_PATTERNS[@]}"; do
if [[ "$OUTPUT" == *"$pattern"* ]]; then
contains_error=true
break
fi
done
# Only output reminder if error detected
if [ "$contains_error" = true ]; then
cat << 'EOF'
<error-detected>
A command error was detected. Consider logging this to .learnings/ERRORS.md if:
- The error was unexpected or non-obvious
- It required investigation to resolve
- It might recur in similar contexts
- The solution could benefit future sessions
Use the self-improvement skill format: [ERR-YYYYMMDD-XXX]
</error-detected>
EOF
fi

@@ -0,0 +1,203 @@
#!/bin/bash
# Skill Extraction Helper
# Creates a new skill from a learning entry
# Usage: ./extract-skill.sh <skill-name> [--dry-run]
set -e
# Configuration
SKILLS_DIR="${SKILLS_DIR:-./skills}"
TEMPLATE_DIR="$(dirname "$0")/../assets"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
usage() {
cat << EOF
Usage: $(basename "$0") <skill-name> [options]
Create a new skill from a learning entry.
Arguments:
skill-name Name of the skill (lowercase, hyphens for spaces)
Options:
--dry-run Show what would be created without creating files
--output-dir Override skills directory (default: ./skills)
-h, --help Show this help message
Examples:
$(basename "$0") docker-m1-fixes
$(basename "$0") api-timeout-patterns --dry-run
$(basename "$0") pnpm-setup --output-dir /path/to/skills
The skill will be created in: \$SKILLS_DIR/<skill-name>/
EOF
}
log_info() {
echo -e "${GREEN}[INFO]${NC} $1"
}
log_warn() {
echo -e "${YELLOW}[WARN]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1" >&2
}
# Parse arguments
SKILL_NAME=""
DRY_RUN=false
while [[ $# -gt 0 ]]; do
case $1 in
--dry-run)
DRY_RUN=true
shift
;;
--output-dir)
SKILLS_DIR="$2"
shift 2
;;
-h|--help)
usage
exit 0
;;
-*)
log_error "Unknown option: $1"
usage
exit 1
;;
*)
if [ -z "$SKILL_NAME" ]; then
SKILL_NAME="$1"
else
log_error "Unexpected argument: $1"
usage
exit 1
fi
shift
;;
esac
done
# Validate skill name
if [ -z "$SKILL_NAME" ]; then
log_error "Skill name is required"
usage
exit 1
fi
# Validate skill name format (lowercase, hyphens, no spaces)
if ! [[ "$SKILL_NAME" =~ ^[a-z0-9]+(-[a-z0-9]+)*$ ]]; then
log_error "Invalid skill name format. Use lowercase letters, numbers, and hyphens only."
log_error "Examples: 'docker-fixes', 'api-patterns', 'pnpm-setup'"
exit 1
fi
SKILL_PATH="$SKILLS_DIR/$SKILL_NAME"
# Check if skill already exists
if [ -d "$SKILL_PATH" ] && [ "$DRY_RUN" = false ]; then
log_error "Skill already exists: $SKILL_PATH"
log_error "Use a different name or remove the existing skill first."
exit 1
fi
# Dry run output
if [ "$DRY_RUN" = true ]; then
log_info "Dry run - would create:"
echo " $SKILL_PATH/"
echo " $SKILL_PATH/SKILL.md"
echo ""
echo "Template content would be:"
echo "---"
cat << TEMPLATE
name: $SKILL_NAME
description: "[TODO: Add a concise description of what this skill does and when to use it]"
---
# $(echo "$SKILL_NAME" | sed 's/-/ /g' | awk '{for(i=1;i<=NF;i++) $i=toupper(substr($i,1,1)) tolower(substr($i,2))}1')
[TODO: Brief introduction explaining the skill's purpose]
## Quick Reference
| Situation | Action |
|-----------|--------|
| [Trigger condition] | [What to do] |
## Usage
[TODO: Detailed usage instructions]
## Examples
[TODO: Add concrete examples]
## Source Learning
This skill was extracted from a learning entry.
- Learning ID: [TODO: Add original learning ID]
- Original File: .learnings/LEARNINGS.md
TEMPLATE
echo "---"
exit 0
fi
# Create skill directory structure
log_info "Creating skill: $SKILL_NAME"
mkdir -p "$SKILL_PATH"
# Create SKILL.md from template
cat > "$SKILL_PATH/SKILL.md" << TEMPLATE
---
name: $SKILL_NAME
description: "[TODO: Add a concise description of what this skill does and when to use it]"
---
# $(echo "$SKILL_NAME" | sed 's/-/ /g' | awk '{for(i=1;i<=NF;i++) $i=toupper(substr($i,1,1)) tolower(substr($i,2))}1')
[TODO: Brief introduction explaining the skill's purpose]
## Quick Reference
| Situation | Action |
|-----------|--------|
| [Trigger condition] | [What to do] |
## Usage
[TODO: Detailed usage instructions]
## Examples
[TODO: Add concrete examples]
## Source Learning
This skill was extracted from a learning entry.
- Learning ID: [TODO: Add original learning ID]
- Original File: .learnings/LEARNINGS.md
TEMPLATE
log_info "Created: $SKILL_PATH/SKILL.md"
# Suggest next steps
echo ""
log_info "Skill scaffold created successfully!"
echo ""
echo "Next steps:"
echo " 1. Edit $SKILL_PATH/SKILL.md"
echo " 2. Fill in the TODO sections with content from your learning"
echo " 3. Add references/ folder if you have detailed documentation"
echo " 4. Add scripts/ folder if you have executable code"
echo " 5. Update the original learning entry with:"
echo " **Status**: promoted_to_skill"
echo " **Skill-Path**: skills/$SKILL_NAME"

@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawdhub.com",
"slug": "summarize",
"installedVersion": "1.0.0",
"installedAt": 1769383166746
}

skills/summarize/SKILL.md Normal file
@@ -0,0 +1,49 @@
---
name: summarize
description: Summarize URLs or files with the summarize CLI (web, PDFs, images, audio, YouTube).
homepage: https://summarize.sh
metadata: {"clawdbot":{"emoji":"🧾","requires":{"bins":["summarize"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/summarize","bins":["summarize"],"label":"Install summarize (brew)"}]}}
---
# Summarize
Fast CLI to summarize URLs, local files, and YouTube links.
## Quick start
```bash
summarize "https://example.com" --model google/gemini-3-flash-preview
summarize "/path/to/file.pdf" --model google/gemini-3-flash-preview
summarize "https://youtu.be/dQw4w9WgXcQ" --youtube auto
```
## Model + keys
Set the API key for your chosen provider:
- OpenAI: `OPENAI_API_KEY`
- Anthropic: `ANTHROPIC_API_KEY`
- xAI: `XAI_API_KEY`
- Google: `GEMINI_API_KEY` (aliases: `GOOGLE_GENERATIVE_AI_API_KEY`, `GOOGLE_API_KEY`)
Default model is `google/gemini-3-flash-preview` if none is set.
## Useful flags
- `--length short|medium|long|xl|xxl|<chars>`
- `--max-output-tokens <count>`
- `--extract-only` (URLs only)
- `--json` (machine readable)
- `--firecrawl auto|off|always` (fallback extraction)
- `--youtube auto` (Apify fallback if `APIFY_API_TOKEN` set)
## Config
Optional config file: `~/.summarize/config.json`
```json
{ "model": "openai/gpt-5.2" }
```
Optional services:
- `FIRECRAWL_API_KEY` for blocked sites
- `APIFY_API_TOKEN` for YouTube fallback