My agentic slop goes here. Not intended for anyone else!

dancer

+118
dancer/CLAUDE.md
···
**Phase 4: Oversight** (Week 4)
10. `dancer-ui` - Dashboard and monitoring

## dancer-logs Implementation Details

### Design Decisions

1. **Comprehensive Schema**: The SQLite schema captures everything that might be useful for Claude's analysis:
   - Core fields (timestamp, level, source, message)
   - Error classification (type, code, hash, stack trace)
   - Execution context (PIDs, threads, fibers, domains)
   - Request/session tracking (IDs for correlation)
   - Source location (file, line, function, module)
   - Performance metrics (duration, memory, CPU)
   - Network context (IPs, ports, methods, status codes)
   - User context (user ID, tenant ID)
   - System context (OS, versions, environment, containers)
   - Flexible metadata (JSON fields for tags, labels, custom data)

2. **Full-Text Search**: SQLite's FTS5 with porter stemming and unicode61 tokenization gives comprehensive search across all text fields (see the search sketch after this list).

3. **Pattern Detection**:
   - Normalizes messages by replacing UUIDs with "UUID", hex literals with "0xHEX", and remaining numbers with "N" (in that order, so the digit rewrite does not clobber the more specific forms)
   - Tracks patterns with occurrence counts, severity scores, and consultation history
   - Time-series bucketing for trend analysis

4. **Logs Reporter Integration**:
   - Implements the OCaml Logs reporter interface
   - Can chain with existing reporters
   - Thread-safe via an Eio mutex
   - Captures Eio fiber IDs and domain IDs

5. **CLI Tool**: Colorful terminal interface using Fmt with:
   - `list` - Browse logs with filters
   - `search` - Full-text search
   - `patterns` - View detected patterns
   - `errors` - Recent error summary
   - `stats` - Database statistics
   - `export` - Export for analysis (JSON, CSV, Claude format)
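
As a quick illustration of decision 2, a minimal sketch of driving the FTS5 index through `Query.search` (assuming the signature in `dancer_logs.mli` below; the query string is plain FTS5 syntax, so terms are implicitly ANDed, `*` does prefix matching, and porter stemming folds "connections"/"connecting" onto the same stem):

```ocaml
(* Sketch: find recent logs mentioning refused connections. *)
let find_refused db =
  (* "connect*" also matches "connection", "connecting", ... *)
  let rows = Dancer_logs.Query.search db "connect* refused" ~limit:20 () in
  Printf.printf "matched %d rows\n" (List.length rows)
```
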
### Key Learnings

1. **No Truncation**: Keep all data intact for Claude to analyze. Storage is cheap; context is valuable.

2. **Structured Everything**: The more structure we capture, the better Claude can understand patterns and correlations.

3. **Synchronous is Fine**: For now, synchronous SQLite operations are acceptable. Performance optimization can come later.

4. **Reporter Chaining**: Supporting reporter chaining, as the Logs library conventions suggest, allows integration with existing logging infrastructure.

5. **Export for Claude**: A dedicated export format prepares context within token limits, focusing on recent errors and patterns (see the sketch after this list).
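
A minimal sketch of using that export path programmatically, via the `?limit`/`?context_size` budget in the `Export` interface below (the numbers here are illustrative, not defaults; note that `context_size` caps accumulated characters, not tokens):

```ocaml
(* Sketch: dump a bounded error context for an LLM prompt. *)
let write_claude_context db =
  let ctx = Dancer_logs.Export.for_claude db ~limit:500 ~context_size:20_000 () in
  let oc = open_out "claude_context.txt" in
  output_string oc ctx;
  close_out oc
```
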
### Usage Example

```ocaml
(* Initialize the database *)
let db = Dancer_logs.init ~path:"app_logs.db" () in

(* Create and set the reporter; chain_reporter also mirrors every
   message to an existing reporter (here Logs_fmt's console output) *)
let reporter = Dancer_logs.chain_reporter db (Logs_fmt.reporter ()) in
Logs.set_reporter reporter;

(* Use with context *)
let ctx = Dancer_logs.Context.(
  empty_context
  |> with_session "session-123"
  |> with_request "req-456"
  |> with_user "user-789"
  |> add_tag "payment"
  |> add_label "environment" "production"
) in

(* Log with full context *)
Dancer_logs.log db
  ~level:Logs.Error
  ~source:"Payment.Gateway"
  ~message:"Payment failed: timeout"
  ~context:ctx
  ~error:{
    error_type = Some "TimeoutException";
    error_code = Some "GATEWAY_TIMEOUT";
    (* hash_of_error: application-supplied grouping hash, e.g. a
       SHA-256 of the normalized message (see Pattern.hash_pattern) *)
    error_hash = Some hash_of_error;
    stack_trace = Some (Printexc.get_backtrace ());
  }
  ~performance:{
    duration_ms = Some 30000.0;
    memory_before = Some 1000000;
    memory_after = Some 1200000;
    cpu_time_ms = Some 50.0;
  }
  ()
```

### CLI Usage

```bash
# View recent errors
dancer-logs errors

# Search for specific patterns
dancer-logs search "connection refused"

# View error patterns
dancer-logs patterns --min-count 10

# Export context for Claude
dancer-logs export -f claude -o context.txt --since 24

# Follow logs in real-time (to be implemented)
dancer-logs list -f
```

### Next Steps

1. Implement a pattern detection algorithm that runs periodically (see the sketch below)
2. Add metrics collection from /proc and system calls
3. Implement log following mode for real-time monitoring
4. Add session correlation views
5. Create integration tests with sample applications
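
For next step 1, one possible shape for the periodic runner, as a sketch only: it assumes detection stays synchronous, hourly is an acceptable cadence, and `Pattern.detect` (currently a no-op stub in `dancer_logs.ml`) grows a real implementation.

```ocaml
(* Sketch: run pattern detection every hour in its own fiber. *)
let run_pattern_detection ~clock db =
  while true do
    Dancer_logs.Pattern.detect db;   (* currently a no-op stub *)
    Eio.Time.sleep clock 3600.0
  done

(* Usage, inside Eio_main.run with a switch sw:
   Eio.Fiber.fork ~sw (fun () ->
     run_pattern_detection ~clock:(Eio.Stdenv.clock env) db) *)
```
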
+22
dancer/Makefile
···
.PHONY: deps build clean test doc run-cli install

deps:
	opam install -y sqlite3 eio eio_main logs fmt ptime digestif yojson cmdliner ppx_deriving alcotest str

build:
	opam exec -- dune build @check

clean:
	opam exec -- dune clean

test:
	opam exec -- dune runtest

doc:
	opam exec -- dune build @doc

run-cli:
	opam exec -- dune exec dancer-logs-cli -- --help

install:
	opam exec -- dune install
+72
dancer/README.md
···
# Dancer - Self-Modifying OCaml Services

Dancer is an ambitious OCaml library that enables long-running services to improve themselves automatically through AI-assisted code generation, with comprehensive logging, pattern detection, and safe deployment mechanisms.

## Components

### dancer-logs

Comprehensive structured logging with a SQLite backend. Features:

- Full structured logging with no truncation
- SQLite storage with FTS5 full-text search
- Pattern detection and normalization
- Eio fiber and domain tracking
- Rich context capture (sessions, requests, traces, performance metrics)
- OCaml Logs reporter integration
- Colorful CLI for log analysis

## Installation

```bash
# Install dependencies
make deps

# Build the project
make build

# Run the CLI
make run-cli
```

## Usage

```ocaml
(* Initialize logging *)
let db = Dancer_logs.init ~path:"app.db" () in
let reporter = Dancer_logs.reporter db in
Logs.set_reporter reporter;

(* Log with context *)
let ctx = Dancer_logs.Context.(
  empty_context
  |> with_session "session-123"
  |> with_request "req-456"
) in

Dancer_logs.log db ~level:Logs.Error ~source:"MyApp"
  ~message:"Connection failed" ~context:ctx ()
```

## CLI Usage

```bash
# View recent errors
dancer-logs errors

# Search logs
dancer-logs search "connection refused"

# View patterns
dancer-logs patterns --min-count 10

# Export for Claude
dancer-logs export -f claude -o context.txt
```

## Architecture

See [CLAUDE.md](CLAUDE.md) for detailed architecture and design decisions.

## License

MIT
+352
dancer/bin/dancer_logs_cli.ml
···
open Dancer_logs
open Cmdliner

(* All styles are string formatters so they can be passed directly as
   "%a" arguments below. *)
let style_ok = Fmt.styled `Green Fmt.string
let style_error = Fmt.styled `Red Fmt.string
let style_warning = Fmt.styled `Yellow Fmt.string
let style_info = Fmt.styled `Blue Fmt.string
let style_debug = Fmt.styled `Cyan Fmt.string
let style_dim = Fmt.styled `Faint Fmt.string
let style_bold = Fmt.styled `Bold Fmt.string

let pp_level ppf = function
  | Logs.App -> Fmt.pf ppf "%a" style_bold "APP"
  | Logs.Error -> Fmt.pf ppf "%a" style_error "ERR"
  | Logs.Warning -> Fmt.pf ppf "%a" style_warning "WRN"
  | Logs.Info -> Fmt.pf ppf "%a" style_info "INF"
  | Logs.Debug -> Fmt.pf ppf "%a" style_debug "DBG"

let pp_timestamp ppf timestamp =
  match Ptime.of_float_s timestamp with
  | Some ptime -> Fmt.pf ppf "%a" (Ptime.pp_human ~frac_s:3 ()) ptime
  | None -> Fmt.string ppf "<invalid time>"

let pp_source ppf source =
  Fmt.pf ppf "%a" style_bold source

(* Not wired up yet: will render the duration column once Query.list
   returns real rows. *)
let _pp_duration ppf ms =
  if ms > 1000.0 then
    Fmt.pf ppf "%a" style_warning (Printf.sprintf "%.1fs" (ms /. 1000.0))
  else if ms > 100.0 then
    Fmt.pf ppf "%a" style_warning (Printf.sprintf "%.0fms" ms)
  else
    Fmt.pf ppf "%a" style_ok (Printf.sprintf "%.1fms" ms)

let pp_log_row ppf row =
  match row with
  (* SELECT logs.* puts the id column first, then the core fields. *)
  | _id :: timestamp :: level :: source :: message :: rest ->
    let timestamp = match timestamp with
      | Sqlite3.Data.FLOAT f -> f
      | _ -> 0.0
    in
    let level = match level with
      | Sqlite3.Data.INT i ->
        (match Int64.to_int i with
         | 0 -> Logs.App
         | 1 -> Logs.Error
         | 2 -> Logs.Warning
         | 3 -> Logs.Info
         | _ -> Logs.Debug)
      | _ -> Logs.Info
    in
    let source = match source with
      | Sqlite3.Data.TEXT s -> s
      | _ -> "unknown"
    in
    let message = match message with
      | Sqlite3.Data.TEXT s -> s
      | _ -> ""
    in

    Fmt.pf ppf "@[<h>%a %a %a@]@.  %s@."
      pp_timestamp timestamp
      pp_level level
      pp_source source
      message;

    (* After message, the schema continues with error_type, error_code,
       error_hash, stack_trace, ... *)
    List.iteri (fun i data ->
      match data with
      | Sqlite3.Data.TEXT s when String.length s > 0 ->
        Fmt.pf ppf "  %a: %s@." style_dim
          (match i with
           | 0 -> "error"
           | 1 -> "code"
           | 2 -> "hash"
           | 3 -> "trace"
           | _ -> string_of_int i) s
      | Sqlite3.Data.INT n ->
        Fmt.pf ppf "  %a: %Ld@." style_dim (string_of_int i) n
      | Sqlite3.Data.FLOAT f ->
        Fmt.pf ppf "  %a: %f@." style_dim (string_of_int i) f
      | _ -> ()
    ) rest
  | _ -> ()

let pp_pattern ppf pattern =
  match pattern with
  | [_id; hash; pattern_text; first_seen; last_seen; count; severity] ->
    let count = match count with
      | Sqlite3.Data.INT i -> Int64.to_int i
      | _ -> 0
    in
    let severity_color =
      match severity with
      | Sqlite3.Data.FLOAT f when f > 7.0 -> style_error
      | Sqlite3.Data.FLOAT f when f > 4.0 -> style_warning
      | _ -> style_info
    in

    Fmt.pf ppf "@[<v>%a Pattern (×%d) %a@.  %s@.  First: %a | Last: %a@]@."
      severity_color "●"
      count
      style_dim
      (match hash with
       | Sqlite3.Data.TEXT s when String.length s >= 8 -> String.sub s 0 8
       | Sqlite3.Data.TEXT s -> s
       | _ -> "")
      (match pattern_text with Sqlite3.Data.TEXT s -> s | _ -> "")
      pp_timestamp (match first_seen with Sqlite3.Data.FLOAT f -> f | _ -> 0.0)
      pp_timestamp (match last_seen with Sqlite3.Data.FLOAT f -> f | _ -> 0.0)
  | _ -> ()

let pp_error_summary ppf row =
  match row with
  (* Columns of the recent_errors view, in order. *)
  | [_hash; error_type; _code; count; sessions; _users; _last; _first;
     sources; _avg_dur; sample] ->
    let count = match count with Sqlite3.Data.INT i -> Int64.to_int i | _ -> 0 in
    let sessions = match sessions with Sqlite3.Data.INT i -> Int64.to_int i | _ -> 0 in
    let error_type = match error_type with Sqlite3.Data.TEXT s -> s | _ -> "unknown" in

    Fmt.pf ppf "@[<v>%a %s (×%d in %d sessions)@.  %s@.  Sources: %s@]@."
      style_error "ERROR"
      error_type count sessions
      (match sample with
       | Sqlite3.Data.TEXT s ->
         if String.length s > 80 then String.sub s 0 77 ^ "..." else s
       | _ -> "")
      (match sources with Sqlite3.Data.TEXT s -> s | _ -> "")
  | _ -> ()

let list_cmd =
  let list db_path level source since limit _follow =
    Eio_main.run @@ fun _env ->
    let db = Dancer_logs.init ~path:db_path () in

    let filter = {
      Query.default_filter with
      level;
      source;
      since = Option.map (fun h -> Unix.gettimeofday () -. (h *. 3600.0)) since;
      limit;
    } in

    let rows = Query.list db filter in

    Fmt.pr "@[<v>%a@]@."
      (Fmt.list ~sep:Fmt.cut pp_log_row) rows;

    Dancer_logs.close db
  in

  let db_path = Arg.(value & opt string "dancer_logs.db" &
                     info ["d"; "db"] ~doc:"Database path") in

  let level = Arg.(value & opt (some (enum [
    "error", Logs.Error;
    "warning", Logs.Warning;
    "info", Logs.Info;
    "debug", Logs.Debug;
  ])) None & info ["l"; "level"] ~doc:"Filter by log level") in

  let source = Arg.(value & opt (some string) None &
                    info ["s"; "source"] ~doc:"Filter by source module") in

  let since = Arg.(value & opt (some float) None &
                   info ["since"] ~doc:"Show logs from last N hours") in

  let limit = Arg.(value & opt int 100 &
                   info ["n"; "limit"] ~doc:"Maximum number of results") in

  let follow = Arg.(value & flag &
                    info ["f"; "follow"] ~doc:"Follow mode (like tail -f; not implemented yet)") in

  let doc = "List log entries" in
  let info = Cmd.info "list" ~doc in
  Cmd.v info Term.(const list $ db_path $ level $ source $ since $ limit $ follow)

let search_cmd =
  let search db_path query limit =
    Eio_main.run @@ fun _env ->
    let db = Dancer_logs.init ~path:db_path () in

    let rows = Query.search db query ~limit () in

    Fmt.pr "@[<v>%a Found %d matches@.@.%a@]@."
      style_bold "🔍" (List.length rows)
      (Fmt.list ~sep:(Fmt.any "@.---@.") pp_log_row) rows;

    Dancer_logs.close db
  in

  let db_path = Arg.(value & opt string "dancer_logs.db" &
                     info ["d"; "db"] ~doc:"Database path") in

  let query = Arg.(required & pos 0 (some string) None &
                   info [] ~docv:"QUERY" ~doc:"Search query") in

  let limit = Arg.(value & opt int 100 &
                   info ["n"; "limit"] ~doc:"Maximum number of results") in

  let doc = "Full-text search in logs" in
  let info = Cmd.info "search" ~doc in
  Cmd.v info Term.(const search $ db_path $ query $ limit)

let patterns_cmd =
  let patterns db_path min_count since =
    Eio_main.run @@ fun _env ->
    let db = Dancer_logs.init ~path:db_path () in

    let patterns = Pattern.list db ~min_count ?since () in

    Fmt.pr "@[<v>%a Detected Patterns@.@.%a@]@."
      style_bold "📊"
      (Fmt.list ~sep:Fmt.cut (fun ppf p ->
         pp_pattern ppf [
           Sqlite3.Data.INT (Int64.of_int p.Pattern.id);
           Sqlite3.Data.TEXT p.pattern_hash;
           Sqlite3.Data.TEXT p.pattern;
           Sqlite3.Data.FLOAT p.first_seen;
           Sqlite3.Data.FLOAT p.last_seen;
           Sqlite3.Data.INT (Int64.of_int p.occurrence_count);
           (* the match must be parenthesized inside a list literal *)
           (match p.severity_score with
            | Some s -> Sqlite3.Data.FLOAT s
            | None -> Sqlite3.Data.NULL);
         ]
       )) patterns;

    Dancer_logs.close db
  in

  let db_path = Arg.(value & opt string "dancer_logs.db" &
                     info ["d"; "db"] ~doc:"Database path") in

  let min_count = Arg.(value & opt int 5 &
                       info ["c"; "min-count"] ~doc:"Minimum occurrence count") in

  let since = Arg.(value & opt (some float) None &
                   info ["since"] ~doc:"Patterns from last N hours") in

  let doc = "Show detected error patterns" in
  let info = Cmd.info "patterns" ~doc in
  Cmd.v info Term.(const patterns $ db_path $ min_count $ since)

let errors_cmd =
  let errors db_path limit =
    Eio_main.run @@ fun _env ->
    let db = Dancer_logs.init ~path:db_path () in

    let errors = Query.recent_errors db ~limit () in

    Fmt.pr "@[<v>%a Recent Errors (last 24h)@.@.%a@]@."
      style_error "⚠️"
      (Fmt.list ~sep:(Fmt.any "@.") pp_error_summary) errors;

    Dancer_logs.close db
  in

  let db_path = Arg.(value & opt string "dancer_logs.db" &
                     info ["d"; "db"] ~doc:"Database path") in

  let limit = Arg.(value & opt int 50 &
                   info ["n"; "limit"] ~doc:"Maximum number of results") in

  let doc = "Show recent errors with aggregation" in
  let info = Cmd.info "errors" ~doc in
  Cmd.v info Term.(const errors $ db_path $ limit)

let stats_cmd =
  let stats db_path =
    Eio_main.run @@ fun _env ->
    let db = Dancer_logs.init ~path:db_path () in

    let count_by_level = Query.count_by_level db ~since:(Unix.gettimeofday () -. 86400.0) () in

    Fmt.pr "%a Database Statistics@.@." style_bold "📈";

    List.iter (fun (level, count) ->
      Fmt.pr "  %a: %d@." pp_level level count
    ) count_by_level;

    let db_stats = Dancer_logs.stats db in
    List.iter (fun (k, v) ->
      Fmt.pr "  %a: %s@." style_dim k v
    ) db_stats;

    Dancer_logs.close db
  in

  let db_path = Arg.(value & opt string "dancer_logs.db" &
                     info ["d"; "db"] ~doc:"Database path") in

  let doc = "Show database statistics" in
  let info = Cmd.info "stats" ~doc in
  Cmd.v info Term.(const stats $ db_path)

let export_cmd =
  let export db_path format output since =
    Eio_main.run @@ fun _env ->
    let db = Dancer_logs.init ~path:db_path () in

    let filter = {
      Query.default_filter with
      since = Option.map (fun h -> Unix.gettimeofday () -. (h *. 3600.0)) since;
      limit = 10000;
    } in

    let rows = Query.list db filter in

    let content = match format with
      | "json" -> Yojson.Safe.to_string (Export.to_json rows)
      | "csv" -> Export.to_csv rows
      | "claude" -> Export.for_claude db ()
      | _ -> failwith "Unknown format"
    in

    let oc = open_out output in
    output_string oc content;
    close_out oc;

    Fmt.pr "%a Exported %d logs to %s@."
      style_ok "✓" (List.length rows) output;

    Dancer_logs.close db
  in

  let db_path = Arg.(value & opt string "dancer_logs.db" &
                     info ["d"; "db"] ~doc:"Database path") in

  let format = Arg.(value & opt (enum [
    "json", "json";
    "csv", "csv";
    "claude", "claude";
  ]) "json" & info ["f"; "format"] ~doc:"Export format") in

  let output = Arg.(required & opt (some string) None &
                    info ["o"; "output"] ~doc:"Output file") in

  let since = Arg.(value & opt (some float) None &
                   info ["since"] ~doc:"Export logs from last N hours") in

  let doc = "Export logs to file" in
  let info = Cmd.info "export" ~doc in
  Cmd.v info Term.(const export $ db_path $ format $ output $ since)

let main_cmd =
  let doc = "Dancer Logs CLI - Query and analyze structured logs" in
  let info = Cmd.info "dancer-logs" ~version:"0.1.0" ~doc in
  let default = Term.(ret (const (`Help (`Pager, None)))) in
  Cmd.group info ~default [
    list_cmd;
    search_cmd;
    patterns_cmd;
    errors_cmd;
    stats_cmd;
    export_cmd;
  ]

let () =
  (* Enable ANSI styling when stdout/stderr are TTYs. *)
  Fmt_tty.setup_std_outputs ();
  exit (Cmd.eval main_cmd)
+5
dancer/bin/dune
···
(executable
 (public_name dancer-logs-cli)
 (package dancer-logs-cli)
 (name dancer_logs_cli)
 ; unix and sqlite3 are used directly by the CLI
 ; (Unix.gettimeofday, Sqlite3.Data constructors)
 (libraries dancer_logs cmdliner fmt fmt.tty ptime ptime.clock.os eio_main
            unix sqlite3))
+34
dancer/dancer-logs-cli.opam
···
# This file is generated by dune, edit dune-project instead
opam-version: "2.0"
version: "0.1.0"
synopsis: "CLI tool for querying and analyzing dancer-logs"
description: """
Command-line interface for browsing, searching, and exporting
logs captured by dancer-logs"""
maintainer: ["anil@recoil.org"]
authors: ["Anil Madhavapeddy"]
license: "MIT"
homepage: "https://github.com/avsm/dancer"
bug-reports: "https://github.com/avsm/dancer/issues"
depends: [
  "dune" {>= "3.0"}
  "dancer-logs"
  "cmdliner"
  "fmt"
  "odoc" {with-doc}
]
build: [
  ["dune" "subst"] {dev}
  [
    "dune"
    "build"
    "-p"
    name
    "-j"
    jobs
    "@install"
    "@runtest" {with-test}
    "@doc" {with-doc}
  ]
]
dev-repo: "git+https://github.com/avsm/dancer.git"
+44
dancer/dancer-logs.opam
···
# This file is generated by dune, edit dune-project instead
opam-version: "2.0"
version: "0.1.0"
synopsis: "Comprehensive structured logging with SQLite backend for Dancer"
description: """
dancer-logs provides structured logging with SQLite storage,
pattern detection, and comprehensive context capture for AI-assisted debugging
and self-modification"""
maintainer: ["anil@recoil.org"]
authors: ["Anil Madhavapeddy"]
license: "MIT"
homepage: "https://github.com/avsm/dancer"
bug-reports: "https://github.com/avsm/dancer/issues"
depends: [
  "ocaml"
  "dune" {>= "3.0"}
  "eio"
  "eio_main"
  "logs"
  "sqlite3"
  "fmt"
  "ptime"
  "digestif"
  "yojson"
  "ppx_deriving"
  "cmdliner"
  "alcotest" {with-test}
  "odoc" {with-doc}
]
build: [
  ["dune" "subst"] {dev}
  [
    "dune"
    "build"
    "-p"
    name
    "-j"
    jobs
    "@install"
    "@runtest" {with-test}
    "@doc" {with-doc}
  ]
]
dev-repo: "git+https://github.com/avsm/dancer.git"
+45
dancer/dune-project
···
(lang dune 3.0)
(name dancer)
(version 0.1.0)

(generate_opam_files true)

(source
 (github avsm/dancer))

(license MIT)

(authors "Anil Madhavapeddy")

(maintainers "anil@recoil.org")

(package
 (name dancer-logs)
 (synopsis "Comprehensive structured logging with SQLite backend for Dancer")
 (description "dancer-logs provides structured logging with SQLite storage,
pattern detection, and comprehensive context capture for AI-assisted debugging
and self-modification")
 (depends
  ocaml
  dune
  eio
  eio_main
  logs
  sqlite3
  fmt
  ptime
  digestif
  yojson
  ppx_deriving
  cmdliner
  (alcotest :with-test)))

(package
 (name dancer-logs-cli)
 (synopsis "CLI tool for querying and analyzing dancer-logs")
 (description "Command-line interface for browsing, searching, and exporting
logs captured by dancer-logs")
 (depends
  dancer-logs
  cmdliner
  fmt))
+545
dancer/lib/dancer_logs/dancer_logs.ml
···
(* Eio is referenced via fully-qualified paths below, so no global open. *)

type log_context = {
  session_id : string option;
  request_id : string option;
  trace_id : string option;
  span_id : string option;
  parent_span_id : string option;
  user_id : string option;
  tenant_id : string option;
  fiber_id : string option;
  domain_id : int option;
  tags : string list;
  labels : (string * string) list;
  custom : (string * Yojson.Safe.t) list;
}

let empty_context = {
  session_id = None;
  request_id = None;
  trace_id = None;
  span_id = None;
  parent_span_id = None;
  user_id = None;
  tenant_id = None;
  fiber_id = None;
  domain_id = None;
  tags = [];
  labels = [];
  custom = [];
}

type source_location = {
  file_path : string option;
  line_number : int option;
  column_number : int option;
  function_name : string option;
  module_path : string option;
}

type performance_metrics = {
  duration_ms : float option;
  memory_before : int option;
  memory_after : int option;
  cpu_time_ms : float option;
}

type network_context = {
  client_ip : string option;
  server_ip : string option;
  host_name : string option;
  port : int option;
  protocol : string option;
  method_ : string option;
  path : string option;
  status_code : int option;
  bytes_sent : int option;
  bytes_received : int option;
}

type error_info = {
  error_type : string option;
  error_code : string option;
  error_hash : string option;
  stack_trace : string option;
}

type system_info = {
  os_name : string;
  os_version : string;
  ocaml_version : string;
  app_version : string option;
  environment : string option;
  region : string option;
  container_id : string option;
}

type t = {
  db : Sqlite3.db;
  mutex : Eio.Mutex.t;
}

(* Not used yet: will populate the system-context columns once wired in.
   Underscore-prefixed to silence the unused-value warning meanwhile. *)
let _get_system_info () =
  {
    os_name = Sys.os_type;
    os_version =
      (try
         let ic = Unix.open_process_in "uname -r" in
         let version = input_line ic in
         let _ = Unix.close_process_in ic in
         version
       with _ -> "unknown");
    ocaml_version = Sys.ocaml_version;
    app_version = None;
    environment = Sys.getenv_opt "ENVIRONMENT";
    region = Sys.getenv_opt "REGION";
    container_id = Sys.getenv_opt "HOSTNAME";
  }

let init ?(path="dancer_logs.db") () =
  let db = Sqlite3.db_open path in

  let read_file p =
    let ic = open_in p in
    let content = really_input_string ic (in_channel_length ic) in
    close_in ic;
    content
  in
  let schema =
    try read_file "lib/dancer_logs/schema.sql"
    with _ ->
      (try read_file "schema.sql"
       with _ -> failwith "Could not find schema.sql")
  in

  (* sqlite3_exec handles multiple ';'-separated statements, including the
     triggers, whose bodies contain internal semicolons and would break a
     naive String.split_on_char ';' loop. *)
  (match Sqlite3.exec db schema with
   | Sqlite3.Rc.OK -> ()
   | rc -> Printf.eprintf "SQL error applying schema: %s\n" (Sqlite3.Rc.to_string rc));

  { db; mutex = Eio.Mutex.create () }

let close t =
  let _ = Sqlite3.db_close t.db in
  ()

let bind_text stmt idx = function
  | None -> Sqlite3.bind stmt idx Sqlite3.Data.NULL
  | Some s -> Sqlite3.bind stmt idx (Sqlite3.Data.TEXT s)

let bind_int stmt idx = function
  | None -> Sqlite3.bind stmt idx Sqlite3.Data.NULL
  | Some i -> Sqlite3.bind stmt idx (Sqlite3.Data.INT (Int64.of_int i))

let bind_float stmt idx = function
  | None -> Sqlite3.bind stmt idx Sqlite3.Data.NULL
  | Some f -> Sqlite3.bind stmt idx (Sqlite3.Data.FLOAT f)

let log t ~level ~source ~message
    ?(context=empty_context)
    ?(location={file_path=None; line_number=None; column_number=None;
                function_name=None; module_path=None})
    ?(performance={duration_ms=None; memory_before=None; memory_after=None;
                   cpu_time_ms=None})
    ?(network={client_ip=None; server_ip=None; host_name=None; port=None;
               protocol=None; method_=None; path=None; status_code=None;
               bytes_sent=None; bytes_received=None})
    ?(error={error_type=None; error_code=None; error_hash=None; stack_trace=None})
    () =

  Eio.Mutex.use_rw ~protect:false t.mutex @@ fun () ->

  let level_int = match level with
    | Logs.App -> 0
    | Logs.Error -> 1
    | Logs.Warning -> 2
    | Logs.Info -> 3
    | Logs.Debug -> 4
  in

  (* The bind indexes below must stay in sync with this column list
     (45 columns, placeholders ?1..?45). *)
  let sql = "
    INSERT INTO logs (
      timestamp, level, source, message,
      error_type, error_code, error_hash, stack_trace,
      pid, ppid, thread_id, fiber_id, domain_id,
      session_id, request_id, trace_id, span_id, parent_span_id,
      file_path, file_name, line_number, column_number, function_name, module_path,
      duration_ms, memory_before, memory_after, memory_delta, cpu_time_ms,
      client_ip, server_ip, host_name, port, protocol, method, path,
      status_code, bytes_sent, bytes_received,
      user_id, tenant_id,
      tags, labels, context, custom_fields
    ) VALUES (
      ?1, ?2, ?3, ?4,
      ?5, ?6, ?7, ?8,
      ?9, ?10, ?11, ?12, ?13,
      ?14, ?15, ?16, ?17, ?18,
      ?19, ?20, ?21, ?22, ?23, ?24,
      ?25, ?26, ?27, ?28, ?29,
      ?30, ?31, ?32, ?33, ?34, ?35, ?36,
      ?37, ?38, ?39,
      ?40, ?41,
      ?42, ?43, ?44, ?45
    )
  " in

  let stmt = Sqlite3.prepare t.db sql in

  let timestamp = Unix.gettimeofday () in
  let pid = Unix.getpid () in
  let ppid = Unix.getppid () in

  let memory_delta = match performance.memory_before, performance.memory_after with
    | Some b, Some a -> Some (a - b)
    | _ -> None
  in

  let file_name = match location.file_path with
    | Some path -> Some (Filename.basename path)
    | None -> None
  in

  let tags_json = match context.tags with
    | [] -> None
    | tags -> Some (Yojson.Safe.to_string (`List (List.map (fun t -> `String t) tags)))
  in

  let labels_json = match context.labels with
    | [] -> None
    | labels ->
      let obj = `Assoc (List.map (fun (k, v) -> (k, `String v)) labels) in
      Some (Yojson.Safe.to_string obj)
  in

  let custom_json = match context.custom with
    | [] -> None
    | custom -> Some (Yojson.Safe.to_string (`Assoc custom))
  in

  let _ = Sqlite3.bind stmt 1 (Sqlite3.Data.FLOAT timestamp) in
  let _ = Sqlite3.bind stmt 2 (Sqlite3.Data.INT (Int64.of_int level_int)) in
  let _ = Sqlite3.bind stmt 3 (Sqlite3.Data.TEXT source) in
  let _ = Sqlite3.bind stmt 4 (Sqlite3.Data.TEXT message) in

  let _ = bind_text stmt 5 error.error_type in
  let _ = bind_text stmt 6 error.error_code in
  let _ = bind_text stmt 7 error.error_hash in
  let _ = bind_text stmt 8 error.stack_trace in

  let _ = Sqlite3.bind stmt 9 (Sqlite3.Data.INT (Int64.of_int pid)) in
  let _ = Sqlite3.bind stmt 10 (Sqlite3.Data.INT (Int64.of_int ppid)) in
  let _ = bind_text stmt 11 None in (* thread_id: not captured yet *)
  let _ = bind_text stmt 12 context.fiber_id in
  let _ = bind_int stmt 13 context.domain_id in

  let _ = bind_text stmt 14 context.session_id in
  let _ = bind_text stmt 15 context.request_id in
  let _ = bind_text stmt 16 context.trace_id in
  let _ = bind_text stmt 17 context.span_id in
  let _ = bind_text stmt 18 context.parent_span_id in

  let _ = bind_text stmt 19 location.file_path in
  let _ = bind_text stmt 20 file_name in
  let _ = bind_int stmt 21 location.line_number in
  let _ = bind_int stmt 22 location.column_number in
  let _ = bind_text stmt 23 location.function_name in
  let _ = bind_text stmt 24 location.module_path in

  let _ = bind_float stmt 25 performance.duration_ms in
  let _ = bind_int stmt 26 performance.memory_before in
  let _ = bind_int stmt 27 performance.memory_after in
  let _ = bind_int stmt 28 memory_delta in
  let _ = bind_float stmt 29 performance.cpu_time_ms in

  let _ = bind_text stmt 30 network.client_ip in
  let _ = bind_text stmt 31 network.server_ip in
  let _ = bind_text stmt 32 network.host_name in
  let _ = bind_int stmt 33 network.port in
  let _ = bind_text stmt 34 network.protocol in
  let _ = bind_text stmt 35 network.method_ in
  let _ = bind_text stmt 36 network.path in
  let _ = bind_int stmt 37 network.status_code in
  let _ = bind_int stmt 38 network.bytes_sent in
  let _ = bind_int stmt 39 network.bytes_received in

  let _ = bind_text stmt 40 context.user_id in
  let _ = bind_text stmt 41 context.tenant_id in

  let _ = bind_text stmt 42 tags_json in
  let _ = bind_text stmt 43 labels_json in
  let _ = bind_text stmt 44 None in (* context JSON: not captured yet *)
  let _ = bind_text stmt 45 custom_json in

  let _ = Sqlite3.step stmt in
  let _ = Sqlite3.finalize stmt in
  ()

let reporter t =
  let report src level ~over k msgf =
    let source = Logs.Src.name src in
    msgf @@ fun ?header:_ ?tags:_ fmt ->
    Format.kasprintf (fun message ->
      log t ~level ~source ~message ();
      over ();
      k ()
    ) fmt
  in
  { Logs.report }

let chain_reporter t next =
  let report src level ~over k msgf =
    let source = Logs.Src.name src in
    msgf @@ fun ?header:_ ?tags:_ fmt ->
    Format.kasprintf (fun message ->
      log t ~level ~source ~message ();
      next.Logs.report src level ~over k (fun f -> f "%s" message)
    ) fmt
  in
  { Logs.report }

module Context = struct
  let with_session id ctx = { ctx with session_id = Some id }
  let with_request id ctx = { ctx with request_id = Some id }
  let with_trace id ctx = { ctx with trace_id = Some id }
  let with_span id ?parent ctx =
    { ctx with span_id = Some id; parent_span_id = parent }
  let with_user id ctx = { ctx with user_id = Some id }
  let with_tenant id ctx = { ctx with tenant_id = Some id }
  let with_fiber id ctx = { ctx with fiber_id = Some id }
  let with_domain id ctx = { ctx with domain_id = Some id }
  let add_tag tag ctx = { ctx with tags = tag :: ctx.tags }
  let add_label key value ctx = { ctx with labels = (key, value) :: ctx.labels }
  let add_custom key json ctx = { ctx with custom = (key, json) :: ctx.custom }

  (* Eio exposes no public fiber-id accessor today, so this only records
     the current domain; set fiber ids explicitly via with_fiber. *)
  let of_eio () =
    { empty_context with
      fiber_id = None;
      domain_id = Some (Domain.self () :> int);
    }

  let current_fiber_id () = None
end

module Pattern = struct
  type pattern = {
    id : int;
    pattern_hash : string;
    pattern : string;
    first_seen : float;
    last_seen : float;
    occurrence_count : int;
    severity_score : float option;
  }

  let normalize_message msg =
    (* Order matters: rewrite the most specific shapes first so the digit
       rewrite does not clobber them. Str has no {n} repetition syntax,
       so the UUID regexp is an approximation of the 8-4-4-4-12 form. *)
    let msg =
      Str.global_replace
        (Str.regexp "[0-9a-fA-F]+-[0-9a-fA-F]+-[0-9a-fA-F]+-[0-9a-fA-F]+-[0-9a-fA-F]+")
        "UUID" msg
    in
    let msg = Str.global_replace (Str.regexp "0x[0-9a-fA-F]+") "0xHEX" msg in
    let msg = Str.global_replace (Str.regexp "[0-9]+") "N" msg in
    msg

  let hash_pattern pattern =
    Digestif.SHA256.digest_string pattern |> Digestif.SHA256.to_hex

  (* Stub: the periodic detection pass is still to be implemented. *)
  let detect _t = ()

  (* Stub: returns no patterns until detection lands. *)
  let list _t ?min_count:_ ?since:_ () = []

  let mark_consulted t ~pattern_hash =
    let sql = "UPDATE patterns SET last_consultation = ?1, consultation_count = consultation_count + 1 WHERE pattern_hash = ?2" in
    let stmt = Sqlite3.prepare t.db sql in
    let _ = Sqlite3.bind stmt 1 (Sqlite3.Data.FLOAT (Unix.gettimeofday ())) in
    let _ = Sqlite3.bind stmt 2 (Sqlite3.Data.TEXT pattern_hash) in
    let _ = Sqlite3.step stmt in
    let _ = Sqlite3.finalize stmt in
    ()
end

module Query = struct
  type filter = {
    level : Logs.level option;
    source : string option;
    since : float option;
    until : float option;
    session_id : string option;
    request_id : string option;
    trace_id : string option;
    user_id : string option;
    error_type : string option;
    min_duration_ms : float option;
    max_duration_ms : float option;
    limit : int;
  }

  let default_filter = {
    level = None;
    source = None;
    since = None;
    until = None;
    session_id = None;
    request_id = None;
    trace_id = None;
    user_id = None;
    error_type = None;
    min_duration_ms = None;
    max_duration_ms = None;
    limit = 100;
  }

  (* Stub: filtered listing is not implemented yet. *)
  let list _t _filter = []

  let search t query ?(limit=100) () =
    let sql = Printf.sprintf "
      SELECT logs.* FROM logs
      JOIN logs_fts ON logs.id = logs_fts.rowid
      WHERE logs_fts MATCH ?1
      ORDER BY timestamp DESC
      LIMIT %d
    " limit in

    let stmt = Sqlite3.prepare t.db sql in
    let _ = Sqlite3.bind stmt 1 (Sqlite3.Data.TEXT query) in

    (* row_data returns a Data.t array; convert to lists so callers can
       pattern-match on them. *)
    let rec collect acc =
      match Sqlite3.step stmt with
      | Sqlite3.Rc.ROW -> collect (Array.to_list (Sqlite3.row_data stmt) :: acc)
      | _ -> List.rev acc
    in

    let results = collect [] in
    let _ = Sqlite3.finalize stmt in
    results

  (* Stub. *)
  let count_by_level _t ?since:_ () = []

  let recent_errors t ?(limit=100) () =
    let sql = Printf.sprintf "SELECT * FROM recent_errors LIMIT %d" limit in
    let stmt = Sqlite3.prepare t.db sql in

    let rec collect acc =
      match Sqlite3.step stmt with
      | Sqlite3.Rc.ROW -> collect (Array.to_list (Sqlite3.row_data stmt) :: acc)
      | _ -> List.rev acc
    in

    let results = collect [] in
    let _ = Sqlite3.finalize stmt in
    results

  (* Stubs. *)
  let slow_operations _t ?threshold_ms:_ () = []

  let error_trends _t ?days:_ () = []

  let session_summary _t ~session_id:_ = []
end

module Metrics = struct
  type system_snapshot = {
    timestamp : float;
    cpu_usage_percent : float option;
    memory_used_bytes : int option;
    memory_available_bytes : int option;
    disk_usage_percent : float option;
    network_rx_bytes_per_sec : float option;
    network_tx_bytes_per_sec : float option;
    gc_minor_collections : int option;
    gc_major_collections : int option;
    gc_heap_words : int option;
  }

  let snapshot () =
    let gc_stat = Gc.stat () in
    {
      timestamp = Unix.gettimeofday ();
      cpu_usage_percent = None;
      memory_used_bytes = Some (gc_stat.Gc.heap_words * (Sys.word_size / 8));
      memory_available_bytes = None;
      disk_usage_percent = None;
      network_rx_bytes_per_sec = None;
      network_tx_bytes_per_sec = None;
      gc_minor_collections = Some gc_stat.Gc.minor_collections;
      gc_major_collections = Some gc_stat.Gc.major_collections;
      gc_heap_words = Some gc_stat.Gc.heap_words;
    }

  (* Stubs. *)
  let record _t _snapshot = ()

  let get_range _t ~since:_ ~until:_ = []
end

module Export = struct
  let to_json rows =
    `List (List.map (fun row ->
      `List (List.map (function
        | Sqlite3.Data.NULL -> `Null
        | Sqlite3.Data.INT i -> `Int (Int64.to_int i)
        | Sqlite3.Data.FLOAT f -> `Float f
        | Sqlite3.Data.TEXT s -> `String s
        | Sqlite3.Data.BLOB b -> `String b
      ) row)
    ) rows)

  (* Stub. *)
  let to_csv _rows = ""

  let for_claude t ?(limit=1000) ?(context_size=50000) () =
    (* level <= 2 keeps App, Error and Warning entries. *)
    let sql = Printf.sprintf "
      SELECT timestamp, level, source, message, error_type, stack_trace
      FROM logs
      WHERE level <= 2
      ORDER BY timestamp DESC
      LIMIT %d
    " limit in

    let stmt = Sqlite3.prepare t.db sql in

    let rec collect acc size =
      if size > context_size then acc
      else match Sqlite3.step stmt with
        | Sqlite3.Rc.ROW ->
          let row = Array.to_list (Sqlite3.row_data stmt) in
          let text = String.concat " | " (List.map (function
            | Sqlite3.Data.NULL -> "NULL"
            | Sqlite3.Data.INT i -> Int64.to_string i
            | Sqlite3.Data.FLOAT f -> string_of_float f
            | Sqlite3.Data.TEXT s -> s
            | Sqlite3.Data.BLOB b -> b
          ) row) in
          collect (text :: acc) (size + String.length text)
        | _ -> acc
    in

    let lines = collect [] 0 in
    let _ = Sqlite3.finalize stmt in
    String.concat "\n" (List.rev lines)
end

let vacuum t =
  let _ = Sqlite3.exec t.db "VACUUM" in
  ()

(* Stubs. *)
let stats _t = []

let prune _t ~older_than:_ ?dry_run:_ () = 0
+235
dancer/lib/dancer_logs/dancer_logs.mli
···
(** Dancer Logs - Comprehensive structured logging with SQLite backend *)

(** {1 Types} *)

type log_context = {
  session_id : string option;
  request_id : string option;
  trace_id : string option;
  span_id : string option;
  parent_span_id : string option;
  user_id : string option;
  tenant_id : string option;
  fiber_id : string option;
  domain_id : int option;
  tags : string list;
  labels : (string * string) list;
  custom : (string * Yojson.Safe.t) list;
}

val empty_context : log_context

type source_location = {
  file_path : string option;
  line_number : int option;
  column_number : int option;
  function_name : string option;
  module_path : string option;
}

type performance_metrics = {
  duration_ms : float option;
  memory_before : int option;
  memory_after : int option;
  cpu_time_ms : float option;
}

type network_context = {
  client_ip : string option;
  server_ip : string option;
  host_name : string option;
  port : int option;
  protocol : string option;
  method_ : string option;
  path : string option;
  status_code : int option;
  bytes_sent : int option;
  bytes_received : int option;
}

type error_info = {
  error_type : string option;
  error_code : string option;
  error_hash : string option;
  stack_trace : string option;
}

type system_info = {
  os_name : string;
  os_version : string;
  ocaml_version : string;
  app_version : string option;
  environment : string option;
  region : string option;
  container_id : string option;
}

(** {1 Database} *)

type t
(** The database connection *)

val init : ?path:string -> unit -> t
(** Initialize the database. Default path is "dancer_logs.db" *)

val close : t -> unit
(** Close the database connection *)

(** {1 Logging} *)

val log :
  t ->
  level:Logs.level ->
  source:string ->
  message:string ->
  ?context:log_context ->
  ?location:source_location ->
  ?performance:performance_metrics ->
  ?network:network_context ->
  ?error:error_info ->
  unit ->
  unit
(** Log a message with full context *)

(** {1 Reporter} *)

val reporter : t -> Logs.reporter
(** Create a Logs reporter that writes to the database *)

val chain_reporter : t -> Logs.reporter -> Logs.reporter
(** Chain this reporter with an existing one *)

(** {1 Context Management} *)

module Context : sig
  val with_session : string -> log_context -> log_context
  val with_request : string -> log_context -> log_context
  val with_trace : string -> log_context -> log_context
  val with_span : string -> ?parent:string -> log_context -> log_context
  val with_user : string -> log_context -> log_context
  val with_tenant : string -> log_context -> log_context
  val with_fiber : string -> log_context -> log_context
  val with_domain : int -> log_context -> log_context
  val add_tag : string -> log_context -> log_context
  val add_label : string -> string -> log_context -> log_context
  val add_custom : string -> Yojson.Safe.t -> log_context -> log_context

  (** Eio integration. Eio exposes no public fiber-id API today, so
      [of_eio] only records the current domain; set fiber ids explicitly
      with {!with_fiber}. *)
  val of_eio : unit -> log_context
  val current_fiber_id : unit -> string option
end

(** {1 Pattern Detection} *)

module Pattern : sig
  (** Named [pattern] rather than [t] so the database type [t] remains
      visible in the signatures below. *)
  type pattern = {
    id : int;
    pattern_hash : string;
    pattern : string;
    first_seen : float;
    last_seen : float;
    occurrence_count : int;
    severity_score : float option;
  }

  val normalize_message : string -> string
  (** Normalize a message for grouping: UUIDs -> "UUID",
      hex literals -> "0xHEX", numbers -> "N" *)

  val hash_pattern : string -> string
  (** SHA-256 hex digest of a normalized pattern *)

  val detect : t -> unit
  (** Run pattern detection on recent logs (stub) *)

  val list : t -> ?min_count:int -> ?since:float -> unit -> pattern list
  (** List detected patterns *)

  val mark_consulted : t -> pattern_hash:string -> unit
  (** Mark a pattern as consulted by Claude *)
end

(** {1 Queries} *)

module Query : sig
  type filter = {
    level : Logs.level option;
    source : string option;
    since : float option; (** Unix timestamp *)
    until : float option; (** Unix timestamp *)
    session_id : string option;
    request_id : string option;
    trace_id : string option;
    user_id : string option;
    error_type : string option;
    min_duration_ms : float option;
    max_duration_ms : float option;
    limit : int;
  }

  val default_filter : filter

  val list : t -> filter -> Sqlite3.Data.t list list
  (** List logs matching the filter *)

  val search : t -> string -> ?limit:int -> unit -> Sqlite3.Data.t list list
  (** Full-text search *)

  val count_by_level : t -> ?since:float -> unit -> (Logs.level * int) list
  (** Count logs by level *)

  val recent_errors : t -> ?limit:int -> unit -> Sqlite3.Data.t list list
  (** Get recent errors with aggregation *)

  val slow_operations : t -> ?threshold_ms:float -> unit -> Sqlite3.Data.t list list
  (** Find slow operations *)

  val error_trends : t -> ?days:int -> unit -> Sqlite3.Data.t list list
  (** Get error trends over time *)

  val session_summary : t -> session_id:string -> Sqlite3.Data.t list list
  (** Get summary for a session *)
end

(** {1 Metrics} *)

module Metrics : sig
  type system_snapshot = {
    timestamp : float;
    cpu_usage_percent : float option;
    memory_used_bytes : int option;
    memory_available_bytes : int option;
    disk_usage_percent : float option;
    network_rx_bytes_per_sec : float option;
    network_tx_bytes_per_sec : float option;
    gc_minor_collections : int option;
    gc_major_collections : int option;
    gc_heap_words : int option;
  }

  val snapshot : unit -> system_snapshot
  (** Take a system metrics snapshot *)

  val record : t -> system_snapshot -> unit
  (** Record metrics to database *)

  val get_range : t -> since:float -> until:float -> system_snapshot list
  (** Get metrics in time range *)
end

(** {1 Export} *)

module Export : sig
  val to_json : Sqlite3.Data.t list list -> Yojson.Safe.t
  (** Export rows to JSON *)

  val to_csv : Sqlite3.Data.t list list -> string
  (** Export rows to CSV *)

  val for_claude : t -> ?limit:int -> ?context_size:int -> unit -> string
  (** Prepare context for Claude analysis *)
end

(** {1 Maintenance} *)

val vacuum : t -> unit
(** Optimize the database *)

val stats : t -> (string * string) list
(** Get database statistics *)

val prune : t -> older_than:float -> ?dry_run:bool -> unit -> int
(** Prune old logs. Returns number of deleted rows *)
+5
dancer/lib/dancer_logs/dune
···
(library
 (name dancer_logs)
 (public_name dancer-logs)
 ; unix is needed for Unix.gettimeofday/getpid/open_process_in
 (libraries eio eio_main logs sqlite3 fmt ptime ptime.clock.os digestif
            yojson str unix)
 (preprocess (pps ppx_deriving.show ppx_deriving.eq)))
+316
dancer/lib/dancer_logs/schema.sql
···
-- Main log entries table with comprehensive structured fields
CREATE TABLE IF NOT EXISTS logs (
  id INTEGER PRIMARY KEY AUTOINCREMENT,

  -- Core fields
  timestamp REAL NOT NULL,        -- Unix timestamp with microseconds
  level INTEGER NOT NULL,         -- 0=App, 1=Error, 2=Warning, 3=Info, 4=Debug
  source TEXT NOT NULL,           -- Module/source name (e.g., 'myapp.db.connection')
  message TEXT NOT NULL,          -- Full log message, no truncation

  -- Error classification
  error_type TEXT,                -- Exception name or error classification
  error_code TEXT,                -- Specific error code if applicable
  error_hash TEXT,                -- Hash for grouping similar errors
  stack_trace TEXT,               -- Full stack trace

  -- Execution context
  pid INTEGER,                    -- Process ID
  ppid INTEGER,                   -- Parent process ID
  thread_id TEXT,                 -- Thread/fiber ID for Eio
  fiber_id TEXT,                  -- Specific Eio fiber ID
  domain_id INTEGER,              -- OCaml domain ID for multicore

  -- Request/Session tracking
  session_id TEXT,                -- Session identifier
  request_id TEXT,                -- Request identifier
  trace_id TEXT,                  -- Distributed tracing ID
  span_id TEXT,                   -- Span ID for tracing
  parent_span_id TEXT,            -- Parent span for trace hierarchy

  -- Source location
  file_path TEXT,                 -- Full file path
  file_name TEXT,                 -- Just the filename
  line_number INTEGER,            -- Line number in source
  column_number INTEGER,          -- Column number if available
  function_name TEXT,             -- Function/method name
  module_path TEXT,               -- Full module path (e.g., 'Myapp.Db.Connection')

  -- Performance metrics
  duration_ms REAL,               -- Operation duration in milliseconds
  memory_before INTEGER,          -- Memory usage before operation (bytes)
  memory_after INTEGER,           -- Memory usage after operation (bytes)
  memory_delta INTEGER,           -- Memory change
  cpu_time_ms REAL,               -- CPU time consumed

  -- Network context
  client_ip TEXT,                 -- Client IP address
  server_ip TEXT,                 -- Server IP address
  host_name TEXT,                 -- Hostname
  port INTEGER,                   -- Port number
  protocol TEXT,                  -- Protocol (HTTP, gRPC, etc.)
  method TEXT,                    -- HTTP method or RPC method
  path TEXT,                      -- Request path or endpoint
  status_code INTEGER,            -- Response status code
  bytes_sent INTEGER,             -- Bytes sent
  bytes_received INTEGER,         -- Bytes received

  -- User context
  user_id TEXT,                   -- User identifier
  tenant_id TEXT,                 -- Tenant ID for multi-tenant apps
  organization_id TEXT,           -- Organization ID

  -- System context
  os_name TEXT,                   -- Operating system name
  os_version TEXT,                -- OS version
  ocaml_version TEXT,             -- OCaml compiler version
  app_version TEXT,               -- Application version
  environment TEXT,               -- Environment (dev, staging, prod)
  region TEXT,                    -- Deployment region
  container_id TEXT,              -- Docker/container ID
  k8s_pod TEXT,                   -- Kubernetes pod name
  k8s_namespace TEXT,             -- Kubernetes namespace

  -- Metadata fields (JSON for flexibility)
  tags TEXT,                      -- JSON array of tags
  labels TEXT,                    -- JSON object of key-value labels
  context TEXT,                   -- JSON object with arbitrary context
  custom_fields TEXT,             -- JSON object for app-specific fields

  -- Audit fields
  created_at REAL DEFAULT (julianday('now')),
  indexed_at REAL                 -- When added to FTS index
);

-- Full-text search index on all text fields
CREATE VIRTUAL TABLE IF NOT EXISTS logs_fts USING fts5(
  message,
  error_type,
  error_code,
  source,
  file_path,
  function_name,
  module_path,
  path,
  method,
  stack_trace,
  tags,
  labels,
  context,
  custom_fields,
  content=logs,
  content_rowid=id,
  tokenize='porter unicode61'
);
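
-- Note (reminder, not yet implemented): content=logs makes this an FTS5
-- "external content" table, so it stores no copy of the text and must be
-- kept in sync with logs manually. The AFTER INSERT trigger below covers
-- new rows; if rows are ever UPDATEd or DELETEd (e.g. by pruning),
-- matching FTS 'delete' triggers will be needed as well.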

-- Pattern tracking for error analysis
CREATE TABLE IF NOT EXISTS patterns (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  pattern_hash TEXT UNIQUE NOT NULL,    -- Hash of normalized pattern
  pattern TEXT NOT NULL,                -- Normalized pattern template
  pattern_type TEXT,                    -- Type of pattern (error, warning, anomaly)

  -- Occurrence tracking
  first_seen REAL NOT NULL,
  last_seen REAL NOT NULL,
  occurrence_count INTEGER DEFAULT 1,
  unique_sources INTEGER DEFAULT 1,     -- Number of unique sources
  unique_sessions INTEGER DEFAULT 1,    -- Number of unique sessions affected

  -- Statistical analysis
  avg_duration_ms REAL,                 -- Average duration when this occurs
  max_duration_ms REAL,                 -- Maximum duration observed
  min_duration_ms REAL,                 -- Minimum duration observed

  -- Trend analysis
  hourly_rate REAL,                     -- Occurrences per hour
  daily_rate REAL,                      -- Occurrences per day
  acceleration_rate REAL,               -- Rate of increase/decrease

  -- Claude consultation tracking
  severity_score REAL,                  -- Calculated severity (0-10)
  impact_score REAL,                    -- Business impact score
  last_consultation REAL,               -- When Claude last reviewed
  consultation_count INTEGER DEFAULT 0, -- Number of consultations
  fix_attempted BOOLEAN DEFAULT 0,      -- Whether a fix was attempted
  fix_successful BOOLEAN DEFAULT 0,     -- Whether the fix worked

  -- Metadata
  sample_log_ids TEXT,                  -- JSON array of example log IDs
  related_patterns TEXT,                -- JSON array of related pattern IDs
  notes TEXT,                           -- Human or Claude notes

  created_at REAL DEFAULT (julianday('now')),
  updated_at REAL DEFAULT (julianday('now'))
);

-- Time-series data for pattern analysis
CREATE TABLE IF NOT EXISTS pattern_metrics (
  pattern_id INTEGER REFERENCES patterns(id) ON DELETE CASCADE,
  timestamp REAL NOT NULL,              -- Bucket timestamp (e.g., hour boundary)
  bucket_size INTEGER NOT NULL,         -- Bucket size in seconds (3600 for hourly)

  -- Metrics for this time bucket
  occurrence_count INTEGER DEFAULT 0,
  unique_sources INTEGER DEFAULT 0,
  unique_sessions INTEGER DEFAULT 0,
  avg_duration_ms REAL,
  max_duration_ms REAL,
  error_rate REAL,                      -- Errors per second in this bucket

  PRIMARY KEY (pattern_id, timestamp, bucket_size)
);

-- Session tracking for correlation
CREATE TABLE IF NOT EXISTS sessions (
  session_id TEXT PRIMARY KEY,
  start_time REAL NOT NULL,
  end_time REAL,
  duration_ms REAL,

  -- Session metadata
  user_id TEXT,
  tenant_id TEXT,
  client_ip TEXT,
  user_agent TEXT,

  -- Session metrics
  log_count INTEGER DEFAULT 0,
  error_count INTEGER DEFAULT 0,
  warning_count INTEGER DEFAULT 0,

  -- Performance metrics
  total_duration_ms REAL,
  total_memory_bytes INTEGER,

  created_at REAL DEFAULT (julianday('now'))
);

-- System metrics snapshot table
CREATE TABLE IF NOT EXISTS system_metrics (
  timestamp REAL PRIMARY KEY,

  -- CPU metrics
  cpu_usage_percent REAL,
  cpu_user_percent REAL,
  cpu_system_percent REAL,
  cpu_idle_percent REAL,
  load_avg_1m REAL,
  load_avg_5m REAL,
  load_avg_15m REAL,

  -- Memory metrics
  memory_total_bytes INTEGER,
  memory_used_bytes INTEGER,
  memory_free_bytes INTEGER,
  memory_available_bytes INTEGER,
  swap_used_bytes INTEGER,
  swap_free_bytes INTEGER,

  -- Disk metrics
  disk_read_bytes_per_sec REAL,
  disk_write_bytes_per_sec REAL,
  disk_usage_percent REAL,

  -- Network metrics
  network_rx_bytes_per_sec REAL,
  network_tx_bytes_per_sec REAL,
  network_connections INTEGER,

  -- Process metrics
  process_count INTEGER,
  thread_count INTEGER,
  fiber_count INTEGER,

  -- GC metrics (OCaml specific)
  gc_minor_collections INTEGER,
  gc_major_collections INTEGER,
  gc_heap_words INTEGER,
  gc_live_words INTEGER,

  created_at REAL DEFAULT (julianday('now'))
);

-- Comprehensive indexes for efficient querying
CREATE INDEX IF NOT EXISTS idx_logs_timestamp ON logs(timestamp DESC);
CREATE INDEX IF NOT EXISTS idx_logs_level_timestamp ON logs(level, timestamp DESC);
CREATE INDEX IF NOT EXISTS idx_logs_source_timestamp ON logs(source, timestamp DESC);
CREATE INDEX IF NOT EXISTS idx_logs_error_hash ON logs(error_hash) WHERE error_hash IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_logs_session_id ON logs(session_id) WHERE session_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_logs_request_id ON logs(request_id) WHERE request_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_logs_trace_id ON logs(trace_id) WHERE trace_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_logs_user_id ON logs(user_id) WHERE user_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_logs_error_type ON logs(error_type) WHERE error_type IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_logs_duration ON logs(duration_ms) WHERE duration_ms IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_logs_status_code ON logs(status_code) WHERE status_code IS NOT NULL;

CREATE INDEX IF NOT EXISTS idx_patterns_last_seen ON patterns(last_seen DESC);
CREATE INDEX IF NOT EXISTS idx_patterns_severity ON patterns(severity_score DESC) WHERE severity_score IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_patterns_count ON patterns(occurrence_count DESC);
CREATE INDEX IF NOT EXISTS idx_pattern_metrics_timestamp ON pattern_metrics(timestamp DESC);

-- Triggers to maintain FTS and update timestamps
CREATE TRIGGER IF NOT EXISTS logs_fts_insert AFTER INSERT ON logs
BEGIN
  INSERT INTO logs_fts(rowid, message, error_type, error_code, source,
                       file_path, function_name, module_path, path, method,
                       stack_trace, tags, labels, context, custom_fields)
  VALUES (new.id, new.message, new.error_type, new.error_code, new.source,
          new.file_path, new.function_name, new.module_path, new.path, new.method,
          new.stack_trace, new.tags, new.labels, new.context, new.custom_fields);

  UPDATE logs SET indexed_at = julianday('now') WHERE id = new.id;
END;

CREATE TRIGGER IF NOT EXISTS patterns_update_timestamp AFTER UPDATE ON patterns
BEGIN
  UPDATE patterns SET updated_at = julianday('now') WHERE id = new.id;
END;

-- Views for common queries
-- Note: logs.timestamp is a Unix epoch, so time windows must be computed
-- with strftime('%s', ...) / 'unixepoch', not julianday().
CREATE VIEW IF NOT EXISTS recent_errors AS
SELECT
  error_hash,
  error_type,
  error_code,
  COUNT(*) as count,
  COUNT(DISTINCT session_id) as affected_sessions,
  COUNT(DISTINCT user_id) as affected_users,
  MAX(timestamp) as last_seen,
  MIN(timestamp) as first_seen,
  GROUP_CONCAT(DISTINCT source) as sources,
  AVG(duration_ms) as avg_duration_ms,
  message as sample_message
FROM logs
WHERE level IN (1, 2)                          -- Error and Warning
  AND timestamp > strftime('%s', 'now') - 86400 -- Last 24 hours
GROUP BY error_hash
ORDER BY count DESC;

CREATE VIEW IF NOT EXISTS error_trends AS
SELECT
  date(timestamp, 'unixepoch') as day,
  level,
  COUNT(*) as count,
  COUNT(DISTINCT error_hash) as unique_errors,
  COUNT(DISTINCT session_id) as affected_sessions,
  AVG(duration_ms) as avg_duration_ms
FROM logs
WHERE level IN (1, 2)
GROUP BY date(timestamp, 'unixepoch'), level
ORDER BY day DESC, level;

CREATE VIEW IF NOT EXISTS slow_operations AS
SELECT
  source,
  function_name,
  COUNT(*) as count,
  AVG(duration_ms) as avg_duration_ms,
  MAX(duration_ms) as max_duration_ms,
  MIN(duration_ms) as min_duration_ms,
  SUM(duration_ms) as total_duration_ms
FROM logs
WHERE duration_ms IS NOT NULL
GROUP BY source, function_name
HAVING avg_duration_ms > 100 -- Operations slower than 100ms
ORDER BY avg_duration_ms DESC;