A community-based topic aggregation platform built on atproto

feat: Set up local development environment with Bluesky PDS

- Add docker-compose.dev.yml with Bluesky PDS (port 3001)
- Add .env.dev with development configuration
- Add Makefile with convenient dev commands (help, dev-up, dev-down, etc.)
- Add comprehensive docs/LOCAL_DEVELOPMENT.md guide
- Update CLAUDE.md and ATPROTO_GUIDE.md with correct architecture
- Remove custom carstore implementation (PDS handles this)
- Remove internal/atproto/repo wrapper (not needed)
- Add feed lexicon schemas (getAll, getCommunity, getTimeline)
- Update post lexicons to remove getFeed (replaced by feed queries)
- Update PROJECT_STRUCTURE.md to reflect new architecture

Architecture:
- PDS is self-contained with internal SQLite + CAR storage
- PostgreSQL database only used by Coves AppView for indexing
- AppView subscribes directly to PDS firehose (no relay needed for local dev)
- PDS runs on port 3001 to avoid conflicts with production PDS on 3000

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

+69
.env.dev
···
+
# Coves Local Development Environment Configuration
+
# This file contains all environment variables for the local atProto development stack
+
# DO NOT commit secrets to version control in production!
+
+
# =============================================================================
+
# PostgreSQL Configuration (Shared Database)
+
# =============================================================================
+
# Uses existing database from internal/db/local_dev_db_compose/
+
POSTGRES_HOST=localhost
+
POSTGRES_PORT=5433
+
POSTGRES_DB=coves_dev
+
POSTGRES_USER=dev_user
+
POSTGRES_PASSWORD=dev_password
+
+
# =============================================================================
+
# PDS (Personal Data Server) Configuration
+
# =============================================================================
+
# PDS runs on port 3001 (to avoid conflict with production PDS on :3000)
+
PDS_HOSTNAME=localhost
+
PDS_PORT=3001
+
+
# DID PLC Directory (use Bluesky's for development)
+
PDS_DID_PLC_URL=https://plc.directory
+
+
# JWT Secret (for signing tokens - change in production!)
+
PDS_JWT_SECRET=local-dev-jwt-secret-change-in-production
+
+
# Admin password for PDS management
+
PDS_ADMIN_PASSWORD=admin
+
+
# Handle domains (users will get handles like alice.local.coves.dev)
+
PDS_SERVICE_HANDLE_DOMAINS=.local.coves.dev
+
+
# PLC Rotation Key (k256 private key in hex format - for local dev only)
+
# This is a randomly generated key for testing - DO NOT use in production
+
PDS_PLC_ROTATION_KEY=af514fb84c4356241deed29feb392d1ee359f99c05a7b8f7bff2e5f2614f64b2
+
+
# =============================================================================
+
# AppView Configuration (Your Go Application)
+
# =============================================================================
+
# AppView runs on port 8081 (to avoid conflicts)
+
APPVIEW_PORT=8081
+
+
# PDS Firehose URL (WebSocket connection - direct to PDS, no relay)
+
FIREHOSE_URL=ws://localhost:3001/xrpc/com.atproto.sync.subscribeRepos
+
+
# PDS URL (for XRPC calls)
+
PDS_URL=http://localhost:3001
+
+
# =============================================================================
+
# Development Settings
+
# =============================================================================
+
# Environment
+
ENV=development
+
NODE_ENV=development
+
+
# Logging
+
LOG_LEVEL=debug
+
LOG_ENABLED=true
+
+
# =============================================================================
+
# Notes
+
# =============================================================================
+
# - PDS port 3001 avoids conflict with your production PDS on :3000
+
# - AppView port 8081 avoids conflicts
+
# - PostgreSQL port 5433 matches your existing local dev database
+
# - All services connect to the shared PostgreSQL database
+
# - AppView subscribes directly to PDS firehose (no relay needed for local dev)
+
# - PDS firehose: ws://localhost:3001/xrpc/com.atproto.sync.subscribeRepos
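When running the AppView outside Docker, one simple way to consume this file is to export everything it defines before starting the server (a sketch; adapt to your shell):

```bash
# Export every variable defined in .env.dev into the current shell, then start the AppView
set -a
source .env.dev
set +a
go run ./cmd/server
```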
+45 -83
ATPROTO_GUIDE.md
···
### Key Components
1. **DIDs (Decentralized Identifiers)** - Persistent user identifiers (e.g., `did:plc:xyz123`)
2. **Handles** - Human-readable names that resolve to DIDs (e.g., `alice.bsky.social`)
-
3. **Repositories** - User data stored as signed Merkle trees in CAR files
+
3. **Repositories** - User data stored in each user's PDS
4. **Lexicons** - Schema definitions for data types and API methods
5. **XRPC** - The RPC protocol for client-server communication
6. **Firehose** - Real-time event stream of repository changes
## Architecture Overview
-
### Two-Database Pattern
-
AT Protocol requires two distinct data stores:
+
### Coves Architecture Pattern
+
Coves uses a simplified, single-database architecture that leverages existing atProto infrastructure:
-
#### 1. Repository Database (Source of Truth)
-
- **Purpose**: Stores user-generated content as immutable, signed records
-
- **Storage**: CAR files containing Merkle trees + PostgreSQL metadata
-
- **Access**: Through XRPC procedures that modify repositories
-
- **Properties**:
-
- Append-only (soft deletes via tombstones)
-
- Cryptographically verifiable
-
- User-controlled and portable
+
#### Components
-
#### 2. AppView Database (Query Layer)
-
- **Purpose**: Denormalized, indexed data optimized for queries
-
- **Storage**: PostgreSQL with application-specific schema
-
- **Access**: Through XRPC queries (read-only)
-
- **Properties**:
-
- Eventually consistent with repositories
-
- Can be rebuilt from repository data
-
- Application-specific aggregations
+
1. **PDS (Personal Data Server)**
+
- Managed by Bluesky's official PDS implementation
+
- Handles user repositories, DIDs, and CAR file storage
+
- Users can use our PDS or any external PDS (federated)
+
- Emits events to the Relay/firehose
+
+
2. **Relay (BigSky)**
+
- Aggregates firehose events from multiple PDSs
+
- For development: subscribes only to local dev PDS
+
- For production: can subscribe to multiple PDSs or public relay
+
+
3. **AppView Database (Single PostgreSQL)**
+
- **Purpose**: Denormalized, indexed data optimized for Coves queries
+
- **Storage**: PostgreSQL with Coves-specific schema
+
- **Contains**:
+
- Indexed posts, communities, feeds
+
- User read states and preferences
+
- PDS metadata and record references
+
- **Properties**:
+
- Eventually consistent with PDS repositories
+
- Can be rebuilt from firehose replay
+
- Application-specific aggregations
+
+
4. **Coves AppView (Go Application)**
+
- Subscribes to Relay firehose
+
- Indexes relevant records into PostgreSQL
+
- Serves XRPC queries for Coves features
+
- Implements custom feed algorithms
### Data Flow
```
Write Path:
-
Client → XRPC Procedure → Service → Write Repo → CAR Store
-
-
Firehose Event
-
-
AppView Indexer
-
-
AppView Database
+
Client → PDS (via XRPC) → Repository Record Created
+
+
Firehose Event
+
+
Relay aggregates events
+
+
Coves AppView subscribes
+
+
Index in PostgreSQL
Read Path:
-
Client → XRPC Query → Service → Read Repo → AppView Database
+
Client → Coves AppView (via XRPC) → PostgreSQL Query → Response
```
+
+
**Key Point**: Coves AppView only reads from the firehose and indexes data. It does NOT write to CAR files or manage repositories directly - the PDS handles that.
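To make that read-only role concrete, a minimal firehose subscriber might look like the sketch below. It assumes Indigo's `events` package and its sequential scheduler; check the exact signatures against the Indigo version pinned in `go.mod` (newer releases, for example, add a logger argument to `HandleRepoStream`):

```go
package main

import (
	"context"
	"log"
	"os"

	comatproto "github.com/bluesky-social/indigo/api/atproto"
	"github.com/bluesky-social/indigo/events"
	"github.com/bluesky-social/indigo/events/schedulers/sequential"
	"github.com/gorilla/websocket"
)

func main() {
	ctx := context.Background()

	// e.g. ws://localhost:3001/xrpc/com.atproto.sync.subscribeRepos in local dev
	firehoseURL := os.Getenv("FIREHOSE_URL")

	// Dial the PDS firehose directly (no relay needed for local dev).
	conn, _, err := websocket.DefaultDialer.Dial(firehoseURL, nil)
	if err != nil {
		log.Fatalf("dialing firehose: %v", err)
	}

	// Handle commit events; a real indexer would decode the CAR slice in
	// evt.Blocks and upsert social.coves.* records into PostgreSQL.
	callbacks := &events.RepoStreamCallbacks{
		RepoCommit: func(evt *comatproto.SyncSubscribeRepos_Commit) error {
			for _, op := range evt.Ops {
				log.Printf("seq=%d repo=%s %s %s", evt.Seq, evt.Repo, op.Action, op.Path)
			}
			return nil
		},
	}

	sched := sequential.NewScheduler("coves-appview", callbacks.EventHandler)
	if err := events.HandleRepoStream(ctx, conn, sched); err != nil {
		log.Fatalf("firehose stream ended: %v", err)
	}
}
```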
## Lexicons
···
- Procedures often start with `create`, `update`, or `delete`
- Keep names descriptive but concise
-
## XRPC
-
-
### What is XRPC?
-
XRPC (Cross-Protocol RPC) is AT Protocol's HTTP-based RPC system:
-
- All methods live under `/xrpc/` path
-
- Method names map directly to Lexicon IDs
-
- Supports both JSON and binary data
-
-
### Request Format
-
```
-
# Query (GET)
-
GET /xrpc/social.coves.community.getCommunity?id=123
-
-
# Procedure (POST)
-
POST /xrpc/social.coves.community.createPost
-
Content-Type: application/json
-
Authorization: Bearer <token>
-
-
{"text": "Hello, Coves!"}
-
```
-
-
### Authentication
-
- Uses Bearer tokens in Authorization header
-
- Tokens are JWTs signed by the user's signing key
-
- Service auth for server-to-server calls
-
-
## Data Storage
-
-
### CAR Files
-
Content Addressable archive files store repository data:
-
- Contains IPLD blocks forming a Merkle tree
-
- Each block identified by CID (Content IDentifier)
-
- Enables cryptographic verification and efficient sync
-
-
### Record Keys (rkeys)
-
- Unique identifiers for records within a collection
-
- Can be TIDs (timestamp-based) or custom strings
-
- Must match pattern: `[a-zA-Z0-9._~-]{1,512}`
-
-
### Repository Structure
-
```
-
Repository (did:plc:user123)
-
├── social.coves.post
-
│ ├── 3kkreaz3amd27 (TID)
-
│ └── 3kkreaz3amd28 (TID)
-
├── social.coves.community.member
-
│ ├── community123
-
│ └── community456
-
└── app.bsky.actor.profile
-
└── self
-
```
## Identity & Authentication
···
2. HTTPS well-known: `https://alice.com/.well-known/atproto-did`
### Authentication Flow
-
1. Client creates session with identifier/password
-
2. Server returns access/refresh tokens
-
3. Client uses access token for API requests
-
4. Refresh when access token expires
+
1. Client creates a session via OAuth
## Firehose & Sync
···
### Using Indigo Library
Bluesky's official Go implementation provides:
- Lexicon code generation
-
- CAR file handling
- XRPC client/server
- Firehose subscription
+14 -80
CLAUDE.md
···
#### Human & LLM Readability Guidelines:
- Descriptive Naming: Use full words over abbreviations (e.g., CommunityGovernance not CommGov)
-
## Build Process
-
-
### Phase 1: Planning (Before Writing Code)
-
-
**ALWAYS START WITH:**
-
-
- [ ] Identify which atProto patterns apply (check ATPROTO_GUIDE.md or context7 https://context7.com/bluesky-social/atproto)
-
- [ ] Check if Indigo (also in context7) packages already solve this: https://context7.com/bluesky-social/indigo
-
- [ ] Define the XRPC interface first
-
- [ ] Write the Lexicon schema
-
- [ ] Plan the data flow: CAR store → AppView
-
- [ ] - Follow the two-database pattern: Repository (CAR files)(PostgreSQL for metadata) and AppView (PostgreSQL)
-
- [ ] **Identify auth requirements and data sensitivity**
-
-
### Phase 2: Test-First Implementation
-
-
**BUILD ORDER:**
-
-
1. **Domain Model** (`core/[domain]/[domain].go`)
-
-
- Start with the simplest struct
-
- Add validation methods
-
- Define error types
-
- **Add input validation from the start**
-
2. **Repository Interfaces** (`core/[domain]/repository.go`)
-
-
```go
-
type CommunityWriteRepository interface {
-
Create(ctx context.Context, community *Community) error
-
Update(ctx context.Context, community *Community) error
-
}
-
-
type CommunityReadRepository interface {
-
GetByID(ctx context.Context, id string) (*Community, error)
-
List(ctx context.Context, limit, offset int) ([]*Community, error)
-
}
-
```
-
-
3. **Service Tests** (`core/[domain]/service_test.go`)
-
-
- Write failing tests for happy path
-
- **Add tests for invalid inputs**
-
- **Add tests for unauthorized access**
-
- Mock repositories
-
4. **Service Implementation** (`core/[domain]/service.go`)
-
-
- Implement to pass tests
-
- **Validate all inputs before processing**
-
- **Check permissions before operations**
-
- Handle transactions
-
5. **Repository Implementations**
+
## atProto Essentials for Coves
-
- **Always use parameterized queries**
-
- **Never concatenate user input into queries**
-
- Write repo: `internal/atproto/carstore/[domain]_write_repo.go`
-
- Read repo: `db/appview/[domain]_read_repo.go`
-
6. **XRPC Handler** (`xrpc/handlers/[domain]_handler.go`)
+
### Architecture
+
- **PDS is Self-Contained**: Uses internal SQLite + CAR files (in Docker volume)
+
- **PostgreSQL for AppView Only**: One database for Coves AppView indexing
+
- **Don't Touch PDS Internals**: PDS manages its own storage, we just read from firehose
+
- **Data Flow**: Client → PDS → Firehose → AppView → PostgreSQL
-
- **Verify auth tokens/DIDs**
-
- Parse XRPC request
-
- Call service
-
- **Sanitize errors before responding**
-
-
### Phase 3: Integration
-
-
**WIRE IT UP:**
-
-
- [ ] Add to dependency injection in main.go
-
- [ ] Register XRPC routes with proper auth middleware
-
- [ ] Create migration if needed
-
- [ ] Write integration test including auth flows
+
### Always Consider:
+
- [ ] **Identity**: Every action needs DID verification
+
- [ ] **Record Types**: Define custom lexicons (e.g., `social.coves.post`, `social.coves.community`); see the example record after this checklist
+
- [ ] **Is it federated-friendly?** (Can other PDSs interact with it?)
+
- [ ] **Does the Lexicon make sense?** (Would it work for other forums?)
+
- [ ] **AppView only indexes**: We don't write to CAR files, only read from firehose
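For example, a minimal record instance satisfying the `social.coves.post.record` lexicon's required fields would look like this (the community DID is a placeholder):

```json
{
  "$type": "social.coves.post.record",
  "community": "did:plc:examplecommunity",
  "postType": "text",
  "createdAt": "2024-01-01T00:00:00Z"
}
```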
## Security-First Building
···
- Error messages with internal details → Wrap errors properly
- Unbounded queries → Add limits/pagination
-
## Quick Decision Guide
-
-
### "Should I use X?"
-
-
1. Does Indigo have it? → Use it
-
2. Can PostgreSQL + Go do it securely? → Build it simple
-
3. Requires external dependency? → Check Context7 first
-
### "How should I structure this?"
1. One domain, one package
···
- [ ] Tests pass (including security tests)
- [ ] Follows atProto patterns
-
- [ ] No security checklist items missed
- [ ] Handles errors gracefully
- [ ] Works end-to-end with auth
## Quick Checks Before Committing
1. **Will it work?** (Integration test proves it)
-
2. 1. **Is it secure?** (Auth, validation, parameterized queries)
+
2. **Is it secure?** (Auth, validation, parameterized queries)
3. **Is it simple?** (Could you explain to a junior?)
4. **Is it complete?** (Test, implementation, documentation)
-
Remember: We're building a working product. Perfect is the enemy of shipped.
+
Remember: We're building a working product. Perfect is the enemy of shipped.
+161
Makefile
···
+
.PHONY: help dev-up dev-down dev-logs dev-status dev-reset dev-db-up dev-db-down dev-db-reset test clean
+
+
# Default target - show help
+
.DEFAULT_GOAL := help
+
+
# Colors for output
+
CYAN := \033[36m
+
RESET := \033[0m
+
GREEN := \033[32m
+
YELLOW := \033[33m
+
+
##@ General
+
+
help: ## Show this help message
+
@echo ""
+
@echo "$(CYAN)Coves Development Commands$(RESET)"
+
@echo ""
+
@awk 'BEGIN {FS = ":.*##"; printf "Usage: make $(CYAN)<target>$(RESET)\n"} \
+
/^[a-zA-Z_-]+:.*?##/ { printf " $(CYAN)%-15s$(RESET) %s\n", $$1, $$2 } \
+
/^##@/ { printf "\n$(YELLOW)%s$(RESET)\n", substr($$0, 5) }' $(MAKEFILE_LIST)
+
@echo ""
+
+
##@ Local Development (atProto Stack)
+
+
dev-up: ## Start PDS for local development
+
@echo "$(GREEN)Starting Coves development stack...$(RESET)"
+
@echo "$(YELLOW)Note: Make sure PostgreSQL is running on port 5433$(RESET)"
+
@echo "Run 'make dev-db-up' if database is not running"
+
@docker-compose -f docker-compose.dev.yml --env-file .env.dev up -d pds
+
@echo ""
+
@echo "$(GREEN)✓ Development stack started!$(RESET)"
+
@echo ""
+
@echo "Services available at:"
+
@echo " - PDS (XRPC): http://localhost:3001"
+
@echo " - PDS Firehose (WS): ws://localhost:3001/xrpc/com.atproto.sync.subscribeRepos"
+
@echo " - AppView (API): http://localhost:8081 (when uncommented)"
+
@echo ""
+
@echo "Run 'make dev-logs' to view logs"
+
+
dev-down: ## Stop the atProto development stack
+
@echo "$(YELLOW)Stopping Coves development stack...$(RESET)"
+
@docker-compose -f docker-compose.dev.yml down
+
@echo "$(GREEN)✓ Development stack stopped$(RESET)"
+
+
dev-logs: ## Tail logs from all development services
+
@docker-compose -f docker-compose.dev.yml logs -f
+
+
dev-status: ## Show status of all development containers
+
@echo "$(CYAN)Development Stack Status:$(RESET)"
+
@docker-compose -f docker-compose.dev.yml ps
+
@echo ""
+
@echo "$(CYAN)Database Status:$(RESET)"
+
@cd internal/db/local_dev_db_compose && docker-compose ps
+
+
dev-reset: ## Nuclear option - stop everything and remove all volumes
+
@echo "$(YELLOW)⚠️ WARNING: This will delete all PDS data and volumes!$(RESET)"
+
@read -p "Are you sure? (y/N): " confirm && [ "$$confirm" = "y" ] || exit 1
+
@echo "$(YELLOW)Stopping and removing containers and volumes...$(RESET)"
+
@docker-compose -f docker-compose.dev.yml down -v
+
@echo "$(GREEN)✓ Reset complete - all data removed$(RESET)"
+
@echo "Run 'make dev-up' to start fresh"
+
+
##@ Database Management
+
+
dev-db-up: ## Start local PostgreSQL database (port 5433)
+
@echo "$(GREEN)Starting local PostgreSQL database...$(RESET)"
+
@cd internal/db/local_dev_db_compose && docker-compose up -d
+
@echo "$(GREEN)✓ Database started on port 5433$(RESET)"
+
@echo "Connection: postgresql://dev_user:dev_password@localhost:5433/coves_dev"
+
+
dev-db-down: ## Stop local PostgreSQL database
+
@echo "$(YELLOW)Stopping local PostgreSQL database...$(RESET)"
+
@cd internal/db/local_dev_db_compose && docker-compose down
+
@echo "$(GREEN)✓ Database stopped$(RESET)"
+
+
dev-db-reset: ## Reset database (delete all data and restart)
+
@echo "$(YELLOW)⚠️ WARNING: This will delete all database data!$(RESET)"
+
@read -p "Are you sure? (y/N): " confirm && [ "$$confirm" = "y" ] || exit 1
+
@echo "$(YELLOW)Resetting database...$(RESET)"
+
@cd internal/db/local_dev_db_compose && docker-compose down -v
+
@cd internal/db/local_dev_db_compose && docker-compose up -d
+
@echo "$(GREEN)✓ Database reset complete$(RESET)"
+
+
##@ Testing
+
+
test: ## Run all tests with test database
+
@echo "$(GREEN)Starting test database...$(RESET)"
+
@cd internal/db/test_db_compose && ./start-test-db.sh
+
@echo "$(GREEN)Running tests...$(RESET)"
+
@./run-tests.sh
+
@echo "$(GREEN)✓ Tests complete$(RESET)"
+
+
test-db-reset: ## Reset test database
+
@echo "$(GREEN)Resetting test database...$(RESET)"
+
@cd internal/db/test_db_compose && ./reset-test-db.sh
+
@echo "$(GREEN)✓ Test database reset$(RESET)"
+
+
##@ Build & Run
+
+
build: ## Build the Coves server
+
@echo "$(GREEN)Building Coves server...$(RESET)"
+
@go build -o server ./cmd/server
+
@echo "$(GREEN)✓ Build complete: ./server$(RESET)"
+
+
run: ## Run the Coves server (requires database running)
+
@echo "$(GREEN)Starting Coves server...$(RESET)"
+
@go run ./cmd/server
+
+
##@ Cleanup
+
+
clean: ## Clean build artifacts and temporary files
+
@echo "$(YELLOW)Cleaning build artifacts...$(RESET)"
+
@rm -f server main validate-lexicon
+
@go clean
+
@echo "$(GREEN)✓ Clean complete$(RESET)"
+
+
clean-all: clean ## Clean everything including Docker volumes (DESTRUCTIVE)
+
@echo "$(YELLOW)⚠️ WARNING: This will remove ALL Docker volumes!$(RESET)"
+
@read -p "Are you sure? (y/N): " confirm && [ "$$confirm" = "y" ] || exit 1
+
@make dev-reset
+
@make dev-db-reset
+
@echo "$(GREEN)✓ All clean$(RESET)"
+
+
##@ Workflows (Common Tasks)
+
+
fresh-start: ## Complete fresh start (reset DB, reset stack, start everything)
+
@echo "$(CYAN)Starting fresh development environment...$(RESET)"
+
@make dev-db-reset
+
@make dev-reset || true
+
@sleep 2
+
@make dev-db-up
+
@sleep 2
+
@make dev-up
+
@echo ""
+
@echo "$(GREEN)✓ Fresh environment ready!$(RESET)"
+
@make dev-status
+
+
quick-restart: ## Quick restart of development stack (keeps data)
+
@make dev-down
+
@make dev-up
+
+
##@ Utilities
+
+
validate-lexicon: ## Validate all Lexicon schemas
+
@echo "$(GREEN)Validating Lexicon schemas...$(RESET)"
+
@./validate-lexicon
+
@echo "$(GREEN)✓ Lexicon validation complete$(RESET)"
+
+
db-shell: ## Open PostgreSQL shell for local database
+
@echo "$(CYAN)Connecting to local database...$(RESET)"
+
@PGPASSWORD=dev_password psql -h localhost -p 5433 -U dev_user -d coves_dev
+
+
##@ Documentation
+
+
docs: ## Open project documentation
+
@echo "$(CYAN)Project Documentation:$(RESET)"
+
@echo " - Setup Guide: docs/LOCAL_DEVELOPMENT.md"
+
@echo " - Project Structure: PROJECT_STRUCTURE.md"
+
@echo " - Build Guide: CLAUDE.md"
+
@echo " - atProto Guide: ATPROTO_GUIDE.md"
+
@echo " - PRD: PRD.md"
-23
PROJECT_STRUCTURE.md
···
This document provides an overview of the Coves project directory structure, following atProto architecture patterns.
**Legend:**
-
- † = Planned but not yet implemented
- 🔒 = Security-sensitive files
```
···
└── build/ † # Build artifacts
```
-
## Implementation Status
-
-
### Completed ✓
-
- Basic repository structure
-
- User domain models
-
- CAR store foundation
-
- Lexicon schemas
-
- Database migrations
-
-
### In Progress 🚧
-
- Repository service implementation
-
- User service
-
- Basic authentication
-
-
### Planned 📋
-
- XRPC handlers
-
- AppView indexer
-
- Firehose implementation
-
- Community features
-
- Moderation system
-
- Feed algorithms
## Development Guidelines
···
1. **Start with Lexicons**: Define data schemas first
2. **Implement Core Domain**: Create models and interfaces
3. **Build Services**: Implement business logic
-
4. **Add Repositories**: Create data access layers
5. **Wire XRPC**: Connect handlers last
+144
docker-compose.dev.yml
···
+
version: '3.8'
+
+
# Coves Local Development Stack
+
# Simple setup: PDS + AppView (no relay needed for local dev)
+
# AppView subscribes directly to PDS firehose at ws://localhost:3001/xrpc/com.atproto.sync.subscribeRepos
+
# Ports configured to avoid conflicts with production PDS on :3000
+
+
services:
+
# Bluesky Personal Data Server (PDS)
+
# Handles user repositories, DIDs, and CAR files
+
pds:
+
image: ghcr.io/bluesky-social/pds:latest
+
container_name: coves-dev-pds
+
ports:
+
- "3001:3000" # PDS XRPC API (avoiding production PDS on :3000)
+
environment:
+
# PDS Configuration
+
PDS_HOSTNAME: ${PDS_HOSTNAME:-localhost}
+
PDS_PORT: 3000
+
PDS_DATA_DIRECTORY: /pds
+
PDS_BLOBSTORE_DISK_LOCATION: /pds/blocks
+
PDS_DID_PLC_URL: ${PDS_DID_PLC_URL:-https://plc.directory}
+
# PDS_CRAWLERS not needed - we're not using a relay for local dev
+
+
# Note: PDS uses its own internal SQLite database and CAR file storage
+
# Our PostgreSQL database is only for the Coves AppView
+
+
# JWT secrets (for local dev only)
+
PDS_JWT_SECRET: ${PDS_JWT_SECRET:-local-dev-jwt-secret-change-in-production}
+
PDS_ADMIN_PASSWORD: ${PDS_ADMIN_PASSWORD:-admin}
+
PDS_PLC_ROTATION_KEY_K256_PRIVATE_KEY_HEX: ${PDS_PLC_ROTATION_KEY:-af514fb84c4356241deed29feb392d1ee359f99c05a7b8f7bff2e5f2614f64b2}
+
+
# Service endpoints
+
PDS_SERVICE_HANDLE_DOMAINS: ${PDS_SERVICE_HANDLE_DOMAINS:-.local.coves.dev}
+
+
# Dev mode settings (allows HTTP instead of HTTPS)
+
PDS_DEV_MODE: "true"
+
+
# Development settings
+
NODE_ENV: development
+
LOG_ENABLED: "true"
+
LOG_LEVEL: ${LOG_LEVEL:-debug}
+
volumes:
+
- pds-data:/pds
+
networks:
+
- coves-dev
+
healthcheck:
+
test: ["CMD", "curl", "-f", "http://localhost:3000/xrpc/_health"]
+
interval: 10s
+
timeout: 5s
+
retries: 5
+
+
# Indigo Relay (BigSky) - OPTIONAL for local dev
+
# WARNING: BigSky is designed to crawl the entire atProto network!
+
# For local dev, consider using direct PDS firehose instead (see AppView config below)
+
#
+
# To use relay: docker-compose -f docker-compose.dev.yml up pds relay
+
# To skip relay: docker-compose -f docker-compose.dev.yml up pds
+
#
+
# If using relay, you MUST manually configure it to only watch local PDS:
+
# 1. Start relay
+
# 2. Use admin API to block all domains except localhost
+
# curl -X POST http://localhost:2471/admin/pds/requestCrawl \
+
# -H "Authorization: Bearer dev-admin-key" \
+
# -d '{"hostname": "localhost:3001"}'
+
relay:
+
image: ghcr.io/bluesky-social/indigo:bigsky-0a2d4173e6e89e49b448f6bb0a6e1ab58d12b385
+
container_name: coves-dev-relay
+
ports:
+
- "2471:2470" # Relay firehose WebSocket (avoiding conflicts)
+
environment:
+
# Relay Configuration
+
BGS_ADMIN_KEY: ${BGS_ADMIN_KEY:-dev-admin-key}
+
BGS_PORT: 2470
+
+
# IMPORTANT: Allow insecure WebSocket for local PDS (ws:// instead of wss://)
+
BGS_CRAWL_INSECURE_WS: "true"
+
+
# Database connection (uses shared PostgreSQL for relay state)
+
DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@host.docker.internal:${POSTGRES_PORT}/${POSTGRES_DB}?sslmode=disable
+
+
# Relay will discover PDSs automatically - use admin API to restrict!
+
# See comments above for how to configure allowlist
+
+
# Development settings
+
LOG_LEVEL: ${LOG_LEVEL:-debug}
+
networks:
+
- coves-dev
+
extra_hosts:
+
- "host.docker.internal:host-gateway"
+
depends_on:
+
pds:
+
condition: service_healthy
+
healthcheck:
+
test: ["CMD", "curl", "-f", "http://localhost:2470/xrpc/_health"]
+
interval: 10s
+
timeout: 5s
+
retries: 5
+
# Mark as optional - start with: docker-compose up pds relay
+
profiles:
+
- relay
+
+
# Coves AppView (Your Go Application)
+
# Subscribes to PDS firehose and indexes Coves-specific data
+
# Note: Uncomment when you have a Dockerfile for the AppView
+
# appview:
+
# build:
+
# context: .
+
# dockerfile: Dockerfile
+
# container_name: coves-dev-appview
+
# ports:
+
# - "8081:8080" # AppView API (avoiding conflicts)
+
# environment:
+
# # Database connection
+
# DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@host.docker.internal:${POSTGRES_PORT}/${POSTGRES_DB}?sslmode=disable
+
#
+
# # PDS Firehose subscription (direct, no relay)
+
# FIREHOSE_URL: ws://pds:3000/xrpc/com.atproto.sync.subscribeRepos
+
#
+
# # PDS connection (for XRPC calls)
+
# PDS_URL: http://pds:3000
+
#
+
# # Application settings
+
# PORT: 8080
+
# ENV: development
+
# LOG_LEVEL: ${LOG_LEVEL:-debug}
+
# networks:
+
# - coves-dev
+
# extra_hosts:
+
# - "host.docker.internal:host-gateway"
+
# depends_on:
+
# - pds
+
+
# Note: PostgreSQL runs separately via internal/db/local_dev_db_compose/
+
# This stack connects to it via host.docker.internal:5433
+
+
networks:
+
coves-dev:
+
driver: bridge
+
name: coves-dev-network
+
+
volumes:
+
pds-data:
+
name: coves-dev-pds-data
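For reference, the relay sits behind a Compose profile, so the two common ways to bring the stack up look roughly like this (assumes a Compose version that supports `--profile`):

```bash
# Default: PDS only; the AppView subscribes directly to the PDS firehose
docker-compose -f docker-compose.dev.yml --env-file .env.dev up -d pds

# Optional: also start the relay by enabling its profile
docker-compose -f docker-compose.dev.yml --env-file .env.dev --profile relay up -d
```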
+450
docs/LOCAL_DEVELOPMENT.md
···
+
# Coves Local Development Guide
+
+
Complete guide for setting up and running the Coves atProto development environment.
+
+
## Table of Contents
+
- [Quick Start](#quick-start)
+
- [Architecture Overview](#architecture-overview)
+
- [Prerequisites](#prerequisites)
+
- [Setup Instructions](#setup-instructions)
+
- [Using the Makefile](#using-the-makefile)
+
- [Development Workflow](#development-workflow)
+
- [Troubleshooting](#troubleshooting)
+
- [Environment Variables](#environment-variables)
+
+
## Quick Start
+
+
```bash
+
# 1. Start the PostgreSQL database
+
make dev-db-up
+
+
# 2. Start the PDS
+
make dev-up
+
+
# 3. View logs
+
make dev-logs
+
+
# 4. Check status
+
make dev-status
+
+
# 5. When done
+
make dev-down
+
```
+
+
## Architecture Overview
+
+
Coves uses a simplified single-database architecture with direct PDS firehose subscription:
+
+
```
+
┌─────────────────────────────────────────────┐
+
│ Coves Local Development Stack │
+
├─────────────────────────────────────────────┤
+
│ │
+
│ ┌──────────────┐ │
+
│ │ PDS │ │
+
│ │ :3001 │ │
+
│ │ │ │
+
│ │ Firehose───────────┐ │
+
│ └──────────────┘ │ │
+
│ │ │
+
│ ▼ │
+
│ ┌──────────────┐ │
+
│ │ Coves AppView│ │
+
│ │ (Go) │ │
+
│ │ :8081 │ │
+
│ └──────┬───────┘ │
+
│ │ │
+
│ ┌──────▼───────┐ │
+
│ │ PostgreSQL │ │
+
│ │ :5433 │ │
+
│ └──────────────┘ │
+
│ │
+
└─────────────────────────────────────────────┘
+
+
Your Production PDS (:3000) ← Runs independently
+
```
+
+
### Components
+
+
1. **PDS (Port 3001)** - Bluesky's Personal Data Server with:
+
- User repositories and CAR files (stored in Docker volume)
+
- Internal SQLite database for PDS metadata
+
- Firehose WebSocket: `ws://localhost:3001/xrpc/com.atproto.sync.subscribeRepos`
+
2. **PostgreSQL (Port 5433)** - Database for Coves AppView data only
+
3. **Coves AppView (Port 8081)** - Your Go application that:
+
- Subscribes directly to PDS firehose
+
- Indexes Coves-specific data to PostgreSQL
+
+
**Key Points:**
+
- ✅ Ports chosen to avoid conflicts with production PDS on :3000
+
- ✅ PDS is self-contained with its own SQLite database and CAR storage
+
- ✅ PostgreSQL is only used by the Coves AppView for indexing
+
- ✅ AppView subscribes directly to PDS firehose (no relay needed)
+
- ✅ Simple, clean architecture for local development
+
+
## Prerequisites
+
+
- **Docker & Docker Compose** - For running containerized services
+
- **Go 1.22+** - For building the Coves AppView
+
- **PostgreSQL client** (optional) - For database inspection
+
- **Make** (optional but recommended) - For convenient commands
+
+
## Setup Instructions
+
+
### Step 1: Start the Database
+
+
The PostgreSQL database must be running first:
+
+
```bash
+
# Start the database
+
make dev-db-up
+
+
# Verify it's running
+
make dev-status
+
```
+
+
**Connection Details:**
+
- Host: `localhost`
+
- Port: `5433`
+
- Database: `coves_dev`
+
- User: `dev_user`
+
- Password: `dev_password`
+
+
### Step 2: Start the PDS
+
+
Start the Personal Data Server:
+
+
```bash
+
# Start PDS
+
make dev-up
+
+
# View logs (follows in real-time)
+
make dev-logs
+
```
+
+
Wait for health checks to pass (~10-30 seconds).
+
+
### Step 3: Verify Services
+
+
```bash
+
# Check PDS is running
+
make dev-status
+
+
# Test PDS health endpoint
+
curl http://localhost:3001/xrpc/_health
+
+
# Test PDS firehose endpoint (should get WebSocket upgrade response)
+
curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" \
+
http://localhost:3001/xrpc/com.atproto.sync.subscribeRepos
+
```
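You can also exercise the dev PDS end to end by creating a test account against it (depending on your PDS settings, an `inviteCode` may be required as well):

```bash
# Create a throwaway account on the dev PDS (values are placeholders)
curl -X POST http://localhost:3001/xrpc/com.atproto.server.createAccount \
  -H "Content-Type: application/json" \
  -d '{"handle": "alice.local.coves.dev", "email": "alice@example.com", "password": "test-password-123"}'
```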
+
+
### Step 4: Run Coves AppView (When Ready)
+
+
When you have a Dockerfile for the AppView (a minimal sketch is provided after these steps):
+
+
1. Uncomment the `appview` service in `docker-compose.dev.yml`
+
2. Restart the stack: `make dev-down && make dev-up`
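If you do not have a Dockerfile yet, a minimal multi-stage sketch could look like this (adjust the Go version and paths to match the repo; the Compose file expects the AppView to listen on 8080):

```dockerfile
# Build stage
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /out/server ./cmd/server

# Minimal runtime image
FROM alpine:3.19
COPY --from=build /out/server /usr/local/bin/server
EXPOSE 8080
ENTRYPOINT ["/usr/local/bin/server"]
```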
+
+
Or run the AppView locally:
+
+
```bash
+
# Set environment variables
+
export DATABASE_URL="postgresql://dev_user:dev_password@localhost:5433/coves_dev?sslmode=disable"
+
export FIREHOSE_URL="ws://localhost:3001/xrpc/com.atproto.sync.subscribeRepos"
+
export PDS_URL="http://localhost:3001"
+
export PORT=8081
+
+
# Run the AppView
+
go run ./cmd/server
+
```
+
+
## Using the Makefile
+
+
The Makefile provides convenient commands for development. Run `make help` to see all available commands:
+
+
### General Commands
+
+
```bash
+
make help # Show all available commands with descriptions
+
```
+
+
### Development Stack Commands
+
+
```bash
+
make dev-up # Start PDS for local development
+
make dev-down # Stop the stack
+
make dev-logs # Tail logs from PDS
+
make dev-status # Show status of containers
+
make dev-reset # Nuclear option - remove all data and volumes
+
```
+
+
### Database Commands
+
+
```bash
+
make dev-db-up # Start PostgreSQL database
+
make dev-db-down # Stop PostgreSQL database
+
make dev-db-reset # Reset database (delete all data)
+
make db-shell # Open psql shell to the database
+
```
+
+
### Testing Commands
+
+
```bash
+
make test # Run all tests with test database
+
make test-db-reset # Reset test database
+
```
+
+
### Workflow Commands
+
+
```bash
+
make fresh-start # Complete fresh start (reset everything)
+
make quick-restart # Quick restart (keeps data)
+
```
+
+
### Build Commands
+
+
```bash
+
make build # Build the Coves server binary
+
make run # Run the Coves server
+
make clean # Clean build artifacts
+
```
+
+
### Utilities
+
+
```bash
+
make validate-lexicon # Validate all Lexicon schemas
+
make docs # Show documentation file locations
+
```
+
+
## Development Workflow
+
+
### Typical Development Session
+
+
```bash
+
# 1. Start fresh environment
+
make fresh-start
+
+
# 2. Work on code...
+
+
# 3. Restart services as needed
+
make quick-restart
+
+
# 4. View logs
+
make dev-logs
+
+
# 5. Run tests
+
make test
+
+
# 6. Clean up when done
+
make dev-down
+
```
+
+
### Testing Lexicon Changes
+
+
```bash
+
# 1. Edit Lexicon files in internal/atproto/lexicon/
+
+
# 2. Validate schemas
+
make validate-lexicon
+
+
# 3. Restart services to pick up changes
+
make quick-restart
+
```
+
+
### Database Inspection
+
+
```bash
+
# Open PostgreSQL shell
+
make db-shell
+
+
# Or use psql directly
+
PGPASSWORD=dev_password psql -h localhost -p 5433 -U dev_user -d coves_dev
+
```
+
+
### Viewing Logs
+
+
```bash
+
# Follow all logs
+
make dev-logs
+
+
# Or use docker-compose directly
+
docker-compose -f docker-compose.dev.yml logs -f pds
+
docker-compose -f docker-compose.dev.yml logs -f relay
+
```
+
+
## Troubleshooting
+
+
### Port Already in Use
+
+
**Problem:** Error binding to port 3001, 5433, etc.
+
+
**Solution:**
+
- The dev environment uses non-standard ports to avoid conflicts
+
- PDS: 3001 (not 3000)
+
- PostgreSQL: 5433 (not 5432)
+
- Relay: 2471 (not 2470)
+
- AppView: 8081 (not 8080)
+
+
If you still have conflicts, check what's using the port:
+
+
```bash
+
# Check what's using a port
+
lsof -i :3001
+
lsof -i :5433
+
+
# Kill the process
+
kill -9 <PID>
+
```
+
+
### Database Connection Failed
+
+
**Problem:** Services can't connect to PostgreSQL
+
+
**Solution:**
+
+
```bash
+
# Ensure database is running
+
make dev-db-up
+
+
# Check database logs
+
cd internal/db/local_dev_db_compose && docker-compose logs
+
+
# Verify connection manually
+
PGPASSWORD=dev_password psql -h localhost -p 5433 -U dev_user -d coves_dev
+
```
+
+
### PDS Health Check Failing
+
+
**Problem:** PDS container keeps restarting
+
+
**Solution:**
+
+
```bash
+
# Check PDS logs
+
docker-compose -f docker-compose.dev.yml logs pds
+
+
# Common issues:
+
# 1. Database not accessible - ensure DB is running
+
# 2. Invalid environment variables - check .env.dev
+
# 3. Port conflict - ensure port 3001 is free
+
```
+
+
### AppView Not Receiving Firehose Events
+
+
**Problem:** AppView isn't receiving events from PDS firehose
+
+
**Solution:**
+
+
```bash
+
# Check PDS logs for firehose activity
+
docker-compose -f docker-compose.dev.yml logs pds
+
+
# Verify firehose endpoint is accessible
+
curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" \
+
http://localhost:3001/xrpc/com.atproto.sync.subscribeRepos
+
+
# Check AppView is connecting to correct URL:
+
# FIREHOSE_URL=ws://localhost:3001/xrpc/com.atproto.sync.subscribeRepos
+
```
+
+
### Fresh Start Not Working
+
+
**Problem:** `make fresh-start` fails
+
+
**Solution:**
+
+
```bash
+
# Manually clean everything
+
docker-compose -f docker-compose.dev.yml down -v
+
cd internal/db/local_dev_db_compose && docker-compose down -v
+
docker volume prune -f
+
docker network prune -f
+
+
# Then start fresh
+
make dev-db-up
+
sleep 2
+
make dev-up
+
```
+
+
### Production PDS Interference
+
+
**Problem:** Dev environment conflicts with your production PDS
+
+
**Solution:**
+
- Dev PDS runs on port 3001 (production is 3000)
+
- Dev services use different handle domain (`.local.coves.dev`)
+
- They should not interfere unless you have custom networking
+
+
```bash
+
# Verify production PDS is still accessible
+
curl http://localhost:3000/xrpc/_health
+
+
# Verify dev PDS is separate
+
curl http://localhost:3001/xrpc/_health
+
```
+
+
## Environment Variables
+
+
All configuration is in `.env.dev`:
+
+
### Database Configuration
+
```bash
+
POSTGRES_HOST=localhost
+
POSTGRES_PORT=5433
+
POSTGRES_DB=coves_dev
+
POSTGRES_USER=dev_user
+
POSTGRES_PASSWORD=dev_password
+
```
+
+
### PDS Configuration
+
```bash
+
PDS_HOSTNAME=localhost
+
PDS_PORT=3001
+
PDS_JWT_SECRET=local-dev-jwt-secret-change-in-production
+
PDS_ADMIN_PASSWORD=admin
+
PDS_SERVICE_HANDLE_DOMAINS=.local.coves.dev
+
```
+
+
### Relay Configuration
+
```bash
+
BGS_PORT=2471
+
BGS_ADMIN_KEY=dev-admin-key
+
```
+
+
### AppView Configuration
+
```bash
+
APPVIEW_PORT=8081
+
FIREHOSE_URL=ws://localhost:3001/xrpc/com.atproto.sync.subscribeRepos
+
PDS_URL=http://localhost:3001
+
```
+
+
### Development Settings
+
```bash
+
ENV=development
+
LOG_LEVEL=debug
+
```
+
+
## Next Steps
+
+
1. **Build the Firehose Subscriber** - Create the AppView component that subscribes to the PDS firehose (or a relay in production)
+
2. **Define Custom Lexicons** - Create Coves-specific schemas in `internal/atproto/lexicon/social/coves/`
+
3. **Implement XRPC Handlers** - Build the API endpoints for Coves features
+
4. **Create Integration Tests** - Use Testcontainers to test the full stack (see the sketch below)
+
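For the integration-test step above, a minimal Testcontainers sketch that spins up a throwaway PostgreSQL might look like this (uses testcontainers-go's `GenericContainer` API; details can differ by library version):

```go
package integration_test

import (
	"context"
	"fmt"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func TestWithThrowawayPostgres(t *testing.T) {
	ctx := context.Background()

	// Start a disposable PostgreSQL container for this test run.
	pg, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: testcontainers.ContainerRequest{
			Image:        "postgres:16-alpine",
			Env:          map[string]string{"POSTGRES_USER": "test", "POSTGRES_PASSWORD": "test", "POSTGRES_DB": "coves_test"},
			ExposedPorts: []string{"5432/tcp"},
			WaitingFor:   wait.ForListeningPort("5432/tcp"),
		},
		Started: true,
	})
	if err != nil {
		t.Fatalf("starting postgres container: %v", err)
	}
	t.Cleanup(func() { _ = pg.Terminate(ctx) })

	host, _ := pg.Host(ctx)
	port, _ := pg.MappedPort(ctx, "5432/tcp")
	dsn := fmt.Sprintf("postgres://test:test@%s:%s/coves_test?sslmode=disable", host, port.Port())

	// Run migrations and exercise the AppView repositories against dsn here.
	_ = dsn
}
```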
+
## Additional Resources
+
+
- [CLAUDE.md](../CLAUDE.md) - Build guidelines and security practices
+
- [ATPROTO_GUIDE.md](../ATPROTO_GUIDE.md) - Comprehensive atProto implementation guide
+
- [PROJECT_STRUCTURE.md](../PROJECT_STRUCTURE.md) - Project organization
+
- [PRD.md](../PRD.md) - Product requirements and roadmap
+
+
## Getting Help
+
+
- Check logs: `make dev-logs`
+
- View status: `make dev-status`
+
- Reset everything: `make fresh-start`
+
- Inspect database: `make db-shell`
+
+
For issues with atProto concepts, see [ATPROTO_GUIDE.md](../ATPROTO_GUIDE.md).
+
+
For build process questions, see [CLAUDE.md](../CLAUDE.md).
-104
internal/atproto/carstore/README.md
···
-
# CarStore Package
-
-
This package provides integration with Indigo's carstore for managing ATProto repository CAR files in the Coves platform.
-
-
## Overview
-
-
The carstore package wraps Indigo's carstore implementation to provide:
-
- Filesystem-based storage of CAR (Content Addressable aRchive) files
-
- PostgreSQL metadata tracking via GORM
-
- DID to UID mapping for user repositories
-
- Automatic garbage collection and compaction
-
-
## Architecture
-
-
```
-
[Repository Service]
-
-
[RepoStore] ← Provides DID-based interface
-
-
[CarStore] ← Wraps Indigo's carstore
-
-
[Indigo CarStore] ← Actual implementation
-
-
[PostgreSQL + Filesystem]
-
```
-
-
## Components
-
-
### CarStore (`carstore.go`)
-
Wraps Indigo's carstore implementation, providing methods for:
-
- `ImportSlice`: Import CAR data for a user
-
- `ReadUserCar`: Export user's repository as CAR
-
- `GetUserRepoHead`: Get latest repository state
-
- `CompactUserShards`: Run garbage collection
-
- `WipeUserData`: Delete all user data
-
-
### UserMapping (`user_mapping.go`)
-
Maps DIDs (Decentralized Identifiers) to numeric UIDs required by Indigo's carstore:
-
- DIDs are strings like `did:plc:abc123xyz`
-
- UIDs are numeric identifiers (models.Uid)
-
- Maintains bidirectional mapping in PostgreSQL
-
-
### RepoStore (`repo_store.go`)
-
Combines CarStore with UserMapping to provide DID-based operations:
-
- `ImportRepo`: Import repository for a DID
-
- `ReadRepo`: Export repository for a DID
-
- `GetRepoHead`: Get latest state for a DID
-
- `CompactRepo`: Run garbage collection for a DID
-
- `DeleteRepo`: Remove all data for a DID
-
-
## Data Flow
-
-
### Creating a New Repository
-
1. Service calls `RepoStore.ImportRepo(did, carData)`
-
2. RepoStore maps DID to UID via UserMapping
-
3. CarStore imports the CAR slice
-
4. Indigo's carstore:
-
- Stores CAR data as file on disk
-
- Records metadata in PostgreSQL
-
-
### Reading a Repository
-
1. Service calls `RepoStore.ReadRepo(did)`
-
2. RepoStore maps DID to UID
-
3. CarStore reads user's CAR data
-
4. Returns complete CAR file
-
-
## Database Schema
-
-
### user_maps table
-
```sql
-
CREATE TABLE user_maps (
-
uid SERIAL PRIMARY KEY,
-
did VARCHAR UNIQUE NOT NULL,
-
created_at BIGINT,
-
updated_at BIGINT
-
);
-
```
-
-
### Indigo's tables (auto-created)
-
- `car_shards`: Metadata about CAR file shards
-
- `block_refs`: Block reference tracking
-
-
## Storage
-
-
CAR files are stored on the filesystem at the path specified during initialization (e.g., `./data/carstore/`). The storage is organized by Indigo's carstore implementation, typically with sharding for performance.
-
-
## Configuration
-
-
Initialize the carstore with:
-
```go
-
carDirs := []string{"./data/carstore"}
-
repoStore, err := carstore.NewRepoStore(gormDB, carDirs)
-
```
-
-
## Future Enhancements
-
-
Current implementation supports repository-level operations. Record-level CRUD operations would require:
-
1. Reading the CAR file
-
2. Parsing into a repository structure
-
3. Modifying records
-
4. Re-serializing as CAR
-
5. Writing back to carstore
-
-
This is planned for future XRPC implementation.
-100
internal/atproto/carstore/carstore.go
···
-
package carstore
-
-
import (
-
"context"
-
"fmt"
-
"io"
-
-
"github.com/bluesky-social/indigo/carstore"
-
"github.com/bluesky-social/indigo/models"
-
"github.com/ipfs/go-cid"
-
"gorm.io/gorm"
-
)
-
-
// CarStore wraps Indigo's carstore for managing ATProto repository CAR files
-
type CarStore struct {
-
cs carstore.CarStore
-
}
-
-
// NewCarStore creates a new CarStore instance using Indigo's implementation
-
func NewCarStore(db *gorm.DB, carDirs []string) (*CarStore, error) {
-
// Initialize Indigo's carstore
-
cs, err := carstore.NewCarStore(db, carDirs)
-
if err != nil {
-
return nil, fmt.Errorf("initializing carstore: %w", err)
-
}
-
-
return &CarStore{
-
cs: cs,
-
}, nil
-
}
-
-
// ImportSlice imports a CAR file slice for a user
-
func (c *CarStore) ImportSlice(ctx context.Context, uid models.Uid, since *string, carData []byte) (cid.Cid, error) {
-
rootCid, _, err := c.cs.ImportSlice(ctx, uid, since, carData)
-
if err != nil {
-
return cid.Undef, fmt.Errorf("importing CAR slice for UID %d: %w", uid, err)
-
}
-
return rootCid, nil
-
}
-
-
// ReadUserCar reads a user's repository CAR file
-
func (c *CarStore) ReadUserCar(ctx context.Context, uid models.Uid, sinceRev string, incremental bool, w io.Writer) error {
-
if err := c.cs.ReadUserCar(ctx, uid, sinceRev, incremental, w); err != nil {
-
return fmt.Errorf("reading user CAR for UID %d: %w", uid, err)
-
}
-
return nil
-
}
-
-
// GetUserRepoHead gets the latest repository head CID for a user
-
func (c *CarStore) GetUserRepoHead(ctx context.Context, uid models.Uid) (cid.Cid, error) {
-
head, err := c.cs.GetUserRepoHead(ctx, uid)
-
if err != nil {
-
return cid.Undef, fmt.Errorf("getting repo head for UID %d: %w", uid, err)
-
}
-
return head, nil
-
}
-
-
// CompactUserShards performs garbage collection and compaction for a user's data
-
func (c *CarStore) CompactUserShards(ctx context.Context, uid models.Uid, aggressive bool) error {
-
_, err := c.cs.CompactUserShards(ctx, uid, aggressive)
-
if err != nil {
-
return fmt.Errorf("compacting shards for UID %d: %w", uid, err)
-
}
-
return nil
-
}
-
-
// WipeUserData removes all data for a user
-
func (c *CarStore) WipeUserData(ctx context.Context, uid models.Uid) error {
-
if err := c.cs.WipeUserData(ctx, uid); err != nil {
-
return fmt.Errorf("wiping data for UID %d: %w", uid, err)
-
}
-
return nil
-
}
-
-
// NewDeltaSession creates a new session for writing deltas
-
func (c *CarStore) NewDeltaSession(ctx context.Context, uid models.Uid, since *string) (*carstore.DeltaSession, error) {
-
session, err := c.cs.NewDeltaSession(ctx, uid, since)
-
if err != nil {
-
return nil, fmt.Errorf("creating delta session for UID %d: %w", uid, err)
-
}
-
return session, nil
-
}
-
-
// ReadOnlySession creates a read-only session for reading user data
-
func (c *CarStore) ReadOnlySession(uid models.Uid) (*carstore.DeltaSession, error) {
-
session, err := c.cs.ReadOnlySession(uid)
-
if err != nil {
-
return nil, fmt.Errorf("creating read-only session for UID %d: %w", uid, err)
-
}
-
return session, nil
-
}
-
-
// Stat returns statistics about the carstore
-
func (c *CarStore) Stat(ctx context.Context, uid models.Uid) ([]carstore.UserStat, error) {
-
stats, err := c.cs.Stat(ctx, uid)
-
if err != nil {
-
return nil, fmt.Errorf("getting stats for UID %d: %w", uid, err)
-
}
-
return stats, nil
-
}
-122
internal/atproto/carstore/repo_store.go
···
-
package carstore
-
-
import (
-
"bytes"
-
"context"
-
"fmt"
-
"io"
-
-
"github.com/bluesky-social/indigo/models"
-
"github.com/ipfs/go-cid"
-
"gorm.io/gorm"
-
)
-
-
// RepoStore combines CarStore with UserMapping to provide DID-based repository storage
-
type RepoStore struct {
-
cs *CarStore
-
mapping *UserMapping
-
}
-
-
// NewRepoStore creates a new RepoStore instance
-
func NewRepoStore(db *gorm.DB, carDirs []string) (*RepoStore, error) {
-
// Create carstore
-
cs, err := NewCarStore(db, carDirs)
-
if err != nil {
-
return nil, fmt.Errorf("creating carstore: %w", err)
-
}
-
-
// Create user mapping
-
mapping, err := NewUserMapping(db)
-
if err != nil {
-
return nil, fmt.Errorf("creating user mapping: %w", err)
-
}
-
-
return &RepoStore{
-
cs: cs,
-
mapping: mapping,
-
}, nil
-
}
-
-
// ImportRepo imports a repository CAR file for a DID
-
func (rs *RepoStore) ImportRepo(ctx context.Context, did string, carData io.Reader) (cid.Cid, error) {
-
uid, err := rs.mapping.GetOrCreateUID(ctx, did)
-
if err != nil {
-
return cid.Undef, fmt.Errorf("getting UID for DID %s: %w", did, err)
-
}
-
-
// Read all data from the reader
-
data, err := io.ReadAll(carData)
-
if err != nil {
-
return cid.Undef, fmt.Errorf("reading CAR data: %w", err)
-
}
-
-
return rs.cs.ImportSlice(ctx, uid, nil, data)
-
}
-
-
// ReadRepo reads a repository CAR file for a DID
-
func (rs *RepoStore) ReadRepo(ctx context.Context, did string, sinceRev string) ([]byte, error) {
-
uid, err := rs.mapping.GetUID(did)
-
if err != nil {
-
return nil, fmt.Errorf("getting UID for DID %s: %w", did, err)
-
}
-
-
var buf bytes.Buffer
-
err = rs.cs.ReadUserCar(ctx, uid, sinceRev, false, &buf)
-
if err != nil {
-
return nil, fmt.Errorf("reading repo for DID %s: %w", did, err)
-
}
-
-
return buf.Bytes(), nil
-
}
-
-
// GetRepoHead gets the latest repository head CID for a DID
-
func (rs *RepoStore) GetRepoHead(ctx context.Context, did string) (cid.Cid, error) {
-
uid, err := rs.mapping.GetUID(did)
-
if err != nil {
-
return cid.Undef, fmt.Errorf("getting UID for DID %s: %w", did, err)
-
}
-
-
return rs.cs.GetUserRepoHead(ctx, uid)
-
}
-
-
// CompactRepo performs garbage collection for a DID's repository
-
func (rs *RepoStore) CompactRepo(ctx context.Context, did string) error {
-
uid, err := rs.mapping.GetUID(did)
-
if err != nil {
-
return fmt.Errorf("getting UID for DID %s: %w", did, err)
-
}
-
-
return rs.cs.CompactUserShards(ctx, uid, false)
-
}
-
-
// DeleteRepo removes all data for a DID's repository
-
func (rs *RepoStore) DeleteRepo(ctx context.Context, did string) error {
-
uid, err := rs.mapping.GetUID(did)
-
if err != nil {
-
return fmt.Errorf("getting UID for DID %s: %w", did, err)
-
}
-
-
return rs.cs.WipeUserData(ctx, uid)
-
}
-
-
// HasRepo checks if a repository exists for a DID
-
func (rs *RepoStore) HasRepo(ctx context.Context, did string) (bool, error) {
-
uid, err := rs.mapping.GetUID(did)
-
if err != nil {
-
// If no UID mapping exists, repo doesn't exist
-
return false, nil
-
}
-
-
// Try to get the repo head
-
head, err := rs.cs.GetUserRepoHead(ctx, uid)
-
if err != nil {
-
return false, nil
-
}
-
-
return head.Defined(), nil
-
}
-
-
// GetOrCreateUID gets or creates a UID for a DID
-
func (rs *RepoStore) GetOrCreateUID(ctx context.Context, did string) (models.Uid, error) {
-
return rs.mapping.GetOrCreateUID(ctx, did)
-
}
-127
internal/atproto/carstore/user_mapping.go
···
-
package carstore
-
-
import (
-
"context"
-
"fmt"
-
"sync"
-
-
"github.com/bluesky-social/indigo/models"
-
"gorm.io/gorm"
-
)
-
-
// UserMapping manages the mapping between DIDs and numeric UIDs required by Indigo's carstore
-
type UserMapping struct {
-
db *gorm.DB
-
mu sync.RWMutex
-
didToUID map[string]models.Uid
-
uidToDID map[models.Uid]string
-
nextUID models.Uid
-
}
-
-
// UserMap represents the database model for DID to UID mapping
-
type UserMap struct {
-
UID models.Uid `gorm:"primaryKey;autoIncrement"`
-
DID string `gorm:"column:did;uniqueIndex;not null"`
-
CreatedAt int64
-
UpdatedAt int64
-
}
-
-
// NewUserMapping creates a new UserMapping instance
-
func NewUserMapping(db *gorm.DB) (*UserMapping, error) {
-
// Auto-migrate the user mapping table
-
if err := db.AutoMigrate(&UserMap{}); err != nil {
-
return nil, fmt.Errorf("migrating user mapping table: %w", err)
-
}
-
-
um := &UserMapping{
-
db: db,
-
didToUID: make(map[string]models.Uid),
-
uidToDID: make(map[models.Uid]string),
-
nextUID: 1,
-
}
-
-
// Load existing mappings
-
if err := um.loadMappings(); err != nil {
-
return nil, fmt.Errorf("loading user mappings: %w", err)
-
}
-
-
return um, nil
-
}
-
-
// loadMappings loads all existing DID to UID mappings from the database
-
func (um *UserMapping) loadMappings() error {
-
var mappings []UserMap
-
if err := um.db.Find(&mappings).Error; err != nil {
-
return fmt.Errorf("querying user mappings: %w", err)
-
}
-
-
um.mu.Lock()
-
defer um.mu.Unlock()
-
-
for _, m := range mappings {
-
um.didToUID[m.DID] = m.UID
-
um.uidToDID[m.UID] = m.DID
-
if m.UID >= um.nextUID {
-
um.nextUID = m.UID + 1
-
}
-
}
-
-
return nil
-
}
-
-
// GetOrCreateUID gets or creates a UID for a given DID
-
func (um *UserMapping) GetOrCreateUID(ctx context.Context, did string) (models.Uid, error) {
-
um.mu.RLock()
-
if uid, exists := um.didToUID[did]; exists {
-
um.mu.RUnlock()
-
return uid, nil
-
}
-
um.mu.RUnlock()
-
-
// Need to create a new mapping
-
um.mu.Lock()
-
defer um.mu.Unlock()
-
-
// Double-check in case another goroutine created it
-
if uid, exists := um.didToUID[did]; exists {
-
return uid, nil
-
}
-
-
// Create new mapping
-
userMap := &UserMap{
-
DID: did,
-
}
-
-
if err := um.db.Create(userMap).Error; err != nil {
-
return 0, fmt.Errorf("creating user mapping for DID %s: %w", did, err)
-
}
-
-
um.didToUID[did] = userMap.UID
-
um.uidToDID[userMap.UID] = did
-
-
return userMap.UID, nil
-
}
-
-
// GetUID returns the UID for a DID, or an error if not found
-
func (um *UserMapping) GetUID(did string) (models.Uid, error) {
-
um.mu.RLock()
-
defer um.mu.RUnlock()
-
-
uid, exists := um.didToUID[did]
-
if !exists {
-
return 0, fmt.Errorf("UID not found for DID: %s", did)
-
}
-
return uid, nil
-
}
-
-
// GetDID returns the DID for a UID, or an error if not found
-
func (um *UserMapping) GetDID(uid models.Uid) (string, error) {
-
um.mu.RLock()
-
defer um.mu.RUnlock()
-
-
did, exists := um.uidToDID[uid]
-
if !exists {
-
return "", fmt.Errorf("DID not found for UID: %d", uid)
-
}
-
return did, nil
-
}
+68
internal/atproto/lexicon/social/coves/feed/getAll.json
···
+
{
+
"lexicon": 1,
+
"id": "social.coves.feed.getAll",
+
"defs": {
+
"main": {
+
"type": "query",
+
"description": "Get a global feed of all posts across communities",
+
"parameters": {
+
"type": "params",
+
"properties": {
+
"sort": {
+
"type": "string",
+
"enum": ["hot", "top", "new"],
+
"default": "hot",
+
"description": "Sort order for global feed"
+
},
+
"postType": {
+
"type": "string",
+
"enum": ["text", "article", "image", "video", "microblog"],
+
"description": "Filter by a single post type"
+
},
+
"postTypes": {
+
"type": "array",
+
"items": {
+
"type": "string",
+
"enum": ["text", "article", "image", "video", "microblog"]
+
},
+
"description": "Filter by multiple post types"
+
},
+
"timeframe": {
+
"type": "string",
+
"enum": ["hour", "day", "week", "month", "year", "all"],
+
"default": "day",
+
"description": "Timeframe for top sorting (only applies when sort=top)"
+
},
+
"limit": {
+
"type": "integer",
+
"minimum": 1,
+
"maximum": 50,
+
"default": 15
+
},
+
"cursor": {
+
"type": "string"
+
}
+
}
+
},
+
"output": {
+
"encoding": "application/json",
+
"schema": {
+
"type": "object",
+
"required": ["feed"],
+
"properties": {
+
"feed": {
+
"type": "array",
+
"items": {
+
"type": "ref",
+
"ref": "social.coves.feed.getTimeline#feedViewPost"
+
}
+
},
+
"cursor": {
+
"type": "string"
+
}
+
}
+
}
+
}
+
}
+
}
+
}
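Once the AppView serves this query, it can be exercised with plain HTTP GETs; the examples below assume the AppView's local-dev XRPC endpoint at `http://localhost:8081`:

```bash
# Global feed, hot sort, default page size
curl "http://localhost:8081/xrpc/social.coves.feed.getAll?sort=hot&limit=15"

# Top image posts from the past week
curl "http://localhost:8081/xrpc/social.coves.feed.getAll?sort=top&timeframe=week&postType=image"
```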
+74
internal/atproto/lexicon/social/coves/feed/getCommunity.json
···
+
{
+
"lexicon": 1,
+
"id": "social.coves.feed.getCommunity",
+
"defs": {
+
"main": {
+
"type": "query",
+
"description": "Get a feed of posts from a specific community",
+
"parameters": {
+
"type": "params",
+
"required": ["community"],
+
"properties": {
+
"community": {
+
"type": "string",
+
"format": "at-identifier",
+
"description": "Get community feed for specific community (DID or handle)"
+
},
+
"sort": {
+
"type": "string",
+
"enum": ["hot", "top", "new"],
+
"default": "hot",
+
"description": "Sort order for community feed"
+
},
+
"postType": {
+
"type": "string",
+
"enum": ["text", "article", "image", "video", "microblog"],
+
"description": "Filter by a single post type"
+
},
+
"postTypes": {
+
"type": "array",
+
"items": {
+
"type": "string",
+
"enum": ["text", "article", "image", "video", "microblog"]
+
},
+
"description": "Filter by multiple post types"
+
},
+
"timeframe": {
+
"type": "string",
+
"enum": ["hour", "day", "week", "month", "year", "all"],
+
"default": "day",
+
"description": "Timeframe for top sorting (only applies when sort=top)"
+
},
+
"limit": {
+
"type": "integer",
+
"minimum": 1,
+
"maximum": 50,
+
"default": 15
+
},
+
"cursor": {
+
"type": "string"
+
}
+
}
+
},
+
"output": {
+
"encoding": "application/json",
+
"schema": {
+
"type": "object",
+
"required": ["feed"],
+
"properties": {
+
"feed": {
+
"type": "array",
+
"items": {
+
"type": "ref",
+
"ref": "social.coves.feed.getTimeline#feedViewPost"
+
}
+
},
+
"cursor": {
+
"type": "string"
+
}
+
}
+
}
+
}
+
}
+
}
+
}
+127
internal/atproto/lexicon/social/coves/feed/getTimeline.json
···
+
{
+
"lexicon": 1,
+
"id": "social.coves.feed.getTimeline",
+
"defs": {
+
"main": {
+
"type": "query",
+
"description": "Get the home timeline feed for the authenticated user",
+
"parameters": {
+
"type": "params",
+
"properties": {
+
"postType": {
+
"type": "string",
+
"enum": ["text", "article", "image", "video", "microblog"],
+
"description": "Filter by a single post type"
+
},
+
"postTypes": {
+
"type": "array",
+
"items": {
+
"type": "string",
+
"enum": ["text", "article", "image", "video", "microblog"]
+
},
+
"description": "Filter by multiple post types"
+
},
+
"limit": {
+
"type": "integer",
+
"minimum": 1,
+
"maximum": 50,
+
"default": 15
+
},
+
"cursor": {
+
"type": "string"
+
}
+
}
+
},
+
"output": {
+
"encoding": "application/json",
+
"schema": {
+
"type": "object",
+
"required": ["feed"],
+
"properties": {
+
"feed": {
+
"type": "array",
+
"items": {
+
"type": "ref",
+
"ref": "#feedViewPost"
+
}
+
},
+
"cursor": {
+
"type": "string"
+
}
+
}
+
}
+
}
+
},
+
"feedViewPost": {
+
"type": "object",
+
"required": ["post"],
+
"properties": {
+
"post": {
+
"type": "ref",
+
"ref": "social.coves.post.get#postView"
+
},
+
"reason": {
+
"type": "union",
+
"description": "Additional context for why this post is in the feed",
+
"refs": ["#reasonRepost", "#reasonPin"]
+
},
+
"reply": {
+
"type": "ref",
+
"ref": "#replyRef"
+
}
+
}
+
},
+
"reasonRepost": {
+
"type": "object",
+
"required": ["by", "indexedAt"],
+
"properties": {
+
"by": {
+
"type": "ref",
+
"ref": "social.coves.post.get#authorView"
+
},
+
"indexedAt": {
+
"type": "string",
+
"format": "datetime"
+
}
+
}
+
},
+
"reasonPin": {
+
"type": "object",
+
"required": ["community"],
+
"properties": {
+
"community": {
+
"type": "ref",
+
"ref": "social.coves.post.get#communityRef"
+
}
+
}
+
},
+
"replyRef": {
+
"type": "object",
+
"required": ["root", "parent"],
+
"properties": {
+
"root": {
+
"type": "ref",
+
"ref": "#postRef"
+
},
+
"parent": {
+
"type": "ref",
+
"ref": "#postRef"
+
}
+
}
+
},
+
"postRef": {
+
"type": "object",
+
"required": ["uri", "cid"],
+
"properties": {
+
"uri": {
+
"type": "string",
+
"format": "at-uri"
+
},
+
"cid": {
+
"type": "string",
+
"format": "cid"
+
}
+
}
+
}
+
}
+
}
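For orientation, an abridged response matching this schema might look as follows. All values are placeholders, and the `post` object is truncated: a conforming `postView` also carries the required `author`, `record`, and `community` fields defined in `social.coves.post.get`:

```json
{
  "cursor": "opaque-pagination-token",
  "feed": [
    {
      "post": {
        "uri": "at://did:plc:exampleuser/social.coves.post.record/3kkreaz3amd27",
        "cid": "bafyreiexamplecid",
        "postType": "text",
        "createdAt": "2024-01-01T12:00:00Z",
        "indexedAt": "2024-01-01T12:00:05Z"
      }
    }
  ]
}
```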
+6 -1
internal/atproto/lexicon/social/coves/post/get.json
···
},
"postView": {
"type": "object",
-
"required": ["uri", "cid", "author", "record", "community", "postType", "createdAt"],
+
"required": ["uri", "cid", "author", "record", "community", "postType", "createdAt", "indexedAt"],
"properties": {
"uri": {
"type": "string",
···
"editedAt": {
"type": "string",
"format": "datetime"
+
},
+
"indexedAt": {
+
"type": "string",
+
"format": "datetime",
+
"description": "When this post was indexed by the AppView"
},
"stats": {
"type": "ref",
-143
internal/atproto/lexicon/social/coves/post/getFeed.json
···
-
{
-
"lexicon": 1,
-
"id": "social.coves.post.getFeed",
-
"defs": {
-
"main": {
-
"type": "query",
-
"description": "Get a feed of posts. Use 'feed' parameter for global feeds (home/all) or 'community' + 'sort' for community-specific feeds. These modes are mutually exclusive.",
-
"parameters": {
-
"type": "params",
-
"properties": {
-
"feed": {
-
"type": "string",
-
"enum": ["home", "all"],
-
"default": "home",
-
"description": "Type of global feed to retrieve (mutually exclusive with community parameter)"
-
},
-
"community": {
-
"type": "string",
-
"format": "at-identifier",
-
"description": "Get community feed for specific community (DID or handle, mutually exclusive with feed parameter)"
-
},
-
"sort": {
-
"type": "string",
-
"enum": ["hot", "top", "new"],
-
"default": "hot",
-
"description": "Sort order for community feeds (required when community is specified, ignored for global feeds)"
-
},
-
"postType": {
-
"type": "string",
-
"enum": ["text", "article", "image", "video", "microblog"],
-
"description": "Filter by a single post type"
-
},
-
"postTypes": {
-
"type": "array",
-
"items": {
-
"type": "string",
-
"enum": ["text", "article", "image", "video", "microblog"]
-
},
-
"description": "Filter by multiple post types"
-
},
-
"timeframe": {
-
"type": "string",
-
"enum": ["hour", "day", "week", "month", "year", "all"],
-
"default": "day",
-
"description": "Timeframe for top sorting (only applies when sort=top)"
-
},
-
"limit": {
-
"type": "integer",
-
"minimum": 1,
-
"maximum": 50,
-
"default": 15
-
},
-
"cursor": {
-
"type": "string"
-
}
-
}
-
},
-
"output": {
-
"encoding": "application/json",
-
"schema": {
-
"type": "object",
-
"required": ["posts"],
-
"properties": {
-
"posts": {
-
"type": "array",
-
"items": {
-
"type": "ref",
-
"ref": "#feedPost"
-
}
-
},
-
"cursor": {
-
"type": "string"
-
}
-
}
-
}
-
}
-
},
-
"feedPost": {
-
"type": "object",
-
"required": ["uri", "author", "community", "postType", "createdAt"],
-
"properties": {
-
"uri": {
-
"type": "string",
-
"format": "at-uri"
-
},
-
"author": {
-
"type": "ref",
-
"ref": "social.coves.post.get#authorView"
-
},
-
"community": {
-
"type": "ref",
-
"ref": "social.coves.post.get#communityRef"
-
},
-
"postType": {
-
"type": "string",
-
"enum": ["text", "article", "image", "video", "microblog"],
-
"description": "Type of the post for UI rendering"
-
},
-
"title": {
-
"type": "string"
-
},
-
"content": {
-
"type": "string",
-
"maxLength": 500,
-
"description": "Truncated preview of the post content"
-
},
-
"embed": {
-
"type": "union",
-
"description": "Embedded content preview",
-
"refs": [
-
"social.coves.post.get#imagesView",
-
"social.coves.post.get#videoView",
-
"social.coves.post.get#externalView",
-
"social.coves.post.get#postView"
-
]
-
},
-
"originalAuthor": {
-
"type": "ref",
-
"ref": "social.coves.post.record#originalAuthor",
-
"description": "For microblog posts - original author info"
-
},
-
"contentLabels": {
-
"type": "array",
-
"items": {
-
"type": "string"
-
}
-
},
-
"createdAt": {
-
"type": "string",
-
"format": "datetime"
-
},
-
"stats": {
-
"type": "ref",
-
"ref": "social.coves.post.get#postStats"
-
},
-
"viewer": {
-
"type": "ref",
-
"ref": "social.coves.post.get#viewerState"
-
}
-
}
-
}
-
}
-
}
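
The removed getFeed query multiplexed global and community feeds behind mutually exclusive parameters; its role is taken over by the dedicated feed queries whose output schema is shown above. A minimal sketch of calling a community feed from Go follows. The AppView base URL, the query NSID (social.coves.feed.getCommunity), and the parameter names carried over from the removed schema are assumptions, not confirmed API.

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"net/url"
    )

    func main() {
    	// Assumed AppView base URL and feed query NSID; adjust to the actual
    	// lexicons that replaced social.coves.post.getFeed.
    	base := "http://localhost:8081"
    	nsid := "social.coves.feed.getCommunity"

    	q := url.Values{}
    	q.Set("community", "did:plc:programming123") // DID or handle, as in the old schema
    	q.Set("sort", "hot")
    	q.Set("limit", "15")

    	resp, err := http.Get(fmt.Sprintf("%s/xrpc/%s?%s", base, nsid, q.Encode()))
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()

    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(string(body)) // JSON shaped like the feed output schema above
    }
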
+6 -1
internal/atproto/lexicon/social/coves/post/record.json
···
"key": "tid",
"record": {
"type": "object",
-
"required": ["community", "postType", "createdAt"],
+
"required": ["$type", "community", "postType", "createdAt"],
"properties": {
+
"$type": {
+
"type": "string",
+
"const": "social.coves.post.record",
+
"description": "The record type identifier"
+
},
"community": {
"type": "string",
"format": "at-identifier",
-201
internal/atproto/repo/wrapper.go
···
-
package repo
-
-
import (
-
"bytes"
-
"context"
-
"fmt"
-
-
"github.com/bluesky-social/indigo/mst"
-
"github.com/bluesky-social/indigo/repo"
-
"github.com/ipfs/go-cid"
-
blockstore "github.com/ipfs/go-ipfs-blockstore"
-
cbornode "github.com/ipfs/go-ipld-cbor"
-
cbg "github.com/whyrusleeping/cbor-gen"
-
)
-
-
// Wrapper provides a thin wrapper around Indigo's repo package
-
type Wrapper struct {
-
repo *repo.Repo
-
blockstore blockstore.Blockstore
-
}
-
-
// NewWrapper creates a new wrapper for a repository with the provided blockstore
-
func NewWrapper(did string, signingKey interface{}, bs blockstore.Blockstore) (*Wrapper, error) {
-
// Create new repository with the provided blockstore
-
r := repo.NewRepo(context.Background(), did, bs)
-
-
return &Wrapper{
-
repo: r,
-
blockstore: bs,
-
}, nil
-
}
-
-
// OpenWrapper opens an existing repository from CAR data with the provided blockstore
-
func OpenWrapper(carData []byte, signingKey interface{}, bs blockstore.Blockstore) (*Wrapper, error) {
-
r, err := repo.ReadRepoFromCar(context.Background(), bytes.NewReader(carData))
-
if err != nil {
-
return nil, fmt.Errorf("failed to read repo from CAR: %w", err)
-
}
-
-
return &Wrapper{
-
repo: r,
-
blockstore: bs,
-
}, nil
-
}
-
-
// CreateRecord adds a new record to the repository
-
func (w *Wrapper) CreateRecord(collection string, recordKey string, record cbg.CBORMarshaler) (cid.Cid, string, error) {
-
// The repo.CreateRecord generates its own key, so we'll use that
-
recordCID, rkey, err := w.repo.CreateRecord(context.Background(), collection, record)
-
if err != nil {
-
return cid.Undef, "", fmt.Errorf("failed to create record: %w", err)
-
}
-
-
// If a specific key was requested, we'd need to use PutRecord instead
-
if recordKey != "" {
-
// Use PutRecord for specific keys
-
path := fmt.Sprintf("%s/%s", collection, recordKey)
-
recordCID, err = w.repo.PutRecord(context.Background(), path, record)
-
if err != nil {
-
return cid.Undef, "", fmt.Errorf("failed to put record with key: %w", err)
-
}
-
return recordCID, recordKey, nil
-
}
-
-
return recordCID, rkey, nil
-
}
-
-
// GetRecord retrieves a record from the repository
-
func (w *Wrapper) GetRecord(collection string, recordKey string) (cid.Cid, []byte, error) {
-
path := fmt.Sprintf("%s/%s", collection, recordKey)
-
-
recordCID, rec, err := w.repo.GetRecord(context.Background(), path)
-
if err != nil {
-
return cid.Undef, nil, fmt.Errorf("failed to get record: %w", err)
-
}
-
-
// Encode record to CBOR
-
buf := new(bytes.Buffer)
-
if err := rec.(cbg.CBORMarshaler).MarshalCBOR(buf); err != nil {
-
return cid.Undef, nil, fmt.Errorf("failed to encode record: %w", err)
-
}
-
-
return recordCID, buf.Bytes(), nil
-
}
-
-
// UpdateRecord updates an existing record in the repository
-
func (w *Wrapper) UpdateRecord(collection string, recordKey string, record cbg.CBORMarshaler) (cid.Cid, error) {
-
path := fmt.Sprintf("%s/%s", collection, recordKey)
-
-
// Check if record exists
-
_, _, err := w.repo.GetRecord(context.Background(), path)
-
if err != nil {
-
return cid.Undef, fmt.Errorf("record not found: %w", err)
-
}
-
-
// Update the record
-
recordCID, err := w.repo.UpdateRecord(context.Background(), path, record)
-
if err != nil {
-
return cid.Undef, fmt.Errorf("failed to update record: %w", err)
-
}
-
-
return recordCID, nil
-
}
-
-
// DeleteRecord removes a record from the repository
-
func (w *Wrapper) DeleteRecord(collection string, recordKey string) error {
-
path := fmt.Sprintf("%s/%s", collection, recordKey)
-
-
if err := w.repo.DeleteRecord(context.Background(), path); err != nil {
-
return fmt.Errorf("failed to delete record: %w", err)
-
}
-
-
return nil
-
}
-
-
// ListRecords returns all records in a collection
-
func (w *Wrapper) ListRecords(collection string) ([]RecordInfo, error) {
-
var records []RecordInfo
-
-
err := w.repo.ForEach(context.Background(), collection, func(k string, v cid.Cid) error {
-
// Skip if not in the requested collection
-
if len(k) <= len(collection)+1 || k[:len(collection)] != collection || k[len(collection)] != '/' {
-
return nil
-
}
-
-
recordKey := k[len(collection)+1:]
-
records = append(records, RecordInfo{
-
Collection: collection,
-
RecordKey: recordKey,
-
CID: v,
-
})
-
-
return nil
-
})
-
-
if err != nil {
-
return nil, fmt.Errorf("failed to list records: %w", err)
-
}
-
-
return records, nil
-
}
-
-
// Commit creates a new signed commit
-
func (w *Wrapper) Commit(did string, signingKey interface{}) (*repo.SignedCommit, error) {
-
// The commit function expects a signing function with context
-
signingFunc := func(ctx context.Context, did string, data []byte) ([]byte, error) {
-
// TODO: Implement proper signing based on signingKey type
-
return []byte("mock-signature"), nil
-
}
-
-
_, _, err := w.repo.Commit(context.Background(), signingFunc)
-
if err != nil {
-
return nil, fmt.Errorf("failed to commit: %w", err)
-
}
-
-
// Return the signed commit from the repo
-
sc := w.repo.SignedCommit()
-
-
return &sc, nil
-
}
-
-
// GetHeadCID returns the CID of the current repository head
-
func (w *Wrapper) GetHeadCID() (cid.Cid, error) {
-
// TODO: Implement this properly
-
// The repo package doesn't expose a direct way to get the head CID
-
return cid.Undef, fmt.Errorf("not implemented")
-
}
-
-
// Export exports the repository as a CAR file
-
func (w *Wrapper) Export() ([]byte, error) {
-
// TODO: Implement proper CAR export using Indigo's carstore functionality
-
// For now, return a placeholder
-
return nil, fmt.Errorf("CAR export not yet implemented")
-
}
-
-
// GetMST returns the underlying Merkle Search Tree
-
func (w *Wrapper) GetMST() (*mst.MerkleSearchTree, error) {
-
// TODO: Implement MST access
-
return nil, fmt.Errorf("not implemented")
-
}
-
-
// RecordInfo contains information about a record
-
type RecordInfo struct {
-
Collection string
-
RecordKey string
-
CID cid.Cid
-
}
-
-
// DecodeRecord decodes CBOR data into a record structure
-
func DecodeRecord(data []byte, v interface{}) error {
-
return cbornode.DecodeInto(data, v)
-
}
-
-
// EncodeRecord encodes a record structure into CBOR data
-
func EncodeRecord(v cbg.CBORMarshaler) ([]byte, error) {
-
buf := new(bytes.Buffer)
-
if err := v.MarshalCBOR(buf); err != nil {
-
return nil, err
-
}
-
return buf.Bytes(), nil
-
}
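
With the wrapper removed, record writes go to the PDS over XRPC rather than through a local MST/blockstore layer; the PDS commits the record to its own storage and emits it on the firehose for the AppView to index. A minimal sketch of creating a post record, assuming a dev PDS at localhost:3001 and a placeholder access token from com.atproto.server.createSession; note the record body carries the now-required $type:

    package main

    import (
    	"bytes"
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    func main() {
    	// Placeholder DID and token; in practice these come from
    	// com.atproto.server.createSession against the dev PDS.
    	input := map[string]any{
    		"repo":       "did:plc:exampleauthor",
    		"collection": "social.coves.post.record",
    		// Depending on PDS validation settings, "validate": false may be
    		// needed for lexicons the PDS does not know about.
    		"record": map[string]any{
    			"$type":     "social.coves.post.record",
    			"community": "did:plc:programming123",
    			"postType":  "text",
    			"content":   "This is a test post",
    			"createdAt": "2025-01-09T14:30:00Z",
    		},
    	}

    	body, _ := json.Marshal(input)
    	req, _ := http.NewRequest(http.MethodPost,
    		"http://localhost:3001/xrpc/com.atproto.repo.createRecord",
    		bytes.NewReader(body))
    	req.Header.Set("Content-Type", "application/json")
    	req.Header.Set("Authorization", "Bearer <access-jwt>")

    	resp, err := http.DefaultClient.Do(req)
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	fmt.Println("status:", resp.Status) // success responses include the record's uri and cid
    }
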
server
(binary file; contents not shown)
+28 -34
tests/lexicon_validation_test.go
···
package tests
import (
+
"os"
+
"path/filepath"
"strings"
"testing"
···
t.Fatalf("Failed to load lexicon schemas: %v", err)
}
-
// Test that we can resolve our key schemas
-
expectedSchemas := []string{
-
"social.coves.actor.profile",
-
"social.coves.actor.subscription",
-
"social.coves.actor.membership",
-
"social.coves.community.profile",
-
"social.coves.community.rules",
-
"social.coves.community.wiki",
-
"social.coves.post.text",
-
"social.coves.post.image",
-
"social.coves.post.video",
-
"social.coves.post.article",
-
"social.coves.richtext.facet",
-
"social.coves.embed.image",
-
"social.coves.embed.video",
-
"social.coves.embed.external",
-
"social.coves.embed.post",
-
"social.coves.interaction.vote",
-
"social.coves.interaction.tag",
-
"social.coves.interaction.comment",
-
"social.coves.interaction.share",
-
"social.coves.moderation.vote",
-
"social.coves.moderation.tribunalVote",
-
"social.coves.moderation.ruleProposal",
+
// Walk through the directory and find all lexicon files
+
var lexiconFiles []string
+
err := filepath.Walk(schemaPath, func(path string, info os.FileInfo, err error) error {
+
if err != nil {
+
return err
+
}
+
if strings.HasSuffix(path, ".json") && !info.IsDir() {
+
lexiconFiles = append(lexiconFiles, path)
+
}
+
return nil
+
})
+
if err != nil {
+
t.Fatalf("Failed to walk directory: %v", err)
}
-
for _, schemaID := range expectedSchemas {
+
t.Logf("Found %d lexicon files to validate", len(lexiconFiles))
+
+
// Extract schema IDs from file paths and test resolution
+
for _, filePath := range lexiconFiles {
+
// Convert file path to schema ID
+
// e.g., ../internal/atproto/lexicon/social/coves/actor/profile.json -> social.coves.actor.profile
+
relPath, _ := filepath.Rel(schemaPath, filePath)
+
relPath = strings.TrimSuffix(relPath, ".json")
+
schemaID := strings.ReplaceAll(relPath, string(filepath.Separator), ".")
+
t.Run(schemaID, func(t *testing.T) {
if _, err := catalog.Resolve(schemaID); err != nil {
t.Errorf("Failed to resolve schema %s: %v", schemaID, err)
···
"community": "did:plc:programming123",
"postType": "text",
"title": "Test Post",
-
"text": "This is a test post",
-
"tags": []string{"test", "golang"},
-
"language": "en",
-
"contentWarnings": []string{},
+
"content": "This is a test post",
"createdAt": "2025-01-09T14:30:00Z",
},
shouldFail: false,
···
"community": "did:plc:programming123",
"postType": "invalid-type",
"title": "Test Post",
-
"text": "This is a test post",
-
"tags": []string{"test"},
-
"language": "en",
-
"contentWarnings": []string{},
+
"content": "This is a test post",
"createdAt": "2025-01-09T14:30:00Z",
},
shouldFail: true,
···
if err != nil {
t.Errorf("Expected lenient validation to pass, got error: %v", err)
}
-
}
+
}
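
Because the test now discovers schemas by walking the lexicon directory, the feed lexicon added above (and any future schema files) is picked up and validated without editing the test; individual schemas can still be targeted through go test's -run flag on the generated subtest names.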