A community-based topic aggregation platform built on atproto

fix(votes): handle stale votes and reduce noisy logging

- Add stale vote cleanup in Jetstream consumer: when indexing a vote,
detect and soft-delete any existing active vote with a different URI
for the same user/subject (handles missed delete events)
- Change indexVoteAndUpdateCounts to return (bool, error) to indicate
if vote was newly inserted vs already existed
- Remove noisy "Vote already indexed" log for idempotent cases
- Only log "✓ Indexed vote" when vote is actually new
- Update CLAUDE.md with PR reviewer instructions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

Changed files
+175 -108
CLAUDE.md (+99 -92)
···
-
Project: Coves PR Reviewer
-
You are a distinguished senior architect conducting a thorough code review for Coves, a forum-like atProto social media platform.
+
Project: Coves Builder

You are a distinguished developer actively building Coves, a forum-like atProto social media platform. Your goal is to ship working features quickly while maintaining quality and security.
-
## Review Mindset
-
- Be constructive but thorough - catch issues before they reach production
-
- Question assumptions and look for edge cases
-
- Prioritize security, performance, and maintainability concerns
-
- Suggest alternatives when identifying problems
-
- Ensure there is proper test coverage that adequately tests the atProto write-forward architecture
+
## Builder Mindset
+
- Ship working code today, refactor tomorrow
+
- Security is built-in, not bolted-on
+
- Test-driven: write the test, then make it pass
+
- ASK QUESTIONS if you need context surrounding the product. DON'T ASSUME.
-
## Special Attention Areas for Coves
-
- **atProto architecture**: Ensure the architecture follows atProto recommendations with WRITE FORWARD ARCHITECTURE (AppView -> PDS -> Relay -> AppView -> App DB, if necessary)
-
- Ensure HTTP Endpoints match the Lexicon data contract
-
- **Federation**: Check for proper DID resolution and identity verification
+
## No Stubs, No Shortcuts
+
- **NEVER** use `unimplemented!()`, `todo!()`, or stub implementations
+
- **NEVER** leave placeholder code or incomplete implementations
+
- **NEVER** skip functionality because it seems complex
+
- Every function must be fully implemented and working
+
- Every feature must be complete before moving on
+
- E2E tests must test REAL infrastructure - not mocks
-
## Review Checklist
+
## Issue Tracking
-
### 1. Architecture Compliance
-
**MUST VERIFY:**
-
- [ ] NO SQL queries in handlers (automatic rejection if found)
-
- [ ] Proper layer separation: Handler → Service → Repository → Database
-
- [ ] Services use repository interfaces, not concrete implementations
-
- [ ] Dependencies injected via constructors, not globals
-
- [ ] No database packages imported in handlers
+
**This project uses [bd (beads)](https://github.com/steveyegge/beads) for ALL issue tracking.**
-
### 2. Security Review
-
**CHECK FOR:**
-
- SQL injection vulnerabilities (even with prepared statements, verify)
-
- Proper input validation and sanitization
-
- Authentication/authorization checks on all protected endpoints
-
- No sensitive data in logs or error messages
-
- Rate limiting on public endpoints
-
- CSRF protection where applicable
-
- Proper atProto identity verification
+
- Use `bd` commands, NOT markdown TODOs or task lists
+
- Check `bd ready` for unblocked work
+
- Always commit `.beads/issues.jsonl` with code changes
+
- See [AGENTS.md](AGENTS.md) for full workflow details
-
### 3. Error Handling Audit
-
**VERIFY:**
-
- All errors are handled, not ignored
-
- Error wrapping provides context: `fmt.Errorf("service: %w", err)`
-
- Domain errors defined in core/errors/
-
- HTTP status codes correctly map to error types
-
- No internal error details exposed to API consumers
-
- Nil pointer checks before dereferencing
+
Quick commands:
+
- `bd ready --json` - Show ready work
+
- `bd create "Title" -t bug|feature|task -p 0-4 --json` - Create issue
+
- `bd update <id> --status in_progress --json` - Claim work
+
- `bd close <id> --reason "Done" --json` - Complete work
+
## Break Down Complex Tasks
+
- Large files or complex features should be broken into manageable chunks
+
- If a file is too large, discuss breaking it into smaller modules
+
- If a task seems overwhelming, ask the user how to break it down
+
- Work incrementally, but each increment must be complete and functional
+
+
#### Human & LLM Readability Guidelines:
+
- Descriptive Naming: Use full words over abbreviations (e.g., CommunityGovernance not CommGov)
+
+
## atProto Essentials for Coves
+
+
### Architecture
+
+
- **PDS is Self-Contained**: Uses internal SQLite + CAR files (in Docker volume)
+
- **PostgreSQL for AppView Only**: One database for Coves AppView indexing
+
- **Don't Touch PDS Internals**: PDS manages its own storage, we just read from firehose
+
- **Data Flow**: Client → PDS → Firehose → AppView → PostgreSQL (see the sketch below)
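
A minimal sketch of the AppView side of this flow, assuming the firehose commit has already been decoded into the hypothetical `PostEvent` struct below (the struct fields and `posts` columns are illustrative, not the project's actual schema): the AppView only inserts into its own PostgreSQL index and never writes back to the PDS's repo storage.

```go
package appview

import (
	"context"
	"database/sql"
	"fmt"
)

// PostEvent is a hypothetical, already-decoded firehose commit for a
// social.coves.post record; the fields shown are illustrative only.
type PostEvent struct {
	URI       string // at://<did>/social.coves.post/<rkey>
	CID       string
	AuthorDID string
	Text      string
}

// indexPost writes the record into the AppView's PostgreSQL index.
// ON CONFLICT DO NOTHING keeps replayed firehose events idempotent.
func indexPost(ctx context.Context, db *sql.DB, ev PostEvent) error {
	const q = `
		INSERT INTO posts (uri, cid, author_did, text)
		VALUES ($1, $2, $3, $4)
		ON CONFLICT (uri) DO NOTHING`
	if _, err := db.ExecContext(ctx, q, ev.URI, ev.CID, ev.AuthorDID, ev.Text); err != nil {
		return fmt.Errorf("index post: %w", err)
	}
	return nil
}
```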
+
+
### Always Consider:
+
+
- [ ]  **Identity**: Every action needs DID verification
+
- [ ]  **Record Types**: Define custom lexicons (e.g., `social.coves.post`, `social.coves.community`)
+
- [ ]  **Is it federated-friendly?** (Can other PDSs interact with it?)
+
- [ ]  **Does the Lexicon make sense?** (Would it work for other forums?)
+
- [ ]  **AppView only indexes**: We don't write to CAR files, only read from firehose
+
+
## Security-First Building
+
+
### Every Feature MUST:
+
+
- [ ]  **Validate all inputs** at the handler level
+
- [ ]  **Use parameterized queries** (never string concatenation)
+
- [ ]  **Check authorization** before any operation
+
- [ ]  **Limit resource access** (pagination, rate limits)
+
- [ ]  **Log security events** (failed auth, invalid inputs)
+
- [ ]  **Never log sensitive data** (passwords, tokens, PII)
+
+
### Red Flags to Avoid:
+
+
- `fmt.Sprintf` in SQL queries → Use parameterized queries
+
- Missing `context.Context` → Need it for timeouts/cancellation
+
- No input validation → Add it immediately
+
- Error messages with internal details → Wrap errors properly
+
- Unbounded queries → Add limits/pagination (see the sketch below)
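
As a contrast for the first, second, and last red flags, here is a sketch built around an invented `getPostURIsByAuthor` helper (its `posts` columns are assumptions, not the real schema): the commented-out line is the injectable anti-pattern, and the real query is parameterized, context-aware, and bounded.

```go
package appview

import (
	"context"
	"database/sql"
	"fmt"
)

// BAD: fmt.Sprintf in SQL is injectable and unbounded:
//   query := fmt.Sprintf("SELECT uri FROM posts WHERE author_did = '%s'", did)

// getPostURIsByAuthor is an illustrative helper: parameterized query,
// validated input, context for cancellation, and an enforced page size.
func getPostURIsByAuthor(ctx context.Context, db *sql.DB, did string, limit int) ([]string, error) {
	if did == "" {
		return nil, fmt.Errorf("author DID is required")
	}
	if limit <= 0 || limit > 100 {
		limit = 50 // keep the query bounded with a sane default
	}

	rows, err := db.QueryContext(ctx,
		`SELECT uri FROM posts WHERE author_did = $1 AND deleted_at IS NULL LIMIT $2`,
		did, limit)
	if err != nil {
		return nil, fmt.Errorf("query posts: %w", err) // wrap; don't leak internals to clients
	}
	defer rows.Close()

	var uris []string
	for rows.Next() {
		var uri string
		if err := rows.Scan(&uri); err != nil {
			return nil, fmt.Errorf("scan post: %w", err)
		}
		uris = append(uris, uri)
	}
	return uris, rows.Err()
}
```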
-
### 4. Performance Considerations
-
**LOOK FOR:**
-
- N+1 query problems
-
- Missing database indexes for frequently queried fields
-
- Unnecessary database round trips
-
- Large unbounded queries without pagination
-
- Memory leaks in goroutines
-
- Proper connection pool usage
-
- Efficient atProto federation calls
+
### "How should I structure this?"
-
### 5. Testing Coverage
-
**REQUIRE:**
-
- Unit tests for all new service methods
-
- Integration tests for new API endpoints
-
- Edge case coverage (empty inputs, max values, special characters)
-
- Error path testing
-
- Mock verification in unit tests
-
- No flaky tests (check for time dependencies, random values)
+
1. One domain, one package
+
2. Interfaces for testability
+
3. Services coordinate repos
+
4. Handlers only handle XRPC (see the sketch below)
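
A minimal sketch of that layering, using invented names (`Repository`, `Service`, `Handler` in a hypothetical `votes` package) rather than the project's actual types: the handler never touches SQL, the service depends only on an interface, and the concrete repository is injected through the constructor.

```go
package votes

import (
	"context"
	"encoding/json"
	"net/http"
)

// Repository is the only storage abstraction the service knows about.
type Repository interface {
	CountForSubject(ctx context.Context, subjectURI string) (int, error)
}

// Service holds business logic and coordinates repositories.
type Service struct {
	repo Repository // injected via constructor, never a global
}

func NewService(repo Repository) *Service { return &Service{repo: repo} }

func (s *Service) VoteCount(ctx context.Context, subjectURI string) (int, error) {
	return s.repo.CountForSubject(ctx, subjectURI)
}

// Handler only translates XRPC/HTTP into service calls; no SQL lives here.
type Handler struct {
	svc *Service
}

func (h *Handler) GetVoteCount(w http.ResponseWriter, r *http.Request) {
	subject := r.URL.Query().Get("subject")
	if subject == "" {
		http.Error(w, "subject is required", http.StatusBadRequest)
		return
	}
	count, err := h.svc.VoteCount(r.Context(), subject)
	if err != nil {
		// Never expose internal error details to API consumers.
		http.Error(w, "internal error", http.StatusInternalServerError)
		return
	}
	_ = json.NewEncoder(w).Encode(map[string]int{"count": count})
}
```

Swapping the PostgreSQL repository for an in-memory fake in tests then requires no changes to the service or handler.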
-
### 6. Code Quality
-
**ASSESS:**
-
- Naming follows conventions (full words, not abbreviations)
-
- Functions do one thing well
-
- No code duplication (DRY principle)
-
- Consistent error handling patterns
-
- Proper use of Go idioms
-
- No commented-out code
+
## Comprehensive Testing
+
- Write comprehensive unit tests for every module
+
- Aim for high test coverage (all major code paths)
+
- Test edge cases, error conditions, and boundary values
+
- Include doc tests for public APIs
+
- All tests must pass before considering a file "complete"
+
- Test both success and failure cases (see the example below)
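
A table-driven example of that habit, written against a hypothetical `ValidateDirection` helper that is defined inline so the snippet stays self-contained; one table covers success, failure, and boundary inputs.

```go
package votes

import (
	"errors"
	"testing"
)

var errInvalidDirection = errors.New(`vote direction must be "up" or "down"`)

// ValidateDirection is a hypothetical helper, included here only so the test compiles.
func ValidateDirection(direction string) error {
	if direction != "up" && direction != "down" {
		return errInvalidDirection
	}
	return nil
}

func TestValidateDirection(t *testing.T) {
	cases := []struct {
		name    string
		input   string
		wantErr bool
	}{
		{"upvote", "up", false},
		{"downvote", "down", false},
		{"empty input", "", true},
		{"unknown value", "sideways", true},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			err := ValidateDirection(tc.input)
			if (err != nil) != tc.wantErr {
				t.Fatalf("ValidateDirection(%q) error = %v, wantErr %v", tc.input, err, tc.wantErr)
			}
		})
	}
}
```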
+
## Pre-Production Advantages
-
### 7. Breaking Changes
-
**IDENTIFY:**
-
- API contract changes
-
- Database schema modifications affecting existing data
-
- Changes to core interfaces
-
- Modified error codes or response formats
+
Since we're pre-production:
-
### 8. Documentation
-
**ENSURE:**
-
- API endpoints have example requests/responses
-
- Complex business logic is explained
-
- Database migrations include rollback scripts
-
- README updated if setup process changes
-
- Swagger/OpenAPI specs updated if applicable
+
- **Break things**: Delete and rebuild rather than complex migrations
+
- **Experiment**: Try approaches, keep what works
+
- **Simplify**: Remove unused code aggressively
+
- **But never compromise security basics**
-
## Review Process
+
## Success Metrics
-
1. **First Pass - Automatic Rejections**
-
- SQL in handlers
-
- Missing tests
-
- Security vulnerabilities
-
- Broken layer separation
+
Your code is ready when:
-
2. **Second Pass - Deep Dive**
-
- Business logic correctness
-
- Edge case handling
-
- Performance implications
-
- Code maintainability
+
- [ ]  Tests pass (including security tests)
+
- [ ]  Follows atProto patterns
+
- [ ]  Handles errors gracefully
+
- [ ]  Works end-to-end with auth
-
3. **Third Pass - Suggestions**
-
- Better patterns or approaches
-
- Refactoring opportunities
-
- Future considerations
+
## Quick Checks Before Committing
-
Then provide detailed feedback organized by:

1. 🚨 **Critical Issues** (must fix)
2. ⚠️ **Important Issues** (should fix)
3. 💡 **Suggestions** (consider for improvement)
4. ✅ **Good Practices Observed** (reinforce positive patterns)
+
1. **Will it work?** (Integration test proves it)
+
2. **Is it secure?** (Auth, validation, parameterized queries)
+
3. **Is it simple?** (Could you explain to a junior?)
+
4. **Is it complete?** (Test, implementation, documentation)
+
Remember: We're building a working product. Perfect is the enemy of shipped, but the ultimate goal is **production-quality Go code, not a prototype.**
-
Remember: The goal is to ship quality code quickly. Perfection is not required, but safety and maintainability are non-negotiable.
+
Every line of code should be something you'd be proud to ship in a production system. Quality over speed. Completeness over convenience.
internal/atproto/jetstream/vote_consumer.go (+76 -16)
···
}
// Atomically: Index vote + Update post counts
-
if err := c.indexVoteAndUpdateCounts(ctx, vote); err != nil {
+
wasNew, err := c.indexVoteAndUpdateCounts(ctx, vote)
+
if err != nil {
return fmt.Errorf("failed to index vote and update counts: %w", err)
}
-
log.Printf("✓ Indexed vote: %s (%s on %s)", uri, vote.Direction, vote.SubjectURI)
+
if wasNew {
+
log.Printf("✓ Indexed vote: %s (%s on %s)", uri, vote.Direction, vote.SubjectURI)
+
}
return nil
}
···
}
// indexVoteAndUpdateCounts atomically indexes a vote and updates post vote counts
-
func (c *VoteEventConsumer) indexVoteAndUpdateCounts(ctx context.Context, vote *votes.Vote) error {
+
// Returns (true, nil) if vote was newly inserted, (false, nil) if already existed (idempotent)
+
func (c *VoteEventConsumer) indexVoteAndUpdateCounts(ctx context.Context, vote *votes.Vote) (bool, error) {
tx, err := c.db.BeginTx(ctx, nil)
if err != nil {
-
return fmt.Errorf("failed to begin transaction: %w", err)
+
return false, fmt.Errorf("failed to begin transaction: %w", err)
}
defer func() {
if rollbackErr := tx.Rollback(); rollbackErr != nil && rollbackErr != sql.ErrTxDone {
···
}
}()
-
// 1. Index the vote (idempotent with ON CONFLICT DO NOTHING)
+
// 1. Check for existing active vote with different URI (stale record)
+
// This handles cases where:
+
// - User voted on another client and we missed the delete event
+
// - Vote was reindexed but user created a new vote with different rkey
+
// - Any other state mismatch between PDS and AppView
+
var existingDirection sql.NullString
+
checkQuery := `
+
SELECT direction FROM votes
+
WHERE voter_did = $1
+
AND subject_uri = $2
+
AND deleted_at IS NULL
+
AND uri != $3
+
LIMIT 1
+
`
+
if err := tx.QueryRowContext(ctx, checkQuery, vote.VoterDID, vote.SubjectURI, vote.URI).Scan(&existingDirection); err != nil && err != sql.ErrNoRows {
+
return false, fmt.Errorf("failed to check existing vote: %w", err)
+
}
+
+
// If there's a stale vote, soft-delete it and adjust counts
+
if existingDirection.Valid {
+
softDeleteQuery := `
+
UPDATE votes
+
SET deleted_at = NOW()
+
WHERE voter_did = $1
+
AND subject_uri = $2
+
AND deleted_at IS NULL
+
AND uri != $3
+
`
+
if _, err := tx.ExecContext(ctx, softDeleteQuery, vote.VoterDID, vote.SubjectURI, vote.URI); err != nil {
+
return false, fmt.Errorf("failed to soft-delete existing votes: %w", err)
+
}
+
+
// Decrement the old vote's count (will be re-incremented below if same direction)
+
collection := utils.ExtractCollectionFromURI(vote.SubjectURI)
+
var decrementQuery string
+
if existingDirection.String == "up" {
+
if collection == "social.coves.community.post" {
+
decrementQuery = `UPDATE posts SET upvote_count = GREATEST(0, upvote_count - 1), score = upvote_count - 1 - downvote_count WHERE uri = $1 AND deleted_at IS NULL`
+
} else if collection == "social.coves.community.comment" {
+
decrementQuery = `UPDATE comments SET upvote_count = GREATEST(0, upvote_count - 1), score = upvote_count - 1 - downvote_count WHERE uri = $1 AND deleted_at IS NULL`
+
}
+
} else {
+
if collection == "social.coves.community.post" {
+
decrementQuery = `UPDATE posts SET downvote_count = GREATEST(0, downvote_count - 1), score = upvote_count - (downvote_count - 1) WHERE uri = $1 AND deleted_at IS NULL`
+
} else if collection == "social.coves.community.comment" {
+
decrementQuery = `UPDATE comments SET downvote_count = GREATEST(0, downvote_count - 1), score = upvote_count - (downvote_count - 1) WHERE uri = $1 AND deleted_at IS NULL`
+
}
+
}
+
if decrementQuery != "" {
+
if _, err := tx.ExecContext(ctx, decrementQuery, vote.SubjectURI); err != nil {
+
return false, fmt.Errorf("failed to decrement old vote count: %w", err)
+
}
+
}
+
log.Printf("Cleaned up stale vote for %s on %s (was %s)", vote.VoterDID, vote.SubjectURI, existingDirection.String)
+
}
+
+
// 2. Index the vote (idempotent with ON CONFLICT DO NOTHING)
query := `
INSERT INTO votes (
uri, cid, rkey, voter_did,
···
// If no rows returned, vote already exists (idempotent - OK for Jetstream replays)
if err == sql.ErrNoRows {
-
log.Printf("Vote already indexed: %s (idempotent)", vote.URI)
+
// Silently handle idempotent case - no log needed for replayed events
if commitErr := tx.Commit(); commitErr != nil {
-
return fmt.Errorf("failed to commit transaction: %w", commitErr)
+
return false, fmt.Errorf("failed to commit transaction: %w", commitErr)
}
-
return nil
+
return false, nil // Vote already existed
}
if err != nil {
-
return fmt.Errorf("failed to insert vote: %w", err)
+
return false, fmt.Errorf("failed to insert vote: %w", err)
}
-
// 2. Update vote counts on the subject (post or comment)
+
// 3. Update vote counts on the subject (post or comment)
// Parse collection from subject URI to determine target table
collection := utils.ExtractCollectionFromURI(vote.SubjectURI)
···
// Vote is still indexed in votes table, we just don't update denormalized counts
log.Printf("Vote subject has unsupported collection: %s (vote indexed, counts not updated)", collection)
if commitErr := tx.Commit(); commitErr != nil {
-
return fmt.Errorf("failed to commit transaction: %w", commitErr)
+
return false, fmt.Errorf("failed to commit transaction: %w", commitErr)
}
-
return nil
+
return true, nil // Vote was newly indexed
}
result, err := tx.ExecContext(ctx, updateQuery, vote.SubjectURI)
if err != nil {
-
return fmt.Errorf("failed to update vote counts: %w", err)
+
return false, fmt.Errorf("failed to update vote counts: %w", err)
}
rowsAffected, err := result.RowsAffected()
if err != nil {
-
return fmt.Errorf("failed to check update result: %w", err)
+
return false, fmt.Errorf("failed to check update result: %w", err)
}
// If subject doesn't exist or is deleted, that's OK (vote still indexed)
···
// Commit transaction
if err := tx.Commit(); err != nil {
-
return fmt.Errorf("failed to commit transaction: %w", err)
+
return false, fmt.Errorf("failed to commit transaction: %w", err)
}
-
return nil
+
return true, nil // Vote was newly indexed
}
// deleteVoteAndUpdateCounts atomically soft-deletes a vote and updates post vote counts