An atproto PDS written in Go


+1 -1
.env.example
···
COCOON_RELAYS=https://bsky.network
# Generate with `openssl rand -hex 16`
COCOON_ADMIN_PASSWORD=
- # openssl rand -hex 32
+ # Generate with `openssl rand -hex 32`
COCOON_SESSION_SECRET=
+3 -1
.github/workflows/docker-image.yml
···
on:
workflow_dispatch:
push:
+     branches:
+       - main
env:
REGISTRY: ghcr.io
···
with:
subject-name: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME}}
subject-digest: ${{ steps.push.outputs.digest }}
- push-to-registry: true
+ push-to-registry: true
+2
.gitignore
···
*.key
*.secret
.DS_Store
+ data/
+ keys/
+10
Caddyfile
···
+ {$COCOON_HOSTNAME} {
+     reverse_proxy localhost:8080
+
+     encode gzip
+
+     log {
+         output file /data/access.log
+         format json
+     }
+ }
+137 -6
README.md
···
# Cocoon
> [!WARNING]
- You should not use this PDS. You should not rely on this code as a reference for a PDS implementation. You should not trust this code. Using this PDS implementation may result in data loss, corruption, etc.
+ I migrated and have been running my main account on this PDS for months now without issue; however, I am still not responsible if things go awry, particularly during account migration. Please use caution.
Cocoon is a PDS implementation in Go. It is highly experimental, and is not ready for any production use.
+ ## Quick Start with Docker Compose
+
+ ### Prerequisites
+
+ - Docker and Docker Compose installed
+ - A domain name pointing to your server (for automatic HTTPS)
+ - Ports 80 and 443 open in your firewall (e.g. UFW; see the example below)
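+
+ For example, on Ubuntu with UFW (a minimal sketch; adapt to whatever firewall you actually use):
+
+ ```bash
+ sudo ufw allow 80/tcp
+ sudo ufw allow 443/tcp
+ ```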
+
+ ### Installation
+
+ 1. **Clone the repository**
+    ```bash
+    git clone https://github.com/haileyok/cocoon.git
+    cd cocoon
+    ```
+
+ 2. **Create your configuration file**
+    ```bash
+    cp .env.example .env
+    ```
+
+ 3. **Edit `.env` with your settings**
+
+    Required settings:
+    ```bash
+    COCOON_DID="did:web:your-domain.com"
+    COCOON_HOSTNAME="your-domain.com"
+    COCOON_CONTACT_EMAIL="you@example.com"
+    COCOON_RELAYS="https://bsky.network"
+
+    # Generate with: openssl rand -hex 16
+    COCOON_ADMIN_PASSWORD="your-secure-password"
+
+    # Generate with: openssl rand -hex 32
+    COCOON_SESSION_SECRET="your-session-secret"
+    ```
+
+ 4. **Start the services**
+    ```bash
+    # Pull pre-built image from GitHub Container Registry
+    docker-compose pull
+    docker-compose up -d
+    ```
+
+    Or build locally:
+    ```bash
+    docker-compose build
+    docker-compose up -d
+    ```
+
+ 5. **Get your invite code**
+
+    On first run, an invite code is automatically created. View it with:
+    ```bash
+    docker-compose logs create-invite
+    ```
+
+    Or check the saved file:
+    ```bash
+    cat keys/initial-invite-code.txt
+    ```
+
+    **IMPORTANT**: Save this invite code! You'll need it to create your first account.
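+
+    For example, you can create your first account with the invite code (a rough sketch using the standard `com.atproto.server.createAccount` XRPC call; substitute your own handle, email, password, and the code you saved):
+    ```bash
+    curl -X POST "https://your-domain.com/xrpc/com.atproto.server.createAccount" \
+      -H "Content-Type: application/json" \
+      -d '{
+        "handle": "you.your-domain.com",
+        "email": "you@example.com",
+        "password": "a-strong-password",
+        "inviteCode": "xxxxxxxx-xxxxxxxx"
+      }'
+    ```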
+
+ 6. **Monitor the services**
+    ```bash
+    docker-compose logs -f
+    ```
+
+ ### What Gets Set Up
+
+ The Docker Compose setup includes:
+
+ - **init-keys**: Automatically generates cryptographic keys (rotation key and JWK) on first run
+ - **cocoon**: The main PDS service running on port 8080
+ - **create-invite**: Automatically creates an initial invite code after Cocoon starts (first run only)
+ - **caddy**: Reverse proxy with automatic HTTPS via Let's Encrypt
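+
+ After `docker-compose up -d`, the one-shot containers (`init-keys`, `create-invite`) should exit cleanly while `cocoon` and `caddy` keep running. A quick way to check (sketch):
+
+ ```bash
+ docker-compose ps
+ docker-compose logs init-keys create-invite
+ ```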
+
+ ### Data Persistence
+
+ The following directories will be created automatically:
+
+ - `./keys/` - Cryptographic keys (generated automatically)
+   - `rotation.key` - PDS rotation key
+   - `jwk.key` - JWK private key
+   - `initial-invite-code.txt` - Your first invite code (first run only)
+ - `./data/` - SQLite database and blockstore
+ - Docker volumes for Caddy configuration and certificates
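+
+ `./keys/` and `./data/` hold everything that cannot be regenerated (your signing keys, database, and blockstore), so back them up regularly. A minimal sketch, assuming you stop the PDS first:
+
+ ```bash
+ docker-compose stop cocoon
+ tar czf "cocoon-backup-$(date +%F).tar.gz" keys/ data/
+ docker-compose start cocoon
+ ```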
+
+ ### Optional Configuration
+
+ #### SMTP Email Settings
+ ```bash
+ COCOON_SMTP_USER="your-smtp-username"
+ COCOON_SMTP_PASS="your-smtp-password"
+ COCOON_SMTP_HOST="smtp.example.com"
+ COCOON_SMTP_PORT="587"
+ COCOON_SMTP_EMAIL="noreply@example.com"
+ COCOON_SMTP_NAME="Cocoon PDS"
+ ```
+
+ #### S3 Storage
+ ```bash
+ COCOON_S3_BACKUPS_ENABLED=true
+ COCOON_S3_BLOBSTORE_ENABLED=true
+ COCOON_S3_REGION="us-east-1"
+ COCOON_S3_BUCKET="your-bucket"
+ COCOON_S3_ENDPOINT="https://s3.amazonaws.com"
+ COCOON_S3_ACCESS_KEY="your-access-key"
+ COCOON_S3_SECRET_KEY="your-secret-key"
+ ```
+
+ ### Management Commands
+
+ Create an invite code:
+ ```bash
+ docker exec cocoon-pds /cocoon create-invite-code --uses 1
+ ```
+
+ Reset a user's password:
+ ```bash
+ docker exec cocoon-pds /cocoon reset-password --did "did:plc:xxx"
+ ```
+
+ ### Updating
+
+ ```bash
+ docker-compose pull
+ docker-compose up -d
+ ```
+
## Implemented Endpoints
> [!NOTE]
···
### Identity
- - [ ] `com.atproto.identity.getRecommendedDidCredentials`
- - [ ] `com.atproto.identity.requestPlcOperationSignature`
+ - [x] `com.atproto.identity.getRecommendedDidCredentials`
+ - [x] `com.atproto.identity.requestPlcOperationSignature` (see the example request below)
- [x] `com.atproto.identity.resolveHandle`
- - [ ] `com.atproto.identity.signPlcOperation`
- - [ ] `com.atproto.identity.submitPlcOperation`
+ - [x] `com.atproto.identity.signPlcOperation`
+ - [x] `com.atproto.identity.submitPlcOperation`
- [x] `com.atproto.identity.updateHandle`
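+
+ A minimal sketch of exercising the PLC operation endpoints above (assumes a valid access token in `$ACCESS_JWT`; the signature token is emailed to the account's address and expires after ten minutes):
+
+ ```bash
+ # 1. Request an email token for the PLC operation
+ curl -X POST "https://your-domain.com/xrpc/com.atproto.identity.requestPlcOperationSignature" \
+   -H "Authorization: Bearer $ACCESS_JWT"
+
+ # 2. Have the PDS sign an operation using the emailed token
+ curl -X POST "https://your-domain.com/xrpc/com.atproto.identity.signPlcOperation" \
+   -H "Authorization: Bearer $ACCESS_JWT" \
+   -H "Content-Type: application/json" \
+   -d '{"token": "xxxxx-xxxxx"}'
+ ```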
### Repo
···
- [x] `com.atproto.repo.deleteRecord`
- [x] `com.atproto.repo.describeRepo`
- [x] `com.atproto.repo.getRecord`
- - [x] `com.atproto.repo.importRepo` (Works "okay". You still have to handle PLC operations on your own when migrating. Use with extreme caution.)
+ - [x] `com.atproto.repo.importRepo` (Works "okay". Use with extreme caution.)
- [x] `com.atproto.repo.listRecords`
- [ ] `com.atproto.repo.listMissingBlobs`
+26 -39
cmd/cocoon/main.go
···
EnvVars: []string{"COCOON_DB_NAME"},
},
&cli.StringFlag{
-
Name: "did",
-
Required: true,
-
EnvVars: []string{"COCOON_DID"},
+
Name: "did",
+
EnvVars: []string{"COCOON_DID"},
},
&cli.StringFlag{
-
Name: "hostname",
-
Required: true,
-
EnvVars: []string{"COCOON_HOSTNAME"},
+
Name: "hostname",
+
EnvVars: []string{"COCOON_HOSTNAME"},
},
&cli.StringFlag{
-
Name: "rotation-key-path",
-
Required: true,
-
EnvVars: []string{"COCOON_ROTATION_KEY_PATH"},
+
Name: "rotation-key-path",
+
EnvVars: []string{"COCOON_ROTATION_KEY_PATH"},
},
&cli.StringFlag{
-
Name: "jwk-path",
-
Required: true,
-
EnvVars: []string{"COCOON_JWK_PATH"},
+
Name: "jwk-path",
+
EnvVars: []string{"COCOON_JWK_PATH"},
},
&cli.StringFlag{
-
Name: "contact-email",
-
Required: true,
-
EnvVars: []string{"COCOON_CONTACT_EMAIL"},
+
Name: "contact-email",
+
EnvVars: []string{"COCOON_CONTACT_EMAIL"},
},
&cli.StringSliceFlag{
-
Name: "relays",
-
Required: true,
-
EnvVars: []string{"COCOON_RELAYS"},
+
Name: "relays",
+
EnvVars: []string{"COCOON_RELAYS"},
},
&cli.StringFlag{
-
Name: "admin-password",
-
Required: true,
-
EnvVars: []string{"COCOON_ADMIN_PASSWORD"},
+
Name: "admin-password",
+
EnvVars: []string{"COCOON_ADMIN_PASSWORD"},
},
&cli.StringFlag{
-
Name: "smtp-user",
-
Required: false,
-
EnvVars: []string{"COCOON_SMTP_USER"},
+
Name: "smtp-user",
+
EnvVars: []string{"COCOON_SMTP_USER"},
},
&cli.StringFlag{
-
Name: "smtp-pass",
-
Required: false,
-
EnvVars: []string{"COCOON_SMTP_PASS"},
+
Name: "smtp-pass",
+
EnvVars: []string{"COCOON_SMTP_PASS"},
},
&cli.StringFlag{
-
Name: "smtp-host",
-
Required: false,
-
EnvVars: []string{"COCOON_SMTP_HOST"},
+
Name: "smtp-host",
+
EnvVars: []string{"COCOON_SMTP_HOST"},
},
&cli.StringFlag{
-
Name: "smtp-port",
-
Required: false,
-
EnvVars: []string{"COCOON_SMTP_PORT"},
+
Name: "smtp-port",
+
EnvVars: []string{"COCOON_SMTP_PORT"},
},
&cli.StringFlag{
-
Name: "smtp-email",
-
Required: false,
-
EnvVars: []string{"COCOON_SMTP_EMAIL"},
+
Name: "smtp-email",
+
EnvVars: []string{"COCOON_SMTP_EMAIL"},
},
&cli.StringFlag{
-
Name: "smtp-name",
-
Required: false,
-
EnvVars: []string{"COCOON_SMTP_NAME"},
+
Name: "smtp-name",
+
EnvVars: []string{"COCOON_SMTP_NAME"},
},
&cli.BoolFlag{
Name: "s3-backups-enabled",
+56
create-initial-invite.sh
···
+ #!/bin/sh
+
+ INVITE_FILE="/keys/initial-invite-code.txt"
+ MARKER="/keys/.invite_created"
+
+ # Check if invite code was already created
+ if [ -f "$MARKER" ]; then
+     echo "✓ Initial invite code already created"
+     exit 0
+ fi
+
+ echo "Waiting for database to be ready..."
+ sleep 10
+
+ # Try to create invite code - retry until database is ready
+ MAX_ATTEMPTS=30
+ ATTEMPT=0
+ INVITE_CODE=""
+
+ while [ $ATTEMPT -lt $MAX_ATTEMPTS ]; do
+     ATTEMPT=$((ATTEMPT + 1))
+     OUTPUT=$(/cocoon create-invite-code --uses 1 2>&1)
+     INVITE_CODE=$(echo "$OUTPUT" | grep -oE '[a-zA-Z0-9]{8}-[a-zA-Z0-9]{8}' || echo "")
+
+     if [ -n "$INVITE_CODE" ]; then
+         break
+     fi
+
+     if [ $((ATTEMPT % 5)) -eq 0 ]; then
+         echo "  Waiting for database... ($ATTEMPT/$MAX_ATTEMPTS)"
+     fi
+     sleep 2
+ done
+
+ if [ -n "$INVITE_CODE" ]; then
+     echo ""
+     echo "╔════════════════════════════════════════╗"
+     echo "║        SAVE THIS INVITE CODE!          ║"
+     echo "║                                        ║"
+     echo "║        $INVITE_CODE             ║"
+     echo "║                                        ║"
+     echo "║     Use this to create your first      ║"
+     echo "║        account on your PDS.            ║"
+     echo "╚════════════════════════════════════════╝"
+     echo ""
+
+     echo "$INVITE_CODE" > "$INVITE_FILE"
+     echo "✓ Invite code saved to: $INVITE_FILE"
+
+     touch "$MARKER"
+     echo "✓ Initial setup complete!"
+ else
+     echo "✗ Failed to create invite code"
+     echo "Output: $OUTPUT"
+     exit 1
+ fi
+125
docker-compose.yaml
···
+ version: '3.8'
+
+ services:
+   init-keys:
+     build:
+       context: .
+       dockerfile: Dockerfile
+     image: ghcr.io/haileyok/cocoon:latest
+     container_name: cocoon-init-keys
+     volumes:
+       - ./keys:/keys
+       - ./data:/data/cocoon
+       - ./init-keys.sh:/init-keys.sh:ro
+     environment:
+       COCOON_DID: ${COCOON_DID}
+       COCOON_HOSTNAME: ${COCOON_HOSTNAME}
+       COCOON_ROTATION_KEY_PATH: /keys/rotation.key
+       COCOON_JWK_PATH: /keys/jwk.key
+       COCOON_CONTACT_EMAIL: ${COCOON_CONTACT_EMAIL}
+       COCOON_RELAYS: ${COCOON_RELAYS:-https://bsky.network}
+       COCOON_ADMIN_PASSWORD: ${COCOON_ADMIN_PASSWORD}
+     entrypoint: ["/bin/sh", "/init-keys.sh"]
+     restart: "no"
+
+   cocoon:
+     build:
+       context: .
+       dockerfile: Dockerfile
+     image: ghcr.io/haileyok/cocoon:latest
+     container_name: cocoon-pds
+     network_mode: host
+     depends_on:
+       init-keys:
+         condition: service_completed_successfully
+     volumes:
+       - ./data:/data/cocoon
+       - ./keys/rotation.key:/keys/rotation.key:ro
+       - ./keys/jwk.key:/keys/jwk.key:ro
+     environment:
+       # Required settings
+       COCOON_DID: ${COCOON_DID}
+       COCOON_HOSTNAME: ${COCOON_HOSTNAME}
+       COCOON_ROTATION_KEY_PATH: /keys/rotation.key
+       COCOON_JWK_PATH: /keys/jwk.key
+       COCOON_CONTACT_EMAIL: ${COCOON_CONTACT_EMAIL}
+       COCOON_RELAYS: ${COCOON_RELAYS:-https://bsky.network}
+       COCOON_ADMIN_PASSWORD: ${COCOON_ADMIN_PASSWORD}
+       COCOON_SESSION_SECRET: ${COCOON_SESSION_SECRET}
+
+       # Server configuration
+       COCOON_ADDR: ":8080"
+       COCOON_DB_NAME: /data/cocoon/cocoon.db
+       COCOON_BLOCKSTORE_VARIANT: ${COCOON_BLOCKSTORE_VARIANT:-sqlite}
+
+       # Optional: SMTP settings for email
+       COCOON_SMTP_USER: ${COCOON_SMTP_USER:-}
+       COCOON_SMTP_PASS: ${COCOON_SMTP_PASS:-}
+       COCOON_SMTP_HOST: ${COCOON_SMTP_HOST:-}
+       COCOON_SMTP_PORT: ${COCOON_SMTP_PORT:-}
+       COCOON_SMTP_EMAIL: ${COCOON_SMTP_EMAIL:-}
+       COCOON_SMTP_NAME: ${COCOON_SMTP_NAME:-}
+
+       # Optional: S3 configuration
+       COCOON_S3_BACKUPS_ENABLED: ${COCOON_S3_BACKUPS_ENABLED:-false}
+       COCOON_S3_BLOBSTORE_ENABLED: ${COCOON_S3_BLOBSTORE_ENABLED:-false}
+       COCOON_S3_REGION: ${COCOON_S3_REGION:-}
+       COCOON_S3_BUCKET: ${COCOON_S3_BUCKET:-}
+       COCOON_S3_ENDPOINT: ${COCOON_S3_ENDPOINT:-}
+       COCOON_S3_ACCESS_KEY: ${COCOON_S3_ACCESS_KEY:-}
+       COCOON_S3_SECRET_KEY: ${COCOON_S3_SECRET_KEY:-}
+
+       # Optional: Fallback proxy
+       COCOON_FALLBACK_PROXY: ${COCOON_FALLBACK_PROXY:-}
+     restart: unless-stopped
+     healthcheck:
+       test: ["CMD", "curl", "-f", "http://localhost:8080/xrpc/_health"]
+       interval: 30s
+       timeout: 10s
+       retries: 3
+       start_period: 40s
+
+   create-invite:
+     build:
+       context: .
+       dockerfile: Dockerfile
+     image: ghcr.io/haileyok/cocoon:latest
+     container_name: cocoon-create-invite
+     network_mode: host
+     volumes:
+       - ./keys:/keys
+       - ./create-initial-invite.sh:/create-initial-invite.sh:ro
+     environment:
+       COCOON_DID: ${COCOON_DID}
+       COCOON_HOSTNAME: ${COCOON_HOSTNAME}
+       COCOON_ROTATION_KEY_PATH: /keys/rotation.key
+       COCOON_JWK_PATH: /keys/jwk.key
+       COCOON_CONTACT_EMAIL: ${COCOON_CONTACT_EMAIL}
+       COCOON_RELAYS: ${COCOON_RELAYS:-https://bsky.network}
+       COCOON_ADMIN_PASSWORD: ${COCOON_ADMIN_PASSWORD}
+       COCOON_DB_NAME: /data/cocoon/cocoon.db
+     depends_on:
+       - init-keys
+     entrypoint: ["/bin/sh", "/create-initial-invite.sh"]
+     restart: "no"
+
+   caddy:
+     image: caddy:2-alpine
+     container_name: cocoon-caddy
+     network_mode: host
+     volumes:
+       - ./Caddyfile:/etc/caddy/Caddyfile:ro
+       - caddy_data:/data
+       - caddy_config:/config
+     restart: unless-stopped
+     environment:
+       COCOON_HOSTNAME: ${COCOON_HOSTNAME}
+       CADDY_ACME_EMAIL: ${COCOON_CONTACT_EMAIL:-}
+
+ volumes:
+   data:
+     driver: local
+   caddy_data:
+     driver: local
+   caddy_config:
+     driver: local
+34
init-keys.sh
···
+ #!/bin/sh
+ set -e
+
+ mkdir -p /keys
+ mkdir -p /data/cocoon
+
+ if [ ! -f /keys/rotation.key ]; then
+     echo "Generating rotation key..."
+     /cocoon create-rotation-key --out /keys/rotation.key 2>/dev/null || true
+     if [ -f /keys/rotation.key ]; then
+         echo "✓ Rotation key generated at /keys/rotation.key"
+     else
+         echo "✗ Failed to generate rotation key"
+         exit 1
+     fi
+ else
+     echo "✓ Rotation key already exists"
+ fi
+
+ if [ ! -f /keys/jwk.key ]; then
+     echo "Generating JWK..."
+     /cocoon create-private-jwk --out /keys/jwk.key 2>/dev/null || true
+     if [ -f /keys/jwk.key ]; then
+         echo "✓ JWK generated at /keys/jwk.key"
+     else
+         echo "✗ Failed to generate JWK"
+         exit 1
+     fi
+ else
+     echo "✓ JWK already exists"
+ fi
+
+ echo ""
+ echo "✓ Key initialization complete!"
+2
models/models.go
···
EmailUpdateCodeExpiresAt *time.Time
PasswordResetCode *string
PasswordResetCodeExpiresAt *time.Time
+ PlcOperationCode *string
+ PlcOperationCodeExpiresAt *time.Time
Password string
SigningKey []byte
Rev string
+44 -28
oauth/client/manager.go
···
cli *http.Client
logger *slog.Logger
jwksCache cache.Cache[string, jwk.Key]
- metadataCache cache.Cache[string, Metadata]
+ metadataCache cache.Cache[string, *Metadata]
}
type ManagerArgs struct {
···
}
jwksCache := cache.NewCache[string, jwk.Key]().WithLRU().WithMaxKeys(500).WithTTL(5 * time.Minute)
- metadataCache := cache.NewCache[string, Metadata]().WithLRU().WithMaxKeys(500).WithTTL(5 * time.Minute)
+ metadataCache := cache.NewCache[string, *Metadata]().WithLRU().WithMaxKeys(500).WithTTL(5 * time.Minute)
return &Manager{
cli: args.Cli,
···
}
var jwks jwk.Key
- if metadata.JWKS != nil && len(metadata.JWKS.Keys) > 0 {
-     // TODO: this is kinda bad but whatever for now. there could obviously be more than one jwk, and we need to
-     // make sure we use the right one
-     b, err := json.Marshal(metadata.JWKS.Keys[0])
-     if err != nil {
-         return nil, err
-     }
+ if metadata.TokenEndpointAuthMethod == "private_key_jwt" {
+     if metadata.JWKS != nil && len(metadata.JWKS.Keys) > 0 {
+         // TODO: this is kinda bad but whatever for now. there could obviously be more than one jwk, and we need to
+         // make sure we use the right one
+         b, err := json.Marshal(metadata.JWKS.Keys[0])
+         if err != nil {
+             return nil, err
+         }
-     k, err := helpers.ParseJWKFromBytes(b)
-     if err != nil {
-         return nil, err
-     }
+         k, err := helpers.ParseJWKFromBytes(b)
+         if err != nil {
+             return nil, err
+         }
-     jwks = k
- } else if metadata.JWKSURI != nil {
-     maybeJwks, err := cm.getClientJwks(ctx, clientId, *metadata.JWKSURI)
-     if err != nil {
-         return nil, err
-     }
+         jwks = k
+     } else if metadata.JWKS != nil {
+     } else if metadata.JWKSURI != nil {
+         maybeJwks, err := cm.getClientJwks(ctx, clientId, *metadata.JWKSURI)
+         if err != nil {
+             return nil, err
+         }
-     jwks = maybeJwks
- } else {
-     return nil, fmt.Errorf("no valid jwks found in oauth client metadata")
+         jwks = maybeJwks
+     } else {
+         return nil, fmt.Errorf("no valid jwks found in oauth client metadata")
+     }
}
return &Client{
···
}
func (cm *Manager) getClientMetadata(ctx context.Context, clientId string) (*Metadata, error) {
- metadataCached, ok := cm.metadataCache.Get(clientId)
+ cached, ok := cm.metadataCache.Get(clientId)
if !ok {
req, err := http.NewRequestWithContext(ctx, "GET", clientId, nil)
if err != nil {
···
return nil, err
}
+ cm.metadataCache.Set(clientId, validated, 10*time.Minute)
+ return validated, nil
} else {
- return &metadataCached, nil
+ return cached, nil
}
}
···
return nil, fmt.Errorf("error unmarshaling metadata: %w", err)
}
+ if metadata.ClientURI == "" {
+     u, err := url.Parse(metadata.ClientID)
+     if err != nil {
+         return nil, fmt.Errorf("unable to parse client id: %w", err)
+     }
+     u.RawPath = ""
+     u.RawQuery = ""
+     metadata.ClientURI = u.String()
+ }
+
u, err := url.Parse(metadata.ClientURI)
if err != nil {
return nil, fmt.Errorf("unable to parse client uri: %w", err)
}
+ if metadata.ClientName == "" {
+     metadata.ClientName = metadata.ClientURI
+ }
+
if isLocalHostname(u.Hostname()) {
- return nil, errors.New("`client_uri` hostname is invalid")
+ return nil, fmt.Errorf("`client_uri` hostname is invalid: %s", u.Hostname())
}
if metadata.Scope == "" {
···
if u.Scheme != "http" {
return nil, fmt.Errorf("loopback redirect uri %s must use http", ruri)
}
-
- break
case u.Scheme == "http":
return nil, errors.New("only loopback redirect uris are allowed to use the `http` scheme")
case u.Scheme == "https":
if isLocalHostname(u.Hostname()) {
return nil, fmt.Errorf("redirect uri %s's domain must not be a local hostname", ruri)
}
- break
case strings.Contains(u.Scheme, "."):
if metadata.ApplicationType != "native" {
return nil, errors.New("private-use uri scheme redirect uris are only allowed for native apps")
+31 -15
plc/client.go
···
}
func (c *Client) CreateDID(sigkey *atcrypto.PrivateKeyK256, recovery string, handle string) (string, *Operation, error) {
-
pubsigkey, err := sigkey.PublicKey()
+
creds, err := c.CreateDidCredentials(sigkey, recovery, handle)
if err != nil {
return "", nil, err
}
-
pubrotkey, err := c.rotationKey.PublicKey()
+
op := Operation{
+
Type: "plc_operation",
+
VerificationMethods: creds.VerificationMethods,
+
RotationKeys: creds.RotationKeys,
+
AlsoKnownAs: creds.AlsoKnownAs,
+
Services: creds.Services,
+
Prev: nil,
+
}
+
+
if err := c.SignOp(sigkey, &op); err != nil {
+
return "", nil, err
+
}
+
+
did, err := DidFromOp(&op)
if err != nil {
return "", nil, err
}
+
return did, &op, nil
+
}
+
+
func (c *Client) CreateDidCredentials(sigkey *atcrypto.PrivateKeyK256, recovery string, handle string) (*DidCredentials, error) {
+
pubsigkey, err := sigkey.PublicKey()
+
if err != nil {
+
return nil, err
+
}
+
+
pubrotkey, err := c.rotationKey.PublicKey()
+
if err != nil {
+
return nil, err
+
}
+
// todo
rotationKeys := []string{pubrotkey.DIDKey()}
if recovery != "" {
···
}(recovery)
}
-
op := Operation{
-
Type: "plc_operation",
+
creds := DidCredentials{
VerificationMethods: map[string]string{
"atproto": pubsigkey.DIDKey(),
},
···
Endpoint: "https://" + c.pdsHostname,
},
},
-
Prev: nil,
}
-
if err := c.SignOp(sigkey, &op); err != nil {
-
return "", nil, err
-
}
-
-
did, err := DidFromOp(&op)
-
if err != nil {
-
return "", nil, err
-
}
-
-
return did, &op, nil
+
return &creds, nil
}
func (c *Client) SignOp(sigkey *atcrypto.PrivateKeyK256, op *Operation) error {
+8
plc/types.go
···
cbg "github.com/whyrusleeping/cbor-gen"
)
+
+ type DidCredentials struct {
+     VerificationMethods map[string]string                    `json:"verificationMethods"`
+     RotationKeys        []string                             `json:"rotationKeys"`
+     AlsoKnownAs         []string                             `json:"alsoKnownAs"`
+     Services            map[string]identity.OperationService `json:"services"`
+ }
+
type Operation struct {
Type string `json:"type"`
VerificationMethods map[string]string `json:"verificationMethods"`
+1
server/handle_account.go
···
func (s *Server) handleAccount(e echo.Context) error {
ctx := e.Request().Context()
+
repo, sess, err := s.getSessionRepoOrErr(e)
if err != nil {
return e.Redirect(303, "/account/signin")
+29
server/handle_identity_request_plc_operation.go
···
+ package server
+
+ import (
+     "fmt"
+     "time"
+
+     "github.com/haileyok/cocoon/internal/helpers"
+     "github.com/haileyok/cocoon/models"
+     "github.com/labstack/echo/v4"
+ )
+
+ func (s *Server) handleIdentityRequestPlcOperationSignature(e echo.Context) error {
+     urepo := e.Get("repo").(*models.RepoActor)
+
+     code := fmt.Sprintf("%s-%s", helpers.RandomVarchar(5), helpers.RandomVarchar(5))
+     eat := time.Now().Add(10 * time.Minute).UTC()
+
+     if err := s.db.Exec("UPDATE repos SET plc_operation_code = ?, plc_operation_code_expires_at = ? WHERE did = ?", nil, code, eat, urepo.Repo.Did).Error; err != nil {
+         s.logger.Error("error updating user", "error", err)
+         return helpers.ServerError(e, nil)
+     }
+
+     if err := s.sendPlcTokenReset(urepo.Email, urepo.Handle, code); err != nil {
+         s.logger.Error("error sending mail", "error", err)
+         return helpers.ServerError(e, nil)
+     }
+
+     return e.NoContent(200)
+ }
+103
server/handle_identity_sign_plc_operation.go
···
+
package server
+
+
import (
+
"context"
+
"strings"
+
"time"
+
+
"github.com/Azure/go-autorest/autorest/to"
+
"github.com/bluesky-social/indigo/atproto/atcrypto"
+
"github.com/haileyok/cocoon/identity"
+
"github.com/haileyok/cocoon/internal/helpers"
+
"github.com/haileyok/cocoon/models"
+
"github.com/haileyok/cocoon/plc"
+
"github.com/labstack/echo/v4"
+
)
+
+
type ComAtprotoSignPlcOperationRequest struct {
+
Token string `json:"token"`
+
VerificationMethods *map[string]string `json:"verificationMethods"`
+
RotationKeys *[]string `json:"rotationKeys"`
+
AlsoKnownAs *[]string `json:"alsoKnownAs"`
+
Services *map[string]identity.OperationService `json:"services"`
+
}
+
+
type ComAtprotoSignPlcOperationResponse struct {
+
Operation plc.Operation `json:"operation"`
+
}
+
+
func (s *Server) handleSignPlcOperation(e echo.Context) error {
+
repo := e.Get("repo").(*models.RepoActor)
+
+
var req ComAtprotoSignPlcOperationRequest
+
if err := e.Bind(&req); err != nil {
+
s.logger.Error("error binding", "error", err)
+
return helpers.ServerError(e, nil)
+
}
+
+
if !strings.HasPrefix(repo.Repo.Did, "did:plc:") {
+
return helpers.InputError(e, nil)
+
}
+
+
if repo.PlcOperationCode == nil || repo.PlcOperationCodeExpiresAt == nil {
+
return helpers.InputError(e, to.StringPtr("InvalidToken"))
+
}
+
+
if *repo.PlcOperationCode != req.Token {
+
return helpers.InvalidTokenError(e)
+
}
+
+
if time.Now().UTC().After(*repo.PlcOperationCodeExpiresAt) {
+
return helpers.ExpiredTokenError(e)
+
}
+
+
ctx := context.WithValue(e.Request().Context(), "skip-cache", true)
+
log, err := identity.FetchDidAuditLog(ctx, nil, repo.Repo.Did)
+
if err != nil {
+
s.logger.Error("error fetching doc", "error", err)
+
return helpers.ServerError(e, nil)
+
}
+
+
latest := log[len(log)-1]
+
+
op := plc.Operation{
+
Type: "plc_operation",
+
VerificationMethods: latest.Operation.VerificationMethods,
+
RotationKeys: latest.Operation.RotationKeys,
+
AlsoKnownAs: latest.Operation.AlsoKnownAs,
+
Services: latest.Operation.Services,
+
Prev: &latest.Cid,
+
}
+
if req.VerificationMethods != nil {
+
op.VerificationMethods = *req.VerificationMethods
+
}
+
if req.RotationKeys != nil {
+
op.RotationKeys = *req.RotationKeys
+
}
+
if req.AlsoKnownAs != nil {
+
op.AlsoKnownAs = *req.AlsoKnownAs
+
}
+
if req.Services != nil {
+
op.Services = *req.Services
+
}
+
+
k, err := atcrypto.ParsePrivateBytesK256(repo.SigningKey)
+
if err != nil {
+
s.logger.Error("error parsing signing key", "error", err)
+
return helpers.ServerError(e, nil)
+
}
+
+
if err := s.plcClient.SignOp(k, &op); err != nil {
+
s.logger.Error("error signing plc operation", "error", err)
+
return helpers.ServerError(e, nil)
+
}
+
+
if err := s.db.Exec("UPDATE repos SET plc_operation_code = NULL, plc_operation_code_expires_at = NULL WHERE did = ?", nil, repo.Repo.Did).Error; err != nil {
+
s.logger.Error("error updating repo", "error", err)
+
return helpers.ServerError(e, nil)
+
}
+
+
return e.JSON(200, ComAtprotoSignPlcOperationResponse{
+
Operation: op,
+
})
+
}
+87
server/handle_identity_submit_plc_operation.go
···
+ package server
+
+ import (
+     "context"
+     "slices"
+     "strings"
+     "time"
+
+     "github.com/bluesky-social/indigo/api/atproto"
+     "github.com/bluesky-social/indigo/atproto/atcrypto"
+     "github.com/bluesky-social/indigo/events"
+     "github.com/bluesky-social/indigo/util"
+     "github.com/haileyok/cocoon/internal/helpers"
+     "github.com/haileyok/cocoon/models"
+     "github.com/haileyok/cocoon/plc"
+     "github.com/labstack/echo/v4"
+ )
+
+ type ComAtprotoSubmitPlcOperationRequest struct {
+     Operation plc.Operation `json:"operation"`
+ }
+
+ func (s *Server) handleSubmitPlcOperation(e echo.Context) error {
+     repo := e.Get("repo").(*models.RepoActor)
+
+     var req ComAtprotoSubmitPlcOperationRequest
+     if err := e.Bind(&req); err != nil {
+         s.logger.Error("error binding", "error", err)
+         return helpers.ServerError(e, nil)
+     }
+
+     if err := e.Validate(req); err != nil {
+         return helpers.InputError(e, nil)
+     }
+     if !strings.HasPrefix(repo.Repo.Did, "did:plc:") {
+         return helpers.InputError(e, nil)
+     }
+
+     op := req.Operation
+
+     k, err := atcrypto.ParsePrivateBytesK256(repo.SigningKey)
+     if err != nil {
+         s.logger.Error("error parsing key", "error", err)
+         return helpers.ServerError(e, nil)
+     }
+     required, err := s.plcClient.CreateDidCredentials(k, "", repo.Actor.Handle)
+     if err != nil {
+         s.logger.Error("error creating did credentials", "error", err)
+         return helpers.ServerError(e, nil)
+     }
+
+     for _, expectedKey := range required.RotationKeys {
+         if !slices.Contains(op.RotationKeys, expectedKey) {
+             return helpers.InputError(e, nil)
+         }
+     }
+     if op.Services["atproto_pds"].Type != "AtprotoPersonalDataServer" {
+         return helpers.InputError(e, nil)
+     }
+     if op.Services["atproto_pds"].Endpoint != required.Services["atproto_pds"].Endpoint {
+         return helpers.InputError(e, nil)
+     }
+     if op.VerificationMethods["atproto"] != required.VerificationMethods["atproto"] {
+         return helpers.InputError(e, nil)
+     }
+     if len(op.AlsoKnownAs) == 0 || op.AlsoKnownAs[0] != required.AlsoKnownAs[0] {
+         return helpers.InputError(e, nil)
+     }
+
+     if err := s.plcClient.SendOperation(e.Request().Context(), repo.Repo.Did, &op); err != nil {
+         return err
+     }
+
+     if err := s.passport.BustDoc(context.TODO(), repo.Repo.Did); err != nil {
+         s.logger.Warn("error busting did doc", "error", err)
+     }
+
+     s.evtman.AddEvent(context.TODO(), &events.XRPCStreamEvent{
+         RepoIdentity: &atproto.SyncSubscribeRepos_Identity{
+             Did:  repo.Repo.Did,
+             Seq:  time.Now().UnixMicro(), // TODO: no
+             Time: time.Now().Format(util.ISO8601),
+         },
+     })
+
+     return nil
+ }
+9 -8
server/handle_import_repo.go
···
import (
"bytes"
- "context"
"io"
"slices"
"strings"
···
)
func (s *Server) handleRepoImportRepo(e echo.Context) error {
+ ctx := e.Request().Context()
+
urepo := e.Get("repo").(*models.RepoActor)
b, err := io.ReadAll(e.Request().Body)
···
slices.Reverse(orderedBlocks)
- if err := bs.PutMany(context.TODO(), orderedBlocks); err != nil {
+ if err := bs.PutMany(ctx, orderedBlocks); err != nil {
s.logger.Error("could not insert blocks", "error", err)
return helpers.ServerError(e, nil)
}
- r, err := repo.OpenRepo(context.TODO(), bs, cs.Header.Roots[0])
+ r, err := repo.OpenRepo(ctx, bs, cs.Header.Roots[0])
if err != nil {
s.logger.Error("could not open repo", "error", err)
return helpers.ServerError(e, nil)
···
clock := syntax.NewTIDClock(0)
- if err := r.ForEach(context.TODO(), "", func(key string, cid cid.Cid) error {
+ if err := r.ForEach(ctx, "", func(key string, cid cid.Cid) error {
pts := strings.Split(key, "/")
nsid := pts[0]
rkey := pts[1]
cidStr := cid.String()
- b, err := bs.Get(context.TODO(), cid)
+ b, err := bs.Get(ctx, cid)
if err != nil {
s.logger.Error("record bytes don't exist in blockstore", "error", err)
return helpers.ServerError(e, nil)
···
Value: b.RawData(),
}
- if err := tx.Create(rec).Error; err != nil {
+ if err := tx.Save(rec).Error; err != nil {
return err
}
···
tx.Commit()
- root, rev, err := r.Commit(context.TODO(), urepo.SignFor)
+ root, rev, err := r.Commit(ctx, urepo.SignFor)
if err != nil {
s.logger.Error("error committing", "error", err)
return helpers.ServerError(e, nil)
}
- if err := s.UpdateRepo(context.TODO(), urepo.Repo.Did, root, rev); err != nil {
+ if err := s.UpdateRepo(ctx, urepo.Repo.Did, root, rev); err != nil {
s.logger.Error("error updating repo after commit", "error", err)
return helpers.ServerError(e, nil)
}
+15 -2
server/handle_proxy.go
···
encheader := strings.TrimRight(base64.RawURLEncoding.EncodeToString(hj), "=")
+ // When proxying app.bsky.feed.getFeed the token is actually issued for the
+ // underlying feed generator and the app view passes it on. This allows the
+ // getFeed implementation to pass in the desired lxm and aud for the token
+ // and then just delegate to the general proxying logic
+ lxm, proxyTokenLxmExists := e.Get("proxyTokenLxm").(string)
+ if !proxyTokenLxmExists || lxm == "" {
+     lxm = pts[2]
+ }
+ aud, proxyTokenAudExists := e.Get("proxyTokenAud").(string)
+ if !proxyTokenAudExists || aud == "" {
+     aud = svcDid
+ }
+
payload := map[string]any{
"iss": repo.Repo.Did,
- "aud": svcDid,
- "lxm": pts[2],
+ "aud": aud,
+ "lxm": lxm,
"jti": uuid.NewString(),
"exp": time.Now().Add(1 * time.Minute).UTC().Unix(),
}
+35
server/handle_proxy_get_feed.go
···
+ package server
+
+ import (
+     "github.com/Azure/go-autorest/autorest/to"
+     "github.com/bluesky-social/indigo/api/atproto"
+     "github.com/bluesky-social/indigo/api/bsky"
+     "github.com/bluesky-social/indigo/atproto/syntax"
+     "github.com/bluesky-social/indigo/xrpc"
+     "github.com/haileyok/cocoon/internal/helpers"
+     "github.com/labstack/echo/v4"
+ )
+
+ func (s *Server) handleProxyBskyFeedGetFeed(e echo.Context) error {
+     feedUri, err := syntax.ParseATURI(e.QueryParam("feed"))
+     if err != nil {
+         return helpers.InputError(e, to.StringPtr("invalid feed uri"))
+     }
+
+     appViewEndpoint, _, err := s.getAtprotoProxyEndpointFromRequest(e)
+     if err != nil {
+         e.Logger().Error("could not get atproto proxy", "error", err)
+         return helpers.ServerError(e, nil)
+     }
+
+     appViewClient := xrpc.Client{
+         Host: appViewEndpoint,
+     }
+     feedRecord, err := atproto.RepoGetRecord(e.Request().Context(), &appViewClient, "", feedUri.Collection().String(), feedUri.Authority().String(), feedUri.RecordKey().String())
+     if err != nil {
+         e.Logger().Error("could not get feed generator record", "error", err)
+         return helpers.ServerError(e, nil)
+     }
+     feedGeneratorDid := feedRecord.Value.Val.(*bsky.FeedGenerator).Did
+
+     e.Set("proxyTokenLxm", "app.bsky.feed.getFeedSkeleton")
+     e.Set("proxyTokenAud", feedGeneratorDid)
+
+     return s.handleProxy(e)
+ }
+3 -1
server/handle_repo_apply_writes.go
···
}
func (s *Server) handleApplyWrites(e echo.Context) error {
+ ctx := e.Request().Context()
+
repo := e.Get("repo").(*models.RepoActor)
var req ComAtprotoRepoApplyWritesRequest
···
})
}
- results, err := s.repoman.applyWrites(repo.Repo, ops, req.SwapCommit)
+ results, err := s.repoman.applyWrites(ctx, repo.Repo, ops, req.SwapCommit)
if err != nil {
s.logger.Error("error applying writes", "error", err)
return helpers.ServerError(e, nil)
+3 -1
server/handle_repo_create_record.go
···
}
func (s *Server) handleCreateRecord(e echo.Context) error {
+ ctx := e.Request().Context()
+
repo := e.Get("repo").(*models.RepoActor)
var req ComAtprotoRepoCreateRecordRequest
···
optype = OpTypeUpdate
}
- results, err := s.repoman.applyWrites(repo.Repo, []Op{
+ results, err := s.repoman.applyWrites(ctx, repo.Repo, []Op{
{
Type: optype,
Collection: req.Collection,
+3 -1
server/handle_repo_delete_record.go
···
}
func (s *Server) handleDeleteRecord(e echo.Context) error {
+ ctx := e.Request().Context()
+
repo := e.Get("repo").(*models.RepoActor)
var req ComAtprotoRepoDeleteRecordRequest
···
return helpers.InputError(e, nil)
}
- results, err := s.repoman.applyWrites(repo.Repo, []Op{
+ results, err := s.repoman.applyWrites(ctx, repo.Repo, []Op{
{
Type: OpTypeDelete,
Collection: req.Collection,
+3 -1
server/handle_repo_put_record.go
···
}
func (s *Server) handlePutRecord(e echo.Context) error {
+ ctx := e.Request().Context()
+
repo := e.Get("repo").(*models.RepoActor)
var req ComAtprotoRepoPutRecordRequest
···
optype = OpTypeUpdate
}
- results, err := s.repoman.applyWrites(repo.Repo, []Op{
+ results, err := s.repoman.applyWrites(ctx, repo.Repo, []Op{
{
Type: optype,
Collection: req.Collection,
+46 -15
server/handle_server_check_account_status.go
···
package server
import (
+ "errors"
+ "sync"
+
+ "github.com/bluesky-social/indigo/atproto/syntax"
"github.com/haileyok/cocoon/internal/helpers"
"github.com/haileyok/cocoon/models"
"github.com/ipfs/go-cid"
···
func (s *Server) handleServerCheckAccountStatus(e echo.Context) error {
urepo := e.Get("repo").(*models.RepoActor)
+ _, didErr := syntax.ParseDID(urepo.Repo.Did)
+ if didErr != nil {
+     s.logger.Error("error validating did", "err", didErr)
+ }
+
resp := ComAtprotoServerCheckAccountStatusResponse{
Activated: true, // TODO: should allow for deactivation etc.
- ValidDid: true, // TODO: should probably verify?
+ ValidDid: didErr == nil,
RepoRev: urepo.Rev,
ImportedBlobs: 0, // TODO: ???
}
···
s.logger.Error("error casting cid", "error", err)
return helpers.ServerError(e, nil)
}
+ resp.RepoCommit = rootcid.String()
type CountResp struct {
···
}
var blockCtResp CountResp
- if err := s.db.Raw("SELECT COUNT(*) AS ct FROM blocks WHERE did = ?", nil, urepo.Repo.Did).Scan(&blockCtResp).Error; err != nil {
-     s.logger.Error("error getting block count", "error", err)
-     return helpers.ServerError(e, nil)
- }
- resp.RepoBlocks = blockCtResp.Ct
+ var recCtResp CountResp
+ var blobCtResp CountResp
+
+ var wg sync.WaitGroup
+ var procMu sync.Mutex // procErr is written from multiple goroutines, so guard it
+ var procErr error
+
+ wg.Add(1)
+ go func() {
+     defer wg.Done()
+     if err := s.db.Raw("SELECT COUNT(*) AS ct FROM blocks WHERE did = ?", nil, urepo.Repo.Did).Scan(&blockCtResp).Error; err != nil {
+         s.logger.Error("error getting block count", "error", err)
+         procMu.Lock()
+         procErr = errors.Join(procErr, err)
+         procMu.Unlock()
+     }
+ }()
+
+ wg.Add(1)
+ go func() {
+     defer wg.Done()
+     if err := s.db.Raw("SELECT COUNT(*) AS ct FROM records WHERE did = ?", nil, urepo.Repo.Did).Scan(&recCtResp).Error; err != nil {
+         s.logger.Error("error getting record count", "error", err)
+         procMu.Lock()
+         procErr = errors.Join(procErr, err)
+         procMu.Unlock()
+     }
+ }()
- var recCtResp CountResp
- if err := s.db.Raw("SELECT COUNT(*) AS ct FROM records WHERE did = ?", nil, urepo.Repo.Did).Scan(&recCtResp).Error; err != nil {
-     s.logger.Error("error getting record count", "error", err)
-     return helpers.ServerError(e, nil)
- }
- resp.IndexedRecords = recCtResp.Ct
+ wg.Add(1)
+ go func() {
+     defer wg.Done()
+     if err := s.db.Raw("SELECT COUNT(*) AS ct FROM blobs WHERE did = ?", nil, urepo.Repo.Did).Scan(&blobCtResp).Error; err != nil {
+         s.logger.Error("error getting expected blobs count", "error", err)
+         procMu.Lock()
+         procErr = errors.Join(procErr, err)
+         procMu.Unlock()
+     }
+ }()
- var blobCtResp CountResp
- if err := s.db.Raw("SELECT COUNT(*) AS ct FROM blobs WHERE did = ?", nil, urepo.Repo.Did).Scan(&blobCtResp).Error; err != nil {
-     s.logger.Error("error getting record count", "error", err)
+ wg.Wait()
+ if procErr != nil {
return helpers.ServerError(e, nil)
}
+
+ resp.RepoBlocks = blockCtResp.Ct
+ resp.IndexedRecords = recCtResp.Ct
resp.ExpectedBlobs = blobCtResp.Ct
return e.JSON(200, resp)
+3 -1
server/handle_sync_get_record.go
···
)
func (s *Server) handleSyncGetRecord(e echo.Context) error {
+ ctx := e.Request().Context()
+
did := e.QueryParam("did")
collection := e.QueryParam("collection")
rkey := e.QueryParam("rkey")
···
return helpers.ServerError(e, nil)
}
- root, blocks, err := s.repoman.getRecordProof(urepo, collection, rkey)
+ root, blocks, err := s.repoman.getRecordProof(ctx, urepo, collection, rkey)
if err != nil {
return err
}
+19
server/mail.go
···
return nil
}
+ func (s *Server) sendPlcTokenReset(email, handle, code string) error {
+     if s.mail == nil {
+         return nil
+     }
+
+     s.mailLk.Lock()
+     defer s.mailLk.Unlock()
+
+     s.mail.To(email)
+     s.mail.Subject("PLC token for " + s.config.Hostname)
+     s.mail.Plain().Set(fmt.Sprintf("Hello %s. Your PLC operation code is %s. This code will expire in ten minutes.", handle, code))
+
+     if err := s.mail.Send(); err != nil {
+         return err
+     }
+
+     return nil
+ }
+
func (s *Server) sendEmailUpdate(email, handle, code string) error {
if s.mail == nil {
return nil
+19 -16
server/repo.go
···
}
// TODO make use of swap commit
-
func (rm *RepoMan) applyWrites(urepo models.Repo, writes []Op, swapCommit *string) ([]ApplyWriteResult, error) {
+
func (rm *RepoMan) applyWrites(ctx context.Context, urepo models.Repo, writes []Op, swapCommit *string) ([]ApplyWriteResult, error) {
rootcid, err := cid.Cast(urepo.Root)
if err != nil {
return nil, err
···
dbs := rm.s.getBlockstore(urepo.Did)
bs := recording_blockstore.New(dbs)
-
r, err := repo.OpenRepo(context.TODO(), bs, rootcid)
+
r, err := repo.OpenRepo(ctx, bs, rootcid)
-
entries := []models.Record{}
-
var results []ApplyWriteResult
+
entries := make([]models.Record, 0, len(writes))
+
results := make([]ApplyWriteResult, 0, len(writes))
for i, op := range writes {
if op.Type != OpTypeCreate && op.Rkey == nil {
return nil, fmt.Errorf("invalid rkey")
} else if op.Type == OpTypeCreate && op.Rkey != nil {
-
_, _, err := r.GetRecord(context.TODO(), op.Collection+"/"+*op.Rkey)
+
_, _, err := r.GetRecord(ctx, op.Collection+"/"+*op.Rkey)
if err == nil {
op.Type = OpTypeUpdate
}
···
mm["$type"] = op.Collection
}
-
nc, err := r.PutRecord(context.TODO(), op.Collection+"/"+*op.Rkey, &mm)
+
nc, err := r.PutRecord(ctx, op.Collection+"/"+*op.Rkey, &mm)
if err != nil {
return nil, err
}
···
Rkey: *op.Rkey,
Value: old.Value,
})
-
err := r.DeleteRecord(context.TODO(), op.Collection+"/"+*op.Rkey)
+
err := r.DeleteRecord(ctx, op.Collection+"/"+*op.Rkey)
if err != nil {
return nil, err
}
···
return nil, err
}
mm := MarshalableMap(out)
-
nc, err := r.UpdateRecord(context.TODO(), op.Collection+"/"+*op.Rkey, &mm)
+
nc, err := r.UpdateRecord(ctx, op.Collection+"/"+*op.Rkey, &mm)
if err != nil {
return nil, err
}
···
}
}
-
newroot, rev, err := r.Commit(context.TODO(), urepo.SignFor)
+
newroot, rev, err := r.Commit(ctx, urepo.SignFor)
if err != nil {
return nil, err
}
···
Roots: []cid.Cid{newroot},
Version: 1,
})
+
if err != nil {
+
return nil, err
+
}
if _, err := carstore.LdWrite(buf, hb); err != nil {
return nil, err
}
-
diffops, err := r.DiffSince(context.TODO(), rootcid)
+
diffops, err := r.DiffSince(ctx, rootcid)
if err != nil {
return nil, err
}
···
})
}
-
blk, err := dbs.Get(context.TODO(), c)
+
blk, err := dbs.Get(ctx, c)
if err != nil {
return nil, err
}
···
}
}
-
rm.s.evtman.AddEvent(context.TODO(), &events.XRPCStreamEvent{
+
rm.s.evtman.AddEvent(ctx, &events.XRPCStreamEvent{
RepoCommit: &atproto.SyncSubscribeRepos_Commit{
Repo: urepo.Did,
Blocks: buf.Bytes(),
···
},
})
-
if err := rm.s.UpdateRepo(context.TODO(), urepo.Did, newroot, rev); err != nil {
+
if err := rm.s.UpdateRepo(ctx, urepo.Did, newroot, rev); err != nil {
return nil, err
}
···
return results, nil
}
-
func (rm *RepoMan) getRecordProof(urepo models.Repo, collection, rkey string) (cid.Cid, []blocks.Block, error) {
+
func (rm *RepoMan) getRecordProof(ctx context.Context, urepo models.Repo, collection, rkey string) (cid.Cid, []blocks.Block, error) {
c, err := cid.Cast(urepo.Root)
if err != nil {
return cid.Undef, nil, err
···
dbs := rm.s.getBlockstore(urepo.Did)
bs := recording_blockstore.New(dbs)
-
r, err := repo.OpenRepo(context.TODO(), bs, c)
+
r, err := repo.OpenRepo(ctx, bs, c)
if err != nil {
return cid.Undef, nil, err
}
-
_, _, err = r.GetRecordBytes(context.TODO(), collection+"/"+rkey)
+
_, _, err = r.GetRecordBytes(ctx, collection+"/"+rkey)
if err != nil {
return cid.Undef, nil, err
}
+5
server/server.go
···
s.echo.GET("/xrpc/com.atproto.server.getSession", s.handleGetSession, s.handleLegacySessionMiddleware, s.handleOauthSessionMiddleware)
s.echo.POST("/xrpc/com.atproto.server.refreshSession", s.handleRefreshSession, s.handleLegacySessionMiddleware, s.handleOauthSessionMiddleware)
s.echo.POST("/xrpc/com.atproto.server.deleteSession", s.handleDeleteSession, s.handleLegacySessionMiddleware, s.handleOauthSessionMiddleware)
+ s.echo.GET("/xrpc/com.atproto.identity.getRecommendedDidCredentials", s.handleGetRecommendedDidCredentials, s.handleLegacySessionMiddleware, s.handleOauthSessionMiddleware)
s.echo.POST("/xrpc/com.atproto.identity.updateHandle", s.handleIdentityUpdateHandle, s.handleLegacySessionMiddleware, s.handleOauthSessionMiddleware)
+ s.echo.POST("/xrpc/com.atproto.identity.requestPlcOperationSignature", s.handleIdentityRequestPlcOperationSignature, s.handleLegacySessionMiddleware, s.handleOauthSessionMiddleware)
+ s.echo.POST("/xrpc/com.atproto.identity.signPlcOperation", s.handleSignPlcOperation, s.handleLegacySessionMiddleware, s.handleOauthSessionMiddleware)
+ s.echo.POST("/xrpc/com.atproto.identity.submitPlcOperation", s.handleSubmitPlcOperation, s.handleLegacySessionMiddleware, s.handleOauthSessionMiddleware)
s.echo.POST("/xrpc/com.atproto.server.confirmEmail", s.handleServerConfirmEmail, s.handleLegacySessionMiddleware, s.handleOauthSessionMiddleware)
s.echo.POST("/xrpc/com.atproto.server.requestEmailConfirmation", s.handleServerRequestEmailConfirmation, s.handleLegacySessionMiddleware, s.handleOauthSessionMiddleware)
s.echo.POST("/xrpc/com.atproto.server.requestPasswordReset", s.handleServerRequestPasswordReset) // AUTH NOT REQUIRED FOR THIS ONE
···
// stupid silly endpoints
s.echo.GET("/xrpc/app.bsky.actor.getPreferences", s.handleActorGetPreferences, s.handleLegacySessionMiddleware, s.handleOauthSessionMiddleware)
s.echo.POST("/xrpc/app.bsky.actor.putPreferences", s.handleActorPutPreferences, s.handleLegacySessionMiddleware, s.handleOauthSessionMiddleware)
+ s.echo.GET("/xrpc/app.bsky.feed.getFeed", s.handleProxyBskyFeedGetFeed, s.handleLegacySessionMiddleware, s.handleOauthSessionMiddleware)
// admin routes
s.echo.POST("/xrpc/com.atproto.server.createInviteCode", s.handleCreateInviteCode, s.handleAdminMiddleware)