Cargo.lock  +218 -5
cozy-setup (move to another repo).md  +19 -2
+well... the gateway fell over IMMEDIATELY with like 2 req/sec from deletions, with that ^^ config. for now i removed everything except the reverse proxy config + normal caddy metrics and it's running fine on vanilla caddy. i did try reducing the rate-limiting configs to a single, fixed-key global limit but it still ate all the ram and died. maybe badger w/ the cache config was still a problem. maybe it would have been ok on a machine with more than 1GB mem.
+- nginx. i should probably just use this. acme-client is a piece of cake to set up, and i know how to configure it.
+- haproxy. also kind of familiar, it's old and stable. no idea how it handles low-mem (our 1GB) vs nginx.
+- rpxy. like caddy (auto-tls) but in rust and actually fast? has an "experimental" cache feature, but the cache feature looks good.
+- rama. build-your-own proxy. not sure that it has both a cache and a limiter in its standard features?
+- pingora. build-your-own cloudflare, so like, probably stable. has tools for cache and limiting. low-mem...?
+- cache stuff in pingora seems a little... hit and miss (byeeeee). only a test impl for Storage for the main cache feature?
+- but the rate-limiter has a guide: https://github.com/cloudflare/pingora/blob/main/docs/user_guide/rate_limiter.md
+what i want is a low-resource reverse proxy with built-in rate-limiting and caching. but maybe the cache (and/or rate-limiting) could be external to the reverse proxy.
+- varnish is a dedicated cache. has https://github.com/varnish/varnish-modules/blob/master/src/vmod_vsthrottle.vcc
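As a concrete reference for the "single, fixed-key global limit" mentioned above, here is a minimal fixed-window limiter sketch in plain Rust (std only). It is not the Caddy or pingora configuration that was tested; the window size and request cap are made-up numbers, and in practice this logic would live in the proxy (the pingora rate_limiter guide linked above covers the in-proxy version) or in the service itself.

```rust
use std::sync::Mutex;
use std::time::{Duration, Instant};

/// Fixed-window counter with a single global key: memory use is constant (one
/// counter), unlike per-client keyed limiters whose state grows with traffic.
struct GlobalLimiter {
    window: Duration,
    max_requests: u32,
    state: Mutex<(Instant, u32)>, // (window start, requests seen in this window)
}

impl GlobalLimiter {
    fn new(window: Duration, max_requests: u32) -> Self {
        Self {
            window,
            max_requests,
            state: Mutex::new((Instant::now(), 0)),
        }
    }

    /// Returns true if this request is allowed under the global limit.
    fn allow(&self) -> bool {
        let mut state = self.state.lock().unwrap();
        let now = Instant::now();
        if now.duration_since(state.0) >= self.window {
            *state = (now, 0); // start a new window
        }
        if state.1 < self.max_requests {
            state.1 += 1;
            true
        } else {
            false
        }
    }
}

fn main() {
    // hypothetical numbers: 100 requests per second, shared by all clients
    let limiter = GlobalLimiter::new(Duration::from_secs(1), 100);
    for i in 0..5 {
        println!("request {i} allowed: {}", limiter.allow());
    }
}
```

The appeal of the single fixed key is that memory stays flat no matter how much traffic arrives, which is what a 1GB box needs; per the note above, the badger-backed cache config is the more likely culprit for the RAM blow-up than the limiter itself.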
link_aggregator/Cargo.toml  +1
link_aggregator/readme.md  +9 -6
 - [ ] put delete-account tasks into a separate (persisted?) task queue for the writer so it can work on them incrementally.
+- [x] don't remove deleted links from the reverse records -- null them out. this will keep things stable for paging.
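The "stable for paging" point in that readme item is easiest to see with a toy example. This is not the crate's actual cursor format or types, just string stand-ins: if a cursor is an index into the linker list, a tombstoned slot keeps every later index pointing at the same entry, whereas Vec::remove would shift them under an in-flight cursor.

```rust
type Linker = (String, String); // stand-in for (Did, RKey)

/// Return up to `limit` live linkers starting at `cursor` (an index), plus the next cursor.
/// Tombstoned entries are skipped but still occupy their slot, so indices never shift.
fn page(linkers: &[Option<Linker>], cursor: usize, limit: usize) -> (Vec<&Linker>, usize) {
    let mut out = Vec::new();
    let mut idx = cursor;
    while idx < linkers.len() && out.len() < limit {
        if let Some(l) = &linkers[idx] {
            out.push(l);
        }
        idx += 1;
    }
    (out, idx)
}

fn main() {
    let mut linkers: Vec<Option<Linker>> = vec![
        Some(("did:a".into(), "1".into())),
        Some(("did:b".into(), "2".into())),
        Some(("did:c".into(), "3".into())),
    ];
    let (first_page, cursor) = page(&linkers, 0, 2); // client saved cursor == 2
    assert_eq!(first_page.len(), 2);
    drop(first_page);

    linkers[1] = None; // a delete lands between page requests: tombstone, don't remove

    // the saved cursor still points just past did:b's (now empty) slot,
    // so the second page starts at did:c instead of skipping it
    let (second_page, _) = page(&linkers, cursor, 2);
    assert_eq!(second_page.len(), 1);
    assert_eq!(second_page[0].0, "did:c");
}
```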
link_aggregator/src/lib.rs  +1 -1
link_aggregator/src/server.rs  +93 -4
 pub async fn serve<S, A>(store: S, addr: A, stay_alive: CancellationToken) -> anyhow::Result<()>
link_aggregator/src/storage/mem_store.rs  +112 -23
+type Linkers = Vec<Option<(Did, RKey)>>; // optional because we replace with None for deleted links to keep cursors stable
+targets: HashMap<Target, HashMap<Source, Linkers>>, // target -> (collection, path) -> (did, rkey)?[]
 links: HashMap<Did, HashMap<RepoId, Vec<(RecordPath, Target)>>>, // did -> collection:rkey -> (path, target)[]
-// only delete one instance: a user can create multiple links to something, we're only deleting one
-// (we don't know which one in the list we should be deleting, and it hopefully mostly doesn't matter)
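To make the shape of those two maps and the "only delete one instance" rule concrete, here is a stripped-down sketch with string stand-ins for the crate's Did/RKey/Target/Source/RepoId/RecordPath types; the real MemStorage's method names and bookkeeping will differ.

```rust
use std::collections::HashMap;

// String stand-ins for the crate's own newtypes.
type Did = String;
type RKey = String;
type Target = String;
type Source = String; // (collection, path) in the real store
type RepoId = String; // collection:rkey in the real store
type RecordPath = String;

type Linkers = Vec<Option<(Did, RKey)>>; // None = deleted, kept so cursors stay stable

#[derive(Default)]
struct MemStoreSketch {
    // forward: who links to a target, grouped by where the link appears in a record
    targets: HashMap<Target, HashMap<Source, Linkers>>,
    // reverse: everything a did's records link to, used when the record or account goes away
    links: HashMap<Did, HashMap<RepoId, Vec<(RecordPath, Target)>>>,
}

impl MemStoreSketch {
    fn add_link(&mut self, did: Did, rkey: RKey, repo: RepoId, path: RecordPath, source: Source, target: Target) {
        self.targets
            .entry(target.clone())
            .or_default()
            .entry(source)
            .or_default()
            .push(Some((did.clone(), rkey)));
        self.links
            .entry(did)
            .or_default()
            .entry(repo)
            .or_default()
            .push((path, target));
    }

    /// Tombstone one instance of (did, rkey) under target/source: if the same user
    /// linked the same target several times, we can't tell which record this delete
    /// refers to, so clearing an arbitrary matching entry has to be good enough.
    fn delete_link(&mut self, did: &Did, rkey: &RKey, source: &Source, target: &Target) {
        if let Some(linkers) = self
            .targets
            .get_mut(target)
            .and_then(|sources| sources.get_mut(source))
        {
            let needle = (did.clone(), rkey.clone());
            if let Some(slot) = linkers.iter_mut().find(|slot| slot.as_ref() == Some(&needle)) {
                *slot = None; // tombstone, never shift later linkers
            }
        }
    }
}

fn main() {
    // hypothetical values, only to show the call shape
    let mut store = MemStoreSketch::default();
    store.add_link(
        "did:plc:alice".into(),
        "3kabc".into(),
        "app.example.like:3kabc".into(),
        ".subject.uri".into(),
        "app.example.like@.subject.uri".into(),
        "at://did:plc:bob/post/1".into(),
    );
    store.delete_link(
        &"did:plc:alice".to_string(),
        &"3kabc".to_string(),
        &"app.example.like@.subject.uri".to_string(),
        &"at://did:plc:bob/post/1".to_string(),
    );
}
```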
link_aggregator/src/storage/mod.rs  +416 -29
link_aggregator/src/storage/rocks_store.rs  +179 -42
 db: Arc<DBWithThreadMode<MultiThreaded>>, // TODO: move seqs here (concat merge op will be fun)
+impl<Orig: Clone, IdVal: IdTableValue, const WITH_REVERSE: bool> IdTable<Orig, IdVal, WITH_REVERSE>
 eprintln!("about to blow up because a linked target apparently does not have us in its dids.");
 // use a separate batch for all their links, since it can be a lot and make us crash at around 1GiB batch size.
 // this should still hopefully be crash-safe, as long as we don't actually delete the DidId entry until after all links are cleared.
+eprintln!("failed to look up did_value from did_id {did_id:?}: {did:?}: data consistency bug?");
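The batching note is the interesting constraint here: one account's deletions can outgrow a single RocksDB WriteBatch (~1GiB was the observed crash point), and the DidId entry has to outlive the link deletions for crash-safety. Below is a generic sketch of that pattern with the rust rocksdb crate, not the actual rocks_store code; the column family names, key iterator, and 10_000 chunk size are invented.

```rust
use std::sync::Arc;

use rocksdb::{DBWithThreadMode, MultiThreaded, WriteBatch};

/// Delete all of an account's link keys in bounded batches, then the account key itself.
/// Assumed layout: link keys in a "links" column family, account marker in "did_ids";
/// both names and the chunk size are illustrative, not the real schema.
fn delete_account(
    db: &Arc<DBWithThreadMode<MultiThreaded>>,
    did_id_key: &[u8],
    link_keys: impl Iterator<Item = Vec<u8>>,
) -> Result<(), rocksdb::Error> {
    let links_cf = db.cf_handle("links").expect("missing 'links' column family");
    let dids_cf = db.cf_handle("did_ids").expect("missing 'did_ids' column family");

    let mut batch = WriteBatch::default();
    for key in link_keys {
        batch.delete_cf(&links_cf, key);
        // flush periodically so one account with millions of links can't build a
        // single enormous batch (the ~1GiB failure mode mentioned above)
        if batch.len() >= 10_000 {
            db.write(std::mem::take(&mut batch))?;
        }
    }
    db.write(batch)?;

    // only now remove the DidId entry: if we crashed earlier it is still present,
    // so the cleanup can simply be re-run (re-deleting some keys is harmless)
    let mut last = WriteBatch::default();
    last.delete_cf(&dids_cf, did_id_key);
    db.write(last)
}
```

Because the DidId entry goes last, a crash partway through leaves the account still marked for cleanup rather than half-orphaned, which is the crash-safety property the comment above is relying on.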