Monorepo for wisp.place, a static site hosting service built on top of the AT Protocol.

client error when uploading site via cli #3

open
opened by jbc.lol

I am planning to add a wisp.place mirror of my site and have adapted the deploy script in my existing CI workflow. It says 'client error' after about a minute of working and doesn't continue.

[screenshot of workflow output saying 'client error']

https://github.com/jbcarreon123/jbsite4/actions/runs/19222990750/job/54944356251 here's the workflow (sorry im still on gh, they have powerful ci servers lol)

Click the 'deploy to wisp' step. I have it continue on failure so a mirror that is down won't fail the CI.

How many files does your site have? I saw a lot of blobs when I scrolled through. This might be slightly out of my control: the manifest just dumps all the files into one record, and sometimes the PDS doesn't like parsing a record that's multiple megabytes. I was planning on using zip bundles for sites like this and only tracking those, but the risk there is that I can't ensure file integrity on my backend. It should be fine, though.
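One way to keep integrity checks with bundle-only tracking (a sketch of the general technique, not necessarily what wisp.place does) is to record a per-file digest alongside the bundle and re-check it when unpacking:

```python
import hashlib

def file_digests(files: dict[str, bytes]) -> dict[str, str]:
    """Map each path to a SHA-256 hex digest, recorded at upload time."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in files.items()}

def verify(files: dict[str, bytes], recorded: dict[str, str]) -> bool:
    """True if every unpacked file still matches its recorded digest."""
    return file_digests(files) == recorded
```

The digest map is small enough to live in the record even when the files themselves are offloaded to a bundle.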

My initial proof of concept worked this way, but I was wary of ZIP bombs coming in from malevolent users off the firehose (the firehose in general is just SSRF hell). I think I've figured out how to handle it safely, though, so it shouldn't really be a problem.
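A minimal sketch of the kind of pre-extraction guard that makes firehose-delivered ZIPs safer to handle; the caps are illustrative defaults, not wisp.place's actual limits:

```python
import io
import zipfile

def check_zip(data: bytes, max_entries: int = 2000,
              max_total: int = 200 * 1024 * 1024,
              max_ratio: int = 100) -> None:
    """Raise ValueError if the archive looks like a ZIP bomb.

    Note: sizes come from the central directory and can be forged,
    so real extraction should still enforce a byte cap while streaming.
    """
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        infos = zf.infolist()
        if len(infos) > max_entries:
            raise ValueError("too many entries")
        total = sum(i.file_size for i in infos)
        if total > max_total:
            raise ValueError("uncompressed size too large")
        compressed = sum(i.compress_size for i in infos) or 1
        if total / compressed > max_ratio:
            raise ValueError("suspicious compression ratio")
```

Checking the declared entry count, total uncompressed size, and compression ratio up front rejects most bombs before any bytes are inflated.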

Hm, I see 360ish. That should be fine, and it did deploy for me:

https://pdsls.dev/at://did:plc:ttdrpj45ibqunmfhdsb4zdwq/place.wisp.fs/jbsite4 https://sites.wisp.place/did:plc:ttdrpj45ibqunmfhdsb4zdwq/jbsite4 (I didn't build it, I just dumped the entire repo)

I wonder if it's because it takes so long that either GH drops the connection or jacquard somehow crashes.

What's your PDS?

My site is ~100MB, including all the images and stuff

yeah it doesn't work on the dashboard either:

[screenshot: 'request entity too large' error]

I think it's because my PDS is behind Cloudflare.

Yeah, that would do it; Cloudflare limits uploads to 100MB. I need to figure out batching, but if a single blob is bigger than 100MB the PDS will reject it too, unless you raise the limit with PDS_BLOB_UPLOAD_LIMIT.
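For anyone self-hosting, the override mentioned above goes in the PDS's .env; the value below is just an example (~150 MB in bytes), and Cloudflare's own 100MB proxy limit still applies in front of the PDS regardless:

```env
# Example only: raise the blob upload cap to ~150 MB (value in bytes)
PDS_BLOB_UPLOAD_LIMIT=157286400
```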

The issue here isn't that, though; it's that records can only be 150kB before the PDS rejects them. I'm working on implementing a subfs lexicon to offload large directories into.
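The kind of splitting that implies could look roughly like this (a hypothetical helper, not wisp.place's actual subfs lexicon; JSON size is used as a stand-in for the record's real wire encoding):

```python
import json

RECORD_LIMIT = 150 * 1024  # the per-record cap mentioned above

def chunk_manifest(entries: list[dict], limit: int = RECORD_LIMIT) -> list[list[dict]]:
    """Greedily pack manifest entries (e.g. {"path": ..., "cid": ...})
    into groups that each stay under `limit` bytes when encoded."""
    records, current = [], []
    for entry in entries:
        candidate = current + [entry]
        if len(json.dumps(candidate).encode()) > limit and current:
            records.append(current)  # flush the full record
            current = [entry]
        else:
            current = candidate
    if current:
        records.append(current)
    return records
```

Each group would then become its own record, with the top-level manifest pointing at the sub-records instead of listing every file inline.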

Should be fixed now, give it a try.

still doesn't work

[image, I can't write good alt text right now]

Uploading the artifact on the dashboard seems to work, but it doesn't seem to work with my subpaths (especially _astro/).

Oh yeah, I should've mentioned I fixed that on the site side; I'll have the CLI split the record soon too.

The CLI should have this fixed now, I think.

OK, on the latest build it's now fixed! Thanks!

investigating, sorry about that

I might already have a fix, but I didn't push it because I was waiting for jacquard to fix an issue over there. I might just do my hacky workaround for now.

https://files.catbox.moe/jezhn5.png Yeah, I have this fixed already; I'll have updated binaries up around tomorrow at the latest.

Give it a try now; I pushed v0.4.0 (it has --spa and --directory as settings for serve and deploy too).

I'm unable to reproduce this, even on GitHub: https://github.com/WaveringAna/jbsite4/actions/runs/19907535627/job/57068692085

Are you self-hosting your PDS? Can you share the .env you have set for it? I think the request may be failing somewhere along the way.


Participants 2
AT URI
at://did:plc:l2wisafcekcguy6kq627e5a3/sh.tangled.repo.issue/3m5b2fsejhs22