commits
Signed-off-by: oppiliappan <me@oppi.li>
Signed-off-by: oppiliappan <me@oppi.li>
Signed-off-by: oppiliappan <me@oppi.li>
Signed-off-by: oppiliappan <me@oppi.li>
allows the spindle to dynamically configure the knots it is listening
to.
Signed-off-by: oppiliappan <me@oppi.li>
- add spindle to repo
- fix spindle.member lexicon field name
Signed-off-by: oppiliappan <me@oppi.li>
Signed-off-by: Anirudh Oppiliappan <anirudh@tangled.sh>
Signed-off-by: Anirudh Oppiliappan <anirudh@tangled.sh>
Signed-off-by: oppiliappan <me@oppi.li>
Signed-off-by: oppiliappan <me@oppi.li>
Signed-off-by: oppiliappan <me@oppi.li>
Signed-off-by: oppiliappan <me@oppi.li>
During setup, we register cleanup functions which get executed at the
end of the workflow goroutine (deferred exec of DestroyWorkflow).
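Roughly the shape this takes; names other than DestroyWorkflow are
assumptions, not the actual engine API:

    // cleanups collected while the workflow is being set up
    type workflow struct {
        cleanups []func() error
    }

    // whoever creates a resource during setup registers its teardown here
    func (w *workflow) onDestroy(fn func() error) {
        w.cleanups = append(w.cleanups, fn)
    }

    // DestroyWorkflow runs the registered cleanups in reverse order,
    // mirroring defer semantics
    func (w *workflow) DestroyWorkflow() {
        for i := len(w.cleanups) - 1; i >= 0; i-- {
            _ = w.cleanups[i]()
        }
    }

    func runWorkflow(w *workflow) {
        // executed at the end of the workflow goroutine
        defer w.DestroyWorkflow()
        // setup and step execution happen here
    }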
Signed-off-by: Anirudh Oppiliappan <anirudh@tangled.sh>
As it sees use in both spindle and knotserver.
Signed-off-by: Anirudh Oppiliappan <anirudh@tangled.sh>
Signed-off-by: Anirudh Oppiliappan <anirudh@tangled.sh>
Logs are streamed for each running pipeline over a websocket at
/logs/{pipelineID}. engine.TailStep demuxes stdout and stderr from the
container's logs and pipes them out to corresponding stdout and stderr
channels. These channels are maintained inside the engine's container
struct, keyed by pipeline ID and protected by a read/write mutex.
engine.LogChannels returns the stdout/stderr channels as receive-only if
the pipeline is known to exist.
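A rough sketch of the bookkeeping; the struct layout and field names are
assumptions, only TailStep and LogChannels come from this change:

    // per-pipeline stdout/stderr channels, guarded by mu (uses sync)
    type container struct {
        mu   sync.RWMutex
        logs map[string]*stepLogs // keyed by pipeline ID
    }

    type stepLogs struct {
        stdout chan string
        stderr chan string
    }

    // LogChannels hands the channels out receive-only if the pipeline
    // is known to exist
    func (c *container) LogChannels(pipelineID string) (<-chan string, <-chan string, bool) {
        c.mu.RLock()
        defer c.mu.RUnlock()
        l, ok := c.logs[pipelineID]
        if !ok {
            return nil, nil, false
        }
        return l.stdout, l.stderr, true
    }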
Signed-off-by: Anirudh Oppiliappan <anirudh@tangled.sh>
Signed-off-by: Anirudh Oppiliappan <anirudh@tangled.sh>
Signed-off-by: Anirudh Oppiliappan <anirudh@tangled.sh>
Signed-off-by: Anirudh Oppiliappan <anirudh@tangled.sh>
Signed-off-by: Anirudh Oppiliappan <anirudh@tangled.sh>
Signed-off-by: Anirudh Oppiliappan <anirudh@tangled.sh>
The engine package currently uses the docker client to set up the
pipeline and execute steps. The flow is like so (a rough sketch follows
below):
- set up a pipeline network for all steps to join
- create a volume for the nix store (to persist packages across steps)
- create a volume for the workspace directory
- build a nixery.dev URL with the packages we want in the container
- execute each step command in a new container using the same image
It's still pretty unfinished. Things to be done:
- support for other registries; currently only works with nixpkgs
- custom nixery URL
- ... a lot more that I'm forgetting now
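A minimal sketch of that flow; the helper functions stand in for the
docker client calls and are not the engine's actual API:

    // Step is a placeholder for a single workflow step.
    type Step struct{ Command []string }

    func runPipeline(ctx context.Context, id string, packages []string, steps []Step) {
        // image is a nixery.dev reference whose path components are the
        // packages baked into every step container, e.g. nixery.dev/shell/git/go
        image := "nixery.dev/" + strings.Join(append([]string{"shell"}, packages...), "/")

        network := createNetwork(ctx, "pipeline-"+id)   // joined by every step
        nixStore := createVolume(ctx, "nix-store-"+id)  // persists packages across steps
        workspace := createVolume(ctx, "workspace-"+id) // shared working directory

        for _, step := range steps {
            // each step command runs in a fresh container from the same
            // image, attached to the network with both volumes mounted
            runStep(ctx, image, network, nixStore, workspace, step.Command)
        }
    }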
Signed-off-by: Anirudh Oppiliappan <anirudh@tangled.sh>
Signed-off-by: Anirudh Oppiliappan <anirudh@tangled.sh>
Signed-off-by: Anirudh Oppiliappan <anirudh@tangled.sh>
Signed-off-by: Anirudh Oppiliappan <anirudh@tangled.sh>
Signed-off-by: oppiliappan <me@oppi.li>
Signed-off-by: oppiliappan <me@oppi.li>
Signed-off-by: oppiliappan <me@oppi.li>
Signed-off-by: oppiliappan <me@oppi.li>
there exists a git flag to generate patches for a single commit: -1. use
this instead of calculating the parent when preparing format-patches.
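for example, instead of computing the parent and passing a range:

    git format-patch -1 <commit>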
Signed-off-by: oppiliappan <me@oppi.li>
the rev-list approach to building format-patch internally performs
merge-base calculations.
Signed-off-by: oppiliappan <me@oppi.li>
the previous commit iterator approach was faulty in cases of merge
commits, as it only used the first parent returned by .Parents().
this also makes use of the --no-merges flag when preparing commits for
format-patch.
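e.g. when listing the commits that will end up in the patch:

    git rev-list --no-merges <base>..<head>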
Signed-off-by: oppiliappan <me@oppi.li>
when calling rev-list, HEAD was passed as a default argument:

    git rev-list HEAD --count

but HEAD is only populated when we perform git.Open; it is zeroed out
when we perform git.PlainOpen:

    git rev-list 00000... --count

this causes rev-list to exit with an error when used in repos that are
git.PlainOpen'd.
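a sketch of the general shape of the fix, assuming go-git for opening
the repo; commitCount and revListCount are illustrative names (uses
github.com/go-git/go-git/v5 as git, os/exec, strconv, strings):

    func commitCount(path string) (int, error) {
        repo, err := git.PlainOpen(path)
        if err != nil {
            return 0, err
        }
        // resolve HEAD from the on-disk ref instead of relying on a
        // field that may be zeroed
        head, err := repo.Head()
        if err != nil {
            return 0, err
        }
        return revListCount(path, head.Hash().String())
    }

    func revListCount(path, rev string) (int, error) {
        out, err := exec.Command("git", "-C", path, "rev-list", rev, "--count").Output()
        if err != nil {
            return 0, err
        }
        return strconv.Atoi(strings.TrimSpace(string(out)))
    }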
Signed-off-by: oppiliappan <me@oppi.li>
similar to the jetstream consumer, we now ingest events from every known
knot.
Signed-off-by: oppiliappan <me@oppi.li>
Signed-off-by: oppiliappan <me@oppi.li>
tracks commits per user by day
Signed-off-by: oppiliappan <me@oppi.li>
Signed-off-by: oppiliappan <me@oppi.li>
consumers can configure a cursor-store, where cursors of individual
event sources are stored. the module provides an in-memory store and a
redis-backed store.
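a sketch of the shape such a store takes; the interface and the
implementation here are assumptions, not the module's actual API:

    // CursorStore persists the last-seen cursor per event source.
    type CursorStore interface {
        Set(source string, cursor int64) error
        Get(source string) (int64, error)
    }

    // in-memory implementation; the redis-backed store satisfies the
    // same interface (uses sync)
    type memStore struct {
        mu sync.Mutex
        m  map[string]int64
    }

    func (s *memStore) Set(source string, cursor int64) error {
        s.mu.Lock()
        defer s.mu.Unlock()
        s.m[source] = cursor
        return nil
    }

    func (s *memStore) Get(source string) (int64, error) {
        s.mu.Lock()
        defer s.mu.Unlock()
        return s.m[source], nil
    }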
Signed-off-by: oppiliappan <me@oppi.li>
- EventSource: a knot's identity (its hostname currently)
- Message: a message struct that follows the event structure to
deserialize just rkey and nsid upfront; this could allow consumers to
filter by nsid or configure cursor ranges to listen to (sketched below)
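a rough sketch of that partial decoding (field names beyond rkey and
nsid are assumptions; uses encoding/json):

    // EventSource identifies where a message came from; currently a
    // knot's hostname.
    type EventSource struct {
        Host string
    }

    // Message decodes only rkey and nsid upfront; the full payload stays
    // raw until a consumer decides it wants it.
    type Message struct {
        Rkey string `json:"rkey"`
        Nsid string `json:"nsid"`
        raw  json.RawMessage
    }

    func parseMessage(data []byte) (Message, error) {
        var m Message
        if err := json.Unmarshal(data, &m); err != nil {
            return Message{}, err
        }
        m.raw = data
        return m, nil
    }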
Signed-off-by: oppiliappan <me@oppi.li>
Signed-off-by: oppiliappan <me@oppi.li>
the initial commit does not have a parent commit and should be excluded
from the signature check.
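the skip can be as small as this, assuming a go-git commit object in the
verification loop:

    // the initial commit has no parent, so leave it out of the check
    if commit.NumParents() == 0 {
        continue
    }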
Signed-off-by: oppiliappan <me@oppi.li>
this was incorrectly adding the current pull into the format-patch twice.
Signed-off-by: oppiliappan <me@oppi.li>
Signed-off-by: oppiliappan <me@oppi.li>
takes a yaml-based workflow and compiles it down to a
sh.tangled.pipeline object, after performing some basic analysis.
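a minimal sketch of the compiler's shape; apart from the
sh.tangled.pipeline output, the names here (Workflow, Pipeline, analyze,
lower) are assumptions:

    // Compile parses a yaml workflow, runs some basic checks, and lowers
    // it into the pipeline record (uses gopkg.in/yaml.v3).
    func Compile(src []byte) (*Pipeline, error) {
        var wf Workflow
        if err := yaml.Unmarshal(src, &wf); err != nil {
            return nil, err
        }
        if err := analyze(&wf); err != nil {
            return nil, err
        }
        return lower(&wf), nil
    }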
Signed-off-by: oppiliappan <me@oppi.li>
in order to work around the limitations of not having github actions'
marketplace, this approach opts to use nixpkgs as a source for packages.
alternate registries can be specified too; these are expected to be nix
flakes that expose packages.
this takes a page out of replit's approach to supplying packages to
their devshells; however, instead of using nix syntax, we use only a
flake + package combo, and the compiler will simply convert this into a
step like so:

    nix profile install flake#package
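for instance, requesting the go package from nixpkgs (an illustrative
example, not taken from an actual workflow) would compile down to:

    nix profile install nixpkgs#go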
Signed-off-by: oppiliappan <me@oppi.li>
the `db.Op` event is now replaced by the `refUpdate` event. all
knot-generated events will be stored in the events db as raw json.
Signed-off-by: oppiliappan <me@oppi.li>
Signed-off-by: oppiliappan <me@oppi.li>
move the parsing logic into a separate file
Signed-off-by: oppiliappan <me@oppi.li>
Signed-off-by: oppiliappan <me@oppi.li>
Signed-off-by: oppiliappan <me@oppi.li>