I've been waiting to write this patch for a long time. I firmly believe
that the idea of "exported resources" was truly a brilliant one, but one
that was never properly understood, even by its original inventors! This
patch set aims to show how it should have been done.
The main differences are:
* Real-time modelling, since "once per run" makes no sense.
* Filter with code/functions, not language syntax (see the sketch below).
* Directed exporting to limit the intended recipients.
The next step is to add more "World" reading and filtering functions to
make it easy and expressive to select which resources to collect!
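To make the filtering idea concrete, here is a rough Go sketch of what
"filter with code, not syntax" could look like. The Exported type and
its fields are hypothetical stand-ins for illustration, not the actual
mgmt API:

    package main

    import (
        "fmt"
        "strings"
    )

    // Exported is a hypothetical record of an exported resource.
    type Exported struct {
        Kind string // resource kind, e.g. "file"
        Name string // resource name
        Host string // which host exported it
    }

    // filter keeps only the resources that the predicate accepts.
    func filter(in []Exported, pred func(Exported) bool) []Exported {
        var out []Exported
        for _, e := range in {
            if pred(e) {
                out = append(out, e)
            }
        }
        return out
    }

    func main() {
        all := []Exported{
            {"file", "/etc/motd", "web1"},
            {"file", "/etc/motd", "db1"},
            {"svc", "nginx", "web2"},
        }
        // Collect only the file resources exported by web hosts.
        webFiles := filter(all, func(e Exported) bool {
            return e.Kind == "file" && strings.HasPrefix(e.Host, "web")
        })
        fmt.Println(webFiles)
    }

The point is that any predicate you can write as code works here, so the
selection logic isn't limited by whatever the language syntax happens to
support.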
These were really just stubs so that I could prove out the reactive
model very early on, and I don't think they're used anywhere.
I'm also not really using the yamlgraph frontend. If someone wants any
of that code, step up, or it will rot even more.
This is a basic implementation of a detection method for whether mgmt
is running in a virtualized environment. We achieve this by doing two
types of checks: on one hand, we check whether the CPU flags confirm
the presence of a virtualized environment; on the other, we check
whether the presence of known DMI-related files (and their contents)
confirms that we're inside one. Either of these situations will cause
the function to return true, with the default case being false. All of
these checks are relatively naive and can be improved by looking at the
main inspiration for this implementation, which was systemd's own check
for the presence of virtualization.
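For illustration, here is a naive Go sketch of the two checks described
above. It is not the exact mgmt (or systemd) logic; the file paths, the
flag name, and the vendor strings are the common Linux ones, but treat
the details as assumptions:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cpuFlagsSayVirt reports whether /proc/cpuinfo lists the
    // "hypervisor" CPU flag, which most hypervisors set for guests.
    func cpuFlagsSayVirt() bool {
        b, err := os.ReadFile("/proc/cpuinfo")
        if err != nil {
            return false
        }
        return strings.Contains(string(b), "hypervisor")
    }

    // dmiSaysVirt checks well-known DMI files for vendor strings that
    // common hypervisors leave behind.
    func dmiSaysVirt() bool {
        vendors := []string{"KVM", "QEMU", "VMware", "VirtualBox", "Xen"}
        files := []string{
            "/sys/class/dmi/id/product_name",
            "/sys/class/dmi/id/sys_vendor",
        }
        for _, f := range files {
            b, err := os.ReadFile(f)
            if err != nil {
                continue // a missing file is not an error here
            }
            for _, v := range vendors {
                if strings.Contains(string(b), v) {
                    return true
                }
            }
        }
        return false
    }

    func main() {
        // Either check returning true means we're (probably) inside a
        // virtualized environment; the default is false.
        fmt.Println(cpuFlagsSayVirt() || dmiSaysVirt())
    }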
This provides a new kind of "world" backend, one that runs etcd over an
SSH connection. This is useful for situations where you want to run an
etcd cluster somewhere for clients across the net, but where you don't
want to expose the ports publicly.
If SSH authentication is set up correctly (using public keys), the etcd
client connection will be tunnelled over SSH.
This patch does not yet support deploys over SSH, but that should be
fixed in the future as the world code gets cleaned up more.
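As a sketch of the tunnelling idea (this is not the actual code in this
patch), an etcd client can be routed through an SSH connection by
swapping in a custom gRPC dialer. The hostname, user, and key path
below are made up for illustration:

    package main

    import (
        "context"
        "net"
        "os"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
        "golang.org/x/crypto/ssh"
        "google.golang.org/grpc"
    )

    func main() {
        key, err := os.ReadFile("/home/user/.ssh/id_ed25519")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        sshClient, err := ssh.Dial("tcp", "server.example.com:22",
            &ssh.ClientConfig{
                User: "user",
                Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
                // NOTE: verify the host key properly in real code!
                HostKeyCallback: ssh.InsecureIgnoreHostKey(),
            })
        if err != nil {
            panic(err)
        }
        defer sshClient.Close()

        // Route every gRPC connection through the SSH tunnel instead
        // of dialling the etcd port directly over the net.
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"127.0.0.1:2379"}, // as seen from the server
            DialTimeout: 10 * time.Second,
            DialOptions: []grpc.DialOption{
                grpc.WithContextDialer(
                    func(ctx context.Context, addr string) (net.Conn, error) {
                        return sshClient.Dial("tcp", addr)
                    }),
            },
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()
    }

This way the etcd port never needs to be exposed publicly; only the SSH
port does.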
A subtlety about the engine is that while it guarantees CheckApply
happens in the listed edge-based dependency order, it doesn't stop
Watch from starting up in whatever order it wants to. As a result, we
can error prematurely, since the docker service isn't running yet; it
may in fact still be in the process of getting installed and started by
mgmt when we try to use this resource! So let it error once for free,
and wait for CheckApply to get going before we start again.
Keep in mind, Watch has to use the .Running() method once to tell
CheckApply to do its initial event. So this concurrency is complex!
It's unclear if this is a bug in mgmt or not, but I'm leaning towards
not, particularly since there isn't an obvious way to fix it.
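As a generic approximation of the "one free error" idea (this is not
the engine's actual code, and the connect helper is a hypothetical
stand-in for talking to dockerd), the pattern looks something like:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // connect is a hypothetical stand-in for opening a connection to a
    // service (like dockerd) that might not be running yet.
    func connect(ctx context.Context) error {
        return errors.New("service not running yet")
    }

    // watch tolerates exactly one startup failure: the service it
    // watches may still be getting installed by an earlier resource's
    // CheckApply when we first come up.
    func watch(ctx context.Context) error {
        retried := false
        for {
            if err := connect(ctx); err != nil {
                if retried {
                    return err // a second failure is a real error
                }
                retried = true
                fmt.Println("watch: one error for free:", err)
                select {
                case <-time.After(5 * time.Second):
                    continue // try again after a grace period
                case <-ctx.Done():
                    return ctx.Err()
                }
            }
            // ... the normal event loop would run here ...
            return nil
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(),
            12*time.Second)
        defer cancel()
        fmt.Println(watch(ctx))
    }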
They made the assumption that there would be a working docker service
installed at Init, which could not be guaranteed. Also, use the
internal metaparameter timeout feature instead of private counters.
The GAPI API is a bit of a mess, but I think this seems to work for
standalone runs and also for deploys. Hopefully I didn't add any
unnecessary dead code here, but that's archaeology for another day.
This adds some simplistic configuration management / provisioning
functionality to this virt:builder resource, which makes it easier to
kick off any special behaviour that we might want to build.
Change the default "wait" state: if you run the empty frontend when
there's already an available deploy waiting, you almost certainly want
to start running it right away.
Example:
mgmt etcd
mgmt run --hostname h1 --no-server --tmp-prefix --seeds=http://127.0.0.1:2379 empty
mgmt run --hostname h2 --no-server --tmp-prefix --seeds=http://127.0.0.1:2379 empty
mgmt deploy --no-git --seeds=http://127.0.0.1:2379 lang examples/lang/hello0.mcl
mgmt run --hostname h3 --no-server --tmp-prefix --seeds=http://127.0.0.1:2379 empty
In fact, you don't even need to start up etcd first for this to all
work.
There are some rare situations with completely symmetrical graphs which
mean that there isn't a "more correct" error. This is due to the
annoying map iteration non-determinism, and so instead of fighting to
remove every bit of that, let's just accept more than one error here.
I think this makes it more deterministic, but I'm not sure it matters:
we compare based on the .String() output, and some nodes have the same
value, so the result ends up depending on the order they're added to
the graph data structure, but then we lose this information since it's
a map. Yuck.
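One way to live with this in tests is to accept any member of a set of
equally-correct errors. A minimal Go sketch (the error strings here are
made up for illustration):

    package main

    import (
        "errors"
        "fmt"
    )

    func main() {
        // Pretend this came from checking a fully symmetrical graph.
        err := errors.New("edge: v2 -> v1: not a DAG")

        // With a symmetrical graph and randomized map iteration,
        // either of these is an equally "correct" error, so we accept
        // both instead of fighting the non-determinism.
        acceptable := map[string]struct{}{
            "edge: v1 -> v2: not a DAG": {},
            "edge: v2 -> v1: not a DAG": {},
        }
        if _, ok := acceptable[err.Error()]; ok {
            fmt.Println("PASS: got one of the acceptable errors")
        } else {
            fmt.Println("FAIL: unexpected error:", err)
        }
    }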