I've been waiting to write this patch for a long time. I firmly believe
that the idea of "exported resources" was a truly brilliant one, but one
that was never properly understood, even by its original inventors! This
patch set aims to show how it should have been done.
The main differences are:
* Real-time modelling, since "once per run" makes no sense.
* Filter with code/functions, not language syntax.
* Directed exporting to limit the intended recipients.
The next step is to add more "World" reading and filtering functions to
make it easy and expressive to select which resources to collect!
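Here's a rough golang sketch of the filtering idea; the type and
function names below are hypothetical and purely illustrative, not the
actual World API:

package main

import "fmt"

// Hypothetical sketch: these names are illustrative, not the real API.
type ExportedResource struct {
	Kind string   // eg: "file"
	Name string
	Host string   // which host exported it
	To   []string // directed export: intended recipients (empty means anyone)
}

// CollectFiltered filters with a plain function instead of special
// language syntax: any predicate over the exported data will do.
func CollectFiltered(all []ExportedResource, keep func(ExportedResource) bool) []ExportedResource {
	var out []ExportedResource
	for _, r := range all {
		if keep(r) {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	all := []ExportedResource{
		{Kind: "file", Name: "/tmp/motd", Host: "h1", To: []string{"h2"}},
		{Kind: "file", Name: "/tmp/other", Host: "h3"},
	}
	// collect only the resources that were directed at us ("h2")...
	mine := CollectFiltered(all, func(r ExportedResource) bool {
		for _, h := range r.To {
			if h == "h2" {
				return true
			}
		}
		return false
	})
	fmt.Println(mine)
}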
This provides a new kind of "world" backend, one that runs etcd over an
SSH connection. This is useful for situations where you want to run an
etcd cluster somewhere for clients across the net, but where you don't
want to expose the ports publicly.
If SSH authentication is set up correctly (using public keys) then the
etcd client traffic will be tunnelled over SSH.
This patch does not yet support deploys over SSH, but that should be
fixed in the future as the world code gets cleaned up more.
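For the curious, here's a rough golang sketch of the general tunnelling
technique, not the actual code in this patch; the host name, user and
import path are placeholders that may differ:

// Rough sketch of the general technique only; not the code in this patch.
package main

import (
	"context"
	"net"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3" // exact import path depends on the etcd version
	"golang.org/x/crypto/ssh"
	"google.golang.org/grpc"
)

// sshTunnelledEtcd connects an etcd client through an existing SSH login,
// so the etcd ports never need to be exposed publicly.
func sshTunnelledEtcd(signer ssh.Signer) (*clientv3.Client, error) {
	cfg := &ssh.ClientConfig{
		User:            "mgmt",                                   // placeholder user
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)}, // public key auth
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),              // verify properly in real use!
		Timeout:         10 * time.Second,
	}
	sshClient, err := ssh.Dial("tcp", "server.example.com:22", cfg) // placeholder host
	if err != nil {
		return nil, err
	}
	// Dial etcd through the SSH connection instead of directly over the net.
	dialer := func(ctx context.Context, addr string) (net.Conn, error) {
		return sshClient.Dial("tcp", addr)
	}
	return clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // as seen from the remote host
		DialOptions: []grpc.DialOption{grpc.WithContextDialer(dialer)},
	})
}

func main() {
	// A real program would load a private key and build an ssh.Signer here.
}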
The GAPI API is a bit of a mess, but I think this works for both
standalone runs and deploys. Hopefully I didn't add any unnecessary dead
code here, but that's archaeology for another day.
Change the default "wait" state for the case where you run the empty
frontend and there's already an available deploy waiting. You almost
certainly want to start running it right away.
Example:
mgmt etcd
mgmt run --hostname h1 --no-server --tmp-prefix --seeds=http://127.0.0.1:2379 empty
mgmt run --hostname h2 --no-server --tmp-prefix --seeds=http://127.0.0.1:2379 empty
mgmt deploy --no-git --seeds=http://127.0.0.1:2379 lang examples/lang/hello0.mcl
mgmt run --hostname h3 --no-server --tmp-prefix --seeds=http://127.0.0.1:2379 empty
In fact, you don't even need to start up etcd first for this to all
work.
Instead of constantly making these updates, let's just remove the year,
since things are stored in git anyway, and this is not an actual modern
legal risk anymore.
This is at least a stop-gap until we redo the whole filesystem API mess.
I think golang is partly to blame, because they don't have proper APIs
merged yet.
We want to be able to put useful scripts in $vardir type places, but if
the perms at the higher levels block this, then that can't work. The
top-level should always be more permissive, and then it grows more
restricted as we descend.
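As a minimal golang sketch of the idea (the paths and modes here are
just examples, not the exact ones mgmt uses):

package main

import "os"

func main() {
	// The top-level directory is more permissive, so that useful scripts
	// stored below it can actually be reached and run by other users...
	if err := os.MkdirAll("/var/lib/example/", 0755); err != nil { // example path
		panic(err)
	}
	// ...while deeper, more private state grows more restricted as we descend.
	if err := os.MkdirAll("/var/lib/example/h1/secret/", 0700); err != nil {
		panic(err)
	}
}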
With the recent merging of embedded package imports and the entry CLI
package, it is now possible for users to build mcl code into a single
binary. This additional permission makes it explicitly clear that this
is permitted, to make it easier for those users. The condition is
phrased so that the terms can be "patched" by the original author if
it's necessary for the project: for example, if the name of the language
(mcl) changes, a differently named new version appears, someone finds a
phrasing improvement or a legal loophole, or some other reasonable
circumstance arises. Now go write some beautiful embedded tools!
This moves over the cli `arg` struct tags, which are used to generate
and parse things on the command line. Furthermore, we then embed this data
directly in our more general parser struct so that we avoid duplication.
Finally, since the data shares a common struct type, we don't need to do
the manual field-by-field copying to pull things in!
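Roughly, the pattern looks like this; the field names here are made up
for illustration, but the `arg:"--flag"` tag format is the real one from
the library we now use (more on that below):

package main

import "github.com/alexflint/go-arg"

// RunArgs holds the cli flags; the `arg` tags drive both generation and
// parsing. The field names are illustrative, not the real ones.
type RunArgs struct {
	Hostname  string   `arg:"--hostname" help:"hostname to use"`
	NoServer  bool     `arg:"--no-server" help:"do not start the embedded server"`
	Seeds     []string `arg:"--seeds" help:"seed endpoints to connect to"`
	TmpPrefix bool     `arg:"--tmp-prefix" help:"use a temporary prefix"`
}

// Config is the more general parser struct; it embeds RunArgs directly,
// so no field-by-field copying is needed to pull the cli data in.
type Config struct {
	RunArgs // shared struct type: parsed cli values land here automatically

	// ... other internal (non-cli) fields would go here ...
}

func main() {
	config := &Config{}
	arg.MustParse(&config.RunArgs) // parse straight into the embedded struct
}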
The new version of the urfave/cli library is moving to generics, and
it's completely unclear to me why this is an improvement. Their new API
is very complicated to understand, which, for me, defeats the purpose of
golang.
In parallel, I needed to do some upcoming cli API refactoring, so this
was a good time to look into new libraries. After a review of the
landscape, I found the alexflint/go-arg library which has a delightfully
elegant API. It does have a few rough edges, but it's otherwise very
usable, and I think it would be straightforward to add features and fix
issues.
Thanks Alex!
I'm currently refactoring the CLI code. Unfortunately this means a
pretty big churn in the various GAPI frontends. Since nobody is actively
using the puppet frontend code, I'm removing it for now. If someone is
actively using it, and wants to either port it to the new API, or
sponsor the porting of it to the new API, I'm happy to allow it back in.
Sorry Felix, it was a fun idea, and I loved seeing it work, but I can't
personally afford the maintenance cost of having this in right now.
This should help us determine what steps need improving. As it turns
out, the autoedges step is by far the slowest. (Highly dependent on the
specific mcl code being used.)
The trend is clear though; note the units:
main: new graph took: 913.653µs
main: auto edges took: 9.273807153s
main: auto grouping took: 28.690819ms
main: send/recv building took: 566ns
main: new graph took: 779.255µs
main: auto edges took: 4.03670168s
main: auto grouping took: 37.682101ms
main: send/recv building took: 121.017µs
main: new graph took: 1.157479ms
main: auto edges took: 3.794132165s
main: auto grouping took: 49.732836ms
main: send/recv building took: 95.921µs
main: new graph took: 900.937µs
main: auto edges took: 7.206085s
main: auto grouping took: 25.508671ms
main: send/recv building took: 489ns
main: new graph took: 794.224µs
main: auto edges took: 4.313729756s
main: auto grouping took: 47.970533ms
main: send/recv building took: 207.62µs
main: new graph took: 884.49µs
main: auto edges took: 7.585529786s
main: auto grouping took: 24.327938ms
main: send/recv building took: 72.741µs
main: new graph took: 774.157µs
main: auto edges took: 2.827380129s
main: auto grouping took: 28.303023ms
main: send/recv building took: 85.246µs
main: new graph took: 746.841µs
main: auto edges took: 2.775868117s
main: auto grouping took: 33.11291ms
main: send/recv building took: 104.875µs
main: new graph took: 796.445µs
main: auto edges took: 2.71556122s
main: auto grouping took: 24.03827ms
main: send/recv building took: 106.414µs
main: new graph took: 1.217452ms
main: auto edges took: 2.908416104s
main: auto grouping took: 61.175916ms
main: send/recv building took: 92.328µs
main: new graph took: 807.894µs
main: auto edges took: 3.222089261s
main: auto grouping took: 40.032629ms
main: send/recv building took: 106.49µs
main: new graph took: 986.963µs
main: auto edges took: 3.538425263s
main: auto grouping took: 30.660849ms
main: send/recv building took: 99.74µs
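For reference, here's a minimal golang sketch of how per-step timings
like these can be collected; it's not the exact code used, just the
general shape:

package main

import (
	"fmt"
	"time"
)

// timed prints how long fn took, in the same style as the output above.
func timed(name string, fn func()) {
	start := time.Now()
	fn()
	fmt.Printf("main: %s took: %v\n", name, time.Since(start))
}

func main() {
	timed("auto edges", func() {
		// the expensive autoedges step would run here...
		time.Sleep(10 * time.Millisecond) // stand-in for real work
	})
}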
This pulls in the Send/Recv values from the previous graph so that our
Cmp functions are less likely to remake resources that haven't otherwise
changed. Unnecessary remakes can destroy the private state of a
resource, which can make certain operations impossible.
This expands the Local API with the first (and in theory, only ever) API
for reading and writing simple values. This is a coordination point for
resources and functions to share things directly.
This is a new API that is similar in spirit and plumbing to the World
API, but it is intended for all local machine operations and will likely
only ever have one implementation.
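The shape of such a value API might look roughly like this hypothetical
golang sketch; the method names and signatures are illustrative only,
not the real interface:

// Hypothetical sketch only; the real interface in mgmt may differ.
package local // illustrative package name

import "context"

// Local is the coordination point for resources and functions on the
// same machine to share simple values directly.
type Local interface {
	// ValueGet reads a simple named value that was previously stored.
	ValueGet(ctx context.Context, key string) (interface{}, error)

	// ValueSet stores a simple named value for others on this machine to read.
	ValueSet(ctx context.Context, key string, value interface{}) error
}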
There were a bunch of packages that weren't well documented. With the
recent splitting up of the lang package, I figured documenting them
would be helpful for new contributors who want to learn the structure of
the project.
The old system with vendor/ and git submodules worked great.
Unfortunately, FUD around git submodules seemed to scare people away,
and golang moved to a go.mod system that adds a new lock file format
instead of using the built-in git version. It's now almost impossible to
use modern golang without this, so we've switched.
So much for the golang compatibility promise-- it turns out it doesn't
apply to the useful parts that I actually care about, like this.
Thanks to frebib for his incredibly valuable contributions to this
patch. This snide commit message is mine alone.
This patch also mixes in some changes due to legacy golang, since we've
bumped the minimum version to 1.16 in the docs and tests.
Lastly, we had to disable some tests and fix up a few other misc things
to get this passing. We've definitely hit bugs in the go.mod system, and
our Makefile tries to work around those.
This moves to the newest etcd release, and also updates the imports to
the new go.etcd.io path. I think this is a bit of a pain, but might as
well get it done.
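In practice this is mostly a matter of updating the import lines,
roughly like this (the exact path suffix depends on the etcd release):

package main

// The old import path was "github.com/coreos/etcd/clientv3"; the newer
// releases live under go.etcd.io (the exact path varies with the version).
import clientv3 "go.etcd.io/etcd/clientv3"

func main() {
	_ = clientv3.Config{} // just proving the new import path compiles
}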
This adds the first reversible resource (file) and the necessary engine
API hooks to make it all work. This allows a special "reversed" resource
to be added to the subsequent graph in the stream when an earlier
version "disappears". This disappearance can happen if it was previously
in an if statement that then becomes false.
It might be wise to combine the use of this meta parameter with the use
of the `realize` meta parameter to ensure that your reversed resource
actually runs at least once, if there's a chance that it might be gone
for a while.
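As a purely hypothetical golang sketch of what such an engine hook could
look like (these placeholders are illustrative, not the real engine
API):

// Hypothetical sketch only; these placeholders are not mgmt's real engine API.
package engine // illustrative package name

// Res stands in for a resource in this sketch.
type Res interface {
	Name() string
}

// ReversibleRes is what a resource could implement so that, when it
// "disappears" from the stream (eg: its if statement became false), the
// engine can add a special "reversed" resource to a subsequent graph.
type ReversibleRes interface {
	Res

	// Reversed returns the resource to run in a later graph to undo this one.
	Reversed() (Res, error)
}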
This patch also adds a new test harness for testing resources. It
doesn't test the "live" aspect of resources, as it doesn't run Watch,
but it was designed to ensure CheckApply works as intended, and it runs
very quickly with a simplified timeline of happenings.
If running mgmt from a systemd unit, this enables the
STATE_DIRECTORY environment variable to be used for creating the
cache directory defined by StateDirectory= in the unit file. It
also enables the XDG_CACHE_HOME environment variable to be used.
If the user isn't root and the environment variable isn't set,
it will use the default XDG_CACHE_HOME directory.
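A minimal golang sketch of that lookup order (the real code may differ
in its defaults):

package main

import (
	"os"
	"path/filepath"
)

// cacheDir picks the cache directory: systemd's StateDirectory= first,
// then XDG_CACHE_HOME, then the default XDG location for non-root users.
func cacheDir() string {
	if d := os.Getenv("STATE_DIRECTORY"); d != "" {
		return d // set by systemd from StateDirectory= in the unit file
	}
	if d := os.Getenv("XDG_CACHE_HOME"); d != "" {
		return filepath.Join(d, "mgmt")
	}
	if os.Geteuid() != 0 {
		// the default XDG_CACHE_HOME location for a normal user
		return filepath.Join(os.Getenv("HOME"), ".cache", "mgmt")
	}
	return "/var/cache/mgmt/" // placeholder; the real default for root may differ
}

func main() {
	_ = cacheDir()
}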