This should ensure a sort order of longer (more specific) kinds first
when running the algorithm. We want to autogroup two or more
http:server:flag
resources together first, before the whole grouped resource gets pulled
into the http:server one.
Resources that can be grouped into the http:server resource must have
that prefix. Grouping is basically hierarchical, and without that common
prefix we'd have to special-case our grouping algorithm.
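As a tiny, self-contained illustration (this is not the real grouping
code, and the kind names are just examples), sorting candidate kinds
from longest to shortest gives exactly that ordering:

    // Sketch only: sort candidate kinds from longest to shortest so the
    // more specific http:server:flag resources group together before
    // anything is pulled into the shorter http:server kind.
    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    // canGroupInto returns true if a resource of kind child may group
    // into a resource of kind parent, based on the common prefix rule.
    func canGroupInto(child, parent string) bool {
        return strings.HasPrefix(child, parent+":")
    }

    func main() {
        kinds := []string{"http:server", "http:server:flag", "http:server:ui:input"}

        sort.Slice(kinds, func(i, j int) bool {
            return len(kinds[i]) > len(kinds[j]) // longer kinds first
        })
        fmt.Println(kinds)

        fmt.Println(canGroupInto("http:server:flag", "http:server")) // true
        fmt.Println(canGroupInto("http:server:flag", "http:client")) // false
    }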
Many years ago I built and demoed a prototype of a simple web ui with a
slider, and as you moved it left and right, it started up or shut down
some number of virtual machines.
The webui was standalone code, but the rough idea of having events from
a high-level overview flow into mgmt was what I wanted to test out. At
this stage, I didn't even have the language built yet. This prototype
helped convince me of the way a web ui would fit into everything.
Years later, I built an autogrouping prototype which looks quite similar
to what we have today. I recently picked it back up to polish it a bit
more. It's certainly not perfect, and might even be buggy, but it's
useful enough that it's worth sharing.
If I had more cycles, I'd probably consider removing the "store" mode,
and replacing it with the normal "value" system, but we would need the
resource "mutate" API if we wanted this. This would allow us to directly
change the "value" field, without triggering a graph swap, which would
be a lot less clunky than the "store" situation.
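Purely as a sketch of the idea, and with made-up names since no such API
exists yet, a hypothetical "mutate" trait might look something like
this:

    // Hypothetical sketch only: no such "mutate" API exists today. The
    // idea is that a resource could opt in to having a single field
    // (like the web ui "value") changed in place, without a graph swap.
    package main

    import "fmt"

    // MutableRes is the hypothetical trait.
    type MutableRes interface {
        Mutate(field string, value interface{}) error
    }

    // fakeValueRes stands in for a web ui value element in this sketch.
    type fakeValueRes struct {
        value string
    }

    // Mutate updates the value field directly on the running resource.
    func (obj *fakeValueRes) Mutate(field string, value interface{}) error {
        if field != "value" {
            return fmt.Errorf("unknown field: %s", field)
        }
        s, ok := value.(string)
        if !ok {
            return fmt.Errorf("expected a string")
        }
        obj.value = s
        return nil
    }

    func main() {
        var r MutableRes = &fakeValueRes{}
        if err := r.Mutate("value", "hello"); err != nil {
            fmt.Println(err)
        }
    }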
Of course I'd love to see a GTK version of this concept, but I figured
it would be more practical to have a web ui over HTTP.
One notable missing feature is that if the "web ui" itself changes
(rather than just a value changing), we need to offer the user a way to
reload it. It currently doesn't get an event for that, so don't confuse
your users. We also need to be better at validating "untrusted" input
here. There's also no major reason to use the "gin" framework; we should
probably redo this with the standard library alone, but it was easier
for me to push out something quick this way. We can optimize that later.
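For the curious, here is a rough sketch of what a standard library only
version could look like; the route and payload are invented for
illustration and are not the actual endpoints:

    // Sketch of serving the ui with only the standard library instead
    // of gin. The route and payload are made up for illustration.
    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()

        // Serve the static wasm/html assets from a local directory.
        mux.Handle("/", http.FileServer(http.Dir("./static/")))

        // A small JSON endpoint, standing in for a gin handler.
        mux.HandleFunc("/api/value", func(w http.ResponseWriter, r *http.Request) {
            if r.Method != http.MethodGet {
                http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
                return
            }
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(map[string]string{"value": "hello"})
        })

        log.Fatal(http.ListenAndServe(":8080", mux))
    }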
Lastly, this is all quite ugly since I'm not a very good web dev, so if
you want to make this polished, please do! The wasm code is also quite
terrible due to limitations in the compiler, and maybe one day when that
works better and doesn't constantly deadlock, we can improve it.
Instead of constantly making these updates, let's just remove the year
since things are stored in git anyways, and this is not an actual modern
legal risk anymore.
This extends the autogrouping API so that a child can easily get a
reference to the parent that it is autogrouped into. This can simplify the
API for some resources when it makes sense to allow them access to the
parent handle. Use sparingly and intelligently!
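To show the general shape (with hypothetical names, not the real engine
API), a grouped child that keeps a handle to its parent might look like
this:

    // Hypothetical names only; the real API lives in the engine. The
    // point is just the shape: a grouped child keeps a handle to the
    // parent it was grouped into, and exposes it on request.
    package main

    import "fmt"

    // Res is a stand-in for the real resource interface.
    type Res interface {
        Kind() string
    }

    // flagRes is a toy child resource that remembers its parent.
    type flagRes struct {
        parent Res // set by the engine when autogrouping happens
    }

    func (obj *flagRes) Kind() string { return "http:server:flag" }

    // SetParent is what the engine would call at autogrouping time.
    func (obj *flagRes) SetParent(p Res) { obj.parent = p }

    // Parent returns the parent handle. Use sparingly and intelligently!
    func (obj *flagRes) Parent() Res { return obj.parent }

    // serverRes is a toy parent resource.
    type serverRes struct{}

    func (obj *serverRes) Kind() string { return "http:server" }

    func main() {
        child := &flagRes{}
        child.SetParent(&serverRes{})
        fmt.Println(child.Parent().Kind()) // http:server
    }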
With the recent merging of embedded package imports and the entry CLI
package, it is now possible for users to build mcl code into a single
binary. This additional permission makes it explicitly clear that this
is permitted, in order to make it easier for those users. The condition
is phrased
so that the terms can be "patched" by the original author if it's
necessary for the project: for example, if the name of the language
(mcl) changes, if it gets a differently named new version, if someone
finds a phrasing improvement or a legal loophole, or in some other
reasonable circumstance. Now go write some beautiful embedded tools!
This improves the autogrouping algorithm to support hierarchical
autogrouping. It's not guaranteed to work if we replace the reachability
grouper with something more efficient, but it's good enough for now.
This ensures that docstring comments are wrapped to 80 chars. ffrank
seemed to be making this mistake far too often, and it's a silly thing
to look for manually. As it turns out, I've made it too, as have many
others. Now we have a test that checks for most cases. There are still a
few stray cases that aren't checked automatically, but this can be
improved upon if someone is motivated to do so.
Before anyone complains about the 80 character limit: this only checks
docstring comments, not source code length or inline source code
comments. There's no excuse for having docstrings that are badly
reflowed or over 80 chars, particularly if you have an automated test.
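As a minimal sketch of how such a check can work (this is not the actual
test in the repo), the standard go/parser can flag declaration doc
comments with lines over 80 chars:

    // Sketch only: parse a go file and flag any docstring (declaration
    // doc comment) line that is over 80 chars.
    package main

    import (
        "fmt"
        "go/ast"
        "go/parser"
        "go/token"
        "strings"
    )

    func main() {
        fset := token.NewFileSet()
        // The real test would walk every file in the tree; we parse one.
        f, err := parser.ParseFile(fset, "main.go", nil, parser.ParseComments)
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, decl := range f.Decls {
            var doc *ast.CommentGroup
            switch d := decl.(type) {
            case *ast.FuncDecl:
                doc = d.Doc
            case *ast.GenDecl:
                doc = d.Doc
            }
            if doc == nil {
                continue
            }
            for _, c := range doc.List {
                for _, line := range strings.Split(c.Text, "\n") {
                    if len(line) > 80 { // badly wrapped docstring
                        fmt.Printf("%v: docstring line over 80 chars\n", fset.Position(c.Pos()))
                    }
                }
            }
        }
    }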
This adds the first reversible resource (file) and the necessary engine
API hooks to make it all work. This allows a special "reversed" resource
to be added to the subsequent graph in the stream when an earlier
version "disappears". This disappearance can happen if it was previously
in an if statement that then becomes false.
It might be wise to combine the use of this meta parameter with the use
of the `realize` meta parameter to ensure that your reversed resource
actually runs at least once, if there's a chance that it might be gone
for a while.
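To sketch the general idea with hypothetical names (the real engine
hooks differ), a reversible resource is one that can hand the engine a
resource which undoes it:

    // Hypothetical names only: a resource that knows how to produce a
    // "reversed" version of itself, which the engine can append to the
    // next graph in the stream when the original disappears.
    package main

    import "fmt"

    // Res is a stand-in for the real resource interface.
    type Res interface {
        Kind() string
    }

    // ReversibleRes is the hypothetical hook: Reversed returns a
    // resource that undoes this one, or nil if there is nothing to do.
    type ReversibleRes interface {
        Res
        Reversed() (Res, error)
    }

    // fileRes is a toy file resource for this sketch.
    type fileRes struct {
        path  string
        state string // "exists" or "absent"
    }

    func (obj *fileRes) Kind() string { return "file" }

    // Reversed returns a file resource that removes what this one made.
    func (obj *fileRes) Reversed() (Res, error) {
        return &fileRes{path: obj.path, state: "absent"}, nil
    }

    func main() {
        var r ReversibleRes = &fileRes{path: "/tmp/hello", state: "exists"}
        rev, _ := r.Reversed()
        fmt.Println(rev.Kind())
    }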
This patch also adds a new test harness for testing resources. It
doesn't test the "live" aspect of resources, as it doesn't run Watch,
but it was designed to ensure CheckApply works as intended, and it runs
very quickly with a simplified timeline of happenings.
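This is not the actual harness, but a generic illustration of the style:
a table-driven test that drives CheckApply through a short timeline and
never runs Watch:

    // Sketch only: drive CheckApply through a short, simplified
    // timeline of steps and check each result, without running Watch.
    package main

    import "testing"

    // checkApplier is a stand-in for the CheckApply part of a resource.
    type checkApplier interface {
        // CheckApply returns true if the state was already correct.
        CheckApply(apply bool) (bool, error)
    }

    // toggleRes pretends to converge after one apply, for the sketch.
    type toggleRes struct{ done bool }

    func (obj *toggleRes) CheckApply(apply bool) (bool, error) {
        if obj.done {
            return true, nil // already in the correct state
        }
        if apply {
            obj.done = true
        }
        return false, nil
    }

    func TestSimplifiedTimeline(t *testing.T) {
        var res checkApplier = &toggleRes{}
        timeline := []struct {
            apply   bool
            checkOK bool
        }{
            {true, false}, // first run has work to do
            {true, true},  // second run should already be converged
        }
        for i, step := range timeline {
            checkOK, err := res.CheckApply(step.apply)
            if err != nil {
                t.Fatalf("step %d: unexpected error: %v", i, err)
            }
            if checkOK != step.checkOK {
                t.Errorf("step %d: got %v, want %v", i, checkOK, step.checkOK)
            }
        }
    }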
This allows golang tests to be marked as root or !root using build tags.
The matching tests are then run as expected using our test runner.
This also disables test caching, which is unfriendly to repeated test
runs and is an absurd default for golang to have added.
Lastly, this hooks up the testing verbose flag to tests that accept a
debug variable.
These tests aren't enabled on travis yet because of how it installs
golang.
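For reference, marking a test file as root-only with build tags looks
like this; the "root" tag name matches the description above, and
-count=1 is one standard way to bypass the test cache (the test runner
may do it differently):

    // main_root_test.go
    //
    // This file only builds (and its tests only run) when the "root"
    // build tag is set, eg: go test -tags root -count=1 ./...

    //go:build root
    // +build root

    package main

    import (
        "os"
        "testing"
    )

    // TestNeedsRoot is a tiny example of a test that requires root.
    func TestNeedsRoot(t *testing.T) {
        if os.Geteuid() != 0 {
            t.Fatal("this test must run as root")
        }
        // ... root-only assertions would go here ...
    }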
This giant patch makes some much needed improvements to the code base.
* The engine has been rewritten and lives within engine/graph/
* All of the common interfaces and code now live in engine/
* All of the resources are in one package called engine/resources/
* The Res API can use different "traits" from engine/traits/ (sketch below)
* The Res API has been simplified to hide many of the old internals
* The Watch & Process loops were previously inverted, but this is now fixed
* The likelihood of package cycles has been reduced drastically
* And much, much more...
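The sketch mentioned in the traits bullet, with simplified toy names
rather than the real trait types, shows the embedding pattern:

    // Sketch only, with toy names: a resource picks up default
    // implementations of optional parts of the Res API by embedding
    // small "trait" types.
    package main

    import "fmt"

    // Groupable is a toy trait providing default autogrouping behaviour.
    type Groupable struct {
        grouped []interface{}
    }

    func (obj *Groupable) AutoGrouped() []interface{} { return obj.grouped }

    // Edgeable is a toy trait providing default autoedge behaviour.
    type Edgeable struct{}

    func (obj *Edgeable) AutoEdges() []string { return nil }

    // NoopRes shows a resource composed from traits plus its own fields.
    type NoopRes struct {
        Groupable // embed the traits to get their methods for free
        Edgeable

        Comment string
    }

    func main() {
        res := &NoopRes{Comment: "hello"}
        fmt.Println(res.AutoGrouped(), res.AutoEdges(), res.Comment)
    }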
Unfortunately, some code had to be temporarily removed. The remote code
had to be taken out, as did the prometheus code. We hope to have these
back in new forms as soon as possible.