Resources can send "refresh" notifications along edges. These messages
are sent whenever the upstream (initiating vertex) changes state. When
the changed state propagates downstream, it will be paired with a
refresh flag which can be queried in the CheckApply method of that
resource.
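Here is a minimal sketch of how a resource might consume that flag; the
SvcRes type, the Refresh helper, and the systemctl call are hypothetical
stand-ins for illustration, not the project's actual API.

```go
package example

import (
	"fmt"
	"os/exec"
)

// SvcRes is a hypothetical, minimal resource used only for illustration.
type SvcRes struct {
	Name    string
	refresh bool // set when a refresh notification arrived on an incoming edge
}

// Refresh reports whether an upstream vertex asked us to refresh.
func (obj *SvcRes) Refresh() bool { return obj.refresh }

// CheckApply checks state and, when apply is true, corrects it. It returns
// (checkOK, err), where checkOK is true when no changes were needed.
func (obj *SvcRes) CheckApply(apply bool) (bool, error) {
	if obj.Refresh() { // a refresh was sent along an incoming edge
		if !apply {
			return false, nil // state is "wrong" until we act on it
		}
		// act on the refresh, eg: restart the service we manage
		if err := exec.Command("systemctl", "restart", obj.Name).Run(); err != nil {
			return false, fmt.Errorf("restart failed: %v", err)
		}
		return false, nil // we made a change
	}
	return true, nil // nothing to do
}
```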
Future work will include a stateful refresh tracking mechanism so that
if a refresh event is generated and not consumed, it will be saved
across an interrupt (shutdown) or a crash so that it can be re-applied
on the subsequent run. This is important because the unapplied refresh
is a form of hysteresis which needs to be tracked and remembered or we
won't be able to determine that the state is wrong!
Still to do:
* Update the autogrouping code to handle the edge notify properties!
* Actually finish the stateful bool code
This is a new design idea which I had. Whether it stays around or not is
up for debate. For now it's a rough POC.
The idea is that any resource can _produce_ data, and any resource can
_consume_ data. This is what we call send and recv. By linking the two
together, data can be passed directly between resources, which will
maximize code re-use, and allow for some interesting logical graphs.
For example, you might have an HTTP resource whose output gets sent to a
particular File resource. This avoids having to overload the HTTP resource
with all of the special behaviours of the File resource.
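Roughly, the idea can be sketched like this; the interface names and the
wiring below are hypothetical, not the actual implementation:

```go
package example

// Sender is a resource that can produce (send) named values.
type Sender interface {
	Sends() map[string]interface{} // values this resource produces
}

// Receiver is a resource that can consume (recv) named values.
type Receiver interface {
	Recv(key string, value interface{}) error // accept a value from a sender
}

// Link copies one sent value into a receiver, which is how an HTTP
// resource's output could become the content of a File resource.
func Link(from Sender, fromKey string, to Receiver, toKey string) error {
	return to.Recv(toKey, from.Sends()[fromKey])
}
```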
For our POC, I implemented a `password` resource which generates a random
string that can then be passed to a receiver such as a file. At
this point the password resource isn't recommended for sensitive
applications because it caches the password as plain text.
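Internally, generating the random string only needs the standard library;
the helper below is an illustrative sketch (the function name is made up),
not the resource's actual code.

```go
package example

import (
	"crypto/rand"
	"encoding/base64"
)

// generatePassword returns a random printable string of length n. It uses
// crypto/rand rather than math/rand so the output is suitable as a secret.
func generatePassword(n int) (string, error) {
	b := make([]byte, n)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	// base64 keeps the result printable; trim to the requested length
	s := base64.RawURLEncoding.EncodeToString(b)
	if len(s) > n {
		s = s[:n]
	}
	return s, nil
}
```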
Still to do:
* Statically check all of the type matching before we run the graph
* Verify that our autogrouping works correctly around this feature
* Verify that appropriate edges exist between send->recv pairs
* Label the password as generated instead of storing the plain text
* Consider moving password logic from Init() to CheckApply()
* Consider combining multiple send values (list?) into a single receiver
* Consider intermediary transformation nodes for value combining
You can try it out yourself by running `go build` and then running the
resulting binary.
Use a bare integer argument to create that number of noop resources.
There are clearly some performance optimizations that we could do for
extremely large graphs.
This is a monster patch that splits out the yaml and puppet based graph
generation and pushes them behind a common API. In addition, alternate
pluggable GAPIs can be easily added! The important side benefit is that
you can now write a custom GAPI for embedding mgmt!
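As a sketch, the kind of interface a pluggable GAPI could expose might look
like the following; the method names and signatures here are hypothetical,
not the exact interface in the tree.

```go
package example

// GAPI is a hypothetical sketch of a pluggable graph API provider.
type GAPI interface {
	// Graph returns the current resource graph. The concrete graph type is
	// elided here for brevity.
	Graph() (interface{}, error)

	// SwitchStream returns a channel which signals whenever a new graph is
	// available, eg: when the backing yaml file or puppet catalog changes.
	SwitchStream() chan error

	// Close shuts the provider down cleanly.
	Close() error
}
```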
This also includes some slight clean ups that I didn't think were worth
splitting into separate patches.
This is an initial implementation of a possible golang API. In this
particular version, the *gconfig.GraphConfig data structures are
emitted, instead of possibly building a pgraph. As long as we can
represent any local graph with this data structure, this is fine!
Is there a way to merge the gconfig Vertex and the pgraph Vertex?
This adds an initial implementation of a virt resource based on libvirt.
It is not complete and requires more testing. The initial skeleton was
written by nseps but was not merged. It was later cleaned up and merged
in its current form by purpleidea. Many thanks to nseps for getting this
going, and hopefully we'll get you contributing more in the future!
All resources can now set a retry limit (-1 for infinite) and a delay
between retries. This applies to both the CheckApply methods and the Watch
methods. They each have their own separate counts, but use
the same input meta param, since I decided it wouldn't be useful to have
a separate watchRetry and watchDelay set of meta parameters.
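The retry/delay semantics are roughly what this illustrative helper shows;
the real logic is woven into the resource event loop, so treat this as a
sketch only.

```go
package example

import (
	"fmt"
	"time"
)

// retry runs fn until it succeeds, retries are exhausted, or forever when
// retries is negative, sleeping delay between attempts.
func retry(retries int, delay time.Duration, fn func() error) error {
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if retries == 0 {
			return fmt.Errorf("giving up: %v", err)
		}
		if retries > 0 { // negative means infinite, so never decrement
			retries--
		}
		time.Sleep(delay)
	}
}
```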
In the process, we got rid of about 15 error cases which would normally
panic.
This patch required a slight overhaul of the Event system.
The previous commit is an earlier version of this patch which I decided
to leave in to "show my work", as I used to have to do in math class.
That earlier version is slightly more correct with the current event
system; this version is less correct and has a few bugs, but that is
because the event system needs a massive overhaul, and once that's done,
this should all work properly for the corner cases.
The file resource contained some of the early golang code that I wrote
for this project. Needless to say, some of it was quite yucky, and it
was also lacking a number of important features. This patch builds upon
it so that it starts being usable for directories of files too.
Many thanks to Sam Gélineau for helping with the recursive watching. My
brain officially didn't want to look at that code anymore.
This is a new mode to be used for bootstrapping mgmt clusters or in
situations with tight operational restrictions.
This includes the basics, additional functionality will follow!
This monster patch embeds the etcd server. It took a good deal of
iterative work to tweak small details, and survived a rewrite from the
initial etcd v2 API implementation to the beta version of v3.
It has a notable race, and is missing some features, but it is ready for
git master and external developer consumption.
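For the curious, a minimal standalone embedding using etcd's embed package
looks roughly like the snippet below; the actual integration here (dynamic
membership and so on) is far more involved, and the import path is simply
the upstream one from this era.

```go
package main

import (
	"log"
	"time"

	"github.com/coreos/etcd/embed"
)

func main() {
	cfg := embed.NewConfig()
	cfg.Dir = "/tmp/mgmt-etcd-example" // throwaway data dir
	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()

	select {
	case <-e.Server.ReadyNotify():
		log.Println("embedded etcd server is ready")
	case <-time.After(60 * time.Second):
		e.Server.Stop() // trigger a shutdown
		log.Fatal("embedded etcd server took too long to start")
	}
	log.Fatal(<-e.Err()) // block until the server exits or errors
}
```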
Sorry for the size of this patch; I was busy hacking and plumbing away
and it got out of hand! I'm allowing this because there doesn't seem to
be anyone hacking away on the parts of the code that this would break,
since the resource code is fairly stable in this change. In particular,
it revisits and refreshes some areas of the code that haven't seen
anything new or innovative since the project first started. I've gotten
rid of a lot of cruft, and in particular cleaned up some things that I
didn't know how to do better before! Here's hoping I'll continue to
learn and have more to improve upon in the future! (Well, let's not hope
_too_ hard though!)
The goal of this patch was to make logical grouping of resources
possible. For example, it might be more efficient to group three package
installations into a single transaction, instead of having to run three
separate transactions. This is because a package installation typically
has an initial, one-time per-run cost which shouldn't need to be
repeated.
Another future goal would be to group file resources sharing a common
base path under a common recursive fanotify watcher. Since this depends
on adding fanotify support first, it hasn't been implemented yet, but it
could be a useful way of reducing the number of separate watches needed,
since there is a finite limit on them.
It's worth mentioning that grouping resources typically _reduces_ the
parallel execution capability of a particular graph, but depending on
the cost/benefit tradeoff, this might be preferable. I'd submit it's
almost universally beneficial for pkg resources.
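To illustrate just the concept (the real algorithm also has to respect
edges and each resource's grouping interface), a naive placeholder might
simply bucket resources by kind:

```go
package example

// Res is a minimal stand-in for a resource vertex, for illustration only.
type Res struct {
	Kind string // eg: "pkg", "file", "svc"
	Name string
}

// groupByKind is a deliberately naive placeholder: it buckets resources of
// the same kind together so that, for example, three pkg resources could be
// applied in one transaction.
func groupByKind(resources []Res) map[string][]Res {
	groups := make(map[string][]Res)
	for _, r := range resources {
		groups[r.Kind] = append(groups[r.Kind], r)
	}
	return groups
}
```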
This monster patch includes:
* the autogroup feature
* the grouping interface
* a placeholder algorithm
* an extensive test case infrastructure to test grouping algorithms
* a refactoring that moves some base resource methods into pgraph
* some config/compile clean ups to remove code duplication
* b64 encoding/decoding improvements
* a rename of the yaml "res" entries to "kind" (more logical)
* some docs
* small fixes
* and more!
This is a monster patch that finally gets the iterative pkg auto edges
working the way they should. For each file, as soon as one matches, we
don't want to keep adding dependencies on other file objects under that
tree structure. This reduces the number of necessary edges considerably,
and allows the graph to run more concurrently.
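One way to picture the "stop once one matches" idea is this sketch, which
keeps only the top-most path per tree; it illustrates the concept, not the
actual auto edge algorithm.

```go
package example

import (
	"sort"
	"strings"
)

// pruneSubpaths keeps only the top-most candidate paths: once a path is
// kept, anything under that tree is dropped, so we end up with one edge per
// tree instead of one edge per file. Directory paths are assumed to end in
// "/" so the prefix test stays honest.
func pruneSubpaths(candidates []string) []string {
	sort.Strings(candidates) // parents sort before their children
	var kept []string
	for _, c := range candidates {
		if len(kept) > 0 && strings.HasPrefix(c, kept[len(kept)-1]) {
			continue // already covered by a parent we kept
		}
		kept = append(kept, c)
	}
	return kept
}
```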
This allows for resources to automatically add necessary edges to the
graph so that the event system doesn't have to work overtime due to
sub-optimal execution order.
This is based on PackageKit, which means events, *and* we automatically
get support for any of the backends that PackageKit supports. This means
dpkg and rpm are both first-class citizens! Many other backends will
surely work, although thorough testing is left as an exercise to the
reader, or to someone who would like to write more test cases!
Unfortunately, at the moment there are a few upstream PackageKit bugs
which cause us issues, but those have apparently been resolved upstream.
If you experience issues with an old version of PackageKit, test if it
is working correctly before blaming mgmt :)
In parallel, mgmt might increase the testing surface for PackageKit, so
hopefully this makes it more robust for everyone involved!
Lastly, I'd like to point out that many great things that are typically
used for servers do start in the GNOME desktop world. Help support your
GNOME GNU/Linux desktop today!
Naming the resources "type" was a stupid mistake, and is a huge source
of confusion when also talking about real types. Fix this before it gets
out of hand.
* Fixup graph state readability
* Rename original SetState() to SetConvergedState() and friends...
* Add type state management for proper BackPoke() commands...
* Add better DEBUG logging
This is an important optimization that prevents running a BackPoke on a
parent which is in the process of running and will most certainly poke
the caller back in a moment. This avoids unnecessary roundtrips.
Unfortunately, there are still other algorithms required so that races
can't cause the graph to run for longer than necessary.
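A toy version of that decision might look like the following; the vertex
struct and its fields are made up for illustration and don't mirror the
real scheduler.

```go
package example

// vertex is a tiny stand-in for a graph vertex, for illustration only.
type vertex struct {
	timestamp int64 // when this vertex last finished running
	running   bool  // is it currently executing?
}

// shouldBackPoke sketches the optimization: a parent that is already running
// will poke us downstream when it finishes, so poking it back would only add
// a roundtrip. Only a quiescent parent with a stale timestamp needs a nudge.
func shouldBackPoke(child, parent *vertex) bool {
	if parent.running {
		return false // it will poke us on its own in a moment
	}
	return parent.timestamp < child.timestamp // stale parent needs the poke
}
```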
* Fix Process() object calling
* Add PokeParent() to poke upwards
* Break linear exec chains :(
This was the issue where, in a graph f1 -> f2, if you were to `rm f2 &&
cat f2`, then f2 would not come back because we didn't poke upwards to
refresh the timestamp. Unfortunately this adds another bug which we
solve in a later patch.
* Add exec type
* Switch erroneous use of fmt to log instead
* Check for edge existence for safety before using
* Avoid recalling etcd channel maker
* Clean up logging output
This is the third main feature of this system. The code needs a bunch of
polish, but it actually all works :)
I've tested this briefly with N <= 3.
Currently you have to build your own etcd cluster. It's quite easy:
just run `etcd` and it will be ready. I usually run it in a throwaway
/tmp/ dir so that I can blow away the stored data easily.
This is still a dirty prototype, so please excuse the mess. Please
excuse the fact that this is a mega patch. Once things settle down this
won't happen any more.
Some of the changes squashed into here include:
* Merge vertex loop with type loop
(The file watcher seems to cache events anyways)
* Improve pgraph library
* Add indegree, outdegree, and topological sort with tests (see the sketch after this list)
* Add reverse function for vertex list
* Tons of additional cleanup!
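For reference, the topological sort mentioned in the list above is the
classic one; a compact sketch using Kahn's algorithm over a plain adjacency
map (rather than the library's own Vertex type) could look like this:

```go
package example

// topologicalSort orders the vertices of adj so that every edge points
// forward in the result. The boolean is false if the graph has a cycle.
func topologicalSort(adj map[string][]string) ([]string, bool) {
	indegree := make(map[string]int)
	for v, outs := range adj {
		if _, ok := indegree[v]; !ok {
			indegree[v] = 0
		}
		for _, w := range outs {
			indegree[w]++
		}
	}
	var queue, order []string
	for v, d := range indegree {
		if d == 0 {
			queue = append(queue, v)
		}
	}
	for len(queue) > 0 {
		v := queue[0]
		queue = queue[1:]
		order = append(order, v)
		for _, w := range adj[v] {
			indegree[w]--
			if indegree[w] == 0 {
				queue = append(queue, w)
			}
		}
	}
	return order, len(order) == len(indegree)
}
```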
Amazingly, on my first successful compile, this seemed to run!
A special thanks to Ira Cooper who helped me talk through some of the
algorithmic decisions and for his help in finding better ones!
This is a prototype that I'm attempting to "release early". Expect a lot
of changes! It is intended to be a config management tool that will:
* be event based
* execute actions in parallel
* function as a distributed system
There are a bunch more design ideas going into this, please stay tuned!