This is an example of a race-free long-poll server and client. It uses a
redirection method to signal that the "Watch" is running.
Other race-free methods exist.
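As a rough, self-contained sketch of the redirection trick (this is illustrative Go, not the actual example code; the endpoint names and id scheme are invented):

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

var (
	mu       sync.Mutex
	watchers = map[string]chan struct{}{} // id -> pending event channel
)

func main() {
	http.HandleFunc("/watch", func(w http.ResponseWriter, r *http.Request) {
		mu.Lock()
		id := fmt.Sprintf("%d", len(watchers)) // toy id scheme
		watchers[id] = make(chan struct{}, 1)  // arm the watcher *first*
		mu.Unlock()
		// The redirect is the signal: any client that sees it knows the
		// watch is already running, so it can act without racing.
		http.Redirect(w, r, "/wait?id="+id, http.StatusSeeOther)
	})
	http.HandleFunc("/wait", func(w http.ResponseWriter, r *http.Request) {
		mu.Lock()
		ch := watchers[r.URL.Query().Get("id")]
		mu.Unlock()
		<-ch // long-poll: block here until an event fires
		fmt.Fprintln(w, "event!")
	})
	http.HandleFunc("/trigger", func(w http.ResponseWriter, r *http.Request) {
		mu.Lock()
		for _, ch := range watchers {
			select {
			case ch <- struct{}{}: // wake the poller
			default: // an event is already pending
			}
		}
		mu.Unlock()
	})
	http.ListenAndServe(":8080", nil)
}
```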
This patch adds autoedges between users and groups, and extends
users with additional fields for supplementary groups and a named
primary group. Also, some small fixes to log and error messages.
This allows the implementer of the GAPI to specify three parameters for
every Next message sent on the channel. The Fast parameter tells the
engine whether it should pause quickly or finish the sequence. A quick
pause takes effect immediately after the currently running resources
finish, whereas a slow (default) pause allows the wave of execution to
finish. The slow pause is usually preferred for complex graphs where we
want each step to complete. The Exit parameter tells the engine to
exit, and the Err parameter tells the engine that an error occurred.
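If I remember the shape correctly, the message is just a small struct; treat the exact field types here as an assumption:

```go
// Next is the message sent over the GAPI channel. The field names match
// the description above; the exact types are my assumption.
type Next struct {
	Fast bool  // pause quickly instead of letting the wave finish
	Exit bool  // tell the engine to exit
	Err  error // tell the engine that an error occurred
}
```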
This adds send/recv output parameters from exec for stdout, stderr, and
output, which is a combination of the two. It also includes a few tests
and a working example!
Gone are the `some_command > some_file` days of puppet.
Since the pgraph graph can store arbitrary pointers, we don't need a
special method to create the vertices or edges as long as they implement
the String() string method. This cleans up the library and some of the
examples, which I had previously let rot.
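In other words, anything that satisfies fmt.Stringer can be a vertex. A minimal sketch (the exact pgraph constructor and method signatures here are assumptions):

```go
type myVertex struct {
	name string
}

// String is the only method pgraph requires of us.
func (v *myVertex) String() string { return v.name }

func buildExample() (*pgraph.Graph, error) {
	g, err := pgraph.NewGraph("example") // assumed constructor signature
	if err != nil {
		return nil, err
	}
	v1, v2 := &myVertex{"a"}, &myVertex{"b"}
	g.AddVertex(v1)
	g.AddVertex(v2)
	return g, nil
}
```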
This is something I've wanted to do for a while, but for the reasons
mentioned in the comments, I've been unable to complete it yet. I figured
I'd at least merge what does exist so far in case someone else would
like to pick this up. It's a bit of a brain hurdle / monster, because
the tricky part is refactoring the core engine so that this fits in
nicely. Perhaps someone will have more time and/or less tunnel vision
than I do to either merge something or sketch out some ideas on the path
forwards. I think it's a useful goal because if recursive resources are
possible, it could force the core engine into a more elegant design.
Happy hacking!
These are helper functions to merge existing graphs into a main graph,
with or without adding an edge relationship between a vertex and the
new graph. These are particularly useful when using mgmt as a lib to
break apart units of work into functions that create subgraphs, which
are then added to the main graph when they're returned.
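As a sketch of what such a helper might look like (the function name and the pgraph method names here are illustrative assumptions, not the exact API):

```go
// mergeGraph is a hypothetical helper: it copies every vertex and edge
// of sub into main, and optionally connects anchor to each of sub's
// root vertices (those with no incoming edges inside sub).
func mergeGraph(main, sub *pgraph.Graph, anchor pgraph.Vertex, connect bool) {
	for _, v := range sub.Vertices() {
		main.AddVertex(v)
		if connect && len(sub.IncomingGraphVertices(v)) == 0 {
			main.AddEdge(anchor, v, newEdge("dep")) // newEdge is assumed
		}
	}
	for v1, m := range sub.Adjacency() { // assumed accessor
		for v2, e := range m {
			main.AddEdge(v1, v2, e)
		}
	}
}
```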
The graph of dependencies in golang is a DAG, and as such doesn't allow
cycles. Clean up this lib so that it eventually doesn't import our
resources module or anything else which might want to import it.
This patch makes adjacency private, and adds a generalized key store to
the graph struct.
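Conceptually, the struct now looks something like this (a sketch of the shape, with Vertex and Edge standing in for the real types):

```go
type Graph struct {
	adjacency map[Vertex]map[Vertex]Edge // unexported: use the methods
	kv        map[string]interface{}     // the generalized key store
}

// SetValue stores an arbitrary value on the graph under key.
func (g *Graph) SetValue(key string, val interface{}) {
	g.kv[key] = val
}

// Value returns the stored value for key, and whether it exists.
func (g *Graph) Value(key string) (interface{}, bool) {
	val, exists := g.kv[key]
	return val, exists
}
```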
This puts the generation of the initial event into the Next method of
the GAPI. If it does not happen, then we will never get a graph. This is
important because this notifies the GAPI when we're actually ready to
try to generate a graph, rather than blocking on the Graph method if we
have a long compile, for example.
This is also required for the etcd watch cleanup.
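A GAPI's Next implementation would therefore begin by emitting one event before settling into its watch loop; a rough sketch (the channel names on obj are assumptions):

```go
func (obj *MyGAPI) Next() chan gapi.Next {
	ch := make(chan gapi.Next)
	go func() {
		defer close(ch)
		ch <- gapi.Next{} // the initial event: we're ready for Graph()
		for {
			select {
			case <-obj.watchChan: // hypothetical: something changed
				ch <- gapi.Next{}
			case <-obj.closeChan: // hypothetical: shutdown requested
				return
			}
		}
	}()
	return ch
}
```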
This is a new resource for setting key value pairs in our global world
database. Currently only etcd is supported. Some of the implications and
possibilities of this resource will become more obvious with future
commits!
You can test this resource with these commands:
ETCDCTL_API=3 etcdctl get "/_mgmt/strings/" --prefix=true
ETCDCTL_API=3 etcdctl put "/_mgmt/strings/KEY/HOSTNAME" 42
Replace the KEY and HOSTNAME variables with the actual values you'd like
to use. The 42 is the value that is set.
This adds a P/V style semaphore mechanism to the resource graph. This
enables the user to specify a number of "id:count" tags associated with
each resource which will reduce the parallelism of the CheckApply
operation to that maximum count.
This is particularly interesting because (assuming I'm not mistaken) the
implementation is deadlock-free, provided that no individual resource
ever blocks permanently during execution! I don't have a formal proof of
this, but I was able to convince myself on paper that it was the case.
An actual proof that N P/V counting semaphores in a DAG won't ever
deadlock would be particularly welcome! Hint: the trick is to acquire
them in alphabetical order while respecting the DAG flow. Disclaimer:
this assumes that each semaphore's count is always > 0, of course.
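The hint, as a self-contained sketch (the wiring around CheckApply is illustrative, not the engine's actual code):

```go
// sem is a counting semaphore; its capacity is the "count" in "id:count".
type sem chan struct{}

func (s sem) P() { s <- struct{}{} } // acquire
func (s sem) V() { <-s }             // release

// withSemaphores acquires each named semaphore in sorted (alphabetical)
// order, runs fn (think: CheckApply), then releases in reverse order.
// Acquiring in a single global order is what prevents deadlock.
func withSemaphores(sems map[string]sem, ids []string, fn func()) {
	sorted := append([]string(nil), ids...)
	sort.Strings(sorted)
	for _, id := range sorted {
		sems[id].P()
	}
	fn()
	for i := len(sorted) - 1; i >= 0; i-- {
		sems[sorted[i]].V()
	}
}
```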
This allows hot (un)plugging of CPUs! It also includes some general
cleanups which were necessary to support this, as well as some other
features for the virt resource. Hot unplugging requires Fedora 25.
It also comes with a mini shell script to help demo this capability.
Many thanks to pkrempa for his help with the libvirt API!
There was a race condition that would sometimes occur: if we stopped
reading from the gapiChan (on shutdown), but a new message became
available before we managed to close the GAPI, then we would wait
forever for the close to finish, because the channel never sent, and
the WaitGroup wouldn't let us exit.
This fixes this horrible, horrible race.
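The shape of the fix is roughly this (the channel and method names are illustrative):

```go
// On shutdown, request the close, but keep draining gapiChan until it
// is actually closed, so a sender that raced us can always complete
// its send, and the GAPI's WaitGroup can reach zero.
go gapiObj.Close()
for range gapiChan {
	// discard any messages that raced the shutdown
}
```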
This makes examples slightly nicer to commit, since you don't need a
hardcoded ~/james/ in their source value. It's also probably a useful
feature for the resource.
This adds rate limiting with the limit and burst meta parameters. The
limits apply to how often the Process check is called. Note that
Process might get called more often than there are Watch events, due to
possible Poke/BackPoke events.
This system might need to get rethought in the future depending on its
usefulness.
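For reference, golang.org/x/time/rate models exactly this limit/burst pair; a sketch of how the meta parameters could map onto it (the surrounding loop and parameter plumbing are assumptions):

```go
import (
	"context"

	"golang.org/x/time/rate"
)

func processLoop(events <-chan struct{}, limit float64, burst int) error {
	// limit is events per second; burst is how many may run back-to-back.
	limiter := rate.NewLimiter(rate.Limit(limit), burst)
	for range events {
		if err := limiter.Wait(context.TODO()); err != nil {
			return err // the context was cancelled
		}
		// ... run the rate-limited Process check here ...
	}
	return nil
}
```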
This allows a resource to use polling instead of the event based
mechanism. This isn't recommended, but it could be useful, and it was
certainly fun to code!
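A polling Watch is essentially just a ticker loop; a minimal sketch (the event plumbing on obj is an assumption):

```go
ticker := time.NewTicker(time.Duration(obj.PollInt) * time.Second)
defer ticker.Stop()
for {
	select {
	case <-ticker.C:
		obj.Event() // hypothetical: ask the engine to run Process
	case <-obj.exitChan: // hypothetical: shutdown requested
		return nil
	}
}
```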
This is the initial base of what will hopefully become a powerful API
that machines will use to communicate. It will be the basis of the
stateful data store that can be used for exported resources, fact
exchange, state machine flags, locks, and much more.
This polishes the password resource so that it can actually avoid
writing the password to disk, and so that the work actually happens in
CheckApply where it can properly interact with the graph. This resource
now re-generates the password when it receives a notification.
The send/recv plumbing has been extended so that receivers can detect
when they're receiving new values. This is particularly important if
they might otherwise not expect those values to change and cache them
for efficiency purposes.
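A receiver might use that detection to invalidate its cache, along these lines (a sketch; the Recv accessor and Changed flag shapes are assumptions):

```go
// hypothetical: Recv() reports which fields were written this cycle.
if v, exists := obj.Recv()["Content"]; exists && v.Changed {
	obj.cachedOK = false // a new value arrived; drop the cached state
}
```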
Resources can send "refresh" notifications along edges. These messages
are sent whenever the upstream (initiating vertex) changes state. When
the changed state propagates downstream, it will be paired with a
refresh flag which can be queried in the CheckApply method of that
resource.
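From the resource's point of view, the query could look like this (a sketch; the Refresh accessor name is an assumption):

```go
func (obj *SvcRes) CheckApply(apply bool) (bool, error) {
	if obj.Refresh() { // did a refresh arrive along an incoming edge?
		// an upstream vertex changed state; e.g. restart the service
	}
	// ... the usual idempotent check and apply logic ...
	return true, nil
}
```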
Future work will include a stateful refresh tracking mechanism so that
if a refresh event is generated and not consumed, it will be saved
across an interrupt (shutdown) or a crash so that it can be re-applied
on the subsequent run. This is important because the unapplied refresh
is a form of hysteresis which needs to be tracked and remembered, or we
won't be able to determine that the state is wrong!
Still to do:
* Update the autogrouping code to handle the edge notify properties!
* Actually finish the stateful bool code
This is a new design idea which I had. Whether it stays around or not is
up for debate. For now it's a rough POC.
The idea is that any resource can _produce_ data, and any resource can
_consume_ data. This is what we call send and recv. By linking the two
together, data can be passed directly between resources, which will
maximize code re-use, and allow for some interesting logical graphs.
For example, you might have an HTTP resource which puts its output in a
particular file. This avoids having to overload the HTTP resource with
all of the special behaviours of the File resource.
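Mechanically, the pairing can be as simple as a reflection-based field copy from sender to receiver; here's a self-contained sketch of that idea (not mgmt's actual plumbing), which also shows where the type checking from the TODO list below would hook in:

```go
// sendRecv copies sender.sendField into receiver.recvField, refusing
// mismatched types. A real engine would discover the pairs from edges.
func sendRecv(sender, receiver interface{}, sendField, recvField string) error {
	sv := reflect.ValueOf(sender).Elem().FieldByName(sendField)
	rv := reflect.ValueOf(receiver).Elem().FieldByName(recvField)
	if !sv.IsValid() || !rv.IsValid() || !rv.CanSet() {
		return fmt.Errorf("bad send/recv pairing: %s -> %s", sendField, recvField)
	}
	if sv.Type() != rv.Type() { // the check we'd like to do statically
		return fmt.Errorf("type mismatch: %s vs %s", sv.Type(), rv.Type())
	}
	rv.Set(sv)
	return nil
}
```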
For our POC, I implemented a `password` resource which generates a
random string which can then be passed to a receiver such as a file. At
this point the password resource isn't recommended for sensitive
applications because it caches the password as plain text.
Still to do:
* Statically check all of the type matching before we run the graph
* Verify that our autogrouping works correctly around this feature
* Verify that appropriate edges exist between send->recv pairs
* Label the password as generated instead of storing the plain text
* Consider moving password logic from Init() to CheckApply()
* Consider combining multiple send values (list?) into a single receiver
* Consider intermediary transformation nodes for value combining
You can try it out yourself by running `go build` and then calling the
resulting binary. Pass a bare integer argument to create that number of
noop resources.
There are clearly some performance optimizations that we could do for
extremely large graphs.