This adds a P/V style semaphore mechanism to the resource graph. It
enables the user to specify a number of "id:count" tags associated with
each resource, which limit the parallelism of the CheckApply operation
to at most count simultaneous executions per id.
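Such a counting semaphore can be sketched on a buffered channel; this is an illustrative sketch only, and the `Semaphore` type and `parseSema` helper are hypothetical names, not mgmt's actual API:

```go
package semaphore

import (
	"strconv"
	"strings"
)

// Semaphore is a P/V counting semaphore built on a buffered channel.
// Illustrative sketch only; these names are not mgmt's actual API.
type Semaphore chan struct{}

// NewSemaphore returns a semaphore that admits up to count holders.
func NewSemaphore(count int) Semaphore { return make(Semaphore, count) }

// P acquires one unit, blocking while count holders already exist.
func (s Semaphore) P() { s <- struct{}{} }

// V releases one unit.
func (s Semaphore) V() { <-s }

// parseSema splits an "id:count" tag into its id and count parts.
func parseSema(tag string) (string, int, error) {
	parts := strings.SplitN(tag, ":", 2)
	if len(parts) != 2 {
		return tag, 1, nil // a bare "id" could default to a count of 1
	}
	count, err := strconv.Atoi(parts[1])
	return parts[0], count, err
}
```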
This is particularly interesting because (assuming I'm not mistaken) the
implementation is deadlock-free, provided that no individual resource
ever blocks permanently during execution! I don't have a formal proof of
this, but I was able to convince myself on paper that it was the case.
An actual proof that N P/V counting semaphores in a DAG won't ever
deadlock would be particularly welcome! Hint: the trick is to acquire
them in alphabetical order while respecting the DAG flow. Disclaimer:
this assumes that the lock count is always > 0, of course.
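Continuing the sketch above, the alphabetical acquisition trick looks roughly like this; again, hypothetical names, not the engine's actual code:

```go
package semaphore

import "sort"

// acquireAll takes every semaphore a resource needs in one global
// order (sorted by id). With a single global acquisition order, a
// cycle of workers each holding one semaphore while waiting on another
// cannot form, ruling out the circular-wait condition for deadlock.
func acquireAll(sema map[string]Semaphore, ids []string) {
	sorted := append([]string{}, ids...)
	sort.Strings(sorted)
	for _, id := range sorted {
		sema[id].P()
	}
}

// releaseAll releases everything; for counting semaphores the release
// order does not matter for correctness, but reverse order is tidy.
func releaseAll(sema map[string]Semaphore, ids []string) {
	sorted := append([]string{}, ids...)
	sort.Strings(sorted)
	for i := len(sorted) - 1; i >= 0; i-- {
		sema[sorted[i]].V()
	}
}
```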
Improvements in the engine have uncovered some annoying race conditions
which would cause the engine to block between transitions. This adds a
test which catches the most obvious file-based ones.
It requires inotify to work in the test environment.
This allows hot (un)plugging of CPUs! It also includes some general
cleanups which were necessary to support this, as well as some other
features for the virt resource. Hot unplugging requires Fedora 25.
It also comes with a mini shell script to help demo this capability.
Many thanks to pkrempa for his help with the libvirt API!
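For reference, a vCPU hot (un)plug through the libvirt Go bindings looks roughly like the sketch below; it assumes the github.com/libvirt/libvirt-go package, and the connection URI and domain name are placeholders:

```go
package main

import (
	"log"

	libvirt "github.com/libvirt/libvirt-go"
)

// setCPUs hot (un)plugs vCPUs on a running domain: raising count plugs
// CPUs in, lowering it unplugs them (guest and distro permitting).
func setCPUs(name string, count uint) error {
	conn, err := libvirt.NewConnect("qemu:///system") // placeholder URI
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.LookupDomainByName(name)
	if err != nil {
		return err
	}
	defer dom.Free()

	// DOMAIN_VCPU_LIVE applies the change to the running guest.
	return dom.SetVcpusFlags(count, libvirt.DOMAIN_VCPU_LIVE)
}

func main() {
	if err := setCPUs("mgmt1", 2); err != nil { // placeholder domain name
		log.Fatal(err)
	}
}
```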
* Check for and install libvirt with Homebrew
macOS does not have apt, dnf or yum, so check for Homebrew when
installing libvirt.
* Use the platform timeout for tests
* Add timeout detection to test/util.sh (sketched below)
* Use $timeout for shell tests requiring a timeout
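The real detection lives in test/util.sh as shell; the idea, sketched in Go purely for illustration, is to probe for whichever timeout binary exists (Homebrew's coreutils installs it as gtimeout on macOS):

```go
package util

import (
	"fmt"
	"os/exec"
)

// findTimeout returns the first usable timeout binary. On Linux this
// is normally "timeout"; on macOS, Homebrew's coreutils installs it as
// "gtimeout". The real implementation is shell in test/util.sh; this
// Go version only illustrates the detection idea.
func findTimeout() (string, error) {
	for _, name := range []string{"timeout", "gtimeout"} {
		if path, err := exec.LookPath(name); err == nil {
			return path, nil
		}
	}
	return "", fmt.Errorf("no timeout binary found")
}
```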
Add owner, which must be the username or uid of the file owner; group,
which is the group name or gid of the file; and mode, which is the
octal unix file permissions.
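Under the hood these three parameters map onto standard chown/chmod operations; a rough sketch of that mapping (not the resource's actual code, and it only shows the name-based lookups, not the direct uid/gid path):

```go
package resources

import (
	"os"
	"os/user"
	"strconv"
)

// applyFileMeta sets owner, group and mode on path. Sketch only;
// owner and group names are resolved via os/user lookups.
func applyFileMeta(path, owner, group, mode string) error {
	u, err := user.Lookup(owner) // resolve a username to a uid
	if err != nil {
		return err
	}
	g, err := user.LookupGroup(group) // resolve a group name to a gid
	if err != nil {
		return err
	}
	uid, err := strconv.Atoi(u.Uid)
	if err != nil {
		return err
	}
	gid, err := strconv.Atoi(g.Gid)
	if err != nil {
		return err
	}
	if err := os.Chown(path, uid, gid); err != nil {
		return err
	}
	m, err := strconv.ParseUint(mode, 8, 32) // mode is octal, eg "0644"
	if err != nil {
		return err
	}
	return os.Chmod(path, os.FileMode(m))
}
```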
Add a separate implementation for Go 1.6 and lower.
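If, for example, the newer code uses os/user.LookupGroup (which only arrived in Go 1.7), build constraints can select the right file per Go version; a minimal sketch of the pattern, with illustrative file and function names:

```go
// lookup_go17.go -- compiled only on Go 1.7 and newer:

// +build go1.7

package resources

import "os/user"

// lookupGroup uses os/user.LookupGroup, which arrived in Go 1.7.
func lookupGroup(name string) (string, error) {
	g, err := user.LookupGroup(name)
	if err != nil {
		return "", err
	}
	return g.Gid, nil
}
```

```go
// lookup_legacy.go -- compiled on Go 1.6 and lower:

// +build !go1.7

package resources

// lookupGroup must fall back to another mechanism here, for example
// parsing /etc/group; the fallback body is elided in this sketch.
func lookupGroup(name string) (string, error) {
	return "", nil // placeholder
}
```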
The mgmt graph depends on state tracking to eliminate redundant pokes.
With the Watch loop now able to produce events quickly, it should no
longer play a part in determining the vertex state. This simplifies the
resource API as well!
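The idea, in a heavily simplified and hypothetical sketch (these are not the engine's actual types):

```go
package engine

// Res is a trimmed-down resource interface for this sketch only.
type Res interface {
	// CheckApply returns true when the resource was already correct.
	CheckApply(apply bool) (bool, error)
}

// Vertex pairs a resource with its tracked state. Hypothetical names.
type Vertex struct {
	Res      Res
	stateOK  bool
	children []*Vertex
}

// process runs CheckApply and pokes downstream only when something
// actually changed, which eliminates redundant pokes without having
// the Watch loop participate in state tracking at all.
func (v *Vertex) process() error {
	checkOK, err := v.Res.CheckApply(true)
	if err != nil {
		return err
	}
	v.stateOK = checkOK
	if !checkOK { // a change was applied: poke our children
		for _, c := range v.children {
			if err := c.process(); err != nil { // simplified; the engine queues events
				return err
			}
		}
	}
	return nil
}
```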
This is a monster patch that splits out the yaml- and puppet-based graph
generation and pushes them behind a common API. In addition, alternate
pluggable GAPIs can be easily added! The important side benefit is that
you can now write a custom GAPI for embedding mgmt!
This also includes some slight cleanups that I didn't think were worth
splitting into separate patches.
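The common API is an interface that each graph provider implements; approximately sketched below (the method names and Data type are approximations, and the gapi package has the real definition):

```go
package gapi

import "github.com/purpleidea/mgmt/pgraph"

// Data carries the handles main passes to a provider; fields elided,
// and this whole sketch is an approximation of the real interface.
type Data struct{}

// GAPI is the pluggable graph API implemented by the yaml and puppet
// frontends, and by any custom frontend when embedding mgmt.
type GAPI interface {
	Init(Data) error               // initialize the provider
	Graph() (*pgraph.Graph, error) // build/return the current graph
	SwitchStream() chan error      // signals when a new graph is ready
	Close() error                  // clean shutdown
}
```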
This is an initial implementation of a possible golang API. In this
particular version, the *gconfig.GraphConfig data structures are
emitted instead of building a pgraph directly. As long as we can
represent any local graph as the data structure, then this is fine!
Is there a way to merge the gconfig Vertex and the pgraph Vertex?
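In spirit, an embedding program builds the same structure the yaml parser produces; a loose, hypothetical sketch (the import path and field names are illustrative and may not match gconfig exactly):

```go
package example

import "github.com/purpleidea/mgmt/gconfig" // import path approximate

// myGraph is a hypothetical Go frontend: it emits the same data
// structure the yaml parser produces and lets the engine build the
// pgraph from it later. Field names here are illustrative only.
func myGraph() (*gconfig.GraphConfig, error) {
	config := &gconfig.GraphConfig{
		Graph: "embedded-example", // the graph name, as yaml would set it
	}
	// append resources and edges to config here, exactly as parsed
	// yaml would contain them; no pgraph is constructed at this stage
	return config, nil
}
```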
If you don't give your two-host cluster enough time to "feel healthy",
it will generate an error if you do operations within five seconds. This
is a regression, and the five seconds is also quite arbitrary. This is
detailed at: https://github.com/coreos/etcd/issues/6305
This seems to be a bit of a race condition, even with a 10s timer, so
this also disables the StrictReconfigCheck. Re-enable this as soon as
possible.
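With the embedded etcd server this workaround amounts to flipping one knob; a minimal sketch using the etcd embed package (the data directory is a placeholder):

```go
package main

import (
	"log"

	"github.com/coreos/etcd/embed"
)

func main() {
	cfg := embed.NewConfig()
	cfg.Dir = "default.etcd" // placeholder data directory
	// Workaround: don't reject membership changes on a cluster that
	// hasn't looked healthy for long enough; re-enable this check as
	// soon as the upstream issue is resolved.
	cfg.StrictReconfigCheck = false

	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()
	<-e.Server.StopNotify() // block until the server stops
}
```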
This is a new mode to be used for bootstrapping mgmt clusters or in
situations with tight operational restrictions.
This includes the basics; additional functionality will follow!