This is a new mode to be used for bootstrapping mgmt clusters or in
situations with tight operational restrictions.
This includes the basics; additional functionality will follow!
The MemberList() function, which apparently looks at all of the endpoints,
was blocking right after a member exited, because we still had a stale
list of endpoints. While a member that exited safely would update the
endpoints list, the other members would (unavoidably) race to receive
that message, and so one of them might call MemberList() with the
now-stale endpoint list. With this patch, we immediately invalidate an
endpoint that we know to be gone.
This also adds a simple test case to catch this scenario.
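To illustrate the fix, here's a minimal sketch against the etcd clientv3 API; the pruneAndList helper and its surrounding plumbing are hypothetical stand-ins, not the code in this patch:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

// pruneAndList drops an endpoint we know has exited *before* calling
// MemberList(), so that the call can't block on the stale address.
func pruneAndList(c *clientv3.Client, gone string) error {
	var eps []string
	for _, ep := range c.Endpoints() {
		if ep != gone { // invalidate the known-dead endpoint immediately
			eps = append(eps, ep)
		}
	}
	c.SetEndpoints(eps...) // the client now skips the stale member

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	resp, err := c.MemberList(ctx)
	if err != nil {
		return err
	}
	log.Printf("got %d members", len(resp.Members))
	return nil
}
```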
These patches contain proposed upstream changes to etcd, as well as
code taken from etcd.
Ideally we would revert this patch when/if things are merged upstream!
The majority of the work is in: https://github.com/coreos/etcd/pull/5584
This monster patch embeds the etcd server. It took a good deal of
iterative work to tweak small details, and survived a rewrite from the
initial etcd v2 API implementation to the beta version of v3.
It has a notable race, and is missing some features, but it is ready for
git master and external developer consumption.
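For anyone curious what embedding the server looks like in rough strokes, here's a hedged sketch using etcd's embed package; the data dir and startup timeout are placeholder choices, not necessarily what this patch does:

```go
package main

import (
	"log"
	"time"

	"github.com/coreos/etcd/embed"
)

func main() {
	cfg := embed.NewConfig()
	cfg.Dir = "/tmp/mgmt-etcd" // throwaway data dir for testing
	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()

	select {
	case <-e.Server.ReadyNotify():
		log.Println("embedded etcd server is ready")
	case <-time.After(60 * time.Second):
		e.Server.Stop() // trigger a shutdown if startup hangs
		log.Fatal("embedded etcd took too long to start")
	}
	log.Fatal(<-e.Err()) // block until the server errors out
}
```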
The old converged detection was hacked directly into the code, instead
of sitting behind a nice interface. This cleans it up, splits it out
into a separate file, and removes a race condition that the old code
had.
We also take the time to get rid of the ugly Set* methods and replace
them all with a single AssociateData method. This might be unnecessary
if we can pass in the Converger method at Resource construction.
Lastly, and most interestingly, we suspend the individual timeout
callers once they've converged, thus reducing unnecessary traffic and
preventing fast (eg: < 5 second) timers from triggering more than once
if they stay converged!
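To make the shape of this concrete, here's a hedged sketch of such a converger; all the names (Converger, SetConverged, caller, ...) are illustrative stand-ins, not mgmt's actual interface:

```go
package main

import (
	"sync"
	"time"
)

// Converger tracks whether every registered caller has converged.
type Converger struct {
	mu    sync.Mutex
	state map[int]bool
	next  int
}

func NewConverger() *Converger {
	return &Converger{state: make(map[int]bool)}
}

// Register adds a new caller, initially unconverged, and returns its id.
func (c *Converger) Register() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	id := c.next
	c.next++
	c.state[id] = false
	return id
}

// SetConverged records one caller's converged status.
func (c *Converger) SetConverged(id int, ok bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.state[id] = ok
}

// Converged reports whether *every* caller has converged.
func (c *Converger) Converged() bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	for _, ok := range c.state {
		if !ok {
			return false
		}
	}
	return true
}

// caller shows the timer suspension trick: once we report converged,
// the timer stays stopped, so a short (eg: < 5s) timeout can't fire
// over and over while we remain converged.
func caller(c *Converger, id int, timeout time.Duration, events <-chan struct{}) {
	t := time.NewTimer(timeout)
	for {
		select {
		case <-events:
			c.SetConverged(id, false)
			if !t.Stop() {
				select { // drain if the timer already fired
				case <-t.C:
				default:
				}
			}
			t.Reset(timeout)
		case <-t.C:
			c.SetConverged(id, true)
			// the timer now stays suspended until a new event arrives
		}
	}
}
```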
A quick note on theory for any future readers... What happens if we have
--converged-timeout=0 ? Well, for this, as for any other non-negative
value, it's important to realize that deciding whether something has
converged is actually a race between whether the converged timer will
fire and whether some random new event will get triggered. This is
because nothing can actually predict if or when a new event will happen
(eg: the user modifying a file). As a result, the race is always
inherent, and doesn't make the algorithm wrong or "incorrect".
A future improvement could be to add a global lock to each resource, and
to lock all resources when computing whether or not we are converged. In
practice, this hasn't been necessary. The worst case scenario (in
theory, because this hasn't been tested) would be: an event happens
*during* the converged calculation and starts running, the exit command
then runs, and the event finishes, but it never gets a chance to notify
some service to restart. A lock could probably fix this theoretical
case.
Sorry for the size of this patch; I was busy hacking and plumbing away,
and it got out of hand! I'm allowing this because nobody else seems to
be hacking on the parts of the code that this would break, and the
resource code stays fairly stable in this change. In particular, it
revisits and refreshes some areas of the code that haven't seen anything
new or innovative since the project first started. I've gotten rid of a
lot of cruft, and in particular cleaned up some things that I didn't
know how to do better before! Here's hoping I'll continue to learn and
have more to improve upon in the future! (Well, let's not hope _too_
hard though!)
The goal of this patch was to make logical grouping of resources
possible. For example, it might be more efficient to group three package
installations into a single transaction, instead of having to run three
separate transactions. This is because a package installation typically
has an initial one-time, per-run cost which shouldn't need to be
repeated.
Another future goal would be to group file resources sharing a common
base path under a common recursive fanotify watcher. Since this depends
on fanotify capabilities that aren't available yet, it hasn't been
implemented, but it could be a useful way to reduce the number of
separate watches needed, since there is a finite limit on them.
It's worth mentioning that grouping resources typically _reduces_ the
parallel execution capability of a particular graph, but depending on
the cost/benefit tradeoff, this might be preferential. I'd submit it's
almost universally beneficial for pkg resources.
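As a hedged sketch of what a grouping pass could look like (GroupableRes, GroupCmp, GroupRes, and autoGroup are illustrative names, not the interface or algorithm this patch actually ships):

```go
package main

// GroupableRes is a hypothetical resource that knows how to merge
// with compatible siblings of the same kind.
type GroupableRes interface {
	Kind() string                      // eg: "pkg", "file", "svc"
	GroupCmp(other GroupableRes) bool  // can these two be grouped?
	GroupRes(other GroupableRes) error // merge other into this one
}

// autoGroup greedily merges compatible resources; a naive O(n^2)
// pass, in the spirit of the placeholder algorithm mentioned below.
func autoGroup(resources []GroupableRes) []GroupableRes {
	var out []GroupableRes
	for _, r := range resources {
		merged := false
		for _, g := range out {
			if g.Kind() == r.Kind() && g.GroupCmp(r) {
				if err := g.GroupRes(r); err == nil {
					merged = true
					break
				}
			}
		}
		if !merged {
			out = append(out, r)
		}
	}
	return out
}
```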
This monster patch includes:
* the autogroup feature
* the grouping interface
* a placeholder algorithm
* an extensive test case infrastructure to test grouping algorithms
* a refactoring that moves some base resource methods into pgraph
* some config/compile clean ups to remove code duplication
* b64 encoding/decoding improvements
* a rename of the yaml "res" entries to "kind" (more logical)
* some docs
* small fixes
* and more!
Naming the resources "type" was a stupid mistake, and is a huge source
of confusion when also talking about real types. Fix this before it gets
out of hand.
* Fixup graph state readability
* Rename original SetState() to SetConvergedState() and friends...
* Add type state management for proper BackPoke() commands...
* Add better DEBUG logging
This is an important optimization that prevents running a BackPoke on a
parent which is in the process of running and which will most certainly
poke the caller back in a moment. This avoids unnecessary roundtrips.
Unfortunately, other algorithms are still required so that races can't
cause the graph to run for longer than necessary.
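A minimal sketch of that check, with hypothetical State and Vertex types standing in for mgmt's real ones:

```go
package main

type State int

const (
	StateIdle    State = iota
	StateRunning // currently applying its changes
	StatePoked   // queued to run
)

// Vertex is a stand-in for a resource in the graph.
type Vertex struct {
	state  State
	events chan string
}

// BackPoke asks a parent to re-run, but skips the poke when the
// parent is already running or queued: it will poke us forward on
// its own in a moment, so the roundtrip would be wasted.
func BackPoke(parent *Vertex) {
	if parent.state == StateRunning || parent.state == StatePoked {
		return // parent will poke us back anyway
	}
	parent.events <- "backpoke"
}
```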
* Add exec type
* Switch erroneous use of fmt to log instead
* Check for edge existence for safety before using
* Avoid recalling etcd channel maker
* Clean up logging output
This is the third main feature of this system. The code needs a bunch of
polish, but it actually all works :)
I've tested this briefly with N <= 3.
Currently you have to build your own etcd cluster. It's quite easy: just
run `etcd` and it will be ready. I usually run it in a throwaway /tmp/
dir so that I can blow away the stored data easily.
This is still a dirty prototype, so please excuse the mess, and please
excuse the fact that this is a mega patch. Once things settle down, this
won't happen any more.
Some of the changes squashed into here include:
* Merge vertex loop with type loop
(The file watcher seems to cache events anyway)
* Improve pgraph library
* Add indegree, outdegree, and topological sort with tests (a sketch of the sort appears after this list)
* Add reverse function for vertex list
* Tons of additional cleanup!
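For reference, here's a hedged sketch of such a topological sort using Kahn's algorithm over a plain adjacency map, rather than the actual pgraph types:

```go
package main

import "fmt"

// topoSort returns vertices in dependency order, or an error if the
// graph has a cycle. Edges point from prerequisite to dependent.
func topoSort(adj map[string][]string) ([]string, error) {
	indegree := make(map[string]int)
	for v := range adj {
		if _, ok := indegree[v]; !ok {
			indegree[v] = 0
		}
		for _, w := range adj[v] {
			indegree[w]++ // also creates entries for sink vertices
		}
	}
	var queue []string
	for v, d := range indegree {
		if d == 0 {
			queue = append(queue, v) // vertices with no prerequisites
		}
	}
	var order []string
	for len(queue) > 0 {
		v := queue[0]
		queue = queue[1:]
		order = append(order, v)
		for _, w := range adj[v] {
			indegree[w]--
			if indegree[w] == 0 {
				queue = append(queue, w)
			}
		}
	}
	if len(order) != len(indegree) {
		return nil, fmt.Errorf("graph contains a cycle")
	}
	return order, nil
}
```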
Amazingly, on my first successful compile, this seemed to run!
A special thanks to Ira Cooper, who talked through some of the
algorithmic decisions with me and helped find better ones!