Add resource auto grouping

Sorry for the size of this patch; I was busy hacking and plumbing away
and it got out of hand! I'm allowing this because there doesn't seem to
be anyone hacking on the parts of the code that this would break, and
the resource code is fairly stable going into this change. In
particular, it revisits and refreshes some areas of the code that
hadn't seen anything new or innovative since the project first started.
I've gotten rid of a lot of cruft, and in particular cleaned up some
things that I didn't know how to do better before! Here's hoping I'll
continue to learn and have more to improve upon in the future! (Well,
let's not hope _too_ hard though!)

The main goal of this patch was to make logical grouping of resources
possible. For example, it might be more efficient to group three
package installations into a single transaction, instead of having to
run three separate transactions. This is because a package installation
typically has an initial, one-time per-run cost which shouldn't need to
be repeated.
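To make the idea concrete, here is a minimal, hypothetical sketch of
what a grouping interface could look like; the names GroupableRes,
GroupCmp, GroupRes and PkgRes are invented for illustration and are not
necessarily the API this patch adds:

package main

import "fmt"

// GroupableRes is a hypothetical sketch of a grouping interface; the
// actual method names in this patch may differ. A resource opts in to
// auto grouping by comparing itself against a peer and, when they are
// compatible, absorbing it.
type GroupableRes interface {
	Kind() string                      // eg: "pkg", "file", ...
	GroupCmp(other GroupableRes) bool  // can these two resources be merged?
	GroupRes(other GroupableRes) error // merge other into this resource
}

// PkgRes is a toy package resource used only for this illustration.
type PkgRes struct {
	Names []string // package names handled by this (possibly grouped) resource
}

func (p *PkgRes) Kind() string { return "pkg" }

// GroupCmp allows grouping with any other pkg resource.
func (p *PkgRes) GroupCmp(other GroupableRes) bool {
	return other.Kind() == "pkg"
}

// GroupRes folds the other pkg resource's package list into this one,
// so that a single package manager transaction can install everything.
func (p *PkgRes) GroupRes(other GroupableRes) error {
	o, ok := other.(*PkgRes)
	if !ok {
		return fmt.Errorf("cannot group a %s resource into pkg", other.Kind())
	}
	p.Names = append(p.Names, o.Names...)
	return nil
}

func main() {
	a := &PkgRes{Names: []string{"cowsay"}}
	b := &PkgRes{Names: []string{"fortune-mod"}}
	c := &PkgRes{Names: []string{"sl"}}
	for _, r := range []GroupableRes{b, c} {
		if a.GroupCmp(r) {
			_ = a.GroupRes(r) // three separate installs become one transaction
		}
	}
	fmt.Println(a.Names) // [cowsay fortune-mod sl]
}

This sketch ignores the parts that make the real problem interesting,
such as keeping references to the original, un-grouped vertices; that
is why the diff below builds a separate fullGraph and only copies it
into the active graph.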

Another future goal would be to group file resources sharing a common
base path under a common recursive fanotify watcher. Since this depends
on fanotify capabilities being available first, it hasn't been
implemented yet, but it could be a useful way of reducing the number of
separate watches needed, since there is a finite limit on them.
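Purely as a sketch of that idea (nothing like this exists in the patch
yet, and the helper name commonBase is made up), a group of file
resources could be mapped to a single recursive watch rooted at their
deepest shared directory:

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// commonBase returns the deepest directory shared by all of the given
// paths; a single recursive watcher placed there could cover every file
// resource in the group, instead of needing one watch per file.
func commonBase(paths []string) string {
	if len(paths) == 0 {
		return ""
	}
	base := filepath.Dir(paths[0])
	for _, p := range paths[1:] {
		dir := filepath.Dir(p)
		// walk up until base is a parent of (or equal to) dir
		for base != "/" && !strings.HasPrefix(dir+"/", base+"/") {
			base = filepath.Dir(base)
		}
	}
	return base
}

func main() {
	files := []string{"/etc/foo/a.conf", "/etc/foo/bar/b.conf", "/etc/foo/c.conf"}
	fmt.Println(commonBase(files)) // /etc/foo -> one recursive watch instead of three
}

The tradeoff is that the shared watch also fires for unrelated events
under the base path, so this only pays off when the number of available
watches is the limiting factor.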

It's worth mentioning that grouping resources typically _reduces_ the
parallel execution capability of a particular graph, but depending on
the cost/benefit tradeoff, this might be preferable. I'd submit it's
almost universally beneficial for pkg resources.

This monster patch includes:
* the autogroup feature
* the grouping interface
* a placeholder algorithm
* an extensive test case infrastructure to test grouping algorithms
* a refactoring that moves some base resource methods into pgraph
* some config/compile clean ups to remove code duplication
* b64 encoding/decoding improvements
* a rename of the yaml "res" entries to "kind" (more logical)
* some docs
* small fixes
* and more!
James Shubin
2016-03-20 01:17:35 -04:00
parent 9720812a78
commit 1b01f908e3
36 changed files with 1791 additions and 515 deletions

main.go

@@ -63,7 +63,7 @@ func run(c *cli.Context) {
 	converged := make(chan bool) // converged signal
 	log.Printf("This is: %v, version: %v", program, version)
 	log.Printf("Main: Start: %v", start)
-	G := NewGraph("Graph") // give graph a default name
+	var G, fullGraph *Graph
 	// exit after `max-runtime` seconds for no reason at all...
 	if i := c.Int("max-runtime"); i > 0 {
@@ -102,10 +102,11 @@ func run(c *cli.Context) {
 		if !c.Bool("no-watch") {
 			configchan = ConfigWatch(file)
 		}
-		log.Printf("Etcd: Starting...")
+		log.Println("Etcd: Starting...")
 		etcdchan := etcdO.EtcdWatch()
 		first := true // first loop or not
 		for {
+			log.Println("Main: Waiting...")
 			select {
 			case _ = <-startchan: // kick the loop once at start
 				// pass
@@ -134,17 +135,29 @@ func run(c *cli.Context) {
 			}
 			// run graph vertex LOCK...
-			if !first { // XXX: we can flatten this check out I think
-				log.Printf("State: %v -> %v", G.SetState(graphPausing), G.GetState())
+			if !first { // TODO: we can flatten this check out I think
 				G.Pause() // sync
 				log.Printf("State: %v -> %v", G.SetState(graphPaused), G.GetState())
 			}
-			// build the graph from a config file
-			// build the graph on events (eg: from etcd)
-			if !UpdateGraphFromConfig(config, hostname, G, etcdO) {
-				log.Fatal("Config: We borked the graph.") // XXX
+			// build graph from yaml file on events (eg: from etcd)
+			// we need the vertices to be paused to work on them
+			if newFullgraph, err := fullGraph.NewGraphFromConfig(config, etcdO, hostname); err == nil { // keep references to all original elements
+				fullGraph = newFullgraph
+			} else {
+				log.Printf("Config: Error making new graph from config: %v", err)
+				// unpause!
+				if !first {
+					G.Start(&wg, first) // sync
+				}
+				continue
 			}
+			G = fullGraph.Copy() // copy to active graph
+			// XXX: do etcd transaction out here...
+			G.AutoEdges() // add autoedges; modifies the graph
+			//G.AutoGroup() // run autogroup; modifies the graph // TODO
+			// TODO: do we want to do a transitive reduction?
 			log.Printf("Graph: %v", G) // show graph
 			err := G.ExecGraphviz(c.String("graphviz-filter"), c.String("graphviz"))
 			if err != nil {
@@ -159,9 +172,7 @@ func run(c *cli.Context) {
 			// some are not ready yet and the EtcdWatch
 			// loops, we'll cause G.Pause(...) before we
 			// even got going, thus causing nil pointer errors
-			log.Printf("State: %v -> %v", G.SetState(graphStarting), G.GetState())
 			G.Start(&wg, first) // sync
-			log.Printf("State: %v -> %v", G.SetState(graphStarted), G.GetState())
 			first = false
 		}
 	}()