Resources can send "refresh" notifications along edges. These messages
are sent whenever the upstream (initiating) vertex changes state. When
the changed state propagates downstream, it will be paired with a
refresh flag which can be queried in the CheckApply method of that
resource.
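Here is a minimal sketch of how a resource might query that flag; the
Refresh() helper and the FakeRes type are hypothetical names used for
illustration only, not the actual mgmt API:

```go
package main

import "fmt"

// FakeRes is a stand-in resource used only for illustration.
type FakeRes struct {
	refresh bool // set when an upstream vertex changed state
}

// Refresh reports whether a refresh notification is pending.
func (obj *FakeRes) Refresh() bool {
	return obj.refresh
}

// CheckApply checks the state, and applies a fix if apply is true.
func (obj *FakeRes) CheckApply(apply bool) (bool, error) {
	if obj.Refresh() {
		// an upstream vertex changed state: take the expensive
		// path, eg: restart a service instead of just checking it
		fmt.Println("refresh requested, restarting...")
	}
	// ... normal check/apply logic would go here ...
	return true, nil
}

func main() {
	res := &FakeRes{refresh: true}
	if _, err := res.CheckApply(true); err != nil {
		fmt.Println(err)
	}
}
```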
Future work will include a stateful refresh tracking mechanism: if a
refresh event is generated and not consumed, it will be saved across an
interrupt (shutdown) or a crash, and re-applied on the subsequent run.
This is important because the unapplied refresh is a form of hysteresis
which needs to be tracked and remembered; otherwise we won't be able to
determine that the state is wrong!
Still to do:
* Update the autogrouping code to handle the edge notify properties!
* Actually finish the stateful bool code
This is a new design idea which I had. Whether it stays around or not is
up for debate. For now it's a rough POC.
The idea is that any resource can _produce_ data, and any resource can
_consume_ data. This is what we call send and recv. By linking the two
together, data can be passed directly between resources, which will
maximize code re-use, and allow for some interesting logical graphs.
For example, you might have an HTTP resource which sends its output to
a File resource that writes it to a particular path. This avoids having
to overload the HTTP resource with all of the special behaviours of the
File resource.
For our POC, I implemented a `password` resource that generates a
random string, which can then be passed to a receiver such as a file. At
this point the password resource isn't recommended for sensitive
applications because it caches the password as plain text.
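To make the wiring concrete, here is a toy sketch of a send->recv
link; the PasswordRes, FileRes and link names are hypothetical, and a
real implementation would presumably match fields by name and check
their types via reflection:

```go
package main

import "fmt"

// PasswordRes produces a value (the "send" side).
type PasswordRes struct {
	Password string // generated output
}

// FileRes consumes a value (the "recv" side).
type FileRes struct {
	Content string // received input
}

// link copies the sent value into the receiver; a real implementation
// would match fields by name and verify the types via reflection.
func link(send *PasswordRes, recv *FileRes) {
	recv.Content = send.Password
}

func main() {
	p := &PasswordRes{Password: "hunter2"} // pretend this was generated
	f := &FileRes{}
	link(p, f) // wire the send -> recv pair along a graph edge
	fmt.Println(f.Content)
}
```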
Still to do:
* Statically check all of the type matching before we run the graph
* Verify that our autogrouping works correctly around this feature
* Verify that appropriate edges exist between send->recv pairs
* Label the password as generated instead of storing the plain text
* Consider moving password logic from Init() to CheckApply()
* Consider combining multiple send values (list?) into a single receiver
* Consider intermediary transformation nodes for value combining
This ineffassign didn't seem to cause problems, but perhaps we didn't
exhaustively test all of the affected areas.
Watch out that this doesn't break any of the sync requirements, e.g.:
"A WaitGroup must not be copied after first use."
https://golang.org/pkg/sync/#WaitGroup
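For reference, the safe pattern is to share a single WaitGroup by
pointer, never by value:

```go
package main

import (
	"fmt"
	"sync"
)

// worker takes *sync.WaitGroup: passing the struct by value would copy
// it after first use, which the sync package docs forbid.
func worker(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Println("worker", id, "done")
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go worker(i, &wg) // pass a pointer, never a copy
	}
	wg.Wait()
}
```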
You can try it out yourself by running `go build` and then calling it.
Use a bare integer argument to create that number of noop resources.
There are clearly some performance optimizations that we could do for
extremely large graphs.
This is a monster patch that splits out the yaml and puppet based graph
generation and pushes them behind a common API. In addition, alternate
pluggable GAPIs can easily be added! The important side benefit is that
you can now write a custom GAPI for embedding mgmt!
This also includes some slight cleanups that I didn't think were worth
splitting into separate patches.
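As a rough sketch of the idea (the type and method names here are
illustrative, not the actual mgmt interface), a pluggable GAPI could
look something like this:

```go
package main

import "fmt"

// Graph is a stand-in for the real graph type; illustration only.
type Graph struct {
	Name string
}

// GAPI is a sketch of what a pluggable graph API might look like; the
// method set here is hypothetical.
type GAPI interface {
	Graph() (*Graph, error) // build and return the resource graph
	Close() error           // clean up the provider
}

// yamlGAPI would wrap the yaml based graph generation.
type yamlGAPI struct {
	file string
}

func (obj *yamlGAPI) Graph() (*Graph, error) {
	// parse obj.file and build the graph here...
	return &Graph{Name: obj.file}, nil
}

func (obj *yamlGAPI) Close() error { return nil }

func main() {
	var gapi GAPI = &yamlGAPI{file: "graph1.yaml"} // hypothetical file
	g, err := gapi.Graph()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("got graph:", g.Name)
}
```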
Remove the implicit emulator path from the domain definition. Libvirt is
already configured to use the correct emulator for kvm or qemu and
specifying it creates a distro dependency.
Fixes #85
This is an initial implementation of a possible golang API. In this
particular version, the *gconfig.GraphConfig data structures are
emitted, instead of building a pgraph directly. As long as we can
represent any local graph with this data structure, this is fine!
Is there a way to merge the gconfig Vertex and the pgraph Vertex?
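For illustration, building such a data structure directly might look
like this; the field names below are hypothetical stand-ins for the
real *gconfig.GraphConfig fields:

```go
package main

import "fmt"

// Resource and Edge are simplified stand-ins; illustration only.
type Resource struct {
	Kind string
	Name string
}

type Edge struct {
	From string
	To   string
}

// GraphConfig is a simplified stand-in for *gconfig.GraphConfig; the
// field names are hypothetical.
type GraphConfig struct {
	Graph     string
	Resources []Resource
	Edges     []Edge
}

func main() {
	// emit the data structure directly, instead of building a pgraph
	cfg := &GraphConfig{
		Graph: "example",
		Resources: []Resource{
			{Kind: "noop", Name: "noop1"},
			{Kind: "file", Name: "file1"},
		},
		Edges: []Edge{{From: "noop1", To: "file1"}},
	}
	fmt.Printf("%+v\n", cfg)
}
```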
This adds an initial implementation of a virt resource based on libvirt.
It is not complete and requires more testing. The initial skeleton was
written by nseps but was not merged. It was later cleaned up and merged
in its current form by purpleidea. Many thanks to nseps for getting this
going, and hopefully we'll get you contributing more in the future!
Felix pointed out that these IDs are unique, but not universally unique
across the cluster, which might be confusing to new programmers. As a
result, rename them all.