This signals which resources have to run their initial pokes, and
removes the racy retry timer. We actually get a proper signal when
things are running too!
This removes some boilerplate from the Watch methods; it can be baked
into the engine instead.
This code should be checked for races and locks to make sure we only
start resources when it makes sense to.
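Roughly, the idea looks like this (the names here are made up for the
sketch, not the actual engine API): a channel that gets closed once the
Watch loop is running gives us a real readiness signal to block on,
instead of a racy retry timer.

```go
package main

import (
	"fmt"
	"sync"
)

// vertex models a resource in the graph; started is closed exactly
// once, when its Watch loop is known to be running.
type vertex struct {
	name    string
	started chan struct{}
}

func (v *vertex) watch(events <-chan string, wg *sync.WaitGroup) {
	defer wg.Done()
	close(v.started) // signal: safe to send the initial poke now
	for e := range events {
		fmt.Println(v.name, "received:", e)
	}
}

func main() {
	var wg sync.WaitGroup
	v := &vertex{name: "file1", started: make(chan struct{})}
	events := make(chan string)
	wg.Add(1)
	go v.watch(events, &wg)

	<-v.started // a proper readiness signal, not a racy retry timer
	events <- "initial poke"
	close(events)
	wg.Wait()
}
```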
This takes the Converged initialization and Startup patterns that are
common to all resources, and bakes them into the core engine. This way,
writing a resource is much more concise, and there is less boilerplate!
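A toy illustration of the idea, with hypothetical names rather than the
real engine types: the engine wraps each resource's Watch with the
shared bookkeeping, so the resource body contains only its own logic.

```go
package main

import "fmt"

// Res is a trimmed-down resource interface; in this sketch the engine,
// not each resource, handles the shared Startup/Converged bookkeeping.
type Res interface {
	Name() string
	Watch(events <-chan string) // resource-specific logic only
}

type engine struct{}

// run wraps every resource's Watch with the common pattern that each
// Watch method used to repeat by hand. (Hypothetical names; the real
// Converged logic involves timeouts, which are elided here.)
func (e *engine) run(r Res, events <-chan string) {
	fmt.Println(r.Name(), "startup")         // common Startup handling
	defer fmt.Println(r.Name(), "converged") // common Converged handling
	r.Watch(events)
}

type noopRes struct{ name string }

func (n *noopRes) Name() string { return n.name }
func (n *noopRes) Watch(events <-chan string) {
	for range events { // no boilerplate left in the resource itself
	}
}

func main() {
	e := &engine{}
	ch := make(chan string)
	close(ch)
	e.run(&noopRes{name: "noop1"}, ch)
}
```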
This polishes the password resource so that it can actually avoid
writing the password to disk, and so that the work actually happens in
CheckApply where it can properly interact with the graph. This resource
now re-generates the password when it receives a notification.
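Here's a minimal sketch of how that could look, assuming a simplified
checkApply signature (the real method and types differ): the password
stays in memory rather than on disk, and a refresh forces regeneration.

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// passwordRes is a toy model of the idea: the password lives only in
// memory, and is regenerated whenever a refresh notification arrives.
type passwordRes struct {
	password string
}

// checkApply regenerates the password when refresh is true; the engine
// would pass the real refresh flag in. (Hypothetical signature.)
func (p *passwordRes) checkApply(refresh bool) (checkOK bool, err error) {
	if p.password != "" && !refresh {
		return true, nil // state is fine, nothing to do
	}
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		return false, err
	}
	p.password = base64.StdEncoding.EncodeToString(b)
	return false, nil // false: we had to apply a change
}

func main() {
	p := &passwordRes{}
	p.checkApply(false) // first run generates a password
	old := p.password
	p.checkApply(true) // a refresh notification forces regeneration
	fmt.Println("regenerated:", old != p.password)
}
```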
The send/recv plumbing has been extended so that receivers can detect
when they're receiving new values. This is particularly important if
they might otherwise not expect those values to change and cache them
for efficiency purposes.
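A tiny model of that detection, with a hypothetical recv helper: the
receiver is told whether the incoming value actually differs from what
it has cached, so it knows when to invalidate.

```go
package main

import "fmt"

// receiver caches the last value it was sent, for efficiency.
type receiver struct {
	cached string
}

// recv accepts a sent value and reports whether it actually changed.
func (r *receiver) recv(v string) (changed bool) {
	if v == r.cached {
		return false // same value: keep using the cache
	}
	r.cached = v
	return true // new value: the receiver must react
}

func main() {
	r := &receiver{}
	fmt.Println(r.recv("s3cret")) // true: first value received
	fmt.Println(r.recv("s3cret")) // false: unchanged, cache stays valid
	fmt.Println(r.recv("n3w"))    // true: sender produced a new value
}
```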
Resources can send "refresh" notifications along edges. These messages
are sent whenever the upstream (initiating vertex) changes state. When
the changed state propagates downstream, it will be paired with a
refresh flag which can be queried in the CheckApply method of that
resource.
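Illustratively (these types are invented for the sketch, not the
engine's own), the pairing could look like this: a state change on a
notifying edge yields a downstream poke that carries refresh=true.

```go
package main

import "fmt"

// edge carries a notify property: when the upstream vertex changes
// state, the poke it sends downstream is paired with refresh=true.
type edge struct {
	notify bool
}

type poke struct {
	refresh bool
}

// pokeDownstream models the engine pairing an upstream state change
// with the refresh flag on notifying edges. (Illustrative only.)
func pokeDownstream(changed bool, e edge) poke {
	return poke{refresh: changed && e.notify}
}

func main() {
	e := edge{notify: true}
	fmt.Println(pokeDownstream(true, e))  // {true}: CheckApply sees a refresh
	fmt.Println(pokeDownstream(false, e)) // {false}: plain poke, no refresh
}
```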
Future work will include a stateful refresh tracking mechanism so that
if a refresh event is generated and not consumed, it will be saved
across an interrupt (shutdown) or a crash so that it can be re-applied
on the subsequent run. This is important because the unapplied refresh
is a form of hysteresis which needs to be tracked and remembered, or we
won't be able to determine that the state is wrong!
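A minimal sketch of what such a stateful bool might look like, assuming
a file-on-disk marker (the path and names are purely illustrative):

```go
package main

import (
	"fmt"
	"os"
)

// statefulBool sketches the planned mechanism: an unconsumed refresh is
// persisted as a marker file so it survives a shutdown or a crash.
type statefulBool struct {
	path string
}

func (s *statefulBool) Set() error   { return os.WriteFile(s.path, nil, 0600) }
func (s *statefulBool) Get() bool    { _, err := os.Stat(s.path); return err == nil }
func (s *statefulBool) Clear() error { return os.Remove(s.path) }

func main() {
	b := &statefulBool{path: "/tmp/refresh-pending"}
	b.Set() // refresh generated but not yet consumed
	// ... a crash or shutdown could happen here ...
	fmt.Println(b.Get()) // true on the next run: re-apply the refresh
	b.Clear()            // consumed: forget the pending refresh
}
```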
Still to do:
* Update the autogrouping code to handle the edge notify properties!
* Actually finish the stateful bool code
This is a new design idea which I had. Whether it stays around or not is
up for debate. For now it's a rough POC.
The idea is that any resource can _produce_ data, and any resource can
_consume_ data. This is what we call send and recv. By linking the two
together, data can be passed directly between resources, which will
maximize code re-use, and allow for some interesting logical graphs.
For example, you might have an HTTP resource which puts its output in a
particular file. This avoids having to overload the HTTP resource with
all of the special behaviours of the File resource.
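As a sketch of the wiring, with invented interfaces rather than the real
plumbing: the HTTP resource produces a value, and the file resource
consumes it along a graph edge.

```go
package main

import "fmt"

// Any resource may produce values, and any resource may consume them;
// the graph wires the send/recv pairs together. (Hypothetical types.)
type sender interface{ Send() map[string]string }
type recver interface{ Recv(map[string]string) }

type httpRes struct{ body string }

// Send exposes the fetched body as a named value for receivers.
func (h *httpRes) Send() map[string]string {
	return map[string]string{"content": h.body}
}

type fileRes struct{ content string }

// Recv consumes a sent value instead of duplicating fetch logic.
func (f *fileRes) Recv(vals map[string]string) {
	f.content = vals["content"]
}

func main() {
	h := &httpRes{body: "response body"}
	f := &fileRes{}
	f.Recv(h.Send()) // the engine would do this along a graph edge
	fmt.Println(f.content)
}
```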
For our POC, I implemented a `password` resource which generates a
random string which can then be passed to a receiver such as a file. At
this point the password resource isn't recommended for sensitive
applications because it caches the password as plain text.
Still to do:
* Statically check all of the type matching before we run the graph
* Verify that our autogrouping works correctly around this feature
* Verify that appropriate edges exist between send->recv pairs
* Label the password as generated instead of storing the plain text
* Consider moving password logic from Init() to CheckApply()
* Consider combining multiple send values (list?) into a single receiver
* Consider intermediary transformation nodes for value combining
Felix pointed out that these IDs are unique, but not universally unique
across the cluster, which might be confusing to new programmers. As a
result, rename them all.
This was a sneaky deadlock which wasn't 100% reproducible. It was pretty
tricky to find because the same deadlocked behaviour had also been seen
due to the regression in the fsnotify module.
The event system needs a bit of a cleaning, but this can happen later.
This makes things logically more separate! :) As an aside...
I really hate the way golang does dependencies and packages. Yes, some
people insist on nesting their code deep into a $GOPATH, which is fine
if you're a google dev and are forced to work this way, but annoying for
the rest of the world. Your code shouldn't need a git commit to switch
to a different VCS host! Gah, I hate this so much.