When we send a ^C to the main process, our children see it too! This
patch puts them in their own process group so that they're not affected.
There's still the matter of properly hooking up the internal exit signal
to a proper shutdown, but that's separate.
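For reference, here's roughly what that looks like in Go on Linux. This
is only a sketch (not the actual mgmt code), and the child command is
just a stand-in:

package main

import (
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for a real child process
	// Setpgid puts the child into its own process group, so a ^C (SIGINT)
	// sent to the parent's foreground process group won't reach it.
	cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = cmd.Wait() // normally we'd shut it down deliberately instead
}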
This might mean that we should add an interrupt aspect to the resource
API, which would allow a second ^C seen by the engine to cause a
forceful termination by the resource, if that resource supports it.
This prevents some nasty races where a BackPoke could arrive on a paused
vertex either during a resume or pause operation. Previously we might
also have poked an excessive number of resources on resume.
The solution was to discard BackPokes during pause or resume. On pause
they can be discarded because we've asked the graph to quiesce, and any
remaining work can be done on resume. On resume we ignore them because
they should only occur during the unrolling (the reverse topological
resume of the graph), and at the end of that the indegree == 0 vertices
initiate a series of pokes which deal with any BackPoke that might have
been discarded.
One other aspect of this which is important: if an indegree == 0 vertex
is poked (Process runs) but it's already in the correct state, it should
still transmit the Poke through itself so that subsequent vertices know
to run. Currently this is done correctly in Process().
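Roughly, the two rules look something like this. It's only a sketch with
made up names, not the real engine code:

package main

type Vertex struct {
	quiescing bool      // true while the graph is pausing or resuming
	out       []*Vertex // downstream (outgoing) vertices
}

func (v *Vertex) Poke() {}

// CheckApply stands in for the real work; the bool reports whether the
// vertex was already in the correct state.
func (v *Vertex) CheckApply(apply bool) (bool, error) { return true, nil }

// OnBackPoke: BackPokes that arrive while we're quiescing are dropped; the
// indegree == 0 vertices re-poke the whole graph at the end of resume.
func (v *Vertex) OnBackPoke() {
	if v.quiescing {
		return
	}
	v.Process()
}

// Process: even when no work was needed (already in the correct state), we
// still transmit the Poke so that downstream vertices know to run.
func (v *Vertex) Process() error {
	if _, err := v.CheckApply(true); err != nil {
		return err
	}
	for _, next := range v.out {
		next.Poke()
	}
	return nil
}

func main() {
	v := &Vertex{}
	v.Process()
}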
I'm a bit ashamed that this wasn't done properly in the engine earlier,
but I suppose that's what comes from running fancier graphs and really
thinking in detail about what's truly correct. Hopefully I got it right
this time!
This prevents a nasty race that can happen in a graph with more than one
resource. Suppose a resource has someone that it can BackPoke, and an
event comes in. It runs the obj.Event() method (from inside its Watch
loop), and then *before* the resulting Process method can run, it
receives a pause event and pauses. Then the parent resource pauses as
well. Finally (it's a race) the Process gets around to running and
decides it needs to BackPoke. Since the parent resource is now paused,
it receives the BackPoke at a time when it can't handle receiving one,
and it panics!
As a result, we now track the number of pending Process runs via a
WaitGroup which gets incremented from obj.Event(), and we don't finish
our pause or exit operations until everything has quiesced and the
WaitGroup's Wait() returns. Lastly, in order to prevent repeated
replays, we detect when we're quiescing and suspend replaying until
after the pause. We don't need to save the replay (playback variable)
explicitly because its state persists during pause, and on exit it
would get re-checked anyway.
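In sketch form, the pattern is something like this (the names are
illustrative, not the actual resource API):

package main

import "sync"

type Resource struct {
	processWG sync.WaitGroup // counts pending Process runs
}

func (obj *Resource) Process() { /* runs CheckApply, may BackPoke */ }

// Event records that a Process run is now pending *before* it starts.
func (obj *Resource) Event() {
	obj.processWG.Add(1)
	go func() {
		defer obj.processWG.Done()
		obj.Process()
	}()
}

// Pause (or exit) only completes once every pending Process has quiesced,
// so nothing can BackPoke a parent that has already paused.
func (obj *Resource) Pause() {
	obj.processWG.Wait()
}

func main() {
	r := &Resource{}
	r.Event()
	r.Pause()
}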
This is a new resource for setting key value pairs in our global world
database. Currently only etcd is supported. Some of the implications and
possibilities of this resource will become more obvious with future
commits!
You can bother/test this resource with these commands:
ETCDCTL_API=3 etcdctl get "/_mgmt/strings/" --prefix=true
ETCDCTL_API=3 etcdctl put "/_mgmt/strings/KEY/HOSTNAME" 42
Replace the KEY and HOSTNAME variables with the actual values you'd like
to use. The 42 is the value that is set.
This was necessary to fix some "import cycle" errors I was having when
adding the World api to the resource Data struct.
I think this is a good hint that I need to start splitting up existing
packages into sub files, and to clean up any inter-package problems too.
If two resources are grouped, then the result should contain the
semaphores of both resources. This is because the user is expecting
(independently) resource A and resource B to have a limiting choke
point. If, when combined, those choke points aren't preserved, then we
have broken an important promise to the user.
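As a quick sketch of that rule (the helper name is made up), grouping
just keeps the union of both resources' semaphore tags:

package main

import "fmt"

// mergeSemaphores illustrates the rule: the grouped resource keeps the
// union of both resources' semaphore ("id:count") tags, so each original
// choke point is preserved.
func mergeSemaphores(a, b []string) []string {
	seen := make(map[string]struct{})
	out := []string{}
	for _, s := range append(append([]string{}, a...), b...) {
		if _, exists := seen[s]; exists {
			continue
		}
		seen[s] = struct{}{}
		out = append(out, s)
	}
	return out
}

func main() {
	fmt.Println(mergeSemaphores([]string{"db:1"}, []string{"db:1", "net:2"}))
	// [db:1 net:2]
}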
This adds a P/V style semaphore mechanism to the resource graph. This
enables the user to specify a number of "id:count" tags associated with
each resource which will reduce the parallelism of the CheckApply
operation to that maximum count.
This is particularly interesting because (assuming I'm not mistaken) the
implementation is deadlock-free, provided that no individual resource
ever blocks permanently during execution! I don't have a formal proof of
this, but I was able to convince myself on paper that it was the case.
An actual proof that N P/V counting semaphores in a DAG won't ever
deadlock would be particularly welcome! Hint: the trick is to acquire
them in alphabetical order while respecting the DAG flow. Disclaimer:
this assumes that the lock count is always > 0, of course.
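Here's a minimal sketch of that trick using channel-based counting
semaphores. The names are made up and the "id:count" parsing is omitted:

package main

import "sort"

type semaphore chan struct{}

func newSemaphore(count int) semaphore { return make(semaphore, count) }

func (s semaphore) P() { s <- struct{}{} } // acquire one unit
func (s semaphore) V() { <-s }             // release one unit

// withSemaphores runs fn while holding each named semaphore. The ids are
// always acquired in sorted (alphabetical) order, which is what avoids a
// deadlock between resources that share some of the same ids.
func withSemaphores(sems map[string]semaphore, ids []string, fn func()) {
	sorted := append([]string{}, ids...)
	sort.Strings(sorted)
	for _, id := range sorted {
		sems[id].P()
	}
	defer func() {
		for i := len(sorted) - 1; i >= 0; i-- {
			sems[sorted[i]].V()
		}
	}()
	fn()
}

func main() {
	sems := map[string]semaphore{
		"db":  newSemaphore(1), // from a "db:1" tag
		"net": newSemaphore(2), // from a "net:2" tag
	}
	withSemaphores(sems, []string{"net", "db"}, func() {
		// the resource's CheckApply would run here
	})
}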
I don't think this early exit is necessary any more, since the main
CheckApply function really just spawns out to the different sub workers
which all individually check the apply variable.
If I'm wrong, we can revert this. It was @roidelapluie who noticed the
check here to begin with.
This prevents us from blocking an exit if we close while a callback is
about to run. This is because the callbacks are called from the
EventRunDefaultImpl method, which waits for them to return before
exiting and releasing the WaitGroup.
I think we should probably get rid of the obj.wg since the engine is
supposed to guarantee that Close doesn't happen before Watch finishes.
This cleans up some of the resource events and also reorganizes the
struct for simplicity. This should hopefully kill off at least one race
which would cause unnecessary blocking!
Yes this patch is a bit yucky, but so was the bug I was fighting with!
I'm still working on reducing the size of the monster patches that I
land, but I'm exercising the privilege as the initial author. In any
case, this refactors worker into two, and cleans up the passing around
of the processChan. This puts common code into Init and Close.
This allows hot (un)plugging of CPUs! It also includes some general
cleanups which were necessary to support this, as well as some other
features for the virt resource. Hotunplug requires Fedora 25.
It also comes with a mini shell script to help demo this capability.
Many thanks to pkrempa for his help with the libvirt API!
When creating new resources, we didn't specify the defaults, which for
the limit metaparam caused invalid resources by default. It would be
nice to change the limit param to have the 1/X (reciprocal) as the
default, although the problem with that is that (1) it is illogical, and
(2) it's not clear if the precision for the common cases is enough.
If someone wants to investigate this further, please do! Zero value
structs are definitely more useful! In any case, we can now specify the
default. It's not entirely obvious to me if this is the best way to do
it, or if there is a superior method.
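To illustrate the zero value problem, assuming the limit metaparam is
backed by something like golang.org/x/time/rate: a zero Limit (with zero
burst) admits no events at all, while rate.Inf disables limiting, which
is why the default matters:

package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	zero := rate.NewLimiter(0, 0)             // the zero value: admits nothing
	unlimited := rate.NewLimiter(rate.Inf, 0) // no limiting at all

	fmt.Println(zero.Allow())      // false: an "invalid" resource by default
	fmt.Println(unlimited.Allow()) // true
}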
Remove the New constructors, since calling Init should be done by the
engine, and not by the user, even when using mgmt as a lib. This also
applies to tests! It used to be that a user might want to call Init
manually, but that is no longer the case!
Add owner, which must be the username or uid of the file owner; group,
which is the group name or gid of the file; and mode, which is the
octal unix file permissions.
Add separate implementation for Go 1.6 and lower.
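A rough sketch of how those three could be applied with the standard
library (this is not the resource's actual implementation; the
user.LookupGroup call is presumably what needs Go >= 1.7, hence the
separate Go 1.6 code path):

package main

import (
	"os"
	"os/user"
	"strconv"
)

// apply is a hypothetical helper: owner may be a username or uid, group a
// group name or gid, and mode an octal permission string like "0644".
func apply(path, owner, group, mode string) error {
	uid, err := strconv.Atoi(owner)
	if err != nil { // not numeric, look it up as a username
		u, lerr := user.Lookup(owner)
		if lerr != nil {
			return lerr
		}
		uid, _ = strconv.Atoi(u.Uid)
	}
	gid, err := strconv.Atoi(group)
	if err != nil { // not numeric, look it up as a group name
		g, lerr := user.LookupGroup(group) // needs Go >= 1.7
		if lerr != nil {
			return lerr
		}
		gid, _ = strconv.Atoi(g.Gid)
	}
	if err := os.Chown(path, uid, gid); err != nil {
		return err
	}
	m, err := strconv.ParseUint(mode, 8, 32) // mode is parsed as octal
	if err != nil {
		return err
	}
	return os.Chmod(path, os.FileMode(m))
}

func main() {
	_ = apply("/tmp/example", "root", "root", "0644")
}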
This makes examples slightly nicer to commit, since you don't have to
have a hardcoded ~/james/ in their source value. It's also probably a
useful feature for the resource.