This is a new resource for setting key-value pairs in our global world
database. Currently only etcd is supported. Some of the implications and
possibilities of this resource will become more obvious with future
commits!
You can bother/test this resource with these commands:
ETCDCTL_API=3 etcdctl get "/_mgmt/strings/" --prefix=true
ETCDCTL_API=3 etcdctl put "/_mgmt/strings/KEY/HOSTNAME" 42
Replace the KEY and HOSTNAME variables with the actual values you'd like
to use. The 42 is the value that is set.
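For reference, here's a rough equivalent in Go using the etcd clientv3
package. This is a minimal sketch: the import path varies with your etcd
version, and the localhost:2379 endpoint is an assumption about your setup.

package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // assumed endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx := context.Background()
	// equivalent of the etcdctl put above
	if _, err := cli.Put(ctx, "/_mgmt/strings/KEY/HOSTNAME", "42"); err != nil {
		panic(err)
	}
	// equivalent of the etcdctl get --prefix above
	resp, err := cli.Get(ctx, "/_mgmt/strings/", clientv3.WithPrefix())
	if err != nil {
		panic(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s -> %s\n", kv.Key, kv.Value)
	}
}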
This was necessary to fix some "import cycle" errors I was having when
adding the World API to the resource Data struct.
I think this is a good hint that I need to start splitting up existing
packages into sub-files, and cleaning up any inter-package problems too.
Since we don't return the actual values, and instead only signal that an
event happened (which leaves the `Get` of the value as a second
operation), we don't have to use a channel with backpressure, since all
the events are identical.
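A minimal sketch of the idea (the names here are mine, not the engine's):
because every event carries the same information, a buffered channel of
size one plus a non-blocking send coalesces bursts instead of needing
backpressure.

package main

import "fmt"

func main() {
	events := make(chan struct{}, 1) // capacity one is enough

	notify := func() {
		select {
		case events <- struct{}{}:
		default: // an identical event is already pending; drop this one
		}
	}

	notify() // a burst of identical events...
	notify()
	notify()

	<-events // ...wakes the consumer up exactly once
	fmt.Println("got an event; a separate Get would fetch the value")
}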
If two resources are grouped, then the result should contain the
semaphores of both resources. This is because the user is expecting
(independently) resource A and resource B to each have a limiting choke
point. If those choke points aren't preserved when the resources are
combined, then we have broken an important promise to the user.
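As a hypothetical sketch of that rule (the function and field names are
mine, not the real grouping code): the grouped resource keeps the union
of both resources' semaphore tags.

package main

import "fmt"

// mergeSems returns the union of two resources' "id:count" semaphore
// tags, so both choke points survive grouping.
func mergeSems(a, b []string) []string {
	seen := make(map[string]struct{})
	var out []string
	for _, s := range append(append([]string{}, a...), b...) {
		if _, ok := seen[s]; ok {
			continue
		}
		seen[s] = struct{}{}
		out = append(out, s)
	}
	return out
}

func main() {
	fmt.Println(mergeSems([]string{"disk:1"}, []string{"net:2", "disk:1"}))
	// Output: [disk:1 net:2]
}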
This adds a P/V style semaphore mechanism to the resource graph. This
enables the user to specify a number of "id:count" tags associated with
each resource which will reduce the parallelism of the CheckApply
operation to that maximum count.
This is particularly interesting because (assuming I'm not mistaken) the
implementation is deadlock-free, provided that no individual resource
ever blocks permanently during execution! I don't have a formal proof of
this, but I was able to convince myself on paper that it was the case.
An actual proof that N P/V counting semaphores in a DAG won't ever
deadlock would be particularly welcome! Hint: the trick is to acquire
them in alphabetical order while respecting the DAG flow. Disclaimer:
this assumes that the lock count is always > 0, of course.
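Here's a minimal sketch of the mechanism (a toy, not the actual engine
code): one counting semaphore per "id:count" tag, with each worker
acquiring its semaphores in sorted order before running CheckApply.

package main

import (
	"fmt"
	"sort"
	"sync"
)

// semaphore is a P/V counting semaphore; the capacity is the count.
type semaphore chan struct{}

func newSemaphore(count int) semaphore { return make(semaphore, count) }
func (s semaphore) P()                 { s <- struct{}{} } // acquire
func (s semaphore) V()                 { <-s }             // release

func main() {
	sems := map[string]semaphore{ // one semaphore per "id:count" tag
		"disk:1": newSemaphore(1),
		"net:2":  newSemaphore(2),
	}

	ids := []string{"net:2", "disk:1"}
	sort.Strings(ids) // always acquire in alphabetical order

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			for _, id := range ids {
				sems[id].P()
			}
			fmt.Printf("CheckApply %d runs under both choke points\n", n)
			for j := len(ids) - 1; j >= 0; j-- { // release in reverse
				sems[ids[j]].V()
			}
		}(i)
	}
	wg.Wait()
}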
So builds take about 30s on my shitty machine, which is pretty long.
It turns out that Go caches compiled packages in $GOPATH/pkg/, but is
not clever enough to evict entries from there that are out of date. As a
result, it was rebuilding everything (including the unchanged
dependencies) on every build!
By completely wiping out $GOPATH/pkg/ and then running `go build -i`,
builds now take about 8 seconds. (After one full build is finished.)
This is basically the same as running `go install`, but without copying
junk to $GOPATH/bin.
Hopefully the tooling will be smart enough to know when to throw out
stuff in $GOPATH/pkg automatically and avoid this problem entirely.
Is it wrong to send Google a bill for all the extra cpu cycles I've
used? ;)
It is really strange that when I run make, it does not build mgmt. This
commit makes build the default target without moving it, so we preserve
the order of the file as much as we can.
This also removes the confusion for designers who would run "make"
instead of "make art", and whose work would be disrupted if we added a
-- let's say -- "make alpharelease" command.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
I don't think this early exit is necessary any more, since the main
CheckApply function really just spawns out to the different sub workers
which all individually check the apply variable.
If I'm wrong, we can revert this. It was @roidelapluie that noticed the
check here to begin with.
This prevents us from blocking an exit if we close when a callback was
about to run. This is because the callbacks are called from the
EventRunDefaultImpl method, which waits for them to return before it
exits and releases the WaitGroup.
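A minimal sketch of the pattern (names assumed, not the actual engine
code): the callback's send selects against a done channel, so a close
racing the callback can't block the exit.

package main

import "fmt"

func main() {
	events := make(chan string)
	done := make(chan struct{})

	callback := func(msg string) {
		select {
		case events <- msg: // normal delivery
		case <-done: // we're shutting down; don't block the exit
		}
	}

	close(done)       // simulate shutdown racing the callback
	callback("event") // returns immediately instead of hanging
	fmt.Println("clean exit")
}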
I think we should probably get rid of the obj.wg since the engine is
supposed to guarantee that Close doesn't happen before Watch finishes.
Don't leave it running unnecessarily! This might have contributed to a
block, but it was hard to isolate whether this was the cause, or just
one of many.
This cleans up some of the resource events and also reorganizes the
struct for simplicity. This should hopefully kill off at least one race
which would cause unnecessary blocking!
Yes this patch is a bit yucky, but so was the bug I was fighting with!
Improvements in the engine have uncovered some annoying race conditions
which would cause the engine to block between transitions. This is a
test which catches the most obvious file-based ones.
This requires inotify to work in the test environment.
This cleans up the recwatch code to avoid some possible races. While you
were previously able to call Close more than once, this was really
something that would mask bugs, rather than a useful feature.
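A sketch of the intended contract (the type here is illustrative, not
the real recwatch struct): Close is called exactly once, and a second
call panics on the already-closed channel, surfacing the double-Close
bug instead of masking it.

package main

import "fmt"

type watcher struct {
	done chan struct{} // closed exactly once, by Close
}

// Close shuts the watcher down. Calling it twice panics on the closed
// channel, which surfaces the double-Close bug rather than hiding it.
func (obj *watcher) Close() error {
	close(obj.done)
	return nil
}

func main() {
	w := &watcher{done: make(chan struct{})}
	if err := w.Close(); err != nil {
		panic(err)
	}
	fmt.Println("closed once, as intended")
}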
I'm still working on reducing the size of the monster patches that I
land, but I'm exercising the privilege as the initial author. In any
case, this refactors the worker into two, and cleans up the passing
around of the processChan. This puts common code into Init and Close.
This allows hot (un)plugging of CPUs! It also includes some general
cleanups which were necessary to support this, as well as some other
features for the virt resource. Hot unplugging requires Fedora 25.
It also comes with a mini shell script to help demo this capability.
Many thanks to pkrempa for his help with the libvirt API!
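For flavour, here's a rough sketch of a live vCPU change using the
libvirt-go bindings. This is my own illustration, not the resource's
code: the "demo" domain name is hypothetical, and the SetVcpusFlags call
is assumed from the underlying virDomainSetVcpusFlags API.

package main

import (
	libvirt "github.com/libvirt/libvirt-go"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.LookupDomainByName("demo") // hypothetical domain
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	// set the live vCPU count to two; lowering it is a hot unplug
	if err := dom.SetVcpusFlags(2, libvirt.DOMAIN_VCPU_LIVE); err != nil {
		panic(err)
	}
}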
For the record, I currently don't feel that my level of contributions
warrant a spot here, but I'm entering myself upon express invitation
by James =) ...also in anticipation of much more substantial work.