178 Commits

Author SHA1 Message Date
James Shubin
e8b03545bb test: Don't fail on tag builds
This seems to be causing our failures with:

$ git fetch --unshallow
fatal: Couldn't find remote ref refs/heads/0.0.x

where x is some tag.

Hopefully this doesn't break the other use case we added this patch for!
2018-01-11 18:03:56 -05:00
James Shubin
70c59eab4a misc: Don't display script name in output 2018-01-11 18:03:04 -05:00
jonathangold
3c677543e0 resources: aws: ec2: Fix closed channel handling
If awschan closes, longpollWatch and snsWatch return nil
instead of an error. This will prevent the engine from
shutting down in case we choose to close the channel
early or from other struct methods.
2018-01-06 15:15:30 -05:00
jonathangold
c455ef2c62 resources: aws: ec2: Send IP addresses and InstanceID 2018-01-03 21:34:28 -05:00
Jonathan Gold
032d0992d6 resources: aws: ec2: Refactor CheckApply
CheckApply was rewritten, using the new describe methods to improve
readability and maintainability.
2018-01-03 21:34:28 -05:00
jonathangold
67837a47ac resources: aws: ec2: Refactor longpollWatch
Complete rewrite of longpollWatch() for correctness and maintainability.
2018-01-03 21:34:28 -05:00
Jonathan Gold
32e3c4e029 resources: aws: ec2: Refactor longpollWatch
This patch simplifies longpollWatch by getting rid of some unnecessary
API calls and breaking the waiters out into their own functions.
2018-01-03 21:34:28 -05:00
Jonathan Gold
76fcb7a06e resources: aws: ec2: Wait for stop and terminate concurrently
In longpollWatch it was no longer sufficient to use only
WaitUntilInstanceStopped as it would block if the instance was
terminated. This patch launches two goroutines in its place, one
waits until the instance stops and the other waits until it
terminates. When either one returns, the shared context is cancelled
and execution continues.
2018-01-03 21:34:28 -05:00
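A minimal sketch of the two-waiter pattern described above, using only the standard library; waitUntilStopped and waitUntilTerminated are hypothetical stand-ins for the aws-sdk-go waiter calls (e.g. WaitUntilInstanceStoppedWithContext), not mgmt's actual code.
```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// waitUntilStopped stands in for an aws-sdk-go waiter: it blocks until the
// instance stops or the context is cancelled.
func waitUntilStopped(ctx context.Context) error {
	select {
	case <-time.After(2 * time.Second): // pretend the instance stopped
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

// waitUntilTerminated is the sibling waiter for the terminated state.
func waitUntilTerminated(ctx context.Context) error {
	select {
	case <-time.After(10 * time.Second): // pretend it never terminates
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	var wg sync.WaitGroup
	done := make(chan string, 2) // buffered so neither goroutine blocks on send

	wg.Add(2)
	go func() { // wait for the "stopped" state
		defer wg.Done()
		if err := waitUntilStopped(ctx); err == nil {
			done <- "stopped"
		}
		cancel() // unblock the sibling waiter
	}()
	go func() { // wait for the "terminated" state
		defer wg.Done()
		if err := waitUntilTerminated(ctx); err == nil {
			done <- "terminated"
		}
		cancel()
	}()

	state := <-done // whichever waiter returned first
	wg.Wait()       // the other waiter exits via the cancelled context
	fmt.Println("instance state:", state)
}
```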
Jonathan Gold
149a2188e2 resources: aws: ec2: Retry on exceeded wait attempts error
The waiters now return the AwsErr error "ResourceNotReady: exceeded wait
attempts" when the instance state does not converge after 40 retries.
During longpollWatch() we need to detect this error and continue to
the top of the loop so we can restart the waiters and keep watching for
events.
2018-01-03 21:34:28 -05:00
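A rough sketch of the restart-the-waiter loop, assuming the error is detected by matching its message text; the real resource may unwrap an awserr.Error and check its code instead.
```go
package main

import (
	"context"
	"errors"
	"fmt"
	"strings"
)

// isWaitAttemptsExceeded reports whether a waiter gave up after its retry
// budget. Matching on the message text is an assumption; the real code could
// check the ResourceNotReady error code instead.
func isWaitAttemptsExceeded(err error) bool {
	return err != nil && strings.Contains(err.Error(), "exceeded wait attempts")
}

func main() {
	attempts := 0
	// wait stands in for an aws-sdk-go waiter: it "times out" twice before
	// the instance finally converges.
	wait := func(ctx context.Context) error {
		attempts++
		if attempts < 3 {
			return errors.New("ResourceNotReady: exceeded wait attempts")
		}
		return nil
	}

	ctx := context.Background()
	for {
		err := wait(ctx)
		if isWaitAttemptsExceeded(err) {
			fmt.Println("state not converged yet, restarting waiter")
			continue // back to the top of the loop, keep watching
		}
		if err != nil {
			fmt.Println("watch error:", err)
			return
		}
		fmt.Println("instance changed state, send event")
		return
	}
}
```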
Jonathan Gold
08e7caea6b resources: aws: ec2: CheckApply fix pending and stopping cases
If CheckApply was called when the instance was pending or stopping, it
would return an error. This patch suppresses these errors and tells the
engine that the state can't yet be changed.
2018-01-03 21:34:28 -05:00
Jonathan Gold
e330ebc8c9 resources: aws: ec2: Verify SNS message signatures 2018-01-03 21:34:28 -05:00
Jonathan Gold
388a08e13a resources: aws: ec2: Check that policy.Statement != nil 2018-01-03 21:34:28 -05:00
Jonathan Gold
9ba9ef1cbf resources: aws: ec2: Close closeChan before server shutdown
This patch makes sure that closeChan is closed as soon as the main loop
returns, so any channel operations are unblocked before we run shutdown.
This ensures that the server's goroutine can return before shutdown
completes and we don't panic by trying to serve the client after
shutdown returns.
2018-01-03 21:34:27 -05:00
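A minimal sketch of the ordering this patch establishes, with a plain net/http server standing in for the SNS endpoint: close closeChan as soon as the main loop returns, then shut down, then wait for the serve goroutine.
```go
package main

import (
	"context"
	"net"
	"net/http"
	"sync"
)

func main() {
	closeChan := make(chan struct{})
	srv := &http.Server{Handler: http.NotFoundHandler()}

	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}

	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // the endpoint's serve goroutine
		defer wg.Done()
		srv.Serve(ln) // returns once Shutdown is called below
	}()

	// ... the main watch loop would run here, selecting on closeChan ...

	// Close closeChan as soon as the main loop returns so that anything
	// still blocked on it is released *before* we start shutting down.
	close(closeChan)

	srv.Shutdown(context.Background()) // graceful stop; Serve returns
	wg.Wait()                          // the serve goroutine has exited
}
```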
Jonathan Gold
fac004b774 resources: aws: ec2: Update postHandler to process messages 2018-01-03 21:34:27 -05:00
Jonathan Gold
8cd3f28734 resources: aws: ec2: Authorize CloudWatch to publish to sns 2018-01-03 21:34:27 -05:00
Jonathan Gold
dcd23fcf75 resources: aws: ec2: Add CloudWatch rule and target SNS
This patch creates the cloudwatch rule that detects ec2 instance
state changes, and targets the rule to publish on our sns topic
which, in turn, pushes those event notifications to our endpoint.
2018-01-03 21:34:27 -05:00
Jonathan Gold
1162485c2c resources: aws: ec2: Subscribe SNS endpoint to topic
This patch adds methods to subscribe and confirm the subscription
to the sns topic.
2018-01-03 21:34:27 -05:00
Jonathan Gold
966172eac6 resources: aws: ec2: Use custom listener for snsServer
This patch replaces the call to Server.ListenAndServe() with
Server.Serve(listener) in order to make sure the listener is up
and running before we subscribe to the topic in a future patch.
2018-01-03 21:34:27 -05:00
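A small sketch of the Serve(listener) pattern: binding the listener ourselves guarantees the port is open before any later step (such as the SNS subscription) runs. subscribeToTopic is hypothetical.
```go
package main

import (
	"fmt"
	"net"
	"net/http"
)

func main() {
	srv := &http.Server{Handler: http.NotFoundHandler()}

	// Bind the port first; once Listen returns, the endpoint is reachable
	// even though Serve hasn't started handling requests yet.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	go srv.Serve(ln) // hand the already-open listener to the server

	// Only now is it safe to subscribe the SNS topic to this endpoint,
	// because the listener is guaranteed to be up.
	fmt.Println("endpoint listening on", ln.Addr().String())
	// subscribeToTopic(ln.Addr().String()) // hypothetical next step
}
```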
James Shubin
12fce52cd7 legal: Happy 2018 everyone...
Done with:

ack '2017+' -l | xargs sed -i -e 's/2017+/2018+/g'

Checked manually with:

git add -p

Hello to future James from 2019, and Happy Hacking!
2018-01-03 21:22:07 -05:00
Felix Frank
5ca1e2a23f puppet: Avoid empty parameters to puppet mgmtgraph
This solves an issue first observed with golang 1.8.

Creating an exec.Command with an empty string parameter (when no puppet.conf
file is specified) would lead to an error from Puppet, stating that an
unexpected argument was passed to "puppet mgmtgraph print".

The workaround is to not include *any* positional argument (not even the
empty string) when --puppet-conf is not used.
2017-12-26 00:18:46 +01:00
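A sketch of the workaround with a hypothetical buildPuppetGraphCmd helper; the config flag name is an assumption, the point is only that no empty positional argument is ever appended.
```go
package main

import (
	"fmt"
	"os/exec"
)

// buildPuppetGraphCmd builds the "puppet mgmtgraph print" command. When no
// puppet.conf path is given, we must not append an empty string argument at
// all, which is the workaround described above.
func buildPuppetGraphCmd(puppetConf string) *exec.Cmd {
	args := []string{"mgmtgraph", "print"}
	if puppetConf != "" {
		args = append(args, "--config", puppetConf) // flag name is an assumption
	}
	return exec.Command("puppet", args...)
}

func main() {
	cmd := buildPuppetGraphCmd("") // no --puppet-conf given
	fmt.Println(cmd.Args)          // ["puppet" "mgmtgraph" "print"], no empty arg
}
```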
Paul Morgan
98f8a61e83 git: Configure editorconfig to indent with tabs in bash scripts
This follows `test/test-bashfmt.sh` style check(s).
2017-12-20 21:09:15 +00:00
Paul Morgan
2e86d7c5ab git: Ensure the tagging script is idempotent 2017-12-20 21:04:57 +00:00
Jonathan Gold
62ca12608d cli: Add license flag
This patch adds the option to print the license with a cli flag. It
uses go-bindata to store the license file. The file is generated by
running `make bindata` and the result is stored in the bindata
directory.
2017-12-08 00:57:58 -05:00
Jonathan Gold
406aa55667 resources: virt: Update libvirt-xml target
Builds started failing due to go-libvirt-xml 6d97448. In that patch,
the DomainChannelTarget struct was changed from having a single type
field, to having an individual field for each virtualization type.

This patch updates the connection check in Init to reflect the changes
to go-libvirt-xml, so that builds no longer fail.
2017-11-29 19:03:56 -05:00
James Shubin
a76dce8b15 docs: Add missing blog post about augeas resource 2017-11-26 17:15:49 -05:00
James Shubin
b01d453ae3 docs: Refresh documentation to provide a better new user experience
This does some cleanups and moves some things around for a better
experience. If you're an expert in this area, or are a new user who has
some feedback about their first impressions and experiences, please let
us know!
2017-11-25 20:45:57 -05:00
Guillaume Herail
ac629404f4 test: Switch to goimports instead of gofmt
see https://github.com/purpleidea/mgmt/pull/256#issuecomment-346360414
2017-11-25 06:49:00 -05:00
Guillaume Herail
3575d597f7 resources: Add User/Group to ExecRes 2017-11-24 10:38:16 -05:00
Toshaan Bharvani
2affcba3b4 build: Added build option to strip binary
This is a build option in Golang that will strip the binary.
The binary becomes about 50% smaller.

Signed-off-by: Toshaan Bharvani <toshaan@vantosh.com>
2017-11-24 10:26:48 -05:00
James Shubin
846c5f8762 test: Add another check for off-by-one-error commit tags 2017-11-24 09:46:32 -05:00
Julien Pivotto
086af712d2 example: Remove content out of directory definition
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-11-24 14:26:20 +01:00
Julien Pivotto
2b6e39f283 build: Remove go 1.3 and 1.4 support
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-11-24 05:35:09 -05:00
Julien Pivotto
472663193a prometheus: Initialize all metrics
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-11-24 11:02:36 +01:00
James Shubin
879ff838ae resources: Replace golang 1.6 specific code with newer 1.7 version
We now require at least 1.8 so we might as well fix this up.
2017-11-23 10:57:11 -05:00
Julien Pivotto
5e9a085e39 exec: Add autoEdges between ExecRes and PkgRes
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-11-23 16:30:22 +01:00
Julien Pivotto
c2b5729ebd build: Build mgmt on any go file change
Prior to this commit, running make would only rebuild mgmt when
main.go was changed. This meant that `make clean build` was needed.

With this commit, any go file change in this directory will
trigger a new compilation.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-11-23 09:32:02 -05:00
Julien Pivotto
fdce9d6a6a prometheus: Initialize mgmt_checkapply_total metrics
It is recommended by Prometheus to initialize metrics:

https://prometheus.io/docs/practices/instrumentation/#avoid-missing-metrics

This commit initializes the mgmt_checkapply_total metric
for each registered resource.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-11-23 15:23:41 +01:00
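A minimal sketch of the initialization pattern with the Prometheus Go client; only the metric name comes from the commit, the label set and the list of resource kinds here are assumptions.
```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	checkApply := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "mgmt_checkapply_total",
			Help: "Number of CheckApply calls, by resource kind.",
		},
		[]string{"kind"}, // label set is an assumption; mgmt uses more labels
	)
	prometheus.MustRegister(checkApply)

	// Pre-create a zero-valued child for every registered resource kind so
	// the time series exists before the first CheckApply ever runs.
	for _, kind := range []string{"file", "exec", "svc"} { // example kinds
		checkApply.WithLabelValues(kind).Add(0)
	}

	fmt.Println("metrics initialized")
}
```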
Guillaume Herail
bfc2549289 resources: Move FileRes.uid()/.gid() to util.go 2017-11-23 08:34:38 -05:00
James Shubin
52fd1ae73e test: Add check for common doc vs docs ambiguity 2017-11-23 08:20:44 -05:00
Julien Pivotto
23e167616f doc: Fix link to the prometheus wiki
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-11-23 09:52:28 +01:00
James Shubin
51ce83f20b test: Add extra commit message tests for some common mistakes
Feel free to add more if we identify them.
2017-11-21 11:05:20 -05:00
Jonathan Gold
5e5bbf4b39 travis: Allow travis builds to access target branches
Because travis builds only fetch a single branch (master) by default,
test-commit-message.sh only had access to commits in the master branch.
In order to fetch the correct branch for our build, we need to run
'git config remote.origin.fetch..' with the target branch's information
before executing git fetch on the repo in before_install.

Now git will always fetch the appropriate branch.
2017-11-18 21:04:12 -05:00
Guillaume Herail
cbc3a691b9 docker: Bump to golang 1.8 2017-11-16 17:36:35 +01:00
Jonathan Gold
a5247d6e69 resources: aws: ec2: Change event messages to iota consts 2017-11-14 16:48:51 -05:00
Jonathan Gold
d698b82a83 resources: aws: ec2: Start and stop SNS endpoint in snsWatch
This patch adds snsWatch which launches the HTTP server and listens
for messages on awsChan to forward as events to the mgmt engine.
2017-11-11 23:07:12 -05:00
Jonathan Gold
91eff75288 resources: aws: ec2: Add method to make sns topic 2017-11-10 17:31:19 -05:00
James Shubin
91a9edb322 resources: aws: ec2: Fix deadlock on rare error scenarios
If we get an error in the Watch loop, it will send this on awsChan,
which will cause Watch to loop. However, in this scenario it will never
cause closeChan to close, and we will deadlock because we have a
waitGroup in a helper goroutine which is waiting on this channel to
close the context.

Normally this wouldn't be an issue, but since we have more than one
goroutine (with associated waitGroup) it is. It's also good practice to
close all the channels to help avoid this kind of bug.

This patch also moves the waitGroup Wait into a more logical place for
visibility.
2017-11-10 14:17:54 -05:00
Jonathan Gold
c8ddbeaa5c resource: aws: ec2: Add http server 2017-11-09 13:13:42 -05:00
Jonathan Gold
3634b3450d resource: aws: ec2: Move waitgroup to resource struct 2017-11-08 16:57:41 -05:00
Jonathan Gold
c2a5e3f5d8 resources: aws: ec2: Move watch channels into struct 2017-11-08 16:16:01 -05:00
Jonathan Gold
db49fe85e4 resources: aws: ec2: Move chanStruct type out of longpollWatch 2017-11-08 16:08:25 -05:00
Jonathan Gold
567a2e9fd8 resources: aws: ec2: Reorganized consts 2017-11-08 16:02:29 -05:00
Jonathan Gold
987de00e17 resources: aws: ec2: Remove extra wait from Watch
There were two calls to WaitUntilInstanceTerminatedWithContext in a row.
There's no reason to make the call twice.
2017-11-08 16:02:24 -05:00
Jonathan Gold
baeafec74a resources: aws: ec2: Move Watch to longpollWatch 2017-11-08 16:02:12 -05:00
James Shubin
9cfa0b14d4 yamlgraph: Improve error output
This makes it easier to know what's missing.
2017-11-02 09:13:27 -04:00
James Shubin
948ded6792 github: This event is over
And it wasn't successful at all.
2017-11-01 07:07:14 -04:00
James Shubin
3c69619fd9 github: Add new label for design discussions and trackers
Open ideas related to designs can be tracked here. We've already got a
few such tickets open.
2017-11-01 07:04:32 -04:00
Jonathan Gold
e7c4bc7f47 resources: Add UserData field to AwsEc2
UserData specifies first-launch bash and cloud-init commands. See
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
for documentation and examples.
2017-10-30 00:22:30 -04:00
Jonathan Gold
277ecc901b etcd: Plumbed in the new cli flags for advertise urls 2017-10-29 17:16:51 -04:00
Jonathan Gold
0f70c31a30 etcd: Add advertise urls to cli
This patch adds the option to specify URLs to advertise for clients and peers.
This will facilitate etcd communication through NAT, where we want to listen
on a local IP, but expose a public IP to clients/peers.
2017-10-28 22:42:27 -04:00
James Shubin
9a97a92e31 github: Use third-party settings app to sync github settings
Let's give this a try. One downside is that giving anyone push access
gives them the ability to rename the repo and do other bad admin type things.
2017-10-26 05:04:41 -04:00
James Shubin
f9d452ad2c examples: Add longpoll server and client
This is an example of a race-free long-poll server and client. It uses a
redirection method to signal that the "Watch" is running.

Other race-free methods exist.
2017-10-24 04:20:19 -04:00
Jonathan Gold
9907c12eda resources: Enhancements to user and group
This patch adds autoedges between users and groups, and extends
users with additional fields for supplementary groups and a named
primary group. Also, some small fixes to log and error messages.
2017-10-23 19:18:52 -04:00
Jonathan Gold
19533a32b5 resources: Add a group resource 2017-10-21 01:28:22 -04:00
Jonathan Gold
c5a5004f9e resources: Fix user gid compare 2017-10-19 06:58:31 -04:00
Jonathan Gold
677cdea99d resources: Improve nspawn resource 2017-10-17 19:23:04 -04:00
Jonathan Gold
4d7c0ddbce resources: Add an Aws resource 2017-10-09 04:05:13 -04:00
James Shubin
81daf10157 test: Fix linter issues
These are some linter issues that were found in a new version of the
linter. Let's fix them now before that linter hits our test suite.
2017-09-26 19:38:53 -04:00
James Shubin
b3ef4e41bf test: Use stable version of gometalinter
Hopefully this prevents the various breakages seen in our lint test.
2017-09-26 19:08:43 -04:00
James Shubin
9fbf149717 etcd: Bump to newer versions 2017-09-19 18:21:15 -04:00
James Shubin
95cb94a039 vendor: Add codec package because of breakage
Recent git master 54210f4e076c57f351166f0ed60e67d3fca57a36 of
github.com/ugorji/go broke the builds. See:
https://github.com/coreos/etcd/issues/8579
2017-09-19 18:21:15 -04:00
Juan Luis de Sousa-Valadas Castaño
21f7f87716 resources: Refresh packagekit cache before install
Fixes #80
2017-09-17 22:29:15 +02:00
Jonathan Gold
831c7e2c32 resources: Add user resource 2017-09-17 01:04:36 -04:00
James Shubin
cc0d04c8b7 git: Ignore .envrc file from direnv
Some find this useful for setting a custom GOPATH per project.
2017-09-15 16:17:40 -04:00
James Shubin
46be83f8f7 legal: Re-license to GPLv3 2017-09-11 18:07:47 -04:00
James Shubin
28560e2045 resources: Fix formatting 2017-09-11 18:06:34 -04:00
James Shubin
0df4824a56 test: Increase timeouts for slow travis
Should prevent more intermittent failures.
2017-09-09 15:31:06 -04:00
James Shubin
dbcabc6517 github: Improve the PR template 2017-09-09 15:03:53 -04:00
Jonathan Gold
69f479b67e virt: Allow more than 26 disks 2017-09-08 02:15:40 +00:00
James Shubin
af75696018 github: Add a PR template to help new users
Hopefully this addresses the most common things.
2017-09-07 16:14:11 -04:00
Arthur Mello
80b8f8740f virt: Added support for ~user into expandHome
- Enabled expandHome to expand both ~/ and ~username/ paths
- Added some unit tests for expandHome
2017-09-06 14:59:08 -04:00
James Shubin
71ab325940 yaml2: Meta should keep defaults, and Res should have kind
This would previously panic since it wouldn't get a kind, and the meta
parameters would overwrite the defaults so it would block because limit
didn't have the default of +inf.

The removal of the SetKind was my fault in:

b8ff6938df

It's funny because it ends in `bad`. Guess I should have checked that!
2017-09-06 13:44:21 -04:00
James Shubin
653c76709a test: Fix another intermittent failure
Some of the tests had very precise timeouts, which weren't very
important. Here's another one that timed out early.
2017-09-04 16:39:01 -04:00
Juan Luis de Sousa-Valadas Castaño
83cc1bab38 vagrant: Fix PATH
gometalinter failed because it's not in $PATH
2017-09-04 22:08:59 +02:00
James Shubin
6c8588c019 test: Increase timeouts because travis is slow
Should hopefully prevent some intermittent failures.
2017-09-04 13:02:05 -04:00
Ismael Puerto
5b00ed2fb2 vagrant: Change box to F26
F26 provides Go 1.8
2017-09-01 22:21:39 +02:00
Juan-Luis de Sousa-Valadas Castaño
9f66962bfb docs: Change go required version to 1.8 2017-08-31 23:56:16 +02:00
James Shubin
0edba74091 etcd: Bump to version 3.2.6 and update all the grpc deps
Note: When go-grpc-prometheus was in the main $gopath (even at this
version) and everyone else was where they always were in vendor/ this
didn't build! It gave errors like:

	have SendHeader("github.com/purpleidea/mgmt/vendor/google.golang.org/grpc/metadata".MD) error
	want SendHeader("google.golang.org/grpc/metadata".MD) error

and I got frustrated. Putting it "next" to the other vendored deps seems
to have fixed this. Where are the golang docs that explain this
phenomenon?

This also requires golang 1.8+ as that is a requirement for etcd. It's
probably a reasonable thing for us too.

Note the older versions of etcd had some bugs with the concurrency
package and other things, so this is a necessary bump.
2017-08-30 14:16:02 -04:00
Dennis Kliban
1003b49dd9 resources: Add validation for Msg Priority field
This adds validation that ensures that the Msg Priority field is one of the following values:
"Emerg", "Alert", "Crit", "Err", "Warning", "Notice", "Info", "Debug".
2017-08-20 12:37:39 +00:00
James Shubin
884ba54f96 resources: Include default MetaParams so Validate will pass in tests 2017-08-18 19:52:02 -04:00
Dennis Kliban
cf2325a2da vagrant: Increase amount of RAM allocated to boxes backed by libvirt 2017-08-07 13:55:21 -04:00
AdnanLFC
db6972638d pgraph: test: Added tests for DeleteEdge 2017-07-28 02:02:22 +02:00
James Shubin
74e04e81d5 travis: Update to golang 1.8 as the default
Since the release of Fedora 26 with golang 1.8.1, this is a fine
default.
2017-07-19 12:15:54 -04:00
James Shubin
7c5d7365c7 readme: Add new recording 2017-06-29 13:14:25 -04:00
James Shubin
0dadf3d78a resources: Add NewNamedResource helper
This makes the common pattern of NewResource followed by SetName easier. It also
makes it less likely for you to forget to use SetName.
2017-06-17 18:09:49 -04:00
James Shubin
e341256627 resources: Add a utility to map from struct fields
For GAPI front ends that want to know what fields they can use and which
they map to, these two functions can be used.
2017-06-17 11:49:30 -04:00
James Shubin
5a3bd3ca67 hcl: Consistent formatting
Nit picks.
2017-06-16 23:01:46 -04:00
ChrisMcKenzie
8102e0a468 hcl: Added hil string interpolation to hcl frontend 2017-06-15 22:53:55 -07:00
ChrisMcKenzie
7d55179727 hcl: Removed edge object in favor of depends_on field in resource 2017-06-12 10:44:13 -07:00
ChrisMcKenzie
bc1a1d1818 hcl: Added basic hcl frontend 2017-06-09 10:31:34 -07:00
James Shubin
a8bbb22fe8 resources: Fix golint issues
Including a trick to get the golinter to allow our compact code!
2017-06-08 04:38:25 -04:00
James Shubin
6b489f71a1 remote: Add a Ready method to know when startup is finished
Previously, there was an extremely rare race where we would startup,
kick off the Run method in a goroutine, and then run Exit before Run got
very far in its execution. If Run ran some early sections of its code
_after_ we had Exited, we would trigger a panic due to the converger UID
being unregistered.

This patch blocks Exit from progressing until Run has started and
finished running. It also adds a Ready method so that you can monitor
this signal yourself if you'd like to add the necessary wait to your
code.
2017-06-08 03:55:03 -04:00
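A sketch of the Ready/Exit ordering described above, with illustrative names rather than mgmt's actual remote API.
```go
package main

import (
	"fmt"
	"time"
)

// Runner sketches the pattern: Exit must not proceed until Run has actually
// started and done its setup, otherwise we could tear down state (such as a
// converger UID) that was never registered.
type Runner struct {
	ready   chan struct{} // closed once Run has finished its startup
	exiting chan struct{}
}

func NewRunner() *Runner {
	return &Runner{
		ready:   make(chan struct{}),
		exiting: make(chan struct{}),
	}
}

// Ready lets callers wait for startup to complete, the same way Exit does.
func (r *Runner) Ready() <-chan struct{} { return r.ready }

func (r *Runner) Run() {
	// ... register converger UID, set up state ...
	close(r.ready) // startup is done; Exit may now proceed
	<-r.exiting    // keep running until asked to exit
}

func (r *Runner) Exit() {
	<-r.ready // block until Run got far enough that teardown is safe
	close(r.exiting)
}

func main() {
	r := NewRunner()
	go r.Run()
	time.Sleep(10 * time.Millisecond) // Exit may be called at any time...
	r.Exit()                          // ...but it waits for Ready internally
	fmt.Println("clean shutdown, no panic")
}
```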
James Shubin
f1db088af4 test: Don't be noisy when running cd during testing 2017-06-08 01:05:58 -04:00
James Shubin
6fe12b3fb5 resources: Compare grouped resources properly
When comparing resources, we have to recursively compare grouped
resources as well! Now fixed.
2017-06-08 01:05:58 -04:00
James Shubin
dacbf9b68d resources: Add resource sorting and clean tests
Resource sorting is needed for comparing resource groups.
2017-06-08 01:05:58 -04:00
James Shubin
9f5057eac7 resources: Do not panic on autogrouped graph switches
Graph changes from autogrouped -> not autogrouped or vice versa cause a
panic (or I assume a leak) because we compared the auto grouped graph to
the ungrouped one, which would cause an Exit on an unstarted Vertex.
This includes a test that seems to reliably reproduce the issue.
2017-06-08 01:05:58 -04:00
James Shubin
525cd54921 pgraph: Improve testing and refactor out some test utilities 2017-06-07 07:13:12 -04:00
James Shubin
7ac94bbf5f resources: Panic if attempting to register a duplicate resource
Don't silently let this overwrite pass. It would mean a mistake.
2017-06-07 03:15:06 -04:00
James Shubin
b8ff6938df resources: Unify resource creation and kind setting
This removes the duplication of the kind string and cleans up things for
resource creation.
2017-06-07 03:07:02 -04:00
James Shubin
2f6c77fba2 misc: Update my tag script to deal with large releases 2017-06-03 03:54:49 -04:00
James Shubin
28a6430778 test: Add gometalinter to our test suite
Add a bunch of new linters to our tests! We can uncomment each sub
linter as we fix up the few remaining issues.
2017-06-03 02:04:10 -04:00
James Shubin
6e4157da35 test: Remove debugging echo from go vet test
I accidentally left it in which totally defeats the point of tests!
2017-06-03 01:34:02 -04:00
James Shubin
4f420dde05 etcd: Wait for server to start before continuing
I think there was a rare race where we would make use of the etcd server
before it had fully started up. I only ever saw this occur on travis,
and with this fix hopefully we'll never see it again.

It is worth mentioning that much of my etcd code and the lib Run()
function could use a solid cleaning.
2017-06-03 01:00:35 -04:00
James Shubin
d9601471df etcd: Small cleanup of the package
Split things into multiple files, and fix up some doc formatting.
2017-06-03 00:34:58 -04:00
James Shubin
9941a97e37 resources: pkg: Add a simple test based on internal logic
We expect the following to stay true. This has always been a bit weird
for me to either remember or expect, so I added a test for my sanity.
2017-06-03 00:15:30 -04:00
James Shubin
0a64b08669 resources: autoedges: Process in a deterministic order
The order you loop through maps isn't necessarily stable, so make sure
you sort everything before you go through it.
2017-06-02 22:29:42 -04:00
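A tiny sketch of the deterministic iteration trick: collect the map keys, sort them, then iterate.
```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	autoEdgeObjs := map[string]int{"pkg": 2, "file": 1, "svc": 3}

	// Map iteration order is not stable in Go, so collect and sort the keys
	// first to process the entries deterministically.
	keys := make([]string, 0, len(autoEdgeObjs))
	for k := range autoEdgeObjs {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	for _, k := range keys {
		fmt.Println(k, autoEdgeObjs[k]) // always file, pkg, svc in that order
	}
}
```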
James Shubin
4d9d0d4548 resources: Improve AutoEdge API and pkg breakage
I previously broke the pkg auto edges because the package list wasn't
available by the time it was called. This fixes the pkg resource so that
it gets the necessary list of packages when needed. Since this means
that a possible failure could happen, we also update the AutoEdges API
to support errors. Errors can only be generated at AutoEdge struct
creation, once the struct has been returned (right before modification
of the graph structure) there is no possibility to return any errors.

It's important to remember that the AutoEdges stuff gets called before
the Init of each resource, so make sure it doesn't depend on anything
that happens there or that gets cached as a result of Init.

This is all much nicer now and has a test too :)
2017-06-02 22:15:28 -04:00
James Shubin
5f6c8545c6 resources: Replace stored pgraph with mgraph and clean up hacks
Now that we're using our meta wrapper graph struct instead of the
pgraph, we can re-implement our SetValue hacks in terms of struct fields
and the implementation is now cleaner.
2017-06-02 18:50:23 -04:00
James Shubin
ddc335d65a resources: Reorganize package and split into multiple files
This should hopefully make finding and changing code easier.
2017-06-02 18:08:47 -04:00
James Shubin
9cbaa892d3 gapi: Allow the GAPI implementer to specify fast and exit
This allows the implementer of the GAPI to specify three parameters for
every Next message sent on the channel. The Fast parameter tells the
agent if it should do the pause quickly or if it should finish the
sequence. A quick pause means that it will cause a pause immediately
after the currently running resources finish, whereas a slow (default)
pause will allow the wave of execution to finish. This is usually
preferred in scenarios where complex graphs are used where we want each
step to complete. The Exit parameter tells the engine to exit, and the
Err parameter tells the engine that an error occurred.
2017-06-02 04:03:10 -04:00
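A sketch of what such a per-event message could look like; the field names follow the description above, but the exact struct in mgmt's gapi package may differ.
```go
package main

import (
	"errors"
	"fmt"
)

// Next sketches the per-event parameters described above.
type Next struct {
	Fast bool  // pause quickly instead of letting the execution wave finish
	Exit bool  // ask the engine to exit
	Err  error // report that something went wrong in the GAPI
}

func main() {
	ch := make(chan Next, 3)
	ch <- Next{}           // normal graph-changed event, slow pause
	ch <- Next{Fast: true} // pause as soon as running resources finish
	ch <- Next{Exit: true, Err: errors.New("GAPI failed")} // shut the engine down
	close(ch)

	for n := range ch {
		fmt.Printf("fast=%v exit=%v err=%v\n", n.Fast, n.Exit, n.Err)
	}
}
```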
James Shubin
9531465410 test: Make sure our examples build
Since there are occasional API changes, I'd like to at least remember to
keep the examples building, so we now have a test to remind us!
2017-06-02 03:32:53 -04:00
James Shubin
c35916fad1 resources: Rename the Data struct to ResData to avoid ambiguity
There's a similarly named gapi.Data struct which we could also rename.
2017-06-02 02:53:53 -04:00
James Shubin
bf476a058e resources: exec: Add send/recv for exec output, stdout and stderr
This adds send/recv output parameters from exec for stdout, stderr, and
output which is a combination of those two. This also includes a few
tests, and a working example too!

Gone are the `some_command > some_file` days of puppet.
2017-06-02 02:52:03 -04:00
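A standard-library sketch of capturing stdout and stderr separately; here the combined output is shown as a simple concatenation for illustration, whereas the real resource interleaves the two streams.
```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	var stdout, stderr bytes.Buffer
	cmd := exec.Command("sh", "-c", "echo hello; echo oops 1>&2")
	cmd.Stdout = &stdout // captured separately so it can be sent on its own
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("exec error:", err)
		return
	}

	// These three values mirror the send/recv parameters described above:
	// stdout, stderr, and the combined output.
	fmt.Printf("Stdout: %q\n", stdout.String())
	fmt.Printf("Stderr: %q\n", stderr.String())
	fmt.Printf("Output: %q\n", stdout.String()+stderr.String())
}
```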
James Shubin
d4e815a4cb resources: Clean up converger and make it easier for tests
This cleans up the resource converger code slightly and makes it easier
to write resource specific test cases.
2017-06-02 01:15:25 -04:00
James Shubin
0545c4167b pgraph: Remove NewVertex and NewEdge methods and fix examples
Since the pgraph graph can store arbitrary pointers, we don't need a
special method to create the vertices or edges as long as they implement
the String() string method. This cleans up the library and some of the
examples which I let rot previously.
2017-05-31 18:04:58 -04:00
James Shubin
6838dd02c0 resources: graph: Add partial implementation of a graph resource
This is something I've wanted to do for a while, but for the reasons
mentioned in the comments, I've been unable to complete yet. I figured
I'd at least merge what does exist so far in case someone else would
like to pick this up. It's a bit of a brain hurdle / monster, because
the tricky part is refactoring the core engine so that this fits in
nicely. Perhaps someone will have more time and/or less tunnel vision
than I to either merge something or sketch out some ideas on the path
forwards. I think it's a useful goal because if recursive resources are
possible, it could force the core engine into a more elegant design.

Happy hacking!
2017-05-31 17:27:34 -04:00
James Shubin
14c2fd1edd resources: Add proper edge compare method
Might as well do this cleanly in one place.
2017-05-31 17:27:34 -04:00
James Shubin
6e503cc79b resources: Simplify the resource Compare functions
This removes one level of indentation and simplifies the code.
2017-05-31 17:27:34 -04:00
James Shubin
bd4563b699 pgraph: Add sort function to sort a list of vertices
With tests too!
2017-05-31 17:27:34 -04:00
James Shubin
458e115490 pgraph: Add logic functions for adding subgraphs
These are helper functions to merge in existing graphs into a main graph
with or without adding an edge relationship between a vertex and the new
graph. These are particularly useful if using mgmt as a lib to break
apart units of work into functions that create sub graphs, which are
then added to the main graph when they're returned.
2017-05-31 17:27:25 -04:00
James Shubin
51369adad1 pgraph: Add a GraphCmp method
This could probably be more efficient using a known algorithm, and it
could definitely require more tests, but is good enough for now.
2017-05-31 16:45:39 -04:00
James Shubin
f65c5fb147 resources: nspawn: Fix small style issues 2017-05-31 15:36:15 -04:00
James Shubin
4150ae7307 pgraph: Replace edge struct with interface
This further cleans up the pgraph lib to be more generic.
2017-05-31 15:36:15 -04:00
James Shubin
a87288d519 pgraph, resources: Major refactoring continued
There was simply some technical debt I needed to kill off. Sorry for not
splitting this up into more patches.
2017-05-31 15:36:14 -04:00
James Shubin
3cf9639e99 pgraph, resources: Major refactor to remove pgraph to resource dep
This is the mechanical port of the remaining bits. Next to clean it up a
bit.
2017-05-29 15:43:50 -04:00
James Shubin
4490c3ed1a resources: Map to semaphores doesn't need to be a pointer
A map in golang is a reference type.
2017-05-29 15:43:50 -04:00
James Shubin
fbcb562781 pgraph: Move the timestamp storage into the resource 2017-05-29 15:43:50 -04:00
James Shubin
b1e035f96a pgraph: Move get/set state methods out to resource package 2017-05-29 15:43:50 -04:00
James Shubin
11c3a26c23 pgraph: Move the AutoEdges mechanism into the resource package
Remove the pgraph->resource dependency.
2017-05-29 15:43:50 -04:00
James Shubin
1fbe72b52d test: Run go vet across whole packages not individual files
The golang tooling is quite deficient, in that it makes it quite
difficult to get the tools to do_the_right_thing, without ample wrapping
of bash scripting. Go vet was finding issues because it didn't have the
full context available. Hopefully this package level context is
sufficient for now. It still lacks inter-package context though.
2017-05-29 15:43:50 -04:00
James Shubin
f4bb066737 test: Run go vet with -source flag in newer releases
This should hopefully eliminate some false positives.
https://github.com/golang/go/issues/20514
2017-05-29 15:43:50 -04:00
Julien Pivotto
aaac9cbeeb vagrant: Setup Packagekit in the box
Without packagekit the 'pkg' resources can not be used

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-05-17 09:54:23 +02:00
Julien Pivotto
0e68ff6923 vagrant: Install make in the Vagrant box
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-05-17 06:41:43 +02:00
James Shubin
1c59712cbf pgraph: Move AssociateData function out of the package
This removes another dependency on the resource package.
2017-05-15 10:19:46 -04:00
James Shubin
c2cb1c9168 pgraph: Move GraphMetas function out of package
This removes a dependency on the resources package which wasn't
necessary.
2017-05-15 10:06:31 -04:00
James Shubin
cc8e2e40dd pgraph: Update graph API to remove Get prefix and add Adjacency
Simple cleanups.
2017-05-15 09:58:10 -04:00
James Shubin
e67d97d9da pgraph: Replace CompareMatch with VertexMatchFn
This removes a reference to the resources package in pgraph.
2017-05-13 13:55:42 -04:00
James Shubin
d74c2115fd pgraph: Untangle the semaphore code from the pgraph implementation
This re-implements the semaphore code on top of the graph kv store.
2017-05-13 13:28:41 -04:00
James Shubin
70e7ee2d46 pgraph: Remove use of Flags struct in favour of Value API
One small step to completely cleaning up the pgraph package so that we
can eventually fix the code that would otherwise create a cycle!
2017-05-13 13:28:41 -04:00
James Shubin
d11854f4e8 pgraph: Clean up pgraph module to get ready for clean lib status
The graph of dependencies in golang is a DAG, and as such doesn't allow
cycles. Clean up this lib so that it eventually doesn't import our
resources module or anything else which might want to import it.

This patch makes adjacency private, and adds a generalized key store to
the graph struct.
2017-05-13 13:28:41 -04:00
James Shubin
4bb553e015 pgraph: Use the correct vertex handle to prevent a race
Small typo made that is now fixed! These need to get caught with golint!
2017-05-13 10:08:38 -04:00
James Shubin
0af9af44e5 etcd, resources, world: Add World API for shared keys
It's up to the end user to decide who is writing and/or overwriting
them.

It could also be useful to reimplement (refactor) some of the existing
World APIs to be implemented in terms of these primitives.
2017-04-17 07:03:29 -04:00
James Shubin
3a0d73f740 readme: Add new links 2017-04-13 04:35:59 -04:00
James Shubin
9b9ff2622d resources: Make resource kind and baseuid fields public
This is required if we're going to have out of package resources. In
particular for third party packages, and also for if we decide to split
out each resource into a separate sub package.
2017-04-11 01:52:21 -04:00
James Shubin
a4858be967 lib, gapi: Next method of GAPI should generate first event
This puts the generation of the initial event into the Next method of
the GAPI. If it does not happen, then we will never get a graph. This is
important because this notifies the GAPI when we're actually ready to
try and generate a graph, rather than blocking on the Graph method if we
have a long compile for example.

This is also required for the etcd watch cleanup.
2017-04-10 03:20:58 -04:00
James Shubin
6fd5623b1f gapi: Move separate etcd Watch method into GAPI
This cleans up the API to not have a special case for etcd anymore. In
particular, this also adds the requirement that the GAPI must generate
an event on startup as soon as it is ready to generate a graph.
2017-04-10 03:20:58 -04:00
James Shubin
66d9c7091c lib: examples: Update to most recent API
At some point in the past the API changed. Fixed now.
2017-04-10 03:20:58 -04:00
Mildred Ki'Lya
525a1e8140 yamlgraph: Refactor parsing for dynamic resource registration
Avoid use of the reflect package, and use an extensible list of registered
resource kinds. This also has the benefit of removing the empty VirtRes and
AugeasRes struct types when compiling without libvirt and libaugeas.
2017-03-24 22:38:06 +01:00
James Shubin
64dc47d7e9 misc: Fixup documentation 2017-03-20 17:11:51 -04:00
James Shubin
f3fc7bb91e resources: svc: Add basic support for user services
These are user specific services and are available on the session bus.
This doesn't use the private user API because
https://github.com/coreos/go-systemd/pull/225 was NACKed.
2017-03-17 10:15:02 -04:00
James Shubin
028ef14cc0 misc: Replace sloppy use of %v with %s 2017-03-16 13:18:36 -04:00
James Shubin
3e001f9a1c main: Update log messages for consistency 2017-03-16 13:14:50 -04:00
Julien Pivotto
33d20ac6d8 prometheus: Add detailed metrics
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-03-16 14:18:46 +01:00
James Shubin
660554cc45 todo: Update the TODO file to be more current
We should really try to remember to patch it with fixes when we do them
:)
2017-03-15 14:57:29 -04:00
James Shubin
a455324e8c examples: Add missing file example
I was using this for testing graph changes but forgot to commit it.
2017-03-13 08:05:49 -04:00
James Shubin
cd5e2e1148 pgraph: Add fast pausing and exiting of graphs
This causes a graph to actually stop processing part way through, even
if there are pokes that want to continue on. This is so that the user
experience of pressing ^C actually causes a shutdown without finishing
the graph execution. It might be preferred to have this be a user
defined setting at some point in the future, such as if the user presses
^C twice. As well, we might want to implement an interrupt API so that
individual resource execution can be asked to bail out early if
requested. This could happen on a third ^C press.
2017-03-13 07:54:03 -04:00
James Shubin
074da4da19 pgraph, resources: Run the resource Setup in parallel
This is a reasonable thing to do at this time.
2017-03-13 07:54:03 -04:00
James Shubin
e4e39d820c pgraph: semaphore: Refactor semaphore size function and test 2017-03-13 07:49:29 -04:00
James Shubin
e5dbb214a2 pgraph: Move the BackPoke to before the semaphores
I can't think of a reason we should grab a semaphore before backpoking.
The semaphore is intended to block around the actual work in CheckApply,
not the dependency resolution of the correct vertex.
2017-03-13 07:49:29 -04:00
James Shubin
91af528ff8 pgraph: Move the quiesce done indicator to avoid deadlock
This avoids a deadlock on resource failure when retry==0. Without this
we would never exit. This adds a test in too!
2017-03-12 13:52:35 -04:00
James Shubin
18c4e39ea3 resources: exit: Misc cleanups
Some of this code hadn't been touched much since an early mgmt. Here's a
quick cleanup of some cruft.
2017-03-12 13:21:22 -04:00
James Shubin
bda455ce78 resources: exec: Ignore signals sent to main process
When we send a ^C to the main process, our children see it too! This
puts them in their own process group so that they're not affected.
There's still the matter of properly hooking up the internal exit signal
to a proper shutdown, but that's separate.

This might mean that there should be a case for an interrupt aspect to
the resource API which would allow a second ^C by the engine, to cause a
forceful termination by the resource if that resource supported that.
2017-03-12 13:11:54 -04:00
James Shubin
a07aea1ad3 resources: exec: Clean up command error processing
Show the exit status on error and general cleanups.
2017-03-12 12:44:03 -04:00
James Shubin
18e2dbf144 resources: exec: Remove state checks that are done in the engine
These state checks are now done automatically in the engine, and so they
should be removed to make the code easier to read.
2017-03-12 12:35:03 -04:00
James Shubin
564a07e62e resources: exec: Don't invalidate state on poke
This was some legacy incorrect decision from earlier mgmt.
2017-03-12 12:35:02 -04:00
James Shubin
a358135e41 resources: exec: Remove the pollint parameter
Since we now have a poll metaparameter, we don't need the resource
specific code.
2017-03-12 10:49:26 -04:00
James Shubin
6d9be15035 pgraph: semaphore: Add lock around semaphore map
I forgot about the `concurrent map write` race, but now it's fixed. I
suppose we could probably pre-create all semaphores in the graph at once
before Start, and remove this lock, but that's an optimization for a
later day.
2017-03-11 09:06:18 -05:00
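A small sketch of guarding the lazily-created semaphore map with a mutex; the types and names are illustrative, not mgmt's actual pgraph code.
```go
package main

import (
	"fmt"
	"sync"
)

// semaphores guards lazily-created semaphore channels by id; without the
// mutex, concurrent vertices creating entries would trigger the
// "concurrent map write" race mentioned above.
type semaphores struct {
	mutex sync.Mutex
	sems  map[string]chan struct{}
}

// get returns the semaphore for id, creating it with the given size once.
func (s *semaphores) get(id string, size int) chan struct{} {
	s.mutex.Lock()
	defer s.mutex.Unlock()
	if _, ok := s.sems[id]; !ok {
		s.sems[id] = make(chan struct{}, size)
	}
	return s.sems[id]
}

func main() {
	s := &semaphores{sems: make(map[string]chan struct{})}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ { // many vertices asking for the same semaphore
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem := s.get("some:sema:2", 2)
			sem <- struct{}{}        // P(): acquire
			defer func() { <-sem }() // V(): release
			// ... CheckApply work would run here ...
		}()
	}
	wg.Wait()
	fmt.Println("no concurrent map write race")
}
```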
James Shubin
b740e0b78a git: Add more features to tag.sh script
This helps me make releases and probably won't help you, but why not be
transparent about things and tools!
2017-03-11 08:41:56 -05:00
181 changed files with 13203 additions and 4683 deletions

View File

@@ -12,6 +12,9 @@ end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
[*.sh]
indent_style = tab
[*.go]
indent_style = tab

36
.github/PULL_REQUEST_TEMPLATE.md vendored Normal file
View File

@@ -0,0 +1,36 @@
## Tips:
* commit message titles must be in the form:
```topic: Capitalized message with no trailing period```
or:
```topic, topic2: Capitalized message with no trailing period```
* golang code must be formatted according to the standard, please run:
```
make gofmt # formats the entire project correctly
```
or format a single golang file correctly:
```
gofmt -w yourcode.go
```
* please rebase your patch against current git master:
```
git checkout master
git pull origin master
git checkout your-feature
git rebase master
git push your-remote your-feature
hub pull-request # or submit with the github web ui
```
* after a patch review, please ping @purpleidea so we know to re-review:
```
# make changes based on reviews...
git add -p # add new changes
git commit --amend # combine with existing commit
git push your-remote your-feature -f
# now ping @purpleidea in the github PR since it doesn't notify us automatically
```
## Thanks for contributing to mgmt and welcome to the team!

96
.github/settings.yml vendored Normal file
View File

@@ -0,0 +1,96 @@
# These settings are synced to GitHub by https://probot.github.io/apps/settings/
repository:
# See https://developer.github.com/v3/repos/#edit for all available settings.
# The name of the repository. Changing this will rename the repository
name: mgmt
# A short description of the repository that will show up on GitHub
description: Next generation distributed, event-driven, parallel config management!
# A URL with more information about the repository
homepage: https://ttboj.wordpress.com/?s=mgmtconfig
# A comma-separated list of topics to set on the repository
topics: golang, go, configuration-management, config-management, devops, etcd, distributed-systems, graph-theory
# Either `true` to make the repository private, or `false` to make it public.
private: false
# Either `true` to enable issues for this repository, `false` to disable them.
has_issues: true
# Either `true` to enable projects for this repository, or `false` to disable them.
# If projects are disabled for the organization, passing `true` will cause an API error.
has_projects: false
# Either `true` to enable the wiki for this repository, `false` to disable it.
has_wiki: false
# Either `true` to enable downloads for this repository, `false` to disable them.
has_downloads: true
# Updates the default branch for this repository.
default_branch: master
# Either `true` to allow squash-merging pull requests, or `false` to prevent
# squash-merging.
allow_squash_merge: false
# Either `true` to allow merging pull requests with a merge commit, or `false`
# to prevent merging pull requests with merge commits.
allow_merge_commit: false
# Either `true` to allow rebase-merging pull requests, or `false` to prevent
# rebase-merging.
allow_rebase_merge: true
# Labels: define labels for Issues and Pull Requests (in alphabetical order)
labels:
- name: bug
color: fc2929
- name: confirmed
color: d93f0b
- name: design
color: 5319e7
- name: duplicate
color: cccccc
- name: enhancement
color: 84b6eb
- name: good first issue
color: 7057ff
- name: help wanted
color: 159818
- name: invalid
color: e6e6e6
- name: mgmtlove
color: e11d21
- name: question
color: cc317c
- name: wontfix
color: ffffff
# - name: first-timers-only
# # include the old name to rename an existing label
# oldname: Help Wanted
# Collaborators: give specific users access to this repository.
#collaborators:
# - username: purpleidea
# # Note: Only valid on organization-owned repositories.
# # The permission to grant the collaborator. Can be one of:
# # * `pull` - can pull, but not push to or administer this repository.
# # * `push` - can pull and push, but not administer this repository.
# # * `admin` - can pull, push and administer this repository.
# permission: push
# - username: hubot
# permission: pull
# NOTE: The APIs needed for teams are not supported yet by GitHub Apps
# https://developer.github.com/v3/apps/available-endpoints/
#teams:
# - name: core
# permission: admin
# - name: docs
# permission: push

2
.gitignore vendored
View File

@@ -2,9 +2,11 @@
.omv/
.ssh/
.vagrant/
.envrc
old/
tmp/
*_stringer.go
bindata/*.go
mgmt
mgmt.static
mgmt.iml

6
.gitmodules vendored
View File

@@ -16,3 +16,9 @@
[submodule "vendor/honnef.co/go/augeas"]
path = vendor/honnef.co/go/augeas
url = https://github.com/dominikh/go-augeas/
[submodule "vendor/github.com/grpc-ecosystem/go-grpc-prometheus"]
path = vendor/github.com/grpc-ecosystem/go-grpc-prometheus
url = https://github.com/grpc-ecosystem/go-grpc-prometheus
[submodule "vendor/github.com/ugorji/go"]
path = vendor/github.com/ugorji/go
url = https://github.com/ugorji/go

View File

@@ -1,14 +1,14 @@
language: go
go:
- 1.6.x
- 1.7.x
- 1.8.x
- 1.9.x
- tip
go_import_path: github.com/purpleidea/mgmt
sudo: true
dist: trusty
before_install:
- sudo apt update
- git config remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"
- git fetch --unshallow
install: 'make deps'
script: 'make test'
@@ -16,7 +16,7 @@ matrix:
fast_finish: true
allow_failures:
- go: tip
- go: 1.8.x
- go: 1.9.x
notifications:
irc:
channels:

141
COPYING
View File

@@ -1,5 +1,5 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
@@ -7,15 +7,17 @@
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
@@ -24,34 +26,44 @@ them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
@@ -60,7 +72,7 @@ modification follow.
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
@@ -537,45 +549,35 @@ to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
@@ -633,29 +635,40 @@ the "copyright" line and a pointer to where the full notice is found.
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
GNU General Public License for more details.
You should have received a copy of the GNU Affero General Public License
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.

View File

@@ -1,16 +1,16 @@
Mgmt
Copyright (C) 2013-2017+ James Shubin and the project contributors
Copyright (C) 2013-2018+ James Shubin and the project contributors
Written by James Shubin <james@shubin.ca> and the project contributors
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
GNU General Public License for more details.
You should have received a copy of the GNU Affero General Public License
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.

View File

@@ -1,28 +1,28 @@
# Mgmt
# Copyright (C) 2013-2017+ James Shubin and the project contributors
# Copyright (C) 2013-2018+ James Shubin and the project contributors
# Written by James Shubin <james@shubin.ca> and the project contributors
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# GNU General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
SHELL = /usr/bin/env bash
.PHONY: all art cleanart version program path deps run race generate build clean test gofmt yamlfmt format docs rpmbuild mkdirs rpm srpm spec tar upload upload-sources upload-srpms upload-rpms copr
.SILENT: clean
.PHONY: all art cleanart version program path deps run race bindata generate build clean test gofmt yamlfmt format docs rpmbuild mkdirs rpm srpm spec tar upload upload-sources upload-srpms upload-rpms copr
.SILENT: clean bindata
GO_FILES := $(shell find . -name '*.go')
SVERSION := $(or $(SVERSION),$(shell git describe --match '[0-9]*\.[0-9]*\.[0-9]*' --tags --dirty --always))
VERSION := $(or $(VERSION),$(shell git describe --match '[0-9]*\.[0-9]*\.[0-9]*' --tags --abbrev=0))
PROGRAM := $(shell echo $(notdir $(CURDIR)) | cut -f1 -d"-")
OLDGOLANG := $(shell go version | grep -E 'go1.3|go1.4')
ifeq ($(VERSION),$(SVERSION))
RELEASE = 1
else
@@ -101,40 +101,36 @@ run:
race:
find . -maxdepth 1 -type f -name '*.go' -not -name '*_test.go' | xargs go run -race -ldflags "-X main.program=$(PROGRAM) -X main.version=$(SVERSION)"
# generate go files from non-go source
bindata:
$(MAKE) --quiet -C bindata
generate:
go generate
build: $(PROGRAM)
build: bindata $(PROGRAM)
$(PROGRAM): main.go
$(PROGRAM): $(GO_FILES)
@echo "Building: $(PROGRAM), version: $(SVERSION)..."
ifneq ($(OLDGOLANG),)
@# avoid equals sign in old golang versions eg in: -X foo=bar
time go build -ldflags "-X main.program $(PROGRAM) -X main.version $(SVERSION)" -o $(PROGRAM) $(BUILD_FLAGS);
else
time go build -i -ldflags "-X main.program=$(PROGRAM) -X main.version=$(SVERSION)" -o $(PROGRAM) $(BUILD_FLAGS);
endif
$(PROGRAM).static: main.go
$(PROGRAM).static: $(GO_FILES)
@echo "Building: $(PROGRAM).static, version: $(SVERSION)..."
go generate
ifneq ($(OLDGOLANG),)
@# avoid equals sign in old golang versions eg in: -X foo=bar
go build -a -installsuffix cgo -tags netgo -ldflags '-extldflags "-static" -X main.program $(PROGRAM) -X main.version $(SVERSION)' -o $(PROGRAM).static $(BUILD_FLAGS);
else
go build -a -installsuffix cgo -tags netgo -ldflags '-extldflags "-static" -X main.program=$(PROGRAM) -X main.version=$(SVERSION)' -o $(PROGRAM).static $(BUILD_FLAGS);
endif
go build -a -installsuffix cgo -tags netgo -ldflags '-extldflags "-static" -X main.program=$(PROGRAM) -X main.version=$(SVERSION) -s -w' -o $(PROGRAM).static $(BUILD_FLAGS);
clean:
[ ! -e $(PROGRAM) ] || rm $(PROGRAM)
rm -f *_stringer.go # generated by `go generate`
rm -f *_mock.go # generated by `go generate`
test:
test: bindata
./test.sh
gofmt:
find . -maxdepth 3 -type f -name '*.go' -not -path './old/*' -not -path './tmp/*' -exec gofmt -w {} \;
# TODO: remove gofmt once goimports has a -s option
find . -maxdepth 3 -type f -name '*.go' -not -path './old/*' -not -path './tmp/*' -exec gofmt -s -w {} \;
find . -maxdepth 3 -type f -name '*.go' -not -path './old/*' -not -path './tmp/*' -exec goimports -w {} \;
yamlfmt:
find . -maxdepth 3 -type f -name '*.yaml' -not -path './old/*' -not -path './tmp/*' -not -path './omv.yaml' -exec ruby -e "require 'yaml'; x=YAML.load_file('{}').to_yaml.each_line.map(&:rstrip).join(10.chr)+10.chr; File.open('{}', 'w').write x" \;


@@ -12,7 +12,7 @@
Come join us in the `mgmt` community!
| Medium | Link |
|---|---|---|
|---|---|
| IRC | [#mgmtconfig](https://webchat.freenode.net/?channels=#mgmtconfig) on Freenode |
| Twitter | [@mgmtconfig](https://twitter.com/mgmtconfig) & [#mgmtconfig](https://twitter.com/hashtag/mgmtconfig) |
| Mailing list | [mgmtconfig-list@redhat.com](https://www.redhat.com/mailman/listinfo/mgmtconfig-list) |
@@ -22,6 +22,7 @@ Mgmt is a fairly new project.
We're working towards being minimally useful for production environments.
We aren't feature complete for what we'd consider a 1.x release yet.
With your help you'll be able to influence our design and get us there sooner!
Interested developers should read the [quick start guide](docs/quick-start-guide.md).
## Documentation:
Please read, enjoy and help improve our documentation!
@@ -30,6 +31,7 @@ Please read, enjoy and help improve our documentation!
|---|---|
| [general documentation](docs/documentation.md) | for everyone |
| [quick start guide](docs/quick-start-guide.md) | for mgmt developers |
| [frequently asked questions](docs/faq.md) | for everyone |
| [resource guide](docs/resource-guide.md) | for mgmt developers |
| [godoc API reference](https://godoc.org/github.com/purpleidea/mgmt) | for mgmt developers |
| [prometheus guide](docs/prometheus.md) | for everyone |
@@ -40,9 +42,9 @@ Please ask in the [community](#community)!
If you have a well phrased question that might benefit others, consider asking it by sending a patch to the documentation [FAQ](https://github.com/purpleidea/mgmt/blob/master/docs/documentation.md#usage-and-frequently-asked-questions) section. I'll merge your question, and a patch with the answer!
## Roadmap:
Feel free to grab one of the straightforward [#mgmtlove](https://github.com/purpleidea/mgmt/labels/mgmtlove) issues if you're a first time contributor to the project or if you're unsure about what to hack on!
Please see: [TODO.md](TODO.md) for a list of upcoming work and TODO items.
Please get involved by working on one of these items or by suggesting something else!
Feel free to grab one of the straightforward [#mgmtlove](https://github.com/purpleidea/mgmt/labels/mgmtlove) issues if you're a first time contributor to the project or if you're unsure about what to hack on!
## Bugs:
Please set the `DEBUG` constant in [main.go](https://github.com/purpleidea/mgmt/blob/master/main.go) to `true`, and post the logs when you report the [issue](https://github.com/purpleidea/mgmt/issues).
@@ -53,31 +55,7 @@ Feel free to read my article on [debugging golang programs](https://ttboj.wordpr
We'd love to have your patches! Please send them by email, or as a pull request.
## On the web:
| Author | Format | Subject |
|---|---|---|
| James Shubin | blog | [Next generation configuration mgmt](https://ttboj.wordpress.com/2016/01/18/next-generation-configuration-mgmt/) |
| James Shubin | video | [Introductory recording from DevConf.cz 2016](https://www.youtube.com/watch?v=GVhpPF0j-iE&html5=1) |
| James Shubin | video | [Introductory recording from CfgMgmtCamp.eu 2016](https://www.youtube.com/watch?v=fNeooSiIRnA&html5=1) |
| Julian Dunn | video | [On mgmt at CfgMgmtCamp.eu 2016](https://www.youtube.com/watch?v=kfF9IATUask&t=1949&html5=1) |
| Walter Heck | slides | [On mgmt at CfgMgmtCamp.eu 2016](http://www.slideshare.net/olindata/configuration-management-time-for-a-4th-generation/3) |
| Marco Marongiu | blog | [On mgmt](http://syslog.me/2016/02/15/leap-or-die/) |
| Felix Frank | blog | [From Catalog To Mgmt (on puppet to mgmt "transpiling")](https://ffrank.github.io/features/2016/02/18/from-catalog-to-mgmt/) |
| James Shubin | blog | [Automatic edges in mgmt (...and the pkg resource)](https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/) |
| James Shubin | blog | [Automatic grouping in mgmt](https://ttboj.wordpress.com/2016/03/30/automatic-grouping-in-mgmt/) |
| John Arundel | tweet | [“Puppets days are numbered.”](https://twitter.com/bitfield/status/732157519142002688) |
| Felix Frank | blog | [Puppet, Meet Mgmt (on puppet to mgmt internals)](https://ffrank.github.io/features/2016/06/12/puppet,-meet-mgmt/) |
| Felix Frank | blog | [Puppet Powered Mgmt (puppet to mgmt tl;dr)](https://ffrank.github.io/features/2016/06/19/puppet-powered-mgmt/) |
| James Shubin | blog | [Automatic clustering in mgmt](https://ttboj.wordpress.com/2016/06/20/automatic-clustering-in-mgmt/) |
| James Shubin | video | [Recording from CoreOSFest 2016](https://www.youtube.com/watch?v=KVmDCUA42wc&html5=1) |
| James Shubin | video | [Recording from DebConf16](http://meetings-archive.debian.net/pub/debian-meetings/2016/debconf16/Next_Generation_Config_Mgmt.webm) ([Slides](https://annex.debconf.org//debconf-share/debconf16/slides/15-next-generation-config-mgmt.pdf)) |
| Felix Frank | blog | [Edging It All In (puppet and mgmt edges)](https://ffrank.github.io/features/2016/07/12/edging-it-all-in/) |
| Felix Frank | blog | [Translating All The Things (puppet to mgmt translation warnings)](https://ffrank.github.io/features/2016/08/19/translating-all-the-things/) |
| James Shubin | video | [Recording from systemd.conf 2016](https://www.youtube.com/watch?v=jB992Zb3nH0&html5=1) |
| James Shubin | blog | [Remote execution in mgmt](https://ttboj.wordpress.com/2016/10/07/remote-execution-in-mgmt/) |
| James Shubin | video | [Recording from High Load Strategy 2016](https://vimeo.com/191493409) |
| James Shubin | video | [Recording from NLUUG 2016](https://www.youtube.com/watch?v=MmpwOQAb_SE&html5=1) |
| James Shubin | blog | [Send/Recv in mgmt](https://ttboj.wordpress.com/2016/12/07/sendrecv-in-mgmt/) |
| James Shubin | blog | [Metaparameters in mgmt](https://ttboj.wordpress.com/2017/03/01/metaparameters-in-mgmt/) |
[Read what people are saying and publishing about mgmt!](docs/on-the-web.md)
##

TODO.md

@@ -1,16 +1,16 @@
# TODO
If you're looking for something to do, look here!
Let us know if you're working on one of the items.
If you'd like something to work on, ping @purpleidea and I'll create an issue
tailored especially for you! Just let me know your approximate golang skill
level and how many hours you'd like to spend on the patch.
## Package resource
- [ ] getfiles support on debian [bug](https://github.com/hughsie/PackageKit/issues/118)
- [ ] directory info on fedora [bug](https://github.com/hughsie/PackageKit/issues/117)
- [ ] dnf blocker [bug](https://github.com/hughsie/PackageKit/issues/110)
- [ ] install signal blocker [bug](https://github.com/hughsie/PackageKit/issues/109)
## File resource [bug](https://github.com/purpleidea/mgmt/issues/64) [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
- [ ] chown/chmod support [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
- [ ] user/group support [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
- [ ] recurse limit support [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
- [ ] fanotify support [bug](https://github.com/go-fsnotify/fsnotify/issues/114)
@@ -29,7 +29,6 @@ Let us know if you're working on one of the items.
## Virt (libvirt) resource
- [ ] base resource improvements [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
- [ ] port to upstream https://github.com/libvirt/libvirt-go [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## Net (systemd-networkd) resource
- [ ] base resource
@@ -52,6 +51,9 @@ Let us know if you're working on one of the items.
## Torrent/dht file transfer
- [ ] base plumbing
## GPG/Auth improvements
- [ ] base plumbing
## Language improvements
- [ ] language design
- [ ] lexer/parser

Vagrantfile

@@ -6,13 +6,16 @@ Vagrant.configure(2) do |config|
config.vm.synced_folder ".", "/vagrant", disabled: true
config.vm.define "mgmt-dev" do |instance|
instance.vm.box = "fedora/24-cloud-base"
instance.vm.box = "fedora/26-cloud-base"
end
config.vm.provider "virtualbox" do |v|
v.memory = 1536
v.cpus = 2
end
config.vm.provider "libvirt" do |v|
v.memory = 2048
end
config.vm.provision "file", source: "vagrant/motd", destination: ".motd"
config.vm.provision "shell", inline: "cp ~vagrant/.motd /etc/motd"
@@ -21,7 +24,16 @@ Vagrant.configure(2) do |config|
config.vm.provision "file", source: "~/.gitconfig", destination: ".gitconfig"
# copied from make-deps.sh (with added git)
config.vm.provision "shell", inline: "dnf install -y libvirt-devel golang golang-googlecode-tools-stringer hg git"
config.vm.provision "shell", inline: "dnf install -y libvirt-devel golang golang-googlecode-tools-stringer hg git make"
# set up packagekit
config.vm.provision "shell" do |shell|
shell.inline = <<-SCRIPT
dnf install -y PackageKit
systemctl enable packagekit
systemctl start packagekit
SCRIPT
end
# set up vagrant home
script = <<-SCRIPT

bindata/Makefile

@@ -0,0 +1,33 @@
# Mgmt
# Copyright (C) 2013-2018+ James Shubin and the project contributors
# Written by James Shubin <james@shubin.ca> and the project contributors
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# The bindata target generates go files from any source defined below. To use
# the files, import the "bindata" package and use:
# `bytes, err := bindata.Asset("FILEPATH")`
# where FILEPATH is the path of the original input file relative to `bindata/`.
.PHONY: build
default: build
build: bindata.go
# add more input files as dependencies at the end here...
bindata.go: ../COPYING
# go-bindata --pkg bindata -o {OUTPUT} {INPUT}
go-bindata --pkg bindata -o ./$@ $^
# gofmt the output file
gofmt -s -w $@
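For clarity, here is a minimal sketch of what consuming one of these generated assets could look like from Go. The import path and the asset name are assumptions based on the layout above (the input file is `../COPYING`, relative to `bindata/`); adjust them to your tree:
```golang
package main

import (
	"fmt"
	"log"

	// assumed import path for the generated package
	"github.com/purpleidea/mgmt/bindata"
)

func main() {
	// Asset returns the embedded file contents; the name is the input path
	// relative to bindata/, as described in the Makefile comment above.
	b, err := bindata.Asset("../COPYING")
	if err != nil {
		log.Fatalf("asset not found: %v", err)
	}
	fmt.Printf("embedded COPYING is %d bytes\n", len(b))
}
```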


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package converger is a facility for reporting the converged state.
@@ -85,7 +85,7 @@ type cuid struct {
}
// NewConverger builds a new converger struct.
func NewConverger(timeout int, stateFn func(bool) error) *converger {
func NewConverger(timeout int, stateFn func(bool) error) Converger {
return &converger{
timeout: timeout,
stateFn: stateFn,

doc.go

@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package main provides the main entrypoint for using the `mgmt` software.


@@ -1,10 +1,10 @@
FROM golang:1.6.2
FROM golang:1.8
MAINTAINER Michał Czeraszkiewicz <contact@czerasz.com>
# Set the reset cache variable
# Read more here: http://czerasz.com/2014/11/13/docker-tip-and-tricks/#use-refreshedat-variable-for-better-cache-control
ENV REFRESHED_AT 2016-05-10
ENV REFRESHED_AT 2017-11-16
# Update the package list to be able to use required packages
RUN apt-get update


@@ -1,10 +1,10 @@
FROM golang:1.6.2
FROM golang:1.8
MAINTAINER Michał Czeraszkiewicz <contact@czerasz.com>
# Set the reset cache variable
# Read more here: http://czerasz.com/2014/11/13/docker-tip-and-tricks/#use-refreshedat-variable-for-better-cache-control
ENV REFRESHED_AT 2016-05-14
ENV REFRESHED_AT 2017-11-16
RUN apt-get update
@@ -27,5 +27,8 @@ WORKDIR /home/$USER_NAME/mgmt
# Install dependencies
RUN make deps
# Chown $GOPATH
RUN chown -R ${USER_ID}:${GROUP_ID} /go
# Change user
USER ${USER_NAME}


@@ -51,7 +51,7 @@ master_doc = 'index'
# General information about the project.
project = u'mgmt'
copyright = u'2013-2017+ James Shubin and the project contributors'
copyright = u'2013-2018+ James Shubin and the project contributors'
author = u'James Shubin'
# The version info for the project you're documenting, acts as replacement for


@@ -1,9 +1,4 @@
# mgmt
Available from:
[https://github.com/purpleidea/mgmt/](https://github.com/purpleidea/mgmt/)
This documentation is available in: [Markdown](https://github.com/purpleidea/mgmt/blob/master/docs/documentation.md) or [PDF](https://pdfdoc-purpleidea.rhcloud.com/pdf/https://github.com/purpleidea/mgmt/blob/master/docs/documentation.md) format.
# General documentation
## Overview
@@ -26,16 +21,12 @@ For more information, you may like to read some blog posts from the author:
* [Send/Recv in mgmt](https://ttboj.wordpress.com/2016/12/07/sendrecv-in-mgmt/)
* [Metaparameters in mgmt](https://ttboj.wordpress.com/2017/03/01/metaparameters-in-mgmt/)
There is also an [introductory video](http://meetings-archive.debian.net/pub/debian-meetings/2016/debconf16/Next_Generation_Config_Mgmt.webm) available.
Older videos and other material [is available](https://github.com/purpleidea/mgmt/#on-the-web).
There is also an [introductory video](https://www.youtube.com/watch?v=LkEtBVLfygE&html5=1) available.
Older videos and other material [is available](on-the-web.md).
## Setup
During this prototype phase, the tool can be run out of the source directory.
You'll probably want to use ```./run.sh run --yaml examples/graph1.yaml``` to
get started. Beware that this _can_ cause data loss. Understand what you're
doing first, or perform these actions in a virtual environment such as the one
provided by [Oh-My-Vagrant](https://github.com/purpleidea/oh-my-vagrant).
You'll probably want to read the [quick start guide](quick-start-guide.md) to get going.
## Features
@@ -162,255 +153,6 @@ For more details and caveats see [Puppet.md](Puppet.md).
An introductory post on the Puppet support is on
[Felix's blog](http://ffrank.github.io/features/2016/06/19/puppet-powered-mgmt/).
## Resources
This section lists all the built-in resources and their properties. The
resource primitives in `mgmt` are typically more powerful than resources in
other configuration management systems because they can be event based which
lets them respond in real-time to converge to the desired state. This property
allows you to build more complex resources that you probably hadn't considered
in the past.
In addition to the resource specific properties, there are resource properties
(otherwise known as parameters) which can apply to every resource. These are
called [meta parameters](#meta-parameters) and are listed separately. Certain
meta parameters aren't very useful when combined with certain resources, but
in general, it should be fairly obvious, such as when combining the `noop` meta
parameter with the [Noop](#Noop) resource.
* [Augeas](#Augeas): Manipulate files using augeas.
* [Exec](#Exec): Execute shell commands on the system.
* [File](#File): Manage files and directories.
* [Hostname](#Hostname): Manages the hostname on the system.
* [KV](#KV): Set a key value pair in our shared world database.
* [Msg](#Msg): Send log messages.
* [Noop](#Noop): A simple resource that does nothing.
* [Nspawn](#Nspawn): Manage systemd-machined nspawn containers.
* [Password](#Password): Create random password strings.
* [Pkg](#Pkg): Manage system packages with PackageKit.
* [Svc](#Svc): Manage system systemd services.
* [Timer](#Timer): Create events on a regular interval.
* [Virt](#Virt): Manage virtual machines with libvirt.
### Augeas
The augeas resource uses [augeas](http://augeas.net/) commands to manipulate
files.
### Exec
The exec resource can execute commands on your system.
### File
The file resource manages files and directories. In `mgmt`, directories are
identified by a trailing slash in their path name. Files have no such slash.
It has the following properties:
- `path`: file path (directories have a trailing slash here)
- `content`: raw file content
- `state`: either `exists` (the default value) or `absent`
- `mode`: octal unix file permissions
- `owner`: username or uid for the file owner
- `group`: group name or gid for the file group
#### Path
The path property specifies the file or directory that we are managing.
#### Content
The content property is a string that specifies the desired file contents.
#### Source
The source property points to a source file or directory path that we wish to
copy over and use as the desired contents for our resource.
#### State
The state property describes the action we'd like to apply for the resource. The
possible values are: `exists` and `absent`.
#### Recurse
The recurse property limits whether file resource operations should recurse into
and monitor directory contents with a depth greater than one.
#### Force
The force property is required if we want the file resource to be able to change
a file into a directory or vice-versa. If such a change is needed, but the force
property is not set to `true`, then this file resource will error.
### Hostname
The hostname resource manages static, transient/dynamic and pretty hostnames
on the system and watches them for changes.
#### static_hostname
The static hostname is the one configured in /etc/hostname or a similar
file.
It is chosen by the local user. It is not always in sync with the current
host name as returned by the gethostname() system call.
#### transient_hostname
The transient / dynamic hostname is the one configured via the kernel's
sethostname().
It can be different from the static hostname in case DHCP or mDNS have been
configured to change the name based on network information.
#### pretty_hostname
The pretty hostname is a free-form UTF8 host name for presentation to the user.
#### hostname
Hostname is the fallback value for all 3 fields above; if only `hostname` is
specified, it will set all 3 fields to this value.
### KV
The KV resource sets a key and value pair in the global world database. This is
quite useful for setting a flag after a number of resources have run. It will
ignore database updates to the value that are greater in compare order than the
requested value if the `SkipLessThan` parameter is set to true. If we receive a
refresh, then the stored value will be reset to the requested value even if the
stored value is greater.
#### Key
The string key used to store the value.
#### Value
The string value to set. This can also be set via Send/Recv.
#### SkipLessThan
If this parameter is set to `true`, then it will ignore updating the value as
long as the database versions are greater than the requested value. The compare
operation used is based on the `SkipCmpStyle` parameter.
#### SkipCmpStyle
By default this converts the string values to integers and compares them as you
would expect.
### Msg
The msg resource sends messages to the main log, or an external service such
as systemd's journal.
### Noop
The noop resource does absolutely nothing. It does have some utility in testing
`mgmt` and also as a placeholder in the resource graph.
### Nspawn
The nspawn resource is used to manage systemd-machined style containers.
### Password
The password resource can generate a random string to be used as a password. It
will re-generate the password if it receives a refresh notification.
### Pkg
The pkg resource is used to manage system packages. This resource works on many
different distributions because it uses the underlying packagekit facility which
supports different backends for different environments. This ensures that we
have great Debian (deb/dpkg) and Fedora (rpm/dnf) support simultaneously.
### Svc
The service resource is still very WIP. Please help us by improving it!
### Timer
This resource needs better documentation. Please help us by improving it!
### Virt
The virt resource can manage virtual machines via libvirt.
## Usage and frequently asked questions
(Send your questions as a patch to this FAQ! I'll review it, merge it, and
respond by commit with the answer.)
### Why did you start this project?
I wanted a next generation config management solution that didn't have all of
the design flaws or limitations that the current generation of tools do, and no
tool existed!
### Why did you use etcd? What about consul?
Etcd and consul are both written in golang, which made them the top two
contenders for my prototype. Ultimately a choice had to be made, and etcd was
chosen, but it was also somewhat arbitrary. If there is available interest,
good reasoning, *and* patches, then we would consider either switching or
supporting both, but this is not a high priority at this time.
### Can I use an existing etcd cluster instead of the automatic embedded servers?
Yes, it's possible to use an existing etcd cluster instead of the automatic,
elastic embedded etcd servers. To do so, simply point to the cluster with the
`--seeds` variable, the same way you would if you were seeding a new member to
an existing mgmt cluster.
The downside to this approach is that you won't benefit from the automatic
elastic nature of the embedded etcd servers, and that you're responsible if you
accidentally break your etcd cluster, or if you use an unsupported version.
### What does the error message about an inconsistent dataDir mean?
If you get an error message similar to:
```
Etcd: Connect: CtxError...
Etcd: CtxError: Reason: CtxDelayErr(5s): No endpoints available yet!
Etcd: Connect: Endpoints: []
Etcd: The dataDir (/var/lib/mgmt/etcd) might be inconsistent or corrupt.
```
This happens when there is a series of fatal connect errors in a row. This can
happen when you start `mgmt` using a dataDir that doesn't correspond to the
current cluster view. As a result, the embedded etcd server never finishes
starting up, so a default endpoint never gets added. The solution is either to
reconcile the mistake, or, if there is no important data saved, to remove the
etcd dataDir. This is typically `/var/lib/mgmt/etcd/member/`.
### Why do resources have both a `Compare` method and an `IFF` (on the UID) method?
The `Compare()` methods are for determining if two resources are effectively the
same, which is used to make graph change deltas efficient. This is when we want
to change from the current running graph to a new graph, but preserve the common
vertices. Since we want to make this process efficient, we only update the parts
that are different, and leave everything else alone. This `Compare()` method can
tell us if two resources are the same.
The `IFF()` method is part of the whole UID system, which is for discerning if a
resource meets the requirements another expects for an automatic edge. This is
because the automatic edge system assumes a unified UID pattern to test for
equality. In the future it might be helpful or sane to merge the two similar
comparison functions, although for now they are separate because they
actually answer different questions.
### Did you know that there is a band named `MGMT`?
I didn't realize this when naming the project, and it is accidental. After much
anguishing, I chose the name because it was short and I thought it was
appropriately descriptive. If you need a less ambiguous search term or phrase,
you can try using `mgmtconfig` or `mgmt config`.
### You didn't answer my question, or I have a question!
It's best to ask on [IRC](https://webchat.freenode.net/?channels=#mgmtconfig)
to see if someone can help you. Once we get a big enough community going, we'll
add a mailing list. If you don't get any response from the above, you can
contact me through my [technical blog](https://ttboj.wordpress.com/contact/)
and I'll do my best to help. If you have a good question, please add it as a
patch to this documentation. I'll merge your question, and add a patch with the
answer!
## Reference
Please note that there are a number of undocumented options. For more
information on these options, please view the source at:
@@ -635,7 +377,7 @@ To report any bugs, please file a ticket at: [https://github.com/purpleidea/mgmt
## Authors
Copyright (C) 2013-2017+ James Shubin and the project contributors
Copyright (C) 2013-2018+ James Shubin and the project contributors
Please see the
[AUTHORS](https://github.com/purpleidea/mgmt/tree/master/AUTHORS) file

docs/faq.md

@@ -0,0 +1,87 @@
# Frequently asked questions
(Send your questions as a patch to this FAQ! I'll review it, merge it, and
respond by commit with the answer.)
### Why did you start this project?
I wanted a next generation config management solution that didn't have all of
the design flaws or limitations that the current generation of tools do, and no
tool existed!
### Why did you use etcd? What about consul?
Etcd and consul are both written in golang, which made them the top two
contenders for my prototype. Ultimately a choice had to be made, and etcd was
chosen, but it was also somewhat arbitrary. If there is available interest,
good reasoning, *and* patches, then we would consider either switching or
supporting both, but this is not a high priority at this time.
### Can I use an existing etcd cluster instead of the automatic embedded servers?
Yes, it's possible to use an existing etcd cluster instead of the automatic,
elastic embedded etcd servers. To do so, simply point to the cluster with the
`--seeds` variable, the same way you would if you were seeding a new member to
an existing mgmt cluster.
The downside to this approach is that you won't benefit from the automatic
elastic nature of the embedded etcd servers, and that you're responsible if you
accidentally break your etcd cluster, or if you use an unsupported version.
### What does the error message about an inconsistent dataDir mean?
If you get an error message similar to:
```
Etcd: Connect: CtxError...
Etcd: CtxError: Reason: CtxDelayErr(5s): No endpoints available yet!
Etcd: Connect: Endpoints: []
Etcd: The dataDir (/var/lib/mgmt/etcd) might be inconsistent or corrupt.
```
This happens when there is a series of fatal connect errors in a row. This can
happen when you start `mgmt` using a dataDir that doesn't correspond to the
current cluster view. As a result, the embedded etcd server never finishes
starting up, so a default endpoint never gets added. The solution is either to
reconcile the mistake, or, if there is no important data saved, to remove the
etcd dataDir. This is typically `/var/lib/mgmt/etcd/member/`.
### Why do resources have both a `Compare` method and an `IFF` (on the UID) method?
The `Compare()` methods are for determining if two resources are effectively the
same, which is used to make graph change deltas efficient. This is when we want
to change from the current running graph to a new graph, but preserve the common
vertices. Since we want to make this process efficient, we only update the parts
that are different, and leave everything else alone. This `Compare()` method can
tell us if two resources are the same.
The `IFF()` method is part of the whole UID system, which is for discerning if a
resource meets the requirements another expects for an automatic edge. This is
because the automatic edge system assumes a unified UID pattern to test for
equality. In the future it might be helpful or sane to merge the two similar
comparison functions, although for now they are separate because they
actually answer different questions.
### Does this support Windows? OSX? GNU Hurd?
Mgmt probably works best on Linux, because that's what most developers use for
serious automation workloads. Support for non-Linux operating systems isn't a
high priority of mine, but we're happy to accept patches for missing features
or resources that you think would make sense on your favourite platform.
### Did you know that there is a band named `MGMT`?
I didn't realize this when naming the project, and it is accidental. After much
anguishing, I chose the name because it was short and I thought it was
appropriately descriptive. If you need a less ambiguous search term or phrase,
you can try using `mgmtconfig` or `mgmt config`.
### You didn't answer my question, or I have a question!
It's best to ask on [IRC](https://webchat.freenode.net/?channels=#mgmtconfig)
to see if someone can help you. Once we get a big enough community going, we'll
add a mailing list. If you don't get any response from the above, you can
contact me through my [technical blog](https://ttboj.wordpress.com/contact/)
and I'll do my best to help. If you have a good question, please add it as a
patch to this documentation. I'll merge your question, and add a patch with the
answer!

docs/on-the-web.md

@@ -0,0 +1,36 @@
# On the web
Here is a list of places mgmt has appeared on the web. Feel free to send a patch
if we missed something that you think is relevant!
## Links
| Author | Format | Subject |
|---|---|---|
| James Shubin | blog | [Next generation configuration mgmt](https://ttboj.wordpress.com/2016/01/18/next-generation-configuration-mgmt/) |
| James Shubin | video | [Introductory recording from DevConf.cz 2016](https://www.youtube.com/watch?v=GVhpPF0j-iE&html5=1) |
| James Shubin | video | [Introductory recording from CfgMgmtCamp.eu 2016](https://www.youtube.com/watch?v=fNeooSiIRnA&html5=1) |
| Julian Dunn | video | [On mgmt at CfgMgmtCamp.eu 2016](https://www.youtube.com/watch?v=kfF9IATUask&t=1949&html5=1) |
| Walter Heck | slides | [On mgmt at CfgMgmtCamp.eu 2016](http://www.slideshare.net/olindata/configuration-management-time-for-a-4th-generation/3) |
| Marco Marongiu | blog | [On mgmt](http://syslog.me/2016/02/15/leap-or-die/) |
| Felix Frank | blog | [From Catalog To Mgmt (on puppet to mgmt "transpiling")](https://ffrank.github.io/features/2016/02/18/from-catalog-to-mgmt/) |
| James Shubin | blog | [Automatic edges in mgmt (...and the pkg resource)](https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/) |
| James Shubin | blog | [Automatic grouping in mgmt](https://ttboj.wordpress.com/2016/03/30/automatic-grouping-in-mgmt/) |
| John Arundel | tweet | [“Puppets days are numbered.”](https://twitter.com/bitfield/status/732157519142002688) |
| Felix Frank | blog | [Puppet, Meet Mgmt (on puppet to mgmt internals)](https://ffrank.github.io/features/2016/06/12/puppet,-meet-mgmt/) |
| Felix Frank | blog | [Puppet Powered Mgmt (puppet to mgmt tl;dr)](https://ffrank.github.io/features/2016/06/19/puppet-powered-mgmt/) |
| James Shubin | blog | [Automatic clustering in mgmt](https://ttboj.wordpress.com/2016/06/20/automatic-clustering-in-mgmt/) |
| James Shubin | video | [Recording from CoreOSFest 2016](https://www.youtube.com/watch?v=KVmDCUA42wc&html5=1) |
| James Shubin | video | [Recording from DebConf16](http://meetings-archive.debian.net/pub/debian-meetings/2016/debconf16/Next_Generation_Config_Mgmt.webm) ([Slides](https://annex.debconf.org//debconf-share/debconf16/slides/15-next-generation-config-mgmt.pdf)) |
| Felix Frank | blog | [Edging It All In (puppet and mgmt edges)](https://ffrank.github.io/features/2016/07/12/edging-it-all-in/) |
| Felix Frank | blog | [Translating All The Things (puppet to mgmt translation warnings)](https://ffrank.github.io/features/2016/08/19/translating-all-the-things/) |
| James Shubin | video | [Recording from systemd.conf 2016](https://www.youtube.com/watch?v=jB992Zb3nH0&html5=1) |
| James Shubin | blog | [Remote execution in mgmt](https://ttboj.wordpress.com/2016/10/07/remote-execution-in-mgmt/) |
| James Shubin | video | [Recording from High Load Strategy 2016](https://vimeo.com/191493409) |
| James Shubin | video | [Recording from NLUUG 2016](https://www.youtube.com/watch?v=MmpwOQAb_SE&html5=1) |
| James Shubin | blog | [Send/Recv in mgmt](https://ttboj.wordpress.com/2016/12/07/sendrecv-in-mgmt/) |
| Julien Pivotto | blog | [Augeas resource for mgmt](https://roidelapluie.be/blog/2017/02/14/mgmt-augeas/) |
| James Shubin | blog | [Metaparameters in mgmt](https://ttboj.wordpress.com/2017/03/01/metaparameters-in-mgmt/) |
| James Shubin | video | [Recording from Incontro DevOps 2017](https://vimeo.com/212241877) |
| Yves Brissaud | blog | [mgmt aux HumanTalks Grenoble (french)](http://log.winsos.net/2017/04/12/mgmt-aux-human-talks-grenoble.html) |
| James Shubin | video | [Recording from OSDC Berlin 2017](https://www.youtube.com/watch?v=LkEtBVLfygE&html5=1) |
| Jonathan Gold | blog | [AWS:EC2 in mgmt](http://jonathangold.ca/awsec2-in-mgmt/) |


@@ -30,7 +30,7 @@ Here is a list of the metrics we provide:
- `mgmt_resources_total`: The number of resources that mgmt is managing
- `mgmt_checkapply_total`: The number of CheckApply's that mgmt has run
- `mgmt_failures_total`: The number of resources that have failed
- `mgmt_failures_current`: The number of resources that have failed
- `mgmt_failures`: The number of resources that have failed
- `mgmt_graph_start_time_seconds`: Start time of the current graph since unix epoch in seconds
For each metric, you will get some extra labels:
@@ -63,4 +63,4 @@ We do not have grafana dashboards yet. Patches welcome!
[pgc]: https://github.com/prometheus/client_golang/blob/master/prometheus/go_collector.go
[etcdm]: https://coreos.com/etcd/docs/latest/metrics.html
[pd]: https://github.com/prometheus/prometheus/wiki/Default-port-allocation
[pd]: https://github.com/prometheus/prometheus/wiki/Default-port-allocations


@@ -2,65 +2,22 @@
## Introduction
This guide is intended for developers. Once `mgmt` is minimally viable, we'll
publish a quick start guide for users too. In the meantime, please contribute!
If you're brand new to `mgmt`, it's probably a good idea to start by reading the
publish a quick start guide for users too. If you're brand new to `mgmt`, it's
probably a good idea to start by reading the
[introductory article](https://ttboj.wordpress.com/2016/01/18/next-generation-configuration-mgmt/)
or to watch an [introductory video](https://github.com/purpleidea/mgmt/#on-the-web).
or to watch an [introductory video](https://www.youtube.com/watch?v=LkEtBVLfygE&html5=1).
Once you're familiar with the general idea, please start hacking...
## Vagrant
If you would like to avoid doing the following steps manually, we have prepared
a [Vagrant](https://www.vagrantup.com/) environment for your convenience. From
the project directory, run a `vagrant up`, and then a `vagrant status`. From
there, you can `vagrant ssh` into the `mgmt` machine. The MOTD will explain the
rest.
## Dependencies
Software projects have a few different kinds of dependencies. There are _build_
dependencies, _runtime_ dependencies, and additionally, a few extra dependencies
required for running the _test_ suite.
### Build
* `golang` 1.6 or higher (required, available in most distros)
* golang libraries (required, available with `go get ./...`) a partial list includes:
```
github.com/coreos/etcd/client
gopkg.in/yaml.v2
gopkg.in/fsnotify.v1
github.com/urfave/cli
github.com/coreos/go-systemd/dbus
github.com/coreos/go-systemd/util
github.com/libvirt/libvirt-go
```
* `stringer` (optional), available as a package on some platforms, otherwise via `go get`
```
golang.org/x/tools/cmd/stringer
```
* `pandoc` (optional), for building a pdf of the documentation
### Runtime
A relatively modern GNU/Linux system should be able to run `mgmt` without any
problems. Since `mgmt` runs as a single statically compiled binary, all of the
library dependencies are included. It is expected that certain advanced
resources require host specific facilities to work. These requirements are
listed below:
| Resource | Dependency | Version |
|----------|-------------------|---------|
| file | inotify | ? |
| hostname | systemd-hostnamed | ? |
| nspawn | systemd-nspawn | ? |
| pkg | packagekitd | ? |
| svc | systemd | ? |
| virt | libvirtd | ? |
For building a visual representation of the graph, `graphviz` is required.
### Testing
* golint `github.com/golang/lint/golint`
## Quick start
* Make sure you have golang version 1.6 or greater installed.
### Installing golang
* You need golang version 1.8 or greater installed.
** To install on rpm style systems: `sudo dnf install golang`
** To install on apt style systems: `sudo apt install golang`
* You can run `go version` to check the golang version.
* If your distro is too old, you may need to [download](https://golang.org/dl/) a newer golang version.
### Setting up golang
* If you do not have a GOPATH yet, create one and export it:
```
mkdir $HOME/gopath
@@ -68,7 +25,9 @@ export GOPATH=$HOME/gopath
```
* You might also want to add the GOPATH to your `~/.bashrc` or `~/.profile`.
* For more information you can read the [GOPATH documentation](https://golang.org/cmd/go/#hdr-GOPATH_environment_variable).
* Next download the mgmt code base, and switch to that directory:
### Getting the mgmt code and dependencies
* Download the `mgmt` code into the GOPATH, and switch to that directory:
```
mkdir -p $GOPATH/src/github.com/purpleidea/
cd $GOPATH/src/github.com/purpleidea/
@@ -77,15 +36,64 @@ cd $GOPATH/src/github.com/purpleidea/mgmt
```
* Run `make deps` to install system and golang dependencies. Take a look at `misc/make-deps.sh` for details.
* Run `make build` to get a freshly built `mgmt` binary.
### Running mgmt
* Run `time ./mgmt run --yaml examples/graph0.yaml --converged-timeout=5 --tmp-prefix` to try out a very simple example!
* To run continuously in the default mode of operation, omit the `--converged-timeout` option.
* Have fun hacking on our future technology!
* Look in that example file that you ran to see if you can figure out what it did!
* The yaml frontend is provided as a developer tool to test the engine until the language is ready.
* Have fun hacking on our future technology and get involved to shape the project!
## Examples
Please look in the [examples/](../examples/) folder for some examples!
Please look in the [examples/](../examples/) folder for some more examples!
## Installation
## Vagrant
If you would like to avoid doing the above steps manually, we have prepared a
[Vagrant](https://www.vagrantup.com/) environment for your convenience. From the
project directory, run a `vagrant up`, and then a `vagrant status`. From there,
you can `vagrant ssh` into the `mgmt` machine. The MOTD will explain the rest.
## Information about dependencies
Software projects have a few different kinds of dependencies. There are _build_
dependencies, _runtime_ dependencies, and additionally, a few extra dependencies
required for running the _test_ suite.
### Build
* `golang` 1.8 or higher (required, available in some distros and distributed
as a binary officially by [golang.org](https://golang.org/dl/))
### Runtime
A relatively modern GNU/Linux system should be able to run `mgmt` without any
problems. Since `mgmt` runs as a single statically compiled binary, all of the
library dependencies are included. It is expected that certain advanced
resources require host specific facilities to work. These requirements are
listed below:
| Resource | Dependency | Version | Check version with |
|----------|-------------------|-----------------------------|-----------------------------------------------------------|
| augeas | augeas-devel | `augeas 1.6` or greater | `dnf info augeas-devel` or `apt-cache show libaugeas-dev` |
| file | inotify | `Linux 2.6.27` or greater | `uname -a` |
| hostname | systemd-hostnamed | `systemd 25` or greater | `systemctl --version` |
| nspawn | systemd-nspawn | `systemd ???` or greater | `systemctl --version` |
| pkg | packagekitd | `packagekit 1.x` or greater | `pkcon --version` |
| svc | systemd | `systemd ???` or greater | `systemctl --version` |
| virt | libvirt-devel | `libvirt 1.2.0` or greater | `dnf info libvirt-devel` or `apt-cache show libvirt-dev` |
| virt | libvirtd | `libvirt 1.2.0` or greater | `libvirtd --version` |
For building a visual representation of the graph, `graphviz` is required.
To build `mgmt` without augeas support please run:
`GOTAGS='noaugeas' make build`
To build `mgmt` without libvirt support please run:
`GOTAGS='novirt' make build`
To build `mgmt` without augeas or libvirt support please run:
`GOTAGS='noaugeas novirt' make build`
## Binary Package Installation
Installation of `mgmt` from distribution packages currently needs improvement.
The packages are not always up-to-date with git master and as such are not recommended.
At the moment we have:
* [COPR](https://copr.fedoraproject.org/coprs/purpleidea/mgmt/)
* [Arch](https://aur.archlinux.org/packages/mgmt/)


@@ -73,14 +73,13 @@ Init() error
```
This is called to initialize the resource. If something goes wrong, it should
return an error. It should set the resource `kind`, do any resource specific
work, and finish by calling the `Init` method of the base resource.
return an error. It should do any resource specific work, and finish by calling
the `Init` method of the base resource.
#### Example
```golang
// Init initializes the Foo resource.
func (obj *FooRes) Init() error {
obj.BaseRes.kind = "foo" // must lower case resource kind
// run the resource specific initialization, and error if anything fails
if some_error {
return err // something went wrong!
@@ -202,7 +201,7 @@ will likely find the state to now be correct.
### Watch
```golang
Watch(chan *Event) error
Watch() error
```
`Watch` is a main loop that runs and sends messages when it detects that the
@@ -344,25 +343,26 @@ some way.
#### Example
```golang
// Compare two resources and return if they are equivalent.
func (obj *FooRes) Compare(res Res) bool {
switch res.(type) {
case *FooRes: // only compare to other resources of the Foo kind!
res := res.(*FileRes)
func (obj *FooRes) Compare(r Res) bool {
// we can only compare FooRes to others of the same resource kind
res, ok := r.(*FooRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
if obj.whatever != res.whatever {
return false
}
if obj.Flag != res.Flag {
return false
}
default:
return false // different kind of resource
}
return true // they must match!
}
```
@@ -378,7 +378,7 @@ if another resource can match a dependency to this one.
### AutoEdges
```golang
AutoEdges() AutoEdge
AutoEdges() (AutoEdge, error)
```
This returns a struct that implements the `AutoEdge` interface. This struct
@@ -398,9 +398,9 @@ UnmarshalYAML(unmarshal func(interface{}) error) error // optional
```
This is optional, but recommended for any resource that will have a YAML
accessible struct, and an entry in the `GraphConfig` struct. It is not required
because to do so would mean that third-party or custom resources (such as those
someone writes to use with `libmgmt`) would have to implement this needlessly.
accessible struct. It is not required because to do so would mean that
third-party or custom resources (such as those someone writes to use with
`libmgmt`) would have to implement this needlessly.
The signature intentionally matches what is required to satisfy the `go-yaml`
[Unmarshaler](https://godoc.org/gopkg.in/yaml.v2#Unmarshaler) interface.
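As a rough illustration only (not the project's exact code), a resource could set default values before unmarshalling like this; `FooRes` and its `Flag` field are the hypothetical names used elsewhere in this guide:
```golang
// UnmarshalYAML sets sane defaults before applying the user supplied YAML.
// This is a sketch; FooRes and the Flag default are hypothetical.
func (obj *FooRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
	type rawFooRes FooRes // type alias to avoid recursing back into this method
	raw := rawFooRes{
		Flag: true, // hypothetical default value
	}
	if err := unmarshal(&raw); err != nil {
		return err
	}
	*obj = FooRes(raw)
	return nil
}
```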
@@ -452,35 +452,15 @@ type FooRes struct {
}
```
### YAML
In addition to labelling your resource struct with YAML fields, you must also
add an entry to the internal `GraphConfig` struct. It is a fairly straight
forward one line patch.
### Resource registration
All resources must be registered with the engine so that they can be found. This
also ensures they can be encoded and decoded. Make sure to include the following
code snippet for this to work.
```golang
type GraphConfig struct {
// [snip...]
Resources struct {
Noop []*resources.NoopRes `yaml:"noop"`
File []*resources.FileRes `yaml:"file"`
// [snip...]
Foo []*resources.FooRes `yaml:"foo"` // tada :)
}
}
```
It's also recommended that you add the [UnmarshalYAML](#unmarshalyaml) method to
your resources so that unspecified values are given sane defaults.
### Gob registration
All resources must be registered with the `golang` _gob_ module so that they can
be encoded and decoded. Make sure to include the following code snippet for this
to work.
```golang
import "encoding/gob"
func init() { // special golang method that runs once
gob.Register(&FooRes{}) // substitute your resource here
// set your resource kind and struct here (the kind must be lower case)
RegisterResource("foo", func() Res { return &FooRes{} })
}
```
@@ -516,7 +496,7 @@ This can _only_ be done inside of the `CheckApply` function!
```golang
// inside CheckApply, probably near the top
if val, exists := obj.Recv["SomeKey"]; exists {
log.Printf("SomeKey was sent to us from: %s[%s].%s", val.Res.Kind(), val.Res.GetName(), val.Key)
log.Printf("SomeKey was sent to us from: %s.%s", val.Res, val.Key)
if val.Changed {
log.Printf("SomeKey was just updated!")
// you may want to invalidate some local cache

docs/resources.md

@@ -0,0 +1,169 @@
# Resources
Here we list all the built-in resources and their properties. The resource
primitives in `mgmt` are typically more powerful than resources in other
configuration management systems because they can be event based which lets them
respond in real-time to converge to the desired state. This property allows you
to build more complex resources that you probably hadn't considered in the past.
In addition to the resource specific properties, there are resource properties
(otherwise known as parameters) which can apply to every resource. These are
called [meta parameters](documentation.md#meta-parameters) and are listed
separately. Certain meta parameters aren't very useful when combined with
certain resources, but in general, it should be fairly obvious, such as when
combining the `noop` meta parameter with the [Noop](#Noop) resource.
You might want to look at the [generated documentation](https://godoc.org/github.com/purpleidea/mgmt/resources)
for more up-to-date information about these resources.
* [Augeas](#Augeas): Manipulate files using augeas.
* [Exec](#Exec): Execute shell commands on the system.
* [File](#File): Manage files and directories.
* [Hostname](#Hostname): Manages the hostname on the system.
* [KV](#KV): Set a key value pair in our shared world database.
* [Msg](#Msg): Send log messages.
* [Noop](#Noop): A simple resource that does nothing.
* [Nspawn](#Nspawn): Manage systemd-machined nspawn containers.
* [Password](#Password): Create random password strings.
* [Pkg](#Pkg): Manage system packages with PackageKit.
* [Svc](#Svc): Manage system systemd services.
* [Timer](#Timer): Create events on a regular interval.
* [Virt](#Virt): Manage virtual machines with libvirt.
## Augeas
The augeas resource uses [augeas](http://augeas.net/) commands to manipulate
files.
## Exec
The exec resource can execute commands on your system.
## File
The file resource manages files and directories. In `mgmt`, directories are
identified by a trailing slash in their path name. Files have no such slash.
It has the following properties:
- `path`: file path (directories have a trailing slash here)
- `content`: raw file content
- `state`: either `exists` (the default value) or `absent`
- `mode`: octal unix file permissions
- `owner`: username or uid for the file owner
- `group`: group name or gid for the file group
### Path
The path property specifies the file or directory that we are managing.
### Content
The content property is a string that specifies the desired file contents.
### Source
The source property points to a source file or directory path that we wish to
copy over and use as the desired contents for our resource.
### State
The state property describes the action we'd like to apply for the resource. The
possible values are: `exists` and `absent`.
### Recurse
The recurse property limits whether file resource operations should recurse into
and monitor directory contents with a depth greater than one.
### Force
The force property is required if we want the file resource to be able to change
a file into a directory or vice-versa. If such a change is needed, but the force
property is not set to `true`, then this file resource will error.
## Hostname
The hostname resource manages static, transient/dynamic and pretty hostnames
on the system and watches them for changes.
### static_hostname
The static hostname is the one configured in /etc/hostname or a similar
file.
It is chosen by the local user. It is not always in sync with the current
host name as returned by the gethostname() system call.
### transient_hostname
The transient / dynamic hostname is the one configured via the kernel's
sethostname().
It can be different from the static hostname in case DHCP or mDNS have been
configured to change the name based on network information.
### pretty_hostname
The pretty hostname is a free-form UTF8 host name for presentation to the user.
### hostname
Hostname is the fallback value for all 3 fields above; if only `hostname` is
specified, it will set all 3 fields to this value.
## KV
The KV resource sets a key and value pair in the global world database. This is
quite useful for setting a flag after a number of resources have run. It will
ignore database updates to the value that are greater in compare order than the
requested value if the `SkipLessThan` parameter is set to true. If we receive a
refresh, then the stored value will be reset to the requested value even if the
stored value is greater.
### Key
The string key used to store the value.
### Value
The string value to set. This can also be set via Send/Recv.
### SkipLessThan
If this parameter is set to `true`, then it will ignore updating the value as
long as the database versions are greater than the requested value. The compare
operation used is based on the `SkipCmpStyle` parameter.
### SkipCmpStyle
By default this converts the string values to integers and compares them as you
would expect.
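To make the default comparison concrete, here is a small sketch under the assumption that values compare as plain integers; the function name and the behaviour for non-integer input are hypothetical:
```golang
package main

import (
	"fmt"
	"strconv"
)

// skipUpdate reports whether a KV write would be skipped under SkipLessThan,
// assuming the default integer-style comparison. Sketch only.
func skipUpdate(storedValue, requestedValue string) bool {
	stored, err1 := strconv.Atoi(storedValue)
	requested, err2 := strconv.Atoi(requestedValue)
	if err1 != nil || err2 != nil {
		return false // non-integer input: don't skip in this sketch
	}
	return stored > requested // the database already holds a "greater" value
}

func main() {
	fmt.Println(skipUpdate("42", "7")) // true: the stored value is greater, so the write is skipped
}
```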
## Msg
The msg resource sends messages to the main log, or an external service such
as systemd's journal.
## Noop
The noop resource does absolutely nothing. It does have some utility in testing
`mgmt` and also as a placeholder in the resource graph.
## Nspawn
The nspawn resource is used to manage systemd-machined style containers.
## Password
The password resource can generate a random string to be used as a password. It
will re-generate the password if it receives a refresh notification.
## Pkg
The pkg resource is used to manage system packages. This resource works on many
different distributions because it uses the underlying packagekit facility which
supports different backends for different environments. This ensures that we
have great Debian (deb/dpkg) and Fedora (rpm/dnf) support simultaneously.
## Svc
The service resource is still very WIP. Please help us by improving it!
## Timer
This resource needs better documentation. Please help us by improving it!
## Virt
The virt resource can manage virtual machines via libvirt.


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// TODO: Add TTL's (eg: volunteering)
@@ -65,7 +65,6 @@ import (
"github.com/purpleidea/mgmt/converger"
"github.com/purpleidea/mgmt/event"
"github.com/purpleidea/mgmt/resources"
"github.com/purpleidea/mgmt/util"
etcd "github.com/coreos/etcd/clientv3" // "clientv3"
@@ -82,8 +81,8 @@ import (
const (
NS = "_mgmt" // root namespace for mgmt operations
seedSentinel = "_seed" // you must not name your hostname this
maxStartServerTimeout = 60 // max number of seconds to wait for server to start
maxStartServerRetries = 3 // number of times to retry starting the etcd server
MaxStartServerTimeout = 60 // max number of seconds to wait for server to start
MaxStartServerRetries = 3 // number of times to retry starting the etcd server
maxClientConnectRetries = 5 // number of times to retry consecutive connect failures
selfRemoveTimeout = 3 // give unnominated members a chance to self exit
exitDelay = 3 // number of sec of inactivity after exit to clean up
@@ -96,7 +95,7 @@ var (
errApplyDeltaEventsInconsistent = errors.New("inconsistent key in ApplyDeltaEvents")
)
// AW is a struct for the AddWatcher queue
// AW is a struct for the AddWatcher queue.
type AW struct {
path string
opts []etcd.OpOption
@@ -107,8 +106,8 @@ type AW struct {
cancelFunc func() // data
}
// RE is a response + error struct since these two values often occur together
// This is now called an event with the move to the etcd v3 API
// RE is a response + error struct since these two values often occur together.
// This is now called an event with the move to the etcd v3 API.
type RE struct {
response etcd.WatchResponse
path string
@@ -120,7 +119,7 @@ type RE struct {
retries uint // number of times we've retried on error
}
// KV is a key + value struct to hold the two items together
// KV is a key + value struct to hold the two items together.
type KV struct {
key string
value string
@@ -128,7 +127,7 @@ type KV struct {
resp event.Resp
}
// GQ is a struct for the get queue
// GQ is a struct for the get queue.
type GQ struct {
path string
skipConv bool
@@ -137,7 +136,7 @@ type GQ struct {
data map[string]string
}
// DL is a struct for the delete queue
// DL is a struct for the delete queue.
type DL struct {
path string
opts []etcd.OpOption
@@ -145,7 +144,7 @@ type DL struct {
data int64
}
// TN is a struct for the txn queue
// TN is a struct for the txn queue.
type TN struct {
ifcmps []etcd.Cmp
thenops []etcd.Op
@@ -161,7 +160,7 @@ type Flags struct {
Verbose bool // add extra log message output
}
// EmbdEtcd provides the embedded server and client etcd functionality
// EmbdEtcd provides the embedded server and client etcd functionality.
type EmbdEtcd struct { // EMBeddeD etcd
// etcd client connection related
cLock sync.Mutex // client connect lock
@@ -182,6 +181,8 @@ type EmbdEtcd struct { // EMBeddeD etcd
endpoints etcdtypes.URLsMap // map of servers a client could connect to
clientURLs etcdtypes.URLs // locations to listen for clients if i am a server
serverURLs etcdtypes.URLs // locations to listen for servers if i am a server (peer)
advertiseClientURLs etcdtypes.URLs // client urls to advertise
advertiseServerURLs etcdtypes.URLs // server urls to advertise
noServer bool // disable all server peering if true
// local tracked state
@@ -205,10 +206,11 @@ type EmbdEtcd struct { // EMBeddeD etcd
serverwg sync.WaitGroup // wait for server to shutdown
server *embed.Etcd // technically this contains the server struct
dataDir string // our data dir, prefix + "etcd"
serverReady chan struct{} // closes when ready
}
// NewEmbdEtcd creates the top level embedded etcd struct client and server obj
func NewEmbdEtcd(hostname string, seeds, clientURLs, serverURLs etcdtypes.URLs, noServer bool, idealClusterSize uint16, flags Flags, prefix string, converger converger.Converger) *EmbdEtcd {
// NewEmbdEtcd creates the top level embedded etcd struct client and server obj.
func NewEmbdEtcd(hostname string, seeds, clientURLs, serverURLs, advertiseClientURLs, advertiseServerURLs etcdtypes.URLs, noServer bool, idealClusterSize uint16, flags Flags, prefix string, converger converger.Converger) *EmbdEtcd {
endpoints := make(etcdtypes.URLsMap)
if hostname == seedSentinel { // safety
return nil
@@ -233,6 +235,8 @@ func NewEmbdEtcd(hostname string, seeds, clientURLs, serverURLs etcdtypes.URLs,
endpoints: endpoints,
clientURLs: clientURLs,
serverURLs: serverURLs,
advertiseClientURLs: advertiseClientURLs,
advertiseServerURLs: advertiseServerURLs,
noServer: noServer,
idealClusterSize: idealClusterSize,
@@ -240,6 +244,7 @@ func NewEmbdEtcd(hostname string, seeds, clientURLs, serverURLs etcdtypes.URLs,
flags: flags,
prefix: prefix,
dataDir: path.Join(prefix, "etcd"),
serverReady: make(chan struct{}),
}
// TODO: add some sort of auto assign method for picking these defaults
// add a default so that our local client can connect locally if needed
@@ -260,7 +265,7 @@ func NewEmbdEtcd(hostname string, seeds, clientURLs, serverURLs etcdtypes.URLs,
return obj
}
// GetConfig returns the config struct to be used for the etcd client connect
// GetConfig returns the config struct to be used for the etcd client connect.
func (obj *EmbdEtcd) GetConfig() etcd.Config {
endpoints := []string{}
// XXX: filter out any urls which wouldn't resolve here ?
@@ -342,7 +347,7 @@ func (obj *EmbdEtcd) Connect(reconnect bool) error {
return nil
}
// Startup is the main entry point to kick off the embedded etcd client & server
// Startup is the main entry point to kick off the embedded etcd client & server.
func (obj *EmbdEtcd) Startup() error {
bootstrapping := len(obj.endpoints) == 0 // because value changes after start
@@ -395,9 +400,13 @@ func (obj *EmbdEtcd) Startup() error {
// if i am alone and will have to be a server...
if !obj.noServer && bootstrapping {
log.Printf("Etcd: Bootstrapping...")
surls := obj.serverURLs
if len(obj.advertiseServerURLs) > 0 {
surls = obj.advertiseServerURLs
}
// give an initial value to the obj.nominate map we keep in sync
// this emulates Nominate(obj, obj.hostname, obj.serverURLs)
obj.nominated[obj.hostname] = obj.serverURLs // initial value
obj.nominated[obj.hostname] = surls // initial value
// NOTE: when we are stuck waiting for the server to start up,
// it is probably happening on this call right here...
obj.nominateCallback(nil) // kick this off once
@@ -406,8 +415,12 @@ func (obj *EmbdEtcd) Startup() error {
// self volunteer
if !obj.noServer && len(obj.serverURLs) > 0 {
// we run this in a go routine because it blocks waiting for server
surls := obj.serverURLs
if len(obj.advertiseServerURLs) > 0 {
surls = obj.advertiseServerURLs
}
log.Printf("Etcd: Startup: Volunteering...")
go Volunteer(obj, obj.serverURLs)
go Volunteer(obj, surls)
}
if bootstrapping {
@@ -464,7 +477,7 @@ func (obj *EmbdEtcd) Destroy() error {
return nil
}
// CtxDelayErr requests a retry in Delta duration
// CtxDelayErr requests a retry in Delta duration.
type CtxDelayErr struct {
Delta time.Duration
Message string
@@ -474,7 +487,7 @@ func (obj *CtxDelayErr) Error() string {
return fmt.Sprintf("CtxDelayErr(%v): %s", obj.Delta, obj.Message)
}
// CtxRetriesErr lets you retry as long as you have retries available
// CtxRetriesErr lets you retry as long as you have retries available.
// TODO: consider combining this with CtxDelayErr
type CtxRetriesErr struct {
Retries uint
@@ -494,7 +507,7 @@ func (obj *CtxPermanentErr) Error() string {
return fmt.Sprintf("CtxPermanentErr: %s", obj.Message)
}
// CtxReconnectErr requests a client reconnect to the new endpoint list
// CtxReconnectErr requests a client reconnect to the new endpoint list.
type CtxReconnectErr struct {
Message string
}
@@ -503,7 +516,7 @@ func (obj *CtxReconnectErr) Error() string {
return fmt.Sprintf("CtxReconnectErr: %s", obj.Message)
}
// CancelCtx adds a tracked cancel function around an existing context
// CancelCtx adds a tracked cancel function around an existing context.
func (obj *EmbdEtcd) CancelCtx(ctx context.Context) (context.Context, func()) {
cancelCtx, cancelFunc := context.WithCancel(ctx)
obj.cancelLock.Lock()
@@ -512,7 +525,7 @@ func (obj *EmbdEtcd) CancelCtx(ctx context.Context) (context.Context, func()) {
return cancelCtx, cancelFunc
}
// TimeoutCtx adds a tracked cancel function with timeout around an existing context
// TimeoutCtx adds a tracked cancel function with timeout around an existing context.
func (obj *EmbdEtcd) TimeoutCtx(ctx context.Context, t time.Duration) (context.Context, func()) {
timeoutCtx, cancelFunc := context.WithTimeout(ctx, t)
obj.cancelLock.Lock()
@@ -528,8 +541,9 @@ func (obj *EmbdEtcd) CtxError(ctx context.Context, err error) (context.Context,
if obj.ctxErr != nil { // stop on permanent error
return ctx, obj.ctxErr
}
const ctxErr = "ctxErr"
const ctxIter = "ctxIter"
type ctxKey string // use a non-basic type as ctx key (str can conflict)
const ctxErr ctxKey = "ctxErr"
const ctxIter ctxKey = "ctxIter"
expBackoff := func(tmin, texp, iter, tmax int) time.Duration {
// https://en.wikipedia.org/wiki/Exponential_backoff
// tmin <= texp^iter - 1 <= tmax // TODO: check my math
@@ -699,7 +713,7 @@ func (obj *EmbdEtcd) CtxError(ctx context.Context, err error) (context.Context,
return ctx, obj.ctxErr
}
// CbLoop is the loop where callback execution is serialized
// CbLoop is the loop where callback execution is serialized.
func (obj *EmbdEtcd) CbLoop() {
cuid := obj.converger.Register()
cuid.SetName("Etcd: CbLoop")
@@ -755,7 +769,7 @@ func (obj *EmbdEtcd) CbLoop() {
}
}
// Loop is the main loop where everything is serialized
// Loop is the main loop where everything is serialized.
func (obj *EmbdEtcd) Loop() {
cuid := obj.converger.Register()
cuid.SetName("Etcd: Loop")
@@ -933,7 +947,7 @@ func (obj *EmbdEtcd) loopProcessAW(ctx context.Context, aw *AW) {
}
}
// Set queues up a set operation to occur using our mainloop
// Set queues up a set operation to occur using our mainloop.
func (obj *EmbdEtcd) Set(key, value string, opts ...etcd.OpOption) error {
resp := event.NewResp()
obj.setq <- &KV{key: key, value: value, opts: opts, resp: resp}
@@ -943,7 +957,7 @@ func (obj *EmbdEtcd) Set(key, value string, opts ...etcd.OpOption) error {
return nil
}
// rawSet actually implements the key set operation
// rawSet actually implements the key set operation.
func (obj *EmbdEtcd) rawSet(ctx context.Context, kv *KV) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: rawSet()")
@@ -960,7 +974,7 @@ func (obj *EmbdEtcd) rawSet(ctx context.Context, kv *KV) error {
return err
}
// Get performs a get operation and waits for an ACK to continue
// Get performs a get operation and waits for an ACK to continue.
func (obj *EmbdEtcd) Get(path string, opts ...etcd.OpOption) (map[string]string, error) {
return obj.ComplexGet(path, false, opts...)
}
@@ -1001,7 +1015,7 @@ func (obj *EmbdEtcd) rawGet(ctx context.Context, gq *GQ) (result map[string]stri
return
}
// Delete performs a delete operation and waits for an ACK to continue
// Delete performs a delete operation and waits for an ACK to continue.
func (obj *EmbdEtcd) Delete(path string, opts ...etcd.OpOption) (int64, error) {
resp := event.NewResp()
dl := &DL{path: path, opts: opts, resp: resp, data: -1}
@@ -1029,7 +1043,7 @@ func (obj *EmbdEtcd) rawDelete(ctx context.Context, dl *DL) (count int64, err er
return
}
// Txn performs a transaction and waits for an ACK to continue
// Txn performs a transaction and waits for an ACK to continue.
func (obj *EmbdEtcd) Txn(ifcmps []etcd.Cmp, thenops, elseops []etcd.Op) (*etcd.TxnResponse, error) {
resp := event.NewResp()
tn := &TN{ifcmps: ifcmps, thenops: thenops, elseops: elseops, resp: resp, data: nil}
@@ -1053,8 +1067,8 @@ func (obj *EmbdEtcd) rawTxn(ctx context.Context, tn *TN) (*etcd.TxnResponse, err
return response, err
}
// AddWatcher queues up an add watcher request and returns a cancel function
// Remember to add the etcd.WithPrefix() option if you want to watch recursively
// AddWatcher queues up an add watcher request and returns a cancel function.
// Remember to add the etcd.WithPrefix() option if you want to watch recursively.
func (obj *EmbdEtcd) AddWatcher(path string, callback func(re *RE) error, errCheck bool, skipConv bool, opts ...etcd.OpOption) (func(), error) {
resp := event.NewResp()
awq := &AW{path: path, opts: opts, callback: callback, errCheck: errCheck, skipConv: skipConv, cancelFunc: nil, resp: resp}
@@ -1065,7 +1079,7 @@ func (obj *EmbdEtcd) AddWatcher(path string, callback func(re *RE) error, errChe
return awq.cancelFunc, nil
}
// rawAddWatcher adds a watcher and returns a cancel function to call to end it
// rawAddWatcher adds a watcher and returns a cancel function to call to end it.
func (obj *EmbdEtcd) rawAddWatcher(ctx context.Context, aw *AW) (func(), error) {
cancelCtx, cancelFunc := obj.CancelCtx(ctx)
go func(ctx context.Context) {
@@ -1142,7 +1156,7 @@ func (obj *EmbdEtcd) rawAddWatcher(ctx context.Context, aw *AW) (func(), error)
return cancelFunc, nil
}
// rawCallback is the companion to AddWatcher which runs the callback processing
// rawCallback is the companion to AddWatcher which runs the callback processing.
func rawCallback(ctx context.Context, re *RE) error {
var err = re.err // the watch event itself might have had an error
if err == nil {
@@ -1161,8 +1175,8 @@ func rawCallback(ctx context.Context, re *RE) error {
return err
}
// volunteerCallback runs to respond to the volunteer list change events
// functionally, it controls the adding and removing of members
// volunteerCallback runs to respond to the volunteer list change events.
// Functionally, it controls the adding and removing of members.
// FIXME: we might need to respond to member change/disconnect/shutdown events,
// see: https://github.com/coreos/etcd/issues/5277
func (obj *EmbdEtcd) volunteerCallback(re *RE) error {
@@ -1351,8 +1365,8 @@ func (obj *EmbdEtcd) volunteerCallback(re *RE) error {
return nil
}
// nominateCallback runs to respond to the nomination list change events
// functionally, it controls the starting and stopping of the server process
// nominateCallback runs to respond to the nomination list change events.
// Functionally, it controls the starting and stopping of the server process.
func (obj *EmbdEtcd) nominateCallback(re *RE) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: nominateCallback()")
@@ -1419,8 +1433,8 @@ func (obj *EmbdEtcd) nominateCallback(re *RE) error {
if re != nil {
retries = re.retries
}
// retry maxStartServerRetries times, then permanently fail
return &CtxRetriesErr{maxStartServerRetries - retries, fmt.Sprintf("Etcd: StartServer: Error: %+v", err)}
// retry MaxStartServerRetries times, then permanently fail
return &CtxRetriesErr{MaxStartServerRetries - retries, fmt.Sprintf("Etcd: StartServer: Error: %+v", err)}
}
if len(obj.endpoints) == 0 {
@@ -1434,14 +1448,21 @@ func (obj *EmbdEtcd) nominateCallback(re *RE) error {
// client connects to one of the obj.endpoints servers...
log.Printf("Etcd: Addresses are: %s", addresses)
surls := obj.serverURLs
if len(obj.advertiseServerURLs) > 0 {
surls = obj.advertiseServerURLs
}
// XXX: just put this wherever for now so we don't block
// nominate self so "member" list is correct for peers to see
Nominate(obj, obj.hostname, obj.serverURLs)
Nominate(obj, obj.hostname, surls)
// XXX: if this fails, where will we retry this part ?
}
// advertise client urls
if curls := obj.clientURLs; len(curls) > 0 {
if len(obj.advertiseClientURLs) > 0 {
curls = obj.advertiseClientURLs
}
// XXX: don't advertise local addresses! 127.0.0.1:2381 doesn't really help remote hosts
// XXX: but sometimes this is what we want... hmmm how do we decide? filter on callback?
AdvertiseEndpoints(obj, curls)
@@ -1504,7 +1525,7 @@ func (obj *EmbdEtcd) nominateCallback(re *RE) error {
return nil
}
// endpointCallback runs to respond to the endpoint list change events
// endpointCallback runs to respond to the endpoint list change events.
func (obj *EmbdEtcd) endpointCallback(re *RE) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: endpointCallback()")
@@ -1570,7 +1591,7 @@ func (obj *EmbdEtcd) endpointCallback(re *RE) error {
return nil
}
// idealClusterSizeCallback runs to respond to the ideal cluster size changes
// idealClusterSizeCallback runs to respond to the ideal cluster size changes.
func (obj *EmbdEtcd) idealClusterSizeCallback(re *RE) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: idealClusterSizeCallback()")
@@ -1604,8 +1625,8 @@ func (obj *EmbdEtcd) idealClusterSizeCallback(re *RE) error {
return nil
}
// LocalhostClientURLs returns the most localhost like URLs for direct connection
// this gets clients to talk to the local servers first before searching remotely
// LocalhostClientURLs returns the most localhost like URLs for direct connection.
// This gets clients to talk to the local servers first before searching remotely.
func (obj *EmbdEtcd) LocalhostClientURLs() etcdtypes.URLs {
// look through obj.clientURLs and return the localhost ones
urls := etcdtypes.URLs{}
@@ -1645,14 +1666,23 @@ func (obj *EmbdEtcd) StartServer(newCluster bool, peerURLsMap etcdtypes.URLsMap)
initialPeerURLsMap[memberName] = peerURLs
}
aCUrls := obj.clientURLs
if len(obj.advertiseClientURLs) > 0 {
aCUrls = obj.advertiseClientURLs
}
aPUrls := peerURLs
if len(obj.advertiseServerURLs) > 0 {
aPUrls = obj.advertiseServerURLs
}
// embed etcd
cfg := embed.NewConfig()
cfg.Name = memberName // hostname
cfg.Dir = obj.dataDir
cfg.ACUrls = obj.clientURLs
cfg.APUrls = peerURLs
cfg.LCUrls = obj.clientURLs
cfg.LPUrls = peerURLs
cfg.ACUrls = aCUrls
cfg.APUrls = aPUrls
cfg.StrictReconfigCheck = false // XXX: workaround https://github.com/coreos/etcd/issues/6305
cfg.InitialCluster = initialPeerURLsMap.String() // including myself!
@@ -1671,8 +1701,8 @@ func (obj *EmbdEtcd) StartServer(newCluster bool, peerURLsMap etcdtypes.URLsMap)
select {
case <-obj.server.Server.ReadyNotify(): // we hang here if things are bad
log.Printf("Etcd: StartServer: Done starting server!") // it didn't hang!
case <-time.After(time.Duration(maxStartServerTimeout) * time.Second):
e := fmt.Errorf("timeout of %d seconds reached", maxStartServerTimeout)
case <-time.After(time.Duration(MaxStartServerTimeout) * time.Second):
e := fmt.Errorf("timeout of %d seconds reached", MaxStartServerTimeout)
log.Printf("Etcd: StartServer: %s", e.Error())
obj.server.Server.Stop() // trigger a shutdown
obj.serverwg.Add(1) // add for the DestroyServer()
@@ -1690,12 +1720,16 @@ func (obj *EmbdEtcd) StartServer(newCluster bool, peerURLsMap etcdtypes.URLsMap)
//log.Fatal(<-obj.server.Err()) XXX
log.Printf("Etcd: StartServer: Server running...")
obj.memberID = uint64(obj.server.Server.ID()) // store member id for internal use
close(obj.serverReady) // send a signal
obj.serverwg.Add(1)
return nil
}
// DestroyServer shuts down the embedded etcd server portion
// ServerReady returns on a channel when the server has started successfully.
func (obj *EmbdEtcd) ServerReady() <-chan struct{} { return obj.serverReady }
// DestroyServer shuts down the embedded etcd server portion.
func (obj *EmbdEtcd) DestroyServer() error {
var err error
log.Printf("Etcd: DestroyServer: Destroying...")
@@ -1710,544 +1744,11 @@ func (obj *EmbdEtcd) DestroyServer() error {
}
obj.server = nil // important because this is used as an isRunning flag
log.Printf("Etcd: DestroyServer: Unlocking server...")
obj.serverReady = make(chan struct{}) // reset the signal
obj.serverwg.Done() // -1
return err
}
// TODO: Could all these Etcd*(obj *EmbdEtcd, ...) functions which deal with the
// interface between etcd paths and behaviour be grouped into a single struct ?
// Nominate nominates a particular client to be a server (peer)
func Nominate(obj *EmbdEtcd, hostname string, urls etcdtypes.URLs) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Nominate(%v): %v", hostname, urls.String())
defer log.Printf("Trace: Etcd: Nominate(%v): Finished!", hostname)
}
// nominate someone to be a server
nominate := fmt.Sprintf("/%s/nominated/%s", NS, hostname)
ops := []etcd.Op{} // list of ops in this txn
if urls != nil {
ops = append(ops, etcd.OpPut(nominate, urls.String())) // TODO: add a TTL? (etcd.WithLease)
} else { // delete message if set to erase
ops = append(ops, etcd.OpDelete(nominate))
}
if _, err := obj.Txn(nil, ops, nil); err != nil {
return fmt.Errorf("nominate failed") // exit in progress?
}
return nil
}
// Nominated returns a urls map of nominated etcd server volunteers
// NOTE: I know 'nominees' might be more correct, but is less consistent here
func Nominated(obj *EmbdEtcd) (etcdtypes.URLsMap, error) {
path := fmt.Sprintf("/%s/nominated/", NS)
keyMap, err := obj.Get(path, etcd.WithPrefix()) // map[string]string, bool
if err != nil {
return nil, fmt.Errorf("nominated isn't available: %v", err)
}
nominated := make(etcdtypes.URLsMap)
for key, val := range keyMap { // loop through directory of nominated
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of nominee
if val == "" { // skip "erased" values
continue
}
urls, err := etcdtypes.NewURLs(strings.Split(val, ","))
if err != nil {
return nil, fmt.Errorf("nominated data format error: %v", err)
}
nominated[name] = urls // add to map
if obj.flags.Debug {
log.Printf("Etcd: Nominated(%v): %v", name, val)
}
}
return nominated, nil
}
// Volunteer offers yourself up to be a server if needed
func Volunteer(obj *EmbdEtcd, urls etcdtypes.URLs) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Volunteer(%v): %v", obj.hostname, urls.String())
defer log.Printf("Trace: Etcd: Volunteer(%v): Finished!", obj.hostname)
}
// volunteer to be a server
volunteer := fmt.Sprintf("/%s/volunteers/%s", NS, obj.hostname)
ops := []etcd.Op{} // list of ops in this txn
if urls != nil {
// XXX: adding a TTL is crucial! (i think)
ops = append(ops, etcd.OpPut(volunteer, urls.String())) // value is usually a peer "serverURL"
} else { // delete message if set to erase
ops = append(ops, etcd.OpDelete(volunteer))
}
if _, err := obj.Txn(nil, ops, nil); err != nil {
return fmt.Errorf("volunteering failed") // exit in progress?
}
return nil
}
// Volunteers returns a urls map of available etcd server volunteers
func Volunteers(obj *EmbdEtcd) (etcdtypes.URLsMap, error) {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Volunteers()")
defer log.Printf("Trace: Etcd: Volunteers(): Finished!")
}
path := fmt.Sprintf("/%s/volunteers/", NS)
keyMap, err := obj.Get(path, etcd.WithPrefix())
if err != nil {
return nil, fmt.Errorf("volunteers aren't available: %v", err)
}
volunteers := make(etcdtypes.URLsMap)
for key, val := range keyMap { // loop through directory of volunteers
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of volunteer
if val == "" { // skip "erased" values
continue
}
urls, err := etcdtypes.NewURLs(strings.Split(val, ","))
if err != nil {
return nil, fmt.Errorf("volunteers data format error: %v", err)
}
volunteers[name] = urls // add to map
if obj.flags.Debug {
log.Printf("Etcd: Volunteer(%v): %v", name, val)
}
}
return volunteers, nil
}
// AdvertiseEndpoints advertises the list of available client endpoints
func AdvertiseEndpoints(obj *EmbdEtcd, urls etcdtypes.URLs) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: AdvertiseEndpoints(%v): %v", obj.hostname, urls.String())
defer log.Printf("Trace: Etcd: AdvertiseEndpoints(%v): Finished!", obj.hostname)
}
// advertise endpoints
endpoints := fmt.Sprintf("/%s/endpoints/%s", NS, obj.hostname)
ops := []etcd.Op{} // list of ops in this txn
if urls != nil {
// TODO: add a TTL? (etcd.WithLease)
ops = append(ops, etcd.OpPut(endpoints, urls.String())) // value is usually a "clientURL"
} else { // delete message if set to erase
ops = append(ops, etcd.OpDelete(endpoints))
}
if _, err := obj.Txn(nil, ops, nil); err != nil {
return fmt.Errorf("endpoint advertising failed") // exit in progress?
}
return nil
}
// Endpoints returns a urls map of available etcd server endpoints
func Endpoints(obj *EmbdEtcd) (etcdtypes.URLsMap, error) {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Endpoints()")
defer log.Printf("Trace: Etcd: Endpoints(): Finished!")
}
path := fmt.Sprintf("/%s/endpoints/", NS)
keyMap, err := obj.Get(path, etcd.WithPrefix())
if err != nil {
return nil, fmt.Errorf("endpoints aren't available: %v", err)
}
endpoints := make(etcdtypes.URLsMap)
for key, val := range keyMap { // loop through directory of endpoints
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of volunteer
if val == "" { // skip "erased" values
continue
}
urls, err := etcdtypes.NewURLs(strings.Split(val, ","))
if err != nil {
return nil, fmt.Errorf("endpoints data format error: %v", err)
}
endpoints[name] = urls // add to map
if obj.flags.Debug {
log.Printf("Etcd: Endpoint(%v): %v", name, val)
}
}
return endpoints, nil
}
// SetHostnameConverged sets whether a specific hostname is converged.
func SetHostnameConverged(obj *EmbdEtcd, hostname string, isConverged bool) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: SetHostnameConverged(%s): %v", hostname, isConverged)
defer log.Printf("Trace: Etcd: SetHostnameConverged(%v): Finished!", hostname)
}
converged := fmt.Sprintf("/%s/converged/%s", NS, hostname)
op := []etcd.Op{etcd.OpPut(converged, fmt.Sprintf("%t", isConverged))}
if _, err := obj.Txn(nil, op, nil); err != nil { // TODO: do we need a skipConv flag here too?
return fmt.Errorf("set converged failed") // exit in progress?
}
return nil
}
// HostnameConverged returns a map of every hostname's converged state.
func HostnameConverged(obj *EmbdEtcd) (map[string]bool, error) {
if obj.flags.Trace {
log.Printf("Trace: Etcd: HostnameConverged()")
defer log.Printf("Trace: Etcd: HostnameConverged(): Finished!")
}
path := fmt.Sprintf("/%s/converged/", NS)
keyMap, err := obj.ComplexGet(path, true, etcd.WithPrefix()) // don't un-converge
if err != nil {
return nil, fmt.Errorf("converged values aren't available: %v", err)
}
converged := make(map[string]bool)
for key, val := range keyMap { // loop through directory...
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of key
if val == "" { // skip "erased" values
continue
}
b, err := strconv.ParseBool(val)
if err != nil {
return nil, fmt.Errorf("converged data format error: %v", err)
}
converged[name] = b // add to map
}
return converged, nil
}
// AddHostnameConvergedWatcher adds a watcher with a callback that runs on
// hostname state changes.
func AddHostnameConvergedWatcher(obj *EmbdEtcd, callbackFn func(map[string]bool) error) (func(), error) {
path := fmt.Sprintf("/%s/converged/", NS)
internalCbFn := func(re *RE) error {
// TODO: get the value from the response, and apply delta...
// for now, just run a get operation which is easier to code!
m, err := HostnameConverged(obj)
if err != nil {
return err
}
return callbackFn(m) // call my function
}
return obj.AddWatcher(path, internalCbFn, true, true, etcd.WithPrefix()) // no block and no converger reset
}
// SetClusterSize sets the ideal target cluster size of etcd peers
func SetClusterSize(obj *EmbdEtcd, value uint16) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: SetClusterSize(): %v", value)
defer log.Printf("Trace: Etcd: SetClusterSize(): Finished!")
}
key := fmt.Sprintf("/%s/idealClusterSize", NS)
if err := obj.Set(key, strconv.FormatUint(uint64(value), 10)); err != nil {
return fmt.Errorf("function SetClusterSize failed: %v", err) // exit in progress?
}
return nil
}
// GetClusterSize gets the ideal target cluster size of etcd peers
func GetClusterSize(obj *EmbdEtcd) (uint16, error) {
key := fmt.Sprintf("/%s/idealClusterSize", NS)
keyMap, err := obj.Get(key)
if err != nil {
return 0, fmt.Errorf("function GetClusterSize failed: %v", err)
}
val, exists := keyMap[key]
if !exists || val == "" {
return 0, fmt.Errorf("function GetClusterSize failed: %v", err)
}
v, err := strconv.ParseUint(val, 10, 16)
if err != nil {
return 0, fmt.Errorf("function GetClusterSize failed: %v", err)
}
return uint16(v), nil
}
// MemberAdd adds a member to the cluster.
func MemberAdd(obj *EmbdEtcd, peerURLs etcdtypes.URLs) (*etcd.MemberAddResponse, error) {
//obj.Connect(false) // TODO: ?
ctx := context.Background()
var response *etcd.MemberAddResponse
var err error
for {
if obj.exiting { // the exit signal has been sent!
return nil, fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
response, err = obj.client.MemberAdd(ctx, peerURLs.StringSlice())
obj.rLock.RUnlock()
if err == nil {
break
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return nil, err
}
}
return response, nil
}
// MemberRemove removes a member by mID and returns if it worked, and also
// if there was an error. This is because it might have run without error, but
// the member wasn't found, for example.
func MemberRemove(obj *EmbdEtcd, mID uint64) (bool, error) {
//obj.Connect(false) // TODO: ?
ctx := context.Background()
for {
if obj.exiting { // the exit signal has been sent!
return false, fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
_, err := obj.client.MemberRemove(ctx, mID)
obj.rLock.RUnlock()
if err == nil {
break
} else if err == rpctypes.ErrMemberNotFound {
// if we get this, member already shut itself down :)
return false, nil
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return false, err
}
}
return true, nil
}
// Members returns information on cluster membership.
// The member ID's are the keys, because an empty names means unstarted!
// TODO: consider queueing this through the main loop with CtxError(ctx, err)
func Members(obj *EmbdEtcd) (map[uint64]string, error) {
//obj.Connect(false) // TODO: ?
ctx := context.Background()
var response *etcd.MemberListResponse
var err error
for {
if obj.exiting { // the exit signal has been sent!
return nil, fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
if obj.flags.Trace {
log.Printf("Trace: Etcd: Members(): Endpoints are: %v", obj.client.Endpoints())
}
response, err = obj.client.MemberList(ctx)
obj.rLock.RUnlock()
if err == nil {
break
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return nil, err
}
}
members := make(map[uint64]string)
for _, x := range response.Members {
members[x.ID] = x.Name // x.Name will be "" if unstarted!
}
return members, nil
}
// Leader returns the current leader of the etcd server cluster
func Leader(obj *EmbdEtcd) (string, error) {
//obj.Connect(false) // TODO: ?
var err error
membersMap := make(map[uint64]string)
if membersMap, err = Members(obj); err != nil {
return "", err
}
addresses := obj.LocalhostClientURLs() // heuristic, but probably correct
if len(addresses) == 0 {
// probably a programming error...
return "", fmt.Errorf("programming error")
}
endpoint := addresses[0].Host // FIXME: arbitrarily picked the first one
// part two
ctx := context.Background()
var response *etcd.StatusResponse
for {
if obj.exiting { // the exit signal has been sent!
return "", fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
response, err = obj.client.Maintenance.Status(ctx, endpoint)
obj.rLock.RUnlock()
if err == nil {
break
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return "", err
}
}
// isLeader: response.Header.MemberId == response.Leader
for id, name := range membersMap {
if id == response.Leader {
return name, nil
}
}
return "", fmt.Errorf("members map is not current") // not found
}
// WatchAll returns a channel that outputs a true bool when activity occurs
// TODO: Filter our watch (on the server side if possible) based on the
// collection prefixes and filters that we care about...
func WatchAll(obj *EmbdEtcd) chan bool {
ch := make(chan bool, 1) // buffer it so we can measure it
path := fmt.Sprintf("/%s/exported/", NS)
callback := func(re *RE) error {
// TODO: is this even needed? it used to happen on conn errors
log.Printf("Etcd: Watch: Path: %v", path) // event
if re == nil || re.response.Canceled {
return fmt.Errorf("watch is empty") // will cause a CtxError+retry
}
// we normally need to check if anything changed since the last
// event, since a set (export) with no changes still causes the
// watcher to trigger and this would cause an infinite loop. we
// don't need to do this check anymore because we do the export
// transactionally, and only if a change is needed. since it is
// atomic, all the changes arrive together which avoids dupes!!
if len(ch) == 0 { // send event only if one isn't pending
// this check avoids multiple events all queueing up and then
// being released continuously long after the changes stopped
// do not block!
ch <- true // event
}
return nil
}
_, _ = obj.AddWatcher(path, callback, true, false, etcd.WithPrefix()) // no need to check errors
return ch
}
// SetResources exports all of the resources which we pass in to etcd
func SetResources(obj *EmbdEtcd, hostname string, resourceList []resources.Res) error {
// key structure is /$NS/exported/$hostname/resources/$uid = $data
var kindFilter []string // empty to get from everyone
hostnameFilter := []string{hostname}
// this is not a race because we should only be reading keys which we
// set, and there should not be any contention with other hosts here!
originals, err := GetResources(obj, hostnameFilter, kindFilter)
if err != nil {
return err
}
if len(originals) == 0 && len(resourceList) == 0 { // special case of no add or del
return nil
}
ifs := []etcd.Cmp{} // list matching the desired state
ops := []etcd.Op{} // list of ops in this transaction
for _, res := range resourceList {
if res.Kind() == "" {
log.Fatalf("Etcd: SetResources: Error: Empty kind: %v", res.GetName())
}
uid := fmt.Sprintf("%s/%s", res.Kind(), res.GetName())
path := fmt.Sprintf("/%s/exported/%s/resources/%s", NS, hostname, uid)
if data, err := resources.ResToB64(res); err == nil {
ifs = append(ifs, etcd.Compare(etcd.Value(path), "=", data)) // desired state
ops = append(ops, etcd.OpPut(path, data))
} else {
return fmt.Errorf("can't convert to B64: %v", err)
}
}
match := func(res resources.Res, resourceList []resources.Res) bool { // helper lambda
for _, x := range resourceList {
if res.Kind() == x.Kind() && res.GetName() == x.GetName() {
return true
}
}
return false
}
hasDeletes := false
// delete old, now unused resources here...
for _, res := range originals {
if res.Kind() == "" {
log.Fatalf("Etcd: SetResources: Error: Empty kind: %v", res.GetName())
}
uid := fmt.Sprintf("%s/%s", res.Kind(), res.GetName())
path := fmt.Sprintf("/%s/exported/%s/resources/%s", NS, hostname, uid)
if match(res, resourceList) { // if we match, no need to delete!
continue
}
ops = append(ops, etcd.OpDelete(path))
hasDeletes = true
}
// if everything is already correct, do nothing, otherwise, run the ops!
// it's important to do this in one transaction, and atomically, because
// this way, we only generate one watch event, and only when it's needed
if hasDeletes { // always run, ifs don't matter
_, err = obj.Txn(nil, ops, nil) // TODO: does this run? it should!
} else {
_, err = obj.Txn(ifs, nil, ops) // TODO: do we need to look at response?
}
return err
}
// GetResources collects all of the resources which match a filter from etcd
// If the kindfilter or hostnameFilter is empty, then it assumes no filtering...
// TODO: Expand this with a more powerful filter based on what we eventually
// support in our collect DSL. Ideally a server side filter like WithFilter()
// We could do this if the pattern was /$NS/exported/$kind/$hostname/$uid = $data
func GetResources(obj *EmbdEtcd, hostnameFilter, kindFilter []string) ([]resources.Res, error) {
// key structure is /$NS/exported/$hostname/resources/$uid = $data
path := fmt.Sprintf("/%s/exported/", NS)
resourceList := []resources.Res{}
keyMap, err := obj.Get(path, etcd.WithPrefix(), etcd.WithSort(etcd.SortByKey, etcd.SortAscend))
if err != nil {
return nil, fmt.Errorf("could not get resources: %v", err)
}
for key, val := range keyMap {
if !strings.HasPrefix(key, path) { // sanity check
continue
}
str := strings.Split(key[len(path):], "/")
if len(str) != 4 {
return nil, fmt.Errorf("unexpected chunk count")
}
hostname, r, kind, name := str[0], str[1], str[2], str[3]
if r != "resources" {
return nil, fmt.Errorf("unexpected chunk pattern")
}
if kind == "" {
return nil, fmt.Errorf("unexpected kind chunk")
}
// FIXME: ideally this would be a server side filter instead!
if len(hostnameFilter) > 0 && !util.StrInList(hostname, hostnameFilter) {
continue
}
// FIXME: ideally this would be a server side filter instead!
if len(kindFilter) > 0 && !util.StrInList(kind, kindFilter) {
continue
}
if obj, err := resources.B64ToRes(val); err == nil {
obj.SetKind(kind) // cheap init
log.Printf("Etcd: Get: (Hostname, Kind, Name): (%s, %s, %s)", hostname, kind, name)
resourceList = append(resourceList, obj)
} else {
return nil, fmt.Errorf("can't convert from B64: %v", err)
}
}
return resourceList, nil
}
//func UrlRemoveScheme(urls etcdtypes.URLs) []string {
// strs := []string{}
// for _, u := range urls {
@@ -2256,7 +1757,7 @@ func GetResources(obj *EmbdEtcd, hostnameFilter, kindFilter []string) ([]resourc
// return strs
//}
// ApplyDeltaEvents modifies a URLsMap with the deltas from a WatchResponse
// ApplyDeltaEvents modifies a URLsMap with the deltas from a WatchResponse.
func ApplyDeltaEvents(re *RE, urlsmap etcdtypes.URLsMap) (etcdtypes.URLsMap, error) {
if re == nil { // passthrough
return urlsmap, nil

etcd/methods.go (new file)

@@ -0,0 +1,412 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package etcd
import (
"fmt"
"log"
"strconv"
"strings"
etcd "github.com/coreos/etcd/clientv3"
rpctypes "github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
etcdtypes "github.com/coreos/etcd/pkg/types"
context "golang.org/x/net/context"
)
// TODO: Could all these Etcd*(obj *EmbdEtcd, ...) functions which deal with the
// interface between etcd paths and behaviour be grouped into a single struct ?
// Nominate nominates a particular client to be a server (peer).
func Nominate(obj *EmbdEtcd, hostname string, urls etcdtypes.URLs) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Nominate(%v): %v", hostname, urls.String())
defer log.Printf("Trace: Etcd: Nominate(%v): Finished!", hostname)
}
// nominate someone to be a server
nominate := fmt.Sprintf("/%s/nominated/%s", NS, hostname)
ops := []etcd.Op{} // list of ops in this txn
if urls != nil {
ops = append(ops, etcd.OpPut(nominate, urls.String())) // TODO: add a TTL? (etcd.WithLease)
} else { // delete message if set to erase
ops = append(ops, etcd.OpDelete(nominate))
}
if _, err := obj.Txn(nil, ops, nil); err != nil {
return fmt.Errorf("nominate failed") // exit in progress?
}
return nil
}
// Nominated returns a urls map of nominated etcd server volunteers.
// NOTE: I know 'nominees' might be more correct, but is less consistent here
func Nominated(obj *EmbdEtcd) (etcdtypes.URLsMap, error) {
path := fmt.Sprintf("/%s/nominated/", NS)
keyMap, err := obj.Get(path, etcd.WithPrefix()) // map[string]string, bool
if err != nil {
return nil, fmt.Errorf("nominated isn't available: %v", err)
}
nominated := make(etcdtypes.URLsMap)
for key, val := range keyMap { // loop through directory of nominated
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of nominee
if val == "" { // skip "erased" values
continue
}
urls, err := etcdtypes.NewURLs(strings.Split(val, ","))
if err != nil {
return nil, fmt.Errorf("nominated data format error: %v", err)
}
nominated[name] = urls // add to map
if obj.flags.Debug {
log.Printf("Etcd: Nominated(%v): %v", name, val)
}
}
return nominated, nil
}
// Volunteer offers yourself up to be a server if needed.
func Volunteer(obj *EmbdEtcd, urls etcdtypes.URLs) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Volunteer(%v): %v", obj.hostname, urls.String())
defer log.Printf("Trace: Etcd: Volunteer(%v): Finished!", obj.hostname)
}
// volunteer to be a server
volunteer := fmt.Sprintf("/%s/volunteers/%s", NS, obj.hostname)
ops := []etcd.Op{} // list of ops in this txn
if urls != nil {
// XXX: adding a TTL is crucial! (i think)
ops = append(ops, etcd.OpPut(volunteer, urls.String())) // value is usually a peer "serverURL"
} else { // delete message if set to erase
ops = append(ops, etcd.OpDelete(volunteer))
}
if _, err := obj.Txn(nil, ops, nil); err != nil {
return fmt.Errorf("volunteering failed") // exit in progress?
}
return nil
}
// Volunteers returns a urls map of available etcd server volunteers.
func Volunteers(obj *EmbdEtcd) (etcdtypes.URLsMap, error) {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Volunteers()")
defer log.Printf("Trace: Etcd: Volunteers(): Finished!")
}
path := fmt.Sprintf("/%s/volunteers/", NS)
keyMap, err := obj.Get(path, etcd.WithPrefix())
if err != nil {
return nil, fmt.Errorf("volunteers aren't available: %v", err)
}
volunteers := make(etcdtypes.URLsMap)
for key, val := range keyMap { // loop through directory of volunteers
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of volunteer
if val == "" { // skip "erased" values
continue
}
urls, err := etcdtypes.NewURLs(strings.Split(val, ","))
if err != nil {
return nil, fmt.Errorf("volunteers data format error: %v", err)
}
volunteers[name] = urls // add to map
if obj.flags.Debug {
log.Printf("Etcd: Volunteer(%v): %v", name, val)
}
}
return volunteers, nil
}
// AdvertiseEndpoints advertises the list of available client endpoints.
func AdvertiseEndpoints(obj *EmbdEtcd, urls etcdtypes.URLs) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: AdvertiseEndpoints(%v): %v", obj.hostname, urls.String())
defer log.Printf("Trace: Etcd: AdvertiseEndpoints(%v): Finished!", obj.hostname)
}
// advertise endpoints
endpoints := fmt.Sprintf("/%s/endpoints/%s", NS, obj.hostname)
ops := []etcd.Op{} // list of ops in this txn
if urls != nil {
// TODO: add a TTL? (etcd.WithLease)
ops = append(ops, etcd.OpPut(endpoints, urls.String())) // value is usually a "clientURL"
} else { // delete message if set to erase
ops = append(ops, etcd.OpDelete(endpoints))
}
if _, err := obj.Txn(nil, ops, nil); err != nil {
return fmt.Errorf("endpoint advertising failed") // exit in progress?
}
return nil
}
// Endpoints returns a urls map of available etcd server endpoints.
func Endpoints(obj *EmbdEtcd) (etcdtypes.URLsMap, error) {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Endpoints()")
defer log.Printf("Trace: Etcd: Endpoints(): Finished!")
}
path := fmt.Sprintf("/%s/endpoints/", NS)
keyMap, err := obj.Get(path, etcd.WithPrefix())
if err != nil {
return nil, fmt.Errorf("endpoints aren't available: %v", err)
}
endpoints := make(etcdtypes.URLsMap)
for key, val := range keyMap { // loop through directory of endpoints
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of volunteer
if val == "" { // skip "erased" values
continue
}
urls, err := etcdtypes.NewURLs(strings.Split(val, ","))
if err != nil {
return nil, fmt.Errorf("endpoints data format error: %v", err)
}
endpoints[name] = urls // add to map
if obj.flags.Debug {
log.Printf("Etcd: Endpoint(%v): %v", name, val)
}
}
return endpoints, nil
}
// SetHostnameConverged sets whether a specific hostname is converged.
func SetHostnameConverged(obj *EmbdEtcd, hostname string, isConverged bool) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: SetHostnameConverged(%s): %v", hostname, isConverged)
defer log.Printf("Trace: Etcd: SetHostnameConverged(%v): Finished!", hostname)
}
converged := fmt.Sprintf("/%s/converged/%s", NS, hostname)
op := []etcd.Op{etcd.OpPut(converged, fmt.Sprintf("%t", isConverged))}
if _, err := obj.Txn(nil, op, nil); err != nil { // TODO: do we need a skipConv flag here too?
return fmt.Errorf("set converged failed") // exit in progress?
}
return nil
}
// HostnameConverged returns a map of every hostname's converged state.
func HostnameConverged(obj *EmbdEtcd) (map[string]bool, error) {
if obj.flags.Trace {
log.Printf("Trace: Etcd: HostnameConverged()")
defer log.Printf("Trace: Etcd: HostnameConverged(): Finished!")
}
path := fmt.Sprintf("/%s/converged/", NS)
keyMap, err := obj.ComplexGet(path, true, etcd.WithPrefix()) // don't un-converge
if err != nil {
return nil, fmt.Errorf("converged values aren't available: %v", err)
}
converged := make(map[string]bool)
for key, val := range keyMap { // loop through directory...
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of key
if val == "" { // skip "erased" values
continue
}
b, err := strconv.ParseBool(val)
if err != nil {
return nil, fmt.Errorf("converged data format error: %v", err)
}
converged[name] = b // add to map
}
return converged, nil
}
// AddHostnameConvergedWatcher adds a watcher with a callback that runs on
// hostname state changes.
func AddHostnameConvergedWatcher(obj *EmbdEtcd, callbackFn func(map[string]bool) error) (func(), error) {
path := fmt.Sprintf("/%s/converged/", NS)
internalCbFn := func(re *RE) error {
// TODO: get the value from the response, and apply delta...
// for now, just run a get operation which is easier to code!
m, err := HostnameConverged(obj)
if err != nil {
return err
}
return callbackFn(m) // call my function
}
return obj.AddWatcher(path, internalCbFn, true, true, etcd.WithPrefix()) // no block and no converger reset
}
// SetClusterSize sets the ideal target cluster size of etcd peers.
func SetClusterSize(obj *EmbdEtcd, value uint16) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: SetClusterSize(): %v", value)
defer log.Printf("Trace: Etcd: SetClusterSize(): Finished!")
}
key := fmt.Sprintf("/%s/idealClusterSize", NS)
if err := obj.Set(key, strconv.FormatUint(uint64(value), 10)); err != nil {
return fmt.Errorf("function SetClusterSize failed: %v", err) // exit in progress?
}
return nil
}
// GetClusterSize gets the ideal target cluster size of etcd peers.
func GetClusterSize(obj *EmbdEtcd) (uint16, error) {
key := fmt.Sprintf("/%s/idealClusterSize", NS)
keyMap, err := obj.Get(key)
if err != nil {
return 0, fmt.Errorf("function GetClusterSize failed: %v", err)
}
val, exists := keyMap[key]
if !exists || val == "" {
return 0, fmt.Errorf("function GetClusterSize failed: %v", err)
}
v, err := strconv.ParseUint(val, 10, 16)
if err != nil {
return 0, fmt.Errorf("function GetClusterSize failed: %v", err)
}
return uint16(v), nil
}
// MemberAdd adds a member to the cluster.
func MemberAdd(obj *EmbdEtcd, peerURLs etcdtypes.URLs) (*etcd.MemberAddResponse, error) {
//obj.Connect(false) // TODO: ?
ctx := context.Background()
var response *etcd.MemberAddResponse
var err error
for {
if obj.exiting { // the exit signal has been sent!
return nil, fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
response, err = obj.client.MemberAdd(ctx, peerURLs.StringSlice())
obj.rLock.RUnlock()
if err == nil {
break
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return nil, err
}
}
return response, nil
}
// MemberRemove removes a member by mID and returns if it worked, and also
// if there was an error. This is because it might have run without error, but
// the member wasn't found, for example.
func MemberRemove(obj *EmbdEtcd, mID uint64) (bool, error) {
//obj.Connect(false) // TODO: ?
ctx := context.Background()
for {
if obj.exiting { // the exit signal has been sent!
return false, fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
_, err := obj.client.MemberRemove(ctx, mID)
obj.rLock.RUnlock()
if err == nil {
break
} else if err == rpctypes.ErrMemberNotFound {
// if we get this, member already shut itself down :)
return false, nil
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return false, err
}
}
return true, nil
}
// Members returns information on cluster membership.
// The member ID's are the keys, because an empty names means unstarted!
// TODO: consider queueing this through the main loop with CtxError(ctx, err)
func Members(obj *EmbdEtcd) (map[uint64]string, error) {
//obj.Connect(false) // TODO: ?
ctx := context.Background()
var response *etcd.MemberListResponse
var err error
for {
if obj.exiting { // the exit signal has been sent!
return nil, fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
if obj.flags.Trace {
log.Printf("Trace: Etcd: Members(): Endpoints are: %v", obj.client.Endpoints())
}
response, err = obj.client.MemberList(ctx)
obj.rLock.RUnlock()
if err == nil {
break
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return nil, err
}
}
members := make(map[uint64]string)
for _, x := range response.Members {
members[x.ID] = x.Name // x.Name will be "" if unstarted!
}
return members, nil
}
// Leader returns the current leader of the etcd server cluster.
func Leader(obj *EmbdEtcd) (string, error) {
//obj.Connect(false) // TODO: ?
var err error
membersMap := make(map[uint64]string)
if membersMap, err = Members(obj); err != nil {
return "", err
}
addresses := obj.LocalhostClientURLs() // heuristic, but probably correct
if len(addresses) == 0 {
// probably a programming error...
return "", fmt.Errorf("programming error")
}
endpoint := addresses[0].Host // FIXME: arbitrarily picked the first one
// part two
ctx := context.Background()
var response *etcd.StatusResponse
for {
if obj.exiting { // the exit signal has been sent!
return "", fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
response, err = obj.client.Maintenance.Status(ctx, endpoint)
obj.rLock.RUnlock()
if err == nil {
break
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return "", err
}
}
// isLeader: response.Header.MemberId == response.Leader
for id, name := range membersMap {
if id == response.Leader {
return name, nil
}
}
return "", fmt.Errorf("members map is not current") // not found
}

etcd/resources.go (new file)

@@ -0,0 +1,181 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package etcd
import (
"fmt"
"log"
"strings"
"github.com/purpleidea/mgmt/resources"
"github.com/purpleidea/mgmt/util"
etcd "github.com/coreos/etcd/clientv3"
)
// WatchResources returns a channel that outputs events when exported resources
// change.
// TODO: Filter our watch (on the server side if possible) based on the
// collection prefixes and filters that we care about...
func WatchResources(obj *EmbdEtcd) chan error {
ch := make(chan error, 1) // buffer it so we can measure it
path := fmt.Sprintf("/%s/exported/", NS)
callback := func(re *RE) error {
// TODO: is this even needed? it used to happen on conn errors
log.Printf("Etcd: Watch: Path: %v", path) // event
if re == nil || re.response.Canceled {
return fmt.Errorf("watch is empty") // will cause a CtxError+retry
}
// we normally need to check if anything changed since the last
// event, since a set (export) with no changes still causes the
// watcher to trigger and this would cause an infinite loop. we
// don't need to do this check anymore because we do the export
// transactionally, and only if a change is needed. since it is
// atomic, all the changes arrive together which avoids dupes!!
if len(ch) == 0 { // send event only if one isn't pending
// this check avoids multiple events all queueing up and then
// being released continuously long after the changes stopped
// do not block!
ch <- nil // event
}
return nil
}
_, _ = obj.AddWatcher(path, callback, true, false, etcd.WithPrefix()) // no need to check errors
return ch
}
// SetResources exports all of the resources which we pass in to etcd.
func SetResources(obj *EmbdEtcd, hostname string, resourceList []resources.Res) error {
// key structure is /$NS/exported/$hostname/resources/$uid = $data
var kindFilter []string // empty to get from everyone
hostnameFilter := []string{hostname}
// this is not a race because we should only be reading keys which we
// set, and there should not be any contention with other hosts here!
originals, err := GetResources(obj, hostnameFilter, kindFilter)
if err != nil {
return err
}
if len(originals) == 0 && len(resourceList) == 0 { // special case of no add or del
return nil
}
ifs := []etcd.Cmp{} // list matching the desired state
ops := []etcd.Op{} // list of ops in this transaction
for _, res := range resourceList {
if res.GetKind() == "" {
log.Fatalf("Etcd: SetResources: Error: Empty kind: %v", res.GetName())
}
uid := fmt.Sprintf("%s/%s", res.GetKind(), res.GetName())
path := fmt.Sprintf("/%s/exported/%s/resources/%s", NS, hostname, uid)
if data, err := resources.ResToB64(res); err == nil {
ifs = append(ifs, etcd.Compare(etcd.Value(path), "=", data)) // desired state
ops = append(ops, etcd.OpPut(path, data))
} else {
return fmt.Errorf("can't convert to B64: %v", err)
}
}
match := func(res resources.Res, resourceList []resources.Res) bool { // helper lambda
for _, x := range resourceList {
if res.GetKind() == x.GetKind() && res.GetName() == x.GetName() {
return true
}
}
return false
}
hasDeletes := false
// delete old, now unused resources here...
for _, res := range originals {
if res.GetKind() == "" {
log.Fatalf("Etcd: SetResources: Error: Empty kind: %v", res.GetName())
}
uid := fmt.Sprintf("%s/%s", res.GetKind(), res.GetName())
path := fmt.Sprintf("/%s/exported/%s/resources/%s", NS, hostname, uid)
if match(res, resourceList) { // if we match, no need to delete!
continue
}
ops = append(ops, etcd.OpDelete(path))
hasDeletes = true
}
// if everything is already correct, do nothing, otherwise, run the ops!
// it's important to do this in one transaction, and atomically, because
// this way, we only generate one watch event, and only when it's needed
if hasDeletes { // always run, ifs don't matter
_, err = obj.Txn(nil, ops, nil) // TODO: does this run? it should!
} else {
_, err = obj.Txn(ifs, nil, ops) // TODO: do we need to look at response?
}
return err
}
// GetResources collects all of the resources which match a filter from etcd.
// If the kindfilter or hostnameFilter is empty, then it assumes no filtering...
// TODO: Expand this with a more powerful filter based on what we eventually
// support in our collect DSL. Ideally a server side filter like WithFilter()
// We could do this if the pattern was /$NS/exported/$kind/$hostname/$uid = $data.
func GetResources(obj *EmbdEtcd, hostnameFilter, kindFilter []string) ([]resources.Res, error) {
// key structure is /$NS/exported/$hostname/resources/$uid = $data
path := fmt.Sprintf("/%s/exported/", NS)
resourceList := []resources.Res{}
keyMap, err := obj.Get(path, etcd.WithPrefix(), etcd.WithSort(etcd.SortByKey, etcd.SortAscend))
if err != nil {
return nil, fmt.Errorf("could not get resources: %v", err)
}
for key, val := range keyMap {
if !strings.HasPrefix(key, path) { // sanity check
continue
}
str := strings.Split(key[len(path):], "/")
if len(str) != 4 {
return nil, fmt.Errorf("unexpected chunk count")
}
hostname, r, kind, name := str[0], str[1], str[2], str[3]
if r != "resources" {
return nil, fmt.Errorf("unexpected chunk pattern")
}
if kind == "" {
return nil, fmt.Errorf("unexpected kind chunk")
}
// FIXME: ideally this would be a server side filter instead!
if len(hostnameFilter) > 0 && !util.StrInList(hostname, hostnameFilter) {
continue
}
// FIXME: ideally this would be a server side filter instead!
if len(kindFilter) > 0 && !util.StrInList(kind, kindFilter) {
continue
}
if obj, err := resources.B64ToRes(val); err == nil {
log.Printf("Etcd: Get: (Hostname, Kind, Name): (%s, %s, %s)", hostname, kind, name)
resourceList = append(resourceList, obj)
} else {
return nil, fmt.Errorf("can't convert from B64: %v", err)
}
}
return resourceList, nil
}
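For orientation, here is a hedged sketch of how a caller might drive GetResources; the exampleCollect wrapper, the embdEtcd handle and the "file" kind filter are illustrative assumptions, not part of this diff.

// exampleCollect is a sketch only: it assumes a running *EmbdEtcd client and
// that other hosts have already exported resources. The keys walked by
// GetResources look like /$NS/exported/$hostname/resources/$kind/$name = $data.
func exampleCollect(embdEtcd *EmbdEtcd) error {
	// empty hostname filter (any host); collect only "file" kinded resources
	resList, err := GetResources(embdEtcd, []string{}, []string{"file"})
	if err != nil {
		return err
	}
	for _, res := range resList {
		log.Printf("collected: %s/%s", res.GetKind(), res.GetName())
	}
	return nil
}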


@@ -1,37 +1,39 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package etcd
import (
"errors"
"fmt"
"strings"
"github.com/purpleidea/mgmt/util"
etcd "github.com/coreos/etcd/clientv3"
errwrap "github.com/pkg/errors"
)
// ErrNotExist is returned when GetStr can not find the requested key.
// TODO: https://dave.cheney.net/2016/04/07/constant-errors
var ErrNotExist = errors.New("errNotExist")
// WatchStr returns a channel which spits out events on key activity.
// FIXME: It should close the channel when it's done, and spit out errors when
// something goes wrong.
func WatchStr(obj *EmbdEtcd, key string) chan error {
// new key structure is /$NS/strings/$key/$hostname = $data
// new key structure is /$NS/strings/$key = $data
path := fmt.Sprintf("/%s/strings/%s", NS, key)
ch := make(chan error, 1)
// FIXME: fix our API so that we get a close event on shutdown.
@@ -50,50 +52,38 @@ func WatchStr(obj *EmbdEtcd, key string) chan error {
return ch
}
// GetStr collects all of the strings which match a namespace in etcd.
func GetStr(obj *EmbdEtcd, hostnameFilter []string, key string) (map[string]string, error) {
// old key structure is /$NS/strings/$hostname/$key = $data
// new key structure is /$NS/strings/$key/$hostname = $data
// FIXME: if we have the $key as the last token (old key structure), we
// can allow the key to contain the slash char, otherwise we need to
// verify that one isn't present in the input string.
// GetStr collects the string which matches a global namespace in etcd.
func GetStr(obj *EmbdEtcd, key string) (string, error) {
// new key structure is /$NS/strings/$key = $data
path := fmt.Sprintf("/%s/strings/%s", NS, key)
keyMap, err := obj.Get(path, etcd.WithPrefix(), etcd.WithSort(etcd.SortByKey, etcd.SortAscend))
keyMap, err := obj.Get(path, etcd.WithPrefix())
if err != nil {
return nil, errwrap.Wrapf(err, "could not get strings in: %s", key)
}
result := make(map[string]string)
for key, val := range keyMap {
if !strings.HasPrefix(key, path) { // sanity check
continue
return "", errwrap.Wrapf(err, "could not get strings in: %s", key)
}
str := strings.Split(key[len(path):], "/")
if len(str) != 2 {
return nil, fmt.Errorf("unexpected chunk count of %d", len(str))
}
_, hostname := str[0], str[1]
if hostname == "" {
return nil, fmt.Errorf("unexpected chunk length of %d", len(hostname))
if len(keyMap) == 0 {
return "", ErrNotExist
}
// FIXME: ideally this would be a server side filter instead!
if len(hostnameFilter) > 0 && !util.StrInList(hostname, hostnameFilter) {
continue
if count := len(keyMap); count != 1 {
return "", fmt.Errorf("returned %d entries", count)
}
//log.Printf("Etcd: GetStr(%s): (Hostname, Data): (%s, %s)", key, hostname, val)
result[hostname] = val
val, exists := keyMap[path]
if !exists {
return "", fmt.Errorf("path `%s` is missing", path)
}
return result, nil
//log.Printf("Etcd: GetStr(%s): %s", key, val)
return val, nil
}
// SetStr sets a key and hostname pair to a certain value. If the value is nil,
// then it deletes the key. Otherwise the value should point to a string.
// SetStr sets a key and hostname pair to a certain value. If the value is
// nil, then it deletes the key. Otherwise the value should point to a string.
// TODO: TTL or delete disconnect?
func SetStr(obj *EmbdEtcd, hostname, key string, data *string) error {
// key structure is /$NS/strings/$key/$hostname = $data
path := fmt.Sprintf("/%s/strings/%s/%s", NS, key, hostname)
func SetStr(obj *EmbdEtcd, key string, data *string) error {
// key structure is /$NS/strings/$key = $data
path := fmt.Sprintf("/%s/strings/%s", NS, key)
ifs := []etcd.Cmp{} // list matching the desired state
ops := []etcd.Op{} // list of ops in this transaction (then)
els := []etcd.Op{} // list of ops in this transaction (else)

etcd/strmap.go Normal file

@@ -0,0 +1,115 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package etcd
import (
"fmt"
"strings"
"github.com/purpleidea/mgmt/util"
etcd "github.com/coreos/etcd/clientv3"
errwrap "github.com/pkg/errors"
)
// WatchStrMap returns a channel which spits out events on key activity.
// FIXME: It should close the channel when it's done, and spit out errors when
// something goes wrong.
func WatchStrMap(obj *EmbdEtcd, key string) chan error {
// new key structure is /$NS/strings/$key/$hostname = $data
path := fmt.Sprintf("/%s/strings/%s", NS, key)
ch := make(chan error, 1)
// FIXME: fix our API so that we get a close event on shutdown.
callback := func(re *RE) error {
// TODO: is this even needed? it used to happen on conn errors
//log.Printf("Etcd: Watch: Path: %v", path) // event
if re == nil || re.response.Canceled {
return fmt.Errorf("watch is empty") // will cause a CtxError+retry
}
if len(ch) == 0 { // send event only if one isn't pending
ch <- nil // event
}
return nil
}
_, _ = obj.AddWatcher(path, callback, true, false, etcd.WithPrefix()) // no need to check errors
return ch
}
// GetStrMap collects all of the strings which match a namespace in etcd.
func GetStrMap(obj *EmbdEtcd, hostnameFilter []string, key string) (map[string]string, error) {
// old key structure is /$NS/strings/$hostname/$key = $data
// new key structure is /$NS/strings/$key/$hostname = $data
// FIXME: if we have the $key as the last token (old key structure), we
// can allow the key to contain the slash char, otherwise we need to
// verify that one isn't present in the input string.
path := fmt.Sprintf("/%s/strings/%s", NS, key)
keyMap, err := obj.Get(path, etcd.WithPrefix(), etcd.WithSort(etcd.SortByKey, etcd.SortAscend))
if err != nil {
return nil, errwrap.Wrapf(err, "could not get strings in: %s", key)
}
result := make(map[string]string)
for key, val := range keyMap {
if !strings.HasPrefix(key, path) { // sanity check
continue
}
str := strings.Split(key[len(path):], "/")
if len(str) != 2 {
return nil, fmt.Errorf("unexpected chunk count of %d", len(str))
}
_, hostname := str[0], str[1]
if hostname == "" {
return nil, fmt.Errorf("unexpected chunk length of %d", len(hostname))
}
// FIXME: ideally this would be a server side filter instead!
if len(hostnameFilter) > 0 && !util.StrInList(hostname, hostnameFilter) {
continue
}
//log.Printf("Etcd: GetStr(%s): (Hostname, Data): (%s, %s)", key, hostname, val)
result[hostname] = val
}
return result, nil
}
// SetStrMap sets a key and hostname pair to a certain value. If the value is
// nil, then it deletes the key. Otherwise the value should point to a string.
// TODO: TTL or delete disconnect?
func SetStrMap(obj *EmbdEtcd, hostname, key string, data *string) error {
// key structure is /$NS/strings/$key/$hostname = $data
path := fmt.Sprintf("/%s/strings/%s/%s", NS, key, hostname)
ifs := []etcd.Cmp{} // list matching the desired state
ops := []etcd.Op{} // list of ops in this transaction (then)
els := []etcd.Op{} // list of ops in this transaction (else)
if data == nil { // perform a delete
// TODO: use https://github.com/coreos/etcd/pull/7417 if merged
//ifs = append(ifs, etcd.KeyExists(path))
ifs = append(ifs, etcd.Compare(etcd.Version(path), ">", 0))
ops = append(ops, etcd.OpDelete(path))
} else {
data := *data // get the real value
ifs = append(ifs, etcd.Compare(etcd.Value(path), "=", data)) // desired state
els = append(els, etcd.OpPut(path, data))
}
// it's important to do this in one transaction, and atomically, because
// this way, we only generate one watch event, and only when it's needed
_, err := obj.Txn(ifs, ops, els) // TODO: do we need to look at response?
return errwrap.Wrapf(err, "could not set strings in: %s", key)
}
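Taken together, WatchStrMap, GetStrMap and SetStrMap form a small per-hostname key/value layer on top of etcd. A hedged usage sketch follows; the exampleStrMap function, the embdEtcd handle and the "ping" namespace are invented for illustration.

// exampleStrMap is a sketch only: each host writes under its own hostname,
// and any host can read the whole map back. Assumes a running *EmbdEtcd client.
func exampleStrMap(embdEtcd *EmbdEtcd, hostname string) error {
	value := "pong"
	// write: stored at /$NS/strings/ping/$hostname = "pong"
	if err := SetStrMap(embdEtcd, hostname, "ping", &value); err != nil {
		return err
	}
	// read: an empty hostname filter returns the entries from every host
	m, err := GetStrMap(embdEtcd, []string{}, "ping")
	if err != nil {
		return err
	}
	for host, v := range m {
		fmt.Printf("%s -> %s\n", host, v)
	}
	// delete: passing nil removes this host's entry again
	return SetStrMap(embdEtcd, hostname, "ping", nil)
}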


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package etcd
@@ -27,6 +27,12 @@ type World struct {
EmbdEtcd *EmbdEtcd
}
// ResWatch returns a channel which spits out events on possible exported
// resource changes.
func (obj *World) ResWatch() chan error {
return WatchResources(obj.EmbdEtcd)
}
// ResExport exports a list of resources under our hostname namespace.
// Subsequent calls replace the previously set collection atomically.
func (obj *World) ResExport(resourceList []resources.Res) error {
@@ -42,23 +48,48 @@ func (obj *World) ResCollect(hostnameFilter, kindFilter []string) ([]resources.R
return GetResources(obj.EmbdEtcd, hostnameFilter, kindFilter)
}
// SetWatch returns a channel which spits out events on possible string changes.
// StrWatch returns a channel which spits out events on possible string changes.
func (obj *World) StrWatch(namespace string) chan error {
return WatchStr(obj.EmbdEtcd, namespace)
}
// StrGet returns a map of hostnames to values in the given namespace.
func (obj *World) StrGet(namespace string) (map[string]string, error) {
return GetStr(obj.EmbdEtcd, []string{}, namespace)
// StrIsNotExist returns whether the error from StrGet is a key missing error.
func (obj *World) StrIsNotExist(err error) bool {
return err == ErrNotExist
}
// StrSet sets the namespace value to a particular string under the identity of
// its own hostname.
// StrGet returns the value for the given namespace.
func (obj *World) StrGet(namespace string) (string, error) {
return GetStr(obj.EmbdEtcd, namespace)
}
// StrSet sets the namespace value to a particular string.
func (obj *World) StrSet(namespace, value string) error {
return SetStr(obj.EmbdEtcd, obj.Hostname, namespace, &value)
return SetStr(obj.EmbdEtcd, namespace, &value)
}
// StrDel deletes the value in a particular namespace.
func (obj *World) StrDel(namespace string) error {
return SetStr(obj.EmbdEtcd, obj.Hostname, namespace, nil)
return SetStr(obj.EmbdEtcd, namespace, nil)
}
// StrMapWatch returns a channel which spits out events on possible string changes.
func (obj *World) StrMapWatch(namespace string) chan error {
return WatchStrMap(obj.EmbdEtcd, namespace)
}
// StrMapGet returns a map of hostnames to values in the given namespace.
func (obj *World) StrMapGet(namespace string) (map[string]string, error) {
return GetStrMap(obj.EmbdEtcd, []string{}, namespace)
}
// StrMapSet sets the namespace value to a particular string under the identity
// of its own hostname.
func (obj *World) StrMapSet(namespace, value string) error {
return SetStrMap(obj.EmbdEtcd, obj.Hostname, namespace, &value)
}
// StrMapDel deletes the value in a particular namespace.
func (obj *World) StrMapDel(namespace string) error {
return SetStrMap(obj.EmbdEtcd, obj.Hostname, namespace, nil)
}
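In short, the World API now distinguishes a single global string per namespace (the Str* methods) from a per-hostname map (the StrMap* methods). A consumer-side sketch follows; the exampleWorld function, the world handle and the namespace names are purely illustrative.

// exampleWorld is a sketch only: it assumes an initialized *World.
func exampleWorld(world *World) error {
	// global value: there is exactly one string per namespace
	if err := world.StrSet("leader", "host1"); err != nil {
		return err
	}
	leader, err := world.StrGet("leader")
	if world.StrIsNotExist(err) {
		leader = "(not set yet)"
	} else if err != nil {
		return err
	}
	// per-host values: each host writes under its own hostname
	if err := world.StrMapSet("heartbeat", "alive"); err != nil {
		return err
	}
	beats, err := world.StrMapGet("heartbeat") // map of hostname -> value
	if err != nil {
		return err
	}
	fmt.Printf("leader: %s, %d hosts reporting\n", leader, len(beats))
	return nil
}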


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package event provides some primitives that are used for message passing.


@@ -13,7 +13,5 @@ resources:
meta:
autoedge: true
path: "/tmp/foo/"
content: |
i am f2
state: exists
edges: []

examples/autoedges4.yaml Normal file

@@ -0,0 +1,19 @@
---
graph: mygraph
resources:
user:
- name: edgeuser
state: absent
gid: 10000
- name: edgeuser2
state: exists
group: edgegroup
groups: [edgegroup2, edgegroup3]
group:
- name: edgegroup
state: exists
gid: 10000
- name: edgegroup2
state: exists
- name: edgegroup3
state: exists

examples/autoedges5.yaml Normal file

@@ -0,0 +1,21 @@
---
graph: mygraph
resources:
pkg:
- name: httpd
meta:
autoedge: true
noop: true
state: installed
exec:
- name: pkg10
cmd: /usr/bin/apachectl status
shell: ''
timeout: 0
watchcmd: ''
watchshell: ''
ifcmd: ''
ifshell: ''
pollint: 0
state: present
edges: []

examples/aws_ec2_1.yaml Normal file

@@ -0,0 +1,10 @@
---
graph: mygraph
resources:
aws:ec2:
- name: ec2example
region: ca-central-1
type: t2.micro
imageid: ami-5ac17f3e
state: running
edges: []

examples/file0.yaml Normal file

@@ -0,0 +1,10 @@
---
graph: mygraph
resources:
file:
- name: file0
path: "/tmp/mgmt/f1"
content: |
i am f0
state: exists
edges: []

examples/file4.yaml Normal file

@@ -0,0 +1,10 @@
---
graph: mygraph
resources:
file:
- name: file1
path: "/tmp/mgmt/hello"
content: |
i am a file
state: exists
edges: []

examples/graph0.hcl Normal file

@@ -0,0 +1,14 @@
resource "file" "file1" {
path = "/tmp/mgmt-hello-world"
content = "hello, world"
state = "exists"
depends_on = ["noop.noop1", "exec.sleep"]
}
resource "noop" "noop1" {
test = "nil"
}
resource "exec" "sleep" {
cmd = "sleep 10s"
}

examples/graph1.hcl Normal file

@@ -0,0 +1,4 @@
resource "exec" "exec1" {
cmd = "cat /tmp/mgmt-hello-world"
state = "present"
}

examples/group1.yaml Normal file

@@ -0,0 +1,8 @@
---
graph: mygraph
resources:
group:
- name: testgroup
state: exists
gid: 10000
edges: []

examples/hil.hcl Normal file

@@ -0,0 +1,9 @@
resource "file" "file1" {
path = "/tmp/mgmt-hello-world"
content = "${exec.sleep.Output}"
state = "exists"
}
resource "exec" "sleep" {
cmd = "echo hello"
}


@@ -0,0 +1,246 @@
// libmgmt example of send->recv
package main
import (
"fmt"
"log"
"os"
"os/signal"
"sync"
"syscall"
"time"
"github.com/purpleidea/mgmt/gapi"
mgmt "github.com/purpleidea/mgmt/lib"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
)
// MyGAPI implements the main GAPI interface.
type MyGAPI struct {
Name string // graph name
Interval uint // refresh interval, 0 to never refresh
data gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}
// NewMyGAPI creates a new MyGAPI struct and calls Init().
func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
obj := &MyGAPI{
Name: name,
Interval: interval,
}
return obj, obj.Init(data)
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
return fmt.Errorf("already initialized")
}
if obj.Name == "" {
return fmt.Errorf("the graph name must be specified")
}
obj.data = data // store for later
obj.closeChan = make(chan struct{})
obj.initialized = true
return nil
}
// Graph returns a current Graph.
func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, err
}
metaparams := resources.DefaultMetaParams
exec1 := &resources.ExecRes{
BaseRes: resources.BaseRes{
Name: "exec1",
Kind: "exec",
MetaParams: metaparams,
},
Cmd: "echo hello world && echo goodbye world 1>&2", // to stdout && stderr
Shell: "/bin/bash",
}
g.AddVertex(exec1)
output := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "output",
Kind: "file",
MetaParams: metaparams,
// send->recv!
Recv: map[string]*resources.Send{
"Content": {Res: exec1, Key: "Output"},
},
},
Path: "/tmp/mgmt/output",
State: "present",
}
g.AddVertex(output)
g.AddEdge(exec1, output, &resources.Edge{Name: "e0"})
stdout := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "stdout",
Kind: "file",
MetaParams: metaparams,
// send->recv!
Recv: map[string]*resources.Send{
"Content": {Res: exec1, Key: "Stdout"},
},
},
Path: "/tmp/mgmt/stdout",
State: "present",
}
g.AddVertex(stdout)
g.AddEdge(exec1, stdout, &resources.Edge{Name: "e1"})
stderr := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "stderr",
Kind: "file",
MetaParams: metaparams,
// send->recv!
Recv: map[string]*resources.Send{
"Content": {Res: exec1, Key: "Stderr"},
},
},
Path: "/tmp/mgmt/stderr",
State: "present",
}
g.AddVertex(stderr)
g.AddEdge(exec1, stderr, &resources.Edge{Name: "e2"})
//g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.World, obj.data.Noop)
return g, nil
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
return
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
ticker := make(<-chan time.Time)
if obj.data.NoStreamWatch || obj.Interval <= 0 {
ticker = nil
} else {
// arbitrarily change graph every interval seconds
t := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer t.Stop()
ticker = t.C
}
for {
select {
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case <-ticker:
// pass
case <-obj.closeChan:
return
}
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
return
}
}
}()
return ch
}
// Close shuts down the MyGAPI.
func (obj *MyGAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
close(obj.closeChan)
obj.wg.Wait()
obj.initialized = false // closed = true
return nil
}
// Run runs an embedded mgmt server.
func Run() error {
obj := &mgmt.Main{}
obj.Program = "libmgmt" // TODO: set on compilation
obj.Version = "0.0.1" // TODO: set on compilation
obj.TmpPrefix = true // disable for easy debugging
//prefix := "/tmp/testprefix/"
//obj.Prefix = &p // enable for easy debugging
obj.IdealClusterSize = -1
obj.ConvergedTimeout = -1
obj.Noop = false // FIXME: careful!
obj.GAPI = &MyGAPI{ // graph API
Name: "libmgmt", // TODO: set on compilation
Interval: 60 * 10, // arbitrarily change graph every ten minutes
}
if err := obj.Init(); err != nil {
return err
}
// install the exit signal handler
exit := make(chan struct{})
defer close(exit)
go func() {
signals := make(chan os.Signal, 1)
signal.Notify(signals, os.Interrupt) // catch ^C
//signal.Notify(signals, os.Kill) // catch signals
signal.Notify(signals, syscall.SIGTERM)
select {
case sig := <-signals: // any signal will do
if sig == os.Interrupt {
log.Println("Interrupted by ^C")
obj.Exit(nil)
return
}
log.Println("Interrupted by signal")
obj.Exit(fmt.Errorf("killed by %v", sig))
return
case <-exit:
return
}
}()
return obj.Run()
}
func main() {
log.Printf("Hello!")
if err := Run(); err != nil {
fmt.Println(err)
os.Exit(1)
return
}
log.Printf("Goodbye!")
}


@@ -0,0 +1,253 @@
// libmgmt example of flattened subgraph
package main
import (
"fmt"
"log"
"os"
"os/signal"
"sync"
"syscall"
"time"
"github.com/purpleidea/mgmt/gapi"
mgmt "github.com/purpleidea/mgmt/lib"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
errwrap "github.com/pkg/errors"
)
// MyGAPI implements the main GAPI interface.
type MyGAPI struct {
Name string // graph name
Interval uint // refresh interval, 0 to never refresh
data gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}
// NewMyGAPI creates a new MyGAPI struct and calls Init().
func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
obj := &MyGAPI{
Name: name,
Interval: interval,
}
return obj, obj.Init(data)
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
return fmt.Errorf("already initialized")
}
if obj.Name == "" {
return fmt.Errorf("the graph name must be specified")
}
obj.data = data // store for later
obj.closeChan = make(chan struct{})
obj.initialized = true
return nil
}
func (obj *MyGAPI) subGraph() (*pgraph.Graph, error) {
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, err
}
metaparams := resources.DefaultMetaParams
f1 := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "file1",
Kind: "file",
MetaParams: metaparams,
},
Path: "/tmp/mgmt/sub1",
State: "present",
}
g.AddVertex(f1)
n1 := &resources.NoopRes{
BaseRes: resources.BaseRes{
Name: "noop1",
Kind: "noop",
MetaParams: metaparams,
},
}
g.AddVertex(n1)
return g, nil
}
// Graph returns a current Graph.
func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, err
}
metaparams := resources.DefaultMetaParams
content := "I created a subgraph!\n"
f0 := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "README",
MetaParams: metaparams,
},
Path: "/tmp/mgmt/README",
Content: &content,
State: "present",
}
g.AddVertex(f0)
subGraph, err := obj.subGraph()
if err != nil {
return nil, errwrap.Wrapf(err, "running subGraph() failed")
}
edgeGenFn := func(v1, v2 pgraph.Vertex) pgraph.Edge {
edge := &resources.Edge{
Name: fmt.Sprintf("edge: %s->%s", v1, v2),
}
// if we want to do something specific based on input
_, v2IsFile := v2.(*resources.FileRes)
if v1 == f0 && v2IsFile {
edge.Notify = true
}
return edge
}
g.AddEdgeVertexGraph(f0, subGraph, edgeGenFn)
//g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.World, obj.data.Noop)
return g, nil
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
return
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
ticker := make(<-chan time.Time)
if obj.data.NoStreamWatch || obj.Interval <= 0 {
ticker = nil
} else {
// arbitrarily change graph every interval seconds
t := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer t.Stop()
ticker = t.C
}
for {
select {
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case <-ticker:
// pass
case <-obj.closeChan:
return
}
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
return
}
}
}()
return ch
}
// Close shuts down the MyGAPI.
func (obj *MyGAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
close(obj.closeChan)
obj.wg.Wait()
obj.initialized = false // closed = true
return nil
}
// Run runs an embedded mgmt server.
func Run() error {
obj := &mgmt.Main{}
obj.Program = "libmgmt" // TODO: set on compilation
obj.Version = "0.0.1" // TODO: set on compilation
obj.TmpPrefix = true // disable for easy debugging
//prefix := "/tmp/testprefix/"
//obj.Prefix = &p // enable for easy debugging
obj.IdealClusterSize = -1
obj.ConvergedTimeout = -1
obj.Noop = false // FIXME: careful!
obj.GAPI = &MyGAPI{ // graph API
Name: "libmgmt", // TODO: set on compilation
Interval: 60 * 10, // arbitrarily change graph every ten minutes
}
if err := obj.Init(); err != nil {
return err
}
// install the exit signal handler
exit := make(chan struct{})
defer close(exit)
go func() {
signals := make(chan os.Signal, 1)
signal.Notify(signals, os.Interrupt) // catch ^C
//signal.Notify(signals, os.Kill) // catch signals
signal.Notify(signals, syscall.SIGTERM)
select {
case sig := <-signals: // any signal will do
if sig == os.Interrupt {
log.Println("Interrupted by ^C")
obj.Exit(nil)
return
}
log.Println("Interrupted by signal")
obj.Exit(fmt.Errorf("killed by %v", sig))
return
case <-exit:
return
}
}()
return obj.Run()
}
func main() {
log.Printf("Hello!")
if err := Run(); err != nil {
fmt.Println(err)
os.Exit(1)
return
}
log.Printf("Goodbye!")
}


@@ -0,0 +1,243 @@
// libmgmt example of graph resource
package main
import (
"fmt"
"log"
"os"
"os/signal"
"sync"
"syscall"
"time"
"github.com/purpleidea/mgmt/gapi"
mgmt "github.com/purpleidea/mgmt/lib"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
)
// MyGAPI implements the main GAPI interface.
type MyGAPI struct {
Name string // graph name
Interval uint // refresh interval, 0 to never refresh
data gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}
// NewMyGAPI creates a new MyGAPI struct and calls Init().
func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
obj := &MyGAPI{
Name: name,
Interval: interval,
}
return obj, obj.Init(data)
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
return fmt.Errorf("already initialized")
}
if obj.Name == "" {
return fmt.Errorf("the graph name must be specified")
}
obj.data = data // store for later
obj.closeChan = make(chan struct{})
obj.initialized = true
return nil
}
// Graph returns a current Graph.
func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, err
}
metaparams := resources.DefaultMetaParams
content := "I created a subgraph!\n"
f0 := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "README",
Kind: "file",
MetaParams: metaparams,
},
Path: "/tmp/mgmt/README",
Content: &content,
State: "present",
}
g.AddVertex(f0)
// create a subgraph to add *into* a graph resource
subGraph, err := pgraph.NewGraph(fmt.Sprintf("%s->subgraph", obj.Name))
if err != nil {
return nil, err
}
// add elements into the sub graph
f1 := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "file1",
Kind: "file",
MetaParams: metaparams,
},
Path: "/tmp/mgmt/sub1",
State: "present",
}
subGraph.AddVertex(f1)
n1 := &resources.NoopRes{
BaseRes: resources.BaseRes{
Name: "noop1",
Kind: "noop",
MetaParams: metaparams,
},
}
subGraph.AddVertex(n1)
e0 := &resources.Edge{Name: "e0"}
e0.Notify = true // send a notification from v0 to v1
subGraph.AddEdge(f1, n1, e0)
// create the actual resource to hold the sub graph
subGraphRes0 := &resources.GraphRes{ // TODO: should we name this SubGraphRes ?
BaseRes: resources.BaseRes{
Name: "subgraph1",
Kind: "graph",
MetaParams: metaparams,
},
Graph: subGraph,
}
g.AddVertex(subGraphRes0) // add it to the main graph
//g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.World, obj.data.Noop)
return g, nil
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
return
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
ticker := make(<-chan time.Time)
if obj.data.NoStreamWatch || obj.Interval <= 0 {
ticker = nil
} else {
// arbitrarily change graph every interval seconds
t := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer t.Stop()
ticker = t.C
}
for {
select {
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case <-ticker:
// pass
case <-obj.closeChan:
return
}
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
return
}
}
}()
return ch
}
// Close shuts down the MyGAPI.
func (obj *MyGAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
close(obj.closeChan)
obj.wg.Wait()
obj.initialized = false // closed = true
return nil
}
// Run runs an embedded mgmt server.
func Run() error {
obj := &mgmt.Main{}
obj.Program = "libmgmt" // TODO: set on compilation
obj.Version = "0.0.1" // TODO: set on compilation
obj.TmpPrefix = true // disable for easy debugging
//prefix := "/tmp/testprefix/"
//obj.Prefix = &p // enable for easy debugging
obj.IdealClusterSize = -1
obj.ConvergedTimeout = -1
obj.Noop = false // FIXME: careful!
obj.GAPI = &MyGAPI{ // graph API
Name: "libmgmt", // TODO: set on compilation
Interval: 60 * 10, // arbitrarily change graph every ten minutes
}
if err := obj.Init(); err != nil {
return err
}
// install the exit signal handler
exit := make(chan struct{})
defer close(exit)
go func() {
signals := make(chan os.Signal, 1)
signal.Notify(signals, os.Interrupt) // catch ^C
//signal.Notify(signals, os.Kill) // catch signals
signal.Notify(signals, syscall.SIGTERM)
select {
case sig := <-signals: // any signal will do
if sig == os.Interrupt {
log.Println("Interrupted by ^C")
obj.Exit(nil)
return
}
log.Println("Interrupted by signal")
obj.Exit(fmt.Errorf("killed by %v", sig))
return
case <-exit:
return
}
}()
return obj.Run()
}
func main() {
log.Printf("Hello!")
if err := Run(); err != nil {
fmt.Println(err)
os.Exit(1)
return
}
log.Printf("Goodbye!")
}


@@ -57,11 +57,13 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
n1, err := resources.NewNoopRes("noop1")
n1, err := resources.NewNamedResource("noop", "noop1")
if err != nil {
return nil, fmt.Errorf("can't create resource: %v", err)
return nil, err
}
// NOTE: This is considered the legacy method to build graphs. Avoid
// importing the legacy `yamlgraph` lib if possible for custom graphs.
// we can still build a graph via the yaml method
gc := &yamlgraph.GraphConfig{
Graph: obj.Name,
@@ -70,7 +72,7 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
Exec: []*resources.ExecRes{},
File: []*resources.FileRes{},
Msg: []*resources.MsgRes{},
Noop: []*resources.NoopRes{n1},
Noop: []*resources.NoopRes{n1.(*resources.NoopRes)},
Pkg: []*resources.PkgRes{},
Svc: []*resources.SvcRes{},
Timer: []*resources.TimerRes{},
@@ -86,32 +88,45 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan error {
if obj.data.NoWatch || obj.Interval <= 0 {
return nil
}
ch := make(chan error)
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
ch <- fmt.Errorf("libmgmt: MyGAPI is not initialized")
return
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
ticker := make(<-chan time.Time)
if obj.data.NoStreamWatch || obj.Interval <= 0 {
ticker = nil
} else {
// arbitrarily change graph every interval seconds
ticker := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer ticker.Stop()
t := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer t.Stop()
ticker = t.C
}
for {
select {
case <-ticker.C:
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- nil: // trigger a run
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case <-ticker:
// pass
case <-obj.closeChan:
return
}
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
return
}
@@ -175,10 +190,7 @@ func Run() error {
}
}()
if err := obj.Run(); err != nil {
return err
}
return nil
return obj.Run()
}
func main() {


@@ -59,19 +59,21 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
g := pgraph.NewGraph(obj.Name)
var vertex *pgraph.Vertex
for i := uint(0); i < obj.Count; i++ {
n, err := resources.NewNoopRes(fmt.Sprintf("noop%d", i))
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, fmt.Errorf("can't create resource: %v", err)
return nil, err
}
v := pgraph.NewVertex(n)
g.AddVertex(v)
var vertex pgraph.Vertex
for i := uint(0); i < obj.Count; i++ {
n, err := resources.NewNamedResource("noop", fmt.Sprintf("noop%d", i))
if err != nil {
return nil, err
}
g.AddVertex(n)
if i > 0 {
g.AddEdge(vertex, v, pgraph.NewEdge(fmt.Sprintf("e%d", i)))
g.AddEdge(vertex, n, &resources.Edge{Name: fmt.Sprintf("e%d", i)})
}
vertex = v // save
vertex = n // save
}
//g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.World, obj.data.Noop)
@@ -79,32 +81,45 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan error {
if obj.data.NoWatch || obj.Interval <= 0 {
return nil
}
ch := make(chan error)
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
ch <- fmt.Errorf("libmgmt: MyGAPI is not initialized")
return
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
ticker := make(<-chan time.Time)
if obj.data.NoStreamWatch || obj.Interval <= 0 {
ticker = nil
} else {
// arbitrarily change graph every interval seconds
ticker := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer ticker.Stop()
t := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer t.Stop()
ticker = t.C
}
for {
select {
case <-ticker.C:
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- nil: // trigger a run
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case <-ticker:
// pass
case <-obj.closeChan:
return
}
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
return
}
@@ -169,10 +184,7 @@ func Run(count uint) error {
}
}()
if err := obj.Run(); err != nil {
return err
}
return nil
return obj.Run()
}
func main() {


@@ -14,8 +14,6 @@ import (
mgmt "github.com/purpleidea/mgmt/lib"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
"golang.org/x/time/rate"
)
// MyGAPI implements the main GAPI interface.
@@ -58,18 +56,18 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
g := pgraph.NewGraph(obj.Name)
// FIXME: these are being specified temporarily until it's the default!
metaparams := resources.MetaParams{
Limit: rate.Inf,
Burst: 0,
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, err
}
metaparams := resources.DefaultMetaParams
content := "Delete me to trigger a notification!\n"
f0 := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "README",
Kind: "file",
MetaParams: metaparams,
},
Path: "/tmp/mgmt/README",
@@ -77,23 +75,23 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
State: "present",
}
v0 := pgraph.NewVertex(f0)
g.AddVertex(v0)
g.AddVertex(f0)
p1 := &resources.PasswordRes{
BaseRes: resources.BaseRes{
Name: "password1",
Kind: "password",
MetaParams: metaparams,
},
Length: 8, // generated string will have this many characters
Saved: true, // this causes passwords to be stored in plain text!
}
v1 := pgraph.NewVertex(p1)
g.AddVertex(v1)
g.AddVertex(p1)
f1 := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "file1",
Kind: "file",
MetaParams: metaparams,
// send->recv!
Recv: map[string]*resources.Send{
@@ -105,60 +103,72 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
State: "present",
}
v2 := pgraph.NewVertex(f1)
g.AddVertex(v2)
g.AddVertex(f1)
n1 := &resources.NoopRes{
BaseRes: resources.BaseRes{
Name: "noop1",
Kind: "noop",
MetaParams: metaparams,
},
}
v3 := pgraph.NewVertex(n1)
g.AddVertex(v3)
g.AddVertex(n1)
e0 := pgraph.NewEdge("e0")
e0.Notify = true // send a notification from v0 to v1
g.AddEdge(v0, v1, e0)
e0 := &resources.Edge{Name: "e0"}
e0.Notify = true // send a notification from f0 to p1
g.AddEdge(f0, p1, e0)
g.AddEdge(v1, v2, pgraph.NewEdge("e1"))
g.AddEdge(p1, f1, &resources.Edge{Name: "e1"})
e2 := pgraph.NewEdge("e2")
e2.Notify = true // send a notification from v2 to v3
g.AddEdge(v2, v3, e2)
e2 := &resources.Edge{Name: "e2"}
e2.Notify = true // send a notification from f1 to n1
g.AddEdge(f1, n1, e2)
//g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.World, obj.data.Noop)
return g, nil
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan error {
if obj.data.NoWatch || obj.Interval <= 0 {
return nil
}
ch := make(chan error)
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
ch <- fmt.Errorf("libmgmt: MyGAPI is not initialized")
return
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
ticker := make(<-chan time.Time)
if obj.data.NoStreamWatch || obj.Interval <= 0 {
ticker = nil
} else {
// arbitrarily change graph every interval seconds
ticker := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer ticker.Stop()
t := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer t.Stop()
ticker = t.C
}
for {
select {
case <-ticker.C:
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- nil: // trigger a run
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case <-ticker:
// pass
case <-obj.closeChan:
return
}
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
return
}
@@ -224,10 +234,7 @@ func Run() error {
}
}()
if err := obj.Run(); err != nil {
return err
}
return nil
return obj.Run()
}
func main() {


@@ -0,0 +1,52 @@
// This is an example longpoll client. The connection to the corresponding
// server initiates a request on a "Watch". It then waits until a redirect is
// received from the server which indicates that the watch is ready. To signal
// that an event on this watch has occurred, the server sends a final message.
package main
import (
"bytes"
"fmt"
"io/ioutil"
"log"
"math/rand"
"net/http"
"time"
)
const (
timeout = 15
)
func main() {
log.Printf("Starting...")
checkRedirectFunc := func(req *http.Request, via []*http.Request) error {
log.Printf("Watch is ready!")
return nil
}
client := &http.Client{
Timeout: time.Duration(timeout) * time.Second,
CheckRedirect: checkRedirectFunc,
}
id := rand.Intn(1<<31 - 1) // Go's ^ is XOR (not exponentiation), so use a shift to get a large bound
body := bytes.NewBufferString("hello")
url := fmt.Sprintf("http://127.0.0.1:12345/watch?id=%d", id)
req, err := http.NewRequest("GET", url, body)
if err != nil {
log.Printf("err: %+v", err)
return
}
result, err := client.Do(req)
if err != nil {
log.Printf("err: %+v", err)
return
}
log.Printf("Event received: %+v", result)
s, err := ioutil.ReadAll(result.Body) // TODO: apparently we can stream
result.Body.Close()
log.Printf("Response: %+v", string(s))
}


@@ -0,0 +1,56 @@
// This is an example longpoll server. On client connection it starts a "Watch",
// and notifies the client with a redirect when that watch is ready. This is
// important to avoid a possible race between when the client believes the watch
// is actually ready, and when the server actually is watching.
package main
import (
"fmt"
"io"
"log"
"math/rand"
"net/http"
"time"
)
// you can use `wget http://127.0.0.1:12345/hello -O /dev/null`
// or `go run client.go`
const (
addr = ":12345"
)
// WatchStart kicks off the initial watch and then redirects the client to
// notify them that we're ready. The watch operation here is simulated.
func WatchStart(w http.ResponseWriter, req *http.Request) {
log.Printf("Start received...")
time.Sleep(time.Duration(5) * time.Second) // 5 seconds to get ready and start *our* watch ;)
//started := time.Now().UnixNano() // time since watch is "started"
log.Printf("URL: %+v", req.URL)
token := fmt.Sprintf("%d", rand.Intn(1<<31-1)) // Go's ^ is XOR, so use a shift to get a large bound
http.Redirect(w, req, fmt.Sprintf("/ready?token=%s", token), http.StatusSeeOther) // TODO: which code should we use ?
log.Printf("Redirect sent!")
}
// WatchReady receives the client connection when it has been notified that the
// watch has started, and it returns to signal that an event on the watch
// occurred. The event operation here is simulated.
func WatchReady(w http.ResponseWriter, req *http.Request) {
log.Printf("Ready received")
log.Printf("URL: %+v", req.URL)
//time.Sleep(time.Duration(10) * time.Second)
time.Sleep(time.Duration(rand.Intn(10)) * time.Second) // wait until an "event" happens
io.WriteString(w, "Event happened!\n")
log.Printf("Event sent")
}
func main() {
log.Printf("Starting...")
//rand.Seed(time.Now().UTC().UnixNano())
http.HandleFunc("/watch", WatchStart)
http.HandleFunc("/ready", WatchReady)
log.Printf("Listening on %s", addr)
log.Fatal(http.ListenAndServe(addr, nil))
}

examples/noop0.yaml Normal file

@@ -0,0 +1,7 @@
---
graph: mygraph
comment: simple noop example
resources:
noop:
- name: noop0
edges: []


@@ -2,6 +2,6 @@
graph: mygraph
resources:
nspawn:
- name: mgmt-nspawn1
- name: nspawn1
state: running
edges: []


@@ -1,7 +0,0 @@
---
graph: mygraph
resources:
nspawn:
- name: mgmt-nspawn2
state: stopped
edges: []

examples/svc2.yaml Normal file

@@ -0,0 +1,8 @@
---
graph: mygraph
resources:
svc:
- name: purpleidea
state: running
session: true
edges: []

examples/user1.yaml Normal file

@@ -0,0 +1,9 @@
---
graph: mygraph
resources:
user:
- name: testuser
uid: 1002
gid: 100
state: exists
edges: []


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package gapi defines the interface that graph API generators must meet.
@@ -28,14 +28,28 @@ type Data struct {
Hostname string // uuid for the host, required for GAPI
World resources.World
Noop bool
NoWatch bool
NoConfigWatch bool
NoStreamWatch bool
// NOTE: we can add more fields here if needed by GAPI endpoints
}
// Next describes the particular response the GAPI implementer wishes to emit.
type Next struct {
// FIXME: the Fast pause parameter should eventually get replaced with a
// "SwitchMethod" parameter or similar that instead lets the implementer
// choose between fast pause, slow pause, and interrupt. Interrupt could
// be a future extension to the Resource API that lets an Interrupt() be
// called if we want to exit immediately from the CheckApply part of the
// resource for some reason. For now we'll keep this simple with a bool.
Fast bool // run a fast pause to switch?
Exit bool // should we cause the program to exit? (specify err or not)
Err error // if something goes wrong (use with or without exit!)
}
// GAPI is a Graph API that represents incoming graphs and change streams.
type GAPI interface {
Init(Data) error // initializes the GAPI and passes in useful data
Graph() (*pgraph.Graph, error) // returns the most recent pgraph
Next() chan error // returns a stream of switch events
Next() chan Next // returns a stream of switch events
Close() error // shutdown the GAPI
}
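For context, a hedged sketch of how a driver loop might consume this interface; the run function and its error handling are simplified assumptions rather than the engine's actual main loop.

// run is a sketch only: g is any GAPI implementation, data a filled-in Data.
func run(g GAPI, data Data) error {
	if err := g.Init(data); err != nil {
		return err
	}
	defer g.Close()
	for next := range g.Next() { // stream of switch events
		if next.Err != nil {
			if next.Exit {
				return next.Err // the implementer asked us to shut down
			}
			continue // non-fatal: wait for the next event
		}
		graph, err := g.Graph() // fetch the most recent pgraph
		if err != nil {
			return err
		}
		_ = graph // hand the graph (and next.Fast) to the engine here
	}
	return nil
}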

hcl/gapi.go Normal file

@@ -0,0 +1,155 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package hcl
import (
"fmt"
"log"
"sync"
"github.com/purpleidea/mgmt/gapi"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/recwatch"
)
// GAPI ...
type GAPI struct {
File *string
initialized bool
data gapi.Data
wg sync.WaitGroup
closeChan chan struct{}
configWatcher *recwatch.ConfigWatcher
}
// NewGAPI ...
func NewGAPI(data gapi.Data, file *string) (*GAPI, error) {
if file == nil {
return nil, fmt.Errorf("empty file given")
}
obj := &GAPI{
File: file,
}
return obj, obj.Init(data)
}
// Init ...
func (obj *GAPI) Init(d gapi.Data) error {
if obj.initialized {
return fmt.Errorf("already initialized")
}
if obj.File == nil {
return fmt.Errorf("file cannot be nil")
}
obj.data = d
obj.closeChan = make(chan struct{})
obj.initialized = true
obj.configWatcher = recwatch.NewConfigWatcher()
return nil
}
// Graph ...
func (obj *GAPI) Graph() (*pgraph.Graph, error) {
config, err := loadHcl(obj.File)
if err != nil {
return nil, fmt.Errorf("unable to parse graph: %s", err)
}
return graphFromConfig(config, obj.data)
}
// Next ...
func (obj *GAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch)
if !obj.initialized {
next := gapi.Next{
Err: fmt.Errorf("hcl: GAPI is not initialized"),
Exit: true,
}
ch <- next
return
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
watchChan, configChan := make(chan error), make(chan error)
if obj.data.NoConfigWatch {
configChan = nil
} else {
configChan = obj.configWatcher.ConfigWatch(*obj.File) // simple
}
if obj.data.NoStreamWatch {
watchChan = nil
} else {
watchChan = obj.data.World.ResWatch()
}
for {
var err error
var ok bool
select {
case <-startChan:
startChan = nil
case err, ok = <-watchChan:
case err, ok = <-configChan:
if !ok {
return
}
case <-obj.closeChan:
return
}
log.Printf("hcl: generating new graph")
next := gapi.Next{
Err: err,
}
select {
case ch <- next:
case <-obj.closeChan:
return
}
}
}()
return ch
}
// Close ...
func (obj *GAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("hcl: GAPI is not initialized")
}
obj.configWatcher.Close()
close(obj.closeChan)
obj.wg.Wait()
obj.initialized = false
return nil
}

hcl/parse.go Normal file

@@ -0,0 +1,387 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package hcl
import (
"fmt"
"io/ioutil"
"log"
"strings"
"github.com/hashicorp/hcl"
"github.com/hashicorp/hcl/hcl/ast"
"github.com/hashicorp/hil"
"github.com/purpleidea/mgmt/gapi"
hv "github.com/purpleidea/mgmt/hil"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
)
type collectorResConfig struct {
Kind string
Pattern string
}
// Config defines the structure of the hcl config.
type Config struct {
Resources []*Resource
Edges []*Edge
Collector []collectorResConfig
}
// vertex is the data structure of a vertex.
type vertex struct {
Kind string `hcl:"kind"`
Name string `hcl:"name"`
}
// Edge defines an edge in hcl.
type Edge struct {
Name string
From vertex
To vertex
Notify bool
}
// Resources define the state for resources.
type Resources struct {
Resources []resources.Res
}
// Resource ...
type Resource struct {
Name string
Kind string
resource resources.Res
Meta resources.MetaParams
deps []*Edge
rcv map[string]*hv.ResourceVariable
}
type key struct {
kind, name string
}
func graphFromConfig(c *Config, data gapi.Data) (*pgraph.Graph, error) {
var graph *pgraph.Graph
var err error
graph, err = pgraph.NewGraph("Graph")
if err != nil {
return nil, fmt.Errorf("unable to create graph from config: %s", err)
}
lookup := make(map[key]pgraph.Vertex)
var keep []pgraph.Vertex
var resourceList []resources.Res
log.Printf("hcl: parsing %d resources", len(c.Resources))
for _, r := range c.Resources {
res := r.resource
kind := r.resource.GetKind()
log.Printf("hcl: resource \"%s\" \"%s\"", kind, r.Name)
if !strings.HasPrefix(res.GetName(), "@@") {
fn := func(v pgraph.Vertex) (bool, error) {
return resources.VtoR(v).Compare(res), nil
}
v, err := graph.VertexMatchFn(fn)
if err != nil {
return nil, fmt.Errorf("could not match vertex: %s", err)
}
if v == nil {
v = res
graph.AddVertex(v)
}
lookup[key{kind, res.GetName()}] = v
keep = append(keep, v)
} else if !data.Noop {
res.SetName(res.GetName()[2:])
res.SetKind(kind)
resourceList = append(resourceList, res)
}
}
// store in backend (usually etcd)
if err := data.World.ResExport(resourceList); err != nil {
return nil, fmt.Errorf("Config: Could not export resources: %v", err)
}
// lookup from backend (usually etcd)
var hostnameFilter []string // empty to get from everyone
kindFilter := []string{}
for _, t := range c.Collector {
kind := strings.ToLower(t.Kind)
kindFilter = append(kindFilter, kind)
}
// do all the graph look ups in one single step, so that if the backend
// database changes, we don't have a partial state of affairs...
if len(kindFilter) > 0 { // if kindFilter is empty, don't need to do lookups!
var err error
resourceList, err = data.World.ResCollect(hostnameFilter, kindFilter)
if err != nil {
return nil, fmt.Errorf("Config: Could not collect resources: %v", err)
}
}
for _, res := range resourceList {
matched := false
// see if we find a collect pattern that matches
for _, t := range c.Collector {
kind := strings.ToLower(t.Kind)
// use t.Kind and optionally t.Pattern to collect from storage
log.Printf("Collect: %v; Pattern: %v", kind, t.Pattern)
// XXX: expand to more complex pattern matching here...
if res.GetKind() != kind {
continue
}
if matched {
// we've already matched this resource, should we match again?
log.Printf("Config: Warning: Matching %s again!", res)
}
matched = true
// collect resources but add the noop metaparam
//if noop { // now done in mgmtmain
// res.Meta().Noop = noop
//}
if t.Pattern != "" { // XXX: simplistic for now
res.CollectPattern(t.Pattern) // res.Dirname = t.Pattern
}
log.Printf("Collect: %s: collected!", res)
// XXX: similar to other resource add code:
// if _, exists := lookup[kind]; !exists {
// lookup[kind] = make(map[string]pgraph.Vertex)
// }
fn := func(v pgraph.Vertex) (bool, error) {
return resources.VtoR(v).Compare(res), nil
}
v, err := graph.VertexMatchFn(fn)
if err != nil {
return nil, fmt.Errorf("could not VertexMatchFn() resource: %s", err)
}
if v == nil { // no match found
v = res // a standalone res can be a vertex
graph.AddVertex(v) // call standalone in case not part of an edge
}
lookup[key{kind, res.GetName()}] = v // used for constructing edges
keep = append(keep, v) // append
//break // let's see if another resource even matches
}
}
for _, r := range c.Resources {
for _, e := range r.deps {
if _, ok := lookup[key{strings.ToLower(e.From.Kind), e.From.Name}]; !ok {
return nil, fmt.Errorf("can't find 'from' name")
}
if _, ok := lookup[key{strings.ToLower(e.To.Kind), e.To.Name}]; !ok {
return nil, fmt.Errorf("can't find 'to' name")
}
from := lookup[key{strings.ToLower(e.From.Kind), e.From.Name}]
to := lookup[key{strings.ToLower(e.To.Kind), e.To.Name}]
edge := &resources.Edge{
Name: e.Name,
Notify: e.Notify,
}
graph.AddEdge(from, to, edge)
}
recv := make(map[string]*resources.Send)
// build Rcv's from resource variables
for k, v := range r.rcv {
send, ok := lookup[key{strings.ToLower(v.Kind), v.Name}]
if !ok {
return nil, fmt.Errorf("resource not found")
}
recv[strings.ToUpper(string(k[0]))+k[1:]] = &resources.Send{
Res: resources.VtoR(send),
Key: v.Field,
}
to := lookup[key{strings.ToLower(r.Kind), r.Name}]
edge := &resources.Edge{
Name: v.Name,
Notify: true,
}
graph.AddEdge(send, to, edge)
}
r.resource.SetRecv(recv)
}
return graph, nil
}
func loadHcl(f *string) (*Config, error) {
if f == nil {
return nil, fmt.Errorf("empty file given")
}
data, err := ioutil.ReadFile(*f)
if err != nil {
return nil, fmt.Errorf("unable to read file: %v", err)
}
file, err := hcl.ParseBytes(data)
if err != nil {
return nil, fmt.Errorf("unable to parse file: %s", err)
}
config := new(Config)
list, ok := file.Node.(*ast.ObjectList)
if !ok {
return nil, fmt.Errorf("unable to parse file: file does not contain root node object")
}
if resources := list.Filter("resource"); len(resources.Items) > 0 {
var err error
config.Resources, err = loadResourcesHcl(resources)
if err != nil {
return nil, fmt.Errorf("unable to parse: %s", err)
}
}
return config, nil
}
func loadResourcesHcl(list *ast.ObjectList) ([]*Resource, error) {
list = list.Children()
if len(list.Items) == 0 {
return nil, nil
}
var result []*Resource
for _, item := range list.Items {
kind := item.Keys[0].Token.Value().(string)
name := item.Keys[1].Token.Value().(string)
var listVal *ast.ObjectList
if ot, ok := item.Val.(*ast.ObjectType); ok {
listVal = ot.List
} else {
return nil, fmt.Errorf("module '%s': should be an object", name)
}
var params = resources.DefaultMetaParams
if o := listVal.Filter("meta"); len(o.Items) > 0 {
err := hcl.DecodeObject(&params, o)
if err != nil {
return nil, fmt.Errorf(
"Error parsing meta for %s: %s",
name,
err)
}
}
var deps []string
if edges := listVal.Filter("depends_on"); len(edges.Items) > 0 {
err := hcl.DecodeObject(&deps, edges.Items[0].Val)
if err != nil {
return nil, fmt.Errorf("unable to parse: %s", err)
}
}
var edges []*Edge
for _, dep := range deps {
vertices := strings.Split(dep, ".")
edges = append(edges, &Edge{
To: vertex{
Kind: kind,
Name: name,
},
From: vertex{
Kind: vertices[0],
Name: vertices[1],
},
})
}
var config map[string]interface{}
if err := hcl.DecodeObject(&config, item.Val); err != nil {
log.Printf("hcl: unable to decode body: %v", err)
return nil, fmt.Errorf(
"Error reading config for %s: %s",
name,
err)
}
delete(config, "meta")
delete(config, "depends_on")
rcv := make(map[string]*hv.ResourceVariable)
// parse strings for hil
for k, v := range config {
n, err := hil.Parse(v.(string))
if err != nil {
return nil, fmt.Errorf("unable to parse fields: %v", err)
}
variables, err := hv.ParseVariables(n)
if err != nil {
return nil, fmt.Errorf("unable to parse variables: %v", err)
}
for _, v := range variables {
val, ok := v.(*hv.ResourceVariable)
if !ok {
continue
}
rcv[k] = val
}
}
res, err := resources.NewNamedResource(kind, name)
if err != nil {
log.Printf("hcl: unable to parse resource: %v", err)
return nil, err
}
if err := hcl.DecodeObject(res, item.Val); err != nil {
log.Printf("hcl: unable to decode body: %v", err)
return nil, fmt.Errorf(
"Error reading config for %s: %s",
name,
err)
}
meta := res.Meta()
*meta = params
result = append(result, &Resource{
Name: name,
Kind: kind,
resource: res,
deps: edges,
rcv: rcv,
})
}
return result, nil
}
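For orientation, here is a minimal sketch of the sort of input the loader above accepts, exercised via loadHcl from within the same package. The resource kind, names and temp file are hypothetical; noop resources are used so the sketch doesn't depend on any particular resource's fields, and an interpolated field (e.g. ${file.file1.Content}) would flow through the hil parsing loop above in the same way.
// sketch_test.go (illustrative only; assumed to live in the same package as loadHcl)
package hcl
import (
	"io/ioutil"
	"os"
	"testing"
)
func TestLoadHclSketch(t *testing.T) {
	input := `
resource "noop" "noop1" {
}
resource "noop" "noop2" {
  depends_on = ["noop.noop1"]
}
`
	tmp, err := ioutil.TempFile("", "sketch")
	if err != nil {
		t.Fatal(err)
	}
	defer os.Remove(tmp.Name())
	if _, err := tmp.WriteString(input); err != nil {
		t.Fatal(err)
	}
	tmp.Close()
	name := tmp.Name()
	config, err := loadHcl(&name) // parse the file into a *Config
	if err != nil {
		t.Fatalf("loadHcl failed: %v", err)
	}
	if n := len(config.Resources); n != 2 {
		t.Fatalf("expected 2 resources, got %d", n)
	}
	// noop2 should carry one dep edge built from its depends_on entry:
	// noop.noop1 -> noop.noop2
}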

hil/interpolate.go (new file, 89 lines)

@@ -0,0 +1,89 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package hil
import (
"fmt"
"strings"
"github.com/hashicorp/hil/ast"
)
// Variable defines an interpolated variable.
type Variable interface {
Key() string
}
// ResourceVariable defines a variable type used to reference fields of a resource
// e.g. ${file.file1.Content}
type ResourceVariable struct {
Kind, Name, Field string
}
// Key returns a string representation of the variable key.
func (r *ResourceVariable) Key() string {
return fmt.Sprintf("%s.%s.%s", r.Kind, r.Name, r.Field)
}
// NewInterpolatedVariable takes a variable key and returns the interpolated variable
// of the required type.
func NewInterpolatedVariable(k string) (Variable, error) {
// for now resource variables are the only thing.
parts := strings.SplitN(k, ".", 3)
return &ResourceVariable{
Kind: parts[0],
Name: parts[1],
Field: parts[2],
}, nil
}
// ParseVariables will traverse a HIL tree looking for variables and returns a
// list of them.
func ParseVariables(tree ast.Node) ([]Variable, error) {
var result []Variable
var finalErr error
visitor := func(n ast.Node) ast.Node {
if finalErr != nil {
return n
}
switch nt := n.(type) {
case *ast.VariableAccess:
v, err := NewInterpolatedVariable(nt.Name)
if err != nil {
finalErr = err
return n
}
result = append(result, v)
default:
return n
}
return n
}
tree.Accept(visitor)
if finalErr != nil {
return nil, finalErr
}
return result, nil
}
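As a usage sketch (assuming this package lives at github.com/purpleidea/mgmt/hil, imported here as hv to avoid clashing with the upstream hashicorp/hil parser), extracting the resource variable from an interpolation string looks roughly like this:
package main
import (
	"fmt"
	"log"
	"github.com/hashicorp/hil"
	hv "github.com/purpleidea/mgmt/hil"
)
func main() {
	node, err := hil.Parse("${file.file1.Content}") // build the HIL AST
	if err != nil {
		log.Fatal(err)
	}
	vars, err := hv.ParseVariables(node) // walk the AST for variable accesses
	if err != nil {
		log.Fatal(err)
	}
	for _, v := range vars {
		if rv, ok := v.(*hv.ResourceVariable); ok {
			fmt.Println(rv.Kind, rv.Name, rv.Field) // "file file1 Content"
		}
	}
}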


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package lib
@@ -24,8 +24,11 @@ import (
"os/signal"
"syscall"
"github.com/purpleidea/mgmt/bindata"
"github.com/purpleidea/mgmt/hcl"
"github.com/purpleidea/mgmt/puppet"
"github.com/purpleidea/mgmt/yamlgraph"
"github.com/purpleidea/mgmt/yamlgraph2"
"github.com/urfave/cli"
)
@@ -71,6 +74,14 @@ func run(c *cli.Context) error {
File: &y,
}
}
if y := c.String("yaml2"); c.IsSet("yaml2") {
if obj.GAPI != nil {
return fmt.Errorf("can't combine YAMLv2 GAPI with existing GAPI")
}
obj.GAPI = &yamlgraph2.GAPI{
File: &y,
}
}
if p := c.String("puppet"); c.IsSet("puppet") {
if obj.GAPI != nil {
return fmt.Errorf("can't combine puppet GAPI with existing GAPI")
@@ -80,9 +91,20 @@ func run(c *cli.Context) error {
PuppetConf: c.String("puppet-conf"),
}
}
if h := c.String("hcl"); c.IsSet("hcl") {
if obj.GAPI != nil {
return fmt.Errorf("can't combine hcl GAPI with existing GAPI")
}
obj.GAPI = &hcl.GAPI{
File: &h,
}
}
obj.Remotes = c.StringSlice("remote") // FIXME: GAPI-ify somehow?
obj.NoWatch = c.Bool("no-watch")
obj.NoConfigWatch = c.Bool("no-config-watch")
obj.NoStreamWatch = c.Bool("no-stream-watch")
obj.Noop = c.Bool("noop")
obj.Sema = c.Int("sema")
obj.Graphviz = c.String("graphviz")
@@ -93,6 +115,8 @@ func run(c *cli.Context) error {
obj.Seeds = c.StringSlice("seeds")
obj.ClientURLs = c.StringSlice("client-urls")
obj.ServerURLs = c.StringSlice("server-urls")
obj.AdvertiseClientURLs = c.StringSlice("advertise-client-urls")
obj.AdvertiseServerURLs = c.StringSlice("advertise-server-urls")
obj.IdealClusterSize = c.Int("ideal-cluster-size")
obj.NoServer = c.Bool("no-server")
@@ -165,7 +189,32 @@ func CLI(program, version string, flags Flags) error {
app.Metadata = map[string]interface{}{ // additional flags
"flags": flags,
}
//app.Action = ... // without a default action, help runs
// if no app.Command is specified
app.Action = func(c *cli.Context) error {
// print the license
if c.Bool("license") {
license, err := bindata.Asset("../COPYING") // use go-bindata to get the bytes
if err != nil {
return err
}
fmt.Printf("%s", license)
return nil
}
// print help if no flags are set
cli.ShowAppHelp(c)
return nil
}
// global flags
app.Flags = []cli.Flag{
cli.BoolFlag{
Name: "license",
Usage: "prints the software license",
},
}
app.Commands = []cli.Command{
{
@@ -205,6 +254,16 @@ func CLI(program, version string, flags Flags) error {
Value: "",
Usage: "yaml graph definition to run",
},
cli.StringFlag{
Name: "yaml2",
Value: "",
Usage: "yaml graph definition to run (parser v2)",
},
cli.StringFlag{
Name: "hcl",
Value: "",
Usage: "hcl graph definition to run",
},
cli.StringFlag{
Name: "puppet, p",
Value: "",
@@ -223,8 +282,17 @@ func CLI(program, version string, flags Flags) error {
cli.BoolFlag{
Name: "no-watch",
Usage: "do not update graph under any switch events",
},
cli.BoolFlag{
Name: "no-config-watch",
Usage: "do not update graph on config switch events",
},
cli.BoolFlag{
Name: "no-stream-watch",
Usage: "do not update graph on stream switch events",
},
cli.BoolFlag{
Name: "noop",
Usage: "globally force all resources into no-op mode",
@@ -278,6 +346,20 @@ func CLI(program, version string, flags Flags) error {
Usage: "list of URLs to listen on for server (peer) traffic",
EnvVar: "MGMT_SERVER_URLS",
},
// port 2379 and 4001 are common
cli.StringSliceFlag{
Name: "advertise-client-urls",
Value: &cli.StringSlice{},
Usage: "list of URLs to listen on for client traffic",
EnvVar: "MGMT_ADVERTISE_CLIENT_URLS",
},
// port 2380 and 7001 are common
cli.StringSliceFlag{
Name: "advertise-server-urls, advertise-peer-urls",
Value: &cli.StringSlice{},
Usage: "list of URLs to listen on for server (peer) traffic",
EnvVar: "MGMT_ADVERTISE_SERVER_URLS",
},
cli.IntFlag{
Name: "ideal-cluster-size",
Value: -1,


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package lib
@@ -65,7 +65,10 @@ type Main struct {
GAPI gapi.GAPI // graph API interface struct
Remotes []string // list of remote graph definitions to run
NoWatch bool // do not update graph on watched graph definition file changes
NoWatch bool // do not change graph under any circumstances
NoConfigWatch bool // do not update graph due to config changes
NoStreamWatch bool // do not update graph due to stream changes
Noop bool // globally force all resources into no-op mode
Sema int // add a semaphore with this lock count to each resource
Graphviz string // output file for graphviz data
@@ -76,6 +79,8 @@ type Main struct {
Seeds []string // default etc client endpoint
ClientURLs []string // list of URLs to listen on for client traffic
ServerURLs []string // list of URLs to listen on for server (peer) traffic
AdvertiseClientURLs []string // list of URLs to advertise for client traffic
AdvertiseServerURLs []string // list of URLs to advertise for server (peer) traffic
IdealClusterSize int // ideal number of server peers in cluster; only read by initial server
NoServer bool // do not let other servers peer with me
@@ -88,6 +93,8 @@ type Main struct {
seeds etcdtypes.URLs // processed seeds value
clientURLs etcdtypes.URLs // processed client urls value
serverURLs etcdtypes.URLs // processed server urls value
advertiseClientURLs etcdtypes.URLs // processed advertise client urls value
advertiseServerURLs etcdtypes.URLs // processed advertise server urls value
idealClusterSize uint16 // processed ideal cluster size value
NoPgp bool // disallow pgp functionality
@@ -112,6 +119,15 @@ func (obj *Main) Init() error {
return fmt.Errorf("choosing a prefix and the request for a tmp prefix is illogical")
}
// if we've turned off watching, then be explicit and disable them all!
// if all the watches are disabled, then it's equivalent to no watching
if obj.NoWatch {
obj.NoConfigWatch = true
obj.NoStreamWatch = true
} else if obj.NoConfigWatch && obj.NoStreamWatch {
obj.NoWatch = true
}
obj.idealClusterSize = uint16(obj.IdealClusterSize)
if obj.IdealClusterSize < 0 { // value is undefined, set to the default
obj.idealClusterSize = etcd.DefaultIdealClusterSize
@@ -161,6 +177,18 @@ func (obj *Main) Init() error {
if err != nil && len(obj.ServerURLs) > 0 {
return fmt.Errorf("the ServerURLs didn't parse correctly")
}
obj.advertiseClientURLs, err = etcdtypes.NewURLs(
util.FlattenListWithSplit(obj.AdvertiseClientURLs, []string{",", ";", " "}),
)
if err != nil && len(obj.AdvertiseClientURLs) > 0 {
return fmt.Errorf("the AdvertiseClientURLs didn't parse correctly")
}
obj.advertiseServerURLs, err = etcdtypes.NewURLs(
util.FlattenListWithSplit(obj.AdvertiseServerURLs, []string{",", ";", " "}),
)
if err != nil && len(obj.AdvertiseServerURLs) > 0 {
return fmt.Errorf("the AdvertiseServerURLs didn't parse correctly")
}
obj.exit = make(chan error)
return nil
@@ -241,6 +269,10 @@ func (obj *Main) Run() error {
if err := prom.Start(); err != nil {
return errwrap.Wrapf(err, "can't start initiate Prometheus instance")
}
if err := prom.InitKindMetrics(resources.RegisteredResourcesNames()); err != nil {
return errwrap.Wrapf(err, "can't initialize kind-specific prometheus metrics")
}
}
if !obj.NoPgp {
@@ -286,7 +318,11 @@ func (obj *Main) Run() error {
// TODO: Import admin key
}
var G, oldGraph *pgraph.Graph
oldGraph := &pgraph.Graph{}
graph := &resources.MGraph{}
// pass in the information we need
graph.Debug = obj.Flags.Debug
graph.Init()
// exit after `max-runtime` seconds for no reason at all...
if i := obj.MaxRuntime; i > 0 {
@@ -314,6 +350,8 @@ func (obj *Main) Run() error {
obj.seeds,
obj.clientURLs,
obj.serverURLs,
obj.advertiseClientURLs,
obj.advertiseServerURLs,
obj.NoServer,
obj.idealClusterSize,
etcd.Flags{
@@ -330,6 +368,16 @@ func (obj *Main) Run() error {
} else if err := EmbdEtcd.Startup(); err != nil { // startup (returns when etcd main loop is running)
obj.Exit(fmt.Errorf("Main: Etcd: Startup failed: %v", err))
}
// wait for etcd server to be ready before continuing...
select {
case <-EmbdEtcd.ServerReady():
log.Printf("Main: Etcd: Server: Ready!")
// pass
case <-time.After(((etcd.MaxStartServerTimeout * etcd.MaxStartServerRetries) + 1) * time.Second):
obj.Exit(fmt.Errorf("Main: Etcd: Startup timeout"))
}
convergerStateFn := func(b bool) error {
// exit if we are using the converged timeout and we are the
// root node. otherwise, if we are a child node in a remote
@@ -337,7 +385,7 @@ func (obj *Main) Run() error {
// state and wait for the parent to trigger the exit.
if t := obj.ConvergedTimeout; obj.Depth == 0 && t >= 0 {
if b {
log.Printf("Converged for %d seconds, exiting!", t)
log.Printf("Main: Converged for %d seconds, exiting!", t)
obj.Exit(nil) // trigger an exit!
}
return nil
@@ -355,43 +403,43 @@ func (obj *Main) Run() error {
EmbdEtcd: EmbdEtcd,
}
var gapiChan chan error // stream events are nil errors
graph.Data = &resources.ResData{
Hostname: hostname,
Converger: converger,
Prometheus: prom,
World: world,
Prefix: pgraphPrefix,
Debug: obj.Flags.Debug,
}
var gapiChan chan gapi.Next // stream events contain some instructions!
if obj.GAPI != nil {
data := gapi.Data{
Hostname: hostname,
World: world,
Noop: obj.Noop,
NoWatch: obj.NoWatch,
//NoWatch: obj.NoWatch,
NoConfigWatch: obj.NoConfigWatch,
NoStreamWatch: obj.NoStreamWatch,
}
if err := obj.GAPI.Init(data); err != nil {
obj.Exit(fmt.Errorf("Main: GAPI: Init failed: %v", err))
} else if !obj.NoWatch {
} else {
// this must generate at least one event for it to work
gapiChan = obj.GAPI.Next() // stream of graph switch events!
}
}
exitchan := make(chan struct{}) // exit on close
go func() {
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
log.Println("Etcd: Starting...")
etcdChan := etcd.WatchAll(EmbdEtcd)
first := true // first loop or not
for {
log.Println("Main: Waiting...")
// The GAPI should always kick off an event on Next() at
// startup when (and if) it indeed has a graph to share!
fastPause := false
select {
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case b := <-etcdChan:
if !b { // ignore the message
continue
}
// everything else passes through to cause a compile!
case err, ok := <-gapiChan:
case next, ok := <-gapiChan:
if !ok { // channel closed
if obj.Flags.Debug {
log.Printf("Main: GAPI exited")
@@ -399,21 +447,29 @@ func (obj *Main) Run() error {
gapiChan = nil // disable it
continue
}
if err != nil {
obj.Exit(err) // trigger exit
// if we've been asked to exit...
if next.Exit {
obj.Exit(next.Err) // trigger exit
continue // wait for exitchan
}
if obj.NoWatch { // extra safety for bad GAPI's
log.Printf("Main: GAPI stream should be quiet with NoWatch!") // fix the GAPI!
continue // no stream events should be sent
// the gapi lets us send an error to the channel
// this means there was a failure, but not fatal
if err := next.Err; err != nil {
log.Printf("Main: Error with graph stream: %v", err)
continue // wait for another event
}
// everything else passes through to cause a compile!
fastPause = next.Fast // should we pause fast?
case <-exitchan:
return
}
if obj.GAPI == nil {
log.Printf("Config: GAPI is empty!")
log.Printf("Main: GAPI is empty!")
continue
}
@@ -421,34 +477,31 @@ func (obj *Main) Run() error {
// run graph vertex LOCK...
if !first { // TODO: we can flatten this check out I think
converger.Pause() // FIXME: add sync wait?
G.Pause() // sync
graph.Pause(fastPause) // sync
//G.UnGroup() // FIXME: implement me if needed!
//graph.UnGroup() // FIXME: implement me if needed!
}
// make the graph from yaml, lib, puppet->yaml, or dsl!
newGraph, err := obj.GAPI.Graph() // generate graph!
if err != nil {
log.Printf("Config: Error creating new graph: %v", err)
log.Printf("Main: Error creating new graph: %v", err)
// unpause!
if !first {
G.Start(first) // sync
converger.Start() // after G.Start()
graph.Start(first) // sync
converger.Start() // after Start()
}
continue
}
newGraph.Flags = pgraph.Flags{Debug: obj.Flags.Debug}
// pass in the information we need
newGraph.AssociateData(&resources.Data{
Hostname: hostname,
Converger: converger,
Prometheus: prom,
World: world,
Prefix: pgraphPrefix,
Debug: obj.Flags.Debug,
})
if obj.Flags.Debug {
log.Printf("Main: New Graph: %v", newGraph)
}
for _, m := range newGraph.GraphMetas() {
// this edits the paused vertices, but it is safe to do
// so even if we don't use this new graph, since those
// values should be the same for existing vertices...
for _, v := range newGraph.Vertices() {
m := resources.VtoR(v).Meta()
// apply the global noop parameter if requested
if obj.Noop {
m.Noop = obj.Noop
@@ -461,51 +514,90 @@ func (obj *Main) Run() error {
}
}
// FIXME: make sure we "UnGroup()" any semi-destructive
// changes to the resources so our efficient GraphSync
// will be able to re-use and cmp to the old graph.
// We don't have to "UnGroup()" to compare, since we
// save the old graph to use when we compare.
// TODO: Does this hurt performance or graph changes ?
log.Printf("Main: GraphSync...")
newFullGraph, err := newGraph.GraphSync(oldGraph)
if err != nil {
log.Printf("Config: Error running graph sync: %v", err)
vertexCmpFn := func(v1, v2 pgraph.Vertex) (bool, error) {
return resources.VtoR(v1).Compare(resources.VtoR(v2)), nil
}
vertexAddFn := func(v pgraph.Vertex) error {
err := resources.VtoR(v).Validate()
return errwrap.Wrapf(err, "could not Validate() resource")
}
vertexRemoveFn := func(v pgraph.Vertex) error {
// wait for exit before starting new graph!
resources.VtoR(v).Exit() // sync
return nil
}
edgeCmpFn := func(e1, e2 pgraph.Edge) (bool, error) {
edge1 := e1.(*resources.Edge) // panic if wrong
edge2 := e2.(*resources.Edge) // panic if wrong
return edge1.Compare(edge2), nil
}
// on success, this updates the receiver graph...
if err := oldGraph.GraphSync(newGraph, vertexCmpFn, vertexAddFn, vertexRemoveFn, edgeCmpFn); err != nil {
log.Printf("Main: Error running graph sync: %v", err)
// unpause!
if !first {
G.Start(first) // sync
converger.Start() // after G.Start()
graph.Start(first) // sync
converger.Start() // after Start()
}
continue
}
oldGraph = newFullGraph // save old graph
G = oldGraph.Copy() // copy to active graph
G.AutoEdges() // add autoedges; modifies the graph
G.AutoGroup() // run autogroup; modifies the graph
//savedGraph := oldGraph.Copy() // save a copy for errors
// TODO: should we call each Res.Setup() here instead?
// add autoedges; modifies the graph only if no error
if err := resources.AutoEdges(oldGraph); err != nil {
log.Printf("Main: Error running auto edges: %v", err)
// unpause!
if !first {
graph.Start(first) // sync
converger.Start() // after Start()
}
continue
}
// at this point, any time we error after a destructive
// modification of the graph we need to restore the old
// graph that was previously running, eg:
//
// oldGraph = savedGraph.Copy()
//
// which we are (luckily) able to avoid testing for now
resources.AutoGroup(oldGraph, &resources.NonReachabilityGrouper{}) // run autogroup; modifies the graph
// TODO: do we want to do a transitive reduction?
// FIXME: run a type checker that verifies all the send->recv relationships
// Call this here because at this point the graph does not
// know anything about the prometheus instance.
graph.Update(oldGraph) // copy in structure of new graph
// Call this here because at this point the graph does
// not know anything about the prometheus instance.
if err := prom.UpdatePgraphStartTime(); err != nil {
log.Printf("Main: Prometheus.UpdatePgraphStartTime() errored: %v", err)
}
// G.Start(...) needs to be synchronous or wait,
// Start() needs to be synchronous or wait,
// because if half of the nodes are started and
// some are not ready yet and the EtcdWatch
// loops, we'll cause G.Pause(...) before we
// loops, we'll cause Pause() before we
// even got going, thus causing nil pointer errors
G.Start(first) // sync
converger.Start() // after G.Start()
graph.Start(first) // sync
converger.Start() // after Start()
log.Printf("Graph: %v", G) // show graph
log.Printf("Main: Graph: %v", graph) // show graph
if obj.Graphviz != "" {
filter := obj.GraphvizFilter
if filter == "" {
filter = "dot" // directed graph default
}
if err := G.ExecGraphviz(filter, obj.Graphviz, hostname); err != nil {
log.Printf("Graphviz: %v", err)
if err := graph.ExecGraphviz(filter, obj.Graphviz, hostname); err != nil {
log.Printf("Main: Graphviz: %v", err)
} else {
log.Printf("Graphviz: Successfully generated graph!")
log.Printf("Main: Graphviz: Successfully generated graph!")
}
}
first = false
@@ -515,7 +607,7 @@ func (obj *Main) Run() error {
configWatcher := recwatch.NewConfigWatcher()
configWatcher.Flags = recwatch.Flags{Debug: obj.Flags.Debug}
events := configWatcher.Events()
if !obj.NoWatch {
if !obj.NoWatch { // FIXME: fit this into a clean GAPI?
configWatcher.Add(obj.Remotes...) // add all the files...
} else {
events = nil // signal that no-watch is true
@@ -559,6 +651,14 @@ func (obj *Main) Run() error {
// TODO: is there any benefit to running the remotes above in the loop?
// wait for etcd to be running before we remote in, which we do above!
go remotes.Run()
// wait for remotes to be ready before continuing...
select {
case <-remotes.Ready():
log.Printf("Main: Remotes: Run: Ready!")
// pass
//case <-time.After( ? * time.Second):
// obj.Exit(fmt.Errorf("Main: Remotes: Run timeout"))
}
if obj.GAPI == nil {
converger.Start() // better start this for empty graphs
@@ -567,7 +667,7 @@ func (obj *Main) Run() error {
reterr := <-obj.exit // wait for exit signal
log.Println("Destroy...")
log.Println("Main: Destroy...")
if obj.GAPI != nil {
if err := obj.GAPI.Close(); err != nil {
@@ -585,7 +685,7 @@ func (obj *Main) Run() error {
// tell inner main loop to exit
close(exitchan)
G.Exit() // tells all the children to exit, and waits for them to do so
graph.Exit() // tells all the children to exit, and waits for them to do so
// cleanup etcd main loop last so it can process everything first
if err := EmbdEtcd.Destroy(); err != nil { // shutdown and cleanup etcd
@@ -602,7 +702,7 @@ func (obj *Main) Run() error {
}
if obj.Flags.Debug {
log.Printf("Main: Graph: %v", G)
log.Printf("Main: Graph: %v", graph)
}
// TODO: wait for each vertex to exit...
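To make the new GAPI stream contract in the loop above concrete: Next() now returns a channel of gapi.Next values rather than plain errors. A hypothetical GAPI's Next() might look roughly like the sketch below; only the Exit, Err and Fast fields are taken from the loop above, and everything else (ExampleGAPI, its closeChan) is an assumption for illustration.
package example
import (
	"github.com/purpleidea/mgmt/gapi"
)
// ExampleGAPI is a hypothetical GAPI used only to illustrate the stream contract.
type ExampleGAPI struct {
	closeChan chan struct{} // closed on shutdown (hypothetical)
}
// Next returns a stream of gapi.Next events; the main loop above expects at
// least one event at startup, and inspects the Exit, Err and Fast fields.
func (obj *ExampleGAPI) Next() chan gapi.Next {
	ch := make(chan gapi.Next)
	go func() {
		defer close(ch)
		select {
		case ch <- gapi.Next{}: // startup event triggers the first graph compile
		case <-obj.closeChan:
			return
		}
		// further events would be sent here when the graph definition changes:
		// ch <- gapi.Next{Fast: true}               // recompile, pausing quickly
		// ch <- gapi.Next{Exit: true, Err: someErr} // ask the main loop to exit
	}()
	return ch
}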


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package main


@@ -79,5 +79,7 @@ if [[ $ret != 0 ]]; then
go get golang.org/x/tools/cmd/vet # add in `go vet` for travis
fi
go get golang.org/x/tools/cmd/stringer # for automatic stringer-ing
go get github.com/jteeuwen/go-bindata/go-bindata # for compiling in non golang files
go get github.com/golang/lint/golint # for `golint`-ing
go get -u gopkg.in/alecthomas/gometalinter.v1 && mv "$(dirname $(which gometalinter.v1))/gometalinter.v1" "$(dirname $(which gometalinter.v1))/gometalinter" && gometalinter --install # bonus
cd "$XPWD" >/dev/null


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgp


@@ -1,103 +0,0 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package pgraph represents the internal "pointer graph" that we use.
package pgraph
import (
"fmt"
"log"
"github.com/purpleidea/mgmt/resources"
)
// add edges to the vertex in a graph based on if it matches a uid list
func (g *Graph) addEdgesByMatchingUIDS(v *Vertex, uids []resources.ResUID) []bool {
// search for edges and see what matches!
var result []bool
// loop through each uid, and see if it matches any vertex
for _, uid := range uids {
var found = false
// uid is a ResUID object
for _, vv := range g.GetVertices() { // search
if v == vv { // skip self
continue
}
if g.Flags.Debug {
log.Printf("Compile: AutoEdge: Match: %v[%v] with UID: %v[%v]", vv.Kind(), vv.GetName(), uid.Kind(), uid.GetName())
}
// we must match to an effective UID for the resource,
// that is to say, the name value of a res is a helpful
// handle, but it is not necessarily a unique identity!
// remember, resources can return multiple UID's each!
if resources.UIDExistsInUIDs(uid, vv.UIDs()) {
// add edge from: vv -> v
if uid.Reversed() {
txt := fmt.Sprintf("AutoEdge: %v[%v] -> %v[%v]", vv.Kind(), vv.GetName(), v.Kind(), v.GetName())
log.Printf("Compile: Adding %v", txt)
g.AddEdge(vv, v, NewEdge(txt))
} else { // edges go the "normal" way, eg: pkg resource
txt := fmt.Sprintf("AutoEdge: %v[%v] -> %v[%v]", v.Kind(), v.GetName(), vv.Kind(), vv.GetName())
log.Printf("Compile: Adding %v", txt)
g.AddEdge(v, vv, NewEdge(txt))
}
found = true
break
}
}
result = append(result, found)
}
return result
}
// AutoEdges adds the automatic edges to the graph.
func (g *Graph) AutoEdges() {
log.Println("Compile: Adding AutoEdges...")
for _, v := range g.GetVertices() { // for each vertexes autoedges
if !v.Meta().AutoEdge { // is the metaparam true?
continue
}
autoEdgeObj := v.AutoEdges()
if autoEdgeObj == nil {
log.Printf("%v[%v]: Config: No auto edges were found!", v.Kind(), v.GetName())
continue // next vertex
}
for { // while the autoEdgeObj has more uids to add...
uids := autoEdgeObj.Next() // get some!
if uids == nil {
log.Printf("%v[%v]: Config: The auto edge list is empty!", v.Kind(), v.GetName())
break // inner loop
}
if g.Flags.Debug {
log.Println("Compile: AutoEdge: UIDS:")
for i, u := range uids {
log.Printf("Compile: AutoEdge: UID%d: %v", i, u)
}
}
// match and add edges
result := g.addEdgesByMatchingUIDS(v, uids)
// report back, and find out if we should continue
if !autoEdgeObj.Test(result) {
break
}
}
}
}


@@ -1,486 +0,0 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
import (
"testing"
)
// all of the following test cases are laid out with the following semantics:
// * vertices which start with the same single letter are considered "like"
// * "like" elements should be merged
// * vertices can have any integer after their single letter "family" type
// * grouped vertices should have a name with a comma separated list of names
// * edges follow the same conventions about grouping
// empty graph
func TestPgraphGrouping1(t *testing.T) {
g1 := NewGraph("g1") // original graph
g2 := NewGraph("g2") // expected result
runGraphCmp(t, g1, g2)
}
// single vertex
func TestPgraphGrouping2(t *testing.T) {
g1 := NewGraph("g1") // original graph
{ // grouping to limit variable scope
a1 := NewVertex(NewNoopResTest("a1"))
g1.AddVertex(a1)
}
g2 := NewGraph("g2") // expected result
{
a1 := NewVertex(NewNoopResTest("a1"))
g2.AddVertex(a1)
}
runGraphCmp(t, g1, g2)
}
// two vertices
func TestPgraphGrouping3(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
b1 := NewVertex(NewNoopResTest("b1"))
g1.AddVertex(a1, b1)
}
g2 := NewGraph("g2") // expected result
{
a1 := NewVertex(NewNoopResTest("a1"))
b1 := NewVertex(NewNoopResTest("b1"))
g2.AddVertex(a1, b1)
}
runGraphCmp(t, g1, g2)
}
// two vertices merge
func TestPgraphGrouping4(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
g1.AddVertex(a1, a2)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2"))
g2.AddVertex(a)
}
runGraphCmp(t, g1, g2)
}
// three vertices merge
func TestPgraphGrouping5(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
a3 := NewVertex(NewNoopResTest("a3"))
g1.AddVertex(a1, a2, a3)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2,a3"))
g2.AddVertex(a)
}
runGraphCmp(t, g1, g2)
}
// three vertices, two merge
func TestPgraphGrouping6(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
b1 := NewVertex(NewNoopResTest("b1"))
g1.AddVertex(a1, a2, b1)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2"))
b1 := NewVertex(NewNoopResTest("b1"))
g2.AddVertex(a, b1)
}
runGraphCmp(t, g1, g2)
}
// four vertices, three merge
func TestPgraphGrouping7(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
a3 := NewVertex(NewNoopResTest("a3"))
b1 := NewVertex(NewNoopResTest("b1"))
g1.AddVertex(a1, a2, a3, b1)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2,a3"))
b1 := NewVertex(NewNoopResTest("b1"))
g2.AddVertex(a, b1)
}
runGraphCmp(t, g1, g2)
}
// four vertices, two&two merge
func TestPgraphGrouping8(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
b1 := NewVertex(NewNoopResTest("b1"))
b2 := NewVertex(NewNoopResTest("b2"))
g1.AddVertex(a1, a2, b1, b2)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2"))
b := NewVertex(NewNoopResTest("b1,b2"))
g2.AddVertex(a, b)
}
runGraphCmp(t, g1, g2)
}
// five vertices, two&three merge
func TestPgraphGrouping9(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
b1 := NewVertex(NewNoopResTest("b1"))
b2 := NewVertex(NewNoopResTest("b2"))
b3 := NewVertex(NewNoopResTest("b3"))
g1.AddVertex(a1, a2, b1, b2, b3)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2"))
b := NewVertex(NewNoopResTest("b1,b2,b3"))
g2.AddVertex(a, b)
}
runGraphCmp(t, g1, g2)
}
// three unique vertices
func TestPgraphGrouping10(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
b1 := NewVertex(NewNoopResTest("b1"))
c1 := NewVertex(NewNoopResTest("c1"))
g1.AddVertex(a1, b1, c1)
}
g2 := NewGraph("g2") // expected result
{
a1 := NewVertex(NewNoopResTest("a1"))
b1 := NewVertex(NewNoopResTest("b1"))
c1 := NewVertex(NewNoopResTest("c1"))
g2.AddVertex(a1, b1, c1)
}
runGraphCmp(t, g1, g2)
}
// three unique vertices, two merge
func TestPgraphGrouping11(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
b1 := NewVertex(NewNoopResTest("b1"))
b2 := NewVertex(NewNoopResTest("b2"))
c1 := NewVertex(NewNoopResTest("c1"))
g1.AddVertex(a1, b1, b2, c1)
}
g2 := NewGraph("g2") // expected result
{
a1 := NewVertex(NewNoopResTest("a1"))
b := NewVertex(NewNoopResTest("b1,b2"))
c1 := NewVertex(NewNoopResTest("c1"))
g2.AddVertex(a1, b, c1)
}
runGraphCmp(t, g1, g2)
}
// simple merge 1
// a1 a2 a1,a2
// \ / >>> | (arrows point downwards)
// b b
func TestPgraphGrouping12(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
b1 := NewVertex(NewNoopResTest("b1"))
e1 := NewEdge("e1")
e2 := NewEdge("e2")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(a2, b1, e2)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2"))
b1 := NewVertex(NewNoopResTest("b1"))
e := NewEdge("e1,e2")
g2.AddEdge(a, b1, e)
}
runGraphCmp(t, g1, g2)
}
// simple merge 2
// b b
// / \ >>> | (arrows point downwards)
// a1 a2 a1,a2
func TestPgraphGrouping13(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
b1 := NewVertex(NewNoopResTest("b1"))
e1 := NewEdge("e1")
e2 := NewEdge("e2")
g1.AddEdge(b1, a1, e1)
g1.AddEdge(b1, a2, e2)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2"))
b1 := NewVertex(NewNoopResTest("b1"))
e := NewEdge("e1,e2")
g2.AddEdge(b1, a, e)
}
runGraphCmp(t, g1, g2)
}
// triple merge
// a1 a2 a3 a1,a2,a3
// \ | / >>> | (arrows point downwards)
// b b
func TestPgraphGrouping14(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
a3 := NewVertex(NewNoopResTest("a3"))
b1 := NewVertex(NewNoopResTest("b1"))
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(a2, b1, e2)
g1.AddEdge(a3, b1, e3)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2,a3"))
b1 := NewVertex(NewNoopResTest("b1"))
e := NewEdge("e1,e2,e3")
g2.AddEdge(a, b1, e)
}
runGraphCmp(t, g1, g2)
}
// chain merge
// a1 a1
// / \ |
// b1 b2 >>> b1,b2 (arrows point downwards)
// \ / |
// c1 c1
func TestPgraphGrouping15(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
b1 := NewVertex(NewNoopResTest("b1"))
b2 := NewVertex(NewNoopResTest("b2"))
c1 := NewVertex(NewNoopResTest("c1"))
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
e4 := NewEdge("e4")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(a1, b2, e2)
g1.AddEdge(b1, c1, e3)
g1.AddEdge(b2, c1, e4)
}
g2 := NewGraph("g2") // expected result
{
a1 := NewVertex(NewNoopResTest("a1"))
b := NewVertex(NewNoopResTest("b1,b2"))
c1 := NewVertex(NewNoopResTest("c1"))
e1 := NewEdge("e1,e2")
e2 := NewEdge("e3,e4")
g2.AddEdge(a1, b, e1)
g2.AddEdge(b, c1, e2)
}
runGraphCmp(t, g1, g2)
}
// re-attach 1 (outer)
// technically the second possibility is valid too, depending on which order we
// merge edges in, and if we don't filter out any unnecessary edges afterwards!
// a1 a2 a1,a2 a1,a2
// | / | | \
// b1 / >>> b1 OR b1 / (arrows point downwards)
// | / | | /
// c1 c1 c1
func TestPgraphGrouping16(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
b1 := NewVertex(NewNoopResTest("b1"))
c1 := NewVertex(NewNoopResTest("c1"))
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(b1, c1, e2)
g1.AddEdge(a2, c1, e3)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2"))
b1 := NewVertex(NewNoopResTest("b1"))
c1 := NewVertex(NewNoopResTest("c1"))
e1 := NewEdge("e1,e3")
e2 := NewEdge("e2,e3") // e3 gets "merged through" to BOTH edges!
g2.AddEdge(a, b1, e1)
g2.AddEdge(b1, c1, e2)
}
runGraphCmp(t, g1, g2)
}
// re-attach 2 (inner)
// a1 b2 a1
// | / |
// b1 / >>> b1,b2 (arrows point downwards)
// | / |
// c1 c1
func TestPgraphGrouping17(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
b1 := NewVertex(NewNoopResTest("b1"))
b2 := NewVertex(NewNoopResTest("b2"))
c1 := NewVertex(NewNoopResTest("c1"))
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(b1, c1, e2)
g1.AddEdge(b2, c1, e3)
}
g2 := NewGraph("g2") // expected result
{
a1 := NewVertex(NewNoopResTest("a1"))
b := NewVertex(NewNoopResTest("b1,b2"))
c1 := NewVertex(NewNoopResTest("c1"))
e1 := NewEdge("e1")
e2 := NewEdge("e2,e3")
g2.AddEdge(a1, b, e1)
g2.AddEdge(b, c1, e2)
}
runGraphCmp(t, g1, g2)
}
// re-attach 3 (double)
// similar to "re-attach 1", technically there is a second possibility for this
// a2 a1 b2 a1,a2
// \ | / |
// \ b1 / >>> b1,b2 (arrows point downwards)
// \ | / |
// c1 c1
func TestPgraphGrouping18(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
b1 := NewVertex(NewNoopResTest("b1"))
b2 := NewVertex(NewNoopResTest("b2"))
c1 := NewVertex(NewNoopResTest("c1"))
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
e4 := NewEdge("e4")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(b1, c1, e2)
g1.AddEdge(a2, c1, e3)
g1.AddEdge(b2, c1, e4)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2"))
b := NewVertex(NewNoopResTest("b1,b2"))
c1 := NewVertex(NewNoopResTest("c1"))
e1 := NewEdge("e1,e3")
e2 := NewEdge("e2,e3,e4") // e3 gets "merged through" to BOTH edges!
g2.AddEdge(a, b, e1)
g2.AddEdge(b, c1, e2)
}
runGraphCmp(t, g1, g2)
}
// connected merge 0, (no change!)
// a1 a1
// \ >>> \ (arrows point downwards)
// a2 a2
func TestPgraphGroupingConnected0(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
e1 := NewEdge("e1")
g1.AddEdge(a1, a2, e1)
}
g2 := NewGraph("g2") // expected result ?
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
e1 := NewEdge("e1")
g2.AddEdge(a1, a2, e1)
}
runGraphCmp(t, g1, g2)
}
// connected merge 1, (no change!)
// a1 a1
// \ \
// b >>> b (arrows point downwards)
// \ \
// a2 a2
func TestPgraphGroupingConnected1(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
b := NewVertex(NewNoopResTest("b"))
a2 := NewVertex(NewNoopResTest("a2"))
e1 := NewEdge("e1")
e2 := NewEdge("e2")
g1.AddEdge(a1, b, e1)
g1.AddEdge(b, a2, e2)
}
g2 := NewGraph("g2") // expected result ?
{
a1 := NewVertex(NewNoopResTest("a1"))
b := NewVertex(NewNoopResTest("b"))
a2 := NewVertex(NewNoopResTest("a2"))
e1 := NewEdge("e1")
e2 := NewEdge("e2")
g2.AddEdge(a1, b, e1)
g2.AddEdge(b, a2, e2)
}
runGraphCmp(t, g1, g2)
}

pgraph/graphsync.go (new file, 155 lines)

@@ -0,0 +1,155 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
import (
"fmt"
errwrap "github.com/pkg/errors"
)
func strVertexCmpFn(v1, v2 Vertex) (bool, error) {
if v1.String() == "" || v2.String() == "" {
return false, fmt.Errorf("empty vertex")
}
return v1.String() == v2.String(), nil
}
func strEdgeCmpFn(e1, e2 Edge) (bool, error) {
if e1.String() == "" || e2.String() == "" {
return false, fmt.Errorf("empty edge")
}
return e1.String() == e2.String(), nil
}
// GraphSync updates the Graph so that it matches the newGraph. It leaves
// identical elements alone so that they don't need to be refreshed.
// It tries to mutate existing elements into new ones, if they support this.
// This updates the Graph on success only.
// FIXME: should we do this with copies of the vertex resources?
func (obj *Graph) GraphSync(newGraph *Graph, vertexCmpFn func(Vertex, Vertex) (bool, error), vertexAddFn func(Vertex) error, vertexRemoveFn func(Vertex) error, edgeCmpFn func(Edge, Edge) (bool, error)) error {
oldGraph := obj.Copy() // work on a copy of the old graph
if oldGraph == nil {
var err error
oldGraph, err = NewGraph(newGraph.GetName()) // copy over the name
if err != nil {
return errwrap.Wrapf(err, "GraphSync failed")
}
}
oldGraph.SetName(newGraph.GetName()) // overwrite the name
if vertexCmpFn == nil {
vertexCmpFn = strVertexCmpFn // use simple string cmp version
}
if vertexAddFn == nil {
vertexAddFn = func(Vertex) error { return nil } // noop
}
if vertexRemoveFn == nil {
vertexRemoveFn = func(Vertex) error { return nil } // noop
}
if edgeCmpFn == nil {
edgeCmpFn = strEdgeCmpFn // use simple string cmp version
}
var lookup = make(map[Vertex]Vertex)
var vertexKeep []Vertex // list of vertices which are the same in new graph
var edgeKeep []Edge // list of edges which are the same in new graph
for v := range newGraph.Adjacency() { // loop through the vertices (resources)
var vertex Vertex
// step one, direct compare with res.Compare
if vertex == nil { // redundant guard for consistency
fn := func(vv Vertex) (bool, error) {
b, err := vertexCmpFn(vv, v)
return b, errwrap.Wrapf(err, "vertexCmpFn failed")
}
var err error
vertex, err = oldGraph.VertexMatchFn(fn)
if err != nil {
return errwrap.Wrapf(err, "VertexMatchFn failed")
}
}
// TODO: consider adding a mutate API.
// step two, try and mutate with res.Mutate
//if vertex == nil { // not found yet...
// vertex = oldGraph.MutateMatch(res)
//}
if vertex == nil { // no match found yet
if err := vertexAddFn(v); err != nil {
return errwrap.Wrapf(err, "vertexAddFn failed")
}
vertex = v
oldGraph.AddVertex(vertex) // call standalone in case not part of an edge
}
lookup[v] = vertex // used for constructing edges
vertexKeep = append(vertexKeep, vertex) // append
}
// get rid of any vertices we shouldn't keep (that aren't in new graph)
for v := range oldGraph.Adjacency() {
if !VertexContains(v, vertexKeep) {
if err := vertexRemoveFn(v); err != nil {
return errwrap.Wrapf(err, "vertexRemoveFn failed")
}
oldGraph.DeleteVertex(v)
}
}
// compare edges
for v1 := range newGraph.Adjacency() { // loop through the vertices (resources)
for v2, e := range newGraph.Adjacency()[v1] {
// we have an edge!
// lookup vertices (these should exist now)
vertex1, exists1 := lookup[v1]
vertex2, exists2 := lookup[v2]
if !exists1 || !exists2 { // no match found, bug?
//if vertex1 == nil || vertex2 == nil { // no match found
return fmt.Errorf("new vertices weren't found") // programming error
}
edge, exists := oldGraph.Adjacency()[vertex1][vertex2]
if !exists {
edge = e // use edge
} else if b, err := edgeCmpFn(edge, e); err != nil {
return errwrap.Wrapf(err, "edgeCmpFn failed")
} else if !b {
edge = e // overwrite edge
}
oldGraph.Adjacency()[vertex1][vertex2] = edge // store it (AddEdge)
edgeKeep = append(edgeKeep, edge) // mark as saved
}
}
// delete unused edges
for v1 := range oldGraph.Adjacency() {
for _, e := range oldGraph.Adjacency()[v1] {
// we have an edge!
if !EdgeContains(e, edgeKeep) {
oldGraph.DeleteEdge(e)
}
}
}
// success
*obj = *oldGraph // save old graph
return nil
}

pgraph/graphsync_test.go (new file, 92 lines)

@@ -0,0 +1,92 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
import (
"testing"
)
func TestGraphSync1(t *testing.T) {
g := &Graph{}
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
e1 := NE("e1")
e2 := NE("e2")
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
// new graph
newGraph := &Graph{}
v4 := NV("v4")
v5 := NV("v5")
e3 := NE("e3")
newGraph.AddEdge(v4, v5, e3)
err := g.GraphSync(newGraph, nil, nil, nil, nil)
if err != nil {
t.Errorf("GraphSync failed: %v", err)
return
}
// g should change and become the same
if s := runGraphCmp(t, g, newGraph); s != "" {
t.Errorf("%s", s)
}
}
func TestGraphSync2(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g := &Graph{}
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
// new graph
newGraph := &Graph{}
newGraph.AddEdge(v1, v3, e1)
newGraph.AddEdge(v2, v3, e2)
newGraph.AddEdge(v4, v5, e3)
//newGraph.AddEdge(v3, v4, NE("v3,v4"))
//newGraph.AddEdge(v3, v5, NE("v3,v5"))
// graphs should differ!
if runGraphCmp(t, g, newGraph) == "" {
t.Errorf("graphs should differ initially")
return
}
err := g.GraphSync(newGraph, strVertexCmpFn, vertexAddFn, vertexRemoveFn, strEdgeCmpFn)
if err != nil {
t.Errorf("GraphSync failed: %v", err)
return
}
// g should change and become the same
if s := runGraphCmp(t, g, newGraph); s != "" {
t.Errorf("%s", s)
}
}


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph // TODO: this should be a subpackage
@@ -45,16 +45,16 @@ func (g *Graph) Graphviz() (out string) {
out += fmt.Sprintf("\tlabel=\"%s\";\n", g.GetName())
//out += "\tnode [shape=box];\n"
str := ""
for i := range g.Adjacency { // reverse paths
out += fmt.Sprintf("\t\"%s\" [label=\"%s[%s]\"];\n", i.GetName(), i.Kind(), i.GetName())
for j := range g.Adjacency[i] {
k := g.Adjacency[i][j]
for i := range g.Adjacency() { // reverse paths
out += fmt.Sprintf("\t\"%s\" [label=\"%s\"];\n", i, i)
for j := range g.Adjacency()[i] {
k := g.Adjacency()[i][j]
// use str for clearer output ordering
if k.Notify {
str += fmt.Sprintf("\t\"%s\" -> \"%s\" [label=\"%s\",style=bold];\n", i.GetName(), j.GetName(), k.Name)
} else {
str += fmt.Sprintf("\t\"%s\" -> \"%s\" [label=\"%s\"];\n", i.GetName(), j.GetName(), k.Name)
}
//if fmtBoldFn(k) { // TODO: add this sort of formatting
// str += fmt.Sprintf("\t\"%s\" -> \"%s\" [label=\"%s\",style=bold];\n", i, j, k)
//} else {
str += fmt.Sprintf("\t\"%s\" -> \"%s\" [label=\"%s\"];\n", i, j, k)
//}
}
}
out += str


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package pgraph represents the internal "pointer graph" that we use.
@@ -21,32 +21,10 @@ package pgraph
import (
"fmt"
"sort"
"sync"
"github.com/purpleidea/mgmt/event"
"github.com/purpleidea/mgmt/prometheus"
"github.com/purpleidea/mgmt/resources"
"github.com/purpleidea/mgmt/util/semaphore"
errwrap "github.com/pkg/errors"
)
//go:generate stringer -type=graphState -output=graphstate_stringer.go
type graphState int
const (
graphStateNil graphState = iota
graphStateStarting
graphStateStarted
graphStatePausing
graphStatePaused
)
// Flags contains specific constants used by the graph.
type Flags struct {
Debug bool
}
// Graph is the graph structure in this library.
// The graph abstract data type (ADT) is defined as follows:
// * the directed graph arrows point from left to right ( -> )
@@ -55,82 +33,71 @@ type Flags struct {
// * This is also the direction that the notify should happen in...
type Graph struct {
Name string
Adjacency map[*Vertex]map[*Vertex]*Edge // *Vertex -> *Vertex (edge)
Flags Flags
state graphState
mutex *sync.Mutex // used when modifying graph State variable
wg *sync.WaitGroup
semas map[string]*semaphore.Semaphore
prometheus *prometheus.Prometheus // the prometheus instance
adjacency map[Vertex]map[Vertex]Edge // Vertex -> Vertex (edge)
kv map[string]interface{} // some values associated with the graph
}
// Vertex is the primary vertex struct in this library.
type Vertex struct {
resources.Res // anonymous field
timestamp int64 // last updated timestamp ?
// Vertex is the primary vertex struct in this library. It can be anything that
// implements Stringer. The string output must be stable and unique in a graph.
type Vertex interface {
fmt.Stringer // String() string
}
// Edge is the primary edge struct in this library.
type Edge struct {
Name string
Notify bool // should we send a refresh notification along this edge?
// Edge is the primary edge struct in this library. It can be anything that
// implements Stringer. The string output must be stable and unique in a graph.
type Edge interface {
fmt.Stringer // String() string
}
refresh bool // is there a notify pending for the dest vertex ?
// Init initializes the graph which populates all the internal structures.
func (g *Graph) Init() error {
if g.Name == "" { // FIXME: is this really a good requirement?
return fmt.Errorf("can't initialize graph with empty name")
}
//g.adjacency = make(map[Vertex]map[Vertex]Edge) // not required
//g.kv = make(map[string]interface{}) // not required
return nil
}
// NewGraph builds a new graph.
func NewGraph(name string) *Graph {
return &Graph{
Name: name,
Adjacency: make(map[*Vertex]map[*Vertex]*Edge),
state: graphStateNil,
// ptr b/c: Mutex/WaitGroup must not be copied after first use
mutex: &sync.Mutex{},
wg: &sync.WaitGroup{},
semas: make(map[string]*semaphore.Semaphore),
}
}
// NewVertex returns a new graph vertex struct with a contained resource.
func NewVertex(r resources.Res) *Vertex {
return &Vertex{
Res: r,
}
}
// NewEdge returns a new graph edge struct.
func NewEdge(name string) *Edge {
return &Edge{
func NewGraph(name string) (*Graph, error) {
g := &Graph{
Name: name,
}
if err := g.Init(); err != nil {
return nil, err
}
return g, nil
}
// Refresh returns the pending refresh status of this edge.
func (obj *Edge) Refresh() bool {
return obj.refresh
// Value returns a value stored alongside the graph in a particular key.
func (g *Graph) Value(key string) (interface{}, bool) {
val, exists := g.kv[key]
return val, exists
}
// SetRefresh sets the pending refresh status of this edge.
func (obj *Edge) SetRefresh(b bool) {
obj.refresh = b
// SetValue sets a value to be stored alongside the graph in a particular key.
func (g *Graph) SetValue(key string, val interface{}) {
if g.kv == nil { // initialize on first use
g.kv = make(map[string]interface{})
}
g.kv[key] = val
}
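
As a rough sketch of how the new per-graph key/value helpers can be used (hypothetical example code, not part of this patch), a caller can stash metadata on the graph without touching the adjacency map:

func exampleGraphValues() (interface{}, error) {
	g, err := NewGraph("example") // the new constructor returns an error
	if err != nil {
		return nil, err
	}
	g.SetValue("generation", 1) // lazily allocates the kv map on first use
	val, _ := g.Value("generation")
	return val, nil // returns 1
}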
// Copy makes a copy of the graph struct
// Copy makes a copy of the graph struct.
func (g *Graph) Copy() *Graph {
if g == nil { // allow nil graphs through
return g
}
newGraph := &Graph{
Name: g.Name,
Adjacency: make(map[*Vertex]map[*Vertex]*Edge, len(g.Adjacency)),
Flags: g.Flags,
state: g.state,
mutex: g.mutex,
wg: g.wg,
semas: g.semas,
prometheus: g.prometheus,
adjacency: make(map[Vertex]map[Vertex]Edge, len(g.adjacency)),
kv: g.kv,
}
for k, v := range g.Adjacency {
newGraph.Adjacency[k] = v // copy
for k, v := range g.adjacency {
newGraph.adjacency[k] = v // copy
}
return newGraph
}
@@ -145,87 +112,49 @@ func (g *Graph) SetName(name string) {
g.Name = name
}
// getState returns the state of the graph. This state is used for optimizing
// certain algorithms by knowing what part of processing the graph is currently
// undergoing.
func (g *Graph) getState() graphState {
//g.mutex.Lock()
//defer g.mutex.Unlock()
return g.state
}
// setState sets the graph state and returns the previous state.
func (g *Graph) setState(state graphState) graphState {
g.mutex.Lock()
defer g.mutex.Unlock()
prev := g.getState()
g.state = state
return prev
}
// AddVertex uses variadic input to add all listed vertices to the graph
func (g *Graph) AddVertex(xv ...*Vertex) {
// AddVertex uses variadic input to add all listed vertices to the graph.
func (g *Graph) AddVertex(xv ...Vertex) {
if g.adjacency == nil { // initialize on first use
g.adjacency = make(map[Vertex]map[Vertex]Edge)
}
for _, v := range xv {
if _, exists := g.Adjacency[v]; !exists {
g.Adjacency[v] = make(map[*Vertex]*Edge)
if _, exists := g.adjacency[v]; !exists {
g.adjacency[v] = make(map[Vertex]Edge)
}
}
}
// DeleteVertex deletes a particular vertex from the graph.
func (g *Graph) DeleteVertex(v *Vertex) {
delete(g.Adjacency, v)
for k := range g.Adjacency {
delete(g.Adjacency[k], v)
func (g *Graph) DeleteVertex(v Vertex) {
delete(g.adjacency, v)
for k := range g.adjacency {
delete(g.adjacency[k], v)
}
}
// AddEdge adds a directed edge to the graph from v1 to v2.
func (g *Graph) AddEdge(v1, v2 *Vertex, e *Edge) {
func (g *Graph) AddEdge(v1, v2 Vertex, e Edge) {
// NOTE: this doesn't allow more than one edge between two vertexes...
g.AddVertex(v1, v2) // supports adding N vertices now
// TODO: check if an edge exists to avoid overwriting it!
// NOTE: VertexMerge() depends on overwriting it at the moment...
g.Adjacency[v1][v2] = e
g.adjacency[v1][v2] = e
}
// DeleteEdge deletes a particular edge from the graph.
// FIXME: add test cases
func (g *Graph) DeleteEdge(e *Edge) {
for v1 := range g.Adjacency {
for v2, edge := range g.Adjacency[v1] {
func (g *Graph) DeleteEdge(e Edge) {
for v1 := range g.adjacency {
for v2, edge := range g.adjacency[v1] {
if e == edge {
delete(g.Adjacency[v1], v2)
delete(g.adjacency[v1], v2)
}
}
}
}
// CompareMatch searches for an equivalent resource in the graph and returns the
// vertex it is found in, or nil if not found.
func (g *Graph) CompareMatch(obj resources.Res) *Vertex {
for v := range g.Adjacency {
if v.Res.Compare(obj) {
return v
}
}
return nil
}
// TODO: consider adding a mutate API.
//func (g *Graph) MutateMatch(obj resources.Res) *Vertex {
// for v := range g.Adjacency {
// if err := v.Res.Mutate(obj); err == nil {
// // transmogrified!
// return v
// }
// }
// return nil
//}
// HasVertex returns if the input vertex exists in the graph.
func (g *Graph) HasVertex(v *Vertex) bool {
if _, exists := g.Adjacency[v]; exists {
func (g *Graph) HasVertex(v Vertex) bool {
if _, exists := g.adjacency[v]; exists {
return true
}
return false
@@ -233,33 +162,40 @@ func (g *Graph) HasVertex(v *Vertex) bool {
// NumVertices returns the number of vertices in the graph.
func (g *Graph) NumVertices() int {
return len(g.Adjacency)
return len(g.adjacency)
}
// NumEdges returns the number of edges in the graph.
func (g *Graph) NumEdges() int {
count := 0
for k := range g.Adjacency {
count += len(g.Adjacency[k])
for k := range g.adjacency {
count += len(g.adjacency[k])
}
return count
}
// GetVertices returns a randomly sorted slice of all vertices in the graph
// Adjacency returns the adjacency map representing this graph. This is useful
// for users who wish to operate on the raw data structure more efficiently.
// This works because maps are reference types so we can edit this at will.
func (g *Graph) Adjacency() map[Vertex]map[Vertex]Edge {
return g.adjacency
}
// Vertices returns a randomly sorted slice of all vertices in the graph.
// The order is random, because the map implementation is intentionally so!
func (g *Graph) GetVertices() []*Vertex {
var vertices []*Vertex
for k := range g.Adjacency {
func (g *Graph) Vertices() []Vertex {
var vertices []Vertex
for k := range g.adjacency {
vertices = append(vertices, k)
}
return vertices
}
// GetVerticesChan returns a channel of all vertices in the graph.
func (g *Graph) GetVerticesChan() chan *Vertex {
ch := make(chan *Vertex)
go func(ch chan *Vertex) {
for k := range g.Adjacency {
// VerticesChan returns a channel of all vertices in the graph.
func (g *Graph) VerticesChan() chan Vertex {
ch := make(chan Vertex)
go func(ch chan Vertex) {
for k := range g.adjacency {
ch <- k
}
close(ch)
@@ -268,17 +204,17 @@ func (g *Graph) GetVerticesChan() chan *Vertex {
}
// VertexSlice is a linear list of vertices. It can be sorted.
type VertexSlice []*Vertex
type VertexSlice []Vertex
func (vs VertexSlice) Len() int { return len(vs) }
func (vs VertexSlice) Swap(i, j int) { vs[i], vs[j] = vs[j], vs[i] }
func (vs VertexSlice) Less(i, j int) bool { return vs[i].String() < vs[j].String() }
// GetVerticesSorted returns a sorted slice of all vertices in the graph
// The order is sorted by String() to avoid the non-determinism in the map type
func (g *Graph) GetVerticesSorted() []*Vertex {
var vertices []*Vertex
for k := range g.Adjacency {
// VerticesSorted returns a sorted slice of all vertices in the graph.
// The order is sorted by String() to avoid the non-determinism in the map type.
func (g *Graph) VerticesSorted() []Vertex {
var vertices []Vertex
for k := range g.adjacency {
vertices = append(vertices, k)
}
sort.Sort(VertexSlice(vertices)) // add determinism
@@ -290,19 +226,14 @@ func (g *Graph) String() string {
return fmt.Sprintf("Vertices(%d), Edges(%d)", g.NumVertices(), g.NumEdges())
}
// String returns the canonical form for a vertex
func (v *Vertex) String() string {
return fmt.Sprintf("%s[%s]", v.Res.Kind(), v.Res.GetName())
}
// IncomingGraphVertices returns an array (slice) of all directed vertices to
// vertex v (??? -> v). OKTimestamp should probably use this.
func (g *Graph) IncomingGraphVertices(v *Vertex) []*Vertex {
func (g *Graph) IncomingGraphVertices(v Vertex) []Vertex {
// TODO: we might be able to implement this differently by reversing
// the Adjacency graph and then looping through it again...
var s []*Vertex
for k := range g.Adjacency { // reverse paths
for w := range g.Adjacency[k] {
var s []Vertex
for k := range g.adjacency { // reverse paths
for w := range g.adjacency[k] {
if w == v {
s = append(s, k)
}
@@ -313,9 +244,9 @@ func (g *Graph) IncomingGraphVertices(v *Vertex) []*Vertex {
// OutgoingGraphVertices returns an array (slice) of all vertices that vertex v
// points to (v -> ???). Poke should probably use this.
func (g *Graph) OutgoingGraphVertices(v *Vertex) []*Vertex {
var s []*Vertex
for k := range g.Adjacency[v] { // forward paths
func (g *Graph) OutgoingGraphVertices(v Vertex) []Vertex {
var s []Vertex
for k := range g.adjacency[v] { // forward paths
s = append(s, k)
}
return s
@@ -323,18 +254,18 @@ func (g *Graph) OutgoingGraphVertices(v *Vertex) []*Vertex {
// GraphVertices returns an array (slice) of all vertices that connect to vertex v.
// This is the union of IncomingGraphVertices and OutgoingGraphVertices.
func (g *Graph) GraphVertices(v *Vertex) []*Vertex {
var s []*Vertex
func (g *Graph) GraphVertices(v Vertex) []Vertex {
var s []Vertex
s = append(s, g.IncomingGraphVertices(v)...)
s = append(s, g.OutgoingGraphVertices(v)...)
return s
}
// IncomingGraphEdges returns all of the edges that point to vertex v (??? -> v).
func (g *Graph) IncomingGraphEdges(v *Vertex) []*Edge {
var edges []*Edge
for v1 := range g.Adjacency { // reverse paths
for v2, e := range g.Adjacency[v1] {
func (g *Graph) IncomingGraphEdges(v Vertex) []Edge {
var edges []Edge
for v1 := range g.adjacency { // reverse paths
for v2, e := range g.adjacency[v1] {
if v2 == v {
edges = append(edges, e)
}
@@ -344,9 +275,9 @@ func (g *Graph) IncomingGraphEdges(v *Vertex) []*Edge {
}
// OutgoingGraphEdges returns all of the edges that point from vertex v (v -> ???).
func (g *Graph) OutgoingGraphEdges(v *Vertex) []*Edge {
var edges []*Edge
for _, e := range g.Adjacency[v] { // forward paths
func (g *Graph) OutgoingGraphEdges(v Vertex) []Edge {
var edges []Edge
for _, e := range g.adjacency[v] { // forward paths
edges = append(edges, e)
}
return edges
@@ -354,18 +285,18 @@ func (g *Graph) OutgoingGraphEdges(v *Vertex) []*Edge {
// GraphEdges returns an array (slice) of all edges that connect to vertex v.
// This is the union of IncomingGraphEdges and OutgoingGraphEdges.
func (g *Graph) GraphEdges(v *Vertex) []*Edge {
var edges []*Edge
func (g *Graph) GraphEdges(v Vertex) []Edge {
var edges []Edge
edges = append(edges, g.IncomingGraphEdges(v)...)
edges = append(edges, g.OutgoingGraphEdges(v)...)
return edges
}
// DFS returns a depth first search for the graph, starting at the input vertex.
func (g *Graph) DFS(start *Vertex) []*Vertex {
var d []*Vertex // discovered
var s []*Vertex // stack
if _, exists := g.Adjacency[start]; !exists {
func (g *Graph) DFS(start Vertex) []Vertex {
var d []Vertex // discovered
var s []Vertex // stack
if _, exists := g.adjacency[start]; !exists {
return nil // TODO: error
}
v := start
@@ -385,31 +316,32 @@ func (g *Graph) DFS(start *Vertex) []*Vertex {
}
// FilterGraph builds a new graph containing only vertices from the list.
func (g *Graph) FilterGraph(name string, vertices []*Vertex) *Graph {
newgraph := NewGraph(name)
for k1, x := range g.Adjacency {
func (g *Graph) FilterGraph(name string, vertices []Vertex) (*Graph, error) {
newGraph := &Graph{Name: name}
if err := newGraph.Init(); err != nil {
return nil, errwrap.Wrapf(err, "could not run FilterGraph() properly")
}
for k1, x := range g.adjacency {
for k2, e := range x {
//log.Printf("Filter: %s -> %s # %s", k1.Name, k2.Name, e.Name)
if VertexContains(k1, vertices) || VertexContains(k2, vertices) {
newgraph.AddEdge(k1, k2, e)
newGraph.AddEdge(k1, k2, e)
}
}
}
return newgraph
return newGraph, nil
}
// GetDisconnectedGraphs returns a channel containing the N disconnected graphs
// in our main graph. We can then process each of these in parallel.
func (g *Graph) GetDisconnectedGraphs() chan *Graph {
ch := make(chan *Graph)
go func() {
var start *Vertex
var d []*Vertex // discovered
// DisconnectedGraphs returns a list containing the N disconnected graphs.
func (g *Graph) DisconnectedGraphs() ([]*Graph, error) {
graphs := []*Graph{}
var start Vertex
var d []Vertex // discovered
c := g.NumVertices()
for len(d) < c {
// get an undiscovered vertex to start from
for _, s := range g.GetVertices() {
for _, s := range g.Vertices() {
if !VertexContains(s, d) {
start = s
}
@@ -418,31 +350,31 @@ func (g *Graph) GetDisconnectedGraphs() chan *Graph {
// dfs through the graph
dfs := g.DFS(start)
// filter all the collected elements into a new graph
newgraph := g.FilterGraph(g.Name, dfs)
newgraph, err := g.FilterGraph(g.Name, dfs)
if err != nil {
return nil, errwrap.Wrapf(err, "could not run DisconnectedGraphs() properly")
}
// add number of elements found to found variable
d = append(d, dfs...) // extend
// return this new graph to the channel
ch <- newgraph
// append this new graph to the list
graphs = append(graphs, newgraph)
// if we've found all the elements, then we're done
// otherwise loop through to continue...
}
close(ch)
}()
return ch
return graphs, nil
}
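
For illustration only, a minimal sketch of the new list-returning API (the NV and NE helpers come from pgraph/util_test.go later in this diff; the function name here is made up):

func exampleDisconnected() (int, error) {
	g, err := NewGraph("example")
	if err != nil {
		return 0, err
	}
	g.AddEdge(NV("a"), NV("b"), NE("ab"))
	g.AddEdge(NV("c"), NV("d"), NE("cd")) // a second, unconnected component
	graphs, err := g.DisconnectedGraphs()
	if err != nil {
		return 0, err
	}
	return len(graphs), nil // expect 2
}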
// InDegree returns the count of vertices that point to me in one big lookup map.
func (g *Graph) InDegree() map[*Vertex]int {
result := make(map[*Vertex]int)
for k := range g.Adjacency {
func (g *Graph) InDegree() map[Vertex]int {
result := make(map[Vertex]int)
for k := range g.adjacency {
result[k] = 0 // initialize
}
for k := range g.Adjacency {
for z := range g.Adjacency[k] {
for k := range g.adjacency {
for z := range g.adjacency[k] {
result[z]++
}
}
@@ -450,12 +382,12 @@ func (g *Graph) InDegree() map[*Vertex]int {
}
// OutDegree returns the count of vertices that point away in one big lookup map.
func (g *Graph) OutDegree() map[*Vertex]int {
result := make(map[*Vertex]int)
func (g *Graph) OutDegree() map[Vertex]int {
result := make(map[Vertex]int)
for k := range g.Adjacency {
for k := range g.adjacency {
result[k] = 0 // initialize
for range g.Adjacency[k] {
for range g.adjacency[k] {
result[k]++
}
}
@@ -463,12 +395,12 @@ func (g *Graph) OutDegree() map[*Vertex]int {
}
// TopologicalSort returns the sort of graph vertices in that order.
// based on descriptions and code from wikipedia and rosetta code
// It is based on descriptions and code from wikipedia and rosetta code.
// TODO: add memoization, and cache invalidation to speed this up :)
func (g *Graph) TopologicalSort() ([]*Vertex, error) { // kahn's algorithm
var L []*Vertex // empty list that will contain the sorted elements
var S []*Vertex // set of all nodes with no incoming edges
remaining := make(map[*Vertex]int) // amount of edges remaining
func (g *Graph) TopologicalSort() ([]Vertex, error) { // kahn's algorithm
var L []Vertex // empty list that will contain the sorted elements
var S []Vertex // set of all nodes with no incoming edges
remaining := make(map[Vertex]int) // amount of edges remaining
for v, d := range g.InDegree() {
if d == 0 {
@@ -485,7 +417,7 @@ func (g *Graph) TopologicalSort() ([]*Vertex, error) { // kahn's algorithm
v := S[last]
S = S[:last]
L = append(L, v) // add v to tail of L
for n := range g.Adjacency[v] {
for n := range g.adjacency[v] {
// for each node n remaining in the graph, consume from
// remaining, so for remaining[n] > 0
if remaining[n] > 0 {
@@ -500,7 +432,7 @@ func (g *Graph) TopologicalSort() ([]*Vertex, error) { // kahn's algorithm
// if graph has edges, eg if any value in rem is > 0
for c, in := range remaining {
if in > 0 {
for n := range g.Adjacency[c] {
for n := range g.adjacency[c] {
if remaining[n] > 0 {
return nil, fmt.Errorf("not a dag")
}
@@ -519,19 +451,19 @@ func (g *Graph) TopologicalSort() ([]*Vertex, error) { // kahn's algorithm
// actually return a tree if we cared about correctness.
// This operates by a recursive algorithm; a more efficient version is likely.
// If you don't give this function a DAG, you might cause infinite recursion!
func (g *Graph) Reachability(a, b *Vertex) []*Vertex {
func (g *Graph) Reachability(a, b Vertex) []Vertex {
if a == nil || b == nil {
return nil
}
vertices := g.OutgoingGraphVertices(a) // what points away from a ?
if len(vertices) == 0 {
return []*Vertex{} // nope
return []Vertex{} // nope
}
if VertexContains(b, vertices) {
return []*Vertex{a, b} // found
return []Vertex{a, b} // found
}
// TODO: parallelize this with go routines?
var collected = make([][]*Vertex, len(vertices))
var collected = make([][]Vertex, len(vertices))
pick := -1
for i, v := range vertices {
collected[i] = g.Reachability(v, b) // find b by recursion
@@ -544,126 +476,111 @@ func (g *Graph) Reachability(a, b *Vertex) []*Vertex {
}
}
if pick < 0 {
return []*Vertex{} // nope
return []Vertex{} // nope
}
result := []*Vertex{a} // tack on a
result := []Vertex{a} // tack on a
result = append(result, collected[pick]...)
return result
}
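
A small, hedged sketch of Reachability on a three-vertex DAG (again using the NV/NE test helpers shown later; names are illustrative only):

func exampleReachability() ([]Vertex, error) {
	g, err := NewGraph("example")
	if err != nil {
		return nil, err
	}
	a, b, c := NV("a"), NV("b"), NV("c")
	g.AddEdge(a, b, NE("ab"))
	g.AddEdge(b, c, NE("bc"))
	return g.Reachability(a, c), nil // expect the path [a b c]
}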
// GraphSync updates the oldGraph so that it matches the newGraph receiver. It
// leaves identical elements alone so that they don't need to be refreshed. It
// tries to mutate existing elements into new ones, if they support this.
// FIXME: add test cases
func (g *Graph) GraphSync(oldGraph *Graph) (*Graph, error) {
if oldGraph == nil {
oldGraph = NewGraph(g.GetName()) // copy over the name
}
oldGraph.SetName(g.GetName()) // overwrite the name
var lookup = make(map[*Vertex]*Vertex)
var vertexKeep []*Vertex // list of vertices which are the same in new graph
var edgeKeep []*Edge // list of vertices which are the same in new graph
for v := range g.Adjacency { // loop through the vertices (resources)
res := v.Res // resource
var vertex *Vertex
// step one, direct compare with res.Compare
if vertex == nil { // redundant guard for consistency
vertex = oldGraph.CompareMatch(res)
}
// TODO: consider adding a mutate API.
// step two, try and mutate with res.Mutate
//if vertex == nil { // not found yet...
// vertex = oldGraph.MutateMatch(res)
//}
if vertex == nil { // no match found yet
if err := res.Validate(); err != nil {
return nil, errwrap.Wrapf(err, "could not Validate() resource")
}
vertex = v
oldGraph.AddVertex(vertex) // call standalone in case not part of an edge
}
lookup[v] = vertex // used for constructing edges
vertexKeep = append(vertexKeep, vertex) // append
}
// get rid of any vertices we shouldn't keep (that aren't in new graph)
for v := range oldGraph.Adjacency {
if !VertexContains(v, vertexKeep) {
// wait for exit before starting new graph!
v.SendEvent(event.EventExit, nil) // sync
v.Res.WaitGroup().Wait()
oldGraph.DeleteVertex(v)
// VertexMatchFn searches for a vertex in the graph and returns the vertex if
// one matches. It uses a user defined function to match. That function must
// return true on match, and an error if anything goes wrong.
func (g *Graph) VertexMatchFn(fn func(Vertex) (bool, error)) (Vertex, error) {
for v := range g.adjacency {
if b, err := fn(v); err != nil {
return nil, errwrap.Wrapf(err, "fn in VertexMatchFn() errored")
} else if b {
return v, nil
}
}
// compare edges
for v1 := range g.Adjacency { // loop through the vertices (resources)
for v2, e := range g.Adjacency[v1] {
// we have an edge!
// lookup vertices (these should exist now)
//res1 := v1.Res // resource
//res2 := v2.Res
//vertex1 := oldGraph.CompareMatch(res1)
//vertex2 := oldGraph.CompareMatch(res2)
vertex1, exists1 := lookup[v1]
vertex2, exists2 := lookup[v2]
if !exists1 || !exists2 { // no match found, bug?
//if vertex1 == nil || vertex2 == nil { // no match found
return nil, fmt.Errorf("new vertices weren't found") // programming error
}
edge, exists := oldGraph.Adjacency[vertex1][vertex2]
if !exists || edge.Name != e.Name { // TODO: edgeCmp
edge = e // use or overwrite edge
}
oldGraph.Adjacency[vertex1][vertex2] = edge // store it (AddEdge)
edgeKeep = append(edgeKeep, edge) // mark as saved
}
}
// delete unused edges
for v1 := range oldGraph.Adjacency {
for _, e := range oldGraph.Adjacency[v1] {
// we have an edge!
if !EdgeContains(e, edgeKeep) {
oldGraph.DeleteEdge(e)
}
}
}
return oldGraph, nil
return nil, nil // nothing found
}
// GraphMetas returns a list of pointers to each of the resource MetaParams.
func (g *Graph) GraphMetas() []*resources.MetaParams {
metas := []*resources.MetaParams{}
for v := range g.Adjacency { // loop through the vertices (resources)
res := v.Res // resource
meta := res.Meta()
metas = append(metas, meta)
// GraphCmp compares the topology of this graph to another and returns nil if
// they're equal. It uses a user defined function to compare topologically
// equivalent vertices, and edges.
// FIXME: add more test cases
func (g *Graph) GraphCmp(graph *Graph, vertexCmpFn func(Vertex, Vertex) (bool, error), edgeCmpFn func(Edge, Edge) (bool, error)) error {
n1, n2 := g.NumVertices(), graph.NumVertices()
if n1 != n2 {
return fmt.Errorf("base graph has %d vertices, while input graph has %d", n1, n2)
}
return metas
}
// AssociateData associates some data with the object in the graph in question.
func (g *Graph) AssociateData(data *resources.Data) {
// prometheus needs to be associated to this graph as well
g.prometheus = data.Prometheus
for k := range g.Adjacency {
*k.Res.Data() = *data
if e1, e2 := g.NumEdges(), graph.NumEdges(); e1 != e2 {
return fmt.Errorf("base graph has %d edges, while input graph has %d", e1, e2)
}
var m = make(map[Vertex]Vertex) // g to graph vertex correspondence
Loop:
// check vertices
for v1 := range g.Adjacency() { // for each vertex in g
for v2 := range graph.Adjacency() { // does it match in graph ?
b, err := vertexCmpFn(v1, v2)
if err != nil {
return errwrap.Wrapf(err, "could not run vertexCmpFn() properly")
}
// does it match ?
if b {
m[v1] = v2 // store the mapping
continue Loop
}
}
return fmt.Errorf("base graph, has no match in input graph for: %s", v1)
}
// vertices match :)
// is the mapping the right length?
if n1 := len(m); n1 != n2 {
return fmt.Errorf("mapping only has correspondence of %d, when it should have %d", n1, n2)
}
// check if mapping is unique (are there duplicates?)
m1 := []Vertex{}
m2 := []Vertex{}
for k, v := range m {
if VertexContains(k, m1) {
return fmt.Errorf("mapping from %s is used more than once to: %s", k, m1)
}
if VertexContains(v, m2) {
return fmt.Errorf("mapping to %s is used more than once from: %s", v, m2)
}
m1 = append(m1, k)
m2 = append(m2, v)
}
// check edges
for v1 := range g.Adjacency() { // for each vertex in g
v2 := m[v1] // lookup in map to get correspondence
// g.Adjacency()[v1] corresponds to graph.Adjacency()[v2]
if e1, e2 := len(g.Adjacency()[v1]), len(graph.Adjacency()[v2]); e1 != e2 {
return fmt.Errorf("base graph, vertex(%s) has %d edges, while input graph, vertex(%s) has %d", v1, e1, v2, e2)
}
for vv1, ee1 := range g.Adjacency()[v1] {
vv2 := m[vv1]
ee2 := graph.Adjacency()[v2][vv2]
// these are edges from v1 -> vv1 via ee1 (graph 1)
// to cmp to edges from v2 -> vv2 via ee2 (graph 2)
// check: (1) vv1 == vv2 ? (we've already checked this!)
// check: (2) ee1 == ee2
b, err := edgeCmpFn(ee1, ee2)
if err != nil {
return errwrap.Wrapf(err, "could not run edgeCmpFn() properly")
}
if !b {
return fmt.Errorf("base graph edge(%s) doesn't match input graph edge(%s)", ee1, ee2)
}
}
}
return nil // success!
}
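
An end-to-end usage sketch of GraphCmp (hypothetical code; the closures here simply compare String() output, which is one reasonable choice for the test vertices and edges):

func exampleGraphCmp() error {
	g1, err := NewGraph("g1")
	if err != nil {
		return err
	}
	g1.AddEdge(NV("a"), NV("b"), NE("e1"))

	g2, err := NewGraph("g2")
	if err != nil {
		return err
	}
	g2.AddEdge(NV("a"), NV("b"), NE("e1"))

	vertexCmp := func(v1, v2 Vertex) (bool, error) { return v1.String() == v2.String(), nil }
	edgeCmp := func(e1, e2 Edge) (bool, error) { return e1.String() == e2.String(), nil }
	return g1.GraphCmp(g2, vertexCmp, edgeCmp) // nil means topologically equal
}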
// VertexContains is an "in array" function to test for a vertex in a slice of vertices.
func VertexContains(needle *Vertex, haystack []*Vertex) bool {
func VertexContains(needle Vertex, haystack []Vertex) bool {
for _, v := range haystack {
if needle == v {
return true
@@ -673,7 +590,7 @@ func VertexContains(needle *Vertex, haystack []*Vertex) bool {
}
// EdgeContains is an "in array" function to test for an edge in a slice of edges.
func EdgeContains(needle *Edge, haystack []*Edge) bool {
func EdgeContains(needle Edge, haystack []Edge) bool {
for _, v := range haystack {
if needle == v {
return true
@@ -683,12 +600,23 @@ func EdgeContains(needle *Edge, haystack []*Edge) bool {
}
// Reverse reverses a list of vertices.
func Reverse(vs []*Vertex) []*Vertex {
//var out []*Vertex // XXX: golint suggests, but it fails testing
out := make([]*Vertex, 0) // empty list
func Reverse(vs []Vertex) []Vertex {
out := []Vertex{}
l := len(vs)
for i := range vs {
out = append(out, vs[l-i-1])
}
return out
}
// Sort the list of vertices and return a copy without modifying the input.
func Sort(vs []Vertex) []Vertex {
vertices := []Vertex{}
for _, v := range vs { // copy
vertices = append(vertices, v)
}
sort.Sort(VertexSlice(vertices))
return vertices
// sort.Sort(VertexSlice(vs)) // this is wrong, it would modify input!
//return vs
}
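
A short sketch of the two slice helpers (illustrative only); note that Sort copies its input, exactly as the trailing comment warns:

func exampleSortReverse() {
	in := []Vertex{NV("c"), NV("a"), NV("b")}
	sorted := Sort(in)          // [a b c], ordered by String()
	reversed := Reverse(sorted) // [c b a]
	_ = reversed
	// in is left untouched: still [c a b]
}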

File diff suppressed because it is too large

@@ -1,93 +0,0 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
import (
"testing"
"github.com/purpleidea/mgmt/resources"
)
func NewNoopResTestSema(name string, semas []string) *NoopResTest {
obj := &NoopResTest{
NoopRes: resources.NoopRes{
BaseRes: resources.BaseRes{
Name: name,
MetaParams: resources.MetaParams{
AutoGroup: true, // always autogroup
Sema: semas,
},
},
},
}
return obj
}
func TestPgraphSemaphoreGrouping1(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTestSema("a1", []string{"s:1"}))
a2 := NewVertex(NewNoopResTestSema("a2", []string{"s:2"}))
a3 := NewVertex(NewNoopResTestSema("a3", []string{"s:3"}))
g1.AddVertex(a1)
g1.AddVertex(a2)
g1.AddVertex(a3)
}
g2 := NewGraph("g2") // expected result
{
a123 := NewVertex(NewNoopResTestSema("a1,a2,a3", []string{"s:1", "s:2", "s:3"}))
g2.AddVertex(a123)
}
runGraphCmp(t, g1, g2)
}
func TestPgraphSemaphoreGrouping2(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTestSema("a1", []string{"s:10", "s:11"}))
a2 := NewVertex(NewNoopResTestSema("a2", []string{"s:2"}))
a3 := NewVertex(NewNoopResTestSema("a3", []string{"s:3"}))
g1.AddVertex(a1)
g1.AddVertex(a2)
g1.AddVertex(a3)
}
g2 := NewGraph("g2") // expected result
{
a123 := NewVertex(NewNoopResTestSema("a1,a2,a3", []string{"s:10", "s:11", "s:2", "s:3"}))
g2.AddVertex(a123)
}
runGraphCmp(t, g1, g2)
}
func TestPgraphSemaphoreGrouping3(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTestSema("a1", []string{"s:1", "s:2"}))
a2 := NewVertex(NewNoopResTestSema("a2", []string{"s:2"}))
a3 := NewVertex(NewNoopResTestSema("a3", []string{"s:3"}))
g1.AddVertex(a1)
g1.AddVertex(a2)
g1.AddVertex(a3)
}
g2 := NewGraph("g2") // expected result
{
a123 := NewVertex(NewNoopResTestSema("a1,a2,a3", []string{"s:1", "s:2", "s:3"}))
g2.AddVertex(a123)
}
runGraphCmp(t, g1, g2)
}

pgraph/subgraph.go (new file, 106 lines)

@@ -0,0 +1,106 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
// AddGraph adds the set of edges and vertices of a graph to the existing graph.
func (g *Graph) AddGraph(graph *Graph) {
g.addEdgeVertexGraphHelper(nil, graph, nil, false, false)
}
// AddEdgeVertexGraph adds a directed edge to the graph from a vertex.
// This is useful for flattening the relationship between a subgraph and an
// existing graph, without having to run the subgraph recursively. It adds the
// maximum number of edges, creating a relationship to every vertex.
func (g *Graph) AddEdgeVertexGraph(vertex Vertex, graph *Graph, edgeGenFn func(v1, v2 Vertex) Edge) {
g.addEdgeVertexGraphHelper(vertex, graph, edgeGenFn, false, false)
}
// AddEdgeVertexGraphLight adds a directed edge to the graph from a vertex.
// This is useful for flattening the relationship between a subgraph and an
// existing graph, without having to run the subgraph recursively. It adds the
// minimum number of edges, creating a relationship to the vertices with
// indegree equal to zero.
func (g *Graph) AddEdgeVertexGraphLight(vertex Vertex, graph *Graph, edgeGenFn func(v1, v2 Vertex) Edge) {
g.addEdgeVertexGraphHelper(vertex, graph, edgeGenFn, false, true)
}
// AddEdgeGraphVertex adds a directed edge to the vertex from a graph.
// This is useful for flattening the relationship between a subgraph and an
// existing graph, without having to run the subgraph recursively. It adds the
// maximum number of edges, creating a relationship from every vertex.
func (g *Graph) AddEdgeGraphVertex(graph *Graph, vertex Vertex, edgeGenFn func(v1, v2 Vertex) Edge) {
g.addEdgeVertexGraphHelper(vertex, graph, edgeGenFn, true, false)
}
// AddEdgeGraphVertexLight adds a directed edge to the vertex from a graph.
// This is useful for flattening the relationship between a subgraph and an
// existing graph, without having to run the subgraph recursively. It adds the
// minimum number of edges, creating a relationship from the vertices with
// outdegree equal to zero.
func (g *Graph) AddEdgeGraphVertexLight(graph *Graph, vertex Vertex, edgeGenFn func(v1, v2 Vertex) Edge) {
g.addEdgeVertexGraphHelper(vertex, graph, edgeGenFn, true, true)
}
// addEdgeVertexGraphHelper is a helper function to add a directed edges to the
// graph from a vertex, or vice-versa. It operates in this reverse direction by
// specifying the reverse argument as true. It is useful for flattening the
// relationship between a subgraph and an existing graph, without having to run
// the subgraph recursively. It adds the maximum number of edges, creating a
// relationship to or from every vertex if the light argument is false, and if
// it is true, it adds the minimum number of edges, creating a relationship to
// or from the vertices with an indegree or outdegree equal to zero, depending
// on whether reverse was specified.
func (g *Graph) addEdgeVertexGraphHelper(vertex Vertex, graph *Graph, edgeGenFn func(v1, v2 Vertex) Edge, reverse, light bool) {
var degree map[Vertex]int // compute all of the in/outdegree's if needed
if light && reverse {
degree = graph.OutDegree()
} else if light { // && !reverse
degree = graph.InDegree()
}
for _, v := range graph.VerticesSorted() { // sort to help out edgeGenFn
// forward:
// we only want to add edges to indegree == 0, because every
// other vertex is a dependency of at least one of those
// reverse:
// we only want to add edges to outdegree == 0, because every
// other vertex is a pre-requisite to at least one of these
if light && degree[v] != 0 {
continue
}
g.AddVertex(v) // ensure vertex is part of the graph
if vertex != nil && reverse {
edge := edgeGenFn(v, vertex) // generate a new unique edge
g.AddEdge(v, vertex, edge)
} else if vertex != nil { // && !reverse
edge := edgeGenFn(vertex, v)
g.AddEdge(vertex, v, edge)
}
}
// also remember to suck in all of the graph's edges too!
for v1 := range graph.Adjacency() {
for v2, e := range graph.Adjacency()[v1] {
g.AddEdge(v1, v2, e)
}
}
}
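
A worked example of how the light variants pick their endpoints, matching the tests that follow: for a subgraph containing only the edge v4 -> v5, InDegree() gives v4:0, v5:1 and OutDegree() gives v4:1, v5:0. So AddEdgeVertexGraphLight(v3, sub, fn) adds only v3 -> v4, AddEdgeGraphVertexLight(sub, v3, fn) adds only v5 -> v3, while the non-light variants connect v3 to (or from) both v4 and v5.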

pgraph/subgraph_test.go (new file, 187 lines)

@@ -0,0 +1,187 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
import (
"testing"
)
func TestAddEdgeGraph1(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g := &Graph{}
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
sub := &Graph{}
sub.AddEdge(v4, v5, e3)
g.AddGraph(sub)
// expected (can re-use the same vertices)
expected := &Graph{}
expected.AddEdge(v1, v3, e1)
expected.AddEdge(v2, v3, e2)
expected.AddEdge(v4, v5, e3)
//expected.AddEdge(v3, v4, NE("v3,v4"))
//expected.AddEdge(v3, v5, NE("v3,v5"))
if s := runGraphCmp(t, g, expected); s != "" {
t.Errorf("%s", s)
}
}
func TestAddEdgeVertexGraph1(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g := &Graph{}
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
sub := &Graph{}
sub.AddEdge(v4, v5, e3)
g.AddEdgeVertexGraph(v3, sub, edgeGenFn)
// expected (can re-use the same vertices)
expected := &Graph{}
expected.AddEdge(v1, v3, e1)
expected.AddEdge(v2, v3, e2)
expected.AddEdge(v4, v5, e3)
expected.AddEdge(v3, v4, NE("v3,v4"))
expected.AddEdge(v3, v5, NE("v3,v5"))
if s := runGraphCmp(t, g, expected); s != "" {
t.Errorf("%s", s)
}
}
func TestAddEdgeGraphVertex1(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g := &Graph{}
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
sub := &Graph{}
sub.AddEdge(v4, v5, e3)
g.AddEdgeGraphVertex(sub, v3, edgeGenFn)
// expected (can re-use the same vertices)
expected := &Graph{}
expected.AddEdge(v1, v3, e1)
expected.AddEdge(v2, v3, e2)
expected.AddEdge(v4, v5, e3)
expected.AddEdge(v4, v3, NE("v4,v3"))
expected.AddEdge(v5, v3, NE("v5,v3"))
if s := runGraphCmp(t, g, expected); s != "" {
t.Errorf("%s", s)
}
}
func TestAddEdgeVertexGraphLight1(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g := &Graph{}
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
sub := &Graph{}
sub.AddEdge(v4, v5, e3)
g.AddEdgeVertexGraphLight(v3, sub, edgeGenFn)
// expected (can re-use the same vertices)
expected := &Graph{}
expected.AddEdge(v1, v3, e1)
expected.AddEdge(v2, v3, e2)
expected.AddEdge(v4, v5, e3)
expected.AddEdge(v3, v4, NE("v3,v4"))
//expected.AddEdge(v3, v5, NE("v3,v5")) // not needed with light
if s := runGraphCmp(t, g, expected); s != "" {
t.Errorf("%s", s)
}
}
func TestAddEdgeGraphVertexLight1(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g := &Graph{}
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
sub := &Graph{}
sub.AddEdge(v4, v5, e3)
g.AddEdgeGraphVertexLight(sub, v3, edgeGenFn)
// expected (can re-use the same vertices)
expected := &Graph{}
expected.AddEdge(v1, v3, e1)
expected.AddEdge(v2, v3, e2)
expected.AddEdge(v4, v5, e3)
//expected.AddEdge(v4, v3, NE("v4,v3")) // not needed with light
expected.AddEdge(v5, v3, NE("v5,v3"))
if s := runGraphCmp(t, g, expected); s != "" {
t.Errorf("%s", s)
}
}

pgraph/util_test.go (new file, 93 lines)

@@ -0,0 +1,93 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
import (
"fmt"
"testing"
)
// vertex is a test struct to test the library.
type vertex struct {
name string
}
// String is a required method of the Vertex interface that we must fulfill.
func (v *vertex) String() string {
return v.name
}
// NV is a helper function to make testing easier. It creates a new noop vertex.
func NV(s string) Vertex {
return &vertex{s}
}
// edge is a test struct to test the library.
type edge struct {
name string
}
// String is a required method of the Edge interface that we must fulfill.
func (e *edge) String() string {
return e.name
}
// NE is a helper function to make testing easier. It creates a new noop edge.
func NE(s string) Edge {
return &edge{s}
}
// edgeGenFn generates unique edges for each vertex pair, assuming unique
// vertices.
func edgeGenFn(v1, v2 Vertex) Edge {
return NE(fmt.Sprintf("%s,%s", v1, v2))
}
func vertexAddFn(v Vertex) error {
return nil
}
func vertexRemoveFn(v Vertex) error {
return nil
}
func runGraphCmp(t *testing.T, g1, g2 *Graph) string {
err := g1.GraphCmp(g2, strVertexCmpFn, strEdgeCmpFn)
if err != nil {
str := ""
str += fmt.Sprintf(" actual (g1): %v%s", g1, fullPrint(g1))
str += fmt.Sprintf("expected (g2): %v%s", g2, fullPrint(g2))
str += fmt.Sprintf("cmp error:")
str += fmt.Sprintf("%v", err)
return str
}
return ""
}
func fullPrint(g *Graph) (str string) {
str += "\n"
for v := range g.Adjacency() {
str += fmt.Sprintf("* v: %s\n", v)
}
for v1 := range g.Adjacency() {
for v2, e := range g.Adjacency()[v1] {
str += fmt.Sprintf("* e: %s -> %s # %s\n", v1, v2, e)
}
}
return
}
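
runGraphCmp references strVertexCmpFn and strEdgeCmpFn, which fall outside the hunks shown here. Since the test vertex and edge types are identified purely by their String() output, a plausible minimal form (an assumption, not the actual code from the patch) would be:

func strVertexCmpFn(v1, v2 Vertex) (bool, error) {
	return v1.String() == v2.String(), nil // assumed: compare by stable string form
}

func strEdgeCmpFn(e1, e2 Edge) (bool, error) {
	return e1.String() == e2.String(), nil // assumed: compare by stable string form
}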


@@ -1,36 +1,52 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package prometheus provides functions that are useful to control and manage
// the build-in prometheus instance.
// the built-in prometheus instance.
package prometheus
import (
"errors"
"net/http"
"strconv"
"sync"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
errwrap "github.com/pkg/errors"
)
// DefaultPrometheusListen is registered in
// https://github.com/prometheus/prometheus/wiki/Default-port-allocations
const DefaultPrometheusListen = "127.0.0.1:9233"
// ResState represents the status of a resource.
type ResState int
const (
// ResStateOK represents a working resource
ResStateOK ResState = iota
// ResStateSoftFail represents a resource in soft fail (will be retried)
ResStateSoftFail
// ResStateHardFail represents a resource in hard fail (will NOT be retried)
ResStateHardFail
)
// Prometheus is the struct that contains information about the
// prometheus instance. Run Init() on it.
type Prometheus struct {
@@ -38,7 +54,18 @@ type Prometheus struct {
checkApplyTotal *prometheus.CounterVec // total of CheckApplies that have been triggered
pgraphStartTimeSeconds prometheus.Gauge // process start time in seconds since unix epoch
managedResources *prometheus.GaugeVec // Resources we manage now
failedResourcesTotal *prometheus.CounterVec // Total of failures since mgmt has started
failedResources *prometheus.GaugeVec // Number of currently failing resources
resourcesState map[string]resStateWithKind // Maps each resource to its current kind/state
mutex *sync.Mutex // Mutex used to update resourcesState
}
// resStateWithKind is used to count the failures by kind
type resStateWithKind struct {
state ResState
kind string
}
// Init some parameters - currently the Listen address.
@@ -46,6 +73,10 @@ func (obj *Prometheus) Init() error {
if len(obj.Listen) == 0 {
obj.Listen = DefaultPrometheusListen
}
obj.mutex = &sync.Mutex{}
obj.resourcesState = make(map[string]resStateWithKind)
obj.checkApplyTotal = prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "mgmt_checkapply_total",
@@ -68,6 +99,38 @@ func (obj *Prometheus) Init() error {
)
prometheus.MustRegister(obj.pgraphStartTimeSeconds)
obj.managedResources = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "mgmt_resources",
Help: "Number of managed resources.",
},
// kind: resource type: Svc, File, ...
[]string{"kind"},
)
prometheus.MustRegister(obj.managedResources)
obj.failedResourcesTotal = prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "mgmt_failures_total",
Help: "Total of failed resources.",
},
// kind: resource type: Svc, File, ...
// failure: soft or hard
[]string{"kind", "failure"},
)
prometheus.MustRegister(obj.failedResourcesTotal)
obj.failedResources = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "mgmt_failures",
Help: "Number of failing resources.",
},
// kind: resource type: Svc, File, ...
// failure: soft or hard
[]string{"kind", "failure"},
)
prometheus.MustRegister(obj.failedResources)
return nil
}
@@ -86,6 +149,40 @@ func (obj *Prometheus) Stop() error {
return nil
}
// InitKindMetrics initializes prometheus counters. For each kind of
// resource, checkApply counters are initialized with all the possible values.
func (obj *Prometheus) InitKindMetrics(kinds []string) error {
if obj == nil {
return nil // happens when mgmt is launched without --prometheus
}
bools := []bool{true, false}
for _, kind := range kinds {
for _, apply := range bools {
for _, eventful := range bools {
for _, errorful := range bools {
labels := prometheus.Labels{
"kind": kind,
"apply": strconv.FormatBool(apply),
"eventful": strconv.FormatBool(eventful),
"errorful": strconv.FormatBool(errorful),
}
obj.checkApplyTotal.With(labels)
}
}
}
obj.managedResources.With(prometheus.Labels{"kind": kind})
failures := []string{"soft", "hard"}
for _, f := range failures {
failLabels := prometheus.Labels{"kind": kind, "failure": f}
obj.failedResourcesTotal.With(failLabels)
obj.failedResources.With(failLabels)
}
}
return nil
}
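
For each kind, InitKindMetrics pre-creates every label combination so the time series exist before they are first used. The resulting cardinality is easy to work out, and it matches the expectations in the test further below:

mgmt_checkapply_total: 2 (apply) x 2 (eventful) x 2 (errorful) = 8 series per kind
mgmt_failures_total and mgmt_failures: 2 failure modes (soft, hard) = 2 series each per kind
mgmt_resources: 1 series per kind

With the two kinds used in the test ("file" and "exec") that gives 16, 4, 4 and 2 series respectively.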
// UpdateCheckApplyTotal increments the CheckApply counter for the given kind
// and result labels.
func (obj *Prometheus) UpdateCheckApplyTotal(kind string, apply, eventful, errorful bool) error {
@@ -107,3 +204,94 @@ func (obj *Prometheus) UpdatePgraphStartTime() error {
obj.pgraphStartTimeSeconds.SetToCurrentTime()
return nil
}
// AddManagedResource increments the Managed Resource counter and updates the resource status.
func (obj *Prometheus) AddManagedResource(resUUID string, rtype string) error {
if obj == nil {
return nil // happens when mgmt is launched without --prometheus
}
obj.managedResources.With(prometheus.Labels{"kind": rtype}).Inc()
if err := obj.UpdateState(resUUID, rtype, ResStateOK); err != nil {
return errwrap.Wrapf(err, "can't update the resource status in the map")
}
return nil
}
// RemoveManagedResource decrements the Managed Resource counter and updates the resource status.
func (obj *Prometheus) RemoveManagedResource(resUUID string, rtype string) error {
if obj == nil {
return nil // happens when mgmt is launched without --prometheus
}
obj.managedResources.With(prometheus.Labels{"kind": rtype}).Dec()
if err := obj.deleteState(resUUID); err != nil {
return errwrap.Wrapf(err, "can't remove the resource status from the map")
}
return nil
}
// deleteState removes the resource from the state map and re-populates the failing gauge.
func (obj *Prometheus) deleteState(resUUID string) error {
if obj == nil {
return nil // happens when mgmt is launched without --prometheus
}
obj.mutex.Lock()
delete(obj.resourcesState, resUUID)
obj.mutex.Unlock()
if err := obj.updateFailingGauge(); err != nil {
return errwrap.Wrapf(err, "can't update the failing gauge")
}
return nil
}
// UpdateState updates the state of the resources in our internal state map
// then triggers a refresh of the failing gauge.
func (obj *Prometheus) UpdateState(resUUID string, rtype string, newState ResState) error {
defer obj.updateFailingGauge()
if obj == nil {
return nil // happens when mgmt is launched without --prometheus
}
obj.mutex.Lock()
obj.resourcesState[resUUID] = resStateWithKind{state: newState, kind: rtype}
obj.mutex.Unlock()
if newState != ResStateOK {
var strState string
if newState == ResStateSoftFail {
strState = "soft"
} else if newState == ResStateHardFail {
strState = "hard"
} else {
return errors.New("state should be soft or hard failure")
}
obj.failedResourcesTotal.With(prometheus.Labels{"kind": rtype, "failure": strState}).Inc()
}
return nil
}
// updateFailingGauge refreshes the failing gauge by parsing the internal
// state map.
func (obj *Prometheus) updateFailingGauge() error {
if obj == nil {
return nil // happens when mgmt is launched without --prometheus
}
var softFails, hardFails map[string]float64
softFails = make(map[string]float64)
hardFails = make(map[string]float64)
for _, v := range obj.resourcesState {
if v.state == ResStateSoftFail {
softFails[v.kind]++
} else if v.state == ResStateHardFail {
hardFails[v.kind]++
}
}
// TODO: we might want to Zero the metrics we are not using
// because in prometheus design the metrics keep living for some time
// even after they are removed.
obj.failedResources.Reset()
for k, v := range softFails {
obj.failedResources.With(prometheus.Labels{"kind": k, "failure": "soft"}).Set(v)
}
for k, v := range hardFails {
obj.failedResources.With(prometheus.Labels{"kind": k, "failure": "hard"}).Set(v)
}
return nil
}
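
A rough end-to-end sketch of how these counters are meant to be driven (hypothetical caller code; the UUID and kind strings are made up for illustration):

func examplePrometheusUsage() error {
	prom := &Prometheus{} // Listen defaults to DefaultPrometheusListen in Init()
	if err := prom.Init(); err != nil {
		return err
	}
	if err := prom.InitKindMetrics([]string{"file"}); err != nil {
		return err
	}
	// a resource comes under management...
	if err := prom.AddManagedResource("file-uuid-1", "file"); err != nil {
		return err
	}
	// ...then soft-fails, which bumps mgmt_failures_total and the mgmt_failures gauge
	if err := prom.UpdateState("file-uuid-1", "file", ResStateSoftFail); err != nil {
		return err
	}
	// ...and is eventually removed from management
	return prom.RemoveManagedResource("file-uuid-1", "file")
}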


@@ -0,0 +1,75 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package prometheus
import (
"testing"
"github.com/prometheus/client_golang/prometheus"
)
// TestInitKindMetrics tests that we are initializing the Prometheus
// metrics correctly for all kinds of resources.
func TestInitKindMetrics(t *testing.T) {
var prom Prometheus
prom.Init()
prom.InitKindMetrics([]string{"file", "exec"})
// Get a list of metrics collected by Prometheus.
// This is the only way to get Prometheus metrics
// without implicitly creating them.
gatherer := prometheus.DefaultGatherer
metrics, err := gatherer.Gather()
if err != nil {
t.Errorf("Error while gathering metrics: %s", err)
return
}
// expectedMetrics is a map: keys are metric names and
// values are the expected and actual counts of metrics with
// that name.
expectedMetrics := map[string][2]int{
"mgmt_checkapply_total": {
16, 0,
},
"mgmt_failures_total": {
4, 0,
},
"mgmt_failures": {
4, 0,
},
"mgmt_resources": {
2, 0,
},
}
for _, metric := range metrics {
for name, count := range expectedMetrics {
if *metric.Name == name {
value := len(metric.Metric)
expectedMetrics[name] = [2]int{count[0], value}
}
}
}
for name, count := range expectedMetrics {
if count[1] != count[0] {
t.Errorf("%s: Expected %d metrics, got %d metrics", name, count[0], count[1])
}
}
}


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package puppet
@@ -75,37 +75,61 @@ func (obj *GAPI) Graph() (*pgraph.Graph, error) {
}
// Next returns nil errors every time there could be a new graph.
func (obj *GAPI) Next() chan error {
if obj.data.NoWatch {
return nil
}
func (obj *GAPI) Next() chan gapi.Next {
puppetChan := func() <-chan time.Time { // helper function
return time.Tick(time.Duration(RefreshInterval(obj.PuppetConf)) * time.Second)
}
ch := make(chan error)
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
ch <- fmt.Errorf("the puppet GAPI is not initialized")
next := gapi.Next{
Err: fmt.Errorf("the puppet GAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
return
}
pChan := puppetChan()
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
pChan := make(<-chan time.Time)
// NOTE: we don't look at obj.data.NoConfigWatch since emulating
// puppet means we do not switch graphs on code changes anyways.
if obj.data.NoStreamWatch {
pChan = nil
} else {
pChan = puppetChan()
}
for {
select {
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case _, ok := <-pChan:
if !ok { // the channel closed!
return
}
log.Printf("Puppet: Generating new graph...")
pChan = puppetChan() // TODO: okay to update interval in case it changed?
select {
case ch <- nil: // trigger a run (send a msg)
// unblock if we exit while waiting to send!
case <-obj.closeChan:
return
}
log.Printf("Puppet: Generating new graph...")
if obj.data.NoStreamWatch {
pChan = nil
} else {
pChan = puppetChan() // TODO: okay to update interval in case it changed?
}
next := gapi.Next{
//Exit: true, // TODO: for permanent shutdown!
Err: nil,
}
select {
case ch <- next: // trigger a run (send a msg)
// unblock if we exit while waiting to send!
case <-obj.closeChan:
return
}


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package puppet provides the integration entrypoint for the puppet language.
@@ -89,20 +89,21 @@ func runPuppetCommand(cmd *exec.Cmd) ([]byte, error) {
// ParseConfigFromPuppet takes a special puppet param string and config and
// returns the graph configuration structure.
func ParseConfigFromPuppet(puppetParam, puppetConf string) *yamlgraph.GraphConfig {
var puppetConfArg string
if puppetConf != "" {
puppetConfArg = "--config=" + puppetConf
var args []string
if puppetParam == "agent" {
args = []string{"mgmtgraph", "print"}
} else if strings.HasSuffix(puppetParam, ".pp") {
args = []string{"mgmtgraph", "print", "--manifest", puppetParam}
} else {
args = []string{"mgmtgraph", "print", "--code", puppetParam}
}
var cmd *exec.Cmd
if puppetParam == "agent" {
cmd = exec.Command("puppet", "mgmtgraph", "print", puppetConfArg)
} else if strings.HasSuffix(puppetParam, ".pp") {
cmd = exec.Command("puppet", "mgmtgraph", "print", puppetConfArg, "--manifest", puppetParam)
} else {
cmd = exec.Command("puppet", "mgmtgraph", "print", puppetConfArg, "--code", puppetParam)
if puppetConf != "" {
args = append(args, "--config="+puppetConf)
}
cmd := exec.Command("puppet", args...)
log.Println("Puppet: launching translator")
var config yamlgraph.GraphConfig
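
As a usage sketch of the refactored argument handling (the manifest and config paths here are made-up examples), a ".pp" parameter ends up on the --manifest flag and the optional puppet.conf is simply appended:

graphConfig := ParseConfigFromPuppet("/etc/mgmt/site.pp", "/etc/puppetlabs/puppet/puppet.conf")
_ = graphConfig // a *yamlgraph.GraphConfig
// the command built above is roughly:
//   puppet mgmtgraph print --manifest /etc/mgmt/site.pp --config=/etc/puppetlabs/puppet/puppet.conf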


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package recwatch


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package recwatch


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package recwatch provides recursive file watching events via fsnotify.

View File

@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package remote provides the remoting facilities for agentless execution.
@@ -692,7 +692,7 @@ type Remotes struct {
fileWatch chan string
cConns uint16 // number of concurrent ssh connections, zero means unlimited
interactive bool // allow interactive prompting
sshPrivIdRsa string // path to ~/.ssh/id_rsa
sshPrivIDRsa string // path to ~/.ssh/id_rsa
caching bool // whether to try and cache the copy of the binary
depth uint16 // depth of this node in the remote execution hierarchy
prefix string // folder prefix to use for misc storage
@@ -702,6 +702,7 @@ type Remotes struct {
wg sync.WaitGroup // keep track of each running SSH connection
lock sync.Mutex // mutex for access to sshmap
sshmap map[string]*SSH // map to each SSH struct with the remote as the key
running chan struct{} // closes when main loop is running
exiting bool // flag to let us know if we're exiting
exitChan chan struct{} // closes when we should exit
semaphore *semaphore.Semaphore // counting semaphore to limit concurrent connections
@@ -714,7 +715,7 @@ type Remotes struct {
}
// NewRemotes builds a Remotes struct.
func NewRemotes(clientURLs, remoteURLs []string, noop bool, remotes []string, fileWatch chan string, cConns uint16, interactive bool, sshPrivIdRsa string, caching bool, depth uint16, prefix string, converger cv.Converger, convergerCb func(func(map[string]bool) error) (func(), error), flags Flags) *Remotes {
func NewRemotes(clientURLs, remoteURLs []string, noop bool, remotes []string, fileWatch chan string, cConns uint16, interactive bool, sshPrivIDRsa string, caching bool, depth uint16, prefix string, converger cv.Converger, convergerCb func(func(map[string]bool) error) (func(), error), flags Flags) *Remotes {
return &Remotes{
clientURLs: clientURLs,
remoteURLs: remoteURLs,
@@ -723,13 +724,14 @@ func NewRemotes(clientURLs, remoteURLs []string, noop bool, remotes []string, fi
fileWatch: fileWatch,
cConns: cConns,
interactive: interactive,
sshPrivIdRsa: sshPrivIdRsa,
sshPrivIDRsa: sshPrivIDRsa,
caching: caching,
depth: depth,
prefix: prefix,
converger: converger,
convergerCb: convergerCb,
sshmap: make(map[string]*SSH),
running: make(chan struct{}),
exitChan: make(chan struct{}),
semaphore: semaphore.NewSemaphore(int(cConns)),
hostnames: make([]string, len(remotes)),
@@ -830,18 +832,18 @@ func (obj *Remotes) NewSSH(file string) (*SSH, error) {
// sshKeyAuth is a helper function to get the ssh key auth struct needed
func (obj *Remotes) sshKeyAuth() (ssh.AuthMethod, error) {
if obj.sshPrivIdRsa == "" {
if obj.sshPrivIDRsa == "" {
return nil, fmt.Errorf("empty path specified")
}
p := ""
// TODO: this doesn't match strings of the form: ~james/.ssh/id_rsa
if strings.HasPrefix(obj.sshPrivIdRsa, "~/") {
if strings.HasPrefix(obj.sshPrivIDRsa, "~/") {
usr, err := user.Current()
if err != nil {
log.Printf("Remote: Can't find home directory automatically.")
return nil, err
}
p = path.Join(usr.HomeDir, obj.sshPrivIdRsa[len("~/"):])
p = path.Join(usr.HomeDir, obj.sshPrivIDRsa[len("~/"):])
}
if p == "" {
return nil, fmt.Errorf("empty path specified")
@@ -1022,11 +1024,17 @@ func (obj *Remotes) Run() {
}(sshobj, f)
obj.lock.Unlock()
}
close(obj.running) // notify
}
// Ready closes its returned channel when the Run method is up and ready. It is
// useful to know when ready, since we often execute Run in a goroutine.
func (obj *Remotes) Ready() <-chan struct{} { return obj.running }
// Exit causes as much of the Remotes struct to shut down as quickly and as
// cleanly as possible. It only returns once everything is shut down.
func (obj *Remotes) Exit() error {
<-obj.running // wait for Run to be finished before we exit!
obj.lock.Lock()
obj.exiting = true // don't spawn new ones once this flag is set!
obj.lock.Unlock()
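
The new running channel turns startup into an explicit hand-off: Run closes it once the main loop is up, Ready exposes it, and Exit waits on it before tearing things down. A minimal usage sketch (hypothetical caller, not part of this patch; the NewRemotes arguments are elided):

remotes := NewRemotes(/* ... */)
go remotes.Run()  // typically run in its own goroutine
<-remotes.Ready() // closes once the main loop is running
// ... do work ...
if err := remotes.Exit(); err != nil { // blocks until shutdown completes
	log.Printf("remote: exit error: %v", err)
}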

View File

@@ -1,21 +1,21 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
package resources
import (
"fmt"
@@ -26,35 +26,36 @@ import (
"time"
"github.com/purpleidea/mgmt/event"
"github.com/purpleidea/mgmt/resources"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/prometheus"
"github.com/purpleidea/mgmt/util"
multierr "github.com/hashicorp/go-multierror"
errwrap "github.com/pkg/errors"
"golang.org/x/time/rate"
)
// GetTimestamp returns the timestamp of a vertex
func (v *Vertex) GetTimestamp() int64 {
return v.timestamp
// SentinelErr is a sentinel error type that wraps an arbitrary error.
type SentinelErr struct {
err error
}
// UpdateTimestamp updates the timestamp on a vertex and returns the new value
func (v *Vertex) UpdateTimestamp() int64 {
v.timestamp = time.Now().UnixNano() // update
return v.timestamp
// Error is the required method to fulfill the error type.
func (obj *SentinelErr) Error() string {
return obj.err.Error()
}
// OKTimestamp returns true if this element can run right now.
func (g *Graph) OKTimestamp(v *Vertex) bool {
func (obj *BaseRes) OKTimestamp() bool {
// these are all the vertices pointing TO v, eg: ??? -> v
for _, n := range g.IncomingGraphVertices(v) {
for _, n := range obj.Graph.IncomingGraphVertices(obj.Vertex) {
// if the vertex has a greater timestamp than any pre-req (n)
// then we can't run right now...
// if they're equal (eg: on init of 0) then we also can't run
// b/c we should let our pre-req's go first...
x, y := v.GetTimestamp(), n.GetTimestamp()
if g.Flags.Debug {
log.Printf("%s[%s]: OKTimestamp: (%v) >= %s[%s](%v): !%v", v.Kind(), v.GetName(), x, n.Kind(), n.GetName(), y, x >= y)
x, y := obj.Timestamp(), VtoR(n).Timestamp()
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: OKTimestamp: (%v) >= %s(%v): !%v", obj, x, n, y, x >= y)
}
if x >= y {
return false
@@ -64,28 +65,35 @@ func (g *Graph) OKTimestamp(v *Vertex) bool {
}
// Poke tells nodes after me in the dependency graph that they need to refresh.
func (g *Graph) Poke(v *Vertex) error {
func (obj *BaseRes) Poke() error {
// if we're pausing (or exiting) then we should suspend poke's so that
// the graph doesn't go on running forever until it's completely done!
// this is an optional feature which we can do by default on user exit
if obj.Graph.FastPause {
return nil // TODO: should this be an error instead?
}
var wg sync.WaitGroup
// these are all the vertices pointing AWAY FROM v, eg: v -> ???
for _, n := range g.OutgoingGraphVertices(v) {
for _, n := range obj.Graph.OutgoingGraphVertices(obj.Vertex) {
// we can skip this poke if resource hasn't done work yet... it
// needs to be poked if already running, or not running though!
// TODO: does this need an || activity flag?
if n.Res.GetState() != resources.ResStateProcess {
if g.Flags.Debug {
log.Printf("%s[%s]: Poke: %s[%s]", v.Kind(), v.GetName(), n.Kind(), n.GetName())
if VtoR(n).GetState() != ResStateProcess {
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: Poke: %s", obj, n)
}
wg.Add(1)
go func(nn *Vertex) error {
go func(nn pgraph.Vertex) error {
defer wg.Done()
//edge := g.Adjacency[v][nn] // lookup
//edge := obj.Graph.adjacency[v][nn] // lookup
//notify := edge.Notify && edge.Refresh()
return nn.SendEvent(event.EventPoke, nil)
return VtoR(nn).SendEvent(event.EventPoke, nil)
}(n)
} else {
if g.Flags.Debug {
log.Printf("%s[%s]: Poke: %s[%s]: Skipped!", v.Kind(), v.GetName(), n.Kind(), n.GetName())
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: Poke: %s: Skipped!", obj, n)
}
}
}
@@ -95,30 +103,30 @@ func (g *Graph) Poke(v *Vertex) error {
}
// BackPoke pokes the pre-requisites that are stale and need to run before I can run.
func (g *Graph) BackPoke(v *Vertex) {
func (obj *BaseRes) BackPoke() {
var wg sync.WaitGroup
// these are all the vertices pointing TO v, eg: ??? -> v
for _, n := range g.IncomingGraphVertices(v) {
x, y, s := v.GetTimestamp(), n.GetTimestamp(), n.Res.GetState()
for _, n := range obj.Graph.IncomingGraphVertices(obj.Vertex) {
x, y, s := obj.Timestamp(), VtoR(n).Timestamp(), VtoR(n).GetState()
// If the parent timestamp needs poking AND it's not running
// Process, then poke it. If the parent is in ResStateProcess it
// means that an event is pending, so we'll be expecting a poke
// back soon, so we can safely discard the extra parent poke...
// TODO: implement a stateLT (less than) to tell if something
// happens earlier in the state cycle and that doesn't wrap nil
if x >= y && (s != resources.ResStateProcess && s != resources.ResStateCheckApply) {
if g.Flags.Debug {
log.Printf("%s[%s]: BackPoke: %s[%s]", v.Kind(), v.GetName(), n.Kind(), n.GetName())
if x >= y && (s != ResStateProcess && s != ResStateCheckApply) {
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: BackPoke: %s", obj, n)
}
wg.Add(1)
go func(nn *Vertex) error {
go func(nn pgraph.Vertex) error {
defer wg.Done()
return nn.SendEvent(event.EventBackPoke, nil)
return VtoR(nn).SendEvent(event.EventBackPoke, nil)
}(n)
} else {
if g.Flags.Debug {
log.Printf("%s[%s]: BackPoke: %s[%s]: Skipped!", v.Kind(), v.GetName(), n.Kind(), n.GetName())
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: BackPoke: %s: Skipped!", obj, n)
}
}
}
@@ -128,10 +136,11 @@ func (g *Graph) BackPoke(v *Vertex) {
// RefreshPending determines if any previous nodes have a refresh pending here.
// If this is true, it means I am expected to apply a refresh when I next run.
func (g *Graph) RefreshPending(v *Vertex) bool {
func (obj *BaseRes) RefreshPending() bool {
var refresh bool
for _, edge := range g.IncomingGraphEdges(v) {
for _, edge := range obj.Graph.IncomingGraphEdges(obj.Vertex) {
// if we asked for a notify *and* if one is pending!
edge := edge.(*Edge) // panic if wrong
if edge.Notify && edge.Refresh() {
refresh = true
break
@@ -141,8 +150,9 @@ func (g *Graph) RefreshPending(v *Vertex) bool {
}
// SetUpstreamRefresh sets the refresh value to any upstream vertices.
func (g *Graph) SetUpstreamRefresh(v *Vertex, b bool) {
for _, edge := range g.IncomingGraphEdges(v) {
func (obj *BaseRes) SetUpstreamRefresh(b bool) {
for _, edge := range obj.Graph.IncomingGraphEdges(obj.Vertex) {
edge := edge.(*Edge) // panic if wrong
if edge.Notify {
edge.SetRefresh(b)
}
@@ -150,8 +160,9 @@ func (g *Graph) SetUpstreamRefresh(v *Vertex, b bool) {
}
// SetDownstreamRefresh sets the refresh value to any downstream vertices.
func (g *Graph) SetDownstreamRefresh(v *Vertex, b bool) {
for _, edge := range g.OutgoingGraphEdges(v) {
func (obj *BaseRes) SetDownstreamRefresh(b bool) {
for _, edge := range obj.Graph.OutgoingGraphEdges(obj.Vertex) {
edge := edge.(*Edge) // panic if wrong
// if we asked for a notify *and* if one is pending!
if edge.Notify {
edge.SetRefresh(b)
@@ -160,14 +171,25 @@ func (g *Graph) SetDownstreamRefresh(v *Vertex, b bool) {
}
// Process is the primary function to execute for a particular vertex in the graph.
func (g *Graph) Process(v *Vertex) error {
obj := v.Res
if g.Flags.Debug {
log.Printf("%s[%s]: Process()", obj.Kind(), obj.GetName())
func (obj *BaseRes) Process() error {
if obj.debug {
log.Printf("%s: Process()", obj)
}
// FIXME: should these SetState methods be here or after the sema code?
defer obj.SetState(resources.ResStateNil) // reset state when finished
obj.SetState(resources.ResStateProcess)
defer obj.SetState(ResStateNil) // reset state when finished
obj.SetState(ResStateProcess)
// is it okay to run dependency wise right now?
// if not, that's okay because when the dependency runs, it will poke
// us back and we will run if needed then!
if !obj.OKTimestamp() {
go obj.BackPoke()
return nil
}
// timestamp must be okay...
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: OKTimestamp(%v)", obj, obj.Timestamp())
}
// semaphores!
// These shouldn't ever block an exit, since the graph should eventually
@@ -177,35 +199,23 @@ func (g *Graph) Process(v *Vertex) error {
// The exception is that semaphores with a zero count will always block!
// TODO: Add a close mechanism to close/unblock zero count semaphores...
semas := obj.Meta().Sema
if g.Flags.Debug && len(semas) > 0 {
log.Printf("%s[%s]: Sema: P(%s)", obj.Kind(), obj.GetName(), strings.Join(semas, ", "))
if obj.debug && len(semas) > 0 {
log.Printf("%s: Sema: P(%s)", obj, strings.Join(semas, ", "))
}
if err := g.SemaLock(semas); err != nil { // lock
if err := obj.Graph.SemaLock(semas); err != nil { // lock
// NOTE: in practice, this might not ever be truly necessary...
return fmt.Errorf("shutdown of semaphores")
}
defer g.SemaUnlock(semas) // unlock
if g.Flags.Debug && len(semas) > 0 {
defer log.Printf("%s[%s]: Sema: V(%s)", obj.Kind(), obj.GetName(), strings.Join(semas, ", "))
defer obj.Graph.SemaUnlock(semas) // unlock
if obj.debug && len(semas) > 0 {
defer log.Printf("%s: Sema: V(%s)", obj, strings.Join(semas, ", "))
}
var ok = true
var applied = false // did we run an apply?
// is it okay to run dependency wise right now?
// if not, that's okay because when the dependency runs, it will poke
// us back and we will run if needed then!
if !g.OKTimestamp(v) {
go g.BackPoke(v)
return nil
}
// timestamp must be okay...
if g.Flags.Debug {
log.Printf("%s[%s]: OKTimestamp(%v)", obj.Kind(), obj.GetName(), v.GetTimestamp())
}
// connect any senders to receivers and detect if values changed
if updated, err := obj.SendRecv(obj); err != nil {
if updated, err := obj.SendRecv(obj.Res); err != nil {
return errwrap.Wrapf(err, "could not SendRecv in Process")
} else if len(updated) > 0 {
for _, changed := range updated {
@@ -221,16 +231,16 @@ func (g *Graph) Process(v *Vertex) error {
var checkOK bool
var err error
if g.Flags.Debug {
log.Printf("%s[%s]: CheckApply(%t)", obj.Kind(), obj.GetName(), !noop)
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: CheckApply(%t)", obj, !noop)
}
// lookup the refresh (notification) variable
refresh = g.RefreshPending(v) // do i need to perform a refresh?
refresh = obj.RefreshPending() // do i need to perform a refresh?
obj.SetRefresh(refresh) // tell the resource
// changes can occur after this...
obj.SetState(resources.ResStateCheckApply)
obj.SetState(ResStateCheckApply)
// check cached state, to skip CheckApply; can't skip if refreshing
if !refresh && obj.IsStateOK() {
@@ -245,38 +255,37 @@ func (g *Graph) Process(v *Vertex) error {
// run the CheckApply!
} else {
// if this fails, don't UpdateTimestamp()
checkOK, err = obj.CheckApply(!noop)
checkOK, err = obj.Res.CheckApply(!noop)
if promErr := obj.Prometheus().UpdateCheckApplyTotal(obj.Kind(), !noop, !checkOK, err != nil); promErr != nil {
if promErr := obj.Data().Prometheus.UpdateCheckApplyTotal(obj.GetKind(), !noop, !checkOK, err != nil); promErr != nil {
// TODO: how to error correctly
log.Printf("%s[%s]: Prometheus.UpdateCheckApplyTotal() errored: %v", v.Kind(), v.GetName(), err)
log.Printf("%s: Prometheus.UpdateCheckApplyTotal() errored: %v", obj, err)
}
// TODO: Can the `Poll` converged timeout tracking be a
// more general method for all converged timeouts? this
// would simplify the resources by removing boilerplate
if v.Meta().Poll > 0 {
if obj.Meta().Poll > 0 {
if !checkOK { // something changed, restart timer
cuid, _, _ := v.Res.ConvergerUIDs() // get the converger uid used to report status
cuid.ResetTimer() // activity!
if g.Flags.Debug {
log.Printf("%s[%s]: Converger: ResetTimer", obj.Kind(), obj.GetName())
obj.cuid.ResetTimer() // activity!
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: Converger: ResetTimer", obj)
}
}
}
}
if checkOK && err != nil { // should never return this way
log.Fatalf("%s[%s]: CheckApply(): %t, %+v", obj.Kind(), obj.GetName(), checkOK, err)
log.Fatalf("%s: CheckApply(): %t, %+v", obj, checkOK, err)
}
if g.Flags.Debug {
log.Printf("%s[%s]: CheckApply(): %t, %v", obj.Kind(), obj.GetName(), checkOK, err)
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: CheckApply(): %t, %v", obj, checkOK, err)
}
// if CheckApply ran without noop and without error, state should be good
if !noop && err == nil { // aka !noop || checkOK
obj.StateOK(true) // reset
if refresh {
g.SetUpstreamRefresh(v, false) // refresh happened, clear the request
obj.SetUpstreamRefresh(false) // refresh happened, clear the request
obj.SetRefresh(false)
}
}
@@ -302,14 +311,14 @@ func (g *Graph) Process(v *Vertex) error {
}
if activity { // add refresh flag to downstream edges...
g.SetDownstreamRefresh(v, true)
obj.SetDownstreamRefresh(true)
}
// update this timestamp *before* we poke or the poked
// nodes might fail due to having a too old timestamp!
v.UpdateTimestamp() // this was touched...
obj.SetState(resources.ResStatePoking) // can't cancel parent poke
if err := g.Poke(v); err != nil {
obj.UpdateTimestamp() // this was touched...
obj.SetState(ResStatePoking) // can't cancel parent poke
if err := obj.Poke(); err != nil {
return errwrap.Wrapf(err, "the Poke() failed")
}
}
@@ -317,24 +326,11 @@ func (g *Graph) Process(v *Vertex) error {
return errwrap.Wrapf(err, "could not Process() successfully")
}
// SentinelErr is a sentinel error type that wraps an arbitrary error.
type SentinelErr struct {
err error
}
// Error is the required method to fulfill the error type.
func (obj *SentinelErr) Error() string {
return obj.err.Error()
}
// innerWorker is the CheckApply runner that reads from processChan.
// TODO: would it be better if this was a method on BaseRes that took in *Graph?
func (g *Graph) innerWorker(v *Vertex) {
obj := v.Res
func (obj *BaseRes) innerWorker() {
running := false
done := make(chan struct{})
playback := false // do we need to run another one?
_, wcuid, pcuid := obj.ConvergerUIDs() // get extra cuids (worker, process)
waiting := false
var timer = time.NewTimer(time.Duration(math.MaxInt64)) // longest duration
@@ -342,9 +338,9 @@ func (g *Graph) innerWorker(v *Vertex) {
<-timer.C // unnecessary, shouldn't happen
}
var delay = time.Duration(v.Meta().Delay) * time.Millisecond
var retry = v.Meta().Retry // number of tries left, -1 for infinite
var limiter = rate.NewLimiter(v.Meta().Limit, v.Meta().Burst)
var delay = time.Duration(obj.Meta().Delay) * time.Millisecond
var retry = obj.Meta().Retry // number of tries left, -1 for infinite
var limiter = rate.NewLimiter(obj.Meta().Limit, obj.Meta().Burst)
limited := false
wg := &sync.WaitGroup{} // wait for Process routine to exit
@@ -352,49 +348,49 @@ func (g *Graph) innerWorker(v *Vertex) {
Loop:
for {
select {
case ev, ok := <-obj.ProcessChan(): // must use like this
case ev, ok := <-obj.processChan: // must use like this
if !ok { // processChan closed, let's exit
break Loop // no event, so no ack!
}
if v.Res.Meta().Poll == 0 { // skip for polling
wcuid.SetConverged(false)
if obj.Meta().Poll == 0 { // skip for polling
obj.wcuid.SetConverged(false)
}
// if process started, but no action yet, skip!
if v.Res.GetState() == resources.ResStateProcess {
if g.Flags.Debug {
log.Printf("%s[%s]: Skipped event!", v.Kind(), v.GetName())
if obj.GetState() == ResStateProcess {
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: Skipped event!", obj)
}
ev.ACK() // ready for next message
v.Res.QuiesceGroup().Done()
obj.quiesceGroup.Done()
continue
}
// if running, we skip running a new execution!
// if waiting, we skip running a new execution!
if running || waiting {
if g.Flags.Debug {
log.Printf("%s[%s]: Playback added!", v.Kind(), v.GetName())
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: Playback added!", obj)
}
playback = true
ev.ACK() // ready for next message
v.Res.QuiesceGroup().Done()
obj.quiesceGroup.Done()
continue
}
// catch invalid rates
if v.Meta().Burst == 0 && !(v.Meta().Limit == rate.Inf) { // blocked
e := fmt.Errorf("%s[%s]: Permanently limited (rate != Inf, burst: 0)", v.Kind(), v.GetName())
v.SendEvent(event.EventExit, &SentinelErr{e})
if obj.Meta().Burst == 0 && !(obj.Meta().Limit == rate.Inf) { // blocked
e := fmt.Errorf("%s: Permanently limited (rate != Inf, burst: 0)", obj)
ev.ACK() // ready for next message
v.Res.QuiesceGroup().Done()
obj.quiesceGroup.Done()
obj.SendEvent(event.EventExit, &SentinelErr{e})
continue
}
// rate limit
// FIXME: consider skipping rate limit check if
// the event is a poke instead of a watch event
if !limited && !(v.Meta().Limit == rate.Inf) { // skip over the playback event...
if !limited && !(obj.Meta().Limit == rate.Inf) { // skip over the playback event...
now := time.Now()
r := limiter.ReserveN(now, 1) // one event
// r.OK() seems to always be true here!
@@ -402,12 +398,12 @@ Loop:
if d > 0 { // delay
limited = true
playback = true
log.Printf("%s[%s]: Limited (rate: %v/sec, burst: %d, next: %v)", v.Kind(), v.GetName(), v.Meta().Limit, v.Meta().Burst, d)
log.Printf("%s: Limited (rate: %v/sec, burst: %d, next: %v)", obj, obj.Meta().Limit, obj.Meta().Burst, d)
// start the timer...
timer.Reset(d)
waiting = true // waiting for retry timer
ev.ACK()
v.Res.QuiesceGroup().Done()
obj.quiesceGroup.Done()
continue
} // otherwise, we run directly!
}
@@ -416,51 +412,60 @@ Loop:
wg.Add(1)
running = true
go func(ev *event.Event) {
pcuid.SetConverged(false) // "block" Process
obj.pcuid.SetConverged(false) // "block" Process
defer wg.Done()
if e := g.Process(v); e != nil {
if e := obj.Process(); e != nil {
playback = true
log.Printf("%s[%s]: CheckApply errored: %v", v.Kind(), v.GetName(), e)
log.Printf("%s: CheckApply errored: %v", obj, e)
if retry == 0 {
if err := obj.Data().Prometheus.UpdateState(obj.String(), obj.GetKind(), prometheus.ResStateHardFail); err != nil {
// TODO: how to error this?
log.Printf("%s: Prometheus.UpdateState() errored: %v", obj, err)
}
// wrap the error in the sentinel
v.SendEvent(event.EventExit, &SentinelErr{e})
v.Res.QuiesceGroup().Done()
obj.quiesceGroup.Done() // before the Wait that happens in SendEvent!
obj.SendEvent(event.EventExit, &SentinelErr{e})
return
}
if retry > 0 { // don't decrement the -1
retry--
}
log.Printf("%s[%s]: CheckApply: Retrying after %.4f seconds (%d left)", v.Kind(), v.GetName(), delay.Seconds(), retry)
if err := obj.Data().Prometheus.UpdateState(obj.String(), obj.GetKind(), prometheus.ResStateSoftFail); err != nil {
// TODO: how to error this?
log.Printf("%s: Prometheus.UpdateState() errored: %v", obj, err)
}
log.Printf("%s: CheckApply: Retrying after %.4f seconds (%d left)", obj, delay.Seconds(), retry)
// start the timer...
timer.Reset(delay)
waiting = true // waiting for retry timer
// don't v.Res.QuiesceGroup().Done() b/c
// don't obj.quiesceGroup.Done() b/c
// the timer is running and it can exit!
return
}
retry = v.Meta().Retry // reset on success
retry = obj.Meta().Retry // reset on success
close(done) // trigger
}(ev)
ev.ACK() // sync (now mostly useless)
case <-timer.C:
if v.Res.Meta().Poll == 0 { // skip for polling
wcuid.SetConverged(false)
if obj.Meta().Poll == 0 { // skip for polling
obj.wcuid.SetConverged(false)
}
waiting = false
if !timer.Stop() {
//<-timer.C // blocks, docs are wrong!
}
log.Printf("%s[%s]: CheckApply delay expired!", v.Kind(), v.GetName())
log.Printf("%s: CheckApply delay expired!", obj)
close(done)
// a CheckApply run (with possibly retry pause) finished
case <-done:
if v.Res.Meta().Poll == 0 { // skip for polling
wcuid.SetConverged(false)
if obj.Meta().Poll == 0 { // skip for polling
obj.wcuid.SetConverged(false)
}
if g.Flags.Debug {
log.Printf("%s[%s]: CheckApply finished!", v.Kind(), v.GetName())
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: CheckApply finished!", obj)
}
done = make(chan struct{}) // reset
// re-send this event, to trigger a CheckApply()
@@ -470,21 +475,21 @@ Loop:
// TODO: can this experience indefinite postponement ?
// see: https://github.com/golang/go/issues/11506
// pause or exit is in process if not quiescing!
if !v.Res.IsQuiescing() {
if !obj.quiescing {
playback = false
v.Res.QuiesceGroup().Add(1) // lock around it, b/c still running...
obj.quiesceGroup.Add(1) // lock around it, b/c still running...
go func() {
obj.Event() // replay a new event
v.Res.QuiesceGroup().Done()
obj.quiesceGroup.Done()
}()
}
}
running = false
pcuid.SetConverged(true) // "unblock" Process
v.Res.QuiesceGroup().Done()
obj.pcuid.SetConverged(true) // "unblock" Process
obj.quiesceGroup.Done()
case <-wcuid.ConvergedTimer():
wcuid.SetConverged(true) // converged!
case <-obj.wcuid.ConvergedTimer():
obj.wcuid.SetConverged(true) // converged!
continue
}
}
@@ -495,22 +500,21 @@ Loop:
// Worker is the common run frontend of the vertex. It handles all of the retry
// and retry delay common code, and ultimately returns the final status of this
// vertex execution.
func (g *Graph) Worker(v *Vertex) error {
func (obj *BaseRes) Worker() error {
// listen for chan events from Watch() and run
// the Process() function when they're received
// this avoids us having to pass the data into
// the Watch() function about which graph it is
// running on, which isolates things nicely...
obj := v.Res
if g.Flags.Debug {
log.Printf("%s[%s]: Worker: Running", v.Kind(), v.GetName())
defer log.Printf("%s[%s]: Worker: Stopped", v.Kind(), v.GetName())
if obj.debug {
log.Printf("%s: Worker: Running", obj)
defer log.Printf("%s: Worker: Stopped", obj)
}
// run the init (should match 1-1 with Close function)
if err := obj.Init(); err != nil {
if err := obj.Res.Init(); err != nil {
obj.ProcessExit()
// always exit the worker function by finishing with Close()
if e := obj.Close(); e != nil {
if e := obj.Res.Close(); e != nil {
err = multierr.Append(err, e) // list of errors
}
return errwrap.Wrapf(err, "could not Init() resource")
@@ -520,16 +524,15 @@ func (g *Graph) Worker(v *Vertex) error {
// timeout, we could inappropriately converge mid-apply!
// avoid this by blocking convergence with a fake report
// we also add a similar blocker around the worker loop!
_, wcuid, pcuid := obj.ConvergerUIDs() // get extra cuids (worker, process)
// XXX: put these in Init() ?
wcuid.SetConverged(true) // starts off false, and waits for loop timeout
pcuid.SetConverged(true) // starts off true, because it's not running...
// get extra cuids (worker, process)
obj.wcuid.SetConverged(true) // starts off false, and waits for loop timeout
obj.pcuid.SetConverged(true) // starts off true, because it's not running...
wg := obj.ProcessSync()
wg.Add(1)
obj.processSync.Add(1)
go func() {
defer wg.Done()
g.innerWorker(v)
defer obj.processSync.Done()
obj.innerWorker()
}()
var err error // propagate the error up (this is a permanent BAD error!)
@@ -539,7 +542,7 @@ func (g *Graph) Worker(v *Vertex) error {
// NOTE: we're using the same retry and delay metaparams that CheckApply
// uses. This is for practicality. We can separate them later if needed!
var watchDelay time.Duration
var watchRetry = v.Meta().Retry // number of tries left, -1 for infinite
var watchRetry = obj.Meta().Retry // number of tries left, -1 for infinite
// watch blocks until it ends, & errors to retry
for {
// TODO: do we have to stop the converged-timeout when in this block (perhaps we're in the delay block!)
@@ -562,7 +565,7 @@ func (g *Graph) Worker(v *Vertex) error {
if exit, send := obj.ReadEvent(event); exit != nil {
obj.ProcessExit()
err := *exit // exit err
if e := obj.Close(); err == nil {
if e := obj.Res.Close(); err == nil {
err = e
} else if e != nil {
err = multierr.Append(err, e) // list of errors
@@ -592,7 +595,7 @@ func (g *Graph) Worker(v *Vertex) error {
}
}
timer.Stop() // it's nice to cleanup
log.Printf("%s[%s]: Watch delay expired!", v.Kind(), v.GetName())
log.Printf("%s: Watch delay expired!", obj)
// NOTE: we can avoid the send if running Watch guarantees
// one CheckApply event on startup!
//if pendingSendEvent { // TODO: should this become a list in the future?
@@ -604,13 +607,12 @@ func (g *Graph) Worker(v *Vertex) error {
// TODO: reset the watch retry count after some amount of success
var e error
if v.Res.Meta().Poll > 0 { // poll instead of watching :(
cuid, _, _ := v.Res.ConvergerUIDs() // get the converger uid used to report status
cuid.StartTimer()
e = v.Res.Poll()
cuid.StopTimer() // clean up nicely
if obj.Meta().Poll > 0 { // poll instead of watching :(
obj.cuid.StartTimer()
e = obj.Poll()
obj.cuid.StopTimer() // clean up nicely
} else {
e = v.Res.Watch() // run the watch normally
e = obj.Res.Watch() // run the watch normally
}
if e == nil { // exit signal
err = nil // clean exit
@@ -620,7 +622,7 @@ func (g *Graph) Worker(v *Vertex) error {
err = sentinelErr.err
break // sentinel means, perma-exit
}
log.Printf("%s[%s]: Watch errored: %v", v.Kind(), v.GetName(), e)
log.Printf("%s: Watch errored: %v", obj, e)
if watchRetry == 0 {
err = fmt.Errorf("Permanent watch error: %v", e)
break
@@ -628,8 +630,8 @@ func (g *Graph) Worker(v *Vertex) error {
if watchRetry > 0 { // don't decrement the -1
watchRetry--
}
watchDelay = time.Duration(v.Meta().Delay) * time.Millisecond
log.Printf("%s[%s]: Watch: Retrying after %.4f seconds (%d left)", v.Kind(), v.GetName(), watchDelay.Seconds(), watchRetry)
watchDelay = time.Duration(obj.Meta().Delay) * time.Millisecond
log.Printf("%s: Watch: Retrying after %.4f seconds (%d left)", obj, watchDelay.Seconds(), watchRetry)
// We need to trigger a CheckApply after Watch restarts, so that
// we catch any lost events that happened while down. We do this
// by getting the Watch resource to send one event once it's up!
@@ -638,111 +640,10 @@ func (g *Graph) Worker(v *Vertex) error {
obj.ProcessExit()
// close resource and return possible errors if any
if e := obj.Close(); err == nil {
if e := obj.Res.Close(); err == nil {
err = e
} else if e != nil {
err = multierr.Append(err, e) // list of errors
}
return err
}
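
The SentinelErr wrapper above is how a permanent failure is told apart from a retryable one: CheckApply (when the retries run out) or the rate limiter wraps the error and sends an exit event, and Worker unwraps it with a type assertion to skip the retry path. A compressed sketch of the pattern (variable names are illustrative, not from the patch):

e := fmt.Errorf("some permanent failure")
obj.SendEvent(event.EventExit, &SentinelErr{e}) // wrap and signal a perma-exit
// ... later, when Worker inspects the error it got back:
if sentinelErr, ok := watchErr.(*SentinelErr); ok {
	watchErr = sentinelErr.err // sentinel means perma-exit, don't retry
}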
// Start is a main kick to start the graph. It goes through in reverse topological
// sort order so that events can't hit un-started vertices.
func (g *Graph) Start(first bool) { // start or continue
log.Printf("State: %v -> %v", g.setState(graphStateStarting), g.getState())
defer log.Printf("State: %v -> %v", g.setState(graphStateStarted), g.getState())
t, _ := g.TopologicalSort()
indegree := g.InDegree() // compute all of the indegree's
reversed := Reverse(t)
for _, v := range reversed { // run the Setup() for everyone first
if !v.Res.IsWorking() { // if Worker() is not running...
v.Res.Setup() // initialize some vars in the resource
}
}
// run through the topological reverse, and start or unpause each vertex
for _, v := range reversed {
// selective poke: here we reduce the number of initial pokes
// to the minimum required to activate every vertex in the
// graph, either by direct action, or by getting poked by a
// vertex that was previously activated. if we poke each vertex
// that has no incoming edges, then we can be sure to reach the
// whole graph. Please note: this may mask certain optimization
// failures, such as any poke limiting code in Poke() or
// BackPoke(). You might want to disable this selective start
// when experimenting with and testing those elements.
// if we are unpausing (since it's not the first run of this
// function) we need to poke to *unpause* every graph vertex,
// and not just selectively the subset with no indegree.
// let the startup code know to poke or not
// this triggers a CheckApply AFTER Watch is Running()
// We *don't* need to also do this to new nodes or nodes that
// are about to get unpaused, because they'll get poked by one
// of the indegree == 0 vertices, and an important aspect of the
// Process() function is that even if the state is correct, it
// will pass through the Poke so that it flows through the DAG.
v.Res.Starter(indegree[v] == 0)
var unpause = true
if !v.Res.IsWorking() { // if Worker() is not running...
unpause = false // doesn't need unpausing on first start
g.wg.Add(1)
// must pass in value to avoid races...
// see: https://ttboj.wordpress.com/2015/07/27/golang-parallelism-issues-causing-too-many-open-files-error/
go func(vv *Vertex) {
defer g.wg.Done()
defer v.Res.Reset()
// TODO: if a sufficient number of workers error,
// should something be done? Should these restart
// after perma-failure if we have a graph change?
log.Printf("%s[%s]: Started", vv.Kind(), vv.GetName())
if err := g.Worker(vv); err != nil { // contains the Watch and CheckApply loops
log.Printf("%s[%s]: Exited with failure: %v", vv.Kind(), vv.GetName(), err)
return
}
log.Printf("%s[%s]: Exited", vv.Kind(), vv.GetName())
}(v)
}
select {
case <-v.Res.Started(): // block until started
case <-v.Res.Stopped(): // we failed on init
// if the resource Init() fails, we don't hang!
}
if unpause { // unpause (if needed)
v.Res.SendEvent(event.EventStart, nil) // sync!
}
}
// we wait for everyone to start before exiting!
}
// Pause sends pause events to the graph in a topological sort order.
func (g *Graph) Pause() {
log.Printf("State: %v -> %v", g.setState(graphStatePausing), g.getState())
defer log.Printf("State: %v -> %v", g.setState(graphStatePaused), g.getState())
t, _ := g.TopologicalSort()
for _, v := range t { // squeeze out the events...
v.SendEvent(event.EventPause, nil) // sync
}
}
// Exit sends exit events to the graph in a topological sort order.
func (g *Graph) Exit() {
if g == nil {
return
} // empty graph that wasn't populated yet
t, _ := g.TopologicalSort()
for _, v := range t { // squeeze out the events...
// turn off the taps...
// XXX: consider instead doing this by closing the Res.events channel instead?
// XXX: do this by sending an exit signal, and then returning
// when we hit the 'default' in the select statement!
// XXX: we can do this to quiesce, but it's not necessary now
v.SendEvent(event.EventExit, nil)
v.Res.WaitGroup().Wait()
}
g.wg.Wait() // for now, this doesn't need to be a separate Wait() method
}
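
The OKTimestamp and BackPoke comments above encode the scheduling rule for the whole DAG: a vertex may only proceed once its own timestamp is strictly older than that of every prerequisite, and ties (such as the initial zero timestamps) also block so that prerequisites always go first. A standalone sketch of that predicate (an illustrative helper, not code from the patch):

// okToRun reports whether a vertex with timestamp self may run, given
// the timestamps of all vertices pointing to it.
func okToRun(self int64, prereqs []int64) bool {
	for _, t := range prereqs {
		if self >= t { // equal also blocks: let the pre-reqs go first
			return false
		}
	}
	return true
}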

View File

@@ -1,25 +1,25 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build !noaugeas
package resources
import (
"encoding/gob"
"fmt"
"log"
"os"
@@ -39,7 +39,7 @@ const (
)
func init() {
gob.Register(&AugeasRes{})
RegisterResource("augeas", func() Res { return &AugeasRes{} })
}
// AugeasRes is a resource that enables you to use the augeas resource.
@@ -93,7 +93,6 @@ func (obj *AugeasRes) Validate() error {
// Init initiates the resource.
func (obj *AugeasRes) Init() error {
obj.BaseRes.kind = "augeas"
return obj.BaseRes.Init() // call base init, b/c we're overriding
}
@@ -118,7 +117,7 @@ func (obj *AugeasRes) Watch() error {
for {
if obj.debug {
log.Printf("%s[%s]: Watching: %s", obj.Kind(), obj.GetName(), obj.File) // attempting to watch...
log.Printf("%s: Watching: %s", obj, obj.File) // attempting to watch...
}
select {
@@ -127,10 +126,10 @@ func (obj *AugeasRes) Watch() error {
return nil
}
if err := event.Error; err != nil {
return errwrap.Wrapf(err, "Unknown %s[%s] watcher error", obj.Kind(), obj.GetName())
return errwrap.Wrapf(err, "Unknown %s watcher error", obj)
}
if obj.debug { // don't access event.Body if event.Error isn't nil
log.Printf("%s[%s]: Event(%s): %v", obj.Kind(), obj.GetName(), event.Body.Name, event.Body.Op)
log.Printf("%s: Event(%s): %v", obj, event.Body.Name, event.Body.Op)
}
send = true
obj.StateOK(false) // dirty
@@ -177,7 +176,7 @@ func (obj *AugeasRes) checkApplySet(apply bool, ag *augeas.Augeas, set AugeasSet
// CheckApply method for Augeas resource.
func (obj *AugeasRes) CheckApply(apply bool) (bool, error) {
log.Printf("%s[%s]: CheckApply: %s", obj.Kind(), obj.GetName(), obj.File)
log.Printf("%s: CheckApply: %s", obj, obj.File)
// By default we do not set any option to augeas, we use the defaults.
opts := augeas.None
if obj.Lens != "" {
@@ -225,7 +224,7 @@ func (obj *AugeasRes) CheckApply(apply bool) (bool, error) {
return checkOK, nil
}
log.Printf("%s[%s]: changes needed, saving", obj.Kind(), obj.GetName())
log.Printf("%s: changes needed, saving", obj)
if err = ag.Save(); err != nil {
return false, errwrap.Wrapf(err, "augeas: error while saving augeas values")
}
@@ -247,15 +246,10 @@ type AugeasUID struct {
name string
}
// AutoEdges returns the AutoEdge interface. In this case no autoedges are used.
func (obj *AugeasRes) AutoEdges() AutoEdge {
return nil
}
// UIDs includes all params to make a unique identification of this object.
func (obj *AugeasRes) UIDs() []ResUID {
x := &AugeasUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
BaseUID: BaseUID{Name: obj.GetName(), Kind: obj.GetKind()},
name: obj.Name,
}
return []ResUID{x}
@@ -267,20 +261,19 @@ func (obj *AugeasRes) GroupCmp(r Res) bool {
}
// Compare two resources and return if they are equivalent.
func (obj *AugeasRes) Compare(res Res) bool {
switch res.(type) {
// we can only compare AugeasRes to others of the same resource
case *AugeasRes:
res := res.(*AugeasRes)
func (obj *AugeasRes) Compare(r Res) bool {
// we can only compare AugeasRes to others of the same resource kind
res, ok := r.(*AugeasRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
default:
return false
}
return true
}
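
The Compare rewrite above swaps the old type-switch for a plain type assertion, which reads more naturally and avoids a second assertion inside the case. The same shape applies to any resource; a sketch for a hypothetical FooRes (the type and its Name field are invented for illustration):

func (obj *FooRes) Compare(r Res) bool {
	res, ok := r.(*FooRes) // we can only compare against the same kind
	if !ok {
		return false
	}
	if !obj.BaseRes.Compare(res) { // call base Compare first
		return false
	}
	return obj.Name == res.Name // then the resource-specific fields
}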

View File

@@ -1,19 +1,20 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build noaugeas
package resources

148
resources/autoedge.go Normal file
View File

@@ -0,0 +1,148 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"fmt"
"log"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util"
multierr "github.com/hashicorp/go-multierror"
errwrap "github.com/pkg/errors"
)
// The AutoEdge interface is used to implement the autoedges feature.
type AutoEdge interface {
Next() []ResUID // call to get list of edges to add
Test([]bool) bool // call until false
}
// UIDExistsInUIDs wraps the IFF method when used with a list of UID's.
func UIDExistsInUIDs(uid ResUID, uids []ResUID) bool {
for _, u := range uids {
if uid.IFF(u) {
return true
}
}
return false
}
// addEdgesByMatchingUIDS adds edges to the vertex in a graph based on if it
// matches a uid list.
func addEdgesByMatchingUIDS(g *pgraph.Graph, v pgraph.Vertex, uids []ResUID) []bool {
// search for edges and see what matches!
var result []bool
// loop through each uid, and see if it matches any vertex
for _, uid := range uids {
var found = false
// uid is a ResUID object
for _, vv := range g.Vertices() { // search
if v == vv { // skip self
continue
}
if b, ok := g.Value("debug"); ok && util.Bool(b) {
log.Printf("Compile: AutoEdge: Match: %s with UID: %s", vv, uid)
}
// we must match to an effective UID for the resource,
// that is to say, the name value of a res is a helpful
// handle, but it is not necessarily a unique identity!
// remember, resources can return multiple UID's each!
if UIDExistsInUIDs(uid, VtoR(vv).UIDs()) {
// add edge from: vv -> v
if uid.IsReversed() {
txt := fmt.Sprintf("AutoEdge: %s -> %s", vv, v)
log.Printf("Compile: Adding %s", txt)
edge := &Edge{Name: txt}
g.AddEdge(vv, v, edge)
} else { // edges go the "normal" way, eg: pkg resource
txt := fmt.Sprintf("AutoEdge: %s -> %s", v, vv)
log.Printf("Compile: Adding %s", txt)
edge := &Edge{Name: txt}
g.AddEdge(v, vv, edge)
}
found = true
break
}
}
result = append(result, found)
}
return result
}
// AutoEdges adds the automatic edges to the graph.
func AutoEdges(g *pgraph.Graph) error {
log.Println("Compile: Adding AutoEdges...")
// initially get all of the autoedges to seek out all possible errors
var err error
autoEdgeObjVertexMap := make(map[pgraph.Vertex]AutoEdge)
sorted := g.VerticesSorted()
for _, v := range sorted { // for each vertexes autoedges
if !VtoR(v).Meta().AutoEdge { // is the metaparam true?
continue
}
autoEdgeObj, e := VtoR(v).AutoEdges()
if e != nil {
err = multierr.Append(err, e) // collect all errors
continue
}
if autoEdgeObj == nil {
log.Printf("%s: No auto edges were found!", v)
continue // next vertex
}
autoEdgeObjVertexMap[v] = autoEdgeObj // save for next loop
}
if err != nil {
return errwrap.Wrapf(err, "the auto edges had errors")
}
// now that we're guaranteed error free, we can modify the graph safely
for _, v := range sorted { // stable sort order for determinism in logs
autoEdgeObj, exists := autoEdgeObjVertexMap[v]
if !exists {
continue
}
for { // while the autoEdgeObj has more uids to add...
uids := autoEdgeObj.Next() // get some!
if uids == nil {
log.Printf("%s: The auto edge list is empty!", v)
break // inner loop
}
if b, ok := g.Value("debug"); ok && util.Bool(b) {
log.Println("Compile: AutoEdge: UIDS:")
for i, u := range uids {
log.Printf("Compile: AutoEdge: UID%d: %v", i, u)
}
}
// match and add edges
result := addEdgesByMatchingUIDS(g, v, uids)
// report back, and find out if we should continue
if !autoEdgeObj.Test(result) {
break
}
}
}
return nil
}
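
The AutoEdges loop above drives every AutoEdge implementation the same way: call Next for a batch of UIDs, try to match them against the graph, then report the per-UID results to Test and stop when it returns false. A minimal single-batch implementation sketch (hypothetical type, not from the patch):

type singleShotAutoEdge struct {
	uids []ResUID // the one batch of UIDs we offer
	done bool
}

// Next returns the batch once, then nil to signal there is nothing left.
func (obj *singleShotAutoEdge) Next() []ResUID {
	if obj.done {
		return nil
	}
	obj.done = true
	return obj.uids
}

// Test receives one bool per UID from the last batch; returning false
// tells AutoEdges to stop asking.
func (obj *singleShotAutoEdge) Test(results []bool) bool {
	return false
}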

View File

@@ -1,63 +1,66 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
package resources
import (
"fmt"
"log"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util"
errwrap "github.com/pkg/errors"
)
// AutoGrouper is the required interface to implement for an autogroup algorithm
// AutoGrouper is the required interface to implement for an autogroup algorithm.
type AutoGrouper interface {
// listed in the order these are typically called in...
name() string // friendly identifier
init(*Graph) error // only call once
vertexNext() (*Vertex, *Vertex, error) // mostly algorithmic
vertexCmp(*Vertex, *Vertex) error // can we merge these ?
vertexMerge(*Vertex, *Vertex) (*Vertex, error) // vertex merge fn to use
edgeMerge(*Edge, *Edge) *Edge // edge merge fn to use
init(*pgraph.Graph) error // only call once
vertexNext() (pgraph.Vertex, pgraph.Vertex, error) // mostly algorithmic
vertexCmp(pgraph.Vertex, pgraph.Vertex) error // can we merge these ?
vertexMerge(pgraph.Vertex, pgraph.Vertex) (pgraph.Vertex, error) // vertex merge fn to use
edgeMerge(pgraph.Edge, pgraph.Edge) pgraph.Edge // edge merge fn to use
vertexTest(bool) (bool, error) // call until false
}
// baseGrouper is the base type for implementing the AutoGrouper interface
// baseGrouper is the base type for implementing the AutoGrouper interface.
type baseGrouper struct {
graph *Graph // store a pointer to the graph
vertices []*Vertex // cached list of vertices
graph *pgraph.Graph // store a pointer to the graph
vertices []pgraph.Vertex // cached list of vertices
i int
j int
done bool
}
// name provides a friendly name for the logs to see
// name provides a friendly name for the logs to see.
func (ag *baseGrouper) name() string {
return "baseGrouper"
}
// init is called only once and before using other AutoGrouper interface methods
// the name method is the only exception: call it any time without side effects!
func (ag *baseGrouper) init(g *Graph) error {
func (ag *baseGrouper) init(g *pgraph.Graph) error {
if ag.graph != nil {
return fmt.Errorf("the init method has already been called")
}
ag.graph = g // pointer
ag.vertices = ag.graph.GetVerticesSorted() // cache in deterministic order!
ag.vertices = ag.graph.VerticesSorted() // cache in deterministic order!
ag.i = 0
ag.j = 0
if len(ag.vertices) == 0 { // empty graph
@@ -71,7 +74,7 @@ func (ag *baseGrouper) init(g *Graph) error {
// an intelligent algorithm would selectively offer only valid pairs of vertices
// these should satisfy logical grouping requirements for the autogroup designs!
// the desired algorithms can override, but keep this method as a base iterator!
func (ag *baseGrouper) vertexNext() (v1, v2 *Vertex, err error) {
func (ag *baseGrouper) vertexNext() (v1, v2 pgraph.Vertex, err error) {
// this does a for v... { for w... { return v, w }} but stepwise!
l := len(ag.vertices)
if ag.i < l {
@@ -106,48 +109,49 @@ func (ag *baseGrouper) vertexNext() (v1, v2 *Vertex, err error) {
return
}
func (ag *baseGrouper) vertexCmp(v1, v2 *Vertex) error {
func (ag *baseGrouper) vertexCmp(v1, v2 pgraph.Vertex) error {
if v1 == nil || v2 == nil {
return fmt.Errorf("the vertex is nil")
}
if v1 == v2 { // skip yourself
return fmt.Errorf("the vertices are the same")
}
if v1.Kind() != v2.Kind() { // we must group similar kinds
if VtoR(v1).GetKind() != VtoR(v2).GetKind() { // we must group similar kinds
// TODO: maybe future resources won't need this limitation?
return fmt.Errorf("the two resources aren't the same kind")
}
// someone doesn't want to group!
if !v1.Meta().AutoGroup || !v2.Meta().AutoGroup {
if !VtoR(v1).Meta().AutoGroup || !VtoR(v2).Meta().AutoGroup {
return fmt.Errorf("one of the autogroup flags is false")
}
if v1.Res.IsGrouped() { // already grouped!
if VtoR(v1).IsGrouped() { // already grouped!
return fmt.Errorf("already grouped")
}
if len(v2.Res.GetGroup()) > 0 { // already has children grouped!
if len(VtoR(v2).GetGroup()) > 0 { // already has children grouped!
return fmt.Errorf("already has groups")
}
if !v1.Res.GroupCmp(v2.Res) { // resource groupcmp failed!
if !VtoR(v1).GroupCmp(VtoR(v2)) { // resource groupcmp failed!
return fmt.Errorf("the GroupCmp failed")
}
return nil // success
}
func (ag *baseGrouper) vertexMerge(v1, v2 *Vertex) (v *Vertex, err error) {
func (ag *baseGrouper) vertexMerge(v1, v2 pgraph.Vertex) (v pgraph.Vertex, err error) {
// NOTE: it's important to use w.Res instead of w, b/c
// the w by itself is the *Vertex obj, not the *Res obj
// which is contained within it! They both satisfy the
// Res interface, which is why both will compile! :(
err = v1.Res.GroupRes(v2.Res) // GroupRes skips stupid groupings
err = VtoR(v1).GroupRes(VtoR(v2)) // GroupRes skips stupid groupings
return // success or fail, and no need to merge the actual vertices!
}
func (ag *baseGrouper) edgeMerge(e1, e2 *Edge) *Edge {
func (ag *baseGrouper) edgeMerge(e1, e2 pgraph.Edge) pgraph.Edge {
// FIXME: should we merge the edge.Notify or edge.refresh values?
return e1 // noop
}
// vertexTest processes the results of the grouping for the algorithm to know
// return an error if something went horribly wrong, and bool false to stop
// return an error if something went horribly wrong, and bool false to stop.
func (ag *baseGrouper) vertexTest(b bool) (bool, error) {
// NOTE: this particular baseGrouper version doesn't track what happens
// because since we iterate over every pair, we don't care which merge!
@@ -157,19 +161,20 @@ func (ag *baseGrouper) vertexTest(b bool) (bool, error) {
return true, nil
}
// NonReachabilityGrouper is the most straight-forward algorithm for grouping.
// TODO: this algorithm may not be correct in all cases. replace if needed!
type nonReachabilityGrouper struct {
type NonReachabilityGrouper struct {
baseGrouper // "inherit" what we want, and reimplement the rest
}
func (ag *nonReachabilityGrouper) name() string {
return "nonReachabilityGrouper"
func (ag *NonReachabilityGrouper) name() string {
return "NonReachabilityGrouper"
}
// this algorithm relies on the observation that if there's a path from a to b,
// This algorithm relies on the observation that if there's a path from a to b,
// then they *can't* be merged (b/c of the existing dependency) so therefore we
// merge anything that *doesn't* satisfy this condition or that of the reverse!
func (ag *nonReachabilityGrouper) vertexNext() (v1, v2 *Vertex, err error) {
func (ag *NonReachabilityGrouper) vertexNext() (v1, v2 pgraph.Vertex, err error) {
for {
v1, v2, err = ag.baseGrouper.vertexNext() // get all iterable pairs
if err != nil {
@@ -200,15 +205,15 @@ func (ag *nonReachabilityGrouper) vertexNext() (v1, v2 *Vertex, err error) {
// and then by deleting v2 from the graph. Since more than one edge between two
// vertices is not allowed, duplicate edges are merged as well. an edge merge
// function can be provided if you'd like to control how you merge the edges!
func (g *Graph) VertexMerge(v1, v2 *Vertex, vertexMergeFn func(*Vertex, *Vertex) (*Vertex, error), edgeMergeFn func(*Edge, *Edge) *Edge) error {
func VertexMerge(g *pgraph.Graph, v1, v2 pgraph.Vertex, vertexMergeFn func(pgraph.Vertex, pgraph.Vertex) (pgraph.Vertex, error), edgeMergeFn func(pgraph.Edge, pgraph.Edge) pgraph.Edge) error {
// methodology
// 1) edges between v1 and v2 are removed
//Loop:
for k1 := range g.Adjacency {
for k2 := range g.Adjacency[k1] {
for k1 := range g.Adjacency() {
for k2 := range g.Adjacency()[k1] {
// v1 -> v2 || v2 -> v1
if (k1 == v1 && k2 == v2) || (k1 == v2 && k2 == v1) {
delete(g.Adjacency[k1], k2) // delete map & edge
delete(g.Adjacency()[k1], k2) // delete map & edge
// NOTE: if we assume this is a DAG, then we can
// assume only v1 -> v2 OR v2 -> v1 exists, and
// we can break out of these loops immediately!
@@ -220,10 +225,10 @@ func (g *Graph) VertexMerge(v1, v2 *Vertex, vertexMergeFn func(*Vertex, *Vertex)
// 2) edges that point towards v2 from X now point to v1 from X (no dupes)
for _, x := range g.IncomingGraphVertices(v2) { // all to vertex v (??? -> v)
e := g.Adjacency[x][v2] // previous edge
e := g.Adjacency()[x][v2] // previous edge
r := g.Reachability(x, v1)
// merge e with ex := g.Adjacency[x][v1] if it exists!
if ex, exists := g.Adjacency[x][v1]; exists && edgeMergeFn != nil && len(r) == 0 {
// merge e with ex := g.Adjacency()[x][v1] if it exists!
if ex, exists := g.Adjacency()[x][v1]; exists && edgeMergeFn != nil && len(r) == 0 {
e = edgeMergeFn(e, ex)
}
if len(r) == 0 { // if not reachable, add it
@@ -236,21 +241,21 @@ func (g *Graph) VertexMerge(v1, v2 *Vertex, vertexMergeFn func(*Vertex, *Vertex)
continue
}
// this edge is from: prev, to: next
ex, _ := g.Adjacency[prev][next] // get
ex, _ := g.Adjacency()[prev][next] // get
ex = edgeMergeFn(ex, e)
g.Adjacency[prev][next] = ex // set
g.Adjacency()[prev][next] = ex // set
prev = next
}
}
delete(g.Adjacency[x], v2) // delete old edge
delete(g.Adjacency()[x], v2) // delete old edge
}
// 3) edges that point from v2 to X now point from v1 to X (no dupes)
for _, x := range g.OutgoingGraphVertices(v2) { // all from vertex v (v -> ???)
e := g.Adjacency[v2][x] // previous edge
e := g.Adjacency()[v2][x] // previous edge
r := g.Reachability(v1, x)
// merge e with ex := g.Adjacency[v1][x] if it exists!
if ex, exists := g.Adjacency[v1][x]; exists && edgeMergeFn != nil && len(r) == 0 {
// merge e with ex := g.Adjacency()[v1][x] if it exists!
if ex, exists := g.Adjacency()[v1][x]; exists && edgeMergeFn != nil && len(r) == 0 {
e = edgeMergeFn(e, ex)
}
if len(r) == 0 {
@@ -263,13 +268,13 @@ func (g *Graph) VertexMerge(v1, v2 *Vertex, vertexMergeFn func(*Vertex, *Vertex)
continue
}
// this edge is from: prev, to: next
ex, _ := g.Adjacency[prev][next]
ex, _ := g.Adjacency()[prev][next]
ex = edgeMergeFn(ex, e)
g.Adjacency[prev][next] = ex
g.Adjacency()[prev][next] = ex
prev = next
}
}
delete(g.Adjacency[v2], x)
delete(g.Adjacency()[v2], x)
}
// 4) merge and then remove the (now merged/grouped) vertex
@@ -277,7 +282,8 @@ func (g *Graph) VertexMerge(v1, v2 *Vertex, vertexMergeFn func(*Vertex, *Vertex)
if v, err := vertexMergeFn(v1, v2); err != nil {
return err
} else if v != nil { // replace v1 with the "merged" version...
*v1 = *v // TODO: is this safe? (replacing mutexes is undefined!)
//*v1 = *v // TODO: is this safe? (replacing mutexes is undefined!)
v1 = v
}
}
g.DeleteVertex(v2) // remove grouped vertex
@@ -289,8 +295,8 @@ func (g *Graph) VertexMerge(v1, v2 *Vertex, vertexMergeFn func(*Vertex, *Vertex)
return nil // success
}
// autoGroup is the mechanical auto group "runner" that runs the interface spec
func (g *Graph) autoGroup(ag AutoGrouper) chan string {
// autoGroup is the mechanical auto group "runner" that runs the interface spec.
func autoGroup(g *pgraph.Graph, ag AutoGrouper) chan string {
strch := make(chan string) // output log messages here
go func(strch chan string) {
strch <- fmt.Sprintf("Compile: Grouping: Algorithm: %v...", ag.name())
@@ -299,7 +305,7 @@ func (g *Graph) autoGroup(ag AutoGrouper) chan string {
}
for {
var v, w *Vertex
var v, w pgraph.Vertex
v, w, err := ag.vertexNext() // get pair to compare
if err != nil {
log.Fatalf("error running autoGroup(vertexNext): %v", err)
@@ -310,12 +316,12 @@ func (g *Graph) autoGroup(ag AutoGrouper) chan string {
wStr := fmt.Sprintf("%s", w)
if err := ag.vertexCmp(v, w); err != nil { // cmp ?
if g.Flags.Debug {
if b, ok := g.Value("debug"); ok && util.Bool(b) {
strch <- fmt.Sprintf("Compile: Grouping: !GroupCmp for: %s into %s", wStr, vStr)
}
// remove grouped vertex and merge edges (res is safe)
} else if err := g.VertexMerge(v, w, ag.vertexMerge, ag.edgeMerge); err != nil { // merge...
} else if err := VertexMerge(g, v, w, ag.vertexMerge, ag.edgeMerge); err != nil { // merge...
strch <- fmt.Sprintf("Compile: Grouping: !VertexMerge for: %s into %s", wStr, vStr)
} else { // success!
@@ -337,12 +343,12 @@ func (g *Graph) autoGroup(ag AutoGrouper) chan string {
return strch
}
// AutoGroup runs the auto grouping on the graph and prints out log messages
func (g *Graph) AutoGroup() {
// AutoGroup runs the auto grouping on the graph and prints out log messages.
func AutoGroup(g *pgraph.Graph, ag AutoGrouper) {
// receive log messages from channel...
// this allows test cases to avoid printing them when they're unwanted!
// TODO: this algorithm may not be correct in all cases. replace if needed!
for str := range g.autoGroup(&nonReachabilityGrouper{}) {
for str := range autoGroup(g, ag) {
log.Println(str)
}
}
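// Illustrative usage sketch (not part of this patch), using the NE,
// NewNoopResTest and testGrouper helpers defined in autogroup_test.go below:
//
//	g, _ := pgraph.NewGraph("example")
//	a1, a2, b1 := NewNoopResTest("a1"), NewNoopResTest("a2"), NewNoopResTest("b1")
//	g.AddEdge(a1, b1, NE("e1"))
//	g.AddEdge(a2, b1, NE("e2"))
//	ag := &testGrouper{}
//	// merge one specific pair by hand...
//	if err := VertexMerge(g, a1, a2, ag.vertexMerge, ag.edgeMerge); err != nil {
//		log.Fatal(err)
//	}
//	// ...or let the grouper drive all of the merging and log each step:
//	// AutoGroup(g, ag)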

733
resources/autogroup_test.go Normal file

@@ -0,0 +1,733 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"fmt"
"reflect"
"sort"
"strings"
"testing"
"time"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util"
)
// NE is a helper function to make testing easier. It creates a new noop edge.
func NE(s string) pgraph.Edge {
obj := &Edge{Name: s}
return obj
}
type testGrouper struct {
// TODO: this algorithm may not be correct in all cases. replace if needed!
NonReachabilityGrouper // "inherit" what we want, and reimplement the rest
}
func (ag *testGrouper) name() string {
return "testGrouper"
}
func (ag *testGrouper) vertexMerge(v1, v2 pgraph.Vertex) (v pgraph.Vertex, err error) {
if err := VtoR(v1).GroupRes(VtoR(v2)); err != nil { // group them first
return nil, err
}
// HACK: update the name so it matches full list of self+grouped
obj := VtoR(v1)
names := strings.Split(obj.GetName(), ",") // load in stored names
for _, n := range obj.GetGroup() {
names = append(names, n.GetName()) // add my contents
}
names = util.StrRemoveDuplicatesInList(names) // remove duplicates
sort.Strings(names)
obj.SetName(strings.Join(names, ","))
return // success or fail, and no need to merge the actual vertices!
}
func (ag *testGrouper) edgeMerge(e1, e2 pgraph.Edge) pgraph.Edge {
edge1 := e1.(*Edge) // panic if wrong
edge2 := e2.(*Edge) // panic if wrong
// HACK: update the name so it makes a union of both names
n1 := strings.Split(edge1.Name, ",") // load
n2 := strings.Split(edge2.Name, ",") // load
names := append(n1, n2...)
names = util.StrRemoveDuplicatesInList(names) // remove duplicates
sort.Strings(names)
return &Edge{Name: strings.Join(names, ",")}
}
// helper function
func runGraphCmp(t *testing.T, g1, g2 *pgraph.Graph) {
AutoGroup(g1, &testGrouper{}) // edits the graph
err := GraphCmp(g1, g2)
if err != nil {
t.Logf(" actual (g1): %v%v", g1, fullPrint(g1))
t.Logf("expected (g2): %v%v", g2, fullPrint(g2))
t.Logf("Cmp error:")
t.Errorf("%v", err)
}
}
type NoopResTest struct {
NoopRes
}
func (obj *NoopResTest) GroupCmp(r Res) bool {
res, ok := r.(*NoopResTest)
if !ok {
return false
}
// TODO: implement this in vertexCmp for *testGrouper instead?
if strings.Contains(res.Name, ",") { // HACK
return false // element to be grouped is already grouped!
}
// group if they start with the same letter! (helpful hack for testing)
return obj.Name[0] == res.Name[0]
}
func NewNoopResTest(name string) *NoopResTest {
obj := &NoopResTest{
NoopRes: NoopRes{
BaseRes: BaseRes{
Name: name,
Kind: "noop",
MetaParams: MetaParams{
AutoGroup: true, // always autogroup
},
},
},
}
return obj
}
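// Illustration of the grouping convention above (hypothetical helper, not
// part of the patch):
//
//	func exampleGroupCmp() {
//		a1 := NewNoopResTest("a1")
//		a2 := NewNoopResTest("a2")
//		b1 := NewNoopResTest("b1")
//		grouped := NewNoopResTest("a1,a2")
//		fmt.Println(a1.GroupCmp(a2))      // true: same "a" family
//		fmt.Println(a1.GroupCmp(b1))      // false: different families
//		fmt.Println(a1.GroupCmp(grouped)) // false: the candidate is already grouped
//	}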
// GraphCmp compares the topology of two graphs and returns nil if they're
// equal. It also compares if grouped element groups are identical.
// TODO: port this to use the pgraph.GraphCmp function instead.
func GraphCmp(g1, g2 *pgraph.Graph) error {
if n1, n2 := g1.NumVertices(), g2.NumVertices(); n1 != n2 {
return fmt.Errorf("graph g1 has %d vertices, while g2 has %d", n1, n2)
}
if e1, e2 := g1.NumEdges(), g2.NumEdges(); e1 != e2 {
return fmt.Errorf("graph g1 has %d edges, while g2 has %d", e1, e2)
}
var m = make(map[pgraph.Vertex]pgraph.Vertex) // g1 to g2 vertex correspondence
Loop:
// check vertices
for v1 := range g1.Adjacency() { // for each vertex in g1
l1 := strings.Split(VtoR(v1).GetName(), ",") // make list of everyone's names...
for _, x1 := range VtoR(v1).GetGroup() {
l1 = append(l1, x1.GetName()) // add my contents
}
l1 = util.StrRemoveDuplicatesInList(l1) // remove duplicates
sort.Strings(l1)
// inner loop
for v2 := range g2.Adjacency() { // does it match in g2 ?
l2 := strings.Split(VtoR(v2).GetName(), ",")
for _, x2 := range VtoR(v2).GetGroup() {
l2 = append(l2, x2.GetName())
}
l2 = util.StrRemoveDuplicatesInList(l2) // remove duplicates
sort.Strings(l2)
// does l1 match l2 ?
if ListStrCmp(l1, l2) { // cmp!
m[v1] = v2
continue Loop
}
}
return fmt.Errorf("graph g1, has no match in g2 for: %v", VtoR(v1).GetName())
}
// vertices (and groups) match :)
// check edges
for v1 := range g1.Adjacency() { // for each vertex in g1
v2 := m[v1] // lookup in map to get correspondence
// g1.Adjacency()[v1] corresponds to g2.Adjacency()[v2]
if e1, e2 := len(g1.Adjacency()[v1]), len(g2.Adjacency()[v2]); e1 != e2 {
return fmt.Errorf("graph g1, vertex(%v) has %d edges, while g2, vertex(%v) has %d", VtoR(v1).GetName(), e1, VtoR(v2).GetName(), e2)
}
for vv1, ee1 := range g1.Adjacency()[v1] {
vv2 := m[vv1]
ee1 := ee1.(*Edge)
ee2 := g2.Adjacency()[v2][vv2].(*Edge)
// these are edges from v1 -> vv1 via ee1 (graph 1)
// to cmp to edges from v2 -> vv2 via ee2 (graph 2)
// check: (1) vv1 == vv2 ? (we've already checked this!)
l1 := strings.Split(VtoR(vv1).GetName(), ",") // make list of everyone's names...
for _, x1 := range VtoR(vv1).GetGroup() {
l1 = append(l1, x1.GetName()) // add my contents
}
l1 = util.StrRemoveDuplicatesInList(l1) // remove duplicates
sort.Strings(l1)
l2 := strings.Split(VtoR(vv2).GetName(), ",")
for _, x2 := range VtoR(vv2).GetGroup() {
l2 = append(l2, x2.GetName())
}
l2 = util.StrRemoveDuplicatesInList(l2) // remove duplicates
sort.Strings(l2)
// does l1 match l2 ?
if !ListStrCmp(l1, l2) { // cmp!
return fmt.Errorf("graph g1 and g2 don't agree on: %v and %v", VtoR(vv1).GetName(), VtoR(vv2).GetName())
}
// check: (2) ee1 == ee2
if ee1.Name != ee2.Name {
return fmt.Errorf("graph g1 edge(%v) doesn't match g2 edge(%v)", ee1.Name, ee2.Name)
}
}
}
// check meta parameters
for v1 := range g1.Adjacency() { // for each vertex in g1
for v2 := range g2.Adjacency() { // does it match in g2 ?
s1, s2 := VtoR(v1).Meta().Sema, VtoR(v2).Meta().Sema
sort.Strings(s1)
sort.Strings(s2)
if !reflect.DeepEqual(s1, s2) {
return fmt.Errorf("vertex %s and vertex %s have different semaphores", VtoR(v1).GetName(), VtoR(v2).GetName())
}
}
}
return nil // success!
}
// ListStrCmp compares two lists of strings
func ListStrCmp(a, b []string) bool {
//fmt.Printf("CMP: %v with %v\n", a, b) // debugging
if a == nil && b == nil {
return true
}
if a == nil || b == nil {
return false
}
if len(a) != len(b) {
return false
}
for i := range a {
if a[i] != b[i] {
return false
}
}
return true
}
func fullPrint(g *pgraph.Graph) (str string) {
str += "\n"
for v := range g.Adjacency() {
if semas := VtoR(v).Meta().Sema; len(semas) > 0 {
str += fmt.Sprintf("* v: %v; sema: %v\n", VtoR(v).GetName(), semas)
} else {
str += fmt.Sprintf("* v: %v\n", VtoR(v).GetName())
}
// TODO: add explicit grouping data?
}
for v1 := range g.Adjacency() {
for v2, e := range g.Adjacency()[v1] {
edge := e.(*Edge)
str += fmt.Sprintf("* e: %v -> %v # %v\n", VtoR(v1).GetName(), VtoR(v2).GetName(), edge.Name)
}
}
return
}
func TestDurationAssumptions(t *testing.T) {
var d time.Duration
if (d == 0) != true {
t.Errorf("empty time.Duration is no longer equal to zero")
}
if (d > 0) != false {
t.Errorf("empty time.Duration is now greater than zero")
}
}
// all of the following test cases are laid out with the following semantics:
// * vertices which start with the same single letter are considered "like"
// * "like" elements should be merged
// * vertices can have any integer after their single letter "family" type
// * grouped vertices should have a name with a comma separated list of names
// * edges follow the same conventions about grouping
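// For example (illustrative; this mirrors TestPgraphGrouping12 below): the
// vertices "a1", "a2" and "b1" should end up as two vertices named "a1,a2"
// and "b1", and two edges "e1" and "e2" that both pointed at "b1" should end
// up as a single edge named "e1,e2".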
// empty graph
func TestPgraphGrouping1(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
g2, _ := pgraph.NewGraph("g2") // expected result
runGraphCmp(t, g1, g2)
}
// single vertex
func TestPgraphGrouping2(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{ // grouping to limit variable scope
a1 := NewNoopResTest("a1")
g1.AddVertex(a1)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a1 := NewNoopResTest("a1")
g2.AddVertex(a1)
}
runGraphCmp(t, g1, g2)
}
// two vertices
func TestPgraphGrouping3(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
g1.AddVertex(a1, b1)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
g2.AddVertex(a1, b1)
}
runGraphCmp(t, g1, g2)
}
// two vertices merge
func TestPgraphGrouping4(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
g1.AddVertex(a1, a2)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
g2.AddVertex(a)
}
runGraphCmp(t, g1, g2)
}
// three vertices merge
func TestPgraphGrouping5(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
a3 := NewNoopResTest("a3")
g1.AddVertex(a1, a2, a3)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2,a3")
g2.AddVertex(a)
}
runGraphCmp(t, g1, g2)
}
// three vertices, two merge
func TestPgraphGrouping6(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
g1.AddVertex(a1, a2, b1)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b1 := NewNoopResTest("b1")
g2.AddVertex(a, b1)
}
runGraphCmp(t, g1, g2)
}
// four vertices, three merge
func TestPgraphGrouping7(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
a3 := NewNoopResTest("a3")
b1 := NewNoopResTest("b1")
g1.AddVertex(a1, a2, a3, b1)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2,a3")
b1 := NewNoopResTest("b1")
g2.AddVertex(a, b1)
}
runGraphCmp(t, g1, g2)
}
// four vertices, two&two merge
func TestPgraphGrouping8(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
b2 := NewNoopResTest("b2")
g1.AddVertex(a1, a2, b1, b2)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b := NewNoopResTest("b1,b2")
g2.AddVertex(a, b)
}
runGraphCmp(t, g1, g2)
}
// five vertices, two&three merge
func TestPgraphGrouping9(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
b2 := NewNoopResTest("b2")
b3 := NewNoopResTest("b3")
g1.AddVertex(a1, a2, b1, b2, b3)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b := NewNoopResTest("b1,b2,b3")
g2.AddVertex(a, b)
}
runGraphCmp(t, g1, g2)
}
// three unique vertices
func TestPgraphGrouping10(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
c1 := NewNoopResTest("c1")
g1.AddVertex(a1, b1, c1)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
c1 := NewNoopResTest("c1")
g2.AddVertex(a1, b1, c1)
}
runGraphCmp(t, g1, g2)
}
// three unique vertices, two merge
func TestPgraphGrouping11(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
b2 := NewNoopResTest("b2")
c1 := NewNoopResTest("c1")
g1.AddVertex(a1, b1, b2, c1)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a1 := NewNoopResTest("a1")
b := NewNoopResTest("b1,b2")
c1 := NewNoopResTest("c1")
g2.AddVertex(a1, b, c1)
}
runGraphCmp(t, g1, g2)
}
// simple merge 1
// a1 a2 a1,a2
// \ / >>> | (arrows point downwards)
// b b
func TestPgraphGrouping12(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
e1 := NE("e1")
e2 := NE("e2")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(a2, b1, e2)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b1 := NewNoopResTest("b1")
e := NE("e1,e2")
g2.AddEdge(a, b1, e)
}
runGraphCmp(t, g1, g2)
}
// simple merge 2
// b b
// / \ >>> | (arrows point downwards)
// a1 a2 a1,a2
func TestPgraphGrouping13(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
e1 := NE("e1")
e2 := NE("e2")
g1.AddEdge(b1, a1, e1)
g1.AddEdge(b1, a2, e2)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b1 := NewNoopResTest("b1")
e := NE("e1,e2")
g2.AddEdge(b1, a, e)
}
runGraphCmp(t, g1, g2)
}
// triple merge
// a1 a2 a3 a1,a2,a3
// \ | / >>> | (arrows point downwards)
// b b
func TestPgraphGrouping14(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
a3 := NewNoopResTest("a3")
b1 := NewNoopResTest("b1")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(a2, b1, e2)
g1.AddEdge(a3, b1, e3)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2,a3")
b1 := NewNoopResTest("b1")
e := NE("e1,e2,e3")
g2.AddEdge(a, b1, e)
}
runGraphCmp(t, g1, g2)
}
// chain merge
// a1 a1
// / \ |
// b1 b2 >>> b1,b2 (arrows point downwards)
// \ / |
// c1 c1
func TestPgraphGrouping15(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
b2 := NewNoopResTest("b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
e4 := NE("e4")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(a1, b2, e2)
g1.AddEdge(b1, c1, e3)
g1.AddEdge(b2, c1, e4)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a1 := NewNoopResTest("a1")
b := NewNoopResTest("b1,b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1,e2")
e2 := NE("e3,e4")
g2.AddEdge(a1, b, e1)
g2.AddEdge(b, c1, e2)
}
runGraphCmp(t, g1, g2)
}
// re-attach 1 (outer)
// technically the second possibility is valid too, depending on which order we
// merge edges in, and if we don't filter out any unnecessary edges afterwards!
// a1 a2 a1,a2 a1,a2
// | / | | \
// b1 / >>> b1 OR b1 / (arrows point downwards)
// | / | | /
// c1 c1 c1
func TestPgraphGrouping16(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
c1 := NewNoopResTest("c1")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(b1, c1, e2)
g1.AddEdge(a2, c1, e3)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b1 := NewNoopResTest("b1")
c1 := NewNoopResTest("c1")
e1 := NE("e1,e3")
e2 := NE("e2,e3") // e3 gets "merged through" to BOTH edges!
g2.AddEdge(a, b1, e1)
g2.AddEdge(b1, c1, e2)
}
runGraphCmp(t, g1, g2)
}
// re-attach 2 (inner)
// a1 b2 a1
// | / |
// b1 / >>> b1,b2 (arrows point downwards)
// | / |
// c1 c1
func TestPgraphGrouping17(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
b2 := NewNoopResTest("b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(b1, c1, e2)
g1.AddEdge(b2, c1, e3)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a1 := NewNoopResTest("a1")
b := NewNoopResTest("b1,b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1")
e2 := NE("e2,e3")
g2.AddEdge(a1, b, e1)
g2.AddEdge(b, c1, e2)
}
runGraphCmp(t, g1, g2)
}
// re-attach 3 (double)
// similar to "re-attach 1", technically there is a second possibility for this
// a2 a1 b2 a1,a2
// \ | / |
// \ b1 / >>> b1,b2 (arrows point downwards)
// \ | / |
// c1 c1
func TestPgraphGrouping18(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
b2 := NewNoopResTest("b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
e4 := NE("e4")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(b1, c1, e2)
g1.AddEdge(a2, c1, e3)
g1.AddEdge(b2, c1, e4)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b := NewNoopResTest("b1,b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1,e3")
e2 := NE("e2,e3,e4") // e3 gets "merged through" to BOTH edges!
g2.AddEdge(a, b, e1)
g2.AddEdge(b, c1, e2)
}
runGraphCmp(t, g1, g2)
}
// connected merge 0, (no change!)
// a1 a1
// \ >>> \ (arrows point downwards)
// a2 a2
func TestPgraphGroupingConnected0(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
e1 := NE("e1")
g1.AddEdge(a1, a2, e1)
}
g2, _ := pgraph.NewGraph("g2") // expected result ?
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
e1 := NE("e1")
g2.AddEdge(a1, a2, e1)
}
runGraphCmp(t, g1, g2)
}
// connected merge 1, (no change!)
// a1 a1
// \ \
// b >>> b (arrows point downwards)
// \ \
// a2 a2
func TestPgraphGroupingConnected1(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
b := NewNoopResTest("b")
a2 := NewNoopResTest("a2")
e1 := NE("e1")
e2 := NE("e2")
g1.AddEdge(a1, b, e1)
g1.AddEdge(b, a2, e2)
}
g2, _ := pgraph.NewGraph("g2") // expected result ?
{
a1 := NewNoopResTest("a1")
b := NewNoopResTest("b")
a2 := NewNoopResTest("a2")
e1 := NE("e1")
e2 := NE("e2")
g2.AddEdge(a1, b, e1)
g2.AddEdge(b, a2, e2)
}
runGraphCmp(t, g1, g2)
}

1435
resources/aws_ec2.go Normal file

File diff suppressed because it is too large.

56
resources/edge.go Normal file

@@ -0,0 +1,56 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
// Edge is a struct that represents a graph's edge.
type Edge struct {
Name string
Notify bool // should we send a refresh notification along this edge?
refresh bool // is there a notify pending for the dest vertex ?
}
// String is a required method of the Edge interface that we must fulfill.
func (obj *Edge) String() string {
return obj.Name
}
// Compare returns true if two edges are equivalent. Otherwise it returns false.
func (obj *Edge) Compare(edge *Edge) bool {
if obj.Name != edge.Name {
return false
}
if obj.Notify != edge.Notify {
return false
}
// FIXME: should we compare this as well?
//if obj.refresh != edge.refresh {
// return false
//}
return true
}
// Refresh returns the pending refresh status of this edge.
func (obj *Edge) Refresh() bool {
return obj.refresh
}
// SetRefresh sets the pending refresh status of this edge.
func (obj *Edge) SetRefresh(b bool) {
obj.refresh = b
}
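// A small usage sketch (hypothetical, not part of the patch):
//
//	e1 := &Edge{Name: "e1", Notify: true}
//	e2 := &Edge{Name: "e1", Notify: true}
//	_ = e1.Compare(e2)  // true: same name and same notify setting
//	e1.SetRefresh(true) // a refresh is now pending for the destination vertex
//	_ = e1.Refresh()    // true
//	_ = e1.Compare(e2)  // still true: the pending refresh flag is not compared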

View File

@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
@@ -20,11 +20,13 @@ package resources
import (
"bufio"
"bytes"
"encoding/gob"
"fmt"
"log"
"os/exec"
"os/user"
"strings"
"sync"
"syscall"
"github.com/purpleidea/mgmt/util"
@@ -32,13 +34,12 @@ import (
)
func init() {
gob.Register(&ExecRes{})
RegisterResource("exec", func() Res { return &ExecRes{} })
}
// ExecRes is an exec resource for running commands.
type ExecRes struct {
BaseRes `yaml:",inline"`
State string `yaml:"state"` // state: exists/present?, absent, (undefined?)
Cmd string `yaml:"cmd"` // the command to run
Shell string `yaml:"shell"` // the (optional) shell to use to run the cmd
Timeout int `yaml:"timeout"` // the cmd timeout in seconds
@@ -46,7 +47,11 @@ type ExecRes struct {
WatchShell string `yaml:"watchshell"` // the (optional) shell to use to run the watch cmd
IfCmd string `yaml:"ifcmd"` // the if command to run
IfShell string `yaml:"ifshell"` // the (optional) shell to use to run the if cmd
PollInt int `yaml:"pollint"` // the poll interval for the ifcmd
User string `yaml:"user"` // the (optional) user to use to execute the command
Group string `yaml:"group"` // the (optional) group to use to execute the command
Output *string // all cmd output, read only, do not set!
Stdout *string // the cmd stdout, read only, do not set!
Stderr *string // the cmd stderr, read only, do not set!
}
// Default returns some sensible defaults for this resource.
@@ -64,9 +69,15 @@ func (obj *ExecRes) Validate() error {
return fmt.Errorf("command can't be empty")
}
// if we have a watch command, then we don't poll with the if command!
if obj.WatchCmd != "" && obj.PollInt > 0 {
return fmt.Errorf("don't poll when we have a watch command")
// check that, if a user or a group is set, we're running as root
if obj.User != "" || obj.Group != "" {
currentUser, err := user.Current()
if err != nil {
return errwrap.Wrapf(err, "error looking up current user")
}
if currentUser.Uid != "0" {
return errwrap.Errorf("running as root is required if you want to use exec with a different user/group")
}
}
return obj.BaseRes.Validate()
@@ -74,7 +85,6 @@ func (obj *ExecRes) Validate() error {
// Init runs some startup code for this resource.
func (obj *ExecRes) Init() error {
obj.BaseRes.kind = "exec"
return obj.BaseRes.Init() // call base init, b/c we're overriding
}
@@ -119,6 +129,17 @@ func (obj *ExecRes) Watch() error {
}
cmd := exec.Command(cmdName, cmdArgs...)
//cmd.Dir = "" // look for program in pwd ?
// ignore signals sent to parent process (we're in our own group)
cmd.SysProcAttr = &syscall.SysProcAttr{
Setpgid: true,
Pgid: 0,
}
// if we have a user and group, use them
var err error
if cmd.SysProcAttr.Credential, err = obj.getCredential(); err != nil {
return errwrap.Wrapf(err, "error while setting credential")
}
cmdReader, err := cmd.StdoutPipe()
if err != nil {
@@ -126,11 +147,11 @@ func (obj *ExecRes) Watch() error {
}
scanner := bufio.NewScanner(cmdReader)
defer cmd.Wait() // XXX: is this necessary?
defer cmd.Wait() // wait for the command to exit before return!
defer func() {
// FIXME: without wrapping this in this func it panic's
// when running examples/graph8d.yaml
cmd.Process.Kill() // TODO: is this necessary?
// when running certain graphs... why?
cmd.Process.Kill() // shutdown the Watch command on exit
}()
if err := cmd.Start(); err != nil {
return errwrap.Wrapf(err, "error starting Cmd")
@@ -148,9 +169,10 @@ func (obj *ExecRes) Watch() error {
select {
case text := <-bufioch:
// each time we get a line of output, we loop!
log.Printf("%s[%s]: Watch output: %s", obj.Kind(), obj.GetName(), text)
log.Printf("%s: Watch output: %s", obj, text)
if text != "" {
send = true
obj.StateOK(false) // something made state dirty
}
case err := <-errch:
@@ -171,8 +193,6 @@ func (obj *ExecRes) Watch() error {
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
// it is okay to invalidate the clean state on poke too
obj.StateOK(false) // something made state dirty
obj.Event()
}
}
@@ -181,24 +201,12 @@ func (obj *ExecRes) Watch() error {
// CheckApply checks the resource state and applies the resource if the bool
// input is true. It returns error info and if the state check passed or not.
// TODO: expand the IfCmd to be a list of commands
func (obj *ExecRes) CheckApply(apply bool) (checkOK bool, err error) {
func (obj *ExecRes) CheckApply(apply bool) (bool, error) {
// If we receive a refresh signal, then the engine skips the IsStateOK()
// check and this will run. It is still guarded by the IfCmd, but it can
// have a chance to execute, and all without the check of obj.Refresh()!
// if there is a watch command, but no if command, run based on state
if obj.WatchCmd != "" && obj.IfCmd == "" {
if obj.IsStateOK() { // FIXME: this is done by engine now...
return true, nil
}
// if there is no watcher, but there is an onlyif check, run it to see
} else if obj.IfCmd != "" { // && obj.WatchCmd == ""
// there is a watcher, but there is also an if command
//} else if obj.IfCmd != "" && obj.WatchCmd != "" {
if obj.PollInt > 0 { // && obj.WatchCmd == ""
// XXX: have the Watch() command output onlyif poll events...
// XXX: we can optimize by saving those results for returning here
// return XXX
}
if obj.IfCmd != "" { // if there is no onlyif check, we should just run
var cmdName string
var cmdArgs []string
@@ -214,18 +222,24 @@ func (obj *ExecRes) CheckApply(apply bool) (checkOK bool, err error) {
cmdName = obj.IfShell // usually bash, or sh
cmdArgs = []string{"-c", obj.IfCmd}
}
err = exec.Command(cmdName, cmdArgs...).Run()
if err != nil {
cmd := exec.Command(cmdName, cmdArgs...)
// ignore signals sent to parent process (we're in our own group)
cmd.SysProcAttr = &syscall.SysProcAttr{
Setpgid: true,
Pgid: 0,
}
// if we have a user and group, use them
var err error
if cmd.SysProcAttr.Credential, err = obj.getCredential(); err != nil {
return false, errwrap.Wrapf(err, "error while setting credential")
}
if err := cmd.Run(); err != nil {
// TODO: check exit value
return true, nil // don't run
}
// if there is no watcher and no onlyif check, assume we should run
} else { // if obj.WatchCmd == "" && obj.IfCmd == "" {
// just run if state is dirty
if obj.IsStateOK() { // FIXME: this is done by engine now...
return true, nil
}
}
// state is not okay, no work done, exit, but without error
@@ -234,7 +248,7 @@ func (obj *ExecRes) CheckApply(apply bool) (checkOK bool, err error) {
}
// apply portion
log.Printf("%s[%s]: Apply", obj.Kind(), obj.GetName())
log.Printf("%s: Apply", obj)
var cmdName string
var cmdArgs []string
if obj.Shell == "" {
@@ -252,11 +266,27 @@ func (obj *ExecRes) CheckApply(apply bool) (checkOK bool, err error) {
}
cmd := exec.Command(cmdName, cmdArgs...)
//cmd.Dir = "" // look for program in pwd ?
var out bytes.Buffer
cmd.Stdout = &out
// ignore signals sent to parent process (we're in our own group)
cmd.SysProcAttr = &syscall.SysProcAttr{
Setpgid: true,
Pgid: 0,
}
// if we have a user and group, use them
var err error
if cmd.SysProcAttr.Credential, err = obj.getCredential(); err != nil {
return false, errwrap.Wrapf(err, "error while setting credential")
}
var out splitWriter
out.Init()
// from the docs: "If Stdout and Stderr are the same writer, at most one
// goroutine at a time will call Write." so we trick it here!
cmd.Stdout = out.Stdout
cmd.Stderr = out.Stderr
if err := cmd.Start(); err != nil {
return false, errwrap.Wrapf(err, "error starting Cmd")
return false, errwrap.Wrapf(err, "error starting cmd")
}
timeout := obj.Timeout
@@ -267,28 +297,55 @@ func (obj *ExecRes) CheckApply(apply bool) (checkOK bool, err error) {
go func() { done <- cmd.Wait() }()
select {
case err := <-done:
if err != nil {
e := errwrap.Wrapf(err, "error waiting for Cmd")
return false, e
}
case e := <-done:
err = e // store
case <-util.TimeAfterOrBlock(timeout):
//cmd.Process.Kill() // TODO: is this necessary?
return false, fmt.Errorf("timeout waiting for Cmd")
cmd.Process.Kill() // TODO: check error?
return false, fmt.Errorf("timeout for cmd")
}
// save in memory for send/recv
// we use pointers to strings to indicate if used or not
if out.Stdout.Activity || out.Stderr.Activity {
str := out.String()
obj.Output = &str
}
if out.Stdout.Activity {
str := out.Stdout.String()
obj.Stdout = &str
}
if out.Stderr.Activity {
str := out.Stderr.String()
obj.Stderr = &str
}
// process the err result from cmd, we process non-zero exits here too!
exitErr, ok := err.(*exec.ExitError) // embeds an os.ProcessState
if err != nil && ok {
pStateSys := exitErr.Sys() // (*os.ProcessState) Sys
wStatus, ok := pStateSys.(syscall.WaitStatus)
if !ok {
e := errwrap.Wrapf(err, "error running cmd")
return false, e
}
return false, fmt.Errorf("cmd error, exit status: %d", wStatus.ExitStatus())
} else if err != nil {
e := errwrap.Wrapf(err, "general cmd error")
return false, e
}
// TODO: if we printed the stdout while the command is running, this
// would be nice, but it would require terminal log output that doesn't
// interleave all the parallel parts which would mix it all up...
if s := out.String(); s == "" {
log.Printf("%s[%s]: Command output is empty!", obj.Kind(), obj.GetName())
log.Printf("%s: Command output is empty!", obj)
} else {
log.Printf("%s[%s]: Command output is:", obj.Kind(), obj.GetName())
log.Printf("%s: Command output is:", obj)
log.Printf(out.String())
}
// XXX: return based on exit value!!
// The state tracking is for exec resources that can't "detect" their
// state, and assume it's invalid when the Watch() function triggers.
@@ -306,52 +363,46 @@ type ExecUID struct {
// TODO: add more elements here
}
// IFF aka if and only if they are equivalent, return true. If not, false.
func (obj *ExecUID) IFF(uid ResUID) bool {
res, ok := uid.(*ExecUID)
if !ok {
return false
}
if obj.Cmd != res.Cmd {
return false
}
// TODO: add more checks here
//if obj.Shell != res.Shell {
// return false
//}
//if obj.Timeout != res.Timeout {
// return false
//}
//if obj.WatchCmd != res.WatchCmd {
// return false
//}
//if obj.WatchShell != res.WatchShell {
// return false
//}
if obj.IfCmd != res.IfCmd {
return false
}
//if obj.PollInt != res.PollInt {
// return false
//}
//if obj.State != res.State {
// return false
//}
return true
// ExecResAutoEdges holds the state of the auto edge generator.
type ExecResAutoEdges struct {
edges []ResUID
}
// AutoEdges returns the AutoEdge interface. In this case no autoedges are used.
func (obj *ExecRes) AutoEdges() AutoEdge {
// TODO: parse as many exec params to look for auto edges, for example
// the path of the binary in the Cmd variable might be from in a pkg
return nil
// Next returns the next automatic edge.
func (obj *ExecResAutoEdges) Next() []ResUID {
return obj.edges
}
// Test gets results of the earlier Next() call, & returns if we should continue!
func (obj *ExecResAutoEdges) Test(input []bool) bool {
return false // Never keep going
// TODO: We could return false if we find as many edges as the number of different paths in cmdFiles()
}
// AutoEdges returns the AutoEdge interface. In this case it generates edges for the files (eg: the binaries and shells) that the commands depend on.
func (obj *ExecRes) AutoEdges() (AutoEdge, error) {
var data []ResUID
for _, x := range obj.cmdFiles() {
var reversed = true
data = append(data, &PkgFileUID{
BaseUID: BaseUID{
Name: obj.GetName(),
Kind: obj.GetKind(),
Reversed: &reversed,
},
path: x, // what matters
})
}
return &ExecResAutoEdges{
edges: data,
}, nil
}
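// Rough sketch of exercising these auto edges on their own (hypothetical
// values; normally the engine consumes the AutoEdge interface):
//
//	r := &ExecRes{
//		BaseRes: BaseRes{Name: "example", Kind: "exec"},
//		Cmd:     "/usr/bin/true",
//		IfCmd:   "/usr/bin/test -f /tmp/flag",
//	}
//	ae, err := r.AutoEdges()
//	if err != nil {
//		log.Fatal(err)
//	}
//	uids := ae.Next()                    // one PkgFileUID per path from cmdFiles()
//	_ = ae.Test(make([]bool, len(uids))) // always false: stop after one pass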
// UIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *ExecRes) UIDs() []ResUID {
x := &ExecUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
BaseUID: BaseUID{Name: obj.GetName(), Kind: obj.GetKind()},
Cmd: obj.Cmd,
IfCmd: obj.IfCmd,
// TODO: add more params here
@@ -369,17 +420,19 @@ func (obj *ExecRes) GroupCmp(r Res) bool {
}
// Compare two resources and return if they are equivalent.
func (obj *ExecRes) Compare(res Res) bool {
switch res.(type) {
case *ExecRes:
res := res.(*ExecRes)
func (obj *ExecRes) Compare(r Res) bool {
// we can only compare ExecRes to others of the same resource kind
res, ok := r.(*ExecRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
if obj.Cmd != res.Cmd {
return false
}
@@ -398,15 +451,16 @@ func (obj *ExecRes) Compare(res Res) bool {
if obj.IfCmd != res.IfCmd {
return false
}
if obj.PollInt != res.PollInt {
if obj.IfShell != res.IfShell {
return false
}
if obj.State != res.State {
if obj.User != res.User {
return false
}
default:
if obj.Group != res.Group {
return false
}
return true
}
@@ -429,3 +483,123 @@ func (obj *ExecRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
*obj = ExecRes(raw) // restore from indirection with type conversion!
return nil
}
// getCredential returns the correct *syscall.Credential if an User and Group
// are set.
func (obj *ExecRes) getCredential() (*syscall.Credential, error) {
var uid, gid int
var err error
var currentUser *user.User
if currentUser, err = user.Current(); err != nil {
return nil, errwrap.Wrapf(err, "error looking up current user")
}
if currentUser.Uid != "0" {
// since we're not root, we've got nothing to do
return nil, nil
}
if obj.Group != "" {
gid, err = GetGID(obj.Group)
if err != nil {
return nil, errwrap.Wrapf(err, "error looking up gid for %s", obj.Group)
}
}
if obj.User != "" {
uid, err = GetUID(obj.User)
if err != nil {
return nil, errwrap.Wrapf(err, "error looking up uid for %s", obj.User)
}
}
return &syscall.Credential{Uid: uint32(uid), Gid: uint32(gid)}, nil
}
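// Hypothetical standalone usage of getCredential (paths and user/group names
// are placeholders; this mirrors what Watch and CheckApply do above):
//
//	r := &ExecRes{
//		BaseRes: BaseRes{Name: "example", Kind: "exec"},
//		Cmd:     "/usr/bin/id",
//		User:    "nobody",
//		Group:   "nogroup",
//	}
//	cmd := exec.Command("/usr/bin/id")
//	cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
//	cred, err := r.getCredential() // nil credential if we're not running as root
//	if err != nil {
//		log.Fatal(err)
//	}
//	cmd.SysProcAttr.Credential = cred
//	err = cmd.Run()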
// splitWriter mimics what the ssh.CombinedOutput command does, but stores
// the stdout and stderr separately. This is slightly tricky because we don't
// want the combined output to be interleaved incorrectly. It creates sub writer
// structs which share the same lock and a shared output buffer.
type splitWriter struct {
Stdout *wrapWriter
Stderr *wrapWriter
stdout bytes.Buffer // just the stdout
stderr bytes.Buffer // just the stderr
output bytes.Buffer // combined output
mutex *sync.Mutex
initialized bool // is this initialized?
}
// Init initializes the splitWriter.
func (sw *splitWriter) Init() {
if sw.initialized {
panic("splitWriter is already initialized")
}
sw.mutex = &sync.Mutex{}
sw.Stdout = &wrapWriter{
Mutex: sw.mutex,
Buffer: &sw.stdout,
Output: &sw.output,
}
sw.Stderr = &wrapWriter{
Mutex: sw.mutex,
Buffer: &sw.stderr,
Output: &sw.output,
}
sw.initialized = true
}
// String returns the contents of the combined output buffer.
func (sw *splitWriter) String() string {
if !sw.initialized {
panic("splitWriter is not initialized")
}
return sw.output.String()
}
// wrapWriter is a simple writer which is used internally by splitWriter.
type wrapWriter struct {
Mutex *sync.Mutex
Buffer *bytes.Buffer // stdout or stderr
Output *bytes.Buffer // combined output
Activity bool // did we get any writes?
}
// Write writes to both bytes buffers with a parent lock to mix output safely.
func (w *wrapWriter) Write(p []byte) (int, error) {
// TODO: can we move the lock to only guard around the Output.Write ?
w.Mutex.Lock()
defer w.Mutex.Unlock()
w.Activity = true
i, err := w.Buffer.Write(p) // first write
if err != nil {
return i, err
}
return w.Output.Write(p) // shared write
}
// String returns the contents of the unshared buffer.
func (w *wrapWriter) String() string {
return w.Buffer.String()
}
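// Standalone sketch of the splitWriter pattern described above (illustrative;
// the command is a placeholder):
//
//	cmd := exec.Command("/bin/sh", "-c", "echo out; echo err 1>&2")
//	var out splitWriter
//	out.Init()              // allocate the shared lock and buffers
//	cmd.Stdout = out.Stdout // both writers share one lock, so the combined
//	cmd.Stderr = out.Stderr // buffer is never interleaved mid-write
//	if err := cmd.Run(); err != nil {
//		log.Fatal(err)
//	}
//	_ = out.Stdout.String() // "out\n"
//	_ = out.Stderr.String() // "err\n"
//	_ = out.String()        // combined output, in arrival order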
// cmdFiles returns all the potential files/commands this command might need.
func (obj *ExecRes) cmdFiles() []string {
var paths []string
if obj.Shell != "" {
paths = append(paths, obj.Shell)
} else if cmdSplit := strings.Fields(obj.Cmd); len(cmdSplit) > 0 {
paths = append(paths, cmdSplit[0])
}
if obj.WatchShell != "" {
paths = append(paths, obj.WatchShell)
} else if watchSplit := strings.Fields(obj.WatchCmd); len(watchSplit) > 0 {
paths = append(paths, watchSplit[0])
}
if obj.IfShell != "" {
paths = append(paths, obj.IfShell)
} else if ifSplit := strings.Fields(obj.IfCmd); len(ifSplit) > 0 {
paths = append(paths, ifSplit[0])
}
return paths
}
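// For illustration (hypothetical values): with Shell "/bin/bash", WatchCmd
// "inotifywait -e modify /tmp/f" and IfCmd "/usr/bin/test -s /tmp/f", cmdFiles
// above would return {"/bin/bash", "inotifywait", "/usr/bin/test"}, since a
// set shell takes precedence over the first field of the corresponding command.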

190
resources/exec_test.go Normal file

@@ -0,0 +1,190 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"testing"
)
func TestExecSendRecv1(t *testing.T) {
r1 := &ExecRes{
BaseRes: BaseRes{
Name: "exec1",
Kind: "exec",
MetaParams: DefaultMetaParams,
},
Cmd: "echo hello world",
Shell: "/bin/bash",
}
r1.Setup(nil, r1, r1)
if err := r1.Validate(); err != nil {
t.Errorf("validate failed with: %v", err)
}
defer func() {
if err := r1.Close(); err != nil {
t.Errorf("close failed with: %v", err)
}
}()
if err := r1.Init(); err != nil {
t.Errorf("init failed with: %v", err)
}
// run artificially without the entire engine
if _, err := r1.CheckApply(true); err != nil {
t.Errorf("checkapply failed with: %v", err)
}
t.Logf("output is: %v", r1.Output)
if r1.Output != nil {
t.Logf("output is: %v", *r1.Output)
}
t.Logf("stdout is: %v", r1.Stdout)
if r1.Stdout != nil {
t.Logf("stdout is: %v", *r1.Stdout)
}
t.Logf("stderr is: %v", r1.Stderr)
if r1.Stderr != nil {
t.Logf("stderr is: %v", *r1.Stderr)
}
if r1.Stdout == nil {
t.Errorf("stdout is nil")
} else {
if out := *r1.Stdout; out != "hello world\n" {
t.Errorf("got wrong stdout(%d): %s", len(out), out)
}
}
}
func TestExecSendRecv2(t *testing.T) {
r1 := &ExecRes{
BaseRes: BaseRes{
Name: "exec1",
Kind: "exec",
MetaParams: DefaultMetaParams,
},
Cmd: "echo hello world 1>&2", // to stderr
Shell: "/bin/bash",
}
r1.Setup(nil, r1, r1)
if err := r1.Validate(); err != nil {
t.Errorf("validate failed with: %v", err)
}
defer func() {
if err := r1.Close(); err != nil {
t.Errorf("close failed with: %v", err)
}
}()
if err := r1.Init(); err != nil {
t.Errorf("init failed with: %v", err)
}
// run artificially without the entire engine
if _, err := r1.CheckApply(true); err != nil {
t.Errorf("checkapply failed with: %v", err)
}
t.Logf("output is: %v", r1.Output)
if r1.Output != nil {
t.Logf("output is: %v", *r1.Output)
}
t.Logf("stdout is: %v", r1.Stdout)
if r1.Stdout != nil {
t.Logf("stdout is: %v", *r1.Stdout)
}
t.Logf("stderr is: %v", r1.Stderr)
if r1.Stderr != nil {
t.Logf("stderr is: %v", *r1.Stderr)
}
if r1.Stderr == nil {
t.Errorf("stderr is nil")
} else {
if out := *r1.Stderr; out != "hello world\n" {
t.Errorf("got wrong stderr(%d): %s", len(out), out)
}
}
}
func TestExecSendRecv3(t *testing.T) {
r1 := &ExecRes{
BaseRes: BaseRes{
Name: "exec1",
Kind: "exec",
MetaParams: DefaultMetaParams,
},
Cmd: "echo hello world && echo goodbye world 1>&2", // to stdout && stderr
Shell: "/bin/bash",
}
r1.Setup(nil, r1, r1)
if err := r1.Validate(); err != nil {
t.Errorf("validate failed with: %v", err)
}
defer func() {
if err := r1.Close(); err != nil {
t.Errorf("close failed with: %v", err)
}
}()
if err := r1.Init(); err != nil {
t.Errorf("init failed with: %v", err)
}
// run artificially without the entire engine
if _, err := r1.CheckApply(true); err != nil {
t.Errorf("checkapply failed with: %v", err)
}
t.Logf("output is: %v", r1.Output)
if r1.Output != nil {
t.Logf("output is: %v", *r1.Output)
}
t.Logf("stdout is: %v", r1.Stdout)
if r1.Stdout != nil {
t.Logf("stdout is: %v", *r1.Stdout)
}
t.Logf("stderr is: %v", r1.Stderr)
if r1.Stderr != nil {
t.Logf("stderr is: %v", *r1.Stderr)
}
if r1.Output == nil {
t.Errorf("output is nil")
} else {
// it looks like bash or golang race to the write, so whichever
// order they come out in is ok, as long as they come out whole
if out := *r1.Output; out != "hello world\ngoodbye world\n" && out != "goodbye world\nhello world\n" {
t.Errorf("got wrong output(%d): %s", len(out), out)
}
}
if r1.Stdout == nil {
t.Errorf("stdout is nil")
} else {
if out := *r1.Stdout; out != "hello world\n" {
t.Errorf("got wrong stdout(%d): %s", len(out), out)
}
}
if r1.Stderr == nil {
t.Errorf("stderr is nil")
} else {
if out := *r1.Stderr; out != "goodbye world\n" {
t.Errorf("got wrong stderr(%d): %s", len(out), out)
}
}
}

View File

@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
@@ -20,14 +20,12 @@ package resources
import (
"bytes"
"crypto/sha256"
"encoding/gob"
"encoding/hex"
"fmt"
"io"
"io/ioutil"
"log"
"os"
"os/user"
"path"
"path/filepath"
"strconv"
@@ -41,7 +39,7 @@ import (
)
func init() {
gob.Register(&FileRes{})
RegisterResource("file", func() Res { return &FileRes{} })
}
// FileRes is a file and directory resource.
@@ -98,11 +96,11 @@ func (obj *FileRes) Validate() error {
}
}
if _, err := obj.uid(); obj.Owner != "" && err != nil {
if _, err := GetUID(obj.Owner); obj.Owner != "" && err != nil {
return err
}
if _, err := obj.gid(); obj.Group != "" && err != nil {
if _, err := GetGID(obj.Group); obj.Group != "" && err != nil {
return err
}
@@ -125,29 +123,12 @@ func (obj *FileRes) mode() (os.FileMode, error) {
return os.FileMode(m), nil
}
// uid returns the user id for the owner specified in the yaml file graph.
// Caller should first check obj.Owner is not empty
func (obj *FileRes) uid() (int, error) {
u2, err2 := user.LookupId(obj.Owner)
if err2 == nil {
return strconv.Atoi(u2.Uid)
}
u, err := user.Lookup(obj.Owner)
if err == nil {
return strconv.Atoi(u.Uid)
}
return -1, errwrap.Wrapf(err, "owner lookup error (%s)", obj.Owner)
}
// Init runs some startup code for this resource.
func (obj *FileRes) Init() error {
obj.sha256sum = ""
obj.path = obj.GetPath() // compute once
obj.isDir = strings.HasSuffix(obj.path, "/") // dirs have trailing slashes
obj.BaseRes.kind = "file"
return obj.BaseRes.Init() // call base init, b/c we're overriding
}
@@ -198,7 +179,7 @@ func (obj *FileRes) Watch() error {
for {
if obj.debug {
log.Printf("%s[%s]: Watching: %s", obj.Kind(), obj.GetName(), obj.path) // attempting to watch...
log.Printf("%s: Watching: %s", obj, obj.path) // attempting to watch...
}
select {
@@ -207,10 +188,10 @@ func (obj *FileRes) Watch() error {
return nil
}
if err := event.Error; err != nil {
return errwrap.Wrapf(err, "unknown %s[%s] watcher error", obj.Kind(), obj.GetName())
return errwrap.Wrapf(err, "unknown %s watcher error", obj)
}
if obj.debug { // don't access event.Body if event.Error isn't nil
log.Printf("%s[%s]: Event(%s): %v", obj.Kind(), obj.GetName(), event.Body.Name, event.Body.Op)
log.Printf("%s: Event(%s): %v", obj, event.Body.Name, event.Body.Op)
}
send = true
obj.StateOK(false) // dirty
@@ -635,7 +616,7 @@ func (obj *FileRes) syncCheckApply(apply bool, src, dst string) (bool, error) {
// contentCheckApply performs a CheckApply for the file existence and content.
func (obj *FileRes) contentCheckApply(apply bool) (checkOK bool, _ error) {
log.Printf("%s[%s]: contentCheckApply(%t)", obj.Kind(), obj.GetName(), apply)
log.Printf("%s: contentCheckApply(%t)", obj, apply)
if obj.State == "absent" {
if _, err := os.Stat(obj.path); os.IsNotExist(err) {
@@ -697,7 +678,7 @@ func (obj *FileRes) contentCheckApply(apply bool) (checkOK bool, _ error) {
// chmodCheckApply performs a CheckApply for the file permissions.
func (obj *FileRes) chmodCheckApply(apply bool) (checkOK bool, _ error) {
log.Printf("%s[%s]: chmodCheckApply(%t)", obj.Kind(), obj.GetName(), apply)
log.Printf("%s: chmodCheckApply(%t)", obj, apply)
if obj.State == "absent" {
// File is absent
@@ -743,7 +724,7 @@ func (obj *FileRes) chmodCheckApply(apply bool) (checkOK bool, _ error) {
// chownCheckApply performs a CheckApply for the file ownership.
func (obj *FileRes) chownCheckApply(apply bool) (checkOK bool, _ error) {
var expectedUID, expectedGID int
log.Printf("%s[%s]: chownCheckApply(%t)", obj.Kind(), obj.GetName(), apply)
log.Printf("%s: chownCheckApply(%t)", obj, apply)
if obj.State == "absent" {
// File is absent or no owner specified
@@ -769,7 +750,7 @@ func (obj *FileRes) chownCheckApply(apply bool) (checkOK bool, _ error) {
}
if obj.Owner != "" {
expectedUID, err = obj.uid()
expectedUID, err = GetUID(obj.Owner)
if err != nil {
return false, err
}
@@ -779,7 +760,7 @@ func (obj *FileRes) chownCheckApply(apply bool) (checkOK bool, _ error) {
}
if obj.Group != "" {
expectedGID, err = obj.gid()
expectedGID, err = GetGID(obj.Group)
if err != nil {
return false, err
}
@@ -897,17 +878,18 @@ func (obj *FileResAutoEdges) Test(input []bool) bool {
// AutoEdges generates a simple linear sequence of each parent directory from
// the bottom up!
func (obj *FileRes) AutoEdges() AutoEdge {
func (obj *FileRes) AutoEdges() (AutoEdge, error) {
var data []ResUID // store linear result chain here...
values := util.PathSplitFullReversed(obj.path) // build it
// build it, but don't use obj.path because this gets called before Init
values := util.PathSplitFullReversed(obj.GetPath())
_, values = values[0], values[1:] // get rid of first value which is me!
for _, x := range values {
var reversed = true // cheat by passing a pointer
data = append(data, &FileUID{
BaseUID: BaseUID{
name: obj.GetName(),
kind: obj.Kind(),
reversed: &reversed,
Name: obj.GetName(),
Kind: obj.GetKind(),
Reversed: &reversed,
},
path: x, // what matters
}) // build list
@@ -916,15 +898,15 @@ func (obj *FileRes) AutoEdges() AutoEdge {
data: data,
pointer: 0,
found: false,
}
}, nil
}
// UIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *FileRes) UIDs() []ResUID {
x := &FileUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
path: obj.path,
BaseUID: BaseUID{Name: obj.GetName(), Kind: obj.GetKind()},
path: obj.GetPath(), // not obj.path b/c we didn't init yet!
}
return []ResUID{x}
}
@@ -941,17 +923,19 @@ func (obj *FileRes) GroupCmp(r Res) bool {
}
// Compare two resources and return if they are equivalent.
func (obj *FileRes) Compare(res Res) bool {
switch res.(type) {
case *FileRes:
res := res.(*FileRes)
func (obj *FileRes) Compare(r Res) bool {
// we can only compare FileRes to others of the same resource kind
res, ok := r.(*FileRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
if obj.path != res.path {
return false
}
@@ -975,9 +959,7 @@ func (obj *FileRes) Compare(res Res) bool {
if obj.Force != res.Force {
return false
}
default:
return false
}
return true
}

View File

@@ -1,43 +0,0 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build go1.7
package resources
import (
"os/user"
"strconv"
errwrap "github.com/pkg/errors"
)
// gid returns the group id for the group specified in the yaml file graph.
// Caller should first check obj.Group is not empty
func (obj *FileRes) gid() (int, error) {
g2, err2 := user.LookupGroupId(obj.Group)
if err2 == nil {
return strconv.Atoi(g2.Gid)
}
g, err := user.LookupGroup(obj.Group)
if err == nil {
return strconv.Atoi(g.Gid)
}
return -1, errwrap.Wrapf(err, "Group lookup error (%s)", obj.Group)
}

View File

@@ -1,43 +0,0 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build !go1.7
package resources
import (
"strconv"
group "github.com/hnakamur/group"
errwrap "github.com/pkg/errors"
)
// gid returns the group id for the group specified in the yaml file graph.
// Caller should first check obj.Group is not empty
func (obj *FileRes) gid() (int, error) {
g2, err2 := group.LookupId(obj.Group)
if err2 == nil {
return strconv.Atoi(g2.Gid)
}
g, err := group.Lookup(obj.Group)
if err == nil {
return strconv.Atoi(g.Gid)
}
return -1, errwrap.Wrapf(err, "Group lookup error (%s)", obj.Group)
}

79
resources/file_test.go Normal file

@@ -0,0 +1,79 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"testing"
"github.com/purpleidea/mgmt/pgraph"
)
func TestFileAutoEdge1(t *testing.T) {
g, err := pgraph.NewGraph("TestGraph")
if err != nil {
t.Errorf("error creating graph: %v", err)
return
}
r1 := &FileRes{
BaseRes: BaseRes{
Name: "file1",
Kind: "file",
MetaParams: MetaParams{
AutoEdge: true,
},
},
Path: "/tmp/a/b/", // some dir
}
r2 := &FileRes{
BaseRes: BaseRes{
Name: "file2",
Kind: "file",
MetaParams: MetaParams{
AutoEdge: true,
},
},
Path: "/tmp/a/", // some parent dir
}
r3 := &FileRes{
BaseRes: BaseRes{
Name: "file3",
Kind: "file",
MetaParams: MetaParams{
AutoEdge: true,
},
},
Path: "/tmp/a/b/c", // some child file
}
g.AddVertex(r1, r2, r3)
if i := g.NumEdges(); i != 0 {
t.Errorf("should have 0 edges instead of: %d", i)
}
// run artificially without the entire engine
if err := AutoEdges(g); err != nil {
t.Errorf("error running autoedges: %v", err)
}
// two edges should have been added
if i := g.NumEdges(); i != 2 {
t.Errorf("should have 2 edges instead of: %d", i)
}
}

234
resources/graph.go Normal file

@@ -0,0 +1,234 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"fmt"
"github.com/purpleidea/mgmt/pgraph"
multierr "github.com/hashicorp/go-multierror"
errwrap "github.com/pkg/errors"
)
func init() {
RegisterResource("graph", func() Res { return &GraphRes{} })
}
// GraphRes is a resource that recursively runs a sub graph of resources.
// TODO: should we name this SubGraphRes instead?
// TODO: we could also flatten "sub graphs" into the main graph to avoid this,
// and this could even be done with a graph transformation called flatten,
// similar to where autogroup and autoedges run.
// XXX: this resource is not complete, and hasn't even been tested
type GraphRes struct {
BaseRes `yaml:",inline"`
Graph *pgraph.Graph `yaml:"graph"` // TODO: how do we suck in a graph via yaml?
initCount int // number of successfully initialized resources
}
// GraphUID is a unique representation for a GraphRes object.
type GraphUID struct {
BaseUID
//foo string // XXX: not implemented
}
// Default returns some sensible defaults for this resource.
func (obj *GraphRes) Default() Res {
return &GraphRes{
BaseRes: BaseRes{
MetaParams: DefaultMetaParams, // force a default
},
}
}
// Validate the params and sub resources that are passed to GraphRes.
func (obj *GraphRes) Validate() error {
var err error
for _, v := range obj.Graph.VerticesSorted() { // validate everyone
if e := VtoR(v).Validate(); e != nil {
err = multierr.Append(err, e) // list of errors
}
}
if err != nil {
return errwrap.Wrapf(err, "could not Validate() graph")
}
return obj.BaseRes.Validate()
}
// Init runs some startup code for this resource.
func (obj *GraphRes) Init() error {
// Loop through each vertex and initialize it, but keep track of how far
// we've succeeded, because on failure we'll stop and prepare to reverse
// through from there running the Close operation on each vertex that we
// previously did an Init on. The engine always ensures that we run this
// with a 1-1 relationship between Init and Close, so we must do so too.
for i, v := range obj.Graph.VerticesSorted() { // deterministic order!
obj.initCount = i + 1 // store the number that we tried to init
if err := VtoR(v).Init(); err != nil {
return errwrap.Wrapf(err, "could not Init() graph")
}
}
return obj.BaseRes.Init() // call base init, b/c we're overriding
}
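The bookkeeping here is a general pattern: record how far initialization got, and on teardown walk back through exactly those members. A standalone sketch of the idea with a hypothetical interface (it also closes the member whose Init failed, mirroring the 1-1 Init/Close contract described above):

// Standalone sketch of partial init with reverse teardown; the interface
// and types are hypothetical and not part of this changeset.
package main

import "fmt"

type initCloser interface {
	Init() error
	Close() error
}

// initAll initializes items in order; on failure it closes every item that
// had Init called on it (including the one that failed), in reverse order.
func initAll(items []initCloser) error {
	for i, it := range items {
		if err := it.Init(); err != nil {
			for j := i; j >= 0; j-- { // unwind in reverse
				items[j].Close() // best effort; errors ignored in this sketch
			}
			return fmt.Errorf("init %d failed: %v", i, err)
		}
	}
	return nil
}

type noop struct{ name string }

func (n *noop) Init() error  { fmt.Println("init", n.name); return nil }
func (n *noop) Close() error { fmt.Println("close", n.name); return nil }

func main() {
	fmt.Println(initAll([]initCloser{&noop{"a"}, &noop{"b"}}))
}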
// Close runs some cleanup code for this resource.
func (obj *GraphRes) Close() error {
// The idea is to Close anything we did an Init on including the BaseRes
// methods which are not guaranteed to be safe if called multiple times!
var err error
vertices := obj.Graph.VerticesSorted() // deterministic order!
last := obj.initCount - 1 // index of last vertex we did init on
for i := range vertices {
// if we hit this condition, we haven't been able to get through
// the entire list of vertices that we'd have liked to, on init!
// NOTE: check this before indexing, so a partial init can't index
// past the count of vertices that actually ran Init.
if obj.initCount == 0 {
// if we get here, we exit without calling BaseRes.Close
// because the matching BaseRes.Init did not get called!
return errwrap.Wrapf(err, "could not Close() partial graph")
//break
}
v := vertices[last-i] // go through in reverse
obj.initCount-- // count to avoid closing one that didn't init!
// try to close everyone that got an init, don't stop suddenly!
if e := VtoR(v).Close(); e != nil {
err = multierr.Append(err, e) // list of errors
}
}
// call base close, b/c we're overriding
if e := obj.BaseRes.Close(); err == nil {
err = e
} else if e != nil {
err = multierr.Append(err, e) // list of errors
}
// this returns nil if err is nil
return errwrap.Wrapf(err, "could not Close() graph")
}
// Watch is the primary listener for this resource and it outputs events.
// XXX: should this use mgraph.Start/Pause? if so then what does CheckApply do?
// XXX: we should probably refactor the core engine to make this work, which
// will hopefully lead us to a more elegant core that is easier to understand
func (obj *GraphRes) Watch() error {
return fmt.Errorf("Not implemented")
}
// CheckApply method for Graph resource.
// XXX: not implemented
func (obj *GraphRes) CheckApply(apply bool) (bool, error) {
return false, fmt.Errorf("Not implemented")
}
// UIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *GraphRes) UIDs() []ResUID {
x := &GraphUID{
BaseUID: BaseUID{
Name: obj.GetName(),
Kind: obj.GetKind(),
},
//foo: obj.foo, // XXX: not implemented
}
uids := []ResUID{}
for _, v := range obj.Graph.VerticesSorted() {
uids = append(uids, VtoR(v).UIDs()...)
}
return append([]ResUID{x}, uids...)
}
// XXX: hook up the autogrouping magic!
// Compare two resources and return if they are equivalent.
func (obj *GraphRes) Compare(r Res) bool {
// we can only compare GraphRes to others of the same resource kind
res, ok := r.(*GraphRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) {
return false
}
if obj.Name != res.Name {
return false
}
//if obj.Foo != res.Foo { // XXX: not implemented
// return false
//}
// compare the structure of the two graphs...
vertexCmpFn := func(v1, v2 pgraph.Vertex) (bool, error) {
if v1.String() == "" || v2.String() == "" {
return false, fmt.Errorf("oops, empty vertex")
}
return VtoR(v1).Compare(VtoR(v2)), nil
}
edgeCmpFn := func(e1, e2 pgraph.Edge) (bool, error) {
if e1.String() == "" || e2.String() == "" {
return false, fmt.Errorf("oops, empty edge")
}
edge1 := e1.(*Edge) // panic if wrong
edge2 := e2.(*Edge) // panic if wrong
return edge1.Compare(edge2), nil
}
if err := obj.Graph.GraphCmp(res.Graph, vertexCmpFn, edgeCmpFn); err != nil {
return false
}
// compare individual elements in structurally equivalent graphs
// TODO: is this redundant with the GraphCmp?
g1 := obj.Graph.VerticesSorted()
g2 := res.Graph.VerticesSorted()
for i, v1 := range g1 {
v2 := g2[i]
if !VtoR(v1).Compare(VtoR(v2)) {
return false
}
}
return true
}
// UnmarshalYAML is the custom unmarshal handler for this struct.
// It is primarily useful for setting the defaults.
func (obj *GraphRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes GraphRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*GraphRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to GraphRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = GraphRes(raw) // restore from indirection with type conversion!
return nil
}
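The rawRes indirection is the standard way to set defaults in a custom YAML unmarshaler: the alias type has the same fields but not the UnmarshalYAML method, so unmarshalling into it cannot recurse. A standalone sketch with a hypothetical struct, assuming gopkg.in/yaml.v2 semantics:

// Standalone sketch of the defaults-via-indirection pattern used above;
// the struct and its defaults are hypothetical.
package main

import (
	"fmt"

	yaml "gopkg.in/yaml.v2"
)

type Config struct {
	State string `yaml:"state"`
	Count int    `yaml:"count"`
}

// UnmarshalYAML fills defaults first, then overlays whatever the YAML sets.
func (obj *Config) UnmarshalYAML(unmarshal func(interface{}) error) error {
	type rawConfig Config                       // same fields, no methods: no recursion
	raw := rawConfig{State: "exists", Count: 1} // the defaults go here
	if err := unmarshal(&raw); err != nil {
		return err
	}
	*obj = Config(raw) // restore from indirection with type conversion
	return nil
}

func main() {
	var c Config
	if err := yaml.Unmarshal([]byte("count: 5\n"), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", c) // the State default survives: {State:exists Count:5}
}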

resources/group.go (new file, 314 lines)

@@ -0,0 +1,314 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"fmt"
"io/ioutil"
"log"
"os/exec"
"os/user"
"strconv"
"syscall"
"github.com/purpleidea/mgmt/recwatch"
errwrap "github.com/pkg/errors"
)
func init() {
RegisterResource("group", func() Res { return &GroupRes{} })
}
const groupFile = "/etc/group"
// GroupRes is a user group resource.
type GroupRes struct {
BaseRes `yaml:",inline"`
State string `yaml:"state"` // state: exists, absent
GID *uint32 `yaml:"gid"` // the group's gid
recWatcher *recwatch.RecWatcher
}
// Default returns some sensible defaults for this resource.
func (obj *GroupRes) Default() Res {
return &GroupRes{
BaseRes: BaseRes{
MetaParams: DefaultMetaParams, // force a default
},
}
}
// Validate if the params passed in are valid data.
func (obj *GroupRes) Validate() error {
if obj.State != "exists" && obj.State != "absent" {
return fmt.Errorf("State must be 'exists' or 'absent'")
}
return obj.BaseRes.Validate()
}
// Init initializes the resource.
func (obj *GroupRes) Init() error {
return obj.BaseRes.Init() // call base init, b/c we're overriding
}
// Watch is the primary listener for this resource and it outputs events.
func (obj *GroupRes) Watch() error {
var err error
obj.recWatcher, err = recwatch.NewRecWatcher(groupFile, false)
if err != nil {
return err
}
defer obj.recWatcher.Close()
// notify engine that we're running
if err := obj.Running(); err != nil {
return err // bubble up a NACK...
}
var send = false // send event?
var exit *error
for {
if obj.debug {
log.Printf("%s: Watching: %s", obj, groupFile) // attempting to watch...
}
select {
case event, ok := <-obj.recWatcher.Events():
if !ok { // channel shutdown
return nil
}
if err := event.Error; err != nil {
return errwrap.Wrapf(err, "Unknown %s watcher error", obj)
}
if obj.debug { // don't access event.Body if event.Error isn't nil
log.Printf("%s: Event(%s): %v", obj, event.Body.Name, event.Body.Op)
}
send = true
obj.StateOK(false) // dirty
case event := <-obj.Events():
if exit, send = obj.ReadEvent(event); exit != nil {
return *exit // exit
}
//obj.StateOK(false) // dirty // these events don't invalidate state
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.Event()
}
}
}
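Watch above is driven by the engine, but the recwatch usage can be seen in isolation. A minimal standalone sketch that watches the same file using only the calls that appear above (the recwatch API details are assumed from this diff, not independently verified):

// Standalone sketch: watch /etc/group directly with recwatch, using only
// the calls that appear in Watch() above; error handling is simplified.
package main

import (
	"log"

	"github.com/purpleidea/mgmt/recwatch"
)

func main() {
	w, err := recwatch.NewRecWatcher("/etc/group", false) // no recursion
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	for event := range w.Events() {
		if err := event.Error; err != nil {
			log.Fatalf("watcher error: %v", err)
		}
		// don't access event.Body unless event.Error is nil
		log.Printf("event: %s: %v", event.Body.Name, event.Body.Op)
	}
}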
// CheckApply method for Group resource.
func (obj *GroupRes) CheckApply(apply bool) (checkOK bool, err error) {
log.Printf("%s: CheckApply(%t)", obj, apply)
// check if the group exists
exists := true
group, err := user.LookupGroup(obj.GetName())
if err != nil {
if _, ok := err.(user.UnknownGroupError); !ok {
return false, errwrap.Wrapf(err, "error looking up group")
}
exists = false
}
// if the group doesn't exist and should be absent, we are done
if obj.State == "absent" && !exists {
return true, nil
}
// if the group exists and no GID is specified, we are done
if obj.State == "exists" && exists && obj.GID == nil {
return true, nil
}
if exists && obj.GID != nil {
// check if GID is taken
lookupGID, err := user.LookupGroupId(strconv.Itoa(int(*obj.GID)))
if err != nil {
if _, ok := err.(user.UnknownGroupIdError); !ok {
return false, errwrap.Wrapf(err, "error looking up GID")
}
}
if lookupGID != nil && lookupGID.Name != obj.GetName() {
return false, fmt.Errorf("the requested GID belongs to another group")
}
// get the existing group's GID
existingGID, err := strconv.ParseUint(group.Gid, 10, 32)
if err != nil {
return false, errwrap.Wrapf(err, "error casting existing GID")
}
// check if existing group has the wrong GID
// if it is wrong groupmod will change it to the desired value
if *obj.GID != uint32(existingGID) {
log.Printf("%s: Inconsistent GID: %s", obj, obj.GetName())
}
// if the group exists and has the correct GID, we are done
if obj.State == "exists" && *obj.GID == uint32(existingGID) {
return true, nil
}
}
if !apply {
return false, nil
}
var cmdName string
args := []string{obj.GetName()}
if obj.State == "exists" {
if exists {
log.Printf("%s: Modifying group: %s", obj, obj.GetName())
cmdName = "groupmod"
} else {
log.Printf("%s: Adding group: %s", obj, obj.GetName())
cmdName = "groupadd"
}
if obj.GID != nil {
args = append(args, "-g", fmt.Sprintf("%d", *obj.GID))
}
}
if obj.State == "absent" && exists {
log.Printf("%s: Deleting group: %s", obj, obj.GetName())
cmdName = "groupdel"
}
cmd := exec.Command(cmdName, args...)
cmd.SysProcAttr = &syscall.SysProcAttr{
Setpgid: true,
Pgid: 0,
}
// open a pipe to get error messages from os/exec
stderr, err := cmd.StderrPipe()
if err != nil {
return false, errwrap.Wrapf(err, "failed to initialize stderr pipe")
}
// start the command
if err := cmd.Start(); err != nil {
return false, errwrap.Wrapf(err, "cmd failed to start")
}
// capture any error messages
slurp, err := ioutil.ReadAll(stderr)
if err != nil {
return false, errwrap.Wrapf(err, "error slurping error message")
}
// wait until cmd exits and return error message if any
if err := cmd.Wait(); err != nil {
return false, errwrap.Wrapf(err, "%s", slurp)
}
return false, nil
}
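The apply branch shells out to groupmod, groupadd or groupdel depending on the desired state and whether the group already exists. A standalone sketch that isolates just that selection (the helper name and the demo values are hypothetical):

// Standalone sketch of the command selection in CheckApply above; the
// earlier-return cases (e.g. absent and already missing) are assumed to be
// handled by the caller, as in the resource.
package main

import "fmt"

func groupCmd(state string, exists bool, name string, gid *uint32) (string, []string) {
	args := []string{name}
	var cmdName string
	if state == "exists" {
		if exists {
			cmdName = "groupmod" // adjust an existing group
		} else {
			cmdName = "groupadd" // create a missing group
		}
		if gid != nil {
			args = append(args, "-g", fmt.Sprintf("%d", *gid))
		}
	}
	if state == "absent" && exists {
		cmdName = "groupdel" // remove an unwanted group
	}
	return cmdName, args // an empty cmdName means nothing to do
}

func main() {
	gid := uint32(1234)
	fmt.Println(groupCmd("exists", false, "mgmt", &gid)) // groupadd [mgmt -g 1234]
	fmt.Println(groupCmd("absent", true, "mgmt", nil))   // groupdel [mgmt]
}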
// GroupUID is the UID struct for GroupRes.
type GroupUID struct {
BaseUID
name string
gid *uint32
}
// IFF aka if and only if they are equivalent, return true. If not, false.
func (obj *GroupUID) IFF(uid ResUID) bool {
res, ok := uid.(*GroupUID)
if !ok {
return false
}
if obj.gid != nil && res.gid != nil {
if *obj.gid != *res.gid {
return false
}
}
if obj.name != "" && res.name != "" {
if obj.name != res.name {
return false
}
}
return true
}
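IFF only compares fields that are set on both sides: a nil gid or an empty name acts as a wildcard. A small illustrative fragment (hypothetical values; it would have to live in this package to reach the unexported fields, and needs "fmt" imported):

// Illustrative fragment, not part of this changeset: fields left unset on
// either side are skipped, so both comparisons below succeed.
func ExampleGroupUID_IFF() {
	gid := uint32(1000)
	a := &GroupUID{name: "staff", gid: &gid}
	b := &GroupUID{name: "staff"} // gid unset: acts as a wildcard
	c := &GroupUID{gid: &gid}     // name unset: acts as a wildcard
	fmt.Println(a.IFF(b), a.IFF(c))
	// Output: true true
}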
// UIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *GroupRes) UIDs() []ResUID {
x := &GroupUID{
BaseUID: BaseUID{Name: obj.GetName(), Kind: obj.GetKind()},
name: obj.Name,
gid: obj.GID,
}
return []ResUID{x}
}
// GroupCmp returns whether two resources can be grouped together or not.
func (obj *GroupRes) GroupCmp(r Res) bool {
_, ok := r.(*GroupRes)
if !ok {
return false
}
return false
}
// Compare two resources and return if they are equivalent.
func (obj *GroupRes) Compare(r Res) bool {
// we can only compare GroupRes to others of the same resource kind
res, ok := r.(*GroupRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
if obj.State != res.State {
return false
}
if (obj.GID == nil) != (res.GID == nil) {
return false
}
if obj.GID != nil && res.GID != nil {
if *obj.GID != *res.GID {
return false
}
}
return true
}
// UnmarshalYAML is the custom unmarshal handler for this struct.
// It is primarily useful for setting the defaults.
func (obj *GroupRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes GroupRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*GroupRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to GroupRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = GroupRes(raw) // restore from indirection with type conversion!
return nil
}

resources/hostname.go (modified)

@@ -1,24 +1,23 @@
// Mgmt
-// Copyright (C) 2013-2017+ James Shubin and the project contributors
+// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Affero General Public License as published by
+// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Affero General Public License for more details.
+// GNU General Public License for more details.
//
-// You should have received a copy of the GNU Affero General Public License
+// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"encoding/gob"
"errors"
"fmt"
"log"
@@ -35,7 +34,7 @@ var ErrResourceInsufficientParameters = errors.New(
"Insufficient parameters for this resource")
func init() {
-gob.Register(&HostnameRes{})
+RegisterResource("hostname", func() Res { return &HostnameRes{} })
}
const (
@@ -87,7 +86,6 @@ func (obj *HostnameRes) Validate() error {
// Init runs some startup code for this resource.
func (obj *HostnameRes) Init() error {
-obj.BaseRes.kind = "hostname"
if obj.PrettyHostname == "" {
obj.PrettyHostname = obj.Hostname
}
@@ -227,16 +225,11 @@ type HostnameUID struct {
transientHostname string
}
-// AutoEdges returns the AutoEdge interface. In this case no autoedges are used.
-func (obj *HostnameRes) AutoEdges() AutoEdge {
-return nil
-}
// UIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *HostnameRes) UIDs() []ResUID {
x := &HostnameUID{
-BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
+BaseUID: BaseUID{Name: obj.GetName(), Kind: obj.GetKind()},
name: obj.Name,
prettyHostname: obj.PrettyHostname,
staticHostname: obj.StaticHostname,
@@ -251,16 +244,19 @@ func (obj *HostnameRes) GroupCmp(r Res) bool {
}
// Compare two resources and return if they are equivalent.
-func (obj *HostnameRes) Compare(res Res) bool {
-switch res := res.(type) {
-// we can only compare HostnameRes to others of the same resource
-case *HostnameRes:
+func (obj *HostnameRes) Compare(r Res) bool {
+// we can only compare HostnameRes to others of the same resource kind
+res, ok := r.(*HostnameRes)
+if !ok {
+return false
+}
+if !obj.BaseRes.Compare(res) { // call base Compare
+return false
+}
+if obj.Name != res.Name {
+return false
+}
if obj.PrettyHostname != res.PrettyHostname {
return false
}
@@ -270,9 +266,7 @@ func (obj *HostnameRes) Compare(res Res) bool {
if obj.TransientHostname != res.TransientHostname {
return false
}
-default:
-return false
-}
return true
}

Some files were not shown because too many files have changed in this diff.