261 Commits

Author SHA1 Message Date
Jonathan Gold
9907c12eda resources: Enhancements to user and group
This patch adds autoedges between users and groups, and extends
users with additional fields for supplementary groups and a named
primary group. Also, some small fixes to log and error messages.
2017-10-23 19:18:52 -04:00
Jonathan Gold
19533a32b5 resources: Add a group resource 2017-10-21 01:28:22 -04:00
Jonathan Gold
c5a5004f9e resources: Fix user gid compare 2017-10-19 06:58:31 -04:00
Jonathan Gold
677cdea99d resources: Improve nspawn resource 2017-10-17 19:23:04 -04:00
Jonathan Gold
4d7c0ddbce resources: Add an Aws resource 2017-10-09 04:05:13 -04:00
James Shubin
81daf10157 test: Fix linter issues
These are some linter issues that were found in a new version of the
linter. Let's fix them now before that linter hits our test suite.
2017-09-26 19:38:53 -04:00
James Shubin
b3ef4e41bf test: Use stable version of gometalinter
Hopefully this prevents the various breakages seen in our lint test.
2017-09-26 19:08:43 -04:00
James Shubin
9fbf149717 etcd: Bump to newer versions 2017-09-19 18:21:15 -04:00
James Shubin
95cb94a039 vendor: Add codec package because of breakage
Recent git master 54210f4e076c57f351166f0ed60e67d3fca57a36 of
github.com/ugorji/go broke the builds. See:
https://github.com/coreos/etcd/issues/8579
2017-09-19 18:21:15 -04:00
Juan Luis de Sousa-Valadas Castaño
21f7f87716 resources: Refresh packagekit cache before install
Fixes #80
2017-09-17 22:29:15 +02:00
Jonathan Gold
831c7e2c32 resources: Add user resource 2017-09-17 01:04:36 -04:00
James Shubin
cc0d04c8b7 git: Ignore .envrc file from direnv
Some find this useful for setting a custom GOPATH per project.
2017-09-15 16:17:40 -04:00
James Shubin
46be83f8f7 legal: Re-license to GPLv3 2017-09-11 18:07:47 -04:00
James Shubin
28560e2045 resources: Fix formatting 2017-09-11 18:06:34 -04:00
James Shubin
0df4824a56 test: Increase timeouts for slow travis
Should prevent more intermittent failures.
2017-09-09 15:31:06 -04:00
James Shubin
dbcabc6517 github: Improve the PR template 2017-09-09 15:03:53 -04:00
Jonathan Gold
69f479b67e virt: Allow more than 26 disks 2017-09-08 02:15:40 +00:00
James Shubin
af75696018 github: Add a PR template to help new users
Hopefully this addresses the most common things.
2017-09-07 16:14:11 -04:00
Arthur Mello
80b8f8740f virt: Added support for ~user into expandHome
- Enabled expandHome to expand both ~/ and ~username/ paths
- Added some unit tests for expandHome
2017-09-06 14:59:08 -04:00
James Shubin
71ab325940 yaml2: Meta should keep defaults, and Res should have kind
This would previously panic since it wouldn't get a kind, and the meta
parameters would overwrite the defaults, so it would block because limit
didn't have the default of +inf.

The removal of the SetKind was my fault in:

b8ff6938df

It's funny because it ends in `bad`. Guess I should have checked that!
2017-09-06 13:44:21 -04:00
James Shubin
653c76709a test: Fix another intermittent failure
Some of the tests had very precise timeouts, which weren't very
important. Here's another one that timed out early.
2017-09-04 16:39:01 -04:00
Juan Luis de Sousa-Valadas Castaño
83cc1bab38 vagrant: Fix PATH
gometalinter failed because it's not in $PATH
2017-09-04 22:08:59 +02:00
James Shubin
6c8588c019 test: Increase timeouts because travis is slow
Should hopefully prevent some intermittent failures.
2017-09-04 13:02:05 -04:00
Ismael Puerto
5b00ed2fb2 vagrant: Change box to F26
F26 provides Go 1.8
2017-09-01 22:21:39 +02:00
Juan-Luis de Sousa-Valadas Castaño
9f66962bfb docs: Change go required version to 1.8 2017-08-31 23:56:16 +02:00
James Shubin
0edba74091 etcd: Bump to version 3.2.6 and update all the grpc deps
Note: When go-grpc-prometheus was in the main $gopath (even at this
version) and everyone else was where they always were in vendor/ this
didn't build! It gave errors like:

	have SendHeader("github.com/purpleidea/mgmt/vendor/google.golang.org/grpc/metadata".MD) error
	want SendHeader("google.golang.org/grpc/metadata".MD) error

and I got frustrated. Putting it "next" to the other vendored deps seems
to have fixed this. Where are the golang docs that explain this
phenomenon?

This also requires golang 1.8+ as that is a requirement for etcd. It's
probably a reasonable thing for us too.

Note the older versions of etcd had some bugs with the concurrency
package and other things, so this is a necessary bump.
2017-08-30 14:16:02 -04:00
Dennis Kliban
1003b49dd9 resources: Add validation for Msg Priority field
This adds validation that ensures that the Msg Priority field is one of the following values:
"Emerg", "Alert", "Crit", "Err", "Warning", "Notice", "Info", "Debug".
2017-08-20 12:37:39 +00:00
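
A minimal Go sketch of the validation described above; the `MsgRes` type and its `Priority` field are stand-ins rather than the actual mgmt types, but the allowed values are the ones listed in the commit:

```go
package main

import "fmt"

// MsgRes is a hypothetical stand-in for the msg resource; only the
// Priority field matters for this sketch.
type MsgRes struct {
	Priority string
}

// Validate checks that Priority is one of the allowed syslog-style levels.
func (obj *MsgRes) Validate() error {
	switch obj.Priority {
	case "Emerg", "Alert", "Crit", "Err", "Warning", "Notice", "Info", "Debug":
		return nil
	default:
		return fmt.Errorf("invalid Priority: %s", obj.Priority)
	}
}

func main() {
	fmt.Println((&MsgRes{Priority: "Notice"}).Validate())  // <nil>
	fmt.Println((&MsgRes{Priority: "Verbose"}).Validate()) // invalid Priority: Verbose
}
```
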
James Shubin
884ba54f96 resources: Include default MetaParams so Validate will pass in tests 2017-08-18 19:52:02 -04:00
Dennis Kliban
cf2325a2da vagrant: Increase amount of RAM allocated to boxes backed by libvirt 2017-08-07 13:55:21 -04:00
AdnanLFC
db6972638d pgraph: test: Added tests for DeleteEdge 2017-07-28 02:02:22 +02:00
James Shubin
74e04e81d5 travis: Update to golang 1.8 as the default
Since the release of Fedora 26 with golang 1.8.1, this is a fine
default.
2017-07-19 12:15:54 -04:00
James Shubin
7c5d7365c7 readme: Add new recording 2017-06-29 13:14:25 -04:00
James Shubin
0dadf3d78a resources: Add NewNamedResource helper
This makes the common pattern of NewResource followed by SetName easier. It also
makes it less likely for you to forget to use SetName.
2017-06-17 18:09:49 -04:00
James Shubin
e341256627 resources: Add a utility to map from struct fields
For GAPI front ends that want to know what fields they can use and which
they map to, these two functions can be used.
2017-06-17 11:49:30 -04:00
James Shubin
5a3bd3ca67 hcl: Consistent formatting
Nit picks.
2017-06-16 23:01:46 -04:00
ChrisMcKenzie
8102e0a468 hcl: Added hil string interpolation to hcl frontend 2017-06-15 22:53:55 -07:00
ChrisMcKenzie
7d55179727 hcl: Removed edge object in favor of depends_on field in resource 2017-06-12 10:44:13 -07:00
ChrisMcKenzie
bc1a1d1818 hcl: Added basic hcl frontend 2017-06-09 10:31:34 -07:00
James Shubin
a8bbb22fe8 resources: Fix golint issues
Including a trick to get the golinter to allow our compact code!
2017-06-08 04:38:25 -04:00
James Shubin
6b489f71a1 remote: Add a Ready method to know when startup is finished
Previously, there was an extremely rare race where we would startup,
kick off the Run method in a goroutine, and then run Exit before Run got
very far in its execution. If Run ran some early sections of its code
_after_ we had Exited, we would trigger a panic due to the converger UID
being unregistered.

This patch blocks Exit from progressing until Run has started and
finished running. It also adds a Ready method so that you can monitor
this signal yourself if you'd like to add the necessary wait to your
code.
2017-06-08 03:55:03 -04:00
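
A rough sketch of the start/ready/exit coordination this commit describes, using a channel closed when Run starts as the Ready signal, and a WaitGroup so that Exit blocks until Run has both started and finished. The Run/Ready/Exit names follow the commit message; the `Runner` type and the rest of the wiring are an illustrative assumption, not the actual remote package code.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type Runner struct {
	ready chan struct{} // closed once Run has started
	wg    sync.WaitGroup
}

func NewRunner() *Runner {
	return &Runner{ready: make(chan struct{})}
}

// Run does the real work; it signals readiness as soon as it has started.
func (r *Runner) Run() {
	r.wg.Add(1)
	defer r.wg.Done()
	close(r.ready)                     // anyone waiting on Ready() is now unblocked
	time.Sleep(100 * time.Millisecond) // stand-in for the actual work
}

// Ready returns a channel that closes when Run has started.
func (r *Runner) Ready() <-chan struct{} { return r.ready }

// Exit waits until Run has started *and* finished before returning, which
// avoids the race of exiting before Run got very far.
func (r *Runner) Exit() {
	<-r.ready
	r.wg.Wait()
}

func main() {
	r := NewRunner()
	go r.Run()
	<-r.Ready()
	r.Exit()
	fmt.Println("shut down cleanly")
}
```
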
James Shubin
f1db088af4 test: Don't be noisy when running cd during testing 2017-06-08 01:05:58 -04:00
James Shubin
6fe12b3fb5 resources: Compare grouped resources properly
When comparing resources, we have to recursively compare grouped
resources as well! Now fixed.
2017-06-08 01:05:58 -04:00
James Shubin
dacbf9b68d resources: Add resource sorting and clean tests
Resource sorting is needed for comparing resource groups.
2017-06-08 01:05:58 -04:00
James Shubin
9f5057eac7 resources: Do not panic on autogrouped graph switches
Graph changes from autogrouped -> not autogrouped or vice versa cause a
panic (or I assume a leak) because we compared the auto grouped graph to
the ungrouped one, which would cause an Exit on an unstarted Vertex.
This includes a test that seems to reliably reproduce the issue.
2017-06-08 01:05:58 -04:00
James Shubin
525cd54921 pgraph: Improve testing and refactor out some test utilities 2017-06-07 07:13:12 -04:00
James Shubin
7ac94bbf5f resources: Panic if attempting to register a duplicate resource
Don't silently let this overwrite pass. It would mean a mistake.
2017-06-07 03:15:06 -04:00
James Shubin
b8ff6938df resources: Unify resource creation and kind setting
This removes the duplication of the kind string and cleans up things for
resource creation.
2017-06-07 03:07:02 -04:00
James Shubin
2f6c77fba2 misc: Update my tag script to deal with large releases 2017-06-03 03:54:49 -04:00
James Shubin
28a6430778 test: Add gometalinter to our test suite
Add a bunch of new linters to our tests! We can uncomment each sub
linter as we fix up the few remaining issues.
2017-06-03 02:04:10 -04:00
James Shubin
6e4157da35 test: Remove debugging echo from go vet test
I accidentally left it in which totally defeats the point of tests!
2017-06-03 01:34:02 -04:00
James Shubin
4f420dde05 etcd: Wait for server to start before continuing
I think there was a rare race where we would make use of the etcd server
before it had fully started up. I only ever saw this occur on travis,
and with this fix hopefully we'll never see it again.

It is worth mentioning that much of my etcd code and the lib Run()
function could use a solid cleaning.
2017-06-03 01:00:35 -04:00
James Shubin
d9601471df etcd: Small cleanup of the package
Split things into multiple files, and fix up some doc formatting.
2017-06-03 00:34:58 -04:00
James Shubin
9941a97e37 resources: pkg: Add a simple test based on internal logic
We expect the following to stay true. This has always been a bit weird
for me to either remember or expect, so I added a test for my sanity.
2017-06-03 00:15:30 -04:00
James Shubin
0a64b08669 resources: autoedges: Process in a deterministic order
The order you loop through maps in isn't necessarily stable, so make sure
you sort everything before you go through it.
2017-06-02 22:29:42 -04:00
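
The underlying Go idiom: map iteration order is randomized, so collect and sort the keys first. A minimal, self-contained example (not the actual autoedge code):

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	kinds := map[string][]string{
		"pkg":  {"apache2", "mariadb"},
		"file": {"/etc/motd"},
		"svc":  {"sshd"},
	}

	// Map iteration order is not stable in Go, so sort the keys before
	// looping if the processing order needs to be deterministic.
	keys := make([]string, 0, len(kinds))
	for k := range kinds {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	for _, k := range keys {
		fmt.Println(k, kinds[k])
	}
}
```
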
James Shubin
4d9d0d4548 resources: Improve AutoEdge API and pkg breakage
I previously broke the pkg auto edges because the package list wasn't
available by the time it was called. This fixes the pkg resource so that
it gets the necessary list of packages when needed. Since this means
that a possible failure could happen, we also update the AutoEdges API
to support errors. Errors can only be generated at AutoEdge struct
creation, once the struct has been returned (right before modification
of the graph structure) there is no possibility to return any errors.

It's important to remember that the AutoEdges stuff gets called because
the Init of each resource, so make sure it doesn't depend on anything
that happens there or that gets cached as a result of Init.

This is all much nicer now and has a test too :)
2017-06-02 22:15:28 -04:00
James Shubin
5f6c8545c6 resources: Replace stored pgraph with mgraph and clean up hacks
Now that we're using our meta wrapper graph struct instead of the
pgraph, we can re-implement our SetValue hacks in terms of struct fields
and the implementation is now cleaner.
2017-06-02 18:50:23 -04:00
James Shubin
ddc335d65a resources: Reorganize package and split into multiple files
This should hopefully make finding and changing code easier.
2017-06-02 18:08:47 -04:00
James Shubin
9cbaa892d3 gapi: Allow the GAPI implementer to specify fast and exit
This allows the implementer of the GAPI to specify three parameters for
every Next message sent on the channel. The Fast parameter tells the
agent if it should do the pause quickly or if it should finish the
sequence. A quick pause means that it will cause a pause immediately
after the currently running resources finish, whereas a slow (default)
pause will allow the wave of execution to finish. The latter is usually
preferred in scenarios with complex graphs where we want each
step to complete. The Exit parameter tells the engine to exit, and the
Err parameter tells the engine that an error occurred.
2017-06-02 04:03:10 -04:00
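
One plausible shape for the Next message described above, with the three parameters sent over a channel. The Fast, Exit and Err names follow the commit text, but the struct layout and channel usage here are an assumption, not the real gapi package definition.

```go
package main

import (
	"errors"
	"fmt"
)

// Next is a hypothetical message a GAPI implementation sends to the engine
// for every new graph generation event.
type Next struct {
	Fast bool  // pause quickly instead of letting the execution wave finish
	Exit bool  // ask the engine to shut down
	Err  error // report that graph generation failed
}

func main() {
	ch := make(chan Next, 3)
	ch <- Next{}                                              // normal (slow) pause, then a new graph
	ch <- Next{Fast: true}                                    // pause as soon as running resources finish
	ch <- Next{Exit: true, Err: errors.New("compile failed")} // exit with an error
	close(ch)

	for n := range ch {
		fmt.Printf("fast=%t exit=%t err=%v\n", n.Fast, n.Exit, n.Err)
	}
}
```
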
James Shubin
9531465410 test: Make sure our examples build
Since there are occasional API changes, I'd like to at least remember to
keep the examples building, so we now have a test to remind us!
2017-06-02 03:32:53 -04:00
James Shubin
c35916fad1 resources: Rename the Data struct to ResData to avoid ambiguity
There's a similarly named gapi.Data struct which we could also rename.
2017-06-02 02:53:53 -04:00
James Shubin
bf476a058e resources: exec: Add send/recv for exec output, stdout and stderr
This adds send/recv output parameters from exec for stdout, stderr, and
output which is a combination of those two. This also includes a few
tests, and a working example too!

Gone are the `some_command > some_file` days of puppet.
2017-06-02 02:52:03 -04:00
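
The capture side of this is plain os/exec: run the command with separate buffers for stdout and stderr, plus a combined one, and hand those strings to whatever wants to receive them. A small standalone sketch, not mgmt's actual exec resource:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"os/exec"
)

func main() {
	var stdout, stderr, output bytes.Buffer

	cmd := exec.Command("sh", "-c", "echo hello; echo oops 1>&2")
	// Each stream goes to its own buffer, and both also feed a combined
	// "output" buffer, mirroring the three send/recv values in the commit.
	cmd.Stdout = io.MultiWriter(&stdout, &output)
	cmd.Stderr = io.MultiWriter(&stderr, &output)

	if err := cmd.Run(); err != nil {
		fmt.Println("run error:", err)
	}
	fmt.Printf("stdout: %q\n", stdout.String())
	fmt.Printf("stderr: %q\n", stderr.String())
	fmt.Printf("output: %q\n", output.String())
}
```
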
James Shubin
d4e815a4cb resources: Clean up converger and make it easier for tests
This cleans up the resource converger code slightly and makes it easier
to write resource specific test cases.
2017-06-02 01:15:25 -04:00
James Shubin
0545c4167b pgraph: Remove NewVertex and NewEdge methods and fix examples
Since the pgraph graph can store arbitrary pointers, we don't need a
special method to create the vertices or edges as long as they implement
the String() string method. This cleans up the library and some of the
examples which I let rot previously.
2017-05-31 18:04:58 -04:00
James Shubin
6838dd02c0 resources: graph: Add partial implementation of a graph resource
This is something I've wanted to do for a while, but for the reasons
mentioned in the comments, I've been unable to complete yet. I figured
I'd at least merge what does exist so far in case someone else would
like to pick this up. It's a bit of a brain hurdle / monster, because
the tricky part is refactoring the core engine so that this fits in
nicely. Perhaps someone will have more time and/or less tunnel vision
than I to either merge something or sketch out some ideas on the path
forwards. I think it's a useful goal because if recursive resources are
possible, it could force the core engine into a more elegant design.

Happy hacking!
2017-05-31 17:27:34 -04:00
James Shubin
14c2fd1edd resources: Add proper edge compare method
Might as well do this cleanly in one place.
2017-05-31 17:27:34 -04:00
James Shubin
6e503cc79b resources: Simplify the resource Compare functions
This removes one level of indentation and simplifies the code.
2017-05-31 17:27:34 -04:00
James Shubin
bd4563b699 pgraph: Add sort function to sort a list of vertices
With tests too!
2017-05-31 17:27:34 -04:00
James Shubin
458e115490 pgraph: Add logic functions for adding subgraphs
These are helper functions to merge in existing graphs into a main graph
with or without adding an edge relationship between a vertex and the new
graph. These are particularly useful if using mgmt as a lib to break
apart units of work into functions that create sub graphs, which are
then added to the main graph when they're returned.
2017-05-31 17:27:25 -04:00
James Shubin
51369adad1 pgraph: Add a GraphCmp method
This could probably be more efficient using a known algorithm, and it
could definitely use more tests, but it is good enough for now.
2017-05-31 16:45:39 -04:00
James Shubin
f65c5fb147 resources: nspawn: Fix small style issues 2017-05-31 15:36:15 -04:00
James Shubin
4150ae7307 pgraph: Replace edge struct with interface
This further cleans up the pgraph lib to be more generic.
2017-05-31 15:36:15 -04:00
James Shubin
a87288d519 pgraph, resources: Major refactoring continued
There was simply some technical debt I needed to kill off. Sorry for not
splitting this up into more patches.
2017-05-31 15:36:14 -04:00
James Shubin
3cf9639e99 pgraph, resources: Major refactor to remove pgraph to resource dep
This is the mechanical port of the remaining bits. Next to clean it up a
bit.
2017-05-29 15:43:50 -04:00
James Shubin
4490c3ed1a resources: Map to semaphores doesn't need to be a pointer
A map in golang is a reference type.
2017-05-29 15:43:50 -04:00
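
The point of this cleanup in a few lines of Go: a map value already refers to shared underlying storage, so passing the map directly lets the callee mutate entries without a pointer. A tiny demonstration:

```go
package main

import "fmt"

// addSemaphore mutates the caller's map even though it receives the map
// by value: the map header refers to shared underlying storage.
func addSemaphore(semas map[string]int, id string, count int) {
	semas[id] = count
}

func main() {
	semas := map[string]int{}
	addSemaphore(semas, "db", 1)
	fmt.Println(semas) // map[db:1]
}
```
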
James Shubin
fbcb562781 pgraph: Move the timestamp storage into the resource 2017-05-29 15:43:50 -04:00
James Shubin
b1e035f96a pgraph: Move get/set state methods out to resource package 2017-05-29 15:43:50 -04:00
James Shubin
11c3a26c23 pgraph: Move the AutoEdges mechanism into the resource package
Remove the pgraph->resource dependency.
2017-05-29 15:43:50 -04:00
James Shubin
1fbe72b52d test: Run go vet across whole packages not individual files
The golang tooling is quite deficient, in that it makes it quite
difficult to get the tools to do_the_right_thing, without ample wrapping
of bash scripting. Go vet was finding issues because it didn't have the
full context available. Hopefully this package level context is
sufficient for now. It still lacks inter-package context though.
2017-05-29 15:43:50 -04:00
James Shubin
f4bb066737 test: Run go vet with -source flag in newer releases
This should hopefully eliminate some false positives.
https://github.com/golang/go/issues/20514
2017-05-29 15:43:50 -04:00
Julien Pivotto
aaac9cbeeb vagrant: Setup Packagekit in the box
Without packagekit the 'pkg' resources cannot be used

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-05-17 09:54:23 +02:00
Julien Pivotto
0e68ff6923 vagrant: Install make in the Vagrant box
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-05-17 06:41:43 +02:00
James Shubin
1c59712cbf pgraph: Move AssociateData function out of the package
This removes another dependency on the resource package.
2017-05-15 10:19:46 -04:00
James Shubin
c2cb1c9168 pgraph: Move GraphMetas function out of package
This removes a dependency on the resources package which wasn't
necessary.
2017-05-15 10:06:31 -04:00
James Shubin
cc8e2e40dd pgraph: Update graph API to remove Get prefix and add Adjacency
Simple cleanups.
2017-05-15 09:58:10 -04:00
James Shubin
e67d97d9da pgraph: Replace CompareMatch with VertexMatchFn
This removes a reference to the resources package in pgraph.
2017-05-13 13:55:42 -04:00
James Shubin
d74c2115fd pgraph: Untangle the semaphore code from the pgraph implementation
This re-implements the semaphore code on top of the graph kv store.
2017-05-13 13:28:41 -04:00
James Shubin
70e7ee2d46 pgraph: Remove use of Flags struct in favour of Value API
One small step to completely cleaning up the pgraph package so that we
can eventually fix the code that would otherwise create a cycle!
2017-05-13 13:28:41 -04:00
James Shubin
d11854f4e8 pgraph: Clean up pgraph module to get ready for clean lib status
The graph of dependencies in golang is a DAG, and as such doesn't allow
cycles. Clean up this lib so that it eventually doesn't import our
resources module or anything else which might want to import it.

This patch makes adjacency private, and adds a generalized key store to
the graph struct.
2017-05-13 13:28:41 -04:00
James Shubin
4bb553e015 pgraph: Use the correct vertex handle to prevent a race
A small typo was made that is now fixed! These need to get caught with golint!
2017-05-13 10:08:38 -04:00
James Shubin
0af9af44e5 etcd, resources, world: Add World API for shared keys
It's up to the end user to decide who is writing and/or overwriting
them.

It could also be useful to reimplement (refactor) some of the existing
World APIs in terms of these primitives.
2017-04-17 07:03:29 -04:00
James Shubin
3a0d73f740 readme: Add new links 2017-04-13 04:35:59 -04:00
James Shubin
9b9ff2622d resources: Make resource kind and baseuid fields public
This is required if we're going to have out of package resources. In
particular for third party packages, and also for if we decide to split
out each resource into a separate sub package.
2017-04-11 01:52:21 -04:00
James Shubin
a4858be967 lib, gapi: Next method of GAPI should generate first event
This puts the generation of the initial event into the Next method of
the GAPI. If it does not happen, then we will never get a graph. This is
important because this notifies the GAPI when we're actually ready to
try and generate a graph, rather than blocking on the Graph method if we
have a long compile for example.

This is also required for the etcd watch cleanup.
2017-04-10 03:20:58 -04:00
James Shubin
6fd5623b1f gapi: Move separate etcd Watch method into GAPI
This cleans up the API to not have a special case for etcd anymore. In
particular, this also adds the requirement that the GAPI must generate
an event on startup as soon as it is ready to generate a graph.
2017-04-10 03:20:58 -04:00
James Shubin
66d9c7091c lib: examples: Update to most recent API
At some point in the past the API changed. Fixed now.
2017-04-10 03:20:58 -04:00
Mildred Ki'Lya
525a1e8140 yamlgraph: Refactor parsing for dynamic resource registration
Avoid use of the reflect package, and use an extensible list of registered
resource kinds. This also has the benefit of removing the empty VirtRes and
AugeasRes struct types when compiling without libvirt and libaugeas.
2017-03-24 22:38:06 +01:00
James Shubin
64dc47d7e9 misc: Fixup documentation 2017-03-20 17:11:51 -04:00
James Shubin
f3fc7bb91e resources: svc: Add basic support for user services
These are user-specific services and are available on the session bus.
This doesn't use the private user API because
https://github.com/coreos/go-systemd/pull/225 was NACKed.
2017-03-17 10:15:02 -04:00
James Shubin
028ef14cc0 misc: Replace sloppy use of %v with %s 2017-03-16 13:18:36 -04:00
James Shubin
3e001f9a1c main: Update log messages for consistency 2017-03-16 13:14:50 -04:00
Julien Pivotto
33d20ac6d8 prometheus: Add detailed metrics
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-03-16 14:18:46 +01:00
James Shubin
660554cc45 todo: Update the TODO file to be more current
We should really try to remember to patch it with fixes when we do them
:)
2017-03-15 14:57:29 -04:00
James Shubin
a455324e8c examples: Add missing file example
I was using this for testing graph changes but forgot to commit it.
2017-03-13 08:05:49 -04:00
James Shubin
cd5e2e1148 pgraph: Add fast pausing and exiting of graphs
This causes a graph to actually stop processing part way through, even
if there are pokes that want to continue on. This is so that the user
experience of pressing ^C actually causes a shutdown without finishing
the graph execution. It might be preferred to have this be a user
defined setting at some point in the future, such as if the user presses
^C twice. As well, we might want to implement an interrupt API so that
individual resource execution can be asked to bail out early if
requested. This could happen on a third ^C press.
2017-03-13 07:54:03 -04:00
James Shubin
074da4da19 pgraph, resources: Run the resource Setup in parallel
This is a reasonable thing to do at this time.
2017-03-13 07:54:03 -04:00
James Shubin
e4e39d820c pgraph: semaphore: Refactor semaphore size function and test 2017-03-13 07:49:29 -04:00
James Shubin
e5dbb214a2 pgraph: Move the BackPoke to before the semaphores
I can't think of a reason we should grab a semaphore before backpoking.
The semaphore is intended to block around the actual work in CheckApply,
not the dependency resolution of the correct vertex.
2017-03-13 07:49:29 -04:00
James Shubin
91af528ff8 pgraph: Move the quiesce done indicator to avoid deadlock
This avoids a deadlock on resource failure when retry==0. Without this
we would never exit. This adds a test in too!
2017-03-12 13:52:35 -04:00
James Shubin
18c4e39ea3 resources: exit: Misc cleanups
Some of this code hadn't been touched much since early versions of mgmt. Here's a
quick cleanup of some cruft.
2017-03-12 13:21:22 -04:00
James Shubin
bda455ce78 resources: exec: Ignore signals sent to main process
When we send a ^C to the main process, our children see it too! This
puts them in their own process group so that they're not affected.
There's still the matter of properly hooking up the internal exit signal
to a proper shutdown, but that's separate.

This might mean that there should be a case for an interrupt aspect to
the resource API which would allow a second ^C by the engine, to cause a
forceful termination by the resource if that resource supported that.
2017-03-12 13:11:54 -04:00
James Shubin
a07aea1ad3 resources: exec: Clean up command error processing
Show the exit status on error and general cleanups.
2017-03-12 12:44:03 -04:00
James Shubin
18e2dbf144 resources: exec: Remove state checks that are done in the engine
These state checks are now done automatically in the engine, and so they
should be removed to make the code easier to read.
2017-03-12 12:35:03 -04:00
James Shubin
564a07e62e resources: exec: Don't invalidate state on poke
This was some legacy incorrect decision from earlier mgmt.
2017-03-12 12:35:02 -04:00
James Shubin
a358135e41 resources: exec: Remove the pollint parameter
Since we now have a poll metaparameter, we don't need the resource
specific code.
2017-03-12 10:49:26 -04:00
James Shubin
6d9be15035 pgraph: semaphore: Add lock around semaphore map
I forgot about the `concurrent map write` race, but now it's fixed. I
suppose we could probably pre-create all semaphores in the graph at once
before Start, and remove this lock, but that's an optimization for a
later day.
2017-03-11 09:06:18 -05:00
James Shubin
b740e0b78a git: Add more features to tag.sh script
This helps me make releases and probably won't help you, but why not be
transparent about things and tools!
2017-03-11 08:41:56 -05:00
James Shubin
9546949945 git: Improve tagging script 2017-03-11 08:29:53 -05:00
James Shubin
8ff048d055 test: Disable prometheus-3.sh test temporarily
It seems to be failing, and I'm not sure where the regression is, or if
there is a race. Sorry roidelapluie.
2017-03-09 11:46:21 -05:00
James Shubin
95a1c6e7fb pgraph, resources: Discard BackPokes during pause and resume
This prevents some nasty races where a BackPoke could arrive on a paused
vertex either during a resume or pause operation. Previously we might
also have poked an excessive number of resources on resume.

The solution was to discard BackPokes during pause or resume. On pause,
they can be discarded because we've asked the graph to quiesce, and any
further work can be done on resume, and on resume we ignore them because
this should only happen during the unrolling (reverse topological resume
of the graph) and at the end of this the indegree == 0 vertices will
initiate a series of pokes which should deal with any BackPoke that was
possibly discarded.

One other aspect of this which is important: if an indegree == 0 vertex
is poked (Process runs) but it's already in the correct state, it should
still transmit the Poke through itself so that subsequent vertices know
to run. Currently this is done correctly in Process().

I'm a bit ashamed that this wasn't done properly in the engine earlier,
but I suppose that's what comes out of running fancier graphs and really
thinking in detail about what's truly correct. Hopefully I got it right
this time!
2017-03-09 06:35:15 -05:00
James Shubin
0b1a4a0f30 pgraph, resources: Quiesce when pausing or exiting the resource
This prevents a nasty race that can happen in a graph with more than one
resource. If a resource has someone that it can BackPoke, and then
suppose an event comes in. It runs the obj.Event() method (from inside
its Watch loop) and then *before* the resulting Process method can run
it receives a pause event and pauses. Then the parent resource pauses as
well. Finally (it's a race) the Process gets around to running, and
decides it needs to BackPoke. At this point since the parent resource is
paused, it receives the BackPoke at a time when it can't handle
receiving one, and it panics!

As a result, we now track the number of running Process possibilities
via a WaitGroup which gets incremented from the obj.Event() and we don't
finish our pause or exit operations until it has quiesced and our
WaitGroup lets us know via Wait(). Lastly in order to prevent repeated
replays, we detect when we're quiescing and suspend replaying until post
pause. We don't need to save the replay (playback variable) explicitly
because its state remains during pause, and on exit it would get
re-checked anyways.
2017-03-09 02:50:55 -05:00
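
The quiescing idea in miniature: count each in-flight Process possibility on a WaitGroup when the event arrives, and have pause wait on it before proceeding. This is an illustrative pattern under assumed names, not the engine's actual code.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type vertex struct {
	inflight sync.WaitGroup // counts Process runs that may still happen
}

// event records that a Process run is now pending, then kicks it off.
func (v *vertex) event() {
	v.inflight.Add(1) // counted *before* Process can run
	go func() {
		defer v.inflight.Done()
		time.Sleep(50 * time.Millisecond) // stand-in for Process / BackPoke work
	}()
}

// pause must not complete while any Process (which might BackPoke a now
// paused parent) could still be running, so it waits for quiescence first.
func (v *vertex) pause() {
	v.inflight.Wait()
	fmt.Println("quiesced, safe to pause")
}

func main() {
	v := &vertex{}
	v.event()
	v.event()
	v.pause()
}
```
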
James Shubin
22b48e296a resources, yamlgraph: Drop the kind capitalization
This stopped making sense now that we have a resource with two primary
capitals. It was just a silly formatting hack anyways. Welcome kv!
2017-03-09 02:50:55 -05:00
James Shubin
c696ebf53c resources: svc: Add failed state
Services can be in a failed state too.
2017-03-08 19:23:33 -05:00
James Shubin
a0686b7d2b pgraph: graphviz: Update Graphviz lib to quote names properly
This also moves the library to after the graph starts so that the kind
fields will be visible.
2017-03-08 19:23:33 -05:00
James Shubin
8d94be8924 resources: kv: Add new KV resource which sets key value pairs
This is a new resource for setting key value pairs in our global world
database. Currently only etcd is supported. Some of the implications and
possibilities of this resource will become more obvious with future
commits!

You can bother/test this resource with these commands:

ETCDCTL_API=3 etcdctl get "/_mgmt/strings/" --prefix=true
ETCDCTL_API=3 etcdctl put "/_mgmt/strings/KEY/HOSTNAME" 42

Replace the KEY and HOSTNAME variables with the actual values you'd like
to use. The 42 is the value that is set.
2017-03-08 19:23:33 -05:00
James Shubin
e97ac5033f resources: Split util functions into separate file
This also adds errwrap to their implementation.
2017-03-08 19:23:33 -05:00
James Shubin
44771a0049 gapi: Move the World interface into resources
This was necessary to fix some "import cycle" errors I was having when
adding the World api to the resource Data struct.

I think this is a good hint that I need to start splitting up existing
packages into sub files, and cleaning up any inter-package problems too.
2017-03-08 19:23:33 -05:00
James Shubin
32aae8f57a lib, pgraph, resources: Refactor data association API
This should make things cleaner and help avoid as much churn every time
we change a property.
2017-03-07 22:51:11 -05:00
James Shubin
8207e23cd9 lib: Refactor instantiation of world API 2017-03-07 22:51:11 -05:00
James Shubin
a469029698 etcd: Switch to buffered channel to remove duplicates
Since we don't return the actual values and instead only tell about
events (which leaves the `Get` of the value as a second operation) then
we don't have to use a channel with backpressure since all the events
are identical.
2017-03-07 22:51:11 -05:00
James Shubin
203d866643 gapi, etcd: Define and implement a string sharing API for the World
This adds a new set of methods to the World API (for sharing data
throughout the cluster) and adds an etcd backed implementation.
2017-03-07 22:51:11 -05:00
Michael Borden
1488e5ec4d travis: Add go_import_path
This should fix an issue with turning on travis for any mgmt fork.
2017-03-07 14:17:53 -08:00
Michael Borden
af66138a17 misc: Add pacman support 2017-03-03 21:04:29 -08:00
James Shubin
5f060d60a7 test: Avoid matching three X's
This helps my "WIP" detector script avoid false positives. It is a
simple script which helps me find release critical problems.
2017-03-01 22:37:08 -05:00
James Shubin
73ccbb69ea travis: Ensure recent golang versions in test
This ensures travis picks the latest versions. Apparently writing 1.x is
also a valid choice.
2017-03-01 21:45:02 -05:00
James Shubin
be60440b20 readme: Add new blog post about metaparameters
This also documents the recent semaphore work.
2017-03-01 16:18:14 -05:00
James Shubin
837efb78e6 spelling: Fix typos as found by goreportcard 2017-02-28 23:48:34 -05:00
James Shubin
4a62a290d8 pgraph: Clean up tests
This splits the tests into multiple files.
2017-02-28 16:47:16 -05:00
James Shubin
018399cb1f semaphore, pgraph: Add semaphore grouping and tests
If two resources are grouped, then the result should contain the
semaphores of both resources. This is because the user is expecting
(independently) resource A and resource B to have a limiting choke
point. If when combined those choke points aren't preserved, then we
have broken an important promise to the user.
2017-02-28 16:40:53 -05:00
James Shubin
646a576358 remote: Replace builtin semaphore type with common util lib
Refactor code to use the new fancy lib!
2017-02-27 02:57:06 -05:00
James Shubin
d8e19cd79a semaphore: Create a semaphore metaparam
This adds a P/V style semaphore mechanism to the resource graph. This
enables the user to specify a number of "id:count" tags associated with
each resource which will reduce the parallelism of the CheckApply
operation to that maximum count.

This is particularly interesting because (assuming I'm not mistaken) the
implementation is dead-lock free assuming that no individual resource
permanently ever blocks during execution! I don't have a formal proof of
this, but I was able to convince myself on paper that it was the case.

An actual proof that N P/V counting semaphores in a DAG won't ever
dead-lock would be particularly welcome! Hint: the trick is to acquire
them in alphabetical order while respecting the DAG flow. Disclaimer,
this assumes that the lock count is always > 0 of course.
2017-02-27 02:57:06 -05:00
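
A sketch of the deadlock-avoidance trick mentioned above: counting semaphores built on buffered channels, always acquired in sorted (alphabetical) id order and released in reverse. The "id:count" tags come from the commit; everything else here is illustrative and assumes counts > 0, as the commit notes.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// pool maps semaphore ids to buffered channels used as counting semaphores.
var (
	mu   sync.Mutex
	pool = map[string]chan struct{}{}
)

// sema returns the shared semaphore for id, creating it with the given
// count (which must be > 0) on first use.
func sema(id string, count int) chan struct{} {
	mu.Lock()
	defer mu.Unlock()
	if _, ok := pool[id]; !ok {
		pool[id] = make(chan struct{}, count)
	}
	return pool[id]
}

// withSemaphores acquires every "id:count" semaphore in sorted id order
// (the alphabetical ordering is the deadlock-avoidance trick), runs fn,
// and then releases them in reverse order.
func withSemaphores(tags map[string]int, fn func()) {
	ids := make([]string, 0, len(tags))
	for id := range tags {
		ids = append(ids, id)
	}
	sort.Strings(ids)

	chans := make([]chan struct{}, 0, len(ids))
	for _, id := range ids {
		ch := sema(id, tags[id])
		ch <- struct{}{} // P (acquire)
		chans = append(chans, ch)
	}
	defer func() {
		for i := len(chans) - 1; i >= 0; i-- {
			<-chans[i] // V (release)
		}
	}()
	fn()
}

func main() {
	withSemaphores(map[string]int{"db": 1, "web": 3}, func() {
		fmt.Println("CheckApply running under both semaphores")
	})
}
```
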
James Shubin
757cb0cf23 test: Small fixups to t4 and a rename 2017-02-26 20:54:22 -05:00
Julien Pivotto
7d92ab335a prometheus: Add mgmt_pgraph_start_time_seconds metric
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-26 15:28:43 +01:00
James Shubin
46c6d6f656 compiling: Add -i flag to go build to speed up builds
So builds take about 30s on my shitty machine which is pretty long.
Turns out golang caches things in $GOPATH/pkg/ but is not clever enough
to erase things from there that are out of date. As a result, it was
rebuilding everything (including the unchanged dependencies) every
build!

By completely wiping out $GOPATH/pkg/ and then running `go build -i`,
this now takes builds down to about 8 seconds. (After one full build is
finished.)

This is basically the same as running `go install`, but without copying
junk to $GOPATH/bin.

Hopefully the tooling will be smart enough to know when to throw out
stuff in $GOPATH/pkg automatically and avoid this problem entirely.

Is it wrong to send Google a bill for all the extra cpu cycles I've
used? ;)
2017-02-26 04:06:36 -05:00
Julien Pivotto
46260749c1 prometheus: Move the prometheus nil check inside the prometheus function
That pattern will be reused in future metrics.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-26 09:33:34 +01:00
Julien Pivotto
50664fe115 compilation: Make build the default target
It is really strange that when I run make, it does not build mgmt. This
commit makes build the default target, without moving the target, so
that we keep the order of the file as much as we can.

This also removes the confusion for designers who would run "make"
instead of "make art", and whose work would be disrupted when we add a
-- let's say -- make alpharelease command.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-26 09:04:20 +01:00
James Shubin
c480bd94db resources: virt: Remove unnecessary early exit from CheckApply
I don't think this early exit is necessary any more, since the main
CheckApply function really just spawns out to the different sub workers
which all individually check the apply variable.

If I'm wrong, we can revert this. It was @roidelapluie that noticed the
check here to begin with.
2017-02-25 21:29:08 -05:00
James Shubin
79923a939b resources: virt: Catch bad calls to CheckApply
If the engine cheats, we'll know!
2017-02-25 21:28:35 -05:00
James Shubin
327b22113a resources: virt: Don't block exit in callbacks
This prevents us blocking an exit if we close when a callback was about
to run. This is because the callbacks are called from the
EventRunDefaultImpl method, which waits for their return to exit and
release the WaitGroup.

I think we should probably get rid of the obj.wg since the engine is
supposed to guarantee that Close doesn't happen before Watch finishes.
2017-02-25 21:04:36 -05:00
James Shubin
12160ab539 pgraph: Wait for Process routine to exit
Wait for the innerWorker's Process routine to exit, before we exit too.
2017-02-25 21:01:02 -05:00
James Shubin
2462ea0892 pgraph, resources: Wait for innerWorker to exit cleanly
Don't run the Close() method until the innerWorker has exited cleanly.
This is a guarantee which we make to the resources.
2017-02-25 21:00:38 -05:00
James Shubin
8be09eadd4 test: Fix up probable timeout failures due to slow ci
We had *occasional* failures most likely due to slow ci and compounded
by low entropy. We also weren't pointing at the right test!
2017-02-22 23:08:09 -05:00
James Shubin
98bc96c911 golint: Fixup issues found in the report
This also increases the max allowed to 5% -- I'm happy to make this
lower if someone asks.
2017-02-22 22:18:55 -05:00
James Shubin
b0fce6a80d travis: Start building golang 1.8 as well
Require success from 1.7 since we'll probably move to it as the default
within six months.
2017-02-22 22:18:55 -05:00
James Shubin
53b8a21d1e resources: virt: Cleanup cleanly on Close
Don't block accidentally on error!
2017-02-22 22:18:55 -05:00
James Shubin
1346492d72 yamlgraph: Close the recwatcher properly after use
Don't leave it running unnecessarily! This might have contributed to a
block, but it was hard to isolate if this was the cause or if this was
one of many causes.
2017-02-22 18:37:43 -05:00
James Shubin
e5bb8d7992 recwatch: Close the ConfigWatch properly
This improves the shutdown process so that there is no chance of
blocking if the sender runs close without having emptied the channel.
2017-02-22 17:45:16 -05:00
James Shubin
49594b8435 pgraph, resources: Clean up the event system around the resources
This cleans up some of the resource events and also reorganizes the
struct for simplicity. This should hopefully kill off at least one race
which would cause unnecessary blocking!

Yes this patch is a bit yucky, but so was the bug I was fighting with!
2017-02-22 17:45:16 -05:00
James Shubin
3bd37a7906 test: Don't block on graph transitions
Improvements in the engine have uncovered some annoying race conditions
which would cause the engine to block between transitions. This is a
test which catches the most obvious file based ones.

This requires inotify to work in the test environment.
2017-02-22 17:45:16 -05:00
James Shubin
e070a85ae0 lib: Misc cleanups and new log message 2017-02-22 17:45:16 -05:00
James Shubin
c189278e24 recwatch: Unblock from sending on exit
If we receive an exit signal while we are waiting to send a message, we
should still be able to shut down.
2017-02-22 16:41:47 -05:00
James Shubin
2a8606bd98 recwatch: Close cleanly after Watcher finishes
This cleans up the recwatch code to avoid some possible races. While you
were previously able to call Close more than once, this was really
something that would mask bugs, rather than a useful feature.
2017-02-22 16:41:26 -05:00
James Shubin
18ea05c837 pgraph, resources: Add proper start/stop signals
We need to perform some operations in lock step between graph
transitions. This should help with that!
2017-02-21 18:48:27 -05:00
James Shubin
86c3072515 resources: The user should not call Init
Even in tests...
2017-02-21 18:42:07 -05:00
James Shubin
fccf508dde resources, pgraph: Refactor Worker and simplify API
I'm still working on reducing the size of the monster patches that I
land, but I'm exercising the privilege as the initial author. In any
case, this refactors worker into two, and cleans up the passing around
of the processChan. This puts common code into Init and Close.
2017-02-21 18:42:07 -05:00
James Shubin
2da21f90f4 pgraph, resources: Improve Init/Close and Worker status
This should do some rough cleanups around the Init/Close of resources,
and tracking of Worker function status.
2017-02-21 18:42:07 -05:00
James Shubin
bec7f1726f resources: virt: Allow hotplugging
This allows hot (un)plugging of CPU's! It also includes some general
cleanups which were necessary to support this as well as some other
features to the virt resource. Hotunplug requires Fedora 25.

It also comes with a mini shell script to help demo this capability.

Many thanks to pkrempa for his help with the libvirt API!
2017-02-21 18:42:07 -05:00
James Shubin
74dfb9d88d test: Make test status more clear 2017-02-21 18:40:31 -05:00
James Shubin
02dddfc227 test: Fix yamlfmt test
Last chance before we kill this entirely.
2017-02-21 16:16:41 -05:00
Julien Pivotto
545016b38f project: Add $me to AUTHORS
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-21 07:07:41 +01:00
Felix Frank
0ccceaf226 project: Reluctantly add myself as an author
For the record, I currently don't feel that my level of contributions
warrant a spot here, but I'm entering myself upon express invitation
by James =) ...also in anticipation of much more substantial work.
2017-02-20 22:53:30 -05:00
James Shubin
a601115650 test: Fix false negative on go vet
This was my fault, now it is fixed :) It passed locally due to a bug.
2017-02-20 18:12:30 -05:00
Kaushal M
ae6267c906 build: Use GOTAGS for static builds as well 2017-02-20 17:16:03 -05:00
James Shubin
ac142694f5 test: Improve go vet so that it is less noisy 2017-02-20 17:08:48 -05:00
James Shubin
69b0913315 test: Fix tests by hooking up go test properly
The internal golang tests broke when we turned everything into packages.
This resurrects them with the hopes that we'll add more!
2017-02-20 16:40:40 -05:00
Julien Pivotto
421bacd7dc travis: Run apt update before installing dependencies
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-20 22:23:06 +01:00
James Shubin
573a76eedb build: Require golang 1.6 or greater 2017-02-20 15:17:49 -05:00
James Shubin
b7948c7f40 resources: Specify defaults for the MetaParams
When creating new resources, we didn't specify the defaults, which for
the limit metaparam caused invalid resources by default. It would be
nice to change the limit param to have the 1/X (reciprocal) as the
default, although the problem with that is that (1) it is illogical, and
(2) it's not clear if the precision for the common cases is enough.

If someone wants to investigate this further, please do! Zero value
structs are definitely more useful! In any case, we can now specify the
default. It's not entirely obvious to me if this is the best way to do
it, or if there is a superior method.
2017-02-16 21:08:46 -05:00
James Shubin
2647d09b8f resources: file: Don't modify resource in Init
This didn't break anything previously, but technically wasn't correct.
Pure functions are superior in this case!
2017-02-16 21:06:58 -05:00
James Shubin
57e919d7e5 resources: Remove "NewRes" constructors
Remove the New constructors since calling Init should be done by the
engine, and not by the user even when using mgmt as a lib. This is also
the case in tests! It used to be the case that a user might want to call
Init manually, but that is no longer the case!
2017-02-16 21:06:12 -05:00
James Shubin
f456aa1407 resources: file: Small fixups and force additions 2017-02-16 20:46:51 -05:00
Mildred Ki'Lya
d0d62892c8 resources: file: Allow creation of empty directories 2017-02-16 20:41:59 -05:00
James Shubin
a981cfa053 legal: Oh yeah, it is 2017 2017-02-16 01:34:32 -05:00
James Shubin
55290dd1e3 docs: Add missing newline and remove extra one 2017-02-16 01:13:42 -05:00
Julien Pivotto
9c4e255994 documentation: Implement sphinx documentation
Licence removed due to a Read the Docs (or sphinx/recommonmark) bug:
everything after the comment is not rendered.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-16 07:08:44 +01:00
Julien Pivotto
f9c7d5f7bc resources: augeas: comments: Improvements
- Remove obvious statements
- Fix typo

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-15 22:42:42 +01:00
Julien Pivotto
49baea5f6a documentation: Embed command-line syntax in a code block
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-15 15:30:32 -05:00
James Shubin
6209cf3933 resources: file: Use the computed path in our resource
We compute the actual path in Init() and forget to use it everywhere.
2017-02-15 15:25:03 -05:00
Mildred Ki'Lya
d170a523c3 misc: Split OPTS variable of systemd unit
Systemd splits variables when specified like $VAR and not when specified
like ${VAR}. Since OPTS must contain multiple options, we must ensure
systemd will split it.

See also systemd.service(5)
2017-02-15 19:31:44 +01:00
Julien Pivotto
be5040e7a8 prometheus: Remove mgmt_process_start_time_seconds metric
That metric is useless as by default the prometheus golang client
provides the `process_start_time_seconds` metric.

This reverts commit 25e2af7c89.
2017-02-14 22:56:12 +01:00
James Shubin
ecbaa5bfc1 github: Show a friendly message in issues template 2017-02-14 16:45:35 -05:00
Julien Pivotto
25e2af7c89 prometheus: Add mgmt_process_start_time_seconds metric 2017-02-14 22:14:59 +01:00
Julien Pivotto
605688426d resources: file: Do not error on os.Stat in noop mode
Fix #142.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-14 20:54:32 +01:00
James Shubin
0e069f1e75 docs: Update the quick start guide with git clone and vagrant
There was a bit of a catch-22, since go get wouldn't succeed without
some system build libs installed first. We also added a Vagrant
environment.
2017-02-14 12:13:43 -05:00
James Shubin
e9adbf18d3 test: prometheus: Fix up test 2017-02-14 12:10:54 -05:00
Sean Jones
610202097a test: Enable macOS shell testing
* Check and install libvirt with Homebrew

  macOS does not have apt, dnf or yum. Add a check for Homebrew for
  installing libvirt.

* Use platform timeout for tests
    * Add timeout detection to test/util.sh
    * Use $timeout for shell test requiring timeout
2017-02-14 11:59:44 -05:00
Mildred Ki'Lya
8c2c552164 resources: file: Implement file attributes
Add owner, which must be the username or uid of the file owner; group,
which is the group name or gid of the file; and mode, which is the octal
unix file permissions.

Add separate implementation for Go 1.6 and lower.
2017-02-14 11:55:00 -05:00
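
Setting those three attributes in plain Go looks roughly like this (os/user lookups plus os.Chown and os.Chmod). The `chattr` helper and its signature are hypothetical, and the separate pre-1.7 code path the commit mentions is not reproduced here.

```go
package main

import (
	"fmt"
	"os"
	"os/user"
	"strconv"
)

// chattr applies owner (username or uid), group (name or gid) and an octal
// mode string such as "0644" to path. Illustrative only; chown normally
// requires root privileges.
func chattr(path, owner, group, mode string) error {
	uid, err := strconv.Atoi(owner)
	if err != nil { // not numeric, treat it as a username
		u, err := user.Lookup(owner)
		if err != nil {
			return err
		}
		uid, _ = strconv.Atoi(u.Uid)
	}
	gid, err := strconv.Atoi(group)
	if err != nil { // not numeric, treat it as a group name
		g, err := user.LookupGroup(group)
		if err != nil {
			return err
		}
		gid, _ = strconv.Atoi(g.Gid)
	}
	m, err := strconv.ParseUint(mode, 8, 32)
	if err != nil {
		return err
	}
	if err := os.Chown(path, uid, gid); err != nil {
		return err
	}
	return os.Chmod(path, os.FileMode(m))
}

func main() {
	f, _ := os.Create("/tmp/example") // make sure the demo file exists
	f.Close()
	fmt.Println(chattr("/tmp/example", "root", "root", "0644"))
}
```
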
Julien Pivotto
b9976cf693 documentation: prometheus: Improvements
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-13 21:59:05 +01:00
Julien Pivotto
3261c405bd resources: augeas: Make augeas support optional
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-13 07:14:14 +01:00
James Shubin
35d3328e3e etcd: Remove stuttering in package
This is a good first step to cleaning up the package.
2017-02-12 22:51:46 -05:00
James Shubin
e96041d76f resources: augeas: Turn augeas namespace into a constant 2017-02-12 19:46:16 -05:00
Felix Frank
c2034bc0c0 vagrant: Add Vagrantfile and some basic configuration 2017-02-12 19:18:33 -05:00
Julien Pivotto
e8855f7621 prometheus: Implement mgmt_checkapply_total metric
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-12 23:45:47 +01:00
Julien Pivotto
bdb8368e89 resources: augeas: New resource
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-12 23:02:12 +01:00
Julien Pivotto
f160db2032 compilation: virt: Make libvirt support optional
refs #114

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-12 22:09:46 +01:00
Julien Pivotto
de9a32a273 recwatch: Remove watcher on file move
Fix #120

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-11 13:45:32 +01:00
Steve Milner
6ba7422c3b spec: Fix rpmlint warnings
- Switched spaces to tabs
- Added quiet switch to setup
2017-02-10 11:55:58 -05:00
Julien Pivotto
5cbb0ceb80 test: commit: Improve commit message testing
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-10 11:12:26 +01:00
James Shubin
5b29358b37 test: Small nitpicks with messages 2017-02-09 11:16:18 -05:00
Julien Pivotto
90147f3dfb travis: more strict commit messages tests
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-09 16:33:06 +01:00
Julien Pivotto
72873abe05 test: file: test the behaviour of inotify on parent dir moves
This is a test for #124. It is disabled until #124 is fixed, so it can
already be merged.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-09 16:01:09 +01:00
Julien Pivotto
de1810ba68 travis: add a test regarding commit messages
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-09 16:01:09 +01:00
Julien Pivotto
7b7c765d78 prometheus: Add a new test, with --prometheus-listen
Also: rename t9 to prometheus-1

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-09 00:05:19 +01:00
Felix Frank
806d4660cf tests: simplify shell code, skip YAML test if Ruby is too old 2017-02-08 09:29:00 -05:00
Julien Pivotto
5ae5d364bb Prometheus: fix URL to the "Default port allocations"
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-08 13:41:09 +01:00
Julien Pivotto
1af67e72d4 prometheus: Implement basic Prometheus support
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-02-08 12:13:33 +01:00
James Shubin
ed268ad683 pgraph, yamlgraph: Allow specifying notify value from YAML 2017-02-06 16:29:02 -05:00
James Shubin
5bdd2ca02f examples: Update the examples 2017-02-06 16:28:16 -05:00
Daniele Sluijters
eb59861d4d TODO: Point to current file improvement issue 2017-02-06 13:31:26 +01:00
Tom Ritserveldt
427e46a2aa point to the correct puppet-guide path in docs. 2017-02-06 10:35:39 +01:00
James Shubin
68a8649292 resources: Parse YAML infinity specifications correctly
This makes it easier to specify an infinite rate.
2017-02-05 21:01:52 -05:00
James Shubin
2aff8709a5 gapi: Unblock from a waiting send on GAPI close
There was a race condition that would sometimes occur: if we stopped
reading from the gapiChan (on shutdown), but a new message became
available before we managed to close the GAPI, then we would wait
forever to finish the close, because the channel never sent and the
WaitGroup wouldn't let us exit.

This fixes this horrible, horrible race.
2017-02-05 21:01:52 -05:00
James Shubin
62c3add888 lib: Make initial one-shot start pattern more elegant
This should do the same thing, in a more elegant and efficient way.
2017-02-05 19:00:14 -05:00
James Shubin
3ac878db62 recwatch: Do not send to channel if closed.
If we close the recwatcher right after it's opened, we might have
previously tried to send on the now closed channel.
2017-02-05 18:57:08 -05:00
James Shubin
c247cd8fea resources: Don't double close on Running restart
If we use the "retry" metaparam on a Watch, we want to avoid a double
close due to the second Running() signal. This avoids this with a simple
flag.
2017-02-05 18:47:40 -05:00
James Shubin
b6772b7280 examples: etcd: Simplify the etcd examples 2017-02-03 19:52:29 -05:00
James Shubin
807a3df9d1 travis: Limit notifications now that travis cron is enabled
This should hopefully cause less noise!
2017-02-01 08:45:57 -05:00
James Shubin
491d60e267 examples: Have libmgmt work around the lack of rate defaults
The rate limit lib needs a proper default. Until that's the case,
specify this so it doesn't block the resources.
2017-01-27 18:41:09 -05:00
James Shubin
4811eafd67 etcd: Update example to ignore pgp
This makes demos go faster since pgp is wip anyways.
2017-01-27 18:24:47 -05:00
James Shubin
8dedbb9620 examples: Update example to be Validate safe
This is also safer too!
2017-01-27 18:24:05 -05:00
James Shubin
dd8454161f pgraph: Set the false starter value too
We might have left re-used nodes as true, even if they no longer were,
due to graph changes, which would have caused additional pokes.
2017-01-27 17:43:24 -05:00
James Shubin
9421f2cddd resources: Rename GetUIDs to UIDs
This is more in line with the style guide for golang.
2017-01-25 14:51:23 -05:00
James Shubin
d8c4f78ec1 virt: Allow the use of ~ to expand to home directory
This makes examples slightly nicer to commit, since you don't have to
have a hardcoded ~/james/ in their source value. It's also probably a
useful feature for the resource.
2017-01-25 13:06:28 -05:00
James Shubin
54296da647 converger: Remove converger boilerplate from the resources
This simplifies the resource code by now removing all the converger
related material. Happy resource writing!
2017-01-25 11:30:47 -05:00
James Shubin
357102fdb5 converger: Block converging in the engine
This more appropriately blocks converging in the engine, since we are
now 1-1 decoupled from the Watch resource. This simplifies resource
writing, and should be more accurate around small converged timeouts.

We don't block in the Worker routine when we are polling, because we
expect to get constant poll events, and we can instead be more careful
about these by looking at CheckApply results.

If we can do this for all resources in the future, it would be
excellent!
2017-01-25 11:06:02 -05:00
James Shubin
7e15a9e181 pgraph: Add debug messages
These two messages in particular make graph analysis easier.
2017-01-25 09:52:34 -05:00
James Shubin
12e0b2d6f7 pgraph: Parallelize the BackPoke facility
I can't guarantee this has a significant effect, but it's likely to add
some efficiency when sending multiple BackPokes at the same time, so
that erroneous ones can be cancelled out more easily.
2017-01-25 09:13:59 -05:00
James Shubin
11b40bf32f resources: Update state checks
The mgmt graph depends on state tracking to eliminate redundant pokes.
With the Watch loop now able to produce events quickly, it should no
longer play a part in determining the vertex state. This simplifies the
resource API as well!
2017-01-25 09:13:59 -05:00
James Shubin
8d2b53373f pgraph: Remove unnecessary indentation in Process code path
We can indent the simple BackPoke code path instead!
2017-01-25 09:13:59 -05:00
James Shubin
9ecc49e592 resources: Force a sane default for zero value limiting
The default UnmarshalYAML on *BaseRes doesn't work properly at the
moment, so hack in a default so that we don't need to specify one if the
MetaParams struct isn't specified. The problem is that if there isn't a
meta value added, its UnmarshalYAML doesn't get a chance to run.
2017-01-25 09:13:59 -05:00
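
A common yaml.v2 pattern for supplying defaults inside a custom UnmarshalYAML, shown as a sketch: unmarshal into an alias type that has been pre-filled. The MetaParams fields here are stand-ins, and as the commit notes, this method only runs when a meta block is actually present in the YAML.

```go
package main

import (
	"fmt"
	"math"

	yaml "gopkg.in/yaml.v2"
)

// MetaParams is a stand-in for the real metaparam struct.
type MetaParams struct {
	Limit float64 `yaml:"limit"`
	Burst int     `yaml:"burst"`
}

// UnmarshalYAML fills in defaults first, then overlays whatever the YAML
// actually specifies.
func (obj *MetaParams) UnmarshalYAML(unmarshal func(interface{}) error) error {
	type rawMetaParams MetaParams // alias to avoid infinite recursion
	raw := rawMetaParams{
		Limit: math.Inf(1), // +inf, i.e. unlimited, per the commit's discussion
		Burst: 0,
	}
	if err := unmarshal(&raw); err != nil {
		return err
	}
	*obj = MetaParams(raw)
	return nil
}

func main() {
	var m MetaParams
	if err := yaml.Unmarshal([]byte("burst: 5\n"), &m); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", m) // Limit keeps its +Inf default, Burst comes from the YAML
}
```
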
James Shubin
4f34f7083b resources: rate limiting: Implement resource rate limiting
This adds rate limiting with the limit and burst meta parameters. The
limits apply to how often the Process check is called. As a result, it
might get called more often than there are Watch events due to possible
Poke/BackPoke events.

This system might need to get rethought in the future depending on its
usefulness.
2017-01-25 09:13:59 -05:00
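
The limit and burst semantics map naturally onto the standard golang.org/x/time/rate package; whether the engine uses exactly this library isn't shown here, so treat this as an illustrative sketch of throttling Process calls.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// limit = 2 events per second, burst = 3: up to three Process calls can
	// go through immediately, then they are spaced out at the limit rate.
	limiter := rate.NewLimiter(rate.Limit(2), 3)

	start := time.Now()
	for i := 0; i < 6; i++ {
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		fmt.Printf("process #%d at %v\n", i, time.Since(start).Round(10*time.Millisecond))
	}
}
```
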
James Shubin
2a6df875ec resources: Improve composition of Validate API in resources
This now appropriately calls the Base method.
2017-01-22 06:00:37 -05:00
James Shubin
51c83116a2 resources: Overhaul legacy code around the resource API
This patch makes a number of changes in the engine surrounding the
resource API. In particular:

* Cleanup of send/read event.
* Cleanup of DoSend (now Event) in the Watch method.
* Events are now more consistently pointers.
* Exiting within Watch is now done in a single place.
* Multiple incoming events will be combined into a single action.
* Events in flight during an action are played back after CheckApply.
* Addition of Close method to API

This gets things ready for rate limiting and semaphore metaparams!
2017-01-22 05:59:15 -05:00
James Shubin
74435aac76 docs: Small fixes to the README 2017-01-22 05:47:42 -05:00
goberghen
5dfdb5b5f9 docs: Fix headers formating 2017-01-22 16:05:27 +06:00
James Shubin
ac892a3f3d docs: Add missing newline 2017-01-16 11:15:59 -05:00
James Shubin
1a2e99f559 docs: Add new blog post link 2017-01-16 11:14:19 -05:00
James Shubin
e97bba0524 docs: Add an introductory paragraph to the quick start guide 2017-01-16 11:12:07 -05:00
James Shubin
0538f0c524 docs: Improve README 2017-01-16 11:01:27 -05:00
James Shubin
fc3e35868d docs: Rename puppet guide and small cleanups 2017-01-16 10:27:17 -05:00
James Shubin
f1e0cfea1c docs: Split out a separate quick start guide 2017-01-16 10:27:17 -05:00
James Shubin
56efef69ba resources: Officially add Validate method
This officially adds the Validate method to the resource API, and also
cleans up the ordering in existing resources.
2017-01-09 05:10:26 -05:00
James Shubin
668ec8a248 docs: Add missing headings to table of contents 2017-01-09 04:35:48 -05:00
James Shubin
60912bd01c resources: Add a Default method to the resource API
This provides sensible defaults for when they're not the zero value.
2017-01-09 04:35:48 -05:00
Daniel P. Berrange
0b416e44f8 resources: virt: convert to use github.com/libvirt/libvirt-go
Convert the code to use the new libvirt Go bindings.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
2017-01-05 04:55:51 -05:00
James Shubin
ecc4aa09d3 resources: timer: Polish the timer resource
We should probably use this more correct data type which now matches
what is used by the Poll metaparam.
2016-12-24 00:51:39 -05:00
James Shubin
b921aabbed resources: Add poll metaparameter
This allows a resource to use polling instead of the event based
mechanism. This isn't recommended, but it could be useful, and it was
certainly fun to code!
2016-12-24 00:51:39 -05:00
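
Poll-based watching boils down to a ticker loop that emits an event every interval instead of reacting to external notifications. A minimal sketch of that shape, with assumed names, not the actual Worker code:

```go
package main

import (
	"fmt"
	"time"
)

// pollWatch sends an event every interval until stop is closed, which is
// the essence of a poll metaparam replacing an event-based Watch.
func pollWatch(interval time.Duration, events chan<- struct{}, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			events <- struct{}{}
		case <-stop:
			return
		}
	}
}

func main() {
	events := make(chan struct{})
	stop := make(chan struct{})
	go pollWatch(200*time.Millisecond, events, stop)

	for i := 0; i < 3; i++ {
		<-events
		fmt.Println("poll event", i)
	}
	close(stop)
}
```
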
James Shubin
6ad8ac0b6b converger: Wait till the timer exits
Wait till the timer loop exits, otherwise we might race and panic if
ConvergedTimer gets called after we've asked for an exit, but before it
does.
2016-12-24 00:51:39 -05:00
James Shubin
44e7e0e970 resources: virt: Add missing converger call
This was previously left out accidentally I suppose.
2016-12-23 23:33:13 -05:00
James Shubin
45820b4ce3 resources: Rename Converger to ConvergerUID
This is more correct since we need the Converger method to return the
actual converger!
2016-12-23 23:32:05 -05:00
Lars Kulseng
3a098377cb readme: Changed wording of quick start section 2016-12-23 23:29:27 +01:00
James Shubin
35875485ee todo: Make TODO file more current 2016-12-21 03:42:44 -05:00
217 changed files with 16776 additions and 5505 deletions

.github/ISSUE_TEMPLATE.md (vendored, new file, 9 changed lines)

@@ -0,0 +1,9 @@
## Versions:
* mgmt version (eg: `mgmt --version`):
* operating system/distribution (eg: `uname -a`):
* golang version (eg: `go version`):
## Description:

.github/PULL_REQUEST_TEMPLATE.md (vendored, new file, 36 changed lines)

@@ -0,0 +1,36 @@
## Tips:
* commit message titles must be in the form:
```topic: Capitalized message with no trailing period```
or:
```topic, topic2: Capitalized message with no trailing period```
* golang code must be formatted according to the standard, please run:
```
make gofmt # formats the entire project correctly
```
or format a single golang file correctly:
```
gofmt -w yourcode.go
```
* please rebase your patch against current git master:
```
git checkout master
git pull origin master
git checkout your-feature
git rebase master
git push your-remote your-feature
hub pull-request # or submit with the github web ui
```
* after a patch review, please ping @purpleidea so we know to re-review:
```
# make changes based on reviews...
git add -p # add new changes
git commit --amend # combine with existing commit
git push your-remote your-feature -f
# now ping @purpleidea in the github PR since it doesn't notify us automatically
```
## Thanks for contributing to mgmt and welcome to the team!

.gitignore (vendored, 1 changed line)

@@ -2,6 +2,7 @@
.omv/
.ssh/
.vagrant/
.envrc
old/
tmp/
*_stringer.go

.gitmodules (vendored, 9 changed lines)

@@ -13,3 +13,12 @@
[submodule "vendor/github.com/purpleidea/go-systemd"]
path = vendor/github.com/purpleidea/go-systemd
url = https://github.com/purpleidea/go-systemd
[submodule "vendor/honnef.co/go/augeas"]
path = vendor/honnef.co/go/augeas
url = https://github.com/dominikh/go-augeas/
[submodule "vendor/github.com/grpc-ecosystem/go-grpc-prometheus"]
path = vendor/github.com/grpc-ecosystem/go-grpc-prometheus
url = https://github.com/grpc-ecosystem/go-grpc-prometheus
[submodule "vendor/github.com/ugorji/go"]
path = vendor/github.com/ugorji/go
url = https://github.com/ugorji/go

.travis.yml

@@ -1,18 +1,21 @@
language: go
go:
- 1.6
- 1.7
- 1.8.x
- 1.9.x
- tip
go_import_path: github.com/purpleidea/mgmt
sudo: true
dist: trusty
before_install: 'git fetch --unshallow'
before_install:
- sudo apt update
- git fetch --unshallow
install: 'make deps'
script: 'make test'
matrix:
fast_finish: true
allow_failures:
- go: tip
- go: 1.7
- go: 1.9.x
notifications:
irc:
channels:
@@ -25,4 +28,7 @@ notifications:
use_notice: false
skip_join: false
email:
- travis-ci@shubin.ca
recipients:
- travis-ci@shubin.ca
on_failure: change
on_success: change

THANKS

@@ -4,5 +4,7 @@ If you appreciate the work of one of the contributors, thank them a beverage!
For a more exhaustive list please run: git log --format='%aN' | sort -u
This list is sorted alphabetically by first name.
Felix Frank
James Shubin
Julien Pivotto
Paul Morgan

COPYING (141 changed lines)

@@ -1,5 +1,5 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
@@ -7,15 +7,17 @@
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
@@ -24,34 +26,44 @@ them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
@@ -60,7 +72,7 @@ modification follow.
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
@@ -537,45 +549,35 @@ to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
@@ -633,29 +635,40 @@ the "copyright" line and a pointer to where the full notice is found.
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
GNU General Public License for more details.
You should have received a copy of the GNU Affero General Public License
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.


@@ -1,16 +1,16 @@
Mgmt
Copyright (C) 2013-2016+ James Shubin and the project contributors
Copyright (C) 2013-2017+ James Shubin and the project contributors
Written by James Shubin <james@shubin.ca> and the project contributors
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
GNU General Public License for more details.
You should have received a copy of the GNU Affero General Public License
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.

Makefile

@@ -1,18 +1,18 @@
# Mgmt
# Copyright (C) 2013-2016+ James Shubin and the project contributors
# Copyright (C) 2013-2017+ James Shubin and the project contributors
# Written by James Shubin <james@shubin.ca> and the project contributors
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# GNU General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
SHELL = /usr/bin/env bash
@@ -37,6 +37,11 @@ RPM = rpmbuild/RPMS/$(PROGRAM)-$(VERSION)-$(RELEASE).$(ARCH).rpm
USERNAME := $(shell cat ~/.config/copr 2>/dev/null | grep username | awk -F '=' '{print $$2}' | tr -d ' ')
SERVER = 'dl.fedoraproject.org'
REMOTE_PATH = 'pub/alt/$(USERNAME)/$(PROGRAM)'
ifneq ($(GOTAGS),)
BUILD_FLAGS = -tags '$(GOTAGS)'
endif
default: build
#
# art
@@ -105,9 +110,9 @@ $(PROGRAM): main.go
@echo "Building: $(PROGRAM), version: $(SVERSION)..."
ifneq ($(OLDGOLANG),)
@# avoid equals sign in old golang versions eg in: -X foo=bar
time go build -ldflags "-X main.program $(PROGRAM) -X main.version $(SVERSION)" -o $(PROGRAM);
time go build -ldflags "-X main.program $(PROGRAM) -X main.version $(SVERSION)" -o $(PROGRAM) $(BUILD_FLAGS);
else
time go build -ldflags "-X main.program=$(PROGRAM) -X main.version=$(SVERSION)" -o $(PROGRAM);
time go build -i -ldflags "-X main.program=$(PROGRAM) -X main.version=$(SVERSION)" -o $(PROGRAM) $(BUILD_FLAGS);
endif
$(PROGRAM).static: main.go
@@ -115,9 +120,9 @@ $(PROGRAM).static: main.go
go generate
ifneq ($(OLDGOLANG),)
@# avoid equals sign in old golang versions eg in: -X foo=bar
go build -a -installsuffix cgo -tags netgo -ldflags '-extldflags "-static" -X main.program $(PROGRAM) -X main.version $(SVERSION)' -o $(PROGRAM).static;
go build -a -installsuffix cgo -tags netgo -ldflags '-extldflags "-static" -X main.program $(PROGRAM) -X main.version $(SVERSION)' -o $(PROGRAM).static $(BUILD_FLAGS);
else
go build -a -installsuffix cgo -tags netgo -ldflags '-extldflags "-static" -X main.program=$(PROGRAM) -X main.version=$(SVERSION)' -o $(PROGRAM).static;
go build -a -installsuffix cgo -tags netgo -ldflags '-extldflags "-static" -X main.program=$(PROGRAM) -X main.version=$(SVERSION)' -o $(PROGRAM).static $(BUILD_FLAGS);
endif
clean:
@@ -183,7 +188,7 @@ $(SRPM): $(SPEC) $(SOURCE)
#
$(SPEC): rpmbuild/ spec.in
@echo Running templater...
#cat spec.in > $(SPEC)
cat spec.in > $(SPEC)
sed -e s/__PROGRAM__/$(PROGRAM)/g -e s/__VERSION__/$(VERSION)/g -e s/__RELEASE__/$(RELEASE)/g < spec.in > $(SPEC)
# append a changelog to the .spec file
git log --format="* %cd %aN <%aE>%n- (%h) %s%d%n" --date=local | sed -r 's/[0-9]+:[0-9]+:[0-9]+ //' >> $(SPEC)

README.md (134 changed lines)

@@ -2,18 +2,20 @@
[![mgmt!](art/mgmt.png)](art/)
[![Go Report Card](https://goreportcard.com/badge/github.com/purpleidea/mgmt)](https://goreportcard.com/report/github.com/purpleidea/mgmt)
[![Build Status](https://secure.travis-ci.org/purpleidea/mgmt.png?branch=master)](http://travis-ci.org/purpleidea/mgmt)
[![Documentation](https://img.shields.io/docs/markdown.png)](docs/documentation.md)
[![GoDoc](https://godoc.org/github.com/purpleidea/mgmt?status.svg)](https://godoc.org/github.com/purpleidea/mgmt)
[![IRC](https://img.shields.io/irc/%23mgmtconfig.png)](https://webchat.freenode.net/?channels=#mgmtconfig)
[![Jenkins](https://img.shields.io/jenkins/status.png)](https://ci.centos.org/job/purpleidea-mgmt/)
[![COPR](https://img.shields.io/copr/builds.png)](https://copr.fedoraproject.org/coprs/purpleidea/mgmt/)
[![arch](https://img.shields.io/arch/aur.png)](https://aur.archlinux.org/packages/mgmt/)
[![Go Report Card](https://goreportcard.com/badge/github.com/purpleidea/mgmt?style=flat-square)](https://goreportcard.com/report/github.com/purpleidea/mgmt)
[![Build Status](https://img.shields.io/travis/purpleidea/mgmt/master.svg?style=flat-square)](http://travis-ci.org/purpleidea/mgmt)
[![GoDoc](https://img.shields.io/badge/godoc-reference-5272B4.svg?style=flat-square)](https://godoc.org/github.com/purpleidea/mgmt)
[![IRC](https://img.shields.io/badge/irc-%23mgmtconfig-brightgreen.svg?style=flat-square)](https://webchat.freenode.net/?channels=#mgmtconfig)
[![Jenkins](https://img.shields.io/badge/jenkins-status-brightgreen.svg?style=flat-square)](https://ci.centos.org/job/purpleidea-mgmt/)
## Community:
Come join us on IRC in [#mgmtconfig](https://webchat.freenode.net/?channels=#mgmtconfig) on Freenode!
You may like the [#mgmtconfig](https://twitter.com/hashtag/mgmtconfig) hashtag if you're on [Twitter](https://twitter.com/#!/purpleidea).
Come join us in the `mgmt` community!
| Medium | Link |
|---|---|
| IRC | [#mgmtconfig](https://webchat.freenode.net/?channels=#mgmtconfig) on Freenode |
| Twitter | [@mgmtconfig](https://twitter.com/mgmtconfig) & [#mgmtconfig](https://twitter.com/hashtag/mgmtconfig) |
| Mailing list | [mgmtconfig-list@redhat.com](https://www.redhat.com/mailman/listinfo/mgmtconfig-list) |
## Status:
Mgmt is a fairly new project.
@@ -21,35 +23,21 @@ We're working towards being minimally useful for production environments.
We aren't feature complete for what we'd consider a 1.x release yet.
With your help you'll be able to influence our design and get us there sooner!
## Questions:
Please join the [#mgmtconfig](https://webchat.freenode.net/?channels=#mgmtconfig) IRC community!
If you have a well phrased question that might benefit others, consider asking it by sending a patch to the documentation [FAQ](https://github.com/purpleidea/mgmt/blob/master/docs/documentation.md#usage-and-frequently-asked-questions) section. I'll merge your question, and a patch with the answer!
## Quick start:
* Make sure you have golang version 1.6 or greater installed.
* If you do not have a GOPATH yet, create one and export it:
```
mkdir $HOME/gopath
export GOPATH=$HOME/gopath
```
* You might also want to add the GOPATH to your `~/.bashrc` or `~/.profile`.
* For more information you can read the [GOPATH documentation](https://golang.org/cmd/go/#hdr-GOPATH_environment_variable).
* Next download the mgmt code base, and switch to that directory:
```
go get -u github.com/purpleidea/mgmt
cd $GOPATH/src/github.com/purpleidea/mgmt
```
* Get the remaining golang deps with `go get ./...`, or run `make deps` if you're comfortable with how we install them.
* Run `make build` to get a freshly built `mgmt` binary.
* Run `time ./mgmt run --yaml examples/graph0.yaml --converged-timeout=5 --tmp-prefix` to try out a very simple example!
* To run continuously in the default mode of operation, omit the `--converged-timeout` option.
* Have fun hacking on our future technology!
## Examples:
Please look in the [examples/](examples/) folder for more examples!
## Documentation:
Please see: the manually created [documentation.md](docs/documentation.md) (also available as [PDF](https://pdfdoc-purpleidea.rhcloud.com/pdf/https://github.com/purpleidea/mgmt/blob/master/docs/documentation.md)) and the automatically generated [GoDoc documentation](https://godoc.org/github.com/purpleidea/mgmt).
Please read, enjoy and help improve our documentation!
| Documentation | Additional Notes |
|---|---|
| [general documentation](docs/documentation.md) | for everyone |
| [quick start guide](docs/quick-start-guide.md) | for mgmt developers |
| [resource guide](docs/resource-guide.md) | for mgmt developers |
| [godoc API reference](https://godoc.org/github.com/purpleidea/mgmt) | for mgmt developers |
| [prometheus guide](docs/prometheus.md) | for everyone |
| [puppet guide](docs/puppet-guide.md) | for puppet sysadmins |
## Questions:
Please ask in the [community](#community)!
If you have a well phrased question that might benefit others, consider asking it by sending a patch to the documentation [FAQ](https://github.com/purpleidea/mgmt/blob/master/docs/documentation.md#usage-and-frequently-asked-questions) section. I'll merge your question, and a patch with the answer!
## Roadmap:
Please see: [TODO.md](TODO.md) for a list of upcoming work and TODO items.
@@ -61,52 +49,38 @@ Please set the `DEBUG` constant in [main.go](https://github.com/purpleidea/mgmt/
Bonus points if you provide a [shell](https://github.com/purpleidea/mgmt/tree/master/test/shell) or [OMV](https://github.com/purpleidea/mgmt/tree/master/test/omv) reproducible test case.
Feel free to read my article on [debugging golang programs](https://ttboj.wordpress.com/2016/02/15/debugging-golang-programs/).
## Dependencies:
* golang 1.6 or higher (required, available in most distros)
* golang libraries (required, available with `go get`)
```
go get github.com/coreos/etcd/client
go get gopkg.in/yaml.v2
go get gopkg.in/fsnotify.v1
go get github.com/urfave/cli
go get github.com/coreos/go-systemd/dbus
go get github.com/coreos/go-systemd/util
go get github.com/coreos/pkg/capnslog
go get github.com/rgbkrk/libvirt-go
```
* stringer (optional for building), available as a package on some platforms, otherwise via `go get`
```
go get golang.org/x/tools/cmd/stringer
```
* pandoc (optional, for building a pdf of the documentation)
* graphviz (optional, for building a visual representation of the graph)
## Patches:
We'd love to have your patches! Please send them by email, or as a pull request.
## On the web:
* James Shubin; blog: [Next generation configuration mgmt](https://ttboj.wordpress.com/2016/01/18/next-generation-configuration-mgmt/)
* James Shubin; video: [Introductory recording from DevConf.cz 2016](https://www.youtube.com/watch?v=GVhpPF0j-iE&html5=1)
* James Shubin; video: [Introductory recording from CfgMgmtCamp.eu 2016](https://www.youtube.com/watch?v=fNeooSiIRnA&html5=1)
* Julian Dunn; video: [On mgmt at CfgMgmtCamp.eu 2016](https://www.youtube.com/watch?v=kfF9IATUask&t=1949&html5=1)
* Walter Heck; slides: [On mgmt at CfgMgmtCamp.eu 2016](http://www.slideshare.net/olindata/configuration-management-time-for-a-4th-generation/3)
* Marco Marongiu; blog: [On mgmt](http://syslog.me/2016/02/15/leap-or-die/)
* Felix Frank; blog: [From Catalog To Mgmt (on puppet to mgmt "transpiling")](https://ffrank.github.io/features/2016/02/18/from-catalog-to-mgmt/)
* James Shubin; blog: [Automatic edges in mgmt (...and the pkg resource)](https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/)
* James Shubin; blog: [Automatic grouping in mgmt](https://ttboj.wordpress.com/2016/03/30/automatic-grouping-in-mgmt/)
* John Arundel; tweet: [“Puppets days are numbered.”](https://twitter.com/bitfield/status/732157519142002688)
* Felix Frank; blog: [Puppet, Meet Mgmt (on puppet to mgmt internals)](https://ffrank.github.io/features/2016/06/12/puppet,-meet-mgmt/)
* Felix Frank; blog: [Puppet Powered Mgmt (puppet to mgmt tl;dr)](https://ffrank.github.io/features/2016/06/19/puppet-powered-mgmt/)
* James Shubin; blog: [Automatic clustering in mgmt](https://ttboj.wordpress.com/2016/06/20/automatic-clustering-in-mgmt/)
* James Shubin; video: [Recording from CoreOSFest 2016](https://www.youtube.com/watch?v=KVmDCUA42wc&html5=1)
* James Shubin; video: [Recording from DebConf16](http://meetings-archive.debian.net/pub/debian-meetings/2016/debconf16/Next_Generation_Config_Mgmt.webm) ([Slides](https://annex.debconf.org//debconf-share/debconf16/slides/15-next-generation-config-mgmt.pdf))
* Felix Frank; blog: [Edging It All In (puppet and mgmt edges)](https://ffrank.github.io/features/2016/07/12/edging-it-all-in/)
* Felix Frank; blog: [Translating All The Things (puppet to mgmt translation warnings)](https://ffrank.github.io/features/2016/08/19/translating-all-the-things/)
* James Shubin; video: [Recording from systemd.conf 2016](https://www.youtube.com/watch?v=jB992Zb3nH0&html5=1)
* James Shubin; blog: [Remote execution in mgmt](https://ttboj.wordpress.com/2016/10/07/remote-execution-in-mgmt/)
* James Shubin; video: [Recording from High Load Strategy 2016](https://vimeo.com/191493409)
* James Shubin; video: [Recording from NLUUG 2016](https://www.youtube.com/watch?v=MmpwOQAb_SE&html5=1)
* James Shubin; blog: [Send/Recv in mgmt](https://ttboj.wordpress.com/2016/12/07/sendrecv-in-mgmt/)
| Author | Format | Subject |
|---|---|---|
| James Shubin | blog | [Next generation configuration mgmt](https://ttboj.wordpress.com/2016/01/18/next-generation-configuration-mgmt/) |
| James Shubin | video | [Introductory recording from DevConf.cz 2016](https://www.youtube.com/watch?v=GVhpPF0j-iE&html5=1) |
| James Shubin | video | [Introductory recording from CfgMgmtCamp.eu 2016](https://www.youtube.com/watch?v=fNeooSiIRnA&html5=1) |
| Julian Dunn | video | [On mgmt at CfgMgmtCamp.eu 2016](https://www.youtube.com/watch?v=kfF9IATUask&t=1949&html5=1) |
| Walter Heck | slides | [On mgmt at CfgMgmtCamp.eu 2016](http://www.slideshare.net/olindata/configuration-management-time-for-a-4th-generation/3) |
| Marco Marongiu | blog | [On mgmt](http://syslog.me/2016/02/15/leap-or-die/) |
| Felix Frank | blog | [From Catalog To Mgmt (on puppet to mgmt "transpiling")](https://ffrank.github.io/features/2016/02/18/from-catalog-to-mgmt/) |
| James Shubin | blog | [Automatic edges in mgmt (...and the pkg resource)](https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/) |
| James Shubin | blog | [Automatic grouping in mgmt](https://ttboj.wordpress.com/2016/03/30/automatic-grouping-in-mgmt/) |
| John Arundel | tweet | [“Puppet’s days are numbered.”](https://twitter.com/bitfield/status/732157519142002688) |
| Felix Frank | blog | [Puppet, Meet Mgmt (on puppet to mgmt internals)](https://ffrank.github.io/features/2016/06/12/puppet,-meet-mgmt/) |
| Felix Frank | blog | [Puppet Powered Mgmt (puppet to mgmt tl;dr)](https://ffrank.github.io/features/2016/06/19/puppet-powered-mgmt/) |
| James Shubin | blog | [Automatic clustering in mgmt](https://ttboj.wordpress.com/2016/06/20/automatic-clustering-in-mgmt/) |
| James Shubin | video | [Recording from CoreOSFest 2016](https://www.youtube.com/watch?v=KVmDCUA42wc&html5=1) |
| James Shubin | video | [Recording from DebConf16](http://meetings-archive.debian.net/pub/debian-meetings/2016/debconf16/Next_Generation_Config_Mgmt.webm) ([Slides](https://annex.debconf.org//debconf-share/debconf16/slides/15-next-generation-config-mgmt.pdf)) |
| Felix Frank | blog | [Edging It All In (puppet and mgmt edges)](https://ffrank.github.io/features/2016/07/12/edging-it-all-in/) |
| Felix Frank | blog | [Translating All The Things (puppet to mgmt translation warnings)](https://ffrank.github.io/features/2016/08/19/translating-all-the-things/) |
| James Shubin | video | [Recording from systemd.conf 2016](https://www.youtube.com/watch?v=jB992Zb3nH0&html5=1) |
| James Shubin | blog | [Remote execution in mgmt](https://ttboj.wordpress.com/2016/10/07/remote-execution-in-mgmt/) |
| James Shubin | video | [Recording from High Load Strategy 2016](https://vimeo.com/191493409) |
| James Shubin | video | [Recording from NLUUG 2016](https://www.youtube.com/watch?v=MmpwOQAb_SE&html5=1) |
| James Shubin | blog | [Send/Recv in mgmt](https://ttboj.wordpress.com/2016/12/07/sendrecv-in-mgmt/) |
| James Shubin | blog | [Metaparameters in mgmt](https://ttboj.wordpress.com/2017/03/01/metaparameters-in-mgmt/) |
| James Shubin | video | [Recording from Incontro DevOps 2017](https://vimeo.com/212241877) |
| Yves Brissaud | blog | [mgmt aux HumanTalks Grenoble (french)](http://log.winsos.net/2017/04/12/mgmt-aux-human-talks-grenoble.html) |
| James Shubin | video | [Recording from OSDC Berlin 2017](https://www.youtube.com/watch?v=LkEtBVLfygE&html5=1) |
##

TODO.md (16 changed lines)

@@ -1,16 +1,16 @@
# TODO
If you're looking for something to do, look here!
Let us know if you're working on one of the items.
If you'd like something to work on, ping @purpleidea and I'll create an issue
tailored especially for you! Just let me know your approximate golang skill
level and how many hours you'd like to spend on the patch.
## Package resource
- [ ] getfiles support on debian [bug](https://github.com/hughsie/PackageKit/issues/118)
- [ ] directory info on fedora [bug](https://github.com/hughsie/PackageKit/issues/117)
- [ ] dnf blocker [bug](https://github.com/hughsie/PackageKit/issues/110)
- [ ] install signal blocker [bug](https://github.com/hughsie/PackageKit/issues/109)
## File resource [bug](https://github.com/purpleidea/mgmt/issues/13) [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
- [ ] chown/chmod support [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
- [ ] user/group support [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## File resource [bug](https://github.com/purpleidea/mgmt/issues/64) [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
- [ ] recurse limit support [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
- [ ] fanotify support [bug](https://github.com/go-fsnotify/fsnotify/issues/114)
@@ -21,7 +21,6 @@ Let us know if you're working on one of the items.
- [ ] base resource improvements
## Timer resource
- [ ] reset on recompile
- [ ] increment algorithm (linear, exponential, etc...) [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## User/Group resource
@@ -29,7 +28,7 @@ Let us know if you're working on one of the items.
- [ ] automatic edges to file resource [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## Virt (libvirt) resource
- [ ] base resource [bug](https://github.com/purpleidea/mgmt/issues/25)
- [ ] base resource improvements [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## Net (systemd-networkd) resource
- [ ] base resource
@@ -44,7 +43,7 @@ Let us know if you're working on one of the items.
- [ ] base resource [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## Http resource
- [ ] base resource
- [ ] base resource [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## Etcd improvements
- [ ] fix embedded etcd master race
@@ -52,6 +51,9 @@ Let us know if you're working on one of the items.
## Torrent/dht file transfer
- [ ] base plumbing
## GPG/Auth improvements
- [ ] base plumbing
## Language improvements
- [ ] language design
- [ ] lexer/parser

Vagrantfile (vendored, new file, 50 changed lines)

@@ -0,0 +1,50 @@
Vagrant.configure(2) do |config|
config.ssh.forward_agent = true
config.ssh.username = 'vagrant'
config.vm.network "private_network", ip: "192.168.219.2"
config.vm.synced_folder ".", "/vagrant", disabled: true
config.vm.define "mgmt-dev" do |instance|
instance.vm.box = "fedora/26-cloud-base"
end
config.vm.provider "virtualbox" do |v|
v.memory = 1536
v.cpus = 2
end
config.vm.provider "libvirt" do |v|
v.memory = 2048
end
config.vm.provision "file", source: "vagrant/motd", destination: ".motd"
config.vm.provision "shell", inline: "cp ~vagrant/.motd /etc/motd"
config.vm.provision "file", source: "vagrant/mgmt.bashrc", destination: ".mgmt.bashrc"
config.vm.provision "file", source: "~/.gitconfig", destination: ".gitconfig"
# copied from make-deps.sh (with added git)
config.vm.provision "shell", inline: "dnf install -y libvirt-devel golang golang-googlecode-tools-stringer hg git make"
# set up packagekit
config.vm.provision "shell" do |shell|
shell.inline = <<-SCRIPT
dnf install -y PackageKit
systemctl enable packagekit
systemctl start packagekit
SCRIPT
end
# set up vagrant home
script = <<-SCRIPT
grep -q 'mgmt\.bashrc' ~/.bashrc || echo '. ~/.mgmt.bashrc' >>~/.bashrc
. ~/.mgmt.bashrc
go get -u github.com/purpleidea/mgmt
cd ~/gopath/src/github.com/purpleidea/mgmt
make deps
SCRIPT
config.vm.provision "shell" do |shell|
shell.privileged = false
shell.inline = script
end
end

converger/converger.go

@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package converger is a facility for reporting the converged state.
@@ -29,24 +29,24 @@ import (
// TODO: we could make a new function that masks out the state of certain
// UID's, but at the moment the new Timer code has obsoleted the need...
// Converger is the general interface for implementing a convergence watcher
// Converger is the general interface for implementing a convergence watcher.
type Converger interface { // TODO: need a better name
Register() ConvergerUID
IsConverged(ConvergerUID) bool // is the UID converged ?
SetConverged(ConvergerUID, bool) error // set the converged state of the UID
Unregister(ConvergerUID)
Register() UID
IsConverged(UID) bool // is the UID converged ?
SetConverged(UID, bool) error // set the converged state of the UID
Unregister(UID)
Start()
Pause()
Loop(bool)
ConvergedTimer(ConvergerUID) <-chan time.Time
ConvergedTimer(UID) <-chan time.Time
Status() map[uint64]bool
Timeout() int // returns the timeout that this was created with
SetStateFn(func(bool) error) // sets the stateFn
}
// ConvergerUID is the interface resources can use to notify with if converged
// you'll need to use part of the Converger interface to Register initially too
type ConvergerUID interface {
// UID is the interface resources can use to notify with if converged. You'll
// need to use part of the Converger interface to Register initially too.
type UID interface {
ID() uint64 // get Id
Name() string // get a friendly name
SetName(string)
@@ -61,7 +61,7 @@ type ConvergerUID interface {
StopTimer() error
}
// converger is an implementation of the Converger interface
// converger is an implementation of the Converger interface.
type converger struct {
timeout int // must be zero (instant) or greater seconds to run
stateFn func(bool) error // run on converged state changes with state bool
@@ -73,18 +73,19 @@ type converger struct {
status map[uint64]bool
}
// convergerUID is an implementation of the ConvergerUID interface
type convergerUID struct {
// cuid is an implementation of the UID interface.
type cuid struct {
converger Converger
id uint64
name string // user defined, friendly name
mutex sync.Mutex
timer chan struct{}
running bool // is the above timer running?
wg sync.WaitGroup
}
// NewConverger builds a new converger struct
func NewConverger(timeout int, stateFn func(bool) error) *converger {
// NewConverger builds a new converger struct.
func NewConverger(timeout int, stateFn func(bool) error) Converger {
return &converger{
timeout: timeout,
stateFn: stateFn,
@@ -95,13 +96,13 @@ func NewConverger(timeout int, stateFn func(bool) error) *converger {
}
}
// Register assigns a ConvergerUID to the caller
func (obj *converger) Register() ConvergerUID {
// Register assigns a UID to the caller.
func (obj *converger) Register() UID {
obj.mutex.Lock()
defer obj.mutex.Unlock()
obj.lastid++
obj.status[obj.lastid] = false // initialize as not converged
return &convergerUID{
return &cuid{
converger: obj,
id: obj.lastid,
name: fmt.Sprintf("%d", obj.lastid), // some default
@@ -110,28 +111,28 @@ func (obj *converger) Register() ConvergerUID {
}
}
// IsConverged gets the converged status of a uid
func (obj *converger) IsConverged(uid ConvergerUID) bool {
// IsConverged gets the converged status of a uid.
func (obj *converger) IsConverged(uid UID) bool {
if !uid.IsValid() {
panic(fmt.Sprintf("Id of ConvergerUID(%s) is nil!", uid.Name()))
panic(fmt.Sprintf("the ID of UID(%s) is nil", uid.Name()))
}
obj.mutex.RLock()
isConverged, found := obj.status[uid.ID()] // lookup
obj.mutex.RUnlock()
if !found {
panic("Id of ConvergerUID is unregistered!")
panic("the ID of UID is unregistered")
}
return isConverged
}
// SetConverged updates the converger with the converged state of the UID
func (obj *converger) SetConverged(uid ConvergerUID, isConverged bool) error {
// SetConverged updates the converger with the converged state of the UID.
func (obj *converger) SetConverged(uid UID, isConverged bool) error {
if !uid.IsValid() {
return fmt.Errorf("Id of ConvergerUID(%s) is nil!", uid.Name())
return fmt.Errorf("the ID of UID(%s) is nil", uid.Name())
}
obj.mutex.Lock()
if _, found := obj.status[uid.ID()]; !found {
panic("Id of ConvergerUID is unregistered!")
panic("the ID of UID is unregistered")
}
obj.status[uid.ID()] = isConverged // set
obj.mutex.Unlock() // unlock *before* poke or deadlock!
@@ -143,7 +144,7 @@ func (obj *converger) SetConverged(uid ConvergerUID, isConverged bool) error {
return nil
}
// isConverged returns true if *every* registered uid has converged
// isConverged returns true if *every* registered uid has converged.
func (obj *converger) isConverged() bool {
obj.mutex.RLock() // take a read lock
defer obj.mutex.RUnlock()
@@ -155,10 +156,10 @@ func (obj *converger) isConverged() bool {
return true
}
// Unregister dissociates the ConvergedUID from the converged checking
func (obj *converger) Unregister(uid ConvergerUID) {
// Unregister dissociates the ConvergedUID from the converged checking.
func (obj *converger) Unregister(uid UID) {
if !uid.IsValid() {
panic(fmt.Sprintf("Id of ConvergerUID(%s) is nil!", uid.Name()))
panic(fmt.Sprintf("the ID of UID(%s) is nil", uid.Name()))
}
obj.mutex.Lock()
uid.StopTimer() // ignore any errors
@@ -167,30 +168,30 @@ func (obj *converger) Unregister(uid ConvergerUID) {
uid.InvalidateID()
}
// Start causes a Converger object to start or resume running
// Start causes a Converger object to start or resume running.
func (obj *converger) Start() {
obj.control <- true
}
// Pause causes a Converger object to stop running temporarily
// Pause causes a Converger object to stop running temporarily.
func (obj *converger) Pause() { // FIXME: add a sync ACK on pause before return
obj.control <- false
}
// Loop is the main loop for a Converger object; it usually runs in a goroutine
// TODO: we could eventually have each resource tell us as soon as it converges
// and then keep track of the time delays here, to avoid callers needing select
// Loop is the main loop for a Converger object. It usually runs in a goroutine.
// TODO: we could eventually have each resource tell us as soon as it converges,
// and then keep track of the time delays here, to avoid callers needing select.
// NOTE: when we have very short timeouts, if we start before all the resources
// have joined the map, then it might appears as if we converged before we did!
// have joined the map, then it might appear as if we converged before we did!
func (obj *converger) Loop(startPaused bool) {
if obj.control == nil {
panic("Converger not initialized correctly")
panic("converger not initialized correctly")
}
if startPaused { // start paused without racing
select {
case e := <-obj.control:
if !e {
panic("Converger expected true!")
panic("converger expected true")
}
}
}
@@ -198,13 +199,13 @@ func (obj *converger) Loop(startPaused bool) {
select {
case e := <-obj.control: // expecting "false" which means pause!
if e {
panic("Converger expected false!")
panic("converger expected false")
}
// now i'm paused...
select {
case e := <-obj.control:
if !e {
panic("Converger expected true!")
panic("converger expected true")
}
// restart
// kick once to refresh the check...
@@ -243,9 +244,9 @@ func (obj *converger) Loop(startPaused bool) {
}
}
// ConvergedTimer adds a timeout to a select call and blocks until then
// ConvergedTimer adds a timeout to a select call and blocks until then.
// TODO: this means we could eventually have per resource converged timeouts
func (obj *converger) ConvergedTimer(uid ConvergerUID) <-chan time.Time {
func (obj *converger) ConvergedTimer(uid UID) <-chan time.Time {
// be clever: if i'm already converged, this timeout should block which
// avoids unnecessary new signals being sent! this avoids fast loops if
// we have a low timeout, or in particular a timeout == 0
@@ -279,63 +280,65 @@ func (obj *converger) SetStateFn(stateFn func(bool) error) {
obj.stateFn = stateFn
}
// Id returns the unique id of this UID object
func (obj *convergerUID) ID() uint64 {
// ID returns the unique id of this UID object.
func (obj *cuid) ID() uint64 {
return obj.id
}
// Name returns a user defined name for the specific convergerUID.
func (obj *convergerUID) Name() string {
// Name returns a user defined name for the specific cuid.
func (obj *cuid) Name() string {
return obj.name
}
// SetName sets a user defined name for the specific convergerUID.
func (obj *convergerUID) SetName(name string) {
// SetName sets a user defined name for the specific cuid.
func (obj *cuid) SetName(name string) {
obj.name = name
}
// IsValid tells us if the id is valid or has already been destroyed
func (obj *convergerUID) IsValid() bool {
// IsValid tells us if the id is valid or has already been destroyed.
func (obj *cuid) IsValid() bool {
return obj.id != 0 // an id of 0 is invalid
}
// InvalidateID marks the id as no longer valid
func (obj *convergerUID) InvalidateID() {
// InvalidateID marks the id as no longer valid.
func (obj *cuid) InvalidateID() {
obj.id = 0 // an id of 0 is invalid
}
// IsConverged is a helper function to the regular IsConverged method
func (obj *convergerUID) IsConverged() bool {
// IsConverged is a helper function to the regular IsConverged method.
func (obj *cuid) IsConverged() bool {
return obj.converger.IsConverged(obj)
}
// SetConverged is a helper function to the regular SetConverged notification
func (obj *convergerUID) SetConverged(isConverged bool) error {
// SetConverged is a helper function to the regular SetConverged notification.
func (obj *cuid) SetConverged(isConverged bool) error {
return obj.converger.SetConverged(obj, isConverged)
}
// Unregister is a helper function to unregister myself
func (obj *convergerUID) Unregister() {
// Unregister is a helper function to unregister myself.
func (obj *cuid) Unregister() {
obj.converger.Unregister(obj)
}
// ConvergedTimer is a helper around the regular ConvergedTimer method
func (obj *convergerUID) ConvergedTimer() <-chan time.Time {
// ConvergedTimer is a helper around the regular ConvergedTimer method.
func (obj *cuid) ConvergedTimer() <-chan time.Time {
return obj.converger.ConvergedTimer(obj)
}
// StartTimer runs an invisible timer that automatically converges on timeout.
func (obj *convergerUID) StartTimer() (func() error, error) {
func (obj *cuid) StartTimer() (func() error, error) {
obj.mutex.Lock()
if !obj.running {
obj.timer = make(chan struct{})
obj.running = true
} else {
obj.mutex.Unlock()
return obj.StopTimer, fmt.Errorf("Timer already started!")
return obj.StopTimer, fmt.Errorf("timer already started")
}
obj.mutex.Unlock()
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
for {
select {
case _, ok := <-obj.timer: // reset signal channel
@@ -359,24 +362,25 @@ func (obj *convergerUID) StartTimer() (func() error, error) {
}
// ResetTimer resets the counter to zero if using a StartTimer internally.
func (obj *convergerUID) ResetTimer() error {
func (obj *cuid) ResetTimer() error {
obj.mutex.Lock()
defer obj.mutex.Unlock()
if obj.running {
obj.timer <- struct{}{} // send the reset message
return nil
}
return fmt.Errorf("Timer hasn't been started!")
return fmt.Errorf("timer hasn't been started")
}
// StopTimer stops the running timer permanently until a StartTimer is run.
func (obj *convergerUID) StopTimer() error {
func (obj *cuid) StopTimer() error {
obj.mutex.Lock()
defer obj.mutex.Unlock()
if !obj.running {
return fmt.Errorf("Timer isn't running!")
return fmt.Errorf("timer isn't running")
}
close(obj.timer)
obj.wg.Wait()
obj.running = false
return nil
}
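To make the renamed Converger/UID API above more concrete, here is a rough usage sketch based only on the interface shown in this diff; the import path and the exact runtime behaviour are assumptions, and real code would coordinate shutdown properly instead of sleeping.
```
package main

import (
	"fmt"
	"time"

	"github.com/purpleidea/mgmt/converger" // assumed import path
)

func main() {
	// 5 second timeout; the callback runs whenever the converged state flips.
	conv := converger.NewConverger(5, func(converged bool) error {
		fmt.Println("converged state is now:", converged)
		return nil
	})
	go conv.Loop(false) // run the main loop, not paused

	uid := conv.Register() // was ConvergerUID, now just UID
	uid.SetName("example")

	// ... do some work, then report that we've settled down:
	if err := uid.SetConverged(true); err != nil {
		fmt.Println("error:", err)
	}

	time.Sleep(1 * time.Second) // crude; only to let the loop observe the change
	uid.Unregister()
}
```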

doc.go (8 changed lines)

@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package main provides the main entrypoint for using the `mgmt` software.

docs/.gitignore (vendored, 1 changed line)

@@ -1 +1,2 @@
mgmt-documentation.pdf
_build

docs/Makefile (new file, 20 changed lines)

@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
SPHINXPROJ = mgmt
SOURCEDIR = .
BUILDDIR = _build
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

docs/conf.py (new file, 158 changed lines)

@@ -0,0 +1,158 @@
# -*- coding: utf-8 -*-
#
# mgmt documentation build configuration file, created by
# sphinx-quickstart on Wed Feb 15 21:34:09 2017.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
from recommonmark.parser import CommonMarkParser
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = []
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
source_parsers = {
'.md': CommonMarkParser,
}
source_suffix = ['.rst', '.md']
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'mgmt'
copyright = u'2013-2017+ James Shubin and the project contributors'
author = u'James Shubin'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = u''
# The full version, including alpha/beta/rc tags.
release = u''
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'venv']
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
#html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'mgmtdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'mgmt.tex', u'mgmt Documentation',
u'James Shubin', 'manual'),
]
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'mgmt', u'mgmt Documentation',
[author], 1)
]
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'mgmt', u'mgmt Documentation',
author, 'mgmt', 'A next generation config management prototype!',
'Miscellaneous'),
]

docs/documentation.md

@@ -1,57 +1,16 @@
#mgmt
# mgmt
<!--
Mgmt
Copyright (C) 2013-2016+ James Shubin and the project contributors
Written by James Shubin <james@shubin.ca> and the project contributors
Available from:
[https://github.com/purpleidea/mgmt/](https://github.com/purpleidea/mgmt/)
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This documentation is available in: [Markdown](https://github.com/purpleidea/mgmt/blob/master/docs/documentation.md) or [PDF](https://pdfdoc-purpleidea.rhcloud.com/pdf/https://github.com/purpleidea/mgmt/blob/master/docs/documentation.md) format.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
-->
##mgmt by [James](https://ttboj.wordpress.com/)
####Available from:
####[https://github.com/purpleidea/mgmt/](https://github.com/purpleidea/mgmt/)
####This documentation is available in: [Markdown](https://github.com/purpleidea/mgmt/blob/master/docs/documentation.md) or [PDF](https://pdfdoc-purpleidea.rhcloud.com/pdf/https://github.com/purpleidea/mgmt/blob/master/docs/documentation.md) format.
####Table of Contents
1. [Overview](#overview)
2. [Project description - What the project does](#project-description)
3. [Setup - Getting started with mgmt](#setup)
4. [Features - All things mgmt can do](#features)
* [Autoedges - Automatic resource relationships](#autoedges)
* [Autogrouping - Automatic resource grouping](#autogrouping)
* [Automatic clustering - Automatic cluster management](#automatic-clustering)
* [Remote mode - Remote "agent-less" execution](#remote-agent-less-mode)
* [Puppet support - write manifest code for mgmt](#puppet-support)
5. [Resources - All built-in primitives](#resources)
6. [Usage/FAQ - Notes on usage and frequently asked questions](#usage-and-frequently-asked-questions)
7. [Reference - Detailed reference](#reference)
* [Meta parameters](#meta-parameters)
* [Graph definition file](#graph-definition-file)
* [Command line](#command-line)
8. [Examples - Example configurations](#examples)
9. [Development - Background on module development and reporting bugs](#development)
10. [Authors - Authors and contact information](#authors)
##Overview
## Overview
The `mgmt` tool is a next generation config management prototype. It's not yet
ready for production, but we hope to get there soon. Get involved today!
##Project Description
## Project Description
The mgmt tool is a distributed, event driven, config management tool, that
supports parallel execution, and librarification to be used as the management
@@ -63,11 +22,14 @@ For more information, you may like to read some blog posts from the author:
* [Automatic edges in mgmt](https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/)
* [Automatic grouping in mgmt](https://ttboj.wordpress.com/2016/03/30/automatic-grouping-in-mgmt/)
* [Automatic clustering in mgmt](https://ttboj.wordpress.com/2016/06/20/automatic-clustering-in-mgmt/)
* [Remote execution in mgmt](https://ttboj.wordpress.com/2016/10/07/remote-execution-in-mgmt/)
* [Send/Recv in mgmt](https://ttboj.wordpress.com/2016/12/07/sendrecv-in-mgmt/)
* [Metaparameters in mgmt](https://ttboj.wordpress.com/2017/03/01/metaparameters-in-mgmt/)
There is also an [introductory video](http://meetings-archive.debian.net/pub/debian-meetings/2016/debconf16/Next_Generation_Config_Mgmt.webm) available.
Older videos and other material [is available](https://github.com/purpleidea/mgmt/#on-the-web).
##Setup
## Setup
During this prototype phase, the tool can be run out of the source directory.
You'll probably want to use ```./run.sh run --yaml examples/graph1.yaml``` to
@@ -75,12 +37,12 @@ get started. Beware that this _can_ cause data loss. Understand what you're
doing first, or perform these actions in a virtual environment such as the one
provided by [Oh-My-Vagrant](https://github.com/purpleidea/oh-my-vagrant).
##Features
## Features
This section details the numerous features of mgmt and some caveats you might
need to be aware of.
###Autoedges
### Autoedges
Automatic edges, or AutoEdges, is the mechanism in mgmt by which it will
automatically create dependencies for you between resources. For example,
@@ -89,7 +51,7 @@ automatically ensure that any file resource you declare that matches a
file installed by your package resource will only be processed after the
package is installed.
####Controlling autoedges
#### Controlling autoedges
Though autoedges is likely to be very helpful and avoid you having to declare
all dependencies explicitly, there are cases where this behaviour is
@@ -106,12 +68,12 @@ installation of the `mysql-server` package.
You can disable autoedges for a resource by setting the `autoedge` key on
the meta attributes of that resource to `false`.
####Blog post
#### Blog post
You can read the introductory blog post about this topic here:
[https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/](https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/)
###Autogrouping
### Autogrouping
Automatic grouping or AutoGroup is the mechanism in mgmt by which it will
automatically group multiple resource vertices into a single one. This is
@@ -125,12 +87,12 @@ used for other use cases too.
You can disable autogrouping for a resource by setting the `autogroup` key on
the meta attributes of that resource to `false`.
####Blog post
#### Blog post
You can read the introductory blog post about this topic here:
[https://ttboj.wordpress.com/2016/03/30/automatic-grouping-in-mgmt/](https://ttboj.wordpress.com/2016/03/30/automatic-grouping-in-mgmt/)
###Automatic clustering
### Automatic clustering
Automatic clustering is a feature by which mgmt automatically builds, scales,
and manages the embedded etcd cluster which is compiled into mgmt itself. It is
@@ -141,12 +103,12 @@ If you prefer to avoid this feature, you can always opt to use an existing etcd
cluster that is managed separately from mgmt by pointing your mgmt agents at it
with the `--seeds` variable.
####Blog post
#### Blog post
You can read the introductory blog post about this topic here:
[https://ttboj.wordpress.com/2016/06/20/automatic-clustering-in-mgmt/](https://ttboj.wordpress.com/2016/06/20/automatic-clustering-in-mgmt/)
###Remote ("agent-less") mode
### Remote ("agent-less") mode
Remote mode is a special mode that lets you kick off mgmt runs on one or more
remote machines which are only accessible via SSH. In this mode the initiating
@@ -168,12 +130,12 @@ entire set of running mgmt agents will need to all simultaneously converge for
the group to exit. This is particularly useful for bootstrapping new clusters
which need to exchange information that is only available at run time.
####Blog post
#### Blog post
You can read the introductory blog post about this topic here:
[https://ttboj.wordpress.com/2016/10/07/remote-execution-in-mgmt/](https://ttboj.wordpress.com/2016/10/07/remote-execution-in-mgmt/)
###Puppet support
### Puppet support
You can supply a Puppet manifest instead of creating the (YAML) graph manually.
Puppet must be installed and in `mgmt`'s search path. You also need the
@@ -195,12 +157,12 @@ Invoke `mgmt` with the `--puppet` switch, which supports 3 variants:
For more details and caveats see [Puppet.md](Puppet.md).
####Blog post
#### Blog post
An introductory post on the Puppet support is on
[Felix's blog](http://ffrank.github.io/features/2016/06/19/puppet-powered-mgmt/).
##Resources
## Resources
This section lists all the built-in resources and their properties. The
resource primitives in `mgmt` are typically more powerful than resources in
@@ -216,9 +178,11 @@ meta parameters aren't very useful when combined with certain resources, but
in general, it should be fairly obvious, such as when combining the `noop` meta
parameter with the [Noop](#Noop) resource.
* [Augeas](#Augeas): Manipulate files using augeas.
* [Exec](#Exec): Execute shell commands on the system.
* [File](#File): Manage files and directories.
* [Hostname](#Hostname): Manages the hostname on the system.
* [KV](#KV): Set a key value pair in our shared world database.
* [Msg](#Msg): Send log messages.
* [Noop](#Noop): A simple resource that does nothing.
* [Nspawn](#Nspawn): Manage systemd-machined nspawn containers.
@@ -228,45 +192,60 @@ parameter with the [Noop](#Noop) resource.
* [Timer](#Timer): A resource that generates recurring timer events.
* [Virt](#Virt): Manage virtual machines with libvirt.
###Exec
### Augeas
The augeas resource uses [augeas](http://augeas.net/) commands to manipulate
files.
### Exec
The exec resource can execute commands on your system.
###File
### File
The file resource manages files and directories. In `mgmt`, directories are
identified by a trailing slash in their path name. Files have no such slash.
####Path
It has the following properties:
- `path`: file path (directories have a trailing slash here)
- `content`: raw file content
- `state`: either `exists` (the default value) or `absent`
- `mode`: octal unix file permissions
- `owner`: username or uid for the file owner
- `group`: group name or gid for the file group
#### Path
The path property specifies the file or directory that we are managing.
####Content
#### Content
The content property is a string that specifies the desired file contents.
####Source
#### Source
The source property points to a source file or directory path that we wish to
copy over and use as the desired contents for our resource.
####State
#### State
The state property describes the action we'd like to apply for the resource. The
possible values are: `exists` and `absent`.
####Recurse
#### Recurse
The recurse property limits whether file resource operations should recurse into
and monitor directory contents with a depth greater than one.
####Force
#### Force
The force property is required if we want the file resource to be able to change
a file into a directory or vice-versa. If such a change is needed, but the force
property is not set to `true`, then this file resource will error.
###Hostname
### Hostname
The hostname resource manages static, transient/dynamic and pretty hostnames
on the system and watches them for changes.
@@ -290,55 +269,79 @@ The pretty hostname is a free-form UTF8 host name for presentation to the user.
Hostname is the fallback value for all 3 fields above; if only `hostname` is
specified, it will set all 3 fields to this value.
###Msg
### KV
The KV resource sets a key and value pair in the global world database. This is
quite useful for setting a flag after a number of resources have run. It will
ignore database updates to the value that are greater in compare order than the
requested value if the `SkipLessThan` parameter is set to `true`. If we receive a
refresh, then the stored value will be reset to the requested value even if the
stored value is greater.
#### Key
The string key under which the value is stored.
#### Value
The string value to set. This can also be set via Send/Recv.
#### SkipLessThan
If this parameter is set to `true`, then it will ignore updating the value as
long as the database versions are greater than the requested value. The compare
operation used is based on the `SkipCmpStyle` parameter.
#### SkipCmpStyle
By default this converts the string values to integers and compares them as you
would expect.
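As a rough illustration of this integer-style comparison (a standalone sketch,
not mgmt's actual implementation; the function name is made up), the decision
boils down to something like:
```golang
package main

import (
	"fmt"
	"strconv"
)

// skipUpdate reports whether a stored value should be kept (i.e. the update
// skipped) because it already compares greater than the requested value when
// both are interpreted as integers. Illustrative sketch only.
func skipUpdate(stored, requested string) (bool, error) {
	s, err := strconv.Atoi(stored)
	if err != nil {
		return false, err
	}
	r, err := strconv.Atoi(requested)
	if err != nil {
		return false, err
	}
	return s > r, nil // keep the stored value if it is already greater
}

func main() {
	skip, _ := skipUpdate("42", "13")
	fmt.Printf("skip the update: %t\n", skip) // skip the update: true
}
```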
### Msg
The msg resource sends messages to the main log, or an external service such
as systemd's journal.
###Noop
### Noop
The noop resource does absolutely nothing. It does have some utility in testing
`mgmt` and also as a placeholder in the resource graph.
###Nspawn
### Nspawn
The nspawn resource is used to manage systemd-machined style containers.
###Password
### Password
The password resource can generate a random string to be used as a password. It
will re-generate the password if it receives a refresh notification.
###Pkg
### Pkg
The pkg resource is used to manage system packages. This resource works on many
different distributions because it uses the underlying packagekit facility which
supports different backends for different environments. This ensures that we
have great Debian (deb/dpkg) and Fedora (rpm/dnf) support simultaneously.
###Svc
### Svc
The service resource is still very WIP. Please help us by improving it!
###Timer
### Timer
This resource needs better documentation. Please help us by improving it!
###Virt
### Virt
The virt resource can manage virtual machines via libvirt.
##Usage and frequently asked questions
## Usage and frequently asked questions
(Send your questions as a patch to this FAQ! I'll review it, merge it, and
respond by commit with the answer.)
###Why did you start this project?
### Why did you start this project?
I wanted a next generation config management solution that didn't have all of
the design flaws or limitations that the current generation of tools do, and no
tool existed!
###Why did you use etcd? What about consul?
### Why did you use etcd? What about consul?
Etcd and consul are both written in golang, which made them the top two
contenders for my prototype. Ultimately a choice had to be made, and etcd was
@@ -346,7 +349,7 @@ chosen, but it was also somewhat arbitrary. If there is available interest,
good reasoning, *and* patches, then we would consider either switching or
supporting both, but this is not a high priority at this time.
###Can I use an existing etcd cluster instead of the automatic embedded servers?
### Can I use an existing etcd cluster instead of the automatic embedded servers?
Yes, it's possible to use an existing etcd cluster instead of the automatic,
elastic embedded etcd servers. To do so, simply point to the cluster with the
@@ -357,7 +360,7 @@ The downside to this approach is that you won't benefit from the automatic
elastic nature of the embedded etcd servers, and that you're responsible if you
accidentally break your etcd cluster, or if you use an unsupported version.
###What does the error message about an inconsistent dataDir mean?
### What does the error message about an inconsistent dataDir mean?
If you get an error message similar to:
@@ -375,7 +378,7 @@ starting up, and as a result, a default endpoint never gets added. The solution
is to either reconcile the mistake, or, if there is no important data saved, you
can remove the etcd dataDir. This is typically `/var/lib/mgmt/etcd/member/`.
###Why do resources have both a `Compare` method and an `IFF` (on the UID) method?
### Why do resources have both a `Compare` method and an `IFF` (on the UID) method?
The `Compare()` methods are for determining if two resources are effectively the
same, which is used to make graph change deltas efficient. This is when we want
@@ -391,14 +394,14 @@ equality. In the future it might be helpful or sane to merge the two similar
comparison functions, although for now they are separate because they actually
answer different questions.
###Did you know that there is a band named `MGMT`?
### Did you know that there is a band named `MGMT`?
I didn't realize this when naming the project, and it is accidental. After much
anguishing, I chose the name because it was short and I thought it was
appropriately descriptive. If you need a less ambiguous search term or phrase,
you can try using `mgmtconfig` or `mgmt config`.
###You didn't answer my question, or I have a question!
### You didn't answer my question, or I have a question!
It's best to ask on [IRC](https://webchat.freenode.net/?channels=#mgmtconfig)
to see if someone can help you. Once we get a big enough community going, we'll
@@ -408,40 +411,41 @@ and I'll do my best to help. If you have a good question, please add it as a
patch to this documentation. I'll merge your question, and add a patch with the
answer!
##Reference
## Reference
Please note that there are a number of undocumented options. For more
information on these options, please view the source at:
[https://github.com/purpleidea/mgmt/](https://github.com/purpleidea/mgmt/).
If you feel that a well used option needs documenting here, please patch it!
###Overview of reference
### Overview of reference
* [Meta parameters](#meta-parameters): List of available resource meta parameters.
* [Graph definition file](#graph-definition-file): Main graph definition file.
* [Command line](#command-line): Command line parameters.
* [Compilation options](#compilation-options): Compilation options.
###Meta parameters
### Meta parameters
These meta parameters are special parameters (or properties) which can apply to
any resource. The usefulness of doing so will depend on the particular meta
parameter and resource combination.
####AutoEdge
#### AutoEdge
Boolean. Should we generate auto edges for this resource?
####AutoGroup
#### AutoGroup
Boolean. Should we attempt to automatically group this resource with others?
####Noop
#### Noop
Boolean. Should the Apply portion of the CheckApply method of the resource
make any changes? Noop is a contraction of no-operation.
####Retry
#### Retry
Integer. The number of times to retry running the resource on error. Use -1 for
infinite. This currently applies for both the Watch operation (which can fail)
and for the CheckApply operation. While they could have separate values, I've
decided to use the same ones for both until there's a proper reason to want to
do something differently for the Watch errors.
####Delay
#### Delay
Integer. Number of milliseconds to wait between retries. The same value is
shared between the Watch and CheckApply retries. This currently applies for both
the Watch operation (which can fail) and for the CheckApply operation. While
@@ -449,63 +453,113 @@ they could have separate values, I've decided to use the same ones for both
until there's a proper reason to want to do something differently for the Watch
errors.
###Graph definition file
#### Poll
Integer. Number of seconds to wait between `CheckApply` checks. If this is
greater than zero, then the standard event based `Watch` mechanism for this
resource is replaced with a simple polling mechanism. In general, this is not
recommended, unless you have a very good reason for doing so.
Please keep in mind that if you have a resource which changes every `I` seconds,
and you poll it every `J` seconds, and you've asked for a converged timeout of
`K` seconds, and `I <= J <= K`, then your graph will likely never converge.
When polling, the system detects that a resource is not converged if its
`CheckApply` method returns false. This allows a resource which changes every
`I` seconds, and which is polled every `J` seconds, and with a converged timeout
of `K` seconds to still converge when `J <= K`, as long as `I > J || I > K`,
which is another way of saying that if the resource finally settles down to give
the graph enough time, it can probably converge.
#### Limit
Float. Maximum rate of `CheckApply` runs started per second. Useful to limit
an especially _eventful_ process from causing excessive checks to run. This
defaults to `+Infinity` which adds no limiting. If you change this value, you
will also need to change the `Burst` value to a non-zero value. Please see the
[rate](https://godoc.org/golang.org/x/time/rate) package for more information.
#### Burst
Integer. Burst is the maximum number of runs which can happen without invoking
the rate limiter as designated by the `Limit` value. If the `Limit` is not set
to `+Infinity`, this must be a non-zero value. Please see the
[rate](https://godoc.org/golang.org/x/time/rate) package for more information.
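To get a feel for how `Limit` and `Burst` interact, here is a small standalone
sketch using the same [rate](https://godoc.org/golang.org/x/time/rate) package
directly (illustrative only; mgmt wires this into the engine for you):
```golang
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// Allow at most 0.5 CheckApply runs per second on average, with bursts
	// of up to 3 runs that don't need to wait for the limiter.
	limiter := rate.NewLimiter(rate.Limit(0.5), 3)

	for i := 1; i <= 5; i++ {
		// Allow reports whether a run may happen now without waiting.
		fmt.Printf("run %d allowed immediately: %t\n", i, limiter.Allow())
	}
	// The first three runs are allowed (the burst); later ones must wait.
}
```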
#### Sema
List of string ids. Sema is a P/V style counting semaphore which can be used to
limit parallelism during the CheckApply phase of resource execution. Each
resource can have `N` different semaphores which share a graph global namespace.
Each semaphore has a maximum count associated with it. The default value of the
size is 1 (one) if size is unspecified. Each string id is the unique id of the
semaphore. If the id contains a trailing colon (:) followed by a positive
integer, then that value is the max size for that semaphore. Valid semaphore
ids include: `some_id`, `hello:42`, `not:smart:4` and `:13`. It is expected
that the last bare example only be used by the engine to add a global semaphore.
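The id format is simple to parse; this standalone sketch (not mgmt's actual
code) shows one way an id such as `hello:42` could be split into a name and a
maximum size, with the size defaulting to 1:
```golang
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSemaID splits a semaphore id of the form "name" or "name:size" into
// its name and maximum size. The size defaults to 1 when unspecified.
func parseSemaID(id string) (string, int) {
	if i := strings.LastIndex(id, ":"); i > -1 {
		if size, err := strconv.Atoi(id[i+1:]); err == nil && size > 0 {
			return id[:i], size
		}
	}
	return id, 1 // default semaphore size
}

func main() {
	for _, id := range []string{"some_id", "hello:42", "not:smart:4", ":13"} {
		name, size := parseSemaID(id)
		fmt.Printf("%q -> name=%q size=%d\n", id, name, size)
	}
}
```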
### Graph definition file
graph.yaml is the compiled graph definition file. The format is currently
undocumented, but by looking through the [examples/](https://github.com/purpleidea/mgmt/tree/master/examples)
you can probably figure out most of it, as it's fairly intuitive.
###Command line
### Command line
The main interface to the `mgmt` tool is the command line. For the most recent
documentation, please run `mgmt --help`.
####`--yaml <graph.yaml>`
#### `--yaml <graph.yaml>`
Point to a graph file to run.
####`--converged-timeout <seconds>`
#### `--converged-timeout <seconds>`
Exit if the machine has converged for approximately this many seconds.
####`--max-runtime <seconds>`
#### `--max-runtime <seconds>`
Exit when the agent has run for approximately this many seconds. This is not
generally recommended, but may be useful for users who know what they're doing.
####`--noop`
#### `--noop`
Globally force all resources into no-op mode. This also disables the export to
etcd functionality, but does not disable resource collection; however, all
resources that are collected will have their individual noop settings set.
####`--remote <graph.yaml>`
#### `--sema <size>`
Globally add a counting semaphore of this size to each resource in the graph.
The semaphore will be given an id of `:size`. In other words, if you specify a
size of 42, you can expect a semaphore named `:42`. It is expected that
consumers of the semaphore metaparameter always include a prefix to avoid a
collision with this globally defined semaphore. The size value must be greater
than zero at this time. The traditional non-parallel execution found in config
management tools such as `Puppet` can be obtained with `--sema 1`.
#### `--remote <graph.yaml>`
Point to a graph file to run on the remote host specified within. This parameter
can be used multiple times if you'd like to remotely run on multiple hosts in
parallel.
####`--allow-interactive`
#### `--allow-interactive`
Allow interactive prompting for SSH passwords if there is no authentication
method that works.
####`--ssh-priv-id-rsa`
#### `--ssh-priv-id-rsa`
Specify the path for finding SSH keys. This defaults to `~/.ssh/id_rsa`. To
never use this method of authentication, set this to the empty string.
####`--cconns`
#### `--cconns`
The maximum number of concurrent remote ssh connections to run. This defaults
to `0`, which means unlimited.
####`--no-caching`
#### `--no-caching`
Don't allow remote caching of the remote execution binary. This will require
the binary to be copied over for every remote execution, but it limits the
likelihood that there is leftover information from the configuration process.
####`--prefix <path>`
#### `--prefix <path>`
Specify a path to a custom working directory prefix. This directory will get
created if it does not exist. This usually defaults to `/var/lib/mgmt/`. This
can't be combined with the `--tmp-prefix` option. It can be combined with the
`--allow-tmp-prefix` option.
####`--tmp-prefix`
#### `--tmp-prefix`
If this option is specified, a temporary prefix will be used instead of the
default prefix. This can't be combined with the `--prefix` option.
####`--allow-tmp-prefix`
#### `--allow-tmp-prefix`
If this option is specified, we will attempt to fall back to a temporary prefix
if the primary prefix couldn't be created. This is useful for avoiding failures
in environments where the primary prefix may or may not be available, but you'd
@@ -513,7 +567,35 @@ like to try. The canonical example is when running `mgmt` with `--remote` there
might be a cached copy of the binary in the primary prefix, but in case there's
no binary available continue working in a temporary directory to avoid failure.
##Examples
### Compilation options
You can control some compilation variables by using environment variables.
#### Disable libvirt support
If you wish to compile mgmt without libvirt, you can use the following command:
```
GOTAGS=novirt make build
```
#### Disable augeas support
If you wish to compile mgmt without augeas support, you can use the following command:
```
GOTAGS=noaugeas make build
```
#### Combining compile-time flags
You can combine multiple tags by using a space-separated list:
```
GOTAGS="noaugeas novirt" make build
```
## Examples
For example configurations, please consult the [examples/](https://github.com/purpleidea/mgmt/tree/master/examples) directory in the git
source repository. It is available from:
@@ -541,7 +623,7 @@ EOF
sudo systemctl daemon-reload
```
##Development
## Development
This is a project that I started in my free time in 2013. Development is driven
by all of our collective patches! Dive right in, and start hacking!
@@ -551,9 +633,9 @@ You can follow along [on my technical blog](https://ttboj.wordpress.com/).
To report any bugs, please file a ticket at: [https://github.com/purpleidea/mgmt/issues](https://github.com/purpleidea/mgmt/issues).
##Authors
## Authors
Copyright (C) 2013-2016+ James Shubin and the project contributors
Copyright (C) 2013-2017+ James Shubin and the project contributors
Please see the
[AUTHORS](https://github.com/purpleidea/mgmt/tree/master/AUTHORS) file

docs/index.rst Normal file

@@ -0,0 +1,17 @@
.. mgmt documentation master file, created by
sphinx-quickstart on Wed Feb 15 21:34:09 2017.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to mgmt's documentation!
================================
.. toctree::
:maxdepth: 2
:caption: Contents:
documentation
quick-start-guide
resource-guide
prometheus
puppet-guide

docs/prometheus.md Normal file

@@ -0,0 +1,66 @@
# Prometheus support
Mgmt comes with built-in prometheus support. It is disabled by default, and
can be enabled with the `--prometheus` command line switch.
By default, the prometheus instance will listen on [`127.0.0.1:9233`][pd]. You
can change this setting by using the `--prometheus-listen` cli option:
To have mgmt's prometheus endpoint bind to 0.0.0.0:45001, use:
`./mgmt r --prometheus --prometheus-listen :45001`
## Metrics
Mgmt exposes three kinds of metrics: _go_ metrics, _etcd_ metrics and _mgmt_
metrics.
### go metrics
We use the [prometheus go_collector][pgc] to expose go metrics. Those metrics
are mainly useful for debugging and perf testing.
### etcd metrics
mgmt exposes etcd metrics. Read more in the [upstream documentation][etcdm].
### mgmt metrics
Here is a list of the metrics we provide:
- `mgmt_resources_total`: The number of resources that mgmt is managing
- `mgmt_checkapply_total`: The number of CheckApply's that mgmt has run
- `mgmt_failures_total`: The total number of resource failures that have occurred
- `mgmt_failures_current`: The number of resources that are currently failing
- `mgmt_graph_start_time_seconds`: Start time of the current graph since unix epoch in seconds
For each metric, you will get some extra labels:
- `kind`: The kind of mgmt resource
For `mgmt_checkapply_total`, those extra labels are set:
- `eventful`: "true" or "false", if the CheckApply triggered some changes
- `errorful`: "true" or "false", if the CheckApply reported an error
- `apply`: "true" or "false", if the CheckApply ran in apply or noop mode
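If you'd rather inspect the endpoint programmatically than with a browser or
`curl`, a small Go sketch like the following dumps the raw metrics. It assumes
the default listen address and the conventional `/metrics` path, since the
docs above only mention the port:
```golang
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// Default prometheus listen address; the /metrics path is an assumption.
	resp, err := http.Get("http://127.0.0.1:9233/metrics")
	if err != nil {
		log.Fatalf("could not reach the prometheus endpoint: %v", err)
	}
	defer resp.Body.Close()

	// Dump the raw text exposition format, which includes lines such as
	// mgmt_resources_total and mgmt_checkapply_total.
	if _, err := io.Copy(os.Stdout, resp.Body); err != nil {
		log.Fatalf("could not read the response: %v", err)
	}
}
```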
## Alerting
You can use prometheus to alert you upon changes or failures. We do not provide
such templates yet, but we plan to provide some examples in this repository.
Patches welcome!
## Grafana
We do not have grafana dashboards yet. Patches welcome!
## External resources
- [prometheus website](https://prometheus.io/)
- [prometheus documentation](https://prometheus.io/docs/introduction/overview/)
- [prometheus best practices regarding metrics naming](https://prometheus.io/docs/practices/naming/)
- [grafana website](http://grafana.org/)
[pgc]: https://github.com/prometheus/client_golang/blob/master/prometheus/go_collector.go
[etcdm]: https://coreos.com/etcd/docs/latest/metrics.html
[pd]: https://github.com/prometheus/prometheus/wiki/Default-port-allocation


@@ -1,13 +1,4 @@
#mgmt Puppet support
1. [Prerequisites](#prerequisites)
* [Testing the Puppet side](#testing-the-puppet-side)
2. [Writing a suitable manifest](#writing-a-suitable-manifest)
* [Unsupported attributes](#unsupported-attributes)
* [Unsupported resources](#unsupported-resources)
* [Avoiding common warnings](#avoiding-common-warnings)
3. [Configuring Puppet](#configuring-puppet)
4. [Caveats](#caveats)
# Puppet guide
`mgmt` can use Puppet as its source for the configuration graph.
This document goes into detail on how this works, and lists
@@ -16,7 +7,7 @@ some pitfalls and limitations.
For basic instructions on how to use the Puppet support, see
the [main documentation](documentation.md#puppet-support).
##Prerequisites
## Prerequisites
You need Puppet installed in your system. It is not important how you
get it. On the most common Linux distributions, you can use packages
@@ -29,14 +20,16 @@ Any release of Puppet's 3.x and 4.x series should be suitable for use with
`mgmt`. Most importantly, make sure to install the `ffrank-mgmtgraph` Puppet
module (referred to below as "the translator module").
puppet module install ffrank-mgmtgraph
```
puppet module install ffrank-mgmtgraph
```
Please note that the module is not required on your Puppet master (if you
use a master/agent setup). It's needed on the machine that runs `mgmt`.
You can install the module on the master anyway, so that it gets distributed
to your agents through Puppet's `pluginsync` mechanism.
###Testing the Puppet side
### Testing the Puppet side
The following command should run successfully and print a YAML hash on your
terminal:
@@ -48,9 +41,9 @@ puppet mgmtgraph print --code 'file { "/tmp/mgmt-test": ensure => present }'
You can use this CLI to test any manifests before handing them straight
to `mgmt`.
##Writing a suitable manifest
## Writing a suitable manifest
###Unsupported attributes
### Unsupported attributes
`mgmt` inherited its resource module from Puppet, so by and large, it's quite
possible to express `mgmt` graphs in terms of Puppet manifests. However,
@@ -62,8 +55,10 @@ For example, at the time of writing this, the `file` type in `mgmt` had no
notion of permissions (the file `mode`) yet. This led to the following
warning (among others that will be discussed below):
$ puppet mgmtgraph print --code 'file { "/tmp/foo": mode => "0600" }'
Warning: cannot translate: File[/tmp/foo] { mode => "600" } (attribute is ignored)
```
$ puppet mgmtgraph print --code 'file { "/tmp/foo": mode => "0600" }'
Warning: cannot translate: File[/tmp/foo] { mode => "600" } (attribute is ignored)
```
This is a heads-up for the user, because the resulting `mgmt` graph will
in fact not pass this information to the `/tmp/foo` file resource, and
@@ -71,7 +66,7 @@ in fact not pass this information to the `/tmp/foo` file resource, and
manifests that are written expressly for `mgmt` is not sensible and should
be avoided.
###Unsupported resources
### Unsupported resources
Puppet has a fairly large number of
[built-in types](https://docs.puppet.com/puppet/latest/reference/type.html),
@@ -91,28 +86,32 @@ this overhead can amount to several orders of magnitude.
Avoid Puppet types that `mgmt` does not implement (yet).
###Avoiding common warnings
### Avoiding common warnings
Many resource parameters in Puppet take default values. For the most part,
the translator module just ignores them. However, there are cases in which
Puppet will default to convenient behavior that `mgmt` cannot quite replicate.
For example, translating a plain `file` resource will lead to a warning message:
$ puppet mgmtgraph print --code 'file { "/tmp/mgmt-test": }'
Warning: File[/tmp/mgmt-test] uses the 'puppet' file bucket, which mgmt cannot do. There will be no backup copies!
```
$ puppet mgmtgraph print --code 'file { "/tmp/mgmt-test": }'
Warning: File[/tmp/mgmt-test] uses the 'puppet' file bucket, which mgmt cannot do. There will be no backup copies!
```
The reason is that by default, Puppet assumes the following parameter value
(among others):
```puppet
file { "/tmp/mgmt-test":
backup => 'puppet',
backup => 'puppet',
}
```
To avoid this, specify the parameter explicitly:
$ puppet mgmtgraph print --code 'file { "/tmp/mgmt-test": backup => false }'
```
$ puppet mgmtgraph print --code 'file { "/tmp/mgmt-test": backup => false }'
```
This is tedious in a more complex manifest. A good simplification is the
following [resource default](https://docs.puppet.com/puppet/latest/reference/lang_defaults.html)
@@ -125,7 +124,7 @@ File { backup => false }
If you encounter similar warnings from other types and/or parameters,
use the same approach to silence them if possible.
##Configuring Puppet
## Configuring Puppet
Since `mgmt` uses an actual Puppet CLI behind the scenes, you might
need to tweak some of Puppet's runtime options in order to make it
@@ -143,16 +142,20 @@ control all of them, through its `--puppet-conf` option. It allows
you to specify which `puppet.conf` file should be used during
translation.
mgmt run --puppet /opt/my-manifest.pp --puppet-conf /etc/mgmt/puppet.conf
```
mgmt run --puppet /opt/my-manifest.pp --puppet-conf /etc/mgmt/puppet.conf
```
Within this file, you can just specify any needed options in the
`[main]` section:
[main]
server=mgmt-master.example.net
vardir=/var/lib/mgmt/puppet
```
[main]
server=mgmt-master.example.net
vardir=/var/lib/mgmt/puppet
```
##Caveats
## Caveats
Please see the [README](https://github.com/ffrank/puppet-mgmtgraph/blob/master/README.md)
of the translator module for the current state of supported and unsupported

docs/quick-start-guide.md Normal file

@@ -0,0 +1,94 @@
# Quick start guide
## Introduction
This guide is intended for developers. Once `mgmt` is minimally viable, we'll
publish a quick start guide for users too. In the meantime, please contribute!
If you're brand new to `mgmt`, it's probably a good idea to start by reading the
[introductory article](https://ttboj.wordpress.com/2016/01/18/next-generation-configuration-mgmt/)
or to watch an [introductory video](https://github.com/purpleidea/mgmt/#on-the-web).
Once you're familiar with the general idea, please start hacking...
## Vagrant
If you would like to avoid doing the following steps manually, we have prepared
a [Vagrant](https://www.vagrantup.com/) environment for your convenience. From
the project directory, run a `vagrant up`, and then a `vagrant status`. From
there, you can `vagrant ssh` into the `mgmt` machine. The MOTD will explain the
rest.
## Dependencies
Software projects have a few different kinds of dependencies. There are _build_
dependencies, _runtime_ dependencies, and additionally, a few extra dependencies
required for running the _test_ suite.
### Build
* `golang` 1.8 or higher (required, available in some distros and distributed
as a binary officially by [golang.org](https://golang.org/dl/))
* golang libraries (required, available with `go get ./...`); a partial list includes:
```
github.com/coreos/etcd/client
gopkg.in/yaml.v2
gopkg.in/fsnotify.v1
github.com/urfave/cli
github.com/coreos/go-systemd/dbus
github.com/coreos/go-systemd/util
github.com/libvirt/libvirt-go
```
* `stringer` (optional), available as a package on some platforms, otherwise via `go get`
```
golang.org/x/tools/cmd/stringer
```
* `pandoc` (optional), for building a pdf of the documentation
### Runtime
A relatively modern GNU/Linux system should be able to run `mgmt` without any
problems. Since `mgmt` runs as a single statically compiled binary, all of the
library dependencies are included. It is expected that certain advanced
resources require host-specific facilities to work. These requirements are
listed below:
| Resource | Dependency | Version |
|----------|-------------------|---------|
| file | inotify | ? |
| hostname | systemd-hostnamed | ? |
| nspawn | systemd-nspawn | ? |
| pkg | packagekitd | ? |
| svc | systemd | ? |
| virt | libvirtd | ? |
For building a visual representation of the graph, `graphviz` is required.
### Testing
* golint `github.com/golang/lint/golint`
## Quick start
* Make sure you have golang version 1.8 or greater installed.
* If you do not have a GOPATH yet, create one and export it:
```
mkdir $HOME/gopath
export GOPATH=$HOME/gopath
```
* You might also want to add the GOPATH to your `~/.bashrc` or `~/.profile`.
* For more information you can read the [GOPATH documentation](https://golang.org/cmd/go/#hdr-GOPATH_environment_variable).
* Next download the mgmt code base, and switch to that directory:
```
mkdir -p $GOPATH/src/github.com/purpleidea/
cd $GOPATH/src/github.com/purpleidea/
git clone --recursive https://github.com/purpleidea/mgmt/
cd $GOPATH/src/github.com/purpleidea/mgmt
```
* Run `make deps` to install system and golang dependencies. Take a look at `misc/make-deps.sh` for details.
* Run `make build` to get a freshly built `mgmt` binary.
* Run `time ./mgmt run --yaml examples/graph0.yaml --converged-timeout=5 --tmp-prefix` to try out a very simple example!
* To run continuously in the default mode of operation, omit the `--converged-timeout` option.
* Have fun hacking on our future technology!
## Examples
Please look in the [examples/](../examples/) folder for some examples!
## Installation
Installation of `mgmt` from distribution packages currently needs improvement.
At the moment we have:
* [COPR](https://copr.fedoraproject.org/coprs/purpleidea/mgmt/)
* [Arch](https://aur.archlinux.org/packages/mgmt/)
Please contribute more! We'd especially like to see a Debian package!


@@ -1,49 +1,6 @@
#mgmt
# Resource guide
<!--
Mgmt
Copyright (C) 2013-2016+ James Shubin and the project contributors
Written by James Shubin <james@shubin.ca> and the project contributors
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
-->
##mgmt resource guide by [James](https://ttboj.wordpress.com/)
####Available from:
####[https://github.com/purpleidea/mgmt/](https://github.com/purpleidea/mgmt/)
####This documentation is available in: [Markdown](https://github.com/purpleidea/mgmt/blob/master/docs/resource-guide.md) or [PDF](https://pdfdoc-purpleidea.rhcloud.com/pdf/https://github.com/purpleidea/mgmt/blob/master/docs/resource-guide.md) format.
####Table of Contents
1. [Overview](#overview)
2. [Theory - Resource theory in mgmt](#theory)
3. [Resource API - Getting started with mgmt](#resource-api)
* [Init - Initialize the resource](#init)
* [CheckApply - Check and apply resource state](#checkapply)
* [Watch - Detect resource changes](#watch)
* [Compare - Compare resource with another](#compare)
4. [Further considerations - More information about resource writing](#further-considerations)
5. [Automatic edges - Adding automatic resources dependencies](#automatic-edges)
6. [Automatic grouping - Grouping multiple resources into one](#automatic-grouping)
7. [Send/Recv - Communication between resources](#send-recv)
8. [Composite resources - Importing code from one resource into another](#composite-resources)
9. [FAQ - Frequently asked questions](#frequently-asked-questions)
10. [Suggestions - API change suggestions](#suggestions)
11. [Authors - Authors and contact information](#authors)
##Overview
## Overview
The `mgmt` tool has built-in resource primitives which make up the building
blocks of any configuration. Each instance of a resource is mapped to a single
@@ -52,7 +9,7 @@ This guide is meant to instruct developers on how to write a brand new resource.
Since `mgmt` and the core resources are written in golang, some prior golang
knowledge is assumed.
##Theory
## Theory
Resources in `mgmt` are similar to resources in other systems in that they are
[idempotent](https://en.wikipedia.org/wiki/Idempotence). Our resources are
@@ -62,27 +19,67 @@ on this design, please read the
[original article](https://ttboj.wordpress.com/2016/01/18/next-generation-configuration-mgmt/)
on the subject.
##Resource API
## Resource API
To implement a resource in `mgmt` it must satisfy the
[`Res`](https://github.com/purpleidea/mgmt/blob/master/resources/resources.go)
interface. What follows are each of the method signatures and a description of
each.
###Init
### Default
```golang
Default() Res
```
This returns a populated resource struct as a `Res`. It shouldn't populate any
values which already have the correct default as the golang zero value. In
general it is preferable if the zero values make for the correct defaults.
#### Example
```golang
// Default returns some sensible defaults for this resource.
func (obj *FooRes) Default() Res {
return &FooRes{
Answer: 42, // sometimes, defaults shouldn't be the zero value
}
}
```
### Validate
```golang
Validate() error
```
This method is used to validate if the populated resource struct is a valid
representation of the resource kind. If it does not conform to the resource
specifications, it should generate an error. If you notice that this method is
quite large, it might be an indication that you should reconsider the parameter
list and interface to this resource. This method is called _before_ `Init`.
#### Example
```golang
// Validate reports any problems with the struct definition.
func (obj *FooRes) Validate() error {
if obj.Answer != 42 { // validate whatever you want
return fmt.Errorf("expected an answer of 42")
}
return obj.BaseRes.Validate() // remember to call the base method!
}
```
### Init
```golang
Init() error
```
This is called to initialize the resource. If something goes wrong, it should
return an error. It should set the resource `kind`, do any resource specific
work, and finish by calling the `Init` method of the base resource.
return an error. It should do any resource specific work, and finish by calling
the `Init` method of the base resource.
####Example
#### Example
```golang
// Init initializes the Foo resource.
func (obj *FooRes) Init() error {
obj.BaseRes.kind = "Foo" // must set capitalized resource kind
// run the resource specific initialization, and error if anything fails
if some_error {
return err // something went wrong!
@@ -91,7 +88,43 @@ func (obj *FooRes) Init() error {
}
```
###CheckApply
This method is always called after `Validate` has run successfully, with the
exception that we can't prevent a malicious or buggy `libmgmt` user from
skipping it. In other words, you should expect `Validate` to have run first, but you
shouldn't allow `Init` to dangerously `rm -rf /$the_world` if your code only
checks `$the_world` in `Validate`. Remember to always program safely!
### Close
```golang
Close() error
```
This is called to cleanup after the resource. It is usually not necessary, but
can be useful if you'd like to properly close a persistent connection that you
opened in the `Init` method and were using throughout the resource.
#### Example
```golang
// Close runs some cleanup code for this resource.
func (obj *FooRes) Close() error {
err := obj.conn.Close() // close some internal connection
// call base close, b/c we're overriding
if e := obj.BaseRes.Close(); err == nil {
err = e
} else if e != nil {
err = multierr.Append(err, e) // list of errors
}
return err
}
```
You should probably check the return errors of your internal methods, and pass
on an error if something went wrong. Remember to always call the base `Close`
method! If you plan to return early when you hit an internal error, then at least
call it with a defer!
### CheckApply
```golang
CheckApply(apply bool) (checkOK bool, err error)
```
@@ -116,7 +149,7 @@ resource isn't now converged. This is not a bug, as the resources `Watch`
facility will detect the change, ultimately resulting in a subsequent call to
`CheckApply`.
####Example
#### Example
```golang
// CheckApply does the idempotent work of checking and applying resource state.
func (obj *FooRes) CheckApply(apply bool) (bool, error) {
@@ -137,7 +170,7 @@ skipped. This is an engine optimization, and not a bug. It is mentioned here in
the documentation in case you are confused as to why a debug message you've
added to the code isn't always printed.
####Refresh notifications
#### Refresh notifications
Some resources may choose to support receiving refresh notifications. In general
these should be avoided if possible, but nevertheless, they do make sense in
certain situations. Resources that support these need to verify if one was sent
@@ -150,7 +183,7 @@ action is applied by that resource and are transmitted through graph edges which
have enabled their propagation. Resources that currently perform some refresh
action include `svc`, `timer`, and `password`.
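As a hedged sketch of what this looks like in practice (the `Refresh` and
`doRefresh` names below are assumptions for illustration, not a guaranteed
API), a refresh-aware resource typically checks for a pending notification
near the top of its `CheckApply`:
```golang
// inside CheckApply, probably near the top (sketch only)
if obj.Refresh() { // assumed helper: was a refresh notification sent to us?
	// redo the action even if the state otherwise looks correct, e.g.
	// reload the service or regenerate the password
	if err := obj.doRefresh(); err != nil { // doRefresh is a hypothetical helper
		return false, err
	}
}
```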
####Paired execution
#### Paired execution
For many resources it is not uncommon to see `CheckApply` run twice in rapid
succession. This is usually not a pathological occurrence, but rather a healthy
pattern which is a consequence of the event system. When the state of the
@@ -159,21 +192,21 @@ having just changed the state, it is usually the case that this repair will
trigger the `Watch` code! In response, a second `CheckApply` is triggered, which
will likely find the state to now be correct.
####Summary
#### Summary
* Anytime an error occurs during `CheckApply`, you should return `(false, err)`.
* If the state is correct and no changes are needed, return `(true, nil)`.
* You should only make changes to the system if `apply` is set to `true`.
* After checking the state and possibly applying the fix, return `(false, nil)`.
* Returning `(true, err)` is a programming error and will cause a `Fatal`.
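Putting the summary together, a skeletal `CheckApply` often has the following
shape. This is a sketch only; `checkState` and `applyState` are hypothetical
helpers standing in for your resource's real logic:
```golang
// CheckApply sketches the usual check-then-apply flow for a resource.
func (obj *FooRes) CheckApply(apply bool) (bool, error) {
	ok, err := obj.checkState() // is the current state already correct?
	if err != nil {
		return false, err // an error occurred: always return (false, err)
	}
	if ok {
		return true, nil // state is correct, nothing to do
	}
	if !apply {
		return false, nil // changes are needed, but we may not make them
	}
	if err := obj.applyState(); err != nil { // try to fix the state
		return false, err
	}
	return false, nil // we made changes, so report checkOK == false
}
```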
###Watch
### Watch
```golang
Watch(chan Event) error
Watch() error
```
`Watch` is a main loop that runs and sends messages when it detects that the
state of the resource might have changed. To send a message you should write to
the input `Event` channel using the `DoSend` helper method. The Watch function
the input event channel using the `Event` helper method. The Watch function
should run continuously until a shutdown message is received. If at any time
something goes wrong, you should return an error, and the `mgmt` engine will
handle possibly restarting the main loop based on the `retry` meta parameters.
@@ -195,36 +228,35 @@ If the resource is activated in `polling` mode, the `Watch` method will not get
executed. As a result, the resource must still work even if the main loop is not
running.
####Select
#### Select
The lifetime of most resources `Watch` method should be spent in an infinite
loop that is bounded by a `select` call. The `select` call is the point where
our method hands back control to the engine (and the kernel) so that we can
sleep until something of interest wakes us up. In this loop we must process
events from the engine via the `<-obj.Events()` call, wait for the converged
timeout with `<-cuid.ConvergedTimer()`, and receive events for our resource
itself!
events from the engine via the `<-obj.Events()` call, and receive events for our
resource itself!
####Events
#### Events
If we receive an internal event from the `<-obj.Events()` method, we can read it
with the ReadEvent helper function. This function tells us if we should shutdown
our resource, and if we should generate an event. When we want to send an event,
we use the `DoSend` helper function. It is also important to mark the resource
we use the `Event` helper function. It is also important to mark the resource
state as `dirty` if we believe it might have changed. We do this with the
`StateOK(false)` function.
####Startup
#### Startup
Once the `Watch` function has finished starting up successfully, it is important
to generate one event to notify the `mgmt` engine that we're now listening
successfully, so that it can run an initial `CheckApply` to ensure we're safely
tracking a healthy state and that we didn't miss anything when `Watch` was down
or from before `mgmt` was running. It does this by calling the `Running` method.
####Converged
#### Converged
The engine might be asked to shutdown when the entire state of the system has
not seen any changes for some duration of time. In order for the engine to be
able to make this determination, each resource must report its converged state.
not seen any changes for some duration of time. The engine can determine this
automatically, but each resource can block this if it is absolutely necessary.
To do this, the `Watch` method should get the `ConvergedUID` handle that has
been prepared for it by the engine. This is done by calling the `Converger`
been prepared for it by the engine. This is done by calling the `ConvergerUID`
method on the resource object. The result can be used to set the converged
status with `SetConverged`, and to notify when the particular timeout has been
reached by waiting on `ConvergedTimer`.
@@ -233,12 +265,14 @@ Instead of interacting with the `ConvergedUID` with these two methods, we can
instead use the `StartTimer` and `ResetTimer` methods which accomplish the same
thing, but provide a `select`-free interface for different coding situations.
####Example
This particular facility is most likely not required for most resources. It may
prove to be useful if a resource wants to start off a long operation, but avoid
sending out erroneous `Event` messages to keep things alive until it finishes.
#### Example
```golang
// Watch is the listener and main loop for this resource.
func (obj *FooRes) Watch(processChan chan event.Event) error {
cuid := obj.Converger() // get the converger uid used to report status
func (obj *FooRes) Watch() error {
// setup the Foo resource
var err error
if err, obj.foo = OpenFoo(); err != nil {
@@ -247,59 +281,49 @@ func (obj *FooRes) Watch(processChan chan event.Event) error {
defer obj.whatever.CloseFoo() // shutdown our
// notify engine that we're running
if err := obj.Running(processChan); err != nil {
if err := obj.Running(); err != nil {
return err // bubble up a NACK...
}
var send = false // send event?
var exit = false
var exit *error
for {
obj.SetState(ResStateWatching) // reset
select {
case event := <-obj.Events():
cuid.SetConverged(false)
// we avoid sending events on unpause
if exit, send = obj.ReadEvent(&event); exit {
return nil // exit
if exit, send = obj.ReadEvent(event); exit != nil {
return *exit // exit
}
// the actual events!
case event := <-obj.foo.Events:
if is_an_event {
send = true // used below
cuid.SetConverged(false)
obj.StateOK(false) // dirty
}
// event errors
case err := <-obj.foo.Errors:
cuuid.SetConverged(false)
return err // will cause a retry or permanent failure
case <-cuid.ConvergedTimer():
cuid.SetConverged(true) // converged!
continue
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
if exit, err := obj.DoSend(processChan, ""); exit || err != nil {
return err // we exit or bubble up a NACK...
}
obj.Event() // send the event!
}
}
}
```
####Summary
#### Summary
* Remember to call the appropriate `converger` methods throughout the resource.
* Remember to call `Startup` when the `Watch` is running successfully.
* Remember to process internal events and shutdown promptly if asked to.
* Ensure the design of your resource is well thought out.
* Have a look at the existing resources for a rough idea of how this all works.
###Compare
### Compare
```golang
Compare(Res) bool
```
@@ -316,80 +340,106 @@ need to be changed. On occasion, not all of them need to be compared, in
particular if they store some generated state, or if they aren't significant in
some way.
####Example
#### Example
```golang
// Compare two resources and return if they are equivalent.
func (obj *FooRes) Compare(res Res) bool {
switch res.(type) {
case *FooRes: // only compare to other resources of the Foo kind!
res := res.(*FileRes)
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
if obj.whatever != res.whatever {
return false
}
if obj.Flag != res.Flag {
return false
}
default:
return false // different kind of resource
func (obj *FooRes) Compare(r Res) bool {
// we can only compare FooRes to others of the same resource kind
res, ok := r.(*FooRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
if obj.whatever != res.whatever {
return false
}
if obj.Flag != res.Flag {
return false
}
return true // they must match!
}
```
###Validate
### UIDs
```golang
Validate() error
UIDs() []ResUID
```
This method is used to validate if the populated resource struct is a valid
representation of the resource kind. If it does not conform to the resource
specifications, it should generate an error. If you notice that this method is
quite large, it might be an indication that you might want to reconsider the
parameter list and interface to this resource.
###GetUIDs
```golang
GetUIDs() []ResUID
```
The `GetUIDs` method returns a list of `ResUID` interfaces that represent the
The `UIDs` method returns a list of `ResUID` interfaces that represent the
particular resource uniquely. This is used with the AutoEdges API to determine
if another resource can match a dependency to this one.
###AutoEdges
### AutoEdges
```golang
AutoEdges() AutoEdge
AutoEdges() (AutoEdge, error)
```
This returns a struct that implements the `AutoEdge` interface. This struct
is used to match other resources that might be relevant dependencies for this
resource.
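For a resource that has no automatic edges to contribute, the simplest form (a
minimal sketch) just returns nothing:
```golang
// AutoEdges returns the AutoEdge interface. This sketch resource doesn't
// offer any automatic edges, so it returns nil for both values.
func (obj *FooRes) AutoEdges() (AutoEdge, error) {
	return nil, nil
}
```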
###CollectPattern
### CollectPattern
```golang
CollectPattern() string
```
This is currently a stub and will be updated once the DSL is further along.
##Further considerations
### UnmarshalYAML
```golang
UnmarshalYAML(unmarshal func(interface{}) error) error // optional
```
This is optional, but recommended for any resource that will have a YAML
accessible struct. It is not required because to do so would mean that
third-party or custom resources (such as those someone writes to use with
`libmgmt`) would have to implement this needlessly.
The signature intentionally matches what is required to satisfy the `go-yaml`
[Unmarshaler](https://godoc.org/gopkg.in/yaml.v2#Unmarshaler) interface.
#### Example
```golang
// UnmarshalYAML is the custom unmarshal handler for this struct.
// It is primarily useful for setting the defaults.
func (obj *FooRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes FooRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*FooRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to FooRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = FooRes(raw) // restore from indirection with type conversion!
return nil
}
```
## Further considerations
There is some additional information that any resource writer will need to know.
Each issue is listed separately below!
###Resource struct
### Resource struct
Each resource will implement methods as pointer receivers on a resource struct.
The resource struct must include an anonymous reference to the `BaseRes` struct.
The naming convention for resources is that they end with a `Res` suffix. If
you'd like your resource to be accessible by the `YAML` graph API (GAPI), then
you'll need to include the appropriate YAML fields as shown below.
####Example
#### Example
```golang
type FooRes struct {
BaseRes `yaml:",inline"` // base properties
@@ -402,43 +452,26 @@ type FooRes struct {
}
```
###YAML
In addition to labelling your resource struct with YAML fields, you must also
add an entry to the internal `GraphConfig` struct. It is a fairly straight
forward one line patch.
### Resource registration
All resources must be registered with the engine so that they can be found. This
also ensures they can be encoded and decoded. Make sure to include the following
code snippet for this to work.
```golang
type GraphConfig struct {
// [snip...]
Resources struct {
Noop []*resources.NoopRes `yaml:"noop"`
File []*resources.FileRes `yaml:"file"`
// [snip...]
Foo []*resources.FooRes `yaml:"foo"` // tada :)
}
}
```
###Gob registration
All resources must be registered with the `golang` _gob_ module so that they can
be encoded and decoded. Make sure to include the following code snippet for this
to work.
```golang
import "encoding/gob"
func init() { // special golang method that runs once
gob.Register(&FooRes{}) // substitude your resource here
// set your resource kind and struct here (the kind must be lower case)
RegisterResource("foo", func() Res { return &FooRes{} })
}
```
##Automatic edges
## Automatic edges
Automatic edges in `mgmt` are well described in [this article](https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/).
The best example of this technique can be seen in the `svc` resource.
Unfortunately no further documentation about this subject has been written. To
expand this section, please send a patch! Please contact us if you'd like to
work on a resource that uses this feature, or to add it to an existing one!
##Automatic grouping
## Automatic grouping
Automatic grouping in `mgmt` is well described in [this article](https://ttboj.wordpress.com/2016/03/30/automatic-grouping-in-mgmt/).
The best example of this technique can be seen in the `pkg` resource.
Unfortunately no further documentation about this subject has been written. To
@@ -446,7 +479,7 @@ expand this section, please send a patch! Please contact us if you'd like to
work on a resource that uses this feature, or to add it to an existing one!
##Send/Recv
## Send/Recv
In `mgmt` there is a novel concept called _Send/Recv_. For some background,
please [read the introductory article](https://ttboj.wordpress.com/2016/12/07/sendrecv-in-mgmt/).
When using this feature, the engine will automatically send the user specified
value to the intended destination without requiring any special resource code.
To receive values that were sent to your resource, check the `Recv` map, as
shown in the snippet below. This can _only_ be done inside of the `CheckApply`
function!
```golang
// inside CheckApply, probably near the top
if val, exists := obj.Recv["SomeKey"]; exists {
	log.Printf("SomeKey was sent to us from: %s.%s", val.Res, val.Key)
	if val.Changed {
		log.Printf("SomeKey was just updated!")
		// you may want to invalidate some local cache
	}
}
```
If a received value has changed, your resource may need to do some extra work,
such as for cache invalidation.
Remember, `Send/Recv` only changes your resource code if you cache state.
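For context on where these values come from: the mapping is declared when the
graph is built, by filling in the receiver's `Recv` map with the destination
field name, the sending resource, and the key to read from it. The fragment
below is adapted from the libmgmt send->recv example added elsewhere in this
commit, where `exec1`, `g` and `metaparams` are set up earlier in that example:

```golang
// copy the exec resource's Output key into this file resource's Content field
output := &resources.FileRes{
	BaseRes: resources.BaseRes{
		Name:       "output",
		Kind:       "file",
		MetaParams: metaparams,
		// send->recv!
		Recv: map[string]*resources.Send{
			"Content": {Res: exec1, Key: "Output"},
		},
	},
	Path:  "/tmp/mgmt/output",
	State: "present",
}
g.AddVertex(output)
g.AddEdge(exec1, output, &resources.Edge{Name: "e0"})
```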
## Composite resources
Composite resources are resources which embed one or more existing resources.
This is useful to prevent code duplication in higher level resource scenarios.
The best example of this technique can be seen in the `nspawn` resource, which
embeds the `svc` resource internally.
Unfortunately no further documentation about this subject has been written. To
expand this section, please send a patch! Please contact us if you'd like to
work on a resource that uses this feature, or to add it to an existing one!
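The general shape is that the outer resource keeps an instance of the inner
resource as a field, initializes it alongside itself, and forwards part of its
own work to it. The sketch below only illustrates that delegation pattern with
a made-up `BarRes` wrapping the svc resource; the real nspawn resource in this
repository is the reference to copy from, and the exact field names and method
signatures here are assumptions:

```golang
// BarRes is a hypothetical composite resource that reuses the svc resource.
type BarRes struct {
	BaseRes `yaml:",inline"` // base properties

	svc *SvcRes // inner resource that does part of the work for us
}

// Init builds and initializes the embedded resource as well as ourselves.
func (obj *BarRes) Init() error {
	obj.svc = &SvcRes{
		BaseRes: BaseRes{Name: obj.GetName(), Kind: "svc"},
		State:   "running", // we want the matching service running
	}
	if err := obj.svc.Init(); err != nil {
		return err
	}
	return obj.BaseRes.Init() // don't forget the base initializer
}

// CheckApply does any Bar specific work, then delegates to the inner svc.
func (obj *BarRes) CheckApply(apply bool) (bool, error) {
	// ... check and optionally apply the Bar specific state here ...
	return obj.svc.CheckApply(apply) // let the svc resource do its part
}
```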
## Frequently asked questions
(Send your questions as a patch to this FAQ! I'll review it, merge it, and
respond by commit with the answer.)
### Can I write resources in a different language?
Currently `golang` is the only supported language for built-in resources. We
might consider allowing external resources to be imported in the future. This
will likely require a language that can expose a C-like API, such as `python` or
`ruby`. Custom `golang` resources are already possible when using mgmt as a lib.
Higher level resource collections will be possible once the `mgmt` DSL is ready.
### What new resource primitives need writing?
There are still many ideas for new resources that haven't been written yet. If
you'd like to contribute one, please contact us and tell us about your idea!
### Where can I find more information about mgmt?
Additional blog posts, videos and other material [is available!](https://github.com/purpleidea/mgmt/#on-the-web).
## Suggestions
If you have any ideas for API changes or other improvements to resource writing,
please let us know! We're still pre 1.0 and pre 0.1 and happy to break API in
order to get it right!
## Authors
Copyright (C) 2013-2016+ James Shubin and the project contributors
Please see the
[AUTHORS](https://github.com/purpleidea/mgmt/tree/master/AUTHORS) file
for more information.
* [github](https://github.com/purpleidea/)
* [&#64;purpleidea](https://twitter.com/#!/purpleidea)
* [https://ttboj.wordpress.com/](https://ttboj.wordpress.com/)

File diff suppressed because it is too large.

etcd/methods.go (new file)

@@ -0,0 +1,412 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package etcd
import (
"fmt"
"log"
"strconv"
"strings"
etcd "github.com/coreos/etcd/clientv3"
rpctypes "github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
etcdtypes "github.com/coreos/etcd/pkg/types"
context "golang.org/x/net/context"
)
// TODO: Could all these Etcd*(obj *EmbdEtcd, ...) functions which deal with the
// interface between etcd paths and behaviour be grouped into a single struct ?
// Nominate nominates a particular client to be a server (peer).
func Nominate(obj *EmbdEtcd, hostname string, urls etcdtypes.URLs) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Nominate(%v): %v", hostname, urls.String())
defer log.Printf("Trace: Etcd: Nominate(%v): Finished!", hostname)
}
// nominate someone to be a server
nominate := fmt.Sprintf("/%s/nominated/%s", NS, hostname)
ops := []etcd.Op{} // list of ops in this txn
if urls != nil {
ops = append(ops, etcd.OpPut(nominate, urls.String())) // TODO: add a TTL? (etcd.WithLease)
} else { // delete message if set to erase
ops = append(ops, etcd.OpDelete(nominate))
}
if _, err := obj.Txn(nil, ops, nil); err != nil {
return fmt.Errorf("nominate failed") // exit in progress?
}
return nil
}
// Nominated returns a urls map of nominated etcd server volunteers.
// NOTE: I know 'nominees' might be more correct, but is less consistent here
func Nominated(obj *EmbdEtcd) (etcdtypes.URLsMap, error) {
path := fmt.Sprintf("/%s/nominated/", NS)
keyMap, err := obj.Get(path, etcd.WithPrefix()) // map[string]string, bool
if err != nil {
return nil, fmt.Errorf("nominated isn't available: %v", err)
}
nominated := make(etcdtypes.URLsMap)
for key, val := range keyMap { // loop through directory of nominated
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of nominee
if val == "" { // skip "erased" values
continue
}
urls, err := etcdtypes.NewURLs(strings.Split(val, ","))
if err != nil {
return nil, fmt.Errorf("nominated data format error: %v", err)
}
nominated[name] = urls // add to map
if obj.flags.Debug {
log.Printf("Etcd: Nominated(%v): %v", name, val)
}
}
return nominated, nil
}
// Volunteer offers yourself up to be a server if needed.
func Volunteer(obj *EmbdEtcd, urls etcdtypes.URLs) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Volunteer(%v): %v", obj.hostname, urls.String())
defer log.Printf("Trace: Etcd: Volunteer(%v): Finished!", obj.hostname)
}
// volunteer to be a server
volunteer := fmt.Sprintf("/%s/volunteers/%s", NS, obj.hostname)
ops := []etcd.Op{} // list of ops in this txn
if urls != nil {
// XXX: adding a TTL is crucial! (i think)
ops = append(ops, etcd.OpPut(volunteer, urls.String())) // value is usually a peer "serverURL"
} else { // delete message if set to erase
ops = append(ops, etcd.OpDelete(volunteer))
}
if _, err := obj.Txn(nil, ops, nil); err != nil {
return fmt.Errorf("volunteering failed") // exit in progress?
}
return nil
}
// Volunteers returns a urls map of available etcd server volunteers.
func Volunteers(obj *EmbdEtcd) (etcdtypes.URLsMap, error) {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Volunteers()")
defer log.Printf("Trace: Etcd: Volunteers(): Finished!")
}
path := fmt.Sprintf("/%s/volunteers/", NS)
keyMap, err := obj.Get(path, etcd.WithPrefix())
if err != nil {
return nil, fmt.Errorf("volunteers aren't available: %v", err)
}
volunteers := make(etcdtypes.URLsMap)
for key, val := range keyMap { // loop through directory of volunteers
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of volunteer
if val == "" { // skip "erased" values
continue
}
urls, err := etcdtypes.NewURLs(strings.Split(val, ","))
if err != nil {
return nil, fmt.Errorf("volunteers data format error: %v", err)
}
volunteers[name] = urls // add to map
if obj.flags.Debug {
log.Printf("Etcd: Volunteer(%v): %v", name, val)
}
}
return volunteers, nil
}
// AdvertiseEndpoints advertises the list of available client endpoints.
func AdvertiseEndpoints(obj *EmbdEtcd, urls etcdtypes.URLs) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: AdvertiseEndpoints(%v): %v", obj.hostname, urls.String())
defer log.Printf("Trace: Etcd: AdvertiseEndpoints(%v): Finished!", obj.hostname)
}
// advertise endpoints
endpoints := fmt.Sprintf("/%s/endpoints/%s", NS, obj.hostname)
ops := []etcd.Op{} // list of ops in this txn
if urls != nil {
// TODO: add a TTL? (etcd.WithLease)
ops = append(ops, etcd.OpPut(endpoints, urls.String())) // value is usually a "clientURL"
} else { // delete message if set to erase
ops = append(ops, etcd.OpDelete(endpoints))
}
if _, err := obj.Txn(nil, ops, nil); err != nil {
return fmt.Errorf("endpoint advertising failed") // exit in progress?
}
return nil
}
// Endpoints returns a urls map of available etcd server endpoints.
func Endpoints(obj *EmbdEtcd) (etcdtypes.URLsMap, error) {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Endpoints()")
defer log.Printf("Trace: Etcd: Endpoints(): Finished!")
}
path := fmt.Sprintf("/%s/endpoints/", NS)
keyMap, err := obj.Get(path, etcd.WithPrefix())
if err != nil {
return nil, fmt.Errorf("endpoints aren't available: %v", err)
}
endpoints := make(etcdtypes.URLsMap)
for key, val := range keyMap { // loop through directory of endpoints
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of volunteer
if val == "" { // skip "erased" values
continue
}
urls, err := etcdtypes.NewURLs(strings.Split(val, ","))
if err != nil {
return nil, fmt.Errorf("endpoints data format error: %v", err)
}
endpoints[name] = urls // add to map
if obj.flags.Debug {
log.Printf("Etcd: Endpoint(%v): %v", name, val)
}
}
return endpoints, nil
}
// SetHostnameConverged sets whether a specific hostname is converged.
func SetHostnameConverged(obj *EmbdEtcd, hostname string, isConverged bool) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: SetHostnameConverged(%s): %v", hostname, isConverged)
defer log.Printf("Trace: Etcd: SetHostnameConverged(%v): Finished!", hostname)
}
converged := fmt.Sprintf("/%s/converged/%s", NS, hostname)
op := []etcd.Op{etcd.OpPut(converged, fmt.Sprintf("%t", isConverged))}
if _, err := obj.Txn(nil, op, nil); err != nil { // TODO: do we need a skipConv flag here too?
return fmt.Errorf("set converged failed") // exit in progress?
}
return nil
}
// HostnameConverged returns a map of every hostname's converged state.
func HostnameConverged(obj *EmbdEtcd) (map[string]bool, error) {
if obj.flags.Trace {
log.Printf("Trace: Etcd: HostnameConverged()")
defer log.Printf("Trace: Etcd: HostnameConverged(): Finished!")
}
path := fmt.Sprintf("/%s/converged/", NS)
keyMap, err := obj.ComplexGet(path, true, etcd.WithPrefix()) // don't un-converge
if err != nil {
return nil, fmt.Errorf("converged values aren't available: %v", err)
}
converged := make(map[string]bool)
for key, val := range keyMap { // loop through directory...
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of key
if val == "" { // skip "erased" values
continue
}
b, err := strconv.ParseBool(val)
if err != nil {
return nil, fmt.Errorf("converged data format error: %v", err)
}
converged[name] = b // add to map
}
return converged, nil
}
// AddHostnameConvergedWatcher adds a watcher with a callback that runs on
// hostname state changes.
func AddHostnameConvergedWatcher(obj *EmbdEtcd, callbackFn func(map[string]bool) error) (func(), error) {
path := fmt.Sprintf("/%s/converged/", NS)
internalCbFn := func(re *RE) error {
// TODO: get the value from the response, and apply delta...
// for now, just run a get operation which is easier to code!
m, err := HostnameConverged(obj)
if err != nil {
return err
}
return callbackFn(m) // call my function
}
return obj.AddWatcher(path, internalCbFn, true, true, etcd.WithPrefix()) // no block and no converger reset
}
// SetClusterSize sets the ideal target cluster size of etcd peers.
func SetClusterSize(obj *EmbdEtcd, value uint16) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: SetClusterSize(): %v", value)
defer log.Printf("Trace: Etcd: SetClusterSize(): Finished!")
}
key := fmt.Sprintf("/%s/idealClusterSize", NS)
if err := obj.Set(key, strconv.FormatUint(uint64(value), 10)); err != nil {
return fmt.Errorf("function SetClusterSize failed: %v", err) // exit in progress?
}
return nil
}
// GetClusterSize gets the ideal target cluster size of etcd peers.
func GetClusterSize(obj *EmbdEtcd) (uint16, error) {
key := fmt.Sprintf("/%s/idealClusterSize", NS)
keyMap, err := obj.Get(key)
if err != nil {
return 0, fmt.Errorf("function GetClusterSize failed: %v", err)
}
val, exists := keyMap[key]
if !exists || val == "" {
return 0, fmt.Errorf("function GetClusterSize failed: idealClusterSize is missing or empty")
}
v, err := strconv.ParseUint(val, 10, 16)
if err != nil {
return 0, fmt.Errorf("function GetClusterSize failed: %v", err)
}
return uint16(v), nil
}
// MemberAdd adds a member to the cluster.
func MemberAdd(obj *EmbdEtcd, peerURLs etcdtypes.URLs) (*etcd.MemberAddResponse, error) {
//obj.Connect(false) // TODO: ?
ctx := context.Background()
var response *etcd.MemberAddResponse
var err error
for {
if obj.exiting { // the exit signal has been sent!
return nil, fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
response, err = obj.client.MemberAdd(ctx, peerURLs.StringSlice())
obj.rLock.RUnlock()
if err == nil {
break
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return nil, err
}
}
return response, nil
}
// MemberRemove removes a member by mID and returns if it worked, and also
// if there was an error. This is because it might have run without error, but
// the member wasn't found, for example.
func MemberRemove(obj *EmbdEtcd, mID uint64) (bool, error) {
//obj.Connect(false) // TODO: ?
ctx := context.Background()
for {
if obj.exiting { // the exit signal has been sent!
return false, fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
_, err := obj.client.MemberRemove(ctx, mID)
obj.rLock.RUnlock()
if err == nil {
break
} else if err == rpctypes.ErrMemberNotFound {
// if we get this, member already shut itself down :)
return false, nil
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return false, err
}
}
return true, nil
}
// Members returns information on cluster membership.
// The member IDs are the keys, because an empty name means unstarted!
// TODO: consider queueing this through the main loop with CtxError(ctx, err)
func Members(obj *EmbdEtcd) (map[uint64]string, error) {
//obj.Connect(false) // TODO: ?
ctx := context.Background()
var response *etcd.MemberListResponse
var err error
for {
if obj.exiting { // the exit signal has been sent!
return nil, fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
if obj.flags.Trace {
log.Printf("Trace: Etcd: Members(): Endpoints are: %v", obj.client.Endpoints())
}
response, err = obj.client.MemberList(ctx)
obj.rLock.RUnlock()
if err == nil {
break
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return nil, err
}
}
members := make(map[uint64]string)
for _, x := range response.Members {
members[x.ID] = x.Name // x.Name will be "" if unstarted!
}
return members, nil
}
// Leader returns the current leader of the etcd server cluster.
func Leader(obj *EmbdEtcd) (string, error) {
//obj.Connect(false) // TODO: ?
var err error
membersMap := make(map[uint64]string)
if membersMap, err = Members(obj); err != nil {
return "", err
}
addresses := obj.LocalhostClientURLs() // heuristic, but probably correct
if len(addresses) == 0 {
// probably a programming error...
return "", fmt.Errorf("programming error")
}
endpoint := addresses[0].Host // FIXME: arbitrarily picked the first one
// part two
ctx := context.Background()
var response *etcd.StatusResponse
for {
if obj.exiting { // the exit signal has been sent!
return "", fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
response, err = obj.client.Maintenance.Status(ctx, endpoint)
obj.rLock.RUnlock()
if err == nil {
break
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return "", err
}
}
// isLeader: response.Header.MemberId == response.Leader
for id, name := range membersMap {
if id == response.Leader {
return name, nil
}
}
return "", fmt.Errorf("members map is not current") // not found
}

etcd/resources.go (new file)

@@ -0,0 +1,181 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package etcd
import (
"fmt"
"log"
"strings"
"github.com/purpleidea/mgmt/resources"
"github.com/purpleidea/mgmt/util"
etcd "github.com/coreos/etcd/clientv3"
)
// WatchResources returns a channel that outputs events when exported resources
// change.
// TODO: Filter our watch (on the server side if possible) based on the
// collection prefixes and filters that we care about...
func WatchResources(obj *EmbdEtcd) chan error {
ch := make(chan error, 1) // buffer it so we can measure it
path := fmt.Sprintf("/%s/exported/", NS)
callback := func(re *RE) error {
// TODO: is this even needed? it used to happen on conn errors
log.Printf("Etcd: Watch: Path: %v", path) // event
if re == nil || re.response.Canceled {
return fmt.Errorf("watch is empty") // will cause a CtxError+retry
}
// we normally need to check if anything changed since the last
// event, since a set (export) with no changes still causes the
// watcher to trigger and this would cause an infinite loop. we
// don't need to do this check anymore because we do the export
// transactionally, and only if a change is needed. since it is
// atomic, all the changes arrive together which avoids dupes!!
if len(ch) == 0 { // send event only if one isn't pending
// this check avoids multiple events all queueing up and then
// being released continuously long after the changes stopped
// do not block!
ch <- nil // event
}
return nil
}
_, _ = obj.AddWatcher(path, callback, true, false, etcd.WithPrefix()) // no need to check errors
return ch
}
// SetResources exports all of the resources which we pass in to etcd.
func SetResources(obj *EmbdEtcd, hostname string, resourceList []resources.Res) error {
// key structure is /$NS/exported/$hostname/resources/$uid = $data
var kindFilter []string // empty to get from everyone
hostnameFilter := []string{hostname}
// this is not a race because we should only be reading keys which we
// set, and there should not be any contention with other hosts here!
originals, err := GetResources(obj, hostnameFilter, kindFilter)
if err != nil {
return err
}
if len(originals) == 0 && len(resourceList) == 0 { // special case of no add or del
return nil
}
ifs := []etcd.Cmp{} // list matching the desired state
ops := []etcd.Op{} // list of ops in this transaction
for _, res := range resourceList {
if res.GetKind() == "" {
log.Fatalf("Etcd: SetResources: Error: Empty kind: %v", res.GetName())
}
uid := fmt.Sprintf("%s/%s", res.GetKind(), res.GetName())
path := fmt.Sprintf("/%s/exported/%s/resources/%s", NS, hostname, uid)
if data, err := resources.ResToB64(res); err == nil {
ifs = append(ifs, etcd.Compare(etcd.Value(path), "=", data)) // desired state
ops = append(ops, etcd.OpPut(path, data))
} else {
return fmt.Errorf("can't convert to B64: %v", err)
}
}
match := func(res resources.Res, resourceList []resources.Res) bool { // helper lambda
for _, x := range resourceList {
if res.GetKind() == x.GetKind() && res.GetName() == x.GetName() {
return true
}
}
return false
}
hasDeletes := false
// delete old, now unused resources here...
for _, res := range originals {
if res.GetKind() == "" {
log.Fatalf("Etcd: SetResources: Error: Empty kind: %v", res.GetName())
}
uid := fmt.Sprintf("%s/%s", res.GetKind(), res.GetName())
path := fmt.Sprintf("/%s/exported/%s/resources/%s", NS, hostname, uid)
if match(res, resourceList) { // if we match, no need to delete!
continue
}
ops = append(ops, etcd.OpDelete(path))
hasDeletes = true
}
// if everything is already correct, do nothing, otherwise, run the ops!
// it's important to do this in one transaction, and atomically, because
// this way, we only generate one watch event, and only when it's needed
if hasDeletes { // always run, ifs don't matter
_, err = obj.Txn(nil, ops, nil) // TODO: does this run? it should!
} else {
_, err = obj.Txn(ifs, nil, ops) // TODO: do we need to look at response?
}
return err
}
// GetResources collects all of the resources which match a filter from etcd.
// If the kindfilter or hostnameFilter is empty, then it assumes no filtering...
// TODO: Expand this with a more powerful filter based on what we eventually
// support in our collect DSL. Ideally a server side filter like WithFilter().
// We could do this if the pattern was /$NS/exported/$kind/$hostname/$uid = $data.
func GetResources(obj *EmbdEtcd, hostnameFilter, kindFilter []string) ([]resources.Res, error) {
// key structure is /$NS/exported/$hostname/resources/$uid = $data
path := fmt.Sprintf("/%s/exported/", NS)
resourceList := []resources.Res{}
keyMap, err := obj.Get(path, etcd.WithPrefix(), etcd.WithSort(etcd.SortByKey, etcd.SortAscend))
if err != nil {
return nil, fmt.Errorf("could not get resources: %v", err)
}
for key, val := range keyMap {
if !strings.HasPrefix(key, path) { // sanity check
continue
}
str := strings.Split(key[len(path):], "/")
if len(str) != 4 {
return nil, fmt.Errorf("unexpected chunk count")
}
hostname, r, kind, name := str[0], str[1], str[2], str[3]
if r != "resources" {
return nil, fmt.Errorf("unexpected chunk pattern")
}
if kind == "" {
return nil, fmt.Errorf("unexpected kind chunk")
}
// FIXME: ideally this would be a server side filter instead!
if len(hostnameFilter) > 0 && !util.StrInList(hostname, hostnameFilter) {
continue
}
// FIXME: ideally this would be a server side filter instead!
if len(kindFilter) > 0 && !util.StrInList(kind, kindFilter) {
continue
}
if obj, err := resources.B64ToRes(val); err == nil {
log.Printf("Etcd: Get: (Hostname, Kind, Name): (%s, %s, %s)", hostname, kind, name)
resourceList = append(resourceList, obj)
} else {
return nil, fmt.Errorf("can't convert from B64: %v", err)
}
}
return resourceList, nil
}

etcd/str.go (new file)

@@ -0,0 +1,105 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package etcd
import (
"errors"
"fmt"
etcd "github.com/coreos/etcd/clientv3"
errwrap "github.com/pkg/errors"
)
// ErrNotExist is returned when GetStr can not find the requested key.
// TODO: https://dave.cheney.net/2016/04/07/constant-errors
var ErrNotExist = errors.New("errNotExist")
// WatchStr returns a channel which spits out events on key activity.
// FIXME: It should close the channel when it's done, and spit out errors when
// something goes wrong.
func WatchStr(obj *EmbdEtcd, key string) chan error {
// new key structure is /$NS/strings/$key = $data
path := fmt.Sprintf("/%s/strings/%s", NS, key)
ch := make(chan error, 1)
// FIXME: fix our API so that we get a close event on shutdown.
callback := func(re *RE) error {
// TODO: is this even needed? it used to happen on conn errors
//log.Printf("Etcd: Watch: Path: %v", path) // event
if re == nil || re.response.Canceled {
return fmt.Errorf("watch is empty") // will cause a CtxError+retry
}
if len(ch) == 0 { // send event only if one isn't pending
ch <- nil // event
}
return nil
}
_, _ = obj.AddWatcher(path, callback, true, false, etcd.WithPrefix()) // no need to check errors
return ch
}
// GetStr collects the string which matches a global namespace in etcd.
func GetStr(obj *EmbdEtcd, key string) (string, error) {
// new key structure is /$NS/strings/$key = $data
path := fmt.Sprintf("/%s/strings/%s", NS, key)
keyMap, err := obj.Get(path, etcd.WithPrefix())
if err != nil {
return "", errwrap.Wrapf(err, "could not get strings in: %s", key)
}
if len(keyMap) == 0 {
return "", ErrNotExist
}
if count := len(keyMap); count != 1 {
return "", fmt.Errorf("returned %d entries", count)
}
val, exists := keyMap[path]
if !exists {
return "", fmt.Errorf("path `%s` is missing", path)
}
//log.Printf("Etcd: GetStr(%s): %s", key, val)
return val, nil
}
// SetStr sets a key and hostname pair to a certain value. If the value is
// nil, then it deletes the key. Otherwise the value should point to a string.
// TODO: TTL or delete disconnect?
func SetStr(obj *EmbdEtcd, key string, data *string) error {
// key structure is /$NS/strings/$key = $data
path := fmt.Sprintf("/%s/strings/%s", NS, key)
ifs := []etcd.Cmp{} // list matching the desired state
ops := []etcd.Op{} // list of ops in this transaction (then)
els := []etcd.Op{} // list of ops in this transaction (else)
if data == nil { // perform a delete
// TODO: use https://github.com/coreos/etcd/pull/7417 if merged
//ifs = append(ifs, etcd.KeyExists(path))
ifs = append(ifs, etcd.Compare(etcd.Version(path), ">", 0))
ops = append(ops, etcd.OpDelete(path))
} else {
data := *data // get the real value
ifs = append(ifs, etcd.Compare(etcd.Value(path), "=", data)) // desired state
els = append(els, etcd.OpPut(path, data))
}
// it's important to do this in one transaction, and atomically, because
// this way, we only generate one watch event, and only when it's needed
_, err := obj.Txn(ifs, ops, els) // TODO: do we need to look at response?
return errwrap.Wrapf(err, "could not set strings in: %s", key)
}

etcd/strmap.go (new file)

@@ -0,0 +1,115 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package etcd
import (
"fmt"
"strings"
"github.com/purpleidea/mgmt/util"
etcd "github.com/coreos/etcd/clientv3"
errwrap "github.com/pkg/errors"
)
// WatchStrMap returns a channel which spits out events on key activity.
// FIXME: It should close the channel when it's done, and spit out errors when
// something goes wrong.
func WatchStrMap(obj *EmbdEtcd, key string) chan error {
// new key structure is /$NS/strings/$key/$hostname = $data
path := fmt.Sprintf("/%s/strings/%s", NS, key)
ch := make(chan error, 1)
// FIXME: fix our API so that we get a close event on shutdown.
callback := func(re *RE) error {
// TODO: is this even needed? it used to happen on conn errors
//log.Printf("Etcd: Watch: Path: %v", path) // event
if re == nil || re.response.Canceled {
return fmt.Errorf("watch is empty") // will cause a CtxError+retry
}
if len(ch) == 0 { // send event only if one isn't pending
ch <- nil // event
}
return nil
}
_, _ = obj.AddWatcher(path, callback, true, false, etcd.WithPrefix()) // no need to check errors
return ch
}
// GetStrMap collects all of the strings which match a namespace in etcd.
func GetStrMap(obj *EmbdEtcd, hostnameFilter []string, key string) (map[string]string, error) {
// old key structure is /$NS/strings/$hostname/$key = $data
// new key structure is /$NS/strings/$key/$hostname = $data
// FIXME: if we have the $key as the last token (old key structure), we
// can allow the key to contain the slash char, otherwise we need to
// verify that one isn't present in the input string.
path := fmt.Sprintf("/%s/strings/%s", NS, key)
keyMap, err := obj.Get(path, etcd.WithPrefix(), etcd.WithSort(etcd.SortByKey, etcd.SortAscend))
if err != nil {
return nil, errwrap.Wrapf(err, "could not get strings in: %s", key)
}
result := make(map[string]string)
for key, val := range keyMap {
if !strings.HasPrefix(key, path) { // sanity check
continue
}
str := strings.Split(key[len(path):], "/")
if len(str) != 2 {
return nil, fmt.Errorf("unexpected chunk count of %d", len(str))
}
_, hostname := str[0], str[1]
if hostname == "" {
return nil, fmt.Errorf("unexpected chunk length of %d", len(hostname))
}
// FIXME: ideally this would be a server side filter instead!
if len(hostnameFilter) > 0 && !util.StrInList(hostname, hostnameFilter) {
continue
}
//log.Printf("Etcd: GetStr(%s): (Hostname, Data): (%s, %s)", key, hostname, val)
result[hostname] = val
}
return result, nil
}
// SetStrMap sets a key and hostname pair to a certain value. If the value is
// nil, then it deletes the key. Otherwise the value should point to a string.
// TODO: TTL or delete disconnect?
func SetStrMap(obj *EmbdEtcd, hostname, key string, data *string) error {
// key structure is /$NS/strings/$key/$hostname = $data
path := fmt.Sprintf("/%s/strings/%s/%s", NS, key, hostname)
ifs := []etcd.Cmp{} // list matching the desired state
ops := []etcd.Op{} // list of ops in this transaction (then)
els := []etcd.Op{} // list of ops in this transaction (else)
if data == nil { // perform a delete
// TODO: use https://github.com/coreos/etcd/pull/7417 if merged
//ifs = append(ifs, etcd.KeyExists(path))
ifs = append(ifs, etcd.Compare(etcd.Version(path), ">", 0))
ops = append(ops, etcd.OpDelete(path))
} else {
data := *data // get the real value
ifs = append(ifs, etcd.Compare(etcd.Value(path), "=", data)) // desired state
els = append(els, etcd.OpPut(path, data))
}
// it's important to do this in one transaction, and atomically, because
// this way, we only generate one watch event, and only when it's needed
_, err := obj.Txn(ifs, ops, els) // TODO: do we need to look at response?
return errwrap.Wrapf(err, "could not set strings in: %s", key)
}


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package etcd
@@ -27,10 +27,16 @@ type World struct {
EmbdEtcd *EmbdEtcd
}
// ResWatch returns a channel which spits out events on possible exported
// resource changes.
func (obj *World) ResWatch() chan error {
return WatchResources(obj.EmbdEtcd)
}
// ResExport exports a list of resources under our hostname namespace.
// Subsequent calls replace the previously set collection atomically.
func (obj *World) ResExport(resourceList []resources.Res) error {
return EtcdSetResources(obj.EmbdEtcd, obj.Hostname, resourceList)
return SetResources(obj.EmbdEtcd, obj.Hostname, resourceList)
}
// ResCollect gets the collection of exported resources which match the filter.
@@ -39,5 +45,51 @@ func (obj *World) ResCollect(hostnameFilter, kindFilter []string) ([]resources.R
// XXX: should we be restricted to retrieving resources that were
// exported with a tag that allows or restricts our hostname? We could
// enforce that here if the underlying API supported it... Add this?
return EtcdGetResources(obj.EmbdEtcd, hostnameFilter, kindFilter)
return GetResources(obj.EmbdEtcd, hostnameFilter, kindFilter)
}
// StrWatch returns a channel which spits out events on possible string changes.
func (obj *World) StrWatch(namespace string) chan error {
return WatchStr(obj.EmbdEtcd, namespace)
}
// StrIsNotExist returns whether the error from StrGet is a key missing error.
func (obj *World) StrIsNotExist(err error) bool {
return err == ErrNotExist
}
// StrGet returns the value for the given namespace.
func (obj *World) StrGet(namespace string) (string, error) {
return GetStr(obj.EmbdEtcd, namespace)
}
// StrSet sets the namespace value to a particular string.
func (obj *World) StrSet(namespace, value string) error {
return SetStr(obj.EmbdEtcd, namespace, &value)
}
// StrDel deletes the value in a particular namespace.
func (obj *World) StrDel(namespace string) error {
return SetStr(obj.EmbdEtcd, namespace, nil)
}
// StrMapWatch returns a channel which spits out events on possible string changes.
func (obj *World) StrMapWatch(namespace string) chan error {
return WatchStrMap(obj.EmbdEtcd, namespace)
}
// StrMapGet returns a map of hostnames to values in the given namespace.
func (obj *World) StrMapGet(namespace string) (map[string]string, error) {
return GetStrMap(obj.EmbdEtcd, []string{}, namespace)
}
// StrMapSet sets the namespace value to a particular string under the identity
// of its own hostname.
func (obj *World) StrMapSet(namespace, value string) error {
return SetStrMap(obj.EmbdEtcd, obj.Hostname, namespace, &value)
}
// StrMapDel deletes the value in a particular namespace.
func (obj *World) StrMapDel(namespace string) error {
return SetStrMap(obj.EmbdEtcd, obj.Hostname, namespace, nil)
}


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package event provides some primitives that are used for message passing.
@@ -22,14 +22,14 @@ import (
"fmt"
)
//go:generate stringer -type=EventName -output=eventname_stringer.go
//go:generate stringer -type=Kind -output=kind_stringer.go
// EventName represents the type of event being passed.
type EventName int
// Kind represents the type of event being passed.
type Kind int
// The different event names are used in different contexts.
// The different event kinds are used in different contexts.
const (
EventNil EventName = iota
EventNil Kind = iota
EventExit
EventStart
EventPause
@@ -43,11 +43,10 @@ type Resp chan error
// Event is the main struct that stores event information and responses.
type Event struct {
Name EventName
Kind Kind
Resp Resp // channel to send an ack response on, nil to skip
//Wg *sync.WaitGroup // receiver barrier to Wait() for everyone else on
Msg string // some words for fun
Activity bool // did something interesting happen?
Err error // store an error in our event
}
// ACK sends a single acknowledgement on the channel if one was requested.
@@ -80,7 +79,7 @@ func NewResp() Resp {
// ACK sends a true value to resp.
func (resp Resp) ACK() {
if resp != nil {
resp <- nil
resp <- nil // TODO: close instead?
}
}
@@ -114,7 +113,7 @@ func (resp Resp) ACKWait() {
}
}
// GetActivity returns the activity value.
func (event *Event) GetActivity() bool {
return event.Activity
// Error returns the stored error value.
func (event *Event) Error() error {
return event.Err
}

examples/augeas1.yaml (new file)

@@ -0,0 +1,11 @@
---
graph: mygraph
resources:
augeas:
- name: sshd_config
lens: Sshd.lns
file: "/etc/ssh/sshd_config"
sets:
- path: X11Forwarding
value: false
edges:


@@ -5,11 +5,13 @@ resources:
- name: drbd-utils
meta:
autoedge: true
noop: true
state: installed
file:
- name: file1
meta:
autoedge: true
noop: true
path: "/etc/drbd.conf"
content: |
# this is an mgmt test
@@ -17,13 +19,14 @@ resources:
- name: file2
meta:
autoedge: true
noop: true
path: "/etc/drbd.d/"
content: |
i am a directory
source: /dev/null
state: exists
svc:
- name: drbd
meta:
autoedge: true
noop: true
state: stopped
edges: []

examples/autoedges4.yaml (new file)

@@ -0,0 +1,19 @@
---
graph: mygraph
resources:
user:
- name: edgeuser
state: absent
gid: 10000
- name: edgeuser2
state: exists
group: edgegroup
groups: [edgegroup2, edgegroup3]
group:
- name: edgegroup
state: exists
gid: 10000
- name: edgegroup2
state: exists
- name: edgegroup3
state: exists

examples/aws_ec2_1.yaml (new file)

@@ -0,0 +1,10 @@
---
graph: mygraph
resources:
aws:ec2:
- name: ec2example
region: ca-central-1
type: t2.micro
imageid: ami-5ac17f3e
state: running
edges: []

examples/deep-dirs.yaml (new file)

@@ -0,0 +1,9 @@
---
graph: mygraph
resources:
file:
- name: file1
path: "/tmp/mgmt/a/b/c/f1"
content: |
i am f1
state: exists


@@ -2,15 +2,10 @@
graph: mygraph
resources:
file:
- name: file1a
path: "/tmp/mgmtA/f1a"
- name: "@@filea"
path: "/tmp/mgmtA/fA"
content: |
i am f1
state: exists
- name: "@@file2a"
path: "/tmp/mgmtA/f2a"
content: |
i am f2, exported from host A
i am fA, exported from host A
state: exists
collect:
- kind: file


@@ -2,15 +2,10 @@
graph: mygraph
resources:
file:
- name: file1b
path: "/tmp/mgmtB/f1b"
- name: "@@fileb"
path: "/tmp/mgmtB/fB"
content: |
i am f1
state: exists
- name: "@@file2b"
path: "/tmp/mgmtB/f2b"
content: |
i am f2, exported from host B
i am fB, exported from host B
state: exists
collect:
- kind: file


@@ -2,15 +2,10 @@
graph: mygraph
resources:
file:
- name: file1c
path: "/tmp/mgmtC/f1c"
- name: "@@filec"
path: "/tmp/mgmtC/fC"
content: |
i am f1
state: exists
- name: "@@file2c"
path: "/tmp/mgmtC/f2c"
content: |
i am f2, exported from host C
i am fC, exported from host C
state: exists
collect:
- kind: file


@@ -2,15 +2,10 @@
graph: mygraph
resources:
file:
- name: file1d
path: "/tmp/mgmtD/f1d"
- name: "@@filed"
path: "/tmp/mgmtD/fD"
content: |
i am f1
state: exists
- name: "@@file2d"
path: "/tmp/mgmtD/f2d"
content: |
i am f2, exported from host D
i am fD, exported from host D
state: exists
collect:
- kind: file

examples/etcd1e.yaml (new file)

@@ -0,0 +1,13 @@
---
graph: mygraph
resources:
file:
- name: "@@filee"
path: "/tmp/mgmtE/fE"
content: |
i am fE, exported from host E
state: exists
collect:
- kind: file
pattern: "/tmp/mgmtE/"
edges: []

examples/exec3-sema.yaml (new file)

@@ -0,0 +1,67 @@
---
graph: parallel
resources:
exec:
- name: pkg10
meta:
sema: ['mylock:1', 'otherlock:42']
cmd: sleep 10s
shell: ''
timeout: 0
watchcmd: ''
watchshell: ''
ifcmd: ''
ifshell: ''
pollint: 0
state: present
- name: svc10
meta:
sema: ['mylock:1']
cmd: sleep 10s
shell: ''
timeout: 0
watchcmd: ''
watchshell: ''
ifcmd: ''
ifshell: ''
pollint: 0
state: present
- name: exec10
meta:
sema: ['mylock:1']
cmd: sleep 10s
shell: ''
timeout: 0
watchcmd: ''
watchshell: ''
ifcmd: ''
ifshell: ''
pollint: 0
state: present
- name: pkg15
meta:
sema: ['mylock:1', 'otherlock:42']
cmd: sleep 15s
shell: ''
timeout: 0
watchcmd: ''
watchshell: ''
ifcmd: ''
ifshell: ''
pollint: 0
state: present
edges:
- name: e1
from:
kind: exec
name: pkg10
to:
kind: exec
name: svc10
- name: e2
from:
kind: exec
name: svc10
to:
kind: exec
name: exec10

examples/file0.yaml (new file)

@@ -0,0 +1,10 @@
---
graph: mygraph
resources:
file:
- name: file0
path: "/tmp/mgmt/f1"
content: |
i am f0
state: exists
edges: []


@@ -1,14 +1,13 @@
---
graph: mygraph
comment: You can test Watch and CheckApply failures with chmod ugo-r and chmod ugo-w.
resources:
file:
- name: file1
path: "/tmp/mgmt/f1"
meta:
retry: 3
delay: 5000
limit: .inf
burst: 0
path: "/tmp/mgmt/hello"
content: |
i am f1
i am a file
state: exists
edges: []

examples/file4.yaml (new file)

@@ -0,0 +1,10 @@
---
graph: mygraph
resources:
file:
- name: file1
path: "/tmp/mgmt/hello"
content: |
i am a file
state: exists
edges: []

examples/graph0.hcl (new file)

@@ -0,0 +1,14 @@
resource "file" "file1" {
path = "/tmp/mgmt-hello-world"
content = "hello, world"
state = "exists"
depends_on = ["noop.noop1", "exec.sleep"]
}
resource "noop" "noop1" {
test = "nil"
}
resource "exec" "sleep" {
cmd = "sleep 10s"
}

examples/graph1.hcl (new file)

@@ -0,0 +1,4 @@
resource "exec" "exec1" {
cmd = "cat /tmp/mgmt-hello-world"
state = "present"
}

examples/group1.yaml (new file)

@@ -0,0 +1,8 @@
---
graph: mygraph
resources:
group:
- name: testgroup
state: exists
gid: 10000
edges: []

examples/hil.hcl (new file)

@@ -0,0 +1,9 @@
resource "file" "file1" {
path = "/tmp/mgmt-hello-world"
content = "${exec.sleep.Output}"
state = "exists"
}
resource "exec" "sleep" {
cmd = "echo hello"
}

examples/kv1.yaml (new file)

@@ -0,0 +1,8 @@
---
graph: mygraph
resources:
kv:
- name: kv1
key: "hello"
value: "world"
edges: []

examples/kv2.yaml (new file)

@@ -0,0 +1,7 @@
---
graph: mygraph
resources:
kv:
- name: kv1
key: "iamdeleted"
edges: []

examples/kv3.yaml (new file)

@@ -0,0 +1,9 @@
---
graph: mygraph
resources:
kv:
- name: kv1
key: "stage"
value: "3"
skiplessthan: true
edges: []

examples/kv4.yaml (new file)

@@ -0,0 +1,31 @@
---
graph: mygraph
resources:
kv:
- name: kv1
key: "stage"
value: "1"
skiplessthan: true
- name: kv2
key: "stage"
value: "2"
skiplessthan: true
- name: kv3
key: "stage"
value: "3"
skiplessthan: true
edges:
- name: e1
from:
kind: kv
name: kv1
to:
kind: kv
name: kv2
- name: e2
from:
kind: kv
name: kv2
to:
kind: kv
name: kv3


@@ -0,0 +1,246 @@
// libmgmt example of send->recv
package main
import (
"fmt"
"log"
"os"
"os/signal"
"sync"
"syscall"
"time"
"github.com/purpleidea/mgmt/gapi"
mgmt "github.com/purpleidea/mgmt/lib"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
)
// MyGAPI implements the main GAPI interface.
type MyGAPI struct {
Name string // graph name
Interval uint // refresh interval, 0 to never refresh
data gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}
// NewMyGAPI creates a new MyGAPI struct and calls Init().
func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
obj := &MyGAPI{
Name: name,
Interval: interval,
}
return obj, obj.Init(data)
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
return fmt.Errorf("already initialized")
}
if obj.Name == "" {
return fmt.Errorf("the graph name must be specified")
}
obj.data = data // store for later
obj.closeChan = make(chan struct{})
obj.initialized = true
return nil
}
// Graph returns a current Graph.
func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, err
}
metaparams := resources.DefaultMetaParams
exec1 := &resources.ExecRes{
BaseRes: resources.BaseRes{
Name: "exec1",
Kind: "exec",
MetaParams: metaparams,
},
Cmd: "echo hello world && echo goodbye world 1>&2", // to stdout && stderr
Shell: "/bin/bash",
}
g.AddVertex(exec1)
output := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "output",
Kind: "file",
MetaParams: metaparams,
// send->recv!
Recv: map[string]*resources.Send{
"Content": {Res: exec1, Key: "Output"},
},
},
Path: "/tmp/mgmt/output",
State: "present",
}
g.AddVertex(output)
g.AddEdge(exec1, output, &resources.Edge{Name: "e0"})
stdout := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "stdout",
Kind: "file",
MetaParams: metaparams,
// send->recv!
Recv: map[string]*resources.Send{
"Content": {Res: exec1, Key: "Stdout"},
},
},
Path: "/tmp/mgmt/stdout",
State: "present",
}
g.AddVertex(stdout)
g.AddEdge(exec1, stdout, &resources.Edge{Name: "e1"})
stderr := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "stderr",
Kind: "file",
MetaParams: metaparams,
// send->recv!
Recv: map[string]*resources.Send{
"Content": {Res: exec1, Key: "Stderr"},
},
},
Path: "/tmp/mgmt/stderr",
State: "present",
}
g.AddVertex(stderr)
g.AddEdge(exec1, stderr, &resources.Edge{Name: "e2"})
//g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.World, obj.data.Noop)
return g, nil
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
return
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
ticker := make(<-chan time.Time)
if obj.data.NoStreamWatch || obj.Interval <= 0 {
ticker = nil
} else {
// arbitrarily change graph every interval seconds
t := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer t.Stop()
ticker = t.C
}
for {
select {
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case <-ticker:
// pass
case <-obj.closeChan:
return
}
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
return
}
}
}()
return ch
}
// Close shuts down the MyGAPI.
func (obj *MyGAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
close(obj.closeChan)
obj.wg.Wait()
obj.initialized = false // closed = true
return nil
}
// Run runs an embedded mgmt server.
func Run() error {
obj := &mgmt.Main{}
obj.Program = "libmgmt" // TODO: set on compilation
obj.Version = "0.0.1" // TODO: set on compilation
obj.TmpPrefix = true // disable for easy debugging
//prefix := "/tmp/testprefix/"
//obj.Prefix = &p // enable for easy debugging
obj.IdealClusterSize = -1
obj.ConvergedTimeout = -1
obj.Noop = false // FIXME: careful!
obj.GAPI = &MyGAPI{ // graph API
Name: "libmgmt", // TODO: set on compilation
Interval: 60 * 10, // arbitrarily change graph every 10 minutes
}
if err := obj.Init(); err != nil {
return err
}
// install the exit signal handler
exit := make(chan struct{})
defer close(exit)
go func() {
signals := make(chan os.Signal, 1)
signal.Notify(signals, os.Interrupt) // catch ^C
//signal.Notify(signals, os.Kill) // catch signals
signal.Notify(signals, syscall.SIGTERM)
select {
case sig := <-signals: // any signal will do
if sig == os.Interrupt {
log.Println("Interrupted by ^C")
obj.Exit(nil)
return
}
log.Println("Interrupted by signal")
obj.Exit(fmt.Errorf("killed by %v", sig))
return
case <-exit:
return
}
}()
return obj.Run()
}
func main() {
log.Printf("Hello!")
if err := Run(); err != nil {
fmt.Println(err)
os.Exit(1)
return
}
log.Printf("Goodbye!")
}


@@ -0,0 +1,253 @@
// libmgmt example of flattened subgraph
package main
import (
"fmt"
"log"
"os"
"os/signal"
"sync"
"syscall"
"time"
"github.com/purpleidea/mgmt/gapi"
mgmt "github.com/purpleidea/mgmt/lib"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
errwrap "github.com/pkg/errors"
)
// MyGAPI implements the main GAPI interface.
type MyGAPI struct {
Name string // graph name
Interval uint // refresh interval, 0 to never refresh
data gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}
// NewMyGAPI creates a new MyGAPI struct and calls Init().
func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
obj := &MyGAPI{
Name: name,
Interval: interval,
}
return obj, obj.Init(data)
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
return fmt.Errorf("already initialized")
}
if obj.Name == "" {
return fmt.Errorf("the graph name must be specified")
}
obj.data = data // store for later
obj.closeChan = make(chan struct{})
obj.initialized = true
return nil
}
func (obj *MyGAPI) subGraph() (*pgraph.Graph, error) {
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, err
}
metaparams := resources.DefaultMetaParams
f1 := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "file1",
Kind: "file",
MetaParams: metaparams,
},
Path: "/tmp/mgmt/sub1",
State: "present",
}
g.AddVertex(f1)
n1 := &resources.NoopRes{
BaseRes: resources.BaseRes{
Name: "noop1",
Kind: "noop",
MetaParams: metaparams,
},
}
g.AddVertex(n1)
return g, nil
}
// Graph returns a current Graph.
func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, err
}
metaparams := resources.DefaultMetaParams
content := "I created a subgraph!\n"
f0 := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "README",
MetaParams: metaparams,
},
Path: "/tmp/mgmt/README",
Content: &content,
State: "present",
}
g.AddVertex(f0)
subGraph, err := obj.subGraph()
if err != nil {
return nil, errwrap.Wrapf(err, "running subGraph() failed")
}
edgeGenFn := func(v1, v2 pgraph.Vertex) pgraph.Edge {
edge := &resources.Edge{
Name: fmt.Sprintf("edge: %s->%s", v1, v2),
}
// if we want to do something specific based on input
_, v2IsFile := v2.(*resources.FileRes)
if v1 == f0 && v2IsFile {
edge.Notify = true
}
return edge
}
g.AddEdgeVertexGraph(f0, subGraph, edgeGenFn)
//g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.World, obj.data.Noop)
return g, nil
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
return
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
ticker := make(<-chan time.Time)
if obj.data.NoStreamWatch || obj.Interval <= 0 {
ticker = nil
} else {
// arbitrarily change graph every interval seconds
t := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer t.Stop()
ticker = t.C
}
for {
select {
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case <-ticker:
// pass
case <-obj.closeChan:
return
}
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
return
}
}
}()
return ch
}
// Close shuts down the MyGAPI.
func (obj *MyGAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
close(obj.closeChan)
obj.wg.Wait()
obj.initialized = false // closed = true
return nil
}
// Run runs an embedded mgmt server.
func Run() error {
obj := &mgmt.Main{}
obj.Program = "libmgmt" // TODO: set on compilation
obj.Version = "0.0.1" // TODO: set on compilation
obj.TmpPrefix = true // disable for easy debugging
//prefix := "/tmp/testprefix/"
//obj.Prefix = &p // enable for easy debugging
obj.IdealClusterSize = -1
obj.ConvergedTimeout = -1
obj.Noop = false // FIXME: careful!
obj.GAPI = &MyGAPI{ // graph API
Name: "libmgmt", // TODO: set on compilation
Interval: 60 * 10, // arbitrarily change graph every 10 minutes
}
if err := obj.Init(); err != nil {
return err
}
// install the exit signal handler
exit := make(chan struct{})
defer close(exit)
go func() {
signals := make(chan os.Signal, 1)
signal.Notify(signals, os.Interrupt) // catch ^C
//signal.Notify(signals, os.Kill) // catch signals
signal.Notify(signals, syscall.SIGTERM)
select {
case sig := <-signals: // any signal will do
if sig == os.Interrupt {
log.Println("Interrupted by ^C")
obj.Exit(nil)
return
}
log.Println("Interrupted by signal")
obj.Exit(fmt.Errorf("killed by %v", sig))
return
case <-exit:
return
}
}()
return obj.Run()
}
func main() {
log.Printf("Hello!")
if err := Run(); err != nil {
fmt.Println(err)
os.Exit(1)
return
}
log.Printf("Goodbye!")
}


@@ -0,0 +1,243 @@
// libmgmt example of graph resource
package main
import (
"fmt"
"log"
"os"
"os/signal"
"sync"
"syscall"
"time"
"github.com/purpleidea/mgmt/gapi"
mgmt "github.com/purpleidea/mgmt/lib"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
)
// MyGAPI implements the main GAPI interface.
type MyGAPI struct {
Name string // graph name
Interval uint // refresh interval, 0 to never refresh
data gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}
// NewMyGAPI creates a new MyGAPI struct and calls Init().
func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
obj := &MyGAPI{
Name: name,
Interval: interval,
}
return obj, obj.Init(data)
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
return fmt.Errorf("already initialized")
}
if obj.Name == "" {
return fmt.Errorf("the graph name must be specified")
}
obj.data = data // store for later
obj.closeChan = make(chan struct{})
obj.initialized = true
return nil
}
// Graph returns a current Graph.
func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, err
}
metaparams := resources.DefaultMetaParams
content := "I created a subgraph!\n"
f0 := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "README",
Kind: "file",
MetaParams: metaparams,
},
Path: "/tmp/mgmt/README",
Content: &content,
State: "present",
}
g.AddVertex(f0)
// create a subgraph to add *into* a graph resource
subGraph, err := pgraph.NewGraph(fmt.Sprintf("%s->subgraph", obj.Name))
if err != nil {
return nil, err
}
// add elements into the sub graph
f1 := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "file1",
Kind: "file",
MetaParams: metaparams,
},
Path: "/tmp/mgmt/sub1",
State: "present",
}
subGraph.AddVertex(f1)
n1 := &resources.NoopRes{
BaseRes: resources.BaseRes{
Name: "noop1",
Kind: "noop",
MetaParams: metaparams,
},
}
subGraph.AddVertex(n1)
e0 := &resources.Edge{Name: "e0"}
e0.Notify = true // send a notification from v0 to v1
subGraph.AddEdge(f1, n1, e0)
// create the actual resource to hold the sub graph
subGraphRes0 := &resources.GraphRes{ // TODO: should we name this SubGraphRes ?
BaseRes: resources.BaseRes{
Name: "subgraph1",
Kind: "graph",
MetaParams: metaparams,
},
Graph: subGraph,
}
g.AddVertex(subGraphRes0) // add it to the main graph
//g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.World, obj.data.Noop)
return g, nil
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
return
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
ticker := make(<-chan time.Time)
if obj.data.NoStreamWatch || obj.Interval <= 0 {
ticker = nil
} else {
// arbitrarily change graph every interval seconds
t := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer t.Stop()
ticker = t.C
}
for {
select {
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case <-ticker:
// pass
case <-obj.closeChan:
return
}
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
return
}
}
}()
return ch
}
// Close shuts down the MyGAPI.
func (obj *MyGAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
close(obj.closeChan)
obj.wg.Wait()
obj.initialized = false // closed = true
return nil
}
// Run runs an embedded mgmt server.
func Run() error {
obj := &mgmt.Main{}
obj.Program = "libmgmt" // TODO: set on compilation
obj.Version = "0.0.1" // TODO: set on compilation
obj.TmpPrefix = true // disable for easy debugging
//prefix := "/tmp/testprefix/"
//obj.Prefix = &prefix // enable for easy debugging
obj.IdealClusterSize = -1
obj.ConvergedTimeout = -1
obj.Noop = false // FIXME: careful!
obj.GAPI = &MyGAPI{ // graph API
Name: "libmgmt", // TODO: set on compilation
Interval: 60 * 10, // arbitrarily change graph every 600 seconds
}
if err := obj.Init(); err != nil {
return err
}
// install the exit signal handler
exit := make(chan struct{})
defer close(exit)
go func() {
signals := make(chan os.Signal, 1)
signal.Notify(signals, os.Interrupt) // catch ^C
//signal.Notify(signals, os.Kill) // catch signals
signal.Notify(signals, syscall.SIGTERM)
select {
case sig := <-signals: // any signal will do
if sig == os.Interrupt {
log.Println("Interrupted by ^C")
obj.Exit(nil)
return
}
log.Println("Interrupted by signal")
obj.Exit(fmt.Errorf("killed by %v", sig))
return
case <-exit:
return
}
}()
return obj.Run()
}
func main() {
log.Printf("Hello!")
if err := Run(); err != nil {
fmt.Println(err)
os.Exit(1)
return
}
log.Printf("Goodbye!")
}
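
The full example above wires up a GAPI, a signal handler and a subgraph; for orientation, the embedding pattern it relies on can be reduced to the sketch below. This is a minimal sketch, not part of the commit: it leaves GAPI nil so mgmt runs an empty graph, and it calls Exit() from a timer so the program terminates on its own. Field values mirror the example; the program name and the five second delay are arbitrary assumptions.

package main

import (
	"log"
	"time"

	mgmt "github.com/purpleidea/mgmt/lib"
)

func main() {
	obj := &mgmt.Main{}
	obj.Program = "libmgmt-minimal" // assumed name, normally set at compile time
	obj.Version = "0.0.1"
	obj.TmpPrefix = true      // work out of a throwaway prefix
	obj.IdealClusterSize = -1 // accept the default
	obj.ConvergedTimeout = -1 // never exit on convergence
	// obj.GAPI is deliberately left nil: mgmt runs with an empty graph.

	if err := obj.Init(); err != nil {
		log.Fatalf("init failed: %v", err)
	}

	go func() { // stand-in for the signal handler used in the real example
		time.Sleep(5 * time.Second)
		obj.Exit(nil) // request a clean shutdown
	}()

	if err := obj.Run(); err != nil { // blocks until Exit() is called
		log.Fatalf("run failed: %v", err)
	}
	log.Printf("done")
}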


@@ -40,10 +40,10 @@ func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
return fmt.Errorf("Already initialized!")
return fmt.Errorf("already initialized")
}
if obj.Name == "" {
return fmt.Errorf("The graph name must be specified!")
return fmt.Errorf("the graph name must be specified")
}
obj.data = data // store for later
obj.closeChan = make(chan struct{})
@@ -57,11 +57,13 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
n1, err := resources.NewNoopRes("noop1")
n1, err := resources.NewNamedResource("noop", "noop1")
if err != nil {
return nil, fmt.Errorf("Can't create resource: %v", err)
return nil, err
}
// NOTE: This is considered the legacy method to build graphs. Avoid
// importing the legacy `yamlgraph` lib if possible for custom graphs.
// we can still build a graph via the yaml method
gc := &yamlgraph.GraphConfig{
Graph: obj.Name,
@@ -70,7 +72,7 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
Exec: []*resources.ExecRes{},
File: []*resources.FileRes{},
Msg: []*resources.MsgRes{},
Noop: []*resources.NoopRes{n1},
Noop: []*resources.NoopRes{n1.(*resources.NoopRes)},
Pkg: []*resources.PkgRes{},
Svc: []*resources.SvcRes{},
Timer: []*resources.TimerRes{},
@@ -86,28 +88,45 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan error {
if obj.data.NoWatch || obj.Interval <= 0 {
return nil
}
ch := make(chan error)
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
ch <- fmt.Errorf("libmgmt: MyGAPI is not initialized")
return
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
// arbitrarily change graph every interval seconds
ticker := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer ticker.Stop()
ticker := make(<-chan time.Time)
if obj.data.NoStreamWatch || obj.Interval <= 0 {
ticker = nil
} else {
// arbitrarily change graph every interval seconds
t := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer t.Stop()
ticker = t.C
}
for {
select {
case <-ticker.C:
log.Printf("libmgmt: Generating new graph...")
ch <- nil // trigger a run
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case <-ticker:
// pass
case <-obj.closeChan:
return
}
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
return
}
@@ -164,17 +183,14 @@ func Run() error {
return
}
log.Println("Interrupted by signal")
obj.Exit(fmt.Errorf("Killed by %v", sig))
obj.Exit(fmt.Errorf("killed by %v", sig))
return
case <-exit:
return
}
}()
if err := obj.Run(); err != nil {
return err
}
return nil
return obj.Run()
}
func main() {


@@ -42,10 +42,10 @@ func NewMyGAPI(data gapi.Data, name string, interval uint, count uint) (*MyGAPI,
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
return fmt.Errorf("Already initialized!")
return fmt.Errorf("already initialized")
}
if obj.Name == "" {
return fmt.Errorf("The graph name must be specified!")
return fmt.Errorf("the graph name must be specified")
}
obj.data = data // store for later
obj.closeChan = make(chan struct{})
@@ -59,19 +59,21 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
g := pgraph.NewGraph(obj.Name)
var vertex *pgraph.Vertex
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, err
}
var vertex pgraph.Vertex
for i := uint(0); i < obj.Count; i++ {
n, err := resources.NewNoopRes(fmt.Sprintf("noop%d", i))
n, err := resources.NewNamedResource("noop", fmt.Sprintf("noop%d", i))
if err != nil {
return nil, fmt.Errorf("Can't create resource: %v", err)
return nil, err
}
v := pgraph.NewVertex(n)
g.AddVertex(v)
g.AddVertex(n)
if i > 0 {
g.AddEdge(vertex, v, pgraph.NewEdge(fmt.Sprintf("e%d", i)))
g.AddEdge(vertex, n, &resources.Edge{Name: fmt.Sprintf("e%d", i)})
}
vertex = v // save
vertex = n // save
}
//g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.World, obj.data.Noop)
@@ -79,28 +81,45 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan error {
if obj.data.NoWatch || obj.Interval <= 0 {
return nil
}
ch := make(chan error)
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
ch <- fmt.Errorf("libmgmt: MyGAPI is not initialized")
return
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
// arbitrarily change graph every interval seconds
ticker := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer ticker.Stop()
ticker := make(<-chan time.Time)
if obj.data.NoStreamWatch || obj.Interval <= 0 {
ticker = nil
} else {
// arbitrarily change graph every interval seconds
t := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer t.Stop()
ticker = t.C
}
for {
select {
case <-ticker.C:
log.Printf("libmgmt: Generating new graph...")
ch <- nil // trigger a run
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case <-ticker:
// pass
case <-obj.closeChan:
return
}
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
return
}
@@ -158,17 +177,14 @@ func Run(count uint) error {
return
}
log.Println("Interrupted by signal")
obj.Exit(fmt.Errorf("Killed by %v", sig))
obj.Exit(fmt.Errorf("killed by %v", sig))
return
case <-exit:
return
}
}()
if err := obj.Run(); err != nil {
return err
}
return nil
return obj.Run()
}
func main() {


@@ -39,10 +39,10 @@ func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
return fmt.Errorf("Already initialized!")
return fmt.Errorf("already initialized")
}
if obj.Name == "" {
return fmt.Errorf("The graph name must be specified!")
return fmt.Errorf("the graph name must be specified")
}
obj.data = data // store for later
obj.closeChan = make(chan struct{})
@@ -56,34 +56,43 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
g := pgraph.NewGraph(obj.Name)
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, err
}
metaparams := resources.DefaultMetaParams
content := "Delete me to trigger a notification!\n"
f0 := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "README",
Name: "README",
Kind: "file",
MetaParams: metaparams,
},
Path: "/tmp/mgmt/README",
Content: &content,
State: "present",
}
v0 := pgraph.NewVertex(f0)
g.AddVertex(v0)
g.AddVertex(f0)
p1 := &resources.PasswordRes{
BaseRes: resources.BaseRes{
Name: "password1",
Name: "password1",
Kind: "password",
MetaParams: metaparams,
},
Length: 8, // generated string will have this many characters
Saved: true, // this causes passwords to be stored in plain text!
}
v1 := pgraph.NewVertex(p1)
g.AddVertex(v1)
g.AddVertex(p1)
f1 := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "file1",
Name: "file1",
Kind: "file",
MetaParams: metaparams,
// send->recv!
Recv: map[string]*resources.Send{
"Content": {Res: p1, Key: "Password"},
@@ -94,55 +103,72 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
State: "present",
}
v2 := pgraph.NewVertex(f1)
g.AddVertex(v2)
g.AddVertex(f1)
n1 := &resources.NoopRes{
BaseRes: resources.BaseRes{
Name: "noop1",
Name: "noop1",
Kind: "noop",
MetaParams: metaparams,
},
}
v3 := pgraph.NewVertex(n1)
g.AddVertex(v3)
g.AddVertex(n1)
e0 := pgraph.NewEdge("e0")
e0.Notify = true // send a notification from v0 to v1
g.AddEdge(v0, v1, e0)
e0 := &resources.Edge{Name: "e0"}
e0.Notify = true // send a notification from f0 to p1
g.AddEdge(f0, p1, e0)
g.AddEdge(v1, v2, pgraph.NewEdge("e1"))
g.AddEdge(p1, f1, &resources.Edge{Name: "e1"})
e2 := pgraph.NewEdge("e2")
e2.Notify = true // send a notification from v2 to v3
g.AddEdge(v2, v3, e2)
e2 := &resources.Edge{Name: "e2"}
e2.Notify = true // send a notification from f1 to n1
g.AddEdge(f1, n1, e2)
//g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.World, obj.data.Noop)
return g, nil
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan error {
if obj.data.NoWatch || obj.Interval <= 0 {
return nil
}
ch := make(chan error)
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
ch <- fmt.Errorf("libmgmt: MyGAPI is not initialized")
return
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
// arbitrarily change graph every interval seconds
ticker := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer ticker.Stop()
ticker := make(<-chan time.Time)
if obj.data.NoStreamWatch || obj.Interval <= 0 {
ticker = nil
} else {
// arbitrarily change graph every interval seconds
t := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer t.Stop()
ticker = t.C
}
for {
select {
case <-ticker.C:
log.Printf("libmgmt: Generating new graph...")
ch <- nil // trigger a run
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case <-ticker:
// pass
case <-obj.closeChan:
return
}
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
return
}
@@ -201,17 +227,14 @@ func Run() error {
return
}
log.Println("Interrupted by signal")
obj.Exit(fmt.Errorf("Killed by %v", sig))
obj.Exit(fmt.Errorf("killed by %v", sig))
return
case <-exit:
return
}
}()
if err := obj.Run(); err != nil {
return err
}
return nil
return obj.Run()
}
func main() {

examples/limit1.yaml (new file, 13 lines)

@@ -0,0 +1,13 @@
---
graph: mygraph
resources:
file:
- name: file1
meta:
limit: 0.2
burst: 5
path: "/tmp/mgmt/limit"
content: |
i am a normal file
state: exists
edges: []

examples/noop0.yaml (new file, 7 lines)

@@ -0,0 +1,7 @@
---
graph: mygraph
comment: simple noop example
resources:
noop:
- name: noop0
edges: []

examples/noop2.yaml (new file, 30 lines)

@@ -0,0 +1,30 @@
---
graph: mygraph
comment: dangerous noop example
resources:
noop:
- name: noop1
meta:
noop: true
file:
- name: file1
path: "/tmp/mgmt/hello-noop"
content: |
hello world from @purpleidea
state: exists
meta:
noop: true
exec:
- name: exec1
meta:
noop: true
cmd: 'rm -rf /'
shell: '/bin/bash'
timeout: 0
watchcmd: ''
watchshell: ''
ifcmd: ''
ifshell: ''
pollint: 0
state: present
edges: []


@@ -2,6 +2,6 @@
graph: mygraph
resources:
nspawn:
- name: mgmt-nspawn1
- name: nspawn1
state: running
edges: []


@@ -1,7 +0,0 @@
---
graph: mygraph
resources:
nspawn:
- name: mgmt-nspawn2
state: stopped
edges: []

examples/poll1.yaml (new file, 24 lines)

@@ -0,0 +1,24 @@
---
graph: mygraph
resources:
file:
- name: file1
meta:
poll: 5
path: "/tmp/mgmt/f1"
content: |
i poll every 5 seconds
state: exists
- name: file2
path: "/tmp/mgmt/f2"
content: |
i use the event based watcher
state: exists
- name: file3
meta:
poll: 1
path: "/tmp/mgmt/f3"
content: |
i poll every second
state: exists
edges: []

examples/retry1.yaml (new file, 57 lines)

@@ -0,0 +1,57 @@
---
graph: mygraph
comment: You can test Watch and CheckApply failures with chmod ugo-r and chmod ugo-w.
resources:
exec:
- name: exec1
cmd: 'touch /tmp/mgmt/no-read && chmod ugo-r /tmp/mgmt/no-read'
shell: '/bin/bash'
timeout: 0
watchcmd: ''
watchshell: ''
ifcmd: ''
ifshell: ''
pollint: 0
state: present
- name: exec2
cmd: 'touch /tmp/mgmt/no-write && chmod ugo-w /tmp/mgmt/no-write'
shell: '/bin/bash'
timeout: 0
watchcmd: ''
watchshell: ''
ifcmd: ''
ifshell: ''
pollint: 0
state: present
file:
- name: noread
path: "/tmp/mgmt/no-read"
meta:
retry: 3
delay: 5000
content: |
i am f1
state: exists
- name: nowrite
path: "/tmp/mgmt/no-write"
meta:
retry: 3
delay: 5000
content: |
i am f1
state: exists
edges:
- name: e1
from:
kind: exec
name: exec1
to:
kind: file
name: noread
- name: e2
from:
kind: exec
name: exec2
to:
kind: file
name: nowrite

examples/svc2.yaml (new file, 8 lines)

@@ -0,0 +1,8 @@
---
graph: mygraph
resources:
svc:
- name: purpleidea
state: running
session: true
edges: []


@@ -4,7 +4,7 @@ comment: timer example
resources:
timer:
- name: timer1
interval: 30
interval: 3
exec:
- name: exec1
cmd: echo hello world

examples/user1.yaml (new file, 9 lines)

@@ -0,0 +1,9 @@
---
graph: mygraph
resources:
user:
- name: testuser
uid: 1002
gid: 100
state: exists
edges: []

examples/virt4.yaml (new file, 21 lines)

@@ -0,0 +1,21 @@
---
graph: mygraph
resources:
virt:
- name: mgmt4
meta:
limit: .inf
burst: 0
uri: 'qemu:///session'
cpus: 1
maxcpus: 4
memory: 524288
boot:
- hd
disk:
- type: qcow2
source: "~/.local/share/libvirt/images/fedora-23-scratch.qcow2"
state: running
transient: false
edges: []
comment: "qemu-img create -b fedora-23.qcow2 -f qcow2 fedora-23-scratch.qcow2"


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package gapi defines the interface that graph API generators must meet.
@@ -23,28 +23,33 @@ import (
"github.com/purpleidea/mgmt/resources"
)
// World is an interface to the rest of the different graph state. It allows
// the GAPI to store state and exchange information throughout the cluster. It
// is the interface each machine uses to communicate with the rest of the world.
type World interface { // TODO: is there a better name for this interface?
ResExport([]resources.Res) error
// FIXME: should this method take a "filter" data struct instead of many args?
ResCollect(hostnameFilter, kindFilter []string) ([]resources.Res, error)
}
// Data is the set of input values passed into the GAPI structs via Init.
type Data struct {
Hostname string // uuid for the host, required for GAPI
World World
Noop bool
NoWatch bool
Hostname string // uuid for the host, required for GAPI
World resources.World
Noop bool
NoConfigWatch bool
NoStreamWatch bool
// NOTE: we can add more fields here if needed by GAPI endpoints
}
// Next describes the particular response the GAPI implementer wishes to emit.
type Next struct {
// FIXME: the Fast pause parameter should eventually get replaced with a
// "SwitchMethod" parameter or similar that instead lets the implementer
// choose between fast pause, slow pause, and interrupt. Interrupt could
// be a future extension to the Resource API that lets an Interrupt() be
// called if we want to exit immediately from the CheckApply part of the
// resource for some reason. For now we'll keep this simple with a bool.
Fast bool // run a fast pause to switch?
Exit bool // should we cause the program to exit? (specify err or not)
Err error // if something goes wrong (use with or without exit!)
}
// GAPI is a Graph API that represents incoming graphs and change streams.
type GAPI interface {
Init(Data) error // initializes the GAPI and passes in useful data
Graph() (*pgraph.Graph, error) // returns the most recent pgraph
Next() chan error // returns a stream of switch events
Next() chan Next // returns a stream of switch events
Close() error // shutdown the GAPI
}
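
The Next struct above replaces the old plain error channel. Based on how the main loop in lib/main.go (later in this set of diffs) consumes it, the three fields can be exercised as in the following sketch. This is illustrative only; the helper function and the error texts are assumptions, while the field semantics are taken from that loop.

package main

import (
	"fmt"

	"github.com/purpleidea/mgmt/gapi"
)

// emitExamples sends one Next message per situation. The meanings below
// follow the consuming loop in lib/main.go:
//   zero value             -> a new graph is available, call Graph() again
//   Fast == true           -> switch to the new graph with a fast pause
//   Err != nil, Exit false -> non-fatal stream error, logged and skipped
//   Exit == true           -> ask mgmt to exit, using Err as the exit error
func emitExamples(ch chan<- gapi.Next) {
	ch <- gapi.Next{}
	ch <- gapi.Next{Fast: true}
	ch <- gapi.Next{Err: fmt.Errorf("transient stream error")}
	ch <- gapi.Next{Exit: true, Err: fmt.Errorf("fatal error, shutting down")}
}

func main() {
	ch := make(chan gapi.Next, 4)
	emitExamples(ch)
	close(ch)
	for next := range ch {
		fmt.Printf("fast=%t exit=%t err=%v\n", next.Fast, next.Exit, next.Err)
	}
}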

hcl/gapi.go (new file, 155 lines)

@@ -0,0 +1,155 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package hcl
import (
"fmt"
"log"
"sync"
"github.com/purpleidea/mgmt/gapi"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/recwatch"
)
// GAPI implements the main GAPI interface for graphs defined in hcl.
type GAPI struct {
File *string
initialized bool
data gapi.Data
wg sync.WaitGroup
closeChan chan struct{}
configWatcher *recwatch.ConfigWatcher
}
// NewGAPI creates a new GAPI struct and calls Init().
func NewGAPI(data gapi.Data, file *string) (*GAPI, error) {
if file == nil {
return nil, fmt.Errorf("empty file given")
}
obj := &GAPI{
File: file,
}
return obj, obj.Init(data)
}
// Init initializes the GAPI struct.
func (obj *GAPI) Init(d gapi.Data) error {
if obj.initialized {
return fmt.Errorf("already initialized")
}
if obj.File == nil {
return fmt.Errorf("file cannot be nil")
}
obj.data = d
obj.closeChan = make(chan struct{})
obj.initialized = true
obj.configWatcher = recwatch.NewConfigWatcher()
return nil
}
// Graph parses the hcl file and returns the resulting Graph.
func (obj *GAPI) Graph() (*pgraph.Graph, error) {
config, err := loadHcl(obj.File)
if err != nil {
return nil, fmt.Errorf("unable to parse graph: %s", err)
}
return graphFromConfig(config, obj.data)
}
// Next returns a channel of Next messages; one is sent whenever the graph should be rebuilt.
func (obj *GAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch)
if !obj.initialized {
next := gapi.Next{
Err: fmt.Errorf("hcl: GAPI is not initialized"),
Exit: true,
}
ch <- next
return
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
watchChan, configChan := make(chan error), make(chan error)
if obj.data.NoConfigWatch {
configChan = nil
} else {
configChan = obj.configWatcher.ConfigWatch(*obj.File) // simple
}
if obj.data.NoStreamWatch {
watchChan = nil
} else {
watchChan = obj.data.World.ResWatch()
}
for {
var err error
var ok bool
select {
case <-startChan:
startChan = nil
case err, ok = <-watchChan:
case err, ok = <-configChan:
if !ok {
return
}
case <-obj.closeChan:
return
}
log.Printf("hcl: generating new graph")
next := gapi.Next{
Err: err,
}
select {
case ch <- next:
case <-obj.closeChan:
return
}
}
}()
return ch
}
// Close shuts down the GAPI.
func (obj *GAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("hcl: GAPI is not initialized")
}
obj.configWatcher.Close()
close(obj.closeChan)
obj.wg.Wait()
obj.initialized = false
return nil
}

hcl/parse.go (new file, 387 lines)

@@ -0,0 +1,387 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package hcl
import (
"fmt"
"io/ioutil"
"log"
"strings"
"github.com/hashicorp/hcl"
"github.com/hashicorp/hcl/hcl/ast"
"github.com/hashicorp/hil"
"github.com/purpleidea/mgmt/gapi"
hv "github.com/purpleidea/mgmt/hil"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
)
type collectorResConfig struct {
Kind string
Pattern string
}
// Config defines the structure of the hcl config.
type Config struct {
Resources []*Resource
Edges []*Edge
Collector []collectorResConfig
}
// vertex is the data structure of a vertex.
type vertex struct {
Kind string `hcl:"kind"`
Name string `hcl:"name"`
}
// Edge defines an edge in hcl.
type Edge struct {
Name string
From vertex
To vertex
Notify bool
}
// Resources define the state for resources.
type Resources struct {
Resources []resources.Res
}
// Resource represents a single resource block parsed from the hcl config.
type Resource struct {
Name string
Kind string
resource resources.Res
Meta resources.MetaParams
deps []*Edge
rcv map[string]*hv.ResourceVariable
}
type key struct {
kind, name string
}
func graphFromConfig(c *Config, data gapi.Data) (*pgraph.Graph, error) {
var graph *pgraph.Graph
var err error
graph, err = pgraph.NewGraph("Graph")
if err != nil {
return nil, fmt.Errorf("unable to create graph from config: %s", err)
}
lookup := make(map[key]pgraph.Vertex)
var keep []pgraph.Vertex
var resourceList []resources.Res
log.Printf("hcl: parsing %d resources", len(c.Resources))
for _, r := range c.Resources {
res := r.resource
kind := r.resource.GetKind()
log.Printf("hcl: resource \"%s\" \"%s\"", kind, r.Name)
if !strings.HasPrefix(res.GetName(), "@@") {
fn := func(v pgraph.Vertex) (bool, error) {
return resources.VtoR(v).Compare(res), nil
}
v, err := graph.VertexMatchFn(fn)
if err != nil {
return nil, fmt.Errorf("could not match vertex: %s", err)
}
if v == nil {
v = res
graph.AddVertex(v)
}
lookup[key{kind, res.GetName()}] = v
keep = append(keep, v)
} else if !data.Noop {
res.SetName(res.GetName()[2:])
res.SetKind(kind)
resourceList = append(resourceList, res)
}
}
// store in backend (usually etcd)
if err := data.World.ResExport(resourceList); err != nil {
return nil, fmt.Errorf("Config: Could not export resources: %v", err)
}
// lookup from backend (usually etcd)
var hostnameFilter []string // empty to get from everyone
kindFilter := []string{}
for _, t := range c.Collector {
kind := strings.ToLower(t.Kind)
kindFilter = append(kindFilter, kind)
}
// do all the graph look ups in one single step, so that if the backend
// database changes, we don't have a partial state of affairs...
if len(kindFilter) > 0 { // if kindFilter is empty, don't need to do lookups!
var err error
resourceList, err = data.World.ResCollect(hostnameFilter, kindFilter)
if err != nil {
return nil, fmt.Errorf("Config: Could not collect resources: %v", err)
}
}
for _, res := range resourceList {
matched := false
// see if we find a collect pattern that matches
for _, t := range c.Collector {
kind := strings.ToLower(t.Kind)
// use t.Kind and optionally t.Pattern to collect from storage
log.Printf("Collect: %v; Pattern: %v", kind, t.Pattern)
// XXX: expand to more complex pattern matching here...
if res.GetKind() != kind {
continue
}
if matched {
// we've already matched this resource, should we match again?
log.Printf("Config: Warning: Matching %s again!", res)
}
matched = true
// collect resources but add the noop metaparam
//if noop { // now done in mgmtmain
// res.Meta().Noop = noop
//}
if t.Pattern != "" { // XXX: simplistic for now
res.CollectPattern(t.Pattern) // res.Dirname = t.Pattern
}
log.Printf("Collect: %s: collected!", res)
// XXX: similar to other resource add code:
// if _, exists := lookup[kind]; !exists {
// lookup[kind] = make(map[string]pgraph.Vertex)
// }
fn := func(v pgraph.Vertex) (bool, error) {
return resources.VtoR(v).Compare(res), nil
}
v, err := graph.VertexMatchFn(fn)
if err != nil {
return nil, fmt.Errorf("could not VertexMatchFn() resource: %s", err)
}
if v == nil { // no match found
v = res // a standalone res can be a vertex
graph.AddVertex(v) // call standalone in case not part of an edge
}
lookup[key{kind, res.GetName()}] = v // used for constructing edges
keep = append(keep, v) // append
//break // let's see if another resource even matches
}
}
for _, r := range c.Resources {
for _, e := range r.deps {
if _, ok := lookup[key{strings.ToLower(e.From.Kind), e.From.Name}]; !ok {
return nil, fmt.Errorf("can't find 'from' name")
}
if _, ok := lookup[key{strings.ToLower(e.To.Kind), e.To.Name}]; !ok {
return nil, fmt.Errorf("can't find 'to' name")
}
from := lookup[key{strings.ToLower(e.From.Kind), e.From.Name}]
to := lookup[key{strings.ToLower(e.To.Kind), e.To.Name}]
edge := &resources.Edge{
Name: e.Name,
Notify: e.Notify,
}
graph.AddEdge(from, to, edge)
}
recv := make(map[string]*resources.Send)
// build Rcv's from resource variables
for k, v := range r.rcv {
send, ok := lookup[key{strings.ToLower(v.Kind), v.Name}]
if !ok {
return nil, fmt.Errorf("resource not found")
}
recv[strings.ToUpper(string(k[0]))+k[1:]] = &resources.Send{
Res: resources.VtoR(send),
Key: v.Field,
}
to := lookup[key{strings.ToLower(r.Kind), r.Name}]
edge := &resources.Edge{
Name: v.Name,
Notify: true,
}
graph.AddEdge(send, to, edge)
}
r.resource.SetRecv(recv)
}
return graph, nil
}
func loadHcl(f *string) (*Config, error) {
if f == nil {
return nil, fmt.Errorf("empty file given")
}
data, err := ioutil.ReadFile(*f)
if err != nil {
return nil, fmt.Errorf("unable to read file: %v", err)
}
file, err := hcl.ParseBytes(data)
if err != nil {
return nil, fmt.Errorf("unable to parse file: %s", err)
}
config := new(Config)
list, ok := file.Node.(*ast.ObjectList)
if !ok {
return nil, fmt.Errorf("unable to parse file: file does not contain root node object")
}
if resources := list.Filter("resource"); len(resources.Items) > 0 {
var err error
config.Resources, err = loadResourcesHcl(resources)
if err != nil {
return nil, fmt.Errorf("unable to parse: %s", err)
}
}
return config, nil
}
func loadResourcesHcl(list *ast.ObjectList) ([]*Resource, error) {
list = list.Children()
if len(list.Items) == 0 {
return nil, nil
}
var result []*Resource
for _, item := range list.Items {
kind := item.Keys[0].Token.Value().(string)
name := item.Keys[1].Token.Value().(string)
var listVal *ast.ObjectList
if ot, ok := item.Val.(*ast.ObjectType); ok {
listVal = ot.List
} else {
return nil, fmt.Errorf("module '%s': should be an object", name)
}
var params = resources.DefaultMetaParams
if o := listVal.Filter("meta"); len(o.Items) > 0 {
err := hcl.DecodeObject(&params, o)
if err != nil {
return nil, fmt.Errorf(
"Error parsing meta for %s: %s",
name,
err)
}
}
var deps []string
if edges := listVal.Filter("depends_on"); len(edges.Items) > 0 {
err := hcl.DecodeObject(&deps, edges.Items[0].Val)
if err != nil {
return nil, fmt.Errorf("unable to parse: %s", err)
}
}
var edges []*Edge
for _, dep := range deps {
vertices := strings.Split(dep, ".")
edges = append(edges, &Edge{
To: vertex{
Kind: kind,
Name: name,
},
From: vertex{
Kind: vertices[0],
Name: vertices[1],
},
})
}
var config map[string]interface{}
if err := hcl.DecodeObject(&config, item.Val); err != nil {
log.Printf("hcl: unable to decode body: %v", err)
return nil, fmt.Errorf(
"Error reading config for %s: %s",
name,
err)
}
delete(config, "meta")
delete(config, "depends_on")
rcv := make(map[string]*hv.ResourceVariable)
// parse strings for hil
for k, v := range config {
n, err := hil.Parse(v.(string))
if err != nil {
return nil, fmt.Errorf("unable to parse fields: %v", err)
}
variables, err := hv.ParseVariables(n)
if err != nil {
return nil, fmt.Errorf("unable to parse variables: %v", err)
}
for _, v := range variables {
val, ok := v.(*hv.ResourceVariable)
if !ok {
continue
}
rcv[k] = val
}
}
res, err := resources.NewNamedResource(kind, name)
if err != nil {
log.Printf("hcl: unable to parse resource: %v", err)
return nil, err
}
if err := hcl.DecodeObject(res, item.Val); err != nil {
log.Printf("hcl: unable to decode body: %v", err)
return nil, fmt.Errorf(
"Error reading config for %s: %s",
name,
err)
}
meta := res.Meta()
*meta = params
result = append(result, &Resource{
Name: name,
Kind: kind,
resource: res,
deps: edges,
rcv: rcv,
})
}
return result, nil
}
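
The diff does not include a sample hcl graph definition, so the following sketch guesses at the on-disk format from loadHcl() and loadResourcesHcl() above: top-level resource blocks keyed by kind and name, an optional meta block, a depends_on list, and ${kind.name.field} interpolation handled by the hil package. The resource field names are borrowed from the yaml examples and are assumptions here; the snippet only parses and counts the blocks, it does not build a graph.

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/hcl"
	"github.com/hashicorp/hcl/hcl/ast"
)

// exampleConfig is a guess at the input format accepted by the parser above.
const exampleConfig = `
resource "file" "file1" {
	path = "/tmp/mgmt/hello"
	content = "hello world from hcl\n"
	state = "exists"

	meta {
		noop = false
	}

	depends_on = ["noop.noop1"]
}

resource "noop" "noop1" {}
`

func main() {
	file, err := hcl.ParseBytes([]byte(exampleConfig))
	if err != nil {
		log.Fatalf("parse error: %v", err)
	}
	list, ok := file.Node.(*ast.ObjectList)
	if !ok {
		log.Fatalf("file does not contain a root object list")
	}
	// the same filter loadHcl() uses to find resource blocks
	resources := list.Filter("resource")
	fmt.Printf("parsed %d resource block item(s)\n", len(resources.Items))
}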

hil/interpolate.go (new file, 89 lines)

@@ -0,0 +1,89 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package hil
import (
"fmt"
"strings"
"github.com/hashicorp/hil/ast"
)
// Variable defines an interpolated variable.
type Variable interface {
Key() string
}
// ResourceVariable defines a variable type used to reference fields of a resource
// e.g. ${file.file1.Content}
type ResourceVariable struct {
Kind, Name, Field string
}
// Key returns a string representation of the variable key.
func (r *ResourceVariable) Key() string {
return fmt.Sprintf("%s.%s.%s", r.Kind, r.Name, r.Field)
}
// NewInterpolatedVariable takes a variable key and returns the interpolated variable
// of the required type.
func NewInterpolatedVariable(k string) (Variable, error) {
// for now resource variables are the only thing.
parts := strings.SplitN(k, ".", 3)
return &ResourceVariable{
Kind: parts[0],
Name: parts[1],
Field: parts[2],
}, nil
}
// ParseVariables will traverse a HIL tree looking for variables and returns a
// list of them.
func ParseVariables(tree ast.Node) ([]Variable, error) {
var result []Variable
var finalErr error
visitor := func(n ast.Node) ast.Node {
if finalErr != nil {
return n
}
switch nt := n.(type) {
case *ast.VariableAccess:
v, err := NewInterpolatedVariable(nt.Name)
if err != nil {
finalErr = err
return n
}
result = append(result, v)
default:
return n
}
return n
}
tree.Accept(visitor)
if finalErr != nil {
return nil, finalErr
}
return result, nil
}
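
A short usage sketch of the helpers above, using the same import alias (hv) that hcl/parse.go uses; the interpolation string is just an illustration of the ${kind.name.field} form.

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/hil"

	hv "github.com/purpleidea/mgmt/hil"
)

func main() {
	// hcl/parse.go hands each interpolated field value to hil.Parse and then
	// extracts the resource variables with ParseVariables.
	tree, err := hil.Parse("${password.password1.password}")
	if err != nil {
		log.Fatalf("hil parse error: %v", err)
	}
	vars, err := hv.ParseVariables(tree)
	if err != nil {
		log.Fatalf("variable parse error: %v", err)
	}
	for _, v := range vars {
		rv, ok := v.(*hv.ResourceVariable)
		if !ok {
			continue
		}
		fmt.Printf("kind=%s name=%s field=%s key=%s\n", rv.Kind, rv.Name, rv.Field, rv.Key())
	}
}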


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package lib
@@ -24,8 +24,10 @@ import (
"os/signal"
"syscall"
"github.com/purpleidea/mgmt/hcl"
"github.com/purpleidea/mgmt/puppet"
"github.com/purpleidea/mgmt/yamlgraph"
"github.com/purpleidea/mgmt/yamlgraph2"
"github.com/urfave/cli"
)
@@ -55,35 +57,55 @@ func run(c *cli.Context) error {
if _ = c.String("code"); c.IsSet("code") {
if obj.GAPI != nil {
return fmt.Errorf("Can't combine code GAPI with existing GAPI.")
return fmt.Errorf("can't combine code GAPI with existing GAPI")
}
// TODO: implement DSL GAPI
//obj.GAPI = &dsl.GAPI{
// Code: &s,
//}
return fmt.Errorf("The Code GAPI is not implemented yet!") // TODO: DSL
return fmt.Errorf("the Code GAPI is not implemented yet") // TODO: DSL
}
if y := c.String("yaml"); c.IsSet("yaml") {
if obj.GAPI != nil {
return fmt.Errorf("Can't combine YAML GAPI with existing GAPI.")
return fmt.Errorf("can't combine YAML GAPI with existing GAPI")
}
obj.GAPI = &yamlgraph.GAPI{
File: &y,
}
}
if y := c.String("yaml2"); c.IsSet("yaml2") {
if obj.GAPI != nil {
return fmt.Errorf("can't combine YAMLv2 GAPI with existing GAPI")
}
obj.GAPI = &yamlgraph2.GAPI{
File: &y,
}
}
if p := c.String("puppet"); c.IsSet("puppet") {
if obj.GAPI != nil {
return fmt.Errorf("Can't combine puppet GAPI with existing GAPI.")
return fmt.Errorf("can't combine puppet GAPI with existing GAPI")
}
obj.GAPI = &puppet.GAPI{
PuppetParam: &p,
PuppetConf: c.String("puppet-conf"),
}
}
if h := c.String("hcl"); c.IsSet("hcl") {
if obj.GAPI != nil {
return fmt.Errorf("can't combine hcl GAPI with existing GAPI")
}
obj.GAPI = &hcl.GAPI{
File: &h,
}
}
obj.Remotes = c.StringSlice("remote") // FIXME: GAPI-ify somehow?
obj.NoWatch = c.Bool("no-watch")
obj.NoConfigWatch = c.Bool("no-config-watch")
obj.NoStreamWatch = c.Bool("no-stream-watch")
obj.Noop = c.Bool("noop")
obj.Sema = c.Int("sema")
obj.Graphviz = c.String("graphviz")
obj.GraphvizFilter = c.String("graphviz-filter")
obj.ConvergedTimeout = c.Int("converged-timeout")
@@ -115,6 +137,9 @@ func run(c *cli.Context) error {
return err
}
obj.Prometheus = c.Bool("prometheus")
obj.PrometheusListen = c.String("prometheus-listen")
// install the exit signal handler
exit := make(chan struct{})
defer close(exit)
@@ -132,7 +157,7 @@ func run(c *cli.Context) error {
return
}
log.Println("Interrupted by signal")
obj.Exit(fmt.Errorf("Killed by %v", sig))
obj.Exit(fmt.Errorf("killed by %v", sig))
return
case <-exit:
return
@@ -152,7 +177,7 @@ func CLI(program, version string, flags Flags) error {
// test for sanity
if program == "" || version == "" {
return fmt.Errorf("Program was not compiled correctly. Please see Makefile.")
return fmt.Errorf("program was not compiled correctly, see Makefile")
}
app := cli.NewApp()
app.Name = program // App.name and App.version pass these values through
@@ -201,6 +226,16 @@ func CLI(program, version string, flags Flags) error {
Value: "",
Usage: "yaml graph definition to run",
},
cli.StringFlag{
Name: "yaml2",
Value: "",
Usage: "yaml graph definition to run (parser v2)",
},
cli.StringFlag{
Name: "hcl",
Value: "",
Usage: "hcl graph definition to run",
},
cli.StringFlag{
Name: "puppet, p",
Value: "",
@@ -219,12 +254,26 @@ func CLI(program, version string, flags Flags) error {
cli.BoolFlag{
Name: "no-watch",
Usage: "do not update graph under any switch events",
},
cli.BoolFlag{
Name: "no-config-watch",
Usage: "do not update graph on config switch events",
},
cli.BoolFlag{
Name: "no-stream-watch",
Usage: "do not update graph on stream switch events",
},
cli.BoolFlag{
Name: "noop",
Usage: "globally force all resources into no-op mode",
},
cli.IntFlag{
Name: "sema",
Value: -1,
Usage: "globally add a semaphore to all resources with this lock count",
},
cli.StringFlag{
Name: "graphviz, g",
Value: "",
@@ -232,7 +281,7 @@ func CLI(program, version string, flags Flags) error {
},
cli.StringFlag{
Name: "graphviz-filter, gf",
Value: "dot", // directed graph default
Value: "",
Usage: "graphviz filter to use",
},
cli.IntFlag{
@@ -320,6 +369,15 @@ func CLI(program, version string, flags Flags) error {
Value: "",
Usage: "default identity used for generation",
},
cli.BoolFlag{
Name: "prometheus",
Usage: "start a prometheus instance",
},
cli.StringFlag{
Name: "prometheus-listen",
Value: "",
Usage: "specify prometheus instance binding",
},
},
},
}


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package lib
@@ -30,6 +30,7 @@ import (
"github.com/purpleidea/mgmt/gapi"
"github.com/purpleidea/mgmt/pgp"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/prometheus"
"github.com/purpleidea/mgmt/recwatch"
"github.com/purpleidea/mgmt/remote"
"github.com/purpleidea/mgmt/resources"
@@ -64,8 +65,12 @@ type Main struct {
GAPI gapi.GAPI // graph API interface struct
Remotes []string // list of remote graph definitions to run
NoWatch bool // do not update graph on watched graph definition file changes
NoWatch bool // do not change graph under any circumstances
NoConfigWatch bool // do not update graph due to config changes
NoStreamWatch bool // do not update graph due to stream changes
Noop bool // globally force all resources into no-op mode
Sema int // add a semaphore with this lock count to each resource
Graphviz string // output file for graphviz data
GraphvizFilter string // graphviz filter to use
ConvergedTimeout int // exit after approximately this many seconds in a converged state; -1 to disable
@@ -93,6 +98,9 @@ type Main struct {
PgpIdentity *string
pgpKeys *pgp.PGP // agent key pair
Prometheus bool // enable prometheus metrics
PrometheusListen string // prometheus instance bind specification
exit chan error // exit signal
}
@@ -100,11 +108,20 @@ type Main struct {
func (obj *Main) Init() error {
if obj.Program == "" || obj.Version == "" {
return fmt.Errorf("You must set the Program and Version strings!")
return fmt.Errorf("you must set the Program and Version strings")
}
if obj.Prefix != nil && obj.TmpPrefix {
return fmt.Errorf("Choosing a prefix and the request for a tmp prefix is illogical!")
return fmt.Errorf("choosing a prefix and the request for a tmp prefix is illogical")
}
// if we've turned off watching, then be explicit and disable them all!
// if all the watches are disabled, then it's equivalent to no watching
if obj.NoWatch {
obj.NoConfigWatch = true
obj.NoStreamWatch = true
} else if obj.NoConfigWatch && obj.NoStreamWatch {
obj.NoWatch = true
}
obj.idealClusterSize = uint16(obj.IdealClusterSize)
@@ -113,7 +130,7 @@ func (obj *Main) Init() error {
}
if obj.idealClusterSize < 1 {
return fmt.Errorf("IdealClusterSize should be at least one!")
return fmt.Errorf("the IdealClusterSize should be at least one")
}
if obj.NoServer && len(obj.Remotes) > 0 {
@@ -121,19 +138,19 @@ func (obj *Main) Init() error {
// here, so if we're okay with every remote graph running in an
// isolated mode, then this is okay. Improve on this if there's
// someone who really wants to be able to do this.
return fmt.Errorf("The Server is required when using Remotes!")
return fmt.Errorf("the Server is required when using Remotes")
}
if obj.CConns < 0 {
return fmt.Errorf("The CConns value should be at least zero!")
return fmt.Errorf("the CConns value should be at least zero")
}
if obj.ConvergedTimeout >= 0 && obj.CConns > 0 && len(obj.Remotes) > int(obj.CConns) {
return fmt.Errorf("You can't converge if you have more remotes than available connections!")
return fmt.Errorf("you can't converge if you have more remotes than available connections")
}
if obj.Depth < 0 { // user should not be using this argument manually
return fmt.Errorf("Negative values for Depth are not permitted!")
return fmt.Errorf("negative values for Depth are not permitted")
}
// transform the url list inputs into etcd typed lists
@@ -142,19 +159,19 @@ func (obj *Main) Init() error {
util.FlattenListWithSplit(obj.Seeds, []string{",", ";", " "}),
)
if err != nil && len(obj.Seeds) > 0 {
return fmt.Errorf("Seeds didn't parse correctly!")
return fmt.Errorf("the Seeds didn't parse correctly")
}
obj.clientURLs, err = etcdtypes.NewURLs(
util.FlattenListWithSplit(obj.ClientURLs, []string{",", ";", " "}),
)
if err != nil && len(obj.ClientURLs) > 0 {
return fmt.Errorf("ClientURLs didn't parse correctly!")
return fmt.Errorf("the ClientURLs didn't parse correctly")
}
obj.serverURLs, err = etcdtypes.NewURLs(
util.FlattenListWithSplit(obj.ServerURLs, []string{",", ";", " "}),
)
if err != nil && len(obj.ServerURLs) > 0 {
return fmt.Errorf("ServerURLs didn't parse correctly!")
return fmt.Errorf("the ServerURLs didn't parse correctly")
}
obj.exit = make(chan error)
@@ -194,10 +211,10 @@ func (obj *Main) Run() error {
if h := obj.Hostname; h != nil && *h != "" { // override by cli
hostname = *h
} else if err != nil {
return errwrap.Wrapf(err, "Can't get default hostname!")
return errwrap.Wrapf(err, "can't get default hostname")
}
if hostname == "" { // safety check
return fmt.Errorf("Hostname cannot be empty!")
return fmt.Errorf("hostname cannot be empty")
}
var prefix = fmt.Sprintf("/var/lib/%s/", obj.Program) // default prefix
@@ -209,24 +226,39 @@ func (obj *Main) Run() error {
if obj.TmpPrefix || obj.AllowTmpPrefix {
var err error
if prefix, err = ioutil.TempDir("", obj.Program+"-"+hostname+"-"); err != nil {
return fmt.Errorf("Main: Error: Can't create temporary prefix!")
return fmt.Errorf("can't create temporary prefix")
}
log.Println("Main: Warning: Working prefix directory is temporary!")
} else {
return fmt.Errorf("Main: Error: Can't create prefix!")
return fmt.Errorf("can't create prefix")
}
}
log.Printf("Main: Working prefix is: %s", prefix)
pgraphPrefix := fmt.Sprintf("%s/", path.Join(prefix, "pgraph")) // pgraph namespace
if err := os.MkdirAll(pgraphPrefix, 0770); err != nil {
return errwrap.Wrapf(err, "Can't create pgraph prefix")
return errwrap.Wrapf(err, "can't create pgraph prefix")
}
var prom *prometheus.Prometheus
if obj.Prometheus {
prom = &prometheus.Prometheus{
Listen: obj.PrometheusListen,
}
if err := prom.Init(); err != nil {
return errwrap.Wrapf(err, "can't create initiate Prometheus instance")
}
log.Printf("Main: Prometheus: Starting instance on %s", prom.Listen)
if err := prom.Start(); err != nil {
return errwrap.Wrapf(err, "can't start initiate Prometheus instance")
}
}
if !obj.NoPgp {
pgpPrefix := fmt.Sprintf("%s/", path.Join(prefix, "pgp"))
if err := os.MkdirAll(pgpPrefix, 0770); err != nil {
return errwrap.Wrapf(err, "Can't create pgp prefix")
return errwrap.Wrapf(err, "can't create pgp prefix")
}
pgpKeyringPath := path.Join(pgpPrefix, pgp.DefaultKeyringFile) // default path
@@ -237,7 +269,7 @@ func (obj *Main) Run() error {
var err error
if obj.pgpKeys, err = pgp.Import(pgpKeyringPath); err != nil && !os.IsNotExist(err) {
return errwrap.Wrapf(err, "Can't import pgp key")
return errwrap.Wrapf(err, "can't import pgp key")
}
if obj.pgpKeys == nil {
@@ -249,24 +281,28 @@ func (obj *Main) Run() error {
name, comment, email, err := pgp.ParseIdentity(identity)
if err != nil {
return errwrap.Wrapf(err, "Can't parse user string")
return errwrap.Wrapf(err, "can't parse user string")
}
// TODO: Make hash configurable
if obj.pgpKeys, err = pgp.Generate(name, comment, email, nil); err != nil {
return errwrap.Wrapf(err, "Can't creating pgp key")
return errwrap.Wrapf(err, "can't create pgp key")
}
if err := obj.pgpKeys.SaveKey(pgpKeyringPath); err != nil {
return errwrap.Wrapf(err, "Can't save pgp key")
return errwrap.Wrapf(err, "can't save pgp key")
}
}
// TODO: Import admin key
}
var G, oldGraph *pgraph.Graph
oldGraph := &pgraph.Graph{}
graph := &resources.MGraph{}
// pass in the information we need
graph.Debug = obj.Flags.Debug
graph.Init()
// exit after `max-runtime` seconds for no reason at all...
if i := obj.MaxRuntime; i > 0 {
@@ -306,10 +342,20 @@ func (obj *Main) Run() error {
)
if EmbdEtcd == nil {
// TODO: verify EmbdEtcd is not nil below...
obj.Exit(fmt.Errorf("Main: Etcd: Creation failed!"))
obj.Exit(fmt.Errorf("Main: Etcd: Creation failed"))
} else if err := EmbdEtcd.Startup(); err != nil { // startup (returns when etcd main loop is running)
obj.Exit(fmt.Errorf("Main: Etcd: Startup failed: %v", err))
}
// wait for etcd server to be ready before continuing...
select {
case <-EmbdEtcd.ServerReady():
log.Printf("Main: Etcd: Server: Ready!")
// pass
case <-time.After(((etcd.MaxStartServerTimeout * etcd.MaxStartServerRetries) + 1) * time.Second):
obj.Exit(fmt.Errorf("Main: Etcd: Startup timeout"))
}
convergerStateFn := func(b bool) error {
// exit if we are using the converged timeout and we are the
// root node. otherwise, if we are a child node in a remote
@@ -317,58 +363,61 @@ func (obj *Main) Run() error {
// state and wait for the parent to trigger the exit.
if t := obj.ConvergedTimeout; obj.Depth == 0 && t >= 0 {
if b {
log.Printf("Converged for %d seconds, exiting!", t)
log.Printf("Main: Converged for %d seconds, exiting!", t)
obj.Exit(nil) // trigger an exit!
}
return nil
}
// send our individual state into etcd for others to see
return etcd.EtcdSetHostnameConverged(EmbdEtcd, hostname, b) // TODO: what should happen on error?
return etcd.SetHostnameConverged(EmbdEtcd, hostname, b) // TODO: what should happen on error?
}
if EmbdEtcd != nil {
converger.SetStateFn(convergerStateFn)
}
var gapiChan chan error // stream events are nil errors
// implementation of the World API (alternates can be substituted in)
world := &etcd.World{
Hostname: hostname,
EmbdEtcd: EmbdEtcd,
}
graph.Data = &resources.ResData{
Hostname: hostname,
Converger: converger,
Prometheus: prom,
World: world,
Prefix: pgraphPrefix,
Debug: obj.Flags.Debug,
}
var gapiChan chan gapi.Next // stream events contain some instructions!
if obj.GAPI != nil {
data := gapi.Data{
Hostname: hostname,
// NOTE: alternate implementations can be substituted in
World: &etcd.World{
Hostname: hostname,
EmbdEtcd: EmbdEtcd,
},
Noop: obj.Noop,
NoWatch: obj.NoWatch,
World: world,
Noop: obj.Noop,
//NoWatch: obj.NoWatch,
NoConfigWatch: obj.NoConfigWatch,
NoStreamWatch: obj.NoStreamWatch,
}
if err := obj.GAPI.Init(data); err != nil {
obj.Exit(fmt.Errorf("Main: GAPI: Init failed: %v", err))
} else if !obj.NoWatch {
} else {
// this must generate at least one event for it to work
gapiChan = obj.GAPI.Next() // stream of graph switch events!
}
}
exitchan := make(chan struct{}) // exit on close
go func() {
startChan := make(chan struct{}) // start signal
go func() { startChan <- struct{}{} }()
log.Println("Etcd: Starting...")
etcdChan := etcd.EtcdWatch(EmbdEtcd)
first := true // first loop or not
for {
log.Println("Main: Waiting...")
// The GAPI should always kick off an event on Next() at
// startup when (and if) it indeed has a graph to share!
fastPause := false
select {
case <-startChan: // kick the loop once at start
// pass
case b := <-etcdChan:
if !b { // ignore the message
continue
}
// everything else passes through to cause a compile!
case err, ok := <-gapiChan:
case next, ok := <-gapiChan:
if !ok { // channel closed
if obj.Flags.Debug {
log.Printf("Main: GAPI exited")
@@ -376,96 +425,159 @@ func (obj *Main) Run() error {
gapiChan = nil // disable it
continue
}
if err != nil {
obj.Exit(err) // trigger exit
continue
//return // TODO: return or wait for exitchan?
// if we've been asked to exit...
if next.Exit {
obj.Exit(next.Err) // trigger exit
continue // wait for exitchan
}
if obj.NoWatch { // extra safety for bad GAPI's
log.Printf("Main: GAPI stream should be quiet with NoWatch!") // fix the GAPI!
continue // no stream events should be sent
// the gapi lets us send an error to the channel
// this means there was a failure, but not fatal
if err := next.Err; err != nil {
log.Printf("Main: Error with graph stream: %v", err)
continue // wait for another event
}
// everything else passes through to cause a compile!
fastPause = next.Fast // should we pause fast?
case <-exitchan:
return
}
if obj.GAPI == nil {
log.Printf("Config: GAPI is empty!")
log.Printf("Main: GAPI is empty!")
continue
}
// we need the vertices to be paused to work on them, so
// run graph vertex LOCK...
if !first { // TODO: we can flatten this check out I think
converger.Pause() // FIXME: add sync wait?
G.Pause() // sync
converger.Pause() // FIXME: add sync wait?
graph.Pause(fastPause) // sync
//G.UnGroup() // FIXME: implement me if needed!
//graph.UnGroup() // FIXME: implement me if needed!
}
// make the graph from yaml, lib, puppet->yaml, or dsl!
newGraph, err := obj.GAPI.Graph() // generate graph!
if err != nil {
log.Printf("Config: Error creating new graph: %v", err)
log.Printf("Main: Error creating new graph: %v", err)
// unpause!
if !first {
G.Start(first) // sync
converger.Start() // after G.Start()
graph.Start(first) // sync
converger.Start() // after Start()
}
continue
}
newGraph.Flags = pgraph.Flags{Debug: obj.Flags.Debug}
// pass in the information we need
newGraph.AssociateData(&resources.Data{
Converger: converger,
Prefix: pgraphPrefix,
Debug: obj.Flags.Debug,
})
if obj.Flags.Debug {
log.Printf("Main: New Graph: %v", newGraph)
}
// apply the global noop parameter if requested
if obj.Noop {
for _, m := range newGraph.GraphMetas() {
// this edits the paused vertices, but it is safe to do
// so even if we don't use this new graph, since those
// value should be the same for existing vertices...
for _, v := range newGraph.Vertices() {
m := resources.VtoR(v).Meta()
// apply the global noop parameter if requested
if obj.Noop {
m.Noop = obj.Noop
}
// append the semaphore to each resource
if obj.Sema > 0 { // NOTE: size == 0 would block
// a semaphore with an empty id is valid
m.Sema = append(m.Sema, fmt.Sprintf(":%d", obj.Sema))
}
}
// FIXME: make sure we "UnGroup()" any semi-destructive
// changes to the resources so our efficient GraphSync
// will be able to re-use and cmp to the old graph.
newFullGraph, err := newGraph.GraphSync(oldGraph)
if err != nil {
log.Printf("Config: Error running graph sync: %v", err)
// We don't have to "UnGroup()" to compare, since we
// save the old graph to use when we compare.
// TODO: Does this hurt performance or graph changes ?
log.Printf("Main: GraphSync...")
vertexCmpFn := func(v1, v2 pgraph.Vertex) (bool, error) {
return resources.VtoR(v1).Compare(resources.VtoR(v2)), nil
}
vertexAddFn := func(v pgraph.Vertex) error {
err := resources.VtoR(v).Validate()
return errwrap.Wrapf(err, "could not Validate() resource")
}
vertexRemoveFn := func(v pgraph.Vertex) error {
// wait for exit before starting new graph!
resources.VtoR(v).Exit() // sync
return nil
}
edgeCmpFn := func(e1, e2 pgraph.Edge) (bool, error) {
edge1 := e1.(*resources.Edge) // panic if wrong
edge2 := e2.(*resources.Edge) // panic if wrong
return edge1.Compare(edge2), nil
}
// on success, this updates the receiver graph...
if err := oldGraph.GraphSync(newGraph, vertexCmpFn, vertexAddFn, vertexRemoveFn, edgeCmpFn); err != nil {
log.Printf("Main: Error running graph sync: %v", err)
// unpause!
if !first {
G.Start(first) // sync
converger.Start() // after G.Start()
graph.Start(first) // sync
converger.Start() // after Start()
}
continue
}
oldGraph = newFullGraph // save old graph
G = oldGraph.Copy() // copy to active graph
G.AutoEdges() // add autoedges; modifies the graph
G.AutoGroup() // run autogroup; modifies the graph
//savedGraph := oldGraph.Copy() // save a copy for errors
// TODO: should we call each Res.Setup() here instead?
// add autoedges; modifies the graph only if no error
if err := resources.AutoEdges(oldGraph); err != nil {
log.Printf("Main: Error running auto edges: %v", err)
// unpause!
if !first {
graph.Start(first) // sync
converger.Start() // after Start()
}
continue
}
// at this point, any time we error after a destructive
// modification of the graph we need to restore the old
// graph that was previously running, eg:
//
// oldGraph = savedGraph.Copy()
//
// which we are (luckily) able to avoid testing for now
resources.AutoGroup(oldGraph, &resources.NonReachabilityGrouper{}) // run autogroup; modifies the graph
// TODO: do we want to do a transitive reduction?
// FIXME: run a type checker that verifies all the send->recv relationships
log.Printf("Graph: %v", G) // show graph
if obj.GraphvizFilter != "" {
if err := G.ExecGraphviz(obj.GraphvizFilter, obj.Graphviz); err != nil {
log.Printf("Graphviz: %v", err)
} else {
log.Printf("Graphviz: Successfully generated graph!")
}
graph.Update(oldGraph) // copy in structure of new graph
// Call this here because at this point the graph does
// not know anything about the prometheus instance.
if err := prom.UpdatePgraphStartTime(); err != nil {
log.Printf("Main: Prometheus.UpdatePgraphStartTime() errored: %v", err)
}
// G.Start(...) needs to be synchronous or wait,
// Start() needs to be synchronous or wait,
// because if half of the nodes are started and
// some are not ready yet and the EtcdWatch
// loops, we'll cause G.Pause(...) before we
// loops, we'll cause Pause() before we
// even got going, thus causing nil pointer errors
G.Start(first) // sync
converger.Start() // after G.Start()
graph.Start(first) // sync
converger.Start() // after Start()
log.Printf("Main: Graph: %v", graph) // show graph
if obj.Graphviz != "" {
filter := obj.GraphvizFilter
if filter == "" {
filter = "dot" // directed graph default
}
if err := graph.ExecGraphviz(filter, obj.Graphviz, hostname); err != nil {
log.Printf("Main: Graphviz: %v", err)
} else {
log.Printf("Main: Graphviz: Successfully generated graph!")
}
}
first = false
}
}()
@@ -473,7 +585,7 @@ func (obj *Main) Run() error {
configWatcher := recwatch.NewConfigWatcher()
configWatcher.Flags = recwatch.Flags{Debug: obj.Flags.Debug}
events := configWatcher.Events()
if !obj.NoWatch {
if !obj.NoWatch { // FIXME: fit this into a clean GAPI?
configWatcher.Add(obj.Remotes...) // add all the files...
} else {
events = nil // signal that no-watch is true
@@ -490,7 +602,7 @@ func (obj *Main) Run() error {
// initialize the add watcher, which calls the f callback on map changes
convergerCb := func(f func(map[string]bool) error) (func(), error) {
return etcd.EtcdAddHostnameConvergedWatcher(EmbdEtcd, f)
return etcd.AddHostnameConvergedWatcher(EmbdEtcd, f)
}
// build remotes struct for remote ssh
@@ -517,6 +629,14 @@ func (obj *Main) Run() error {
// TODO: is there any benefit to running the remotes above in the loop?
// wait for etcd to be running before we remote in, which we do above!
go remotes.Run()
// wait for remotes to be ready before continuing...
select {
case <-remotes.Ready():
log.Printf("Main: Remotes: Run: Ready!")
// pass
//case <-time.After( ? * time.Second):
// obj.Exit(fmt.Errorf("Main: Remotes: Run timeout"))
}
if obj.GAPI == nil {
converger.Start() // better start this for empty graphs
@@ -525,34 +645,42 @@ func (obj *Main) Run() error {
reterr := <-obj.exit // wait for exit signal
log.Println("Destroy...")
log.Println("Main: Destroy...")
if obj.GAPI != nil {
if err := obj.GAPI.Close(); err != nil {
err = errwrap.Wrapf(err, "GAPI closed poorly!")
err = errwrap.Wrapf(err, "the GAPI closed poorly")
reterr = multierr.Append(reterr, err) // list of errors
}
}
configWatcher.Close() // stop sending file changes to remotes
if err := remotes.Exit(); err != nil { // tell all the remote connections to shutdown; waits!
err = errwrap.Wrapf(err, "Remote exited poorly!")
err = errwrap.Wrapf(err, "the Remote exited poorly")
reterr = multierr.Append(reterr, err) // list of errors
}
// tell inner main loop to exit
close(exitchan)
G.Exit() // tell all the children to exit, and waits for them to do so
graph.Exit() // tells all the children to exit, and waits for them to do so
// cleanup etcd main loop last so it can process everything first
if err := EmbdEtcd.Destroy(); err != nil { // shutdown and cleanup etcd
err = errwrap.Wrapf(err, "Etcd exited poorly!")
err = errwrap.Wrapf(err, "embedded Etcd exited poorly")
reterr = multierr.Append(reterr, err) // list of errors
}
if obj.Prometheus {
log.Printf("Main: Prometheus: Stopping instance")
if err := prom.Stop(); err != nil {
err = errwrap.Wrapf(err, "the Prometheus instance exited poorly")
reterr = multierr.Append(reterr, err)
}
}
if obj.Flags.Debug {
log.Printf("Main: Graph: %v", G)
log.Printf("Main: Graph: %v", graph)
}
// TODO: wait for each vertex to exit...
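The teardown above deliberately keeps going after a failure: each error is wrapped and appended to reterr so that every problem is reported at the end, instead of bailing on the first one. Below is a minimal standalone sketch of that accumulate-and-continue pattern; the shutdown/step names are hypothetical, github.com/pkg/errors is assumed for wrapping (matching the errwrap alias used elsewhere in the repo), and go.uber.org/multierr is assumed as the multi-error helper since the actual import is not visible in this hunk.

package main

import (
    "fmt"
    "log"

    "github.com/pkg/errors" // wrapping, like errwrap above
    "go.uber.org/multierr"  // assumed multi-error helper; the project may use another
)

type step struct {
    name string
    fn   func() error
}

// shutdown runs every step even if earlier ones fail, collecting all errors.
func shutdown(steps []step) error {
    var reterr error
    for _, s := range steps {
        if err := s.fn(); err != nil {
            err = errors.Wrapf(err, "%s exited poorly", s.name)
            reterr = multierr.Append(reterr, err) // keep going; collect every failure
        }
    }
    return reterr
}

func main() {
    err := shutdown([]step{
        {"gapi", func() error { return nil }},
        {"remotes", func() error { return fmt.Errorf("connection reset") }},
    })
    if err != nil {
        log.Printf("shutdown: %v", err)
    }
}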

View File

@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package main

misc/delta-cpu.sh (executable file, 81 lines)
View File

@@ -0,0 +1,81 @@
#!/bin/bash
# shitty cpu count control, useful for live demos
minimum=1 # don't decrease below this number of cpus
maximum=8 # don't increase above this number of cpus
count=1 # initial count
factor=3
function output() {
count=$1 # arg!
cat << EOF > ~/code/mgmt/examples/virt4.yaml
---
graph: mygraph
resources:
virt:
- name: mgmt4
meta:
limit: .inf
burst: 0
uri: 'qemu:///session'
cpus: $count
maxcpus: $maximum
memory: 524288
boot:
- hd
disk:
- type: qcow2
source: "~/.local/share/libvirt/images/fedora-23-scratch.qcow2"
state: running
transient: false
edges: []
comment: "qemu-img create -b fedora-23.qcow2 -f qcow2 fedora-23-scratch.qcow2"
EOF
}
#tput cuu 1 && tput el # remove last line
str=''
tnuoc=$((maximum-count)) # backwards count
count2=$((count * factor))
tnuoc2=$((tnuoc * factor))
left=`yes '>' | head -$count2 | paste -s -d '' -`
right=`yes ' ' | head -$tnuoc2 | paste -s -d '' -`
str="${left}${right}"
_min=$((minimum-1))
_max=$((maximum+1))
reset # clean up once...
output $count # call function
while true; do
read -n1 -r -s -p "CPUs count is: $count; ${str} Press +/- key to adjust." key
if [ "$key" = "q" ] || [ "$key" = "Q" ]; then
echo # newline
exit
fi
if [ ! "$key" = "+" ] && [ ! "$key" = "-" ] && [ ! "$key" = "=" ] && [ ! "$key" = "_" ]; then # wrong key
reset # woops, reset it all...
continue
fi
if [ "$key" == "+" ] || [ "$key" == "=" ]; then
count=$((count+1))
fi
if [ "$key" == "-" ] || [ "$key" == "_" ]; then
count=$((count-1))
fi
if [ $count -eq $_min ]; then # min
count=$minimum
fi
if [ $count -eq $_max ]; then # max
count=$maximum
fi
tnuoc=$((maximum-count)) # backwards count
#echo "count is: $count"
#echo "tnuoc is: $tnuoc"
count2=$((count * factor))
tnuoc2=$((tnuoc * factor))
left=`yes '>' | head -$count2 | paste -s -d '' -`
right=`yes ' ' | head -$tnuoc2 | paste -s -d '' -`
str="${left}${right}"
#echo "str is: $str"
echo -ne '\r' # backup
output $count # call function
done

View File

@@ -14,26 +14,38 @@ sudo_command=$(which sudo)
YUM=`which yum 2>/dev/null`
DNF=`which dnf 2>/dev/null`
APT=`which apt-get 2>/dev/null`
BREW=`which brew 2>/dev/null`
PACMAN=`which pacman 2>/dev/null`
# if DNF is available use it
if [ -x "$DNF" ]; then
YUM=$DNF
fi
if [ -z "$YUM" -a -z "$APT" ]; then
if [ -z "$YUM" -a -z "$APT" -a -z "$BREW" -a -z "$PACMAN" ]; then
echo "The package managers can't be found."
exit 1
fi
if [ ! -z "$YUM" ]; then
$sudo_command $YUM install -y libvirt-devel
$sudo_command $YUM install -y augeas-devel
fi
if [ ! -z "$APT" ]; then
$sudo_command $APT install -y libvirt-dev || true
$sudo_command $APT install -y libaugeas-dev || true
$sudo_command $APT install -y libpcap0.8-dev || true
fi
if [ ! -z "$BREW" ]; then
$BREW install libvirt || true
fi
if [ ! -z "$PACMAN" ]; then
$sudo_command $PACMAN -S --noconfirm libvirt augeas libpcap
fi
if [ $travis -eq 0 ]; then
if [ ! -z "$YUM" ]; then
# some go dependencies are stored in mercurial
@@ -47,11 +59,14 @@ if [ $travis -eq 0 ]; then
$sudo_command $APT install -y golang-golang-x-tools || true
$sudo_command $APT install -y golang-go.tools || true
fi
if [ ! -z "$PACMAN" ]; then
$sudo_command $PACMAN -S --noconfirm go
fi
fi
# if golang is too old, we don't want to fail with an obscure error later
if go version | grep 'go1\.[0123]\.'; then
echo "mgmt requires go1.4 or higher."
if go version | grep 'go1\.[012345]\.'; then
echo "mgmt requires go1.6 or higher."
exit 1
fi
@@ -65,4 +80,5 @@ if [[ $ret != 0 ]]; then
fi
go get golang.org/x/tools/cmd/stringer # for automatic stringer-ing
go get github.com/golang/lint/golint # for `golint`-ing
go get -u gopkg.in/alecthomas/gometalinter.v1 && mv "$(dirname $(which gometalinter.v1))/gometalinter.v1" "$(dirname $(which gometalinter.v1))/gometalinter" && gometalinter --install # bonus
cd "$XPWD" >/dev/null

View File

@@ -5,7 +5,7 @@ After=systemd-networkd.service
Requires=systemd-networkd.service
[Service]
ExecStart=/usr/bin/mgmt run ${OPTS}
ExecStart=/usr/bin/mgmt run $OPTS
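# note: unlike ${OPTS}, an unquoted $OPTS is split on whitespace by systemd, so several options can be passed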
RestartSec=5s
Restart=always

View File

@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgp

View File

@@ -1,535 +0,0 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
import (
"fmt"
"log"
"math"
"sync"
"time"
"github.com/purpleidea/mgmt/event"
"github.com/purpleidea/mgmt/resources"
errwrap "github.com/pkg/errors"
)
// GetTimestamp returns the timestamp of a vertex
func (v *Vertex) GetTimestamp() int64 {
return v.timestamp
}
// UpdateTimestamp updates the timestamp on a vertex and returns the new value
func (v *Vertex) UpdateTimestamp() int64 {
v.timestamp = time.Now().UnixNano() // update
return v.timestamp
}
// OKTimestamp returns true if this element can run right now.
func (g *Graph) OKTimestamp(v *Vertex) bool {
// these are all the vertices pointing TO v, eg: ??? -> v
for _, n := range g.IncomingGraphVertices(v) {
// if the vertex has a greater timestamp than any pre-req (n)
// then we can't run right now...
// if they're equal (eg: on init of 0) then we also can't run
// b/c we should let our pre-req's go first...
x, y := v.GetTimestamp(), n.GetTimestamp()
if g.Flags.Debug {
log.Printf("%s[%s]: OKTimestamp: (%v) >= %s[%s](%v): !%v", v.Kind(), v.GetName(), x, n.Kind(), n.GetName(), y, x >= y)
}
if x >= y {
return false
}
}
return true
}
// Poke notifies nodes after me in the dependency graph that they need refreshing...
// NOTE: this assumes that this can never fail or need to be rescheduled
func (g *Graph) Poke(v *Vertex, activity bool) error {
var wg sync.WaitGroup
// these are all the vertices pointing AWAY FROM v, eg: v -> ???
for _, n := range g.OutgoingGraphVertices(v) {
// XXX: if we're in state event and haven't been cancelled by
// apply, then we can cancel a poke to a child, right? XXX
// XXX: if n.Res.getState() != resources.ResStateEvent || activity { // is this correct?
if true || activity { // XXX: ???
if g.Flags.Debug {
log.Printf("%s[%s]: Poke: %s[%s]", v.Kind(), v.GetName(), n.Kind(), n.GetName())
}
wg.Add(1)
go func(nn *Vertex) error {
defer wg.Done()
edge := g.Adjacency[v][nn] // lookup
notify := edge.Notify && edge.Refresh()
// FIXME: is it okay that this is sync?
nn.SendEvent(event.EventPoke, true, notify)
// TODO: check return value?
return nil // never error for now...
}(n)
} else {
if g.Flags.Debug {
log.Printf("%s[%s]: Poke: %s[%s]: Skipped!", v.Kind(), v.GetName(), n.Kind(), n.GetName())
}
}
}
wg.Wait() // wait for all the pokes to complete
return nil
}
// BackPoke pokes the pre-requisites that are stale and need to run before I can run.
func (g *Graph) BackPoke(v *Vertex) {
// these are all the vertices pointing TO v, eg: ??? -> v
for _, n := range g.IncomingGraphVertices(v) {
x, y, s := v.GetTimestamp(), n.GetTimestamp(), n.Res.GetState()
// if the parent timestamp needs poking AND it's not in state
// ResStateEvent, then poke it. If the parent is in ResStateEvent it
// means that an event is pending, so we'll be expecting a poke
// back soon, so we can safely discard the extra parent poke...
// TODO: implement a stateLT (less than) to tell if something
// happens earlier in the state cycle and that doesn't wrap nil
if x >= y && (s != resources.ResStateEvent && s != resources.ResStateCheckApply) {
if g.Flags.Debug {
log.Printf("%s[%s]: BackPoke: %s[%s]", v.Kind(), v.GetName(), n.Kind(), n.GetName())
}
// FIXME: is it okay that this is sync?
n.SendEvent(event.EventBackPoke, true, false)
} else {
if g.Flags.Debug {
log.Printf("%s[%s]: BackPoke: %s[%s]: Skipped!", v.Kind(), v.GetName(), n.Kind(), n.GetName())
}
}
}
}
// RefreshPending determines if any previous nodes have a refresh pending here.
// If this is true, it means I am expected to apply a refresh when I next run.
func (g *Graph) RefreshPending(v *Vertex) bool {
var refresh bool
for _, edge := range g.IncomingGraphEdges(v) {
// if we asked for a notify *and* if one is pending!
if edge.Notify && edge.Refresh() {
refresh = true
break
}
}
return refresh
}
// SetUpstreamRefresh sets the refresh value to any upstream vertices.
func (g *Graph) SetUpstreamRefresh(v *Vertex, b bool) {
for _, edge := range g.IncomingGraphEdges(v) {
if edge.Notify {
edge.SetRefresh(b)
}
}
}
// SetDownstreamRefresh sets the refresh value to any downstream vertices.
func (g *Graph) SetDownstreamRefresh(v *Vertex, b bool) {
for _, edge := range g.OutgoingGraphEdges(v) {
// if we asked for a notify *and* if one is pending!
if edge.Notify {
edge.SetRefresh(b)
}
}
}
// Process is the primary function to execute for a particular vertex in the graph.
func (g *Graph) Process(v *Vertex) error {
obj := v.Res
if g.Flags.Debug {
log.Printf("%s[%s]: Process()", obj.Kind(), obj.GetName())
}
obj.SetState(resources.ResStateEvent)
var ok = true
var applied = false // did we run an apply?
// is it okay to run dependency wise right now?
// if not, that's okay because when the dependency runs, it will poke
// us back and we will run if needed then!
if g.OKTimestamp(v) {
if g.Flags.Debug {
log.Printf("%s[%s]: OKTimestamp(%v)", obj.Kind(), obj.GetName(), v.GetTimestamp())
}
obj.SetState(resources.ResStateCheckApply)
// connect any senders to receivers and detect if values changed
if updated, err := obj.SendRecv(obj); err != nil {
return errwrap.Wrapf(err, "could not SendRecv in Process")
} else if len(updated) > 0 {
for _, changed := range updated {
if changed { // at least one was updated
obj.StateOK(false) // invalidate cache, mark as dirty
break
}
}
}
var noop = obj.Meta().Noop // lookup the noop value
var refresh bool
var checkOK bool
var err error
if g.Flags.Debug {
log.Printf("%s[%s]: CheckApply(%t)", obj.Kind(), obj.GetName(), !noop)
}
// lookup the refresh (notification) variable
refresh = g.RefreshPending(v) // do i need to perform a refresh?
obj.SetRefresh(refresh) // tell the resource
// check cached state, to skip CheckApply; can't skip if refreshing
if !refresh && obj.IsStateOK() {
checkOK, err = true, nil
// NOTE: technically this block is wrong because we don't know
// if the resource implements refresh! If it doesn't, we could
// skip this, but it doesn't make a big difference under noop!
} else if noop && refresh { // had a refresh to do w/ noop!
checkOK, err = false, nil // therefore the state is wrong
// run the CheckApply!
} else {
// if this fails, don't UpdateTimestamp()
checkOK, err = obj.CheckApply(!noop)
}
if checkOK && err != nil { // should never return this way
log.Fatalf("%s[%s]: CheckApply(): %t, %+v", obj.Kind(), obj.GetName(), checkOK, err)
}
if g.Flags.Debug {
log.Printf("%s[%s]: CheckApply(): %t, %v", obj.Kind(), obj.GetName(), checkOK, err)
}
// if CheckApply ran without noop and without error, state should be good
if !noop && err == nil { // aka !noop || checkOK
obj.StateOK(true) // reset
if refresh {
g.SetUpstreamRefresh(v, false) // refresh happened, clear the request
obj.SetRefresh(false)
}
}
if !checkOK { // if state *was* not ok, we had to have apply'ed
if err != nil { // error during check or apply
ok = false
} else {
applied = true
}
}
// when noop is true we always want to update timestamp
if noop && err == nil {
ok = true
}
if ok {
// did we actually do work?
activity := applied
if noop {
activity = false // no we didn't do work...
}
if activity { // add refresh flag to downstream edges...
g.SetDownstreamRefresh(v, true)
}
// update this timestamp *before* we poke or the poked
// nodes might fail due to having a too old timestamp!
v.UpdateTimestamp() // this was touched...
obj.SetState(resources.ResStatePoking) // can't cancel parent poke
if err := g.Poke(v, activity); err != nil {
return errwrap.Wrapf(err, "the Poke() failed")
}
}
// poke at our pre-req's instead since they need to refresh/run...
return errwrap.Wrapf(err, "could not Process() successfully")
}
// else... only poke at the pre-req's that need to run
go g.BackPoke(v)
return nil
}
// SentinelErr is a sentinel error type that wraps an arbitrary error.
type SentinelErr struct {
err error
}
// Error is the required method to fulfill the error type.
func (obj *SentinelErr) Error() string {
return obj.err.Error()
}
// Worker is the common run frontend of the vertex. It handles all of the retry
// and retry delay common code, and ultimately returns the final status of this
// vertex execution.
func (g *Graph) Worker(v *Vertex) error {
// listen for chan events from Watch() and run
// the Process() function when they're received
// this avoids us having to pass the data into
// the Watch() function about which graph it is
// running on, which isolates things nicely...
obj := v.Res
// TODO: is there a better system for the `Watching` flag?
obj.SetWatching(true)
defer obj.SetWatching(false)
processChan := make(chan event.Event)
go func() {
running := false
var timer = time.NewTimer(time.Duration(math.MaxInt64)) // longest duration
if !timer.Stop() {
<-timer.C // unnecessary, shouldn't happen
}
var delay = time.Duration(v.Meta().Delay) * time.Millisecond
var retry = v.Meta().Retry // number of tries left, -1 for infinite
var saved event.Event
Loop:
for {
// this has to be synchronous, because otherwise the Res
// event loop will keep running and change state,
// causing the converged timeout to fire!
select {
case event, ok := <-processChan: // must use like this
if running && ok {
// we got an event that wasn't a close,
// while we were waiting for the timer!
// if this happens, it might be a bug:(
log.Fatalf("%s[%s]: Worker: Unexpected event: %+v", v.Kind(), v.GetName(), event)
}
if !ok { // processChan closed, let's exit
break Loop // no event, so no ack!
}
// the above mentioned synchronous part, is the
// running of this function, paired with an ack.
if e := g.Process(v); e != nil {
saved = event
log.Printf("%s[%s]: CheckApply errored: %v", v.Kind(), v.GetName(), e)
if retry == 0 {
// wrap the error in the sentinel
event.ACKNACK(&SentinelErr{e}) // fail the Watch()
break Loop
}
if retry > 0 { // don't decrement the -1
retry--
}
log.Printf("%s[%s]: CheckApply: Retrying after %.4f seconds (%d left)", v.Kind(), v.GetName(), delay.Seconds(), retry)
// start the timer...
timer.Reset(delay)
running = true
continue
}
retry = v.Meta().Retry // reset on success
event.ACK() // sync
case <-timer.C:
if !timer.Stop() {
//<-timer.C // blocks, docs are wrong!
}
running = false
log.Printf("%s[%s]: CheckApply delay expired!", v.Kind(), v.GetName())
// re-send this failed event, to trigger a CheckApply()
go func() { processChan <- saved }()
// TODO: should we send a fake event instead?
//saved = nil
}
}
}()
var err error // propagate the error up (this is a permanent BAD error!)
// the watch delay runs inside of the Watch resource loop, so that it
// can still process signals and exit if needed. It shouldn't run any
// resource specific code since this is supposed to be a retry delay.
// NOTE: we're using the same retry and delay metaparams that CheckApply
// uses. This is for practicality. We can separate them later if needed!
var watchDelay time.Duration
var watchRetry = v.Meta().Retry // number of tries left, -1 for infinite
// watch blocks until it ends, & errors to retry
for {
// TODO: do we have to stop the converged-timeout when in this block (perhaps we're in the delay block!)
// TODO: should we setup/manage some of the converged timeout stuff in here anyways?
// if a retry-delay was requested, wait, but don't block our events!
if watchDelay > 0 {
//var pendingSendEvent bool
timer := time.NewTimer(watchDelay)
Loop:
for {
select {
case <-timer.C: // the wait is over
break Loop // critical
// TODO: resources could have a separate exit channel to avoid this complexity!?
case event := <-obj.Events():
// NOTE: this code should match the similar Res code!
//cuid.SetConverged(false) // TODO: ?
if exit, send := obj.ReadEvent(&event); exit {
return nil // exit
} else if send {
// if we dive down this rabbit hole, our
// timer.C won't get seen until we get out!
// in this situation, the Watch() is blocked
// from performing until CheckApply returns
// successfully, or errors out. This isn't
// so bad, but we should document it. Is it
// possible that some resource *needs* Watch
// to run to be able to execute a CheckApply?
// That situation shouldn't be common, and
// should probably not be allowed. Can we
// avoid it though?
//if exit, err := doSend(); exit || err != nil {
// return err // we exit or bubble up a NACK...
//}
// Instead of doing the above, we can
// add events to a pending list, and
// when we finish the delay, we can run
// them.
//pendingSendEvent = true // all events are identical for now...
}
}
}
timer.Stop() // it's nice to cleanup
log.Printf("%s[%s]: Watch delay expired!", v.Kind(), v.GetName())
// NOTE: we can avoid the send if running Watch guarantees
// one CheckApply event on startup!
//if pendingSendEvent { // TODO: should this become a list in the future?
// if exit, err := obj.DoSend(processChan, ""); exit || err != nil {
// return err // we exit or bubble up a NACK...
// }
//}
}
// TODO: reset the watch retry count after some amount of success
v.Res.RegisterConverger()
e := v.Res.Watch(processChan)
v.Res.UnregisterConverger()
if e == nil { // exit signal
err = nil // clean exit
break
}
if sentinelErr, ok := e.(*SentinelErr); ok { // unwrap the sentinel
err = sentinelErr.err
break // sentinel means, perma-exit
}
log.Printf("%s[%s]: Watch errored: %v", v.Kind(), v.GetName(), e)
if watchRetry == 0 {
err = fmt.Errorf("Permanent watch error: %v", e)
break
}
if watchRetry > 0 { // don't decrement the -1
watchRetry--
}
watchDelay = time.Duration(v.Meta().Delay) * time.Millisecond
log.Printf("%s[%s]: Watch: Retrying after %.4f seconds (%d left)", v.Kind(), v.GetName(), watchDelay.Seconds(), watchRetry)
// We need to trigger a CheckApply after Watch restarts, so that
// we catch any lost events that happened while down. We do this
// by getting the Watch resource to send one event once it's up!
//v.SendEvent(eventPoke, false, false)
}
close(processChan)
return err
}
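Worker's retry bookkeeping above (a countdown where -1 means retry forever, plus a timer-driven delay that stays responsive to an exit signal) is a small reusable pattern. Here is a standalone sketch of just that part, with hypothetical names and no mgmt types.

package main

import (
    "fmt"
    "log"
    "time"
)

// retryLoop runs fn until it succeeds, retries run out, or stop is closed.
// retry == -1 means retry forever; delay is the pause between attempts.
func retryLoop(fn func() error, retry int, delay time.Duration, stop <-chan struct{}) error {
    for {
        err := fn()
        if err == nil {
            return nil
        }
        if retry == 0 {
            return err // permanent failure, like the SentinelErr path above
        }
        if retry > 0 { // don't decrement the -1 (infinite) case
            retry--
        }
        log.Printf("retrying after %.1fs (%d left)", delay.Seconds(), retry)
        select {
        case <-time.After(delay): // wait out the delay...
        case <-stop: // ...but stay responsive to an exit signal
            return err
        }
    }
}

func main() {
    tries := 0
    err := retryLoop(func() error {
        tries++
        if tries < 3 {
            return fmt.Errorf("not ready yet")
        }
        return nil
    }, 5, 100*time.Millisecond, nil)
    log.Printf("done after %d tries: %v", tries, err)
}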
// Start is a main kick to start the graph. It goes through in reverse topological
// sort order so that events can't hit un-started vertices.
func (g *Graph) Start(first bool) { // start or continue
log.Printf("State: %v -> %v", g.setState(graphStateStarting), g.getState())
defer log.Printf("State: %v -> %v", g.setState(graphStateStarted), g.getState())
var wg sync.WaitGroup
t, _ := g.TopologicalSort()
// TODO: only calculate indegree if `first` is true to save resources
indegree := g.InDegree() // compute all of the indegree's
for _, v := range Reverse(t) {
// selective poke: here we reduce the number of initial pokes
// to the minimum required to activate every vertex in the
// graph, either by direct action, or by getting poked by a
// vertex that was previously activated. if we poke each vertex
// that has no incoming edges, then we can be sure to reach the
// whole graph. Please note: this may mask certain optimization
// failures, such as any poke limiting code in Poke() or
// BackPoke(). You might want to disable this selective start
// when experimenting with and testing those elements.
// if we are unpausing (since it's not the first run of this
// function) we need to poke to *unpause* every graph vertex,
// and not just selectively the subset with no indegree.
if (!first) || indegree[v] == 0 {
v.Res.Starter(true) // let the startup code know to poke
}
if !v.Res.IsWatching() { // if Watch() is not running...
g.wg.Add(1)
// must pass in value to avoid races...
// see: https://ttboj.wordpress.com/2015/07/27/golang-parallelism-issues-causing-too-many-open-files-error/
go func(vv *Vertex) {
defer g.wg.Done()
// TODO: if a sufficient number of workers error,
// should something be done? Will these restart
// after perma-failure if we have a graph change?
if err := g.Worker(vv); err != nil { // contains the Watch and CheckApply loops
log.Printf("%s[%s]: Exited with failure: %v", vv.Kind(), vv.GetName(), err)
return
}
log.Printf("%s[%s]: Exited", vv.Kind(), vv.GetName())
}(v)
}
// let the vertices run their startup code in parallel
wg.Add(1)
go func(vv *Vertex) {
defer wg.Done()
vv.Res.Started() // block until started
}(v)
if !first { // unpause!
v.Res.SendEvent(event.EventStart, true, false) // sync!
}
}
wg.Wait() // wait for everyone
}
// Pause sends pause events to the graph in a topological sort order.
func (g *Graph) Pause() {
log.Printf("State: %v -> %v", g.setState(graphStatePausing), g.getState())
defer log.Printf("State: %v -> %v", g.setState(graphStatePaused), g.getState())
t, _ := g.TopologicalSort()
for _, v := range t { // squeeze out the events...
v.SendEvent(event.EventPause, true, false)
}
}
// Exit sends exit events to the graph in a topological sort order.
func (g *Graph) Exit() {
if g == nil {
return
} // empty graph that wasn't populated yet
t, _ := g.TopologicalSort()
for _, v := range t { // squeeze out the events...
// turn off the taps...
// XXX: consider instead doing this by closing the Res.events channel instead?
// XXX: do this by sending an exit signal, and then returning
// when we hit the 'default' in the select statement!
// XXX: we can do this to quiesce, but it's not necessary now
v.SendEvent(event.EventExit, true, false)
}
g.wg.Wait() // for now, this doesn't need to be a separate Wait() method
}
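The selective-poke comment in Start() above captures the key idea: on the first run, only vertices with in-degree zero need an initial poke, because every other vertex in a DAG is reached transitively when one of its dependencies completes and pokes it. A small standalone illustration of that selection follows, using a plain adjacency map and a hypothetical initialPokes helper instead of the project's graph type.

package main

import "fmt"

// initialPokes returns the vertices with no incoming edges; poking only these
// is enough to eventually activate every vertex in a DAG, since the rest get
// poked by their dependencies as those complete.
func initialPokes(adjacency map[string][]string) []string {
    indegree := make(map[string]int)
    for v := range adjacency {
        indegree[v] = 0 // every known vertex starts at zero
    }
    for _, outs := range adjacency {
        for _, w := range outs {
            indegree[w]++
        }
    }
    var starters []string
    for v, d := range indegree {
        if d == 0 {
            starters = append(starters, v)
        }
    }
    return starters
}

func main() {
    // package -> file -> service, plus an independent file resource
    adjacency := map[string][]string{
        "pkg":   {"file1"},
        "file1": {"svc"},
        "svc":   {},
        "file2": {},
    }
    fmt.Println(initialPokes(adjacency)) // e.g. [pkg file2] (map order varies)
}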

View File

@@ -1,103 +0,0 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package pgraph represents the internal "pointer graph" that we use.
package pgraph
import (
"fmt"
"log"
"github.com/purpleidea/mgmt/resources"
)
// add edges to the vertex in a graph based on whether it matches a uid list
func (g *Graph) addEdgesByMatchingUIDS(v *Vertex, uids []resources.ResUID) []bool {
// search for edges and see what matches!
var result []bool
// loop through each uid, and see if it matches any vertex
for _, uid := range uids {
var found = false
// uid is a ResUID object
for _, vv := range g.GetVertices() { // search
if v == vv { // skip self
continue
}
if g.Flags.Debug {
log.Printf("Compile: AutoEdge: Match: %v[%v] with UID: %v[%v]", vv.Kind(), vv.GetName(), uid.Kind(), uid.GetName())
}
// we must match to an effective UID for the resource,
// that is to say, the name value of a res is a helpful
// handle, but it is not necessarily a unique identity!
// remember, resources can return multiple UID's each!
if resources.UIDExistsInUIDs(uid, vv.GetUIDs()) {
// add edge from: vv -> v
if uid.Reversed() {
txt := fmt.Sprintf("AutoEdge: %v[%v] -> %v[%v]", vv.Kind(), vv.GetName(), v.Kind(), v.GetName())
log.Printf("Compile: Adding %v", txt)
g.AddEdge(vv, v, NewEdge(txt))
} else { // edges go the "normal" way, eg: pkg resource
txt := fmt.Sprintf("AutoEdge: %v[%v] -> %v[%v]", v.Kind(), v.GetName(), vv.Kind(), vv.GetName())
log.Printf("Compile: Adding %v", txt)
g.AddEdge(v, vv, NewEdge(txt))
}
found = true
break
}
}
result = append(result, found)
}
return result
}
// AutoEdges adds the automatic edges to the graph.
func (g *Graph) AutoEdges() {
log.Println("Compile: Adding AutoEdges...")
for _, v := range g.GetVertices() { // for each vertex's autoedges
if !v.Meta().AutoEdge { // is the metaparam true?
continue
}
autoEdgeObj := v.AutoEdges()
if autoEdgeObj == nil {
log.Printf("%v[%v]: Config: No auto edges were found!", v.Kind(), v.GetName())
continue // next vertex
}
for { // while the autoEdgeObj has more uids to add...
uids := autoEdgeObj.Next() // get some!
if uids == nil {
log.Printf("%v[%v]: Config: The auto edge list is empty!", v.Kind(), v.GetName())
break // inner loop
}
if g.Flags.Debug {
log.Println("Compile: AutoEdge: UIDS:")
for i, u := range uids {
log.Printf("Compile: AutoEdge: UID%d: %v", i, u)
}
}
// match and add edges
result := g.addEdgesByMatchingUIDS(v, uids)
// report back, and find out if we should continue
if !autoEdgeObj.Test(result) {
break
}
}
}
}

pgraph/graphsync.go (normal file, 155 lines)
View File

@@ -0,0 +1,155 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
import (
"fmt"
errwrap "github.com/pkg/errors"
)
func strVertexCmpFn(v1, v2 Vertex) (bool, error) {
if v1.String() == "" || v2.String() == "" {
return false, fmt.Errorf("empty vertex")
}
return v1.String() == v2.String(), nil
}
func strEdgeCmpFn(e1, e2 Edge) (bool, error) {
if e1.String() == "" || e2.String() == "" {
return false, fmt.Errorf("empty edge")
}
return e1.String() == e2.String(), nil
}
// GraphSync updates the Graph so that it matches the newGraph. It leaves
// identical elements alone so that they don't need to be refreshed.
// It tries to mutate existing elements into new ones, if they support this.
// This updates the Graph on success only.
// FIXME: should we do this with copies of the vertex resources?
func (obj *Graph) GraphSync(newGraph *Graph, vertexCmpFn func(Vertex, Vertex) (bool, error), vertexAddFn func(Vertex) error, vertexRemoveFn func(Vertex) error, edgeCmpFn func(Edge, Edge) (bool, error)) error {
oldGraph := obj.Copy() // work on a copy of the old graph
if oldGraph == nil {
var err error
oldGraph, err = NewGraph(newGraph.GetName()) // copy over the name
if err != nil {
return errwrap.Wrapf(err, "GraphSync failed")
}
}
oldGraph.SetName(newGraph.GetName()) // overwrite the name
if vertexCmpFn == nil {
vertexCmpFn = strVertexCmpFn // use simple string cmp version
}
if vertexAddFn == nil {
vertexAddFn = func(Vertex) error { return nil } // noop
}
if vertexRemoveFn == nil {
vertexRemoveFn = func(Vertex) error { return nil } // noop
}
if edgeCmpFn == nil {
edgeCmpFn = strEdgeCmpFn // use simple string cmp version
}
var lookup = make(map[Vertex]Vertex)
var vertexKeep []Vertex // list of vertices which are the same in new graph
var edgeKeep []Edge // list of edges which are the same in new graph
for v := range newGraph.Adjacency() { // loop through the vertices (resources)
var vertex Vertex
// step one, direct compare with res.Compare
if vertex == nil { // redundant guard for consistency
fn := func(vv Vertex) (bool, error) {
b, err := vertexCmpFn(vv, v)
return b, errwrap.Wrapf(err, "vertexCmpFn failed")
}
var err error
vertex, err = oldGraph.VertexMatchFn(fn)
if err != nil {
return errwrap.Wrapf(err, "VertexMatchFn failed")
}
}
// TODO: consider adding a mutate API.
// step two, try and mutate with res.Mutate
//if vertex == nil { // not found yet...
// vertex = oldGraph.MutateMatch(res)
//}
if vertex == nil { // no match found yet
if err := vertexAddFn(v); err != nil {
return errwrap.Wrapf(err, "vertexAddFn failed")
}
vertex = v
oldGraph.AddVertex(vertex) // call standalone in case not part of an edge
}
lookup[v] = vertex // used for constructing edges
vertexKeep = append(vertexKeep, vertex) // append
}
// get rid of any vertices we shouldn't keep (that aren't in new graph)
for v := range oldGraph.Adjacency() {
if !VertexContains(v, vertexKeep) {
if err := vertexRemoveFn(v); err != nil {
return errwrap.Wrapf(err, "vertexRemoveFn failed")
}
oldGraph.DeleteVertex(v)
}
}
// compare edges
for v1 := range newGraph.Adjacency() { // loop through the vertices (resources)
for v2, e := range newGraph.Adjacency()[v1] {
// we have an edge!
// lookup vertices (these should exist now)
vertex1, exists1 := lookup[v1]
vertex2, exists2 := lookup[v2]
if !exists1 || !exists2 { // no match found, bug?
//if vertex1 == nil || vertex2 == nil { // no match found
return fmt.Errorf("new vertices weren't found") // programming error
}
edge, exists := oldGraph.Adjacency()[vertex1][vertex2]
if !exists {
edge = e // use edge
} else if b, err := edgeCmpFn(edge, e); err != nil {
return errwrap.Wrapf(err, "edgeCmpFn failed")
} else if !b {
edge = e // overwrite edge
}
oldGraph.Adjacency()[vertex1][vertex2] = edge // store it (AddEdge)
edgeKeep = append(edgeKeep, edge) // mark as saved
}
}
// delete unused edges
for v1 := range oldGraph.Adjacency() {
for _, e := range oldGraph.Adjacency()[v1] {
// we have an edge!
if !EdgeContains(e, edgeKeep) {
oldGraph.DeleteEdge(e)
}
}
}
// success
*obj = *oldGraph // save old graph
return nil
}
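Because Vertex and Edge are now plain fmt.Stringer interfaces, any type with a stable, unique String() can be synced. Below is a minimal usage sketch; the github.com/purpleidea/mgmt/pgraph import path is assumed, and the string-backed str type is a throwaway illustration (the tests below use their own NV/NE helpers instead).

package main

import (
    "fmt"
    "log"

    "github.com/purpleidea/mgmt/pgraph"
)

// str is a toy Vertex/Edge: anything with a stable, unique String() qualifies.
type str string

func (s str) String() string { return string(s) }

func main() {
    oldGraph, err := pgraph.NewGraph("old")
    if err != nil {
        log.Fatal(err)
    }
    newGraph, err := pgraph.NewGraph("new")
    if err != nil {
        log.Fatal(err)
    }
    a, b, c := str("a"), str("b"), str("c")
    oldGraph.AddEdge(a, b, str("a->b"))
    newGraph.AddEdge(a, c, str("a->c"))

    // nil callbacks fall back to the String()-based compares and noop add/remove
    if err := oldGraph.GraphSync(newGraph, nil, nil, nil, nil); err != nil {
        log.Fatalf("GraphSync: %v", err)
    }
    fmt.Println(oldGraph) // now matches newGraph: Vertices(2), Edges(1)
}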

pgraph/graphsync_test.go (normal file, 92 lines)
View File

@@ -0,0 +1,92 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
import (
"testing"
)
func TestGraphSync1(t *testing.T) {
g := &Graph{}
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
e1 := NE("e1")
e2 := NE("e2")
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
// new graph
newGraph := &Graph{}
v4 := NV("v4")
v5 := NV("v5")
e3 := NE("e3")
newGraph.AddEdge(v4, v5, e3)
err := g.GraphSync(newGraph, nil, nil, nil, nil)
if err != nil {
t.Errorf("GraphSync failed: %v", err)
return
}
// g should change and become the same
if s := runGraphCmp(t, g, newGraph); s != "" {
t.Errorf("%s", s)
}
}
func TestGraphSync2(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g := &Graph{}
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
// new graph
newGraph := &Graph{}
newGraph.AddEdge(v1, v3, e1)
newGraph.AddEdge(v2, v3, e2)
newGraph.AddEdge(v4, v5, e3)
//newGraph.AddEdge(v3, v4, NE("v3,v4"))
//newGraph.AddEdge(v3, v5, NE("v3,v5"))
// graphs should differ!
if runGraphCmp(t, g, newGraph) == "" {
t.Errorf("graphs should differ initially")
return
}
err := g.GraphSync(newGraph, strVertexCmpFn, vertexAddFn, vertexRemoveFn, strEdgeCmpFn)
if err != nil {
t.Errorf("GraphSync failed: %v", err)
return
}
// g should change and become the same
if s := runGraphCmp(t, g, newGraph); s != "" {
t.Errorf("%s", s)
}
}

View File

@@ -1,21 +1,21 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
package pgraph // TODO: this should be a subpackage
import (
"fmt"
@@ -45,12 +45,16 @@ func (g *Graph) Graphviz() (out string) {
out += fmt.Sprintf("\tlabel=\"%s\";\n", g.GetName())
//out += "\tnode [shape=box];\n"
str := ""
for i := range g.Adjacency { // reverse paths
out += fmt.Sprintf("\t%s [label=\"%s[%s]\"];\n", i.GetName(), i.Kind(), i.GetName())
for j := range g.Adjacency[i] {
k := g.Adjacency[i][j]
for i := range g.Adjacency() { // reverse paths
out += fmt.Sprintf("\t\"%s\" [label=\"%s\"];\n", i, i)
for j := range g.Adjacency()[i] {
k := g.Adjacency()[i][j]
// use str for clearer output ordering
str += fmt.Sprintf("\t%s -> %s [label=%s];\n", i.GetName(), j.GetName(), k.Name)
//if fmtBoldFn(k) { // TODO: add this sort of formatting
// str += fmt.Sprintf("\t\"%s\" -> \"%s\" [label=\"%s\",style=bold];\n", i, j, k)
//} else {
str += fmt.Sprintf("\t\"%s\" -> \"%s\" [label=\"%s\"];\n", i, j, k)
//}
}
}
out += str
@@ -60,16 +64,20 @@ func (g *Graph) Graphviz() (out string) {
// ExecGraphviz writes out the graphviz data and runs the correct graphviz
// filter command.
func (g *Graph) ExecGraphviz(program, filename string) error {
func (g *Graph) ExecGraphviz(program, filename, hostname string) error {
switch program {
case "dot", "neato", "twopi", "circo", "fdp":
default:
return fmt.Errorf("Invalid graphviz program selected!")
return fmt.Errorf("invalid graphviz program selected")
}
if filename == "" {
return fmt.Errorf("No filename given!")
return fmt.Errorf("no filename given")
}
if hostname != "" {
filename = fmt.Sprintf("%s@%s", filename, hostname)
}
// run as a normal user if possible when run with sudo
@@ -78,18 +86,18 @@ func (g *Graph) ExecGraphviz(program, filename string) error {
err := ioutil.WriteFile(filename, []byte(g.Graphviz()), 0644)
if err != nil {
return fmt.Errorf("Error writing to filename!")
return fmt.Errorf("error writing to filename")
}
if err1 == nil && err2 == nil {
if err := os.Chown(filename, uid, gid); err != nil {
return fmt.Errorf("Error changing file owner!")
return fmt.Errorf("error changing file owner")
}
}
path, err := exec.LookPath(program)
if err != nil {
return fmt.Errorf("Graphviz is missing!")
return fmt.Errorf("the Graphviz program is missing")
}
out := fmt.Sprintf("%s.png", filename)
@@ -104,7 +112,7 @@ func (g *Graph) ExecGraphviz(program, filename string) error {
}
_, err = cmd.Output()
if err != nil {
return fmt.Errorf("Error writing to image!")
return fmt.Errorf("error writing to image")
}
return nil
}
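As a usage sketch of the new three-argument ExecGraphviz, mirroring the call in Run() above: the filename and hostname here are only illustrative, the str vertex type is a throwaway Stringer, and the github.com/purpleidea/mgmt/pgraph import path is assumed.

package main

import (
    "log"

    "github.com/purpleidea/mgmt/pgraph"
)

type str string

func (s str) String() string { return string(s) }

func main() {
    g, err := pgraph.NewGraph("example")
    if err != nil {
        log.Fatal(err)
    }
    g.AddEdge(str("pkg"), str("svc"), str("pkg->svc"))

    // the hostname gets appended to the filename, so this writes the dot
    // source to /tmp/mgmt.dot@myhost and renders a .png next to it
    if err := g.ExecGraphviz("dot", "/tmp/mgmt.dot", "myhost"); err != nil {
        log.Printf("Graphviz: %v", err)
    }
}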

View File

@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package pgraph represents the internal "pointer graph" that we use.
@@ -21,30 +21,10 @@ package pgraph
import (
"fmt"
"sort"
"sync"
"github.com/purpleidea/mgmt/event"
"github.com/purpleidea/mgmt/resources"
errwrap "github.com/pkg/errors"
)
//go:generate stringer -type=graphState -output=graphstate_stringer.go
type graphState int
const (
graphStateNil graphState = iota
graphStateStarting
graphStateStarted
graphStatePausing
graphStatePaused
)
// Flags contains specific constants used by the graph.
type Flags struct {
Debug bool
}
// Graph is the graph structure in this library.
// The graph abstract data type (ADT) is defined as follows:
// * the directed graph arrows point from left to right ( -> )
@@ -52,76 +32,72 @@ type Flags struct {
// * IOW, you might see package -> file -> service (where package runs first)
// * This is also the direction that the notify should happen in...
type Graph struct {
Name string
Adjacency map[*Vertex]map[*Vertex]*Edge // *Vertex -> *Vertex (edge)
Flags Flags
state graphState
mutex *sync.Mutex // used when modifying graph State variable
wg *sync.WaitGroup
Name string
adjacency map[Vertex]map[Vertex]Edge // Vertex -> Vertex (edge)
kv map[string]interface{} // some values associated with the graph
}
// Vertex is the primary vertex struct in this library.
type Vertex struct {
resources.Res // anonymous field
timestamp int64 // last updated timestamp ?
// Vertex is the primary vertex struct in this library. It can be anything that
// implements Stringer. The string output must be stable and unique in a graph.
type Vertex interface {
fmt.Stringer // String() string
}
// Edge is the primary edge struct in this library.
type Edge struct {
Name string
Notify bool // should we send a refresh notification along this edge?
// Edge is the primary edge struct in this library. It can be anything that
// implements Stringer. The string output must be stable and unique in a graph.
type Edge interface {
fmt.Stringer // String() string
}
refresh bool // is there a notify pending for the dest vertex ?
// Init initializes the graph which populates all the internal structures.
func (g *Graph) Init() error {
if g.Name == "" { // FIXME: is this really a good requirement?
return fmt.Errorf("can't initialize graph with empty name")
}
//g.adjacency = make(map[Vertex]map[Vertex]Edge) // not required
//g.kv = make(map[string]interface{}) // not required
return nil
}
// NewGraph builds a new graph.
func NewGraph(name string) *Graph {
return &Graph{
Name: name,
Adjacency: make(map[*Vertex]map[*Vertex]*Edge),
state: graphStateNil,
// ptr b/c: Mutex/WaitGroup must not be copied after first use
mutex: &sync.Mutex{},
wg: &sync.WaitGroup{},
}
}
// NewVertex returns a new graph vertex struct with a contained resource.
func NewVertex(r resources.Res) *Vertex {
return &Vertex{
Res: r,
}
}
// NewEdge returns a new graph edge struct.
func NewEdge(name string) *Edge {
return &Edge{
func NewGraph(name string) (*Graph, error) {
g := &Graph{
Name: name,
}
if err := g.Init(); err != nil {
return nil, err
}
return g, nil
}
// Refresh returns the pending refresh status of this edge.
func (obj *Edge) Refresh() bool {
return obj.refresh
// Value returns a value stored alongside the graph in a particular key.
func (g *Graph) Value(key string) (interface{}, bool) {
val, exists := g.kv[key]
return val, exists
}
// SetRefresh sets the pending refresh status of this edge.
func (obj *Edge) SetRefresh(b bool) {
obj.refresh = b
// SetValue sets a value to be stored alongside the graph in a particular key.
func (g *Graph) SetValue(key string, val interface{}) {
if g.kv == nil { // initialize on first use
g.kv = make(map[string]interface{})
}
g.kv[key] = val
}
// Copy makes a copy of the graph struct
// Copy makes a copy of the graph struct.
func (g *Graph) Copy() *Graph {
if g == nil { // allow nil graphs through
return g
}
newGraph := &Graph{
Name: g.Name,
Adjacency: make(map[*Vertex]map[*Vertex]*Edge, len(g.Adjacency)),
Flags: g.Flags,
state: g.state,
mutex: g.mutex,
wg: g.wg,
adjacency: make(map[Vertex]map[Vertex]Edge, len(g.adjacency)),
kv: g.kv,
}
for k, v := range g.Adjacency {
newGraph.Adjacency[k] = v // copy
for k, v := range g.adjacency {
newGraph.adjacency[k] = v // copy
}
return newGraph
}
@@ -136,76 +112,49 @@ func (g *Graph) SetName(name string) {
g.Name = name
}
// getState returns the state of the graph. This state is used for optimizing
// certain algorithms by knowing what part of processing the graph is currently
// undergoing.
func (g *Graph) getState() graphState {
//g.mutex.Lock()
//defer g.mutex.Unlock()
return g.state
}
// setState sets the graph state and returns the previous state.
func (g *Graph) setState(state graphState) graphState {
g.mutex.Lock()
defer g.mutex.Unlock()
prev := g.getState()
g.state = state
return prev
}
// AddVertex uses variadic input to add all listed vertices to the graph
func (g *Graph) AddVertex(xv ...*Vertex) {
// AddVertex uses variadic input to add all listed vertices to the graph.
func (g *Graph) AddVertex(xv ...Vertex) {
if g.adjacency == nil { // initialize on first use
g.adjacency = make(map[Vertex]map[Vertex]Edge)
}
for _, v := range xv {
if _, exists := g.Adjacency[v]; !exists {
g.Adjacency[v] = make(map[*Vertex]*Edge)
if _, exists := g.adjacency[v]; !exists {
g.adjacency[v] = make(map[Vertex]Edge)
}
}
}
// DeleteVertex deletes a particular vertex from the graph.
func (g *Graph) DeleteVertex(v *Vertex) {
delete(g.Adjacency, v)
for k := range g.Adjacency {
delete(g.Adjacency[k], v)
func (g *Graph) DeleteVertex(v Vertex) {
delete(g.adjacency, v)
for k := range g.adjacency {
delete(g.adjacency[k], v)
}
}
// AddEdge adds a directed edge to the graph from v1 to v2.
func (g *Graph) AddEdge(v1, v2 *Vertex, e *Edge) {
func (g *Graph) AddEdge(v1, v2 Vertex, e Edge) {
// NOTE: this doesn't allow more than one edge between two vertexes...
g.AddVertex(v1, v2) // supports adding N vertices now
// TODO: check if an edge exists to avoid overwriting it!
// NOTE: VertexMerge() depends on overwriting it at the moment...
g.Adjacency[v1][v2] = e
g.adjacency[v1][v2] = e
}
// DeleteEdge deletes a particular edge from the graph.
// FIXME: add test cases
func (g *Graph) DeleteEdge(e *Edge) {
for v1 := range g.Adjacency {
for v2, edge := range g.Adjacency[v1] {
func (g *Graph) DeleteEdge(e Edge) {
for v1 := range g.adjacency {
for v2, edge := range g.adjacency[v1] {
if e == edge {
delete(g.Adjacency[v1], v2)
delete(g.adjacency[v1], v2)
}
}
}
}
// GetVertexMatch searches for an equivalent resource in the graph and returns
// the vertex it is found in, or nil if not found.
func (g *Graph) GetVertexMatch(obj resources.Res) *Vertex {
for k := range g.Adjacency {
if k.Res.Compare(obj) {
return k
}
}
return nil
}
// HasVertex returns if the input vertex exists in the graph.
func (g *Graph) HasVertex(v *Vertex) bool {
if _, exists := g.Adjacency[v]; exists {
func (g *Graph) HasVertex(v Vertex) bool {
if _, exists := g.adjacency[v]; exists {
return true
}
return false
@@ -213,33 +162,40 @@ func (g *Graph) HasVertex(v *Vertex) bool {
// NumVertices returns the number of vertices in the graph.
func (g *Graph) NumVertices() int {
return len(g.Adjacency)
return len(g.adjacency)
}
// NumEdges returns the number of edges in the graph.
func (g *Graph) NumEdges() int {
count := 0
for k := range g.Adjacency {
count += len(g.Adjacency[k])
for k := range g.adjacency {
count += len(g.adjacency[k])
}
return count
}
// GetVertices returns a randomly sorted slice of all vertices in the graph
// Adjacency returns the adjacency map representing this graph. This is useful
// for users who wish to operate on the raw data structure more efficiently.
// This works because maps are reference types so we can edit this at will.
func (g *Graph) Adjacency() map[Vertex]map[Vertex]Edge {
return g.adjacency
}
// Vertices returns a randomly sorted slice of all vertices in the graph.
// The order is random, because the map implementation is intentionally so!
func (g *Graph) GetVertices() []*Vertex {
var vertices []*Vertex
for k := range g.Adjacency {
func (g *Graph) Vertices() []Vertex {
var vertices []Vertex
for k := range g.adjacency {
vertices = append(vertices, k)
}
return vertices
}
// GetVerticesChan returns a channel of all vertices in the graph.
func (g *Graph) GetVerticesChan() chan *Vertex {
ch := make(chan *Vertex)
go func(ch chan *Vertex) {
for k := range g.Adjacency {
// VerticesChan returns a channel of all vertices in the graph.
func (g *Graph) VerticesChan() chan Vertex {
ch := make(chan Vertex)
go func(ch chan Vertex) {
for k := range g.adjacency {
ch <- k
}
close(ch)
@@ -248,17 +204,17 @@ func (g *Graph) GetVerticesChan() chan *Vertex {
}
// VertexSlice is a linear list of vertices. It can be sorted.
type VertexSlice []*Vertex
type VertexSlice []Vertex
func (vs VertexSlice) Len() int { return len(vs) }
func (vs VertexSlice) Swap(i, j int) { vs[i], vs[j] = vs[j], vs[i] }
func (vs VertexSlice) Less(i, j int) bool { return vs[i].String() < vs[j].String() }
// GetVerticesSorted returns a sorted slice of all vertices in the graph
// The order is sorted by String() to avoid the non-determinism in the map type
func (g *Graph) GetVerticesSorted() []*Vertex {
var vertices []*Vertex
for k := range g.Adjacency {
// VerticesSorted returns a sorted slice of all vertices in the graph.
// The order is sorted by String() to avoid the non-determinism in the map type.
func (g *Graph) VerticesSorted() []Vertex {
var vertices []Vertex
for k := range g.adjacency {
vertices = append(vertices, k)
}
sort.Sort(VertexSlice(vertices)) // add determinism
@@ -270,19 +226,14 @@ func (g *Graph) String() string {
return fmt.Sprintf("Vertices(%d), Edges(%d)", g.NumVertices(), g.NumEdges())
}
// String returns the canonical form for a vertex
func (v *Vertex) String() string {
return fmt.Sprintf("%s[%s]", v.Res.Kind(), v.Res.GetName())
}
// IncomingGraphVertices returns an array (slice) of all directed vertices to
// vertex v (??? -> v). OKTimestamp should probably use this.
func (g *Graph) IncomingGraphVertices(v *Vertex) []*Vertex {
func (g *Graph) IncomingGraphVertices(v Vertex) []Vertex {
// TODO: we might be able to implement this differently by reversing
// the Adjacency graph and then looping through it again...
var s []*Vertex
for k := range g.Adjacency { // reverse paths
for w := range g.Adjacency[k] {
var s []Vertex
for k := range g.adjacency { // reverse paths
for w := range g.adjacency[k] {
if w == v {
s = append(s, k)
}
@@ -293,9 +244,9 @@ func (g *Graph) IncomingGraphVertices(v *Vertex) []*Vertex {
// OutgoingGraphVertices returns an array (slice) of all vertices that vertex v
// points to (v -> ???). Poke should probably use this.
func (g *Graph) OutgoingGraphVertices(v *Vertex) []*Vertex {
var s []*Vertex
for k := range g.Adjacency[v] { // forward paths
func (g *Graph) OutgoingGraphVertices(v Vertex) []Vertex {
var s []Vertex
for k := range g.adjacency[v] { // forward paths
s = append(s, k)
}
return s
@@ -303,18 +254,18 @@ func (g *Graph) OutgoingGraphVertices(v *Vertex) []*Vertex {
// GraphVertices returns an array (slice) of all vertices that connect to vertex v.
// This is the union of IncomingGraphVertices and OutgoingGraphVertices.
func (g *Graph) GraphVertices(v *Vertex) []*Vertex {
var s []*Vertex
func (g *Graph) GraphVertices(v Vertex) []Vertex {
var s []Vertex
s = append(s, g.IncomingGraphVertices(v)...)
s = append(s, g.OutgoingGraphVertices(v)...)
return s
}
// IncomingGraphEdges returns all of the edges that point to vertex v (??? -> v).
func (g *Graph) IncomingGraphEdges(v *Vertex) []*Edge {
var edges []*Edge
for v1 := range g.Adjacency { // reverse paths
for v2, e := range g.Adjacency[v1] {
func (g *Graph) IncomingGraphEdges(v Vertex) []Edge {
var edges []Edge
for v1 := range g.adjacency { // reverse paths
for v2, e := range g.adjacency[v1] {
if v2 == v {
edges = append(edges, e)
}
@@ -324,9 +275,9 @@ func (g *Graph) IncomingGraphEdges(v *Vertex) []*Edge {
}
// OutgoingGraphEdges returns all of the edges that point from vertex v (v -> ???).
func (g *Graph) OutgoingGraphEdges(v *Vertex) []*Edge {
var edges []*Edge
for _, e := range g.Adjacency[v] { // forward paths
func (g *Graph) OutgoingGraphEdges(v Vertex) []Edge {
var edges []Edge
for _, e := range g.adjacency[v] { // forward paths
edges = append(edges, e)
}
return edges
@@ -334,18 +285,18 @@ func (g *Graph) OutgoingGraphEdges(v *Vertex) []*Edge {
// GraphEdges returns an array (slice) of all edges that connect to vertex v.
// This is the union of IncomingGraphEdges and OutgoingGraphEdges.
func (g *Graph) GraphEdges(v *Vertex) []*Edge {
var edges []*Edge
func (g *Graph) GraphEdges(v Vertex) []Edge {
var edges []Edge
edges = append(edges, g.IncomingGraphEdges(v)...)
edges = append(edges, g.OutgoingGraphEdges(v)...)
return edges
}
// DFS returns a depth first search for the graph, starting at the input vertex.
func (g *Graph) DFS(start *Vertex) []*Vertex {
var d []*Vertex // discovered
var s []*Vertex // stack
if _, exists := g.Adjacency[start]; !exists {
func (g *Graph) DFS(start Vertex) []Vertex {
var d []Vertex // discovered
var s []Vertex // stack
if _, exists := g.adjacency[start]; !exists {
return nil // TODO: error
}
v := start
@@ -365,64 +316,65 @@ func (g *Graph) DFS(start *Vertex) []*Vertex {
}
// FilterGraph builds a new graph containing only vertices from the list.
func (g *Graph) FilterGraph(name string, vertices []*Vertex) *Graph {
newgraph := NewGraph(name)
for k1, x := range g.Adjacency {
func (g *Graph) FilterGraph(name string, vertices []Vertex) (*Graph, error) {
newGraph := &Graph{Name: name}
if err := newGraph.Init(); err != nil {
return nil, errwrap.Wrapf(err, "could not run FilterGraph() properly")
}
for k1, x := range g.adjacency {
for k2, e := range x {
//log.Printf("Filter: %s -> %s # %s", k1.Name, k2.Name, e.Name)
if VertexContains(k1, vertices) || VertexContains(k2, vertices) {
newgraph.AddEdge(k1, k2, e)
newGraph.AddEdge(k1, k2, e)
}
}
}
return newgraph
return newGraph, nil
}
// GetDisconnectedGraphs returns a channel containing the N disconnected graphs
// in our main graph. We can then process each of these in parallel.
func (g *Graph) GetDisconnectedGraphs() chan *Graph {
ch := make(chan *Graph)
go func() {
var start *Vertex
var d []*Vertex // discovered
c := g.NumVertices()
for len(d) < c {
// DisconnectedGraphs returns a list containing the N disconnected graphs.
func (g *Graph) DisconnectedGraphs() ([]*Graph, error) {
graphs := []*Graph{}
var start Vertex
var d []Vertex // discovered
c := g.NumVertices()
for len(d) < c {
// get an undiscovered vertex to start from
for _, s := range g.GetVertices() {
if !VertexContains(s, d) {
start = s
}
// get an undiscovered vertex to start from
for _, s := range g.Vertices() {
if !VertexContains(s, d) {
start = s
}
// dfs through the graph
dfs := g.DFS(start)
// filter all the collected elements into a new graph
newgraph := g.FilterGraph(g.Name, dfs)
// add number of elements found to found variable
d = append(d, dfs...) // extend
// return this new graph to the channel
ch <- newgraph
// if we've found all the elements, then we're done
// otherwise loop through to continue...
}
close(ch)
}()
return ch
// dfs through the graph
dfs := g.DFS(start)
// filter all the collected elements into a new graph
newgraph, err := g.FilterGraph(g.Name, dfs)
if err != nil {
return nil, errwrap.Wrapf(err, "could not run DisconnectedGraphs() properly")
}
// add number of elements found to found variable
d = append(d, dfs...) // extend
// append this new graph to the list
graphs = append(graphs, newgraph)
// if we've found all the elements, then we're done
// otherwise loop through to continue...
}
return graphs, nil
}
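A minimal usage sketch of the new list-returning API (the helper below is illustrative, not part of the package; it only relies on DisconnectedGraphs, the Name field and NumVertices as shown above):
// printComponents walks each disconnected component of a graph and prints a
// short summary of it.
func printComponents(g *Graph) error {
	graphs, err := g.DisconnectedGraphs()
	if err != nil {
		return errwrap.Wrapf(err, "could not compute disconnected graphs")
	}
	for i, sub := range graphs {
		fmt.Printf("component %d (%s): %d vertices\n", i, sub.Name, sub.NumVertices())
	}
	return nil
}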
// InDegree returns the count of vertices that point to me in one big lookup map.
func (g *Graph) InDegree() map[*Vertex]int {
result := make(map[*Vertex]int)
for k := range g.Adjacency {
func (g *Graph) InDegree() map[Vertex]int {
result := make(map[Vertex]int)
for k := range g.adjacency {
result[k] = 0 // initialize
}
for k := range g.Adjacency {
for z := range g.Adjacency[k] {
for k := range g.adjacency {
for z := range g.adjacency[k] {
result[z]++
}
}
@@ -430,12 +382,12 @@ func (g *Graph) InDegree() map[*Vertex]int {
}
// OutDegree returns the count of vertices that point away in one big lookup map.
func (g *Graph) OutDegree() map[*Vertex]int {
result := make(map[*Vertex]int)
func (g *Graph) OutDegree() map[Vertex]int {
result := make(map[Vertex]int)
for k := range g.Adjacency {
for k := range g.adjacency {
result[k] = 0 // initialize
for range g.Adjacency[k] {
for range g.adjacency[k] {
result[k]++
}
}
@@ -443,12 +395,12 @@ func (g *Graph) OutDegree() map[*Vertex]int {
}
// TopologicalSort returns the sort of graph vertices in that order.
// based on descriptions and code from wikipedia and rosetta code
// It is based on descriptions and code from wikipedia and rosetta code.
// TODO: add memoization, and cache invalidation to speed this up :)
func (g *Graph) TopologicalSort() ([]*Vertex, error) { // kahn's algorithm
var L []*Vertex // empty list that will contain the sorted elements
var S []*Vertex // set of all nodes with no incoming edges
remaining := make(map[*Vertex]int) // amount of edges remaining
func (g *Graph) TopologicalSort() ([]Vertex, error) { // kahn's algorithm
var L []Vertex // empty list that will contain the sorted elements
var S []Vertex // set of all nodes with no incoming edges
remaining := make(map[Vertex]int) // amount of edges remaining
for v, d := range g.InDegree() {
if d == 0 {
@@ -465,7 +417,7 @@ func (g *Graph) TopologicalSort() ([]*Vertex, error) { // kahn's algorithm
v := S[last]
S = S[:last]
L = append(L, v) // add v to tail of L
for n := range g.Adjacency[v] {
for n := range g.adjacency[v] {
// for each node n remaining in the graph, consume from
// remaining, so for remaining[n] > 0
if remaining[n] > 0 {
@@ -480,9 +432,9 @@ func (g *Graph) TopologicalSort() ([]*Vertex, error) { // kahn's algorithm
// if graph has edges, eg if any value in rem is > 0
for c, in := range remaining {
if in > 0 {
for n := range g.Adjacency[c] {
for n := range g.adjacency[c] {
if remaining[n] > 0 {
return nil, fmt.Errorf("Not a dag!")
return nil, fmt.Errorf("not a dag")
}
}
}
@@ -499,19 +451,19 @@ func (g *Graph) TopologicalSort() ([]*Vertex, error) { // kahn's algorithm
// actually return a tree if we cared about correctness.
// This operates by a recursive algorithm; a more efficient version is likely.
// If you don't give this function a DAG, you might cause infinite recursion!
func (g *Graph) Reachability(a, b *Vertex) []*Vertex {
func (g *Graph) Reachability(a, b Vertex) []Vertex {
if a == nil || b == nil {
return nil
}
vertices := g.OutgoingGraphVertices(a) // what points away from a ?
if len(vertices) == 0 {
return []*Vertex{} // nope
return []Vertex{} // nope
}
if VertexContains(b, vertices) {
return []*Vertex{a, b} // found
return []Vertex{a, b} // found
}
// TODO: parallelize this with go routines?
var collected = make([][]*Vertex, len(vertices))
var collected = make([][]Vertex, len(vertices))
pick := -1
for i, v := range vertices {
collected[i] = g.Reachability(v, b) // find b by recursion
@@ -524,110 +476,111 @@ func (g *Graph) Reachability(a, b *Vertex) []*Vertex {
}
}
if pick < 0 {
return []*Vertex{} // nope
return []Vertex{} // nope
}
result := []*Vertex{a} // tack on a
result := []Vertex{a} // tack on a
result = append(result, collected[pick]...)
return result
}
// GraphSync updates the oldGraph so that it matches the newGraph receiver. It
// leaves identical elements alone so that they don't need to be refreshed.
// FIXME: add test cases
func (g *Graph) GraphSync(oldGraph *Graph) (*Graph, error) {
if oldGraph == nil {
oldGraph = NewGraph(g.GetName()) // copy over the name
}
oldGraph.SetName(g.GetName()) // overwrite the name
var lookup = make(map[*Vertex]*Vertex)
var vertexKeep []*Vertex // list of vertices which are the same in new graph
var edgeKeep []*Edge // list of vertices which are the same in new graph
for v := range g.Adjacency { // loop through the vertices (resources)
res := v.Res // resource
vertex := oldGraph.GetVertexMatch(res)
if vertex == nil { // no match found
if err := res.Init(); err != nil {
return nil, errwrap.Wrapf(err, "could not Init() resource")
}
vertex = NewVertex(res)
oldGraph.AddVertex(vertex) // call standalone in case not part of an edge
}
lookup[v] = vertex // used for constructing edges
vertexKeep = append(vertexKeep, vertex) // append
}
// get rid of any vertices we shouldn't keep (that aren't in new graph)
for v := range oldGraph.Adjacency {
if !VertexContains(v, vertexKeep) {
// wait for exit before starting new graph!
v.SendEvent(event.EventExit, true, false)
oldGraph.DeleteVertex(v)
// VertexMatchFn searches for a vertex in the graph and returns the vertex if
// one matches. It uses a user defined function to match. That function must
// return true on match, and an error if anything goes wrong.
func (g *Graph) VertexMatchFn(fn func(Vertex) (bool, error)) (Vertex, error) {
for v := range g.adjacency {
if b, err := fn(v); err != nil {
return nil, errwrap.Wrapf(err, "fn in VertexMatchFn() errored")
} else if b {
return v, nil
}
}
// compare edges
for v1 := range g.Adjacency { // loop through the vertices (resources)
for v2, e := range g.Adjacency[v1] {
// we have an edge!
// lookup vertices (these should exist now)
//res1 := v1.Res // resource
//res2 := v2.Res
//vertex1 := oldGraph.GetVertexMatch(res1)
//vertex2 := oldGraph.GetVertexMatch(res2)
vertex1, exists1 := lookup[v1]
vertex2, exists2 := lookup[v2]
if !exists1 || !exists2 { // no match found, bug?
//if vertex1 == nil || vertex2 == nil { // no match found
return nil, fmt.Errorf("New vertices weren't found!") // programming error
}
edge, exists := oldGraph.Adjacency[vertex1][vertex2]
if !exists || edge.Name != e.Name { // TODO: edgeCmp
edge = e // use or overwrite edge
}
oldGraph.Adjacency[vertex1][vertex2] = edge // store it (AddEdge)
edgeKeep = append(edgeKeep, edge) // mark as saved
}
}
// delete unused edges
for v1 := range oldGraph.Adjacency {
for _, e := range oldGraph.Adjacency[v1] {
// we have an edge!
if !EdgeContains(e, edgeKeep) {
oldGraph.DeleteEdge(e)
}
}
}
return oldGraph, nil
return nil, nil // nothing found
}
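For example, a caller could match purely on a vertex's printable name; this is only a sketch, real callers supply their own predicate:
// findByName is an illustrative helper (not part of the package) that looks
// up a vertex by its String() form via VertexMatchFn.
func findByName(g *Graph, name string) (Vertex, error) {
	fn := func(v Vertex) (bool, error) {
		return v.String() == name, nil // compare on the printable name
	}
	return g.VertexMatchFn(fn)
}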
// GraphMetas returns a list of pointers to each of the resource MetaParams.
func (g *Graph) GraphMetas() []*resources.MetaParams {
metas := []*resources.MetaParams{}
for v := range g.Adjacency { // loop through the vertices (resources))
res := v.Res // resource
meta := res.Meta()
metas = append(metas, meta)
// GraphCmp compares the topology of this graph to another and returns nil if
// they're equal. It uses a user defined function to compare topologically
// equivalent vertices, and edges.
// FIXME: add more test cases
func (g *Graph) GraphCmp(graph *Graph, vertexCmpFn func(Vertex, Vertex) (bool, error), edgeCmpFn func(Edge, Edge) (bool, error)) error {
n1, n2 := g.NumVertices(), graph.NumVertices()
if n1 != n2 {
return fmt.Errorf("base graph has %d vertices, while input graph has %d", n1, n2)
}
if e1, e2 := g.NumEdges(), graph.NumEdges(); e1 != e2 {
return fmt.Errorf("base graph has %d edges, while input graph has %d", e1, e2)
}
return metas
}
// AssociateData associates some data with the object in the graph in question.
func (g *Graph) AssociateData(data *resources.Data) {
for k := range g.Adjacency {
k.Res.AssociateData(data)
var m = make(map[Vertex]Vertex) // g to graph vertex correspondence
Loop:
// check vertices
for v1 := range g.Adjacency() { // for each vertex in g
for v2 := range graph.Adjacency() { // does it match in graph ?
b, err := vertexCmpFn(v1, v2)
if err != nil {
return errwrap.Wrapf(err, "could not run vertexCmpFn() properly")
}
// does it match ?
if b {
m[v1] = v2 // store the mapping
continue Loop
}
}
return fmt.Errorf("base graph, has no match in input graph for: %s", v1)
}
// vertices match :)
// is the mapping the right length?
if n1 := len(m); n1 != n2 {
return fmt.Errorf("mapping only has correspondence of %d, when it should have %d", n1, n2)
}
// check if mapping is unique (are there duplicates?)
m1 := []Vertex{}
m2 := []Vertex{}
for k, v := range m {
if VertexContains(k, m1) {
return fmt.Errorf("mapping from %s is used more than once to: %s", k, m1)
}
if VertexContains(v, m2) {
return fmt.Errorf("mapping to %s is used more than once from: %s", v, m2)
}
m1 = append(m1, k)
m2 = append(m2, v)
}
// check edges
for v1 := range g.Adjacency() { // for each vertex in g
v2 := m[v1] // lookup in map to get correspondence
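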
// g.Adjacency()[v1] corresponds to graph.Adjacency()[v2]
if e1, e2 := len(g.Adjacency()[v1]), len(graph.Adjacency()[v2]); e1 != e2 {
return fmt.Errorf("base graph, vertex(%s) has %d edges, while input graph, vertex(%s) has %d", v1, e1, v2, e2)
}
for vv1, ee1 := range g.Adjacency()[v1] {
vv2 := m[vv1]
ee2 := graph.Adjacency()[v2][vv2]
// these are edges from v1 -> vv1 via ee1 (graph 1)
// to cmp to edges from v2 -> vv2 via ee2 (graph 2)
// check: (1) vv1 == vv2 ? (we've already checked this!)
// check: (2) ee1 == ee2
b, err := edgeCmpFn(ee1, ee2)
if err != nil {
return errwrap.Wrapf(err, "could not run edgeCmpFn() properly")
}
if !b {
return fmt.Errorf("base graph edge(%s) doesn't match input graph edge(%s)", ee1, ee2)
}
}
}
return nil // success!
}
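A usage sketch with string-based comparison callbacks; the real strVertexCmpFn and strEdgeCmpFn used by the tests live in the suppressed pgraph diff, so the bodies here are assumptions:
// cmpVertexByString and cmpEdgeByString match vertices and edges purely by
// their printable names. Illustrative only.
func cmpVertexByString(v1, v2 Vertex) (bool, error) {
	return v1.String() == v2.String(), nil
}
func cmpEdgeByString(e1, e2 Edge) (bool, error) {
	return e1.String() == e2.String(), nil
}
// example: err := g1.GraphCmp(g2, cmpVertexByString, cmpEdgeByString)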
// VertexContains is an "in array" function to test for a vertex in a slice of vertices.
func VertexContains(needle *Vertex, haystack []*Vertex) bool {
func VertexContains(needle Vertex, haystack []Vertex) bool {
for _, v := range haystack {
if needle == v {
return true
@@ -637,7 +590,7 @@ func VertexContains(needle *Vertex, haystack []*Vertex) bool {
}
// EdgeContains is an "in array" function to test for an edge in a slice of edges.
func EdgeContains(needle *Edge, haystack []*Edge) bool {
func EdgeContains(needle Edge, haystack []Edge) bool {
for _, v := range haystack {
if needle == v {
return true
@@ -647,12 +600,23 @@ func EdgeContains(needle *Edge, haystack []*Edge) bool {
}
// Reverse reverses a list of vertices.
func Reverse(vs []*Vertex) []*Vertex {
//var out []*Vertex // XXX: golint suggests, but it fails testing
out := make([]*Vertex, 0) // empty list
func Reverse(vs []Vertex) []Vertex {
out := []Vertex{}
l := len(vs)
for i := range vs {
out = append(out, vs[l-i-1])
}
return out
}
// Sort the list of vertices and return a copy without modifying the input.
func Sort(vs []Vertex) []Vertex {
vertices := []Vertex{}
for _, v := range vs { // copy
vertices = append(vertices, v)
}
sort.Sort(VertexSlice(vertices))
return vertices
// sort.Sort(VertexSlice(vs)) // this is wrong, it would modify input!
//return vs
}

File diff suppressed because it is too large.

pgraph/subgraph.go (new file, 106 lines)

@@ -0,0 +1,106 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
// AddGraph adds the set of edges and vertices of a graph to the existing graph.
func (g *Graph) AddGraph(graph *Graph) {
g.addEdgeVertexGraphHelper(nil, graph, nil, false, false)
}
// AddEdgeVertexGraph adds a directed edge to the graph from a vertex.
// This is useful for flattening the relationship between a subgraph and an
// existing graph, without having to run the subgraph recursively. It adds the
// maximum number of edges, creating a relationship to every vertex.
func (g *Graph) AddEdgeVertexGraph(vertex Vertex, graph *Graph, edgeGenFn func(v1, v2 Vertex) Edge) {
g.addEdgeVertexGraphHelper(vertex, graph, edgeGenFn, false, false)
}
// AddEdgeVertexGraphLight adds a directed edge to the graph from a vertex.
// This is useful for flattening the relationship between a subgraph and an
// existing graph, without having to run the subgraph recursively. It adds the
// minimum number of edges, creating a relationship to the vertices with
// indegree equal to zero.
func (g *Graph) AddEdgeVertexGraphLight(vertex Vertex, graph *Graph, edgeGenFn func(v1, v2 Vertex) Edge) {
g.addEdgeVertexGraphHelper(vertex, graph, edgeGenFn, false, true)
}
// AddEdgeGraphVertex adds a directed edge to the vertex from a graph.
// This is useful for flattening the relationship between a subgraph and an
// existing graph, without having to run the subgraph recursively. It adds the
// maximum number of edges, creating a relationship from every vertex.
func (g *Graph) AddEdgeGraphVertex(graph *Graph, vertex Vertex, edgeGenFn func(v1, v2 Vertex) Edge) {
g.addEdgeVertexGraphHelper(vertex, graph, edgeGenFn, true, false)
}
// AddEdgeGraphVertexLight adds a directed edge to the vertex from a graph.
// This is useful for flattening the relationship between a subgraph and an
// existing graph, without having to run the subgraph recursively. It adds the
// minimum number of edges, creating a relationship from the vertices with
// outdegree equal to zero.
func (g *Graph) AddEdgeGraphVertexLight(graph *Graph, vertex Vertex, edgeGenFn func(v1, v2 Vertex) Edge) {
g.addEdgeVertexGraphHelper(vertex, graph, edgeGenFn, true, true)
}
// addEdgeVertexGraphHelper is a helper function to add a directed edges to the
// graph from a vertex, or vice-versa. It operates in this reverse direction by
// specifying the reverse argument as true. It is useful for flattening the
// relationship between a subgraph and an existing graph, without having to run
// the subgraph recursively. It adds the maximum number of edges, creating a
// relationship to or from every vertex if the light argument is false, and if
// it is true, it adds the minimum number of edges, creating a relationship to
// or from the vertices with an indegree or outdegree equal to zero depending on
// if we specified reverse or not.
func (g *Graph) addEdgeVertexGraphHelper(vertex Vertex, graph *Graph, edgeGenFn func(v1, v2 Vertex) Edge, reverse, light bool) {
var degree map[Vertex]int // compute all of the in/outdegree's if needed
if light && reverse {
degree = graph.OutDegree()
} else if light { // && !reverse
degree = graph.InDegree()
}
for _, v := range graph.VerticesSorted() { // sort to help out edgeGenFn
// forward:
// we only want to add edges to indegree == 0, because every
// other vertex is a dependency of at least one of those
// reverse:
// we only want to add edges to outdegree == 0, because every
// other vertex is a pre-requisite to at least one of these
if light && degree[v] != 0 {
continue
}
g.AddVertex(v) // ensure vertex is part of the graph
if vertex != nil && reverse {
edge := edgeGenFn(v, vertex) // generate a new unique edge
g.AddEdge(v, vertex, edge)
} else if vertex != nil { // && !reverse
edge := edgeGenFn(vertex, v)
g.AddEdge(vertex, v, edge)
}
}
// also remember to suck in all of the graph's edges too!
for v1 := range graph.Adjacency() {
for v2, e := range graph.Adjacency()[v1] {
g.AddEdge(v1, v2, e)
}
}
}
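To make the light/full distinction concrete, here is a small sketch using the NV, NE and edgeGenFn test helpers defined further below (illustrative only; the tests that follow verify the same behaviour):
// exampleLight flattens a subgraph below v3 using the light variant.
// With sub = {v4 -> v5}, AddEdgeVertexGraph would link v3 to every subgraph
// vertex (v3->v4 and v3->v5), while AddEdgeVertexGraphLight only links v3 to
// the indegree-zero vertices of sub (just v3->v4), since v5 is already
// reachable through v4.
func exampleLight() *Graph {
	v1, v3 := NV("v1"), NV("v3")
	v4, v5 := NV("v4"), NV("v5")
	g := &Graph{}
	g.AddEdge(v1, v3, NE("e1")) // existing graph: v1 -> v3
	sub := &Graph{}
	sub.AddEdge(v4, v5, NE("e3")) // subgraph: v4 -> v5
	g.AddEdgeVertexGraphLight(v3, sub, edgeGenFn) // adds only v3 -> v4
	return g
}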

pgraph/subgraph_test.go (new file, 187 lines)

@@ -0,0 +1,187 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
import (
"testing"
)
func TestAddEdgeGraph1(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g := &Graph{}
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
sub := &Graph{}
sub.AddEdge(v4, v5, e3)
g.AddGraph(sub)
// expected (can re-use the same vertices)
expected := &Graph{}
expected.AddEdge(v1, v3, e1)
expected.AddEdge(v2, v3, e2)
expected.AddEdge(v4, v5, e3)
//expected.AddEdge(v3, v4, NE("v3,v4"))
//expected.AddEdge(v3, v5, NE("v3,v5"))
if s := runGraphCmp(t, g, expected); s != "" {
t.Errorf("%s", s)
}
}
func TestAddEdgeVertexGraph1(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g := &Graph{}
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
sub := &Graph{}
sub.AddEdge(v4, v5, e3)
g.AddEdgeVertexGraph(v3, sub, edgeGenFn)
// expected (can re-use the same vertices)
expected := &Graph{}
expected.AddEdge(v1, v3, e1)
expected.AddEdge(v2, v3, e2)
expected.AddEdge(v4, v5, e3)
expected.AddEdge(v3, v4, NE("v3,v4"))
expected.AddEdge(v3, v5, NE("v3,v5"))
if s := runGraphCmp(t, g, expected); s != "" {
t.Errorf("%s", s)
}
}
func TestAddEdgeGraphVertex1(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g := &Graph{}
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
sub := &Graph{}
sub.AddEdge(v4, v5, e3)
g.AddEdgeGraphVertex(sub, v3, edgeGenFn)
// expected (can re-use the same vertices)
expected := &Graph{}
expected.AddEdge(v1, v3, e1)
expected.AddEdge(v2, v3, e2)
expected.AddEdge(v4, v5, e3)
expected.AddEdge(v4, v3, NE("v4,v3"))
expected.AddEdge(v5, v3, NE("v5,v3"))
if s := runGraphCmp(t, g, expected); s != "" {
t.Errorf("%s", s)
}
}
func TestAddEdgeVertexGraphLight1(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g := &Graph{}
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
sub := &Graph{}
sub.AddEdge(v4, v5, e3)
g.AddEdgeVertexGraphLight(v3, sub, edgeGenFn)
// expected (can re-use the same vertices)
expected := &Graph{}
expected.AddEdge(v1, v3, e1)
expected.AddEdge(v2, v3, e2)
expected.AddEdge(v4, v5, e3)
expected.AddEdge(v3, v4, NE("v3,v4"))
//expected.AddEdge(v3, v5, NE("v3,v5")) // not needed with light
if s := runGraphCmp(t, g, expected); s != "" {
t.Errorf("%s", s)
}
}
func TestAddEdgeGraphVertexLight1(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g := &Graph{}
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
sub := &Graph{}
sub.AddEdge(v4, v5, e3)
g.AddEdgeGraphVertexLight(sub, v3, edgeGenFn)
// expected (can re-use the same vertices)
expected := &Graph{}
expected.AddEdge(v1, v3, e1)
expected.AddEdge(v2, v3, e2)
expected.AddEdge(v4, v5, e3)
//expected.AddEdge(v4, v3, NE("v4,v3")) // not needed with light
expected.AddEdge(v5, v3, NE("v5,v3"))
if s := runGraphCmp(t, g, expected); s != "" {
t.Errorf("%s", s)
}
}

pgraph/util_test.go (new file, 93 lines)

@@ -0,0 +1,93 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
import (
"fmt"
"testing"
)
// vertex is a test struct to test the library.
type vertex struct {
name string
}
// String is a required method of the Vertex interface that we must fulfill.
func (v *vertex) String() string {
return v.name
}
// NV is a helper function to make testing easier. It creates a new noop vertex.
func NV(s string) Vertex {
return &vertex{s}
}
// edge is a test struct to test the library.
type edge struct {
name string
}
// String is a required method of the Edge interface that we must fulfill.
func (e *edge) String() string {
return e.name
}
// NE is a helper function to make testing easier. It creates a new noop edge.
func NE(s string) Edge {
return &edge{s}
}
// edgeGenFn generates unique edges for each vertex pair, assuming unique
// vertices.
func edgeGenFn(v1, v2 Vertex) Edge {
return NE(fmt.Sprintf("%s,%s", v1, v2))
}
func vertexAddFn(v Vertex) error {
return nil
}
func vertexRemoveFn(v Vertex) error {
return nil
}
func runGraphCmp(t *testing.T, g1, g2 *Graph) string {
err := g1.GraphCmp(g2, strVertexCmpFn, strEdgeCmpFn)
if err != nil {
str := ""
str += fmt.Sprintf(" actual (g1): %v%s", g1, fullPrint(g1))
str += fmt.Sprintf("expected (g2): %v%s", g2, fullPrint(g2))
str += fmt.Sprintf("cmp error:")
str += fmt.Sprintf("%v", err)
return str
}
return ""
}
func fullPrint(g *Graph) (str string) {
str += "\n"
for v := range g.Adjacency() {
str += fmt.Sprintf("* v: %s\n", v)
}
for v1 := range g.Adjacency() {
for v2, e := range g.Adjacency()[v1] {
str += fmt.Sprintf("* e: %s -> %s # %s\n", v1, v2, e)
}
}
return
}

prometheus/prometheus.go (new file, 263 lines)

@@ -0,0 +1,263 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package prometheus provides functions that are useful to control and manage
// the built-in prometheus instance.
package prometheus
import (
"errors"
"net/http"
"strconv"
"sync"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
errwrap "github.com/pkg/errors"
)
// DefaultPrometheusListen is registered in
// https://github.com/prometheus/prometheus/wiki/Default-port-allocations
const DefaultPrometheusListen = "127.0.0.1:9233"
// ResState represents the status of a resource.
type ResState int
const (
// ResStateOK represents a working resource
ResStateOK ResState = iota
// ResStateSoftFail represents a resource in soft fail (will be retried)
ResStateSoftFail
// ResStateHardFail represents a resource in hard fail (will NOT be retried)
ResStateHardFail
)
// Prometheus is the struct that contains information about the
// prometheus instance. Run Init() on it.
type Prometheus struct {
Listen string // the listen specification for the net/http server
checkApplyTotal *prometheus.CounterVec // total of CheckApplies that have been triggered
pgraphStartTimeSeconds prometheus.Gauge // process start time in seconds since unix epoch
managedResources *prometheus.GaugeVec // Resources we manage now
failedResourcesTotal *prometheus.CounterVec // Total of failures since mgmt has started
failedResources *prometheus.GaugeVec // Number of currently failing resources
resourcesState map[string]resStateWithKind // Maps the resources with their current kind/state
mutex *sync.Mutex // Mutex used to update resourcesState
}
// resStateWithKind is used to count the failures by kind
type resStateWithKind struct {
state ResState
kind string
}
// Init some parameters - currently the Listen address.
func (obj *Prometheus) Init() error {
if len(obj.Listen) == 0 {
obj.Listen = DefaultPrometheusListen
}
obj.mutex = &sync.Mutex{}
obj.resourcesState = make(map[string]resStateWithKind)
obj.checkApplyTotal = prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "mgmt_checkapply_total",
Help: "Number of CheckApply that have run.",
},
// Labels for this metric.
// kind: resource type: Svc, File, ...
// apply: if the CheckApply happened in "apply" mode
// eventful: did the CheckApply generate an event
// errorful: did the CheckApply generate an error
[]string{"kind", "apply", "eventful", "errorful"},
)
prometheus.MustRegister(obj.checkApplyTotal)
obj.pgraphStartTimeSeconds = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: "mgmt_graph_start_time_seconds",
Help: "Start time of the current graph since unix epoch in seconds.",
},
)
prometheus.MustRegister(obj.pgraphStartTimeSeconds)
obj.managedResources = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "mgmt_resources",
Help: "Number of managed resources.",
},
// kind: resource type: Svc, File, ...
[]string{"kind"},
)
prometheus.MustRegister(obj.managedResources)
obj.failedResourcesTotal = prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "mgmt_failures_total",
Help: "Total of failed resources.",
},
// kind: resource type: Svc, File, ...
// failure: soft or hard
[]string{"kind", "failure"},
)
prometheus.MustRegister(obj.failedResourcesTotal)
obj.failedResources = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "mgmt_failures",
Help: "Number of failing resources.",
},
// kind: resource type: Svc, File, ...
// failure: soft or hard
[]string{"kind", "failure"},
)
prometheus.MustRegister(obj.failedResources)
return nil
}
// Start runs a http server in a go routine, that responds to /metrics
// as prometheus would expect.
func (obj *Prometheus) Start() error {
http.Handle("/metrics", promhttp.Handler())
go http.ListenAndServe(obj.Listen, nil)
return nil
}
// Stop the http server.
func (obj *Prometheus) Stop() error {
// TODO: There is no way in go < 1.8 to stop a http server.
// https://stackoverflow.com/questions/39320025/go-how-to-stop-http-listenandserve/41433555#41433555
return nil
}
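A minimal wiring sketch (an illustrative helper, not part of the package; it uses UpdateCheckApplyTotal, which is defined just below):
// runMetrics starts the built-in prometheus endpoint and records one
// CheckApply run for a hypothetical "file" resource kind.
func runMetrics() error {
	p := &Prometheus{Listen: DefaultPrometheusListen}
	if err := p.Init(); err != nil {
		return errwrap.Wrapf(err, "could not init prometheus")
	}
	if err := p.Start(); err != nil { // serves /metrics in a goroutine
		return errwrap.Wrapf(err, "could not start prometheus")
	}
	// labels: kind, apply, eventful, errorful
	return p.UpdateCheckApplyTotal("file", true, false, false)
}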
// UpdateCheckApplyTotal increments the CheckApply counter for the given kind
// and apply/eventful/errorful labels.
func (obj *Prometheus) UpdateCheckApplyTotal(kind string, apply, eventful, errorful bool) error {
if obj == nil {
return nil // happens when mgmt is launched without --prometheus
}
labels := prometheus.Labels{"kind": kind, "apply": strconv.FormatBool(apply), "eventful": strconv.FormatBool(eventful), "errorful": strconv.FormatBool(errorful)}
metric := obj.checkApplyTotal.With(labels)
metric.Inc()
return nil
}
// UpdatePgraphStartTime updates the mgmt_graph_start_time_seconds metric
// to the current timestamp.
func (obj *Prometheus) UpdatePgraphStartTime() error {
if obj == nil {
return nil // happens when mgmt is launched without --prometheus
}
obj.pgraphStartTimeSeconds.SetToCurrentTime()
return nil
}
// AddManagedResource increments the Managed Resource counter and updates the resource status.
func (obj *Prometheus) AddManagedResource(resUUID string, rtype string) error {
if obj == nil {
return nil // happens when mgmt is launched without --prometheus
}
obj.managedResources.With(prometheus.Labels{"kind": rtype}).Inc()
if err := obj.UpdateState(resUUID, rtype, ResStateOK); err != nil {
return errwrap.Wrapf(err, "can't update the resource status in the map")
}
return nil
}
// RemoveManagedResource decrements the Managed Resource counter and updates the resource status.
func (obj *Prometheus) RemoveManagedResource(resUUID string, rtype string) error {
if obj == nil {
return nil // happens when mgmt is launched without --prometheus
}
obj.managedResources.With(prometheus.Labels{"kind": rtype}).Dec()
if err := obj.deleteState(resUUID); err != nil {
return errwrap.Wrapf(err, "can't remove the resource status from the map")
}
return nil
}
// deleteState removes the resource from the state map and re-populates the failing gauge.
func (obj *Prometheus) deleteState(resUUID string) error {
if obj == nil {
return nil // happens when mgmt is launched without --prometheus
}
obj.mutex.Lock()
delete(obj.resourcesState, resUUID)
obj.mutex.Unlock()
if err := obj.updateFailingGauge(); err != nil {
return errwrap.Wrapf(err, "can't update the failing gauge")
}
return nil
}
// UpdateState updates the state of the resources in our internal state map
// then triggers a refresh of the failing gauge.
func (obj *Prometheus) UpdateState(resUUID string, rtype string, newState ResState) error {
defer obj.updateFailingGauge()
if obj == nil {
return nil // happens when mgmt is launched without --prometheus
}
obj.mutex.Lock()
obj.resourcesState[resUUID] = resStateWithKind{state: newState, kind: rtype}
obj.mutex.Unlock()
if newState != ResStateOK {
var strState string
if newState == ResStateSoftFail {
strState = "soft"
} else if newState == ResStateHardFail {
strState = "hard"
} else {
return errors.New("state should be soft or hard failure")
}
obj.failedResourcesTotal.With(prometheus.Labels{"kind": rtype, "failure": strState}).Inc()
}
return nil
}
// updateFailingGauge refreshes the failing gauge by parsing the internal
// state map.
func (obj *Prometheus) updateFailingGauge() error {
if obj == nil {
return nil // happens when mgmt is launched without --prometheus
}
var softFails, hardFails map[string]float64
softFails = make(map[string]float64)
hardFails = make(map[string]float64)
for _, v := range obj.resourcesState {
if v.state == ResStateSoftFail {
softFails[v.kind]++
} else if v.state == ResStateHardFail {
hardFails[v.kind]++
}
}
// TODO: we might want to Zero the metrics we are not using
// because in prometheus design the metrics keep living for some time
// even after they are removed.
obj.failedResources.Reset()
for k, v := range softFails {
obj.failedResources.With(prometheus.Labels{"kind": k, "failure": "soft"}).Set(v)
}
for k, v := range hardFails {
obj.failedResources.With(prometheus.Labels{"kind": k, "failure": "hard"}).Set(v)
}
return nil
}


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package puppet
@@ -50,10 +50,10 @@ func NewGAPI(data gapi.Data, puppetParam *string, puppetConf string) (*GAPI, err
// Init initializes the puppet GAPI struct.
func (obj *GAPI) Init(data gapi.Data) error {
if obj.initialized {
return fmt.Errorf("Already initialized!")
return fmt.Errorf("already initialized")
}
if obj.PuppetParam == nil {
return fmt.Errorf("The PuppetParam param must be specified!")
return fmt.Errorf("the PuppetParam param must be specified")
}
obj.data = data // store for later
obj.closeChan = make(chan struct{})
@@ -64,43 +64,72 @@ func (obj *GAPI) Init(data gapi.Data) error {
// Graph returns a current Graph.
func (obj *GAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("Puppet: GAPI is not initialized!")
return nil, fmt.Errorf("the puppet GAPI is not initialized")
}
config := ParseConfigFromPuppet(*obj.PuppetParam, obj.PuppetConf)
if config == nil {
return nil, fmt.Errorf("Puppet: ParseConfigFromPuppet returned nil!")
return nil, fmt.Errorf("function ParseConfigFromPuppet returned nil")
}
g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.World, obj.data.Noop)
return g, err
}
// Next returns nil errors every time there could be a new graph.
func (obj *GAPI) Next() chan error {
if obj.data.NoWatch {
return nil
}
func (obj *GAPI) Next() chan gapi.Next {
puppetChan := func() <-chan time.Time { // helper function
return time.Tick(time.Duration(PuppetInterval(obj.PuppetConf)) * time.Second)
return time.Tick(time.Duration(RefreshInterval(obj.PuppetConf)) * time.Second)
}
ch := make(chan error)
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
ch <- fmt.Errorf("Puppet: GAPI is not initialized!")
next := gapi.Next{
Err: fmt.Errorf("the puppet GAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
return
}
pChan := puppetChan()
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
pChan := make(<-chan time.Time)
// NOTE: we don't look at obj.data.NoConfigWatch since emulating
// puppet means we do not switch graphs on code changes anyways.
if obj.data.NoStreamWatch {
pChan = nil
} else {
pChan = puppetChan()
}
for {
select {
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case _, ok := <-pChan:
if !ok { // the channel closed!
return
}
log.Printf("Puppet: Generating new graph...")
case <-obj.closeChan:
return
}
log.Printf("Puppet: Generating new graph...")
if obj.data.NoStreamWatch {
pChan = nil
} else {
pChan = puppetChan() // TODO: okay to update interval in case it changed?
ch <- nil // trigger a run
}
next := gapi.Next{
//Exit: true, // TODO: for permanent shutdown!
Err: nil,
}
select {
case ch <- next: // trigger a run (send a msg)
// unblock if we exit while waiting to send!
case <-obj.closeChan:
return
}
@@ -112,7 +141,7 @@ func (obj *GAPI) Next() chan error {
// Close shuts down the Puppet GAPI.
func (obj *GAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("Puppet: GAPI is not initialized!")
return fmt.Errorf("the puppet GAPI is not initialized")
}
close(obj.closeChan)
obj.wg.Wait()


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package puppet provides the integration entrypoint for the puppet language.
@@ -116,8 +116,8 @@ func ParseConfigFromPuppet(puppetParam, puppetConf string) *yamlgraph.GraphConfi
return &config
}
// PuppetInterval returns the graph refresh interval from the puppet configuration.
func PuppetInterval(puppetConf string) int {
// RefreshInterval returns the graph refresh interval from the puppet configuration.
func RefreshInterval(puppetConf string) int {
if Debug {
log.Printf("Puppet: determining graph refresh interval")
}


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package recwatch
@@ -59,15 +59,23 @@ func (obj *ConfigWatcher) Add(file ...string) {
ch := obj.ConfigWatch(file[0])
for {
select {
case e := <-ch:
case e, ok := <-ch:
if !ok { // channel closed
return
}
if e != nil {
obj.errorchan <- e
return
}
obj.ch <- file[0]
select {
case obj.ch <- file[0]: // send on channel
case <-obj.closechan:
return // never mind, close early!
}
continue
case <-obj.closechan:
return
// not needed, closes via ConfigWatch() chan close
//case <-obj.closechan:
// return
}
}
}()
@@ -126,7 +134,12 @@ func (obj *ConfigWatcher) ConfigWatch(file string) chan error {
close(ch)
return
}
ch <- nil // send event!
select {
case ch <- nil: // send event!
case <-obj.closechan:
close(ch)
return
}
}
}
//close(ch)


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package recwatch


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package recwatch provides recursive file watching events via fsnotify.
@@ -51,10 +51,10 @@ type RecWatcher struct {
watcher *fsnotify.Watcher
watches map[string]struct{}
events chan Event // one channel for events and err...
once sync.Once
closed bool // is the events channel closed?
mutex sync.Mutex // lock guarding the channel closing
wg sync.WaitGroup
exit chan struct{}
closeErr error
}
// NewRecWatcher creates and initializes a new recursive watcher.
@@ -87,11 +87,22 @@ func (obj *RecWatcher) Init() error {
}
}
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
if err := obj.Watch(); err != nil {
obj.events <- Event{Error: err}
// we need this mutex, because if we Init and then Close
// immediately, this can send after closed which panics!
obj.mutex.Lock()
if !obj.closed {
select {
case obj.events <- Event{Error: err}:
case <-obj.exit:
// pass
}
}
obj.mutex.Unlock()
}
obj.Close()
}()
return nil
}
@@ -106,26 +117,18 @@ func (obj *RecWatcher) Init() error {
// Close shuts down the watcher.
func (obj *RecWatcher) Close() error {
obj.once.Do(obj.close) // don't cause the channel to close twice
return obj.closeErr
}
// This close function is the function that actually does the close work. Don't
// call it more than once!
func (obj *RecWatcher) close() {
var err error
close(obj.exit) // send exit signal
obj.wg.Wait()
if obj.watcher != nil {
err = obj.watcher.Close()
obj.watcher = nil
// TODO: should we send the close error?
//if err != nil {
// obj.events <- Event{Error: err}
//}
}
obj.mutex.Lock() // FIXME: I don't think this mutex is needed anymore...
obj.closed = true
close(obj.events)
obj.closeErr = err // set the error
obj.mutex.Unlock()
return err
}
// Events returns a channel of events. These include events for errors.
@@ -134,10 +137,8 @@ func (obj *RecWatcher) Events() chan Event { return obj.events }
// Watch is the primary listener for this resource and it outputs events.
func (obj *RecWatcher) Watch() error {
if obj.watcher == nil {
return fmt.Errorf("Watcher is not initialized!")
return fmt.Errorf("the watcher is not initialized")
}
obj.wg.Add(1)
defer obj.wg.Done()
patharray := util.PathSplit(obj.safename) // tokenize the path
var index = len(patharray) // starting index
@@ -169,11 +170,11 @@ func (obj *RecWatcher) Watch() error {
// no space left on device, out of inotify watches
// TODO: consider letting the user fall back to
// polling if they hit this error very often...
return fmt.Errorf("Out of inotify watches: %v", err)
return fmt.Errorf("out of inotify watches: %v", err)
} else if os.IsPermission(err) {
return fmt.Errorf("Permission denied adding a watch: %v", err)
return fmt.Errorf("permission denied adding a watch: %v", err)
}
return fmt.Errorf("Unknown error: %v", err)
return fmt.Errorf("unknown error: %v", err)
}
select {
@@ -236,6 +237,13 @@ func (obj *RecWatcher) Watch() error {
index--
}
// when the file is moved, remove the watcher and add a new one,
// so we stop tracking the old inode.
if deltaDepth >= 0 && (event.Op&fsnotify.Rename == fsnotify.Rename) {
obj.watcher.Remove(current)
obj.watcher.Add(current)
}
// we must be a parent watcher, so descend in
if deltaDepth < 0 {
// XXX: we can block here due to: https://github.com/fsnotify/fsnotify/issues/123
@@ -272,11 +280,16 @@ func (obj *RecWatcher) Watch() error {
if send {
send = false
// only invalid state on certain types of events
obj.events <- Event{Error: nil, Body: &event}
select {
// exit even when we're blocked on event sending
case obj.events <- Event{Error: nil, Body: &event}:
case <-obj.exit:
return fmt.Errorf("pending event not sent")
}
}
case err := <-obj.watcher.Errors:
return fmt.Errorf("Unknown watcher error: %v", err)
return fmt.Errorf("unknown watcher error: %v", err)
case <-obj.exit:
return nil


@@ -1,18 +1,18 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// GNU General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package remote provides the remoting facilities for agentless execution.
@@ -63,6 +63,7 @@ import (
cv "github.com/purpleidea/mgmt/converger"
"github.com/purpleidea/mgmt/util"
"github.com/purpleidea/mgmt/util/semaphore"
"github.com/purpleidea/mgmt/yamlgraph"
multierr "github.com/hashicorp/go-multierror"
@@ -691,29 +692,30 @@ type Remotes struct {
fileWatch chan string
cConns uint16 // number of concurrent ssh connections, zero means unlimited
interactive bool // allow interactive prompting
sshPrivIdRsa string // path to ~/.ssh/id_rsa
sshPrivIDRsa string // path to ~/.ssh/id_rsa
caching bool // whether to try and cache the copy of the binary
depth uint16 // depth of this node in the remote execution hierarchy
prefix string // folder prefix to use for misc storage
converger cv.Converger
convergerCb func(func(map[string]bool) error) (func(), error)
wg sync.WaitGroup // keep track of each running SSH connection
lock sync.Mutex // mutex for access to sshmap
sshmap map[string]*SSH // map to each SSH struct with the remote as the key
exiting bool // flag to let us know if we're exiting
exitChan chan struct{} // closes when we should exit
semaphore Semaphore // counting semaphore to limit concurrent connections
hostnames []string // list of hostnames we've seen so far
cuid cv.ConvergerUID // convergerUID for the remote itself
cuids map[string]cv.ConvergerUID // map to each SSH struct with the remote as the key
callbackCancelFunc func() // stored callback function cancel function
wg sync.WaitGroup // keep track of each running SSH connection
lock sync.Mutex // mutex for access to sshmap
sshmap map[string]*SSH // map to each SSH struct with the remote as the key
running chan struct{} // closes when main loop is running
exiting bool // flag to let us know if we're exiting
exitChan chan struct{} // closes when we should exit
semaphore *semaphore.Semaphore // counting semaphore to limit concurrent connections
hostnames []string // list of hostnames we've seen so far
cuid cv.UID // convergerUID for the remote itself
cuids map[string]cv.UID // map to each SSH struct with the remote as the key
callbackCancelFunc func() // stored callback function cancel function
flags Flags // constant runtime values
}
// NewRemotes builds a Remotes struct.
func NewRemotes(clientURLs, remoteURLs []string, noop bool, remotes []string, fileWatch chan string, cConns uint16, interactive bool, sshPrivIdRsa string, caching bool, depth uint16, prefix string, converger cv.Converger, convergerCb func(func(map[string]bool) error) (func(), error), flags Flags) *Remotes {
func NewRemotes(clientURLs, remoteURLs []string, noop bool, remotes []string, fileWatch chan string, cConns uint16, interactive bool, sshPrivIDRsa string, caching bool, depth uint16, prefix string, converger cv.Converger, convergerCb func(func(map[string]bool) error) (func(), error), flags Flags) *Remotes {
return &Remotes{
clientURLs: clientURLs,
remoteURLs: remoteURLs,
@@ -722,17 +724,18 @@ func NewRemotes(clientURLs, remoteURLs []string, noop bool, remotes []string, fi
fileWatch: fileWatch,
cConns: cConns,
interactive: interactive,
sshPrivIdRsa: sshPrivIdRsa,
sshPrivIDRsa: sshPrivIDRsa,
caching: caching,
depth: depth,
prefix: prefix,
converger: converger,
convergerCb: convergerCb,
sshmap: make(map[string]*SSH),
running: make(chan struct{}),
exitChan: make(chan struct{}),
semaphore: NewSemaphore(int(cConns)),
semaphore: semaphore.NewSemaphore(int(cConns)),
hostnames: make([]string, len(remotes)),
cuids: make(map[string]cv.ConvergerUID),
cuids: make(map[string]cv.UID),
flags: flags,
}
}
@@ -829,18 +832,18 @@ func (obj *Remotes) NewSSH(file string) (*SSH, error) {
// sshKeyAuth is a helper function to get the ssh key auth struct needed
func (obj *Remotes) sshKeyAuth() (ssh.AuthMethod, error) {
if obj.sshPrivIdRsa == "" {
if obj.sshPrivIDRsa == "" {
return nil, fmt.Errorf("empty path specified")
}
p := ""
// TODO: this doesn't match strings of the form: ~james/.ssh/id_rsa
if strings.HasPrefix(obj.sshPrivIdRsa, "~/") {
if strings.HasPrefix(obj.sshPrivIDRsa, "~/") {
usr, err := user.Current()
if err != nil {
log.Printf("Remote: Can't find home directory automatically.")
return nil, err
}
p = path.Join(usr.HomeDir, obj.sshPrivIdRsa[len("~/"):])
p = path.Join(usr.HomeDir, obj.sshPrivIDRsa[len("~/"):])
}
if p == "" {
return nil, fmt.Errorf("empty path specified")
@@ -1021,11 +1024,17 @@ func (obj *Remotes) Run() {
}(sshobj, f)
obj.lock.Unlock()
}
close(obj.running) // notify
}
// Ready closes its returned channel when the Run method is up and ready. It is
// useful to know when ready, since we often execute Run in a go routine.
func (obj *Remotes) Ready() <-chan struct{} { return obj.running }
// Exit causes as much of the Remotes struct to shutdown as quickly and as
// cleanly as possible. It only returns once everything is shutdown.
func (obj *Remotes) Exit() error {
<-obj.running // wait for Run to be finished before we exit!
obj.lock.Lock()
obj.exiting = true // don't spawn new ones once this flag is set!
obj.lock.Unlock()
@@ -1078,29 +1087,6 @@ func cleanURL(s string) string {
return u.Host
}
// Semaphore is a counting semaphore.
type Semaphore chan struct{}
// NewSemaphore creates a new semaphore.
func NewSemaphore(size int) Semaphore {
return make(Semaphore, size)
}
// P acquires n resources.
func (s Semaphore) P(n int) {
e := struct{}{}
for i := 0; i < n; i++ {
s <- e // acquire one
}
}
// V releases n resources.
func (s Semaphore) V(n int) {
for i := 0; i < n; i++ {
<-s // release one
}
}
// combinedWriter mimics what the ssh.CombinedOutput command does.
type combinedWriter struct {
b bytes.Buffer

resources/actions.go (new file, 649 lines)

@@ -0,0 +1,649 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"fmt"
"log"
"math"
"strings"
"sync"
"time"
"github.com/purpleidea/mgmt/event"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/prometheus"
"github.com/purpleidea/mgmt/util"
multierr "github.com/hashicorp/go-multierror"
errwrap "github.com/pkg/errors"
"golang.org/x/time/rate"
)
// SentinelErr is a sentinel error type that wraps an arbitrary error.
type SentinelErr struct {
err error
}
// Error is the required method to fulfill the error type.
func (obj *SentinelErr) Error() string {
return obj.err.Error()
}
// OKTimestamp returns true if this element can run right now.
func (obj *BaseRes) OKTimestamp() bool {
// these are all the vertices pointing TO v, eg: ??? -> v
for _, n := range obj.Graph.IncomingGraphVertices(obj.Vertex) {
// if the vertex has a greater timestamp than any pre-req (n)
// then we can't run right now...
// if they're equal (eg: on init of 0) then we also can't run
// b/c we should let our pre-req's go first...
x, y := obj.Timestamp(), VtoR(n).Timestamp()
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: OKTimestamp: (%v) >= %s(%v): !%v", obj, x, n, y, x >= y)
}
if x >= y {
return false
}
}
return true
}
// Poke tells nodes after me in the dependency graph that they need to refresh.
func (obj *BaseRes) Poke() error {
// if we're pausing (or exiting) then we should suspend poke's so that
// the graph doesn't go on running forever until it's completely done!
// this is an optional feature which we can do by default on user exit
if obj.Graph.FastPause {
return nil // TODO: should this be an error instead?
}
var wg sync.WaitGroup
// these are all the vertices pointing AWAY FROM v, eg: v -> ???
for _, n := range obj.Graph.OutgoingGraphVertices(obj.Vertex) {
// we can skip this poke if resource hasn't done work yet... it
// needs to be poked if already running, or not running though!
// TODO: does this need an || activity flag?
if VtoR(n).GetState() != ResStateProcess {
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: Poke: %s", obj, n)
}
wg.Add(1)
go func(nn pgraph.Vertex) error {
defer wg.Done()
//edge := obj.Graph.adjacency[v][nn] // lookup
//notify := edge.Notify && edge.Refresh()
return VtoR(nn).SendEvent(event.EventPoke, nil)
}(n)
} else {
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: Poke: %s: Skipped!", obj, n)
}
}
}
// TODO: do something with return values?
wg.Wait() // wait for all the pokes to complete
return nil
}
// BackPoke pokes the pre-requisites that are stale and need to run before I can run.
func (obj *BaseRes) BackPoke() {
var wg sync.WaitGroup
// these are all the vertices pointing TO v, eg: ??? -> v
for _, n := range obj.Graph.IncomingGraphVertices(obj.Vertex) {
x, y, s := obj.Timestamp(), VtoR(n).Timestamp(), VtoR(n).GetState()
// If the parent timestamp needs poking AND it's not running
// Process, then poke it. If the parent is in ResStateProcess it
// means that an event is pending, so we'll be expecting a poke
// back soon, so we can safely discard the extra parent poke...
// TODO: implement a stateLT (less than) to tell if something
// happens earlier in the state cycle and that doesn't wrap nil
if x >= y && (s != ResStateProcess && s != ResStateCheckApply) {
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: BackPoke: %s", obj, n)
}
wg.Add(1)
go func(nn pgraph.Vertex) error {
defer wg.Done()
return VtoR(nn).SendEvent(event.EventBackPoke, nil)
}(n)
} else {
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: BackPoke: %s: Skipped!", obj, n)
}
}
}
// TODO: do something with return values?
wg.Wait() // wait for all the pokes to complete
}
// RefreshPending determines if any previous nodes have a refresh pending here.
// If this is true, it means I am expected to apply a refresh when I next run.
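// The refresh flag travels along edges that have Notify set: a resource that
// actually applies a change marks its outgoing edges via SetDownstreamRefresh,
// the downstream resource detects that here, and it clears the incoming edges
// with SetUpstreamRefresh once its own CheckApply has performed the refresh.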
func (obj *BaseRes) RefreshPending() bool {
var refresh bool
for _, edge := range obj.Graph.IncomingGraphEdges(obj.Vertex) {
// if we asked for a notify *and* if one is pending!
edge := edge.(*Edge) // panic if wrong
if edge.Notify && edge.Refresh() {
refresh = true
break
}
}
return refresh
}
// SetUpstreamRefresh sets the refresh value to any upstream vertices.
func (obj *BaseRes) SetUpstreamRefresh(b bool) {
for _, edge := range obj.Graph.IncomingGraphEdges(obj.Vertex) {
edge := edge.(*Edge) // panic if wrong
if edge.Notify {
edge.SetRefresh(b)
}
}
}
// SetDownstreamRefresh sets the refresh value to any downstream vertices.
func (obj *BaseRes) SetDownstreamRefresh(b bool) {
for _, edge := range obj.Graph.OutgoingGraphEdges(obj.Vertex) {
edge := edge.(*Edge) // panic if wrong
// if we asked for a notify *and* if one is pending!
if edge.Notify {
edge.SetRefresh(b)
}
}
}
// Process is the primary function to execute for a particular vertex in the graph.
func (obj *BaseRes) Process() error {
if obj.debug {
log.Printf("%s: Process()", obj)
}
// FIXME: should these SetState methods be here or after the sema code?
defer obj.SetState(ResStateNil) // reset state when finished
obj.SetState(ResStateProcess)
// is it okay to run dependency wise right now?
// if not, that's okay because when the dependency runs, it will poke
// us back and we will run if needed then!
if !obj.OKTimestamp() {
go obj.BackPoke()
return nil
}
// timestamp must be okay...
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: OKTimestamp(%v)", obj, obj.Timestamp())
}
// semaphores!
// These shouldn't ever block an exit, since the graph should eventually
// converge, causing them to unlock. More interestingly, since they
// run in a DAG alphabetically, there is no way to permanently deadlock,
// assuming that resources individually don't ever block from finishing!
// The exception is that semaphores with a zero count will always block!
// TODO: Add a close mechanism to close/unblock zero count semaphores...
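// illustrative example (assuming the usual name:count string format for
// the Sema metaparam): an entry like "foo:1" would allow only one holder
// of that named semaphore to run CheckApply at a time, while "bar:3"
// would allow up to three concurrently.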
semas := obj.Meta().Sema
if obj.debug && len(semas) > 0 {
log.Printf("%s: Sema: P(%s)", obj, strings.Join(semas, ", "))
}
if err := obj.Graph.SemaLock(semas); err != nil { // lock
// NOTE: in practice, this might not ever be truly necessary...
return fmt.Errorf("shutdown of semaphores")
}
defer obj.Graph.SemaUnlock(semas) // unlock
if obj.debug && len(semas) > 0 {
defer log.Printf("%s: Sema: V(%s)", obj, strings.Join(semas, ", "))
}
var ok = true
var applied = false // did we run an apply?
// connect any senders to receivers and detect if values changed
if updated, err := obj.SendRecv(obj.Res); err != nil {
return errwrap.Wrapf(err, "could not SendRecv in Process")
} else if len(updated) > 0 {
for _, changed := range updated {
if changed { // at least one was updated
obj.StateOK(false) // invalidate cache, mark as dirty
break
}
}
}
var noop = obj.Meta().Noop // lookup the noop value
var refresh bool
var checkOK bool
var err error
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: CheckApply(%t)", obj, !noop)
}
// lookup the refresh (notification) variable
refresh = obj.RefreshPending() // do i need to perform a refresh?
obj.SetRefresh(refresh) // tell the resource
// changes can occur after this...
obj.SetState(ResStateCheckApply)
// check cached state, to skip CheckApply; can't skip if refreshing
if !refresh && obj.IsStateOK() {
checkOK, err = true, nil
// NOTE: technically this block is wrong because we don't know
// if the resource implements refresh! If it doesn't, we could
// skip this, but it doesn't make a big difference under noop!
} else if noop && refresh { // had a refresh to do w/ noop!
checkOK, err = false, nil // therefore the state is wrong
// run the CheckApply!
} else {
// if this fails, don't UpdateTimestamp()
checkOK, err = obj.Res.CheckApply(!noop)
if promErr := obj.Data().Prometheus.UpdateCheckApplyTotal(obj.GetKind(), !noop, !checkOK, err != nil); promErr != nil {
// TODO: how to error correctly
log.Printf("%s: Prometheus.UpdateCheckApplyTotal() errored: %v", obj, err)
}
// TODO: Can the `Poll` converged timeout tracking be a
// more general method for all converged timeouts? this
// would simplify the resources by removing boilerplate
if obj.Meta().Poll > 0 {
if !checkOK { // something changed, restart timer
obj.cuid.ResetTimer() // activity!
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: Converger: ResetTimer", obj)
}
}
}
}
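// reminder of the CheckApply contract: (true, nil) means the state was
// already correct, (false, nil) means a change was applied (or would be
// required under noop), and (false, err) means it failed; (true, err) is
// not a valid combination, which is what the check below enforces.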
if checkOK && err != nil { // should never return this way
log.Fatalf("%s: CheckApply(): %t, %+v", obj, checkOK, err)
}
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: CheckApply(): %t, %v", obj, checkOK, err)
}
// if CheckApply ran without noop and without error, state should be good
if !noop && err == nil { // ran for real (not noop) and didn't error
obj.StateOK(true) // reset
if refresh {
obj.SetUpstreamRefresh(false) // refresh happened, clear the request
obj.SetRefresh(false)
}
}
if !checkOK { // if state *was* not ok, we had to have apply'ed
if err != nil { // error during check or apply
ok = false
} else {
applied = true
}
}
// when noop is true we always want to update timestamp
if noop && err == nil {
ok = true
}
if ok {
// did we actually do work?
activity := applied
if noop {
activity = false // no we didn't do work...
}
if activity { // add refresh flag to downstream edges...
obj.SetDownstreamRefresh(true)
}
// update this timestamp *before* we poke or the poked
// nodes might fail due to having a too old timestamp!
obj.UpdateTimestamp() // this was touched...
obj.SetState(ResStatePoking) // can't cancel parent poke
if err := obj.Poke(); err != nil {
return errwrap.Wrapf(err, "the Poke() failed")
}
}
// poke at our pre-req's instead since they need to refresh/run...
return errwrap.Wrapf(err, "could not Process() successfully")
}
// innerWorker is the CheckApply runner that reads from processChan.
func (obj *BaseRes) innerWorker() {
running := false
done := make(chan struct{})
playback := false // do we need to run another one?
waiting := false
var timer = time.NewTimer(time.Duration(math.MaxInt64)) // longest duration
if !timer.Stop() {
<-timer.C // unnecessary, shouldn't happen
}
var delay = time.Duration(obj.Meta().Delay) * time.Millisecond
var retry = obj.Meta().Retry // number of tries left, -1 for infinite
var limiter = rate.NewLimiter(obj.Meta().Limit, obj.Meta().Burst)
limited := false
wg := &sync.WaitGroup{} // wait for Process routine to exit
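// rough meaning of the loop state flags: running means a Process()
// goroutine is in flight; waiting means a retry or rate-limit timer is
// armed; playback means an event arrived while we were busy and should
// be replayed once we're done; limited means the rate limiter deferred us.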
Loop:
for {
select {
case ev, ok := <-obj.processChan: // must use like this
if !ok { // processChan closed, let's exit
break Loop // no event, so no ack!
}
if obj.Meta().Poll == 0 { // skip for polling
obj.wcuid.SetConverged(false)
}
// if process started, but no action yet, skip!
if obj.GetState() == ResStateProcess {
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: Skipped event!", obj)
}
ev.ACK() // ready for next message
obj.quiesceGroup.Done()
continue
}
// if running, we skip running a new execution!
// if waiting, we skip running a new execution!
if running || waiting {
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: Playback added!", obj)
}
playback = true
ev.ACK() // ready for next message
obj.quiesceGroup.Done()
continue
}
// catch invalid rates
if obj.Meta().Burst == 0 && !(obj.Meta().Limit == rate.Inf) { // blocked
e := fmt.Errorf("%s: Permanently limited (rate != Inf, burst: 0)", obj)
ev.ACK() // ready for next message
obj.quiesceGroup.Done()
obj.SendEvent(event.EventExit, &SentinelErr{e})
continue
}
// rate limit
// FIXME: consider skipping rate limit check if
// the event is a poke instead of a watch event
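// illustrative numbers: with Limit=0.5 (one event per two seconds) and
// Burst=1, a second event arriving one second after the first gets a
// reservation delay of about one second, so we arm the timer below and
// replay the event when it fires.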
if !limited && !(obj.Meta().Limit == rate.Inf) { // skip over the playback event...
now := time.Now()
r := limiter.ReserveN(now, 1) // one event
// r.OK() seems to always be true here!
d := r.DelayFrom(now)
if d > 0 { // delay
limited = true
playback = true
log.Printf("%s: Limited (rate: %v/sec, burst: %d, next: %v)", obj, obj.Meta().Limit, obj.Meta().Burst, d)
// start the timer...
timer.Reset(d)
waiting = true // waiting for retry timer
ev.ACK()
obj.quiesceGroup.Done()
continue
} // otherwise, we run directly!
}
limited = false // let one through
wg.Add(1)
running = true
go func(ev *event.Event) {
obj.pcuid.SetConverged(false) // "block" Process
defer wg.Done()
if e := obj.Process(); e != nil {
playback = true
log.Printf("%s: CheckApply errored: %v", obj, e)
if retry == 0 {
if err := obj.Data().Prometheus.UpdateState(obj.String(), obj.GetKind(), prometheus.ResStateHardFail); err != nil {
// TODO: how to error this?
log.Printf("%s: Prometheus.UpdateState() errored: %v", obj, err)
}
// wrap the error in the sentinel
obj.quiesceGroup.Done() // before the Wait that happens in SendEvent!
obj.SendEvent(event.EventExit, &SentinelErr{e})
return
}
if retry > 0 { // don't decrement the -1
retry--
}
if err := obj.Data().Prometheus.UpdateState(obj.String(), obj.GetKind(), prometheus.ResStateSoftFail); err != nil {
// TODO: how to error this?
log.Printf("%s: Prometheus.UpdateState() errored: %v", obj, err)
}
log.Printf("%s: CheckApply: Retrying after %.4f seconds (%d left)", obj, delay.Seconds(), retry)
// start the timer...
timer.Reset(delay)
waiting = true // waiting for retry timer
// don't obj.quiesceGroup.Done() b/c
// the timer is running and it can exit!
return
}
retry = obj.Meta().Retry // reset on success
close(done) // trigger
}(ev)
ev.ACK() // sync (now mostly useless)
case <-timer.C:
if obj.Meta().Poll == 0 { // skip for polling
obj.wcuid.SetConverged(false)
}
waiting = false
if !timer.Stop() {
//<-timer.C // blocks, docs are wrong!
}
log.Printf("%s: CheckApply delay expired!", obj)
close(done)
// a CheckApply run (with possibly retry pause) finished
case <-done:
if obj.Meta().Poll == 0 { // skip for polling
obj.wcuid.SetConverged(false)
}
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: CheckApply finished!", obj)
}
done = make(chan struct{}) // reset
// re-send this event, to trigger a CheckApply()
if playback {
// this lock avoids us sending to
// channel after we've closed it!
// TODO: can this experience indefinite postponement ?
// see: https://github.com/golang/go/issues/11506
// pause or exit is in process if not quiescing!
if !obj.quiescing {
playback = false
obj.quiesceGroup.Add(1) // lock around it, b/c still running...
go func() {
obj.Event() // replay a new event
obj.quiesceGroup.Done()
}()
}
}
running = false
obj.pcuid.SetConverged(true) // "unblock" Process
obj.quiesceGroup.Done()
case <-obj.wcuid.ConvergedTimer():
obj.wcuid.SetConverged(true) // converged!
continue
}
}
wg.Wait()
return
}
// Worker is the common run frontend of the vertex. It handles all of the retry
// and retry delay common code, and ultimately returns the final status of this
// vertex execution.
func (obj *BaseRes) Worker() error {
// listen for chan events from Watch() and run
// the Process() function when they're received
// this avoids us having to pass the data into
// the Watch() function about which graph it is
// running on, which isolates things nicely...
if obj.debug {
log.Printf("%s: Worker: Running", obj)
defer log.Printf("%s: Worker: Stopped", obj)
}
// run the init (should match 1-1 with Close function)
if err := obj.Res.Init(); err != nil {
obj.ProcessExit()
// always exit the worker function by finishing with Close()
if e := obj.Res.Close(); e != nil {
err = multierr.Append(err, e) // list of errors
}
return errwrap.Wrapf(err, "could not Init() resource")
}
// if the CheckApply run takes longer than the converged
// timeout, we could inappropriately converge mid-apply!
// avoid this by blocking convergence with a fake report
// we also add a similar blocker around the worker loop!
// XXX: put these in Init() ?
// get extra cuids (worker, process)
obj.wcuid.SetConverged(true) // starts off false, and waits for loop timeout
obj.pcuid.SetConverged(true) // starts off true, because it's not running...
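// (wcuid is assumed here to cover the watch/event side and pcuid the
// Process() side: each reports false while its half is busy, so that
// the cluster can't be considered converged mid-operation.)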
obj.processSync.Add(1)
go func() {
defer obj.processSync.Done()
obj.innerWorker()
}()
var err error // propagate the error up (this is a permanent BAD error!)
// the watch delay runs inside of the Watch resource loop, so that it
// can still process signals and exit if needed. It shouldn't run any
// resource specific code since this is supposed to be a retry delay.
// NOTE: we're using the same retry and delay metaparams that CheckApply
// uses. This is for practicality. We can separate them later if needed!
var watchDelay time.Duration
var watchRetry = obj.Meta().Retry // number of tries left, -1 for infinite
// watch blocks until it ends, & errors to retry
for {
// TODO: do we have to stop the converged-timeout when in this block (perhaps we're in the delay block!)
// TODO: should we setup/manage some of the converged timeout stuff in here anyways?
// if a retry-delay was requested, wait, but don't block our events!
if watchDelay > 0 {
//var pendingSendEvent bool
timer := time.NewTimer(watchDelay)
Loop:
for {
select {
case <-timer.C: // the wait is over
break Loop // critical
// TODO: resources could have a separate exit channel to avoid this complexity!?
case event := <-obj.Events():
// NOTE: this code should match the similar Res code!
//cuid.SetConverged(false) // TODO: ?
if exit, send := obj.ReadEvent(event); exit != nil {
obj.ProcessExit()
err := *exit // exit err
if e := obj.Res.Close(); err == nil {
err = e
} else if e != nil {
err = multierr.Append(err, e) // list of errors
}
return err // exit
} else if send {
// if we dive down this rabbit hole, our
// timer.C won't get seen until we get out!
// in this situation, the Watch() is blocked
// from performing until CheckApply returns
// successfully, or errors out. This isn't
// so bad, but we should document it. Is it
// possible that some resource *needs* Watch
// to run to be able to execute a CheckApply?
// That situation shouldn't be common, and
// should probably not be allowed. Can we
// avoid it though?
//if exit, err := doSend(); exit || err != nil {
// return err // we exit or bubble up a NACK...
//}
// Instead of doing the above, we can
// add events to a pending list, and
// when we finish the delay, we can run
// them.
//pendingSendEvent = true // all events are identical for now...
}
}
}
timer.Stop() // it's nice to cleanup
log.Printf("%s: Watch delay expired!", obj)
// NOTE: we can avoid the send if running Watch guarantees
// one CheckApply event on startup!
//if pendingSendEvent { // TODO: should this become a list in the future?
// if err := obj.Event() err != nil {
// return err // we exit or bubble up a NACK...
// }
//}
}
// TODO: reset the watch retry count after some amount of success
var e error
if obj.Meta().Poll > 0 { // poll instead of watching :(
obj.cuid.StartTimer()
e = obj.Poll()
obj.cuid.StopTimer() // clean up nicely
} else {
e = obj.Res.Watch() // run the watch normally
}
if e == nil { // exit signal
err = nil // clean exit
break
}
if sentinelErr, ok := e.(*SentinelErr); ok { // unwrap the sentinel
err = sentinelErr.err
break // sentinel means, perma-exit
}
log.Printf("%s: Watch errored: %v", obj, e)
if watchRetry == 0 {
err = fmt.Errorf("Permanent watch error: %v", e)
break
}
if watchRetry > 0 { // don't decrement the -1
watchRetry--
}
watchDelay = time.Duration(obj.Meta().Delay) * time.Millisecond
log.Printf("%s: Watch: Retrying after %.4f seconds (%d left)", obj, watchDelay.Seconds(), watchRetry)
// We need to trigger a CheckApply after Watch restarts, so that
// we catch any lost events that happened while down. We do this
// by getting the Watch resource to send one event once it's up!
//v.SendEvent(eventPoke, false, false)
}
obj.ProcessExit()
// close resource and return possible errors if any
if e := obj.Res.Close(); err == nil {
err = e
} else if e != nil {
err = multierr.Append(err, e) // list of errors
}
return err
}

298
resources/augeas.go Normal file

@@ -0,0 +1,298 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build !noaugeas
package resources
import (
"fmt"
"log"
"os"
"strings"
"github.com/purpleidea/mgmt/recwatch"
errwrap "github.com/pkg/errors"
// FIXME: we vendor go/augeas because master requires augeas 1.6.0
// and libaugeas-dev-1.6.0 is not yet available in a PPA.
"honnef.co/go/augeas"
)
const (
// NS is a namespace for augeas operations
NS = "Xmgmt"
)
func init() {
RegisterResource("augeas", func() Res { return &AugeasRes{} })
}
// AugeasRes is a resource that enables you to modify files with augeas.
// Currently it only allows you to change simple files (e.g. sshd_config).
type AugeasRes struct {
BaseRes `yaml:",inline"`
// File is the path to the file targeted by this resource.
File string `yaml:"file"`
// Lens is the lens used by this resource. If specified, mgmt
// will lower the augeas overhead by only loading that lens.
Lens string `yaml:"lens"`
// Sets is a list of changes that will be applied to the file, in the form of
// ["path", "value"]. mgmt will run augeas.Get() before augeas.Set(), to
// prevent changing the file when it is not needed.
Sets []AugeasSet `yaml:"sets"`
recWatcher *recwatch.RecWatcher // used to watch the changed files
}
// AugeasSet represents a key/value pair of settings to be applied.
type AugeasSet struct {
Path string `yaml:"path"` // The relative path to the value to be changed.
Value string `yaml:"value"` // The value to be set on the given Path.
}
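// An illustrative (hypothetical) example of this resource built directly in
// Go; the field names match the struct above, the values are only for
// demonstration:
//
//	res := &AugeasRes{
//		File: "/etc/ssh/sshd_config",
//		Lens: "Sshd.lns",
//		Sets: []AugeasSet{
//			{Path: "PermitRootLogin", Value: "no"},
//		},
//	}
//
// Note that Validate() below expects File and Lens to be set together, and
// specifying the Lens avoids loading every lens that augeas knows about.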
// Default returns some sensible defaults for this resource.
func (obj *AugeasRes) Default() Res {
return &AugeasRes{
BaseRes: BaseRes{
MetaParams: DefaultMetaParams, // force a default
},
}
}
// Validate if the params passed in are valid data.
func (obj *AugeasRes) Validate() error {
if !strings.HasPrefix(obj.File, "/") {
return fmt.Errorf("the File param should start with a slash")
}
if obj.Lens != "" && !strings.HasSuffix(obj.Lens, ".lns") {
return fmt.Errorf("the Lens param should have a .lns suffix")
}
if (obj.Lens == "") != (obj.File == "") {
return fmt.Errorf("the File and Lens params must be specified together")
}
return obj.BaseRes.Validate()
}
// Init initiates the resource.
func (obj *AugeasRes) Init() error {
return obj.BaseRes.Init() // call base init, b/c we're overriding
}
// Watch is the primary listener for this resource and it outputs events.
// Taken from the File resource.
// FIXME: DRY - This is taken from the file resource
func (obj *AugeasRes) Watch() error {
var err error
obj.recWatcher, err = recwatch.NewRecWatcher(obj.File, false)
if err != nil {
return err
}
defer obj.recWatcher.Close()
// notify engine that we're running
if err := obj.Running(); err != nil {
return err // bubble up a NACK...
}
var send = false // send event?
var exit *error
for {
if obj.debug {
log.Printf("%s: Watching: %s", obj, obj.File) // attempting to watch...
}
select {
case event, ok := <-obj.recWatcher.Events():
if !ok { // channel shutdown
return nil
}
if err := event.Error; err != nil {
return errwrap.Wrapf(err, "Unknown %s watcher error", obj)
}
if obj.debug { // don't access event.Body if event.Error isn't nil
log.Printf("%s: Event(%s): %v", obj, event.Body.Name, event.Body.Op)
}
send = true
obj.StateOK(false) // dirty
case event := <-obj.Events():
if exit, send = obj.ReadEvent(event); exit != nil {
return *exit // exit
}
//obj.StateOK(false) // dirty // these events don't invalidate state
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.Event()
}
}
}
// checkApplySet runs CheckApply for one element of the AugeasRes.Sets list.
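// It returns true if the value at the path already matches, and false if a
// change is (or would be) required.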
func (obj *AugeasRes) checkApplySet(apply bool, ag *augeas.Augeas, set AugeasSet) (bool, error) {
fullpath := fmt.Sprintf("/files/%v/%v", obj.File, set.Path)
// We do not check for errors because errors are also thrown when
// the path does not exist.
if getValue, _ := ag.Get(fullpath); set.Value == getValue {
// The value is what we expect, return directly
return true, nil
}
if !apply {
// If we're not applying (noop mode), return directly: the state is
// simply not OK yet. The Get() error (if any) was deliberately ignored
// above, because a missing path is not an error here.
return false, nil
}
if err := ag.Set(fullpath, set.Value); err != nil {
return false, errwrap.Wrapf(err, "augeas: error while setting value")
}
return false, nil
}
// CheckApply method for Augeas resource.
func (obj *AugeasRes) CheckApply(apply bool) (bool, error) {
log.Printf("%s: CheckApply: %s", obj, obj.File)
// By default we do not set any option to augeas, we use the defaults.
opts := augeas.None
if obj.Lens != "" {
// if the lens is specified, we can speed up augeas by not
// loading everything. Without this option, augeas will try to
// read all the files it knows in the complete filesystem.
// e.g. to change /etc/ssh/sshd_config, it would load /etc/hosts, /etc/ntpd.conf, etc...
opts = augeas.NoModlAutoload
}
// Initiate augeas
ag, err := augeas.New("/", "", opts)
if err != nil {
return false, errwrap.Wrapf(err, "augeas: error while initializing")
}
defer ag.Close()
if obj.Lens != "" {
// If the lens is set, load the lens for the file we want to edit.
// We pick Xmgmt, as this name will not collide with any other lens name.
// We do not pick Mgmt as in the future there might be an Mgmt lens.
// https://github.com/hercules-team/augeas/wiki/Loading-specific-files
if err = ag.Set(fmt.Sprintf("/augeas/load/%s/lens", NS), obj.Lens); err != nil {
return false, errwrap.Wrapf(err, "augeas: error while initializing lens")
}
if err = ag.Set(fmt.Sprintf("/augeas/load/%s/incl", NS), obj.File); err != nil {
return false, errwrap.Wrapf(err, "augeas: error while initializing incl")
}
if err = ag.Load(); err != nil {
return false, errwrap.Wrapf(err, "augeas: error while loading")
}
}
checkOK := true
for _, set := range obj.Sets {
if setCheckOK, err := obj.checkApplySet(apply, &ag, set); err != nil {
return false, errwrap.Wrapf(err, "augeas: error during CheckApply of one Set")
} else if !setCheckOK {
checkOK = false
}
}
// If the state is correct or we can't apply, return early.
if checkOK || !apply {
return checkOK, nil
}
log.Printf("%s: changes needed, saving", obj)
if err = ag.Save(); err != nil {
return false, errwrap.Wrapf(err, "augeas: error while saving augeas values")
}
// FIXME: Workaround for https://github.com/dominikh/go-augeas/issues/13
// To be fixed upstream.
if obj.File != "" {
if _, err := os.Stat(obj.File); os.IsNotExist(err) {
return false, errwrap.Wrapf(err, "augeas: error: file does not exist")
}
}
return false, nil
}
// AugeasUID is the UID struct for AugeasRes.
type AugeasUID struct {
BaseUID
name string
}
// UIDs includes all params to make a unique identification of this object.
func (obj *AugeasRes) UIDs() []ResUID {
x := &AugeasUID{
BaseUID: BaseUID{Name: obj.GetName(), Kind: obj.GetKind()},
name: obj.Name,
}
return []ResUID{x}
}
// GroupCmp returns whether two resources can be grouped together or not.
func (obj *AugeasRes) GroupCmp(r Res) bool {
return false // Augeas commands can not be grouped together.
}
// Compare two resources and return if they are equivalent.
func (obj *AugeasRes) Compare(r Res) bool {
// we can only compare AugeasRes to others of the same resource kind
res, ok := r.(*AugeasRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
return true
}
// UnmarshalYAML is the custom unmarshal handler for this struct.
// It is primarily useful for setting the defaults.
func (obj *AugeasRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes AugeasRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*AugeasRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to AugeasRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = AugeasRes(raw) // restore from indirection with type conversion!
return nil
}


@@ -0,0 +1,25 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build noaugeas
package resources
// AugeasRes represents the fields of the Augeas resource. Since this file is
// only built when the "noaugeas" build tag is set, we do not need any fields here.
type AugeasRes struct {
}
