861 Commits

Author SHA1 Message Date
James Shubin
9c75c55fa4 lib: Update the help text
Give some longer descriptions so they show up nicely for the user.
2021-09-29 02:11:19 -04:00
Joe Groocock
b9741e87bd lang: interpolate: Fix string interpolation of dollar symbols
Dollar symbols were failing to parse when not followed by a non-brace,
non-dollar, non-EOF token and causing expected tests to fail. This
simplifies the rules to allow the remaining tests to succeed.

Fix and reinstate the final few failing tests, and add another.

Allow any escape sequence to be matched so that invalid sequences
produce a meaningful error message instead of a generic "cannot parse":

    ast: interpolate: interpolating: V: \?
    unhandled escape sequence token: \?

Tidy the related Makefile rule for generating the ragel parser.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-09-28 21:40:49 +00:00
James Shubin
c555478b54 engine, lang: Misc fixes for golang lint 2021-08-09 16:55:31 -04:00
Joe Groocock
3718372288 docs: Provide Libera.Chat webchat links over ircs URIs
Only a few days after updating the documentation[1] following the move
to libera.chat, the webchat client was added, mirroring the behaviour of
the documentation prior to the change. Replace the ircs:// links with
clickable URLs to a usable browser chat client, which is more approachable
for beginners. Advanced users will know how to connect using their external
client as normal.

[1]: 7d7e225823

Signed-off-by: Joe Groocock <me@frebib.net>
2021-07-12 20:57:44 +00:00
James Shubin
390b41bc26 test: Add small test for weird bash spacing 2021-06-21 18:28:05 -04:00
James Shubin
530c5a64fb vendor: Pin version of consul until we're on golang 1.16
The builds broke because the consul dependency now requires golang 1.16,
so let's pin it for now.
2021-06-20 21:53:01 -04:00
Matthew Lesko-Krleza
d285aaedc9 test: Add a test for AddEdge 2021-05-30 21:08:37 -04:00
James Shubin
453fe18d7f lang: Move the Arg type into the common interface package
This lets it get used in multiple places.
2021-05-30 17:59:50 -04:00
James Shubin
5fae5cd308 lang: Fix grammar typos
Woops!
2021-05-30 17:16:49 -04:00
Joe Groocock
7d7e225823 docs: Update IRC links to Libera.Chat
#mgmtconfig has moved to Libera.Chat as the primary channel for IRC
communications. Update the documentation to reflect this.
Libera.Chat doesn't provide a first-party web portal but does recommend
a few in the linked documentation on the website. As there is no
suitable replacement for webchat.freenode.net, link to the "Choosing an
IRC client" page instead.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-05-27 08:50:24 +01:00
viq
19f404799d test: Fix awk usage for printing refs
This should print more fields than just the 3rd one if they're present.
2021-05-24 23:49:19 +02:00
James Shubin
3e4652dca3 lang: funcs: Fix structlookup unification bug
We had mapped the field type to a dummy type instead of to T2, the return
type. Fixed now and added some tests.

This broke the unification for the load function lookups.
2021-05-23 22:52:50 -04:00
James Shubin
45b08de874 docs: Update the function guide
Hopefully this makes it easier for new function authors to get going
faster!
2021-05-23 20:21:14 -04:00
James Shubin
310e26dda9 lang: Switch over to the new PolyFunc interface
This isn't perfect yet, but we're trying to do this incrementally, and
merge whatever we can as early as possible.

During this work, I realized that the Simplify method of the exclusive
could probably be improved, and possibly receive a better signature.
This work will have to happen later.
2021-05-23 20:03:10 -04:00
James Shubin
f4eb54b835 lang: funcs: Add more invariants to contains func
This adds even more invariants to contains that I might have missed. It
may be redundant, or it may help. It also adds some tests.
2021-05-23 20:03:10 -04:00
James Shubin
3968c12947 lang: funcs: simplepoly: Support variants in func definitions
This adds support for variant types in the simple poly definitions. It
is recommended that you avoid using these as much as possible, because
they're a bit harder for the type unification to solve. The way
this works is that these functions look at the available input types and
then generate a (recursive) set of invariants which might hold true. It
filters out any impossible ones, which is where this variant matching is
done. It's less likely that you'll get a solution with this mechanism,
but it is possible.
2021-05-23 20:03:10 -04:00
James Shubin
21c97d255f lang: funcs: simple: Check for function signatures
Make sure that we actually get function types here. This is just an
extra safety check.
2021-05-23 20:03:10 -04:00
James Shubin
eb1053607a lang: funcs: simple: Check for variant signatures
This adds a safety check in case someone sneaks in a variant type in the
simple function signature. These might be sneaky to detect, and it's
simpler to catch them right here.

From a design point of view, we might consider actually permitting
these, like we did with the simple poly API, but it's probably better
for them to get implemented in that API instead (if we decide to allow
this long-term) and keep this simple API very simple.
2021-05-23 20:03:10 -04:00
James Shubin
de7198e9dc lang: funcs: Check for functions that haven't been migrated
All polymorphic functions should use the new API, at least until we
implement a compat wrapper. But it's probably best if we get rid
of the old API as soon as we make all this type unification work
properly.
2021-05-23 20:03:10 -04:00
James Shubin
0f30f47249 lang: funcs: core: world: Add unification to schedule return expr
This adds a sneaky unification invariant on the expression of the function
return value. I am not entirely sure how often this
will get used, but it could be valuable in the right instance if this
isn't already learned through other sources. I'm fairly confident that
it isn't incorrect, so in the worst case scenario it's redundant
information for the unification solver.

This is being added as a separate commit so that it's obvious how this
type of unification invariant can be applied.
2021-05-23 20:03:10 -04:00
James Shubin
6b2ad8ebc8 lang: funcs: core: world: Add Unify method for schedule function
We should probably add some tests for this function because it once had
type unification ghosts, and while adding this new API method, I somehow
hit some temporary new ghosts that have since been killed.
2021-05-23 20:03:10 -04:00
James Shubin
1f302144ef lang: funcs: core: world: Move schedule func arg names to a const
This is a bit safer and cleaner.
2021-05-23 20:03:10 -04:00
James Shubin
d04c7a6ae4 lang: funcs: Add Unify method for history function
This could use some tests.
2021-05-23 20:03:10 -04:00
James Shubin
9ca2cda8c7 lang: funcs: core: Add more invariants to template func
This adds even more invariants to template that I might have missed. It
may be redundant, or it may help.
2021-05-23 20:03:10 -04:00
James Shubin
1fd06ecbf9 lang: funcs: core: fmt: Add more invariants to printf
This adds even more invariants to printf that I might have missed. It may
be redundant, or it may help.
2021-05-23 20:03:10 -04:00
James Shubin
97baad4cb1 lang: funcs: Add Unify method for maplookup function
This also adds a few tests.
2021-05-23 20:03:10 -04:00
James Shubin
fbd93ecf0d lang: funcs: Add Unify method for structlookup function
This also adds a few tests.
2021-05-23 20:03:10 -04:00
James Shubin
e941ccea92 lang: funcs: Add Unify method for the simplepoly API
This is an implementation of the Unify approach for the simplepoly
function API, which wraps the full function API. It is unique in that a
lot of different functions use it, and it is easy to build functions
with it. It needs to use exclusives to represent the different options,
but at least it filters out any that aren't viable.

The Unify implementation here is fairly similar to the patterns in the
template() function.

To improve the filtering, it would be excellent if we could examine the
return type in `solved` somehow (if it is known) and use that to trim
our list of exclusives down even further! The smaller exclusives are,
the faster everything in the solver can run.
2021-05-23 20:03:10 -04:00
James Shubin
d692483bc3 lang: funcs: Add Unify method for operator function
This is an implementation of the Unify approach for the operator
function. It is unique in that it is a wrapper around the simple
operator function API.

To improve the filtering, it would be excellent if we could examine the
return type in `solved` somehow (if it is known) and use that to trim
our list of exclusives down even further! The smaller exclusives are,
the faster everything in the solver can run.
2021-05-23 20:03:10 -04:00
James Shubin
95cfbd0fff lang: funcs: Ensure that Info sigs are invalid if not built yet
In case something in the type unification tries to speculatively call
Info before it's ready to produce a valid sig, make sure we only return
a definitive answer (non-nil, and no variant types) once we've
conclusively finished defining the signature.
2021-05-23 20:03:10 -04:00
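
A hedged sketch of that guard, with invented type names (polyFunc and Info are
illustrative only, not the real mgmt interfaces): the function only reports a
signature once it has been conclusively built.

    package main

    import "fmt"

    // Info is a stand-in for the function metadata struct; Sig is the signature.
    type Info struct {
        Sig string
    }

    // polyFunc is an invented example of a polymorphic function implementation.
    type polyFunc struct {
        built bool   // set once the build step has finished
        sig   string // the concrete, variant-free signature
    }

    // Info only returns a definitive signature once we've conclusively built one.
    func (f *polyFunc) Info() *Info {
        if !f.built || f.sig == "" {
            return &Info{} // speculative callers get a non-answer, not a wrong one
        }
        return &Info{Sig: f.sig}
    }

    func main() {
        f := &polyFunc{}
        fmt.Printf("%+v\n", f.Info()) // &{Sig:}
        f.built, f.sig = true, "func(a int) int"
        fmt.Printf("%+v\n", f.Info()) // &{Sig:func(a int) int}
    }
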
James Shubin
b3d1ed9e65 lang: funcs: core: math: Add a fortytwo function
This is mainly meant as a useful test case, but might as well have it be
fun too. As an aside, it taught me a surprising result about the %v verb
in printf, and we'll have to decide if it's an issue we care about.

https://github.com/golang/go/issues/46118

The interesting thing about this method is that it uses the simplepoly
API but has no input args-- only the output types are different. If it
had identical types in the input args, that might also have been
interesting, but it's more rare to have none. Hopefully this exercises
our type unification logic.
2021-05-12 03:30:25 -04:00
Joe Groocock
fe2b8c9fee engine: resources: exec: AutoEdge to User/Group/File
Fixes https://github.com/purpleidea/mgmt/issues/221

Signed-off-by: Joe Groocock <me@frebib.net>
2021-05-11 16:51:02 -04:00
James Shubin
2d7deef4e2 lang: unification: Don't stall the solver over generators
If we have a solution, and all that remains are generators, then feel
free to remove them and win.
2021-05-11 05:23:00 -04:00
James Shubin
b4a70b02e3 lang: funcs: Add Unify method for contains function
This is an implementation of the Unify approach for the contains
function. It is unique in that its generator invariant can recursively
generate a new generator invariant once.
2021-05-11 04:41:32 -04:00
James Shubin
c5c2364ed4 lang: funcs: core: fmt: Add an additional invariant to printf
This adds an invariant for printf that I might have missed. It may be
redundant, or it may help.
2021-05-11 03:23:33 -04:00
James Shubin
efcc4291a3 lang: funcs: core: Add Unify method for template function
This is an implementation of the Unify approach for the template
function.
2021-05-11 03:22:27 -04:00
James Shubin
6ea6ee264d lang: Add new unification rules for functions
This is meant as an incremental step into the new unification. Hopefully
it doesn't break anything and that we can rip out the old polymorphisms
work soon.
2021-05-11 02:52:35 -04:00
James Shubin
2865ba7632 lang: unification: Improve our simple solver
This removed a bug in the InvariantCall stuff, and also hopefully made
it more reliable at actually solving when it has a solution.
2021-05-11 02:47:24 -04:00
James Shubin
2bed668d31 lang: interfaces: Small fixups to make unification work for now
This is all hacks until it works. Sorry that I am not a type unification
expert. If you are, please send us some patches =D
2021-05-11 01:31:10 -04:00
James Shubin
9dc24860f3 lang: interfaces: Add a new poly func interface
This new interface is subject to change and will probably be renamed if
we decide to keep it.
2021-05-11 00:45:25 -04:00
James Shubin
f01377b3bc lang: funcs: core: fmt: Add Unify method for printf
This is an implementation of the Unify approach for the printf function.
2021-05-11 00:33:50 -04:00
Joe Groocock
7443dfac4c misc: Run apt update before installing packages
Sometimes the package repo may be out of date and installing required
packages can return 404 because the version in the stale database has
been removed.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-05-09 19:22:40 +01:00
James Shubin
e6408e187c lang: Rename old fail5 and fail6 variables
Last of the numbered error scenario cleanups...
2021-05-08 05:19:36 -04:00
James Shubin
a02d282d3e lang: Rename old fail8 variable to failInterpolate
Eight... Interpolate?

Cleanups...
2021-05-08 04:30:17 -04:00
James Shubin
f778f53744 lang: Rename old fail9 variable to failInit
Cleanups...
2021-05-08 04:25:56 -04:00
James Shubin
95ea93564e lang: Rename old fail4 variable to failGraph
Cleanups...
2021-05-08 04:20:05 -04:00
James Shubin
d51029e86c lang: Rename old fail3 variable to failUnify
Cleanups...
2021-05-08 04:17:21 -04:00
James Shubin
1016699c94 lang: Rename old fail2 variable to failSetScope
Cleanups...
2021-05-08 04:13:33 -04:00
James Shubin
63f63955e7 lang: Replace numbered errors with named ones
This makes the tests easier to read and modify without having out of
order numbers. When writing the tests, you'll remember more easily which
section you're erroring in too!
2021-05-07 23:41:00 -04:00
James Shubin
37be9fda9f lang: Catch duplicate resource fields or meta entries statically
This teaches the compiler to catch entries with duplicate fields, and
duplicate meta entries, because it could be ambiguous to determine which
should take precedence. For example, if you specified `content` to a
file resource twice, this should error. This is known statically, so we
can catch it. If you specified two `Meta:noop` entries, this can also be
caught.

The interesting part happens when you specify one `Meta:noop` entry, and
one `Meta` entry which happens to contain a noop field in the struct.
For this, we actually have to wait until type unification is finished,
and catch the error there. This is because after type unification we
will know the precise type of the struct being passed to `Meta`, and so
we can look at its field names, even if their values aren't yet known
because the graph hasn't run yet.
2021-05-07 23:41:00 -04:00
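
A tiny sketch of the static part of that check; the entry type and
checkDuplicates function are invented for illustration and are not the real
AST code, they just show the duplicate-detection idea.

    package main

    import "fmt"

    // entry represents one `field => value` line in a resource body.
    type entry struct {
        field string
    }

    // checkDuplicates errors if any field name appears more than once.
    func checkDuplicates(entries []entry) error {
        seen := make(map[string]struct{})
        for _, e := range entries {
            if _, exists := seen[e.field]; exists {
                return fmt.Errorf("duplicate field `%s` in resource", e.field)
            }
            seen[e.field] = struct{}{}
        }
        return nil
    }

    func main() {
        err := checkDuplicates([]entry{{"content"}, {"state"}, {"content"}})
        fmt.Println(err) // duplicate field `content` in resource
    }
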
James Shubin
0756133a7e lang: Add a named error for catching errors in test on Init
This makes it so that we can catch errors that happen in Init. We also
name the errors so that number sequence doesn't matter.
2021-05-07 23:14:49 -04:00
Joe Groocock
83c5ab318b lang: types: Clear map/list types during Into()
Map and list types are now unconditionally initialised during an Into()
call to ensure that the only data within them after the operation is
that added by the Into() function.

Prior to this change, map/list types would likely not be cleared before
the data was inserted into them, with a few exceptions: nil pointers, or
maps/lists with insufficient capacity, would be reinitialised and used
to replace the existing backing data store. In some cases this wouldn't
occur, meaning any residual data existing in the
container before the Into() call could persist after the data copy
completes. This behaviour is wildly inconsistent and not ideal in the
vast majority of cases. It should be assumed that the Into() call will
preserve nothing and always produce a consistent and deterministic
output.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-05-05 10:41:48 +01:00
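
A minimal sketch of the clearing behaviour described above (illustrative only,
not the mgmt types package): the destination map is unconditionally
reinitialised so no residual entries survive the copy.

    package main

    import (
        "fmt"
        "reflect"
    )

    // intoMap copies src into the addressable map held by dst, discarding
    // whatever the map contained before.
    func intoMap(src map[string]int, dst reflect.Value) {
        dst.Set(reflect.MakeMapWithSize(dst.Type(), len(src))) // clear first
        for k, v := range src {
            dst.SetMapIndex(reflect.ValueOf(k), reflect.ValueOf(v))
        }
    }

    func main() {
        dst := map[string]int{"stale": 99} // residual data that must not persist
        intoMap(map[string]int{"a": 1}, reflect.ValueOf(&dst).Elem())
        fmt.Println(dst) // map[a:1]
    }
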
Joe Groocock
0c28957016 lang: funcs: Funcs that never load are fatal
If there is a programming error in any func Stream() implementation then
the node could never output anything, causing the engine to hang
indefinitely waiting for an initial value that will never come.

Nodes keep track of whether they are loaded, so testing for this
occurrence is pretty simple. Any nodes that do not return output at least
once before they close their output channel can be considered a fatal
error on which the engine will exit.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-05-04 11:39:34 -04:00
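
A condensed sketch of that detection (illustrative, not the real function
engine): a node whose output channel closes without ever sending is reported
as a fatal error instead of being waited on forever.

    package main

    import "fmt"

    // drain consumes a node's output and errors if it never loaded.
    func drain(out <-chan int) error {
        loaded := false
        for range out {
            loaded = true // at least one value was produced
        }
        if !loaded {
            return fmt.Errorf("func closed its output without ever producing a value")
        }
        return nil
    }

    func main() {
        ch := make(chan int)
        close(ch) // simulate a buggy Stream() that exits without sending
        fmt.Println(drain(ch))
    }
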
James Shubin
959084040d lang: Don't block the engine for empty values
If the user passed an empty list or map, we should send that and not
block. This also includes a simple test to ensure this keeps working.
2021-05-04 11:27:58 -04:00
James Shubin
8a428c6936 lang: Add a test timeout to catch blocked cases
This should catch any blocked tests and report an error. The timeout is
arbitrary.
2021-05-04 11:26:17 -04:00
Joe Groocock
48da23226c project: Add frebib to AUTHORS
Signed-off-by: Joe Groocock <me@frebib.net>
2021-05-04 14:16:54 +01:00
James Shubin
5f0c6e5102 lang: types: Add extra ValueOf test
Just to double check weird behaviours of the golang reflect lib.
2021-05-04 06:26:33 -04:00
Joe Groocock
29f1c6f50e lang: types: Fix ValueOf() panic with nil pointer values
Some forms of reflect.Value can cause ValueOf() to panic when there is a
nil pointer somewhere within the reflect.Value, whether that be a
container type like a struct, list or map, or just a raw nil pointer.

In these cases, ValueOf() attempted to dereference the pointer without
ever checking if it was nil. mgmt lang doesn't have pointers of any
kind, so these Golang values cannot be represented in mcl types in their
current form; return a helpful error to the user instead.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-05-04 06:26:23 -04:00
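
A minimal sketch of the kind of guard described, assuming a simplified
valueOf helper (not the real lang/types code): nil pointers are rejected with
an error rather than dereferenced.

    package main

    import (
        "fmt"
        "reflect"
    )

    // valueOf converts a reflect.Value to a string, refusing nil pointers.
    func valueOf(v reflect.Value) (string, error) {
        for v.Kind() == reflect.Ptr {
            if v.IsNil() {
                return "", fmt.Errorf("cannot represent nil pointer in mcl")
            }
            v = v.Elem() // safe to dereference now
        }
        return fmt.Sprintf("%v", v.Interface()), nil
    }

    func main() {
        var p *int
        _, err := valueOf(reflect.ValueOf(p))
        fmt.Println(err) // cannot represent nil pointer in mcl
    }
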
James Shubin
4d187419ac engine: Small typo/cleanups in autogrouping code 2021-05-04 05:30:27 -04:00
James Shubin
58998f9cab engine: Transform the send/recv init functions into helpers
Since we'll want to use them elsewhere, we should make these helper
functions. It also makes the code look a lot neater. Unfortunately, it
adds a bit more indirection, but this isn't a critical flaw here.
2021-05-04 05:30:27 -04:00
James Shubin
cdc5ca8854 util: Add a simple log wrapper for io.Writer
This lets logger interfaces that expect an io.Writer be satisfied by our
logging interface.
2021-05-04 05:27:07 -04:00
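
A minimal sketch of such a wrapper, assuming a printf-style logf function (the
names here are illustrative, not the mgmt util API):

    package main

    import (
        "fmt"
        "log"
        "strings"
    )

    // logWriter adapts a printf-style logging function into an io.Writer.
    type logWriter struct {
        logf func(format string, v ...interface{})
    }

    func (w *logWriter) Write(p []byte) (int, error) {
        w.logf("%s", strings.TrimRight(string(p), "\n")) // one log line per write
        return len(p), nil
    }

    func main() {
        logf := func(format string, v ...interface{}) {
            fmt.Printf("main: "+format+"\n", v...)
        }
        logger := log.New(&logWriter{logf: logf}, "sub: ", 0)
        logger.Println("hello via io.Writer") // main: sub: hello via io.Writer
    }
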
James Shubin
44e1e41266 lang: types: Improve documentation for ValueOf functions
A reminder that nils in golang don't map to anything in mcl.
2021-05-04 05:27:07 -04:00
James Shubin
33fda8605a lang: types: Add new ValueOf tests
Hopefully this makes all of this a bit more obvious.
2021-05-04 04:27:33 -04:00
Joe Groocock
5f9ed69299 misc: Replace go-bindata with maintained fork
As per [1] go-bindata was removed from GitHub and later replaced by the
community. jteeuwen/go-bindata has since been archived to represent this
state and now most communities use kevinburke/go-bindata instead as it
is more actively maintained.

[1]: https://github.com/jteeuwen/go-bindata/issues/5

Signed-off-by: Joe Groocock <me@frebib.net>
2021-05-04 04:10:05 -04:00
Joe Groocock
7f1baea3b0 engine: resources: docker: Replace deprecated NewClient() with NewClientWithOpts()
docker/client.NewClient() is deprecated in favour of NewClientWithOpts()
which takes a series of client.Opt functions to configure the API
client. As mgmt only passes the API version through, this simplifies the
NewClient() calls.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-05-02 10:39:04 +01:00
James Shubin
f75026e4b2 lang: unification: Teach the solver about new invariants
This extends our simple solver so that it can understand the new
invariants. For the Value holding invariants, it doesn't do much-- those
are expected to be used within the execution of the GeneratorInvariant
so they are passed through untouched. For the GeneratorInvariant, the
solver must actually try running it periodically to see if it produces new
invariants that are helpful for the whole solution. Since this could get
expensive quickly, the logic is to only try running these once we've
entered steady state, but before we've tried to reach for the
ExclusiveInvariants. The exclusives are the most expensive so we run
these last, and the generators are run late because they won't usually
produce anything helpful unless some of the basic solving has already
happened. If they could produce useful things right away, then there
wouldn't be a need for them!
2021-05-02 00:52:57 -04:00
James Shubin
ce7a1a9c67 lang: Add a CallFuncArgsValueInvariant invariant
This is a new invariant that I realized might be useful. It's not
guaranteed that it will be used or useful, but I wanted to get it out of
my WIP branch, to keep that work cleaner.
2021-05-02 00:52:57 -04:00
James Shubin
a62056fb19 lang: Add a GeneratorInvariant invariant
This is a new invariant that I realized might be useful. It's not
guaranteed that it will be used or useful, but I wanted to get it out of
my WIP branch, to keep that work cleaner.
2021-05-02 00:52:57 -04:00
James Shubin
f3434a8155 lang: Add a ValueInvariant invariant
This is a new invariant that I realized might be useful. It's not
guaranteed that it will be used or useful, but I wanted to get it out of
my WIP branch, to keep that work cleaner.
2021-05-02 00:52:57 -04:00
James Shubin
4e023ef517 lang: Move the ExprAny to the interfaces package
Having this special "placeholder" interface is useful for more than one
package.
2021-05-02 00:52:57 -04:00
James Shubin
97b80cb930 lang: unification: Move the InvariantSolution struct
We are just relocating this in the file for consistency.
2021-05-02 00:52:57 -04:00
James Shubin
525b4e6a53 lang: Move core unification structs into shared interfaces package
We should probably move these into the central interfaces package so
that these can be used from multiple places. They don't have any
dependencies, and it doesn't make sense to have the solver code mixed in
to the same package. Overall the interface being implemented here could
probably be improved, but that's a project for another day.
2021-05-02 00:52:57 -04:00
James Shubin
054eaf65b8 util: safepath: Add a new safe path helper library
This is a new path manipulation library that is designed to be safer
than using simple strings for everything. It is more work to use, but it
can help you keep track of the different path types.

It has been sitting unused in a git branch for too long, and I figured
it should see the light of day in case someone wants to start using it.
2021-05-02 00:52:57 -04:00
James Shubin
48fa796ab1 test: Disable failing test
Hit another intermittent failed test in GH CI.
2021-03-10 03:36:27 -05:00
Jean-Philippe Evrard
1873e022cc test: Add guard when no commit needs testing
Without this patch, github actions fail.

It's a temporary workaround until [1] is done.

[1]: https://github.com/purpleidea/mgmt/issues/643
2021-03-03 10:49:22 +01:00
James Shubin
35a8062b58 test: Disable travis IRC notifications for now
One of our contributors is unusually annoyed by them, and it's important
to keep your contributors happy!
2021-03-03 04:28:53 -05:00
Jean-Philippe Evrard
636248ad67 test: Ensure branches are also testable
test-commit-message runs on PRs, but also on pushes to other branches
which aren't PRs. We need to test those too.

This is fixed by ensuring the same kind of behaviour as Travis CI:
when a patch is pushed to a branch, the branch name is used for
testing [1].

[1]:
https://docs-staging.travis-ci.com/user/environment-variables/#default-environment-variables
2021-03-03 09:52:49 +01:00
Jean-Philippe Evrard
4511c54fad test: Ensure github CI tests commit messages
Without this patch, the travis var is empty, and we just pass.
This is a problem, as we are using github CI nowadays.

This should fix it.
2021-03-03 02:47:30 -05:00
James Shubin
7f3970541b test: Skip more tests
I think some of these fail due to shared environments and noisy
neighbours in github. We'll have to fix that eventually or test
elsewhere.
2021-03-02 13:41:00 -05:00
James Shubin
4040f4d151 test: Skip yet another intermittent test
We shall not have intermittent tests!
2021-03-02 12:48:44 -05:00
James Shubin
887d374c53 lang: funcs: Catch simple function api usage without types
In case a programmer makes a mistake and passes in a function using the
simple function API without a type or even without the entire value,
we'll now return a sensible error message and panic in init() instead of
requiring a test to catch this alone.
2021-02-28 22:50:51 -05:00
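
A hedged sketch of that registration-time check; the register function and
Func struct here are invented for illustration and are not the real simple
function API.

    package main

    import "fmt"

    // Func is a simplified stand-in for a registered simple function.
    type Func struct {
        T string                        // type signature, e.g. "func(a int) int"
        V func(args []int) (int, error) // implementation
    }

    var registry = make(map[string]*Func)

    // register panics at init() time if the function is missing its type or value.
    func register(name string, fn *Func) {
        if fn == nil || fn.T == "" || fn.V == nil {
            panic(fmt.Sprintf("simple func `%s` registered without a type or value", name))
        }
        registry[name] = fn
    }

    func init() {
        register("answer", &Func{
            T: "func() int",
            V: func([]int) (int, error) { return 42, nil },
        })
    }

    func main() {
        fmt.Println(len(registry), "function(s) registered") // 1 function(s) registered
    }
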
James Shubin
be4b87155d test: Skip another intermittent test
I think this might be related to multiple jobs running at the same time
on the same host. Not sure though.
2021-02-24 04:00:34 -05:00
James Shubin
b987a7da4c examples: tftp: Fix missing error checking in example 2021-02-20 13:31:45 -05:00
James Shubin
7153fe5ad2 test: Skip intermittent tests
It would be great to fix some rare races or debug what's wrong in CI,
but for now let's get rid of these fails so that we can get better data
for when we break something more serious. We'll need to revisit all of
this for sure.
2021-02-19 21:17:57 -05:00
James Shubin
ccd8ba44d9 test: Exclude generated ragel parser from golint 2021-02-17 22:01:34 -05:00
James Shubin
e7ef0f7a6c test: Set default column size if $TERM env var isn't set
Seems Github actions breaks or unsets this, leading to the errors:

tput: No value for $TERM and no -T specified
seq: missing operand
Try 'seq --help' for more information.

Hopefully this makes things a bit more robust.
2021-02-17 04:07:30 -05:00
James Shubin
400b58c0e9 lang: Improve string interpolation
The original string interpolation was based on hil which didn't allow
proper escaping, since they used a different escape pattern. Secondly,
the golang Unquote function didn't deal with the variable substitution,
which meant it had to be performed in a second step.

Most importantly, because we did this partial job in Unquote (the fact
that it strips the leading and trailing quotes tricked me into thinking
I was done with interpolation!) it was impossible to remedy the
remaining parts in a second pass with hil. Both operations need to be
done in a single step. This is logical when you aren't tunnel visioned.

This patch replaces both of these so that string interpolation works
properly. This removes the ability to allow inline function calls in a
string, however this was an incidental feature, and it's not clear that
having it is a good idea. It also requires you wrap the var name with
curly braces. (They are not optional.)

This comes with a load of tests, but I think I got some of it wrong,
since I'm quite new at ragel. If you find something, please say so =D In
any case, this is much better than the original hil implementation, and
easy for a new contributor to patch to make the necessary fixes.
2021-02-17 03:35:12 -05:00
James Shubin
5257496214 test: Make a few cosmetic changes and enable race testing 2021-02-17 02:45:47 -05:00
Jean-Philippe Evrard
e1bfe4a3ce test: Add GitHub Actions test support
Authored-By: Jean-Philippe Evrard <open-source@a.spamming.party>
Co-Authored-By: James Shubin <james@shubin.ca>
Signed-off-by: Joe Groocock <me@frebib.net>
2021-02-14 21:41:02 -05:00
James Shubin
f31cce8ec2 test: misc: Clarify golang wording 2021-02-14 21:39:01 -05:00
Joe Groocock
169ebfa72c test: make-deps: Add folds around tests and dep blocks
Improves readability of CI test output and hides away the complexity
when in most cases it is not required. Retain fold behaviour for both
Travis and GitHub Actions in case both are used in any capacity.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-02-12 16:44:12 +00:00
Joe Groocock
7cace52ab5 test: prevent LinuxBrew in GitHub Actions CI
Ubuntu-latest in GitHub Actions provides linuxbrew, so the tests install
both the native Debian dependency packages, and also the linuxbrew
variants which is slower and entirely redundant.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-02-12 16:44:12 +00:00
Joe Groocock
95b93c60d9 test: Invert negative bash assertions
In bash, `-n` means `non-zero length`, which is the opposite of `-z`,
meaning `zero length`. `-n` is semantically identical to `! -z`, but
reads more directly.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-02-12 16:44:12 +00:00
Joe Groocock
5af1dcb8b1 test: Add in_ci utility test function
in_ci checks for environment variables set by a selection of CI systems
and returns true if the test appears to be running in CI. Additionally
it can test for specific CI systems, and returns true if the CI system
is listed.

Deduplicate existing environment checks for Travis and Jenkins.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-02-12 16:44:11 +00:00
Joe Groocock
6a61774fb7 docker: Bump docker dependencies, add containerd
These dependencies are maintained because the upstream repos bundle
vendor directories into the repos, which cause namespacing issues during
build. Git submodules don't strip the vendor directory whereas most
vendoring tools would.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-02-09 21:14:34 +00:00
James Shubin
ccbaca24f1 gopath: Remove this unused directory
I had this symlink hack a long time ago. I don't think it's being used
anymore.
2021-02-07 21:23:49 -05:00
Joe Groocock
07b6048dc5 etcd: Bump etcd + friends to the latest upstream version
This allows dropping the pinned grpc-prometheus and grpc-gateway
libraries as git master works fine for now.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-02-07 12:55:02 +00:00
Joe Groocock
60dd34d066 make: Drop support for Go 1.9 in make build
docs/development.md says the minimum required Golang version is 1.13 at
the time of writing.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-02-07 00:08:51 -05:00
James Shubin
28451d1e14 lang: funcs: Fixup race in vumeter example
The vumeter example was written quickly and without much care. This
fixes a possible race (panic) and also removes the busy loop that wastes
CPU while we're waiting for the first value to come in.
2021-02-06 23:59:06 -05:00
Joe Groocock
db95b6381f examples: lang: Reinstate mcl as unification bug is fixed
Struct field names now correctly map based on their `lang` tags in Go
structs, so this example now works as originally intended.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-02-06 16:57:01 +00:00
Joe Groocock
6b14c9bea4 lang: Map Go struct fields using lang struct tag
Converting a reflect.Type of KindStruct did not respect the `lang` tag
on struct fields indicating how fields from mcl structs should be mapped
even though resource field names did. This patch should allow structs
with mapped fields to be respected.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-02-06 16:57:01 +00:00
Joe Groocock
742adc00fe lang: Convert StmtRes to engine.Res with types.Into()
Replace existing field-mapping code with calls to types.Into() to
reflect the mcl data into the Go resource struct with finer granularity
and accuracy, and less reliance on the magic reflect.Set() function.

One major advantage over reflect.Value.Set() is Into() allows tailoring
the data that is set, specifically when coercing mcl struct values into
Golang struct values, the fields can be appropriately mapped based on
the lang tag on the struct field. With reflect.Value.Set() this was not
at all possible as there was a contradiction of logic given the
following rules:

- mcl struct fields must have lowercase names
- Golang struct fields with lowercase names are unexported
- Golang reflection does not allow modifying unexported fields

Prior to this change, it was impossible to map an mcl inline struct in a
resource to the matched Golang counterpart, even if the lang tag was
present on the struct field. This can be demonstrated with the following
trivial example:

    test "name" {
        key => struct{
            name => "hello",
        },
    }

and the accompanying Golang resource definition:

    type TestRes struct {
        traits.Base
        traits.Edgeable

        Key struct {
            Name string `lang:"name"`
        } `lang:"key"`
    }

Due to the mismatch in field names in the embedded struct, the type
unifier failed and refused to match mcl 'name' to Go 'Name' due to the
missing mapping logic.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-02-06 16:57:00 +00:00
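
The mapping relies on reading the `lang` struct tag via reflection; a small
illustration of that lookup, reusing the TestRes shape from the commit message
(trimmed of the embedded traits):

    package main

    import (
        "fmt"
        "reflect"
    )

    // TestRes mirrors the example above, minus the embedded traits.
    type TestRes struct {
        Key struct {
            Name string `lang:"name"`
        } `lang:"key"`
    }

    func main() {
        t := reflect.TypeOf(TestRes{})
        key, _ := t.FieldByName("Key")
        fmt.Println(key.Tag.Get("lang")) // key

        name, _ := key.Type.FieldByName("Name")
        fmt.Println(name.Tag.Get("lang")) // name
    }
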
Joe Groocock
52897cc16c lang: Implement Into() to set types.Value from reflect.Value
Into() mutates a given reflect.Value and sets the data represented by
the types.Value into the storage represented by the reflect.Value.

Into() is the opposite of ValueOf(), which converts reflect.Value into
types.Value, and in theory they should be (almost) bijective with some
edge case exceptions where the conversion is lossy.

Simply, it replaces reflect.Value.Set() in the broad case, giving finer
control of how the reflect.Value is modified and how the data is set.
types.Value.Value() is now also a redundant function that achieves the
same outcome as Into(), but with less type specificity.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-02-06 16:57:00 +00:00
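
A rough sketch of the Into() idea under simplified assumptions (a bare
intoString helper rather than the real types.Value API): the caller's storage
is mutated in place via reflection.

    package main

    import (
        "fmt"
        "reflect"
    )

    // intoString writes s into whatever addressable string rv points at.
    func intoString(s string, rv reflect.Value) error {
        if rv.Kind() != reflect.Ptr || rv.IsNil() {
            return fmt.Errorf("need a non-nil pointer to write into")
        }
        elem := rv.Elem()
        if elem.Kind() != reflect.String {
            return fmt.Errorf("cannot set a %s from a string", elem.Kind())
        }
        elem.SetString(s) // mutate the caller's storage in place
        return nil
    }

    func main() {
        var out string
        if err := intoString("hello", reflect.ValueOf(&out)); err != nil {
            panic(err)
        }
        fmt.Println(out) // hello
    }
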
Joe Groocock
c950568f1b lang: Move StructTag const into lang/types
This constant value is strongly tied to the language, and little to do
with the engine. Move the definition into the lang/types package to
prevent circular imports between lang/types and engine/util.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-02-06 16:56:57 +00:00
Joe Groocock
845d7ff188 lang: Fix panic in lang/types/ValueOf() for Struct
Replace use of reflect.Value.Len() with NumField(), which is intended to
return the number of fields in a reflected Struct value.

Len should only be used for Array, Chan, Map, Slice and String types.

Add some trivial sanity check tests for ValueOf() for the simple and
complex container types.

Signed-off-by: Joe Groocock <me@frebib.net>
2021-01-31 23:10:16 +00:00
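
A quick illustration of the distinction behind the fix (plain Go reflection,
not the mgmt code): Len() panics on struct values, while NumField() is the
appropriate call.

    package main

    import (
        "fmt"
        "reflect"
    )

    func main() {
        v := reflect.ValueOf(struct{ A, B int }{})
        fmt.Println(v.NumField()) // 2
        // v.Len() would panic: reflect: call of reflect.Value.Len on struct Value
    }
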
James Shubin
3bd8658da6 art: Use a transparent logo for dark themes
As helpful user frebib pointed out, our logo doesn't render nicely when
github users use the dark theme. This has already been complained about
by others in:

https://github.community/t/support-theme-context-for-images-in-light-vs-dark-mode/147981/2

For now, we'll switch to a transparent background.
2021-01-31 18:00:47 -05:00
James Shubin
336a38081a legal: Happy 2021 everyone...
Done with:

ack '2020+' -l | xargs sed -i -e 's/2020+/2021+/g'

Checked manually with:

git add -p

Hello to future James from 2022, and Happy Hacking!
2021-01-31 16:52:46 -05:00
Jean-Philippe Evrard
01c2131436 misc: Make-deps should assume go vet present
While the code comment says to check if the go vet command is present,
it actually tests whether the go vet command returns properly.

This is a problem if go vet is _not_ returning 0 due to a
failure while go vet is present: it will try to install the
legacy go vet.

This fixes it by removing this block of code completely,
as we assume a golang version which contains it anyway.
2020-12-09 19:25:19 -05:00
Joe Groocock
c274231544 test: Fix implicit test fail in test-markdownlint
Fix the silent test failure by catching the uncaught error from
`command`, handling the failure gracefully.

    $ bash -x test/test-markdownlint.sh
    ...
    ++ command -v mdl
    + MDL=
    $ echo $?
    1

    $ bash -x test/test-markdownlint.sh
    ...
    ++ command -v mdl
    + MDL=
    + true
    + '[' -z '' ']'
    + fail_test 'The '\''mdl'\'' utility can'\''t be found.'
    + echo -e 'FAIL: The '\''mdl'\'' utility can'\''t be found.'
    FAIL: The 'mdl' utility can't be found.

Fix a couple of glaring shellcheck warnings and errors mostly
surrounding variable quoting.

Signed-off-by: Joe Groocock <me@frebib.net>
2020-12-08 15:51:52 +00:00
Joe Groocock
4a2864701c lang: Assert that 'metadata.yaml' is not parsed as raw mcl
Contrary to expectations, `mgmt run lang metadata.yaml` would
produce an error similar to the following, which is likely unexpected
from the user perspective:

    2020-11-24 12:24:08.330968 I | cli: lang: lexing/parsing...
    2020-11-24 12:24:08.331153 I | run: error: cli parse error: could not generate AST: parser: `syntax error: unexpected DOT` @1:1

Produce a user-friendly warning instead, hinting with the supported
usage:

    2020-11-24 13:15:01.686659 I | run: error: cli parse error: could not activate an input parser: unexpected raw code 'metadata.yaml'. Did you mean './metadata.yaml'?

Signed-off-by: Joe Groocock <me@frebib.net>
2020-11-24 13:56:08 +00:00
James Shubin
76ede10e0a misc: Update path to go-fuzz
They moved it :/
2020-09-23 13:42:51 -04:00
Ahmed Al-Hulaibi
274e01bb75 misc, docs: Update minimum required golang version to 1.13 2020-09-23 12:34:54 -04:00
James Shubin
d75f763c99 misc, docs: Move to golang 1.12 2020-09-23 12:34:54 -04:00
Donald Bakong
5bc985663c docs: Add underscore issue to FAQ 2020-09-23 11:25:46 -04:00
James Shubin
df9e2e853f docs: Update state of remote execution and resource 2020-09-23 11:21:03 -04:00
Ivan Pejić
b4828a6f0a docs: Update FAQ to mention temp absence of remote 2020-09-23 11:15:44 -04:00
Donald Bakong
e99dd749a0 engine: resources: Fixing return bug 2020-09-23 11:10:38 -04:00
David Randall
10ce7178c0 misc: Incorrect dependency path for dvyukov/go-fuzz
Fix the dependency path for dvyukov/go-fuzz to github.com from golang.org.

Fixes #601
2020-06-07 15:05:46 -04:00
Donald Bakong
5c6a66eaf5 lang: funcs: Add cidr_to_ip function 2020-06-03 22:01:01 -04:00
Francois Rompre-Lanctot
36d30bc985 lang: funcs: Add macfmt function 2020-06-03 17:46:39 -04:00
Adam Sigal
a5152b82e9 engine: resources: exec: Add Env
Add functionality to specify environment variables in exec.
2020-04-19 20:30:43 -06:00
James Shubin
e9af8a2595 engine: resources: exec: Clean up error handling
Some quick fixes; this whole resource should be looked at for
discrepancies, since it was written very early.
2020-04-14 22:59:33 -04:00
Adam Sigal
84b5b60d49 engine: resources: exec: Fix typo
Typo in description of cwd field fixed.
2020-04-14 22:36:25 -04:00
James Shubin
8f60f42be3 engine: resources: Add http:server and http:file resources
This adds a new http server resource, as well as a http file resource
that is used to specify files to serve in that server. This allows you
to have an http server that is entirely served from memory without
needing files on disk.

It does this by using the autogrouping magic that is already available
in the engine.

The http resource is not meant to be a full-featured http server
replacement, and it might still be useful to use the venerable webserver
of your choice, however for integrated, pure-golang bootstrapping
environments, this might prove to be very useful.

It can be combined with the tftp and dhcp resources to build PXE setups
with mgmt!

This resource can be extended further to support an http:flag endpoint,
an http:ui endpoint, automatic edges, and more!
2020-04-11 03:23:04 -04:00
James Shubin
583344138a engine: resources: Add dhcp:server and dhcp:host resources
This adds a new dhcp server resource, as well as a dhcp host resource
used to specify the static mapping between mac address and ip address.
It also adds a simple, pure-golang example dhcp client which might make
testing easier.

The dhcp resource is not meant to be a full-featured dhcp server
replacement, and it might still be useful to use the venerable dhcpd,
however for integrated, pure-golang bootstrapping environments, this
might prove to be very useful.

It can be combined with the tftp resource to build PXE setups with mgmt.

This resource can be extended further to support a dhcp:range directive,
automatic edges, and more!
2020-04-11 02:45:18 -04:00
James Shubin
016d021d5a engine: resources: tftp: Improve validation and error messages
Just some small cleanups for our tftp resource. We also rename the
struct to make it consistent, since golint complains about similar
protocols when it is not all capitalized.
2020-04-11 02:45:18 -04:00
James Shubin
115dc4bfa4 engine: resources: net: Add an IP forward field
This adds an IP forward field (boolean) and improves the documentation.
2020-04-11 02:02:35 -04:00
James Shubin
5b83febb23 etcd: Log an error that wasn't getting seen
We would error when the address could not bind, but when it failed for
non-standard reasons, we didn't see the specific reason why.
2020-04-11 02:02:35 -04:00
James Shubin
c9d5c50402 test: Improve our test for long lines
This now allows long URLs to start partway through a sentence instead
of requiring them to start at the beginning of a new line.
2020-04-03 01:15:57 -04:00
James Shubin
fc839d2983 vendor: Fix broken upstream hashicorp module
This module broke compat with old versions of golang. Vendor it until
we're at a minimum of golang 1.13.x everywhere.
2020-04-02 22:27:06 -04:00
James Shubin
3bce96bbd5 docs: Add new talk from cfgmgmtcamp 2020
Those first ten seconds of the video are awesome!
2020-02-29 19:40:26 -05:00
Patrick Meyer
6279be073b lang: Prevent struct types with duplicate field names
The previous fix for #591 in 70eecd5 didn't address all issues
concerning duplicate struct field names. It still crashed for inputs
like `$d []struct{x int; x float}`. Note the different types but
duplicate names.
2020-02-29 19:11:51 -05:00
Patrick Meyer
ea37132ce4 lang: Fix wrong go-fuzz bin name
I missed renaming this after moving the fuzz.go from the lang package
to its own fuzz package.
2020-02-29 05:28:38 +01:00
James Shubin
70eecd5289 lang: Prevent struct types with duplicate fields
Struct types with duplicate fields are invalid types and weren't caught
by the parser. This fixes the issue and adds some associated tests. It
also checks and tests for duplicate struct value field names.

As a technical side-note, this doesn't change the lang/types/ functions
to remove panics-- the signatures are simplified to make their use
simple, and we intentionally panic if they're used incorrectly. In this
case, one was being used without having previously validated the input.

Thanks to Patrick Meyer for finding this issue via fuzzing!
2020-02-27 18:52:02 -05:00
James Shubin
380d03257f misc, lang: Small fixups
Change some minor style issues.
2020-02-27 17:34:03 -05:00
Patrick Meyer
006de6da14 lang: Add fuzz target to lang Makefile 2020-02-26 12:07:00 +01:00
Kenneth Hoste
10aa80e8f5 docs: Add link to recording of James' FOSDEM 2020 talks 2020-02-17 14:23:20 -05:00
Felix Frank
013439af6d test: Make prometheus tests safer and more verbose 2020-02-16 18:38:58 -05:00
Felix Frank
3408961155 test: Fix syntax in the loadavg test 2020-02-16 18:38:43 -05:00
Francois Rompre-Lanctot
f3b4a8d055 engine: resources: Add a test case for resource owner check
This adds FileOwnerExpect as a new Step which allows validating if the
owner was set properly on a resource.
2020-02-10 22:17:42 -05:00
Francois Rompre-Lanctot
104af7e86f engine: resources: Fix typo 2020-02-03 22:57:13 -05:00
James Shubin
be39fbeff6 examples: lang: Update examples 2020-02-01 16:48:23 -05:00
James Shubin
4109045fa4 github: Add new needinfo tag 2020-01-29 11:38:53 -05:00
James Shubin
90fd8023dd lang, engine: Add a facility for resources to export constants
Since we focus on safety, it would be nice to reduce the chance of any
runtime errors if we made a typo for a resource parameter. With this
patch, each resource can export constants into the global namespace so
that typos would cause a compile error.

Of course in the future if we had a more advanced type system, then we
could support precise types for each individual resource param, but in
an attempt to keep things simple, we'll leave that for another day. It
would add complexity too if we ever wanted to store a parameter
externally.

Lastly, we might consider adding "special case" parsing so that directly
specified fields would parse intelligently. For example, we could allow:

	file "/tmp/hello" {
		state => exists,	# magic sugar!
	}

This isn't supported for now, but if it works after all the other parser
changes have been made, it might be something to consider.
2020-01-29 11:16:04 -05:00
James Shubin
f67ad9c061 test: Add a check for too long or badly reflowed docstrings
This ensures that docstring comments are wrapped to 80 chars. ffrank
seemed to be making this mistake far too often, and it's a silly thing
to look for manually. As it turns out, I've made it too, as have many
others. Now we have a test that checks for most cases. There are still a
few stray cases that aren't checked automatically, but this can be
improved upon if someone is motivated to do so.

Before anyone complains about the 80 character limit: this only checks
docstring comments, not source code length or inline source code
comments. There's no excuse for having docstrings that are badly
reflowed or over 80 chars, particularly if you have an automated test.
2020-01-25 04:43:33 -05:00
James Shubin
525e2bafee misc: Let sigtee work on older golang versions 2020-01-25 04:43:33 -05:00
James Shubin
b65a9abf8e misc: Add two scripts to help debug things
This adds two new helper scripts that are good for debugging mgmt. Just
build and add `sigtee` to your ~/bin/ along with filter-golang-stack.py
and mgmt_debug.sh, and use the latter to call mgmt as you would normally.

For example, I might do:

$ mgmt_debug.sh ./mgmt run --tmp-prefix lang examples/lang/hello0.mcl

And if I kill it with ^\ then I'll get a filtered trace at the end in my
$PAGER (which is assumed to be `less`) and this should make my life
easier.

As a cool bonus, this means we use bash, python, and golang all
together!
2020-01-13 01:07:00 -05:00
James Shubin
fec94aa53a engine, lang: Fix simple test failures
Two bugs sneaked in while pushing old stuff.
2020-01-12 19:35:11 -05:00
James Shubin
3d4b345728 examples: lang: Improve reverse example
It's cool to show just the mode changes.
2020-01-12 17:42:20 -05:00
James Shubin
579975f08d engine: graph: Don't error when state file is missing
For some reason we get errors when we try to remove a non-existent state
file. There's a slight possibility that it could be a bug we're working
around, but it's not clear that this is the case, and I think it's
possible that a state file could have gotten nuked by the user somehow,
although this was occurring "naturally" when running reverse1.mcl so
let's keep that working for now.
2020-01-12 16:41:09 -05:00
James Shubin
3707b39fef engine: graph: Improve comments
Clarify that we're referring to cycles in the graph, since it needs to
be a DAG.
2020-01-12 16:39:32 -05:00
James Shubin
f07387225b engine: resources: Log more info about tftp errors
This helps for debugging this kind of issue:
https://github.com/pin/tftp/issues/41#issuecomment-570744056
2020-01-03 20:42:34 -05:00
James Shubin
2648fb1bb1 legal: Happy 2020 everyone...
Done with:

ack '2019+' -l | xargs sed -i -e 's/2019+/2020+/g'

Checked manually with:

git add -p

Hello to future James from 2021, and Happy Hacking!
2020-01-03 20:08:37 -05:00
James Shubin
d34715b4ba engine: resources: pippet: Cleanup and proper wrapping
Felix, please configure your editor to wrap at 80 chars and/or help us
write a test for this please =D
2020-01-03 01:20:00 -05:00
Felix Frank
63af50bf98 engine: resources: pippet: Initial implementation for new resource type
The pippet resource implements faster integration of Puppet resources
in mgmt at runtime, by piping synchronization commands to a Puppet
process that keeps running alongside mgmt. This avoids the huge overhead
of launching a Puppet process for each operation on a resource
that is delegated to Puppet.
2020-01-03 01:19:37 -05:00
James Shubin
456550c1d4 engine: resources: docker: Make a few fixups
Here are a few fixups to the docker resources. All miscellaneous stuff,
nothing major.
2020-01-03 00:53:20 -05:00
Jonathan Gold
8174b88ec3 engine: resources: docker: Add sensible defaults 2020-01-03 00:30:01 -05:00
Jonathan Gold
3233973748 engine: resources: docker: Add AutoEdges between container and image 2020-01-03 00:30:01 -05:00
Jonathan Gold
bdfb1cf33e engine: resources: docker: Ensure image is specified for containers 2020-01-03 00:30:01 -05:00
Jonathan Gold
1c5fcd59e7 engine: resources: docker: Add a docker image resource 2020-01-03 00:30:01 -05:00
James Shubin
5cc960527e lang: funcs: Differentiate between empty and nil values
It would be good to differentiate between receiving an empty value or
not having received a value yet. This is similar to the previous commit.
2020-01-03 00:28:54 -05:00
James Shubin
762c53fb8d lang: funcs: Send empty values when appropriate
I seem to have forgotten to differentiate between the empty string and
no data because the zero value for the stored result was the empty
string. This turns it into a pointer so that we don't block the function
engine if a template or one of the other patched functions sends an
empty string as the first value.
2019-12-30 12:35:08 -05:00
James Shubin
ff20e67d07 lang: funcs: Improve template function
The template function should be callable with just one arg (no
input vars), and we should correctly use the known arg name and not the
string "a" or "b".
2019-12-30 11:21:43 -05:00
Francois Rompre-Lanctot
c0cea013d1 pgraph: Add test for SetValue function 2019-12-18 20:00:47 -05:00
James Shubin
5526bbba64 engine: resources: Add a tftp server and tftp file resource
This adds a tftp server and tftp file resource to help you run a small
pure golang tftp server embedded inside the mgmt resource model.
2019-12-17 03:41:45 -05:00
James Shubin
f0aa96ea8c etcd: Remove the capnslog stuff and switch to zap
Unfortunately, this doesn't give us a way to pass in our own logger
function, and afaict by reading the source, it's not possible because
the necessary methods are private. In any case, this is left as a future
exercise.
2019-12-17 03:40:44 -05:00
James Shubin
e73007c398 etcd: Bump to new 3.4.x version
This moves to the newest etcd release, and also updates the imports to
the new go.etcd.io path. I think this is a bit of a pain, but might as
well get it done.
2019-12-17 02:45:38 -05:00
Jonathan Gold
fdc459ec5b vagrant: Update Vagrantfile
This patch updates the Vagrantfile to Fedora 31, and updates the
install process to match the quick start guide.
2019-11-17 20:25:06 -05:00
Felix Frank
bdb523ece1 lang: funcs: funcgen: Suppress informational messages
Send non-error log messages to stdout rather than stderr. Any messages
outside the main function are expected to be purely informational. By
sending to stdout rather than stderr, they can be discarded during the
build.

Fixes #568
2019-11-17 20:21:52 -05:00
James Shubin
164a9479ad test: Add a new test to the commit message checking
Fix the missing "s" bug.
2019-11-17 20:21:52 -05:00
James Shubin
e18adc781f git: Add a simple gitignore helper
Run `touch 1 && chmod ugo-w 1` to prevent silly scripting bugs from
running without being caught.
2019-11-12 17:33:58 -05:00
Felix Frank
33d89c2739 examples: lib: Update code for urfave/cli v2 2019-11-12 21:07:38 +01:00
Felix Frank
7cc9ab9083 lib: Update for urfave/cli v2 2019-11-12 21:07:38 +01:00
Donald Bakong
4b4b7dc169 engine: resources: Adding tests to file mode 2019-11-08 10:02:30 -05:00
Yohan Belval
71ad5c5f05 examples: lang: Added os check for pkg example fix 2019-11-06 10:20:32 -05:00
Yohan Belval
39368bb5cb examples: lang: Fixed pkg example with cowsay 2019-11-06 10:03:20 -05:00
James Shubin
7a587ee8d1 misc: mkosi: Switch to copy-git-more
I added a new feature in mkosi which got merged in:

31801e89e77188e697ed937ca6b8668fde4c4a4d

This allows us to pull in all of the git repository so that we can see
the version number that comes from git.
2019-11-02 07:34:04 -04:00
James Shubin
77346527f3 docs: Update style guide with more review items
Hopefully this helps new contributors understand review changes and
avoid making them too!
2019-11-01 22:01:38 -04:00
James Shubin
1eba5833d5 engine: resources: Consistency changes and cleanup for file mode
This makes a few consistency changes and cleanups to the file mode
feature so that it's more in style with the rest of the code base.
2019-11-01 22:01:38 -04:00
Derek Buckley
83a747794e engine: resources: Adds symbolic mode to file resource
Adds a symbolic parsing function to the util package for parsing in the
file resource.
2019-11-01 21:57:10 -04:00
Julien Pivotto
3e16d1da46 engine: resources: Add new consul resource
This commit adds a new consul:kv resource which allows us to set and
watch keys inside a consul kv datastore.

This was started by roidelapluie, and was finished during pair
programming with purpleidea.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
Signed-off-by: James Shubin <james@shubin.ca>
2019-11-01 21:38:08 -04:00
Julien Pivotto
ae1860e859 lang: funcs: Add datetime.format function
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2019-11-01 19:26:17 -04:00
Julien Pivotto
2ebc8fdf2a lang: funcs: Add datetime.hour function
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2019-11-01 20:45:45 +01:00
Julien Pivotto
be4023be66 docs: Update resource-guide.md 2019-11-01 10:03:36 -04:00
Julien Pivotto
7f4ad76298 lang: funcs: Fix autogenerated comments
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2019-11-01 11:18:48 +01:00
Julien Pivotto
0cbfaf98f3 lang: funcs: Support for []byte
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2019-10-30 22:14:08 +01:00
James Shubin
631124e658 lang: funcs: Add nitpicks from funcgen
Discussed nitpicks with roidelapluie to clean up slightly for
consistency.
2019-10-30 08:51:11 -04:00
Julien Pivotto
1685ee1ecb lang: funcs: Autogenerated a lot of new functions
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2019-10-30 08:42:53 -04:00
James Shubin
9b4d11f220 lang: funcs: Move convert into correct folder
This got merged in the wrong folder by accident.
2019-10-30 06:21:34 -04:00
James Shubin
46a71296a9 engine: resources: Add a Purge option to the file resource
This adds a "purge" parameter to the file resource. To do this, we have
to add the API hooks so the file resource can query other resources in
the graph to know if they are present, and as a result whether they
should be excluded from the purge or not.

This is useful for when we have a managed directory with some managed
contents. If a managed file is removed from the directory, then it will
be removed by the file (directory) resource if it has Purge set.
Alternatively, you can use the Reverse meta param, which is sometimes
preferable for this use case and sometimes not. This will be discussed
elsewhere.

This also adds a bunch of tests for this feature.

This also makes a few somewhat related cleanups in the file code.
2019-10-29 11:54:08 -04:00
James Shubin
1285588b62 engine: resources: Fix file absent helper
We should check this nil case too.
2019-10-29 07:15:43 -04:00
James Shubin
d96392f65e engine: resources: Improve error message in test
Nothing major, just improve testing here.
2019-10-29 07:15:43 -04:00
James Shubin
d1c5a736ae engine: resources: Allow nil helper functions in tests
This reduces some of the boilerplate when writing resource tests.
2019-10-29 07:15:43 -04:00
James Shubin
6b1e038c5c engine: resources: Add file exists helper
This will allow us to test even more things!
2019-10-29 07:15:43 -04:00
James Shubin
eaab1aae28 engine: graph, resources: Add filtered graph function
This lets a resource query the resource graph in a controlled way.
2019-10-29 07:15:43 -04:00
James Shubin
31030343a2 engine: Add graph queryable trait
This is a trait that specifies that your resource can be queried by
others. You can either enable it plainly to add wholesale access by
everyone, or you can implement your own Allow method to limit what is
permitted.
2019-10-29 07:15:43 -04:00
James Shubin
325ca03a13 engine: graph: Pass through the graph struct
We want to use it in the resources.
2019-10-29 07:15:43 -04:00
James Shubin
dea8e63df2 util: Add more tests to HasPathPrefix
We need to ensure this behaviour in a future commit, so might as well
double check that this works as expected!
2019-10-29 07:15:43 -04:00
James Shubin
58421fd31a engine: resources: Add fragments support to file resource
This adds a "fragments" mode to the file resource. In addition to
"content" and "source", you can now alternatively build a file from the
file fragments of other files.
2019-10-29 07:15:43 -04:00
James Shubin
b961c96862 lang: Include automatic edges in our test case
When running this test, we didn't attempt to build any automatic edges.
Since we'd like to test this here as well, let's add it.
2019-10-24 04:30:49 -04:00
Donald Bakong
2d23c1b0f3 lang: funcs: Add to_int and to_float functions 2019-10-22 08:34:29 -04:00
James Shubin
06952c224b engine: resources: nspawn: Use populate variable
We referred to the wrong variable. Not a major bug, but would produce a
useless or confusing error message otherwise.
2019-10-21 07:38:42 -04:00
Donald Bakong
2ea492c965 docs: Fix error on language-guide.md 2019-10-19 18:44:56 -04:00
Donald Bakong
dbf84f6879 docs: Fix typo on language-guide.md 2019-10-19 18:44:11 -04:00
James Shubin
0fa3d6c462 github: Update funding information file
Just got added to the GH sponsors thing and there are more fields
available in this file now. Let's hope this works!
2019-10-09 19:56:19 -04:00
James Shubin
d57f7aa03f misc: Specific mkosi fixes for centos-7
Seems we need golang from epel, to mask out the old git version, and to
workaround mkosi bugs.
2019-10-05 01:49:42 -04:00
Jimmy Tang
d64f9f5401 misc: Add CentOS 7 rpm build 2019-10-04 23:47:12 -04:00
James Shubin
a3029afc41 art: Rename art file to clarify it's about Winnie the Pooh
Pretty cool that we have our first meme =D Have a look in the FAQ to see
the real reason we watch our resources. (TL;DR: It allows us to make
graph changes very fast.)
2019-10-04 08:01:23 -04:00
James Shubin
6a7d904fae misc: Improve tagging script
This way we can push the tag *after* all the builds succeed. If
something goes wrong, we can always delete our local tag and try again.
2019-10-04 06:49:04 -04:00
James Shubin
d4043d3f86 misc, make: Add full file path into fpm script
This is needed for our fancier, unique file names.
2019-10-04 06:43:17 -04:00
James Shubin
b4902a4f58 make: Add a unique token to the package file name
This unique token is necessary so that storing the files in the same dir
(basically a GitHub release) or in the SHA1SUMS file doesn't cause a
conflict.
2019-10-04 06:06:44 -04:00
James Shubin
ffe402f201 misc: Add fedora-30 mkosi+fpm build environment
Good example of how to add a new distro or version.
2019-10-04 06:02:08 -04:00
James Shubin
09cc7da282 misc: Add proper archlinux prefix in build script 2019-10-04 06:01:23 -04:00
James Shubin
2d2dad41f4 todo: Update the TODO file so that it has a sane purpose
We stored some TODO items in GitHub, and some here. We can keep using
this file, but let's reserve it for the items that haven't changed in a
while.
2019-10-04 04:11:26 -04:00
bjanssens
5f7c0a86dd art: Add the requested art
Signed-off-by: bjanssens <bjanssens@inuits.eu>
2019-10-04 09:23:20 +02:00
Donald Bakong
fc1c631c98 engine: resources: Change Res API from Compare to Cmp
This is done by refactoring the current method to return an error
message instead of a boolean value. This also fixes a typo in the user
res.
2019-09-27 18:10:58 -04:00
James Shubin
89bdafacb8 misc: Refactor Makefile slightly
We could make this even better in the future with lists.
2019-09-23 06:57:26 -04:00
James Shubin
73b6b3f129 misc: Remove old image building cruft 2019-09-23 06:50:26 -04:00
James Shubin
b2a495f593 misc: Add mkosi target for ubuntu bionic
The name of these is pretty weird.
2019-09-23 06:50:00 -04:00
James Shubin
65ee904377 misc: Work around old golang in ubuntu
Hopefully this helps.
2019-09-23 06:48:55 -04:00
James Shubin
13f59230b5 misc: Split Makefile PHONY target into multiple lines
AIUI this is valid make. Please correct me if I'm wrong.
2019-09-23 06:48:55 -04:00
James Shubin
36d2a0de1e misc: Make mkosi building suitable for different distro versions
We'd like to be able to build both Fedora N and N-1 at the same time if
possible. This makes it more generally applicable for this scenario, as
well as for other distros.
2019-09-23 06:48:55 -04:00
James Shubin
a4db9fc8e5 misc: Add mkosi based package building with fpm
Building distro packages is great; however, if they aren't built in the
correct environment with the associated dependencies, then they won't
work properly on those distros.

This patch adds an `mkosi` based image building environment that builds
the packages in their respective distros, and then copies them out into
our releases directory.

You'll now want to `make tag && make mkosi && make release` to get a new
release out. We use a small hack to trick the `make release` portion to
not re-build the distro packages if they're already present in the
releases/ directory for that version.

This commit depends on a very recent version of mkosi (it was tested
with git master) and also depends on two currently unmerged patches:
https://github.com/systemd/mkosi/pull/363 and
https://github.com/systemd/mkosi/pull/365
2019-09-20 12:32:41 -04:00
James Shubin
9dae5ef83b engine: resources: Improve the file res and add strict state
This might be slightly controversial, in that you must specify the state
if a file would need to be created to perform the action. We no longer
implicitly assume that just specifying content is enough. As it turns
out, I believe this is safer and more correct. The code to implement
this turns out to be much more logical and simplified, and it also
removes an ambiguous corner case from the reversed resource code.

Some discussion in: https://github.com/purpleidea/mgmt/issues/540

This patch also does a bit of related cleanup.
2019-09-14 16:07:53 -04:00
James Shubin
e8842a740c lang: Remove duplicate log message
Looks like we had two copies of the same message by accident.
2019-09-11 04:26:15 -04:00
James Shubin
0d3807ad09 lang, test: Fix copy paste error with log message
This changes it to the correct error message.
2019-09-11 04:26:15 -04:00
James Shubin
5c27a249b7 engine: resources: Add reversible API and file resource
This adds the first reversible resource (file) and the necessary engine
API hooks to make it all work. This allows a special "reversed" resource
to be added to the subsequent graph in the stream when an earlier
version "disappears". This disappearance can happen if it was previously
in an if statement that then becomes false.

It might be wise to combine the use of this meta parameter with the use
of the `realize` meta parameter to ensure that your reversed resource
actually runs at least once, if there's a chance that it might be gone
for a while.

This patch also adds a new test harness for testing resources. It
doesn't test the "live" aspect of resources, as it doesn't run Watch,
but it was designed to ensure CheckApply works as intended, and it runs
very quickly with a simplified timeline of happenings.
2019-09-11 03:40:22 -04:00
James Shubin
7e41860b28 docs: Add missing docs on the rewatch and realize meta params
Sometimes it's hard to keep this in sync.
2019-09-11 03:40:22 -04:00
James Shubin
43ff92bbe7 engine: resources: Clean up test log message 2019-09-11 03:40:22 -04:00
James Shubin
28adc7e563 engine: resource: Refactor helper functions
Maybe we can use them in other tests too.
2019-09-11 03:40:22 -04:00
James Shubin
9788411995 engine: resources: Add another validation check
This simple check should prevent some silly mistakes and make the logic
easier for other parts of the code that won't have to worry about this
pattern.
2019-09-11 03:40:22 -04:00
James Shubin
0c9e8cc50e engine: resources: Change the default file state
The default file state should be undefined. This is important because if
a reverse scenario that doesn't specify the state gets given this
default, it will be as if it was specified explicitly, which wouldn't
necessarily be what we want. Instead, an undefined state should
implicitly cause a file to get created if there's a reason to do so,
such as if content or another attribute is specified.

Hopefully this change doesn't introduce any bugs in the CheckApply code;
if it does, then it was due to a lack of implicit file creation.
2019-09-11 03:16:57 -04:00
James Shubin
34d572c523 engine: Improve the way we make a unique res path token
This is needed in the state directory.
2019-09-11 03:16:57 -04:00
James Shubin
011b496b3f engine: resources: Ensure the Kind and Name methods work
Triple check these work after decoding, by adding a test.
2019-09-11 03:16:57 -04:00
James Shubin
12b906eac6 engine: Refactor state dir into a separate function
This lets us re-use it, and know the path is fixed.
2019-09-11 03:16:57 -04:00
James Shubin
20937d05c3 engine: resources: file: Add undefined file state and validate it
We should consider using *string instead of the empty string, but let's
keep the diff smaller for now.
2019-09-11 03:16:57 -04:00
James Shubin
4943d37ccf engine: resources: file: Use constants for state values
More robustness is yay!
2019-09-11 03:16:57 -04:00
James Shubin
3a8fd215de engine: resources: file: Add Copy method to file res
This lets us implement the CopyableRes interface.
2019-09-11 03:16:57 -04:00
James Shubin
87572e8922 test: Catch capitalized error messages in tests 2019-09-06 03:28:49 -04:00
James Shubin
f1eedc7a01 lang: Clarify error message about missing field
User probably just mistyped a field name. Make that clear.
2019-09-06 03:28:49 -04:00
Donald Bakong
b79e48dd77 docs: Fix typo on quick-start-guide.md 2019-08-25 22:53:43 -04:00
James Shubin
18872194af misc: Warn users with weird computers
A user seemed to experience a weird golang issue when they had deps from
both package managers installed. I won't block or fail their install,
but we can print a warning message so that someone sees it in their
logs.
2019-08-23 22:10:34 -04:00
James Shubin
bafd7ba282 misc: Use apt install instead of apt-get where possible
The future is now!
2019-08-23 22:10:12 -04:00
Donald Bakong
b186481181 pgraph: Add a test for FindEdge() function 2019-08-08 00:43:26 -04:00
James Shubin
09ca6d11ad lang: funcs: Module name should be public
For consistency with the rest of the core functions.
2019-07-29 11:17:43 -04:00
James Shubin
e68e4e786d docs: Add newly recorded talks and blog post 2019-07-26 06:52:01 -04:00
James Shubin
ee638254c3 lang: Remove the specialized info structs
Since this was an early form of the modern data struct, remove those and
pass in the correct data. This is also important in case we have
something more complex inside our string interpolation!
2019-07-26 04:20:04 -04:00
James Shubin
1e678905c4 util: Fix typo 2019-07-26 04:20:04 -04:00
James Shubin
10804c4b25 lang: Improve the gapi copying
We hit a weird bug where dirs would not get copied properly. I thought
the solution might be to add the missing dirs so they'd get a proper
mkdir, but in the end that didn't work well, so we just use `mkdirall`
and that seems to work. Let's leave it like this for now. Some of the
previous work for that is in the previous commit.
2019-07-26 04:20:04 -04:00
James Shubin
4bf9b4d41b util: Add some path helper functions
In the end, I'm not sure how useful these will be, but we'll keep them
in for now.
2019-07-26 04:20:04 -04:00
James Shubin
1161872324 etcd: fs: Errors should start with lower case 2019-07-26 04:20:04 -04:00
James Shubin
98cb570896 util: Add new mkdirall variants for the copy functions
This adds `mkdirall` variants that recursively `mkdir` and don't error
as easily. This works around some bugs we were having with file copying.
2019-07-26 04:20:04 -04:00
James Shubin
ed4ee3b58e lang: funcs: Add deploy package with readfile related functions
This adds a readfile function to actually access files from our deploy.
A fun side effect is that we can even access our own code! In general,
it's a good reminder that you should only run trusted code on your own
infrastructure. This also includes a fancy new test case.
2019-07-26 03:38:26 -04:00
James Shubin
066048f4de lang: Pass through the Fs and the FsURI
This should give us options as to how a function should interact with an
FS. I feel like it's cleaner to go through the World API, and passing in
the FsURI lets us do that, but I passed in the Fs at the same time in
case it's useful for some reason. I think using it is a boundary
violation, but it's just a hunch. Does anything break when we move from
one deploy to the next?
2019-07-26 03:07:08 -04:00
James Shubin
4b6b91c08b lang: Make sure to call Init for functions that arrive via import
We weren't calling Init on some functions which should have had this
done. I'm not sure whether this is the right place, or if it should be
elsewhere as part of the scope building process. Good enough for now.
2019-07-22 06:49:02 -04:00
James Shubin
2980523a5b lang: Add a new function interface to accept data
Sometimes certain internal functions might want to get some data from
the AST or from something relating to the state of the language. This
adds a method to pass in that data. For now it's a very simple method,
but we could generalize it in the future if it becomes more useful.
2019-07-22 06:46:04 -04:00
James Shubin
f2f9c043bf lang, gapi: Work around a copy bug in the deploy
It seems when we had a files/ dir that we added to our deploy, it would
get copied into /files/files/whatever instead of /files/whatever where
it should be. Hopefully this works around the issue forever.
2019-07-22 06:40:47 -04:00
James Shubin
5d59cfd2c9 util: Ensure the afero copy function is working as intended
The destination should be a dir sometimes.
2019-07-22 06:38:02 -04:00
James Shubin
f94474e24f lang: Add the world implementation to our test suite
This allows our tests to actually run the World API.
2019-07-22 06:36:37 -04:00
James Shubin
a63fc6d9ba util: Add a remove path suffix util function
This pairs with a similar one we already had.
2019-07-22 06:35:13 -04:00
James Shubin
076adeef80 lang: funcs: Fix a copypasta error with the not equals operator
Woops, sorry!
2019-07-22 06:08:37 -04:00
James Shubin
a0e756317c lang: Add tests for slow unification
These used to be cases where our algorithm was unusably slow.

Thanks to foxxx0 for the report!
2019-07-21 03:15:06 -04:00
James Shubin
252cb5f2f3 lang: Detect windows style CR and return a better error
If you get a sneaky \r in your code, the error just looks like
whitespace, so this way we can warn you explicitly.
2019-07-21 03:10:21 -04:00
James Shubin
64288b4914 lang, test: Inline some overly indented tests
Sometimes you're busy hacking and it's nice for future you to fix up
your code!
2019-07-21 01:19:15 -04:00
James Shubin
9ca6c6a315 test: Split up long tests into multiple sub tests again
I think we need this for non --race tests too.
2019-07-21 00:55:36 -04:00
James Shubin
3651ab5c0c lang: Add more tests for function 2019-07-20 22:27:21 -04:00
James Shubin
b3f15e1ddc lang: Add more tests for class and include 2019-07-20 01:33:42 -04:00
James Shubin
da2a5f72bd lib: Update dep for uuid
Apparently the package has moved.
2019-07-17 04:07:24 -04:00
James Shubin
591e6b68e0 test: Split up long tests into multiple sub tests
Hopefully this avoids the timeouts running the lang package.
2019-07-17 02:45:04 -04:00
James Shubin
0119abdcdd travis: Try to work around CI slowdowns
I think they throttle strangely on their garbage machines.
2019-07-17 01:21:29 -04:00
James Shubin
e57ca15330 lang: Avoid running graphviz in tests by default
This will help travis actually run the tests faster.
2019-07-17 01:21:29 -04:00
James Shubin
f53376cea1 lang: Add function values and lambdas
This adds a giant missing piece of the language: proper function values!
It is lovely to now understand why early programming language designers
didn't implement these, but a joy to now reap the benefits of them. In
adding these, many other changes had to be made to get them to "fit"
correctly. This improved the code and fixed a number of bugs.
Unfortunately this touched many areas of the code, and since I was
learning how to do all of this for the first time, I've squashed most of
my work into a single commit. Some more information:

* This adds over 70 new tests to verify the new functionality.

* Functions, global variables, and classes can all be implemented
natively in mcl and built into core packages.

* A new compiler step called "Ordering" was added. It is called by the
SetScope step, and determines statement ordering and shadowing
precedence formally. It helped remove at least one bug and provided the
additional analysis required to properly capture variables when
implementing function generators and closures.

* The type unification code was improved to handle the new cases.

* Light copying of Nodes allowed our function graphs to be more optimal
and share common vertices and edges. For example, if two different
closures capture a variable $x, they'll both use the same copy when
running the function, since the compiler can prove that they're identical.

* Some areas still need improvements, but this is ready for mainstream
testing and use!
2019-07-17 00:27:09 -04:00
James Shubin
4f1c463bdd misc: Add graphviz deps for travis
Some tests run graphviz.
2019-07-17 00:27:09 -04:00
James Shubin
6643a3d937 lang: Add function types to the yacc type parser
Hopefully our type unification algorithm will be sufficiently good that
you never need to actually specify the function type, but it's useful
for testing and completeness.
2019-07-12 16:46:08 -04:00
James Shubin
da8cb40242 lang: If the test fails earlier than expected, exit early
If a test failed in stage 2 (fail2) instead of an expected fail in stage
3 (fail3) then it would continue running, which was an undefined
behaviour in our API. IOW we should not run Unify if SetScope failed.
This patch adds these additional checks to ensure our tests are more
robust.
2019-07-12 16:46:08 -04:00
James Shubin
4c6d304e60 lang: types: Improve ComplexCmp function
This improves the ComplexCmp function so that it can compare partial
types to variant types. As a result of this improvement, it actually
ended up simplifying the code significantly. This also added a test
suite for this function. This fix was important for tricky type
unification problems.
2019-07-12 16:46:08 -04:00
James Shubin
99d3ef42e9 lang: Name the expr call graph differently
It was wrongly named func instead of call, although this doesn't
actually matter in terms of code execution.
2019-07-12 16:46:08 -04:00
James Shubin
e2289dc2a0 lang: funcs: Add better logging to the function engine 2019-07-12 16:46:08 -04:00
James Shubin
9b4f50cde9 lang: Add the NamedArgs interface
This lets you specify which args are being used in the general function
API, which can make code readability and debugability slightly better.

In an ideal world, we wouldn't need this at all, but I can't figure out
how to avoid it at the moment, so we'll include it for now, as it's
always easy to delete if we find a more elegant solution.
2019-07-12 16:46:08 -04:00
James Shubin
fe64bd9dbb lang: Move type duplicates checker into a separate package 2019-07-12 16:46:08 -04:00
James Shubin
0991264c8c lang: funcs: Use the correct arg names when running a pure func
We were using the default argnames when the actual list of names was
available. Use these instead, and validate that we have the correct
number of them.
2019-07-12 16:46:08 -04:00
James Shubin
3b608ad544 lang: funcs: Verify that simple polyfunc list is not empty
This could prevent some incorrect usage of the simplepoly API.
2019-07-12 16:46:08 -04:00
James Shubin
3f1a379908 lang: Use the short flag to list subtests
It's not always obvious which test to run, so by adding the -short
flag, we'll get a listing that we can use! You'll need
to add -v as well so that the output actually displays.
2019-07-12 16:46:08 -04:00
Adam Sigal
61a67dae29 pgraph: Add a test for FindEdge() and HasVertex() functions
Added a new test called TestFindEdge1(). In it, a 7-node
graph is created on which to test the aforementioned functions.
2019-07-11 22:05:13 -04:00
John Hooks
609aefd808 lib: Support for systemd STATE_DIRECTORY or XDG cache dir
If running mgmt from a systemd unit, this enables the
STATE_DIRECTORY environment variable to be used for creating the
cache directory defined by StateDirectory= in the unit file. It
also enables the XDG_CACHE_HOME environment variable to be used.
If the user isn't root and the environment variable isn't set,
it will use the default XDG_CACHE_HOME directory.
2019-07-08 21:19:33 -04:00
Felix Frank
191a2495a5 engine: resources: mount: Fix the dbus call for reloading systemd
The Reload method cannot just be invoked on the administrative DBus
object. Just like the method for reloading specific units, it needs
to be invoked on the proper DBus service, addressing the proper object
and using the right interface.

Added an additional constant for the systemd DBus service. Even though
it shares the same value as the interface base name, this is
happenstance and it's technically incorrect to open a connection to an
interface name. The connection needs a service name.

Fixes #509
2019-06-04 23:44:57 +02:00
James Shubin
a235b760dc docs: Fix typo and grammar issue 2019-05-20 09:53:19 -04:00
James Shubin
e4eb3c23a2 lang: funcs: core: Allow nested system imports
We were passing the wrong module name for system imports. This is now
fixed, includes an example, and some tests!
2019-05-20 09:23:28 -04:00
James Shubin
12582e963d lang: funcs: core: Make module names public
This is needed for when we have nested modules.
2019-05-20 08:45:43 -04:00
James Shubin
d5074871c7 examples: lang: Add a unicode example 2019-05-15 04:13:20 -04:00
James Shubin
e0d024ac95 examples: lang: Update autoedges example
The /dev/null thing isn't needed anymore. Also make it easier to change
the noop value.
2019-05-15 03:51:01 -04:00
James Shubin
7a756cacb9 engine: graph: Add a mutex around waits map access
If you run some extremely absurd code, it turns out you can cause a
race. This was found by roiedelapluie experimenting! In this case, it
would panic with: fatal error: concurrent map read and map write. This
patch adds a mutex to avoid this rare race.
2019-05-14 10:53:36 -04:00
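
A minimal Go sketch of the general fix described in the commit above
(an illustration, not the actual engine code): guard a shared map with a
sync.Mutex so concurrent readers and writers can't trigger the runtime's
"concurrent map read and map write" fatal error.

    package main

    import (
        "fmt"
        "sync"
    )

    // waits maps a vertex name to a pending wait count. The mutex guards
    // all access, since Go maps are not safe for concurrent use.
    type waits struct {
        mu sync.Mutex
        m  map[string]int
    }

    func (w *waits) inc(name string) {
        w.mu.Lock()
        defer w.mu.Unlock()
        w.m[name]++
    }

    func (w *waits) get(name string) int {
        w.mu.Lock()
        defer w.mu.Unlock()
        return w.m[name]
    }

    func main() {
        w := &waits{m: make(map[string]int)}
        var wg sync.WaitGroup
        for i := 0; i < 10; i++ { // concurrent writers and readers
            wg.Add(1)
            go func() {
                defer wg.Done()
                w.inc("file[/tmp/x]")
                _ = w.get("file[/tmp/x]")
            }()
        }
        wg.Wait()
        fmt.Println(w.get("file[/tmp/x]")) // 10
    }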
Jan Martens
3c1da423fa engine: resources: nspawn: Trim possible systemd version suffix 2019-05-14 16:00:26 +02:00
James Shubin
38dfaa1caa docs: Update FAQ to mention go mod 2019-05-14 06:18:53 -04:00
James Shubin
a050cff50f docs: Add build issue to FAQ
Some new users might experience this if they setup their $GOPATH
incorrectly.
2019-05-13 07:10:36 -04:00
James Shubin
93c1b37aab lang: funcs: Add a mod function to the math package
This should make flip-flop functions easy to write.
2019-05-13 06:30:15 -04:00
James Shubin
01d4226c4a docs, readme: Improve new user experience
This hopefully improves some docs for new users, and makes releases more
easily available.
2019-05-06 07:56:38 -04:00
James Shubin
fc6032d3b7 lang: funcs: Add a weekday function to the datetime package
This returns the day of the week. It also includes a helpful example
demonstrating how this functionality can be fun!
2019-05-06 06:59:50 -04:00
James Shubin
43839d1090 all: Switch the --lang syntax to use argv instead
It was a bit awkward using `mgmt run lang --lang <input>` so instead, we
now drop the --lang, and read the positional argv for the input.

This also does the same for the --yaml frontend.
2019-05-05 11:10:47 -04:00
James Shubin
b3632584c3 etcd: Move to etcd v3.3.13
This makes a small jump to the new etcd stable release. This isn't a
major difference, but it includes an important patch,
7814718c73149e2bbb9517cd02edb8332b621d86, for an issue which caused mgmt
users to scratch their heads, since it wasn't obvious that etcd was
doing a Fatal instead of a Panic or an error.
2019-05-05 09:32:04 -04:00
James Shubin
e9257580cd misc: Update to golang 1.11.x
Bump to the newer version.
2019-05-05 09:32:04 -04:00
James Shubin
e3cc6309ea lib: Fix gofmt regression in golang 1.11.x
Golang has many exceptions to its "compatibility promise", including the
gofmt output. The fact that they change it arbitrarily for things like
this is absurd. (Remove the patch and run `gofmt` to see for yourself.)

This change re-worked the comment, since including the `gofmt` suggested
line break makes absolutely no sense, and is not convenient.
2019-05-05 09:32:04 -04:00
James Shubin
17fd625f7f lang: types: Workaround stringer regression in golang 1.11
Here's a fix for another golang regression in 1.11.x which wasn't needed
before! More info in: https://github.com/golang/go/issues/31843
2019-05-05 09:32:04 -04:00
James Shubin
d1ecfd8657 test: Fix typos, these aren't cats
Miau!
2019-05-05 09:32:04 -04:00
James Shubin
4aa3cfad40 lang: Add var prefix to var expr to avoid ambiguity 2019-05-05 09:32:04 -04:00
James Shubin
3bcb697662 lang: funcs: structs: Make error message more precise
This should prevent ambiguity with other similar or identical error
messages.
2019-05-05 09:32:04 -04:00
James Shubin
88318b73e4 lang: Print the actual stream error on test failure
This is useful for debugging.
2019-05-05 09:32:03 -04:00
James Shubin
2f7e202f40 lang, lang: types: Add automatic stringer generation
It's more useful if we know the string representation of Kinds.
2019-05-05 09:32:03 -04:00
James Shubin
310239e707 lang: types: Remove unnecessary prefix in generated kinds
This will make displayed errors cleaner.
2019-05-05 09:30:13 -04:00
James Shubin
4de75373dd pgraph: Use pointers for unique vertex identifiers
This will build more accurate graphs, since we could have duplicated
vertex names for distinct vertices. This now builds the correct
topology, even if the labels are duplicated.
2019-04-29 16:08:36 -04:00
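
A rough Go sketch of the idea above, using hypothetical Vertex/Graph
types rather than the real pgraph API: keying adjacency on the vertex
pointer keeps two distinct vertices separate even when their labels
collide.

    package main

    import "fmt"

    // Vertex carries a human-readable label that need not be unique.
    type Vertex struct {
        Name string
    }

    // Graph keys its adjacency map on *Vertex, so identity is the
    // pointer, not the (possibly duplicated) name.
    type Graph struct {
        adj map[*Vertex][]*Vertex
    }

    func NewGraph() *Graph { return &Graph{adj: make(map[*Vertex][]*Vertex)} }

    func (g *Graph) AddEdge(from, to *Vertex) {
        g.adj[from] = append(g.adj[from], to)
    }

    func main() {
        a := &Vertex{Name: "x"}
        b := &Vertex{Name: "x"} // same label, distinct vertex
        c := &Vertex{Name: "y"}
        g := NewGraph()
        g.AddEdge(a, c)
        g.AddEdge(b, c)
        fmt.Println(len(g.adj)) // 2: a and b are not merged despite equal names
    }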
James Shubin
c0d329e6d8 pgraph: Quote graphviz strings properly
If strings include quotes, this previously didn't work.
2019-04-29 16:08:36 -04:00
Johan Bloemberg
8a0840d35b lang: funcs: Add uptime implementation for macOS 2019-04-29 16:07:21 -04:00
Ward Vandewege
f9bb9ef33e docs: Fix link to puppet guide 2019-04-29 06:11:13 -04:00
Christian Rebischke
acb2a5d2b0 lang: funcs: Add ArchLinux family detection
Signed-off-by: Christian Rebischke <chris@nullday.de>
2019-04-25 17:05:25 +02:00
James Shubin
63ef11c708 lang: Add a new type unification test
I wanted to make sure that the type unification algorithm restricts the
implementation of the class when included, when one of the polymorphic
types is specified with a fixed type. It seems this works! I had the
idea for this test while walking around aimlessly.
2019-04-24 03:46:21 -04:00
James Shubin
d70bbfb5d0 lang: unification: Improve type unification algorithm
The simple type unification algorithm suffered from some serious
performance and memory problems when used with certain code bases. This
adds some crucial optimizations that improve performance drastically.
2019-04-23 21:21:42 -04:00
James Shubin
97d60ac98d lang: Quote printed strings
This quotes printed strings that contain special characters such as
newline. This changes the output of some tests, but makes future tests
that include a raw \n more appropriate.
2019-04-23 21:03:02 -04:00
Jonathan Gold
8f1f5d33fd engine: resources: mount: Restart remote-fs target 2019-04-23 16:24:49 -04:00
Wouter Dullaert
d65c85c19f cli: Removed obsolete no-watch-config flag
Having it around creates the expectation that by default mgmt will put a watch
on the config.
2019-04-22 13:42:27 +02:00
James Shubin
22d893fc1e test: shell: Increase etcd timeouts for slow travis again
Increase this one too...
2019-04-21 20:11:38 -04:00
James Shubin
806d2f6a4a lang: Fix import scoping issue with classes
When include-ing a class, we propagated the scope of the include into
the class, instead of using the correct scope that existed when the
class was defined and propagating only the include arguments in.

This patch fixes the issue and adds a ton of tests as well. It also
propagates the scope into the include args, in case that is needed, and
adds a test for that as well.

Thanks to Nicolas Charles for the initial bug report.
2019-04-21 19:49:38 -04:00
James Shubin
fc3baa28d6 lang: funcs: Add regexp package and match function
This adds a simple regexp match function. This will be useful for
regexp based name classification, if you're into that sort of thing.
2019-04-16 21:42:32 -04:00
James Shubin
eba45e6207 lib, gapi: Display deploy ID to add some clarity
This should make it easier to understand exactly when a new deploy
starts.
2019-04-16 18:11:32 -04:00
James Shubin
272fd3edc3 test: shell: Increase etcd timeouts for slow travis
We need a real test environment that's not travis.
2019-04-16 18:08:48 -04:00
James Shubin
5ad8b33aa7 etcd: Make error more specific
This should clarify which member remove branch we're in if we error.
2019-04-16 18:08:39 -04:00
James Shubin
cacd14fcf8 util: Make test more resistant to races
This doesn't guarantee which print statement runs first, so the last
part of it can race. Adding a sleep makes this highly unlikely.
2019-04-11 21:43:48 -04:00
James Shubin
859e4749ae lib: Clean up logging
Since most of our logging goes through a single Logf command, we don't
need the file name information any more. Our hierarchical logging is
sufficient.

Eventually we will replace the top-level logger with a more visually
capable logging fixture.
2019-04-11 21:43:48 -04:00
James Shubin
a5842a41b2 etcd: Rewrite embed etcd implementation
This is a giant cleanup of the etcd code. The earlier version was
written when I was less experienced with golang.

This is still not perfect, and does contain some races, but at least
it's a decent base to start from. The automatic elastic clustering
should be considered an experimental feature. If you need a more
battle-tested cluster, then you should manage etcd manually and point
mgmt at your existing cluster.
2019-04-11 21:43:48 -04:00
James Shubin
fb275d9537 test: Skip net test in travis
Travis fails intermittently, and I have no idea what's wrong with their
infra or what's using this address, so skip it here.
2019-04-10 22:55:02 -04:00
James Shubin
88f7b7e786 docs: Add faq entry about locked binary
Doesn't happen very often, but add it in case someone is curious or uses
search engine foo to figure out the answer.
2019-04-05 17:50:25 -04:00
James Shubin
30402effa9 util: Add context utility functions
This adds a utility function to close a context via a closed signalling
channel, and also functions to wrap and unwrap a wait group into and out
of a context.
2019-04-02 15:30:34 -04:00
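
A minimal Go sketch of the two patterns described above, using only the
standard library and hypothetical helper names (not mgmt's actual util
API): cancel a context when a signalling channel closes, and turn a
WaitGroup into a context.

    package main

    import (
        "context"
        "fmt"
        "sync"
    )

    // CtxFromChan returns a context that is cancelled once ch is closed.
    func CtxFromChan(ch <-chan struct{}) (context.Context, context.CancelFunc) {
        ctx, cancel := context.WithCancel(context.Background())
        go func() {
            defer cancel()
            select {
            case <-ch:
            case <-ctx.Done():
            }
        }()
        return ctx, cancel
    }

    // CtxFromWaitGroup returns a context that is cancelled when wg finishes.
    func CtxFromWaitGroup(wg *sync.WaitGroup) (context.Context, context.CancelFunc) {
        ctx, cancel := context.WithCancel(context.Background())
        go func() {
            defer cancel()
            wg.Wait()
        }()
        return ctx, cancel
    }

    func main() {
        done := make(chan struct{})
        ctx, cancel := CtxFromChan(done)
        defer cancel()
        close(done) // signal shutdown
        <-ctx.Done()
        fmt.Println("context closed:", ctx.Err())
    }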
James Shubin
7d96623f06 util: Add safe easy ack that allows multiple ack's
Just another sync utility to make code more readable.
2019-04-02 07:07:36 -04:00
James Shubin
398706246e util: Add subscribed signal primitive
Add a little sync primitive to our utility library. This should
hopefully make some of the future code easier to deal with.
2019-04-02 07:07:36 -04:00
James Shubin
6628fc02f2 util: Add context wait signal to easy exit
Add an alternate way to wait for a signal. This just makes code look a
bit cleaner and less cluttered.
2019-04-02 07:07:36 -04:00
Michael Schubert
e2fa7f59a1 docs: Fix link 2019-04-02 10:33:41 +02:00
James Shubin
d5b7dc0acc github: Add funding information 2019-03-24 15:37:54 -04:00
James Shubin
e4d874cc69 engine: resources: Remove named return params
Named return params aren't a favourite feature of mine, and they're
rarely used when writing resources. They keep popping up because I once
used them and we've been copying and pasting code ever since. Get rid of
them all to help prevent the unnecessary spread.
2019-03-24 15:30:02 -04:00
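
For anyone unfamiliar with the Go feature being removed, here is a tiny
illustrative before/after; it is a made-up example, not taken from the
actual resource code.

    package main

    import "fmt"

    // Before: named return parameters allow a bare "return", which hides
    // which values are actually being returned.
    func checkApplyOld(apply bool) (checkOK bool, err error) {
        checkOK = true
        return // implicitly returns checkOK, err
    }

    // After: plain return values make every return site explicit.
    func checkApplyNew(apply bool) (bool, error) {
        return true, nil
    }

    func main() {
        ok, err := checkApplyOld(true)
        fmt.Println(ok, err)
        ok, err = checkApplyNew(true)
        fmt.Println(ok, err)
    }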
James Shubin
80a0abeead docs: Add FAQ entry about root requirements 2019-03-24 15:16:14 -04:00
James Shubin
0df2d46ca7 lib: Add static hello message 2019-03-24 15:11:01 -04:00
James Shubin
07f542b4d7 legal: Happy 2019 everyone...
Done with:

ack '2018+' -l | xargs sed -i -e 's/2018+/2019+/g'

Checked manually with:

git add -p

Hello to future James from 2020, and Happy Hacking!
2019-03-24 15:08:50 -04:00
Mitch Fossen
7db3e8556a lang, funcs: Remove deprecated syscall import 2019-03-24 14:46:58 -04:00
Felix Frank
dc03e67b81 docs: Slightly clarify parameter defaults 2019-03-24 14:44:51 -04:00
Adam Sigal
e587324b81 faq: Amended faq mailing list information
There was outdated information concerning the mailing list. I amended
this and added a link.
2019-03-19 14:39:13 -04:00
James Shubin
65a66492f4 docs: Move faq entry to more appropriate resource docs 2019-03-17 17:54:48 -04:00
James Shubin
17602d7065 docs: Add two faq entries 2019-03-17 17:54:48 -04:00
James Shubin
ae56261961 engine: util, resources: virt: Clean up virt resource
Do some cleanups which were long overdue.
2019-03-15 15:24:40 -04:00
James Shubin
c4f57608d0 test: Port yaml test to mcl 2019-03-15 13:01:50 -04:00
James Shubin
753d1104ef util: Port all multierr code to new errwrap package
This cleans things up and simplifies a lot of the code. Also it's easier
to just import one error package when needed.
2019-03-12 16:51:37 -04:00
James Shubin
880652f5d4 util: Port all code to new errwrap package
This should keep things more uniform.
2019-03-12 16:49:01 -04:00
James Shubin
54c81d6bb2 engine, pgp: Fixup incorrect error usage
Small fixups found by the next commit.
2019-03-12 16:49:01 -04:00
James Shubin
2bf43eae24 test: Improve test depth
Make sure we're catching everything, including new, deeper code.
2019-03-12 16:49:01 -04:00
James Shubin
58961d23bb test: Improve test depth
Make sure we're catching everything, including new, deeper code.
2019-03-12 15:50:30 -04:00
James Shubin
6044ade373 util: Add errwrap package
Simplify working with errors across our code base. Instead of constantly
importing the necessary error helpers, assemble them all into one
package and import and use that as needed.
2019-03-12 15:45:39 -04:00
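
A sketch, using only the standard library, of the kind of consolidation
described above; the function names here are illustrative and are not
mgmt's actual errwrap API.

    package main

    import (
        "errors"
        "fmt"
    )

    // Wrapf annotates err with context, or returns nil if err is nil.
    func Wrapf(err error, format string, args ...interface{}) error {
        if err == nil {
            return nil
        }
        return fmt.Errorf(format+": %w", append(args, err)...)
    }

    // Append combines two errors into one, tolerating nils.
    func Append(reterr, err error) error {
        switch {
        case reterr == nil:
            return err
        case err == nil:
            return reterr
        default:
            return fmt.Errorf("%v; %v", reterr, err)
        }
    }

    func main() {
        base := errors.New("permission denied")
        err := Wrapf(base, "could not write %s", "/tmp/foo")
        fmt.Println(err)                       // could not write /tmp/foo: permission denied
        fmt.Println(errors.Is(err, base))      // true, thanks to %w
        fmt.Println(Append(nil, base) == base) // true
    }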
James Shubin
da1c96c6fd examples: lang: Refresh examples
Removed two old examples which were no longer valid.
2019-03-09 18:54:50 -05:00
James Shubin
5bbb474db6 engine: resources: Clean up KV resource 2019-03-09 18:48:26 -05:00
James Shubin
a0c909914d lang: funcs: Don't allow interpolation in printf format string
We'd like to pre-compute the interpolation if we can, so that we can run
this code properly... For now, we can't, so it's a compile time error...
Hopefully we can remove this restriction in the future. The problem is
the string must be a constant, or it would be possible to switch it from
"%d %s" to "%s %d %d" or anything that changes the type signature.
2019-03-09 18:06:18 -05:00
James Shubin
170e56b34a lang: Improve test case with more specific errors 2019-03-09 17:37:58 -05:00
James Shubin
de43569fa2 engine, lang: Improve send/recv significantly
Part of this was rotten, and not fully functional. This fixes the rot,
adds some tests, and improves the type checking that occurs when sending
and receiving values. In addition, a significant portion of this happens
at compile time.

There is still more work to be done here, but this should get us a good
chunk of the way for now.
2019-03-09 17:37:58 -05:00
James Shubin
aa6b701b77 lang: Improve the test case infra so it can detect different errors
Let the tests detect which specific error we want to fail on.
2019-03-09 17:14:45 -05:00
James Shubin
d69eb27557 lang: Small fixes about send/recv 2019-03-09 16:07:22 -05:00
James Shubin
0ca57d6a09 examples: lang: Add a basic file example 2019-03-09 16:05:12 -05:00
James Shubin
4c104d55cb engine: util: Add a new utility function for send/recv
This new utility function makes verifying send/recv struct comparisons
consistent. Unfortunately it doesn't yet support coercing from *string
to string or from string to *string.
2019-03-09 16:03:30 -05:00
James Shubin
8a8215fabe engine: util: Improve StructTagToFieldName and add tests
This improves this function to make it more generic.
2019-03-09 16:02:33 -05:00
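
A small Go sketch of what mapping struct tags to field names can look
like with reflection; the `lang` tag key, the function shape, and the
FileRes type here are illustrative assumptions, not the real engine
util code.

    package main

    import (
        "fmt"
        "reflect"
    )

    // structTagToFieldName maps each `lang` struct tag on st's fields to
    // the corresponding Go field name, e.g. "content" -> "Content".
    func structTagToFieldName(st interface{}) (map[string]string, error) {
        typ := reflect.TypeOf(st)
        if typ.Kind() == reflect.Ptr {
            typ = typ.Elem()
        }
        if typ.Kind() != reflect.Struct {
            return nil, fmt.Errorf("expected a struct, got %s", typ.Kind())
        }
        result := make(map[string]string)
        for i := 0; i < typ.NumField(); i++ {
            field := typ.Field(i)
            if tag, ok := field.Tag.Lookup("lang"); ok && tag != "" {
                result[tag] = field.Name
            }
        }
        return result, nil
    }

    // FileRes is a hypothetical resource struct for demonstration only.
    type FileRes struct {
        Path    string `lang:"path"`
        Content string `lang:"content"`
    }

    func main() {
        m, err := structTagToFieldName(&FileRes{})
        fmt.Println(m, err) // map[content:Content path:Path] <nil>
    }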
James Shubin
4badeafb98 engine: resources: Add missing struct tags
These are needed for send/recv.
2019-03-09 16:01:25 -05:00
James Shubin
7cb79bec49 engine: resources: Replace the cached values with a live calculation
This replaces the static obj.path and obj.isDir with live variants. I
don't know why I ever cared about caching these before, and if we ever
care we can memoize properly in the future.

This caching actually masked a small bug in the gob code. It is now
fixed in the previous commit.
2019-03-06 10:08:14 -05:00
James Shubin
8da0da02d9 engine: traits: Make encoded fields public
The fields that we might want to store with encoding/gob or any other
encoding package need to be public. We don't use any of these at the
moment, but we might in the future.
2019-03-06 10:08:14 -05:00
James Shubin
efef260764 engine: resources: Improve file Cmp function 2019-03-06 07:09:39 -05:00
James Shubin
a56991d081 engine: resources: Remove possible panic from within file res
Not sure how I let this in, but we should never do this. Hopefully
Validate will catch this issue in advance, and if not, at least we'll
only error.
2019-03-06 07:09:39 -05:00
James Shubin
f0196540ab resources: file: Make some small cleanups to file res
This does some small cleaning for consistency, since I haven't reviewed
this code in a long while.
2019-03-06 07:09:39 -05:00
James Shubin
426b15313e engine: resources: Fix missing file when specified without content
If the file res was defined with state => "exists" but no content
specified, it was not created. This patch fixes this bug and adds a test
and an example.
2019-03-06 07:09:39 -05:00
James Shubin
11fc55d679 lang: funcs: Add a new test for readfile and fix a small bug
This adds a new test for readfile. Interestingly, it actually caught a
small bug, which was also fixed with this commit. I think the bug was
actually always masked, because it only occurred on shutdown, and in
this case we often don't care about how the stream exited, but it's a
good example of how a test case focused on just one small aspect can be
important.

As an aside, this test case also would have caught the bug fixed in
94c40909cc and by reverting that patch it
indeed fails.
2019-03-06 07:09:39 -05:00
James Shubin
de1691665f lang: funcs: Add live function stream test infrastructure
This adds the ability to test that functions return the expected
streams, and to model this behaviour over time. This is done via a
"timeline" which runs an ordered list of actions that can both push new
values into the function stream, and wait and expect particular outputs.
Hopefully this will make our function implementations more robust!
2019-03-06 07:09:39 -05:00
James Shubin
b1f93b40ae lang: funcs: Add runner pure func execution
This adds a function runner that runs pure functions. It will hopefully
be useful for speculative execution of functions for compile time
determination of types.
2019-03-05 11:42:33 -05:00
James Shubin
5e58251026 test: Improve govet log newline check
We don't match for log.Fatalf but we shouldn't really be using that.
2019-03-05 11:42:33 -05:00
James Shubin
4f4091a9bd engine: resources: Improve test case readability 2019-03-04 10:26:51 -05:00
James Shubin
e9fb41fdc8 test: shell: Fix rare breakage in load test
For some reason the load is occasionally zero. This broke the regexp.
Let's see if this ever happens with the other digits.
2019-03-04 10:16:21 -05:00
James Shubin
6b803656b2 engine: resources: Improve exec resource
The exec resource was an early addition to the project, and it was due
for some fixes and integration into our automated tests. This patch
fixes a number of issues, and makes it ready for more general use.
2019-03-04 10:16:21 -05:00
James Shubin
829741e2ac lang: Print a clear message on module import containing unused stmt
If you run an import, you only include the things that are part of a
scope. This includes variables, classes, and functions. Anything else
should cause a compile error. This cleans up the error by adding a
String() method to each Stmt in our AST.
2019-02-28 09:35:13 -05:00
James Shubin
94c40909cc lang: funcs: Avoid erroneous empty message in readfile
Readfile had a bug where it sent an empty string on startup. This has
been fixed, and it now waits until the file contents are ready before
sending a string.
2019-02-28 08:56:10 -05:00
James Shubin
95dab16e6e lang: funcs: Allow the len function to determine str length 2019-02-28 08:54:11 -05:00
James Shubin
c049413b47 examples: lang: Add is_debian and is_redhat family example
This is just the beginning.
2019-02-28 08:53:25 -05:00
Nicolas Charles
2d45f95501 engine: resources: print: Add RefreshOnly option
Add a RefreshOnly option (defaults to false) to the print resource, to print only
when notified by another resource. When a print is RefreshOnly, it can't be grouped anymore.
2019-02-27 15:31:45 +01:00
Nicolas Charles
3cfc76b635 lang: funcs: Added a function to detect Debian and RedHat like systems 2019-02-26 18:13:34 +01:00
James Shubin
d88874845c test: shell: Improve load test
This might have failed once in travis because of a short timeout.
Hopefully if this happens again, we'll now know why.
2019-02-24 14:10:01 -05:00
James Shubin
5e38c1c8fe examples: Remove old hcl examples
The hcl frontend was removed a while back. Might as well remove these
examples too.
2019-02-24 14:10:01 -05:00
James Shubin
ae7ebeedd1 engine: resources: Add CheckApply event detection to resource tests
This adds the ability to wait with a timeout for CheckApply happenings
in a resource. This helps avoid unnecessarily long sleeping and timing
guesses. This also adds a cleanup function to run at the end.
2019-02-24 14:10:01 -05:00
James Shubin
652b657809 resources: exec: Avoid possible deadlock race
Some of the early code I wrote probably wouldn't pass my own reviews
today. Here's one example of a rare deadlock that could sometimes occur
when a Process/CheckApply caused a shutdown, but the bufio tried to send
on a channel that nobody was going to read any more. Now we properly
unblock that send with a context.
2019-02-24 12:28:59 -05:00
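
A minimal Go sketch of the unblocking pattern described above (a generic
illustration, not the exec resource itself): the producer selects
between sending a scanned line and the context being cancelled, so it
can never hang on a channel that nobody reads.

    package main

    import (
        "bufio"
        "context"
        "fmt"
        "strings"
        "time"
    )

    // scanLines sends each scanned line to out, but gives up as soon as
    // ctx is cancelled, instead of blocking forever on an unread channel.
    func scanLines(ctx context.Context, r *bufio.Scanner, out chan<- string) {
        defer close(out)
        for r.Scan() {
            select {
            case out <- r.Text():
            case <-ctx.Done():
                return // consumer is gone; unblock and exit
            }
        }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        scanner := bufio.NewScanner(strings.NewReader("one\ntwo\nthree\n"))
        out := make(chan string) // unbuffered, like a pipe to the consumer

        go scanLines(ctx, scanner, out)

        fmt.Println(<-out) // read one line...
        cancel()           // ...then simulate the consumer shutting down
        time.Sleep(10 * time.Millisecond)
    }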
James Shubin
62a6e0da1d misc: Add two test helpers
Hopefully these make testing and debugging easier!
2019-02-24 12:28:59 -05:00
James Shubin
0d0d48d9f6 test: Shell tests should use unified timeout command 2019-02-24 12:28:59 -05:00
James Shubin
ab5957f1e9 make: Clean up the Makefiles so the output is more elegant
This avoids printing erroneous messages when nothing is actually
happening.
2019-02-24 12:28:59 -05:00
James Shubin
463ba23003 util: Improve the sync primitives. 2019-02-24 12:28:59 -05:00
James Shubin
ccad6e7e1a test: Enable and fix up some more tests
An unstable engine probably masked some of these issues.
2019-02-24 12:28:59 -05:00
James Shubin
aa165b5e17 engine: Add the retry loop around Process
This adds back the retry loop around Process. This is done as a
separate commit so you can more easily see the logic of the retry magic.
This commit is similar but different to the earlier commit adding retry
around Watch.
2019-02-24 12:28:59 -05:00
James Shubin
f06e87377c engine: Add limit delay before Process can run
This adds back the limit delay around Process.
2019-02-24 12:28:59 -05:00
James Shubin
4c3bf9fc7a engine: Add the retry loop around Watch
This adds back the retry loop around Watch. This is done as a separate
commit so you can more easily see the logic of the retry magic.
2019-02-24 12:28:59 -05:00
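
A generic Go sketch of a retry loop with a limited number of attempts
and a delay between them, as a rough illustration of the kind of retry
wrapper described in the Watch and Process commits above; the names and
parameters are assumptions, not the engine's actual implementation.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retry runs fn up to retries+1 times, sleeping delay between attempts.
    func retry(retries int, delay time.Duration, fn func() error) error {
        var err error
        for i := 0; i <= retries; i++ {
            if err = fn(); err == nil {
                return nil
            }
            if i < retries {
                time.Sleep(delay)
            }
        }
        return fmt.Errorf("gave up after %d retries: %w", retries, err)
    }

    func main() {
        attempts := 0
        err := retry(3, 10*time.Millisecond, func() error {
            attempts++
            if attempts < 3 {
                return errors.New("transient failure")
            }
            return nil
        })
        fmt.Println(attempts, err) // 3 <nil>
    }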
James Shubin
253ed78cc6 engine: Rewrite the core algorithm
The engine core had some unfortunate bugs that were the result of some
early design errors when I wasn't as familiar with channels. I've
finally rewritten most of the bad parts, and I think it's much more
logical and stable now.

This also simplifies the resource API, since more of the work is done
completely in the engine, and hidden from view.

Lastly, this adds a few new metaparameters and associated code.

There are still some open problems left to solve, but hopefully this
brings us one step closer.
2019-02-24 12:28:59 -05:00
James Shubin
4860d833c7 converger: Rewrite the converger module
I found a deadlock in the converger code, and I realized the code was
sufficiently bad that it needed a good clean up.
2019-02-24 12:28:59 -05:00
James Shubin
450d5c1a59 util: Add an easy ACK sync primitive 2019-02-24 12:28:59 -05:00
Toshaan Bharvani
88fcda2c99 lang: funcs: Added an uptime function
Signed-off-by: Toshaan Bharvani <toshaan@vantosh.com>
2019-02-24 12:20:58 -05:00
James Shubin
00db953c9f lang: funcs: funcgen: Clean up some small details
Some small changes were needed; here they are. Unfortunately this only
supports the `string` type at the moment.
2019-02-21 13:06:29 -05:00
Julien Pivotto
a0df4829a8 lang: Add more string functions, autogenerated
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2019-02-21 17:50:06 +01:00
James Shubin
b0e1f12c22 test: Add expanders when running in travis
Hopefully this makes things more readable.
2019-02-20 09:35:31 -05:00
James Shubin
ee56155ec4 test: Split travis tests into three blocks
Our tests were taking nearly 50 minutes, which kills them. This also makes
it easier to spot small issues faster.
2019-02-20 09:35:02 -05:00
Jeff Waugh
16d7c6a933 build: Fix macOS build
Add pkg-config to fix builds with augeas and libvirt on macOS.
2019-02-14 23:06:18 +11:00
Johan Bloemberg
f7a06c1da9 etcd: Connection options (socket file, ipv6)
- Allow unix domain socket to be used as client url
- Using ::1 as clienturl should not create default local ipv4 listener
- Add shell tests
2019-02-13 18:55:20 +01:00
James Shubin
4c8086977a engine: resources: file: Update the format string
The %s in the format string is not technically correct here.
2019-02-08 12:38:10 -05:00
James Shubin
b1f088e5fa engine: resources: Add a test running for testing individual resources
This adds a simulated engine that can run and test single resources. It
can't test all aspects and features that the engine supports, but is
probably pretty decent for testing the actual CheckApply and Watch
semantics. Be warned that it actually applies changes on your machine,
so please don't write tests that make undesirable changes.
2019-02-08 12:36:37 -05:00
James Shubin
1247c789aa lang: Remove unnecessary log package 2019-02-08 10:23:44 -05:00
Johan Bloemberg
749038c76d misc: Make build on macOS work 2019-02-08 00:14:17 +01:00
Johan Bloemberg
0a052494c4 misc: Add goimports dep 2019-02-08 00:14:17 +01:00
James Shubin
90fa83a5cf lang: funcs: core: Move world API functions
Some of the core functions interact with the remote "world" API. Move
them all into the same package.
2019-02-07 12:32:32 -05:00
James Shubin
4eaff892c1 lang: funcs: core: Rename core module files
More cleanup...
2019-02-07 12:19:59 -05:00
James Shubin
f368f75209 lang: funcs: core: Drop unnecessary core prefix from imports
This unbreaks the mcl bindata code. Of course we could change the parser
to allow this prefix, but this is cleaner. The packages still have a
core prefix, which it seems we could also remove, but this isn't
particularly important for anything.
2019-02-07 09:33:20 -05:00
Lander Van den Bulcke
04048b13ed lang: funcs: Add strings.split function
Signed-off-by: Lander Van den Bulcke <landervdb@inuits.eu>
2019-02-07 10:55:39 +01:00
Lander Van den Bulcke
5acc33c751 lang: funcs: Add tests for sqrt function
Signed-off-by: Lander Van den Bulcke <landervdb@inuits.eu>
2019-02-06 17:11:42 +01:00
James Shubin
b449be89a7 examples: Add uncommitted nspawn example 2019-02-06 08:57:11 -05:00
Lander Van den Bulcke
dac019290d lang: funcs: Add sqrt function
Signed-off-by: Lander Van den Bulcke <landervdb@inuits.eu>
2019-02-06 14:32:13 +01:00
Julien Pivotto
bdc424e39d lang: Add to_lower and to_upper functions
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2019-02-06 14:24:15 +01:00
Lander Van den Bulcke
10193a2796 make: Use gem --no-document instead of deprecated flags
Signed-off-by: Lander Van den Bulcke <landervdb@inuits.eu>
2019-02-06 12:02:10 +01:00
Julien Pivotto
2c9a12e941 docker: Update FROM to go:1.11
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2019-02-06 10:48:24 +01:00
Felix Frank
8ba6c40f0c langpuppet: Fix Cli method invocations for wrapped GAPIs
Since the langpuppet GAPI creates fresh new CliContext objects,
it has to make sure to provide the original parent context, because
the child GAPIs expect to be able to access its data.
2019-02-05 16:34:55 +01:00
James Shubin
bbfeb49cdf examples: Add more examples and clean up some 2019-02-04 05:03:37 -05:00
James Shubin
f61e1cb36d examples: Add missing mcl files
I forgot to add these, sorry.
2019-02-03 09:58:04 -05:00
James Shubin
4a3e2c3611 engine: nspawn: Add an nspawn example with an improved exec
This adds the cwd fields to exec, better error messages to svc (which is
nested in nspawn) and a fancier nspawn example!
2019-02-01 09:44:55 -05:00
James Shubin
81faec508c integration: Avoid duplicate events from recwatch 2019-02-01 07:58:38 -05:00
James Shubin
9966ca2e85 examples: Improve dynamic cpus virt example 2019-02-01 07:58:38 -05:00
James Shubin
35c26f9ee5 engine: resources: virt: Clean up virt resource for lang 2019-02-01 07:58:38 -05:00
James Shubin
b5e29771ab lang: funcs: Add a trim space function to the new strings module 2019-02-01 07:00:05 -05:00
James Shubin
f5f09d3640 lang: funcs: Add str2int example function
We might want to move this into a real module eventually.
2019-02-01 06:59:07 -05:00
James Shubin
5a531b7948 lang: funcs: Add a new readfile function
This adds a new function that reads files from the local host.
2019-02-01 05:20:22 -05:00
James Shubin
f716a3a73b lang: funcs: Rename template functions to remove periods
Due to a limitation in the template library, we need to rename some
functions. It's probably worth looking into modifying this library or
finding an alternate version.
2019-02-01 03:58:02 -05:00
James Shubin
ce8c8c8eea engine: resources: Fix a small typo in error message 2019-02-01 03:49:08 -05:00
James Shubin
fc48fda7e5 engine: resources: Fix a possible panic on closed channel
I don't know how often it happens, but we should catch it.
2019-02-01 03:48:24 -05:00
James Shubin
78936c5ce8 examples: lang: Update examples to fix imports and port from yaml
Some small fixes that are useful for demos!
2019-02-01 03:47:18 -05:00
Kevin Kuehler
5d0efce278 engine: lang: util: Kill race in socketset
After some investigation, it appears that SocketSet.Shutdown() and
SocketSet.Close() are not synchronous operations. The sendto system call
called in SocketSet.Shutdown() is not a blocking send. That means there
is a race in which SocketSet.Shutdown() sends a message to a file
descriptor to unblock select, while SocketSet.Close() will close the
file descriptor that the message is being sent to. If SocketSet.Close()
wins the race, select is listening on a dead file descriptor and will
hang indefinitely.

This is fixed in the current master by putting SocketSet.Close() inside
of the goroutine in which data from the socket is being received. It
relies on SocketSet.Shutdown() being called to terminate the goroutine.
While this works most of the time, there is a race here. All the
goroutines can also be terminated by a closeChan. If the goroutine
receives an event (thus unblocking select) and then closeChan is
triggered, both SocketSet.Shutdown() and SocketSet.Close() race, leading
to undefined behavior.

This patch ensures the ordering of the two function calls by pulling
them both out of the goroutine and separating them with a WaitGroup.

Co-authored-by: James Shubin <james@shubin.ca>
2019-01-22 20:59:17 -08:00
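
A generic Go sketch of the ordering fix described above, with a stand-in
Receiver type instead of the real SocketSet: Shutdown unblocks the
receive loop, the WaitGroup proves the loop has exited, and only then is
Close allowed to run.

    package main

    import (
        "fmt"
        "sync"
    )

    // Receiver is a stand-in for something like SocketSet: Shutdown
    // unblocks a pending receive, and Close tears down the resource.
    type Receiver struct {
        data     chan int
        shutdown chan struct{}
    }

    func NewReceiver() *Receiver {
        return &Receiver{data: make(chan int), shutdown: make(chan struct{})}
    }

    // Receive blocks until data arrives or Shutdown is called.
    func (r *Receiver) Receive() (int, bool) {
        select {
        case v := <-r.data:
            return v, true
        case <-r.shutdown:
            return 0, false
        }
    }

    func (r *Receiver) Shutdown() { close(r.shutdown) }
    func (r *Receiver) Close()    { close(r.data) } // must not race with Receive

    func main() {
        r := NewReceiver()
        var wg sync.WaitGroup
        wg.Add(1)
        go func() {
            defer wg.Done()
            for {
                if _, ok := r.Receive(); !ok {
                    return
                }
            }
        }()

        // Ordering matters: unblock the loop, wait for it to exit, then close.
        r.Shutdown()
        wg.Wait()
        r.Close()
        fmt.Println("shut down cleanly")
    }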
Kevin Kuehler
0c17a0b4f2 util: Add TestShutdown to socketset
Test to ensure that SocketSet is nonblocking and will close when
SocketSet.Shutdown() is called. Create a SocketSet that will never
receive any data and leave it running in a goroutine with a WaitGroup
for a second. If Shutdown is working correctly, the goroutine will be
terminated after the timer expires.
2019-01-22 20:59:17 -08:00
Kevin Kuehler
3f396a7c52 lang: funcs: Add cpucount fact
Adds a CPU count fact that can be used to determine how many CPUs are
presently on the machine and ready for use (online). We get this by
reading from a netlink socket to the kernel, and the kernel sends us
uevents when CPUs are added, removed, and brought online or offline.
Whenever one of these events is received, we look in sysfs to update
the fact's Stream with the number of online CPUs.
2019-01-22 20:59:16 -08:00
Kevin Kuehler
8697f8f91f util: Libify socketset
Add the ReceiveBytes, ReceiveNetlinkMessage, and ReceiveUEvent methods.
This is because not everything passed through a netlink socket can
reliably be parsed using the ParseNetLinkMessage function.

With the ReceiveUEvent method, we add support for "uevent" kernel
events, which update us about the state of devices currently on the
system. To make using this method easier, we add a UEvent struct that
has the action (what event), Devpath (where the device lives in /proc or
/sysfs), and Subsystem (what subsystem this event belongs to).
2019-01-22 20:59:16 -08:00
Kevin Kuehler
06c67685f1 util: Move socketset from net resource to util
Prepare the socketset api to be used outside the scope of the net
resource.
2019-01-22 20:59:11 -08:00
James Shubin
dc2e7de9e5 engine: resources: pkg: Clarify that correct state is newest
I accidentally typed "latest", which left me confused about why
everything was broken. Surprised it didn't error earlier anyways.
2019-01-21 04:28:34 -05:00
James Shubin
db1dbe7a27 lang: Edges should allow lists of strings
This continues the earlier patch that allowed resource names to be lists
of strings so that edges can now allow the same. This also includes a
new fancy test!
2019-01-20 17:27:40 -05:00
James Shubin
d6bbb94be5 lang: test: Add a new giant test infra for matching static output
This greatly expands our test infra to allow us to drop in mcl tests and
look at their resource graph output. The only downside is that this only
runs the function engine once, so if the function graph changes
constantly over time, then this is not a good fit here.
2019-01-20 17:27:40 -05:00
James Shubin
e3b4c0aee3 test: Fix a small copy pasta typo 2019-01-20 17:27:40 -05:00
James Shubin
a1fbe152bb lang: unification: Fix up small typos in example code 2019-01-20 04:22:05 -05:00
James Shubin
9d28ff9b23 lang: unification: Catch unification error on typed var expr
This was similar to the typed if expr error.
2019-01-20 04:19:39 -05:00
James Shubin
43f0ddd25d lang: unification: Catch unification error on typed if expr
I found a case where we had two missing unification rules. Now fixed in
the previous commits, and including this test to show I'm responsible.
I've added the same test in two locations for redundancy and as an
example.
2019-01-20 04:19:39 -05:00
James Shubin
7a28b00d75 lang: If expression was missing two invariants
I forgot to ensure that the type of the final expression matched the
type of each of the branches. It's rare, but possible for this to occur.
Luckily, this never would have caused a panic, because the func engine
would have caught the issue anyways, but it's still better we catch it
here first!
2019-01-20 04:02:54 -05:00
James Shubin
32e29862f2 lang: Check that set type matches actual expression
I forgot to include these two invariants, which are occasionally
necessary; in most cases they're only needed to prevent incorrect
code from getting past unification. In any case, they would have been
caught by the engine.
2019-01-20 04:02:54 -05:00
James Shubin
6c5c38f5a7 lang: unification: Allow err string comparisons in tests
Let's improve our test infra to make it more capable. It's important to
catch that we failed for the _right_ reason, so as to not mask the wrong
errors.
2019-01-20 03:39:02 -05:00
James Shubin
2da7854b24 lang: unification: Add logging to make capturing errors easier
This makes building new tests easier.
2019-01-20 03:39:02 -05:00
James Shubin
6d0c5ab2d5 lang: unification: Add missing return to exit early
This exits the test early, since we don't need to continue.
2019-01-20 03:39:02 -05:00
James Shubin
9398deeabc etcd: Workaround a nil ptr bug
A clean re-write of this etcd code is needed, but until then, this
should hopefully work around the occasional test failures. In practice
I don't think anyone has ever hit this bug.
2019-01-17 20:07:24 -05:00
James Shubin
bf63d2e844 engine: graph: Avoid a possible panic sending on a closed channel
It's plausible that we send on a closed channel if we're running a back
poke and it tries to send a poke on something that has already closed.
If it detects this condition, it will exit.

Unfortunately, it's not clear if the wait group will protect this case,
but hopefully this will hold us until we can re-write the needed parts
of the engine.
2019-01-17 20:05:49 -05:00
James Shubin
b808592fb3 engine: Work around bad timestamp panic
Occasionally, a vertex downstream of an upstream vertex which has
already exited could get back poked, which would cause a panic. This
delays the deletion of the state struct until the entire graph has
completed, so that it won't panic. It doesn't matter if a back poke is
lost; we're shutting down or pausing, and in this scenario it can be
lost.
2019-01-17 20:05:49 -05:00
James Shubin
e2296a631b engine: event: Switch events system to use simpler structs
Pass around pointers to things now. Also, naming is vastly improved and
clearer.
2019-01-17 20:04:17 -05:00
James Shubin
e20555d4bc test: Don't be unnecessarily noisy in this test
This is confusing if you're looking for an error in the test.
2019-01-17 19:33:35 -05:00
James Shubin
b89e2dcd3c test: Add a three host variant of the empty etcd test 2019-01-17 19:21:56 -05:00
James Shubin
165d11b2ca test: Rename t8 to be more descriptive 2019-01-17 19:21:56 -05:00
James Shubin
d4046c0acf test: Enable t8 to test for two host etcd clusters
I can't remember why we disabled this, so let's put it back. There's
still one rare etcd race, but hopefully it doesn't fail too much until
we fix it.
2019-01-17 19:21:56 -05:00
James Shubin
88498695ac test: Add a semaphore shell test
This test tests new language features and acts as a fan in-out graph.
2019-01-17 19:21:56 -05:00
James Shubin
354a1c23b0 engine: graph: Prevent converged timeout of dirty res
Somewhere after the engine re-write we seem to have regressed and
converge early even if some resource is dirty. This adds an additional
timer so that we don't start the individual resource converged countdown
until our state is okay.
2019-01-17 18:46:00 -05:00
Kevin Kuehler
34550246f4 lang: Add debug flag and Logf to fact init struct 2019-01-17 18:12:45 -05:00
Jonathan Gold
db1cc846dc test: Ensure gometalinter is available 2019-01-15 20:24:37 -05:00
Jonathan Gold
74484bcbdf make: deps: Only install gometalinter on CI/CD servers 2019-01-15 20:23:24 -05:00
James Shubin
d5ecf8ce16 engine: Fix typos 2019-01-12 15:03:03 -05:00
James Shubin
b1ffb1d4a4 lang: Add autoedge and autogroup meta params to mcl
These weren't yet exposed in mcl. They're now available under the same
Meta namespace as the normal meta param structs. Even though they live
as a separate trait, they should be exposed together for a consistent
interface in mcl. If autoedge or autogroup ever grow additional params,
we can always add: `Meta:autoedge:something` to break it down further.
2019-01-12 13:16:39 -05:00
James Shubin
451e1122a7 lang: Refactor the res metaparams helper
We can do all the actions without returning anything but an error.
2019-01-12 12:34:07 -05:00
James Shubin
10dcf32f3c lang: Allow a list of strings in the resource name
This adds a core looping construct by allowing a list of names to build
a resource. They'll all have the same parameters, but they'll
intelligently add the correct list of edges that they'd individually
create.

Constructs like these are one reason we do NOT have actual looping
functionality in the language, and it should stay that way.
2019-01-12 11:54:02 -05:00
James Shubin
7f1477b26d lang: Add a placeholder "ExprAny" expression for unification hacks
Instead of adding complexity to the unification engine, we can add a
fake placeholder expression that is unreachable by the AST, but used for
unification so that we can ensure a "wrap" invariant has some contents.

Ideally we'd improve the unification engine, but we'll leave that for
the future, and it's easy to revert this one commit in the future.
2019-01-12 11:45:53 -05:00
James Shubin
33b68c09d3 lang: Refactor edges helper method 2019-01-12 11:45:53 -05:00
James Shubin
7ec48ca845 lang: Refactor resource creation into a helper method 2019-01-12 11:45:53 -05:00
James Shubin
5c92cef983 docs: Add sub categories to the language guide
Hopefully this makes the longer sections easier to read.
2019-01-12 11:45:53 -05:00
James Shubin
75eba466c6 travis: Clean up my grammar
What was I thinking?
2019-01-11 04:38:12 -05:00
James Shubin
ad30737119 lang: Add meta parameter parsing to resources
Now we can actually specify metaparameters in the resources!
2019-01-11 04:13:13 -05:00
James Shubin
8e0bde3071 lang: Move capitalized res identifier into parser
This gives us more specificity when trying to match exactly.
2019-01-11 02:57:39 -05:00
James Shubin
7d641427d2 test: Fix golang cache regression
Golang decided to change the GOCACHE behaviour in newer versions of `go
test`. This changes our tests to use the new approach.

For users using a local `.envrc`, you might want to add:

GOFLAGS="-count=1"

Which is supposed to fix this problem for local tests.

More information is available in: https://github.com/golang/go/issues/29378
2019-01-10 20:41:10 -05:00
James Shubin
3b62beed26 travis: Print debug info to catch travis regressions 2019-01-10 18:23:11 -05:00
James Shubin
2d3cf68261 travis: Workaround another broken apt repo
This works around another travis NO_PUBKEY regression.
2019-01-10 18:22:45 -05:00
Vincent Membré
7d6080d13f engine: resources: exec: Use WatchShell in Exec resource when needed instead of Shell 2019-01-03 10:28:22 +01:00
James Shubin
e3eefeb3fe engine: resources: pkg: Implement the CompatibleRes interface
This signals to an interested consumer that two or more compatible
resources can be merged safely. This is so that we can avoid the
"duplicate resource" design problem that puppet had.

To test this, you can run:

./mgmt run --tmp-prefix lang --lang 'pkg "cowsay" { state => "installed", } pkg "cowsay" { state => "newest", }'

which should work.
2018-12-29 02:54:55 -05:00
James Shubin
f10dddadd6 lang: Handle merging of compatible resources properly
The duplicate resource problem that puppet had should now be correctly
solved in mgmt.
2018-12-29 02:51:09 -05:00
James Shubin
d166112917 engine: Add an interface for compatible resources
This also adds utility functions for merging and improved comparing.
2018-12-29 02:46:43 -05:00
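For readers unfamiliar with the idea, a "compatible resource" interface could
look roughly like the following Go sketch; the method names here are
hypothetical and only illustrate the merge-instead-of-error behaviour described
above, not the actual interface in engine/:

```go
package sketch

import "fmt"

// Res is a stand-in for the engine's resource interface.
type Res interface {
	Kind() string
	Name() string
}

// compatibleRes is a hypothetical sketch: a resource that can say whether it
// may be safely merged with another, and perform the merge.
type compatibleRes interface {
	Res
	Compatible(other Res) bool
	Merge(other Res) (Res, error)
}

// mergeOrError merges two same-named resources if possible, otherwise it
// reports the classic "duplicate resource" error.
func mergeOrError(a, b Res) (Res, error) {
	if c, ok := a.(compatibleRes); ok && c.Compatible(b) {
		return c.Merge(b)
	}
	return nil, fmt.Errorf("duplicate resource: %s[%s]", a.Kind(), a.Name())
}
```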
James Shubin
8ed5c1bedf engine: Add a resource copy interface and implementation
If we want to copy an entire resource, we should use this helper method.
2018-12-29 02:42:02 -05:00
James Shubin
4489076fac engine: Add setters for the trait interfaces
Turns out it's useful to set the entire struct wholesale.
2018-12-29 01:16:38 -05:00
James Shubin
bdc33cd421 lang: Validate the edge field names in our resources
Validate these early instead of waiting for this to be caught during
output generation.
2018-12-29 00:18:10 -05:00
James Shubin
889dae2955 lang: Improve sub testing
This makes individual sub tests from the table easier to run.
2018-12-29 00:16:35 -05:00
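As background, Go's table-driven tests become individually runnable once each
row goes through t.Run with a name; presumably that is the mechanism meant
here. A small generic example:

```go
package example

import "testing"

func TestTable(t *testing.T) {
	tests := []struct {
		name string
		in   int
		want int
	}{
		{"double-one", 1, 2},
		{"double-two", 2, 4},
	}
	for _, tt := range tests {
		tt := tt // capture range variable for the closure
		t.Run(tt.name, func(t *testing.T) {
			if got := tt.in * 2; got != tt.want {
				t.Errorf("got %d, want %d", got, tt.want)
			}
		})
	}
}
```

A single entry can then be selected with something like
`go test -run 'TestTable/double-two'`.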
James Shubin
9ff21b68e4 engine: resources: pkg: Simplify state check
Refactor this code.
2018-12-28 20:33:51 -05:00
James Shubin
a69a7009f8 engine: resources: pkg: Replace state strings with constants
This helps avoid typos, and gives us something we can export in the
future.
2018-12-28 20:32:23 -05:00
James Shubin
d413fac4cb engine: resources: pkg: Remove old Compare method
This was legacy code. Get rid of it.
2018-12-28 20:06:00 -05:00
James Shubin
246ecd8607 engine: resources: cron: Fix typo in error message 2018-12-28 20:00:14 -05:00
James Shubin
22105af720 lang: test: Add a test of duplicate resource generation
These two cases should be allowed in our language. This is something
that puppet got wrong, and hopefully this makes writing modules more
sane in mcl, since two modules both depending on a "cowsay" package
won't cause compile errors.

This only checks the language. The de-duplication is done there. We
don't currently have a check for this in the engine. (We should!)
2018-12-28 18:44:07 -05:00
James Shubin
880c4d2f48 lang, util: Tests that depend on the fs should be sorted
This ensures they're deterministic on any file system.
2018-12-28 18:00:08 -05:00
Jonathan Gold
443f489152 etcd: Add more test cases to TestEtcdCopyFs0 2018-12-22 04:47:49 -05:00
Jonathan Gold
39fdfdfd8c etcd: Add TestEtcdCopyFs0
This commit adds a new test to etcd/fs/fs_test.go that performs the same
actions (with some new cases) as TestFs2 and TestFs3, but allows us to
add more test cases as needed.
2018-12-22 04:46:58 -05:00
James Shubin
96dccca475 lang: Add module imports and more
This enables imports in mcl code, and is one of the last remaining blockers
to using mgmt. Now we can start writing standalone modules, and adding
standard library functions as needed. There's still lots to do, but this
was a big missing piece. It was much harder to get right than I had
expected, but I think it's solid!

This unfortunately large commit is the result of some wild hacking I've
been doing for the past little while. It's the result of a rebase that
broke many "wip" commits that tracked my private progress, into
something that's not gratuitously messy for our git logs. Since this was
a learning and discovery process for me, I've "erased" the confusing git
history that wouldn't have helped. I'm happy to discuss the dead-ends,
and a small portion of that code was even left in for possible future
use.

This patch includes:

* A change to the cli interface:
You now specify the front-end explicitly, instead of leaving it up to
the front-end to decide when to "activate". For example, instead of:

mgmt run --lang code.mcl

we now do:

mgmt run lang --lang code.mcl

We might rename the --lang flag in the future to avoid the awkward word
repetition. Suggestions welcome, but I'm considering "input". One
side-effect of this change is that flags which are "engine" specific
now must be specified with "run" before the front-end name. Eg:

mgmt run --tmp-prefix lang --lang code.mcl

instead of putting --tmp-prefix at the end. We also changed the GAPI
slightly, but I've patched all code that used it. This also makes things
consistent with the "deploy" command.

* The deploys are more robust and let you deploy after a run
This has been vastly improved and lets mgmt really run as a smart
engine that can handle different workloads. If you don't want to deploy
when you've started with `run` or if one comes in, you can use the
--no-watch-deploy option to block new deploys.

* The import statement exists and works!
We now have a working `import` statement. Read the docs, and try it out.
I think it's quite elegant how it fits in with `SetScope`. Have a look.
As a result, we now have some built-in functions available in modules.
This also adds the metadata.yaml entry-point for all modules. Have a
look at the examples or the tests. The bulk of the patch is to support
this.

* Improved lang input parsing code:
I re-wrote the parsing that determined what ran when we passed different
things to --lang. Deciding between running an mcl file or raw code is
now handled in a more intelligent, and re-usable way. See the inputs.go
file if you want to have a look. One casualty is that you can't stream
code from stdin *directly* to the front-end, it's encapsulated into a
deploy first. You can still use stdin though! I doubt anyone will notice
this change.

* The scope was extended to include functions and classes:
Go forth and import lovely code. All these exist in scopes now, and can
be re-used!

* Function calls actually use the scope now. Glad I got this sorted out.

* There is import cycle detection for modules!
Yes, this is another dag. I think that's #4. I guess they're useful.

* A ton of tests and new test infra was added!
This should make it much easier to add new tests that run mcl code. Have
a look at TestAstFunc1 to see how to add more of these.

As usual, I'll try to keep these commits smaller in the future!
2018-12-21 06:22:12 -05:00
James Shubin
948a3c6d08 gapi: Add a bytes helper
Use bytes directly if we've got them.
2018-12-20 21:21:30 -05:00
James Shubin
dc13d5d26b util: Add some useful path parsing functions
These two are useful for looking at path prefixes and rebasing the paths
onto other paths.
2018-12-20 21:21:30 -05:00
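The exact helpers aren't shown in the log, so the following is only a rough
illustration of the "rebase a path onto another prefix" idea using the standard
library; the function name and signature are made up:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// rebase is a hypothetical illustration: if path sits under prefix, re-root it
// onto root, e.g. rebase("/var/lib/mgmt/x", "/var/lib/mgmt", "/tmp") returns
// "/tmp/x".
func rebase(path, prefix, root string) (string, bool) {
	rel, err := filepath.Rel(prefix, path)
	if err != nil || strings.HasPrefix(rel, "..") {
		return "", false // path is not under prefix
	}
	return filepath.Join(root, rel), true
}

func main() {
	out, ok := rebase("/var/lib/mgmt/deploys/1", "/var/lib/mgmt", "/tmp/prefix")
	fmt.Println(out, ok) // /tmp/prefix/deploys/1 true
}
```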
James Shubin
aae714db6b lang: Add a top-level stmt safety method
This adds a new method to the *StmtProg that lets us determine if the
prog contains only what is necessary for a scope and nothing more. This
is useful because that is exactly what is produced when doing an import.
With this detection method, we can know if a module contains dead code
that might mislead the user into thinking it will get run when it won't.
2018-12-20 21:21:30 -05:00
James Shubin
a7c9673bcf lang: Improve empty scope and output
For some reason these were unnecessary methods on the structs, even when
those structs contained nothing useful to offer.
2018-12-20 21:21:30 -05:00
James Shubin
3d06775ddc lang: Add some lambda function parsing and tests
Part of this isn't fully implemented, but might as well get the tests
running.
2018-12-20 21:21:30 -05:00
James Shubin
48beea3884 test: Clean up and improve golang tests
This adds some consistency to the tests and properly catches difficult
scenarios in some of the lexparse tests.
2018-12-20 21:21:30 -05:00
James Shubin
958d3f6094 lang: Add beginning of user defined functions
This adds the lexer, parser and struct basics for user defined
functions. It's far from finished, but it's good to get the foundation
started.
2018-12-20 21:21:30 -05:00
James Shubin
08f24fb272 lang: Add a URL result to the import name parser
This is meant to be useful for the downloader. This will probably get
more complicated over time, but for now the goal is to have it simple
enough to work for 80% of use cases.
2018-12-20 21:21:30 -05:00
James Shubin
07d57e1a64 git: Ignore some WIP files that won't get tracked in git 2018-12-20 21:21:30 -05:00
James Shubin
cd7711bdfe gapi: Add a prefix variable in case we want to namespace on disk
This could get passed through to use as a module download path.
2018-12-20 21:21:30 -05:00
James Shubin
433ffa05a5 bindata: Add infrastructure for building core mcl files
This should prepare us so that we can build native mcl code alongside
the core *.go files which we already have. This includes a single mcl
file that is used as a placeholder so that the build doesn't fail if we
don't have any mcl files in the core/ directory. It will get ignored
automatically.
2018-12-20 21:21:30 -05:00
James Shubin
046b21b907 lang: Refactor most functions to support modules
This is a giant refactor to move functions into a hierarchical module
layout. While this isn't entirely implemented yet, it should work
correctly once all the import bits have landed. What's broken at the
moment is the template function, which currently doesn't understand the
period separator.
2018-12-20 21:21:30 -05:00
James Shubin
c32183eb70 lang: Tidy up grouping of lexer tokens in the parser
Just some small cleaning.
2018-12-20 21:21:30 -05:00
James Shubin
73b11045f2 lang: Add lexing/parsing of import statements
This adds the basic import statement, and its associated variants. It
also adds the import structure which is the result of parsing.
2018-12-20 21:21:30 -05:00
James Shubin
57ce3fa587 lang: Allow matching underscores in some of the identifiers
This allows matching underscores in some of the identifiers, but not
when they're the last character.

This caused me to suffer a bit of pain tracking down a bug which turned
out to be in the lexer. It started with a failing test that I wrote in:

974c2498c4

and which followed with a fix in:

52682f463a

Glad that's fixed!
2018-12-20 21:21:30 -05:00
James Shubin
a26620da38 lang: Add resource specific tokens in lexer and parser
This adds some custom tokens for the lexer and parser so that resources
can have colons in their names.
2018-12-20 21:21:30 -05:00
James Shubin
86b8099eb9 lang: Add import spec parsing and tests
This adds parsing of the upcoming "import" statement contents. It is the
logic which determines how an import statement is read in the language.
Hopefully it won't need any changes or additional magic.
2018-12-20 21:21:30 -05:00
James Shubin
c8e9a100a6 lang: Support lexing and parsing a list of files with offsets
This adds a LexParseWithOffsets method that also takes a list of offsets
to be used if our input stream is composed of multiple io.Readers
combined together.

At the moment the offsets are based on line count instead of file size.
I think the latter would be preferable, but it seems it's much more
difficult to implement as it probably requires support in the lexer and
parser. That improved solution would probably be faster, and more
correct in case someone passed in a file without a trailing newline.
2018-12-20 21:21:30 -05:00
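A sketch of the offsets idea, assuming line-count offsets as described: count
how many lines each reader contributes before concatenating them, so a position
in the combined stream can be mapped back to its source file. This is only an
illustration, not the LexParseWithOffsets API itself:

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"io"
	"strings"
)

// lineOffsets counts the lines each input contributes so a combined stream's
// line numbers can later be mapped back to the file they came from. It returns
// the concatenated reader and the cumulative starting line of each input.
func lineOffsets(inputs []io.Reader) (io.Reader, []int) {
	offsets := make([]int, len(inputs))
	var bufs []io.Reader
	total := 0
	for i, r := range inputs {
		offsets[i] = total
		var b bytes.Buffer
		sc := bufio.NewScanner(io.TeeReader(r, &b)) // keep the bytes while counting
		for sc.Scan() {
			total++
		}
		bufs = append(bufs, &b)
	}
	return io.MultiReader(bufs...), offsets
}

func main() {
	r, offs := lineOffsets([]io.Reader{
		strings.NewReader("a\nb\n"),
		strings.NewReader("c\n"),
	})
	out, _ := io.ReadAll(r)
	fmt.Printf("%q %v\n", out, offs) // "a\nb\nc\n" [0 2]
}
```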
James Shubin
a287f028d1 lang: Detect sub tests with the same name
This detects identically named tests and fails the test in such a
scenario to prevent confusion.
2018-12-20 21:21:30 -05:00
James Shubin
cf50fb3568 lang: Allow dotted identifiers
This adds support for dotted identifiers in include statements, var
expressions and function call expressions. The dotted identifiers are
used to refer to classes, bind statements, and function definitions
(respectively) that are included in the scope by import statements.
2018-12-20 21:21:30 -05:00
James Shubin
4c8193876f util: Add a UInt64Slice and associated sorting functionality.
This adds an easy-to-sort slice of uint64s and associated functionality
to sort a list of strings by their associated order in a map indexed by
uint64s.
2018-12-20 21:21:30 -05:00
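A minimal Go sketch of what such a type looks like (the real one lives in the
util package and may differ in detail): implement sort.Interface over []uint64,
then use the sorted keys to order the strings stored in a uint64-indexed map.

```go
package main

import (
	"fmt"
	"sort"
)

// UInt64Slice sketches a sort.Interface over []uint64.
type UInt64Slice []uint64

func (s UInt64Slice) Len() int           { return len(s) }
func (s UInt64Slice) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }
func (s UInt64Slice) Less(i, j int) bool { return s[i] < s[j] }

// sortedByKey returns the map's string values ordered by their uint64 keys.
func sortedByKey(m map[uint64]string) []string {
	keys := UInt64Slice{}
	for k := range m {
		keys = append(keys, k)
	}
	sort.Sort(keys)
	out := make([]string, 0, len(keys))
	for _, k := range keys {
		out = append(out, m[k])
	}
	return out
}

func main() {
	fmt.Println(sortedByKey(map[uint64]string{3: "c", 1: "a", 2: "b"})) // [a b c]
}
```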
James Shubin
158bc1eb2a lang: Add an Apply iterator to the Stmt and Expr API
This adds a new interface Node which must implement the Apply method.
This method traverses the entire AST and applies a function to each node.
Both Stmt and Expr must implement this.
2018-12-20 21:21:30 -05:00
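To make the shape of that API concrete, here is a toy visitor in the same
spirit; the Node interface and traversal order are illustrative only, not the
actual lang package types:

```go
package main

import "fmt"

// Node sketches the visitor-style interface described above: Apply walks the
// subtree rooted at the node and calls fn on every node it visits.
type Node interface {
	Apply(fn func(Node) error) error
}

// exprList is a toy node with children, to show the traversal pattern.
type exprList struct {
	children []Node
}

func (e *exprList) Apply(fn func(Node) error) error {
	for _, c := range e.children {
		if err := c.Apply(fn); err != nil {
			return err
		}
	}
	return fn(e) // visit ourselves last (order is an implementation choice)
}

type exprLeaf struct{ name string }

func (e *exprLeaf) Apply(fn func(Node) error) error { return fn(e) }

func main() {
	root := &exprList{children: []Node{&exprLeaf{"a"}, &exprLeaf{"b"}}}
	count := 0
	_ = root.Apply(func(Node) error { count++; return nil })
	fmt.Println(count) // 3
}
```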
James Shubin
3f42e5f702 lang: Add logging and debug info via a new Init method
This expands the Stmt and Expr interfaces to add an Init method. This
is used to pass in Debug and Logf values, but is also used to validate
the AST. This gets rid of standalone use of the "log" package.
2018-12-20 21:21:30 -05:00
Tom Payne
75633817a7 etcd: Ensure that fs.Fs implements afero.Fs 2018-12-20 21:19:55 -05:00
Tom Payne
83b00fce3e etcd: Add Lchown (returns ErrNotImplemented) 2018-12-20 21:19:55 -05:00
Tom Payne
38befb53ad etcd: Add Chown (returns ErrNotImplemented) 2018-12-20 21:19:55 -05:00
Kevin Kuehler
d0b5c4de68 util: Patch CopyFs and add tests
Fix CopyFs bug that resulted in a flattened destination directory.
The added tests catch this bug, and ensure the data is in fact copied
to the destination directory.
2018-12-20 12:15:06 -08:00
James Shubin
1b68845b00 test: Fix up token vet test
I forgot to catch some of the cases earlier.
2018-12-19 22:24:20 -05:00
James Shubin
a7bc72540d util: Fix small linting error
Woops!
2018-12-19 12:29:44 -05:00
James Shubin
27ac7481f9 test: Increase the vet testing for irregular strings
Catch some inconsistent comments to keep things neat. Hey, anything we
can automate, we do :)
2018-12-19 06:52:23 -05:00
James Shubin
9bc36be513 util: Add a test for CopyFs
This adds a test case for the standalone CopyFs function, and an easy to
use test case infra.
2018-12-19 06:51:05 -05:00
James Shubin
e62e35bc88 util: Improve the test helper function and add a better one
This should help us write tests that use unique physical directories
inside the directory tree.
2018-12-19 06:10:48 -05:00
James Shubin
bd80ced9b2 util: Add an fs helper and a test helper 2018-12-17 12:10:09 -05:00
Jonathan Gold
bb2f2e5e54 util: Add PathSlice type that satisfies sort.Interface
This commit adds a []string{} type alias named PathSlice, and the
Len(), Swap(), and Less() methods required to satisfy sort.Interface.
Now you can do `sort.Sort(util.PathSlice(foo))` where foo is a slice
of paths. It will be sorted by depth in alphabetical order.
2018-12-17 01:14:54 -05:00
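The described ordering (by depth, then alphabetically within a depth) can be
pictured with a small example; this uses sort.Slice purely for brevity and is
not the PathSlice implementation itself:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

func main() {
	// Order paths by depth first, then alphabetically within a depth,
	// which is the ordering PathSlice is described as providing.
	paths := []string{"/a/b/c/", "/a/", "/b/", "/a/b/"}
	sort.Slice(paths, func(i, j int) bool {
		di, dj := strings.Count(paths[i], "/"), strings.Count(paths[j], "/")
		if di != dj {
			return di < dj
		}
		return paths[i] < paths[j]
	})
	fmt.Println(paths) // [/a/ /b/ /a/b/ /a/b/c/]
}
```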
James Shubin
b1eb6711b7 engine: resources: Work around a subtle embedded res bug
This is a subtle issue that was found that caused a panic. This should
solve things for now, but it would be wise to build embedded or
composite resources sparingly until we're certain this would work the
way we wanted for all scenarios.
2018-12-16 16:07:42 -05:00
Jonathan Gold
da0ffa5e56 engine: resources: cron: Add auto edges from SvcRes 2018-12-16 15:12:58 -05:00
Felix Frank
68ef312233 gitignore: Ignore vim swap files 2018-12-16 13:41:21 -05:00
Felix Frank
9fefadca24 docs: Explain the langpuppet interface and function 2018-12-16 13:35:47 -05:00
James Shubin
e14b14b88c engine: resources: svc: Add symmetric closing
This improves some of the closing in the svc resource. This still needs
lots of improvements, and it's sort of terrible because it was some very
early code.
2018-12-16 08:27:26 -05:00
James Shubin
d5bfb7257e engine: resources: file: Require paths to be absolute
This is a requirement of our file resource, so we should validate this
and clearly express it in the documentation.
2018-12-16 07:24:07 -05:00
Jonathan Gold
8282f3b59c engine: resources: cron: Add lang examples 2018-12-15 11:01:05 -05:00
Jonathan Gold
dbf0c84f0b engine: resources: cron: Add support for user session timers 2018-12-15 10:47:35 -05:00
Jonathan Gold
a5977b993a engine: util: Add EdgeCombiner() for combining auto edges 2018-12-15 10:47:35 -05:00
Jonathan Gold
27df3ae876 engine: resources: cron: Add a systemd-timer resource 2018-12-15 10:47:35 -05:00
Felix Frank
a49d07cf01 gapi: langpuppet: Add initial implementation
This new entrypoint allows graph generation from both a Puppet manifest
and a piece of mcl code. The GAPI implementation wraps the two existing
GAPIs.
2018-12-15 03:43:15 +01:00
Jonathan Gold
28f343ac50 engine: resources: svc: Use dbus session bus for user session svc
This patch adds a util function, SessionBusUsable, that makes and returns
a new usable dbus session bus. If the svc bool session is true, the resource
will use a bus created with that function.
2018-12-14 00:16:21 -05:00
Jonathan Gold
4297a39d03 engine: resources: group: Make group edgeable
This adds the edgeable trait to the group resource and adds an
AutoEdges method which returns nil, nil. These changes are necessary
to allow UserRes to make autoedges to GroupRes.
2018-12-13 23:01:42 -05:00
Jonathan Gold
bd996e441c etcd: Use mgmt backend for fs tests 2018-12-11 18:11:45 -05:00
Jonathan Gold
086a89fad6 etcd: Use source filepath base in CopyFs destination path
This patch corrects the destination path in CopyFs to use the source's
base filepath, instead of the entire source path. Now copying /foo/bar
to /baz results in /baz/bar instead of /baz/foo/bar. This commit also adds
a test to verify this behaviour.
2018-12-11 02:20:11 -05:00
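The before/after behaviour is easy to see with the standard path helpers; a
tiny illustration (not the CopyFs code itself):

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	src, dst := "/foo/bar", "/baz"
	// Fixed behaviour: join the destination with the *base* of the source.
	fmt.Println(filepath.Join(dst, filepath.Base(src))) // /baz/bar
	// The old, buggy behaviour effectively joined the whole source path.
	fmt.Println(filepath.Join(dst, src)) // /baz/foo/bar
}
```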
Michael Lesko-Krleza
70ac38e66c test: Increase test coverage for graphsync
This patch is an addition to graphsync_test.go, which increases the test
coverage from 72.4% to 72.9%.
2018-12-11 02:02:33 -05:00
James Shubin
d990d2ad86 travis: Bump to golang 1.10
This requires breaking changes in gofmt. It is hilarious that this was
changed. Oh well. This also moves to the latest stable etcd. Lastly,
this changes the `go vet` testing to test by package, since the new go
vet changed how it works and now fails without this change.
2018-12-11 01:46:17 -05:00
Jonathan Gold
56db31ca43 engine: resources: file: Add shell test for source field 2018-12-10 22:08:59 -05:00
Jonathan Gold
b902e2d30b engine: resources: file: Fix bug preventing use of source field
This patch fixes a previously undiscovered bug which prevented
the use of the source field in the file resource. CheckApply was
returning early if obj.Content was nil. It is also necessary to
check that obj.Source is empty before returning, otherwise
syncCheckApply never runs.
2018-12-10 22:08:59 -05:00
Jonathan Gold
d2bab32b0e engine: resources: packagekit: Fix dbus addmatch rule
I broke packagekit with commit 299080f5 due to a missing equals sign
in the DBus AddMatch rule. This commit adds the necessary equals sign.
2018-12-09 11:12:48 -05:00
Jonathan Gold
b2d726051b travis: Build on Xenial
Builds were failing on Trusty due to broken GPG keys, and upgrading
the build environment to Xenial Xerus solves the problem.
2018-12-04 20:27:47 -05:00
Jonathan Gold
8e25667f87 engine: resources: net: test: Add shell test for net resource
This patch adds a shell test for net, which creates a dummy interface
and runs mgmt to bring it up and assign it an address. It then
checks if the state was applied correctly. Finally, it runs mgmt again
to bring the interface down, and tests that it comes down and stays
down.
2018-12-04 17:12:57 -05:00
Jonathan Gold
9b5c4c50e7 engine: resources: net: Allow addr without gateway
In some scenarios it is desirable to set the addrs and gateway
independently, e.g. if a default gateway is already set on
the machine. This patch removes the requirement to set them
together.
2018-12-04 17:12:57 -05:00
Jonathan Gold
d2ce70a673 puppet: Fix error message when puppet conf copy fails
This commit adds the missing config file location to the error
message.
2018-12-04 16:58:40 -05:00
Felix Frank
9db0fc4ee4 make: Speed up the build by skipping gem docs
By default, Ruby gems generate documentation in two distinct formats
during installation. By passing --no-ri and --no-rdoc, gem is instructed
to skip this step for both formats.

If the user needs documentation for any of the gems after all, they can
manually generate the docs themselves.
2018-12-04 16:56:23 -05:00
Felix Frank
9ed830bb81 make: Remove spurious dependency package 'rubygems' for Debian-like systems
On Ubuntu, the apt-get install call to ruby, ruby-devel, and rubygems will
fail because there is no "rubygems" package in Ubuntu.

In Debian, this package is virtual only. In both cases, the ruby package
is sufficient. (See also https://packages.debian.org/jessie/rubygems)
2018-12-04 16:55:46 -05:00
James Shubin
4e42d9ed03 travis: Work around broken travis NO_PUBKEY error
W: GPG error: https://packagecloud.io/rabbitmq/rabbitmq-server/ubuntu trusty InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY F6609E60DC62814E
E: The repository 'https://packagecloud.io/rabbitmq/rabbitmq-server/ubuntu trusty InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
2018-12-04 16:52:02 -05:00
James Shubin
4c93bc3599 test: Add doc note about skipping docker tests
This is useful if you don't have docker running, since otherwise it
causes all the tests to fail.
2018-12-03 23:55:20 -05:00
Jonathan Gold
7c817802a8 engine: resources: net: test: Add some go tests
This patch adds go tests for NetRes.unitFileContents(), socketSet.fdSet(),
and socketSet.nfd(), in the net resource.
2018-12-03 23:42:42 -05:00
Jonathan Gold
de90b592fb lang: Fix error message format strings
This commit replaces %s with %d in two error messages, where the
argument is an integer, not a string.
2018-12-03 19:27:35 -05:00
Jonathan Gold
b9d0cc2e28 etcd: Fix deploy transaction error message
This commit removes an unused argument from the error format string.
2018-12-03 19:26:18 -05:00
James Shubin
0ec00fe57f make: Improve release pipeline
Hopefully this makes releases a little better for users.
In particular, this avoids listing old build artifacts in the SHA256SUMS
files when we make new releases, and users can now download them
directly.

Now to make a release you run: `make tag && make release`.
After the first make session ends, you'll have a new tag released
publicly, and then during the second make session, the release target
will notice this new tag, build some assets, and upload them!
2018-11-30 19:08:53 -05:00
Jonathan Gold
80931e1cb4 make: Release pipeline
This commit adds new make targets for rpm, deb, and pacman packages.
It also adds a phony target that uploads tarballs of the packages,
along with their signed (and unsigned) checksums to the github release
page. Once the current commit is tagged as a release, run `make release`
to build the packages and upload them to github.
2018-11-30 04:53:51 -05:00
James Shubin
cc02e96a13 engine: resources: Add nodocker build tag
Make it easy to disable building docker which is enormous.
2018-11-29 08:22:05 -05:00
Jonathan Gold
51ec91dd16 engine: resources: docker: Add a docker container resource 2018-11-29 08:14:07 -05:00
James Shubin
916a92c3d8 vendor: Add vendored docker modules with out of tree fix
The docker project absurdly *copies* all of the dependencies into the
vendor/ directory instead of using git submodules or avoiding
unnecessary vendoring entirely. We manually remove these changes until
they learn to use tools the way they're intended.

As an aside, we recommend using a more intelligent, modern tool like
systemd-nspawn instead.
2018-11-29 08:14:07 -05:00
James Shubin
5431bfdc29 test: Improve commit message tests 2018-11-24 04:42:50 -05:00
Jonathan Gold
43b5b4f5a4 build: Add rubygems to make deps target
cffdb06 adds a linter for markdown which requires rubygems.
This commit adds the dependency to the make target.
2018-10-30 17:16:50 -04:00
James Shubin
f342e06ef0 readme: Add Liberapay link to README 2018-06-21 19:21:41 -04:00
James Shubin
81bb87f4cd test: Add a test to ensure the parser doesn't have any conflicts
Our grammar shouldn't be ambiguous, and it makes sense to test this.
2018-06-18 16:06:23 -04:00
James Shubin
c4b97fadcc lang: Update map type definition to include a prefix
It turns out that some planned additions to the parser make it so that
the map type definition can be ambiguous. As a result, this patch
updates the definition so that the map definition is not confused with
an open curly bracket anywhere.

Thanks to pestle and stbenjamin for their help understanding yacc!
2018-06-18 16:06:23 -04:00
James Shubin
05f6ba7297 lang: Add partial recursive support/detection to class
This adds the additional bits onto the class/include statements to
support or detect class recursion. It's not currently supported, but
I figured I'd commit the detection code as a variant of the recursion
implementation, since I think this is correct, and it was a bit tricky
for me to get it right.
2018-06-17 17:35:34 -04:00
James Shubin
c62b8a5d4f lang: Add class and include statements
This adds support for the class definition statement and the include
statement which produces the output from the corresponding class.

The classes in this language support optional input parameters.

In contrast with other tools, the class is *not* a singleton, although
it can be used as one. Using include with equivalent input parameters
will cause the class to act as a singleton, although it can also be used
to produce distinct output.

The output produced by including a class is actually a list of
statements (a prog) which is ultimately a list of resources and edges.
This is different from functions, which produce values.
2018-06-17 17:29:44 -04:00
James Shubin
83dab30ecf lang: Simplify bind stmt collection in the prog stmt
This cleans up the code to be more consistent with the other
improvements in this area.
2018-06-12 17:44:42 -04:00
James Shubin
24b08a332d pgraph: Handle empty graphs when merging two
In case we choose to add an empty (nil) graph, handle it safely. This
could allow us to return nil in a lang/structs Graph method without
issue.
2018-06-12 17:44:36 -04:00
James Shubin
70ccb3022a lang: Simplify struct interpolation
Cleaner code, nothing fancy.
2018-06-12 17:40:57 -04:00
James Shubin
8019b90b8a lang: Don't add identical resources to graph
This means that it's legal to produce two compatible (usually identical)
resources without a compile error and without causing two of them to get
run. It's too bad puppet never got this right.

It's probably worth checking if this could be done for edges too, and if
the logic can be contained in the engine and not in the frontend.
2018-06-12 17:40:57 -04:00
James Shubin
5f12ff6178 lang: Add indentation test to parser
This adds a test case to catch some common typos.
2018-06-12 17:40:18 -04:00
James Shubin
6e20e48489 lang: Simplify graph function for edge half in parser 2018-06-12 17:40:18 -04:00
James Shubin
f29a72235c lang: funcs: Registered functions map should be private
Make the map private so that the public methods must be used to
access it.
2018-06-12 17:40:18 -04:00
James Shubin
e25d499eeb lang: Add edges to StmtProg output
I think I forgot to add these previously, and I think they should be
part of the output now.
2018-06-12 17:40:18 -04:00
James Shubin
9cae339546 lang: Error parser if SetType fails to avoid a panic
Turns out we can actually cause the parser to error instead of needing
to panic. It definitely seems to work, and is better than the panic. The
only awkward thing is how this plumbing works in yacc world. If anyone
knows why this is wrong, please let me know. Reading the generated code
seems to imply that this is correct.
2018-05-22 20:02:50 -04:00
James Shubin
a049af6262 engine: resources: print: Add missing Recvable trait
We were receiving values, but we forgot to list the trait. This caused
an intentional engine panic, but is easily fixed :)
2018-05-22 19:32:40 -04:00
Jonathan Gold
a402f50f9b docs: Update url for AWS EC2 blog post 2018-05-19 22:05:12 -04:00
Jonathan Gold
9f89ea9be6 docs: Add netlink post to on-the-web.md 2018-05-19 22:05:12 -04:00
phaer
e538aacf9d vagrant: Fix example path in motd 2018-05-19 09:21:14 +02:00
phaer
968c609697 vagrant: Add gem package 2018-05-19 09:21:06 +02:00
phaer
c11cfa0a62 vagrant: Bump to fedora 28 2018-05-19 09:20:51 +02:00
Jonathan Gold
074f4677d5 build: Fix ldflags pattern for 1.10
Prior to go 1.10 ldflags would apply to all packages by default.
As of go 1.10 it is necessary to specify the package for the
flags to apply. This patch checks the go version, and formats
the build command accordingly.
2018-05-11 16:17:24 -04:00
James Shubin
9ea5c03371 travis: Enable apt updates on builds
This used to happen by default, and travis changed the default.
2018-05-09 13:46:04 -04:00
James Shubin
22c0ff3cf5 test: Improve golang tests with root and disabling cache
This allows golang tests to be marked as root or !root using build tags.
The matching tests are then run as expected using our test runner.

This also disables test caching which is unfriendly to repeated test
running and is an absurd golang default to add.

Lastly this hooks up the testing verbose flag to tests that accept a
debug variable.

These tests aren't enabled on travis yet because of how it installs
golang.
2018-05-09 13:44:01 -04:00
James Shubin
3ced981d28 engine: test: Pass in the go test verbose flag
This hooks up our debug variable to the go test verbose flag.
2018-05-09 12:11:35 -04:00
Jonathan Gold
299080f590 engine: DBus cleanup 2018-05-07 15:57:17 -04:00
James Shubin
a407771eaf test: Catch naked returns and check for canonically named imports
This catches scenarios where we forgot to prefix the error with return.
One of our contributors occasionally made this typo, and since core go
vet didn't (surprisingly) catch it, we should add a test!

It also adds a simple check for import naming aliases. Expanding this
test to add other cases and check for differently named values might
make sense.
2018-05-06 15:18:46 -04:00
Jonathan Gold
d26a6de759 engine: resources: mount: Add a mount resource 2018-05-04 15:53:05 -04:00
Jonathan Gold
9baad56197 util: Move dbus AddMatch const to util package 2018-05-04 15:46:14 -04:00
James Shubin
a589e2ecf3 docs, test: Remove old reference to resources package
Forgot to change this previously. Also updated the resources list in the
documentation.
2018-05-02 15:28:15 -04:00
Jonathan Gold
d7029871b1 engine: resources: nspawn: Remove godbus channel buffer
https://github.com/godbus/dbus/issues/94 is fixed with
https://github.com/godbus/dbus/pull/105, so the
buffered channel is no longer necessary.
2018-05-01 12:19:34 -04:00
Alan Jenkins
b80a505be5 engine: resources: packagekit: Add Arch mapping 'any' for Arch Linux compatibility
Arch Linux uses the architecture name 'any'. This mapping was
missing from mgmt, resulting in an error stating that arch 'any' did not
exist. Adding this mapping allows successful installation of packages
under Arch Linux.
2018-04-30 07:28:58 +01:00
James Shubin
412a25462e test: Improve commit message test
We can classify better now that we have the new engine.
2018-04-21 19:29:26 -04:00
James Shubin
9a8408a092 engine: Small fixes 2018-04-20 21:11:32 -04:00
James Shubin
86a9181e9b puppet: Clean up the GAPI and remove log package
This uses the proper facilities which makes things a bit more uniform.
2018-04-19 01:56:31 -04:00
James Shubin
9969286224 engine: Resources package rewrite
This giant patch makes some much needed improvements to the code base.

* The engine has been rewritten and lives within engine/graph/
* All of the common interfaces and code now live in engine/
* All of the resources are in one package called engine/resources/
* The Res API can use different "traits" from engine/traits/
* The Res API has been simplified to hide many of the old internals
* The Watch & Process loops were previously inverted, but is now fixed
* The likelihood of package cycles has been reduced drastically
* And much, much more...

Unfortunately, some code had to be temporarily removed. The remote code
had to be taken out, as did the prometheus code. We hope to have these
back in new forms as soon as possible.
2018-04-19 01:10:58 -04:00
James Shubin
ef49aa7e08 lang: Don't race with a ^C to the obj.lang calls
If we trigger a close, we must not run the LangClose before we've exited
from the loop, because that loop could race and run code which depends
on LangClose not having run first. So run the loop shutdown, then let
the wait group expire, before shutting down the lang.
2018-04-16 08:38:22 -04:00
James Shubin
acdb497b80 etcd: Pull in default URLs from upstream
This depends on https://github.com/coreos/etcd/pull/6837
2018-04-16 08:38:22 -04:00
James Shubin
4d8faeb826 lib, yamlgraph: Remove old yamlgraph GAPI frontend
I should have removed this a long time ago, but didn't. Now it's done.
The new v2 frontend is losing the v2 name and just replacing v1.
2018-04-16 08:38:22 -04:00
James Shubin
6e0dfdb16f lib: Remove hcl GAPI frontend
This is currently unmaintained and the normal mcl language exists which
is preferable to this. As a result, I'm removing this for now to make an
upcoming refactor easier. We can add it back easily if someone has
interest.
2018-04-16 08:38:22 -04:00
James Shubin
754480a9b6 readme: Add patreon link to README file 2018-04-16 08:37:49 -04:00
jesus m. rodriguez
15681ddca9 build: Add help to main Makefile 2018-04-08 23:09:47 -04:00
Jonathan Gold
3c8d424a43 util: Rename SortedStrSliceCompare and move to util package 2018-03-29 00:55:18 -04:00
jonathangold
7d7eb3d1cd resources: net: Add net resource
This patch adds a net resource for managing network interfaces, based
around netlink.
2018-03-27 17:46:00 -04:00
James Shubin
8500339ba6 lang: Add mutex around Expr String/Value/SetValue calls
The golang race detector complains about some unimportant races, and as
a result, this patch adds some mutexes to prevent these test failures.
We actually lock more than necessary, because a more accurate version
would be more time consuming to implement. Secondarily, it's likely that
in the future we replace this function graph algorithm with something
that is guaranteed to be glitch-free and supports back pressure.
2018-03-27 15:30:59 -04:00
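The locking pattern being described is the standard one: serialize every read
and write of the shared value behind a single mutex. A generic sketch, not the
actual Expr implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// exprValue sketches the pattern: every read and write of the stored value
// goes through one mutex, which silences the race detector at the cost of
// slightly coarser locking.
type exprValue struct {
	mu sync.Mutex
	v  string
}

func (e *exprValue) SetValue(v string) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.v = v
}

func (e *exprValue) Value() string {
	e.mu.Lock()
	defer e.mu.Unlock()
	return e.v
}

func main() {
	e := &exprValue{}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) { defer wg.Done(); e.SetValue(fmt.Sprint(i)) }(i)
	}
	wg.Wait()
	fmt.Println(e.Value())
}
```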
James Shubin
06ee05026b lang: funcs: Don't race when building an initial graph
I noticed a very intermittent test failure where interpret would end up
running, but *fail* because a value wasn't present. This should never
happen, because the function engine is designed to only call interpret
when there has been at least one value produced for every node in the
AST. So what is the bug that would produce:

interpret error: could not interpret: func value does not yet exist

About 20 minutes ago while I was getting to bed, it occurred to me where
to look! Out of bed and to the laptop, and after briefly reminding
myself of the code, I think I've found the issue.

What I think was happening, was that an AST node would produce a value,
and send a message on the aggregate channel. This channel is monitored,
and every time it receives a message, it checks to ensure that all the
values now exist before producing a message for interpret to run.
However, this AST node was not the final one to be produced, but before
the message was read by the aggregate channel, the last remaining AST
node ran and set its "loaded" state to `true`, but *before* its value
was made available for the aggregate channel to read. That channel then
occasionally won the race and tried to access a value before it existed,
thus causing our intermittent bug.

At least I think that's what was going on. Hopefully this patch fixes
this, if not, then there's another bug hiding too! And of course, this
entire function engine could do with some proper analysis from someone
familiar with glitches, back pressure, and FRP parallelism.

One particular note was that I used my brain, not some fancy debugging
tool to find this. Maybe skilled debuggers can fork lift their tools
onto this type of problem, but I haven't those skills!

¯\_(ツ)_/¯
2018-03-15 23:22:21 -04:00
James Shubin
ddefb4e987 integration: Log the instance output
This adds logging so that you can dig deeper into crashes or issues.
2018-03-13 06:38:21 -04:00
James Shubin
62d1fc7ed3 test, integration: Add cluster primitives to integration framework
This further extends the integration framework to add some simple
primitives for building clusters. More complex primitives and patterns
can be added in the future, but this should serve the general cases.
2018-03-13 06:38:21 -04:00
James Shubin
f3b99b3940 test, integration: Add an integration test framework
This adds an initial implementation of an integration test framework for
writing more complicated tests. In particular this also makes some small
additions to the mgmt core so that testing is easier.
2018-03-13 06:38:21 -04:00
Lauri Ojansivu
97c11c18d0 resources: svc: Add activating state
There seems to be an "activating" state that some services can reach.
Related #369
2018-03-10 15:27:07 +02:00
James Shubin
93a909551f recwatch: Remove the ConfigWatch functionality
This is some now dead code which was buggy and badly written. Time to
get rid of unnecessary technical debt so that we can move forward!
2018-03-09 22:26:10 -05:00
James Shubin
ea52eb78d9 lib: Remove remote execution from core
I have an improved design for remote execution as a resource. Since I
need to get rid of some technical debt to clean up the resource API, and
this main loop, a good first step is to remove its invocation. It will
be coming back as a resource as soon as possible!
2018-03-09 17:07:58 -05:00
James Shubin
fdd698dade resources: svc: Add deactivating state
There seems to be a "deactivating" state that some services can reach.
Add this case, and switch the panic to an error.
2018-03-09 17:04:30 -05:00
James Shubin
173ccf6861 pgraph: Don't panic on new or nil graphs
This adds a bit of flexibility so that we can still run a topological
sort on a nil graph.
2018-03-05 01:58:43 -05:00
James Shubin
a5c3db6303 lang: Misc fixes for typos and grammar 2018-02-28 00:35:22 -05:00
James Shubin
3ad7097c8a lang: Add internal, resource specific edges
This adds the ability to specify internal, resource specific edges, with
and without notifications. We use the special words: "Notify", "Before",
"Listen", and "Depend". They must have the first character capitalized.
They also support the "elvis" operator.
2018-02-27 23:26:25 -05:00
James Shubin
8e01b6db48 lang: Add a resource-specific elvis operator
This allows you to omit a resource parameter programmatically, and
avoids the need of an `undef` or `nil` in our language, which would
contribute to programming errors, crashes, and overall reduced safety.
2018-02-27 17:29:49 -05:00
James Shubin
67607eba8b travis: Fix the OSX builds
I don't use OSX, but here's a bit of sympathy for the poor travis OSX
builder that can't understand apt ;)
2018-02-27 17:28:09 -05:00
James Shubin
6e7a71d01a travis: Attempt to cut down on flaky failures
Travis has been spuriously failing a LOT. Hopefully this reduces some of
those failures.
2018-02-27 17:17:29 -05:00
James Shubin
ff69a82b57 lang: unification: Fix panic in struct/func cmp of partials
This was discovered by user aequitas. I modified his patch slightly, and
added some comments and a test.
2018-02-27 16:39:10 -05:00
James Shubin
df1e50e599 lang: funcs: Add math pow function and a few examples
Just a few small things I think should be committed.
2018-02-25 19:48:25 -05:00
James Shubin
6370f0cb95 lang: Add edges to lexer and parser
This adds some initial syntax for external edges to the language.

There are still improvements which are necessary for send/recv.
2018-02-25 19:29:27 -05:00
James Shubin
80784bb8f1 lang: types, funcs: Add simple polymorphic function API
This adds a simple API for adding static, polymorphic, pure functions.
This lets you define a list of type signatures and the associated
implementations to overload a particular function name. The internals of
this API then do all of the hard work of matching the available
signatures to what statically type checks, and then calling the
appropriate implementation.

While it might seem as if this would only work for function polymorphism
with a finite number of possible types, and this is mostly true, it
also allows you to add the `variant` "wildcard" type into your
signatures, which lets you match a wider set of signatures.

A canonical use case for this is the len function which can determine
the length of both lists and maps with any contained type. (Either the
type of the list elements, or the types of the map keys and values.)

When using this functionality, you must be careful to ensure that there
is only a single mapping from possible type to signature so that the
"dynamic dispatch" of the function is unique.

It is worth noting that this API won't cover functions which support an
arbitrary number of input arguments. The well-known case of this,
printf, is implemented with the more general function API which is more
complicated.

This patch also adds some necessary library improvements for comparing
types to partial types, and to types containing variants.

Lastly, this fixes a bug in the `NewType` parser which parsed certain
complex function types wrong.
2018-02-25 02:17:13 -05:00
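As a toy picture of "dispatch by signature" (not the real mgmt API; the
signature strings and types here are made up for illustration): each registered
signature maps to exactly one implementation, and the statically inferred type
selects which one runs.

```go
package main

import "fmt"

// impl is one concrete implementation behind an overloaded function name.
type impl func(arg interface{}) interface{}

// lenImpls maps a (made-up) signature string to its implementation.
var lenImpls = map[string]impl{
	"func([]str) int": func(arg interface{}) interface{} {
		return len(arg.([]string))
	},
	"func(map{str: str}) int": func(arg interface{}) interface{} {
		return len(arg.(map[string]string))
	},
}

// callLen dispatches to the unique implementation whose signature matched.
func callLen(sig string, arg interface{}) (interface{}, error) {
	f, ok := lenImpls[sig]
	if !ok {
		return nil, fmt.Errorf("no implementation for signature: %s", sig)
	}
	return f(arg), nil
}

func main() {
	out, _ := callLen("func([]str) int", []string{"a", "b"})
	fmt.Println(out) // 2
}
```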
James Shubin
40dcd6ec99 all: Misc fixes and test fixes 2018-02-25 02:13:51 -05:00
karimb
06f2d65500 docs: Add docs for docker usage 2018-02-24 12:04:23 +01:00
Johan Bloemberg
98d3c299ff project: Add me 2018-02-23 19:59:55 -05:00
James Shubin
46da3a34a0 docs: Add two new faq entries 2018-02-23 19:44:54 -05:00
Johan Bloemberg
f33f84d2f2 lang: Add getenv function
$x = getenv("NAME")
$y = defaultenv("NOTEXIST", "defaultvalue")
$z = hasenv("NAME")
$a = env()
$b = maplookup($a, "NAME", "defaultvalue")
2018-02-23 20:02:13 +01:00
James Shubin
a785a43ef3 travis: Attempt to workaround the constant travis failures
I'm beginning to think we need a more reliable CI...
2018-02-22 20:26:52 -05:00
James Shubin
b0911c6d70 lang: funcs: simple: Don't block on simple, pure, static functions
I forgot to handle the special case of a function using this API that
received no inputs. It was waiting for the first input to come in, and
as a result was never producing any output.

Remember that functions like this should *almost* be thought of as
constants of the system. You would expect their output to never change
during the lifetime of a particular program invocation.
2018-02-22 19:26:18 -05:00
James Shubin
81a0e9e8c7 build: Relocate time command to the front for readability
This makes the output more readable in my terminal.
2018-02-22 17:49:33 -05:00
Johan Bloemberg
06d33a45f5 docs, misc: Add tool references, .editorconfig for mcl 2018-02-22 17:23:11 -05:00
Jonathan Gold
cfb8deac56 project: Add Jonathan Gold to AUTHORS 2018-02-22 17:19:58 -05:00
Johan Bloemberg
9544ab2e02 recwatch: Fix watching newly created files on macOS
Fixes: https://github.com/purpleidea/mgmt/issues/33
2018-02-22 16:52:26 -05:00
Oliver Frommel
318fe4a5dc misc: Small fixes for makedeps script
- install Go distribution package only if no go binary found
2018-02-22 16:50:13 -05:00
James Shubin
5597183391 docs: Add two faq entries about the type system 2018-02-22 16:45:54 -05:00
James Shubin
05c60d9a59 test, docs: Restrict long lines in markdown linter
It's getting out of hand...
2018-02-22 16:19:23 -05:00
Peter Oliver
f01eea33e9 emacs: Bundle an Emacs major mode, mgmtconfig-mode
This provides syntax highlighting, commenting, and rudimentary indentation of the mgmt language.
2018-02-22 15:57:05 -05:00
James Shubin
9992c367bf misc: Update golint to new location
Somehow this got changed...
2018-02-22 02:01:14 -05:00
James Shubin
d275a23a81 misc: Add dependency on time package
Some environments apparently don't have this installed. We have it in
certain places where we like to time things.
2018-02-21 22:52:41 -05:00
James Shubin
14ddd7c196 golint: Fix ineffassign mistakes 2018-02-21 22:52:41 -05:00
James Shubin
0815b20b76 lang: funcs: Fix up some old comments
Woops, bad copy-paste issues.
2018-02-21 22:52:41 -05:00
James Shubin
cffdb06181 test, docs: Add a linter for testing markdown, and fix up our docs
While writing docs, I couldn't remember what the correct style was
supposed to be, and I remember someone complaining about this
previously, so I decided to add a linter! I excluded a bunch of annoying
style rules, but if we find more we can add those to the list too.

Hopefully this gives us a more consistent feel throughout.
2018-02-21 22:52:41 -05:00
James Shubin
837388ae4e lang: types, funcs: Add simple function API
This patch adds a simple function API for writing simple, pure
functions. This should reduce the amount of boilerplate required for
most functions, and make growing a stdlib significantly easier. If you
need to build more complex, event-generating functions, or statically
polymorphic functions, then you'll still need to use the normal API for
now.

This also makes all of these pure functions available automatically
within templates. It might make sense to group these functions into
packages to make their logical organization easier, but this is a good
enough start for now.

Lastly, this added some missing pieces to our types library. You can now
use `ValueOf` to convert from a `reflect.Value` to the corresponding
`Value` in our type system, if an equivalent exists.

Unfortunately, we're severely lacking in tests for these new types
library additions, but look forward to growing some in the future!
2018-02-21 21:32:31 -05:00
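The general idea of calling an arbitrary pure Go function through reflection
can be sketched as follows; the real API converts to and from mgmt's own
types.Value rather than plain interface{} values, so treat this only as an
illustration:

```go
package main

import (
	"fmt"
	"reflect"
)

// wrap converts plain inputs to reflect.Values, calls the wrapped pure
// function, and converts the results back to plain values.
func wrap(fn interface{}) func(args ...interface{}) []interface{} {
	v := reflect.ValueOf(fn)
	return func(args ...interface{}) []interface{} {
		in := make([]reflect.Value, len(args))
		for i, a := range args {
			in[i] = reflect.ValueOf(a)
		}
		out := []interface{}{}
		for _, r := range v.Call(in) {
			out = append(out, r.Interface())
		}
		return out
	}
}

func main() {
	add := wrap(func(a, b int) int { return a + b })
	fmt.Println(add(1, 2)) // [3]
}
```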
Johan Bloemberg
cbd2bdd4c5 travis: Retry flaky apt update at build start 2018-02-20 21:41:08 +01:00
Johan Bloemberg
f34ca3a5ca travis: Improve travis speed by only building 1 go version for osx 2018-02-20 21:41:03 +01:00
James Shubin
4898297cce travis: Avoid notification noise from forks
Encrypt name of IRC channel to workaround forks spamming us with their
testing messages.

Docs: https://docs.travis-ci.com/user/environment-variables/#Defining-encrypted-variables-in-.travis.yml
2018-02-20 14:12:09 -05:00
Johan Bloemberg
ffcc2aa2af lib: Provide detailed feedback about invalid URLs 2018-02-20 10:29:19 -05:00
Johan Bloemberg
158fb8d31c etcd: Warn about invalid configuration, clarify --no-server 2018-02-20 10:29:19 -05:00
Johan Bloemberg
07714c67cb cli: Log errors returned by Run functions
Turns

```
$ ./mgmt run
00:44:15 hello.go:46: This is: mgmt, version: 0.0.14-30-ge3a2648
00:44:15 hello.go:47: Main: Start: 1518738255855525279
$
```

Into

```
$ ./mgmt run
01:07:02 hello.go:46: This is: mgmt, version: 0.0.14-30-ge3a2648-dirty
01:07:02 hello.go:47: Main: Start: 1518739622517652739
01:07:02 cli.go:167: Main: Error: can't create prefix: mkdir /var/lib/mgmt/: permission denied
$
```
2018-02-20 10:29:19 -05:00
James Shubin
f12e502c61 lang: funcs: Rename things for consistency
Also fix a few copy-pasta issues in the documentation.
2018-02-18 19:47:14 -05:00
Toshaan Bharvani
2fdf8d5dc3 lang: Interface sorting order
Golang does not always iterate over the same map/list in the same order,
so we use a helper list to sort it.

Signed-off-by: Toshaan Bharvani <toshaan@vantosh.com>
2018-02-18 18:32:15 -05:00
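The fix being described is the usual Go idiom for deterministic iteration:
collect the map keys into a slice, sort it, then iterate over the sorted keys.
For example:

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	fields := map[string]int{"c": 3, "a": 1, "b": 2}

	// Go randomizes map iteration order, so collect the keys into a helper
	// slice and sort it to get a stable, repeatable order.
	keys := make([]string, 0, len(fields))
	for k := range fields {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	for _, k := range keys {
		fmt.Println(k, fields[k]) // always a, b, c
	}
}
```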
James Shubin
28ec7a1e54 etcd: scheduler: Remove etcd 3.2 specific hacks
Now that we're using etcd 3.3, we can simplify our code now that our
patches are in a release.
2018-02-18 18:28:45 -05:00
James Shubin
24cb2e6450 etcd: Increase the default max txn op count
The default of 128 is fairly low for large code bases. Please let us
know if you hit the new limit of 512.
2018-02-18 18:28:00 -05:00
James Shubin
915b022901 test: Show test output as it happens 2018-02-18 17:31:45 -05:00
James Shubin
4a623c1891 docs: Add an entry to the faq about converged timeouts 2018-02-18 16:28:50 -05:00
Johan Bloemberg
7508161c39 test: Exclude generated files from golint 2018-02-18 15:07:54 -05:00
Johan Bloemberg
d33861ccb4 test: Fix augeas test for macOS, improve test debuggability
- resolve a discrepancy in augeas behaviour on macOS
- on macOS `sed` requires an argument for `-i`.
- made the test fail as early as it can
- provide information about why the test is failing
2018-02-18 14:36:59 -05:00
Johan Bloemberg
572b2575c5 test: Export the mgmt command to be used during test 2018-02-18 14:19:05 -05:00
James Shubin
d99190b166 travis: Add golang 1.10.x to builds 2018-02-18 13:58:11 -05:00
James Shubin
bc91b03276 docs: Add faq entry about production readiness 2018-02-18 13:34:21 -05:00
James Shubin
9ba893c06c etcd: Bump to etcd v3.3 and golang 1.9
This moves us to etcd v3.3 (a new major release) which has some useful
features but that requires version 1.9 of golang.
2018-02-15 18:47:55 -05:00
James Shubin
27e51f1bcb authors: Clarify wording in AUTHORS file 2018-02-15 18:45:32 -05:00
James Shubin
e3a26483e8 test: Improve gometalinter test so that it skips generated files
This should improve things significantly, and avoid the failures now
that we're testing after the files have already been built.
2018-02-15 16:26:06 -05:00
James Shubin
33a4fd6fbe build: Add -i flag to go build
It got accidentally dropped, but is crucial for happiness.
See: https://purpleidea.com/blog/2017/02/26/faster-golang-builds/
2018-02-15 12:24:56 -05:00
Johan Bloemberg
b34b359860 test: Streamline test suite a little
This change aims to streamline the integration test suite and reduce friction when running (parts of) test suites.

Changes:
- add `test-testname` to makefile to easily run one suite
- made skipping tests first class citizen in test.sh (all available testsuites and the reasons they are skipped are now better exposed and discovered)
- suppress some output of gotest unless there is an error
- no longer build binary for examples and gotest suites
- removed .SILENT from makefile as it being applied to only some targets makes it feel weird (I just learned about this option btw, feel free to comment on this change)
- move individual tests out of `test.sh` and into `test-misc.sh`
- introduced the concept of testsuites to `test.sh`
2018-02-15 17:21:49 +01:00
James Shubin
8b9491823d etcd: Fix golint issue in test
Found with new gometalinter version.
2018-02-14 18:53:10 -05:00
James Shubin
b8b6e5266f build: Improve speed of make
Generating a huge amount of unnecessary targets caused "noop" make runs
to take seven seconds on my machine. This limits the list of these
drastically, and "noop" make's are now < 1s on my machine.

Issue discussed in:
https://github.com/purpleidea/mgmt/issues/331
2018-02-14 18:42:12 -05:00
James Shubin
5f80c1ac2a resources: nspawn: Don't panic if one svc is nil
Not sure why one of them was nil, but this prevents the panic.
2018-02-14 16:44:21 -05:00
James Shubin
3af7e815d0 docs: Add newly recorded talks and blog posts about mgmt 2018-02-14 15:45:40 -05:00
James Shubin
714afe35a1 test: Fix broken gometalinter test
The test for gometalinter got silently broken in an earlier commit.
Look for the missing space that was added back in this commit to see
why! In any case, this now fixes some of the things that weren't
previously caught by this change.

If anyone knows how to run these sorts of tests properly so that entire
packages are tested and so that we can enable additional tests, please
let me know!

It's also unclear why goreportcard catches a few additional problems
which aren't found by running this ourselves.

See:
https://goreportcard.com/report/github.com/purpleidea/mgmt
for more information.
2018-02-14 14:34:36 -05:00
James Shubin
b0a8f585c3 readme: Fix broken link 2018-02-14 14:03:07 -05:00
James Shubin
1a2918082d docs: Add FAQ entry about vendoring dependencies 2018-02-14 14:01:10 -05:00
Johan Bloemberg
22e4dfa534 build: Unify build/crossbuild
Changes:

- allows explicit crossbuild targets (eg: `make mgmt-darwin-amd64`)
- adds darwin/amd64 to default crossbuild targets
- gitignore only build artifacts (eg: not all files starting with `mgmt-`)
- `build` and `crossbuild` target now utilize the same build function (`build` still generates only a `mgmt` binary for the current os/arch)
- test crossbuilding
- allow specifying custom GOOSARCHES envvar to override defaults
- crossbuild artifacts go into `build/` now
- add `build-debug` which includes symbol tables and debug info
- the build function now has `-s -w` linker arguments which discards some debug info afaict, to build a debug release use `make build-debug`

On my mac crossbuilding won't work unless I disable augeas and libvirt:

```
~/.g/s/g/p/mgmt (build|●1✚8…3) $ make build
Generating: bindata...
Generating: lang...
/Applications/Xcode.app/Contents/Developer/usr/bin/make --quiet -C lang
Building: mgmt, os/arch: darwin-amd64, version: 0.0.14-12-g94c8bc1-dirty...
env GOOS=darwin GOARCH=amd64 time go build -ldflags "-X main.program=mgmt -X main.version=0.0.14-12-g94c8bc1-dirty -s -w" -o mgmt-darwin-amd64 ;
        7.14 real        10.36 user         1.73 sys
mv mgmt-darwin-amd64 mgmt
```

```
~/.g/s/g/p/mgmt (build|●1✚8…3) $ time env GOTAGS='noaugeas novirt' make crossbuild
Generating: bindata...
Generating: lang...
/Applications/Xcode.app/Contents/Developer/usr/bin/make --quiet -C lang
Building: mgmt, os/arch: linux-amd64, version: 0.0.14-12-g94c8bc1-dirty...
env GOOS=linux GOARCH=amd64 time go build -ldflags "-X main.program=mgmt -X main.version=0.0.14-12-g94c8bc1-dirty -s -w" -o mgmt-linux-amd64 -tags 'noaugeas novirt';
       18.48 real        50.02 user         5.83 sys
Building: mgmt, os/arch: linux-ppc64, version: 0.0.14-12-g94c8bc1-dirty...
env GOOS=linux GOARCH=ppc64 time go build -ldflags "-X main.program=mgmt -X main.version=0.0.14-12-g94c8bc1-dirty -s -w" -o mgmt-linux-ppc64 -tags 'noaugeas novirt';
       29.83 real        85.09 user        11.54 sys
Building: mgmt, os/arch: linux-ppc64le, version: 0.0.14-12-g94c8bc1-dirty...
env GOOS=linux GOARCH=ppc64le time go build -ldflags "-X main.program=mgmt -X main.version=0.0.14-12-g94c8bc1-dirty -s -w" -o mgmt-linux-ppc64le -tags 'noaugeas novirt';
       29.74 real        85.84 user        11.76 sys
Building: mgmt, os/arch: linux-arm64, version: 0.0.14-12-g94c8bc1-dirty...
env GOOS=linux GOARCH=arm64 time go build -ldflags "-X main.program=mgmt -X main.version=0.0.14-12-g94c8bc1-dirty -s -w" -o mgmt-linux-arm64 -tags 'noaugeas novirt';
       28.33 real        83.24 user        11.40 sys
Building: mgmt, os/arch: darwin-amd64, version: 0.0.14-12-g94c8bc1-dirty...
env GOOS=darwin GOARCH=amd64 time go build -ldflags "-X main.program=mgmt -X main.version=0.0.14-12-g94c8bc1-dirty -s -w" -o mgmt-darwin-amd64 -tags 'noaugeas novirt';
        7.16 real        10.15 user         1.74 sys
      114.71 real       315.26 user        42.44 sys
```
2018-02-14 12:37:49 -05:00
Johan Bloemberg
41eb850b3d debian: Add graphviz and packagekit runtime dependencies 2018-02-12 15:11:13 -05:00
Wim
3a50171d19 build: Add gcc,pkg-config deps 2018-02-12 15:10:19 -05:00
Wim
6c9e0ff974 docs: Add GOPATH/bin to PATH 2018-02-12 15:09:32 -05:00
James Shubin
644e5164b1 test: Increase timeouts for when travis is slow
Hopefully this cuts down on spurious failures.
2018-02-12 15:08:47 -05:00
James Shubin
4fefa9f2f0 travis: Disable fast finish for now
This causes a notification for each entry in the matrix which is now too
many emails. When travis adds an option to send just one notification,
but to still allow you to fast finish, then please lmk :)
2018-02-10 18:49:12 -05:00
Johan Bloemberg
4c793e0ee6 misc: Fix graphviz output for hostnames with dot in them 2018-02-11 00:02:52 +01:00
James Shubin
68a7de41ae etcd: Update broken link 2018-02-10 10:38:54 -05:00
dsx
94c8bc1de9 debian: Add packaging 2018-02-10 05:12:31 -05:00
Johan Bloemberg
8fb0373f82 resources: Do not return GID for UID lookup
On Linux it is convention for users to have a group with the same GID as the user's UID. On macOS this is not the case. This broke the test, which led to discovering this bug.
2018-02-10 05:01:12 -05:00
Johan Bloemberg
d567dc3769 lang: Use universal way to retrieve load
Sysinfo is not supported on macOS and results in a build error.
2018-02-10 05:01:12 -05:00
Johan Bloemberg
ba21554c5f build, docs: Improve macOS building
- New docker command for quickly running tasks in a Linux environment.
- Updated docs with macOS specific details.
- Fixed some test issues.
- Add (fallible) macOS test target for Travis.
2018-02-10 05:01:12 -05:00
James Shubin
e37bb3ac8a test: Add new test for language prefix 2018-02-07 21:52:06 -05:00
Carsten Thiel
79845f0dfd test: Refactor unification_test to subtests
Testsuite for unification now uses subtests feature.
2018-02-07 16:13:18 +01:00
Carsten Thiel
eb33a5a5df docs: Improve file resource documentation
Info on how to create a directory.
Explain more parameter options.
2018-02-07 14:19:14 +01:00
jonathangold
adbe9c7be1 misc: Replace missing go-bindata dependency 2018-02-07 06:15:03 -05:00
James Shubin
b19583e7d3 lang: Initial implementation of the mgmt language
This is an initial implementation of the mgmt language. It is a
declarative (immutable) functional, reactive, domain specific
programming language. It is intended to be a language that is:

* safe
* powerful
* easy to reason about

With these properties, we hope this language, and the mgmt engine will
allow you to model the real-time systems that you'd like to automate.

This also includes a number of other associated changes. Sorry for the
large size of this patch.
2018-01-20 08:09:29 -05:00
Joe Julian
1c8c0b2915 misc: Don't install packages that are already installed
pacman needs `--needed` to prevent reinstalling packages that are
already installed. Additionally I added `--asdeps` to allow later
cleanup of unneeded dependencies.
2018-02-03 17:17:23 -05:00
Joe Julian
bee1aa00f1 misc: Use bash's command instead of which
Bash has a built-in command, `command`, that will search the path and
return the full path to a command if it exists (or an exit code of 1 if
it does not), removing the dependency on the `which` package.
2018-02-03 11:23:52 -05:00
Toshaan Bharvani
077b6e540a build: Add cross building option
Added a cross build option using a buildrelease function

Signed-off-by: Toshaan Bharvani <toshaan@vantosh.com>

build: Add gitignore entry for mgmt-* binaries

Signed-off-by: Toshaan Bharvani <toshaan@vantosh.com>

build: Update makefile based upon feed back

* rename cross to crossbuild
* added crossbuild to PHONY

Signed-off-by: Toshaan Bharvani <toshaan@vantosh.com>

build: Change the order of .PHONY as per the rest of the file

Signed-off-by: Toshaan Bharvani <toshaan@vantosh.com>
2018-02-03 16:43:40 +01:00
James Shubin
e8b03545bb test: Don't fail on tag builds
This seems to be causing our failures with:

$ git fetch --unshallow
fatal: Couldn't find remote ref refs/heads/0.0.x

where x is some tag.

Hopefully this doesn't break the other use case we added this patch for!
2018-01-11 18:03:56 -05:00
James Shubin
70c59eab4a misc: Don't display script name in output 2018-01-11 18:03:04 -05:00
jonathangold
3c677543e0 resources: aws: ec2: Fix closed channel handling
If awschan closes, longpollWatch and snsWatch return nil
instead of an error. This will prevent the engine from
shutting down in case we choose to close the channel
early or from other struct methods.
2018-01-06 15:15:30 -05:00
jonathangold
c455ef2c62 resources: aws: ec2: Send IP addresses and InstanceID 2018-01-03 21:34:28 -05:00
Jonathan Gold
032d0992d6 resources: aws: ec2: Refactor CheckApply
CheckApply was rewritten, using the new describe methods to improve
readability and maintainability.
2018-01-03 21:34:28 -05:00
jonathangold
67837a47ac resources: aws: ec2: Refactor longpollWatch
Complete rewrite of longpollWatch() for correctness and maintainability.
2018-01-03 21:34:28 -05:00
Jonathan Gold
32e3c4e029 resources: aws: ec2: Refactor longpollWatch
This patch simplifies longpollWatch by getting rid of some unnecessary
API calls and breaking the waiters out into their own functions.
2018-01-03 21:34:28 -05:00
Jonathan Gold
76fcb7a06e resources: aws: ec2: Wait for stop and terminate concurrently
In longpollWatch it was no longer sufficient to use only
WaitUntilInstanceStopped as it would block if the instance was
terminated. This patch launches two goroutines in its place, one
waits until the instance stops and the other waits until it
terminates. When either one returns, it cancels their context,
and execution continues.
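
Roughly, the pattern looks like this; a minimal, self-contained sketch where
the waiters are stand-ins for the SDK calls, not the actual mgmt code:

```
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// Stand-ins for the SDK "WaitUntil..." calls: in this demo the "stopped"
// waiter fires first and the "terminated" waiter blocks until cancelled.
func waitUntilStopped(ctx context.Context) error {
	select {
	case <-time.After(50 * time.Millisecond):
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func waitUntilTerminated(ctx context.Context) error {
	<-ctx.Done()
	return ctx.Err()
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	results := make(chan error, 2) // buffered so neither goroutine blocks on send
	var wg sync.WaitGroup
	for _, wait := range []func(context.Context) error{waitUntilStopped, waitUntilTerminated} {
		wg.Add(1)
		go func(fn func(context.Context) error) {
			defer wg.Done()
			results <- fn(ctx)
		}(wait)
	}

	err := <-results // whichever waiter returns first
	cancel()         // unblock the other waiter
	wg.Wait()
	fmt.Println("first waiter returned:", err)
}
```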
2018-01-03 21:34:28 -05:00
Jonathan Gold
149a2188e2 resources: aws: ec2: Retry on exceeded wait attempts error
The waiters now return the AwsErr error "ResourceNotReady: exceeded wait
attempts" when the instance state does not converge after 40 retries.
During longpollWatch() we need to detect this error and continue to
the top of the loop so we can restart the waiters and keep watching for
events.
2018-01-03 21:34:28 -05:00
Jonathan Gold
08e7caea6b resources: aws: ec2: CheckApply fix pending and stopping cases
If CheckApply was called when the instance was pending or stopping, it
would return an error. This patch suppresses these errors and tells the
engine that the state can't yet be changed.
2018-01-03 21:34:28 -05:00
Jonathan Gold
e330ebc8c9 resources: aws: ec2: Verify SNS message signatures 2018-01-03 21:34:28 -05:00
Jonathan Gold
388a08e13a resources: aws: ec2: Check that policy.Statement != nil 2018-01-03 21:34:28 -05:00
Jonathan Gold
9ba9ef1cbf resources: aws: ec2: Close closeChan before server shutdown
This patch makes sure that closeChan is closed as soon as the main loop
returns, so any channel operations are unblocked before we run shutdown.
This ensures that the server's goroutine can return before shutdown
completes and we don't panic by trying to serve the client after
shutdown returns.
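
The ordering can be sketched with the standard library http server; the
channel names here are illustrative, not the resource's actual fields:

```
package main

import (
	"context"
	"log"
	"net"
	"net/http"
	"time"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	srv := &http.Server{Handler: http.NotFoundHandler()}
	go srv.Serve(ln) // returns http.ErrServerClosed once Shutdown is called

	closeChan := make(chan struct{})
	done := make(chan struct{})
	go func() { // helper goroutine that owns the shutdown
		defer close(done)
		<-closeChan // unblocks as soon as the main loop is done
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		srv.Shutdown(ctx) // only now do we stop serving
	}()

	// ... the main loop would run here ...

	close(closeChan) // signal all helpers *before* shutdown runs
	<-done
	log.Println("shut down cleanly")
}
```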
2018-01-03 21:34:27 -05:00
Jonathan Gold
fac004b774 resources: aws: ec2: Update postHandler to process messages 2018-01-03 21:34:27 -05:00
Jonathan Gold
8cd3f28734 resources: aws: ec2: Authorize CloudWatch to publish to sns 2018-01-03 21:34:27 -05:00
Jonathan Gold
dcd23fcf75 resources: aws: ec2: Add CloudWatch rule and target SNS
This patch creates the cloudwatch rule that detects ec2 instance
state changes, and targets the rule to publish on our sns topic
which, in turn, pushes those event notifications to our endpoint.
2018-01-03 21:34:27 -05:00
Jonathan Gold
1162485c2c resources: aws: ec2: Subscribe SNS endpoint to topic
This patch adds methods to subscribe and confirm the subscription
to the sns topic.
2018-01-03 21:34:27 -05:00
Jonathan Gold
966172eac6 resources: aws: ec2: Use custom listener for snsServer
This patch replaces the call to Server.ListenAndServe() with
Server.Serve(listener) in order to make sure the listener is up
and running before we subscribe to the topic in a future patch.
2018-01-03 21:34:27 -05:00
James Shubin
12fce52cd7 legal: Happy 2018 everyone...
Done with:

ack '2017+' -l | xargs sed -i -e 's/2017+/2018+/g'

Checked manually with:

git add -p

Hello to future James from 2019, and Happy Hacking!
2018-01-03 21:22:07 -05:00
Felix Frank
5ca1e2a23f puppet: Avoid empty parameters to puppet mgmtgraph
This solves an issue first observed with golang 1.8.

Creating an exec.Command with an empty string parameter (when no puppet.conf
file is specified) would lead to an error from Puppet, stating that an
unexpected argument was passed to "puppet mgmtgraph print".

The workaround is to not include *any* positional argument (not even the
empty string) when --puppet-conf is not used.
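
The workaround amounts to building the argument list conditionally, roughly
like this sketch (the flag name is illustrative):

```
package main

import (
	"fmt"
	"os/exec"
)

// buildArgs only includes the optional config argument when it is set; it is
// never passed as an empty string.
func buildArgs(puppetConf string) []string {
	args := []string{"mgmtgraph", "print"}
	if puppetConf != "" {
		args = append(args, "--config", puppetConf) // flag name is illustrative
	}
	return args
}

func main() {
	cmd := exec.Command("puppet", buildArgs("")...)
	fmt.Println(cmd.Args) // [puppet mgmtgraph print] -- no stray "" argument
}
```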
2017-12-26 00:18:46 +01:00
Paul Morgan
98f8a61e83 git: Configure editorconfig to indent with tabs in bash scripts
This follows `test/test-bashfmt.sh` style check(s).
2017-12-20 21:09:15 +00:00
Paul Morgan
2e86d7c5ab git: Ensure the tagging script is idempotent 2017-12-20 21:04:57 +00:00
Jonathan Gold
62ca12608d cli: Add license flag
This patch adds the option to print the license with a cli flag. It
uses go-bindata to store the license file. The file is generated by
running `make bindata` and the result is stored in the bindata
directory.
2017-12-08 00:57:58 -05:00
Jonathan Gold
406aa55667 resources: virt: Update libvirt-xml target
Builds started failing due to go-libvirt-xml 6d97448. In that patch,
the DomainChannelTarget struct was changed from having a single type
field, to having an individual field for each virtualization type.

This patch updates the connection check in Init to reflect the changes
to go-libvirt-xml, so that builds no longer fail.
2017-11-29 19:03:56 -05:00
James Shubin
a76dce8b15 docs: Add missing blog post about augeas resource 2017-11-26 17:15:49 -05:00
James Shubin
b01d453ae3 docs: Refresh documentation to provide a better new user experience
This does some cleanups and moves some things around for a better
experience. If you're an expert in this area, or are a new user who has
some feedback about their first impressions and experiences, please let
us know!
2017-11-25 20:45:57 -05:00
Guillaume Herail
ac629404f4 test: Switch to goimports instead of gofmt
see https://github.com/purpleidea/mgmt/pull/256#issuecomment-346360414
2017-11-25 06:49:00 -05:00
Guillaume Herail
3575d597f7 resources: Add User/Group to ExecRes 2017-11-24 10:38:16 -05:00
Toshaan Bharvani
2affcba3b4 build: Added build option to strip binary
This is a build option in Golang that will strip the binary.
The binary becomes about 50% smaller.

Signed-off-by: Toshaan Bharvani <toshaan@vantosh.com>
2017-11-24 10:26:48 -05:00
James Shubin
846c5f8762 test: Add another check for off-by-one-error commit tags 2017-11-24 09:46:32 -05:00
Julien Pivotto
086af712d2 example: Remove content out of directory definition
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-11-24 14:26:20 +01:00
Julien Pivotto
2b6e39f283 build: Remove go 1.3 and 1.4 support
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-11-24 05:35:09 -05:00
Julien Pivotto
472663193a prometheus: Initialize all metrics
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-11-24 11:02:36 +01:00
James Shubin
879ff838ae resources: Replace golang 1.6 specific code with newer 1.7 version
We now require at least 1.8 so we might as well fix this up.
2017-11-23 10:57:11 -05:00
Julien Pivotto
5e9a085e39 exec: Add autoEdges between ExecRes and PkgRes
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-11-23 16:30:22 +01:00
Julien Pivotto
c2b5729ebd build: Build mgmt on any go file change
Prior to this commit, running make would only rebuild mgmt when
main.go was changed. This meant that a `make clean build` was needed.

With this commit, any go file change in this directory will
trigger a new compilation.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-11-23 09:32:02 -05:00
Julien Pivotto
fdce9d6a6a prometheus: Initialize mgmt_checkapply_total metrics
It is recommended by Prometheus to initialize metrics:

https://prometheus.io/docs/practices/instrumentation/#avoid-missing-metrics

This commit initializes the mgmt_checkapply_total metric
for each registered resource.
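
A minimal sketch of what such initialization looks like with the Prometheus Go
client; the label set and kind list are simplified, not the exact mgmt metric:

```
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	checkApply := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "mgmt_checkapply_total",
			Help: "Number of CheckApply operations.",
		},
		[]string{"kind"},
	)
	prometheus.MustRegister(checkApply)

	// Initialize the counter for every known resource kind so it shows up
	// as 0 instead of being missing until the first CheckApply happens.
	for _, kind := range []string{"file", "exec", "svc", "pkg"} {
		checkApply.WithLabelValues(kind).Add(0)
	}

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe("127.0.0.1:9233", nil))
}
```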

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-11-23 15:23:41 +01:00
Guillaume Herail
bfc2549289 resources: Move FileRes.uid()/.gid() to util.go 2017-11-23 08:34:38 -05:00
James Shubin
52fd1ae73e test: Add check for common doc vs docs ambiguity 2017-11-23 08:20:44 -05:00
Julien Pivotto
23e167616f doc: Fix link to the prometheus wiki
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-11-23 09:52:28 +01:00
James Shubin
51ce83f20b test: Add extra commit message tests for some common mistakes
Feel free to add more if we identify them.
2017-11-21 11:05:20 -05:00
Jonathan Gold
5e5bbf4b39 travis: Allow travis builds to access target branches
Because travis builds only fetch a single branch (master) by default,
test-commit-message.sh only had access to commits in the master branch.
In order to fetch the correct branch for our build, we need to run
'git config remote.origin.fetch..' with the target branch's information
before executing git fetch on the repo in before_install.

Now git will always fetch the appropriate branch.
2017-11-18 21:04:12 -05:00
Guillaume Herail
cbc3a691b9 docker: Bump to golang 1.8 2017-11-16 17:36:35 +01:00
Jonathan Gold
a5247d6e69 resources: aws: ec2: Change event messages to iota consts 2017-11-14 16:48:51 -05:00
Jonathan Gold
d698b82a83 resources: aws: ec2: Start and stop SNS endpoint in snsWatch
This patch adds snsWatch which launches the HTTP server and listens
for messages on awsChan to forward as events to the mgmt engine.
2017-11-11 23:07:12 -05:00
Jonathan Gold
91eff75288 resources: aws: ec2: Add method to make sns topic 2017-11-10 17:31:19 -05:00
James Shubin
91a9edb322 resources: aws: ec2: Fix deadlock on rare error scenarios
If we get an error in the Watch loop, it will send this on awsChan,
which will cause Watch to loop. However, in this scenario it will never
cause closeChan to close, and we will deadlock because we have a
waitGroup in a helper goroutine which is waiting on this channel to
close the context.

Normally this wouldn't be an issue, but since we have more than one
goroutine (with associated waitGroup) it is. It's also good practice to
close all the channels to help avoid this kind of bug.

This patch also moves the waitGroup Wait into a more logical place for
visibility.
2017-11-10 14:17:54 -05:00
Jonathan Gold
c8ddbeaa5c resource: aws: ec2: Add http server 2017-11-09 13:13:42 -05:00
Jonathan Gold
3634b3450d resource: aws: ec2: Move waitgroup to resource struct 2017-11-08 16:57:41 -05:00
Jonathan Gold
c2a5e3f5d8 resources: aws: ec2: Move watch channels into struct 2017-11-08 16:16:01 -05:00
Jonathan Gold
db49fe85e4 resources: aws: ec2: Move chanStruct type out of longpollWatch 2017-11-08 16:08:25 -05:00
Jonathan Gold
567a2e9fd8 resources: aws: ec2: Reorganized consts 2017-11-08 16:02:29 -05:00
Jonathan Gold
987de00e17 resources: aws: ec2: Remove extra wait from Watch
There were two calls to WaitUntilInstanceTerminatedWithContext in a row.
There's no reason to make the call twice.
2017-11-08 16:02:24 -05:00
Jonathan Gold
baeafec74a resources: aws: ec2: Move Watch to longpollWatch 2017-11-08 16:02:12 -05:00
James Shubin
9cfa0b14d4 yamlgraph: Improve error output
This makes it easier to know what's missing.
2017-11-02 09:13:27 -04:00
James Shubin
948ded6792 github: This event is over
And it wasn't successful at all.
2017-11-01 07:07:14 -04:00
James Shubin
3c69619fd9 github: Add new label for design discussions and trackers
Open ideas related to designs can be tracked here. We've already got a
few such tickets open.
2017-11-01 07:04:32 -04:00
Jonathan Gold
e7c4bc7f47 resources: Add UserData field to AwsEc2
UserData specifies first-launch bash and cloud-init commands. See
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
for documentation and examples.
2017-10-30 00:22:30 -04:00
Jonathan Gold
277ecc901b etcd: Plumbed in the new cli flags for advertise urls 2017-10-29 17:16:51 -04:00
Jonathan Gold
0f70c31a30 etcd: Add advertise urls to cli
This patch adds the option to specify URLs to advertise for clients and peers.
This will facilitate etcd communication through nat, where we want to listen
on a local IP, but expose a public IP to clients/peers.
2017-10-28 22:42:27 -04:00
James Shubin
9a97a92e31 github: Use third-party settings app to sync github settings
Let's give this a try. One downside is that giving anyone push access
gives them the ability to rename the repo and do other bad admin-type things.
2017-10-26 05:04:41 -04:00
James Shubin
f9d452ad2c examples: Add longpoll server and client
This is an example of a race-free long-poll server and client. It uses a
redirection method to signal that the "Watch" is running.

Other race-free methods exist.
2017-10-24 04:20:19 -04:00
Jonathan Gold
9907c12eda resources: Enhancements to user and group
This patch adds autoedges between users and groups, and extends
users with additional fields for supplementary groups and a named
primary group. Also, some small fixes to log and error messages.
2017-10-23 19:18:52 -04:00
Jonathan Gold
19533a32b5 resources: Add a group resource 2017-10-21 01:28:22 -04:00
Jonathan Gold
c5a5004f9e resources: Fix user gid compare 2017-10-19 06:58:31 -04:00
Jonathan Gold
677cdea99d resources: Improve nspawn resource 2017-10-17 19:23:04 -04:00
Jonathan Gold
4d7c0ddbce resources: Add an Aws resource 2017-10-09 04:05:13 -04:00
James Shubin
81daf10157 test: Fix linter issues
These are some linter issues that were found in a new version of the
linter. Let's fix them now before that linter hits our test suite.
2017-09-26 19:38:53 -04:00
James Shubin
b3ef4e41bf test: Use stable version of gometalinter
Hopefully this prevents the various breakages seen in our lint test.
2017-09-26 19:08:43 -04:00
James Shubin
9fbf149717 etcd: Bump to newer versions 2017-09-19 18:21:15 -04:00
James Shubin
95cb94a039 vendor: Add codec package because of breakage
Recent git master 54210f4e076c57f351166f0ed60e67d3fca57a36 of
github.com/ugorji/go broke the builds. See:
https://github.com/coreos/etcd/issues/8579
2017-09-19 18:21:15 -04:00
Juan Luis de Sousa-Valadas Castaño
21f7f87716 resources: Refresh packagekit cache before install
Fixes #80
2017-09-17 22:29:15 +02:00
Jonathan Gold
831c7e2c32 resources: Add user resource 2017-09-17 01:04:36 -04:00
James Shubin
cc0d04c8b7 git: Ignore .envrc file from direnv
Some find this useful for setting a custom GOPATH per project.
2017-09-15 16:17:40 -04:00
James Shubin
46be83f8f7 legal: Re-license to GPLv3 2017-09-11 18:07:47 -04:00
James Shubin
28560e2045 resources: Fix formatting 2017-09-11 18:06:34 -04:00
James Shubin
0df4824a56 test: Increase timeouts for slow travis
Should prevent more intermittent failures.
2017-09-09 15:31:06 -04:00
James Shubin
dbcabc6517 github: Improve the PR template 2017-09-09 15:03:53 -04:00
Jonathan Gold
69f479b67e virt: Allow more than 26 disks 2017-09-08 02:15:40 +00:00
James Shubin
af75696018 github: Add a PR template to help new users
Hopefully this addresses the most common things.
2017-09-07 16:14:11 -04:00
Arthur Mello
80b8f8740f virt: Added support for ~user into expandHome
- Enabled expandHome to expand both ~/ and ~username/ paths
- Added some unit tests for expandHome
2017-09-06 14:59:08 -04:00
James Shubin
71ab325940 yaml2: Meta should keep defaults, and Res should have kind
This would previously panic since it wouldn't get a kind, and the meta
parameters would overwrite the defaults so it would block because limit
didn't have the default of +inf.

The removal of the SetKind was my fault in:

b8ff6938df

It's funny because it ends in `bad`. Guess I should have checked that!
2017-09-06 13:44:21 -04:00
James Shubin
653c76709a test: Fix another intermittent failure
Some of the tests had very precise timeouts, which weren't very
important. Here's another one that timed out early.
2017-09-04 16:39:01 -04:00
Juan Luis de Sousa-Valadas Castaño
83cc1bab38 vagrant: Fix PATH
gometalinter failed because it's not in $PATH
2017-09-04 22:08:59 +02:00
James Shubin
6c8588c019 test: Increase timeouts because travis is slow
Should hopefully prevent some intermittent failures.
2017-09-04 13:02:05 -04:00
Ismael Puerto
5b00ed2fb2 vagrant: Change box to F26
F26 provides Go 1.8
2017-09-01 22:21:39 +02:00
Juan-Luis de Sousa-Valadas Castaño
9f66962bfb docs: Change go required version to 1.8 2017-08-31 23:56:16 +02:00
James Shubin
0edba74091 etcd: Bump to version 3.2.6 and update all the grpc deps
Note: When go-grpc-prometheus was in the main $gopath (even at this
version) and everyone else was where they always were in vendor/ this
didn't build! It gave errors like:

	have SendHeader("github.com/purpleidea/mgmt/vendor/google.golang.org/grpc/metadata".MD) error
	want SendHeader("google.golang.org/grpc/metadata".MD) error

and I got frustrated. Putting it "next" to the other vendored deps seems
to have fixed this. Where are the golang docs that explain this
phenomenon?

This also requires golang 1.8+ as that is a requirement for etcd. It's
probably a reasonable thing for us too.

Note the older versions of etcd had some bugs with the concurrency
package and other things, so this is a necessary bump.
2017-08-30 14:16:02 -04:00
Dennis Kliban
1003b49dd9 resources: Add validation for Msg Priority field
This adds validation that ensures that the Msg Priority field is one of the following values:
"Emerg", "Alert", "Crit", "Err", "Warning", "Notice", "Info", "Debug".
2017-08-20 12:37:39 +00:00
James Shubin
884ba54f96 resources: Include default MetaParams so Validate will pass in tests 2017-08-18 19:52:02 -04:00
Dennis Kliban
cf2325a2da vagrant: Increase amount of RAM allocated to boxes backed by libvirt 2017-08-07 13:55:21 -04:00
AdnanLFC
db6972638d pgraph: test: Added tests for DeleteEdge 2017-07-28 02:02:22 +02:00
James Shubin
74e04e81d5 travis: Update to golang 1.8 as the default
Since the release of Fedora 26 with golang 1.8.1, this is a fine
default.
2017-07-19 12:15:54 -04:00
James Shubin
7c5d7365c7 readme: Add new recording 2017-06-29 13:14:25 -04:00
James Shubin
0dadf3d78a resources: Add NewNamedResource helper
This makes the common pattern of NewResource, SetName, easier. It also
makes it less likely for you to forget to use SetName.
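
A sketch of the idea using a stand-in resource type; the real interface and
constructor have more to them:

```
package main

import "fmt"

// Res is a trimmed-down stand-in for the real resource interface.
type Res interface {
	SetName(string)
	String() string
}

type noopRes struct{ name string }

func (r *noopRes) SetName(n string) { r.name = n }
func (r *noopRes) String() string   { return "noop[" + r.name + "]" }

// NewResource stands in for the real kind-based constructor.
func NewResource(kind string) (Res, error) {
	switch kind {
	case "noop":
		return &noopRes{}, nil
	default:
		return nil, fmt.Errorf("unknown kind: %s", kind)
	}
}

// NewNamedResource wraps the common NewResource + SetName pattern so that
// callers can't forget to set the name.
func NewNamedResource(kind, name string) (Res, error) {
	res, err := NewResource(kind)
	if err != nil {
		return nil, err
	}
	res.SetName(name)
	return res, nil
}

func main() {
	res, err := NewNamedResource("noop", "example")
	if err != nil {
		panic(err)
	}
	fmt.Println(res)
}
```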
2017-06-17 18:09:49 -04:00
James Shubin
e341256627 resources: Add a utility to map from struct fields
For GAPI front ends that want to know what fields they can use and which
they map to, these two functions can be used.
2017-06-17 11:49:30 -04:00
James Shubin
5a3bd3ca67 hcl: Consistent formatting
Nit picks.
2017-06-16 23:01:46 -04:00
ChrisMcKenzie
8102e0a468 hcl: Added hil string interpolation to hcl frontend 2017-06-15 22:53:55 -07:00
ChrisMcKenzie
7d55179727 hcl: Removed edge object in favor of depends_on field in resource 2017-06-12 10:44:13 -07:00
ChrisMcKenzie
bc1a1d1818 hcl: Added basic hcl frontend 2017-06-09 10:31:34 -07:00
James Shubin
a8bbb22fe8 resources: Fix golint issues
Including a trick to get the golinter to allow our compact code!
2017-06-08 04:38:25 -04:00
James Shubin
6b489f71a1 remote: Add a Ready method to know when startup is finished
Previously, there was an extremely rare race where we would startup,
kick off the Run method in a goroutine, and then run Exit before Run got
very far in its execution. If Run ran some early sections of its code
_after_ we had Exited, we would trigger a panic due to the converger UID
being unregistered.

This patch blocks Exit from progressing until Run has started and
finished running. It also adds a Ready method so that you can monitor
this signal yourself if you'd like to add the necessary wait to your
code.
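
The shape of the fix, as a trimmed-down sketch rather than the actual remote
package code:

```
package main

import (
	"fmt"
	"sync"
	"time"
)

type Remote struct {
	readyChan chan struct{}
	exitChan  chan struct{}
	wg        sync.WaitGroup
}

func NewRemote() *Remote {
	return &Remote{
		readyChan: make(chan struct{}),
		exitChan:  make(chan struct{}),
	}
}

// Run does its startup work, closes readyChan to signal that it's ready, and
// then runs its main loop until Exit is called.
func (r *Remote) Run() {
	r.wg.Add(1)
	defer r.wg.Done()
	time.Sleep(10 * time.Millisecond) // pretend startup work
	close(r.readyChan)                // signal: startup finished
	<-r.exitChan                      // the "main loop"
}

// Ready returns a channel that closes once Run has finished starting up.
func (r *Remote) Ready() <-chan struct{} { return r.readyChan }

// Exit blocks until Run has started, then asks it to stop and waits for it.
func (r *Remote) Exit() {
	<-r.Ready()
	close(r.exitChan)
	r.wg.Wait()
}

func main() {
	r := NewRemote()
	go r.Run()
	r.Exit() // safe even if called immediately
	fmt.Println("exited cleanly")
}
```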
2017-06-08 03:55:03 -04:00
James Shubin
f1db088af4 test: Don't be noisy when running cd during testing 2017-06-08 01:05:58 -04:00
James Shubin
6fe12b3fb5 resources: Compare grouped resources properly
When comparing resources, we have to recursively compare grouped
resources as well! Now fixed.
2017-06-08 01:05:58 -04:00
James Shubin
dacbf9b68d resources: Add resource sorting and clean tests
Resource sorting is needed for comparing resource groups.
2017-06-08 01:05:58 -04:00
James Shubin
9f5057eac7 resources: Do not panic on autogrouped graph switches
Graph changes from autogrouped -> not autogrouped or vice versa cause a
panic (or I assume a leak) because we compared the auto grouped graph to
the ungrouped one, which would cause an Exit on an unstarted Vertex.
This includes a test that seems to reliably reproduce the issue.
2017-06-08 01:05:58 -04:00
James Shubin
525cd54921 pgraph: Improve testing and refactor out some test utilities 2017-06-07 07:13:12 -04:00
James Shubin
7ac94bbf5f resources: Panic if attempting to register a duplicate resource
Don't silently let this overwrite pass. It would mean a mistake.
2017-06-07 03:15:06 -04:00
James Shubin
b8ff6938df resources: Unify resource creation and kind setting
This removes the duplication of the kind string and cleans up things for
resource creation.
2017-06-07 03:07:02 -04:00
James Shubin
2f6c77fba2 misc: Update my tag script to deal with large releases 2017-06-03 03:54:49 -04:00
James Shubin
28a6430778 test: Add gometalinter to our test suite
Add a bunch of new linters to our tests! We can uncomment each sub
linter as we fix up the few remaining issues.
2017-06-03 02:04:10 -04:00
James Shubin
6e4157da35 test: Remove debugging echo from go vet test
I accidentally left it in which totally defeats the point of tests!
2017-06-03 01:34:02 -04:00
James Shubin
4f420dde05 etcd: Wait for server to start before continuing
I think there was a rare race where we would make use of the etcd server
before it had fully started up. I only ever saw this occur on travis,
and with this fix hopefully we'll never see it again.

It is worth mentioning that much of my etcd code and the lib Run()
function could use a solid cleaning.
2017-06-03 01:00:35 -04:00
James Shubin
d9601471df etcd: Small cleanup of the package
Split things into multiple files, and fix up some doc formatting.
2017-06-03 00:34:58 -04:00
James Shubin
9941a97e37 resources: pkg: Add a simple test based on internal logic
We expect the following to stay true. This has always been a bit weird
for me to either remember or expect, so I added a test for my sanity.
2017-06-03 00:15:30 -04:00
James Shubin
0a64b08669 resources: autoedges: Process in a deterministic order
The order in which you loop through a map isn't necessarily stable, so make sure
you sort the keys before you go through it.
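
For example, the generic fix looks like this (not the actual autoedges code):

```
package main

import (
	"fmt"
	"sort"
)

func main() {
	kinds := map[string]string{"pkg": "packagekit", "file": "path", "svc": "systemd"}

	// Map iteration order is not stable in Go, so collect and sort the keys
	// first if the processing order needs to be deterministic.
	keys := make([]string, 0, len(kinds))
	for k := range kinds {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	for _, k := range keys {
		fmt.Printf("%s -> %s\n", k, kinds[k])
	}
}
```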
2017-06-02 22:29:42 -04:00
James Shubin
4d9d0d4548 resources: Improve AutoEdge API and pkg breakage
I previously broke the pkg auto edges because the package list wasn't
available by the time it was called. This fixes the pkg resource so that
it gets the necessary list of packages when needed. Since this means
that a possible failure could happen, we also update the AutoEdges API
to support errors. Errors can only be generated at AutoEdge struct
creation; once the struct has been returned (right before modification
of the graph structure) there is no possibility to return any errors.

It's important to remember that the AutoEdges stuff gets called before
the Init of each resource, so make sure it doesn't depend on anything
that happens there or that gets cached as a result of Init.

This is all much nicer now and has a test too :)
2017-06-02 22:15:28 -04:00
James Shubin
5f6c8545c6 resources: Replace stored pgraph with mgraph and clean up hacks
Now that we're using our meta wrapper graph struct instead of the
pgraph, we can re-implement our SetValue hacks in terms of struct fields
and the implementation is now cleaner.
2017-06-02 18:50:23 -04:00
James Shubin
ddc335d65a resources: Reorganize package and split into multiple files
This should hopefully make finding and changing code easier.
2017-06-02 18:08:47 -04:00
James Shubin
9cbaa892d3 gapi: Allow the GAPI implementer to specify fast and exit
This allows the implementer of the GAPI to specify three parameters for
every Next message sent on the channel. The Fast parameter tells the
agent if it should do the pause quickly or if it should finish the
sequence. A quick pause means that it will cause a pause immediately
after the currently running resources finish, whereas a slow (default)
pause will allow the wave of execution to finish. This is usually
preferred in scenarios where complex graphs are used where we want each
step to complete. The Exit parameter tells the engine to exit, and the
Err parameter tells the engine that an error occurred.
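
As a rough sketch of the message shape and flow; the exact struct in the gapi
package may differ:

```
package main

import "fmt"

// Next mirrors the idea described above; the exact field set in the gapi
// package may differ.
type Next struct {
	Fast bool  // pause quickly instead of letting the wave of execution finish
	Exit bool  // tell the engine to exit
	Err  error // report an error to the engine
}

func watch() chan Next {
	ch := make(chan Next)
	go func() {
		defer close(ch)
		ch <- Next{}           // initial event: a graph is ready
		ch <- Next{Fast: true} // something changed; pause quickly
		ch <- Next{Exit: true} // ask the engine to exit cleanly
	}()
	return ch
}

func main() {
	for next := range watch() {
		fmt.Printf("fast=%v exit=%v err=%v\n", next.Fast, next.Exit, next.Err)
	}
}
```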
2017-06-02 04:03:10 -04:00
James Shubin
9531465410 test: Make sure our examples build
Since there are occasional API changes, I'd like to at least remember to
keep the examples building, so we now have a test to remind us!
2017-06-02 03:32:53 -04:00
James Shubin
c35916fad1 resources: Rename the Data struct to ResData to avoid ambiguity
There's a similarly named gapi.Data struct which we could also rename.
2017-06-02 02:53:53 -04:00
James Shubin
bf476a058e resources: exec: Add send/recv for exec output, stdout and stderr
This adds send/recv output parameters from exec for stdout, stderr, and
output which is a combination of those two. This also includes a few
tests, and a working example too!

Gone are the `some_command > some_file` days of puppet.
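
Under the hood this is essentially the standard library pattern of teeing the
two streams; a simplified sketch, not the resource's actual implementation:

```
package main

import (
	"bytes"
	"fmt"
	"io"
	"os/exec"
)

func main() {
	var stdout, stderr, output bytes.Buffer

	cmd := exec.Command("sh", "-c", "echo out; echo err 1>&2")
	// Tee each stream into its own buffer and into a combined one, which is
	// roughly what sending stdout, stderr and output separately requires.
	cmd.Stdout = io.MultiWriter(&stdout, &output)
	cmd.Stderr = io.MultiWriter(&stderr, &output)

	if err := cmd.Run(); err != nil {
		panic(err)
	}
	fmt.Printf("stdout: %q\nstderr: %q\noutput: %q\n",
		stdout.String(), stderr.String(), output.String())
}
```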
2017-06-02 02:52:03 -04:00
James Shubin
d4e815a4cb resources: Clean up converger and make it easier for tests
This cleans up the resource converger code slightly and makes it easier
to write resource specific test cases.
2017-06-02 01:15:25 -04:00
James Shubin
0545c4167b pgraph: Remove NewVertex and NewEdge methods and fix examples
Since the pgraph graph can store arbitrary pointers, we don't need a
special method to create the vertices or edges as long as they implement
the String() string method. This cleans up the library and some of the
examples which I let rot previously.
2017-05-31 18:04:58 -04:00
James Shubin
6838dd02c0 resources: graph: Add partial implementation of a graph resource
This is something I've wanted to do for a while, but for the reasons
mentioned in the comments, I've been unable to complete yet. I figured
I'd at least merge what does exist so far in case someone else would
like to pick this up. It's a bit of a brain hurdle / monster, because
the tricky part is refactoring the core engine so that this fits in
nicely. Perhaps someone will have more time and/or less tunnel vision
than I to either merge something or sketch out some ideas on the path
forwards. I think it's a useful goal because if recursive resources are
possible, it could force the core engine into a more elegant design.

Happy hacking!
2017-05-31 17:27:34 -04:00
James Shubin
14c2fd1edd resources: Add proper edge compare method
Might as well do this cleanly in one place.
2017-05-31 17:27:34 -04:00
James Shubin
6e503cc79b resources: Simplify the resource Compare functions
This removes one level of indentation and simplifies the code.
2017-05-31 17:27:34 -04:00
James Shubin
bd4563b699 pgraph: Add sort function to sort a list of vertices
With tests too!
2017-05-31 17:27:34 -04:00
James Shubin
458e115490 pgraph: Add logic functions for adding subgraphs
These are helper functions to merge in existing graphs into a main graph
with or without adding an edge relationship between a vertex and the new
graph. These are particularly useful if using mgmt as a lib to break
apart units of work into functions that create sub graphs, which are
then added to the main graph when they're returned.
2017-05-31 17:27:25 -04:00
James Shubin
51369adad1 pgraph: Add a GraphCmp method
This could probably be more efficient using a known algorithm, and it
could definitely require more tests, but is good enough for now.
2017-05-31 16:45:39 -04:00
James Shubin
f65c5fb147 resources: nspawn: Fix small style issues 2017-05-31 15:36:15 -04:00
James Shubin
4150ae7307 pgraph: Replace edge struct with interface
This further cleans up the pgraph lib to be more generic.
2017-05-31 15:36:15 -04:00
James Shubin
a87288d519 pgraph, resources: Major refactoring continued
There was simply some technical debt I needed to kill off. Sorry for not
splitting this up into more patches.
2017-05-31 15:36:14 -04:00
James Shubin
3cf9639e99 pgraph, resources: Major refactor to remove pgraph to resource dep
This is the mechanical port of the remaining bits. Next to clean it up a
bit.
2017-05-29 15:43:50 -04:00
James Shubin
4490c3ed1a resources: Map to semaphores doesn't need to be a pointer
A map in golang is a reference type.
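
In other words, a callee can mutate the caller's map without any extra
indirection (a generic example, not the semaphore code itself):

```
package main

import "fmt"

// addSemaphore mutates the map it receives; no pointer to the map is needed,
// because a map value is already a small header that refers to shared data.
func addSemaphore(sems map[string]int, id string, size int) {
	sems[id] = size
}

func main() {
	sems := map[string]int{}
	addSemaphore(sems, "etcd:1", 3)
	fmt.Println(sems) // map[etcd:1:3] -- the caller sees the change
}
```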
2017-05-29 15:43:50 -04:00
James Shubin
fbcb562781 pgraph: Move the timestamp storage into the resource 2017-05-29 15:43:50 -04:00
James Shubin
b1e035f96a pgraph: Move get/set state methods out to resource package 2017-05-29 15:43:50 -04:00
James Shubin
11c3a26c23 pgraph: Move the AutoEdges mechanism into the resource package
Remove the pgraph->resource dependency.
2017-05-29 15:43:50 -04:00
James Shubin
1fbe72b52d test: Run go vet across whole packages not individual files
The golang tooling is quite deficient, in that it makes it quite
difficult to get the tools to do_the_right_thing, without ample wrapping
of bash scripting. Go vet was finding issues because it didn't have the
full context available. Hopefully this package level context is
sufficient for now. It still lacks inter-package context though.
2017-05-29 15:43:50 -04:00
James Shubin
f4bb066737 test: Run go vet with -source flag in newer releases
This should hopefully eliminate some false positives.
https://github.com/golang/go/issues/20514
2017-05-29 15:43:50 -04:00
Julien Pivotto
aaac9cbeeb vagrant: Setup Packagekit in the box
Without packagekit the 'pkg' resources can not be used

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-05-17 09:54:23 +02:00
Julien Pivotto
0e68ff6923 vagrant: Install make in the Vagrant box
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-05-17 06:41:43 +02:00
James Shubin
1c59712cbf pgraph: Move AssociateData function out of the package
This removes another dependency on the resource package.
2017-05-15 10:19:46 -04:00
James Shubin
c2cb1c9168 pgraph: Move GraphMetas function out of package
This removes a dependency on the resources package which wasn't
necessary.
2017-05-15 10:06:31 -04:00
James Shubin
cc8e2e40dd pgraph: Update graph API to remove Get prefix and add Adjacency
Simple cleanups.
2017-05-15 09:58:10 -04:00
James Shubin
e67d97d9da pgraph: Replace CompareMatch with VertexMatchFn
This removes a reference to the resources package in pgraph.
2017-05-13 13:55:42 -04:00
James Shubin
d74c2115fd pgraph: Untangle the semaphore code from the pgraph implementation
This re-implements the semaphore code on top of the graph kv store.
2017-05-13 13:28:41 -04:00
James Shubin
70e7ee2d46 pgraph: Remove use of Flags struct in favour of Value API
One small step to completely cleaning up the pgraph package so that we
can eventually fix the code that would otherwise create a cycle!
2017-05-13 13:28:41 -04:00
James Shubin
d11854f4e8 pgraph: Clean up pgraph module to get ready for clean lib status
The graph of dependencies in golang is a DAG, and as such doesn't allow
cycles. Clean up this lib so that it eventually doesn't import our
resources module or anything else which might want to import it.

This patch makes adjacency private, and adds a generalized key store to
the graph struct.
2017-05-13 13:28:41 -04:00
James Shubin
4bb553e015 pgraph: Use the correct vertex handle to prevent a race
A small typo that is now fixed! These need to get caught with golint!
2017-05-13 10:08:38 -04:00
James Shubin
0af9af44e5 etcd, resources, world: Add World API for shared keys
It's up to the end user to decide who is writing and/or overwriting
them.

It could also be useful to reimplement (refactor) some of the existing
World APIs in terms of these primitives.
2017-04-17 07:03:29 -04:00
James Shubin
3a0d73f740 readme: Add new links 2017-04-13 04:35:59 -04:00
James Shubin
9b9ff2622d resources: Make resource kind and baseuid fields public
This is required if we're going to have out of package resources. In
particular for third party packages, and also for if we decide to split
out each resource into a separate sub package.
2017-04-11 01:52:21 -04:00
James Shubin
a4858be967 lib, gapi: Next method of GAPI should generate first event
This puts the generation of the initial event into the Next method of
the GAPI. If it does not happen, then we will never get a graph. This is
important because this notifies the GAPI when we're actually ready to
try and generate a graph, rather than blocking on the Graph method if we
have a long compile for example.

This is also required for the etcd watch cleanup.
2017-04-10 03:20:58 -04:00
James Shubin
6fd5623b1f gapi: Move separate etcd Watch method into GAPI
This cleans up the API to not have a special case for etcd anymore. In
particular, this also adds the requirement that the GAPI must generate
an event on startup as soon as it is ready to generate a graph.
2017-04-10 03:20:58 -04:00
James Shubin
66d9c7091c lib: examples: Update to most recent API
At some point in the past the API changed. Fixed now.
2017-04-10 03:20:58 -04:00
Mildred Ki'Lya
525a1e8140 yamlgraph: Refactor parsing for dynamic resource registration
Avoid use of the reflect package, and use an extensible list of registered
resource kinds. This also has the benefit of removing the empty VirtRes and
AugeasRes struct types when compiling without libvirt and libaugeas.
2017-03-24 22:38:06 +01:00
James Shubin
64dc47d7e9 misc: Fixup documentation 2017-03-20 17:11:51 -04:00
James Shubin
f3fc7bb91e resources: svc: Add basic support for user services
These are user specific services and are available on the session bus.
This doesn't use the private user API because
https://github.com/coreos/go-systemd/pull/225 was NACKed.
2017-03-17 10:15:02 -04:00
James Shubin
028ef14cc0 misc: Replace sloppy use of %v with %s 2017-03-16 13:18:36 -04:00
James Shubin
3e001f9a1c main: Update log messages for consistency 2017-03-16 13:14:50 -04:00
1066 changed files with 102836 additions and 13643 deletions

.editorconfig

@@ -12,8 +12,14 @@ end_of_line = lf
 insert_final_newline = true
 trim_trailing_whitespace = true
+[*.sh]
+indent_style = tab
 [*.go]
 indent_style = tab
 [Makefile]
 indent_style = tab
+[*.mcl]
+indent_style = tab

.github/FUNDING.yml

@@ -0,0 +1,5 @@
# You can add one username per supported platform and one custom link.
custom: "https://paypal.me/purpleidea"
github: purpleidea
liberapay: purpleidea
patreon: purpleidea

.github/PULL_REQUEST_TEMPLATE.md

@@ -0,0 +1,47 @@
## Tips:
* please read the style guide before submitting your patch:
[docs/style-guide.md](../docs/style-guide.md)
* commit message titles must be in the form:
```topic: Capitalized message with no trailing period```
or:
```topic, topic2: Capitalized message with no trailing period```
* golang code must be formatted according to the standard, please run:
```
make gofmt # formats the entire project correctly
```
or format a single golang file correctly:
```
gofmt -w yourcode.go
```
* please rebase your patch against current git master:
```
git checkout master
git pull origin master
git checkout your-feature
git rebase master
git push your-remote your-feature
hub pull-request # or submit with the github web ui
```
* after a patch review, please ping @purpleidea so we know to re-review:
```
# make changes based on reviews...
git add -p # add new changes
git commit --amend # combine with existing commit
git push your-remote your-feature -f
# now ping @purpleidea in the github PR since it doesn't notify us automatically
```
## Thanks for contributing to mgmt and welcome to the team!

.github/settings.yml

@@ -0,0 +1,98 @@
# These settings are synced to GitHub by https://probot.github.io/apps/settings/
repository:
# See https://developer.github.com/v3/repos/#edit for all available settings.
# The name of the repository. Changing this will rename the repository
name: mgmt
# A short description of the repository that will show up on GitHub
description: Next generation distributed, event-driven, parallel config management!
# A URL with more information about the repository
homepage: https://purpleidea.com/tags/mgmtconfig/
# A comma-separated list of topics to set on the repository
topics: golang, go, configuration-management, config-management, devops, etcd, distributed-systems, graph-theory, choreography
# Either `true` to make the repository private, or `false` to make it public.
private: false
# Either `true` to enable issues for this repository, `false` to disable them.
has_issues: true
# Either `true` to enable projects for this repository, or `false` to disable them.
# If projects are disabled for the organization, passing `true` will cause an API error.
has_projects: false
# Either `true` to enable the wiki for this repository, `false` to disable it.
has_wiki: false
# Either `true` to enable downloads for this repository, `false` to disable them.
has_downloads: true
# Updates the default branch for this repository.
default_branch: master
# Either `true` to allow squash-merging pull requests, or `false` to prevent
# squash-merging.
allow_squash_merge: false
# Either `true` to allow merging pull requests with a merge commit, or `false`
# to prevent merging pull requests with merge commits.
allow_merge_commit: false
# Either `true` to allow rebase-merging pull requests, or `false` to prevent
# rebase-merging.
allow_rebase_merge: true
# Labels: define labels for Issues and Pull Requests (in alphabetical order)
labels:
- name: bug
color: fc2929
- name: confirmed
color: d93f0b
- name: design
color: 5319e7
- name: duplicate
color: cccccc
- name: enhancement
color: 84b6eb
- name: good first issue
color: 7057ff
- name: help wanted
color: 159818
- name: invalid
color: e6e6e6
- name: mgmtlove
color: e11d21
- name: question
color: cc317c
- name: needinfo
color: fbca04
- name: wontfix
color: ffffff
# - name: first-timers-only
# # include the old name to rename an existing label
# oldname: Help Wanted
# Collaborators: give specific users access to this repository.
#collaborators:
# - username: purpleidea
# # Note: Only valid on organization-owned repositories.
# # The permission to grant the collaborator. Can be one of:
# # * `pull` - can pull, but not push to or administer this repository.
# # * `push` - can pull and push, but not administer this repository.
# # * `admin` - can pull, push and administer this repository.
# permission: push
# - username: hubot
# permission: pull
# NOTE: The APIs needed for teams are not supported yet by GitHub Apps
# https://developer.github.com/v3/apps/available-endpoints/
#teams:
# - name: core
# permission: admin
# - name: docs
# permission: push

.github/workflows/test.yaml

@@ -0,0 +1,70 @@
# Docs: https://help.github.com/en/articles/workflow-syntax-for-github-actions
# If the name is omitted, it uses the filename instead.
#name: Test
on:
# Run on all pull requests.
pull_request:
#branches:
#- master
# Run on all pushes.
push:
# Run daily at 4am.
schedule:
- cron: 0 4 * * *
jobs:
maketest:
name: Test (${{ matrix.test_block }}) on ${{ matrix.os }} with golang ${{ matrix.golang_version }}
runs-on: ${{ matrix.os }}
env:
GOPATH: /home/runner/work/mgmt/mgmt/go
strategy:
matrix:
# TODO: Add tip when it's supported: https://github.com/actions/setup-go/issues/21
os:
- ubuntu-latest
# macos tests are currently failing in CI
#- macos-latest
golang_version:
# TODO: add 1.15.x and tip
# minimum required and latest published go_version
#- 1.13
- 1.15
test_block:
- basic
- shell
- race
#fail-fast: false
steps:
# Do not shallow fetch, will fail when building bindata/
# The path can't be absolute, so we need to move it to the
# expected location later.
- name: Clone mgmt
uses: actions/checkout@v2
with:
submodules: recursive
fetch-depth: 0
path: ./go/src/github.com/purpleidea/mgmt
- name: Install Go ${{ matrix.golang_version }}
uses: actions/setup-go@v2
with:
go-version: ${{ matrix.golang_version }}
# Install & configure ruby, fixes gem permissions error
- name: Install Ruby
uses: ruby/setup-ruby@v1
with:
ruby-version: head
- name: Install dependencies
working-directory: ./go/src/github.com/purpleidea/mgmt
run: |
make deps
- name: Run test
working-directory: ./go/src/github.com/purpleidea/mgmt
run: |
TEST_BLOCK="${{ matrix.test_block }}" make test

.gitignore

@@ -2,10 +2,20 @@
 .omv/
 .ssh/
 .vagrant/
+.envrc
 old/
 tmp/
+*WIP
 *_stringer.go
+bindata/*.go
 mgmt
 mgmt.static
+# crossbuild artifacts
+build/mgmt-*
 mgmt.iml
 rpmbuild/
+releases/
+# vim swap files
+.*.sw[op]
+# prevent `echo foo 2>1` typo errors by making this file read-only
+1

.gitmodules

@@ -1,5 +1,5 @@
 [submodule "vendor/github.com/coreos/etcd"]
-	path = vendor/github.com/coreos/etcd
+	path = vendor/go.etcd.io/etcd
 	url = https://github.com/coreos/etcd/
 [submodule "vendor/google.golang.org/grpc"]
 	path = vendor/google.golang.org/grpc
@@ -16,3 +16,24 @@
 [submodule "vendor/honnef.co/go/augeas"]
 	path = vendor/honnef.co/go/augeas
 	url = https://github.com/dominikh/go-augeas/
+[submodule "vendor/github.com/grpc-ecosystem/go-grpc-prometheus"]
+	path = vendor/github.com/grpc-ecosystem/go-grpc-prometheus
+	url = https://github.com/grpc-ecosystem/go-grpc-prometheus
+[submodule "vendor/github.com/ugorji/go"]
+	path = vendor/github.com/ugorji/go
+	url = https://github.com/ugorji/go
+[submodule "vendor/github.com/purpleidea/docker"]
+	path = vendor/github.com/docker/docker
+	url = https://github.com/purpleidea/docker
+[submodule "vendor/github.com/purpleidea/distribution"]
+	path = vendor/github.com/docker/distribution
+	url = https://github.com/purpleidea/distribution
+[submodule "vendor/github.com/hashicorp/go-multierror"]
+	path = vendor/github.com/hashicorp/go-multierror
+	url = https://github.com/hashicorp/go-multierror
+[submodule "vendor/github.com/containerd/containerd"]
+	path = vendor/github.com/containerd/containerd
+	url = https://github.com/purpleidea/containerd
+[submodule "vendor/github.com/hashicorp/consul"]
+	path = vendor/github.com/hashicorp/consul
+	url = https://github.com/hashicorp/consul/

.travis.yml

@@ -1,26 +1,54 @@
 language: go
-go:
-- 1.6.x
-- 1.7.x
-- 1.8.x
-- tip
+os:
+- linux
 go_import_path: github.com/purpleidea/mgmt
 sudo: true
-dist: trusty
+dist: xenial
+# travis requires that you update manually, and provides this key to trigger it
+apt:
+  update: true
 before_install:
-- sudo apt update
+# print some debug information to help catch the constant travis regressions
+- if [ -e /etc/apt/sources.list.d/ ]; then sudo ls -l /etc/apt/sources.list.d/; fi
+# workaround broken travis NO_PUBKEY errors
+- if [ -e /etc/apt/sources.list.d/rabbitmq_rabbitmq-server.list ]; then sudo rm -f /etc/apt/sources.list.d/rabbitmq_rabbitmq-server.list; fi
+- if [ -e /etc/apt/sources.list.d/github_git-lfs.list ]; then sudo rm -f /etc/apt/sources.list.d/github_git-lfs.list; fi
+# as per a number of comments online, this might mitigate some flaky fails...
+- if [[ "$TRAVIS_OS_NAME" != "osx" ]]; then sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6; fi
+# apt update tends to be flaky in travis, retry up to 3 times on failure
+# https://docs.travis-ci.com/user/common-build-problems/#travis_retry
+- if [[ "$TRAVIS_OS_NAME" != "osx" ]]; then travis_retry travis_retry sudo apt update; fi
+- git config remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"
 - git fetch --unshallow
 install: 'make deps'
-script: 'make test'
 matrix:
-  fast_finish: true
+  fast_finish: false
   allow_failures:
-  - go: tip
-  - go: 1.8.x
+  - go: 1.14.x
+  - go: tip
+  - os: osx
+  # include only one build for osx for a quicker build as the nr. of these runners are sparse
+  include:
+  - name: "basic tests"
+    go: 1.13.x
+    env: TEST_BLOCK=basic
+  - name: "shell tests"
+    go: 1.13.x
+    env: TEST_BLOCK=shell
+  - name: "race tests"
+    go: 1.13.x
+    env: TEST_BLOCK=race
+  - go: 1.14.x
+  - go: tip
+  - os: osx
+script: 'TEST_BLOCK="$TEST_BLOCK" make test'
+# the "secure" channel value is the result of running: ./misc/travis-encrypt.sh
+# with a value of: irc.freenode.net#mgmtconfig to eliminate noise from forks...
 notifications:
   irc:
-    channels:
-    - "irc.freenode.net#mgmtconfig"
+    #channels:
+    # - secure: htcuWAczm3C1zKC9vUfdRzhIXM1vtF+q0cLlQFXK1IQQlk693/pM30Mmf2L/9V2DVDeps+GyLdip0ARXD1DZEJV0lK+Ca1qbHdFP1r4Xv6l5+jaDb5Y88YU5LI8K758QShiZJojuQ1aO2j8xmmt9V0/5y5QwlpPeHbKYBOFPBX3HvlT9DhvwZNKGhBb4qJOEaPVOwq9IkN3DyQ456MHcJ3q3vF9Lb440uTuLsJNof2AbYZH8ZIHCSG2N8tBj2qhJOpWQboYtQJzE2pRaGkGBL4kYcHZSZMXX8sl4cBM1vx/IRUkvBxJUpLJz2gn/eRI+/gr59juZE2K0+FOLlx9dLnX626Y9xSViopBI6JsIoHJDqNC7aGaF2qaYulGYN65VNKVqmghjgt6JLmmiKeH10hYrJMMvt2rms8l4+5iwmCwXvhH/WU9edzk2p5wqERMnostJFEJib0zI3yzLoF0sdJs+veKtagzfayY2d2l7hlmt951IpqqVWldVgWUcQKVvi8gmRarbwFlK+5D7BEnkUDcLNly/cqf7BgEeX6YfF+FiR4pgfOhYvGCD+2q91NgWQXHBCxbyN0be1TVdkXD94f0Lkn94VyEJJ+PkPlG+rPgFwGcjqN4oEGkJeJmES2If05q2Ms1dJLwYQDL3+Py4lNMSdSWj24TzlFVhtwHepuw=
     template:
     - "%{repository} (%{commit}: %{author}): %{message}"
     - "More info : %{build_url}"
@@ -30,6 +58,6 @@ notifications:
     skip_join: false
   email:
     recipients:
-    - travis-ci@shubin.ca
+    - secure: qNkgP6QLl6VXpFQIxas2wggxvIiOmm1/hGRXm4BXsSFzHsJPvMamA3E1HEC7H+luiWTny1jtGSGgTJPV9CX1LtQV0g0S4ThaAvWuKvk3rXO8IVd++iA/Lh1s1H6JdKM0dJtLqFICawjeci4tOQzSvrM2eCBWqT0UYsrQsGHB6AF31GNAH0Acqd5cYeL+ZpbCN+hQEznAZQ7546N25TwqieI8Lg7nisA+lwYYwsaC2+f5RIeyvvKjQv3wzEdBAQ9CI9WQiTOUBnUnyYxMrdomQ/XGF66QnZy9vq5nEP83IFtuhPvSamL7ceT+yJW0jDyBi8sYEV7On7eXzjyHbiYpF4YHcJrFnf5RyV4kQGd6/SC8iZwK4Is4eyeAjDFTC+JafLajw9R9x9bK43BwlRAWOZxjFKe0cU/BVAjmlz87vHgUho2P41+0a5XfajfU6VhA5QFPK6rNH7W1CnA7D/0LmS0yaqJM1OCrm6LfoZEMhe0DxTJ9uWJbr0x1sYao6q8H4xYk+fyRgoBAr2TxYU7kXx8ThiRdzuQ8izdbojlzTYLe8liZMIsjL0axLsLK7YBWrjJUcDFDjR/DqmVxPrvbVFbCi9ChmBw0WmbJvDY0FV8T8dO8wCjg9JEmprAmWPyq0g/F87LFK4tAZqQFJGjP1qwsR9jdwdNTKeCdY656f/Y=
     on_failure: change
     on_success: change

AUTHORS

@@ -1,10 +1,13 @@
 This is a list of authors/contributors to the mgmt project.
-If you're a contributor, please send a patch with your name.
+If you're a core contributor, we might ask you to send a patch with your name.
 If you appreciate the work of one of the contributors, thank them a beverage!
 For a more exhaustive list please run: git log --format='%aN' | sort -u
 This list is sorted alphabetically by first name.
 Felix Frank
 James Shubin
+Joe Groocock
+Johan Bloemberg
+Jonathan Gold
 Julien Pivotto
 Paul Morgan

COPYING

@@ -1,5 +1,5 @@
GNU AFFERO GENERAL PUBLIC LICENSE GNU GENERAL PUBLIC LICENSE
Version 3, 19 November 2007 Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/> Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies Everyone is permitted to copy and distribute verbatim copies
@@ -7,15 +7,17 @@
Preamble Preamble
The GNU Affero General Public License is a free, copyleft license for The GNU General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure software and other kinds of works.
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast, to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free share and change all versions of a program--to make sure it remains free
software for all its users. software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you price. Our General Public Licenses are designed to make sure that you
@@ -24,34 +26,44 @@ them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
-Developers that use our General Public Licenses protect your rights
-with two steps: (1) assert copyright on the software, and (2) offer
-you this License which gives you legal permission to copy, distribute
-and/or modify the software.
-A secondary benefit of defending all users' freedom is that
-improvements made in alternate versions of the program, if they
-receive widespread use, become available for other developers to
-incorporate. Many developers of free software are heartened and
-encouraged by the resulting cooperation. However, in the case of
-software used on network servers, this result may fail to come about.
-The GNU General Public License permits making a modified version and
-letting the public access it on a server without ever releasing its
-source code to the public.
-The GNU Affero General Public License is designed specifically to
-ensure that, in such cases, the modified source code becomes available
-to the community. It requires the operator of a network server to
-provide the source code of the modified version running there to the
-users of that server. Therefore, public use of a modified version, on
-a publicly accessible server, gives the public access to the source
-code of the modified version.
-An older license, called the Affero General Public License and
-published by Affero, was designed to accomplish similar goals. This is
-a different license, not a version of the Affero GPL, but Affero has
-released a new version of the Affero GPL which permits relicensing under
-this license.
+To protect your rights, we need to prevent others from denying you
+these rights or asking you to surrender the rights. Therefore, you have
+certain responsibilities if you distribute copies of the software, or if
+you modify it: responsibilities to respect the freedom of others.
+For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must pass on to the recipients the same
+freedoms that you received. You must make sure that they, too, receive
+or can get the source code. And you must show them these terms so they
+know their rights.
+Developers that use the GNU GPL protect your rights with two steps:
+(1) assert copyright on the software, and (2) offer you this License
+giving you legal permission to copy, distribute and/or modify it.
+For the developers' and authors' protection, the GPL clearly explains
+that there is no warranty for this free software. For both users' and
+authors' sake, the GPL requires that modified versions be marked as
+changed, so that their problems will not be attributed erroneously to
+authors of previous versions.
+Some devices are designed to deny users access to install or run
+modified versions of the software inside them, although the manufacturer
+can do so. This is fundamentally incompatible with the aim of
+protecting users' freedom to change the software. The systematic
+pattern of such abuse occurs in the area of products for individuals to
+use, which is precisely where it is most unacceptable. Therefore, we
+have designed this version of the GPL to prohibit the practice for those
+products. If such problems arise substantially in other domains, we
+stand ready to extend this provision to those domains in future versions
+of the GPL, as needed to protect the freedom of users.
+Finally, every program is threatened constantly by software patents.
+States should not allow patents to restrict development and use of
+software on general-purpose computers, but in those that do, we wish to
+avoid the special danger that patents applied to a free program could
+make it effectively proprietary. To prevent this, the GPL assures that
+patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
@@ -60,7 +72,7 @@ modification follow.
0. Definitions.
-"This License" refers to version 3 of the GNU Affero General Public License.
+"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
@@ -537,45 +549,35 @@ to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
-13. Remote Network Interaction; Use with the GNU General Public License.
+13. Use with the GNU Affero General Public License.
-Notwithstanding any other provision of this License, if you modify the
-Program, your modified version must prominently offer all users
-interacting with it remotely through a computer network (if your version
-supports such interaction) an opportunity to receive the Corresponding
-Source of your version by providing access to the Corresponding Source
-from a network server at no charge, through some standard or customary
-means of facilitating copying of software. This Corresponding Source
-shall include the Corresponding Source for any work covered by version 3
-of the GNU General Public License that is incorporated pursuant to the
-following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
-under version 3 of the GNU General Public License into a single
+under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
-but the work with which it is combined will remain governed by version
-3 of the GNU General Public License.
+but the special requirements of the GNU Affero General Public License,
+section 13, concerning interaction through a network will apply to the
+combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
-the GNU Affero General Public License from time to time. Such new versions
-will be similar in spirit to the present version, but may differ in detail to
+the GNU General Public License from time to time. Such new versions will
+be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
-Program specifies that a certain numbered version of the GNU Affero General
+Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
-GNU Affero General Public License, you may choose any version ever published
+GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
-versions of the GNU Affero General Public License can be used, that proxy's
+versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
@@ -633,29 +635,40 @@ the "copyright" line and a pointer to where the full notice is found.
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
-it under the terms of the GNU Affero General Public License as published by
+it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-GNU Affero General Public License for more details.
+GNU General Public License for more details.
-You should have received a copy of the GNU Affero General Public License
+You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
-If your software can interact with users remotely through a computer
-network, you should also make sure that it provides a way for users to
-get its source. For example, if your program is a web application, its
-interface could display a "Source" link that leads users to an archive
-of the code. There are many ways you could offer source, and different
-solutions will be better for different programs; see section 13 for the
-specific requirements.
+If the program does terminal interaction, make it output a short
+notice like this when it starts in an interactive mode:
+<program> Copyright (C) <year> <name of author>
+This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+This is free software, and you are welcome to redistribute it
+under certain conditions; type `show c' for details.
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License. Of course, your program's commands
+might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
-For more information on this, and how to apply and follow the GNU AGPL, see
+For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
+The GNU General Public License does not permit incorporating your program
+into proprietary programs. If your program is a subroutine library, you
+may consider it more useful to permit linking proprietary applications with
+the library. If this is what you want to do, use the GNU Lesser General
+Public License instead of this License. But first, please read
+<http://www.gnu.org/philosophy/why-not-lgpl.html>.
@@ -1,16 +1,16 @@
Mgmt
-Copyright (C) 2013-2017+ James Shubin and the project contributors
+Copyright (C) 2013-2021+ James Shubin and the project contributors
Written by James Shubin <james@shubin.ca> and the project contributors
This program is free software: you can redistribute it and/or modify
-it under the terms of the GNU Affero General Public License as published by
+it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-GNU Affero General Public License for more details.
+GNU General Public License for more details.
-You should have received a copy of the GNU Affero General Public License
+You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Makefile
@@ -1,28 +1,36 @@
# Mgmt
-# Copyright (C) 2013-2017+ James Shubin and the project contributors
+# Copyright (C) 2013-2021+ James Shubin and the project contributors
# Written by James Shubin <james@shubin.ca> and the project contributors
#
# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU Affero General Public License as published by
+# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU Affero General Public License for more details.
+# GNU General Public License for more details.
#
-# You should have received a copy of the GNU Affero General Public License
+# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
SHELL = /usr/bin/env bash
-.PHONY: all art cleanart version program path deps run race generate build clean test gofmt yamlfmt format docs rpmbuild mkdirs rpm srpm spec tar upload upload-sources upload-srpms upload-rpms copr
-.SILENT: clean
+.PHONY: all art cleanart version program lang path deps run race bindata generate build build-debug crossbuild clean test gofmt yamlfmt format docs
+.PHONY: rpmbuild mkdirs rpm srpm spec tar upload upload-sources upload-srpms upload-rpms upload-releases copr tag
+.PHONY: mkosi mkosi_fedora-30 mkosi_fedora-29 mkosi_centos-7 mkosi_debian-10 mkosi_ubuntu-bionic mkosi_archlinux
+.PHONY: release releases_path release_fedora-30 release_fedora-29 release_centos-7 release_debian-10 release_ubuntu-bionic release_archlinux
+.PHONY: funcgen
+.SILENT: clean bindata
+# a large amount of output from this `find`, can cause `make` to be much slower!
+GO_FILES := $(shell find * -name '*.go' -not -path 'old/*' -not -path 'tmp/*')
+MCL_FILES := $(shell find lang/funcs/ -name '*.mcl' -not -path 'old/*' -not -path 'tmp/*')
SVERSION := $(or $(SVERSION),$(shell git describe --match '[0-9]*\.[0-9]*\.[0-9]*' --tags --dirty --always))
VERSION := $(or $(VERSION),$(shell git describe --match '[0-9]*\.[0-9]*\.[0-9]*' --tags --abbrev=0))
PROGRAM := $(shell echo $(notdir $(CURDIR)) | cut -f1 -d"-")
-OLDGOLANG := $(shell go version | grep -E 'go1.3|go1.4')
+PKGNAME := $(shell go list .)
ifeq ($(VERSION),$(SVERSION))
RELEASE = 1
else
@@ -38,15 +46,43 @@ USERNAME := $(shell cat ~/.config/copr 2>/dev/null | grep username | awk -F '='
SERVER = 'dl.fedoraproject.org'
REMOTE_PATH = 'pub/alt/$(USERNAME)/$(PROGRAM)'
ifneq ($(GOTAGS),)
BUILD_FLAGS = -tags '$(GOTAGS)'
endif
GOOSARCHES ?= linux/amd64 linux/ppc64 linux/ppc64le linux/arm64 darwin/amd64
GOHOSTOS = $(shell go env GOHOSTOS)
GOHOSTARCH = $(shell go env GOHOSTARCH)
TOKEN_FEDORA-30 = fedora-30
TOKEN_FEDORA-29 = fedora-29
TOKEN_CENTOS-7 = centos-7
TOKEN_DEBIAN-10 = debian-10
TOKEN_UBUNTU-BIONIC = ubuntu-bionic
TOKEN_ARCHLINUX = archlinux
FILE_FEDORA-30 = mgmt-$(TOKEN_FEDORA-30)-$(VERSION)-1.x86_64.rpm
FILE_FEDORA-29 = mgmt-$(TOKEN_FEDORA-29)-$(VERSION)-1.x86_64.rpm
FILE_CENTOS-7 = mgmt-$(TOKEN_CENTOS-7)-$(VERSION)-1.x86_64.rpm
FILE_DEBIAN-10 = mgmt_$(TOKEN_DEBIAN-10)_$(VERSION)_amd64.deb
FILE_UBUNTU-BIONIC = mgmt_$(TOKEN_UBUNTU-BIONIC)_$(VERSION)_amd64.deb
FILE_ARCHLINUX = mgmt-$(TOKEN_ARCHLINUX)-$(VERSION)-1-x86_64.pkg.tar.xz
PKG_FEDORA-30 = releases/$(VERSION)/$(TOKEN_FEDORA-30)/$(FILE_FEDORA-30)
PKG_FEDORA-29 = releases/$(VERSION)/$(TOKEN_FEDORA-29)/$(FILE_FEDORA-29)
PKG_CENTOS-7 = releases/$(VERSION)/$(TOKEN_CENTOS-7)/$(FILE_CENTOS-7)
PKG_DEBIAN-10 = releases/$(VERSION)/$(TOKEN_DEBIAN-10)/$(FILE_DEBIAN-10)
PKG_UBUNTU-BIONIC = releases/$(VERSION)/$(TOKEN_UBUNTU-BIONIC)/$(FILE_UBUNTU-BIONIC)
PKG_ARCHLINUX = releases/$(VERSION)/$(TOKEN_ARCHLINUX)/$(FILE_ARCHLINUX)
SHA256SUMS = releases/$(VERSION)/SHA256SUMS
SHA256SUMS_ASC = $(SHA256SUMS).asc
default: build
#
# art
#
-art: art/mgmt_logo_default_symbol.png art/mgmt_logo_default_tall.png art/mgmt_logo_default_wide.png art/mgmt_logo_reversed_symbol.png art/mgmt_logo_reversed_tall.png art/mgmt_logo_reversed_wide.png art/mgmt_logo_white_symbol.png art/mgmt_logo_white_tall.png art/mgmt_logo_white_wide.png
+art: art/mgmt_logo_default_symbol.png art/mgmt_logo_default_tall.png art/mgmt_logo_default_wide.png art/mgmt_logo_reversed_symbol.png art/mgmt_logo_reversed_tall.png art/mgmt_logo_reversed_wide.png art/mgmt_logo_white_symbol.png art/mgmt_logo_white_tall.png art/mgmt_logo_white_wide.png ## generate artwork
cleanart:
rm -f art/mgmt_logo_default_symbol.png art/mgmt_logo_default_tall.png art/mgmt_logo_default_wide.png art/mgmt_logo_reversed_symbol.png art/mgmt_logo_reversed_tall.png art/mgmt_logo_reversed_wide.png art/mgmt_logo_white_symbol.png art/mgmt_logo_white_tall.png art/mgmt_logo_white_wide.png
@@ -82,66 +118,105 @@ art/mgmt_logo_white_wide.png: art/mgmt_logo_white_wide.svg
all: docs $(PROGRAM).static
# show the current version
-version:
+version: ## show the current version
@echo $(VERSION)
-program:
+program: ## show the program name
@echo $(PROGRAM)
-path:
+path: ## create working paths
./misc/make-path.sh
-deps:
+deps: ## install system and golang dependencies
./misc/make-deps.sh
-run:
+run: ## run mgmt
find . -maxdepth 1 -type f -name '*.go' -not -name '*_test.go' | xargs go run -ldflags "-X main.program=$(PROGRAM) -X main.version=$(SVERSION)"
# include race flag
race:
find . -maxdepth 1 -type f -name '*.go' -not -name '*_test.go' | xargs go run -race -ldflags "-X main.program=$(PROGRAM) -X main.version=$(SVERSION)"
# generate go files from non-go source
bindata: ## generate go files from non-go sources
$(MAKE) --quiet -C bindata
$(MAKE) --quiet -C lang/funcs
generate:
go generate
-build: $(PROGRAM)
+lang: ## generates the lexer/parser for the language frontend
+@# recursively run make in child dir named lang
+@$(MAKE) --quiet -C lang
-$(PROGRAM): main.go
-@echo "Building: $(PROGRAM), version: $(SVERSION)..."
-ifneq ($(OLDGOLANG),)
-@# avoid equals sign in old golang versions eg in: -X foo=bar
-time go build -ldflags "-X main.program $(PROGRAM) -X main.version $(SVERSION)" -o $(PROGRAM) $(BUILD_FLAGS);
-else
-time go build -i -ldflags "-X main.program=$(PROGRAM) -X main.version=$(SVERSION)" -o $(PROGRAM) $(BUILD_FLAGS);
-endif
+# build a `mgmt` binary for current host os/arch
+$(PROGRAM): build/mgmt-${GOHOSTOS}-${GOHOSTARCH} ## build an mgmt binary for current host os/arch
+cp -a $< $@
-$(PROGRAM).static: main.go
+$(PROGRAM).static: $(GO_FILES) $(MCL_FILES)
@echo "Building: $(PROGRAM).static, version: $(SVERSION)..."
go generate
-ifneq ($(OLDGOLANG),)
-@# avoid equals sign in old golang versions eg in: -X foo=bar
-go build -a -installsuffix cgo -tags netgo -ldflags '-extldflags "-static" -X main.program $(PROGRAM) -X main.version $(SVERSION)' -o $(PROGRAM).static $(BUILD_FLAGS);
-else
-go build -a -installsuffix cgo -tags netgo -ldflags '-extldflags "-static" -X main.program=$(PROGRAM) -X main.version=$(SVERSION)' -o $(PROGRAM).static $(BUILD_FLAGS);
-endif
+go build -a -installsuffix cgo -tags netgo -ldflags '-extldflags "-static" -X main.program=$(PROGRAM) -X main.version=$(SVERSION) -s -w' -o $(PROGRAM).static $(BUILD_FLAGS);
-clean:
+build: LDFLAGS=-s -w ## build a fresh mgmt binary
build: $(PROGRAM)
build-debug: LDFLAGS=
build-debug: $(PROGRAM)
# pattern rule target for (cross)building, mgmt-OS-ARCH will be expanded to the correct build
# extract os and arch from target pattern
GOOS=$(firstword $(subst -, ,$*))
GOARCH=$(lastword $(subst -, ,$*))
build/mgmt-%: $(GO_FILES) $(MCL_FILES) | bindata lang funcgen
@echo "Building: $(PROGRAM), os/arch: $*, version: $(SVERSION)..."
@time env GOOS=${GOOS} GOARCH=${GOARCH} go build -i -ldflags=$(PKGNAME)="-X main.program=$(PROGRAM) -X main.version=$(SVERSION) ${LDFLAGS}" -o $@ $(BUILD_FLAGS)
# create a list of binary file names to use as make targets
crossbuild_targets = $(addprefix build/mgmt-,$(subst /,-,${GOOSARCHES}))
crossbuild: ${crossbuild_targets}
clean: ## clean things up
$(MAKE) --quiet -C test clean
$(MAKE) --quiet -C bindata clean
$(MAKE) --quiet -C lang/funcs clean
$(MAKE) --quiet -C lang clean
$(MAKE) --quiet -C misc/mkosi clean
rm -f lang/funcs/core/generated_funcs.go || true
rm -f lang/funcs/core/generated_funcs_test.go || true
[ ! -e $(PROGRAM) ] || rm $(PROGRAM)
rm -f *_stringer.go # generated by `go generate`
rm -f *_mock.go # generated by `go generate`
# crossbuild artifacts
rm -f build/mgmt-*
-test:
+test: build ## run tests
+@# recursively run make in child dir named test
+@$(MAKE) --quiet -C test
./test.sh
# create all test targets for make tab completion (eg: make test-gofmt)
test_suites=$(shell find test/ -maxdepth 1 -name test-* -exec basename {} .sh \;)
# allow to run only one test suite at a time
${test_suites}: test-%: build
./test.sh $*
# targets to run individual shell tests (eg: make test-shell-load0)
test_shell=$(shell find test/shell/ -maxdepth 1 -name "*.sh" -exec basename {} .sh \;)
$(addprefix test-shell-,${test_shell}): test-shell-%: build
./test/test-shell.sh "$*.sh"
gofmt:
-find . -maxdepth 3 -type f -name '*.go' -not -path './old/*' -not -path './tmp/*' -exec gofmt -w {} \;
+# TODO: remove gofmt once goimports has a -s option
+find . -maxdepth 9 -type f -name '*.go' -not -path './old/*' -not -path './tmp/*' -not -path './vendor/*' -exec gofmt -s -w {} \;
+find . -maxdepth 9 -type f -name '*.go' -not -path './old/*' -not -path './tmp/*' -not -path './vendor/*' -exec goimports -w {} \;
yamlfmt:
find . -maxdepth 3 -type f -name '*.yaml' -not -path './old/*' -not -path './tmp/*' -not -path './omv.yaml' -exec ruby -e "require 'yaml'; x=YAML.load_file('{}').to_yaml.each_line.map(&:rstrip).join(10.chr)+10.chr; File.open('{}', 'w').write x" \;
-format: gofmt yamlfmt
+format: gofmt yamlfmt ## format yaml and golang code
-docs: $(PROGRAM)-documentation.pdf
+docs: $(PROGRAM)-documentation.pdf ## generate docs
$(PROGRAM)-documentation.pdf: docs/documentation.md
pandoc docs/documentation.md -o docs/'$(PROGRAM)-documentation.pdf'
@@ -166,7 +241,7 @@ rpmbuild/SOURCES/: tar
rpmbuild/SRPMS/: srpm
rpmbuild/RPMS/: rpm
-upload: upload-sources upload-srpms upload-rpms
+upload: upload-sources upload-srpms upload-rpms ## upload sources
# do nothing
#
@@ -271,10 +346,165 @@ upload-rpms: rpmbuild/RPMS/ rpmbuild/RPMS/SHA256SUMS rpmbuild/RPMS/SHA256SUMS.as
rsync -avz --prune-empty-dirs rpmbuild/RPMS/ $(SERVER):$(REMOTE_PATH)/RPMS/; \
fi
upload-releases:
echo Running releases/ upload...
rsync -avz --exclude '.mkdir' --exclude 'mgmt-release.url' releases/ $(SERVER):$(REMOTE_PATH)/releases/
#
# copr build
#
-copr: upload-srpms
+copr: upload-srpms ## build in copr
./misc/copr-build.py https://$(SERVER)/$(REMOTE_PATH)/SRPMS/$(SRPM_BASE)
#
# tag
#
tag: ## tags a new release
./misc/tag.sh
#
# mkosi
#
mkosi: mkosi_fedora-30 mkosi_fedora-29 mkosi_centos-7 mkosi_debian-10 mkosi_ubuntu-bionic mkosi_archlinux ## builds distro packages via mkosi
mkosi_fedora-30: releases/$(VERSION)/.mkdir
@title='$@' ; echo "Generating: $${title#'mkosi_'} via mkosi..."
@title='$@' ; distro=$${title#'mkosi_'} ; ./misc/mkosi/make.sh $${distro} `realpath "releases/$(VERSION)/"`
mkosi_fedora-29: releases/$(VERSION)/.mkdir
@title='$@' ; echo "Generating: $${title#'mkosi_'} via mkosi..."
@title='$@' ; distro=$${title#'mkosi_'} ; ./misc/mkosi/make.sh $${distro} `realpath "releases/$(VERSION)/"`
mkosi_centos-7: releases/$(VERSION)/.mkdir
@title='$@' ; echo "Generating: $${title#'mkosi_'} via mkosi..."
@title='$@' ; distro=$${title#'mkosi_'} ; ./misc/mkosi/make.sh $${distro} `realpath "releases/$(VERSION)/"`
mkosi_debian-10: releases/$(VERSION)/.mkdir
@title='$@' ; echo "Generating: $${title#'mkosi_'} via mkosi..."
@title='$@' ; distro=$${title#'mkosi_'} ; ./misc/mkosi/make.sh $${distro} `realpath "releases/$(VERSION)/"`
mkosi_ubuntu-bionic: releases/$(VERSION)/.mkdir
@title='$@' ; echo "Generating: $${title#'mkosi_'} via mkosi..."
@title='$@' ; distro=$${title#'mkosi_'} ; ./misc/mkosi/make.sh $${distro} `realpath "releases/$(VERSION)/"`
mkosi_archlinux: releases/$(VERSION)/.mkdir
@title='$@' ; echo "Generating: $${title#'mkosi_'} via mkosi..."
@title='$@' ; distro=$${title#'mkosi_'} ; ./misc/mkosi/make.sh $${distro} `realpath "releases/$(VERSION)/"`
#
# release
#
release: releases/$(VERSION)/mgmt-release.url ## generates and uploads a release
releases_path:
@#Don't put any other output or dependencies in here or they'll show!
@echo "releases/$(VERSION)/"
release_fedora-30: $(PKG_FEDORA-30)
release_fedora-29: $(PKG_FEDORA-29)
release_centos-7: $(PKG_CENTOS-7)
release_debian-10: $(PKG_DEBIAN-10)
release_ubuntu-bionic: $(PKG_UBUNTU-BIONIC)
release_archlinux: $(PKG_ARCHLINUX)
releases/$(VERSION)/mgmt-release.url: $(PKG_FEDORA-30) $(PKG_FEDORA-29) $(PKG_CENTOS-7) $(PKG_DEBIAN-10) $(PKG_UBUNTU-BIONIC) $(PKG_ARCHLINUX) $(SHA256SUMS_ASC)
@echo "Pushing git tag $(VERSION) to origin..."
git push origin $(VERSION)
@echo "Creating github release..."
hub release create \
-F <( echo -e "$(VERSION)\n";echo "Verify the signatures of all packages before you use them. The signing key can be downloaded from https://purpleidea.com/contact/#pgp-key to verify the release." ) \
-a $(PKG_FEDORA-30) \
-a $(PKG_FEDORA-29) \
-a $(PKG_CENTOS-7) \
-a $(PKG_DEBIAN-10) \
-a $(PKG_UBUNTU-BIONIC) \
-a $(PKG_ARCHLINUX) \
-a $(SHA256SUMS_ASC) \
$(VERSION) \
> releases/$(VERSION)/mgmt-release.url \
&& cat releases/$(VERSION)/mgmt-release.url \
|| rm -f releases/$(VERSION)/mgmt-release.url
releases/$(VERSION)/.mkdir:
mkdir -p releases/$(VERSION)/{$(TOKEN_FEDORA-30),$(TOKEN_FEDORA-29),$(TOKEN_CENTOS-7),$(TOKEN_DEBIAN-10),$(TOKEN_UBUNTU-BIONIC),$(TOKEN_ARCHLINUX)}/ && touch releases/$(VERSION)/.mkdir
releases/$(VERSION)/$(TOKEN_FEDORA-30)/changelog: $(PROGRAM) releases/$(VERSION)/.mkdir
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Generating: $${distro} changelog..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/make-rpm-changelog.sh "$${distro}" $(VERSION)
$(PKG_FEDORA-30): releases/$(VERSION)/$(TOKEN_FEDORA-30)/changelog
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Building: $${distro} package..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/fpm-pack.sh $${distro} $(VERSION) "$(FILE_FEDORA-30)" libvirt-devel augeas-devel
releases/$(VERSION)/$(TOKEN_FEDORA-29)/changelog: $(PROGRAM) releases/$(VERSION)/.mkdir
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Generating: $${distro} changelog..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/make-rpm-changelog.sh "$${distro}" $(VERSION)
$(PKG_FEDORA-29): releases/$(VERSION)/$(TOKEN_FEDORA-29)/changelog
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Building: $${distro} package..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/fpm-pack.sh $${distro} $(VERSION) "$(FILE_FEDORA-29)" libvirt-devel augeas-devel
releases/$(VERSION)/$(TOKEN_CENTOS-7)/changelog: $(PROGRAM) releases/$(VERSION)/.mkdir
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Generating: $${distro} changelog..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/make-rpm-changelog.sh "$${distro}" $(VERSION)
$(PKG_CENTOS-7): releases/$(VERSION)/$(TOKEN_CENTOS-7)/changelog
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Building: $${distro} package..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/fpm-pack.sh $${distro} $(VERSION) "$(FILE_CENTOS-7)" libvirt-devel augeas-devel
releases/$(VERSION)/$(TOKEN_DEBIAN-10)/changelog: $(PROGRAM) releases/$(VERSION)/.mkdir
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Generating: $${distro} changelog..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/make-deb-changelog.sh "$${distro}" $(VERSION)
$(PKG_DEBIAN-10): releases/$(VERSION)/$(TOKEN_DEBIAN-10)/changelog
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Building: $${distro} package..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/fpm-pack.sh $${distro} $(VERSION) "$(FILE_DEBIAN-10)" libvirt-dev libaugeas-dev
releases/$(VERSION)/$(TOKEN_UBUNTU-BIONIC)/changelog: $(PROGRAM) releases/$(VERSION)/.mkdir
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Generating: $${distro} changelog..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/make-deb-changelog.sh "$${distro}" $(VERSION)
$(PKG_UBUNTU-BIONIC): releases/$(VERSION)/$(TOKEN_UBUNTU-BIONIC)/changelog
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Building: $${distro} package..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/fpm-pack.sh $${distro} $(VERSION) "$(FILE_UBUNTU-BIONIC)" libvirt-dev libaugeas-dev
$(PKG_ARCHLINUX): $(PROGRAM) releases/$(VERSION)/.mkdir
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Building: $${distro} package..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/fpm-pack.sh $${distro} $(VERSION) "$(FILE_ARCHLINUX)" libvirt augeas
$(SHA256SUMS): $(PKG_FEDORA-30) $(PKG_FEDORA-29) $(PKG_CENTOS-7) $(PKG_DEBIAN-10) $(PKG_UBUNTU-BIONIC) $(PKG_ARCHLINUX)
@# remove the directory separator in the SHA256SUMS file
@echo "Generating: sha256 sum..."
sha256sum $(PKG_FEDORA-30) $(PKG_FEDORA-29) $(PKG_CENTOS-7) $(PKG_DEBIAN-10) $(PKG_UBUNTU-BIONIC) $(PKG_ARCHLINUX) | awk -F '/| ' '{print $$1" "$$6}' > $(SHA256SUMS)
$(SHA256SUMS_ASC): $(SHA256SUMS)
@echo "Signing sha256 sum..."
gpg2 --yes --clearsign $(SHA256SUMS)
build_container: ## builds the container
docker build -t purpleidea/mgmt-build -f docker/Dockerfile.build .
docker run -td --name mgmt-build purpleidea/mgmt-build
docker cp mgmt-build:/root/gopath/src/github.com/purpleidea/mgmt/mgmt .
docker build -t purpleidea/mgmt -f docker/Dockerfile.static .
docker rm mgmt-build || true
clean_container: ## removes the container
docker rmi purpleidea/mgmt-build
docker rmi purpleidea/mgmt
help: ## show this help screen
@echo 'Usage: make <OPTIONS> ... <TARGETS>'
@echo ''
@echo 'Available targets are:'
@echo ''
@grep -E '^[ a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | \
awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}'
@echo ''
funcgen: lang/funcs/core/generated_funcs.go
lang/funcs/core/generated_funcs.go: lang/funcs/funcgen/*.go lang/funcs/core/funcgen.yaml lang/funcs/funcgen/templates/generated_funcs.go.tpl
@echo "Generating: funcs..."
@go run `find lang/funcs/funcgen/ -maxdepth 1 -type f -name '*.go' -not -name '*_test.go'` -templates=lang/funcs/funcgen/templates/generated_funcs.go.tpl >/dev/null
# vim: ts=8
README.md
@@ -4,81 +4,130 @@
[![Go Report Card](https://goreportcard.com/badge/github.com/purpleidea/mgmt?style=flat-square)](https://goreportcard.com/report/github.com/purpleidea/mgmt)
[![Build Status](https://img.shields.io/travis/purpleidea/mgmt/master.svg?style=flat-square)](http://travis-ci.org/purpleidea/mgmt)
+[![Build Status](https://github.com/purpleidea/mgmt/workflows/.github/workflows/test.yaml/badge.svg)](https://github.com/purpleidea/mgmt/actions/)
[![GoDoc](https://img.shields.io/badge/godoc-reference-5272B4.svg?style=flat-square)](https://godoc.org/github.com/purpleidea/mgmt)
-[![IRC](https://img.shields.io/badge/irc-%23mgmtconfig-brightgreen.svg?style=flat-square)](https://webchat.freenode.net/?channels=#mgmtconfig)
+[![IRC](https://img.shields.io/badge/irc-%23mgmtconfig-orange.svg?style=flat-square)](https://web.libera.chat/?channels=#mgmtconfig)
-[![Jenkins](https://img.shields.io/badge/jenkins-status-brightgreen.svg?style=flat-square)](https://ci.centos.org/job/purpleidea-mgmt/)
+[![Patreon](https://img.shields.io/badge/patreon-donate-yellow.svg?style=flat-square)](https://www.patreon.com/purpleidea)
+[![Liberapay](https://img.shields.io/badge/liberapay-donate-yellow.svg?style=flat-square)](https://liberapay.com/purpleidea/donate)
## About:
`Mgmt` is a real-time automation tool. It is familiar to existing configuration
management software, but is drastically more powerful as it can allow you to
build real-time, closed-loop feedback systems, in a very safe way, and with a
surprisingly small amount of our `mcl` code. For example, the following code will
ensure that your file server is set to read-only when it's friday.
```mcl
import "datetime"
$is_friday = datetime.weekday(datetime.now()) == "friday"
file "/srv/files/" {
state => $const.res.file.state.exists,
mode => if $is_friday { # this updates the mode, the instant it changes!
"0550"
} else {
"0770"
},
}
```
It can run continuously, intermittently, or on-demand, and in the first case, it
will guarantee that your system is always in the desired state for that instant!
In this mode it can run as a decentralized cluster of agents across your
network, each exchanging information with the others in real-time, to respond to
your changing needs. For example, if you want to ensure that some resource runs
on a maximum of two hosts in your cluster, you can specify that as well:
```mcl
import "sys"
import "world"
# we'll set a few scheduling options:
$opts = struct{strategy => "rr", max => 2, ttl => 10,}
# schedule in a particular namespace with options:
$set = world.schedule("xsched", $opts)
if sys.hostname() in $set {
# use your imagination to put something more complex right here...
print "i got scheduled" {} # this will run on the chosen machines
}
```
As you add and remove hosts from the cluster, the real-time `schedule` function
will dynamically pick up to two hosts from the available pool. These specific
functions aren't intrinsic to the core design, and new ones can be easily added.
Please read on if you'd like to learn more...
## Community:
Come join us in the `mgmt` community!
| Medium | Link |
-|---|---|---|
+|---|---|
-| IRC | [#mgmtconfig](https://webchat.freenode.net/?channels=#mgmtconfig) on Freenode |
+| IRC | [#mgmtconfig](https://web.libera.chat/?channels=#mgmtconfig) on Libera.Chat |
| Twitter | [@mgmtconfig](https://twitter.com/mgmtconfig) & [#mgmtconfig](https://twitter.com/hashtag/mgmtconfig) |
| Mailing list | [mgmtconfig-list@redhat.com](https://www.redhat.com/mailman/listinfo/mgmtconfig-list) |
+| Patreon | [purpleidea](https://www.patreon.com/purpleidea) on Patreon |
+| Liberapay | [purpleidea](https://liberapay.com/purpleidea/donate) on Liberapay |
## Status:
-Mgmt is a fairly new project.
-We're working towards being minimally useful for production environments.
-We aren't feature complete for what we'd consider a 1.x release yet.
-With your help you'll be able to influence our design and get us there sooner!
+Mgmt is a next generation automation tool. It has similarities to other tools in
+the configuration management space, but has a fast, modern, distributed systems
+approach. The project contains an engine and a language.
+[Please have a look at an introductory video or blog post.](docs/on-the-web.md)
+Mgmt is a fairly new project. It is usable today, but not yet feature complete.
+With your help you'll be able to influence our design and get us to 1.0 sooner!
+Interested users should read the [quick start guide](docs/quick-start-guide.md).
## Documentation:
Please read, enjoy and help improve our documentation!
| Documentation | Additional Notes |
|---|---|
+| [quick start guide](docs/quick-start-guide.md) | for everyone |
+| [frequently asked questions](docs/faq.md) | for everyone |
| [general documentation](docs/documentation.md) | for everyone |
-| [quick start guide](docs/quick-start-guide.md) | for mgmt developers |
+| [language guide](docs/language-guide.md) | for everyone |
+| [function guide](docs/function-guide.md) | for mgmt developers |
| [resource guide](docs/resource-guide.md) | for mgmt developers |
+| [style guide](docs/style-guide.md) | for mgmt developers |
| [godoc API reference](https://godoc.org/github.com/purpleidea/mgmt) | for mgmt developers |
| [prometheus guide](docs/prometheus.md) | for everyone |
| [puppet guide](docs/puppet-guide.md) | for puppet sysadmins |
+| [development](docs/development.md) | for mgmt developers |
## Questions:
-Please ask in the [community](#community)!
-If you have a well phrased question that might benefit others, consider asking it by sending a patch to the documentation [FAQ](https://github.com/purpleidea/mgmt/blob/master/docs/documentation.md#usage-and-frequently-asked-questions) section. I'll merge your question, and a patch with the answer!
-## Roadmap:
-Please see: [TODO.md](TODO.md) for a list of upcoming work and TODO items.
-Please get involved by working on one of these items or by suggesting something else!
-Feel free to grab one of the straightforward [#mgmtlove](https://github.com/purpleidea/mgmt/labels/mgmtlove) issues if you're a first time contributor to the project or if you're unsure about what to hack on!
+Please ask in the [community](#community)!
+If you have a well phrased question that might benefit others, consider asking
+it by sending a patch to the [FAQ](docs/faq.md) section. I'll merge your
+question, and a patch with the answer!
+## Get involved:
+Feel free to grab one of the straightforward [#mgmtlove](https://github.com/purpleidea/mgmt/labels/mgmtlove)
+issues if you're a first time contributor to the project or if you're unsure
+about what to hack on! Please get involved by working on one of these items or
+by suggesting something else! There are some lower priority issues and harder
+issues available in our [TODO](TODO.md) file. Please have a look.
## Bugs:
-Please set the `DEBUG` constant in [main.go](https://github.com/purpleidea/mgmt/blob/master/main.go) to `true`, and post the logs when you report the [issue](https://github.com/purpleidea/mgmt/issues).
-Bonus points if you provide a [shell](https://github.com/purpleidea/mgmt/tree/master/test/shell) or [OMV](https://github.com/purpleidea/mgmt/tree/master/test/omv) reproducible test case.
-Feel free to read my article on [debugging golang programs](https://ttboj.wordpress.com/2016/02/15/debugging-golang-programs/).
+Please set the `DEBUG` constant in [main.go](https://github.com/purpleidea/mgmt/blob/master/main.go)
+to `true`, and post the logs when you report the [issue](https://github.com/purpleidea/mgmt/issues).
+Feel free to read my article on [debugging golang programs](https://purpleidea.com/blog/2016/02/15/debugging-golang-programs/).
## Patches:
We'd love to have your patches! Please send them by email, or as a pull request.
## On the web:
| Author | Format | Subject |
|---|---|---|
| James Shubin | blog | [Next generation configuration mgmt](https://ttboj.wordpress.com/2016/01/18/next-generation-configuration-mgmt/) |
| James Shubin | video | [Introductory recording from DevConf.cz 2016](https://www.youtube.com/watch?v=GVhpPF0j-iE&html5=1) |
| James Shubin | video | [Introductory recording from CfgMgmtCamp.eu 2016](https://www.youtube.com/watch?v=fNeooSiIRnA&html5=1) |
| Julian Dunn | video | [On mgmt at CfgMgmtCamp.eu 2016](https://www.youtube.com/watch?v=kfF9IATUask&t=1949&html5=1) |
| Walter Heck | slides | [On mgmt at CfgMgmtCamp.eu 2016](http://www.slideshare.net/olindata/configuration-management-time-for-a-4th-generation/3) |
| Marco Marongiu | blog | [On mgmt](http://syslog.me/2016/02/15/leap-or-die/) |
| Felix Frank | blog | [From Catalog To Mgmt (on puppet to mgmt "transpiling")](https://ffrank.github.io/features/2016/02/18/from-catalog-to-mgmt/) |
| James Shubin | blog | [Automatic edges in mgmt (...and the pkg resource)](https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/) |
| James Shubin | blog | [Automatic grouping in mgmt](https://ttboj.wordpress.com/2016/03/30/automatic-grouping-in-mgmt/) |
| John Arundel | tweet | [“Puppets days are numbered.”](https://twitter.com/bitfield/status/732157519142002688) |
| Felix Frank | blog | [Puppet, Meet Mgmt (on puppet to mgmt internals)](https://ffrank.github.io/features/2016/06/12/puppet,-meet-mgmt/) |
| Felix Frank | blog | [Puppet Powered Mgmt (puppet to mgmt tl;dr)](https://ffrank.github.io/features/2016/06/19/puppet-powered-mgmt/) |
| James Shubin | blog | [Automatic clustering in mgmt](https://ttboj.wordpress.com/2016/06/20/automatic-clustering-in-mgmt/) |
| James Shubin | video | [Recording from CoreOSFest 2016](https://www.youtube.com/watch?v=KVmDCUA42wc&html5=1) |
| James Shubin | video | [Recording from DebConf16](http://meetings-archive.debian.net/pub/debian-meetings/2016/debconf16/Next_Generation_Config_Mgmt.webm) ([Slides](https://annex.debconf.org//debconf-share/debconf16/slides/15-next-generation-config-mgmt.pdf)) |
| Felix Frank | blog | [Edging It All In (puppet and mgmt edges)](https://ffrank.github.io/features/2016/07/12/edging-it-all-in/) |
| Felix Frank | blog | [Translating All The Things (puppet to mgmt translation warnings)](https://ffrank.github.io/features/2016/08/19/translating-all-the-things/) |
| James Shubin | video | [Recording from systemd.conf 2016](https://www.youtube.com/watch?v=jB992Zb3nH0&html5=1) |
| James Shubin | blog | [Remote execution in mgmt](https://ttboj.wordpress.com/2016/10/07/remote-execution-in-mgmt/) |
| James Shubin | video | [Recording from High Load Strategy 2016](https://vimeo.com/191493409) |
| James Shubin | video | [Recording from NLUUG 2016](https://www.youtube.com/watch?v=MmpwOQAb_SE&html5=1) |
| James Shubin | blog | [Send/Recv in mgmt](https://ttboj.wordpress.com/2016/12/07/sendrecv-in-mgmt/) |
| James Shubin | blog | [Metaparameters in mgmt](https://ttboj.wordpress.com/2017/03/01/metaparameters-in-mgmt/) |
## [Read what people are saying and publishing about mgmt!](docs/on-the-web.md)
Happy hacking!
TODO.md
@@ -1,69 +1,90 @@
# TODO
-If you're looking for something to do, look here!
-Let us know if you're working on one of the items.
-If you'd like something to work on, ping @purpleidea and I'll create an issue
-tailored especially for you! Just let me know your approximate golang skill
-level and how many hours you'd like to spend on the patch.
+Here is a TODO list of longstanding items that are either lower-priority, or
+more involved in terms of time, skill-level, and/or motivation.
+Please have a look, and let us know if you're working on one of the items. It's
+best to open an issue to track your progress and to discuss any implementation
+questions you might have.
+Lastly, if you'd like something different to work on, please ping @purpleidea
+and I'll create an issue tailored especially for your approximate golang skill
+level and available time commitment in terms of hours you'd need to spend on the
+patch.
+Happy Hacking!
## Package resource
- [ ] getfiles support on debian [bug](https://github.com/hughsie/PackageKit/issues/118)
- [ ] directory info on fedora [bug](https://github.com/hughsie/PackageKit/issues/117)
- [ ] dnf blocker [bug](https://github.com/hughsie/PackageKit/issues/110)
## File resource [bug](https://github.com/purpleidea/mgmt/issues/64) [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
- [ ] recurse limit support [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
- [ ] fanotify support [bug](https://github.com/go-fsnotify/fsnotify/issues/114)
## Svc resource
- [ ] base resource improvements
- [ ] refreshonly support [:heart:](https://github.com/purpleidea/mgmt/issues/464)
## Exec resource
- [ ] base resource improvements
## Timer resource
- [ ] increment algorithm (linear, exponential, etc...) [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## User/Group resource
- [ ] base resource [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
- [ ] automatic edges to file resource [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## Virt (libvirt) resource
- [ ] base resource improvements [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## Net (systemd-networkd) resource
- [ ] base resource
## Nspawn (systemd-nspawn) resource
- [ ] base resource [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## Mount (systemd-mount) resource
- [ ] base resource [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## Cron (systemd-timer) resource
- [ ] base resource [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## Http resource
- [ ] base resource [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## Etcd improvements
- [ ] fix embedded etcd master race
- [ ] fix etcd race bug that only happens during CI testing (intermittently
failing test case issue)
## Torrent/dht file transfer
- [ ] base plumbing
## GPG/Auth improvements
- [ ] base plumbing
## Resource improvements
- [ ] more reversible resources implemented
- [ ] more "cloud" resources
## Language improvements
-- [ ] language design
-- [ ] lexer/parser
+- [ ] more core functions
- [ ] automatic language formatter, ala `gofmt`
- [ ] gedit/gnome-builder/gtksourceview syntax highlighting
- [ ] vim syntax highlighting
-- [ ] emacs syntax highlighting
+- [ ] emacs syntax highlighting: see `misc/emacs/` (needs updating)
- [ ] exposed $error variable for feedback in the language
- [ ] improve the printf function to add %[]s, %[]f ([]str, []float) and map,
struct, nested etc... %v would be nice too!
- [ ] add line/col/file annotations to AST so we can get locations of errors
that the parser finds
- [ ] add more error messages with the `%error` pattern in parser.y
- [ ] we should have helper functions or language sugar to pull a field out of a
struct, or a value out of a map, or an index out of a list, etc...
## Engine improvements
- [ ] add a "waiting for func" message in the func engine to notify the user
about slow functions...
## Other
- [ ] better error/retry handling
- [ ] deb package target in Makefile
- [ ] reproducible builds
- [ ] add your suggestions!
Vagrantfile
@@ -6,13 +6,16 @@ Vagrant.configure(2) do |config|
config.vm.synced_folder ".", "/vagrant", disabled: true
config.vm.define "mgmt-dev" do |instance|
-instance.vm.box = "fedora/24-cloud-base"
+instance.vm.box = "bento/fedora-31"
end
config.vm.provider "virtualbox" do |v|
v.memory = 1536
v.cpus = 2
end
+config.vm.provider "libvirt" do |v|
+v.memory = 2048
+end
config.vm.provision "file", source: "vagrant/motd", destination: ".motd"
config.vm.provision "shell", inline: "cp ~vagrant/.motd /etc/motd"
@@ -20,15 +23,25 @@ Vagrant.configure(2) do |config|
config.vm.provision "file", source: "vagrant/mgmt.bashrc", destination: ".mgmt.bashrc"
config.vm.provision "file", source: "~/.gitconfig", destination: ".gitconfig"
-# copied from make-deps.sh (with added git)
-config.vm.provision "shell", inline: "dnf install -y libvirt-devel golang golang-googlecode-tools-stringer hg git"
+config.vm.provision "shell", inline: "dnf install -y golang git make"
+# set up packagekit
+config.vm.provision "shell" do |shell|
+shell.inline = <<-SCRIPT
+dnf install -y PackageKit
+systemctl enable packagekit
+systemctl start packagekit
+SCRIPT
+end
# set up vagrant home
script = <<-SCRIPT
grep -q 'mgmt\.bashrc' ~/.bashrc || echo '. ~/.mgmt.bashrc' >>~/.bashrc
. ~/.mgmt.bashrc
-go get -u github.com/purpleidea/mgmt
-cd ~/gopath/src/github.com/purpleidea/mgmt
+mkdir -p ~/gopath/src/github.com/purpleidea
+cd ~/gopath/src/github.com/purpleidea
+git clone https://github.com/purpleidea/mgmt --recursive
+cd mgmt
make deps
SCRIPT
config.vm.provision "shell" do |shell|
Binary file changed, not shown (before: 24 KiB, after: 683 KiB).
art/mgmt_poohbear_meme.jpg: new binary file, not shown (102 KiB).
bindata/Makefile
@@ -0,0 +1,42 @@
# Mgmt
# Copyright (C) 2013-2021+ James Shubin and the project contributors
# Written by James Shubin <james@shubin.ca> and the project contributors
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# The bindata target generates go files from any source defined below. To use
# the files, import the generated "bindata" package and use:
# `bytes, err := bindata.Asset("FILEPATH")`
# where FILEPATH is the path of the original input file relative to `bindata/`.
# To get a list of files stored in this "bindata" package, you can use:
# `paths := bindata.AssetNames()` and `paths, err := bindata.AssetDir(name)`
# to get a list of files with a directory prefix.
.PHONY: build clean
default: build
build: bindata.go
# add more input files as dependencies at the end here...
bindata.go: ../COPYING
@echo "Generating: bindata..."
# go-bindata --pkg bindata -o <OUTPUT> <INPUT>
go-bindata --pkg bindata -o ./$@ $^
# gofmt the output file
gofmt -s -w $@
@ROOT=$$(dirname "$${BASH_SOURCE}")/.. && $$ROOT/misc/header.sh '$@'
clean:
# remove generated bindata.go
@ROOT=$$(dirname "$${BASH_SOURCE}")/.. && rm -f bindata.go
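The comment block at the top of this new bindata/Makefile describes the API of the generated `bindata` package (`Asset`, `AssetNames`, `AssetDir`). As a minimal sketch of how calling code might consume it — the import path and the `"../COPYING"` asset key are assumptions based on that comment and this Makefile, not something confirmed elsewhere in this diff:

```go
package main

import (
	"fmt"
	"log"

	// Assumed import path for the generated package; the real module layout may differ.
	"github.com/purpleidea/mgmt/bindata"
)

func main() {
	// List every file that was compiled into the binary.
	for _, name := range bindata.AssetNames() {
		fmt.Println("embedded asset:", name)
	}

	// "../COPYING" is the input path relative to bindata/, per the Makefile comment;
	// adjust the key if go-bindata was invoked with a different input path.
	data, err := bindata.Asset("../COPYING")
	if err != nil {
		log.Fatalf("could not load asset: %v", err)
	}
	fmt.Printf("license text is %d bytes\n", len(data))
}
```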

@@ -1,18 +1,18 @@
// Mgmt
-// Copyright (C) 2013-2017+ James Shubin and the project contributors
+// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Affero General Public License as published by
+// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Affero General Public License for more details.
+// GNU General Public License for more details.
//
-// You should have received a copy of the GNU Affero General Public License
+// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package converger is a facility for reporting the converged state.
@@ -20,135 +20,256 @@ package converger
import (
"fmt"
+"sort"
"sync"
"time"
"github.com/purpleidea/mgmt/util"
+"github.com/purpleidea/mgmt/util/errwrap"
)
-// TODO: we could make a new function that masks out the state of certain
-// UID's, but at the moment the new Timer code has obsoleted the need...
+// New builds a new converger coordinator.
+func New(timeout int64) *Coordinator {
return &Coordinator{
// Converger is the general interface for implementing a convergence watcher.
type Converger interface { // TODO: need a better name
Register() UID
IsConverged(UID) bool // is the UID converged ?
SetConverged(UID, bool) error // set the converged state of the UID
Unregister(UID)
Start()
Pause()
Loop(bool)
ConvergedTimer(UID) <-chan time.Time
Status() map[uint64]bool
Timeout() int // returns the timeout that this was created with
SetStateFn(func(bool) error) // sets the stateFn
}
// UID is the interface resources can use to notify with if converged. You'll
// need to use part of the Converger interface to Register initially too.
type UID interface {
ID() uint64 // get Id
Name() string // get a friendly name
SetName(string)
IsValid() bool // has Id been initialized ?
InvalidateID() // set Id to nil
IsConverged() bool
SetConverged(bool) error
Unregister()
ConvergedTimer() <-chan time.Time
StartTimer() (func() error, error) // cancellable is the same as StopTimer()
ResetTimer() error // resets counter to zero
StopTimer() error
}
// converger is an implementation of the Converger interface.
type converger struct {
timeout int // must be zero (instant) or greater seconds to run
stateFn func(bool) error // run on converged state changes with state bool
converged bool // did we converge (state changes of this run Fn)
channel chan struct{} // signal here to run an isConverged check
control chan bool // control channel for start/pause
mutex sync.RWMutex // used for controlling access to status and lastid
lastid uint64
status map[uint64]bool
}
// cuid is an implementation of the UID interface.
type cuid struct {
converger Converger
id uint64
name string // user defined, friendly name
mutex sync.Mutex
timer chan struct{}
running bool // is the above timer running?
wg sync.WaitGroup
}
// NewConverger builds a new converger struct.
func NewConverger(timeout int, stateFn func(bool) error) *converger {
return &converger{
timeout: timeout, timeout: timeout,
stateFn: stateFn,
channel: make(chan struct{}), mutex: &sync.RWMutex{},
control: make(chan bool),
lastid: 0, //lastid: 0,
status: make(map[uint64]bool), status: make(map[*UID]struct{}),
//converged: false, // initial state
pokeChan: make(chan struct{}, 1), // must be buffered
readyChan: make(chan struct{}), // ready signal
//paused: false, // starts off as started
pauseSignal: make(chan struct{}),
//resumeSignal: make(chan struct{}), // happens on pause
//pausedAck: util.NewEasyAck(), // happens on pause
stateFns: make(map[string]func(bool) error),
smutex: &sync.RWMutex{},
closeChan: make(chan struct{}),
wg: &sync.WaitGroup{},
} }
} }
// Register assigns a UID to the caller. // Coordinator is the central converger engine.
func (obj *converger) Register() UID { type Coordinator struct {
// timeout must be zero (instant) or greater seconds to run. If it's -1
// then this is disabled, and we never run stateFns.
timeout int64
// mutex is used for controlling access to status and lastid.
mutex *sync.RWMutex
// lastid contains the last uid we used for registration.
//lastid uint64
// status contains a reference to each active UID.
status map[*UID]struct{}
// converged stores the last convergence state. When this changes, we
// run the stateFns.
converged bool
// pokeChan receives a message every time we might need to re-calculate.
pokeChan chan struct{}
// readyChan closes to notify any interested parties that the main loop
// is running.
readyChan chan struct{}
// paused represents if this coordinator is paused or not.
paused bool
// pauseSignal closes to request a pause of this coordinator.
pauseSignal chan struct{}
// resumeSignal closes to request a resume of this coordinator.
resumeSignal chan struct{}
// pausedAck is used to send an ack message saying that we've paused.
pausedAck *util.EasyAck
// stateFns run on converged state changes.
stateFns map[string]func(bool) error
// smutex is used for controlling access to the stateFns map.
smutex *sync.RWMutex
// closeChan closes when we've been requested to shutdown.
closeChan chan struct{}
// wg waits for everything to finish.
wg *sync.WaitGroup
}
// Register creates a new UID which can be used to report converged state. You
// must Unregister each UID before Shutdown will be able to finish running.
func (obj *Coordinator) Register() *UID {
obj.wg.Add(1) // additional tracking for each UID
obj.mutex.Lock() obj.mutex.Lock()
defer obj.mutex.Unlock() defer obj.mutex.Unlock()
obj.lastid++ //obj.lastid++
obj.status[obj.lastid] = false // initialize as not converged uid := &UID{
return &cuid{ timeout: obj.timeout, // copy the timeout here
converger: obj, //id: obj.lastid,
id: obj.lastid, //name: fmt.Sprintf("%d", obj.lastid), // some default
name: fmt.Sprintf("%d", obj.lastid), // some default
timer: nil, poke: obj.poke,
running: false,
// timer
mutex: &sync.Mutex{},
timer: nil,
running: false,
wg: &sync.WaitGroup{},
} }
uid.unregister = func() { obj.Unregister(uid) } // add unregister func
obj.status[uid] = struct{}{} // TODO: add converged state here?
return uid
} }
// IsConverged gets the converged status of a uid. // Unregister removes the UID from the converger coordinator. If you supply an
func (obj *converger) IsConverged(uid UID) bool { // invalid or unregistered uid to this function, it will panic. An unregistered
if !uid.IsValid() { // UID is no longer part of the convergence checking.
panic(fmt.Sprintf("the ID of UID(%s) is nil", uid.Name())) func (obj *Coordinator) Unregister(uid *UID) {
} defer obj.wg.Done() // additional tracking for each UID
obj.mutex.RLock()
isConverged, found := obj.status[uid.ID()] // lookup
obj.mutex.RUnlock()
if !found {
panic("the ID of UID is unregistered")
}
return isConverged
}
// SetConverged updates the converger with the converged state of the UID.
func (obj *converger) SetConverged(uid UID, isConverged bool) error {
if !uid.IsValid() {
return fmt.Errorf("the ID of UID(%s) is nil", uid.Name())
}
obj.mutex.Lock() obj.mutex.Lock()
if _, found := obj.status[uid.ID()]; !found { defer obj.mutex.Unlock()
panic("the ID of UID is unregistered")
if _, exists := obj.status[uid]; !exists {
panic("uid is not registered")
} }
obj.status[uid.ID()] = isConverged // set uid.StopTimer() // ignore any errors
obj.mutex.Unlock() // unlock *before* poke or deadlock! delete(obj.status, uid)
if isConverged != obj.converged { // only poke if it would be helpful }
// run in a go routine so that we never block... just queue up!
// this allows us to send events, even if we haven't started... // Run starts the main loop for the converger coordinator. It is commonly run
go func() { obj.channel <- struct{}{} }() // from a go routine. It blocks until the Shutdown method is run to close it.
// NOTE: when we have very short timeouts, if we start before all the resources
// have joined the map, then it might appear as if we converged before we did!
func (obj *Coordinator) Run(startPaused bool) {
obj.wg.Add(1)
wg := &sync.WaitGroup{} // needed for the startPaused
defer wg.Wait() // don't leave any leftover go routines running
if startPaused {
wg.Add(1)
go func() {
defer wg.Done()
obj.Pause() // ignore any errors
close(obj.readyChan)
}()
} else {
close(obj.readyChan) // we must wait till the wg.Add(1) has happened...
} }
defer obj.wg.Done()
for {
// pause if one was requested...
select {
case <-obj.pauseSignal: // channel closes
obj.pausedAck.Ack() // send ack
// we are paused now, and waiting for resume or exit...
select {
case <-obj.resumeSignal: // channel closes
// resumed!
case <-obj.closeChan: // we can always escape
return
}
case _, ok := <-obj.pokeChan: // we got an event (re-calculate)
if !ok {
return
}
if err := obj.test(); err != nil {
// FIXME: what to do on error ?
}
case <-obj.closeChan: // we can always escape
return
}
}
}
// Ready blocks until the Run loop has started up. This is useful so that we
// don't run Shutdown before we've even started up properly.
func (obj *Coordinator) Ready() {
select {
case <-obj.readyChan:
}
}
// Shutdown sends a signal to the Run loop that it should exit. This blocks
// until it does.
func (obj *Coordinator) Shutdown() {
close(obj.closeChan)
obj.wg.Wait()
close(obj.pokeChan) // free memory?
}
// Pause pauses the coordinator. It should not be called on an already paused
// coordinator. It will block until the coordinator pauses with an
// acknowledgment, or until an exit is requested. If the latter happens it will
// error. It is NOT thread-safe with the Resume() method so only call either one
// at a time.
func (obj *Coordinator) Pause() error {
if obj.paused {
return fmt.Errorf("already paused")
}
obj.pausedAck = util.NewEasyAck()
obj.resumeSignal = make(chan struct{}) // build the resume signal
close(obj.pauseSignal)
// wait for ack (or exit signal)
select {
case <-obj.pausedAck.Wait(): // we got it!
// we're paused
case <-obj.closeChan:
return fmt.Errorf("closing")
}
obj.paused = true
return nil return nil
} }
// isConverged returns true if *every* registered uid has converged. // Resume unpauses the coordinator. It can be safely called on a brand-new
func (obj *converger) isConverged() bool { // coordinator that has just started running without incident. It is NOT
obj.mutex.RLock() // take a read lock // thread-safe with the Pause() method, so only call either one at a time.
defer obj.mutex.RUnlock() func (obj *Coordinator) Resume() {
for _, v := range obj.status { // TODO: do we need a mutex around Resume?
if !obj.paused { // no need to unpause brand-new resources
return
}
obj.pauseSignal = make(chan struct{}) // rebuild for next pause
close(obj.resumeSignal)
obj.poke() // unblock and notice the resume if necessary
obj.paused = false
// no need to wait for it to resume
//return // implied
}
// poke sends a message to the coordinator telling it that it should re-evaluate
// whether we're converged or not. This does not block. Do not run this in a
// goroutine. It must not be called after Shutdown has been called.
func (obj *Coordinator) poke() {
// redundant
//if len(obj.pokeChan) > 0 {
// return
//}
select {
case obj.pokeChan <- struct{}{}:
default: // if chan is now full because more than one poke happened...
}
}
// IsConverged returns true if *every* registered uid has converged. If there
// are no registered UID's, then this will return true.
func (obj *Coordinator) IsConverged() bool {
for _, v := range obj.Status() {
if !v { // everyone must be converged for this to be true if !v { // everyone must be converged for this to be true
return false return false
} }
@@ -156,194 +277,170 @@ func (obj *converger) isConverged() bool {
return true return true
} }
// Unregister dissociates the ConvergedUID from the converged checking. // test evaluates whether we're converged or not and runs the state change. It
func (obj *converger) Unregister(uid UID) { // is NOT thread-safe.
if !uid.IsValid() { func (obj *Coordinator) test() error {
panic(fmt.Sprintf("the ID of UID(%s) is nil", uid.Name())) // TODO: add these checks elsewhere to prevent anything from running?
if obj.timeout < 0 {
return nil // nothing to do (only run if timeout is valid)
} }
obj.mutex.Lock()
uid.StopTimer() // ignore any errors
delete(obj.status, uid.ID())
obj.mutex.Unlock()
uid.InvalidateID()
}
// Start causes a Converger object to start or resume running. converged := obj.IsConverged()
func (obj *converger) Start() { defer func() {
obj.control <- true obj.converged = converged // set this only at the end...
} }()
// Pause causes a Converger object to stop running temporarily. if !converged {
func (obj *converger) Pause() { // FIXME: add a sync ACK on pause before return if !obj.converged { // were we previously also not converged?
obj.control <- false return nil // nothing to do
}
// Loop is the main loop for a Converger object. It usually runs in a goroutine.
// TODO: we could eventually have each resource tell us as soon as it converges,
// and then keep track of the time delays here, to avoid callers needing select.
// NOTE: when we have very short timeouts, if we start before all the resources
// have joined the map, then it might appear as if we converged before we did!
func (obj *converger) Loop(startPaused bool) {
if obj.control == nil {
panic("converger not initialized correctly")
}
if startPaused { // start paused without racing
select {
case e := <-obj.control:
if !e {
panic("converger expected true")
}
} }
}
for {
select {
case e := <-obj.control: // expecting "false" which means pause!
if e {
panic("converger expected false")
}
// now i'm paused...
select {
case e := <-obj.control:
if !e {
panic("converger expected true")
}
// restart
// kick once to refresh the check...
go func() { obj.channel <- struct{}{} }()
continue
}
case <-obj.channel: // we're doing a state change
if !obj.isConverged() { // call the arbitrary functions (takes a read lock!)
if obj.converged { // we're doing a state change return obj.runStateFns(false)
if obj.stateFn != nil {
// call an arbitrary function
if err := obj.stateFn(false); err != nil {
// FIXME: what to do on error ?
}
}
}
obj.converged = false
continue
}
// we have converged!
if obj.timeout >= 0 { // only run if timeout is valid
if !obj.converged { // we're doing a state change
if obj.stateFn != nil {
// call an arbitrary function
if err := obj.stateFn(true); err != nil {
// FIXME: what to do on error ?
}
}
}
}
obj.converged = true
// loop and wait again...
}
} }
// we have converged!
if obj.converged { // were we previously also converged?
return nil // nothing to do
}
// call the arbitrary functions (takes a read lock!)
return obj.runStateFns(true)
} }
// ConvergedTimer adds a timeout to a select call and blocks until then. // runStateFns runs the list of stored state functions.
// TODO: this means we could eventually have per resource converged timeouts func (obj *Coordinator) runStateFns(converged bool) error {
func (obj *converger) ConvergedTimer(uid UID) <-chan time.Time { obj.smutex.RLock()
// be clever: if i'm already converged, this timeout should block which defer obj.smutex.RUnlock()
// avoids unnecessary new signals being sent! this avoids fast loops if var keys []string
// we have a low timeout, or in particular a timeout == 0 for k := range obj.stateFns {
if uid.IsConverged() { keys = append(keys, k)
// blocks the case statement in select forever!
return util.TimeAfterOrBlock(-1)
} }
return util.TimeAfterOrBlock(obj.timeout) sort.Strings(keys)
var err error
for _, name := range keys { // run in deterministic order
fn := obj.stateFns[name]
// call an arbitrary function
e := fn(converged)
err = errwrap.Append(err, e) // list of errors
}
return err
}
// AddStateFn adds a state function to be run on change of converged state.
func (obj *Coordinator) AddStateFn(name string, stateFn func(bool) error) error {
obj.smutex.Lock()
defer obj.smutex.Unlock()
if _, exists := obj.stateFns[name]; exists {
return fmt.Errorf("a stateFn with that name already exists")
}
obj.stateFns[name] = stateFn
return nil
}
// RemoveStateFn removes a state function from running on change of converged
// state.
func (obj *Coordinator) RemoveStateFn(name string) error {
obj.smutex.Lock()
defer obj.smutex.Unlock()
if _, exists := obj.stateFns[name]; !exists {
return fmt.Errorf("a stateFn with that name doesn't exist")
}
delete(obj.stateFns, name)
return nil
} }
// Status returns a map of the converged status of each UID. // Status returns a map of the converged status of each UID.
func (obj *converger) Status() map[uint64]bool { func (obj *Coordinator) Status() map[*UID]bool {
status := make(map[uint64]bool) status := make(map[*UID]bool)
obj.mutex.RLock() // take a read lock obj.mutex.RLock() // take a read lock
defer obj.mutex.RUnlock() defer obj.mutex.RUnlock()
for k, v := range obj.status { // make a copy to avoid the mutex for k := range obj.status {
status[k] = v status[k] = k.IsConverged()
} }
return status return status
} }
// Timeout returns the timeout in seconds that converger was created with. This // Timeout returns the timeout in seconds that converger was created with. This
// is useful to avoid passing in the timeout value separately when you're // is useful to avoid passing in the timeout value separately when you're
// already passing in the Converger struct. // already passing in the Coordinator struct.
func (obj *converger) Timeout() int { func (obj *Coordinator) Timeout() int64 {
return obj.timeout return obj.timeout
} }
// SetStateFn sets the state function to be run on change of converged state. // UID represents one of the probes for the converger coordinator. It is created
func (obj *converger) SetStateFn(stateFn func(bool) error) { // by calling the Register method of the Coordinator struct. It should be freed
obj.stateFn = stateFn // after use with Unregister.
type UID struct {
// timeout is a copy of the main timeout. It could eventually be used
// for per-UID timeouts too.
timeout int64
// isConverged stores the convergence state of this particular UID.
isConverged bool
// poke stores a reference to the main poke function.
poke func()
// unregister stores a reference to the unregister function.
unregister func()
// timer
mutex *sync.Mutex
timer chan struct{}
running bool // is the timer running?
wg *sync.WaitGroup
} }
// ID returns the unique id of this UID object. // Unregister removes this UID from the converger coordinator. An unregistered
func (obj *cuid) ID() uint64 { // UID is no longer part of the convergence checking.
return obj.id func (obj *UID) Unregister() {
obj.unregister()
} }
// Name returns a user defined name for the specific cuid. // IsConverged reports whether this UID is converged or not.
func (obj *cuid) Name() string { func (obj *UID) IsConverged() bool {
return obj.name return obj.isConverged
} }
// SetName sets a user defined name for the specific cuid. // SetConverged sets the convergence state of this UID. This is used by the
func (obj *cuid) SetName(name string) { // running timer if one is started. The timer will overwrite any value set by
obj.name = name // this method.
func (obj *UID) SetConverged(isConverged bool) {
obj.isConverged = isConverged
obj.poke() // notify of change
} }
// IsValid tells us if the id is valid or has already been destroyed. // ConvergedTimer adds a timeout to a select call and blocks until then.
func (obj *cuid) IsValid() bool { // TODO: this means we could eventually have per resource converged timeouts
return obj.id != 0 // an id of 0 is invalid func (obj *UID) ConvergedTimer() <-chan time.Time {
// be clever: if i'm already converged, this timeout should block which
// avoids unnecessary new signals being sent! this avoids fast loops if
// we have a low timeout, or in particular a timeout == 0
if obj.IsConverged() {
// blocks the case statement in select forever!
return util.TimeAfterOrBlock(-1)
}
return util.TimeAfterOrBlock(int(obj.timeout))
} }
// InvalidateID marks the id as no longer valid. // StartTimer runs a timer that sets us as converged on timeout. It also returns
func (obj *cuid) InvalidateID() { // a handle to the StopTimer function which should be run before exit.
obj.id = 0 // an id of 0 is invalid func (obj *UID) StartTimer() (func() error, error) {
}
// IsConverged is a helper function to the regular IsConverged method.
func (obj *cuid) IsConverged() bool {
return obj.converger.IsConverged(obj)
}
// SetConverged is a helper function to the regular SetConverged notification.
func (obj *cuid) SetConverged(isConverged bool) error {
return obj.converger.SetConverged(obj, isConverged)
}
// Unregister is a helper function to unregister myself.
func (obj *cuid) Unregister() {
obj.converger.Unregister(obj)
}
// ConvergedTimer is a helper around the regular ConvergedTimer method.
func (obj *cuid) ConvergedTimer() <-chan time.Time {
return obj.converger.ConvergedTimer(obj)
}
// StartTimer runs an invisible timer that automatically converges on timeout.
func (obj *cuid) StartTimer() (func() error, error) {
obj.mutex.Lock() obj.mutex.Lock()
if !obj.running { defer obj.mutex.Unlock()
obj.timer = make(chan struct{}) if obj.running {
obj.running = true
} else {
obj.mutex.Unlock()
return obj.StopTimer, fmt.Errorf("timer already started") return obj.StopTimer, fmt.Errorf("timer already started")
} }
obj.mutex.Unlock() obj.timer = make(chan struct{})
obj.running = true
obj.wg.Add(1) obj.wg.Add(1)
go func() { go func() {
defer obj.wg.Done() defer obj.wg.Done()
for { for {
select { select {
case _, ok := <-obj.timer: // reset signal channel case _, ok := <-obj.timer: // reset signal channel
if !ok { // channel is closed if !ok {
return // false to exit return
} }
obj.SetConverged(false) obj.SetConverged(false)
@@ -351,8 +448,8 @@ func (obj *cuid) StartTimer() (func() error, error) {
obj.SetConverged(true) // converged! obj.SetConverged(true) // converged!
select { select {
case _, ok := <-obj.timer: // reset signal channel case _, ok := <-obj.timer: // reset signal channel
if !ok { // channel is closed if !ok {
return // false to exit return
} }
} }
} }
@@ -361,8 +458,8 @@ func (obj *cuid) StartTimer() (func() error, error) {
return obj.StopTimer, nil return obj.StopTimer, nil
} }
// ResetTimer resets the counter to zero if using a StartTimer internally. // ResetTimer resets the timer to zero.
func (obj *cuid) ResetTimer() error { func (obj *UID) ResetTimer() error {
obj.mutex.Lock() obj.mutex.Lock()
defer obj.mutex.Unlock() defer obj.mutex.Unlock()
if obj.running { if obj.running {
@@ -372,8 +469,8 @@ func (obj *cuid) ResetTimer() error {
return fmt.Errorf("timer hasn't been started") return fmt.Errorf("timer hasn't been started")
} }
// StopTimer stops the running timer permanently until a StartTimer is run. // StopTimer stops the running timer.
func (obj *cuid) StopTimer() error { func (obj *UID) StopTimer() error {
obj.mutex.Lock() obj.mutex.Lock()
defer obj.mutex.Unlock() defer obj.mutex.Unlock()
if !obj.running { if !obj.running {
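As a rough usage sketch of the new Coordinator API shown above (the import path and timeout are assumptions, and error handling is elided):

```go
package main

import (
	"time"

	// import path assumed from the repository layout
	"github.com/purpleidea/mgmt/converger"
)

func main() {
	coord := converger.New(5) // converged timeout in seconds (arbitrary here)
	go coord.Run(false)       // run the main loop, not paused
	coord.Ready()             // block until the loop has actually started

	uid := coord.Register()          // one probe, eg: one per resource
	stopTimer, _ := uid.StartTimer() // auto-converge once the timeout elapses

	time.Sleep(10 * time.Second) // stand-in for real work
	_ = stopTimer()              // errors ignored for brevity

	uid.Unregister() // every UID must unregister before Shutdown can finish
	coord.Shutdown() // ask the Run loop to exit; this blocks until it does
}
```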


@@ -0,0 +1,31 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build !root
package converger
import (
"testing"
)
func TestBufferedChan1(t *testing.T) {
ch := make(chan bool, 1)
ch <- true
close(ch) // closing a channel that's not empty should not block
// must be able to exit without blocking anywhere
}

debian/.gitignore (new file)

@@ -0,0 +1,7 @@
*.debhelper.log
*debhelper
changelog
debhelper-build-stamp
files
mgmt.substvars
mgmt/*

debian/compat (new file)

@@ -0,0 +1 @@
9

debian/control (new file)

@@ -0,0 +1,17 @@
Source: mgmt
Maintainer: Johan Bloemberg (aequitas) <mgmt@ijohan.nl>
Build-Depends:
debhelper,
devscripts,
dh-golang,
dh-systemd,
golang-go,
Package: mgmt
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}, packagekit
Suggests: graphviz
Description: mgmt: next generation config management!
The mgmt tool is a next generation config management prototype. It's
not yet ready for production, but we hope to get there soon. Get
involved today!

debian/copyright (new file)

@@ -0,0 +1,21 @@
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: mgmt
Source: <https://github.com/purpleidea/mgmt>
Files: *
Copyright: Copyright (C) 2013-2021+ James Shubin and the project contributors
License: GPL-3.0
License: GPL-3.0
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.

debian/mgmt.docs (new file)

@@ -0,0 +1,11 @@
AUTHORS
COPYING
COPYRIGHT
README.md
THANKS
TODO.md
docs
examples
misc/bashrc.sh
misc/delta-cpu.sh
misc/mgmt.service

debian/mgmt.install (new file)

@@ -0,0 +1,2 @@
mgmt usr/bin
misc/mgmt.service /lib/systemd/system

debian/rules (new executable file)

@@ -0,0 +1,15 @@
#!/usr/bin/make -f
export DH_OPTIONS
export DH_GOPKG := mgmt
export DH_GOLANG_INSTALL_ALL := 1
unexport GOROOT
override_dh_auto_build:
make build
override_dh_auto_test:
@echo "Tests are disabled for now"
%:
dh $@ --with=systemd

doc.go

@@ -1,18 +1,18 @@
 // Mgmt
-// Copyright (C) 2013-2017+ James Shubin and the project contributors
+// Copyright (C) 2013-2021+ James Shubin and the project contributors
 // Written by James Shubin <james@shubin.ca> and the project contributors
 //
 // This program is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Affero General Public License as published by
+// it under the terms of the GNU General Public License as published by
 // the Free Software Foundation, either version 3 of the License, or
 // (at your option) any later version.
 //
 // This program is distributed in the hope that it will be useful,
 // but WITHOUT ANY WARRANTY; without even the implied warranty of
 // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Affero General Public License for more details.
+// GNU General Public License for more details.
 //
-// You should have received a copy of the GNU Affero General Public License
+// You should have received a copy of the GNU General Public License
 // along with this program. If not, see <http://www.gnu.org/licenses/>.
 // Package main provides the main entrypoint for using the `mgmt` software.


@@ -1,10 +1,10 @@
-FROM golang:1.6.2
+FROM golang:1.13
 MAINTAINER Michał Czeraszkiewicz <contact@czerasz.com>
 # Set the reset cache variable
 # Read more here: http://czerasz.com/2014/11/13/docker-tip-and-tricks/#use-refreshedat-variable-for-better-cache-control
-ENV REFRESHED_AT 2016-05-10
+ENV REFRESHED_AT 2020-09-23
 # Update the package list to be able to use required packages
 RUN apt-get update

docker/Dockerfile.build (new file)

@@ -0,0 +1,12 @@
FROM centos:7
MAINTAINER Karim Boumedhel <karimboumedhel@gmail.com>
ENV GOPATH=/root/gopath
ENV PATH=/opt/rh/rh-ruby22/root/usr/bin:/root/gopath/bin:/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/go/bin
ENV LD_LIBRARY_PATH=/opt/rh/rh-ruby22/root/usr/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
ENV PKG_CONFIG_PATH=/opt/rh/rh-ruby22/root/usr/lib64/pkgconfig${PKG_CONFIG_PATH:+:${PKG_CONFIG_PATH}}
RUN yum -y install epel-release wget unzip git make which centos-release-scl gcc && sed -i "s/enabled=0/enabled=1/" /etc/yum.repos.d/epel-testing.repo && yum -y install rh-ruby22 && wget -O /opt/go1.9.1.linux-amd64.tar.gz https://storage.googleapis.com/golang/go1.9.1.linux-amd64.tar.gz && tar -C /usr/local -xzf /opt/go1.9.1.linux-amd64.tar.gz
RUN mkdir -p $GOPATH/src/github.com/purpleidea && cd $GOPATH/src/github.com/purpleidea && git clone --recursive https://github.com/purpleidea/mgmt
RUN go get -u gopkg.in/alecthomas/gometalinter.v1 && cd $GOPATH/src/github.com/purpleidea/mgmt && make deps && make build
CMD ["/bin/bash"]


@@ -1,10 +1,10 @@
-FROM golang:1.6.2
+FROM golang:1.13
 MAINTAINER Michał Czeraszkiewicz <contact@czerasz.com>
 # Set the reset cache variable
 # Read more here: http://czerasz.com/2014/11/13/docker-tip-and-tricks/#use-refreshedat-variable-for-better-cache-control
-ENV REFRESHED_AT 2016-05-14
+ENV REFRESHED_AT 2019-02-06
 RUN apt-get update
@@ -27,5 +27,8 @@ WORKDIR /home/$USER_NAME/mgmt
 # Install dependencies
 RUN make deps
+# Chown $GOPATH
+RUN chown -R ${USER_ID}:${GROUP_ID} /go
 # Change user
 USER ${USER_NAME}

docker/Dockerfile.static (new file)

@@ -0,0 +1,9 @@
FROM centos:7
MAINTAINER Karim Boumedhel <karimboumedhel@gmail.com>
RUN yum -y install augeas-libs libvirt-libs && yum clean all
ADD mgmt /usr/bin
RUN chmod 700 /usr/bin/mgmt
ENTRYPOINT ["/usr/bin/mgmt"]
CMD ["-h"]

docker/scripts/exec-development (new executable file)

@@ -0,0 +1,18 @@
#!/bin/bash
# runs command provided as argument inside a development (Linux) Docker container
# Stop on any error
set -e
script_directory="$( cd "$( dirname "$0" )" && pwd )"
project_directory=$script_directory/../..
# Specify the Docker image name
image_name='purpleidea/mgmt:development'
# Run container in development mode
docker run --rm --name=mgm_development --user=mgmt \
-v "$project_directory:/go/src/github.com/purpleidea/mgmt/" \
-w /go/src/github.com/purpleidea/mgmt/ \
-it "$image_name" /bin/bash -c "$*"


@@ -51,7 +51,7 @@ master_doc = 'index'
 # General information about the project.
 project = u'mgmt'
-copyright = u'2013-2017+ James Shubin and the project contributors'
+copyright = u'2013-2021+ James Shubin and the project contributors'
 author = u'James Shubin'
 # The version info for the project you're documenting, acts as replacement for

docs/development.md (new file)

@@ -0,0 +1,162 @@
# Development
This document contains some additional information and help regarding
developing `mgmt`. Useful tools, conventions, etc.
Be sure to read the [quick start guide](quick-start-guide.md) first.
## Vagrant
If you would like to avoid doing the above steps manually, we have prepared a
[Vagrant](https://www.vagrantup.com/) environment for your convenience. From the
project directory, run a `vagrant up`, and then a `vagrant status`. From there,
you can `vagrant ssh` into the `mgmt` machine. The `MOTD` will explain the rest.
This environment isn't commonly used by the `mgmt` developers, so it might not
be working properly.
## Using Docker
Alternatively, you can check out the [docker-guide](docker-guide.md) in order to
develop or deploy using docker. This method is not endorsed or supported, so use
at your own risk, as it might not be working properly.
## Information about dependencies
Software projects have a few different kinds of dependencies. There are _build_
dependencies, _runtime_ dependencies, and additionally, a few extra dependencies
required for running the _test_ suite.
### Build
* `golang` 1.13 or higher (required, available in some distros and distributed
as a binary officially by [golang.org](https://golang.org/dl/))
### Runtime
A relatively modern GNU/Linux system should be able to run `mgmt` without any
problems. Since `mgmt` runs as a single statically compiled binary, all of the
library dependencies are included. It is expected that certain advanced
resources require host-specific facilities to work. These requirements are
listed below:
| Resource | Dependency | Version | Check version with |
|----------|-------------------|-----------------------------|-----------------------------------------------------------|
| augeas | augeas-devel | `augeas 1.6` or greater | `dnf info augeas-devel` or `apt-cache show libaugeas-dev` |
| file | inotify | `Linux 2.6.27` or greater | `uname -a` |
| hostname | systemd-hostnamed | `systemd 25` or greater | `systemctl --version` |
| nspawn | systemd-nspawn | `systemd ???` or greater | `systemctl --version` |
| pkg | packagekitd | `packagekit 1.x` or greater | `pkcon --version` |
| svc | systemd | `systemd ???` or greater | `systemctl --version` |
| virt | libvirt-devel | `libvirt 1.2.0` or greater | `dnf info libvirt-devel` or `apt-cache show libvirt-dev` |
| virt | libvirtd | `libvirt 1.2.0` or greater | `libvirtd --version` |
For building a visual representation of the graph, `graphviz` is required.
To build `mgmt` without augeas support please run:
`GOTAGS='noaugeas' make build`
To build `mgmt` without libvirt support please run:
`GOTAGS='novirt' make build`
To build `mgmt` without docker support please run:
`GOTAGS='nodocker' make build`
To build `mgmt` without augeas, libvirt or docker support please run:
`GOTAGS='noaugeas novirt nodocker' make build`
## OSX/macOS/Darwin development
Developing and running `mgmt` on macOS is currently not supported (but not
discouraged either). This means it might work, but if it doesn't, you would
have to provide your own patches to fix problems (the project maintainer and
community are glad to assist where needed).
There are currently some issues that make `mgmt` less suitable to run for
provisioning macOS. But as a client to provision remote servers it should run
fine.
Since the primary supported systems are Linux and these are the environments
tested, it is wise to run these suites during macOS development as well. To ease
this, Docker can be leveraged ([Docker for Mac](https://docs.docker.com/docker-for-mac/)).
Before running any of the commands below create the development Docker image:
```
docker/scripts/build-development
```
This image requires updating every time the dependencies (`make-deps.sh`) change.
Then to run the test suite:
```
docker run --rm -ti \
-v $PWD:/go/src/github.com/purpleidea/mgmt/ \
-w /go/src/github.com/purpleidea/mgmt/ \
purpleidea/mgmt:development \
make test
```
For convenience this command is wrapped in `docker/scripts/exec-development`.
Basically any command can be executed this way. Because the repository source is
mounted into the Docker container, invocation will be quick and allow rapid
testing, for example:
```
docker/scripts/exec-development test/test-shell.sh load0.sh
```
Other examples:
```
docker/scripts/exec-development make build
docker/scripts/exec-development ./mgmt run --tmp-prefix lang examples/lang/load0.mcl
```
Be advised that this method is not supported and it might not be working
properly.
## Testing
This project has both unit tests in the form of golang tests and integration
tests using shell scripting.
Native golang tests are preferred over tests written in our shell testing
framework. Please see [https://golang.org/pkg/testing/](https://golang.org/pkg/testing/)
for more information.
To run all tests:
```
make test
```
There is a library of quick and small integration tests for the language and
YAML related things, check out [`test/shell/`](/test/shell). Adding a test is as
easy as copying one of the files in [`test/shell/`](/test/shell) and adapting
it.
This test suite won't run by default (unless on the CI server) and needs to be
called explicitly using:
```
make test-shell
```
Or run an individual shell test using:
```
make test-shell-load0
```
Tip: you can use TAB completion with `make` to quickly get a list of possible
individual tests to run.
## Tools, integrations, IDE's etc
### IDE/Editor support
* Emacs: see `misc/emacs/`
* [Textmate](https://github.com/aequitas/mgmt.tmbundle)
* [VSCode](https://github.com/aequitas/mgmt.vscode)


@@ -1,9 +1,4 @@
# mgmt # General documentation
Available from:
[https://github.com/purpleidea/mgmt/](https://github.com/purpleidea/mgmt/)
This documentation is available in: [Markdown](https://github.com/purpleidea/mgmt/blob/master/docs/documentation.md) or [PDF](https://pdfdoc-purpleidea.rhcloud.com/pdf/https://github.com/purpleidea/mgmt/blob/master/docs/documentation.md) format.
## Overview ## Overview
@@ -18,24 +13,21 @@ foundation in and for, new and existing software.
For more information, you may like to read some blog posts from the author: For more information, you may like to read some blog posts from the author:
* [Next generation config mgmt](https://ttboj.wordpress.com/2016/01/18/next-generation-configuration-mgmt/) * [Next generation config mgmt](https://purpleidea.com/blog/2016/01/18/next-generation-configuration-mgmt/)
* [Automatic edges in mgmt](https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/) * [Automatic edges in mgmt](https://purpleidea.com/blog/2016/03/14/automatic-edges-in-mgmt/)
* [Automatic grouping in mgmt](https://ttboj.wordpress.com/2016/03/30/automatic-grouping-in-mgmt/) * [Automatic grouping in mgmt](https://purpleidea.com/blog/2016/03/30/automatic-grouping-in-mgmt/)
* [Automatic clustering in mgmt](https://ttboj.wordpress.com/2016/06/20/automatic-clustering-in-mgmt/) * [Automatic clustering in mgmt](https://purpleidea.com/blog/2016/06/20/automatic-clustering-in-mgmt/)
* [Remote execution in mgmt](https://ttboj.wordpress.com/2016/10/07/remote-execution-in-mgmt/) * [Remote execution in mgmt](https://purpleidea.com/blog/2016/10/07/remote-execution-in-mgmt/)
* [Send/Recv in mgmt](https://ttboj.wordpress.com/2016/12/07/sendrecv-in-mgmt/) * [Send/Recv in mgmt](https://purpleidea.com/blog/2016/12/07/sendrecv-in-mgmt/)
* [Metaparameters in mgmt](https://ttboj.wordpress.com/2017/03/01/metaparameters-in-mgmt/) * [Metaparameters in mgmt](https://purpleidea.com/blog/2017/03/01/metaparameters-in-mgmt/)
There is also an [introductory video](http://meetings-archive.debian.net/pub/debian-meetings/2016/debconf16/Next_Generation_Config_Mgmt.webm) available. There is also an [introductory video](https://www.youtube.com/watch?v=LkEtBVLfygE&html5=1)
Older videos and other material [is available](https://github.com/purpleidea/mgmt/#on-the-web). available. Older videos and other material [is available](on-the-web.md).
## Setup ## Setup
During this prototype phase, the tool can be run out of the source directory. You'll probably want to read the [quick start guide](quick-start-guide.md) to
You'll probably want to use ```./run.sh run --yaml examples/graph1.yaml``` to get going.
get started. Beware that this _can_ cause data loss. Understand what you're
doing first, or perform these actions in a virtual environment such as the one
provided by [Oh-My-Vagrant](https://github.com/purpleidea/oh-my-vagrant).
## Features ## Features
@@ -71,7 +63,7 @@ the meta attributes of that resource to `false`.
#### Blog post #### Blog post
You can read the introductory blog post about this topic here: You can read the introductory blog post about this topic here:
[https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/](https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/) [https://purpleidea.com/blog/2016/03/14/automatic-edges-in-mgmt/](https://purpleidea.com/blog/2016/03/14/automatic-edges-in-mgmt/)
### Autogrouping ### Autogrouping
@@ -90,7 +82,7 @@ the meta attributes of that resource to `false`.
#### Blog post #### Blog post
You can read the introductory blog post about this topic here: You can read the introductory blog post about this topic here:
[https://ttboj.wordpress.com/2016/03/30/automatic-grouping-in-mgmt/](https://ttboj.wordpress.com/2016/03/30/automatic-grouping-in-mgmt/) [https://purpleidea.com/blog/2016/03/30/automatic-grouping-in-mgmt/](https://purpleidea.com/blog/2016/03/30/automatic-grouping-in-mgmt/)
### Automatic clustering ### Automatic clustering
@@ -106,7 +98,7 @@ with the `--seeds` variable.
#### Blog post #### Blog post
You can read the introductory blog post about this topic here: You can read the introductory blog post about this topic here:
[https://ttboj.wordpress.com/2016/06/20/automatic-clustering-in-mgmt/](https://ttboj.wordpress.com/2016/06/20/automatic-clustering-in-mgmt/) [https://purpleidea.com/blog/2016/06/20/automatic-clustering-in-mgmt/](https://purpleidea.com/blog/2016/06/20/automatic-clustering-in-mgmt/)
### Remote ("agent-less") mode ### Remote ("agent-less") mode
@@ -130,10 +122,14 @@ entire set of running mgmt agents will need to all simultaneously converge for
the group to exit. This is particularly useful for bootstrapping new clusters the group to exit. This is particularly useful for bootstrapping new clusters
which need to exchange information that is only available at run time. which need to exchange information that is only available at run time.
This existed in earlier versions of mgmt as a `--remote` option, but it has been
removed and is being ported to a more powerful variant where you can remote
execute via a `remote` resource.
#### Blog post #### Blog post
You can read the introductory blog post about this topic here: You can read the introductory blog post about this topic here:
[https://ttboj.wordpress.com/2016/10/07/remote-execution-in-mgmt/](https://ttboj.wordpress.com/2016/10/07/remote-execution-in-mgmt/) [https://purpleidea.com/blog/2016/10/07/remote-execution-in-mgmt/](https://purpleidea.com/blog/2016/10/07/remote-execution-in-mgmt/)
### Puppet support ### Puppet support
@@ -145,300 +141,59 @@ Invoke `mgmt` with the `--puppet` switch, which supports 3 variants:
1. Request the configuration from the Puppet Master (like `puppet agent` does) 1. Request the configuration from the Puppet Master (like `puppet agent` does)
mgmt run --puppet agent `mgmt run puppet --puppet agent`
2. Compile a local manifest file (like `puppet apply`) 2. Compile a local manifest file (like `puppet apply`)
mgmt run --puppet /path/to/my/manifest.pp `mgmt run puppet --puppet /path/to/my/manifest.pp`
3. Compile an ad hoc manifest from the commandline (like `puppet apply -e`) 3. Compile an ad hoc manifest from the commandline (like `puppet apply -e`)
mgmt run --puppet 'file { "/etc/ntp.conf": ensure => file }' `mgmt run puppet --puppet 'file { "/etc/ntp.conf": ensure => file }'`
For more details and caveats see [Puppet.md](Puppet.md). For more details and caveats see [puppet-guide.md](puppet-guide.md).
#### Blog post #### Blog post
An introductory post on the Puppet support is on An introductory post on the Puppet support is on
[Felix's blog](http://ffrank.github.io/features/2016/06/19/puppet-powered-mgmt/). [Felix's blog](http://ffrank.github.io/features/2016/06/19/puppet-powered-mgmt/).
## Resources
This section lists all the built-in resources and their properties. The
resource primitives in `mgmt` are typically more powerful than resources in
other configuration management systems because they can be event based which
lets them respond in real-time to converge to the desired state. This property
allows you to build more complex resources that you probably hadn't considered
in the past.
In addition to the resource specific properties, there are resource properties
(otherwise known as parameters) which can apply to every resource. These are
called [meta parameters](#meta-parameters) and are listed separately. Certain
meta parameters aren't very useful when combined with certain resources, but
in general, it should be fairly obvious, such as when combining the `noop` meta
parameter with the [Noop](#Noop) resource.
* [Augeas](#Augeas): Manipulate files using augeas.
* [Exec](#Exec): Execute shell commands on the system.
* [File](#File): Manage files and directories.
* [Hostname](#Hostname): Manages the hostname on the system.
* [KV](#KV): Set a key value pair in our shared world database.
* [Msg](#Msg): Send log messages.
* [Noop](#Noop): A simple resource that does nothing.
* [Nspawn](#Nspawn): Manage systemd-machined nspawn containers.
* [Password](#Password): Create random password strings.
* [Pkg](#Pkg): Manage system packages with PackageKit.
* [Svc](#Svc): Manage system systemd services.
* [Timer](#Timer): Manage timers in the resource graph.
* [Virt](#Virt): Manage virtual machines with libvirt.
### Augeas
The augeas resource uses [augeas](http://augeas.net/) commands to manipulate
files.
### Exec
The exec resource can execute commands on your system.
### File
The file resource manages files and directories. In `mgmt`, directories are
identified by a trailing slash in their path name. Files have no such slash.
It has the following properties:
- `path`: file path (directories have a trailing slash here)
- `content`: raw file content
- `state`: either `exists` (the default value) or `absent`
- `mode`: octal unix file permissions
- `owner`: username or uid for the file owner
- `group`: group name or gid for the file group
#### Path
The path property specifies the file or directory that we are managing.
#### Content
The content property is a string that specifies the desired file contents.
#### Source
The source property points to a source file or directory path that we wish to
copy over and use as the desired contents for our resource.
#### State
The state property describes the action we'd like to apply for the resource. The
possible values are: `exists` and `absent`.
#### Recurse
The recurse property limits whether file resource operations should recurse into
and monitor directory contents with a depth greater than one.
#### Force
The force property is required if we want the file resource to be able to change
a file into a directory or vice-versa. If such a change is needed, but the force
property is not set to `true`, then this file resource will error.
### Hostname
The hostname resource manages static, transient/dynamic and pretty hostnames
on the system and watches them for changes.
#### static_hostname
The static hostname is the one configured in /etc/hostname or a similar
file.
It is chosen by the local user. It is not always in sync with the current
host name as returned by the gethostname() system call.
#### transient_hostname
The transient / dynamic hostname is the one configured via the kernel's
sethostbyname().
It can be different from the static hostname in case DHCP or mDNS have been
configured to change the name based on network information.
#### pretty_hostname
The pretty hostname is a free-form UTF8 host name for presentation to the user.
#### hostname
Hostname is the fallback value for all 3 fields above, if only `hostname` is
specified, it will set all 3 fields to this value.
### KV
The KV resource sets a key and value pair in the global world database. This is
quite useful for setting a flag after a number of resources have run. It will
ignore database updates to the value that are greater in compare order than the
requested key if the `SkipLessThan` parameter is set to true. If we receive a
refresh, then the stored value will be reset to the requested value even if the
stored value is greater.
#### Key
The string key used to store the key.
#### Value
The string value to set. This can also be set via Send/Recv.
#### SkipLessThan
If this parameter is set to `true`, then it will ignore updating the value as
long as the database versions are greater than the requested value. The compare
operation used is based on the `SkipCmpStyle` parameter.
#### SkipCmpStyle
By default this converts the string values to integers and compares them as you
would expect.
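As a rough illustration of the skip decision described above, here is a small standalone sketch; the function name and signature are hypothetical and not taken from the mgmt codebase:

```go
package main

import (
	"fmt"
	"strconv"
)

// shouldSkip sketches the SkipLessThan behaviour with the default integer
// compare style: a stored value that compares greater than the requested one
// is left alone.
func shouldSkip(stored, requested string, skipLessThan bool) (bool, error) {
	if !skipLessThan {
		return false, nil
	}
	s, err := strconv.Atoi(stored)
	if err != nil {
		return false, err
	}
	r, err := strconv.Atoi(requested)
	if err != nil {
		return false, err
	}
	return s > r, nil
}

func main() {
	skip, _ := shouldSkip("42", "13", true)
	fmt.Println(skip) // true: the stored value is already greater
}
```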
### Msg
The msg resource sends messages to the main log, or an external service such
as systemd's journal.
### Noop
The noop resource does absolutely nothing. It does have some utility in testing
`mgmt` and also as a placeholder in the resource graph.
### Nspawn
The nspawn resource is used to manage systemd-machined style containers.
### Password
The password resource can generate a random string to be used as a password. It
will re-generate the password if it receives a refresh notification.
### Pkg
The pkg resource is used to manage system packages. This resource works on many
different distributions because it uses the underlying packagekit facility which
supports different backends for different environments. This ensures that we
have great Debian (deb/dpkg) and Fedora (rpm/dnf) support simultaneously.
### Svc
The service resource is still very much a work in progress. Please help us by improving it!
### Timer
This resource needs better documentation. Please help us by improving it!
### Virt
The virt resource can manage virtual machines via libvirt.
## Usage and frequently asked questions
(Send your questions as a patch to this FAQ! I'll review it, merge it, and
respond by commit with the answer.)
### Why did you start this project?
I wanted a next generation config management solution that didn't have all of
the design flaws or limitations that the current generation of tools do, and no
tool existed!
### Why did you use etcd? What about consul?
Etcd and consul are both written in golang, which made them the top two
contenders for my prototype. Ultimately a choice had to be made, and etcd was
chosen, but it was also somewhat arbitrary. If there is available interest,
good reasoning, *and* patches, then we would consider either switching or
supporting both, but this is not a high priority at this time.
### Can I use an existing etcd cluster instead of the automatic embedded servers?
Yes, it's possible to use an existing etcd cluster instead of the automatic,
elastic embedded etcd servers. To do so, simply point to the cluster with the
`--seeds` variable, the same way you would if you were seeding a new member to
an existing mgmt cluster.
The downside to this approach is that you won't benefit from the automatic
elastic nature of the embedded etcd servers, and that you're responsible if you
accidentally break your etcd cluster, or if you use an unsupported version.
### What does the error message about an inconsistent dataDir mean?
If you get an error message similar to:
```
Etcd: Connect: CtxError...
Etcd: CtxError: Reason: CtxDelayErr(5s): No endpoints available yet!
Etcd: Connect: Endpoints: []
Etcd: The dataDir (/var/lib/mgmt/etcd) might be inconsistent or corrupt.
```
This happens when there are a series of fatal connect errors in a row. This can
happen when you start `mgmt` using a dataDir that doesn't correspond to the
current cluster view. As a result, the embedded etcd server never finishes
starting up, and so a default endpoint never gets added. The solution is to
either reconcile the mistake, or, if there is no important data saved, to remove
the etcd dataDir. This is typically `/var/lib/mgmt/etcd/member/`.
### Why do resources have both a `Compare` method and an `IFF` (on the UID) method?
The `Compare()` methods are for determining if two resources are effectively the
same, which is used to make graph change deltas efficient. This is when we want
to change from the current running graph to a new graph, but preserve the common
vertices. Since we want to make this process efficient, we only update the parts
that are different, and leave everything else alone. This `Compare()` method can
tell us if two resources are the same.
The `IFF()` method is part of the whole UID system, which is for discerning if a
resource meets the requirements another expects for an automatic edge. This is
because the automatic edge system assumes a unified UID pattern to test for
equality. In the future it might be helpful or sane to merge the two similar
comparison functions, although for now they are separate because they actually
answer different questions.
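To make the distinction concrete, a hypothetical sketch of the two roles might look like this; the interface and method signatures are illustrative only, not copied from the mgmt engine:

```go
package sketch

// Res is a sketch of a resource that can be compared for graph delta purposes.
type Res interface {
	// Compare reports whether two resources are effectively the same, so a
	// graph swap can keep the common vertices untouched.
	Compare(Res) bool
}

// ResUID is a sketch of the UID used by the automatic edge system.
type ResUID interface {
	// IFF reports whether this UID satisfies what another resource expects
	// when testing for an automatic edge.
	IFF(ResUID) bool
}
```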
### Did you know that there is a band named `MGMT`?
I didn't realize this when naming the project, and it is accidental. After much
anguishing, I chose the name because it was short and I thought it was
appropriately descriptive. If you need a less ambiguous search term or phrase,
you can try using `mgmtconfig` or `mgmt config`.
### You didn't answer my question, or I have a question!
It's best to ask on [IRC](https://webchat.freenode.net/?channels=#mgmtconfig)
to see if someone can help you. Once we get a big enough community going, we'll
add a mailing list. If you don't get any response from the above, you can
contact me through my [technical blog](https://ttboj.wordpress.com/contact/)
and I'll do my best to help. If you have a good question, please add it as a
patch to this documentation. I'll merge your question, and add a patch with the
answer!
## Reference ## Reference
Please note that there are a number of undocumented options. For more Please note that there are a number of undocumented options. For more
information on these options, please view the source at: information on these options, please view the source at:
[https://github.com/purpleidea/mgmt/](https://github.com/purpleidea/mgmt/). [https://github.com/purpleidea/mgmt/](https://github.com/purpleidea/mgmt/).
If you feel that a well used option needs documenting here, please patch it! If you feel that a well used option needs documenting here, please patch it!
### Overview of reference ### Overview of reference
* [Meta parameters](#meta-parameters): List of available resource meta parameters. * [Meta parameters](#meta-parameters): List of available resource meta parameters.
* [Lang metadata file](#lang-metadata-file): Lang metadata file format.
* [Graph definition file](#graph-definition-file): Main graph definition file. * [Graph definition file](#graph-definition-file): Main graph definition file.
* [Command line](#command-line): Command line parameters. * [Command line](#command-line): Command line parameters.
* [Compilation options](#compilation-options): Compilation options. * [Compilation options](#compilation-options): Compilation options.
### Meta parameters ### Meta parameters
These meta parameters are special parameters (or properties) which can apply to These meta parameters are special parameters (or properties) which can apply to
any resource. The usefulness of doing so will depend on the particular meta any resource. The usefulness of doing so will depend on the particular meta
parameter and resource combination. parameter and resource combination.
#### AutoEdge #### AutoEdge
Boolean. Should we generate auto edges for this resource? Boolean. Should we generate auto edges for this resource?
#### AutoGroup #### AutoGroup
Boolean. Should we attempt to automatically group this resource with others? Boolean. Should we attempt to automatically group this resource with others?
#### Noop #### Noop
Boolean. Should the Apply portion of the CheckApply method of the resource Boolean. Should the Apply portion of the CheckApply method of the resource
make any changes? Noop is a concatenation of no-operation. make any changes? Noop is a concatenation of no-operation.
#### Retry #### Retry
Integer. The number of times to retry running the resource on error. Use -1 for Integer. The number of times to retry running the resource on error. Use -1 for
infinite. This currently applies for both the Watch operation (which can fail) infinite. This currently applies for both the Watch operation (which can fail)
and for the CheckApply operation. While they could have separate values, I've and for the CheckApply operation. While they could have separate values, I've
@@ -446,6 +201,7 @@ decided to use the same ones for both until there's a proper reason to want to
do something differently for the Watch errors. do something differently for the Watch errors.
#### Delay
Integer. Number of milliseconds to wait between retries. The same value is
shared between the Watch and CheckApply retries. This currently applies for both
the Watch operation (which can fail) and for the CheckApply operation. While
they could have separate values, I've decided to use the same ones for both
until there's a proper reason to want to do something differently for the Watch
errors.
#### Poll
Integer. Number of seconds to wait between `CheckApply` checks. If this is
greater than zero, then the standard event based `Watch` mechanism for this
resource is replaced with a simple polling mechanism. In general, this is not
recommended.
which is another way of saying that if the resource finally settles down to give
the graph enough time, it can probably converge.
#### Limit
Float. Maximum rate of `CheckApply` runs started per second. Useful to limit
an especially _eventful_ process from causing excessive checks to run. This
defaults to `+Infinity` which adds no limiting. If you change this value, you
will also need to change the `Burst` value to a non-zero value. Please see the
[rate](https://godoc.org/golang.org/x/time/rate) package for more information.
#### Burst
Integer. Burst is the maximum number of runs which can happen without invoking
the rate limiter as designated by the `Limit` value. If the `Limit` is not set
to `+Infinity`, this must be a non-zero value. Please see the
[rate](https://godoc.org/golang.org/x/time/rate) package for more information.
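As a rough sketch (again assuming the `Meta:` prefix syntax), a resource could
be rate limited like this:
```mcl
file "/tmp/limited" {
	content => "hello\n",

	Meta:limit => 0.5,	# at most one CheckApply run every two seconds...
	Meta:burst => 1,	# ...with a non-zero burst, as required when limiting
}
```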
#### Sema
List of string ids. Sema is a P/V style counting semaphore which can be used to
limit parallelism during the CheckApply phase of resource execution. Each
resource can have `N` different semaphores which share a graph global namespace.
integer, then that value is the max size for that semaphore. Valid semaphore
ids include: `some_id`, `hello:42`, `not:smart:4` and `:13`. It is expected
that the last bare example be only used by the engine to add a global semaphore.
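As an illustration, here is a rough `mcl` sketch (assuming the `Meta:` prefix
syntax) where at most two of the three resources can run their CheckApply at the
same time, because they all share the `disk:2` semaphore:
```mcl
file "/tmp/sema1" {
	content => "a\n",
	Meta:sema => ["disk:2",],
}
file "/tmp/sema2" {
	content => "b\n",
	Meta:sema => ["disk:2",],
}
file "/tmp/sema3" {
	content => "c\n",
	Meta:sema => ["disk:2",],
}
```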
#### Rewatch
Boolean. Rewatch specifies whether we re-run the Watch worker during a graph
swap if it has errored. When doing a graph compare to swap the graphs, if this
is true, and this particular worker has errored, then we'll remove it and add it
back as a new vertex, thus causing it to run again. This is different from the
`Retry` metaparam which applies during the normal execution. It is only when
this is exhausted that we're in permanent worker failure, and only then can we
rely on this metaparam.
#### Realize
Boolean. Realize ensures that the resource is guaranteed to converge at least
once before a potential graph swap removes or changes it. This guarantee is
useful for fast changing graphs, to ensure that the brief creation of a resource
is seen. This guarantee does not protect against the engine quitting normally,
and it can't be guaranteed if the resource is blocked because of a failed
pre-requisite resource.
*XXX: This is currently not implemented!*
#### Reverse
Boolean. Reverse is a property that some resources can implement that specifies
that some "reverse" operation should happen when that resource "disappears". A
disappearance happens when a resource is defined in one instance of the graph,
and is gone in the subsequent one. This disappearance can happen if it was
previously in an if statement that then becomes false.
This is helpful for building robust programs with the engine. The engine adds a
"reversed" resource to that subsequent graph to accomplish the desired "reverse"
mechanics. The specifics of what this entails are a property of the particular
resource that is being "reversed".
It might be wise to combine the use of this meta parameter with the use of the
`realize` meta parameter to ensure that your reversed resource actually runs at
least once, if there's a chance that it might be gone for a while.
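As a rough sketch, assuming the `Meta:` prefix syntax and a boolean `$want`
value defined elsewhere in your program, a reversible resource might look like
this; pairing it with the `realize` meta parameter (as suggested above) would
follow the same pattern:
```mcl
if $want {
	# if $want later becomes false, this resource "disappears", and a
	# reversed resource gets added to the next graph to undo its effects
	file "/tmp/reverse-example" {
		state => $const.res.file.state.exists,
		content => "temporary\n",

		Meta:reverse => true,
	}
}
```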
### Lang metadata file
Any module *must* have a metadata file in its root. It must be named
`metadata.yaml`, even if it's empty. You can specify zero or more values in yaml
format which can change how your module behaves, and where the `mcl` language
looks for code and other files. The most important top level keys are: `main`,
`path`, `files`, and `license`.
#### Main
The `main` key points to the default entry point of your code. It must be a
relative path if specified. If it's empty it defaults to `main.mcl`. It should
generally not be changed. It is sometimes set to `main/main.mcl` if you'd like
to keep your module's code out of the root and in a child directory, for cases
where you don't plan on having many deeper imports relative to `main.mcl` and
all of those files would clutter things up.
#### Path
The `path` key specifies the module's import search directory to use for this
module. You can specify this if you'd like to vendor something for your module.
In general, if you use it, please use the convention: `path/`. If it's not
specified, it defaults to the parent module's directory.
#### Files
The `files` key specifies some additional files that will get included in your
deploy. It defaults to `files/`.
#### License
The `license` key allows you to specify a license for the module. Please specify
one so that everyone can enjoy your code! Use a "short license identifier", like
`LGPLv3+`, or `MIT`. The former is a safe choice if you're not sure what to use.
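Putting those keys together, a small, hypothetical `metadata.yaml` might look
something like this (every key is optional):
```
main: "main.mcl"
path: "path/"
files: "files/"
license: "LGPLv3+"
```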
### Graph definition file
graph.yaml is the compiled graph definition file. The format is currently
undocumented, but by looking through the [examples/](https://github.com/purpleidea/mgmt/tree/master/examples/yaml/)
you can probably figure out most of it, as it's fairly intuitive. It's not
recommended that you use this, since it's preferable to write code in the
[mcl language](language-guide.md) front-end.
### Command line
The main interface to the `mgmt` tool is the command line. For the most recent
documentation, please run `mgmt --help`.
#### `--converged-timeout <seconds>`
Exit if the machine has converged for approximately this many seconds.
#### `--max-runtime <seconds>`
Exit when the agent has run for approximately this many seconds. This is not
generally recommended, but may be useful for users who know what they're doing.
#### `--noop`
Globally force all resources into no-op mode. This also disables the export to
etcd functionality, but does not disable resource collection; however, all
resources that are collected will have their individual noop settings set.
#### `--sema <size>`
Globally add a counting semaphore of this size to each resource in the graph.
The semaphore will be given an id of `:size`. In other words, if you specify a
size of 42, you can expect a semaphore named: `:42`. It is expected that
collision with this globally defined semaphore. The size value must be greater
than zero at this time. The traditional non-parallel execution found in config
management tools such as `Puppet` can be obtained with `--sema 1`.
#### `--allow-interactive`
Allow interactive prompting for SSH passwords if there is no authentication
method that works.
#### `--ssh-priv-id-rsa`
Specify the path for finding SSH keys. This defaults to `~/.ssh/id_rsa`. To
never use this method of authentication, set this to the empty string.
#### `--cconns`
The maximum number of concurrent remote ssh connections to run. This defaults
to `0`, which means unlimited.
#### `--no-caching`
Don't allow remote caching of the remote execution binary. This will require
the binary to be copied over for every remote execution, but it limits the
likelihood that there is leftover information from the configuration process.
#### `--prefix <path>`
Specify a path to a custom working directory prefix. This directory will get
created if it does not exist. This usually defaults to `/var/lib/mgmt/`. This
can't be combined with the `--tmp-prefix` option. It can be combined with the
`--allow-tmp-prefix` option.
#### `--tmp-prefix`
If this option is specified, a temporary prefix will be used instead of the
default prefix. This can't be combined with the `--prefix` option.
#### `--allow-tmp-prefix`
If this option is specified, we will attempt to fall back to a temporary prefix
if the primary prefix couldn't be created. This is useful for avoiding failures
in environments where the primary prefix may or may not be available, but you'd
like to try. The canonical example is when running `mgmt` with remote execution:
there might be a cached copy of the binary in the primary prefix, but if there's
no binary available, we continue working in a temporary directory to avoid
failure.
### Compilation options
#### Disable libvirt support
If you wish to compile mgmt without libvirt support, you can use the following
command:
```
GOTAGS=novirt make build
```
#### Disable augeas support
If you wish to compile mgmt without augeas support, you can use the following
command:
```
GOTAGS=noaugeas make build
```
#### Disable docker support
If you wish to compile mgmt without docker support, you can use the following
command:
```
GOTAGS=nodocker make build
```
#### Combining compile-time flags
You can combine multiple tags by using a space-separated list:
```
GOTAGS="noaugeas novirt nodocker" make build
```
## Examples
For example configurations, please consult the [examples/](https://github.com/purpleidea/mgmt/tree/master/examples)
directory in the git source repository. It is available from:
[https://github.com/purpleidea/mgmt/tree/master/examples](https://github.com/purpleidea/mgmt/tree/master/examples)
### Systemd:
See [`misc/mgmt.service`](misc/mgmt.service) for a sample systemd unit file.
This unit file is part of the RPM.
This is a project that I started in my free time in 2013. Development is driven
by all of our collective patches! Dive right in, and start hacking!
Please contact me if you'd like to invite me to speak about this at your event.
You can follow along [on my technical blog](https://purpleidea.com/blog/).
To report any bugs, please file a ticket at: [https://github.com/purpleidea/mgmt/issues](https://github.com/purpleidea/mgmt/issues).
## Authors
Copyright (C) 2013-2021+ James Shubin and the project contributors
Please see the
[AUTHORS](https://github.com/purpleidea/mgmt/tree/master/AUTHORS) file
for more information.
* [github](https://github.com/purpleidea/)
* [&#64;purpleidea](https://twitter.com/#!/purpleidea)
* [https://purpleidea.com/](https://purpleidea.com/)

docs/faq.md
## Frequently asked questions
(Send your questions as a patch to this FAQ! I'll review it, merge it, and
respond by commit with the answer.)
### Why did you start this project?
I wanted a next generation config management solution that didn't have all of
the design flaws or limitations that the current generation of tools do, and no
tool existed!
### Why did you choose `golang` for the project?
When I started working on the project, I needed to choose a language that
already had an implementation of a distributed consensus algorithm available.
That meant [Paxos](https://en.wikipedia.org/wiki/Paxos_(computer_science)) or
[Raft](https://en.wikipedia.org/wiki/Raft_(computer_science)). Golang was one
language that actually had two different Raft implementations, `etcd`, and
`consul`. Other design requirements included something that was reasonably fast,
typed and memory-safe, and suited for systems engineering. After a reasonably
extensive search, I chose `golang`. I think it was the right decision. There are
a number of other features of the language which helped influence the decision.
### How do I contribute to the project if I don't know `golang`?
There are many different ways you can contribute to the project. They can be
broadly divided into two main categories:
1. With contributions written in `golang`
2. With contributions _not_ written in `golang`
If you do not know `golang`, and have no desire to learn, you can still
contribute to mgmt by using it, testing it, writing docs, or even just by
telling your friends about it. If you don't mind some coding, learning about the
[mgmt language](https://purpleidea.com/blog/2018/02/05/mgmt-configuration-language/)
might be an enjoyable experience for you. It is a small [DSL](https://en.wikipedia.org/wiki/Domain-specific_language)
and not a general purpose programming language, and you might find it more fun
than what you're typically used to. One of the reasons the mgmt author got into
writing automation modules was that he found it much more fun to build with
a higher level DSL than in a general purpose programming language.
If you do not know `golang`, and would like to learn, are a beginner and want to
improve your skills, or want to gain some great interdisciplinary systems
engineering knowledge around a cool automation project, we're happy to mentor
you. Here are some pre-requisite steps which we recommend:
1. Make sure you have a somewhat recent GNU/Linux environment to hack on. A
recent [Fedora](https://getfedora.org/) or [Debian](https://www.debian.org/)
environment is recommended. Developing, testing, and contributing on `macOS` or
`Windows` will be either more difficult or impossible.
2. Ensure that you're mildly comfortable with the basics of using `git`. You can
find a number of tutorials online.
3. Spend four to six hours with the [golang tour](https://tour.golang.org/).
Skip over the longer problems, but try and get a solid overview of everything.
If you forget something, you can always go back and repeat those parts.
4. Connect to our [#mgmtconfig](https://web.libera.chat/?channels=#mgmtconfig)
IRC channel on the [Libera.Chat](https://libera.chat/) network. You can use any
IRC client that you'd like, but the [hosted web portal](https://web.libera.chat/?channels=#mgmtconfig)
will suffice if you don't know what else to use. [Here are a few suggestions for
alternative clients.](https://libera.chat/guides/clients)
5. Now it's time to try and start writing a patch! We have tagged a bunch of
[open issues as #mgmtlove](https://github.com/purpleidea/mgmt/issues?q=is%3Aissue+is%3Aopen+label%3Amgmtlove)
for new users to have somewhere to get involved. Look through them to see if
something interests you. If you find one, let us know you're working on it by
leaving a comment in the ticket. We'll be around to answer questions in the IRC
channel, and to create new issues if there wasn't something that fit your
interests. When you submit a patch, we'll review it and give you some feedback.
Over time, we hope you'll learn a lot while supporting the project! Now get
hacking!
### Is this project ready for production?
It's getting pretty close. I'm able to write modules for it now!
Compared to some existing automation tools out there, mgmt is a relatively new
project. It is probably not as feature complete as some other software, but it
also offers a number of features which are not currently available elsewhere.
Because we have not released a `1.0` release yet, we are not guaranteeing
stability of the internal or external API's. We only change them if it's really
necessary, and we don't expect anything particularly drastic to occur. We would
expect it to be relatively easy to adapt your code if such changes happened.
As with all software, bugs can occur, and while we make no guarantees of being
bug-free, there are a number of things we've done to reduce the chances of one
causing you trouble:
1. Our software is written in golang, which is a memory-safe language, and which
is known to reduce or eliminate entire classes of bugs.
2. We have a test suite which we run on every commit, and every 24 hours. If you
have a particular case that you'd like to test, you are welcome to add it in!
3. The mgmt language itself offers a number of safety features. You can
[read about them in the introductory blog post](https://purpleidea.com/blog/2018/02/05/mgmt-configuration-language/).
Having said all this, as with all software, there are still missing features
which some users might want in their production environments. We're working hard
to get all of those implemented, but we hope that you'll get involved and help
us finish off the ones that are most important to you. We are happy to mentor
new contributors, and have even [tagged](https://github.com/purpleidea/mgmt/issues?q=is%3Aissue+is%3Aopen+label%3Amgmtlove)
a number of issues if you need help getting started.
Some of the current limitations include:
* Auth hasn't been implemented yet, so you should only use it in trusted
environments (not on publicly accessible networks) for now.
* The number of built-in core functions is still small. You may encounter
scenarios where you're missing a function. The good news is that it's relatively
easy to add this missing functionality yourself. In time, with your help, the
list will grow!
* Large file distribution is not yet implemented. You might want a scenario
where mgmt is used to distribute large files (such as `.iso` images) throughout
your cluster. While this isn't a common use-case, it won't be possible until
someone wants to write the patch. (Mentoring available!) You can work around
this easily by storing those files on a separate fileserver for the interim.
* There isn't an ecosystem of community `modules` yet. We've got this on our
roadmap, so please stay tuned!
We hope you'll participate as an early adopter. Every additional pair of helping
hands gets us all there faster! It's quite possible to use this to build useful
automation today, and we hope you'll start getting familiar with the software.
### Why did you use etcd? What about consul?
Etcd and consul are both written in golang, which made them the top two
contenders for my prototype. Ultimately a choice had to be made, and etcd was
chosen, but it was also somewhat arbitrary. If there is available interest,
good reasoning, *and* patches, then we would consider either switching or
supporting both, but this is not a high priority at this time.
### Can I use an existing etcd cluster instead of the automatic embedded servers?
Yes, it's possible to use an existing etcd cluster instead of the automatic,
elastic embedded etcd servers. To do so, simply point to the cluster with the
`--seeds` variable, the same way you would if you were seeding a new member to
an existing mgmt cluster.
The downside to this approach is that you won't benefit from the automatic
elastic nature of the embedded etcd servers, and that you're responsible if you
accidentally break your etcd cluster, or if you use an unsupported version.
### In `mgmt` you talk about events. What is this referring to?
Mgmt has two main concepts that involve "events":
1. Events in the [resource primitive](resource-guide.md).
2. Events in the [reactive language](language-guide.md).
Each resource primitive in mgmt can test (check) and set (apply) the desired
state that was requested of it. This is familiar to what is common with existing
tools such as `Puppet`, `Ansible`, `Chef`, `Terraform`, etc... In addition,
`mgmt` can also **watch** the state and detect changes. As a result, it never
has to waste time and cpu resources by polling to test and set state, leading to
a design which is algorithmically much faster than the existing generation of
tools.
To describe the set of resources to apply, mgmt describes this collection with a
language. In order to model the time component of infrastructure, we use a
special kind of language called an [FRP](https://en.wikipedia.org/wiki/Functional_reactive_programming).
This language has a built-in concept that we call "events", which means that
we re-evaluate the relevant portions of the code whenever a value or function
has an event that tells us that it changed. The `R` in `FRP` stands for
reactive. This is similar to how a spreadsheet updates dependent cells when a
pre-requisite value is modified. [This article](https://en.wikipedia.org/wiki/Reactive_programming)
provides a bit more background.
Whenever any of the streams of values in the language change, the program is
partially re-evaluated. The output of any mgmt program is a [DAG](https://en.wikipedia.org/wiki/Directed_acyclic_graph)
of resources, or more precisely, a stream of resource graphs. Since we have
events per-resource, we can efficiently switch from one desired-state resource
graph to the next without re-checking their individual states, since we've been
monitoring them all along.
One side-effect of all this, is that if a rogue systems administrator manually
changes the state of any managed resource, mgmt will detect this and attempt to
revert the change. This makes for excellent live demos, but is not the primary
design goal. It is a consequence of tracking state so that graph changes are
efficient. We implement the event detection via an intentional per-resource
[main loop](https://en.wikipedia.org/wiki/Event_loop) which can enable other
interesting functionality too!
Make sure to get rid of your rogue sysadmin! ;)
### Do I need to run `mgmt` as `root`?
No and yes. It depends. Nothing in mgmt explicitly requires root in the design,
however mgmt will require root only if the changes to your system that you want
it to make require root.
For example, if you use it to manage files that require root access to modify,
then you'll need root. If you only use it to manage files and resources
elsewhere, then it shouldn't need root. Many resources are perfectly usable
without root, and virtually all of my live demos are done without root.
### How can I run `mgmt` on-demand, or in `cron`, instead of continuously?
By default, `mgmt` will run continuously in an attempt to keep your machine in a
converged state, even as external forces change the current state, or as your
time-varying desired state changes over time. (You can write code in the mgmt
language which will let you describe a desired state which might change over
time.)
Some users might prefer to only run `mgmt` on-demand manually, or at a set
interval via a tool like `cron`. In order to do so, `mgmt` must have a way to
shut itself down after a single "run". This feature is possible with the
`--converged-timeout` flag. You may specify this flag, along with a number of
seconds as the argument, and when there has been no activity for that many
seconds, the program will shut down.
Alternatively, while it is not recommended, if you'd like to ensure the program
never runs for longer than a specific number of seconds, you can ask it to
shut down after that time interval using the `--max-runtime` flag. This also
requires a number of seconds as an argument.
#### Example:
```
./mgmt run lang examples/lang/hello0.mcl --converged-timeout=5
```
### When I try to build `mgmt` I see: `no Go files in $GOPATH/src/github.com/purpleidea/mgmt/bindata`.
Due to the arcane way that `golang` designed its `$GOPATH`, the main project
directory must be inside your `$GOPATH`, and at the appropriate FQDN. This is:
`$GOPATH/src/github.com/purpleidea/mgmt/`. If you have your project root outside
of that directory, then you may get this error when you try to build it. In this
case there is likely a `go get` version of the project at this location. Remove
it and replace it with your git cloned directory. In my case, I like to work on
things in `~/code/mgmt/`, so that path is a symlink that points to the long
project directory.
### Why does my file resource error with `no such file or directory`?
If you create a file resource and only specify the content like this:
```
file "/tmp/foo" {
	content => "hello world\n",
}
```
Then this will attempt to set the contents of that file to the desired string,
but *only* if that file already exists. If you'd like to ensure that it also
gets created in case it is not present, then you must also specify the state:
```
file "/tmp/foo" {
	state => $const.res.file.state.exists,
	content => "hello world\n",
}
```
Similar logic applies for situations when you only specify the `mode` parameter.
This all turns out to be safer and more "correct", in that it will error and
prevent masking a mistake in a situation where you expected a file to already be
at that location. It also turns out to simplify the internals significantly, and
remove an ambiguous scenario with the reversible file resource.
### Why do function names inside of templates include underscores?
The golang template library which we use to implement the `template()` function
doesn't support the dot notation, so we import all our normal functions and
just replace the dots with underscores. As an example, the standard
`datetime.print` function appears inside templates as `datetime_print` after
being imported.
### On startup `mgmt` hangs after: `etcd: server: starting...`.
If you get an error message similar to:
```
etcd: server: starting...
etcd: server: start timeout of 1m0s reached
etcd: server: close timeout of 15s reached
```
But nothing happens afterwards, this can be due to a corrupt etcd storage
directory. Each etcd server embedded in mgmt must have a special directory where
it stores local state. It must not be shared by more than one individual member.
This dir is typically `/var/lib/mgmt/etcd/member/`. If you accidentally use it
(for example during testing) with a different cluster view, then you can corrupt
it. This can happen if you use it with more than one different hostname.
The solution is to avoid making this mistake, and if there is no important data
saved, you can remove the etcd member dir and start over.
### On running `make` to build a new version, it errors with: `Text file busy`.
If you get an error like:
```
cp: cannot create regular file 'mgmt': Text file busy
```
This can happen if you ran `make build` (or just `make`) when there was already
an instance of mgmt running, or if a related file locking issue occurred. To
solve this, shut down any running mgmt process, run `rm mgmt` to remove the
file, and then build a new one by running `make` again.
### The docs speaks of `--remote` but the CLI errors out?
The `--remote` flag existed in an earlier version of mgmt. It was removed and
will be replaced with a more powerful version, which is a "remote" resource. The
code is mostly ready but it's not finished. If you'd like to help finish it or
sponsor the work, please let me know.
### Does this support Windows? OSX? GNU Hurd?
Mgmt probably works best on Linux, because that's what most developers use for
serious automation workloads. Support for non-Linux operating systems isn't a
high priority of mine, but we're happy to accept patches for missing features
or resources that you think would make sense on your favourite platform.
### Why aren't you using `glide`, `godep` or `go mod` for dependency management?
Vendoring dependencies means that as the git master branch of each dependency
marches on, you're left behind using an old version. As a result, bug fixes and
improvements are not automatically brought into the project. Instead, we run our
complete test suite against the entire project (with the latest dependencies)
[every 24 hours](https://docs.travis-ci.com/user/cron-jobs/) to ensure that it
all still works.
Occasionally a dependency breaks API and causes a failure. In those situations,
we're notified almost immediately, it's easy to see exactly which commit caused
the breakage, and we can either quickly notify the author (if it was a mistake)
or update our code if it was a sensible change. This also puts less burden on
authors to support old, legacy versions of their software unnecessarily.
Historically, we've had approximately one such breakage per year, all of which
were detected and fixed within a few hours. The cost of these small, rare,
interruptions is much less expensive than having to periodically move every
dependency in the project to the latest versions. Some examples of this include:
* We caught the `go-bindata` swap before it was publicly known, and fixed it in:
[adbe9c7be178898de3645b0ed17ed2ca06646017](https://github.com/purpleidea/mgmt/commit/adbe9c7be178898de3645b0ed17ed2ca06646017).
* We caught the `codegangsta/cli` API change improvement, and fixed it in:
[ab73261fd4e98cf7ecb08066ad228a8f559ba16a](https://github.com/purpleidea/mgmt/commit/ab73261fd4e98cf7ecb08066ad228a8f559ba16a).
* We caught an un-announced libvirt API change, and promptly fixed it in:
[95cb94a03958a9d2ebf01df0821a8c13a4f3a28c](https://github.com/purpleidea/mgmt/commit/95cb94a03958a9d2ebf01df0821a8c13a4f3a28c).
If we choose responsible dependencies, then it usually means that those authors
are also responsible with their changes to API and to git master. If we ever
find that it's not the case, then we will either switch that dependency to a
more responsible version, or fork it if necessary.
Occasionally, we want to pin a dependency to a particular version. This can
happen if the project treats `git master` as an unstable branch, or because a
dependency needs a newer version of golang than the minimum that we require for
our project. In those cases it's sensible to assume the technical debt, and
vendor the dependency. The common tools such as `glide` and `godep` work by
requiring you to install their software, and by either storing a yaml file with
the version of that dependency in your repository, and/or copying all of that
code into git and explicitly storing it. This project thinks that all of these
solutions are wasteful and unnecessary, particularly when an existing elegant
solution already exists: [git submodules](https://git-scm.com/book/en/v2/Git-Tools-Submodules).
The advantages of using `git submodules` are three-fold:
1. You already have the required tools installed.
2. You only store a pointer to the dependency, not additional files or code.
3. The git submodule tools let you easily switch dependency versions, see diff
output, and responsibly plan and test your version bumps.
Don't blindly use the tools that others tell you to. Learn what they do, think
for yourself, and become a power user today! That process led us to using
`git submodules`. Hopefully you'll come to the same conclusions that we did.
### Did you know that there is a band named `MGMT`?
I didn't realize this when naming the project, and it is accidental. After much
anguishing, I chose the name because it was short and I thought it was
appropriately descriptive. If you need a less ambiguous search term or phrase,
you can try using `mgmtconfig` or `mgmt config`.
It also doesn't stand for
[Methyl Guanine Methyl Transferase](https://en.wikipedia.org/wiki/O-6-methylguanine-DNA_methyltransferase)
which definitely existed before the band did.
### You didn't answer my question, or I have a question!
It's best to ask on [IRC](https://web.libera.chat/?channels=#mgmtconfig)
to see if someone can help you. If you don't get a response from IRC, you can
contact me through my [technical blog](https://purpleidea.com/contact/) and I'll
do my best to help. If you have a good question, please add it as a patch to
this documentation. I'll merge your question, and add a patch with the answer!
For news and updates, subscribe to the [mailing list](https://www.redhat.com/mailman/listinfo/mgmtconfig-list).

docs/function-guide.md
# Function guide
## Overview
The `mgmt` tool has built-in functions which add useful, reactive functionality
to the language. This guide describes the different function API's that are
available. It is meant to instruct developers on how to write new functions.
Since `mgmt` and the core functions are written in golang, some prior golang
knowledge is assumed.
## Theory
Functions in `mgmt` are similar to functions in other languages, however they
also have a [reactive](https://en.wikipedia.org/wiki/Functional_reactive_programming)
component. Our functions can produce events over time, and there are different
ways to write functions. For some background on this design, please read the
[original article](https://purpleidea.com/blog/2018/02/05/mgmt-configuration-language/)
on the subject.
## Native Functions
Native functions are functions which are implemented in the mgmt language
itself. These are currently not available yet, but are coming soon. Stay tuned!
## Simple Function API
Most functions should be implemented using the simple function API. This API
allows you to implement simple, static, [pure](https://en.wikipedia.org/wiki/Pure_function)
functions that don't require you to write much boilerplate code. They will be
automatically re-evaluated as needed when their input values change. These will
all be automatically made available as helper functions within mgmt templates,
and are also available for use anywhere inside mgmt programs.
You'll need some basic knowledge of using the [`types`](https://github.com/purpleidea/mgmt/tree/master/lang/types)
library which is included with mgmt. This library lets you interact with the
available types and values in the mgmt language. It is very easy to use, and
should be fairly intuitive. Most of what you'll need to know can be inferred
from looking at example code.
To implement a function, you'll need to create a file that imports the
[`lang/funcs/simple/`](https://github.com/purpleidea/mgmt/tree/master/lang/funcs/simple/)
module. It should probably get created in the correct directory inside of:
[`lang/funcs/core/`](https://github.com/purpleidea/mgmt/tree/master/lang/funcs/core/).
The function should be implemented as a `FuncValue` in our type system. It is
then registered with the engine during `init()`. An example explains it best:
### Example
```golang
package simple

import (
	"fmt"

	"github.com/purpleidea/mgmt/lang/funcs/simple"
	"github.com/purpleidea/mgmt/lang/types"
)

// you must register your functions in init when the program starts up
func init() {
	// Example function that squares an int and prints out answer as an str.
	simple.ModuleRegister(ModuleName, "talkingsquare", &types.FuncValue{
		T: types.NewType("func(int) str"), // declare the signature
		V: func(input []types.Value) (types.Value, error) {
			i := input[0].Int() // get first arg as an int64
			// must return the above specified value
			return &types.StrValue{
				V: fmt.Sprintf("%d^2 is %d", i, i*i),
			}, nil // no serious errors occurred
		},
	})
}
```
This simple function accepts one `int` as input, and returns one `str`.
Functions can have zero or more inputs, and must have exactly one output. You
must be sure to use the `types` library correctly, since if you try and access
an input which should not exist (eg: `input[2]`, when there are only two
that are expected), then you will cause a panic. If you have declared that a
particular argument is an `int` but you try to read it with `.Bool()` you will
also cause a panic. Lastly, make sure that you return a value in the correct
type or you will also cause a panic!
If anything goes wrong, you can return an error, however this will cause the
mgmt engine to shutdown. It should be seen as the equivalent to calling a
`panic()`, however it is safer because it brings the engine down cleanly.
Ideally, your functions should never need to error. You should never cause a
real `panic()`, since this could have negative consequences to the system.
## Simple Polymorphic Function API
Most functions should be implemented using the simple function API. If they need
to have multiple polymorphic forms under the same name, then you can use this
API. This is useful for situations when it would be unhelpful to name the
functions differently, or when the number of possible signatures for the
function would be infinite.
The canonical example of this is the `len` function which returns the number of
elements in either a `list` or a `map`. Since lists and maps are two different
types, you can see that polymorphism is more convenient than requiring a
`listlen` and `maplen` function. Nevertheless, it is also required because a
`list of int` is a different type than a `list of str`, which is a different
type than a `list of list of str` and so on. As you can see the number of
possible input types for such a `len` function is infinite.
Another downside to implementing your functions with this API is that they will
*not* be made available for use inside templates. This is a limitation of the
`golang` template library. In the future if this limitation proves to be
significantly annoying, we might consider writing our own template library.
As with the simple, non-polymorphic API, you can only implement [pure](https://en.wikipedia.org/wiki/Pure_function)
functions, without writing too much boilerplate code. They will be automatically
re-evaluated as needed when their input values change.
To implement a function, you'll need to create a file that imports the
[`lang/funcs/simplepoly/`](https://github.com/purpleidea/mgmt/tree/master/lang/funcs/simplepoly/)
module. It should probably get created in the correct directory inside of:
[`lang/funcs/core/`](https://github.com/purpleidea/mgmt/tree/master/lang/funcs/core/).
The function should be implemented as a list of `FuncValue`'s in our type
system. It is then registered with the engine during `init()`. You may also use
the `variant` type in your type definitions. This special type will never be
seen inside a running program, and will get converted to a concrete type if a
suitable match to this signature can be found. Be warned that signatures which
contain too many variants, or which are very general, might be hard for the
compiler to match, and ambiguous type graphs make for user compiler errors. The
top-level type must still be a function type, it may only contain variants as
part of its signature. It is probably more difficult to unify a function if its
return type is a variant than if one of its args is.
An example explains it best:
### Example
```golang
import (
	"fmt"

	"github.com/purpleidea/mgmt/lang/funcs/simplepoly"
	"github.com/purpleidea/mgmt/lang/types"
)

func init() {
	// You may use the simplepoly.ModuleRegister method to register your
	// function if it's in a module, as seen in the simple function example.
	simplepoly.Register("len", []*types.FuncValue{
		{
			T: types.NewType("func([]variant) int"),
			V: Len,
		},
		{
			T: types.NewType("func({variant: variant}) int"),
			V: Len,
		},
	})
}

// Len returns the number of elements in a list or the number of key pairs in a
// map. It can operate on either of these types.
func Len(input []types.Value) (types.Value, error) {
	var length int
	switch k := input[0].Type().Kind; k {
	case types.KindList:
		length = len(input[0].List())
	case types.KindMap:
		length = len(input[0].Map())
	default:
		return nil, fmt.Errorf("unsupported kind: %+v", k)
	}
	return &types.IntValue{
		V: int64(length),
	}, nil
}
```
This simple polymorphic function can accept an infinite number of signatures, of
which there are two basic forms. Both forms return an `int` as is seen above.
The first form takes a `[]variant` which means a `list` of `variant`'s, which
means that it can be a list of any type, since `variant` itself is not a
concrete type. The second form accepts a `{variant: variant}`, which means that
it accepts any form of `map` as input.
The implementation for both of these forms is the same: it is handled by the
same `Len` function which is clever enough to be able to deal with any of the
type signatures possible from those two patterns.
At compile time, if your `mcl` code type checks correctly, a concrete type will
be known for each and every usage of the `len` function, and specific values
will be passed in for this code to compute the length of. As usual, make sure to
only write safe code that will not panic! A panic is a bug. If you really cannot
continue, then you must return an error.
## Function API
To implement a reactive function in `mgmt` it must satisfy the
[`Func`](https://github.com/purpleidea/mgmt/blob/master/lang/interfaces/func.go)
interface. Using the [Simple Function API](#simple-function-api) is preferable
if it meets your needs. Most functions will be able to use that API. If you
really need something more powerful, then you can use the regular function API.
What follows are each of the method signatures and a description of each.
### Info
```golang
Info() *interfaces.Info
```
This returns some information about the function. It is necessary so that the
compiler can type check the code correctly, and know what optimizations can be
performed. This is usually the first method which is called by the engine.
#### Example
```golang
func (obj *FooFunc) Info() *interfaces.Info {
	return &interfaces.Info{
		Pure: true,
		Sig:  types.NewType("func(a int) str"),
	}
}
```
### Init
```golang
Init(init *interfaces.Init) error
```
This is called to initialize the function. If something goes wrong, it should
return an error. It is passed a struct that contains all the important
information and pointers that it might need to work with throughout its
lifetime. As a result, it will need to save a copy to that pointer for future
use in the other methods.
#### Example
```golang
// Init runs some startup code for this function.
func (obj *FooFunc) Init(init *interfaces.Init) error {
	obj.init = init
	obj.closeChan = make(chan struct{}) // shutdown signal
	return nil
}
```
### Close
```golang
Close() error
```
This is called to cleanup the function. It usually causes the stream to
shutdown. Even if `Stream()` decided to shutdown early, it might still get
called. It is usually called by the engine to tell the function to shutdown.
#### Example
```golang
// Close runs some shutdown code for this function and turns off the stream.
func (obj *FooFunc) Close() error {
	close(obj.closeChan) // send a signal to tell the stream to close
	return nil
}
```
### Stream
```golang
Stream() error
```
`Stream` is where the real _work_ is done. This method is started by the
language function engine. It will run this function while simultaneously sending
it values on the `input` channel. It will only send a complete set of input
values. You should send a value to the output channel when you have decided that
one should be produced. Make sure to only use input values of the expected type
as declared in the `Info` struct, and send values of the similarly declared
appropriate return type. Failure to do so may result in a panic and sadness.
#### Example
```golang
// Stream returns the single value that was generated and then closes.
func (obj *FooFunc) Stream() error {
	defer close(obj.init.Output) // the sender closes
	var result string
	for {
		select {
		case input, ok := <-obj.init.Input:
			if !ok {
				return nil // can't output any more
			}
			ix := input.Struct()["a"].Int()
			if ix < 0 {
				return fmt.Errorf("we can't deal with negatives")
			}
			result = fmt.Sprintf("the input is: %d", ix)

		case <-obj.closeChan:
			return nil
		}

		select {
		case obj.init.Output <- &types.StrValue{
			V: result,
		}:
		case <-obj.closeChan:
			return nil
		}
	}
}
```
As you can see, we read our inputs from the `input` channel, and write to the
`output` channel. Our code is careful to never block or deadlock, and can always
exit if a close signal is requested. It also cleans up after itself by closing
the `output` channel when it is done using it. This is done easily with `defer`.
If it notices that the `input` channel closes, then it knows that no more input
values are coming and it can consider shutting down early.
## Further considerations
There is some additional information that any function author will need to know.
Each issue is listed separately below!
### Function struct
Each function will implement methods as pointer receivers on a function struct.
The naming convention for functions is that they end with a `Func` suffix.
#### Example
```golang
type FooFunc struct {
	init *interfaces.Init

	// this space can be used if needed

	closeChan chan struct{} // shutdown signal
}
```
### Function registration
All functions must be registered with the engine so that they can be found. This
also ensures they can be encoded and decoded. Make sure to include the following
code snippet for this to work.
```golang
import "github.com/purpleidea/mgmt/lang/funcs"
func init() { // special golang method that runs once
funcs.Register("foo", func() interfaces.Func { return &FooFunc{} })
}
```
Functions inside of built-in modules will need to use the `ModuleRegister`
method instead.
```golang
// moduleName is already set to "math" by the math package. Do this in `init`.
funcs.ModuleRegister(moduleName, "cos", func() interfaces.Func { return &CosFunc{} })
```
### Composite functions
Composite functions are functions which import one or more existing functions.
This is useful to prevent code duplication in higher level function scenarios.
Unfortunately no further documentation about this subject has been written. To
expand this section, please send a patch! Please contact us if you'd like to
work on a function that uses this feature, or to add it to an existing one!
We don't expect this functionality to be particularly useful or common, as it's
probably easier and preferable to simply import common golang library code into
multiple different functions instead.
## Polymorphic Function API
The polymorphic function API is an API that lets you implement functions which
do not necessarily have a single static function signature. After compile time,
all functions must have a static function signature. We also know that there
might be different ways you would want to call `printf`, such as:
`printf("the %s is %d", "answer", 42)` or `printf("3 * 2 = %d", 3 * 2)`. Since
you couldn't implement the infinite number of possible signatures, this API lets
you write code which can be coerced into different forms. This lets you
implement what appears to be generic or polymorphic behaviour, while it remains
something that is actually static and that still has the static type safety
properties that were guaranteed by the mgmt language.
Since this is an advanced topic, it is not described in full at this time. For
more information please have a look at the source code comments, some of the
existing implementations, and ask around in the community.
## Frequently asked questions
(Send your questions as a patch to this FAQ! I'll review it, merge it, and
respond by commit with the answer.)
### Can I use global variables?
Probably not. You must assume that multiple copies of your function may be used
at the same time. If they require a global variable, it's likely this won't
work. Instead it's probably better to use a struct local variable if you need to
store some state.
There might be some rare instances where a global would be acceptable, but if
you need one of these, you're probably already an internals expert. If you think
they need to lock or synchronize so as to not overwhelm an external resource,
then you have to be especially careful not to deadlock the mgmt engine.
### Can I write functions in a different language?
Currently `golang` is the only supported language for built-in functions. We
might consider allowing external functions to be imported in the future. This
will likely require a language that can expose a C-like API, such as `python` or
`ruby`. Custom `golang` functions are already possible when using mgmt as a lib.
### What new functions need writing?
There are still many ideas for new functions that haven't been written yet. If
you'd like to contribute one, please contact us and tell us about your idea!
### Can I generate many different `FuncValue` implementations from one function?
Yes, you can use a function generator in `golang` to build multiple different
implementations from the same function generator. You just need to implement a
function which *returns* a `golang` type of `func([]types.Value) (types.Value, error)`
which is what `FuncValue` expects. The generator function can use any input it
wants to build the individual functions, thus helping with code re-use.
### How do I determine the signature of my simple, polymorphic function?
The determination of the input portion of the function signature can be
determined by inspecting the length of the input, and the specific type each
value has. Length is done in the standard `golang` way, and the type of each
element can be ascertained with the `Type()` method available on every value.
Knowing the output type is trickier. If it can not be inferred in some manner,
then the only way is to keep track of this yourself. You can use a function
generator to build your `FuncValue` implementations, and pass in the unique
signature to each one as you are building them. Using a generator is a common
technique which was mentioned previously.
One obvious situation where this might occur is if your function doesn't take
any inputs! An example `math.fortytwo()` function was implemented that
demonstrates the use of function generators to pass the type signatures into the
implementations.
### Where can I find more information about mgmt?
Additional blog posts, videos and other material [is available!](https://github.com/purpleidea/mgmt/blob/master/docs/on-the-web.md).
## Suggestions
If you have any ideas for API changes or other improvements to function writing,
please let us know! We're still pre 1.0 and pre 0.1 and happy to break API in
order to get it right!

docs/language-guide.md
# Language guide
## Overview
The `mgmt` tool has various frontends, each of which may produce a stream of
zero or more graphs that are passed to the engine for desired state
application. In almost all scenarios, you're going to want to use the language
frontend. This guide describes some of the internals of the language.
## Theory
The mgmt language is a declarative (immutable) functional, reactive programming
language. It is implemented in `golang`. A longer introduction to the language
is [available as a blog post here](https://purpleidea.com/blog/2018/02/05/mgmt-configuration-language/)!
### Types
All expressions must have a type. A composite type such as a list of strings
(`[]str`) is different from a list of integers (`[]int`).
There _is_ a _variant_ type in the language's type system, but it is only used
internally and only appears briefly when needed for type unification hints
during static polymorphic function generation. This is an advanced topic which
is not required for normal usage of the software.
The implementation of the internal types can be found in
[lang/types/](https://github.com/purpleidea/mgmt/tree/master/lang/types/).
#### bool
A `true` or `false` value.
#### str
Any `"string!"` enclosed in quotes.
#### int
A number like `42` or `-13`. Integers are represented internally as golang's
`int64`.
#### float
A floating point number like: `3.1415926`. Floats are represented internally as
golang's `float64`.
#### list
An ordered collection of values of the same type, eg: `[6, 7, 8, 9,]`. It is
worth mentioning that empty lists have a type, although without type hints it
can be impossible to infer the item's type.
#### map
An unordered set of unique keys of the same type and corresponding value pairs
of another type, eg:
`{"boiling" => 100, "freezing" => 0, "room" => 25, "house" => 22, "canada" => -30,}`.
That is to say, all of the keys must have the same type, and all of the values
must have the same type. You can use any type for either, although it is
probably advisable to avoid using very complex types as map keys.
#### struct
An ordered set of field names and corresponding values, each of their own type,
eg: `struct{answer => "42", james => "awesome", is_mgmt_awesome => true,}`.
These are useful for combining more than one type into the same value. Note the
syntactical difference between these and maps: the keys in maps have types,
and as a result, string keys are enclosed in quotes, whereas struct _fields_ are
not string values, and as such are bare and specified without quotes.
#### func
An ordered set of optionally named, differently typed input arguments, and a
return type, eg: `func(s str) int` or:
`func(bool, []str, {str: float}) struct{foo str; bar int}`.
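As a small, hedged illustration (assuming the `types.NewType` helper that appears later in this guide accepts these exact strings), the same signatures can be constructed programmatically from golang:
```golang
package main

import (
	"fmt"

	"github.com/purpleidea/mgmt/lang/types"
)

func main() {
	// each string uses the type syntax described above
	listT := types.NewType("[]str")                      // list of str
	mapT := types.NewType("{str: float}")                // map from str to float
	structT := types.NewType("struct{foo str; bar int}") // struct with two fields
	funcT := types.NewType("func(s str) int")            // function type
	fmt.Println(listT, mapT, structT, funcT)
}
```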
### Expressions
Expressions, and the `Expr` interface need to be better documented. For now
please consume
[lang/interfaces/ast.go](https://github.com/purpleidea/mgmt/tree/master/lang/interfaces/ast.go).
These docs will be expanded on when things are more certain to be stable.
### Statements
There are a very small number of statements in our language. They include:
- **bind**: binds an expression to a variable within that scope without output
- eg: `$x = 42`
- **if**: produces up to one branch of statements based on a conditional
expression
```mcl
if <conditional> {
	<statements>
} else {
	# the else branch is optional for if statements
	<statements>
}
```
- **resource**: produces a resource
```mcl
file "/tmp/hello" {
	content => "world",
	mode => "o=rwx",
}
```
- **edge**: produces an edge
```mcl
File["/tmp/hello"] -> Print["alert4"]
```
- **class**: binds a list of statements to a class name in scope without output
```mcl
class foo {
# some statements go here
}
```
or
```mcl
class bar($a, $b) { # a parameterized class
# some statements go here
}
```
- **include**: include a particular class at this location producing output
```mcl
include foo
include bar("hello", 42)
include bar("world", 13) # an include can be called multiple times
```
- **import**: import a particular scope from this location at a given namespace
```mcl
# a system module import
import "fmt"
# a local, single file import (relative path, not a module)
import "dir1/file.mcl"
# a local, module import (relative path, contents are a module)
import "dir2/"
# a remote module import (absolute remote path, contents are a module)
import "git://github.com/purpleidea/mgmt-example1/"
```
or
```mcl
import "fmt" as * # contents namespaced into top-level names
import "foo.mcl" # namespaced as foo
import "dir1/" as bar # namespaced as bar
import "git://github.com/purpleidea/mgmt-example1/" # namespaced as example1
```
All statements produce _output_. Output consists of zero or more
`edges` and `resources`. A resource statement can produce a resource, whereas an
`if` statement produces whatever the chosen branch produces. Ultimately the goal
of executing our programs is to produce a list of `resources`, which along with
the produced `edges`, is built into a resource graph. This graph is then passed
to the engine for desired state application.
#### Bind
This section needs better documentation.
#### If
This section needs better documentation.
#### Resource
Resources express the idempotent workloads that we want to have apply on our
system. They correspond to vertices in a [graph](https://en.wikipedia.org/wiki/Directed_acyclic_graph)
which represent the order in which their declared state is applied. You will
usually want to pass in a number of parameters and associated values to the
resource to control how it behaves. For example, setting the `content` parameter
of a `file` resource to the string `hello` will cause the contents of that file
to contain the string `hello` after it has run.
##### Undefined parameters
For some parameters, there is a distinction between an unspecified parameter,
and a parameter with a `zero` value. For example, for the file resource, you
might choose to set the `content` parameter to be the empty string, which would
ensure that the file has a length of zero. Alternatively you might wish to not
specify the file contents at all, which would leave that property undefined. If
you omit listing a property, then it will be undefined. To control this property
programmatically, you need to specify an `is-defined` value, as well as the
value to use if that boolean is true. You can do this with the resource-specific
`elvis` operator.
```mcl
$b = true # change me to false and then try editing the file manually
file "/tmp/mgmt-elvis" {
	content => $b ?: "hello world\n",
	state => $const.res.file.state.exists,
}
```
This example is static, however you can imagine that the `$b` value might be
chosen in a programmatic way, even one in which that value varies over time. If
it evaluates to `true`, then the parameter will be used. If no `elvis` operator
is specified, then the parameter value will also be used. If the parameter is
not specified, then it will obviously not be used.
##### Meta parameters
Resources may specify meta parameters. To do so, you must add them as you would
a regular parameter, except that they start with `Meta` and are capitalized. Eg:
```mcl
file "/tmp/f1" {
	content => "hello!\n",
	Meta:noop => true,
	Meta:delay => $b ?: 42,
	Meta:autoedge => false,
}
```
As you can see, they also support the elvis operator, and you can add as many as
you like. While it is not recommended to add the same meta parameter more than
once, it does not currently cause an error, and even though the result of doing
so is officially undefined, it will currently take the last specified value.
You may also specify a single meta parameter struct. This is useful if you'd
like to reuse a value, or build a combined value programmatically. For example:
```mcl
file "/tmp/f1" {
	content => "hello!\n",
	Meta => $b ?: struct{
		noop => false,
		retry => -1,
		delay => 0,
		poll => 5,
		limit => 4.2,
		burst => 3,
		sema => ["foo:1", "bar:3",],
		autoedge => true,
		autogroup => false,
	},
}
```
Remember that the top-level `Meta` field supports the elvis operator, while the
individual struct fields in the struct type do not. This is to be expected, but
since they are syntactically similar, it is worth mentioning to avoid confusion.
Please note that at the moment, you must specify a full metaparams struct, since
partial struct types are currently not supported in the language. Patches are
welcome if you'd like to add this tricky feature!
##### Resource naming
Each resource must have a unique name of type `str` that is used to uniquely
identify that resource, and can be used in the functioning of the resource at
that resource's discretion. For example, the `file` resource uses the unique name
value to specify the path.
Alternatively, the name value may be a list of strings `[]str` to build a list
of resources, each with a name from that list. When this is done, each resource
will use the same set of parameters. The list of internal edges specified in the
same resource block is created intelligently to have the appropriate edge for
each separate resource.
Using this construct is a veiled form of looping (iteration). This technique is
one of many ways you can perform iterative tasks that you might have
traditionally used a `for` loop for instead. This is preferred, because flow
control is error-prone and can make for less readable code.
##### Internal edges
Resources may also declare edges internally. The edges may point to or from
another resource, and may optionally include a notification. The four properties
are: `Before`, `Depend`, `Notify` and `Listen`. The first two represent normal
edge dependencies, and the second two are normal edge dependencies that also
send notifications. You may have multiples of these per resource, including
multiple `Depend` lines if necessary. Each of these properties also supports the
conditional inclusion `elvis` operator as well.
For example, you may write:
```mcl
$b = true # for example purposes
if $b {
	pkg "drbd" {
		state => "installed",
		# multiple properties may be used in the same resource
		Before => File["/etc/drbd.conf"],
		Before => Svc["drbd"],
	}
}
file "/etc/drbd.conf" {
	content => "some config",
	Depend => $b ?: Pkg["drbd"],
	Notify => Svc["drbd"],
}
svc "drbd" {
	state => "running",
}
```
There are two unique properties about these edges that are different from what
you might expect from other automation software:
1. The ability to specify multiples of these properties allows you to avoid
having to manage arrays and conditional trees of these different dependencies.
2. The keywords all have the same length, which means your code lines up nicely.
#### Edge
Edges express dependencies in the graph of resources which are output. They can
be chained as a pair, or in any greater number. For example, you may write:
```mcl
Pkg["drbd"] -> File["/etc/drbd.conf"] -> Svc["drbd"]
```
to express a relationship between three resources. The first character in the
resource kind must be capitalized so that the parser can unambiguously
determine that we are referring to a dependency relationship.
#### Class
A class is a grouping structure that binds a list of statements to a name in
the scope where it is defined. It doesn't directly produce any output. To
produce output it must be called via the `include` statement.
Defining classes follows the same scoping and shadowing rules that are applied to
the `bind` statement, although they exist in a separate namespace. In other
words you can have a variable named `foo` and a class named `foo` in the same
scope without any conflicts.
Classes can be either parameterized or naked. If a parameterized class is defined,
then the argument types must be either specified manually, or inferred with the
type unification algorithm. One interesting property is that the same class
definition can be used with `include` via two different input signatures,
although in practice this is probably fairly rare. Some usage examples include:
A naked class definition:
```mcl
class foo {
# some statements go here
}
```
A parameterized class with both input types being inferred if possible:
```mcl
class bar($a, $b) {
# some statements go here
}
```
A parameterized class with one type specified statically and one being inferred:
```mcl
class baz($a str, $b) {
# some statements go here
}
```
Classes can also be nested within other classes. Here's a contrived example:
```mcl
import "fmt"
class c1($a, $b) {
	# nested class definition
	class c2($c) {
		test $a {
			stringptr => fmt.printf("%s is %d", $b, $c),
		}
	}
	if $a == "t1" {
		include c2(42)
	}
}
```
Defining polymorphic classes was considered, but is not allowed at this time.
Recursive classes are not currently supported and it is not clear if they will
be in the future. Discussion about this topic is welcome on the mailing list.
#### Include
The `include` statement causes the previously defined class to produce the
contained output. This statement must be called with parameters if the named
class is defined with those.
The defined class can be called as many times as you'd like either within the
same scope or within different scopes. If a class uses inferred type input
parameters, then the same class can even be called with different signatures.
Whether the output is useful and whether there is a unique type unification
solution is dependent on your code.
#### Import
The `import` statement imports a scope into the specified namespace. A scope can
contain variable, class, and function definitions. All are statements.
Furthermore, since each of these have different logical uses, you could
theoretically import a scope that contains an `int` variable named `foo`, a
class named `foo`, and a function named `foo` as well. Keep in mind that
variables can contain functions (they can have a type of function) and are
commonly called lambdas.
There are a few different kinds of imports. They differ by the string contents
that you specify. Short single-word tokens, or multi-word tokens separated by
slashes, are system imports. Eg: `math`, `fmt`, or even `math/trig`.
Local imports are path imports that are relative to the current directory. They
can either import a single `mcl` file, or an entire well-formed module. Eg:
`file1.mcl` or `dir1/`. Lastly, you can have a remote import. This must be an
absolute path to a well-formed module. The common transport is `git`, and it can
be represented via an FQDN. Eg: `git://github.com/purpleidea/mgmt-example1/`.
The namespace that any of these are imported into depends on how you use the
import statement. By default, each kind of import will have a logical namespace
identifier associated with it. System imports use the last token in their name.
Eg: `fmt` would be imported as `fmt` and `math/trig` would be imported as
`trig`. Local imports do the same, except that the required `.mcl` extension or
trailing slash is removed. Eg: `foo/file1.mcl` would be imported as `file1` and
`bar/baz/` would be imported as `baz`. Remote imports use some more complex
rules. In general, well-named modules that contain a final directory name in the
form: `mgmt-whatever/` will be named `whatever`. Otherwise, the last path token
will be converted to lowercase and the dashes will be converted to underscores.
The rules for remote imports might change, and should not be considered stable.
In any of the import cases, you can change the namespace that you're imported
into. Simply add the `as whatever` text at the end of the import, and `whatever`
will be the name of the namespace. Please note that `whatever` is not surrounded
by quotes, since it is an identifier, and not a `string`. If you'd like to add
all of the import contents into the top-level scope, you can use the `as *` text
to dump all of the contents in. This is generally not recommended, as it might
cause a conflict with another identifier.
### Stages
The mgmt compiler runs in a number of stages. In order of execution they are:
* [Lexing](#lexing)
* [Parsing](#parsing)
* [Interpolation](#interpolation)
* [Scope propagation](#scope-propagation)
* [Type unification](#type-unification)
* [Function graph generation](#function-graph-generation)
* [Function engine creation and validation](#function-engine-creation-and-validation)
All of the above needs to be done every time the source code changes. After this
point, the [function engine runs](#function-engine-running-and-interpret) and
produces events. On every event, we "[interpret](#function-engine-running-and-interpret)"
which produces a resource graph. This series of resource graphs is passed
to the engine as they are produced.
What follows are some notes about each step.
#### Lexing
Lexing is done using [nex](https://github.com/blynn/nex). It is a pure-golang
implementation which is similar to _Lex_ or _Flex_, but which produces golang
code instead of C. It integrates reasonably well with golang's _yacc_ which is
used for parsing. The token definitions are in:
[lang/lexer.nex](https://github.com/purpleidea/mgmt/tree/master/lang/lexer.nex).
Lexing and parsing run together by calling the `LexParse` method.
#### Parsing
The parser used is golang's implementation of
[yacc](https://godoc.org/golang.org/x/tools/cmd/goyacc). The documentation is
quite abysmal, so it's helpful to rely on the documentation from standard yacc
and trial and error. One small advantage goyacc has over standard yacc is that it
can produce error messages from examples. The best documentation is to examine
the source. There is a short write up available [here](https://research.swtch.com/yyerror).
The yacc file exists at:
[lang/parser.y](https://github.com/purpleidea/mgmt/tree/master/lang/parser.y).
Lexing and parsing run together by calling the `LexParse` method.
#### Interpolation
Interpolation is used to transform the AST (which was produced from lexing and
parsing) into one which is either identical or different. It expands strings
which might contain expressions to be interpolated (eg: `"the answer is: ${foo}"`)
and can be used for other scenarios in which one statement or expression would
be better represented by a larger AST. Most nodes in the AST simply return their
own node address, and do not modify the AST.
#### Scope propagation
Scope propagation passes the parent scope (starting with the top-level, built-in
scope) down through the AST. This is necessary so that children nodes can access
variables in the scope if needed. Most AST nodes simply pass on the scope
without making any changes. The `ExprVar` node naturally consumes scopes and
the `StmtProg` node cleverly passes the scope through in the order expected for
the out-of-order bind logic to work.
This step typically calls the ordering algorithm to determine the correct order
of statements in a program.
#### Type unification
Each expression must have a known type. The unpleasant option is to force the
programmer to specify by annotation every type throughout their whole program
so that each `Expr` node in the AST knows what to expect. Type annotation is
allowed in situations when you want to explicitly specify a type, or when the
compiler cannot deduce it, however, most of it can usually be inferred.
For type inference to work, each node in the AST implements a `Unify` method
which is able to return a list of invariants that must hold true. This starts at
the top-most AST node, and gets called through to its children to assemble a
giant list of invariants. The invariants can take different forms. They can
specify that a particular expression must have a particular type, or they can
specify that two expressions must have the same types. More complex invariants
allow you to specify relationships between different types and expressions.
Furthermore, invariants can allow you to specify that only one invariant out of
a set must hold true.
Once the list of invariants has been collected, they are run through an
invariant solver. The solver can either return successfully or with an
error. If the solver returns successfully, it means that it has found a trivial
mapping between every expression and its corresponding type. At this point it
is a simple task to run `SetType` on every expression so that the types are
known. If the solver returns in error, it is usually due to one of two
possibilities:
1. Ambiguity
The solver does not have enough information to make a definitive or
unique determination about the expression to type mappings. The set of
invariants is ambiguous, and we cannot continue. An error will be
returned to the programmer. In this scenario the user will probably need
to add a type annotation, possibly because of a design bug in the user's
program.
2. Conflict
The solver has conflicting information that cannot be reconciled. In
this situation an explicit conflict has been found. If two invariants
are found which both expect a particular expression to have different
types, then it is not possible to find a valid solution. This almost
always happens if the user has made a type error in their program.
Only one solver currently exists, but it is possible to easily plug in an
alternate implementation if someone more skilled in the art of solver design
would like to propose a more logical or performant variant.
#### Function graph generation
At this point we have a fully typed AST. The AST must now be transformed into a
directed, acyclic graph (DAG) data structure that represents the flow of data as
necessary for everything to be reactive. Note that this graph is *different*
from the resource graph which is produced and sent to the engine. It is just a
coincidence that both happen to be DAGs. (You don't freak out when you see a
list data structure show up in more than one place, do you?)
To produce this graph, each node has a `Graph` method which it can call. This
starts at the top most node, and is called down through the AST. The edges in
the graphs must represent the individual expression values which are passed
from node to node. The names of the edges must match the function type argument
names which are used in the definition of the corresponding function. These
corresponding functions must exist for each expression node and are produced by
calling that expression's `Func` method. These are usually called by the
function engine during function creation and validation.
#### Function engine creation and validation
Finally we have a graph of the data flows. The function engine must first
initialize, which creates references to each of the necessary function
implementations, and gets information about each one. It then needs to be type
checked to ensure that the data flows all correctly match what is expected. If
you were to pass an `int` to a function expecting a `bool`, this would be a
problem. If all goes well, the program should get run shortly.
#### Function engine running and interpret
At this point the function engine runs. It produces a stream of events which
cause the `Output()` method of the top-level program to run, which produces the
list of resources and edges. These are then transformed into the resource graph
which is passed to the engine.
### Function API
If you'd like to create a built-in, core function, you'll need to implement the
function API interface named `Func`. It can be found in
[lang/interfaces/func.go](https://github.com/purpleidea/mgmt/tree/master/lang/interfaces/func.go).
Your function must have a specific type. For example, a simple math function
might have a signature of `func(x int, y int) int`. As you can see, all the
types are known _before_ compile time.
A separate discussion on this matter can be found in the [function guide](function-guide.md).
What follows are each of the method signatures and a description of each.
Failure to implement the API correctly can cause the function graph engine to
block, or the program to panic.
### Info
```golang
Info() *Info
```
The Info method must return a struct containing some information about your
function. The struct has the following type:
```golang
type Info struct {
	Sig *types.Type // the signature of the function, must be KindFunc
}
```
You must implement this correctly. Other fields in the `Info` struct may be
added in the future. This method is usually called before any other, and should
not depend on any other method being called first. Other methods must not depend
on this method being called first.
#### Example
```golang
func (obj *FooFunc) Info() *interfaces.Info {
	return &interfaces.Info{
		Sig: types.NewType("func(a str, b int) float"),
	}
}
```
### Init
```golang
Init(*Init) error
```
Init is called by the function graph engine to create an implementation of this
function. It is passed in a struct of the following form:
```golang
type Init struct {
	Hostname string // uuid for the host
	Input chan types.Value // Engine will close `input` chan
	Output chan types.Value // Stream must close `output` chan
	World resources.World
	Debug bool
	Logf func(format string, v ...interface{})
}
```
These values and references may be used (wisely) inside your function. `Input`
will contain a channel of input structs matching the expected input signature
for your function. `Output` will be the channel which you must send values to
whenever a new value should be produced. This must be done in the `Stream()`
function. You may carefully use `World` to access functionality provided by the
engine. You may use `Logf` to log informational messages, however there is no
guarantee that they will be displayed to the user. `Debug` specifies whether the
function is running in a user-requested debug mode. This might cause you to want
to print more log messages for example. You will need to save references to any
or all of these info fields that you wish to use in the struct implementing this
`Func` interface. At a minimum you will need to save `Output`, since at least
one value must be produced.
#### Example
Please see the example functions in
[lang/funcs/core/](https://github.com/purpleidea/mgmt/tree/master/lang/funcs/core/).
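In addition, a hypothetical minimal `Init` might look roughly like the following sketch; the `FooFunc` receiver and the `interfaces.Init` name are assumptions based on the interface location mentioned above:
```golang
// Init saves the engine-provided Init struct, so that Stream() can use
// the Input and Output channels (and Logf) later on.
func (obj *FooFunc) Init(init *interfaces.Init) error {
	obj.init = init // store for later use
	return nil
}
```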
### Stream
```golang
Stream() error
```
Stream is called by the function engine when it is ready for your function to
start accepting input and producing output. You must always produce at least one
value. Failure to produce at least one value will probably cause the function
engine to hang waiting for your output. This function must close the `Output`
channel when it has no more values to send. The engine will close the `Input`
channel when it has no more values to send. This may or may not influence
whether or not you close the `Output` channel.
#### Example
Please see the example functions in
[lang/funcs/core/](https://github.com/purpleidea/mgmt/tree/master/lang/funcs/core/).
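A very rough sketch of the general shape follows; real implementations in `lang/funcs/core/` also handle shutdown and avoid sending duplicate values, and the `compute` helper here is hypothetical:
```golang
// Stream reads each input struct, produces one output value for it, and
// closes the Output channel once the engine closes the Input channel.
func (obj *FooFunc) Stream() error {
	defer close(obj.init.Output) // no more values will be produced
	for input := range obj.init.Input {
		result, err := obj.compute(input) // hypothetical pure helper
		if err != nil {
			return err
		}
		obj.init.Output <- result
	}
	return nil // the Input channel was closed by the engine
}
```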
### Close
```golang
Close() error
```
Close asks the particular function to shutdown its `Stream()` function and
return.
#### Example
Please see the example functions in
[lang/funcs/core/](https://github.com/purpleidea/mgmt/tree/master/lang/funcs/core/).
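A hypothetical sketch, assuming the implementation keeps a `closeChan` field (created in `Init`) that `Stream()` selects on; this field is not part of the documented API:
```golang
// Close signals Stream() to shut down and return.
func (obj *FooFunc) Close() error {
	close(obj.closeChan) // Stream() should select on this and return
	return nil
}
```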
### Polymorphic Function API
For some functions, it might be helpful to be able to implement a function once,
but to have multiple polymorphic variants that can be chosen at compile time.
For this more advanced topic, you will need to use the
[Polymorphic Function API](#polymorphic-function-api). This will help with code
reuse when you have a small, finite number of possible type signatures, and also
for more complicated cases where you might have an infinite number of possible
type signatures. (eg: `[]str`, or `[][]str`, or `[][][]str`, etc...)
Suppose you want to implement a function which can assume different type
signatures. The mgmt language does not support polymorphic types; you must use
static types throughout the language, however, it is legal to implement a
function which can take different specific type signatures based on how it is
used. For example, you might wish to add a math function which could take the
form of `func(x int, y int) int` or `func(x float, y float) float` depending on
the input values. You might also want to implement a function which takes an
arbitrary number of input arguments (the number must be statically fixed at the
compile time of your program though) and which returns a string.
The `PolyFunc` interface adds additional methods which you must implement to
satisfy such a function implementation. If you'd like to implement such a
function, then please notify the project authors, and they will expand this
section with a longer description of the process.
#### Examples
What follows are a few examples that might help you understand some of the
language details.
##### Example Foo
TODO: please add an example here!
##### Example Bar
TODO: please add an example here!
## Frequently asked questions
(Send your questions as a patch to this FAQ! I'll review it, merge it, and
respond by commit with the answer.)
### What is the difference between `ExprIf` and `StmtIf`?
The language contains both an `if` expression, and an `if` statement. An `if`
expression takes a boolean conditional *and* it must contain exactly _two_
branches (a `then` and an `else` branch) which each contain one expression. The
`if` expression _will_ return the value of one of the two branches based on the
conditional.
#### Example:
```mcl
# this is an if expression, and both branches must exist
$b = true
$x = if $b {
	42
} else {
	-13
}
```
The `if` statement also takes a boolean conditional, but it may have either one
or two branches. Branches must only directly contain statements. The `if`
statement does not return any value, but it does produce output when it is
evaluated. The output consists primarily of resources (vertices) and edges.
#### Example:
```mcl
# this is an if statement, and in this scenario the else branch was omitted
$b = true
if $b {
file "/tmp/hello" {
content => "world",
}
}
```
### What is the difference between `types.Value.Str()` and `types.Value.String()`?
In the `lang/types` library, there is a `types.Value` interface. Every value in
our type system must implement this interface. One of the methods in this
interface is the `String() string` method. This lets you print a representation
of the value. You will probably never need to use this method.
In addition, the `types.Value` interface includes a number of helper methods
which return the value as an equivalent golang type. If you know that the value
is a `bool`, you can call `x.Bool()` on it. If it's a `string` you can call
`x.Str()`. Make sure not to call one of those type methods unless you know the
value is of that type, or you will trigger a panic!
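For example, a defensive pattern might look like this sketch (assuming a `KindStr` constant analogous to the `KindFunc` mentioned earlier; check `lang/types/` for the exact names):
```golang
// printIfStr only calls the Str() helper once we know the value really
// holds a str, which avoids triggering a panic.
func printIfStr(v types.Value) {
	if v.Type().Kind == types.KindStr {
		fmt.Printf("got a str: %s\n", v.Str())
	}
}
```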
### I created a `&ListValue{}` but it's not working!
If you create a base type like `bool`, `str`, `int`, or `float`, all you need to
do is build the `&BoolValue` and set the `V` field. Eg:
```golang
someBool := &types.BoolValue{V: true}
```
If you are building a container type like `list`, `map`, `struct`, or `func`,
then you *also* need to specify the type of the contained values. This is
because a list has a type of `[]str`, or `[]int`, or even `[][]foo`. Eg:
```golang
someListOfStrings := &types.ListValue{
	T: types.NewType("[]str"), // must match the contents!
	V: []types.Value{
		&types.StrValue{V: "a"},
		&types.StrValue{V: "bb"},
		&types.StrValue{V: "ccc"},
	},
}
```
If you don't build these properly, then you will cause a panic! Even empty lists
have a type.
### Is the `class` statement a singleton?
Not really, but practically it can be used as such. The `class` statement is not
a singleton since it can be called multiple times in different locations, and it
can also be parameterized and called multiple times (with `include`) using
different input parameters. The reason it can be used as such is that statement
output (from multiple classes) that is compatible (and usually identical) will
be automatically collated and have the duplicates removed. In that way, you can
assume that an unparameterized class is always a singleton, and that
parameterized classes can often be singletons depending on their contents and if
they are called in an identical way or not. In reality the de-duplication
actually happens at the resource output level, so anything that produces
multiple compatible resources is allowed.
### Are recursive `class` definitions supported?
Recursive class definitions where the contents of a `class` contain a
self-referential `include`, either directly, or with indirection via any other
number of classes, are not supported. It's not clear if they ever will be in the
future, unless we decide it's worth the extra complexity. The reason is that our
FRP actually generates a static graph which doesn't change unless the code does.
To support dynamic graphs would require our FRP to be a "higher-order" FRP,
instead of the simpler "first-order" FRP that it is now. You might want to
verify that I got the [nomenclature](https://github.com/gelisam/frp-zoo)
correct. If it turns out that there's an important advantage to supporting a
higher-order FRP in mgmt, then we can consider that in the future.
I realized that recursion would require a dynamic graph when I considered the
structure required for a simple recursive class definition. If some "depth"
value wasn't known statically by compile time, then there would be no way to
know how large the graph would grow, and furthermore, the graph would need to
change if that "depth" value changed.
### I don't like the mgmt language, is there an alternative?
Yes, the language is just one of the available "frontends" that passes a stream
of graphs to the engine "backend". While it _is_ the recommended way of using
mgmt, you're welcome to either use an alternate frontend, or write your own. To
write your own frontend, you must implement the
[GAPI](https://github.com/purpleidea/mgmt/blob/master/gapi/gapi.go) interface.
### I'm an expert in FRP, and you got it all wrong; even the names of things!
I am certainly no expert in FRP, and I've certainly got lots more to learn. One
thing FRP experts might notice is that some of the concepts from FRP are either
named differently, or are notably absent.
In mgmt, we don't talk about behaviours, events, or signals in the strict FRP
definitions of the words. Firstly, because we only support discretized streams
of values with no plan to add continuous semantics. Secondly, because we prefer
to use terms which are more natural and relatable to what our target audience is
expecting. Our users are more likely to have a background in Physiology, or
systems administration than a background in FRP.
Having said that, we hope that the FRP community will engage with us and help
improve the parts that we got wrong. Even if that means adding continuous
behaviours!
### This is brilliant, may I give you a high-five?
Thank you, and yes, probably. "Props" may also be accepted, although patches are
preferred. If you can't do either, [donations](https://purpleidea.com/misc/donate/)
to support the project are welcome too!
### Where can I find more information about mgmt?
Additional blog posts, videos and other material
[is available!](https://github.com/purpleidea/mgmt/blob/master/docs/on-the-web.md).
## Suggestions
If you have any ideas for changes or other improvements to the language, please
let us know! We're still pre 1.0 and pre 0.1 and happy to change it in order to
get it right!

docs/on-the-web.md (new file)
# On the web
Here is a list of places mgmt has appeared on the web. Feel free to send a patch
if we missed something that you think is relevant!
## Links
| Author | Format | Subject |
|---|---|---|
| James Shubin | blog | [Next generation configuration mgmt](https://purpleidea.com/blog/2016/01/18/next-generation-configuration-mgmt/) |
| James Shubin | video | [Introductory recording from DevConf.cz 2016](https://www.youtube.com/watch?v=GVhpPF0j-iE&html5=1) |
| James Shubin | video | [Introductory recording from CfgMgmtCamp.eu 2016](https://www.youtube.com/watch?v=fNeooSiIRnA&html5=1) |
| Julian Dunn | video | [On mgmt at CfgMgmtCamp.eu 2016](https://www.youtube.com/watch?v=kfF9IATUask&t=1949&html5=1) |
| Walter Heck | slides | [On mgmt at CfgMgmtCamp.eu 2016](http://www.slideshare.net/olindata/configuration-management-time-for-a-4th-generation/3) |
| Marco Marongiu | blog | [On mgmt](http://syslog.me/2016/02/15/leap-or-die/) |
| Felix Frank | blog | [From Catalog To Mgmt (on puppet to mgmt "transpiling")](https://ffrank.github.io/features/2016/02/18/from-catalog-to-mgmt/) |
| James Shubin | blog | [Automatic edges in mgmt (...and the pkg resource)](https://purpleidea.com/blog/2016/03/14/automatic-edges-in-mgmt/) |
| James Shubin | blog | [Automatic grouping in mgmt](https://purpleidea.com/blog/2016/03/30/automatic-grouping-in-mgmt/) |
| John Arundel | tweet | [“Puppets days are numbered.”](https://twitter.com/bitfield/status/732157519142002688) |
| Felix Frank | blog | [Puppet, Meet Mgmt (on puppet to mgmt internals)](https://ffrank.github.io/features/2016/06/12/puppet,-meet-mgmt/) |
| Felix Frank | blog | [Puppet Powered Mgmt (puppet to mgmt tl;dr)](https://ffrank.github.io/features/2016/06/19/puppet-powered-mgmt/) |
| James Shubin | blog | [Automatic clustering in mgmt](https://purpleidea.com/blog/2016/06/20/automatic-clustering-in-mgmt/) |
| James Shubin | video | [Recording from CoreOSFest 2016](https://www.youtube.com/watch?v=KVmDCUA42wc&html5=1) |
| James Shubin | video | [Recording from DebConf16](http://meetings-archive.debian.net/pub/debian-meetings/2016/debconf16/Next_Generation_Config_Mgmt.webm) ([Slides](https://annex.debconf.org//debconf-share/debconf16/slides/15-next-generation-config-mgmt.pdf)) |
| Felix Frank | blog | [Edging It All In (puppet and mgmt edges)](https://ffrank.github.io/features/2016/07/12/edging-it-all-in/) |
| Felix Frank | blog | [Translating All The Things (puppet to mgmt translation warnings)](https://ffrank.github.io/features/2016/08/19/translating-all-the-things/) |
| James Shubin | video | [Recording from systemd.conf 2016](https://www.youtube.com/watch?v=jB992Zb3nH0&html5=1) |
| James Shubin | blog | [Remote execution in mgmt](https://purpleidea.com/blog/2016/10/07/remote-execution-in-mgmt/) |
| James Shubin | video | [Recording from High Load Strategy 2016](https://vimeo.com/191493409) |
| James Shubin | video | [Recording from NLUUG 2016](https://www.youtube.com/watch?v=MmpwOQAb_SE&html5=1) |
| James Shubin | blog | [Send/Recv in mgmt](https://purpleidea.com/blog/2016/12/07/sendrecv-in-mgmt/) |
| Julien Pivotto | blog | [Augeas resource for mgmt](https://roidelapluie.be/blog/2017/02/14/mgmt-augeas/) |
| James Shubin | blog | [Metaparameters in mgmt](https://purpleidea.com/blog/2017/03/01/metaparameters-in-mgmt/) |
| James Shubin | video | [Recording from Incontro DevOps 2017](https://vimeo.com/212241877) |
| Yves Brissaud | blog | [mgmt aux HumanTalks Grenoble (french)](http://log.winsos.net/2017/04/12/mgmt-aux-human-talks-grenoble.html) |
| James Shubin | video | [Recording from OSDC Berlin 2017](https://www.youtube.com/watch?v=LkEtBVLfygE&html5=1) |
| Jonathan Gold | blog | [AWS:EC2 in mgmt](https://jonathangold.ca/blog/aws-ec2-in-mgmt/) |
| James Shubin | video | [Recording from OSMC Nuremberg 2017](https://www.youtube.com/watch?v=hSVadQLeplU&html5=1) |
| James Shubin | video | [Recording from LCA 2018, Developers Miniconf](https://www.youtube.com/watch?v=OvgGfW0ilbE) |
| James Shubin | video | [Recording from LCA 2018, Sysadmin Miniconf](https://www.youtube.com/watch?v=ELq1XOJMIPY) |
| James Shubin | video | [Recording from LCA 2018, Main Conference](https://www.youtube.com/watch?v=_9PG64AOQ3w) |
| James Shubin | video | [Recording from DevConf.cz 2017](https://www.youtube.com/watch?v=-FPEK08l1Zk) |
| James Shubin | video | [Recording from FOSDEM 2018, Config Management Devroom](https://video.fosdem.org/2018/UA2.114/mgmt.webm) |
| James Shubin | blog | [Mgmt Configuration Language](https://purpleidea.com/blog/2018/02/05/mgmt-configuration-language/) |
| James Shubin | video | [Recording from CfgMgmtCamp.eu 2018](https://www.youtube.com/watch?v=NxObmwZDyrI) |
| Jonathan Gold | blog | [Go Netlink and Select](https://jonathangold.ca/blog/go-netlink-and-select/) |
| James Shubin | video | [Recording from DevOpsDays Montreal 2018](https://www.youtube.com/watch?v=1i38c5cooHo) |
| James Shubin | video | [Recording from FOSDEM Minimalistic Languages Devroom 2019](https://video.fosdem.org/2019/K.4.201/mgmtconfig.webm) |
| James Shubin | video | [Recording from FOSDEM Infra Management Devroom 2019](https://video.fosdem.org/2019/UB2.252A/mgmt.webm) |
| James Shubin | video | [Recording from FOSDEM Graph Processing Devroom 2019](https://video.fosdem.org/2019/H.1308/graph_mgmt_config.webm) |
| James Shubin | video | [Recording from FOSDEM Virtualization Devroom 2019](https://video.fosdem.org/2019/H.2213/vai_real_time_virtualization_automation.webm) |
| James Shubin | video | [Recording from FOSDEM Containers Devroom 2019](https://video.fosdem.org/2019/UA2.114/containers_mgmt.webm) |
| James Shubin | video | [Recording from FOSDEM Monitoring Devroom 2019](https://video.fosdem.org/2019/UB2.252A/real_time_merging_of_config_management_and_monitoring.webm) |
| James Shubin | blog | [Mgmt Configuration Language: Class and Include](https://purpleidea.com/blog/2019/07/26/class-and-include-in-mgmt/) |
| James Shubin | video | [Recording from FOSDEM 2020, Main Track (History)](https://video.fosdem.org/2020/Janson/automation.webm) |
| James Shubin | video | [Recording from FOSDEM 2020, Infra Management Devroom](https://video.fosdem.org/2020/UA2.120/mgmt.webm) |
| James Shubin | video | [Recording from FOSDEM 2020, Minimalistic Languages Devroom](https://video.fosdem.org/2020/AW1.125/mgmtconfigmore.webm) |
| James Shubin | video | [Recording from CfgMgmtCamp.eu 2020](https://www.youtube.com/watch?v=Kd7FAORFtsc) |


@@ -30,8 +30,9 @@ Here is a list of the metrics we provide:

- `mgmt_resources_total`: The number of resources that mgmt is managing
- `mgmt_checkapply_total`: The number of CheckApply's that mgmt has run
- `mgmt_failures_total`: The number of resources that have failed
- `mgmt_failures`: The number of resources that have failed
- `mgmt_graph_start_time_seconds`: Start time of the current graph since unix
  epoch in seconds

For each metric, you will get some extra labels:

@@ -57,10 +58,9 @@ We do not have grafana dashboards yet. Patches welcome!

- [prometheus website](https://prometheus.io/)
- [prometheus documentation](https://prometheus.io/docs/introduction/overview/)
- [prometheus best practices regarding metrics naming](https://prometheus.io/docs/practices/naming/)
- [grafana website](http://grafana.org/)

[pgc]: https://github.com/prometheus/client_golang/blob/master/prometheus/go_collector.go
[etcdm]: https://coreos.com/etcd/docs/latest/metrics.html
[pd]: https://github.com/prometheus/prometheus/wiki/Default-port-allocations


@@ -109,8 +109,8 @@ file { "/tmp/mgmt-test":

To avoid this, specify the parameter explicitly:

```bash
puppet mgmtgraph print --code 'file { "/tmp/mgmt-test": backup => false }'
```

This is tedious in a more complex manifest. A good simplification is the

@@ -143,7 +143,7 @@ you to specify which `puppet.conf` file should be used during

translation.

```
mgmt run puppet --puppet /opt/my-manifest.pp --puppet-conf /etc/mgmt/puppet.conf
```

Within this file, you can just specify any needed options in the

@@ -164,3 +164,152 @@ language features.

You should probably make sure to always use the latest release of
both `ffrank-mgmtgraph` and `ffrank-yamlresource` (the latter is
getting pulled in as a dependency of the former).
## Using Puppet in conjunction with the mcl lang
The graph that Puppet generates for `mgmt` can be united with a graph
that is created from native `mgmt` code in its mcl language. This is
useful when you are in the process of replacing Puppet with mgmt. You
can translate your custom modules into mgmt's language one by one,
and let mgmt run the current mix.
Instead of the usual `--puppet-conf` flag and argv for `puppet` and `mcl` input,
you need to use alternative flags to make this work:
* `--lp-lang` to specify the mcl input
* `--lp-puppet` to specify the puppet input
* `--lp-puppet-conf` to point to the optional puppet.conf file
`mgmt` will derive a graph that contains all edges and vertices from
both inputs. You essentially get two unrelated subgraphs that run in
parallel. To form edges between these subgraphs, you have to define
special vertices that will be merged. This works through a hard-coded
naming scheme.
### Mixed graph example 1 - No merges
```mcl
# lang
file "/tmp/mgmt_dir/" { state => "present" }
file "/tmp/mgmt_dir/a" { state => "present" }
```
```puppet
# puppet
file { "/tmp/puppet_dir": ensure => "directory" }
file { "/tmp/puppet_dir/a": ensure => "file" }
```
These very simple inputs (including implicit edges from directory to
respective file) result in two subgraphs that do not relate.
```
File[/tmp/mgmt_dir/] -> File[/tmp/mgmt_dir/a]
File[/tmp/puppet_dir] -> File[/tmp/puppet_dir/a]
```
### Mixed graph example 2 - Merged vertex
In order to have merged vertices in the resulting graph, you will
need to include special resources and classes in the respective
input code.
* On the lang side, add `noop` resources with names starting in `puppet_`.
* On the Puppet side, add **empty** classes with names starting in `mgmt_`.
```mcl
# lang
noop "puppet_handover_to_mgmt" {}
file "/tmp/mgmt_dir/" { state => "present" }
file "/tmp/mgmt_dir/a" { state => "present" }
Noop["puppet_handover_to_mgmt"] -> File["/tmp/mgmt_dir/"]
```
```puppet
# puppet
class mgmt_handover_to_mgmt {}
include mgmt_handover_to_mgmt
file { "/tmp/puppet_dir": ensure => "directory" }
file { "/tmp/puppet_dir/a": ensure => "file" }
File["/tmp/puppet_dir/a"] -> Class["mgmt_handover_to_mgmt"]
```
The new `noop` resource is merged with the new class, resulting in
the following graph:
```
File[/tmp/puppet_dir] -> File[/tmp/puppet_dir/a]
|
V
Noop[handover_to_mgmt]
|
V
File[/tmp/mgmt_dir/] -> File[/tmp/mgmt_dir/a]
```
You put all your ducks in a row, and the resources from the Puppet input
run before those from the mcl input.
**Note:** The names of the `noop` and the class must be identical after the
respective prefix. The common part (here, `handover_to_mgmt`) becomes the name
of the merged resource.
### Mixed graph example 3 - Multiple merges
In most scenarios, it will not be possible to define a single handover
point like in the previous example. For example, if some Puppet resources
need to run in between two stages of native resources, you need at least
two merged vertices:
```mcl
# lang
noop "puppet_handover" {}
noop "puppet_handback" {}
file "/tmp/mgmt_dir/" { state => "present" }
file "/tmp/mgmt_dir/a" { state => "present" }
file "/tmp/mgmt_dir/puppet_subtree/state-file" { state => "present" }
File["/tmp/mgmt_dir/"] -> Noop["puppet_handover"]
Noop["puppet_handback"] -> File["/tmp/mgmt_dir/puppet_subtree/state-file"]
```
```puppet
# puppet
class mgmt_handover {}
class mgmt_handback {}
include mgmt_handover, mgmt_handback
class important_stuff {
file { "/tmp/mgmt_dir/puppet_subtree":
ensure => "directory"
}
# ...
}
Class["mgmt_handover"] -> Class["important_stuff"] -> Class["mgmt_handback"]
```
The resulting graph looks roughly like this:
```
File[/tmp/mgmt_dir/] -> File[/tmp/mgmt_dir/a]
|
V
Noop[handover] -> ( class important_stuff resources )
|
V
Noop[handback]
|
V
File[/tmp/mgmt_dir/puppet_subtree/state-file]
```
You can add arbitrary numbers of merge pairs to your code bases,
with relationships as needed. From our limited experience, code
readability suffers quite a lot from these, however. We advise keeping
these structures simple.


@@ -1,93 +1,113 @@

# Quick start guide

## Introduction

This guide is intended for users and developers. If you're brand new to `mgmt`,
it's probably a good idea to start by reading an
[introductory article about the engine](https://purpleidea.com/blog/2016/01/18/next-generation-configuration-mgmt/)
and an [introductory article about the language](https://purpleidea.com/blog/2018/02/05/mgmt-configuration-language/).
[There are other articles and videos available](on-the-web.md) if you'd like to
learn more or prefer different formats. Once you're familiar with the general
idea, or if you prefer a hands-on approach, please start hacking...

## Getting mgmt

You can either build `mgmt` from source, or you can download a pre-built
release. There are also some distro repositories available, but they may not be
up to date. A pre-built release is the fastest option if there's one that's
available for your platform. If you are developing or testing a new patch to
`mgmt`, or there is not a release available for your platform, then you'll have
to build your own.

### Downloading a pre-built release:

The latest releases can be found [here](https://github.com/purpleidea/mgmt/releases/).
An alternate mirror is available [here](https://dl.fedoraproject.org/pub/alt/purpleidea/mgmt/releases/).

Make sure to verify the signatures of all packages before you use them. The
signing key can be downloaded from [https://purpleidea.com/contact/#pgp-key](https://purpleidea.com/contact/#pgp-key)
to verify the release.

If you've decided to install a pre-built release, you can skip to the
[Running mgmt](#running-mgmt) section below!

### Building a release:

You'll need some dependencies, including `golang`, and some associated tools.

#### Installing golang

* You need golang version 1.13 or greater installed.
* To install on rpm style systems: `sudo dnf install golang`
* To install on apt style systems: `sudo apt install golang`
* To install on macOS systems install [Homebrew](https://brew.sh)
and run: `brew install go`
* You can run `go version` to check the golang version.
* If your distro is too old, you may need to [download](https://golang.org/dl/)
a newer golang version.

#### Setting up golang

* You can skip this step, as your installation will default to using `~/go/`,
but if you do not have a `GOPATH` yet and want one in a custom location, create
one and export it:

```shell
mkdir $HOME/gopath
export GOPATH=$HOME/gopath
```

* You might also want to add the GOPATH to your `~/.bashrc` or `~/.profile`.
* For more information you can read the
[GOPATH documentation](https://golang.org/cmd/go/#hdr-GOPATH_environment_variable).

#### Getting the mgmt code and associated dependencies

* Download the `mgmt` code into the `GOPATH`, and switch to that directory:

```shell
[ -z "$GOPATH" ] && mkdir ~/go/ || mkdir -p $GOPATH/src/github.com/purpleidea/
cd $GOPATH/src/github.com/purpleidea/ || cd ~/go/
git clone --recursive https://github.com/purpleidea/mgmt/
cd $GOPATH/src/github.com/purpleidea/mgmt/ || cd ~/go/src/github.com/purpleidea/mgmt/
```

* Add `$GOPATH/bin` to `$PATH`

```shell
export PATH=$PATH:$GOPATH/bin
```

* Run `make deps` to install system and golang dependencies. Take a look at
`misc/make-deps.sh` if you want to see the details of what it does.

#### Building mgmt

* Now run `make` to get a freshly built `mgmt` binary. If this succeeds, you can
proceed to the [Running mgmt](#running-mgmt) section below!

### Installing a distro release

Installation of `mgmt` from distribution packages currently needs improvement.
They are not always up-to-date with git master and as such are not recommended.
At the moment we have:

* [COPR](https://copr.fedoraproject.org/coprs/purpleidea/mgmt/) (currently dead)
* [Arch](https://aur.archlinux.org/packages/mgmt/) (currently stale)

Please contribute more and help improve these! We'd especially like to see a
Debian package!

## Running mgmt

* Run `mgmt run --tmp-prefix lang examples/lang/hello0.mcl` to try out a very
simple example! If you built it from source, you'll need to use `./mgmt` from
the project directory.
* Look in that example file that you ran to see if you can figure out what it
did! You can press `^C` to exit `mgmt`.
* Have fun hacking on our future technology and get involved to shape the
project!

## Examples

Please look in the [examples/lang/](../examples/lang/) folder for some more
examples!


@@ -16,29 +16,96 @@ Resources in `mgmt` are similar to resources in other systems in that they are

uniquely different in that they can detect when their state has changed, and as
a result can run to revert or repair this change instantly. For some background
on this design, please read the
[original article](https://purpleidea.com/blog/2016/01/18/next-generation-configuration-mgmt/)
on the subject.
## Resource Prerequisites
### Imports
You'll need to import a few packages to make writing your resource easier. Here
is the list:
```
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
```
The `engine` package contains most of the interfaces and helper functions that
you'll need to use. The `traits` package contains some base functionality which
you can use to easily add functionality to your resource without needing to
implement it from scratch.
### Resource struct
Each resource will implement methods as pointer receivers on a resource struct.
The naming convention for resources is that they end with a `Res` suffix.
The resource struct should include an anonymous reference to the `Base` trait.
Other `traits` can be added to the resource to add additional functionality.
They are discussed below.
You'll most likely want to store a reference to the `*Init` struct type as
defined by the engine. This is data that the engine will provide to your
resource on Init.
Lastly you should define the public fields that make up your resource API, as
well as any private fields that you might want to use throughout your resource.
Do _not_ depend on global variables, since multiple copies of your resource
could get instantiated.
You'll want to add struct tags based on the different frontends that you want
your resources to be able to use. Some frontends can infer this information if
it is not specified, but others cannot, and some might infer it poorly if the
struct name is ambiguous.
If you'd like your resource to be accessible by the `YAML` graph API (GAPI),
then you'll need to include the appropriate YAML fields as shown below. This is
used by the `Puppet` compiler as well, so make sure you include these struct
tags if you want existing `Puppet` code to be able to run using the `mgmt`
engine.
#### Example
```golang
type FooRes struct {
	traits.Base // add the base methods without re-implementation
	traits.Groupable
	traits.Refreshable

	init *engine.Init

	Whatever string `lang:"whatever" yaml:"whatever"` // you pick!
	Baz bool `lang:"baz" yaml:"baz"` // something else

	something string // some private field
}
```
## Resource API
To implement a resource in `mgmt` it must satisfy the
[`Res`](https://github.com/purpleidea/mgmt/blob/master/engine/resources.go)
interface. What follows are each of the method signatures and a description of
each.
### Default
```golang
Default() engine.Res
```
This returns a populated resource struct as a `Res`. It shouldn't populate any
values which already get a good default as the respective golang zero value. In
general it is preferable if the zero values make for the correct defaults.
(This is to say, resources are designed to behave safely and intuitively
when parameters take a zero value, whenever this is possible.)
#### Example
```golang
// Default returns some sensible defaults for this resource.
func (obj *FooRes) Default() engine.Res {
	return &FooRes{
		Answer: 42, // sometimes, defaults shouldn't be the zero value
	}
}
```
### Validate
```golang
Validate() error
```
This method is used to validate if the populated resource struct is a valid
representation of the resource kind. If it does not conform to the resource
specifications, it should return an error. If you notice that this method is
quite large, it might be an indication that you should reconsider the parameter
list and interface to this resource. This method is called by the engine
_before_ `Init`. It can also be called occasionally after a Send/Recv operation
to verify that the newly populated parameters are valid. Remember not to expect
access to the outside world when using this.
#### Example
```golang
// Validate reports any problems with the struct definition.
func (obj *FooRes) Validate() error {
	if obj.Answer != 42 { // validate whatever you want
		return fmt.Errorf("expected an answer of 42")
	}
	return nil
}
```
### Init
```golang
Init(*engine.Init) error
```
This is called to initialize the resource. If something goes wrong, it should
return an error. It should do any resource specific work such as initializing
channels, sync primitives, or anything else that is relevant to your resource.
If it is not needed throughout, it might be preferable to do some initialization
and tear down locally in either the Watch method or CheckApply method. The
choice depends on your particular resource and making the best decision requires
some experience with mgmt. If you are unsure, feel free to ask an existing
`mgmt` contributor. During `Init`, the engine will pass your resource a struct
containing some useful data and pointers. You should save a copy of this pointer
since you will need to use it in other parts of your resource.
#### Example
```golang
// Init initializes the Foo resource.
func (obj *FooRes) Init(init *engine.Init) error {
	obj.init = init // save for later

	// run the resource specific initialization, and error if anything fails
	if some_error {
		return err // something went wrong!
	}
	return nil
}
```
You should expect that `Validate` has run before `Init` is called, but you
shouldn't allow `Init` to dangerously `rm -rf /$the_world` if your code only
checks `$the_world` in `Validate`. Remember to always program safely!
### Close
```golang
Close() error
```
This is called to clean up after the resource. It is usually not necessary, but
can be useful if you'd like to properly close a persistent connection that you
opened in the `Init` method and were using throughout the resource. It is *not*
the shutdown signal that tells the resource to exit. That happens in the Watch
loop.
#### Example
```golang
// Close runs some cleanup code for this resource.
func (obj *FooRes) Close() error {
	err := obj.conn.Close() // close some internal connection
	obj.someMap = nil // free up some large data structure from memory
	return err
}
```
You should probably check the return errors of your internal methods, and pass
on an error if something went wrong.
### CheckApply
```golang
CheckApply(apply bool) (checkOK bool, err error)
```
`CheckApply` is where the real _work_ is done. Under normal circumstances, this
function should check if the state of this resource is correct, and if so, it
should return: `(true, nil)`. If the `apply` variable is set to `true`, then
this means that we should then proceed to run the changes required to bring the
resource into the correct state. If the `apply` variable is set to `false`, then
the resource is operating in _noop_ mode and _no operational changes_ should be
made!
After having executed the necessary operations to bring the resource back into
the desired state, or after having detected that the state was incorrect, but
that `apply` was set to `false` so that it could not be changed, you should
return `(false, nil)` from this
function. If you cannot, then you must return an error! The exception to this
rule is that if an external force changes the state of the resource while it is
being remedied, it is possible to return from this function even though the
resource isn't now converged. This is not a bug, as the resource's `Watch`
facility will detect the new change, ultimately resulting in a subsequent call
to `CheckApply`.
#### Example
```golang
// CheckApply does the idempotent work of checking and applying resource state.
func (obj *FooRes) CheckApply(apply bool) (bool, error) {
	// check the state
	if state_is_okay { return true, nil } // done early! :)

	// state was bad
	if !apply { return false, nil } // don't apply, we're in noop mode
	if any_error { return false, err } // anytime there's an err!

	// do the apply!
	return false, nil // after success applying
}
```
If multiple events are generated in quick succession, the engine may coalesce
them, and as a result some of the intermediate `CheckApply` runs might be
skipped. This is an engine optimization, and not a bug. It is mentioned here in
the documentation in case you are confused as to why a debug message you've
added to the code isn't always printed.
#### Paired execution
For many resources it is not uncommon to see `CheckApply` run twice in rapid
succession. This is usually not a pathological occurrence, but rather a healthy
pattern which is a consequence of the event system. When the state of the
resource is wrong and `CheckApply` applies a fix, that repair is itself a
change to the system, which can in turn
trigger the `Watch` code! In response, a second `CheckApply` is triggered, which
will likely find the state to now be correct.
#### Summary
* Anytime an error occurs during `CheckApply`, you should return `(false, err)`.
* If the state is correct and no changes are needed, return `(true, nil)`.
* You should only make changes to the system if `apply` is set to `true`.
* After checking the state and possibly applying the fix, return `(false, nil)`.
* Returning `(true, err)` is a programming error and can have a negative effect.
### Watch
```golang
Watch() error
```
`Watch` is a main loop that runs and sends messages when it detects that the
state of the resource might have changed. To send a message you should write to
the input event channel using the `Event` helper method. The Watch function
should run continuously until a shutdown message is received. If at any time
something goes wrong, you should return an error, and the `mgmt` engine will
handle possibly restarting the main loop based on the `retry` meta parameter.
It is better to send an event notification which turns out to be spurious, than
to miss a possible event. Resources which can miss events are incorrect and need
to be fixed. Note that in certain modes of operation (for example when the
`poll` metaparameter is used) the `Watch` main loop might not be
executed. As a result, the resource must still work even if the main loop is not
running.
#### Select
The lifetime of most resources' `Watch` method should be spent in an infinite
loop that is bounded by a `select` call. The `select` call is the point where
our method hands back control to the engine (and the kernel) so that we can
sleep until something of interest wakes us up. In this loop we must wait until
we get a shutdown event from the engine via the `<-obj.init.Done` channel, which
closes when we'd like to shut everything down. At this point you should clean
up, and let `Watch` close.
#### Events
If the `<-obj.init.Done` channel closes, we should shut down our resource. When
we want to send an event, we use the `Event` helper function. This
automatically marks the resource state as `dirty`. If you're unsure, it's not
harmful to send the event. This will ultimately cause `CheckApply` to run. This
method can block if the resource is being paused.
#### Startup
Once the `Watch` function has finished starting up successfully, it is important
to generate one event to notify the `mgmt` engine that we're now listening
successfully, so that it can run an initial `CheckApply` to ensure we're safely
tracking a healthy state and that we didn't miss anything when `Watch` was down
or from before `mgmt` was running. You must do this by calling the
`obj.init.Running` method.
#### Converged
The engine might be asked to shut down when the entire state of the system has
not seen any changes for some duration of time. The engine can determine this
automatically, but each resource can block this if it is absolutely necessary.
If you need this functionality, please contact one of the maintainers and ask
about adding this feature and improving these docs right here.
This particular facility is most likely not required for most resources. It may
prove to be useful if a resource wants to start off a long operation, but avoid
sending out erroneous `Event` messages to keep things alive until it finishes.
#### Example
```golang
// Watch is the listener and main loop for this resource.
func (obj *FooRes) Watch() error {
	if err, obj.foo = OpenFoo(); err != nil {
		return err // we couldn't startup
	}
	defer obj.foo.CloseFoo() // shutdown our Foo

	obj.init.Running() // when started, notify engine that we're running

	var send = false // send event?
	for {
		select {
		// the actual events!
		case event := <-obj.foo.Events:
			if is_an_event {
				send = true
			}

		// event errors
		case err := <-obj.foo.Errors:
			return err // will cause a retry or permanent failure

		case <-obj.init.Done: // signal for shutdown request
			return nil
		}

		// do all our event sending all together to avoid duplicate msgs
		if send {
			send = false
			obj.init.Event()
		}
	}
}
```
#### Summary
* Remember to call `Running` when the `Watch` is running successfully.
* Remember to process internal events and shut down promptly if asked to.
* Ensure the design of your resource is well thought out.
* Have a look at the existing resources for a rough idea of how this all works.
### Cmp
```golang
Cmp(engine.Res) error
```
Each resource must have a `Cmp` method. It is an abbreviation for `Compare`. It
takes as input another resource and must return whether they are identical or
not. This is used for identifying if an existing resource can be used in place
of a new one with a similar set of parameters. In particular, when switching
from one graph to a new (possibly identical) graph, this avoids recomputing the
state for resources which don't change or that are sufficiently similar that
they don't need to be swapped out.
In general if all the resource properties are identical, then they usually don't
need to be changed. On occasion, not all of them need to be compared, in
particular if they store some generated state, or if they aren't significant in
some way.
If the resource is identical, then you should return `nil`. If it is not, then
you should return a short error message which gives the reason it differs.
#### Example
```golang
// Cmp compares two resources and returns if they are equivalent.
func (obj *FooRes) Cmp(r engine.Res) error {
	// we can only compare FooRes to others of the same resource kind
	res, ok := r.(*FooRes)
	if !ok {
		return fmt.Errorf("not a %s", obj.Kind())
	}

	if obj.Whatever != res.Whatever {
		return fmt.Errorf("the Whatever param differs")
	}
	if obj.Flag != res.Flag {
		return fmt.Errorf("the Flag param differs")
	}
	return nil // they must match!
}
```
## Traits
Resources can have different `traits`, which means they can be extended to have
additional functionality or special properties. Those special properties are
usually added by extending your resource so that it is compatible with
additional interfaces that contain the `Res` interface. Each of these interfaces
represents the additional functionality. Since in most cases this requires some
common boilerplate, you can usually get some or most of the functionality by
embedding the correct trait struct anonymously in your struct. This is shown in
the struct example above. You'll always want to include the `Base` trait in all
resources. This provides some basics which you'll always need.
What follows is a list of the available traits.
### Refreshable
Some resources may choose to support receiving refresh notifications. In general
these should be avoided if possible, but nevertheless, they do make sense in
certain situations. Resources that support these need to verify if one was sent
during the CheckApply phase of execution. This is accomplished by calling the
`obj.init.Refresh() bool` method, and inspecting the return value. This is only
necessary if you plan to perform a refresh action. Refresh actions should still
respect the `apply` variable, and no system changes should be made if it is
`false`. Refresh notifications are generated by any resource when an action is
applied by that resource and are transmitted through graph edges which have
enabled their propagation. Resources that currently perform some refresh action
include `svc`, `timer`, and `password`.
It is very important that you include the `traits.Refreshable` struct in your
resource. If you do not include this, then calling `obj.init.Refresh` may
trigger a panic. This is a programmer error.
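As a minimal sketch of how this is commonly handled inside `CheckApply` (the
`doReloadAction` helper is hypothetical and stands in for whatever refresh
action your resource performs):
```golang
// somewhere inside CheckApply, typically after the normal state check
if obj.init.Refresh() { // was a refresh notification sent to us?
	if !apply {
		return false, nil // a refresh action is needed, but we're in noop mode
	}
	if err := doReloadAction(); err != nil { // hypothetical refresh action
		return false, err
	}
}
```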
### Edgeable
Edgeable is a trait that allows your resource to automatically connect itself to
other resources that use this trait to add edge dependencies between the two. An
older blog post on this topic is
[available](https://purpleidea.com/blog/2016/03/14/automatic-edges-in-mgmt/).
After you've included this trait, you'll need to implement two methods on your
resource.
#### UIDs
```golang
UIDs() []engine.ResUID
```
The `UIDs` method returns a list of `ResUID` interfaces that represent the
particular resource uniquely. This is used with the AutoEdges API to determine
if another resource can match a dependency to this one.
#### AutoEdges
```golang
AutoEdges() (engine.AutoEdge, error)
```
This returns a struct that implements the `AutoEdge` interface. This struct
is used to match other resources that might be relevant dependencies for this
resource.
### Groupable
Groupable is a trait that can allow your resource to automatically group itself
to other resources. Doing so can reduce the resource or runtime burden on the
engine, and improve performance in some scenarios. An older blog post on this
topic is
[available](https://purpleidea.com/blog/2016/03/30/automatic-grouping-in-mgmt/).
### Sendable
Sendable is a trait that allows your resource to send values through the graph
edges to another resource. These values are produced during `CheckApply`. They
can be sent to any resource that has an appropriate parameter and that has the
`Recvable` trait. You can read more about this in the Send/Recv section below.
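The exact wiring is not documented in this guide, but as a rough sketch (the
`FooSends` struct, its `Hello` field, and the surrounding names are
hypothetical; the `Sends` method and the `obj.init.Send` call mirror how
existing sendable resources appear to be structured):
```golang
// FooSends is a hypothetical struct of the values this resource can send.
type FooSends struct {
	Hello *string `lang:"hello"` // a value that another resource could receive
}

// Sends returns the struct of sendable values, with their default contents.
func (obj *FooRes) Sends() interface{} {
	return &FooSends{
		Hello: nil,
	}
}

// Then, somewhere inside CheckApply, after the value has been produced:
//	hello := "some computed value"
//	if err := obj.init.Send(&FooSends{Hello: &hello}); err != nil {
//		return false, err
//	}
```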
### Recvable
Recvable is a trait that allows your resource to receive values through the
graph edges from another resource. These values are consumed during the
`CheckApply` phase, and can be detected there as well. They can be received from
any resource that has an appropriate value and that has the `Sendable` trait.
You can read more about this in the Send/Recv section below.
### Collectable
This is currently a stub and will be updated once the DSL is further along.
## Resource Initialization
During the resource initialization in `Init`, the engine will pass in a struct
containing a bunch of data and methods. What follows is a description of each
one and how it is used.
### Program
Program is a string containing the name of the program. Very few resources need
this.
### Hostname
Hostname is the uuid for the host. It will be occasionally useful in some
resources. It is preferable if you can avoid depending on this. It is possible
that in the future this will be a channel which changes if the local hostname
changes.
### Running
Running must be called after your watches are all started and ready. It is only
called from within `Watch`. It is used to notify the engine that you're now
ready to detect changes.
### Event
Event sends an event notifying the engine of a possible state change. It is
only called from within `Watch`.
### Done
Done is a channel that closes when the engine wants us to shut down. It is only
used from within `Watch`.
### Refresh
Refresh returns whether the resource received a notification. This flag can be
used to tell a `svc` to reload, or to perform some state change that wouldn't
otherwise be noticed by inspection alone. You must implement the `Refreshable`
trait for this to work. It is only called from within `CheckApply`.
### Send
Send exposes some variables you wish to send via the `Send/Recv` mechanism. You
must implement the `Sendable` trait for this to work. It is only called from
within `CheckApply`.
### Recv
Recv provides a map of variables which were sent to this resource via the
`Send/Recv` mechanism. You must implement the `Recvable` trait for this to work.
It is only called from within `CheckApply`.
### World
World provides a connection to the outside world. This is most often used for
communicating with the distributed database. It can be used in `Init`,
`CheckApply` and `Watch`. Use with discretion and understanding of the internals
if needed in `Close`.
### VarDir
VarDir is a facility for local storage. It is used to return a path to a
directory which may be used for temporary storage. It should be cleaned up on
resource `Close` if the resource would like to delete the contents. The resource
should not assume that the initial directory is empty, and it should be cleaned
on `Init` if that is a requirement.
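As a sketch (assuming the helper takes a relative subpath and returns an
absolute directory path plus an error, and that `obj.vardir` is a hypothetical
private field on your resource struct):
```golang
// somewhere inside Init, after obj.init has been saved
dir, err := obj.init.VarDir("") // ask for our private working directory
if err != nil {
	return err // we couldn't get a local state directory
}
obj.vardir = dir // remember it for later use in CheckApply or Watch
```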
### Debug
Debug signals whether we are running in debugging mode. In this case, we might
want to log additional messages.
### Logf
Logf is a logging facility which will correctly namespace any messages which you
wish to pass on. You should use this instead of the log package directly for
production quality resources.
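For example, from inside any resource method (this is only a sketch; the
message contents are arbitrary):
```golang
// inside CheckApply, Watch, or any other resource method
if obj.init.Debug {
	obj.init.Logf("checking the state of: %s", obj.Name()) // extra detail when debugging
}
obj.init.Logf("the state was changed") // normal, correctly namespaced logging
```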
## Further considerations
There is some additional information that any resource writer will need to know.
Each issue is listed separately below!
### Resource registration
All resources must be registered with the engine so that they can be found. This
also ensures they can be encoded and decoded. Make sure to include the following
code snippet for this to work.
```golang
func init() { // special golang method that runs once
	// set your resource kind and struct here (the kind must be lower case)
	engine.RegisterResource("foo", func() engine.Res { return &FooRes{} })
}
```
### YAML Unmarshalling
To support YAML unmarshalling for your resource, you must implement an
additional method. It is recommended if you want to use your resource with the
`Puppet` compiler.
```golang
UnmarshalYAML(unmarshal func(interface{}) error) error // optional
```
This is optional, but recommended for any resource that will have a YAML
accessible struct. It is not required because to do so would mean that
third-party or custom resources (such as those someone writes to use with
`libmgmt`) would have to implement this needlessly.
The signature intentionally matches what is required to satisfy the `go-yaml`
[Unmarshaler](https://godoc.org/gopkg.in/yaml.v2#Unmarshaler) interface.
#### Example
```golang
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *FooRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
	type rawRes FooRes // indirection to avoid infinite recursion

	def := obj.Default()     // get the default
	res, ok := def.(*FooRes) // put in the right format
	if !ok {
		return fmt.Errorf("could not convert to FooRes")
	}
	raw := rawRes(*res) // convert; the defaults go here

	if err := unmarshal(&raw); err != nil {
		return err
	}

	*obj = FooRes(raw) // restore from indirection with type conversion!
	return nil
}
```
## Send/Recv
In `mgmt` there is a novel concept called _Send/Recv_. For some background,
please read the [introductory article](https://purpleidea.com/blog/2016/12/07/sendrecv-in-mgmt/).
When using this feature, the engine will automatically send the user specified
value to the intended destination without requiring much resource specific code.
Any time that one of the destination values is changed, the engine automatically
marks the resource state as `dirty`. To detect if a particular value was
received, and if it changed (during this invocation of `CheckApply`) from the
previous value, you can query the `obj.init.Recv()` method. It returns a
`map` of all the keys which can be received on, and the value has a `Changed`
property which will indicate whether the value was updated on this particular
`CheckApply` invocation. The type of the sending key must match that of the
receiving one. This can _only_ be done inside of the `CheckApply` function!
```golang
// inside CheckApply, probably near the top
if val, exists := obj.init.Recv()["SomeKey"]; exists {
	obj.init.Logf("the SomeKey param was sent to us from: %s.%s", val.Res, val.Key)
	if val.Changed {
		obj.init.Logf("the SomeKey param was just updated!")
		// you may want to invalidate some local cache
	}
}
```
The specifics of resource sending are not currently documented. Please send a
patch here!
One subtle scenario is that if a resource creates a local cache or stores a
computation that depends on the value of a public parameter and will require
invalidation should that public parameter change, then you must detect that
scenario and invalidate the cache when it occurs. This *must* be processed
before there is a possibility of failure in CheckApply, because if we fail (and
possibly run again) the subsequent send->recv transfer might not have a new
value to copy, and therefore we won't see this notification of change.
Therefore, it is important to process these promptly, if they must not be lost,
such as for cache invalidation.
Remember, `Send/Recv` only changes your resource code if you cache state.
## Composite resources
Composite resources are resources which embed one or more existing resources.
This is useful to prevent code duplication in higher level resource scenarios.
The best example of this technique can be seen in the `nspawn` resource which
uses the `svc` resource internally. Unfortunately no further documentation
about this subject has been written. To
expand this section, please send a patch! Please contact us if you'd like to
work on a resource that uses this feature, or to add it to an existing one!
## Frequently asked questions
(Send your questions as a patch to this FAQ! I'll review it, merge it, and
respond by commit with the answer.)
### Can I write resources in a different language?
Currently `golang` is the only supported language for built-in resources. We
might consider allowing external resources to be imported in the future. This
will likely require a language that can expose a C-like API, such as `python` or
`ruby`. Custom `golang` resources are already possible when using mgmt as a lib.
Higher level resource collections will be possible once the `mgmt` DSL is ready.
### Why does the resource API have `CheckApply` instead of two separate methods?
In an early version we actually had both "parts" as separate methods, namely:
`StateOK` (Check) and `Apply`, but the [decision](58f41eddd9c06b183f889f15d7c97af81b0331cc)
was made to merge the two into a single method. There are two reasons for this:
1. Many situations would involve the engine running both `Check` and `Apply`. If
the resource needed to share some state (for efficiency purposes) between the
two calls, this is much more difficult. A common example is that a resource
might want to open a connection to `dbus` or `http` to do resource state testing
and applying. If the methods are combined, there's no need to open and close
them twice. A counter argument might be that you could open the connection in
`Init`, and close it in `Close`, however you might not want that open for the
full lifetime of the resource if you only change state occasionally.
2. Suppose you came up with a really good reason why you wanted the two methods
to be separate. It turns out that the current `CheckApply` can wrap this easily.
It would look approximately like this:
```golang
func (obj *FooRes) CheckApply(apply bool) (bool, error) {
	// my private split implementation of check and apply
	if c, err := obj.check(); err != nil {
		return false, err // we errored
	} else if c {
		return true, nil // state was good!
	}

	if !apply {
		return false, nil // state needs fixing, but apply is false
	}

	err := obj.apply() // errors if failure or unable to apply
	return false, err // always return false, with an optional error
}
```
Feel free to use this pattern if you're convinced it's necessary. Alternatively,
if you think I got the `Res` API wrong and you have an improvement, please let
us know!
### Why do resources have both a `Cmp` method and an `IFF` (on the UID) method?
The `Cmp()` methods are for determining if two resources are effectively the
same, which is used to make graph change deltas efficient. This is when we want
to change from the current running graph to a new graph, but preserve the common
vertices. Since we want to make this process efficient, we only update the parts
that are different, and leave everything else alone. This `Cmp()` method can
tell us if two resources are the same. In case it is not obvious, `cmp` is an
abbrev. for compare.
The `IFF()` method is part of the whole UID system, which is for discerning if a
resource meets the requirements another expects for an automatic edge. This is
because the automatic edge system assumes a unified UID pattern to test for
equality. In the future it might be helpful or sane to merge the two similar
comparison functions, although for now they are separate because they actually
answer different questions.
### What new resource primitives need writing?
There are still many ideas for new resources that haven't been written yet. If
you'd like to contribute one, please contact us and tell us about your idea!
### Is the resource API stable? Does it ever change?
Since we are pre 1.0, the resource API is not guaranteed to be stable; however,
it is not expected to change significantly. The last major change kept the
core functionality nearly identical, simplified the implementation of all the
resources, and took about five to ten minutes to port each resource to the new
API. The fundamental logic and behaviour behind the resource API has not changed
since it was initially introduced.
### Where can I find more information about mgmt?
Additional blog posts, videos and other material [is available!](https://github.com/purpleidea/mgmt/blob/master/docs/on-the-web.md).
## Suggestions
If you have any ideas for API changes or other improvements to resource writing,
please let us know! We're still pre 1.0 and pre 0.1 and happy to break API in
order to get it right!

docs/resources.md
# Resources
Here we list all the built-in resources and their properties. The resource
primitives in `mgmt` are typically more powerful than resources in other
configuration management systems because they can be event based which lets them
respond in real-time to converge to the desired state. This property allows you
to build more complex resources that you probably hadn't considered in the past.
In addition to the resource specific properties, there are resource properties
(otherwise known as parameters) which can apply to every resource. These are
called [meta parameters](documentation.md#meta-parameters) and are listed
separately. Certain meta parameters aren't very useful when combined with
certain resources, but in general, it should be fairly obvious, such as when
combining the `noop` meta parameter with the [Noop](#Noop) resource.
You might want to look at the [generated documentation](https://godoc.org/github.com/purpleidea/mgmt/engine/resources)
for more up-to-date information about these resources.
* [Augeas](#Augeas): Manipulate files using augeas.
* [Consul:KV](#ConsulKV): Set keys in a Consul datastore.
* [Docker](#Docker):[Container](#Container): Manage docker containers.
* [Exec](#Exec): Execute shell commands on the system.
* [File](#File): Manage files and directories.
* [Group](#Group): Manage system groups.
* [Hostname](#Hostname): Manage the hostname on the system.
* [KV](#KV): Set a key value pair in our shared world database.
* [Msg](#Msg): Send log messages.
* [Net](#Net): Manage a local network interface.
* [Noop](#Noop): A simple resource that does nothing.
* [Nspawn](#Nspawn): Manage systemd-machined nspawn containers.
* [Password](#Password): Create random password strings.
* [Pkg](#Pkg): Manage system packages with PackageKit.
* [Print](#Print): Print messages to the console.
* [Svc](#Svc): Manage system systemd services.
* [Test](#Test): A mostly harmless resource that is used for internal testing.
* [Tftp:File](#TftpFile): Add files to the small embedded tftp server.
* [Tftp:Server](#TftpServer): Run a small embedded tftp server.
* [Timer](#Timer): Trigger events on a periodic interval.
* [User](#User): Manage system users.
* [Virt](#Virt): Manage virtual machines with libvirt.
## Augeas
The augeas resource uses [augeas](http://augeas.net/) commands to manipulate
files.
## Docker
### Container
The docker:container resource manages docker containers.
It has the following properties:
* `state`: either `running`, `stopped`, or `removed`
* `image`: docker `image` or `image:tag`
* `cmd`: a command or list of commands to run on the container
* `env`: a list of environment variables, e.g. `["VAR=val",],`
* `ports`: a map of portmappings, e.g. `{"tcp" => {80 => 8080, 443 => 8443,},},`
* `apiversion`: override the host's default docker API version, e.g. `"v1.35"`
* `force`: destroy and rebuild the container instead of erroring on wrong image
## Exec
The exec resource can execute commands on your system.
## File
The file resource manages files and directories. In `mgmt`, directories are
identified by a trailing slash in their path name. Files have no such slash.
It has the following properties:
* `path`: absolute file path (directories have a trailing slash here)
* `state`: either `exists`, `absent`, or undefined
* `content`: raw file content
* `mode`: octal unix file permissions or symbolic string
* `owner`: username or uid for the file owner
* `group`: group name or gid for the file group
### Path
The path property specifies the file or directory that we are managing.
### State
The state property describes the action we'd like to apply for the resource. The
possible values are: `exists` and `absent`. If you do not specify either of
these, it is undefined. Without specifying this value as `exists`, another param
cannot cause a file to get implicitly created. When specifying this value as
`absent`, you should not specify any other params that would normally change the
file. For example, if you specify `content` and this param is `absent`, then you
will get an engine validation error.
### Content
The content property is a string that specifies the desired file contents.
### Source
The source property points to a source file or directory path that we wish to
copy over and use as the desired contents for our resource.
### Fragments
The fragments property lets you specify a list of files to concatenate together
to make up the contents of this file. They will be combined in the order that
they are listed in. If one of the files specified is a directory, then the
files in that top-level directory will be themselves combined together and used.
### Recurse
The recurse property limits whether file resource operations should recurse into
and monitor directory contents with a depth greater than one.
### Force
The force property is required if we want the file resource to be able to change
a file into a directory or vice-versa. If such a change is needed, but the force
property is not set to `true`, then this file resource will error.
### Purge
The purge property is used when this file represents a directory, and we'd like
to remove any unmanaged files from within it. Please note that any unmanaged
files in a directory with this flag set will be irreversibly deleted.
## Group
The group resource manages the system groups from `/etc/group`.
## Hostname
The hostname resource manages static, transient/dynamic and pretty hostnames
on the system and watches them for changes.
### static_hostname
The static hostname is the one configured in /etc/hostname or a similar
file.
It is chosen by the local user. It is not always in sync with the current
host name as returned by the gethostname() system call.
### transient_hostname
The transient / dynamic hostname is the one configured via the kernel's
sethostname().
It can be different from the static hostname in case DHCP or mDNS have been
configured to change the name based on network information.
### pretty_hostname
The pretty hostname is a free-form UTF8 host name for presentation to the user.
### hostname
Hostname is the fallback value for all 3 fields above. If only `hostname` is
specified, it will set all 3 fields to this value.
## KV
The KV resource sets a key and value pair in the global world database. This is
quite useful for setting a flag after a number of resources have run. It will
ignore database updates to the value that are greater in compare order than the
requested key if the `SkipLessThan` parameter is set to true. If we receive a
refresh, then the stored value will be reset to the requested value even if the
stored value is greater.
### Key
The string key under which the value is stored.
### Value
The string value to set. This can also be set via Send/Recv.
### SkipLessThan
If this parameter is set to `true`, then it will ignore updating the value as
long as the database versions are greater than the requested value. The compare
operation used is based on the `SkipCmpStyle` parameter.
### SkipCmpStyle
By default this converts the string values to integers and compares them as you
would expect.
## Msg
The msg resource sends messages to the main log, or an external service such
as systemd's journal.
## Net
The net resource manages a local network interface using netlink.
## Noop
The noop resource does absolutely nothing. It does have some utility in testing
`mgmt` and also as a placeholder in the resource graph.
## Nspawn
The nspawn resource is used to manage systemd-machined style containers.
## Password
The password resource can generate a random string to be used as a password. It
will re-generate the password if it receives a refresh notification.
## Pkg
The pkg resource is used to manage system packages. This resource works on many
different distributions because it uses the underlying packagekit facility which
supports different backends for different environments. This ensures that we
have great Debian (deb/dpkg) and Fedora (rpm/dnf) support simultaneously.
## Print
The print resource prints messages to the console.
## Svc
The service resource is still very WIP. Please help us by improving it!
## Test
The test resource is mostly harmless and is used for internal tests.
## Tftp:File
This adds files to the running tftp server. It's useful because it allows you to
add individual files without needing to create them on disk.
## Tftp:Server
Run a small embedded tftp server. This doesn't apply any state, but instead runs
a pure golang tftp server in the Watch loop.
## Timer
This resource needs better documentation. Please help us by improving it!
## User
The user resource manages the system users from `/etc/passwd`.
## Virt
The virt resource can manage virtual machines via libvirt.

docs/style-guide.md
# Style guide
This document aims to be a reference for the desired style for patches to mgmt,
and the associated `mcl` language. In particular it describes conventions which
are not officially enforced by tools and in test cases, or that aren't clearly
defined elsewhere. We try to turn as many of these into automated tests as we
can. If something here is not defined in a test, or you think it should be,
please write one! Even better, you can write a tool to automatically fix it,
since this is more useful and can easily be turned into a test!
## Overview for golang code
Most style issues are enforced by the `gofmt` tool. Other style aspects are
often common sense to seasoned programmers, and we hope this will be a useful
reference for new programmers.
There are a lot of useful code review comments described
[here](https://github.com/golang/go/wiki/CodeReviewComments). We don't
necessarily follow everything strictly, but it is in general a very good guide.
### Basics
* All of our golang code is formatted with `gofmt`.
### Comments
All of our code is commented with the minimums required for `godoc` to function,
and so that our comments pass `golint`. Code comments should either be full
sentences (which end with a period, use proper punctuation, and capitalize the
first word when it is not a lower cased identifier), or are short one-line
comments in the source which are not full sentences and don't end with a period.
They should explain algorithms, describe non-obvious behaviour, or situations
which would otherwise need explanation or additional research during a code
review. Notes about use of unfamiliar API's is a good idea for a code comment.
#### Example
Here you can see a function with the correct `godoc` string. The first word must
match the name of the function. It is _not_ capitalized because the function is
private.
```golang
// square multiplies the input integer by itself and returns this product.
func square(x int) int {
	return x * x // we don't care about overflow errors
}
```
### Line length
In general we try to stick to 80 character lines when it is appropriate. It is
almost *always* appropriate for function `godoc` comments and most longer
paragraphs. Exceptions are always allowed based on the will of the maintainer.
It is usually better to exceed 80 characters than to break code unnecessarily.
If your code often exceeds 80 characters, it might be an indication that it
needs refactoring.
Occasionally, inline two-line source code comments are used within a function.
These should usually be balanced so that you don't have one line with 78
characters and the second with only four. Split the comment between the two.
### Default values
Whenever a constant or function parameter is defined, try to have the safer or
default value be the `zero` value. For example, instead of `const NoDanger`, use
`const AllowDanger` so that the `false` value is the safe scenario.
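A minimal sketch of what this looks like in practice (the names here are made up for illustration):
```golang
// AllowDanger defaults to false, so the zero value is the safe behaviour.
const AllowDanger = false

func runTask(allowDanger bool) {
	if !allowDanger {
		return // safe by default: the zero value does nothing risky
	}
	// ... perform the dangerous operation ...
}
```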
### Method receiver naming
[Contrary](https://github.com/golang/go/wiki/CodeReviewComments#receiver-names)
to the specialized naming of the method receiver variable, we usually name all
of these `obj` for ease of code copying throughout the project, and for faster
identification when reviewing code. Some anecdotal studies have shown that it
makes the code easier to read since you don't need to remember the name of the
method receiver variable in each different method. This is very similar to what
is done in `python`.
#### Example
```golang
// Bar does a thing, and returns the number of baz results found in our
// database.
func (obj *Foo) Bar(baz string) int {
	if len(obj.s) > 0 {
		return strings.Count(obj.s, baz)
	}
	return -1
}
```
### Variable naming
We prefer shorter, scoped variables rather than `unnecessarilyLongIdentifiers`.
Remember the scoping rules and feel free to use new variables where appropriate.
For example, in a short string snippet you can use `s` instead of `myString`, as
well as other common choices. `i` is a common `int` counter, `f` for files, `fn`
for functions, `x` for something else, and so on.
### Variable re-use
Feel free to create and use new variables instead of attempting to re-use the
same string. For example, if a function input arg is named `s`, you can use a
new variable to receive the first computation result on `s` instead of storing
it back into the original `s`. This avoids confusion if a different part of the
code wants to read the original input, and it avoids any chance of editing the
original caller's copy of the variable by reference.
#### Example
```golang
func MyNotIdealFunc(s string, b bool) string {
	if !b {
		return s + "hey"
	}
	s = strings.Replace(s, "blah", "", -1) // not ideal (re-use of `s` var)
	return s
}

func MyOkayFunc(s string, b bool) string {
	if !b {
		return s + "hey"
	}
	s2 := strings.Replace(s, "blah", "", -1) // doesn't re-use `s` variable
	return s2
}

func MyGreatFunc(s string, b bool) string {
	if !b {
		return s + "hey"
	}
	return strings.Replace(s, "blah", "", -1) // even cleaner
}
```
### Constants in code
If a function takes a specifier (often a bool) it's sometimes better to name
that variable (often with a `const`) rather than leaving a naked `bool` in the
code. For example, `x := MyFoo("blah", false)` is less clear than
`const useMagic = false; x := MyFoo("blah", useMagic)`.
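Laid out as code (using the same hypothetical `MyFoo` from the sentence above):
```golang
func example() {
	// Less clear: the naked bool doesn't explain itself at the call site.
	x1 := MyFoo("blah", false)

	// Clearer: naming the specifier makes the intent obvious.
	const useMagic = false
	x2 := MyFoo("blah", useMagic)

	_, _ = x1, x2 // only to keep the example self-contained
}
```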
### Consistent ordering
In general we try to preserve a logical ordering in source files which usually
matches the common order of execution that a _lazy evaluator_ would follow.
This is also the order which is recommended when creating interface types. When
implementing an interface, arrange your methods in the same order that they are
declared in the interface.
When implementing code for the various types in the language, please follow this
order: `bool`, `str`, `int`, `float`, `list`, `map`, `struct`, `func`.
For other aspects where you have a set of items, try to be internally consistent
as well. For example, if you have two switch statements with `A`, `B`, and `C`,
please use the same ordering for these elements elsewhere that they appear in
the code and in the commentary if it is not illogical to do so.
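For example, a hedged sketch with a made-up interface, showing the implementation methods in the same order as the declaration:
```golang
// Runner is a hypothetical interface used only to illustrate ordering.
type Runner interface {
	Init() error  // called first
	Run() error   // called next
	Close() error // called last
}

type worker struct{}

// The methods are implemented in the same order they are declared above.
func (obj *worker) Init() error  { return nil }
func (obj *worker) Run() error   { return nil }
func (obj *worker) Close() error { return nil }
```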
### Product identifiers
Try to avoid references in the code to `mgmt` or a specific program name string
if possible. This makes it easier to rename the code if we ever pick a better
name, and to better support `libmgmt` if we embed it. You can use the `Program` variable
which is available in numerous places if you want a string to put in the logs.
It is also recommended to avoid the `go` (programming language name) string if
possible. Try to use `golang` if required, since the word `go` is already
overloaded, and in particular it was already used by the
[`go!`](https://en.wikipedia.org/wiki/Go!_(programming_language)) programming
language.
## Overview for mcl code
The `mcl` language is quite new, so this guide will probably change over time as
we find what's best, and hopefully we'll be able to add an `mclfmt` tool in the
future so that less of this needs to be documented. (Patches welcome!)
### Indentation
Code indentation is done with tabs. The tab-width is a personal preference, which
is the beauty of using tabs: each reader can view the code at whatever width they
prefer. The
inventor of `mgmt` uses and recommends a width of eight, and that is what should
be used if your tool requires a modeline to be publicly committed.
### Line length
We recommend you stick to 80 char line width. If you find yourself with deeper
nesting, it might be a hint that your code could be refactored in a more
pleasant way.
### Capitalization
At the moment, variables, function names, and classes are all lowercase and do
not contain underscores. We will probably figure out what style to recommend
when the language is a bit further along. For example, we haven't decided if we
should have a notion of public and private variables, and if we'd like to
reserve capitalization for this situation.
### Module naming
We recommend you name your modules with an `mgmt-` prefix. For example, a module
about bananas might be named `mgmt-banana`. This is helpful for the useful magic
built into the module import code, which will by default take a remote import
like: `import "https://github.com/purpleidea/mgmt-banana/"` and namespace it as
`banana`. Of course you can always pick the namespace yourself on import with:
`import "https://github.com/purpleidea/mgmt-banana/" as tomato` or something
similar.
### Licensing
We believe that sharing code helps reduce unnecessary re-invention, so that we
can [stand on the shoulders of giants](https://en.wikipedia.org/wiki/Standing_on_the_shoulders_of_giants)
and hopefully make faster progress in science, medicine, exploration, etc... As
a result, we recommend releasing your modules under the [LGPLv3+](https://www.gnu.org/licenses/lgpl-3.0.en.html)
license for the maximum balance of freedom and re-usability. We strongly oppose
any [CLA](https://en.wikipedia.org/wiki/Contributor_License_Agreement)
requirements and believe that the ["inbound==outbound"](https://ref.fedorapeople.org/fontana-linuxcon.html#slide2)
rule applies. Lastly, we do not support software patents and we hope you don't
either!
## Suggestions
If you have any ideas for suggestions or other improvements to this guide,
please let us know!

126
engine/autoedge.go Normal file

@@ -0,0 +1,126 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package engine
import (
"fmt"
)
// EdgeableRes is the interface a resource must implement to support automatic
// edges. Both the vertices involved in an edge need to implement this for it to
// be able to work.
type EdgeableRes interface {
Res // implement everything in Res but add the additional requirements
// AutoEdgeMeta lets you get or set meta params for the automatic edges
// trait.
AutoEdgeMeta() *AutoEdgeMeta
// SetAutoEdgeMeta lets you set all of the meta params for the automatic
// edges trait in a single call.
SetAutoEdgeMeta(*AutoEdgeMeta)
// UIDs includes all params to make a unique identification of this
// object.
UIDs() []ResUID // most resources only return one
// AutoEdges returns a struct that implements the AutoEdge interface.
// This interface can be used to generate automatic edges to other
// resources.
AutoEdges() (AutoEdge, error)
}
// AutoEdgeMeta provides some parameters specific to automatic edges.
// TODO: currently this only supports disabling the feature per-resource, but in
// the future you could conceivably have some small pattern to control it better
type AutoEdgeMeta struct {
// Disabled specifies that automatic edges should be disabled for this
// resource.
Disabled bool
}
// Cmp compares two AutoEdgeMeta structs and determines if they're equivalent.
func (obj *AutoEdgeMeta) Cmp(aem *AutoEdgeMeta) error {
if obj.Disabled != aem.Disabled {
return fmt.Errorf("values for Disabled are different")
}
return nil
}
// The AutoEdge interface is used to implement the autoedges feature.
type AutoEdge interface {
Next() []ResUID // call to get list of edges to add
Test([]bool) bool // call until false
}
// ResUID is a unique identifier for a resource, namely its name and its kind
// ("type").
type ResUID interface {
fmt.Stringer // String() string
GetName() string
GetKind() string
IFF(ResUID) bool
IsReversed() bool // true means this resource happens before the generator
}
// The BaseUID struct is used to provide a unique resource identifier.
type BaseUID struct {
Name string // name and kind are the values of where this is coming from
Kind string
Reversed *bool // piggyback edge information here
}
// GetName returns the name of the resource UID.
func (obj *BaseUID) GetName() string {
return obj.Name
}
// GetKind returns the kind of the resource UID.
func (obj *BaseUID) GetKind() string {
return obj.Kind
}
// String returns the canonical string representation for a resource UID.
func (obj *BaseUID) String() string {
return fmt.Sprintf("%s[%s]", obj.GetKind(), obj.GetName())
}
// IFF looks at two UID's and if and only if they are equivalent, returns true.
// If they are not equivalent, it returns false. Most resources will want to
// override this method, since it does the important work of actually discerning
// if two resources are identical in function.
func (obj *BaseUID) IFF(uid ResUID) bool {
res, ok := uid.(*BaseUID)
if !ok {
return false
}
return obj.Name == res.Name
}
// IsReversed is part of the ResUID interface, and true means this resource
// happens before the generator.
func (obj *BaseUID) IsReversed() bool {
if obj.Reversed == nil {
panic("programming error!")
}
return *obj.Reversed
}

38
engine/autoedge_test.go Normal file

@@ -0,0 +1,38 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build !root
package engine
import (
"testing"
)
func TestIFF1(t *testing.T) {
uid := &BaseUID{Name: "/tmp/unit-test"}
same := &BaseUID{Name: "/tmp/unit-test"}
diff := &BaseUID{Name: "/tmp/other-file"}
if !uid.IFF(same) {
t.Errorf("basic resource UIDs with the same name should satisfy each other's IFF condition")
}
if uid.IFF(diff) {
t.Errorf("basic resource UIDs with different names should NOT satisfy each other's IFF condition")
}
}

88
engine/autogroup.go Normal file

@@ -0,0 +1,88 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package engine
import (
"fmt"
"github.com/purpleidea/mgmt/pgraph"
)
// GroupableRes is the interface a resource must implement to support automatic
// grouping. Default implementations for most of the methods declared in this
// interface can be obtained for your resource by anonymously adding the
// traits.Groupable struct to your resource implementation.
type GroupableRes interface {
Res // implement everything in Res but add the additional requirements
// AutoGroupMeta lets you get or set meta params for the automatic
// grouping trait.
AutoGroupMeta() *AutoGroupMeta
// SetAutoGroupMeta lets you set all of the meta params for the
// automatic grouping trait in a single call.
SetAutoGroupMeta(*AutoGroupMeta)
// GroupCmp compares two resources and decides if they're suitable for
// grouping. This usually needs to be unique to your resource.
GroupCmp(res GroupableRes) error
// GroupRes groups the resource argument (res) into self.
GroupRes(res GroupableRes) error
// IsGrouped determines if we are grouped.
IsGrouped() bool // am I grouped?
// SetGrouped sets a flag to tell if we are grouped.
SetGrouped(bool)
// GetGroup returns everyone grouped inside me.
GetGroup() []GroupableRes // return everyone grouped inside me
// SetGroup sets the grouped resources into me.
SetGroup([]GroupableRes)
}
// AutoGroupMeta provides some parameters specific to automatic grouping.
// TODO: currently this only supports disabling the feature per-resource, but in
// the future you could conceivably have some small pattern to control it better
type AutoGroupMeta struct {
// Disabled specifies that automatic grouping should be disabled for
// this resource.
Disabled bool
}
// Cmp compares two AutoGroupMeta structs and determines if they're equivalent.
func (obj *AutoGroupMeta) Cmp(agm *AutoGroupMeta) error {
if obj.Disabled != agm.Disabled {
return fmt.Errorf("values for Disabled are different")
}
return nil
}
// AutoGrouper is the required interface to implement an autogrouping algorithm.
type AutoGrouper interface {
// listed in the order these are typically called in...
Name() string // friendly identifier
Init(*pgraph.Graph) error // only call once
VertexNext() (pgraph.Vertex, pgraph.Vertex, error) // mostly algorithmic
VertexCmp(pgraph.Vertex, pgraph.Vertex) error // can we merge these ?
VertexMerge(pgraph.Vertex, pgraph.Vertex) (pgraph.Vertex, error) // vertex merge fn to use
EdgeMerge(pgraph.Edge, pgraph.Edge) pgraph.Edge // edge merge fn to use
VertexTest(bool) (bool, error) // call until false
}

343
engine/cmp.go Normal file

@@ -0,0 +1,343 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package engine
import (
"fmt"
"github.com/purpleidea/mgmt/pgraph"
)
// ResCmp compares two resources by checking multiple aspects. This is the main
// entry point for running all the compare steps on two resources. This code is
// very similar to AdaptCmp.
func ResCmp(r1, r2 Res) error {
if r1.Kind() != r2.Kind() {
return fmt.Errorf("kind differs")
}
if r1.Name() != r2.Name() {
return fmt.Errorf("name differs")
}
if err := r1.Cmp(r2); err != nil {
return err
}
// TODO: do we need to compare other traits/metaparams?
m1 := r1.MetaParams()
m2 := r2.MetaParams()
if (m1 == nil) != (m2 == nil) { // xor
return fmt.Errorf("meta params differ")
}
if m1 != nil && m2 != nil {
if err := m1.Cmp(m2); err != nil {
return err
}
}
r1x, ok1 := r1.(RefreshableRes)
r2x, ok2 := r2.(RefreshableRes)
if ok1 != ok2 {
return fmt.Errorf("refreshable differs") // they must be different (optional)
}
if ok1 && ok2 {
if r1x.Refresh() != r2x.Refresh() {
return fmt.Errorf("refresh differs")
}
}
// compare meta params for resources with auto edges
r1e, ok1 := r1.(EdgeableRes)
r2e, ok2 := r2.(EdgeableRes)
if ok1 != ok2 {
return fmt.Errorf("edgeable differs") // they must be different (optional)
}
if ok1 && ok2 {
if r1e.AutoEdgeMeta().Cmp(r2e.AutoEdgeMeta()) != nil {
return fmt.Errorf("autoedge differs")
}
}
// compare meta params for resources with auto grouping
r1g, ok1 := r1.(GroupableRes)
r2g, ok2 := r2.(GroupableRes)
if ok1 != ok2 {
return fmt.Errorf("groupable differs") // they must be different (optional)
}
if ok1 && ok2 {
if r1g.AutoGroupMeta().Cmp(r2g.AutoGroupMeta()) != nil {
return fmt.Errorf("autogroup differs")
}
// if resources are grouped, are the groups the same?
if i, j := r1g.GetGroup(), r2g.GetGroup(); len(i) != len(j) {
return fmt.Errorf("autogroup groups differ")
} else if len(i) > 0 { // trick the golinter
// Sort works with Res, so convert the lists to that
iRes := []Res{}
for _, r := range i {
res := r.(Res)
iRes = append(iRes, res)
}
jRes := []Res{}
for _, r := range j {
res := r.(Res)
jRes = append(jRes, res)
}
ix, jx := Sort(iRes), Sort(jRes) // now sort :)
for k := range ix {
// compare sub resources
if err := ResCmp(ix[k], jx[k]); err != nil {
return err
}
}
}
}
r1r, ok1 := r1.(RecvableRes)
r2r, ok2 := r2.(RecvableRes)
if ok1 != ok2 {
return fmt.Errorf("recvable differs") // they must be different (optional)
}
if ok1 && ok2 {
v1 := r1r.Recv()
v2 := r2r.Recv()
if (v1 == nil) != (v2 == nil) { // xor
return fmt.Errorf("recv params differ")
}
if v1 != nil && v2 != nil {
// TODO: until we hit this code path, don't allow
// comparing anything that has this set to non-zero
if len(v1) != 0 || len(v2) != 0 {
return fmt.Errorf("recv params exist")
}
}
}
r1s, ok1 := r1.(SendableRes)
r2s, ok2 := r2.(SendableRes)
if ok1 != ok2 {
return fmt.Errorf("sendable differs") // they must be different (optional)
}
if ok1 && ok2 {
s1 := r1s.Sent()
s2 := r2s.Sent()
if (s1 == nil) != (s2 == nil) { // xor
return fmt.Errorf("send params differ")
}
if s1 != nil && s2 != nil {
// TODO: until we hit this code path, don't allow
// adapting anything that has this set to non-nil
return fmt.Errorf("send params exist")
}
}
// compare meta params for resources with reversible traits
r1v, ok1 := r1.(ReversibleRes)
r2v, ok2 := r2.(ReversibleRes)
if ok1 != ok2 {
return fmt.Errorf("reversible differs") // they must be different (optional)
}
if ok1 && ok2 {
if r1v.ReversibleMeta().Cmp(r2v.ReversibleMeta()) != nil {
return fmt.Errorf("reversible differs")
}
}
return nil
}
// AdaptCmp compares two resources by checking multiple aspects. This is the
// main entry point for running all the compatible compare steps on two
// resources. This code is very similar to ResCmp.
func AdaptCmp(r1, r2 CompatibleRes) error {
if r1.Kind() != r2.Kind() {
return fmt.Errorf("kind differs")
}
if r1.Name() != r2.Name() {
return fmt.Errorf("name differs")
}
// run `Adapts` instead of `Cmp`
if err := r1.Adapts(r2); err != nil {
return err
}
// TODO: do we need to compare other traits/metaparams?
m1 := r1.MetaParams()
m2 := r2.MetaParams()
if (m1 == nil) != (m2 == nil) { // xor
return fmt.Errorf("meta params differ")
}
if m1 != nil && m2 != nil {
if err := m1.Cmp(m2); err != nil {
return err
}
}
// we don't need to compare refresh, since those can always be merged...
// compare meta params for resources with auto edges
r1e, ok1 := r1.(EdgeableRes)
r2e, ok2 := r2.(EdgeableRes)
if ok1 != ok2 {
return fmt.Errorf("edgeable differs") // they must be different (optional)
}
if ok1 && ok2 {
if r1e.AutoEdgeMeta().Cmp(r2e.AutoEdgeMeta()) != nil {
return fmt.Errorf("autoedge differs")
}
}
// compare meta params for resources with auto grouping
r1g, ok1 := r1.(GroupableRes)
r2g, ok2 := r2.(GroupableRes)
if ok1 != ok2 {
return fmt.Errorf("groupable differs") // they must be different (optional)
}
if ok1 && ok2 {
if r1g.AutoGroupMeta().Cmp(r2g.AutoGroupMeta()) != nil {
return fmt.Errorf("autogroup differs")
}
// if resources are grouped, are the groups the same?
if i, j := r1g.GetGroup(), r2g.GetGroup(); len(i) != len(j) {
return fmt.Errorf("autogroup groups differ")
} else if len(i) > 0 { // trick the golinter
// Sort works with Res, so convert the lists to that
iRes := []Res{}
for _, r := range i {
res := r.(Res)
iRes = append(iRes, res)
}
jRes := []Res{}
for _, r := range j {
res := r.(Res)
jRes = append(jRes, res)
}
ix, jx := Sort(iRes), Sort(jRes) // now sort :)
for k := range ix {
// compare sub resources
// TODO: should we use AdaptCmp here?
// TODO: how would they run `Merge` ? (we don't)
// this code path will probably not run, because
// it is called in the lang before autogrouping!
if err := ResCmp(ix[k], jx[k]); err != nil {
return err
}
}
}
}
r1r, ok1 := r1.(RecvableRes)
r2r, ok2 := r2.(RecvableRes)
if ok1 != ok2 {
return fmt.Errorf("recvable differs") // they must be different (optional)
}
if ok1 && ok2 {
v1 := r1r.Recv()
v2 := r2r.Recv()
if (v1 == nil) != (v2 == nil) { // xor
return fmt.Errorf("recv params differ")
}
if v1 != nil && v2 != nil {
// TODO: until we hit this code path, don't allow
// adapting anything that has this set to non-zero
if len(v1) != 0 || len(v2) != 0 {
return fmt.Errorf("recv params exist")
}
}
}
r1s, ok1 := r1.(SendableRes)
r2s, ok2 := r2.(SendableRes)
if ok1 != ok2 {
return fmt.Errorf("sendable differs") // they must be different (optional)
}
if ok1 && ok2 {
s1 := r1s.Sent()
s2 := r2s.Sent()
if (s1 == nil) != (s2 == nil) { // xor
return fmt.Errorf("send params differ")
}
if s1 != nil && s2 != nil {
// TODO: until we hit this code path, don't allow
// adapting anything that has this set to non-nil
return fmt.Errorf("send params exist")
}
}
// compare meta params for resources with reversible traits
r1v, ok1 := r1.(ReversibleRes)
r2v, ok2 := r2.(ReversibleRes)
if ok1 != ok2 {
return fmt.Errorf("reversible differs") // they must be different (optional)
}
if ok1 && ok2 {
if r1v.ReversibleMeta().Cmp(r2v.ReversibleMeta()) != nil {
return fmt.Errorf("reversible differs")
}
}
return nil
}
// VertexCmpFn returns whether two vertices are equivalent. It errors if they
// can't be compared because one is not a resource. This returns true if equal.
// TODO: shouldn't the first argument be an `error` instead?
func VertexCmpFn(v1, v2 pgraph.Vertex) (bool, error) {
r1, ok := v1.(Res)
if !ok {
return false, fmt.Errorf("v1 is not a Res")
}
r2, ok := v2.(Res)
if !ok {
return false, fmt.Errorf("v2 is not a Res")
}
if ResCmp(r1, r2) != nil {
return false, nil
}
return true, nil
}
// EdgeCmpFn returns whether two edges are equivalent. It errors if they can't be
// compared because one is not an edge. This returns true if equal.
// TODO: shouldn't the first argument be an `error` instead?
func EdgeCmpFn(e1, e2 pgraph.Edge) (bool, error) {
edge1, ok := e1.(*Edge)
if !ok {
return false, fmt.Errorf("e1 is not an Edge")
}
edge2, ok := e2.(*Edge)
if !ok {
return false, fmt.Errorf("e2 is not an Edge")
}
return edge1.Cmp(edge2) == nil, nil
}

170
engine/copy.go Normal file

@@ -0,0 +1,170 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package engine
import (
"fmt"
"github.com/purpleidea/mgmt/util/errwrap"
)
// ResCopy copies a resource. This is the main entry point for copying a
// resource since it does all the common engine-level copying as well.
func ResCopy(r CopyableRes) (CopyableRes, error) {
res := r.Copy()
res.SetKind(r.Kind())
res.SetName(r.Name())
if x, ok := r.(MetaRes); ok {
dst, ok := res.(MetaRes)
if !ok {
// programming error
panic("meta interfaces are illogical")
}
dst.SetMetaParams(x.MetaParams().Copy()) // copy b/c we have it
}
if x, ok := r.(RefreshableRes); ok {
dst, ok := res.(RefreshableRes)
if !ok {
// programming error
panic("refresh interfaces are illogical")
}
dst.SetRefresh(x.Refresh()) // no need to copy atm
}
// copy meta params for resources with auto edges
if x, ok := r.(EdgeableRes); ok {
dst, ok := res.(EdgeableRes)
if !ok {
// programming error
panic("autoedge interfaces are illogical")
}
dst.SetAutoEdgeMeta(x.AutoEdgeMeta()) // no need to copy atm
}
// copy meta params for resources with auto grouping
if x, ok := r.(GroupableRes); ok {
dst, ok := res.(GroupableRes)
if !ok {
// programming error
panic("autogroup interfaces are illogical")
}
dst.SetAutoGroupMeta(x.AutoGroupMeta()) // no need to copy atm
grouped := []GroupableRes{}
for _, g := range x.GetGroup() {
g0, ok := g.(CopyableRes)
if !ok {
return nil, fmt.Errorf("resource wasn't copyable")
}
g1, err := ResCopy(g0)
if err != nil {
return nil, err
}
g2, ok := g1.(GroupableRes)
if !ok {
return nil, fmt.Errorf("resource wasn't groupable")
}
grouped = append(grouped, g2)
}
dst.SetGroup(grouped)
}
if x, ok := r.(RecvableRes); ok {
dst, ok := res.(RecvableRes)
if !ok {
// programming error
panic("recv interfaces are illogical")
}
dst.SetRecv(x.Recv()) // no need to copy atm
}
if x, ok := r.(SendableRes); ok {
dst, ok := res.(SendableRes)
if !ok {
// programming error
panic("send interfaces are illogical")
}
if err := dst.Send(x.Sent()); err != nil { // no need to copy atm
return nil, errwrap.Wrapf(err, "can't copy send")
}
}
// copy meta params for resources with reversible traits
if x, ok := r.(ReversibleRes); ok {
dst, ok := res.(ReversibleRes)
if !ok {
// programming error
panic("reversible interfaces are illogical")
}
dst.SetReversibleMeta(x.ReversibleMeta()) // no need to copy atm
}
return res, nil
}
// ResMerge merges a set of resources that are compatible with each other. This
// is the main entry point for the merging. They must each successfully be able
// to run AdaptCmp without error.
func ResMerge(r ...CompatibleRes) (CompatibleRes, error) {
if len(r) == 0 {
return nil, fmt.Errorf("zero resources given")
}
if len(r) == 1 {
return r[0], nil
}
if len(r) > 2 {
r0 := r[0]
r1, err := ResMerge(r[1:]...)
if err != nil {
return nil, err
}
return ResMerge(r0, r1)
}
// now we have r[0] and r[1] to merge here...
r0 := r[0]
r1 := r[1]
if err := AdaptCmp(r0, r1); err != nil {
return nil, err
}
res, err := r0.Merge(r1) // resource method of this interface
if err != nil {
return nil, err
}
// meta should have come over in the copy
if x, ok := res.(RefreshableRes); ok {
x0, ok0 := r0.(RefreshableRes)
x1, ok1 := r1.(RefreshableRes)
if !ok0 || !ok1 {
// programming error
panic("refresh interfaces are illogical")
}
x.SetRefresh(x0.Refresh() || x1.Refresh()) // true if either is!
}
// the other traits and metaparams can't be merged easily... so we don't
// merge them, and if they were present and differed, and weren't copied
// in the ResCopy method, then we should have errored above in AdaptCmp!
return res, nil
}

60
engine/edge.go Normal file

@@ -0,0 +1,60 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package engine
import (
"fmt"
)
// Edge is a struct that represents a graph's edge.
type Edge struct {
Name string
Notify bool // should we send a refresh notification along this edge?
refresh bool // is there a notify pending for the dest vertex ?
}
// String is a required method of the Edge interface that we must fulfill.
func (obj *Edge) String() string {
return obj.Name
}
// Cmp compares this edge to another. It returns nil if they are equivalent.
func (obj *Edge) Cmp(edge *Edge) error {
if obj.Name != edge.Name {
return fmt.Errorf("edge names differ")
}
if obj.Notify != edge.Notify {
return fmt.Errorf("notify values differ")
}
// FIXME: should we compare this as well?
//if obj.refresh != edge.refresh {
// return fmt.Errorf("refresh values differ")
//}
return nil
}
// Refresh returns the pending refresh status of this edge.
func (obj *Edge) Refresh() bool {
return obj.refresh
}
// SetRefresh sets the pending refresh status of this edge.
func (obj *Edge) SetRefresh(b bool) {
obj.refresh = b
}

29
engine/error.go Normal file

@@ -0,0 +1,29 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package engine
// Error is a constant error type that implements error.
type Error string
// Error fulfills the error interface of this type.
func (e Error) Error() string { return string(e) }
const (
// ErrClosed means we couldn't complete a task because we had closed.
ErrClosed = Error("closed")
)
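Because `Error` is a constant string type, callers can compare against sentinel values like `ErrClosed` with a plain equality check (or `errors.Is` when wrapping is involved). A short usage sketch:
```golang
package example

import "github.com/purpleidea/mgmt/engine"

// isClosed reports whether an error is the engine's closed sentinel.
func isClosed(err error) bool {
	return err == engine.ErrClosed
}
```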

61
engine/fs.go Normal file

@@ -0,0 +1,61 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package engine
import (
"os"
"github.com/spf13/afero"
)
// from the ioutil package:
// NopCloser(r io.Reader) io.ReadCloser // not implemented here
// ReadAll(r io.Reader) ([]byte, error)
// ReadDir(dirname string) ([]os.FileInfo, error)
// ReadFile(filename string) ([]byte, error)
// TempDir(dir, prefix string) (name string, err error)
// TempFile(dir, prefix string) (f *os.File, err error) // slightly different here
// WriteFile(filename string, data []byte, perm os.FileMode) error
// Fs is an interface that represents this file system API that we support.
// TODO: this should be in the gapi package or elsewhere.
type Fs interface {
//fmt.Stringer // TODO: add this method?
afero.Fs // TODO: why doesn't this interface exist in the os pkg?
URI() string // returns the URI for this file system
//DirExists(path string) (bool, error)
//Exists(path string) (bool, error)
//FileContainsAnyBytes(filename string, subslices [][]byte) (bool, error)
//FileContainsBytes(filename string, subslice []byte) (bool, error)
//FullBaseFsPath(basePathFs *BasePathFs, relativePath string) string
//GetTempDir(subPath string) string
//IsDir(path string) (bool, error)
//IsEmpty(path string) (bool, error)
//NeuterAccents(s string) string
//ReadAll(r io.Reader) ([]byte, error) // not needed, same as ioutil
ReadDir(dirname string) ([]os.FileInfo, error)
ReadFile(filename string) ([]byte, error)
//SafeWriteReader(path string, r io.Reader) (err error)
TempDir(dir, prefix string) (name string, err error)
TempFile(dir, prefix string) (f afero.File, err error) // slightly different from upstream
//UnicodeSanitize(s string) string
//Walk(root string, walkFn filepath.WalkFunc) error
WriteFile(filename string, data []byte, perm os.FileMode) error
//WriteReader(path string, r io.Reader) (err error)
}

563
engine/graph/actions.go Normal file

@@ -0,0 +1,563 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package graph
import (
"fmt"
"strings"
"sync"
"time"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util/errwrap"
"golang.org/x/time/rate"
)
// OKTimestamp returns true if this vertex can run right now.
func (obj *Engine) OKTimestamp(vertex pgraph.Vertex) bool {
return len(obj.BadTimestamps(vertex)) == 0
}
// BadTimestamps returns the list of vertices that are causing our timestamp to
// be bad.
func (obj *Engine) BadTimestamps(vertex pgraph.Vertex) []pgraph.Vertex {
vs := []pgraph.Vertex{}
ts := obj.state[vertex].timestamp
// these are all the vertices pointing TO vertex, eg: ??? -> vertex
for _, v := range obj.graph.IncomingGraphVertices(vertex) {
// If the vertex has a greater timestamp than any prerequisite,
// then we can't run right now. If they're equal (eg: initially
// with a value of 0) then we also can't run because we should
// let our pre-requisites go first.
t := obj.state[v].timestamp
if obj.Debug {
obj.Logf("OKTimestamp: %d >= %d (%s): !%t", ts, t, v.String(), ts >= t)
}
if ts >= t {
//return false
vs = append(vs, v)
}
}
return vs // formerly "true" if empty
}
// Process is the primary function to execute a particular vertex in the graph.
func (obj *Engine) Process(vertex pgraph.Vertex) error {
res, isRes := vertex.(engine.Res)
if !isRes {
return fmt.Errorf("vertex is not a Res")
}
// backpoke! (can be async)
if vs := obj.BadTimestamps(vertex); len(vs) > 0 {
// back poke in parallel (sync b/c of waitgroup)
wg := &sync.WaitGroup{}
for _, v := range obj.graph.IncomingGraphVertices(vertex) {
if !pgraph.VertexContains(v, vs) { // only poke what's needed
continue
}
// doesn't really need to be in parallel, but we can...
wg.Add(1)
go func(vv pgraph.Vertex) {
defer wg.Done()
obj.state[vv].Poke() // async
}(v)
}
wg.Wait()
return nil // can't continue until timestamp is in sequence
}
// semaphores!
// These shouldn't ever block an exit, since the graph should eventually
// converge, causing them to unlock. More interestingly, since they
// run in a DAG alphabetically, there is no way to permanently deadlock,
// assuming that resources individually don't ever block from finishing!
// The exception is that semaphores with a zero count will always block!
// TODO: Add a close mechanism to close/unblock zero count semaphores...
semas := res.MetaParams().Sema
if obj.Debug && len(semas) > 0 {
obj.Logf("%s: Sema: P(%s)", res, strings.Join(semas, ", "))
}
if err := obj.semaLock(semas); err != nil { // lock
// NOTE: in practice, this might not ever be truly necessary...
return fmt.Errorf("shutdown of semaphores")
}
defer obj.semaUnlock(semas) // unlock
if obj.Debug && len(semas) > 0 {
defer obj.Logf("%s: Sema: V(%s)", res, strings.Join(semas, ", "))
}
// sendrecv!
// connect any senders to receivers and detect if values changed
if res, ok := vertex.(engine.RecvableRes); ok {
if updated, err := obj.SendRecv(res); err != nil {
return errwrap.Wrapf(err, "could not SendRecv")
} else if len(updated) > 0 {
for _, changed := range updated {
if changed { // at least one was updated
// invalidate cache, mark as dirty
obj.state[vertex].tuid.StopTimer()
obj.state[vertex].isStateOK = false
break
}
}
// re-validate after we change any values
if err := engine.Validate(res); err != nil {
return errwrap.Wrapf(err, "failed Validate after SendRecv")
}
}
}
var ok = true
var applied = false // did we run an apply?
var noop = res.MetaParams().Noop // lookup the noop value
var refresh bool
var checkOK bool
var err error
// lookup the refresh (notification) variable
refresh = obj.RefreshPending(vertex) // do i need to perform a refresh?
refreshableRes, isRefreshableRes := vertex.(engine.RefreshableRes)
if isRefreshableRes {
refreshableRes.SetRefresh(refresh) // tell the resource
}
// Check cached state, to skip CheckApply, but can't skip if refreshing!
// If the resource doesn't implement refresh, skip the refresh test.
// FIXME: if desired, check that we pass through refresh notifications!
if (!refresh || !isRefreshableRes) && obj.state[vertex].isStateOK {
checkOK, err = true, nil
} else if noop && (refresh && isRefreshableRes) { // had a refresh to do w/ noop!
checkOK, err = false, nil // therefore the state is wrong
// run the CheckApply!
} else {
obj.Logf("%s: CheckApply(%t)", res, !noop)
// if this fails, don't UpdateTimestamp()
checkOK, err = res.CheckApply(!noop)
obj.Logf("%s: CheckApply(%t): Return(%t, %+v)", res, !noop, checkOK, err)
}
if checkOK && err != nil { // should never return this way
return fmt.Errorf("%s: resource programming error: CheckApply(%t): %t, %+v", res, !noop, checkOK, err)
}
if !checkOK { // something changed, restart timer
obj.state[vertex].cuid.ResetTimer() // activity!
if obj.Debug {
obj.Logf("%s: converger: reset timer", res)
}
}
// if CheckApply ran without noop and without error, state should be good
if !noop && err == nil { // aka !noop || checkOK
obj.state[vertex].tuid.StartTimer()
obj.state[vertex].isStateOK = true // reset
if refresh {
obj.SetUpstreamRefresh(vertex, false) // refresh happened, clear the request
if isRefreshableRes {
refreshableRes.SetRefresh(false)
}
}
}
if !checkOK { // if state *was* not ok, we had to have apply'ed
if err != nil { // error during check or apply
ok = false
} else {
applied = true
}
}
// when noop is true we always want to update timestamp
if noop && err == nil {
ok = true
}
if ok {
// did we actually do work?
activity := applied
if noop {
activity = false // no we didn't do work...
}
if activity { // add refresh flag to downstream edges...
obj.SetDownstreamRefresh(vertex, true)
}
// poke! (should (must?) be sync)
wg := &sync.WaitGroup{}
// update this timestamp *before* we poke or the poked
// nodes might fail due to having a too old timestamp!
obj.state[vertex].timestamp = time.Now().UnixNano() // update timestamp
for _, v := range obj.graph.OutgoingGraphVertices(vertex) {
if !obj.OKTimestamp(v) {
// there is at least another one that will poke this...
continue
}
// If we're pausing (or exiting) then we can skip poking
// so that the graph doesn't go on running forever until
// it's completely done. This is an optional feature and
// we can select it via ^C on user exit or via the GAPI.
if obj.fastPause {
obj.Logf("%s: fast pausing, poke skipped", res)
continue
}
// poke each vertex individually, in parallel...
wg.Add(1)
go func(vv pgraph.Vertex) {
defer wg.Done()
obj.state[vv].Poke()
}(v)
}
wg.Wait()
}
return errwrap.Wrapf(err, "error during Process()")
}
// Worker is the common run frontend of the vertex. It handles all of the retry
// and retry delay common code, and ultimately returns the final status of this
// vertex execution. This function cannot be "re-run" for the same vertex. The
// retry mechanism stuff happens inside of this. To actually "re-run" you need
// to remove the vertex and build a new one. The engine guarantees that we do
// not allow CheckApply to run while we are paused. That is enforced here.
func (obj *Engine) Worker(vertex pgraph.Vertex) error {
res, isRes := vertex.(engine.Res)
if !isRes {
return fmt.Errorf("vertex is not a resource")
}
// bonus safety check
if res.MetaParams().Burst == 0 && !(res.MetaParams().Limit == rate.Inf) { // blocked
return fmt.Errorf("permanently limited (rate != Inf, burst = 0)")
}
//defer close(obj.state[vertex].stopped) // done signal
obj.state[vertex].cuid = obj.Converger.Register()
obj.state[vertex].tuid = obj.Converger.Register()
// must wait for all users of the cuid to finish *before* we unregister!
// as a result, this defer happens *before* the below wait group Wait...
defer obj.state[vertex].cuid.Unregister()
defer obj.state[vertex].tuid.Unregister()
defer obj.state[vertex].wg.Wait() // this Worker is the last to exit!
obj.state[vertex].wg.Add(1)
go func() {
defer obj.state[vertex].wg.Done()
defer close(obj.state[vertex].eventsChan) // we close this on behalf of res
// This is a close reverse-multiplexer. If any of the channels
// close, then it will cause the doneChan to close. That way,
// multiple different folks can send a close signal, without
// ever worrying about duplicate channel close panics.
obj.state[vertex].wg.Add(1)
go func() {
defer obj.state[vertex].wg.Done()
// reverse-multiplexer: any close, causes *the* close!
select {
case <-obj.state[vertex].processDone:
case <-obj.state[vertex].watchDone:
case <-obj.state[vertex].limitDone:
case <-obj.state[vertex].removeDone:
case <-obj.state[vertex].eventsDone:
}
// the main "done" signal gets activated here!
close(obj.state[vertex].doneChan)
}()
var err error
var retry = res.MetaParams().Retry // lookup the retry value
var delay uint64
for { // retry loop
// a retry-delay was requested, wait, but don't block events!
if delay > 0 {
errDelayExpired := engine.Error("delay exit")
err = func() error { // slim watch main loop
timer := time.NewTimer(time.Duration(delay) * time.Millisecond)
defer obj.state[vertex].init.Logf("the Watch delay expired!")
defer timer.Stop() // it's nice to cleanup
for {
select {
case <-timer.C: // the wait is over
return errDelayExpired // special
case <-obj.state[vertex].init.Done:
return nil
}
}
}()
if err == errDelayExpired {
delay = 0 // reset
continue
}
} else if interval := res.MetaParams().Poll; interval > 0 { // poll instead of watching :(
obj.state[vertex].cuid.StartTimer()
err = obj.state[vertex].poll(interval)
obj.state[vertex].cuid.StopTimer() // clean up nicely
} else {
obj.state[vertex].cuid.StartTimer()
obj.Logf("Watch(%s)", vertex)
err = res.Watch() // run the watch normally
obj.Logf("Watch(%s): Exited(%+v)", vertex, err)
obj.state[vertex].cuid.StopTimer() // clean up nicely
}
if err == nil { // || err == engine.ErrClosed
return // exited cleanly, we're done
}
// we've got an error...
delay = res.MetaParams().Delay
if retry < 0 { // infinite retries
continue
}
if retry > 0 { // don't decrement past 0
retry--
obj.state[vertex].init.Logf("retrying Watch after %.4f seconds (%d left)", float64(delay)/1000, retry)
continue
}
//if retry == 0 { // optional
// err = errwrap.Wrapf(err, "permanent watch error")
//}
break // break out of this and send the error
} // for retry loop
// this section sends an error...
// If the CheckApply loop exits and THEN the Watch fails with an
// error, then we'd be stuck here if exit signal didn't unblock!
select {
case obj.state[vertex].eventsChan <- errwrap.Wrapf(err, "watch failed"):
// send
}
}()
// If this exits cleanly, we must unblock the reverse-multiplexer.
// I think this additional close is unnecessary, but it's not harmful.
defer close(obj.state[vertex].eventsDone) // causes doneChan to close
limiter := rate.NewLimiter(res.MetaParams().Limit, res.MetaParams().Burst)
var reserv *rate.Reservation
var reterr error
var failed bool // has Process permanently failed?
Loop:
for { // process loop
select {
case err, ok := <-obj.state[vertex].eventsChan: // read from watch channel
if !ok {
return reterr // we only return when chan closes
}
// If the Watch method exits with an error, then this
// channel will get that error propagated to it, which
// we then save so we can return it to the caller of us.
if err != nil {
failed = true
close(obj.state[vertex].watchDone) // causes doneChan to close
reterr = errwrap.Append(reterr, err) // permanent failure
continue
}
if obj.Debug {
obj.Logf("event received")
}
reserv = limiter.ReserveN(time.Now(), 1) // one event
// reserv.OK() seems to always be true here!
case _, ok := <-obj.state[vertex].pokeChan: // read from buffered poke channel
if !ok { // we never close it
panic("unexpected close of poke channel")
}
if obj.Debug {
obj.Logf("poke received")
}
reserv = nil // we didn't receive a real event here...
}
if failed { // don't Process anymore if we've already failed...
continue Loop
}
// drop redundant pokes
for len(obj.state[vertex].pokeChan) > 0 {
select {
case <-obj.state[vertex].pokeChan:
default:
// race, someone else read one!
}
}
// pause if one was requested...
select {
case <-obj.state[vertex].pauseSignal: // channel closes
// NOTE: If we allowed a doneChan below to let us out
// of the resumeSignal wait, then we could loop around
// and run this again, causing a panic. Instead of this
// being made safe with a sync.Once, we instead run a
// Resume() call inside of the vertexRemoveFn function,
// which should unblock it when we're going to need to.
obj.state[vertex].pausedAck.Ack() // send ack
// we are paused now, and waiting for resume or exit...
select {
case <-obj.state[vertex].resumeSignal: // channel closes
// resumed!
// pass through to allow a Process to try to run
// TODO: consider adding this fast pause here...
//if obj.fastPause {
// obj.Logf("fast pausing on resume")
// continue
//}
}
default:
// no pause requested, keep going...
}
if failed { // don't Process anymore if we've already failed...
continue Loop
}
// limit delay
d := time.Duration(0)
if reserv != nil {
d = reserv.DelayFrom(time.Now())
}
if reserv != nil && d > 0 { // delay
obj.state[vertex].init.Logf("limited (rate: %v/sec, burst: %d, next: %v)", res.MetaParams().Limit, res.MetaParams().Burst, d)
timer := time.NewTimer(time.Duration(d) * time.Millisecond)
LimitWait:
for {
select {
case <-timer.C: // the wait is over
break LimitWait
// consume other events while we're waiting...
case e, ok := <-obj.state[vertex].eventsChan: // read from watch channel
if !ok {
return reterr // we only return when chan closes
}
if e != nil {
failed = true
close(obj.state[vertex].limitDone) // causes doneChan to close
reterr = errwrap.Append(reterr, e) // permanent failure
break LimitWait
}
if obj.Debug {
obj.Logf("event received in limit")
}
// TODO: does this get added in properly?
limiter.ReserveN(time.Now(), 1) // one event
}
}
timer.Stop() // it's nice to cleanup
obj.state[vertex].init.Logf("rate limiting expired!")
}
if failed { // don't Process anymore if we've already failed...
continue Loop
}
// end of limit delay
// retry...
var err error
var retry = res.MetaParams().Retry // lookup the retry value
var delay uint64
RetryLoop:
for { // retry loop
if delay > 0 {
timer := time.NewTimer(time.Duration(delay) * time.Millisecond)
RetryWait:
for {
select {
case <-timer.C: // the wait is over
break RetryWait
// consume other events while we're waiting...
case e, ok := <-obj.state[vertex].eventsChan: // read from watch channel
if !ok {
return reterr // we only return when chan closes
}
if e != nil {
failed = true
close(obj.state[vertex].limitDone) // causes doneChan to close
reterr = errwrap.Append(reterr, e) // permanent failure
break RetryWait
}
if obj.Debug {
obj.Logf("event received in retry")
}
// TODO: does this get added in properly?
limiter.ReserveN(time.Now(), 1) // one event
}
}
timer.Stop() // it's nice to cleanup
delay = 0 // reset
obj.state[vertex].init.Logf("the CheckApply delay expired!")
}
if failed { // don't Process anymore if we've already failed...
continue Loop
}
if obj.Debug {
obj.Logf("Process(%s)", vertex)
}
err = obj.Process(vertex)
if obj.Debug {
obj.Logf("Process(%s): Return(%+v)", vertex, err)
}
if err == nil {
break RetryLoop
}
// we've got an error...
delay = res.MetaParams().Delay
if retry < 0 { // infinite retries
continue
}
if retry > 0 { // don't decrement past 0
retry--
obj.state[vertex].init.Logf("retrying CheckApply after %.4f seconds (%d left)", float64(delay)/1000, retry)
continue
}
//if retry == 0 { // optional
// err = errwrap.Wrapf(err, "permanent process error")
//}
// It is important that we shutdown the Watch loop if
// this dies. If Process fails permanently, we ask it
// to exit right here... (It happens when we loop...)
failed = true
close(obj.state[vertex].processDone) // causes doneChan to close
reterr = errwrap.Append(reterr, err) // permanent failure
continue
} // retry loop
// When this Process loop exits, it's because something has
// caused Watch() to shutdown (even if it's our permanent
// failure from Process), which caused this channel to close.
// One or more exit signals are possible, and more than one can
// happen simultaneously.
} // process loop
//return nil // unreachable
}
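The "close reverse-multiplexer" described in the comments above is a reusable pattern: several independent shutdown paths each close their own channel, and whichever closes first triggers the single done signal, so no channel is ever closed twice. A stand-alone, hypothetical sketch:
```golang
package example

// anyDone closes the returned channel as soon as any of the given channels
// closes. Each caller owns exactly one input channel, so there is never a
// risk of a duplicate close panic on a shared channel.
func anyDone(processDone, watchDone, eventsDone <-chan struct{}) <-chan struct{} {
	done := make(chan struct{})
	go func() {
		select { // reverse-multiplexer: any close causes *the* close
		case <-processDone:
		case <-watchDone:
		case <-eventsDone:
		}
		close(done) // the main "done" signal gets activated here
	}()
	return done
}
```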

30
engine/graph/autoedge.go Normal file

@@ -0,0 +1,30 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package graph
import (
"github.com/purpleidea/mgmt/engine/graph/autoedge"
)
// AutoEdge adds the automatic edges to the graph.
func (obj *Engine) AutoEdge() error {
logf := func(format string, v ...interface{}) {
obj.Logf("autoedge: "+format, v...)
}
return autoedge.AutoEdge(obj.nextGraph, obj.Debug, logf)
}

156
engine/graph/autoedge/autoedge.go Normal file

@@ -0,0 +1,156 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package autoedge
import (
"fmt"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util/errwrap"
)
// AutoEdge adds the automatic edges to the graph.
func AutoEdge(graph *pgraph.Graph, debug bool, logf func(format string, v ...interface{})) error {
logf("adding autoedges...")
// initially get all of the autoedges to seek out all possible errors
var err error
autoEdgeObjMap := make(map[engine.EdgeableRes]engine.AutoEdge)
sorted := []engine.EdgeableRes{}
for _, v := range graph.VerticesSorted() {
res, ok := v.(engine.EdgeableRes)
if !ok {
continue
}
if res.AutoEdgeMeta().Disabled { // skip if this res is disabled
continue
}
sorted = append(sorted, res)
}
for _, res := range sorted { // for each vertex's autoedges
autoEdgeObj, e := res.AutoEdges()
if e != nil {
err = errwrap.Append(err, e) // collect all errors
continue
}
if autoEdgeObj == nil {
logf("no auto edges were found for: %s", res)
continue // next vertex
}
autoEdgeObjMap[res] = autoEdgeObj // save for next loop
}
if err != nil {
return errwrap.Wrapf(err, "the auto edges had errors")
}
// now that we're guaranteed error free, we can modify the graph safely
for _, res := range sorted { // stable sort order for determinism in logs
autoEdgeObj, exists := autoEdgeObjMap[res]
if !exists {
continue
}
for { // while the autoEdgeObj has more uids to add...
uids := autoEdgeObj.Next() // get some!
if uids == nil {
logf("the auto edge list is empty for: %s", res)
break // inner loop
}
if debug {
logf("autoedge: UIDS:")
for i, u := range uids {
logf("autoedge: UID%d: %v", i, u)
}
}
// match and add edges
result := addEdgesByMatchingUIDS(res, uids, graph, debug, logf)
// report back, and find out if we should continue
if !autoEdgeObj.Test(result) {
break
}
}
}
// It would be great to ensure we didn't add any graph cycles here, but
// instead of checking now, we'll move the check into the main loop.
return nil
}
// addEdgesByMatchingUIDS adds edges to the vertex in a graph based on if it
// matches a uid list.
func addEdgesByMatchingUIDS(res engine.EdgeableRes, uids []engine.ResUID, graph *pgraph.Graph, debug bool, logf func(format string, v ...interface{})) []bool {
// search for edges and see what matches!
var result []bool
// loop through each uid, and see if it matches any vertex
for _, uid := range uids {
var found = false
// uid is a ResUID object
for _, v := range graph.Vertices() { // search
r, ok := v.(engine.EdgeableRes)
if !ok {
continue
}
if r.AutoEdgeMeta().Disabled { // skip if this res is disabled
continue
}
if res == r { // skip self
continue
}
if debug {
logf("autoedge: Match: %s with UID: %s", r, uid)
}
// we must match to an effective UID for the resource,
// that is to say, the name value of a res is a helpful
// handle, but it is not necessarily a unique identity!
// remember, resources can return multiple UID's each!
if UIDExistsInUIDs(uid, r.UIDs()) {
// add edge from: r -> res
if uid.IsReversed() {
txt := fmt.Sprintf("%s -> %s (autoedge)", r, res)
logf("autoedge: adding: %s", txt)
edge := &engine.Edge{Name: txt}
graph.AddEdge(r, res, edge)
} else { // edges go the "normal" way, eg: pkg resource
txt := fmt.Sprintf("%s -> %s (autoedge)", res, r)
logf("autoedge: adding: %s", txt)
edge := &engine.Edge{Name: txt}
graph.AddEdge(res, r, edge)
}
found = true
break
}
}
result = append(result, found)
}
return result
}
// UIDExistsInUIDs wraps the IFF method when used with a list of UIDs.
func UIDExistsInUIDs(uid engine.ResUID, uids []engine.ResUID) bool {
for _, u := range uids {
if uid.IFF(u) {
return true
}
}
return false
}
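
Below is a minimal, hypothetical sketch (not part of this commit) of how the AutoEdge entry point above might be driven by caller code. The import path is assumed from the package layout in this change set, and the graph is left empty, which makes the call a harmless no-op apart from its log line.

package main

import (
	"log"

	"github.com/purpleidea/mgmt/engine/graph/autoedge"
	"github.com/purpleidea/mgmt/pgraph"
)

func main() {
	// A real caller would populate this graph with resources implementing
	// engine.EdgeableRes before asking for automatic edges.
	g, err := pgraph.NewGraph("example")
	if err != nil {
		log.Fatalf("could not create graph: %+v", err)
	}
	logf := func(format string, v ...interface{}) {
		log.Printf("autoedge: "+format, v...)
	}
	if err := autoedge.AutoEdge(g, false, logf); err != nil {
		log.Fatalf("autoedge failed: %+v", err)
	}
}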

engine/graph/autogroup.go (new file, 140 lines)
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package graph
import (
"fmt"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/graph/autogroup"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util"
"github.com/purpleidea/mgmt/util/errwrap"
)
// AutoGroup runs the auto grouping on the loaded graph.
func (obj *Engine) AutoGroup(ag engine.AutoGrouper) error {
if obj.nextGraph == nil {
return fmt.Errorf("there is no active graph to autogroup")
}
logf := func(format string, v ...interface{}) {
obj.Logf("autogroup: "+format, v...)
}
// wrap ag with our own vertexCmp, vertexMerge and edgeMerge
wrapped := &wrappedGrouper{
AutoGrouper: ag, // pass in the existing autogrouper
}
if err := autogroup.AutoGroup(wrapped, obj.nextGraph, obj.Debug, logf); err != nil {
return errwrap.Wrapf(err, "autogrouping failed")
}
return nil
}
// wrappedGrouper is an autogrouper which adds our own Cmp and Merge functions
// on top of the desired AutoGrouper that was specified.
type wrappedGrouper struct {
engine.AutoGrouper // anonymous interface
}
func (obj *wrappedGrouper) Name() string {
return fmt.Sprintf("wrappedGrouper: %s", obj.AutoGrouper.Name())
}
func (obj *wrappedGrouper) VertexCmp(v1, v2 pgraph.Vertex) error {
// call existing vertexCmp first
if err := obj.AutoGrouper.VertexCmp(v1, v2); err != nil {
return err
}
r1, ok := v1.(engine.GroupableRes)
if !ok {
return fmt.Errorf("v1 is not a GroupableRes")
}
r2, ok := v2.(engine.GroupableRes)
if !ok {
return fmt.Errorf("v2 is not a GroupableRes")
}
// Some resources of different kinds can now group together!
//if r1.Kind() != r2.Kind() { // we must group similar kinds
// return fmt.Errorf("the two resources aren't the same kind")
//}
// someone doesn't want to group!
if r1.AutoGroupMeta().Disabled || r2.AutoGroupMeta().Disabled {
return fmt.Errorf("one of the autogroup flags is false")
}
if r1.IsGrouped() { // already grouped!
return fmt.Errorf("already grouped")
}
if len(r2.GetGroup()) > 0 { // already has children grouped!
return fmt.Errorf("already has groups")
}
if err := r1.GroupCmp(r2); err != nil { // resource groupcmp failed!
return errwrap.Wrapf(err, "the GroupCmp failed")
}
return nil
}
func (obj *wrappedGrouper) VertexMerge(v1, v2 pgraph.Vertex) (v pgraph.Vertex, err error) {
r1, ok := v1.(engine.GroupableRes)
if !ok {
return nil, fmt.Errorf("v1 is not a GroupableRes")
}
r2, ok := v2.(engine.GroupableRes)
if !ok {
return nil, fmt.Errorf("v2 is not a GroupableRes")
}
if err = r1.GroupRes(r2); err != nil { // GroupRes skips stupid groupings
return // return early on error
}
// merging two resources into one should yield the sum of their semas
if semas := r2.MetaParams().Sema; len(semas) > 0 {
r1.MetaParams().Sema = append(r1.MetaParams().Sema, semas...)
r1.MetaParams().Sema = util.StrRemoveDuplicatesInList(r1.MetaParams().Sema)
}
return // success or fail, and no need to merge the actual vertices!
}
func (obj *wrappedGrouper) EdgeMerge(e1, e2 pgraph.Edge) pgraph.Edge {
e1x, ok := e1.(*engine.Edge)
if !ok {
return e2 // just return something to avoid needing to error
}
e2x, ok := e2.(*engine.Edge)
if !ok {
return e1 // just return something to avoid needing to error
}
// TODO: should we merge the edge.Notify or edge.refresh values?
edge := &engine.Edge{
Notify: e1x.Notify || e2x.Notify, // TODO: should we merge this?
}
refresh := e1x.Refresh() || e2x.Refresh() // TODO: should we merge this?
edge.SetRefresh(refresh)
return edge
}
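
As a hypothetical illustration (not part of this commit), the wrapper above is applied on top of whichever grouper the caller passes in. The helper below is an assumption, written to compile against the imports already present in this file, and it picks the NonReachabilityGrouper defined elsewhere in this change set because that type provides the full method set the runner needs.

// autoGroupDefault is an assumed convenience helper, not actual project code.
// It runs autogrouping on the pending (Loaded) graph with a default grouper.
func (obj *Engine) autoGroupDefault() error {
	ag := &autogroup.NonReachabilityGrouper{} // assumed to satisfy engine.AutoGrouper
	if err := obj.AutoGroup(ag); err != nil {
		return errwrap.Wrapf(err, "autogrouping failed")
	}
	return nil
}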

(new file, 73 lines)
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package autogroup
import (
"fmt"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util/errwrap"
)
// AutoGroup is the mechanical auto group "runner" that runs the interface spec.
// TODO: this algorithm may not be correct in all cases. replace if needed!
func AutoGroup(ag engine.AutoGrouper, g *pgraph.Graph, debug bool, logf func(format string, v ...interface{})) error {
logf("algorithm: %s...", ag.Name())
if err := ag.Init(g); err != nil {
return errwrap.Wrapf(err, "error running autoGroup(init)")
}
for {
var v, w pgraph.Vertex
v, w, err := ag.VertexNext() // get pair to compare
if err != nil {
return errwrap.Wrapf(err, "error running autoGroup(vertexNext)")
}
merged := false
// save names since they change during the runs
vStr := fmt.Sprintf("%v", v) // valid even if it is nil
wStr := fmt.Sprintf("%v", w)
if err := ag.VertexCmp(v, w); err != nil { // cmp ?
if debug {
logf("!GroupCmp for: %s into: %s", wStr, vStr)
}
// remove grouped vertex and merge edges (res is safe)
} else if err := VertexMerge(g, v, w, ag.VertexMerge, ag.EdgeMerge); err != nil { // merge...
logf("!VertexMerge for: %s into: %s", wStr, vStr)
} else { // success!
logf("%s into %s", wStr, vStr)
merged = true // woo
}
// did these get used?
if ok, err := ag.VertexTest(merged); err != nil {
return errwrap.Wrapf(err, "error running autoGroup(vertexTest)")
} else if !ok {
break // done!
}
}
// It would be great to ensure we didn't add any graph cycles here, but
// instead of checking now, we'll move the check into the main loop.
return nil
}
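
A small, hypothetical test-style sketch of driving this runner directly (not part of this commit). With an empty graph the base iterator finishes immediately, so nothing is merged and no error is expected; the NonReachabilityGrouper from this package is assumed as the grouper.

package autogroup

import (
	"testing"

	"github.com/purpleidea/mgmt/pgraph"
)

func TestAutoGroupRunnerSketch(t *testing.T) {
	g, err := pgraph.NewGraph("example") // empty graph: nothing to group
	if err != nil {
		t.Fatalf("could not create graph: %+v", err)
	}
	logf := func(format string, v ...interface{}) {
		t.Logf("autogroup: "+format, v...)
	}
	if err := AutoGroup(&NonReachabilityGrouper{}, g, false, logf); err != nil {
		t.Errorf("autogroup failed: %+v", err)
	}
}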

(new file, 929 lines)
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build !root
package autogroup
import (
"fmt"
"reflect"
"sort"
"strings"
"testing"
"time"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util"
"github.com/purpleidea/mgmt/util/errwrap"
)
func init() {
engine.RegisterResource("nooptest", func() engine.Res { return &NoopResTest{} })
}
// NoopResTest is a no-op resource that groups strangely.
type NoopResTest struct {
traits.Base // add the base methods without re-implementation
traits.Groupable
init *engine.Init
Comment string
}
func (obj *NoopResTest) Default() engine.Res {
return &NoopResTest{}
}
func (obj *NoopResTest) Validate() error {
return nil
}
func (obj *NoopResTest) Init(init *engine.Init) error {
obj.init = init // save for later
return nil
}
func (obj *NoopResTest) Close() error {
return nil
}
func (obj *NoopResTest) Watch() error {
return nil // not needed
}
func (obj *NoopResTest) CheckApply(apply bool) (checkOK bool, err error) {
return true, nil // state is always okay
}
func (obj *NoopResTest) Cmp(r engine.Res) error {
// we can only compare NoopRes to others of the same resource kind
res, ok := r.(*NoopResTest)
if !ok {
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.Comment != res.Comment {
return fmt.Errorf("comment differs")
}
return nil
}
func (obj *NoopResTest) GroupCmp(r engine.GroupableRes) error {
res, ok := r.(*NoopResTest)
if !ok {
return fmt.Errorf("resource is not the same kind")
}
// TODO: implement this in vertexCmp for *testGrouper instead?
if strings.Contains(res.Name(), ",") { // HACK
return fmt.Errorf("already grouped") // element to be grouped is already grouped!
}
// group if they start with the same letter! (helpful hack for testing)
if obj.Name()[0] != res.Name()[0] {
return fmt.Errorf("different starting letter")
}
return nil
}
func NewNoopResTest(name string) *NoopResTest {
n, err := engine.NewNamedResource("nooptest", name)
if err != nil {
panic(fmt.Sprintf("unexpected error: %+v", err))
}
//x := n.(*resources.NoopRes)
g, ok := n.(engine.GroupableRes)
if !ok {
panic("not a GroupableRes")
}
g.AutoGroupMeta().Disabled = false // always autogroup
//x := g.(*NoopResTest)
x := n.(*NoopResTest)
return x
}
func NewNoopResTestSema(name string, semas []string) *NoopResTest {
n := NewNoopResTest(name)
n.MetaParams().Sema = semas
return n
}
// NE is a helper function to make testing easier. It creates a new noop edge.
func NE(s string) pgraph.Edge {
obj := &engine.Edge{Name: s}
return obj
}
type testGrouper struct {
// TODO: this algorithm may not be correct in all cases. replace if needed!
NonReachabilityGrouper // "inherit" what we want, and reimplement the rest
}
func (obj *testGrouper) Name() string {
return "testGrouper"
}
func (obj *testGrouper) VertexCmp(v1, v2 pgraph.Vertex) error {
// call existing vertexCmp first
if err := obj.NonReachabilityGrouper.VertexCmp(v1, v2); err != nil {
return err
}
r1, ok := v1.(engine.GroupableRes)
if !ok {
return fmt.Errorf("v1 is not a GroupableRes")
}
r2, ok := v2.(engine.GroupableRes)
if !ok {
return fmt.Errorf("v2 is not a GroupableRes")
}
if r1.Kind() != r2.Kind() { // we must group similar kinds
// TODO: maybe future resources won't need this limitation?
return fmt.Errorf("the two resources aren't the same kind")
}
// someone doesn't want to group!
if r1.AutoGroupMeta().Disabled || r2.AutoGroupMeta().Disabled {
return fmt.Errorf("one of the autogroup flags is false")
}
if r1.IsGrouped() { // already grouped!
return fmt.Errorf("already grouped")
}
if len(r2.GetGroup()) > 0 { // already has children grouped!
return fmt.Errorf("already has groups")
}
if err := r1.GroupCmp(r2); err != nil { // resource groupcmp failed!
return errwrap.Wrapf(err, "the GroupCmp failed")
}
return nil
}
func (obj *testGrouper) VertexMerge(v1, v2 pgraph.Vertex) (v pgraph.Vertex, err error) {
r1 := v1.(engine.GroupableRes)
r2 := v2.(engine.GroupableRes)
if err := r1.GroupRes(r2); err != nil { // group them first
return nil, err
}
// HACK: update the name so it matches full list of self+grouped
res := v1.(engine.GroupableRes)
names := strings.Split(res.Name(), ",") // load in stored names
for _, n := range res.GetGroup() {
names = append(names, n.Name()) // add my contents
}
names = util.StrRemoveDuplicatesInList(names) // remove duplicates
sort.Strings(names)
res.SetName(strings.Join(names, ","))
// TODO: copied from autogroup.go, so try and build a better test...
// merging two resources into one should yield the sum of their semas
if semas := r2.MetaParams().Sema; len(semas) > 0 {
r1.MetaParams().Sema = append(r1.MetaParams().Sema, semas...)
r1.MetaParams().Sema = util.StrRemoveDuplicatesInList(r1.MetaParams().Sema)
}
return // success or fail, and no need to merge the actual vertices!
}
func (obj *testGrouper) EdgeMerge(e1, e2 pgraph.Edge) pgraph.Edge {
edge1 := e1.(*engine.Edge) // panic if wrong
edge2 := e2.(*engine.Edge) // panic if wrong
// HACK: update the name so it makes a union of both names
n1 := strings.Split(edge1.Name, ",") // load
n2 := strings.Split(edge2.Name, ",") // load
names := append(n1, n2...)
names = util.StrRemoveDuplicatesInList(names) // remove duplicates
sort.Strings(names)
return &engine.Edge{Name: strings.Join(names, ",")}
}
// helper function
func runGraphCmp(t *testing.T, g1, g2 *pgraph.Graph) {
debug := testing.Verbose() // set via the -test.v flag to `go test`
logf := func(format string, v ...interface{}) {
t.Logf("test: "+format, v...)
}
if err := AutoGroup(&testGrouper{}, g1, debug, logf); err != nil { // edits the graph
t.Errorf("%v", err)
return
}
err := GraphCmp(g1, g2)
if err != nil {
t.Logf(" actual (g1): %v%v", g1, fullPrint(g1))
t.Logf("expected (g2): %v%v", g2, fullPrint(g2))
t.Logf("Cmp error:")
t.Errorf("%v", err)
}
}
// GraphCmp compares the topology of two graphs and returns nil if they're
// equal. It also compares whether the grouped element groups are identical.
// TODO: port this to use the pgraph.GraphCmp function instead.
func GraphCmp(g1, g2 *pgraph.Graph) error {
if n1, n2 := g1.NumVertices(), g2.NumVertices(); n1 != n2 {
return fmt.Errorf("graph g1 has %d vertices, while g2 has %d", n1, n2)
}
if e1, e2 := g1.NumEdges(), g2.NumEdges(); e1 != e2 {
return fmt.Errorf("graph g1 has %d edges, while g2 has %d", e1, e2)
}
var m = make(map[pgraph.Vertex]pgraph.Vertex) // g1 to g2 vertex correspondence
Loop:
// check vertices
for v1 := range g1.Adjacency() { // for each vertex in g1
r1 := v1.(engine.GroupableRes)
l1 := strings.Split(r1.Name(), ",") // make list of everyone's names...
for _, x1 := range r1.GetGroup() {
l1 = append(l1, x1.Name()) // add my contents
}
l1 = util.StrRemoveDuplicatesInList(l1) // remove duplicates
sort.Strings(l1)
// inner loop
for v2 := range g2.Adjacency() { // does it match in g2 ?
r2 := v2.(engine.GroupableRes)
l2 := strings.Split(r2.Name(), ",")
for _, x2 := range r2.GetGroup() {
l2 = append(l2, x2.Name())
}
l2 = util.StrRemoveDuplicatesInList(l2) // remove duplicates
sort.Strings(l2)
// does l1 match l2 ?
if ListStrCmp(l1, l2) { // cmp!
m[v1] = v2
continue Loop
}
}
return fmt.Errorf("graph g1, has no match in g2 for: %v", r1.Name())
}
// vertices (and groups) match :)
// check edges
for v1 := range g1.Adjacency() { // for each vertex in g1
v2 := m[v1] // lookup in map to get correspondence
// g1.Adjacency()[v1] corresponds to g2.Adjacency()[v2]
if e1, e2 := len(g1.Adjacency()[v1]), len(g2.Adjacency()[v2]); e1 != e2 {
r1 := v1.(engine.Res)
r2 := v2.(engine.Res)
return fmt.Errorf("graph g1, vertex(%v) has %d edges, while g2, vertex(%v) has %d", r1.Name(), e1, r2.Name(), e2)
}
for vv1, ee1 := range g1.Adjacency()[v1] {
vv2 := m[vv1]
ee1 := ee1.(*engine.Edge)
ee2 := g2.Adjacency()[v2][vv2].(*engine.Edge)
// these are edges from v1 -> vv1 via ee1 (graph 1)
// to cmp to edges from v2 -> vv2 via ee2 (graph 2)
// check: (1) vv1 == vv2 ? (we've already checked this!)
rr1 := vv1.(engine.GroupableRes)
rr2 := vv2.(engine.GroupableRes)
l1 := strings.Split(rr1.Name(), ",") // make list of everyone's names...
for _, x1 := range rr1.GetGroup() {
l1 = append(l1, x1.Name()) // add my contents
}
l1 = util.StrRemoveDuplicatesInList(l1) // remove duplicates
sort.Strings(l1)
l2 := strings.Split(rr2.Name(), ",")
for _, x2 := range rr2.GetGroup() {
l2 = append(l2, x2.Name())
}
l2 = util.StrRemoveDuplicatesInList(l2) // remove duplicates
sort.Strings(l2)
// does l1 match l2 ?
if !ListStrCmp(l1, l2) { // cmp!
return fmt.Errorf("graph g1 and g2 don't agree on: %v and %v", rr1.Name(), rr2.Name())
}
// check: (2) ee1 == ee2
if ee1.Name != ee2.Name {
return fmt.Errorf("graph g1 edge(%v) doesn't match g2 edge(%v)", ee1.Name, ee2.Name)
}
}
}
// check meta parameters
for v1 := range g1.Adjacency() { // for each vertex in g1
for v2 := range g2.Adjacency() { // does it match in g2 ?
r1 := v1.(engine.Res)
r2 := v2.(engine.Res)
s1, s2 := r1.MetaParams().Sema, r2.MetaParams().Sema
sort.Strings(s1)
sort.Strings(s2)
if !reflect.DeepEqual(s1, s2) {
return fmt.Errorf("vertex %s and vertex %s have different semaphores", r1.Name(), r2.Name())
}
}
}
return nil // success!
}
// ListStrCmp compares two lists of strings
func ListStrCmp(a, b []string) bool {
//fmt.Printf("CMP: %v with %v\n", a, b) // debugging
if a == nil && b == nil {
return true
}
if a == nil || b == nil {
return false
}
if len(a) != len(b) {
return false
}
for i := range a {
if a[i] != b[i] {
return false
}
}
return true
}
func fullPrint(g *pgraph.Graph) (str string) {
str += "\n"
for v := range g.Adjacency() {
r := v.(engine.Res)
if semas := r.MetaParams().Sema; len(semas) > 0 {
str += fmt.Sprintf("* v: %v; sema: %v\n", r.Name(), semas)
} else {
str += fmt.Sprintf("* v: %v\n", r.Name())
}
// TODO: add explicit grouping data?
}
for v1 := range g.Adjacency() {
for v2, e := range g.Adjacency()[v1] {
r1 := v1.(engine.Res)
r2 := v2.(engine.Res)
edge := e.(*engine.Edge)
str += fmt.Sprintf("* e: %v -> %v # %v\n", r1.Name(), r2.Name(), edge.Name)
}
}
return
}
func TestDurationAssumptions(t *testing.T) {
var d time.Duration
if (d == 0) != true {
t.Errorf("empty time.Duration is no longer equal to zero")
}
if (d > 0) != false {
t.Errorf("empty time.Duration is now greater than zero")
}
}
// all of the following test cases are laid out with the following semantics:
// * vertices which start with the same single letter are considered "like"
// * "like" elements should be merged
// * vertices can have any integer after their single letter "family" type
// * grouped vertices should have a name with a comma separated list of names
// * edges follow the same conventions about grouping
// empty graph
func TestPgraphGrouping1(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
g2, _ := pgraph.NewGraph("g2") // expected result
runGraphCmp(t, g1, g2)
}
// single vertex
func TestPgraphGrouping2(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{ // grouping to limit variable scope
a1 := NewNoopResTest("a1")
g1.AddVertex(a1)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a1 := NewNoopResTest("a1")
g2.AddVertex(a1)
}
runGraphCmp(t, g1, g2)
}
// two vertices
func TestPgraphGrouping3(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
g1.AddVertex(a1, b1)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
g2.AddVertex(a1, b1)
}
runGraphCmp(t, g1, g2)
}
// two vertices merge
func TestPgraphGrouping4(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
g1.AddVertex(a1, a2)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
g2.AddVertex(a)
}
runGraphCmp(t, g1, g2)
}
// three vertices merge
func TestPgraphGrouping5(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
a3 := NewNoopResTest("a3")
g1.AddVertex(a1, a2, a3)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2,a3")
g2.AddVertex(a)
}
runGraphCmp(t, g1, g2)
}
// three vertices, two merge
func TestPgraphGrouping6(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
g1.AddVertex(a1, a2, b1)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b1 := NewNoopResTest("b1")
g2.AddVertex(a, b1)
}
runGraphCmp(t, g1, g2)
}
// four vertices, three merge
func TestPgraphGrouping7(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
a3 := NewNoopResTest("a3")
b1 := NewNoopResTest("b1")
g1.AddVertex(a1, a2, a3, b1)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2,a3")
b1 := NewNoopResTest("b1")
g2.AddVertex(a, b1)
}
runGraphCmp(t, g1, g2)
}
// four vertices, two&two merge
func TestPgraphGrouping8(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
b2 := NewNoopResTest("b2")
g1.AddVertex(a1, a2, b1, b2)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b := NewNoopResTest("b1,b2")
g2.AddVertex(a, b)
}
runGraphCmp(t, g1, g2)
}
// five vertices, two&three merge
func TestPgraphGrouping9(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
b2 := NewNoopResTest("b2")
b3 := NewNoopResTest("b3")
g1.AddVertex(a1, a2, b1, b2, b3)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b := NewNoopResTest("b1,b2,b3")
g2.AddVertex(a, b)
}
runGraphCmp(t, g1, g2)
}
// three unique vertices
func TestPgraphGrouping10(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
c1 := NewNoopResTest("c1")
g1.AddVertex(a1, b1, c1)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
c1 := NewNoopResTest("c1")
g2.AddVertex(a1, b1, c1)
}
runGraphCmp(t, g1, g2)
}
// three unique vertices, two merge
func TestPgraphGrouping11(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
b2 := NewNoopResTest("b2")
c1 := NewNoopResTest("c1")
g1.AddVertex(a1, b1, b2, c1)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a1 := NewNoopResTest("a1")
b := NewNoopResTest("b1,b2")
c1 := NewNoopResTest("c1")
g2.AddVertex(a1, b, c1)
}
runGraphCmp(t, g1, g2)
}
/* simple merge 1
// a1 a2 a1,a2
// \ / >>> | (arrows point downwards)
// b b
*/
func TestPgraphGrouping12(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
e1 := NE("e1")
e2 := NE("e2")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(a2, b1, e2)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b1 := NewNoopResTest("b1")
e := NE("e1,e2")
g2.AddEdge(a, b1, e)
}
runGraphCmp(t, g1, g2)
}
/* simple merge 2
// b b
// / \ >>> | (arrows point downwards)
// a1 a2 a1,a2
*/
func TestPgraphGrouping13(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
e1 := NE("e1")
e2 := NE("e2")
g1.AddEdge(b1, a1, e1)
g1.AddEdge(b1, a2, e2)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b1 := NewNoopResTest("b1")
e := NE("e1,e2")
g2.AddEdge(b1, a, e)
}
runGraphCmp(t, g1, g2)
}
/* triple merge
// a1 a2 a3 a1,a2,a3
// \ | / >>> | (arrows point downwards)
// b b
*/
func TestPgraphGrouping14(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
a3 := NewNoopResTest("a3")
b1 := NewNoopResTest("b1")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(a2, b1, e2)
g1.AddEdge(a3, b1, e3)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2,a3")
b1 := NewNoopResTest("b1")
e := NE("e1,e2,e3")
g2.AddEdge(a, b1, e)
}
runGraphCmp(t, g1, g2)
}
/* chain merge
// a1 a1
// / \ |
// b1 b2 >>> b1,b2 (arrows point downwards)
// \ / |
// c1 c1
*/
func TestPgraphGrouping15(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
b2 := NewNoopResTest("b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
e4 := NE("e4")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(a1, b2, e2)
g1.AddEdge(b1, c1, e3)
g1.AddEdge(b2, c1, e4)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a1 := NewNoopResTest("a1")
b := NewNoopResTest("b1,b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1,e2")
e2 := NE("e3,e4")
g2.AddEdge(a1, b, e1)
g2.AddEdge(b, c1, e2)
}
runGraphCmp(t, g1, g2)
}
/* re-attach 1 (outer)
// technically the second possibility is valid too, depending on which order we
// merge edges in, and if we don't filter out any unnecessary edges afterwards!
// a1 a2 a1,a2 a1,a2
// | / | | \
// b1 / >>> b1 OR b1 / (arrows point downwards)
// | / | | /
// c1 c1 c1
*/
func TestPgraphGrouping16(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
c1 := NewNoopResTest("c1")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(b1, c1, e2)
g1.AddEdge(a2, c1, e3)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b1 := NewNoopResTest("b1")
c1 := NewNoopResTest("c1")
e1 := NE("e1,e3")
e2 := NE("e2,e3") // e3 gets "merged through" to BOTH edges!
g2.AddEdge(a, b1, e1)
g2.AddEdge(b1, c1, e2)
}
runGraphCmp(t, g1, g2)
}
/* re-attach 2 (inner)
// a1 b2 a1
// | / |
// b1 / >>> b1,b2 (arrows point downwards)
// | / |
// c1 c1
*/
func TestPgraphGrouping17(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
b2 := NewNoopResTest("b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(b1, c1, e2)
g1.AddEdge(b2, c1, e3)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a1 := NewNoopResTest("a1")
b := NewNoopResTest("b1,b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1")
e2 := NE("e2,e3")
g2.AddEdge(a1, b, e1)
g2.AddEdge(b, c1, e2)
}
runGraphCmp(t, g1, g2)
}
/* re-attach 3 (double)
// similar to "re-attach 1", technically there is a second possibility for this
// a2 a1 b2 a1,a2
// \ | / |
// \ b1 / >>> b1,b2 (arrows point downwards)
// \ | / |
// c1 c1
*/
func TestPgraphGrouping18(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
b2 := NewNoopResTest("b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
e4 := NE("e4")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(b1, c1, e2)
g1.AddEdge(a2, c1, e3)
g1.AddEdge(b2, c1, e4)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b := NewNoopResTest("b1,b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1,e3")
e2 := NE("e2,e3,e4") // e3 gets "merged through" to BOTH edges!
g2.AddEdge(a, b, e1)
g2.AddEdge(b, c1, e2)
}
runGraphCmp(t, g1, g2)
}
/* connected merge 0, (no change!)
// a1 a1
// \ >>> \ (arrows point downwards)
// a2 a2
*/
func TestPgraphGroupingConnected0(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
e1 := NE("e1")
g1.AddEdge(a1, a2, e1)
}
g2, _ := pgraph.NewGraph("g2") // expected result ?
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
e1 := NE("e1")
g2.AddEdge(a1, a2, e1)
}
runGraphCmp(t, g1, g2)
}
/* connected merge 1, (no change!)
// a1 a1
// \ \
// b >>> b (arrows point downwards)
// \ \
// a2 a2
*/
func TestPgraphGroupingConnected1(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
b := NewNoopResTest("b")
a2 := NewNoopResTest("a2")
e1 := NE("e1")
e2 := NE("e2")
g1.AddEdge(a1, b, e1)
g1.AddEdge(b, a2, e2)
}
g2, _ := pgraph.NewGraph("g2") // expected result ?
{
a1 := NewNoopResTest("a1")
b := NewNoopResTest("b")
a2 := NewNoopResTest("a2")
e1 := NE("e1")
e2 := NE("e2")
g2.AddEdge(a1, b, e1)
g2.AddEdge(b, a2, e2)
}
runGraphCmp(t, g1, g2)
}
func TestPgraphSemaphoreGrouping1(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTestSema("a1", []string{"s:1"})
a2 := NewNoopResTestSema("a2", []string{"s:2"})
a3 := NewNoopResTestSema("a3", []string{"s:3"})
g1.AddVertex(a1)
g1.AddVertex(a2)
g1.AddVertex(a3)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a123 := NewNoopResTestSema("a1,a2,a3", []string{"s:1", "s:2", "s:3"})
g2.AddVertex(a123)
}
runGraphCmp(t, g1, g2)
}
func TestPgraphSemaphoreGrouping2(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTestSema("a1", []string{"s:10", "s:11"})
a2 := NewNoopResTestSema("a2", []string{"s:2"})
a3 := NewNoopResTestSema("a3", []string{"s:3"})
g1.AddVertex(a1)
g1.AddVertex(a2)
g1.AddVertex(a3)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a123 := NewNoopResTestSema("a1,a2,a3", []string{"s:10", "s:11", "s:2", "s:3"})
g2.AddVertex(a123)
}
runGraphCmp(t, g1, g2)
}
func TestPgraphSemaphoreGrouping3(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTestSema("a1", []string{"s:1", "s:2"})
a2 := NewNoopResTestSema("a2", []string{"s:2"})
a3 := NewNoopResTestSema("a3", []string{"s:3"})
g1.AddVertex(a1)
g1.AddVertex(a2)
g1.AddVertex(a3)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a123 := NewNoopResTestSema("a1,a2,a3", []string{"s:1", "s:2", "s:3"})
g2.AddVertex(a123)
}
runGraphCmp(t, g1, g2)
}

(new file, 140 lines)
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package autogroup
import (
"fmt"
"github.com/purpleidea/mgmt/pgraph"
)
// baseGrouper is the base type for implementing the AutoGrouper interface.
type baseGrouper struct {
graph *pgraph.Graph // store a pointer to the graph
vertices []pgraph.Vertex // cached list of vertices
i int
j int
done bool
}
// Name provides a friendly name for the logs to see.
func (ag *baseGrouper) Name() string {
return "baseGrouper"
}
// Init is called only once, and must be called before using the other
// AutoGrouper interface methods. The Name method is the only exception: it can
// be called at any time without side effects!
func (ag *baseGrouper) Init(g *pgraph.Graph) error {
if ag.graph != nil {
return fmt.Errorf("the init method has already been called")
}
ag.graph = g // pointer
ag.vertices = ag.graph.VerticesSorted() // cache in deterministic order!
ag.i = 0
ag.j = 0
if len(ag.vertices) == 0 { // empty graph
ag.done = true
return nil
}
return nil
}
// VertexNext is a simple iterator that loops through all vertex (pair)
// combinations. A more intelligent algorithm would selectively offer only the
// valid pairs of vertices, that is, those which satisfy the logical grouping
// requirements of the autogroup design. The desired algorithms can override
// this, but this method should be kept as the base iterator!
func (ag *baseGrouper) VertexNext() (v1, v2 pgraph.Vertex, err error) {
// this does a for v... { for w... { return v, w }} but stepwise!
l := len(ag.vertices)
if ag.i < l {
v1 = ag.vertices[ag.i]
}
if ag.j < l {
v2 = ag.vertices[ag.j]
}
// in case the vertex was deleted
if !ag.graph.HasVertex(v1) {
v1 = nil
}
if !ag.graph.HasVertex(v2) {
v2 = nil
}
// two nested loops...
if ag.j < l {
ag.j++
}
if ag.j == l {
ag.j = 0
if ag.i < l {
ag.i++
}
if ag.i == l {
ag.done = true
}
}
// TODO: is this index swap better or even valid?
//if ag.i < l {
// ag.i++
//}
//if ag.i == l {
// ag.i = 0
// if ag.j < l {
// ag.j++
// }
// if ag.j == l {
// ag.done = true
// }
//}
return
}
// VertexCmp can be used in addition to an overriding implementation.
func (ag *baseGrouper) VertexCmp(v1, v2 pgraph.Vertex) error {
if v1 == nil || v2 == nil {
return fmt.Errorf("the vertex is nil")
}
if v1 == v2 { // skip yourself
return fmt.Errorf("the vertices are the same")
}
return nil // success
}
// VertexMerge needs to be overridden to add the actual merging functionality.
func (ag *baseGrouper) VertexMerge(v1, v2 pgraph.Vertex) (v pgraph.Vertex, err error) {
return nil, fmt.Errorf("vertexMerge needs to be overridden")
}
// EdgeMerge can be overridden, since this base version simply returns the first edge.
func (ag *baseGrouper) EdgeMerge(e1, e2 pgraph.Edge) pgraph.Edge {
return e1 // noop
}
// VertexTest processes the results of the grouping so that the algorithm
// knows whether to continue. It returns an error if something went horribly
// wrong, and a false bool to stop iterating.
func (ag *baseGrouper) VertexTest(b bool) (bool, error) {
// NOTE: this particular baseGrouper version doesn't track what happened,
// because we iterate over every pair anyway, so we don't care which merged!
if ag.done {
return false, nil
}
return true, nil
}
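
As an informal walk-through of the stepwise iteration above (derived only from the code shown here), for a cached vertex slice of [A, B, C] the base iterator yields pairs in this order, with `done` becoming true as the final pair is produced:

// (A, A), (A, B), (A, C),
// (B, A), (B, B), (B, C),
// (C, A), (C, B), (C, C)   <- done is set while returning this last pair
//
// The base VertexCmp then rejects the nil and (X, X) self pairs, and once done
// is set, VertexTest returns false so the caller's loop stops.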

(new file, 72 lines)
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package autogroup
import (
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util/errwrap"
)
// NonReachabilityGrouper is the most straightforward algorithm for grouping.
// TODO: this algorithm may not be correct in all cases. replace if needed!
type NonReachabilityGrouper struct {
baseGrouper // "inherit" what we want, and reimplement the rest
}
// Name returns the name for the grouper algorithm.
func (ag *NonReachabilityGrouper) Name() string {
return "NonReachabilityGrouper"
}
// VertexNext iteratively finds vertex pairs using simple graph reachability.
// This algorithm relies on the observation that if there's a path from a to b,
// then they *can't* be merged (because of the existing dependency), so we only
// merge pairs that *don't* satisfy this condition in either direction!
func (ag *NonReachabilityGrouper) VertexNext() (v1, v2 pgraph.Vertex, err error) {
for {
v1, v2, err = ag.baseGrouper.VertexNext() // get all iterable pairs
if err != nil {
return nil, nil, errwrap.Wrapf(err, "error running autoGroup(vertexNext)")
}
// ignore self cmp early (perf optimization)
if v1 != v2 && v1 != nil && v2 != nil {
// if NOT reachable, they're viable...
out1, e1 := ag.graph.Reachability(v1, v2)
if e1 != nil {
return nil, nil, e1
}
out2, e2 := ag.graph.Reachability(v2, v1)
if e2 != nil {
return nil, nil, e2
}
if len(out1) == 0 && len(out2) == 0 {
return // return v1 and v2, they're viable
}
}
// if we got here, it means we're skipping over this candidate!
if ok, err := ag.baseGrouper.VertexTest(false); err != nil {
return nil, nil, errwrap.Wrapf(err, "error running autoGroup(vertexTest)")
} else if !ok {
return nil, nil, nil // done!
}
// the vertexTest passed, so loop and try with a new pair...
}
}
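
To make the reachability rule above concrete, here is an informal illustration (mirroring the connected-merge and simple-merge test cases elsewhere in this commit), with arrows pointing downwards:

// a1 -> b -> a2 : a2 is reachable from a1, so the pair (a1, a2) is never
//                 offered for merging; grouping them would collapse a real
//                 dependency into a single vertex.
// a1 -> b <- a2 : neither a1 nor a2 can reach the other, so the pair is
//                 offered to VertexCmp and may be merged into "a1,a2".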

(new file, 138 lines)
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package autogroup
import (
"fmt"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util/errwrap"
)
// VertexMerge merges v2 into v1 by reattaching the edges where appropriate, and
// then by deleting v2 from the graph. Since more than one edge between two
// vertices is not allowed, duplicate edges are merged as well. An edge merge
// function can be provided if you'd like to control how you merge the edges!
func VertexMerge(g *pgraph.Graph, v1, v2 pgraph.Vertex, vertexMergeFn func(pgraph.Vertex, pgraph.Vertex) (pgraph.Vertex, error), edgeMergeFn func(pgraph.Edge, pgraph.Edge) pgraph.Edge) error {
// methodology
// 1) edges between v1 and v2 are removed
//Loop:
for k1 := range g.Adjacency() {
for k2 := range g.Adjacency()[k1] {
// v1 -> v2 || v2 -> v1
if (k1 == v1 && k2 == v2) || (k1 == v2 && k2 == v1) {
delete(g.Adjacency()[k1], k2) // delete map & edge
// NOTE: if we assume this is a DAG, then we can
// assume only v1 -> v2 OR v2 -> v1 exists, and
// we can break out of these loops immediately!
//break Loop
break
}
}
}
// 2) edges that point towards v2 from X now point to v1 from X (no dupes)
for _, x := range g.IncomingGraphVertices(v2) { // all to vertex v (??? -> v)
e := g.Adjacency()[x][v2] // previous edge
r, err := g.Reachability(x, v1)
if err != nil {
return err
}
// merge e with ex := g.Adjacency()[x][v1] if it exists!
if ex, exists := g.Adjacency()[x][v1]; exists && edgeMergeFn != nil && len(r) == 0 {
e = edgeMergeFn(e, ex)
}
if len(r) == 0 { // if not reachable, add it
g.AddEdge(x, v1, e) // overwrite edge
} else if edgeMergeFn != nil { // reachable, merge e through...
prev := x // initial condition
for i, next := range r {
if i == 0 {
// next == prev, therefore skip
continue
}
// this edge is from: prev, to: next
ex, _ := g.Adjacency()[prev][next] // get
ex = edgeMergeFn(ex, e)
g.Adjacency()[prev][next] = ex // set
prev = next
}
}
delete(g.Adjacency()[x], v2) // delete old edge
}
// 3) edges that point from v2 to X now point from v1 to X (no dupes)
for _, x := range g.OutgoingGraphVertices(v2) { // all from vertex v (v -> ???)
e := g.Adjacency()[v2][x] // previous edge
r, err := g.Reachability(v1, x)
if err != nil {
return err
}
// merge e with ex := g.Adjacency()[v1][x] if it exists!
if ex, exists := g.Adjacency()[v1][x]; exists && edgeMergeFn != nil && len(r) == 0 {
e = edgeMergeFn(e, ex)
}
if len(r) == 0 {
g.AddEdge(v1, x, e) // overwrite edge
} else if edgeMergeFn != nil { // reachable, merge e through...
prev := v1 // initial condition
for i, next := range r {
if i == 0 {
// next == prev, therefore skip
continue
}
// this edge is from: prev, to: next
ex, _ := g.Adjacency()[prev][next]
ex = edgeMergeFn(ex, e)
g.Adjacency()[prev][next] = ex
prev = next
}
}
delete(g.Adjacency()[v2], x)
}
// 4) merge and then remove the (now merged/grouped) vertex
if vertexMergeFn != nil { // run vertex merge function
if v, err := vertexMergeFn(v1, v2); err != nil {
return err
} else if v != nil { // replace v1 with the "merged" version...
// note: This branch isn't used if the vertexMergeFn
// decides to just merge logically on its own instead
// of actually returning something that we then merge.
v1 = v // XXX: ineffassign?
//*v1 = *v
// Ensure that everything still validates. (For safety!)
r, ok := v1.(engine.Res) // TODO: v ?
if !ok {
return fmt.Errorf("not a Res")
}
if err := engine.Validate(r); err != nil {
return errwrap.Wrapf(err, "the Res did not Validate")
}
}
}
g.DeleteVertex(v2) // remove grouped vertex
// 5) creation of a cyclic graph should throw an error
if _, err := g.TopologicalSort(); err != nil { // am i a dag or not?
return errwrap.Wrapf(err, "the TopologicalSort failed") // not a dag
}
return nil // success
}
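
A hypothetical, minimal wrapper around the helper above (not part of this commit), using the simplest possible merge strategies; it compiles against the imports already present in this file:

// mergeKeepingFirstEdge is an assumed example helper. Duplicate edges keep the
// first of the pair, and returning a nil vertex skips the replace/validate
// branch above, so any logical merging is left to the caller.
func mergeKeepingFirstEdge(g *pgraph.Graph, v1, v2 pgraph.Vertex) error {
	edgeMergeFn := func(e1, e2 pgraph.Edge) pgraph.Edge {
		return e1 // keep the first edge unchanged
	}
	vertexMergeFn := func(v1, v2 pgraph.Vertex) (pgraph.Vertex, error) {
		return nil, nil // nothing to swap in; v1 stays as it is
	}
	if err := VertexMerge(g, v1, v2, vertexMergeFn, edgeMergeFn); err != nil {
		return errwrap.Wrapf(err, "could not merge the vertices")
	}
	return nil
}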

engine/graph/engine.go (new file, 430 lines)
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package graph
import (
"fmt"
"os"
"path"
"sync"
"github.com/purpleidea/mgmt/converger"
"github.com/purpleidea/mgmt/engine"
engineUtil "github.com/purpleidea/mgmt/engine/util"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util/errwrap"
"github.com/purpleidea/mgmt/util/semaphore"
)
const (
// StateDir is the name of the sub directory where all the local
// resource state is stored.
StateDir = "state"
)
// Engine encapsulates a generic graph and manages its operations.
type Engine struct {
Program string
Hostname string
World engine.World
// Prefix is a unique directory prefix which can be used. It should be
// created if needed.
Prefix string
Converger *converger.Coordinator
Debug bool
Logf func(format string, v ...interface{})
graph *pgraph.Graph
nextGraph *pgraph.Graph
state map[pgraph.Vertex]*State
waits map[pgraph.Vertex]*sync.WaitGroup // wg for the Worker func
wlock *sync.Mutex // lock around waits map
slock *sync.Mutex // semaphore lock
semas map[string]*semaphore.Semaphore
wg *sync.WaitGroup // wg for the whole engine (only used for close)
paused bool // are we paused?
fastPause bool
}
// Init initializes the internal structures and starts the graph running.
// If the struct does not validate, or it cannot initialize, then this errors.
// Initially it will contain an empty graph.
func (obj *Engine) Init() error {
if obj.Program == "" {
return fmt.Errorf("the Program is empty")
}
if obj.Hostname == "" {
return fmt.Errorf("the Hostname is empty")
}
var err error
if obj.graph, err = pgraph.NewGraph("graph"); err != nil {
return err
}
if obj.Prefix == "" || obj.Prefix == "/" {
return fmt.Errorf("the prefix of `%s` is invalid", obj.Prefix)
}
if err := os.MkdirAll(obj.Prefix, 0770); err != nil {
return errwrap.Wrapf(err, "can't create prefix")
}
obj.state = make(map[pgraph.Vertex]*State)
obj.waits = make(map[pgraph.Vertex]*sync.WaitGroup)
obj.wlock = &sync.Mutex{}
obj.slock = &sync.Mutex{}
obj.semas = make(map[string]*semaphore.Semaphore)
obj.wg = &sync.WaitGroup{}
obj.paused = true // start off true, so we can Resume after first Commit
return nil
}
// Load a new graph into the engine. Offline graph operations will be performed
// on this graph. To switch it to the active graph, and run it, use Commit.
func (obj *Engine) Load(newGraph *pgraph.Graph) error {
if obj.nextGraph != nil {
return fmt.Errorf("can't overwrite pending graph, use abort")
}
obj.nextGraph = newGraph
return nil
}
// Abort the pending graph and any work in progress on it. After this call you
// may Load a new graph.
func (obj *Engine) Abort() error {
if obj.nextGraph == nil {
return fmt.Errorf("there is no pending graph to abort")
}
obj.nextGraph = nil
return nil
}
// Validate validates the pending graph to ensure it is appropriate for the
// engine. This should be called before Commit to avoid any surprises there!
// This prevents an error on Commit which could cause an engine shutdown.
func (obj *Engine) Validate() error {
for _, vertex := range obj.nextGraph.Vertices() {
res, ok := vertex.(engine.Res)
if !ok {
return fmt.Errorf("not a Res")
}
if err := engine.Validate(res); err != nil {
return errwrap.Wrapf(err, "the Res did not Validate")
}
}
return nil
}
// Apply a function to the pending graph. You must pass in a function which will
// receive this graph as input, and return an error if something does not
// succeed.
func (obj *Engine) Apply(fn func(*pgraph.Graph) error) error {
return fn(obj.nextGraph)
}
// Commit runs a graph sync and swaps the loaded graph with the current one. If
// it errors, then the running graph wasn't changed. It is recommended that you
// pause the engine before running this, and resume it after you're done.
func (obj *Engine) Commit() error {
// TODO: Does this hurt performance or graph changes ?
start := []func() error{} // functions to run after graphsync to start...
vertexAddFn := func(vertex pgraph.Vertex) error {
// some of these validation steps happen before this Commit step
// in Validate() to avoid erroring here. These are redundant.
// FIXME: should we get rid of this redundant validation?
res, ok := vertex.(engine.Res)
if !ok { // should not happen, previously validated
return fmt.Errorf("not a Res")
}
if obj.Debug {
obj.Logf("loading resource `%s`", res)
}
if _, exists := obj.state[vertex]; exists {
return fmt.Errorf("the Res state already exists")
}
if obj.Debug {
obj.Logf("Validate(%s)", res)
}
err := engine.Validate(res)
if obj.Debug {
obj.Logf("Validate(%s): Return(%+v)", res, err)
}
if err != nil {
return errwrap.Wrapf(err, "the Res did not Validate")
}
pathUID := engineUtil.ResPathUID(res)
statePrefix := fmt.Sprintf("%s/", path.Join(obj.statePrefix(), pathUID))
// don't create this unless it *will* be used
//if err := os.MkdirAll(statePrefix, 0770); err != nil {
// return errwrap.Wrapf(err, "can't create state prefix")
//}
obj.waits[vertex] = &sync.WaitGroup{}
obj.state[vertex] = &State{
Graph: obj.graph, // Update if we swap the graph!
Vertex: vertex,
Program: obj.Program,
Hostname: obj.Hostname,
World: obj.World,
Prefix: statePrefix,
//Converger: obj.Converger,
Debug: obj.Debug,
Logf: func(format string, v ...interface{}) {
obj.Logf(res.String()+": "+format, v...)
},
}
if err := obj.state[vertex].Init(); err != nil {
return errwrap.Wrapf(err, "the Res did not Init")
}
fn := func() error {
// start the Worker
obj.wg.Add(1)
obj.wlock.Lock()
obj.waits[vertex].Add(1)
obj.wlock.Unlock()
go func(v pgraph.Vertex) {
defer obj.wg.Done()
defer func() {
// we need this lock, because this go
// routine could run when the next fn
// function above here is running...
obj.wlock.Lock()
obj.waits[v].Done()
obj.wlock.Unlock()
}()
obj.Logf("Worker(%s)", v)
// contains the Watch and CheckApply loops
err := obj.Worker(v)
obj.Logf("Worker(%s): Exited(%+v)", v, err)
obj.state[v].workerErr = err // store the error
// If the Rewatch metaparam is true, then this will get
// restarted if we do a graph cmp swap. This is why the
// graph cmp function runs the removes before the adds.
// XXX: This should feed into an $error var in the lang.
}(vertex)
return nil
}
start = append(start, fn) // do this at the end, if it's needed
return nil
}
free := []func() error{} // functions to run after graphsync to reset...
vertexRemoveFn := func(vertex pgraph.Vertex) error {
// wait for exit before starting new graph!
close(obj.state[vertex].removeDone) // causes doneChan to close
obj.state[vertex].Resume() // unblock from resume
obj.waits[vertex].Wait() // sync
// close the state and resource
// FIXME: will this mess up the sync and block the engine?
if err := obj.state[vertex].Close(); err != nil {
return errwrap.Wrapf(err, "the Res did not Close")
}
// delete to free up memory from old graphs
fn := func() error {
delete(obj.state, vertex)
delete(obj.waits, vertex)
return nil
}
free = append(free, fn) // do this at the end, so we don't panic
return nil
}
// add the Worker swap (reload) on error decision into this vertexCmpFn
vertexCmpFn := func(v1, v2 pgraph.Vertex) (bool, error) {
r1, ok1 := v1.(engine.Res)
r2, ok2 := v2.(engine.Res)
if !ok1 || !ok2 { // should not happen, previously validated
return false, fmt.Errorf("not a Res")
}
m1 := r1.MetaParams()
m2 := r2.MetaParams()
swap1, swap2 := true, true // assume default of true
if m1 != nil {
swap1 = m1.Rewatch
}
if m2 != nil {
swap2 = m2.Rewatch
}
s1, ok1 := obj.state[v1]
s2, ok2 := obj.state[v2]
x1, x2 := false, false
if ok1 {
x1 = s1.workerErr != nil && swap1
}
if ok2 {
x2 = s2.workerErr != nil && swap2
}
if x1 || x2 {
// We swap, even if they're the same, so that we reload!
// This causes an add and remove of the "same" vertex...
return false, nil
}
return engine.VertexCmpFn(v1, v2) // do the normal cmp otherwise
}
// If GraphSync succeeds, it updates the receiver graph accordingly...
// Running the shutdown in vertexRemoveFn does not need to happen in a
// topologically sorted order because it already paused in that order.
obj.Logf("graph sync...")
if err := obj.graph.GraphSync(obj.nextGraph, vertexCmpFn, vertexAddFn, vertexRemoveFn, engine.EdgeCmpFn); err != nil {
return errwrap.Wrapf(err, "error running graph sync")
}
// We run these afterwards, so that we don't unnecessarily start anyone
// if GraphSync failed in some way. Otherwise we'd have to do clean up!
for _, fn := range start {
if err := fn(); err != nil {
return errwrap.Wrapf(err, "error running start fn")
}
}
// We run these afterwards, so that the state structs (that might get
// referenced) are not destroyed while someone might poke or use one.
for _, fn := range free {
if err := fn(); err != nil {
return errwrap.Wrapf(err, "error running free fn")
}
}
obj.nextGraph = nil
// After this point, we must not error or we'd need to restore all of
// the changes that we'd made to the previously primary graph. This is
// because this function is meant to atomically swap the graphs safely.
// Update all the `State` structs with the new Graph pointer.
for _, vertex := range obj.graph.Vertices() {
state, exists := obj.state[vertex]
if !exists {
continue
}
state.Graph = obj.graph // update pointer to graph
}
return nil
}
// Resume runs the currently active graph. It also un-pauses the graph if it was
// paused. Very little that is interesting should happen here. It all happens in
// the Commit method. After Commit, new things are already started, but we still
// need to Resume any pre-existing resources.
func (obj *Engine) Resume() error {
if !obj.paused {
return fmt.Errorf("already resumed")
}
topoSort, err := obj.graph.TopologicalSort()
if err != nil {
return err
}
//indegree := obj.graph.InDegree() // compute all of the indegree's
reversed := pgraph.Reverse(topoSort)
for _, vertex := range reversed {
//obj.state[vertex].starter = (indegree[vertex] == 0)
obj.state[vertex].Resume() // doesn't error
}
// we wait for everyone to start before exiting!
obj.paused = false
return nil
}
// SetFastPause puts the graph into fast pause mode. This is usually done via
// the argument to the Pause command, but this method can be used if a pause was
// already started, and you'd like subsequent parts to pause quickly. Once in
// fast pause mode for a given pause action, you cannot switch to regular pause.
// This is because once you've started a fast pause, some dependencies might
// have been skipped when fast pausing, and future resources might have missed a
// poke. In general this is only called when you're trying to hurry up the exit.
// XXX: Not implemented
func (obj *Engine) SetFastPause() {
obj.fastPause = true
}
// Pause the active, running graph.
func (obj *Engine) Pause(fastPause bool) error {
if obj.paused {
return fmt.Errorf("already paused")
}
obj.fastPause = fastPause
topoSort, _ := obj.graph.TopologicalSort()
for _, vertex := range topoSort { // squeeze out the events...
// The Event is sent to an unbuffered channel, so this event is
// synchronous, and as a result it blocks until it is received.
if err := obj.state[vertex].Pause(); err != nil && err != engine.ErrClosed {
return err
}
}
obj.paused = true
// we are now completely paused...
obj.fastPause = false // reset
return nil
}
// Close triggers a shutdown. Engine must be already paused before this is run.
func (obj *Engine) Close() error {
emptyGraph, reterr := pgraph.NewGraph("empty")
// this is a graph switch (graph sync) that switches to an empty graph!
if err := obj.Load(emptyGraph); err != nil { // copy in empty graph
reterr = errwrap.Append(reterr, err)
}
// FIXME: Do we want to run commit if Load failed? Does this even work?
// the commit will cause the graph sync to shut things down cleverly...
if err := obj.Commit(); err != nil {
reterr = errwrap.Append(reterr, err)
}
obj.wg.Wait() // for now, this doesn't need to be a separate Wait() method
return reterr
}
// Graph returns the running graph.
func (obj *Engine) Graph() *pgraph.Graph {
return obj.graph
}
// statePrefix returns the dir where all the resource state is stored locally.
func (obj *Engine) statePrefix() string {
return fmt.Sprintf("%s/", path.Join(obj.Prefix, StateDir))
}
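
A rough, hypothetical sketch of the lifecycle implied by the methods above: build an Engine, Load and Validate a pending graph, Commit it, Resume, and later Pause and Close. The import path, the temporary prefix, and the empty graph are assumptions; a real caller (such as the lib package) would supply a World, a converger, and a graph full of resources.

package main

import (
	"log"

	"github.com/purpleidea/mgmt/engine/graph"
	"github.com/purpleidea/mgmt/pgraph"
)

func main() {
	ge := &graph.Engine{
		Program:  "example",
		Hostname: "localhost",
		Prefix:   "/tmp/mgmt-example/", // assumed scratch directory
		Debug:    false,
		Logf: func(format string, v ...interface{}) {
			log.Printf("engine: "+format, v...)
		},
	}
	if err := ge.Init(); err != nil { // engine starts off paused
		log.Fatalf("init failed: %+v", err)
	}
	g, err := pgraph.NewGraph("example") // a real graph would hold resources
	if err != nil {
		log.Fatalf("could not create graph: %+v", err)
	}
	if err := ge.Load(g); err != nil { // stage the pending graph
		log.Fatalf("load failed: %+v", err)
	}
	if err := ge.Validate(); err != nil { // catch errors before Commit
		log.Fatalf("validate failed: %+v", err)
	}
	if err := ge.Commit(); err != nil { // swap the pending graph in
		log.Fatalf("commit failed: %+v", err)
	}
	if err := ge.Resume(); err != nil { // un-pause and run it
		log.Fatalf("resume failed: %+v", err)
	}
	// ...later, on shutdown:
	if err := ge.Pause(false); err != nil {
		log.Printf("pause failed: %+v", err)
	}
	if err := ge.Close(); err != nil {
		log.Printf("close failed: %+v", err)
	}
}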

(new file, 37 lines)
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build !root
package graph
import (
"fmt"
"testing"
"github.com/purpleidea/mgmt/util/errwrap"
)
func TestMultiErr(t *testing.T) {
var err error
e := fmt.Errorf("some error")
err = errwrap.Append(err, e) // build an error from a nil base
// ensure that this lib allows us to append to a nil
if err == nil {
t.Errorf("missing error")
}
}

engine/graph/refresh.go (new file, 59 lines)
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package graph
import (
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/pgraph"
)
// RefreshPending determines if any previous nodes have a refresh pending here.
// If this is true, it means I am expected to apply a refresh when I next run.
func (obj *Engine) RefreshPending(vertex pgraph.Vertex) bool {
var refresh bool
for _, e := range obj.graph.IncomingGraphEdges(vertex) {
// if we asked for a notify *and* if one is pending!
edge := e.(*engine.Edge) // panic if wrong
if edge.Notify && edge.Refresh() {
refresh = true
break
}
}
return refresh
}
// SetUpstreamRefresh sets the refresh value to any upstream vertices.
func (obj *Engine) SetUpstreamRefresh(vertex pgraph.Vertex, b bool) {
for _, e := range obj.graph.IncomingGraphEdges(vertex) {
edge := e.(*engine.Edge) // panic if wrong
if edge.Notify {
edge.SetRefresh(b)
}
}
}
// SetDownstreamRefresh sets the refresh value to any downstream vertices.
func (obj *Engine) SetDownstreamRefresh(vertex pgraph.Vertex, b bool) {
for _, e := range obj.graph.OutgoingGraphEdges(vertex) {
edge := e.(*engine.Edge) // panic if wrong
// if we asked for a notify
if edge.Notify {
edge.SetRefresh(b)
}
}
}

engine/graph/reverse.go (new file, 300 lines)

@@ -0,0 +1,300 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package graph
import (
"fmt"
"io/ioutil"
"os"
"path"
"sort"
"github.com/purpleidea/mgmt/engine"
engineUtil "github.com/purpleidea/mgmt/engine/util"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util/errwrap"
)
const (
// ReverseFile is the file name in the resource state dir where any
// reversal information is stored.
ReverseFile = "reverse"
// ReversePerm is the permissions mode used to create the ReverseFile.
ReversePerm = 0600
)
// Reversals adds the reversals onto the loaded graph. This should happen last,
// and before Commit.
func (obj *Engine) Reversals() error {
if obj.nextGraph == nil {
return fmt.Errorf("there is no active graph to add reversals to")
}
// Initially get all of the reversals to seek out all possible errors.
// XXX: The engine needs to know where data might have been stored if we
// XXX: want to potentially allow alternate read/write paths, like etcd.
// XXX: In this scenario, we'd have to store a token somewhere to let us
// XXX: know to look elsewhere for the special ReversalList read method.
data, err := obj.ReversalList() // (map[string]string, error)
if err != nil {
return errwrap.Wrapf(err, "the reversals had errors")
}
if len(data) == 0 {
return nil // end early
}
resMatch := func(r1, r2 engine.Res) bool { // simple match on UID only!
if r1.Kind() != r2.Kind() {
return false
}
if r1.Name() != r2.Name() {
return false
}
return true
}
resInList := func(needle engine.Res, haystack []engine.Res) bool {
for _, res := range haystack {
if resMatch(needle, res) {
return true
}
}
return false
}
if obj.Debug {
obj.Logf("decoding %d reversals...", len(data))
}
resources := []engine.Res{}
// do this in a sorted order so that it errors deterministically
sorted := []string{}
for key := range data {
sorted = append(sorted, key)
}
sort.Strings(sorted)
for _, key := range sorted {
val := data[key]
// XXX: replace this ResToB64 method with one that stores it in
// a human readable format, in case someone wants to hack and
// edit it manually.
// XXX: we probably want this to be YAML, it works with the diff
// too...
r, err := engineUtil.B64ToRes(val)
if err != nil {
return errwrap.Wrapf(err, "error decoding res with UID: `%s`", key)
}
res, ok := r.(engine.ReversibleRes)
if !ok {
// this requirement is here to keep things simpler...
return errwrap.Wrapf(err, "decoded res with UID: `%s` was not reversible", key)
}
matchFn := func(vertex pgraph.Vertex) (bool, error) {
r, ok := vertex.(engine.Res)
if !ok {
return false, fmt.Errorf("not a Res")
}
if !resMatch(r, res) {
return false, nil
}
return true, nil
}
// FIXME: not efficient, we could build a cache-map first
vertex, err := obj.nextGraph.VertexMatchFn(matchFn) // (Vertex, error)
if err != nil {
return errwrap.Wrapf(err, "error searching graph for match")
}
if vertex != nil { // found one!
continue // it doesn't need reversing yet
}
// TODO: check for (incompatible?) duplicates instead
if resInList(res, resources) { // we've already got this one...
continue
}
// We set this in two different places to be safe. It ensures
// that we erase the reversal state file after we've used it.
res.ReversibleMeta().Reversal = true // set this for later...
resources = append(resources, res)
}
if len(resources) == 0 {
return nil // end early
}
// Now that we've passed the chance of any errors, we modify the graph.
obj.Logf("adding %d reversals...", len(resources))
for _, res := range resources {
obj.nextGraph.AddVertex(res)
}
// TODO: Do we want a way for stored reversals to add edges too?
// It would be great to ensure we didn't add any graph cycles here, but
// instead of checking now, we'll move the check into the main loop.
return nil
}
// ReversalList returns all the available pending reversal data on this host. It
// can then be decoded by whatever method is appropriate.
func (obj *Engine) ReversalList() (map[string]string, error) {
result := make(map[string]string) // some key to contents
dir := obj.statePrefix() // loop through this dir...
files, err := ioutil.ReadDir(dir)
if err != nil && !os.IsNotExist(err) {
return nil, errwrap.Wrapf(err, "error reading list of state dirs")
} else if err != nil {
return result, nil // nothing found, no state dir exists yet
}
for _, x := range files {
key := x.Name() // some uid for the resource
file := path.Join(dir, key, ReverseFile)
content, err := ioutil.ReadFile(file)
if err != nil && !os.IsNotExist(err) {
return nil, errwrap.Wrapf(err, "could not read reverse file: %s", file)
} else if err != nil {
continue // file does not exist, skip
}
// file exists!
str := string(content)
result[key] = str // save
}
return result, nil
}
// ReversalInit performs the reversal initialization steps if necessary for this
// resource.
func (obj *State) ReversalInit() error {
res, ok := obj.Vertex.(engine.ReversibleRes)
if !ok {
return nil // nothing to do
}
if res.ReversibleMeta().Disabled {
return nil // nothing to do, reversal isn't enabled
}
// If the reversal is enabled, but we are the result of a previous
// reversal, then this will overwrite that older reversal request, and
// our resource should be designed to deal with that. This happens if we
// return a reversible resource as the reverse of a resource that was
// reversed. It's probably fairly rare.
if res.ReversibleMeta().Reversal {
obj.Logf("triangle reversal") // warn!
}
r, err := res.Reversed()
if err != nil {
return errwrap.Wrapf(err, "could not reverse: %s", res.String())
}
if r == nil {
return nil // this can't be reversed, or isn't implemented here
}
// We set this in two different places to be safe. It ensures that we
// erase the reversal state file after we've used it.
r.ReversibleMeta().Reversal = true // set this for later...
// XXX: replace this ResToB64 method with one that stores it in a human
// readable format, in case someone wants to hack and edit it manually.
// XXX: we probably want this to be YAML, it works with the diff too...
str, err := engineUtil.ResToB64(r)
if err != nil {
return errwrap.Wrapf(err, "could not encode: %s", res.String())
}
// TODO: put this method on traits.Reversible as part of the interface?
return obj.ReversalWrite(str, res.ReversibleMeta().Overwrite) // Store!
}
// ReversalClose performs the reversal shutdown steps if necessary for this
// resource.
func (obj *State) ReversalClose() error {
res, ok := obj.Vertex.(engine.ReversibleRes)
if !ok {
return nil // nothing to do
}
// Don't check res.ReversibleMeta().Disabled because we're removing the
// previous one. That value only applies if we're doing a new reversal.
if !res.ReversibleMeta().Reversal {
return nil // nothing to erase, we're not a reversal resource
}
if !obj.isStateOK { // did we successfully reverse?
obj.Logf("did not complete reversal") // warn
return nil
}
// TODO: put this method on traits.Reversible as part of the interface?
return obj.ReversalDelete() // Erase our reversal instructions.
}
// ReversalWrite stores the reversal state information for this resource.
func (obj *State) ReversalWrite(str string, overwrite bool) error {
dir, err := obj.varDir("") // private version
if err != nil {
return errwrap.Wrapf(err, "could not get VarDir for reverse")
}
file := path.Join(dir, ReverseFile) // return a unique file
content, err := ioutil.ReadFile(file)
if err != nil && !os.IsNotExist(err) {
return errwrap.Wrapf(err, "could not read reverse file: %s", file)
}
// file exists and we shouldn't overwrite if different
if err == nil && !overwrite {
// compare to existing file
oldStr := string(content)
if str != oldStr {
obj.Logf("existing, pending, reversible resource exists")
//obj.Logf("diff:")
//obj.Logf("") // TODO: print the diff w/o and secret values
return fmt.Errorf("existing, pending, reversible resource exists")
}
}
return ioutil.WriteFile(file, []byte(str), ReversePerm)
}
// ReversalDelete removes the reversal state information for this resource.
func (obj *State) ReversalDelete() error {
dir, err := obj.varDir("") // private version
if err != nil {
return errwrap.Wrapf(err, "could not get VarDir for reverse")
}
file := path.Join(dir, ReverseFile) // return a unique file
// FIXME: why do we see these removals when there isn't a state file?
if err = os.Remove(file); os.IsNotExist(err) {
return nil // ignore missing files
}
return errwrap.Wrapf(err, "could not remove reverse state file")
}
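Taken together, the reversal data lives under the per-resource state directories, one file per resource: <Prefix>/<StateDir>/<uid>/reverse, each holding a base64-encoded resource and written with mode 0600. A hedged sketch of listing the pending set from outside the package; it assumes an Engine constructed with only its Prefix is enough for this read-only call, which holds for the code above but is not how a real engine gets built:

package main

import (
    "fmt"

    "github.com/purpleidea/mgmt/engine/graph"
)

func main() {
    // illustrative only: a bare Engine with just the Prefix set
    eng := &graph.Engine{Prefix: "/var/lib/mgmt/"}

    data, err := eng.ReversalList() // map of uid -> encoded reversal payload
    if err != nil {
        fmt.Println("error:", err)
        return
    }
    for uid := range data {
        fmt.Printf("pending reversal for: %s\n", uid)
    }
}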


@@ -1,21 +1,21 @@
// Mgmt
-// Copyright (C) 2013-2017+ James Shubin and the project contributors
+// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Affero General Public License as published by
+// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Affero General Public License for more details.
+// GNU General Public License for more details.
//
-// You should have received a copy of the GNU Affero General Public License
+// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
-package pgraph
+package graph
import (
"fmt"
@@ -23,49 +23,48 @@ import (
"strconv"
"strings"
+"github.com/purpleidea/mgmt/util/errwrap"
"github.com/purpleidea/mgmt/util/semaphore"
-multierr "github.com/hashicorp/go-multierror"
)
// SemaSep is the trailing separator to split the semaphore id from the size.
const SemaSep = ":"
-// SemaLock acquires the list of semaphores in the graph.
+// semaLock acquires the list of semaphores in the graph.
-func (g *Graph) SemaLock(semas []string) error {
+func (obj *Engine) semaLock(semas []string) error {
var reterr error
sort.Strings(semas) // very important to avoid deadlock in the dag!
for _, id := range semas {
-g.slock.Lock() // semaphore creation lock
+obj.slock.Lock() // semaphore creation lock
-sema, ok := g.semas[id] // lookup
+sema, ok := obj.semas[id] // lookup
if !ok {
size := SemaSize(id) // defaults to 1
-g.semas[id] = semaphore.NewSemaphore(size)
+obj.semas[id] = semaphore.NewSemaphore(size)
-sema = g.semas[id]
+sema = obj.semas[id]
}
-g.slock.Unlock()
+obj.slock.Unlock()
-if err := sema.P(1); err != nil { // lock!
+err := sema.P(1) // lock!
-reterr = multierr.Append(reterr, err) // list of errors
+reterr = errwrap.Append(reterr, err) // list of errors
-}
}
return reterr
}
-// SemaUnlock releases the list of semaphores in the graph.
+// semaUnlock releases the list of semaphores in the graph.
-func (g *Graph) SemaUnlock(semas []string) error {
+func (obj *Engine) semaUnlock(semas []string) error {
var reterr error
sort.Strings(semas) // unlock in the same order to remove partial locks
for _, id := range semas {
-sema, ok := g.semas[id] // lookup
+sema, ok := obj.semas[id] // lookup
if !ok {
// programming error!
panic(fmt.Sprintf("graph: sema: %s does not exist", id))
}
-if err := sema.V(1); err != nil { // unlock!
+err := sema.V(1) // unlock!
-reterr = multierr.Append(reterr, err) // list of errors
+reterr = errwrap.Append(reterr, err) // list of errors
-}
}
return reterr
}


@@ -0,0 +1,37 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build !root
package graph
import (
"testing"
)
func TestSemaSize(t *testing.T) {
pairs := map[string]int{
"id:42": 42,
":13": 13,
"some_id": 1,
}
for id, size := range pairs {
if i := SemaSize(id); i != size {
t.Errorf("sema id `%s`, expected: `%d`, got: `%d`", id, size, i)
}
}
}
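The SemaSize implementation isn't part of this hunk, but the three cases above pin down its contract: whatever follows the trailing SemaSep is the size, and anything without a parseable suffix falls back to 1. A minimal sketch consistent with those cases (it lives in package graph and assumes the strings and strconv imports):

// semaSizeSketch is a stand-in showing one way to satisfy the test above.
func semaSizeSketch(id string) int {
    size := 1 // the default when no valid suffix is given
    if i := strings.LastIndex(id, SemaSep); i > -1 {
        if num, err := strconv.Atoi(id[i+len(SemaSep):]); err == nil {
            size = num
        }
    }
    return size
}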

engine/graph/sendrecv.go (new file, 152 lines)

@@ -0,0 +1,152 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package graph
import (
"fmt"
"reflect"
"github.com/purpleidea/mgmt/engine"
engineUtil "github.com/purpleidea/mgmt/engine/util"
"github.com/purpleidea/mgmt/util/errwrap"
)
// SendRecv pulls the sent values into the receive slots. It is called by the
// receiver and must be given as input the full resource struct to receive on.
// It applies the loaded values to the resource.
func (obj *Engine) SendRecv(res engine.RecvableRes) (map[string]bool, error) {
recv := res.Recv()
if obj.Debug {
// NOTE: this could expose private resource data like passwords
obj.Logf("%s: SendRecv: %+v", res, recv)
}
var updated = make(map[string]bool) // list of updated keys
var err error
for k, v := range recv {
updated[k] = false // default
v.Changed = false // reset to the default
var st interface{} = v.Res // old style direct send/recv
if true { // new style send/recv API
st = v.Res.Sent()
}
if st == nil {
e := fmt.Errorf("received nil value from: %s", v.Res)
err = errwrap.Append(err, e) // list of errors
continue
}
if e := engineUtil.StructFieldCompat(st, v.Key, res, k); e != nil {
err = errwrap.Append(err, e) // list of errors
continue
}
// send
m1, e := engineUtil.StructTagToFieldName(st)
if e != nil {
err = errwrap.Append(err, e) // list of errors
continue
}
key1, exists := m1[v.Key]
if !exists {
e := fmt.Errorf("requested key of `%s` not found in send struct", v.Key)
err = errwrap.Append(err, e) // list of errors
continue
}
obj1 := reflect.Indirect(reflect.ValueOf(st))
type1 := obj1.Type()
value1 := obj1.FieldByName(key1)
kind1 := value1.Kind()
// recv
m2, e := engineUtil.StructTagToFieldName(res)
if e != nil {
err = errwrap.Append(err, e) // list of errors
continue
}
key2, exists := m2[k]
if !exists {
e := fmt.Errorf("requested key of `%s` not found in recv struct", k)
err = errwrap.Append(err, e) // list of errors
continue
}
obj2 := reflect.Indirect(reflect.ValueOf(res)) // pass in full struct
type2 := obj2.Type()
value2 := obj2.FieldByName(key2)
kind2 := value2.Kind()
if obj.Debug {
obj.Logf("Send(%s) has %v: %v", type1, kind1, value1)
obj.Logf("Recv(%s) has %v: %v", type2, kind2, value2)
}
// i think we probably want the same kind, at least for now...
if kind1 != kind2 {
e := fmt.Errorf("kind mismatch between %s: %s and %s: %s", v.Res, kind1, res, kind2)
err = errwrap.Append(err, e) // list of errors
continue
}
// if the types don't match, we can't use send->recv
// FIXME: do we want to relax this for string -> *string ?
if e := TypeCmp(value1, value2); e != nil {
e := errwrap.Wrapf(e, "type mismatch between %s and %s", v.Res, res)
err = errwrap.Append(err, e) // list of errors
continue
}
// if we can't set, then well this is pointless!
if !value2.CanSet() {
e := fmt.Errorf("can't set %s.%s", res, k)
err = errwrap.Append(err, e) // list of errors
continue
}
// if we can't interface, we can't compare...
if !value1.CanInterface() || !value2.CanInterface() {
e := fmt.Errorf("can't interface %s.%s", res, k)
err = errwrap.Append(err, e) // list of errors
continue
}
// if the values aren't equal, we're changing the receiver
if !reflect.DeepEqual(value1.Interface(), value2.Interface()) {
// TODO: can we catch the panics here in case they happen?
value2.Set(value1) // do it for all types that match
updated[k] = true // we updated this key!
v.Changed = true // tag this key as updated!
obj.Logf("SendRecv: %s.%s -> %s.%s", v.Res, v.Key, res, k)
}
}
return updated, err
}
// TypeCmp compares two reflect values to see if they have the same Type. It
// currently doesn't need to recurse into pointers to compare their targets.
func TypeCmp(a, b reflect.Value) error {
ta, tb := a.Type(), b.Type()
if ta != tb {
return fmt.Errorf("type mismatch: %s != %s", ta, tb)
}
// NOTE: it seems we don't need to recurse into pointers to sub check!
return nil // identical Type()'s
}
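The reflection dance above is easier to follow on a toy pair of structs. This standalone sketch copies one field into another the same way SendRecv does; the sender/receiver types, tags and field names are made up, and the real code first maps struct-tag keys to field names via StructTagToFieldName rather than hard-coding them:

package main

import (
    "fmt"
    "reflect"
)

type sender struct {
    Output string `lang:"output"` // hypothetical "send" field
}

type receiver struct {
    Content string `lang:"content"` // hypothetical "recv" field
}

func main() {
    src := &sender{Output: "hello"}
    dst := &receiver{}

    // indirect through the pointers, then look up the fields by name
    value1 := reflect.Indirect(reflect.ValueOf(src)).FieldByName("Output")
    value2 := reflect.Indirect(reflect.ValueOf(dst)).FieldByName("Content")

    if value1.Kind() != value2.Kind() {
        fmt.Println("kind mismatch, refusing to copy")
        return
    }
    if value2.CanSet() && !reflect.DeepEqual(value1.Interface(), value2.Interface()) {
        value2.Set(value1) // the "recv" side now holds the sent value
    }
    fmt.Println(dst.Content) // prints: hello
}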

engine/graph/state.go (new file, 408 lines)

@@ -0,0 +1,408 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package graph
import (
"fmt"
"sync"
"time"
"github.com/purpleidea/mgmt/converger"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util"
"github.com/purpleidea/mgmt/util/errwrap"
)
// State stores some state about the resource it is mapped to.
type State struct {
// Graph is a pointer to the graph that this vertex is part of.
Graph *pgraph.Graph
// Vertex is the pointer in the graph that this state corresponds to. It
// can be converted to a `Res` if necessary.
// TODO: should this be passed in on Init instead?
Vertex pgraph.Vertex
Program string
Hostname string
World engine.World
// Prefix is a unique directory prefix which can be used. It should be
// created if needed.
Prefix string
//Converger *converger.Coordinator
// Debug turns on additional output and behaviours.
Debug bool
// Logf is the logging function that should be used to display messages.
Logf func(format string, v ...interface{})
timestamp int64 // last updated timestamp
isStateOK bool // is state OK or do we need to run CheckApply ?
workerErr error // did the Worker error?
// doneChan closes when Watch should shut down. When any of the
// following channels close, it causes this to close.
doneChan chan struct{}
// processDone is closed when the Process/CheckApply function fails
// permanently, and wants to cause Watch to exit.
processDone chan struct{}
// watchDone is closed when the Watch function fails permanently, and we
// close this to signal we should definitely exit. (Often redundant.)
watchDone chan struct{} // could be shared with limitDone
// limitDone is closed when the Watch function fails permanently, and we
// close this to signal we should definitely exit. This happens inside
// of the limit loop of the Process section of Worker.
limitDone chan struct{} // could be shared with watchDone
// removeDone is closed when the vertexRemoveFn method asks for an exit.
// This happens when we're switching graphs. The switch to an "empty" graph
// is the equivalent of asking for a final shutdown.
removeDone chan struct{}
// eventsDone is closed when we shutdown the Process loop because we
// closed without error. In theory this shouldn't happen, but it could
// if Watch returns without error for some reason.
eventsDone chan struct{}
// eventsChan is the channel that the engine listens on for events from
// the Watch loop for that resource. The event is nil normally, except
// when events are sent on this channel from the engine. This only
// happens as a signaling mechanism when Watch has shutdown and we want
// to notify the Process loop which reads from this.
eventsChan chan error // outgoing from resource
// pokeChan is a separate channel that the Process loop listens on to
// know when we might need to run Process. It never closes, and is safe
// to send on since it is buffered.
pokeChan chan struct{} // outgoing from resource
// paused represents if this particular res is paused or not.
paused bool
// pauseSignal closes to request a pause of this resource.
pauseSignal chan struct{}
// resumeSignal closes to request a resume of this resource.
resumeSignal chan struct{}
// pausedAck is used to send an ack message saying that we've paused.
pausedAck *util.EasyAck
wg *sync.WaitGroup // used for all vertex specific processes
cuid *converger.UID // primary converger
tuid *converger.UID // secondary converger
init *engine.Init // a copy of the init struct passed to res Init
}
// Init initializes structures like channels.
func (obj *State) Init() error {
res, isRes := obj.Vertex.(engine.Res)
if !isRes {
return fmt.Errorf("vertex is not a Res")
}
if obj.Hostname == "" {
return fmt.Errorf("the Hostname is empty")
}
if obj.Prefix == "" {
return fmt.Errorf("the Prefix is empty")
}
if obj.Prefix == "/" {
return fmt.Errorf("the Prefix is root")
}
if obj.Logf == nil {
return fmt.Errorf("the Logf function is missing")
}
obj.doneChan = make(chan struct{})
obj.processDone = make(chan struct{})
obj.watchDone = make(chan struct{})
obj.limitDone = make(chan struct{})
obj.removeDone = make(chan struct{})
obj.eventsDone = make(chan struct{})
obj.eventsChan = make(chan error)
obj.pokeChan = make(chan struct{}, 1) // must be buffered
//obj.paused = false // starts off as started
obj.pauseSignal = make(chan struct{})
//obj.resumeSignal = make(chan struct{}) // happens on pause
//obj.pausedAck = util.NewEasyAck() // happens on pause
obj.wg = &sync.WaitGroup{}
//obj.cuid = obj.Converger.Register() // gets registered in Worker()
//obj.tuid = obj.Converger.Register() // gets registered in Worker()
obj.init = &engine.Init{
Program: obj.Program,
Hostname: obj.Hostname,
// Watch:
Running: obj.event,
Event: obj.event,
Done: obj.doneChan,
// CheckApply:
Refresh: func() bool {
res, ok := obj.Vertex.(engine.RefreshableRes)
if !ok {
panic("res does not support the Refreshable trait")
}
return res.Refresh()
},
Send: engine.GenerateSendFunc(res),
Recv: engine.GenerateRecvFunc(res),
// FIXME: pass in a safe, limited query func instead?
// TODO: not implemented, use FilteredGraph
//Graph: func() *pgraph.Graph {
// _, ok := obj.Vertex.(engine.CanGraphQueryRes)
// if !ok {
// panic("res does not support the GraphQuery trait")
// }
// return obj.Graph // we return in a func so it's fresh!
//},
FilteredGraph: func() (*pgraph.Graph, error) {
graph, err := pgraph.NewGraph("filtered")
if err != nil {
return nil, errwrap.Wrapf(err, "could not create graph")
}
// filter graph and build a new one...
adjacency := obj.Graph.Adjacency()
for v1 := range adjacency {
// check we're allowed
r1, ok := v1.(engine.GraphQueryableRes)
if !ok {
continue
}
// pass in information on requestor...
if err := r1.GraphQueryAllowed(
engine.GraphQueryableOptionKind(res.Kind()),
engine.GraphQueryableOptionName(res.Name()),
// TODO: add more information...
); err != nil {
continue
}
graph.AddVertex(v1)
for v2, edge := range adjacency[v1] {
r2, ok := v2.(engine.GraphQueryableRes)
if !ok {
continue
}
// pass in information on requestor...
if err := r2.GraphQueryAllowed(
engine.GraphQueryableOptionKind(res.Kind()),
engine.GraphQueryableOptionName(res.Name()),
// TODO: add more information...
); err != nil {
continue
}
//graph.AddVertex(v2) // redundant
graph.AddEdge(v1, v2, edge)
}
}
return graph, nil // we return in a func so it's fresh!
},
World: obj.World,
VarDir: obj.varDir,
Debug: obj.Debug,
Logf: func(format string, v ...interface{}) {
obj.Logf("resource: "+format, v...)
},
}
// run the init
if obj.Debug {
obj.Logf("Init(%s)", res)
}
// write the reverse request to the disk...
if err := obj.ReversalInit(); err != nil {
return err // TODO: test this code path...
}
err := res.Init(obj.init)
if obj.Debug {
obj.Logf("Init(%s): Return(%+v)", res, err)
}
if err != nil {
return errwrap.Wrapf(err, "could not Init() resource")
}
return nil
}
// Close shuts down and performs any cleanup. This is most akin to a "post" or
// cleanup command as the initiator for closing a vertex happens in graph sync.
func (obj *State) Close() error {
res, isRes := obj.Vertex.(engine.Res)
if !isRes {
return fmt.Errorf("vertex is not a Res")
}
//if obj.cuid != nil {
// obj.cuid.Unregister() // gets unregistered in Worker()
//}
//if obj.tuid != nil {
// obj.tuid.Unregister() // gets unregistered in Worker()
//}
// redundant safety
obj.wg.Wait() // wait until all poke's and events on me have exited
// run the close
if obj.Debug {
obj.Logf("Close(%s)", res)
}
var reverr error
// clear the reverse request from the disk...
if err := obj.ReversalClose(); err != nil {
// TODO: test this code path...
// TODO: should this be an error or a warning?
reverr = err
}
reterr := res.Close()
if obj.Debug {
obj.Logf("Close(%s): Return(%+v)", res, reterr)
}
reterr = errwrap.Append(reterr, reverr)
return reterr
}
// Poke sends a notification on the poke channel. This channel is used to notify
// the Worker to run the Process/CheckApply when it can. This is used when there
// is a need to schedule or reschedule some work which got postponed or dropped.
// This doesn't contain any internal synchronization primitives or wait groups,
// callers are expected to make sure that they don't leave any of these running
// by the time the Worker() shuts down.
func (obj *State) Poke() {
// redundant
//if len(obj.pokeChan) > 0 {
// return
//}
select {
case obj.pokeChan <- struct{}{}:
default: // if chan is now full because more than one poke happened...
}
}
// Pause pauses this resource. It should not be called on any already paused
// resource. It will block until the resource pauses with an acknowledgment, or
// until an exit for that resource is seen. If the latter happens it will error.
// It is NOT thread-safe with the Resume() method so only call either one at a
// time.
func (obj *State) Pause() error {
if obj.paused {
return fmt.Errorf("already paused")
}
obj.pausedAck = util.NewEasyAck()
obj.resumeSignal = make(chan struct{}) // build the resume signal
close(obj.pauseSignal)
obj.Poke() // unblock and notice the pause if necessary
// wait for ack (or exit signal)
select {
case <-obj.pausedAck.Wait(): // we got it!
// we're paused
case <-obj.doneChan:
return engine.ErrClosed
}
obj.paused = true
return nil
}
// Resume unpauses this resource. It can be safely called on a brand-new
// resource that has just started running without incident. It is NOT
// thread-safe with the Pause() method, so only call either one at a time.
func (obj *State) Resume() {
// TODO: do we need a mutex around Resume?
if !obj.paused { // no need to unpause brand-new resources
return
}
obj.pauseSignal = make(chan struct{}) // rebuild for next pause
close(obj.resumeSignal)
//obj.Poke() // not needed, we're already waiting for resume
obj.paused = false
// no need to wait for it to resume
//return // implied
}
// event is a helper function to send an event to the CheckApply process loop.
// It can be used for the initial `running` event, or any regular event. You
// should instead use Poke() to "schedule" a new Process/CheckApply loop when
// one might be needed. This method will block until we're unpaused and ready to
// receive on the events channel.
func (obj *State) event() {
obj.setDirty() // assume we're initially dirty
select {
case obj.eventsChan <- nil:
// send!
}
//return // implied
}
// setDirty marks the resource state as dirty. This signals to the engine that
// CheckApply will have some work to do in order to converge it.
func (obj *State) setDirty() {
obj.tuid.StopTimer()
obj.isStateOK = false
}
// poll is a replacement for Watch when the Poll metaparameter is used.
func (obj *State) poll(interval uint32) error {
// create a time.Ticker for the given interval
ticker := time.NewTicker(time.Duration(interval) * time.Second)
defer ticker.Stop()
obj.init.Running() // when started, notify engine that we're running
for {
select {
case <-ticker.C: // received the timer event
obj.init.Logf("polling...")
case <-obj.init.Done: // signal for shutdown request
return nil
}
obj.init.Event() // notify engine of an event (this can block)
}
}
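The pause fields above form a small handshake: Pause builds a fresh ack and resume signal, closes pauseSignal, and waits for the ack; Resume rebuilds pauseSignal and then closes resumeSignal. A stripped-down model of just that protocol, with util.EasyAck replaced by a plain channel and all error and shutdown handling omitted:

type toyState struct {
    pauseSignal  chan struct{}
    resumeSignal chan struct{}
    pausedAck    chan struct{}
}

func newToyState() *toyState {
    return &toyState{
        pauseSignal: make(chan struct{}), // mirrors Init() above
    }
}

func (obj *toyState) pause() {
    obj.pausedAck = make(chan struct{})    // fresh ack for this pause
    obj.resumeSignal = make(chan struct{}) // fresh resume signal
    close(obj.pauseSignal)                 // ask the worker to pause
    <-obj.pausedAck                        // wait for the acknowledgment
}

func (obj *toyState) resume() {
    obj.pauseSignal = make(chan struct{}) // rebuild for the next pause
    close(obj.resumeSignal)               // let the worker continue
}

func (obj *toyState) worker(done chan struct{}) {
    for {
        select {
        case <-obj.pauseSignal:
            close(obj.pausedAck) // acknowledge: we are now paused
            <-obj.resumeSignal   // block here until resumed
        case <-done:
            return
        }
    }
}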

engine/graph/vardir.go (new file, 51 lines)

@@ -0,0 +1,51 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package graph
import (
"fmt"
"os"
"path"
"github.com/purpleidea/mgmt/util/errwrap"
)
// varDir returns the path to a working directory for the resource. It will try
// to create the directory first, and return an error if this fails. The dir
// should be cleaned up by the resource on Close if it wishes to discard the
// contents. If it does not, then a future resource with the same kind and name
// may see those contents in that directory. The resource should clean up the
// contents before use if it is important that nothing exist. It is always
// possible that contents could remain after an abrupt crash, so do not store
// overly sensitive data unless you're aware of the risks.
func (obj *State) varDir(extra string) (string, error) {
// Using extra adds additional dirs onto our namespace. An empty extra
// adds no additional directories.
if obj.Prefix == "" { // safety
return "", fmt.Errorf("the VarDir prefix is empty")
}
// an empty string at the end has no effect
p := fmt.Sprintf("%s/", path.Join(obj.Prefix, extra))
if err := os.MkdirAll(p, 0770); err != nil {
return "", errwrap.Wrapf(err, "can't create prefix in: %s", p)
}
// returns with a trailing slash as per the mgmt file res convention
return p, nil
}
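From the resource side this is reached through the Init.VarDir handle that the engine wires to the method above. A hedged fragment of typical use; the ExampleRes type, the "cache" namespace and the file name are made up, and the path, engine and errwrap imports are assumed:

type ExampleRes struct {
    init *engine.Init // kept from Init(), as real resources do
}

func (obj *ExampleRes) cachePath() (string, error) {
    dir, err := obj.init.VarDir("cache") // a per kind+name working dir
    if err != nil {
        return "", errwrap.Wrapf(err, "could not get a working dir")
    }
    // dir comes back with a trailing slash, per the convention above
    return path.Join(dir, "state.json"), nil
}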

engine/graphqueryable.go (new file, 70 lines)

@@ -0,0 +1,70 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package engine
// GraphQueryableRes is the interface that must be implemented if you want your
// resource to be allowed to be queried from another resource in the graph. This
// is done as a form of explicit authorization tracking so that we can consider
// security aspects more easily. Ultimately, all resource code should be
// trusted, but it's still a good idea to know if a particular resource is even
// able to access information about another one, and if your resource doesn't
// add the trait supporting this, then it won't be allowed.
type GraphQueryableRes interface {
Res // implement everything in Res but add the additional requirements
// GraphQueryAllowed returns nil if you're allowed to query the graph.
GraphQueryAllowed(...GraphQueryableOption) error
}
// GraphQueryableOption is an option that can be used to identify the
// resource making the query.
type GraphQueryableOption func(*GraphQueryableOptions)
// GraphQueryableOptions represents the different possible configurable options.
type GraphQueryableOptions struct {
// Kind is the kind of the resource making the access.
Kind string
// Name is the name of the resource making the access.
Name string
// TODO: add more options if needed
}
// Apply is a helper function to apply a list of options to the struct. You
// should initialize it with defaults you want, and then apply any you've
// received like this.
func (obj *GraphQueryableOptions) Apply(opts ...GraphQueryableOption) {
for _, optionFunc := range opts { // apply the options
optionFunc(obj)
}
}
// GraphQueryableOptionKind tells the GraphQueryAllowed function what the
// resource kind is.
func GraphQueryableOptionKind(kind string) GraphQueryableOption {
return func(gqo *GraphQueryableOptions) {
gqo.Kind = kind
}
}
// GraphQueryableOptionName tells the GraphQueryAllowed function what the
// resource name is.
func GraphQueryableOptionName(name string) GraphQueryableOption {
return func(gqo *GraphQueryableOptions) {
gqo.Name = name
}
}
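For reference, a hypothetical resource-side implementation; the ExampleRes type and the "only exec may query us" policy are made up, the rest of the Res implementation is omitted, and the fmt and engine imports are assumed:

// GraphQueryAllowed permits graph queries only from exec resources.
func (obj *ExampleRes) GraphQueryAllowed(opts ...engine.GraphQueryableOption) error {
    options := &engine.GraphQueryableOptions{} // defaults: empty kind and name
    options.Apply(opts...)
    if options.Kind != "exec" {
        return fmt.Errorf("only exec resources may query this resource")
    }
    return nil
}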

engine/metaparams.go (new file, 202 lines)

@@ -0,0 +1,202 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package engine
import (
"fmt"
"strconv"
"github.com/purpleidea/mgmt/util"
"github.com/purpleidea/mgmt/util/errwrap"
"golang.org/x/time/rate"
)
// DefaultMetaParams are the defaults that are used for undefined metaparams.
// Don't modify this variable. Use .Copy() if you'd like some for yourself.
var DefaultMetaParams = &MetaParams{
Noop: false,
Retry: 0,
Delay: 0,
Poll: 0, // defaults to watching for events
Limit: rate.Inf, // defaults to no limit
Burst: 0, // no burst needed on an infinite rate
//Sema: []string{},
Rewatch: true,
Realize: false, // true would be more awesome, but unexpected for users
}
// MetaRes is the interface a resource must implement to support meta params.
// All resources must implement this.
type MetaRes interface {
// MetaParams lets you get or set meta params for the resource.
MetaParams() *MetaParams
// SetMetaParams lets you set all of the meta params for the resource in
// a single call.
SetMetaParams(*MetaParams)
}
// MetaParams provides some meta parameters that apply to every resource.
type MetaParams struct {
// Noop specifies that no changes should be made by the resource. It
// relies on the individual resource implementation, and can't protect
// you from a poorly or maliciously implemented resource.
Noop bool `yaml:"noop"`
// NOTE: there are separate Watch and CheckApply retry and delay values,
// but I've decided to use the same ones for both until there's a proper
// reason to want to do something differently for the Watch errors.
// Retry is the number of times to retry on error. Use -1 for infinite.
Retry int16 `yaml:"retry"`
// Delay is the number of milliseconds to wait between retries.
Delay uint64 `yaml:"delay"`
// Poll is the number of seconds between poll intervals. Use 0 to Watch.
Poll uint32 `yaml:"poll"`
// Limit is the number of events per second to allow through.
Limit rate.Limit `yaml:"limit"`
// Burst is the number of events to allow in a burst.
Burst int `yaml:"burst"`
// Sema is a list of semaphore ids in the form `id` or `id:count`. If
// you don't specify a count, then 1 is assumed. Note that a sema named
// `foo` (with its implicit count of 1) and a sema named `foo:1` are two
// different semaphores, even though both have a count of 1.
Sema []string `yaml:"sema"`
// Rewatch specifies whether we re-run the Watch worker during a swap
// if it has errored. When doing a GraphCmp to swap the graphs, if this
// is true, and this particular worker has errored, then we'll remove it
// and add it back as a new vertex, thus causing it to run again. This
// is different from the Retry metaparam which applies during the normal
// execution. It is only when this is exhausted that we're in permanent
// worker failure, and only then can we rely on this metaparam.
Rewatch bool `yaml:"rewatch"`
// Realize ensures that the resource is guaranteed to converge at least
// once before a potential graph swap removes or changes it. This
// guarantee is useful for fast changing graphs, to ensure that the
// brief creation of a resource is seen. This guarantee does not prevent
// against the engine quitting normally, and it can't guarantee it if
// the resource is blocked because of a failed pre-requisite resource.
// XXX: Not implemented!
Realize bool `yaml:"realize"`
}
// Cmp compares two MetaParams structs and determines if they're equivalent.
func (obj *MetaParams) Cmp(meta *MetaParams) error {
if obj.Noop != meta.Noop {
return fmt.Errorf("values for Noop are different")
}
// XXX: add a one way cmp like we used to have ?
//if obj.Noop != meta.Noop {
// // obj is the existing res, res is the *new* resource
// // if we go from no-noop -> noop, we can re-use the obj
// // if we go from noop -> no-noop, we need to regenerate
// if obj.Noop { // asymmetrical
// return fmt.Errorf("values for Noop are different") // going from noop to no-noop!
// }
//}
if obj.Retry != meta.Retry {
return fmt.Errorf("values for Retry are different")
}
if obj.Delay != meta.Delay {
return fmt.Errorf("values for Delay are different")
}
if obj.Poll != meta.Poll {
return fmt.Errorf("values for Poll are different")
}
if obj.Limit != meta.Limit {
return fmt.Errorf("values for Limit are different")
}
if obj.Burst != meta.Burst {
return fmt.Errorf("values for Burst are different")
}
if err := util.SortedStrSliceCompare(obj.Sema, meta.Sema); err != nil {
return errwrap.Wrapf(err, "values for Sema are different")
}
if obj.Rewatch != meta.Rewatch {
return fmt.Errorf("values for Rewatch are different")
}
if obj.Realize != meta.Realize {
return fmt.Errorf("values for Realize are different")
}
return nil
}
// Validate runs some validation on the meta params.
func (obj *MetaParams) Validate() error {
if obj.Burst == 0 && !(obj.Limit == rate.Inf) { // blocked
return fmt.Errorf("permanently limited (rate != Inf, burst = 0)")
}
for _, s := range obj.Sema {
if s == "" {
return fmt.Errorf("semaphore is empty")
}
if _, err := strconv.Atoi(s); err == nil { // standalone int
return fmt.Errorf("semaphore format is invalid")
}
}
return nil
}
// Copy copies this struct and returns a new one.
func (obj *MetaParams) Copy() *MetaParams {
sema := []string{}
if obj.Sema != nil {
sema = make([]string, len(obj.Sema))
copy(sema, obj.Sema)
}
return &MetaParams{
Noop: obj.Noop,
Retry: obj.Retry,
Delay: obj.Delay,
Poll: obj.Poll,
Limit: obj.Limit, // FIXME: can we copy this type like this? test me!
Burst: obj.Burst,
Sema: sema,
Rewatch: obj.Rewatch,
Realize: obj.Realize,
}
}
// UnmarshalYAML is the custom unmarshal handler for the MetaParams struct. It
// is primarily useful for setting the defaults.
// TODO: this is untested
func (obj *MetaParams) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawMetaParams MetaParams // indirection to avoid infinite recursion
raw := rawMetaParams(*DefaultMetaParams) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = MetaParams(raw) // restore from indirection with type conversion!
return nil
}
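The default-preserving unmarshal trick at the end is a generic Go pattern and is easier to see in isolation. A standalone sketch with a stand-in settings struct (not part of mgmt):

package main

import (
    "fmt"

    "gopkg.in/yaml.v2"
)

type settings struct {
    Retry int  `yaml:"retry"`
    Noop  bool `yaml:"noop"`
}

var defaultSettings = &settings{Retry: 3, Noop: false}

func (obj *settings) UnmarshalYAML(unmarshal func(interface{}) error) error {
    type rawSettings settings            // indirection to avoid infinite recursion
    raw := rawSettings(*defaultSettings) // the defaults go here
    if err := unmarshal(&raw); err != nil {
        return err
    }
    *obj = settings(raw) // restore from indirection with type conversion
    return nil
}

func main() {
    var s settings
    if err := yaml.Unmarshal([]byte("noop: true\n"), &s); err != nil {
        panic(err)
    }
    fmt.Printf("%+v\n", s) // {Retry:3 Noop:true} -- retry kept its default
}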

engine/metaparams_test.go (new file, 42 lines)

@@ -0,0 +1,42 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build !root
package engine
import (
"testing"
)
func TestMetaCmp1(t *testing.T) {
m1 := &MetaParams{
Noop: true,
}
m2 := &MetaParams{
Noop: false,
}
// TODO: should we allow this? Maybe only with the future Mutate API?
//if err := m2.Cmp(m1); err != nil { // going from noop(false) -> noop(true) is okay!
// t.Errorf("the two resources do not match")
//}
if m1.Cmp(m2) == nil { // going from noop(true) -> noop(false) is not okay!
t.Errorf("the two resources should not match")
}
}

engine/refresh.go (new file, 32 lines)

@@ -0,0 +1,32 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package engine
// RefreshableRes is the interface a resource must implement to support refresh
// notifications. Default implementations for all of the methods declared in
// this interface can be obtained for your resource by anonymously adding the
// traits.Refreshable struct to your resource implementation.
type RefreshableRes interface {
Res // implement everything in Res but add the additional requirements
// Refresh returns the refresh notification state.
Refresh() bool
// SetRefresh sets the refresh notification state.
SetRefresh(bool)
}
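A hedged sketch of the consumer side: a resource that embeds traits.Refreshable (as suggested above) and keeps the *engine.Init it was given, so that CheckApply can ask whether a notification is pending. The SvcishRes type is made up, most of the Res interface is omitted, and the engine and traits imports are assumed:

type SvcishRes struct {
    traits.Base
    traits.Refreshable // provides Refresh() and SetRefresh(bool)

    init *engine.Init
}

func (obj *SvcishRes) CheckApply(apply bool) (bool, error) {
    if obj.init.Refresh() {
        // a notification arrived over a Notify edge; this is where a
        // service reload (or similar) would happen, even if the rest
        // of the state already looks correct
        obj.init.Logf("refresh requested")
    }
    // ... normal state checking and application would follow ...
    return true, nil
}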

engine/resources.go (new file, 314 lines)

@@ -0,0 +1,314 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package engine
import (
"encoding/gob"
"fmt"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util/errwrap"
"gopkg.in/yaml.v2"
)
// TODO: should each resource be a sub-package?
var registeredResources = map[string]func() Res{}
// RegisterResource registers a new resource by providing a constructor function
// that returns a resource object ready to be unmarshalled from YAML.
func RegisterResource(kind string, fn func() Res) {
f := fn()
if kind == "" {
panic("can't register a resource with an empty kind")
}
if _, ok := registeredResources[kind]; ok {
panic(fmt.Sprintf("a resource kind of %s is already registered", kind))
}
gob.Register(f)
registeredResources[kind] = fn
}
// RegisteredResourcesNames returns the kinds of the registered resources.
func RegisteredResourcesNames() []string {
kinds := []string{}
for k := range registeredResources {
kinds = append(kinds, k)
}
return kinds
}
// NewResource returns an empty resource object from a registered kind. It
// errors if the resource kind doesn't exist.
func NewResource(kind string) (Res, error) {
fn, ok := registeredResources[kind]
if !ok {
return nil, fmt.Errorf("no resource kind `%s` available", kind)
}
res := fn().Default()
res.SetKind(kind)
return res, nil
}
// NewNamedResource returns an empty resource object from a registered kind. It
// also sets the name. It is a wrapper around NewResource, and it errors if the
// name is empty.
func NewNamedResource(kind, name string) (Res, error) {
if name == "" {
return nil, fmt.Errorf("resource name is empty")
}
res, err := NewResource(kind)
if err != nil {
return nil, err
}
res.SetName(name)
return res, nil
}
// Init is the structure of values and references which is passed into all
// resources on initialization. None of these are available in Validate, or
// before Init runs.
type Init struct {
// Program is the name of the program.
Program string
// Hostname is the uuid for the host.
Hostname string
// Called from within Watch:
// Running must be called after your watches are all started and ready.
Running func()
// Event sends an event notifying the engine of a possible state change.
Event func()
// Done is a channel that will close to signal to us that it's time
// for us to shutdown.
Done chan struct{}
// Called from within CheckApply:
// Refresh returns whether the resource received a notification. This
// flag can be used to tell a svc to reload, or to perform some state
// change that wouldn't otherwise be noticed by inspection alone. You
// must implement the Refreshable trait for this to work.
Refresh func() bool
// Send exposes some variables you wish to send via the Send/Recv
// mechanism. You must implement the Sendable trait for this to work.
Send func(interface{}) error
// Recv provides a map of variables which were sent to this resource via
// the Send/Recv mechanism. You must implement the Recvable trait for
// this to work.
Recv func() map[string]*Send
// Other functionality:
// Graph is a function that returns the current graph. The returned
// value won't be valid after a graphsync so make sure to call this when
// you are about to use it, and discard it right after.
// FIXME: it might be better to offer a safer, more limited, GraphQuery?
//Graph func() *pgraph.Graph // TODO: not implemented, use FilteredGraph
// FilteredGraph is a function that returns a filtered variant of the
// current graph. Only resources that have allowed themselves to be added
// into this graph will appear. If they did not consent, then those
// vertices and any associated edges will not be present.
FilteredGraph func() (*pgraph.Graph, error)
// TODO: GraphQuery offers an interface to query the resource graph.
// World provides a connection to the outside world. This is most often
// used for communicating with the distributed database.
World World
// VarDir is a facility for local storage. It is used to return a path
// to a directory which may be used for temporary storage. It should be
// cleaned up on resource Close if the resource would like to delete the
// contents. The resource should not assume that the initial directory
// is empty, and it should be cleaned on Init if that is a requirement.
VarDir func(string) (string, error)
// Debug signals whether we are running in debugging mode. In this case,
// we might want to log additional messages.
Debug bool
// Logf is a logging facility which will correctly namespace any
// messages which you wish to pass on. You should use this instead of
// the log package directly for production quality resources.
Logf func(format string, v ...interface{})
}
// KindedRes is an interface that is required for a resource to have a kind.
type KindedRes interface {
// Kind returns a string representing the kind of resource this is.
Kind() string
// SetKind sets the resource kind and should only be called by the
// engine.
SetKind(string)
}
// NamedRes is an interface that is used so a resource can have a unique name.
type NamedRes interface {
Name() string
SetName(string)
}
// Res is the minimum interface you need to implement to define a new resource.
type Res interface {
fmt.Stringer // String() string
KindedRes
NamedRes // TODO: consider making this optional in the future
MetaRes // All resources must have meta params.
// Default returns a struct with sane defaults for this resource.
Default() Res
// Validate determines if the struct has been defined in a valid state.
Validate() error
// Init initializes the resource and passes in some external information
// and data from the engine.
Init(*Init) error
// Close is run by the engine to clean up after the resource is done.
Close() error
// Watch is run by the engine to monitor for state changes. If it
// detects any, it notifies the engine which will usually run CheckApply
// in response.
Watch() error
// CheckApply determines if the state of the resource is correct and if
// asked to with the `apply` variable, applies the requested state.
CheckApply(apply bool) (checkOK bool, err error)
// Cmp compares itself to another resource and returns an error if they
// are not equivalent. This is more strict than the Adapts method of the
// CompatibleRes interface which allows for equivalent differences if
// they have a compatible result in CheckApply.
Cmp(Res) error
}
// Repr returns a representation of a resource from its kind and name. This is
// used as the definitive format so that it can be changed in one place.
func Repr(kind, name string) string {
return fmt.Sprintf("%s[%s]", kind, name)
}
// Stringer returns a consistent and unique string representation of a resource.
func Stringer(res Res) string {
return Repr(res.Kind(), res.Name())
}
// Validate validates a resource by checking multiple aspects. This is the main
// entry point for running all the validation steps on a resource.
func Validate(res Res) error {
if res.Kind() == "" { // shouldn't happen IIRC
return fmt.Errorf("the Res has an empty Kind")
}
if res.Name() == "" {
return fmt.Errorf("the Res has an empty Name")
}
if err := res.MetaParams().Validate(); err != nil {
return errwrap.Wrapf(err, "the Res has an invalid meta param")
}
return res.Validate()
}
// InterruptableRes is an interface that adds interrupt functionality to
// resources. If the resource implements this interface, the engine will call
// the Interrupt method to shutdown the resource quickly. Running this method
// may leave the resource in a partial state, however this may be desired if you
// want a faster exit or if you'd prefer a partial state over letting the
// resource complete in a situation where you made an error and you wish to exit
// quickly to avoid data loss. It is usually triggered after multiple ^C
// signals.
type InterruptableRes interface {
Res
// Ask the resource to shutdown quickly. This can be called at any point
// in the resource lifecycle after Init. Close will still be called. It
// will only get called after an exit or pause request has been made. It
// is designed to unblock any long running operation that is occurring
// in the CheckApply portion of the life cycle. If the resource has
// already exited, running this method should not block. (That is to say
// that you should not expect CheckApply or Watch to be alive and be
// able to read from a channel to satisfy your request.) It is best to
// probably have this close a channel to multicast that signal around to
// anyone who can detect it in a select. If you are in a situation which
// cannot interrupt, then you can return an error.
// FIXME: implement, and check the above description is what we expect!
Interrupt() error
}
// CopyableRes is an interface that a resource can implement if we want to be
// able to copy the resource to build another one.
type CopyableRes interface {
Res
// Copy returns a new resource which has a copy of the public data.
// Don't call this directly, use engine.ResCopy instead.
// TODO: should we copy any private state or not?
Copy() CopyableRes
}
// CompatibleRes is an interface that a resource can implement to express if a
// similar variant of itself is functionally equivalent. For example, two `pkg`
// resources that install `cowsay` could be equivalent if one requests a state
// of `installed` and the other requests `newest`, since they'll finish with a
// compatible result. This doesn't need to be behind a metaparam flag or trait,
// because it is never beneficial to turn it off, unless there is a bug to fix.
type CompatibleRes interface {
//Res // causes "duplicate method" error
CopyableRes // we'll need to use the Copy method in the Merge function!
// Adapts compares itself to another resource and returns an error if
// they are not compatibly equivalent. This is less strict than the
// default `Cmp` method which should be used for most cases. Don't call
// this directly, use engine.AdaptCmp instead.
Adapts(CompatibleRes) error
// Merge returns the combined resource to use when two are equivalent.
// This might get called multiple times for N different resources that
// need to get merged, and so it should produce a consistent result no
// matter which order it is called in. Don't call this directly, use
// engine.ResMerge instead.
Merge(CompatibleRes) (CompatibleRes, error)
}
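// A rough sketch (hypothetical examplePkgRes and State field, not taken from
// the real pkg resource) of how the `installed` vs `newest` example above
// could be expressed through this interface:
//
//	func (obj *examplePkgRes) Adapts(r CompatibleRes) error {
//		res, ok := r.(*examplePkgRes)
//		if !ok {
//			return fmt.Errorf("resource is not the same kind")
//		}
//		if obj.State == res.State {
//			return nil
//		}
//		okObj := obj.State == "installed" || obj.State == "newest"
//		okRes := res.State == "installed" || res.State == "newest"
//		if okObj && okRes {
//			return nil // compatibly equivalent
//		}
//		return fmt.Errorf("the States are not compatible")
//	}
//
//	func (obj *examplePkgRes) Merge(r CompatibleRes) (CompatibleRes, error) {
//		res, ok := r.(*examplePkgRes)
//		if !ok {
//			return nil, fmt.Errorf("resource is not the same kind")
//		}
//		merged, ok := obj.Copy().(*examplePkgRes) // the embedded CopyableRes
//		if !ok {
//			return nil, fmt.Errorf("copy failed")
//		}
//		if obj.State == "newest" || res.State == "newest" {
//			merged.State = "newest" // same result in any merge order
//		}
//		return merged, nil
//	}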
// CollectableRes is an interface for resources that support collection. It is
// currently temporary until a proper API for all resources is invented.
type CollectableRes interface {
Res
CollectPattern(string) // XXX: temporary until Res collection is more advanced
}
// YAMLRes is a resource that supports creation by unmarshalling.
type YAMLRes interface {
Res
yaml.Unmarshaler // UnmarshalYAML(unmarshal func(interface{}) error) error
}


@@ -1,33 +1,34 @@
// Mgmt
-// Copyright (C) 2013-2017+ James Shubin and the project contributors
+// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Affero General Public License as published by
+// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Affero General Public License for more details.
+// GNU General Public License for more details.
//
-// You should have received a copy of the GNU Affero General Public License
+// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build !noaugeas
package resources
import (
-"encoding/gob"
"fmt"
-"log"
"os"
"strings"
+"github.com/purpleidea/mgmt/engine"
+"github.com/purpleidea/mgmt/engine/traits"
"github.com/purpleidea/mgmt/recwatch"
+"github.com/purpleidea/mgmt/util/errwrap"
-errwrap "github.com/pkg/errors"
// FIXME: we vendor go/augeas because master requires augeas 1.6.0
// and libaugeas-dev-1.6.0 is not yet available in a PPA.
"honnef.co/go/augeas"
@@ -39,13 +40,15 @@ const (
)
func init() {
-gob.Register(&AugeasRes{})
+engine.RegisterResource("augeas", func() engine.Res { return &AugeasRes{} })
}
// AugeasRes is a resource that enables you to use the augeas resource.
// Currently only allows you to change simple files (e.g sshd_config).
type AugeasRes struct {
-BaseRes `yaml:",inline"`
+traits.Base // add the base methods without re-implementation
+init *engine.Init
// File is the path to the file targeted by this resource.
File string `yaml:"file"`
@@ -57,7 +60,7 @@ type AugeasRes struct {
// Sets is a list of changes that will be applied to the file, in the form of
// ["path", "value"]. mgmt will run augeas.Get() before augeas.Set(), to
// prevent changing the file when it is not needed.
-Sets []AugeasSet `yaml:"sets"`
+Sets []*AugeasSet `yaml:"sets"`
recWatcher *recwatch.RecWatcher // used to watch the changed files
}
@@ -68,13 +71,31 @@ type AugeasSet struct {
Value string `yaml:"value"` // The value to be set on the given Path.
}
-// Default returns some sensible defaults for this resource.
-func (obj *AugeasRes) Default() Res {
-return &AugeasRes{
-BaseRes: BaseRes{
-MetaParams: DefaultMetaParams, // force a default
-},
-}
-}
+// Cmp compares this set with another one.
+func (obj *AugeasSet) Cmp(set *AugeasSet) error {
+if obj == nil && set == nil {
+return nil
+}
+if obj == nil && set != nil {
+return fmt.Errorf("can't compare nil set to set")
+}
+if obj != nil && set == nil {
+return fmt.Errorf("can't compare set to nil set")
+}
+if obj.Path != set.Path {
+return fmt.Errorf("the Path values differ")
+}
+if obj.Value != set.Value {
+return fmt.Errorf("the Value values differ")
+}
+return nil
+}
+// Default returns some sensible defaults for this resource.
+func (obj *AugeasRes) Default() engine.Res {
+return &AugeasRes{}
+}
// Validate if the params passed in are valid data.
@@ -88,17 +109,23 @@ func (obj *AugeasRes) Validate() error {
if (obj.Lens == "") != (obj.File == "") {
return fmt.Errorf("the File and Lens params must be specified together")
}
-return obj.BaseRes.Validate()
+return nil
}
-// Init initiates the resource.
-func (obj *AugeasRes) Init() error {
-obj.BaseRes.kind = "augeas"
-return obj.BaseRes.Init() // call base init, b/c we're overriding
+// Init initializes the resource.
+func (obj *AugeasRes) Init(init *engine.Init) error {
+obj.init = init // save for later
+return nil
}
-// Watch is the primary listener for this resource and it outputs events.
-// Taken from the File resource.
+// Close is run by the engine to clean up after the resource is done.
+func (obj *AugeasRes) Close() error {
+return nil
+}
+// Watch is the primary listener for this resource and it outputs events. This
+// was taken from the File resource.
// FIXME: DRY - This is taken from the file resource
func (obj *AugeasRes) Watch() error {
var err error
@@ -108,17 +135,12 @@ func (obj *AugeasRes) Watch() error {
}
defer obj.recWatcher.Close()
-// notify engine that we're running
-if err := obj.Running(); err != nil {
-return err // bubble up a NACK...
-}
+obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
-var exit *error
for {
-if obj.debug {
-log.Printf("%s[%s]: Watching: %s", obj.Kind(), obj.GetName(), obj.File) // attempting to watch...
+if obj.init.Debug {
+obj.init.Logf("Watching: %s", obj.File) // attempting to watch...
}
select {
@@ -127,31 +149,27 @@ func (obj *AugeasRes) Watch() error {
return nil
}
if err := event.Error; err != nil {
-return errwrap.Wrapf(err, "Unknown %s[%s] watcher error", obj.Kind(), obj.GetName())
+return errwrap.Wrapf(err, "Unknown %s watcher error", obj)
}
-if obj.debug { // don't access event.Body if event.Error isn't nil
-log.Printf("%s[%s]: Event(%s): %v", obj.Kind(), obj.GetName(), event.Body.Name, event.Body.Op)
+if obj.init.Debug { // don't access event.Body if event.Error isn't nil
+obj.init.Logf("Event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
-obj.StateOK(false) // dirty
-case event := <-obj.Events():
-if exit, send = obj.ReadEvent(event); exit != nil {
-return *exit // exit
-}
-//obj.StateOK(false) // dirty // these events don't invalidate state
+case <-obj.init.Done: // closed by the engine to signal shutdown
+return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
-obj.Event()
+obj.init.Event() // notify engine of an event (this can block)
}
}
}
// checkApplySet runs CheckApply for one element of the AugeasRes.Set
-func (obj *AugeasRes) checkApplySet(apply bool, ag *augeas.Augeas, set AugeasSet) (bool, error) {
+func (obj *AugeasRes) checkApplySet(apply bool, ag *augeas.Augeas, set *AugeasSet) (bool, error) {
fullpath := fmt.Sprintf("/files/%v/%v", obj.File, set.Path)
// We do not check for errors because errors are also thrown when
@@ -177,7 +195,7 @@ func (obj *AugeasRes) checkApplySet(apply bool, ag *augeas.Augeas, set AugeasSet
// CheckApply method for Augeas resource.
func (obj *AugeasRes) CheckApply(apply bool) (bool, error) {
-log.Printf("%s[%s]: CheckApply: %s", obj.Kind(), obj.GetName(), obj.File)
+obj.init.Logf("CheckApply: %s", obj.File)
// By default we do not set any option to augeas, we use the defaults.
opts := augeas.None
if obj.Lens != "" {
@@ -225,7 +243,7 @@ func (obj *AugeasRes) CheckApply(apply bool) (bool, error) {
return checkOK, nil
}
-log.Printf("%s[%s]: changes needed, saving", obj.Kind(), obj.GetName())
+obj.init.Logf("changes needed, saving")
if err = ag.Save(); err != nil {
return false, errwrap.Wrapf(err, "augeas: error while saving augeas values")
}
@@ -241,51 +259,50 @@ func (obj *AugeasRes) CheckApply(apply bool) (bool, error) {
return false, nil
}
-// AugeasUID is the UID struct for AugeasRes.
-type AugeasUID struct {
-BaseUID
-name string
-}
-// AutoEdges returns the AutoEdge interface. In this case no autoedges are used.
-func (obj *AugeasRes) AutoEdges() AutoEdge {
+// Cmp compares two resources and returns an error if they are not equivalent.
+func (obj *AugeasRes) Cmp(r engine.Res) error {
+// we can only compare to others of the same resource kind
+res, ok := r.(*AugeasRes)
+if !ok {
+return fmt.Errorf("resource is not the same kind")
+}
+if obj.File != res.File {
+return fmt.Errorf("the File params differ")
+}
+if obj.Lens != res.Lens {
+return fmt.Errorf("the Lens params differ")
+}
+if len(obj.Sets) != len(res.Sets) {
+return fmt.Errorf("the length of the two Sets params differs")
+}
+for i := 0; i < len(obj.Sets); i++ {
+if err := obj.Sets[i].Cmp(res.Sets[i]); err != nil {
+return errwrap.Wrapf(err, "the Sets item at index %d differs", i)
+}
+}
return nil
}
+// AugeasUID is the UID struct for AugeasRes.
+type AugeasUID struct {
+engine.BaseUID
+name string
+}
// UIDs includes all params to make a unique identification of this object.
-func (obj *AugeasRes) UIDs() []ResUID {
+func (obj *AugeasRes) UIDs() []engine.ResUID {
x := &AugeasUID{
-BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
-name: obj.Name,
+BaseUID: engine.BaseUID{Name: obj.Name(), Kind: obj.Kind()},
+name: obj.Name(),
}
-return []ResUID{x}
+return []engine.ResUID{x}
}
-// GroupCmp returns whether two resources can be grouped together or not.
-func (obj *AugeasRes) GroupCmp(r Res) bool {
-return false // Augeas commands can not be grouped together.
-}
-// Compare two resources and return if they are equivalent.
-func (obj *AugeasRes) Compare(res Res) bool {
-switch res.(type) {
-// we can only compare AugeasRes to others of the same resource
-case *AugeasRes:
-res := res.(*AugeasRes)
-if !obj.BaseRes.Compare(res) { // call base Compare
-return false
-}
-if obj.Name != res.Name {
-return false
-}
-default:
-return false
-}
-return true
-}
-// UnmarshalYAML is the custom unmarshal handler for this struct.
-// It is primarily useful for setting the defaults.
+// UnmarshalYAML is the custom unmarshal handler for this struct. It is
+// primarily useful for setting the defaults.
func (obj *AugeasRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes AugeasRes // indirection to avoid infinite recursion

engine/resources/aws_ec2.go (new file, 1416 lines): diff suppressed because it is too large.


@@ -0,0 +1,250 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"context"
"fmt"
"sync"
"time"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
"github.com/purpleidea/mgmt/util"
"github.com/purpleidea/mgmt/util/errwrap"
)
func init() {
engine.RegisterResource("config:etcd", func() engine.Res { return &ConfigEtcdRes{} })
}
const (
sizeCheckApplyTimeout = 5 * time.Second
)
// ConfigEtcdRes is a resource that sets mgmt's etcd configuration.
type ConfigEtcdRes struct {
traits.Base // add the base methods without re-implementation
init *engine.Init
// IdealClusterSize is the requested minimum size of the cluster. If you
// set this to zero, it will cause a cluster wide shutdown if
// AllowSizeShutdown is true. If it's not true, then it will cause a
// validation error.
IdealClusterSize uint16 `lang:"idealclustersize"`
// AllowSizeShutdown is a required safety flag that you must set to true
// if you want to allow causing a cluster shutdown by setting
// IdealClusterSize to zero.
AllowSizeShutdown bool `lang:"allow_size_shutdown"`
// sizeFlag determines whether sizeCheckApply already ran or not.
sizeFlag bool
interruptChan chan struct{}
wg *sync.WaitGroup
}
// Default returns some sensible defaults for this resource.
func (obj *ConfigEtcdRes) Default() engine.Res {
return &ConfigEtcdRes{}
}
// Validate if the params passed in are valid data.
func (obj *ConfigEtcdRes) Validate() error {
if obj.IdealClusterSize < 0 {
return fmt.Errorf("the IdealClusterSize param must be positive")
}
if obj.IdealClusterSize == 0 && !obj.AllowSizeShutdown {
return fmt.Errorf("the IdealClusterSize can't be zero if AllowSizeShutdown is false")
}
return nil
}
// Init runs some startup code for this resource.
func (obj *ConfigEtcdRes) Init(init *engine.Init) error {
obj.init = init // save for later
obj.interruptChan = make(chan struct{})
obj.wg = &sync.WaitGroup{}
return nil
}
// Close is run by the engine to clean up after the resource is done.
func (obj *ConfigEtcdRes) Close() error {
obj.wg.Wait() // bonus
return nil
}
// Watch is the primary listener for this resource and it outputs events.
func (obj *ConfigEtcdRes) Watch() error {
obj.wg.Add(1)
defer obj.wg.Done()
// FIXME: add timeout to context
// The obj.init.Done channel is closed by the engine to signal shutdown.
ctx, cancel := util.ContextWithCloser(context.Background(), obj.init.Done)
defer cancel()
ch, err := obj.init.World.IdealClusterSizeWatch(util.CtxWithWg(ctx, obj.wg))
if err != nil {
return errwrap.Wrapf(err, "could not watch ideal cluster size")
}
obj.init.Running() // when started, notify engine that we're running
Loop:
for {
select {
case event, ok := <-ch:
if !ok {
break Loop
}
if obj.init.Debug {
obj.init.Logf("event: %+v", event)
}
// pass through and send an event
case <-obj.init.Done: // closed by the engine to signal shutdown
}
obj.init.Event() // notify engine of an event (this can block)
}
return nil
}
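// Note on the helper used above: util.ContextWithCloser bridges the engine's
// Done channel into context cancellation. A minimal sketch of that contract
// (assumed shape only; the real implementation lives in the util package and
// may differ) looks like:
//
//	func contextWithCloser(parent context.Context, closer <-chan struct{}) (context.Context, context.CancelFunc) {
//		ctx, cancel := context.WithCancel(parent)
//		go func() {
//			select {
//			case <-closer: // e.g. engine shutdown was requested
//				cancel()
//			case <-ctx.Done(): // already cancelled some other way
//			}
//		}()
//		return ctx, cancel
//	}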
// sizeCheckApply sets the IdealClusterSize parameter. If it sees a value change
// to zero, then it *won't* try and change it away from zero, because it assumes
// that someone has requested a shutdown. If the value is seen on first startup,
// then it will change it, because it might be a zero from the previous cluster.
func (obj *ConfigEtcdRes) sizeCheckApply(apply bool) (bool, error) {
wg := &sync.WaitGroup{}
defer wg.Wait() // this must be above the defer cancel() call
ctx, cancel := context.WithTimeout(context.Background(), sizeCheckApplyTimeout)
defer cancel()
wg.Add(1)
go func() {
defer wg.Done()
select {
case <-obj.interruptChan:
cancel()
case <-ctx.Done():
// let this exit
}
}()
val, err := obj.init.World.IdealClusterSizeGet(ctx)
if err != nil {
return false, errwrap.Wrapf(err, "could not get ideal cluster size")
}
// if we got a value of zero, and we've already run before, then it's ok
if obj.IdealClusterSize != 0 && val == 0 && obj.sizeFlag {
obj.init.Logf("impending cluster shutdown, not setting ideal cluster size")
return true, nil // impending shutdown, don't try and cancel it.
}
obj.sizeFlag = true
// must be done after setting the above flag
if obj.IdealClusterSize == val { // state is correct
return true, nil
}
if !apply {
return false, nil
}
// set!
// This is run as a transaction so we detect if we needed to change it.
changed, err := obj.init.World.IdealClusterSizeSet(ctx, obj.IdealClusterSize)
if err != nil {
return false, errwrap.Wrapf(err, "could not set ideal cluster size")
}
if !changed {
return true, nil // we lost a race, which means no change needed
}
obj.init.Logf("set dynamic cluster size to: %d", obj.IdealClusterSize)
return false, nil
}
// CheckApply method for the ConfigEtcd resource. It runs each of the per-setting helpers, such as sizeCheckApply.
func (obj *ConfigEtcdRes) CheckApply(apply bool) (bool, error) {
checkOK := true
if c, err := obj.sizeCheckApply(apply); err != nil {
return false, err
} else if !c {
checkOK = false
}
// TODO: add more config settings management here...
//if c, err := obj.TODOCheckApply(apply); err != nil {
// return false, err
//} else if !c {
// checkOK = false
//}
return checkOK, nil // w00t
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *ConfigEtcdRes) Cmp(r engine.Res) error {
// we can only compare ConfigEtcdRes to others of the same resource kind
res, ok := r.(*ConfigEtcdRes)
if !ok {
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.IdealClusterSize != res.IdealClusterSize {
return fmt.Errorf("the IdealClusterSize param differs")
}
if obj.AllowSizeShutdown != res.AllowSizeShutdown {
return fmt.Errorf("the AllowSizeShutdown param differs")
}
return nil
}
// Interrupt is called to ask the execution of this resource to end early.
func (obj *ConfigEtcdRes) Interrupt() error {
close(obj.interruptChan)
return nil
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *ConfigEtcdRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes ConfigEtcdRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*ConfigEtcdRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to ConfigEtcdRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = ConfigEtcdRes(raw) // restore from indirection with type conversion!
return nil
}
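// A note on the rawRes trick used in UnmarshalYAML above: because rawRes is a
// defined type (not an alias), it does not inherit the UnmarshalYAML method of
// ConfigEtcdRes, so calling unmarshal(&raw) runs the default YAML decoding
// instead of recursing back into this method; the values copied in from
// Default() act as the field defaults for anything the YAML leaves unset.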


@@ -0,0 +1,283 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"context"
"fmt"
"net/url"
"sync"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
"github.com/purpleidea/mgmt/util"
"github.com/purpleidea/mgmt/util/errwrap"
"github.com/hashicorp/consul/api"
)
func init() {
engine.RegisterResource("consul:kv", func() engine.Res { return &ConsulKVRes{} })
}
// ConsulKVRes is a resource that writes a value into a Consul datastore. The
// name of the resource can either be the key name, or the concatenation of the
// server address and the key name: http://127.0.0.1:8500/my-key. If the param
// keys are specified, then those are used. If the Name cannot be properly
// parsed by url.Parse, then it will be considered as the Key's value. If the
// Key is specified explicitly, then we won't use anything from the Name.
type ConsulKVRes struct {
traits.Base
init *engine.Init
// Key is the name of the key. Defaults to the name of the resource.
Key string `lang:"key" yaml:"key"`
// Value is the value for the key.
Value string `lang:"value" yaml:"value"`
// Scheme is the URI scheme for the Consul server. Default: http.
Scheme string `lang:"scheme" yaml:"scheme"`
// Address is the address of the Consul server. Default: 127.0.0.1:8500.
Address string `lang:"address" yaml:"address"`
// Token is used to provide an ACL token to use for this resource.
Token string `lang:"token" yaml:"token"`
client *api.Client
config *api.Config // needed to close the idle connections
once bool // safety token
key string // cache the key name to avoid re-running the parser
}
// Default returns some sensible defaults for this resource.
func (obj *ConsulKVRes) Default() engine.Res {
return &ConsulKVRes{}
}
// Validate if the params passed in are valid data.
func (obj *ConsulKVRes) Validate() error {
s, _, k := obj.inputParser()
if k == "" {
return fmt.Errorf("the Key is empty")
}
if s != "" && s != "http" && s != "https" {
return fmt.Errorf("unknown Scheme")
}
return nil
}
// Init runs some startup code for this resource.
func (obj *ConsulKVRes) Init(init *engine.Init) error {
obj.init = init // save for later
s, a, k := obj.inputParser()
obj.config = api.DefaultConfig()
if s != "" {
obj.config.Scheme = s
}
if a != "" {
obj.config.Address = a
}
obj.key = k // store the key
obj.init.Logf("using consul key: %s", obj.key)
if obj.Token != "" {
obj.config.Token = obj.Token
}
var err error
obj.client, err = api.NewClient(obj.config)
return errwrap.Wrapf(err, "could not create Consul client")
}
// Close is run by the engine to clean up after the resource is done.
func (obj *ConsulKVRes) Close() error {
if obj.config != nil && obj.config.Transport != nil {
obj.config.Transport.CloseIdleConnections()
}
return nil
}
// Watch is the listener and main loop for this resource and it outputs events.
func (obj *ConsulKVRes) Watch() error {
wg := &sync.WaitGroup{}
defer wg.Wait()
ch := make(chan error)
exit := make(chan struct{})
kv := obj.client.KV()
wg.Add(1)
go func() {
defer close(ch)
defer wg.Done()
opts := &api.QueryOptions{RequireConsistent: true}
ctx, cancel := util.ContextWithCloser(context.Background(), exit)
defer cancel()
opts = opts.WithContext(ctx)
for {
_, meta, err := kv.Get(obj.key, opts)
select {
case ch <- err: // send
if err != nil {
return
}
// WaitIndex = 0, which means that it is the
// first time we run the query, as we are about
// to change the WaitIndex to make a blocking
// query, we can consider the watch started.
opts.WaitIndex = meta.LastIndex
if opts.WaitIndex != 0 {
continue
}
if !obj.once {
obj.init.Running()
obj.once = true
continue
}
// Unexpected situation, bug in consul API...
select {
case ch <- fmt.Errorf("unexpected behaviour in Consul API"):
case <-obj.init.Done: // signal for shutdown request
}
case <-obj.init.Done: // signal for shutdown request
}
return
}
}()
defer close(exit)
for {
select {
case err, ok := <-ch:
if !ok { // channel shutdown
return nil
}
if err != nil {
return errwrap.Wrapf(err, "unknown %s watcher error", obj)
}
if obj.init.Debug {
obj.init.Logf("event!")
}
obj.init.Event()
case <-obj.init.Done: // signal for shutdown request
return nil
}
}
}
// CheckApply is run to check the state and, if apply is true, to apply the
// necessary changes to reach the desired state. This is run before Watch and
// again if Watch finds a change occurring to the state.
func (obj *ConsulKVRes) CheckApply(apply bool) (bool, error) {
if obj.init.Debug {
obj.init.Logf("consul key: %s", obj.key)
}
kv := obj.client.KV()
pair, _, err := kv.Get(obj.key, nil)
if err != nil {
return false, err
}
if pair != nil && string(pair.Value) == obj.Value {
return true, nil
}
if !apply {
return false, nil
}
p := &api.KVPair{Key: obj.key, Value: []byte(obj.Value)}
_, err = kv.Put(p, nil)
return false, err
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *ConsulKVRes) Cmp(r engine.Res) error {
res, ok := r.(*ConsulKVRes)
if !ok {
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.Key != res.Key {
return fmt.Errorf("the Key param differs")
}
if obj.Value != res.Value {
return fmt.Errorf("the Value param differs")
}
if obj.Scheme != res.Scheme {
return fmt.Errorf("the Scheme param differs")
}
if obj.Address != res.Address {
return fmt.Errorf("the Address param differs")
}
if obj.Token != res.Token {
return fmt.Errorf("the Token param differs")
}
return nil
}
// inputParser parses the Name() of a resource and extracts the scheme, address,
// and key name of a consul key. We don't have an error, because if we have one,
// then it means the input must be a raw key. Output of this function is scheme,
// address (includes hostname and port), and key. This also takes our parameters
// in to account, and applies the correct overrides if they are specified there.
func (obj *ConsulKVRes) inputParser() (string, string, string) {
// If the key is specified explicitly, then we're not going to parse the
// resource name for a pattern, and we use our given params as they are.
if obj.Key != "" {
return obj.Scheme, obj.Address, obj.Key
}
// Now we parse...
u, err := url.Parse(obj.Name())
if err != nil {
// If this didn't work, then we know it's explicitly a raw key.
return obj.Scheme, obj.Address, obj.Name()
}
// Otherwise, we use the parse result, and we overwrite any of the
// fields if we have an explicit param that was specified.
k := u.Path
s := u.Scheme
a := u.Host
//if obj.Key != "" { // this is now guaranteed to never happen
// k = obj.Key
//}
if obj.Scheme != "" {
s = obj.Scheme
}
if obj.Address != "" {
a = obj.Address
}
return s, a, k
}
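// Worked examples (these match the expectations in the test file that
// follows): a resource named "test" with no params parses to
// ("", "", "test"), while "http://127.0.0.1:8500/test" parses to
// ("http", "127.0.0.1:8500", "/test"); explicit Scheme, Address or Key params
// override the corresponding parsed values.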


@@ -0,0 +1,71 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"fmt"
"testing"
"github.com/purpleidea/mgmt/engine"
)
func createConsulRes(name string) *ConsulKVRes {
r, err := engine.NewNamedResource("consul:kv", name)
if err != nil {
panic(fmt.Sprintf("could not create resource: %+v", err))
}
res := r.(*ConsulKVRes) // if this panics, the test will panic
return res
}
func TestParseConsulName(t *testing.T) {
n1 := "test"
r1 := createConsulRes(n1)
if s, a, k := r1.inputParser(); s != "" || a != "" || k != "test" {
t.Errorf("unexpected output while parsing `%s`: %s, %s, %s", n1, s, a, k)
}
n2 := "http://127.0.0.1:8500/test"
r2 := createConsulRes(n2)
if s, a, k := r2.inputParser(); s != "http" || a != "127.0.0.1:8500" || k != "/test" {
t.Errorf("unexpected output while parsing `%s`: %s, %s, %s", n2, s, a, k)
}
n3 := "http://127.0.0.1:8500/test"
r3 := createConsulRes(n3)
r3.Scheme = "https"
r3.Address = "example.com"
if s, a, k := r3.inputParser(); s != "https" || a != "example.com" || k != "/test" {
t.Errorf("unexpected output while parsing `%s`: %s, %s, %s", n3, s, a, k)
}
n4 := "http:://127.0.0.1..5:8500/test" // wtf, url.Parse is on drugs...
r4 := createConsulRes(n4)
//if s, a, k := r4.inputParser(); s != "" || a != "" || k != n4 { // what i really expect
if s, a, k := r4.inputParser(); s != "http" || a != "" || k != "" { // what i get
t.Errorf("unexpected output while parsing `%s`: %s, %s, %s", n4, s, a, k)
}
n5 := "http://127.0.0.1:8500/test" // whatever, it's ignored
r5 := createConsulRes(n5)
r5.Key = "some key"
if s, a, k := r5.inputParser(); s != "" || a != "" || k != "some key" {
t.Errorf("unexpected output while parsing `%s`: %s, %s, %s", n5, s, a, k)
}
}

engine/resources/cron.go (new file, 559 lines)

@@ -0,0 +1,559 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"bytes"
"context"
"fmt"
"os/user"
"path"
"strings"
"time"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
engineUtil "github.com/purpleidea/mgmt/engine/util"
"github.com/purpleidea/mgmt/recwatch"
"github.com/purpleidea/mgmt/util"
"github.com/purpleidea/mgmt/util/errwrap"
sdbus "github.com/coreos/go-systemd/dbus"
"github.com/coreos/go-systemd/unit"
systemdUtil "github.com/coreos/go-systemd/util"
"github.com/godbus/dbus"
)
const (
// OnCalendar is a systemd-timer trigger, whose behaviour is defined in
// 'man systemd-timer', and whose format is defined in the 'Calendar
// Events' section of 'man systemd-time'.
OnCalendar = "OnCalendar"
// OnActiveSec is a systemd-timer trigger, whose behaviour is defined in
// 'man systemd-timer', and whose format is a time span as defined in
// 'man systemd-time'.
OnActiveSec = "OnActiveSec"
// OnBootSec is a systemd-timer trigger, whose behaviour is defined in
// 'man systemd-timer', and whose format is a time span as defined in
// 'man systemd-time'.
OnBootSec = "OnBootSec"
// OnStartupSec is a systemd-timer trigger, whose behaviour is defined in
// 'man systemd-timer', and whose format is a time span as defined in
// 'man systemd-time'.
OnStartupSec = "OnStartupSec"
// OnUnitActiveSec is a systemd-timer trigger, whose behaviour is defined
// in 'man systemd-timer', and whose format is a time span as defined in
// 'man systemd-time'.
OnUnitActiveSec = "OnUnitActiveSec"
// OnUnitInactiveSec is a systemd-timer trigger, whose behaviour is defined
// in 'man systemd-timer', and whose format is a time span as defined in
// 'man systemd-time'.
OnUnitInactiveSec = "OnUnitInactiveSec"
// ctxTimeout is the delay, in seconds, before the calls to restart or stop
// the systemd unit will error due to timeout.
ctxTimeout = 30
)
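// Illustrative values only (see 'man systemd.time' for the authoritative
// formats): an OnCalendar trigger pairs with a calendar expression such as
// "*-*-* 03:00:00" (every day at 03:00), while the *Sec triggers pair with a
// time span such as "10min" or "1h 30min".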
func init() {
engine.RegisterResource("cron", func() engine.Res { return &CronRes{} })
}
// CronRes is a systemd-timer cron resource.
type CronRes struct {
traits.Base
traits.Edgeable
traits.Recvable
traits.Refreshable // needed because we embed a svc res
init *engine.Init
// Unit is the name of the systemd service unit. It is only necessary to
// set if you want to specify a service with a different name than the
// resource.
Unit string `yaml:"unit"`
// State must be 'exists' or 'absent'.
State string `yaml:"state"`
// Session, if true, creates the timer as the current user, rather than
// root. The service it points to must also be a user unit. It defaults to
// false.
Session bool `yaml:"session"`
// Trigger is the type of timer. Valid types are 'OnCalendar',
// 'OnActiveSec', 'OnBootSec', 'OnStartupSec', 'OnUnitActiveSec', and
// 'OnUnitInactiveSec'. For more information see 'man systemd.timer'.
Trigger string `yaml:"trigger"`
// Time must be used with all triggers. For 'OnCalendar', it must be in
// the format defined in 'man systemd-time' under the heading 'Calendar
// Events'. For all other triggers, time should be a valid time span as
// defined in 'man systemd-time'
Time string `yaml:"time"`
// AccuracySec is the accuracy of the timer in systemd-time time span
// format. It defaults to one minute.
AccuracySec string `yaml:"accuracysec"`
// RandomizedDelaySec delays the timer by a randomly selected, evenly
// distributed amount of time between 0 and the specified time value. The
// value must be a valid systemd-time time span.
RandomizedDelaySec string `yaml:"randomizeddelaysec"`
// Persistent, if true, means the time when the service unit was last
// triggered is stored on disk. When the timer is activated, the service
// unit is triggered immediately if it would have been triggered at least
// once during the time when the timer was inactive. It defaults to false.
Persistent bool `yaml:"persistent"`
// WakeSystem, if true, will cause the system to resume from suspend,
// should it be suspended and if the system supports this. It defaults to
// false.
WakeSystem bool `yaml:"wakesystem"`
// RemainAfterElapse, if true, means an elapsed timer will stay loaded, and
// its state remains queriable. If false, an elapsed timer unit that cannot
// elapse anymore is unloaded. It defaults to true.
RemainAfterElapse bool `yaml:"remainafterelapse"`
file *FileRes // nested file resource
recWatcher *recwatch.RecWatcher // recwatcher for nested file
}
// Default returns some sensible defaults for this resource.
func (obj *CronRes) Default() engine.Res {
return &CronRes{
State: "exists",
RemainAfterElapse: true,
}
}
// makeComposite creates a pointer to a FileRes. The pointer is used to validate
// and initialize the nested file resource and to apply the file state in
// CheckApply.
func (obj *CronRes) makeComposite() (*FileRes, error) {
p, err := obj.UnitFilePath()
if err != nil {
return nil, errwrap.Wrapf(err, "error generating unit file path")
}
res, err := engine.NewNamedResource("file", p)
if err != nil {
return nil, errwrap.Wrapf(err, "error creating nested file resource")
}
file, ok := res.(*FileRes)
if !ok {
return nil, fmt.Errorf("error casting fileres")
}
file.State = obj.State
if obj.State != "absent" {
s := obj.unitFileContents()
file.Content = &s
}
return file, nil
}
// Validate if the params passed in are valid data.
func (obj *CronRes) Validate() error {
// validate state
if obj.State != "absent" && obj.State != "exists" {
return fmt.Errorf("state must be 'absent' or 'exists'")
}
// validate trigger
if obj.State == "absent" && obj.Trigger == "" {
return nil // if trigger is undefined we can't make a unit file
}
if obj.Trigger == "" || obj.Time == "" {
return fmt.Errorf("trigger and must be set together")
}
if obj.Trigger != OnCalendar &&
obj.Trigger != OnActiveSec &&
obj.Trigger != OnBootSec &&
obj.Trigger != OnStartupSec &&
obj.Trigger != OnUnitActiveSec &&
obj.Trigger != OnUnitInactiveSec {
return fmt.Errorf("invalid trigger")
}
// TODO: Validate time (regex?)
// validate nested file
file, err := obj.makeComposite()
if err != nil {
return errwrap.Wrapf(err, "makeComposite failed in validate")
}
if err := file.Validate(); err != nil { // composite resource
return errwrap.Wrapf(err, "validate failed for embedded file: %s", obj.file)
}
return nil
}
// Init runs some startup code for this resource.
func (obj *CronRes) Init(init *engine.Init) error {
var err error
obj.init = init // save for later
obj.file, err = obj.makeComposite()
if err != nil {
return errwrap.Wrapf(err, "makeComposite failed in init")
}
return obj.file.Init(init)
}
// Close is run by the engine to clean up after the resource is done.
func (obj *CronRes) Close() error {
if obj.file != nil {
return obj.file.Close()
}
return nil
}
// Watch for state changes and sends a message to the bus if there is a change.
func (obj *CronRes) Watch() error {
var bus *dbus.Conn
var err error
// this resource depends on systemd
if !systemdUtil.IsRunningSystemd() {
return fmt.Errorf("systemd is not running")
}
// create a private message bus
if obj.Session {
bus, err = util.SessionBusPrivateUsable()
} else {
bus, err = util.SystemBusPrivateUsable()
}
if err != nil {
return errwrap.Wrapf(err, "failed to connect to bus")
}
defer bus.Close()
// dbus addmatch arguments for the timer unit
args := []string{}
args = append(args, "type='signal'")
args = append(args, "interface='org.freedesktop.systemd1.Manager'")
args = append(args, "eavesdrop='true'")
args = append(args, fmt.Sprintf("arg2='%s.timer'", obj.Name()))
// match dbus messages
if call := bus.BusObject().Call(engineUtil.DBusAddMatch, 0, strings.Join(args, ",")); call.Err != nil {
return call.Err
}
defer bus.BusObject().Call(engineUtil.DBusRemoveMatch, 0, args) // ignore the error
// channels for dbus signal
dbusChan := make(chan *dbus.Signal)
defer close(dbusChan)
bus.Signal(dbusChan)
defer bus.RemoveSignal(dbusChan) // not needed here, but nice for symmetry
p, err := obj.UnitFilePath()
if err != nil {
return errwrap.Wrapf(err, "error generating unit file path")
}
// recwatcher for the systemd-timer unit file
obj.recWatcher, err = recwatch.NewRecWatcher(p, false)
if err != nil {
return err
}
defer obj.recWatcher.Close()
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
select {
case event := <-dbusChan:
// process dbus events
if obj.init.Debug {
obj.init.Logf("%+v", event)
}
send = true
case event, ok := <-obj.recWatcher.Events():
// process unit file recwatch events
if !ok { // channel shutdown
return nil
}
if err := event.Error; err != nil {
return errwrap.Wrapf(err, "Unknown %s watcher error", obj)
}
if obj.init.Debug {
obj.init.Logf("Event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case <-obj.init.Done: // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply is run to check the state and, if apply is true, to apply the
// necessary changes to reach the desired state. This is run before Watch and
// again if Watch finds a change occurring to the state.
func (obj *CronRes) CheckApply(apply bool) (bool, error) {
checkOK := true
// use the embedded file resource to apply the correct state
if c, err := obj.file.CheckApply(apply); err != nil {
return false, errwrap.Wrapf(err, "nested file failed")
} else if !c {
checkOK = false
}
// check timer state and apply the defined state if needed
if c, err := obj.unitCheckApply(apply); err != nil {
return false, errwrap.Wrapf(err, "unitCheckApply error")
} else if !c {
checkOK = false
}
return checkOK, nil
}
// unitCheckApply checks the state of the systemd-timer unit and, if apply is
// true, applies the defined state.
func (obj *CronRes) unitCheckApply(apply bool) (bool, error) {
var conn *sdbus.Conn
var godbusConn *dbus.Conn
var err error
// this resource depends on systemd to ensure that it's running
if !systemdUtil.IsRunningSystemd() {
return false, fmt.Errorf("systemd is not running")
}
// go-systemd connection
if obj.Session {
conn, err = sdbus.NewUserConnection()
} else {
conn, err = sdbus.New() // system bus
}
if err != nil {
return false, errwrap.Wrapf(err, "error making go-systemd dbus connection")
}
defer conn.Close()
// get the load state and active state of the timer unit
loadState, err := conn.GetUnitProperty(fmt.Sprintf("%s.timer", obj.Name()), "LoadState")
if err != nil {
return false, errwrap.Wrapf(err, "failed to get load state")
}
activeState, err := conn.GetUnitProperty(fmt.Sprintf("%s.timer", obj.Name()), "ActiveState")
if err != nil {
return false, errwrap.Wrapf(err, "failed to get active state")
}
// check the timer unit state
if obj.State == "absent" && loadState.Value == dbus.MakeVariant("not-found") {
return true, nil
}
if obj.State == "exists" && activeState.Value == dbus.MakeVariant("active") {
return true, nil
}
if !apply {
return false, nil
}
// systemctl daemon-reload
if err := conn.Reload(); err != nil {
return false, errwrap.Wrapf(err, "error reloading daemon")
}
// context for stopping/restarting the unit
ctx, cancel := context.WithTimeout(context.Background(), ctxTimeout*time.Second)
defer cancel()
// godbus connection for stopping/restarting the unit
if obj.Session {
godbusConn, err = util.SessionBusPrivateUsable()
} else {
godbusConn, err = util.SystemBusPrivateUsable()
}
if err != nil {
return false, errwrap.Wrapf(err, "error making godbus connection")
}
defer godbusConn.Close()
// stop or restart the unit
if obj.State == "absent" {
return false, engineUtil.StopUnit(ctx, godbusConn, fmt.Sprintf("%s.timer", obj.Name()))
}
return false, engineUtil.RestartUnit(ctx, godbusConn, fmt.Sprintf("%s.timer", obj.Name()))
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *CronRes) Cmp(r engine.Res) error {
res, ok := r.(*CronRes)
if !ok {
return fmt.Errorf("res is not the same kind")
}
if obj.State != res.State {
return fmt.Errorf("state differs: %s vs %s", obj.State, res.State)
}
if obj.Trigger != res.Trigger {
return fmt.Errorf("trigger differs: %s vs %s", obj.Trigger, res.Trigger)
}
if obj.Time != res.Time {
return fmt.Errorf("time differs: %s vs %s", obj.Time, res.Time)
}
if obj.AccuracySec != res.AccuracySec {
return fmt.Errorf("accuracysec differs: %s vs %s", obj.AccuracySec, res.AccuracySec)
}
if obj.RandomizedDelaySec != res.RandomizedDelaySec {
return fmt.Errorf("randomizeddelaysec differs: %s vs %s", obj.RandomizedDelaySec, res.RandomizedDelaySec)
}
if obj.Unit != res.Unit {
return fmt.Errorf("unit differs: %s vs %s", obj.Unit, res.Unit)
}
if obj.Persistent != res.Persistent {
return fmt.Errorf("persistent differs: %t vs %t", obj.Persistent, res.Persistent)
}
if obj.WakeSystem != res.WakeSystem {
return fmt.Errorf("wakesystem differs: %t vs %t", obj.WakeSystem, res.WakeSystem)
}
if obj.RemainAfterElapse != res.RemainAfterElapse {
return fmt.Errorf("remainafterelapse differs: %t vs %t", obj.RemainAfterElapse, res.RemainAfterElapse)
}
return obj.file.Cmp(r)
}
// CronUID is a unique resource identifier.
type CronUID struct {
// NOTE: There is also a name variable in the BaseUID struct, this is
// information about where this UID came from, and is unrelated to the
// information about the resource we're matching. That data which is
// used in the IFF function, is what you see in the struct fields here.
engine.BaseUID
unit string // name of target unit
session bool // user session
}
// IFF aka if and only if they are equivalent, return true. If not, false.
func (obj *CronUID) IFF(uid engine.ResUID) bool {
res, ok := uid.(*CronUID)
if !ok {
return false
}
if obj.unit != res.unit {
return false
}
if obj.session != res.session {
return false
}
return true
}
// AutoEdges returns the AutoEdge interface.
func (obj *CronRes) AutoEdges() (engine.AutoEdge, error) {
return nil, nil
}
// UIDs includes all params to make a unique identification of this object. Most
// resources only return one although some resources can return multiple.
func (obj *CronRes) UIDs() []engine.ResUID {
unit := fmt.Sprintf("%s.service", obj.Name())
if obj.Unit != "" {
unit = obj.Unit
}
uids := []engine.ResUID{
&CronUID{
BaseUID: engine.BaseUID{Name: obj.Name(), Kind: obj.Kind()},
unit: unit, // name of target unit
session: obj.Session, // user session
},
}
if file, err := obj.makeComposite(); err == nil {
uids = append(uids, file.UIDs()...) // add the file uid if we can
}
return uids
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *CronRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes CronRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*CronRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to CronRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = CronRes(raw) // restore from indirection with type conversion!
return nil
}
// UnitFilePath returns the path to the systemd-timer unit file.
func (obj *CronRes) UnitFilePath() (string, error) {
// root timer
if !obj.Session {
return fmt.Sprintf("/etc/systemd/system/%s.timer", obj.Name()), nil
}
// user timer
u, err := user.Current()
if err != nil {
return "", errwrap.Wrapf(err, "error getting current user")
}
if u.HomeDir == "" {
return "", fmt.Errorf("user has no home directory")
}
return path.Join(u.HomeDir, "/.config/systemd/user/", fmt.Sprintf("%s.timer", obj.Name())), nil
}
// unitFileContents returns the contents of the unit file representing the
// CronRes struct.
func (obj *CronRes) unitFileContents() string {
u := []*unit.UnitOption{}
// [Unit]
u = append(u, &unit.UnitOption{Section: "Unit", Name: "Description", Value: "timer generated by mgmt"})
// [Timer]
u = append(u, &unit.UnitOption{Section: "Timer", Name: obj.Trigger, Value: obj.Time})
if obj.AccuracySec != "" {
u = append(u, &unit.UnitOption{Section: "Timer", Name: "AccuracySec", Value: obj.AccuracySec})
}
if obj.RandomizedDelaySec != "" {
u = append(u, &unit.UnitOption{Section: "Timer", Name: "RandomizedDelaySec", Value: obj.RandomizedDelaySec})
}
if obj.Unit != "" {
u = append(u, &unit.UnitOption{Section: "Timer", Name: "Unit", Value: obj.Unit})
}
if obj.Persistent != false { // defaults to false
u = append(u, &unit.UnitOption{Section: "Timer", Name: "Persistent", Value: "true"})
}
if obj.WakeSystem != false { // defaults to false
u = append(u, &unit.UnitOption{Section: "Timer", Name: "WakeSystem", Value: "true"})
}
if obj.RemainAfterElapse != true { // defaults to true
u = append(u, &unit.UnitOption{Section: "Timer", Name: "RemainAfterElapse", Value: "false"})
}
// [Install]
u = append(u, &unit.UnitOption{Section: "Install", Name: "WantedBy", Value: "timers.target"})
buf := new(bytes.Buffer)
buf.ReadFrom(unit.Serialize(u))
return buf.String()
}
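// For a hypothetical cron resource named "backup", using Trigger: OnCalendar,
// Time: "*-*-* 03:00:00" and the remaining fields left at their defaults, the
// unit serialized above would look roughly like:
//
//	[Unit]
//	Description=timer generated by mgmt
//
//	[Timer]
//	OnCalendar=*-*-* 03:00:00
//
//	[Install]
//	WantedBy=timers.target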

engine/resources/dhcp.go (new file, 1177 lines): diff suppressed because it is too large.


@@ -0,0 +1,493 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build !nodocker
package resources
import (
"context"
"fmt"
"io/ioutil"
"regexp"
"strings"
"time"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
"github.com/purpleidea/mgmt/util"
"github.com/purpleidea/mgmt/util/errwrap"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/client"
"github.com/docker/go-connections/nat"
)
const (
// ContainerRunning is the running container state.
ContainerRunning = "running"
// ContainerStopped is the stopped container state.
ContainerStopped = "stopped"
// ContainerRemoved is the removed container state.
ContainerRemoved = "removed"
// initCtxTimeout is the length of time, in seconds, before requests are
// cancelled in Init.
initCtxTimeout = 20
// checkApplyCtxTimeout is the length of time, in seconds, before
// requests are cancelled in CheckApply.
checkApplyCtxTimeout = 120
)
func init() {
engine.RegisterResource("docker:container", func() engine.Res { return &DockerContainerRes{} })
}
// DockerContainerRes is a docker container resource.
type DockerContainerRes struct {
traits.Base // add the base methods without re-implementation
traits.Edgeable
// State of the container must be running, stopped, or removed.
State string `yaml:"state"`
// Image is a docker image, or image:tag.
Image string `yaml:"image"`
// Cmd is a command, or list of commands to run on the container.
Cmd []string `yaml:"cmd"`
// Env is a list of environment variables. E.g. ["VAR=val",].
Env []string `yaml:"env"`
// Ports is a map of port bindings. E.g. {"tcp" => {80 => 8080},}.
Ports map[string]map[int64]int64 `yaml:"ports"`
// APIVersion allows you to override the host's default client API
// version.
APIVersion string `yaml:"apiversion"`
// Force, if true, will destroy and redeploy the container if the
// image is incorrect.
Force bool `yaml:"force"`
client *client.Client // docker api client
init *engine.Init
}
// Default returns some sensible defaults for this resource.
func (obj *DockerContainerRes) Default() engine.Res {
return &DockerContainerRes{
State: "running",
}
}
// Validate if the params passed in are valid data.
func (obj *DockerContainerRes) Validate() error {
// validate state
if obj.State != ContainerRunning && obj.State != ContainerStopped && obj.State != ContainerRemoved {
return fmt.Errorf("state must be running, stopped or removed")
}
// make sure an image is specified
if obj.Image == "" {
return fmt.Errorf("image must be specified")
}
// validate env
for _, env := range obj.Env {
if !strings.Contains(env, "=") || strings.Contains(env, " ") {
return fmt.Errorf("invalid environment variable: %s", env)
}
}
// validate ports
for k, v := range obj.Ports {
if k != "tcp" && k != "udp" && k != "sctp" {
return fmt.Errorf("ports primary key should be tcp, udp or sctp")
}
for p, q := range v {
if (p < 1 || p > 65535) || (q < 1 || q > 65535) {
return fmt.Errorf("ports must be between 1 and 65535")
}
}
}
// validate APIVersion
if obj.APIVersion != "" {
verOK, err := regexp.MatchString(`^(v)[1-9]\.[0-9]\d*$`, obj.APIVersion)
if err != nil {
return errwrap.Wrapf(err, "error matching apiversion string")
}
if !verOK {
return fmt.Errorf("invalid apiversion: %s", obj.APIVersion)
}
}
return nil
}
// Init runs some startup code for this resource.
func (obj *DockerContainerRes) Init(init *engine.Init) error {
var err error
obj.init = init // save for later
ctx, cancel := context.WithTimeout(context.Background(), initCtxTimeout*time.Second)
defer cancel()
// Initialize the docker client.
obj.client, err = client.NewClientWithOpts(client.WithVersion(obj.APIVersion))
if err != nil {
return errwrap.Wrapf(err, "error creating docker client")
}
// Validate the image.
resp, err := obj.client.ImageSearch(ctx, obj.Image, types.ImageSearchOptions{Limit: 1})
if err != nil {
return errwrap.Wrapf(err, "error searching for image")
}
if len(resp) == 0 {
return fmt.Errorf("image: %s not found", obj.Image)
}
return nil
}
// Close is run by the engine to clean up after the resource is done.
func (obj *DockerContainerRes) Close() error {
return obj.client.Close() // close the docker client
}
// Watch is the primary listener for this resource and it outputs events.
func (obj *DockerContainerRes) Watch() error {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
eventChan, errChan := obj.client.Events(ctx, types.EventsOptions{})
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
select {
case event, ok := <-eventChan:
if !ok { // channel shutdown
return nil
}
if obj.init.Debug {
obj.init.Logf("%+v", event)
}
send = true
case err, ok := <-errChan:
if !ok {
return nil
}
return err
case <-obj.init.Done: // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply method for Docker resource.
func (obj *DockerContainerRes) CheckApply(apply bool) (bool, error) {
var id string
var destroy bool
ctx, cancel := context.WithTimeout(context.Background(), checkApplyCtxTimeout*time.Second)
defer cancel()
// List any container whose name matches this resource.
opts := types.ContainerListOptions{
All: true,
Filters: filters.NewArgs(filters.KeyValuePair{Key: "name", Value: obj.Name()}),
}
containerList, err := obj.client.ContainerList(ctx, opts)
if err != nil {
return false, errwrap.Wrapf(err, "error listing containers")
}
if len(containerList) > 1 {
return false, fmt.Errorf("more than one container named %s", obj.Name())
}
if len(containerList) == 0 && obj.State == ContainerRemoved {
return true, nil
}
if len(containerList) == 1 {
// If the state and image are correct, we're done.
if containerList[0].State == obj.State && containerList[0].Image == obj.Image {
return true, nil
}
id = containerList[0].ID // save the id for later
// If the image is wrong, and force is true, mark the container for
// destruction.
if containerList[0].Image != obj.Image && obj.Force {
destroy = true
}
// Otherwise return an error.
if containerList[0].Image != obj.Image && !obj.Force {
return false, fmt.Errorf("%s exists but has the wrong image: %s", obj.Name(), containerList[0].Image)
}
}
if !apply {
return false, nil
}
if obj.State == ContainerStopped { // container exists and should be stopped
return false, obj.containerStop(ctx, id, nil)
}
if obj.State == ContainerRemoved { // container exists and should be removed
if err := obj.containerStop(ctx, id, nil); err != nil {
return false, err
}
return false, obj.containerRemove(ctx, id, types.ContainerRemoveOptions{})
}
if destroy {
if err := obj.containerStop(ctx, id, nil); err != nil {
return false, err
}
if err := obj.containerRemove(ctx, id, types.ContainerRemoveOptions{}); err != nil {
return false, err
}
containerList = []types.Container{} // zero the list
}
if len(containerList) == 0 { // no container was found
// Download the specified image if it doesn't exist locally.
p, err := obj.client.ImagePull(ctx, obj.Image, types.ImagePullOptions{})
if err != nil {
return false, errwrap.Wrapf(err, "error pulling image")
}
// Wait for the image to download, EOF signals that it's done.
if _, err := ioutil.ReadAll(p); err != nil {
return false, errwrap.Wrapf(err, "error reading image pull result")
}
// set up port bindings
containerConfig := &container.Config{
Image: obj.Image,
Cmd: obj.Cmd,
Env: obj.Env,
ExposedPorts: make(map[nat.Port]struct{}),
}
hostConfig := &container.HostConfig{
PortBindings: make(map[nat.Port][]nat.PortBinding),
}
for k, v := range obj.Ports {
for p, q := range v {
containerConfig.ExposedPorts[nat.Port(k)] = struct{}{}
hostConfig.PortBindings[nat.Port(fmt.Sprintf("%d/%s", p, k))] = []nat.PortBinding{
{
HostIP: "0.0.0.0",
HostPort: fmt.Sprintf("%d", q),
},
}
}
}
c, err := obj.client.ContainerCreate(ctx, containerConfig, hostConfig, nil, nil, obj.Name())
if err != nil {
return false, errwrap.Wrapf(err, "error creating container")
}
id = c.ID
}
return false, obj.containerStart(ctx, id, types.ContainerStartOptions{})
}
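// To illustrate the Ports translation in CheckApply above: a resource with
// Ports: {"tcp": {80: 8080}} produces a binding roughly equivalent to:
//
//	hostConfig.PortBindings[nat.Port("80/tcp")] = []nat.PortBinding{
//		{HostIP: "0.0.0.0", HostPort: "8080"},
//	}
//
// i.e. container port 80/tcp is published on host port 8080, the docker CLI
// equivalent of `-p 8080:80`.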
// containerStart starts the specified container, and waits for it to start.
func (obj *DockerContainerRes) containerStart(ctx context.Context, id string, opts types.ContainerStartOptions) error {
// Get an events channel for the container we're about to start.
eventOpts := types.EventsOptions{
Filters: filters.NewArgs(filters.KeyValuePair{Key: "container", Value: id}),
}
eventCh, errCh := obj.client.Events(ctx, eventOpts)
// Start the container.
if err := obj.client.ContainerStart(ctx, id, opts); err != nil {
return errwrap.Wrapf(err, "error starting container")
}
// Wait for a message on eventChan that says the container has started.
select {
case event := <-eventCh:
if event.Status != "start" {
return fmt.Errorf("unexpected event: %+v", event)
}
case err := <-errCh:
return errwrap.Wrapf(err, "error waiting for container start")
}
return nil
}
// containerStop stops the specified container and waits for it to stop.
func (obj *DockerContainerRes) containerStop(ctx context.Context, id string, timeout *time.Duration) error {
ch, errCh := obj.client.ContainerWait(ctx, id, container.WaitConditionNotRunning)
obj.client.ContainerStop(ctx, id, timeout)
select {
case <-ch:
case err := <-errCh:
return errwrap.Wrapf(err, "error waiting for container to stop")
}
return nil
}
// containerRemove removes the specified container and waits for it to be
// removed.
func (obj *DockerContainerRes) containerRemove(ctx context.Context, id string, opts types.ContainerRemoveOptions) error {
ch, errCh := obj.client.ContainerWait(ctx, id, container.WaitConditionRemoved)
obj.client.ContainerRemove(ctx, id, opts)
select {
case <-ch:
case err := <-errCh:
return errwrap.Wrapf(err, "error waiting for container to be removed")
}
return nil
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *DockerContainerRes) Cmp(r engine.Res) error {
// we can only compare DockerContainerRes to others of the same resource kind
res, ok := r.(*DockerContainerRes)
if !ok {
return fmt.Errorf("error casting r to *DockerContainerRes")
}
if obj.State != res.State {
return fmt.Errorf("the State differs")
}
if obj.Image != res.Image {
return fmt.Errorf("the Image differs")
}
if err := util.SortedStrSliceCompare(obj.Cmd, res.Cmd); err != nil {
return errwrap.Wrapf(err, "the Cmd field differs")
}
if err := util.SortedStrSliceCompare(obj.Env, res.Env); err != nil {
return errwrap.Wrapf(err, "tne Env field differs")
}
if len(obj.Ports) != len(res.Ports) {
return fmt.Errorf("the Ports length differs")
}
for k, v := range obj.Ports {
for p, q := range v {
if w, ok := res.Ports[k][p]; !ok || q != w {
return fmt.Errorf("the Ports field differs")
}
}
}
if obj.APIVersion != res.APIVersion {
return fmt.Errorf("the APIVersion differs")
}
if obj.Force != res.Force {
return fmt.Errorf("the Force field differs")
}
return nil
}
// DockerContainerUID is the UID struct for DockerContainerRes.
type DockerContainerUID struct {
engine.BaseUID
name string
}
// DockerContainerResAutoEdges holds the state of the auto edge generator.
type DockerContainerResAutoEdges struct {
UIDs []engine.ResUID
pointer int
}
// AutoEdges returns edges to any docker:image resource that matches the image
// specified in the docker:container resource definition.
func (obj *DockerContainerRes) AutoEdges() (engine.AutoEdge, error) {
var result []engine.ResUID
var reversed bool
if obj.State != "removed" {
reversed = true
}
result = append(result, &DockerImageUID{
BaseUID: engine.BaseUID{
Reversed: &reversed,
},
image: dockerImageNameTag(obj.Image),
})
return &DockerContainerResAutoEdges{
UIDs: result,
pointer: 0,
}, nil
}
// Next returns the next automatic edge.
func (obj *DockerContainerResAutoEdges) Next() []engine.ResUID {
if len(obj.UIDs) == 0 {
return nil
}
value := obj.UIDs[obj.pointer]
obj.pointer++
return []engine.ResUID{value}
}
// Test gets results of the earlier Next() call, and returns whether we should
// continue.
func (obj *DockerContainerResAutoEdges) Test(input []bool) bool {
if len(obj.UIDs) <= obj.pointer {
return false
}
if len(input) != 1 { // in case we get given bad data
panic("Expecting a single value!")
}
return true // keep going
}
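// The Next/Test pair above forms a small iterator protocol: a caller asks for
// the next candidate UIDs with Next, reports whether they matched via Test,
// and stops once Test returns false. The function below is an illustrative
// sketch of that driving loop (it is not engine code, and the matched
// callback is a hypothetical stand-in for the engine's real matching logic).
func exampleDriveContainerAutoEdges(ae *DockerContainerResAutoEdges, matched func(engine.ResUID) bool) {
    for {
        uids := ae.Next()
        if len(uids) == 0 {
            return // nothing left to offer
        }
        input := []bool{}
        for _, uid := range uids {
            input = append(input, matched(uid))
        }
        if !ae.Test(input) {
            return // the iterator asked us to stop
        }
    }
}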
// UIDs includes all params to make a unique identification of this object. Most
// resources only return one, although some resources can return multiple.
func (obj *DockerContainerRes) UIDs() []engine.ResUID {
x := &DockerContainerUID{
BaseUID: engine.BaseUID{Name: obj.Name(), Kind: obj.Kind()},
name: obj.Name(),
}
return []engine.ResUID{x}
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *DockerContainerRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes DockerContainerRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*DockerContainerRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to DockerContainerRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = DockerContainerRes(raw) // restore from indirection with type conversion!
return nil
}
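// The Ports field consumed by CheckApply above maps a protocol name to
// container-port -> host-port pairs, which become nat.Port keys like
// "80/tcp". This value is an illustrative (made-up) example, assuming the
// field is declared as map[string]map[int64]int64, the key and value types
// implied by the loop in CheckApply.
var examplePorts = map[string]map[int64]int64{
    "tcp": {
        80:  8080, // publish container port 80 on host port 8080
        443: 8443,
    },
    "udp": {
        53: 5353,
    },
}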


@@ -0,0 +1,202 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build !nodocker
package resources
import (
"context"
"fmt"
"io/ioutil"
"log"
"os"
"testing"
"time"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
)
var res *DockerContainerRes
var id string
func TestMain(m *testing.M) {
var setupCode, testCode, cleanupCode int
if err := setup(); err != nil {
log.Printf("error during setup: %s", err)
setupCode = 1
}
if setupCode == 0 {
testCode = m.Run()
}
if err := cleanup(); err != nil {
log.Printf("error during cleanup: %s", err)
cleanupCode = 1
}
os.Exit(setupCode + testCode + cleanupCode)
}
func Test_containerStart(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
if err := res.containerStart(ctx, id, types.ContainerStartOptions{}); err != nil {
t.Errorf("containerStart() error: %s", err)
return
}
l, err := res.client.ContainerList(
ctx,
types.ContainerListOptions{
Filters: filters.NewArgs(
filters.KeyValuePair{Key: "id", Value: id},
filters.KeyValuePair{Key: "status", Value: "running"},
),
},
)
if err != nil {
t.Errorf("error listing containers: %s", err)
return
}
if len(l) != 1 {
t.Errorf("failed to start container")
return
}
}
func Test_containerStop(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
if err := res.containerStop(ctx, id, nil); err != nil {
t.Errorf("containerStop() error: %s", err)
return
}
l, err := res.client.ContainerList(
ctx,
types.ContainerListOptions{
Filters: filters.NewArgs(
filters.KeyValuePair{Key: "id", Value: id},
),
},
)
if err != nil {
t.Errorf("error listing containers: %s", err)
return
}
if len(l) != 0 {
t.Errorf("failed to stop container")
return
}
}
func Test_containerRemove(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
if err := res.containerRemove(ctx, id, types.ContainerRemoveOptions{}); err != nil {
t.Errorf("containerRemove() error: %s", err)
return
}
l, err := res.client.ContainerList(
ctx,
types.ContainerListOptions{
All: true,
Filters: filters.NewArgs(
filters.KeyValuePair{Key: "id", Value: id},
),
},
)
if err != nil {
t.Errorf("error listing containers: %s", err)
return
}
if len(l) != 0 {
t.Errorf("failed to remove container")
return
}
}
func setup() error {
var err error
ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
res = &DockerContainerRes{}
if err := res.Init(res.init); err != nil {
    return fmt.Errorf("error initializing resource: %s", err)
}
p, err := res.client.ImagePull(ctx, "alpine", types.ImagePullOptions{})
if err != nil {
return fmt.Errorf("error pulling image: %s", err)
}
if _, err := ioutil.ReadAll(p); err != nil {
return fmt.Errorf("error reading image pull result: %s", err)
}
resp, err := res.client.ContainerCreate(
ctx,
&container.Config{
Image: "alpine",
Cmd: []string{"sleep", "100"},
},
&container.HostConfig{},
nil,
nil,
"mgmt-test",
)
if err != nil {
return fmt.Errorf("error creating container: %s", err)
}
id = resp.ID
return nil
}
func cleanup() error {
ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
l, err := res.client.ContainerList(
ctx,
types.ContainerListOptions{
All: true,
Filters: filters.NewArgs(filters.KeyValuePair{Key: "id", Value: id}),
},
)
if err != nil {
return fmt.Errorf("error listing containers: %s", err)
}
if len(l) > 0 {
if err := res.client.ContainerStop(ctx, id, nil); err != nil {
return fmt.Errorf("error stopping container: %s", err)
}
if err := res.client.ContainerRemove(ctx, id, types.ContainerRemoveOptions{}); err != nil {
return fmt.Errorf("error removing container: %s", err)
}
}
return nil
}


@@ -0,0 +1,295 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build !nodocker
package resources
import (
"context"
"fmt"
"io/ioutil"
"regexp"
"strings"
"time"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/client"
errwrap "github.com/pkg/errors"
)
const (
// dockerImageInitCtxTimeout is the length of time, in seconds, before
// requests are cancelled in Init.
dockerImageInitCtxTimeout = 20
// dockerImageCheckApplyCtxTimeout is the length of time, in seconds,
// before requests are cancelled in CheckApply.
dockerImageCheckApplyCtxTimeout = 120
)
func init() {
engine.RegisterResource("docker:image", func() engine.Res { return &DockerImageRes{} })
}
// DockerImageRes is a docker image resource. The resource's name must be a
// docker image in any supported format (url, image, or image:tag).
type DockerImageRes struct {
traits.Base // add the base methods without re-implementation
traits.Edgeable
// State of the image must be exists or absent.
State string `yaml:"state"`
// APIVersion allows you to override the host's default client API
// version.
APIVersion string `yaml:"apiversion"`
image string // full image:tag format
client *client.Client // docker api client
init *engine.Init
}
// Default returns some sensible defaults for this resource.
func (obj *DockerImageRes) Default() engine.Res {
return &DockerImageRes{
// TODO: eventually if image supports other properties, this can
// be left out and we could have the state be "unmanaged".
State: "exists",
}
}
// Validate if the params passed in are valid data.
func (obj *DockerImageRes) Validate() error {
// validate state
if obj.State != "exists" && obj.State != "absent" {
return fmt.Errorf("state must be exists or absent")
}
// validate APIVersion
if obj.APIVersion != "" {
verOK, err := regexp.MatchString(`^(v)[1-9]\.[0-9]\d*$`, obj.APIVersion)
if err != nil {
return errwrap.Wrapf(err, "error matching apiversion string")
}
if !verOK {
return fmt.Errorf("invalid apiversion: %s", obj.APIVersion)
}
}
return nil
}
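// A few illustrative apiversion values against the pattern above (all of the
// strings are made-up examples):
//
//	"v1.24" // valid
//	"v1.0"  // valid
//	"1.24"  // invalid: missing the leading "v"
//	"v0.9"  // invalid: the major version must start at 1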
// Init runs some startup code for this resource.
func (obj *DockerImageRes) Init(init *engine.Init) error {
var err error
obj.init = init // save for later
// Save the full image name and tag.
obj.image = dockerImageNameTag(obj.Name())
ctx, cancel := context.WithTimeout(context.Background(), dockerImageInitCtxTimeout*time.Second)
defer cancel()
// Initialize the docker client.
obj.client, err = client.NewClientWithOpts(client.WithVersion(obj.APIVersion))
if err != nil {
return errwrap.Wrapf(err, "error creating docker client")
}
// Validate the image.
resp, err := obj.client.ImageSearch(ctx, obj.image, types.ImageSearchOptions{Limit: 1})
if err != nil {
return errwrap.Wrapf(err, "error searching for image")
}
if len(resp) == 0 {
return fmt.Errorf("image: %s not found", obj.image)
}
return nil
}
// Close is run by the engine to clean up after the resource is done.
func (obj *DockerImageRes) Close() error {
return obj.client.Close() // close the docker client
}
// Watch is the primary listener for this resource and it outputs events.
func (obj *DockerImageRes) Watch() error {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
eventChan, errChan := obj.client.Events(ctx, types.EventsOptions{})
// notify engine that we're running
obj.init.Running()
var send = false // send event?
for {
select {
case event, ok := <-eventChan:
if !ok { // channel shutdown
return nil
}
if obj.init.Debug {
obj.init.Logf("%+v", event)
}
send = true
case err, ok := <-errChan:
if !ok {
return nil
}
return err
case <-obj.init.Done: // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply method for the Docker image resource.
func (obj *DockerImageRes) CheckApply(apply bool) (checkOK bool, err error) {
ctx, cancel := context.WithTimeout(context.Background(), dockerImageCheckApplyCtxTimeout*time.Second)
defer cancel()
s, err := obj.client.ImageList(ctx, types.ImageListOptions{
Filters: filters.NewArgs(filters.Arg("reference", obj.image)),
})
if err != nil {
return false, errwrap.Wrapf(err, "error listing images")
}
if len(s) > 1 {
return false, fmt.Errorf("more than one image found")
}
if obj.State == "absent" && len(s) == 0 {
return true, nil
}
if obj.State == "exists" && len(s) == 1 {
return true, nil
}
if !apply {
return false, nil
}
if obj.State == "absent" {
// TODO: force? prune children?
if _, err := obj.client.ImageRemove(ctx, obj.image, types.ImageRemoveOptions{}); err != nil {
return false, errwrap.Wrapf(err, "error removing image")
}
return false, nil
}
// pull the image
p, err := obj.client.ImagePull(ctx, obj.image, types.ImagePullOptions{})
if err != nil {
return false, errwrap.Wrapf(err, "error pulling image")
}
// Wait for the image to download, EOF signals that it's done.
if _, err := ioutil.ReadAll(p); err != nil {
return false, errwrap.Wrapf(err, "error reading image pull result")
}
return false, nil
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *DockerImageRes) Cmp(r engine.Res) error {
// we can only compare DockerImageRes to others of the same resource kind
res, ok := r.(*DockerImageRes)
if !ok {
return fmt.Errorf("error casting r to *DockerImageRes")
}
if obj.State != res.State {
return fmt.Errorf("the State differs")
}
if obj.APIVersion != res.APIVersion {
return fmt.Errorf("the APIVersion differs")
}
return nil
}
// DockerImageUID is the UID struct for DockerImageRes.
type DockerImageUID struct {
engine.BaseUID
image string
}
// UIDs includes all params to make a unique identification of this object. Most
// resources only return one, although some resources can return multiple.
func (obj *DockerImageRes) UIDs() []engine.ResUID {
x := &DockerImageUID{
BaseUID: engine.BaseUID{Name: obj.Name(), Kind: obj.Kind()},
image: dockerImageNameTag(obj.Name()),
}
return []engine.ResUID{x}
}
// AutoEdges returns the AutoEdge interface.
func (obj *DockerImageRes) AutoEdges() (engine.AutoEdge, error) {
return nil, nil
}
// IFF aka if and only if they are equivalent, return true. If not, false.
func (obj *DockerImageUID) IFF(uid engine.ResUID) bool {
res, ok := uid.(*DockerImageUID)
if !ok {
return false
}
return obj.image == res.image
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *DockerImageRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes DockerImageRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*DockerImageRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to DockerImageRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = DockerImageRes(raw) // restore from indirection with type conversion!
return nil
}
// dockerImageNameTag does a naive check to see if the input includes a tag or
// is a url, and if not, appends the `:latest` tag to ensure disambiguation.
func dockerImageNameTag(image string) string {
if strings.Contains(image, ":") {
return image
}
return image + ":latest"
}
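// A few illustrative inputs and what dockerImageNameTag returns for them (the
// names are made-up examples):
//
//	dockerImageNameTag("alpine")            // "alpine:latest"
//	dockerImageNameTag("alpine:3.14")       // "alpine:3.14" (unchanged)
//	dockerImageNameTag("registry:5000/img") // unchanged: the ":" makes the
//	                                        // naive check treat it as tagged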

engine/resources/exec.go Normal file

@@ -0,0 +1,887 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"bufio"
"bytes"
"context"
"fmt"
"os/exec"
"os/user"
"sort"
"strings"
"sync"
"syscall"
"time"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
engineUtil "github.com/purpleidea/mgmt/engine/util"
"github.com/purpleidea/mgmt/util/errwrap"
)
func init() {
engine.RegisterResource("exec", func() engine.Res { return &ExecRes{} })
}
// ExecRes is an exec resource for running commands.
type ExecRes struct {
traits.Base // add the base methods without re-implementation
traits.Edgeable
traits.Sendable
init *engine.Init
// Cmd is the command to run. If this is not specified, we use the name.
Cmd string `yaml:"cmd"`
// Args is a list of args to pass to Cmd. This can be used *instead* of
// passing the full command and args as a single string to Cmd. It can
// only be used when a Shell is *not* specified. The advantage of this
// is that you don't have to worry about escape characters.
Args []string `yaml:"args"`
// Cwd is the dir to run the command in. If empty, then this will use
// the working directory of the calling process. (This process is mgmt,
// not the process being run here.)
Cwd string `yaml:"cwd"`
// Shell is the (optional) shell to use to run the cmd. If you specify
// this, then you can't use the Args parameter.
Shell string `yaml:"shell"`
// Timeout is the number of seconds to wait before sending a Kill to the
// running command. If the Kill is received before the process exits,
// then this is treated as an error.
Timeout uint64 `yaml:"timeout"`
// Env allows the user to specify environment variables for script
// execution. These are specified as a map in the format VAR_NAME -> value.
Env map[string]string `yaml:"env"`
// WatchCmd is the command to run to detect event changes. Each line of
// output from this command is treated as an event.
WatchCmd string `yaml:"watchcmd"`
// WatchCwd is the Cwd for the WatchCmd. See the docs for Cwd.
WatchCwd string `yaml:"watchcwd"`
// WatchShell is the Shell for the WatchCmd. See the docs for Shell.
WatchShell string `yaml:"watchshell"`
// IfCmd is the command that runs to guard against running the Cmd. If
// this command succeeds, then Cmd *will* be run. If this command
// returns a non-zero result, then the Cmd will not be run. Any error
// scenario or timeout will cause the resource to error.
IfCmd string `yaml:"ifcmd"`
// IfCwd is the Cwd for the IfCmd. See the docs for Cwd.
IfCwd string `yaml:"ifcwd"`
// IfShell is the Shell for the IfCmd. See the docs for Shell.
IfShell string `yaml:"ifshell"`
// User is the (optional) user to use to execute the command. It is used
// for any command being run.
User string `yaml:"user"`
// Group is the (optional) group to use to execute the command. It is
// used for any command being run.
Group string `yaml:"group"`
output *string // all cmd output, read only, do not set!
stdout *string // the cmd stdout, read only, do not set!
stderr *string // the cmd stderr, read only, do not set!
interruptChan chan struct{}
wg *sync.WaitGroup
}
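// An illustrative ExecRes value showing how the fields above combine: Args is
// used instead of embedding arguments in Cmd, which (per Validate below) also
// means Shell must stay empty. All of the paths and values here are made-up
// examples, not defaults.
var exampleExecRes = &ExecRes{
    Cmd:      "/usr/bin/touch",
    Args:     []string{"/tmp/mgmt-example"},
    Timeout:  30, // kill the command if it runs longer than 30 seconds
    Env:      map[string]string{"EXAMPLE_VAR": "1"},
    IfCmd:    "/usr/bin/test ! -e /tmp/mgmt-example",  // only run Cmd if this succeeds
    WatchCmd: "/usr/bin/tail -F /var/log/example.log", // each output line is an event
}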
// Default returns some sensible defaults for this resource.
func (obj *ExecRes) Default() engine.Res {
return &ExecRes{}
}
// getCmd returns the actual command to run. When Cmd is not specified, we use
// the Name.
func (obj *ExecRes) getCmd() string {
if obj.Cmd != "" {
return obj.Cmd
}
return obj.Name()
}
// Validate if the params passed in are valid data.
func (obj *ExecRes) Validate() error {
if obj.getCmd() == "" { // this is the only thing that is really required
return fmt.Errorf("the Cmd can't be empty")
}
split := strings.Fields(obj.getCmd())
if len(obj.Args) > 0 && obj.Shell != "" {
return fmt.Errorf("the Args param can't be used with a Shell")
}
if len(obj.Args) > 0 && len(split) > 1 {
return fmt.Errorf("the Args param can't be used when Cmd has args")
}
// check that, if a user or a group is set, we're running as root
if obj.User != "" || obj.Group != "" {
currentUser, err := user.Current()
if err != nil {
return errwrap.Wrapf(err, "error looking up current user")
}
if currentUser.Uid != "0" {
return fmt.Errorf("running as root is required if you want to use exec with a different user/group")
}
}
// check that environment variables' format is valid
for key := range obj.Env {
if err := isNameValid(key); err != nil {
return errwrap.Wrapf(err, "invalid variable name")
}
}
return nil
}
// Init runs some startup code for this resource.
func (obj *ExecRes) Init(init *engine.Init) error {
obj.init = init // save for later
obj.interruptChan = make(chan struct{})
obj.wg = &sync.WaitGroup{}
return nil
}
// Close is run by the engine to clean up after the resource is done.
func (obj *ExecRes) Close() error {
return nil
}
// Watch is the primary listener for this resource and it outputs events.
func (obj *ExecRes) Watch() error {
ioChan := make(chan *cmdOutput)
defer obj.wg.Wait()
if obj.WatchCmd != "" {
var cmdName string
var cmdArgs []string
if obj.WatchShell == "" {
// call without a shell
// FIXME: are there still whitespace splitting issues?
split := strings.Fields(obj.WatchCmd)
cmdName = split[0]
//d, _ := os.Getwd() // TODO: how does this ever error ?
//cmdName = path.Join(d, cmdName)
cmdArgs = split[1:]
} else {
cmdName = obj.WatchShell // usually bash, or sh
cmdArgs = []string{"-c", obj.WatchCmd}
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
cmd := exec.CommandContext(ctx, cmdName, cmdArgs...)
cmd.Dir = obj.WatchCwd // run program in pwd if ""
// ignore signals sent to parent process (we're in our own group)
cmd.SysProcAttr = &syscall.SysProcAttr{
Setpgid: true,
Pgid: 0,
}
// if we have a user and group, use them
var err error
if cmd.SysProcAttr.Credential, err = obj.getCredential(); err != nil {
return errwrap.Wrapf(err, "error while setting credential")
}
if ioChan, err = obj.cmdOutputRunner(ctx, cmd); err != nil {
return errwrap.Wrapf(err, "error starting WatchCmd")
}
}
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
select {
case data, ok := <-ioChan:
if !ok { // EOF
// FIXME: add an "if watch command ends/crashes"
// restart or generate error option
return fmt.Errorf("reached EOF")
}
if err := data.err; err != nil {
// error reading input or cmd failure
exitErr, ok := err.(*exec.ExitError) // embeds an os.ProcessState
if !ok {
// command failed in some bad way
return errwrap.Wrapf(err, "watchcmd failed in some bad way")
}
pStateSys := exitErr.Sys() // (*os.ProcessState) Sys
wStatus, ok := pStateSys.(syscall.WaitStatus)
if !ok {
return errwrap.Wrapf(err, "could not get exit status of watchcmd")
}
exitStatus := wStatus.ExitStatus()
if exitStatus == 0 {
// i'm not sure if this could happen
return errwrap.Wrapf(err, "unexpected watchcmd exit status of zero")
}
obj.init.Logf("watchcmd exited with: %d", exitStatus)
return errwrap.Wrapf(err, "watchcmd errored")
}
// each time we get a line of output, we loop!
if s := data.text; s == "" {
obj.init.Logf("watch output is empty!")
} else {
obj.init.Logf("watch output is:")
obj.init.Logf(s)
}
if data.text != "" {
send = true
}
case <-obj.init.Done: // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply checks the resource state and applies the resource if the bool
// input is true. It returns error info and if the state check passed or not.
// TODO: expand the IfCmd to be a list of commands
func (obj *ExecRes) CheckApply(apply bool) (bool, error) {
// If we receive a refresh signal, then the engine skips the IsStateOK()
// check and this will run. It is still guarded by the IfCmd, but it can
// have a chance to execute, and all without the check of obj.Refresh()!
if obj.IfCmd != "" { // if there is no onlyif check, we should just run
var cmdName string
var cmdArgs []string
if obj.IfShell == "" {
// call without a shell
// FIXME: are there still whitespace splitting issues?
split := strings.Fields(obj.IfCmd)
cmdName = split[0]
//d, _ := os.Getwd() // TODO: how does this ever error ?
//cmdName = path.Join(d, cmdName)
cmdArgs = split[1:]
} else {
cmdName = obj.IfShell // usually bash, or sh
cmdArgs = []string{"-c", obj.IfCmd}
}
cmd := exec.Command(cmdName, cmdArgs...)
cmd.Dir = obj.IfCwd // run program in pwd if ""
// ignore signals sent to parent process (we're in our own group)
cmd.SysProcAttr = &syscall.SysProcAttr{
Setpgid: true,
Pgid: 0,
}
// if we have a user and group, use them
var err error
if cmd.SysProcAttr.Credential, err = obj.getCredential(); err != nil {
return false, errwrap.Wrapf(err, "error while setting credential")
}
var out splitWriter
out.Init()
cmd.Stdout = out.Stdout
cmd.Stderr = out.Stderr
if err := cmd.Run(); err != nil {
exitErr, ok := err.(*exec.ExitError) // embeds an os.ProcessState
if !ok {
// command failed in some bad way
return false, errwrap.Wrapf(err, "ifcmd failed in some bad way")
}
pStateSys := exitErr.Sys() // (*os.ProcessState) Sys
wStatus, ok := pStateSys.(syscall.WaitStatus)
if !ok {
return false, errwrap.Wrapf(err, "could not get exit status of ifcmd")
}
exitStatus := wStatus.ExitStatus()
if exitStatus == 0 {
// i'm not sure if this could happen
return false, errwrap.Wrapf(err, "unexpected ifcmd exit status of zero")
}
obj.init.Logf("ifcmd exited with: %d", exitStatus)
if s := out.String(); s == "" {
obj.init.Logf("ifcmd output is empty!")
} else {
obj.init.Logf("ifcmd output is:")
obj.init.Logf(s)
}
return true, nil // don't run
}
if s := out.String(); s == "" {
obj.init.Logf("ifcmd output is empty!")
} else {
obj.init.Logf("ifcmd output is:")
obj.init.Logf(s)
}
}
// state is not okay, no work done, exit, but without error
if !apply {
return false, nil
}
// apply portion
obj.init.Logf("Apply")
var cmdName string
var cmdArgs []string
if obj.Shell == "" {
// call without a shell
// FIXME: are there still whitespace splitting issues?
// TODO: we could make the split character user selectable...!
split := strings.Fields(obj.getCmd())
cmdName = split[0]
//d, _ := os.Getwd() // TODO: how does this ever error ?
//cmdName = path.Join(d, cmdName)
cmdArgs = split[1:]
if len(obj.Args) > 0 {
if len(split) != 1 { // should not happen
return false, fmt.Errorf("validation error")
}
cmdArgs = obj.Args
}
} else {
cmdName = obj.Shell // usually bash, or sh
cmdArgs = []string{"-c", obj.getCmd()}
}
wg := &sync.WaitGroup{}
defer wg.Wait() // this must be above the defer cancel() call
var ctx context.Context
var cancel context.CancelFunc
if obj.Timeout > 0 { // cmd.Process.Kill() is called on timeout
ctx, cancel = context.WithTimeout(context.Background(), time.Duration(obj.Timeout)*time.Second)
} else { // zero timeout means no timer
ctx, cancel = context.WithCancel(context.Background())
}
defer cancel()
cmd := exec.CommandContext(ctx, cmdName, cmdArgs...)
cmd.Dir = obj.Cwd // run program in pwd if ""
envKeys := []string{}
for key := range obj.Env {
envKeys = append(envKeys, key)
}
sort.Strings(envKeys)
cmdEnv := []string{}
for _, k := range envKeys {
cmdEnv = append(cmdEnv, k+"="+obj.Env[k])
}
cmd.Env = cmdEnv
// ignore signals sent to parent process (we're in our own group)
cmd.SysProcAttr = &syscall.SysProcAttr{
Setpgid: true,
Pgid: 0,
}
// if we have a user and group, use them
var err error
if cmd.SysProcAttr.Credential, err = obj.getCredential(); err != nil {
return false, errwrap.Wrapf(err, "error while setting credential")
}
var out splitWriter
out.Init()
// from the docs: "If Stdout and Stderr are the same writer, at most one
// goroutine at a time will call Write." so we trick it here!
cmd.Stdout = out.Stdout
cmd.Stderr = out.Stderr
if err := cmd.Start(); err != nil {
return false, errwrap.Wrapf(err, "error starting cmd")
}
wg.Add(1)
go func() {
defer wg.Done()
select {
case <-obj.interruptChan:
cancel()
case <-ctx.Done():
// let this exit
}
}()
err = cmd.Wait() // we can unblock this with the timeout
// save in memory for send/recv
// we use pointers to strings to indicate if used or not
if out.Stdout.Activity || out.Stderr.Activity {
str := out.String()
obj.output = &str
}
if out.Stdout.Activity {
str := out.Stdout.String()
obj.stdout = &str
}
if out.Stderr.Activity {
str := out.Stderr.String()
obj.stderr = &str
}
// process the err result from cmd, we process non-zero exits here too!
exitErr, ok := err.(*exec.ExitError) // embeds an os.ProcessState
if err != nil && ok {
pStateSys := exitErr.Sys() // (*os.ProcessState) Sys
wStatus, ok := pStateSys.(syscall.WaitStatus)
if !ok {
return false, errwrap.Wrapf(err, "error running cmd")
}
exitStatus := wStatus.ExitStatus()
if !wStatus.Signaled() { // not a timeout or cancel (no signal)
return false, errwrap.Wrapf(err, "cmd error, exit status: %d", exitStatus)
}
sig := wStatus.Signal()
// we get this on timeout, because ctx calls cmd.Process.Kill()
if sig == syscall.SIGKILL {
return false, errwrap.Wrapf(err, "cmd timeout, exit status: %d", exitStatus)
}
return false, errwrap.Wrapf(err, "unknown cmd error, signal: %s, exit status: %d", sig, exitStatus)
} else if err != nil {
return false, errwrap.Wrapf(err, "general cmd error")
}
// TODO: if we printed the stdout while the command is running, this
// would be nice, but it would require terminal log output that doesn't
// interleave all the parallel parts which would mix it all up...
if s := out.String(); s == "" {
obj.init.Logf("command output is empty!")
} else {
obj.init.Logf("command output is:")
obj.init.Logf(s)
}
if err := obj.init.Send(&ExecSends{
Output: obj.output,
Stdout: obj.stdout,
Stderr: obj.stderr,
}); err != nil {
return false, err
}
// The state tracking is for exec resources that can't "detect" their
// state, and assume it's invalid when the Watch() function triggers.
// If we apply state successfully, we should reset it here so that we
// know that we have applied since the state was set not ok by event!
// This now happens automatically after the engine runs CheckApply().
return false, nil // success
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *ExecRes) Cmp(r engine.Res) error {
// we can only compare ExecRes to others of the same resource kind
res, ok := r.(*ExecRes)
if !ok {
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.Cmd != res.Cmd {
return fmt.Errorf("the Cmd differs")
}
if len(obj.Args) != len(res.Args) {
return fmt.Errorf("the Args differ")
}
for i, a := range obj.Args {
if a != res.Args[i] {
return fmt.Errorf("the Args differ at index: %d", i)
}
}
if obj.Cwd != res.Cwd {
return fmt.Errorf("the Cwd differs")
}
if obj.Shell != res.Shell {
return fmt.Errorf("the Shell differs")
}
if obj.Timeout != res.Timeout {
return fmt.Errorf("the Timeout differs")
}
if obj.WatchCmd != res.WatchCmd {
return fmt.Errorf("the WatchCmd differs")
}
if obj.WatchCwd != res.WatchCwd {
return fmt.Errorf("the WatchCwd differs")
}
if obj.WatchShell != res.WatchShell {
return fmt.Errorf("the WatchShell differs")
}
if obj.IfCmd != res.IfCmd {
return fmt.Errorf("the IfCmd differs")
}
if obj.IfCwd != res.IfCwd {
return fmt.Errorf("the IfCwd differs")
}
if obj.IfShell != res.IfShell {
return fmt.Errorf("the IfShell differs")
}
if obj.User != res.User {
return fmt.Errorf("the User differs")
}
if obj.Group != res.Group {
return fmt.Errorf("the Group differs")
}
return nil
}
// Interrupt is called to ask the execution of this resource to end early.
func (obj *ExecRes) Interrupt() error {
close(obj.interruptChan)
return nil
}
// ExecUID is the UID struct for ExecRes.
type ExecUID struct {
engine.BaseUID
Cmd string
IfCmd string
// TODO: add more elements here
}
// ExecResAutoEdges holds the state of the auto edge generator.
type ExecResAutoEdges struct {
edges []engine.ResUID
pointer int
}
// Next returns the next automatic edge.
func (obj *ExecResAutoEdges) Next() []engine.ResUID {
if len(obj.edges) == 0 {
return nil
}
value := obj.edges[obj.pointer]
obj.pointer++
return []engine.ResUID{value}
}
// Test gets results of the earlier Next() call, and returns whether we should
// continue!
func (obj *ExecResAutoEdges) Test(input []bool) bool {
if len(obj.edges) <= obj.pointer {
return false
}
if len(input) != 1 { // in case we get given bad data
panic("Expecting a single value!")
}
return true // keep going
}
// AutoEdges returns the AutoEdge interface. In this case the systemd units.
func (obj *ExecRes) AutoEdges() (engine.AutoEdge, error) {
var data []engine.ResUID
var reversed = true
for _, x := range obj.cmdFiles() {
data = append(data, &PkgFileUID{
BaseUID: engine.BaseUID{
Name: obj.Name(),
Kind: obj.Kind(),
Reversed: &reversed,
},
path: x, // what matters
})
data = append(data, &FileUID{
BaseUID: engine.BaseUID{
Name: obj.Name(),
Kind: obj.Kind(),
Reversed: &reversed,
},
path: x,
})
}
if obj.User != "" {
data = append(data, &UserUID{
BaseUID: engine.BaseUID{
Name: obj.Name(),
Kind: obj.Kind(),
Reversed: &reversed,
},
name: obj.User,
})
}
if obj.Group != "" {
data = append(data, &GroupUID{
BaseUID: engine.BaseUID{
Name: obj.Name(),
Kind: obj.Kind(),
Reversed: &reversed,
},
name: obj.Group,
})
}
return &ExecResAutoEdges{
edges: data,
pointer: 0,
}, nil
}
// UIDs includes all params to make a unique identification of this object. Most
// resources only return one, although some resources can return multiple.
func (obj *ExecRes) UIDs() []engine.ResUID {
x := &ExecUID{
BaseUID: engine.BaseUID{Name: obj.Name(), Kind: obj.Kind()},
Cmd: obj.getCmd(),
IfCmd: obj.IfCmd,
// TODO: add more params here
}
return []engine.ResUID{x}
}
// ExecSends is the struct of data which is sent after a successful Apply.
type ExecSends struct {
// Output is the combined stdout and stderr of the command.
Output *string `lang:"output"`
// Stdout is the stdout of the command.
Stdout *string `lang:"stdout"`
// Stderr is the stderr of the command.
Stderr *string `lang:"stderr"`
}
// Sends represents the default struct of values we can send using Send/Recv.
func (obj *ExecRes) Sends() interface{} {
return &ExecSends{
Output: nil,
Stdout: nil,
Stderr: nil,
}
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *ExecRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes ExecRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*ExecRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to ExecRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = ExecRes(raw) // restore from indirection with type conversion!
return nil
}
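// A minimal sketch of how the custom UnmarshalYAML above would be exercised,
// assuming a gopkg.in/yaml.v2-style decoder; the YAML literal is illustrative
// only. Decoding through the rawRes indirection means the defaults from
// Default() are applied first and then overwritten by whatever the YAML sets.
//
//	res := &ExecRes{}
//	data := []byte("cmd: /bin/true\ntimeout: 10\n")
//	if err := yaml.Unmarshal(data, res); err != nil {
//		// handle the error
//	}
//	// res.Cmd == "/bin/true", res.Timeout == 10, other fields keep defaults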
// getCredential returns the correct *syscall.Credential if an User and Group
// are set.
func (obj *ExecRes) getCredential() (*syscall.Credential, error) {
var uid, gid int
var err error
var currentUser *user.User
if currentUser, err = user.Current(); err != nil {
return nil, errwrap.Wrapf(err, "error looking up current user")
}
if currentUser.Uid != "0" {
// since we're not root, we've got nothing to do
return nil, nil
}
if obj.Group != "" {
gid, err = engineUtil.GetGID(obj.Group)
if err != nil {
return nil, errwrap.Wrapf(err, "error looking up gid for %s", obj.Group)
}
}
if obj.User != "" {
uid, err = engineUtil.GetUID(obj.User)
if err != nil {
return nil, errwrap.Wrapf(err, "error looking up uid for %s", obj.User)
}
}
return &syscall.Credential{Uid: uint32(uid), Gid: uint32(gid)}, nil
}
// cmdFiles returns all the potential files/commands this command might need.
func (obj *ExecRes) cmdFiles() []string {
var paths []string
if obj.Shell != "" {
paths = append(paths, obj.Shell)
} else if cmdSplit := strings.Fields(obj.getCmd()); len(cmdSplit) > 0 {
paths = append(paths, cmdSplit[0])
}
if obj.WatchShell != "" {
paths = append(paths, obj.WatchShell)
} else if watchSplit := strings.Fields(obj.WatchCmd); len(watchSplit) > 0 {
paths = append(paths, watchSplit[0])
}
if obj.IfShell != "" {
paths = append(paths, obj.IfShell)
} else if ifSplit := strings.Fields(obj.IfCmd); len(ifSplit) > 0 {
paths = append(paths, ifSplit[0])
}
return paths
}
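// Illustrative cmdFiles output for a hypothetical resource with
// Cmd "/usr/bin/rsync -a src dst", WatchShell "/bin/bash" and
// IfCmd "/usr/bin/test -d dst":
//
//	[]string{"/usr/bin/rsync", "/bin/bash", "/usr/bin/test"}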
// cmdOutput is the output struct of the cmdOutputRunner channel output. You
// should always check the error first. If it's nil, then you can assume the
// text data is good to use.
type cmdOutput struct {
text string
err error
}
// cmdOutputRunner wraps the Cmd in with a StdoutPipe scanner and reads for
// errors. It runs Start and Wait, and reports runtime errors on the channel. If
// it can't start up the command, it will fail early. Once it's running, it will
// return the channel which can be used for the duration of the process.
// Cancelling the context merely unblocks the sending on the output channel, it
// does not Kill the cmd process. For that you must do it yourself elsewhere.
func (obj *ExecRes) cmdOutputRunner(ctx context.Context, cmd *exec.Cmd) (chan *cmdOutput, error) {
cmdReader, err := cmd.StdoutPipe()
if err != nil {
return nil, errwrap.Wrapf(err, "error creating StdoutPipe for Cmd")
}
scanner := bufio.NewScanner(cmdReader)
if err := cmd.Start(); err != nil {
return nil, errwrap.Wrapf(err, "error starting Cmd")
}
ch := make(chan *cmdOutput)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch)
for scanner.Scan() {
select {
case ch <- &cmdOutput{text: scanner.Text()}: // blocks here ?
case <-ctx.Done():
return
}
}
// on EOF, scanner.Err() will be nil
reterr := scanner.Err()
reterr = errwrap.Append(reterr, cmd.Wait()) // always run Wait()
// send any misc errors we encounter on the channel
if reterr != nil {
select {
case ch <- &cmdOutput{err: reterr}:
case <-ctx.Done():
return
}
}
}()
return ch, nil
}
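// An illustrative (hypothetical) consumer of cmdOutputRunner: it drains the
// returned channel until it closes which, per the comments above, happens
// once the command has exited and Wait has run. Remember that cancelling ctx
// only unblocks the channel sends; it does not kill the process.
func (obj *ExecRes) exampleConsumeOutput(ctx context.Context, cmd *exec.Cmd) error {
    ch, err := obj.cmdOutputRunner(ctx, cmd)
    if err != nil {
        return err // the command could not even be started
    }
    for out := range ch {
        if out.err != nil {
            return out.err // read error or command failure
        }
        obj.init.Logf("line: %s", out.text)
    }
    return nil // channel closed: EOF reached and Wait completed
}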
// splitWriter mimics what the ssh.CombinedOutput command does, but stores
// the stdout and stderr separately. This is slightly tricky because we don't
// want the combined output to be interleaved incorrectly. It creates sub writer
// structs which share the same lock and a shared output buffer.
type splitWriter struct {
Stdout *wrapWriter
Stderr *wrapWriter
stdout bytes.Buffer // just the stdout
stderr bytes.Buffer // just the stderr
output bytes.Buffer // combined output
mutex *sync.Mutex
initialized bool // is this initialized?
}
// Init initializes the splitWriter.
func (obj *splitWriter) Init() {
if obj.initialized {
panic("splitWriter is already initialized")
}
obj.mutex = &sync.Mutex{}
obj.Stdout = &wrapWriter{
Mutex: obj.mutex,
Buffer: &obj.stdout,
Output: &obj.output,
}
obj.Stderr = &wrapWriter{
Mutex: obj.mutex,
Buffer: &obj.stderr,
Output: &obj.output,
}
obj.initialized = true
}
// String returns the contents of the combined output buffer.
func (obj *splitWriter) String() string {
if !obj.initialized {
panic("splitWriter is not initialized")
}
return obj.output.String()
}
// wrapWriter is a simple writer which is used internally by splitWriter.
type wrapWriter struct {
Mutex *sync.Mutex
Buffer *bytes.Buffer // stdout or stderr
Output *bytes.Buffer // combined output
Activity bool // did we get any writes?
}
// Write writes to both bytes buffers with a parent lock to mix output safely.
func (obj *wrapWriter) Write(p []byte) (int, error) {
// TODO: can we move the lock to only guard around the Output.Write ?
obj.Mutex.Lock()
defer obj.Mutex.Unlock()
obj.Activity = true
i, err := obj.Buffer.Write(p) // first write
if err != nil {
return i, err
}
return obj.Output.Write(p) // shared write
}
// String returns the contents of the unshared buffer.
func (obj *wrapWriter) String() string {
return obj.Buffer.String()
}
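// A minimal sketch of the splitWriter pattern that CheckApply above relies
// on: one shared combined buffer plus separate stdout and stderr buffers, so
// the combined output never interleaves mid-write. The command is an
// illustrative example only.
func exampleSplitWriter() (stdout, stderr, combined string, err error) {
    var out splitWriter
    out.Init()
    cmd := exec.Command("/bin/sh", "-c", "echo out; echo err 1>&2")
    cmd.Stdout = out.Stdout
    cmd.Stderr = out.Stderr
    err = cmd.Run()
    return out.Stdout.String(), out.Stderr.String(), out.String(), err
}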
// isNameValid checks that environment variable name is valid.
func isNameValid(varName string) error {
if varName == "" {
return fmt.Errorf("variable name cannot be an empty string")
}
for i := range varName {
c := varName[i]
if i == 0 && '0' <= c && c <= '9' {
return fmt.Errorf("variable name cannot begin with number")
}
if !(c == '_' || '0' <= c && c <= '9' || 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z') {
return fmt.Errorf("invalid character in variable name")
}
}
return nil
}
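// Illustrative examples of what isNameValid accepts and rejects (all the
// names are made up):
//
//	isNameValid("PATH")     // nil: letters only
//	isNameValid("MY_VAR_2") // nil: underscores and digits are fine
//	isNameValid("2BAD")     // error: cannot begin with a number
//	isNameValid("BAD-NAME") // error: '-' is not an allowed character
//	isNameValid("")         // error: empty name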


@@ -0,0 +1,335 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build !root
package resources
import (
"context"
"fmt"
"os/exec"
"syscall"
"testing"
"time"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/graph/autoedge"
"github.com/purpleidea/mgmt/pgraph"
)
func fakeExecInit(t *testing.T) (*engine.Init, *ExecSends) {
debug := testing.Verbose() // set via the -test.v flag to `go test`
logf := func(format string, v ...interface{}) {
t.Logf("test: "+format, v...)
}
execSends := &ExecSends{}
return &engine.Init{
Send: func(st interface{}) error {
x, ok := st.(*ExecSends)
if !ok {
return fmt.Errorf("unable to send")
}
*execSends = *x // set
return nil
},
Debug: debug,
Logf: logf,
}, execSends
}
func TestExecSendRecv1(t *testing.T) {
r1 := &ExecRes{
Cmd: "echo hello world",
Shell: "/bin/bash",
}
if err := r1.Validate(); err != nil {
t.Errorf("validate failed with: %v", err)
}
defer func() {
if err := r1.Close(); err != nil {
t.Errorf("close failed with: %v", err)
}
}()
init, execSends := fakeExecInit(t)
if err := r1.Init(init); err != nil {
t.Errorf("init failed with: %v", err)
}
// run artificially without the entire engine
if _, err := r1.CheckApply(true); err != nil {
t.Errorf("checkapply failed with: %v", err)
}
t.Logf("output is: %v", execSends.Output)
if execSends.Output != nil {
t.Logf("output is: %v", *execSends.Output)
}
t.Logf("stdout is: %v", execSends.Stdout)
if execSends.Stdout != nil {
t.Logf("stdout is: %v", *execSends.Stdout)
}
t.Logf("stderr is: %v", execSends.Stderr)
if execSends.Stderr != nil {
t.Logf("stderr is: %v", *execSends.Stderr)
}
if execSends.Stdout == nil {
t.Errorf("stdout is nil")
} else {
if out := *execSends.Stdout; out != "hello world\n" {
t.Errorf("got wrong stdout(%d): %s", len(out), out)
}
}
}
func TestExecSendRecv2(t *testing.T) {
r1 := &ExecRes{
Cmd: "echo hello world 1>&2", // to stderr
Shell: "/bin/bash",
}
if err := r1.Validate(); err != nil {
t.Errorf("validate failed with: %v", err)
}
defer func() {
if err := r1.Close(); err != nil {
t.Errorf("close failed with: %v", err)
}
}()
init, execSends := fakeExecInit(t)
if err := r1.Init(init); err != nil {
t.Errorf("init failed with: %v", err)
}
// run artificially without the entire engine
if _, err := r1.CheckApply(true); err != nil {
t.Errorf("checkapply failed with: %v", err)
}
t.Logf("output is: %v", execSends.Output)
if execSends.Output != nil {
t.Logf("output is: %v", *execSends.Output)
}
t.Logf("stdout is: %v", execSends.Stdout)
if execSends.Stdout != nil {
t.Logf("stdout is: %v", *execSends.Stdout)
}
t.Logf("stderr is: %v", execSends.Stderr)
if execSends.Stderr != nil {
t.Logf("stderr is: %v", *execSends.Stderr)
}
if execSends.Stderr == nil {
t.Errorf("stderr is nil")
} else {
if out := *execSends.Stderr; out != "hello world\n" {
t.Errorf("got wrong stderr(%d): %s", len(out), out)
}
}
}
func TestExecSendRecv3(t *testing.T) {
r1 := &ExecRes{
Cmd: "echo hello world && echo goodbye world 1>&2", // to stdout && stderr
Shell: "/bin/bash",
}
if err := r1.Validate(); err != nil {
t.Errorf("validate failed with: %v", err)
}
defer func() {
if err := r1.Close(); err != nil {
t.Errorf("close failed with: %v", err)
}
}()
init, execSends := fakeExecInit(t)
if err := r1.Init(init); err != nil {
t.Errorf("init failed with: %v", err)
}
// run artificially without the entire engine
if _, err := r1.CheckApply(true); err != nil {
t.Errorf("checkapply failed with: %v", err)
}
t.Logf("output is: %v", execSends.Output)
if execSends.Output != nil {
t.Logf("output is: %v", *execSends.Output)
}
t.Logf("stdout is: %v", execSends.Stdout)
if execSends.Stdout != nil {
t.Logf("stdout is: %v", *execSends.Stdout)
}
t.Logf("stderr is: %v", execSends.Stderr)
if execSends.Stderr != nil {
t.Logf("stderr is: %v", *execSends.Stderr)
}
if execSends.Output == nil {
t.Errorf("output is nil")
} else {
// it looks like bash or golang race to the write, so whichever
// order they come out in is ok, as long as they come out whole
if out := *execSends.Output; out != "hello world\ngoodbye world\n" && out != "goodbye world\nhello world\n" {
t.Errorf("got wrong output(%d): %s", len(out), out)
}
}
if execSends.Stdout == nil {
t.Errorf("stdout is nil")
} else {
if out := *execSends.Stdout; out != "hello world\n" {
t.Errorf("got wrong stdout(%d): %s", len(out), out)
}
}
if execSends.Stderr == nil {
t.Errorf("stderr is nil")
} else {
if out := *execSends.Stderr; out != "goodbye world\n" {
t.Errorf("got wrong stderr(%d): %s", len(out), out)
}
}
}
func TestExecTimeoutBehaviour(t *testing.T) {
// cmd.Process.Kill() is called on timeout
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
defer cancel()
cmdName := "/bin/sleep" // it's /usr/bin/sleep on modern distros
cmdArgs := []string{"300"} // 5 min in seconds
cmd := exec.CommandContext(ctx, cmdName, cmdArgs...)
// ignore signals sent to parent process (we're in our own group)
cmd.SysProcAttr = &syscall.SysProcAttr{
Setpgid: true,
Pgid: 0,
}
if err := cmd.Start(); err != nil {
t.Errorf("error starting cmd: %+v", err)
return
}
err := cmd.Wait() // we can unblock this with the timeout
if err == nil {
t.Errorf("expected error, got nil")
return
}
exitErr, ok := err.(*exec.ExitError) // embeds an os.ProcessState
if err != nil && ok {
pStateSys := exitErr.Sys() // (*os.ProcessState) Sys
wStatus, ok := pStateSys.(syscall.WaitStatus)
if !ok {
t.Errorf("error running cmd")
return
}
if !wStatus.Signaled() {
t.Errorf("did not get signal, exit status: %d", wStatus.ExitStatus())
return
}
// we get this on timeout, because ctx calls cmd.Process.Kill()
if sig := wStatus.Signal(); sig != syscall.SIGKILL {
t.Errorf("got wrong signal: %+v, exit status: %d", sig, wStatus.ExitStatus())
return
}
t.Logf("exit status: %d", wStatus.ExitStatus())
return
} else if err != nil {
t.Errorf("general cmd error")
return
}
// no error
}
func TestExecAutoEdge1(t *testing.T) {
g, err := pgraph.NewGraph("TestGraph")
if err != nil {
t.Errorf("error creating graph: %v", err)
return
}
resUser, err := engine.NewNamedResource("user", "someuser")
if err != nil {
t.Errorf("error creating user resource: %v", err)
return
}
resGroup, err := engine.NewNamedResource("group", "somegroup")
if err != nil {
t.Errorf("error creating group resource: %v", err)
return
}
resFile, err := engine.NewNamedResource("file", "/somefile")
if err != nil {
t.Errorf("error creating group resource: %v", err)
return
}
resExec, err := engine.NewNamedResource("exec", "somefile")
if err != nil {
t.Errorf("error creating exec resource: %v", err)
return
}
exc := resExec.(*ExecRes)
exc.Cmd = resFile.Name()
exc.User = resUser.Name()
exc.Group = resGroup.Name()
g.AddVertex(resUser, resGroup, resFile, resExec)
if i := g.NumEdges(); i != 0 {
t.Errorf("should have 0 edges instead of: %d", i)
return
}
debug := testing.Verbose() // set via the -test.v flag to `go test`
logf := func(format string, v ...interface{}) {
t.Logf("test: "+format, v...)
}
if err := autoedge.AutoEdge(g, debug, logf); err != nil {
t.Errorf("error running autoedges: %v", err)
return
}
expected, err := pgraph.NewGraph("Expected")
if err != nil {
t.Errorf("error creating graph: %v", err)
return
}
expectEdge := func(from, to pgraph.Vertex) {
edge := &engine.Edge{Name: fmt.Sprintf("%s -> %s (expected)", from, to)}
expected.AddEdge(from, to, edge)
}
expectEdge(resFile, resExec)
expectEdge(resUser, resExec)
expectEdge(resGroup, resExec)
vertexCmp := func(v1, v2 pgraph.Vertex) (bool, error) { return v1 == v2, nil } // pointer compare is sufficient
edgeCmp := func(e1, e2 pgraph.Edge) (bool, error) { return true, nil } // we don't care about edges here
if err := expected.GraphCmp(g, vertexCmp, edgeCmp); err != nil {
t.Errorf("graph doesn't match expected: %s", err)
return
}
}

engine/resources/file.go Normal file

File diff suppressed because it is too large.

@@ -0,0 +1,265 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build !root
package resources
import (
"bytes"
"encoding/base64"
"encoding/gob"
"testing"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/graph/autoedge"
engineUtil "github.com/purpleidea/mgmt/engine/util"
"github.com/purpleidea/mgmt/pgraph"
)
func TestFileAutoEdge1(t *testing.T) {
g, err := pgraph.NewGraph("TestGraph")
if err != nil {
t.Errorf("error creating graph: %v", err)
return
}
r1 := &FileRes{
Path: "/tmp/a/b/", // some dir
}
r2 := &FileRes{
Path: "/tmp/a/", // some parent dir
}
r3 := &FileRes{
Path: "/tmp/a/b/c", // some child file
}
g.AddVertex(r1, r2, r3)
if i := g.NumEdges(); i != 0 {
t.Errorf("should have 0 edges instead of: %d", i)
}
debug := testing.Verbose() // set via the -test.v flag to `go test`
logf := func(format string, v ...interface{}) {
t.Logf("test: "+format, v...)
}
// run artificially without the entire engine
if err := autoedge.AutoEdge(g, debug, logf); err != nil {
t.Errorf("error running autoedges: %v", err)
}
// two edges should have been added
if i := g.NumEdges(); i != 2 {
t.Errorf("should have 2 edges instead of: %d", i)
}
}
func TestMiscEncodeDecode1(t *testing.T) {
var err error
// encode
var input interface{} = &FileRes{}
b1 := bytes.Buffer{}
e := gob.NewEncoder(&b1)
err = e.Encode(&input) // pass with &
if err != nil {
t.Errorf("gob failed to Encode: %v", err)
}
str := base64.StdEncoding.EncodeToString(b1.Bytes())
// decode
var output interface{}
bb, err := base64.StdEncoding.DecodeString(str)
if err != nil {
t.Errorf("base64 failed to Decode: %v", err)
}
b2 := bytes.NewBuffer(bb)
d := gob.NewDecoder(b2)
err = d.Decode(&output) // pass with &
if err != nil {
t.Errorf("gob failed to Decode: %v", err)
}
res1, ok := input.(engine.Res)
if !ok {
t.Errorf("input %v is not a Res", res1)
return
}
res2, ok := output.(engine.Res)
if !ok {
t.Errorf("output %v is not a Res", res2)
return
}
if err := res1.Cmp(res2); err != nil {
t.Errorf("the input and output Res values do not match: %+v", err)
}
}
func TestMiscEncodeDecode2(t *testing.T) {
var err error
// encode
input, err := engine.NewNamedResource("file", "file1")
if err != nil {
t.Errorf("can't create: %v", err)
return
}
// NOTE: Do not add this bit of code, because it would cause the path to
// get taken from the actual Path parameter, instead of using the name,
// and if we use the name, the Cmp function will detect if the name is
// stored properly or not.
//fileRes := input.(*FileRes) // must not panic
//fileRes.Path = "/tmp/whatever"
b64, err := engineUtil.ResToB64(input)
if err != nil {
t.Errorf("can't encode: %v", err)
return
}
output, err := engineUtil.B64ToRes(b64)
if err != nil {
t.Errorf("can't decode: %v", err)
return
}
res1, ok := input.(engine.Res)
if !ok {
t.Errorf("input %v is not a Res", res1)
return
}
res2, ok := output.(engine.Res)
if !ok {
t.Errorf("output %v is not a Res", res2)
return
}
// this uses the standalone file cmp function
if err := res1.Cmp(res2); err != nil {
t.Errorf("the input and output Res values do not match: %+v", err)
}
}
func TestMiscEncodeDecode3(t *testing.T) {
var err error
// encode
input, err := engine.NewNamedResource("file", "file1")
if err != nil {
t.Errorf("can't create: %v", err)
return
}
fileRes := input.(*FileRes) // must not panic
fileRes.Path = "/tmp/whatever"
// TODO: add other params/traits/etc here!
b64, err := engineUtil.ResToB64(input)
if err != nil {
t.Errorf("can't encode: %v", err)
return
}
output, err := engineUtil.B64ToRes(b64)
if err != nil {
t.Errorf("can't decode: %v", err)
return
}
res1, ok := input.(engine.Res)
if !ok {
t.Errorf("input %v is not a Res", res1)
return
}
res2, ok := output.(engine.Res)
if !ok {
t.Errorf("output %v is not a Res", res2)
return
}
// this uses the more complete, engine cmp function
if err := engine.ResCmp(res1, res2); err != nil {
t.Errorf("the input and output Res values do not match: %+v", err)
}
}
func TestMiscEncodeDecode4(t *testing.T) {
var err error
const (
Kind = "file"
Name = "file1"
)
// encode
input, err := engine.NewNamedResource(Kind, Name)
if err != nil {
t.Errorf("can't create: %v", err)
return
}
fileRes := input.(*FileRes) // must not panic
fileRes.Path = "/tmp/whatever"
// TODO: add other params/traits/etc here!
b64, err := engineUtil.ResToB64(input)
if err != nil {
t.Errorf("can't encode: %v", err)
return
}
output, err := engineUtil.B64ToRes(b64)
if err != nil {
t.Errorf("can't decode: %v", err)
return
}
res1, ok := input.(engine.Res)
if !ok {
t.Errorf("input %v is not a Res", res1)
return
}
res2, ok := output.(engine.Res)
if !ok {
t.Errorf("output %v is not a Res", res2)
return
}
// this uses the more complete, engine cmp function
if err := engine.ResCmp(res1, res2); err != nil {
t.Errorf("the input and output Res values do not match: %+v", err)
}
// ensure the kind and name are correctly decoded too!
if kind := res2.Kind(); kind != Kind {
t.Errorf("the output kind was `%s`, expected `%s`", kind, Kind)
}
if name := res2.Name(); name != Name {
t.Errorf("the output name was `%s`, expected `%s`", name, Name)
}
}
func TestFileAbsolute1(t *testing.T) {
// file resource paths should be absolute
f1 := &FileRes{
Path: "tmp/a/b", // some relative file
}
f2 := &FileRes{
Path: "tmp/a/b/", // some relative dir
}
f3 := &FileRes{
Path: "tmp", // some short relative file
}
if f1.Validate() == nil || f2.Validate() == nil || f3.Validate() == nil {
t.Errorf("file res should have failed validate")
}
}
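The four encode/decode tests above repeat the same round trip: encode with ResToB64, decode with B64ToRes, assert the engine.Res interface, then compare. A minimal sketch of how that boilerplate could be factored out is shown below; the roundTripCmp helper is an illustrative addition (not part of the test file) and assumes the ResToB64/B64ToRes signatures used above.

// roundTripCmp is a hypothetical helper: it base64-encodes a resource, decodes
// it back, and compares the two values with the supplied cmp function.
func roundTripCmp(t *testing.T, input engine.Res, cmp func(engine.Res, engine.Res) error) {
	t.Helper()
	b64, err := engineUtil.ResToB64(input)
	if err != nil {
		t.Errorf("can't encode: %v", err)
		return
	}
	output, err := engineUtil.B64ToRes(b64)
	if err != nil {
		t.Errorf("can't decode: %v", err)
		return
	}
	if err := cmp(input, output); err != nil {
		t.Errorf("the input and output Res values do not match: %+v", err)
	}
}

With such a helper, TestMiscEncodeDecode3 would shrink to building the FileRes and calling roundTripCmp(t, input, engine.ResCmp).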

engine/resources/group.go

@@ -0,0 +1,303 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"fmt"
"io/ioutil"
"os/exec"
"os/user"
"strconv"
"syscall"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
"github.com/purpleidea/mgmt/recwatch"
"github.com/purpleidea/mgmt/util/errwrap"
)
func init() {
engine.RegisterResource("group", func() engine.Res { return &GroupRes{} })
}
const groupFile = "/etc/group"
// GroupRes is a user group resource.
type GroupRes struct {
traits.Base // add the base methods without re-implementation
traits.Edgeable
init *engine.Init
State string `yaml:"state"` // state: exists, absent
GID *uint32 `yaml:"gid"` // the group's gid
recWatcher *recwatch.RecWatcher
}
// Default returns some sensible defaults for this resource.
func (obj *GroupRes) Default() engine.Res {
return &GroupRes{}
}
// Validate if the params passed in are valid data.
func (obj *GroupRes) Validate() error {
if obj.State != "exists" && obj.State != "absent" {
return fmt.Errorf("state must be 'exists' or 'absent'")
}
return nil
}
// Init runs some startup code for this resource.
func (obj *GroupRes) Init(init *engine.Init) error {
obj.init = init // save for later
return nil
}
// Close is run by the engine to clean up after the resource is done.
func (obj *GroupRes) Close() error {
return nil
}
// Watch is the primary listener for this resource and it outputs events.
func (obj *GroupRes) Watch() error {
var err error
obj.recWatcher, err = recwatch.NewRecWatcher(groupFile, false)
if err != nil {
return err
}
defer obj.recWatcher.Close()
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
if obj.init.Debug {
obj.init.Logf("Watching: %s", groupFile) // attempting to watch...
}
select {
case event, ok := <-obj.recWatcher.Events():
if !ok { // channel shutdown
return nil
}
if err := event.Error; err != nil {
return errwrap.Wrapf(err, "Unknown %s watcher error", obj)
}
if obj.init.Debug { // don't access event.Body if event.Error isn't nil
obj.init.Logf("Event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case <-obj.init.Done: // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply method for Group resource.
func (obj *GroupRes) CheckApply(apply bool) (bool, error) {
obj.init.Logf("CheckApply(%t)", apply)
// check if the group exists
exists := true
group, err := user.LookupGroup(obj.Name())
if err != nil {
if _, ok := err.(user.UnknownGroupError); !ok {
return false, errwrap.Wrapf(err, "error looking up group")
}
exists = false
}
// if the group doesn't exist and should be absent, we are done
if obj.State == "absent" && !exists {
return true, nil
}
// if the group exists and no GID is specified, we are done
if obj.State == "exists" && exists && obj.GID == nil {
return true, nil
}
if exists && obj.GID != nil {
// check if GID is taken
lookupGID, err := user.LookupGroupId(strconv.Itoa(int(*obj.GID)))
if err != nil {
if _, ok := err.(user.UnknownGroupIdError); !ok {
return false, errwrap.Wrapf(err, "error looking up GID")
}
}
if lookupGID != nil && lookupGID.Name != obj.Name() {
return false, fmt.Errorf("the requested GID belongs to another group")
}
// get the existing group's GID
existingGID, err := strconv.ParseUint(group.Gid, 10, 32)
if err != nil {
return false, errwrap.Wrapf(err, "error casting existing GID")
}
// check if existing group has the wrong GID
// if it is wrong groupmod will change it to the desired value
if *obj.GID != uint32(existingGID) {
obj.init.Logf("Inconsistent GID: %s", obj.Name())
}
// if the group exists and has the correct GID, we are done
if obj.State == "exists" && *obj.GID == uint32(existingGID) {
return true, nil
}
}
if !apply {
return false, nil
}
var cmdName string
args := []string{obj.Name()}
if obj.State == "exists" {
if exists {
obj.init.Logf("Modifying group: %s", obj.Name())
cmdName = "groupmod"
} else {
obj.init.Logf("Adding group: %s", obj.Name())
cmdName = "groupadd"
}
if obj.GID != nil {
args = append(args, "-g", fmt.Sprintf("%d", *obj.GID))
}
}
if obj.State == "absent" && exists {
obj.init.Logf("Deleting group: %s", obj.Name())
cmdName = "groupdel"
}
cmd := exec.Command(cmdName, args...)
cmd.SysProcAttr = &syscall.SysProcAttr{
Setpgid: true,
Pgid: 0,
}
// open a pipe to get error messages from os/exec
stderr, err := cmd.StderrPipe()
if err != nil {
return false, errwrap.Wrapf(err, "failed to initialize stderr pipe")
}
// start the command
if err := cmd.Start(); err != nil {
return false, errwrap.Wrapf(err, "cmd failed to start")
}
// capture any error messages
slurp, err := ioutil.ReadAll(stderr)
if err != nil {
return false, errwrap.Wrapf(err, "error slurping error message")
}
// wait until cmd exits and return error message if any
if err := cmd.Wait(); err != nil {
return false, errwrap.Wrapf(err, "%s", slurp)
}
return false, nil
}
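// As a hedged illustration (hypothetical group name "backup" and GID 1234),
// the exec.Command call above would end up running one of:
//
//	groupadd backup -g 1234    // State: "exists", group missing
//	groupmod backup -g 1234    // State: "exists", group present but with a different GID
//	groupdel backup            // State: "absent", group present
//
// Note that args starts with the group name, so the "-g" flag is appended
// after it on the command line.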
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *GroupRes) Cmp(r engine.Res) error {
// we can only compare GroupRes to others of the same resource kind
res, ok := r.(*GroupRes)
if !ok {
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.State != res.State {
return fmt.Errorf("the State differs")
}
if (obj.GID == nil) != (res.GID == nil) {
return fmt.Errorf("the GID differs")
}
if obj.GID != nil && res.GID != nil {
if *obj.GID != *res.GID {
return fmt.Errorf("the GID differs")
}
}
return nil
}
// GroupUID is the UID struct for GroupRes.
type GroupUID struct {
engine.BaseUID
name string
gid *uint32
}
// AutoEdges returns the AutoEdge interface.
func (obj *GroupRes) AutoEdges() (engine.AutoEdge, error) {
return nil, nil
}
// IFF aka if and only if they are equivalent, return true. If not, false.
func (obj *GroupUID) IFF(uid engine.ResUID) bool {
res, ok := uid.(*GroupUID)
if !ok {
return false
}
if obj.gid != nil && res.gid != nil {
if *obj.gid != *res.gid {
return false
}
}
if obj.name != "" && res.name != "" {
if obj.name != res.name {
return false
}
}
return true
}
// UIDs includes all params to make a unique identification of this object. Most
// resources only return one, although some resources can return multiple.
func (obj *GroupRes) UIDs() []engine.ResUID {
x := &GroupUID{
BaseUID: engine.BaseUID{Name: obj.Name(), Kind: obj.Kind()},
name: obj.Name(),
gid: obj.GID,
}
return []engine.ResUID{x}
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *GroupRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes GroupRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*GroupRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to GroupRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = GroupRes(raw) // restore from indirection with type conversion!
return nil
}
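In the same spirit as the FileRes tests earlier in this listing, here is a hedged sketch of a unit test that exercises GroupRes.Validate directly; the state strings and GID below are invented for illustration and this test is not part of the source.

func TestGroupValidateSketch(t *testing.T) {
	gid := uint32(1234) // hypothetical GID
	g1 := &GroupRes{
		State: "exists",
		GID:   &gid,
	}
	if err := g1.Validate(); err != nil {
		t.Errorf("expected a valid group res, got: %v", err)
	}
	g2 := &GroupRes{
		State: "present", // not one of the two allowed states
	}
	if err := g2.Validate(); err == nil {
		t.Errorf("expected validate to fail for a bad state")
	}
}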

engine/resources/hostname.go

@@ -1,57 +1,57 @@
// Mgmt
-// Copyright (C) 2013-2017+ James Shubin and the project contributors
+// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Affero General Public License as published by
+// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Affero General Public License for more details.
+// GNU General Public License for more details.
//
-// You should have received a copy of the GNU Affero General Public License
+// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
-"encoding/gob"
"errors"
"fmt"
-"log"
+"github.com/purpleidea/mgmt/engine"
+"github.com/purpleidea/mgmt/engine/traits"
+engineUtil "github.com/purpleidea/mgmt/engine/util"
"github.com/purpleidea/mgmt/util"
+"github.com/purpleidea/mgmt/util/errwrap"
"github.com/godbus/dbus"
-errwrap "github.com/pkg/errors"
)
-// ErrResourceInsufficientParameters is returned when the configuration of the resource
-// is insufficient for the resource to do any useful work.
-var ErrResourceInsufficientParameters = errors.New(
-"Insufficient parameters for this resource")
func init() {
-gob.Register(&HostnameRes{})
+engine.RegisterResource("hostname", func() engine.Res { return &HostnameRes{} })
}
const (
hostname1Path = "/org/freedesktop/hostname1"
hostname1Iface = "org.freedesktop.hostname1"
-dbusAddMatch = "org.freedesktop.DBus.AddMatch"
+dbusPropertiesIface = "org.freedesktop.DBus.Properties"
)
+// ErrResourceInsufficientParameters is returned when the configuration of the
+// resource is insufficient for the resource to do any useful work.
+var ErrResourceInsufficientParameters = errors.New("insufficient parameters for this resource")
// HostnameRes is a resource that allows setting and watching the hostname.
//
-// StaticHostname is the one configured in /etc/hostname or a similar file.
-// It is chosen by the local user. It is not always in sync with the current
-// host name as returned by the gethostname() system call.
+// StaticHostname is the one configured in /etc/hostname or a similar file. It
+// is chosen by the local user. It is not always in sync with the current host
+// name as returned by the gethostname() system call.
//
-// TransientHostname is the one configured via the kernel's sethostbyname().
-// It can be different from the static hostname in case DHCP or mDNS have been
+// TransientHostname is the one configured via the kernel's sethostbyname(). It
+// can be different from the static hostname in case DHCP or mDNS have been
// configured to change the name based on network information.
//
// PrettyHostname is a free-form UTF8 host name for presentation to the user.
@@ -59,7 +59,10 @@ const (
// Hostname is the fallback value for all 3 fields above, if only Hostname is
// specified, it will set all 3 fields to this value.
type HostnameRes struct {
-BaseRes `yaml:",inline"`
+traits.Base // add the base methods without re-implementation
+init *engine.Init
Hostname string `yaml:"hostname"`
PrettyHostname string `yaml:"pretty_hostname"`
StaticHostname string `yaml:"static_hostname"`
@@ -69,12 +72,8 @@ type HostnameRes struct {
}
// Default returns some sensible defaults for this resource.
-func (obj *HostnameRes) Default() Res {
-return &HostnameRes{
-BaseRes: BaseRes{
-MetaParams: DefaultMetaParams, // force a default
-},
-}
+func (obj *HostnameRes) Default() engine.Res {
+return &HostnameRes{}
}
// Validate if the params passed in are valid data.
@@ -82,12 +81,13 @@ func (obj *HostnameRes) Validate() error {
if obj.PrettyHostname == "" && obj.StaticHostname == "" && obj.TransientHostname == "" {
return ErrResourceInsufficientParameters
}
-return obj.BaseRes.Validate()
+return nil
}
// Init runs some startup code for this resource.
-func (obj *HostnameRes) Init() error {
-obj.BaseRes.kind = "hostname"
+func (obj *HostnameRes) Init(init *engine.Init) error {
+obj.init = init // save for later
if obj.PrettyHostname == "" {
obj.PrettyHostname = obj.Hostname
}
@@ -97,7 +97,12 @@ func (obj *HostnameRes) Init() error {
if obj.TransientHostname == "" {
obj.TransientHostname = obj.Hostname
}
-return obj.BaseRes.Init() // call base init, b/c we're overriding
+return nil
+}
+// Close is run by the engine to clean up after the resource is done.
+func (obj *HostnameRes) Close() error {
+return nil
}
// Watch is the primary listener for this resource and it outputs events.
@@ -105,61 +110,55 @@ func (obj *HostnameRes) Watch() error {
// if we share the bus with others, we will get each others messages!!
bus, err := util.SystemBusPrivateUsable() // don't share the bus connection!
if err != nil {
-return errwrap.Wrap(err, "Failed to connect to bus")
+return errwrap.Wrapf(err, "failed to connect to bus")
}
defer bus.Close()
-callResult := bus.BusObject().Call(
-"org.freedesktop.DBus.AddMatch", 0,
-fmt.Sprintf("type='signal',path='%s',interface='org.freedesktop.DBus.Properties',member='PropertiesChanged'", hostname1Path))
-if callResult.Err != nil {
-return errwrap.Wrap(callResult.Err, "Failed to subscribe to DBus events for hostname1")
+// watch the PropertiesChanged signal on the hostname1 dbus path
+args := fmt.Sprintf(
+"type='signal', path='%s', interface='%s', member='PropertiesChanged'",
+hostname1Path,
+dbusPropertiesIface,
+)
+if call := bus.BusObject().Call(engineUtil.DBusAddMatch, 0, args); call.Err != nil {
+return errwrap.Wrapf(call.Err, "failed to subscribe to DBus events for hostname1")
}
+defer bus.BusObject().Call(engineUtil.DBusRemoveMatch, 0, args) // ignore the error
signals := make(chan *dbus.Signal, 10) // closed by dbus package
bus.Signal(signals)
-// notify engine that we're running
-if err := obj.Running(); err != nil {
-return err // bubble up a NACK...
-}
+obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
select {
case <-signals:
send = true
-obj.StateOK(false) // dirty
-case event := <-obj.Events():
-// we avoid sending events on unpause
-if exit, _ := obj.ReadEvent(event); exit != nil {
-return *exit // exit
-}
-send = true
-obj.StateOK(false) // dirty
+case <-obj.init.Done: // closed by the engine to signal shutdown
+return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
-obj.Event()
+obj.init.Event() // notify engine of an event (this can block)
}
}
}
-func updateHostnameProperty(object dbus.BusObject, expectedValue, property, setterName string, apply bool) (checkOK bool, err error) {
+func (obj *HostnameRes) updateHostnameProperty(object dbus.BusObject, expectedValue, property, setterName string, apply bool) (bool, error) {
propertyObject, err := object.GetProperty("org.freedesktop.hostname1." + property)
if err != nil {
return false, errwrap.Wrapf(err, "failed to get org.freedesktop.hostname1.%s", property)
}
if propertyObject.Value() == nil {
-return false, errwrap.Errorf("Unexpected nil value received when reading property %s", property)
+return false, fmt.Errorf("unexpected nil value received when reading property %s", property)
}
propertyValue, ok := propertyObject.Value().(string)
if !ok {
-return false, fmt.Errorf("Received unexpected type as %s value, expected string got '%T'", property, propertyValue)
+return false, fmt.Errorf("received unexpected type as %s value, expected string got '%T'", property, propertyValue)
}
// expected value and actual value match => checkOk
@@ -173,7 +172,7 @@ func updateHostnameProperty(object dbus.BusObject, expectedValue, property, sett
}
// attempting to apply the changes
-log.Printf("Changing %s: %s => %s", property, propertyValue, expectedValue)
+obj.init.Logf("Changing %s: %s => %s", property, propertyValue, expectedValue)
if err := object.Call("org.freedesktop.hostname1."+setterName, 0, expectedValue, false).Err; err != nil {
return false, errwrap.Wrapf(err, "failed to call org.freedesktop.hostname1.%s", setterName)
}
@@ -183,32 +182,32 @@ func updateHostnameProperty(object dbus.BusObject, expectedValue, property, sett
}
// CheckApply method for Hostname resource.
-func (obj *HostnameRes) CheckApply(apply bool) (checkOK bool, err error) {
+func (obj *HostnameRes) CheckApply(apply bool) (bool, error) {
conn, err := util.SystemBusPrivateUsable()
if err != nil {
-return false, errwrap.Wrap(err, "Failed to connect to the private system bus")
+return false, errwrap.Wrapf(err, "failed to connect to the private system bus")
}
defer conn.Close()
hostnameObject := conn.Object(hostname1Iface, hostname1Path)
-checkOK = true
+checkOK := true
if obj.PrettyHostname != "" {
-propertyCheckOK, err := updateHostnameProperty(hostnameObject, obj.PrettyHostname, "PrettyHostname", "SetPrettyHostname", apply)
+propertyCheckOK, err := obj.updateHostnameProperty(hostnameObject, obj.PrettyHostname, "PrettyHostname", "SetPrettyHostname", apply)
if err != nil {
return false, err
}
checkOK = checkOK && propertyCheckOK
}
if obj.StaticHostname != "" {
-propertyCheckOK, err := updateHostnameProperty(hostnameObject, obj.StaticHostname, "StaticHostname", "SetStaticHostname", apply)
+propertyCheckOK, err := obj.updateHostnameProperty(hostnameObject, obj.StaticHostname, "StaticHostname", "SetStaticHostname", apply)
if err != nil {
return false, err
}
checkOK = checkOK && propertyCheckOK
}
if obj.TransientHostname != "" {
-propertyCheckOK, err := updateHostnameProperty(hostnameObject, obj.TransientHostname, "Hostname", "SetHostname", apply)
+propertyCheckOK, err := obj.updateHostnameProperty(hostnameObject, obj.TransientHostname, "Hostname", "SetHostname", apply)
if err != nil {
return false, err
}
@@ -218,66 +217,52 @@ func (obj *HostnameRes) CheckApply(checkOK bool, err error) {
return checkOK, nil
}
+// Cmp compares two resources and returns an error if they are not equivalent.
+func (obj *HostnameRes) Cmp(r engine.Res) error {
+// we can only compare HostnameRes to others of the same resource kind
+res, ok := r.(*HostnameRes)
+if !ok {
+return fmt.Errorf("not a %s", obj.Kind())
+}
+if obj.PrettyHostname != res.PrettyHostname {
+return fmt.Errorf("the PrettyHostname differs")
+}
+if obj.StaticHostname != res.StaticHostname {
+return fmt.Errorf("the StaticHostname differs")
+}
+if obj.TransientHostname != res.TransientHostname {
+return fmt.Errorf("the TransientHostname differs")
+}
+return nil
+}
// HostnameUID is the UID struct for HostnameRes.
type HostnameUID struct {
-BaseUID
+engine.BaseUID
name string
prettyHostname string
staticHostname string
transientHostname string
}
-// AutoEdges returns the AutoEdge interface. In this case no autoedges are used.
-func (obj *HostnameRes) AutoEdges() AutoEdge {
-return nil
-}
-// UIDs includes all params to make a unique identification of this object.
-// Most resources only return one, although some resources can return multiple.
-func (obj *HostnameRes) UIDs() []ResUID {
+// UIDs includes all params to make a unique identification of this object. Most
+// resources only return one, although some resources can return multiple.
+func (obj *HostnameRes) UIDs() []engine.ResUID {
x := &HostnameUID{
-BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
-name: obj.Name,
+BaseUID: engine.BaseUID{Name: obj.Name(), Kind: obj.Kind()},
+name: obj.Name(),
prettyHostname: obj.PrettyHostname,
staticHostname: obj.StaticHostname,
transientHostname: obj.TransientHostname,
}
-return []ResUID{x}
+return []engine.ResUID{x}
}
-// GroupCmp returns whether two resources can be grouped together or not.
-func (obj *HostnameRes) GroupCmp(r Res) bool {
-return false
-}
-// Compare two resources and return if they are equivalent.
-func (obj *HostnameRes) Compare(res Res) bool {
-switch res := res.(type) {
-// we can only compare HostnameRes to others of the same resource
-case *HostnameRes:
-if !obj.BaseRes.Compare(res) { // call base Compare
-return false
-}
-if obj.Name != res.Name {
-return false
-}
-if obj.PrettyHostname != res.PrettyHostname {
-return false
-}
-if obj.StaticHostname != res.StaticHostname {
-return false
-}
-if obj.TransientHostname != res.TransientHostname {
-return false
-}
-default:
-return false
-}
-return true
-}
-// UnmarshalYAML is the custom unmarshal handler for this struct.
-// It is primarily useful for setting the defaults.
+// UnmarshalYAML is the custom unmarshal handler for this struct. It is
+// primarily useful for setting the defaults.
func (obj *HostnameRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes HostnameRes // indirection to avoid infinite recursion

engine/resources/http.go

@@ -0,0 +1,808 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"bytes"
"context"
"fmt"
"io"
"net"
"net/http"
"os"
"path/filepath"
"strings"
"sync"
"time"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
"github.com/purpleidea/mgmt/util/errwrap"
securefilepath "github.com/cyphar/filepath-securejoin"
)
func init() {
engine.RegisterResource("http:server", func() engine.Res { return &HTTPServerRes{} })
engine.RegisterResource("http:file", func() engine.Res { return &HTTPFileRes{} })
}
const (
// HTTPUseSecureJoin specifies that we should add in a "secure join" lib
// so that we avoid the ../../etc/passwd and symlink problems.
HTTPUseSecureJoin = true
)
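// As a hedged illustration of what the "secure join" protects against, with a
// made-up root and request path:
//
//	p, _ := securefilepath.SecureJoin("/srv/www/", "../../etc/passwd")
//	// p stays inside the root (roughly "/srv/www/etc/passwd"), whereas a plain
//	// filepath.Join would clean the path to "/etc/passwd" and escape the tree.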
// HTTPServerRes is an http server resource. It serves files, but does not
// actually apply any state. The name is used as the address to listen on,
// unless the Address field is specified, and in that case it is used instead.
// This resource can offer up files for serving that are specified either inline
// in this resource by specifying an http root, or as http:file resources which
// will get autogrouped into this resource at runtime. The two methods can be
// combined as well.
//
// This server also supports autogrouping some more magical resources into it.
// For example, the http:flag and http:ui resources add in magic endpoints.
//
// This server is not meant as a featureful replacement for the venerable and
// modern httpd servers out there, but rather as a simple, dynamic, integrated
// alternative for bootstrapping new machines and clusters in an elegant way.
//
// TODO: add support for TLS
// XXX: Add an http:flag resource that lets an http client set a flag somewhere!
// XXX: Add a http:ui resource that functions can read data from!
// XXX: The http:ui resource can also take in values from those functions!
type HTTPServerRes struct {
traits.Base // add the base methods without re-implementation
traits.Edgeable // XXX: add autoedge support
traits.Groupable // can have HTTPFileRes grouped into it
init *engine.Init
// Address is the listen address to use for the http server. It is
// common to use `:80` (the standard) to listen on TCP port 80 on all
// addresses.
Address string `lang:"address" yaml:"address"`
// Timeout is the maximum duration in seconds to use for unspecified
// timeouts. In other words, when this value is specified, it is used as
// the value for the other *Timeout values when they aren't used. Put
// another way, this makes it easy to set all the different timeouts
// with a single parameter.
Timeout *uint64 `lang:"timeout" yaml:"timeout"`
// ReadTimeout is the maximum duration in seconds for reading during the
// http request. If it is zero, then there is no timeout. If this is
// unspecified, then the value of Timeout is used instead if it is set.
// For more information, see the golang net/http Server documentation.
ReadTimeout *uint64 `lang:"read_timeout" yaml:"read_timeout"`
// WriteTimeout is the maximum duration in seconds for writing during
// the http request. If it is zero, then there is no timeout. If this is
// unspecified, then the value of Timeout is used instead if it is set.
// For more information, see the golang net/http Server documentation.
WriteTimeout *uint64 `lang:"write_timeout" yaml:"write_timeout"`
// ShutdownTimeout is the maximum duration in seconds to wait for the
// server to shutdown gracefully before calling Close. By default it is
// nice to let client connections terminate gracefully, however it might
// take longer than we are willing to wait, particularly if one is long
// polling or running a very long download. As a result, you can set a
// timeout here. The default is zero which means it will wait
// indefinitely. The shutdown process can also be cancelled by the
// interrupt handler which this resource supports. If this is
// unspecified, then the value of Timeout is used instead if it is set.
ShutdownTimeout *uint64 `lang:"shutdown_timeout" yaml:"shutdown_timeout"`
// Root is the root directory that we should serve files from. If it is
// not specified, then it is not used. Any http file resources will have
// precedence over anything in here, in case the same path exists twice.
// TODO: should we have a flag to determine the precedence rules here?
Root string `lang:"root" yaml:"root"`
// TODO: should we allow adding a list of one-of files directly here?
interruptChan chan struct{}
conn net.Listener
serveMux *http.ServeMux // can't share the global one between resources!
server *http.Server
}
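// As a hedged, made-up example of how these fields fit together: a server that
// listens on every interface on TCP port 8080 and serves a directory could be
// configured roughly as:
//
//	&HTTPServerRes{
//		Address: ":8080",
//		Root:    "/srv/www/", // must be absolute and end in a slash (see Validate)
//	}
//
// with any grouped http:file resources taking precedence over files in Root.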
// Default returns some sensible defaults for this resource.
func (obj *HTTPServerRes) Default() engine.Res {
return &HTTPServerRes{}
}
// getAddress returns the actual address to use. When Address is not specified,
// we use the Name.
func (obj *HTTPServerRes) getAddress() string {
if obj.Address != "" {
return obj.Address
}
return obj.Name()
}
// getReadTimeout determines the value for ReadTimeout, because if unspecified,
// this will default to the value of Timeout.
func (obj *HTTPServerRes) getReadTimeout() *uint64 {
if obj.ReadTimeout != nil {
return obj.ReadTimeout
}
return obj.Timeout // might be nil
}
// getWriteTimeout determines the value for WriteTimeout, because if
// unspecified, this will default to the value of Timeout.
func (obj *HTTPServerRes) getWriteTimeout() *uint64 {
if obj.WriteTimeout != nil {
return obj.WriteTimeout
}
return obj.Timeout // might be nil
}
// getShutdownTimeout determines the value for ShutdownTimeout, because if
// unspecified, this will default to the value of Timeout.
func (obj *HTTPServerRes) getShutdownTimeout() *uint64 {
if obj.ShutdownTimeout != nil {
return obj.ShutdownTimeout
}
return obj.Timeout // might be nil
}
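// A hedged example of the fallback implemented by the three getters above,
// using invented values: with Timeout set to 30 and only WriteTimeout set to 5,
// getWriteTimeout() returns 5 while getReadTimeout() and getShutdownTimeout()
// both fall back to 30. If none of the fields are set, all three return nil,
// which Watch treats as "no timeout" and an indefinite graceful shutdown.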
// Validate checks if the resource data structure was populated correctly.
func (obj *HTTPServerRes) Validate() error {
if obj.getAddress() == "" {
return fmt.Errorf("empty address")
}
host, _, err := net.SplitHostPort(obj.getAddress())
if err != nil {
return errwrap.Wrapf(err, "the Address is in an invalid format: %s", obj.getAddress())
}
if host != "" {
// TODO: should we allow fqdn's here?
ip := net.ParseIP(host)
if ip == nil {
return fmt.Errorf("the Address is not a valid IP: %s", host)
}
}
if obj.Root != "" && !strings.HasPrefix(obj.Root, "/") {
return fmt.Errorf("the Root must be absolute")
}
if obj.Root != "" && !strings.HasSuffix(obj.Root, "/") {
return fmt.Errorf("the Root must be a dir")
}
// XXX: validate that the autogrouped resources don't have paths that
// conflict with each other. We can only have a single unique entry for
// what handles a /whatever URL.
return nil
}
// Init runs some startup code for this resource.
func (obj *HTTPServerRes) Init(init *engine.Init) error {
obj.init = init // save for later
// No need to error in Validate if Timeout is ignored, but log it.
// These are all specified, so Timeout effectively does nothing.
a := obj.ReadTimeout != nil
b := obj.WriteTimeout != nil
c := obj.ShutdownTimeout != nil
if obj.Timeout != nil && (a && b && c) {
obj.init.Logf("the Timeout param is being ignored")
}
// NOTE: If we don't Init anything that's autogrouped, then it won't
// even get an Init call on it.
// TODO: should we do this in the engine? Do we want to decide it here?
for _, res := range obj.GetGroup() { // grouped elements
if err := res.Init(init); err != nil {
return errwrap.Wrapf(err, "autogrouped Init failed")
}
}
obj.interruptChan = make(chan struct{})
return nil
}
// Close is run by the engine to clean up after the resource is done.
func (obj *HTTPServerRes) Close() error {
return nil
}
// Watch is the primary listener for this resource and it outputs events.
func (obj *HTTPServerRes) Watch() error {
// TODO: I think we could replace all this with:
//obj.conn, err := net.Listen("tcp", obj.getAddress())
// ...but what is the advantage?
addr, err := net.ResolveTCPAddr("tcp", obj.getAddress())
if err != nil {
return errwrap.Wrapf(err, "could not resolve address")
}
obj.conn, err = net.ListenTCP("tcp", addr)
if err != nil {
return errwrap.Wrapf(err, "could not start listener")
}
defer obj.conn.Close()
obj.serveMux = http.NewServeMux() // do it here in case Watch restarts!
obj.serveMux.HandleFunc("/", obj.handler())
readTimeout := uint64(0)
if i := obj.getReadTimeout(); i != nil {
readTimeout = *i
}
writeTimeout := uint64(0)
if i := obj.getWriteTimeout(); i != nil {
writeTimeout = *i
}
obj.server = &http.Server{
Addr: obj.getAddress(),
Handler: obj.serveMux,
ReadTimeout: time.Duration(readTimeout) * time.Second,
WriteTimeout: time.Duration(writeTimeout) * time.Second,
//MaxHeaderBytes: 1 << 20, XXX: should we add a param for this?
}
obj.init.Running() // when started, notify engine that we're running
var closeError error
closeSignal := make(chan struct{})
wg := &sync.WaitGroup{}
defer wg.Wait()
shutdownChan := make(chan struct{}) // server shutdown finished signal
wg.Add(1)
go func() {
defer wg.Done()
select {
case <-obj.interruptChan:
// TODO: should we bubble up the error from Close?
// TODO: do we need a mutex around this Close?
obj.server.Close() // kill it quickly!
case <-shutdownChan:
// let this exit
}
}()
wg.Add(1)
go func() {
defer wg.Done()
defer close(closeSignal)
err := obj.server.Serve(obj.conn) // blocks until Shutdown() is called!
if err == nil || err == http.ErrServerClosed {
return
}
// if this returned on its own, then closeSignal can be used...
closeError = errwrap.Wrapf(err, "the server errored")
}()
// When Shutdown is called, Serve, ListenAndServe, and ListenAndServeTLS
// immediately return ErrServerClosed. Make sure the program doesn't
// exit and waits instead for Shutdown to return.
defer func() {
defer close(shutdownChan) // signal that shutdown is finished
ctx := context.Background()
if i := obj.getShutdownTimeout(); i != nil && *i > 0 {
var cancel context.CancelFunc
ctx, cancel = context.WithTimeout(ctx, time.Duration(*i)*time.Second)
defer cancel()
}
err := obj.server.Shutdown(ctx) // shutdown gracefully
if err == context.DeadlineExceeded {
// TODO: should we bubble up the error from Close?
// TODO: do we need a mutex around this Close?
obj.server.Close() // kill it now
}
}()
startupChan := make(chan struct{})
close(startupChan) // send one initial signal
var send = false // send event?
for {
if obj.init.Debug {
obj.init.Logf("Looping...")
}
select {
case <-startupChan:
startupChan = nil
send = true
case <-closeSignal: // something shut us down early
return closeError
case <-obj.init.Done: // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply never has anything to do for this resource, so it always succeeds.
// It does however check that certain runtime requirements (such as the Root dir
// existing if one was specified) are fulfilled.
func (obj *HTTPServerRes) CheckApply(apply bool) (bool, error) {
if obj.init.Debug {
obj.init.Logf("CheckApply")
}
// XXX: We don't want the initial CheckApply to return true until the
// Watch has started up, so we must block here until that's the case...
// Cheap runtime validation!
if obj.Root != "" {
fileInfo, err := os.Stat(obj.Root)
if err != nil {
return false, errwrap.Wrapf(err, "can't stat Root dir")
}
if !fileInfo.IsDir() {
return false, fmt.Errorf("the Root path is not a dir")
}
}
return true, nil // always succeeds, with nothing to do!
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *HTTPServerRes) Cmp(r engine.Res) error {
// we can only compare HTTPServerRes to others of the same resource kind
res, ok := r.(*HTTPServerRes)
if !ok {
return fmt.Errorf("res is not the same kind")
}
if obj.Address != res.Address {
return fmt.Errorf("the Address differs")
}
if (obj.Timeout == nil) != (res.Timeout == nil) { // xor
return fmt.Errorf("the Timeout differs")
}
if obj.Timeout != nil && res.Timeout != nil {
if *obj.Timeout != *res.Timeout { // compare the values
return fmt.Errorf("the value of Timeout differs")
}
}
if (obj.ReadTimeout == nil) != (res.ReadTimeout == nil) {
return fmt.Errorf("the ReadTimeout differs")
}
if obj.ReadTimeout != nil && res.ReadTimeout != nil {
if *obj.ReadTimeout != *res.ReadTimeout {
return fmt.Errorf("the value of ReadTimeout differs")
}
}
if (obj.WriteTimeout == nil) != (res.WriteTimeout == nil) {
return fmt.Errorf("the WriteTimeout differs")
}
if obj.WriteTimeout != nil && res.WriteTimeout != nil {
if *obj.WriteTimeout != *res.WriteTimeout {
return fmt.Errorf("the value of WriteTimeout differs")
}
}
if (obj.ShutdownTimeout == nil) != (res.ShutdownTimeout == nil) {
return fmt.Errorf("the ShutdownTimeout differs")
}
if obj.ShutdownTimeout != nil && res.ShutdownTimeout != nil {
if *obj.ShutdownTimeout != *res.ShutdownTimeout {
return fmt.Errorf("the value of ShutdownTimeout differs")
}
}
// TODO: We could do this sort of thing to skip checking Timeout when it
// is not used, but for the moment, this is overkill and not needed yet.
//a := obj.ReadTimeout != nil
//b := obj.WriteTimeout != nil
//c := obj.ShutdownTimeout != nil
//if !(obj.Timeout != nil && (a && b && c)) {
// // the Timeout param is not being ignored
//}
if obj.Root != res.Root {
return fmt.Errorf("the Root differs")
}
return nil
}
// Interrupt is called to ask the execution of this resource to end early. It
// will cause the server Shutdown to end abruptly instead of letting open client
// connections terminate gracefully. It does this by causing the server Close
// method to run.
func (obj *HTTPServerRes) Interrupt() error {
close(obj.interruptChan) // this should cause obj.server.Close() to run!
return nil
}
// Copy copies the resource. Don't call it directly, use engine.ResCopy instead.
// TODO: should this copy internal state?
func (obj *HTTPServerRes) Copy() engine.CopyableRes {
var timeout, readTimeout, writeTimeout, shutdownTimeout *uint64
if obj.Timeout != nil {
x := *obj.Timeout
timeout = &x
}
if obj.ReadTimeout != nil {
x := *obj.ReadTimeout
readTimeout = &x
}
if obj.WriteTimeout != nil {
x := *obj.WriteTimeout
writeTimeout = &x
}
if obj.ShutdownTimeout != nil {
x := *obj.ShutdownTimeout
shutdownTimeout = &x
}
return &HTTPServerRes{
Address: obj.Address,
Timeout: timeout,
ReadTimeout: readTimeout,
WriteTimeout: writeTimeout,
ShutdownTimeout: shutdownTimeout,
Root: obj.Root,
}
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *HTTPServerRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes HTTPServerRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*HTTPServerRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to HTTPServerRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = HTTPServerRes(raw) // restore from indirection with type conversion!
return nil
}
// GroupCmp returns whether two resources can be grouped together or not. Can
// these two resources be merged, aka, does this resource support doing so? Will
// resource allow itself to be grouped _into_ this obj?
func (obj *HTTPServerRes) GroupCmp(r engine.GroupableRes) error {
res1, ok1 := r.(*HTTPFileRes) // different from what we usually do!
if ok1 {
// If the http file resource has the Server field specified,
// then it must match against our name field if we want it to
// group with us.
if res1.Server != "" && res1.Server != obj.Name() {
return fmt.Errorf("resource groups with a different server name")
}
return nil
}
return fmt.Errorf("resource is not the right kind")
}
// handler returns the http handler function that serves all the incoming
// download requests from clients.
func (obj *HTTPServerRes) handler() func(http.ResponseWriter, *http.Request) {
// TODO: we could statically pre-compute some stuff here...
return func(w http.ResponseWriter, req *http.Request) {
if obj.init.Debug {
obj.init.Logf("Client: %s", req.RemoteAddr)
}
// TODO: would this leak anything security sensitive in our log?
obj.init.Logf("URL: %s", req.URL)
if obj.init.Debug {
obj.init.Logf("Path: %s", req.URL.Path)
}
// We only allow GET at the moment.
if req.Method != http.MethodGet {
w.WriteHeader(http.StatusMethodNotAllowed)
return
}
requestPath := req.URL.Path // TODO: is this what we want here?
//var handle io.Reader // TODO: simplify?
var handle io.ReadSeeker
// Look through the autogrouped resources!
// TODO: can we improve performance by only searching here once?
for _, x := range obj.GetGroup() { // grouped elements
res, ok := x.(*HTTPFileRes) // convert from Res
if !ok {
continue
}
if requestPath != res.getPath() {
continue // not me
}
if obj.init.Debug {
obj.init.Logf("Got grouped file: %s", res.String())
}
var err error
handle, err = res.getContent()
if err != nil {
obj.init.Logf("could not get content for: %s", requestPath)
msg, httpStatus := toHTTPError(err)
http.Error(w, msg, httpStatus)
return
}
break
}
// Look in root if we have one, and we haven't got a file yet...
if obj.Root != "" && handle == nil {
p := filepath.Join(obj.Root, requestPath) // normal unsafe!
if !strings.HasPrefix(p, obj.Root) { // root ends with /
// user might have tried a ../../etc/passwd hack
obj.init.Logf("join inconsistency: %s", p)
http.NotFound(w, req) // lie to them...
return
}
if HTTPUseSecureJoin {
var err error
p, err = securefilepath.SecureJoin(obj.Root, requestPath)
if err != nil {
obj.init.Logf("secure join fail: %s", p)
http.NotFound(w, req) // lie to them...
return
}
}
if obj.init.Debug {
obj.init.Logf("Got file at root: %s", p)
}
var err error
handle, err = os.Open(p)
if err != nil {
obj.init.Logf("could not open: %s", p)
msg, httpStatus := toHTTPError(err)
http.Error(w, msg, httpStatus)
return
}
}
// We never found a file...
if handle == nil {
if obj.init.Debug || true { // XXX: maybe we should always do this?
obj.init.Logf("File not found: %s", requestPath)
}
http.NotFound(w, req)
return
}
// Determine the last-modified time if we can.
modtime := time.Now()
if f, ok := handle.(*os.File); ok {
fi, err := f.Stat()
if err == nil {
modtime = fi.ModTime()
}
// TODO: if Stat errors, should we fail the whole thing?
}
// XXX: is requestPath what we want for the name field?
http.ServeContent(w, req, requestPath, modtime, handle)
//obj.init.Logf("%d bytes sent", n) // XXX: how do we know (on the server-side) if it worked?
return
}
}
// HTTPFileRes is a file that exists within an http server. The name is used as
// the public path of the file, unless the filename field is specified, and in
// that case it is used instead. The way this works is that it autogroups at
// runtime with an existing http resource, and in doing so makes the file
// associated with this resource available for serving from that http server.
type HTTPFileRes struct {
traits.Base // add the base methods without re-implementation
traits.Edgeable // XXX: add autoedge support
traits.Groupable // can be grouped into HTTPServerRes
init *engine.Init
// Server is the name of the http server resource to group this into. If
// it is omitted, and there is only a single http resource, then it will
// be grouped into it automatically. If there is more than one main http
// resource being used, then the grouping behaviour is *undefined* when
// this is not specified, and it is not recommended to leave this blank!
Server string `lang:"server" yaml:"server"`
// Filename is the name of the file this data should appear as on the
// http server.
Filename string `lang:"filename" yaml:"filename"`
// Path is the absolute path to a file that should be used as the source
// for this file resource. It must not be combined with the data field.
Path string `lang:"path" yaml:"path"`
// Data is the file content that should be used as the source for this
// file resource. It must not be combined with the path field.
// TODO: should this be []byte instead?
Data string `lang:"data" yaml:"data"`
}
// Default returns some sensible defaults for this resource.
func (obj *HTTPFileRes) Default() engine.Res {
return &HTTPFileRes{}
}
// getPath returns the actual path we respond to. When Filename is not
// specified, we use the Name. Note that this is the filename that will be seen
// on the http server, it is *not* the source path to the actual file contents
// being sent by the server.
func (obj *HTTPFileRes) getPath() string {
if obj.Filename != "" {
return obj.Filename
}
return obj.Name()
}
// getContent returns the content that we expect from this resource. It depends
// on whether the user specified the Path or Data fields, and whether the Path
// exists or not.
func (obj *HTTPFileRes) getContent() (io.ReadSeeker, error) {
if obj.Path != "" && obj.Data != "" {
// programming error! this should have been caught in Validate!
return nil, fmt.Errorf("must not specify Path and Data")
}
if obj.Path != "" {
return os.Open(obj.Path)
}
return bytes.NewReader([]byte(obj.Data)), nil
}
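// A hedged illustration of the precedence above (values invented): a resource
// with Data: "hello\n" serves that literal string, one with Path:
// "/srv/files/motd" serves the contents of that file, setting both is rejected
// by Validate before getContent ever runs, and leaving both empty serves an
// empty file (see the NOTE in Validate).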
// Validate checks if the resource data structure was populated correctly.
func (obj *HTTPFileRes) Validate() error {
if obj.getPath() == "" {
return fmt.Errorf("empty filename")
}
// FIXME: does getPath need to start with a slash?
if obj.Path != "" && !strings.HasPrefix(obj.Path, "/") {
return fmt.Errorf("the Path must be absolute")
}
if obj.Path != "" && obj.Data != "" {
return fmt.Errorf("must not specify Path and Data")
}
// NOTE: if obj.Path == "" && obj.Data == "" then we have an empty file!
return nil
}
// Init runs some startup code for this resource.
func (obj *HTTPFileRes) Init(init *engine.Init) error {
obj.init = init // save for later
return nil
}
// Close is run by the engine to clean up after the resource is done.
func (obj *HTTPFileRes) Close() error {
return nil
}
// Watch is the primary listener for this resource and it outputs events. This
// particular one does absolutely nothing but block until we've received a done
// signal.
func (obj *HTTPFileRes) Watch() error {
obj.init.Running() // when started, notify engine that we're running
select {
case <-obj.init.Done: // closed by the engine to signal shutdown
}
//obj.init.Event() // notify engine of an event (this can block)
return nil
}
// CheckApply never has anything to do for this resource, so it always succeeds.
func (obj *HTTPFileRes) CheckApply(apply bool) (bool, error) {
if obj.init.Debug {
obj.init.Logf("CheckApply")
}
return true, nil // always succeeds, with nothing to do!
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *HTTPFileRes) Cmp(r engine.Res) error {
// we can only compare HTTPFileRes to others of the same resource kind
res, ok := r.(*HTTPFileRes)
if !ok {
return fmt.Errorf("res is not the same kind")
}
if obj.Server != res.Server {
return fmt.Errorf("the Server field differs")
}
if obj.Filename != res.Filename {
return fmt.Errorf("the Filename differs")
}
if obj.Path != res.Path {
return fmt.Errorf("the Path differs")
}
if obj.Data != res.Data {
return fmt.Errorf("the Data differs")
}
return nil
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *HTTPFileRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes HTTPFileRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*HTTPFileRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to HTTPFileRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = HTTPFileRes(raw) // restore from indirection with type conversion!
return nil
}
// toHTTPError returns a non-specific HTTP error message and status code for a
// given non-nil error value. It's important that toHTTPError does not actually
// return err.Error(), since msg and httpStatus are returned to users, and
// historically Go's ServeContent always returned just "404 Not Found" for all
// errors. We don't want to start leaking information in error messages.
// NOTE: This was copied and modified slightly from the golang net/http package.
// See: https://github.com/golang/go/issues/38375
func toHTTPError(err error) (msg string, httpStatus int) {
if os.IsNotExist(err) {
//return "404 page not found", http.StatusNotFound
return http.StatusText(http.StatusNotFound), http.StatusNotFound
}
if os.IsPermission(err) {
//return "403 Forbidden", http.StatusForbidden
return http.StatusText(http.StatusForbidden), http.StatusForbidden
}
// Default:
//return "500 Internal Server Error", http.StatusInternalServerError
return http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError
}
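To make the autogrouping contract concrete, here is a hedged sketch (not from the source) of how GroupCmp decides whether an http:file may fold into a particular http:server. It assumes engine.NewNamedResource assigns the resource name, as the tests earlier in this listing rely on, and that *HTTPFileRes satisfies engine.GroupableRes, as its use in handler() implies.

// groupSketch is a hypothetical helper demonstrating the GroupCmp rule above.
func groupSketch() error {
	srv, err := engine.NewNamedResource("http:server", ":8080") // name doubles as the address
	if err != nil {
		return err
	}
	server := srv.(*HTTPServerRes) // must not panic

	file := &HTTPFileRes{
		Server:   ":8080",         // matches server.Name(), so grouping is allowed
		Filename: "/motd",         // public path as seen by http clients
		Data:     "hello world\n", // inline content to serve
	}
	// An empty Server field would also pass this check (grouping is then only
	// well-defined with a single server); any other value would make GroupCmp
	// return an error here.
	return server.GroupCmp(file)
}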

engine/resources/kv.go

@@ -1,44 +1,54 @@
// Mgmt
-// Copyright (C) 2013-2017+ James Shubin and the project contributors
+// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Affero General Public License as published by
+// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Affero General Public License for more details.
+// GNU General Public License for more details.
//
-// You should have received a copy of the GNU Affero General Public License
+// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
-"encoding/gob"
+"context"
"fmt"
-"log"
"strconv"
+"sync"
+"time"
-errwrap "github.com/pkg/errors"
+"github.com/purpleidea/mgmt/engine"
+"github.com/purpleidea/mgmt/engine/traits"
+"github.com/purpleidea/mgmt/util"
+"github.com/purpleidea/mgmt/util/errwrap"
)
func init() {
-gob.Register(&KVRes{})
+engine.RegisterResource("kv", func() engine.Res { return &KVRes{} })
}
-// KVResSkipCmpStyle represents the different styles of comparison when using SkipLessThan.
+// KVResSkipCmpStyle represents the different styles of comparison when using
+// SkipLessThan.
type KVResSkipCmpStyle int
-// These are the different allowed comparison styles. Most folks will want SkipCmpStyleInt.
+// These are the different allowed comparison styles. Most folks will want
+// SkipCmpStyleInt.
const (
SkipCmpStyleInt KVResSkipCmpStyle = iota
SkipCmpStyleString
)
+const (
+kvCheckApplyTimeout = 5 * time.Second
+)
// KVRes is a resource which writes a key/value pair into cluster wide storage.
// It will ensure that the key is set to the requested value. The one exception
// is that if you use the SkipLessThan parameter, then it will only replace the
@@ -48,28 +58,47 @@ const (
// The one exception is that when this resource receives a refresh signal, then
// it will set the value to be the exact one if they are not identical already.
type KVRes struct {
-BaseRes `yaml:",inline"`
-Key string `yaml:"key"` // key to set
-Value *string `yaml:"value"` // value to set (nil to delete)
-SkipLessThan bool `yaml:"skiplessthan"` // skip updates as long as stored value is greater
-SkipCmpStyle KVResSkipCmpStyle `yaml:"skipcmpstyle"` // how to do the less than cmp
+traits.Base // add the base methods without re-implementation
+//traits.Groupable // TODO: it could be useful to group our writes and watches!
+traits.Refreshable
+traits.Recvable
+init *engine.Init
+// Key represents the key to set. If it is not specified, the Name value
+// is used instead.
+Key string `lang:"key" yaml:"key"`
+// Value represents the string value to set. If this value is nil or,
+// undefined, then this will delete that key.
+Value *string `lang:"value" yaml:"value"`
+// SkipLessThan causes the value to be updated as long as it is greater.
+SkipLessThan bool `lang:"skiplessthan" yaml:"skiplessthan"`
+// SkipCmpStyle is the type of compare function used when determining if
+// the value is greater when using the SkipLessThan parameter.
+SkipCmpStyle KVResSkipCmpStyle `lang:"skipcmpstyle" yaml:"skipcmpstyle"`
+interruptChan chan struct{}
// TODO: does it make sense to have different backends here? (eg: local)
}
-// Default returns some sensible defaults for this resource.
-func (obj *KVRes) Default() Res {
-return &KVRes{
-BaseRes: BaseRes{
-MetaParams: DefaultMetaParams, // force a default
-},
-}
+// getKey returns the key to be used for this resource. If the Key field is
+// specified, it will use that, otherwise it uses the Name.
+func (obj *KVRes) getKey() string {
+if obj.Key != "" {
+return obj.Key
+}
+return obj.Name()
+}
+// Default returns some sensible defaults for this resource.
+func (obj *KVRes) Default() engine.Res {
+return &KVRes{}
}
// Validate if the params passed in are valid data.
+// FIXME: This will catch most issues unless data is passed in after Init with
+// the Send/Recv mechanism. Should the engine re-call Validate after Send/Recv?
func (obj *KVRes) Validate() error {
-if obj.Key == "" {
+if obj.getKey() == "" {
return fmt.Errorf("key must not be empty")
}
if obj.SkipLessThan {
@@ -83,27 +112,38 @@ func (obj *KVRes) Validate() error {
}
}
}
-return obj.BaseRes.Validate()
+return nil
}
// Init initializes the resource.
-func (obj *KVRes) Init() error {
-obj.BaseRes.kind = "kv"
-return obj.BaseRes.Init() // call base init, b/c we're overriding
+func (obj *KVRes) Init(init *engine.Init) error {
+obj.init = init // save for later
+obj.interruptChan = make(chan struct{})
+return nil
+}
+// Close is run by the engine to clean up after the resource is done.
+func (obj *KVRes) Close() error {
+return nil
}
// Watch is the primary listener for this resource and it outputs events.
func (obj *KVRes) Watch() error {
+// FIXME: add timeout to context
+// The obj.init.Done channel is closed by the engine to signal shutdown.
+ctx, cancel := util.ContextWithCloser(context.Background(), obj.init.Done)
+defer cancel()
-// notify engine that we're running
-if err := obj.Running(); err != nil {
-return err // bubble up a NACK...
+ch, err := obj.init.World.StrMapWatch(ctx, obj.getKey()) // get possible events!
+if err != nil {
+return err
}
-ch := obj.Data().World.StrWatch(obj.Key) // get possible events!
+obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
-var exit *error
for {
select {
// NOTE: this part is very similar to the file resource code
@@ -112,38 +152,33 @@ func (obj *KVRes) Watch() error {
return nil
}
if err != nil {
-return errwrap.Wrapf(err, "unknown %s[%s] watcher error", obj.Kind(), obj.GetName())
+return errwrap.Wrapf(err, "unknown %s watcher error", obj)
}
-if obj.Data().Debug {
-log.Printf("%s[%s]: Event!", obj.Kind(), obj.GetName())
+if obj.init.Debug {
+obj.init.Logf("event!")
}
send = true
-obj.StateOK(false) // dirty
-case event := <-obj.Events():
-// we avoid sending events on unpause
-if exit, send = obj.ReadEvent(event); exit != nil {
-return *exit // exit
-}
+case <-obj.init.Done: // closed by the engine to signal shutdown
+return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
-obj.Event()
+obj.init.Event() // notify engine of an event (this can block)
}
}
}
// lessThanCheck checks for less than validity.
-func (obj *KVRes) lessThanCheck(value string) (checkOK bool, err error) {
+func (obj *KVRes) lessThanCheck(value string) (bool, error) {
v := *obj.Value
if value == v { // redundant check for safety
return true, nil
}
-var refresh = obj.Refresh() // do we have a pending reload to apply?
+var refresh = obj.init.Refresh() // do we have a pending reload to apply?
if !obj.SkipLessThan || refresh { // update lessthan on refresh
return false, nil
}
@@ -175,16 +210,31 @@ func (obj *KVRes) lessThanCheck(value string) (checkOK bool, err error) {
}
// CheckApply method for Password resource. Does nothing, returns happy!
-func (obj *KVRes) CheckApply(apply bool) (checkOK bool, err error) {
-log.Printf("%s[%s]: CheckApply(%t)", obj.Kind(), obj.GetName(), apply)
+func (obj *KVRes) CheckApply(apply bool) (bool, error) {
+obj.init.Logf("CheckApply(%t)", apply)
-if val, exists := obj.Recv["Value"]; exists && val.Changed {
+wg := &sync.WaitGroup{}
+defer wg.Wait() // this must be above the defer cancel() call
+ctx, cancel := context.WithTimeout(context.Background(), kvCheckApplyTimeout)
+defer cancel()
+wg.Add(1)
+go func() {
+defer wg.Done()
+select {
+case <-obj.interruptChan:
+cancel()
+case <-ctx.Done():
+// let this exit
+}
+}()
+if val, exists := obj.init.Recv()["Value"]; exists && val.Changed {
// if we received on Value, and it changed, wooo, nothing to do.
-log.Printf("CheckApply: `Value` was updated!")
+obj.init.Logf("CheckApply: `Value` was updated!")
}
hostname := obj.Data().Hostname // me hostname := obj.init.Hostname // me
keyMap, err := obj.Data().World.StrGet(obj.Key) keyMap, err := obj.init.World.StrMapGet(ctx, obj.getKey())
if err != nil { if err != nil {
return false, errwrap.Wrapf(err, "check error during StrGet") return false, errwrap.Wrapf(err, "check error during StrGet")
} }
@@ -204,7 +254,7 @@ func (obj *KVRes) CheckApply(apply bool) (checkOK bool, err error) {
return true, nil // nothing to delete, we're good! return true, nil // nothing to delete, we're good!
} else if ok && obj.Value == nil { // delete } else if ok && obj.Value == nil { // delete
err := obj.Data().World.StrDel(obj.Key) err := obj.init.World.StrMapDel(ctx, obj.getKey())
return false, errwrap.Wrapf(err, "apply error during StrDel") return false, errwrap.Wrapf(err, "apply error during StrDel")
} }
@@ -212,79 +262,66 @@ func (obj *KVRes) CheckApply(apply bool) (checkOK bool, err error) {
return false, nil return false, nil
} }
if err := obj.Data().World.StrSet(obj.Key, *obj.Value); err != nil { if err := obj.init.World.StrMapSet(ctx, obj.getKey(), *obj.Value); err != nil {
return false, errwrap.Wrapf(err, "apply error during StrSet") return false, errwrap.Wrapf(err, "apply error during StrSet")
} }
return false, nil return false, nil
} }
// KVUID is the UID struct for KVRes. // Cmp compares two resources and returns an error if they are not equivalent.
type KVUID struct { func (obj *KVRes) Cmp(r engine.Res) error {
BaseUID // we can only compare KVRes to others of the same resource kind
name string res, ok := r.(*KVRes)
} if !ok {
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.getKey() != res.getKey() {
return fmt.Errorf("the Key differs")
}
if (obj.Value == nil) != (res.Value == nil) { // xor
return fmt.Errorf("the Value differs")
}
if obj.Value != nil && res.Value != nil {
if *obj.Value != *res.Value { // compare the strings
return fmt.Errorf("the contents of Value differs")
}
}
if obj.SkipLessThan != res.SkipLessThan {
return fmt.Errorf("the SkipLessThan param differs")
}
if obj.SkipCmpStyle != res.SkipCmpStyle {
return fmt.Errorf("the SkipCmpStyle param differs")
}
// AutoEdges returns the AutoEdge interface. In this case no autoedges are used.
func (obj *KVRes) AutoEdges() AutoEdge {
return nil return nil
} }
// UIDs includes all params to make a unique identification of this object. // Interrupt is called to ask the execution of this resource to end early.
// Most resources only return one, although some resources can return multiple. func (obj *KVRes) Interrupt() error {
func (obj *KVRes) UIDs() []ResUID { close(obj.interruptChan)
return nil
}
// KVUID is the UID struct for KVRes.
type KVUID struct {
engine.BaseUID
name string
}
// UIDs includes all params to make a unique identification of this object. Most
// resources only return one, although some resources can return multiple.
func (obj *KVRes) UIDs() []engine.ResUID {
x := &KVUID{ x := &KVUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()}, BaseUID: engine.BaseUID{Name: obj.Name(), Kind: obj.Kind()},
name: obj.Name, name: obj.Name(),
} }
return []ResUID{x} return []engine.ResUID{x}
} }
// GroupCmp returns whether two resources can be grouped together or not. // UnmarshalYAML is the custom unmarshal handler for this struct. It is
func (obj *KVRes) GroupCmp(r Res) bool { // primarily useful for setting the defaults.
_, ok := r.(*KVRes)
if !ok {
return false
}
return false // TODO: this is doable!
// TODO: it could be useful to group our writes and watches!
}
// Compare two resources and return if they are equivalent.
func (obj *KVRes) Compare(res Res) bool {
switch res.(type) {
// we can only compare KVRes to others of the same resource
case *KVRes:
res := res.(*KVRes)
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Key != res.Key {
return false
}
if (obj.Value == nil) != (res.Value == nil) { // xor
return false
}
if obj.Value != nil && res.Value != nil {
if *obj.Value != *res.Value { // compare the strings
return false
}
}
if obj.SkipLessThan != res.SkipLessThan {
return false
}
if obj.SkipCmpStyle != res.SkipCmpStyle {
return false
}
default:
return false
}
return true
}
// UnmarshalYAML is the custom unmarshal handler for this struct.
// It is primarily useful for setting the defaults.
func (obj *KVRes) UnmarshalYAML(unmarshal func(interface{}) error) error { func (obj *KVRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes KVRes // indirection to avoid infinite recursion type rawRes KVRes // indirection to avoid infinite recursion
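The new CheckApply wires the Interrupt method to context cancellation: a helper goroutine waits on interruptChan and cancels the timeout context that the World calls receive. The following standalone sketch is not part of this commit; the names doWork and interruptChan are illustrative, and any context-aware operation stands in for the StrMap* calls. It only demonstrates the pattern in isolation.

package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// doWork simulates a long-running CheckApply-style operation that respects
// context cancellation.
func doWork(ctx context.Context) error {
	select {
	case <-time.After(3 * time.Second): // pretend this is a slow API call
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	interruptChan := make(chan struct{}) // closed to request an early exit

	wg := &sync.WaitGroup{}
	defer wg.Wait() // this must be above the defer cancel() call

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	wg.Add(1)
	go func() {
		defer wg.Done()
		select {
		case <-interruptChan:
			cancel() // interrupt requested: cancel all in-flight work
		case <-ctx.Done():
			// normal exit: let this goroutine finish
		}
	}()

	// simulate an external interrupt arriving shortly after we start
	go func() {
		time.Sleep(500 * time.Millisecond)
		close(interruptChan)
	}()

	if err := doWork(ctx); err != nil {
		fmt.Println("work interrupted:", err)
		return
	}
	fmt.Println("work completed")
}

Note the defer ordering: wg.Wait() is deferred before cancel(), so on return the context is cancelled first and the helper goroutine is guaranteed to exit before the wait completes.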

engine/resources/mount.go Normal file

@@ -0,0 +1,728 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"bytes"
"context"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strings"
"time"
"unsafe"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
engineUtil "github.com/purpleidea/mgmt/engine/util"
"github.com/purpleidea/mgmt/recwatch"
"github.com/purpleidea/mgmt/util"
"github.com/purpleidea/mgmt/util/errwrap"
sdbus "github.com/coreos/go-systemd/dbus"
"github.com/coreos/go-systemd/unit"
systemdUtil "github.com/coreos/go-systemd/util"
fstab "github.com/deniswernert/go-fstab"
"github.com/godbus/dbus"
"golang.org/x/sys/unix"
)
func init() {
engine.RegisterResource("mount", func() engine.Res { return &MountRes{} })
}
const (
// procFilesystems is a file that lists all the valid filesystem types.
procFilesystems = "/proc/filesystems"
// procPath is the path to /proc/mounts which contains all active mounts.
procPath = "/proc/mounts"
// fstabPath is the path to the fstab file which defines mounts.
fstabPath = "/etc/fstab"
// fstabUmask is the umask (permissions) used to edit /etc/fstab.
fstabUmask = 0644
// getStatus64 is an ioctl command to get the status of file backed
// loopback devices (i.e. iso file mounts.)
getStatus64 = 0x4C05
// loopFileUmask is the umask (permissions) used to read the loop file.
loopFileUmask = 0660
// devDisk is the path where disks and partitions can be found, organized
// by uuid/label/path.
devDisk = "/dev/disk/"
// diskByUUID is the location of symlinks for devices by UUID.
diskByUUID = devDisk + "by-uuid/"
// diskByLabel is the location of symlinks for devices by label.
diskByLabel = devDisk + "by-label/"
// diskByPartUUID is the location of symlinks for partitions by UUID.
diskByPartUUID = devDisk + "by-partuuid/"
// diskByPartLabel is the location of symlinks for partitions by label.
diskByPartLabel = devDisk + "by-partlabel/"
// dbusSystemd1Service is the service to connect to systemd itself.
dbusSystemd1Service = "org.freedesktop.systemd1"
// dbusSystemd1Path is the base systemd1 dbus path.
dbusSystemd1Path = "/org/freedesktop/systemd1"
// dbusUnitPath is the dbus path where mount unit files are found.
dbusUnitPath = dbusSystemd1Path + "/unit/"
// dbusSystemd1Interface is the base systemd1 interface.
dbusSystemd1Interface = "org.freedesktop.systemd1"
// dbusMountInterface is used as an argument to filter dbus messages.
dbusMountInterface = dbusSystemd1Interface + ".Mount"
// dbusManagerInterface is the systemd manager interface used for
// interfacing with systemd units.
dbusManagerInterface = dbusSystemd1Interface + ".Manager"
// dbusRestartUnit is the dbus method for restarting systemd units.
dbusRestartUnit = dbusManagerInterface + ".RestartUnit"
// dbusReloadSystemd is the dbus method for reloading systemd settings.
// (i.e. systemctl daemon-reload)
dbusReloadSystemd = dbusManagerInterface + ".Reload"
// dbusRestartCtxTimeout is the delay (in seconds) before restartUnit is
// assumed to have failed.
dbusRestartCtxTimeout = 10
// dbusSignalJobRemoved is the name of the dbus signal that produces a
// message when a dbus job is done (or has errored.)
dbusSignalJobRemoved = "JobRemoved"
)
// MountRes is a systemd mount resource that adds/removes entries from
// /etc/fstab, and makes sure the defined device is mounted or unmounted
// accordingly. The mount point is set according to the resource's name.
type MountRes struct {
traits.Base
init *engine.Init
// State must be exists or absent. If absent, the remaining fields are ignored.
State string `yaml:"state"`
Device string `yaml:"device"` // location of the device or image
Type string `yaml:"type"` // the type of filesystem
Options map[string]string `yaml:"options"` // mount options
Freq int `yaml:"freq"` // dump frequency
PassNo int `yaml:"passno"` // verification order
mount *fstab.Mount // struct representing the mount
}
// Default returns some sensible defaults for this resource.
func (obj *MountRes) Default() engine.Res {
return &MountRes{
Options: defaultMntOps(),
}
}
// Validate if the params passed in are valid data.
func (obj *MountRes) Validate() error {
var err error
// validate state
if obj.State != "exists" && obj.State != "absent" {
return fmt.Errorf("state must be 'exists', or 'absent'")
}
// validate type
fs, err := ioutil.ReadFile(procFilesystems)
if err != nil {
return errwrap.Wrapf(err, "error reading %s", procFilesystems)
}
fsSlice := strings.Fields(string(fs))
for i, x := range fsSlice {
if x == "nodev" {
fsSlice = append(fsSlice[:i], fsSlice[i+1:]...)
}
}
if obj.State != "absent" && !util.StrInList(obj.Type, fsSlice) {
return fmt.Errorf("type must be a valid filesystem type (see /proc/filesystems)")
}
// validate mountpoint
if strings.Contains(obj.Name(), "//") {
return fmt.Errorf("double slashes are not allowed in resource name")
}
if err := unix.Access(obj.Name(), unix.R_OK); err != nil {
return errwrap.Wrapf(err, "error validating mount point: %s", obj.Name())
}
// validate device
device, err := evalSpec(obj.Device) // eval symlink
if err != nil {
return errwrap.Wrapf(err, "error evaluating spec: %s", obj.Device)
}
if err := unix.Access(device, unix.R_OK); err != nil {
return errwrap.Wrapf(err, "error validating device: %s", device)
}
return nil
}
// Init runs some startup code for this resource.
func (obj *MountRes) Init(init *engine.Init) error {
obj.init = init //save for later
obj.mount = &fstab.Mount{
Spec: obj.Device,
File: obj.Name(),
VfsType: obj.Type,
MntOps: obj.Options,
Freq: obj.Freq,
PassNo: obj.PassNo,
}
return nil
}
// Close is run by the engine to clean up after the resource is done.
func (obj *MountRes) Close() error {
return nil
}
// Watch listens for signals from the mount unit associated with the resource.
// It also watches for changes to /etc/fstab, where mounts are defined.
func (obj *MountRes) Watch() error {
// make sure systemd is running
if !systemdUtil.IsRunningSystemd() {
return fmt.Errorf("systemd is not running")
}
// establish a godbus connection
conn, err := util.SystemBusPrivateUsable()
if err != nil {
return errwrap.Wrapf(err, "error establishing dbus connection")
}
defer conn.Close()
// add a dbus rule to watch signals from the mount unit.
args := fmt.Sprintf("type='signal', path='%s', arg0='%s'",
dbusUnitPath+sdbus.PathBusEscape(unit.UnitNamePathEscape((obj.Name()+".mount"))),
dbusMountInterface,
)
if call := conn.BusObject().Call(engineUtil.DBusAddMatch, 0, args); call.Err != nil {
return errwrap.Wrapf(call.Err, "error creating dbus call")
}
defer conn.BusObject().Call(engineUtil.DBusRemoveMatch, 0, args) // ignore the error
ch := make(chan *dbus.Signal)
defer close(ch)
conn.Signal(ch)
defer conn.RemoveSignal(ch)
// watch the fstab file
recWatcher, err := recwatch.NewRecWatcher(fstabPath, false)
if err != nil {
return err
}
// close the recwatcher when we're done
defer recWatcher.Close()
obj.init.Running() // when started, notify engine that we're running
var send bool
var done bool
for {
select {
case event, ok := <-recWatcher.Events():
if !ok {
if done {
return nil
}
done = true
continue
}
if err := event.Error; err != nil {
return errwrap.Wrapf(err, "unknown recwatcher error")
}
if obj.init.Debug {
obj.init.Logf("event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case event, ok := <-ch:
if !ok {
if done {
return nil
}
done = true
continue
}
if obj.init.Debug {
obj.init.Logf("event: %+v", event)
}
send = true
case <-obj.init.Done: // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// fstabCheckApply checks /etc/fstab for entries corresponding to the resource
// definition, and adds or deletes the entry as needed.
func (obj *MountRes) fstabCheckApply(apply bool) (bool, error) {
exists, err := fstabEntryExists(fstabPath, obj.mount)
if err != nil {
return false, errwrap.Wrapf(err, "error checking if fstab entry exists")
}
// if everything is as it should be, we're done
if (exists && obj.State == "exists") || (!exists && obj.State == "absent") {
return true, nil
}
if !apply {
return false, nil
}
obj.init.Logf("fstabCheckApply(%t)", apply)
if obj.State == "exists" {
if err := obj.fstabEntryAdd(fstabPath, obj.mount); err != nil {
return false, errwrap.Wrapf(err, "error adding fstab entry: %+v", obj.mount)
}
return false, nil
}
if err := obj.fstabEntryRemove(fstabPath, obj.mount); err != nil {
return false, errwrap.Wrapf(err, "error removing fstab entry: %+v", obj.mount)
}
return false, nil
}
// mountCheckApply checks if the defined resource is mounted, and mounts or
// unmounts it according to the defined state.
func (obj *MountRes) mountCheckApply(apply bool) (bool, error) {
exists, err := mountExists(procPath, obj.mount)
if err != nil {
return false, errwrap.Wrapf(err, "error checking if mount exists")
}
// if everything is as it should be, we're done
if (exists && obj.State == "exists") || (!exists && obj.State == "absent") {
return true, nil
}
if !apply {
return false, nil
}
obj.init.Logf("mountCheckApply(%t)", apply)
if obj.State == "exists" {
// Reload mounts from /etc/fstab by performing a `daemon-reload` and
// restarting `local-fs.target` and `remote-fs.target` units.
if err := mountReload(); err != nil {
return false, errwrap.Wrapf(err, "error reloading /etc/fstab")
}
return false, nil // we're done
}
// unmount the device
if err := unix.Unmount(obj.Name(), 0); err != nil { // 0 means no flags
return false, errwrap.Wrapf(err, "error unmounting %s", obj.Name())
}
return false, nil
}
// CheckApply is run to check the state and, if apply is true, to apply the
// necessary changes to reach the desired state. This is run before Watch and
// again if Watch finds a change occurring to the state.
func (obj *MountRes) CheckApply(apply bool) (bool, error) {
checkOK := true
if c, err := obj.fstabCheckApply(apply); err != nil {
return false, err
} else if !c {
checkOK = false
}
if c, err := obj.mountCheckApply(apply); err != nil {
return false, err
} else if !c {
checkOK = false
}
return checkOK, nil
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *MountRes) Cmp(r engine.Res) error {
// we can only compare MountRes to others of the same resource kind
res, ok := r.(*MountRes)
if !ok {
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.State != res.State {
return fmt.Errorf("the State differs")
}
if obj.Type != res.Type {
return fmt.Errorf("the Type differs")
}
if !strMapEq(obj.Options, res.Options) {
return fmt.Errorf("the Options differ")
}
if obj.Freq != res.Freq {
return fmt.Errorf("the Type differs")
}
if obj.PassNo != res.PassNo {
return fmt.Errorf("the PassNo differs")
}
return nil
}
// MountUID is a unique resource identifier.
type MountUID struct {
engine.BaseUID
name string
}
// IFF aka if and only if they are equivalent, return true. If not, false.
func (obj *MountUID) IFF(uid engine.ResUID) bool {
res, ok := uid.(*MountUID)
if !ok {
return false
}
return obj.name == res.name
}
// UIDs includes all params to make a unique identification of this object. Most
// resources only return one, although some resources can return multiple.
func (obj *MountRes) UIDs() []engine.ResUID {
x := &MountUID{
BaseUID: engine.BaseUID{Name: obj.Name(), Kind: obj.Kind()},
name: obj.Name(),
}
return []engine.ResUID{x}
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *MountRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes MountRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*MountRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to MountRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = MountRes(raw) // restore from indirection with type conversion!
return nil
}
// defaultMntOps returns a map that sets the default mount options for fstab
// mounts.
func defaultMntOps() map[string]string {
return map[string]string{"defaults": ""}
}
// strMapEq returns true, if and only if the two provided maps are identical.
func strMapEq(x, y map[string]string) bool {
if len(x) != len(y) {
return false
}
for k, v := range x {
if val, ok := y[k]; !ok || v != val { // compare against the other map
return false
}
}
return true
}
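Since strMapEq must compare the two different maps (hence the y-map lookup above), here is a minimal standalone check of those semantics: equal length, and every key/value pair of one map present with the same value in the other. strMapEqDemo is an illustrative copy for this sketch, not part of the commit.

package main

import "fmt"

// strMapEqDemo mirrors the strMapEq logic shown above.
func strMapEqDemo(x, y map[string]string) bool {
	if len(x) != len(y) {
		return false
	}
	for k, v := range x {
		if val, ok := y[k]; !ok || v != val {
			return false
		}
	}
	return true
}

func main() {
	a := map[string]string{"defaults": "", "ro": ""}
	b := map[string]string{"ro": "", "defaults": ""}
	c := map[string]string{"defaults": "", "rw": ""}

	fmt.Println(strMapEqDemo(a, b)) // true: same pairs, order irrelevant
	fmt.Println(strMapEqDemo(a, c)) // false: "ro" vs "rw"
}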
// fstabEntryExists checks whether or not a given mount exists in the provided
// fstab file.
func fstabEntryExists(file string, mount *fstab.Mount) (bool, error) {
mounts, err := fstab.ParseFile(file)
if err != nil {
return false, errwrap.Wrapf(err, "error parsing file: %s", file)
}
for _, m := range mounts {
if m.Equals(mount) {
return true, nil
}
}
return false, nil
}
// fstabEntryAdd adds the given mount to the provided fstab file.
func (obj *MountRes) fstabEntryAdd(file string, mount *fstab.Mount) error {
mounts, err := fstab.ParseFile(file)
if err != nil {
return errwrap.Wrapf(err, "error parsing file: %s", file)
}
for _, m := range mounts {
// if the entry exists, we're done
if m.Equals(mount) {
return nil
}
}
// mount does not exist so we need to add it
mounts = append(mounts, mount)
return obj.fstabWrite(file, mounts)
}
// fstabEntryRemove removes the given mount from the provided fstab file.
func (obj *MountRes) fstabEntryRemove(file string, mount *fstab.Mount) error {
mounts, err := fstab.ParseFile(file)
if err != nil {
return errwrap.Wrapf(err, "error parsing file: %s", file)
}
for i, m := range mounts {
// remove any entry with the defined mountpoint
if m.File == mount.File {
mounts = append(mounts[:i], mounts[i+1:]...)
}
}
return obj.fstabWrite(file, mounts)
}
// fstabWrite generates an fstab file with the given mounts, and writes them to
// the provided fstab file.
func (obj *MountRes) fstabWrite(file string, mounts fstab.Mounts) error {
// build the file contents
contents := fmt.Sprintf("# Generated by %s at %d", obj.init.Program, time.Now().UnixNano()) + "\n"
contents = contents + mounts.String() + "\n"
// write the file
if err := ioutil.WriteFile(file, []byte(contents), fstabUmask); err != nil {
return errwrap.Wrapf(err, "error writing fstab file: %s", file)
}
return nil
}
// mountExists returns true, if a given mount exists in the given file
// (typically /proc/mounts.)
func mountExists(file string, mount *fstab.Mount) (bool, error) {
var err error
m := *mount // make a copy so we don't change the definition
// resolve the device's symlink if there is one
if m.Spec, err = evalSpec(mount.Spec); err != nil {
return false, errwrap.Wrapf(err, "error evaluating spec: %s", mount.Spec)
}
// get all mounts
mounts, err := fstab.ParseFile(file)
if err != nil {
return false, errwrap.Wrapf(err, "error parsing file: %s", file)
}
// check for the defined mount
for _, p := range mounts {
found, err := mountCompare(&m, p)
if err != nil {
return false, errwrap.Wrapf(err, "mounts could not be compared: %s and %s", mount.String(), p.String())
}
if found {
return true, nil
}
}
return false, nil
}
// mountCompare compares two mounts. It is assumed that the first comes from a
// resource definition, and the second comes from /proc/mounts. It compares the
// two after resolving the loopback device's file path (if necessary,) and
// ignores freq and passno, as they may differ between the definition and
// /proc/mounts.
func mountCompare(def, proc *fstab.Mount) (bool, error) {
if def.Equals(proc) {
return true, nil
}
if def.File != proc.File {
return false, nil
}
if def.Spec != "" {
procSpec, err := loopFilePath(proc.Spec)
if err != nil {
return false, err
}
if def.Spec != procSpec {
return false, nil
}
}
if !strMapEq(def.MntOps, defaultMntOps()) && !strMapEq(def.MntOps, proc.MntOps) {
return false, nil
}
if def.VfsType != "" && def.VfsType != proc.VfsType {
return false, nil
}
return true, nil
}
// mountReload performs a daemon-reload and restarts fs-local.target and
// fs-remote.target, to let systemd mount any new entries in /etc/fstab.
func mountReload() error {
// establish a godbus connection
conn, err := util.SystemBusPrivateUsable()
if err != nil {
return errwrap.Wrapf(err, "error establishing dbus connection")
}
defer conn.Close()
// systemctl daemon-reload
call := conn.Object(dbusSystemd1Service, dbusSystemd1Path).Call(dbusReloadSystemd, 0)
if call.Err != nil {
return errwrap.Wrapf(call.Err, "error reloading systemd")
}
// systemctl restart local-fs.target
if err := restartUnit(conn, "local-fs.target"); err != nil {
return errwrap.Wrapf(err, "error restarting unit")
}
// systemctl restart remote-fs.target
if err := restartUnit(conn, "remote-fs.target"); err != nil {
return errwrap.Wrapf(err, "error restarting unit")
}
return nil
}
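mountReload drives systemd over dbus: a Manager.Reload call (the equivalent of `systemctl daemon-reload`) followed by unit restarts. As a rough, standalone sketch of just the reload step, assuming the shared system bus from godbus rather than mgmt's util.SystemBusPrivateUsable, and assuming the caller has the privileges systemd requires:

package main

import (
	"fmt"

	"github.com/godbus/dbus"
)

func main() {
	// Connect to the system bus; the resource uses a private, usable
	// connection instead.
	conn, err := dbus.SystemBus()
	if err != nil {
		fmt.Println("error connecting to system bus:", err)
		return
	}

	// Equivalent of `systemctl daemon-reload`: call Manager.Reload on the
	// systemd1 object.
	sd1 := conn.Object("org.freedesktop.systemd1", dbus.ObjectPath("/org/freedesktop/systemd1"))
	if call := sd1.Call("org.freedesktop.systemd1.Manager.Reload", 0); call.Err != nil {
		fmt.Println("error reloading systemd:", call.Err)
		return
	}
	fmt.Println("systemd configuration reloaded")
}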
// restartUnit restarts the given dbus unit and waits for it to finish starting
// up. If restartTimeout is exceeded, it will return an error.
func restartUnit(conn *dbus.Conn, unit string) error {
// timeout if we don't get the JobRemoved event
ctx, cancel := context.WithTimeout(context.TODO(), dbusRestartCtxTimeout*time.Second)
defer cancel()
// Add a dbus rule to watch the systemd1 JobRemoved signal used to wait
// until the restart job completes.
args := fmt.Sprintf("type='signal', path='%s', interface='%s', member='%s', arg2='%s'",
dbusSystemd1Path,
dbusManagerInterface,
dbusSignalJobRemoved,
unit,
)
if call := conn.BusObject().Call(engineUtil.DBusAddMatch, 0, args); call.Err != nil {
return errwrap.Wrapf(call.Err, "error creating dbus call")
}
defer conn.BusObject().Call(engineUtil.DBusRemoveMatch, 0, args) // ignore the error
// channel for godbus connection
ch := make(chan *dbus.Signal)
defer close(ch)
conn.Signal(ch)
defer conn.RemoveSignal(ch)
// restart the unit
sd1 := conn.Object(dbusSystemd1Service, dbus.ObjectPath(dbusSystemd1Path))
if call := sd1.Call(dbusRestartUnit, 0, unit, "fail"); call.Err != nil {
return errwrap.Wrapf(call.Err, "error restarting unit: %s", unit)
}
// wait for the job to be removed, indicating completion
select {
case event, ok := <-ch:
if !ok {
return fmt.Errorf("channel closed unexpectedly")
}
if event.Body[3] != "done" {
return fmt.Errorf("unexpected job status: %s", event.Body[3])
}
case <-ctx.Done():
return fmt.Errorf("restarting %s failed due to context timeout", unit)
}
return nil
}
// evalSpec resolves the device from the supplied spec, i.e. it follows the
// symlink, if any, from the provided uuid, label, or path.
func evalSpec(spec string) (string, error) {
var path string
m := &fstab.Mount{}
m.Spec = spec
switch m.SpecType() {
case fstab.UUID:
path = diskByUUID + m.SpecValue()
case fstab.Label:
path = diskByLabel + m.SpecValue()
case fstab.PartUUID:
path = diskByPartUUID + m.SpecValue()
case fstab.PartLabel:
path = diskByPartLabel + m.SpecValue()
case fstab.Path:
path = m.SpecValue()
default:
return "", fmt.Errorf("unexpected spec type: %v", m.SpecType())
}
return filepath.EvalSymlinks(path)
}
// loopFilePath returns the file path of the mounted filesystem image, backing
// the given loopback device.
func loopFilePath(spec string) (string, error) {
// if it's not a loopback device, return the input
if !strings.Contains(spec, "/dev/loop") {
return spec, nil
}
info, err := getLoopInfo(spec)
if err != nil {
return "", errwrap.Wrapf(err, "error getting loop info")
}
// trim the extra null chars off the end of the filename
return string(bytes.Trim(info.FileName[:], "\x00")), nil
}
// loopInfo is a datastructure that holds relevant information about a file
// backed loopback device. Code is based on freddierice/go-losetup.
type loopInfo struct {
Device uint64
INode uint64
RDevice uint64
Offset uint64
SizeLimit uint64
Number uint32
EncryptType uint32
EncryptKeySize uint32
Flags uint32
FileName [64]byte
CryptName [64]byte
EncryptKey [32]byte
Init [2]uint64
}
// getLoopInfo returns a loopInfo struct containing information about the
// provided file backed loopback device.
func getLoopInfo(loop string) (*loopInfo, error) {
// open the loop file
f, err := os.OpenFile(loop, 0, loopFileUmask)
if err != nil {
return nil, fmt.Errorf("error opening %s: %s", loop, err)
}
defer f.Close()
// deserialize the contents
retInfo := &loopInfo{}
_, _, errno := unix.Syscall(unix.SYS_IOCTL, f.Fd(), getStatus64, uintptr(unsafe.Pointer(retInfo)))
if errno == unix.ENXIO {
return nil, fmt.Errorf("device not backed by a file")
} else if errno != 0 {
return nil, fmt.Errorf("error getting info about %s (errno: %d)", loop, errno)
}
return retInfo, nil
}
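mount.go leans on the github.com/deniswernert/go-fstab package for all fstab parsing and serialization. As a quick orientation, the short sketch below (illustrative, not from this commit) parses a file with fstab.ParseFile and prints each entry; this is the same call that fstabEntryExists, fstabEntryAdd, and mountExists build on.

package main

import (
	"fmt"

	fstab "github.com/deniswernert/go-fstab"
)

func main() {
	// Parse the system fstab and print each defined mount.
	mounts, err := fstab.ParseFile("/etc/fstab")
	if err != nil {
		fmt.Println("error parsing fstab:", err)
		return
	}
	for _, m := range mounts {
		fmt.Printf("%s -> %s (%s)\n", m.Spec, m.File, m.VfsType)
	}
}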


@@ -0,0 +1,76 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build !root !darwin
package resources
import (
"io/ioutil"
"os"
"testing"
fstab "github.com/deniswernert/go-fstab"
)
func TestMountExists(t *testing.T) {
const procMock1 = `/tmp/mount0 /mnt/proctest ext4 rw,seclabel,relatime,data=ordered 0 0` + "\n"
var mountExistsTests = []struct {
procMock []byte
in *fstab.Mount
out bool
}{
{
[]byte(procMock1),
&fstab.Mount{
Spec: "/tmp/mount0",
File: "/mnt/proctest",
VfsType: "ext4",
MntOps: map[string]string{"defaults": ""},
Freq: 1,
PassNo: 1,
},
true,
},
}
file, err := ioutil.TempFile("", "proc")
if err != nil {
t.Errorf("error creating temp file: %v", err)
return
}
defer os.Remove(file.Name())
for _, test := range mountExistsTests {
if err := ioutil.WriteFile(file.Name(), test.procMock, 0664); err != nil {
t.Errorf("error writing proc file: %s: %v", file.Name(), err)
return
}
if err := ioutil.WriteFile(test.in.Spec, []byte{}, 0664); err != nil {
t.Errorf("error writing fstab file: %s: %v", file.Name(), err)
return
}
result, err := mountExists(file.Name(), test.in)
if err != nil {
t.Errorf("error checking if fstab entry %s exists: %v", test.in.String(), err)
return
}
if result != test.out {
t.Errorf("mountExistsTests test wanted: %t, got: %t", test.out, result)
}
}
}


@@ -0,0 +1,295 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build !root
package resources
import (
"io/ioutil"
"os"
"testing"
fstab "github.com/deniswernert/go-fstab"
)
const fstabMock1 = `UUID=ef5726f2-615c-4350-b0ab-f106e5fc90ad / ext4 defaults 1 1` + "\n"
var fstabWriteTests = []struct {
in fstab.Mounts
}{
{
fstab.Mounts{
&fstab.Mount{
Spec: "UUID=00112233-4455-6677-8899-aabbccddeeff",
File: "/boot",
VfsType: "ext3",
MntOps: map[string]string{"defaults": ""},
Freq: 1,
PassNo: 2,
},
&fstab.Mount{
Spec: "/dev/mapper/home",
File: "/home",
VfsType: "ext3",
MntOps: map[string]string{"defaults": ""},
Freq: 1,
PassNo: 2,
},
},
},
{
fstab.Mounts{
&fstab.Mount{
Spec: "/dev/cdrom",
File: "/mnt/cdrom",
VfsType: "iso9660",
MntOps: map[string]string{"ro": "", "blocksize": "2048"},
},
},
},
}
func (obj *MountRes) TestFstabWrite(t *testing.T) {
file, err := ioutil.TempFile("", "fstab")
if err != nil {
t.Errorf("error creating temp file: %v", err)
return
}
defer os.Remove(file.Name())
for _, test := range fstabWriteTests {
if err := obj.fstabWrite(file.Name(), test.in); err != nil {
t.Errorf("error writing fstab file: %s: %v", file.Name(), err)
return
}
for _, mount := range test.in {
exists, err := fstabEntryExists(file.Name(), mount)
if err != nil {
t.Errorf("error checking if fstab entry %s exists: %v", mount.String(), err)
return
}
if !exists {
t.Errorf("failed to write %s to fstab", mount.String())
}
}
}
}
var fstabEntryAddTests = []struct {
fstabMock []byte
in *fstab.Mount
}{
{
[]byte(fstabMock1),
&fstab.Mount{
Spec: "/dev/sdb1",
File: "/mnt/foo",
VfsType: "ext2",
MntOps: map[string]string{"ro": "", "blocksize": "2048"},
},
},
{
[]byte(fstabMock1),
&fstab.Mount{
Spec: "UUID=00112233-4455-6677-8899-aabbccddeeff",
File: "/",
VfsType: "ext3",
MntOps: map[string]string{"defaults": ""},
Freq: 1,
PassNo: 2,
},
},
}
func (obj *MountRes) TestFstabEntryAdd(t *testing.T) {
file, err := ioutil.TempFile("", "fstab")
if err != nil {
t.Errorf("error creating temp file: %v", err)
return
}
defer os.Remove(file.Name())
for _, test := range fstabEntryAddTests {
if err := ioutil.WriteFile(file.Name(), test.fstabMock, 0644); err != nil {
t.Errorf("error writing fstab file: %s: %v", file.Name(), err)
return
}
err := obj.fstabEntryAdd(file.Name(), test.in)
if err != nil {
t.Errorf("error adding fstab entry: %s to file: %s: %v", test.in.String(), file.Name(), err)
return
}
exists, err := fstabEntryExists(file.Name(), test.in)
if err != nil {
t.Errorf("error checking if %s exists: %v", test.in.String(), err)
return
}
if !exists {
t.Errorf("fstab failed to add entry: %s to fstab", test.in.String())
}
}
}
var fstabEntryRemoveTests = []struct {
fstabMock []byte
in *fstab.Mount
}{
{
[]byte(fstabMock1),
&fstab.Mount{
Spec: "UUID=ef5726f2-615c-4350-b0ab-f106e5fc90ad",
File: "/",
VfsType: "ext4",
MntOps: map[string]string{"defaults": ""},
Freq: 1,
PassNo: 1,
},
},
}
func (obj *MountRes) TestFstabEntryRemove(t *testing.T) {
file, err := ioutil.TempFile("", "fstab")
if err != nil {
t.Errorf("error creating temp file: %v", err)
return
}
defer os.Remove(file.Name())
for _, test := range fstabEntryRemoveTests {
if err := ioutil.WriteFile(file.Name(), test.fstabMock, 0644); err != nil {
t.Errorf("error writing fstab file: %s: %v", file.Name(), err)
return
}
err := obj.fstabEntryRemove(file.Name(), test.in)
if err != nil {
t.Errorf("error removing fstab entry: %s from file: %s: %v", test.in.String(), file.Name(), err)
return
}
exists, err := fstabEntryExists(file.Name(), test.in)
if err != nil {
t.Errorf("error checking if %s exists: %v", test.in.String(), err)
return
}
if exists {
t.Errorf("fstab failed to remove entry: %s from fstab", test.in.String())
}
}
}
var mountCompareTests = []struct {
dIn *fstab.Mount
pIn *fstab.Mount
out bool
}{
{
&fstab.Mount{
Spec: "/dev/foo",
File: "/mnt/foo",
VfsType: "ext3",
MntOps: map[string]string{"defaults": ""},
},
&fstab.Mount{
Spec: "/dev/foo",
File: "/mnt/foo",
VfsType: "ext3",
MntOps: map[string]string{"foo": "bar", "baz": ""},
},
true,
},
{
&fstab.Mount{
Spec: "UUID=00112233-4455-6677-8899-aabbccddeeff",
File: "/mnt/foo",
VfsType: "ext3",
},
&fstab.Mount{
Spec: "UUID=00112233-4455-6677-8899-aabbccddeeff",
File: "/mnt/bar",
VfsType: "ext3",
},
false,
},
}
var fstabEntryExistsTests = []struct {
fstabMock []byte
in *fstab.Mount
out bool
}{
{
[]byte(fstabMock1),
&fstab.Mount{
Spec: "UUID=ef5726f2-615c-4350-b0ab-f106e5fc90ad",
File: "/",
VfsType: "ext4",
MntOps: map[string]string{"defaults": ""},
Freq: 1,
PassNo: 1,
},
true,
},
{
[]byte(fstabMock1),
&fstab.Mount{
Spec: "/dev/mapper/root",
File: "/home",
VfsType: "ext4",
MntOps: map[string]string{"defaults": ""},
Freq: 1,
PassNo: 1,
},
false,
},
}
func TestFstabEntryExists(t *testing.T) {
file, err := ioutil.TempFile("", "fstab")
if err != nil {
t.Errorf("error creating temp file: %v", err)
return
}
defer os.Remove(file.Name())
for _, test := range fstabEntryExistsTests {
if err := ioutil.WriteFile(file.Name(), test.fstabMock, 0644); err != nil {
t.Errorf("error writing fstab file: %s: %v", file.Name(), err)
return
}
result, err := fstabEntryExists(file.Name(), test.in)
if err != nil {
t.Errorf("error checking if fstab entry %s exists: %v", test.in.String(), err)
return
}
if result != test.out {
t.Errorf("fstabEntryExists test wanted: %t, got: %t", test.out, result)
}
}
}
func TestMountCompare(t *testing.T) {
for _, test := range mountCompareTests {
result, err := mountCompare(test.dIn, test.pIn)
if err != nil {
t.Errorf("error comparing mounts: %s and %s: %v", test.dIn.String(), test.pIn.String(), err)
return
}
if result != test.out {
t.Errorf("mountCompare test wanted: %t, got: %t", test.out, result)
}
}
}


@@ -1,39 +1,44 @@
 // Mgmt
-// Copyright (C) 2013-2017+ James Shubin and the project contributors
+// Copyright (C) 2013-2021+ James Shubin and the project contributors
 // Written by James Shubin <james@shubin.ca> and the project contributors
 //
 // This program is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Affero General Public License as published by
+// it under the terms of the GNU General Public License as published by
 // the Free Software Foundation, either version 3 of the License, or
 // (at your option) any later version.
 //
 // This program is distributed in the hope that it will be useful,
 // but WITHOUT ANY WARRANTY; without even the implied warranty of
 // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Affero General Public License for more details.
+// GNU General Public License for more details.
 //
-// You should have received a copy of the GNU Affero General Public License
+// You should have received a copy of the GNU General Public License
 // along with this program. If not, see <http://www.gnu.org/licenses/>.

 package resources

 import (
-	"encoding/gob"
 	"fmt"
-	"log"
 	"regexp"
 	"strings"

+	"github.com/purpleidea/mgmt/engine"
+	"github.com/purpleidea/mgmt/engine/traits"
+
 	"github.com/coreos/go-systemd/journal"
 )

 func init() {
-	gob.Register(&MsgRes{})
+	engine.RegisterResource("msg", func() engine.Res { return &MsgRes{} })
 }

 // MsgRes is a resource that writes messages to logs.
 type MsgRes struct {
-	BaseRes `yaml:",inline"`
+	traits.Base // add the base methods without re-implementation
+	traits.Refreshable
+
+	init *engine.Init

 	Body     string            `yaml:"body"`
 	Priority string            `yaml:"priority"`
 	Fields   map[string]string `yaml:"fields"`
@@ -44,19 +49,9 @@ type MsgRes struct {
 	syslogStateOK bool
 }

-// MsgUID is a unique representation for a MsgRes object.
-type MsgUID struct {
-	BaseUID
-	body string
-}
-
 // Default returns some sensible defaults for this resource.
-func (obj *MsgRes) Default() Res {
-	return &MsgRes{
-		BaseRes: BaseRes{
-			MetaParams: DefaultMetaParams, // force a default
-		},
-	}
+func (obj *MsgRes) Default() engine.Res {
+	return &MsgRes{}
 }

 // Validate the params that are passed to MsgRes.
@@ -70,16 +65,54 @@ func (obj *MsgRes) Validate() error {
 			return fmt.Errorf("fields cannot begin with _")
 		}
 	}
-	return obj.BaseRes.Validate()
+
+	switch obj.Priority {
+	case "Emerg":
+	case "Alert":
+	case "Crit":
+	case "Err":
+	case "Warning":
+	case "Notice":
+	case "Info":
+	case "Debug":
+	default:
+		return fmt.Errorf("invalid Priority '%s'", obj.Priority)
+	}
+
+	return nil
 }

 // Init runs some startup code for this resource.
-func (obj *MsgRes) Init() error {
-	obj.BaseRes.kind = "msg"
-	return obj.BaseRes.Init() // call base init, b/c we're overrriding
+func (obj *MsgRes) Init(init *engine.Init) error {
+	obj.init = init // save for later
+
+	return nil
 }

-// isAllStateOK derives a compound state from all internal cache flags that apply to this resource.
+// Close is run by the engine to clean up after the resource is done.
+func (obj *MsgRes) Close() error {
+	return nil
+}
+
+// Watch is the primary listener for this resource and it outputs events.
+func (obj *MsgRes) Watch() error {
+	obj.init.Running() // when started, notify engine that we're running
+
+	//var send = false // send event?
+	for {
+		select {
+		case <-obj.init.Done: // closed by the engine to signal shutdown
+			return nil
+		}
+
+		// do all our event sending all together to avoid duplicate msgs
+		//if send {
+		//	send = false
+		//	obj.init.Event() // notify engine of an event (this can block)
+		//}
+	}
+}
+
+// isAllStateOK derives a compound state from all internal cache flags that
+// apply to this resource.
 func (obj *MsgRes) isAllStateOK() bool {
 	if obj.Journal && !obj.journalStateOK {
 		return false
@@ -92,11 +125,13 @@ func (obj *MsgRes) isAllStateOK() bool {

 // updateStateOK sets the global state so it can be read by the engine.
 func (obj *MsgRes) updateStateOK() {
-	obj.StateOK(obj.isAllStateOK())
+	// XXX: this resource doesn't entirely make sense to me at the moment.
+	if !obj.isAllStateOK() {
+		//obj.init.Dirty() // XXX: removed with API cleanup
+	}
 }

 // JournalPriority converts a string description to a numeric priority.
+// XXX: Have Validate() make sure it actually is one of these.
 func (obj *MsgRes) journalPriority() journal.Priority {
 	switch obj.Priority {
 	case "Emerg":
@@ -119,42 +154,15 @@ func (obj *MsgRes) journalPriority() journal.Priority {
 	return journal.PriNotice
 }

-// Watch is the primary listener for this resource and it outputs events.
-func (obj *MsgRes) Watch() error {
-	// notify engine that we're running
-	if err := obj.Running(); err != nil {
-		return err // bubble up a NACK...
-	}
-
-	var send = false // send event?
-	var exit *error
-	for {
-		select {
-		case event := <-obj.Events():
-			// we avoid sending events on unpause
-			if exit, send = obj.ReadEvent(event); exit != nil {
-				return *exit // exit
-			}
-		}
-
-		// do all our event sending all together to avoid duplicate msgs
-		if send {
-			send = false
-			obj.Event()
-		}
-	}
-}
-
-// CheckApply method for Msg resource.
-// Every check leads to an apply, meaning that the message is flushed to the journal.
+// CheckApply method for Msg resource. Every check leads to an apply, meaning
+// that the message is flushed to the journal.
 func (obj *MsgRes) CheckApply(apply bool) (bool, error) {
 	// isStateOK() done by engine, so we updateStateOK() to pass in value
 	//if obj.isAllStateOK() {
 	//	return true, nil
 	//}

-	if obj.Refresh() { // if we were notified...
+	if obj.init.Refresh() { // if we were notified...
 		// invalidate cached state...
 		obj.logStateOK = false
 		if obj.Journal {
@@ -167,7 +175,7 @@ func (obj *MsgRes) CheckApply(apply bool) (bool, error) {
 	}

 	if !obj.logStateOK {
-		log.Printf("%s[%s]: Body: %s", obj.Kind(), obj.GetName(), obj.Body)
+		obj.init.Logf("Body: %s", obj.Body)
 		obj.logStateOK = true
 		obj.updateStateOK()
 	}
@@ -190,54 +198,51 @@ func (obj *MsgRes) CheckApply(apply bool) (bool, error) {
 	return false, nil
 }

-// UIDs includes all params to make a unique identification of this object.
-// Most resources only return one, although some resources can return multiple.
-func (obj *MsgRes) UIDs() []ResUID {
-	x := &MsgUID{
-		BaseUID: BaseUID{
-			name: obj.GetName(),
-			kind: obj.Kind(),
-		},
-		body: obj.Body,
+// Cmp compares two resources and returns an error if they are not equivalent.
+func (obj *MsgRes) Cmp(r engine.Res) error {
+	// we can only compare MsgRes to others of the same resource kind
+	res, ok := r.(*MsgRes)
+	if !ok {
+		return fmt.Errorf("not a %s", obj.Kind())
 	}
-	return []ResUID{x}
-}

-// AutoEdges returns the AutoEdges. In this case none are used.
-func (obj *MsgRes) AutoEdges() AutoEdge {
+	if obj.Body != res.Body {
+		return fmt.Errorf("the Body differs")
+	}
+	if obj.Priority != res.Priority {
+		return fmt.Errorf("the Priority differs")
+	}
+	if len(obj.Fields) != len(res.Fields) {
+		return fmt.Errorf("the length of Fields differs")
+	}
+	for field, value := range obj.Fields {
+		if res.Fields[field] != value {
+			return fmt.Errorf("the Fields differ")
+		}
+	}
+
 	return nil
 }

-// Compare two resources and return if they are equivalent.
-func (obj *MsgRes) Compare(res Res) bool {
-	switch res.(type) {
-	case *MsgRes:
-		res := res.(*MsgRes)
-		if !obj.BaseRes.Compare(res) {
-			return false
-		}
-		if obj.Body != res.Body {
-			return false
-		}
-		if obj.Priority != res.Priority {
-			return false
-		}
-		if len(obj.Fields) != len(res.Fields) {
-			return false
-		}
-		for field, value := range obj.Fields {
-			if res.Fields[field] != value {
-				return false
-			}
-		}
-	default:
-		return false
-	}
-	return true
+// MsgUID is a unique representation for a MsgRes object.
+type MsgUID struct {
+	engine.BaseUID
+
+	body string
 }

-// UnmarshalYAML is the custom unmarshal handler for this struct.
-// It is primarily useful for setting the defaults.
+// UIDs includes all params to make a unique identification of this object. Most
+// resources only return one, although some resources can return multiple.
+func (obj *MsgRes) UIDs() []engine.ResUID {
+	x := &MsgUID{
+		BaseUID: engine.BaseUID{Name: obj.Name(), Kind: obj.Kind()},
+		body:    obj.Body,
+	}
+	return []engine.ResUID{x}
+}
+
+// UnmarshalYAML is the custom unmarshal handler for this struct. It is
+// primarily useful for setting the defaults.
 func (obj *MsgRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
 	type rawRes MsgRes // indirection to avoid infinite recursion
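The new Validate rejects unknown priorities with a switch whose cases are intentionally empty: matching any known string simply falls out of the switch, and only the default branch returns an error. A tiny self-contained sketch of that idiom follows; validPriority is an illustrative name, not part of the commit.

package main

import "fmt"

// validPriority returns an error unless p is one of the known journal
// priority names; the empty cases accept the value without further work.
func validPriority(p string) error {
	switch p {
	case "Emerg":
	case "Alert":
	case "Crit":
	case "Err":
	case "Warning":
	case "Notice":
	case "Info":
	case "Debug":
	default:
		return fmt.Errorf("invalid Priority '%s'", p)
	}
	return nil
}

func main() {
	fmt.Println(validPriority("Debug"))          // <nil>
	fmt.Println(validPriority("UnrealPriority")) // invalid Priority 'UnrealPriority'
}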


@@ -0,0 +1,44 @@
// Mgmt
// Copyright (C) 2013-2021+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// +build !root
package resources
import (
"testing"
)
func TestMsgValidate1(t *testing.T) {
r1 := &MsgRes{
Priority: "Debug",
}
if err := r1.Validate(); err != nil {
t.Errorf("validate failed with: %v", err)
}
}
func TestMsgValidate2(t *testing.T) {
r1 := &MsgRes{
Priority: "UnrealPriority",
}
if err := r1.Validate(); err == nil {
t.Errorf("validation error is nil")
}
}

Some files were not shown because too many files have changed in this diff.