109 Commits

Author SHA1 Message Date
James Shubin
9398deeabc etcd: Workaround a nil ptr bug
A clean re-write of this etcd code is needed, but until then, this
should hopefully work around the occasional test failures. In practice I
don't think anyone has ever hit this bug.
2019-01-17 20:07:24 -05:00
James Shubin
bf63d2e844 engine: graph: Avoid a possible panic sending on a closed channel
It's plausible that we send on a closed channel if we're running a back
poke and it tries to send a poke on something that has already closed.
If it detects this condition, it will exit.

Unfortunately, it's not clear if the wait group will protect this case,
but hopefully this will hold us until we can re-write the needed parts
of the engine.
2019-01-17 20:05:49 -05:00
James Shubin
b808592fb3 engine: Work around bad timestamp panic
Occasionally a vertex downstream of an upstream vertex which has already
exited could get back poked, which would cause a panic. This delays the
deletion of the state struct until the entire graph has completed, so
that it won't panic. It doesn't matter if a back poke is lost; since
we're shutting down or pausing, it's safe to drop it in this scenario.
2019-01-17 20:05:49 -05:00
James Shubin
e2296a631b engine: event: Switch events system to use simpler structs
Pass around pointers of things now. Also, naming is vastly improved and
clearer.
2019-01-17 20:04:17 -05:00
James Shubin
e20555d4bc test: Don't be unnecessarily noisy in this test
This is confusing if you're looking for an error in the test.
2019-01-17 19:33:35 -05:00
James Shubin
b89e2dcd3c test: Add a three host variant of the empty etcd test 2019-01-17 19:21:56 -05:00
James Shubin
165d11b2ca test: Rename t8 to be more descriptive 2019-01-17 19:21:56 -05:00
James Shubin
d4046c0acf test: Enable t8 to test for two host etcd clusters
I can't remember why we disabled this, so let's put it back. There's
still one rare etcd race, but hopefully it doesn't fail too much until
we fix it.
2019-01-17 19:21:56 -05:00
James Shubin
88498695ac test: Add a semaphore shell test
This test exercises new language features and acts as a fan-in/fan-out graph.
2019-01-17 19:21:56 -05:00
James Shubin
354a1c23b0 engine: graph: Prevent converged timeout of dirty res
Somewhere after the engine re-write we seem to have regressed, and we
now converge early even if some resource is dirty. This adds an
additional timer so that we don't start the individual resource
converged countdown until our state is okay.
2019-01-17 18:46:00 -05:00
Kevin Kuehler
34550246f4 lang: Add debug flag and Logf to fact init struct 2019-01-17 18:12:45 -05:00
Jonathan Gold
db1cc846dc test: Ensure gometalinter is available 2019-01-15 20:24:37 -05:00
Jonathan Gold
74484bcbdf make: deps: Only install gometalinter on CI/CD servers 2019-01-15 20:23:24 -05:00
James Shubin
d5ecf8ce16 engine: Fix typos 2019-01-12 15:03:03 -05:00
James Shubin
b1ffb1d4a4 lang: Add autoedge and autogroup meta params to mcl
These weren't yet exposed in mcl. They're now available under the same
Meta namespace as the normal meta param structs. Even though they live
as a separate trait, they should be exposed together for a consistent
interface in mcl. If autoedge or autogroup ever grow additional params,
we can always add: `Meta:autoedge:something` to break it down further.
2019-01-12 13:16:39 -05:00
James Shubin
451e1122a7 lang: Refactor the res metaparams helper
We can do all the actions without returning anything but an error.
2019-01-12 12:34:07 -05:00
James Shubin
10dcf32f3c lang: Allow a list of strings in the resource name
This adds a core looping construct by allowing a list of names to build
a resource. They'll all have the same parameters, but they'll
intelligently add the correct list of edges that they'd individually
create.

Constructs like these are one reason we do NOT have actual looping
functionality in the language, and it should stay that way.
2019-01-12 11:54:02 -05:00
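The construct described above might look roughly like this in mcl (a sketch; exact field names and syntax for this version are from memory and may differ):

```mcl
# one statement, two file resources with identical parameters; edges
# to or from this statement are expanded per-name automatically
file ["/tmp/mgmt/f1", "/tmp/mgmt/f2"] {
	content => "hello\n",
}
```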
James Shubin
7f1477b26d lang: Add a placeholder "ExprAny" expression for unification hacks
Instead of adding complexity to the unification engine, we can add a
fake placeholder expression that is unreachable by the AST, but used for
unification so that we can ensure a "wrap" invariant has some contents.

Ideally we'd improve the unification engine, but we'll leave that for
the future, and it's easy to revert this one commit in the future.
2019-01-12 11:45:53 -05:00
James Shubin
33b68c09d3 lang: Refactor edges helper method 2019-01-12 11:45:53 -05:00
James Shubin
7ec48ca845 lang: Refactor resource creation into a helper method 2019-01-12 11:45:53 -05:00
James Shubin
5c92cef983 docs: Add sub categories to the language guide
Hopefully this makes the longer sections easier to read.
2019-01-12 11:45:53 -05:00
James Shubin
75eba466c6 travis: Clean up my grammar
What was I thinking?
2019-01-11 04:38:12 -05:00
James Shubin
ad30737119 lang: Add meta parameter parsing to resources
Now we can actually specify metaparameters in the resources!
2019-01-11 04:13:13 -05:00
James Shubin
8e0bde3071 lang: Move capitalized res identifier into parser
This gives us more specificity when trying to match exactly.
2019-01-11 02:57:39 -05:00
James Shubin
7d641427d2 test: Fix golang cache regression
Golang decided to change the GOCACHE behaviour in newer versions of `go
test`. This changes our tests to use the new approach.

For users using a local `.envrc`, you might want to add:

GOFLAGS="-count=1"

Which is supposed to fix this problem for local tests.

More information is available in: https://github.com/golang/go/issues/29378
2019-01-10 20:41:10 -05:00
James Shubin
3b62beed26 travis: Print debug info to catch travis regressions 2019-01-10 18:23:11 -05:00
James Shubin
2d3cf68261 travis: Workaround another broken apt repo
This works around another travis NO_PUBKEY regression.
2019-01-10 18:22:45 -05:00
Vincent Membré
7d6080d13f engine: resources: exec: Use WatchShell in Exec resource when needed instead of Shell 2019-01-03 10:28:22 +01:00
James Shubin
e3eefeb3fe engine: resources: pkg: Implement the CompatibleRes interface
This signals to an interested consumer that two or more compatible
resources can be merged safely. This is so that we can avoid the
"duplicate resource" design problem that puppet had.

To test this, you can run:

./mgmt run --tmp-prefix lang --lang 'pkg "cowsay" { state => "installed", } pkg "cowsay" { state => "newest", }'

which should work.
2018-12-29 02:54:55 -05:00
James Shubin
f10dddadd6 lang: Handle merging of compatible resources properly
The duplicate resource problem that puppet had should now be correctly
solved in mgmt.
2018-12-29 02:51:09 -05:00
James Shubin
d166112917 engine: Add an interface for compatible resources
This also adds utility functions for merging and improved comparing.
2018-12-29 02:46:43 -05:00
James Shubin
8ed5c1bedf engine: Add a resource copy interface and implementation
If we want to copy an entire resource, we should use this helper method.
2018-12-29 02:42:02 -05:00
James Shubin
4489076fac engine: Add setters for the trait interfaces
Turns out it's useful to wholesale set the entire struct.
2018-12-29 01:16:38 -05:00
James Shubin
bdc33cd421 lang: Validate the edge field names in our resources
Validate these early instead of waiting for this to be caught during
output generation.
2018-12-29 00:18:10 -05:00
James Shubin
889dae2955 lang: Improve sub testing
This makes individual sub tests from the table easier to run.
2018-12-29 00:16:35 -05:00
James Shubin
9ff21b68e4 engine: resources: pkg: Simplify state check
Refactor this code.
2018-12-28 20:33:51 -05:00
James Shubin
a69a7009f8 engine: resources: pkg: Replace state strings with constants
This helps avoid typos, and gives us something we can export in the
future.
2018-12-28 20:32:23 -05:00
James Shubin
d413fac4cb engine: resources: pkg: Remove old Compare method
This was legacy code. Get rid of it.
2018-12-28 20:06:00 -05:00
James Shubin
246ecd8607 engine: resources: cron: Fix typo in error message 2018-12-28 20:00:14 -05:00
James Shubin
22105af720 lang: test: Add a test of duplicate resource generation
These two cases should be allowed in our language. This is something
that puppet got wrong, and hopefully this makes writing modules more
sane in mcl, since two modules both depending on a "cowsay" package
won't cause compile errors.

This only checks the language. The de-duplication is done there. We
don't currently have a check for this in the engine. (We should!)
2018-12-28 18:44:07 -05:00
James Shubin
880c4d2f48 lang, util: Tests that depend on the fs should be sorted
This ensures they're deterministic on any file system.
2018-12-28 18:00:08 -05:00
Jonathan Gold
443f489152 etcd: Add more test cases to TestEtcdCopyFs0 2018-12-22 04:47:49 -05:00
Jonathan Gold
39fdfdfd8c etcd: Add TestEtcdCopyFs0
This commit adds a new test to etcd/fs/fs_test.go that performs the same
actions (with some new cases) as TestFs2 and TestFs3, but allows us to
add more test cases as needed.
2018-12-22 04:46:58 -05:00
James Shubin
96dccca475 lang: Add module imports and more
This enables imports in mcl code, and is one of last remaining blockers
to using mgmt. Now we can start writing standalone modules, and adding
standard library functions as needed. There's still lots to do, but this
was a big missing piece. It was much harder to get right than I had
expected, but I think it's solid!

This unfortunately large commit is the result of some wild hacking I've
been doing for the past little while. It's the result of a rebase that
collapsed many "wip" commits that tracked my private progress into
something that's not gratuitously messy for our git logs. Since this was
a learning and discovery process for me, I've "erased" the confusing git
history that wouldn't have helped. I'm happy to discuss the dead-ends,
and a small portion of that code was even left in for possible future
use.

This patch includes:

* A change to the cli interface:
You now specify the front-end explicitly, instead of leaving it up to
the front-end to decide when to "activate". For example, instead of:

mgmt run --lang code.mcl

we now do:

mgmt run lang --lang code.mcl

We might rename the --lang flag in the future to avoid the awkward word
repetition. Suggestions welcome, but I'm considering "input". One
side-effect of this change, is that flags which are "engine" specific
now must be specified with "run" before the front-end name. Eg:

mgmt run --tmp-prefix lang --lang code.mcl

instead of putting --tmp-prefix at the end. We also changed the GAPI
slightly, but I've patched all code that used it. This also makes things
consistent with the "deploy" command.

* The deploys are more robust and let you deploy after a run
This has been vastly improved and lets mgmt really run as a smart
engine that can handle different workloads. If you don't want to deploy
when you've started with `run` or if one comes in, you can use the
--no-watch-deploy option to block new deploys.

* The import statement exists and works!
We now have a working `import` statement. Read the docs, and try it out.
I think it's quite elegant how it fits in with `SetScope`. Have a look.
As a result, we now have some built-in functions available in modules.
This also adds the metadata.yaml entry-point for all modules. Have a
look at the examples or the tests. The bulk of the patch is to support
this.

* Improved lang input parsing code:
I re-wrote the parsing that determined what ran when we passed different
things to --lang. Deciding between running an mcl file or raw code is
now handled in a more intelligent, and re-usable way. See the inputs.go
file if you want to have a look. One casualty is that you can't stream
code from stdin *directly* to the front-end, it's encapsulated into a
deploy first. You can still use stdin though! I doubt anyone will notice
this change.

* The scope was extended to include functions and classes:
Go forth and import lovely code. All these exist in scopes now, and can
be re-used!

* Function calls actually use the scope now. Glad I got this sorted out.

* There is import cycle detection for modules!
Yes, this is another dag. I think that's #4. I guess they're useful.

* A ton of tests and new test infra was added!
This should make it much easier to add new tests that run mcl code. Have
a look at TestAstFunc1 to see how to add more of these.

As usual, I'll try to keep these commits smaller in the future!
2018-12-21 06:22:12 -05:00
James Shubin
948a3c6d08 gapi: Add a bytes helper
Use bytes directly if we've got them.
2018-12-20 21:21:30 -05:00
James Shubin
dc13d5d26b util: Add some useful path parsing functions
These two are useful for looking at path prefixes and rebasing the paths
onto other paths.
2018-12-20 21:21:30 -05:00
James Shubin
aae714db6b lang: Add a top-level stmt safety method
This adds a new method to the *StmtProg that lets us determine if the
prog contains only what is necessary for a scope and nothing more. This
is useful because that is exactly what is produced when doing an import.
With this detection method, we can know if a module contains dead code
that might mislead the user into thinking it will get run when it won't.
2018-12-20 21:21:30 -05:00
James Shubin
a7c9673bcf lang: Improve empty scope and output
For some reason these were unnecessary methods on the structs, even when
those structs contained nothing useful to offer.
2018-12-20 21:21:30 -05:00
James Shubin
3d06775ddc lang: Add some lambda function parsing and tests
Part of this isn't fully implemented, but might as well get the tests
running.
2018-12-20 21:21:30 -05:00
James Shubin
48beea3884 test: Clean up and improve golang tests
This adds some consistency to the tests and properly catches difficult
scenarios in some of the lexparse tests.
2018-12-20 21:21:30 -05:00
James Shubin
958d3f6094 lang: Add beginning of user defined functions
This adds the lexer, parser and struct basics for user defined
functions. It's far from finished, but it's good to get the foundation
started.
2018-12-20 21:21:30 -05:00
James Shubin
08f24fb272 lang: Add a URL result to the import name parser
This is meant to be useful for the downloader. This will probably get
more complicated over time, but for now the goal is to have it simple
enough to work for 80% of use cases.
2018-12-20 21:21:30 -05:00
James Shubin
07d57e1a64 git: Ignore some WIP files that won't get tracked in git 2018-12-20 21:21:30 -05:00
James Shubin
cd7711bdfe gapi: Add a prefix variable in case we want to namespace on disk
This could get passed through to use as a module download path.
2018-12-20 21:21:30 -05:00
James Shubin
433ffa05a5 bindata: Add infrastructure for building core mcl files
This should prepare us so that we can build native mcl code alongside
the core *.go files which we already have. This includes a single mcl
file that is used as a placeholder so that the build doesn't fail if we
don't have any mcl files in the core/ directory. It will get ignored
automatically.
2018-12-20 21:21:30 -05:00
James Shubin
046b21b907 lang: Refactor most functions to support modules
This is a giant refactor to move functions into a hierarchical module
layout. While this isn't entirely implemented yet, it should work
correctly once all the import bits have landed. What's broken at the
moment is the template function, which currently doesn't understand the
period separator.
2018-12-20 21:21:30 -05:00
James Shubin
c32183eb70 lang: Tidy up grouping of lexer tokens in the parser
Just some small cleaning.
2018-12-20 21:21:30 -05:00
James Shubin
73b11045f2 lang: Add lexing/parsing of import statements
This adds the basic import statement, and its associated variants. It
also adds the import structure which is the result of parsing.
2018-12-20 21:21:30 -05:00
James Shubin
57ce3fa587 lang: Allow matching underscores in some of the identifiers
This allows matching underscores in some of the identifiers, but not
when they're the last character.

This caused me to suffer a bit of pain tracking down a bug which turned
out to be in the lexer. It started with a failing test that I wrote in:

974c2498c4

and which followed with a fix in:

52682f463a

Glad that's fixed!
2018-12-20 21:21:30 -05:00
James Shubin
a26620da38 lang: Add resource specific tokens in lexer and parser
This adds some custom tokens for the lexer and parser so that resources
can have colons in their names.
2018-12-20 21:21:30 -05:00
James Shubin
86b8099eb9 lang: Add import spec parsing and tests
This adds parsing of the upcoming "import" statement contents. It is the
logic which determines how an import statement is read in the language.
Hopefully it won't need any changes or additional magic additions.
2018-12-20 21:21:30 -05:00
James Shubin
c8e9a100a6 lang: Support lexing and parsing a list of files with offsets
This adds a LexParseWithOffsets method that also takes a list of offsets
to be used if our input stream is composed of multiple io.Readers
combined together.

At the moment the offsets are based on line count instead of file size.
I think the latter would be preferable, but it seems it's much more
difficult to implement as it probably requires support in the lexer and
parser. That improved solution would probably be faster, and more
correct in case someone passed in a file without a trailing newline.
2018-12-20 21:21:30 -05:00
James Shubin
a287f028d1 lang: Detect sub tests with the same name
This detects identically named tests and fails the test in such a
scenario to prevent confusion.
2018-12-20 21:21:30 -05:00
James Shubin
cf50fb3568 lang: Allow dotted identifiers
This adds support for dotted identifiers in include statements, var
expressions and function call expressions. The dotted identifiers are
used to refer to classes, bind statements, and function definitions
(respectively) that are included in the scope by import statements.
2018-12-20 21:21:30 -05:00
James Shubin
4c8193876f util: Add a UInt64Slice and associated sorting functionality.
This adds an easily sorted slice of uint64s and associated functionality
to sort a list of strings by their associated order in a map indexed by
uint64's.
2018-12-20 21:21:30 -05:00
James Shubin
158bc1eb2a lang: Add an Apply iterator to the Stmt and Expr API
This adds a new interface Node which must implement the Apply method.
This method traverse the entire AST and applies a function to each node.
Both Stmt and Expr must implement this.
2018-12-20 21:21:30 -05:00
James Shubin
3f42e5f702 lang: Add logging and debug info via a new Init method
This expands the Stmt and Expr interfaces to add an Init method. This
is used to pass in Debug and Logf values, but is also used to validate
the AST. This gets rid of standalone use of the "log" package.
2018-12-20 21:21:30 -05:00
Tom Payne
75633817a7 etcd: Ensure that fs.Fs implements afero.Fs 2018-12-20 21:19:55 -05:00
Tom Payne
83b00fce3e etcd: Add Lchown (returns ErrNotImplemented) 2018-12-20 21:19:55 -05:00
Tom Payne
38befb53ad etcd: Add Chown (returns ErrNotImplemented) 2018-12-20 21:19:55 -05:00
Kevin Kuehler
d0b5c4de68 util: Patch CopyFs and add tests
Fix CopyFs bug that resulted in a flattened destination directory.
The added tests catch this bug, and ensure the data is in fact copied
to the destination directory.
2018-12-20 12:15:06 -08:00
James Shubin
1b68845b00 test: Fix up token vet test
I forgot some of the cases to catch earlier.
2018-12-19 22:24:20 -05:00
James Shubin
a7bc72540d util: Fix small linting error
Woops!
2018-12-19 12:29:44 -05:00
James Shubin
27ac7481f9 test: Increase the vet testing for irregular strings
Catch some inconsistent comments to keep things neat. Hey, anything we
can automate, we do :)
2018-12-19 06:52:23 -05:00
James Shubin
9bc36be513 util: Add a test for CopyFs
This adds a test case for the standalone CopyFs function, and an easy to
use test case infra.
2018-12-19 06:51:05 -05:00
James Shubin
e62e35bc88 util: Improve the test helper function and add a better one
This should help us write tests that use unique physical directories
inside the directory tree.
2018-12-19 06:10:48 -05:00
James Shubin
bd80ced9b2 util: Add an fs helper and a test helper 2018-12-17 12:10:09 -05:00
Jonathan Gold
bb2f2e5e54 util: Add PathSlice type that satisfies sort.Interface
This commit adds a []string{} type alias named PathSlice, and the
Len(), Swap(), and Less() methods required to satisfy sort.Interface.
Now you can do `sort.Sort(util.PathSlice(foo))` where foo is a slice
of paths. It will be sorted by depth in alphabetical order.
2018-12-17 01:14:54 -05:00
James Shubin
b1eb6711b7 engine: resources: Work around a subtle embedded res bug
This is a subtle issue that was found to cause a panic. This should
solve things for now, but it would be wise to build embedded or
composite resources sparingly until we're certain this will work the
way we want for all scenarios.
2018-12-16 16:07:42 -05:00
Jonathan Gold
da0ffa5e56 engine: resources: cron: Add auto edges from SvcRes 2018-12-16 15:12:58 -05:00
Felix Frank
68ef312233 gitignore: Ignore vim swap files 2018-12-16 13:41:21 -05:00
Felix Frank
9fefadca24 docs: Explain the langpuppet interface and function 2018-12-16 13:35:47 -05:00
James Shubin
e14b14b88c engine: resources: svc: Add symmetric closing
This improves some of the closing in the svc resource. This still needs
lots of improvements, and it's sort of terrible because it was some of
the earliest code written.
2018-12-16 08:27:26 -05:00
James Shubin
d5bfb7257e engine: resources: file: Require paths to be absolute
This is a requirement of our file resource, so we should validate this
and clearly express it in the documentation.
2018-12-16 07:24:07 -05:00
Jonathan Gold
8282f3b59c engine: resources: cron: Add lang examples 2018-12-15 11:01:05 -05:00
Jonathan Gold
dbf0c84f0b engine: resources: cron: Add support for user session timers 2018-12-15 10:47:35 -05:00
Jonathan Gold
a5977b993a engine: util: Add EdgeCombiner() for combining auto edges 2018-12-15 10:47:35 -05:00
Jonathan Gold
27df3ae876 engine: resources: cron: Add a systemd-timer resource 2018-12-15 10:47:35 -05:00
Felix Frank
a49d07cf01 gapi: langpuppet: Add initial implementation
This new entrypoint allows graph generation from both a Puppet manifest
and a piece of mcl code. The GAPI implementation wraps the two existing
GAPIs.
2018-12-15 03:43:15 +01:00
Jonathan Gold
28f343ac50 engine: resources: svc: Use dbus session bus for user session svc
This patch adds a util function, SessionBusUsable, that makes and returns
a new usable dbus session bus. If the svc resource's session bool is true,
the resource will use a bus created with that function.
2018-12-14 00:16:21 -05:00
Jonathan Gold
4297a39d03 engine: resources: group: Make group edgeable
This adds the edgeable trait to the group resource and adds an
AutoEdges method which returns nil, nil. These changes are necessary
to allow UserRes to make autoedges to GroupRes.
2018-12-13 23:01:42 -05:00
Jonathan Gold
bd996e441c etcd: Use mgmt backend for fs tests 2018-12-11 18:11:45 -05:00
Jonathan Gold
086a89fad6 etcd: Use source filepath base in CopyFs destination path
This patch corrects the destination path in CopyFs to use the source's
base filepath, instead of the entire source path. Now copying /foo/bar
to /baz results in /baz/bar instead of /baz/foo/bar. This commit also adds
a test to verify this behaviour.
2018-12-11 02:20:11 -05:00
Michael Lesko-Krleza
70ac38e66c test: Increase test coverage for graphsync
This patch is an addition to graphsync_test.go, which increases the test
coverage from 72.4% to 72.9%.
2018-12-11 02:02:33 -05:00
James Shubin
d990d2ad86 travis: Bump to golang 1.10
This requires breaking changes in gofmt. It is hilarious that this was
changed. Oh well. This also moves to the latest stable etcd. Lastly,
this changes the `go vet` testing to test by package, since the new go
vet changed how it works and now fails without this change.
2018-12-11 01:46:17 -05:00
Jonathan Gold
56db31ca43 engine: resources: file: Add shell test for source field 2018-12-10 22:08:59 -05:00
Jonathan Gold
b902e2d30b engine: resources: file: Fix bug preventing use of source field
This patch fixes a previously undiscovered bug which prevented
the use of the source field in the file resource. CheckApply was
returning early if obj.Content was nil. It is also necessary to
check that obj.Source is empty before returning, otherwise
syncCheckApply never runs.
2018-12-10 22:08:59 -05:00
Jonathan Gold
d2bab32b0e engine: resources: packagekit: Fix dbus addmatch rule
I broke packagekit with commit 299080f5 due to a missing equals sign
in the DBus AddMatch rule. This commit adds the necessary equals sign.
2018-12-09 11:12:48 -05:00
Jonathan Gold
b2d726051b travis: Build on Xenial
Builds were failing on Trusty due to broken GPG keys, and upgrading
the build environment to Xenial Xerus solves the problem.
2018-12-04 20:27:47 -05:00
Jonathan Gold
8e25667f87 engine: resources: net: test: Add shell test for net resource
This patch adds a shell test for net, which creates a dummy interface
and runs mgmt to bring it up and assign it with an address. It then
checks if the state was applied correctly. Finally, it runs mgmt again
to bring the interface down, and tests that it comes down and stays
down.
2018-12-04 17:12:57 -05:00
Jonathan Gold
9b5c4c50e7 engine: resources: net: Allow addr without gateway
In some scenarios it is desirable to set the addrs and gateway
independently, i.e. if a default gateway is already set on
the machine. This patch removes the requirement to set them
together.
2018-12-04 17:12:57 -05:00
Jonathan Gold
d2ce70a673 puppet: Fix error message when puppet conf copy fails
This commit adds the missing config file location to the error
message.
2018-12-04 16:58:40 -05:00
Felix Frank
9db0fc4ee4 make: Speed up the build by skipping gem docs
By default, the Ruby gems generate documentation in two distinct formats
during installation. By passing --no-ri and --no-rdoc, gem is instructed
to skip this step for both formats.

If the user needs documentation for any of the gems after all, they can
manually generate the docs themselves.
2018-12-04 16:56:23 -05:00
Felix Frank
9ed830bb81 make: Remove spurious dependency package 'rubygems' for Debian-like systems
On Ubuntu, the apt-get install call to ruby, ruby-devel, and rubygems will
fail because there is no "rubygems" package in Ubuntu.

In Debian, this package is virtual only. In both cases, the ruby package
is sufficient. (See also https://packages.debian.org/jessie/rubygems)
2018-12-04 16:55:46 -05:00
James Shubin
4e42d9ed03 travis: Work around broken travis NO_PUBKEY error
W: GPG error: https://packagecloud.io/rabbitmq/rabbitmq-server/ubuntu trusty InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY F6609E60DC62814E
E: The repository 'https://packagecloud.io/rabbitmq/rabbitmq-server/ubuntu trusty InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
2018-12-04 16:52:02 -05:00
James Shubin
4c93bc3599 test: Add doc note about skipping docker tests
This is useful if you don't have docker running, since otherwise it
causes all the tests to fail.
2018-12-03 23:55:20 -05:00
Jonathan Gold
7c817802a8 engine: resources: net: test: Add some go tests
This patch adds go tests for NetRes.unitFileContents(), socketSet.fdSet(),
and socketSet.nfd(), in the net resource.
2018-12-03 23:42:42 -05:00
Jonathan Gold
de90b592fb lang: Fix error message format strings
This commit replaces %s with %d in two error messages, where the
argument is an integer, not a string.
2018-12-03 19:27:35 -05:00
Jonathan Gold
b9d0cc2e28 etcd: Fix deploy transaction error message
This commit removes an unused argument from the error format string.
2018-12-03 19:26:18 -05:00
269 changed files with 13269 additions and 1621 deletions

.gitignore

@@ -5,6 +5,7 @@
.envrc
old/
tmp/
*WIP
*_stringer.go
bindata/*.go
mgmt
@@ -14,3 +15,5 @@ build/mgmt-*
mgmt.iml
rpmbuild/
releases/
# vim swap files
.*.sw[op]


@@ -2,16 +2,21 @@ language: go
os:
- linux
go:
- 1.9.x
- 1.10.x
- 1.11.x
- tip
go_import_path: github.com/purpleidea/mgmt
sudo: true
dist: trusty
dist: xenial
# travis requires that you update manually, and provides this key to trigger it
apt:
update: true
before_install:
# print some debug information to help catch the constant travis regressions
- if [ -e /etc/apt/sources.list.d/ ]; then sudo ls -l /etc/apt/sources.list.d/; fi
# workaround broken travis NO_PUBKEY errors
- if [ -e /etc/apt/sources.list.d/rabbitmq_rabbitmq-server.list ]; then sudo rm -f /etc/apt/sources.list.d/rabbitmq_rabbitmq-server.list; fi
- if [ -e /etc/apt/sources.list.d/github_git-lfs.list ]; then sudo rm -f /etc/apt/sources.list.d/github_git-lfs.list; fi
# as per a number of comments online, this might mitigate some flaky fails...
- if [[ "$TRAVIS_OS_NAME" != "osx" ]]; then sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6; fi
# apt update tends to be flaky in travis, retry up to 3 times on failure
@@ -24,13 +29,13 @@ script: 'make test'
matrix:
fast_finish: false
allow_failures:
- go: 1.10.x
- go: 1.11.x
- go: tip
- os: osx
# include only one build for osx for a quicker build as the nr. of these runners are sparse
include:
- os: osx
go: 1.9.x
go: 1.10.x
# the "secure" channel value is the result of running: ./misc/travis-encrypt.sh
# with a value of: irc.freenode.net#mgmtconfig to eliminate noise from forks...


@@ -119,6 +119,7 @@ race:
bindata: ## generate go files from non-go sources
@echo "Generating: bindata..."
$(MAKE) --quiet -C bindata
$(MAKE) --quiet -C lang/funcs
generate:
go generate
@@ -163,6 +164,7 @@ crossbuild: ${crossbuild_targets}
clean: ## clean things up
$(MAKE) --quiet -C bindata clean
$(MAKE) --quiet -C lang/funcs clean
$(MAKE) --quiet -C lang clean
[ ! -e $(PROGRAM) ] || rm $(PROGRAM)
rm -f *_stringer.go # generated by `go generate`


@@ -16,9 +16,12 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# The bindata target generates go files from any source defined below. To use
# the files, import the "bindata" package and use:
# the files, import the generated "bindata" package and use:
# `bytes, err := bindata.Asset("FILEPATH")`
# where FILEPATH is the path of the original input file relative to `bindata/`.
# To get a list of files stored in this "bindata" package, you can use:
# `paths := bindata.AssetNames()` and `paths, err := bindata.AssetDir(name)`
# to get a list of files with a directory prefix.
.PHONY: build clean
default: build
@@ -34,5 +37,5 @@ bindata.go: ../COPYING
@ROOT=$$(dirname "$${BASH_SOURCE}")/.. && $$ROOT/misc/header.sh '$@'
clean:
# remove generated bindata/*.go
@ROOT=$$(dirname "$${BASH_SOURCE}")/.. && rm -f *.go
# remove generated bindata.go
@ROOT=$$(dirname "$${BASH_SOURCE}")/.. && rm -f bindata.go


@@ -137,15 +137,15 @@ Invoke `mgmt` with the `--puppet` switch, which supports 3 variants:
1. Request the configuration from the Puppet Master (like `puppet agent` does)
`mgmt run --puppet agent`
`mgmt run puppet --puppet agent`
2. Compile a local manifest file (like `puppet apply`)
`mgmt run --puppet /path/to/my/manifest.pp`
`mgmt run puppet --puppet /path/to/my/manifest.pp`
3. Compile an ad hoc manifest from the commandline (like `puppet apply -e`)
`mgmt run --puppet 'file { "/etc/ntp.conf": ensure => file }'`
`mgmt run puppet --puppet 'file { "/etc/ntp.conf": ensure => file }'`
For more details and caveats see [Puppet.md](Puppet.md).
@@ -164,6 +164,7 @@ If you feel that a well used option needs documenting here, please patch it!
### Overview of reference
* [Meta parameters](#meta-parameters): List of available resource meta parameters.
* [Lang metadata file](#lang-metadata-file): Lang metadata file format.
* [Graph definition file](#graph-definition-file): Main graph definition file.
* [Command line](#command-line): Command line parameters.
* [Compilation options](#compilation-options): Compilation options.
@@ -249,11 +250,48 @@ integer, then that value is the max size for that semaphore. Valid semaphore
id's include: `some_id`, `hello:42`, `not:smart:4` and `:13`. It is expected
that the last bare example be only used by the engine to add a global semaphore.
### Lang metadata file
Any module *must* have a metadata file in its root. It must be named
`metadata.yaml`, even if it's empty. You can specify zero or more values in yaml
format which can change how your module behaves, and where the `mcl` language
looks for code and other files. The most important top level keys are: `main`,
`path`, `files`, and `license`.
#### Main
The `main` key points to the default entry point of your code. It must be a
relative path if specified. If it's empty, it defaults to `main.mcl`. It should
generally not be changed. It is sometimes set to `main/main.mcl` if you'd like
to keep your module's code out of the root and in a child directory, for cases
where you don't plan on having many deeper imports relative to `main.mcl` and
all those files would clutter things up.
#### Path
The `path` key specifies the module's import search directory to use for this
module. You can specify this if you'd like to vendor something for your module.
In general, if you use it, please use the convention: `path/`. If it's not
specified, it defaults to the parent module's directory.
#### Files
The `files` key specifies some additional files that will get included in your
deploy. It defaults to `files/`.
#### License
The `license` key allows you to specify a license for the module. Please specify
one so that everyone can enjoy your code! Use a "short license identifier", like
`LGPLv3+`, or `MIT`. The former is a safe choice if you're not sure what to use.
### Graph definition file
graph.yaml is the compiled graph definition file. The format is currently
-undocumented, but by looking through the [examples/](https://github.com/purpleidea/mgmt/tree/master/examples)
-you can probably figure out most of it, as it's fairly intuitive.
+undocumented, but by looking through the [examples/](https://github.com/purpleidea/mgmt/tree/master/examples/yaml/)
+you can probably figure out most of it, as it's fairly intuitive. It's not
+recommended that you use this, since it's preferable to write code in the
+[mcl language](language-guide.md) front-end.
### Command line


@@ -57,6 +57,8 @@ hacking!
### Is this project ready for production?
It's getting pretty close. I'm able to write modules for it now!
Compared to some existing automation tools out there, mgmt is a relatively new
project. It is probably not as feature complete as some other software, but it
also offers a number of features which are not currently available elsewhere.
@@ -146,7 +148,7 @@ requires a number of seconds as an argument.
#### Example:
```
-./mgmt run --lang examples/lang/hello0.mcl --converged-timeout=5
+./mgmt run lang --lang examples/lang/hello0.mcl --converged-timeout=5
```
### What does the error message about an inconsistent dataDir mean?
@@ -167,14 +169,15 @@ starting up, and as a result, a default endpoint never gets added. The solution
is to either reconcile the mistake, or, if there is no important data saved,
remove the etcd dataDir. This is typically `/var/lib/mgmt/etcd/member/`.
-### Why do resources have both a `Compare` method and an `IFF` (on the UID) method?
+### Why do resources have both a `Cmp` method and an `IFF` (on the UID) method?
-The `Compare()` methods are for determining if two resources are effectively the
+The `Cmp()` methods are for determining if two resources are effectively the
same, which is used to make graph change deltas efficient. This is when we want
to change from the current running graph to a new graph, but preserve the common
vertices. Since we want to make this process efficient, we only update the parts
-that are different, and leave everything else alone. This `Compare()` method can
-tell us if two resources are the same.
+that are different, and leave everything else alone. This `Cmp()` method can
+tell us if two resources are the same. In case it is not obvious, `cmp` is an
+abbreviation for compare.
The `IFF()` method is part of the whole UID system, which is for discerning if a
resource meets the requirements another expects for an automatic edge. This is
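The role a `Cmp()` method plays in graph deltas can be sketched with a minimal, self-contained example; the `fileRes` type and its fields here are hypothetical, not the engine's actual resource implementation.

```go
package main

import (
	"fmt"
)

// fileRes is a hypothetical stand-in for a resource with a Cmp method.
type fileRes struct {
	path    string
	content string
}

// Cmp returns nil if the two resources are effectively the same.
func (obj *fileRes) Cmp(r *fileRes) error {
	if obj.path != r.path {
		return fmt.Errorf("path differs")
	}
	if obj.content != r.content {
		return fmt.Errorf("content differs")
	}
	return nil
}

func main() {
	// During a graph swap, vertices that compare equal are preserved
	// instead of being stopped and restarted.
	running := &fileRes{path: "/tmp/f1", content: "hello"}
	next := &fileRes{path: "/tmp/f1", content: "hello"}
	if err := running.Cmp(next); err == nil {
		fmt.Println("same: keep the running vertex")
	} else {
		fmt.Println("different: replace it:", err)
	}
}
```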


@@ -124,16 +124,15 @@ An example explains it best:
### Example
```golang
package simplepoly
import (
"fmt"
"github.com/purpleidea/mgmt/lang/types"
"github.com/purpleidea/mgmt/lang/funcs/simplepoly"
)
func init() {
-Register("len", []*types.FuncValue{
+simplepoly.Register("len", []*types.FuncValue{
{
T: types.NewType("func([]variant) int"),
V: Len,
@@ -343,11 +342,21 @@ also ensures they can be encoded and decoded. Make sure to include the following
code snippet for this to work.
```golang
import "github.com/purpleidea/mgmt/lang/funcs"
func init() { // special golang method that runs once
funcs.Register("foo", func() interfaces.Func { return &FooFunc{} })
}
```
Functions inside of built-in modules will need to use the `ModuleRegister`
method instead.
```golang
// moduleName is already set to "math" by the math package. Do this in `init`.
funcs.ModuleRegister(moduleName, "cos", func() interfaces.Func { return &CosFunc{} })
```
### Composite functions
Composite functions are functions which import one or more existing functions.


@@ -140,6 +140,31 @@ expression
include bar("world", 13) # an include can be called multiple times
```
- **import**: import a particular scope from this location at a given namespace
```mcl
# a system module import
import "fmt"
# a local, single file import (relative path, not a module)
import "dir1/file.mcl"
# a local, module import (relative path, contents are a module)
import "dir2/"
# a remote module import (absolute remote path, contents are a module)
import "git://github.com/purpleidea/mgmt-example1/"
```
or
```mcl
import "fmt" as * # contents namespaced into top-level names
import "foo.mcl" # namespaced as foo
import "dir1/" as bar # namespaced as bar
import "git://github.com/purpleidea/mgmt-example1/" # namespaced as example1
```
All statements produce _output_. Output consists of zero or more `edges` and
`resources`. A resource statement can produce a resource, whereas an
`if` statement produces whatever the chosen branch produces. Ultimately the goal
@@ -165,6 +190,8 @@ resource to control how it behaves. For example, setting the `content` parameter
of a `file` resource to the string `hello`, will cause the contents of that file
to contain the string `hello` after it has run.
##### Undefined parameters
For some parameters, there is a distinction between an unspecified parameter,
and a parameter with a `zero` value. For example, for the file resource, you
might choose to set the `content` parameter to be the empty string, which would
@@ -189,6 +216,75 @@ it evaluates to `true`, then the parameter will be used. If no `elvis` operator
is specified, then the parameter value will also be used. If the parameter is
not specified, then it will obviously not be used.
##### Meta parameters
Resources may specify meta parameters. To do so, you must add them as you would
a regular parameter, except that they start with `Meta` and are capitalized. Eg:
```mcl
file "/tmp/f1" {
content => "hello!\n",
Meta:noop => true,
Meta:delay => $b ?: 42,
Meta:autoedge => false,
}
```
As you can see, they also support the elvis operator, and you can add as many as
you like. While it is not recommended to add the same meta parameter more than
once, it does not currently cause an error, and even though the result of doing
so is officially undefined, it will currently take the last specified value.
You may also specify a single meta parameter struct. This is useful if you'd
like to reuse a value, or build a combined value programmatically. For example:
```mcl
file "/tmp/f1" {
content => "hello!\n",
Meta => $b ?: struct{
noop => false,
retry => -1,
delay => 0,
poll => 5,
limit => 4.2,
burst => 3,
sema => ["foo:1", "bar:3",],
autoedge => true,
autogroup => false,
},
}
```
Remember that the top-level `Meta` field supports the elvis operator, while the
individual struct fields in the struct type do not. This is to be expected, but
since they are syntactically similar, it is worth mentioning to avoid confusion.
Please note that at the moment, you must specify a full metaparams struct, since
partial struct types are currently not supported in the language. Patches are
welcome if you'd like to add this tricky feature!
##### Resource naming
Each resource must have a unique name of type `str` that is used to uniquely
identify that resource, and which can be used in the functioning of the
resource at that resource's discretion. For example, the `file` resource uses
the unique name value to specify the path.
Alternatively, the name value may be a list of strings `[]str` to build a list
of resources, each with a name from that list. When this is done, each resource
will use the same set of parameters. The list of internal edges specified in the
same resource block is created intelligently to have the appropriate edge for
each separate resource.
Using this construct is a veiled form of looping (iteration). This technique is
one of many ways you can perform iterative tasks that you might have
traditionally used a `for` loop for instead. This is preferred, because flow
control is error-prone and can make for less readable code.
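The list-of-names expansion described above can be sketched in ordinary code; this is a hypothetical model of the behaviour, not the compiler's actual data structures.

```go
package main

import "fmt"

// res models a compiled resource: one kind, one unique name.
type res struct {
	kind string
	name string
}

// expand models how a single resource block with a []str name produces
// one resource per name, all sharing the same parameters.
func expand(kind string, names []string) []res {
	out := []res{}
	for _, n := range names {
		out = append(out, res{kind: kind, name: n})
	}
	return out
}

func main() {
	// file ["/tmp/a", "/tmp/b",] { ... } becomes two file resources.
	for _, r := range expand("file", []string{"/tmp/a", "/tmp/b"}) {
		fmt.Printf("%s[%s]\n", r.kind, r.name)
	}
}
```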
##### Internal edges
Resources may also declare edges internally. The edges may point to or from
another resource, and may optionally include a notification. The four properties
are: `Before`, `Depend`, `Notify` and `Listen`. The first two represent normal
@@ -285,11 +381,12 @@ class baz($a str, $b) {
Classes can also be nested within other classes. Here's a contrived example:
```mcl
import "fmt"
class c1($a, $b) {
# nested class definition
class c2($c) {
test $a {
-stringptr => printf("%s is %d", $b, $c),
+stringptr => fmt.printf("%s is %d", $b, $c),
}
}
@@ -317,6 +414,45 @@ parameters, then the same class can even be called with different signatures.
Whether the output is useful and whether there is a unique type unification
solution is dependent on your code.
#### Import
The `import` statement imports a scope into the specified namespace. A scope can
contain variable, class, and function definitions. All are statements.
Furthermore, since each of these has different logical uses, you could
theoretically import a scope that contains an `int` variable named `foo`, a
class named `foo`, and a function named `foo` as well. Keep in mind that
variables can contain functions (they can have a type of function) and are
commonly called lambdas.
There are a few different kinds of imports. They differ by the string contents
that you specify. Single-word tokens, or multi-word tokens separated by
slashes, are system imports. Eg: `math`, `fmt`, or even `math/trig`.
Local imports are path imports that are relative to the current directory. They
can either import a single `mcl` file, or an entire well-formed module. Eg:
`file1.mcl` or `dir1/`. Lastly, you can have a remote import. This must be an
absolute path to a well-formed module. The common transport is `git`, and it can
be represented via an FQDN. Eg: `git://github.com/purpleidea/mgmt-example1/`.
The namespace that any of these are imported into depends on how you use the
import statement. By default, each kind of import will have a logical namespace
identifier associated with it. System imports use the last token in their name.
Eg: `fmt` would be imported as `fmt` and `math/trig` would be imported as
`trig`. Local imports do the same, except that the required `.mcl` extension or
trailing slash is removed. Eg: `foo/file1.mcl` would be imported as `file1` and
`bar/baz/` would be imported as `baz`. Remote imports use some more complex
rules. In general, well-named modules that contain a final directory name in the
form: `mgmt-whatever/` will be named `whatever`. Otherwise, the last path token
will be converted to lowercase and the dashes will be converted to underscores.
The rules for remote imports might change, and should not be considered stable.
In any of the import cases, you can change the namespace that you're imported
into. Simply add the `as whatever` text at the end of the import, and `whatever`
will be the name of the namespace. Please note that `whatever` is not surrounded
by quotes, since it is an identifier, and not a `string`. If you'd like to add
all of the import contents into the top-level scope, you can use the `as *` text
to dump all of the contents in. This is generally not recommended, as it might
cause a conflict with another identifier.
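The default-namespace rules above can be sketched as a small function. This is a simplified model of the rules as described, not mgmt's actual import resolver, and the helper name is hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// defaultNamespace models the naming rules described above; it is a
// simplified sketch, not mgmt's actual import resolver.
func defaultNamespace(imp string) string {
	s := strings.TrimPrefix(imp, "git://")
	s = strings.TrimSuffix(s, "/") // drop the trailing slash of modules
	if i := strings.LastIndex(s, "/"); i >= 0 {
		s = s[i+1:] // keep the last path token
	}
	s = strings.TrimSuffix(s, ".mcl") // local single-file imports
	s = strings.ToLower(s)
	s = strings.TrimPrefix(s, "mgmt-") // well-named remote modules
	return strings.Replace(s, "-", "_", -1)
}

func main() {
	fmt.Println(defaultNamespace("math/trig"))     // trig
	fmt.Println(defaultNamespace("foo/file1.mcl")) // file1
	fmt.Println(defaultNamespace("bar/baz/"))      // baz
	fmt.Println(defaultNamespace("git://github.com/purpleidea/mgmt-example1/")) // example1
}
```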
### Stages
The mgmt compiler runs in a number of stages. In order of execution they are:


@@ -143,7 +143,7 @@ you to specify which `puppet.conf` file should be used during
translation.
```
-mgmt run --puppet /opt/my-manifest.pp --puppet-conf /etc/mgmt/puppet.conf
+mgmt run puppet --puppet /opt/my-manifest.pp --puppet-conf /etc/mgmt/puppet.conf
```
Within this file, you can just specify any needed options in the
@@ -164,3 +164,152 @@ language features.
You should probably make sure to always use the latest release of
both `ffrank-mgmtgraph` and `ffrank-yamlresource` (the latter is
getting pulled in as a dependency of the former).
## Using Puppet in conjunction with the mcl lang
The graph that Puppet generates for `mgmt` can be united with a graph
that is created from native `mgmt` code in its mcl language. This is
useful when you are in the process of replacing Puppet with mgmt. You
can translate your custom modules into mgmt's language one by one,
and let mgmt run the current mix.
Instead of the usual `--puppet`, `--puppet-conf`, and `--lang` for mcl,
you need to use alternative flags to make this work:
* `--lp-lang` to specify the mcl input
* `--lp-puppet` to specify the puppet input
* `--lp-puppet-conf` to point to the optional puppet.conf file
`mgmt` will derive a graph that contains all edges and vertices from
both inputs. You essentially get two unrelated subgraphs that run in
parallel. To form edges between these subgraphs, you have to define
special vertices that will be merged. This works through a hard-coded
naming scheme.
### Mixed graph example 1 - No merges
```mcl
# lang
file "/tmp/mgmt_dir/" { state => "present" }
file "/tmp/mgmt_dir/a" { state => "present" }
```
```puppet
# puppet
file { "/tmp/puppet_dir": ensure => "directory" }
file { "/tmp/puppet_dir/a": ensure => "file" }
```
These very simple inputs (including implicit edges from directory to
respective file) result in two subgraphs that do not relate.
```
File[/tmp/mgmt_dir/] -> File[/tmp/mgmt_dir/a]
File[/tmp/puppet_dir] -> File[/tmp/puppet_dir/a]
```
### Mixed graph example 2 - Merged vertex
In order to have merged vertices in the resulting graph, you will
need to include special resources and classes in the respective
input code.
* On the lang side, add `noop` resources with names starting in `puppet_`.
* On the Puppet side, add **empty** classes with names starting in `mgmt_`.
```mcl
# lang
noop "puppet_handover_to_mgmt" {}
file "/tmp/mgmt_dir/" { state => "present" }
file "/tmp/mgmt_dir/a" { state => "present" }
Noop["puppet_handover_to_mgmt"] -> File["/tmp/mgmt_dir/"]
```
```puppet
# puppet
class mgmt_handover_to_mgmt {}
include mgmt_handover_to_mgmt
file { "/tmp/puppet_dir": ensure => "directory" }
file { "/tmp/puppet_dir/a": ensure => "file" }
File["/tmp/puppet_dir/a"] -> Class["mgmt_handover_to_mgmt"]
```
The new `noop` resource is merged with the new class, resulting in
the following graph:
```
File[/tmp/puppet_dir] -> File[/tmp/puppet_dir/a]
|
V
Noop[handover_to_mgmt]
|
V
File[/tmp/mgmt_dir/] -> File[/tmp/mgmt_dir/a]
```
You put all your ducks in a row, and the resources from the Puppet input
run before those from the mcl input.
**Note:** The names of the `noop` and the class must be identical after the
respective prefix. The common part (here, `handover_to_mgmt`) becomes the name
of the merged resource.
### Mixed graph example 3 - Multiple merges
In most scenarios, it will not be possible to define a single handover
point like in the previous example. For example, if some Puppet resources
need to run in between two stages of native resources, you need at least
two merged vertices:
```mcl
# lang
noop "puppet_handover" {}
noop "puppet_handback" {}
file "/tmp/mgmt_dir/" { state => "present" }
file "/tmp/mgmt_dir/a" { state => "present" }
file "/tmp/mgmt_dir/puppet_subtree/state-file" { state => "present" }
File["/tmp/mgmt_dir/"] -> Noop["puppet_handover"]
Noop["puppet_handback"] -> File["/tmp/mgmt_dir/puppet_subtree/state-file"]
```
```puppet
# puppet
class mgmt_handover {}
class mgmt_handback {}
include mgmt_handover, mgmt_handback
class important_stuff {
file { "/tmp/mgmt_dir/puppet_subtree":
ensure => "directory"
}
# ...
}
Class["mgmt_handover"] -> Class["important_stuff"] -> Class["mgmt_handback"]
```
The resulting graph looks roughly like this:
```
File[/tmp/mgmt_dir/] -> File[/tmp/mgmt_dir/a]
|
V
Noop[handover] -> ( class important_stuff resources )
|
V
Noop[handback]
|
V
File[/tmp/mgmt_dir/puppet_subtree/state-file]
```
You can add arbitrary numbers of merge pairs to your code bases,
with relationships as needed. From our limited experience, however,
code readability suffers quite a lot from these. We advise keeping
these structures simple.


@@ -13,7 +13,7 @@ Once you're familiar with the general idea, please start hacking...
### Installing golang
-* You need golang version 1.9 or greater installed.
+* You need golang version 1.10 or greater installed.
* To install on rpm style systems: `sudo dnf install golang`
* To install on apt style systems: `sudo apt install golang`
* To install on macOS systems install [Homebrew](https://brew.sh)
@@ -57,8 +57,8 @@ export PATH=$PATH:$GOPATH/bin
### Running mgmt
-* Run `time ./mgmt run --lang examples/lang/hello0.mcl --tmp-prefix` to try out
-a very simple example!
+* Run `time ./mgmt run --tmp-prefix lang --lang examples/lang/hello0.mcl` to try
+out a very simple example!
* Look in that example file that you ran to see if you can figure out what it
did!
* Have fun hacking on our future technology and get involved to shape the
@@ -89,7 +89,7 @@ required for running the _test_ suite.
### Build
-* `golang` 1.9 or higher (required, available in some distros and distributed
+* `golang` 1.10 or higher (required, available in some distros and distributed
as a binary officially by [golang.org](https://golang.org/dl/))
### Runtime
@@ -181,5 +181,5 @@ Other examples:
```
docker/scripts/exec-development make build
-docker/scripts/exec-development ./mgmt run --tmp-prefix --lang examples/lang/load0.mcl
+docker/scripts/exec-development ./mgmt run --tmp-prefix lang --lang examples/lang/load0.mcl
```


@@ -68,7 +68,7 @@ identified by a trailing slash in their path name. Files have no such slash.
It has the following properties:
-* `path`: file path (directories have a trailing slash here)
+* `path`: absolute file path (directories have a trailing slash here)
* `content`: raw file content
* `state`: either `exists` (the default value) or `absent`
* `mode`: octal unix file permissions


@@ -1,22 +1,28 @@
# Style guide
## Overview
This document aims to be a reference for the desired style for patches to mgmt,
and the associated `mcl` language. In particular it describes conventions which
are not officially enforced by tools and in test cases, or that aren't clearly
defined elsewhere. We try to turn as many of these into automated tests as we
can. If something here is not defined in a test, or you think it should be,
please write one! Even better, you can write a tool to automatically fix it,
since this is more useful and can easily be turned into a test!
This document aims to be a reference for the desired style for patches to mgmt.
In particular it describes conventions which we use which are not officially
enforced by the `gofmt` tool, and which might not be clearly defined elsewhere.
Most of these are common sense to seasoned programmers, and we hope this will be
a useful reference for new programmers.
## Overview for golang code
Most style issues are enforced by the `gofmt` tool. Other style aspects are
often common sense to seasoned programmers, and we hope this will be a useful
reference for new programmers.
There are a lot of useful code review comments described
[here](https://github.com/golang/go/wiki/CodeReviewComments). We don't
necessarily follow everything strictly, but it is in general a very good guide.
-## Basics
+### Basics
* All of our golang code is formatted with `gofmt`.
-## Comments
+### Comments
All of our code is commented with the minimums required for `godoc` to function,
and so that our comments pass `golint`. Code comments should either be full
@@ -28,7 +34,7 @@ They should explain algorithms, describe non-obvious behaviour, or situations
which would otherwise need explanation or additional research during a code
review. Notes about use of unfamiliar API's is a good idea for a code comment.
-### Example
+#### Example
Here you can see a function with the correct `godoc` string. The first word must
match the name of the function. It is _not_ capitalized because the function is
@@ -41,7 +47,7 @@ func square(x int) int {
}
```
-## Line length
+### Line length
In general we try to stick to 80 character lines when it is appropriate. It is
almost *always* appropriate for function `godoc` comments and most longer
@@ -55,7 +61,7 @@ Occasionally inline, two line source code comments are used within a function.
These should usually be balanced so that you don't have one line with 78
characters and the second with only four. Split the comment between the two.
-## Method receiver naming
+### Method receiver naming
[Contrary](https://github.com/golang/go/wiki/CodeReviewComments#receiver-names)
to the specialized naming of the method receiver variable, we usually name all
@@ -65,7 +71,7 @@ makes the code easier to read since you don't need to remember the name of the
method receiver variable in each different method. This is very similar to what
is done in `python`.
-### Example
+#### Example
```golang
// Bar does a thing, and returns the number of baz results found in our
@@ -78,7 +84,7 @@ func (obj *Foo) Bar(baz string) int {
}
```
-## Consistent ordering
+### Consistent ordering
In general we try to preserve a logical ordering in source files which usually
matches the common order of execution that a _lazy evaluator_ would follow.
@@ -90,6 +96,55 @@ declared in the interface.
When implementing code for the various types in the language, please follow this
order: `bool`, `str`, `int`, `float`, `list`, `map`, `struct`, `func`.
## Overview for mcl code
The `mcl` language is quite new, so this guide will probably change over time as
we find what's best, and hopefully we'll be able to add an `mclfmt` tool in the
future so that less of this needs to be documented. (Patches welcome!)
### Indentation
Code indentation is done with tabs. The tab-width is a personal preference,
which is the beauty of using tabs: everyone can read code at the width they
prefer. The inventor of `mgmt` uses and recommends a width of eight, and that is
what should be used if your tool requires a modeline to be publicly committed.
### Line length
We recommend you stick to 80 char line width. If you find yourself with deeper
nesting, it might be a hint that your code could be refactored in a more
pleasant way.
### Capitalization
At the moment, variables, function names, and classes are all lowercase and do
not contain underscores. We will probably figure out what style to recommend
when the language is a bit further along. For example, we haven't decided if we
should have a notion of public and private variables, and if we'd like to
reserve capitalization for this situation.
### Module naming
We recommend you name your modules with an `mgmt-` prefix. For example, a module
about bananas might be named `mgmt-banana`. This is helpful for the useful magic
built into the module import code, which will by default take a remote import
like: `import "https://github.com/purpleidea/mgmt-banana/"` and namespace it as
`banana`. Of course you can always pick the namespace yourself on import with:
`import "https://github.com/purpleidea/mgmt-banana/" as tomato` or something
similar.
### Licensing
We believe that sharing code helps reduce unnecessary re-invention, so that we
can [stand on the shoulders of giants](https://en.wikipedia.org/wiki/Standing_on_the_shoulders_of_giants)
and hopefully make faster progress in science, medicine, exploration, etc... As
a result, we recommend releasing your modules under the [LGPLv3+](https://www.gnu.org/licenses/lgpl-3.0.en.html)
license for the maximum balance of freedom and re-usability. We strongly oppose
any [CLA](https://en.wikipedia.org/wiki/Contributor_License_Agreement)
requirements and believe that the ["inbound==outbound"](https://ref.fedorapeople.org/fontana-linuxcon.html#slide2)
rule applies. Lastly, we do not support software patents and we hope you don't
either!
## Suggestions
If you have any ideas for suggestions or other improvements to this guide,


@@ -31,6 +31,10 @@ type EdgeableRes interface {
// trait.
AutoEdgeMeta() *AutoEdgeMeta
// SetAutoEdgeMeta lets you set all of the meta params for the automatic
// edges trait in a single call.
SetAutoEdgeMeta(*AutoEdgeMeta)
// UIDs includes all params to make a unique identification of this
// object.
UIDs() []ResUID // most resources only return one


@@ -34,6 +34,10 @@ type GroupableRes interface {
// grouping trait.
AutoGroupMeta() *AutoGroupMeta
// SetAutoGroupMeta lets you set all of the meta params for the
// automatic grouping trait in a single call.
SetAutoGroupMeta(*AutoGroupMeta)
// GroupCmp compares two resources and decides if they're suitable for
// grouping. This usually needs to be unique to your resource.
GroupCmp(res GroupableRes) error


@@ -24,7 +24,8 @@ import (
)
// ResCmp compares two resources by checking multiple aspects. This is the main
-// entry point for running all the compare steps on two resource.
+// entry point for running all the compare steps on two resources. This code is
+// very similar to AdaptCmp.
func ResCmp(r1, r2 Res) error {
if r1.Kind() != r2.Kind() {
return fmt.Errorf("kind differs")
@@ -37,6 +38,30 @@ func ResCmp(r1, r2 Res) error {
return err
}
// TODO: do we need to compare other traits/metaparams?
m1 := r1.MetaParams()
m2 := r2.MetaParams()
if (m1 == nil) != (m2 == nil) { // xor
return fmt.Errorf("meta params differ")
}
if m1 != nil && m2 != nil {
if err := m1.Cmp(m2); err != nil {
return err
}
}
r1x, ok1 := r1.(RefreshableRes)
r2x, ok2 := r2.(RefreshableRes)
if ok1 != ok2 {
return fmt.Errorf("refreshable differs") // they must be different (optional)
}
if ok1 && ok2 {
if r1x.Refresh() != r2x.Refresh() {
return fmt.Errorf("refresh differs")
}
}
// compare meta params for resources with auto edges
r1e, ok1 := r1.(EdgeableRes)
r2e, ok2 := r2.(EdgeableRes)
@@ -87,6 +112,174 @@ func ResCmp(r1, r2 Res) error {
}
}
r1r, ok1 := r1.(RecvableRes)
r2r, ok2 := r2.(RecvableRes)
if ok1 != ok2 {
return fmt.Errorf("recvable differs") // they must be different (optional)
}
if ok1 && ok2 {
v1 := r1r.Recv()
v2 := r2r.Recv()
if (v1 == nil) != (v2 == nil) { // xor
return fmt.Errorf("recv params differ")
}
if v1 != nil && v2 != nil {
// TODO: until we hit this code path, don't allow
// comparing anything that has this set to non-zero
if len(v1) != 0 || len(v2) != 0 {
return fmt.Errorf("recv params exist")
}
}
}
r1s, ok1 := r1.(SendableRes)
r2s, ok2 := r2.(SendableRes)
if ok1 != ok2 {
return fmt.Errorf("sendable differs") // they must be different (optional)
}
if ok1 && ok2 {
s1 := r1s.Sent()
s2 := r2s.Sent()
if (s1 == nil) != (s2 == nil) { // xor
return fmt.Errorf("send params differ")
}
if s1 != nil && s2 != nil {
// TODO: until we hit this code path, don't allow
// adapting anything that has this set to non-nil
return fmt.Errorf("send params exist")
}
}
return nil
}
// AdaptCmp compares two resources by checking multiple aspects. This is the
// main entry point for running all the compatible compare steps on two
// resources. This code is very similar to ResCmp.
func AdaptCmp(r1, r2 CompatibleRes) error {
if r1.Kind() != r2.Kind() {
return fmt.Errorf("kind differs")
}
if r1.Name() != r2.Name() {
return fmt.Errorf("name differs")
}
// run `Adapts` instead of `Cmp`
if err := r1.Adapts(r2); err != nil {
return err
}
// TODO: do we need to compare other traits/metaparams?
m1 := r1.MetaParams()
m2 := r2.MetaParams()
if (m1 == nil) != (m2 == nil) { // xor
return fmt.Errorf("meta params differ")
}
if m1 != nil && m2 != nil {
if err := m1.Cmp(m2); err != nil {
return err
}
}
// we don't need to compare refresh, since those can always be merged...
// compare meta params for resources with auto edges
r1e, ok1 := r1.(EdgeableRes)
r2e, ok2 := r2.(EdgeableRes)
if ok1 != ok2 {
return fmt.Errorf("edgeable differs") // they must be different (optional)
}
if ok1 && ok2 {
if r1e.AutoEdgeMeta().Cmp(r2e.AutoEdgeMeta()) != nil {
return fmt.Errorf("autoedge differs")
}
}
// compare meta params for resources with auto grouping
r1g, ok1 := r1.(GroupableRes)
r2g, ok2 := r2.(GroupableRes)
if ok1 != ok2 {
return fmt.Errorf("groupable differs") // they must be different (optional)
}
if ok1 && ok2 {
if r1g.AutoGroupMeta().Cmp(r2g.AutoGroupMeta()) != nil {
return fmt.Errorf("autogroup differs")
}
// if resources are grouped, are the groups the same?
if i, j := r1g.GetGroup(), r2g.GetGroup(); len(i) != len(j) {
return fmt.Errorf("autogroup groups differ")
} else if len(i) > 0 { // trick the golinter
// Sort works with Res, so convert the lists to that
iRes := []Res{}
for _, r := range i {
res := r.(Res)
iRes = append(iRes, res)
}
jRes := []Res{}
for _, r := range j {
res := r.(Res)
jRes = append(jRes, res)
}
ix, jx := Sort(iRes), Sort(jRes) // now sort :)
for k := range ix {
// compare sub resources
// TODO: should we use AdaptCmp here?
// TODO: how would they run `Merge` ? (we don't)
// this code path will probably not run, because
// it is called in the lang before autogrouping!
if err := ResCmp(ix[k], jx[k]); err != nil {
return err
}
}
}
}
r1r, ok1 := r1.(RecvableRes)
r2r, ok2 := r2.(RecvableRes)
if ok1 != ok2 {
return fmt.Errorf("recvable differs") // they must be different (optional)
}
if ok1 && ok2 {
v1 := r1r.Recv()
v2 := r2r.Recv()
if (v1 == nil) != (v2 == nil) { // xor
return fmt.Errorf("recv params differ")
}
if v1 != nil && v2 != nil {
// TODO: until we hit this code path, don't allow
// adapting anything that has this set to non-zero
if len(v1) != 0 || len(v2) != 0 {
return fmt.Errorf("recv params exist")
}
}
}
r1s, ok1 := r1.(SendableRes)
r2s, ok2 := r2.(SendableRes)
if ok1 != ok2 {
return fmt.Errorf("sendable differs") // they must be different (optional)
}
if ok1 && ok2 {
s1 := r1s.Sent()
s2 := r2s.Sent()
if (s1 == nil) != (s2 == nil) { // xor
return fmt.Errorf("send params differ")
}
if s1 != nil && s2 != nil {
// TODO: until we hit this code path, don't allow
// adapting anything that has this set to non-nil
return fmt.Errorf("send params exist")
}
}
return nil
}
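The `(a == nil) != (b == nil)` xor checks that repeat throughout both functions can be isolated into a tiny helper to show the pattern; the helper itself is hypothetical, since the engine writes the check inline.

```go
package main

import "fmt"

// bothOrNeither returns an error unless the two values are either both
// present or both absent; this mirrors the (a == nil) != (b == nil) xor
// checks used inline in ResCmp and AdaptCmp.
func bothOrNeither(what string, aNil, bNil bool) error {
	if aNil != bNil { // xor: exactly one side is nil
		return fmt.Errorf("%s differ", what)
	}
	return nil
}

func main() {
	var m1 map[string]int // nil map
	m2 := map[string]int{}
	// one side is nil and the other is not, so this reports a difference
	fmt.Println(bothOrNeither("meta params", m1 == nil, m2 == nil))
}
```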

engine/copy.go (new file, 160 lines)

@@ -0,0 +1,160 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package engine
import (
"fmt"
errwrap "github.com/pkg/errors"
)
// ResCopy copies a resource. This is the main entry point for copying a
// resource since it does all the common engine-level copying as well.
func ResCopy(r CopyableRes) (CopyableRes, error) {
res := r.Copy()
res.SetKind(r.Kind())
res.SetName(r.Name())
if x, ok := r.(MetaRes); ok {
dst, ok := res.(MetaRes)
if !ok {
// programming error
panic("meta interfaces are illogical")
}
dst.SetMetaParams(x.MetaParams().Copy()) // copy b/c we have it
}
if x, ok := r.(RefreshableRes); ok {
dst, ok := res.(RefreshableRes)
if !ok {
// programming error
panic("refresh interfaces are illogical")
}
dst.SetRefresh(x.Refresh()) // no need to copy atm
}
// copy meta params for resources with auto edges
if x, ok := r.(EdgeableRes); ok {
dst, ok := res.(EdgeableRes)
if !ok {
// programming error
panic("autoedge interfaces are illogical")
}
dst.SetAutoEdgeMeta(x.AutoEdgeMeta()) // no need to copy atm
}
// copy meta params for resources with auto grouping
if x, ok := r.(GroupableRes); ok {
dst, ok := res.(GroupableRes)
if !ok {
// programming error
panic("autogroup interfaces are illogical")
}
dst.SetAutoGroupMeta(x.AutoGroupMeta()) // no need to copy atm
grouped := []GroupableRes{}
for _, g := range x.GetGroup() {
g0, ok := g.(CopyableRes)
if !ok {
return nil, fmt.Errorf("resource wasn't copyable")
}
g1, err := ResCopy(g0)
if err != nil {
return nil, err
}
g2, ok := g1.(GroupableRes)
if !ok {
return nil, fmt.Errorf("resource wasn't groupable")
}
grouped = append(grouped, g2)
}
dst.SetGroup(grouped)
}
if x, ok := r.(RecvableRes); ok {
dst, ok := res.(RecvableRes)
if !ok {
// programming error
panic("recv interfaces are illogical")
}
dst.SetRecv(x.Recv()) // no need to copy atm
}
if x, ok := r.(SendableRes); ok {
dst, ok := res.(SendableRes)
if !ok {
// programming error
panic("send interfaces are illogical")
}
if err := dst.Send(x.Sent()); err != nil { // no need to copy atm
return nil, errwrap.Wrapf(err, "can't copy send")
}
}
return res, nil
}
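The repeated `if x, ok := r.(SomeRes)` blocks in ResCopy follow a standard Go pattern: optional capabilities discovered via interface type assertions. A minimal self-contained sketch of the same shape, using toy types (not the engine's real interfaces):

```go
package main

import "fmt"

// base is a stand-in for the core Res interface.
type base interface{ name() string }

// refresher is an optional capability, analogous to RefreshableRes.
type refresher interface{ refresh() bool }

type res struct {
	n string
	r bool
}

func (o *res) name() string  { return o.n }
func (o *res) refresh() bool { return o.r }

// copyRes copies the base data, then probes for optional capabilities
// with type assertions, mirroring ResCopy's structure.
func copyRes(b base) base {
	out := &res{n: b.name()}
	if x, ok := b.(refresher); ok {
		out.r = x.refresh() // carry over the optional trait
	}
	return out
}

func main() {
	orig := &res{n: "file1", r: true}
	cp := copyRes(orig)
	fmt.Println(cp.name(), cp.(refresher).refresh())
}
```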
// ResMerge merges a set of resources that are compatible with each other. This
// is the main entry point for the merging. They must each successfully be able
// to run AdaptCmp without error.
func ResMerge(r ...CompatibleRes) (CompatibleRes, error) {
if len(r) == 0 {
return nil, fmt.Errorf("zero resources given")
}
if len(r) == 1 {
return r[0], nil
}
if len(r) > 2 {
r0 := r[0]
r1, err := ResMerge(r[1:]...)
if err != nil {
return nil, err
}
return ResMerge(r0, r1)
}
// now we have r[0] and r[1] to merge here...
r0 := r[0]
r1 := r[1]
if err := AdaptCmp(r0, r1); err != nil {
return nil, err
}
res, err := r0.Merge(r1) // resource method of this interface
if err != nil {
return nil, err
}
// meta should have come over in the copy
if x, ok := res.(RefreshableRes); ok {
x0, ok0 := r0.(RefreshableRes)
x1, ok1 := r1.(RefreshableRes)
if !ok0 || !ok1 {
// programming error
panic("refresh interfaces are illogical")
}
x.SetRefresh(x0.Refresh() || x1.Refresh()) // true if either is!
}
// the other traits and metaparams can't be merged easily... so we don't
// merge them, and if they were present and differed, and weren't copied
// in the ResCopy method, then we should have errored above in AdaptCmp!
return res, nil
}
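ResMerge reduces N resources to one by recursively merging the tail first, then combining the head with that result. The same pairwise-fold shape can be sketched on a toy mergeable type (hypothetical, not the engine's interfaces):

```go
package main

import "fmt"

// item is a stand-in for a mergeable resource.
type item struct{ vals []int }

// merge combines exactly two items. As the ResMerge docs require, the
// result should be consistent no matter which order merges happen in.
func merge(a, b item) item {
	return item{vals: append(append([]int{}, a.vals...), b.vals...)}
}

// mergeAll mirrors ResMerge's recursion: zero inputs is an error, one
// is returned as-is, and more than two merges the tail before the head.
func mergeAll(r ...item) (item, error) {
	if len(r) == 0 {
		return item{}, fmt.Errorf("zero items given")
	}
	if len(r) == 1 {
		return r[0], nil
	}
	if len(r) > 2 {
		rest, err := mergeAll(r[1:]...)
		if err != nil {
			return item{}, err
		}
		return mergeAll(r[0], rest)
	}
	return merge(r[0], r[1]), nil
}

func main() {
	out, _ := mergeAll(item{[]int{1}}, item{[]int{2}}, item{[]int{3}})
	fmt.Println(out.vals) // [1 2 3]
}
```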


@@ -25,9 +25,59 @@ type Kind int
// The different event kinds are used in different contexts.
const (
EventNil Kind = iota
EventStart
EventPause
EventPoke
EventExit
KindNil Kind = iota
KindStart
KindPause
KindPoke
KindExit
)
// Pre-built messages so they can be used directly without having to use NewMsg.
// These are useful when we don't want a response via ACK().
var (
Start = &Msg{Kind: KindStart}
Pause = &Msg{Kind: KindPause} // probably unused b/c we want a resp
Poke = &Msg{Kind: KindPoke}
Exit = &Msg{Kind: KindExit}
)
// Msg is an event primitive that represents a kind of event, and optionally a
// request for an ACK.
type Msg struct {
Kind Kind
resp chan struct{}
}
// NewMsg builds a new message struct. It will want an ACK. If you don't want an
// ACK then use the pre-built messages in the package variable globals.
func NewMsg(kind Kind) *Msg {
return &Msg{
Kind: kind,
resp: make(chan struct{}),
}
}
// CanACK determines if an ACK is possible for this message. It does not say
// whether one has already been sent or not.
func (obj *Msg) CanACK() bool {
return obj.resp != nil
}
// ACK acknowledges the event. It must not be called more than once for the
// same event. It unblocks all past and future calls of Wait for this event.
func (obj *Msg) ACK() {
close(obj.resp)
}
// Wait on ACK for this event. It doesn't matter if this runs before or after
// the ACK. It will unblock either way.
// TODO: consider adding a context if it's ever useful.
func (obj *Msg) Wait() error {
select {
//case <-ctx.Done():
// return ctx.Err()
case <-obj.resp:
return nil
}
}
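The close-based ACK in Msg above relies on a common Go broadcast idiom: closing a channel unblocks every receiver, whether it started waiting before or after the close. A minimal self-contained sketch of the handshake (toy types, not the real event package API):

```go
package main

import "fmt"

// msg mirrors the shape of event.Msg: a kind plus an optional resp channel.
type msg struct {
	kind string
	resp chan struct{}
}

func newMsg(kind string) *msg {
	return &msg{kind: kind, resp: make(chan struct{})}
}

// ack unblocks all past and future waiters by closing resp.
// Like the real ACK, it must only be called once per message.
func (m *msg) ack() { close(m.resp) }

// wait blocks until ack has run; a receive on a closed channel
// returns immediately, so the ordering of ack and wait doesn't matter.
func (m *msg) wait() { <-m.resp }

func main() {
	m := newMsg("pause")
	done := make(chan struct{})
	go func() {
		m.wait() // blocks until ack
		close(done)
	}()
	m.ack()
	<-done
	fmt.Println("acked")
}
```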


@@ -48,7 +48,7 @@ type Fs interface {
//IsDir(path string) (bool, error)
//IsEmpty(path string) (bool, error)
//NeuterAccents(s string) string
//ReadAll(r io.Reader) ([]byte, error) // not needed
//ReadAll(r io.Reader) ([]byte, error) // not needed, same as ioutil
ReadDir(dirname string) ([]os.FileInfo, error)
ReadFile(filename string) ([]byte, error)
//SafeWriteReader(path string, r io.Reader) (err error)


@@ -119,6 +119,7 @@ func (obj *Engine) Process(vertex pgraph.Vertex) error {
for _, changed := range updated {
if changed { // at least one was updated
// invalidate cache, mark as dirty
obj.state[vertex].tuid.StopTimer()
obj.state[vertex].isStateOK = false
break
}
@@ -174,6 +175,7 @@ func (obj *Engine) Process(vertex pgraph.Vertex) error {
// if CheckApply ran without noop and without error, state should be good
if !noop && err == nil { // aka !noop || checkOK
obj.state[vertex].tuid.StartTimer()
obj.state[vertex].isStateOK = true // reset
if refresh {
obj.SetUpstreamRefresh(vertex, false) // refresh happened, clear the request
@@ -252,9 +254,11 @@ func (obj *Engine) Worker(vertex pgraph.Vertex) error {
defer close(obj.state[vertex].stopped) // done signal
obj.state[vertex].cuid = obj.Converger.Register()
obj.state[vertex].tuid = obj.Converger.Register()
// must wait for all users of the cuid to finish *before* we unregister!
// as a result, this defer happens *before* the below wait group Wait...
defer obj.state[vertex].cuid.Unregister()
defer obj.state[vertex].tuid.Unregister()
defer obj.state[vertex].wg.Wait() // this Worker is the last to exit!
@@ -343,7 +347,7 @@ func (obj *Engine) Worker(vertex pgraph.Vertex) error {
var limiter = rate.NewLimiter(res.MetaParams().Limit, res.MetaParams().Burst)
// It is important that we shutdown the Watch loop if this exits.
// Example, if Process errors permanently, we should ask Watch to exit.
defer obj.state[vertex].Event(event.EventExit) // signal an exit
defer obj.state[vertex].Event(event.Exit) // signal an exit
for {
select {
case err, ok := <-obj.state[vertex].outputChan: // read from watch channel
@@ -460,11 +464,11 @@ func (obj *Engine) Worker(vertex pgraph.Vertex) error {
// err = errwrap.Wrapf(err, "permanent process error")
//}
// If this exits, defer calls Event(event.EventExit),
// If this exits, defer calls: obj.Event(event.Exit),
// which will cause the Watch loop to shutdown. Also,
// if the Watch loop shuts down, that will cause this
// Process loop to shut down. Also the graph sync can
// run an Event(event.EventExit) which causes this to
// run an: obj.Event(event.Exit) which causes this to
// shutdown as well. Lastly, it is possible that more
// that one of these scenarios happens simultaneously.
return err

View File

@@ -125,7 +125,7 @@ func (obj *Engine) Validate() error {
}
// Apply a function to the pending graph. You must pass in a function which will
// receive this graph as input, and return an error if it something does not
// receive this graph as input, and return an error if something does not
// succeed.
func (obj *Engine) Apply(fn func(*pgraph.Graph) error) error {
return fn(obj.nextGraph)
@@ -194,10 +194,11 @@ func (obj *Engine) Commit() error {
}
return nil
}
free := []func() error{} // functions to run after graphsync to reset...
vertexRemoveFn := func(vertex pgraph.Vertex) error {
// wait for exit before starting new graph!
obj.state[vertex].Event(event.EventExit) // signal an exit
obj.waits[vertex].Wait() // sync
obj.state[vertex].Event(event.Exit) // signal an exit
obj.waits[vertex].Wait() // sync
// close the state and resource
// FIXME: will this mess up the sync and block the engine?
@@ -206,8 +207,12 @@ func (obj *Engine) Commit() error {
}
// delete to free up memory from old graphs
delete(obj.state, vertex)
delete(obj.waits, vertex)
fn := func() error {
delete(obj.state, vertex)
delete(obj.waits, vertex)
return nil
}
free = append(free, fn) // do this at the end, so we don't panic
return nil
}
@@ -218,6 +223,13 @@ func (obj *Engine) Commit() error {
if err := obj.graph.GraphSync(obj.nextGraph, engine.VertexCmpFn, vertexAddFn, vertexRemoveFn, engine.EdgeCmpFn); err != nil {
return errwrap.Wrapf(err, "error running graph sync")
}
// we run these afterwards, so that the state structs (that might get
// referenced) aren't destroyed while someone might poke or use one.
for _, fn := range free {
if err := fn(); err != nil {
return errwrap.Wrapf(err, "error running free fn")
}
}
obj.nextGraph = nil
// After this point, we must not error or we'd need to restore all of
@@ -276,7 +288,7 @@ func (obj *Engine) Start() error {
}
if unpause { // unpause (if needed)
obj.state[vertex].Event(event.EventStart)
obj.state[vertex].Event(event.Start)
}
}
// we wait for everyone to start before exiting!
@@ -301,7 +313,7 @@ func (obj *Engine) Pause(fastPause bool) {
for _, vertex := range topoSort { // squeeze out the events...
// The Event is sent to an unbuffered channel, so this event is
// synchronous, and as a result it blocks until it is received.
obj.state[vertex].Event(event.EventPause)
obj.state[vertex].Event(event.Pause)
}
// we are now completely paused...


@@ -65,7 +65,7 @@ type State struct {
// events is a channel of incoming events which is read by the Watch
// loop for that resource. It receives events like pause, start, and
// poke. The channel shuts down to signal for Watch to exit.
eventsChan chan event.Kind // incoming to resource
eventsChan chan *event.Msg // incoming to resource
eventsLock *sync.Mutex // lock around sending and closing of events channel
eventsDone bool // is channel closed?
@@ -86,13 +86,14 @@ type State struct {
working bool // is the Main() loop running ?
cuid converger.UID // primary converger
tuid converger.UID // secondary converger
init *engine.Init // a copy of the init struct passed to res Init
}
// Init initializes structures like channels.
func (obj *State) Init() error {
obj.eventsChan = make(chan event.Kind)
obj.eventsChan = make(chan *event.Msg)
obj.eventsLock = &sync.Mutex{}
obj.outputChan = make(chan error)
@@ -121,6 +122,7 @@ func (obj *State) Init() error {
}
//obj.cuid = obj.Converger.Register() // gets registered in Worker()
//obj.tuid = obj.Converger.Register() // gets registered in Worker()
obj.init = &engine.Init{
Program: obj.Program,
@@ -128,6 +130,7 @@ func (obj *State) Init() error {
// Watch:
Running: func() error {
obj.tuid.StopTimer()
close(obj.started) // this is reset in the reset func
obj.isStateOK = false // assume we're initially dirty
// optimization: skip the initial send if not a starter
@@ -141,6 +144,7 @@ func (obj *State) Init() error {
Events: obj.eventsChan,
Read: obj.read,
Dirty: func() { // TODO: should we rename this SetDirty?
obj.tuid.StopTimer()
obj.isStateOK = false
},
@@ -208,6 +212,9 @@ func (obj *State) Close() error {
//if obj.cuid != nil {
// obj.cuid.Unregister() // gets unregistered in Worker()
//}
//if obj.tuid != nil {
// obj.tuid.Unregister() // gets unregistered in Worker()
//}
// redundant safety
obj.wg.Wait() // wait until all poke's and events on me have exited
@@ -239,6 +246,16 @@ func (obj *State) Poke() {
obj.wg.Add(1)
defer obj.wg.Done()
// now that we've added to the wait group, obj.outputChan won't close...
// so see if there's an exit signal before we release the wait group!
// XXX: i don't think this is necessarily happening, but maybe it is?
// XXX: re-write some of the engine to ensure that: "the sender closes"!
select {
case <-obj.exit.Signal():
return // skip sending the poke b/c we're closing
default:
}
select {
case obj.outputChan <- nil:
@@ -249,7 +266,7 @@ func (obj *State) Poke() {
// Event sends a Pause or Start event to the resource. It can also be used to
// send Poke events, but it's much more efficient to send them directly instead
// of passing them through the resource.
func (obj *State) Event(kind event.Kind) {
func (obj *State) Event(msg *event.Msg) {
// TODO: should these happen after the lock?
obj.wg.Add(1)
defer obj.wg.Done()
@@ -261,7 +278,7 @@ func (obj *State) Event(kind event.Kind) {
return
}
if kind == event.EventExit { // set this so future events don't deadlock
if msg.Kind == event.KindExit { // set this so future events don't deadlock
obj.Logf("exit event...")
obj.eventsDone = true
close(obj.eventsChan) // causes resource Watch loop to close
@@ -270,7 +287,7 @@ func (obj *State) Event(kind event.Kind) {
}
select {
case obj.eventsChan <- kind:
case obj.eventsChan <- msg:
case <-obj.exit.Signal():
}
@@ -278,40 +295,40 @@ func (obj *State) Event(kind event.Kind) {
// read is a helper function used inside the main select statement of resources.
// If it returns an error, then this is a signal for the resource to exit.
func (obj *State) read(kind event.Kind) error {
switch kind {
case event.EventPoke:
func (obj *State) read(msg *event.Msg) error {
switch msg.Kind {
case event.KindPoke:
return obj.event() // a poke needs to cause an event...
case event.EventStart:
case event.KindStart:
return fmt.Errorf("unexpected start")
case event.EventPause:
case event.KindPause:
// pass
case event.EventExit:
case event.KindExit:
return engine.ErrSignalExit
default:
return fmt.Errorf("unhandled event: %+v", kind)
return fmt.Errorf("unhandled event: %+v", msg.Kind)
}
// we're paused now
select {
case kind, ok := <-obj.eventsChan:
case msg, ok := <-obj.eventsChan:
if !ok {
return engine.ErrWatchExit
}
switch kind {
case event.EventPoke:
switch msg.Kind {
case event.KindPoke:
return fmt.Errorf("unexpected poke")
case event.EventPause:
case event.KindPause:
return fmt.Errorf("unexpected pause")
case event.EventStart:
case event.KindStart:
// resumed
return nil
case event.EventExit:
case event.KindExit:
return engine.ErrSignalExit
default:
return fmt.Errorf("unhandled event: %+v", kind)
return fmt.Errorf("unhandled event: %+v", msg.Kind)
}
}
}
@@ -328,45 +345,45 @@ func (obj *State) event() error {
return nil // sent event!
// make sure to keep handling incoming
case kind, ok := <-obj.eventsChan:
case msg, ok := <-obj.eventsChan:
if !ok {
return engine.ErrWatchExit
}
switch kind {
case event.EventPoke:
switch msg.Kind {
case event.KindPoke:
// we're trying to send an event, so swallow the
// poke: it's what we wanted to have happen here
continue
case event.EventStart:
case event.KindStart:
return fmt.Errorf("unexpected start")
case event.EventPause:
case event.KindPause:
// pass
case event.EventExit:
case event.KindExit:
return engine.ErrSignalExit
default:
return fmt.Errorf("unhandled event: %+v", kind)
return fmt.Errorf("unhandled event: %+v", msg.Kind)
}
}
// we're paused now
select {
case kind, ok := <-obj.eventsChan:
case msg, ok := <-obj.eventsChan:
if !ok {
return engine.ErrWatchExit
}
switch kind {
case event.EventPoke:
switch msg.Kind {
case event.KindPoke:
return fmt.Errorf("unexpected poke")
case event.EventPause:
case event.KindPause:
return fmt.Errorf("unexpected pause")
case event.EventStart:
case event.KindStart:
// resumed
case event.EventExit:
case event.KindExit:
return engine.ErrSignalExit
default:
return fmt.Errorf("unhandled event: %+v", kind)
return fmt.Errorf("unhandled event: %+v", msg.Kind)
}
}
}


@@ -44,6 +44,10 @@ var DefaultMetaParams = &MetaParams{
type MetaRes interface {
// MetaParams lets you get or set meta params for the resource.
MetaParams() *MetaParams
// SetMetaParams lets you set all of the meta params for the resource in
// a single call.
SetMetaParams(*MetaParams)
}
// MetaParams provides some meta parameters that apply to every resource.


@@ -100,11 +100,11 @@ type Init struct {
// Events returns a channel that we must watch for messages from the
// engine. When it closes, this is a signal to shutdown.
Events chan event.Kind
Events chan *event.Msg
// Read processes messages that come in from the Events channel. It is a
// helper method that knows how to handle the pause mechanism correctly.
Read func(event.Kind) error
Read func(*event.Msg) error
// Dirty marks the resource state as dirty. This signals to the engine
// that CheckApply will have some work to do in order to converge it.
@@ -192,12 +192,14 @@ type Res interface {
// in response.
Watch() error
// CheckApply determines if the state of the resource is connect and if
// CheckApply determines if the state of the resource is correct and if
// asked to with the `apply` variable, applies the requested state.
CheckApply(apply bool) (checkOK bool, err error)
// Cmp compares itself to another resource and returns an error if they
// are not equivalent.
// are not equivalent. This is more strict than the Adapts method of the
// CompatibleRes interface, which allows for equivalent differences if
// they have a compatible result in CheckApply.
Cmp(Res) error
}
@@ -246,15 +248,50 @@ type InterruptableRes interface {
// is designed to unblock any long running operation that is occurring
// in the CheckApply portion of the life cycle. If the resource has
// already exited, running this method should not block. (That is to say
// that you should not expect CheckApply or Watch to be able to alive
// and able to read from a channel to satisfy your request.) It is best
// to probably have this close a channel to multicast that signal around
// to anyone who can detect it in a select. If you are in a situation
// which cannot interrupt, then you can return an error.
// that you should not expect CheckApply or Watch to be alive and be
// able to read from a channel to satisfy your request.) It is best to
// probably have this close a channel to multicast that signal around to
// anyone who can detect it in a select. If you are in a situation which
// cannot interrupt, then you can return an error.
// FIXME: implement, and check the above description is what we expect!
Interrupt() error
}
// CopyableRes is an interface that a resource can implement if we want to be
// able to copy the resource to build another one.
type CopyableRes interface {
Res
// Copy returns a new resource which has a copy of the public data.
// Don't call this directly, use engine.ResCopy instead.
// TODO: should we copy any private state or not?
Copy() CopyableRes
}
// CompatibleRes is an interface that a resource can implement to express if a
// similar variant of itself is functionally equivalent. For example, two `pkg`
// resources that install `cowsay` could be equivalent if one requests a state
// of `installed` and the other requests `newest`, since they'll finish with a
// compatible result. This doesn't need to be behind a metaparam flag or trait,
// because it is never beneficial to turn it off, unless there is a bug to fix.
type CompatibleRes interface {
//Res // causes "duplicate method" error
CopyableRes // we'll need to use the Copy method in the Merge function!
// Adapts compares itself to another resource and returns an error if
// they are not compatibly equivalent. This is less strict than the
// default `Cmp` method which should be used for most cases. Don't call
// this directly, use engine.AdaptCmp instead.
Adapts(CompatibleRes) error
// Merge returns the combined resource to use when two are equivalent.
// This might get called multiple times for N different resources that
// need to get merged, and so it should produce a consistent result no
// matter which order it is called in. Don't call this directly, use
// engine.ResMerge instead.
Merge(CompatibleRes) (CompatibleRes, error)
}
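The `cowsay` example above can be made concrete: two pkg-like resources requesting `installed` and `newest` are compatible, because installing the newest version satisfies both. A toy sketch of the Adapts/Merge split, using a hypothetical pkgState type rather than the real pkg resource:

```go
package main

import "fmt"

// pkgState is a toy stand-in for a pkg resource's public state.
type pkgState struct {
	name  string
	state string // "installed" or "newest"
}

// adapts reports whether two states are compatibly equivalent: the
// same package, with states that one action can satisfy at once.
func adapts(a, b pkgState) error {
	if a.name != b.name {
		return fmt.Errorf("package differs: %s vs %s", a.name, b.name)
	}
	ok := func(s string) bool { return s == "installed" || s == "newest" }
	if !ok(a.state) || !ok(b.state) {
		return fmt.Errorf("incompatible states: %s vs %s", a.state, b.state)
	}
	return nil
}

// mergeState picks the stricter of two compatible states: if either
// side asks for "newest", the merged resource must install the newest.
// The result is the same regardless of argument order.
func mergeState(a, b pkgState) pkgState {
	if a.state == "newest" || b.state == "newest" {
		return pkgState{name: a.name, state: "newest"}
	}
	return a
}

func main() {
	a := pkgState{"cowsay", "installed"}
	b := pkgState{"cowsay", "newest"}
	if err := adapts(a, b); err == nil {
		fmt.Println(mergeState(a, b).state)
	}
}
```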
// CollectableRes is an interface for resources that support collection. It is
// currently temporary until a proper API for all resources is invented.
type CollectableRes interface {

engine/resources/cron.go — new file, 568 lines

@@ -0,0 +1,568 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"bytes"
"context"
"fmt"
"os/user"
"path"
"strings"
"time"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
engineUtil "github.com/purpleidea/mgmt/engine/util"
"github.com/purpleidea/mgmt/recwatch"
"github.com/purpleidea/mgmt/util"
sdbus "github.com/coreos/go-systemd/dbus"
"github.com/coreos/go-systemd/unit"
systemdUtil "github.com/coreos/go-systemd/util"
"github.com/godbus/dbus"
errwrap "github.com/pkg/errors"
)
const (
// OnCalendar is a systemd-timer trigger, whose behaviour is defined in
// 'man systemd-timer', and whose format is defined in the 'Calendar
// Events' section of 'man systemd-time'.
OnCalendar = "OnCalendar"
// OnActiveSec is a systemd-timer trigger, whose behaviour is defined in
// 'man systemd-timer', and whose format is a time span as defined in
// 'man systemd-time'.
OnActiveSec = "OnActiveSec"
// OnBootSec is a systemd-timer trigger, whose behaviour is defined in
// 'man systemd-timer', and whose format is a time span as defined in
// 'man systemd-time'.
OnBootSec = "OnBootSec"
// OnStartupSec is a systemd-timer trigger, whose behaviour is defined in
// 'man systemd-timer', and whose format is a time span as defined in
// 'man systemd-time'.
OnStartupSec = "OnStartupSec"
// OnUnitActiveSec is a systemd-timer trigger, whose behaviour is defined
// in 'man systemd-timer', and whose format is a time span as defined in
// 'man systemd-time'.
OnUnitActiveSec = "OnUnitActiveSec"
// OnUnitInactiveSec is a systemd-timer trigger, whose behaviour is defined
// in 'man systemd-timer', and whose format is a time span as defined in
// 'man systemd-time'.
OnUnitInactiveSec = "OnUnitInactiveSec"
// ctxTimeout is the delay, in seconds, before the calls to restart or stop
// the systemd unit will error due to timeout.
ctxTimeout = 30
)
func init() {
engine.RegisterResource("cron", func() engine.Res { return &CronRes{} })
}
// CronRes is a systemd-timer cron resource.
type CronRes struct {
traits.Base
traits.Edgeable
traits.Recvable
traits.Refreshable // needed because we embed a svc res
init *engine.Init
// Unit is the name of the systemd service unit. It is only necessary to
// set if you want to specify a service with a different name than the
// resource.
Unit string `yaml:"unit"`
// State must be 'exists' or 'absent'.
State string `yaml:"state"`
// Session, if true, creates the timer as the current user, rather than
// root. The service it points to must also be a user unit. It defaults to
// false.
Session bool `yaml:"session"`
// Trigger is the type of timer. Valid types are 'OnCalendar',
// 'OnActiveSec', 'OnBootSec', 'OnStartupSec', 'OnUnitActiveSec', and
// 'OnUnitInactiveSec'. For more information see 'man systemd.timer'.
Trigger string `yaml:"trigger"`
// Time must be used with all triggers. For 'OnCalendar', it must be in
// the format defined in 'man systemd-time' under the heading 'Calendar
// Events'. For all other triggers, time should be a valid time span as
// defined in 'man systemd-time'.
Time string `yaml:"time"`
// AccuracySec is the accuracy of the timer in systemd-time time span
// format. It defaults to one minute.
AccuracySec string `yaml:"accuracysec"`
// RandomizedDelaySec delays the timer by a randomly selected, evenly
// distributed amount of time between 0 and the specified time value. The
// value must be a valid systemd-time time span.
RandomizedDelaySec string `yaml:"randomizeddelaysec"`
// Persistent, if true, means the time when the service unit was last
// triggered is stored on disk. When the timer is activated, the service
// unit is triggered immediately if it would have been triggered at least
// once during the time when the timer was inactive. It defaults to false.
Persistent bool `yaml:"persistent"`
// WakeSystem, if true, will cause the system to resume from suspend,
// should it be suspended and if the system supports this. It defaults to
// false.
WakeSystem bool `yaml:"wakesystem"`
// RemainAfterElapse, if true, means an elapsed timer will stay loaded, and
// its state remains queryable. If false, an elapsed timer unit that cannot
// elapse anymore is unloaded. It defaults to true.
RemainAfterElapse bool `yaml:"remainafterelapse"`
file *FileRes // nested file resource
recWatcher *recwatch.RecWatcher // recwatcher for nested file
}
// Default returns some sensible defaults for this resource.
func (obj *CronRes) Default() engine.Res {
return &CronRes{
State: "exists",
RemainAfterElapse: true,
}
}
// makeComposite creates a pointer to a FileRes. The pointer is used to
// validate and initialize the nested file resource and to apply the file state
// in CheckApply.
func (obj *CronRes) makeComposite() (*FileRes, error) {
p, err := obj.UnitFilePath()
if err != nil {
return nil, errwrap.Wrapf(err, "error generating unit file path")
}
res, err := engine.NewNamedResource("file", p)
if err != nil {
return nil, errwrap.Wrapf(err, "error creating nested file resource")
}
file, ok := res.(*FileRes)
if !ok {
return nil, fmt.Errorf("error casting fileres")
}
file.State = obj.State
if obj.State != "absent" {
s := obj.unitFileContents()
file.Content = &s
}
return file, nil
}
// Validate if the params passed in are valid data.
func (obj *CronRes) Validate() error {
// validate state
if obj.State != "absent" && obj.State != "exists" {
return fmt.Errorf("state must be 'absent' or 'exists'")
}
// validate trigger
if obj.State == "absent" && obj.Trigger == "" {
return nil // if trigger is undefined we can't make a unit file
}
if obj.Trigger == "" || obj.Time == "" {
return fmt.Errorf("trigger and time must be set together")
}
if obj.Trigger != OnCalendar &&
obj.Trigger != OnActiveSec &&
obj.Trigger != OnBootSec &&
obj.Trigger != OnStartupSec &&
obj.Trigger != OnUnitActiveSec &&
obj.Trigger != OnUnitInactiveSec {
return fmt.Errorf("invalid trigger")
}
// TODO: Validate time (regex?)
// validate nested file
file, err := obj.makeComposite()
if err != nil {
return errwrap.Wrapf(err, "makeComposite failed in validate")
}
if err := file.Validate(); err != nil { // composite resource
return errwrap.Wrapf(err, "validate failed for embedded file: %s", file)
}
return nil
}
// Init runs some startup code for this resource.
func (obj *CronRes) Init(init *engine.Init) error {
var err error
obj.init = init // save for later
obj.file, err = obj.makeComposite()
if err != nil {
return errwrap.Wrapf(err, "makeComposite failed in init")
}
return obj.file.Init(init)
}
// Close is run by the engine to clean up after the resource is done.
func (obj *CronRes) Close() error {
if obj.file != nil {
return obj.file.Close()
}
return nil
}
// Watch for state changes and sends a message to the bus if there is a change.
func (obj *CronRes) Watch() error {
var bus *dbus.Conn
var err error
// this resource depends on systemd
if !systemdUtil.IsRunningSystemd() {
return fmt.Errorf("systemd is not running")
}
// create a private message bus
if obj.Session {
bus, err = util.SessionBusPrivateUsable()
} else {
bus, err = util.SystemBusPrivateUsable()
}
if err != nil {
return errwrap.Wrapf(err, "failed to connect to bus")
}
defer bus.Close()
// dbus addmatch arguments for the timer unit
args := []string{}
args = append(args, "type='signal'")
args = append(args, "interface='org.freedesktop.systemd1.Manager'")
args = append(args, "eavesdrop='true'")
args = append(args, fmt.Sprintf("arg2='%s.timer'", obj.Name()))
// match dbus messages
if call := bus.BusObject().Call(engineUtil.DBusAddMatch, 0, strings.Join(args, ",")); call.Err != nil {
return errwrap.Wrapf(call.Err, "failed to add dbus match")
}
defer bus.BusObject().Call(engineUtil.DBusRemoveMatch, 0, args) // ignore the error
// channels for dbus signal
dbusChan := make(chan *dbus.Signal)
defer close(dbusChan)
bus.Signal(dbusChan)
defer bus.RemoveSignal(dbusChan) // not needed here, but nice for symmetry
p, err := obj.UnitFilePath()
if err != nil {
return errwrap.Wrapf(err, "error generating unit file path")
}
// recwatcher for the systemd-timer unit file
obj.recWatcher, err = recwatch.NewRecWatcher(p, false)
if err != nil {
return err
}
defer obj.recWatcher.Close()
// notify engine that we're running
if err := obj.init.Running(); err != nil {
return err // exit if requested
}
var send = false // send event?
for {
select {
case event := <-dbusChan:
// process dbus events
if obj.init.Debug {
obj.init.Logf("%+v", event)
}
send = true
obj.init.Dirty() // dirty
case event, ok := <-obj.recWatcher.Events():
// process unit file recwatch events
if !ok { // channel shutdown
return nil
}
if err := event.Error; err != nil {
return errwrap.Wrapf(err, "Unknown %s watcher error", obj)
}
if obj.init.Debug {
obj.init.Logf("Event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
obj.init.Dirty() // dirty
case event, ok := <-obj.init.Events:
if !ok {
return nil
}
if err := obj.init.Read(event); err != nil {
return err
}
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
if err := obj.init.Event(); err != nil {
return err // exit if requested
}
}
}
}
// CheckApply is run to check the state and, if apply is true, to apply the
// necessary changes to reach the desired state. This is run before Watch and
// again if Watch finds a change occurring to the state.
func (obj *CronRes) CheckApply(apply bool) (checkOK bool, err error) {
ok := true
// use the embedded file resource to apply the correct state
if c, err := obj.file.CheckApply(apply); err != nil {
return false, errwrap.Wrapf(err, "nested file failed")
} else if !c {
ok = false
}
// check timer state and apply the defined state if needed
if c, err := obj.unitCheckApply(apply); err != nil {
return false, errwrap.Wrapf(err, "unitCheckApply error")
} else if !c {
ok = false
}
return ok, nil
}
// unitCheckApply checks the state of the systemd-timer unit and, if apply is
// true, applies the defined state.
func (obj *CronRes) unitCheckApply(apply bool) (checkOK bool, err error) {
var conn *sdbus.Conn
var godbusConn *dbus.Conn
// this resource depends on systemd to ensure that it's running
if !systemdUtil.IsRunningSystemd() {
return false, fmt.Errorf("systemd is not running")
}
// go-systemd connection
if obj.Session {
conn, err = sdbus.NewUserConnection()
} else {
conn, err = sdbus.New() // system bus
}
if err != nil {
return false, errwrap.Wrapf(err, "error making go-systemd dbus connection")
}
defer conn.Close()
// get the load state and active state of the timer unit
loadState, err := conn.GetUnitProperty(fmt.Sprintf("%s.timer", obj.Name()), "LoadState")
if err != nil {
return false, errwrap.Wrapf(err, "failed to get load state")
}
activeState, err := conn.GetUnitProperty(fmt.Sprintf("%s.timer", obj.Name()), "ActiveState")
if err != nil {
return false, errwrap.Wrapf(err, "failed to get active state")
}
// check the timer unit state
if obj.State == "absent" && loadState.Value == dbus.MakeVariant("not-found") {
return true, nil
}
if obj.State == "exists" && activeState.Value == dbus.MakeVariant("active") {
return true, nil
}
if !apply {
return false, nil
}
// systemctl daemon-reload
if err := conn.Reload(); err != nil {
return false, errwrap.Wrapf(err, "error reloading daemon")
}
// context for stopping/restarting the unit
ctx, cancel := context.WithTimeout(context.Background(), ctxTimeout*time.Second)
defer cancel()
// godbus connection for stopping/restarting the unit
if obj.Session {
godbusConn, err = util.SessionBusPrivateUsable()
} else {
godbusConn, err = util.SystemBusPrivateUsable()
}
if err != nil {
return false, errwrap.Wrapf(err, "error making godbus connection")
}
defer godbusConn.Close()
// stop or restart the unit
if obj.State == "absent" {
return false, engineUtil.StopUnit(ctx, godbusConn, fmt.Sprintf("%s.timer", obj.Name()))
}
return false, engineUtil.RestartUnit(ctx, godbusConn, fmt.Sprintf("%s.timer", obj.Name()))
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *CronRes) Cmp(r engine.Res) error {
res, ok := r.(*CronRes)
if !ok {
return fmt.Errorf("res is not the same kind")
}
if obj.State != res.State {
return fmt.Errorf("state differs: %s vs %s", obj.State, res.State)
}
if obj.Trigger != res.Trigger {
return fmt.Errorf("trigger differs: %s vs %s", obj.Trigger, res.Trigger)
}
if obj.Time != res.Time {
return fmt.Errorf("time differs: %s vs %s", obj.Time, res.Time)
}
if obj.AccuracySec != res.AccuracySec {
return fmt.Errorf("accuracysec differs: %s vs %s", obj.AccuracySec, res.AccuracySec)
}
if obj.RandomizedDelaySec != res.RandomizedDelaySec {
return fmt.Errorf("randomizeddelaysec differs: %s vs %s", obj.RandomizedDelaySec, res.RandomizedDelaySec)
}
if obj.Unit != res.Unit {
return fmt.Errorf("unit differs: %s vs %s", obj.Unit, res.Unit)
}
if obj.Persistent != res.Persistent {
return fmt.Errorf("persistent differs: %t vs %t", obj.Persistent, res.Persistent)
}
if obj.WakeSystem != res.WakeSystem {
return fmt.Errorf("wakesystem differs: %t vs %t", obj.WakeSystem, res.WakeSystem)
}
if obj.RemainAfterElapse != res.RemainAfterElapse {
return fmt.Errorf("remainafterelapse differs: %t vs %t", obj.RemainAfterElapse, res.RemainAfterElapse)
}
return obj.file.Cmp(r)
}
// CronUID is a unique resource identifier.
type CronUID struct {
// NOTE: There is also a name variable in the BaseUID struct, this is
// information about where this UID came from, and is unrelated to the
// information about the resource we're matching. That data which is
// used in the IFF function, is what you see in the struct fields here.
engine.BaseUID
unit string // name of target unit
session bool // user session
}
// IFF aka if and only if they are equivalent, return true. If not, false.
func (obj *CronUID) IFF(uid engine.ResUID) bool {
res, ok := uid.(*CronUID)
if !ok {
return false
}
if obj.unit != res.unit {
return false
}
if obj.session != res.session {
return false
}
return true
}
// AutoEdges returns the AutoEdge interface.
func (obj *CronRes) AutoEdges() (engine.AutoEdge, error) {
return nil, nil
}
// UIDs includes all params to make a unique identification of this object.
// Most resources only return one although some resources can return multiple.
func (obj *CronRes) UIDs() []engine.ResUID {
unit := fmt.Sprintf("%s.service", obj.Name())
if obj.Unit != "" {
unit = obj.Unit
}
uids := []engine.ResUID{
&CronUID{
BaseUID: engine.BaseUID{Name: obj.Name(), Kind: obj.Kind()},
unit: unit, // name of target unit
session: obj.Session, // user session
},
}
if file, err := obj.makeComposite(); err == nil {
uids = append(uids, file.UIDs()...) // add the file uid if we can
}
return uids
}
// UnmarshalYAML is the custom unmarshal handler for this struct.
// It is primarily useful for setting the defaults.
func (obj *CronRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes CronRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*CronRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to CronRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = CronRes(raw) // restore from indirection with type conversion!
return nil
}
// UnitFilePath returns the path to the systemd-timer unit file.
func (obj *CronRes) UnitFilePath() (string, error) {
// root timer
if !obj.Session {
return fmt.Sprintf("/etc/systemd/system/%s.timer", obj.Name()), nil
}
// user timer
u, err := user.Current()
if err != nil {
return "", errwrap.Wrapf(err, "error getting current user")
}
if u.HomeDir == "" {
return "", fmt.Errorf("user has no home directory")
}
return path.Join(u.HomeDir, "/.config/systemd/user/", fmt.Sprintf("%s.timer", obj.Name())), nil
}
// unitFileContents returns the contents of the unit file representing the
// CronRes struct.
func (obj *CronRes) unitFileContents() string {
u := []*unit.UnitOption{}
// [Unit]
u = append(u, &unit.UnitOption{Section: "Unit", Name: "Description", Value: "timer generated by mgmt"})
// [Timer]
u = append(u, &unit.UnitOption{Section: "Timer", Name: obj.Trigger, Value: obj.Time})
if obj.AccuracySec != "" {
u = append(u, &unit.UnitOption{Section: "Timer", Name: "AccuracySec", Value: obj.AccuracySec})
}
if obj.RandomizedDelaySec != "" {
u = append(u, &unit.UnitOption{Section: "Timer", Name: "RandomizedDelaySec", Value: obj.RandomizedDelaySec})
}
if obj.Unit != "" {
u = append(u, &unit.UnitOption{Section: "Timer", Name: "Unit", Value: obj.Unit})
}
if obj.Persistent != false { // defaults to false
u = append(u, &unit.UnitOption{Section: "Timer", Name: "Persistent", Value: "true"})
}
if obj.WakeSystem != false { // defaults to false
u = append(u, &unit.UnitOption{Section: "Timer", Name: "WakeSystem", Value: "true"})
}
if obj.RemainAfterElapse != true { // defaults to true
u = append(u, &unit.UnitOption{Section: "Timer", Name: "RemainAfterElapse", Value: "false"})
}
// [Install]
u = append(u, &unit.UnitOption{Section: "Install", Name: "WantedBy", Value: "timers.target"})
buf := new(bytes.Buffer)
buf.ReadFrom(unit.Serialize(u))
return buf.String()
}

View File

@@ -118,7 +118,7 @@ func (obj *ExecRes) Watch() error {
//cmdName = path.Join(d, cmdName)
cmdArgs = split[1:]
} else {
-cmdName = obj.Shell // usually bash, or sh
+cmdName = obj.WatchShell // usually bash, or sh
cmdArgs = []string{"-c", obj.WatchCmd}
}
cmd := exec.Command(cmdName, cmdArgs...)

View File

@@ -54,7 +54,10 @@ type FileRes struct {
init *engine.Init
-Path string `yaml:"path"` // path variable (usually defaults to name)
+// Path variable, which usually defaults to the name, represents the
+// destination path for the file or directory being managed. It must be
+// an absolute path, and as a result must start with a slash.
+Path string `yaml:"path"`
Dirname string `yaml:"dirname"` // override the path dirname
Basename string `yaml:"basename"` // override the path basename
Content *string `yaml:"content"` // nil to mark as undefined
@@ -93,6 +96,10 @@ func (obj *FileRes) Validate() error {
return fmt.Errorf("basename must not start with a slash")
}
if !strings.HasPrefix(obj.GetPath(), "/") {
return fmt.Errorf("resultant path must be absolute")
}
if obj.Content != nil && obj.Source != "" {
return fmt.Errorf("can't specify both Content and Source")
}
@@ -608,7 +615,7 @@ func (obj *FileRes) contentCheckApply(apply bool) (checkOK bool, _ error) {
}
// content is not defined, leave it alone...
-if obj.Content == nil {
+if obj.Content == nil && obj.Source == "" {
return true, nil
}

View File

@@ -146,3 +146,19 @@ func TestMiscEncodeDecode2(t *testing.T) {
t.Errorf("The input and output Res values do not match: %+v", err)
}
}
func TestFileAbsolute1(t *testing.T) {
// file resource paths should be absolute
f1 := &FileRes{
Path: "tmp/a/b", // some relative file
}
f2 := &FileRes{
Path: "tmp/a/b/", // some relative dir
}
f3 := &FileRes{
Path: "tmp", // some short relative file
}
if f1.Validate() == nil || f2.Validate() == nil || f3.Validate() == nil {
t.Errorf("file res should have failed validate")
}
}

View File

@@ -41,6 +41,7 @@ const groupFile = "/etc/group"
// GroupRes is a user group resource.
type GroupRes struct {
traits.Base // add the base methods without re-implementation
traits.Edgeable
init *engine.Init
@@ -266,6 +267,11 @@ type GroupUID struct {
gid *uint32
}
// AutoEdges returns the AutoEdge interface.
func (obj *GroupRes) AutoEdges() (engine.AutoEdge, error) {
return nil, nil
}
// IFF aka if and only if they are equivalent, return true. If not, false.
func (obj *GroupUID) IFF(uid engine.ResUID) bool {
res, ok := uid.(*GroupUID)

View File

@@ -119,9 +119,6 @@ func (obj *NetRes) Validate() error {
}
// validate network address input
-if (obj.Addrs == nil) != (obj.Gateway == "") {
-return fmt.Errorf("addrs and gateway must both be set or both be empty")
-}
if obj.Addrs != nil {
for _, addr := range obj.Addrs {
if _, _, err := net.ParseCIDR(addr); err != nil {
@@ -882,7 +879,11 @@ func (obj *socketSet) nfd() int {
// and fdPipe. See man select for more info.
func (obj *socketSet) fdSet() *unix.FdSet {
fdSet := &unix.FdSet{}
+// Generate the bitmask representing the file descriptors in the socketSet.
+// The rightmost bit corresponds to file descriptor zero, and each bit to
+// the left represents the next file descriptor in the sequence of natural
+// numbers. E.g. the FdSet containing fds 0 and 4 is 10001.
fdSet.Bits[obj.fdEvents/64] |= 1 << uint(obj.fdEvents)
-fdSet.Bits[obj.fdPipe/64] |= 1 << uint(obj.fdPipe) // fd = 3 becomes 100 if we add 5, we get 10100
+fdSet.Bits[obj.fdPipe/64] |= 1 << uint(obj.fdPipe)
return fdSet
}

View File

@@ -0,0 +1,166 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"bytes"
"strings"
"testing"
"golang.org/x/sys/unix"
)
// test cases for NetRes.unitFileContents()
var unitFileContentsTests = []struct {
dev string
in *NetRes
out []byte
}{
{
"eth0",
&NetRes{
State: "up",
Addrs: []string{"192.168.42.13/24"},
Gateway: "192.168.42.1",
},
[]byte(
strings.Join(
[]string{
"[Match]",
"Name=eth0",
"[Network]",
"Address=192.168.42.13/24",
"Gateway=192.168.42.1",
},
"\n"),
),
},
{
"wlp5s0",
&NetRes{
State: "up",
Addrs: []string{"10.0.2.13/24", "10.0.2.42/24"},
Gateway: "10.0.2.1",
},
[]byte(
strings.Join(
[]string{
"[Match]",
"Name=wlp5s0",
"[Network]",
"Address=10.0.2.13/24",
"Address=10.0.2.42/24",
"Gateway=10.0.2.1",
},
"\n"),
),
},
}
// test NetRes.unitFileContents()
func TestUnitFileContents(t *testing.T) {
for _, test := range unitFileContentsTests {
test.in.SetName(test.dev)
result := test.in.unitFileContents()
if !bytes.Equal(test.out, result) {
t.Errorf("unitFileContents test wanted:\n %s, got:\n %s", test.out, result)
}
}
}
// test cases for socketSet.fdSet()
var fdSetTests = []struct {
in *socketSet
out *unix.FdSet
}{
{
&socketSet{
fdEvents: 3,
fdPipe: 4,
},
&unix.FdSet{
Bits: [16]int64{0x18}, // 11000
},
},
{
&socketSet{
fdEvents: 12,
fdPipe: 8,
},
&unix.FdSet{
Bits: [16]int64{0x1100}, // 1000100000000
},
},
{
&socketSet{
fdEvents: 9,
fdPipe: 21,
},
&unix.FdSet{
Bits: [16]int64{0x200200}, // 1000000000001000000000
},
},
}
// test socketSet.fdSet()
func TestFdSet(t *testing.T) {
for _, test := range fdSetTests {
result := test.in.fdSet()
if *result != *test.out {
t.Errorf("fdSet test wanted: %b, got: %b", *test.out, *result)
}
}
}
// test cases for socketSet.nfd()
var nfdTests = []struct {
in *socketSet
out int
}{
{
&socketSet{
fdEvents: 3,
fdPipe: 4,
},
5,
},
{
&socketSet{
fdEvents: 8,
fdPipe: 4,
},
9,
},
{
&socketSet{
fdEvents: 90,
fdPipe: 900,
},
901,
},
}
// test socketSet.nfd()
func TestNfd(t *testing.T) {
for _, test := range nfdTests {
result := test.in.nfd()
if result != test.out {
t.Errorf("nfd test wanted: %d, got: %d", test.out, result)
}
}
}

View File

@@ -52,6 +52,7 @@ func init() {
type NspawnRes struct {
traits.Base // add the base methods without re-implementation
//traits.Groupable // TODO: this would be quite useful for this resource
traits.Refreshable // needed because we embed a svc res
init *engine.Init

View File

@@ -214,7 +214,7 @@ func (obj *Conn) matchSignal(ch chan *dbus.Signal, path dbus.ObjectPath, iface s
call = bus.Call(engineUtil.DBusAddMatch, 0, args)
} else {
for _, signal := range signals {
-args := fmt.Sprintf("type='signal', path='%s', interface='%s', member'%s'", pathStr, iface, signal)
+args := fmt.Sprintf("type='signal', path='%s', interface='%s', member='%s'", pathStr, iface, signal)
argsList = append(argsList, args)
if call = bus.Call(engineUtil.DBusAddMatch, 0, args); call.Err != nil {
break // fail if any one fails

View File

@@ -34,6 +34,20 @@ func init() {
engine.RegisterResource("pkg", func() engine.Res { return &PkgRes{} })
}
const (
// PkgStateInstalled is the string that represents that the package
// should be installed.
PkgStateInstalled = "installed"
// PkgStateUninstalled is the string that represents that the package
// should be uninstalled.
PkgStateUninstalled = "uninstalled"
// PkgStateNewest is the string that represents that the package should
// be installed in the newest available version.
PkgStateNewest = "newest"
)
// PkgRes is a package resource for packagekit.
type PkgRes struct {
traits.Base // add the base methods without re-implementation
@@ -53,7 +67,7 @@ type PkgRes struct {
// Default returns some sensible defaults for this resource.
func (obj *PkgRes) Default() engine.Res {
return &PkgRes{
-State: "installed", // i think this is preferable to "latest"
+State: PkgStateInstalled, // i think this is preferable to "latest"
}
}
@@ -190,7 +204,7 @@ func (obj *PkgRes) pkgMappingHelper(bus *packagekit.Conn) (map[string]*packageki
var filter uint64 // initializes at the "zero" value of 0
filter += packagekit.PkFilterEnumArch // always search in our arch (optional!)
// we're requesting latest version, or to narrow down install choices!
-if obj.State == "newest" || obj.State == "installed" {
+if obj.State == PkgStateNewest || obj.State == PkgStateInstalled {
// if we add this, we'll still see older packages if installed
// this is an optimization, and is *optional*, this logic is
// handled inside of PackagesToPackageIDs now automatically!
@@ -283,13 +297,13 @@ func (obj *PkgRes) CheckApply(apply bool) (checkOK bool, err error) {
data, _ := result[obj.Name()] // if above didn't error, we won't either!
validState := util.BoolMapTrue(util.BoolMapValues(states))
-// obj.State == "installed" || "uninstalled" || "newest" || "4.2-1.fc23"
+// obj.State == PkgStateInstalled || PkgStateUninstalled || PkgStateNewest || "4.2-1.fc23"
switch obj.State {
-case "installed":
+case PkgStateInstalled:
fallthrough
-case "uninstalled":
+case PkgStateUninstalled:
fallthrough
-case "newest":
+case PkgStateNewest:
if validState {
return true, nil // state is correct, exit!
}
@@ -321,15 +335,15 @@ func (obj *PkgRes) CheckApply(apply bool) (checkOK bool, err error) {
// apply correct state!
obj.init.Logf("Set(%s): %s...", obj.State, obj.fmtNames(util.StrListIntersection(applyPackages, obj.getNames())))
switch obj.State {
-case "uninstalled": // run remove
+case PkgStateUninstalled: // run remove
// NOTE: packageID is different than when installed, because now
-// it has the "installed" flag added to the data portion if it!!
+// it has the "installed" flag added to the data portion of it!!
err = bus.RemovePackages(packageIDs, transactionFlags)
-case "newest": // TODO: isn't this the same operation as install, below?
+case PkgStateNewest: // TODO: isn't this the same operation as install, below?
err = bus.UpdatePackages(packageIDs, transactionFlags)
-case "installed":
+case PkgStateInstalled:
fallthrough // same method as for "set specific version", below
default: // version string
err = bus.InstallPackages(packageIDs, transactionFlags)
@@ -343,38 +357,93 @@ func (obj *PkgRes) CheckApply(apply bool) (checkOK bool, err error) {
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *PkgRes) Cmp(r engine.Res) error {
-if !obj.Compare(r) {
-return fmt.Errorf("did not compare")
-}
-return nil
-}
-// Compare two resources and return if they are equivalent.
-func (obj *PkgRes) Compare(r engine.Res) bool {
// we can only compare PkgRes to others of the same resource kind
res, ok := r.(*PkgRes)
if !ok {
-return false
+return fmt.Errorf("res is not the same kind")
}
-// if obj.Name != res.Name {
-// return false
-// }
if obj.State != res.State {
-return false
-}
-if obj.AllowUntrusted != res.AllowUntrusted {
-return false
-}
-if obj.AllowNonFree != res.AllowNonFree {
-return false
-}
-if obj.AllowUnsupported != res.AllowUnsupported {
-return false
+return fmt.Errorf("state differs: %s vs %s", obj.State, res.State)
}
-return true
+return obj.Adapts(res)
}
// Adapts compares two resources and returns an error if they are not able to be
// equivalently output compatible.
func (obj *PkgRes) Adapts(r engine.CompatibleRes) error {
res, ok := r.(*PkgRes)
if !ok {
return fmt.Errorf("res is not the same kind")
}
if obj.State != res.State {
e := fmt.Errorf("state differs in an incompatible way: %s vs %s", obj.State, res.State)
if obj.State == PkgStateUninstalled || res.State == PkgStateUninstalled {
return e
}
if stateIsVersion(obj.State) || stateIsVersion(res.State) {
return e
}
// one must be installed, and the other must be "newest"
}
if obj.AllowUntrusted != res.AllowUntrusted {
return fmt.Errorf("allowuntrusted differs: %t vs %t", obj.AllowUntrusted, res.AllowUntrusted)
}
if obj.AllowNonFree != res.AllowNonFree {
return fmt.Errorf("allownonfree differs: %t vs %t", obj.AllowNonFree, res.AllowNonFree)
}
if obj.AllowUnsupported != res.AllowUnsupported {
return fmt.Errorf("allowunsupported differs: %t vs %t", obj.AllowUnsupported, res.AllowUnsupported)
}
return nil
}
// Merge returns the best equivalent of the two resources. They must satisfy the
// Adapts test for this to work.
func (obj *PkgRes) Merge(r engine.CompatibleRes) (engine.CompatibleRes, error) {
res, ok := r.(*PkgRes)
if !ok {
return nil, fmt.Errorf("res is not the same kind")
}
if err := obj.Adapts(r); err != nil {
return nil, errwrap.Wrapf(err, "can't merge resources that aren't compatible")
}
// modify the copy, not the original
x, err := engine.ResCopy(obj) // don't call our .Copy() directly!
if err != nil {
return nil, err
}
result, ok := x.(*PkgRes)
if !ok {
// bug!
return nil, fmt.Errorf("res is not the same kind")
}
// if these two were compatible then if they're not identical, then one
// must be PkgStateNewest and the other is PkgStateInstalled, so we
// upgrade to the best common denominator
if obj.State != res.State {
result.State = PkgStateNewest
}
return result, nil
}
// Copy copies the resource. Don't call it directly, use engine.ResCopy instead.
// TODO: should this copy internal state?
func (obj *PkgRes) Copy() engine.CopyableRes {
return &PkgRes{
State: obj.State,
AllowUntrusted: obj.AllowUntrusted,
AllowNonFree: obj.AllowNonFree,
AllowUnsupported: obj.AllowUnsupported,
}
}
// PkgUID is the main UID struct for PkgRes.
@@ -552,9 +621,8 @@ func (obj *PkgRes) GroupCmp(r engine.GroupableRes) error {
if !ok {
return fmt.Errorf("resource is not the same kind")
}
-objStateIsVersion := (obj.State != "installed" && obj.State != "uninstalled" && obj.State != "newest") // must be a ver. string
-resStateIsVersion := (res.State != "installed" && res.State != "uninstalled" && res.State != "newest") // must be a ver. string
-if objStateIsVersion || resStateIsVersion {
+// TODO: what should we do about the empty string?
+if stateIsVersion(obj.State) || stateIsVersion(res.State) {
// can't merge specific version checks atm
return fmt.Errorf("resource uses a version string")
}
@@ -603,3 +671,10 @@ func ReturnSvcInFileList(fileList []string) []string {
}
return result
}
// stateIsVersion is a simple test to see if the state string is an existing
// well-known flag.
// TODO: what should we do about the empty string?
func stateIsVersion(state string) bool {
return (state != PkgStateInstalled && state != PkgStateUninstalled && state != PkgStateNewest) // must be a ver. string
}

View File

@@ -21,9 +21,12 @@ package resources
import (
"fmt"
"os/user"
"path"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
engineUtil "github.com/purpleidea/mgmt/engine/util"
"github.com/purpleidea/mgmt/util"
systemd "github.com/coreos/go-systemd/dbus" // change namespace
@@ -69,7 +72,6 @@ func (obj *SvcRes) Validate() error {
// Init runs some startup code for this resource.
func (obj *SvcRes) Init(init *engine.Init) error {
obj.init = init // save for later
return nil
}
@@ -86,6 +88,7 @@ func (obj *SvcRes) Watch() error {
}
var conn *systemd.Conn
var bus *dbus.Conn
var err error
if obj.Session {
conn, err = systemd.NewUserConnection() // user session
@@ -99,16 +102,23 @@ func (obj *SvcRes) Watch() error {
defer conn.Close()
// if we share the bus with others, we will get each others messages!!
-bus, err := util.SystemBusPrivateUsable() // don't share the bus connection!
+if obj.Session {
+bus, err = util.SessionBusPrivateUsable()
+} else {
+bus, err = util.SystemBusPrivateUsable()
+}
if err != nil {
return errwrap.Wrapf(err, "failed to connect to bus")
}
defer bus.Close()
// XXX: will this detect new units?
bus.BusObject().Call("org.freedesktop.DBus.AddMatch", 0,
"type='signal',interface='org.freedesktop.systemd1.Manager',member='Reloading'")
buschan := make(chan *dbus.Signal, 10)
defer close(buschan) // NOTE: closing a chan that contains a value is ok
bus.Signal(buschan)
defer bus.RemoveSignal(buschan) // not needed here, but nice for symmetry
// notify engine that we're running
if err := obj.init.Running(); err != nil {
@@ -119,8 +129,12 @@ func (obj *SvcRes) Watch() error {
var send = false // send event?
var invalid = false // does the svc exist or not?
var previous bool // previous invalid value
+// TODO: do we first need to call conn.Subscribe() ?
set := conn.NewSubscriptionSet() // no error should be returned
subChannel, subErrors := set.Subscribe()
//defer close(subChannel) // cannot close receive-only channel
//defer close(subErrors) // cannot close receive-only channel
var activeSet = false
for {
@@ -266,7 +280,17 @@ func (obj *SvcRes) CheckApply(apply bool) (checkOK bool, err error) {
var running = (activestate.Value == dbus.MakeVariant("active"))
var stateOK = ((obj.State == "") || (obj.State == "running" && running) || (obj.State == "stopped" && !running))
var startupOK = true // XXX: DETECT AND SET
// NOTE: if this svc resource is embedded as a composite resource inside
// of another resource using a technique such as `makeComposite()`, then
// the Init of the embedded resource is traditionally passed through and
// identical to the parent's Init. As a result, the data matches what is
// expected from the parent. (So this luckily turns out to be actually a
// thing that does help, although it is important to add the Refreshable
// trait to the parent resource, or we'll panic when we call this line.)
// It might not be recommended to use the Watch method without a thought
// to what actually happens when we would run Send(), and other methods.
var refresh = obj.init.Refresh() // do we have a pending reload to apply?
if stateOK && startupOK && !refresh {
@@ -369,7 +393,8 @@ type SvcUID struct {
// information about the resource we're matching. That data which is
// used in the IFF function, is what you see in the struct fields here.
engine.BaseUID
name string // the svc name
+session bool // user session
}
// IFF aka if and only if they are equivalent, return true. If not, false.
@@ -378,7 +403,13 @@ func (obj *SvcUID) IFF(uid engine.ResUID) bool {
if !ok {
return false
}
-return obj.name == res.name
+if obj.name != res.name {
+return false
+}
+if obj.session != res.session {
+return false
+}
+return true
}
// SvcResAutoEdges holds the state of the auto edge generator.
@@ -420,13 +451,56 @@ func (obj *SvcResAutoEdges) Test(input []bool) bool {
return true // keep going
}
-// AutoEdges returns the AutoEdge interface. In this case the systemd units.
// SvcResAutoEdgesCron holds the state of the svc -> cron auto edge generator.
type SvcResAutoEdgesCron struct {
unit string // target unit
session bool // user session
}
// Next returns the next automatic edge.
func (obj *SvcResAutoEdgesCron) Next() []engine.ResUID {
// XXX: should this be true if SvcRes State == "stopped"?
reversed := false
value := &CronUID{
BaseUID: engine.BaseUID{
Kind: "CronRes",
Reversed: &reversed,
},
unit: obj.unit, // target unit
session: obj.session, // user session
}
return []engine.ResUID{value} // we return one, even though api supports N
}
// Test takes the output of the last call to Next() and outputs true if we
// should continue.
func (obj *SvcResAutoEdgesCron) Test([]bool) bool {
return false // only get one svc -> cron edge
}
// AutoEdges returns the AutoEdge interface. In this case, systemd unit file
// resources and cron (systemd-timer) resources.
func (obj *SvcRes) AutoEdges() (engine.AutoEdge, error) {
var data []engine.ResUID
svcFiles := []string{
var svcFiles []string
svcFiles = []string{
// root svc
fmt.Sprintf("/etc/systemd/system/%s.service", obj.Name()), // takes precedence
fmt.Sprintf("/usr/lib/systemd/system/%s.service", obj.Name()), // pkg default
}
if obj.Session {
// user svc
u, err := user.Current()
if err != nil {
return nil, errwrap.Wrapf(err, "error getting current user")
}
if u.HomeDir == "" {
return nil, fmt.Errorf("user has no home directory")
}
svcFiles = []string{
path.Join(u.HomeDir, "/.config/systemd/user/", fmt.Sprintf("%s.service", obj.Name())),
}
}
for _, x := range svcFiles {
var reversed = true
data = append(data, &FileUID{
@@ -438,11 +512,18 @@ func (obj *SvcRes) AutoEdges() (engine.AutoEdge, error) {
path: x, // what matters
})
}
-return &FileResAutoEdges{
+fileEdge := &FileResAutoEdges{
data: data,
pointer: 0,
found: false,
-}, nil
+}
cronEdge := &SvcResAutoEdgesCron{
session: obj.Session,
unit: fmt.Sprintf("%s.service", obj.Name()),
}
return engineUtil.AutoEdgeCombiner(fileEdge, cronEdge)
}
// UIDs includes all params to make a unique identification of this object.
@@ -450,7 +531,8 @@ func (obj *SvcRes) AutoEdges() (engine.AutoEdge, error) {
func (obj *SvcRes) UIDs() []engine.ResUID {
x := &SvcUID{
BaseUID: engine.BaseUID{Name: obj.Name(), Kind: obj.Kind()},
name: obj.Name(), // svc name
+session: obj.Session, // user session
}
return []engine.ResUID{x}
}

View File

@@ -40,3 +40,9 @@ func (obj *Edgeable) AutoEdgeMeta() *engine.AutoEdgeMeta {
}
return obj.meta
}
// SetAutoEdgeMeta lets you set all of the meta params for the automatic edges
// trait in a single call.
func (obj *Edgeable) SetAutoEdgeMeta(meta *engine.AutoEdgeMeta) {
obj.meta = meta
}

View File

@@ -47,6 +47,12 @@ func (obj *Groupable) AutoGroupMeta() *engine.AutoGroupMeta {
return obj.meta
}
// SetAutoGroupMeta lets you set all of the meta params for the automatic
// grouping trait in a single call.
func (obj *Groupable) SetAutoGroupMeta(meta *engine.AutoGroupMeta) {
obj.meta = meta
}
// GroupCmp compares two resources and decides if they're suitable for grouping.
// You'll probably want to override this method when implementing a resource...
// This base implementation assumes not, so override me!

View File

@@ -38,3 +38,9 @@ func (obj *Meta) MetaParams() *engine.MetaParams {
}
return obj.meta
}
// SetMetaParams lets you set all of the meta params for the resource in a
// single call.
func (obj *Meta) SetMetaParams(meta *engine.MetaParams) {
obj.meta = meta
}

View File

@@ -19,6 +19,7 @@ package util
import (
"bytes"
"context"
"encoding/base64"
"encoding/gob"
"fmt"
@@ -30,6 +31,7 @@ import (
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/lang/types"
"github.com/godbus/dbus"
errwrap "github.com/pkg/errors"
)
@@ -44,6 +46,20 @@ const (
// DBusRemoveMatch is the dbus method to remove a previously defined
// AddMatch rule.
DBusRemoveMatch = DBusInterface + ".RemoveMatch"
// DBusSystemd1Path is the base systemd1 path.
DBusSystemd1Path = "/org/freedesktop/systemd1"
// DBusSystemd1Iface is the base systemd1 interface.
DBusSystemd1Iface = "org.freedesktop.systemd1"
// DBusSystemd1ManagerIface is the systemd manager interface used for
// interfacing with systemd units.
DBusSystemd1ManagerIface = DBusSystemd1Iface + ".Manager"
// DBusRestartUnit is the dbus method for restarting systemd units.
DBusRestartUnit = DBusSystemd1ManagerIface + ".RestartUnit"
// DBusStopUnit is the dbus method for stopping systemd units.
DBusStopUnit = DBusSystemd1ManagerIface + ".StopUnit"
// DBusSignalJobRemoved is the name of the dbus signal that produces a
// message when a dbus job is done (or has errored.)
DBusSignalJobRemoved = "JobRemoved"
)
// ResToB64 encodes a resource to a base64 encoded string (after serialization).
@@ -259,3 +275,94 @@ func GetGID(group string) (int, error) {
return -1, errwrap.Wrapf(err, "group lookup error (%s)", group)
}
// RestartUnit restarts the given dbus unit and waits for it to finish starting.
func RestartUnit(ctx context.Context, conn *dbus.Conn, unit string) error {
return unitStateAction(ctx, conn, unit, DBusRestartUnit)
}
// StopUnit stops the given dbus unit and waits for it to finish stopping.
func StopUnit(ctx context.Context, conn *dbus.Conn, unit string) error {
return unitStateAction(ctx, conn, unit, DBusStopUnit)
}
// unitStateAction is a helper function to perform state actions on systemd
// units. It waits for the requested job to be complete before it returns.
func unitStateAction(ctx context.Context, conn *dbus.Conn, unit, action string) error {
// Add a dbus rule to watch the systemd1 JobRemoved signal, used to wait
// until the job completes.
args := []string{
"type='signal'",
fmt.Sprintf("path='%s'", DBusSystemd1Path),
fmt.Sprintf("interface='%s'", DBusSystemd1ManagerIface),
fmt.Sprintf("member='%s'", DBusSignalJobRemoved),
fmt.Sprintf("arg2='%s'", unit),
}
// match dbus messages
if call := conn.BusObject().Call(DBusAddMatch, 0, strings.Join(args, ",")); call.Err != nil {
return errwrap.Wrapf(call.Err, "error creating dbus call")
}
defer conn.BusObject().Call(DBusRemoveMatch, 0, args) // ignore the error
// channel for godbus signal
ch := make(chan *dbus.Signal)
defer close(ch)
// subscribe the channel to the signal
conn.Signal(ch)
defer conn.RemoveSignal(ch)
// perform requested action on specified unit
sd1 := conn.Object(DBusSystemd1Iface, dbus.ObjectPath(DBusSystemd1Path))
if call := sd1.Call(action, 0, unit, "fail"); call.Err != nil {
return errwrap.Wrapf(call.Err, "error running %s on unit: %s", action, unit)
}
// wait for the job to be removed, indicating completion
select {
case event, ok := <-ch:
if !ok {
return fmt.Errorf("channel closed unexpectedly")
}
if event.Body[3] != "done" {
return fmt.Errorf("unexpected job status: %s", event.Body[3])
}
case <-ctx.Done():
return fmt.Errorf("action %s on %s failed due to context timeout", action, unit)
}
return nil
}
// autoEdgeCombiner holds the state of the auto edge generator.
type autoEdgeCombiner struct {
ae []engine.AutoEdge
ptr int
}
// Next returns the next automatic edge.
func (obj *autoEdgeCombiner) Next() []engine.ResUID {
if len(obj.ae) <= obj.ptr {
panic("shouldn't be called anymore!")
}
return obj.ae[obj.ptr].Next() // return the next edge
}
// Test takes the output of the last call to Next() and outputs true if we
// should continue.
func (obj *autoEdgeCombiner) Test(input []bool) bool {
if !obj.ae[obj.ptr].Test(input) {
obj.ptr++ // match found, on to the next
}
return len(obj.ae) > obj.ptr // are there any auto edges left?
}
// AutoEdgeCombiner takes any number of AutoEdge structs, and combines them
// into a single one, so that the logic from each one can be built separately,
// and then combined using this utility. This makes implementing different
// AutoEdge generators much easier. This respects the Next() and Test() API,
// and ratchets through each AutoEdge entry until they have all run their
// course.
func AutoEdgeCombiner(ae ...engine.AutoEdge) (engine.AutoEdge, error) {
return &autoEdgeCombiner{
ae: ae,
}, nil
}

View File

@@ -91,32 +91,53 @@ func GetDeploys(obj Client) (map[uint64]string, error) {
return result, nil
}
// GetDeploy gets the latest deploy if id == 0, otherwise it returns the deploy
// with the specified id if it exists.
// calculateMax is a helper function.
func calculateMax(deploys map[uint64]string) uint64 {
var max uint64
for i := range deploys {
if i > max {
max = i
}
}
return max
}
// GetDeploy returns the deploy with the specified id if it exists. If you input
// an id of 0, you'll get back an empty deploy without error. This is useful so
// that you can pass through this function easily.
// FIXME: implement this more efficiently so that it doesn't have to download *all* the old deploys from etcd!
func GetDeploy(obj Client, id uint64) (string, error) {
result, err := GetDeploys(obj)
if err != nil {
return "", err
}
if id != 0 {
str, exists := result[id]
if !exists {
return "", fmt.Errorf("can't find id `%d`", id)
}
return str, nil
}
// find the latest id
var max uint64
for i := range result {
if i > max {
max = i
}
}
if max == 0 {
// don't optimize this test to the top, because it's better to catch an
// etcd failure early if we can, rather than fail later when we deploy!
if id == 0 {
return "", nil // no results yet
}
return result[max], nil
str, exists := result[id]
if !exists {
return "", fmt.Errorf("can't find id `%d`", id)
}
return str, nil
}
// GetMaxDeployID returns the maximum deploy id. If none are found, this returns
// zero. You must increment the returned value by one when you add a deploy. If
// two or more clients race for this deploy id, then the loser is not committed,
// and must repeat this GetMaxDeployID process until it succeeds with a commit!
func GetMaxDeployID(obj Client) (uint64, error) {
// TODO: this was all implemented super inefficiently, fix up for perf!
deploys, err := GetDeploys(obj) // get previous deploys
if err != nil {
return 0, errwrap.Wrapf(err, "error getting previous deploys")
}
// find the latest id
max := calculateMax(deploys)
return max, nil // found! (or zero)
}
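GetMaxDeployID's contract ("increment by one, then try to commit") can be sketched against calculateMax directly:

```go
package main

import "fmt"

// calculateMax returns the largest key in a deploy map; zero means no
// deploys were found yet.
func calculateMax(deploys map[uint64]string) uint64 {
	var max uint64
	for i := range deploys {
		if i > max {
			max = i
		}
	}
	return max
}

func main() {
	deploys := map[uint64]string{1: "first", 2: "second", 5: "fifth"}
	next := calculateMax(deploys) + 1 // the id a new deploy would claim
	// the caller must still win the transactional commit race for `next`
	fmt.Println(next)
}
```

If two clients compute the same `next`, the transaction in AddDeploy ensures only one commit succeeds and the loser retries.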
// AddDeploy adds a new deploy. It takes an id and ensures it's sequential. If
@@ -162,7 +183,7 @@ func AddDeploy(obj Client, id uint64, hash, pHash string, data *string) error {
// this way, we only generate one watch event, and only when it's needed
result, err := obj.Txn(ifs, ops, nil)
if err != nil {
return errwrap.Wrapf(err, "error creating deploy id %d: %s", id)
return errwrap.Wrapf(err, "error creating deploy id %d", id)
}
if !result.Succeeded {
return fmt.Errorf("could not create deploy id %d", id)

View File

@@ -37,12 +37,12 @@
//
// Smoke testing:
// mkdir /tmp/mgmt{A..E}
// ./mgmt run --yaml examples/etcd1a.yaml --hostname h1 --tmp-prefix --no-pgp
// ./mgmt run --yaml examples/etcd1b.yaml --hostname h2 --tmp-prefix --no-pgp --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2381 --server-urls http://127.0.0.1:2382
// ./mgmt run --yaml examples/etcd1c.yaml --hostname h3 --tmp-prefix --no-pgp --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2383 --server-urls http://127.0.0.1:2384
// ./mgmt run --hostname h1 --tmp-prefix --no-pgp yaml --yaml examples/yaml/etcd1a.yaml
// ./mgmt run --hostname h2 --tmp-prefix --no-pgp --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2381 --server-urls http://127.0.0.1:2382 yaml --yaml examples/yaml/etcd1b.yaml
// ./mgmt run --hostname h3 --tmp-prefix --no-pgp --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2383 --server-urls http://127.0.0.1:2384 yaml --yaml examples/yaml/etcd1c.yaml
// ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 put /_mgmt/idealClusterSize 3
// ./mgmt run --yaml examples/etcd1d.yaml --hostname h4 --tmp-prefix --no-pgp --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2385 --server-urls http://127.0.0.1:2386
// ./mgmt run --yaml examples/etcd1e.yaml --hostname h5 --tmp-prefix --no-pgp --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2387 --server-urls http://127.0.0.1:2388
// ./mgmt run --hostname h4 --tmp-prefix --no-pgp --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2385 --server-urls http://127.0.0.1:2386 yaml --yaml examples/yaml/etcd1d.yaml
// ./mgmt run --hostname h5 --tmp-prefix --no-pgp --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2387 --server-urls http://127.0.0.1:2388 yaml --yaml examples/yaml/etcd1e.yaml
// ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 member list
// ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2381 put /_mgmt/idealClusterSize 5
// ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2381 member list
@@ -1049,6 +1049,15 @@ func (obj *EmbdEtcd) rawGet(ctx context.Context, gq *GQ) (result map[string]stri
log.Printf("Trace: Etcd: rawGet()")
}
obj.rLock.RLock()
// TODO: we're checking if this is nil to workaround a nil ptr bug...
if obj.client == nil { // bug?
obj.rLock.RUnlock()
return nil, fmt.Errorf("client is nil")
}
if obj.client.KV == nil { // bug?
obj.rLock.RUnlock()
return nil, fmt.Errorf("client.KV is nil")
}
response, err := obj.client.KV.Get(ctx, gq.path, gq.opts...)
obj.rLock.RUnlock()
if err != nil || response == nil {
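The nil-pointer workaround above is a guarded-read pattern under a sync.RWMutex: every early return must release the read lock first. A minimal standalone sketch (toy map "client" in place of the etcd client):

```go
package main

import (
	"fmt"
	"sync"
)

type store struct {
	mu     sync.RWMutex
	client map[string]string // nil until connected, like obj.client
}

// get guards against the client being torn down concurrently; each
// early return releases the read lock before erroring out.
func (s *store) get(key string) (string, error) {
	s.mu.RLock()
	if s.client == nil { // workaround: connection already closed?
		s.mu.RUnlock()
		return "", fmt.Errorf("client is nil")
	}
	v := s.client[key]
	s.mu.RUnlock()
	return v, nil
}

func main() {
	s := &store{}
	if _, err := s.get("x"); err != nil {
		fmt.Println(err)
	}
	s.client = map[string]string{"x": "y"}
	v, _ := s.get("x")
	fmt.Println(v)
}
```

Forgetting the RUnlock on either early-return path would deadlock the next writer, which is why both nil checks in rawGet unlock explicitly.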

View File

@@ -341,6 +341,18 @@ func (obj *Fs) Create(name string) (afero.File, error) {
return fileCreate(obj, name)
}
// Chown is the equivalent of os.Chown. It returns ErrNotImplemented.
func (obj *Fs) Chown(name string, uid, gid int) error {
// FIXME: Implement Chown
return ErrNotImplemented
}
// Lchown is the equivalent of os.Lchown. It returns ErrNotImplemented.
func (obj *Fs) Lchown(name string, uid, gid int) error {
// FIXME: Implement Lchown
return ErrNotImplemented
}
// Mkdir makes a new directory.
func (obj *Fs) Mkdir(name string, perm os.FileMode) error {
if err := obj.mount(); err != nil {

View File

@@ -20,17 +20,21 @@
package fs_test // named this way to make it easier for examples
import (
"fmt"
"io"
"os/exec"
"syscall"
"testing"
"github.com/purpleidea/mgmt/etcd"
etcdfs "github.com/purpleidea/mgmt/etcd/fs"
"github.com/purpleidea/mgmt/integration"
"github.com/purpleidea/mgmt/util"
errwrap "github.com/pkg/errors"
"github.com/spf13/afero"
)
// XXX: spawn etcd for this test, like `cdtmpmkdir && etcd` and then kill it...
// XXX: write a bunch more tests to test this
// TODO: apparently using 0666 is equivalent to respecting the current umask
@@ -39,13 +43,48 @@ const (
superblock = "/some/superblock" // TODO: generate randomly per test?
)
// Ensure that etcdfs.Fs implements afero.Fs.
var _ afero.Fs = &etcdfs.Fs{}
// runEtcd starts etcd locally via the mgmt binary. It returns a function to
// kill the process which the caller must use to clean up.
func runEtcd() (func() error, error) {
// Run mgmt as etcd backend to ensure that we are testing against the
// appropriate vendored version of etcd rather than some unknown version.
cmdName, err := integration.BinaryPath()
if err != nil {
return nil, errwrap.Wrapf(err, "error getting binary path")
}
cmd := exec.Command(cmdName, "run", "--tmp-prefix", "empty") // empty GAPI
if err := cmd.Start(); err != nil {
return nil, errwrap.Wrapf(err, "error starting command %v", cmd)
}
return func() error {
// cleanup when we're done
if err := cmd.Process.Signal(syscall.SIGQUIT); err != nil {
fmt.Printf("error sending quit signal: %+v\n", err)
}
if err := cmd.Process.Kill(); err != nil {
return errwrap.Wrapf(err, "error killing process")
}
return nil
}, nil
}
func TestFs1(t *testing.T) {
stopEtcd, err := runEtcd()
if err != nil {
t.Errorf("setup error: %+v", err)
}
defer stopEtcd() // ignore the error
etcdClient := &etcd.ClientEtcd{
Seeds: []string{"localhost:2379"}, // endpoints
}
if err := etcdClient.Connect(); err != nil {
t.Logf("client connection error: %+v", err)
t.Errorf("client connection error: %+v", err)
return
}
defer etcdClient.Destroy()
@@ -58,22 +97,21 @@ func TestFs1(t *testing.T) {
//var etcdFs afero.Fs = NewEtcdFs()
if err := etcdFs.Mkdir("/", umask); err != nil {
t.Logf("error: %+v", err)
t.Logf("mkdir error: %+v", err)
if err != etcdfs.ErrExist {
t.Errorf("mkdir error: %+v", err)
return
}
}
if err := etcdFs.Mkdir("/tmp", umask); err != nil {
t.Logf("error: %+v", err)
if err != etcdfs.ErrExist {
return
}
t.Errorf("mkdir2 error: %+v", err)
return
}
fi, err := etcdFs.Stat("/tmp")
if err != nil {
t.Logf("stat error: %+v", err)
t.Errorf("stat error: %+v", err)
return
}
@@ -82,7 +120,7 @@ func TestFs1(t *testing.T) {
f, err := etcdFs.Create("/tmp/foo")
if err != nil {
t.Logf("error: %+v", err)
t.Errorf("create error: %+v", err)
return
}
@@ -90,104 +128,77 @@ func TestFs1(t *testing.T) {
i, err := f.WriteString("hello world!\n")
if err != nil {
t.Logf("error: %+v", err)
t.Errorf("writestring error: %+v", err)
return
}
t.Logf("wrote: %d", i)
if err := etcdFs.Mkdir("/tmp/d1", umask); err != nil {
t.Logf("error: %+v", err)
if err != etcdfs.ErrExist {
return
}
}
if err := etcdFs.Rename("/tmp/foo", "/tmp/bar"); err != nil {
t.Logf("rename error: %+v", err)
t.Errorf("mkdir3 error: %+v", err)
return
}
//f2, err := etcdFs.Create("/tmp/bar")
//if err != nil {
// t.Logf("error: %+v", err)
// return
//}
if err := etcdFs.Rename("/tmp/foo", "/tmp/bar"); err != nil {
t.Errorf("rename error: %+v", err)
return
}
//i2, err := f2.WriteString("hello bar!\n")
//if err != nil {
// t.Logf("error: %+v", err)
// return
//}
//t.Logf("wrote: %d", i2)
f2, err := etcdFs.Create("/tmp/bar")
if err != nil {
t.Errorf("create2 error: %+v", err)
return
}
i2, err := f2.WriteString("hello bar!\n")
if err != nil {
t.Errorf("writestring2 error: %+v", err)
return
}
t.Logf("wrote: %d", i2)
dir, err := etcdFs.Open("/tmp")
if err != nil {
t.Logf("error: %+v", err)
t.Errorf("open error: %+v", err)
return
}
names, err := dir.Readdirnames(-1)
if err != nil && err != io.EOF {
t.Logf("error: %+v", err)
t.Errorf("readdirnames error: %+v", err)
return
}
for _, name := range names {
t.Logf("name in /tmp: %+v", name)
return
}
//dir, err := etcdFs.Open("/")
//if err != nil {
// t.Logf("error: %+v", err)
// return
//}
//names, err := dir.Readdirnames(-1)
//if err != nil && err != io.EOF {
// t.Logf("error: %+v", err)
// return
//}
//for _, name := range names {
// t.Logf("name in /: %+v", name)
//}
dir, err = etcdFs.Open("/")
if err != nil {
t.Errorf("open2 error: %+v", err)
return
}
names, err = dir.Readdirnames(-1)
if err != nil && err != io.EOF {
t.Errorf("readdirnames2 error: %+v", err)
return
}
for _, name := range names {
t.Logf("name in /: %+v", name)
}
}
func TestFs2(t *testing.T) {
stopEtcd, err := runEtcd()
if err != nil {
t.Errorf("setup error: %+v", err)
}
defer stopEtcd() // ignore the error
etcdClient := &etcd.ClientEtcd{
Seeds: []string{"localhost:2379"}, // endpoints
}
if err := etcdClient.Connect(); err != nil {
t.Logf("client connection error: %+v", err)
return
}
defer etcdClient.Destroy()
etcdFs := &etcdfs.Fs{
Client: etcdClient.GetClient(),
Metadata: superblock,
DataPrefix: etcdfs.DefaultDataPrefix,
}
tree, err := util.FsTree(etcdFs, "/")
if err != nil {
t.Errorf("tree error: %+v", err)
return
}
t.Logf("tree: \n%s", tree)
tree2, err := util.FsTree(etcdFs, "/tmp")
if err != nil {
t.Errorf("tree2 error: %+v", err)
return
}
t.Logf("tree2: \n%s", tree2)
}
func TestFs3(t *testing.T) {
etcdClient := &etcd.ClientEtcd{
Seeds: []string{"localhost:2379"}, // endpoints
}
if err := etcdClient.Connect(); err != nil {
t.Logf("client connection error: %+v", err)
t.Errorf("client connection error: %+v", err)
return
}
defer etcdClient.Destroy()
@@ -208,15 +219,15 @@ func TestFs3(t *testing.T) {
var memFs = afero.NewMemMapFs()
if err := util.CopyFs(etcdFs, memFs, "/", "/", false); err != nil {
t.Errorf("CopyFs error: %+v", err)
t.Errorf("copyfs error: %+v", err)
return
}
if err := util.CopyFs(etcdFs, memFs, "/", "/", true); err != nil {
t.Errorf("CopyFs2 error: %+v", err)
t.Errorf("copyfs2 error: %+v", err)
return
}
if err := util.CopyFs(etcdFs, memFs, "/", "/tmp/d1/", false); err != nil {
t.Errorf("CopyFs3 error: %+v", err)
t.Errorf("copyfs3 error: %+v", err)
return
}
@@ -227,3 +238,180 @@ func TestFs3(t *testing.T) {
}
t.Logf("tree2: \n%s", tree2)
}
func TestFs3(t *testing.T) {
stopEtcd, err := runEtcd()
if err != nil {
t.Errorf("setup error: %+v", err)
}
defer stopEtcd() // ignore the error
etcdClient := &etcd.ClientEtcd{
Seeds: []string{"localhost:2379"}, // endpoints
}
if err := etcdClient.Connect(); err != nil {
t.Errorf("client connection error: %+v", err)
return
}
defer etcdClient.Destroy()
etcdFs := &etcdfs.Fs{
Client: etcdClient.GetClient(),
Metadata: superblock,
DataPrefix: etcdfs.DefaultDataPrefix,
}
if err := etcdFs.Mkdir("/tmp", umask); err != nil {
t.Errorf("mkdir error: %+v", err)
}
if err := etcdFs.Mkdir("/tmp/foo", umask); err != nil {
t.Errorf("mkdir2 error: %+v", err)
}
if err := etcdFs.Mkdir("/tmp/foo/bar", umask); err != nil {
t.Errorf("mkdir3 error: %+v", err)
}
tree, err := util.FsTree(etcdFs, "/")
if err != nil {
t.Errorf("tree error: %+v", err)
return
}
t.Logf("tree: \n%s", tree)
var memFs = afero.NewMemMapFs()
if err := util.CopyFs(etcdFs, memFs, "/tmp/foo/bar", "/", false); err != nil {
t.Errorf("copyfs error: %+v", err)
return
}
if err := util.CopyFs(etcdFs, memFs, "/tmp/foo/bar", "/baz/", false); err != nil {
t.Errorf("copyfs2 error: %+v", err)
return
}
tree2, err := util.FsTree(memFs, "/")
if err != nil {
t.Errorf("tree2 error: %+v", err)
return
}
t.Logf("tree2: \n%s", tree2)
if _, err := memFs.Stat("/bar"); err != nil {
t.Errorf("stat error: %+v", err)
return
}
if _, err := memFs.Stat("/baz/bar"); err != nil {
t.Errorf("stat2 error: %+v", err)
return
}
}
func TestEtcdCopyFs0(t *testing.T) {
tests := []struct {
mkdir, cpsrc, cpdst, check string
force bool
}{
{
mkdir: "/",
cpsrc: "/",
cpdst: "/",
check: "/",
force: false,
},
{
mkdir: "/",
cpsrc: "/",
cpdst: "/",
check: "/",
force: true,
},
{
mkdir: "/",
cpsrc: "/",
cpdst: "/tmp/d1",
check: "/tmp/d1",
force: false,
},
{
mkdir: "/tmp/foo/bar",
cpsrc: "/tmp/foo/bar",
cpdst: "/",
check: "/bar",
force: false,
},
{
mkdir: "/tmp/foo/bar",
cpsrc: "/tmp/foo/bar",
cpdst: "/baz/",
check: "/baz/bar",
force: false,
},
{
mkdir: "/tmp/foo/bar",
cpsrc: "/tmp/foo",
cpdst: "/baz/",
check: "/baz/foo/bar",
force: false,
},
{
mkdir: "/tmp/this/is/a/really/deep/directory/to/make/sure/we/can/handle/deep/copies",
cpsrc: "/tmp/this/is/a",
cpdst: "/that/was/",
check: "/that/was/a/really/deep/directory/to/make/sure/we/can/handle/deep/copies",
force: false,
},
}
for _, tt := range tests {
stopEtcd, err := runEtcd()
if err != nil {
t.Errorf("setup error: %+v", err)
return
}
defer stopEtcd() // ignore the error
etcdClient := &etcd.ClientEtcd{
Seeds: []string{"localhost:2379"}, // endpoints
}
if err := etcdClient.Connect(); err != nil {
t.Errorf("client connection error: %+v", err)
return
}
defer etcdClient.Destroy()
etcdFs := &etcdfs.Fs{
Client: etcdClient.GetClient(),
Metadata: superblock,
DataPrefix: etcdfs.DefaultDataPrefix,
}
if err := etcdFs.MkdirAll(tt.mkdir, umask); err != nil {
t.Errorf("mkdir error: %+v", err)
return
}
tree, err := util.FsTree(etcdFs, "/")
if err != nil {
t.Errorf("tree error: %+v", err)
return
}
t.Logf("tree: \n%s", tree)
var memFs = afero.NewMemMapFs()
if err := util.CopyFs(etcdFs, memFs, tt.cpsrc, tt.cpdst, tt.force); err != nil {
t.Errorf("copyfs error: %+v", err)
return
}
tree2, err := util.FsTree(memFs, "/")
if err != nil {
t.Errorf("tree2 error: %+v", err)
return
}
t.Logf("tree2: \n%s", tree2)
if _, err := memFs.Stat(tt.check); err != nil {
t.Errorf("stat error: %+v", err)
return
}
}
}
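Reading the table above, the check column appears to follow "the base name of cpsrc lands under cpdst" (with "/" as the identity source). A sketch of that guessed mapping — destPath is a hypothetical helper for illustration, not the real CopyFs logic, and the check entries stat a path *inside* the copied tree, which may sit below destPath's result:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// destPath guesses where CopyFs roots a copied tree: the base name of
// cpsrc lands under cpdst, except that copying "/" keeps dst as-is.
// Illustration of the test table only, not the real implementation.
func destPath(cpsrc, cpdst string) string {
	if cpsrc == "/" {
		return filepath.Clean(cpdst)
	}
	return filepath.Join(cpdst, filepath.Base(cpsrc))
}

func main() {
	fmt.Println(destPath("/tmp/foo/bar", "/baz/")) // /baz/bar
	fmt.Println(destPath("/tmp/foo", "/baz/"))     // /baz/foo
	fmt.Println(destPath("/", "/tmp/d1"))          // /tmp/d1
}
```

For the deep-directory case, destPath("/tmp/this/is/a", "/that/was/") gives "/that/was/a", and the table's check path continues below it.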

View File

@@ -28,7 +28,8 @@ import (
// A successful call returns err == nil, not err == EOF. Because ReadAll is
// defined to read from src until EOF, it does not treat an EOF from Read
// as an error to be reported.
//func ReadAll(r io.Reader) ([]byte, error) {
//func (obj *Fs) ReadAll(r io.Reader) ([]byte, error) {
// // NOTE: doesn't need Fs, same as ioutil.ReadAll package
// return afero.ReadAll(r)
//}

View File

@@ -1,4 +1,6 @@
# it was a lovely surprise to me, when i realized that mgmt had the answer!
import "fmt"
import "example"
print "answer" {
msg => printf("the answer to life, the universe, and everything is: %d", answer()),
msg => fmt.printf("the answer to life, the universe, and everything is: %d", example.answer()),
}

View File

@@ -1,3 +1,6 @@
import "fmt"
import "sys"
$set = ["a", "b", "c", "d",]
$c1 = "x1" in ["x1", "x2", "x3",]
@@ -5,18 +8,18 @@ $c2 = 42 in [4, 13, 42,]
$c3 = "x" in $set
$c4 = "b" in $set
$s = printf("1: %t, 2: %t, 3: %t, 4: %t\n", $c1, $c2, $c3, $c4)
$s = fmt.printf("1: %t, 2: %t, 3: %t, 4: %t\n", $c1, $c2, $c3, $c4)
file "/tmp/mgmt/contains" {
content => $s,
}
$x = if hostname() in ["h1", "h3",] {
printf("i (%s) am one of the chosen few!\n", hostname())
$x = if sys.hostname() in ["h1", "h3",] {
fmt.printf("i (%s) am one of the chosen few!\n", sys.hostname())
} else {
printf("i (%s) was not chosen :(\n", hostname())
fmt.printf("i (%s) was not chosen :(\n", sys.hostname())
}
file "/tmp/mgmt/hello-${hostname()}" {
file "/tmp/mgmt/hello-${sys.hostname()}" {
content => $x,
}

9
examples/lang/cron0.mcl Normal file
View File

@@ -0,0 +1,9 @@
cron "purpleidea-oneshot" {
session => true,
trigger => "OnBootSec",
time => "60",
}
svc "purpleidea-oneshot" {
session => true,
}

3
examples/lang/cron1.mcl Normal file
View File

@@ -0,0 +1,3 @@
cron "purpleidea-oneshot" {
state => "absent",
}

8
examples/lang/cron2.mcl Normal file
View File

@@ -0,0 +1,8 @@
cron "purpleidea-oneshot" {
trigger => "OnUnitActiveSec",
time => "2minutes",
}
svc "purpleidea-oneshot" {}
file "/etc/systemd/system/purpleidea-oneshot.service" {}

13
examples/lang/cron3.mcl Normal file
View File

@@ -0,0 +1,13 @@
$home = getenv("HOME")
cron "purpleidea-oneshot" {
session => true,
trigger => "OnCalendar",
time => "*:*:0",
}
svc "purpleidea-oneshot" {
session => true,
}
file printf("%s/.config/systemd/user/purpleidea-oneshot.service", $home) {}

17
examples/lang/cron4.mcl Normal file
View File

@@ -0,0 +1,17 @@
$home = getenv("HOME")
cron "purpleidea-oneshot" {
state => "absent",
session => true,
trigger => "OnCalendar",
time => "*:*:0",
}
svc "purpleidea-oneshot" {
state => "stopped",
session => true,
}
file printf("%s/.config/systemd/user/purpleidea-oneshot.service", $home) {
state => "absent",
}

View File

@@ -1,4 +1,6 @@
$d = datetime()
import "datetime"
$d = datetime.now()
file "/tmp/mgmt/datetime" {
content => template("Hello! It is now: {{ datetime_print . }}\n", $d),
}

View File

@@ -1,11 +1,14 @@
$secplusone = datetime() + $ayear
import "datetime"
import "sys"
$secplusone = datetime.now() + $ayear
# note the order of the assignment (year can come later in the code)
$ayear = 60 * 60 * 24 * 365 # is a year in seconds (31536000)
$tmplvalues = struct{year => $secplusone, load => $theload,}
$theload = structlookup(load(), "x1")
$theload = structlookup(sys.load(), "x1")
if 5 > 3 {
file "/tmp/mgmt/datetime" {

View File

@@ -1,11 +1,14 @@
$secplusone = datetime() + $ayear
import "datetime"
import "sys"
$secplusone = datetime.now() + $ayear
# note the order of the assignment (year can come later in the code)
$ayear = 60 * 60 * 24 * 365 # is a year in seconds (31536000)
$tmplvalues = struct{year => $secplusone, load => $theload, vumeter => $vumeter,}
$theload = structlookup(load(), "x1")
$theload = structlookup(sys.load(), "x1")
$vumeter = vumeter("====", 10, 0.9)

View File

@@ -1,20 +1,23 @@
# read and print environment variable
# env TEST=123 EMPTY= ./mgmt run --tmp-prefix --lang=examples/lang/env0.mcl --converged-timeout=5
# env TEST=123 EMPTY= ./mgmt run --tmp-prefix --converged-timeout=5 lang --lang=examples/lang/env0.mcl
$x = getenv("TEST", "321")
import "fmt"
import "sys"
$x = sys.getenv("TEST", "321")
print "print1" {
msg => printf("the value of the environment variable TEST is: %s", $x),
msg => fmt.printf("the value of the environment variable TEST is: %s", $x),
}
$y = getenv("DOESNOTEXIT", "321")
$y = sys.getenv("DOESNOTEXIT", "321")
print "print2" {
msg => printf("environment variable DOESNOTEXIT does not exist, defaulting to: %s", $y),
msg => fmt.printf("environment variable DOESNOTEXIT does not exist, defaulting to: %s", $y),
}
$z = getenv("EMPTY", "456")
$z = sys.getenv("EMPTY", "456")
print "print3" {
msg => printf("same goes for empty variables like EMPTY: %s", $z),
msg => fmt.printf("same goes for empty variables like EMPTY: %s", $z),
}

View File

@@ -1,9 +1,12 @@
$env = env()
import "fmt"
import "sys"
$env = sys.env()
$m = maplookup($env, "GOPATH", "")
print "print0" {
msg => if hasenv("GOPATH") {
printf("GOPATH is: %s", $m)
msg => if sys.hasenv("GOPATH") {
fmt.printf("GOPATH is: %s", $m)
} else {
"GOPATH is missing!"
},

View File

@@ -1,13 +1,15 @@
# run this example with these commands
# watch -n 0.1 'tail *' # run this in /tmp/mgmt/
# time ./mgmt run --lang examples/lang/exchange0.mcl --hostname h1 --ideal-cluster-size 1 --tmp-prefix --no-pgp
# time ./mgmt run --lang examples/lang/exchange0.mcl --hostname h2 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2381 --server-urls http://127.0.0.1:2382 --tmp-prefix --no-pgp
# time ./mgmt run --lang examples/lang/exchange0.mcl --hostname h3 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2383 --server-urls http://127.0.0.1:2384 --tmp-prefix --no-pgp
# time ./mgmt run --lang examples/lang/exchange0.mcl --hostname h4 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2385 --server-urls http://127.0.0.1:2386 --tmp-prefix --no-pgp
# time ./mgmt run --hostname h1 --ideal-cluster-size 1 --tmp-prefix --no-pgp lang --lang examples/lang/exchange0.mcl
# time ./mgmt run --hostname h2 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2381 --server-urls http://127.0.0.1:2382 --tmp-prefix --no-pgp lang --lang examples/lang/exchange0.mcl
# time ./mgmt run --hostname h3 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2383 --server-urls http://127.0.0.1:2384 --tmp-prefix --no-pgp lang --lang examples/lang/exchange0.mcl
# time ./mgmt run --hostname h4 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2385 --server-urls http://127.0.0.1:2386 --tmp-prefix --no-pgp lang --lang examples/lang/exchange0.mcl
import "sys"
$rand = random1(8)
$exchanged = exchange("keyns", $rand)
file "/tmp/mgmt/exchange-${hostname()}" {
file "/tmp/mgmt/exchange-${sys.hostname()}" {
content => template("Found: {{ . }}\n", $exchanged),
}

View File

@@ -1,4 +1,6 @@
$dt = datetime()
import "datetime"
$dt = datetime.now()
$hystvalues = {"ix0" => $dt, "ix1" => $dt{1}, "ix2" => $dt{2}, "ix3" => $dt{3},}

View File

@@ -1,4 +1,6 @@
file "/tmp/mgmt/${hostname()}" {
content => "hello from ${hostname()}!\n",
import "sys"
file "/tmp/mgmt/${sys.hostname()}" {
content => "hello from ${sys.hostname()}!\n",
state => "exists",
}

View File

@@ -0,0 +1,5 @@
import "fmt"
test "printf" {
anotherstr => fmt.printf("the answer is: %d", 42),
}

View File

@@ -1,9 +1,11 @@
import "fmt"
$x1 = ["a", "b", "c", "d",]
print "print4" {
msg => printf("length is: %d", len($x1)),
msg => fmt.printf("length is: %d", len($x1)),
}
$x2 = {"a" => 1, "b" => 2, "c" => 3,}
print "print3" {
msg => printf("length is: %d", len($x2)),
msg => fmt.printf("length is: %d", len($x2)),
}

View File

@@ -1,9 +1,12 @@
$theload = load()
import "fmt"
import "sys"
$theload = sys.load()
$x1 = structlookup($theload, "x1")
$x5 = structlookup($theload, "x5")
$x15 = structlookup($theload, "x15")
print "print1" {
msg => printf("load average: %f, %f, %f", $x1, $x5, $x15),
msg => fmt.printf("load average: %f, %f, %f", $x1, $x5, $x15),
}

View File

@@ -1,13 +1,15 @@
import "fmt"
$m = {"k1" => 42, "k2" => 13,}
$found = maplookup($m, "k1", 99)
print "print1" {
msg => printf("found value of: %d", $found),
msg => fmt.printf("found value of: %d", $found),
}
$notfound = maplookup($m, "k3", 99)
print "print2" {
msg => printf("notfound value of: %d", $notfound),
msg => fmt.printf("notfound value of: %d", $notfound),
}

View File

@@ -1,4 +1,6 @@
import "fmt"
test "t1" {
int64 => (4 + 32) * 15 - 8,
anotherstr => printf("the answer is: %d", 42),
anotherstr => fmt.printf("the answer is: %d", 42),
}

View File

@@ -1,3 +1,6 @@
import "fmt"
import "math"
print "print0" {
msg => printf("13.0 ^ 4.2 is: %f", pow(13.0, 4.2)),
msg => fmt.printf("13.0 ^ 4.2 is: %f", math.pow(13.0, 4.2)),
}

View File

@@ -0,0 +1 @@
# empty metadata file (use defaults)

View File

@@ -0,0 +1,3 @@
main: "main/hello.mcl" # this is not the default, the default is "main.mcl"
files: "files/" # these are some extra files we can use (is the default)
path: "path/" # where to look for modules, defaults to using a global

View File

@@ -0,0 +1,2 @@
main: "main.mcl"
files: "files/" # these are some extra files we can use (is the default)

View File

@@ -0,0 +1,2 @@
main: "main.mcl"
files: "files/" # these are some extra files we can use (is the default)

View File

@@ -1,8 +1,10 @@
import "fmt"
test "printf-a" {
anotherstr => printf("the %s is: %d", "answer", 42),
anotherstr => fmt.printf("the %s is: %d", "answer", 42),
}
$format = "a %s is: %f"
test "printf-b" {
anotherstr => printf($format, "cool number", 3.14159),
anotherstr => fmt.printf($format, "cool number", 3.14159),
}

View File

@@ -1,3 +1,5 @@
import "sys"
# here are all the possible options:
#$opts = struct{strategy => "rr", max => 3, reuse => false, ttl => 10,}
@@ -13,6 +15,6 @@ $set = schedule("xsched", $opts)
# and if you want, you can omit the options entirely:
#$set = schedule("xsched")
file "/tmp/mgmt/scheduled-${hostname()}" {
file "/tmp/mgmt/scheduled-${sys.hostname()}" {
content => template("set: {{ . }}\n", $set),
}

View File

@@ -1,3 +1,5 @@
import "fmt"
$ns = "estate"
$exchanged = kvlookup($ns)
$state = maplookup($exchanged, $hostname, "default")
@@ -16,6 +18,6 @@ Exec["exec0"].output -> Kv["kv0"].value
if $state != "default" {
file "/tmp/mgmt/state" {
content => printf("state: %s\n", $state),
content => fmt.printf("state: %s\n", $state),
}
}

View File

@@ -1,13 +1,15 @@
import "fmt"
$st = struct{f1 => 42, f2 => true, f3 => 3.14,}
$f1 = structlookup($st, "f1")
print "print1" {
msg => printf("f1 field is: %d", $f1),
msg => fmt.printf("f1 field is: %d", $f1),
}
$f2 = structlookup($st, "f2")
print "print2" {
msg => printf("f2 field is: %t", $f2),
msg => fmt.printf("f2 field is: %t", $f2),
}

View File

@@ -1,8 +1,11 @@
import "fmt"
import "example"
$answer = 42
$s = int2str($answer)
$s = example.int2str($answer)
print "print1" {
msg => printf("an str is: %s", $s),
msg => fmt.printf("an str is: %s", $s),
}
print "print2" {

View File

@@ -0,0 +1,9 @@
noop "puppet_first_handover" {}
noop "puppet_second_handover" {}
print "first message" {}
print "third message" {}
Print["first message"] -> Noop["puppet_first_handover"]
Noop["puppet_second_handover"] -> Print["third message"]

View File

@@ -0,0 +1,10 @@
class mgmt_first_handover {}
class mgmt_second_handover {}
include mgmt_first_handover, mgmt_second_handover
Class["mgmt_first_handover"]
->
notify { "second message": }
->
Class["mgmt_second_handover"]

View File

@@ -31,14 +31,14 @@ type MyGAPI struct {
Name string // graph name
Interval uint // refresh interval, 0 to never refresh
data gapi.Data
data *gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}
// NewMyGAPI creates a new MyGAPI struct and calls Init().
func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
func NewMyGAPI(data *gapi.Data, name string, interval uint) (*MyGAPI, error) {
obj := &MyGAPI{
Name: name,
Interval: interval,
@@ -46,30 +46,8 @@ func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
return obj, obj.Init(data)
}
// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
// should take the prefix of the registered name. On activation, if there are
// any validation problems, you should return an error. If this was not
// activated, then you should return a nil GAPI and a nil error.
func (obj *MyGAPI) Cli(c *cli.Context, fs engine.Fs) (*gapi.Deploy, error) {
if s := c.String(obj.Name); c.IsSet(obj.Name) {
if s != "" {
return nil, fmt.Errorf("input is not empty")
}
return &gapi.Deploy{
Name: obj.Name,
Noop: c.GlobalBool("noop"),
Sema: c.GlobalInt("sema"),
GAPI: &MyGAPI{
// TODO: add properties here...
},
}, nil
}
return nil, nil // we weren't activated!
}
// CliFlags returns a list of flags used by this deploy subcommand.
func (obj *MyGAPI) CliFlags() []cli.Flag {
// CliFlags returns a list of flags used by the passed in subcommand.
func (obj *MyGAPI) CliFlags(string) []cli.Flag {
return []cli.Flag{
cli.StringFlag{
Name: obj.Name,
@@ -79,8 +57,26 @@ func (obj *MyGAPI) CliFlags() []cli.Flag {
}
}
// Cli takes a cli.Context and some other info, and returns our GAPI. If there
// are any validation problems, you should return an error.
func (obj *MyGAPI) Cli(cliInfo *gapi.CliInfo) (*gapi.Deploy, error) {
c := cliInfo.CliContext
//fs := cliInfo.Fs // copy files from local filesystem *into* this fs...
//debug := cliInfo.Debug
//logf := func(format string, v ...interface{}) {
// cliInfo.Logf(Name+": "+format, v...)
//}
return &gapi.Deploy{
Name: obj.Name,
Noop: c.GlobalBool("noop"),
Sema: c.GlobalInt("sema"),
GAPI: &MyGAPI{},
}, nil
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
func (obj *MyGAPI) Init(data *gapi.Data) error {
if obj.initialized {
return fmt.Errorf("already initialized")
}

View File

@@ -36,14 +36,14 @@ type MyGAPI struct {
Name string // graph name
Interval uint // refresh interval, 0 to never refresh
data gapi.Data
data *gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}
// NewMyGAPI creates a new MyGAPI struct and calls Init().
func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
func NewMyGAPI(data *gapi.Data, name string, interval uint) (*MyGAPI, error) {
obj := &MyGAPI{
Name: name,
Interval: interval,
@@ -51,30 +51,8 @@ func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
return obj, obj.Init(data)
}
// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
// should take the prefix of the registered name. On activation, if there are
// any validation problems, you should return an error. If this was not
// activated, then you should return a nil GAPI and a nil error.
func (obj *MyGAPI) Cli(c *cli.Context, fs engine.Fs) (*gapi.Deploy, error) {
if s := c.String(obj.Name); c.IsSet(obj.Name) {
if s != "" {
return nil, fmt.Errorf("input is not empty")
}
return &gapi.Deploy{
Name: obj.Name,
Noop: c.GlobalBool("noop"),
Sema: c.GlobalInt("sema"),
GAPI: &MyGAPI{
// TODO: add properties here...
},
}, nil
}
return nil, nil // we weren't activated!
}
// CliFlags returns a list of flags used by this deploy subcommand.
func (obj *MyGAPI) CliFlags() []cli.Flag {
// CliFlags returns a list of flags used by the passed in subcommand.
func (obj *MyGAPI) CliFlags(string) []cli.Flag {
return []cli.Flag{
cli.StringFlag{
Name: obj.Name,
@@ -84,8 +62,26 @@ func (obj *MyGAPI) CliFlags() []cli.Flag {
}
}
// Cli takes a cli.Context and some other info, and returns our GAPI. If there
// are any validation problems, you should return an error.
func (obj *MyGAPI) Cli(cliInfo *gapi.CliInfo) (*gapi.Deploy, error) {
c := cliInfo.CliContext
//fs := cliInfo.Fs // copy files from local filesystem *into* this fs...
//debug := cliInfo.Debug
//logf := func(format string, v ...interface{}) {
// cliInfo.Logf(Name+": "+format, v...)
//}
return &gapi.Deploy{
Name: obj.Name,
Noop: c.GlobalBool("noop"),
Sema: c.GlobalInt("sema"),
GAPI: &MyGAPI{},
}, nil
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
func (obj *MyGAPI) Init(data *gapi.Data) error {
if obj.initialized {
return fmt.Errorf("already initialized")
}


@@ -31,14 +31,14 @@ type MyGAPI struct {
Name string // graph name
Interval uint // refresh interval, 0 to never refresh
data gapi.Data
data *gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}
// NewMyGAPI creates a new MyGAPI struct and calls Init().
func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
func NewMyGAPI(data *gapi.Data, name string, interval uint) (*MyGAPI, error) {
obj := &MyGAPI{
Name: name,
Interval: interval,
@@ -46,30 +46,8 @@ func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
return obj, obj.Init(data)
}
// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
// should take the prefix of the registered name. On activation, if there are
// any validation problems, you should return an error. If this was not
// activated, then you should return a nil GAPI and a nil error.
func (obj *MyGAPI) Cli(c *cli.Context, fs engine.Fs) (*gapi.Deploy, error) {
if s := c.String(obj.Name); c.IsSet(obj.Name) {
if s != "" {
return nil, fmt.Errorf("input is not empty")
}
return &gapi.Deploy{
Name: obj.Name,
Noop: c.GlobalBool("noop"),
Sema: c.GlobalInt("sema"),
GAPI: &MyGAPI{
// TODO: add properties here...
},
}, nil
}
return nil, nil // we weren't activated!
}
// CliFlags returns a list of flags used by this deploy subcommand.
func (obj *MyGAPI) CliFlags() []cli.Flag {
// CliFlags returns a list of flags used by the passed in subcommand.
func (obj *MyGAPI) CliFlags(string) []cli.Flag {
return []cli.Flag{
cli.StringFlag{
Name: obj.Name,
@@ -79,8 +57,26 @@ func (obj *MyGAPI) CliFlags() []cli.Flag {
}
}
// Cli takes a cli.Context and some other info, and returns our GAPI. If there
// are any validation problems, you should return an error.
func (obj *MyGAPI) Cli(cliInfo *gapi.CliInfo) (*gapi.Deploy, error) {
c := cliInfo.CliContext
//fs := cliInfo.Fs // copy files from local filesystem *into* this fs...
//debug := cliInfo.Debug
//logf := func(format string, v ...interface{}) {
// cliInfo.Logf(Name+": "+format, v...)
//}
return &gapi.Deploy{
Name: obj.Name,
Noop: c.GlobalBool("noop"),
Sema: c.GlobalInt("sema"),
GAPI: &MyGAPI{},
}, nil
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
func (obj *MyGAPI) Init(data *gapi.Data) error {
if obj.initialized {
return fmt.Errorf("already initialized")
}


@@ -32,14 +32,14 @@ type MyGAPI struct {
Count uint // number of resources to create
Interval uint // refresh interval, 0 to never refresh
data gapi.Data
data *gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}
// NewMyGAPI creates a new MyGAPI struct and calls Init().
func NewMyGAPI(data gapi.Data, name string, interval uint, count uint) (*MyGAPI, error) {
func NewMyGAPI(data *gapi.Data, name string, interval uint, count uint) (*MyGAPI, error) {
obj := &MyGAPI{
Name: name,
Count: count,
@@ -48,30 +48,8 @@ func NewMyGAPI(data gapi.Data, name string, interval uint, count uint) (*MyGAPI,
return obj, obj.Init(data)
}
// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
// should take the prefix of the registered name. On activation, if there are
// any validation problems, you should return an error. If this was not
// activated, then you should return a nil GAPI and a nil error.
func (obj *MyGAPI) Cli(c *cli.Context, fs engine.Fs) (*gapi.Deploy, error) {
if s := c.String(obj.Name); c.IsSet(obj.Name) {
if s != "" {
return nil, fmt.Errorf("input is not empty")
}
return &gapi.Deploy{
Name: obj.Name,
Noop: c.GlobalBool("noop"),
Sema: c.GlobalInt("sema"),
GAPI: &MyGAPI{
// TODO: add properties here...
},
}, nil
}
return nil, nil // we weren't activated!
}
// CliFlags returns a list of flags used by this deploy subcommand.
func (obj *MyGAPI) CliFlags() []cli.Flag {
// CliFlags returns a list of flags used by the passed in subcommand.
func (obj *MyGAPI) CliFlags(string) []cli.Flag {
return []cli.Flag{
cli.StringFlag{
Name: obj.Name,
@@ -81,8 +59,26 @@ func (obj *MyGAPI) CliFlags() []cli.Flag {
}
}
// Cli takes a cli.Context and some other info, and returns our GAPI. If there
// are any validation problems, you should return an error.
func (obj *MyGAPI) Cli(cliInfo *gapi.CliInfo) (*gapi.Deploy, error) {
c := cliInfo.CliContext
//fs := cliInfo.Fs // copy files from local filesystem *into* this fs...
//debug := cliInfo.Debug
//logf := func(format string, v ...interface{}) {
// cliInfo.Logf(Name+": "+format, v...)
//}
return &gapi.Deploy{
Name: obj.Name,
Noop: c.GlobalBool("noop"),
Sema: c.GlobalInt("sema"),
GAPI: &MyGAPI{},
}, nil
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
func (obj *MyGAPI) Init(data *gapi.Data) error {
if obj.initialized {
return fmt.Errorf("already initialized")
}


@@ -31,14 +31,14 @@ type MyGAPI struct {
Name string // graph name
Interval uint // refresh interval, 0 to never refresh
data gapi.Data
data *gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}
// NewMyGAPI creates a new MyGAPI struct and calls Init().
func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
func NewMyGAPI(data *gapi.Data, name string, interval uint) (*MyGAPI, error) {
obj := &MyGAPI{
Name: name,
Interval: interval,
@@ -46,30 +46,8 @@ func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
return obj, obj.Init(data)
}
// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
// should take the prefix of the registered name. On activation, if there are
// any validation problems, you should return an error. If this was not
// activated, then you should return a nil GAPI and a nil error.
func (obj *MyGAPI) Cli(c *cli.Context, fs engine.Fs) (*gapi.Deploy, error) {
if s := c.String(obj.Name); c.IsSet(obj.Name) {
if s != "" {
return nil, fmt.Errorf("input is not empty")
}
return &gapi.Deploy{
Name: obj.Name,
Noop: c.GlobalBool("noop"),
Sema: c.GlobalInt("sema"),
GAPI: &MyGAPI{
// TODO: add properties here...
},
}, nil
}
return nil, nil // we weren't activated!
}
// CliFlags returns a list of flags used by this deploy subcommand.
func (obj *MyGAPI) CliFlags() []cli.Flag {
// CliFlags returns a list of flags used by the passed in subcommand.
func (obj *MyGAPI) CliFlags(string) []cli.Flag {
return []cli.Flag{
cli.StringFlag{
Name: obj.Name,
@@ -79,8 +57,26 @@ func (obj *MyGAPI) CliFlags() []cli.Flag {
}
}
// Cli takes a cli.Context and some other info, and returns our GAPI. If there
// are any validation problems, you should return an error.
func (obj *MyGAPI) Cli(cliInfo *gapi.CliInfo) (*gapi.Deploy, error) {
c := cliInfo.CliContext
//fs := cliInfo.Fs // copy files from local filesystem *into* this fs...
//debug := cliInfo.Debug
//logf := func(format string, v ...interface{}) {
// cliInfo.Logf(Name+": "+format, v...)
//}
return &gapi.Deploy{
Name: obj.Name,
Noop: c.GlobalBool("noop"),
Sema: c.GlobalInt("sema"),
GAPI: &MyGAPI{},
}, nil
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
func (obj *MyGAPI) Init(data *gapi.Data) error {
if obj.initialized {
return fmt.Errorf("already initialized")
}


@@ -0,0 +1,9 @@
[Unit]
Description=Fake oneshot service for testing
[Service]
Type=oneshot
ExecStart=/usr/bin/sleep 5s
[Install]
WantedBy=multi-user.target


@@ -21,7 +21,6 @@ import (
"fmt"
"sync"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/gapi"
"github.com/purpleidea/mgmt/pgraph"
@@ -39,44 +38,31 @@ func init() {
// GAPI implements the main lang GAPI interface.
type GAPI struct {
data gapi.Data
data *gapi.Data
initialized bool
closeChan chan struct{}
wg *sync.WaitGroup // sync group for tunnel go routines
}
// CliFlags returns a list of flags used by the specified subcommand.
func (obj *GAPI) CliFlags(command string) []cli.Flag {
return []cli.Flag{}
}
// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
// should take the prefix of the registered name. On activation, if there are
// any validation problems, you should return an error. If this was not
// activated, then you should return a nil GAPI and a nil error.
func (obj *GAPI) Cli(c *cli.Context, fs engine.Fs) (*gapi.Deploy, error) {
if s := c.String(Name); c.IsSet(Name) {
if s == "" {
return nil, fmt.Errorf("input code is empty")
}
return &gapi.Deploy{
Name: Name,
//Noop: false,
GAPI: &GAPI{},
}, nil
}
return nil, nil // we weren't activated!
}
// CliFlags returns a list of flags used by this deploy subcommand.
func (obj *GAPI) CliFlags() []cli.Flag {
return []cli.Flag{
cli.StringFlag{
Name: Name,
Value: "",
Usage: "empty graph to deploy",
},
}
func (obj *GAPI) Cli(*gapi.CliInfo) (*gapi.Deploy, error) {
return &gapi.Deploy{
Name: Name,
//Noop: false,
GAPI: &GAPI{},
}, nil
}
// Init initializes the lang GAPI struct.
func (obj *GAPI) Init(data gapi.Data) error {
func (obj *GAPI) Init(data *gapi.Data) error {
if obj.initialized {
return fmt.Errorf("already initialized")
}


@@ -28,6 +28,18 @@ import (
"github.com/urfave/cli"
)
const (
// CommandRun is the identifier for the "run" command. It is distinct
// from the other commands, because it can run with any front-end.
CommandRun = "run"
// CommandDeploy is the identifier for the "deploy" command.
CommandDeploy = "deploy"
// CommandGet is the identifier for the "get" (download) command.
CommandGet = "get"
)
// RegisteredGAPIs is a global map of all possible GAPIs which can be used. You
// should never touch this map directly. Use methods like Register instead.
var RegisteredGAPIs = make(map[string]func() GAPI) // must initialize this map
@@ -42,6 +54,19 @@ func Register(name string, fn func() GAPI) {
RegisteredGAPIs[name] = fn
}
// CliInfo is the set of input values passed into the Cli method so that the
// GAPI can decide if it wants to activate, and if it does, the initial handles
// it needs to use to do so.
type CliInfo struct {
// CliContext is the struct that is used to transfer in user input.
CliContext *cli.Context
// Fs is the filesystem the Cli method should copy data into. It usually
// copies *from* the local filesystem using standard io functionality.
Fs engine.Fs
Debug bool
Logf func(format string, v ...interface{})
}
// Data is the set of input values passed into the GAPI structs via Init.
type Data struct {
Program string // name of the originating program
@@ -50,6 +75,7 @@ type Data struct {
Noop bool
NoConfigWatch bool
NoStreamWatch bool
Prefix string
Debug bool
Logf func(format string, v ...interface{})
// NOTE: we can add more fields here if needed by GAPI endpoints
@@ -68,13 +94,60 @@ type Next struct {
Err error // if something goes wrong (use with or without exit!)
}
// GAPI is a Graph API that represents incoming graphs and change streams.
// GAPI is a Graph API that represents incoming graphs and change streams. It is
// the frontend interface that needs to be implemented to use the engine.
type GAPI interface {
Cli(c *cli.Context, fs engine.Fs) (*Deploy, error)
CliFlags() []cli.Flag
// CliFlags is passed a Command constant specifying which command it is
// requesting the flags for. If an invalid or unsupported command is
// passed in, simply return an empty list. Similarly, it is not required
// to ever return any flags, and the GAPI may always return an empty
// list.
CliFlags(string) []cli.Flag
Init(Data) error // initializes the GAPI and passes in useful data
Graph() (*pgraph.Graph, error) // returns the most recent pgraph
Next() chan Next // returns a stream of switch events
Close() error // shutdown the GAPI
// Cli is run on each GAPI to give it a chance to decide if it wants to
// activate, and if it does, then it will return a deploy struct. During
// this time, it uses the CliInfo struct as useful information to decide
// what to do.
Cli(*CliInfo) (*Deploy, error)
// Init initializes the GAPI and passes in some useful data.
Init(*Data) error
// Graph returns the most recent pgraph. This is called by the engine on
// every event from Next().
Graph() (*pgraph.Graph, error)
// Next returns a stream of switch events. The engine will run Graph()
// to build a new graph after every Next event.
Next() chan Next
// Close shuts down the GAPI. It asks the GAPI to close, and must cause
// Next() to unblock even if it is currently blocked and waiting to
// send a new event.
Close() error
}
// GetInfo is the set of input values passed into the Get method for it to run.
type GetInfo struct {
// CliContext is the struct that is used to transfer in user input.
CliContext *cli.Context
Noop bool
Sema int
Update bool
Debug bool
Logf func(format string, v ...interface{})
}
// GettableGAPI represents additional methods that need to be implemented in
// this GAPI so that it can be used with the `get` Command. The methods in this
// interface are called independently from the rest of the GAPI interface, and
// you must not rely on shared state from those methods. Logically, this should
// probably be named "Getable", however the correct modern word is "Gettable".
type GettableGAPI interface {
GAPI // the base interface must be implemented
// Get runs the get/download method.
Get(*GetInfo) error
}
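
The refactor above splits the old single `Cli(c *cli.Context, fs engine.Fs)` entry point into a per-command `CliFlags(string)` and a `Cli(*CliInfo)` that receives its inputs bundled in a struct. A minimal, self-contained sketch of that contract is below; the `cliInfo`, `deploy`, and `emptyGAPI` types are local stand-ins (not the real `gapi` package types), and the `"deploy"` command string and `--empty` flag are illustrative only.

```go
package main

import "fmt"

// cliInfo is a local stand-in for gapi.CliInfo: it bundles the parsed
// user input together with the handles (filesystem, logger) the GAPI
// needs when it activates.
type cliInfo struct {
	debug bool
	logf  func(format string, v ...interface{})
}

// deploy is a local stand-in for gapi.Deploy.
type deploy struct {
	name string
}

// emptyGAPI mirrors the refactored contract: CliFlags receives the
// command it is being asked about and may return an empty list, while
// Cli receives a *cliInfo and returns a deploy struct (dispatch now
// happens by subcommand, not by probing a flag as in the old API).
type emptyGAPI struct{}

// CliFlags returns the flag names for the given command; unsupported
// commands simply get no flags.
func (obj *emptyGAPI) CliFlags(command string) []string {
	if command != "deploy" {
		return nil
	}
	return []string{"--empty"}
}

// Cli decides activation and returns the deploy struct.
func (obj *emptyGAPI) Cli(info *cliInfo) (*deploy, error) {
	info.logf("empty: activating")
	return &deploy{name: "empty"}, nil
}

func main() {
	g := &emptyGAPI{}
	info := &cliInfo{
		logf: func(format string, v ...interface{}) {
			fmt.Printf(format+"\n", v...)
		},
	}
	d, err := g.Cli(info)
	if err != nil {
		panic(err)
	}
	fmt.Println(d.name)
}
```

Bundling the inputs into a struct means new fields (like `Debug` or `Logf`) can be added later without touching every GAPI's method signature again.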


@@ -42,8 +42,15 @@ func CopyFileToFs(fs engine.Fs, src, dst string) error {
return nil
}
// CopyStringToFs copies a file from src path on the local fs to a dst path on
// fs.
// CopyBytesToFs copies a list of bytes to a dst path on fs.
func CopyBytesToFs(fs engine.Fs, b []byte, dst string) error {
if err := fs.WriteFile(dst, b, Umask); err != nil {
return errwrap.Wrapf(err, "can't write to file `%s`", dst)
}
return nil
}
// CopyStringToFs copies a string to a dst path on fs.
func CopyStringToFs(fs engine.Fs, str, dst string) error {
if err := fs.WriteFile(dst, []byte(str), Umask); err != nil {
return errwrap.Wrapf(err, "can't write to file `%s`", dst)
@@ -55,3 +62,9 @@ func CopyStringToFs(fs engine.Fs, str, dst string) error {
func CopyDirToFs(fs engine.Fs, src, dst string) error {
return util.CopyDiskToFs(fs, src, dst, false)
}
// CopyDirContentsToFs copies a dir contents from src path on the local fs to a
// dst path on fs.
func CopyDirContentsToFs(fs engine.Fs, src, dst string) error {
return util.CopyDiskContentsToFs(fs, src, dst, false)
}


@@ -32,7 +32,8 @@ import (
func TestInstance0(t *testing.T) {
code := `
$root = getenv("MGMT_TEST_ROOT")
import "sys"
$root = sys.getenv("MGMT_TEST_ROOT")
file "${root}/mgmt-hello-world" {
content => "hello world from @purpleidea\n",
@@ -42,6 +43,10 @@ func TestInstance0(t *testing.T) {
m := Instance{
Hostname: "h1", // arbitrary
Preserve: true,
Debug: false, // TODO: set to true if not too wordy
Logf: func(format string, v ...interface{}) {
t.Logf("test: "+format, v...)
},
}
if err := m.SimpleDeployLang(code); err != nil {
t.Errorf("failed with: %+v", err)
@@ -68,18 +73,19 @@ func TestInstance1(t *testing.T) {
fail bool
expect map[string]string
}
values := []test{}
testCases := []test{}
{
code := util.Code(`
$root = getenv("MGMT_TEST_ROOT")
import "sys"
$root = sys.getenv("MGMT_TEST_ROOT")
file "${root}/mgmt-hello-world" {
content => "hello world from @purpleidea\n",
state => "exists",
}
`)
values = append(values, test{
testCases = append(testCases, test{
name: "hello world",
code: code,
fail: false,
@@ -89,13 +95,17 @@ func TestInstance1(t *testing.T) {
})
}
for index, test := range values { // run all the tests
t.Run(fmt.Sprintf("test #%d (%s)", index, test.name), func(t *testing.T) {
code, fail, expect := test.code, test.fail, test.expect
for index, tc := range testCases { // run all the tests
t.Run(fmt.Sprintf("test #%d (%s)", index, tc.name), func(t *testing.T) {
code, fail, expect := tc.code, tc.fail, tc.expect
m := Instance{
Hostname: "h1",
Preserve: true,
Debug: false, // TODO: set to true if not too wordy
Logf: func(format string, v ...interface{}) {
t.Logf(fmt.Sprintf("test #%d: ", index)+format, v...)
},
}
err := m.SimpleDeployLang(code)
d := m.Dir()
@@ -151,18 +161,19 @@ func TestCluster1(t *testing.T) {
hosts []string
expect map[string]map[string]string // hostname, file, contents
}
values := []test{}
testCases := []test{}
{
code := util.Code(`
$root = getenv("MGMT_TEST_ROOT")
import "sys"
$root = sys.getenv("MGMT_TEST_ROOT")
file "${root}/mgmt-hostname" {
content => "i am ${hostname()}\n",
content => "i am ${sys.hostname()}\n",
state => "exists",
}
`)
values = append(values, test{
testCases = append(testCases, test{
name: "simple pair",
code: code,
fail: false,
@@ -179,14 +190,15 @@ func TestCluster1(t *testing.T) {
}
{
code := util.Code(`
$root = getenv("MGMT_TEST_ROOT")
import "sys"
$root = sys.getenv("MGMT_TEST_ROOT")
file "${root}/mgmt-hostname" {
content => "i am ${hostname()}\n",
content => "i am ${sys.hostname()}\n",
state => "exists",
}
`)
values = append(values, test{
testCases = append(testCases, test{
name: "hello world",
code: code,
fail: false,
@@ -205,13 +217,17 @@ func TestCluster1(t *testing.T) {
})
}
for index, test := range values { // run all the tests
t.Run(fmt.Sprintf("test #%d (%s)", index, test.name), func(t *testing.T) {
code, fail, hosts, expect := test.code, test.fail, test.hosts, test.expect
for index, tc := range testCases { // run all the tests
t.Run(fmt.Sprintf("test #%d (%s)", index, tc.name), func(t *testing.T) {
code, fail, hosts, expect := tc.code, tc.fail, tc.hosts, tc.expect
c := Cluster{
Hostnames: hosts,
Preserve: true,
Debug: false, // TODO: set to true if not too wordy
Logf: func(format string, v ...interface{}) {
t.Logf(fmt.Sprintf("test #%d: ", index)+format, v...)
},
}
err := c.SimpleDeployLang(code)
if d := c.Dir(); d != "" {


@@ -39,6 +39,9 @@ type Cluster struct {
// This is helpful for running analysis or tests on the output.
Preserve bool
// Logf is a logger which should be used.
Logf func(format string, v ...interface{})
// Debug enables more verbosity.
Debug bool
@@ -62,7 +65,8 @@ func (obj *Cluster) Init() error {
}
}
for _, h := range obj.Hostnames {
for _, hostname := range obj.Hostnames {
h := hostname
instancePrefix := path.Join(obj.dir, h)
if err := os.MkdirAll(instancePrefix, dirMode); err != nil {
return errwrap.Wrapf(err, "can't create instance directory")
@@ -71,7 +75,10 @@ func (obj *Cluster) Init() error {
obj.instances[h] = &Instance{
Hostname: h,
Preserve: obj.Preserve,
Debug: obj.Debug,
Logf: func(format string, v ...interface{}) {
obj.Logf(fmt.Sprintf("instance <%s>: ", h)+format, v...)
},
Debug: obj.Debug,
dir: instancePrefix,
}


@@ -75,6 +75,9 @@ type Instance struct {
// This is helpful for running analysis or tests on the output.
Preserve bool
// Logf is a logger which should be used.
Logf func(format string, v ...interface{})
// Debug enables more verbosity.
Debug bool
@@ -205,6 +208,9 @@ func (obj *Instance) Run(seeds []*Instance) error {
//s := fmt.Sprintf("--seeds=%s", strings.Join(urls, ","))
cmdArgs = append(cmdArgs, s)
}
gapi := "empty" // empty GAPI (for now)
cmdArgs = append(cmdArgs, gapi)
obj.Logf("run: %s %s", cmdName, strings.Join(cmdArgs, " "))
obj.cmd = exec.Command(cmdName, cmdArgs...)
obj.cmd.Env = []string{
fmt.Sprintf("MGMT_TEST_ROOT=%s", obj.testRootDirectory),
@@ -369,8 +375,12 @@ func (obj *Instance) DeployLang(code string) error {
"--seeds", obj.clientURL,
"lang", "--lang", filename,
}
obj.Logf("run: %s %s", cmdName, strings.Join(cmdArgs, " "))
cmd := exec.Command(cmdName, cmdArgs...)
if err := cmd.Run(); err != nil {
stdoutStderr, err := cmd.CombinedOutput() // does cmd.Run() for us!
obj.Logf("stdout/stderr:\n%s", stdoutStderr)
if err != nil {
return errwrap.Wrapf(err, "can't run deploy")
}
return nil

lang/download.go (new file)

@@ -0,0 +1,153 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package lang // TODO: move this into a sub package of lang/$name?
import (
"fmt"
"os"
"path"
"strings"
"github.com/purpleidea/mgmt/lang/interfaces"
errwrap "github.com/pkg/errors"
git "gopkg.in/src-d/go-git.v4"
)
// Downloader implements the Downloader interface. It provides a mechanism to
// pull down new code from the internet. This is usually done with git.
type Downloader struct {
info *interfaces.DownloadInfo
// Depth is the max recursion depth that we should descend to. A
// negative value means infinite. This is usually the default.
Depth int
// Retry is the max number of retries we should run if we encounter a
// network error. A negative value means infinite. The default is
// usually zero.
Retry int
// TODO: add a retry backoff parameter
}
// Init initializes the downloader with some core structures we'll need.
func (obj *Downloader) Init(info *interfaces.DownloadInfo) error {
obj.info = info
return nil
}
// Get runs a single download of an import and stores it on disk.
// XXX: this should only touch the filesystem via obj.info.Fs, but that is not
// implemented at the moment, so we cheat and use the local fs directly. This is
// not disastrous, since we only run Get on a local fs, and we don't download
// to etcdfs directly with the downloader during a deploy. This is because we'd
// need to implement the afero.Fs -> billy.Filesystem mapping layer.
func (obj *Downloader) Get(info *interfaces.ImportData, modulesPath string) error {
if info == nil {
return fmt.Errorf("empty import information")
}
if info.URL == "" {
return fmt.Errorf("can't clone from empty URL")
}
if modulesPath == "" || !strings.HasSuffix(modulesPath, "/") || !strings.HasPrefix(modulesPath, "/") {
return fmt.Errorf("module path (`%s`) must be an absolute dir", modulesPath)
}
if stat, err := obj.info.Fs.Stat(modulesPath); err != nil || !stat.IsDir() {
if err == nil {
return fmt.Errorf("module path (`%s`) must be a dir", modulesPath)
}
if err == os.ErrNotExist {
return fmt.Errorf("module path (`%s`) must exist", modulesPath)
}
return errwrap.Wrapf(err, "could not read module path (`%s`)", modulesPath)
}
if info.IsSystem || info.IsLocal {
// NOTE: this doesn't prevent us from downloading from a remote
// git repo that is actually a .git file path instead of HTTP...
return fmt.Errorf("can only download remote repos")
}
// TODO: error early if we're provided *ImportData that we can't act on
pull := false
dir := modulesPath + info.Path // TODO: is this dir unique?
isBare := false
options := &git.CloneOptions{
URL: info.URL,
// TODO: do we want to add an option for infinite recursion here?
RecurseSubmodules: git.DefaultSubmoduleRecursionDepth,
}
msg := fmt.Sprintf("downloading `%s` to: `%s`", info.URL, dir)
if obj.info.Noop {
msg = "(noop) " + msg // add prefix
}
obj.info.Logf(msg)
if obj.info.Debug {
obj.info.Logf("info: `%+v`", info)
obj.info.Logf("options: `%+v`", options)
}
if obj.info.Noop {
return nil // done early
}
// FIXME: replace with:
// `git.Clone(s storage.Storer, worktree billy.Filesystem, o *CloneOptions)`
// that uses an `fs engine.Fs` wrapped to the git Filesystem interface:
// `billyFs := desfacer.New(obj.info.Fs)`
// TODO: repo, err := git.Clone(??? storage.Storer, billyFs, options)
repo, err := git.PlainClone(path.Clean(dir), isBare, options)
if err == git.ErrRepositoryAlreadyExists {
if obj.info.Update {
pull = true // make sure to pull latest...
}
} else if err != nil {
return errwrap.Wrapf(err, "can't clone repo: `%s` to: `%s`", info.URL, dir)
}
worktree, err := repo.Worktree()
if err != nil {
return errwrap.Wrapf(err, "can't get working tree: `%s`", dir)
}
if worktree == nil {
// FIXME: not sure how we're supposed to handle this scenario...
return errwrap.Wrapf(err, "can't work with nil work tree for: `%s`", dir)
}
// TODO: do we need to checkout master first, before pulling?
if pull {
options := &git.PullOptions{
// TODO: do we want to add an option for infinite recursion here?
RecurseSubmodules: git.DefaultSubmoduleRecursionDepth,
}
err := worktree.Pull(options)
if err != nil && err != git.NoErrAlreadyUpToDate {
return errwrap.Wrapf(err, "can't pull latest from: `%s`", info.URL)
}
}
// TODO: checkout requested sha1/tag if one was specified...
// if err := worktree.Checkout(opts *CheckoutOptions)
// does the repo have a metadata file present? (we'll validate it later)
if _, err := obj.info.Fs.Stat(dir + interfaces.MetadataFilename); err != nil {
return errwrap.Wrapf(err, "could not read repo metadata file `%s` in its root", interfaces.MetadataFilename)
}
return nil
}

lang/funcs/Makefile (new file)

@@ -0,0 +1,44 @@
# Mgmt
# Copyright (C) 2013-2018+ James Shubin and the project contributors
# Written by James Shubin <james@shubin.ca> and the project contributors
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# The bindata target generates go files from any source defined below. To use
# the files, import the generated "bindata" package and use:
# `bytes, err := bindata.Asset("FILEPATH")`
# where FILEPATH is the path of the original input file relative to `bindata/`.
# To get a list of files stored in this "bindata" package, you can use:
# `paths := bindata.AssetNames()` and `paths, err := bindata.AssetDir(name)`
# to get a list of files with a directory prefix.
.PHONY: build clean
default: build
MCL_FILES := $(shell find * -name '*.mcl' -not -path 'old/*' -not -path 'tmp/*')
GENERATED = bindata/bindata.go
build: $(GENERATED)
# add more input files as dependencies at the end here...
$(GENERATED): $(MCL_FILES)
@# go-bindata --pkg bindata -o <OUTPUT> <INPUT>
go-bindata --pkg bindata -o ./$@ $^
@# gofmt the output file
gofmt -s -w $@
@ROOT=$$(dirname "$${BASH_SOURCE}")/../.. && $$ROOT/misc/header.sh '$@'
clean:
@# remove generated bindata/bindata.go
@ROOT=$$(dirname "$${BASH_SOURCE}")/../.. && rm -f $(GENERATED)
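
The Makefile's header comments describe the API the generated `bindata` package exposes: `bindata.Asset("FILEPATH")` returns a file's bytes, and `bindata.AssetNames()` lists the stored paths. Since the generated package only exists after `go-bindata` runs, the sketch below is a self-contained stand-in with the same lookup shape; the `world/metadata.yaml` path and the backing map are illustrative assumptions, and the real generated code stores more (compressed bytes, file info).

```go
package main

import "fmt"

// _bindata stands in for the map go-bindata generates from the *.mcl
// inputs; keys are paths relative to the bindata input root.
var _bindata = map[string][]byte{
	"world/metadata.yaml": []byte("# metadata\n"), // hypothetical asset
}

// Asset mirrors the generated bindata.Asset: it returns the stored
// bytes for a path, or an error if the asset is unknown.
func Asset(name string) ([]byte, error) {
	b, ok := _bindata[name]
	if !ok {
		return nil, fmt.Errorf("asset %s not found", name)
	}
	return b, nil
}

// AssetNames mirrors the generated bindata.AssetNames: it lists every
// stored asset path.
func AssetNames() []string {
	names := make([]string, 0, len(_bindata))
	for name := range _bindata {
		names = append(names, name)
	}
	return names
}

func main() {
	b, err := Asset("world/metadata.yaml")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d asset(s), first is %d bytes\n", len(AssetNames()), len(b))
}
```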

lang/funcs/bindata.mcl (new file)

@@ -0,0 +1,2 @@
# You can add *.mcl files alongside the *.go files into the core/ directory.
# They will get compiled into the binary when it is built.

lang/funcs/bindata/.gitignore (new vendored file)

@@ -0,0 +1,2 @@
# this file gets generated here
bindata.go

lang/funcs/bindata/doc.go (new file)

@@ -0,0 +1,19 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package bindata stores core mcl code that is built-in at compile time.
package bindata

lang/funcs/core/core.go (new file)

@@ -0,0 +1,27 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package core
import (
// import so the funcs register
_ "github.com/purpleidea/mgmt/lang/funcs/core/coredatetime"
_ "github.com/purpleidea/mgmt/lang/funcs/core/coreexample"
_ "github.com/purpleidea/mgmt/lang/funcs/core/corefmt"
_ "github.com/purpleidea/mgmt/lang/funcs/core/coremath"
_ "github.com/purpleidea/mgmt/lang/funcs/core/coresys"
)


@@ -0,0 +1,23 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package coredatetime
const (
// moduleName is the prefix given to all the functions in this module.
moduleName = "datetime"
)


@@ -15,7 +15,7 @@
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package core // TODO: should this be in its own individual package?
package coredatetime
import (
"time"
@@ -25,7 +25,7 @@ import (
)
func init() {
facts.Register("datetime", func() facts.Fact { return &DateTimeFact{} }) // must register the fact and name
facts.ModuleRegister(moduleName, "now", func() facts.Fact { return &DateTimeFact{} }) // must register the fact and name
}
// DateTimeFact is a fact which returns the current date and time.


@@ -15,19 +15,19 @@
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
-package simple // TODO: should this be in its own individual package?
+package coredatetime
import (
"fmt"
"time"
+	"github.com/purpleidea/mgmt/lang/funcs/simple"
"github.com/purpleidea/mgmt/lang/types"
)
func init() {
-	// TODO: should we support namespacing these, eg: datetime.print ?
	// FIXME: consider renaming this to printf, and add in a format string?
-	Register("datetime_print", &types.FuncValue{
+	simple.ModuleRegister(moduleName, "print", &types.FuncValue{
T: types.NewType("func(a int) str"),
V: func(input []types.Value) (types.Value, error) {
epochDelta := input[0].Int()


@@ -15,9 +15,10 @@
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
-package simple // TODO: should this be in its own individual package?
+package coreexample
import (
+	"github.com/purpleidea/mgmt/lang/funcs/simple"
"github.com/purpleidea/mgmt/lang/types"
)
@@ -25,7 +26,7 @@ import (
const Answer = 42
func init() {
-	Register("answer", &types.FuncValue{
+	simple.ModuleRegister(moduleName, "answer", &types.FuncValue{
T: types.NewType("func() int"),
V: func([]types.Value) (types.Value, error) {
return &types.IntValue{V: Answer}, nil


@@ -0,0 +1,23 @@
+// Mgmt
+// Copyright (C) 2013-2018+ James Shubin and the project contributors
+// Written by James Shubin <james@shubin.ca> and the project contributors
+//
+// This program is free software: you can redistribute it and/or modify
+// it under the terms of the GNU General Public License as published by
+// the Free Software Foundation, either version 3 of the License, or
+// (at your option) any later version.
+//
+// This program is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License
+// along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+package coreexample
+
+const (
+	// moduleName is the prefix given to all the functions in this module.
+	moduleName = "example"
+)


@@ -15,17 +15,17 @@
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
-package simple // TODO: should this be in its own individual package?
+package coreexample
import (
"fmt"
+	"github.com/purpleidea/mgmt/lang/funcs/simple"
"github.com/purpleidea/mgmt/lang/types"
)
func init() {
-	// TODO: should we support namespacing these, eg: example.errorbool ?
-	Register("example_errorbool", &types.FuncValue{
+	simple.ModuleRegister(moduleName, "errorbool", &types.FuncValue{
T: types.NewType("func(a bool) str"),
V: func(input []types.Value) (types.Value, error) {
if input[0].Bool() {


@@ -15,7 +15,7 @@
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
-package core // TODO: should this be in its own individual package?
+package coreexample
import (
"time"
@@ -25,8 +25,7 @@ import (
)
func init() {
-	// TODO: rename these `play` facts to start with a test_ prefix or similar
-	facts.Register("flipflop", func() facts.Fact { return &FlipFlopFact{} }) // must register the fact and name
+	facts.ModuleRegister(moduleName, "flipflop", func() facts.Fact { return &FlipFlopFact{} }) // must register the fact and name
}
// FlipFlopFact is a fact which flips a bool repeatedly. This is an example fact

Some files were not shown because too many files have changed in this diff.