283 Commits

Author SHA1 Message Date
Lourenço Vales
cdc09f9c46 added partial cloudflare api integration 2025-10-02 16:04:21 +02:00
Lourenço Vales
65fac167cf added cmp function 2025-10-02 13:08:49 +02:00
Lourenço Vales
6c67acf5fe added CheckApply function; made some changes to structure 2025-10-02 13:08:49 +02:00
Lourenço Vales
ab69c29761 engine: resources: Add Cloudflare DNS resource 2025-10-02 13:08:49 +02:00
James Shubin
5f4ae05340 readme: We moved to matrix 2025-10-02 03:05:05 -04:00
James Shubin
c48b884d16 misc: Add fpm repo script 2025-09-30 02:55:29 -04:00
James Shubin
fe77bce544 misc: Update old email address 2025-09-30 02:42:43 -04:00
James Shubin
26640df164 test: shell: Get the first ethernet device
In CI sometimes there are two, so this fails.
2025-09-30 00:27:35 -04:00
James Shubin
debd4ee653 misc: Remove old prototype language 2025-09-29 23:15:45 -04:00
James Shubin
63269fe343 spec: Small RPM fixes
In case anyone tries this.
2025-09-29 23:15:16 -04:00
James Shubin
f588703474 engine: resources: Add a gsettings resource
This adds a way to run the gsettings command for configuring dconf
settings usually used by GNOME applications.
2025-09-29 21:40:06 -04:00
James Shubin
52fbc31da7 engine: resources: Remove this noise 2025-09-29 21:26:14 -04:00
James Shubin
154f900d2a engine: resources: Add an ifequals option to block cmd
If the ifcmd returns true and this option is set, it will match that
output against this field, and if they match, then we skip cmd.

Much cleaner than needing to invoke bash to compare two strings.
2025-09-29 21:24:40 -04:00
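
For illustration only, a hedged Go sketch of the decision logic described above; the names here are illustrative, not the resource's actual fields.

package sketch

import (
	"os/exec"
	"strings"
)

// shouldSkipCmd sketches the ifcmd/ifequals behaviour described above: run
// ifCmd, and if it succeeds and its trimmed output equals the ifequals value,
// report that the main cmd can be skipped. (Illustrative names only.)
func shouldSkipCmd(ifCmd, ifEquals string) bool {
	out, err := exec.Command("/bin/sh", "-c", ifCmd).Output()
	if err != nil {
		return false // ifcmd failed, so run cmd as usual
	}
	return strings.TrimSpace(string(out)) == ifEquals
}
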
James Shubin
bbd4f1dea1 engine: Add a utility function to copy the Init struct 2025-09-29 21:23:17 -04:00
James Shubin
22120649e5 modules: misc: Add simple flatpak management 2025-09-26 23:25:42 -04:00
James Shubin
a840dd43dd cli, tools, util, modules: Add a grow util and module
This builds in some functionality for growing the filesystem for new
machines. It also gets wrapped with an mcl module for ease of use.
2025-09-26 23:00:20 -04:00
James Shubin
83743df3e4 util: Add variant of exec cmd that returns output 2025-09-26 21:18:54 -04:00
James Shubin
15b2ff68cc lang: core: os: Simplify waitgroup
Doesn't need to be part of the struct. Maybe there are others like this
that need porting.
2025-09-26 21:18:54 -04:00
James Shubin
17544e881c test: Fix tag failures 2025-09-25 02:23:14 -04:00
James Shubin
6090517830 releases: Add release notes for 1.0.0 2025-09-25 02:11:05 -04:00
James Shubin
6a7b3d5fa9 readme: Ten 2025-09-25 01:16:03 -04:00
James Shubin
25804c71df lang: core embedded: provisioner: Work with USB-free machines
This feature was for machines that boot from USB keys. When we PXE boot,
this should not fail when the file is missing.
2025-09-16 01:11:32 -04:00
James Shubin
a54553c858 engine: resources: Print a warning if svc is slow
The biggest horror is blocked execution somewhere, so if the svc start,
stop or reload is being slow, then at least print a message to warn us.
2025-09-15 04:05:04 -04:00
James Shubin
ff1581be87 engine: resources: Massively refactor the svc
This was a long time coming, but now it looks to be done. It was kind of
meant as low-hanging fruit for some interested student, but in the end I
got to it first.
2025-09-15 04:05:04 -04:00
James Shubin
ec48a6944c lang: core: iter: Make map coding more consistent with filter
Keeping the code looking similar makes it easy to patch bugs that occur
in both.
2025-09-14 23:52:22 -04:00
James Shubin
df9849319d lang: core: iter: Replace graph when list length changes
The map and iter functions weren't replacing the graph if the input list
length changed. This was just an oversight in coding AFAICT, as the
sneaky case is that if the length stays the same, but the list contents
change, then it's okay to not swap.
2025-09-14 23:50:21 -04:00
James Shubin
045aa8820c engine: resources: Display tick marks for input range
This makes it prettier. We should also add the values, but this is
harder to do nicely.
2025-09-14 20:52:45 -04:00
James Shubin
a66cbc3098 engine: resources: Work around race in upstream lib
This is actually fixed in: 7d147928ee
but this is not in a release yet.
2025-09-14 19:36:57 -04:00
James Shubin
9833cb8df3 modules: virtualization: Update to Fedora 42 2025-09-14 00:06:59 -04:00
James Shubin
a73dc19ce9 engine: resources: Fix virt hotplug
At some point, this seems to have rotted, since I assume upstream
started requiring this updated XML spec. Fix it now.
2025-09-13 23:54:46 -04:00
James Shubin
bcf57f8581 engine: resources: Make the qemu guest agent automatic 2025-09-13 23:54:46 -04:00
James Shubin
611cdb3193 engine: resources: Disable buggy restart code
This was really not ever tested properly, and I worry it will deadlock.
It definitely kicks off false positives that don't even do any harm as far
far as we can tell.
2025-09-13 23:54:24 -04:00
James Shubin
1b39a780e1 engine: resources: Clean up virt code
There was and still is a bunch of terrible mess in this code. This does
some initial cleanup, and also fixes an important bug!

If you're provisioning a vmhost from scratch, then the function engine
might do some work to get the libvirt related services running before
the virt resource is used to build a vm. Since we had connection code in
Init() it would fail if it wasn't up already, meaning we'd have to write
fancy mcl code to avoid this, or we could do this refactor and keep
things more logical.
2025-09-13 23:28:33 -04:00
James Shubin
d59ae2e007 engine: graph: We shouldn't complain on context cancellation
These are expected from our engine. We do care about timeouts and so
on. This allows us to return ctx.Err() whenever a <-ctx.Done() happens,
which is more idiomatic for what we really want, but which we weren't
thorough with before.
2025-09-13 23:28:33 -04:00
James Shubin
b9363a3463 go: Update systemd dep to fix race
Hopefully this race is fixed upstream. Let's see.
2025-09-11 23:22:45 -04:00
James Shubin
a5f89d8d7b lang: funcs: dage: Use lazy freshness check
Not sure if this would introduce a glitch or not. Does seem to work
correctly. Without this, examples/lang/datetime2.mcl doesn't update
properly.
2025-09-11 23:19:45 -04:00
James Shubin
790b7199ca lang: New function engine
This mega patch primarily introduces a new function engine. The main
reasons for this new engine are:

1) Massively improved performance with lock-contended graphs.

Certain large function graphs could have very high lock-contention which
turned out to be much slower than I would have liked. This new algorithm
happens to be basically lock-free, so that's another helpful
improvement.

2) Glitch-free function graphs.

The function graphs could "glitch" (an FRP term) which could be
undesirable in theory. In practice this was never really an issue, and
I've not explicitly guaranteed that the new graphs are provably
glitch-free, but in practice things are a lot more consistent.

3) Simpler graph shape.

The new graphs don't require the private channels. This makes
understanding the graphs a lot easier.

4) Branched graphs only run half.

Previously we would run both pure sides of an if statement, and while
this was mostly meant as an early experiment, it stayed in for far too
long; now was the right time to remove it. This also means our graphs
are much smaller and more efficient too.

Note that this changed the function API slightly. Everything has been
ported. It's possible that we'll introduce a new API in the future, but
that isn't expected to require removing the two current APIs.

In addition, we finally split out the "schedule" aspect from
world.schedule(). The "pick me" aspects now happen in a separate
resource, rather than as a yucky side-effect in the function. This also
lets us more precisely choose when we're scheduled, and we can observe
without being chosen too.

As usual many thanks to Sam for helping through some of the algorithmic
graph shape issues!
2025-09-11 23:19:45 -04:00
James Shubin
1e2db5b8c5 gapi: New API
Clear out lots of cruft and mistakes in this old API. The language GAPI
isn't updated with this commit, and as a result, this won't build, but
the fixes for it are coming shortly. We could have merged the two
commits, but it's easier to show them separately for clarity.
2025-09-09 02:21:59 -04:00
James Shubin
6041c5dc22 puppet: langpuppet: Nuke due to porting difficulties
Porting of the GAPI will cause challenging refactoring for me, so I'm
removing this for now, but happy to have it back if someone wants to
port it.
2025-09-09 02:21:59 -04:00
James Shubin
a668cd847e util: New buffered infinite chan primitive
I'm sure there are better implementations, but this feels clean enough
for now. Let's see if this is useful or not.
2025-09-09 02:21:59 -04:00
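
As an aside, one common way to build such an unbounded channel is a goroutine that buffers into a slice between an input and an output channel. A minimal sketch of that pattern (not the actual util implementation):

package sketch

// InfiniteChan returns an input and an output channel: values sent on in are
// buffered internally in a slice and delivered on out in order, so senders
// never block. Closing in drains the buffer and then closes out.
func InfiniteChan[T any]() (in chan<- T, out <-chan T) {
	inCh := make(chan T)
	outCh := make(chan T)
	go func() {
		defer close(outCh)
		var buf []T
		for {
			if len(buf) == 0 { // nothing to send, so we can only receive
				v, ok := <-inCh
				if !ok {
					return
				}
				buf = append(buf, v)
				continue
			}
			select { // either accept more input or send the head of the buffer
			case v, ok := <-inCh:
				if !ok {
					for _, x := range buf { // drain what's left, then exit
						outCh <- x
					}
					return
				}
				buf = append(buf, v)
			case outCh <- buf[0]:
				buf = buf[1:]
			}
		}
	}()
	return inCh, outCh
}
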
James Shubin
474df66ca0 lang: types: Add nil type for placeholder
The nil type is being added, but only slightly. It is not meant for real
use in the language. It will not work in all situations, and it isn't
implemented for many methods.

It is being added only for being a "dummy" placeholder value in the
engine when we have an unused value being propagated along an edge.
2025-09-09 02:21:59 -04:00
James Shubin
2022a31820 make: Leave race detector on by default
Maybe this will help us shake out some bugs.
2025-09-09 02:21:59 -04:00
James Shubin
71756df815 mod: Update fsnotify
This should fix a race condition in that library.

Likely fixed in: https://github.com/fsnotify/fsnotify/pull/678
2025-09-09 02:21:59 -04:00
James Shubin
f808c1ea0c converger: Wrap atomic lookup
Avoid this race. Maybe this code should be revisited with a mutex.
2025-09-09 02:21:59 -04:00
James Shubin
6c206b8010 util: Prevent unlikely race on easy exit
Race detector hit this up once, and I can see how it would be possible.
2025-09-09 02:21:59 -04:00
James Shubin
fb8958f192 engine: graph: Add err mutex
Here's a race that pops up. This is suboptimal locking, but it's not
important for now.
2025-09-09 02:21:59 -04:00
James Shubin
a070722937 etcd: Lock around read to prevent race 2025-09-09 02:21:59 -04:00
James Shubin
b02363ad0d etcd: scheduler: Use atomic to prevent race
This code should be rewritten, but in the meantime, at least avoid the
race detector issues.
2025-09-09 00:04:22 -04:00
James Shubin
bed7e6be79 etcd: Pass through the namespace
This is a bit tricky, and we should nuke and redo some of this API. The
sneaky bit has to do with whether we've already added the namespace
magic into our etcd client or not.
2025-09-09 00:04:22 -04:00
James Shubin
0031acbcbc lang: funcs: structs: Map indexes use half the integers
We want the pattern to be key:0, val:0, key:1, val:1, and so on... This
was previously using 0,1,2,3...

When we use Call directly, we need to fix this. Previously this was dead
code which is why the bug wasn't caught.
2025-09-09 00:04:22 -04:00
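
A tiny illustrative sketch of the naming pattern being described (hypothetical helper, not the real code):

package sketch

import "fmt"

// mapArgNames builds the argument names for a map with the given number of
// key/value pairs: pair i yields "key:i" and "val:i", rather than numbering
// every argument with a single counter 0,1,2,3...
func mapArgNames(pairs int) []string {
	var names []string
	for i := 0; i < pairs; i++ {
		names = append(names, fmt.Sprintf("key:%d", i), fmt.Sprintf("val:%d", i))
	}
	return names
}
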
James Shubin
4e523231d6 engine: graph: Avoid race on fast pause variable
This code is basically unused, but let's keep it in for now in case we
eventually replace it with some contextual ctx code instead.
2025-09-09 00:04:22 -04:00
James Shubin
05d72b339d converger: Combine two signal channels into one
There's no reason we need to remake these two channels, when we can just
use one. We should probably rewrite this code entirely, but at least we
get rid of this race for now.
2025-09-09 00:04:21 -04:00
James Shubin
d2cda4ca78 etcd: Disable the dynamic chooser
We're not using dynamic etcd right now, so disable this code and prevent
the race detector complaining.
2025-09-09 00:04:21 -04:00
James Shubin
2f860be5fe engine: graph: Lock around frequent read races
These are all "safe" in terms of not ever conflicting, but the golang
memory model weirdness requires an actual lock to avoid race detector
errors.
2025-09-09 00:04:21 -04:00
James Shubin
5692837175 lang: Add a simple test of a non-tree dag 2025-09-09 00:04:21 -04:00
James Shubin
04ff2a8c5c lang: ast: Turn this speculation flag into a const
Makes it easier to find when debugging.
2025-09-09 00:04:21 -04:00
James Shubin
166b463fa9 lang: funcs: structs: Update the graph shape docs 2025-09-09 00:04:21 -04:00
James Shubin
2e858ff447 test: Improve colon test comment
I use these patterns when early hacking, and it's good to have a test to
catch them all before I merge.
2025-09-09 00:04:21 -04:00
James Shubin
6fac46da7c misc: Improved stack filtering
Although this needs more debugging, I'm not sure how the format changed.
I guess this is part of the "API" that golang is allowed to break ;)
2025-09-09 00:04:21 -04:00
James Shubin
2b820da311 lang: ast: structs, funcs: structs: Exprif without a channel
This adds an improved "expr if" which only adds the active branch to the
graph and removes the "secret" channel.
2025-08-04 17:45:06 -04:00
James Shubin
86c6ee8dee lang: ast, funcs: Remove the secret channel from call
This removes the secret channel from the call function. Having it made
it more complicated to write new function engines, and it's not clear
why it was even needed in the first place. It seems that even the
current generation of function engines work just fine without it.

Co-authored-by: Samuel Gélineau <gelisam@gmail.com>
2025-08-04 17:04:01 -04:00
James Shubin
0a76910902 lang: core: Skip broken test
I expect this will be deprecated soon, let's see.
2025-08-04 17:04:01 -04:00
James Shubin
138ff8a895 lang: funcs: structs: Fix typos 2025-07-17 01:47:55 -04:00
James Shubin
8edb8e2a7b lang: interfaces: Add new helpers for dealing with args 2025-07-17 01:47:55 -04:00
James Shubin
bdf5209f68 util: errwrap: Add unwrapping for context removal
It's common in many concurrent engines to have a situation where we
collect errors on shutdown. Errors can occur either because a context
closed, or because some engine error happened. The latter can also cause the
former, leading to a list of returned errors. In these scenarios, we
want to filter out all the secondary context errors, unless that's all
that's there. This provides a helper function to do so.
2025-07-16 23:48:37 -04:00
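
A minimal Go sketch of the kind of filtering described, assuming a hypothetical helper name: drop the context cancellation errors from a collected list, unless those are the only errors present.

package sketch

import (
	"context"
	"errors"
)

// withoutContextErrors filters the secondary context errors out of a list of
// errors collected on shutdown. If context errors are all that's there, the
// original list is returned unchanged. (Name and signature are illustrative.)
func withoutContextErrors(errs []error) []error {
	var filtered []error
	for _, err := range errs {
		if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
			continue // a secondary context error, drop it
		}
		filtered = append(filtered, err)
	}
	if len(filtered) == 0 {
		return errs // only context errors were present, so keep them
	}
	return filtered
}
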
James Shubin
299b49bb17 util: errwrap: Add a function for joining
This is like the Append function but for a list.
2025-07-16 23:48:37 -04:00
James Shubin
71e4282d3f lang: interfaces: Add args to struct helper
We should consider if it's possible to avoid all of this transforming
entirely, but at least for now, do it all in one place by having an
available helper.
2025-07-13 03:18:23 -04:00
James Shubin
984aa0f5fc lang: Rename the vertex names
Make it a bit more obvious what the generated nodes are for.
2025-07-13 03:18:23 -04:00
James Shubin
737d1c9004 lang: interfaces: Table can be a standalone type
We'd like to have some useful helpers defined on it, like Copy.
2025-07-13 03:18:23 -04:00
James Shubin
d113fcb6d7 lang: ast, interfaces, interpret: Table should be a well-known type
We use this in enough places, that it's nice to have it as a well-known
alias.
2025-07-13 03:18:23 -04:00
James Shubin
73e641120f pgraph: Improve time complexity of IncomingGraphVertices
This goes from O(n^2) to O(n) when map lookup is O(1). I never really
focused much on optimizing, but I noticed this one in passing.
2025-07-13 03:18:22 -04:00
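
A hedged sketch of why the lookup becomes O(n): when the graph is stored as an adjacency map of maps, checking whether some vertex u has an edge into v is a single O(1) map lookup, so one pass over the vertices suffices instead of a nested scan. (The actual pgraph types differ; this is only an illustration.)

package sketch

// incomingVertices returns the vertices with an edge pointing into v. With an
// adjacency map of maps, the inner membership test is O(1), so the whole
// lookup is O(n) in the number of vertices rather than O(n^2).
func incomingVertices[V comparable](adjacency map[V]map[V]struct{}, v V) []V {
	var in []V
	for u, outgoing := range adjacency {
		if _, exists := outgoing[v]; exists {
			in = append(in, u)
		}
	}
	return in
}
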
James Shubin
f7e446ef6f lang: core: example: nested: We don't use the func suffix anymore
There are only functions, no need to add suffixes to file names.
2025-07-13 03:18:22 -04:00
James Shubin
21917864db lang: core, funcs: Remove facts API
This started because it was possible, not because it was very useful.
The overhead of using the full function API is lessened by the function
API helpers, and the upcoming improvements in the function API.

It's much easier to have one fewer API's to manage and so on.

It's also a stark reminder of how weak tools like "puppet" are which
only really have data collection systems that don't take arguments.
2025-07-13 03:15:53 -04:00
James Shubin
c49d469dcd engine: resources: Work around trailing slash issue in home dir
If the user is logged in, and we try to change from /home/james to
/home/james/ we'll get the error:

usermod: user james is currently used by process ????

and furthermore, it makes no sense to try and make this change since the
usermod function won't do anything if you run:

usermod --gid james --groups wheel --home /home/james/ james

when /etc/passwd has /home/james as the string.
2025-06-25 06:05:28 -04:00
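
A minimal sketch, assuming a hypothetical helper, of the normalization this workaround implies: strip a trailing slash before comparing against the /etc/passwd entry, so the pointless usermod run is never attempted.

package sketch

import "strings"

// sameHomeDir reports whether the requested home directory and the
// /etc/passwd entry refer to the same path once a trailing slash is ignored,
// e.g. "/home/james/" vs "/home/james". (Illustrative helper only.)
func sameHomeDir(requested, passwdEntry string) bool {
	trim := func(s string) string {
		if s == "/" {
			return s // don't strip the root directory itself
		}
		return strings.TrimSuffix(s, "/")
	}
	return trim(requested) == trim(passwdEntry)
}
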
James Shubin
0a79daf277 pgraph: Print cycles on error
I'm a terrible algorithmist, so who knows if this is correct, but it
seems to work in my cursory testing.
2025-06-25 06:05:28 -04:00
James Shubin
a4ed647d02 modules: purpleidea: Add more packages 2025-06-25 06:05:28 -04:00
James Shubin
79c199975d modules: purpleidea: Add another useful helper package 2025-06-25 06:05:11 -04:00
James Shubin
50b4a2a4f7 lang: ast: Make error message clearer 2025-06-25 04:50:47 -04:00
James Shubin
f778008929 modules: misc: Key generation should support other types
I think these short keys are sketchy, but what do I know.
2025-06-25 04:50:47 -04:00
James Shubin
54380a2a1f modules: cups: Add more edges
Useful for performance reasons until we make autoedges blazing fast.
2025-06-25 04:50:47 -04:00
James Shubin
a5fc1256e2 lang: core: embedded: provisioner: Encrypt the filesystem
The provisioner should be able to encrypt things. We should use an empty
passphrase so that the choosing of the actual passphrase can be done at
first boot.
2025-06-23 19:53:52 -04:00
James Shubin
0b2236962c lang: core: embedded: provisioner: Separate home is rare
Maybe one day we want this off to prevent storage issues, but not today.
2025-06-12 18:17:33 -04:00
James Shubin
ee7ad7cbbe lang: core: embedded: provisioner: Skip ignore if no drives available
Small bug for certain setups.
2025-06-12 18:11:14 -04:00
James Shubin
7ba4c4960b modules: meta: Wrong interface for the loc network
Copy pasta bug!
2025-06-08 21:27:04 -04:00
James Shubin
777ea6115b lang: core: embedded: provisioner: Support exec handoff
Could be used for any tool, but mgmt is an obvious possibility.

I should check this code more, but it's roughly right and I'm sure it
will get refactored more when I build opt-in provisioning and so on.
2025-06-08 21:16:54 -04:00
James Shubin
582cea31b0 modules: Get rid of unnecessary printing 2025-06-08 04:33:35 -04:00
James Shubin
c107240098 modules: shorewall: Add manual edges for performance
If you don't want to use auto-edges, then this still works.
2025-06-08 04:30:39 -04:00
James Shubin
6265a330bf modules: misc: The COPR setup must be non-interactive 2025-06-08 04:21:58 -04:00
James Shubin
cfcb35456f modules: misc: Use a template for network
Just a bug, now fixed.
2025-06-08 04:21:27 -04:00
James Shubin
1ef7c370e7 etcd, engine: Fix typos 2025-06-08 03:36:11 -04:00
James Shubin
f22ec07ed3 lang: Improve logging of startup information
Graph size and build time are both helpful.
2025-06-08 03:36:11 -04:00
James Shubin
f594799a7f etcd: ssh: Improve the authentication for ssh etcd world
This was rather tricky, but I think I've learned a lot more about how
SSH actually works. We now only offer up to the server what we can
actually support, which lets us get back a host key that we actually
have a chance of authenticating against.

Needed a new version of the ssh code and had to mess with go mod
garbage.
2025-06-08 03:07:59 -04:00
James Shubin
1ccec72a7c cli, etcd, lib, setup: Support ssh hostkey logic
This makes it easy to pass in the expected key so that we never have to
guess and risk MITM attacks.
2025-06-07 17:55:41 -04:00
James Shubin
55eeb50fb4 lang: Refactor all the highlight helping together
Keep this cleaner and add a bit more.
2025-06-07 17:52:15 -04:00
James Shubin
2b7e9c3200 engine, integration, setup: Seeds should be called properly 2025-06-07 17:52:15 -04:00
James Shubin
25263fe9ea cli: Allow multiples of these args
We forgot the flag that lets the CLI parser actually accept multiples.
2025-06-06 23:53:42 -04:00
James Shubin
1df28c1d00 lang: ast, funcs: Start plumbing through the textarea
We need to get these everywhere and this is a start.
2025-06-06 03:11:06 -04:00
James Shubin
32e91dc7de lang: interpolate: Add temporary textarea info to interpolation
We should really be doing the math to find out how far along the string
each token really is, but that's complicated and tedious, especially
with the simplification passes, so let's skip that for now and just show
the whole thing.
2025-06-06 03:11:06 -04:00
James Shubin
c2c6cb5b6a lang: interfaces: Subtle fixes in textarea
Turns out we need a lot more tests to make working on this easier.
2025-06-06 03:11:06 -04:00
James Shubin
58461323b9 lang: parser: Try to add the end values in parser
Not sure if this is right, but it's a start.
2025-06-06 03:11:06 -04:00
James Shubin
cdc6743d83 lang: ast, interfaces, interpolate: Remove the legacy pos
This ports things to the new textarea. We need to plumb through things a
lot more, especially the string interpolation math to get the right
offsets everywhere, but that's coming.
2025-06-06 02:37:43 -04:00
James Shubin
86dfa5844a lang: ast: Add missing initialization calls
Not sure how we forgot these before.
2025-06-06 02:35:20 -04:00
James Shubin
5d44cd28db lang: ast, interpolate: Pass through uninterpolated strings
We don't need to make a new reference for nothing.
2025-06-06 01:00:11 -04:00
James Shubin
4f977dbe57 lang: Use the source finder wherever we can
This was easy to add and it works great!
2025-06-06 01:00:11 -04:00
James Shubin
573bd283cd lang: funcs: dage: Print out some error locations
Most things don't support this yet, but let's get in some initial
plumbing. It's always difficult to know which function failed, so we
need to start telling the users more precisely.
2025-06-06 01:00:11 -04:00
James Shubin
6ac72974eb lang: ast, interfaces: Move textarea to a common package
We're going to use it everywhere. We also make it more forgiving in the
meantime while we're porting things over.
2025-06-06 01:00:11 -04:00
James Shubin
4189a1299a lang: ast: Add scope feedback for classes
We did this elsewhere for functions, let's add classes too.
2025-06-06 01:00:11 -04:00
James Shubin
dcd4f0709f lang: ast: Provide better error reporting for scope errors
Print these file and line numbers when we can!
2025-06-05 23:00:56 -04:00
James Shubin
75bafa4fd3 mcl, docs: Use the less ambiguous form of the import
Update the style guide as well!
2025-06-05 22:47:38 -04:00
James Shubin
e5ec13f592 modules: misc: Add a class to install a copr repo 2025-06-05 22:47:38 -04:00
James Shubin
1a0fcfb829 modules: misc: Use the less ambiguous import name 2025-06-05 22:34:33 -04:00
James Shubin
ba86665cbb misc: Rename for consistency 2025-06-05 22:34:29 -04:00
James Shubin
301ce03061 misc, setup, util: Add a ulimit
I think this gives us the ability to open more files.
2025-06-05 22:34:29 -04:00
James Shubin
650e8392c5 golang: Tidy mod file
I ran go mod tidy.
2025-06-05 21:46:28 -04:00
James Shubin
d7534b2b3b golang: Update go.mod to avoid tidy warnings 2025-06-05 21:45:06 -04:00
James Shubin
3b88ad3794 lang: core: os: Add a modinfo function
Not sure if this will need renaming, but might be a useful family of
functions.
2025-06-05 21:40:19 -04:00
James Shubin
499b8f2732 lang: funcs: dage: Make error clearer
The implementation of the specific function is sending a nil value,
which is not allowed. This is a bug in the code of that function.
2025-06-05 20:36:15 -04:00
James Shubin
ac3a131a9f modules: meta: Improve firewall rules for our router 2025-06-05 14:47:46 -04:00
James Shubin
a72492f042 modules: meta: Move router to networkd
I can't get NetworkManager working properly in parallel to wireguard. I
get an extra route added and it breaks the tunnel. No idea why. The
networkd equivalent seems to just work.
2025-06-05 14:47:46 -04:00
James Shubin
c51a55e98a modules: misc: Add networkd helpers 2025-06-05 14:47:46 -04:00
James Shubin
892fd1e691 modules: misc: Add a network manager dhcp interface 2025-06-05 14:47:46 -04:00
James Shubin
23aa18d363 modules: shorewall: Refactor to allow bulk rules
Very useful for brownfield deployments where we're migrating a ton of
rules over.
2025-06-05 14:47:46 -04:00
James Shubin
d14930ef28 lang: core: embedded: provisioner: Don't provision to USB disks
This should hopefully skip over any USB drives. Of course if we actually
want to provision to a USB drive, then we'll have to add a feature flag
for that.
2025-06-05 14:47:46 -04:00
James Shubin
81063ae6df etcd: ssh: Reconnect on SSH failures
If the SSH connection dies, the dialer can now reconnect that portion.
2025-06-05 14:47:46 -04:00
James Shubin
f42daf4509 etcd: ssh: Improve logging to be less misleading 2025-06-05 14:47:46 -04:00
James Shubin
1caf6fb3bf etcd: ssh: Pass through the ctx into the SSH dialer
I hope I did this correctly.
2025-06-05 14:47:46 -04:00
James Shubin
16ade43caf engine: Rename world API and add a context
We want to be able to pass ctx through for various reasons.
2025-06-05 14:47:46 -04:00
James Shubin
99d8846934 engine: resources: Remove env
The nix people want this sometimes, and don't in others ;)

Karpfen, you now owe me a $beverage =D
2025-06-05 14:47:09 -04:00
Lourenço Vales
2d78dc9836 modules: contrib: Add cryptpad module
This is a module to deploy cryptpad locally. As it is, this is only
intended for dev purposes, not for production environments.
2025-05-26 14:28:33 +02:00
James Shubin
b85751e07e setup: Add the ssh url arg to the setup utility 2025-05-26 02:26:32 -04:00
James Shubin
0fd6970c0a engine: resources: The http server flag res should autogroup
If we want to receive more than one flag (key) value, then these
obviously need to autogroup together, because it's the same http server
request that comes in, and it should be shared by everyone with the
same path.
2025-05-25 04:46:34 -04:00
James Shubin
936cf7dd9d engine: graph: autogroup: Ensure we sort correctly
This should ensure a sort order of longer things first when running
the algorithm. We want to autogroup two or more http:server:flag
resources together first, before the whole grouped resource gets pulled
into the http:server one.
2025-05-25 04:21:44 -04:00
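
A hedged sketch of the ordering idea, with illustrative names: sort the candidate kinds longest (most specific) first, so that http:server:flag resources can group among themselves before the result is pulled into http:server.

package sketch

import "sort"

// sortLongestFirst orders kinds so that longer (more specific) kinds like
// "http:server:flag" come before shorter ones like "http:server". This is an
// illustration of the ordering, not the actual grouping algorithm.
func sortLongestFirst(kinds []string) {
	sort.SliceStable(kinds, func(i, j int) bool {
		return len(kinds[i]) > len(kinds[j])
	})
}
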
James Shubin
fd5bc63293 engine: traits: Allow self-kind with hierarchical grouping
Not sure why I was blocking this previously, perhaps there was no use
case, and I was trying to ensure there wasn't accidental, unwanted
grouping? Perhaps I wanted to improve performance? In any case, let's
turn this off for now, and check for bugs.
2025-05-25 04:19:38 -04:00
James Shubin
be4cb6658e engine: graph: Simplify the Send/Recv code
This code might be slightly redundant, so simplify it. If something went
wrong, this will be the commit that did it, but it seems relatively
safe.
2025-05-25 04:18:26 -04:00
James Shubin
efff84bbd4 engine: graph: autogroup: Print these errors when debugging
This gives important clues as to why something isn't grouping as
expected. Show them if needed.
2025-05-25 03:48:57 -04:00
James Shubin
74f36c5d73 engine: resources: Add some compile time checks for groupers
These can "break" silently and not autogroup if we change the resource
and it no longer fulfills the interface. Add this compile time check to
prevent that.
2025-05-25 03:47:47 -04:00
James Shubin
b868a60f69 engine: resources: Simplify the Watch loop
I had some legacy unnecessary boolean for sending everywhere. Not sure
why I never re-read it, it's so easy to just copy and paste and carry
on.
2025-05-25 02:12:14 -04:00
James Shubin
f73127ec23 engine: resources: Make error not ambiguous
The same text exists elsewhere.
2025-05-25 01:42:41 -04:00
James Shubin
654e958d3f engine: resources: Add the proper prefix to grouped http resources
Resources that can be grouped into the http:server resource must have
that prefix. Grouping is basically hierarchical, and without that common
prefix, it means we'd have to special-case our grouping algorithm.
2025-05-25 01:40:25 -04:00
James Shubin
1f54253f95 engine: resources: Add a trim field to line resource 2025-05-25 01:40:21 -04:00
James Shubin
2948644536 lang: ast: Remove the double dash
Not sure why this is here?
2025-05-25 01:17:28 -04:00
James Shubin
d2403d2f0c etcd: client: str: We do not want the prefix match
This was a likely copy+pasta error, since we match precise strings here.
If we had two similarly prefixed strings, we'd have an error.
2025-05-25 01:17:28 -04:00
James Shubin
876834ff29 lang: core: fmt: Catch printf edge case 2025-05-25 01:17:28 -04:00
James Shubin
861ba50f9c engine: resources: Add a ui redirect
I always forget the /index.html part so make it easier!
2025-05-15 02:52:57 -04:00
James Shubin
43492a8cfa make: Add missing clean target for wasm 2025-05-15 02:41:58 -04:00
James Shubin
287504cfa8 engine: resources: Add missing struct tags to kv 2025-05-15 01:46:03 -04:00
James Shubin
0847b27f6a modules: misc: Add a pattern for systemd daemon reload
It would be really nice if systemd actually had an API for getting
events on this.
2025-05-14 21:21:27 -04:00
James Shubin
aa4320dd5f modules: misc: Add some authorized key work
More testing and features are needed, but this is a good start.
2025-05-09 04:11:04 -04:00
James Shubin
7c5adb1fec lang: core: net: Add a helper to return the network ip 2025-05-09 02:49:02 -04:00
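
For illustration, a minimal sketch of what such a helper typically does using Go's standard net package; the real mcl function's name and exact semantics may differ.

package sketch

import "net"

// networkIP returns the network address for an address given in CIDR form,
// e.g. "192.168.42.13/24" yields "192.168.42.0". (Illustrative only.)
func networkIP(cidr string) (string, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return "", err
	}
	return ipnet.IP.String(), nil
}
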
James Shubin
20e1c461b8 lang: core: embedded: provisioner: Update import style 2025-05-09 02:48:51 -04:00
James Shubin
e9d485b7f6 lang: ast, core: Add some safety checks
I don't think I'm hitting these, but good for debugging.
2025-05-09 01:08:12 -04:00
James Shubin
e86d66b906 engine: resources: Avoid double slash on error
Errors will include a second slash if this ends with one. Might as well
clean it to avoid the semblance of a bug.
2025-05-09 00:00:46 -04:00
James Shubin
9a63fadfbd engine: resources: Rename var
So it doesn't conflict with "path" import.
2025-05-08 23:16:57 -04:00
James Shubin
7afa372765 engine: resources: Let the user race me
If a user is racing the file resource, don't error permanently, just
skip the file that vanished, and move on with your life.
2025-05-08 23:11:51 -04:00
James Shubin
fddebb2474 engine, lang: core: Match exported resources properly
I inverted the logic for complex setups and forgot to handle the zero
cases. I also didn't notice my loop continue error. This cleans all this
up so that we can have proper exported resource matching.
2025-05-08 22:29:03 -04:00
James Shubin
ad0dd44130 engine: Don't force validation for hidden resources
I think this is what I want in most scenarios; is there a reason to do
otherwise? This is because we may wish to export incomplete resources,
where filling in the remaining fields needed for validation happens on collect.
2025-05-06 03:36:01 -04:00
James Shubin
2ee403bab9 git: Update gitignore files
We were overly matching in some cases by not starting with a slash. This
updates a few other cases too.
2025-05-06 02:52:26 -04:00
James Shubin
0e34f13cce engine: resources: Add a line resource
Simple enough for the common cases. It just needs some tests.
2025-05-06 02:22:39 -04:00
Lourenço Vales
f2a6a6769f engine: resources: Add a WatchFiles field to exec
This adds a field that takes a list of files for exec to watch for
events on.
2025-05-05 23:54:33 -04:00
James Shubin
4903995052 lang: core: os: Add readfilewait which won't error easily
Just a useful helper function which we may want for a while.
2025-05-05 23:54:28 -04:00
James Shubin
774d408e13 engine: Fix up some send/recv corner cases
Initially I wasn't 100% clear or decided on the send/recv semantics.
After some experimenting, I think this is much closer to what we want.
Nothing should break or regress here, this only enables more
possibilities.
2025-05-05 23:53:37 -04:00
James Shubin
ae1d9b94d4 engine: util: Add a debug utility
This is useful for some patches. Let's see if I can remember to use and
improve it!
2025-05-05 22:30:31 -04:00
James Shubin
267bcc144b engine: util: Clean up error messages 2025-05-05 22:29:10 -04:00
James Shubin
fd40c3b64f engine: util: Fix grammar typo 2025-05-05 20:21:29 -04:00
James Shubin
e2b6da01d8 engine: graph: Fix messy imports 2025-05-05 20:21:29 -04:00
James Shubin
dad15f6adc examples: lang: Add missing folder 2025-05-04 13:54:45 -04:00
James Shubin
6ec707aea7 examples: lang: Simplify a common example 2025-05-02 03:12:18 -04:00
James Shubin
807c4b3430 engine: resources: Add an http ui resource
Many years ago I built and demoed a prototype of a simple web ui with a
slider, and as you moved it left and right, it started up or shutdown
some number of virtual machines.

The webui was standalone code, but the rough idea of having events from
a high-level overview flow into mgmt, was what I wanted to test out. At
this stage, I didn't even have the language built yet. This prototype
helped convince me of the way a web ui would fit into everything.

Years later, I built an autogrouping prototype which looks quite similar
to what we have today. I recently picked it back up to polish it a bit
more. It's certainly not perfect, and might even be buggy, but it's
useful enough that it's worth sharing.

If I had more cycles, I'd probably consider removing the "store" mode,
and replace it with the normal "value" system, but we would need the
resource "mutate" API if we wanted this. This would allow us to directly
change the "value" field, without triggering a graph swap, which would
be a lot less clunky than the "store" situation.

Of course I'd love to see a GTK version of this concept, but I figured
it would be more practical to have a web ui over HTTP.

One notable missing feature is that if the "web ui" changes (rather
than just a value changing) we need to offer to the user to reload it.
It currently doesn't get an event for that, and so don't confuse your
users. We also need to be better at validating "untrusted" input here.

There's also no major reason to use the "gin" framework, we should
probably redo this with the standard library alone, but it was easier
for me to push out something quick this way. We can optimize that later.

Lastly, this is all quite ugly since I'm not a very good web dev, so if
you want to make this polished, please do! The wasm code is also quite
terrible due to limitations in the compiler, and maybe one day when that
works better and doesn't constantly deadlock, we can improve it.
2025-05-02 02:14:14 -04:00
James Shubin
6b10477ebc lang: core: convert: Add a simple str to int function 2025-05-02 00:24:21 -04:00
James Shubin
412e480b44 engine: local: Get the logic right
I think we were not benefitting from the cache and sending unnecessary
events. It would be great to have tests for this, but commit this fix
for now, and be embarrassed in the future if I got this code wrong.
2025-05-02 00:04:00 -04:00
James Shubin
cc2a235fbb engine: resources: Add a reminder about events
I might want to do this some day, it could be important. Look into it.
2025-05-02 00:04:00 -04:00
James Shubin
7c77efec1d engine: resources: Cleanup this old code
This is equivalent and cleaner.
2025-05-02 00:04:00 -04:00
James Shubin
4b1548488d lib: It is called mcl officially for a while now 2025-04-28 00:31:14 -04:00
James Shubin
47aecd25c3 lang: funcs: structs: Pass through the type
Not sure why this wasn't done or if it should be, but seems plausible
for now.
2025-04-27 22:23:42 -04:00
James Shubin
fb6eae184a lang: ast: Refactor to unindent slightly 2025-04-27 22:19:14 -04:00
James Shubin
16d3e3063c lang: funcs: facts: Do not reuse fact pointers
In my carelessness, I was re-using pointers when a fact was used twice!
This could cause disastrous consequences like a double close panic on a
datetime.now() fact for example.

In other news, we should consider if it's possible to get more clever
about graph shape optimization so that we don't need more than one
instance of certain functions like datetime.now() in our graph.
2025-04-27 22:14:51 -04:00
James Shubin
37bb67dffd lang: Improve graph shape with speculative execution
Most of the time, we don't need to have a dynamic call sub graph, since
the actual function call could be represented statically as it
originally was before lambda functions were implemented. Simplifying the
graph shape has important performance benefits in terms of both keeping
the graph smaller (memory, etc.) and avoiding the need to run transactions
at runtime (speed) to reshape the graph.

Co-authored-by: Samuel Gélineau <gelisam@gmail.com>
2025-04-27 22:14:51 -04:00
James Shubin
9c9f2f558a lang: Move out this legacy execution function
Hasn't been used in a while, but it's fine if we want to use it for
tests.
2025-04-22 03:24:23 -04:00
James Shubin
1a81e57410 lang: interfaces: Update stale comments 2025-04-22 03:24:23 -04:00
James Shubin
7096293885 lang: funcs: dage: Return better errors
Helps a lot with debugging.
2025-04-22 03:24:23 -04:00
James Shubin
1536a94026 lang: Functions that build should be copyable
It's not entirely clear if this is required, but it's probably a good
idea. We should consider making it a requirement of the BuildableFunc
interface.
2025-04-22 03:24:23 -04:00
James Shubin
1bb1e056c4 lang: funcs: structs: Add some extra safety checks
Not sure if these are even needed.
2025-04-22 03:24:23 -04:00
James Shubin
e71b11f843 lang: funcs: facts: Check if a fact is callable 2025-04-22 03:24:23 -04:00
James Shubin
b4769eefd9 lang: funcs: facts: Add a separate callable interface
Add some symmetry to our interfaces for now, even though I'd love to
drop the idea of "facts" altogether.
2025-04-22 03:24:23 -04:00
James Shubin
d4a24d4c9d lang: funcs: wrapped: Simplify the implementation 2025-04-22 03:24:23 -04:00
Ahmad Abuziad
c5d7fdb0a3 util: Add a bunch of tests
This improves our test coverage significantly.
2025-04-22 03:18:49 -04:00
Lourenço Vales
ae68dd79cb lang: core: iter: Add a range function
This commit implements a range function that mimics Python's range
built-in by having start, stop, and step arguments. There are also
a few examples and tests that mimic Python's examples to guarantee
we're consistent with their behaviour.
2025-04-22 02:37:35 -04:00
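
For illustration, a hedged Go sketch of the Python-like semantics being mimicked (start, stop, step, with stop excluded and negative steps allowed); how the real mcl function handles a zero step may differ.

package sketch

// rangeInts produces the integers from start towards stop, advancing by step
// and excluding stop itself, like Python's range(start, stop, step). A zero
// step simply yields an empty slice here. (Illustrative sketch only.)
func rangeInts(start, stop, step int) []int {
	var out []int
	if step == 0 {
		return out
	}
	if step > 0 {
		for i := start; i < stop; i += step {
			out = append(out, i)
		}
		return out
	}
	for i := start; i > stop; i += step {
		out = append(out, i)
	}
	return out
}
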
James Shubin
de970ee557 engine: resources: Add symlink param to the file res
This adds initial symlink support to the file resource, and while it is
hopefully correct, there are always sneaky edge cases around symlinks
and security, so review and tests are highly encouraged!
2025-04-22 02:21:58 -04:00
James Shubin
60a3d7c65e lang: interfaces: Add more information about graph semantics
Sam thoughts.
2025-04-19 13:02:51 -04:00
James Shubin
9c1c587f7b lang: parse, core: world: Add a collect package
This lets us look at the available resource data for collection, and to
filter it so we can decide what we want to collect on our machine.

Other types of collect functions could be added in the future.
2025-04-05 17:00:53 -04:00
James Shubin
af04d364d0 lang: core: fmt: Make printf handle more cases
Until we make a clean determination about what this should print, this
should handle things for now.
2025-04-05 16:14:11 -04:00
James Shubin
748f05732a engine, etcd: Watch on star pattern for all hostnames
We forgot to watch on star hostname matches.
2025-04-05 15:45:44 -04:00
James Shubin
148bd50e9f engine, etcd: Prevent engine thrashing
These two small bugs would allow thrashing to occur since we'd
constantly delete and re-add exports, and constantly think that a noop
etcd operation made a change.
2025-04-05 15:28:54 -04:00
James Shubin
6c1c08ceda engine: resources: Test to make sure metaparams are preserved
We should ensure these get preserved across encoding/decoding. We rely
on this behaviour.
2025-04-05 12:45:23 -04:00
James Shubin
045b29291e engine, lang: Modern exported resources
I've been waiting to write this patch for a long time. I firmly believe
that the idea of "exported resources" was truly a brilliant one, but
which was never even properly understood by its original inventors! This
patch set aims to show how it should have been done.

The main differences are:

* Real-time modelling, since "once per run" makes no sense.
* Filter with code/functions not language syntax.
* Directed exporting to limit the intended recipients.

The next step is to add more "World" reading and filtering functions to
make it easy and expressive to make your selection of resources to
collect!
2025-04-05 12:45:23 -04:00
James Shubin
955112f64f engine: Let others use the ResUID struct
It's a useful key in maps.
2025-04-05 12:45:23 -04:00
James Shubin
7f341cee84 engine: resources: Improving logging even more
Messages should happen after the event on success. The error scenario
has its own pathway to report.
2025-04-05 12:45:23 -04:00
James Shubin
f71e623931 engine: resources: Print a message on empty file creation
We don't see this event happening which is confusing. There might be
other cases we didn't handle cleanly.
2025-04-05 12:45:23 -04:00
James Shubin
8ff187b4e9 lang: core, funcs: Rename things for consistency
Seems we had different patterns going on. This makes those all
consistent now.
2025-04-05 12:45:23 -04:00
James Shubin
30aca74089 engine, yamlgraph: Disable the old exported resources stuff
These were really just stubs so that I could prove out the reactive
model very early, and I don't think they're really used anywhere.

I'm also not really using the yamlgraph frontend. If someone wants any
of that code, step up, or it will rot even more.
2025-04-05 12:45:23 -04:00
James Shubin
3dfca97f86 engine: Add a method to determine if a res kind is valid 2025-04-05 12:45:23 -04:00
James Shubin
0d4c6e272d lib: Add timing for topological sort
At least for consistency with everyone else...
2025-04-05 12:45:23 -04:00
James Shubin
fce250b8af cli, etcd, lib: Fixup golint issues with SSH
This stuff is arbitrary and stupid.
2025-04-05 12:45:23 -04:00
James Shubin
f6a8404f9f modules: virtualization: Qcow2 should be the default
Snapshots and so much more is only possible with qcow2. A long time ago
it had performance issues, but things seem okay now.
2025-03-28 04:44:56 -04:00
Karpfen
c50a578426 git: Add vendor/ to gitignore 2025-03-22 14:56:16 -04:00
Karpfen
7e8ced534f misc: Use /usr/bin/env for a more generic shebang
Use a path-based SHELL in Makefiles. It was suggested that this is a
better solution for make in cases where there is no /usr/bin/env.

See: https://github.com/purpleidea/mgmt/pull/694#discussion_r1015596204
2025-03-22 14:53:21 -04:00
Lourenço Vales
f2d9219218 lang: core: os: Add is_virtual function
This is a basic implementation of a detection method for whether mgmt is
running in a virtualized environment. We achieve this by doing two types
of checks: on one hand, we check if the CPU flags can confirm the
presence of a virtualized env; on the other, we check if the presence
of known files related to DMI (and their contents) can confirm whether
we're inside a virt env. Either of these situations will cause the
function to return true, with the default case being false. All of these
checks are relatively naive and can be improved by looking at the main
inspiration for this implementation, which was systemd's own check for
the presence of virtualization.
2025-03-21 14:18:55 -04:00
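
A simplified, hedged sketch of the two checks described (the hypervisor CPU flag plus a couple of well-known DMI files); the real implementation follows systemd's detection far more closely, and the vendor hints below are only illustrative.

package sketch

import (
	"os"
	"strings"
)

// isVirtual naively reports whether we appear to be running in a virtualized
// environment: either the "hypervisor" flag shows up in /proc/cpuinfo, or one
// of a few DMI files mentions a known hypervisor vendor. Defaults to false.
func isVirtual() bool {
	if b, err := os.ReadFile("/proc/cpuinfo"); err == nil &&
		strings.Contains(string(b), "hypervisor") {
		return true
	}
	files := []string{
		"/sys/class/dmi/id/product_name",
		"/sys/class/dmi/id/sys_vendor",
	}
	hints := []string{"kvm", "qemu", "vmware", "virtualbox", "bochs"}
	for _, f := range files {
		b, err := os.ReadFile(f)
		if err != nil {
			continue
		}
		s := strings.ToLower(string(b))
		for _, hint := range hints {
			if strings.Contains(s, hint) {
				return true
			}
		}
	}
	return false
}
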
James Shubin
f269096eb9 cli, etcd, lib: Remove the etcd client from main
We are slowly getting rid of more cruft and abstracting it nicely. More
to go!
2025-03-19 06:01:42 -04:00
James Shubin
5665259784 cli, engine, etcd, lib: Move the hostname value to the API
Every world implementation needs a unique UUID, so we might as well move
this to the API.
2025-03-19 05:41:04 -04:00
James Shubin
02fca6409a cli, etcd, lib: Add an etcd client over ssh world backend
This provides a new kind of "world" backend, one that runs etcd over an
SSH connection. This is useful for situations where you want to run an
etcd cluster somewhere for clients across the net, but where you don't
want to expose the ports publicly.

If SSH authentication is setup correctly (using public keys) this will
tunnel over SSH for etcd to connect.

This patch does not yet support deploys over SSH, but that should be
fixed in the future as the world code gets cleaned up more.
2025-03-19 05:33:07 -04:00
James Shubin
a7a5237b07 cli, engine, etcd, lib: Pass in init args
Improve the API and make it more general.
2025-03-18 04:54:13 -04:00
James Shubin
7ad54fe3e8 cli, engine, etcd, lib: Split out the deployer into world
This should hopefully make the refactor into a clean world API a bit
better. Still more to do though!
2025-03-18 04:54:13 -04:00
James Shubin
1a35ab61ca engine: Split out the world filesystem interface 2025-03-18 03:32:42 -04:00
James Shubin
59c33a354c engine, lang: core: world: Split out the scheduler interface 2025-03-18 03:32:42 -04:00
James Shubin
c853e24ded engine: Split out the str world interface
This is a core API and it should really be cleaned up if possible.
2025-03-18 03:32:42 -04:00
James Shubin
692db084e4 engine: Split off resource world interface 2025-03-18 03:32:42 -04:00
James Shubin
1edff3b3f5 engine: Move another interface method 2025-03-18 03:32:42 -04:00
James Shubin
b173d9f8ef engine: Split out the etcd cluster size options
This is clean up work so that it's easier to generalize the world
backends.
2025-03-18 03:32:42 -04:00
James Shubin
a697add8d0 modules: dhcp: Add more package versions 2025-03-17 22:01:43 -04:00
James Shubin
c83e2cb877 git: Add a small ignore entry 2025-03-17 21:51:45 -04:00
James Shubin
642c6b952f lang: core, funcs: Port some functions to CallableFunc API
Some modern features of our function engine and language might require
this new API, so port what we can and figure out the rest later.
2025-03-16 23:23:57 -04:00
James Shubin
f313380480 engine: resources: Container stopped should be valid for no container 2025-03-13 01:03:11 -04:00
James Shubin
f8a4751290 engine: resources: Don't prematurely error docker watches
A subtlety about the engine is that while it guarantees CheckApply
happens in the listed edge-based dependency order, it doesn't stop
Watch from starting up in whatever order it wants to. As a result, we
can prematurely error since the docker service isn't running yet. It may
in fact be in the process of getting installed and started by mgmt
before we then try and use this resource! As a result, let it error once
for free and wait for CheckApply to get going before we start again.

Keep in mind, Watch has to use the .Running() method once to tell
CheckApply to do its initial event. So this concurrency is complex!

It's unclear if this is a bug in mgmt or not, but I'm leaning towards
not, particularly since there isn't an obvious way to fix it.
2025-03-12 06:14:38 -04:00
James Shubin
3ca1aa9cb1 engine: resources: Fix backwards docker ports
This wasn't setup properly, now it's fixed. Woops.
2025-03-12 05:45:27 -04:00
James Shubin
37308b950b cli, gapi: Add more information that deploy is running
There can be a non-obvious pause, so give some hint here...
2025-03-12 05:45:26 -04:00
James Shubin
05306e3729 engine: resources: Modernize the docker resources
They made the assumption that there would be a docker service installed
at Init, which could not be guaranteed. Also use the internal
metaparameter timeout feature instead of private counters.
2025-03-12 05:45:26 -04:00
James Shubin
a6057319a9 lang: Make scope error messages be more consistent 2025-03-12 03:33:08 -04:00
James Shubin
87d8533bd0 lib: Patch out the mess when using our magic option 2025-03-11 04:53:08 -04:00
James Shubin
dce83efa96 etcd: Add a special magic option hack
Workaround some legacy code for now.
2025-03-11 04:53:08 -04:00
James Shubin
1cb9648b08 etcd: Workaround possible rare deadlock
This code is terrible, but maybe this is good enough for now.
2025-03-11 04:18:03 -04:00
James Shubin
17b859d0d7 cli, gapi, lang, lib: Add a flag to skip autoedges
The GAPI API is a bit of a mess, but I think this seems to work for
standalone run and also deploys. Hopefully I didn't add any unnecessary
extra dead code here, but that's archaeology for another day.
2025-03-11 04:18:03 -04:00
James Shubin
8d34910b9b modules: prometheus: Fix title of service template 2025-03-11 04:18:02 -04:00
James Shubin
5667fec410 modules: prometheus: Remove erroneous tmpl extension 2025-03-11 04:18:02 -04:00
James Shubin
46035fee83 engine: resources: Add simple configuration steps to virt builder
This adds some simplistic configuration management / provisioning
functionality to this virt:builder resource which makes it easier to
kick off special functionality that we might want to build.
2025-03-11 04:18:02 -04:00
James Shubin
219d25b330 engine: resources, modules: virtualization: Add a seeds option
This makes it easier to configure the machine by giving it an automatic
initial setup of an mgmt client.
2025-03-11 04:18:02 -04:00
James Shubin
181aab9c81 engine: resources: Fix small cmp typo in virt builder res 2025-03-10 19:01:05 -04:00
James Shubin
aabcaa7c8c setup: Error if no options are specified 2025-03-10 18:31:20 -04:00
James Shubin
09f3b8c05f setup: Add seeds and no server feature
We will want both of these for most clustered setups.
2025-03-10 16:24:18 -04:00
James Shubin
f5e2fde20d cli, lib: Fix typo 2025-03-10 16:23:56 -04:00
James Shubin
50bd6f5811 lib, gapi, cli: Add a wait flag to empty and a new default
Change the default "wait" state for when you run the empty frontend and
there's already an available deploy waiting. You almost certainly want
to start running it right away.

Example:

mgmt etcd

mgmt run --hostname h1 --no-server --tmp-prefix --seeds=http://127.0.0.1:2379 empty
mgmt run --hostname h2 --no-server --tmp-prefix --seeds=http://127.0.0.1:2379 empty

mgmt deploy --no-git --seeds=http://127.0.0.1:2379 lang examples/lang/hello0.mcl

mgmt run --hostname h3 --no-server --tmp-prefix --seeds=http://127.0.0.1:2379 empty

In fact, you don't even need to start up etcd first for this to all
work.
2025-03-10 14:56:42 -04:00
James Shubin
37e5a37045 setup: Fix firstboot typo 2025-03-09 15:55:58 -04:00
James Shubin
8544a66257 lang: Allow more than one possible error in tests
There are some rare situations with completely symmetrical graphs which
mean that there isn't a "more correct" error. This is due to the
annoying map iteration non-determinism, and so instead of fighting to
remove every bit of that, let's just accept more than one error here.
2025-03-09 03:03:37 -04:00
James Shubin
a50765393d lang: ast: Catch ordering errors 2025-03-09 01:50:28 -05:00
James Shubin
6bae5fc561 pgraph: Make our slow toposort even slower
I think this makes it more deterministic, but I'm not sure it matters,
since we are comparing based on the .String() property, and some nodes
have the same value, so it ends up depending on the order they're added
to the graph datastructure, but then we lose this information since it's
a map. Yuck.
2025-03-09 01:50:28 -05:00
James Shubin
f87c550be1 lang: ast, interfaces: Improve speculation safety checks
We want to speculate in more cases, so make sure that speculation is
safe!
2025-03-08 17:45:29 -05:00
James Shubin
aea894a706 lang: ast: Add more context to pointer errors
This makes debugging easier. We don't expect these errors to occur with
normal usage.
2025-03-08 17:45:29 -05:00
James Shubin
a549a30f71 lang: ast: Add more context to table errors
This makes debugging easier. We don't expect these errors to occur with
normal usage.
2025-03-08 17:45:29 -05:00
James Shubin
2899bc234a lang: Add a forkv loop statement for iterating over a map
This adds a forkv statement which is used to iterate over a map with a
body of statements. This is an important data transformation tool which
should be used sparingly, but is important to have.

An import statement inside of a forkv loop is not currently supported.
We have a simple hack to detect the obvious cases, but more deeply
nested scenarios probably won't be caught, and you'll get an obscure
error message if you try to do this.

This was incredibly challenging to get right, and it's all thanks to Sam
for his brilliance.

Note, I couldn't think of a better keyword than "forkv", but suggestions
are welcome if you think you have a better idea. Other ideas were formap
and foreach, but neither got me very excited.
2025-03-08 17:45:29 -05:00
James Shubin
cf7e73bbf6 lang: Add a for loop statement for iterating over a list
This adds a for statement which is used to iterate over a list with a
body of statements. This is an important data transformation tool which
should be used sparingly, but is important to have.

An import statement inside of a for loop is not currently supported. We
have a simple hack to detect the obvious cases, but more deeply nested
scenarios probably won't be caught, and you'll get an obscure error
message if you try to do this.

This was incredibly challenging to get right, and it's all thanks to Sam
for his brilliance.

Co-authored-by: Samuel Gélineau <gelisam@gmail.com>
2025-03-08 17:45:29 -05:00
James Shubin
c456a5ab97 lang: types: Add some length methods for list and map 2025-03-06 16:55:55 -05:00
James Shubin
b5ae96e0d4 lang: types: Add some helpful true and false values
In case we need one, we don't need to build it.
2025-03-06 16:55:55 -05:00
James Shubin
f792facde9 lang: gapi: Debug stalled graph errors
This makes debugging these scenarios a lot easier.
2025-03-05 17:24:26 -05:00
James Shubin
a64e3ee179 misc: Add baddev tooling
I think this is the *wrong* way to build this, but it's perfectly legal
to have a feature branch with this committed that people can develop
against. We can always cherry-pick off those commits to merge them, and
we can update and rebase this commit over time when needed.
2025-02-27 17:18:39 -05:00
James Shubin
c5257dd64b lang: parser: Simplify code and format it
This would get done by gofmt -s anyways.
2025-02-27 17:13:31 -05:00
James Shubin
f74bc969ca make: Cleanup old targets 2025-02-27 17:01:37 -05:00
Edward Toroshchyn
63d7b8e51e engine: resources: exec: Fix wrong err variable being checked in test 2025-02-27 14:50:38 -05:00
James Shubin
d56896cb0d lang: core: Simplify list and map lookup default functions 2025-02-26 19:59:47 -05:00
James Shubin
d579787bcd lang: core: Simplify list and map lookup functions 2025-02-26 19:59:47 -05:00
James Shubin
37fffce9f5 lang: core: Simplify implementation of the "contains" function 2025-02-26 18:12:38 -05:00
James Shubin
d7ecc72b41 lang: ast, gapi, interfaces, parser: Print line numbers on error
This adds an initial implementation of printing line numbers on type
unification errors. It also attempts to print a visual position
indicator for most scenarios.

This patch was started by Felix Frank and finished by James Shubin.

Co-authored-by: Felix Frank <Felix.Frank.de@gmail.com>
2025-02-25 20:15:02 -05:00
James Shubin
f754bbbf90 git: Add more entries to gitignore file 2025-02-25 12:10:04 -05:00
Felix Frank
bb171ced86 misc: Add missing sudo invocation 2025-02-24 10:43:43 -05:00
Edward Toroshchyn
c25a2a257b misc: Fix typos and spelling errors 2025-02-24 16:01:46 +01:00
Lourenço
1f90de31e7 lang: core: net: Add a new func for URL parsing
This is a first attempt to add a new function for URL parsing, using
go's net/url package and the simple API. This is still a barebones
implementation; there's the possibility of exposing more information. It also
includes simple tests.
2025-02-19 13:35:20 +01:00
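
For illustration, a hedged sketch of the kind of wrapper described, built on Go's net/url; the exact set of fields the real mcl function exposes may differ.

package sketch

import "net/url"

// parseURL splits a URL into its basic parts using the standard library.
// (The field names returned here are illustrative choices.)
func parseURL(raw string) (map[string]string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return nil, err
	}
	return map[string]string{
		"scheme":   u.Scheme,
		"host":     u.Host,
		"hostname": u.Hostname(),
		"port":     u.Port(),
		"path":     u.Path,
		"query":    u.RawQuery,
		"fragment": u.Fragment,
	}, nil
}
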
Lourenço
b5384d1278 engine: resources: Adding logic for svc unit state
This is a small patch that adds logic for checking the state of the
unit file and making the CheckApply function more robust.
2025-02-18 17:14:27 -05:00
James Shubin
d80ec4aaa7 engine: resources: Detect simple self-referential frags
It would be a likely mistake to create a self-referential frag, and mgmt
would spin forever updating the file... We probably don't want this, so
let's just catch this case in Validate.

Of course you could get around this with multiple files, and a fancier
search could statically check the graph, but the goal isn't to prevent
any bad code, since that's not likely to be possible.
2025-02-15 06:58:15 -05:00
James Shubin
5d63376087 modules: meta: Remove duplicate line 2025-02-13 06:33:47 -05:00
James Shubin
4fd6ced287 docs: Add new talks from Belgium, 2025 2025-02-10 10:34:53 -05:00
James Shubin
82489c3fe0 engine: resources: Add shell field to user resource 2025-02-07 18:08:25 -05:00
James Shubin
a064a87ecd lang: Add a weird test case
Mark Smith was concerned we might not handle this case correctly. It
seems we do in fact catch this scenario, so it's not an issue. Yay!
2025-02-07 17:57:36 -05:00
James Shubin
f51a1200d1 util: Add a helper to get the users shell entry 2025-02-07 17:57:36 -05:00
James Shubin
ecd5a0f304 util, lang, etcd: Move the error type to our util package
We use this error in a lot of places, let's centralize it a bit.
2025-02-07 17:57:36 -05:00
James Shubin
096ef4cc66 engine: resources: Modernize the user resource
Do some small fixups like adding ctx and fixing obvious bugs.
2025-02-07 17:57:36 -05:00
James Shubin
7da98ef349 test: Rename the reflowed comments test to make it easier to find
This makes running one-off executions of it a bit easier.
2025-02-06 08:19:22 -05:00
hades
8cd7fa27e2 engine: resources: exec: Add a bit of documentation to exec res 2025-02-06 08:18:48 -05:00
James Shubin
134e2f1cd9 examples: lang: Add a new env example 2025-02-06 07:49:35 -05:00
Edward Toroshchyn
042ae02428 engine: resources: exec: Add tests to check env values 2025-02-06 07:14:12 -05:00
Edward Toroshchyn
1e33c1fdae misc: Add vim syntax highlighting file
This is an extremely basic initial version of syntax highlighting, written just
so that I can edit the MCL files in vim and not cry.

The following features are supported:
 - MCL keywords
 - strings (including escape characters)
 - comments
 - built-in resources (as of 0.0.27)
2025-02-05 08:50:50 -05:00
James Shubin
bdc46648ff modules: Add prometheus and grafana modules
These are really stubs, and need some more testing and integration, but
there were some people who expressed interest in this, so let's push it
early.
2025-02-03 04:46:44 -05:00
James Shubin
ab9c1d3d96 modules: cups: Fixup obvious missing bits
I didn't merge these parts because I have some other WIP code I was
working on. Might as well put this in now.
2025-02-02 01:51:41 -05:00
James Shubin
0fb546ad61 engine: resources: Make some svc cleanups
We would often actually drop the refresh because of bad checks.
2025-02-02 01:43:14 -05:00
James Shubin
7439d532c7 modules: dhcp: Include stub hosts file
Even if we don't have any hosts, we should still have a valid config.
2025-01-31 11:44:30 -05:00
James Shubin
de9c0adcc0 releases: Add release notes for 0.0.27
I did this build with: `make release` followed by:
GOTAGS='noaugeas' make release when the arm64 build failed.
2025-01-31 03:22:45 -05:00
629 changed files with 32428 additions and 11964 deletions

16
.gitignore vendored
View File

@@ -5,16 +5,22 @@
.envrc
old/
tmp/
/vendor/
*WIP
*_stringer.go
mgmt
mgmt.static
/mgmt
/mgmt.static
# crossbuild artifacts
build/mgmt-*
/build/mgmt-*
mgmt.iml
rpmbuild/
releases/
/rpmbuild/
/releases/
/repository/
/pprof/
/sites/
# vim swap files
.*.sw[op]
# prevent `echo foo 2>1` typo errors by making this file read-only
1
# allow users to keep some junk files around
*.wip

View File

@@ -27,7 +27,7 @@
# additional permission if he deems it necessary to achieve the goals of this
# additional permission.
SHELL = /usr/bin/env bash
SHELL = bash
.PHONY: all art cleanart version program lang path deps run race generate build build-debug crossbuild clean test gofmt yamlfmt format docs
.PHONY: rpmbuild mkdirs rpm srpm spec tar upload upload-sources upload-srpms upload-rpms upload-releases copr tag
.PHONY: mkosi mkosi_fedora-latest mkosi_fedora-older mkosi_stream-latest mkosi_debian-stable mkosi_ubuntu-latest mkosi_archlinux
@@ -38,6 +38,7 @@ SHELL = /usr/bin/env bash
# a large amount of output from this `find`, can cause `make` to be much slower!
GO_FILES := $(shell find * -name '*.go' -not -path 'old/*' -not -path 'tmp/*')
MCL_FILES := $(shell find lang/ -name '*.mcl' -not -path 'old/*' -not -path 'tmp/*')
MISC_FILES := $(shell find engine/resources/http_server_ui/)
SVERSION := $(or $(SVERSION),$(shell git describe --match '[0-9]*\.[0-9]*\.[0-9]*' --tags --dirty --always))
VERSION := $(or $(VERSION),$(shell git describe --match '[0-9]*\.[0-9]*\.[0-9]*' --tags --abbrev=0))
@@ -191,13 +192,6 @@ path: ## create working paths
deps: ## install system and golang dependencies
./misc/make-deps.sh
run: ## run mgmt
find . -maxdepth 1 -type f -name '*.go' -not -name '*_test.go' | xargs go run -ldflags "-X main.program=$(PROGRAM) -X main.version=$(SVERSION)"
# include race flag
race:
find . -maxdepth 1 -type f -name '*.go' -not -name '*_test.go' | xargs go run -race -ldflags "-X main.program=$(PROGRAM) -X main.version=$(SVERSION)"
generate:
go generate
@@ -205,11 +199,15 @@ lang: ## generates the lexer/parser for the language frontend
@# recursively run make in child dir named lang
@$(MAKE) --quiet -C lang
resources: ## builds the resources dependencies required for the engine backend
@# recursively run make in child dir named engine/resources
@$(MAKE) --quiet -C engine/resources
# build a `mgmt` binary for current host os/arch
$(PROGRAM): build/mgmt-${GOHOSTOS}-${GOHOSTARCH} ## build an mgmt binary for current host os/arch
cp -a $< $@
$(PROGRAM).static: $(GO_FILES) $(MCL_FILES) go.mod go.sum
$(PROGRAM).static: $(GO_FILES) $(MCL_FILES) $(MISC_FILES) go.mod go.sum
@echo "Building: $(PROGRAM).static, version: $(SVERSION)..."
go generate
go build $(TRIMPATH) -a -installsuffix cgo -tags netgo -ldflags '-extldflags "-static" -X main.program=$(PROGRAM) -X main.version=$(SVERSION) -s -w' -o $(PROGRAM).static $(BUILD_FLAGS);
@@ -220,15 +218,22 @@ build: $(PROGRAM)
build-debug: LDFLAGS=
build-debug: $(PROGRAM)
# if you're using the bad/dev branch, you might want this too!
baddev: BUILD_FLAGS = -tags 'noaugeas novirt'
baddev: $(PROGRAM)
# pattern rule target for (cross)building, mgmt-OS-ARCH will be expanded to the correct build
# extract os and arch from target pattern
GOOS=$(firstword $(subst -, ,$*))
GOARCH=$(lastword $(subst -, ,$*))
build/mgmt-%: $(GO_FILES) $(MCL_FILES) go.mod go.sum | lang funcgen
build/mgmt-%: $(GO_FILES) $(MCL_FILES) $(MISC_FILES) go.mod go.sum | lang resources funcgen
@# If you need to run `go mod tidy` then this can trigger.
@if [ "$(PKGNAME)" = "" ]; then echo "\$$(PKGNAME) is empty, test with: go list ."; exit 42; fi
@echo "Building: $(PROGRAM), os/arch: $*, version: $(SVERSION)..."
time env GOOS=${GOOS} GOARCH=${GOARCH} go build $(TRIMPATH) -ldflags=$(PKGNAME)="-X main.program=$(PROGRAM) -X main.version=$(SVERSION) ${LDFLAGS}" -o $@ $(BUILD_FLAGS)
@# XXX: leave race detector on by default for now. For production
@# builds, we can consider turning it off for performance improvements.
@# XXX: ./mgmt run --tmp-prefix lang something_fast.mcl > /tmp/race 2>&1 # search for "WARNING: DATA RACE"
time env GOOS=${GOOS} GOARCH=${GOARCH} go build $(TRIMPATH) -race -ldflags=$(PKGNAME)="-X main.program=$(PROGRAM) -X main.version=$(SVERSION) ${LDFLAGS}" -o $@ $(BUILD_FLAGS)
# create a list of binary file names to use as make targets
# to use this you might want to run something like:
@@ -240,6 +245,7 @@ crossbuild: ${crossbuild_targets}
clean: ## clean things up
$(MAKE) --quiet -C test clean
$(MAKE) --quiet -C lang clean
$(MAKE) --quiet -C engine/resources clean
$(MAKE) --quiet -C misc/mkosi clean
rm -f lang/core/generated_funcs.go || true
rm -f lang/core/generated_funcs_test.go || true
@@ -643,5 +649,6 @@ funcgen: lang/core/generated_funcs.go
lang/core/generated_funcs.go: lang/funcs/funcgen/*.go lang/core/funcgen.yaml lang/funcs/funcgen/templates/generated_funcs.go.tpl
@echo "Generating: funcs..."
@go run `find lang/funcs/funcgen/ -maxdepth 1 -type f -name '*.go' -not -name '*_test.go'` -templates=lang/funcs/funcgen/templates/generated_funcs.go.tpl >/dev/null
@gofmt -s -w $@
# vim: ts=8

View File

@@ -6,7 +6,6 @@
[![Build Status](https://github.com/purpleidea/mgmt/workflows/.github/workflows/test.yaml/badge.svg)](https://github.com/purpleidea/mgmt/actions/)
[![GoDoc](https://img.shields.io/badge/godoc-reference-5272B4.svg?style=flat-square)](https://godocs.io/github.com/purpleidea/mgmt)
[![Matrix](https://img.shields.io/badge/matrix-%23mgmtconfig-orange.svg?style=flat-square)](https://matrix.to/#/#mgmtconfig:matrix.org)
[![IRC](https://img.shields.io/badge/irc-%23mgmtconfig-orange.svg?style=flat-square)](https://web.libera.chat/?channels=#mgmtconfig)
[![Patreon](https://img.shields.io/badge/patreon-donate-yellow.svg?style=flat-square)](https://www.patreon.com/purpleidea)
[![Liberapay](https://img.shields.io/badge/liberapay-donate-yellow.svg?style=flat-square)](https://liberapay.com/purpleidea/donate)
@@ -73,7 +72,6 @@ Come join us in the `mgmt` community!
| Medium | Link |
|---|---|
| Matrix | [#mgmtconfig](https://matrix.to/#/#mgmtconfig:matrix.org) on Matrix.org |
| IRC | [#mgmtconfig](https://web.libera.chat/?channels=#mgmtconfig) on Libera.Chat |
| Twitter | [@mgmtconfig](https://twitter.com/mgmtconfig) & [#mgmtconfig](https://twitter.com/hashtag/mgmtconfig) |
| Mailing list | [looking for a new home, suggestions welcome](https://gitlab.freedesktop.org/freedesktop/freedesktop/-/issues/1082) |
| Patreon | [purpleidea](https://www.patreon.com/purpleidea) on Patreon |
@@ -85,9 +83,19 @@ the configuration management space, but has a fast, modern, distributed systems
approach. The project contains an engine and a language.
[Please have a look at an introductory video or blog post.](docs/on-the-web.md)
Mgmt is a fairly new project. It is usable today, but not yet feature complete.
With your help you'll be able to influence our design and get us to 1.0 sooner!
Interested users should read the [quick start guide](docs/quick-start-guide.md).
Mgmt is over ten years old! It is very powerful today, and has a solid
foundation and architecture which has been polished over the years. As with all
software, there are bugs to fix and improvements to be made, but I expect
they're easy to hack through and fix if you find any. Interested users should
start with the [official website](https://mgmtconfig.com/docs/).
## Sponsors:
Mgmt is generously sponsored by:
[![m9rx corporation](art/m9rx.png)](https://m9rx.com/)
Please reach out if you'd like to sponsor!
## Documentation:

BIN
art/m9rx.png Normal file

Binary file not shown (new image, 37 KiB).

View File

@@ -125,6 +125,8 @@ type Args struct {
DocsCmd *DocsGenerateArgs `arg:"subcommand:docs" help:"generate documentation"`
ToolsCmd *ToolsArgs `arg:"subcommand:tools" help:"collection of useful tools"`
// This never runs, it gets preempted in the real main() function.
// XXX: Can we do it nicely with the new arg parser? can it ignore all args?
EtcdCmd *EtcdArgs `arg:"subcommand:etcd" help:"run standalone etcd"`
@@ -173,6 +175,10 @@ func (obj *Args) Run(ctx context.Context, data *cliUtil.Data) (bool, error) {
return cmd.Run(ctx, data)
}
if cmd := obj.ToolsCmd; cmd != nil {
return cmd.Run(ctx, data)
}
// NOTE: we could return true, fmt.Errorf("...") if more than one did
return false, nil // nobody activated
}

View File

@@ -36,9 +36,11 @@ import (
"os/signal"
cliUtil "github.com/purpleidea/mgmt/cli/util"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/etcd"
"github.com/purpleidea/mgmt/etcd/client"
"github.com/purpleidea/mgmt/etcd/deployer"
etcdfs "github.com/purpleidea/mgmt/etcd/fs"
etcdSSH "github.com/purpleidea/mgmt/etcd/ssh"
"github.com/purpleidea/mgmt/gapi"
"github.com/purpleidea/mgmt/lib"
"github.com/purpleidea/mgmt/util"
@@ -52,12 +54,28 @@ import (
// particular one contains all the common flags for the `deploy` subcommand
// which all frontends can use.
type DeployArgs struct {
Seeds []string `arg:"--seeds,env:MGMT_SEEDS" help:"default etc client endpoint"`
// SSHURL can be specified if we want to transport the SSH client
// connection over SSH. If this is specified, the second hop is made
// with the Seeds values, but they connect from this destination. You
// can specify this in the standard james@server:22 format. This will
// use your ~/.ssh/ directory for public key authentication and
// verifying the host key in the known_hosts file. This must already be
// setup for things to work.
SSHURL string `arg:"--ssh-url" help:"transport the etcd client connection over SSH to this server"`
// SSHHostKey is the key part (which is already base64 encoded) from a
// known_hosts file, representing the host we're connecting to. If this
// is specified, then it overrides looking for it in the URL.
SSHHostKey string `arg:"--ssh-hostkey" help:"use this ssh known hosts key when connecting over SSH"`
Seeds []string `arg:"--seeds,separate,env:MGMT_SEEDS" help:"default etcd client endpoints"`
Noop bool `arg:"--noop" help:"globally force all resources into no-op mode"`
Sema int `arg:"--sema" default:"-1" help:"globally add a semaphore to all resources with this lock count"`
NoGit bool `arg:"--no-git" help:"don't look at git commit id for safe deploys"`
Force bool `arg:"--force" help:"force a new deploy, even if the safety chain would break"`
NoAutoEdges bool `arg:"--no-autoedges" help:"skip the autoedges stage"`
DeployEmpty *cliUtil.EmptyArgs `arg:"subcommand:empty" help:"deploy empty payload"`
DeployLang *cliUtil.LangArgs `arg:"subcommand:lang" help:"deploy lang (mcl) payload"`
DeployYaml *cliUtil.YamlArgs `arg:"subcommand:yaml" help:"deploy yaml graph payload"`
@@ -184,26 +202,53 @@ func (obj *DeployArgs) Run(ctx context.Context, data *cliUtil.Data) (bool, error
}
}()
simpleDeploy := &deployer.SimpleDeploy{
Client: etcdClient,
var world engine.World
world = &etcd.World{ // XXX: What should some of these fields be?
Client: etcdClient, // XXX: remove me when etcdfs below is done
Seeds: obj.Seeds,
NS: lib.NS,
//MetadataPrefix: lib.MetadataPrefix,
//StoragePrefix: lib.StoragePrefix,
//StandaloneFs: ???.DeployFs, // used for static deploys
//GetURI: func() string {
//},
}
if obj.SSHURL != "" { // alternate world implementation over SSH
world = &etcdSSH.World{
URL: obj.SSHURL,
HostKey: obj.SSHHostKey,
Seeds: obj.Seeds,
NS: lib.NS,
//MetadataPrefix: lib.MetadataPrefix,
//StoragePrefix: lib.StoragePrefix,
//StandaloneFs: ???.DeployFs, // used for static deploys
//GetURI: func() string {
//},
}
// XXX: We need to first get rid of the standalone etcd client,
// and then pull the etcdfs stuff in so it uses that client.
return false, fmt.Errorf("--ssh-url is not implemented yet")
}
worldInit := &engine.WorldInit{
Hostname: "", // XXX: Should we set this?
Debug: data.Flags.Debug,
Logf: func(format string, v ...interface{}) {
Logf("deploy: "+format, v...)
Logf("world: etcd: "+format, v...)
},
}
if err := simpleDeploy.Init(); err != nil {
return false, errwrap.Wrapf(err, "deploy Init failed")
if err := world.Connect(ctx, worldInit); err != nil {
return false, errwrap.Wrapf(err, "world Connect failed")
}
defer func() {
err := errwrap.Wrapf(simpleDeploy.Close(), "deploy Close failed")
err := errwrap.Wrapf(world.Cleanup(), "world Cleanup failed")
if err != nil {
// TODO: cause the final exit code to be non-zero
Logf("deploy cleanup error: %+v", err)
// TODO: cause the final exit code to be non-zero?
Logf("close error: %+v", err)
}
}()
// get max id (from all the previous deploys)
max, err := simpleDeploy.GetMaxDeployID(ctx)
max, err := world.GetMaxDeployID(ctx)
if err != nil {
return false, errwrap.Wrapf(err, "error getting max deploy id")
}
@@ -211,6 +256,7 @@ func (obj *DeployArgs) Run(ctx context.Context, data *cliUtil.Data) (bool, error
var id = max + 1 // next id
Logf("previous max deploy id: %d", max)
// XXX: Get this from the World API? (Which might need improving!)
etcdFs := &etcdfs.Fs{
Client: etcdClient,
// TODO: using a uuid is meant as a temporary measure, i hate them
@@ -251,13 +297,16 @@ func (obj *DeployArgs) Run(ctx context.Context, data *cliUtil.Data) (bool, error
deploy.Noop = obj.Noop
deploy.Sema = obj.Sema
deploy.NoAutoEdges = obj.NoAutoEdges
str, err := deploy.ToB64()
if err != nil {
return false, errwrap.Wrapf(err, "encoding error")
}
Logf("pushing...")
// this nominally checks the previous git hash matches our expectation
if err := simpleDeploy.AddDeploy(ctx, id, hash, pHash, &str); err != nil {
if err := world.AddDeploy(ctx, id, hash, pHash, &str); err != nil {
return false, errwrap.Wrapf(err, "could not create deploy id `%d`", id)
}
Logf("success, id: %d", id)

View File

@@ -141,6 +141,8 @@ func (obj *RunArgs) Run(ctx context.Context, data *cliUtil.Data) (bool, error) {
Noop: obj.Noop,
Sema: obj.Sema,
//Update: obj.Update,
NoAutoEdges: obj.NoAutoEdges,
},
Fs: standaloneFs,

150
cli/tools.go Normal file
View File

@@ -0,0 +1,150 @@
// Mgmt
// Copyright (C) James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//
// Additional permission under GNU GPL version 3 section 7
//
// If you modify this program, or any covered work, by linking or combining it
// with embedded mcl code and modules (and that the embedded mcl code and
// modules which link with this program, contain a copy of their source code in
// the authoritative form) containing parts covered by the terms of any other
// license, the licensors of this program grant you additional permission to
// convey the resulting work. Furthermore, the licensors of this program grant
// the original author, James Shubin, additional permission to update this
// additional permission if he deems it necessary to achieve the goals of this
// additional permission.
package cli
import (
"context"
"os"
"os/signal"
"sync"
"syscall"
cliUtil "github.com/purpleidea/mgmt/cli/util"
"github.com/purpleidea/mgmt/tools"
)
// ToolsArgs is the CLI parsing structure and type of the parsed result. This
// particular one contains all the common flags for the `tools` subcommand.
type ToolsArgs struct {
tools.Config // embedded config (can't be a pointer) https://github.com/alexflint/go-arg/issues/240
ToolsGrow *cliUtil.ToolsGrowArgs `arg:"subcommand:grow" help:"tools for growing storage"`
}
// Run executes the correct subcommand. It errors if there's ever an error. It
// returns true if we did activate one of the subcommands. It returns false if
// we did not. This information is used so that the top-level parser can return
// usage or help information if no subcommand activates. This particular Run is
// the run for the main `tools` subcommand. The tools command provides some
// functionality which can be helpful with provisioning and config management.
func (obj *ToolsArgs) Run(ctx context.Context, data *cliUtil.Data) (bool, error) {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
var name string
var args interface{}
if cmd := obj.ToolsGrow; cmd != nil {
name = cliUtil.LookupSubcommand(obj, cmd) // "grow"
args = cmd
}
_ = name
Logf := func(format string, v ...interface{}) {
// Don't block this globally...
//if !data.Flags.Debug {
// return
//}
data.Flags.Logf("main: "+format, v...)
}
var api tools.API
if cmd := obj.ToolsGrow; cmd != nil {
api = &tools.Grow{
ToolsGrowArgs: args.(*cliUtil.ToolsGrowArgs),
Config: obj.Config,
Program: data.Program,
Version: data.Version,
Debug: data.Flags.Debug,
Logf: Logf,
}
}
if api == nil {
return false, nil // nothing found (display help!)
}
// We don't use these for the tools command in normal operation.
if data.Flags.Debug {
cliUtil.Hello(data.Program, data.Version, data.Flags) // say hello!
defer Logf("goodbye!")
}
// install the exit signal handler
wg := &sync.WaitGroup{}
defer wg.Wait()
exit := make(chan struct{})
defer close(exit)
wg.Add(1)
go func() {
defer cancel()
defer wg.Done()
// must have buffer for max number of signals
signals := make(chan os.Signal, 3+1) // 3 * ^C + 1 * SIGTERM
signal.Notify(signals, os.Interrupt) // catch ^C
//signal.Notify(signals, os.Kill) // catch signals
signal.Notify(signals, syscall.SIGTERM)
var count uint8
for {
select {
case sig := <-signals: // any signal will do
if sig != os.Interrupt {
data.Flags.Logf("interrupted by signal")
return
}
switch count {
case 0:
data.Flags.Logf("interrupted by ^C")
cancel()
case 1:
data.Flags.Logf("interrupted by ^C (fast pause)")
cancel()
case 2:
data.Flags.Logf("interrupted by ^C (hard interrupt)")
cancel()
}
count++
case <-exit:
return
}
}
}()
if err := api.Main(ctx); err != nil {
if data.Flags.Debug {
data.Flags.Logf("main: %+v", err)
}
return false, err
}
return true, nil
}

View File

@@ -70,7 +70,9 @@ func LookupSubcommand(obj interface{}, st interface{}) string {
}
// EmptyArgs is the empty CLI parsing structure and type of the parsed result.
type EmptyArgs struct{}
type EmptyArgs struct {
Wait bool `arg:"--wait" help:"don't use any existing (stale) deploys"`
}
// LangArgs is the lang CLI parsing structure and type of the parsed result.
type LangArgs struct {
@@ -87,7 +89,7 @@ type LangArgs struct {
OnlyUnify bool `arg:"--only-unify" help:"stop after type unification"`
SkipUnify bool `arg:"--skip-unify" help:"skip type unification"`
UnifySolver *string `arg:"--unify-name" help:"pick a specific unification solver"`
UnifyOptimizations []string `arg:"--unify-optimizations" help:"list of unification optimizations to request (experts only)"`
UnifyOptimizations []string `arg:"--unify-optimizations,separate" help:"list of unification optimizations to request (experts only)"`
Depth int `arg:"--depth" default:"-1" help:"max recursion depth limit (-1 is unlimited)"`
@@ -162,6 +164,12 @@ type SetupPkgArgs struct {
// parsed result.
type SetupSvcArgs struct {
BinaryPath string `arg:"--binary-path" help:"path to the binary"`
SSHURL string `arg:"--ssh-url" help:"transport the etcd client connection over SSH to this server"`
SSHHostKey string `arg:"--ssh-hostkey" help:"use this ssh known hosts key when connecting over SSH"`
Seeds []string `arg:"--seeds,separate,env:MGMT_SEEDS" help:"default etcd client endpoints"`
NoServer bool `arg:"--no-server" help:"do not start embedded etcd server (do not promote from client to peer)"`
Install bool `arg:"--install" help:"install the systemd mgmt service"`
Start bool `arg:"--start" help:"start the mgmt service"`
Enable bool `arg:"--enable" help:"enable the mgmt service"`
@@ -196,3 +204,11 @@ type DocsGenerateArgs struct {
NoResources bool `arg:"--no-resources" help:"skip resource doc generation"`
NoFunctions bool `arg:"--no-functions" help:"skip function doc generation"`
}
// ToolsGrowArgs is the util tool CLI parsing structure and type of the parsed
// result.
type ToolsGrowArgs struct {
Mount string `arg:"--mount,required" help:"root mount point to start with"`
Exec bool `arg:"--exec" help:"actually run these commands"`
Done string `arg:"--done" help:"create this file when done, skip if it exists"`
}

View File

@@ -34,6 +34,7 @@ import (
"fmt"
"sort"
"sync"
"sync/atomic"
"time"
"github.com/purpleidea/mgmt/util"
@@ -61,6 +62,8 @@ func New(timeout int) *Coordinator {
//resumeSignal: make(chan struct{}), // happens on pause
//pausedAck: util.NewEasyAck(), // happens on pause
sendSignal: make(chan bool),
stateFns: make(map[string]func(bool) error),
smutex: &sync.RWMutex{},
@@ -103,6 +106,8 @@ type Coordinator struct {
// pausedAck is used to send an ack message saying that we've paused.
pausedAck *util.EasyAck
sendSignal chan bool // send pause (false) or resume (true)
// stateFns run on converged state changes.
stateFns map[string]func(bool) error
// smutex is used for controlling access to the stateFns map.
@@ -126,6 +131,8 @@ func (obj *Coordinator) Register() *UID {
//id: obj.lastid,
//name: fmt.Sprintf("%d", obj.lastid), // some default
isConverged: &atomic.Bool{},
poke: obj.poke,
// timer
@@ -176,11 +183,28 @@ func (obj *Coordinator) Run(startPaused bool) {
for {
// pause if one was requested...
select {
case <-obj.pauseSignal: // channel closes
//case <-obj.pauseSignal: // channel closes
// obj.pausedAck.Ack() // send ack
// // we are paused now, and waiting for resume or exit...
// select {
// case <-obj.resumeSignal: // channel closes # XXX: RACE READ
// // resumed!
//
// case <-obj.closeChan: // we can always escape
// return
// }
case b, _ := <-obj.sendSignal:
if b { // resume
panic("unexpected resume") // TODO: continue instead?
}
// paused
obj.pausedAck.Ack() // send ack
// we are paused now, and waiting for resume or exit...
select {
case <-obj.resumeSignal: // channel closes
case b, _ := <-obj.sendSignal:
if !b { // pause
panic("unexpected pause") // TODO: continue instead?
}
// resumed!
case <-obj.closeChan: // we can always escape
@@ -229,8 +253,13 @@ func (obj *Coordinator) Pause() error {
}
obj.pausedAck = util.NewEasyAck()
obj.resumeSignal = make(chan struct{}) // build the resume signal
close(obj.pauseSignal)
//obj.resumeSignal = make(chan struct{}) // build the resume signal XXX: RACE WRITE
//close(obj.pauseSignal)
select {
case obj.sendSignal <- false:
case <-obj.closeChan:
return fmt.Errorf("closing")
}
// wait for ack (or exit signal)
select {
@@ -253,8 +282,14 @@ func (obj *Coordinator) Resume() {
return
}
obj.pauseSignal = make(chan struct{}) // rebuild for next pause
close(obj.resumeSignal)
//obj.pauseSignal = make(chan struct{}) // rebuild for next pause
//close(obj.resumeSignal)
select {
case obj.sendSignal <- true:
case <-obj.closeChan:
return
}
obj.poke() // unblock and notice the resume if necessary
obj.paused = false
@@ -389,7 +424,7 @@ type UID struct {
// for per-UID timeouts too.
timeout int
// isConverged stores the convergence state of this particular UID.
isConverged bool
isConverged *atomic.Bool
// poke stores a reference to the main poke function.
poke func()
@@ -411,14 +446,14 @@ func (obj *UID) Unregister() {
// IsConverged reports whether this UID is converged or not.
func (obj *UID) IsConverged() bool {
return obj.isConverged
return obj.isConverged.Load()
}
// SetConverged sets the convergence state of this UID. This is used by the
// running timer if one is started. The timer will overwrite any value set by
// this method.
func (obj *UID) SetConverged(isConverged bool) {
obj.isConverged = isConverged
obj.isConverged.Store(isConverged)
obj.poke() // notify of change
}

4
debian/control vendored
View File

@@ -12,6 +12,6 @@ Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}, packagekit
Suggests: graphviz
Description: mgmt: next generation config management!
The mgmt tool is a next generation config management prototype. It's
not yet ready for production, but we hope to get there soon. Get
The mgmt tool is a next generation config management solution. It's
ready for production, and we hope you try out the future soon. Get
involved today!

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
script_directory="$( cd "$( dirname "$0" )" && pwd )"
project_directory=$script_directory/../..

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# Stop on any error
set -e

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# runs command provided as argument inside a development (Linux) Docker container

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# Stop on any error
set -e

View File

@@ -153,6 +153,6 @@ man_pages = [
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'mgmt', u'mgmt Documentation',
author, 'mgmt', 'A next generation config management prototype!',
author, 'mgmt', 'Next generation distributed, event-driven, parallel config management!',
'Miscellaneous'),
]

View File

@@ -139,7 +139,7 @@ easy as copying one of the files in [`test/shell/`](/test/shell) and adapting
it.
This test suite won't run by default (unless when on CI server) and needs to be
called explictly using:
called explicitly using:
```
make test-shell

View File

@@ -2,8 +2,8 @@
## Overview
The `mgmt` tool is a next generation config management prototype. It's not yet
ready for production, but we hope to get there soon. Get involved today!
The `mgmt` tool is a next generation config management solution. It's ready for
production, and we hope you try out the future soon. Get involved today!
## Project Description
@@ -297,6 +297,49 @@ This meta param is a safety measure to make your life easier. It works for all
resources. If someone comes up with a resource which would routinely start with
a dollar sign, then we can revisit the default for this resource kind.
#### Hidden
Boolean. Hidden means that this resource will not get executed on the resource
graph on which it is defined. This can be used as a simple boolean switch, or,
more commonly in combination with the Export meta param which specifies that the
resource params are exported into the shared database. When this is true, it
does not prevent export. In fact, it is commonly used in combination with
Export. Using this option will still include it in the resource graph, but it
will exist there in a special "mode" where it will not conflict with any other
identically named resources. It can even be used as part of an edge or via a
send/recv receiver. It can NOT be a sending vertex. These properties
differentiate the use of this instead of simply wrapping a resource in an "if"
statement.
#### Export
List of strings. Export is a list of hostnames (and/or the special "*" entry)
which if set, will mark this resource data as intended for export to those
hosts. This does not prevent any users of the shared data storage from reading
these values, so if you want to guarantee secrecy, use the encryption
primitives. This only labels the data accordingly, so that other hosts can know
what data is available for them to collect. The (kind, name, host) export triple
must be unique from any given exporter. In other words, you may not export two
different instances of a kind+name to the same host, the exports must not
conflict. On resource collect, this parameter is not preserved.
```mcl
file "/tmp/foo" {
state => "exists",
content => "i'm exported!\n",
Meta:hidden => true,
Meta:export => ["h1",],
}
file "/tmp/foo" {
state => "exists",
content => "i'm exported AND i'm used here\n",
Meta:export => ["h1",],
}
```
#### Reverse
Boolean. Reverse is a property that some resources can implement that specifies

View File

@@ -53,16 +53,13 @@ find a number of tutorials online.
3. Spend between four to six hours with the [golang tour](https://tour.golang.org/).
Skip over the longer problems, but try and get a solid overview of everything.
If you forget something, you can always go back and repeat those parts.
4. Connect to our [#mgmtconfig](https://web.libera.chat/?channels=#mgmtconfig)
IRC channel on the [Libera.Chat](https://libera.chat/) network. You can use any
IRC client that you'd like, but the [hosted web portal](https://web.libera.chat/?channels=#mgmtconfig)
will suffice if you don't know what else to use. [Here are a few suggestions for
alternative clients.](https://libera.chat/guides/clients)
4. Connect to our [#mgmtconfig](https://matrix.to/#/#mgmtconfig:matrix.org)
Matrix channel and hang out with us there.
5. Now it's time to try and starting writing a patch! We have tagged a bunch of
[open issues as #mgmtlove](https://github.com/purpleidea/mgmt/issues?q=is%3Aissue+is%3Aopen+label%3Amgmtlove)
for new users to have somewhere to get involved. Look through them to see if
something interests you. If you find one, let us know you're working on it by
leaving a comment in the ticket. We'll be around to answer questions in the IRC
leaving a comment in the ticket. We'll be around to answer questions in the
channel, and to create new issues if there wasn't something that fit your
interests. When you submit a patch, we'll review it and give you some feedback.
Over time, we hope you'll learn a lot while supporting the project! Now get
@@ -534,9 +531,7 @@ which definitely existed before the band did.
### You didn't answer my question, or I have a question!
It's best to ask on [IRC](https://web.libera.chat/?channels=#mgmtconfig)
to see if someone can help you. If you don't get a response from IRC, you can
contact me through my [technical blog](https://purpleidea.com/contact/) and I'll
do my best to help. If you have a good question, please add it as a patch to
It's best to ask on [Matrix](https://matrix.to/#/#mgmtconfig:matrix.org) to see
if someone can help. If you don't get a response there, you can send a patch to
this documentation. I'll merge your question, and add a patch with the answer!
For news and updates, subscribe to the [mailing list](https://www.redhat.com/mailman/listinfo/mgmtconfig-list).

View File

@@ -177,66 +177,69 @@ func (obj *FooFunc) Init(init *interfaces.Init) error {
}
```
### Call
Call is run when you want to return a new value from the function. It takes the
input arguments to the function.
#### Example
```golang
func (obj *FooFunc) Call(ctx context.Context, args []types.Value) (types.Value, error) {
return &types.StrValue{ // Our type system "str" (string) value.
V: strconv.FormatInt(args[0].Int(), 10), // a golang string
}, nil
}
```
### Stream
```golang
Stream(context.Context) error
```
`Stream` is where the real _work_ is done. This method is started by the
language function engine. It will run this function while simultaneously sending
it values on the `Input` channel. It will only send a complete set of input
values. You should send a value to the output channel when you have decided that
one should be produced. Make sure to only use input values of the expected type
as declared in the `Info` struct, and send values of the similarly declared
appropriate return type. Failure to do so will may result in a panic and
sadness. You must shutdown if the input context cancels. You must close the
`Output` channel if you are done generating new values and/or when you shutdown.
`Stream` is where any evented work is done. This method is started by the
function engine. It will run this function once. It should call the
`obj.init.Event()` method when it believes the function engine should run
`Call()` again.
Implementing this is not required if you don't have events.
If the `ctx` closes, you must shutdown as soon as possible.
#### Example
```golang
// Stream returns the single value that was generated and then closes.
// Stream starts a mainloop and runs Event when it's time to Call() again.
func (obj *FooFunc) Stream(ctx context.Context) error {
defer close(obj.init.Output) // the sender closes
var result string
ticker := time.NewTicker(time.Duration(1) * time.Second)
defer ticker.Stop()
// streams must generate an initial event on startup
// even though ticker will send one, we want to be faster to first event
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
for {
select {
case input, ok := <-obj.init.Input:
if !ok {
return nil // can't output any more
}
case <-startChan:
startChan = nil // disable
ix := input.Struct()["a"].Int()
if ix < 0 {
return fmt.Errorf("we can't deal with negatives")
}
result = fmt.Sprintf("the input is: %d", ix)
case <-ticker.C: // received the timer event
// pass
case <-ctx.Done():
return nil
}
select {
case obj.init.Output <- &types.StrValue{
V: result,
}:
case <-ctx.Done():
return nil
if err := obj.init.Event(ctx); err != nil {
return err
}
}
}
```
As you can see, we read our inputs from the `input` channel, and write to the
`output` channel. Our code is careful to never block or deadlock, and can always
exit if a close signal is requested. It also cleans up after itself by closing
the `output` channel when it is done using it. This is done easily with `defer`.
If it notices that the `input` channel closes, then it knows that no more input
values are coming and it can consider shutting down early.
## Further considerations
There is some additional information that any function author will need to know.
@@ -327,7 +330,7 @@ Yes, you can use a function generator in `golang` to build multiple different
implementations from the same function generator. You just need to implement a
function which *returns* a `golang` type of `func([]types.Value) (types.Value, error)`
which is what `FuncValue` expects. The generator function can use any input it
wants to build the individual functions, thus helping with code re-use.
wants to build the individual functions, thus helping with code reuse.
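For illustration, here's a minimal sketch of such a generator (not taken from the mgmt codebase; the `makeAdder` name is made up, and the usual `fmt` and `types` imports are assumed). It closes over its input and returns a value with the exact signature that `FuncValue` expects:
```golang
// makeAdder is a hypothetical generator: it captures "offset" and returns a
// function with the signature func([]types.Value) (types.Value, error).
func makeAdder(offset int64) func([]types.Value) (types.Value, error) {
	return func(args []types.Value) (types.Value, error) {
		if len(args) != 1 {
			return nil, fmt.Errorf("expected one argument, got %d", len(args))
		}
		// args[0].Int() returns the golang int64 from our "int" value.
		return &types.IntValue{V: args[0].Int() + offset}, nil
	}
}
```
Calling `makeAdder(1)` and `makeAdder(42)` would produce two distinct implementations from the same generator.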
### How do I determine the signature of my simple, polymorphic function?

View File

@@ -100,6 +100,24 @@ expression
}
```
- **for**: loop over a list with a body of statements
```mcl
$list = ["a", "b", "c",]
for $index, $value in $list {
# some statements go here
}
```
- **forkv**: loop over a map with a body of statements
```mcl
$map = {0 => "a", 1 => "b", 2 => "c",}
forkv $key, $val in $map {
# some statements go here
}
```
- **resource**: produces a resource
```mcl
@@ -985,7 +1003,7 @@ Not really, but practically it can be used as such. The `class` statement is not
a singleton since it can be called multiple times in different locations, and it
can also be parameterized and called multiple times (with `include`) using
different input parameters. The reason it can be used as such is that statement
output (from multple classes) that is compatible (and usually identical) will
output (from multiple classes) that is compatible (and usually identical) will
be automatically collated and have the duplicates removed. In that way, you can
assume that an unparameterized class is always a singleton, and that
parameterized classes can often be singletons depending on their contents and if
@@ -1027,7 +1045,7 @@ thing FRP experts might notice is that some of the concepts from FRP are either
named differently, or are notably absent.
In mgmt, we don't talk about behaviours, events, or signals in the strict FRP
definitons of the words. Firstly, because we only support discretized, streams
definitions of the words. Firstly, because we only support discretized, streams
of values with no plan to add continuous semantics. Secondly, because we prefer
to use terms which are more natural and relatable to what our target audience is
expecting. Our users are more likely to have a background in Physiology, or

View File

@@ -61,3 +61,5 @@ if we missed something that you think is relevant!
| James Shubin | video | [Recording from CfgMgmtCamp.eu 2024](https://www.youtube.com/watch?v=vBt9lpGD4bc) |
| James Shubin | blog | [Mgmt Configuration Language: Functions](https://purpleidea.com/blog/2024/11/22/functions-in-mgmt/) |
| James Shubin | blog | [Modules and imports in mgmt](https://purpleidea.com/blog/2024/12/03/modules-and-imports-in-mgmt/) |
| James Shubin | video | [Recording from FOSDEM 2025, Docs Devroom](https://video.fosdem.org/2025/k4201/fosdem-2025-6143-docs-straight-from-the-code-ast-powered-automation.mp4) |
| James Shubin | video | [Recording from CfgMgmtCamp.eu 2025](https://www.youtube.com/watch?v=0Oa7CWx4TEA) |

View File

@@ -62,7 +62,7 @@ status-quo of using your own etcd cluster is stable, and you can even
use the embedded etcd server in standalone mode...
* This means you can run `mgmt etcd` and get the standard etcd binary
behviour that you'd get from running `etcd` normally. This makes it
behaviour that you'd get from running `etcd` normally. This makes it
easy to use both together since you only need to transport one binary
around. (And maybe mgmt will do that for you!)

205
docs/release-notes/0.0.27 Normal file
View File

@@ -0,0 +1,205 @@
I've just released version 0.0.27 of mgmt!
> 854 files changed, 28882 insertions(+), 16049 deletions(-)
This is a rather large release, as I'm not making regular releases unless there's
a specific ask. Most folks that are playing with mgmt are using `git master`.
With that, here are a few highlights from the release:
* Type unification is now extremely fast for all scenarios.
* Added a modules/ directory with shared mcl code for everyone to use. This
includes code for virtualization, cups, shorewall, dhcp, routers, and more!
* New core mgmt commands including setup, firstboot, and docs were added!
* The provisioner got lots of improvements including handoff, and iPXE support.
And much more...
DOWNLOAD
Prebuilt binaries are available here for this release:
https://github.com/purpleidea/mgmt/releases/tag/0.0.27
They can also be found on the Fedora mirror:
https://dl.fedoraproject.org/pub/alt/purpleidea/mgmt/releases/0.0.27/
NEWS
* Primary community channel is now on Matrix. IRC is deprecated until someone
wants to run a bridge for us.
* Type unification is now textbook, and blazingly (linearly) fast. The large
programs I'm writing now unify in under 200ms. Most small programs typically
unify in ~5ms.
* Resource and edge names are always lists of strings now unless they're static.
* We're up to golang 1.23 now. Older versions may still work.
* Our type system now supports unification variables like ?1, ?2 and so on.
* I fixed a bug in my contrib.sh script which omitted the Co-authored-by people!
This means Samuel Gélineau might have previously been missed in past release
notes which is tragic, since he has been by far the most important contributor
to mgmt.
* Made toposort deterministic which fixes some spurious non-determinism.
* Added the iterator filter function. (An important core primitive.)
* Cleaned up the output of many resources to make logs more useful / less noisy.
* Added constants, although I plan to change this to a `const` import package.
* Added the list and map core packages.
* Catch $ in metaparams to make the obvious bug cases easier for users to avoid.
* Consul is now behind a build tag for now, since it's non-free. We'll remove it
eventually if there isn't a suitable free replacement.
* Added mcl modules directory with a good initial set of interesting code.
* Added the "vardir" API to our "local" package. This is a helpful primitive
which I use in almost every module that I write.
* Added a gzip resource!
* Added a tar resource!
* We moved the template() function to the golang.template namespace. This makes
it clear what kind of template it is and de-emphasizes our "love" for it as the
blessed template engine at least for now.
* Added a sysctl resource!
* Added a virt-builder resource for building images. We can now automate virtual
machines really elegantly.
* A bunch of core functions were added including stuff in net, strings, deploy,
and more!
* The local package got a neat "pool" function. There are lots of possibilities
to use this in creative ways!
* The GAPI/deploy code got more testing and we found some edge cases and patched
them. You can now deploy in all sorts of creative ways and things should work
as expected!
* Added a resource for archiving a deploy. This is deploy:tar and helps with
bootstrapping new machines.
* Found a sneaky DHCP bug and fixed it!
* Added mgmt setup and firstboot commands! This helps bootstrap things without
needing to re-implement that logic everywhere as bash too!
* Added a "docs" command for generating resources and function documentation!
* The provisioner got lots of improvements including handoff, and iPXE support.
* New mcl modules include shorewall, dhcp, cups, some meta modules, misc modules
and more!
* Added a BMC resource in case you want to automate your server hardware.
* We now allow multiple star (*) imports although it's not recommended.
* Hostname handoff is now also part of the provisioner.
* Fixed two type unification corner cases with magic struct functions.
* Added iPXE support to the provisioner.
* Added pprof support to make it easy to generate performance information.
* Added anonymous function calling. These are occasionally useful, and now the
language has them. They were fun and concise to implement!
* We're looking for help writing Amazon, Google, DigitalOcean, Hetzner, etc,
resources if anyone is interested, reach out to us. Particularly if there is
support from those organizations as well.
* Many other bug fixes, changes, etc...
* See the git log for more NEWS, and for anything notable I left out!
BUGS/TODO
* Function values getting _passed_ to resources doesn't work yet. It's not a
blocker, but it would definitely be useful. We're looking into it.
* Function graphs are unnecessarily dynamic. We might make them more static so
that we don't need as many transactions. This is really a compiler optimization
and not a bug, but it's something important we'd like to have.
* Running two Txn's during the same pause would be really helpful. I'm not sure
how much of a performance improvement we'd get from this, but it would sure be
interesting to build. If you want to build a fancy synchronization primitive,
then let us know! Again this is not a bug.
* The arm64 version doesn't support augeas, so it was built with:
GOTAGS='noaugeas' to get the build out.
TALKS
After FOSDEM/CfgMgmtCamp 2025, I don't have anything planned until CfgMgmtCamp
2026. If you'd like to book me for a private event, or sponsor my travel for
your conference, please let me know.
PARTNER PROGRAM
Interest in the partner program has been limited to small individuals with no
real corporate backing, so it's been officially discontinued for now. If you're
interested in partnering with us and receiving support, mgmt products early
access to releases, bug fixes, support, and many other goodies, please sign-up
today: https://bit.ly/mgmt-partner-program
MISC
Our mailing list host (Red Hat) is no longer letting non-Red Hat employees use
their infrastructure. We're looking for a new home. I've opened a ticket with
Freedesktop. If you have any sway with them or other recommendations, please let
me know:
https://gitlab.freedesktop.org/freedesktop/freedesktop/-/issues/1082
We're still looking for new contributors, and there are easy, medium, and
hard issues available! You're also welcome to suggest your own! Please join us
in #mgmtconfig on Libera IRC or Matrix (preferred) and ping us if you'd like
help getting started! For details please see:
https://github.com/purpleidea/mgmt/blob/master/docs/faq.md#how-do-i-contribute-to-the-project-if-i-dont-know-golang
Many tagged #mgmtlove issues exist:
https://github.com/purpleidea/mgmt/issues?q=is%3Aissue+is%3Aopen+label%3Amgmtlove
Although asking in Matrix is the best way to find something to work on.
MENTORING
We offer mentoring for new golang/mgmt hackers who want to get involved. This is
fun and friendly! You get to improve your skills, and we get some patches in
return. Ping me off-list for details.
THANKS
Thanks (alphabetically) to everyone who contributed to the latest release:
Cian Yong Leow, Felix Frank, James Shubin, Joe Groocock, Julian Rüth, Omar Al-Shuha, Samuel Gélineau, xlai89
We had 8 unique committers since 0.0.26, and have had 96 overall.
Run 'git log 0.0.26..0.0.27' to see what has changed since 0.0.26
Happy hacking,
James
@purpleidea

280
docs/release-notes/1.0.0 Normal file
View File

@@ -0,0 +1,280 @@
I've just released version 1.0.0 of mgmt!
> 614 files changed, 30199 insertions(+), 11916 deletions(-)
This is a very important and large release. It's been 10 years since I first
publicly released this project, and I might as well stop confusing new users.
I've been happily using it in production for some time now, and I love writing `mcl`
every day! I am doing customer work in mgmt, and I have happy users.
With that, here are a few highlights from the release:
* There is a new function engine which is significantly faster on large graphs.
It could be improved further, but the optimizations aren't needed for now.
* The "automatic embedded etcd clustering" should be considered deprecated. You
can run with --no-magic to ensure it's off. It was buggy and we will possibly
write it with mcl anyways. Expect it to be removed soon.
* Type unification errors have context and line numbers! Many other error
scenarios have this too! This isn't perfect, and there are still some remaining
places where you don't get this information. Please help us find and expand
these.
* The function API has been overhauled which now makes writing most functions
significantly easier and simpler. They'll also use less memory. This is a
benefit of the new function engine.
* We have added *declarative* for and forkv statements to the language.
* Exported resources are merged and gorgeous! They work how I've always wanted.
You can actually see my experiment in the very first demo of mgmt, and I finally
wrote them to work with the language how I've always wanted.
* There's an http:server:ui set of resources that have been added. Check out:
https://www.youtube.com/watch?v=8vz1MMGkuik for some examples of that in action
and more!
And much more...
SPONSORS
The `mgmt` project is generously sponsored by:
m9rx corporation - https://m9rx.com/
Please reach out if you'd like to sponsor!
DOWNLOAD
Prebuilt binaries are available here for this release:
https://github.com/purpleidea/mgmt/releases/tag/1.0.0
They can also be found on the Fedora mirror:
https://dl.fedoraproject.org/pub/alt/purpleidea/mgmt/releases/1.0.0/
NEWS
* A bunch of misc mcl code has been added to modules/ for you to see.
* The user resource has been improved following feedback from cloudflare.
* Detect self-referential frags when building files that way.
* Added a new function for URL parsing.
* Type unification errors have context and line numbers!
* There's a "baddev" feature branch which gets rebased which you can use if you
don't want to install the tools to compile the lexer/parser stuff. We do the
ugly commit for you if that's easier for development.
* We have added *declarative* for and forkv statements to the language. If you
know of a better name than "forkv" we're happy to hear it, but a small poll
didn't produce a more convincing suggestion.
* Waiting for a deploy just happens automatically with the "empty" frontend.
* Waiting to run a deploy just waits automatically until etcd is online.
* Automatic mgmt deploying after virt provisioning works with a seeds field.
* There's a global flag to skip autoedges to improve performance.
* The docker resource has been modernized and supports running on a docker host
that we're bootstrapping.
* Docker ports were built backwards and these have been corrected.
* The "world" interface has been cleaned up dramatically. This will make life
easier for someone who wants to add a new backend there. Filesystem, scheduler,
deployer, and more are all split.
* We can run our etcd connection over SSH. That's one of the new backends.
There's actually a reconnect issue, but it's an easy fix and it should likely
come out in the next release.
* We have an is_virtual function to detect where mgmt is running!
* Virtualization modules moved to qcow2 by default. It's solid.
* Improved a lot of user-facing logging so it's clearer what's happening.
* Exported resources have been implemented ... and they're glorious. They work
how I've always dreamed, and are such a breath of fresh air from the Puppet
days. There's an export/collect system. Export works by metaparam, not a special
language feature, and collect works with core functions. It runs when the
resource in the graph actually runs, as opposed to "all at once, even if you
fail" like the old days. Yay!
* fmt.printf handles more cases!
* The file resource now has a symlink param. Someone test it and find issues.
* We have an iter.range function which is helpful with `for` statements.
* We do some speculation which drastically reduces the shape of the function
graphs in a lot of constant scenarios. This also reduces the need to change the
shape, which brings a huge performance boost.
* Don't reuse fact pointers. There was a bug around those. In fact get rid of
the fact API since it's pointless really.
* There's some new stuff in the convert package.
* We added an http:server:ui resource. This is kind of a prototype, but you can
see it in action here: https://www.youtube.com/watch?v=8vz1MMGkuik
* Fix some send/recv corner cases. I wish I had more tests for this. Hint!
* There's an os.readfilewait() function in temporarily. This will go away when
we get the <|> operator.
* A WatchFiles field was added to the exec resource. Very handy.
* We have a new "line" resource. It supports "trim"-ing too.
* There are some new functions that have been added.
* The modules/ directory got some ssh key things.
* Automatic grouping logic improved, thanks to http:server:ui stuff.
* Hierarchical grouping works very reliably as far as I can tell.
* A bunch of ctx's were added all over where they never were. Legacy code!
* A bunch of network/NetworkManager/networkd and related mcl code was added. The
interfaces are really ugly, what is the correct upstream network config thing?
* We have a modinfo function.
* We built in some ulimit settings for modern machines.
* We have an mcl class for copr setup.
* We added SSH hostkey logic into our core etcd ssh connection tooling.
* The provisioner supports exec handoff. It can also handle more scenarios, eg
booting from an ipxe usb key and not installing on it.
* The provisioner supports encrypting machines with LUKS. It does this in a very
clever way to allow creation of secure passwords after first boot. Many kudos to
the systemd and other authors who built all the needed pieces for this to just
work fairly well.
* We improved a graph function from O(n) to O(1). Woops =D
* We removed the secret channels from the function graphs. This is much simpler
now!
* ExprIf and StmtIf both do the more correct thing. I guess the bigger graph was
eventually going to need to get killed. This was a good choice that I didn't
make soon enough.
* A ton of races were killed. We're building by default with the race checker.
I don't know why I didn't do this ten years ago. Performance is not so terrible
these days, and it catches so much. Woops. Good lesson to share with others.
* The language has a nil type, but don't worry, this is only for internal
plumbing, and we will NOT let it be user facing!
* The langpuppet stuff had to be removed again for now. If it's used, patch in.
* The GAPI stuff got a major cleanup. It was early code that was bad. Now it's a
lot better.
* The new function engine is the really big story. Have a look if you're an
algorithmist. We'd love to have people work on improving it further. It's most
likely glitch free now too!
* The virt resource code got a big cleanup. It runs hotplug again, which had rotted
due to libvirt api changes I think.
* The qemu guest agent works automatically again.
* The svc resource (one of the earliest) has been overhauled since it had old
buggy code which has now been fixed.
* We're looking for help writing Amazon, Google, DigitalOcean, Hetzner, etc,
resources if anyone is interested, reach out to us. Particularly if there is
support from those organizations as well.
* Many other bug fixes, changes, etc...
* See the git log for more NEWS, and for anything notable I left out!
BUGS/TODO
* Function values getting _passed_ to resources doesn't work yet. It's not a
blocker, but it would definitely be useful. We're looking into it.
* The arm64 version doesn't support augeas, so it was built with:
GOTAGS='noaugeas' to get the build out.
* We don't have the <|> operator merged yet. Expect that when we do this, we'll
consider removing the || (default) operator. This is the only pending language
change that I know of, and these cases are easily caught by the compiler and can
be easily patched.
* Autoedge performance isn't great. It can easily be disabled. Most of the time
I just specify my edges, so this is really a convenience feature, but it should
be looked into when we have a chance.
* There's a subtle ssh reconnect issue which can occur. It should be easy to
fix and I have a patch in testing.
* Our wasm code input fields grew tick marks, but I think this disturbed the
buggy wasm code. If someone is an expert here, please have at it.
TALKS
After FOSDEM/CfgMgmtCamp 2026, I don't have anything planned until CfgMgmtCamp
2027. If you'd like to book me for a private event, or sponsor my travel for
your conference, please let me know.
MISC
Our mailing list host (Red Hat) is no longer letting non-Red Hat employees use
their infrastructure. We're looking for a new home. I've opened a ticket with
Freedesktop. If you have any sway with them or other recommendations, please let
me know:
https://gitlab.freedesktop.org/freedesktop/freedesktop/-/issues/1082
We're still looking for new contributors, and there are easy, medium, and
hard issues available! You're also welcome to suggest your own! Please join us
in #mgmtconfig on Libera IRC or Matrix (preferred) and ping us if you'd like
help getting started! For details please see:
https://github.com/purpleidea/mgmt/blob/master/docs/faq.md#how-do-i-contribute-to-the-project-if-i-dont-know-golang
Many tagged #mgmtlove issues exist:
https://github.com/purpleidea/mgmt/issues?q=is%3Aissue+is%3Aopen+label%3Amgmtlove
Although asking in Matrix is the best way to find something to work on.
MENTORING
We offer mentoring for new golang/mgmt hackers who want to get involved. This is
fun and friendly! You get to improve your skills, and we get some patches in
return. Ping me off-list for details.
THANKS
Thanks (alphabetically) to everyone who contributed to the latest release:
Ahmad Abuziad, Edward Toroshchyn, Felix Frank, hades, James Shubin, Karpfen, Lourenço, Lourenço Vales, Samuel Gélineau
We had 10 unique committers since 0.0.27, and have had 103 overall.
Run 'git log 0.0.27..1.0.0' to see what has changed since 0.0.27.
Happy hacking,
James
@purpleidea

View File

@@ -361,14 +361,14 @@ func (obj *FooRes) Watch(ctx context.Context) error {
// notify engine that we're running
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
select {
// the actual events!
case event := <-obj.foo.Events:
if is_an_event {
send = true
if !is_an_event {
continue // skip event
}
// send below...
// event errors
case err := <-obj.foo.Errors:
@@ -378,11 +378,7 @@ func (obj *FooRes) Watch(ctx context.Context) error {
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event()
}
obj.init.Event() // notify engine of an event (this can block)
}
}
```
@@ -523,9 +519,10 @@ graph edges from another resource. These values are consumed during the
any resource that has an appropriate value and that has the `Sendable` trait.
You can read more about this in the Send/Recv section below.
### Collectable
### Exportable
This is currently a stub and will be updated once the DSL is further along.
Exportable allows a resource to tell the exporter what subset of its data it
wishes to export when that occurs. It is rare that you will need to use this.
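As a purely illustrative sketch (not part of the documented API surface), and
assuming the trait boils down to the `ToB64()` method that the exporter calls,
with `engineUtil.ResToB64` as the default encoder, a resource could filter what
it exports roughly like this; the `FooRes` type and `Password` field are
hypothetical examples:

```golang
// ToB64 lets this resource control the subset of its data that gets exported,
// here by blanking a hypothetical sensitive field before falling back to the
// default encoder. (Sketch only; match the real Exportable interface.)
func (obj *FooRes) ToB64() (string, error) {
	res := *obj       // copy the resource struct for export
	res.Password = "" // hypothetical field we don't wish to export
	return engineUtil.ResToB64(&res)
}
```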
## Resource Initialization
@@ -687,8 +684,41 @@ if val, exists := obj.init.Recv()["some_key"]; exists {
}
```
The specifics of resource sending are not currently documented. Please send a
patch here!
A resource can send a value during CheckApply by running the `obj.init.Send()`
method. It must always send a value if (1) it's not erroring in CheckApply, and
(2) the `obj.SendActive()` method inside of CheckApply returns true. It is not
harmful to run the Send method if CheckApply is going to error, or if
`obj.SendActive()` returns false, just unnecessary. In the `!apply` case, where
we're running in "noop" mode and the state is not correct, you should still
attempt to send a value, but it is a bit ambiguous which value to send. This
behaviour may be specified in the future, but for now it's mostly
inconsequential. At the moment, `obj.SendActive()` is disabled at compile time,
but it can be enabled if you have a legitimate use-case for it.
```golang
// inside CheckApply, somewhere near the end usually
if err := obj.init.Send(&ExecSends{ // send the special data structure
Output: obj.output,
Stdout: obj.stdout,
Stderr: obj.stderr,
}); err != nil {
return false, err
}
```
You must also implement the `Sends()` method, which should return the above
sending struct with all of its fields containing their default (zero) values.
Please note that those fields must have their struct tags set appropriately.
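As a rough sketch (not taken from the engine documentation), and assuming the
method signature is `Sends() interface{}` and that the engine reads `lang:"..."`
struct tags to name the sendable keys, this could look roughly like:

```golang
// ExecSends is the struct of values this resource can send. The field names
// and the "lang" tag are assumptions for this sketch; use whatever tag your
// engine version actually expects.
type ExecSends struct {
	Output *string `lang:"output"`
	Stdout *string `lang:"stdout"`
	Stderr *string `lang:"stderr"`
}

// Sends returns the above struct with all fields at their zero values, so the
// engine can discover which keys this resource is able to send.
func (obj *FooRes) Sends() interface{} {
	return &ExecSends{
		Output: nil,
		Stdout: nil,
		Stderr: nil,
	}
}
```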
### Safety
Lastly, please note that in order for a resource to send a useful value even
when its state is already correct (it may have run earlier, for example), the
implementation of CheckApply may need to cache a return value for later use.
Keep in mind that you should store this securely if there is a chance that
sensitive info is contained within, and that an untrusted user could put
malicious data in the cache if you are not careful. It's best to make sure the
users of your resource are aware of its implementation details here.
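A hedged sketch of that caching pattern inside CheckApply; the `cachedOutput`
field, the `readCurrentValue` helper, and the `FooSends` struct are hypothetical
names used only for illustration:

```golang
// Inside CheckApply: cache the value we want to send, so that we can still
// send something useful on later runs where the state is already correct.
if obj.cachedOutput == nil { // nothing cached from an earlier run
	out, err := obj.readCurrentValue(ctx) // hypothetical helper
	if err != nil {
		return false, err
	}
	obj.cachedOutput = &out // store securely; it may contain sensitive data
}
if err := obj.init.Send(&FooSends{Output: obj.cachedOutput}); err != nil {
	return false, err
}
```

If the cached value can contain secrets, treat the cache with the same care as
the rest of the resource's private state.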
## Composite resources

View File

@@ -56,7 +56,7 @@ It has the following properties:
* `image`: docker `image` or `image:tag`
* `cmd`: a command or list of commands to run on the container
* `env`: a list of environment variables, e.g. `["VAR=val",],`
* `ports`: a map of portmappings, e.g. `{"tcp" => {80 => 8080, 443 => 8443,},},`
* `ports`: a map of portmappings, e.g. `{"tcp" => {8080 => 80, 8443 => 443,},},`
* `apiversion:` override the host's default docker version, e.g. `"v1.35"`
* `force`: destroy and rebuild the container instead of erroring on wrong image

View File

@@ -133,7 +133,7 @@ result, it might be very hard for them to improve their API's, particularly
without breaking compatibility promises for their existing customers. As a
result, they should either add a versioned API, which lets newer consumers get
the benefit, or add new parallel services which offer the modern features. If
they don't, the only solution is for new competitors to build-in these better
they don't, the only solution is for new competitors to build in these better
efficiencies, eventually offering better value to cost ratios, which will then
make legacy products less lucrative and therefore unmaintainable as compared to
their competitors.

View File

@@ -129,9 +129,9 @@ For example, in a short string snippet you can use `s` instead of `myString`, as
well as other common choices. `i` is a common `int` counter, `f` for files, `fn`
for functions, `x` for something else and so on.
### Variable re-use
### Variable reuse
Feel free to create and use new variables instead of attempting to re-use the
Feel free to create and use new variables instead of attempting to reuse the
same string. For example, if a function input arg is named `s`, you can use a
new variable to receive the first computation result on `s` instead of storing
it back into the original `s`. This avoids confusion if a different part of the
@@ -145,7 +145,7 @@ MyNotIdealFunc(s string, b bool) string {
if !b {
return s + "hey"
}
s = strings.Replace(s, "blah", "", -1) // not ideal (re-use of `s` var)
s = strings.Replace(s, "blah", "", -1) // not ideal (reuse of `s` var)
return s
}
@@ -153,7 +153,7 @@ MyOkayFunc(s string, b bool) string {
if !b {
return s + "hey"
}
s2 := strings.Replace(s, "blah", "", -1) // doesn't re-use `s` variable
s2 := strings.Replace(s, "blah", "", -1) // doesn't reuse `s` variable
return s2
}
@@ -256,6 +256,15 @@ like: `import "https://github.com/purpleidea/mgmt-banana/"` and namespace it as
`import "https://github.com/purpleidea/mgmt-banana/" as tomato` or something
similar.
### Imports
When importing "golang" modules such as "golang/strings" it's recommended to use
the `import "golang/strings" as golang_strings` format. This is to avoid
confusion with the normal core package you get from `import "strings"`.
In the long-term, we expect to remove the `"golang/"` namespace when our own
standard library is complete enough.
### Licensing
We believe that sharing code helps reduce unnecessary re-invention, so that we

View File

@@ -52,19 +52,27 @@ func (obj *Engine) OKTimestamp(vertex pgraph.Vertex) bool {
// BadTimestamps returns the list of vertices that are causing our timestamp to
// be bad.
func (obj *Engine) BadTimestamps(vertex pgraph.Vertex) []pgraph.Vertex {
obj.tlock.RLock()
state := obj.state[vertex]
obj.tlock.RUnlock()
vs := []pgraph.Vertex{}
obj.state[vertex].mutex.RLock() // concurrent read start
ts := obj.state[vertex].timestamp // race
obj.state[vertex].mutex.RUnlock() // concurrent read end
state.mutex.RLock() // concurrent read start
ts := state.timestamp // race
state.mutex.RUnlock() // concurrent read end
// these are all the vertices pointing TO vertex, eg: ??? -> vertex
for _, v := range obj.graph.IncomingGraphVertices(vertex) {
obj.tlock.RLock()
state := obj.state[v]
obj.tlock.RUnlock()
// If the vertex has a greater timestamp than any prerequisite,
// then we can't run right now. If they're equal (eg: initially
// with a value of 0) then we also can't run because we should
// let our pre-requisites go first.
obj.state[v].mutex.RLock() // concurrent read start
t := obj.state[v].timestamp // race
obj.state[v].mutex.RUnlock() // concurrent read end
state.mutex.RLock() // concurrent read start
t := state.timestamp // race
state.mutex.RUnlock() // concurrent read end
if obj.Debug {
obj.Logf("OKTimestamp: %d >= %d (%s): !%t", ts, t, v.String(), ts >= t)
}
@@ -83,6 +91,10 @@ func (obj *Engine) Process(ctx context.Context, vertex pgraph.Vertex) error {
return fmt.Errorf("vertex is not a Res")
}
obj.tlock.RLock()
state := obj.state[vertex]
obj.tlock.RUnlock()
// backpoke! (can be async)
if vs := obj.BadTimestamps(vertex); len(vs) > 0 {
// back poke in parallel (sync b/c of waitgroup)
@@ -129,12 +141,80 @@ func (obj *Engine) Process(ctx context.Context, vertex pgraph.Vertex) error {
// sendrecv!
// connect any senders to receivers and detect if values changed
// this actually checks and sends into resource trees recursively...
// XXX: This code is duplicated in the fancier autogrouping code below!
//if res, ok := vertex.(engine.RecvableRes); ok {
// if obj.Debug {
// obj.Logf("SendRecv: %s", res) // receiving here
// }
// if updated, err := SendRecv(res, nil); err != nil {
// return errwrap.Wrapf(err, "could not SendRecv")
// } else if len(updated) > 0 {
// //for _, s := range graph.UpdatedStrings(updated) {
// // obj.Logf("SendRecv: %s", s)
// //}
// for r, m := range updated { // map[engine.RecvableRes]map[string]*engine.Send
// v, ok := r.(pgraph.Vertex)
// if !ok {
// continue
// }
// _, stateExists := obj.state[v] // autogrouped children probably don't have a state
// if !stateExists {
// continue
// }
// for s, send := range m {
// if !send.Changed {
// continue
// }
// obj.Logf("Send/Recv: %v.%s -> %v.%s", send.Res, send.Key, r, s)
// // if send.Changed == true, at least one was updated
// // invalidate cache, mark as dirty
// obj.state[v].setDirty()
// //break // we might have more vertices now
// }
//
// // re-validate after we change any values
// if err := engine.Validate(r); err != nil {
// return errwrap.Wrapf(err, "failed Validate after SendRecv")
// }
// }
// }
//}
// Send/Recv *can* receive from someone that was grouped! The sender has
// to use *their* send/recv handle/implementation, which has to be setup
// properly by the parent resource during Init(). See: http:server:flag.
collectSendRecv := []engine.Res{} // found resources
if res, ok := vertex.(engine.RecvableRes); ok {
if obj.Debug {
obj.Logf("SendRecv: %s", res) // receiving here
collectSendRecv = append(collectSendRecv, res)
}
if updated, err := SendRecv(res, nil); err != nil {
return errwrap.Wrapf(err, "could not SendRecv")
// If we contain grouped resources, maybe someone inside wants to recv?
// This code is similar to the above and was added for http:server:ui.
// XXX: Maybe this block isn't needed, as mentioned we need to check!
if res, ok := vertex.(engine.GroupableRes); ok {
process := res.GetGroup() // look through these
for len(process) > 0 { // recurse through any nesting
var x engine.GroupableRes
x, process = process[0], process[1:] // pop from front!
for _, g := range x.GetGroup() {
collectSendRecv = append(collectSendRecv, g.(engine.Res))
}
}
}
//for _, g := res.GetGroup() // non-recursive, one-layer method
for _, g := range collectSendRecv { // recursive method!
r, ok := g.(engine.RecvableRes)
if !ok {
continue
}
// This section looks almost identical to the above one!
if updated, err := SendRecv(r, nil); err != nil {
return errwrap.Wrapf(err, "could not grouped SendRecv")
} else if len(updated) > 0 {
//for _, s := range graph.UpdatedStrings(updated) {
// obj.Logf("SendRecv: %s", s)
@@ -161,11 +241,13 @@ func (obj *Engine) Process(ctx context.Context, vertex pgraph.Vertex) error {
// re-validate after we change any values
if err := engine.Validate(r); err != nil {
return errwrap.Wrapf(err, "failed Validate after SendRecv")
return errwrap.Wrapf(err, "failed grouped Validate after SendRecv")
}
}
}
}
// XXX: this might not work with two merged "CompatibleRes" resources...
// XXX: fix that so we can have the mappings to do it in lang/interpret.go ?
var ok = true
var applied = false // did we run an apply?
@@ -181,15 +263,34 @@ func (obj *Engine) Process(ctx context.Context, vertex pgraph.Vertex) error {
refreshableRes.SetRefresh(refresh) // tell the resource
}
// Run the exported resource exporter!
var exportOK bool
var exportErr error
wg := &sync.WaitGroup{}
wg.Add(1)
// (Run this concurrently with the CheckApply related stuff below...)
go func() {
defer wg.Done()
// doesn't really need to be in parallel, but we can...
exportOK, exportErr = obj.Exporter.Export(ctx, res)
}()
// Check cached state, to skip CheckApply, but can't skip if refreshing!
// If the resource doesn't implement refresh, skip the refresh test.
// FIXME: if desired, check that we pass through refresh notifications!
if (!refresh || !isRefreshableRes) && obj.state[vertex].isStateOK.Load() { // mutex RLock/RUnlock
if (!refresh || !isRefreshableRes) && state.isStateOK.Load() { // mutex RLock/RUnlock
checkOK, err = true, nil
} else if noop && (refresh && isRefreshableRes) { // had a refresh to do w/ noop!
checkOK, err = false, nil // therefore the state is wrong
} else if res.MetaParams().Hidden {
// We're not running CheckApply
if obj.Debug {
obj.Logf("%s: Hidden", res)
}
checkOK, err = true, nil // default
} else {
// run the CheckApply!
if obj.Debug {
@@ -201,13 +302,20 @@ func (obj *Engine) Process(ctx context.Context, vertex pgraph.Vertex) error {
obj.Logf("%s: CheckApply(%t): Return(%t, %s)", res, !noop, checkOK, engineUtil.CleanError(err))
}
}
wg.Wait()
checkOK = checkOK && exportOK // always combine
if err == nil { // If CheckApply didn't error, look at exportOK.
// This is because if CheckApply errors we don't need to care or
// tell anyone about an exporting error.
err = exportErr
}
if checkOK && err != nil { // should never return this way
return fmt.Errorf("%s: resource programming error: CheckApply(%t): %t, %+v", res, !noop, checkOK, err)
}
if !checkOK { // something changed, restart timer
obj.state[vertex].cuid.ResetTimer() // activity!
state.cuid.ResetTimer() // activity!
if obj.Debug {
obj.Logf("%s: converger: reset timer", res)
}
@@ -215,10 +323,10 @@ func (obj *Engine) Process(ctx context.Context, vertex pgraph.Vertex) error {
// if CheckApply ran without noop and without error, state should be good
if !noop && err == nil { // aka !noop || checkOK
obj.state[vertex].tuid.StartTimer()
//obj.state[vertex].mutex.Lock()
obj.state[vertex].isStateOK.Store(true) // reset
//obj.state[vertex].mutex.Unlock()
state.tuid.StartTimer()
//state.mutex.Lock()
state.isStateOK.Store(true) // reset
//state.mutex.Unlock()
if refresh {
obj.SetUpstreamRefresh(vertex, false) // refresh happened, clear the request
if isRefreshableRes {
@@ -255,9 +363,9 @@ func (obj *Engine) Process(ctx context.Context, vertex pgraph.Vertex) error {
wg := &sync.WaitGroup{}
// update this timestamp *before* we poke or the poked
// nodes might fail due to having a too old timestamp!
obj.state[vertex].mutex.Lock() // concurrent write start
obj.state[vertex].timestamp = time.Now().UnixNano() // update timestamp (race)
obj.state[vertex].mutex.Unlock() // concurrent write end
state.mutex.Lock() // concurrent write start
state.timestamp = time.Now().UnixNano() // update timestamp (race)
state.mutex.Unlock() // concurrent write end
for _, v := range obj.graph.OutgoingGraphVertices(vertex) {
if !obj.OKTimestamp(v) {
// there is at least another one that will poke this...
@@ -268,7 +376,7 @@ func (obj *Engine) Process(ctx context.Context, vertex pgraph.Vertex) error {
// so that the graph doesn't go on running forever until
// it's completely done. This is an optional feature and
// we can select it via ^C on user exit or via the GAPI.
if obj.fastPause {
if obj.fastPause.Load() {
obj.Logf("%s: fast pausing, poke skipped", res)
continue
}
@@ -298,57 +406,71 @@ func (obj *Engine) Worker(vertex pgraph.Vertex) error {
return fmt.Errorf("vertex is not a resource")
}
obj.tlock.RLock()
state := obj.state[vertex]
obj.tlock.RUnlock()
// bonus safety check
if res.MetaParams().Burst == 0 && !(res.MetaParams().Limit == rate.Inf) { // blocked
return fmt.Errorf("permanently limited (rate != Inf, burst = 0)")
}
// initialize or reinitialize the meta state for this resource uid
// if we're using a Hidden resource, we don't support this feature
// TODO: should we consider supporting it? is it really necessary?
// XXX: to support this for Hidden, we'd need to handle dupe names
metas := &engine.MetaState{
CheckApplyRetry: res.MetaParams().Retry, // lookup the retry value
}
if !res.MetaParams().Hidden {
// Skip this if Hidden since we can have a hidden res that has
// the same kind+name as a regular res, and this would conflict.
obj.mlock.Lock()
if _, exists := obj.metas[engine.PtrUID(res)]; !exists || res.MetaParams().Reset {
obj.metas[engine.PtrUID(res)] = &engine.MetaState{
CheckApplyRetry: res.MetaParams().Retry, // lookup the retry value
}
}
metas := obj.metas[engine.PtrUID(res)] // handle
metas = obj.metas[engine.PtrUID(res)] // handle
obj.mlock.Unlock()
}
//defer close(obj.state[vertex].stopped) // done signal
//defer close(state.stopped) // done signal
obj.state[vertex].cuid = obj.Converger.Register()
obj.state[vertex].tuid = obj.Converger.Register()
state.cuid = obj.Converger.Register()
state.tuid = obj.Converger.Register()
// must wait for all users of the cuid to finish *before* we unregister!
// as a result, this defer happens *before* the below wait group Wait...
defer obj.state[vertex].cuid.Unregister()
defer obj.state[vertex].tuid.Unregister()
defer state.cuid.Unregister()
defer state.tuid.Unregister()
defer obj.state[vertex].wg.Wait() // this Worker is the last to exit!
defer state.wg.Wait() // this Worker is the last to exit!
obj.state[vertex].wg.Add(1)
state.wg.Add(1)
go func() {
defer obj.state[vertex].wg.Done()
defer close(obj.state[vertex].eventsChan) // we close this on behalf of res
defer state.wg.Done()
defer close(state.eventsChan) // we close this on behalf of res
// This is a close reverse-multiplexer. If any of the channels
// close, then it will cause the doneCtx to cancel. That way,
// multiple different folks can send a close signal, without
// every worrying about duplicate channel close panics.
obj.state[vertex].wg.Add(1)
state.wg.Add(1)
go func() {
defer obj.state[vertex].wg.Done()
defer state.wg.Done()
// reverse-multiplexer: any close, causes *the* close!
select {
case <-obj.state[vertex].processDone:
case <-obj.state[vertex].watchDone:
case <-obj.state[vertex].limitDone:
case <-obj.state[vertex].retryDone:
case <-obj.state[vertex].removeDone:
case <-obj.state[vertex].eventsDone:
case <-state.processDone:
case <-state.watchDone:
case <-state.limitDone:
case <-state.retryDone:
case <-state.removeDone:
case <-state.eventsDone:
}
// the main "done" signal gets activated here!
obj.state[vertex].doneCtxCancel() // cancels doneCtx
state.doneCtxCancel() // cancels doneCtx
}()
var err error
@@ -360,14 +482,14 @@ func (obj *Engine) Worker(vertex pgraph.Vertex) error {
errDelayExpired := engine.Error("delay exit")
err = func() error { // slim watch main loop
timer := time.NewTimer(time.Duration(delay) * time.Millisecond)
defer obj.state[vertex].init.Logf("the Watch delay expired!")
defer state.init.Logf("the Watch delay expired!")
defer timer.Stop() // it's nice to cleanup
for {
select {
case <-timer.C: // the wait is over
return errDelayExpired // special
case <-obj.state[vertex].doneCtx.Done():
case <-state.doneCtx.Done():
return nil
}
}
@@ -376,16 +498,27 @@ func (obj *Engine) Worker(vertex pgraph.Vertex) error {
delay = 0 // reset
continue
}
} else if res.MetaParams().Hidden {
// We're not running Watch
if obj.Debug {
obj.Logf("%s: Hidden", res)
}
state.cuid.StartTimer() // TODO: Should we do this?
err = state.hidden(state.doneCtx)
state.cuid.StopTimer() // TODO: Should we do this?
} else if interval := res.MetaParams().Poll; interval > 0 { // poll instead of watching :(
obj.state[vertex].cuid.StartTimer()
err = obj.state[vertex].poll(obj.state[vertex].doneCtx, interval)
obj.state[vertex].cuid.StopTimer() // clean up nicely
state.cuid.StartTimer()
err = state.poll(state.doneCtx, interval)
state.cuid.StopTimer() // clean up nicely
} else {
obj.state[vertex].cuid.StartTimer()
state.cuid.StartTimer()
if obj.Debug {
obj.Logf("%s: Watch...", vertex)
}
err = res.Watch(obj.state[vertex].doneCtx) // run the watch normally
err = res.Watch(state.doneCtx) // run the watch normally
if obj.Debug {
if s := engineUtil.CleanError(err); err != nil {
obj.Logf("%s: Watch Error: %s", vertex, s)
@@ -393,11 +526,14 @@ func (obj *Engine) Worker(vertex pgraph.Vertex) error {
obj.Logf("%s: Watch Exited...", vertex)
}
}
obj.state[vertex].cuid.StopTimer() // clean up nicely
state.cuid.StopTimer() // clean up nicely
}
if err == nil { // || err == engine.ErrClosed
return // exited cleanly, we're done
}
if err == context.Canceled {
return // we shutdown nicely on request
}
// we've got an error...
delay = res.MetaParams().Delay
@@ -406,7 +542,7 @@ func (obj *Engine) Worker(vertex pgraph.Vertex) error {
}
if retry > 0 { // don't decrement past 0
retry--
obj.state[vertex].init.Logf("retrying Watch after %.4f seconds (%d left)", float64(delay)/1000, retry)
state.init.Logf("retrying Watch after %.4f seconds (%d left)", float64(delay)/1000, retry)
continue
}
//if retry == 0 { // optional
@@ -419,14 +555,14 @@ func (obj *Engine) Worker(vertex pgraph.Vertex) error {
// If the CheckApply loop exits and THEN the Watch fails with an
// error, then we'd be stuck here if exit signal didn't unblock!
select {
case obj.state[vertex].eventsChan <- errwrap.Wrapf(err, "watch failed"):
case state.eventsChan <- errwrap.Wrapf(err, "watch failed"):
// send
}
}()
// If this exits cleanly, we must unblock the reverse-multiplexer.
// I think this additional close is unnecessary, but it's not harmful.
defer close(obj.state[vertex].eventsDone) // causes doneCtx to cancel
defer close(state.eventsDone) // causes doneCtx to cancel
limiter := rate.NewLimiter(res.MetaParams().Limit, res.MetaParams().Burst)
var reserv *rate.Reservation
var reterr error
@@ -440,7 +576,7 @@ Loop:
// This select is also the main event receiver and is also the
// only place where we read from the poke channel.
select {
case err, ok := <-obj.state[vertex].eventsChan: // read from watch channel
case err, ok := <-state.eventsChan: // read from watch channel
if !ok {
return reterr // we only return when chan closes
}
@@ -449,7 +585,7 @@ Loop:
// we then save so we can return it to the caller of us.
if err != nil {
failed = true
close(obj.state[vertex].watchDone) // causes doneCtx to cancel
close(state.watchDone) // causes doneCtx to cancel
reterr = errwrap.Append(reterr, err) // permanent failure
continue
}
@@ -459,7 +595,7 @@ Loop:
reserv = limiter.ReserveN(time.Now(), 1) // one event
// reserv.OK() seems to always be true here!
case _, ok := <-obj.state[vertex].pokeChan: // read from buffered poke channel
case _, ok := <-state.pokeChan: // read from buffered poke channel
if !ok { // we never close it
panic("unexpected close of poke channel")
}
@@ -468,9 +604,9 @@ Loop:
}
reserv = nil // we didn't receive a real event here...
case _, ok := <-obj.state[vertex].pauseSignal: // one message
case _, ok := <-state.pauseSignal: // one message
if !ok {
obj.state[vertex].pauseSignal = nil
state.pauseSignal = nil
continue // this is not a new pause message
}
// NOTE: If we allowed a doneCtx below to let us out
@@ -482,7 +618,7 @@ Loop:
// we are paused now, and waiting for resume or exit...
select {
case _, ok := <-obj.state[vertex].resumeSignal: // channel closes
case _, ok := <-state.resumeSignal: // channel closes
if !ok {
closed = true
}
@@ -497,9 +633,9 @@ Loop:
}
// drop redundant pokes
for len(obj.state[vertex].pokeChan) > 0 {
for len(state.pokeChan) > 0 {
select {
case <-obj.state[vertex].pokeChan:
case <-state.pokeChan:
default:
// race, someone else read one!
}
@@ -516,7 +652,7 @@ Loop:
d = reserv.DelayFrom(time.Now())
}
if reserv != nil && d > 0 { // delay
obj.state[vertex].init.Logf("limited (rate: %v/sec, burst: %d, next: %dms)", res.MetaParams().Limit, res.MetaParams().Burst, d/time.Millisecond)
state.init.Logf("limited (rate: %v/sec, burst: %d, next: %dms)", res.MetaParams().Limit, res.MetaParams().Burst, d/time.Millisecond)
timer := time.NewTimer(time.Duration(d) * time.Millisecond)
LimitWait:
for {
@@ -528,13 +664,13 @@ Loop:
break LimitWait
// consume other events while we're waiting...
case e, ok := <-obj.state[vertex].eventsChan: // read from watch channel
case e, ok := <-state.eventsChan: // read from watch channel
if !ok {
return reterr // we only return when chan closes
}
if e != nil {
failed = true
close(obj.state[vertex].limitDone) // causes doneCtx to cancel
close(state.limitDone) // causes doneCtx to cancel
reterr = errwrap.Append(reterr, e) // permanent failure
break LimitWait
}
@@ -545,13 +681,13 @@ Loop:
limiter.ReserveN(time.Now(), 1) // one event
// this pause/resume block is the same as the upper main one
case _, ok := <-obj.state[vertex].pauseSignal:
case _, ok := <-state.pauseSignal:
if !ok {
obj.state[vertex].pauseSignal = nil
state.pauseSignal = nil
break LimitWait
}
select {
case _, ok := <-obj.state[vertex].resumeSignal: // channel closes
case _, ok := <-state.resumeSignal: // channel closes
if !ok {
closed = true
}
@@ -560,7 +696,7 @@ Loop:
}
}
timer.Stop() // it's nice to cleanup
obj.state[vertex].init.Logf("rate limiting expired!")
state.init.Logf("rate limiting expired!")
}
// don't Process anymore if we've already failed or shutdown...
if failed || closed {
@@ -587,13 +723,13 @@ Loop:
break RetryWait
// consume other events while we're waiting...
case e, ok := <-obj.state[vertex].eventsChan: // read from watch channel
case e, ok := <-state.eventsChan: // read from watch channel
if !ok {
return reterr // we only return when chan closes
}
if e != nil {
failed = true
close(obj.state[vertex].retryDone) // causes doneCtx to cancel
close(state.retryDone) // causes doneCtx to cancel
reterr = errwrap.Append(reterr, e) // permanent failure
break RetryWait
}
@@ -604,13 +740,13 @@ Loop:
limiter.ReserveN(time.Now(), 1) // one event
// this pause/resume block is the same as the upper main one
case _, ok := <-obj.state[vertex].pauseSignal:
case _, ok := <-state.pauseSignal:
if !ok {
obj.state[vertex].pauseSignal = nil
state.pauseSignal = nil
break RetryWait
}
select {
case _, ok := <-obj.state[vertex].resumeSignal: // channel closes
case _, ok := <-state.resumeSignal: // channel closes
if !ok {
closed = true
}
@@ -620,7 +756,7 @@ Loop:
}
timer.Stop() // it's nice to cleanup
delay = 0 // reset
obj.state[vertex].init.Logf("the CheckApply delay expired!")
state.init.Logf("the CheckApply delay expired!")
}
// don't Process anymore if we've already failed or shutdown...
if failed || closed {
@@ -631,7 +767,7 @@ Loop:
obj.Logf("Process(%s)", vertex)
}
backPoke := false
err = obj.Process(obj.state[vertex].doneCtx, vertex)
err = obj.Process(state.doneCtx, vertex)
if err == engine.ErrBackPoke {
backPoke = true
err = nil // for future code safety
@@ -656,7 +792,7 @@ Loop:
}
if metas.CheckApplyRetry > 0 { // don't decrement past 0
metas.CheckApplyRetry--
obj.state[vertex].init.Logf(
state.init.Logf(
"retrying CheckApply after %.4f seconds (%d left)",
float64(delay)/1000,
metas.CheckApplyRetry,
@@ -671,7 +807,7 @@ Loop:
// this dies. If Process fails permanently, we ask it
// to exit right here... (It happens when we loop...)
failed = true
close(obj.state[vertex].processDone) // causes doneCtx to cancel
close(state.processDone) // causes doneCtx to cancel
reterr = errwrap.Append(reterr, err) // permanent failure
continue

View File

@@ -56,7 +56,7 @@ func AutoEdge(graph *pgraph.Graph, debug bool, logf func(format string, v ...int
sorted = append(sorted, res)
}
for _, res := range sorted { // for each vertexes autoedges
for _, res := range sorted { // for each vertices autoedges
autoEdgeObj, e := res.AutoEdges()
if e != nil {
err = errwrap.Append(err, e) // collect all errors

View File

@@ -95,12 +95,20 @@ func (obj *wrappedGrouper) VertexCmp(v1, v2 pgraph.Vertex) error {
return fmt.Errorf("one of the autogroup flags is false")
}
// We don't want to bail on these two conditions if the kinds are the
// same. This prevents us from having a linear chain of pkg->pkg->pkg,
// instead of flattening all of them into one arbitrary choice. But if
// we are doing hierarchical grouping, then we want to allow this type
// of grouping, or we won't end up building any hierarchies! This was
// added for http:server:ui. Check this condition is really required.
if r1.Kind() == r2.Kind() { // XXX: needed or do we unwrap the contents?
if r1.IsGrouped() { // already grouped!
return fmt.Errorf("already grouped")
}
if len(r2.GetGroup()) > 0 { // already has children grouped!
return fmt.Errorf("already has groups")
}
}
if err := r1.GroupCmp(r2); err != nil { // resource groupcmp failed!
return errwrap.Wrapf(err, "the GroupCmp failed")
}

View File

@@ -59,11 +59,15 @@ func AutoGroup(ag engine.AutoGrouper, g *pgraph.Graph, debug bool, logf func(for
if err := ag.VertexCmp(v, w); err != nil { // cmp ?
if debug {
logf("!GroupCmp for: %s into: %s", wStr, vStr)
logf("!GroupCmp err: %+v", err)
}
// remove grouped vertex and merge edges (res is safe)
} else if err := VertexMerge(g, v, w, ag.VertexMerge, ag.EdgeMerge); err != nil { // merge...
logf("!VertexMerge for: %s into: %s", wStr, vStr)
if debug {
logf("!VertexMerge err: %+v", err)
}
} else { // success!
logf("%s into %s", wStr, vStr)

View File

@@ -49,6 +49,13 @@ import (
func init() {
engine.RegisterResource("nooptest", func() engine.Res { return &NoopResTest{} })
engine.RegisterResource("nooptestkind:foo", func() engine.Res { return &NoopResTest{} })
engine.RegisterResource("nooptestkind:foo:hello", func() engine.Res { return &NoopResTest{} })
engine.RegisterResource("nooptestkind:foo:world", func() engine.Res { return &NoopResTest{} })
engine.RegisterResource("nooptestkind:foo:world:big", func() engine.Res { return &NoopResTest{} })
engine.RegisterResource("nooptestkind:foo:world:bad", func() engine.Res { return &NoopResTest{} })
engine.RegisterResource("nooptestkind:foo:world:bazzz", func() engine.Res { return &NoopResTest{} })
engine.RegisterResource("nooptestkind:this:is:very:long", func() engine.Res { return &NoopResTest{} })
}
// NoopResTest is a no-op resource that groups strangely.
@@ -108,19 +115,35 @@ func (obj *NoopResTest) GroupCmp(r engine.GroupableRes) error {
}
// TODO: implement this in vertexCmp for *testGrouper instead?
k1 := strings.HasPrefix(obj.Kind(), "nooptestkind:")
k2 := strings.HasPrefix(res.Kind(), "nooptestkind:")
if !k1 && !k2 { // XXX: compat mode, to skip during "kind" tests
if strings.Contains(res.Name(), ",") { // HACK
return fmt.Errorf("already grouped") // element to be grouped is already grouped!
}
}
// XXX: make a better grouping algorithm for test expression
// XXX: this prevents us from re-using the same kind twice in a test...
// group different kinds if they're hierarchical (helpful hack for testing)
if obj.Kind() != res.Kind() {
s1 := strings.Split(obj.Kind(), ":")
s2 := strings.Split(res.Kind(), ":")
if len(s1) > len(s2) { // let longer get grouped INTO shorter
return fmt.Errorf("chunk inversion")
}
}
// group if they start with the same letter! (helpful hack for testing)
if obj.Name()[0] != res.Name()[0] {
return fmt.Errorf("different starting letter")
}
//fmt.Printf("group of: %+v into: %+v\n", res.Kind(), obj.Kind())
return nil
}
func NewNoopResTest(name string) *NoopResTest {
n, err := engine.NewNamedResource("nooptest", name)
func NewKindNoopResTest(kind, name string) *NoopResTest {
n, err := engine.NewNamedResource(kind, name)
if err != nil {
panic(fmt.Sprintf("unexpected error: %+v", err))
}
@@ -138,6 +161,10 @@ func NewNoopResTest(name string) *NoopResTest {
return x
}
func NewNoopResTest(name string) *NoopResTest {
return NewKindNoopResTest("nooptest", name)
}
func NewNoopResTestSema(name string, semas []string) *NoopResTest {
n := NewNoopResTest(name)
n.MetaParams().Sema = semas
@@ -174,21 +201,29 @@ func (obj *testGrouper) VertexCmp(v1, v2 pgraph.Vertex) error {
return fmt.Errorf("v2 is not a GroupableRes")
}
if r1.Kind() != r2.Kind() { // we must group similar kinds
// TODO: maybe future resources won't need this limitation?
return fmt.Errorf("the two resources aren't the same kind")
}
//if r1.Kind() != r2.Kind() { // we must group similar kinds
// // TODO: maybe future resources won't need this limitation?
// return fmt.Errorf("the two resources aren't the same kind")
//}
// someone doesn't want to group!
if r1.AutoGroupMeta().Disabled || r2.AutoGroupMeta().Disabled {
return fmt.Errorf("one of the autogroup flags is false")
}
// We don't want to bail on these two conditions if the kinds are the
// same. This prevents us from having a linear chain of pkg->pkg->pkg,
// instead of flattening all of them into one arbitrary choice. But if
// we are doing hierarchical grouping, then we want to allow this type
// of grouping, or we won't end up building any hierarchies!
if r1.Kind() == r2.Kind() {
if r1.IsGrouped() { // already grouped!
return fmt.Errorf("already grouped")
}
if len(r2.GetGroup()) > 0 { // already has children grouped!
return fmt.Errorf("already has groups")
}
}
if err := r1.GroupCmp(r2); err != nil { // resource groupcmp failed!
return errwrap.Wrapf(err, "the GroupCmp failed")
}
@@ -197,6 +232,8 @@ func (obj *testGrouper) VertexCmp(v1, v2 pgraph.Vertex) error {
}
func (obj *testGrouper) VertexMerge(v1, v2 pgraph.Vertex) (v pgraph.Vertex, err error) {
//fmt.Printf("merge of: %s into: %s\n", v2, v1)
// NOTE: this doesn't look at kind!
r1 := v1.(engine.GroupableRes)
r2 := v2.(engine.GroupableRes)
if err := r1.GroupRes(r2); err != nil { // group them first
@@ -273,9 +310,13 @@ Loop:
for v1 := range g1.Adjacency() { // for each vertex in g1
r1 := v1.(engine.GroupableRes)
l1 := strings.Split(r1.Name(), ",") // make list of everyone's names...
// XXX: this should be recursive for hierarchical grouping...
// XXX: instead, hack it for now:
if !strings.HasPrefix(r1.Kind(), "nooptestkind:") {
for _, x1 := range r1.GetGroup() {
l1 = append(l1, x1.Name()) // add my contents
}
}
l1 = util.StrRemoveDuplicatesInList(l1) // remove duplicates
sort.Strings(l1)
@@ -283,9 +324,13 @@ Loop:
for v2 := range g2.Adjacency() { // does it match in g2 ?
r2 := v2.(engine.GroupableRes)
l2 := strings.Split(r2.Name(), ",")
// XXX: this should be recursive for hierarchical grouping...
// XXX: instead, hack it for now:
if !strings.HasPrefix(r2.Kind(), "nooptestkind:") {
for _, x2 := range r2.GetGroup() {
l2 = append(l2, x2.Name())
}
}
l2 = util.StrRemoveDuplicatesInList(l2) // remove duplicates
sort.Strings(l2)
@@ -301,7 +346,7 @@ Loop:
// check edges
for v1 := range g1.Adjacency() { // for each vertex in g1
v2 := m[v1] // lookup in map to get correspondance
v2 := m[v1] // lookup in map to get correspondence
// g1.Adjacency()[v1] corresponds to g2.Adjacency()[v2]
if e1, e2 := len(g1.Adjacency()[v1]), len(g2.Adjacency()[v2]); e1 != e2 {
r1 := v1.(engine.Res)
@@ -771,9 +816,9 @@ func TestPgraphGrouping16(t *testing.T) {
a := NewNoopResTest("a1,a2")
b1 := NewNoopResTest("b1")
c1 := NewNoopResTest("c1")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
e1 := NE("e1") // +e3 a bit?
e2 := NE("e2") // ok!
e3 := NE("e3") // +e1 a bit?
g3.AddEdge(a, b1, e1)
g3.AddEdge(b1, c1, e2)
g3.AddEdge(a, c1, e3)
@@ -859,9 +904,9 @@ func TestPgraphGrouping18(t *testing.T) {
a := NewNoopResTest("a1,a2")
b := NewNoopResTest("b1,b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1")
e2 := NE("e2,e4")
e3 := NE("e3")
e1 := NE("e1") // +e3 a bit?
e2 := NE("e2,e4") // ok!
e3 := NE("e3") // +e1 a bit?
g3.AddEdge(a, b, e1)
g3.AddEdge(b, c1, e2)
g3.AddEdge(a, c1, e3)
@@ -978,3 +1023,110 @@ func TestPgraphSemaphoreGrouping3(t *testing.T) {
}
runGraphCmp(t, g1, g2)
}
func TestPgraphGroupingKinds0(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewKindNoopResTest("nooptestkind:foo", "a1")
a2 := NewKindNoopResTest("nooptestkind:foo:hello", "a2")
g1.AddVertex(a1, a2)
}
g2, _ := pgraph.NewGraph("g2") // expected result ?
{
a := NewNoopResTest("a1,a2")
g2.AddVertex(a)
}
runGraphCmp(t, g1, g2)
}
func TestPgraphGroupingKinds1(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewKindNoopResTest("nooptestkind:foo", "a1")
a2 := NewKindNoopResTest("nooptestkind:foo:world", "a2")
a3 := NewKindNoopResTest("nooptestkind:foo:world:big", "a3")
g1.AddVertex(a1, a2, a3)
}
g2, _ := pgraph.NewGraph("g2") // expected result ?
{
a := NewNoopResTest("a1,a2,a3")
g2.AddVertex(a)
}
runGraphCmp(t, g1, g2)
}
func TestPgraphGroupingKinds2(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewKindNoopResTest("nooptestkind:foo", "a1")
a2 := NewKindNoopResTest("nooptestkind:foo:world", "a2")
a3 := NewKindNoopResTest("nooptestkind:foo:world:big", "a3")
a4 := NewKindNoopResTest("nooptestkind:foo:world:bad", "a4")
g1.AddVertex(a1, a2, a3, a4)
}
g2, _ := pgraph.NewGraph("g2") // expected result ?
{
a := NewNoopResTest("a1,a2,a3,a4")
g2.AddVertex(a)
}
runGraphCmp(t, g1, g2)
}
func TestPgraphGroupingKinds3(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewKindNoopResTest("nooptestkind:foo", "a1")
a2 := NewKindNoopResTest("nooptestkind:foo:world", "a2")
a3 := NewKindNoopResTest("nooptestkind:foo:world:big", "a3")
a4 := NewKindNoopResTest("nooptestkind:foo:world:bad", "a4")
a5 := NewKindNoopResTest("nooptestkind:foo:world:bazzz", "a5")
g1.AddVertex(a1, a2, a3, a4, a5)
}
g2, _ := pgraph.NewGraph("g2") // expected result ?
{
a := NewNoopResTest("a1,a2,a3,a4,a5")
g2.AddVertex(a)
}
runGraphCmp(t, g1, g2)
}
// This test is valid, but our test system doesn't support duplicate kinds atm.
//func TestPgraphGroupingKinds4(t *testing.T) {
// g1, _ := pgraph.NewGraph("g1") // original graph
// {
// a1 := NewKindNoopResTest("nooptestkind:foo", "a1")
// a2 := NewKindNoopResTest("nooptestkind:foo:world", "a2")
// a3 := NewKindNoopResTest("nooptestkind:foo:world:big", "a3")
// a4 := NewKindNoopResTest("nooptestkind:foo:world:big", "a4")
// g1.AddVertex(a1, a2, a3, a4)
// }
// g2, _ := pgraph.NewGraph("g2") // expected result ?
// {
// a := NewNoopResTest("a1,a2,a3,a4")
// g2.AddVertex(a)
// }
// runGraphCmp(t, g1, g2)
//}
func TestPgraphGroupingKinds5(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewKindNoopResTest("nooptestkind:foo", "a1")
a2 := NewKindNoopResTest("nooptestkind:foo:world", "a2")
a3 := NewKindNoopResTest("nooptestkind:foo:world:big", "a3")
a4 := NewKindNoopResTest("nooptestkind:foo:world:bad", "a4")
a5 := NewKindNoopResTest("nooptestkind:foo:world:bazzz", "a5")
b1 := NewKindNoopResTest("nooptestkind:foo", "b1")
// NOTE: the very long one shouldn't group, but our test doesn't
// support detecting this pattern at the moment...
b2 := NewKindNoopResTest("nooptestkind:this:is:very:long", "b2")
g1.AddVertex(a1, a2, a3, a4, a5, b1, b2)
}
g2, _ := pgraph.NewGraph("g2") // expected result ?
{
a := NewNoopResTest("a1,a2,a3,a4,a5")
b := NewNoopResTest("b1,b2")
g2.AddVertex(a, b)
}
runGraphCmp(t, g1, g2)
}

View File

@@ -58,17 +58,18 @@ func (ag *baseGrouper) Init(g *pgraph.Graph) error {
ag.graph = g // pointer
// We sort deterministically, first by kind, and then by name. In
// particular, longer kind chunks sort first. So http:ui:text should
// appear before http:server and http:ui. This is a hack so that if we
// are doing hierarchical automatic grouping, it gives the http:ui:text
// a chance to get grouped into http:ui, before http:ui gets grouped
// into http:server, because once that happens, http:ui:text will never
// get grouped, and this won't work properly. This works, because when
// we start comparing iteratively the list of resources, it does this
// with a O(n^2) loop that compares the X and Y zero indexes first, and
// and then continues along. If the "longer" resources appear first,
// then they'll group together first. We should probably put this into
// a new Grouper struct, but for now we might as well leave it here.
// particular, longer kind chunks sort first. So http:server:ui:input
// should appear before http:server and http:server:ui. This is a
// strategy so that if we are doing hierarchical automatic grouping, it
// gives the http:server:ui:input a chance to get grouped into
// http:server:ui, before http:server:ui gets grouped into http:server,
// because once that happens, http:server:ui:input will never get
// grouped, and this won't work properly. This works, because when we
// start comparing iteratively the list of resources, it does this with
// a O(n^2) loop that compares the X and Y zero indexes first, and then
// continues along. If the "longer" resources appear first, then they'll
// group together first. We should probably put this into a new Grouper
// struct, but for now we might as well leave it here.
//vertices := ag.graph.VerticesSorted() // formerly
vertices := RHVSort(ag.graph.Vertices())
@@ -134,7 +135,7 @@ func (ag *baseGrouper) VertexNext() (v1, v2 pgraph.Vertex, err error) {
return
}
// VertexCmp can be used in addition to an overridding implementation.
// VertexCmp can be used in addition to an overriding implementation.
func (ag *baseGrouper) VertexCmp(v1, v2 pgraph.Vertex) error {
if v1 == nil || v2 == nil {
return fmt.Errorf("the vertex is nil")

View File

@@ -181,7 +181,7 @@ func (obj RHVSlice) Less(i, j int) bool {
li := len(si)
lj := len(sj)
if li != lj { // eg: http:ui vs. http:ui:text
if li != lj { // eg: http:server:ui vs. http:server:ui:text
return li > lj // reverse
}

View File

@@ -0,0 +1,84 @@
// Mgmt
// Copyright (C) James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//
// Additional permission under GNU GPL version 3 section 7
//
// If you modify this program, or any covered work, by linking or combining it
// with embedded mcl code and modules (and that the embedded mcl code and
// modules which link with this program, contain a copy of their source code in
// the authoritative form) containing parts covered by the terms of any other
// license, the licensors of this program grant you additional permission to
// convey the resulting work. Furthermore, the licensors of this program grant
// the original author, James Shubin, additional permission to update this
// additional permission if he deems it necessary to achieve the goals of this
// additional permission.
//go:build !root
package autogroup
import (
"fmt"
"testing"
"github.com/purpleidea/mgmt/engine"
_ "github.com/purpleidea/mgmt/engine/resources" // import so the resources register
"github.com/purpleidea/mgmt/pgraph"
)
// ListPgraphVertexCmp compares two lists of pgraph.Vertex pointers.
func ListPgraphVertexCmp(a, b []pgraph.Vertex) bool {
//fmt.Printf("CMP: %v with %v\n", a, b) // debugging
if a == nil && b == nil {
return true
}
if a == nil || b == nil {
return false
}
if len(a) != len(b) {
return false
}
for i := range a {
if a[i] != b[i] {
return false
}
}
return true
}
// empty graph
func TestRHVSort1(t *testing.T) {
r1, err := engine.NewNamedResource("http:server", "foo")
if err != nil {
panic(fmt.Sprintf("unexpected error: %+v", err))
}
r2, err := engine.NewNamedResource("http:server:ui", "bar")
if err != nil {
panic(fmt.Sprintf("unexpected error: %+v", err))
}
vertices := []pgraph.Vertex{r1, r2}
expected := []pgraph.Vertex{r2, r1}
if out := RHVSort(vertices); !ListPgraphVertexCmp(expected, out) {
t.Errorf("vertices: %+v", vertices)
t.Errorf("expected: %+v", expected)
t.Errorf("test out: %+v", out)
}
}

View File

@@ -37,6 +37,7 @@ import (
"os"
"path"
"sync"
"sync/atomic"
"github.com/purpleidea/mgmt/converger"
"github.com/purpleidea/mgmt/engine"
@@ -59,7 +60,10 @@ type Engine struct {
Version string
Hostname string
// Break off separate logical pieces into chunks where possible.
Converger *converger.Coordinator
Exporter *Exporter
Local *local.API
World engine.World
@@ -72,6 +76,7 @@ type Engine struct {
graph *pgraph.Graph
nextGraph *pgraph.Graph
state map[pgraph.Vertex]*State
tlock *sync.RWMutex // lock around state map
waits map[pgraph.Vertex]*sync.WaitGroup // wg for the Worker func
wlock *sync.Mutex // lock around waits map
@@ -84,7 +89,10 @@ type Engine struct {
wg *sync.WaitGroup // wg for the whole engine (only used for close)
paused bool // are we paused?
fastPause bool
fastPause *atomic.Bool
isClosing bool // are we shutting down?
errMutex *sync.Mutex // wraps the *state workerErr (one mutex for all)
}
// Init initializes the internal structures and starts this the graph running.
@@ -112,11 +120,12 @@ func (obj *Engine) Init() error {
}
obj.state = make(map[pgraph.Vertex]*State)
obj.tlock = &sync.RWMutex{}
obj.waits = make(map[pgraph.Vertex]*sync.WaitGroup)
obj.wlock = &sync.Mutex{}
obj.mlock = &sync.Mutex{}
obj.metas = make(map[engine.ResPtrUID]*engine.MetaState)
obj.metas = make(map[engine.ResPtrUID]*engine.MetaState) // don't include .Hidden res
obj.slock = &sync.Mutex{}
obj.semas = make(map[string]*semaphore.Semaphore)
@@ -124,6 +133,21 @@ func (obj *Engine) Init() error {
obj.wg = &sync.WaitGroup{}
obj.paused = true // start off true, so we can Resume after first Commit
obj.fastPause = &atomic.Bool{}
obj.errMutex = &sync.Mutex{}
obj.Exporter = &Exporter{
World: obj.World,
Debug: obj.Debug,
Logf: func(format string, v ...interface{}) {
// TODO: is this a sane prefix to use here?
obj.Logf("export: "+format, v...)
},
}
if err := obj.Exporter.Init(); err != nil {
return err
}
return nil
}
@@ -188,6 +212,12 @@ func (obj *Engine) Commit() error {
if !ok { // should not happen, previously validated
return fmt.Errorf("not a Res")
}
// Skip this if Hidden since we can have a hidden res that has
// the same kind+name as a regular res, and this would conflict.
if res.MetaParams().Hidden {
continue
}
activeMetas[engine.PtrUID(res)] = struct{}{} // add
}
@@ -208,7 +238,11 @@ func (obj *Engine) Commit() error {
return fmt.Errorf("the Res state already exists")
}
// Skip this if Hidden since we can have a hidden res that has
// the same kind+name as a regular res, and this would conflict.
if !res.MetaParams().Hidden {
activeMetas[engine.PtrUID(res)] = struct{}{} // add
}
if obj.Debug {
obj.Logf("Validate(%s)", res)
@@ -281,7 +315,9 @@ func (obj *Engine) Commit() error {
obj.Logf("%s: Exited...", v)
}
}
obj.errMutex.Lock()
obj.state[v].workerErr = err // store the error
obj.errMutex.Unlock()
// If the Rewatch metaparam is true, then this will get
// restarted if we do a graph cmp swap. This is why the
// graph cmp function runs the removes before the adds.
@@ -299,7 +335,12 @@ func (obj *Engine) Commit() error {
if !ok { // should not happen, previously validated
return fmt.Errorf("not a Res")
}
// Skip this if Hidden since we can have a hidden res that has
// the same kind+name as a regular res, and this would conflict.
if !res.MetaParams().Hidden {
delete(activeMetas, engine.PtrUID(res))
}
// wait for exit before starting new graph!
close(obj.state[vertex].removeDone) // causes doneCtx to cancel
@@ -314,7 +355,9 @@ func (obj *Engine) Commit() error {
// delete to free up memory from old graphs
fn := func() error {
obj.tlock.Lock()
delete(obj.state, vertex)
obj.tlock.Unlock()
delete(obj.waits, vertex)
return nil
}
@@ -342,12 +385,15 @@ func (obj *Engine) Commit() error {
s1, ok1 := obj.state[v1]
s2, ok2 := obj.state[v2]
x1, x2 := false, false
// no need to have different mutexes for each state atm
obj.errMutex.Lock()
if ok1 {
x1 = s1.workerErr != nil && swap1
}
if ok2 {
x2 = s2.workerErr != nil && swap2
}
obj.errMutex.Unlock()
if x1 || x2 {
// We swap, even if they're the same, so that we reload!
@@ -467,7 +513,7 @@ func (obj *Engine) Resume() error {
// poke. In general this is only called when you're trying to hurry up the exit.
// XXX: Not implemented
func (obj *Engine) SetFastPause() {
obj.fastPause = true
obj.fastPause.Store(true)
}
// Pause the active, running graph.
@@ -480,7 +526,7 @@ func (obj *Engine) Pause(fastPause bool) error {
return fmt.Errorf("already paused")
}
obj.fastPause = fastPause
obj.fastPause.Store(fastPause)
topoSort, _ := obj.graph.TopologicalSort()
for _, vertex := range topoSort { // squeeze out the events...
// The Event is sent to an unbuffered channel, so this event is
@@ -493,7 +539,7 @@ func (obj *Engine) Pause(fastPause bool) error {
obj.paused = true
// we are now completely paused...
obj.fastPause = false // reset
obj.fastPause.Store(false) // reset
return nil
}
@@ -501,6 +547,7 @@ func (obj *Engine) Pause(fastPause bool) error {
// actually just a Load of an empty graph and a Commit. It waits for all the
// resources to exit before returning.
func (obj *Engine) Shutdown() error {
obj.isClosing = true
emptyGraph, reterr := pgraph.NewGraph("empty")
// this is a graph switch (graph sync) that switches to an empty graph!
@@ -517,6 +564,15 @@ func (obj *Engine) Shutdown() error {
return reterr
}
// IsClosing tells the caller if a Shutdown() was run. This is helpful so that
// the graph can behave slightly differently when receiving the final empty
// graph. This is because it's empty because we passed one to unload everything,
// not because the user actually removed all resources. We may want to preserve
// the exported state for example, and not purge it.
func (obj *Engine) IsClosing() bool {
return obj.isClosing
}
// Graph returns the running graph.
func (obj *Engine) Graph() *pgraph.Graph {
return obj.graph

engine/graph/exporter.go (new file, 355 lines)
View File

@@ -0,0 +1,355 @@
// Mgmt
// Copyright (C) James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//
// Additional permission under GNU GPL version 3 section 7
//
// If you modify this program, or any covered work, by linking or combining it
// with embedded mcl code and modules (and that the embedded mcl code and
// modules which link with this program, contain a copy of their source code in
// the authoritative form) containing parts covered by the terms of any other
// license, the licensors of this program grant you additional permission to
// convey the resulting work. Furthermore, the licensors of this program grant
// the original author, James Shubin, additional permission to update this
// additional permission if he deems it necessary to achieve the goals of this
// additional permission.
package graph
import (
"context"
"fmt"
"sync"
"github.com/purpleidea/mgmt/engine"
engineUtil "github.com/purpleidea/mgmt/engine/util"
"github.com/purpleidea/mgmt/pgraph"
)
// Exporter is the main engine mechanism that sends the exported resource data
// to the World database. The code is relatively succinct, but slightly subtle.
type Exporter struct {
// Watch specifies if we want to enable the additional watch feature. It
// should probably be left off unless we're debugging something or using
// weird environments where we expect someone to mess with our res data.
Watch bool
World engine.World
Debug bool
Logf func(format string, v ...interface{})
state map[engine.ResDelete]bool // key NOT a pointer for it to be unique
prev map[engine.ResDelete]pgraph.Vertex
mutex *sync.Mutex
// watch specific variables
workerRunning bool
workerWg *sync.WaitGroup
workerCtx context.Context
workerCancel func()
}
// Init performs some initialization before first use. This is required.
func (obj *Exporter) Init() error {
obj.state = make(map[engine.ResDelete]bool)
obj.prev = make(map[engine.ResDelete]pgraph.Vertex)
obj.mutex = &sync.Mutex{}
obj.workerRunning = false
obj.workerWg = &sync.WaitGroup{}
obj.workerCtx, obj.workerCancel = context.WithCancel(context.Background())
return nil
}
// Export performs the worldly export, and then stores the resource unique ID in
// our in-memory data store. Exported resources use this tracking to know when
// to run their cleanups. If this function encounters an error, it returns
// (false, err). If it does nothing it returns (true, nil). If it does work it
// return (false, nil). These return codes match how CheckApply returns. This
// may run concurrently by multiple different resources, so as a result it must
// stay thread safe.
func (obj *Exporter) Export(ctx context.Context, res engine.Res) (bool, error) {
// As a result of running this operation in roughly the same places that
// the usual CheckApply step would run, we end up with a more nuanced
// and mature "exported resources" model than what was ever possible
// with other tools. We can now "wait" (via the resource graph
// dependencies) to run an export until an earlier resource dependency
// step has run. We can also programmatically "un-export" a resource by
// publishing a subsequent resource graph which either removes that
// Export flag or the entire resource. The one downside is that
// exporting to the database happens in multiple transactions rather
// than a batched bolus, but this is more appropriate because we're now
// more accurately modelling real-time systems, and this bandwidth is
// not a significant amount anyways. Lastly, we make sure to not run the
// purge when we ^C, since it should be safe to shutdown without killing
// all the data we left there.
if res.MetaParams().Noop {
return true, nil // did nothing
}
exports := res.MetaParams().Export
if len(exports) == 0 {
return true, nil // did nothing
}
// It's OK to check the cache here instead of re-sending via the World
// API and so on, because the only way the Res data would change in
// World is if (1) someone messed with etcd, which we'd see with Watch,
// or (2) if the Res data changed because we have a new resource graph.
// If we have a new resource graph, then any changed elements will get
// pruned from this state cache via the Prune method, which helps us.
// If send/recv or any other weird resource method changes things, then
// we also want to invalidate the state cache.
state := true
// TODO: This recv code is untested!
if r, ok := res.(engine.RecvableRes); ok {
for _, v := range r.Recv() { // map[string]*Send
// XXX: After we read the changed value, will it persist?
state = state && !v.Changed
}
}
obj.mutex.Lock()
for _, ptrUID := range obj.ptrUID(res) {
b := obj.state[*ptrUID] // no need to check if exists
state = state && b // if any are false, it's all false
}
obj.mutex.Unlock()
if state {
return true, nil // state OK!
}
// XXX: Do we want to change any metaparams when we export?
// XXX: Do we want to change any metaparams when we collect?
b64, err := obj.resToB64(res)
if err != nil {
return false, err
}
resourceExports := []*engine.ResExport{}
duplicates := make(map[string]struct{})
for _, export := range exports {
//ptrUID := engine.ResDelete{
// Kind: res.Kind(),
// Name: res.Name(),
// Host: export,
//}
if export == "*" {
export = "" // XXX: use whatever means "all"
}
if _, exists := duplicates[export]; exists {
continue
}
duplicates[export] = struct{}{}
// skip this check since why race it or split the resource...
//if stateOK := obj.state[ptrUID]; stateOK {
// // rare that we'd have a split of some of these from a
// // single resource updated and others already fine, but
// // might as well do the check since it's cheap...
// continue
//}
resExport := &engine.ResExport{
Kind: res.Kind(),
Name: res.Name(),
Host: export,
Data: b64, // encoded res data
}
resourceExports = append(resourceExports, resExport)
}
// The fact that we Watch the write-only-by-us values at all, is a
// luxury that allows us to handle mischievous actors that overwrote an
// exported value. It really isn't necessary. It's the consumers that
// really need to watch.
if err := obj.worker(); err != nil {
return false, err // big error
}
// TODO: Do we want to log more information about where this exports to?
obj.Logf("%s", res)
//obj.Logf("%s\n", engineUtil.DebugStructFields(res)) // debug
// XXX: Add a TTL if requested
b, err := obj.World.ResExport(ctx, resourceExports) // do it!
if err != nil {
return false, err
}
obj.mutex.Lock()
defer obj.mutex.Unlock()
// NOTE: The Watch() method *must* invalidate this state if it changes.
// This is only pertinent if we're using the luxury Watch add-ons.
for _, ptrUID := range obj.ptrUID(res) {
obj.state[*ptrUID] = true // state OK!
}
return b, nil
}
// Prune removes any exports which are no longer actively present in the
// resource graph. This cleans things up between graph swaps. This should NOT
// run if we're shutting down cleanly. Keep in mind that this must act on the
// new graph which is available by "Commit", not before we're ready to "Commit".
func (obj *Exporter) Prune(ctx context.Context, graph *pgraph.Graph) error {
// mutex should be optional since this should only run when graph paused
obj.mutex.Lock()
defer obj.mutex.Unlock()
// make searching faster by initially storing it all in a map
m := make(map[engine.ResDelete]pgraph.Vertex) // key is NOT a pointer
for _, v := range graph.Vertices() {
res, ok := v.(engine.Res)
if !ok { // should not happen
return fmt.Errorf("not a Res")
}
for _, ptrUID := range obj.ptrUID(res) { // skips non-export things
m[*ptrUID] = v
}
}
resourceDeletes := []*engine.ResDelete{}
for k := range obj.state {
v, exists := m[k] // exists means it's in the graph
prev := obj.prev[k]
obj.prev[k] = v // may be nil
if exists && v != prev { // pointer compare to old vertex
// Here we have a Res that previously existed under the
// same kind/name/host. We need to invalidate the state
// only if it's a different Res than the previous one!
// If we do this erroneously, it causes extra traffic.
obj.state[k] = false // do this only if the Res is NEW
continue // skip it, it's staying
} else if exists {
// If it exists and it's the same as it was, do nothing.
// This is important to prevent thrashing/flapping...
continue
}
// These don't exist anymore, we have to get rid of them...
delete(obj.state, k) // it's gone!
resourceDeletes = append(resourceDeletes, &k)
}
if len(resourceDeletes) == 0 {
return nil
}
obj.Logf("prune: %d exports", len(resourceDeletes))
for _, x := range resourceDeletes {
obj.Logf("prune: %s to %s", engine.Repr(x.Kind, x.Name), x.Host)
}
// XXX: this function could optimize the grouping since we split the
// list of host entries out from the kind/name since we can't have a
// unique map key with a struct that contains a slice.
if _, err := obj.World.ResDelete(ctx, resourceDeletes); err != nil {
return err
}
return nil
}
// resToB64 is a helper to refactor out this method.
func (obj *Exporter) resToB64(res engine.Res) (string, error) {
if r, ok := res.(engine.ExportableRes); ok {
return r.ToB64()
}
return engineUtil.ResToB64(res)
}
// ptrUID is a helper for this repetitive code.
func (obj *Exporter) ptrUID(res engine.Res) []*engine.ResDelete {
a := []*engine.ResDelete{}
for _, export := range res.MetaParams().Export {
if export == "*" {
export = "" // XXX: use whatever means "all"
}
ptrUID := &engine.ResDelete{
Kind: res.Kind(),
Name: res.Name(),
Host: export,
}
a = append(a, ptrUID)
}
return a
}
// worker is a helper to kick off the optional Watch workers.
func (obj *Exporter) worker() error {
if !obj.Watch {
return nil // feature is disabled
}
obj.mutex.Lock()
defer obj.mutex.Unlock()
if obj.workerRunning {
return nil // already running
}
kind := "" // watch everything
ch, err := obj.World.ResWatch(obj.workerCtx, kind) // (chan error, error)
if err != nil {
return err // big error
}
obj.workerRunning = true
obj.workerWg.Add(1)
go func() {
defer func() {
obj.mutex.Lock()
obj.workerRunning = false
obj.mutex.Unlock()
}()
defer obj.workerWg.Done()
Loop:
for {
var e error
var ok bool
select {
case e, ok = <-ch:
if !ok {
// chan closed
break Loop
}
case <-obj.workerCtx.Done():
break Loop
}
if e != nil {
// something errored... shutdown coming!
}
// event!
obj.mutex.Lock()
for k := range obj.state {
obj.state[k] = false // reset it all
}
obj.mutex.Unlock()
}
}()
return nil
}
// Shutdown cancels any running workers and waits for them to finish.
func (obj *Exporter) Shutdown() {
obj.workerCancel()
obj.workerWg.Wait()
}
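// exportAll is an illustrative sketch, not part of the original file. It shows
// the rough call order a caller might use: Export each resource from the
// committed graph, Prune after a graph swap, and Shutdown when everything is
// done. The Export method name and signature are assumed from its body above.
func exportAll(ctx context.Context, exporter *Exporter, graph *pgraph.Graph) error {
defer exporter.Shutdown() // cancel any optional Watch workers and wait
for _, v := range graph.Vertices() {
res, ok := v.(engine.Res)
if !ok {
continue
}
if _, err := exporter.Export(ctx, res); err != nil {
return err
}
}
// after a graph swap ("Commit"), drop exports that are no longer wanted
return exporter.Prune(ctx, graph)
}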


@@ -128,6 +128,21 @@ func SendRecv(res engine.RecvableRes, fn RecvFn) (map[engine.RecvableRes]map[str
}
if st == nil {
// This can happen if there is a send->recv between two
// resources where the producer does not send a value.
// This can happen for a few reasons. (1) If the
// programmer made a mistake and has a non-erroring
// CheckApply without a return. Note that it should send
// a value for the (true, nil) CheckApply cases too.
// (2) If the resource that's sending started off in the
// "good" state right at first run, and never produced a
// value to send. This may be a programming error, since
// the implementation must always either produce a value
// or accept that this error occurs. It could be a valid
// error if the resource was never meant to run in a
// situation where it has no initial value to send,
// whether cached or otherwise, but this scenario should
// be rare.
e := fmt.Errorf("received nil value from: %s", v.Res)
err = errwrap.Append(err, e) // list of errors
continue


@@ -228,7 +228,7 @@ func (obj *State) Init() error {
if !ok {
continue
}
// pass in information on requestor...
// pass in information on requester...
if err := r1.GraphQueryAllowed(
engine.GraphQueryableOptionKind(res.Kind()),
engine.GraphQueryableOptionName(res.Name()),
@@ -243,7 +243,7 @@ func (obj *State) Init() error {
if !ok {
continue
}
// pass in information on requestor...
// pass in information on requester...
if err := r2.GraphQueryAllowed(
engine.GraphQueryableOptionKind(res.Kind()),
engine.GraphQueryableOptionName(res.Name()),
@@ -430,3 +430,13 @@ func (obj *State) poll(ctx context.Context, interval uint32) error {
obj.init.Event() // notify engine of an event (this can block)
}
}
// hidden is a replacement for Watch when the Hidden metaparameter is used.
func (obj *State) hidden(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
select {
case <-ctx.Done(): // signal for shutdown request
return nil
}
}


@@ -144,7 +144,7 @@ func (obj *Value) ValueGet(ctx context.Context, key string) (interface{}, error)
var val interface{}
//var err error
if _, skip := obj.skipread[key]; skip {
if _, skip := obj.skipread[key]; !skip {
val, err = valueRead(ctx, prefix, key) // must return val == nil if missing
if err != nil {
// We had an actual read issue. Report this and stop
@@ -177,6 +177,16 @@ func (obj *Value) ValueSet(ctx context.Context, key string, value interface{}) e
obj.mutex.Lock()
defer obj.mutex.Unlock()
// If we're already in the correct state, then return early and *don't*
// send any events at the very end...
v, exists := obj.values[key]
if !exists && value == nil {
return nil // already in the correct state
}
if exists && v == value { // XXX: reflect.DeepEqual(v, value) ?
return nil // already in the correct state
}
// Write to state dir on disk first. If ctx cancels, we assume it's not
// written or it doesn't matter because we're cancelling, meaning we're
// shutting down, so our local cache can be invalidated anyways.


@@ -53,6 +53,8 @@ var DefaultMetaParams = &MetaParams{
Rewatch: false,
Realize: false, // true would be more awesome, but unexpected for users
Dollar: false,
Hidden: false,
Export: []string{},
}
// MetaRes is the interface a resource must implement to support meta params.
@@ -140,6 +142,33 @@ type MetaParams struct {
// interpolate a variable name. In the rare case when it's needed, you
// can disable that check with this meta param.
Dollar bool `yaml:"dollar"`
// Hidden means that this resource will not get executed on the resource
// graph on which it is defined. This can be used as a simple boolean
// switch, or, more commonly, in combination with the Export meta param
// which specifies that the resource params are exported into the shared
// database. Setting this to true does not prevent export; in fact, the
// two are commonly used together. A hidden resource is still included
// in the resource graph, but it exists there in a special "mode" where
// it will not conflict with any other identically named resources. It
// can even be used as part of an edge or as a send/recv receiver. It
// can NOT be a sending vertex. These properties differentiate this from
// simply wrapping a resource in an "if" statement. If it is hidden,
// then it does not need to pass the resource Validate method step.
// Export is a list of hostnames (and/or the special "*" entry) which,
// if set, will mark this resource data as intended for export to those
// hosts. This does not prevent any users of the shared data storage
// from reading these values, so if you want to guarantee secrecy, use
// the encryption primitives. This only labels the data accordingly, so
// that other hosts can know what data is available for them to collect.
// The (kind, name, host) export triple must be unique from any given
// exporter. In other words, you may not export two different instances
// of a kind+name to the same host; the exports must not conflict. On
// resource collect, this parameter is not preserved.
Export []string `yaml:"export"`
}
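// As an illustrative example (not from the original source), a resource that
// should be exported to every host but not executed locally would carry meta
// params roughly like this, using the fields documented above:
//
//	meta := DefaultMetaParams.Copy()
//	meta.Hidden = true          // don't run it on this graph...
//	meta.Export = []string{"*"} // ...but export it for any host to collect
//
// The special "*" entry means "all hosts", as described in the Export docs.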
// Cmp compares two MetaParams structs and determines if they're equivalent.
@@ -150,7 +179,7 @@ func (obj *MetaParams) Cmp(meta *MetaParams) error {
// XXX: add a one way cmp like we used to have ?
//if obj.Noop != meta.Noop {
// // obj is the existing res, res is the *new* resource
// // if we go from no-noop -> noop, we can re-use the obj
// // if we go from no-noop -> noop, we can reuse the obj
// // if we go from noop -> no-noop, we need to regenerate
// if obj.Noop { // asymmetrical
// return fmt.Errorf("values for Noop are different") // going from noop to no-noop!
@@ -189,6 +218,12 @@ func (obj *MetaParams) Cmp(meta *MetaParams) error {
if obj.Dollar != meta.Dollar {
return fmt.Errorf("values for Dollar are different")
}
if obj.Hidden != meta.Hidden {
return fmt.Errorf("values for Hidden are different")
}
if err := util.SortedStrSliceCompare(obj.Export, meta.Export); err != nil {
return errwrap.Wrapf(err, "values for Export are different")
}
return nil
}
@@ -208,6 +243,13 @@ func (obj *MetaParams) Validate() error {
}
}
for _, s := range obj.Export {
if s == "" {
return fmt.Errorf("export is empty")
}
}
// TODO: Should we validate the export patterns?
return nil
}
@@ -218,6 +260,11 @@ func (obj *MetaParams) Copy() *MetaParams {
sema = make([]string, len(obj.Sema))
copy(sema, obj.Sema)
}
export := []string{}
if obj.Export != nil {
export = make([]string, len(obj.Export))
copy(export, obj.Export)
}
return &MetaParams{
Noop: obj.Noop,
Retry: obj.Retry,
@@ -230,6 +277,8 @@ func (obj *MetaParams) Copy() *MetaParams {
Rewatch: obj.Rewatch,
Realize: obj.Realize,
Dollar: obj.Dollar,
Hidden: obj.Hidden,
Export: export,
}
}


@@ -95,6 +95,12 @@ func RegisteredResourcesNames() []string {
return kinds
}
// IsKind returns true if this is a valid resource kind.
func IsKind(kind string) bool {
_, ok := registeredResources[kind]
return ok
}
// NewResource returns an empty resource object from a registered kind. It
// errors if the resource kind doesn't exist.
func NewResource(kind string) (Res, error) {
@@ -202,6 +208,27 @@ type Init struct {
Logf func(format string, v ...interface{})
}
// Copy makes a copy of this Init struct, with all of the same elements inside.
func (obj *Init) Copy() *Init {
return &Init{
Program: obj.Program,
Version: obj.Version,
Hostname: obj.Hostname,
Running: obj.Running,
Event: obj.Event,
Refresh: obj.Refresh,
Send: obj.Send,
Recv: obj.Recv,
//Graph: obj.Graph, // TODO: not implemented, use FilteredGraph
FilteredGraph: obj.FilteredGraph,
Local: obj.Local,
World: obj.World,
VarDir: obj.VarDir,
Debug: obj.Debug,
Logf: obj.Logf,
}
}
// KindedRes is an interface that is required for a resource to have a kind.
type KindedRes interface {
// Kind returns a string representing the kind of resource this is.
@@ -274,8 +301,8 @@ func Stringer(res Res) string {
// the resource only. This was formerly a string, but a struct is more precise.
// The result is suitable as a unique map key.
type ResPtrUID struct {
kind string
name string
Kind string
Name string
}
// PtrUID generates a ResPtrUID from a resource. The result is suitable as a
@@ -283,7 +310,7 @@ type ResPtrUID struct {
func PtrUID(res Res) ResPtrUID {
// the use of "repr" is kind of arbitrary as long as it's unique
//return ResPtrUID(Repr(res.Kind(), res.Name()))
return ResPtrUID{kind: res.Kind(), name: res.Name()}
return ResPtrUID{Kind: res.Kind(), Name: res.Name()}
}
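// For illustration (not part of the original source): with exported,
// comparable fields, the struct can now be used directly as a map key, eg:
//
//	seen := make(map[ResPtrUID]struct{})
//	seen[PtrUID(res)] = struct{}{} // res is some engine.Res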
// Validate validates a resource by checking multiple aspects. This is the main
@@ -306,6 +333,12 @@ func Validate(res Res) error {
return fmt.Errorf("the Res name starts with a $")
}
// Don't need to validate normally if hidden.
// XXX: Check if it's also Exported too? len(res.MetaParams.Export) > 0
if res.MetaParams().Hidden {
return nil
}
return res.Validate()
}
@@ -370,12 +403,20 @@ type CompatibleRes interface {
Merge(CompatibleRes) (CompatibleRes, error)
}
// CollectableRes is an interface for resources that support collection. It is
// currently temporary until a proper API for all resources is invented.
type CollectableRes interface {
// ExportableRes allows the resource to have its own implementation of resource
// encoding, so that it can send data over the wire differently. It's unlikely
// that you will want to implement this interface for most scenarios. It may be
// useful to limit private data exposure, large data sizes, and to add more info
// to what would normally be shared.
type ExportableRes interface {
Res
CollectPattern(string) // XXX: temporary until Res collection is more advanced
// ToB64 lets the resource provide an alternative implementation of the
// usual ResToB64 method. This lets the resource omit, add, or modify
// the parameter data before it goes out over the wire.
ToB64() (string, error)
// TODO: Do we want to add a FromB64 method for decoding the Resource?
}
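// A minimal sketch (not from the original source) of a resource implementing
// this interface to omit private data before it goes over the wire. FooRes and
// its Password field are hypothetical; the fallback mirrors the
// engineUtil.ResToB64 helper used by the exporter shown earlier:
//
//	func (obj *FooRes) ToB64() (string, error) {
//		cp := *obj       // shallow copy of the resource
//		cp.Password = "" // don't export the secret
//		return engineUtil.ResToB64(&cp)
//	}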
// YAMLRes is a resource that supports creation by unmarshalling.

engine/resources/Makefile

@@ -0,0 +1,43 @@
# Mgmt
# Copyright (C) James Shubin and the project contributors
# Written by James Shubin <james@shubin.ca> and the project contributors
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#
# Additional permission under GNU GPL version 3 section 7
#
# If you modify this program, or any covered work, by linking or combining it
# with embedded mcl code and modules (and that the embedded mcl code and
# modules which link with this program, contain a copy of their source code in
# the authoritative form) containing parts covered by the terms of any other
# license, the licensors of this program grant you additional permission to
# convey the resulting work. Furthermore, the licensors of this program grant
# the original author, James Shubin, additional permission to update this
# additional permission if he deems it necessary to achieve the goals of this
# additional permission.
SHELL = bash
.PHONY: build clean
default: build
WASM_FILE = http_server_ui/main.wasm
build: $(WASM_FILE)
$(WASM_FILE): http_server_ui/main.go
@echo "Generating: wasm..."
cd http_server_ui/ && env GOOS=js GOARCH=wasm go build -o `basename $(WASM_FILE)`
clean:
@rm -f $(WASM_FILE) || true


@@ -148,7 +148,6 @@ func (obj *AugeasRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
if obj.init.Debug {
obj.init.Logf("Watching: %s", obj.File) // attempting to watch...
@@ -165,19 +164,14 @@ func (obj *AugeasRes) Watch(ctx context.Context) error {
if obj.init.Debug { // don't access event.Body if event.Error isn't nil
obj.init.Logf("Event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// checkApplySet runs CheckApply for one element of the AugeasRes.Set
func (obj *AugeasRes) checkApplySet(ctx context.Context, apply bool, ag *augeas.Augeas, set *AugeasSet) (bool, error) {


@@ -159,7 +159,6 @@ var AwsRegions = []string{
// http://docs.aws.amazon.com/cli/latest/userguide/cli-config-files.html
type AwsEc2Res struct {
traits.Base // add the base methods without re-implementation
traits.Sendable
init *engine.Init
@@ -193,7 +192,7 @@ type AwsEc2Res struct {
// UserData is used to run bash and cloud-init commands on first launch.
// See http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
// for documantation and examples.
// for documentation and examples.
UserData string `lang:"userdata" yaml:"userdata"`
client *ec2.EC2 // client session for AWS API calls
@@ -448,8 +447,6 @@ func (obj *AwsEc2Res) Watch(ctx context.Context) error {
// longpollWatch uses the ec2 api's built in methods to watch ec2 resource
// state.
func (obj *AwsEc2Res) longpollWatch(ctx context.Context) error {
send := false
// We tell the engine that we're running right away. This is not correct,
// but the api doesn't have a way to signal when the waiters are ready.
obj.init.Running() // when started, notify engine that we're running
@@ -528,19 +525,15 @@ func (obj *AwsEc2Res) longpollWatch(ctx context.Context) error {
continue
default:
obj.init.Logf("State: %v", msg.state)
send = true
}
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// snsWatch uses amazon's SNS and CloudWatchEvents APIs to get instance state-
// change notifications pushed to the http endpoint (snsServer) set up below. In
@@ -548,7 +541,6 @@ func (obj *AwsEc2Res) longpollWatch(ctx context.Context) error {
// it can publish to. snsWatch creates an http server which listens for messages
// published to the topic and processes them accordingly.
func (obj *AwsEc2Res) snsWatch(ctx context.Context) error {
send := false
defer obj.wg.Wait()
// create the sns listener
// closing is handled by http.Server.Shutdown in the defer func below
@@ -623,18 +615,14 @@ func (obj *AwsEc2Res) snsWatch(ctx context.Context) error {
continue
}
obj.init.Logf("State: %v", msg.event)
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply method for AwsEc2 resource.
func (obj *AwsEc2Res) CheckApply(ctx context.Context, apply bool) (bool, error) {


@@ -0,0 +1,512 @@
// Mgmt
// Copyright (C) James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//
// Additional permission under GNU GPL version 3 section 7
//
// If you modify this program, or any covered work, by linking or combining it
// with embedded mcl code and modules (and that the embedded mcl code and
// modules which link with this program, contain a copy of their source code in
// the authoritative form) containing parts covered by the terms of any other
// license, the licensors of this program grant you additional permission to
// convey the resulting work. Furthermore, the licensors of this program grant
// the original author, James Shubin, additional permission to update this
// additional permission if he deems it necessary to achieve the goals of this
// additional permission.
package resources
import (
"context"
"fmt"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
"github.com/purpleidea/mgmt/util/errwrap"
"github.com/cloudflare/cloudflare-go/v6"
"github.com/cloudflare/cloudflare-go/v6/dns"
"github.com/cloudflare/cloudflare-go/v6/option"
"github.com/cloudflare/cloudflare-go/v6/zones"
)
func init() {
engine.RegisterResource("cloudflare:dns", func() engine.Res { return &CloudflareDNSRes{} })
}
// CloudflareDNSRes is a resource for managing a single DNS record in a
// Cloudflare zone through the Cloudflare API. The API offers no event stream,
// so this resource must be used with the Poll metaparam.
type CloudflareDNSRes struct {
traits.Base
init *engine.Init
// APIToken is the Cloudflare API token used to authenticate.
APIToken string `lang:"apitoken"`
// Comment is an optional comment to attach to the record.
Comment string `lang:"comment"`
// Content is the value of the record, eg an IP address for an A record.
Content string `lang:"content"`
// Priority is the record priority, used by MX and SRV records. It is a
// *int64 to help with disambiguating nil (unset) values.
Priority *int64 `lang:"priority"`
// Proxied determines whether the record is proxied through Cloudflare.
// It is a *bool to help with disambiguating nil (unset) values.
Proxied *bool `lang:"proxied"`
// Purged enables the purge pass that runs before the record is
// managed. See purgeCheckApply for the details.
Purged bool `lang:"purged"`
// RecordName is the name of the DNS record, eg www.example.com.
RecordName string `lang:"record_name"`
// State is either "exists" or "absent". It defaults to "exists".
State string `lang:"state"`
// TTL is the record TTL in seconds. The special value 1 means
// "automatic".
TTL int64 `lang:"ttl"`
// Type is the DNS record type, eg A, AAAA, CNAME, MX, TXT, NS, SRV or
// PTR.
Type string `lang:"type"`
// Zone is the name of the zone that the record belongs to.
Zone string `lang:"zone"`
client *cloudflare.Client
zoneID string
}
func (obj *CloudflareDNSRes) Default() engine.Res {
return &CloudflareDNSRes{
State: "exists",
TTL: 1, // this sets TTL to automatic
}
}
func (obj *CloudflareDNSRes) Validate() error {
if obj.RecordName == "" {
return fmt.Errorf("record name is required")
}
if obj.APIToken == "" {
return fmt.Errorf("API token is required")
}
if obj.Type == "" {
return fmt.Errorf("record type is required")
}
if (obj.TTL < 60 || obj.TTL > 86400) && obj.TTL != 1 { // API requirement
return fmt.Errorf("TTL must be between 60s and 86400s, or set to 1")
}
if obj.Zone == "" {
return fmt.Errorf("zone name is required")
}
if obj.State != "exists" && obj.State != "absent" && obj.State != "" {
return fmt.Errorf("state must be either 'exists', 'absent', or empty")
}
if obj.State == "exists" && obj.Content == "" && !obj.Purged {
return fmt.Errorf("content is required when state is 'exists'")
}
if obj.MetaParams().Poll == 0 {
return fmt.Errorf("cloudflare:dns requiers polling, set Meta:poll param (e.g., 60 seconds)")
}
return nil
}
func (obj *CloudflareDNSRes) Init(init *engine.Init) error {
obj.init = init
obj.client = cloudflare.NewClient(
option.WithAPIToken(obj.APIToken),
)
// TODO: does it make more sense to check this here or in CheckApply()?
zoneListParams := zones.ZoneListParams{
Name: cloudflare.F(obj.Zone),
}
zoneList, err := obj.client.Zones.List(context.Background(), zoneListParams)
if err != nil {
return errwrap.Wrapf(err, "failed to list zones")
}
if len(zoneList.Result) == 0 {
return fmt.Errorf("zone %s not found", obj.Zone)
}
obj.zoneID = zoneList.Result[0].ID
return nil
}
func (obj *CloudflareDNSRes) Cleanup() error {
obj.APIToken = ""
obj.client = nil
obj.zoneID = ""
return nil
}
// Watch isn't implemented for this resource, since the Cloudflare API does not
// provide any event stream. Instead, always use polling.
func (obj *CloudflareDNSRes) Watch(context.Context) error {
return fmt.Errorf("invalid Watch call: requires poll metaparam")
}
func (obj *CloudflareDNSRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
zone, err := obj.client.Zones.List(ctx, zones.ZoneListParams{
Name: cloudflare.F(obj.Zone),
})
if err != nil {
return false, errwrap.Wrapf(err, "failed to list zones")
}
if len(zone.Result) == 0 {
return false, fmt.Errorf("there's no zone registered with name %s", obj.Zone)
}
if len(zone.Result) > 1 {
return false, fmt.Errorf("there's more than one zone with name %s", obj.Zone)
}
// We start by checking the need for purging
if obj.Purged {
checkOK, err := obj.purgeCheckApply(ctx, apply)
if err != nil {
return false, err
}
if !checkOK {
return false, nil
}
}
// List existing records
listParams := dns.RecordListParams{
ZoneID: cloudflare.F(obj.zoneID),
Name: cloudflare.F(obj.RecordName),
Type: cloudflare.F(dns.RecordListParamsType(obj.Type)),
}
recordList, err := obj.client.DNS.Records.List(ctx, listParams)
if err != nil {
return false, errwrap.Wrapf(err, "failed to list DNS records")
}
recordExists := len(recordList.Result) > 0
var record dns.Record
if recordExists {
record = recordList.Result[0]
}
switch obj.State {
case "exists", "":
if !recordExists {
if !apply {
return false, nil
}
if err := obj.createRecord(ctx); err != nil {
return false, err
}
return true, nil
}
if obj.needsUpdate(record) {
if !apply {
return false, nil
}
if err := obj.updateRecord(ctx, record.ID); err != nil {
return false, err
}
return true, nil
}
case "absent":
if recordExists {
if !apply {
return false, nil
}
deleteParams := dns.RecordDeleteParams{
ZoneID: cloudflare.F(obj.zoneID),
}
_, err := obj.client.DNS.Records.Delete(ctx, record.ID, deleteParams)
if err != nil {
return false, errwrap.Wrapf(err, "failed to delete DNS record")
}
return true, nil
}
}
return true, nil
}
func (obj *CloudflareDNSRes) Cmp(r engine.Res) error {
if obj == nil && r == nil {
return nil
}
if (obj == nil) != (r == nil) {
return fmt.Errorf("one resource is empty")
}
res, ok := r.(*CloudflareDNSRes)
if !ok {
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.APIToken != res.APIToken {
return fmt.Errorf("apitoken differs")
}
// Proxied is a pointer, so compare the values it points to, not the
// pointer addresses themselves.
if (obj.Proxied == nil) != (res.Proxied == nil) {
return fmt.Errorf("proxied values differ")
}
if obj.Proxied != nil && res.Proxied != nil && *obj.Proxied != *res.Proxied {
return fmt.Errorf("proxied values differ")
}
if obj.RecordName != res.RecordName {
return fmt.Errorf("record name differs")
}
if obj.Purged != res.Purged {
return fmt.Errorf("purge value differs")
}
if obj.State != res.State {
return fmt.Errorf("state differs")
}
if obj.TTL != res.TTL {
return fmt.Errorf("ttl differs")
}
if obj.Type != res.Type {
return fmt.Errorf("record type differs")
}
if obj.Zone != res.Zone {
return fmt.Errorf("zone differs")
}
if obj.zoneID != res.zoneID {
return fmt.Errorf("zoneid differs")
}
if obj.Content != res.Content {
return fmt.Errorf("content param differs")
}
// Priority is a pointer, so compare the values it points to, not the
// pointer addresses themselves.
if (obj.Priority == nil) != (res.Priority == nil) {
return fmt.Errorf("the priority param differs")
}
if obj.Priority != nil && res.Priority != nil && *obj.Priority != *res.Priority {
return fmt.Errorf("the priority param differs")
}
return nil
}
func (obj *CloudflareDNSRes) buildRecordParam() dns.RecordNewParamsBodyUnion {
ttl := dns.TTL(obj.TTL)
switch obj.Type {
case "A":
param := dns.ARecordParam{
Name: cloudflare.F(obj.RecordName),
Type: cloudflare.F(dns.ARecordTypeA),
Content: cloudflare.F(obj.Content),
TTL: cloudflare.F(ttl),
}
if obj.Proxied != nil {
param.Proxied = cloudflare.F(*obj.Proxied)
}
if obj.Comment != "" {
param.Comment = cloudflare.F(obj.Comment)
}
return param
case "AAAA":
param := dns.AAAARecordParam{
Name: cloudflare.F(obj.RecordName),
Type: cloudflare.F(dns.AAAARecordTypeAAAA),
Content: cloudflare.F(obj.Content),
TTL: cloudflare.F(ttl),
}
if obj.Proxied != nil {
param.Proxied = cloudflare.F(*obj.Proxied)
}
if obj.Comment != "" {
param.Comment = cloudflare.F(obj.Comment)
}
return param
case "CNAME":
param := dns.CNAMERecordParam{
Name: cloudflare.F(obj.RecordName),
Type: cloudflare.F(dns.CNAMERecordTypeCNAME),
Content: cloudflare.F(obj.Content),
TTL: cloudflare.F(ttl),
}
if obj.Proxied != nil {
param.Proxied = cloudflare.F(*obj.Proxied)
}
if obj.Comment != "" {
param.Comment = cloudflare.F(obj.Comment)
}
return param
case "MX":
param := dns.MXRecordParam{
Name: cloudflare.F(obj.RecordName),
Type: cloudflare.F(dns.MXRecordTypeMX),
Content: cloudflare.F(obj.Content),
TTL: cloudflare.F(ttl),
}
if obj.Proxied != nil {
param.Proxied = cloudflare.F(*obj.Proxied)
}
if obj.Priority != nil { // required for MX record
param.Priority = cloudflare.F(*obj.Priority)
}
if obj.Comment != "" {
param.Comment = cloudflare.F(obj.Comment)
}
return param
case "TXT":
param := dns.TXTRecordParam{
Name: cloudflare.F(obj.RecordName),
Type: cloudflare.F(dns.TXTRecordTypeTXT),
Content: cloudflare.F(obj.Content),
TTL: cloudflare.F(ttl),
}
if obj.Proxied != nil {
param.Proxied = cloudflare.F(*obj.Proxied)
}
if obj.Comment != "" {
param.Comment = cloudflare.F(obj.Comment)
}
return param
case "NS":
param := dns.NSRecordParam{
Name: cloudflare.F(obj.RecordName),
Type: cloudflare.F(dns.NSRecordTypeNS),
Content: cloudflare.F(obj.Content),
TTL: cloudflare.F(ttl),
}
if obj.Proxied != nil {
param.Proxied = cloudflare.F(*obj.Proxied)
}
if obj.Comment != "" {
param.Comment = cloudflare.F(obj.Comment)
}
return param
case "SRV":
param := dns.SRVRecordParam{
Name: cloudflare.F(obj.RecordName),
Type: cloudflare.F(dns.SRVRecordTypeSRV),
Content: cloudflare.F(obj.Content),
TTL: cloudflare.F(ttl),
}
if obj.Proxied != nil {
param.Proxied = cloudflare.F(*obj.Proxied)
}
if obj.Priority != nil {
param.Priority = cloudflare.F(*obj.Priority)
}
if obj.Comment != "" {
param.Comment = cloudflare.F(obj.Comment)
}
return param
case "PTR":
param := dns.PTRRecordParam{
Name: cloudflare.F(obj.RecordName),
Type: cloudflare.F(dns.PTRRecordTypePTR),
Content: cloudflare.F(obj.Content),
TTL: cloudflare.F(ttl),
}
if obj.Proxied != nil {
param.Proxied = cloudflare.F(*obj.Proxied)
}
if obj.Comment != "" {
param.Comment = cloudflare.F(obj.Comment)
}
return param
default:
// TODO: we should return a better error to the caller for
// unsupported record types; for now return nil and let the API
// call fail with a useful message.
return nil
}
}
func (obj *CloudflareDNSRes) createRecord(ctx context.Context) error {
recordParams := obj.buildRecordParam()
createParams := dns.RecordNewParams{
ZoneID: cloudflare.F(obj.zoneID),
Body: recordParams,
}
_, err := obj.client.DNS.Records.New(ctx, createParams)
if err != nil {
return errwrap.Wrapf(err, "failed to create dns record")
}
return nil
}
func (obj *CloudflareDNSRes) updateRecord(ctx context.Context, recordID string) error {
recordParams := obj.buildRecordParam()
editParams := dns.RecordEditParams{
ZoneID: cloudflare.F(obj.zoneID),
Body: recordParams,
}
_, err := obj.client.DNS.Records.Edit(ctx, recordID, editParams)
if err != nil {
return errwrap.Wrapf(err, "failed to update dns record")
}
return nil
}
func (obj *CloudflareDNSRes) needsUpdate(record dns.Record) bool {
if obj.Content != record.Content {
return true
}
if obj.TTL != int64(record.TTL) {
return true
}
if obj.Proxied != nil && record.Proxied != nil {
if *obj.Proxied != *record.Proxied {
return true
}
}
if obj.Priority != nil && record.Priority != nil {
if *obj.Priority != *record.Priority {
return true
}
}
if obj.Comment != record.Comment {
return true
}
// TODO add more checks?
return false
}


@@ -65,6 +65,8 @@ type ConfigEtcdRes struct {
// IdealClusterSize to zero.
AllowSizeShutdown bool `lang:"allow_size_shutdown"`
world engine.EtcdWorld
// sizeFlag determines whether sizeCheckApply already ran or not.
sizeFlag bool
@@ -93,6 +95,12 @@ func (obj *ConfigEtcdRes) Validate() error {
func (obj *ConfigEtcdRes) Init(init *engine.Init) error {
obj.init = init // save for later
world, ok := obj.init.World.(engine.EtcdWorld)
if !ok {
return fmt.Errorf("world backend does not support the EtcdWorld interface")
}
obj.world = world
obj.interruptChan = make(chan struct{})
return nil
@@ -109,7 +117,7 @@ func (obj *ConfigEtcdRes) Watch(ctx context.Context) error {
defer wg.Wait()
innerCtx, cancel := context.WithCancel(ctx)
defer cancel()
ch, err := obj.init.World.IdealClusterSizeWatch(util.CtxWithWg(innerCtx, wg))
ch, err := obj.world.IdealClusterSizeWatch(util.CtxWithWg(innerCtx, wg))
if err != nil {
return errwrap.Wrapf(err, "could not watch ideal cluster size")
}
@@ -158,7 +166,7 @@ func (obj *ConfigEtcdRes) sizeCheckApply(ctx context.Context, apply bool) (bool,
}
}()
val, err := obj.init.World.IdealClusterSizeGet(ctx)
val, err := obj.world.IdealClusterSizeGet(ctx)
if err != nil {
return false, errwrap.Wrapf(err, "could not get ideal cluster size")
}
@@ -181,7 +189,7 @@ func (obj *ConfigEtcdRes) sizeCheckApply(ctx context.Context, apply bool) (bool,
// set!
// This is run as a transaction so we detect if we needed to change it.
changed, err := obj.init.World.IdealClusterSizeSet(ctx, obj.IdealClusterSize)
changed, err := obj.world.IdealClusterSizeSet(ctx, obj.IdealClusterSize)
if err != nil {
return false, errwrap.Wrapf(err, "could not set ideal cluster size")
}


@@ -142,7 +142,7 @@ type CronRes struct {
WakeSystem bool `lang:"wakesystem" yaml:"wakesystem"`
// RemainAfterElapse, if true, means an elapsed timer will stay loaded,
// and its state remains queriable. If false, an elapsed timer unit that
// and its state remains queryable. If false, an elapsed timer unit that
// cannot elapse anymore is unloaded. It defaults to true.
RemainAfterElapse bool `lang:"remainafterelapse" yaml:"remainafterelapse"`
@@ -271,7 +271,7 @@ func (obj *CronRes) Watch(ctx context.Context) error {
//args = append(args, "eavesdrop='true'") // XXX: not allowed anymore?
args = append(args, fmt.Sprintf("arg2='%s.timer'", obj.Name()))
// match dbus messsages
// match dbus messages
if call := bus.BusObject().Call(engineUtil.DBusAddMatch, 0, strings.Join(args, ",")); call.Err != nil {
return call.Err
}
@@ -296,7 +296,6 @@ func (obj *CronRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
select {
case event := <-dbusChan:
@@ -304,7 +303,6 @@ func (obj *CronRes) Watch(ctx context.Context) error {
if obj.init.Debug {
obj.init.Logf("%+v", event)
}
send = true
case event, ok := <-obj.recWatcher.Events():
// process unit file recwatch events
@@ -317,18 +315,14 @@ func (obj *CronRes) Watch(ctx context.Context) error {
if obj.init.Debug {
obj.init.Logf("Event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply is run to check the state and, if apply is true, to apply the
// necessary changes to reach the desired state. This is run before Watch and


@@ -158,7 +158,6 @@ func (obj *DeployTar) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
select {
case event, ok := <-recWatcher.Events():
@@ -174,19 +173,14 @@ func (obj *DeployTar) Watch(ctx context.Context) error {
if obj.init.Debug { // don't access event.Body if event.Error isn't nil
obj.init.Logf("event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply checks the resource state and applies the resource if the bool
// input is true. It returns error info and if the state check passed or not.


@@ -514,7 +514,6 @@ func (obj *DHCPServerRes) Watch(ctx context.Context) error {
startupChan := make(chan struct{})
close(startupChan) // send one initial signal
var send = false // send event?
for {
if obj.init.Debug {
obj.init.Logf("Looping...")
@@ -523,7 +522,6 @@ func (obj *DHCPServerRes) Watch(ctx context.Context) error {
select {
case <-startupChan:
startupChan = nil
send = true
case <-closeSignal: // something shut us down early
return closeError
@@ -532,13 +530,9 @@ func (obj *DHCPServerRes) Watch(ctx context.Context) error {
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// sidCheckApply runs the server ID cache operation in CheckApply, which can
// help CheckApply fail before the handler runs, so at least we see an error.
@@ -1863,7 +1857,7 @@ func (obj *DHCPRangeRes) handler4(data *HostData) (func(*dhcpv4.DHCPv4, *dhcpv4.
// FIXME: Run this somewhere for now, eventually it should get scheduled
// to run in the returned duration of time. This way, it would clean old
// peristed entries when they're stale, not when a new request comes in.
// persisted entries when they're stale, not when a new request comes in.
if _, err := obj.leaseClean(); err != nil {
return nil, errwrap.Wrapf(err, "clean error")
}


@@ -37,7 +37,7 @@ import (
"io"
"regexp"
"strings"
"time"
"sync"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
@@ -47,8 +47,8 @@ import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/client"
dockerImage "github.com/docker/docker/api/types/image"
dockerClient "github.com/docker/docker/client"
"github.com/docker/go-connections/nat"
)
@@ -59,13 +59,6 @@ const (
ContainerStopped = "stopped"
// ContainerRemoved is the removed container state.
ContainerRemoved = "removed"
// initCtxTimeout is the length of time, in seconds, before requests are
// cancelled in Init.
initCtxTimeout = 20
// checkApplyCtxTimeout is the length of time, in seconds, before
// requests are cancelled in CheckApply.
checkApplyCtxTimeout = 120
)
func init() {
@@ -89,7 +82,9 @@ type DockerContainerRes struct {
// Env is a list of environment variables. E.g. ["VAR=val",].
Env []string `lang:"env" yaml:"env"`
// Ports is a map of port bindings. E.g. {"tcp" => {80 => 8080},}.
// Ports is a map of port bindings. E.g. {"tcp" => {8080 => 80},}. The
// key is the host port, and the val is the inner service port to
// forward to.
Ports map[string]map[int64]int64 `lang:"ports" yaml:"ports"`
// APIVersion allows you to override the host's default client API
@@ -100,9 +95,14 @@ type DockerContainerRes struct {
// image is incorrect.
Force bool `lang:"force" yaml:"force"`
client *client.Client // docker api client
init *engine.Init
client *dockerClient.Client // docker api client
once *sync.Once
start chan struct{} // closes by once
sflag bool // first time happened?
ready chan struct{} // closes by once
}
// Default returns some sensible defaults for this resource.
@@ -159,44 +159,69 @@ func (obj *DockerContainerRes) Validate() error {
// Init runs some startup code for this resource.
func (obj *DockerContainerRes) Init(init *engine.Init) error {
var err error
obj.init = init // save for later
ctx, cancel := context.WithTimeout(context.Background(), initCtxTimeout*time.Second)
defer cancel()
obj.once = &sync.Once{}
obj.start = make(chan struct{})
obj.ready = make(chan struct{})
// Initialize the docker client.
obj.client, err = client.NewClientWithOpts(client.WithVersion(obj.APIVersion))
if err != nil {
return errwrap.Wrapf(err, "error creating docker client")
}
// Validate the image.
resp, err := obj.client.ImageSearch(ctx, obj.Image, types.ImageSearchOptions{Limit: 1})
if err != nil {
return errwrap.Wrapf(err, "error searching for image")
}
if len(resp) == 0 {
return fmt.Errorf("image: %s not found", obj.Image)
}
return nil
}
// Cleanup is run by the engine to clean up after the resource is done.
func (obj *DockerContainerRes) Cleanup() error {
return obj.client.Close() // close the docker client
return nil
}
// Watch is the primary listener for this resource and it outputs events.
func (obj *DockerContainerRes) Watch(ctx context.Context) error {
innerCtx, cancel := context.WithCancel(context.Background())
defer cancel()
var client *dockerClient.Client
var err error
eventChan, errChan := obj.client.Events(innerCtx, types.EventsOptions{})
for {
client, err = dockerClient.NewClientWithOpts(dockerClient.WithVersion(obj.APIVersion))
if err == nil {
// the above won't check the connection, force that here
_, err = client.Ping(ctx)
}
if err == nil {
break
}
// If we didn't connect right away, it might be because we're
// waiting for someone to install the docker package, and start
// the service. We might even have an edge between this resource
// and those dependencies, but that doesn't stop this Watch from
// starting up. As a result, we will wait *once* for CheckApply
// to unlock us, since that runs in dependency order.
// This error looks like: Cannot connect to the Docker daemon at
// unix:///var/run/docker.sock. Is the docker daemon running?
if dockerClient.IsErrConnectionFailed(err) && !obj.sflag {
// notify engine that we're running so that CheckApply
// can start...
obj.init.Running()
select {
case <-obj.start:
obj.sflag = true
continue
obj.init.Running() // when started, notify engine that we're running
case <-ctx.Done(): // don't block
close(obj.ready) // tell CheckApply to unblock!
return nil
}
}
close(obj.ready) // tell CheckApply to unblock!
return errwrap.Wrapf(err, "error creating docker client")
}
defer client.Close() // success, so close it later
eventChan, errChan := client.Events(ctx, types.EventsOptions{})
close(obj.ready) // tell CheckApply to start now that events are running
// notify engine that we're running
if !obj.sflag {
obj.init.Running()
}
var send = false // send event?
for {
select {
case event, ok := <-eventChan:
@@ -206,7 +231,6 @@ func (obj *DockerContainerRes) Watch(ctx context.Context) error {
if obj.init.Debug {
obj.init.Logf("%+v", event)
}
send = true
case err, ok := <-errChan:
if !ok {
@@ -218,21 +242,40 @@ func (obj *DockerContainerRes) Watch(ctx context.Context) error {
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply method for Docker resource.
func (obj *DockerContainerRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
obj.once.Do(func() { close(obj.start) }) // Tell Watch() it's safe to start again.
// Now wait to make sure events are started before we make changes!
select {
case <-obj.ready:
case <-ctx.Done(): // don't block
return false, ctx.Err()
}
var id string
var destroy bool
var err error
ctx, cancel := context.WithTimeout(ctx, checkApplyCtxTimeout*time.Second)
defer cancel()
// Initialize the docker client.
obj.client, err = dockerClient.NewClientWithOpts(dockerClient.WithVersion(obj.APIVersion))
if err != nil {
return false, errwrap.Wrapf(err, "error creating docker client")
}
defer obj.client.Close() // close the docker client
// Validate the image.
resp, err := obj.client.ImageSearch(ctx, obj.Image, types.ImageSearchOptions{Limit: 1})
if err != nil {
return false, errwrap.Wrapf(err, "error searching for image")
}
if len(resp) == 0 {
return false, fmt.Errorf("image: %s not found", obj.Image)
}
// List any container whose name matches this resource.
opts := container.ListOptions{
@@ -247,7 +290,9 @@ func (obj *DockerContainerRes) CheckApply(ctx context.Context, apply bool) (bool
if len(containerList) > 1 {
return false, fmt.Errorf("more than one container named %s", obj.Name())
}
if len(containerList) == 0 && obj.State == ContainerRemoved {
// NOTE: If container doesn't exist, we might as well accept "stopped"
// as valid for now, at least until we rewrite this horrible code.
if len(containerList) == 0 && (obj.State == ContainerRemoved || obj.State == ContainerStopped) {
return true, nil
}
if len(containerList) == 1 {
@@ -268,6 +313,8 @@ func (obj *DockerContainerRes) CheckApply(ctx context.Context, apply bool) (bool
}
}
// XXX: Check if defined ports matches what we expect.
if !apply {
return false, nil
}
@@ -295,7 +342,7 @@ func (obj *DockerContainerRes) CheckApply(ctx context.Context, apply bool) (bool
if len(containerList) == 0 { // no container was found
// Download the specified image if it doesn't exist locally.
p, err := obj.client.ImagePull(ctx, obj.Image, image.PullOptions{})
p, err := obj.client.ImagePull(ctx, obj.Image, dockerImage.PullOptions{})
if err != nil {
return false, errwrap.Wrapf(err, "error pulling image")
}
@@ -316,15 +363,25 @@ func (obj *DockerContainerRes) CheckApply(ctx context.Context, apply bool) (bool
PortBindings: make(map[nat.Port][]nat.PortBinding),
}
for k, v := range obj.Ports {
for proto, v := range obj.Ports {
// On the outside, on the host, we'd see 8080 which is p
// and on the inside, the container would have something
// running on 80, which is q.
for p, q := range v {
containerConfig.ExposedPorts[nat.Port(k)] = struct{}{}
hostConfig.PortBindings[nat.Port(fmt.Sprintf("%d/%s", p, k))] = []nat.PortBinding{
{
// Port is a string containing port number and
// protocol in the format "80/tcp".
port := fmt.Sprintf("%d/%s", q, proto)
n := nat.Port(port)
containerConfig.ExposedPorts[n] = struct{}{} // PortSet
pb := nat.PortBinding{
HostIP: "0.0.0.0",
HostPort: fmt.Sprintf("%d", q),
},
HostPort: fmt.Sprintf("%d", p), // eg: 8080
}
if _, exists := hostConfig.PortBindings[n]; !exists {
hostConfig.PortBindings[n] = []nat.PortBinding{}
}
hostConfig.PortBindings[n] = append(hostConfig.PortBindings[n], pb)
}
}
@@ -340,6 +397,7 @@ func (obj *DockerContainerRes) CheckApply(ctx context.Context, apply bool) (bool
// containerStart starts the specified container, and waits for it to start.
func (obj *DockerContainerRes) containerStart(ctx context.Context, id string, opts container.StartOptions) error {
obj.init.Logf("starting...")
// Get an events channel for the container we're about to start.
eventOpts := types.EventsOptions{
Filters: filters.NewArgs(filters.KeyValuePair{Key: "container", Value: id}),
@@ -350,6 +408,7 @@ func (obj *DockerContainerRes) containerStart(ctx context.Context, id string, op
return errwrap.Wrapf(err, "error starting container")
}
// Wait for a message on eventChan that says the container has started.
// TODO: Should we add ctx here or does cancelling above guarantee exit?
select {
case event := <-eventCh:
if event.Status != "start" {
@@ -363,11 +422,13 @@ func (obj *DockerContainerRes) containerStart(ctx context.Context, id string, op
// containerStop stops the specified container and waits for it to stop.
func (obj *DockerContainerRes) containerStop(ctx context.Context, id string, timeout *int) error {
obj.init.Logf("stopping...")
ch, errCh := obj.client.ContainerWait(ctx, id, container.WaitConditionNotRunning)
stopOpts := container.StopOptions{
Timeout: timeout,
}
obj.client.ContainerStop(ctx, id, stopOpts)
// TODO: Should we add ctx here or does cancelling above guarantee exit?
select {
case <-ch:
case err := <-errCh:
@@ -379,8 +440,10 @@ func (obj *DockerContainerRes) containerStop(ctx context.Context, id string, tim
// containerRemove removes the specified container and waits for it to be
// removed.
func (obj *DockerContainerRes) containerRemove(ctx context.Context, id string, opts container.RemoveOptions) error {
obj.init.Logf("removing...")
ch, errCh := obj.client.ContainerWait(ctx, id, container.WaitConditionRemoved)
obj.client.ContainerRemove(ctx, id, opts)
// TODO: Should we add ctx here or does cancelling above guarantee exit?
select {
case <-ch:
case err := <-errCh:
@@ -407,7 +470,7 @@ func (obj *DockerContainerRes) Cmp(r engine.Res) error {
return errwrap.Wrapf(err, "the Cmd field differs")
}
if err := util.SortedStrSliceCompare(obj.Env, res.Env); err != nil {
return errwrap.Wrapf(err, "tne Env field differs")
return errwrap.Wrapf(err, "the Env field differs")
}
if len(obj.Ports) != len(res.Ports) {
return fmt.Errorf("the Ports length differs")
@@ -461,7 +524,7 @@ func (obj *DockerContainerRes) AutoEdges() (engine.AutoEdge, error) {
}, nil
}
// Next returnes the next automatic edge.
// Next returns the next automatic edge.
func (obj *DockerContainerResAutoEdges) Next() []engine.ResUID {
if len(obj.UIDs) == 0 {
return nil


@@ -37,27 +37,18 @@ import (
"io"
"regexp"
"strings"
"time"
"sync"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/client"
dockerImage "github.com/docker/docker/api/types/image"
dockerClient "github.com/docker/docker/client"
errwrap "github.com/pkg/errors"
)
const (
// dockerImageInitCtxTimeout is the length of time, in seconds, before
// requests are cancelled in Init.
dockerImageInitCtxTimeout = 20
// dockerImageCheckApplyCtxTimeout is the length of time, in seconds,
// before requests are cancelled in CheckApply.
dockerImageCheckApplyCtxTimeout = 120
)
func init() {
engine.RegisterResource("docker:image", func() engine.Res { return &DockerImageRes{} })
}
@@ -75,10 +66,12 @@ type DockerImageRes struct {
// version.
APIVersion string `lang:"apiversion" yaml:"apiversion"`
image string // full image:tag format
client *client.Client // docker api client
init *engine.Init
once *sync.Once
start chan struct{} // closes by once
sflag bool // first time happened?
ready chan struct{} // closes by once
}
// Default returns some sensible defaults for this resource.
@@ -113,48 +106,69 @@ func (obj *DockerImageRes) Validate() error {
// Init runs some startup code for this resource.
func (obj *DockerImageRes) Init(init *engine.Init) error {
var err error
obj.init = init // save for later
// Save the full image name and tag.
obj.image = dockerImageNameTag(obj.Name())
obj.once = &sync.Once{}
obj.start = make(chan struct{})
obj.ready = make(chan struct{})
ctx, cancel := context.WithTimeout(context.Background(), dockerImageInitCtxTimeout*time.Second)
defer cancel()
// Initialize the docker client.
obj.client, err = client.NewClientWithOpts(client.WithVersion(obj.APIVersion))
if err != nil {
return errwrap.Wrapf(err, "error creating docker client")
}
// Validate the image.
resp, err := obj.client.ImageSearch(ctx, obj.image, types.ImageSearchOptions{Limit: 1})
if err != nil {
return errwrap.Wrapf(err, "error searching for image")
}
if len(resp) == 0 {
return fmt.Errorf("image: %s not found", obj.image)
}
return nil
}
// Cleanup is run by the engine to clean up after the resource is done.
func (obj *DockerImageRes) Cleanup() error {
return obj.client.Close() // close the docker client
return nil
}
// Watch is the primary listener for this resource and it outputs events.
func (obj *DockerImageRes) Watch(ctx context.Context) error {
innerCtx, cancel := context.WithCancel(context.Background())
defer cancel()
var client *dockerClient.Client
var err error
eventChan, errChan := obj.client.Events(innerCtx, types.EventsOptions{})
for {
client, err = dockerClient.NewClientWithOpts(dockerClient.WithVersion(obj.APIVersion))
if err == nil {
// the above won't check the connection, force that here
_, err = client.Ping(ctx)
}
if err == nil {
break
}
// If we didn't connect right away, it might be because we're
// waiting for someone to install the docker package, and start
// the service. We might even have an edge between this resource
// and those dependencies, but that doesn't stop this Watch from
// starting up. As a result, we will wait *once* for CheckApply
// to unlock us, since that runs in dependency order.
// This error looks like: Cannot connect to the Docker daemon at
// unix:///var/run/docker.sock. Is the docker daemon running?
if dockerClient.IsErrConnectionFailed(err) && !obj.sflag {
// notify engine that we're running so that CheckApply
// can start...
obj.init.Running()
select {
case <-obj.start:
obj.sflag = true
continue
case <-ctx.Done(): // don't block
close(obj.ready) // tell CheckApply to unblock!
return nil
}
}
close(obj.ready) // tell CheckApply to unblock!
return errwrap.Wrapf(err, "error creating docker client")
}
defer client.Close() // success, so close it later
eventChan, errChan := client.Events(ctx, types.EventsOptions{})
close(obj.ready) // tell CheckApply to start now that events are running
// notify engine that we're running
if !obj.sflag {
obj.init.Running()
}
var send = false // send event?
for {
select {
case event, ok := <-eventChan:
@@ -164,7 +178,6 @@ func (obj *DockerImageRes) Watch(ctx context.Context) error {
if obj.init.Debug {
obj.init.Logf("%+v", event)
}
send = true
case err, ok := <-errChan:
if !ok {
@@ -176,21 +189,42 @@ func (obj *DockerImageRes) Watch(ctx context.Context) error {
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply method for Docker resource.
func (obj *DockerImageRes) CheckApply(ctx context.Context, apply bool) (checkOK bool, err error) {
ctx, cancel := context.WithTimeout(ctx, dockerImageCheckApplyCtxTimeout*time.Second)
defer cancel()
s, err := obj.client.ImageList(ctx, image.ListOptions{
Filters: filters.NewArgs(filters.Arg("reference", obj.image)),
obj.once.Do(func() { close(obj.start) }) // Tell Watch() it's safe to start again.
// Now wait to make sure events are started before we make changes!
select {
case <-obj.ready:
case <-ctx.Done(): // don't block
return false, ctx.Err()
}
// Save the full image name and tag.
image := dockerImageNameTag(obj.Name())
// Initialize the docker client.
client, err := dockerClient.NewClientWithOpts(dockerClient.WithVersion(obj.APIVersion))
if err != nil {
return false, errwrap.Wrapf(err, "error creating docker client")
}
defer client.Close()
// Validate the image.
resp, err := client.ImageSearch(ctx, image, types.ImageSearchOptions{Limit: 1})
if err != nil {
return false, errwrap.Wrapf(err, "error searching for image")
}
if len(resp) == 0 {
return false, fmt.Errorf("image: %s not found", image)
}
s, err := client.ImageList(ctx, dockerImage.ListOptions{
Filters: filters.NewArgs(filters.Arg("reference", image)),
})
if err != nil {
return false, errwrap.Wrapf(err, "error listing images")
@@ -211,15 +245,17 @@ func (obj *DockerImageRes) CheckApply(ctx context.Context, apply bool) (checkOK
}
if obj.State == "absent" {
obj.init.Logf("removing...")
// TODO: force? prune children?
if _, err := obj.client.ImageRemove(ctx, obj.image, image.RemoveOptions{}); err != nil {
if _, err := client.ImageRemove(ctx, image, dockerImage.RemoveOptions{}); err != nil {
return false, errwrap.Wrapf(err, "error removing image")
}
return false, nil
}
// pull the image
p, err := obj.client.ImagePull(ctx, obj.image, image.PullOptions{})
obj.init.Logf("pulling...")
p, err := client.ImagePull(ctx, image, dockerImage.PullOptions{})
if err != nil {
return false, errwrap.Wrapf(err, "error pulling image")
}


@@ -38,6 +38,7 @@ import (
"os"
"os/exec"
"os/user"
"path"
"sort"
"strings"
"sync"
@@ -56,6 +57,12 @@ func init() {
}
// ExecRes is an exec resource for running commands.
//
// This resource attempts to minimise the effects of the execution environment,
// and, in particular, will start the new process with an empty environment (as
// would `execve` with an empty `envp` array). If you want the environment to
// inherit the mgmt process' environment, you can import it from "sys" and use
// it with `env => sys.env()` in your exec resource.
type ExecRes struct {
traits.Base // add the base methods without re-implementation
traits.Edgeable
@@ -90,7 +97,9 @@ type ExecRes struct {
Cwd string `lang:"cwd" yaml:"cwd"`
// Shell is the (optional) shell to use to run the cmd. If you specify
// this, then you can't use the Args parameter.
// this, then you can't use the Args parameter. Note that unless you
// use absolute paths, or set the PATH variable, the shell might not be
// able to find the program you're trying to run.
Shell string `lang:"shell" yaml:"shell"`
// Timeout is the number of seconds to wait before sending a Kill to the
@@ -99,7 +108,9 @@ type ExecRes struct {
Timeout uint64 `lang:"timeout" yaml:"timeout"`
// Env allows the user to specify environment variables for script
// execution. These are taken using a map of format of VAR_NAME -> value.
// execution. These are taken as a map in the format VAR_KEY -> value.
// Omitting this value or setting it to an empty array will cause the
// program to be run with an empty environment.
Env map[string]string `lang:"env" yaml:"env"`
// WatchCmd is the command to run to detect event changes. Each line of
@@ -109,6 +120,9 @@ type ExecRes struct {
// WatchCwd is the Cwd for the WatchCmd. See the docs for Cwd.
WatchCwd string `lang:"watchcwd" yaml:"watchcwd"`
// WatchFiles is a list of files that will be kept track of.
WatchFiles []string `lang:"watchfiles" yaml:"watchfiles"`
// WatchShell is the Shell for the WatchCmd. See the docs for Shell.
WatchShell string `lang:"watchshell" yaml:"watchshell"`
@@ -124,6 +138,13 @@ type ExecRes struct {
// IfShell is the Shell for the IfCmd. See the docs for Shell.
IfShell string `lang:"ifshell" yaml:"ifshell"`
// IfEquals specifies that if the ifcmd returns zero, and its output
// matches this string, then it will guard against the Cmd running.
// This can be the empty string. Remember to take into account whether
// the output includes a trailing newline or not. (Hint: it usually
// does!)
IfEquals *string `lang:"ifequals" yaml:"ifequals"`
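// Illustrative sketch, not part of this diff: combining IfCmd with IfEquals so
// the main Cmd only runs when the current value differs from the desired one.
// The commands and the expected string are assumptions for illustration; note
// the trailing newline, since most commands terminate their output with one.
func exampleIfEquals() *ExecRes {
	expected := "enabled\n" // hypothetical desired output of the IfCmd
	return &ExecRes{
		Cmd:      "/usr/bin/systemctl enable foo.service",     // hypothetical
		IfCmd:    "/usr/bin/systemctl is-enabled foo.service", // hypothetical
		IfEquals: &expected, // a zero exit and matching output skip Cmd
	}
}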
// Creates is the absolute file path to check for before running the
// main cmd. If this path exists, then the cmd will not run. More
// precisely we attempt to `stat` the file, so it must succeed for a
@@ -151,10 +172,28 @@ type ExecRes struct {
// used for any command being run.
Group string `lang:"group" yaml:"group"`
// SendOutput is a value which can be sent for the Send/Recv Output
// field if no value is available in the cache. This is used in very
// specialized scenarios (particularly prototyping and unclean
// environments) and should not be used routinely. It should be used
// only in situations where we didn't produce our own sending values,
// and there are none in the cache, and instead are relying on a runtime
// mechanism to help us out. This can commonly occur if you wish to make
// incremental progress when locally testing some code using Send/Recv,
// but you are combining it with --tmp-prefix for other reasons.
SendOutput *string `lang:"send_output" yaml:"send_output"`
// SendStdout is like SendOutput but for stdout alone. See those docs.
SendStdout *string `lang:"send_stdout" yaml:"send_stdout"`
// SendStderr is like SendOutput but for stderr alone. See those docs.
SendStderr *string `lang:"send_stderr" yaml:"send_stderr"`
output *string // all cmd output, read only, do not set!
stdout *string // the cmd stdout, read only, do not set!
stderr *string // the cmd stderr, read only, do not set!
dir string // the path to local storage
interruptChan chan struct{}
wg *sync.WaitGroup
}
@@ -187,6 +226,12 @@ func (obj *ExecRes) Validate() error {
return fmt.Errorf("the Args param can't be used when Cmd has args")
}
for _, file := range obj.WatchFiles {
if !strings.HasPrefix(file, "/") {
return fmt.Errorf("the path (`%s`) in WatchFiles must be absolute", file)
}
}
if obj.Creates != "" && !strings.HasPrefix(obj.Creates, "/") {
return fmt.Errorf("the Creates param must be an absolute path")
}
@@ -215,6 +260,12 @@ func (obj *ExecRes) Validate() error {
func (obj *ExecRes) Init(init *engine.Init) error {
obj.init = init // save for later
dir, err := obj.init.VarDir("")
if err != nil {
return errwrap.Wrapf(err, "could not get VarDir in Init()")
}
obj.dir = dir
obj.interruptChan = make(chan struct{})
obj.wg = &sync.WaitGroup{}
@@ -228,10 +279,13 @@ func (obj *ExecRes) Cleanup() error {
// Watch is the primary listener for this resource and it outputs events.
func (obj *ExecRes) Watch(ctx context.Context) error {
defer obj.wg.Wait()
wg := &sync.WaitGroup{}
defer wg.Wait()
ioChan := make(chan *cmdOutput)
rwChan := make(chan recwatch.Event)
filesChan := make(chan recwatch.Event)
var watchCmd *exec.Cmd
if obj.WatchCmd != "" {
var cmdName string
@@ -271,6 +325,46 @@ func (obj *ExecRes) Watch(ctx context.Context) error {
}
}
for _, file := range obj.WatchFiles {
recurse := strings.HasSuffix(file, "/") // check if it's a file or dir
recWatcher, err := recwatch.NewRecWatcher(file, recurse)
if err != nil {
return err
}
defer recWatcher.Close()
wg.Add(1)
go func() {
defer wg.Done()
for {
var files recwatch.Event
var ok bool
var shutdown bool
select {
case files, ok = <-recWatcher.Events(): // receiving events
case <-ctx.Done(): // unblock
return
}
if !ok {
err := fmt.Errorf("channel shutdown")
files = recwatch.Event{Error: err}
shutdown = true
}
select {
case filesChan <- files: // send events
if shutdown { // optimization to free early
return
}
case <-ctx.Done():
return
}
}
}()
}
if obj.Creates != "" {
recWatcher, err := recwatch.NewRecWatcher(obj.Creates, false)
if err != nil {
@@ -282,7 +376,6 @@ func (obj *ExecRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
select {
case data, ok := <-ioChan:
@@ -321,8 +414,8 @@ func (obj *ExecRes) Watch(ctx context.Context) error {
obj.init.Logf("watch out:")
obj.init.Logf("%s", s)
}
if data.text != "" {
send = true
if data.text == "" { // TODO: do we want to skip event?
continue
}
case event, ok := <-rwChan:
@@ -332,19 +425,22 @@ func (obj *ExecRes) Watch(ctx context.Context) error {
if err := event.Error; err != nil {
return errwrap.Wrapf(err, "unknown %s watcher error", obj)
}
send = true
case files, ok := <-filesChan:
if !ok { // channel shutdown
return fmt.Errorf("unexpected recwatch shutdown")
}
if err := files.Error; err != nil {
return errwrap.Wrapf(err, "unknown %s watcher error", obj)
}
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply checks the resource state and applies the resource if the bool
// input is true. It returns error info and if the state check passed or not.
@@ -354,6 +450,10 @@ func (obj *ExecRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
// check and this will run. It is still guarded by the IfCmd, but it can
// have a chance to execute, and all without the check of obj.Refresh()!
if err := obj.checkApplyReadCache(); err != nil {
return false, err
}
if obj.IfCmd != "" { // if there is no onlyif check, we should just run
var cmdName string
var cmdArgs []string
@@ -413,30 +513,55 @@ func (obj *ExecRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
obj.init.Logf("ifcmd out:")
obj.init.Logf("%s", s)
}
//if err := obj.checkApplyWriteCache(); err != nil {
// return false, err
//}
obj.safety()
if err := obj.send(); err != nil {
return false, err
}
return true, nil // don't run
}
if s := out.String(); s == "" {
s := out.String()
if s == "" {
obj.init.Logf("ifcmd out empty!")
} else {
obj.init.Logf("ifcmd out:")
obj.init.Logf("%s", s)
}
if obj.IfEquals != nil && *obj.IfEquals == s {
obj.init.Logf("ifequals matched")
return true, nil // don't run
}
}
if obj.Creates != "" { // gate the extra syscall
if _, err := os.Stat(obj.Creates); err == nil {
obj.init.Logf("creates file exists, skipping cmd")
//if err := obj.checkApplyWriteCache(); err != nil {
// return false, err
//}
obj.safety()
if err := obj.send(); err != nil {
return false, err
}
return true, nil // don't run
}
}
// state is not okay, no work done, exit, but without error
if !apply {
//if err := obj.checkApplyWriteCache(); err != nil {
// return false, err
//}
//obj.safety()
if err := obj.send(); err != nil {
return false, err
}
return false, nil
}
// apply portion
obj.init.Logf("Apply")
var cmdName string
var cmdArgs []string
if obj.Shell == "" {
@@ -644,11 +769,10 @@ func (obj *ExecRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
}
}
if err := obj.init.Send(&ExecSends{
Output: obj.output,
Stdout: obj.stdout,
Stderr: obj.stderr,
}); err != nil {
if err := obj.checkApplyWriteCache(); err != nil {
return false, err
}
if err := obj.send(); err != nil {
return false, err
}
@@ -660,6 +784,77 @@ func (obj *ExecRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
return false, nil // success
}
// send is a helper to avoid duplication of the same send operation.
func (obj *ExecRes) send() error {
return obj.init.Send(&ExecSends{
Output: obj.output,
Stdout: obj.stdout,
Stderr: obj.stderr,
})
}
// safety is a helper function that populates the cached "send" values if they
// are empty. It must only be called right before actually sending any values,
// and right before CheckApply returns. It should be used only in situations
// where we didn't produce our own sending values, and there are none in the
// cache, and instead are relying on a runtime mechanism to help us out. This
// mechanism is useful as a backstop for when we're running in unclean
// scenarios.
func (obj *ExecRes) safety() {
if x := obj.SendOutput; x != nil && obj.output == nil {
s := *x // copy
obj.output = &s
}
if x := obj.SendStdout; x != nil && obj.stdout == nil {
s := *x // copy
obj.stdout = &s
}
if x := obj.SendStderr; x != nil && obj.stderr == nil {
s := *x // copy
obj.stderr = &s
}
}
// checkApplyReadCache is a helper to do all our reading from the cache.
func (obj *ExecRes) checkApplyReadCache() error {
output, err := engineUtil.ReadData(path.Join(obj.dir, "output"))
if err != nil {
return err
}
obj.output = output
stdout, err := engineUtil.ReadData(path.Join(obj.dir, "stdout"))
if err != nil {
return err
}
obj.stdout = stdout
stderr, err := engineUtil.ReadData(path.Join(obj.dir, "stderr"))
if err != nil {
return err
}
obj.stderr = stderr
return nil
}
// checkApplyWriteCache is a helper to do all our writing into the cache.
func (obj *ExecRes) checkApplyWriteCache() error {
if _, err := engineUtil.WriteData(path.Join(obj.dir, "output"), obj.output); err != nil {
return err
}
if _, err := engineUtil.WriteData(path.Join(obj.dir, "stdout"), obj.stdout); err != nil {
return err
}
if _, err := engineUtil.WriteData(path.Join(obj.dir, "stderr"), obj.stderr); err != nil {
return err
}
return nil
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *ExecRes) Cmp(r engine.Res) error {
// we can only compare ExecRes to others of the same resource kind
@@ -698,6 +893,9 @@ func (obj *ExecRes) Cmp(r engine.Res) error {
if obj.WatchShell != res.WatchShell {
return fmt.Errorf("the WatchShell differs")
}
if err := engineUtil.StrListCmp(obj.WatchFiles, res.WatchFiles); err != nil {
return errwrap.Wrapf(err, "the WatchFiles differ")
}
if obj.IfCmd != res.IfCmd {
return fmt.Errorf("the IfCmd differs")
@@ -708,6 +906,9 @@ func (obj *ExecRes) Cmp(r engine.Res) error {
if obj.IfShell != res.IfShell {
return fmt.Errorf("the IfShell differs")
}
if err := engineUtil.StrPtrCmp(obj.IfEquals, res.IfEquals); err != nil {
return errwrap.Wrapf(err, "the IfEquals differs")
}
if obj.Creates != res.Creates {
return fmt.Errorf("the Creates differs")
@@ -730,6 +931,16 @@ func (obj *ExecRes) Cmp(r engine.Res) error {
return fmt.Errorf("the Group differs")
}
if err := engineUtil.StrPtrCmp(obj.SendOutput, res.SendOutput); err != nil {
return errwrap.Wrapf(err, "the SendOutput differs")
}
if err := engineUtil.StrPtrCmp(obj.SendStdout, res.SendStdout); err != nil {
return errwrap.Wrapf(err, "the SendStdout differs")
}
if err := engineUtil.StrPtrCmp(obj.SendStderr, res.SendStderr); err != nil {
return errwrap.Wrapf(err, "the SendStderr differs")
}
return nil
}

View File

@@ -35,6 +35,8 @@ import (
"context"
"fmt"
"os/exec"
"path"
"strings"
"syscall"
"testing"
"time"
@@ -45,6 +47,7 @@ import (
)
func fakeExecInit(t *testing.T) (*engine.Init, *ExecSends) {
tmpdir := fmt.Sprintf("%s/", t.TempDir()) // gets cleaned up at end, new dir for each call
debug := testing.Verbose() // set via the -test.v flag to `go test`
logf := func(format string, v ...interface{}) {
t.Logf("test: "+format, v...)
@@ -59,6 +62,9 @@ func fakeExecInit(t *testing.T) (*engine.Init, *ExecSends) {
*execSends = *x // set
return nil
},
VarDir: func(p string) (string, error) {
return path.Join(tmpdir, p), nil
},
Debug: debug,
Logf: logf,
}, execSends
@@ -253,6 +259,126 @@ func TestExecSendRecv3(t *testing.T) {
}
}
func TestExecEnvEmpty(t *testing.T) {
now := time.Now()
min := time.Second * 3 // approx min time needed for the test
ctx := context.Background()
if deadline, ok := t.Deadline(); ok {
d := deadline.Add(-min)
t.Logf(" now: %+v", now)
t.Logf(" d: %+v", d)
newCtx, cancel := context.WithDeadline(ctx, d)
ctx = newCtx
defer cancel()
}
r1 := &ExecRes{
Cmd: "env",
Shell: "/bin/bash",
}
if err := r1.Validate(); err != nil {
t.Errorf("validate failed with: %v", err)
}
defer func() {
if err := r1.Cleanup(); err != nil {
t.Errorf("cleanup failed with: %v", err)
}
}()
init, execSends := fakeExecInit(t)
if err := r1.Init(init); err != nil {
t.Errorf("init failed with: %v", err)
}
// run artificially without the entire engine
if _, err := r1.CheckApply(ctx, true); err != nil {
t.Errorf("checkapply failed with: %v", err)
}
if execSends.Stdout == nil {
t.Errorf("stdout is nil")
return
}
for _, v := range strings.Split(*execSends.Stdout, "\n") {
if v == "" {
continue
}
s := strings.SplitN(v, "=", 2)
if s[0] == "_" || s[0] == "PWD" || s[0] == "SHLVL" {
// these variables are set by bash and are expected
continue
}
t.Errorf("executed process had an unexpected env variable: %s", s[0])
}
}
func TestExecEnvSetByResource(t *testing.T) {
now := time.Now()
min := time.Second * 3 // approx min time needed for the test
ctx := context.Background()
if deadline, ok := t.Deadline(); ok {
d := deadline.Add(-min)
t.Logf(" now: %+v", now)
t.Logf(" d: %+v", d)
newCtx, cancel := context.WithDeadline(ctx, d)
ctx = newCtx
defer cancel()
}
r1 := &ExecRes{
Cmd: "env",
Shell: "/bin/bash",
Env: map[string]string{
"PURPLE": "idea",
"CONTAINS_UNDERSCORES": "and=equal=signs",
},
}
if err := r1.Validate(); err != nil {
t.Errorf("validate failed with: %v", err)
}
defer func() {
if err := r1.Cleanup(); err != nil {
t.Errorf("cleanup failed with: %v", err)
}
}()
init, execSends := fakeExecInit(t)
if err := r1.Init(init); err != nil {
t.Errorf("init failed with: %v", err)
}
// run artificially without the entire engine
if _, err := r1.CheckApply(ctx, true); err != nil {
t.Errorf("checkapply failed with: %v", err)
}
if execSends.Stdout == nil {
t.Errorf("stdout is nil")
return
}
for _, v := range strings.Split(*execSends.Stdout, "\n") {
if v == "" {
continue
}
s := strings.SplitN(v, "=", 2)
if s[0] == "_" || s[0] == "PWD" || s[0] == "SHLVL" {
// these variables are set by bash and are expected
continue
}
if s[0] == "PURPLE" {
if s[1] != "idea" {
t.Errorf("executed process had an unexpected value for env variable: %s", v)
}
continue
}
if s[0] == "CONTAINS_UNDERSCORES" {
if s[1] != "and=equal=signs" {
t.Errorf("executed process had an unexpected value for env variable: %s", v)
}
continue
}
t.Errorf("executed process had an unexpected env variable: %s", s[0])
}
}
func TestExecTimeoutBehaviour(t *testing.T) {
now := time.Now()
min := time.Second * 3 // approx min time needed for the test
@@ -291,7 +417,7 @@ func TestExecTimeoutBehaviour(t *testing.T) {
}
exitErr, ok := err.(*exec.ExitError) // embeds an os.ProcessState
if err != nil && ok {
if ok {
pStateSys := exitErr.Sys() // (*os.ProcessState) Sys
wStatus, ok := pStateSys.(syscall.WaitStatus)
if !ok {
@@ -311,13 +437,8 @@ func TestExecTimeoutBehaviour(t *testing.T) {
t.Logf("exit status: %d", wStatus.ExitStatus())
return
} else if err != nil {
t.Errorf("general cmd error")
return
}
// no error
t.Errorf("general cmd error")
}
func TestExecAutoEdge1(t *testing.T) {

View File

@@ -134,7 +134,8 @@ type FileRes struct {
// `exists` or `absent`. If you do not specify this, we will not be able
// to create or remove a file if it might be logical for another
// param to require that. Instead it will error. This means that this
// field is not implied by specifying some content or a mode.
// field is not implied by specifying some content or a mode. This is
// also used when determining how we manage a symlink.
State string `lang:"state" yaml:"state"`
// Content specifies the file contents to use. If this is nil, they are
@@ -145,7 +146,7 @@ type FileRes struct {
// Source specifies the source contents for the file resource. It cannot
// be combined with the Content or Fragments parameters. It must be an
// absolute path, and it can point to a file or a directory. If it
// points to a file, then that will will be copied throuh directly. If
// points to a file, then that will will be copied through directly. If
// it points to a directory, then it will copy the directory "rsync
// style" onto the file destination. As a result, if this is a file,
// then the main file res must be a file, and if it is a directory, then
@@ -156,7 +157,8 @@ type FileRes struct {
// Force parameter. If source is undefined and the file path is a
// directory, then a directory will be created. If left undefined, and
// combined with the Purge option too, then any unmanaged file in this
// dir will be removed.
// dir will be removed. Lastly, if the Symlink parameter is true, then
// this specifies the source that the symbolic link points to.
Source string `lang:"source" yaml:"source"`
// Fragments specifies that the file is built from a list of individual
@@ -194,7 +196,8 @@ type FileRes struct {
Recurse bool `lang:"recurse" yaml:"recurse"`
// Force must be set if we want to perform an unusual operation, such as
// changing a file into a directory or vice-versa.
// changing a file into a directory or vice-versa. This is also required
// when changing a file or directory into a symlink or vice-versa.
Force bool `lang:"force" yaml:"force"`
// Purge specifies that when true, any unmanaged file in this file
@@ -203,6 +206,12 @@ type FileRes struct {
// Recurse to true. This doesn't work with Content or Fragments.
Purge bool `lang:"purge" yaml:"purge"`
// Symlink specifies that the file should be a symbolic link to the
// source contents. Those do not have to point to an actual file or
// directory. The source in that case can be either an absolute or
// relative path.
Symlink bool `lang:"symlink" yaml:"symlink"`
sha256sum string
}
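// Illustrative sketch, not part of this diff: a FileRes configured to manage a
// symbolic link. Source is the link target (absolute or relative), State must
// be set for any work to happen, and Force is required when replacing an
// existing regular file or directory with a link. The function name and paths
// are assumptions for illustration only.
func exampleSymlinkFile() *FileRes {
	return &FileRes{
		// The path of the link itself comes from the resource, as usual.
		State:   "exists",
		Source:  "/srv/app/releases/current", // what the symlink points to
		Symlink: true,
		Force:   true, // needed if a plain file already exists at the path
	}
}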
@@ -295,18 +304,22 @@ func (obj *FileRes) Validate() error {
return fmt.Errorf("can only specify one of Content, Source, and Fragments")
}
if obj.Symlink && !isSrc && obj.State == FileStateExists {
return fmt.Errorf("can't use Symlink with an empty Source")
}
if obj.State == FileStateAbsent && (isContent || isSrc || isFrag) {
return fmt.Errorf("can't specify file Content, Source, or Fragments when State is %s", FileStateAbsent)
}
// The path and Source must either both be dirs or both not be.
srcIsDir := strings.HasSuffix(obj.Source, "/")
if isSrc && (obj.isDir() != srcIsDir) {
if isSrc && (obj.isDir() != srcIsDir) && !obj.Symlink {
return fmt.Errorf("the path and Source must either both be dirs or both not be")
}
if obj.isDir() && (isContent || isFrag) { // makes no sense
return fmt.Errorf("can't specify Content or Fragments when creating a Dir")
if obj.isDir() && (isContent || isFrag || obj.Symlink) { // makes no sense
return fmt.Errorf("can't specify Content or Fragments or Symlink when creating a Dir")
}
// TODO: is this really a requirement that we want to enforce?
@@ -318,7 +331,7 @@ func (obj *FileRes) Validate() error {
return fmt.Errorf("you'll want to Recurse when you have a Purge to do")
}
if isSrc && !obj.isDir() && !srcIsDir && obj.Recurse {
if isSrc && !obj.isDir() && !srcIsDir && obj.Recurse && !obj.Symlink {
return fmt.Errorf("you can't recurse when copying a single file")
}
@@ -327,6 +340,13 @@ func (obj *FileRes) Validate() error {
if !strings.HasPrefix(frag, "/") {
return fmt.Errorf("the frag (`%s`) isn't an absolute path", frag)
}
// If the file is inside one of our fragment dirs, then this
// would make an infinite loop mess. We can't prevent this
// happening in other ways with multiple dirs doing this for
// each other, but we can at least catch the common case.
if util.HasPathPrefix(obj.getPath(), frag) {
return fmt.Errorf("inside a frag (`%s`)", frag)
}
}
if obj.Purge && (isContent || isFrag) {
@@ -365,6 +385,13 @@ func (obj *FileRes) Validate() error {
}
}
if obj.Symlink && (isContent || isFrag) {
return fmt.Errorf("can't specify Content or Fragments with Symlink")
}
if obj.Symlink && (obj.Recurse || obj.Purge) {
return fmt.Errorf("can't specify Recurse or Purge with Symlink")
}
return nil
}
@@ -491,7 +518,6 @@ func (obj *FileRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
if obj.init.Debug {
obj.init.Logf("watching: %s", obj.getPath()) // attempting to watch...
@@ -511,7 +537,6 @@ func (obj *FileRes) Watch(ctx context.Context) error {
if obj.init.Debug { // don't access event.Body if event.Error isn't nil
obj.init.Logf("event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case event, ok := <-inputEvents:
if !ok {
@@ -523,19 +548,14 @@ func (obj *FileRes) Watch(ctx context.Context) error {
if obj.init.Debug { // don't access event.Body if event.Error isn't nil
obj.init.Logf("input event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// fileCheckApply is the CheckApply operation for a source and destination file.
// It can accept an io.Reader as the source, which can be a regular file, or it
@@ -636,7 +656,7 @@ func (obj *FileRes) fileCheckApply(ctx context.Context, apply bool, src io.ReadS
return "", false, err
}
sha256sum = hex.EncodeToString(hash.Sum(nil))
// since we re-use this src handler below, it is
// since we reuse this src handler below, it is
// *critical* to seek to 0, or we'll copy nothing!
if n, err := src.Seek(0, 0); err != nil || n != 0 {
return sha256sum, false, err
@@ -666,7 +686,7 @@ func (obj *FileRes) fileCheckApply(ctx context.Context, apply bool, src io.ReadS
if err != nil {
return sha256sum, false, err
}
defer dstFile.Close() // TODO: is this redundant because of the earlier defered Close() ?
defer dstFile.Close() // TODO: is this redundant because of the earlier deferred Close() ?
if isFile { // set mode because it's a new file
if err := dstFile.Chmod(srcStat.Mode()); err != nil {
@@ -714,10 +734,10 @@ func (obj *FileRes) dirCheckApply(ctx context.Context, apply bool) (bool, error)
// the path exists and is not a directory
// delete the file if force is given
if err == nil && !fileInfo.IsDir() {
obj.init.Logf("removing (force): %s", obj.getPath())
if err := os.Remove(obj.getPath()); err != nil {
return false, err
}
obj.init.Logf("force remove")
}
// create the empty directory
@@ -730,11 +750,19 @@ func (obj *FileRes) dirCheckApply(ctx context.Context, apply bool) (bool, error)
if obj.Recurse {
// TODO: add recurse limit here
if err := os.MkdirAll(obj.getPath(), mode); err != nil {
return false, err
}
obj.init.Logf("mkdir -p -m %s", mode)
return false, os.MkdirAll(obj.getPath(), mode)
return false, nil
}
return false, os.Mkdir(obj.getPath(), mode)
if err := os.Mkdir(obj.getPath(), mode); err != nil {
return false, err
}
obj.init.Logf("mkdir -m %s", mode)
return false, nil
}
// syncCheckApply is the CheckApply operation for a source and destination dir.
@@ -931,6 +959,10 @@ func (obj *FileRes) syncCheckApply(ctx context.Context, apply bool, src, dst str
// stateCheckApply performs a CheckApply of the file state to create or remove
// an empty file or directory.
func (obj *FileRes) stateCheckApply(ctx context.Context, apply bool) (bool, error) {
if obj.Symlink {
return true, nil // delegate all of this work to symlinkCheckApply
}
if obj.State == FileStateUndefined { // state is not specified
return true, nil
}
@@ -995,6 +1027,7 @@ func (obj *FileRes) stateCheckApply(ctx context.Context, apply bool) (bool, erro
if err := f.Close(); err != nil {
return false, errwrap.Wrapf(err, "problem closing empty file")
}
obj.init.Logf("created")
return false, nil // defer the Content != nil work to later...
}
@@ -1026,6 +1059,10 @@ func (obj *FileRes) contentCheckApply(ctx context.Context, apply bool) (bool, er
// sourceCheckApply performs a CheckApply for the file source.
func (obj *FileRes) sourceCheckApply(ctx context.Context, apply bool) (bool, error) {
if obj.Symlink { // delegate
return obj.symlinkCheckApply(ctx, apply)
}
if obj.init.Debug {
obj.init.Logf("sourceCheckApply(%t)", apply)
}
@@ -1154,7 +1191,12 @@ func (obj *FileRes) chownCheckApply(ctx context.Context, apply bool) (bool, erro
return true, nil
}
fileInfo, err := os.Stat(obj.getPath())
// XXX: Is this the correct usage of Stat for Symlinks and regular files?
stat := os.Stat
if obj.Symlink {
stat = os.Lstat
}
fileInfo, err := stat(obj.getPath())
// TODO: is this a sane behaviour that we want to preserve?
// If the file does not exist and we are in noop mode, do not throw an
// error.
@@ -1222,7 +1264,12 @@ func (obj *FileRes) chmodCheckApply(ctx context.Context, apply bool) (bool, erro
return false, err
}
fileInfo, err := os.Stat(obj.getPath())
// XXX: Is this the correct usage of Stat for Symlinks and regular files?
stat := os.Stat
if obj.Symlink {
stat = os.Lstat
}
fileInfo, err := stat(obj.getPath())
if err != nil { // if the file does not exist, it's correct to error!
return false, err
}
@@ -1241,6 +1288,75 @@ func (obj *FileRes) chmodCheckApply(ctx context.Context, apply bool) (bool, erro
return false, os.Chmod(obj.getPath(), mode)
}
// symlinkCheckApply performs a CheckApply for the symlink parameter.
func (obj *FileRes) symlinkCheckApply(ctx context.Context, apply bool) (bool, error) {
if !obj.Symlink {
return true, nil
}
if obj.init.Debug {
obj.init.Logf("symlinkCheckApply(%t)", apply)
}
if obj.State == FileStateUndefined { // state is not specified
return true, nil
}
p := obj.getPath()
dest, err := os.Readlink(p)
isNotExist := os.IsNotExist(err)
isInvalidSymlink := isInvalidSymlink(err)
if err != nil && !isNotExist && !isInvalidSymlink {
return false, err // some unknown error
}
if obj.State == FileStateAbsent && isNotExist {
return true, nil
}
if obj.State == FileStateExists && err == nil && dest == obj.Source {
return true, nil
}
// state is not okay, no work done, exit, but without error
if !apply {
return false, nil
}
if obj.State == FileStateAbsent && isInvalidSymlink && !obj.Force {
return false, fmt.Errorf("can't remove non-symlink without Force")
}
if obj.State == FileStateAbsent {
obj.init.Logf("removing: %s", p)
// TODO: not sure we ever want to recurse with symlinks
//if obj.Recurse {
// return false, os.RemoveAll(p) // dangerous ;)
//}
return false, os.Remove(p)
}
//if obj.State == FileStateExists ...
// want to change to a symlink but can't
if isInvalidSymlink && !obj.Force {
return false, fmt.Errorf("can't mutate to symlink without Force")
}
// remove old file/dir or wrong symlink before making new symlink
if isInvalidSymlink || err == nil {
obj.init.Logf("removing: %s", p)
if err := os.Remove(p); err != nil {
return false, err
}
// now make the symlink...
}
// make the symlink
obj.init.Logf("symlink %s %s", obj.Source, p)
return false, os.Symlink(obj.Source, p)
}
// CheckApply checks the resource state and applies the resource if the bool
// input is true. It returns error info and if the state check passed or not.
func (obj *FileRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
@@ -1250,6 +1366,7 @@ func (obj *FileRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
// might not have a new value to copy, and therefore we won't see this
// notification of change. Therefore, it is important to process these
// promptly, if they must not be lost, such as for cache invalidation.
// NOTE: Modern send/recv doesn't really have this limitation anymore.
if val, exists := obj.init.Recv()["content"]; exists && val.Changed {
// if we received on Content, and it changed, invalidate the cache!
obj.init.Logf("contentCheckApply: invalidating sha256sum of `content`")
@@ -1270,6 +1387,7 @@ func (obj *FileRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
} else if !c {
checkOK = false
}
// sourceCheckApply runs symlinkCheckApply
if c, err := obj.sourceCheckApply(ctx, apply); err != nil {
return false, err
} else if !c {
@@ -1354,6 +1472,9 @@ func (obj *FileRes) Cmp(r engine.Res) error {
if obj.Purge != res.Purge {
return fmt.Errorf("the Purge option differs")
}
if obj.Symlink != res.Symlink {
return fmt.Errorf("the Symlink option differs")
}
return nil
}
@@ -1494,12 +1615,6 @@ func (obj *FileRes) UIDs() []engine.ResUID {
// return fmt.Errorf("not possible at the moment")
//}
// CollectPattern applies the pattern for collection resources.
func (obj *FileRes) CollectPattern(pattern string) {
// XXX: currently the pattern for files can only override the Dirname variable :P
obj.Dirname = pattern // XXX: simplistic for now
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *FileRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
@@ -1689,12 +1804,12 @@ type FileInfo struct {
}
// ReadDir reads a directory path, and returns a list of enhanced FileInfo's.
func ReadDir(path string) ([]FileInfo, error) {
if !strings.HasSuffix(path, "/") { // dirs have trailing slashes
func ReadDir(p string) ([]FileInfo, error) {
if !strings.HasSuffix(p, "/") { // dirs have trailing slashes
return nil, fmt.Errorf("path must be a directory")
}
output := []FileInfo{} // my file info
files, err := os.ReadDir(path)
files, err := os.ReadDir(path.Clean(p)) // clean for prettier errors
if os.IsNotExist(err) {
return output, err // return empty list
}
@@ -1702,8 +1817,8 @@ func ReadDir(path string) ([]FileInfo, error) {
return nil, err
}
for _, file := range files {
abs := path + smartPath(file)
rel, err := filepath.Rel(path, abs) // NOTE: calls Clean()
abs := p + smartPath(file)
rel, err := filepath.Rel(p, abs) // NOTE: calls Clean()
if err != nil { // shouldn't happen
return nil, errwrap.Wrapf(err, "unhandled error in ReadDir")
}
@@ -1712,7 +1827,12 @@ func ReadDir(path string) ([]FileInfo, error) {
}
fileInfo, err := file.Info()
if err != nil {
if os.IsNotExist(err) {
// File vanished before we could run Info() on it. This
// can happen if someone deletes a file in a directory
// while we're in the middle of running this. So skip...
continue
} else if err != nil {
return nil, errwrap.Wrapf(err, "unhandled error in FileInfo")
}
@@ -1753,3 +1873,13 @@ func printFiles(fileInfos map[string]FileInfo) string {
}
return s
}
// isInvalidSymlink is a helper which returns true if the error from os.Readlink
// is the "invalid argument" error which happens if we try and read a normal
// file. The comparison against os.ErrInvalid and errors.Is checks don't work.
func isInvalidSymlink(err error) bool {
if perr, ok := err.(*os.PathError); ok {
return perr.Err == syscall.EINVAL
}
return false
}

View File

@@ -262,7 +262,6 @@ func (obj *FirewalldRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
select {
case event, ok := <-events: // &nftables.MonitorEvent
@@ -278,19 +277,13 @@ func (obj *FirewalldRes) Watch(ctx context.Context) error {
//obj.init.Logf("event data: %+v", event.Data)
}
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply checks the resource state and applies the resource if the bool
// input is true. It returns error info and if the state check passed or not.

View File

@@ -102,7 +102,6 @@ func (obj *GroupRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
if obj.init.Debug {
obj.init.Logf("Watching: %s", groupFile) // attempting to watch...
@@ -119,19 +118,14 @@ func (obj *GroupRes) Watch(ctx context.Context) error {
if obj.init.Debug { // don't access event.Body if event.Error isn't nil
obj.init.Logf("Event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply method for Group resource.
func (obj *GroupRes) CheckApply(ctx context.Context, apply bool) (bool, error) {

View File

@@ -0,0 +1,391 @@
// Mgmt
// Copyright (C) James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//
// Additional permission under GNU GPL version 3 section 7
//
// If you modify this program, or any covered work, by linking or combining it
// with embedded mcl code and modules (and that the embedded mcl code and
// modules which link with this program, contain a copy of their source code in
// the authoritative form) containing parts covered by the terms of any other
// license, the licensors of this program grant you additional permission to
// convey the resulting work. Furthermore, the licensors of this program grant
// the original author, James Shubin, additional permission to update this
// additional permission if he deems it necessary to achieve the goals of this
// additional permission.
package resources
import (
"context"
"fmt"
"os/exec"
"os/user"
"reflect"
"strconv"
"strings"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
engineUtil "github.com/purpleidea/mgmt/engine/util"
"github.com/purpleidea/mgmt/util/errwrap"
)
func init() {
engine.RegisterResource("gsettings", func() engine.Res { return &GsettingsRes{} })
}
const (
gsettingsTmpl = "gsettings@%s"
)
// GsettingsRes is a resource for setting dconf values through gsettings. The
// ideal scenario is that this runs as the same user that wants settings set.
// This should be done by a local user-specific mgmt daemon. As a special case,
// we can run as root (or anyone with permission), which launches a subprocess
// that setuid/setgid's to that user to run the needed operations. To specify
// the schema and key, set the resource name as "schema key" (separated by a
// single space character) or use the parameters.
type GsettingsRes struct {
// XXX: add a dbus version of this-- it will require running as the user
// directly since in that scenario we can't spawn a process of the right
// uid/gid, and if we set either of those we would interfere with all of
// the normal mgmt stuff running inside this process.
traits.Base // add the base methods without re-implementation
init *engine.Init
// Schema is the schema to use. This can be schema:path if the schema
// doesn't have a fixed path. See the `gsettings` manual for more info.
Schema string `lang:"schema" yaml:"schema"`
// Key is the key to set.
Key string `lang:"key" yaml:"key"`
// Type is the type value to set. This can be "bool", "str", "int", or
// "custom".
// XXX: add support for [][]str and so on...
Type string `lang:"type" yaml:"type"`
// Value is the value to set. It is interface{} because it can hold any
// value type.
// XXX: Add resource unification to this key
Value interface{} `lang:"value" yaml:"value"`
// User is the (optional) user to use to execute the command. It is used
// for any command being run.
User string `lang:"user" yaml:"user"`
// Group is the (optional) group to use to execute the command. It is
// used for any command being run.
Group string `lang:"group" yaml:"group"`
// XXX: We should have a "once" functionality if this param is set true.
// XXX: Basically it would change that field once, and store a "tag"
// file to say it was done.
// XXX: Maybe that should be a metaparam called Once that works anywhere.
// XXX: Maybe there should be a way to reset the "once" tag too...
//Once string `lang:"once" yaml:"once"`
// We're using the exec resource to build the resources because it's all
// done through exec.
exec *ExecRes
}
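// Illustrative sketch, not part of this diff: a GsettingsRes that keeps one
// boolean dconf key set for a particular desktop user. The schema, key, and
// user names are assumptions for illustration only.
func exampleGsettings() *GsettingsRes {
	return &GsettingsRes{
		Schema: "org.gnome.desktop.interface", // hypothetical schema
		Key:    "clock-show-seconds",          // hypothetical key
		Type:   "bool",
		Value:  true,
		User:   "alice", // run the gsettings commands as this user
	}
}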
// Default returns some sensible defaults for this resource.
func (obj *GsettingsRes) Default() engine.Res {
return &GsettingsRes{}
}
// parse is a helper to pull out the correct schema and key to use.
func (obj *GsettingsRes) parse() (string, string, error) {
schema := obj.Schema
key := obj.Key
sp := strings.Split(obj.Name(), " ")
if len(sp) == 2 && obj.Schema == "" && obj.Key == "" {
schema = sp[0]
key = sp[1]
}
if schema == "" {
return "", "", fmt.Errorf("empty schema")
}
if key == "" {
return "", "", fmt.Errorf("empty key")
}
return schema, key, nil
}
// value is a helper to pull out the value in the correct format to use.
func (obj *GsettingsRes) value() (string, error) {
if obj.Type == "bool" {
v, ok := obj.Value.(bool)
if !ok {
return "", fmt.Errorf("invalid bool")
}
if v {
return "true", nil
}
return "false", nil
}
if obj.Type == "str" {
v, ok := obj.Value.(string)
if !ok {
return "", fmt.Errorf("invalid str")
}
return v, nil
}
if obj.Type == "int" {
v, ok := obj.Value.(int)
if !ok {
return "", fmt.Errorf("invalid int")
}
return strconv.Itoa(v), nil
}
if obj.Type == "custom" {
v, ok := obj.Value.(string)
if !ok {
return "", fmt.Errorf("invalid custom")
}
return v, nil
}
// XXX: add proper type parsing
return "", fmt.Errorf("invalid type: %s", obj.Type)
}
// uid is a helper to get the correct uid.
func (obj *GsettingsRes) uid() (int, error) {
uid := obj.User // something or empty
if obj.User == "" {
u, err := user.Current()
if err != nil {
return -1, err
}
uid = u.Uid
}
out, err := engineUtil.GetUID(uid)
if err != nil {
return -1, errwrap.Wrapf(err, "error looking up uid for %s", uid)
}
return out, nil
}
// makeComposite creates a pointer to a ExecRes. The pointer is used to validate
// and initialize the nested exec.
func (obj *GsettingsRes) makeComposite() (*ExecRes, error) {
cmd, err := exec.LookPath("gsettings")
if err != nil {
return nil, err
}
schema, key, err := obj.parse()
if err != nil {
return nil, err
}
val, err := obj.value()
if err != nil {
return nil, err
}
uid, err := obj.uid()
if err != nil {
return nil, err
}
res, err := engine.NewNamedResource("exec", fmt.Sprintf(gsettingsTmpl, obj.Name()))
if err != nil {
return nil, err
}
exec := res.(*ExecRes)
exec.Cmd = cmd
exec.Args = []string{
"set",
schema,
key,
val,
}
exec.Cwd = "/"
exec.IfCmd = fmt.Sprintf("%s get %s %s", cmd, schema, key)
exec.IfCwd = "/"
expected := val + "\n" // value comes with a trailing newline
exec.IfEquals = &expected
exec.WatchCmd = fmt.Sprintf("%s monitor %s %s", cmd, schema, key)
exec.WatchCwd = "/"
exec.User = obj.User
exec.Group = obj.Group
exec.Env = map[string]string{
// Either of these will work, so we'll include both for fun.
"DBUS_SESSION_BUS_ADDRESS": fmt.Sprintf("unix:path=/run/user/%d/bus", uid),
"XDG_RUNTIME_DIR": fmt.Sprintf("/run/user/%d/", uid),
}
//exec.Timeout = ? // TODO: should we have a timeout to prevent blocking?
return exec, nil
}
// Validate reports any problems with the struct definition.
func (obj *GsettingsRes) Validate() error {
if _, _, err := obj.parse(); err != nil {
return err
}
// validation of obj.Type happens in this function.
if _, err := obj.value(); err != nil {
return err
}
exec, err := obj.makeComposite()
if err != nil {
return errwrap.Wrapf(err, "makeComposite failed in validate")
}
if err := exec.Validate(); err != nil { // composite resource
return errwrap.Wrapf(err, "validate failed for embedded exec: %s", exec)
}
return nil
}
// Init runs some startup code for this resource.
func (obj *GsettingsRes) Init(init *engine.Init) error {
obj.init = init // save for later
exec, err := obj.makeComposite()
if err != nil {
return errwrap.Wrapf(err, "makeComposite failed in init")
}
obj.exec = exec
newInit := obj.init.Copy()
newInit.Send = func(interface{}) error { // override so exec can't send
return nil
}
newInit.Logf = func(format string, v ...interface{}) {
//if format == "cmd out empty!" {
// return
//}
//obj.init.Logf("exec: "+format, v...)
}
return obj.exec.Init(newInit)
}
// Cleanup is run by the engine to clean up after the resource is done.
func (obj *GsettingsRes) Cleanup() error {
if obj.exec != nil {
return obj.exec.Cleanup()
}
return nil
}
// Watch is the primary listener for this resource and it outputs events.
func (obj *GsettingsRes) Watch(ctx context.Context) error {
return obj.exec.Watch(ctx)
}
// CheckApply checks the resource state and applies the resource if the bool
// input is true. It returns error info and if the state check passed or not.
func (obj *GsettingsRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
obj.init.Logf("%s", obj.exec.IfCmd) // "gsettings get"
checkOK, err := obj.exec.CheckApply(ctx, apply)
if err != nil {
return checkOK, err
}
if !checkOK {
// "gsettings set"
obj.init.Logf("%s %s", obj.exec.Cmd, strings.Join(obj.exec.Args, " "))
}
return checkOK, nil
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *GsettingsRes) Cmp(r engine.Res) error {
// we can only compare GsettingsRes to others of the same resource kind
res, ok := r.(*GsettingsRes)
if !ok {
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.Schema != res.Schema {
return fmt.Errorf("the Schema differs")
}
if obj.Key != res.Key {
return fmt.Errorf("the Key differs")
}
if obj.Type != res.Type {
return fmt.Errorf("the Type differs")
}
//if obj.Value != res.Value {
// return fmt.Errorf("the Value differs")
//}
if !reflect.DeepEqual(obj.Value, res.Value) {
return fmt.Errorf("the Value field differs")
}
if obj.User != res.User {
return fmt.Errorf("the User differs")
}
if obj.Group != res.Group {
return fmt.Errorf("the Group differs")
}
// TODO: why is res.exec ever nil?
if (obj.exec == nil) != (res.exec == nil) { // xor
return fmt.Errorf("the exec differs")
}
if obj.exec != nil && res.exec != nil {
if err := obj.exec.Cmp(res.exec); err != nil {
return errwrap.Wrapf(err, "the exec differs")
}
}
return nil
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *GsettingsRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes GsettingsRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*GsettingsRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to GsettingsRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = GsettingsRes(raw) // restore from indirection with type conversion!
return nil
}

View File

@@ -243,7 +243,6 @@ func (obj *GzipRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
select {
case event, ok := <-recWatcher.Events():
@@ -259,7 +258,6 @@ func (obj *GzipRes) Watch(ctx context.Context) error {
if obj.init.Debug { // don't access event.Body if event.Error isn't nil
obj.init.Logf("event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case event, ok := <-events:
if !ok { // channel shutdown
@@ -271,19 +269,14 @@ func (obj *GzipRes) Watch(ctx context.Context) error {
if obj.init.Debug { // don't access event.Body if event.Error isn't nil
obj.init.Logf("event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply checks the resource state and applies the resource if the bool
// input is true. It returns error info and if the state check passed or not.

View File

@@ -183,7 +183,6 @@ func (obj *HostnameRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
select {
case _, ok := <-signals:
@@ -191,7 +190,6 @@ func (obj *HostnameRes) Watch(ctx context.Context) error {
return fmt.Errorf("unexpected close")
}
//signals = nil
send = true
case event, ok := <-recWatcher.Events():
if !ok { // channel shutdown
@@ -203,19 +201,14 @@ func (obj *HostnameRes) Watch(ctx context.Context) error {
if obj.init.Debug { // don't access event.Body if event.Error isn't nil
obj.init.Logf("event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
func (obj *HostnameRes) updateHostnameProperty(object dbus.BusObject, expectedValue, property, setterName string, apply bool) (bool, error) {
propertyObject, err := object.GetProperty("org.freedesktop.hostname1." + property)

File diff suppressed because it is too large

View File

@@ -0,0 +1,807 @@
// Mgmt
// Copyright (C) James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//
// Additional permission under GNU GPL version 3 section 7
//
// If you modify this program, or any covered work, by linking or combining it
// with embedded mcl code and modules (and that the embedded mcl code and
// modules which link with this program, contain a copy of their source code in
// the authoritative form) containing parts covered by the terms of any other
// license, the licensors of this program grant you additional permission to
// convey the resulting work. Furthermore, the licensors of this program grant
// the original author, James Shubin, additional permission to update this
// additional permission if he deems it necessary to achieve the goals of this
// additional permission.
package resources
import (
"context"
"fmt"
"net"
"net/http"
"os"
"path/filepath"
"strings"
"sync"
"time"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util/errwrap"
securefilepath "github.com/cyphar/filepath-securejoin"
)
const (
// HTTPUseSecureJoin specifies that we should add in a "secure join" lib
// so that we avoid the ../../etc/passwd and symlink problems.
HTTPUseSecureJoin = true
httpServerKind = httpKind + ":server"
)
func init() {
engine.RegisterResource(httpServerKind, func() engine.Res { return &HTTPServerRes{} })
}
// HTTPServerGroupableRes is the interface that you must implement if you want
// to allow a resource the ability to be grouped into the http server resource.
// As an added safety, the Kind must also begin with "http:", and not have more
// than one colon, or it must begin with http:server:, and not have any further
// colons to avoid accidents of unwanted grouping.
type HTTPServerGroupableRes interface {
engine.Res
// ParentName is used to limit which resources autogroup into this one.
// If it's empty then it's ignored, otherwise it must match the Name of
// the parent to get grouped.
ParentName() string
// AcceptHTTP determines whether this will respond to this request.
// Return nil to accept, or any error to pass. This should be
// deterministic (pure) and fast.
AcceptHTTP(req *http.Request) error
// ServeHTTP is the standard HTTP handler that will be used for this.
http.Handler // ServeHTTP(w http.ResponseWriter, req *http.Request)
}
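// Illustrative sketch, not part of this diff: the shape of a type that could
// autogroup into the http server. Only the three HTTP-specific methods are
// shown; a real grouped resource would also implement engine.Res and follow
// the "http:server:" kind naming rules described above. The type name, field,
// and endpoint path are assumptions, and the fmt and net/http imports already
// present in this file are assumed.
type exampleHelloRes struct {
	parent string // optional parent name to constrain grouping
}

func (obj *exampleHelloRes) ParentName() string { return obj.parent }

func (obj *exampleHelloRes) AcceptHTTP(req *http.Request) error {
	if req.URL.Path != "/hello" { // hypothetical endpoint
		return fmt.Errorf("unhandled path: %s", req.URL.Path)
	}
	return nil // accept this request
}

func (obj *exampleHelloRes) ServeHTTP(w http.ResponseWriter, req *http.Request) {
	fmt.Fprintf(w, "hello from mgmt\n")
}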
// HTTPServerRes is an http server resource. It serves files, but does not
// actually apply any state. The name is used as the address to listen on,
// unless the Address field is specified, and in that case it is used instead.
// This resource can offer up files for serving that are specified either inline
// in this resource by specifying an http root, or as http:server:file resources
// which will get autogrouped into this resource at runtime. The two methods can
// be combined as well.
//
// This server also supports autogrouping some more magical resources into it.
// For example, the http:server:flag and http:server:ui resources add in magic
// endpoints.
//
// This server is not meant as a featureful replacement for the venerable and
// modern httpd servers out there, but rather as a simple, dynamic, integrated
// alternative for bootstrapping new machines and clusters in an elegant way.
//
// TODO: add support for TLS
// XXX: Make the http:server:ui resource that functions can read data from!
// XXX: The http:server:ui resource can also take in values from those functions
type HTTPServerRes struct {
traits.Base // add the base methods without re-implementation
traits.Edgeable // XXX: add autoedge support
traits.Groupable // can have HTTPServerFileRes and others grouped into it
init *engine.Init
// Address is the listen address to use for the http server. It is
// common to use `:80` (the standard) to listen on TCP port 80 on all
// addresses.
Address string `lang:"address" yaml:"address"`
// Timeout is the maximum duration in seconds to use for unspecified
// timeouts. In other words, when this value is specified, it is used as
// the value for the other *Timeout values when they aren't used. Put
// another way, this makes it easy to set all the different timeouts
// with a single parameter.
Timeout *uint64 `lang:"timeout" yaml:"timeout"`
// ReadTimeout is the maximum duration in seconds for reading during the
// http request. If it is zero, then there is no timeout. If this is
// unspecified, then the value of Timeout is used instead if it is set.
// For more information, see the golang net/http Server documentation.
ReadTimeout *uint64 `lang:"read_timeout" yaml:"read_timeout"`
// WriteTimeout is the maximum duration in seconds for writing during
// the http request. If it is zero, then there is no timeout. If this is
// unspecified, then the value of Timeout is used instead if it is set.
// For more information, see the golang net/http Server documentation.
WriteTimeout *uint64 `lang:"write_timeout" yaml:"write_timeout"`
// ShutdownTimeout is the maximum duration in seconds to wait for the
// server to shutdown gracefully before calling Close. By default it is
// nice to let client connections terminate gracefully, however it might
// take longer than we are willing to wait, particularly if one is long
// polling or running a very long download. As a result, you can set a
// timeout here. The default is zero which means it will wait
// indefinitely. The shutdown process can also be cancelled by the
// interrupt handler which this resource supports. If this is
// unspecified, then the value of Timeout is used instead if it is set.
ShutdownTimeout *uint64 `lang:"shutdown_timeout" yaml:"shutdown_timeout"`
// Root is the root directory that we should serve files from. If it is
// not specified, then it is not used. Any http file resources will have
// precedence over anything in here, in case the same path exists twice.
// TODO: should we have a flag to determine the precedence rules here?
Root string `lang:"root" yaml:"root"`
// TODO: should we allow adding a list of one-of files directly here?
eventsChanMap map[engine.Res]chan error
interruptChan chan struct{}
conn net.Listener
serveMux *http.ServeMux // can't share the global one between resources!
server *http.Server
}
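// Illustrative sketch, not part of this diff: a minimal HTTPServerRes serving
// a directory on all interfaces, with a single Timeout value that fills in the
// read, write, and shutdown timeouts since none of them are set explicitly.
// The address, root path, and timeout value are assumptions.
func exampleHTTPServer() *HTTPServerRes {
	timeout := uint64(30) // seconds
	return &HTTPServerRes{
		Address: ":8080",     // hypothetical listen address
		Root:    "/srv/www/", // must be absolute and end with a trailing slash
		Timeout: &timeout,
	}
}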
// Default returns some sensible defaults for this resource.
func (obj *HTTPServerRes) Default() engine.Res {
return &HTTPServerRes{}
}
// getAddress returns the actual address to use. When Address is not specified,
// we use the Name.
func (obj *HTTPServerRes) getAddress() string {
if obj.Address != "" {
return obj.Address
}
return obj.Name()
}
// getReadTimeout determines the value for ReadTimeout, because if unspecified,
// this will default to the value of Timeout.
func (obj *HTTPServerRes) getReadTimeout() *uint64 {
if obj.ReadTimeout != nil {
return obj.ReadTimeout
}
return obj.Timeout // might be nil
}
// getWriteTimeout determines the value for WriteTimeout, because if
// unspecified, this will default to the value of Timeout.
func (obj *HTTPServerRes) getWriteTimeout() *uint64 {
if obj.WriteTimeout != nil {
return obj.WriteTimeout
}
return obj.Timeout // might be nil
}
// getShutdownTimeout determines the value for ShutdownTimeout, because if
// unspecified, this will default to the value of Timeout.
func (obj *HTTPServerRes) getShutdownTimeout() *uint64 {
if obj.ShutdownTimeout != nil {
return obj.ShutdownTimeout
}
return obj.Timeout // might be nil
}
// AcceptHTTP determines whether we will respond to this request. Return nil to
// accept, or any error to pass. In this particular case, it accepts for the
// Root directory handler, but it happens to be implemented with this signature
// in case it gets moved. It doesn't intentionally match the
// HTTPServerGroupableRes interface.
func (obj *HTTPServerRes) AcceptHTTP(req *http.Request) error {
// Look in root if we have one, and we haven't got a file yet...
if obj.Root == "" {
return fmt.Errorf("no Root directory")
}
return nil
}
// ServeHTTP is the standard HTTP handler that will be used here. In this
// particular case, it serves the Root directory handler, but it happens to be
// implemented with this signature in case it gets moved. It doesn't
// intentionally match the HTTPServerGroupableRes interface.
func (obj *HTTPServerRes) ServeHTTP(w http.ResponseWriter, req *http.Request) {
// We only allow GET at the moment.
if req.Method != http.MethodGet {
w.WriteHeader(http.StatusMethodNotAllowed)
return
}
requestPath := req.URL.Path // TODO: is this what we want here?
p := filepath.Join(obj.Root, requestPath) // normal unsafe!
if !strings.HasPrefix(p, obj.Root) { // root ends with /
// user might have tried a ../../etc/passwd hack
obj.init.Logf("join inconsistency: %s", p)
http.NotFound(w, req) // lie to them...
return
}
if HTTPUseSecureJoin {
var err error
p, err = securefilepath.SecureJoin(obj.Root, requestPath)
if err != nil {
obj.init.Logf("secure join fail: %s", p)
http.NotFound(w, req) // lie to them...
return
}
}
if obj.init.Debug {
obj.init.Logf("Got file at root: %s", p)
}
handle, err := os.Open(p)
if err != nil {
obj.init.Logf("could not open: %s", p)
sendHTTPError(w, err)
return
}
defer handle.Close() // ignore error
// Determine the last-modified time if we can.
modtime := time.Now()
fi, err := handle.Stat()
if err == nil {
modtime = fi.ModTime()
}
// TODO: if Stat errors, should we fail the whole thing?
// XXX: is requestPath what we want for the name field?
http.ServeContent(w, req, requestPath, modtime, handle)
//obj.init.Logf("%d bytes sent", n) // XXX: how do we know (on the server-side) if it worked?
return
}
// Validate checks if the resource data structure was populated correctly.
func (obj *HTTPServerRes) Validate() error {
if obj.getAddress() == "" {
return fmt.Errorf("empty address")
}
host, _, err := net.SplitHostPort(obj.getAddress())
if err != nil {
return errwrap.Wrapf(err, "the Address is in an invalid format: %s", obj.getAddress())
}
if host != "" {
// TODO: should we allow fqdn's here?
ip := net.ParseIP(host)
if ip == nil {
return fmt.Errorf("the Address is not a valid IP: %s", host)
}
}
if obj.Root != "" && !strings.HasPrefix(obj.Root, "/") {
return fmt.Errorf("the Root must be absolute")
}
if obj.Root != "" && !strings.HasSuffix(obj.Root, "/") {
return fmt.Errorf("the Root must be a dir")
}
// XXX: validate that the autogrouped resources don't have paths that
// conflict with each other. We can only have a single unique entry for
// what handles a /whatever URL.
return nil
}
// Init runs some startup code for this resource.
func (obj *HTTPServerRes) Init(init *engine.Init) error {
obj.init = init // save for later
// No need to error in Validate if Timeout is ignored, but log it.
// These are all specified, so Timeout effectively does nothing.
a := obj.ReadTimeout != nil
b := obj.WriteTimeout != nil
c := obj.ShutdownTimeout != nil
if obj.Timeout != nil && (a && b && c) {
obj.init.Logf("the Timeout param is being ignored")
}
// NOTE: If we don't Init anything that's autogrouped, then it won't
// even get an Init call on it.
obj.eventsChanMap = make(map[engine.Res]chan error)
// TODO: should we do this in the engine? Do we want to decide it here?
for _, res := range obj.GetGroup() { // grouped elements
// NOTE: We build a new init, but it's not complete. We only add
// what we're planning to use, and we ignore the rest for now...
r := res // bind the variable!
obj.eventsChanMap[r] = make(chan error)
event := func() {
select {
case obj.eventsChanMap[r] <- nil:
// send!
}
// We don't do this here (why?); instead we read from the
// above channel and then send on multiplexedChan to the
// main loop, where it runs the obj.init.Event function.
//obj.init.Event() // notify engine of an event (this can block)
}
newInit := &engine.Init{
Program: obj.init.Program,
Version: obj.init.Version,
Hostname: obj.init.Hostname,
// Watch:
Running: event,
Event: event,
// CheckApply:
Refresh: func() bool {
innerRes, ok := r.(engine.RefreshableRes)
if !ok {
panic("res does not support the Refreshable trait")
}
return innerRes.Refresh()
},
Send: engine.GenerateSendFunc(r),
Recv: engine.GenerateRecvFunc(r), // unused
FilteredGraph: func() (*pgraph.Graph, error) {
panic("FilteredGraph for HTTP not implemented")
},
Local: obj.init.Local,
World: obj.init.World,
//VarDir: obj.init.VarDir, // TODO: wrap this
Debug: obj.init.Debug,
Logf: func(format string, v ...interface{}) {
obj.init.Logf(r.String()+": "+format, v...)
},
}
if err := res.Init(newInit); err != nil {
return errwrap.Wrapf(err, "autogrouped Init failed")
}
}
obj.interruptChan = make(chan struct{})
return nil
}
// Cleanup is run by the engine to clean up after the resource is done.
func (obj *HTTPServerRes) Cleanup() error {
return nil
}
// Watch is the primary listener for this resource and it outputs events.
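// The overall flow (as implemented below, summarized here for readability):
// open the TCP listener, run Watch on each autogrouped resource and multiplex
// their events onto a single channel, serve HTTP until the context is
// cancelled or Interrupt fires, and then shut the server down gracefully,
// bounded by ShutdownTimeout if one was set.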
func (obj *HTTPServerRes) Watch(ctx context.Context) error {
// TODO: I think we could replace all this with:
//obj.conn, err := net.Listen("tcp", obj.getAddress())
// ...but what is the advantage?
addr, err := net.ResolveTCPAddr("tcp", obj.getAddress())
if err != nil {
return errwrap.Wrapf(err, "could not resolve address")
}
obj.conn, err = net.ListenTCP("tcp", addr)
if err != nil {
return errwrap.Wrapf(err, "could not start listener")
}
defer obj.conn.Close()
obj.serveMux = http.NewServeMux() // do it here in case Watch restarts!
// TODO: We could consider having the obj.GetGroup loop here, instead of
// essentially having our own "router" API with AcceptHTTP.
obj.serveMux.HandleFunc("/", obj.handler())
readTimeout := uint64(0)
if i := obj.getReadTimeout(); i != nil {
readTimeout = *i
}
writeTimeout := uint64(0)
if i := obj.getWriteTimeout(); i != nil {
writeTimeout = *i
}
obj.server = &http.Server{
Addr: obj.getAddress(),
Handler: obj.serveMux,
ReadTimeout: time.Duration(readTimeout) * time.Second,
WriteTimeout: time.Duration(writeTimeout) * time.Second,
//MaxHeaderBytes: 1 << 20, XXX: should we add a param for this?
}
multiplexedChan := make(chan error)
defer close(multiplexedChan) // closes after everyone below us is finished
wg := &sync.WaitGroup{}
defer wg.Wait()
for _, r := range obj.GetGroup() { // grouped elements
res := r // optional in newer golang
wg.Add(1)
go func() {
defer wg.Done()
defer close(obj.eventsChanMap[res]) // where Watch sends events
if err := res.Watch(ctx); err != nil {
select {
case multiplexedChan <- err:
case <-ctx.Done():
}
}
}()
// wait for Watch's first Running() call, or an immediate error...
select {
case <-obj.eventsChanMap[res]: // triggers on start or on err...
}
wg.Add(1)
go func() {
defer wg.Done()
for {
var ok bool
var err error
select {
// receive
case err, ok = <-obj.eventsChanMap[res]:
if !ok {
return
}
}
// send (multiplex)
select {
case multiplexedChan <- err:
case <-ctx.Done():
return
}
}
}()
}
// we block until all the children are started first...
obj.init.Running() // when started, notify engine that we're running
var closeError error
closeSignal := make(chan struct{})
shutdownChan := make(chan struct{}) // server shutdown finished signal
wg.Add(1)
go func() {
defer wg.Done()
select {
case <-obj.interruptChan:
// TODO: should we bubble up the error from Close?
// TODO: do we need a mutex around this Close?
obj.server.Close() // kill it quickly!
case <-shutdownChan:
// let this exit
}
}()
wg.Add(1)
go func() {
defer wg.Done()
defer close(closeSignal)
err := obj.server.Serve(obj.conn) // blocks until Shutdown() is called!
if err == nil || err == http.ErrServerClosed {
return
}
// if this returned on its own, then closeSignal can be used...
closeError = errwrap.Wrapf(err, "the server errored")
}()
// When Shutdown is called, Serve, ListenAndServe, and ListenAndServeTLS
// immediately return ErrServerClosed. Make sure the program doesn't
// exit and waits instead for Shutdown to return.
defer func() {
defer close(shutdownChan) // signal that shutdown is finished
innerCtx := context.Background()
if i := obj.getShutdownTimeout(); i != nil && *i > 0 {
var cancel context.CancelFunc
innerCtx, cancel = context.WithTimeout(innerCtx, time.Duration(*i)*time.Second)
defer cancel()
}
err := obj.server.Shutdown(innerCtx) // shutdown gracefully
if err == context.DeadlineExceeded {
// TODO: should we bubble up the error from Close?
// TODO: do we need a mutex around this Close?
obj.server.Close() // kill it now
}
}()
startupChan := make(chan struct{})
close(startupChan) // send one initial signal
for {
if obj.init.Debug {
obj.init.Logf("Looping...")
}
select {
case <-startupChan:
startupChan = nil
case err, ok := <-multiplexedChan:
if !ok { // shouldn't happen
multiplexedChan = nil
continue
}
if err != nil {
return err
}
case <-closeSignal: // something shut us down early
return closeError
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
obj.init.Event() // notify engine of an event (this can block)
}
}
// CheckApply has nothing to apply for this resource itself, so it normally succeeds.
// It does however check that certain runtime requirements (such as the Root dir
// existing if one was specified) are fulfilled. If there are any autogrouped
// resources, those will be recursively called so that they can send/recv.
func (obj *HTTPServerRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
if obj.init.Debug {
obj.init.Logf("CheckApply")
}
// XXX: We don't want the initial CheckApply to return true until the
// Watch has started up, so we must block here until that's the case...
// Cheap runtime validation!
// XXX: maybe only do this once to avoid repeated, unnecessary checks?
if obj.Root != "" {
fileInfo, err := os.Stat(obj.Root)
if err != nil {
return false, errwrap.Wrapf(err, "can't stat Root dir")
}
if !fileInfo.IsDir() {
return false, fmt.Errorf("the Root path is not a dir")
}
}
checkOK := true
for _, res := range obj.GetGroup() { // grouped elements
if c, err := res.CheckApply(ctx, apply); err != nil {
return false, errwrap.Wrapf(err, "autogrouped CheckApply failed")
} else if !c {
checkOK = false
}
}
return checkOK, nil
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *HTTPServerRes) Cmp(r engine.Res) error {
// we can only compare HTTPServerRes to others of the same resource kind
res, ok := r.(*HTTPServerRes)
if !ok {
return fmt.Errorf("res is not the same kind")
}
if obj.Address != res.Address {
return fmt.Errorf("the Address differs")
}
if (obj.Timeout == nil) != (res.Timeout == nil) { // xor
return fmt.Errorf("the Timeout differs")
}
if obj.Timeout != nil && res.Timeout != nil {
if *obj.Timeout != *res.Timeout { // compare the values
return fmt.Errorf("the value of Timeout differs")
}
}
if (obj.ReadTimeout == nil) != (res.ReadTimeout == nil) {
return fmt.Errorf("the ReadTimeout differs")
}
if obj.ReadTimeout != nil && res.ReadTimeout != nil {
if *obj.ReadTimeout != *res.ReadTimeout {
return fmt.Errorf("the value of ReadTimeout differs")
}
}
if (obj.WriteTimeout == nil) != (res.WriteTimeout == nil) {
return fmt.Errorf("the WriteTimeout differs")
}
if obj.WriteTimeout != nil && res.WriteTimeout != nil {
if *obj.WriteTimeout != *res.WriteTimeout {
return fmt.Errorf("the value of WriteTimeout differs")
}
}
if (obj.ShutdownTimeout == nil) != (res.ShutdownTimeout == nil) {
return fmt.Errorf("the ShutdownTimeout differs")
}
if obj.ShutdownTimeout != nil && res.ShutdownTimeout != nil {
if *obj.ShutdownTimeout != *res.ShutdownTimeout {
return fmt.Errorf("the value of ShutdownTimeout differs")
}
}
// TODO: We could do this sort of thing to skip checking Timeout when it
// is not used, but for the moment, this is overkill and not needed yet.
//a := obj.ReadTimeout != nil
//b := obj.WriteTimeout != nil
//c := obj.ShutdownTimeout != nil
//if !(obj.Timeout != nil && (a && b && c)) {
// // the Timeout param is not being ignored
//}
if obj.Root != res.Root {
return fmt.Errorf("the Root differs")
}
return nil
}
// Interrupt is called to ask the execution of this resource to end early. It
// will cause the server Shutdown to end abruptly instead of letting open client
// connections terminate gracefully. It does this by causing the server Close
// method to run.
func (obj *HTTPServerRes) Interrupt() error {
close(obj.interruptChan) // this should cause obj.server.Close() to run!
return nil
}
// Copy copies the resource. Don't call it directly, use engine.ResCopy instead.
// TODO: should this copy internal state?
func (obj *HTTPServerRes) Copy() engine.CopyableRes {
var timeout, readTimeout, writeTimeout, shutdownTimeout *uint64
if obj.Timeout != nil {
x := *obj.Timeout
timeout = &x
}
if obj.ReadTimeout != nil {
x := *obj.ReadTimeout
readTimeout = &x
}
if obj.WriteTimeout != nil {
x := *obj.WriteTimeout
writeTimeout = &x
}
if obj.ShutdownTimeout != nil {
x := *obj.ShutdownTimeout
shutdownTimeout = &x
}
return &HTTPServerRes{
Address: obj.Address,
Timeout: timeout,
ReadTimeout: readTimeout,
WriteTimeout: writeTimeout,
ShutdownTimeout: shutdownTimeout,
Root: obj.Root,
}
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *HTTPServerRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes HTTPServerRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*HTTPServerRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to HTTPServerRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = HTTPServerRes(raw) // restore from indirection with type conversion!
return nil
}
// GroupCmp returns whether two resources can be grouped together or not. Can
// these two resources be merged, aka, does this resource support doing so? Will
// the given resource allow itself to be grouped _into_ this obj?
func (obj *HTTPServerRes) GroupCmp(r engine.GroupableRes) error {
res, ok := r.(HTTPServerGroupableRes) // different from what we usually do!
if !ok {
return fmt.Errorf("resource is not the right kind")
}
// If the http resource has the parent name field specified, then it
// must match against our name field if we want it to group with us.
if pn := res.ParentName(); pn != "" && pn != obj.Name() {
return fmt.Errorf("resource groups with a different parent name")
}
// http:server:foo is okay, but file or config:etcd is not
if !strings.HasPrefix(r.Kind(), httpServerKind+":") {
return fmt.Errorf("not one of our children")
}
// http:server:foo is okay, but http:server:foo:bar is not
p1 := httpServerKind + ":"
s1 := strings.TrimPrefix(r.Kind(), p1)
if len(s1) != len(r.Kind()) && strings.Count(s1, ":") > 0 { // has prefix
return fmt.Errorf("maximum one resource after `%s` prefix", httpServerKind)
}
//// http:foo is okay, but http:foo:bar is not
//p2 := httpServerKind + ":"
//s2 := strings.TrimPrefix(r.Kind(), p2)
//if len(s2) != len(r.Kind()) && strings.Count(s2, ":") > 0 { // has prefix
// return fmt.Errorf("maximum one resource after `%s` prefix", httpServerKind)
//}
return nil
}
// handler returns the http handler function that serves all the incoming
// download requests from clients.
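// The dispatch order below is: each autogrouped resource is offered the
// request via AcceptHTTP and the first one to accept serves it; otherwise the
// Root directory handler is tried; otherwise a 404 is returned.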
func (obj *HTTPServerRes) handler() func(http.ResponseWriter, *http.Request) {
// TODO: we could statically pre-compute some stuff here...
return func(w http.ResponseWriter, req *http.Request) {
if obj.init.Debug {
obj.init.Logf("Client: %s", req.RemoteAddr)
}
// TODO: would this leak anything security sensitive in our log?
obj.init.Logf("URL: %s", req.URL)
requestPath := req.URL.Path // TODO: is this what we want here?
if obj.init.Debug {
obj.init.Logf("Path: %s", requestPath)
}
// Look through the autogrouped resources!
// TODO: can we improve performance by only searching here once?
for _, x := range obj.GetGroup() { // grouped elements
res, ok := x.(HTTPServerGroupableRes) // convert from Res
if !ok {
continue
}
if obj.init.Debug {
obj.init.Logf("Got grouped resource: %s", res.String())
}
err := res.AcceptHTTP(req)
if err == nil {
res.ServeHTTP(w, req)
return
}
if obj.init.Debug {
obj.init.Logf("Could not serve: %+v", err)
}
//continue // not me
}
// Look in root if we have one, and we haven't got a file yet...
err := obj.AcceptHTTP(req)
if err == nil {
obj.ServeHTTP(w, req)
return
}
if obj.init.Debug {
obj.init.Logf("Could not serve Root: %+v", err)
}
// We never found something to serve...
if obj.init.Debug || true { // XXX: maybe we should always do this?
obj.init.Logf("File not found: %s", requestPath)
}
http.NotFound(w, req)
return
}
}

@@ -0,0 +1,339 @@
// Mgmt
// Copyright (C) James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//
// Additional permission under GNU GPL version 3 section 7
//
// If you modify this program, or any covered work, by linking or combining it
// with embedded mcl code and modules (and that the embedded mcl code and
// modules which link with this program, contain a copy of their source code in
// the authoritative form) containing parts covered by the terms of any other
// license, the licensors of this program grant you additional permission to
// convey the resulting work. Furthermore, the licensors of this program grant
// the original author, James Shubin, additional permission to update this
// additional permission if he deems it necessary to achieve the goals of this
// additional permission.
package resources
import (
"bytes"
"context"
"fmt"
"io"
"net/http"
"os"
"strings"
"time"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
"github.com/purpleidea/mgmt/util/safepath"
)
const (
httpServerFileKind = httpServerKind + ":file"
)
func init() {
engine.RegisterResource(httpServerFileKind, func() engine.Res { return &HTTPServerFileRes{} })
}
var _ HTTPServerGroupableRes = &HTTPServerFileRes{} // compile time check
// HTTPServerFileRes is a file that exists within an http server. The name is
// used as the public path of the file, unless the filename field is specified,
// and in that case it is used instead. The way this works is that it autogroups
// at runtime with an existing http resource, and in doing so makes the file
// associated with this resource available for serving from that http server.
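//
// A minimal mcl sketch of how this might be used (illustrative only; exact
// field defaults and syntax are not verified here):
//
//	http:server:file "/hello.txt" {
//		data => "hello, world\n",
//	}
//
// which would serve that string at the /hello.txt path of the grouped server.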
type HTTPServerFileRes struct {
traits.Base // add the base methods without re-implementation
traits.Edgeable // XXX: add autoedge support
traits.Groupable // can be grouped into HTTPServerRes
init *engine.Init
// Server is the name of the http server resource to group this into. If
// it is omitted, and there is only a single http resource, then it will
// be grouped into it automatically. If there is more than one main http
// resource being used, then the grouping behaviour is *undefined* when
// this is not specified, and it is not recommended to leave this blank!
Server string `lang:"server" yaml:"server"`
// Filename is the name of the file this data should appear as on the
// http server.
Filename string `lang:"filename" yaml:"filename"`
// Path is the absolute path to a file that should be used as the source
// for this file resource. It must not be combined with the data field.
// If this corresponds to a directory, then it will be used as a root dir
// that will be served as long as the resource name or Filename are also
// a directory ending with a slash.
Path string `lang:"path" yaml:"path"`
// Data is the file content that should be used as the source for this
// file resource. It must not be combined with the path field.
// TODO: should this be []byte instead?
Data string `lang:"data" yaml:"data"`
}
// Default returns some sensible defaults for this resource.
func (obj *HTTPServerFileRes) Default() engine.Res {
return &HTTPServerFileRes{}
}
// getPath returns the actual path we respond to. When Filename is not
// specified, we use the Name. Note that this is the filename that will be seen
// on the http server, it is *not* the source path to the actual file contents
// being sent by the server.
func (obj *HTTPServerFileRes) getPath() string {
if obj.Filename != "" {
return obj.Filename
}
return obj.Name()
}
// getContent returns the content that we expect from this resource. It depends
// on whether the user specified the Path or Data fields, and whether the Path
// exists or not.
func (obj *HTTPServerFileRes) getContent(requestPath safepath.AbsPath) (io.ReadSeeker, error) {
if obj.Path != "" && obj.Data != "" {
// programming error! this should have been caught in Validate!
return nil, fmt.Errorf("must not specify Path and Data")
}
if obj.Data != "" {
return bytes.NewReader([]byte(obj.Data)), nil
}
absFile, err := obj.getContentRelative(requestPath)
if err != nil { // on error, we just assume no root/prefix stuff happens
return os.Open(obj.Path)
}
return os.Open(absFile.Path())
}
// getContentRelative takes a request, and returns the absolute path to the file
// that we want to request, if it's safely under what we can provide.
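// For example (illustrative values): with Path set to "/srv/www/" and a public
// path of "/files/" (from the name or Filename field), a request for
// "/files/a/b.txt" has the "/files/" prefix stripped, and the remaining
// "a/b.txt" is joined onto "/srv/www/" to give "/srv/www/a/b.txt".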
func (obj *HTTPServerFileRes) getContentRelative(requestPath safepath.AbsPath) (safepath.AbsFile, error) {
// the location on disk of the data
srcPath, err := safepath.SmartParseIntoPath(obj.Path) // (safepath.Path, error)
if err != nil {
return safepath.AbsFile{}, err
}
srcAbsDir, ok := srcPath.(safepath.AbsDir)
if !ok {
return safepath.AbsFile{}, fmt.Errorf("the Path is not an abs dir")
}
// the public path we respond to (might be a dir prefix or just a file)
pubPath, err := safepath.SmartParseIntoPath(obj.getPath()) // (safepath.Path, error)
if err != nil {
return safepath.AbsFile{}, err
}
pubAbsDir, ok := pubPath.(safepath.AbsDir)
if !ok {
return safepath.AbsFile{}, fmt.Errorf("the name is not an abs dir")
}
// is the request underneath what we're providing?
if !safepath.HasPrefix(requestPath, pubAbsDir) {
return safepath.AbsFile{}, fmt.Errorf("wrong prefix")
}
// make the delta
delta, err := safepath.StripPrefix(requestPath, pubAbsDir) // (safepath.Path, error)
if err != nil {
return safepath.AbsFile{}, err
}
relFile, ok := delta.(safepath.RelFile)
if !ok {
return safepath.AbsFile{}, fmt.Errorf("the delta is not a rel file")
}
return safepath.JoinToAbsFile(srcAbsDir, relFile), nil // AbsFile
}
// ParentName is used to limit which resources autogroup into this one. If it's
// empty then it's ignored, otherwise it must match the Name of the parent to
// get grouped.
func (obj *HTTPServerFileRes) ParentName() string {
return obj.Server
}
// AcceptHTTP determines whether we will respond to this request. Return nil to
// accept, or any error to pass.
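// For example (illustrative values): if Path is a directory like "/srv/www/"
// and the public path is "/files/", then any request under "/files/" is
// accepted; if Path points at a single file, only an exact match on getPath()
// is accepted.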
func (obj *HTTPServerFileRes) AcceptHTTP(req *http.Request) error {
requestPath := req.URL.Path // TODO: is this what we want here?
if strings.HasSuffix(obj.Path, "/") { // a dir!
if strings.HasPrefix(requestPath, obj.getPath()) {
// relative dir root
return nil
}
}
if requestPath != obj.getPath() {
return fmt.Errorf("unhandled path")
}
return nil
}
// ServeHTTP is the standard HTTP handler that will be used here.
func (obj *HTTPServerFileRes) ServeHTTP(w http.ResponseWriter, req *http.Request) {
// We only allow GET at the moment.
if req.Method != http.MethodGet {
w.WriteHeader(http.StatusMethodNotAllowed)
return
}
requestPath := req.URL.Path // TODO: is this what we want here?
absPath, err := safepath.ParseIntoAbsPath(requestPath)
if err != nil {
obj.init.Logf("invalid input path: %s", requestPath)
sendHTTPError(w, err)
return
}
handle, err := obj.getContent(absPath)
if err != nil {
obj.init.Logf("could not get content for: %s", requestPath)
sendHTTPError(w, err)
return
}
//if readSeekCloser, ok := handle.(io.ReadSeekCloser); ok { // same
// defer readSeekCloser.Close() // ignore error
//}
if closer, ok := handle.(io.Closer); ok {
defer closer.Close() // ignore error
}
// Determine the last-modified time if we can.
modtime := time.Now()
if f, ok := handle.(*os.File); ok {
fi, err := f.Stat()
if err == nil {
modtime = fi.ModTime()
}
// TODO: if Stat errors, should we fail the whole thing?
}
// XXX: is requestPath what we want for the name field?
http.ServeContent(w, req, requestPath, modtime, handle)
//obj.init.Logf("%d bytes sent", n) // XXX: how do we know (on the server-side) if it worked?
}
// Validate checks if the resource data structure was populated correctly.
func (obj *HTTPServerFileRes) Validate() error {
if obj.getPath() == "" {
return fmt.Errorf("empty filename")
}
// FIXME: does getPath need to start with a slash?
if obj.Path != "" && !strings.HasPrefix(obj.Path, "/") {
return fmt.Errorf("the Path must be absolute")
}
if obj.Path != "" && obj.Data != "" {
return fmt.Errorf("must not specify Path and Data")
}
// NOTE: if obj.Path == "" && obj.Data == "" then we have an empty file!
return nil
}
// Init runs some startup code for this resource.
func (obj *HTTPServerFileRes) Init(init *engine.Init) error {
obj.init = init // save for later
return nil
}
// Cleanup is run by the engine to clean up after the resource is done.
func (obj *HTTPServerFileRes) Cleanup() error {
return nil
}
// Watch is the primary listener for this resource and it outputs events. This
// particular one does absolutely nothing but block until we've received a done
// signal.
func (obj *HTTPServerFileRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
select {
case <-ctx.Done(): // closed by the engine to signal shutdown
}
//obj.init.Event() // notify engine of an event (this can block)
return nil
}
// CheckApply never has anything to do for this resource, so it always succeeds.
func (obj *HTTPServerFileRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
if obj.init.Debug {
obj.init.Logf("CheckApply")
}
return true, nil // always succeeds, with nothing to do!
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *HTTPServerFileRes) Cmp(r engine.Res) error {
// we can only compare HTTPServerFileRes to others of the same resource kind
res, ok := r.(*HTTPServerFileRes)
if !ok {
return fmt.Errorf("res is not the same kind")
}
if obj.Server != res.Server {
return fmt.Errorf("the Server field differs")
}
if obj.Filename != res.Filename {
return fmt.Errorf("the Filename differs")
}
if obj.Path != res.Path {
return fmt.Errorf("the Path differs")
}
if obj.Data != res.Data {
return fmt.Errorf("the Data differs")
}
return nil
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *HTTPServerFileRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes HTTPServerFileRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*HTTPServerFileRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to HTTPServerFileRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = HTTPServerFileRes(raw) // restore from indirection with type conversion!
return nil
}

@@ -38,29 +38,33 @@ import (
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util/errwrap"
)
const (
httpFlagKind = httpKind + ":flag"
httpServerFlagKind = httpServerKind + ":flag"
)
func init() {
engine.RegisterResource(httpFlagKind, func() engine.Res { return &HTTPFlagRes{} })
engine.RegisterResource(httpServerFlagKind, func() engine.Res { return &HTTPServerFlagRes{} })
}
// HTTPFlagRes is a special path that exists within an http server. The name is
// used as the public path of the flag, unless the path field is specified, and
// in that case it is used instead. The way this works is that it autogroups at
// runtime with an existing http resource, and in doing so makes the flag
// associated with this resource available to cause actions when it receives a
// request on that http server. If you create a flag which responds to the same
// type of request as an http:file resource or any other kind of resource, it is
// undefined behaviour which will answer the request. The most common clash will
// happen if both are present at the same path.
type HTTPFlagRes struct {
var _ HTTPServerGroupableRes = &HTTPServerFlagRes{} // compile time check
// HTTPServerFlagRes is a special path that exists within an http server. The
// name is used as the public path of the flag, unless the path field is
// specified, and in that case it is used instead. The way this works is that it
// autogroups at runtime with an existing http resource, and in doing so makes
// the flag associated with this resource available to cause actions when it
// receives a request on that http server. If you create a flag which responds
// to the same type of request as an http:server:file resource or any other kind
// of resource, it is undefined behaviour which will answer the request. The
// most common clash will happen if both are present at the same path.
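// An illustrative request flow (pieced together from the surrounding code;
// some steps are outside this diff): an http client POSTs a form value under
// the key named by the Key field to this resource's path; ServeHTTP caches
// that value, Watch emits an event, and the next CheckApply makes the value
// available over send/recv as `value`.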
type HTTPServerFlagRes struct {
traits.Base // add the base methods without re-implementation
traits.Edgeable // XXX: add autoedge support
traits.Groupable // can be grouped into HTTPServerRes
traits.Groupable // can be grouped into HTTPServerRes or itself
traits.Sendable
init *engine.Init
@@ -81,20 +85,26 @@ type HTTPFlagRes struct {
// TODO: consider adding a method selection field
//Method string `lang:"method" yaml:"method"`
mutex *sync.Mutex // guard the value
value *string // cached value
previousValue *string
mutex *sync.Mutex // guard the values
eventStream chan error
//value *string // cached value
//prevValue *string // previous value
// TODO: do the values need to be pointers?
mapResKey map[*HTTPServerFlagRes]string // flagRes not Res
mapResPrev map[*HTTPServerFlagRes]*string
mapResValue map[*HTTPServerFlagRes]*string
}
// Default returns some sensible defaults for this resource.
func (obj *HTTPFlagRes) Default() engine.Res {
return &HTTPFlagRes{}
func (obj *HTTPServerFlagRes) Default() engine.Res {
return &HTTPServerFlagRes{}
}
// getPath returns the actual path we respond to. When Path is not specified, we
// use the Name.
func (obj *HTTPFlagRes) getPath() string {
func (obj *HTTPServerFlagRes) getPath() string {
if obj.Path != "" {
return obj.Path
}
@@ -104,13 +114,17 @@ func (obj *HTTPFlagRes) getPath() string {
// ParentName is used to limit which resources autogroup into this one. If it's
// empty then it's ignored, otherwise it must match the Name of the parent to
// get grouped.
func (obj *HTTPFlagRes) ParentName() string {
func (obj *HTTPServerFlagRes) ParentName() string {
return obj.Server
}
// AcceptHTTP determines whether we will respond to this request. Return nil to
// accept, or any error to pass.
func (obj *HTTPFlagRes) AcceptHTTP(req *http.Request) error {
func (obj *HTTPServerFlagRes) AcceptHTTP(req *http.Request) error {
// NOTE: We don't need to look at anyone that might be autogrouped,
// because for them to autogroup, they must share the same path! The
// idea is that they're part of the same request of course...
requestPath := req.URL.Path // TODO: is this what we want here?
if requestPath != obj.getPath() {
return fmt.Errorf("unhandled path")
@@ -125,7 +139,7 @@ func (obj *HTTPFlagRes) AcceptHTTP(req *http.Request) error {
}
// ServeHTTP is the standard HTTP handler that will be used here.
func (obj *HTTPFlagRes) ServeHTTP(w http.ResponseWriter, req *http.Request) {
func (obj *HTTPServerFlagRes) ServeHTTP(w http.ResponseWriter, req *http.Request) {
// We only allow POST at the moment.
if req.Method != http.MethodPost {
w.WriteHeader(http.StatusMethodNotAllowed)
@@ -137,17 +151,23 @@ func (obj *HTTPFlagRes) ServeHTTP(w http.ResponseWriter, req *http.Request) {
// sendHTTPError(w, err)
// return
//}
if obj.Key != "" {
val := req.PostFormValue(obj.Key) // string
for res, key := range obj.mapResKey { // TODO: sort deterministically?
if key == "" {
continue
}
val := req.PostFormValue(key) // string
if obj.init.Debug || true { // XXX: maybe we should always do this?
obj.init.Logf("Got val: %s", val)
obj.init.Logf("got %s: %s", key, val)
}
obj.mutex.Lock()
if val == "" {
obj.value = nil // erase
//obj.value = nil // erase
//delete(obj.mapResValue, res)
obj.mapResValue[res] = nil
} else {
obj.value = &val // store
//obj.value = &val // store
obj.mapResValue[res] = &val // store
}
obj.mutex.Unlock()
// TODO: Should we diff the new value with the previous one to
@@ -166,7 +186,7 @@ func (obj *HTTPFlagRes) ServeHTTP(w http.ResponseWriter, req *http.Request) {
}
// Validate checks if the resource data structure was populated correctly.
func (obj *HTTPFlagRes) Validate() error {
func (obj *HTTPServerFlagRes) Validate() error {
if obj.getPath() == "" {
return fmt.Errorf("empty filename")
}
@@ -179,17 +199,75 @@ func (obj *HTTPFlagRes) Validate() error {
}
// Init runs some startup code for this resource.
func (obj *HTTPFlagRes) Init(init *engine.Init) error {
func (obj *HTTPServerFlagRes) Init(init *engine.Init) error {
obj.init = init // save for later
obj.mutex = &sync.Mutex{}
obj.eventStream = make(chan error, 1) // non-blocking
obj.mapResKey = make(map[*HTTPServerFlagRes]string) // res to key
obj.mapResPrev = make(map[*HTTPServerFlagRes]*string) // res to prev value
obj.mapResValue = make(map[*HTTPServerFlagRes]*string) // res to value
obj.mapResKey[obj] = obj.Key // add "self" res
obj.mapResPrev[obj] = nil
obj.mapResValue[obj] = nil
for _, res := range obj.GetGroup() { // this is a noop if there are none!
flagRes, ok := res.(*HTTPServerFlagRes) // convert from Res
if !ok {
panic(fmt.Sprintf("grouped member %v is not a %s", res, obj.Kind()))
}
r := res // bind the variable!
newInit := &engine.Init{
Program: obj.init.Program,
Version: obj.init.Version,
Hostname: obj.init.Hostname,
// Watch:
//Running: event,
//Event: event,
// CheckApply:
//Refresh: func() bool {
// innerRes, ok := r.(engine.RefreshableRes)
// if !ok {
// panic("res does not support the Refreshable trait")
// }
// return innerRes.Refresh()
//},
Send: engine.GenerateSendFunc(r),
Recv: engine.GenerateRecvFunc(r), // unused
FilteredGraph: func() (*pgraph.Graph, error) {
panic("FilteredGraph for HTTP:Server:Flag not implemented")
},
Local: obj.init.Local,
World: obj.init.World,
//VarDir: obj.init.VarDir, // TODO: wrap this
Debug: obj.init.Debug,
Logf: func(format string, v ...interface{}) {
obj.init.Logf(r.String()+": "+format, v...)
},
}
if err := res.Init(newInit); err != nil {
return errwrap.Wrapf(err, "autogrouped Init failed")
}
obj.mapResKey[flagRes] = flagRes.Key
obj.mapResPrev[flagRes] = nil // initialize as a bonus
obj.mapResValue[flagRes] = nil
}
return nil
}
// Cleanup is run by the engine to clean up after the resource is done.
func (obj *HTTPFlagRes) Cleanup() error {
func (obj *HTTPServerFlagRes) Cleanup() error {
return nil
}
@@ -197,13 +275,12 @@ func (obj *HTTPFlagRes) Cleanup() error {
// particular one listens for events from incoming http requests to the flag,
// and notifies the engine so that CheckApply can then run and return the
// correct value on send/recv.
func (obj *HTTPFlagRes) Watch(ctx context.Context) error {
func (obj *HTTPServerFlagRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
startupChan := make(chan struct{})
close(startupChan) // send one initial signal
var send = false // send event?
for {
if obj.init.Debug {
obj.init.Logf("Looping...")
@@ -212,7 +289,6 @@ func (obj *HTTPFlagRes) Watch(ctx context.Context) error {
select {
case <-startupChan:
startupChan = nil
send = true
case err, ok := <-obj.eventStream:
if !ok { // shouldn't happen
@@ -222,52 +298,75 @@ func (obj *HTTPFlagRes) Watch(ctx context.Context) error {
if err != nil {
return err
}
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply never has anything to do for this resource, so it always succeeds.
func (obj *HTTPFlagRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
if obj.init.Debug || true { // XXX: maybe we should always do this?
obj.init.Logf("value: %+v", obj.value)
func (obj *HTTPServerFlagRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
checkOK := true
// run CheckApply on any grouped elements, or just myself
// TODO: Should we loop in a deterministic order?
for flagRes, key := range obj.mapResKey { // includes the main parent Res
if obj.init.Debug {
obj.init.Logf("key: %+v", key)
}
c, err := flagRes.checkApply(ctx, apply, obj)
if err != nil {
return false, err
}
checkOK = checkOK && c
}
return checkOK, nil
}
// checkApply is the actual implementation, but it's used as a helper to make
// the running of autogrouping easier.
func (obj *HTTPServerFlagRes) checkApply(ctx context.Context, apply bool, parentObj *HTTPServerFlagRes) (bool, error) {
parentObj.mutex.Lock()
objValue := parentObj.mapResValue[obj] // nil if missing
objPrevValue := parentObj.mapResPrev[obj]
if obj.init.Debug {
obj.init.Logf("value: %+v", objValue)
}
// TODO: can we send an empty (nil) value to show it has been removed?
value := "" // not a ptr, because we don't/can't? send a nil value
obj.mutex.Lock()
// first compute if different...
different := false
if (obj.value == nil) != (obj.previousValue == nil) { // xor
if (objValue == nil) != (objPrevValue == nil) { // xor
different = true
} else if obj.value != nil && obj.previousValue != nil {
if *obj.value != *obj.previousValue {
} else if objValue != nil && objPrevValue != nil {
if *objValue != *objPrevValue {
different = true
}
}
// now store in previous
if obj.value == nil {
obj.previousValue = nil
if objValue == nil {
//obj.prevValue = nil
parentObj.mapResPrev[obj] = nil
} else { // a value has been set
v := *obj.value
obj.previousValue = &v // value to cache for future compare
v := *objValue
//obj.prevValue = &v // value to cache for future compare
parentObj.mapResPrev[obj] = &v
value = *obj.value // value for send/recv
value = *objValue // value for send/recv
}
obj.mutex.Unlock()
parentObj.mutex.Unlock()
// Previously, if we graph swapped, as is quite common, we'd lose
// obj.value because the swap would destroy and then re-create and then
@@ -276,7 +375,7 @@ func (obj *HTTPFlagRes) CheckApply(ctx context.Context, apply bool) (bool, error
// As a result, we need to run send/recv on the new graph after
// autogrouping, so that we compare apples to apples, when we do the
// graphsync!
if err := obj.init.Send(&HTTPFlagSends{
if err := obj.init.Send(&HTTPServerFlagSends{
Value: &value,
}); err != nil {
return false, err
@@ -287,9 +386,9 @@ func (obj *HTTPFlagRes) CheckApply(ctx context.Context, apply bool) (bool, error
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *HTTPFlagRes) Cmp(r engine.Res) error {
// we can only compare HTTPFlagRes to others of the same resource kind
res, ok := r.(*HTTPFlagRes)
func (obj *HTTPServerFlagRes) Cmp(r engine.Res) error {
// we can only compare HTTPServerFlagRes to others of the same resource kind
res, ok := r.(*HTTPServerFlagRes)
if !ok {
return fmt.Errorf("res is not the same kind")
}
@@ -307,28 +406,51 @@ func (obj *HTTPFlagRes) Cmp(r engine.Res) error {
return nil
}
// HTTPFlagSends is the struct of data which is sent after a successful Apply.
type HTTPFlagSends struct {
// HTTPServerFlagSends is the struct of data which is sent after a successful
// Apply.
type HTTPServerFlagSends struct {
// Value is the received value being sent.
Value *string `lang:"value"`
}
// Sends represents the default struct of values we can send using Send/Recv.
func (obj *HTTPFlagRes) Sends() interface{} {
return &HTTPFlagSends{
func (obj *HTTPServerFlagRes) Sends() interface{} {
return &HTTPServerFlagSends{
Value: nil,
}
}
// GroupCmp returns whether two resources can be grouped together or not.
func (obj *HTTPServerFlagRes) GroupCmp(r engine.GroupableRes) error {
res, ok := r.(*HTTPServerFlagRes)
if !ok {
return fmt.Errorf("resource is not the same kind")
}
if obj.Server != res.Server {
return fmt.Errorf("resource has a different Server field")
}
if obj.getPath() != res.getPath() {
return fmt.Errorf("resource has a different path")
}
//if obj.Method != res.Method {
// return fmt.Errorf("resource has a different Method field")
//}
return nil
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *HTTPFlagRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes HTTPFlagRes // indirection to avoid infinite recursion
func (obj *HTTPServerFlagRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes HTTPServerFlagRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*HTTPFlagRes) // put in the right format
res, ok := def.(*HTTPServerFlagRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to HTTPFlagRes")
return fmt.Errorf("could not convert to HTTPServerFlagRes")
}
raw := rawRes(*res) // convert; the defaults go here
@@ -336,6 +458,6 @@ func (obj *HTTPFlagRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
return err
}
*obj = HTTPFlagRes(raw) // restore from indirection with type conversion!
*obj = HTTPServerFlagRes(raw) // restore from indirection with type conversion!
return nil
}

@@ -49,47 +49,49 @@ import (
)
const (
httpProxyKind = httpKind + ":proxy"
httpServerProxyKind = httpServerKind + ":proxy"
)
var (
// httpProxyRWMutex synchronizes against reads and writes to the cache.
// httpServerProxyRWMutex synchronizes against reads and writes to the cache.
// TODO: we could instead have a per-cache path individual mutex, but to
// keep things simple for now, we just lumped them all together.
httpProxyRWMutex *sync.RWMutex
httpServerProxyRWMutex *sync.RWMutex
)
func init() {
httpProxyRWMutex = &sync.RWMutex{}
httpServerProxyRWMutex = &sync.RWMutex{}
engine.RegisterResource(httpProxyKind, func() engine.Res { return &HTTPProxyRes{} })
engine.RegisterResource(httpServerProxyKind, func() engine.Res { return &HTTPServerProxyRes{} })
}
// HTTPProxyRes is a resource representing a special path that exists within an
// http server. The name is used as the public path of the endpoint, unless the
// path field is specified, and in that case it is used instead. The way this
// works is that it autogroups at runtime with an existing http resource, and in
// doing so makes the path associated with this resource available when serving
// files. When something under the path is accessed, this is pulled from the
// backing http server, which makes an http client connection if needed to pull
// the authoritative file down, saves it locally for future use, and then
// returns it to the original http client caller. On a subsequent call, if the
// cache was not invalidated, the file doesn't need to be fetched from the
// network. In effect, this works as a caching http proxy. If you create this as
// a resource which responds to the same type of request as an http:file
// resource or any other kind of resource, it is undefined behaviour which will
// answer the request. The most common clash will happen if both are present at
// the same path. This particular implementation stores some file data in memory
// as a convenience instead of streaming directly to clients. This makes locking
// much easier, but is wasteful. If you plan on using this for huge files and on
// systems with low amounts of memory, you might want to optimize this. The
// resultant proxy path is determined by subtracting the `Sub` field from the
// `Path` (and request path) and then appending the result to the `Head` field.
type HTTPProxyRes struct {
var _ HTTPServerGroupableRes = &HTTPServerProxyRes{} // compile time check
// HTTPServerProxyRes is a resource representing a special path that exists
// within an http server. The name is used as the public path of the endpoint,
// unless the path field is specified, and in that case it is used instead. The
// way this works is that it autogroups at runtime with an existing http server
// resource, and in doing so makes the path associated with this resource
// available when serving files. When something under the path is accessed, this
// is pulled from the backing http server, which makes an http client connection
// if needed to pull the authoritative file down, saves it locally for future
// use, and then returns it to the original http client caller. On a subsequent
// call, if the cache was not invalidated, the file doesn't need to be fetched
// from the network. In effect, this works as a caching http proxy. If you
// create this as a resource which responds to the same type of request as an
// http:server:file resource or any other kind of resource, it is undefined
// behaviour which will answer the request. The most common clash will happen if
// both are present at the same path. This particular implementation stores some
// file data in memory as a convenience instead of streaming directly to
// clients. This makes locking much easier, but is wasteful. If you plan on
// using this for huge files and on systems with low amounts of memory, you
// might want to optimize this. The resultant proxy path is determined by
// subtracting the `Sub` field from the `Path` (and request path) and then
// appending the result to the `Head` field.
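// For example (illustrative values only): with Sub set to "/fedora/" and Head
// set to "https://mirror.example.com/pub/fedora/", a request for
// "/fedora/x/y.rpm" would have the Sub portion removed and the remainder
// "x/y.rpm" appended to Head, fetching (and, if a Cache dir is set, caching)
// "https://mirror.example.com/pub/fedora/x/y.rpm".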
type HTTPServerProxyRes struct {
traits.Base // add the base methods without re-implementation
traits.Edgeable // XXX: add autoedge support
traits.Groupable // can be grouped into HTTPServerRes
traits.Sendable
init *engine.Init
@@ -137,13 +139,13 @@ type HTTPProxyRes struct {
}
// Default returns some sensible defaults for this resource.
func (obj *HTTPProxyRes) Default() engine.Res {
return &HTTPProxyRes{}
func (obj *HTTPServerProxyRes) Default() engine.Res {
return &HTTPServerProxyRes{}
}
// getPath returns the actual path we respond to. When Path is not specified, we
// use the Name.
func (obj *HTTPProxyRes) getPath() string {
func (obj *HTTPServerProxyRes) getPath() string {
if obj.Path != "" {
return obj.Path
}
@@ -152,7 +154,7 @@ func (obj *HTTPProxyRes) getPath() string {
// serveHTTP is the real implementation of ServeHTTP, but with a more ergonomic
// signature.
func (obj *HTTPProxyRes) serveHTTP(ctx context.Context, requestPath string) (handlerFuncError, error) {
func (obj *HTTPServerProxyRes) serveHTTP(ctx context.Context, requestPath string) (handlerFuncError, error) {
// TODO: switch requestPath to use safepath.AbsPath instead of a string
result, err := obj.pathParser.parse(requestPath)
@@ -238,8 +240,8 @@ func (obj *HTTPProxyRes) serveHTTP(ctx context.Context, requestPath string) (han
writers := []io.Writer{w} // out to the client
if obj.Cache != "" { // check in the cache...
httpProxyRWMutex.Lock()
defer httpProxyRWMutex.Unlock()
httpServerProxyRWMutex.Lock()
defer httpServerProxyRWMutex.Unlock()
// store in cachePath
if err := os.MkdirAll(filepath.Dir(cachePath), 0700); err != nil {
@@ -324,11 +326,11 @@ func (obj *HTTPProxyRes) serveHTTP(ctx context.Context, requestPath string) (han
// getCachedFile pulls a file from our local cache if it exists. It returns the
// correct http handler on success, which we can then run.
func (obj *HTTPProxyRes) getCachedFile(ctx context.Context, absPath string) (handlerFuncError, error) {
func (obj *HTTPServerProxyRes) getCachedFile(ctx context.Context, absPath string) (handlerFuncError, error) {
// TODO: if infinite reads keep coming in, do we indefinitely-postpone
// the locking so that a new file can be saved in the cache?
httpProxyRWMutex.RLock()
defer httpProxyRWMutex.RUnlock()
httpServerProxyRWMutex.RLock()
defer httpServerProxyRWMutex.RUnlock()
f, err := os.Open(absPath)
if err != nil {
@@ -362,13 +364,13 @@ func (obj *HTTPProxyRes) getCachedFile(ctx context.Context, absPath string) (han
// ParentName is used to limit which resources autogroup into this one. If it's
// empty then it's ignored, otherwise it must match the Name of the parent to
// get grouped.
func (obj *HTTPProxyRes) ParentName() string {
func (obj *HTTPServerProxyRes) ParentName() string {
return obj.Server
}
// AcceptHTTP determines whether we will respond to this request. Return nil to
// accept, or any error to pass.
func (obj *HTTPProxyRes) AcceptHTTP(req *http.Request) error {
func (obj *HTTPServerProxyRes) AcceptHTTP(req *http.Request) error {
requestPath := req.URL.Path // TODO: is this what we want here?
if p := obj.getPath(); strings.HasSuffix(p, "/") { // a dir!
@@ -385,7 +387,7 @@ func (obj *HTTPProxyRes) AcceptHTTP(req *http.Request) error {
}
// ServeHTTP is the standard HTTP handler that will be used here.
func (obj *HTTPProxyRes) ServeHTTP(w http.ResponseWriter, req *http.Request) {
func (obj *HTTPServerProxyRes) ServeHTTP(w http.ResponseWriter, req *http.Request) {
// We only allow GET at the moment.
if req.Method != http.MethodGet {
w.WriteHeader(http.StatusMethodNotAllowed)
@@ -420,7 +422,7 @@ func (obj *HTTPProxyRes) ServeHTTP(w http.ResponseWriter, req *http.Request) {
}
// Validate checks if the resource data structure was populated correctly.
func (obj *HTTPProxyRes) Validate() error {
func (obj *HTTPServerProxyRes) Validate() error {
if obj.getPath() == "" {
return fmt.Errorf("empty filename")
}
@@ -450,7 +452,7 @@ func (obj *HTTPProxyRes) Validate() error {
}
// Init runs some startup code for this resource.
func (obj *HTTPProxyRes) Init(init *engine.Init) error {
func (obj *HTTPServerProxyRes) Init(init *engine.Init) error {
obj.init = init // save for later
obj.pathParser = &pathParser{
@@ -464,14 +466,14 @@ func (obj *HTTPProxyRes) Init(init *engine.Init) error {
}
// Cleanup is run by the engine to clean up after the resource is done.
func (obj *HTTPProxyRes) Cleanup() error {
func (obj *HTTPServerProxyRes) Cleanup() error {
return nil
}
// Watch is the primary listener for this resource and it outputs events. This
// particular one does absolutely nothing but block until we've received a done
// signal.
func (obj *HTTPProxyRes) Watch(ctx context.Context) error {
func (obj *HTTPServerProxyRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
select {
@@ -484,7 +486,7 @@ func (obj *HTTPProxyRes) Watch(ctx context.Context) error {
}
// CheckApply never has anything to do for this resource, so it always succeeds.
func (obj *HTTPProxyRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
func (obj *HTTPServerProxyRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
if obj.init.Debug {
obj.init.Logf("CheckApply")
}
@@ -493,9 +495,9 @@ func (obj *HTTPProxyRes) CheckApply(ctx context.Context, apply bool) (bool, erro
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *HTTPProxyRes) Cmp(r engine.Res) error {
// we can only compare HTTPProxyRes to others of the same resource kind
res, ok := r.(*HTTPProxyRes)
func (obj *HTTPServerProxyRes) Cmp(r engine.Res) error {
// we can only compare HTTPServerProxyRes to others of the same resource kind
res, ok := r.(*HTTPServerProxyRes)
if !ok {
return fmt.Errorf("res is not the same kind")
}
@@ -520,29 +522,15 @@ func (obj *HTTPProxyRes) Cmp(r engine.Res) error {
return nil
}
// HTTPProxySends is the struct of data which is sent after a successful Apply.
type HTTPProxySends struct {
// Data is the received value being sent.
// TODO: should this be []byte or *[]byte instead?
Data *string `lang:"data"`
}
// Sends represents the default struct of values we can send using Send/Recv.
func (obj *HTTPProxyRes) Sends() interface{} {
return &HTTPProxySends{
Data: nil,
}
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *HTTPProxyRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes HTTPProxyRes // indirection to avoid infinite recursion
func (obj *HTTPServerProxyRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes HTTPServerProxyRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*HTTPProxyRes) // put in the right format
res, ok := def.(*HTTPServerProxyRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to HTTPProxyRes")
return fmt.Errorf("could not convert to HTTPServerProxyRes")
}
raw := rawRes(*res) // convert; the defaults go here
@@ -550,7 +538,7 @@ func (obj *HTTPProxyRes) UnmarshalYAML(unmarshal func(interface{}) error) error
return err
}
*obj = HTTPProxyRes(raw) // restore from indirection with type conversion!
*obj = HTTPServerProxyRes(raw) // restore from indirection with type conversion!
return nil
}

@@ -36,7 +36,7 @@ import (
"testing"
)
func TestHttpProxyPathParser0(t *testing.T) {
func TestHttpServerProxyPathParser0(t *testing.T) {
type test struct { // an individual test
fail bool

@@ -0,0 +1,795 @@
// Mgmt
// Copyright (C) James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//
// Additional permission under GNU GPL version 3 section 7
//
// If you modify this program, or any covered work, by linking or combining it
// with embedded mcl code and modules (and that the embedded mcl code and
// modules which link with this program, contain a copy of their source code in
// the authoritative form) containing parts covered by the terms of any other
// license, the licensors of this program grant you additional permission to
// convey the resulting work. Furthermore, the licensors of this program grant
// the original author, James Shubin, additional permission to update this
// additional permission if he deems it necessary to achieve the goals of this
// additional permission.
package resources
import (
"context"
_ "embed" // embed data with go:embed
"fmt"
"html/template"
"net/http"
"sort"
"strings"
"sync"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/resources/http_server_ui/common"
"github.com/purpleidea/mgmt/engine/resources/http_server_ui/static"
"github.com/purpleidea/mgmt/engine/traits"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util/errwrap"
"github.com/gin-gonic/gin"
)
const (
httpServerUIKind = httpServerKind + ":ui"
httpServerUIIndexHTMLTmpl = "index.html.tmpl"
)
var (
//go:embed http_server_ui/index.html.tmpl
httpServerUIIndexHTMLTmplData string
//go:embed http_server_ui/wasm_exec.js
httpServerUIWasmExecData []byte
//go:embed http_server_ui/main.wasm
httpServerUIMainWasmData []byte
)
func init() {
engine.RegisterResource(httpServerUIKind, func() engine.Res { return &HTTPServerUIRes{} })
// XXX: here for now: https://github.com/gin-gonic/gin/issues/1180
gin.SetMode(gin.ReleaseMode) // for production
}
var _ HTTPServerGroupableRes = &HTTPServerUIRes{} // compile time check
// HTTPServerUIGroupableRes is the interface that you must implement if you want
// to allow a resource the ability to be grouped into the http server ui
// resource. As an added safety, the Kind must also begin with
// "http:server:ui:", and not have more than one colon to avoid accidents of
// unwanted grouping.
type HTTPServerUIGroupableRes interface {
engine.Res
// ParentName is used to limit which resources autogroup into this one.
// If it's empty then it's ignored, otherwise it must match the Name of
// the parent to get grouped.
ParentName() string
// GetKind returns the "kind" of resource that this UI element is. This
// is technically different than the Kind() field, because it can be a
// unique kind that's specific to the HTTP form UI resources.
GetKind() string
// GetID returns the unique ID that this UI element responds to. Note
// that this is NOT replaceable by Name() because this ID is used in
// places that might be public, such as in webui form source code.
GetID() string
// SetValue sends the new value that was obtained from submitting the
// form. This is the raw, unsafe value that you must validate first.
SetValue(context.Context, []string) error
// GetValue gets a string representation for the form value, that we'll
// use in our html form.
GetValue(context.Context) (string, error)
// GetType returns a map that you can use to build the input field in
// the ui.
GetType() map[string]string
// GetSort returns a string that you can use to determine the global
// sorted display order of all the elements in a ui.
GetSort() string
}
// HTTPServerUIResData represents some additional data to attach to the
// resource.
type HTTPServerUIResData struct {
// Title is the generated page title that is displayed to the user.
Title string `lang:"title" yaml:"title"`
// Head is a list of strings to insert into the <head> and </head> tags
// of your page. This string allows HTML, so choose carefully!
// XXX: a *string should allow a partial struct here without having this
// field, but our type unification algorithm isn't this fancy yet...
Head string `lang:"head" yaml:"head"`
}
// HTTPServerUIRes is a web UI resource that exists within an http server. The
// name is used as the public path of the ui, unless the path field is
// specified, and in that case it is used instead. The way this works is that it
// autogroups at runtime with an existing http server resource, and in doing so
// makes the form associated with this resource available for serving from that
// http server.
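// In rough terms (a summary of the implementation, with assumptions about the
// parts not shown here): the resource registers gin GET handlers under
// getPath(), renders the embedded index.html.tmpl for /index.html, presumably
// serves the embedded wasm_exec.js and main.wasm assets as well, and uses the
// notifications map so that long-polling clients are told when values change.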
type HTTPServerUIRes struct {
traits.Base // add the base methods without re-implementation
traits.Edgeable // XXX: add autoedge support
traits.Groupable // can be grouped into HTTPServerRes
init *engine.Init
// Server is the name of the http server resource to group this into. If
// it is omitted, and there is only a single http resource, then it will
// be grouped into it automatically. If there is more than one main http
// resource being used, then the grouping behaviour is *undefined* when
// this is not specified, and it is not recommended to leave this blank!
Server string `lang:"server" yaml:"server"`
// Path is the name of the path that this should be exposed under. For
// example, you might want to name this "/ui/" to expose it as "ui"
// under the server root. This overrides the name variable that is set.
Path string `lang:"path" yaml:"path"`
// Data represents some additional data to attach to the resource.
Data *HTTPServerUIResData `lang:"data" yaml:"data"`
//eventStream chan error
eventsChanMap map[engine.Res]chan error
// notifications contains a channel for every long poller waiting for a
// reply.
notifications map[engine.Res]map[chan struct{}]struct{}
// rwmutex guards the notifications map.
rwmutex *sync.RWMutex
ctx context.Context // set by Watch
}
// Default returns some sensible defaults for this resource.
func (obj *HTTPServerUIRes) Default() engine.Res {
return &HTTPServerUIRes{}
}
// getPath returns the actual path we respond to. When Path is not specified, we
// use the Name. Note that this is the handler path that will be seen on the
// root http server, and this ui application might use a querystring and/or POST
// data as well.
func (obj *HTTPServerUIRes) getPath() string {
if obj.Path != "" {
return obj.Path
}
return obj.Name()
}
// routerPath returns the router path for p, joined after our parent path
// prefix from getPath. For example, with a path of "/ui/", an input of
// "/index.html" maps to "/ui/index.html".
func (obj *HTTPServerUIRes) routerPath(p string) string {
if strings.HasPrefix(p, "/") {
return obj.getPath() + p[1:]
}
return obj.getPath() + p
}
// ParentName is used to limit which resources autogroup into this one. If it's
// empty then it's ignored, otherwise it must match the Name of the parent to
// get grouped.
func (obj *HTTPServerUIRes) ParentName() string {
return obj.Server
}
// AcceptHTTP determines whether we will respond to this request. Return nil to
// accept, or any error to pass.
func (obj *HTTPServerUIRes) AcceptHTTP(req *http.Request) error {
requestPath := req.URL.Path // TODO: is this what we want here?
//if requestPath != obj.getPath() {
// return fmt.Errorf("unhandled path")
//}
if !strings.HasPrefix(requestPath, obj.getPath()) {
return fmt.Errorf("unhandled path")
}
return nil
}
// getResByID returns the grouped resource with the id we're searching for if it
// exists, otherwise nil and false.
func (obj *HTTPServerUIRes) getResByID(id string) (HTTPServerUIGroupableRes, bool) {
for _, x := range obj.GetGroup() { // grouped elements
res, ok := x.(HTTPServerUIGroupableRes) // convert from Res
if !ok {
continue
}
if obj.init.Debug {
obj.init.Logf("Got grouped resource: %s", res.String())
}
if id != res.GetID() {
continue
}
return res, true
}
return nil, false
}
// ginLogger is a helper to get structured logs out of gin.
func (obj *HTTPServerUIRes) ginLogger() gin.HandlerFunc {
return func(c *gin.Context) {
//start := time.Now()
c.Next()
//duration := time.Since(start)
//timestamp := time.Now().Format(time.RFC3339)
method := c.Request.Method
path := c.Request.URL.Path
status := c.Writer.Status()
//latency := duration
clientIP := c.ClientIP()
if obj.init.Debug {
return
}
obj.init.Logf("%v %s %s (%d)", clientIP, method, path, status)
}
}
// getTemplate builds the super template that contains the map of each file name
// so that it can be used easily to send out named, templated documents.
func (obj *HTTPServerUIRes) getTemplate() (*template.Template, error) {
// XXX: get this from somewhere
m := make(map[string]string)
//m["foo.tmpl"] = "hello from file1" // TODO: add more content?
m[httpServerUIIndexHTMLTmpl] = httpServerUIIndexHTMLTmplData // index.html.tmpl
filenames := []string{}
for filename := range m {
filenames = append(filenames, filename)
}
sort.Strings(filenames) // deterministic order
var t *template.Template
// This logic from golang/src/html/template/template.go:parseFiles(...)
for _, filename := range filenames {
data := m[filename]
var tmpl *template.Template
if t == nil {
t = template.New(filename)
}
if filename == t.Name() {
tmpl = t
} else {
tmpl = t.New(filename)
}
if _, err := tmpl.Parse(data); err != nil {
return nil, err
}
}
t = t.Option("missingkey=error") // be thorough
return t, nil
}
// ServeHTTP is the standard HTTP handler that will be used here.
func (obj *HTTPServerUIRes) ServeHTTP(w http.ResponseWriter, req *http.Request) {
// XXX: do all the router bits in Init() if we can...
//gin.SetMode(gin.ReleaseMode) // for production
router := gin.New()
router.Use(obj.ginLogger(), gin.Recovery())
templ, err := obj.getTemplate() // do in init?
if err != nil {
obj.init.Logf("template error: %+v", err)
return
}
router.SetHTMLTemplate(templ)
router.GET(obj.routerPath("/"), func(c *gin.Context) {
c.Redirect(http.StatusMovedPermanently, obj.routerPath("/index.html"))
})
router.GET(obj.routerPath("/index.html"), func(c *gin.Context) {
h := gin.H{}
h["program"] = obj.init.Program
h["version"] = obj.init.Version
h["hostname"] = obj.init.Hostname
h["embedded"] = static.HTTPServerUIStaticEmbedded // true or false
h["title"] = "" // key must be specified
h["path"] = obj.getPath()
if obj.Data != nil {
h["title"] = obj.Data.Title // template var
h["head"] = template.HTML(obj.Data.Head)
}
c.HTML(http.StatusOK, httpServerUIIndexHTMLTmpl, h)
})
router.GET(obj.routerPath("/main.wasm"), func(c *gin.Context) {
c.Data(http.StatusOK, "application/wasm", httpServerUIMainWasmData)
})
router.GET(obj.routerPath("/wasm_exec.js"), func(c *gin.Context) {
// the version of this file has to match compiler version
// the original came from: ~golang/lib/wasm/wasm_exec.js
// XXX: add a test to ensure this matches the compiler version
// the content-type matters or this won't work in the browser
c.Data(http.StatusOK, "text/javascript;charset=UTF-8", httpServerUIWasmExecData)
})
if static.HTTPServerUIStaticEmbedded {
router.GET(obj.routerPath("/"+static.HTTPServerUIIndexBootstrapCSS), func(c *gin.Context) {
c.Data(http.StatusOK, "text/css;charset=UTF-8", static.HTTPServerUIIndexStaticBootstrapCSS)
})
router.GET(obj.routerPath("/"+static.HTTPServerUIIndexBootstrapJS), func(c *gin.Context) {
c.Data(http.StatusOK, "text/javascript;charset=UTF-8", static.HTTPServerUIIndexStaticBootstrapJS)
})
}
router.POST(obj.routerPath("/save/"), func(c *gin.Context) {
id, ok := c.GetPostForm("id")
if !ok || id == "" {
msg := "missing id"
c.JSON(http.StatusBadRequest, gin.H{"error": msg})
return
}
values, ok := c.GetPostFormArray("value")
if !ok {
msg := "missing value"
c.JSON(http.StatusBadRequest, gin.H{"error": msg})
return
}
res, ok := obj.getResByID(id)
if !ok {
msg := fmt.Sprintf("id `%s` not found", id)
c.JSON(http.StatusBadRequest, gin.H{"error": msg})
return
}
// we're storing data...
if err := res.SetValue(obj.ctx, values); err != nil {
msg := fmt.Sprintf("bad data: %v", err)
c.JSON(http.StatusBadRequest, gin.H{"error": msg})
return
}
// XXX: instead of an event to everything, instead if SetValue
// is an active sub resource (instead of something that noop's)
// that should send an event and eventually propagate to here,
// so skip sending this global one...
// Trigger a Watch() event so that CheckApply() calls Send/Recv,
// so our newly received POST value gets sent through the graph.
//select {
//case obj.eventStream <- nil: // send an event
//case <-obj.ctx.Done(): // in case Watch dies
// c.JSON(http.StatusInternalServerError, gin.H{
// "error": "Internal Server Error",
// "code": 500,
// })
//}
c.JSON(http.StatusOK, nil)
})
router.GET(obj.routerPath("/list/"), func(c *gin.Context) {
elements := []*common.FormElement{}
for _, x := range obj.GetGroup() { // grouped elements
res, ok := x.(HTTPServerUIGroupableRes) // convert from Res
if !ok {
continue
}
element := &common.FormElement{
Kind: res.GetKind(),
ID: res.GetID(),
Type: res.GetType(),
Sort: res.GetSort(),
}
elements = append(elements, element)
}
form := &common.Form{
Elements: elements,
}
// XXX: c.JSON or c.PureJSON ?
c.JSON(http.StatusOK, form) // send the struct as json
})
router.GET(obj.routerPath("/list/:id"), func(c *gin.Context) {
id := c.Param("id")
res, ok := obj.getResByID(id)
if !ok {
msg := fmt.Sprintf("id `%s` not found", id)
c.JSON(http.StatusBadRequest, gin.H{"error": msg})
return
}
val, err := res.GetValue(obj.ctx)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{
"error": "Internal Server Error",
"code": 500,
})
return
}
el := &common.FormElementGeneric{ // XXX: text or string?
Value: val,
}
c.JSON(http.StatusOK, el) // send the struct as json
})
router.GET(obj.routerPath("/watch/:id"), func(c *gin.Context) {
id := c.Param("id")
res, ok := obj.getResByID(id)
if !ok {
msg := fmt.Sprintf("id `%s` not found", id)
c.JSON(http.StatusBadRequest, gin.H{"error": msg})
return
}
ch := make(chan struct{})
//defer close(ch) // don't close, let it gc instead
obj.rwmutex.Lock()
obj.notifications[res][ch] = struct{}{} // add to notification "list"
obj.rwmutex.Unlock()
defer func() {
obj.rwmutex.Lock()
delete(obj.notifications[res], ch)
obj.rwmutex.Unlock()
}()
select {
case <-ch: // http long poll
// pass
//case <-obj.???[res].Done(): // in case Watch dies
// c.JSON(http.StatusInternalServerError, gin.H{
// "error": "Internal Server Error",
// "code": 500,
// })
case <-obj.ctx.Done(): // in case Watch dies
c.JSON(http.StatusInternalServerError, gin.H{
"error": "Internal Server Error",
"code": 500,
})
return
}
val, err := res.GetValue(obj.ctx)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{
"error": "Internal Server Error",
"code": 500,
})
return
}
el := &common.FormElementGeneric{ // XXX: text or string?
Value: val,
}
c.JSON(http.StatusOK, el) // send the struct as json
})
router.GET(obj.routerPath("/ping"), func(c *gin.Context) {
c.JSON(http.StatusOK, gin.H{
"message": "pong",
})
})
router.ServeHTTP(w, req)
return
}
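// As a rough usage sketch (not part of this resource), the routes that
// ServeHTTP wires up above can be exercised with any plain http client. The
// host, port and "/ui/" prefix below are assumptions for illustration only:
//
//	// list the form elements as json:
//	resp, err := http.Get("http://127.0.0.1:8080/ui/list/")
//
//	// store a new value for the grouped element whose GetID() is "hello":
//	_, err = http.PostForm("http://127.0.0.1:8080/ui/save/", url.Values{
//		"id":    {"hello"},
//		"value": {"world"},
//	})
//
//	// long poll until that value changes, then fetch the new value:
//	resp, err = http.Get("http://127.0.0.1:8080/ui/watch/hello")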
// Validate checks if the resource data structure was populated correctly.
func (obj *HTTPServerUIRes) Validate() error {
if obj.getPath() == "" {
return fmt.Errorf("empty path")
}
// FIXME: does getPath need to start with a slash or end with one?
if !strings.HasPrefix(obj.getPath(), "/") {
return fmt.Errorf("the Path must be absolute")
}
if !strings.HasSuffix(obj.getPath(), "/") {
return fmt.Errorf("the Path must end with a slash")
}
return nil
}
// Init runs some startup code for this resource.
func (obj *HTTPServerUIRes) Init(init *engine.Init) error {
obj.init = init // save for later
//obj.eventStream = make(chan error)
obj.eventsChanMap = make(map[engine.Res]chan error)
obj.notifications = make(map[engine.Res]map[chan struct{}]struct{})
obj.rwmutex = &sync.RWMutex{}
// NOTE: If we don't Init the resources that are autogrouped in here, then
// they will never get an Init call on them at all.
// TODO: should we do this in the engine? Do we want to decide it here?
for _, res := range obj.GetGroup() { // grouped elements
// NOTE: We build a new init, but it's not complete. We only add
// what we're planning to use, and we ignore the rest for now...
r := res // bind the variable!
obj.eventsChanMap[r] = make(chan error)
obj.notifications[r] = make(map[chan struct{}]struct{})
event := func() {
select {
case obj.eventsChanMap[r] <- nil:
// send!
}
obj.rwmutex.RLock()
for ch := range obj.notifications[r] {
select {
case ch <- struct{}{}:
// send!
default:
// skip immediately if nobody is listening
}
}
obj.rwmutex.RUnlock()
// We don't do this here (why?); instead we read from the above
// channel and then send on multiplexedChan to the main loop, where it
// runs the obj.init.Event function.
//obj.init.Event() // notify engine of an event (this can block)
}
newInit := &engine.Init{
Program: obj.init.Program,
Version: obj.init.Version,
Hostname: obj.init.Hostname,
// Watch:
Running: event,
Event: event,
// CheckApply:
//Refresh: func() bool { // TODO: do we need this?
// innerRes, ok := r.(engine.RefreshableRes)
// if !ok {
// panic("res does not support the Refreshable trait")
// }
// return innerRes.Refresh()
//},
Send: engine.GenerateSendFunc(r),
Recv: engine.GenerateRecvFunc(r), // unused
FilteredGraph: func() (*pgraph.Graph, error) {
panic("FilteredGraph for HTTP:Server:UI not implemented")
},
Local: obj.init.Local,
World: obj.init.World,
//VarDir: obj.init.VarDir, // TODO: wrap this
Debug: obj.init.Debug,
Logf: func(format string, v ...interface{}) {
obj.init.Logf(res.Kind()+": "+format, v...)
},
}
if err := res.Init(newInit); err != nil {
return errwrap.Wrapf(err, "autogrouped Init failed")
}
}
return nil
}
// Cleanup is run by the engine to clean up after the resource is done.
func (obj *HTTPServerUIRes) Cleanup() error {
return nil
}
// Watch is the primary listener for this resource and it outputs events. This
// particular one starts the Watch method of each autogrouped child, multiplexes
// their events, and otherwise blocks until we've received a done signal.
func (obj *HTTPServerUIRes) Watch(ctx context.Context) error {
multiplexedChan := make(chan error)
defer close(multiplexedChan) // closes after everyone below us is finished
wg := &sync.WaitGroup{}
defer wg.Wait()
innerCtx, cancel := context.WithCancel(ctx) // store for ServeHTTP
defer cancel()
obj.ctx = innerCtx
for _, r := range obj.GetGroup() { // grouped elements
res := r // optional in newer golang
wg.Add(1)
go func() {
defer wg.Done()
defer close(obj.eventsChanMap[res]) // where Watch sends events
if err := res.Watch(ctx); err != nil {
select {
case multiplexedChan <- err:
case <-ctx.Done():
}
}
}()
// wait for Watch's first Running() call or an immediate error...
select {
case <-obj.eventsChanMap[res]: // triggers on start or on err...
}
wg.Add(1)
go func() {
defer wg.Done()
for {
var ok bool
var err error
select {
// receive
case err, ok = <-obj.eventsChanMap[res]:
if !ok {
return
}
}
// send (multiplex)
select {
case multiplexedChan <- err:
case <-ctx.Done():
return
}
}
}()
}
// we block until all the children are started first...
obj.init.Running() // when started, notify engine that we're running
startupChan := make(chan struct{})
close(startupChan) // send one initial signal
for {
if obj.init.Debug {
obj.init.Logf("Looping...")
}
select {
case <-startupChan:
startupChan = nil
//case err, ok := <-obj.eventStream:
// if !ok { // shouldn't happen
// obj.eventStream = nil
// continue
// }
// if err != nil {
// return err
// }
case err, ok := <-multiplexedChan:
if !ok { // shouldn't happen
multiplexedChan = nil
continue
}
if err != nil {
return err
}
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
obj.init.Event() // notify engine of an event (this can block)
}
//return nil // unreachable
}
// CheckApply is responsible for the Send/Recv aspects of the autogrouped
// resources. It recursively calls any autogrouped children.
func (obj *HTTPServerUIRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
if obj.init.Debug {
obj.init.Logf("CheckApply")
}
checkOK := true
for _, res := range obj.GetGroup() { // grouped elements
if c, err := res.CheckApply(ctx, apply); err != nil {
return false, errwrap.Wrapf(err, "autogrouped CheckApply failed")
} else if !c {
checkOK = false
}
}
return checkOK, nil // w00t
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *HTTPServerUIRes) Cmp(r engine.Res) error {
// we can only compare HTTPServerUIRes to others of the same resource kind
res, ok := r.(*HTTPServerUIRes)
if !ok {
return fmt.Errorf("res is not the same kind")
}
if obj.Server != res.Server {
return fmt.Errorf("the Server field differs")
}
if obj.Path != res.Path {
return fmt.Errorf("the Path differs")
}
return nil
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *HTTPServerUIRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes HTTPServerUIRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*HTTPServerUIRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to HTTPServerUIRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = HTTPServerUIRes(raw) // restore from indirection with type conversion!
return nil
}
// GroupCmp returns whether two resources can be grouped together or not. Can
// these two resources be merged, aka, does this resource support doing so? Will
// the resource allow itself to be grouped _into_ this obj?
func (obj *HTTPServerUIRes) GroupCmp(r engine.GroupableRes) error {
res, ok := r.(HTTPServerUIGroupableRes) // different from what we usually do!
if !ok {
return fmt.Errorf("resource is not the right kind")
}
// If the http resource has the parent name field specified, then it
// must match against our name field if we want it to group with us.
if pn := res.ParentName(); pn != "" && pn != obj.Name() {
return fmt.Errorf("resource groups with a different parent name")
}
p := httpServerUIKind + ":"
// http:server:ui:foo is okay, but http:server:file is not
if !strings.HasPrefix(r.Kind(), p) {
return fmt.Errorf("not one of our children")
}
// http:server:ui:foo is okay, but http:server:ui:foo:bar is not
s := strings.TrimPrefix(r.Kind(), p)
if len(s) != len(r.Kind()) && strings.Count(s, ":") > 0 { // has prefix
return fmt.Errorf("maximum one resource after `%s` prefix", httpServerUIKind)
}
return nil
}

View File

@@ -0,0 +1 @@
/main.wasm

View File

@@ -0,0 +1,8 @@
This directory contains the golang wasm source for the `http_server_ui`
resource. It gets built automatically when you run `make` from the main project
root directory.
After it gets built, the compiled artifact gets bundled into the main project
binary via `go:embed`.
It is not a normal package, so it should not get built with everything else.
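For reference, the wasm build step amounts to a standard Go cross-compile,
roughly `GOOS=js GOARCH=wasm go build -o main.wasm .`; the exact flags are
driven by the Makefile, so treat this as an approximation.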

View File

@@ -0,0 +1,84 @@
// Mgmt
// Copyright (C) James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//
// Additional permission under GNU GPL version 3 section 7
//
// If you modify this program, or any covered work, by linking or combining it
// with embedded mcl code and modules (and that the embedded mcl code and
// modules which link with this program, contain a copy of their source code in
// the authoritative form) containing parts covered by the terms of any other
// license, the licensors of this program grant you additional permission to
// convey the resulting work. Furthermore, the licensors of this program grant
// the original author, James Shubin, additional permission to update this
// additional permission if he deems it necessary to achieve the goals of this
// additional permission.
// Package common contains some code that is shared between the wasm and the
// http:server:ui packages.
package common
const (
// HTTPServerUIInputType represents the field in the "Type" map that specifies
// which input type we're using.
HTTPServerUIInputType = "type"
// HTTPServerUIInputTypeText is the representation of the html "text"
// type.
HTTPServerUIInputTypeText = "text"
// HTTPServerUIInputTypeRange is the representation of the html "range"
// type.
HTTPServerUIInputTypeRange = "range"
// HTTPServerUIInputTypeRangeMin is the html input "range" min field.
HTTPServerUIInputTypeRangeMin = "min"
// HTTPServerUIInputTypeRangeMax is the html input "range" max field.
HTTPServerUIInputTypeRangeMax = "max"
// HTTPServerUIInputTypeRangeStep is the html input "range" step field.
HTTPServerUIInputTypeRangeStep = "step"
)
// Form represents the entire form containing all the desired elements.
type Form struct {
// Elements is a list of form elements in this form.
// TODO: Maybe this should be an interface?
Elements []*FormElement `json:"elements"`
}
// FormElement represents each form element.
type FormElement struct {
// Kind is the kind of form element that this is.
Kind string `json:"kind"`
// ID is the unique public id for this form element.
ID string `json:"id"`
// Type is a map that you can use to build the input field in the ui.
Type map[string]string `json:"type"`
// Sort is a string that you can use to determine the global sorted
// display order of all the elements in a ui.
Sort string `json:"sort"`
}
// FormElementGeneric is a value store.
type FormElementGeneric struct {
// Value holds the string value we're interested in.
Value string `json:"value"`
}
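// As a minimal sketch (not part of this package), marshalling a Form with a
// single element shows the wire format that the ui endpoints exchange; the
// kind string below is a hypothetical example:
//
//	f := &Form{Elements: []*FormElement{{
//		Kind: "http:server:ui:input",
//		ID:   "hello",
//		Type: map[string]string{"type": "text"},
//		Sort: "a",
//	}}}
//	b, _ := json.Marshal(f)
//	// string(b) == `{"elements":[{"kind":"http:server:ui:input","id":"hello","type":{"type":"text"},"sort":"a"}]}`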

View File

@@ -0,0 +1,163 @@
{{- /*
Mgmt
Copyright (C) James Shubin and the project contributors
Written by James Shubin <james@shubin.ca> and the project contributors
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Additional permission under GNU GPL version 3 section 7
If you modify this program, or any covered work, by linking or combining it
with embedded mcl code and modules (and that the embedded mcl code and
modules which link with this program, contain a copy of their source code in
the authoritative form) containing parts covered by the terms of any other
license, the licensors of this program grant you additional permission to
convey the resulting work. Furthermore, the licensors of this program grant
the original author, James Shubin, additional permission to update this
additional permission if he deems it necessary to achieve the goals of this
additional permission.
This was modified from the boiler-plate in the ~golang/misc/wasm/* directory.
*/ -}}
<!doctype html>
<html>
<head>
<meta charset="utf-8">
{{ if .title }}
<title>{{ .title }}</title>
{{ end }}
{{ if .head }}
{{ .head }}
{{ end }}
{{ if .embedded }}
<link href="static/bootstrap.min.css" rel="stylesheet" crossorigin="anonymous">
<script src="static/bootstrap.bundle.min.js" crossorigin="anonymous"></script>
{{ else }}
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.5/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-SgOJa3DmI69IUzQ2PVdRZhwQ+dy64/BUtbMJw1MZ8t5HZApcHrRKUc4W0kG879m7" crossorigin="anonymous">
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.5/dist/js/bootstrap.bundle.min.js" integrity="sha384-k6d4wzSIapyDyv1kpU366/PK5hCdSbCRGRCMv+eplOQJWyd1fbcAu9OCUj5zNLiq" crossorigin="anonymous"></script>
{{ end }}
<style>
/* Auto-apply Bootstrap-like blue (primary) styling based on element type. */
body {
--bs-primary: #0d6efd; /* Bootstrap 5 default primary color */
}
h1, h2, h3, h4, h5, h6, strong, b {
color: var(--bs-primary);
}
a {
color: var(--bs-primary);
text-decoration: none;
}
a:hover {
text-decoration: underline;
color: #0b5ed7; /* slightly darker blue */
}
button, input[type="submit"], input[type="button"] {
background-color: var(--bs-primary);
color: #fff;
border: none;
padding: 0.375rem 0.75rem;
border-radius: 0.25rem;
cursor: pointer;
}
button:hover, input[type="submit"]:hover, input[type="button"]:hover {
background-color: #0b5ed7;
}
p, span, li {
color: #212529; /* standard text color */
}
code, pre {
background-color: #e7f1ff;
color: #084298;
padding: 0.25rem 0.5rem;
border-radius: 0.25rem;
}
fieldset {
background-color: #e7f1ff;
border: 1px solid blue;
padding: 10px; /* optional: adds spacing inside the border */
margin-bottom: 20px; /* optional: adds spacing below the fieldset */
margin: 0 20px; /* adds 20px space on left and right */
}
label {
display: inline-block;
width: 100px; /* arbitrary */
text-align: right; /* aligns label text to the right */
margin-right: 10px; /* spacing between label and input */
margin-bottom: 8px; /* small vertical space below each label */
}
input[type="text"] {
width: 30ch; /* the number of characters you want to fit */
box-sizing: border-box; /* ensures padding and border are included in the width */
}
input[type="range"] {
vertical-align: middle; /* aligns the range input vertically with other elements */
width: 30ch; /* the number of characters you want to fit (to match text) */
box-sizing: border-box; /* ensures padding and border are included in the width */
}
</style>
</head>
<body>
<!--
Add the following polyfill for Microsoft Edge 17/18 support:
<script src="https://cdn.jsdelivr.net/npm/text-encoding@0.7.0/lib/encoding.min.js"></script>
(see https://caniuse.com/#feat=textencoder)
-->
<script src="wasm_exec.js"></script>
<script>
// These values can be read from inside the wasm program.
window._mgmt_program = "{{ .program }}";
window._mgmt_version = "{{ .version }}";
window._mgmt_hostname = "{{ .hostname }}";
window._mgmt_title = "{{ .title }}";
window._mgmt_path = "{{ .path }}";
if (!WebAssembly.instantiateStreaming) { // polyfill
WebAssembly.instantiateStreaming = async (resp, importObject) => {
const source = await (await resp).arrayBuffer();
return await WebAssembly.instantiate(source, importObject);
};
}
const go = new Go();
//let mod, inst;
WebAssembly.instantiateStreaming(fetch("main.wasm"), go.importObject).then((result) => {
//mod = result.module;
//inst = result.instance;
go.run(result.instance);
}).catch((err) => {
console.error(err);
});
//async function run() {
// console.clear();
// await go.run(inst);
// inst = await WebAssembly.instantiate(mod, go.importObject); // reset instance
//}
</script>
</body>
</html>

View File

@@ -0,0 +1,338 @@
// Mgmt
// Copyright (C) James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//
// Additional permission under GNU GPL version 3 section 7
//
// If you modify this program, or any covered work, by linking or combining it
// with embedded mcl code and modules (and that the embedded mcl code and
// modules which link with this program, contain a copy of their source code in
// the authoritative form) containing parts covered by the terms of any other
// license, the licensors of this program grant you additional permission to
// convey the resulting work. Furthermore, the licensors of this program grant
// the original author, James Shubin, additional permission to update this
// additional permission if he deems it necessary to achieve the goals of this
// additional permission.
package main
import (
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"sort"
"strconv"
"syscall/js"
"time"
"github.com/purpleidea/mgmt/engine/resources/http_server_ui/common"
"github.com/purpleidea/mgmt/util/errwrap"
)
// Main is the main implementation of this process. It holds our shared data.
type Main struct {
// some values we pull in
program string
version string
hostname string
title string
path string
document js.Value
body js.Value
// window.location.origin (the base url with port for XHR)
wlo string
// base is the wlo + the specific path suffix
base string
response chan *Response
}
// Init must be called before the Main struct is used.
func (obj *Main) Init() error {
fmt.Println("Hello from mgmt wasm!")
obj.program = js.Global().Get("_mgmt_program").String()
obj.version = js.Global().Get("_mgmt_version").String()
obj.hostname = js.Global().Get("_mgmt_hostname").String()
obj.title = js.Global().Get("_mgmt_title").String()
obj.path = js.Global().Get("_mgmt_path").String()
obj.document = js.Global().Get("document")
obj.body = obj.document.Get("body")
obj.wlo = js.Global().Get("window").Get("location").Get("origin").String()
obj.base = obj.wlo + obj.path
obj.response = make(chan *Response)
return nil
}
// Run is the main execution of this program.
func (obj *Main) Run() error {
h1 := obj.document.Call("createElement", "h1")
h1.Set("innerHTML", obj.title)
obj.body.Call("appendChild", h1)
h6 := obj.document.Call("createElement", "h6")
pre := obj.document.Call("createElement", "pre")
pre.Set("textContent", fmt.Sprintf("This is: %s, version: %s, on %s", obj.program, obj.version, obj.hostname))
//pre.Set("innerHTML", fmt.Sprintf("This is: %s, version: %s, on %s", obj.program, obj.version, obj.hostname))
h6.Call("appendChild", pre)
obj.body.Call("appendChild", h6)
obj.body.Call("appendChild", obj.document.Call("createElement", "hr"))
//document.baseURI
// XXX: how to get the base so we can add our own querystring???
fmt.Println("URI: ", obj.document.Get("baseURI").String())
fmt.Println("window.location.origin: ", obj.wlo)
fmt.Println("BASE: ", obj.base)
fieldset := obj.document.Call("createElement", "fieldset")
legend := obj.document.Call("createElement", "legend")
legend.Set("textContent", "live!") // XXX: pick some message here
fieldset.Call("appendChild", legend)
// XXX: consider using this instead: https://github.com/hashicorp/go-retryablehttp
//client := retryablehttp.NewClient()
//client.RetryMax = 10
client := &http.Client{
//Timeout: time.Duration(timeout) * time.Second,
//CheckRedirect: checkRedirectFunc,
}
// Startup form building...
// XXX: Add long polling to know if the form shape changes, and offer a
// refresh to the end-user to see the new form.
listURL := obj.base + "list/"
watchURL := obj.base + "watch/"
resp, err := client.Get(listURL) // works
if err != nil {
return errwrap.Wrapf(err, "could not list ui")
}
s, err := io.ReadAll(resp.Body) // TODO: apparently we can stream
resp.Body.Close()
if err != nil {
return errwrap.Wrapf(err, "could read from listed ui")
}
fmt.Printf("Response: %+v\n", string(s))
var form *common.Form
if err := json.Unmarshal(s, &form); err != nil {
return errwrap.Wrapf(err, "could not unmarshal form")
}
//fmt.Printf("%+v\n", form) // debug
// Sort according to the "sort" field so elements are in expected order.
sort.Slice(form.Elements, func(i, j int) bool {
return form.Elements[i].Sort < form.Elements[j].Sort
})
for _, x := range form.Elements {
id := x.ID
resp, err := client.Get(listURL + id)
if err != nil {
return errwrap.Wrapf(err, "could not get id %s", id)
}
s, err := io.ReadAll(resp.Body) // TODO: apparently we can stream
resp.Body.Close()
if err != nil {
return errwrap.Wrapf(err, "could not read from id %s", id)
}
fmt.Printf("Response: %+v\n", string(s))
var element *common.FormElementGeneric // XXX: switch based on x.Kind
if err := json.Unmarshal(s, &element); err != nil {
return errwrap.Wrapf(err, "could not unmarshal id %s", id)
}
//fmt.Printf("%+v\n", element) // debug
inputType, exists := x.Type[common.HTTPServerUIInputType] // "text" or "range" ...
if !exists {
fmt.Printf("Element has no input type: %+v\n", element)
continue
}
label := obj.document.Call("createElement", "label")
label.Call("setAttribute", "for", id)
label.Set("innerHTML", fmt.Sprintf("%s: ", id))
fieldset.Call("appendChild", label)
el := obj.document.Call("createElement", "input")
el.Set("id", id)
//el.Call("setAttribute", "id", id)
//el.Call("setAttribute", "name", id)
el.Set("type", inputType)
if inputType == common.HTTPServerUIInputTypeRange {
min := 0
max := 0
step := 1
if val, exists := x.Type[common.HTTPServerUIInputTypeRangeMin]; exists {
if d, err := strconv.Atoi(val); err == nil {
min = d
el.Set("min", val)
}
}
if val, exists := x.Type[common.HTTPServerUIInputTypeRangeMax]; exists {
if d, err := strconv.Atoi(val); err == nil {
max = d
el.Set("max", val)
}
}
if val, exists := x.Type[common.HTTPServerUIInputTypeRangeStep]; exists {
if d, err := strconv.Atoi(val); err == nil {
step = d
el.Set("step", val)
}
}
// add the tick marks
// use a distinct datalist id so it doesn't collide with the input's own id
el.Call("setAttribute", "list", id+"-datalist") // Use setAttribute (NOT Set)
datalist := obj.document.Call("createElement", "datalist")
datalist.Set("id", id+"-datalist") // must match the input's "list" attribute
for i := min; i <= max; i += step {
fmt.Printf("i: %+v\n", i)
option := obj.document.Call("createElement", "option")
option.Set("value", i)
datalist.Call("appendChild", option)
}
fieldset.Call("appendChild", datalist)
}
el.Set("value", element.Value) // XXX: here or after change handler?
// event handler
changeEvent := js.FuncOf(func(this js.Value, args []js.Value) interface{} {
event := args[0]
value := event.Get("target").Get("value").String()
//obj.wg.Add(1)
go func() {
//defer obj.wg.Done()
fmt.Println("Action!")
u := obj.base + "save/"
values := url.Values{
"id": {id},
"value": {value},
}
resp, err := http.PostForm(u, values)
//fmt.Println(resp, err) // debug
if err != nil { // don't dereference a nil resp below
obj.response <- &Response{Err: err}
return
}
s, err := io.ReadAll(resp.Body) // TODO: apparently we can stream
resp.Body.Close()
fmt.Printf("Response: %+v\n", string(s))
fmt.Printf("Error: %+v\n", err)
obj.response <- &Response{
Str: string(s),
Err: err,
}
}()
return nil
})
defer changeEvent.Release()
el.Call("addEventListener", "change", changeEvent)
// http long poll
go func() {
for {
fmt.Printf("About to long poll for: %s\n", id)
//resp, err := client.Get(watchURL + id) // XXX: which?
resp, err := http.Get(watchURL + id)
if err != nil {
fmt.Println("Error fetching:", watchURL+id, err) // XXX: test error paths
time.Sleep(2 * time.Second)
continue
}
s, err := io.ReadAll(resp.Body)
resp.Body.Close()
if err != nil {
fmt.Println("Error reading response:", err)
time.Sleep(2 * time.Second)
continue
}
var element *common.FormElementGeneric // XXX: switch based on x.Kind
if err := json.Unmarshal(s, &element); err != nil {
fmt.Println("could not unmarshal id %s: %v", id, err)
time.Sleep(2 * time.Second)
continue
}
//fmt.Printf("%+v\n", element) // debug
fmt.Printf("Long poll for %s got: %s\n", id, element.Value)
obj.document.Call("getElementById", id).Set("value", element.Value)
//time.Sleep(1 * time.Second)
}
}()
fieldset.Call("appendChild", el)
br := obj.document.Call("createElement", "br")
fieldset.Call("appendChild", br)
}
obj.body.Call("appendChild", fieldset)
// We need this mainloop for receiving the results of our async stuff...
for {
select {
case resp, ok := <-obj.response:
if !ok { // a bare break here would only exit the select, not the loop
return nil
}
if err := resp.Err; err != nil {
fmt.Printf("Err: %+v\n", err)
continue
}
fmt.Printf("Str: %+v\n", resp.Str)
}
}
return nil
}
// Response is a standard response struct which we pass through.
type Response struct {
Str string
Err error
}
func main() {
m := &Main{}
if err := m.Init(); err != nil {
fmt.Printf("Error: %+v\n", err)
return
}
if err := m.Run(); err != nil {
fmt.Printf("Error: %+v\n", err)
return
}
select {} // don't shutdown wasm
}

View File

@@ -0,0 +1,2 @@
*.css
*.js

View File

@@ -0,0 +1,54 @@
// Mgmt
// Copyright (C) James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//
// Additional permission under GNU GPL version 3 section 7
//
// If you modify this program, or any covered work, by linking or combining it
// with embedded mcl code and modules (and that the embedded mcl code and
// modules which link with this program, contain a copy of their source code in
// the authoritative form) containing parts covered by the terms of any other
// license, the licensors of this program grant you additional permission to
// convey the resulting work. Furthermore, the licensors of this program grant
// the original author, James Shubin, additional permission to update this
// additional permission if he deems it necessary to achieve the goals of this
// additional permission.
//go:build httpserveruistatic
package static
import (
_ "embed" // embed data with go:embed
)
const (
// HTTPServerUIStaticEmbedded specifies whether files have been
// embedded.
HTTPServerUIStaticEmbedded = true
)
var (
// HTTPServerUIIndexStaticBootstrapCSS is the embedded data. It is
// embedded.
//go:embed http_server_ui/static/bootstrap.min.css
HTTPServerUIIndexStaticBootstrapCSS []byte
// HTTPServerUIIndexStaticBootstrapJS is the embedded data. It is
// embedded.
//go:embed http_server_ui/static/bootstrap.bundle.min.js
HTTPServerUIIndexStaticBootstrapJS []byte
)
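// Note: this file is only compiled in when the corresponding build tag is set,
// e.g. something like `go build -tags httpserveruistatic ...` (the exact
// invocation is normally handled by the project Makefile). The files under
// http_server_ui/static/ must also exist at build time, or go:embed will fail.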

View File

@@ -0,0 +1,48 @@
// Mgmt
// Copyright (C) James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//
// Additional permission under GNU GPL version 3 section 7
//
// If you modify this program, or any covered work, by linking or combining it
// with embedded mcl code and modules (and that the embedded mcl code and
// modules which link with this program, contain a copy of their source code in
// the authoritative form) containing parts covered by the terms of any other
// license, the licensors of this program grant you additional permission to
// convey the resulting work. Furthermore, the licensors of this program grant
// the original author, James Shubin, additional permission to update this
// additional permission if he deems it necessary to achieve the goals of this
// additional permission.
//go:build !httpserveruistatic
package static
const (
// HTTPServerUIStaticEmbedded specifies whether files have been
// embedded.
HTTPServerUIStaticEmbedded = false
)
var (
// HTTPServerUIIndexStaticBootstrapCSS is the embedded data. It is empty
// here.
HTTPServerUIIndexStaticBootstrapCSS []byte
// HTTPServerUIIndexStaticBootstrapJS is the embedded data. It is empty
// here.
HTTPServerUIIndexStaticBootstrapJS []byte
)

View File

@@ -0,0 +1,42 @@
// Mgmt
// Copyright (C) James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//
// Additional permission under GNU GPL version 3 section 7
//
// If you modify this program, or any covered work, by linking or combining it
// with embedded mcl code and modules (and that the embedded mcl code and
// modules which link with this program, contain a copy of their source code in
// the authoritative form) containing parts covered by the terms of any other
// license, the licensors of this program grant you additional permission to
// convey the resulting work. Furthermore, the licensors of this program grant
// the original author, James Shubin, additional permission to update this
// additional permission if he deems it necessary to achieve the goals of this
// additional permission.
// Package static contains some optional embedded data which can be useful if we
// are running in an entirely offline, internet-absent scenario.
package static
const (
// HTTPServerUIIndexBootstrapCSS is the path to the bootstrap css file
// when embedded, relative to the parent directory.
HTTPServerUIIndexBootstrapCSS = "static/bootstrap.min.css"
// HTTPServerUIIndexBootstrapJS is the path to the bootstrap js file
// when embedded, relative to the parent directory.
HTTPServerUIIndexBootstrapJS = "static/bootstrap.bundle.min.js"
)

View File

@@ -0,0 +1,577 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// This was copied from the original in the ~golang/lib/wasm/* directory.
"use strict";
(() => {
const enosys = () => {
const err = new Error("not implemented");
err.code = "ENOSYS";
return err;
};
if (!globalThis.fs) {
let outputBuf = "";
globalThis.fs = {
constants: { O_WRONLY: -1, O_RDWR: -1, O_CREAT: -1, O_TRUNC: -1, O_APPEND: -1, O_EXCL: -1, O_DIRECTORY: -1 }, // unused
writeSync(fd, buf) {
outputBuf += decoder.decode(buf);
const nl = outputBuf.lastIndexOf("\n");
if (nl != -1) {
console.log(outputBuf.substring(0, nl));
outputBuf = outputBuf.substring(nl + 1);
}
return buf.length;
},
write(fd, buf, offset, length, position, callback) {
if (offset !== 0 || length !== buf.length || position !== null) {
callback(enosys());
return;
}
const n = this.writeSync(fd, buf);
callback(null, n);
},
chmod(path, mode, callback) { callback(enosys()); },
chown(path, uid, gid, callback) { callback(enosys()); },
close(fd, callback) { callback(enosys()); },
fchmod(fd, mode, callback) { callback(enosys()); },
fchown(fd, uid, gid, callback) { callback(enosys()); },
fstat(fd, callback) { callback(enosys()); },
fsync(fd, callback) { callback(null); },
ftruncate(fd, length, callback) { callback(enosys()); },
lchown(path, uid, gid, callback) { callback(enosys()); },
link(path, link, callback) { callback(enosys()); },
lstat(path, callback) { callback(enosys()); },
mkdir(path, perm, callback) { callback(enosys()); },
open(path, flags, mode, callback) { callback(enosys()); },
read(fd, buffer, offset, length, position, callback) { callback(enosys()); },
readdir(path, callback) { callback(enosys()); },
readlink(path, callback) { callback(enosys()); },
rename(from, to, callback) { callback(enosys()); },
rmdir(path, callback) { callback(enosys()); },
stat(path, callback) { callback(enosys()); },
symlink(path, link, callback) { callback(enosys()); },
truncate(path, length, callback) { callback(enosys()); },
unlink(path, callback) { callback(enosys()); },
utimes(path, atime, mtime, callback) { callback(enosys()); },
};
}
if (!globalThis.process) {
globalThis.process = {
getuid() { return -1; },
getgid() { return -1; },
geteuid() { return -1; },
getegid() { return -1; },
getgroups() { throw enosys(); },
pid: -1,
ppid: -1,
umask() { throw enosys(); },
cwd() { throw enosys(); },
chdir() { throw enosys(); },
}
}
if (!globalThis.path) {
globalThis.path = {
resolve(...pathSegments) {
return pathSegments.join("/");
}
}
}
if (!globalThis.crypto) {
throw new Error("globalThis.crypto is not available, polyfill required (crypto.getRandomValues only)");
}
if (!globalThis.performance) {
throw new Error("globalThis.performance is not available, polyfill required (performance.now only)");
}
if (!globalThis.TextEncoder) {
throw new Error("globalThis.TextEncoder is not available, polyfill required");
}
if (!globalThis.TextDecoder) {
throw new Error("globalThis.TextDecoder is not available, polyfill required");
}
const encoder = new TextEncoder("utf-8");
const decoder = new TextDecoder("utf-8");
globalThis.Go = class {
constructor() {
this.argv = ["js"];
this.env = {};
this.exit = (code) => {
if (code !== 0) {
console.warn("exit code:", code);
}
};
this._exitPromise = new Promise((resolve) => {
this._resolveExitPromise = resolve;
});
this._pendingEvent = null;
this._scheduledTimeouts = new Map();
this._nextCallbackTimeoutID = 1;
const setInt64 = (addr, v) => {
this.mem.setUint32(addr + 0, v, true);
this.mem.setUint32(addr + 4, Math.floor(v / 4294967296), true);
}
const setInt32 = (addr, v) => {
this.mem.setUint32(addr + 0, v, true);
}
const getInt64 = (addr) => {
const low = this.mem.getUint32(addr + 0, true);
const high = this.mem.getInt32(addr + 4, true);
return low + high * 4294967296;
}
const loadValue = (addr) => {
const f = this.mem.getFloat64(addr, true);
if (f === 0) {
return undefined;
}
if (!isNaN(f)) {
return f;
}
const id = this.mem.getUint32(addr, true);
return this._values[id];
}
const storeValue = (addr, v) => {
const nanHead = 0x7FF80000;
if (typeof v === "number" && v !== 0) {
if (isNaN(v)) {
this.mem.setUint32(addr + 4, nanHead, true);
this.mem.setUint32(addr, 0, true);
return;
}
this.mem.setFloat64(addr, v, true);
return;
}
if (v === undefined) {
this.mem.setFloat64(addr, 0, true);
return;
}
let id = this._ids.get(v);
if (id === undefined) {
id = this._idPool.pop();
if (id === undefined) {
id = this._values.length;
}
this._values[id] = v;
this._goRefCounts[id] = 0;
this._ids.set(v, id);
}
this._goRefCounts[id]++;
let typeFlag = 0;
switch (typeof v) {
case "object":
if (v !== null) {
typeFlag = 1;
}
break;
case "string":
typeFlag = 2;
break;
case "symbol":
typeFlag = 3;
break;
case "function":
typeFlag = 4;
break;
}
this.mem.setUint32(addr + 4, nanHead | typeFlag, true);
this.mem.setUint32(addr, id, true);
}
const loadSlice = (addr) => {
const array = getInt64(addr + 0);
const len = getInt64(addr + 8);
return new Uint8Array(this._inst.exports.mem.buffer, array, len);
}
const loadSliceOfValues = (addr) => {
const array = getInt64(addr + 0);
const len = getInt64(addr + 8);
const a = new Array(len);
for (let i = 0; i < len; i++) {
a[i] = loadValue(array + i * 8);
}
return a;
}
const loadString = (addr) => {
const saddr = getInt64(addr + 0);
const len = getInt64(addr + 8);
return decoder.decode(new DataView(this._inst.exports.mem.buffer, saddr, len));
}
const testCallExport = (a, b) => {
this._inst.exports.testExport0();
return this._inst.exports.testExport(a, b);
}
const timeOrigin = Date.now() - performance.now();
this.importObject = {
_gotest: {
add: (a, b) => a + b,
callExport: testCallExport,
},
gojs: {
// Go's SP does not change as long as no Go code is running. Some operations (e.g. calls, getters and setters)
// may synchronously trigger a Go event handler. This makes Go code get executed in the middle of the imported
// function. A goroutine can switch to a new stack if the current stack is too small (see morestack function).
// This changes the SP, thus we have to update the SP used by the imported function.
// func wasmExit(code int32)
"runtime.wasmExit": (sp) => {
sp >>>= 0;
const code = this.mem.getInt32(sp + 8, true);
this.exited = true;
delete this._inst;
delete this._values;
delete this._goRefCounts;
delete this._ids;
delete this._idPool;
this.exit(code);
},
// func wasmWrite(fd uintptr, p unsafe.Pointer, n int32)
"runtime.wasmWrite": (sp) => {
sp >>>= 0;
const fd = getInt64(sp + 8);
const p = getInt64(sp + 16);
const n = this.mem.getInt32(sp + 24, true);
fs.writeSync(fd, new Uint8Array(this._inst.exports.mem.buffer, p, n));
},
// func resetMemoryDataView()
"runtime.resetMemoryDataView": (sp) => {
sp >>>= 0;
this.mem = new DataView(this._inst.exports.mem.buffer);
},
// func nanotime1() int64
"runtime.nanotime1": (sp) => {
sp >>>= 0;
setInt64(sp + 8, (timeOrigin + performance.now()) * 1000000);
},
// func walltime() (sec int64, nsec int32)
"runtime.walltime": (sp) => {
sp >>>= 0;
const msec = (new Date).getTime();
setInt64(sp + 8, msec / 1000);
this.mem.setInt32(sp + 16, (msec % 1000) * 1000000, true);
},
// func scheduleTimeoutEvent(delay int64) int32
"runtime.scheduleTimeoutEvent": (sp) => {
sp >>>= 0;
const id = this._nextCallbackTimeoutID;
this._nextCallbackTimeoutID++;
this._scheduledTimeouts.set(id, setTimeout(
() => {
this._resume();
while (this._scheduledTimeouts.has(id)) {
// for some reason Go failed to register the timeout event, log and try again
// (temporary workaround for https://github.com/golang/go/issues/28975)
console.warn("scheduleTimeoutEvent: missed timeout event");
this._resume();
}
},
getInt64(sp + 8),
));
this.mem.setInt32(sp + 16, id, true);
},
// func clearTimeoutEvent(id int32)
"runtime.clearTimeoutEvent": (sp) => {
sp >>>= 0;
const id = this.mem.getInt32(sp + 8, true);
clearTimeout(this._scheduledTimeouts.get(id));
this._scheduledTimeouts.delete(id);
},
// func getRandomData(r []byte)
"runtime.getRandomData": (sp) => {
sp >>>= 0;
crypto.getRandomValues(loadSlice(sp + 8));
},
// func finalizeRef(v ref)
"syscall/js.finalizeRef": (sp) => {
sp >>>= 0;
const id = this.mem.getUint32(sp + 8, true);
this._goRefCounts[id]--;
if (this._goRefCounts[id] === 0) {
const v = this._values[id];
this._values[id] = null;
this._ids.delete(v);
this._idPool.push(id);
}
},
// func stringVal(value string) ref
"syscall/js.stringVal": (sp) => {
sp >>>= 0;
storeValue(sp + 24, loadString(sp + 8));
},
// func valueGet(v ref, p string) ref
"syscall/js.valueGet": (sp) => {
sp >>>= 0;
const result = Reflect.get(loadValue(sp + 8), loadString(sp + 16));
sp = this._inst.exports.getsp() >>> 0; // see comment above
storeValue(sp + 32, result);
},
// func valueSet(v ref, p string, x ref)
"syscall/js.valueSet": (sp) => {
sp >>>= 0;
Reflect.set(loadValue(sp + 8), loadString(sp + 16), loadValue(sp + 32));
},
// func valueDelete(v ref, p string)
"syscall/js.valueDelete": (sp) => {
sp >>>= 0;
Reflect.deleteProperty(loadValue(sp + 8), loadString(sp + 16));
},
// func valueIndex(v ref, i int) ref
"syscall/js.valueIndex": (sp) => {
sp >>>= 0;
storeValue(sp + 24, Reflect.get(loadValue(sp + 8), getInt64(sp + 16)));
},
// valueSetIndex(v ref, i int, x ref)
"syscall/js.valueSetIndex": (sp) => {
sp >>>= 0;
Reflect.set(loadValue(sp + 8), getInt64(sp + 16), loadValue(sp + 24));
},
// func valueCall(v ref, m string, args []ref) (ref, bool)
"syscall/js.valueCall": (sp) => {
sp >>>= 0;
try {
const v = loadValue(sp + 8);
const m = Reflect.get(v, loadString(sp + 16));
const args = loadSliceOfValues(sp + 32);
const result = Reflect.apply(m, v, args);
sp = this._inst.exports.getsp() >>> 0; // see comment above
storeValue(sp + 56, result);
this.mem.setUint8(sp + 64, 1);
} catch (err) {
sp = this._inst.exports.getsp() >>> 0; // see comment above
storeValue(sp + 56, err);
this.mem.setUint8(sp + 64, 0);
}
},
// func valueInvoke(v ref, args []ref) (ref, bool)
"syscall/js.valueInvoke": (sp) => {
sp >>>= 0;
try {
const v = loadValue(sp + 8);
const args = loadSliceOfValues(sp + 16);
const result = Reflect.apply(v, undefined, args);
sp = this._inst.exports.getsp() >>> 0; // see comment above
storeValue(sp + 40, result);
this.mem.setUint8(sp + 48, 1);
} catch (err) {
sp = this._inst.exports.getsp() >>> 0; // see comment above
storeValue(sp + 40, err);
this.mem.setUint8(sp + 48, 0);
}
},
// func valueNew(v ref, args []ref) (ref, bool)
"syscall/js.valueNew": (sp) => {
sp >>>= 0;
try {
const v = loadValue(sp + 8);
const args = loadSliceOfValues(sp + 16);
const result = Reflect.construct(v, args);
sp = this._inst.exports.getsp() >>> 0; // see comment above
storeValue(sp + 40, result);
this.mem.setUint8(sp + 48, 1);
} catch (err) {
sp = this._inst.exports.getsp() >>> 0; // see comment above
storeValue(sp + 40, err);
this.mem.setUint8(sp + 48, 0);
}
},
// func valueLength(v ref) int
"syscall/js.valueLength": (sp) => {
sp >>>= 0;
setInt64(sp + 16, parseInt(loadValue(sp + 8).length));
},
// valuePrepareString(v ref) (ref, int)
"syscall/js.valuePrepareString": (sp) => {
sp >>>= 0;
const str = encoder.encode(String(loadValue(sp + 8)));
storeValue(sp + 16, str);
setInt64(sp + 24, str.length);
},
// valueLoadString(v ref, b []byte)
"syscall/js.valueLoadString": (sp) => {
sp >>>= 0;
const str = loadValue(sp + 8);
loadSlice(sp + 16).set(str);
},
// func valueInstanceOf(v ref, t ref) bool
"syscall/js.valueInstanceOf": (sp) => {
sp >>>= 0;
this.mem.setUint8(sp + 24, (loadValue(sp + 8) instanceof loadValue(sp + 16)) ? 1 : 0);
},
// func copyBytesToGo(dst []byte, src ref) (int, bool)
"syscall/js.copyBytesToGo": (sp) => {
sp >>>= 0;
const dst = loadSlice(sp + 8);
const src = loadValue(sp + 32);
if (!(src instanceof Uint8Array || src instanceof Uint8ClampedArray)) {
this.mem.setUint8(sp + 48, 0);
return;
}
const toCopy = src.subarray(0, dst.length);
dst.set(toCopy);
setInt64(sp + 40, toCopy.length);
this.mem.setUint8(sp + 48, 1);
},
// func copyBytesToJS(dst ref, src []byte) (int, bool)
"syscall/js.copyBytesToJS": (sp) => {
sp >>>= 0;
const dst = loadValue(sp + 8);
const src = loadSlice(sp + 16);
if (!(dst instanceof Uint8Array || dst instanceof Uint8ClampedArray)) {
this.mem.setUint8(sp + 48, 0);
return;
}
const toCopy = src.subarray(0, dst.length);
dst.set(toCopy);
setInt64(sp + 40, toCopy.length);
this.mem.setUint8(sp + 48, 1);
},
"debug": (value) => {
console.log(value);
},
}
};
}
async run(instance) {
if (!(instance instanceof WebAssembly.Instance)) {
throw new Error("Go.run: WebAssembly.Instance expected");
}
this._inst = instance;
this.mem = new DataView(this._inst.exports.mem.buffer);
this._values = [ // JS values that Go currently has references to, indexed by reference id
NaN,
0,
null,
true,
false,
globalThis,
this,
];
this._goRefCounts = new Array(this._values.length).fill(Infinity); // number of references that Go has to a JS value, indexed by reference id
this._ids = new Map([ // mapping from JS values to reference ids
[0, 1],
[null, 2],
[true, 3],
[false, 4],
[globalThis, 5],
[this, 6],
]);
this._idPool = []; // unused ids that have been garbage collected
this.exited = false; // whether the Go program has exited
// Pass command line arguments and environment variables to WebAssembly by writing them to the linear memory.
let offset = 4096;
const strPtr = (str) => {
const ptr = offset;
const bytes = encoder.encode(str + "\0");
new Uint8Array(this.mem.buffer, offset, bytes.length).set(bytes);
offset += bytes.length;
if (offset % 8 !== 0) {
offset += 8 - (offset % 8);
}
return ptr;
};
const argc = this.argv.length;
const argvPtrs = [];
this.argv.forEach((arg) => {
argvPtrs.push(strPtr(arg));
});
argvPtrs.push(0);
const keys = Object.keys(this.env).sort();
keys.forEach((key) => {
argvPtrs.push(strPtr(`${key}=${this.env[key]}`));
});
argvPtrs.push(0);
const argv = offset;
argvPtrs.forEach((ptr) => {
this.mem.setUint32(offset, ptr, true);
this.mem.setUint32(offset + 4, 0, true);
offset += 8;
});
// The linker guarantees global data starts from at least wasmMinDataAddr.
// Keep in sync with cmd/link/internal/ld/data.go:wasmMinDataAddr.
const wasmMinDataAddr = 4096 + 8192;
if (offset >= wasmMinDataAddr) {
throw new Error("total length of command line and environment variables exceeds limit");
}
this._inst.exports.run(argc, argv);
if (this.exited) {
this._resolveExitPromise();
}
await this._exitPromise;
}
_resume() {
if (this.exited) {
throw new Error("Go program has already exited");
}
this._inst.exports.resume();
if (this.exited) {
this._resolveExitPromise();
}
}
_makeFuncWrapper(id) {
const go = this;
return function () {
const event = { id: id, this: this, args: arguments };
go._pendingEvent = event;
go._resume();
return event.result;
};
}
}
})();

View File

@@ -0,0 +1,675 @@
// Mgmt
// Copyright (C) James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//
// Additional permission under GNU GPL version 3 section 7
//
// If you modify this program, or any covered work, by linking or combining it
// with embedded mcl code and modules (and that the embedded mcl code and
// modules which link with this program, contain a copy of their source code in
// the authoritative form) containing parts covered by the terms of any other
// license, the licensors of this program grant you additional permission to
// convey the resulting work. Furthermore, the licensors of this program grant
// the original author, James Shubin, additional permission to update this
// additional permission if he deems it necessary to achieve the goals of this
// additional permission.
package resources
import (
"context"
"fmt"
"net/url"
"strconv"
"sync"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/resources/http_server_ui/common"
"github.com/purpleidea/mgmt/engine/traits"
"github.com/purpleidea/mgmt/util/errwrap"
)
const (
httpServerUIInputKind = httpServerUIKind + ":input"
httpServerUIInputStoreKey = "key"
httpServerUIInputStoreSchemeLocal = "local"
httpServerUIInputStoreSchemeWorld = "world"
httpServerUIInputTypeText = common.HTTPServerUIInputTypeText // "text"
httpServerUIInputTypeRange = common.HTTPServerUIInputTypeRange // "range"
)
func init() {
engine.RegisterResource(httpServerUIInputKind, func() engine.Res { return &HTTPServerUIInputRes{} })
}
var _ HTTPServerUIGroupableRes = &HTTPServerUIInputRes{} // compile time check
// HTTPServerUIInputRes is a form element that exists within a http:server:ui
// resource, which exists within an http server. The name is used as the
// unique id of the field, unless the id field is specified, in which case
// that is used instead. The way this works is that it autogroups at runtime
// with an existing http:server:ui resource, and in doing so makes the form
// field associated with this resource available as part of that ui, which is
// itself grouped into and served from the http server resource.
type HTTPServerUIInputRes struct {
traits.Base // add the base methods without re-implementation
traits.Edgeable // XXX: add autoedge support
traits.Groupable // can be grouped into HTTPServerUIRes
traits.Sendable
init *engine.Init
// Path is the name of the http ui resource to group this into. If it is
// omitted, and there is only a single http ui resource, then it will
// be grouped into it automatically. If there is more than one main http
// ui resource being used, then the grouping behaviour is *undefined*
// when this is not specified, and it is not recommended to leave this
// blank!
Path string `lang:"path" yaml:"path"`
// ID is the unique id for this element. It is used in form fields and
// should not be a private identifier. It must be unique within a given
// http ui.
ID string `lang:"id" yaml:"id"`
// Value is the default value to use for the form field. If you change
// it, then the resource graph will change and we'll rebuild and have
// the new value visible. You can use either this or the Store field.
// XXX: If we ever add our resource mutate API, we might not need to
// swap to a new resource graph, and maybe Store is not needed?
Value string `lang:"value" yaml:"value"`
// Store is the data source to store the value in. It will also read in a
// default value from there if one is present. It will watch it for changes
// as well, and update the displayed value if it's changed from another
// source. This cannot be used at the same time as the Value field.
Store string `lang:"store" yaml:"store"`
// Type specifies the type of input field this is, and some information
// about it.
// XXX: come up with a format such as "multiline://?max=60&style=foo"
Type string `lang:"type" yaml:"type"`
// Sort is a string that you can use to determine the global sorted
// display order of all the elements in a ui.
Sort string `lang:"sort" yaml:"sort"`
scheme string // the scheme we're using with Store, cached for later
key string // the key we're using with Store, cached for later
typeURL *url.URL // the type data, cached for later
typeURLValues url.Values // the type data, cached for later
last *string // the last value we sent
value string // what we've last received from SetValue
storeEvent bool // did a store event happen?
mutex *sync.Mutex // guards storeEvent and value
event chan struct{} // local event that the setValue sends
}
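// As a rough illustration only (the field values here are hypothetical, and
// this is not how the resource is normally declared from mcl), it could be
// constructed and checked programmatically like this:
//
//	input := &HTTPServerUIInputRes{
//		ID:    "hostname",
//		Value: "localhost",
//		Type:  "text://",
//		Sort:  "a",
//	}
//	if err := input.Validate(); err != nil {
//		// setting both Value and Store, for example, would error here
//	}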
// Default returns some sensible defaults for this resource.
func (obj *HTTPServerUIInputRes) Default() engine.Res {
return &HTTPServerUIInputRes{
Type: "text://",
}
}
// Validate checks if the resource data structure was populated correctly.
func (obj *HTTPServerUIInputRes) Validate() error {
if obj.GetID() == "" {
return fmt.Errorf("empty id")
}
if obj.Value != "" && obj.Store != "" {
return fmt.Errorf("may only use either Value or Store")
}
if obj.Value != "" {
if err := obj.checkValue(obj.Value); err != nil {
return errwrap.Wrapf(err, "the Value field is invalid")
}
}
if obj.Store != "" {
// XXX: check the URI format
}
return nil
}
// Init runs some startup code for this resource.
func (obj *HTTPServerUIInputRes) Init(init *engine.Init) error {
obj.init = init // save for later
u, err := url.Parse(obj.Type)
if err != nil {
return err
}
if u == nil {
return fmt.Errorf("can't parse Type")
}
if u.Scheme != httpServerUIInputTypeText && u.Scheme != httpServerUIInputTypeRange {
return fmt.Errorf("unknown scheme: %s", u.Scheme)
}
values, err := url.ParseQuery(u.RawQuery)
if err != nil {
return err
}
obj.typeURL = u
obj.typeURLValues = values
if obj.Store != "" {
u, err := url.Parse(obj.Store)
if err != nil {
return err
}
if u == nil {
return fmt.Errorf("can't parse Store")
}
if u.Scheme != httpServerUIInputStoreSchemeLocal && u.Scheme != httpServerUIInputStoreSchemeWorld {
return fmt.Errorf("unknown scheme: %s", u.Scheme)
}
values, err := url.ParseQuery(u.RawQuery)
if err != nil {
return err
}
obj.scheme = u.Scheme // cache for later
obj.key = obj.Name() // default
x, exists := values[httpServerUIInputStoreKey]
if exists && len(x) > 0 && x[0] != "" { // ignore absent or broken keys
obj.key = x[0]
}
}
// populate our obj.value cache from the public field, so we don't mutate obj.Value
obj.value = obj.Value // copy
obj.mutex = &sync.Mutex{}
obj.event = make(chan struct{}, 1) // buffer to avoid blocks or deadlock
return nil
}
// Cleanup is run by the engine to clean up after the resource is done.
func (obj *HTTPServerUIInputRes) Cleanup() error {
return nil
}
// getKey returns the key to be used for this resource. If the Store field is
// specified, it will use that parsed part, otherwise it uses the Name.
func (obj *HTTPServerUIInputRes) getKey() string {
if obj.Store != "" {
return obj.key
}
return obj.Name()
}
// ParentName is used to limit which resources autogroup into this one. If it's
// empty then it's ignored, otherwise it must match the Name of the parent to
// get grouped.
func (obj *HTTPServerUIInputRes) ParentName() string {
return obj.Path
}
// GetKind returns the kind of this resource.
func (obj *HTTPServerUIInputRes) GetKind() string {
// NOTE: We don't *need* to return such a specific string, and "input"
// would be enough, but we might as well use this because we have it.
return httpServerUIInputKind
}
// GetID returns the actual ID we respond to. When ID is not specified, we use
// the Name.
func (obj *HTTPServerUIInputRes) GetID() string {
if obj.ID != "" {
return obj.ID
}
return obj.Name()
}
// SetValue stores the new value field that was obtained from submitting the
// form. This receives the raw, unsafe value that you must validate first.
func (obj *HTTPServerUIInputRes) SetValue(ctx context.Context, vs []string) error {
if len(vs) != 1 {
return fmt.Errorf("unexpected length of %d", len(vs))
}
value := vs[0]
if err := obj.checkValue(value); err != nil {
return err
}
obj.mutex.Lock()
obj.setValue(ctx, value) // also sends an event
obj.mutex.Unlock()
return nil
}
// setValue is the helper version where the caller must provide the mutex.
func (obj *HTTPServerUIInputRes) setValue(ctx context.Context, val string) error {
obj.value = val
select {
case obj.event <- struct{}{}:
default:
}
return nil
}
func (obj *HTTPServerUIInputRes) checkValue(value string) error {
// XXX: validate based on obj.Type
// XXX: validate what kind of values are allowed, probably no \n, etc...
return nil
}
// GetValue gets a string representation for the form value, that we'll use in
// our html form.
func (obj *HTTPServerUIInputRes) GetValue(ctx context.Context) (string, error) {
obj.mutex.Lock()
defer obj.mutex.Unlock()
if obj.storeEvent {
val, exists, err := obj.storeGet(ctx, obj.getKey())
if err != nil {
return "", errwrap.Wrapf(err, "error during get")
}
if !exists {
return "", nil // default
}
return val, nil
}
return obj.value, nil
}
// GetType returns a map that you can use to build the input field in the ui.
func (obj *HTTPServerUIInputRes) GetType() map[string]string {
m := make(map[string]string)
if obj.typeURL.Scheme == httpServerUIInputTypeRange {
m = obj.rangeGetType()
}
m[common.HTTPServerUIInputType] = obj.typeURL.Scheme
return m
}
func (obj *HTTPServerUIInputRes) rangeGetType() map[string]string {
m := make(map[string]string)
base := 10
bits := 64
if sa, exists := obj.typeURLValues[common.HTTPServerUIInputTypeRangeMin]; exists && len(sa) > 0 {
if x, err := strconv.ParseInt(sa[0], base, bits); err == nil {
m[common.HTTPServerUIInputTypeRangeMin] = strconv.FormatInt(x, base)
}
}
if sa, exists := obj.typeURLValues[common.HTTPServerUIInputTypeRangeMax]; exists && len(sa) > 0 {
if x, err := strconv.ParseInt(sa[0], base, bits); err == nil {
m[common.HTTPServerUIInputTypeRangeMax] = strconv.FormatInt(x, base)
}
}
if sa, exists := obj.typeURLValues[common.HTTPServerUIInputTypeRangeStep]; exists && len(sa) > 0 {
if x, err := strconv.ParseInt(sa[0], base, bits); err == nil {
m[common.HTTPServerUIInputTypeRangeStep] = strconv.FormatInt(x, base)
}
}
return m
}
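// A small sketch of what this produces, assuming (hypothetically) that the
// range constants are the literal query keys "min", "max" and "step":
//
//	obj := &HTTPServerUIInputRes{Type: "range://?min=0&max=100&step=5"}
//	// after Init has parsed the Type, rangeGetType would return roughly:
//	// map[<RangeMin>: "0", <RangeMax>: "100", <RangeStep>: "5"]
//	// and GetType then adds the scheme under common.HTTPServerUIInputType.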
// GetSort returns a string that you can use to determine the global sorted
// display order of all the elements in a ui.
func (obj *HTTPServerUIInputRes) GetSort() string {
return obj.Sort
}
// Watch is the primary listener for this resource and it outputs events. If
// the Store field is set, this delegates to localWatch or worldWatch;
// otherwise it does nothing but block until we've received a done signal.
func (obj *HTTPServerUIInputRes) Watch(ctx context.Context) error {
if obj.Store != "" && obj.scheme == httpServerUIInputStoreSchemeLocal {
return obj.localWatch(ctx)
}
if obj.Store != "" && obj.scheme == httpServerUIInputStoreSchemeWorld {
return obj.worldWatch(ctx)
}
obj.init.Running() // when started, notify engine that we're running
// XXX: do we need to watch on obj.event for normal .Value stuff?
select {
case <-ctx.Done(): // closed by the engine to signal shutdown
}
//obj.init.Event() // notify engine of an event (this can block)
return nil
}
func (obj *HTTPServerUIInputRes) localWatch(ctx context.Context) error {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
ch, err := obj.init.Local.ValueWatch(ctx, obj.getKey()) // get possible events!
if err != nil {
return errwrap.Wrapf(err, "error during watch")
}
obj.init.Running() // when started, notify engine that we're running
for {
select {
case _, ok := <-ch:
if !ok { // channel shutdown
return nil
}
obj.mutex.Lock()
obj.storeEvent = true
obj.mutex.Unlock()
case <-obj.event:
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
if obj.init.Debug {
obj.init.Logf("event!")
}
obj.init.Event() // notify engine of an event (this can block)
}
}
func (obj *HTTPServerUIInputRes) worldWatch(ctx context.Context) error {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
ch, err := obj.init.World.StrWatch(ctx, obj.getKey()) // get possible events!
if err != nil {
return errwrap.Wrapf(err, "error during watch")
}
obj.init.Running() // when started, notify engine that we're running
for {
select {
case err, ok := <-ch:
if !ok { // channel shutdown
return nil
}
if err != nil {
return errwrap.Wrapf(err, "unknown %s watcher error", obj)
}
obj.mutex.Lock()
obj.storeEvent = true
obj.mutex.Unlock()
case <-obj.event:
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
if obj.init.Debug {
obj.init.Logf("event!")
}
obj.init.Event() // notify engine of an event (this can block)
}
}
// CheckApply performs the send/recv portion of this autogrouped resource. That
// can fail, but only if the send portion fails for some reason. If we're using
// the Store feature, then it also reads and writes to and from that store.
func (obj *HTTPServerUIInputRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
if obj.init.Debug {
obj.init.Logf("CheckApply")
}
// If we're in ".Value" mode, we want to look at the incoming value, and
// send it onwards. This function mostly exists as a stub in this case.
// The private value gets set by obj.SetValue from the http:server:ui
// parent. If we're in ".Store" mode, then we're reconciling between the
// "World" and the http:server:ui "Web".
if obj.Store != "" {
return obj.storeCheckApply(ctx, apply)
}
return obj.valueCheckApply(ctx, apply)
}
func (obj *HTTPServerUIInputRes) valueCheckApply(ctx context.Context, apply bool) (bool, error) {
obj.mutex.Lock()
value := obj.value // gets set by obj.SetValue
obj.mutex.Unlock()
if obj.last != nil && *obj.last == value {
if err := obj.init.Send(&HTTPServerUIInputSends{
Value: &value,
}); err != nil {
return false, err
}
return true, nil // expected value has already been sent
}
if !apply {
if err := obj.init.Send(&HTTPServerUIInputSends{
Value: &value, // XXX: arbitrary since we're in noop mode
}); err != nil {
return false, err
}
return false, nil
}
s := value // copy
obj.last = &s // cache
// XXX: This is getting called twice, what's the bug?
obj.init.Logf("sending: %s", value)
// send
if err := obj.init.Send(&HTTPServerUIInputSends{
Value: &value,
}); err != nil {
return false, err
}
return false, nil
//return true, nil // always succeeds, with nothing to do!
}
// storeCheckApply is a tricky function where we attempt to reconcile the state
// between a third-party changing the value in the World database, and a recent
// "http:server:ui" change by an end user. Basically whichever side changed
// last holds the "right" value that we want to use. We know who sent the event
// by reading the storeEvent variable: if it was the World, we want to cache
// the value locally, and if it was the Web, then we want to push it up to the
// store.
func (obj *HTTPServerUIInputRes) storeCheckApply(ctx context.Context, apply bool) (bool, error) {
v1, exists, err := obj.storeGet(ctx, obj.getKey())
if err != nil {
return false, errwrap.Wrapf(err, "error during get")
}
obj.mutex.Lock()
v2 := obj.value // gets set by obj.SetValue
storeEvent := obj.storeEvent
obj.storeEvent = false // reset it
obj.mutex.Unlock()
if exists && v1 == v2 { // both sides are happy
if err := obj.init.Send(&HTTPServerUIInputSends{
Value: &v2,
}); err != nil {
return false, err
}
return true, nil
}
if !apply {
if err := obj.init.Send(&HTTPServerUIInputSends{
Value: &v2, // XXX: arbitrary since we're in noop mode
}); err != nil {
return false, err
}
return false, nil
}
obj.mutex.Lock()
if storeEvent { // event from World, pull down the value
err = obj.setValue(ctx, v1) // also sends an event
}
value := obj.value
obj.mutex.Unlock()
if err != nil {
return false, err
}
if !exists || !storeEvent { // event from web, push up the value
if err := obj.storeSet(ctx, obj.getKey(), value); err != nil {
return false, errwrap.Wrapf(err, "error during set")
}
}
obj.init.Logf("sending: %s", value)
// send
if err := obj.init.Send(&HTTPServerUIInputSends{
Value: &value,
}); err != nil {
return false, err
}
return false, nil
}
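// To summarize the reconciliation above as a rough decision table (derived
// from this function, not additional behaviour), in apply mode:
//
//	exists && v1 == v2       -> already in sync: send the value, return true
//	storeEvent               -> setValue(v1): pull the World value down
//	!exists || !storeEvent   -> storeSet(value): push our value up
//	finally                  -> send the value and return false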
func (obj *HTTPServerUIInputRes) storeGet(ctx context.Context, key string) (string, bool, error) {
if obj.Store != "" && obj.scheme == httpServerUIInputStoreSchemeLocal {
val, err := obj.init.Local.ValueGet(ctx, key)
if err != nil {
return "", false, err // real error
}
if val == nil { // if val is nil, and no error then it doesn't exist
return "", false, nil // val doesn't exist
}
s, ok := val.(string)
if !ok {
// TODO: support different types perhaps?
return "", false, fmt.Errorf("not a string") // real error
}
return s, true, nil
}
if obj.Store != "" && obj.scheme == httpServerUIInputStoreSchemeWorld {
val, err := obj.init.World.StrGet(ctx, key)
if err != nil && obj.init.World.StrIsNotExist(err) {
return "", false, nil // val doesn't exist
}
if err != nil {
return "", false, err // real error
}
return val, true, nil
}
return "", false, nil // something else
}
func (obj *HTTPServerUIInputRes) storeSet(ctx context.Context, key, val string) error {
if obj.Store != "" && obj.scheme == httpServerUIInputStoreSchemeLocal {
return obj.init.Local.ValueSet(ctx, key, val)
}
if obj.Store != "" && obj.scheme == httpServerUIInputStoreSchemeWorld {
return obj.init.World.StrSet(ctx, key, val)
}
return nil // something else
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *HTTPServerUIInputRes) Cmp(r engine.Res) error {
// we can only compare HTTPServerUIInputRes to others of the same resource kind
res, ok := r.(*HTTPServerUIInputRes)
if !ok {
return fmt.Errorf("res is not the same kind")
}
if obj.Path != res.Path {
return fmt.Errorf("the Path differs")
}
if obj.ID != res.ID {
return fmt.Errorf("the ID differs")
}
if obj.Value != res.Value {
return fmt.Errorf("the Value differs")
}
if obj.Store != res.Store {
return fmt.Errorf("the Store differs")
}
if obj.Type != res.Type {
return fmt.Errorf("the Type differs")
}
if obj.Sort != res.Sort {
return fmt.Errorf("the Sort differs")
}
return nil
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *HTTPServerUIInputRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes HTTPServerUIInputRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*HTTPServerUIInputRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to HTTPServerUIInputRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = HTTPServerUIInputRes(raw) // restore from indirection with type conversion!
return nil
}
// HTTPServerUIInputSends is the struct of data which is sent after a successful
// Apply.
type HTTPServerUIInputSends struct {
// Value is the text element value being sent.
Value *string `lang:"value"`
}
// Sends represents the default struct of values we can send using Send/Recv.
func (obj *HTTPServerUIInputRes) Sends() interface{} {
return &HTTPServerUIInputSends{
Value: nil,
}
}

View File

@@ -94,7 +94,7 @@ type KVRes struct {
// functions like `getval`, require this to be false, since they're
// pulling values directly out of the same namespace that is shared by
// all nodes.
Mapped bool
Mapped bool `lang:"mapped" yaml:"mapped"`
// SkipLessThan causes the value to be updated as long as it is greater.
SkipLessThan bool `lang:"skiplessthan" yaml:"skiplessthan"`
@@ -209,10 +209,8 @@ func (obj *KVRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
select {
// NOTE: this part is very similar to the file resource code
case err, ok := <-ch:
if !ok { // channel shutdown
return nil
@@ -223,19 +221,14 @@ func (obj *KVRes) Watch(ctx context.Context) error {
if obj.init.Debug {
obj.init.Logf("event!")
}
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// lessThanCheck checks for less than validity.
func (obj *KVRes) lessThanCheck(value string) (bool, error) {
@@ -275,7 +268,7 @@ func (obj *KVRes) lessThanCheck(value string) (bool, error) {
return false, nil
}
// CheckApply method for Password resource. Does nothing, returns happy!
// CheckApply method for the KV resource.
func (obj *KVRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
wg := &sync.WaitGroup{}
defer wg.Wait() // this must be above the defer cancel() call
@@ -294,12 +287,16 @@ func (obj *KVRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
if val, exists := obj.init.Recv()["value"]; exists && val.Changed {
// if we received on Value, and it changed, wooo, nothing to do.
obj.init.Logf("`value` was received!")
if obj.Value == nil {
obj.init.Logf("nil `value` was received!")
} else {
obj.init.Logf("`value` (%s) was received!", *obj.Value)
}
}
value, exists, err := obj.kvGet(ctx, obj.getKey())
if err != nil {
return false, errwrap.Wrapf(err, "error during get")
return false, errwrap.Wrapf(err, "error during kv get")
}
if exists && obj.Value != nil {
if value == *obj.Value {

388 engine/resources/line.go Normal file
View File

@@ -0,0 +1,388 @@
// Mgmt
// Copyright (C) James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//
// Additional permission under GNU GPL version 3 section 7
//
// If you modify this program, or any covered work, by linking or combining it
// with embedded mcl code and modules (and that the embedded mcl code and
// modules which link with this program, contain a copy of their source code in
// the authoritative form) containing parts covered by the terms of any other
// license, the licensors of this program grant you additional permission to
// convey the resulting work. Furthermore, the licensors of this program grant
// the original author, James Shubin, additional permission to update this
// additional permission if he deems it necessary to achieve the goals of this
// additional permission.
package resources
import (
"bufio"
"context"
"fmt"
"os"
"strings"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
"github.com/purpleidea/mgmt/util/errwrap"
"github.com/purpleidea/mgmt/util/recwatch"
)
func init() {
engine.RegisterResource("line", func() engine.Res { return &LineRes{} })
}
const (
// LineStateExists is the string that represents that the line should be
// present.
LineStateExists = "exists"
// LineStateAbsent is the string that represents that the line should
// not exist.
LineStateAbsent = "absent"
)
// LineRes is a simple resource that adds or removes a line of text from a file.
// For more complicated control over the file, use the regular File resource.
type LineRes struct {
traits.Base // add the base methods without re-implementation
init *engine.Init
// File is the absolute path to the file that we are managing.
// TODO: Allow the Name to be something like ${path}:some-contents ?
File string `lang:"file" yaml:"file"`
// State specifies the desired state of the line. It can be either
// `exists` or `absent`. If you do not specify this, we will not be able
// to create or remove a line.
State string `lang:"state" yaml:"state"`
// Content specifies the line contents to add or remove. If this is
// empty, then it does nothing.
Content string `lang:"content" yaml:"content"`
// Trim specifies that we will trim any whitespace from the beginning
// and end of the content. This makes it easier to pass in data from a
// file that ends with a newline, and avoid adding an unnecessary blank.
Trim bool `lang:"trim" yaml:"trim"`
// TODO: consider adding top or bottom insertion preferences?
// TODO: consider adding duplicate removal preferences?
}
// getContent is a simple helper to apply the trim field to the content.
func (obj *LineRes) getContent() string {
if !obj.Trim {
return obj.Content
}
return strings.TrimSpace(obj.Content)
}
// Default returns some sensible defaults for this resource.
func (obj *LineRes) Default() engine.Res {
return &LineRes{}
}
// Validate if the params passed in are valid data.
func (obj *LineRes) Validate() error {
if !strings.HasPrefix(obj.File, "/") {
return fmt.Errorf("the File must be absolute")
}
if strings.HasSuffix(obj.File, "/") {
return fmt.Errorf("the File must not end with a slash")
}
if obj.State != LineStateExists && obj.State != LineStateAbsent {
return fmt.Errorf("the State is invalid")
}
return nil
}
// Init runs some startup code for this resource.
func (obj *LineRes) Init(init *engine.Init) error {
obj.init = init // save for later
return nil
}
// Cleanup is run by the engine to clean up after the resource is done.
func (obj *LineRes) Cleanup() error {
return nil
}
// Watch is the primary listener for this resource and it outputs events.
func (obj *LineRes) Watch(ctx context.Context) error {
recWatcher, err := recwatch.NewRecWatcher(obj.File, false)
if err != nil {
return err
}
defer recWatcher.Close()
obj.init.Running() // when started, notify engine that we're running
for {
if obj.init.Debug {
obj.init.Logf("watching: %s", obj.File) // attempting to watch...
}
select {
case event, ok := <-recWatcher.Events():
if !ok { // channel shutdown
return nil
}
if err := event.Error; err != nil {
return errwrap.Wrapf(err, "unknown %s watcher error", obj)
}
if obj.init.Debug { // don't access event.Body if event.Error isn't nil
obj.init.Logf("event(%s): %v", event.Body.Name, event.Body.Op)
}
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
obj.init.Event() // notify engine of an event (this can block)
}
}
// CheckApply method for the Line resource. It checks for the line and adds or
// removes it as needed.
func (obj *LineRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
if obj.getContent() == "" { // special case
return true, nil // done early
}
exists, err := obj.check(ctx)
if err != nil {
return false, err
}
if obj.State == LineStateExists && exists {
return true, nil
}
if obj.State == LineStateAbsent && !exists {
return true, nil
}
if !apply {
return false, nil
}
if obj.State == LineStateAbsent { // remove
obj.init.Logf("removing line")
return obj.remove(ctx)
}
//if obj.State == LineStateExists { // add
//}
obj.init.Logf("adding line")
return obj.add(ctx)
}
// check returns true if it found a match, and false otherwise. It errors if
// something went permanently wrong. If the file doesn't exist, this returns
// false.
func (obj *LineRes) check(ctx context.Context) (bool, error) {
matchLines := strings.Split(obj.getContent(), "\n")
file, err := os.Open(obj.File)
if os.IsNotExist(err) {
return false, nil
}
if err != nil {
return false, err
}
defer file.Close()
// XXX: make a streaming version of this function without this cache
var fileLines []string
scanner := bufio.NewScanner(file)
for scanner.Scan() {
select {
case <-ctx.Done():
return false, ctx.Err()
default:
}
fileLines = append(fileLines, scanner.Text())
}
if err := scanner.Err(); err != nil {
return false, err
}
// XXX: add tests to make sure this is correct
for i := 0; i <= len(fileLines)-len(matchLines); i++ {
select {
case <-ctx.Done():
return false, ctx.Err()
default:
}
match := true
for j := 0; j < len(matchLines); j++ {
if fileLines[i+j] != matchLines[j] {
match = false
break
}
}
if match {
return true, nil // end early, we found a match!
}
}
return false, nil
}
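// A tiny worked example (with hypothetical file contents): if the file holds
// the lines ["alpha", "beta", "gamma"] and the content is "beta\ngamma", then
// matchLines has length two, the loop above slides a two-line window over the
// file, finds a full match starting at index 1, and check returns true.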
// remove returns true if it did nothing, and false if it removed a match. It
// errors if something went permanently wrong.
func (obj *LineRes) remove(ctx context.Context) (bool, error) {
matchLines := strings.Split(obj.getContent(), "\n")
file, err := os.Open(obj.File)
if err != nil {
return false, err
}
var fileLines []string
scanner := bufio.NewScanner(file)
for scanner.Scan() {
select {
case <-ctx.Done():
return false, ctx.Err()
default:
}
fileLines = append(fileLines, scanner.Text())
}
if err := scanner.Err(); err != nil {
file.Close() // don't leak
return false, err
}
file.Close() // close before we eventually write
// Check whether the file ends with a trailing newline. The scanner strips
// newlines from each line, so we have to look at the raw bytes to find out.
nl := ""
if b, err := os.ReadFile(obj.File); err == nil && strings.HasSuffix(string(b), "\n") {
nl = "\n"
}
// XXX: add tests to make sure this is correct
var newLines []string
i := 0
count := 0
for i < len(fileLines) {
select {
case <-ctx.Done():
return false, ctx.Err()
default:
}
match := true
if i+len(matchLines) <= len(fileLines) {
for j := 0; j < len(matchLines); j++ {
if fileLines[i+j] != matchLines[j] {
match = false
break
}
}
} else {
match = false
}
if match {
i += len(matchLines) // skip over the matched block
count += len(matchLines) // count the skips
} else {
newLines = append(newLines, fileLines[i])
i++
}
}
if count == 0 {
return true, nil // nothing removed!
}
// write out the updated file
output := strings.Join(newLines, "\n") + nl // preserve newline at EOF
return false, os.WriteFile(obj.File, []byte(output), 0600)
}
// add returns true if it did nothing, and false if it added a line. It errors
// if something went permanently wrong. It's not strictly required for it to
// avoid adding duplicates, but it's a nice feature, which is why it can return
// true.
// TODO: add at beginning or at end of file?
// XXX: do the duplicate check at the same time?
func (obj *LineRes) add(ctx context.Context) (bool, error) {
file, err := os.OpenFile(obj.File, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0600)
if err != nil {
return false, err
}
defer file.Close()
if _, err := file.WriteString(obj.getContent() + "\n"); err != nil {
return false, err
}
return false, nil
}
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *LineRes) Cmp(r engine.Res) error {
// we can only compare LineRes to others of the same resource kind
res, ok := r.(*LineRes)
if !ok {
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.File != res.File {
return fmt.Errorf("the File field differs")
}
if obj.State != res.State {
return fmt.Errorf("the State field differs")
}
if obj.Content != res.Content {
return fmt.Errorf("the Content field differs")
}
// TODO: We could technically compare obj.getContent() instead...
if obj.Trim != res.Trim {
return fmt.Errorf("the Trim field differs")
}
return nil
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *LineRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes LineRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*LineRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to LineRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = LineRes(raw) // restore from indirection with type conversion!
return nil
}

View File

@@ -253,7 +253,6 @@ func (obj *MountRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send bool
var done bool
for {
select {
@@ -272,8 +271,6 @@ func (obj *MountRes) Watch(ctx context.Context) error {
obj.init.Logf("event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case event, ok := <-ch:
if !ok {
if done {
@@ -286,19 +283,13 @@ func (obj *MountRes) Watch(ctx context.Context) error {
obj.init.Logf("event: %+v", event)
}
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// fstabCheckApply checks /etc/fstab for entries corresponding to the resource
// definition, and adds or deletes the entry as needed.

View File

@@ -121,19 +121,10 @@ func (obj *MsgRes) Cleanup() error {
func (obj *MsgRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
//var send = false // send event?
for {
select {
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
//if send {
// send = false
// obj.init.Event() // notify engine of an event (this can block)
//}
}
}
// isAllStateOK derives a compound state from all internal cache flags that

View File

@@ -320,7 +320,6 @@ func (obj *NetRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
var done bool
for {
select {
@@ -339,8 +338,6 @@ func (obj *NetRes) Watch(ctx context.Context) error {
obj.init.Logf("Event: %+v", s.msg)
}
send = true
case event, ok := <-recWatcher.Events():
if !ok {
if done {
@@ -356,19 +353,13 @@ func (obj *NetRes) Watch(ctx context.Context) error {
obj.init.Logf("Event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// ifaceCheckApply checks the state of the network device and brings it up or
// down as necessary.

View File

@@ -183,12 +183,13 @@ func (obj *NspawnRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
select {
case event := <-busChan:
// process org.freedesktop.machine1 events for this resource's name
if event.Body[0] == obj.Name() {
if event.Body[0] != obj.Name() {
continue
}
obj.init.Logf("Event received: %v", event.Name)
if event.Name == machineNew {
obj.init.Logf("Machine started")
@@ -197,20 +198,14 @@ func (obj *NspawnRes) Watch(ctx context.Context) error {
} else {
return fmt.Errorf("unknown event: %s", event.Name)
}
send = true
}
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply is run to check the state and, if apply is true, to apply the
// necessary changes to reach the desired state. This is run before Watch and

View File

@@ -41,6 +41,7 @@ import (
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
engineUtil "github.com/purpleidea/mgmt/engine/util"
"github.com/purpleidea/mgmt/util/errwrap"
"github.com/purpleidea/mgmt/util/recwatch"
)
@@ -115,6 +116,8 @@ func (obj *PasswordRes) Cleanup() error {
return nil
}
// read is a helper to read the data from disk. This is similar to an engineUtil
// function named ReadData but is kept separate for safety anyways.
func (obj *PasswordRes) read() (string, error) {
file, err := os.Open(obj.path) // open a handle to read the file
if err != nil {
@@ -128,14 +131,28 @@ func (obj *PasswordRes) read() (string, error) {
return strings.TrimSpace(string(data)), nil
}
// write is a helper to store the data on disk. This is similar to an engineUtil
// function named WriteData but is kept separate for safety anyways.
func (obj *PasswordRes) write(password string) (int, error) {
file, err := os.Create(obj.path) // open a handle to create the file
uid, gid, err := engineUtil.GetUIDGID()
if err != nil {
return -1, err
}
// Chmod it before we write the secret data.
file, err := os.OpenFile(obj.path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
if err != nil {
return -1, errwrap.Wrapf(err, "can't create file")
}
defer file.Close()
var c int
if c, err = file.Write([]byte(password + newline)); err != nil {
// Chown it before we write the secret data.
if err := file.Chown(uid, gid); err != nil {
return -1, err
}
c, err := file.Write([]byte(password + newline))
if err != nil {
return c, errwrap.Wrapf(err, "can't write file")
}
return c, file.Sync()
@@ -205,7 +222,6 @@ func (obj *PasswordRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
select {
// NOTE: this part is very similar to the file resource code
@@ -216,19 +232,14 @@ func (obj *PasswordRes) Watch(ctx context.Context) error {
if err := event.Error; err != nil {
return errwrap.Wrapf(err, "unknown %s watcher error", obj)
}
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply method for Password resource. Does nothing, returns happy!
func (obj *PasswordRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
@@ -269,11 +280,21 @@ func (obj *PasswordRes) CheckApply(ctx context.Context, apply bool) (bool, error
//}
if !refresh && exists && !generate && !write { // nothing to do, done!
if err := obj.init.Send(&PasswordSends{
Password: &password,
}); err != nil {
return false, err
}
return true, nil
}
// a refresh was requested, the token doesn't exist, or the check failed
if !apply {
if err := obj.init.Send(&PasswordSends{
Password: &password, // XXX: arbitrary since we're in noop mode
}); err != nil {
return false, err
}
return false, nil
}

View File

@@ -150,7 +150,6 @@ func (obj *PkgRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
if obj.init.Debug {
obj.init.Logf("%s: Watching...", obj.fmtNames(obj.getNames()))
@@ -169,19 +168,13 @@ func (obj *PkgRes) Watch(ctx context.Context) error {
<-ch // discard
}
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// get list of names when grouped or not
func (obj *PkgRes) getNames() []string {

View File

@@ -45,6 +45,7 @@ import (
"time"
"github.com/purpleidea/mgmt/engine"
engineUtil "github.com/purpleidea/mgmt/engine/util"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util"
"github.com/purpleidea/mgmt/util/errwrap"
@@ -500,6 +501,7 @@ func TestResources1(t *testing.T) {
doneCtx, doneCtxCancel := context.WithCancel(context.Background())
defer doneCtxCancel()
tmpdir := fmt.Sprintf("%s/", t.TempDir()) // gets cleaned up at end, new dir for each call
debug := testing.Verbose() // set via the -test.v flag to `go test`
logf := func(format string, v ...interface{}) {
t.Logf(fmt.Sprintf("test #%d: ", index)+format, v...)
@@ -520,6 +522,10 @@ func TestResources1(t *testing.T) {
}
},
VarDir: func(p string) (string, error) {
return path.Join(tmpdir, p), nil
},
// Watch listens on this for close/pause events.
Debug: debug,
Logf: logf,
@@ -804,9 +810,9 @@ func TestResources2(t *testing.T) {
}
return resCheckApplyError(res, expCheckOK, errOK)
}
// resCleanup runs CLeanup on the res.
// resCleanup runs Cleanup on the res.
resCleanup := func(res engine.Res) func() error {
// run CLeanup
// run Cleanup
return func() error {
return res.Cleanup()
}
@@ -1682,7 +1688,7 @@ func TestResources2(t *testing.T) {
fileAbsent(d2f1),
fileAbsent(d2f2),
fileAbsent(d2f3),
fileExists(p2, false), // ensure it's a file XXX !!!
fileExists(p2, false), // ensure it's a file
fileExists(p3, true), // ensure it's a dir
fileExists(p4, false),
resCheckApply(r1, true), // it's already good
@@ -1777,3 +1783,47 @@ func TestResPtrUID1(t *testing.T) {
t.Errorf("uid's don't match")
}
}
func TestResToB64(t *testing.T) {
res, err := engine.NewNamedResource("noop", "n1")
if err != nil {
t.Errorf("could not build resource: %+v", err)
return
}
s, err := engineUtil.ResToB64(res)
if err != nil {
t.Errorf("error trying to encode res: %s", err.Error())
return
}
t.Logf("out: %s", s)
}
func TestResToB64Meta(t *testing.T) {
hidden := true // must be true, since false is a default
res, err := engine.NewNamedResource("noop", "n1")
if err != nil {
t.Errorf("could not build resource: %+v", err)
return
}
res.MetaParams().Hidden = hidden
s, err := engineUtil.ResToB64(res)
if err != nil {
t.Errorf("error trying to encode res: %s", err.Error())
return
}
t.Logf("out: %s", s)
r, err := engineUtil.B64ToRes(s)
if err != nil {
t.Errorf("error trying to decode res: %s", err.Error())
return
}
if r.MetaParams().Hidden != hidden {
t.Errorf("metaparam did not get preserved")
return
}
t.Logf("meta: %v", r.MetaParams().Hidden)
}

View File

@@ -0,0 +1,349 @@
// Mgmt
// Copyright (C) James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//
// Additional permission under GNU GPL version 3 section 7
//
// If you modify this program, or any covered work, by linking or combining it
// with embedded mcl code and modules (and that the embedded mcl code and
// modules which link with this program, contain a copy of their source code in
// the authoritative form) containing parts covered by the terms of any other
// license, the licensors of this program grant you additional permission to
// convey the resulting work. Furthermore, the licensors of this program grant
// the original author, James Shubin, additional permission to update this
// additional permission if he deems it necessary to achieve the goals of this
// additional permission.
package resources
import (
"context"
"fmt"
"sync"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
"github.com/purpleidea/mgmt/etcd/scheduler" // XXX: abstract this if possible
"github.com/purpleidea/mgmt/util/errwrap"
)
func init() {
engine.RegisterResource("schedule", func() engine.Res { return &ScheduleRes{} })
}
// ScheduleRes is a resource which starts up a "distributed scheduler". All
// nodes of the same namespace will be part of the same scheduling pool. The
// scheduling result can be determined by using the "schedule" function. If the
// options specified are different among peers in the same namespace, then it is
// undefined which options if any will get chosen.
type ScheduleRes struct {
traits.Base // add the base methods without re-implementation
init *engine.Init
world engine.SchedulerWorld
// Namespace represents the namespace key to use. If it is not
// specified, the Name value is used instead.
Namespace string `lang:"namespace" yaml:"namespace"`
// Strategy is the scheduling strategy to use. If this value is nil or
// undefined, then a default will be chosen automatically.
Strategy *string `lang:"strategy" yaml:"strategy"`
// Max is the max number of hosts to elect. If this is unspecified, then
// a default of 1 is used.
Max *int `lang:"max" yaml:"max"`
// Reuse specifies that we reuse the client lease on reconnect. If reuse
// is false, then on host disconnect, that host's entry will immediately
// expire, and the scheduler will react instantly and remove that host
// entry from the list. If this is true, or if the host closes without a
// clean shutdown, it will take the TTL number of seconds to remove the
// entry.
Reuse *bool `lang:"reuse" yaml:"reuse"`
// TTL is the time to live for added scheduling "votes". If this value
// is nil or undefined, then a default value is used. See the `Reuse`
// entry for more information.
TTL *int `lang:"ttl" yaml:"ttl"`
// once is the startup signal for the scheduler
once chan struct{}
}
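// As a rough illustration only (the values here are hypothetical), the
// optional fields are pointers so that "unset" can be distinguished from a
// zero value:
//
//	max, ttl := 2, 10
//	sched := &ScheduleRes{
//		Namespace: "pool",
//		Max:       &max,
//		TTL:       &ttl,
//	}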
// getNamespace returns the namespace key to be used for this resource. If the
// Namespace field is specified, it will use that, otherwise it uses the Name.
func (obj *ScheduleRes) getNamespace() string {
if obj.Namespace != "" {
return obj.Namespace
}
return obj.Name()
}
func (obj *ScheduleRes) getOpts() []scheduler.Option {
schedulerOpts := []scheduler.Option{}
// don't add bad or zero-value options
defaultStrategy := true
if obj.Strategy != nil && *obj.Strategy != "" {
strategy := *obj.Strategy
if obj.init.Debug {
obj.init.Logf("opts: strategy: %s", strategy)
}
defaultStrategy = false
schedulerOpts = append(schedulerOpts, scheduler.StrategyKind(strategy))
}
if defaultStrategy { // we always need to add one!
schedulerOpts = append(schedulerOpts, scheduler.StrategyKind(scheduler.DefaultStrategy))
}
if obj.Max != nil && *obj.Max > 0 {
max := *obj.Max
// TODO: check for overflow
if obj.init.Debug {
obj.init.Logf("opts: max: %d", max)
}
schedulerOpts = append(schedulerOpts, scheduler.MaxCount(max))
}
if obj.Reuse != nil {
reuse := *obj.Reuse
if obj.init.Debug {
obj.init.Logf("opts: reuse: %t", reuse)
}
schedulerOpts = append(schedulerOpts, scheduler.ReuseLease(reuse))
}
if obj.TTL != nil && *obj.TTL > 0 {
ttl := *obj.TTL
// TODO: check for overflow
if obj.init.Debug {
obj.init.Logf("opts: ttl: %d", ttl)
}
schedulerOpts = append(schedulerOpts, scheduler.SessionTTL(ttl))
}
return schedulerOpts
}
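// For example (hypothetical field values, and assuming the strategy string is
// one the scheduler package accepts), Strategy="rr", Max=2 and TTL=10 would
// yield roughly:
//
//	[]scheduler.Option{
//		scheduler.StrategyKind("rr"),
//		scheduler.MaxCount(2),
//		scheduler.SessionTTL(10),
//	}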
// Default returns some sensible defaults for this resource.
func (obj *ScheduleRes) Default() engine.Res {
return &ScheduleRes{}
}
// Validate if the params passed in are valid data.
func (obj *ScheduleRes) Validate() error {
if obj.getNamespace() == "" {
return fmt.Errorf("the Namespace must not be empty")
}
return nil
}
// Init initializes the resource.
func (obj *ScheduleRes) Init(init *engine.Init) error {
obj.init = init // save for later
world, ok := obj.init.World.(engine.SchedulerWorld)
if !ok {
return fmt.Errorf("world backend does not support the SchedulerWorld interface")
}
obj.world = world
obj.once = make(chan struct{}, 1) // buffered!
return nil
}
// Cleanup is run by the engine to clean up after the resource is done.
func (obj *ScheduleRes) Cleanup() error {
return nil
}
// Watch is the primary listener for this resource and it outputs events.
func (obj *ScheduleRes) Watch(ctx context.Context) error {
wg := &sync.WaitGroup{}
defer wg.Wait()
ctx, cancel := context.WithCancel(ctx)
defer cancel()
obj.init.Running() // when started, notify engine that we're running
select {
case <-obj.once:
// pass
case <-ctx.Done():
return ctx.Err()
}
if obj.init.Debug {
obj.init.Logf("starting scheduler...")
}
sched, err := obj.world.Scheduler(obj.getNamespace(), obj.getOpts()...)
if err != nil {
return errwrap.Wrapf(err, "can't create scheduler")
}
watchChan := make(chan *scheduler.ScheduledResult)
wg.Add(1)
go func() {
defer wg.Done()
defer sched.Shutdown()
select {
case <-ctx.Done():
return
}
}()
// process the stream of scheduling output...
wg.Add(1)
go func() {
defer wg.Done()
defer close(watchChan)
for {
hosts, err := sched.Next(ctx)
select {
case watchChan <- &scheduler.ScheduledResult{
Hosts: hosts,
Err: err,
}:
case <-ctx.Done():
return
}
}
}()
for {
select {
case result, ok := <-watchChan:
if !ok { // channel shutdown
return nil
}
if result == nil {
return fmt.Errorf("unexpected nil result")
}
if err := result.Err; err != nil {
if err == scheduler.ErrEndOfResults {
//return nil // TODO: we should probably fix the reconnect issue and use this here
return fmt.Errorf("scheduler shutdown, reconnect bug?") // XXX: fix etcd reconnects
}
return errwrap.Wrapf(err, "channel watch failed on `%s`", obj.getNamespace())
}
if obj.init.Debug {
obj.init.Logf("event!")
}
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
obj.init.Event() // notify engine of an event (this can block)
}
}
// CheckApply method for resource.
func (obj *ScheduleRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
// For maximum correctness, don't start scheduling myself until this
// CheckApply runs at least once. Effectively this unblocks Watch() once
// it has run. If we didn't do this, then illogical graphs could happen
// where we have an edge like Foo["whatever"] -> Schedule["bar"] and if
// Foo failed, we'd still be scheduling, which is not what we want.
select {
case obj.once <- struct{}{}:
default: // if buffer is full
}
// FIXME: If we wanted to be really fancy, we could wait until the write
// to the scheduler (etcd) finished before we returned true.
return true, nil
}
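// The once channel above is a one-shot, non-blocking handoff: the buffered
// send succeeds exactly once, and Watch only proceeds past its initial receive
// after the first CheckApply has run. A stripped-down sketch of the same
// pattern:
//
//	once := make(chan struct{}, 1)
//	// signaller (CheckApply):
//	select {
//	case once <- struct{}{}:
//	default: // already signalled, nothing to do
//	}
//	// waiter (Watch):
//	<-once // blocks until the first signal arrives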
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *ScheduleRes) Cmp(r engine.Res) error {
// we can only compare ScheduleRes to others of the same resource kind
res, ok := r.(*ScheduleRes)
if !ok {
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.getNamespace() != res.getNamespace() {
return fmt.Errorf("the Namespace differs")
}
if (obj.Strategy == nil) != (res.Strategy == nil) { // xor
return fmt.Errorf("the Strategy differs")
}
if obj.Strategy != nil && res.Strategy != nil {
if *obj.Strategy != *res.Strategy { // compare the values
return fmt.Errorf("the contents of Strategy differs")
}
}
if (obj.Max == nil) != (res.Max == nil) { // xor
return fmt.Errorf("the Max differs")
}
if obj.Max != nil && res.Max != nil {
if *obj.Max != *res.Max { // compare the values
return fmt.Errorf("the contents of Max differs")
}
}
if (obj.Reuse == nil) != (res.Reuse == nil) { // xor
return fmt.Errorf("the Reuse differs")
}
if obj.Reuse != nil && res.Reuse != nil {
if *obj.Reuse != *res.Reuse { // compare the values
return fmt.Errorf("the contents of Reuse differs")
}
}
if (obj.TTL == nil) != (res.TTL == nil) { // xor
return fmt.Errorf("the TTL differs")
}
if obj.TTL != nil && res.TTL != nil {
if *obj.TTL != *res.TTL { // compare the values
return fmt.Errorf("the contents of TTL differs")
}
}
return nil
}
// UnmarshalYAML is the custom unmarshal handler for this struct. It is
// primarily useful for setting the defaults.
func (obj *ScheduleRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes ScheduleRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*ScheduleRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to ScheduleRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = ScheduleRes(raw) // restore from indirection with type conversion!
return nil
}

View File

@@ -36,6 +36,7 @@ import (
"fmt"
"os/user"
"path"
"time"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
@@ -139,21 +140,27 @@ func (obj *SvcRes) Cleanup() error {
return nil
}
// svc is a helper that returns the systemd name.
func (obj *SvcRes) svc() string {
return fmt.Sprintf("%s.service", obj.Name())
}
// Watch is the primary listener for this resource and it outputs events.
func (obj *SvcRes) Watch(ctx context.Context) error {
// obj.Name: svc name
if !systemdUtil.IsRunningSystemd() {
return fmt.Errorf("systemd is not running")
}
ctx, cancel := context.WithCancel(ctx)
defer cancel() // make sure we always close any below ctx just in case!
var conn *systemd.Conn
var bus *dbus.Conn
var err error
if obj.Session {
conn, err = systemd.NewUserConnection() // user session
conn, err = systemd.NewUserConnectionContext(ctx) // user session
} else {
// we want NewSystemConnection but New falls back to this
conn, err = systemd.New() // needs root access
// we want NewSystemConnectionContext but New... falls back to this
conn, err = systemd.NewWithContext(ctx) // needs root access
}
if err != nil {
return errwrap.Wrapf(err, "failed to connect to systemd")
@@ -161,6 +168,7 @@ func (obj *SvcRes) Watch(ctx context.Context) error {
defer conn.Close()
// if we share the bus with others, we will get each others messages!!
var bus *dbus.Conn
if obj.Session {
bus, err = util.SessionBusPrivateUsable()
} else {
@@ -171,124 +179,179 @@ func (obj *SvcRes) Watch(ctx context.Context) error {
}
defer bus.Close()
// XXX: will this detect new units?
bus.BusObject().Call("org.freedesktop.DBus.AddMatch", 0,
"type='signal',interface='org.freedesktop.systemd1.Manager',member='Reloading'")
buschan := make(chan *dbus.Signal, 10)
defer close(buschan) // NOTE: closing a chan that contains a value is ok
bus.Signal(buschan)
defer bus.RemoveSignal(buschan) // not needed here, but nice for symmetry
// NOTE: I guess it's not the worst-case scenario if we drop a signal or
// if it fills up and we block. Whichever way the upstream implements it,
// we'll have a backlog of signals to loop through, which is just fine.
chBus := make(chan *dbus.Signal, 10) // TODO: what size if any?
defer close(chBus) // NOTE: closing a chan that contains a value is ok
bus.Signal(chBus)
defer bus.RemoveSignal(chBus) // not needed here, but nice for symmetry
// Legacy way to do this matching...
//method := "org.freedesktop.DBus.AddMatch"
//flags := dbus.Flags(0)
//args := []interface{}{"type='signal',interface='org.freedesktop.systemd1.Manager',member='Reloading'"}
//call := bus.BusObject().CallWithContext(ctx, method, flags, args...) // *dbus.Call
//if err := call.Err; err != nil {
// return errwrap.Wrapf(err, "failed to connect signal on bus")
//}
matchOptions := []dbus.MatchOption{
dbus.WithMatchInterface("org.freedesktop.systemd1.Manager"),
dbus.WithMatchMember("Reloading"),
}
if err := bus.AddMatchSignalContext(ctx, matchOptions...); err != nil {
return errwrap.Wrapf(err, "failed to add match signal on bus")
}
defer func() {
// On shutdown, we prefer to give this a chance to run. If we
// use the main ctx, then it will error because ctx cancelled.
ctx, cancel := context.WithTimeout(context.Background(), 1000*time.Millisecond)
defer cancel()
if err := bus.RemoveMatchSignalContext(ctx, matchOptions...); err != nil {
obj.init.Logf("failed to remove match signal on bus: %+v", err)
}
}()
obj.init.Running() // when started, notify engine that we're running
var svc = fmt.Sprintf("%s.service", obj.Name()) // systemd name
var send = false // send event?
var invalid = false // does the svc exist or not?
var previous bool // previous invalid value
svc := obj.svc() // systemd name
// TODO: do we first need to call conn.Subscribe() ?
set := conn.NewSubscriptionSet() // no error should be returned
subChannel, subErrors := set.Subscribe()
//defer close(subChannel) // cannot close receive-only channel
//defer close(subErrors) // cannot close receive-only channel
var activeSet = false
// XXX: dynamic bugs: https://github.com/coreos/go-systemd/issues/474
set.Add(svc) // it's okay if the svc doesn't exist yet
chSub, chSubErr := set.Subscribe()
//defer close(chSub) // cannot close receive-only channel
//defer close(chSubErr) // cannot close receive-only channel
//chSubClosed := false
//chSubErrClosed := false
for {
// XXX: watch for an event for new units...
// XXX: detect if startup enabled/disabled value changes...
//if chSubClosed && chSubErrClosed {
//
//}
previous = invalid
invalid = false
// firstly, does svc even exist or not?
loadstate, err := conn.GetUnitPropertyContext(ctx, svc, "LoadState")
if err != nil {
obj.init.Logf("failed to get property: %+v", err)
invalid = true
}
if !invalid {
var notFound = (loadstate.Value == dbus.MakeVariant("not-found"))
if notFound { // XXX: in the loop we'll handle changes better...
obj.init.Logf("failed to find svc")
invalid = true // XXX: ?
}
}
if previous != invalid { // if invalid changed, send signal
send = true
}
if invalid {
if obj.init.Debug {
obj.init.Logf("waiting for service") // waiting for svc to appear...
obj.init.Logf("watching...")
}
if activeSet {
activeSet = false
set.Remove(svc) // no return value should ever occur
}
select {
case <-buschan: // XXX: wait for new units event to unstick
// loop so that we can see the changed invalid signal
obj.init.Logf("daemon reload")
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
case sig, ok := <-chBus:
if !ok {
chBus = nil
return fmt.Errorf("unexpected close") // we close this one!
}
} else {
if !activeSet {
activeSet = true
set.Add(svc) // no return value should ever occur
if obj.init.Debug {
obj.init.Logf("sig: %+v", sig)
}
//obj.init.Logf("watching...") // attempting to watch...
select {
case event := <-subChannel:
// This event happens if we `systemctl daemon-reload` or
// if `systemctl enable/disable <svc>` is run. For both
// of these situations we seem to always get two events.
// The first seems to have `Body:[true]`, and the second
// has `Body:[false]`.
// https://pkg.go.dev/github.com/godbus/dbus/v5#Signal
//eg: &{Sender::1.287 Path:/org/freedesktop/systemd1 Name:org.freedesktop.systemd1.Manager.Reloading Body:[false] Sequence:7}
if sig.Name != "org.freedesktop.systemd1.Manager.Reloading" {
// not for us
continue
}
if len(sig.Body) == 0 {
// does this ever happen? send a signal for now
obj.init.Logf("daemon reload with empty body")
break // break out of select and send event now
}
if len(sig.Body) > 1 {
// does this ever happen? send a signal for now
obj.init.Logf("daemon reload with big body")
break // break out of select and send event now
}
b, ok := sig.Body[0].(bool)
if !ok {
// does this ever happen? send a signal for now
obj.init.Logf("daemon reload with badly typed body")
break // break out of select and send event now
}
// We do all of this annoying parsing to cut our event
// count by half, since these signals seem to come in
// pairs. We skip the "true" one that comes first.
if b {
if obj.init.Debug {
obj.init.Logf("skipping daemon-reload start")
}
continue
}
if obj.init.Debug {
obj.init.Logf("daemon reload") // success!
}
case event, ok := <-chSub:
if !ok {
chSub = nil
//chSubClosed = true
continue
}
if obj.init.Debug {
obj.init.Logf("event: %+v", event)
}
// NOTE: the value returned is a map for some reason...
if event[svc] != nil {
// event[svc].ActiveState is not nil
switch event[svc].ActiveState {
// The value returned is a map in case we monitor many.
unitStatus, ok := event[svc]
if !ok { // not me
continue
}
if unitStatus == nil {
if obj.init.Debug {
obj.init.Logf("service stopped")
}
break // break out of select and send event now
}
msg := ""
switch event[svc].ActiveState { // string
case "active":
obj.init.Logf("started")
msg = "service started"
case "inactive":
obj.init.Logf("stopped")
msg = "service stopped"
case "reloading":
obj.init.Logf("reloading")
msg = "service reloading"
case "failed":
obj.init.Logf("failed")
msg = "service failed"
case "activating":
obj.init.Logf("activating")
msg = "service activating"
case "deactivating":
obj.init.Logf("deactivating")
msg = "service deactivating"
default:
return fmt.Errorf("unknown svc state: %s", event[svc].ActiveState)
return fmt.Errorf("unknown service state: %s", event[svc].ActiveState)
}
} else {
// svc stopped (and ActiveState is nil...)
obj.init.Logf("stopped")
if obj.init.Debug {
obj.init.Logf("%s", msg)
}
send = true
case err := <-subErrors:
return errwrap.Wrapf(err, "unknown %s error", obj)
case err, ok := <-chSubErr:
if !ok {
chSubErr = nil
//chSubErrClosed = true
continue
}
if err == nil {
obj.init.Logf("unexpected nil error")
continue
}
return errwrap.Wrapf(err, "unknown error")
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
return ctx.Err()
}
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
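
The Watch loop above is built from two go-systemd D-Bus primitives: a SubscriptionSet that streams per-unit status maps, and GetUnitPropertyContext for one-off property reads. Below is a minimal, standalone sketch of the same subscription pattern, assuming the coreos/go-systemd/v22 dbus package and a system bus connection; the unit name is only an example.

package main

import (
    "context"
    "fmt"
    "log"

    sdbus "github.com/coreos/go-systemd/v22/dbus"
)

func main() {
    ctx := context.Background()
    conn, err := sdbus.NewWithContext(ctx) // system bus; needs root
    if err != nil {
        log.Fatalf("failed to connect to systemd: %v", err)
    }
    defer conn.Close()

    const unit = "sshd.service" // example unit name

    set := conn.NewSubscriptionSet() // filtered subscription
    set.Add(unit)                    // okay if the unit doesn't exist yet
    statusChan, errChan := set.Subscribe()

    for {
        select {
        case m := <-statusChan:
            // The map is keyed by unit name; a nil value appears to
            // mean the unit stopped or went away (see the resource
            // code above).
            st, ok := m[unit]
            if !ok {
                continue // not about our unit
            }
            if st == nil {
                fmt.Println("unit went away")
                continue
            }
            fmt.Println("ActiveState:", st.ActiveState)
        case err := <-errChan:
            log.Printf("subscription error: %v", err)
        case <-ctx.Done():
            return
        }
    }
}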
// CheckApply checks the resource state and applies the resource if the bool
// input is true. It returns error info and if the state check passed or not.
@@ -297,43 +360,64 @@ func (obj *SvcRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
return false, fmt.Errorf("systemd is not running")
}
ctx, cancel := context.WithCancel(ctx)
defer cancel() // make sure we always close any below ctx just in case!
var conn *systemd.Conn
var err error
if obj.Session {
conn, err = systemd.NewUserConnection() // user session
conn, err = systemd.NewUserConnectionContext(ctx) // user session
} else {
// we want NewSystemConnection but New falls back to this
conn, err = systemd.New() // needs root access
// we want NewSystemConnectionContext but New... falls back to this
conn, err = systemd.NewWithContext(ctx) // needs root access
}
if err != nil {
return false, errwrap.Wrapf(err, "failed to connect to systemd")
}
defer conn.Close()
var svc = fmt.Sprintf("%s.service", obj.Name()) // systemd name
// if we share the bus with others, we will get each others messages!!
//var bus *dbus.Conn
//if obj.Session {
// bus, err = util.SessionBusPrivateUsable()
//} else {
// bus, err = util.SystemBusPrivateUsable()
//}
//if err != nil {
// return errwrap.Wrapf(err, "failed to connect to bus")
//}
//defer bus.Close()
loadstate, err := conn.GetUnitPropertyContext(ctx, svc, "LoadState")
svc := obj.svc() // systemd name
loadState, err := conn.GetUnitPropertyContext(ctx, svc, "LoadState")
if err != nil {
return false, errwrap.Wrapf(err, "failed to get load state")
}
// NOTE: we have to compare variants with other variants, they are really strings...
var notFound = (loadstate.Value == dbus.MakeVariant("not-found"))
notFound := (loadState.Value == dbus.MakeVariant("not-found"))
if notFound {
return false, errwrap.Wrapf(err, "failed to find svc: %s", svc)
}
// XXX: check svc "enabled at boot" or not status...
//conn.GetUnitPropertiesContexts(svc)
activestate, err := conn.GetUnitPropertyContext(ctx, svc, "ActiveState")
activeState, err := conn.GetUnitPropertyContext(ctx, svc, "ActiveState")
if err != nil {
return false, errwrap.Wrapf(err, "failed to get active state")
}
var running = (activestate.Value == dbus.MakeVariant("active"))
var stateOK = ((obj.State == "") || (obj.State == "running" && running) || (obj.State == "stopped" && !running))
var startupOK = true // XXX: DETECT AND SET
running := (activeState.Value == dbus.MakeVariant("active"))
stateOK := ((obj.State == "") || (obj.State == "running" && running) || (obj.State == "stopped" && !running))
startupState, err := conn.GetUnitPropertyContext(ctx, svc, "UnitFileState")
if err != nil {
return false, errwrap.Wrapf(err, "failed to get unit file state")
}
enabled := (startupState.Value == dbus.MakeVariant("enabled"))
disabled := (startupState.Value == dbus.MakeVariant("disabled"))
startupOK := ((obj.Startup == "") || (obj.Startup == "enabled" && enabled) || (obj.Startup == "disabled" && disabled))
// NOTE: if this svc resource is embedded as a composite resource inside
// of another resource using a technique such as `makeComposite()`, then
@@ -344,7 +428,7 @@ func (obj *SvcRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
// trait to the parent resource, or we'll panic when we call this line.)
// It might not be recommended to use the Watch method without a thought
// to what actually happens when we would run Send(), and other methods.
var refresh = obj.init.Refresh() // do we have a pending reload to apply?
refresh := obj.init.Refresh() // do we have a pending reload to apply?
if stateOK && startupOK && !refresh {
return true, nil // we are in the correct state
@@ -356,58 +440,105 @@ func (obj *SvcRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
}
// apply portion
if !startupOK && obj.Startup != "" {
files := []string{svc} // the svc represented in a list
if obj.Startup == "enabled" {
_, _, err = conn.EnableUnitFilesContext(ctx, files, false, true)
} else if obj.Startup == "disabled" {
_, err = conn.DisableUnitFilesContext(ctx, files, false)
} else {
// pass
}
if err != nil {
return false, errwrap.Wrapf(err, "unable to change startup status")
}
if obj.Startup == "enabled" {
obj.init.Logf("service enabled")
} else if obj.Startup == "disabled" {
obj.init.Logf("service disabled")
}
}
// XXX: do we need to use a buffered channel here?
result := make(chan string, 1) // catch result information
defer close(result)
var status string
var ok bool
if !stateOK && obj.State != "" {
if obj.State == "running" {
_, err = conn.StartUnitContext(ctx, svc, SystemdUnitModeFail, result)
} else if obj.State == "stopped" {
_, err = conn.StopUnitContext(ctx, svc, SystemdUnitModeFail, result)
} else { // skip through this section
// TODO: should we do anything here instead?
result <- "" // chan is buffered, so won't block
}
if err != nil {
return false, errwrap.Wrapf(err, "unable to change running status")
}
if refresh {
obj.init.Logf("Skipping reload, due to pending start/stop")
obj.init.Logf("skipping reload, due to pending start/stop")
}
refresh = false // We did a start or stop, so a reload is not needed.
// TODO: Do we need a timeout here?
// TODO: Should we permanently error after a long timeout here?
for {
warn := true // warn once
select {
case status = <-result:
case status, ok = <-result:
if !ok {
return false, fmt.Errorf("unexpected closed channel during start/stop")
}
break
case <-time.After(10 * time.Second):
if warn {
obj.init.Logf("service start/stop is slow...")
}
warn = false
continue
case <-ctx.Done():
return false, ctx.Err()
}
if &status == nil {
return false, fmt.Errorf("systemd service action result is nil")
}
switch status {
case SystemdUnitResultDone:
// pass
case SystemdUnitResultFailed:
return false, fmt.Errorf("svc failed (selinux?)")
default:
return false, fmt.Errorf("unknown systemd return string: %v", status)
break // don't loop forever
}
// XXX: also set enabled on boot
switch status {
case "":
// pass
case SystemdUnitResultDone:
if obj.State == "running" {
obj.init.Logf("service started")
} else if obj.State == "stopped" {
obj.init.Logf("service stopped")
}
case SystemdUnitResultCanceled:
// TODO: should this be context.Canceled?
return false, fmt.Errorf("operation cancelled")
case SystemdUnitResultTimeout:
return false, fmt.Errorf("operation timed out")
case SystemdUnitResultFailed:
return false, fmt.Errorf("svc failed (selinux?)")
default:
return false, fmt.Errorf("unknown systemd return string: %s", status)
}
}
if !refresh { // Do we need to reload the service?
return false, nil // success
}
obj.init.Logf("Reloading...")
if obj.init.Debug {
obj.init.Logf("reloading...")
}
// From: https://www.freedesktop.org/software/systemd/man/latest/org.freedesktop.systemd1.html
// If a service is restarted that isn't running, it will be started
@@ -418,17 +549,46 @@ func (obj *SvcRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
return false, errwrap.Wrapf(err, "failed to reload unit")
}
// TODO: Do we need a timeout here?
// TODO: Should we permanently error after a long timeout here?
for {
warn := true // warn once
select {
case status = <-result:
case status, ok = <-result:
if !ok {
return false, fmt.Errorf("unexpected closed channel during reload")
}
break
case <-time.After(10 * time.Second):
if warn {
obj.init.Logf("service start/stop is slow...")
}
warn = false
continue
case <-ctx.Done():
return false, ctx.Err()
}
break // don't loop forever
}
switch status {
case SystemdUnitResultDone:
case "":
// pass
case SystemdUnitResultDone:
obj.init.Logf("service reloaded")
case SystemdUnitResultCanceled:
// TODO: should this be context.Canceled?
return false, fmt.Errorf("operation cancelled")
case SystemdUnitResultTimeout:
return false, fmt.Errorf("operation timed out")
case SystemdUnitResultFailed:
return false, fmt.Errorf("svc reload failed (selinux?)")
default:
return false, fmt.Errorf("unknown systemd return string: %v", status)
}
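
The start/stop and reload paths above share one mechanism: queue a systemd job with a buffered result channel, then wait for the job's result string. The SystemdUnitMode*/SystemdUnitResult* constants are assumed here to be the plain strings systemd reports ("fail", "done", "canceled", "timeout", "failed"); this is a sketch of that pattern, not the resource's exact code.

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    sdbus "github.com/coreos/go-systemd/v22/dbus"
)

// startAndWait queues a start job with a buffered result channel, then blocks
// (warning once if it's slow) until systemd reports how the job finished.
func startAndWait(ctx context.Context, conn *sdbus.Conn, unit string) error {
    result := make(chan string, 1) // buffered, so systemd's reply never blocks
    if _, err := conn.StartUnitContext(ctx, unit, "fail", result); err != nil {
        return err
    }
    warn := true // warn once
    for {
        select {
        case status := <-result:
            // Documented job results include: done, canceled,
            // timeout, failed, dependency, skipped.
            if status != "done" {
                return fmt.Errorf("start job ended with: %s", status)
            }
            return nil
        case <-time.After(10 * time.Second):
            if warn {
                log.Printf("start of %s is slow...", unit)
            }
            warn = false
        case <-ctx.Done():
            return ctx.Err()
        }
    }
}

func main() {
    ctx := context.Background()
    conn, err := sdbus.NewWithContext(ctx) // system bus; needs root
    if err != nil {
        log.Fatalf("failed to connect to systemd: %v", err)
    }
    defer conn.Close()
    if err := startAndWait(ctx, conn, "sshd.service"); err != nil {
        log.Fatal(err)
    }
}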
@@ -555,10 +715,13 @@ func (obj *SvcResAutoEdgesCron) Test([]bool) bool {
func (obj *SvcRes) AutoEdges() (engine.AutoEdge, error) {
var data []engine.ResUID
var svcFiles []string
svc := obj.svc() // systemd name
svcFiles = []string{
// root svc
fmt.Sprintf("/etc/systemd/system/%s.service", obj.Name()), // takes precedence
fmt.Sprintf("/usr/lib/systemd/system/%s.service", obj.Name()), // pkg default
fmt.Sprintf("/etc/systemd/system/%s", svc), // takes precedence
fmt.Sprintf("/usr/lib/systemd/system/%s", svc), // pkg default
}
if obj.Session {
// user svc
@@ -570,7 +733,7 @@ func (obj *SvcRes) AutoEdges() (engine.AutoEdge, error) {
return nil, fmt.Errorf("user has no home directory")
}
svcFiles = []string{
path.Join(u.HomeDir, "/.config/systemd/user/", fmt.Sprintf("%s.service", obj.Name())),
path.Join(u.HomeDir, "/.config/systemd/user/", svc),
}
}
for _, x := range svcFiles {
@@ -592,7 +755,7 @@ func (obj *SvcRes) AutoEdges() (engine.AutoEdge, error) {
}
cronEdge := &SvcResAutoEdgesCron{
session: obj.Session,
unit: fmt.Sprintf("%s.service", obj.Name()),
unit: svc,
}
return engineUtil.AutoEdgeCombiner(fileEdge, cronEdge)
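
For reference, the auto-edge candidates above reduce to a few well-known unit file locations. A small self-contained sketch of those path candidates; the session branch assumes the caller already resolved the user's home directory, as the code above does, and the unit name is an example.

package main

import (
    "fmt"
    "path/filepath"
)

// unitFilePaths sketches the file auto-edge candidates used above.
func unitFilePaths(unit, homeDir string, session bool) []string {
    if session {
        // user unit, eg: ~/.config/systemd/user/foo.service
        return []string{
            filepath.Join(homeDir, ".config/systemd/user/", unit),
        }
    }
    return []string{
        fmt.Sprintf("/etc/systemd/system/%s", unit),     // takes precedence
        fmt.Sprintf("/usr/lib/systemd/system/%s", unit), // pkg default
    }
}

func main() {
    fmt.Println(unitFilePaths("purpleidea.service", "/home/james", false))
    fmt.Println(unitFilePaths("purpleidea.service", "/home/james", true))
}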

View File

@@ -217,7 +217,6 @@ func (obj *SysctlRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
select {
case event, ok := <-events1:
@@ -230,7 +229,6 @@ func (obj *SysctlRes) Watch(ctx context.Context) error {
if obj.init.Debug { // don't access event.Body if event.Error isn't nil
obj.init.Logf("event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case event, ok := <-events2:
if !ok { // channel shutdown
@@ -242,19 +240,14 @@ func (obj *SysctlRes) Watch(ctx context.Context) error {
if obj.init.Debug { // don't access event.Body if event.Error isn't nil
obj.init.Logf("event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply checks the resource state and applies the resource if the bool
// input is true. It returns error info and if the state check passed or not.

View File

@@ -218,7 +218,6 @@ func (obj *TarRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
select {
case event, ok := <-recWatcher.Events():
@@ -234,7 +233,6 @@ func (obj *TarRes) Watch(ctx context.Context) error {
if obj.init.Debug { // don't access event.Body if event.Error isn't nil
obj.init.Logf("event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case event, ok := <-events:
if !ok { // channel shutdown
@@ -249,19 +247,14 @@ func (obj *TarRes) Watch(ctx context.Context) error {
if obj.init.Debug { // don't access event.Body if event.Error isn't nil
obj.init.Logf("event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply checks the resource state and applies the resource if the bool
// input is true. It returns error info and if the state check passed or not.

View File

@@ -199,7 +199,6 @@ func (obj *TFTPServerRes) Watch(ctx context.Context) error {
startupChan := make(chan struct{})
close(startupChan) // send one initial signal
var send = false // send event?
for {
if obj.init.Debug {
obj.init.Logf("Looping...")
@@ -208,7 +207,6 @@ func (obj *TFTPServerRes) Watch(ctx context.Context) error {
select {
case <-startupChan:
startupChan = nil
send = true
case <-closeSignal: // something shut us down early
return closeError
@@ -217,13 +215,9 @@ func (obj *TFTPServerRes) Watch(ctx context.Context) error {
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply never has anything to do for this resource, so it always succeeds.
// It does however check that certain runtime requirements (such as the Root dir

View File

@@ -91,23 +91,18 @@ func (obj *TimerRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
select {
case <-obj.ticker.C: // received the timer event
send = true
obj.init.Logf("received tick")
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
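
The timer Watch above, like the sysctl, tar, tftp and user ones in this diff, follows one shape: block in a select, flip a send flag, and coalesce everything into a single engine notification per iteration. A distilled, standalone sketch of that shape, where notify stands in for obj.init.Event() (which can block):

package main

import (
    "context"
    "log"
    "time"
)

// watch mirrors the resource Watch loops above: one select per iteration, and
// at most one coalesced notification per iteration.
func watch(ctx context.Context, events <-chan struct{}, notify func()) error {
    ticker := time.NewTicker(30 * time.Second)
    defer ticker.Stop()
    var send = false // send event?
    for {
        select {
        case _, ok := <-events:
            if !ok { // channel shutdown
                return nil
            }
            send = true
        case <-ticker.C: // eg: the timer resource tick
            send = true
        case <-ctx.Done(): // closed by the engine to signal shutdown
            return nil
        }
        // do all our event sending all together to avoid duplicate msgs
        if send {
            send = false
            notify() // this can block
        }
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
    defer cancel()
    events := make(chan struct{})
    if err := watch(ctx, events, func() { log.Println("event!") }); err != nil {
        log.Fatal(err)
    }
}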
// CheckApply method for Timer resource. Triggers a timer reset on notify.
func (obj *TimerRes) CheckApply(ctx context.Context, apply bool) (bool, error) {

View File

@@ -35,6 +35,7 @@ import (
"io"
"os/exec"
"os/user"
"path/filepath"
"sort"
"strconv"
"strings"
@@ -42,6 +43,7 @@ import (
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/traits"
"github.com/purpleidea/mgmt/util"
"github.com/purpleidea/mgmt/util/errwrap"
"github.com/purpleidea/mgmt/util/recwatch"
)
@@ -50,8 +52,6 @@ func init() {
engine.RegisterResource("user", func() engine.Res { return &UserRes{} })
}
const passwdFile = "/etc/passwd"
// UserRes is a user account resource.
type UserRes struct {
traits.Base // add the base methods without re-implementation
@@ -78,6 +78,11 @@ type UserRes struct {
// HomeDir is the path to the user's home directory.
HomeDir *string `lang:"homedir" yaml:"homedir"`
// Shell is the user's login shell. Many options may exist in the
// `/etc/shells` file. If you set this, you most likely want to pick
// `/bin/bash` or `/usr/sbin/nologin`.
Shell *string `lang:"shell" yaml:"shell"`
// AllowDuplicateUID is needed for a UID to be non-unique. This is rare
// but happens if you want more than one username to access the
// resources of the same UID. See the --non-unique flag in `useradd`.
@@ -123,6 +128,11 @@ func (obj *UserRes) Validate() error {
}
}
}
if obj.HomeDir != nil && !strings.HasSuffix(*obj.HomeDir, "/") {
return fmt.Errorf("the HomeDir should end with a slash")
}
return nil
}
@@ -141,7 +151,7 @@ func (obj *UserRes) Cleanup() error {
// Watch is the primary listener for this resource and it outputs events.
func (obj *UserRes) Watch(ctx context.Context) error {
var err error
obj.recWatcher, err = recwatch.NewRecWatcher(passwdFile, false)
obj.recWatcher, err = recwatch.NewRecWatcher(util.EtcPasswdFile, false)
if err != nil {
return err
}
@@ -149,10 +159,9 @@ func (obj *UserRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
if obj.init.Debug {
obj.init.Logf("Watching: %s", passwdFile) // attempting to watch...
obj.init.Logf("watching: %s", util.EtcPasswdFile) // attempting to watch...
}
select {
@@ -161,28 +170,23 @@ func (obj *UserRes) Watch(ctx context.Context) error {
return nil
}
if err := event.Error; err != nil {
return errwrap.Wrapf(err, "Unknown %s watcher error", obj)
return errwrap.Wrapf(err, "unknown %s watcher error", obj)
}
if obj.init.Debug { // don't access event.Body if event.Error isn't nil
obj.init.Logf("Event(%s): %v", event.Body.Name, event.Body.Op)
obj.init.Logf("event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply method for User resource.
func (obj *UserRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
var exists = true
exists := true
usr, err := user.Lookup(obj.Name())
if err != nil {
if _, ok := err.(user.UnknownUserError); !ok {
@@ -207,6 +211,10 @@ func (obj *UserRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
}
if usercheck := true; exists && obj.State == "exists" {
shell, err := util.UserShell(ctx, obj.Name())
if err != nil {
return false, err
}
intUID, err := strconv.Atoi(usr.Uid)
if err != nil {
return false, errwrap.Wrapf(err, "error casting UID to int")
@@ -221,7 +229,24 @@ func (obj *UserRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
if obj.GID != nil && int(*obj.GID) != intGID {
usercheck = false
}
if obj.HomeDir != nil && *obj.HomeDir != usr.HomeDir {
// The usermod command will error when trying to change /home/james
// to /home/james/ while that user is logged in, *AND* it won't
// actually update the string in the /etc/passwd file during a
// normal run. To avoid all this, compare the cleaned paths instead.
cmpHomeDir := func(h1, h2 string) error {
if h1 == h2 {
return nil
}
if filepath.Clean(h1) == filepath.Clean(h2) {
return nil
}
return fmt.Errorf("did not match")
}
if obj.HomeDir != nil && cmpHomeDir(*obj.HomeDir, usr.HomeDir) != nil {
usercheck = false
}
if obj.Shell != nil && *obj.Shell != shell {
usercheck = false
}
if usercheck {
@@ -238,38 +263,42 @@ func (obj *UserRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
if obj.State == "exists" {
if exists {
cmdName = "usermod"
obj.init.Logf("Modifying user: %s", obj.Name())
obj.init.Logf("modifying user: %s", obj.Name())
} else {
cmdName = "useradd"
obj.init.Logf("Adding user: %s", obj.Name())
obj.init.Logf("adding user: %s", obj.Name())
}
if obj.AllowDuplicateUID {
args = append(args, "--non-unique")
}
if obj.UID != nil {
args = append(args, "-u", fmt.Sprintf("%d", *obj.UID))
args = append(args, "--uid", fmt.Sprintf("%d", *obj.UID))
}
if obj.GID != nil {
args = append(args, "-g", fmt.Sprintf("%d", *obj.GID))
args = append(args, "--gid", fmt.Sprintf("%d", *obj.GID))
}
if obj.Group != nil {
args = append(args, "-g", *obj.Group)
args = append(args, "--gid", *obj.Group)
}
if obj.Groups != nil {
args = append(args, "-G", strings.Join(obj.Groups, ","))
args = append(args, "--groups", strings.Join(obj.Groups, ","))
}
if obj.HomeDir != nil {
args = append(args, "-d", *obj.HomeDir)
args = append(args, "--home", *obj.HomeDir)
}
if obj.Shell != nil {
args = append(args, "--shell", *obj.Shell)
}
}
if obj.State == "absent" {
cmdName = "userdel"
obj.init.Logf("Deleting user: %s", obj.Name())
args = []string{}
obj.init.Logf("deleting user: %s", obj.Name())
}
args = append(args, obj.Name())
cmd := exec.Command(cmdName, args...)
cmd := exec.CommandContext(ctx, cmdName, args...)
cmd.SysProcAttr = &syscall.SysProcAttr{
Setpgid: true,
Pgid: 0,
@@ -343,13 +372,22 @@ func (obj *UserRes) Cmp(r engine.Res) error {
}
}
if (obj.HomeDir == nil) != (res.HomeDir == nil) {
return fmt.Errorf("the HomeDirs differs")
return fmt.Errorf("the HomeDir differs")
}
if obj.HomeDir != nil && res.HomeDir != nil {
if *obj.HomeDir != *res.HomeDir {
return fmt.Errorf("the HomeDir differs")
}
}
if (obj.Shell == nil) != (res.Shell == nil) {
return fmt.Errorf("the Shell differs")
}
if obj.Shell != nil && res.Shell != nil {
if *obj.Shell != *res.Shell {
return fmt.Errorf("the Shell differs")
}
}
if obj.AllowDuplicateUID != res.AllowDuplicateUID {
return fmt.Errorf("the AllowDuplicateUID differs")
}
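
The user CheckApply above leans on two standard-library pieces: os/user.Lookup for the current account data, and filepath.Clean so that /home/james and /home/james/ compare as equal. A minimal sketch of that check follows; the account name and desired values are made up, and the real resource also reads the login shell via its own util.UserShell helper.

package main

import (
    "fmt"
    "log"
    "os/user"
    "path/filepath"
    "strconv"
)

func main() {
    name := "james"            // hypothetical account
    wantUID := 1000            // hypothetical desired UID
    wantHome := "/home/james/" // hypothetical desired home dir

    usr, err := user.Lookup(name)
    if err != nil {
        if _, ok := err.(user.UnknownUserError); ok {
            fmt.Println("user is absent")
            return
        }
        log.Fatalf("lookup error: %v", err)
    }

    uid, err := strconv.Atoi(usr.Uid)
    if err != nil {
        log.Fatalf("error casting UID to int: %v", err)
    }

    ok := uid == wantUID &&
        // usermod balks at trailing-slash-only changes, so compare
        // the cleaned paths instead of the raw strings.
        filepath.Clean(wantHome) == filepath.Clean(usr.HomeDir)
    fmt.Println("state ok:", ok)
}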

View File

@@ -115,6 +115,8 @@ func (obj *ValueRes) Cleanup() error {
func (obj *ValueRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
// XXX: Should we be using obj.init.Local.ValueWatch ?
select {
case <-ctx.Done(): // closed by the engine to signal shutdown
}
@@ -132,6 +134,7 @@ func (obj *ValueRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
// might not have a new value to copy, and therefore we won't see this
// notification of change. Therefore, it is important to process these
// promptly, if they must not be lost, such as for cache invalidation.
// NOTE: Modern send/recv doesn't really have this limitation anymore.
if !obj.isSet {
obj.cachedAny = obj.Any // store anything we have if any
}
@@ -171,7 +174,12 @@ func (obj *ValueRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
checkOK = true
}
if !apply { // XXX: does this break send/recv if we end early?
if !apply {
if err := obj.init.Send(&ValueSends{
Any: obj.cachedAny,
}); err != nil {
return false, err
}
return checkOK, nil
}
@@ -189,7 +197,7 @@ func (obj *ValueRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
}
// send
//if obj.cachedAny != nil { // TODO: okay to send if value got removed too?
//if obj.cachedAny != nil { // XXX: okay to send if value got removed too?
if err := obj.init.Send(&ValueSends{
Any: obj.cachedAny,
}); err != nil {

View File

@@ -34,7 +34,6 @@ package resources
import (
"context"
"fmt"
"math/rand"
"net/url"
"strings"
"sync"
@@ -46,8 +45,8 @@ import (
"github.com/purpleidea/mgmt/util"
"github.com/purpleidea/mgmt/util/errwrap"
"github.com/libvirt/libvirt-go"
libvirtxml "github.com/libvirt/libvirt-go-xml"
libvirt "libvirt.org/go/libvirt" // gitlab.com/libvirt/libvirt-go-module
libvirtxml "libvirt.org/go/libvirtxml" // gitlab.com/libvirt/libvirt-go-xml-module
)
func init() {
@@ -65,17 +64,6 @@ const (
ShortPollInterval = 5 // seconds
)
var (
libvirtInitialized = false
)
type virtURISchemeType int
const (
defaultURI virtURISchemeType = iota
lxcURI
)
// VirtRes is a libvirt resource. A transient virt resource, which has its state
// set to `shutoff` is one which does not exist. The parallel equivalent is a
// file resource which removes a particular path.
@@ -88,33 +76,43 @@ type VirtRes struct {
// URI is the libvirt connection URI, eg: `qemu:///session`.
URI string `lang:"uri" yaml:"uri"`
// State is the desired vm state. Possible values include: `running`,
// `paused` and `shutoff`.
State string `lang:"state" yaml:"state"`
// Transient is whether the vm is defined (false) or undefined (true).
Transient bool `lang:"transient" yaml:"transient"`
// CPUs is the desired cpu count of the machine.
CPUs uint `lang:"cpus" yaml:"cpus"`
// MaxCPUs is the maximum number of cpus allowed in the machine. You
// need to set this so that on boot the `hardware` knows how many cpu
// `slots` it might need to make room for.
MaxCPUs uint `lang:"maxcpus" yaml:"maxcpus"`
// HotCPUs specifies whether we can hot plug and unplug cpus.
HotCPUs bool `lang:"hotcpus" yaml:"hotcpus"`
// Memory is the size in KBytes of memory to include in the machine.
Memory uint64 `lang:"memory" yaml:"memory"`
// OSInit is the init used by lxc.
OSInit string `lang:"osinit" yaml:"osinit"`
// Boot is the boot order. Values are `fd`, `hd`, `cdrom` and `network`.
Boot []string `lang:"boot" yaml:"boot"`
// Disk is the list of disk devices to include.
Disk []*DiskDevice `lang:"disk" yaml:"disk"`
// CdRom is the list of cdrom devices to include.
CDRom []*CDRomDevice `lang:"cdrom" yaml:"cdrom"`
// Network is the list of network devices to include.
Network []*NetworkDevice `lang:"network" yaml:"network"`
// Filesystem is the list of file system devices to include.
Filesystem []*FilesystemDevice `lang:"filesystem" yaml:"filesystem"`
@@ -124,42 +122,26 @@ type VirtRes struct {
// RestartOnDiverge is the restart policy, and can be: `ignore`,
// `ifneeded` or `error`.
RestartOnDiverge string `lang:"restartondiverge" yaml:"restartondiverge"`
// RestartOnRefresh specifies if we restart on refresh signal.
RestartOnRefresh bool `lang:"restartonrefresh" yaml:"restartonrefresh"`
wg *sync.WaitGroup
// cached in Init()
uriScheme virtURISchemeType
absent bool // cached state
// conn and version are cached for use by CheckApply and its children.
conn *libvirt.Connect
version uint32 // major * 1000000 + minor * 1000 + release
absent bool // cached state
uriScheme virtURISchemeType
processExitWatch bool // do we want to wait on an explicit process exit?
processExitChan chan struct{}
restartScheduled bool // do we need to schedule a hard restart?
// set in Watch, read in CheckApply
mutex *sync.RWMutex
guestAgentConnected bool // our tracking of if guest agent is running
}
restartScheduled bool // do we need to schedule a hard restart?
// VirtAuth is used to pass credentials to libvirt.
type VirtAuth struct {
Username string `lang:"username" yaml:"username"`
Password string `lang:"password" yaml:"password"`
}
// Cmp compares two VirtAuth structs. It errors if they are not identical.
func (obj *VirtAuth) Cmp(auth *VirtAuth) error {
if (obj == nil) != (auth == nil) { // xor
return fmt.Errorf("the VirtAuth differs")
}
if obj == nil && auth == nil {
return nil
}
if obj.Username != auth.Username {
return fmt.Errorf("the Username differs")
}
if obj.Password != auth.Password {
return fmt.Errorf("the Password differs")
}
return nil
// XXX: misc junk which we may wish to rewrite
//processExitWatch bool // do we want to wait on an explicit process exit?
processExitChan chan struct{}
}
// Default returns some sensible defaults for this resource.
@@ -174,9 +156,15 @@ func (obj *VirtRes) Default() engine.Res {
// Validate if the params passed in are valid data.
func (obj *VirtRes) Validate() error {
// XXX: Code requires polling for the mainloop for now.
if obj.MetaParams().Poll > 0 {
return fmt.Errorf("can't poll with virt resources")
}
if obj.CPUs > obj.MaxCPUs {
return fmt.Errorf("the number of CPUs (%d) must not be greater than MaxCPUs (%d)", obj.CPUs, obj.MaxCPUs)
}
return nil
}
@@ -184,12 +172,10 @@ func (obj *VirtRes) Validate() error {
func (obj *VirtRes) Init(init *engine.Init) error {
obj.init = init // save for later
if !libvirtInitialized {
if err := libvirt.EventRegisterDefaultImpl(); err != nil {
return errwrap.Wrapf(err, "method EventRegisterDefaultImpl failed")
}
libvirtInitialized = true
if err := libvirtInit(); err != nil {
return err
}
var u *url.URL
var err error
if u, err = url.Parse(obj.URI); err != nil {
@@ -202,20 +188,39 @@ func (obj *VirtRes) Init(init *engine.Init) error {
obj.absent = (obj.Transient && obj.State == "shutoff") // machine shouldn't exist
obj.conn, err = obj.connect() // gets closed in Close method of Res API
if err != nil {
return errwrap.Wrapf(err, "connection to libvirt failed in init")
obj.mutex = &sync.RWMutex{}
return nil
}
// Cleanup is run by the engine to clean up after the resource is done.
func (obj *VirtRes) Cleanup() error {
return nil
}
// Watch is the primary listener for this resource and it outputs events.
func (obj *VirtRes) Watch(ctx context.Context) error {
wg := &sync.WaitGroup{}
defer wg.Wait() // wait until everyone has exited before we exit!
// XXX: we're using two connections per resource, we could pool these up
conn, _, err := obj.Auth.Connect(obj.URI)
if err != nil {
return errwrap.Wrapf(err, "connection to libvirt failed")
}
defer conn.Close()
// check for hard to change properties
dom, err := obj.conn.LookupDomainByName(obj.Name())
if err == nil {
defer dom.Free()
} else if !isNotFound(err) {
return errwrap.Wrapf(err, "could not lookup on init")
}
dom, err := conn.LookupDomainByName(obj.Name())
if err != nil && !isNotFound(err) {
return errwrap.Wrapf(err, "could not lookup domain")
} else if isNotFound(err) {
// noop
} else if err == nil {
defer dom.Free()
if err == nil {
// maxCPUs, err := dom.GetMaxVcpus()
i, err := dom.GetVcpusFlags(libvirt.DOMAIN_VCPU_MAXIMUM)
if err != nil {
@@ -224,7 +229,9 @@ func (obj *VirtRes) Init(init *engine.Init) error {
maxCPUs := uint(i)
if obj.MaxCPUs != maxCPUs { // max cpu slots is hard to change
// we'll need to reboot to fix this one...
obj.mutex.Lock()
obj.restartScheduled = true
obj.mutex.Unlock()
}
// parse running domain xml to read properties
@@ -243,178 +250,138 @@ func (obj *VirtRes) Init(init *engine.Init) error {
for _, x := range domXML.Devices.Channels {
if x.Target.VirtIO != nil && strings.HasPrefix(x.Target.VirtIO.Name, "org.qemu.guest_agent.") {
// last connection found wins (usually 1 anyways)
obj.mutex.Lock()
obj.guestAgentConnected = (x.Target.VirtIO.State == "connected")
obj.mutex.Unlock()
}
}
}
obj.wg = &sync.WaitGroup{}
return nil
}
// Cleanup is run by the engine to clean up after the resource is done.
func (obj *VirtRes) Cleanup() error {
// By the time that this Close method is called, the engine promises
// that the Watch loop has previously shut down! (Assuming no bugs!)
// TODO: As a result, this is an extra check which shouldn't be needed,
// but which might mask possible engine bugs. Consider removing it!
obj.wg.Wait()
// Our channel event sources...
domChan := make(chan libvirt.DomainEventType)
gaChan := make(chan *libvirt.DomainEventAgentLifecycle)
errorChan := make(chan error)
// TODO: what is the first int Close return value useful for (if at all)?
_, err := obj.conn.Close() // close libvirt conn that was opened in Init
obj.conn = nil // set to nil to help catch any nil ptr bugs!
return err
}
// connect is the connect helper for the libvirt connection. It can handle auth.
func (obj *VirtRes) connect() (conn *libvirt.Connect, err error) {
if obj.Auth != nil {
callback := func(creds []*libvirt.ConnectCredential) {
// Populate credential structs with the
// prepared username/password values
for _, cred := range creds {
if cred.Type == libvirt.CRED_AUTHNAME {
cred.Result = obj.Auth.Username
cred.ResultLen = len(cred.Result)
} else if cred.Type == libvirt.CRED_PASSPHRASE {
cred.Result = obj.Auth.Password
cred.ResultLen = len(cred.Result)
}
}
}
auth := &libvirt.ConnectAuth{
CredType: []libvirt.ConnectCredentialType{
libvirt.CRED_AUTHNAME, libvirt.CRED_PASSPHRASE,
},
Callback: callback,
}
conn, err = libvirt.NewConnectWithAuth(obj.URI, auth, 0)
if err == nil {
if version, err := conn.GetLibVersion(); err == nil {
obj.version = version
}
}
}
if obj.Auth == nil || err != nil {
conn, err = libvirt.NewConnect(obj.URI)
if err == nil {
if version, err := conn.GetLibVersion(); err == nil {
obj.version = version
}
}
}
// domain events callback
domCallback := func(c *libvirt.Connect, d *libvirt.Domain, ev *libvirt.DomainEventLifecycle) {
domName, _ := d.GetName()
if domName != obj.Name() {
return
}
// Watch is the primary listener for this resource and it outputs events.
func (obj *VirtRes) Watch(ctx context.Context) error {
// FIXME: how will this work if we're polling?
wg := &sync.WaitGroup{}
defer wg.Wait() // wait until everyone has exited before we exit!
domChan := make(chan libvirt.DomainEventType) // TODO: do we need to buffer this?
gaChan := make(chan *libvirt.DomainEventAgentLifecycle)
errorChan := make(chan error)
exitChan := make(chan struct{})
defer close(exitChan)
obj.wg.Add(1) // don't exit without waiting for EventRunDefaultImpl
wg.Add(1)
select {
case domChan <- ev.Event: // send
case <-ctx.Done():
}
}
// if dom is nil, we get events for *all* domains!
domCallbackID, err := conn.DomainEventLifecycleRegister(nil, domCallback)
if err != nil {
return err
}
defer conn.DomainEventDeregister(domCallbackID)
// guest agent events callback
gaCallback := func(c *libvirt.Connect, d *libvirt.Domain, eva *libvirt.DomainEventAgentLifecycle) {
domName, _ := d.GetName()
if domName != obj.Name() {
return
}
select {
case gaChan <- eva: // send
case <-ctx.Done():
}
}
gaCallbackID, err := conn.DomainEventAgentLifecycleRegister(nil, gaCallback)
if err != nil {
return err
}
defer conn.DomainEventDeregister(gaCallbackID)
// run libvirt event loop
// TODO: *trigger* EventRunDefaultImpl to unblock so it can shut down...
// at the moment this isn't a major issue because it seems to unblock in
// bursts every 5 seconds! we can do this by writing to an event handler
// in the meantime, terminating the program causes it to exit anyways...
wg.Add(1) // don't exit without waiting for EventRunDefaultImpl
go func() {
defer obj.wg.Done()
defer wg.Done()
defer obj.init.Logf("EventRunDefaultImpl exited!")
defer func() {
if !obj.init.Debug {
return
}
obj.init.Logf("EventRunDefaultImpl exited!")
}()
defer close(errorChan)
for {
// TODO: can we merge this into our main for loop below?
select {
case <-exitChan:
case <-ctx.Done():
return
default:
}
//obj.init.Logf("EventRunDefaultImpl started!")
if err := libvirt.EventRunDefaultImpl(); err != nil {
err := libvirt.EventRunDefaultImpl()
if err == nil {
//obj.init.Logf("EventRunDefaultImpl looped!")
continue
}
select {
case errorChan <- errwrap.Wrapf(err, "EventRunDefaultImpl failed"):
case <-exitChan:
// pass
case <-ctx.Done():
}
return
}
//obj.init.Logf("EventRunDefaultImpl looped!")
}
}()
// domain events callback
domCallback := func(c *libvirt.Connect, d *libvirt.Domain, ev *libvirt.DomainEventLifecycle) {
domName, _ := d.GetName()
if domName == obj.Name() {
select {
case domChan <- ev.Event: // send
case <-exitChan:
}
}
}
// if dom is nil, we get events for *all* domains!
domCallbackID, err := obj.conn.DomainEventLifecycleRegister(nil, domCallback)
if err != nil {
return err
}
defer obj.conn.DomainEventDeregister(domCallbackID)
// guest agent events callback
gaCallback := func(c *libvirt.Connect, d *libvirt.Domain, eva *libvirt.DomainEventAgentLifecycle) {
domName, _ := d.GetName()
if domName == obj.Name() {
select {
case gaChan <- eva: // send
case <-exitChan:
}
}
}
gaCallbackID, err := obj.conn.DomainEventAgentLifecycleRegister(nil, gaCallback)
if err != nil {
return err
}
defer obj.conn.DomainEventDeregister(gaCallbackID)
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
send := false // send event?
for {
processExited := false // did the process exit fully (shutdown)?
//processExited := false // did the process exit fully (shutdown)?
select {
case event := <-domChan:
case event, ok := <-domChan:
if !ok {
// TODO: Should we restart it?
domChan = nil
continue
}
// TODO: shouldn't we do these checks in CheckApply ?
switch event {
case libvirt.DOMAIN_EVENT_DEFINED:
if obj.Transient {
send = true
}
case libvirt.DOMAIN_EVENT_UNDEFINED:
if !obj.Transient {
send = true
}
case libvirt.DOMAIN_EVENT_STARTED:
fallthrough
case libvirt.DOMAIN_EVENT_RESUMED:
if obj.State != "running" {
send = true
}
case libvirt.DOMAIN_EVENT_SUSPENDED:
if obj.State != "paused" {
send = true
}
case libvirt.DOMAIN_EVENT_STOPPED:
fallthrough
case libvirt.DOMAIN_EVENT_SHUTDOWN:
if obj.State != "shutoff" {
send = true
}
processExited = true
//processExited = true
case libvirt.DOMAIN_EVENT_PMSUSPENDED:
// FIXME: IIRC, in s3 we can't cold change
@@ -423,24 +390,33 @@ func (obj *VirtRes) Watch(ctx context.Context) error {
fallthrough
case libvirt.DOMAIN_EVENT_CRASHED:
send = true
processExited = true // FIXME: is this okay for PMSUSPENDED ?
//processExited = true // FIXME: is this okay for PMSUSPENDED ?
}
if obj.processExitWatch && processExited {
close(obj.processExitChan) // send signal
obj.processExitWatch = false
}
//if obj.processExitWatch && processExited {
// close(obj.processExitChan) // send signal
// obj.processExitWatch = false
//}
case agentEvent := <-gaChan:
case agentEvent, ok := <-gaChan:
if !ok {
// TODO: Should we restart it?
gaChan = nil
continue
}
state, reason := agentEvent.State, agentEvent.Reason
if state == libvirt.CONNECT_DOMAIN_EVENT_AGENT_LIFECYCLE_STATE_CONNECTED {
obj.mutex.Lock()
obj.guestAgentConnected = true
obj.mutex.Unlock()
send = true
obj.init.Logf("guest agent connected")
} else if state == libvirt.CONNECT_DOMAIN_EVENT_AGENT_LIFECYCLE_STATE_DISCONNECTED {
obj.mutex.Lock()
obj.guestAgentConnected = false
obj.mutex.Unlock()
// ignore CONNECT_DOMAIN_EVENT_AGENT_LIFECYCLE_REASON_DOMAIN_STARTED
// events because they just tell you that guest agent channel was added
if reason == libvirt.CONNECT_DOMAIN_EVENT_AGENT_LIFECYCLE_REASON_CHANNEL {
@@ -451,11 +427,17 @@ func (obj *VirtRes) Watch(ctx context.Context) error {
return fmt.Errorf("unknown guest agent state: %v", state)
}
case err := <-errorChan:
case err, ok := <-errorChan:
if !ok {
return nil
}
if err == nil { // unlikely
continue
}
return errwrap.Wrapf(err, "unknown libvirt error")
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
return ctx.Err()
}
// do all our event sending all together to avoid duplicate msgs
@@ -470,7 +452,6 @@ func (obj *VirtRes) Watch(ctx context.Context) error {
// It doesn't check the state before hand, as it is a simple helper function.
// The caller must run dom.Free() after use, when error was returned as nil.
func (obj *VirtRes) domainCreate() (*libvirt.Domain, bool, error) {
if obj.Transient {
var flag libvirt.DomainCreateFlags
var state string
@@ -677,7 +658,10 @@ func (obj *VirtRes) attrCheckApply(ctx context.Context, apply bool, dom *libvirt
}
// modify the online aspect of the cpus with qemu-guest-agent
if obj.HotCPUs && obj.guestAgentConnected && domInfo.State != libvirt.DOMAIN_PAUSED {
obj.mutex.RLock()
guestAgentConnected := obj.guestAgentConnected
obj.mutex.RUnlock()
if obj.HotCPUs && guestAgentConnected && domInfo.State != libvirt.DOMAIN_PAUSED {
// if hotplugging a cpu without the guest agent, you might need:
// manually to: echo 1 > /sys/devices/system/cpu/cpu1/online OR
@@ -730,8 +714,9 @@ func (obj *VirtRes) domainShutdownSync(apply bool, dom *libvirt.Domain) (bool, e
if !apply {
return false, nil
}
obj.processExitWatch = true
obj.processExitChan = make(chan struct{})
//obj.processExitWatch = true
//obj.processExitChan = make(chan struct{})
// if machine shuts down before we call this, we error;
// this isn't ideal, but it happened due to user error!
obj.init.Logf("running shutdown")
@@ -765,18 +750,29 @@ func (obj *VirtRes) domainShutdownSync(apply bool, dom *libvirt.Domain) (bool, e
// CheckApply checks the resource state and applies the resource if the bool
// input is true. It returns error info and if the state check passed or not.
func (obj *VirtRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
if obj.conn == nil { // programming error?
return false, fmt.Errorf("got called with nil connection")
// XXX: we're using two connections per resource, we could pool these up
conn, version, err := obj.Auth.Connect(obj.URI)
if err != nil {
return false, errwrap.Wrapf(err, "connection to libvirt failed")
}
// cache these for child methods
obj.conn = conn
obj.version = version
defer conn.Close()
// if we do the restart, we must flip the flag back to false as evidence
var restart bool // do we need to do a restart?
if obj.RestartOnRefresh && obj.init.Refresh() { // a refresh is a restart ask
restart = true
}
obj.mutex.RLock()
restartScheduled := obj.restartScheduled
obj.mutex.RUnlock()
// we need to restart in all situations except ignore. the "error" case
// means that if a restart is actually needed, we should return an error
if obj.restartScheduled && obj.RestartOnDiverge != "ignore" { // "ignore", "ifneeded", "error"
if restartScheduled && obj.RestartOnDiverge != "ignore" { // "ignore", "ifneeded", "error"
restart = true
}
if !apply {
@@ -785,10 +781,11 @@ func (obj *VirtRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
var checkOK = true
dom, err := obj.conn.LookupDomainByName(obj.Name())
if err == nil {
// pass
} else if isNotFound(err) {
dom, err := conn.LookupDomainByName(obj.Name())
if err != nil && !isNotFound(err) {
return false, errwrap.Wrapf(err, "LookupDomainByName failed")
}
if isNotFound(err) {
// domain not found
if obj.absent {
// we can ignore the restart var since we're not running
@@ -802,13 +799,14 @@ func (obj *VirtRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
var c bool // = true
dom, c, err = obj.domainCreate() // create the domain
if err != nil {
// XXX: print out the XML of the definition?
return false, errwrap.Wrapf(err, "domainCreate failed")
} else if !c {
checkOK = false
}
} else {
return false, errwrap.Wrapf(err, "LookupDomainByName failed")
}
if err == nil {
// pass
}
defer dom.Free() // the Free() for two possible domain objects above
// domain now exists
@@ -833,7 +831,7 @@ func (obj *VirtRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
if err != nil {
return false, errwrap.Wrapf(err, "domain.GetXMLDesc failed")
}
if _, err = obj.conn.DomainDefineXML(domXML); err != nil {
if _, err = conn.DomainDefineXML(domXML); err != nil {
return false, errwrap.Wrapf(err, "conn.DomainDefineXML failed")
}
obj.init.Logf("domain defined")
@@ -843,20 +841,22 @@ func (obj *VirtRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
// shutdown here and let the stateCheckApply fix things up...
// TODO: i think this is the most straight forward process...
if !obj.absent && restart {
if c, err := obj.domainShutdownSync(apply, dom); err != nil {
return false, errwrap.Wrapf(err, "domainShutdownSync failed")
} else if !c {
checkOK = false
restart = false // clear the restart requirement...
}
}
//if !obj.absent && restart {
// if c, err := obj.domainShutdownSync(apply, dom); err != nil {
// return false, errwrap.Wrapf(err, "domainShutdownSync failed")
//
// } else if !c {
// checkOK = false
// restart = false // clear the restart requirement...
// }
//}
// FIXME: is doing this early check (therefore twice total) a good idea?
// run additional pre-emptive attr change checks here for hotplug stuff!
// run additional preemptive attr change checks here for hotplug stuff!
if !obj.absent {
if c, err := obj.attrCheckApply(ctx, apply, dom); err != nil {
return false, errwrap.Wrapf(err, "early attrCheckApply failed")
} else if !c {
checkOK = false
}
@@ -866,6 +866,7 @@ func (obj *VirtRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
// apply correct machine state, eg: startup/shutoff/pause as needed
if c, err := obj.stateCheckApply(ctx, apply, dom); err != nil {
return false, errwrap.Wrapf(err, "stateCheckApply failed")
} else if !c {
checkOK = false
}
@@ -877,13 +878,14 @@ func (obj *VirtRes) CheckApply(ctx context.Context, apply bool) (bool, error) {
if !obj.absent {
if c, err := obj.attrCheckApply(ctx, apply, dom); err != nil {
return false, errwrap.Wrapf(err, "attrCheckApply failed")
} else if !c {
checkOK = false
}
}
// we had to do a restart, we didn't, and we should error if it was needed
if obj.restartScheduled && restart == true && obj.RestartOnDiverge == "error" {
if restartScheduled && restart == true && obj.RestartOnDiverge == "error" {
return false, fmt.Errorf("needed restart but didn't! (RestartOnDiverge: %s)", obj.RestartOnDiverge)
}
@@ -937,7 +939,8 @@ func (obj *VirtRes) getDomainXML() string {
if i < obj.CPUs {
enabled = "yes"
}
b += fmt.Sprintf("<vcpu id='%d' enabled='%s' hotpluggable='yes'/>", i, enabled)
// all vcpus must either all have an order set, or all have it unset
b += fmt.Sprintf("<vcpu id='%d' enabled='%s' hotpluggable='yes' order='%d'/>", i, enabled, i+1)
}
b += fmt.Sprintf("</vcpus>")
} else {
@@ -1004,10 +1007,6 @@ func (obj *VirtRes) getDomainXML() string {
return b
}
type virtDevice interface {
GetXML(idx int) string
}
// DiskDevice represents a disk that is attached to the virt machine.
type DiskDevice struct {
Source string `lang:"source" yaml:"source"`
@@ -1304,24 +1303,3 @@ func (obj *VirtRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
*obj = VirtRes(raw) // restore from indirection with type conversion!
return nil
}
// randMAC returns a random mac address in the libvirt range.
func randMAC() string {
rand.Seed(time.Now().UnixNano())
return "52:54:00" +
fmt.Sprintf(":%x", rand.Intn(255)) +
fmt.Sprintf(":%x", rand.Intn(255)) +
fmt.Sprintf(":%x", rand.Intn(255))
}
// isNotFound tells us if this is a domain not found error.
func isNotFound(err error) bool {
if err == nil {
return false
}
if virErr, ok := err.(libvirt.Error); ok && virErr.Code == libvirt.ERR_NO_DOMAIN {
// domain not found
return true
}
return false // some other error
}
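
The rewritten virt Watch follows the usual libvirt-go event wiring: register the default event implementation once per process, pump EventRunDefaultImpl in a goroutine, and register/deregister a per-connection lifecycle callback. A condensed sketch of that wiring, assuming the libvirt.org/go/libvirt bindings used above; the URI and domain name are placeholders and error handling is trimmed to the essentials.

package main

import (
    "log"

    libvirt "libvirt.org/go/libvirt" // gitlab.com/libvirt/libvirt-go-module
)

func main() {
    // Must run once per process, before any connection is made.
    if err := libvirt.EventRegisterDefaultImpl(); err != nil {
        log.Fatalf("EventRegisterDefaultImpl failed: %v", err)
    }
    go func() {
        for {
            if err := libvirt.EventRunDefaultImpl(); err != nil {
                log.Printf("EventRunDefaultImpl failed: %v", err)
                return
            }
        }
    }()

    conn, err := libvirt.NewConnect("qemu:///session") // example URI
    if err != nil {
        log.Fatalf("connection to libvirt failed: %v", err)
    }
    defer conn.Close()

    events := make(chan libvirt.DomainEventType)
    callback := func(c *libvirt.Connect, d *libvirt.Domain, ev *libvirt.DomainEventLifecycle) {
        domName, _ := d.GetName()
        if domName != "mgmt4" { // example domain name
            return
        }
        events <- ev.Event // runs inside the event loop goroutine
    }
    // if dom is nil, we get events for *all* domains!
    callbackID, err := conn.DomainEventLifecycleRegister(nil, callback)
    if err != nil {
        log.Fatalf("DomainEventLifecycleRegister failed: %v", err)
    }
    defer conn.DomainEventDeregister(callbackID)

    for ev := range events {
        log.Printf("lifecycle event: %v", ev)
    }
}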

View File

@@ -33,6 +33,7 @@ import (
"bytes"
"context"
"fmt"
"net/url"
"os"
"os/exec"
"path"
@@ -146,9 +147,39 @@ type VirtBuilderRes struct {
// additional packages to install which are needed to bootstrap mgmt.
// This defaults to true.
// TODO: This does not yet support multi or cross arch.
// FIXME: This doesn't kick off mgmt runs yet.
Bootstrap bool `lang:"bootstrap" yaml:"bootstrap"`
// Seeds is a list of default etcd client endpoints to connect to. If
// you specify this, you must also set Bootstrap to true. These should
// likely be http URLs like: http://127.0.0.1:2379 or similar.
Seeds []string `lang:"seeds" yaml:"seeds"`
// Mkdir creates these directories in the guests. This happens before
// CopyIn runs. Directories must be absolute and end with a slash. Any
// intermediate directories are created, similar to how `mkdir -p`
// works.
Mkdir []string `lang:"mkdir" yaml:"mkdir"`
// CopyIn is a list of local paths to copy into the machine dest. The
// dest directory must exist for this to work. Use Mkdir if you need to
// make a directory, since that step happens earlier. All paths must be
// absolute, and directories must end with a slash. This happens before
// the RunCmd stage in case you want to create something to be used
// there.
CopyIn []*CopyIn `lang:"copy_in" yaml:"copy_in"`
// RunCmd is a sequence of commands + args (one set per list item) to
// run in the build environment. These happen after the CopyIn stage.
RunCmd []string `lang:"run_cmd" yaml:"run_cmd"`
// FirstbootCmd is a sequence of commands + args (one set per list item)
// to run once on first boot.
// TODO: Consider replacing this with the mgmt firstboot mechanism for
// consistency between this platform and other platforms that might not
// support the excellent libguestfs version of those scripts. (Make the
// logs look more homogeneous.)
FirstbootCmd []string `lang:"firstboot_cmd" yaml:"firstboot_cmd"`
// LogOutput logs the output of running this command to a file in the
// special $vardir directory. It defaults to true. Keep in mind that if
// you let virt-builder choose the password randomly, it will be output
@@ -156,7 +187,8 @@ type VirtBuilderRes struct {
LogOutput bool `lang:"log_output" yaml:"log_output"`
// Tweaks adds some random tweaks to work around common bugs. This
// defaults to true.
// defaults to true. It also does some useful things that most may find
// desirable.
Tweaks bool `lang:"tweaks" yaml:"tweaks"`
varDir string
@@ -305,6 +337,42 @@ func (obj *VirtBuilderRes) Validate() error {
}
}
for _, x := range obj.Seeds {
if x == "" {
return fmt.Errorf("empty seed")
}
if _, err := url.Parse(x); err != nil { // it's so rare this fails
return err
}
}
for _, x := range obj.Mkdir {
if x == "" {
return fmt.Errorf("empty Mkdir entry")
}
if !strings.HasPrefix(x, "/") {
return fmt.Errorf("the Mkdir entry must be absolute")
}
if !strings.HasSuffix(x, "/") {
return fmt.Errorf("the Mkdir entry must be a directory")
}
}
for _, x := range obj.CopyIn {
if err := x.Validate(); err != nil {
return err
}
}
for _, x := range obj.RunCmd {
if x == "" {
return fmt.Errorf("empty RunCmd entry")
}
}
for _, x := range obj.FirstbootCmd {
if x == "" {
return fmt.Errorf("empty FirstbootCmd entry")
}
}
return nil
}
@@ -370,7 +438,6 @@ func (obj *VirtBuilderRes) Watch(ctx context.Context) error {
obj.init.Running() // when started, notify engine that we're running
var send = false // send event?
for {
select {
case event, ok := <-recWatcher.Events():
@@ -383,19 +450,14 @@ func (obj *VirtBuilderRes) Watch(ctx context.Context) error {
if obj.init.Debug { // don't access event.Body if event.Error isn't nil
obj.init.Logf("event(%s): %v", event.Body.Name, event.Body.Op)
}
send = true
case <-ctx.Done(): // closed by the engine to signal shutdown
return nil
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
obj.init.Event() // notify engine of an event (this can block)
}
}
}
// CheckApply checks the resource state and applies the resource if the bool
// input is true. It returns error info and if the state check passed or not.
@@ -447,6 +509,12 @@ func (obj *VirtBuilderRes) CheckApply(ctx context.Context, apply bool) (bool, er
extraPackages = append(extraPackages, p...)
}
// Magic vm things should happen automatically.
if d := obj.getDistro(); obj.Tweaks && (d == "fedora" || d == "debian") {
p := "qemu-guest-agent" // same for debian and fedora
extraPackages = append(extraPackages, p)
}
if len(obj.Packages) > 0 || len(extraPackages) > 0 {
packages := []string{} // I think the ordering _may_ matter.
packages = append(packages, obj.Packages...)
@@ -455,6 +523,13 @@ func (obj *VirtBuilderRes) CheckApply(ctx context.Context, apply bool) (bool, er
cmdArgs = append(cmdArgs, args...)
}
// Magic vm things should happen automatically.
if d := obj.getDistro(); obj.Tweaks && (d == "fedora" || d == "debian") {
x := "/usr/bin/systemctl enable qemu-guest-agent.service"
args := []string{"--run-command", x}
cmdArgs = append(cmdArgs, args...)
}
// XXX: Tweak for debian grub-pc bug:
// https://www.mail-archive.com/guestfs@lists.libguestfs.org/msg00062.html
if obj.Tweaks && obj.Update && obj.getDistro() == "debian" {
@@ -501,8 +576,47 @@ func (obj *VirtBuilderRes) CheckApply(ctx context.Context, apply bool) (bool, er
// TODO: bootstrap mgmt based on the deploy method this ran with
// TODO: --tmp-prefix ? --module-path ?
//args2 := []string{"--firstboot-command", VirtBuilderBinDir+"mgmt", "run", "lang", "?"}
//cmdArgs = append(cmdArgs, args2...)
// TODO: add an alternate handoff method to run a bolus of code?
if len(obj.Seeds) > 0 {
m := filepath.Join(VirtBuilderBinDir, filepath.Base(p)) // mgmt full path
setupSvc := []string{
m, // mgmt
"setup", // setup command
"svc", // TODO: pull from a const?
"--install",
//"--start", // we're in pre-boot env right now
"--enable", // start on first boot!
fmt.Sprintf("--binary-path=%s", m),
"--no-server", // TODO: hardcode this for now
//fmt.Sprintf("--seeds=%s", strings.Join(obj.Seeds, ",")),
}
for _, seed := range obj.Seeds {
// TODO: validate each seed?
s := fmt.Sprintf("--seeds=%s", seed)
setupSvc = append(setupSvc, s)
}
setupSvcCmd := strings.Join(setupSvc, " ")
args := []string{"--run-command", setupSvcCmd} // cmd must be a single string
cmdArgs = append(cmdArgs, args...)
}
}
for _, x := range obj.Mkdir {
args := []string{"--mkdir", x}
cmdArgs = append(cmdArgs, args...)
}
for _, x := range obj.CopyIn {
args := []string{"--copy-in", x.Path + ":" + x.Dest} // LOCALPATH:REMOTEDIR
cmdArgs = append(cmdArgs, args...)
}
for _, x := range obj.RunCmd {
args := []string{"--run-command", x}
cmdArgs = append(cmdArgs, args...)
}
for _, x := range obj.FirstbootCmd {
args := []string{"--firstboot-command", x}
cmdArgs = append(cmdArgs, args...)
}
cmd := exec.CommandContext(ctx, cmdName, cmdArgs...)
@@ -626,7 +740,7 @@ func (obj *VirtBuilderRes) Cmp(r engine.Res) error {
}
if len(obj.SSHKeys) != len(res.SSHKeys) {
return fmt.Errorf("the number of Packages differs")
return fmt.Errorf("the number of SSHKeys differs")
}
for i, x := range obj.SSHKeys {
if err := res.SSHKeys[i].Cmp(x); err != nil {
@@ -644,6 +758,48 @@ func (obj *VirtBuilderRes) Cmp(r engine.Res) error {
return fmt.Errorf("the Bootstrap value differs")
}
if len(obj.Seeds) != len(res.Seeds) {
return fmt.Errorf("the number of Seeds differs")
}
for i, x := range obj.Seeds {
if seed := res.Seeds[i]; x != seed {
return fmt.Errorf("the seed at index %d differs", i)
}
}
if len(obj.Mkdir) != len(res.Mkdir) {
return fmt.Errorf("the number of Mkdir entries differs")
}
for i, x := range obj.Mkdir {
if s := res.Mkdir[i]; x != s {
return fmt.Errorf("the Mkdir entry at index %d differs", i)
}
}
if len(obj.CopyIn) != len(res.CopyIn) {
return fmt.Errorf("the number of CopyIn structs differ")
}
for i, x := range obj.CopyIn {
if err := res.CopyIn[i].Cmp(x); err != nil {
return errwrap.Wrapf(err, "the copy in struct at index %d differs", i)
}
}
if len(obj.RunCmd) != len(res.RunCmd) {
return fmt.Errorf("the number of RunCmd entries differs")
}
for i, x := range obj.RunCmd {
if s := res.RunCmd[i]; x != s {
return fmt.Errorf("the RunCmd entry at index %d differs", i)
}
}
if len(obj.FirstbootCmd) != len(res.FirstbootCmd) {
return fmt.Errorf("the number of FirstbootCmd entries differs")
}
for i, x := range obj.FirstbootCmd {
if s := res.FirstbootCmd[i]; x != s {
return fmt.Errorf("the FirstbootCmd entry at index %d differs", i)
}
}
if obj.LogOutput != res.LogOutput {
return fmt.Errorf("the LogOutput value differs")
}
@@ -782,3 +938,58 @@ func (obj *SSHKeyInfo) Cmp(x *SSHKeyInfo) error {
return nil
}
// CopyIn is a list of local paths to copy into the machine dest.
type CopyIn struct {
// Path is the local file or directory that we want to copy in.
// TODO: Add autoedges
Path string `lang:"path" yaml:"path"`
// Dest is the destination dir that the path gets copied into. This
// directory must exist.
Dest string `lang:"dest" yaml:"dest"`
}
// Validate reports any problems with the struct definition.
func (obj *CopyIn) Validate() error {
if obj == nil {
return fmt.Errorf("nil obj")
}
if obj.Path == "" {
return fmt.Errorf("empty Path")
}
if !strings.HasPrefix(obj.Path, "/") {
return fmt.Errorf("the Path must be absolute")
}
if obj.Dest == "" {
return fmt.Errorf("empty Dest")
}
if !strings.HasPrefix(obj.Dest, "/") {
return fmt.Errorf("the Dest must be absolute")
}
if !strings.HasSuffix(obj.Dest, "/") {
return fmt.Errorf("the dest must be a directory")
}
return nil
}
// Cmp compares two of these and returns an error if they are not equivalent.
func (obj *CopyIn) Cmp(x *CopyIn) error {
//if (obj == nil) != (x == nil) { // xor
// return fmt.Errorf("we differ") // redundant
//}
if obj == nil || x == nil {
// special case since we want to error if either is nil
return fmt.Errorf("can't cmp if nil")
}
if obj.Path != x.Path {
return fmt.Errorf("the Path differs")
}
if obj.Dest != x.Dest {
return fmt.Errorf("the Dest differs")
}
return nil
}
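
The new Mkdir, CopyIn, RunCmd and FirstbootCmd fields all reduce to extra virt-builder flags. This is a rough sketch of how that argument list gets assembled; the flag names match the code above, while the sample values and template name are made up, and a real invocation needs more flags (output path, size, and so on).

package main

import (
    "fmt"
    "os/exec"
)

// CopyIn mirrors the struct above: a local path copied into a guest directory.
type CopyIn struct {
    Path string
    Dest string
}

func buildArgs(mkdir []string, copyIn []*CopyIn, runCmd, firstboot []string) []string {
    args := []string{}
    for _, d := range mkdir {
        args = append(args, "--mkdir", d)
    }
    for _, c := range copyIn {
        // virt-builder expects LOCALPATH:REMOTEDIR
        args = append(args, "--copy-in", c.Path+":"+c.Dest)
    }
    for _, x := range runCmd {
        args = append(args, "--run-command", x)
    }
    for _, x := range firstboot {
        args = append(args, "--firstboot-command", x)
    }
    return args
}

func main() {
    args := buildArgs(
        []string{"/etc/mgmt/"},
        []*CopyIn{{Path: "/tmp/deploy.mcl", Dest: "/etc/mgmt/"}},
        []string{"/usr/bin/systemctl enable sshd.service"},
        []string{"echo first boot done"},
    )
    // example template name; real runs also pass --output, --size, etc.
    cmd := exec.Command("virt-builder", append([]string{"fedora-40"}, args...)...)
    fmt.Println(cmd.String())
}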

View File

@@ -0,0 +1,175 @@
// Mgmt
// Copyright (C) James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//
// Additional permission under GNU GPL version 3 section 7
//
// If you modify this program, or any covered work, by linking or combining it
// with embedded mcl code and modules (and that the embedded mcl code and
// modules which link with this program, contain a copy of their source code in
// the authoritative form) containing parts covered by the terms of any other
// license, the licensors of this program grant you additional permission to
// convey the resulting work. Furthermore, the licensors of this program grant
// the original author, James Shubin, additional permission to update this
// additional permission if he deems it necessary to achieve the goals of this
// additional permission.
//go:build !novirt
package resources
import (
"fmt"
"math/rand"
"sync"
"time"
"github.com/purpleidea/mgmt/util/errwrap"
libvirt "libvirt.org/go/libvirt" // gitlab.com/libvirt/libvirt-go-module
)
var (
// shared by all virt resources
libvirtInitialized = false
libvirtMutex *sync.Mutex
)
func init() {
libvirtMutex = &sync.Mutex{}
}
type virtURISchemeType int
const (
defaultURI virtURISchemeType = iota
lxcURI
)
// libvirtInit is called in the Init method of any virt resource. It must be run
// before any connection to the hypervisor is made!
func libvirtInit() error {
libvirtMutex.Lock()
defer libvirtMutex.Unlock()
if libvirtInitialized {
return nil // done early
}
if err := libvirt.EventRegisterDefaultImpl(); err != nil {
return errwrap.Wrapf(err, "method EventRegisterDefaultImpl failed")
}
libvirtInitialized = true
return nil
}
// randMAC returns a random mac address in the libvirt range.
func randMAC() string {
rand.Seed(time.Now().UnixNano())
return "52:54:00" +
fmt.Sprintf(":%x", rand.Intn(255)) +
fmt.Sprintf(":%x", rand.Intn(255)) +
fmt.Sprintf(":%x", rand.Intn(255))
}
// isNotFound tells us if this is a domain or network not found error.
// TODO: expand this with other ERR_NO_? values eventually.
func isNotFound(err error) bool {
if err == nil {
return false
}
virErr, ok := err.(libvirt.Error)
if !ok {
return false
}
if virErr.Code == libvirt.ERR_NO_DOMAIN {
// domain not found
return true
}
if virErr.Code == libvirt.ERR_NO_NETWORK {
// network not found
return true
}
return false // some other error
}
// VirtAuth is used to pass credentials to libvirt.
type VirtAuth struct {
Username string `lang:"username" yaml:"username"`
Password string `lang:"password" yaml:"password"`
}
// Cmp compares two VirtAuth structs. It errors if they are not identical.
func (obj *VirtAuth) Cmp(auth *VirtAuth) error {
if (obj == nil) != (auth == nil) { // xor
return fmt.Errorf("the VirtAuth differs")
}
if obj == nil && auth == nil {
return nil
}
if obj.Username != auth.Username {
return fmt.Errorf("the Username differs")
}
if obj.Password != auth.Password {
return fmt.Errorf("the Password differs")
}
return nil
}
// Connect is the connect helper for the libvirt connection. It can handle auth.
func (obj *VirtAuth) Connect(uri string) (conn *libvirt.Connect, version uint32, err error) {
if obj != nil {
callback := func(creds []*libvirt.ConnectCredential) {
// Populate credential structs with the
// prepared username/password values
for _, cred := range creds {
if cred.Type == libvirt.CRED_AUTHNAME {
cred.Result = obj.Username
cred.ResultLen = len(cred.Result)
} else if cred.Type == libvirt.CRED_PASSPHRASE {
cred.Result = obj.Password
cred.ResultLen = len(cred.Result)
}
}
}
auth := &libvirt.ConnectAuth{
CredType: []libvirt.ConnectCredentialType{
libvirt.CRED_AUTHNAME, libvirt.CRED_PASSPHRASE,
},
Callback: callback,
}
conn, err = libvirt.NewConnectWithAuth(uri, auth, 0)
if err == nil {
if v, err := conn.GetLibVersion(); err == nil {
version = v
}
}
}
if obj == nil || err != nil {
conn, err = libvirt.NewConnect(uri)
if err == nil {
if v, err := conn.GetLibVersion(); err == nil {
version = v
}
}
}
return
}
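
Putting the shared helpers above together, callers now open a short-lived connection per operation instead of caching one in Init. A usage sketch, written as if it lived next to the helpers above in the same package; the function name and parameters are illustrative only.

// lookupDomain is a sketch of how CheckApply now uses the shared helpers:
// connect per call, look up the domain, and treat "not found" as a normal,
// non-error state.
func lookupDomain(auth *VirtAuth, uri, name string) error {
    conn, version, err := auth.Connect(uri) // auth may be nil
    if err != nil {
        return errwrap.Wrapf(err, "connection to libvirt failed")
    }
    defer conn.Close()
    _ = version // major*1000000 + minor*1000 + release

    dom, err := conn.LookupDomainByName(name)
    if err != nil && !isNotFound(err) {
        return errwrap.Wrapf(err, "LookupDomainByName failed")
    }
    if isNotFound(err) {
        return nil // domain is absent; nothing to free
    }
    defer dom.Free()
    // ... inspect or modify the domain here ...
    return nil
}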

Some files were not shown because too many files have changed in this diff.