59 Commits

Author SHA1 Message Date
James Shubin
6a7d904fae misc: Improve tagging script
This way we can push the tag *after* all the builds succeed. If
something goes wrong, we can always delete our local tag and try again.
2019-10-04 06:49:04 -04:00
James Shubin
d4043d3f86 misc, make: Add full file path into fpm script
This is needed for our fancier, unique file names.
2019-10-04 06:43:17 -04:00
James Shubin
b4902a4f58 make: Add a unique token to the package file name
This unique token is necessary so that storing the files in the same dir
(basically a GitHub release) or in the SHA256SUMS file doesn't cause a
conflict.
2019-10-04 06:06:44 -04:00
James Shubin
ffe402f201 misc: Add fedora-30 mkosi+fpm build environment
Good example of how to add a new distro or version.
2019-10-04 06:02:08 -04:00
James Shubin
09cc7da282 misc: Add proper archlinux prefix in build script 2019-10-04 06:01:23 -04:00
James Shubin
2d2dad41f4 todo: Update the TODO file so that it has a sane purpose
We stored some stuff in GitHub, and some stuff here. We can keep using
this, but let's do it for the stuff that hasn't changed in a while.
2019-10-04 04:11:26 -04:00
bjanssens
5f7c0a86dd art: Add the requested art
Signed-off-by: bjanssens <bjanssens@inuits.eu>
2019-10-04 09:23:20 +02:00
Donald Bakong
fc1c631c98 engine: resources: Change Res API from Compare to Cmp
This is done by refactoring the current method to return an error
instead of a boolean value. This also fixes a typo in the user res.
2019-09-27 18:10:58 -04:00
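The Compare → Cmp change above follows a simple mechanical pattern, which the AwsEc2Res diff further down shows in full. Below is a minimal sketch of the idea using a hypothetical ExampleRes type (not part of this changeset); the real resources take an engine.Res argument and type-assert it first, as in the diff.

```
package example

import "fmt"

// ExampleRes is a hypothetical resource with a single comparable field.
type ExampleRes struct {
	State string
}

// Compare is the old style: a bare boolean that hides *what* differed.
func (obj *ExampleRes) Compare(res *ExampleRes) bool {
	return obj.State == res.State
}

// Cmp is the new style: nil when equivalent, or an error naming the difference.
func (obj *ExampleRes) Cmp(res *ExampleRes) error {
	if obj.State != res.State {
		return fmt.Errorf("the State differs")
	}
	return nil
}
```

Callers that only need a boolean can still check `Cmp(res) == nil`, while error paths now get a message saying which field differed.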
James Shubin
89bdafacb8 misc: Refactor Makefile slightly
We could make this even better in the future with lists.
2019-09-23 06:57:26 -04:00
James Shubin
73b6b3f129 misc: Remove old image building cruft 2019-09-23 06:50:26 -04:00
James Shubin
b2a495f593 misc: Add mkosi target for ubuntu bionic
The names of these are pretty weird.
2019-09-23 06:50:00 -04:00
James Shubin
65ee904377 misc: Work around old golang in ubuntu
Hopefully this helps.
2019-09-23 06:48:55 -04:00
James Shubin
13f59230b5 misc: Split Makefile PHONY target into multiple lines
AIUI this is valid make. Please correct me if I'm wrong.
2019-09-23 06:48:55 -04:00
James Shubin
36d2a0de1e misc: Make mkosi building suitable for different distro versions
We'd like to be able to build both Fedora N and N-1 at the same time if
possible. This makes it more generally applicable for this scenario, as
well as for other distros.
2019-09-23 06:48:55 -04:00
James Shubin
a4db9fc8e5 misc: Add mkosi based package building with fpm
Building distro packages is great, however if they aren't built in the
correct environment with the associated dependencies, then they won't
work properly on those distros.

This patch adds an `mkosi` based image building environment that builds
the packages in their respective distros, and then copies them out into
our releases directory.

You'll now want to `make tag && make mkosi && make release` to get a new
release out. We use a small hack so that the `make release` portion doesn't
re-build the distro packages if they're already present in the releases/
directory for that version.

This commit depends on a very recent version of mkosi (it was tested
with git master) and also depends on two currently unmerged patches:
https://github.com/systemd/mkosi/pull/363 and
https://github.com/systemd/mkosi/pull/365
2019-09-20 12:32:41 -04:00
James Shubin
9dae5ef83b engine: resources: Improve the file res and add strict state
This might be slightly controversial, in that you must specify the state
if a file would need to be created to perform the action. We no longer
implicitly assume that just specifying content is enough. As it turns
out, I believe this is safer and more correct. The code to implement
this turns out to be much more logical and simplified, and this removes
an ambiguous corner case from the reversed resource code.

Some discussion in: https://github.com/purpleidea/mgmt/issues/540

This patch also does a bit of related cleanup.
2019-09-14 16:07:53 -04:00
James Shubin
e8842a740c lang: Remove duplicate log message
Looks like we had two copies of the same message by accident.
2019-09-11 04:26:15 -04:00
James Shubin
0d3807ad09 lang, test: Fix copy paste error with log message
This changes it to the correct error message.
2019-09-11 04:26:15 -04:00
James Shubin
5c27a249b7 engine: resources: Add reversible API and file resource
This adds the first reversible resource (file) and the necessary engine
API hooks to make it all work. This allows a special "reversed" resource
to be added to the subsequent graph in the stream when an earlier
version "disappears". This disappearance can happen if it was previously
in an if statement that then becomes false.

It might be wise to combine the use of this meta parameter with the use
of the `realize` meta parameter to ensure that your reversed resource
actually runs at least once, if there's a chance that it might be gone
for a while.

This patch also adds a new test harness for testing resources. It
doesn't test the "live" aspect of resources, as it doesn't run Watch,
but it was designed to ensure CheckApply works as intended, and it runs
very quickly with a simplified timeline of happenings.
2019-09-11 03:40:22 -04:00
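As a rough sketch of the new API described above (this is not code from the patch): a resource opts in to reversal by embedding the Reversible trait and implementing Reversed(), which returns the resource to run when this one "disappears" from the graph, or nil if there is nothing to undo. The SketchRes type, its field, and the exact Reversed() signature are assumptions inferred from the diffs below; a real resource also needs the usual Default, Validate, Init, Close, Watch, CheckApply and Cmp methods, which are omitted here.

```
package resources

import (
	"github.com/purpleidea/mgmt/engine"
	"github.com/purpleidea/mgmt/engine/traits"
)

// SketchRes is a hypothetical, illustrative resource.
type SketchRes struct {
	traits.Base
	traits.Reversible // opt in to the reversal machinery

	init *engine.Init

	// Comment is an arbitrary example parameter.
	Comment string `lang:"comment" yaml:"comment"`
}

// Reversed returns the resource which should undo this one. The engine encodes
// and stores it on disk (see the new reverse.go below) and adds it to a later
// graph if this resource vanishes, for example when its surrounding if
// statement becomes false.
func (obj *SketchRes) Reversed() (engine.ReversibleRes, error) {
	return nil, nil // nil means: nothing needs reversing for this resource
}
```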
James Shubin
7e41860b28 docs: Add missing docs on the rewatch and realize meta params
Sometimes it's hard to keep this in sync.
2019-09-11 03:40:22 -04:00
James Shubin
43ff92bbe7 engine: resources: Clean up test log message 2019-09-11 03:40:22 -04:00
James Shubin
28adc7e563 engine: resource: Refactor helper functions
Maybe we can use them in other tests too.
2019-09-11 03:40:22 -04:00
James Shubin
9788411995 engine: resources: Add another validation check
This simple check should prevent some silly mistakes and make the logic
easier for other parts of the code that won't have to worry about this
pattern.
2019-09-11 03:40:22 -04:00
James Shubin
0c9e8cc50e engine: resources: Change the default file state
The default file state should be undefined. This is important because if
a reverse scenario that doesn't specify the state gets given this
default, it will be as if it was specified explicitly, which wouldn't
necessarily be what we want. Instead, an undefined state should
implicitly cause a file to get created if there's a reason to do so,
such as if content or another attribute is specified.

Hopefully this change doesn't introduce any bugs in the CheckApply code;
if it does, then it was due to a lack of implicit file creation.
2019-09-11 03:16:57 -04:00
James Shubin
34d572c523 engine: Improve the way we make a unique res path token
This is needed in the state directory.
2019-09-11 03:16:57 -04:00
James Shubin
011b496b3f engine: resources: Ensure the Kind and Name methods work
Triple check these work after decoding, by adding a test.
2019-09-11 03:16:57 -04:00
James Shubin
12b906eac6 engine: Refactor state dir into a separate function
This lets us re-use it, and know the path is fixed.
2019-09-11 03:16:57 -04:00
James Shubin
20937d05c3 engine: resources: file: Add undefined file state and validate it
We should consider using *string instead of the empty string, but let's
keep the diff smaller for now.
2019-09-11 03:16:57 -04:00
James Shubin
4943d37ccf engine: resources: file: Use constants for state values
More robustness is yay!
2019-09-11 03:16:57 -04:00
James Shubin
3a8fd215de engine: resources: file: Add Copy method to file res
This lets us implement the CopyableRes interface.
2019-09-11 03:16:57 -04:00
James Shubin
87572e8922 test: Catch capitalized error messages in tests 2019-09-06 03:28:49 -04:00
James Shubin
f1eedc7a01 lang: Clarify error message about missing field
User probably just mistyped a field name. Make that clear.
2019-09-06 03:28:49 -04:00
Donald Bakong
b79e48dd77 docs: Fix typo on quick-start-guide.md 2019-08-25 22:53:43 -04:00
James Shubin
18872194af misc: Warn users with weird computers
A user seemed to experience a weird golang issue when they had deps from
both package managers installed. I won't block or fail their install,
but we can print a warning message so that someone sees it in their
logs.
2019-08-23 22:10:34 -04:00
James Shubin
bafd7ba282 misc: Use apt instead of apt-get where possible
The future is now!
2019-08-23 22:10:12 -04:00
Donald Bakong
b186481181 pgraph: Add a test for FindEdge() function 2019-08-08 00:43:26 -04:00
James Shubin
09ca6d11ad lang: funcs: Module name should be public
For consistency with the rest of the core functions.
2019-07-29 11:17:43 -04:00
James Shubin
e68e4e786d docs: Add newly recorded talks and blog post 2019-07-26 06:52:01 -04:00
James Shubin
ee638254c3 lang: Remove the specialized info structs
Since this was an early form of the modern data struct, remove those and
pass in the correct data. This is also important in case we have
something more complex inside our string interpolation!
2019-07-26 04:20:04 -04:00
James Shubin
1e678905c4 util: Fix typo 2019-07-26 04:20:04 -04:00
James Shubin
10804c4b25 lang: Improve the gapi copying
We hit a weird bug where dirs would not get copied properly. I thought
the solution might be to add the missing dirs so they'd get a proper
mkdir, but in the end that didn't work well, so we just use `mkdirall`
and that seems to work. Let's leave it like this for now. Some of the
previous work for that is in the previous commit.
2019-07-26 04:20:04 -04:00
James Shubin
4bf9b4d41b util: Add some path helper functions
In the end, I'm not sure how useful these will be, but we'll keep them
in for now.
2019-07-26 04:20:04 -04:00
James Shubin
1161872324 etcd: fs: Errors should start with lower case 2019-07-26 04:20:04 -04:00
James Shubin
98cb570896 util: Add new mkdirall variants for the copy functions
This adds versions that recursively `mkdir` and don't error as easily.
This works around some bugs we were having with file copying.
2019-07-26 04:20:04 -04:00
James Shubin
ed4ee3b58e lang: funcs: Add deploy package with readfile related functions
This adds a readfile function to actually access files from our deploy.
A fun side effect is that we can even access our own code! In general,
it's a good reminder that you should only run trusted code on your own
infrastructure. This also includes a fancy new test case.
2019-07-26 03:38:26 -04:00
James Shubin
066048f4de lang: Pass through the Fs and the FsURI
This should give us options as to how a function should interact with an
FS. I feel like it's cleaner to go through the World API, and passing in
the FsURI lets us do that, but I passed in the Fs at the same time in
case it's useful for some reason. I think using it is a boundary
violation, but it's just a hunch. Does anything break when we move from
one deploy to the next?
2019-07-26 03:07:08 -04:00
James Shubin
4b6b91c08b lang: Make sure to call Init for functions that arrive via import
We weren't calling Init on some functions which should have had this
done. I'm not sure whether this is the right place, or if it should be
elsewhere as part of the scope building process. Good enough for now.
2019-07-22 06:49:02 -04:00
James Shubin
2980523a5b lang: Add a new function interface to accept data
Sometimes certain internal functions might want to get some data from
the AST or from something relating to the state of the language. This
adds a method to pass in that data. For now it's a very simple method,
but we could generalize it in the future if it becomes more useful.
2019-07-22 06:46:04 -04:00
James Shubin
f2f9c043bf lang, gapi: Work around a copy bug in the deploy
It seems when we had a files/ dir that we added to our deploy, it would
get copied into /files/files/whatever instead of /files/whatever where
it should be. Hopefully this works around the issue forever.
2019-07-22 06:40:47 -04:00
James Shubin
5d59cfd2c9 util: Ensure the afero copy function is working as intended
The destination should be a dir sometimes.
2019-07-22 06:38:02 -04:00
James Shubin
f94474e24f lang: Add the world implementation to our test suite
This allows our tests to actually run the World API in them.
2019-07-22 06:36:37 -04:00
James Shubin
a63fc6d9ba util: Add a remove path suffix util function
This pairs with a similar one we already had.
2019-07-22 06:35:13 -04:00
James Shubin
076adeef80 lang: funcs: Fix a copypasta error with the not equals operator
Woops, sorry!
2019-07-22 06:08:37 -04:00
James Shubin
a0e756317c lang: Add tests for slow unification
These used to be cases where our algorithm was unusably slow.

Thanks to foxxx0 for the report!
2019-07-21 03:15:06 -04:00
James Shubin
252cb5f2f3 lang: Detect windows style CR and return a better error
If you get a sneaky \r in your code, the error just looks like
whitespace, so this way we can warn you explicitly.
2019-07-21 03:10:21 -04:00
James Shubin
64288b4914 lang, test: Inline some overly indented tests
Sometimes you're busy hacking and it's nice for future you to fix up
your code!
2019-07-21 01:19:15 -04:00
James Shubin
9ca6c6a315 test: Split up long tests into multiple sub tests again
I think we need this for non --race tests too.
2019-07-21 00:55:36 -04:00
James Shubin
3651ab5c0c lang: Add more tests for function 2019-07-20 22:27:21 -04:00
James Shubin
b3f15e1ddc lang: Add more tests for class and include 2019-07-20 01:33:42 -04:00
178 changed files with 4406 additions and 884 deletions

126
Makefile

@@ -16,7 +16,11 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
SHELL = /usr/bin/env bash
.PHONY: all art cleanart version program lang path deps run race bindata generate build build-debug crossbuild clean test gofmt yamlfmt format docs rpmbuild mkdirs rpm srpm spec tar upload upload-sources upload-srpms upload-rpms upload-releases copr tag release funcgen
.PHONY: all art cleanart version program lang path deps run race bindata generate build build-debug crossbuild clean test gofmt yamlfmt format docs
.PHONY: rpmbuild mkdirs rpm srpm spec tar upload upload-sources upload-srpms upload-rpms upload-releases copr tag
.PHONY: mkosi mkosi_fedora-30 mkosi_fedora-29 mkosi_debian-10 mkosi_ubuntu-bionic mkosi_archlinux
.PHONY: release releases_path release_fedora-30 release_fedora-29 release_debian-10 release_ubuntu-bionic release_archlinux
.PHONY: funcgen
.SILENT: clean bindata
# a large amount of output from this `find`, can cause `make` to be much slower!
@@ -49,9 +53,23 @@ GOOSARCHES ?= linux/amd64 linux/ppc64 linux/ppc64le linux/arm64 darwin/amd64
GOHOSTOS = $(shell go env GOHOSTOS)
GOHOSTARCH = $(shell go env GOHOSTARCH)
RPM_PKG = releases/$(VERSION)/rpm/mgmt-$(VERSION)-1.x86_64.rpm
DEB_PKG = releases/$(VERSION)/deb/mgmt_$(VERSION)_amd64.deb
PACMAN_PKG = releases/$(VERSION)/pacman/mgmt-$(VERSION)-1-x86_64.pkg.tar.xz
TOKEN_FEDORA-30 = fedora-30
TOKEN_FEDORA-29 = fedora-29
TOKEN_DEBIAN-10 = debian-10
TOKEN_UBUNTU-BIONIC = ubuntu-bionic
TOKEN_ARCHLINUX = archlinux
FILE_FEDORA-30 = mgmt-$(TOKEN_FEDORA-30)-$(VERSION)-1.x86_64.rpm
FILE_FEDORA-29 = mgmt-$(TOKEN_FEDORA-29)-$(VERSION)-1.x86_64.rpm
FILE_DEBIAN-10 = mgmt_$(TOKEN_DEBIAN-10)_$(VERSION)_amd64.deb
FILE_UBUNTU-BIONIC = mgmt_$(TOKEN_UBUNTU-BIONIC)_$(VERSION)_amd64.deb
FILE_ARCHLINUX = mgmt-$(TOKEN_ARCHLINUX)-$(VERSION)-1-x86_64.pkg.tar.xz
PKG_FEDORA-30 = releases/$(VERSION)/$(TOKEN_FEDORA-30)/$(FILE_FEDORA-30)
PKG_FEDORA-29 = releases/$(VERSION)/$(TOKEN_FEDORA-29)/$(FILE_FEDORA-29)
PKG_DEBIAN-10 = releases/$(VERSION)/$(TOKEN_DEBIAN-10)/$(FILE_DEBIAN-10)
PKG_UBUNTU-BIONIC = releases/$(VERSION)/$(TOKEN_UBUNTU-BIONIC)/$(FILE_UBUNTU-BIONIC)
PKG_ARCHLINUX = releases/$(VERSION)/$(TOKEN_ARCHLINUX)/$(FILE_ARCHLINUX)
SHA256SUMS = releases/$(VERSION)/SHA256SUMS
SHA256SUMS_ASC = $(SHA256SUMS).asc
@@ -165,6 +183,7 @@ clean: ## clean things up
$(MAKE) --quiet -C bindata clean
$(MAKE) --quiet -C lang/funcs clean
$(MAKE) --quiet -C lang clean
$(MAKE) --quiet -C misc/mkosi clean
rm -f lang/funcs/core/generated_funcs.go || true
rm -f lang/funcs/core/generated_funcs_test.go || true
[ ! -e $(PROGRAM) ] || rm $(PROGRAM)
@@ -343,18 +362,57 @@ copr: upload-srpms ## build in copr
tag: ## tags a new release
./misc/tag.sh
#
# mkosi
#
mkosi: mkosi_fedora-30 mkosi_fedora-29 mkosi_debian-10 mkosi_ubuntu-bionic mkosi_archlinux ## builds distro packages via mkosi
mkosi_fedora-30: releases/$(VERSION)/.mkdir
@title='$@' ; echo "Generating: $${title#'mkosi_'} via mkosi..."
@title='$@' ; distro=$${title#'mkosi_'} ; ./misc/mkosi/make.sh $${distro} `realpath "releases/$(VERSION)/"`
mkosi_fedora-29: releases/$(VERSION)/.mkdir
@title='$@' ; echo "Generating: $${title#'mkosi_'} via mkosi..."
@title='$@' ; distro=$${title#'mkosi_'} ; ./misc/mkosi/make.sh $${distro} `realpath "releases/$(VERSION)/"`
mkosi_debian-10: releases/$(VERSION)/.mkdir
@title='$@' ; echo "Generating: $${title#'mkosi_'} via mkosi..."
@title='$@' ; distro=$${title#'mkosi_'} ; ./misc/mkosi/make.sh $${distro} `realpath "releases/$(VERSION)/"`
mkosi_ubuntu-bionic: releases/$(VERSION)/.mkdir
@title='$@' ; echo "Generating: $${title#'mkosi_'} via mkosi..."
@title='$@' ; distro=$${title#'mkosi_'} ; ./misc/mkosi/make.sh $${distro} `realpath "releases/$(VERSION)/"`
mkosi_archlinux: releases/$(VERSION)/.mkdir
@title='$@' ; echo "Generating: $${title#'mkosi_'} via mkosi..."
@title='$@' ; distro=$${title#'mkosi_'} ; ./misc/mkosi/make.sh $${distro} `realpath "releases/$(VERSION)/"`
#
# release
#
release: releases/$(VERSION)/mgmt-release.url ## generates and uploads a release
releases/$(VERSION)/mgmt-release.url: $(RPM_PKG) $(DEB_PKG) $(PACMAN_PKG) $(SHA256SUMS_ASC)
releases_path:
@#Don't put any other output or dependencies in here or they'll show!
@echo "releases/$(VERSION)/"
release_fedora-30: $(PKG_FEDORA-30)
release_fedora-29: $(PKG_FEDORA-29)
release_debian-10: $(PKG_DEBIAN-10)
release_ubuntu-bionic: $(PKG_UBUNTU-BIONIC)
release_archlinux: $(PKG_ARCHLINUX)
releases/$(VERSION)/mgmt-release.url: $(PKG_FEDORA-30) $(PKG_FEDORA-29) $(PKG_DEBIAN-10) $(PKG_UBUNTU-BIONIC) $(PKG_ARCHLINUX) $(SHA256SUMS_ASC)
@echo "Pushing git tag $(VERSION) to origin..."
git push origin $(VERSION)
@echo "Creating github release..."
hub release create \
-F <( echo -e "$(VERSION)\n";echo "Verify the signatures of all packages before you use them. The signing key can be downloaded from https://purpleidea.com/contact/#pgp-key to verify the release." ) \
-a $(RPM_PKG) \
-a $(DEB_PKG) \
-a $(PACMAN_PKG) \
-a $(PKG_FEDORA-30) \
-a $(PKG_FEDORA-29) \
-a $(PKG_DEBIAN-10) \
-a $(PKG_UBUNTU-BIONIC) \
-a $(PKG_ARCHLINUX) \
-a $(SHA256SUMS_ASC) \
$(VERSION) \
> releases/$(VERSION)/mgmt-release.url \
@@ -362,32 +420,48 @@ releases/$(VERSION)/mgmt-release.url: $(RPM_PKG) $(DEB_PKG) $(PACMAN_PKG) $(SHA2
|| rm -f releases/$(VERSION)/mgmt-release.url
releases/$(VERSION)/.mkdir:
mkdir -p releases/$(VERSION)/{deb,rpm,pacman}/ && touch releases/$(VERSION)/.mkdir
mkdir -p releases/$(VERSION)/{$(TOKEN_FEDORA-30),$(TOKEN_FEDORA-29),$(TOKEN_DEBIAN-10),$(TOKEN_UBUNTU-BIONIC),$(TOKEN_ARCHLINUX)}/ && touch releases/$(VERSION)/.mkdir
releases/$(VERSION)/rpm/changelog: $(PROGRAM) releases/$(VERSION)/.mkdir
@echo "Generating: rpm changelog..."
./misc/make-rpm-changelog.sh $(VERSION)
releases/$(VERSION)/$(TOKEN_FEDORA-30)/changelog: $(PROGRAM) releases/$(VERSION)/.mkdir
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Generating: $${distro} changelog..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/make-rpm-changelog.sh "$${distro}" $(VERSION)
$(RPM_PKG): releases/$(VERSION)/rpm/changelog
@echo "Building: rpm package..."
./misc/fpm-pack.sh rpm $(VERSION) libvirt-devel augeas-devel
$(PKG_FEDORA-30): releases/$(VERSION)/$(TOKEN_FEDORA-30)/changelog
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Building: $${distro} package..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/fpm-pack.sh $${distro} $(VERSION) "$(FILE_FEDORA-30)" libvirt-devel augeas-devel
releases/$(VERSION)/deb/changelog: $(PROGRAM) releases/$(VERSION)/.mkdir
@echo "Generating: deb changelog..."
./misc/make-deb-changelog.sh $(VERSION)
releases/$(VERSION)/$(TOKEN_FEDORA-29)/changelog: $(PROGRAM) releases/$(VERSION)/.mkdir
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Generating: $${distro} changelog..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/make-rpm-changelog.sh "$${distro}" $(VERSION)
$(DEB_PKG): releases/$(VERSION)/deb/changelog
@echo "Building: deb package..."
./misc/fpm-pack.sh deb $(VERSION) libvirt-dev libaugeas-dev
$(PKG_FEDORA-29): releases/$(VERSION)/$(TOKEN_FEDORA-29)/changelog
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Building: $${distro} package..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/fpm-pack.sh $${distro} $(VERSION) "$(FILE_FEDORA-29)" libvirt-devel augeas-devel
$(PACMAN_PKG): $(PROGRAM) releases/$(VERSION)/.mkdir
@echo "Building: pacman package..."
./misc/fpm-pack.sh pacman $(VERSION) libvirt augeas
releases/$(VERSION)/$(TOKEN_DEBIAN-10)/changelog: $(PROGRAM) releases/$(VERSION)/.mkdir
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Generating: $${distro} changelog..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/make-deb-changelog.sh "$${distro}" $(VERSION)
$(SHA256SUMS): $(RPM_PKG) $(DEB_PKG) $(PACMAN_PKG)
$(PKG_DEBIAN-10): releases/$(VERSION)/$(TOKEN_DEBIAN-10)/changelog
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Building: $${distro} package..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/fpm-pack.sh $${distro} $(VERSION) "$(FILE_DEBIAN-10)" libvirt-dev libaugeas-dev
releases/$(VERSION)/$(TOKEN_UBUNTU-BIONIC)/changelog: $(PROGRAM) releases/$(VERSION)/.mkdir
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Generating: $${distro} changelog..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/make-deb-changelog.sh "$${distro}" $(VERSION)
$(PKG_UBUNTU-BIONIC): releases/$(VERSION)/$(TOKEN_UBUNTU-BIONIC)/changelog
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Building: $${distro} package..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/fpm-pack.sh $${distro} $(VERSION) "$(FILE_UBUNTU-BIONIC)" libvirt-dev libaugeas-dev
$(PKG_ARCHLINUX): $(PROGRAM) releases/$(VERSION)/.mkdir
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; echo "Building: $${distro} package..."
@title='$(@D)' ; distro=$${title#'releases/$(VERSION)/'} ; ./misc/fpm-pack.sh $${distro} $(VERSION) "$(FILE_ARCHLINUX)" libvirt augeas
$(SHA256SUMS): $(PKG_FEDORA-30) $(PKG_FEDORA-29) $(PKG_DEBIAN-10) $(PKG_UBUNTU-BIONIC) $(PKG_ARCHLINUX)
@# remove the directory separator in the SHA256SUMS file
@echo "Generating: sha256 sum..."
sha256sum $(RPM_PKG) $(DEB_PKG) $(PACMAN_PKG) | awk -F '/| ' '{print $$1" "$$6}' > $(SHA256SUMS)
sha256sum $(PKG_FEDORA-30) $(PKG_FEDORA-29) $(PKG_DEBIAN-10) $(PKG_UBUNTU-BIONIC) $(PKG_ARCHLINUX) | awk -F '/| ' '{print $$1" "$$6}' > $(SHA256SUMS)
$(SHA256SUMS_ASC): $(SHA256SUMS)
@echo "Signing sha256 sum..."

README.md

@@ -107,12 +107,13 @@ If you have a well phrased question that might benefit others, consider asking
it by sending a patch to the [FAQ](docs/faq.md) section. I'll merge your
question, and a patch with the answer!
## Roadmap:
## Get involved:
Feel free to grab one of the straightforward [#mgmtlove](https://github.com/purpleidea/mgmt/labels/mgmtlove)
issues if you're a first time contributor to the project or if you're unsure
about what to hack on! Please get involved by working on one of these items or
by suggesting something else!
by suggesting something else! There are some lower priority issues and harder
issues available in our [TODO](TODO.md) file. Please have a look.
## Bugs:

65
TODO.md

@@ -1,10 +1,18 @@
# TODO
If you're looking for something to do, look here!
Let us know if you're working on one of the items.
If you'd like something to work on, ping @purpleidea and I'll create an issue
tailored especially for you! Just let me know your approximate golang skill
level and how many hours you'd like to spend on the patch.
Here is a TODO list of longstanding items that are either lower-priority, or
more involved in terms of time, skill-level, and/or motivation.
Please have a look, and let us know if you're working on one of the items. It's
best to open an issue to track your progress and to discuss any implementation
questions you might have.
Lastly, if you'd like something different to work on, please ping @purpleidea
and I'll create an issue tailored especially for your approximate golang skill
level and available time commitment in terms of hours you'd need to spend on the
patch.
Happy Hacking!
## Package resource
@@ -19,7 +27,7 @@ level and how many hours you'd like to spend on the patch.
## Svc resource
- [ ] base resource improvements
- [ ] refreshonly support [:heart:](https://github.com/purpleidea/mgmt/issues/464)
## Exec resource
@@ -33,33 +41,14 @@ level and how many hours you'd like to spend on the patch.
- [ ] automatic edges to file resource [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## Virt (libvirt) resource
- [ ] base resource improvements [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## Net (systemd-networkd) resource
- [ ] base resource
## Nspawn (systemd-nspawn) resource
- [ ] base resource [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## Mount (systemd-mount) resource
- [ ] base resource [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## Cron (systemd-timer) resource
- [ ] base resource [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## Http resource
- [ ] base resource [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## Etcd improvements
- [ ] fix embedded etcd master race
- [ ] fix etcd race bug that only happens during CI testing (intermittently
failing test case issue)
## Torrent/dht file transfer
@@ -69,17 +58,33 @@ level and how many hours you'd like to spend on the patch.
- [ ] base plumbing
## Resource improvements
- [ ] more reversible resources implemented
- [ ] more "cloud" resources
## Language improvements
- [ ] more core functions
- [ ] automatic language formatter, ala `gofmt`
- [ ] gedit/gnome-builder/gtksourceview syntax highlighting
- [ ] vim syntax highlighting
- [x] emacs syntax highlighting: see `misc/emacs/`
- [ ] emacs syntax highlighting: see `misc/emacs/` (needs updating)
- [ ] exposed $error variable for feedback in the language
- [ ] improve the printf function to add %[]s, %[]f ([]str, []float) and map,
struct, nested etc... %v would be nice too!
- [ ] add line/col/file annotations to AST so we can get locations of errors
that the parser finds
- [ ] add more error messages with the `%error` pattern in parser.y
- [ ] we should have helper functions or language sugar to pull a field out of a
struct, or a value out of a map, or an index out of a list, etc...
## Engine improvements
- [ ] add a "waiting for func" message in the func engine to notify the user
about slow functions...
## Other
- [ ] better error/retry handling
- [ ] deb package target in Makefile
- [ ] reproducible builds
- [ ] add your suggestions!

BIN
art/mgmt_poobear_meme.jpg Normal file (binary image, 102 KiB; not shown)


@@ -250,6 +250,43 @@ integer, then that value is the max size for that semaphore. Valid semaphore
id's include: `some_id`, `hello:42`, `not:smart:4` and `:13`. It is expected
that the last bare example be only used by the engine to add a global semaphore.
#### Rewatch
Boolean. Rewatch specifies whether we re-run the Watch worker during a graph
swap if it has errored. When doing a graph compare to swap the graphs, if this
is true, and this particular worker has errored, then we'll remove it and add it
back as a new vertex, thus causing it to run again. This is different from the
`Retry` metaparam which applies during the normal execution. It is only when
this is exhausted that we're in permanent worker failure, and only then can we
rely on this metaparam.
#### Realize
Boolean. Realize ensures that the resource is guaranteed to converge at least
once before a potential graph swap removes or changes it. This guarantee is
useful for fast-changing graphs, to ensure that the brief creation of a resource
is seen. This guarantee does not protect against the engine quitting normally,
and it can't be guaranteed if the resource is blocked because of a failed
pre-requisite resource.
*XXX: This is currently not implemented!*
#### Reverse
Boolean. Reverse is a property that some resources can implement that specifies
that some "reverse" operation should happen when that resource "disappears". A
disappearance happens when a resource is defined in one instance of the graph,
and is gone in the subsequent one. This disappearance can happen if it was
previously in an if statement that then becomes false.
This is helpful for building robust programs with the engine. The engine adds a
"reversed" resource to that subsequent graph to accomplish the desired "reverse"
mechanics. The specifics of what this entails are a property of the particular
resource that is being "reversed".
It might be wise to combine the use of this meta parameter with the use of the
`realize` meta parameter to ensure that your reversed resource actually runs at
least once, if there's a chance that it might be gone for a while.
### Lang metadata file
Any module *must* have a metadata file in its root. It must be named

docs/faq.md

@@ -226,6 +226,34 @@ it and replace it with your git cloned directory. In my case, I like to work on
things in `~/code/mgmt/`, so that path is a symlink that points to the long
project directory.
### Why does my file resource error with `no such file or directory`?
If you create a file resource and only specify the content like this:
```
file "/tmp/foo" {
content => "hello world\n",
}
```
Then this will attempt to set the contents of that file to the desired string,
but *only* if that file already exists. If you'd like to ensure that it also
gets created in case it is not present, then you must also specify the state:
```
file "/tmp/foo" {
state => "exists",
content => "hello world\n",
}
```
Similar logic applies for situations when you only specify the `mode` parameter.
This all turns out to be safer and more "correct", in that it would error and
prevent masking an error for a situation when you expected a file to already be
at that location. It also turns out to simplify the internals significantly, and
remove an ambiguous scenario with the reversible file resource.
### On startup `mgmt` hangs after: `etcd: server: starting...`.
If you get an error message similar to:


@@ -44,3 +44,11 @@ if we missed something that you think is relevant!
| James Shubin | blog | [Mgmt Configuration Language](https://purpleidea.com/blog/2018/02/05/mgmt-configuration-language/) |
| James Shubin | video | [Recording from CfgMgmtCamp.eu 2018](https://www.youtube.com/watch?v=NxObmwZDyrI) |
| Jonathan Gold | blog | [Go Netlink and Select](https://jonathangold.ca/blog/go-netlink-and-select/) |
| James Shubin | video | [Recording from DevOpsDays Montreal 2018](https://www.youtube.com/watch?v=1i38c5cooHo) |
| James Shubin | video | [Recording from FOSDEM Minimalistic Languages Devroom 2019](https://video.fosdem.org/2019/K.4.201/mgmtconfig.webm) |
| James Shubin | video | [Recording from FOSDEM Infra Management Devroom 2019](https://video.fosdem.org/2019/UB2.252A/mgmt.webm) |
| James Shubin | video | [Recording from FOSDEM Graph Processing Devroom 2019](https://video.fosdem.org/2019/H.1308/graph_mgmt_config.webm) |
| James Shubin | video | [Recording from FOSDEM Virtualization Devroom 2019](https://video.fosdem.org/2019/H.2213/vai_real_time_virtualization_automation.webm) |
| James Shubin | video | [Recording from FOSDEM Containers Devroom 2019](https://video.fosdem.org/2019/UA2.114/containers_mgmt.webm) |
| James Shubin | video | [Recording from FOSDEM Monitoring Devroom 2019](https://video.fosdem.org/2019/UB2.252A/real_time_merging_of_config_management_and_monitoring.webm) |
| James Shubin | blog | [Mgmt Configuration Language: Class and Include](https://purpleidea.com/blog/2019/07/26/class-and-include-in-mgmt/) |

docs/quick-start-guide.md

@@ -43,7 +43,7 @@ You'll need some dependencies, including `golang`, and some associated tools.
* To install on macOS systems install [Homebrew](https://brew.sh)
and run: `brew install go`
* You can run `go version` to check the golang version.
* If your distro is tool old, you may need to [download](https://golang.org/dl/)
* If your distro is too old, you may need to [download](https://golang.org/dl/)
a newer golang version.
#### Setting up golang


@@ -69,8 +69,8 @@ identified by a trailing slash in their path name. Files have no such slash.
It has the following properties:
* `path`: absolute file path (directories have a trailing slash here)
* `state`: either `exists`, `absent`, or undefined
* `content`: raw file content
* `state`: either `exists` (the default value) or `absent`
* `mode`: octal unix file permissions
* `owner`: username or uid for the file owner
* `group`: group name or gid for the file group
@@ -79,6 +79,16 @@ It has the following properties:
The path property specifies the file or directory that we are managing.
### State
The state property describes the action we'd like to apply for the resource. The
possible values are: `exists` and `absent`. If you do not specify either of
these, it is undefined. Without specifying this value as `exists`, another param
cannot cause a file to get implicitly created. When specifying this value as
`absent`, you should not specify any other params that would normally change the
file. For example, if you specify `content` and this param is `absent`, then you
will get an engine validation error.
### Content
The content property is a string that specifies the desired file contents.
@@ -88,11 +98,6 @@ The content property is a string that specifies the desired file contents.
The source property points to a source file or directory path that we wish to
copy over and use as the desired contents for our resource.
### State
The state property describes the action we'd like to apply for the resource. The
possible values are: `exists` and `absent`.
### Recurse
The recurse property limits whether file resource operations should recurse into


@@ -152,6 +152,18 @@ func ResCmp(r1, r2 Res) error {
}
}
// compare meta params for resources with reversible traits
r1v, ok1 := r1.(ReversibleRes)
r2v, ok2 := r2.(ReversibleRes)
if ok1 != ok2 {
return fmt.Errorf("reversible differs") // they must be different (optional)
}
if ok1 && ok2 {
if r1v.ReversibleMeta().Cmp(r2v.ReversibleMeta()) != nil {
return fmt.Errorf("reversible differs")
}
}
return nil
}
@@ -280,6 +292,18 @@ func AdaptCmp(r1, r2 CompatibleRes) error {
}
}
// compare meta params for resources with reversible traits
r1v, ok1 := r1.(ReversibleRes)
r2v, ok2 := r2.(ReversibleRes)
if ok1 != ok2 {
return fmt.Errorf("reversible differs") // they must be different (optional)
}
if ok1 && ok2 {
if r1v.ReversibleMeta().Cmp(r2v.ReversibleMeta()) != nil {
return fmt.Errorf("reversible differs")
}
}
return nil
}


@@ -106,6 +106,16 @@ func ResCopy(r CopyableRes) (CopyableRes, error) {
}
}
// copy meta params for resources with reversible traits
if x, ok := r.(ReversibleRes); ok {
dst, ok := res.(ReversibleRes)
if !ok {
// programming error
panic("reversible interfaces are illogical")
}
dst.SetReversibleMeta(x.ReversibleMeta()) // no need to copy atm
}
return res, nil
}


@@ -89,6 +89,9 @@ func AutoEdge(graph *pgraph.Graph, debug bool, logf func(format string, v ...int
}
}
}
// It would be great to ensure we didn't add any loops here, but instead
// of checking now, we'll move the check into the main loop.
return nil
}


@@ -66,5 +66,8 @@ func AutoGroup(ag engine.AutoGrouper, g *pgraph.Graph, debug bool, logf func(for
}
}
// It would be great to ensure we didn't add any loops here, but instead
// of checking now, we'll move the check into the main loop.
return nil
}


@@ -18,6 +18,9 @@
package autogroup
import (
"fmt"
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util/errwrap"
)
@@ -112,8 +115,17 @@ func VertexMerge(g *pgraph.Graph, v1, v2 pgraph.Vertex, vertexMergeFn func(pgrap
// note: This branch isn't used if the vertexMergeFn
// decides to just merge logically on its own instead
// of actually returning something that we then merge.
v1 = v // TODO: ineffassign?
v1 = v // XXX: ineffassign?
//*v1 = *v
// Ensure that everything still validates. (For safety!)
r, ok := v1.(engine.Res) // TODO: v ?
if !ok {
return fmt.Errorf("not a Res")
}
if err := engine.Validate(r); err != nil {
return errwrap.Wrapf(err, "the Res did not Validate")
}
}
}
g.DeleteVertex(v2) // remove grouped vertex


@@ -25,11 +25,18 @@ import (
"github.com/purpleidea/mgmt/converger"
"github.com/purpleidea/mgmt/engine"
engineUtil "github.com/purpleidea/mgmt/engine/util"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util/errwrap"
"github.com/purpleidea/mgmt/util/semaphore"
)
const (
// StateDir is the name of the sub directory where all the local
// resource state is stored.
StateDir = "state"
)
// Engine encapsulates a generic graph and manages its operations.
type Engine struct {
Program string
@@ -174,9 +181,9 @@ func (obj *Engine) Commit() error {
return errwrap.Wrapf(err, "the Res did not Validate")
}
// FIXME: is res.Name() sufficiently unique to use as a UID here?
pathUID := fmt.Sprintf("%s-%s", res.Kind(), res.Name())
statePrefix := fmt.Sprintf("%s/", path.Join(obj.Prefix, "state", pathUID))
pathUID := engineUtil.ResPathUID(res)
statePrefix := fmt.Sprintf("%s/", path.Join(obj.statePrefix(), pathUID))
// don't create this unless it *will* be used
//if err := os.MkdirAll(statePrefix, 0770); err != nil {
// return errwrap.Wrapf(err, "can't create state prefix")
@@ -416,3 +423,8 @@ func (obj *Engine) Close() error {
func (obj *Engine) Graph() *pgraph.Graph {
return obj.graph
}
// statePrefix returns the dir where all the resource state is stored locally.
func (obj *Engine) statePrefix() string {
return fmt.Sprintf("%s/", path.Join(obj.Prefix, StateDir))
}

295
engine/graph/reverse.go Normal file

@@ -0,0 +1,295 @@
// Mgmt
// Copyright (C) 2013-2019+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package graph
import (
"fmt"
"io/ioutil"
"os"
"path"
"sort"
"github.com/purpleidea/mgmt/engine"
engineUtil "github.com/purpleidea/mgmt/engine/util"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util/errwrap"
)
const (
// ReverseFile is the file name in the resource state dir where any
// reversal information is stored.
ReverseFile = "reverse"
// ReversePerm is the permissions mode used to create the ReverseFile.
ReversePerm = 0600
)
// Reversals adds the reversals onto the loaded graph. This should happen last,
// and before Commit.
func (obj *Engine) Reversals() error {
if obj.nextGraph == nil {
return fmt.Errorf("there is no active graph to add reversals to")
}
// Initially get all of the reversals to seek out all possible errors.
// XXX: The engine needs to know where data might have been stored if we
// XXX: want to potentially allow alternate read/write paths, like etcd.
// XXX: In this scenario, we'd have to store a token somewhere to let us
// XXX: know to look elsewhere for the special ReversalList read method.
data, err := obj.ReversalList() // (map[string]string, error)
if err != nil {
return errwrap.Wrapf(err, "the reversals had errors")
}
if len(data) == 0 {
return nil // end early
}
resMatch := func(r1, r2 engine.Res) bool { // simple match on UID only!
if r1.Kind() != r2.Kind() {
return false
}
if r1.Name() != r2.Name() {
return false
}
return true
}
resInList := func(needle engine.Res, haystack []engine.Res) bool {
for _, res := range haystack {
if resMatch(needle, res) {
return true
}
}
return false
}
if obj.Debug {
obj.Logf("decoding %d reversals...", len(data))
}
resources := []engine.Res{}
// do this in a sorted order so that it errors deterministically
sorted := []string{}
for key := range data {
sorted = append(sorted, key)
}
sort.Strings(sorted)
for _, key := range sorted {
val := data[key]
// XXX: replace this ResToB64 method with one that stores it in
// a human readable format, in case someone wants to hack and
// edit it manually.
// XXX: we probably want this to be YAML, it works with the diff
// too...
r, err := engineUtil.B64ToRes(val)
if err != nil {
return errwrap.Wrapf(err, "error decoding res with UID: `%s`", key)
}
res, ok := r.(engine.ReversibleRes)
if !ok {
// this requirement is here to keep things simpler...
return errwrap.Wrapf(err, "decoded res with UID: `%s` was not reversible", key)
}
matchFn := func(vertex pgraph.Vertex) (bool, error) {
r, ok := vertex.(engine.Res)
if !ok {
return false, fmt.Errorf("not a Res")
}
if !resMatch(r, res) {
return false, nil
}
return true, nil
}
// FIXME: not efficient, we could build a cache-map first
vertex, err := obj.nextGraph.VertexMatchFn(matchFn) // (Vertex, error)
if err != nil {
return errwrap.Wrapf(err, "error searching graph for match")
}
if vertex != nil { // found one!
continue // it doesn't need reversing yet
}
// TODO: check for (incompatible?) duplicates instead
if resInList(res, resources) { // we've already got this one...
continue
}
// We set this in two different places to be safe. It ensures
// that we erase the reversal state file after we've used it.
res.ReversibleMeta().Reversal = true // set this for later...
resources = append(resources, res)
}
if len(resources) == 0 {
return nil // end early
}
// Now that we've passed the chance of any errors, we modify the graph.
obj.Logf("adding %d reversals...", len(resources))
for _, res := range resources {
obj.nextGraph.AddVertex(res)
}
// TODO: Do we want a way for stored reversals to add edges too?
// It would be great to ensure we didn't add any loops here, but instead
// of checking now, we'll move the check into the main loop.
return nil
}
// ReversalList returns all the available pending reversal data on this host. It
// can then be decoded by whatever method is appropriate.
func (obj *Engine) ReversalList() (map[string]string, error) {
result := make(map[string]string) // some key to contents
dir := obj.statePrefix() // loop through this dir...
files, err := ioutil.ReadDir(dir)
if err != nil && !os.IsNotExist(err) {
return nil, errwrap.Wrapf(err, "error reading list of state dirs")
} else if err != nil {
return result, nil // nothing found, no state dir exists yet
}
for _, x := range files {
key := x.Name() // some uid for the resource
file := path.Join(dir, key, ReverseFile)
content, err := ioutil.ReadFile(file)
if err != nil && !os.IsNotExist(err) {
return nil, errwrap.Wrapf(err, "could not read reverse file: %s", file)
} else if err != nil {
continue // file does not exist, skip
}
// file exists!
str := string(content)
result[key] = str // save
}
return result, nil
}
// ReversalInit performs the reversal initialization steps if necessary for this
// resource.
func (obj *State) ReversalInit() error {
res, ok := obj.Vertex.(engine.ReversibleRes)
if !ok {
return nil // nothing to do
}
if res.ReversibleMeta().Disabled {
return nil // nothing to do, reversal isn't enabled
}
// If the reversal is enabled, but we are the result of a previous
// reversal, then this will overwrite that older reversal request, and
// our resource should be designed to deal with that. This happens if we
// return a reversible resource as the reverse of a resource that was
// reversed. It's probably fairly rare.
if res.ReversibleMeta().Reversal {
obj.Logf("triangle reversal") // warn!
}
r, err := res.Reversed()
if err != nil {
return errwrap.Wrapf(err, "could not reverse: %s", res.String())
}
if r == nil {
return nil // this can't be reversed, or isn't implemented here
}
// We set this in two different places to be safe. It ensures that we
// erase the reversal state file after we've used it.
r.ReversibleMeta().Reversal = true // set this for later...
// XXX: replace this ResToB64 method with one that stores it in a human
// readable format, in case someone wants to hack and edit it manually.
// XXX: we probably want this to be YAML, it works with the diff too...
str, err := engineUtil.ResToB64(r)
if err != nil {
return errwrap.Wrapf(err, "could not encode: %s", res.String())
}
// TODO: put this method on traits.Reversible as part of the interface?
return obj.ReversalWrite(str, res.ReversibleMeta().Overwrite) // Store!
}
// ReversalClose performs the reversal shutdown steps if necessary for this
// resource.
func (obj *State) ReversalClose() error {
res, ok := obj.Vertex.(engine.ReversibleRes)
if !ok {
return nil // nothing to do
}
// Don't check res.ReversibleMeta().Disabled because we're removing the
// previous one. That value only applies if we're doing a new reversal.
if !res.ReversibleMeta().Reversal {
return nil // nothing to erase, we're not a reversal resource
}
if !obj.isStateOK { // did we successfully reverse?
obj.Logf("did not complete reversal") // warn
return nil
}
// TODO: put this method on traits.Reversible as part of the interface?
return obj.ReversalDelete() // Erase our reversal instructions.
}
// ReversalWrite stores the reversal state information for this resource.
func (obj *State) ReversalWrite(str string, overwrite bool) error {
dir, err := obj.varDir("") // private version
if err != nil {
return errwrap.Wrapf(err, "could not get VarDir for reverse")
}
file := path.Join(dir, ReverseFile) // return a unique file
content, err := ioutil.ReadFile(file)
if err != nil && !os.IsNotExist(err) {
return errwrap.Wrapf(err, "could not read reverse file: %s", file)
}
// file exists and we shouldn't overwrite if different
if err == nil && !overwrite {
// compare to existing file
oldStr := string(content)
if str != oldStr {
obj.Logf("existing, pending, reversible resource exists")
//obj.Logf("diff:")
//obj.Logf("") // TODO: print the diff w/o and secret values
return fmt.Errorf("existing, pending, reversible resource exists")
}
}
return ioutil.WriteFile(file, []byte(str), ReversePerm)
}
// ReversalDelete removes the reversal state information for this resource.
func (obj *State) ReversalDelete() error {
dir, err := obj.varDir("") // private version
if err != nil {
return errwrap.Wrapf(err, "could not get VarDir for reverse")
}
file := path.Join(dir, ReverseFile) // return a unique file
return errwrap.Wrapf(os.Remove(file), "could not remove reverse state file")
}


@@ -203,6 +203,12 @@ func (obj *State) Init() error {
if obj.Debug {
obj.Logf("Init(%s)", res)
}
// write the reverse request to the disk...
if err := obj.ReversalInit(); err != nil {
return err // TODO: test this code path...
}
err := res.Init(obj.init)
if obj.Debug {
obj.Logf("Init(%s): Return(%+v)", res, err)
@@ -236,12 +242,23 @@ func (obj *State) Close() error {
if obj.Debug {
obj.Logf("Close(%s)", res)
}
err := res.Close()
if obj.Debug {
obj.Logf("Close(%s): Return(%+v)", res, err)
var reverr error
// clear the reverse request from the disk...
if err := obj.ReversalClose(); err != nil {
// TODO: test this code path...
// TODO: should this be an error or a warning?
reverr = err
}
return err
reterr := res.Close()
if obj.Debug {
obj.Logf("Close(%s): Return(%+v)", res, reterr)
}
reterr = errwrap.Append(reterr, reverr)
return reterr
}
// Poke sends a notification on the poke channel. This channel is used to notify


@@ -751,45 +751,37 @@ func (obj *AwsEc2Res) CheckApply(apply bool) (bool, error) {
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *AwsEc2Res) Cmp(r engine.Res) error {
if !obj.Compare(r) {
return fmt.Errorf("did not compare")
}
return nil
}
// Compare two resources and return if they are equivalent.
func (obj *AwsEc2Res) Compare(r engine.Res) bool {
// we can only compare AwsEc2Res to others of the same resource kind
res, ok := r.(*AwsEc2Res)
if !ok {
return false
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.State != res.State {
return false
return fmt.Errorf("the State differs")
}
if obj.Region != res.Region {
return false
return fmt.Errorf("the Region differs")
}
if obj.Type != res.Type {
return false
return fmt.Errorf("the Type differs")
}
if obj.ImageID != res.ImageID {
return false
return fmt.Errorf("the ImageID differs")
}
if obj.WatchEndpoint != res.WatchEndpoint {
return false
return fmt.Errorf("the WatchEndpoint differs")
}
if obj.WatchListenAddr != res.WatchListenAddr {
return false
return fmt.Errorf("the WatchListenAddr differs")
}
if obj.ErrorOnMalformedPost != res.ErrorOnMalformedPost {
return false
return fmt.Errorf("the ErrorOnMalformedPost differs")
}
if obj.UserData != res.UserData {
return false
return fmt.Errorf("the UserData differs")
}
return true
return nil
}
func (obj *AwsEc2Res) prependName() string {
@@ -1025,7 +1017,7 @@ func (obj *AwsEc2Res) snsMakeTopic() (string, error) {
}
obj.init.Logf("Created SNS Topic")
if topic.TopicArn == nil {
return "", fmt.Errorf("TopicArn is nil")
return "", fmt.Errorf("the TopicArn is nil")
}
return *topic.TopicArn, nil
}

engine/resources/file.go

@@ -43,6 +43,18 @@ func init() {
engine.RegisterResource("file", func() engine.Res { return &FileRes{} })
}
const (
// FileStateExists is the string that represents that the file should be
// present.
FileStateExists = "exists"
// FileStateAbsent is the string that represents that the file should
// not exist.
FileStateAbsent = "absent"
// FileStateUndefined means the file state has not been specified.
// TODO: consider moving to *string and express this state as a nil.
FileStateUndefined = ""
)
// FileRes is a file and directory resource. Dirs are defined by names ending
// in a slash.
type FileRes struct {
@@ -50,6 +62,7 @@ type FileRes struct {
traits.Edgeable
//traits.Groupable // TODO: implement this
traits.Recvable
traits.Reversible
init *engine.Init
@@ -60,19 +73,29 @@ type FileRes struct {
Dirname string `lang:"dirname" yaml:"dirname"` // override the path dirname
Basename string `lang:"basename" yaml:"basename"` // override the path basename
// State specifies the desired state of the file. It can be either
// `exists` or `absent`. If you do not specify this, we will not be able
// to create or remove a file if it might be logical for another
// param to require that. Instead it will error. This means that this
// field is not implied by specifying some content or a mode.
State string `lang:"state" yaml:"state"`
// Content specifies the file contents to use. If this is nil, they are
// left undefined. It cannot be combined with Source.
Content *string `lang:"content" yaml:"content"`
// Source specifies the source contents for the file resource. It cannot
// be combined with the Content parameter.
Source string `lang:"source" yaml:"source"`
// State specifies the desired state of the file. It can be either
// `exists` or `absent`. If you do not specify this, it will be
// undefined, and determined based on the other parameters.
State string `lang:"state" yaml:"state"`
Owner string `lang:"owner" yaml:"owner"`
Group string `lang:"group" yaml:"group"`
// Owner specifies the file owner. You can specify either the string
// name, or a string representation of the owner integer uid.
Owner string `lang:"owner" yaml:"owner"`
// Group specifies the file group. You can specify either the string
// name, or a string representation of the group integer gid.
Group string `lang:"group" yaml:"group"`
// Mode is the mode of the file as a string representation of the octal
// form.
// TODO: add symbolic representations
Mode string `lang:"mode" yaml:"mode"`
Recurse bool `lang:"recurse" yaml:"recurse"`
Force bool `lang:"force" yaml:"force"`
@@ -81,96 +104,6 @@ type FileRes struct {
recWatcher *recwatch.RecWatcher
}
// Default returns some sensible defaults for this resource.
func (obj *FileRes) Default() engine.Res {
return &FileRes{
State: "exists",
}
}
// Validate reports any problems with the struct definition.
func (obj *FileRes) Validate() error {
if obj.getPath() == "" {
return fmt.Errorf("path is empty")
}
if obj.Dirname != "" && !strings.HasSuffix(obj.Dirname, "/") {
return fmt.Errorf("dirname must end with a slash")
}
if strings.HasPrefix(obj.Basename, "/") {
return fmt.Errorf("basename must not start with a slash")
}
if !strings.HasPrefix(obj.getPath(), "/") {
return fmt.Errorf("resultant path must be absolute")
}
if obj.Content != nil && obj.Source != "" {
return fmt.Errorf("can't specify both Content and Source")
}
if obj.isDir() && obj.Content != nil { // makes no sense
return fmt.Errorf("can't specify Content when creating a Dir")
}
if obj.Mode != "" {
if _, err := obj.mode(); err != nil {
return err
}
}
if obj.Owner != "" || obj.Group != "" {
fileInfo, err := os.Stat("/") // pick root just to do this test
if err != nil {
return fmt.Errorf("can't stat root to get system information")
}
_, ok := fileInfo.Sys().(*syscall.Stat_t)
if !ok {
return fmt.Errorf("can't set Owner or Group on this platform")
}
}
if _, err := engineUtil.GetUID(obj.Owner); obj.Owner != "" && err != nil {
return err
}
if _, err := engineUtil.GetGID(obj.Group); obj.Group != "" && err != nil {
return err
}
// XXX: should this specify that we create an empty directory instead?
//if obj.Source == "" && obj.isDir() {
// return fmt.Errorf("Can't specify an empty source when creating a Dir.")
//}
return nil
}
// mode returns the file permission specified on the graph. It doesn't handle
// the case where the mode is not specified. The caller should check obj.Mode is
// not empty.
func (obj *FileRes) mode() (os.FileMode, error) {
m, err := strconv.ParseInt(obj.Mode, 8, 32)
if err != nil {
return os.FileMode(0), errwrap.Wrapf(err, "mode should be an octal number (%s)", obj.Mode)
}
return os.FileMode(m), nil
}
// Init runs some startup code for this resource.
func (obj *FileRes) Init(init *engine.Init) error {
obj.init = init // save for later
obj.sha256sum = ""
return nil
}
// Close is run by the engine to clean up after the resource is done.
func (obj *FileRes) Close() error {
return nil
}
// getPath returns the actual path to use for this resource. It computes this
// after analysis of the Path, Dirname and Basename values. Dirs end with slash.
// TODO: memoize the result if this seems important.
@@ -200,6 +133,115 @@ func (obj *FileRes) isDir() bool {
return strings.HasSuffix(obj.getPath(), "/") // dirs have trailing slashes
}
// mode returns the file permission specified on the graph. It doesn't handle
// the case where the mode is not specified. The caller should check obj.Mode is
// not empty.
func (obj *FileRes) mode() (os.FileMode, error) {
m, err := strconv.ParseInt(obj.Mode, 8, 32)
if err != nil {
return os.FileMode(0), errwrap.Wrapf(err, "mode should be an octal number (%s)", obj.Mode)
}
return os.FileMode(m), nil
}
// Default returns some sensible defaults for this resource.
func (obj *FileRes) Default() engine.Res {
return &FileRes{
//State: FileStateUndefined, // the default must be undefined!
}
}
// Validate reports any problems with the struct definition.
func (obj *FileRes) Validate() error {
if obj.getPath() == "" {
return fmt.Errorf("path is empty")
}
if obj.Dirname != "" && !strings.HasSuffix(obj.Dirname, "/") {
return fmt.Errorf("dirname must end with a slash")
}
if strings.HasPrefix(obj.Basename, "/") {
return fmt.Errorf("basename must not start with a slash")
}
if !strings.HasPrefix(obj.getPath(), "/") {
return fmt.Errorf("resultant path must be absolute")
}
if obj.State != FileStateExists && obj.State != FileStateAbsent && obj.State != FileStateUndefined {
return fmt.Errorf("the State is invalid")
}
if obj.State == FileStateAbsent && obj.Content != nil {
return fmt.Errorf("can't specify Content for an absent file")
}
if obj.Content != nil && obj.Source != "" {
return fmt.Errorf("can't specify both Content and Source")
}
if obj.isDir() && obj.Content != nil { // makes no sense
return fmt.Errorf("can't specify Content when creating a Dir")
}
// TODO: should we silently ignore these errors or include them?
//if obj.State == FileStateAbsent && obj.Owner != "" {
// return fmt.Errorf("can't specify Owner for an absent file")
//}
//if obj.State == FileStateAbsent && obj.Group != "" {
// return fmt.Errorf("can't specify Group for an absent file")
//}
if obj.Owner != "" || obj.Group != "" {
fileInfo, err := os.Stat("/") // pick root just to do this test
if err != nil {
return fmt.Errorf("can't stat root to get system information")
}
_, ok := fileInfo.Sys().(*syscall.Stat_t)
if !ok {
return fmt.Errorf("can't set Owner or Group on this platform")
}
}
if _, err := engineUtil.GetUID(obj.Owner); obj.Owner != "" && err != nil {
return err
}
if _, err := engineUtil.GetGID(obj.Group); obj.Group != "" && err != nil {
return err
}
// TODO: should we silently ignore this error or include it?
//if obj.State == FileStateAbsent && obj.Mode != "" {
// return fmt.Errorf("can't specify Mode for an absent file")
//}
if obj.Mode != "" {
if _, err := obj.mode(); err != nil {
return err
}
}
// XXX: should this specify that we create an empty directory instead?
//if obj.Source == "" && obj.isDir() {
// return fmt.Errorf("can't specify an empty source when creating a Dir.")
//}
return nil
}
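To illustrate the rules above, here is a hedged sketch of a definition that Validate would reject, since Content conflicts with an absent State; the package name, path and content values are assumptions made for the example:

package resources // assumption: this lives alongside FileRes in its package

import "fmt"

// exampleValidate shows one invalid combination caught by Validate.
func exampleValidate() {
	content := "hello\n"
	res := &FileRes{
		Path:    "/tmp/example",  // hypothetical path
		State:   FileStateAbsent, // the file should not exist...
		Content: &content,        // ...yet content is specified
	}
	if err := res.Validate(); err != nil {
		fmt.Println(err) // expect: can't specify Content for an absent file
	}
}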
// Init runs some startup code for this resource.
func (obj *FileRes) Init(init *engine.Init) error {
obj.init = init // save for later
obj.sha256sum = ""
return nil
}
// Close is run by the engine to clean up after the resource is done.
func (obj *FileRes) Close() error {
return nil
}
// Watch is the primary listener for this resource and it outputs events.
// This one is a file watcher for files and directories.
// Modify with caution, it is probably important to write some test cases first!
@@ -252,7 +294,7 @@ func (obj *FileRes) Watch() error {
// can be a bytes Buffer struct. It can take an input sha256 hash to use instead
// of computing the source data hash, and it returns the computed value if this
// function reaches that stage. As usual, it respects the apply action variable,
// and it symmetry with the main CheckApply function returns checkOK and error.
// and has some symmetry with the main CheckApply function.
func (obj *FileRes) fileCheckApply(apply bool, src io.ReadSeeker, dst string, sha256sum string) (string, bool, error) {
// TODO: does it make sense to switch dst to an io.Writer ?
// TODO: use obj.Force when dealing with symlinks and other file types!
@@ -289,18 +331,25 @@ func (obj *FileRes) fileCheckApply(apply bool, src io.ReadSeeker, dst string, sh
defer dstClose()
dstExists := !os.IsNotExist(err)
// Optimization: the file creation normally happens in stateCheckApply,
// but we skip it there in order to do it here, unless the state is
// undefined, in which case we shouldn't force it!
if !dstExists && obj.State == FileStateUndefined {
return "", false, err
}
dstStat, err := dstFile.Stat()
if err != nil && dstExists {
return "", false, err
}
if dstExists && dstStat.IsDir() { // oops, dst is a dir, and we want a file...
if !apply {
return "", false, nil
}
if !obj.Force {
return "", false, fmt.Errorf("can't force dir into file: %s", dst)
}
if !apply {
return "", false, nil
}
cleanDst := path.Clean(dst)
if cleanDst == "" || cleanDst == "/" {
@@ -390,7 +439,7 @@ func (obj *FileRes) dirCheckApply(apply bool) (bool, error) {
// check if the path exists and is a directory
fileInfo, err := os.Stat(obj.getPath())
if err != nil && !os.IsNotExist(err) {
return false, errwrap.Wrapf(err, "error checking file resource existence")
return false, errwrap.Wrapf(err, "stat error on file resource")
}
if err == nil && fileInfo.IsDir() {
@@ -503,6 +552,7 @@ func (obj *FileRes) syncCheckApply(apply bool, src, dst string) (bool, error) {
relPathFile := strings.TrimSuffix(relPath, "/")
if _, ok := smartDst[relPathFile]; ok {
absCleanDst := path.Clean(absDst)
// TODO: can we fail this before `!apply`?
if !obj.Force {
return false, fmt.Errorf("can't force file into dir: %s", absCleanDst)
}
@@ -571,13 +621,13 @@ func (obj *FileRes) syncCheckApply(apply bool, src, dst string) (bool, error) {
continue
}
_ = absSrc
//obj.init.Logf("syncCheckApply: Recurse rm: %s -> %s", absSrc, absDst)
//obj.init.Logf("syncCheckApply: recurse rm: %s -> %s", absSrc, absDst)
//if c, err := obj.syncCheckApply(apply, absSrc, absDst); err != nil {
// return false, errwrap.Wrapf(err, "syncCheckApply: Recurse rm failed")
// return false, errwrap.Wrapf(err, "syncCheckApply: recurse rm failed")
//} else if !c { // don't let subsequent passes make this true
// checkOK = false
//}
//obj.init.Logf("syncCheckApply: Removing: %s", absCleanDst)
//obj.init.Logf("syncCheckApply: removing: %s", absCleanDst)
//if apply { // safety
// if err := os.Remove(absCleanDst); err != nil {
// return false, err
@@ -589,9 +639,10 @@ func (obj *FileRes) syncCheckApply(apply bool, src, dst string) (bool, error) {
return checkOK, nil
}
// state performs a CheckApply of the file state to create an empty file.
// stateCheckApply performs a CheckApply of the file state to create or remove
// an empty file or directory.
func (obj *FileRes) stateCheckApply(apply bool) (bool, error) {
if obj.State == "" { // state is not specified
if obj.State == FileStateUndefined { // state is not specified
return true, nil
}
@@ -601,11 +652,11 @@ func (obj *FileRes) stateCheckApply(apply bool) (bool, error) {
return false, errwrap.Wrapf(err, "could not stat file")
}
if obj.State == "absent" && os.IsNotExist(err) {
if obj.State == FileStateAbsent && os.IsNotExist(err) {
return true, nil
}
if obj.State == "exists" && err == nil {
if obj.State == FileStateExists && err == nil {
return true, nil
}
@@ -614,153 +665,107 @@ func (obj *FileRes) stateCheckApply(apply bool) (bool, error) {
return false, nil
}
if obj.State == "absent" {
return false, nil // defer the work to contentCheckApply
}
if obj.Content == nil && !obj.isDir() {
// Create an empty file to ensure one exists. Don't O_TRUNC it,
// in case one is magically created right after our exists test.
// The chmod used is what is used by the os.Create function.
// TODO: is using O_EXCL okay?
f, err := os.OpenFile(obj.getPath(), os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0666)
if err != nil {
return false, errwrap.Wrapf(err, "problem creating empty file")
if obj.State == FileStateAbsent { // remove
p := obj.getPath()
if p == "" {
// programming error?
return false, fmt.Errorf("can't remove empty path") // safety
}
if err := f.Close(); err != nil {
return false, errwrap.Wrapf(err, "problem closing empty file")
}
}
return false, nil // defer the Content != nil and isDir work to later...
}
// contentCheckApply performs a CheckApply for the file existence and content.
func (obj *FileRes) contentCheckApply(apply bool) (bool, error) {
obj.init.Logf("contentCheckApply(%t)", apply)
if obj.State == "absent" {
if _, err := os.Stat(obj.getPath()); os.IsNotExist(err) {
// no such file or directory, but
// file should be missing, phew :)
return true, nil
} else if err != nil { // what could this error be?
return false, err
}
// state is not okay, no work done, exit, but without error
if !apply {
return false, nil
}
// apply portion
if obj.getPath() == "" || obj.getPath() == "/" {
if p == "/" {
return false, fmt.Errorf("don't want to remove root") // safety
}
obj.init.Logf("contentCheckApply: removing: %s", obj.getPath())
obj.init.Logf("stateCheckApply: removing: %s", p)
// FIXME: respect obj.Recurse here...
// TODO: add recurse limit here
err := os.RemoveAll(obj.getPath()) // dangerous ;)
return false, err // either nil or not
err := os.RemoveAll(p) // dangerous ;)
return false, err // either nil or not
}
if obj.isDir() && obj.Source == "" {
// we need to make a file or a directory now
if obj.isDir() {
return obj.dirCheckApply(apply)
}
// Optimization: we shouldn't even look at obj.Content here, but we can
// skip this empty file creation since we know fileCheckApply will create
// the file when the content is written anyway. This way we save the extra
// fopen noise.
if obj.Content != nil {
return false, nil // pretend we actually made it
}
// Create an empty file to ensure one exists. Don't O_TRUNC it, in case
// one is magically created right after our exists test. The chmod used
// is what is used by the os.Create function.
// TODO: is using O_EXCL okay?
f, err := os.OpenFile(obj.getPath(), os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0666)
if err != nil {
return false, errwrap.Wrapf(err, "problem creating empty file")
}
if err := f.Close(); err != nil {
return false, errwrap.Wrapf(err, "problem closing empty file")
}
return false, nil // defer the Content != nil work to later...
}
// contentCheckApply performs a CheckApply for the file content.
func (obj *FileRes) contentCheckApply(apply bool) (bool, error) {
obj.init.Logf("contentCheckApply(%t)", apply)
// content is not defined, leave it alone...
if obj.Content == nil && obj.Source == "" {
if obj.Content == nil {
return true, nil
}
if obj.Source == "" { // do the obj.Content checks first...
bufferSrc := bytes.NewReader([]byte(*obj.Content))
sha256sum, checkOK, err := obj.fileCheckApply(apply, bufferSrc, obj.getPath(), obj.sha256sum)
if sha256sum != "" { // empty values mean errored or didn't hash
// this can be valid even when the whole function errors
obj.sha256sum = sha256sum // cache value
}
if err != nil {
return false, err
}
// if no err, but !ok, then...
return checkOK, nil // success
bufferSrc := bytes.NewReader([]byte(*obj.Content))
sha256sum, checkOK, err := obj.fileCheckApply(apply, bufferSrc, obj.getPath(), obj.sha256sum)
if sha256sum != "" { // empty values mean errored or didn't hash
// this can be valid even when the whole function errors
obj.sha256sum = sha256sum // cache value
}
if err != nil {
return false, err
}
// if no err, but !ok, then...
return checkOK, nil // success
}
// sourceCheckApply performs a CheckApply for the file source.
func (obj *FileRes) sourceCheckApply(apply bool) (bool, error) {
obj.init.Logf("sourceCheckApply(%t)", apply)
// source is not defined, leave it alone...
if obj.Source == "" {
return true, nil
}
checkOK, err := obj.syncCheckApply(apply, obj.Source, obj.getPath())
if err != nil {
obj.init.Logf("syncCheckApply: Error: %v", err)
obj.init.Logf("syncCheckApply: error: %v", err)
return false, err
}
return checkOK, nil
}
// chmodCheckApply performs a CheckApply for the file permissions.
func (obj *FileRes) chmodCheckApply(apply bool) (bool, error) {
obj.init.Logf("chmodCheckApply(%t)", apply)
if obj.State == "absent" {
// file is absent
return true, nil
}
if obj.Mode == "" {
// no mode specified, everything is ok
return true, nil
}
mode, err := obj.mode()
// If the file does not exist and we are in
// noop mode, do not throw an error.
if os.IsNotExist(err) && !apply {
return false, nil
}
if err != nil {
return false, err
}
fileInfo, err := os.Stat(obj.getPath())
if err != nil {
return false, err
}
// nothing to do
if fileInfo.Mode() == mode {
return true, nil
}
// not clean but don't apply
if !apply {
return false, nil
}
err = os.Chmod(obj.getPath(), mode)
return false, err
}
// chownCheckApply performs a CheckApply for the file ownership.
func (obj *FileRes) chownCheckApply(apply bool) (bool, error) {
var expectedUID, expectedGID int
obj.init.Logf("chownCheckApply(%t)", apply)
if obj.State == "absent" {
// file is absent or no owner specified
if obj.Owner == "" && obj.Group == "" {
// no owner or group specified, everything is ok
return true, nil
}
fileInfo, err := os.Stat(obj.getPath())
// If the file does not exist and we are in
// noop mode, do not throw an error.
if os.IsNotExist(err) && !apply {
return false, nil
}
if err != nil {
// TODO: is this a sane behaviour that we want to preserve?
// If the file does not exist and we are in noop mode, do not throw an
// error.
//if os.IsNotExist(err) && !apply {
// return false, nil
//}
if err != nil { // if the file does not exist, it's correct to error!
return false, err
}
@@ -770,6 +775,8 @@ func (obj *FileRes) chownCheckApply(apply bool) (bool, error) {
return false, fmt.Errorf("can't set Owner or Group on this platform")
}
var expectedUID, expectedGID int
if obj.Owner != "" {
expectedUID, err = engineUtil.GetUID(obj.Owner)
if err != nil {
@@ -779,7 +786,6 @@ func (obj *FileRes) chownCheckApply(apply bool) (bool, error) {
// nothing specified, no changes to be made, expect same as actual
expectedUID = int(stUnix.Uid)
}
if obj.Group != "" {
expectedGID, err = engineUtil.GetGID(obj.Group)
if err != nil {
@@ -803,6 +809,38 @@ func (obj *FileRes) chownCheckApply(apply bool) (bool, error) {
return false, os.Chown(obj.getPath(), expectedUID, expectedGID)
}
// chmodCheckApply performs a CheckApply for the file permissions.
func (obj *FileRes) chmodCheckApply(apply bool) (bool, error) {
obj.init.Logf("chmodCheckApply(%t)", apply)
if obj.Mode == "" {
// no mode specified, everything is ok
return true, nil
}
mode, err := obj.mode() // get the desired mode
if err != nil {
return false, err
}
fileInfo, err := os.Stat(obj.getPath())
if err != nil { // if the file does not exist, it's correct to error!
return false, err
}
// nothing to do
if fileInfo.Mode() == mode {
return true, nil
}
// not clean but don't apply
if !apply {
return false, nil
}
return false, os.Chmod(obj.getPath(), mode)
}
// CheckApply checks the resource state and applies the resource if the bool
// input is true. It returns error info and if the state check passed or not.
func (obj *FileRes) CheckApply(apply bool) (bool, error) {
@@ -820,7 +858,7 @@ func (obj *FileRes) CheckApply(apply bool) (bool, error) {
checkOK := true
// always run stateCheckApply before contentCheckApply, they go together
// run stateCheckApply before contentCheckApply and sourceCheckApply
if c, err := obj.stateCheckApply(apply); err != nil {
return false, err
} else if !c {
@@ -831,8 +869,7 @@ func (obj *FileRes) CheckApply(apply bool) (bool, error) {
} else if !c {
checkOK = false
}
if c, err := obj.chmodCheckApply(apply); err != nil {
if c, err := obj.sourceCheckApply(apply); err != nil {
return false, err
} else if !c {
checkOK = false
@@ -843,6 +880,11 @@ func (obj *FileRes) CheckApply(apply bool) (bool, error) {
} else if !c {
checkOK = false
}
if c, err := obj.chmodCheckApply(apply); err != nil {
return false, err
} else if !c {
checkOK = false
}
return checkOK, nil // w00t
}
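CheckApply (partially shown above) aggregates the per-aspect helpers with one shared idiom: any error aborts immediately, while a single false result marks the whole pass as having done (or needing) work. A standalone sketch of that idiom, with invented helper names:

package main

import "fmt"

// check matches the shape of the per-aspect helpers (state, content,
// source, chown, chmod): it reports whether things were already correct.
type check func(apply bool) (bool, error)

// checkApplyAll mirrors the aggregation used in CheckApply above.
func checkApplyAll(apply bool, checks ...check) (bool, error) {
	checkOK := true
	for _, c := range checks {
		ok, err := c(apply)
		if err != nil {
			return false, err
		}
		if !ok { // don't let subsequent passes make this true
			checkOK = false
		}
	}
	return checkOK, nil
}

func main() {
	clean := func(bool) (bool, error) { return true, nil }
	dirty := func(bool) (bool, error) { return false, nil } // pretend we changed something
	ok, err := checkApplyAll(true, clean, dirty, clean)
	fmt.Println(ok, err) // false <nil>
}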
@@ -860,6 +902,11 @@ func (obj *FileRes) Cmp(r engine.Res) error {
if obj.getPath() != res.getPath() {
return fmt.Errorf("the Path differs")
}
if obj.State != res.State {
return fmt.Errorf("the State differs")
}
if (obj.Content == nil) != (res.Content == nil) { // xor
return fmt.Errorf("the Content differs")
}
@@ -871,9 +918,6 @@ func (obj *FileRes) Cmp(r engine.Res) error {
if obj.Source != res.Source {
return fmt.Errorf("the Source differs")
}
if obj.State != res.State {
return fmt.Errorf("the State differs")
}
if obj.Owner != res.Owner {
return fmt.Errorf("the Owner differs")
@@ -1023,6 +1067,130 @@ func (obj *FileRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
return nil
}
// Copy copies the resource. Don't call it directly, use engine.ResCopy instead.
// TODO: should this copy internal state?
func (obj *FileRes) Copy() engine.CopyableRes {
var content *string
if obj.Content != nil { // copy the string contents, not the pointer...
s := *obj.Content
content = &s
}
return &FileRes{
Path: obj.Path,
Dirname: obj.Dirname,
Basename: obj.Basename,
State: obj.State, // TODO: if this becomes a pointer, copy the string!
Content: content,
Source: obj.Source,
Owner: obj.Owner,
Group: obj.Group,
Mode: obj.Mode,
Recurse: obj.Recurse,
Force: obj.Force,
}
}
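Copy deliberately duplicates the string behind Content rather than reusing the pointer. A small standalone sketch of why that matters; the variable names are invented:

package main

import "fmt"

func main() {
	s := "original"
	shared := &s // copying the pointer: both sides alias the same variable

	deep := new(string)
	*deep = s // copying the value: an independent snapshot, as Copy() does

	s = "mutated" // a later change to the original

	fmt.Println(*shared) // mutated  (the alias saw the change)
	fmt.Println(*deep)   // original (the deep copy did not)
}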
// Reversed returns the "reverse" or "reciprocal" resource. This is used to
// "clean" up after a previously defined resource has been removed.
func (obj *FileRes) Reversed() (engine.ReversibleRes, error) {
// NOTE: Previously, we did some more complicated management of reversed
// properties. For example, we could add mode and state even when they
// weren't originally specified. This code has now been simplified to
// avoid this complexity, because it's not really necessary, and it is
// somewhat illogical anyways.
// TODO: reversing this could be tricky, since we'd store it all
if obj.isDir() { // XXX: limit this error to a defined state or content?
return nil, fmt.Errorf("can't reverse a dir yet")
}
cp, err := engine.ResCopy(obj)
if err != nil {
return nil, errwrap.Wrapf(err, "could not copy")
}
rev, ok := cp.(engine.ReversibleRes)
if !ok {
return nil, fmt.Errorf("not reversible")
}
rev.ReversibleMeta().Disabled = true // the reverse shouldn't run again
res, ok := cp.(*FileRes)
if !ok {
return nil, fmt.Errorf("copied res was not our kind")
}
// these are already copied in, and we don't need to change them...
//res.Path = obj.Path
//res.Dirname = obj.Dirname
//res.Basename = obj.Basename
if obj.State == FileStateExists {
res.State = FileStateAbsent
}
if obj.State == FileStateAbsent {
res.State = FileStateExists
}
// If we've specified content, we might need to restore the original, OR
// if we're removing the file with a `state => "absent"`, save it too...
// The `res.State != FileStateAbsent` check is an optional optimization.
if (obj.Content != nil || obj.State == FileStateAbsent) && res.State != FileStateAbsent {
content, err := ioutil.ReadFile(obj.getPath())
if err != nil && !os.IsNotExist(err) {
return nil, errwrap.Wrapf(err, "could not read file for reversal storage")
}
res.Content = nil
if err == nil {
str := string(content)
res.Content = &str // set contents
}
}
if res.State == FileStateAbsent { // can't specify content when absent!
res.Content = nil
}
//res.Source = "" // XXX: what should we do with this?
if obj.Source != "" {
return nil, fmt.Errorf("can't reverse with Source yet")
}
// There is a race if the operating system is adding/changing/removing
// the file between the ioutil.ReadFile call at the top and here. If there
// is a discrepancy between the two, then you might get an unexpected
// reverse, but expecting a consistent snapshot in that scenario is
// unreasonable. This is a user error, and not an issue we actually care
// about, afaict.
fileInfo, err := os.Stat(obj.getPath())
if err != nil && !os.IsNotExist(err) {
return nil, errwrap.Wrapf(err, "could not stat file for reversal information")
}
res.Owner = ""
res.Group = ""
res.Mode = ""
if err == nil {
stUnix, ok := fileInfo.Sys().(*syscall.Stat_t)
// XXX: add a !ok error scenario or some alternative?
if ok { // if not, this isn't unix
if obj.Owner != "" {
res.Owner = strconv.FormatInt(int64(stUnix.Uid), 10) // Uid is a uint32
}
if obj.Group != "" {
res.Group = strconv.FormatInt(int64(stUnix.Gid), 10) // Gid is a uint32
}
}
// TODO: use Mode().String() when we support full rwx style mode specs!
if obj.Mode != "" {
res.Mode = fmt.Sprintf("%#o", fileInfo.Mode().Perm()) // 0400, 0777, etc.
}
}
// these are already copied in, and we don't need to change them...
//res.Recurse = obj.Recurse
//res.Force = obj.Force
return res, nil
}
// smartPath adds a trailing slash to the path if it is a directory.
func smartPath(fileInfo os.FileInfo) string {
smartPath := fileInfo.Name() // absolute path


@@ -78,7 +78,7 @@ func TestMiscEncodeDecode1(t *testing.T) {
e := gob.NewEncoder(&b1)
err = e.Encode(&input) // pass with &
if err != nil {
t.Errorf("Gob failed to Encode: %v", err)
t.Errorf("gob failed to Encode: %v", err)
}
str := base64.StdEncoding.EncodeToString(b1.Bytes())
@@ -86,27 +86,27 @@ func TestMiscEncodeDecode1(t *testing.T) {
var output interface{}
bb, err := base64.StdEncoding.DecodeString(str)
if err != nil {
t.Errorf("Base64 failed to Decode: %v", err)
t.Errorf("base64 failed to Decode: %v", err)
}
b2 := bytes.NewBuffer(bb)
d := gob.NewDecoder(b2)
err = d.Decode(&output) // pass with &
if err != nil {
t.Errorf("Gob failed to Decode: %v", err)
t.Errorf("gob failed to Decode: %v", err)
}
res1, ok := input.(engine.Res)
if !ok {
t.Errorf("Input %v is not a Res", res1)
t.Errorf("input %v is not a Res", res1)
return
}
res2, ok := output.(engine.Res)
if !ok {
t.Errorf("Output %v is not a Res", res2)
t.Errorf("output %v is not a Res", res2)
return
}
if err := res1.Cmp(res2); err != nil {
t.Errorf("The input and output Res values do not match: %+v", err)
t.Errorf("the input and output Res values do not match: %+v", err)
}
}
@@ -116,7 +116,7 @@ func TestMiscEncodeDecode2(t *testing.T) {
// encode
input, err := engine.NewNamedResource("file", "file1")
if err != nil {
t.Errorf("Can't create: %v", err)
t.Errorf("can't create: %v", err)
return
}
// NOTE: Do not add this bit of code, because it would cause the path to
@@ -128,29 +128,29 @@ func TestMiscEncodeDecode2(t *testing.T) {
b64, err := engineUtil.ResToB64(input)
if err != nil {
t.Errorf("Can't encode: %v", err)
t.Errorf("can't encode: %v", err)
return
}
output, err := engineUtil.B64ToRes(b64)
if err != nil {
t.Errorf("Can't decode: %v", err)
t.Errorf("can't decode: %v", err)
return
}
res1, ok := input.(engine.Res)
if !ok {
t.Errorf("Input %v is not a Res", res1)
t.Errorf("input %v is not a Res", res1)
return
}
res2, ok := output.(engine.Res)
if !ok {
t.Errorf("Output %v is not a Res", res2)
t.Errorf("output %v is not a Res", res2)
return
}
// this uses the standalone file cmp function
if err := res1.Cmp(res2); err != nil {
t.Errorf("The input and output Res values do not match: %+v", err)
t.Errorf("the input and output Res values do not match: %+v", err)
}
}
@@ -160,7 +160,7 @@ func TestMiscEncodeDecode3(t *testing.T) {
// encode
input, err := engine.NewNamedResource("file", "file1")
if err != nil {
t.Errorf("Can't create: %v", err)
t.Errorf("can't create: %v", err)
return
}
fileRes := input.(*FileRes) // must not panic
@@ -169,29 +169,82 @@ func TestMiscEncodeDecode3(t *testing.T) {
b64, err := engineUtil.ResToB64(input)
if err != nil {
t.Errorf("Can't encode: %v", err)
t.Errorf("can't encode: %v", err)
return
}
output, err := engineUtil.B64ToRes(b64)
if err != nil {
t.Errorf("Can't decode: %v", err)
t.Errorf("can't decode: %v", err)
return
}
res1, ok := input.(engine.Res)
if !ok {
t.Errorf("Input %v is not a Res", res1)
t.Errorf("input %v is not a Res", res1)
return
}
res2, ok := output.(engine.Res)
if !ok {
t.Errorf("Output %v is not a Res", res2)
t.Errorf("output %v is not a Res", res2)
return
}
// this uses the more complete, engine cmp function
if err := engine.ResCmp(res1, res2); err != nil {
t.Errorf("The input and output Res values do not match: %+v", err)
t.Errorf("the input and output Res values do not match: %+v", err)
}
}
func TestMiscEncodeDecode4(t *testing.T) {
var err error
const (
Kind = "file"
Name = "file1"
)
// encode
input, err := engine.NewNamedResource(Kind, Name)
if err != nil {
t.Errorf("can't create: %v", err)
return
}
fileRes := input.(*FileRes) // must not panic
fileRes.Path = "/tmp/whatever"
// TODO: add other params/traits/etc here!
b64, err := engineUtil.ResToB64(input)
if err != nil {
t.Errorf("can't encode: %v", err)
return
}
output, err := engineUtil.B64ToRes(b64)
if err != nil {
t.Errorf("can't decode: %v", err)
return
}
res1, ok := input.(engine.Res)
if !ok {
t.Errorf("input %v is not a Res", res1)
return
}
res2, ok := output.(engine.Res)
if !ok {
t.Errorf("output %v is not a Res", res2)
return
}
// this uses the more complete, engine cmp function
if err := engine.ResCmp(res1, res2); err != nil {
t.Errorf("the input and output Res values do not match: %+v", err)
}
// ensure the kind and name are correctly decoded too!
if kind := res2.Kind(); kind != Kind {
t.Errorf("the output kind was `%s`, expected `%s`", kind, Kind)
}
if name := res2.Name(); name != Name {
t.Errorf("the output name was `%s`, expected `%s`", name, Name)
}
}


@@ -58,7 +58,7 @@ func (obj *GroupRes) Default() engine.Res {
// Validate if the params passed in are valid data.
func (obj *GroupRes) Validate() error {
if obj.State != "exists" && obj.State != "absent" {
return fmt.Errorf("State must be 'exists' or 'absent'")
return fmt.Errorf("state must be 'exists' or 'absent'")
}
return nil
}
@@ -220,32 +220,24 @@ func (obj *GroupRes) CheckApply(apply bool) (bool, error) {
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *GroupRes) Cmp(r engine.Res) error {
if !obj.Compare(r) {
return fmt.Errorf("did not compare")
}
return nil
}
// Compare two resources and return if they are equivalent.
func (obj *GroupRes) Compare(r engine.Res) bool {
// we can only compare GroupRes to others of the same resource kind
res, ok := r.(*GroupRes)
if !ok {
return false
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.State != res.State {
return false
return fmt.Errorf("the State differs")
}
if (obj.GID == nil) != (res.GID == nil) {
return false
return fmt.Errorf("the GID differs")
}
if obj.GID != nil && res.GID != nil {
if *obj.GID != *res.GID {
return false
return fmt.Errorf("the GID differs")
}
}
return true
return nil
}
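This hunk is part of the wider Compare-to-Cmp change: instead of a bare boolean, Cmp now returns an error naming the field that differs. If a caller only needs the old yes/no answer, a hedged sketch like the following would recover it; the function name and package placement are assumptions:

package resources // assumption: any package that imports the engine package

import "github.com/purpleidea/mgmt/engine"

// equivalent recovers the old boolean Compare-style answer from Cmp.
func equivalent(a, b engine.Res) bool {
	return a.Cmp(b) == nil // a non-nil error now says what differs
}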
// GroupUID is the UID struct for GroupRes.


@@ -219,31 +219,23 @@ func (obj *HostnameRes) CheckApply(apply bool) (bool, error) {
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *HostnameRes) Cmp(r engine.Res) error {
if !obj.Compare(r) {
return fmt.Errorf("did not compare")
}
return nil
}
// Compare two resources and return if they are equivalent.
func (obj *HostnameRes) Compare(r engine.Res) bool {
// we can only compare HostnameRes to others of the same resource kind
res, ok := r.(*HostnameRes)
if !ok {
return false
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.PrettyHostname != res.PrettyHostname {
return false
return fmt.Errorf("the PrettyHostname differs")
}
if obj.StaticHostname != res.StaticHostname {
return false
return fmt.Errorf("the StaticHostname differs")
}
if obj.TransientHostname != res.TransientHostname {
return false
return fmt.Errorf("the TransientHostname differs")
}
return true
return nil
}
// HostnameUID is the UID struct for HostnameRes.


@@ -200,36 +200,28 @@ func (obj *MsgRes) CheckApply(apply bool) (bool, error) {
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *MsgRes) Cmp(r engine.Res) error {
if !obj.Compare(r) {
return fmt.Errorf("did not compare")
}
return nil
}
// Compare two resources and return if they are equivalent.
func (obj *MsgRes) Compare(r engine.Res) bool {
// we can only compare MsgRes to others of the same resource kind
res, ok := r.(*MsgRes)
if !ok {
return false
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.Body != res.Body {
return false
return fmt.Errorf("the Body differs")
}
if obj.Priority != res.Priority {
return false
return fmt.Errorf("the Priority differs")
}
if len(obj.Fields) != len(res.Fields) {
return false
return fmt.Errorf("the length of Fields differs")
}
for field, value := range obj.Fields {
if res.Fields[field] != value {
return false
return fmt.Errorf("the Fields differ")
}
}
return true
return nil
}
// MsgUID is a unique representation for a MsgRes object.


@@ -506,34 +506,26 @@ func (obj *NetRes) CheckApply(apply bool) (bool, error) {
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *NetRes) Cmp(r engine.Res) error {
if !obj.Compare(r) {
return fmt.Errorf("did not compare")
}
return nil
}
// Compare two resources and return if they are equivalent.
func (obj *NetRes) Compare(r engine.Res) bool {
// we can only compare NetRes to others of the same resource kind
res, ok := r.(*NetRes)
if !ok {
return false
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.State != res.State {
return false
return fmt.Errorf("the State differs")
}
if (obj.Addrs == nil) != (res.Addrs == nil) {
return false
return fmt.Errorf("the Addrs differ")
}
if err := util.SortedStrSliceCompare(obj.Addrs, res.Addrs); err != nil {
return false
return fmt.Errorf("the Addrs differ")
}
if obj.Gateway != res.Gateway {
return false
return fmt.Errorf("the Gateway differs")
}
return true
return nil
}
// NetUID is a unique resource identifier.


@@ -261,35 +261,27 @@ func (obj *NspawnRes) CheckApply(apply bool) (bool, error) {
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *NspawnRes) Cmp(r engine.Res) error {
if !obj.Compare(r) {
return fmt.Errorf("did not compare")
}
return nil
}
// Compare two resources and return if they are equivalent.
func (obj *NspawnRes) Compare(r engine.Res) bool {
// we can only compare NspawnRes to others of the same resource kind
res, ok := r.(*NspawnRes)
if !ok {
return false
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.State != res.State {
return false
return fmt.Errorf("the State differs")
}
// TODO: why is res.svc ever nil?
if (obj.svc == nil) != (res.svc == nil) { // xor
return false
return fmt.Errorf("the svc differs")
}
if obj.svc != nil && res.svc != nil {
if !obj.svc.Compare(res.svc) {
return false
if err := obj.svc.Cmp(res.svc); err != nil {
return errwrap.Wrapf(err, "the svc differs")
}
}
return true
return nil
}
// NspawnUID is a unique resource identifier.


@@ -352,7 +352,7 @@ loop:
// should already be broken
break loop
} else {
return []string{}, fmt.Errorf("PackageKit: Error: %v", signal.Body)
return []string{}, fmt.Errorf("error in body: %v", signal.Body)
}
}
}
@@ -363,9 +363,9 @@ loop:
func (obj *Conn) IsInstalledList(packages []string) ([]bool, error) {
var filter uint64 // initializes at the "zero" value of 0
filter += PkFilterEnumArch // always search in our arch
packageIDs, e := obj.ResolvePackages(packages, filter)
if e != nil {
return nil, fmt.Errorf("ResolvePackages error: %v", e)
packageIDs, err := obj.ResolvePackages(packages, filter)
if err != nil {
return nil, errwrap.Wrapf(err, "error resolving packages")
}
var m = make(map[string]int)
@@ -443,7 +443,7 @@ loop:
}
if signal.Name == FmtTransactionMethod("ErrorCode") {
return fmt.Errorf("PackageKit: Error: %v", signal.Body)
return fmt.Errorf("error in body: %v", signal.Body)
} else if signal.Name == FmtTransactionMethod("Package") {
// a package was installed...
// only start the timer once we're here...
@@ -454,14 +454,14 @@ loop:
} else if signal.Name == FmtTransactionMethod("Destroy") {
return nil // success
} else {
return fmt.Errorf("PackageKit: Error: %v", signal.Body)
return fmt.Errorf("error in body: %v", signal.Body)
}
case <-util.TimeAfterOrBlock(timeout):
if finished {
obj.Logf("Timeout: InstallPackages: Waiting for 'Destroy'")
return nil // got tired of waiting for Destroy
}
return fmt.Errorf("PackageKit: Timeout: InstallPackages: %s", strings.Join(packageIDs, ", "))
return fmt.Errorf("timeout installing packages: %s", strings.Join(packageIDs, ", "))
}
}
}
@@ -500,7 +500,7 @@ loop:
}
if signal.Name == FmtTransactionMethod("ErrorCode") {
return fmt.Errorf("PackageKit: Error: %v", signal.Body)
return fmt.Errorf("error in body: %v", signal.Body)
} else if signal.Name == FmtTransactionMethod("Package") {
// a package was installed...
continue loop
@@ -511,7 +511,7 @@ loop:
// should already be broken
break loop
} else {
return fmt.Errorf("PackageKit: Error: %v", signal.Body)
return fmt.Errorf("error in body: %v", signal.Body)
}
}
}
@@ -549,7 +549,7 @@ loop:
}
if signal.Name == FmtTransactionMethod("ErrorCode") {
return fmt.Errorf("PackageKit: Error: %v", signal.Body)
return fmt.Errorf("error in body: %v", signal.Body)
} else if signal.Name == FmtTransactionMethod("Package") {
} else if signal.Name == FmtTransactionMethod("Finished") {
// TODO: should we wait for the Destroy signal?
@@ -558,7 +558,7 @@ loop:
// should already be broken
break loop
} else {
return fmt.Errorf("PackageKit: Error: %v", signal.Body)
return fmt.Errorf("error in body: %v", signal.Body)
}
}
}
@@ -601,7 +601,7 @@ loop:
}
if signal.Name == FmtTransactionMethod("ErrorCode") {
err = fmt.Errorf("PackageKit: Error: %v", signal.Body)
err = fmt.Errorf("error in body: %v", signal.Body)
return
// one signal returned per packageID found...
@@ -626,7 +626,7 @@ loop:
// should already be broken
break loop
} else {
err = fmt.Errorf("PackageKit: Error: %v", signal.Body)
err = fmt.Errorf("error in body: %v", signal.Body)
return
}
}
@@ -669,7 +669,7 @@ loop:
}
if signal.Name == FmtTransactionMethod("ErrorCode") {
return nil, fmt.Errorf("PackageKit: Error: %v", signal.Body)
return nil, fmt.Errorf("error in body: %v", signal.Body)
} else if signal.Name == FmtTransactionMethod("Package") {
//pkg_int, ok := signal.Body[0].(int)
@@ -692,7 +692,7 @@ loop:
// should already be broken
break loop
} else {
return nil, fmt.Errorf("PackageKit: Error: %v", signal.Body)
return nil, fmt.Errorf("error in body: %v", signal.Body)
}
}
}
@@ -718,9 +718,9 @@ func (obj *Conn) PackagesToPackageIDs(packageMap map[string]string, filter uint6
if obj.Debug {
obj.Logf("PackagesToPackageIDs(): %s", strings.Join(packages, ", "))
}
resolved, e := obj.ResolvePackages(packages, filter)
if e != nil {
return nil, fmt.Errorf("Resolve error: %v", e)
resolved, err := obj.ResolvePackages(packages, filter)
if err != nil {
return nil, errwrap.Wrapf(err, "error resolving")
}
found := make([]bool, count) // default false
@@ -758,7 +758,7 @@ func (obj *Conn) PackagesToPackageIDs(packageMap map[string]string, filter uint6
}
state := packageMap[pkg] // lookup the requested state/version
if state == "" {
return nil, fmt.Errorf("Empty package state for %v", pkg)
return nil, fmt.Errorf("empty package state for: `%s`", pkg)
}
found[index] = true
stateIsVersion := (state != "installed" && state != "uninstalled" && state != "newest") // must be a ver. string
@@ -794,9 +794,9 @@ func (obj *Conn) PackagesToPackageIDs(packageMap map[string]string, filter uint6
// to be done, and if so, anything that needs updating isn't newest!
// if something isn't installed, we can't verify it with this method
// FIXME: https://github.com/hughsie/PackageKit/issues/116
updates, e := obj.GetUpdates(filter)
if e != nil {
return nil, fmt.Errorf("Updates error: %v", e)
updates, err := obj.GetUpdates(filter)
if err != nil {
return nil, errwrap.Wrapf(err, "updates error")
}
for _, packageID := range updates {
//obj.Logf("* %v", packageID)
@@ -844,9 +844,9 @@ func (obj *Conn) PackagesToPackageIDs(packageMap map[string]string, filter uint6
if obj.Debug {
obj.Logf("PackagesToPackageIDs(): Recurse: %s", strings.Join(checkPackages, ", "))
}
recursion, e = obj.PackagesToPackageIDs(filteredPackageMap, filter+PkFilterEnumNewest)
if e != nil {
return nil, fmt.Errorf("Recursion error: %v", e)
recursion, err = obj.PackagesToPackageIDs(filteredPackageMap, filter+PkFilterEnumNewest)
if err != nil {
return nil, errwrap.Wrapf(err, "recursion error")
}
}
}
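The error-handling changes in this file follow one pattern: replace fmt.Errorf formatting of a cause with errwrap.Wrapf plus a short context message. A hedged, standalone illustration of the two shapes; the resolve function and message text are invented:

package main

import (
	"fmt"

	"github.com/purpleidea/mgmt/util/errwrap"
)

// resolve is a hypothetical stand-in for any call in this file that can fail.
func resolve() error { return fmt.Errorf("boom") }

func main() {
	if err := resolve(); err != nil {
		e1 := fmt.Errorf("Resolve error: %v", err)  // old style: flatten the cause into a string
		e2 := errwrap.Wrapf(err, "error resolving") // new style: add context with errwrap
		fmt.Println(e1)
		fmt.Println(e2)
	}
}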


@@ -295,33 +295,25 @@ func (obj *PasswordRes) CheckApply(apply bool) (bool, error) {
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *PasswordRes) Cmp(r engine.Res) error {
if !obj.Compare(r) {
return fmt.Errorf("did not compare")
}
return nil
}
// Compare two resources and return if they are equivalent.
func (obj *PasswordRes) Compare(r engine.Res) bool {
// we can only compare PasswordRes to others of the same resource kind
res, ok := r.(*PasswordRes)
if !ok {
return false
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.Length != res.Length {
return false
return fmt.Errorf("the Length differs")
}
// TODO: we *could* optimize by allowing CheckApply to move from
// saved->!saved, by removing the file, but not likely worth it!
if obj.Saved != res.Saved {
return false
return fmt.Errorf("the Saved differs")
}
if obj.CheckRecovery != res.CheckRecovery {
return false
return fmt.Errorf("the CheckRecovery differs")
}
return true
return nil
}
// PasswordUID is the UID struct for PasswordRes.


@@ -115,24 +115,16 @@ func (obj *PrintRes) CheckApply(apply bool) (bool, error) {
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *PrintRes) Cmp(r engine.Res) error {
if !obj.Compare(r) {
return fmt.Errorf("did not compare")
}
return nil
}
// Compare two resources and return if they are equivalent.
func (obj *PrintRes) Compare(r engine.Res) bool {
// we can only compare PrintRes to others of the same resource kind
res, ok := r.(*PrintRes)
if !ok {
return false
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.Msg != res.Msg {
return false
return fmt.Errorf("the Msg differs")
}
return true
return nil
}
// PrintUID is the UID struct for PrintRes.


@@ -30,6 +30,7 @@ import (
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/util"
"github.com/purpleidea/mgmt/util/errwrap"
)
// TODO: consider providing this as a lib so that we can add tests into the
@@ -152,6 +153,45 @@ func NewClearChangedStep(ms uint) Step {
}
}
// FileExpect takes a path and a string to expect in that file, and builds a
// Step that checks the file contents against that string.
func FileExpect(p, s string) Step { // path & string
return &manualStep{
action: func() error { return nil },
expect: func() error {
content, err := ioutil.ReadFile(p)
if err != nil {
return err
}
if string(content) != s {
return fmt.Errorf("contents did not match in %s", p)
}
return nil
},
}
}
// FileWrite takes a path and a string to write to that file, and builds a
// Step that performs the write.
func FileWrite(p, s string) Step { // path & string
return &manualStep{
action: func() error {
// TODO: apparently using 0666 is equivalent to respecting the current umask
const umask = 0666
return ioutil.WriteFile(p, []byte(s), umask)
},
expect: func() error { return nil },
}
}
// ErrIsNotExistOK returns nil if we get an IsNotExist true result on the error.
func ErrIsNotExistOK(e error) error {
if os.IsNotExist(e) {
return nil
}
return errwrap.Wrapf(e, "unexpected error")
}
func TestResources1(t *testing.T) {
type test struct { // an individual test
name string
@@ -177,31 +217,6 @@ func TestResources1(t *testing.T) {
expect: func() error { return nil },
}
}
fileExpect := func(p, s string) Step { // path & string
return &manualStep{
action: func() error { return nil },
expect: func() error {
content, err := ioutil.ReadFile(p)
if err != nil {
return err
}
if string(content) != s {
return fmt.Errorf("contents did not match in %s", p)
}
return nil
},
}
}
fileWrite := func(p, s string) Step { // path & string
return &manualStep{
action: func() error {
// TODO: apparently using 0666 is equivalent to respecting the current umask
const umask = 0666
return ioutil.WriteFile(p, []byte(s), umask)
},
expect: func() error { return nil },
}
}
testCases := []test{}
{
@@ -210,17 +225,18 @@ func TestResources1(t *testing.T) {
p := "/tmp/whatever"
s := "hello, world\n"
res.Path = p
res.State = "exists"
contents := s
res.Content = &contents
timeline := []Step{
NewStartupStep(1000 * 60), // startup
NewChangedStep(1000*60, false), // did we do something?
fileExpect(p, s), // check initial state
FileExpect(p, s), // check initial state
NewClearChangedStep(1000 * 15), // did we do something?
fileWrite(p, "this is whatever\n"), // change state
FileWrite(p, "this is whatever\n"), // change state
NewChangedStep(1000*60, false), // did we do something?
fileExpect(p, s), // check again
FileExpect(p, s), // check again
sleep(1), // we can sleep too!
}
@@ -249,11 +265,11 @@ func TestResources1(t *testing.T) {
timeline := []Step{
NewStartupStep(1000 * 60), // startup
NewChangedStep(1000*60, false), // did we do something?
fileExpect(f, s+"\n"), // check initial state
FileExpect(f, s+"\n"), // check initial state
NewClearChangedStep(1000 * 15), // did we do something?
fileWrite(f, "this is stuff!\n"), // change state
FileWrite(f, "this is stuff!\n"), // change state
NewChangedStep(1000*60, false), // did we do something?
fileExpect(f, s+"\n"), // check again
FileExpect(f, s+"\n"), // check again
sleep(1), // we can sleep too!
}
@@ -278,7 +294,7 @@ func TestResources1(t *testing.T) {
timeline := []Step{
NewStartupStep(1000 * 60), // startup
NewChangedStep(1000*60, false), // did we do something?
fileExpect(p, ""), // check initial state
FileExpect(p, ""), // check initial state
NewClearChangedStep(1000 * 15), // did we do something?
}
@@ -303,7 +319,7 @@ func TestResources1(t *testing.T) {
timeline := []Step{
NewStartupStep(1000 * 60), // startup
NewChangedStep(1000*60, true), // did we do something?
fileExpect(p, content), // check initial state
FileExpect(p, content), // check initial state
}
testCases = append(testCases, test{
@@ -372,7 +388,7 @@ func TestResources1(t *testing.T) {
doneChan := make(chan struct{})
debug := testing.Verbose() // set via the -test.v flag to `go test`
logf := func(format string, v ...interface{}) {
t.Logf(fmt.Sprintf("test #%d: Res: ", index)+format, v...)
t.Logf(fmt.Sprintf("test #%d: ", index)+format, v...)
}
init := &engine.Init{
Running: func() {
@@ -548,3 +564,619 @@ func TestResources1(t *testing.T) {
})
}
}
// TestResources2 just tests a partial execution of the resource by running
// CheckApply, Reverse, and other basics without the main loop. It's a less
// accurate representation of a running resource, but is still useful in many
// circumstances. It also uses a simpler timeline, because it was not possible
// to get the reference passing of the reversed resource working with the
// fancier Step-based timeline used in TestResources1.
func TestResources2(t *testing.T) {
type test struct { // an individual test
name string
timeline []func() error // TODO: this could be a generator that keeps pushing out steps until it's done!
expect func() error // function to check for expected state
startup func() error // function to run as startup (unused?)
cleanup func() error // function to run as cleanup
}
// resValidate runs Validate on the res.
resValidate := func(res engine.Res) func() error {
// run Validate
return func() error {
return res.Validate()
}
}
// resInit runs Init on the res.
resInit := func(res engine.Res) func() error {
logf := func(format string, v ...interface{}) {
// noop for now
}
init := &engine.Init{
//Debug: debug,
Logf: logf,
// unused
Send: func(st interface{}) error {
return nil
},
Recv: func() map[string]*engine.Send {
return map[string]*engine.Send{}
},
}
// run Init
return func() error {
return res.Init(init)
}
}
// resCheckApplyError runs CheckApply with noop = false for the res. It
// errors if the returned checkOK value isn't what we were expecting or
// if the errOK function returns an error when given a chance to inspect
// the returned error.
resCheckApplyError := func(res engine.Res, expCheckOK bool, errOK func(e error) error) func() error {
return func() error {
checkOK, err := res.CheckApply(true) // no noop!
if e := errOK(err); e != nil {
return errwrap.Wrapf(e, "error from CheckApply did not match expected")
}
if checkOK != expCheckOK {
return fmt.Errorf("result from CheckApply did not match expected: `%t` != `%t`", checkOK, expCheckOK)
}
return nil
}
}
// resCheckApply runs CheckApply with noop = false for the res. It
// errors if the returned checkOK value isn't what we were expecting or
// if there was an error.
resCheckApply := func(res engine.Res, expCheckOK bool) func() error {
errOK := func(e error) error {
if e == nil {
return nil
}
return errwrap.Wrapf(e, "unexpected error from CheckApply")
}
return resCheckApplyError(res, expCheckOK, errOK)
}
// resClose runs Close on the res.
resClose := func(res engine.Res) func() error {
// run Close
return func() error {
return res.Close()
}
}
// resReversal runs Reverse on the resource and stores the result in the
// rev variable. This should be called before the res CheckApply, and
// usually before Init, but after Validate.
resReversal := func(res engine.Res, rev *engine.Res) func() error {
return func() error {
r, ok := res.(engine.ReversibleRes)
if !ok {
return fmt.Errorf("res is not a ReversibleRes")
}
// We don't really need this to be checked here.
//if r.ReversibleMeta().Disabled {
// return fmt.Errorf("res did not specify Meta:reverse")
//}
if r.ReversibleMeta().Reversal {
//logf("triangle reversal") // warn!
}
reversed, err := r.Reversed()
if err != nil {
return errwrap.Wrapf(err, "could not reverse: %s", r.String())
}
if reversed == nil {
return nil // this can't be reversed, or isn't implemented here
}
reversed.ReversibleMeta().Reversal = true // set this for later...
retRes, ok := reversed.(engine.Res)
if !ok {
return fmt.Errorf("not a Res")
}
*rev = retRes // store!
return nil
}
}
fileWrite := func(p, s string) func() error {
// write the file to path
return func() error {
return ioutil.WriteFile(p, []byte(s), 0666)
}
}
fileExpect := func(p, s string) func() error {
// check the contents at the path match the string we expect
return func() error {
content, err := ioutil.ReadFile(p)
if err != nil {
return err
}
if string(content) != s {
return fmt.Errorf("contents did not match in %s", p)
}
return nil
}
}
fileAbsent := func(p string) func() error {
// check that the file is absent
return func() error {
_, err := os.Stat(p)
if !os.IsNotExist(err) {
return fmt.Errorf("file was supposed to be absent, got: %+v", err)
}
return nil
}
}
fileRemove := func(p string) func() error {
// remove the file at path
return func() error {
err := os.Remove(p)
// if the file isn't there, don't error
if err != nil && !os.IsNotExist(err) {
return err
}
return nil
}
}
testCases := []test{}
{
//file "/tmp/somefile" {
// state => "exists",
// content => "some new text\n",
//}
r1 := makeRes("file", "r1")
res := r1.(*FileRes) // if this panics, the test will panic
p := "/tmp/somefile"
res.Path = p
res.State = "exists"
content := "some new text\n"
res.Content = &content
timeline := []func() error{
fileWrite(p, "whatever"),
resValidate(r1),
resInit(r1),
resCheckApply(r1, false), // changed
fileExpect(p, content),
resCheckApply(r1, true), // it's already good
resClose(r1),
fileExpect(p, content), // ensure it exists
}
testCases = append(testCases, test{
name: "simple file",
timeline: timeline,
expect: func() error { return nil },
startup: func() error { return nil },
cleanup: func() error { return nil },
})
}
{
//file "/tmp/somefile" {
// # state is NOT specified
// content => "some new text\n",
//}
r1 := makeRes("file", "r1")
res := r1.(*FileRes) // if this panics, the test will panic
p := "/tmp/somefile"
res.Path = p
//res.State = "exists" // not specified!
content := "some new text\n"
res.Content = &content
timeline := []func() error{
fileWrite(p, "whatever"),
resValidate(r1),
resInit(r1),
resCheckApply(r1, false), // changed
fileExpect(p, content),
resCheckApply(r1, true), // it's already good
resClose(r1),
fileExpect(p, content), // ensure it exists
}
testCases = append(testCases, test{
name: "edit file only",
timeline: timeline,
expect: func() error { return nil },
startup: func() error { return nil },
cleanup: func() error { return nil },
})
}
{
//file "/tmp/somefile" {
// # state is NOT specified
// content => "some new text\n",
//}
// and no existing file exists! (therefore we want an error!)
r1 := makeRes("file", "r1")
res := r1.(*FileRes) // if this panics, the test will panic
p := "/tmp/somefile"
res.Path = p
//res.State = "exists" // not specified!
content := "some new text\n"
res.Content = &content
timeline := []func() error{
fileRemove(p), // nothing here
resValidate(r1),
resInit(r1),
resCheckApplyError(r1, false, ErrIsNotExistOK), // should error
resCheckApplyError(r1, false, ErrIsNotExistOK), // double check
resClose(r1),
fileAbsent(p), // ensure it's absent
}
testCases = append(testCases, test{
name: "strict file",
timeline: timeline,
expect: func() error { return nil },
startup: func() error { return nil },
cleanup: func() error { return nil },
})
}
{
//file "/tmp/somefile" {
// state => "absent",
//}
// and no existing file exists!
r1 := makeRes("file", "r1")
res := r1.(*FileRes) // if this panics, the test will panic
p := "/tmp/somefile"
res.Path = p
res.State = "absent"
timeline := []func() error{
fileRemove(p), // nothing here
resValidate(r1),
resInit(r1),
resCheckApply(r1, true),
resCheckApply(r1, true),
resClose(r1),
fileAbsent(p), // ensure it's absent
}
testCases = append(testCases, test{
name: "absent file",
timeline: timeline,
expect: func() error { return nil },
startup: func() error { return nil },
cleanup: func() error { return nil },
})
}
{
//file "/tmp/somefile" {
// state => "absent",
//}
// and a file already exists!
r1 := makeRes("file", "r1")
res := r1.(*FileRes) // if this panics, the test will panic
p := "/tmp/somefile"
res.Path = p
res.State = "absent"
timeline := []func() error{
fileWrite(p, "whatever"),
resValidate(r1),
resInit(r1),
resCheckApply(r1, false),
resCheckApply(r1, true),
resClose(r1),
fileAbsent(p), // ensure it's absent
}
testCases = append(testCases, test{
name: "absent file pre-existing",
timeline: timeline,
expect: func() error { return nil },
startup: func() error { return nil },
cleanup: func() error { return nil },
})
}
{
//file "/tmp/somefile" {
// content => "some new text\n",
// state => "exists",
//
// Meta:reverse => true,
//}
r1 := makeRes("file", "r1")
res := r1.(*FileRes) // if this panics, the test will panic
p := "/tmp/somefile"
res.Path = p
res.State = "exists"
content := "some new text\n"
res.Content = &content
original := "this is the original state\n" // original state
var r2 engine.Res // future reversed resource
timeline := []func() error{
fileWrite(p, original),
fileExpect(p, original),
resValidate(r1),
resReversal(r1, &r2), // runs in Init to snapshot
func() error { // random test
if st := r2.(*FileRes).State; st != "absent" {
return fmt.Errorf("unexpected state: %s", st)
}
return nil
},
resInit(r1),
resCheckApply(r1, false), // changed
fileExpect(p, content),
resCheckApply(r1, true), // it's already good
resClose(r1),
//resValidate(r2), // no!!!
func() error {
// wrap it b/c it is currently nil
return r2.Validate()
},
func() error {
return resInit(r2)()
},
func() error {
return resCheckApply(r2, false)()
},
func() error {
return resCheckApply(r2, true)()
},
func() error {
return resClose(r2)()
},
fileAbsent(p), // ensure it's absent
}
testCases = append(testCases, test{
name: "some file",
timeline: timeline,
expect: func() error { return nil },
startup: func() error { return nil },
cleanup: func() error { return nil },
})
}
{
//file "/tmp/somefile" {
// content => "some new text\n",
//
// Meta:reverse => true,
//}
//# and there's an existing file at this path...
r1 := makeRes("file", "r1")
res := r1.(*FileRes) // if this panics, the test will panic
p := "/tmp/somefile"
res.Path = p
//res.State = "exists" // unspecified
content := "some new text\n"
res.Content = &content
original := "this is the original state\n" // original state
var r2 engine.Res // future reversed resource
timeline := []func() error{
fileWrite(p, original),
fileExpect(p, original),
resValidate(r1),
resReversal(r1, &r2), // runs in Init to snapshot
func() error { // random test
// state should be unspecified
if st := r2.(*FileRes).State; st == "absent" || st == "exists" {
return fmt.Errorf("unexpected state: %s", st)
}
return nil
},
resInit(r1),
resCheckApply(r1, false), // changed
fileExpect(p, content),
resCheckApply(r1, true), // it's already good
resClose(r1),
//resValidate(r2),
func() error {
// wrap it b/c it is currently nil
return r2.Validate()
},
func() error {
return resInit(r2)()
},
func() error {
return resCheckApply(r2, false)()
},
func() error {
return resCheckApply(r2, true)()
},
func() error {
return resClose(r2)()
},
fileExpect(p, original), // we restored the contents!
fileRemove(p), // cleanup
}
testCases = append(testCases, test{
name: "some file restore",
timeline: timeline,
expect: func() error { return nil },
startup: func() error { return nil },
cleanup: func() error { return nil },
})
}
{
//file "/tmp/somefile" {
// content => "some new text\n",
//
// Meta:reverse => true,
//}
//# and there's NO existing file at this path...
//# NOTE: This used to be a corner case subtlety for reversal.
//# Now that we error in this scenario before reversal, it's ok!
r1 := makeRes("file", "r1")
res := r1.(*FileRes) // if this panics, the test will panic
p := "/tmp/somefile"
res.Path = p
//res.State = "exists" // unspecified
content := "some new text\n"
res.Content = &content
var r2 engine.Res // future reversed resource
timeline := []func() error{
fileRemove(p), // ensure no file exists
resValidate(r1),
resReversal(r1, &r2), // runs in Init to snapshot
func() error { // random test
// state should be unspecified, I think
// TODO: or should it be absent?
if st := r2.(*FileRes).State; st == "absent" || st == "exists" {
return fmt.Errorf("unexpected state: %s", st)
}
return nil
},
resInit(r1),
resCheckApplyError(r1, false, ErrIsNotExistOK), // changed
//fileExpect(p, content),
//resCheckApply(r1, true), // it's already good
resClose(r1),
//func() error {
// // wrap it b/c it is currently nil
// return r2.Validate()
//},
//func() error {
// return resInit(r2)()
//},
//func() error { // it's already in the correct state
// return resCheckApply(r2, true)()
//},
//func() error {
// return resClose(r2)()
//},
//fileExpect(p, content), // we never changed it back...
//fileRemove(p), // cleanup
}
testCases = append(testCases, test{
name: "ambiguous file restore",
timeline: timeline,
expect: func() error { return nil },
startup: func() error { return nil },
cleanup: func() error { return nil },
})
}
{
//file "/tmp/somefile" {
// state => "absent",
//
// Meta:reverse => true,
//}
r1 := makeRes("file", "r1")
res := r1.(*FileRes) // if this panics, the test will panic
p := "/tmp/somefile"
res.Path = p
res.State = "absent"
original := "this is the original state\n" // original state
var r2 engine.Res // future reversed resource
timeline := []func() error{
fileWrite(p, original),
fileExpect(p, original),
resValidate(r1),
resReversal(r1, &r2), // runs in Init to snapshot
func() error { // random test
if st := r2.(*FileRes).State; st != "exists" {
return fmt.Errorf("unexpected state: %s", st)
}
return nil
},
resInit(r1),
resCheckApply(r1, false), // changed
fileAbsent(p), // ensure it got removed
resCheckApply(r1, true), // it's already good
resClose(r1),
//resValidate(r2), // no!!!
func() error {
// wrap it b/c it is currently nil
return r2.Validate()
},
func() error {
return resInit(r2)()
},
func() error {
return resCheckApply(r2, false)()
},
func() error {
return resCheckApply(r2, true)()
},
func() error {
return resClose(r2)()
},
fileExpect(p, original), // ensure it's back to original
}
testCases = append(testCases, test{
name: "some removal",
timeline: timeline,
expect: func() error { return nil },
startup: func() error { return nil },
cleanup: func() error { return nil },
})
}
names := []string{}
for index, tc := range testCases { // run all the tests
if tc.name == "" {
t.Errorf("test #%d: not named", index)
continue
}
if util.StrInList(tc.name, names) {
t.Errorf("test #%d: duplicate sub test name of: %s", index, tc.name)
continue
}
names = append(names, tc.name)
t.Run(fmt.Sprintf("test #%d (%s)", index, tc.name), func(t *testing.T) {
timeline, expect, startup, cleanup := tc.timeline, tc.expect, tc.startup, tc.cleanup
t.Logf("test #%d: starting...\n", index)
defer t.Logf("test #%d: done!", index)
//debug := testing.Verbose() // set via the -test.v flag to `go test`
//logf := func(format string, v ...interface{}) {
// t.Logf(fmt.Sprintf("test #%d: ", index)+format, v...)
//}
t.Logf("test #%d: running startup()", index)
if err := startup(); err != nil {
t.Errorf("test #%d: FAIL", index)
t.Errorf("test #%d: could not startup: %+v", index, err)
}
defer func() {
t.Logf("test #%d: running cleanup()", index)
if err := cleanup(); err != nil {
t.Errorf("test #%d: FAIL", index)
t.Errorf("test #%d: could not cleanup: %+v", index, err)
}
}()
// run timeline
t.Logf("test #%d: executing timeline", index)
for ix, step := range timeline {
t.Logf("test #%d: step(%d)...", index, ix)
if err := step(); err != nil {
t.Errorf("test #%d: FAIL", index)
t.Errorf("test #%d: step(%d) action failed: %s", index, ix, err.Error())
break
}
}
t.Logf("test #%d: shutting down...", index)
if err := expect(); err != nil {
t.Errorf("test #%d: FAIL", index)
t.Errorf("test #%d: expect failed: %s", index, err.Error())
return
}
// all done!
})
}
}


@@ -354,31 +354,23 @@ func (obj *SvcRes) CheckApply(apply bool) (bool, error) {
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *SvcRes) Cmp(r engine.Res) error {
if !obj.Compare(r) {
return fmt.Errorf("did not compare")
}
return nil
}
// Compare two resources and return if they are equivalent.
func (obj *SvcRes) Compare(r engine.Res) bool {
// we can only compare SvcRes to others of the same resource kind
res, ok := r.(*SvcRes)
if !ok {
return false
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.State != res.State {
return false
return fmt.Errorf("the State differs")
}
if obj.Startup != res.Startup {
return false
return fmt.Errorf("the Startup differs")
}
if obj.Session != res.Session {
return false
return fmt.Errorf("the Session differs")
}
return true
return nil
}
// SvcUID is the UID struct for SvcRes.


@@ -199,25 +199,17 @@ func (obj *TestRes) CheckApply(apply bool) (bool, error) {
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *TestRes) Cmp(r engine.Res) error {
if !obj.Compare(r) {
return fmt.Errorf("did not compare")
}
return nil
}
// Compare two resources and return if they are equivalent.
func (obj *TestRes) Compare(r engine.Res) bool {
// we can only compare TestRes to others of the same resource kind
res, ok := r.(*TestRes)
if !ok {
return false
return fmt.Errorf("not a %s", obj.Kind())
}
//if obj.Name != res.Name {
// return false
//}
if obj.CompareFail || res.CompareFail {
return false
return fmt.Errorf("the CompareFail is true")
}
// TODO: yes, I know the long manual version is absurd, but I couldn't
@@ -228,145 +220,145 @@ func (obj *TestRes) Compare(r engine.Res) bool {
//}
if obj.Bool != res.Bool {
return false
return fmt.Errorf("the Bool differs")
}
if obj.Str != res.Str {
return false
return fmt.Errorf("the Str differs")
}
if obj.Int != res.Int {
return false
return fmt.Errorf("the Str differs")
}
if obj.Int8 != res.Int8 {
return false
return fmt.Errorf("the Int8 differs")
}
if obj.Int16 != res.Int16 {
return false
return fmt.Errorf("the Int16 differs")
}
if obj.Int32 != res.Int32 {
return false
return fmt.Errorf("the Int32 differs")
}
if obj.Int64 != res.Int64 {
return false
return fmt.Errorf("the Int64 differs")
}
if obj.Uint != res.Uint {
return false
return fmt.Errorf("the Uint differs")
}
if obj.Uint8 != res.Uint8 {
return false
return fmt.Errorf("the Uint8 differs")
}
if obj.Uint16 != res.Uint16 {
return false
return fmt.Errorf("the Uint16 differs")
}
if obj.Uint32 != res.Uint32 {
return false
return fmt.Errorf("the Uint32 differs")
}
if obj.Uint64 != res.Uint64 {
return false
return fmt.Errorf("the Uint64 differs")
}
//if obj.Uintptr
if obj.Byte != res.Byte {
return false
return fmt.Errorf("the Byte differs")
}
if obj.Rune != res.Rune {
return false
return fmt.Errorf("the Rune differs")
}
if obj.Float32 != res.Float32 {
return false
return fmt.Errorf("the Float32 differs")
}
if obj.Float64 != res.Float64 {
return false
return fmt.Errorf("the Float64 differs")
}
if obj.Complex64 != res.Complex64 {
return false
return fmt.Errorf("the Complex64 differs")
}
if obj.Complex128 != res.Complex128 {
return false
return fmt.Errorf("the Complex128 differs")
}
if (obj.BoolPtr == nil) != (res.BoolPtr == nil) { // xor
return false
return fmt.Errorf("the BoolPtr differs")
}
if obj.BoolPtr != nil && res.BoolPtr != nil {
if *obj.BoolPtr != *res.BoolPtr { // compare
return false
return fmt.Errorf("the BoolPtr differs")
}
}
if (obj.StringPtr == nil) != (res.StringPtr == nil) { // xor
return false
return fmt.Errorf("the StringPtr differs")
}
if obj.StringPtr != nil && res.StringPtr != nil {
if *obj.StringPtr != *res.StringPtr { // compare
return false
return fmt.Errorf("the StringPtr differs")
}
}
if (obj.Int64Ptr == nil) != (res.Int64Ptr == nil) { // xor
return false
return fmt.Errorf("the Int64Ptr differs")
}
if obj.Int64Ptr != nil && res.Int64Ptr != nil {
if *obj.Int64Ptr != *res.Int64Ptr { // compare
return false
return fmt.Errorf("the Int64Ptr differs")
}
}
if (obj.Int8Ptr == nil) != (res.Int8Ptr == nil) { // xor
return false
return fmt.Errorf("the Int8Ptr differs")
}
if obj.Int8Ptr != nil && res.Int8Ptr != nil {
if *obj.Int8Ptr != *res.Int8Ptr { // compare
return false
return fmt.Errorf("the Int8Ptr differs")
}
}
if (obj.Uint8Ptr == nil) != (res.Uint8Ptr == nil) { // xor
return false
return fmt.Errorf("the Uint8Ptr differs")
}
if obj.Uint8Ptr != nil && res.Uint8Ptr != nil {
if *obj.Uint8Ptr != *res.Uint8Ptr { // compare
return false
return fmt.Errorf("the Uint8Ptr differs")
}
}
if !reflect.DeepEqual(obj.Int8PtrPtrPtr, res.Int8PtrPtrPtr) {
return false
return fmt.Errorf("the Int8PtrPtrPtr differs")
}
if !reflect.DeepEqual(obj.SliceString, res.SliceString) {
return false
return fmt.Errorf("the SliceString differs")
}
if !reflect.DeepEqual(obj.MapIntFloat, res.MapIntFloat) {
return false
return fmt.Errorf("the MapIntFloat differs")
}
if !reflect.DeepEqual(obj.MixedStruct, res.MixedStruct) {
return false
return fmt.Errorf("the MixedStruct differs")
}
if !reflect.DeepEqual(obj.Interface, res.Interface) {
return false
return fmt.Errorf("the Interface differs")
}
if obj.AnotherStr != res.AnotherStr {
return false
return fmt.Errorf("the AnotherStr differs")
}
if obj.ValidateBool != res.ValidateBool {
return false
return fmt.Errorf("the ValidateBool differs")
}
if obj.ValidateError != res.ValidateError {
return false
return fmt.Errorf("the ValidateError differs")
}
if obj.AlwaysGroup != res.AlwaysGroup {
return false
return fmt.Errorf("the AlwaysGroup differs")
}
if obj.SendValue != res.SendValue {
return false
return fmt.Errorf("the SendValue differs")
}
if obj.Comment != res.Comment {
return false
return fmt.Errorf("the Comment differs")
}
return true
return nil
}
// TestUID is the UID struct for TestRes.

View File

@@ -113,25 +113,17 @@ func (obj *TimerRes) CheckApply(apply bool) (bool, error) {
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *TimerRes) Cmp(r engine.Res) error {
if !obj.Compare(r) {
return fmt.Errorf("did not compare")
}
return nil
}
// Compare two resources and return if they are equivalent.
func (obj *TimerRes) Compare(r engine.Res) bool {
// we can only compare TimerRes to others of the same resource kind
res, ok := r.(*TimerRes)
if !ok {
return false
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.Interval != res.Interval {
return false
return fmt.Errorf("the Interval differs")
}
return true
return nil
}
// TimerUID is the UID struct for TimerRes.

View File

@@ -273,45 +273,37 @@ func (obj *UserRes) CheckApply(apply bool) (bool, error) {
// Cmp compares two resources and returns an error if they are not equivalent.
func (obj *UserRes) Cmp(r engine.Res) error {
if !obj.Compare(r) {
return fmt.Errorf("did not compare")
}
return nil
}
// Compare two resources and return if they are equivalent.
func (obj *UserRes) Compare(r engine.Res) bool {
// we can only compare UserRes to others of the same resource kind
res, ok := r.(*UserRes)
if !ok {
return false
return fmt.Errorf("not a %s", obj.Kind())
}
if obj.State != res.State {
return false
return fmt.Errorf("the State differs")
}
if (obj.UID == nil) != (res.UID == nil) {
return false
return fmt.Errorf("the UID differs")
}
if obj.UID != nil && res.UID != nil {
if *obj.UID != *res.UID {
return false
return fmt.Errorf("the UID differs")
}
}
if (obj.GID == nil) != (res.GID == nil) {
return false
return fmt.Errorf("the GID differs")
}
if obj.GID != nil && res.GID != nil {
if *obj.GID != *res.GID {
return false
return fmt.Errorf("the GID differs")
}
}
if (obj.Groups == nil) != (res.Groups == nil) {
return false
return fmt.Errorf("the Group differs")
}
if obj.Groups != nil && res.Groups != nil {
if len(obj.Groups) != len(res.Groups) {
return false
return fmt.Errorf("the Group differs")
}
objGroups := obj.Groups
resGroups := res.Groups
@@ -319,22 +311,22 @@ func (obj *UserRes) Compare(r engine.Res) bool {
sort.Strings(resGroups)
for i := range objGroups {
if objGroups[i] != resGroups[i] {
return false
return fmt.Errorf("the Group differs at index: %d", i)
}
}
}
if (obj.HomeDir == nil) != (res.HomeDir == nil) {
return false
return fmt.Errorf("the HomeDirs differs")
}
if obj.HomeDir != nil && res.HomeDir != nil {
if *obj.HomeDir != *obj.HomeDir {
return false
if *obj.HomeDir != *res.HomeDir {
return fmt.Errorf("the HomeDir differs")
}
}
if obj.AllowDuplicateUID != res.AllowDuplicateUID {
return false
return fmt.Errorf("the AllowDuplicateUID differs")
}
return true
return nil
}
// UserUID is the UID struct for UserRes.

View File

@@ -41,10 +41,14 @@ type ReversibleRes interface {
// Reversed returns the "reverse" or "reciprocal" resource. This is used
// to "clean" up after a previously defined resource has been removed.
// Interestingly, this returns the core Res interface instead of a
// Interestingly, this could return the core Res interface instead of a
// ReversibleRes, because there is no requirement that the reverse of a
// Res be the same kind of Res, and the reverse might not be reversible!
Reversed() (Res, error)
// However, in practice, it's nice to use some of the Reversible meta
// params in the built value, so keep things simple and have this be a
// reversible res. The Res itself doesn't have to implement Reversed()
// in a meaningful way, it can just return nil and it will get ignored.
Reversed() (ReversibleRes, error)
}
// ReversibleMeta provides some parameters specific to reversible resources.
@@ -53,6 +57,16 @@ type ReversibleMeta struct {
// resource.
Disabled bool
// Reversal specifies that the resource was built from a reversal. This
// must be set if the resource was built by a reversal.
Reversal bool
// Overwrite specifies that we should overwrite any existing stored
// reversible resource if one that is pending already exists. If this is
// false, and a resource with the same name and kind exists, then this
// will cause an error.
Overwrite bool
// TODO: add options here, including whether to reverse edges, etc...
}
@@ -61,5 +75,11 @@ func (obj *ReversibleMeta) Cmp(rm *ReversibleMeta) error {
if obj.Disabled != rm.Disabled {
return fmt.Errorf("values for Disabled are different")
}
if obj.Reversal != rm.Reversal { // TODO: do we want to compare these?
return fmt.Errorf("values for Reversal are different")
}
if obj.Overwrite != rm.Overwrite {
return fmt.Errorf("values for Overwrite are different")
}
return nil
}
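Per the updated comment, Reversed() now returns a ReversibleRes so the reversal machinery can reuse the reversible meta params, and an implementation with nothing useful to reverse can simply return nil. A minimal standalone sketch of that contract, using local stand-in types rather than the real engine interfaces, is below.

package main

import "fmt"

// reversibleMeta is a trimmed-down stand-in for the ReversibleMeta struct above.
type reversibleMeta struct {
	Disabled  bool
	Reversal  bool // set when the resource was built from a reversal
	Overwrite bool // overwrite an already-stored pending reversal
}

// reversible is a stand-in for the ReversibleRes interface.
type reversible interface {
	Reversed() (reversible, error)
}

// noopRes is a resource with nothing meaningful to reverse.
type noopRes struct {
	meta reversibleMeta
}

// Reversed returns nil, which (per the comment above) simply gets ignored.
func (obj *noopRes) Reversed() (reversible, error) {
	return nil, nil
}

func main() {
	var r reversible = &noopRes{meta: reversibleMeta{Reversal: true}}
	rev, err := r.Reversed()
	fmt.Println(rev == nil, err == nil) // true true: nothing to clean up
}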

View File

@@ -23,6 +23,7 @@ import (
"encoding/base64"
"encoding/gob"
"fmt"
"os"
"os/user"
"reflect"
"strconv"
@@ -62,6 +63,23 @@ const (
DBusSignalJobRemoved = "JobRemoved"
)
// ResPathUID returns a unique resource UID based on its name and kind. It's
// safe to use as a token in a path, and as a result has no slashes in it.
func ResPathUID(res engine.Res) string {
// res.Name() is NOT sufficiently unique to use as a UID here, because:
// a name of: /tmp/mgmt/foo becomes /tmp-mgmt-foo and
// a name of: /tmp/mgmt-foo also becomes /tmp-mgmt-foo if we replace slashes.
// As a result, we base64 encode (but without slashes).
name := strings.Replace(res.Name(), "/", "-", -1) // TODO: use ReplaceAll in 1.12
if os.PathSeparator != '/' { // lol windows?
name = strings.Replace(name, string(os.PathSeparator), "-", -1) // TODO: use ReplaceAll in 1.12
}
b := []byte(res.Name())
encoded := base64.URLEncoding.EncodeToString(b)
// Add the safe name on so that it's easier to identify by name...
return fmt.Sprintf("%s-%s+%s", res.Kind(), encoded, name)
}
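The comment above gives the motivating collision: with plain slash replacement, "/tmp/mgmt/foo" and "/tmp/mgmt-foo" both collapse to the same token. The short standalone check below reproduces that collision and shows why the URL-safe base64 form (which contains no slashes) keeps the UID unique; the trailing sanitized name is only appended for readability.

package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

func main() {
	a, b := "/tmp/mgmt/foo", "/tmp/mgmt-foo"

	// naive slash replacement collides:
	fmt.Println(strings.Replace(a, "/", "-", -1) == strings.Replace(b, "/", "-", -1)) // true

	// URL-safe base64 (no "/" in its alphabet) stays unique:
	ea := base64.URLEncoding.EncodeToString([]byte(a))
	eb := base64.URLEncoding.EncodeToString([]byte(b))
	fmt.Println(ea == eb) // false

	// shaped like the UID built above: kind, encoded name, readable name
	fmt.Printf("file-%s+%s\n", ea, strings.Replace(a, "/", "-", -1))
}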
// ResToB64 encodes a resource to a base64 encoded string (after serialization).
func ResToB64(res engine.Res) (string, error) {
b := bytes.Buffer{}

View File

@@ -400,7 +400,7 @@ func (obj *EmbdEtcd) Validate() error {
if obj.NoNetwork {
if len(obj.Seeds) != 0 || len(obj.ClientURLs) != 0 || len(obj.ServerURLs) != 0 {
return fmt.Errorf("NoNetwork is mutually exclusive with Seeds, ClientURLs and ServerURLs")
return fmt.Errorf("option NoNetwork is mutually exclusive with Seeds, ClientURLs and ServerURLs")
}
}

View File

@@ -70,9 +70,9 @@ var (
// ErrNotExist is returned when we can't find the requested path.
ErrNotExist = os.ErrNotExist
ErrFileClosed = errors.New("File is closed")
ErrFileReadOnly = errors.New("File handle is read only")
ErrOutOfRange = errors.New("Out of range")
ErrFileClosed = errors.New("file is closed")
ErrFileReadOnly = errors.New("file handle is read only")
ErrOutOfRange = errors.New("out of range")
)
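The lowercasing here follows the usual Go convention: error strings start with a lowercase letter and carry no trailing punctuation, because they are routinely wrapped into longer messages. A two-line standard-library illustration:

package main

import (
	"errors"
	"fmt"
)

func main() {
	err := errors.New("file is closed") // lowercase, no trailing punctuation
	// wrapped mid-sentence, it still reads naturally:
	fmt.Printf("copy failed: %v\n", err) // copy failed: file is closed
}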
// Fs is a specialized afero.Fs implementation for etcd. It implements a small

View File

@@ -231,15 +231,15 @@ func TestFs2(t *testing.T) {
var memFs = afero.NewMemMapFs()
if err := util.CopyFs(etcdFs, memFs, "/", "/", false); err != nil {
if err := util.CopyFs(etcdFs, memFs, "/", "/", false, false); err != nil {
t.Errorf("copyfs error: %+v", err)
return
}
if err := util.CopyFs(etcdFs, memFs, "/", "/", true); err != nil {
if err := util.CopyFs(etcdFs, memFs, "/", "/", true, false); err != nil {
t.Errorf("copyfs2 error: %+v", err)
return
}
if err := util.CopyFs(etcdFs, memFs, "/", "/tmp/d1/", false); err != nil {
if err := util.CopyFs(etcdFs, memFs, "/", "/tmp/d1/", false, false); err != nil {
t.Errorf("copyfs3 error: %+v", err)
return
}
@@ -300,11 +300,11 @@ func TestFs3(t *testing.T) {
var memFs = afero.NewMemMapFs()
if err := util.CopyFs(etcdFs, memFs, "/tmp/foo/bar", "/", false); err != nil {
if err := util.CopyFs(etcdFs, memFs, "/tmp/foo/bar", "/", false, false); err != nil {
t.Errorf("copyfs error: %+v", err)
return
}
if err := util.CopyFs(etcdFs, memFs, "/tmp/foo/bar", "/baz/", false); err != nil {
if err := util.CopyFs(etcdFs, memFs, "/tmp/foo/bar", "/baz/", false, false); err != nil {
t.Errorf("copyfs2 error: %+v", err)
return
}
@@ -419,7 +419,7 @@ func TestEtcdCopyFs0(t *testing.T) {
t.Logf("tree: \n%s", tree)
var memFs = afero.NewMemMapFs()
if err := util.CopyFs(etcdFs, memFs, tt.cpsrc, tt.cpdst, tt.force); err != nil {
if err := util.CopyFs(etcdFs, memFs, tt.cpsrc, tt.cpdst, tt.force, false); err != nil {
t.Errorf("copyfs error: %+v", err)
return
}

View File

@@ -11,6 +11,7 @@ $c4 = "b" in $set
$s = fmt.printf("1: %t, 2: %t, 3: %t, 4: %t\n", $c1, $c2, $c3, $c4)
file "/tmp/mgmt/contains" {
state => "exists",
content => $s,
}
@@ -21,5 +22,6 @@ $x = if sys.hostname() in ["h1", "h3",] {
}
file "/tmp/mgmt/hello-${sys.hostname()}" {
state => "exists",
content => $x,
}

View File

@@ -5,4 +5,5 @@ cron "purpleidea-oneshot" {
svc "purpleidea-oneshot" {}
# TODO: do we need a state => "exists" specified here?
file "/etc/systemd/system/purpleidea-oneshot.service" {}

View File

@@ -10,4 +10,5 @@ svc "purpleidea-oneshot" {
session => true,
}
# TODO: do we need a state => "exists" specified here?
file printf("%s/.config/systemd/user/purpleidea-oneshot.service", $home) {}

View File

@@ -2,5 +2,6 @@ import "datetime"
$d = datetime.now()
file "/tmp/mgmt/datetime" {
state => "exists",
content => template("Hello! It is now: {{ datetime_print . }}\n", $d),
}

View File

@@ -12,6 +12,7 @@ $theload = structlookup(sys.load(), "x1")
if 5 > 3 {
file "/tmp/mgmt/datetime" {
state => "exists",
content => template("Now + 1 year is: {{ .year }} seconds, aka: {{ datetime_print .year }}\n\nload average: {{ .load }}\n", $tmplvalues),
}
}

View File

@@ -14,5 +14,6 @@ $theload = structlookup(sys.load(), "x1")
$vumeter = example.vumeter("====", 10, 0.9)
file "/tmp/mgmt/datetime" {
state => "exists",
content => template("Now + 1 year is: {{ .year }} seconds, aka: {{ datetime_print .year }}\n\nload average: {{ .load }}\n\nvu: {{ .vumeter }}\n", $tmplvalues),
}

View File

@@ -13,5 +13,6 @@ $rand = random1(8)
$exchanged = world.exchange("keyns", $rand)
file "/tmp/mgmt/exchange-${sys.hostname()}" {
state => "exists",
content => template("Found: {{ . }}\n", $exchanged),
}

View File

@@ -5,5 +5,6 @@ $dt = datetime.now()
$hystvalues = {"ix0" => $dt, "ix1" => $dt{1}, "ix2" => $dt{2}, "ix3" => $dt{3},}
file "/tmp/mgmt/history" {
state => "exists",
content => template("Index(0) {{.ix0}}: {{ datetime_print .ix0 }}\nIndex(1) {{.ix1}}: {{ datetime_print .ix1 }}\nIndex(2) {{.ix2}}: {{ datetime_print .ix2 }}\nIndex(3) {{.ix3}}: {{ datetime_print .ix3 }}\n", $hystvalues),
}

View File

@@ -1,6 +1,7 @@
import "sys"
file "/tmp/mgmt/systemload" {
state => "exists",
content => template("load average: {{ .load }} threshold: {{ .threshold }}\n", $tmplvalues),
}

View File

@@ -3,6 +3,7 @@ password "pass0" {
}
file "/tmp/mgmt/password" {
state => "exists",
}
Password["pass0"].password -> File["/tmp/mgmt/password"].content

View File

@@ -2,5 +2,6 @@ import "os"
# this copies the contents from /tmp/input and puts them in /tmp/output
file "/tmp/output" {
state => "exists",
content => os.readfile("/tmp/input"),
}

View File

@@ -0,0 +1,25 @@
import "datetime"
import "math"
$now = datetime.now()
# alternate every four seconds
$mod0 = math.mod($now, 8) == 0
$mod1 = math.mod($now, 8) == 1
$mod2 = math.mod($now, 8) == 2
$mod3 = math.mod($now, 8) == 3
$mod = $mod0 || $mod1 || $mod2 || $mod3
file "/tmp/mgmt/" {
state => "exists",
}
# file should disappear and re-appear every four seconds
if $mod {
file "/tmp/mgmt/hello" {
content => "please say abracadabra...\n",
state => "exists",
Meta:reverse => true,
}
}

View File

@@ -0,0 +1,25 @@
import "datetime"
import "math"
$now = datetime.now()
# alternate every four seconds
$mod0 = math.mod($now, 8) == 0
$mod1 = math.mod($now, 8) == 1
$mod2 = math.mod($now, 8) == 2
$mod3 = math.mod($now, 8) == 3
$mod = $mod0 || $mod1 || $mod2 || $mod3
file "/tmp/mgmt/" {
state => "exists",
}
# file should re-appear and disappear every four seconds
# it will even preserve and then restore the pre-existing content!
if $mod {
file "/tmp/mgmt/hello" {
state => "absent", # delete the file
Meta:reverse => true,
}
}

View File

@@ -0,0 +1,26 @@
import "datetime"
import "math"
$now = datetime.now()
# alternate every four seconds
$mod0 = math.mod($now, 8) == 0
$mod1 = math.mod($now, 8) == 1
$mod2 = math.mod($now, 8) == 2
$mod3 = math.mod($now, 8) == 3
$mod = $mod0 || $mod1 || $mod2 || $mod3
file "/tmp/mgmt/" {
state => "exists",
}
# file should change the mode every four seconds
# editing the file contents at any time is allowed
if $mod {
file "/tmp/mgmt/hello" {
state => "exists",
mode => "0777",
Meta:reverse => true,
}
}

View File

@@ -17,5 +17,6 @@ $set = world.schedule("xsched", $opts)
#$set = world.schedule("xsched")
file "/tmp/mgmt/scheduled-${sys.hostname()}" {
state => "exists",
content => template("set: {{ . }}\n", $set),
}

View File

@@ -19,6 +19,7 @@ Exec["exec0"].output -> Kv["kv0"].value
if $state != "default" {
file "/tmp/mgmt/state" {
state => "exists",
content => fmt.printf("state: %s\n", $state),
}
}

View File

@@ -7,6 +7,7 @@ $state = maplookup($exchanged, $hostname, "default")
if $state == "one" || $state == "default" {
file "/tmp/mgmt/state" {
state => "exists",
content => "state: one\n",
}
@@ -22,6 +23,7 @@ if $state == "one" || $state == "default" {
if $state == "two" {
file "/tmp/mgmt/state" {
state => "exists",
content => "state: two\n",
}
@@ -37,6 +39,7 @@ if $state == "two" {
if $state == "three" {
file "/tmp/mgmt/state" {
state => "exists",
content => "state: three\n",
}

View File

@@ -3,5 +3,6 @@ print "unicode" {
msg => $unicode,
}
file "/tmp/unicode" {
state => "exists",
content => $unicode + "\n",
}

View File

@@ -17,6 +17,7 @@ $count = if $input > 8 {
}
file "/tmp/output" {
state => "exists",
content => fmt.printf("requesting: %d cpus\n", $count),
}

View File

@@ -58,10 +58,20 @@ func CopyStringToFs(fs engine.Fs, str, dst string) error {
}
// CopyDirToFs copies a dir from src path on the local fs to a dst path on fs.
// FIXME: I'm not sure this does the logical thing when the dst path is a dir.
// FIXME: We've got a workaround for this inside of the lang CLI GAPI.
func CopyDirToFs(fs engine.Fs, src, dst string) error {
return util.CopyDiskToFs(fs, src, dst, false)
}
// CopyDirToFsForceAll copies a dir from src path on the local fs to a dst path
// on fs, but it doesn't error when making a dir that already exists. It also
// uses `MkdirAll` to prevent some issues.
// FIXME: This is being added because of issues with CopyDirToFs. POSIX is hard.
func CopyDirToFsForceAll(fs engine.Fs, src, dst string) error {
return util.CopyDiskToFsAll(fs, src, dst, true, true)
}
// CopyDirContentsToFs copies a dir contents from src path on the local fs to a
// dst path on fs.
func CopyDirContentsToFs(fs engine.Fs, src, dst string) error {
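As the comment explains, the ForceAll variant exists because re-creating a directory that already exists should not be fatal during a copy, so it leans on MkdirAll semantics. The standalone snippet below illustrates only that behavioural difference with the standard library; it is not the real util.CopyDiskToFsAll.

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
)

func main() {
	base, err := ioutil.TempDir("", "forceall-sketch-")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(base)

	dir := filepath.Join(base, "modules", "foo")

	// MkdirAll is the "ForceAll" behaviour: it creates missing parents and
	// does not error if the directory already exists...
	fmt.Println(os.MkdirAll(dir, 0755) == nil) // true
	fmt.Println(os.MkdirAll(dir, 0755) == nil) // true again: repeat is harmless

	// ...whereas plain Mkdir is strict: the second call fails.
	strict := filepath.Join(base, "strict")
	fmt.Println(os.Mkdir(strict, 0755) == nil) // true
	fmt.Println(os.Mkdir(strict, 0755) != nil) // true: already exists
}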

View File

@@ -20,6 +20,7 @@ package core
import (
// import so the funcs register
_ "github.com/purpleidea/mgmt/lang/funcs/core/datetime"
_ "github.com/purpleidea/mgmt/lang/funcs/core/deploy"
_ "github.com/purpleidea/mgmt/lang/funcs/core/example"
_ "github.com/purpleidea/mgmt/lang/funcs/core/example/nested"
_ "github.com/purpleidea/mgmt/lang/funcs/core/fmt"

View File

@@ -143,29 +143,31 @@ func TestPureFuncExec0(t *testing.T) {
return
}
if !reflect.DeepEqual(result, expect) {
// double check because DeepEqual is different since the func exists
diff := pretty.Compare(result, expect)
if diff != "" { // bonus
t.Errorf("test #%d: result did not match expected", index)
// TODO: consider making our own recursive print function
t.Logf("test #%d: actual: \n\n%s\n", index, spew.Sdump(result))
t.Logf("test #%d: expected: \n\n%s", index, spew.Sdump(expect))
// more details, for tricky cases:
diffable := &pretty.Config{
Diffable: true,
IncludeUnexported: true,
//PrintStringers: false,
//PrintTextMarshalers: false,
//SkipZeroFields: false,
}
t.Logf("test #%d: actual: \n\n%s\n", index, diffable.Sprint(result))
t.Logf("test #%d: expected: \n\n%s", index, diffable.Sprint(expect))
t.Logf("test #%d: diff:\n%s", index, diff)
return
}
if reflect.DeepEqual(result, expect) {
return
}
// double check because DeepEqual is different since the func exists
diff := pretty.Compare(result, expect)
if diff == "" { // bonus
return
}
t.Errorf("test #%d: result did not match expected", index)
// TODO: consider making our own recursive print function
t.Logf("test #%d: actual: \n\n%s\n", index, spew.Sdump(result))
t.Logf("test #%d: expected: \n\n%s", index, spew.Sdump(expect))
// more details, for tricky cases:
diffable := &pretty.Config{
Diffable: true,
IncludeUnexported: true,
//PrintStringers: false,
//PrintTextMarshalers: false,
//SkipZeroFields: false,
}
t.Logf("test #%d: actual: \n\n%s\n", index, diffable.Sprint(result))
t.Logf("test #%d: expected: \n\n%s", index, diffable.Sprint(expect))
t.Logf("test #%d: diff:\n%s", index, diff)
})
}
}

View File

@@ -0,0 +1,156 @@
// Mgmt
// Copyright (C) 2013-2019+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package coredeploy
import (
"fmt"
"strings"
"github.com/purpleidea/mgmt/lang/funcs"
"github.com/purpleidea/mgmt/lang/interfaces"
"github.com/purpleidea/mgmt/lang/types"
)
func init() {
funcs.ModuleRegister(ModuleName, "abspath", func() interfaces.Func { return &AbsPathFunc{} }) // must register the func and name
}
const (
pathArg = "path"
)
// AbsPathFunc is a function that returns the absolute, full path in the deploy
// from an input path that is relative to the calling file. If you pass it an
// empty string, you'll just get the absolute deploy directory path that you're
// in.
type AbsPathFunc struct {
init *interfaces.Init
data *interfaces.FuncData
last types.Value // last value received to use for diff
path string // the active path
result string // last calculated output
closeChan chan struct{}
}
// SetData is used by the language to pass our function some code-level context.
func (obj *AbsPathFunc) SetData(data *interfaces.FuncData) {
obj.data = data
}
// ArgGen returns the Nth arg name for this function.
func (obj *AbsPathFunc) ArgGen(index int) (string, error) {
seq := []string{pathArg}
if l := len(seq); index >= l {
return "", fmt.Errorf("index %d exceeds arg length of %d", index, l)
}
return seq[index], nil
}
// Validate makes sure we've built our struct properly. It is usually unused for
// normal functions that users can use directly.
func (obj *AbsPathFunc) Validate() error {
return nil
}
// Info returns some static info about itself.
func (obj *AbsPathFunc) Info() *interfaces.Info {
return &interfaces.Info{
Pure: false, // maybe false because the file contents can change
Memo: false,
Sig: types.NewType(fmt.Sprintf("func(%s str) str", pathArg)),
}
}
// Init runs some startup code for this function.
func (obj *AbsPathFunc) Init(init *interfaces.Init) error {
obj.init = init
obj.closeChan = make(chan struct{})
if obj.data == nil {
// programming error
return fmt.Errorf("missing function data")
}
return nil
}
// Stream returns the changing values that this func has over time.
func (obj *AbsPathFunc) Stream() error {
defer close(obj.init.Output) // the sender closes
for {
select {
case input, ok := <-obj.init.Input:
if !ok {
obj.init.Input = nil // don't infinite loop back
continue // no more inputs, but don't return!
}
//if err := input.Type().Cmp(obj.Info().Sig.Input); err != nil {
// return errwrap.Wrapf(err, "wrong function input")
//}
if obj.last != nil && input.Cmp(obj.last) == nil {
continue // value didn't change, skip it
}
obj.last = input // store for next
path := input.Struct()[pathArg].Str()
// TODO: add validation for absolute path?
if path == obj.path {
continue // nothing changed
}
obj.path = path
p := strings.TrimSuffix(obj.data.Base, "/")
if p == obj.data.Base { // didn't trim, so we fail
// programming error
return fmt.Errorf("no trailing slash on Base, got: `%s`", p)
}
result := p
if obj.path == "" {
result += "/" // add the above trailing slash back
} else if !strings.HasPrefix(obj.path, "/") {
return fmt.Errorf("path was not absolute, got: `%s`", obj.path)
//result += "/" // be forgiving ?
}
result += obj.path
if obj.result == result {
continue // result didn't change
}
obj.result = result // store new result
case <-obj.closeChan:
return nil
}
select {
case obj.init.Output <- &types.StrValue{
V: obj.result,
}:
case <-obj.closeChan:
return nil
}
}
}
// Close runs some shutdown code for this function and turns off the stream.
func (obj *AbsPathFunc) Close() error {
close(obj.closeChan)
return nil
}
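The interesting part of the stream loop above is the path arithmetic: Base must end in a slash, the input must be absolute (or empty), and the result is Base without its trailing slash plus the input, with the empty input returning the deploy root itself. The standalone replica below mirrors just that join logic; the Base value is invented for illustration.

package main

import (
	"fmt"
	"strings"
)

// join mirrors the path handling in the Stream loop above.
func join(base, path string) (string, error) {
	p := strings.TrimSuffix(base, "/")
	if p == base { // didn't trim, so the base was malformed
		return "", fmt.Errorf("no trailing slash on Base, got: `%s`", p)
	}
	if path == "" {
		return p + "/", nil // the deploy root itself
	}
	if !strings.HasPrefix(path, "/") {
		return "", fmt.Errorf("path was not absolute, got: `%s`", path)
	}
	return p + path, nil
}

func main() {
	base := "/var/lib/mgmt/deploy/42/" // hypothetical Base, for illustration only
	for _, p := range []string{"", "/files/motd.tmpl", "relative.txt"} {
		out, err := join(base, p)
		fmt.Printf("%q -> %q (err: %v)\n", p, out, err)
	}
}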

View File

@@ -0,0 +1,23 @@
// Mgmt
// Copyright (C) 2013-2019+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package coredeploy
const (
// ModuleName is the prefix given to all the functions in this module.
ModuleName = "deploy"
)

View File

@@ -0,0 +1,165 @@
// Mgmt
// Copyright (C) 2013-2019+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package coredeploy
import (
"fmt"
"strings"
"github.com/purpleidea/mgmt/lang/funcs"
"github.com/purpleidea/mgmt/lang/interfaces"
"github.com/purpleidea/mgmt/lang/types"
"github.com/purpleidea/mgmt/util/errwrap"
)
func init() {
funcs.ModuleRegister(ModuleName, "readfile", func() interfaces.Func { return &ReadFileFunc{} }) // must register the func and name
}
// ReadFileFunc is a function that reads the full contents from a file in our
// deploy. The file contents can only change with a new deploy, so this is
// static. Please note that this is different from the readfile function in the
// os package.
type ReadFileFunc struct {
init *interfaces.Init
data *interfaces.FuncData
last types.Value // last value received to use for diff
filename string // the active filename
result string // last calculated output
closeChan chan struct{}
}
// SetData is used by the language to pass our function some code-level context.
func (obj *ReadFileFunc) SetData(data *interfaces.FuncData) {
obj.data = data
}
// ArgGen returns the Nth arg name for this function.
func (obj *ReadFileFunc) ArgGen(index int) (string, error) {
seq := []string{"filename"}
if l := len(seq); index >= l {
return "", fmt.Errorf("index %d exceeds arg length of %d", index, l)
}
return seq[index], nil
}
// Validate makes sure we've built our struct properly. It is usually unused for
// normal functions that users can use directly.
func (obj *ReadFileFunc) Validate() error {
return nil
}
// Info returns some static info about itself.
func (obj *ReadFileFunc) Info() *interfaces.Info {
return &interfaces.Info{
Pure: false, // maybe false because the file contents can change
Memo: false,
Sig: types.NewType("func(filename str) str"),
}
}
// Init runs some startup code for this function.
func (obj *ReadFileFunc) Init(init *interfaces.Init) error {
obj.init = init
obj.closeChan = make(chan struct{})
if obj.data == nil {
// programming error
return fmt.Errorf("missing function data")
}
return nil
}
// Stream returns the changing values that this func has over time.
func (obj *ReadFileFunc) Stream() error {
defer close(obj.init.Output) // the sender closes
for {
select {
case input, ok := <-obj.init.Input:
if !ok {
obj.init.Input = nil // don't infinite loop back
continue // no more inputs, but don't return!
}
//if err := input.Type().Cmp(obj.Info().Sig.Input); err != nil {
// return errwrap.Wrapf(err, "wrong function input")
//}
if obj.last != nil && input.Cmp(obj.last) == nil {
continue // value didn't change, skip it
}
obj.last = input // store for next
filename := input.Struct()["filename"].Str()
// TODO: add validation for absolute path?
if filename == obj.filename {
continue // nothing changed
}
obj.filename = filename
p := strings.TrimSuffix(obj.data.Base, "/")
if p == obj.data.Base { // didn't trim, so we fail
// programming error
return fmt.Errorf("no trailing slash on Base, got: `%s`", p)
}
path := p
if !strings.HasPrefix(obj.filename, "/") {
return fmt.Errorf("filename was not absolute, got: `%s`", obj.filename)
//path += "/" // be forgiving ?
}
path += obj.filename
fs, err := obj.init.World.Fs(obj.data.FsURI) // open the remote file system
if err != nil {
return errwrap.Wrapf(err, "can't load code from file system `%s`", obj.data.FsURI)
}
// this is relative to the module dir the func is in!
content, err := fs.ReadFile(path) // read the file from the remote file system
// We could use it directly, but it feels less correct.
//content, err := obj.data.Fs.ReadFile(path) // open the remote file system
if err != nil {
return errwrap.Wrapf(err, "can't read file `%s` (%s)", obj.filename, path)
}
result := string(content) // convert to string
if obj.result == result {
continue // result didn't change
}
obj.result = result // store new result
case <-obj.closeChan:
return nil
}
select {
case obj.init.Output <- &types.StrValue{
V: obj.result,
}:
case <-obj.closeChan:
return nil
}
}
}
// Close runs some shutdown code for this function and turns off the stream.
func (obj *ReadFileFunc) Close() error {
close(obj.closeChan)
return nil
}

View File

@@ -0,0 +1,151 @@
// Mgmt
// Copyright (C) 2013-2019+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package coredeploy
import (
"fmt"
"github.com/purpleidea/mgmt/lang/funcs"
"github.com/purpleidea/mgmt/lang/interfaces"
"github.com/purpleidea/mgmt/lang/types"
"github.com/purpleidea/mgmt/util/errwrap"
)
func init() {
funcs.ModuleRegister(ModuleName, "readfileabs", func() interfaces.Func { return &ReadFileAbsFunc{} }) // must register the func and name
}
// ReadFileAbsFunc is a function that reads the full contents from a file in our
// deploy. The file contents can only change with a new deploy, so this is
// static. In particular, this takes an absolute path relative to the
// deploy root. In general, you should use `deploy.readfile` instead. Please note
// that this is different from the readfile function in the os package.
type ReadFileAbsFunc struct {
init *interfaces.Init
data *interfaces.FuncData
last types.Value // last value received to use for diff
filename string // the active filename
result string // last calculated output
closeChan chan struct{}
}
// SetData is used by the language to pass our function some code-level context.
func (obj *ReadFileAbsFunc) SetData(data *interfaces.FuncData) {
obj.data = data
}
// ArgGen returns the Nth arg name for this function.
func (obj *ReadFileAbsFunc) ArgGen(index int) (string, error) {
seq := []string{"filename"}
if l := len(seq); index >= l {
return "", fmt.Errorf("index %d exceeds arg length of %d", index, l)
}
return seq[index], nil
}
// Validate makes sure we've built our struct properly. It is usually unused for
// normal functions that users can use directly.
func (obj *ReadFileAbsFunc) Validate() error {
return nil
}
// Info returns some static info about itself.
func (obj *ReadFileAbsFunc) Info() *interfaces.Info {
return &interfaces.Info{
Pure: false, // maybe false because the file contents can change
Memo: false,
Sig: types.NewType("func(filename str) str"),
}
}
// Init runs some startup code for this function.
func (obj *ReadFileAbsFunc) Init(init *interfaces.Init) error {
obj.init = init
obj.closeChan = make(chan struct{})
if obj.data == nil {
// programming error
return fmt.Errorf("missing function data")
}
return nil
}
// Stream returns the changing values that this func has over time.
func (obj *ReadFileAbsFunc) Stream() error {
defer close(obj.init.Output) // the sender closes
for {
select {
case input, ok := <-obj.init.Input:
if !ok {
obj.init.Input = nil // don't infinite loop back
continue // no more inputs, but don't return!
}
//if err := input.Type().Cmp(obj.Info().Sig.Input); err != nil {
// return errwrap.Wrapf(err, "wrong function input")
//}
if obj.last != nil && input.Cmp(obj.last) == nil {
continue // value didn't change, skip it
}
obj.last = input // store for next
filename := input.Struct()["filename"].Str()
// TODO: add validation for absolute path?
if filename == obj.filename {
continue // nothing changed
}
obj.filename = filename
fs, err := obj.init.World.Fs(obj.data.FsURI) // open the remote file system
if err != nil {
return errwrap.Wrapf(err, "can't load code from file system `%s`", obj.data.FsURI)
}
content, err := fs.ReadFile(obj.filename) // read the file from the remote file system
// We could use it directly, but it feels less correct.
//content, err := obj.data.Fs.ReadFile(obj.filename) // open the remote file system
if err != nil {
return errwrap.Wrapf(err, "can't read file `%s`", obj.filename)
}
result := string(content) // convert to string
if obj.result == result {
continue // result didn't change
}
obj.result = result // store new result
case <-obj.closeChan:
return nil
}
select {
case obj.init.Output <- &types.StrValue{
V: obj.result,
}:
case <-obj.closeChan:
return nil
}
}
}
// Close runs some shutdown code for this function and turns off the stream.
func (obj *ReadFileAbsFunc) Close() error {
close(obj.closeChan)
return nil
}

View File

@@ -33,7 +33,7 @@ func testSqrtSuccess(input, sqrt float64) error {
return err
}
if val.Float() != sqrt {
return fmt.Errorf("Invalid output, expected %f, got %f", sqrt, val.Float())
return fmt.Errorf("invalid output, expected %f, got %f", sqrt, val.Float())
}
return nil
}
@@ -42,7 +42,7 @@ func testSqrtError(input float64) error {
inputVal := &types.FloatValue{V: input}
_, err := Sqrt([]types.Value{inputVal})
if err == nil {
return fmt.Errorf("Expected error for input %f, got nil", input)
return fmt.Errorf("expected error for input %f, got nil", input)
}
return nil
}

View File

@@ -35,6 +35,8 @@ func init() {
// ReadFileFunc is a function that reads the full contents from a local file. If
// the file contents change or the file path changes, a new string will be sent.
// Please note that this is different from the readfile function in the deploy
// package.
type ReadFileFunc struct {
init *interfaces.Init
last types.Value // last value received to use for diff

View File

@@ -31,7 +31,7 @@ func testToLower(t *testing.T, input, expected string) {
return
}
if value.Str() != expected {
t.Errorf("Invalid output, expected %s, got %s", expected, value.Str())
t.Errorf("invalid output, expected %s, got %s", expected, value.Str())
}
}

View File

@@ -181,7 +181,7 @@ func init() {
T: types.NewType("func(a str, b str) bool"),
V: func(input []types.Value) (types.Value, error) {
return &types.BoolValue{
V: input[0].Str() == input[1].Str(),
V: input[0].Str() != input[1].Str(),
}, nil
},
})

View File

@@ -234,8 +234,10 @@ func (obj *GAPI) Cli(cliInfo *gapi.CliInfo) (*gapi.Deploy, error) {
logf("init...")
// init and validate the structure of the AST
data := &interfaces.Data{
Fs: localFs, // the local fs!
Base: output.Base, // base dir (absolute path) that this is rooted in
// TODO: add missing fields here if/when needed
Fs: localFs, // the local fs!
FsURI: localFs.URI(), // TODO: is this right?
Base: output.Base, // base dir (absolute path) that this is rooted in
Files: output.Files,
Imports: importVertex,
Metadata: output.Metadata,
@@ -328,6 +330,27 @@ func (obj *GAPI) Cli(cliInfo *gapi.CliInfo) (*gapi.Deploy, error) {
return nil, fmt.Errorf("duplicates in file list found")
}
// Add any missing dirs, so that we don't need to use `MkdirAll`...
// FIXME: It's possible that the dirs get generated upstream, but it's
// not exactly clear where they'd need to get added into the list. If we
// figure that out, we can remove this additional step. It's trickier,
// because adding duplicates isn't desirable either.
//dirs, err := util.MissingMkdirs(files)
//if err != nil {
// // possible programming error
// return nil, errwrap.Wrapf(err, "unexpected missing mkdirs input")
//}
//parents := util.DirParents(output.Base)
//parents = append(parents, output.Base) // include self
//
// And we don't want to include any of the parents above the Base dir...
//for _, x := range dirs {
// if util.StrInList(x, parents) {
// continue
// }
// files = append(files, x)
//}
// sort by depth dependency order! (or mkdir -p all the dirs first)
// TODO: is this natively already in a correctly sorted order?
util.PathSlice(files).Sort() // sort it
@@ -340,8 +363,18 @@ func (obj *GAPI) Cli(cliInfo *gapi.CliInfo) (*gapi.Deploy, error) {
}
if strings.HasSuffix(src, "/") { // it's a dir
// FIXME: I think fixing CopyDirToFs might be better...
if dst != "/" { // XXX: hack, don't nest the copy badly!
out, err := util.RemovePathSuffix(dst)
if err != nil {
// possible programming error
return nil, errwrap.Wrapf(err, "malformed dst dir path: `%s`", dst)
}
dst = out
}
// TODO: add more tests to this (it is actually CopyFs)
if err := gapi.CopyDirToFs(fs, src, dst); err != nil {
// TODO: Used to be: CopyDirToFs, but it had issues...
if err := gapi.CopyDirToFsForceAll(fs, src, dst); err != nil {
return nil, errwrap.Wrapf(err, "can't copy dir from `%s` to `%s`", src, dst)
}
continue
@@ -406,6 +439,7 @@ func (obj *GAPI) LangInit() error {
obj.lang = &Lang{
Fs: fs,
FsURI: obj.InputURI,
Input: input,
Hostname: obj.data.Hostname,
@@ -662,8 +696,10 @@ func (obj *GAPI) Get(getInfo *gapi.GetInfo) error {
logf("init...")
// init and validate the structure of the AST
data := &interfaces.Data{
Fs: localFs, // the local fs!
Base: output.Base, // base dir (absolute path) that this is rooted in
// TODO: add missing fields here if/when needed
Fs: localFs, // the local fs!
FsURI: localFs.URI(), // TODO: is this right?
Base: output.Base, // base dir (absolute path) that this is rooted in
Files: output.Files,
Imports: importVertex,
Metadata: output.Metadata,

View File

@@ -142,6 +142,10 @@ type Data struct {
// system to manage file resources or other aspects.
Fs engine.Fs
// FsURI is the fs URI of the active filesystem. This is useful to pass
// to the engine.World API for further consumption.
FsURI string
// Base directory (absolute path) that the running code is in. If an
// import is found, that's a recursive addition, and naturally for that
// run, this value would be different in the recursion.

View File

@@ -108,3 +108,38 @@ type NamedArgsFunc interface {
// the util.NumToAlpha function when this interface isn't implemented...
ArgGen(int) (string, error)
}
// FuncData is some data that is passed into the function during compilation. It
// helps provide some context about the AST and the deploy for functions that
// might need it.
// TODO: Consider combining this with the existing Data struct or more of it...
// TODO: Do we want to add line/col/file values here, and generalize this?
type FuncData struct {
// Fs represents a handle to the filesystem that we're running on. This
// is necessary for opening files if needed by import statements. The
// file() paths used to get templates or other files from our deploys
// come from here, this is *not* used to interact with the host file
// system to manage file resources or other aspects.
Fs engine.Fs
// FsURI is the fs URI of the active filesystem. This is useful to pass
// to the engine.World API for further consumption.
FsURI string
// Base directory (absolute path) that the running code is in. This is a
// copy of the value from the Expr and Stmt Data struct for Init.
Base string
}
// DataFunc is a function that accepts some context from the AST and deploy
// before Init and runtime. If you don't wish to accept this data, then don't
// implement this method and you won't get any. This is mostly useful for
// special functions that live in core.
// TODO: This could be replaced if a func ever needs a SetScope method...
type DataFunc interface {
Func // implement everything in Func but add the additional requirements
// SetData is used by the language to pass our function some code-level
// context.
SetData(*FuncData)
}
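Functions opt in to this context by implementing SetData, exactly as the deploy functions earlier in this diff do; the compiler calls it before Init and the function stashes the struct for later use. A minimal stand-in sketch of that handshake (local types, not the real lang/interfaces package) is below.

package main

import "fmt"

// funcData is a stand-in for interfaces.FuncData in the diff above.
type funcData struct {
	FsURI string
	Base  string
}

// dataFunc is a stand-in for the DataFunc interface: implement SetData to
// receive the context, or don't and you simply won't get any.
type dataFunc interface {
	SetData(*funcData)
}

type absPathish struct {
	data *funcData
}

// SetData is called before Init; the function just keeps the pointer.
func (obj *absPathish) SetData(data *funcData) { obj.data = data }

func main() {
	var f interface{} = &absPathish{}

	// the engine side: only pass data to functions that ask for it
	if df, ok := f.(dataFunc); ok {
		df.SetData(&funcData{FsURI: "etcdfs://ns/fs", Base: "/mod/"}) // made-up values
	}

	fmt.Printf("%+v\n", f.(*absPathish).data) // &{FsURI:etcdfs://ns/fs Base:/mod/}
}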

View File

@@ -145,31 +145,33 @@ func TestMetadataParse0(t *testing.T) {
return
}
if metadata != nil {
if !reflect.DeepEqual(meta, metadata) {
// double check because DeepEqual is different since the func exists
diff := pretty.Compare(meta, metadata)
if diff != "" { // bonus
t.Errorf("test #%d: metadata did not match expected", index)
// TODO: consider making our own recursive print function
t.Logf("test #%d: actual: \n\n%s\n", index, spew.Sdump(meta))
t.Logf("test #%d: expected: \n\n%s", index, spew.Sdump(metadata))
// more details, for tricky cases:
diffable := &pretty.Config{
Diffable: true,
IncludeUnexported: true,
//PrintStringers: false,
//PrintTextMarshalers: false,
//SkipZeroFields: false,
}
t.Logf("test #%d: actual: \n\n%s\n", index, diffable.Sprint(meta))
t.Logf("test #%d: expected: \n\n%s", index, diffable.Sprint(metadata))
t.Logf("test #%d: diff:\n%s", index, diff)
return
}
}
if metadata == nil {
return
}
if reflect.DeepEqual(meta, metadata) {
return
}
// double check because DeepEqual is different since the func exists
diff := pretty.Compare(meta, metadata)
if diff == "" { // bonus
return
}
t.Errorf("test #%d: metadata did not match expected", index)
// TODO: consider making our own recursive print function
t.Logf("test #%d: actual: \n\n%s\n", index, spew.Sdump(meta))
t.Logf("test #%d: expected: \n\n%s", index, spew.Sdump(metadata))
// more details, for tricky cases:
diffable := &pretty.Config{
Diffable: true,
IncludeUnexported: true,
//PrintStringers: false,
//PrintTextMarshalers: false,
//SkipZeroFields: false,
}
t.Logf("test #%d: actual: \n\n%s\n", index, diffable.Sprint(meta))
t.Logf("test #%d: expected: \n\n%s", index, diffable.Sprint(metadata))
t.Logf("test #%d: diff:\n%s", index, diff)
})
}
}

View File

@@ -35,24 +35,11 @@ type Pos struct {
Filename string // optional source filename, if known
}
// InterpolateInfo contains some information passed around during interpolation.
// TODO: rename to Info if this is moved to its own package.
type InterpolateInfo struct {
// Prefix used for path namespacing if required.
Prefix string
// Debug represents if we're running in debug mode or not.
Debug bool
// Logf is a logger which should be used.
Logf func(format string, v ...interface{})
}
// InterpolateStr interpolates a string and returns the representative AST. This
// particular implementation uses the hashicorp hil library and syntax to do so.
func InterpolateStr(str string, pos *Pos, info *InterpolateInfo) (interfaces.Expr, error) {
if info.Debug {
info.Logf("interpolating: %s", str)
func InterpolateStr(str string, pos *Pos, data *interfaces.Data) (interfaces.Expr, error) {
if data.Debug {
data.Logf("interpolating: %s", str)
}
var line, column int = -1, -1
var filename string
@@ -71,51 +58,58 @@ func InterpolateStr(str string, pos *Pos, info *InterpolateInfo) (interfaces.Exp
if err != nil {
return nil, errwrap.Wrapf(err, "can't parse string interpolation: `%s`", str)
}
if info.Debug {
info.Logf("tree: %+v", tree)
if data.Debug {
data.Logf("tree: %+v", tree)
}
transformInfo := &InterpolateInfo{
Prefix: info.Prefix,
Debug: info.Debug,
transformData := &interfaces.Data{
// TODO: add missing fields here if/when needed
Fs: data.Fs,
FsURI: data.FsURI,
Base: data.Base,
Files: data.Files,
Imports: data.Imports,
Metadata: data.Metadata,
Modules: data.Modules,
Downloader: data.Downloader,
//World: data.World,
Prefix: data.Prefix,
Debug: data.Debug,
Logf: func(format string, v ...interface{}) {
info.Logf("transform: "+format, v...)
data.Logf("transform: "+format, v...)
},
}
result, err := hilTransform(tree, transformInfo)
result, err := hilTransform(tree, transformData)
if err != nil {
return nil, errwrap.Wrapf(err, "error running AST map: `%s`", str)
}
if info.Debug {
info.Logf("transform: %+v", result)
if data.Debug {
data.Logf("transform: %+v", result)
}
// make sure to run the Init on the new expression
return result, errwrap.Wrapf(result.Init(&interfaces.Data{
Debug: info.Debug,
Logf: info.Logf,
}), "init failed")
return result, errwrap.Wrapf(result.Init(data), "init failed")
}
// hilTransform returns the AST equivalent of the hil AST.
func hilTransform(root hilast.Node, info *InterpolateInfo) (interfaces.Expr, error) {
func hilTransform(root hilast.Node, data *interfaces.Data) (interfaces.Expr, error) {
switch node := root.(type) {
case *hilast.Output: // common root node
if info.Debug {
info.Logf("got output type: %+v", node)
if data.Debug {
data.Logf("got output type: %+v", node)
}
if len(node.Exprs) == 0 {
return nil, fmt.Errorf("no expressions found")
}
if len(node.Exprs) == 1 {
return hilTransform(node.Exprs[0], info)
return hilTransform(node.Exprs[0], data)
}
// assumes len > 1
args := []interfaces.Expr{}
for _, n := range node.Exprs {
expr, err := hilTransform(n, info)
expr, err := hilTransform(n, data)
if err != nil {
return nil, errwrap.Wrapf(err, "root failed")
}
@@ -131,12 +125,12 @@ func hilTransform(root hilast.Node, info *InterpolateInfo) (interfaces.Expr, err
return result, nil
case *hilast.Call:
if info.Debug {
info.Logf("got function type: %+v", node)
if data.Debug {
data.Logf("got function type: %+v", node)
}
args := []interfaces.Expr{}
for _, n := range node.Args {
arg, err := hilTransform(n, info)
arg, err := hilTransform(n, data)
if err != nil {
return nil, fmt.Errorf("call failed: %+v", err)
}
@@ -149,8 +143,8 @@ func hilTransform(root hilast.Node, info *InterpolateInfo) (interfaces.Expr, err
}, nil
case *hilast.LiteralNode: // string, int, etc...
if info.Debug {
info.Logf("got literal type: %+v", node)
if data.Debug {
data.Logf("got literal type: %+v", node)
}
switch node.Typex {
@@ -184,8 +178,8 @@ func hilTransform(root hilast.Node, info *InterpolateInfo) (interfaces.Expr, err
}
case *hilast.VariableAccess: // variable lookup
if info.Debug {
info.Logf("got variable access type: %+v", node)
if data.Debug {
data.Logf("got variable access type: %+v", node)
}
return &ExprVar{
Name: node.Name,
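The net effect of this refactor is that InterpolateStr threads the one interfaces.Data value through the whole transform (wrapping only the logger with a prefix) instead of copying selected fields into a separate InterpolateInfo, and the resulting expression is Init'd with that same data. A small stand-in sketch of that shape, with types local to the sketch rather than the real lang package, follows.

package main

import "fmt"

// data is a stand-in for interfaces.Data: one context struct passed everywhere.
type data struct {
	Prefix string
	Debug  bool
	Logf   func(format string, v ...interface{})
}

// transform rewrites parts, reusing the same data but with a prefixed logger,
// much like the transformData construction in the diff above.
func transform(parts []string, d *data) []string {
	sub := &data{
		Prefix: d.Prefix,
		Debug:  d.Debug,
		Logf: func(format string, v ...interface{}) {
			d.Logf("transform: "+format, v...)
		},
	}
	out := []string{}
	for _, p := range parts {
		if sub.Debug {
			sub.Logf("got literal: %s", p)
		}
		out = append(out, d.Prefix+p)
	}
	return out
}

func main() {
	d := &data{
		Prefix: "x-",
		Debug:  true,
		Logf:   func(format string, v ...interface{}) { fmt.Printf(format+"\n", v...) },
	}
	fmt.Println(transform([]string{"a", "b"}, d)) // [x-a x-b]
}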

View File

@@ -185,18 +185,19 @@ func TestInterpolate0(t *testing.T) {
}
}
if !reflect.DeepEqual(iast, exp) {
// double check because DeepEqual is different since the logf exists
diff := pretty.Compare(iast, exp)
if diff != "" { // bonus
t.Errorf("test #%d: AST did not match expected", index)
// TODO: consider making our own recursive print function
t.Logf("test #%d: actual: \n%s", index, spew.Sdump(iast))
t.Logf("test #%d: expected: \n%s", index, spew.Sdump(exp))
t.Logf("test #%d: diff:\n%s", index, diff)
return
}
if reflect.DeepEqual(iast, exp) {
return
}
// double check because DeepEqual is different since the logf exists
diff := pretty.Compare(iast, exp)
if diff == "" { // bonus
return
}
t.Errorf("test #%d: AST did not match expected", index)
// TODO: consider making our own recursive print function
t.Logf("test #%d: actual: \n%s", index, spew.Sdump(iast))
t.Logf("test #%d: expected: \n%s", index, spew.Sdump(exp))
t.Logf("test #%d: diff:\n%s", index, diff)
})
}
}
@@ -388,6 +389,7 @@ func TestInterpolateBasicStmt(t *testing.T) {
ast, fail, exp := tc.ast, tc.fail, tc.exp
data := &interfaces.Data{
// TODO: add missing fields here if/when needed
Debug: testing.Verbose(), // set via the -test.v flag to `go test`
Logf: func(format string, v ...interface{}) {
t.Logf("ast: "+format, v...)
@@ -421,18 +423,19 @@ func TestInterpolateBasicStmt(t *testing.T) {
}
}
if !reflect.DeepEqual(iast, exp) {
// double check because DeepEqual is different since the logf exists
diff := pretty.Compare(iast, exp)
if diff != "" { // bonus
t.Errorf("test #%d: AST did not match expected", index)
// TODO: consider making our own recursive print function
t.Logf("test #%d: actual: \n%s", index, spew.Sdump(iast))
t.Logf("test #%d: expected: \n%s", index, spew.Sdump(exp))
t.Logf("test #%d: diff:\n%s", index, diff)
return
}
if reflect.DeepEqual(iast, exp) {
return
}
// double check because DeepEqual is different since the logf exists
diff := pretty.Compare(iast, exp)
if diff == "" { // bonus
return
}
t.Errorf("test #%d: AST did not match expected", index)
// TODO: consider making our own recursive print function
t.Logf("test #%d: actual: \n%s", index, spew.Sdump(iast))
t.Logf("test #%d: expected: \n%s", index, spew.Sdump(exp))
t.Logf("test #%d: diff:\n%s", index, diff)
})
}
}
@@ -709,6 +712,7 @@ func TestInterpolateBasicExpr(t *testing.T) {
ast, fail, exp := tc.ast, tc.fail, tc.exp
data := &interfaces.Data{
// TODO: add missing fields here if/when needed
Debug: testing.Verbose(), // set via the -test.v flag to `go test`
Logf: func(format string, v ...interface{}) {
t.Logf("ast: "+format, v...)
@@ -742,18 +746,19 @@ func TestInterpolateBasicExpr(t *testing.T) {
}
}
if !reflect.DeepEqual(iast, exp) {
// double check because DeepEqual is different since the logf exists
diff := pretty.Compare(iast, exp)
if diff != "" { // bonus
t.Errorf("test #%d: AST did not match expected", index)
// TODO: consider making our own recursive print function
t.Logf("test #%d: actual: \n%s", index, spew.Sdump(iast))
t.Logf("test #%d: expected: \n%s", index, spew.Sdump(exp))
t.Logf("test #%d: diff:\n%s", index, diff)
return
}
if reflect.DeepEqual(iast, exp) {
return
}
// double check because DeepEqual is different since the logf exists
diff := pretty.Compare(iast, exp)
if diff == "" { // bonus
return
}
t.Errorf("test #%d: AST did not match expected", index)
// TODO: consider making our own recursive print function
t.Logf("test #%d: actual: \n%s", index, spew.Sdump(iast))
t.Logf("test #%d: expected: \n%s", index, spew.Sdump(exp))
t.Logf("test #%d: diff:\n%s", index, diff)
})
}
}

View File

@@ -30,6 +30,7 @@ import (
"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/engine/resources"
"github.com/purpleidea/mgmt/etcd"
"github.com/purpleidea/mgmt/lang/funcs"
"github.com/purpleidea/mgmt/lang/interfaces"
"github.com/purpleidea/mgmt/lang/unification"
@@ -440,6 +441,7 @@ func TestAstFunc0(t *testing.T) {
t.Logf("test #%d: AST: %+v", index, ast)
data := &interfaces.Data{
// TODO: add missing fields here if/when needed
Debug: testing.Verbose(), // set via the -test.v flag to `go test`
Logf: func(format string, v ...interface{}) {
t.Logf("ast: "+format, v...)
@@ -548,7 +550,7 @@ func TestAstFunc1(t *testing.T) {
const magicEmpty = "# empty!"
dir, err := util.TestDirFull()
if err != nil {
t.Errorf("FAIL: could not get tests directory: %+v", err)
t.Errorf("could not get tests directory: %+v", err)
return
}
t.Logf("tests directory is: %s", dir)
@@ -590,7 +592,7 @@ func TestAstFunc1(t *testing.T) {
// build test array automatically from reading the dir
files, err := ioutil.ReadDir(dir)
if err != nil {
t.Errorf("FAIL: could not read through tests directory: %+v", err)
t.Errorf("could not read through tests directory: %+v", err)
return
}
sorted := []string{}
@@ -606,13 +608,13 @@ func TestAstFunc1(t *testing.T) {
graphFileFull := dir + graphFile
info, err := os.Stat(graphFileFull)
if err != nil || info.IsDir() {
t.Errorf("FAIL: missing: %s", graphFile)
t.Errorf("missing: %s", graphFile)
t.Errorf("(err: %+v)", err)
continue
}
content, err := ioutil.ReadFile(graphFileFull)
if err != nil {
t.Errorf("FAIL: could not read graph file: %+v", err)
t.Errorf("could not read graph file: %+v", err)
return
}
str := string(content) // expected graph
@@ -789,7 +791,9 @@ func TestAstFunc1(t *testing.T) {
importGraph.AddVertex(importVertex)
data := &interfaces.Data{
// TODO: add missing fields here if/when needed
Fs: fs,
FsURI: fs.URI(), // TODO: is this right?
Base: output.Base, // base dir (absolute path) the metadata file is in
Files: output.Files, // not really needed here afaict
Imports: importVertex,
@@ -967,7 +971,7 @@ func TestAstFunc2(t *testing.T) {
const magicEmpty = "# empty!"
dir, err := util.TestDirFull()
if err != nil {
t.Errorf("FAIL: could not get tests directory: %+v", err)
t.Errorf("could not get tests directory: %+v", err)
return
}
t.Logf("tests directory is: %s", dir)
@@ -1010,7 +1014,7 @@ func TestAstFunc2(t *testing.T) {
// build test array automatically from reading the dir
files, err := ioutil.ReadDir(dir)
if err != nil {
t.Errorf("FAIL: could not read through tests directory: %+v", err)
t.Errorf("could not read through tests directory: %+v", err)
return
}
sorted := []string{}
@@ -1026,13 +1030,13 @@ func TestAstFunc2(t *testing.T) {
graphFileFull := dir + graphFile
info, err := os.Stat(graphFileFull)
if err != nil || info.IsDir() {
t.Errorf("FAIL: missing: %s", graphFile)
t.Errorf("missing: %s", graphFile)
t.Errorf("(err: %+v)", err)
continue
}
content, err := ioutil.ReadFile(graphFileFull)
if err != nil {
t.Errorf("FAIL: could not read graph file: %+v", err)
t.Errorf("could not read graph file: %+v", err)
return
}
str := string(content) // expected graph
@@ -1135,6 +1139,20 @@ func TestAstFunc2(t *testing.T) {
afs := &afero.Afero{Fs: mmFs} // wrap so that we're implementing ioutil
fs := &util.Fs{Afero: afs}
// implementation of the World API (alternatives can be substituted in)
world := &etcd.World{
//Hostname: hostname,
//Client: etcdClient,
//MetadataPrefix: /fs, // MetadataPrefix
//StoragePrefix: "/storage", // StoragePrefix
// TODO: is this correct? (seems to work for testing)
StandaloneFs: fs, // used for static deploys
Debug: testing.Verbose(), // set via the -test.v flag to `go test`
Logf: func(format string, v ...interface{}) {
logf("world: etcd: "+format, v...)
},
}
// use this variant, so that we don't copy the dir name
// this is the equivalent to running `rsync -a src/ /`
if err := util.CopyDiskContentsToFs(fs, src, "/", false); err != nil {
@@ -1217,9 +1235,11 @@ func TestAstFunc2(t *testing.T) {
importGraph.AddVertex(importVertex)
data := &interfaces.Data{
// TODO: add missing fields here if/when needed
Fs: fs,
Base: output.Base, // base dir (absolute path) the metadata file is in
Files: output.Files, // not really needed here afaict
FsURI: "memmapfs:///", // we're in standalone mode
Base: output.Base, // base dir (absolute path) the metadata file is in
Files: output.Files, // not really needed here afaict
Imports: importVertex,
Metadata: output.Metadata,
Modules: "/" + interfaces.ModuleDirectory, // not really needed here afaict
@@ -1263,7 +1283,7 @@ func TestAstFunc2(t *testing.T) {
}
if fail2 && err == nil {
t.Errorf("test #%d: FAIL", index)
t.Errorf("test #%d: interpolation passed, expected fail", index)
t.Errorf("test #%d: set scope passed, expected fail", index)
return
}
@@ -1354,7 +1374,7 @@ func TestAstFunc2(t *testing.T) {
funcs := &funcs.Engine{
Graph: graph, // not the same as the output graph!
Hostname: "", // NOTE: empty b/c not used
World: nil, // NOTE: nil b/c not used
World: world, // used partially in some tests
Debug: testing.Verbose(), // set via the -test.v flag to `go test`
Logf: func(format string, v ...interface{}) {
logf("funcs: "+format, v...)
@@ -1661,6 +1681,7 @@ func TestAstInterpret0(t *testing.T) {
t.Logf("test #%d: AST: %+v", index, ast)
data := &interfaces.Data{
// TODO: add missing fields here if/when needed
Debug: testing.Verbose(), // set via the -test.v flag to `go test`
Logf: func(format string, v ...interface{}) {
t.Logf("ast: "+format, v...)

View File

@@ -0,0 +1 @@
Vertex: test[t1]

View File

@@ -0,0 +1,5 @@
$x1 = "t1"
class foo {
test $x1 {}
}
include foo

View File

@@ -0,0 +1 @@
Vertex: test[t1]

View File

@@ -0,0 +1,5 @@
include foo
class foo {
test $x1 {}
}
$x1 = "t1"

View File

@@ -0,0 +1 @@
Vertex: test[t1]

View File

@@ -0,0 +1,5 @@
$x1 = "bad1"
class foo($x1) {
test $x1 {}
}
include foo("t1")

View File

@@ -0,0 +1,2 @@
Vertex: test[t1]
Vertex: test[t2]

View File

@@ -0,0 +1,7 @@
$x1 = "t1"
class foo {
test $x1 {}
test $x2 {}
}
include foo
$x2 = "t2"

View File

@@ -0,0 +1,2 @@
Vertex: test[t1: t1]
Vertex: test[t2: t2]

View File

@@ -0,0 +1,7 @@
$x1 = "bad1"
class foo($x1, $x2) {
test "t1: " + $x2 {} # swapped
test "t2: " + $x1 {}
}
include foo($x2, "t1")
$x2 = "t2"

View File

@@ -0,0 +1,3 @@
Vertex: test[t0: t0]
Vertex: test[t1: t1]
Vertex: test[t2: t2]

View File

@@ -0,0 +1,12 @@
$x1 = "bad1"
class foo($x1, $x2) {
include bar
test "t1: " + $x1 {}
test "t2: " + $x2 {}
class bar {
test "t0: " + $x0 {}
}
}
include foo("t1", $x2)
$x2 = "t2"
$x0 = "t0"

View File

@@ -0,0 +1 @@
# err: err2: class `bar` does not exist in this scope

View File

@@ -0,0 +1,9 @@
class foo {
test "t1" {}
class bar { # unused definition
test "t0" {}
}
}
include foo
# This sort of thing is not currently supported, and it's not clear if it ever will be.
include bar # nope!

View File

@@ -0,0 +1 @@
Vertex: test[t1]

View File

@@ -0,0 +1,4 @@
class foo {
test $x1 {} # capture the var
}
$x1 = "t1"

View File

@@ -0,0 +1,4 @@
$x1 = "bad1"
include defs.foo
import "defs.mcl" # out of order for fun

View File

@@ -0,0 +1 @@
# err: err2: recursive reference while setting scope: not a dag

View File

@@ -0,0 +1,6 @@
class c1 {
include c2
}
class c2 {
include c1
}

View File

@@ -0,0 +1 @@
# err: err2: recursive reference while setting scope: not a dag

View File

@@ -0,0 +1,9 @@
class c1($cond) {
test "nope" {}
if $cond {
include c1(false)
} else {
test "done" {}
}
}
include c1(true)

View File

@@ -0,0 +1 @@
Vertex: test[d]

View File

@@ -0,0 +1,10 @@
$msg = "a"
class shadowme($msg) {
$msg = "c"
if true {
$msg = "d" # this is used!
test $msg {}
}
}
include shadowme("b")

View File

@@ -0,0 +1 @@
Vertex: test[c]

View File

@@ -0,0 +1,10 @@
$msg = "a"
class shadowme($msg) {
$msg = "c"
if true {
$msg = "d"
}
test $msg {}
}
include shadowme("b")

Some files were not shown because too many files have changed in this diff.