56 Commits
0.0.2 ... 0.0.3

Author SHA1 Message Date
James Shubin
a6dc81a38e Allow failures in 1.4.x because of etcd problem
The error seen is:

req.Cancel undefined
(type *http.Request has no field or method Cancel)

And is possibly the result of something etcd folks changed. Sadly, the
fast building 1.4.x won't work at the moment :(
2016-03-28 21:28:08 -04:00
James Shubin
81c5ce40d4 Add grouping algorithm
This might not be fully correct, but it seems to be accurate so far. Of
particular note, the vertex order needs to be deterministic for this
algorithm, which isn't provided by a map, since golang intentionally
randomizes it. As a result, this also adds a sorted version of
GetVertices called GetVerticesSorted.
2016-03-28 21:16:03 -04:00
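The deterministic-order requirement in the commit above can be met with a sorted accessor. A minimal sketch of the idea (the `Vertex` and `Graph` shapes here are simplified stand-ins, not mgmt's real types):

```go
package main

import (
	"fmt"
	"sort"
)

// Vertex is a simplified stand-in for mgmt's graph vertex.
type Vertex struct {
	Name string
}

// Graph stores vertices in a map, whose iteration order Go
// intentionally randomizes between runs.
type Graph struct {
	adjacency map[*Vertex]struct{}
}

// byName implements sort.Interface for name-sorting vertices.
type byName []*Vertex

func (b byName) Len() int           { return len(b) }
func (b byName) Swap(i, j int)      { b[i], b[j] = b[j], b[i] }
func (b byName) Less(i, j int) bool { return b[i].Name < b[j].Name }

// GetVerticesSorted returns the vertices in a deterministic,
// name-sorted order, so a grouping algorithm sees the same
// sequence on every run.
func (g *Graph) GetVerticesSorted() []*Vertex {
	var vs []*Vertex
	for v := range g.adjacency {
		vs = append(vs, v)
	}
	sort.Sort(byName(vs))
	return vs
}

func main() {
	g := &Graph{adjacency: map[*Vertex]struct{}{
		{Name: "c"}: {}, {Name: "a"}: {}, {Name: "b"}: {},
	}}
	for _, v := range g.GetVerticesSorted() {
		fmt.Println(v.Name) // always a, b, c regardless of map order
	}
}
```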
James Shubin
c59f45a37b Force process events to be synchronous
This avoids messing up the converged-timeout state!
2016-03-28 21:16:03 -04:00
James Shubin
1b01f908e3 Add resource auto grouping
Sorry for the size of this patch, I was busy hacking and plumbing away
and it got out of hand! I'm allowing this because there doesn't seem to
be anyone hacking away on parts of the code that this would break, since
the resource code is fairly stable in this change. In particular, it
revisits and refreshes some areas of the code that didn't see anything
new or innovative since the project first started. I've gotten rid of a
lot of cruft, and in particular cleaned up some things that I didn't
know how to do better before! Here's hoping I'll continue to learn and
have more to improve upon in the future! (Well let's not hope _too_ hard
though!)

The logical goal of this patch was to make logical grouping of resources
possible. For example, it might be more efficient to group three package
installations into a single transaction, instead of having to run three
separate transactions. This is because a package installation typically
has an initial one-time, per-run cost which shouldn't need to be
repeated.

Another future goal would be to group file resources sharing a common
base path under a common recursive fanotify watcher. Since this depends
on fanotify capabilities first, this hasn't been implemented yet, but
could be a useful method of reducing the number of separate watches
needed, since there is a finite limit.

It's worth mentioning that grouping resources typically _reduces_ the
parallel execution capability of a particular graph, but depending on
the cost/benefit tradeoff, this might be preferential. I'd submit it's
almost universally beneficial for pkg resources.

This monster patch includes:
* the autogroup feature
* the grouping interface
* a placeholder algorithm
* an extensive test case infrastructure to test grouping algorithms
* a move of some base resource methods into pgraph refactoring
* some config/compile clean ups to remove code duplication
* b64 encoding/decoding improvements
* a rename of the yaml "res" entries to "kind" (more logical)
* some docs
* small fixes
* and more!
2016-03-28 20:54:41 -04:00
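The package-transaction argument in the commit above amounts to simple arithmetic: each transaction pays a fixed setup cost once, so grouping n packages into one transaction pays it once instead of n times. A toy model (the numbers are purely illustrative):

```go
package main

import "fmt"

// cost estimates total time for installing n packages split across t
// transactions: each transaction pays a fixed setup cost (e.g.
// downloading and verifying package metadata), plus a per-package cost.
func cost(fixed, perPkg float64, n, t int) float64 {
	return float64(t)*fixed + float64(n)*perPkg
}

func main() {
	// three separate transactions vs. one grouped transaction
	fmt.Println(cost(10, 1, 3, 3)) // fixed cost paid three times
	fmt.Println(cost(10, 1, 3, 1)) // fixed cost paid once
}
```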
James Shubin
9720812a78 Graph cleanups to make way for the autogroup feature! 2016-03-28 20:54:41 -04:00
James Shubin
05b4066ba6 Add initial plumbing for autogroups
This adds some of the API changes and improvements to the pkg resource
so that it can make use of this feature.
2016-03-28 20:54:41 -04:00
James Shubin
50c458b6cc Support "noarch" style packages in arch check
Forgot about these :)
2016-03-28 20:54:41 -04:00
James Shubin
2ab6d61a61 Avoid errors on golint test with travis
No idea why this happens on travis, and I'm done wasting time trying to
figure it out.
2016-03-28 20:54:41 -04:00
James Shubin
b77a39bdff Add golint test improvements so we detect branches more cleverly
Also fix up a count reversal, hopefully I was just tunnel visioned
before and this is the correct fix!
2016-03-28 05:27:28 -04:00
James Shubin
d3f7432861 Update context since etcd upstream moved to vendor/ dir
Follow suit with what etcd upstream is doing.
2016-03-28 05:27:28 -04:00
Daniele Sluijters
7f3ef5bf85 docs: Add Features and autoedges section 2016-03-14 09:20:53 +01:00
James Shubin
659fb3eb82 Split the Res interface into a Base sub piece
I didn't know this was possible until I was browsing through some golang
docs recently. This should hopefully make it clearer which methods are
common to all resources (and don't need to be reimplemented each time)
and which ones are unique and need to be created for each resource.
2016-03-14 01:48:58 -04:00
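The split described above uses Go interface embedding. A hedged sketch of the pattern (method names are illustrative, not mgmt's exact API):

```go
package main

import "fmt"

// Base holds the methods common to every resource; they are
// implemented once on a shared struct and embedded. (Illustrative
// names, not mgmt's exact API.)
type Base interface {
	Kind() string
	SetKind(string)
}

// Res is the full resource interface: the embedded Base plus the
// methods each resource must implement itself.
type Res interface {
	Base
	CheckApply(apply bool) (bool, error)
}

// BaseRes implements Base once, for embedding in concrete resources.
type BaseRes struct {
	kind string
}

func (b *BaseRes) Kind() string     { return b.kind }
func (b *BaseRes) SetKind(k string) { b.kind = k }

// NoopRes gets Kind/SetKind for free via embedding and only supplies
// the resource-specific method.
type NoopRes struct {
	BaseRes
}

func (n *NoopRes) CheckApply(apply bool) (bool, error) {
	return true, nil // nothing to do, state is always OK
}

func main() {
	var r Res = &NoopRes{}
	r.SetKind("noop")
	fmt.Println(r.Kind())
}
```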
James Shubin
d1315bb092 Refactor out noop resource into a separate file
I still think it's a useful resource for demonstrating concepts and
perhaps for other future purposes.
2016-03-14 01:28:44 -04:00
James Shubin
b4ac0e2e7c Update README and docs with information about new blog post 2016-03-14 01:03:35 -04:00
James Shubin
bfe619272e Update make deps script to make it better for debian folks
Hopefully this should make it easier for debian users, or for users who
run the script in the wrong directory.
2016-03-11 18:45:28 -05:00
Xavi S.B
963f025011 Fix broken link to graph.yaml section 2016-03-10 20:42:47 +01:00
James Shubin
b8cdcaeb75 Switch bc for awk
This is useful in dumb environments *cough* travis, that apparently
don't have bc as a default :(
2016-03-10 04:23:29 -05:00
James Shubin
6b6dc75152 Add a threshold based golint test case
This lets some golint errors in, but fails if you're over a certain
threshold. The current threshold of 15% (of LOC) is arbitrary and
subject to change. The algorithm should be extended to check a range of
commits, although it's unclear how to detect what range of commits makes
up a patch set.
2016-03-10 04:06:57 -05:00
James Shubin
23647445d7 Golint fixes 2016-03-10 03:29:51 -05:00
James Shubin
e60dda5027 Add some of the pkg and svc autoedge logic
This adds another chunk of it, and makes some other small fixes.
2016-03-10 03:29:51 -05:00
James Shubin
f39551952f Add pkg auto edge basics with packagekit improvements
This is a monster patch that finally gets the iterative pkg auto edges
working the way they should. For each file, as soon as one matches, we
don't want to keep adding dependencies on other file objects under that
tree structure. This reduces the number of necessary edges considerably,
and allows the graph to run more concurrently.
2016-03-10 03:29:51 -05:00
James Shubin
a9538052bf Split out some of the pkg CheckApply logic
This avoids code duplication since we need to reuse this logic
elsewhere, and as well this adds support for resolving the mapping for
multiple numbers of packages at the same time.

Note that on occasion, InstallPackages has hung waiting for Finished or
Destroy signals which never came! Increasing the PkBufferSize seemed to
help, but it's not clear if this is a permanent fix, or if more advanced
debugging and/or workarounds are needed. Is it possible that not
*waiting* to receive the destroy signals is causing this?
2016-03-10 03:29:51 -05:00
James Shubin
267d5179f5 Exec variable should use pre-existing error variable
Small fixup
2016-03-10 03:29:51 -05:00
James Shubin
10b8c93da4 Make resource "kind" determination more obvious
By adding the "kind" to the base resource, it is still identifiable even
when the resource-specific elements are gone in calls that are only
defined in the base resource. This is also more logical for building
resources!

This also switches resources to use an Init() method. This will be
useful for when resources have more complex initialization to do.
2016-03-10 03:29:50 -05:00
James Shubin
c999f0c2cd Add initial "autoedge" plumbing
This allows for resources to automatically add necessary edges to the
graph so that the event system doesn't have to work overtime due to
sub-optimal execution order.
2016-03-10 03:29:50 -05:00
James Shubin
54615dc03b Actually update to 1.6 in travis 2016-03-10 03:22:36 -05:00
James Shubin
9aea95ce85 Bump go versions in travis
Failures might be caused by inotify not working correctly in travis.
This is probably because travis has things mounted over NFS.
https://github.com/travis-ci/travis-ci/issues/2342
2016-02-28 18:44:19 -05:00
James Shubin
80f48291f3 Update Makefile plumbing and associated misc things 2016-02-28 16:57:18 -05:00
James Shubin
1a164cee3e Fix file resource regression
I added a regression to the file resource. This was caused by two
different bugs that I added when I switched the API to use checkapply. I
would have caught these issues, except my test cases *also* had a bug! I
think I've fixed all three issues now.

Lastly, when running on travis, the tests behave very differently! Some
of the tests actually fail, and it's not clear why. As a result, we had
to disable them! I guess you get what you pay for.
2016-02-28 02:18:46 -05:00
James Shubin
da494cdc7c Clean up the examples/ directory
Naming things numerically isn't very obvious. This is better for now.
2016-02-26 02:32:13 -05:00
James Shubin
06635dfa75 Rename GetRes() which is not descriptive, to Kind()
This used to be GetType(), but since now things are "resources", we want
to know what "kind" they are, since asking what "type" they are is
confusing, and makes less logical sense than "Kind".
2016-02-23 00:24:43 -05:00
James Shubin
a56fb3c8cd Update items for Jenkins tests 2016-02-22 20:00:15 -05:00
James Shubin
2dc3c62bbd The svc resource should use the private system bus of course
I didn't realize there was a difference until I was debugging an issue
with the pkg resource, which also uses dbus!
2016-02-22 19:44:10 -05:00
James Shubin
0339d0caa8 Improve go vet testing 2016-02-22 19:43:55 -05:00
James Shubin
3b5678dd91 Add package (pkg) resource
This is based on PackageKit, which means events, *and* we automatically
get support for any of the backends that PackageKit supports. This means
dpkg and rpm are both first-class citizens! Many other backends will
surely work, although thorough testing is left as an exercise to the
reader, or to someone who would like to write more test cases!

Unfortunately at the moment, there are a few upstream PackageKit bugs
which cause us issues, but those have been apparently resolved upstream.
If you experience issues with an old version of PackageKit, test if it
is working correctly before blaming mgmt :)

In parallel, mgmt might increase the testing surface for PackageKit, so
hopefully this makes it more robust for everyone involved!

Lastly, I'd like to point out that many great things that are typically
used for servers do start in the GNOME desktop world. Help support your
GNOME GNU/Linux desktop today!
2016-02-22 19:05:24 -05:00
James Shubin
82ff34234d Fix Makefile warnings on Travis 2016-02-22 01:36:57 -05:00
James Shubin
f3d1369764 Add some fixes for building in tip or with golang 1.6 2016-02-22 01:30:12 -05:00
James Shubin
ed61444d82 Fix golang 1.6 vet issue 2016-02-22 01:05:27 -05:00
James Shubin
ce0d68a8ba Make sure to error on test
This works around set -e not working in this scenario.
2016-02-22 00:59:36 -05:00
James Shubin
74aadbadb8 Update README with small fixes 2016-02-22 00:37:13 -05:00
James Shubin
58f41eddd9 Change API from StateOK/Apply to CheckApply()
This simplifies the API, and reduces code duplication for most
resources. It became obvious when writing the pkg resource, that the two
operations should really be one. Hopefully this will last! Comments
welcome.
2016-02-21 22:07:49 -05:00
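The merged API above can be read as one method with a dry-run flag. A sketch of the convention (the exact return semantics here are my reading of the change, so treat them as an assumption, not a spec):

```go
package main

import "fmt"

// checker abstracts the state test and the fix, so the CheckApply
// convention can be shown without a real resource.
type checker struct {
	stateOK bool // does the current state already match?
	applied bool // did we end up changing anything?
}

// CheckApply merges the old StateOK/Apply pair: with apply=false it
// only checks; with apply=true it also fixes. Returning (true, nil)
// means no changes were needed. (Assumed semantics based on this
// commit message.)
func (c *checker) CheckApply(apply bool) (bool, error) {
	if c.stateOK {
		return true, nil // already converged, nothing to do
	}
	if !apply {
		return false, nil // check-only: report drift, change nothing
	}
	c.stateOK = true // pretend the fix succeeded
	c.applied = true
	return false, nil // false signals that work was done
}

func main() {
	c := &checker{stateOK: false}
	ok, _ := c.CheckApply(true)
	fmt.Println(ok, c.applied) // drift was found and fixed
}
```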
James Shubin
4726445ec4 Add great article by felix frank
This is very important work that is doubly hard because the API isn't
stable yet. If you see felix, buy him a beverage.

PS: Sorry felix that I just broke the api. I'll send you the patch to
fix it!
2016-02-21 19:42:21 -05:00
James Shubin
3a85384377 Rename type to resource (res) and service to svc
Naming the resources "type" was a stupid mistake, and is a huge source
of confusion when also talking about real types. Fix this before it gets
out of hand.
2016-02-21 15:51:52 -05:00
James Shubin
d20b529508 Update README with new video
Also did some small formatting fixes
2016-02-20 14:35:52 -05:00
Rob Wilson
7199f558e8 build requires (at least) stringer - add package which contains missing dependency 2016-02-20 10:42:26 +00:00
James Shubin
674cb24a1a Update link to video 2016-02-18 16:00:57 -05:00
James Shubin
02c7336315 Add link to slides from Walter Heck 2016-02-17 01:57:17 -05:00
James Shubin
cde052d819 Add DevConf.cz video recording 2016-02-16 14:26:11 -05:00
James Shubin
989cc8d236 Add more mgmtlove tags 2016-02-16 13:06:22 -05:00
James Shubin
1186d63653 Add a link to my article about debugging golang 2016-02-15 17:37:06 -05:00
James Shubin
6e68d6dda0 Update web articles 2016-02-15 14:18:59 -05:00
James Shubin
7d876701b3 Add test case to match headers 2016-02-14 14:24:26 -05:00
James Shubin
dbbb483853 Switch to CentOS-7.1 for OMV tests.
There is an occasional hang when booting up F23 on Vagrant which causes
jenkins to wait indefinitely. This might be fixed by getting F23 back to
using eth0/eth1 in the base image.
2016-02-12 17:36:51 -05:00
James Shubin
85e9473d56 Make debugging easier when running on Jenkins 2016-02-12 15:55:31 -05:00
James Shubin
427d424707 Update docs to refer to #mgmtlove patches 2016-02-12 15:01:15 -05:00
James Shubin
f90c5fafa4 This now works. w00t 2016-02-12 14:48:00 -05:00
86 changed files with 6246 additions and 1503 deletions


@@ -1,15 +1,17 @@
language: go
go:
- 1.4.3
- 1.5.2
- 1.5.3
- 1.6
- tip
dist: trusty
sudo: required
sudo: false
before_install: 'git fetch --unshallow'
install: 'make deps'
script: 'make test'
matrix:
allow_failures:
- go: tip
- go: 1.4.3
notifications:
irc:
channels:


@@ -30,13 +30,16 @@ along with this program. If not, see <http://www.gnu.org/licenses/>.
1. [Overview](#overview)
2. [Project description - What the project does](#project-description)
3. [Setup - Getting started with mgmt](#setup)
4. [Usage/FAQ - Notes on usage and frequently asked questions](#usage-and-frequently-asked-questions)
5. [Reference - Detailed reference](#reference)
* [graph.yaml](#graph.yaml)
4. [Features - All things mgmt can do](#features)
* [Autoedges - Automatic resource relationships](#autoedges)
* [Autogrouping - Automatic resource grouping](#autogrouping)
5. [Usage/FAQ - Notes on usage and frequently asked questions](#usage-and-frequently-asked-questions)
6. [Reference - Detailed reference](#reference)
* [Graph definition file](#graph-definition-file)
* [Command line](#command-line)
6. [Examples - Example configurations](#examples)
7. [Development - Background on module development and reporting bugs](#development)
8. [Authors - Authors and contact information](#authors)
7. [Examples - Example configurations](#examples)
8. [Development - Background on module development and reporting bugs](#development)
9. [Authors - Authors and contact information](#authors)
##Overview
@@ -49,6 +52,13 @@ The mgmt tool is a distributed, event driven, config management tool, that
supports parallel execution, and librarification to be used as the management
foundation in and for, new and existing software.
For more information, you may like to read some blog posts from the author:
* [Next generation config mgmt](https://ttboj.wordpress.com/2016/01/18/next-generation-configuration-mgmt/)
* [Automatic edges in mgmt](https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/)
There is also an [introductory video](https://www.youtube.com/watch?v=GVhpPF0j-iE&html5=1) available.
##Setup
During this prototype phase, the tool can be run out of the source directory.
@@ -57,6 +67,51 @@ get started. Beware that this _can_ cause data loss. Understand what you're
doing first, or perform these actions in a virtual environment such as the one
provided by [Oh-My-Vagrant](https://github.com/purpleidea/oh-my-vagrant).
##Features
This section details the numerous features of mgmt and some caveats you might
need to be aware of.
###Autoedges
Automatic edges, or AutoEdges, are the mechanism by which mgmt
automatically creates dependencies between resources for you. For
example, since mgmt can discover which files are installed by a
package, it will automatically ensure that any file resource you
declare that matches a file installed by your package resource is only
processed after the package is installed.
####Controlling autoedges
Though autoedges are likely to be very helpful and save you from having
to declare all dependencies explicitly, there are cases where this
behaviour is undesirable.
Some distributions allow package installations to automatically start the
service they ship. This can be problematic in the case of packages like MySQL
as there are configuration options that need to be set before MySQL is ever
started for the first time (or you'll need to wipe the data directory). In
order to handle this situation you can disable autoedges per resource and
explicitly declare that you want `my.cnf` to be written to disk before the
installation of the `mysql-server` package.
You can disable autoedges for a resource by setting the `autoedge` key on
the meta attributes of that resource to `false`.
###Autogrouping
Automatic grouping, or AutoGroup, is the mechanism by which mgmt
automatically groups multiple resource vertices into a single one. This is
particularly useful for grouping multiple package resources into a single
resource, since the installations can then happen together in a single
transaction. This saves a lot of time, because package resources typically
have a large fixed cost of running (downloading and verifying the package
repo), and grouped resources share that fixed cost. This grouping feature
can be used for other use cases too.
You can disable autogrouping for a resource by setting the `autogroup` key on
the meta attributes of that resource to `false`.
##Usage and frequently asked questions
(Send your questions as a patch to this FAQ! I'll review it, merge it, and
respond by commit with the answer.)
@@ -92,11 +147,11 @@ information on these options, please view the source at:
If you feel that a well used option needs documenting here, please patch it!
###Overview of reference
* [graph.yaml](#graph.yaml): Main graph definition file.
* [Graph definition file](#graph-definition-file): Main graph definition file.
* [Command line](#command-line): Command line parameters.
###graph.yaml
This is the compiled graph definition file. The format is currently
###Graph definition file
graph.yaml is the compiled graph definition file. The format is currently
undocumented, but by looking through the [examples/](https://github.com/purpleidea/mgmt/tree/master/examples)
you can probably figure out most of it, as it's fairly intuitive.


@@ -16,26 +16,27 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
SHELL = /bin/bash
.PHONY: all version program path deps run race build clean test format docs rpmbuild rpm srpm spec tar upload upload-sources upload-srpms upload-rpms copr
.PHONY: all version program path deps run race build clean test format docs rpmbuild mkdirs rpm srpm spec tar upload upload-sources upload-srpms upload-rpms copr
.SILENT: clean
SVERSION := $(shell git describe --match '[0-9]*\.[0-9]*\.[0-9]*' --tags --dirty --always)
VERSION := $(shell git describe --match '[0-9]*\.[0-9]*\.[0-9]*' --tags --abbrev=0)
PROGRAM := $(shell basename --suffix=-$(VERSION) $(notdir $(CURDIR)))
PROGRAM := $(shell echo $(notdir $(CURDIR)) | cut -f1 -d"-")
OLDGOLANG := $(shell go version | grep -E 'go1.3|go1.4')
ifeq ($(VERSION),$(SVERSION))
RELEASE = 1
else
RELEASE = untagged
endif
ARCH = $(shell arch)
SPEC = rpmbuild/SPECS/mgmt.spec
SOURCE = rpmbuild/SOURCES/mgmt-$(VERSION).tar.bz2
SRPM = rpmbuild/SRPMS/mgmt-$(VERSION)-$(RELEASE).src.rpm
SRPM_BASE = mgmt-$(VERSION)-$(RELEASE).src.rpm
RPM = rpmbuild/RPMS/mgmt-$(VERSION)-$(RELEASE).$(ARCH).rpm
SPEC = rpmbuild/SPECS/$(PROGRAM).spec
SOURCE = rpmbuild/SOURCES/$(PROGRAM)-$(VERSION).tar.bz2
SRPM = rpmbuild/SRPMS/$(PROGRAM)-$(VERSION)-$(RELEASE).src.rpm
SRPM_BASE = $(PROGRAM)-$(VERSION)-$(RELEASE).src.rpm
RPM = rpmbuild/RPMS/$(PROGRAM)-$(VERSION)-$(RELEASE).$(ARCH).rpm
USERNAME := $(shell cat ~/.config/copr 2>/dev/null | grep username | awk -F '=' '{print $$2}' | tr -d ' ')
SERVER = 'dl.fedoraproject.org'
REMOTE_PATH = 'pub/alt/$(USERNAME)/mgmt'
REMOTE_PATH = 'pub/alt/$(USERNAME)/$(PROGRAM)'
all: docs
@@ -59,20 +60,20 @@ run:
race:
find -maxdepth 1 -type f -name '*.go' -not -name '*_test.go' | xargs go run -race -ldflags "-X main.program=$(PROGRAM) -X main.version=$(SVERSION)"
build: mgmt
build: $(PROGRAM)
mgmt: main.go
@echo "Building: $(PROGRAM), version: $(SVERSION)."
$(PROGRAM): main.go
@echo "Building: $(PROGRAM), version: $(SVERSION)..."
go generate
# avoid equals sign in old golang versions eg in: -X foo=bar
if go version | grep -qE 'go1.3|go1.4'; then \
go build -ldflags "-X main.program $(PROGRAM) -X main.version $(SVERSION)" -o mgmt; \
else \
go build -ldflags "-X main.program=$(PROGRAM) -X main.version=$(SVERSION)" -o mgmt; \
fi
ifneq ($(OLDGOLANG),)
@# avoid equals sign in old golang versions eg in: -X foo=bar
go build -ldflags "-X main.program $(PROGRAM) -X main.version $(SVERSION)" -o $(PROGRAM);
else
go build -ldflags "-X main.program=$(PROGRAM) -X main.version=$(SVERSION)" -o $(PROGRAM);
endif
clean:
[ ! -e mgmt ] || rm mgmt
[ ! -e $(PROGRAM) ] || rm $(PROGRAM)
rm -f *_stringer.go # generated by `go generate`
test:
@@ -82,10 +83,10 @@ format:
find -type f -name '*.go' -not -path './old/*' -not -path './tmp/*' -exec gofmt -w {} \;
find -type f -name '*.yaml' -not -path './old/*' -not -path './tmp/*' -not -path './omv.yaml' -exec ruby -e "require 'yaml'; x=YAML.load_file('{}').to_yaml.each_line.map(&:rstrip).join(10.chr)+10.chr; File.open('{}', 'w').write x" \;
docs: mgmt-documentation.pdf
docs: $(PROGRAM)-documentation.pdf
mgmt-documentation.pdf: DOCUMENTATION.md
pandoc DOCUMENTATION.md -o 'mgmt-documentation.pdf'
$(PROGRAM)-documentation.pdf: DOCUMENTATION.md
pandoc DOCUMENTATION.md -o '$(PROGRAM)-documentation.pdf'
#
# build aliases
@@ -116,21 +117,21 @@ upload: upload-sources upload-srpms upload-rpms
$(RPM): $(SPEC) $(SOURCE)
@echo Running rpmbuild -bb...
rpmbuild --define '_topdir $(shell pwd)/rpmbuild' -bb $(SPEC) && \
mv rpmbuild/RPMS/$(ARCH)/mgmt-$(VERSION)-$(RELEASE).*.rpm $(RPM)
mv rpmbuild/RPMS/$(ARCH)/$(PROGRAM)-$(VERSION)-$(RELEASE).*.rpm $(RPM)
$(SRPM): $(SPEC) $(SOURCE)
@echo Running rpmbuild -bs...
rpmbuild --define '_topdir $(shell pwd)/rpmbuild' -bs $(SPEC)
# renaming is not needed because we aren't using the dist variable
#mv rpmbuild/SRPMS/mgmt-$(VERSION)-$(RELEASE).*.src.rpm $(SRPM)
#mv rpmbuild/SRPMS/$(PROGRAM)-$(VERSION)-$(RELEASE).*.src.rpm $(SRPM)
#
# spec
#
$(SPEC): rpmbuild/ mgmt.spec.in
$(SPEC): rpmbuild/ spec.in
@echo Running templater...
#cat mgmt.spec.in > $(SPEC)
sed -e s/__VERSION__/$(VERSION)/ -e s/__RELEASE__/$(RELEASE)/ < mgmt.spec.in > $(SPEC)
#cat spec.in > $(SPEC)
sed -e s/__PROGRAM__/$(PROGRAM)/ -e s/__VERSION__/$(VERSION)/ -e s/__RELEASE__/$(RELEASE)/ < spec.in > $(SPEC)
# append a changelog to the .spec file
git log --format="* %cd %aN <%aE>%n- (%h) %s%d%n" --date=local | sed -r 's/[0-9]+:[0-9]+:[0-9]+ //' >> $(SPEC)
@@ -140,7 +141,7 @@ $(SPEC): rpmbuild/ mgmt.spec.in
$(SOURCE): rpmbuild/
@echo Running git archive...
# use HEAD if tag doesn't exist yet, so that development is easier...
git archive --prefix=mgmt-$(VERSION)/ -o $(SOURCE) $(VERSION) 2> /dev/null || (echo 'Warning: $(VERSION) does not exist. Using HEAD instead.' && git archive --prefix=mgmt-$(VERSION)/ -o $(SOURCE) HEAD)
git archive --prefix=$(PROGRAM)-$(VERSION)/ -o $(SOURCE) $(VERSION) 2> /dev/null || (echo 'Warning: $(VERSION) does not exist. Using HEAD instead.' && git archive --prefix=$(PROGRAM)-$(VERSION)/ -o $(SOURCE) HEAD)
# TODO: if git archive had a --submodules flag this would be easier!
@echo Running git archive submodules...
# i thought i would need --ignore-zeros, but it doesn't seem necessary!
@@ -149,14 +150,14 @@ $(SOURCE): rpmbuild/
temp="$${temp#\'}"; \
path=$$temp; \
[ "$$path" = "" ] && continue; \
(cd $$path && git archive --prefix=mgmt-$(VERSION)/$$path/ HEAD > $$p/rpmbuild/tmp.tar && tar --concatenate --file=$$p/$(SOURCE) $$p/rpmbuild/tmp.tar && rm $$p/rpmbuild/tmp.tar); \
(cd $$path && git archive --prefix=$(PROGRAM)-$(VERSION)/$$path/ HEAD > $$p/rpmbuild/tmp.tar && tar --concatenate --file=$$p/$(SOURCE) $$p/rpmbuild/tmp.tar && rm $$p/rpmbuild/tmp.tar); \
done
# TODO: ensure that each sub directory exists
rpmbuild/:
mkdir -p rpmbuild/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS}
rpmbuild:
mkdirs:
mkdir -p rpmbuild/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS}
#


@@ -16,7 +16,7 @@ If you have a well phrased question that might benefit others, consider asking i
## Quick start:
* Either get the golang dependencies on your own, or run `make deps` if you're comfortable with how we install them.
* Run `make build` to get a fresh built `mgmt` binary.
* Run `make build` to get a freshly built `mgmt` binary.
* Run `cd $(mktemp --tmpdir -d tmp.XXX) && etcd` to get etcd running. The `mgmt` software will do this automatically for you in the future.
* Run `time ./mgmt run --file examples/graph0.yaml --converged-timeout=1` to try out a very simple example!
* To run continuously in the default mode of operation, omit the `--converged-timeout` option.
@@ -31,10 +31,12 @@ Please see: [DOCUMENTATION.md](DOCUMENTATION.md) or [PDF](https://pdfdoc-purplei
## Roadmap:
Please see: [TODO.md](TODO.md) for a list of upcoming work and TODO items.
Please get involved by working on one of these items or by suggesting something else!
Feel free to grab one of the straightforward [#mgmtlove](https://github.com/purpleidea/mgmt/labels/mgmtlove) issues if you're a first time contributor to the project or if you're unsure about what to hack on!
## Bugs:
Please set the `DEBUG` constant in [main.go](https://github.com/purpleidea/mgmt/blob/master/main.go) to `true`, and post the logs when you report the [issue](https://github.com/purpleidea/mgmt/issues).
Bonus points if you provide a [shell](https://github.com/purpleidea/mgmt/tree/master/test/shell) or [OMV](https://github.com/purpleidea/mgmt/tree/master/test/omv) reproducible test case.
Feel free to read my article on [debugging golang programs](https://ttboj.wordpress.com/2016/02/15/debugging-golang-programs/).
## Dependencies:
* golang 1.4 or higher (required, available in most distros)
@@ -59,7 +61,13 @@ We'd love to have your patches! Please send them by email, or as a pull request.
## On the web:
* Introductory blog post: [https://ttboj.wordpress.com/2016/01/18/next-generation-configuration-mgmt/](https://ttboj.wordpress.com/2016/01/18/next-generation-configuration-mgmt/)
* Julian Dunn at Cfgmgmtcamp 2016 [https://www.youtube.com/watch?v=kfF9IATUask&t=1949&html5=1](https://www.youtube.com/watch?v=kfF9IATUask&t=1949&html5=1)
* Introductory recording from DevConf.cz 2016: [https://www.youtube.com/watch?v=GVhpPF0j-iE&html5=1](https://www.youtube.com/watch?v=GVhpPF0j-iE&html5=1)
* Introductory recording from CfgMgmtCamp.eu 2016: [https://www.youtube.com/watch?v=fNeooSiIRnA&html5=1](https://www.youtube.com/watch?v=fNeooSiIRnA&html5=1)
* Julian Dunn at CfgMgmtCamp.eu 2016: [https://www.youtube.com/watch?v=kfF9IATUask&t=1949&html5=1](https://www.youtube.com/watch?v=kfF9IATUask&t=1949&html5=1)
* Walter Heck at CfgMgmtCamp.eu 2016: [http://www.slideshare.net/olindata/configuration-management-time-for-a-4th-generation/3](http://www.slideshare.net/olindata/configuration-management-time-for-a-4th-generation/3)
* Marco Marongiu on mgmt: [http://syslog.me/2016/02/15/leap-or-die/](http://syslog.me/2016/02/15/leap-or-die/)
* Felix Frank on puppet to mgmt "transpiling" [https://ffrank.github.io/features/2016/02/18/from-catalog-to-mgmt/](https://ffrank.github.io/features/2016/02/18/from-catalog-to-mgmt/)
* Blog post on automatic edges and the pkg resource: [https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/](https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/)
##


@@ -3,11 +3,11 @@ If you're looking for something to do, look here!
Let us know if you're working on one of the items.
## Package resource
- [ ] base type [bug](https://github.com/purpleidea/mgmt/issues/11)
- [ ] base resource [bug](https://github.com/purpleidea/mgmt/issues/11)
- [ ] dnf blocker [bug](https://github.com/hughsie/PackageKit/issues/110)
- [ ] install signal blocker [bug](https://github.com/hughsie/PackageKit/issues/109)
## File resource
## File resource [bug](https://github.com/purpleidea/mgmt/issues/13) [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
- [ ] ability to make/delete folders
- [ ] recursive argument (can recursively watch/modify contents)
- [ ] force argument (can cause switch from file <-> folder)
@@ -17,7 +17,7 @@ Let us know if you're working on one of the items.
- [ ] base resource improvements
## Timer resource
- [ ] base resource
- [ ] base resource [bug](https://github.com/purpleidea/mgmt/issues/15) [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
- [ ] reset on recompile
- [ ] increment algorithm (linear, exponential, etc...)
@@ -37,7 +37,7 @@ Let us know if you're working on one of the items.
- [ ] better error/retry handling
- [ ] resource grouping
- [ ] automatic dependency adding (eg: packagekit file dependencies)
- [ ] rpm package target in Makefile
- [ ] mgmt systemd service file [bug](https://github.com/purpleidea/mgmt/issues/12) [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
- [ ] deb package target in Makefile
- [ ] reproducible builds
- [ ] add your suggestions!

508
config.go

@@ -19,19 +19,21 @@ package main
import (
"errors"
"fmt"
"gopkg.in/yaml.v2"
"io/ioutil"
"log"
"reflect"
"strings"
)
type collectorTypeConfig struct {
Type string `yaml:"type"`
type collectorResConfig struct {
Kind string `yaml:"kind"`
Pattern string `yaml:"pattern"` // XXX: Not Implemented
}
type vertexConfig struct {
Type string `yaml:"type"`
Kind string `yaml:"kind"`
Name string `yaml:"name"`
}
@@ -43,13 +45,14 @@ type edgeConfig struct {
type GraphConfig struct {
Graph string `yaml:"graph"`
Types struct {
Noop []NoopType `yaml:"noop"`
File []FileType `yaml:"file"`
Service []ServiceType `yaml:"service"`
Exec []ExecType `yaml:"exec"`
} `yaml:"types"`
Collector []collectorTypeConfig `yaml:"collect"`
Resources struct {
Noop []*NoopRes `yaml:"noop"`
Pkg []*PkgRes `yaml:"pkg"`
File []*FileRes `yaml:"file"`
Svc []*SvcRes `yaml:"svc"`
Exec []*ExecRes `yaml:"exec"`
} `yaml:"resources"`
Collector []collectorResConfig `yaml:"collect"`
Edges []edgeConfig `yaml:"edges"`
Comment string `yaml:"comment"`
}
@@ -80,89 +83,79 @@ func ParseConfigFromFile(filename string) *GraphConfig {
return &config
}
// XXX: we need to fix this function so that it either fails without modifying
// the graph, passes successfully and modifies it, or basically panics i guess
// this way an invalid compilation can leave the old graph running, and we
// don't modify a partial graph. So we really need to validate, and then perform
// whatever actions are necessary
// finding some way to do this on a copy of the graph, and then do a graph diff
// and merge the new data into the old graph would be more appropriate, in
// particular if we can ensure the graph merge can't fail. As for the putting
// of stuff into etcd, we should probably store the operations to complete in
// the new graph, and keep retrying until it succeeds, thus blocking any new
// etcd operations until that time.
func UpdateGraphFromConfig(config *GraphConfig, hostname string, g *Graph, etcdO *EtcdWObject) bool {
// NewGraphFromConfig returns a new graph from existing input, such as from the
// existing graph, and a GraphConfig struct.
func (g *Graph) NewGraphFromConfig(config *GraphConfig, etcdO *EtcdWObject, hostname string) (*Graph, error) {
var NoopMap = make(map[string]*Vertex)
var FileMap = make(map[string]*Vertex)
var ServiceMap = make(map[string]*Vertex)
var ExecMap = make(map[string]*Vertex)
var graph *Graph // new graph to return
if g == nil { // FIXME: how can we check for an empty graph?
graph = NewGraph("Graph") // give graph a default name
} else {
graph = g.Copy() // same vertices, since they're pointers!
}
var lookup = make(map[string]map[string]*Vertex)
lookup["noop"] = NoopMap
lookup["file"] = FileMap
lookup["service"] = ServiceMap
lookup["exec"] = ExecMap
//log.Printf("%+v", config) // debug
g.SetName(config.Graph) // set graph name
// TODO: if defined (somehow)...
graph.SetName(config.Graph) // set graph name
var keep []*Vertex // list of vertex which are the same in new graph
for _, t := range config.Types.Noop {
obj := NewNoopType(t.Name)
v := g.GetVertexMatch(obj)
if v == nil { // no match found
v = NewVertex(obj)
g.AddVertex(v) // call standalone in case not part of an edge
// use reflection to avoid duplicating code... better options welcome!
value := reflect.Indirect(reflect.ValueOf(config.Resources))
vtype := value.Type()
for i := 0; i < vtype.NumField(); i++ { // number of fields in struct
name := vtype.Field(i).Name // string of field name
field := value.FieldByName(name)
iface := field.Interface() // interface type of value
slice := reflect.ValueOf(iface)
// XXX: should we just drop these everywhere and have the kind strings be all lowercase?
kind := FirstToUpper(name)
if DEBUG {
log.Printf("Config: Processing: %v...", kind)
}
NoopMap[obj.Name] = v // used for constructing edges
keep = append(keep, v) // append
for j := 0; j < slice.Len(); j++ { // loop through resources of same kind
x := slice.Index(j).Interface()
obj, ok := x.(Res) // convert to Res type
if !ok {
return nil, fmt.Errorf("Error: Config: Can't convert: %v of type: %T to Res.", x, x)
}
for _, t := range config.Types.File {
if _, exists := lookup[kind]; !exists {
lookup[kind] = make(map[string]*Vertex)
}
// XXX: should we export based on a @@ prefix, or a metaparam
// like exported => true || exported => (host pattern)||(other pattern?)
if strings.HasPrefix(t.Name, "@@") { // exported resource
// add to etcd storage...
t.Name = t.Name[2:] //slice off @@
if !etcdO.EtcdPut(hostname, t.Name, "file", t) {
log.Printf("Problem exporting file resource %v.", t.Name)
continue
if !strings.HasPrefix(obj.GetName(), "@@") { // exported resource
// XXX: we don't have a way of knowing if any of the
// metaparams are undefined, and as a result to set the
// defaults that we want! I hate the go yaml parser!!!
v := graph.GetVertexMatch(obj)
if v == nil { // no match found
obj.Init()
v = NewVertex(obj)
graph.AddVertex(v) // call standalone in case not part of an edge
}
lookup[kind][obj.GetName()] = v // used for constructing edges
keep = append(keep, v) // append
} else {
obj := NewFileType(t.Name, t.Path, t.Dirname, t.Basename, t.Content, t.State)
v := g.GetVertexMatch(obj)
if v == nil { // no match found
v = NewVertex(obj)
g.AddVertex(v) // call standalone in case not part of an edge
}
FileMap[obj.Name] = v // used for constructing edges
keep = append(keep, v) // append
}
// XXX: do this in a different function...
// add to etcd storage...
obj.SetName(obj.GetName()[2:]) //slice off @@
data, err := ResToB64(obj)
if err != nil {
return nil, fmt.Errorf("Config: Could not encode %v resource: %v, error: %v", kind, obj.GetName(), err)
}
for _, t := range config.Types.Service {
obj := NewServiceType(t.Name, t.State, t.Startup)
v := g.GetVertexMatch(obj)
if v == nil { // no match found
v = NewVertex(obj)
g.AddVertex(v) // call standalone in case not part of an edge
if !etcdO.EtcdPut(hostname, obj.GetName(), kind, data) {
return nil, fmt.Errorf("Config: Could not export %v resource: %v", kind, obj.GetName())
}
ServiceMap[obj.Name] = v // used for constructing edges
keep = append(keep, v) // append
}
for _, t := range config.Types.Exec {
obj := NewExecType(t.Name, t.Cmd, t.Shell, t.Timeout, t.WatchCmd, t.WatchShell, t.IfCmd, t.IfShell, t.PollInt, t.State)
v := g.GetVertexMatch(obj)
if v == nil { // no match found
v = NewVertex(obj)
g.AddVertex(v) // call standalone in case not part of an edge
}
ExecMap[obj.Name] = v // used for constructing edges
keep = append(keep, v) // append
}
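The export path above serializes each resource to a base64 string via ResToB64 before storing it in etcd. A minimal sketch of how such an encoder pair might work using only the standard library (the `res` struct and function names here are simplified stand-ins, not the project's actual types):

```go
package main

import (
	"bytes"
	"encoding/base64"
	"encoding/gob"
	"fmt"
)

// res is a simplified stand-in for a resource; the real Res is an interface.
type res struct {
	Name  string
	State string
}

// resToB64 gob-encodes the resource and wraps it in base64 for safe storage.
func resToB64(r *res) (string, error) {
	var b bytes.Buffer
	if err := gob.NewEncoder(&b).Encode(r); err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(b.Bytes()), nil
}

// b64ToRes reverses the encoding to recover the resource.
func b64ToRes(s string) (*res, error) {
	data, err := base64.StdEncoding.DecodeString(s)
	if err != nil {
		return nil, err
	}
	var r res
	if err := gob.NewDecoder(bytes.NewReader(data)).Decode(&r); err != nil {
		return nil, err
	}
	return &r, nil
}

func main() {
	str, err := resToB64(&res{Name: "file2a", State: "exists"})
	if err != nil {
		panic(err)
	}
	out, err := b64ToRes(str)
	if err != nil {
		panic(err)
	}
	fmt.Println(out.Name, out.State)
}
```

This is also why the diff adds `gob.Register(&ExecRes{})` in each resource's `init()`: gob needs concrete types registered before it can decode values stored behind an interface.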
// lookup from etcd graph
@@ -171,59 +164,372 @@ func UpdateGraphFromConfig(config *GraphConfig, hostname string, g *Graph, etcdO
nodes, ok := etcdO.EtcdGet()
if ok {
for _, t := range config.Collector {
// XXX: use t.Type and optionally t.Pattern to collect from etcd storage
log.Printf("Collect: %v; Pattern: %v", t.Type, t.Pattern)
// XXX: should we just drop these everywhere and have the kind strings be all lowercase?
kind := FirstToUpper(t.Kind)
for _, x := range etcdO.EtcdGetProcess(nodes, "file") {
var obj *FileType
if B64ToObj(x, &obj) != true {
log.Printf("Collect: File: %v not collected!", x)
// use t.Kind and optionally t.Pattern to collect from etcd storage
log.Printf("Collect: %v; Pattern: %v", kind, t.Pattern)
for _, str := range etcdO.EtcdGetProcess(nodes, kind) {
obj, err := B64ToRes(str)
if err != nil {
log.Printf("B64ToRes failed to decode: %v", err)
log.Printf("Collect: %v: not collected!", kind)
continue
}
if t.Pattern != "" { // XXX: currently the pattern for files can only override the Dirname variable :P
obj.Dirname = t.Pattern
if t.Pattern != "" { // XXX: simplistic for now
obj.CollectPattern(t.Pattern) // obj.Dirname = t.Pattern
}
log.Printf("Collect: File: %v collected!", obj.GetName())
log.Printf("Collect: %v[%v]: collected!", kind, obj.GetName())
// XXX: similar to file add code:
v := g.GetVertexMatch(obj)
// XXX: similar to other resource add code:
if _, exists := lookup[kind]; !exists {
lookup[kind] = make(map[string]*Vertex)
}
v := graph.GetVertexMatch(obj)
if v == nil { // no match found
obj.Init() // initialize go channels or things won't work!!!
v = NewVertex(obj)
g.AddVertex(v) // call standalone in case not part of an edge
graph.AddVertex(v) // call standalone in case not part of an edge
}
FileMap[obj.GetName()] = v // used for constructing edges
lookup[kind][obj.GetName()] = v // used for constructing edges
keep = append(keep, v) // append
}
}
}
// get rid of any vertices we shouldn't "keep" (that aren't in new graph)
for _, v := range g.GetVertices() {
if !HasVertex(v, keep) {
for _, v := range graph.GetVertices() {
if !VertexContains(v, keep) {
// wait for exit before starting new graph!
v.Type.SendEvent(eventExit, true, false)
g.DeleteVertex(v)
v.SendEvent(eventExit, true, false)
graph.DeleteVertex(v)
}
}
for _, e := range config.Edges {
if _, ok := lookup[e.From.Type]; !ok {
return false
if _, ok := lookup[FirstToUpper(e.From.Kind)]; !ok {
return nil, fmt.Errorf("Can't find 'from' resource!")
}
if _, ok := lookup[e.To.Type]; !ok {
return false
if _, ok := lookup[FirstToUpper(e.To.Kind)]; !ok {
return nil, fmt.Errorf("Can't find 'to' resource!")
}
if _, ok := lookup[e.From.Type][e.From.Name]; !ok {
return false
if _, ok := lookup[FirstToUpper(e.From.Kind)][e.From.Name]; !ok {
return nil, fmt.Errorf("Can't find 'from' name!")
}
if _, ok := lookup[e.To.Type][e.To.Name]; !ok {
return false
if _, ok := lookup[FirstToUpper(e.To.Kind)][e.To.Name]; !ok {
return nil, fmt.Errorf("Can't find 'to' name!")
}
g.AddEdge(lookup[e.From.Type][e.From.Name], lookup[e.To.Type][e.To.Name], NewEdge(e.Name))
graph.AddEdge(lookup[FirstToUpper(e.From.Kind)][e.From.Name], lookup[FirstToUpper(e.To.Kind)][e.To.Name], NewEdge(e.Name))
}
return graph, nil
}
// add edges to the vertex in a graph based on if it matches a uuid list
func (g *Graph) addEdgesByMatchingUUIDS(v *Vertex, uuids []ResUUID) []bool {
// search for edges and see what matches!
var result []bool
// loop through each uuid, and see if it matches any vertex
for _, uuid := range uuids {
var found = false
// uuid is a ResUUID object
for _, vv := range g.GetVertices() { // search
if v == vv { // skip self
continue
}
if DEBUG {
log.Printf("Compile: AutoEdge: Match: %v[%v] with UUID: %v[%v]", vv.Kind(), vv.GetName(), uuid.Kind(), uuid.GetName())
}
// we must match to an effective UUID for the resource,
// that is to say, the name value of a res is a helpful
// handle, but it is not necessarily a unique identity!
// remember, resources can return multiple UUID's each!
if UUIDExistsInUUIDs(uuid, vv.GetUUIDs()) {
// add edge from: vv -> v
if uuid.Reversed() {
txt := fmt.Sprintf("AutoEdge: %v[%v] -> %v[%v]", vv.Kind(), vv.GetName(), v.Kind(), v.GetName())
log.Printf("Compile: Adding %v", txt)
g.AddEdge(vv, v, NewEdge(txt))
} else { // edges go the "normal" way, eg: pkg resource
txt := fmt.Sprintf("AutoEdge: %v[%v] -> %v[%v]", v.Kind(), v.GetName(), vv.Kind(), vv.GetName())
log.Printf("Compile: Adding %v", txt)
g.AddEdge(v, vv, NewEdge(txt))
}
found = true
break
}
}
result = append(result, found)
}
return result
}
// add auto edges to graph
func (g *Graph) AutoEdges() {
log.Println("Compile: Adding AutoEdges...")
for _, v := range g.GetVertices() { // for each vertex's autoedges
if !v.GetMeta().AutoEdge { // is the metaparam true?
continue
}
autoEdgeObj := v.AutoEdges()
if autoEdgeObj == nil {
log.Printf("%v[%v]: Config: No auto edges were found!", v.Kind(), v.GetName())
continue // next vertex
}
for { // while the autoEdgeObj has more uuids to add...
uuids := autoEdgeObj.Next() // get some!
if uuids == nil {
log.Printf("%v[%v]: Config: The auto edge list is empty!", v.Kind(), v.GetName())
break // inner loop
}
if DEBUG {
log.Println("Compile: AutoEdge: UUIDS:")
for i, u := range uuids {
log.Printf("Compile: AutoEdge: UUID%d: %v", i, u)
}
}
// match and add edges
result := g.addEdgesByMatchingUUIDS(v, uuids)
// report back, and find out if we should continue
if !autoEdgeObj.Test(result) {
break
}
}
}
}
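The Next/Test loop in AutoEdges follows a simple iterator protocol: the resource hands out batches of candidate UUIDs, the graph reports back which ones matched, and the resource decides whether to continue. A stripped-down sketch of that protocol (the `autoEdger` type and its stop-on-first-match policy are invented here for illustration, not the project's actual semantics):

```go
package main

import "fmt"

// autoEdger is a minimal stand-in for the AutoEdge iterator protocol:
// Next returns a batch of candidates (nil when exhausted), Test receives
// the match results and reports whether the caller should keep going.
type autoEdger struct {
	batches [][]string
	pos     int
}

func (a *autoEdger) Next() []string {
	if a.pos >= len(a.batches) {
		return nil
	}
	b := a.batches[a.pos]
	a.pos++
	return b
}

func (a *autoEdger) Test(results []bool) bool {
	for _, ok := range results {
		if ok {
			return false // a match was found; stop searching
		}
	}
	return a.pos < len(a.batches) // keep going while batches remain
}

func main() {
	ae := &autoEdger{batches: [][]string{{"/tmp/foo/bar"}, {"/tmp/foo"}}}
	for {
		uuids := ae.Next()
		if uuids == nil {
			break
		}
		// pretend only "/tmp/foo" matches an existing vertex
		results := make([]bool, len(uuids))
		for i, u := range uuids {
			results[i] = u == "/tmp/foo"
		}
		fmt.Println(uuids, results)
		if !ae.Test(results) {
			break
		}
	}
}
```

The protocol keeps the matching policy inside the resource: the graph only mechanically matches and reports, exactly as in the `AutoEdges` runner above.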
// AutoGrouper is the required interface to implement for an autogroup algorithm
type AutoGrouper interface {
// listed in the order these are typically called in...
name() string // friendly identifier
init(*Graph) error // only call once
vertexNext() (*Vertex, *Vertex, error) // mostly algorithmic
vertexCmp(*Vertex, *Vertex) error // can we merge these ?
vertexMerge(*Vertex, *Vertex) (*Vertex, error) // vertex merge fn to use
edgeMerge(*Edge, *Edge) *Edge // edge merge fn to use
vertexTest(bool) (bool, error) // call until false
}
// baseGrouper is the base type for implementing the AutoGrouper interface
type baseGrouper struct {
graph *Graph // store a pointer to the graph
vertices []*Vertex // cached list of vertices
i int
j int
done bool
}
// name provides a friendly name for the logs to see
func (ag *baseGrouper) name() string {
return "baseGrouper"
}
// init is called only once and before using other AutoGrouper interface methods
// the name method is the only exception: call it any time without side effects!
func (ag *baseGrouper) init(g *Graph) error {
if ag.graph != nil {
return fmt.Errorf("The init method has already been called!")
}
ag.graph = g // pointer
ag.vertices = ag.graph.GetVerticesSorted() // cache in deterministic order!
ag.i = 0
ag.j = 0
if len(ag.vertices) == 0 { // empty graph
ag.done = true
return nil
}
return nil
}
// vertexNext is a simple iterator that loops through vertex (pair) combinations
// an intelligent algorithm would selectively offer only valid pairs of vertices
// these should satisfy logical grouping requirements for the autogroup designs!
// the desired algorithms can override, but keep this method as a base iterator!
func (ag *baseGrouper) vertexNext() (v1, v2 *Vertex, err error) {
// this does a for v... { for w... { return v, w }} but stepwise!
l := len(ag.vertices)
if ag.i < l {
v1 = ag.vertices[ag.i]
}
if ag.j < l {
v2 = ag.vertices[ag.j]
}
// in case the vertex was deleted
if !ag.graph.HasVertex(v1) {
v1 = nil
}
if !ag.graph.HasVertex(v2) {
v2 = nil
}
// two nested loops...
if ag.j < l {
ag.j++
}
if ag.j == l {
ag.j = 0
if ag.i < l {
ag.i++
}
if ag.i == l {
ag.done = true
}
}
return
}
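The stepwise nested loop in vertexNext can be isolated into a tiny sketch: each call advances (i, j) as if we were inside `for i { for j { ... } }`, and `done` flips once every ordered pair has been offered (names here are mine, not the project's):

```go
package main

import "fmt"

// pairIter yields every ordered pair (i, j) over n items, one call at a
// time, mimicking baseGrouper.vertexNext's stepwise iteration.
type pairIter struct {
	n, i, j int
	done    bool
}

func (p *pairIter) next() (i, j int, ok bool) {
	if p.done || p.n == 0 {
		return 0, 0, false
	}
	i, j = p.i, p.j
	p.j++ // inner loop advances first...
	if p.j == p.n {
		p.j = 0
		p.i++ // ...then the outer loop
		if p.i == p.n {
			p.done = true
		}
	}
	return i, j, true
}

func main() {
	it := &pairIter{n: 2}
	for {
		i, j, ok := it.next()
		if !ok {
			break
		}
		fmt.Printf("(%d,%d) ", i, j) // (0,0) (0,1) (1,0) (1,1)
	}
}
```

Smarter groupers can override this iterator to offer only viable pairs, which is exactly what nonReachabilityGrouper does below.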
func (ag *baseGrouper) vertexCmp(v1, v2 *Vertex) error {
if v1 == nil || v2 == nil {
return fmt.Errorf("Vertex is nil!")
}
if v1 == v2 { // skip yourself
return fmt.Errorf("Vertices are the same!")
}
if v1.Kind() != v2.Kind() { // we must group similar kinds
// TODO: maybe future resources won't need this limitation?
return fmt.Errorf("The two resources aren't the same kind!")
}
// someone doesn't want to group!
if !v1.GetMeta().AutoGroup || !v2.GetMeta().AutoGroup {
return fmt.Errorf("One of the autogroup flags is false!")
}
if v1.Res.IsGrouped() { // already grouped!
return fmt.Errorf("Already grouped!")
}
if len(v2.Res.GetGroup()) > 0 { // already has children grouped!
return fmt.Errorf("Already has groups!")
}
if !v1.Res.GroupCmp(v2.Res) { // resource groupcmp failed!
return fmt.Errorf("The GroupCmp failed!")
}
return nil // success
}
func (ag *baseGrouper) vertexMerge(v1, v2 *Vertex) (v *Vertex, err error) {
// NOTE: it's important to use w.Res instead of w, b/c
// the w by itself is the *Vertex obj, not the *Res obj
// which is contained within it! They both satisfy the
// Res interface, which is why both will compile! :(
err = v1.Res.GroupRes(v2.Res) // GroupRes skips stupid groupings
return // success or fail, and no need to merge the actual vertices!
}
func (ag *baseGrouper) edgeMerge(e1, e2 *Edge) *Edge {
return e1 // noop
}
// vertexTest processes the results of the grouping for the algorithm to know
// return an error if something went horribly wrong, and bool false to stop
func (ag *baseGrouper) vertexTest(b bool) (bool, error) {
// NOTE: this particular baseGrouper version doesn't track what happens
// because since we iterate over every pair, we don't care which merge!
if ag.done {
return false, nil
}
return true, nil
}
// TODO: this algorithm may not be correct in all cases. replace if needed!
type nonReachabilityGrouper struct {
baseGrouper // "inherit" what we want, and reimplement the rest
}
func (ag *nonReachabilityGrouper) name() string {
return "nonReachabilityGrouper"
}
// this algorithm relies on the observation that if there's a path from a to b,
// then they *can't* be merged (b/c of the existing dependency), therefore we
// merge anything that *doesn't* satisfy this condition or that of the reverse!
func (ag *nonReachabilityGrouper) vertexNext() (v1, v2 *Vertex, err error) {
for {
v1, v2, err = ag.baseGrouper.vertexNext() // get all iterable pairs
if err != nil {
log.Fatalf("Error running autoGroup(vertexNext): %v", err)
}
if v1 != v2 { // ignore self cmp early (perf optimization)
// if NOT reachable, they're viable...
out1 := ag.graph.Reachability(v1, v2)
out2 := ag.graph.Reachability(v2, v1)
if len(out1) == 0 && len(out2) == 0 {
return // return v1 and v2, they're viable
}
}
// if we got here, it means we're skipping over this candidate!
if ok, err := ag.baseGrouper.vertexTest(false); err != nil {
log.Fatalf("Error running autoGroup(vertexTest): %v", err)
} else if !ok {
return nil, nil, nil // done!
}
// the vertexTest passed, so loop and try with a new pair...
}
}
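The non-reachability rule can be sketched independently: with a plain DFS over an adjacency list, two vertices are viable grouping candidates only when neither can reach the other, so merging them cannot collapse an existing dependency. This is a hypothetical standalone model, not the project's Graph type:

```go
package main

import "fmt"

// reachable reports whether there is a directed path from a to b.
func reachable(adj map[string][]string, a, b string) bool {
	if a == b {
		return true
	}
	seen := map[string]bool{}
	var dfs func(v string) bool
	dfs = func(v string) bool {
		if v == b {
			return true
		}
		seen[v] = true
		for _, w := range adj[v] {
			if !seen[w] && dfs(w) {
				return true
			}
		}
		return false
	}
	return dfs(a)
}

// groupable mirrors the nonReachabilityGrouper condition: merge only if
// neither vertex can reach the other.
func groupable(adj map[string][]string, a, b string) bool {
	return !reachable(adj, a, b) && !reachable(adj, b, a)
}

func main() {
	adj := map[string][]string{"a": {"b"}, "b": {"c"}}
	fmt.Println(groupable(adj, "a", "c")) // a reaches c via b: not groupable
	fmt.Println(groupable(adj, "a", "d")) // independent: groupable
}
```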
// autoGroup is the mechanical auto group "runner" that runs the interface spec
func (g *Graph) autoGroup(ag AutoGrouper) chan string {
strch := make(chan string) // output log messages here
go func(strch chan string) {
strch <- fmt.Sprintf("Compile: Grouping: Algorithm: %v...", ag.name())
if err := ag.init(g); err != nil {
log.Fatalf("Error running autoGroup(init): %v", err)
}
for {
var v, w *Vertex
v, w, err := ag.vertexNext() // get pair to compare
if err != nil {
log.Fatalf("Error running autoGroup(vertexNext): %v", err)
}
merged := false
// save names since they change during the runs
vStr := fmt.Sprintf("%s", v) // valid even if it is nil
wStr := fmt.Sprintf("%s", w)
if err := ag.vertexCmp(v, w); err != nil { // cmp ?
if DEBUG {
strch <- fmt.Sprintf("Compile: Grouping: !GroupCmp for: %s into %s", wStr, vStr)
}
// remove grouped vertex and merge edges (res is safe)
} else if err := g.VertexMerge(v, w, ag.vertexMerge, ag.edgeMerge); err != nil { // merge...
strch <- fmt.Sprintf("Compile: Grouping: !VertexMerge for: %s into %s", wStr, vStr)
} else { // success!
strch <- fmt.Sprintf("Compile: Grouping: Success for: %s into %s", wStr, vStr)
merged = true // woo
}
// did these get used?
if ok, err := ag.vertexTest(merged); err != nil {
log.Fatalf("Error running autoGroup(vertexTest): %v", err)
} else if !ok {
break // done!
}
}
close(strch)
return
}(strch) // call function
return strch
}
// AutoGroup runs the auto grouping on the graph and prints out log messages
func (g *Graph) AutoGroup() {
// receive log messages from channel...
// this allows test cases to avoid printing them when they're unwanted!
// TODO: this algorithm may not be correct in all cases. replace if needed!
for str := range g.autoGroup(&nonReachabilityGrouper{}) {
log.Println(str)
}
}


@@ -27,7 +27,7 @@ import (
"syscall"
)
// XXX: it would be great if we could reuse code between this and the file type
// XXX: it would be great if we could reuse code between this and the file resource
// XXX: patch this to submit it as part of go-fsnotify if they're interested...
func ConfigWatch(file string) chan bool {
ch := make(chan bool)
@@ -61,7 +61,7 @@ func ConfigWatch(file string) chan bool {
} else if err == syscall.ENOSPC {
// XXX: occasionally: no space left on device,
// XXX: probably due to lack of inotify watches
log.Printf("Lack of watches for config(%v) error: %+v", file, err.Error) // 0x408da0
log.Printf("Out of inotify watches for config(%v)", file)
log.Fatal(err)
} else {
log.Printf("Unknown config(%v) error:", file)
@@ -138,7 +138,7 @@ func ConfigWatch(file string) chan bool {
}
case err := <-watcher.Errors:
log.Println("error:", err)
log.Printf("error: %v", err)
log.Fatal(err)
}

etcd.go

@@ -19,8 +19,8 @@ package main
import (
"fmt"
etcd_context "github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/net/context"
etcd "github.com/coreos/etcd/client"
etcd_context "golang.org/x/net/context"
"log"
"math"
"strings"
@@ -191,8 +191,8 @@ func (etcdO *EtcdWObject) EtcdWatch() chan etcdMsg {
}
}
// FIXME: we get events on key/type/value changes for
// each type directory... ignore the non final ones...
// FIXME: we get events on key/res/value changes for
// each res directory... ignore the non final ones...
// IOW, ignore everything except for the value or some
// field which gets set last... this could be the max count field thing...
@@ -207,20 +207,14 @@ func (etcdO *EtcdWObject) EtcdWatch() chan etcdMsg {
}
// helper function to store our data in etcd
func (etcdO *EtcdWObject) EtcdPut(hostname, key, typ string, obj interface{}) bool {
func (etcdO *EtcdWObject) EtcdPut(hostname, key, res string, data string) bool {
kapi := etcdO.GetKAPI()
output, ok := ObjToB64(obj)
if !ok {
log.Printf("Etcd: Could not encode %v key.", key)
return false
}
path := fmt.Sprintf("/exported/%s/types/%s/type", hostname, key)
_, err := kapi.Set(etcd_context.Background(), path, typ, nil)
path := fmt.Sprintf("/exported/%s/resources/%s/res", hostname, key)
_, err := kapi.Set(etcd_context.Background(), path, res, nil)
// XXX validate...
path = fmt.Sprintf("/exported/%s/types/%s/value", hostname, key)
resp, err := kapi.Set(etcd_context.Background(), path, output, nil)
path = fmt.Sprintf("/exported/%s/resources/%s/value", hostname, key)
resp, err := kapi.Set(etcd_context.Background(), path, data, nil)
if err != nil {
if cerr, ok := err.(*etcd.ClusterError); ok {
// not running or disconnected
@@ -240,7 +234,7 @@ func (etcdO *EtcdWObject) EtcdPut(hostname, key, typ string, obj interface{}) bo
// lookup /exported/ node hierarchy
func (etcdO *EtcdWObject) EtcdGet() (etcd.Nodes, bool) {
kapi := etcdO.GetKAPI()
// key structure is /exported/<hostname>/types/...
// key structure is /exported/<hostname>/resources/...
resp, err := kapi.Get(etcd_context.Background(), "/exported/", &etcd.GetOptions{Recursive: true})
if err != nil {
return nil, false // not found
@@ -248,8 +242,8 @@ func (etcdO *EtcdWObject) EtcdGet() (etcd.Nodes, bool) {
return resp.Node.Nodes, true
}
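EtcdPut now writes two keys per exported resource, one holding the resource kind and one holding the encoded payload. A tiny helper makes the layout explicit (format strings copied from the diff; the helper itself is mine):

```go
package main

import "fmt"

// exportPaths builds the two etcd keys used per exported resource:
// one for the resource kind string, one for the base64 payload.
func exportPaths(hostname, name string) (resPath, valuePath string) {
	resPath = fmt.Sprintf("/exported/%s/resources/%s/res", hostname, name)
	valuePath = fmt.Sprintf("/exported/%s/resources/%s/value", hostname, name)
	return
}

func main() {
	r, v := exportPaths("hostA", "file2a")
	fmt.Println(r) // /exported/hostA/resources/file2a/res
	fmt.Println(v) // /exported/hostA/resources/file2a/value
}
```

EtcdGet then reads `/exported/` recursively and walks this hierarchy back down, which is what EtcdGetProcess filters on.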
func (etcdO *EtcdWObject) EtcdGetProcess(nodes etcd.Nodes, typ string) []string {
//path := fmt.Sprintf("/exported/%s/types/", h)
func (etcdO *EtcdWObject) EtcdGetProcess(nodes etcd.Nodes, res string) []string {
//path := fmt.Sprintf("/exported/%s/resources/", h)
top := "/exported/"
log.Printf("Etcd: Get: %+v", nodes) // Get().Nodes.Nodes
var output []string
@@ -261,20 +255,20 @@ func (etcdO *EtcdWObject) EtcdGetProcess(nodes etcd.Nodes, typ string) []string
host := x.Key[len(top):]
//log.Printf("Get().Nodes[%v]: %+v ==> %+v", -1, host, x.Nodes)
//log.Printf("Get().Nodes[%v]: %+v ==> %+v", i, x.Key, x.Nodes)
types, ok := EtcdGetChildNodeByKey(x, "types")
resources, ok := EtcdGetChildNodeByKey(x, "resources")
if !ok {
continue
}
for _, y := range types.Nodes { // loop through types
for _, y := range resources.Nodes { // loop through resources
//key := y.Key # UUID?
//log.Printf("Get(%v): TYPE[%v]", host, y.Key)
t, ok := EtcdGetChildNodeByKey(y, "type")
//log.Printf("Get(%v): RES[%v]", host, y.Key)
t, ok := EtcdGetChildNodeByKey(y, "res")
if !ok {
continue
}
if typ != "" && typ != t.Value {
if res != "" && res != t.Value {
continue
} // filter based on type
} // filter based on res
v, ok := EtcdGetChildNodeByKey(y, "value") // B64ToObj this
if !ok {


@@ -21,16 +21,19 @@ package main
type eventName int
const (
eventExit eventName = iota
eventNil eventName = iota
eventExit
eventStart
eventPause
eventPoke
eventBackPoke
)
type Resp chan bool
type Event struct {
Name eventName
Resp chan bool // channel to send an ack response on, nil to skip
Resp Resp // channel to send an ack response on, nil to skip
//Wg *sync.WaitGroup // receiver barrier to Wait() for everyone else on
Msg string // some words for fun
Activity bool // did something interesting happen?
@@ -49,6 +52,23 @@ func (event *Event) NACK() {
}
}
// NewResp is just a helper to return the right type of response channel
func NewResp() Resp {
resp := make(chan bool)
return resp
}
// ACKWait waits for a +ive Ack from a Resp channel
func (resp Resp) ACKWait() {
for {
value := <-resp
// wait until true value
if value {
return
}
}
}
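The Resp helper above is a plain bool channel, so a sender can block in ACKWait until a receiver ACKs, while NACKs (false values) are ignored. A self-contained sketch of the same pattern (simplified; the real Event carries more fields):

```go
package main

import "fmt"

// resp mirrors the Resp helper: a bool channel where true means ACK.
type resp chan bool

func (r resp) ack()  { r <- true }
func (r resp) nack() { r <- false }

// ackWait blocks until a true value arrives, skipping any false values,
// just like Resp.ACKWait in the diff.
func (r resp) ackWait() {
	for {
		if <-r {
			return
		}
	}
}

func main() {
	r := resp(make(chan bool))
	go func() {
		r.nack() // a negative answer is skipped...
		r.ack()  // ...and only the ACK unblocks the waiter
	}()
	r.ackWait()
	fmt.Println("acked")
}
```

This kind of synchronous ACK is what lets the "Force process events to be synchronous" change avoid racing the converged-timeout state.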
// get the activity value
func (event *Event) GetActivity() bool {
return event.Activity

examples/autoedges1.yaml

@@ -0,0 +1,19 @@
---
graph: mygraph
resources:
file:
- name: file1
meta:
autoedge: true
path: "/tmp/foo/bar/f1"
content: |
i am f1
state: exists
- name: file2
meta:
autoedge: true
path: "/tmp/foo/"
content: |
i am f2
state: exists
edges: []

examples/autoedges2.yaml

@@ -0,0 +1,24 @@
---
graph: mygraph
resources:
file:
- name: file1
meta:
autoedge: true
path: "/etc/drbd.conf"
content: |
# this is an mgmt test
state: exists
- name: file2
meta:
autoedge: true
path: "/tmp/foo/"
content: |
i am f2
state: exists
pkg:
- name: drbd-utils
meta:
autoedge: true
state: installed
edges: []

examples/autoedges3.yaml

@@ -0,0 +1,29 @@
---
graph: mygraph
resources:
pkg:
- name: drbd-utils
meta:
autoedge: true
state: installed
file:
- name: file1
meta:
autoedge: true
path: "/etc/drbd.conf"
content: |
# this is an mgmt test
state: exists
- name: file2
meta:
autoedge: true
path: "/etc/drbd.d/"
content: |
i am a directory
state: exists
svc:
- name: drbd
meta:
autoedge: true
state: stopped
edges: []

examples/autogroup1.yaml

@@ -0,0 +1,21 @@
---
graph: mygraph
resources:
pkg:
- name: drbd-utils
meta:
autogroup: false
state: installed
- name: powertop
meta:
autogroup: true
state: installed
- name: sl
meta:
autogroup: true
state: installed
- name: cowsay
meta:
autogroup: true
state: installed
edges: []

examples/etcd1a.yaml

@@ -0,0 +1,18 @@
---
graph: mygraph
resources:
file:
- name: file1a
path: "/tmp/mgmtA/f1a"
content: |
i am f1
state: exists
- name: "@@file2a"
path: "/tmp/mgmtA/f2a"
content: |
i am f2, exported from host A
state: exists
collect:
- kind: file
pattern: "/tmp/mgmtA/"
edges: []

examples/etcd1b.yaml

@@ -0,0 +1,18 @@
---
graph: mygraph
resources:
file:
- name: file1b
path: "/tmp/mgmtB/f1b"
content: |
i am f1
state: exists
- name: "@@file2b"
path: "/tmp/mgmtB/f2b"
content: |
i am f2, exported from host B
state: exists
collect:
- kind: file
pattern: "/tmp/mgmtB/"
edges: []

examples/etcd1c.yaml

@@ -0,0 +1,18 @@
---
graph: mygraph
resources:
file:
- name: file1c
path: "/tmp/mgmtC/f1c"
content: |
i am f1
state: exists
- name: "@@file2c"
path: "/tmp/mgmtC/f2c"
content: |
i am f2, exported from host C
state: exists
collect:
- kind: file
pattern: "/tmp/mgmtC/"
edges: []


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
exec:
- name: exec1
cmd: sleep 10s
@@ -45,15 +45,15 @@ types:
edges:
- name: e1
from:
type: exec
kind: exec
name: exec1
to:
type: exec
kind: exec
name: exec2
- name: e2
from:
type: exec
kind: exec
name: exec2
to:
type: exec
kind: exec
name: exec3


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
exec:
- name: exec1
cmd: sleep 10s
@@ -25,8 +25,8 @@ types:
edges:
- name: e1
from:
type: exec
kind: exec
name: exec1
to:
type: exec
kind: exec
name: exec2


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
exec:
- name: exec1
cmd: sleep 10s
@@ -25,8 +25,8 @@ types:
edges:
- name: e1
from:
type: exec
kind: exec
name: exec1
to:
type: exec
kind: exec
name: exec2


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
exec:
- name: exec1
cmd: echo hello from exec1
@@ -25,8 +25,8 @@ types:
edges:
- name: e1
from:
type: exec
kind: exec
name: exec1
to:
type: exec
kind: exec
name: exec2


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
exec:
- name: exec1
cmd: echo hello from exec1

examples/exec2.yaml

@@ -0,0 +1,83 @@
---
graph: mygraph
resources:
exec:
- name: exec1
cmd: sleep 10s
shell: ''
timeout: 0
watchcmd: ''
watchshell: ''
ifcmd: ''
ifshell: ''
pollint: 0
state: present
- name: exec2
cmd: sleep 10s
shell: ''
timeout: 0
watchcmd: ''
watchshell: ''
ifcmd: ''
ifshell: ''
pollint: 0
state: present
- name: exec3
cmd: sleep 10s
shell: ''
timeout: 0
watchcmd: ''
watchshell: ''
ifcmd: ''
ifshell: ''
pollint: 0
state: present
- name: exec4
cmd: sleep 10s
shell: ''
timeout: 0
watchcmd: ''
watchshell: ''
ifcmd: ''
ifshell: ''
pollint: 0
state: present
- name: exec5
cmd: sleep 15s
shell: ''
timeout: 0
watchcmd: ''
watchshell: ''
ifcmd: ''
ifshell: ''
pollint: 0
state: present
edges:
- name: e1
from:
kind: exec
name: exec1
to:
kind: exec
name: exec2
- name: e2
from:
kind: exec
name: exec1
to:
kind: exec
name: exec3
- name: e3
from:
kind: exec
name: exec2
to:
kind: exec
name: exec4
- name: e4
from:
kind: exec
name: exec3
to:
kind: exec
name: exec4

examples/file1.yaml

@@ -0,0 +1,41 @@
---
graph: mygraph
resources:
noop:
- name: noop1
file:
- name: file1
path: "/tmp/mgmt/f1"
content: |
i am f1
state: exists
- name: file2
path: "/tmp/mgmt/f2"
content: |
i am f2
state: exists
- name: file3
path: "/tmp/mgmt/f3"
content: |
i am f3
state: exists
- name: file4
path: "/tmp/mgmt/f4"
content: |
i am f4 and i should not be here
state: absent
edges:
- name: e1
from:
kind: file
name: file1
to:
kind: file
name: file2
- name: e2
from:
kind: file
name: file2
to:
kind: file
name: file3


@@ -1,7 +1,7 @@
---
graph: mygraph
comment: hello world example
types:
resources:
noop:
- name: noop1
file:
@@ -13,8 +13,8 @@ types:
edges:
- name: e1
from:
type: noop
kind: noop
name: noop1
to:
type: file
kind: file
name: file1


@@ -1,7 +1,7 @@
---
graph: mygraph
comment: simple exec fan in to fan out example to demonstrate optimization
types:
resources:
exec:
- name: exec1
cmd: sleep 10s
@@ -86,43 +86,43 @@ types:
edges:
- name: e1
from:
type: exec
kind: exec
name: exec1
to:
type: exec
kind: exec
name: exec4
- name: e2
from:
type: exec
kind: exec
name: exec2
to:
type: exec
kind: exec
name: exec4
- name: e3
from:
type: exec
kind: exec
name: exec3
to:
type: exec
kind: exec
name: exec4
- name: e4
from:
type: exec
kind: exec
name: exec4
to:
type: exec
kind: exec
name: exec5
- name: e5
from:
type: exec
kind: exec
name: exec4
to:
type: exec
kind: exec
name: exec6
- name: e6
from:
type: exec
kind: exec
name: exec4
to:
type: exec
kind: exec
name: exec7


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
file:
- name: file1
path: "/tmp/mgmt/f1"
@@ -15,8 +15,8 @@ types:
edges:
- name: e1
from:
type: file
kind: file
name: file1
to:
type: file
kind: file
name: file2


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
file:
- name: file2
path: "/tmp/mgmt/f2"
@@ -15,8 +15,8 @@ types:
edges:
- name: e2
from:
type: file
kind: file
name: file2
to:
type: file
kind: file
name: file3


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
file:
- name: file1a
path: "/tmp/mgmtA/f1a"
@@ -23,6 +23,6 @@ types:
i am f4, exported from host A
state: exists
collect:
- type: file
- kind: file
pattern: "/tmp/mgmtA/"
edges: []


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
file:
- name: file1b
path: "/tmp/mgmtB/f1b"
@@ -23,6 +23,6 @@ types:
i am f4, exported from host B
state: exists
collect:
- type: file
- kind: file
pattern: "/tmp/mgmtB/"
edges: []


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
file:
- name: file1c
path: "/tmp/mgmtC/f1c"
@@ -23,6 +23,6 @@ types:
i am f4, exported from host C
state: exists
collect:
- type: file
- kind: file
pattern: "/tmp/mgmtC/"
edges: []


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
file:
- name: file1
path: "/tmp/mgmt/f1"
@@ -13,6 +13,6 @@ types:
i am f3, exported from host A
state: exists
collect:
- type: file
- kind: file
pattern: ''
edges:


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
file:
- name: file1
path: "/tmp/mgmt/f1"
@@ -8,6 +8,6 @@ types:
i am f1
state: exists
collect:
- type: file
- kind: file
pattern: ''
edges:


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
noop:
- name: noop1
edges:


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
noop:
- name: noop1
exec:


@@ -1,7 +1,7 @@
---
graph: mygraph
comment: simple exec fan in example to demonstrate optimization
types:
resources:
exec:
- name: exec1
cmd: sleep 10s
@@ -56,22 +56,22 @@ types:
edges:
- name: e1
from:
type: exec
kind: exec
name: exec1
to:
type: exec
kind: exec
name: exec5
- name: e2
from:
type: exec
kind: exec
name: exec2
to:
type: exec
kind: exec
name: exec5
- name: e3
from:
type: exec
kind: exec
name: exec3
to:
type: exec
kind: exec
name: exec5

examples/pkg1.yaml

@@ -0,0 +1,7 @@
---
graph: mygraph
resources:
pkg:
- name: powertop
state: installed
edges: []

examples/pkg2.yaml

@@ -0,0 +1,7 @@
---
graph: mygraph
resources:
pkg:
- name: powertop
state: uninstalled
edges: []


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
noop:
- name: noop1
file:
@@ -9,22 +9,22 @@ types:
content: |
i am f1
state: exists
service:
svc:
- name: purpleidea
state: running
startup: enabled
edges:
- name: e1
from:
type: noop
kind: noop
name: noop1
to:
type: file
kind: file
name: file1
- name: e2
from:
type: file
kind: file
name: file1
to:
type: service
kind: svc
name: purpleidea

exec.go

@@ -20,13 +20,19 @@ package main
import (
"bufio"
"bytes"
"encoding/gob"
"errors"
"log"
"os/exec"
"strings"
)
type ExecType struct {
BaseType `yaml:",inline"`
func init() {
gob.Register(&ExecRes{})
}
type ExecRes struct {
BaseRes `yaml:",inline"`
State string `yaml:"state"` // state: exists/present?, absent, (undefined?)
Cmd string `yaml:"cmd"` // the command to run
Shell string `yaml:"shell"` // the (optional) shell to use to run the cmd
@@ -38,13 +44,10 @@ type ExecType struct {
PollInt int `yaml:"pollint"` // the poll interval for the ifcmd
}
func NewExecType(name, cmd, shell string, timeout int, watchcmd, watchshell, ifcmd, ifshell string, pollint int, state string) *ExecType {
// FIXME if path = nil, path = name ...
return &ExecType{
BaseType: BaseType{
func NewExecRes(name, cmd, shell string, timeout int, watchcmd, watchshell, ifcmd, ifshell string, pollint int, state string) *ExecRes {
obj := &ExecRes{
BaseRes: BaseRes{
Name: name,
events: make(chan Event),
vertex: nil,
},
Cmd: cmd,
Shell: shell,
@@ -56,15 +59,18 @@ func NewExecType(name, cmd, shell string, timeout int, watchcmd, watchshell, ifc
PollInt: pollint,
State: state,
}
obj.Init()
return obj
}
func (obj *ExecType) GetType() string {
return "Exec"
func (obj *ExecRes) Init() {
obj.BaseRes.kind = "Exec"
obj.BaseRes.Init() // call base init, b/c we're overriding
}
// validate if the params passed in are valid data
// FIXME: where should this get called ?
func (obj *ExecType) Validate() bool {
func (obj *ExecRes) Validate() bool {
if obj.Cmd == "" { // this is the only thing that is really required
return false
}
@@ -78,7 +84,7 @@ func (obj *ExecType) Validate() bool {
}
// wraps the scanner output in a channel
func (obj *ExecType) BufioChanScanner(scanner *bufio.Scanner) (chan string, chan error) {
func (obj *ExecRes) BufioChanScanner(scanner *bufio.Scanner) (chan string, chan error) {
ch, errch := make(chan string), make(chan error)
go func() {
for scanner.Scan() {
@@ -96,7 +102,7 @@ func (obj *ExecType) BufioChanScanner(scanner *bufio.Scanner) (chan string, chan
}
// Exec watcher
func (obj *ExecType) Watch() {
func (obj *ExecRes) Watch(processChan chan Event) {
if obj.IsWatching() {
return
}
@@ -128,7 +134,7 @@ func (obj *ExecType) Watch() {
cmdReader, err := cmd.StdoutPipe()
if err != nil {
log.Printf("%v[%v]: Error creating StdoutPipe for Cmd: %v", obj.GetType(), obj.GetName(), err)
log.Printf("%v[%v]: Error creating StdoutPipe for Cmd: %v", obj.Kind(), obj.GetName(), err)
log.Fatal(err) // XXX: how should we handle errors?
}
scanner := bufio.NewScanner(cmdReader)
@@ -140,7 +146,7 @@ func (obj *ExecType) Watch() {
cmd.Process.Kill() // TODO: is this necessary?
}()
if err := cmd.Start(); err != nil {
log.Printf("%v[%v]: Error starting Cmd: %v", obj.GetType(), obj.GetName(), err)
log.Printf("%v[%v]: Error starting Cmd: %v", obj.Kind(), obj.GetName(), err)
log.Fatal(err) // XXX: how should we handle errors?
}
@@ -148,36 +154,36 @@ func (obj *ExecType) Watch() {
}
for {
obj.SetState(typeWatching) // reset
obj.SetState(resStateWatching) // reset
select {
case text := <-bufioch:
obj.SetConvergedState(typeConvergedNil)
obj.SetConvergedState(resConvergedNil)
// each time we get a line of output, we loop!
log.Printf("%v[%v]: Watch output: %s", obj.GetType(), obj.GetName(), text)
log.Printf("%v[%v]: Watch output: %s", obj.Kind(), obj.GetName(), text)
if text != "" {
send = true
}
case err := <-errch:
obj.SetConvergedState(typeConvergedNil) // XXX ?
obj.SetConvergedState(resConvergedNil) // XXX ?
if err == nil { // EOF
// FIXME: add an "if watch command ends/crashes"
// restart or generate error option
log.Printf("%v[%v]: Reached EOF", obj.GetType(), obj.GetName())
log.Printf("%v[%v]: Reached EOF", obj.Kind(), obj.GetName())
return
}
log.Printf("%v[%v]: Error reading input?: %v", obj.GetType(), obj.GetName(), err)
log.Printf("%v[%v]: Error reading input?: %v", obj.Kind(), obj.GetName(), err)
log.Fatal(err)
// XXX: how should we handle errors?
case event := <-obj.events:
obj.SetConvergedState(typeConvergedNil)
obj.SetConvergedState(resConvergedNil)
if exit, send = obj.ReadEvent(&event); exit {
return // exit
}
case _ = <-TimeAfterOrBlock(obj.ctimeout):
obj.SetConvergedState(typeConvergedTimeout)
obj.SetConvergedState(resConvergedTimeout)
obj.converged <- true
continue
}
@@ -187,19 +193,22 @@ func (obj *ExecType) Watch() {
send = false
// it is okay to invalidate the clean state on poke too
obj.isStateOK = false // something made state dirty
Process(obj) // XXX: rename this function
resp := NewResp()
processChan <- Event{eventNil, resp, "", true} // trigger process
resp.ACKWait() // wait for the ACK()
}
}
}
// TODO: expand the IfCmd to be a list of commands
func (obj *ExecType) StateOK() bool {
func (obj *ExecRes) CheckApply(apply bool) (stateok bool, err error) {
log.Printf("%v[%v]: CheckApply(%t)", obj.Kind(), obj.GetName(), apply)
// if there is a watch command, but no if command, run based on state
if b := obj.isStateOK; obj.WatchCmd != "" && obj.IfCmd == "" {
obj.isStateOK = true // reset
//if !obj.isStateOK { obj.isStateOK = true; return false }
return b
if obj.WatchCmd != "" && obj.IfCmd == "" {
if obj.isStateOK {
return true, nil
}
// if there is no watcher, but there is an onlyif check, run it to see
} else if obj.IfCmd != "" { // && obj.WatchCmd == ""
@@ -226,23 +235,27 @@ func (obj *ExecType) StateOK() bool {
cmdName = obj.IfShell // usually bash, or sh
cmdArgs = []string{"-c", obj.IfCmd}
}
err := exec.Command(cmdName, cmdArgs...).Run()
err = exec.Command(cmdName, cmdArgs...).Run()
if err != nil {
// TODO: check exit value
return true // don't run
return true, nil // don't run
}
return false // just run
// if there is no watcher and no onlyif check, assume we should run
} else { // if obj.WatchCmd == "" && obj.IfCmd == "" {
b := obj.isStateOK
obj.isStateOK = true
return b // just run if state is dirty
// just run if state is dirty
if obj.isStateOK {
return true, nil
}
}
}
func (obj *ExecType) Apply() bool {
log.Printf("%v[%v]: Apply", obj.GetType(), obj.GetName())
// state is not okay, no work done, exit, but without error
if !apply {
return false, nil
}
// apply portion
log.Printf("%v[%v]: Apply", obj.Kind(), obj.GetName())
var cmdName string
var cmdArgs []string
if obj.Shell == "" {
@@ -263,9 +276,9 @@ func (obj *ExecType) Apply() bool {
var out bytes.Buffer
cmd.Stdout = &out
if err := cmd.Start(); err != nil {
log.Printf("%v[%v]: Error starting Cmd: %v", obj.GetType(), obj.GetName(), err)
return false
if err = cmd.Start(); err != nil {
log.Printf("%v[%v]: Error starting Cmd: %v", obj.Kind(), obj.GetName(), err)
return false, err
}
timeout := obj.Timeout
@@ -276,16 +289,16 @@ func (obj *ExecType) Apply() bool {
go func() { done <- cmd.Wait() }()
select {
case err := <-done:
case err = <-done:
if err != nil {
log.Printf("%v[%v]: Error waiting for Cmd: %v", obj.GetType(), obj.GetName(), err)
return false
log.Printf("%v[%v]: Error waiting for Cmd: %v", obj.Kind(), obj.GetName(), err)
return false, err
}
case <-TimeAfterOrBlock(timeout):
log.Printf("%v[%v]: Timeout waiting for Cmd", obj.GetType(), obj.GetName())
log.Printf("%v[%v]: Timeout waiting for Cmd", obj.Kind(), obj.GetName())
//cmd.Process.Kill() // TODO: is this necessary?
return false
return false, errors.New("Timeout waiting for Cmd!")
}
// TODO: if we printed the stdout while the command is running, this
@@ -298,38 +311,111 @@ func (obj *ExecType) Apply() bool {
log.Printf(out.String())
}
// XXX: return based on exit value!!
// the state tracking is for exec resources that can't "detect" their
// state, and assume it's invalid when the Watch() function triggers.
// if we apply state successfully, we should reset it here so that we
// know that we have applied since the state was set not ok by event!
obj.isStateOK = true // reset
return false, nil // success
}
type ExecUUID struct {
BaseUUID
Cmd string
IfCmd string
// TODO: add more elements here
}
// if and only if they are equivalent, return true
// if they are not equivalent, return false
func (obj *ExecUUID) IFF(uuid ResUUID) bool {
res, ok := uuid.(*ExecUUID)
if !ok {
return false
}
if obj.Cmd != res.Cmd {
return false
}
// TODO: add more checks here
//if obj.Shell != res.Shell {
// return false
//}
//if obj.Timeout != res.Timeout {
// return false
//}
//if obj.WatchCmd != res.WatchCmd {
// return false
//}
//if obj.WatchShell != res.WatchShell {
// return false
//}
if obj.IfCmd != res.IfCmd {
return false
}
//if obj.PollInt != res.PollInt {
// return false
//}
//if obj.State != res.State {
// return false
//}
return true
}
func (obj *ExecType) Compare(typ Type) bool {
switch typ.(type) {
case *ExecType:
typ := typ.(*ExecType)
if obj.Name != typ.Name {
func (obj *ExecRes) AutoEdges() AutoEdge {
// TODO: parse as many exec params to look for auto edges, for example
// the path of the binary in the Cmd variable might be from in a pkg
return nil
}
// include all params to make a unique identification of this object
func (obj *ExecRes) GetUUIDs() []ResUUID {
x := &ExecUUID{
BaseUUID: BaseUUID{name: obj.GetName(), kind: obj.Kind()},
Cmd: obj.Cmd,
IfCmd: obj.IfCmd,
// TODO: add more params here
}
return []ResUUID{x}
}
func (obj *ExecRes) GroupCmp(r Res) bool {
_, ok := r.(*SvcRes)
if !ok {
return false
}
if obj.Cmd != typ.Cmd {
return false // not possible atm
}
func (obj *ExecRes) Compare(res Res) bool {
switch res.(type) {
case *ExecRes:
res := res.(*ExecRes)
if obj.Name != res.Name {
return false
}
if obj.Shell != typ.Shell {
if obj.Cmd != res.Cmd {
return false
}
if obj.Timeout != typ.Timeout {
if obj.Shell != res.Shell {
return false
}
if obj.WatchCmd != typ.WatchCmd {
if obj.Timeout != res.Timeout {
return false
}
if obj.WatchShell != typ.WatchShell {
if obj.WatchCmd != res.WatchCmd {
return false
}
if obj.IfCmd != typ.IfCmd {
if obj.WatchShell != res.WatchShell {
return false
}
if obj.PollInt != typ.PollInt {
if obj.IfCmd != res.IfCmd {
return false
}
if obj.State != typ.State {
if obj.PollInt != res.PollInt {
return false
}
if obj.State != res.State {
return false
}
default:
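The diff above replaces the separate StateOK()/Apply() pair with a single CheckApply(apply bool) (stateok bool, err error). A toy resource sketching that calling convention, under the assumption that (true, nil) means already converged, (false, nil) means work was needed (and was done if apply was true), and a non-nil error aborts:

```go
package main

import (
	"errors"
	"fmt"
)

// fileish is a toy resource mimicking the CheckApply contract in the diff;
// it is illustrative only, not the project's real code.
type fileish struct {
	want, have string
}

func (f *fileish) CheckApply(apply bool) (bool, error) {
	if f.have == f.want {
		return true, nil // state already OK, nothing to do
	}
	if !apply {
		return false, nil // dirty, but we're in check-only mode
	}
	if f.want == "" {
		return false, errors.New("nothing to write") // error path
	}
	f.have = f.want   // "apply" the change
	return false, nil // state was dirty; we did work successfully
}

func main() {
	r := &fileish{want: "hello"}
	fmt.Println(r.CheckApply(false)) // check only: false <nil>
	fmt.Println(r.CheckApply(true))  // apply: false <nil>
	fmt.Println(r.CheckApply(true))  // now converged: true <nil>
}
```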

file.go (337 changed lines)

@@ -22,6 +22,7 @@ import (
"encoding/hex"
"gopkg.in/fsnotify.v1"
//"github.com/go-fsnotify/fsnotify" // git master of "gopkg.in/fsnotify.v1"
"encoding/gob"
"io"
"log"
"math"
@@ -31,8 +32,12 @@ import (
"syscall"
)
type FileType struct {
BaseType `yaml:",inline"`
func init() {
gob.Register(&FileRes{})
}
type FileRes struct {
BaseRes `yaml:",inline"`
Path string `yaml:"path"` // path variable (should default to name)
Dirname string `yaml:"dirname"`
Basename string `yaml:"basename"`
@@ -41,13 +46,11 @@ type FileType struct {
sha256sum string
}
func NewFileType(name, path, dirname, basename, content, state string) *FileType {
func NewFileRes(name, path, dirname, basename, content, state string) *FileRes {
// FIXME if path = nil, path = name ...
return &FileType{
BaseType: BaseType{
obj := &FileRes{
BaseRes: BaseRes{
Name: name,
events: make(chan Event),
vertex: nil,
},
Path: path,
Dirname: dirname,
@@ -56,14 +59,31 @@ func NewFileType(name, path, dirname, basename, content, state string) *FileType
State: state,
sha256sum: "",
}
obj.Init()
return obj
}
func (obj *FileType) GetType() string {
return "File"
func (obj *FileRes) Init() {
obj.BaseRes.kind = "File"
obj.BaseRes.Init() // call base init, b/c we're overriding
}
func (obj *FileRes) GetPath() string {
d := Dirname(obj.Path)
b := Basename(obj.Path)
if !obj.Validate() || (obj.Dirname == "" && obj.Basename == "") {
return obj.Path
} else if obj.Dirname == "" {
return d + obj.Basename
} else if obj.Basename == "" {
return obj.Dirname + b
} else { // if obj.dirname != "" && obj.basename != "" {
return obj.Dirname + obj.Basename
}
}
// validate if the params passed in are valid data
func (obj *FileType) Validate() bool {
func (obj *FileRes) Validate() bool {
if obj.Dirname != "" {
// must end with /
if obj.Dirname[len(obj.Dirname)-1:] != "/" {
@@ -79,24 +99,10 @@ func (obj *FileType) Validate() bool {
return true
}
func (obj *FileType) GetPath() string {
d := Dirname(obj.Path)
b := Basename(obj.Path)
if !obj.Validate() || (obj.Dirname == "" && obj.Basename == "") {
return obj.Path
} else if obj.Dirname == "" {
return d + obj.Basename
} else if obj.Basename == "" {
return obj.Dirname + b
} else { // if obj.dirname != "" && obj.basename != "" {
return obj.Dirname + obj.Basename
}
}
// File watcher for files and directories
// Modify with caution, probably important to write some test cases first!
// obj.GetPath(): file or directory
func (obj *FileType) Watch() {
func (obj *FileRes) Watch(processChan chan Event) {
if obj.IsWatching() {
return
}
@@ -142,7 +148,7 @@ func (obj *FileType) Watch() {
} else if err == syscall.ENOSPC {
// XXX: occasionally: no space left on device,
// XXX: probably due to lack of inotify watches
log.Printf("Lack of watches for file[%v] error: %+v", obj.Name, err.Error) // 0x408da0
log.Printf("%v[%v]: Out of inotify watches!", obj.Kind(), obj.GetName())
log.Fatal(err)
} else {
log.Printf("Unknown file[%v] error:", obj.Name)
@@ -152,13 +158,13 @@ func (obj *FileType) Watch() {
continue
}
obj.SetState(typeWatching) // reset
obj.SetState(resStateWatching) // reset
select {
case event := <-watcher.Events:
if DEBUG {
log.Printf("File[%v]: Watch(%v), Event(%v): %v", obj.GetName(), current, event.Name, event.Op)
}
obj.SetConvergedState(typeConvergedNil) // XXX: technically i can detect if the event is erroneous or not first
obj.SetConvergedState(resConvergedNil) // XXX: technically i can detect if the event is erroneous or not first
// the deeper you go, the bigger the deltaDepth is...
// this is the difference between what we're watching,
// and the event... doesn't mean we can't watch deeper
@@ -228,20 +234,20 @@ func (obj *FileType) Watch() {
}
case err := <-watcher.Errors:
obj.SetConvergedState(typeConvergedNil) // XXX ?
log.Println("error:", err)
obj.SetConvergedState(resConvergedNil) // XXX ?
log.Printf("error: %v", err)
log.Fatal(err)
//obj.events <- fmt.Sprintf("file: %v", "error") // XXX: how should we handle errors?
case event := <-obj.events:
obj.SetConvergedState(typeConvergedNil)
obj.SetConvergedState(resConvergedNil)
if exit, send = obj.ReadEvent(&event); exit {
return // exit
}
//dirty = false // these events don't invalidate state
case _ = <-TimeAfterOrBlock(obj.ctimeout):
obj.SetConvergedState(typeConvergedTimeout)
obj.SetConvergedState(resConvergedTimeout)
obj.converged <- true
continue
}
@@ -254,12 +260,14 @@ func (obj *FileType) Watch() {
dirty = false
obj.isStateOK = false // something made state dirty
}
Process(obj) // XXX: rename this function
resp := NewResp()
processChan <- Event{eventNil, resp, "", true} // trigger process
resp.ACKWait() // wait for the ACK()
}
}
}
func (obj *FileType) HashSHA256fromContent() string {
func (obj *FileRes) HashSHA256fromContent() string {
if obj.sha256sum != "" { // return if already computed
return obj.sha256sum
}
@@ -270,138 +278,216 @@ func (obj *FileType) HashSHA256fromContent() string {
return obj.sha256sum
}
// FIXME: add the obj.CleanState() calls all over the true returns!
func (obj *FileType) StateOK() bool {
if obj.isStateOK { // cache the state
return true
func (obj *FileRes) FileHashSHA256Check() (bool, error) {
if PathIsDir(obj.GetPath()) { // assert
log.Fatal("This should only be called on a File resource.")
}
if _, err := os.Stat(obj.GetPath()); os.IsNotExist(err) {
// no such file or directory
if obj.State == "absent" {
return obj.CleanState() // missing file should be missing, phew :)
} else {
// state invalid, skip expensive checksums
return false
}
}
// TODO: add file mode check here...
if PathIsDir(obj.GetPath()) {
return obj.StateOKDir()
} else {
return obj.StateOKFile()
}
}
func (obj *FileType) StateOKFile() bool {
if PathIsDir(obj.GetPath()) {
log.Fatal("This should only be called on a File type.")
}
// run a diff, and return true if needs changing
// run a diff, and return true if it needs changing
hash := sha256.New()
f, err := os.Open(obj.GetPath())
if err != nil {
//log.Fatal(err)
return false
if e, ok := err.(*os.PathError); ok && (e.Err.(syscall.Errno) == syscall.ENOENT) {
return false, nil // no "error", file is just absent
}
return false, err
}
defer f.Close()
if _, err := io.Copy(hash, f); err != nil {
//log.Fatal(err)
return false
return false, err
}
sha256sum := hex.EncodeToString(hash.Sum(nil))
//log.Printf("sha256sum: %v", sha256sum)
if obj.HashSHA256fromContent() == sha256sum {
return true
return true, nil
}
return false
return false, nil
}
func (obj *FileType) StateOKDir() bool {
if !PathIsDir(obj.GetPath()) {
log.Fatal("This should only be called on a Dir type.")
}
// XXX: not implemented
log.Fatal("Not implemented!")
return false
}
func (obj *FileType) Apply() bool {
log.Printf("%v[%v]: Apply", obj.GetType(), obj.GetName())
func (obj *FileRes) FileApply() error {
if PathIsDir(obj.GetPath()) {
return obj.ApplyDir()
} else {
return obj.ApplyFile()
}
}
func (obj *FileType) ApplyFile() bool {
if PathIsDir(obj.GetPath()) {
log.Fatal("This should only be called on a File type.")
log.Fatal("This should only be called on a File resource.")
}
if obj.State == "absent" {
log.Printf("About to remove: %v", obj.GetPath())
err := os.Remove(obj.GetPath())
if err != nil {
return false
}
return true
return err // either nil or not, for success or failure
}
//log.Println("writing: " + filename)
f, err := os.Create(obj.GetPath())
if err != nil {
log.Println("error:", err)
return false
return nil
}
defer f.Close()
_, err = io.WriteString(f, obj.Content)
if err != nil {
log.Println("error:", err)
return false
return err
}
return true
return nil // success
}
func (obj *FileType) ApplyDir() bool {
if !PathIsDir(obj.GetPath()) {
log.Fatal("This should only be called on a Dir type.")
func (obj *FileRes) CheckApply(apply bool) (stateok bool, err error) {
log.Printf("%v[%v]: CheckApply(%t)", obj.Kind(), obj.GetName(), apply)
if obj.isStateOK { // cache the state
return true, nil
}
// XXX: not implemented
log.Fatal("Not implemented!")
return true
if _, err = os.Stat(obj.GetPath()); os.IsNotExist(err) {
// no such file or directory
if obj.State == "absent" {
// missing file should be missing, phew :)
obj.isStateOK = true
return true, nil
}
}
err = nil // reset
// FIXME: add file mode check here...
if PathIsDir(obj.GetPath()) {
log.Fatal("Not implemented!") // XXX
} else {
ok, err := obj.FileHashSHA256Check()
if err != nil {
return false, err
}
if ok {
obj.isStateOK = true
return true, nil
}
// if no err, but !ok, then we continue on...
}
// state is not okay, no work done, exit, but without error
if !apply {
return false, nil
}
// apply portion
log.Printf("%v[%v]: Apply", obj.Kind(), obj.GetName())
if PathIsDir(obj.GetPath()) {
log.Fatal("Not implemented!") // XXX
} else {
err = obj.FileApply()
if err != nil {
return false, err
}
}
obj.isStateOK = true
return false, nil // success
}
func (obj *FileType) Compare(typ Type) bool {
switch typ.(type) {
case *FileType:
typ := typ.(*FileType)
if obj.Name != typ.Name {
type FileUUID struct {
BaseUUID
path string
}
// if and only if they are equivalent, return true
// if they are not equivalent, return false
func (obj *FileUUID) IFF(uuid ResUUID) bool {
res, ok := uuid.(*FileUUID)
if !ok {
return false
}
if obj.GetPath() != typ.Path {
return obj.path == res.path
}
type FileResAutoEdges struct {
data []ResUUID
pointer int
found bool
}
func (obj *FileResAutoEdges) Next() []ResUUID {
if obj.found {
log.Fatal("Shouldn't be called anymore!")
}
if len(obj.data) == 0 { // check length for rare scenarios
return nil
}
value := obj.data[obj.pointer]
obj.pointer++
return []ResUUID{value} // we return one, even though api supports N
}
// get results of the earlier Next() call, return if we should continue!
func (obj *FileResAutoEdges) Test(input []bool) bool {
// if there aren't any more remaining
if len(obj.data) <= obj.pointer {
return false
}
if obj.Content != typ.Content {
if obj.found { // already found, done!
return false
}
if obj.State != typ.State {
if len(input) != 1 { // in case we get given bad data
log.Fatal("Expecting a single value!")
}
if input[0] { // if a match is found, we're done!
obj.found = true // no more to find!
return false
}
return true // keep going
}
// generate a simple linear sequence of each parent directory from bottom up!
func (obj *FileRes) AutoEdges() AutoEdge {
var data []ResUUID // store linear result chain here...
values := PathSplitFullReversed(obj.GetPath()) // build it
_, values = values[0], values[1:] // get rid of first value which is me!
for _, x := range values {
var reversed = true // cheat by passing a pointer
data = append(data, &FileUUID{
BaseUUID: BaseUUID{
name: obj.GetName(),
kind: obj.Kind(),
reversed: &reversed,
},
path: x, // what matters
}) // build list
}
return &FileResAutoEdges{
data: data,
pointer: 0,
found: false,
}
}
func (obj *FileRes) GetUUIDs() []ResUUID {
x := &FileUUID{
BaseUUID: BaseUUID{name: obj.GetName(), kind: obj.Kind()},
path: obj.GetPath(),
}
return []ResUUID{x}
}
func (obj *FileRes) GroupCmp(r Res) bool {
_, ok := r.(*FileRes)
if !ok {
return false
}
// TODO: we might be able to group directory children into a single
// recursive watcher in the future, thus saving fanotify watches
return false // not possible atm
}
func (obj *FileRes) Compare(res Res) bool {
switch res.(type) {
case *FileRes:
res := res.(*FileRes)
if obj.Name != res.Name {
return false
}
if obj.GetPath() != res.Path {
return false
}
if obj.Content != res.Content {
return false
}
if obj.State != res.State {
return false
}
default:
@@ -409,3 +495,8 @@ func (obj *FileType) Compare(typ Type) bool {
}
return true
}
func (obj *FileRes) CollectPattern(pattern string) {
// XXX: currently the pattern for files can only override the Dirname variable :P
obj.Dirname = pattern // XXX: simplistic for now
}
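The FileResAutoEdges iterator above follows a Next()/Test() protocol: the caller asks for the next candidate UUIDs, reports whether each matched, and stops when Test() returns false. A simplified, self-contained sketch of that loop (string candidates instead of ResUUIDs; all names are illustrative):

```go
package main

import "fmt"

// autoEdge is a simplified sketch of the Next/Test iterator from the diff.
type autoEdge struct {
	data    []string // candidate parent dirs, closest first
	pointer int
	found   bool
}

func (a *autoEdge) Next() []string {
	v := a.data[a.pointer]
	a.pointer++
	return []string{v} // one at a time, though the API supports N
}

func (a *autoEdge) Test(input []bool) bool {
	if len(a.data) <= a.pointer {
		return false // nothing left to try
	}
	if input[0] {
		a.found = true
		return false // first (closest) match wins; stop searching
	}
	return true // keep going
}

func main() {
	known := map[string]bool{"/var/lib/": true} // dirs managed elsewhere
	it := &autoEdge{data: []string{"/var/lib/mgmt/", "/var/lib/", "/var/", "/"}}
	for {
		candidates := it.Next()
		matched := known[candidates[0]]
		if matched {
			fmt.Println("edge to", candidates[0]) // edge to /var/lib/
		}
		if !it.Test([]bool{matched}) {
			break
		}
	}
}
```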

main.go (36 changed lines)

@@ -25,7 +25,6 @@ import (
"sync"
"syscall"
"time"
//etcd_context "github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/net/context"
)
// set at compile time
@@ -64,7 +63,7 @@ func run(c *cli.Context) {
converged := make(chan bool) // converged signal
log.Printf("This is: %v, version: %v", program, version)
log.Printf("Main: Start: %v", start)
G := NewGraph("Graph") // give graph a default name
var G, fullGraph *Graph
// exit after `max-runtime` seconds for no reason at all...
if i := c.Int("max-runtime"); i > 0 {
@@ -103,10 +102,11 @@ func run(c *cli.Context) {
if !c.Bool("no-watch") {
configchan = ConfigWatch(file)
}
log.Printf("Etcd: Starting...")
log.Println("Etcd: Starting...")
etcdchan := etcdO.EtcdWatch()
first := true // first loop or not
for {
log.Println("Main: Waiting...")
select {
case _ = <-startchan: // kick the loop once at start
// pass
@@ -135,17 +135,29 @@ func run(c *cli.Context) {
}
// run graph vertex LOCK...
if !first { // XXX: we can flatten this check out I think
log.Printf("State: %v -> %v", G.SetState(graphPausing), G.GetState())
if !first { // TODO: we can flatten this check out I think
G.Pause() // sync
log.Printf("State: %v -> %v", G.SetState(graphPaused), G.GetState())
}
// build the graph from a config file
// build the graph on events (eg: from etcd)
if !UpdateGraphFromConfig(config, hostname, G, etcdO) {
log.Fatal("Config: We borked the graph.") // XXX
// build graph from yaml file on events (eg: from etcd)
// we need the vertices to be paused to work on them
if newFullgraph, err := fullGraph.NewGraphFromConfig(config, etcdO, hostname); err == nil { // keep references to all original elements
fullGraph = newFullgraph
} else {
log.Printf("Config: Error making new graph from config: %v", err)
// unpause!
if !first {
G.Start(&wg, first) // sync
}
continue
}
G = fullGraph.Copy() // copy to active graph
// XXX: do etcd transaction out here...
G.AutoEdges() // add autoedges; modifies the graph
G.AutoGroup() // run autogroup; modifies the graph
// TODO: do we want to do a transitive reduction?
log.Printf("Graph: %v", G) // show graph
err := G.ExecGraphviz(c.String("graphviz-filter"), c.String("graphviz"))
if err != nil {
@@ -160,9 +172,7 @@ func run(c *cli.Context) {
// some are not ready yet and the EtcdWatch
// loops, we'll cause G.Pause(...) before we
// even got going, thus causing nil pointer errors
log.Printf("State: %v -> %v", G.SetState(graphStarting), G.GetState())
G.Start(&wg, first) // sync
log.Printf("State: %v -> %v", G.SetState(graphStarted), G.GetState())
first = false
}
}()
@@ -177,7 +187,7 @@ func run(c *cli.Context) {
continue
}
for v := range G.GetVerticesChan() {
if v.Type.GetConvergedState() != typeConvergedTimeout {
if v.Res.GetConvergedState() != resConvergedTimeout {
continue ConvergedLoop
}
}
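The main loop above pauses the running graph, rebuilds from config, and, on a config error, unpauses the old graph and continues rather than dying. A stub sketch of that control flow (Graph here is a stand-in, not the project's real type):

```go
package main

import (
	"errors"
	"fmt"
)

// Graph is a minimal stand-in to show the pause/rebuild/resume pattern.
type Graph struct{ rev int }

func (g *Graph) Pause() { fmt.Println("paused rev", g.rev) }
func (g *Graph) Start() { fmt.Println("running rev", g.rev) }

// build stands in for NewGraphFromConfig: it may fail on a bad config.
func build(rev int, fail bool) (*Graph, error) {
	if fail {
		return nil, errors.New("bad config")
	}
	return &Graph{rev: rev}, nil
}

func main() {
	g := &Graph{rev: 1}
	g.Start()
	for _, fail := range []bool{true, false} { // two config events
		g.Pause() // sync: stop the running graph before mutating
		newG, err := build(g.rev+1, fail)
		if err != nil {
			fmt.Println("config error:", err)
			g.Start() // unpause the old graph and keep going
			continue
		}
		g = newG
		g.Start() // swap in the new graph
	}
}
```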

misc.go (229 changed lines)

@@ -18,14 +18,115 @@
package main
import (
"bytes"
"encoding/base64"
"encoding/gob"
"github.com/godbus/dbus"
"path"
"sort"
"strings"
"time"
)
// returns the string with the first character capitalized
func FirstToUpper(str string) string {
return strings.ToUpper(str[0:1]) + str[1:]
}
// return true if a string exists inside a list, otherwise false
func StrInList(needle string, haystack []string) bool {
for _, x := range haystack {
if needle == x {
return true
}
}
return false
}
// remove any duplicate values in the list
// possibly sub-optimal, O(n^2)? implementation
func StrRemoveDuplicatesInList(list []string) []string {
unique := []string{}
for _, x := range list {
if !StrInList(x, unique) {
unique = append(unique, x)
}
}
return unique
}
// remove any of the elements in filter, if they exist in list
func StrFilterElementsInList(filter []string, list []string) []string {
result := []string{}
for _, x := range list {
if !StrInList(x, filter) {
result = append(result, x)
}
}
return result
}
// remove any of the elements in filter, if they don't exist in list
// this is an in order intersection of two lists
func StrListIntersection(list1 []string, list2 []string) []string {
result := []string{}
for _, x := range list1 {
if StrInList(x, list2) {
result = append(result, x)
}
}
return result
}
// reverse a list of strings
func ReverseStringList(in []string) []string {
var out []string // empty list
l := len(in)
for i := range in {
out = append(out, in[l-i-1])
}
return out
}
// return the sorted list of string keys in a map with string keys
// NOTE: i thought it would be nice for this to use: map[string]interface{} but
// it turns out that's not allowed. I know we don't have generics, but c'mon!
func StrMapKeys(m map[string]string) []string {
result := []string{}
for k, _ := range m {
result = append(result, k)
}
sort.Strings(result) // deterministic order
return result
}
// return the sorted list of bool values in a map with string values
func BoolMapValues(m map[string]bool) []bool {
result := []bool{}
for _, v := range m {
result = append(result, v)
}
//sort.Bools(result) // TODO: deterministic order
return result
}
// return the sorted list of string values in a map with string values
func StrMapValues(m map[string]string) []string {
result := []string{}
for _, v := range m {
result = append(result, v)
}
sort.Strings(result) // deterministic order
return result
}
// return true if everyone is true
func BoolMapTrue(l []bool) bool {
for _, b := range l {
if !b {
return false
}
}
return true
}
// Similar to the GNU dirname command
func Dirname(p string) string {
if p == "/" {
@@ -37,6 +138,9 @@ func Dirname(p string) string {
func Basename(p string) string {
_, b := path.Split(path.Clean(p))
if p == "" {
return ""
}
if p[len(p)-1:] == "/" { // don't lose the trailing slash
b += "/"
}
@@ -45,6 +149,9 @@ func Basename(p string) string {
// Split a path into an array of tokens excluding any trailing empty tokens
func PathSplit(p string) []string {
if p == "/" { // TODO: can't this all be expressed nicely in one line?
return []string{""}
}
return strings.Split(path.Clean(p), "/")
}
@@ -67,6 +174,46 @@ func HasPathPrefix(p, prefix string) bool {
return true
}
func StrInPathPrefixList(needle string, haystack []string) bool {
for _, x := range haystack {
if HasPathPrefix(x, needle) {
return true
}
}
return false
}
// remove redundant file path prefixes that are under the tree of other files
func RemoveCommonFilePrefixes(paths []string) []string {
var result = make([]string, len(paths))
for i := 0; i < len(paths); i++ { // copy, b/c append can modify the args!!
result[i] = paths[i]
}
// is there a string path which is common everywhere?
// if so, remove it, and iterate until nothing common is left
// return what's left over, that's the most common superset
loop:
for {
if len(result) <= 1 {
return result
}
for i := 0; i < len(result); i++ {
var copied = make([]string, len(result))
for j := 0; j < len(result); j++ { // copy, b/c append can modify the args!!
copied[j] = result[j]
}
noi := append(copied[:i], copied[i+1:]...) // rm i
if StrInPathPrefixList(result[i], noi) {
// delete the element common to everyone
result = noi
continue loop
}
}
break
}
return result
}
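RemoveCommonFilePrefixes repeatedly drops any path that is an ancestor of another path in the list, leaving only the deepest entries. A self-contained sketch of the same shape, using a simplified HasPathPrefix (plain string-prefix matching, which the real helper refines):

```go
package main

import (
	"fmt"
	"strings"
)

// simplified stand-in for HasPathPrefix: is p under prefix's tree?
func hasPathPrefix(p, prefix string) bool {
	return strings.HasPrefix(p, prefix)
}

func strInPathPrefixList(needle string, haystack []string) bool {
	for _, x := range haystack {
		if hasPathPrefix(x, needle) {
			return true
		}
	}
	return false
}

// same shape as RemoveCommonFilePrefixes in the diff: repeatedly drop any
// path that is an ancestor of another path in the list.
func removeCommonFilePrefixes(paths []string) []string {
	result := append([]string{}, paths...) // copy, b/c append can modify args
loop:
	for {
		if len(result) <= 1 {
			return result
		}
		for i := 0; i < len(result); i++ {
			noi := append(append([]string{}, result[:i]...), result[i+1:]...) // rm i
			if strInPathPrefixList(result[i], noi) {
				result = noi // delete the element common to others
				continue loop
			}
		}
		break
	}
	return result
}

func main() {
	in := []string{"/usr/", "/usr/bin/", "/usr/bin/mgmt", "/etc/"}
	fmt.Println(removeCommonFilePrefixes(in)) // [/usr/bin/mgmt /etc/]
}
```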
// Delta of path prefix, tells you how many path tokens different the prefix is
func PathPrefixDelta(p, prefix string) int {
@@ -82,34 +229,45 @@ func PathIsDir(p string) bool {
return p[len(p)-1:] == "/" // a dir has a trailing slash in this context
}
// encode an object as base 64, serialize and then base64 encode
func ObjToB64(obj interface{}) (string, bool) {
b := bytes.Buffer{}
e := gob.NewEncoder(&b)
err := e.Encode(obj)
if err != nil {
//log.Println("Gob failed to Encode: ", err)
return "", false
// return the full list of "dependency" paths for a given path in reverse order
func PathSplitFullReversed(p string) []string {
var result []string
split := PathSplit(p)
count := len(split)
var x string
for i := 0; i < count; i++ {
x = "/" + path.Join(split[0:i+1]...)
if i != 0 && !(i+1 == count && !PathIsDir(p)) {
x += "/" // add trailing slash
}
return base64.StdEncoding.EncodeToString(b.Bytes()), true
result = append(result, x)
}
return ReverseStringList(result)
}
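PathSplitFullReversed yields every ancestor of a path, deepest first, with trailing slashes on the directory entries; this is what FileRes.AutoEdges walks to find parent-directory dependencies. A standalone reimplementation of the helpers, for illustration only:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

func pathIsDir(p string) bool { return p[len(p)-1:] == "/" }

func pathSplit(p string) []string {
	if p == "/" {
		return []string{""}
	}
	return strings.Split(path.Clean(p), "/")
}

func reverse(in []string) []string {
	var out []string
	for i := range in {
		out = append(out, in[len(in)-1-i])
	}
	return out
}

// every ancestor path of p, deepest first, with trailing slashes on dirs
func pathSplitFullReversed(p string) []string {
	var result []string
	split := pathSplit(p)
	count := len(split)
	for i := 0; i < count; i++ {
		x := "/" + path.Join(split[0:i+1]...)
		if i != 0 && !(i+1 == count && !pathIsDir(p)) {
			x += "/" // add trailing slash for the dir entries
		}
		result = append(result, x)
	}
	return reverse(result)
}

func main() {
	fmt.Println(pathSplitFullReversed("/var/lib/mgmt/state"))
	// [/var/lib/mgmt/state /var/lib/mgmt/ /var/lib/ /var/ /]
}
```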
// TODO: is it possible to somehow generically just return the obj?
// decode an object into the waiting obj which you pass a reference to
func B64ToObj(str string, obj interface{}) bool {
bb, err := base64.StdEncoding.DecodeString(str)
if err != nil {
//log.Println("Base64 failed to Decode: ", err)
return false
// add trailing slashes to any likely dirs in a package manager fileList
// if removeDirs is true, instead, don't keep the dirs in our output
func DirifyFileList(fileList []string, removeDirs bool) []string {
dirs := []string{}
for _, file := range fileList {
dir, _ := path.Split(file) // dir
dir = path.Clean(dir) // clean so cmp is easier
if !StrInList(dir, dirs) {
dirs = append(dirs, dir)
}
b := bytes.NewBuffer(bb)
d := gob.NewDecoder(b)
err = d.Decode(obj)
if err != nil {
//log.Println("Gob failed to Decode: ", err)
return false
}
return true
result := []string{}
for _, file := range fileList {
cleanFile := path.Clean(file)
if !StrInList(cleanFile, dirs) { // we're not a directory!
result = append(result, file) // pass through
} else if !removeDirs {
result = append(result, cleanFile+"/")
}
}
return result
}
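The gob.Register calls added in exec.go and file.go exist so resources can round-trip through the ObjToB64/B64ToObj helpers shown above. A minimal round-trip with a toy struct (note is hypothetical; remember gob only serializes exported fields):

```go
package main

import (
	"bytes"
	"encoding/base64"
	"encoding/gob"
	"fmt"
)

type note struct {
	Name  string
	Value int
}

// serialize with gob, then base64 encode (same idea as ObjToB64)
func objToB64(obj interface{}) (string, bool) {
	b := bytes.Buffer{}
	if err := gob.NewEncoder(&b).Encode(obj); err != nil {
		return "", false
	}
	return base64.StdEncoding.EncodeToString(b.Bytes()), true
}

// base64 decode, then gob decode into the waiting obj (same idea as B64ToObj)
func b64ToObj(str string, obj interface{}) bool {
	bb, err := base64.StdEncoding.DecodeString(str)
	if err != nil {
		return false
	}
	return gob.NewDecoder(bytes.NewBuffer(bb)).Decode(obj) == nil
}

func main() {
	s, ok := objToB64(note{"dude", 42})
	fmt.Println(ok)
	var out note
	fmt.Println(b64ToObj(s, &out), out) // true {dude 42}
}
```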
// special version of time.After that blocks when given a negative integer
@@ -120,3 +278,22 @@ func TimeAfterOrBlock(t int) <-chan time.Time {
}
return time.After(time.Duration(t) * time.Second)
}
// making using the private bus usable, should be upstream:
// TODO: https://github.com/godbus/dbus/issues/15
func SystemBusPrivateUsable() (conn *dbus.Conn, err error) {
conn, err = dbus.SystemBusPrivate()
if err != nil {
return nil, err
}
if err = conn.Auth(nil); err != nil {
conn.Close()
conn = nil
return
}
if err = conn.Hello(); err != nil {
conn.Close()
conn = nil
}
return conn, nil // success
}

misc/example.conf (new file, 1 line)

@@ -0,0 +1 @@
# example mgmt configuration file, currently has no options at the moment!


@@ -1,5 +1,8 @@
#!/bin/bash
# setup a simple go environment
XPWD=`pwd`
ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && cd .. && pwd )" # dir!
cd "${ROOT}" >/dev/null
travis=0
if env | grep -q '^TRAVIS=true$'; then
@@ -19,21 +22,30 @@ if [ $travis -eq 0 ]; then
fi
if [ ! -z "$APT" ]; then
sudo $APT install -y golang mercurial
sudo $APT update
sudo $APT install -y golang make gcc packagekit mercurial
# one of these two golang tools packages should work on debian
sudo $APT install -y golang-golang-x-tools || true
sudo $APT install -y golang-go.tools || true
fi
fi
# build etcd
git clone --recursive https://github.com/coreos/etcd/ && cd etcd
git checkout v2.2.4 # TODO: update to newer versions as needed
goversion=$(go version)
# if 'go version' contains string 'devel', then use git master of etcd...
if [ "${goversion#*devel}" == "$goversion" ]; then
git checkout v2.2.4 # TODO: update to newer versions as needed
fi
[ -x build ] && ./build
mkdir -p ~/bin/
cp bin/etcd ~/bin/
cd -
cd - >/dev/null
rm -rf etcd # clean up to avoid failing on upstream gofmt errors
go get ./... # get all the go dependencies
[ -e "$GOBIN/mgmt" ] && rm -f "$GOBIN/mgmt" # the `go get` version has no -X
go get golang.org/x/tools/cmd/vet # add in `go vet` for travis
go get golang.org/x/tools/cmd/stringer # for automatic stringer-ing
go get github.com/golang/lint/golint # for `golint`-ing
cd "$XPWD" >/dev/null


@@ -1 +0,0 @@
# example mgmt configuration file, currently has not options at the moment!


@@ -18,7 +18,8 @@
package main
import (
"fmt"
"reflect"
"sort"
"testing"
)
@@ -32,6 +33,10 @@ func TestMiscT1(t *testing.T) {
t.Errorf("Result is incorrect.")
}
if Dirname("/foo/") != "/" {
t.Errorf("Result is incorrect.")
}
if Dirname("/") != "" { // TODO: should this equal "/" or "" ?
t.Errorf("Result is incorrect.")
}
@@ -44,15 +49,29 @@ func TestMiscT1(t *testing.T) {
t.Errorf("Result is incorrect.")
}
if Basename("/foo/") != "foo/" {
t.Errorf("Result is incorrect.")
}
if Basename("/") != "/" { // TODO: should this equal "" or "/" ?
t.Errorf("Result is incorrect.")
}
if Basename("") != "" { // TODO: should this equal something different?
t.Errorf("Result is incorrect.")
}
}
func TestMiscT2(t *testing.T) {
// TODO: compare the output with the actual list
p0 := "/"
r0 := []string{""} // TODO: is this correct?
if len(PathSplit(p0)) != len(r0) {
t.Errorf("Result should be: %q.", r0)
t.Errorf("Result should have a length of: %v.", len(r0))
}
p1 := "/foo/bar/baz"
r1 := []string{"", "foo", "bar", "baz"}
if len(PathSplit(p1)) != len(r1) {
@@ -92,6 +111,10 @@ func TestMiscT3(t *testing.T) {
if HasPathPrefix("/foo/bar/baz/", "/foo/bar/baz/dude") != false {
t.Errorf("Result should be false.")
}
if HasPathPrefix("/foo/bar/baz/boo/", "/foo/") != true {
t.Errorf("Result should be true.")
}
}
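The new TestMiscT3 case checks that HasPathPrefix matches whole directory components ("/foo/" is a prefix of "/foo/bar/baz/boo/", but "/foo/bar/baz/dude" is not a prefix of "/foo/bar/baz/"). A minimal sketch consistent with those cases, using an illustrative lowercase name:

```go
package main

import (
	"fmt"
	"strings"
)

// hasPathPrefix reports whether p sits at or below the directory prefix.
// Forcing a trailing slash compares whole components, so "/foo" never
// falsely matches "/foobar".
func hasPathPrefix(p, prefix string) bool {
	if !strings.HasSuffix(prefix, "/") {
		prefix += "/" // treat the prefix as a directory
	}
	return strings.HasPrefix(p, prefix)
}

func main() {
	fmt.Println(hasPathPrefix("/foo/bar/baz/boo/", "/foo/"))        // true
	fmt.Println(hasPathPrefix("/foo/bar/baz/", "/foo/bar/baz/dude")) // false
}
```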
func TestMiscT4(t *testing.T) {
@@ -148,53 +171,574 @@ func TestMiscT5(t *testing.T) {
}
}
func TestMiscT6(t *testing.T) {
func TestMiscT8(t *testing.T) {
type foo struct {
Name string `yaml:"name"`
Type string `yaml:"type"`
Value int `yaml:"value"`
r0 := []string{"/"}
if fullList0 := PathSplitFullReversed("/"); !reflect.DeepEqual(r0, fullList0) {
t.Errorf("PathSplitFullReversed expected: %v; got: %v.", r0, fullList0)
}
obj := foo{"dude", "sweet", 42}
output, ok := ObjToB64(obj)
if ok != true {
t.Errorf("First result should be true.")
r1 := []string{"/foo/bar/baz/file", "/foo/bar/baz/", "/foo/bar/", "/foo/", "/"}
if fullList1 := PathSplitFullReversed("/foo/bar/baz/file"); !reflect.DeepEqual(r1, fullList1) {
t.Errorf("PathSplitFullReversed expected: %v; got: %v.", r1, fullList1)
}
r2 := []string{"/foo/bar/baz/dir/", "/foo/bar/baz/", "/foo/bar/", "/foo/", "/"}
if fullList2 := PathSplitFullReversed("/foo/bar/baz/dir/"); !reflect.DeepEqual(r2, fullList2) {
t.Errorf("PathSplitFullReversed expected: %v; got: %v.", r2, fullList2)
}
}
func TestMiscT9(t *testing.T) {
fileListIn := []string{ // list taken from drbd-utils package
"/etc/drbd.conf",
"/etc/drbd.d/global_common.conf",
"/lib/drbd/drbd",
"/lib/drbd/drbdadm-83",
"/lib/drbd/drbdadm-84",
"/lib/drbd/drbdsetup-83",
"/lib/drbd/drbdsetup-84",
"/usr/lib/drbd/crm-fence-peer.sh",
"/usr/lib/drbd/crm-unfence-peer.sh",
"/usr/lib/drbd/notify-emergency-reboot.sh",
"/usr/lib/drbd/notify-emergency-shutdown.sh",
"/usr/lib/drbd/notify-io-error.sh",
"/usr/lib/drbd/notify-out-of-sync.sh",
"/usr/lib/drbd/notify-pri-lost-after-sb.sh",
"/usr/lib/drbd/notify-pri-lost.sh",
"/usr/lib/drbd/notify-pri-on-incon-degr.sh",
"/usr/lib/drbd/notify-split-brain.sh",
"/usr/lib/drbd/notify.sh",
"/usr/lib/drbd/outdate-peer.sh",
"/usr/lib/drbd/rhcs_fence",
"/usr/lib/drbd/snapshot-resync-target-lvm.sh",
"/usr/lib/drbd/stonith_admin-fence-peer.sh",
"/usr/lib/drbd/unsnapshot-resync-target-lvm.sh",
"/usr/lib/systemd/system/drbd.service",
"/usr/lib/tmpfiles.d/drbd.conf",
"/usr/sbin/drbd-overview",
"/usr/sbin/drbdadm",
"/usr/sbin/drbdmeta",
"/usr/sbin/drbdsetup",
"/usr/share/doc/drbd-utils/COPYING",
"/usr/share/doc/drbd-utils/ChangeLog",
"/usr/share/doc/drbd-utils/README",
"/usr/share/doc/drbd-utils/drbd.conf.example",
"/usr/share/man/man5/drbd.conf-8.3.5.gz",
"/usr/share/man/man5/drbd.conf-8.4.5.gz",
"/usr/share/man/man5/drbd.conf-9.0.5.gz",
"/usr/share/man/man5/drbd.conf.5.gz",
"/usr/share/man/man8/drbd-8.3.8.gz",
"/usr/share/man/man8/drbd-8.4.8.gz",
"/usr/share/man/man8/drbd-9.0.8.gz",
"/usr/share/man/man8/drbd-overview-9.0.8.gz",
"/usr/share/man/man8/drbd-overview.8.gz",
"/usr/share/man/man8/drbd.8.gz",
"/usr/share/man/man8/drbdadm-8.3.8.gz",
"/usr/share/man/man8/drbdadm-8.4.8.gz",
"/usr/share/man/man8/drbdadm-9.0.8.gz",
"/usr/share/man/man8/drbdadm.8.gz",
"/usr/share/man/man8/drbddisk-8.3.8.gz",
"/usr/share/man/man8/drbddisk-8.4.8.gz",
"/usr/share/man/man8/drbdmeta-8.3.8.gz",
"/usr/share/man/man8/drbdmeta-8.4.8.gz",
"/usr/share/man/man8/drbdmeta-9.0.8.gz",
"/usr/share/man/man8/drbdmeta.8.gz",
"/usr/share/man/man8/drbdsetup-8.3.8.gz",
"/usr/share/man/man8/drbdsetup-8.4.8.gz",
"/usr/share/man/man8/drbdsetup-9.0.8.gz",
"/usr/share/man/man8/drbdsetup.8.gz",
"/etc/drbd.d",
"/usr/share/doc/drbd-utils",
"/var/lib/drbd",
}
sort.Strings(fileListIn)
fileListOut := []string{ // fixed up manually
"/etc/drbd.conf",
"/etc/drbd.d/global_common.conf",
"/lib/drbd/drbd",
"/lib/drbd/drbdadm-83",
"/lib/drbd/drbdadm-84",
"/lib/drbd/drbdsetup-83",
"/lib/drbd/drbdsetup-84",
"/usr/lib/drbd/crm-fence-peer.sh",
"/usr/lib/drbd/crm-unfence-peer.sh",
"/usr/lib/drbd/notify-emergency-reboot.sh",
"/usr/lib/drbd/notify-emergency-shutdown.sh",
"/usr/lib/drbd/notify-io-error.sh",
"/usr/lib/drbd/notify-out-of-sync.sh",
"/usr/lib/drbd/notify-pri-lost-after-sb.sh",
"/usr/lib/drbd/notify-pri-lost.sh",
"/usr/lib/drbd/notify-pri-on-incon-degr.sh",
"/usr/lib/drbd/notify-split-brain.sh",
"/usr/lib/drbd/notify.sh",
"/usr/lib/drbd/outdate-peer.sh",
"/usr/lib/drbd/rhcs_fence",
"/usr/lib/drbd/snapshot-resync-target-lvm.sh",
"/usr/lib/drbd/stonith_admin-fence-peer.sh",
"/usr/lib/drbd/unsnapshot-resync-target-lvm.sh",
"/usr/lib/systemd/system/drbd.service",
"/usr/lib/tmpfiles.d/drbd.conf",
"/usr/sbin/drbd-overview",
"/usr/sbin/drbdadm",
"/usr/sbin/drbdmeta",
"/usr/sbin/drbdsetup",
"/usr/share/doc/drbd-utils/COPYING",
"/usr/share/doc/drbd-utils/ChangeLog",
"/usr/share/doc/drbd-utils/README",
"/usr/share/doc/drbd-utils/drbd.conf.example",
"/usr/share/man/man5/drbd.conf-8.3.5.gz",
"/usr/share/man/man5/drbd.conf-8.4.5.gz",
"/usr/share/man/man5/drbd.conf-9.0.5.gz",
"/usr/share/man/man5/drbd.conf.5.gz",
"/usr/share/man/man8/drbd-8.3.8.gz",
"/usr/share/man/man8/drbd-8.4.8.gz",
"/usr/share/man/man8/drbd-9.0.8.gz",
"/usr/share/man/man8/drbd-overview-9.0.8.gz",
"/usr/share/man/man8/drbd-overview.8.gz",
"/usr/share/man/man8/drbd.8.gz",
"/usr/share/man/man8/drbdadm-8.3.8.gz",
"/usr/share/man/man8/drbdadm-8.4.8.gz",
"/usr/share/man/man8/drbdadm-9.0.8.gz",
"/usr/share/man/man8/drbdadm.8.gz",
"/usr/share/man/man8/drbddisk-8.3.8.gz",
"/usr/share/man/man8/drbddisk-8.4.8.gz",
"/usr/share/man/man8/drbdmeta-8.3.8.gz",
"/usr/share/man/man8/drbdmeta-8.4.8.gz",
"/usr/share/man/man8/drbdmeta-9.0.8.gz",
"/usr/share/man/man8/drbdmeta.8.gz",
"/usr/share/man/man8/drbdsetup-8.3.8.gz",
"/usr/share/man/man8/drbdsetup-8.4.8.gz",
"/usr/share/man/man8/drbdsetup-9.0.8.gz",
"/usr/share/man/man8/drbdsetup.8.gz",
"/etc/drbd.d/", // added trailing slash
"/usr/share/doc/drbd-utils/", // added trailing slash
"/var/lib/drbd", // can't be fixed :(
}
sort.Strings(fileListOut)
dirify := DirifyFileList(fileListIn, false) // TODO: test with true
sort.Strings(dirify)
equals := reflect.DeepEqual(fileListOut, dirify)
if a, b := len(fileListOut), len(dirify); a != b {
t.Errorf("DirifyFileList counts didn't match: %d != %d", a, b)
} else if !equals {
t.Error("DirifyFileList did not match expected!")
for i := 0; i < len(dirify); i++ {
if fileListOut[i] != dirify[i] {
t.Errorf("# %d: %v <> %v", i, fileListOut[i], dirify[i])
}
var data foo
if B64ToObj(output, &data) != true {
t.Errorf("Second result should be true.")
}
// TODO: there is probably a better way to compare these two...
if fmt.Sprintf("%+v\n", obj) != fmt.Sprintf("%+v\n", data) {
t.Errorf("Strings should match.")
}
}
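The ObjToB64/B64ToObj pair being tested above round-trips a struct through an encoded string. The project's exact encoding isn't shown in this hunk; a plausible self-contained sketch using gob plus base64 (function and type names are illustrative):

```go
package main

import (
	"bytes"
	"encoding/base64"
	"encoding/gob"
	"fmt"
)

// objToB64 gob-encodes obj and wraps the bytes in base64; ok is false on error.
func objToB64(obj interface{}) (string, bool) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(obj); err != nil {
		return "", false
	}
	return base64.StdEncoding.EncodeToString(buf.Bytes()), true
}

// b64ToObj reverses objToB64, decoding into the pointer obj.
func b64ToObj(s string, obj interface{}) bool {
	b, err := base64.StdEncoding.DecodeString(s)
	if err != nil {
		return false
	}
	return gob.NewDecoder(bytes.NewReader(b)).Decode(obj) == nil
}

type foo struct {
	Name  string
	Value int
}

func main() {
	s, ok := objToB64(foo{"dude", 42})
	var out foo
	fmt.Println(ok, b64ToObj(s, &out), out.Name, out.Value)
}
```

Note gob only encodes exported fields, which is why the test's anonymous embedded struct must be public.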
func TestMiscT7(t *testing.T) {
func TestMiscT10(t *testing.T) {
fileListIn := []string{ // fake package list
"/etc/drbd.conf",
"/usr/share/man/man8/drbdsetup.8.gz",
"/etc/drbd.d",
"/etc/drbd.d/foo",
"/var/lib/drbd",
"/var/somedir/",
}
sort.Strings(fileListIn)
type Foo struct {
Name string `yaml:"name"`
Type string `yaml:"type"`
Value int `yaml:"value"`
fileListOut := []string{ // fixed up manually
"/etc/drbd.conf",
"/usr/share/man/man8/drbdsetup.8.gz",
"/etc/drbd.d/", // added trailing slash
"/etc/drbd.d/foo",
"/var/lib/drbd", // can't be fixed :(
"/var/somedir/", // stays the same
}
sort.Strings(fileListOut)
type bar struct {
Foo `yaml:",inline"` // anonymous struct must be public!
Comment string `yaml:"comment"`
dirify := DirifyFileList(fileListIn, false) // TODO: test with true
sort.Strings(dirify)
equals := reflect.DeepEqual(fileListOut, dirify)
if a, b := len(fileListOut), len(dirify); a != b {
t.Errorf("DirifyFileList counts didn't match: %d != %d", a, b)
} else if !equals {
t.Error("DirifyFileList did not match expected!")
for i := 0; i < len(dirify); i++ {
if fileListOut[i] != dirify[i] {
t.Errorf("# %d: %v <> %v", i, fileListOut[i], dirify[i])
}
}
}
}
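TestMiscT9 and TestMiscT10 pin down DirifyFileList's behaviour: an entry gains a trailing slash when some other entry lives beneath it, while a childless path like "/var/lib/drbd" "can't be fixed". A minimal sketch consistent with those cases (the lowercase name is illustrative, and the second bool argument the tests pass as false is omitted here):

```go
package main

import (
	"fmt"
	"strings"
)

// dirifyFileList appends a trailing slash to any entry that some other
// entry lives beneath, marking it unambiguously as a directory.
func dirifyFileList(list []string) []string {
	out := make([]string, 0, len(list))
	for _, p := range list {
		isDir := strings.HasSuffix(p, "/")
		for _, q := range list {
			if q != p && strings.HasPrefix(q, p+"/") {
				isDir = true
				break
			}
		}
		if isDir && !strings.HasSuffix(p, "/") {
			p += "/"
		}
		out = append(out, p)
	}
	return out
}

func main() {
	fmt.Println(dirifyFileList([]string{"/etc/drbd.d", "/etc/drbd.d/foo", "/var/lib/drbd"}))
	// [/etc/drbd.d/ /etc/drbd.d/foo /var/lib/drbd]
}
```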
func TestMiscT11(t *testing.T) {
in1 := []string{"/", "/usr/", "/usr/lib/", "/usr/share/"} // input
ex1 := []string{"/usr/lib/", "/usr/share/"} // expected
sort.Strings(ex1)
out1 := RemoveCommonFilePrefixes(in1)
sort.Strings(out1)
if !reflect.DeepEqual(ex1, out1) {
t.Errorf("RemoveCommonFilePrefixes expected: %v; got: %v.", ex1, out1)
}
in2 := []string{"/", "/usr/"}
ex2 := []string{"/usr/"}
sort.Strings(ex2)
out2 := RemoveCommonFilePrefixes(in2)
sort.Strings(out2)
if !reflect.DeepEqual(ex2, out2) {
t.Errorf("RemoveCommonFilePrefixes expected: %v; got: %v.", ex2, out2)
}
in3 := []string{"/"}
ex3 := []string{"/"}
out3 := RemoveCommonFilePrefixes(in3)
if !reflect.DeepEqual(ex3, out3) {
t.Errorf("RemoveCommonFilePrefixes expected: %v; got: %v.", ex3, out3)
}
in4 := []string{"/usr/bin/foo", "/usr/bin/bar", "/usr/lib/", "/usr/share/"}
ex4 := []string{"/usr/bin/foo", "/usr/bin/bar", "/usr/lib/", "/usr/share/"}
sort.Strings(ex4)
out4 := RemoveCommonFilePrefixes(in4)
sort.Strings(out4)
if !reflect.DeepEqual(ex4, out4) {
t.Errorf("RemoveCommonFilePrefixes expected: %v; got: %v.", ex4, out4)
}
in5 := []string{"/usr/bin/foo", "/usr/bin/bar", "/usr/lib/", "/usr/share/", "/usr/bin"}
ex5 := []string{"/usr/bin/foo", "/usr/bin/bar", "/usr/lib/", "/usr/share/"}
sort.Strings(ex5)
out5 := RemoveCommonFilePrefixes(in5)
sort.Strings(out5)
if !reflect.DeepEqual(ex5, out5) {
t.Errorf("RemoveCommonFilePrefixes expected: %v; got: %v.", ex5, out5)
}
in6 := []string{"/etc/drbd.d/", "/lib/drbd/", "/usr/lib/drbd/", "/usr/lib/systemd/system/", "/usr/lib/tmpfiles.d/", "/usr/sbin/", "/usr/share/doc/drbd-utils/", "/usr/share/man/man5/", "/usr/share/man/man8/", "/usr/share/doc/", "/var/lib/"}
ex6 := []string{"/etc/drbd.d/", "/lib/drbd/", "/usr/lib/drbd/", "/usr/lib/systemd/system/", "/usr/lib/tmpfiles.d/", "/usr/sbin/", "/usr/share/doc/drbd-utils/", "/usr/share/man/man5/", "/usr/share/man/man8/", "/var/lib/"}
sort.Strings(ex6)
out6 := RemoveCommonFilePrefixes(in6)
sort.Strings(out6)
if !reflect.DeepEqual(ex6, out6) {
t.Errorf("RemoveCommonFilePrefixes expected: %v; got: %v.", ex6, out6)
}
in7 := []string{"/etc/", "/lib/", "/usr/lib/", "/usr/lib/systemd/", "/usr/", "/usr/share/doc/", "/usr/share/man/", "/var/"}
ex7 := []string{"/etc/", "/lib/", "/usr/lib/systemd/", "/usr/share/doc/", "/usr/share/man/", "/var/"}
sort.Strings(ex7)
out7 := RemoveCommonFilePrefixes(in7)
sort.Strings(out7)
if !reflect.DeepEqual(ex7, out7) {
t.Errorf("RemoveCommonFilePrefixes expected: %v; got: %v.", ex7, out7)
}
in8 := []string{
"/etc/drbd.conf",
"/etc/drbd.d/global_common.conf",
"/lib/drbd/drbd",
"/lib/drbd/drbdadm-83",
"/lib/drbd/drbdadm-84",
"/lib/drbd/drbdsetup-83",
"/lib/drbd/drbdsetup-84",
"/usr/lib/drbd/crm-fence-peer.sh",
"/usr/lib/drbd/crm-unfence-peer.sh",
"/usr/lib/drbd/notify-emergency-reboot.sh",
"/usr/lib/drbd/notify-emergency-shutdown.sh",
"/usr/lib/drbd/notify-io-error.sh",
"/usr/lib/drbd/notify-out-of-sync.sh",
"/usr/lib/drbd/notify-pri-lost-after-sb.sh",
"/usr/lib/drbd/notify-pri-lost.sh",
"/usr/lib/drbd/notify-pri-on-incon-degr.sh",
"/usr/lib/drbd/notify-split-brain.sh",
"/usr/lib/drbd/notify.sh",
"/usr/lib/drbd/outdate-peer.sh",
"/usr/lib/drbd/rhcs_fence",
"/usr/lib/drbd/snapshot-resync-target-lvm.sh",
"/usr/lib/drbd/stonith_admin-fence-peer.sh",
"/usr/lib/drbd/unsnapshot-resync-target-lvm.sh",
"/usr/lib/systemd/system/drbd.service",
"/usr/lib/tmpfiles.d/drbd.conf",
"/usr/sbin/drbd-overview",
"/usr/sbin/drbdadm",
"/usr/sbin/drbdmeta",
"/usr/sbin/drbdsetup",
"/usr/share/doc/drbd-utils/COPYING",
"/usr/share/doc/drbd-utils/ChangeLog",
"/usr/share/doc/drbd-utils/README",
"/usr/share/doc/drbd-utils/drbd.conf.example",
"/usr/share/man/man5/drbd.conf-8.3.5.gz",
"/usr/share/man/man5/drbd.conf-8.4.5.gz",
"/usr/share/man/man5/drbd.conf-9.0.5.gz",
"/usr/share/man/man5/drbd.conf.5.gz",
"/usr/share/man/man8/drbd-8.3.8.gz",
"/usr/share/man/man8/drbd-8.4.8.gz",
"/usr/share/man/man8/drbd-9.0.8.gz",
"/usr/share/man/man8/drbd-overview-9.0.8.gz",
"/usr/share/man/man8/drbd-overview.8.gz",
"/usr/share/man/man8/drbd.8.gz",
"/usr/share/man/man8/drbdadm-8.3.8.gz",
"/usr/share/man/man8/drbdadm-8.4.8.gz",
"/usr/share/man/man8/drbdadm-9.0.8.gz",
"/usr/share/man/man8/drbdadm.8.gz",
"/usr/share/man/man8/drbddisk-8.3.8.gz",
"/usr/share/man/man8/drbddisk-8.4.8.gz",
"/usr/share/man/man8/drbdmeta-8.3.8.gz",
"/usr/share/man/man8/drbdmeta-8.4.8.gz",
"/usr/share/man/man8/drbdmeta-9.0.8.gz",
"/usr/share/man/man8/drbdmeta.8.gz",
"/usr/share/man/man8/drbdsetup-8.3.8.gz",
"/usr/share/man/man8/drbdsetup-8.4.8.gz",
"/usr/share/man/man8/drbdsetup-9.0.8.gz",
"/usr/share/man/man8/drbdsetup.8.gz",
"/etc/drbd.d/",
"/usr/share/doc/drbd-utils/",
"/var/lib/drbd",
}
ex8 := []string{
"/etc/drbd.conf",
"/etc/drbd.d/global_common.conf",
"/lib/drbd/drbd",
"/lib/drbd/drbdadm-83",
"/lib/drbd/drbdadm-84",
"/lib/drbd/drbdsetup-83",
"/lib/drbd/drbdsetup-84",
"/usr/lib/drbd/crm-fence-peer.sh",
"/usr/lib/drbd/crm-unfence-peer.sh",
"/usr/lib/drbd/notify-emergency-reboot.sh",
"/usr/lib/drbd/notify-emergency-shutdown.sh",
"/usr/lib/drbd/notify-io-error.sh",
"/usr/lib/drbd/notify-out-of-sync.sh",
"/usr/lib/drbd/notify-pri-lost-after-sb.sh",
"/usr/lib/drbd/notify-pri-lost.sh",
"/usr/lib/drbd/notify-pri-on-incon-degr.sh",
"/usr/lib/drbd/notify-split-brain.sh",
"/usr/lib/drbd/notify.sh",
"/usr/lib/drbd/outdate-peer.sh",
"/usr/lib/drbd/rhcs_fence",
"/usr/lib/drbd/snapshot-resync-target-lvm.sh",
"/usr/lib/drbd/stonith_admin-fence-peer.sh",
"/usr/lib/drbd/unsnapshot-resync-target-lvm.sh",
"/usr/lib/systemd/system/drbd.service",
"/usr/lib/tmpfiles.d/drbd.conf",
"/usr/sbin/drbd-overview",
"/usr/sbin/drbdadm",
"/usr/sbin/drbdmeta",
"/usr/sbin/drbdsetup",
"/usr/share/doc/drbd-utils/COPYING",
"/usr/share/doc/drbd-utils/ChangeLog",
"/usr/share/doc/drbd-utils/README",
"/usr/share/doc/drbd-utils/drbd.conf.example",
"/usr/share/man/man5/drbd.conf-8.3.5.gz",
"/usr/share/man/man5/drbd.conf-8.4.5.gz",
"/usr/share/man/man5/drbd.conf-9.0.5.gz",
"/usr/share/man/man5/drbd.conf.5.gz",
"/usr/share/man/man8/drbd-8.3.8.gz",
"/usr/share/man/man8/drbd-8.4.8.gz",
"/usr/share/man/man8/drbd-9.0.8.gz",
"/usr/share/man/man8/drbd-overview-9.0.8.gz",
"/usr/share/man/man8/drbd-overview.8.gz",
"/usr/share/man/man8/drbd.8.gz",
"/usr/share/man/man8/drbdadm-8.3.8.gz",
"/usr/share/man/man8/drbdadm-8.4.8.gz",
"/usr/share/man/man8/drbdadm-9.0.8.gz",
"/usr/share/man/man8/drbdadm.8.gz",
"/usr/share/man/man8/drbddisk-8.3.8.gz",
"/usr/share/man/man8/drbddisk-8.4.8.gz",
"/usr/share/man/man8/drbdmeta-8.3.8.gz",
"/usr/share/man/man8/drbdmeta-8.4.8.gz",
"/usr/share/man/man8/drbdmeta-9.0.8.gz",
"/usr/share/man/man8/drbdmeta.8.gz",
"/usr/share/man/man8/drbdsetup-8.3.8.gz",
"/usr/share/man/man8/drbdsetup-8.4.8.gz",
"/usr/share/man/man8/drbdsetup-9.0.8.gz",
"/usr/share/man/man8/drbdsetup.8.gz",
"/var/lib/drbd",
}
sort.Strings(ex8)
out8 := RemoveCommonFilePrefixes(in8)
sort.Strings(out8)
if !reflect.DeepEqual(ex8, out8) {
t.Errorf("RemoveCommonFilePrefixes expected: %v; got: %v.", ex8, out8)
}
in9 := []string{
"/etc/drbd.conf",
"/etc/drbd.d/",
"/lib/drbd/drbd",
"/lib/drbd/",
"/lib/drbd/",
"/lib/drbd/",
"/usr/lib/drbd/",
"/usr/lib/drbd/",
"/usr/lib/drbd/",
"/usr/lib/drbd/",
"/usr/lib/drbd/",
"/usr/lib/systemd/system/",
"/usr/lib/tmpfiles.d/",
"/usr/sbin/",
"/usr/sbin/",
"/usr/share/doc/drbd-utils/",
"/usr/share/doc/drbd-utils/",
"/usr/share/man/man5/",
"/usr/share/man/man5/",
"/usr/share/man/man8/",
"/usr/share/man/man8/",
"/usr/share/man/man8/",
"/etc/drbd.d/",
"/usr/share/doc/drbd-utils/",
"/var/lib/drbd",
}
ex9 := []string{
"/etc/drbd.conf",
"/etc/drbd.d/",
"/lib/drbd/drbd",
"/usr/lib/drbd/",
"/usr/lib/systemd/system/",
"/usr/lib/tmpfiles.d/",
"/usr/sbin/",
"/usr/share/doc/drbd-utils/",
"/usr/share/man/man5/",
"/usr/share/man/man8/",
"/var/lib/drbd",
}
sort.Strings(ex9)
out9 := RemoveCommonFilePrefixes(in9)
sort.Strings(out9)
if !reflect.DeepEqual(ex9, out9) {
t.Errorf("RemoveCommonFilePrefixes expected: %v; got: %v.", ex9, out9)
}
in10 := []string{
"/etc/drbd.conf",
"/etc/drbd.d/", // watch me, i'm a dir
"/etc/drbd.d/global_common.conf", // and watch me i'm a file!
"/lib/drbd/drbd",
"/lib/drbd/drbdadm-83",
"/lib/drbd/drbdadm-84",
"/lib/drbd/drbdsetup-83",
"/lib/drbd/drbdsetup-84",
"/usr/lib/drbd/crm-fence-peer.sh",
"/usr/lib/drbd/crm-unfence-peer.sh",
"/usr/lib/drbd/notify-emergency-reboot.sh",
"/usr/lib/drbd/notify-emergency-shutdown.sh",
"/usr/lib/drbd/notify-io-error.sh",
"/usr/lib/drbd/notify-out-of-sync.sh",
"/usr/lib/drbd/notify-pri-lost-after-sb.sh",
"/usr/lib/drbd/notify-pri-lost.sh",
"/usr/lib/drbd/notify-pri-on-incon-degr.sh",
"/usr/lib/drbd/notify-split-brain.sh",
"/usr/lib/drbd/notify.sh",
"/usr/lib/drbd/outdate-peer.sh",
"/usr/lib/drbd/rhcs_fence",
"/usr/lib/drbd/snapshot-resync-target-lvm.sh",
"/usr/lib/drbd/stonith_admin-fence-peer.sh",
"/usr/lib/drbd/unsnapshot-resync-target-lvm.sh",
"/usr/lib/systemd/system/drbd.service",
"/usr/lib/tmpfiles.d/drbd.conf",
"/usr/sbin/drbd-overview",
"/usr/sbin/drbdadm",
"/usr/sbin/drbdmeta",
"/usr/sbin/drbdsetup",
"/usr/share/doc/drbd-utils/", // watch me, i'm a dir too
"/usr/share/doc/drbd-utils/COPYING",
"/usr/share/doc/drbd-utils/ChangeLog",
"/usr/share/doc/drbd-utils/README",
"/usr/share/doc/drbd-utils/drbd.conf.example",
"/usr/share/man/man5/drbd.conf-8.3.5.gz",
"/usr/share/man/man5/drbd.conf-8.4.5.gz",
"/usr/share/man/man5/drbd.conf-9.0.5.gz",
"/usr/share/man/man5/drbd.conf.5.gz",
"/usr/share/man/man8/drbd-8.3.8.gz",
"/usr/share/man/man8/drbd-8.4.8.gz",
"/usr/share/man/man8/drbd-9.0.8.gz",
"/usr/share/man/man8/drbd-overview-9.0.8.gz",
"/usr/share/man/man8/drbd-overview.8.gz",
"/usr/share/man/man8/drbd.8.gz",
"/usr/share/man/man8/drbdadm-8.3.8.gz",
"/usr/share/man/man8/drbdadm-8.4.8.gz",
"/usr/share/man/man8/drbdadm-9.0.8.gz",
"/usr/share/man/man8/drbdadm.8.gz",
"/usr/share/man/man8/drbddisk-8.3.8.gz",
"/usr/share/man/man8/drbddisk-8.4.8.gz",
"/usr/share/man/man8/drbdmeta-8.3.8.gz",
"/usr/share/man/man8/drbdmeta-8.4.8.gz",
"/usr/share/man/man8/drbdmeta-9.0.8.gz",
"/usr/share/man/man8/drbdmeta.8.gz",
"/usr/share/man/man8/drbdsetup-8.3.8.gz",
"/usr/share/man/man8/drbdsetup-8.4.8.gz",
"/usr/share/man/man8/drbdsetup-9.0.8.gz",
"/usr/share/man/man8/drbdsetup.8.gz",
"/var/lib/drbd",
}
ex10 := []string{
"/etc/drbd.conf",
"/etc/drbd.d/global_common.conf",
"/lib/drbd/drbd",
"/lib/drbd/drbdadm-83",
"/lib/drbd/drbdadm-84",
"/lib/drbd/drbdsetup-83",
"/lib/drbd/drbdsetup-84",
"/usr/lib/drbd/crm-fence-peer.sh",
"/usr/lib/drbd/crm-unfence-peer.sh",
"/usr/lib/drbd/notify-emergency-reboot.sh",
"/usr/lib/drbd/notify-emergency-shutdown.sh",
"/usr/lib/drbd/notify-io-error.sh",
"/usr/lib/drbd/notify-out-of-sync.sh",
"/usr/lib/drbd/notify-pri-lost-after-sb.sh",
"/usr/lib/drbd/notify-pri-lost.sh",
"/usr/lib/drbd/notify-pri-on-incon-degr.sh",
"/usr/lib/drbd/notify-split-brain.sh",
"/usr/lib/drbd/notify.sh",
"/usr/lib/drbd/outdate-peer.sh",
"/usr/lib/drbd/rhcs_fence",
"/usr/lib/drbd/snapshot-resync-target-lvm.sh",
"/usr/lib/drbd/stonith_admin-fence-peer.sh",
"/usr/lib/drbd/unsnapshot-resync-target-lvm.sh",
"/usr/lib/systemd/system/drbd.service",
"/usr/lib/tmpfiles.d/drbd.conf",
"/usr/sbin/drbd-overview",
"/usr/sbin/drbdadm",
"/usr/sbin/drbdmeta",
"/usr/sbin/drbdsetup",
"/usr/share/doc/drbd-utils/COPYING",
"/usr/share/doc/drbd-utils/ChangeLog",
"/usr/share/doc/drbd-utils/README",
"/usr/share/doc/drbd-utils/drbd.conf.example",
"/usr/share/man/man5/drbd.conf-8.3.5.gz",
"/usr/share/man/man5/drbd.conf-8.4.5.gz",
"/usr/share/man/man5/drbd.conf-9.0.5.gz",
"/usr/share/man/man5/drbd.conf.5.gz",
"/usr/share/man/man8/drbd-8.3.8.gz",
"/usr/share/man/man8/drbd-8.4.8.gz",
"/usr/share/man/man8/drbd-9.0.8.gz",
"/usr/share/man/man8/drbd-overview-9.0.8.gz",
"/usr/share/man/man8/drbd-overview.8.gz",
"/usr/share/man/man8/drbd.8.gz",
"/usr/share/man/man8/drbdadm-8.3.8.gz",
"/usr/share/man/man8/drbdadm-8.4.8.gz",
"/usr/share/man/man8/drbdadm-9.0.8.gz",
"/usr/share/man/man8/drbdadm.8.gz",
"/usr/share/man/man8/drbddisk-8.3.8.gz",
"/usr/share/man/man8/drbddisk-8.4.8.gz",
"/usr/share/man/man8/drbdmeta-8.3.8.gz",
"/usr/share/man/man8/drbdmeta-8.4.8.gz",
"/usr/share/man/man8/drbdmeta-9.0.8.gz",
"/usr/share/man/man8/drbdmeta.8.gz",
"/usr/share/man/man8/drbdsetup-8.3.8.gz",
"/usr/share/man/man8/drbdsetup-8.4.8.gz",
"/usr/share/man/man8/drbdsetup-9.0.8.gz",
"/usr/share/man/man8/drbdsetup.8.gz",
"/var/lib/drbd",
}
sort.Strings(ex10)
out10 := RemoveCommonFilePrefixes(in10)
sort.Strings(out10)
if !reflect.DeepEqual(ex10, out10) {
t.Errorf("RemoveCommonFilePrefixes expected: %v; got: %v.", ex10, out10)
for i := 0; i < len(ex10); i++ {
if ex10[i] != out10[i] {
t.Errorf("# %d: %v <> %v", i, ex10[i], out10[i])
}
obj := bar{Foo{"dude", "sweet", 42}, "hello world"}
output, ok := ObjToB64(obj)
if ok != true {
t.Errorf("First result should be true.")
}
var data bar
if B64ToObj(output, &data) != true {
t.Errorf("Second result should be true.")
}
// TODO: there is probably a better way to compare these two...
if fmt.Sprintf("%+v\n", obj) != fmt.Sprintf("%+v\n", data) {
t.Errorf("Strings should match.")
}
}
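The TestMiscT11 cases above characterize RemoveCommonFilePrefixes: duplicates collapse, and any path that is a prefix of another surviving path is dropped, leaving only the deepest entries. A minimal sketch matching those cases (illustrative name; the real implementation may well compare whole path components rather than raw string prefixes):

```go
package main

import (
	"fmt"
	"strings"
)

// removeCommonFilePrefixes deduplicates the list and drops every entry
// that is a string prefix of some other entry, keeping only the leaves.
func removeCommonFilePrefixes(list []string) []string {
	seen := map[string]struct{}{}
	uniq := []string{}
	for _, p := range list {
		if _, ok := seen[p]; !ok {
			seen[p] = struct{}{}
			uniq = append(uniq, p)
		}
	}
	out := []string{}
	for _, p := range uniq {
		redundant := false
		for _, q := range uniq {
			if q != p && strings.HasPrefix(q, p) {
				redundant = true // p is subsumed by the deeper path q
				break
			}
		}
		if !redundant {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	fmt.Println(removeCommonFilePrefixes([]string{"/", "/usr/", "/usr/lib/", "/usr/share/"}))
	// [/usr/lib/ /usr/share/]
}
```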

noop.go Normal file

@@ -0,0 +1,143 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"encoding/gob"
"log"
)
func init() {
gob.Register(&NoopRes{})
}
type NoopRes struct {
BaseRes `yaml:",inline"`
Comment string `yaml:"comment"` // extra field for example purposes
}
func NewNoopRes(name string) *NoopRes {
obj := &NoopRes{
BaseRes: BaseRes{
Name: name,
},
Comment: "",
}
obj.Init()
return obj
}
func (obj *NoopRes) Init() {
obj.BaseRes.kind = "Noop"
obj.BaseRes.Init() // call base init, b/c we're overriding
}
// Validate reports whether the params passed in are valid data.
// FIXME: where should this get called ?
func (obj *NoopRes) Validate() bool {
return true
}
func (obj *NoopRes) Watch(processChan chan Event) {
if obj.IsWatching() {
return
}
obj.SetWatching(true)
defer obj.SetWatching(false)
//vertex := obj.vertex // stored with SetVertex
var send = false // send event?
var exit = false
for {
obj.SetState(resStateWatching) // reset
select {
case event := <-obj.events:
obj.SetConvergedState(resConvergedNil)
// we avoid sending events on unpause
if exit, send = obj.ReadEvent(&event); exit {
return // exit
}
case <-TimeAfterOrBlock(obj.ctimeout):
obj.SetConvergedState(resConvergedTimeout)
obj.converged <- true
continue
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
// only do this on certain types of events
//obj.isStateOK = false // something made state dirty
resp := NewResp()
processChan <- Event{eventNil, resp, "", true} // trigger process
resp.ACKWait() // wait for the ACK()
}
}
}
// CheckApply method for Noop resource. Does nothing, returns happy!
func (obj *NoopRes) CheckApply(apply bool) (stateok bool, err error) {
log.Printf("%v[%v]: CheckApply(%t)", obj.Kind(), obj.GetName(), apply)
return true, nil // state is always okay
}
type NoopUUID struct {
BaseUUID
name string
}
func (obj *NoopRes) AutoEdges() AutoEdge {
return nil
}
// include all params to make a unique identification of this object
// most resources only return one, although some resources return multiple
func (obj *NoopRes) GetUUIDs() []ResUUID {
x := &NoopUUID{
BaseUUID: BaseUUID{name: obj.GetName(), kind: obj.Kind()},
name: obj.Name,
}
return []ResUUID{x}
}
func (obj *NoopRes) GroupCmp(r Res) bool {
_, ok := r.(*NoopRes)
if !ok {
// NOTE: technically we could group a noop into any other
// resource, if that resource knew how to handle it, although,
// since the mechanics of inter-kind resource grouping are
// tricky, avoid doing this until there's a good reason.
return false
}
return true // noop resources can always be grouped together!
}
func (obj *NoopRes) Compare(res Res) bool {
switch res.(type) {
// we can only compare NoopRes to others of the same resource
case *NoopRes:
res := res.(*NoopRes)
if obj.Name != res.Name {
return false
}
default:
return false
}
return true
}

packagekit.go Normal file

@@ -0,0 +1,912 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// DOCS: https://www.freedesktop.org/software/PackageKit/gtk-doc/index.html
//package packagekit // TODO
package main
import (
"fmt"
"github.com/godbus/dbus"
"log"
"runtime"
"strings"
)
const (
PK_DEBUG = false
PARANOID = false // enable if you see any ghosts
)
const (
// FIXME: if PkBufferSize is too low, install seems to drop signals
PkBufferSize = 1000
// TODO: the PkSignalTimeout value might be too low
PkSignalPackageTimeout = 60 // 60 seconds, arbitrary
PkSignalDestroyTimeout = 15 // 15 seconds, arbitrary
PkPath = "/org/freedesktop/PackageKit"
PkIface = "org.freedesktop.PackageKit"
PkIfaceTransaction = PkIface + ".Transaction"
dbusAddMatch = "org.freedesktop.DBus.AddMatch"
)
var (
// GOARCH's: 386, amd64, arm, arm64, mips64, mips64le, ppc64, ppc64le
PkArchMap = map[string]string{ // map of PackageKit arch to GOARCH
// TODO: add more values
// noarch
"noarch": "ANY", // special value "ANY"
// fedora
"x86_64": "amd64",
"aarch64": "arm64",
// debian, from: https://www.debian.org/ports/
"amd64": "amd64",
"arm64": "arm64",
"i386": "386",
"i486": "386",
"i586": "386",
"i686": "386",
}
)
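A package whose PackageKit arch maps to the special value "ANY" (e.g. noarch) is usable everywhere; otherwise the mapped value must equal runtime.GOARCH. A small sketch of that lookup, using a trimmed copy of the map above (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"runtime"
)

// a trimmed copy of PkArchMap, for illustration only
var pkArchMap = map[string]string{
	"noarch": "ANY", // special value "ANY"
	"x86_64": "amd64",
	"i686":   "386",
}

// archMatches reports whether a PackageKit arch is usable on this machine.
func archMatches(pkArch string) bool {
	arch, ok := pkArchMap[pkArch]
	if !ok {
		return false // unknown arch: be conservative
	}
	return arch == "ANY" || arch == runtime.GOARCH
}

func main() {
	fmt.Println(archMatches("noarch")) // true on any platform
}
```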
//type enum_filter uint64
// https://github.com/hughsie/PackageKit/blob/master/lib/packagekit-glib2/pk-enum.c
const ( //static const PkEnumMatch enum_filter[]
PK_FILTER_ENUM_UNKNOWN uint64 = 1 << iota // "unknown"
PK_FILTER_ENUM_NONE // "none"
PK_FILTER_ENUM_INSTALLED // "installed"
PK_FILTER_ENUM_NOT_INSTALLED // "~installed"
PK_FILTER_ENUM_DEVELOPMENT // "devel"
PK_FILTER_ENUM_NOT_DEVELOPMENT // "~devel"
PK_FILTER_ENUM_GUI // "gui"
PK_FILTER_ENUM_NOT_GUI // "~gui"
PK_FILTER_ENUM_FREE // "free"
PK_FILTER_ENUM_NOT_FREE // "~free"
PK_FILTER_ENUM_VISIBLE // "visible"
PK_FILTER_ENUM_NOT_VISIBLE // "~visible"
PK_FILTER_ENUM_SUPPORTED // "supported"
PK_FILTER_ENUM_NOT_SUPPORTED // "~supported"
PK_FILTER_ENUM_BASENAME // "basename"
PK_FILTER_ENUM_NOT_BASENAME // "~basename"
PK_FILTER_ENUM_NEWEST // "newest"
PK_FILTER_ENUM_NOT_NEWEST // "~newest"
PK_FILTER_ENUM_ARCH // "arch"
PK_FILTER_ENUM_NOT_ARCH // "~arch"
PK_FILTER_ENUM_SOURCE // "source"
PK_FILTER_ENUM_NOT_SOURCE // "~source"
PK_FILTER_ENUM_COLLECTIONS // "collections"
PK_FILTER_ENUM_NOT_COLLECTIONS // "~collections"
PK_FILTER_ENUM_APPLICATION // "application"
PK_FILTER_ENUM_NOT_APPLICATION // "~application"
PK_FILTER_ENUM_DOWNLOADED // "downloaded"
PK_FILTER_ENUM_NOT_DOWNLOADED // "~downloaded"
)
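Because each filter constant is a distinct bit (the `1 << iota` pattern above), several filters combine into one uint64 by OR-ing, and membership is tested with AND. A tiny self-contained demonstration of the same pattern (local names, not the PK_FILTER_ENUM_* constants themselves):

```go
package main

import "fmt"

// the same 1 << iota bitmask pattern as the PK_FILTER_ENUM_* constants
const (
	filterUnknown uint64 = 1 << iota // 1
	filterNone                       // 2
	filterInstalled                  // 4
	filterNotInstalled               // 8
)

func main() {
	// filters are OR'd together into a single bitmask...
	filter := filterInstalled | filterNotInstalled
	// ...and individual flags are tested with a bitwise AND
	fmt.Println(filter&filterInstalled != 0) // true
	fmt.Println(filter&filterNone != 0)      // false
	fmt.Println(filter)                      // 12
}
```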
const ( //static const PkEnumMatch enum_transaction_flag[]
PK_TRANSACTION_FLAG_ENUM_NONE uint64 = 1 << iota // "none"
PK_TRANSACTION_FLAG_ENUM_ONLY_TRUSTED // "only-trusted"
PK_TRANSACTION_FLAG_ENUM_SIMULATE // "simulate"
PK_TRANSACTION_FLAG_ENUM_ONLY_DOWNLOAD // "only-download"
PK_TRANSACTION_FLAG_ENUM_ALLOW_REINSTALL // "allow-reinstall"
PK_TRANSACTION_FLAG_ENUM_JUST_REINSTALL // "just-reinstall"
PK_TRANSACTION_FLAG_ENUM_ALLOW_DOWNGRADE // "allow-downgrade"
)
const ( //typedef enum
PK_INFO_ENUM_UNKNOWN uint64 = 1 << iota
PK_INFO_ENUM_INSTALLED
PK_INFO_ENUM_AVAILABLE
PK_INFO_ENUM_LOW
PK_INFO_ENUM_ENHANCEMENT
PK_INFO_ENUM_NORMAL
PK_INFO_ENUM_BUGFIX
PK_INFO_ENUM_IMPORTANT
PK_INFO_ENUM_SECURITY
PK_INFO_ENUM_BLOCKED
PK_INFO_ENUM_DOWNLOADING
PK_INFO_ENUM_UPDATING
PK_INFO_ENUM_INSTALLING
PK_INFO_ENUM_REMOVING
PK_INFO_ENUM_CLEANUP
PK_INFO_ENUM_OBSOLETING
PK_INFO_ENUM_COLLECTION_INSTALLED
PK_INFO_ENUM_COLLECTION_AVAILABLE
PK_INFO_ENUM_FINISHED
PK_INFO_ENUM_REINSTALLING
PK_INFO_ENUM_DOWNGRADING
PK_INFO_ENUM_PREPARING
PK_INFO_ENUM_DECOMPRESSING
PK_INFO_ENUM_UNTRUSTED
PK_INFO_ENUM_TRUSTED
PK_INFO_ENUM_UNAVAILABLE
PK_INFO_ENUM_LAST
)
// wrapper struct so we can pass bus connection around in the struct
type Conn struct {
conn *dbus.Conn
}
// struct that is returned by PackagesToPackageIDs in the map values
type PkPackageIDActionData struct {
Found bool
Installed bool
Version string
PackageID string
Newest bool
}
// get a new bus connection
func NewBus() *Conn {
// if we share the bus with others, we will get each other's messages!!
bus, err := SystemBusPrivateUsable() // don't share the bus connection!
if err != nil {
return nil
}
return &Conn{
conn: bus,
}
}
// get the dbus connection object
func (bus *Conn) GetBus() *dbus.Conn {
return bus.conn
}
// close the dbus connection object
func (bus *Conn) Close() error {
return bus.conn.Close()
}
// internal helper to add signal matches to the bus, should only be called once
func (bus *Conn) matchSignal(ch chan *dbus.Signal, path dbus.ObjectPath, iface string, signals []string) error {
if PK_DEBUG {
log.Printf("PackageKit: matchSignal(%v, %v, %v, %v)", ch, path, iface, signals)
}
// eg: gdbus monitor --system --dest org.freedesktop.PackageKit --object-path /org/freedesktop/PackageKit | grep <signal>
var call *dbus.Call
// TODO: if we make this call many times, we seem to receive signals
// that many times... Maybe this should be an object singleton?
obj := bus.GetBus().BusObject()
pathStr := string(path)
if len(signals) == 0 {
call = obj.Call(dbusAddMatch, 0, "type='signal',path='"+pathStr+"',interface='"+iface+"'")
} else {
for _, signal := range signals {
call = obj.Call(dbusAddMatch, 0, "type='signal',path='"+pathStr+"',interface='"+iface+"',member='"+signal+"'")
if call.Err != nil {
break
}
}
}
if call.Err != nil {
return call.Err
}
// The caller has to make sure that ch is sufficiently buffered; if a
// message arrives when a write to ch is not possible, it is discarded!
// This can be disastrous if we're waiting for a "Finished" signal!
bus.GetBus().Signal(ch)
return nil
}
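The match rules matchSignal sends to org.freedesktop.DBus.AddMatch are plain comma-separated key='value' strings. A small sketch of how such a rule string is assembled, with or without a member filter (the helper name is illustrative):

```go
package main

import "fmt"

// buildMatchRule builds a D-Bus AddMatch rule string the way matchSignal
// does, optionally narrowing the match to a single signal member.
func buildMatchRule(path, iface, member string) string {
	rule := fmt.Sprintf("type='signal',path='%s',interface='%s'", path, iface)
	if member != "" {
		rule += fmt.Sprintf(",member='%s'", member)
	}
	return rule
}

func main() {
	fmt.Println(buildMatchRule("/org/freedesktop/PackageKit", "org.freedesktop.PackageKit", "UpdatesChanged"))
}
```

The equivalent can be watched from the shell with gdbus monitor, as the comment in matchSignal notes.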
// get a signal anytime an event happens
func (bus *Conn) WatchChanges() (chan *dbus.Signal, error) {
ch := make(chan *dbus.Signal, PkBufferSize)
// NOTE: the TransactionListChanged signal fires much more frequently,
// but with much less specificity. If we're missing events, report the
// issue upstream! The UpdatesChanged signal is what hughsie suggested
var signal = "UpdatesChanged"
err := bus.matchSignal(ch, PkPath, PkIface, []string{signal})
if err != nil {
return nil, err
}
if PARANOID { // TODO: this filtering might not be necessary anymore...
// try to handle the filtering inside this function!
rch := make(chan *dbus.Signal)
go func() {
loop:
for {
select {
case event := <-ch:
// "A receive from a closed channel returns the
// zero value immediately": if i get nil here,
// it means the channel was closed by someone!!
if event == nil { // shared bus issue?
log.Println("PackageKit: Hrm, channel was closed!")
break loop // TODO: continue?
}
// i think this was caused by using the shared
// bus, but we might as well leave it in for now
if event.Path != PkPath || event.Name != fmt.Sprintf("%s.%s", PkIface, signal) {
log.Printf("PackageKit: Woops: Event: %+v", event)
continue
}
rch <- event // forward...
}
}
defer close(ch)
}()
return rch, nil
}
return ch, nil
}
// CreateTransaction creates and returns a transaction path.
func (bus *Conn) CreateTransaction() (dbus.ObjectPath, error) {
if PK_DEBUG {
log.Println("PackageKit: CreateTransaction()")
}
var interfacePath dbus.ObjectPath
obj := bus.GetBus().Object(PkIface, PkPath)
err := obj.Call(fmt.Sprintf("%s.CreateTransaction", PkIface), 0).Store(&interfacePath) // Store returns an error, not a *dbus.Call
if err != nil {
return "", err
}
if PK_DEBUG {
log.Printf("PackageKit: CreateTransaction(): %v", interfacePath)
}
return interfacePath, nil
}
// ResolvePackages resolves a list of package names into their packageIDs.
func (bus *Conn) ResolvePackages(packages []string, filter uint64) ([]string, error) {
packageIDs := []string{}
ch := make(chan *dbus.Signal, PkBufferSize) // we need to buffer :(
interfacePath, err := bus.CreateTransaction() // emits Destroy on close
if err != nil {
return []string{}, err
}
// add signal matches for Package and Finished which will always be last
var signals = []string{"Package", "ErrorCode", "Finished", "Destroy"}
if err := bus.matchSignal(ch, interfacePath, PkIfaceTransaction, signals); err != nil {
return []string{}, err
}
if PK_DEBUG {
log.Printf("PackageKit: ResolvePackages(): Object(%v, %v)", PkIface, interfacePath)
}
obj := bus.GetBus().Object(PkIface, interfacePath) // pass in found transaction path
call := obj.Call(FmtTransactionMethod("Resolve"), 0, filter, packages)
if PK_DEBUG {
log.Println("PackageKit: ResolvePackages(): Call: Success!")
}
if call.Err != nil {
return []string{}, call.Err
}
loop:
for {
// FIXME: add a timeout option to error in case signals are dropped!
select {
case signal := <-ch:
if PK_DEBUG {
log.Printf("PackageKit: ResolvePackages(): Signal: %+v", signal)
}
if signal.Path != interfacePath {
log.Printf("PackageKit: Woops: Signal.Path: %+v", signal.Path)
continue loop
}
if signal.Name == FmtTransactionMethod("Package") {
//pkg_int, ok := signal.Body[0].(int)
packageID, ok := signal.Body[1].(string)
// format is: name;version;arch;data
if !ok {
continue loop
}
//comment, ok := signal.Body[2].(string)
for _, p := range packageIDs {
if packageID == p {
continue loop // duplicate!
}
}
packageIDs = append(packageIDs, packageID)
} else if signal.Name == FmtTransactionMethod("Finished") {
// TODO: should we wait for the Destroy signal?
break loop
} else if signal.Name == FmtTransactionMethod("Destroy") {
// should already be broken
break loop
} else {
return []string{}, fmt.Errorf("PackageKit: Error: %v", signal.Body)
}
}
}
return packageIDs, nil
}
// IsInstalledList returns a list of booleans stating whether each corresponding package is installed.
func (bus *Conn) IsInstalledList(packages []string) ([]bool, error) {
var filter uint64 // initializes to the "zero" value of 0
filter |= PK_FILTER_ENUM_ARCH // always search in our arch
packageIDs, e := bus.ResolvePackages(packages, filter)
if e != nil {
return nil, fmt.Errorf("ResolvePackages error: %v", e)
}
var m = make(map[string]int)
for _, packageID := range packageIDs {
s := strings.Split(packageID, ";")
//if len(s) != 4 { continue } // this would be a bug!
pkg := s[0]
flags := strings.Split(s[3], ":")
for _, f := range flags {
if f == "installed" {
if _, exists := m[pkg]; !exists {
m[pkg] = 0
}
m[pkg]++ // if we see pkg installed, increment
break
}
}
}
var r []bool
for _, p := range packages {
if value, exists := m[p]; exists {
r = append(r, value > 0) // at least 1 means installed
} else {
r = append(r, false)
}
}
return r, nil
}
// IsInstalled returns whether or not a single package is installed.
// TODO: this could be optimized by making the resolve call directly
func (bus *Conn) IsInstalled(pkg string) (bool, error) {
p, e := bus.IsInstalledList([]string{pkg})
if e != nil {
return false, e
}
if len(p) != 1 { // should not happen
return false, fmt.Errorf("Unexpected result length of: %d", len(p))
}
return p[0], nil
}
// InstallPackages installs a list of packages by packageID.
func (bus *Conn) InstallPackages(packageIDs []string, transactionFlags uint64) error {
ch := make(chan *dbus.Signal, PkBufferSize) // we need to buffer :(
interfacePath, err := bus.CreateTransaction() // emits Destroy on close
if err != nil {
return err
}
var signals = []string{"Package", "ErrorCode", "Finished", "Destroy"} // "ItemProgress", "Status" ?
if err := bus.matchSignal(ch, interfacePath, PkIfaceTransaction, signals); err != nil {
return err
}
obj := bus.GetBus().Object(PkIface, interfacePath) // pass in found transaction path
call := obj.Call(FmtTransactionMethod("InstallPackages"), 0, transactionFlags, packageIDs)
if call.Err != nil {
return call.Err
}
timeout := -1 // disabled initially
finished := false
loop:
for {
select {
case signal := <-ch:
if signal.Path != interfacePath {
log.Printf("PackageKit: Woops: Signal.Path: %+v", signal.Path)
continue loop
}
if signal.Name == FmtTransactionMethod("ErrorCode") {
return fmt.Errorf("PackageKit: Error: %v", signal.Body)
} else if signal.Name == FmtTransactionMethod("Package") {
// a package was installed...
// only start the timer once we're here...
timeout = PkSignalPackageTimeout
} else if signal.Name == FmtTransactionMethod("Finished") {
finished = true
timeout = PkSignalDestroyTimeout // wait a bit
} else if signal.Name == FmtTransactionMethod("Destroy") {
return nil // success
} else {
return fmt.Errorf("PackageKit: Error: %v", signal.Body)
}
case <-TimeAfterOrBlock(timeout):
if finished {
log.Println("PackageKit: Timeout: InstallPackages: Waiting for 'Destroy'")
return nil // got tired of waiting for Destroy
}
return fmt.Errorf("PackageKit: Timeout: InstallPackages: %v", strings.Join(packageIDs, ", "))
}
}
}
// RemovePackages removes a list of packages by packageID.
func (bus *Conn) RemovePackages(packageIDs []string, transactionFlags uint64) error {
var allowDeps = true // TODO: configurable
var autoremove = false // unsupported on GNU/Linux
ch := make(chan *dbus.Signal, PkBufferSize) // we need to buffer :(
interfacePath, err := bus.CreateTransaction() // emits Destroy on close
if err != nil {
return err
}
var signals = []string{"Package", "ErrorCode", "Finished", "Destroy"} // "ItemProgress", "Status" ?
if err := bus.matchSignal(ch, interfacePath, PkIfaceTransaction, signals); err != nil {
return err
}
obj := bus.GetBus().Object(PkIface, interfacePath) // pass in found transaction path
call := obj.Call(FmtTransactionMethod("RemovePackages"), 0, transactionFlags, packageIDs, allowDeps, autoremove)
if call.Err != nil {
return call.Err
}
loop:
for {
// FIXME: add a timeout option to error in case signals are dropped!
select {
case signal := <-ch:
if signal.Path != interfacePath {
log.Printf("PackageKit: Woops: Signal.Path: %+v", signal.Path)
continue loop
}
if signal.Name == FmtTransactionMethod("ErrorCode") {
return fmt.Errorf("PackageKit: Error: %v", signal.Body)
} else if signal.Name == FmtTransactionMethod("Package") {
// a package was removed...
continue loop
} else if signal.Name == FmtTransactionMethod("Finished") {
// TODO: should we wait for the Destroy signal?
break loop
} else if signal.Name == FmtTransactionMethod("Destroy") {
// should already be broken
break loop
} else {
return fmt.Errorf("PackageKit: Error: %v", signal.Body)
}
}
}
return nil
}
// UpdatePackages updates a list of packages to the versions that are specified.
func (bus *Conn) UpdatePackages(packageIDs []string, transactionFlags uint64) error {
ch := make(chan *dbus.Signal, PkBufferSize) // we need to buffer :(
interfacePath, err := bus.CreateTransaction()
if err != nil {
return err
}
var signals = []string{"Package", "ErrorCode", "Finished", "Destroy"} // "ItemProgress", "Status" ?
if err := bus.matchSignal(ch, interfacePath, PkIfaceTransaction, signals); err != nil {
return err
}
obj := bus.GetBus().Object(PkIface, interfacePath) // pass in found transaction path
call := obj.Call(FmtTransactionMethod("UpdatePackages"), 0, transactionFlags, packageIDs)
if call.Err != nil {
return call.Err
}
loop:
for {
// FIXME: add a timeout option to error in case signals are dropped!
select {
case signal := <-ch:
if signal.Path != interfacePath {
log.Printf("PackageKit: Woops: Signal.Path: %+v", signal.Path)
continue loop
}
if signal.Name == FmtTransactionMethod("ErrorCode") {
return fmt.Errorf("PackageKit: Error: %v", signal.Body)
} else if signal.Name == FmtTransactionMethod("Package") {
// a package was updated...
} else if signal.Name == FmtTransactionMethod("Finished") {
// TODO: should we wait for the Destroy signal?
break loop
} else if signal.Name == FmtTransactionMethod("Destroy") {
// should already be broken
break loop
} else {
return fmt.Errorf("PackageKit: Error: %v", signal.Body)
}
}
}
return nil
}
// GetFilesByPackageID returns the list of files that are contained inside a list of packageIDs.
func (bus *Conn) GetFilesByPackageID(packageIDs []string) (files map[string][]string, err error) {
// NOTE: the maximum number of files in an RPM is 52116 in Fedora 23
// https://gist.github.com/purpleidea/b98e60dcd449e1ac3b8a
ch := make(chan *dbus.Signal, PkBufferSize) // we need to buffer :(
interfacePath, err := bus.CreateTransaction()
if err != nil {
return
}
var signals = []string{"Files", "ErrorCode", "Finished", "Destroy"} // "ItemProgress", "Status" ?
if err = bus.matchSignal(ch, interfacePath, PkIfaceTransaction, signals); err != nil {
return
}
obj := bus.GetBus().Object(PkIface, interfacePath) // pass in found transaction path
call := obj.Call(FmtTransactionMethod("GetFiles"), 0, packageIDs)
if call.Err != nil {
err = call.Err
return
}
files = make(map[string][]string)
loop:
for {
// FIXME: add a timeout option to error in case signals are dropped!
select {
case signal := <-ch:
if signal.Path != interfacePath {
log.Printf("PackageKit: Woops: Signal.Path: %+v", signal.Path)
continue loop
}
if signal.Name == FmtTransactionMethod("ErrorCode") {
err = fmt.Errorf("PackageKit: Error: %v", signal.Body)
return
// one signal returned per packageID found...
} else if signal.Name == FmtTransactionMethod("Files") {
if len(signal.Body) != 2 { // bad data
continue loop
}
var ok bool
var key string
var fileList []string
if key, ok = signal.Body[0].(string); !ok {
continue loop
}
if fileList, ok = signal.Body[1].([]string); !ok {
continue loop // failed conversion
}
files[key] = fileList // build up map
} else if signal.Name == FmtTransactionMethod("Finished") {
// TODO: should we wait for the Destroy signal?
break loop
} else if signal.Name == FmtTransactionMethod("Destroy") {
// should already be broken
break loop
} else {
err = fmt.Errorf("PackageKit: Error: %v", signal.Body)
return
}
}
}
return
}
// GetUpdates returns a list of packages that are installed and which can be updated, mod filter.
func (bus *Conn) GetUpdates(filter uint64) ([]string, error) {
if PK_DEBUG {
log.Println("PackageKit: GetUpdates()")
}
packageIDs := []string{}
ch := make(chan *dbus.Signal, PkBufferSize) // we need to buffer :(
interfacePath, err := bus.CreateTransaction()
if err != nil {
return nil, err
}
var signals = []string{"Package", "ErrorCode", "Finished", "Destroy"} // "ItemProgress" ?
if err := bus.matchSignal(ch, interfacePath, PkIfaceTransaction, signals); err != nil {
return nil, err
}
obj := bus.GetBus().Object(PkIface, interfacePath) // pass in found transaction path
call := obj.Call(FmtTransactionMethod("GetUpdates"), 0, filter)
if call.Err != nil {
return nil, call.Err
}
loop:
for {
// FIXME: add a timeout option to error in case signals are dropped!
select {
case signal := <-ch:
if signal.Path != interfacePath {
log.Printf("PackageKit: Woops: Signal.Path: %+v", signal.Path)
continue loop
}
if signal.Name == FmtTransactionMethod("ErrorCode") {
return nil, fmt.Errorf("PackageKit: Error: %v", signal.Body)
} else if signal.Name == FmtTransactionMethod("Package") {
//pkg_int, ok := signal.Body[0].(int)
packageID, ok := signal.Body[1].(string)
// format is: name;version;arch;data
if !ok {
continue loop
}
//comment, ok := signal.Body[2].(string)
for _, p := range packageIDs { // optional?
if packageID == p {
continue loop // duplicate!
}
}
packageIDs = append(packageIDs, packageID)
} else if signal.Name == FmtTransactionMethod("Finished") {
// TODO: should we wait for the Destroy signal?
break loop
} else if signal.Name == FmtTransactionMethod("Destroy") {
// should already be broken
break loop
} else {
return nil, fmt.Errorf("PackageKit: Error: %v", signal.Body)
}
}
}
return packageIDs, nil
}
// PackagesToPackageIDs is a helper function that *might* be generally useful
// outside mgmtconfig. The packageMap input has the package names as keys and
// requested states as values. These states can be installed, uninstalled,
// newest or a requested version string.
func (bus *Conn) PackagesToPackageIDs(packageMap map[string]string, filter uint64) (map[string]*PkPackageIDActionData, error) {
count := 0
packages := make([]string, len(packageMap))
for k := range packageMap { // lol, golang has no hash.keys() function!
packages[count] = k
count++
}
filter |= PK_FILTER_ENUM_ARCH // always search in our arch
if PK_DEBUG {
log.Printf("PackageKit: PackagesToPackageIDs(): %v", strings.Join(packages, ", "))
}
resolved, e := bus.ResolvePackages(packages, filter)
if e != nil {
return nil, fmt.Errorf("Resolve error: %v", e)
}
found := make([]bool, count) // default false
installed := make([]bool, count)
version := make([]string, count)
usePackageID := make([]string, count)
newest := make([]bool, count) // default true
for i := range newest {
newest[i] = true // assume, for now
}
var index int
for _, packageID := range resolved {
index = -1
//log.Printf("* %v", packageID)
// format is: name;version;arch;data
s := strings.Split(packageID, ";")
//if len(s) != 4 { continue } // this would be a bug!
pkg, ver, arch, data := s[0], s[1], s[2], s[3]
// we might need to allow some of this, eg: i386 .deb on amd64
if !IsMyArch(arch) {
continue
}
for i := range packages { // find pkg if it exists
if pkg == packages[i] {
index = i
}
}
if index == -1 { // can't find what we're looking for
continue
}
state := packageMap[pkg] // lookup the requested state/version
if state == "" {
return nil, fmt.Errorf("Empty package state for %v", pkg)
}
found[index] = true
stateIsVersion := (state != "installed" && state != "uninstalled" && state != "newest") // must be a ver. string
if stateIsVersion {
if state == ver && ver != "" { // we match what we want...
usePackageID[index] = packageID
}
}
if FlagInData("installed", data) {
installed[index] = true
version[index] = ver
// state of "uninstalled" matched during CheckApply, and
// states of "installed" and "newest" for fileList
if !stateIsVersion {
usePackageID[index] = packageID // save for later
}
} else { // not installed...
if !stateIsVersion {
// if there is more than one result, eg: there
// is the old and newest version of a package,
// then this section can run more than once...
// in that case, don't worry, we'll choose the
// right value in the "updates" section below!
usePackageID[index] = packageID
}
}
}
// we can't determine which packages are "newest", without searching
// for each one individually, so instead we check if any updates need
// to be done, and if so, anything that needs updating isn't newest!
// if something isn't installed, we can't verify it with this method
// FIXME: https://github.com/hughsie/PackageKit/issues/116
updates, e := bus.GetUpdates(filter)
if e != nil {
return nil, fmt.Errorf("Updates error: %v", e)
}
for _, packageID := range updates {
//log.Printf("* %v", packageID)
// format is: name;version;arch;data
s := strings.Split(packageID, ";")
//if len(s) != 4 { continue } // this would be a bug!
pkg, _, _, _ := s[0], s[1], s[2], s[3]
for index := range packages { // find pkg if it exists
if pkg == packages[index] {
state := packageMap[pkg] // lookup
newest[index] = false
if state == "installed" || state == "newest" {
// fix up in case above wasn't correct!
usePackageID[index] = packageID
}
break
}
}
}
// skip if the "newest" filter was used, otherwise we might need fixing
// this check is for packages that need to verify their "newest" status
// we need to know this so we can install the correct newest packageID!
recursion := make(map[string]*PkPackageIDActionData)
if filter&PK_FILTER_ENUM_NEWEST == 0 {
checkPackages := []string{}
filteredPackageMap := make(map[string]string)
for index, pkg := range packages {
state := packageMap[pkg] // lookup the requested state/version
if !found[index] || installed[index] { // skip these, they're okay
continue
}
if !(state == "newest" || state == "installed") {
continue
}
checkPackages = append(checkPackages, pkg)
filteredPackageMap[pkg] = packageMap[pkg] // check me!
}
// we _could_ do a second resolve and then parse like this...
//resolved, e := bus.ResolvePackages(..., filter+PK_FILTER_ENUM_NEWEST)
// but that's basically what recursion here could do too!
if len(checkPackages) > 0 {
if PK_DEBUG {
log.Printf("PackageKit: PackagesToPackageIDs(): Recurse: %v", strings.Join(checkPackages, ", "))
}
recursion, e = bus.PackagesToPackageIDs(filteredPackageMap, filter|PK_FILTER_ENUM_NEWEST)
if e != nil {
return nil, fmt.Errorf("Recursion error: %v", e)
}
}
}
// fix up and build result format
result := make(map[string]*PkPackageIDActionData)
for index, pkg := range packages {
if !found[index] || !installed[index] {
newest[index] = false // make the results more logical!
}
// prefer recursion results if present
if lookup, ok := recursion[pkg]; ok {
result[pkg] = lookup
} else {
result[pkg] = &PkPackageIDActionData{
Found: found[index],
Installed: installed[index],
Version: version[index],
PackageID: usePackageID[index],
Newest: newest[index],
}
}
}
return result, nil
}
// FilterPackageIDs returns a list of packageIDs which match the set of package names in packages.
func FilterPackageIDs(m map[string]*PkPackageIDActionData, packages []string) ([]string, error) {
result := []string{}
for _, k := range packages {
obj, ok := m[k] // lookup single package
// package doesn't exist, this is an error!
if !ok || !obj.Found || obj.PackageID == "" {
return nil, fmt.Errorf("Can't find package named '%s'.", k)
}
result = append(result, obj.PackageID)
}
return result, nil
}
// FilterState returns a map from package name to whether it matches the given
// state, which must be one of the boolean states: installed, uninstalled, newest.
func FilterState(m map[string]*PkPackageIDActionData, packages []string, state string) (result map[string]bool, err error) {
result = make(map[string]bool)
pkgs := []string{} // bad pkgs that don't have a bool state
for _, k := range packages {
obj, ok := m[k] // lookup single package
// package doesn't exist, this is an error!
if !ok || !obj.Found {
return nil, fmt.Errorf("Can't find package named '%s'.", k)
}
var b bool
if state == "installed" {
b = obj.Installed
} else if state == "uninstalled" {
b = !obj.Installed
} else if state == "newest" {
b = obj.Newest
} else {
// we can't filter "version" state in this function
pkgs = append(pkgs, k)
continue
}
result[k] = b // save
}
if len(pkgs) > 0 {
err = fmt.Errorf("Can't filter non-boolean state on: %v!", strings.Join(pkgs, ","))
}
return result, err
}
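To illustrate the FilterState semantics, here is a standalone sketch; the struct and a miniature of the boolean-state logic are reproduced so the snippet runs on its own, and the package names and field values are made up for the example.

```go
package main

import "fmt"

// PkPackageIDActionData mirrors the result struct used above.
type PkPackageIDActionData struct {
	Found     bool
	Installed bool
	Version   string
	PackageID string
	Newest    bool
}

// filterState reproduces the boolean-state filtering logic for illustration.
func filterState(m map[string]*PkPackageIDActionData, packages []string, state string) (map[string]bool, error) {
	result := make(map[string]bool)
	for _, k := range packages {
		obj, ok := m[k]
		if !ok || !obj.Found { // package doesn't exist, this is an error!
			return nil, fmt.Errorf("can't find package named '%s'", k)
		}
		switch state {
		case "installed":
			result[k] = obj.Installed
		case "uninstalled":
			result[k] = !obj.Installed
		case "newest":
			result[k] = obj.Newest
		default: // version strings aren't a boolean state
			return nil, fmt.Errorf("can't filter non-boolean state: %s", state)
		}
	}
	return result, nil
}

func main() {
	m := map[string]*PkPackageIDActionData{ // hypothetical resolve results
		"cowsay":   {Found: true, Installed: true, Newest: true},
		"powertop": {Found: true, Installed: false, Newest: false},
	}
	r, _ := filterState(m, []string{"cowsay", "powertop"}, "installed")
	fmt.Println(r["cowsay"], r["powertop"]) // true false
}
```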
// FilterPackageState returns all packages that are in packages and match the specific state.
func FilterPackageState(m map[string]*PkPackageIDActionData, packages []string, state string) (result []string, err error) {
result = []string{}
for _, k := range packages {
obj, ok := m[k] // lookup single package
// package doesn't exist, this is an error!
if !ok || !obj.Found {
return nil, fmt.Errorf("Can't find package named '%s'.", k)
}
b := false
if state == "installed" && obj.Installed {
b = true
} else if state == "uninstalled" && !obj.Installed {
b = true
} else if state == "newest" && obj.Newest {
b = true
} else if state == obj.Version {
b = true
}
if b {
result = append(result, k)
}
}
return result, err
}
// FlagInData asks whether a flag exists inside the data portion of a packageID field.
func FlagInData(flag, data string) bool {
flags := strings.Split(data, ":")
for _, f := range flags {
if f == flag {
return true
}
}
return false
}
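PackageIDs follow the name;version;arch;data format mentioned in the comments above. A small standalone sketch of splitting one apart and checking a flag with the same logic as FlagInData; the packageID value here is made up.

```go
package main

import (
	"fmt"
	"strings"
)

// flagInData reproduces the FlagInData logic: the data field is a
// colon-separated list of flags.
func flagInData(flag, data string) bool {
	for _, f := range strings.Split(data, ":") {
		if f == flag {
			return true
		}
	}
	return false
}

func main() {
	packageID := "cowsay;3.04-4.fc23;noarch;installed:fedora" // hypothetical
	s := strings.Split(packageID, ";")
	name, version, arch, data := s[0], s[1], s[2], s[3]
	fmt.Println(name, version, arch)           // cowsay 3.04-4.fc23 noarch
	fmt.Println(flagInData("installed", data)) // true
	fmt.Println(flagInData("signed", data))    // false
}
```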
// FmtTransactionMethod builds the transaction method string.
func FmtTransactionMethod(method string) string {
return fmt.Sprintf("%s.%s", PkIfaceTransaction, method)
}
// IsMyArch determines if a PackageKit architecture matches the running GOARCH.
func IsMyArch(arch string) bool {
goarch, ok := PkArchMap[arch]
if !ok {
// if you get this error, please update the PkArchMap const
log.Fatalf("PackageKit: Arch '%v', not found!", arch)
}
if goarch == "ANY" { // special value that corresponds to noarch
return true
}
return goarch == runtime.GOARCH
}
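IsMyArch depends on a PkArchMap constant defined elsewhere in this codebase, mapping PackageKit architecture strings to golang GOARCH values. A plausible sketch of its shape follows; the exact entries are assumptions for illustration, with "ANY" as the special value corresponding to noarch, and the error return here replaces the log.Fatalf used above so the snippet is testable.

```go
package main

import (
	"fmt"
	"runtime"
)

// PkArchMap maps PackageKit arch strings to golang GOARCH values. These
// entries are illustrative assumptions; "ANY" is the special noarch value.
var PkArchMap = map[string]string{
	"x86_64": "amd64",
	"i586":   "386",
	"armv7l": "arm",
	"noarch": "ANY",
}

// isMyArch mirrors the IsMyArch logic, returning an error for unknown arches
// instead of calling log.Fatalf.
func isMyArch(arch string) (bool, error) {
	goarch, ok := PkArchMap[arch]
	if !ok {
		return false, fmt.Errorf("arch '%v' not found", arch)
	}
	if goarch == "ANY" { // noarch packages run anywhere
		return true, nil
	}
	return goarch == runtime.GOARCH, nil
}

func main() {
	ok, _ := isMyArch("noarch")
	fmt.Println(ok) // true
}
```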

pgraph.go

@@ -25,6 +25,7 @@ import (
"log"
"os"
"os/exec"
"sort"
"strconv"
"sync"
"syscall"
@@ -35,11 +36,11 @@ import (
type graphState int
const (
graphNil graphState = iota
graphStarting
graphStarted
graphPausing
graphPaused
graphStateNil graphState = iota
graphStateStarting
graphStateStarted
graphStatePausing
graphStatePaused
)
// The graph abstract data type (ADT) is defined as follows:
@@ -52,13 +53,11 @@ type Graph struct {
Adjacency map[*Vertex]map[*Vertex]*Edge // *Vertex -> *Vertex (edge)
state graphState
mutex sync.Mutex // used when modifying graph State variable
//Directed bool
}
type Vertex struct {
graph *Graph // store a pointer to the graph it's on
Type // anonymous field
data map[string]string // XXX: currently unused i think, remove?
Res // anonymous field
timestamp int64 // last updated timestamp ?
}
type Edge struct {
@@ -69,14 +68,13 @@ func NewGraph(name string) *Graph {
return &Graph{
Name: name,
Adjacency: make(map[*Vertex]map[*Vertex]*Edge),
state: graphNil,
state: graphStateNil,
}
}
func NewVertex(t Type) *Vertex {
func NewVertex(r Res) *Vertex {
return &Vertex{
Type: t,
data: make(map[string]string),
Res: r,
}
}
@@ -86,6 +84,19 @@ func NewEdge(name string) *Edge {
}
}
// Copy makes a copy of the graph struct
func (g *Graph) Copy() *Graph {
newGraph := &Graph{
Name: g.Name,
Adjacency: make(map[*Vertex]map[*Vertex]*Edge, len(g.Adjacency)),
state: g.state,
}
for k, v := range g.Adjacency {
newGraph.Adjacency[k] = v // copy
}
return newGraph
}
// returns the name of the graph
func (g *Graph) GetName() string {
return g.Name
@@ -111,20 +122,19 @@ func (g *Graph) SetState(state graphState) graphState {
return prev
}
// store a pointer in the type to its parent vertex
// store a pointer in the resource to its parent vertex
func (g *Graph) SetVertex() {
for v := range g.GetVerticesChan() {
v.Type.SetVertex(v)
v.Res.SetVertex(v)
}
}
// add a new vertex to the graph
func (g *Graph) AddVertex(v *Vertex) {
// AddVertex uses variadic input to add all listed vertices to the graph
func (g *Graph) AddVertex(xv ...*Vertex) {
for _, v := range xv {
if _, exists := g.Adjacency[v]; !exists {
g.Adjacency[v] = make(map[*Vertex]*Edge)
// store a pointer to the graph it's on for convenience and readability
v.graph = g
}
}
}
@@ -138,31 +148,15 @@ func (g *Graph) DeleteVertex(v *Vertex) {
// adds a directed edge to the graph from v1 to v2
func (g *Graph) AddEdge(v1, v2 *Vertex, e *Edge) {
// NOTE: this doesn't allow more than one edge between two vertexes...
// TODO: is this a problem?
g.AddVertex(v1)
g.AddVertex(v2)
g.AddVertex(v1, v2) // supports adding N vertices now
// TODO: check if an edge exists to avoid overwriting it!
// NOTE: VertexMerge() depends on overwriting it at the moment...
g.Adjacency[v1][v2] = e
}
// XXX: does it make sense to return a channel here?
// GetVertex finds the vertex in the graph with a particular search name
func (g *Graph) GetVertex(name string) chan *Vertex {
ch := make(chan *Vertex, 1)
go func(name string) {
func (g *Graph) GetVertexMatch(obj Res) *Vertex {
for k := range g.Adjacency {
if k.GetName() == name {
ch <- k
break
}
}
close(ch)
}(name)
return ch
}
func (g *Graph) GetVertexMatch(obj Type) *Vertex {
for k := range g.Adjacency {
if k.Compare(obj) { // XXX test
if k.Res.Compare(obj) {
return k
}
}
@@ -173,11 +167,6 @@ func (g *Graph) HasVertex(v *Vertex) bool {
if _, exists := g.Adjacency[v]; exists {
return true
}
//for k := range g.Adjacency {
// if k == v {
// return true
// }
//}
return false
}
@@ -195,7 +184,8 @@ func (g *Graph) NumEdges() int {
return count
}
// get an array (slice) of all vertices in the graph
// GetVertices returns a randomly sorted slice of all vertices in the graph
// The order is random, because the map implementation is intentionally so!
func (g *Graph) GetVertices() []*Vertex {
var vertices []*Vertex
for k := range g.Adjacency {
@@ -216,11 +206,33 @@ func (g *Graph) GetVerticesChan() chan *Vertex {
return ch
}
type VertexSlice []*Vertex
func (vs VertexSlice) Len() int { return len(vs) }
func (vs VertexSlice) Swap(i, j int) { vs[i], vs[j] = vs[j], vs[i] }
func (vs VertexSlice) Less(i, j int) bool { return vs[i].String() < vs[j].String() }
// GetVerticesSorted returns a sorted slice of all vertices in the graph
// The order is sorted by String() to avoid the non-determinism in the map type
func (g *Graph) GetVerticesSorted() []*Vertex {
var vertices []*Vertex
for k := range g.Adjacency {
vertices = append(vertices, k)
}
sort.Sort(VertexSlice(vertices)) // add determinism
return vertices
}
// make the graph pretty print
func (g *Graph) String() string {
return fmt.Sprintf("Vertices(%d), Edges(%d)", g.NumVertices(), g.NumEdges())
}
// String returns the canonical form for a vertex
func (v *Vertex) String() string {
return fmt.Sprintf("%s[%s]", v.Res.Kind(), v.Res.GetName())
}
// output the graph in graphviz format
// https://en.wikipedia.org/wiki/DOT_%28graph_description_language%29
func (g *Graph) Graphviz() (out string) {
@@ -241,7 +253,7 @@ func (g *Graph) Graphviz() (out string) {
//out += "\tnode [shape=box];\n"
str := ""
for i := range g.Adjacency { // reverse paths
out += fmt.Sprintf("\t%v [label=\"%v[%v]\"];\n", i.GetName(), i.GetType(), i.GetName())
out += fmt.Sprintf("\t%v [label=\"%v[%v]\"];\n", i.GetName(), i.Kind(), i.GetName())
for j := range g.Adjacency[i] {
k := g.Adjacency[i][j]
// use str for clearer output ordering
@@ -303,18 +315,8 @@ func (g *Graph) ExecGraphviz(program, filename string) error {
return nil
}
// google/golang hackers apparently do not think contains should be a built-in!
func Contains(s []*Vertex, element *Vertex) bool {
for _, v := range s {
if element == v {
return true
}
}
return false
}
// return an array (slice) of all directed vertices to vertex v (??? -> v)
// ostimestamp should use this
// OKTimestamp should use this
func (g *Graph) IncomingGraphEdges(v *Vertex) []*Vertex {
// TODO: we might be able to implement this differently by reversing
// the Adjacency graph and then looping through it again...
@@ -356,10 +358,9 @@ func (g *Graph) DFS(start *Vertex) []*Vertex {
v := start
s = append(s, v)
for len(s) > 0 {
v, s = s[len(s)-1], s[:len(s)-1] // s.pop()
if !Contains(d, v) { // if not discovered
if !VertexContains(v, d) { // if not discovered
d = append(d, v) // label as discovered
for _, w := range g.GraphEdges(v) {
@@ -373,17 +374,14 @@ func (g *Graph) DFS(start *Vertex) []*Vertex {
// build a new graph containing only vertices from the list...
func (g *Graph) FilterGraph(name string, vertices []*Vertex) *Graph {
newgraph := NewGraph(name)
for k1, x := range g.Adjacency {
for k2, e := range x {
//log.Printf("Filter: %v -> %v # %v", k1.Name, k2.Name, e.Name)
if Contains(vertices, k1) || Contains(vertices, k2) {
if VertexContains(k1, vertices) || VertexContains(k2, vertices) {
newgraph.AddEdge(k1, k2, e)
}
}
}
return newgraph
}
@@ -399,7 +397,7 @@ func (g *Graph) GetDisconnectedGraphs() chan *Graph {
// get an undiscovered vertex to start from
for _, s := range g.GetVertices() {
if !Contains(d, s) {
if !VertexContains(s, d) {
start = s
}
}
@@ -457,7 +455,6 @@ func (g *Graph) OutDegree() map[*Vertex]int {
// based on descriptions and code from wikipedia and rosetta code
// TODO: add memoization, and cache invalidation to speed this up :)
func (g *Graph) TopologicalSort() (result []*Vertex, ok bool) { // kahn's algorithm
var L []*Vertex // empty list that will contain the sorted elements
var S []*Vertex // set of all nodes with no incoming edges
remaining := make(map[*Vertex]int) // amount of edges remaining
@@ -503,34 +500,137 @@ func (g *Graph) TopologicalSort() (result []*Vertex, ok bool) { // kahn's algori
return L, true
}
func (v *Vertex) Value(key string) (string, bool) {
if value, exists := v.data[key]; exists {
return value, true
// Reachability finds the shortest path in a DAG from a to b, and returns the
// slice of vertices that matched this particular path including both a and b.
// It returns nil if a or b is nil, and returns empty list if no path is found.
// Since there could be more than one possible result for this operation, we
// arbitrarily choose one of the shortest possible. As a result, this should
// actually return a tree if we cared about correctness.
// This operates by a recursive algorithm; a more efficient version is likely.
// If you don't give this function a DAG, you might cause infinite recursion!
func (g *Graph) Reachability(a, b *Vertex) []*Vertex {
if a == nil || b == nil {
return nil
}
return "", false
}
func (v *Vertex) SetValue(key, value string) bool {
v.data[key] = value
return true
}
func (g *Graph) GetVerticesKeyValue(key, value string) chan *Vertex {
ch := make(chan *Vertex)
go func() {
for vertex := range g.GetVerticesChan() {
if v, exists := vertex.Value(key); exists && v == value {
ch <- vertex
vertices := g.OutgoingGraphEdges(a) // what points away from a ?
if len(vertices) == 0 {
return []*Vertex{} // nope
}
if VertexContains(b, vertices) {
return []*Vertex{a, b} // found
}
// TODO: parallelize this with go routines?
var collected = make([][]*Vertex, len(vertices))
pick := -1
for i, v := range vertices {
collected[i] = g.Reachability(v, b) // find b by recursion
if l := len(collected[i]); l > 0 {
// pick shortest path
// TODO: technically i should return a tree
if pick < 0 || l < len(collected[pick]) {
pick = i
}
}
close(ch)
}()
return ch
}
if pick < 0 {
return []*Vertex{} // nope
}
result := []*Vertex{a} // tack on a
result = append(result, collected[pick]...)
return result
}
// return a pointer to the graph a vertex is on
func (v *Vertex) GetGraph() *Graph {
return v.graph
// VertexMerge merges v2 into v1 by reattaching the edges where appropriate,
// and then by deleting v2 from the graph. Since more than one edge between two
// vertices is not allowed, duplicate edges are merged as well. An edge merge
// function can be provided if you'd like to control how you merge the edges!
func (g *Graph) VertexMerge(v1, v2 *Vertex, vertexMergeFn func(*Vertex, *Vertex) (*Vertex, error), edgeMergeFn func(*Edge, *Edge) *Edge) error {
// methodology
// 1) edges between v1 and v2 are removed
//Loop:
for k1 := range g.Adjacency {
for k2 := range g.Adjacency[k1] {
// v1 -> v2 || v2 -> v1
if (k1 == v1 && k2 == v2) || (k1 == v2 && k2 == v1) {
delete(g.Adjacency[k1], k2) // delete map & edge
// NOTE: if we assume this is a DAG, then we can
// assume only v1 -> v2 OR v2 -> v1 exists, and
// we can break out of these loops immediately!
//break Loop
break
}
}
}
// 2) edges that point towards v2 from X now point to v1 from X (no dupes)
for _, x := range g.IncomingGraphEdges(v2) { // all to vertex v (??? -> v)
e := g.Adjacency[x][v2] // previous edge
r := g.Reachability(x, v1)
// merge e with ex := g.Adjacency[x][v1] if it exists!
if ex, exists := g.Adjacency[x][v1]; exists && edgeMergeFn != nil && len(r) == 0 {
e = edgeMergeFn(e, ex)
}
if len(r) == 0 { // if not reachable, add it
g.AddEdge(x, v1, e) // overwrite edge
} else if edgeMergeFn != nil { // reachable, merge e through...
prev := x // initial condition
for i, next := range r {
if i == 0 {
// next == prev, therefore skip
continue
}
// this edge is from: prev, to: next
ex, _ := g.Adjacency[prev][next] // get
ex = edgeMergeFn(ex, e)
g.Adjacency[prev][next] = ex // set
prev = next
}
}
delete(g.Adjacency[x], v2) // delete old edge
}
// 3) edges that point from v2 to X now point from v1 to X (no dupes)
for _, x := range g.OutgoingGraphEdges(v2) { // all from vertex v (v -> ???)
e := g.Adjacency[v2][x] // previous edge
r := g.Reachability(v1, x)
// merge e with ex := g.Adjacency[v1][x] if it exists!
if ex, exists := g.Adjacency[v1][x]; exists && edgeMergeFn != nil && len(r) == 0 {
e = edgeMergeFn(e, ex)
}
if len(r) == 0 {
g.AddEdge(v1, x, e) // overwrite edge
} else if edgeMergeFn != nil { // reachable, merge e through...
prev := v1 // initial condition
for i, next := range r {
if i == 0 {
// next == prev, therefore skip
continue
}
// this edge is from: prev, to: next
ex, _ := g.Adjacency[prev][next]
ex = edgeMergeFn(ex, e)
g.Adjacency[prev][next] = ex
prev = next
}
}
delete(g.Adjacency[v2], x)
}
// 4) merge and then remove the (now merged/grouped) vertex
if vertexMergeFn != nil { // run vertex merge function
if v, err := vertexMergeFn(v1, v2); err != nil {
return err
} else if v != nil { // replace v1 with the "merged" version...
v1 = v // XXX: will this replace v1 the way we want?
}
}
g.DeleteVertex(v2) // remove grouped vertex
// 5) creation of a cyclic graph should throw an error
if _, dag := g.TopologicalSort(); !dag { // am i a dag or not?
return fmt.Errorf("Graph is not a dag!")
}
return nil // success
}
func HeisenbergCount(ch chan *Vertex) int {
return c
}
// GetTimestamp returns the timestamp of a vertex
func (v *Vertex) GetTimestamp() int64 {
return v.timestamp
}
// UpdateTimestamp updates the timestamp on a vertex and returns the new value
func (v *Vertex) UpdateTimestamp() int64 {
v.timestamp = time.Now().UnixNano() // update
return v.timestamp
}
// can this element run right now?
func (g *Graph) OKTimestamp(v *Vertex) bool {
// these are all the vertices pointing TO v, eg: ??? -> v
for _, n := range g.IncomingGraphEdges(v) {
// if the vertex has a greater timestamp than any pre-req (n)
// then we can't run right now...
// if they're equal (eg: on init of 0) then we also can't run
// b/c we should let our pre-req's go first...
x, y := v.GetTimestamp(), n.GetTimestamp()
if DEBUG {
log.Printf("%v[%v]: OKTimestamp: (%v) >= %v[%v](%v): !%v", v.Kind(), v.GetName(), x, n.Kind(), n.GetName(), y, x >= y)
}
if x >= y {
return false
}
}
return true
}
// notify nodes after me in the dependency graph that they need refreshing...
// NOTE: this assumes that this can never fail or need to be rescheduled
func (g *Graph) Poke(v *Vertex, activity bool) {
// these are all the vertices pointing AWAY FROM v, eg: v -> ???
for _, n := range g.OutgoingGraphEdges(v) {
// XXX: if we're in state event and haven't been cancelled by
// apply, then we can cancel a poke to a child, right? XXX
// XXX: if n.Res.GetState() != resStateEvent { // is this correct?
if true { // XXX
if DEBUG {
log.Printf("%v[%v]: Poke: %v[%v]", v.Kind(), v.GetName(), n.Kind(), n.GetName())
}
n.SendEvent(eventPoke, false, activity) // XXX: can this be switched to sync?
} else {
if DEBUG {
log.Printf("%v[%v]: Poke: %v[%v]: Skipped!", v.Kind(), v.GetName(), n.Kind(), n.GetName())
}
}
}
}
// poke the pre-requisites that are stale and need to run before I can run...
func (g *Graph) BackPoke(v *Vertex) {
// these are all the vertices pointing TO v, eg: ??? -> v
for _, n := range g.IncomingGraphEdges(v) {
x, y, s := v.GetTimestamp(), n.GetTimestamp(), n.Res.GetState()
// if the parent timestamp needs poking AND it's not in state
// resStateEvent, then poke it. If the parent is in resStateEvent it
// means that an event is pending, so we'll be expecting a poke
// back soon, so we can safely discard the extra parent poke...
// TODO: implement a stateLT (less than) to tell if something
// happens earlier in the state cycle and that doesn't wrap nil
if x >= y && (s != resStateEvent && s != resStateCheckApply) {
if DEBUG {
log.Printf("%v[%v]: BackPoke: %v[%v]", v.Kind(), v.GetName(), n.Kind(), n.GetName())
}
n.SendEvent(eventBackPoke, false, false) // XXX: can this be switched to sync?
} else {
if DEBUG {
log.Printf("%v[%v]: BackPoke: %v[%v]: Skipped!", v.Kind(), v.GetName(), n.Kind(), n.GetName())
}
}
}
}
// XXX: rename this function
func (g *Graph) Process(v *Vertex) {
obj := v.Res
if DEBUG {
log.Printf("%v[%v]: Process()", obj.Kind(), obj.GetName())
}
obj.SetState(resStateEvent)
var ok = true
var apply = false // did we run an apply?
// is it okay to run dependency wise right now?
// if not, that's okay because when the dependency runs, it will poke
// us back and we will run if needed then!
if g.OKTimestamp(v) {
if DEBUG {
log.Printf("%v[%v]: OKTimestamp(%v)", obj.Kind(), obj.GetName(), v.GetTimestamp())
}
obj.SetState(resStateCheckApply)
// if this fails, don't UpdateTimestamp()
stateok, err := obj.CheckApply(true)
if stateok && err != nil { // should never return this way
log.Fatalf("%v[%v]: CheckApply(): %t, %+v", obj.Kind(), obj.GetName(), stateok, err)
}
if DEBUG {
log.Printf("%v[%v]: CheckApply(): %t, %v", obj.Kind(), obj.GetName(), stateok, err)
}
if !stateok { // if state *was* not ok, we had to have apply'ed
if err != nil { // error during check or apply
ok = false
} else {
apply = true
}
}
if ok {
// update this timestamp *before* we poke or the poked
// nodes might fail due to having a too old timestamp!
v.UpdateTimestamp() // this was touched...
obj.SetState(resStatePoking) // can't cancel parent poke
g.Poke(v, apply)
}
// poke at our pre-req's instead since they need to refresh/run...
} else {
// only poke at the pre-req's that need to run
go g.BackPoke(v)
}
}
// main kick to start the graph
func (g *Graph) Start(wg *sync.WaitGroup, first bool) { // start or continue
log.Printf("State: %v -> %v", g.SetState(graphStateStarting), g.GetState())
defer log.Printf("State: %v -> %v", g.SetState(graphStateStarted), g.GetState())
t, _ := g.TopologicalSort()
// TODO: only calculate indegree if `first` is true to save resources
indegree := g.InDegree() // compute all of the indegree's
for _, v := range Reverse(t) {
if !v.Res.IsWatching() { // if Watch() is not running...
wg.Add(1)
// must pass in value to avoid races...
// see: https://ttboj.wordpress.com/2015/07/27/golang-parallelism-issues-causing-too-many-open-files-error/
go func(vv *Vertex) {
defer wg.Done()
// listen for chan events from Watch() and run
// the Process() function when they're received
// this avoids us having to pass the data into
// the Watch() function about which graph it is
// running on, which isolates things nicely...
chanProcess := make(chan Event)
go func() {
for event := range chanProcess {
// this has to be synchronous,
// because otherwise the Res
// event loop will keep running
// and change state, causing the
// converged timeout to fire!
g.Process(vv)
event.ACK() // sync
}
}()
vv.Res.Watch(chanProcess) // i block until i end
close(chanProcess)
log.Printf("%v[%v]: Exited", vv.Kind(), vv.GetName())
}(v)
}
// and not just selectively the subset with no indegree.
if (!first) || indegree[v] == 0 {
// ensure state is started before continuing on to next vertex
for !v.SendEvent(eventStart, true, false) {
if DEBUG {
// if SendEvent fails, we aren't up yet
log.Printf("%v[%v]: Retrying SendEvent(Start)", v.Kind(), v.GetName())
// sleep here briefly or otherwise cause
// a different goroutine to be scheduled
time.Sleep(1 * time.Millisecond)
}
func (g *Graph) Pause() {
log.Printf("State: %v -> %v", g.SetState(graphStatePausing), g.GetState())
defer log.Printf("State: %v -> %v", g.SetState(graphStatePaused), g.GetState())
t, _ := g.TopologicalSort()
for _, v := range t { // squeeze out the events...
v.SendEvent(eventPause, true, false)
}
}
func (g *Graph) Exit() {
if g == nil {
return
} // empty graph that wasn't populated yet
t, _ := g.TopologicalSort()
for _, v := range t { // squeeze out the events...
// turn off the taps...
// when we hit the 'default' in the select statement!
// XXX: we can do this to quiesce, but it's not necessary now
v.SendEvent(eventExit, true, false)
}
}
func (g *Graph) SetConvergedCallback(ctimeout int, converged chan bool) {
for v := range g.GetVerticesChan() {
v.Res.SetConvergedCallback(ctimeout, converged)
}
}
// in array function to test *Vertex in a slice of *Vertices
func VertexContains(needle *Vertex, haystack []*Vertex) bool {
for _, v := range haystack {
if needle == v {
return true
}
}
return false
}

File diff suppressed because it is too large.

pkg.go (new file, 545 lines)

// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package main
import (
//"packagekit" // TODO
"encoding/gob"
"errors"
"fmt"
"log"
"path"
"strings"
)
func init() {
gob.Register(&PkgRes{})
}
type PkgRes struct {
BaseRes `yaml:",inline"`
State string `yaml:"state"` // state: installed, uninstalled, newest, <version>
AllowUntrusted bool `yaml:"allowuntrusted"` // allow untrusted packages to be installed?
AllowNonFree bool `yaml:"allownonfree"` // allow nonfree packages to be found?
AllowUnsupported bool `yaml:"allowunsupported"` // allow unsupported packages to be found?
//bus *Conn // pk bus connection
fileList []string // FIXME: update if pkg changes
}
// helper function for creating new pkg resources that calls Init()
func NewPkgRes(name, state string, allowuntrusted, allownonfree, allowunsupported bool) *PkgRes {
obj := &PkgRes{
BaseRes: BaseRes{
Name: name,
events: make(chan Event),
vertex: nil,
},
State: state,
AllowUntrusted: allowuntrusted,
AllowNonFree: allownonfree,
AllowUnsupported: allowunsupported,
}
obj.Init()
return obj
}
func (obj *PkgRes) Init() {
obj.BaseRes.kind = "Pkg"
obj.BaseRes.Init() // call base init, b/c we're overriding
bus := NewBus()
if bus == nil {
log.Fatal("Can't connect to PackageKit bus.")
}
defer bus.Close()
result, err := obj.pkgMappingHelper(bus)
if err != nil {
// FIXME: return error?
log.Fatalf("The pkgMappingHelper failed with: %v.", err)
return
}
data, ok := result[obj.Name] // lookup single package (init does just one)
// package doesn't exist, this is an error!
if !ok || !data.Found {
// FIXME: return error?
log.Fatalf("Can't find package named '%s'.", obj.Name)
return
}
packageIDs := []string{data.PackageID} // just one for now
filesMap, err := bus.GetFilesByPackageID(packageIDs)
if err != nil {
// FIXME: return error?
log.Fatalf("Can't run GetFilesByPackageID: %v", err)
return
}
if files, ok := filesMap[data.PackageID]; ok {
obj.fileList = DirifyFileList(files, false)
}
}
func (obj *PkgRes) Validate() bool {
if obj.State == "" {
return false
}
return true
}
// use UpdatesChanged signal to watch for changes
// TODO: https://github.com/hughsie/PackageKit/issues/109
// TODO: https://github.com/hughsie/PackageKit/issues/110
func (obj *PkgRes) Watch(processChan chan Event) {
if obj.IsWatching() {
return
}
obj.SetWatching(true)
defer obj.SetWatching(false)
bus := NewBus()
if bus == nil {
log.Fatal("Can't connect to PackageKit bus.")
}
defer bus.Close()
ch, err := bus.WatchChanges()
if err != nil {
log.Fatalf("Error adding signal match: %v", err)
}
var send = false // send event?
var exit = false
var dirty = false
for {
if DEBUG {
log.Printf("%v: Watching...", obj.fmtNames(obj.getNames()))
}
obj.SetState(resStateWatching) // reset
select {
case event := <-ch:
// FIXME: ask packagekit for info on what packages changed
if DEBUG {
log.Printf("%v: Event: %v", obj.fmtNames(obj.getNames()), event.Name)
}
// since the chan is buffered, remove any supplemental
// events since they would just be duplicates anyways!
for len(ch) > 0 { // we can detect pending count here!
<-ch // discard
}
obj.SetConvergedState(resConvergedNil)
send = true
dirty = true
case event := <-obj.events:
obj.SetConvergedState(resConvergedNil)
if exit, send = obj.ReadEvent(&event); exit {
return // exit
}
//dirty = false // these events don't invalidate state
case <-TimeAfterOrBlock(obj.ctimeout):
obj.SetConvergedState(resConvergedTimeout)
obj.converged <- true
continue
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
// only invalidate state on certain types of events
if dirty {
dirty = false
obj.isStateOK = false // something made state dirty
}
resp := NewResp()
processChan <- Event{eventNil, resp, "", true} // trigger process
resp.ACKWait() // wait for the ACK()
}
}
}
// get list of names when grouped or not
func (obj *PkgRes) getNames() []string {
if g := obj.GetGroup(); len(g) > 0 { // grouped elements
names := []string{obj.GetName()}
for _, x := range g {
pkg, ok := x.(*PkgRes) // convert from Res
if ok {
names = append(names, pkg.Name)
}
}
return names
}
return []string{obj.GetName()}
}
// pretty print for header values
func (obj *PkgRes) fmtNames(names []string) string {
if len(obj.GetGroup()) > 0 { // grouped elements
return fmt.Sprintf("%v[autogroup:(%v)]", obj.Kind(), strings.Join(names, ","))
}
return fmt.Sprintf("%v[%v]", obj.Kind(), obj.GetName())
}
func (obj *PkgRes) groupMappingHelper() map[string]string {
var result = make(map[string]string)
if g := obj.GetGroup(); len(g) > 0 { // add any grouped elements
for _, x := range g {
pkg, ok := x.(*PkgRes) // convert from Res
if !ok {
log.Fatalf("Grouped member %v is not a %v", x, obj.Kind())
}
result[pkg.Name] = pkg.State
}
}
return result
}
func (obj *PkgRes) pkgMappingHelper(bus *Conn) (map[string]*PkPackageIDActionData, error) {
packageMap := obj.groupMappingHelper() // get the grouped values
packageMap[obj.Name] = obj.State // key is pkg name, value is pkg state
var filter uint64 // initializes at the "zero" value of 0
filter += PK_FILTER_ENUM_ARCH // always search in our arch (optional!)
// we're requesting latest version, or to narrow down install choices!
if obj.State == "newest" || obj.State == "installed" {
// if we add this, we'll still see older packages if installed
// this is an optimization, and is *optional*, this logic is
// handled inside of PackagesToPackageIDs now automatically!
filter += PK_FILTER_ENUM_NEWEST // only search for newest packages
}
if !obj.AllowNonFree {
filter += PK_FILTER_ENUM_FREE
}
if !obj.AllowUnsupported {
filter += PK_FILTER_ENUM_SUPPORTED
}
result, e := bus.PackagesToPackageIDs(packageMap, filter)
if e != nil {
return nil, fmt.Errorf("Can't run PackagesToPackageIDs: %v", e)
}
return result, nil
}
func (obj *PkgRes) CheckApply(apply bool) (stateok bool, err error) {
log.Printf("%v: CheckApply(%t)", obj.fmtNames(obj.getNames()), apply)
if obj.State == "" { // TODO: Validate() should replace this check!
log.Fatalf("%v: Package state is undefined!", obj.fmtNames(obj.getNames()))
}
if obj.isStateOK { // cache the state
return true, nil
}
bus := NewBus()
if bus == nil {
return false, errors.New("Can't connect to PackageKit bus.")
}
defer bus.Close()
result, err := obj.pkgMappingHelper(bus)
if err != nil {
return false, fmt.Errorf("The pkgMappingHelper failed with: %v.", err)
}
packageMap := obj.groupMappingHelper() // map[string]string
packageList := []string{obj.Name}
packageList = append(packageList, StrMapKeys(packageMap)...)
//stateList := []string{obj.State}
//stateList = append(stateList, StrMapValues(packageMap)...)
// TODO: at the moment, all the states are the same, but
// eventually we might be able to drop this constraint!
states, err := FilterState(result, packageList, obj.State)
if err != nil {
return false, fmt.Errorf("The FilterState method failed with: %v.", err)
}
data, _ := result[obj.Name] // if above didn't error, we won't either!
validState := BoolMapTrue(BoolMapValues(states))
// obj.State == "installed" || "uninstalled" || "newest" || "4.2-1.fc23"
switch obj.State {
case "installed":
fallthrough
case "uninstalled":
fallthrough
case "newest":
if validState {
return true, nil // state is correct, exit!
}
default: // version string
if obj.State == data.Version && data.Version != "" {
return true, nil
}
}
// state is not okay, no work done, exit, but without error
if !apply {
return false, nil
}
// apply portion
log.Printf("%v: Apply", obj.fmtNames(obj.getNames()))
readyPackages, err := FilterPackageState(result, packageList, obj.State)
if err != nil {
return false, err // fail
}
// these are the packages that actually need their states applied!
applyPackages := StrFilterElementsInList(readyPackages, packageList)
packageIDs, _ := FilterPackageIDs(result, applyPackages) // would be same err as above
var transactionFlags uint64 // initializes at the "zero" value of 0
if !obj.AllowUntrusted { // allow
transactionFlags += PK_TRANSACTION_FLAG_ENUM_ONLY_TRUSTED
}
// apply correct state!
log.Printf("%v: Set: %v...", obj.fmtNames(StrListIntersection(applyPackages, obj.getNames())), obj.State)
switch obj.State {
case "uninstalled": // run remove
// NOTE: packageID is different than when installed, because now
// it has the "installed" flag added to the data portion of it!
err = bus.RemovePackages(packageIDs, transactionFlags)
case "newest": // TODO: isn't this the same operation as install, below?
err = bus.UpdatePackages(packageIDs, transactionFlags)
case "installed":
fallthrough // same method as for "set specific version", below
default: // version string
err = bus.InstallPackages(packageIDs, transactionFlags)
}
if err != nil {
return false, err // fail
}
log.Printf("%v: Set: %v success!", obj.fmtNames(StrListIntersection(applyPackages, obj.getNames())), obj.State)
return false, nil // success
}
type PkgUUID struct {
BaseUUID
name string // pkg name
state string // pkg state or "version"
}
// if and only if they are equivalent, return true
// if they are not equivalent, return false
func (obj *PkgUUID) IFF(uuid ResUUID) bool {
res, ok := uuid.(*PkgUUID)
if !ok {
return false
}
// FIXME: match on obj.state vs. res.state ?
return obj.name == res.name
}
type PkgResAutoEdges struct {
fileList []string
svcUUIDs []ResUUID
testIsNext bool // safety
name string // saved data from PkgRes obj
kind string
}
func (obj *PkgResAutoEdges) Next() []ResUUID {
if obj.testIsNext {
log.Fatal("Expecting a call to Test()")
}
obj.testIsNext = true // set after all the error paths are past
// first return any matching svcUUIDs
if x := obj.svcUUIDs; len(x) > 0 {
return x
}
var result []ResUUID
// return UUID's for whatever is in obj.fileList
for _, x := range obj.fileList {
var reversed = false // cheat by passing a pointer
result = append(result, &FileUUID{
BaseUUID: BaseUUID{
name: obj.name,
kind: obj.kind,
reversed: &reversed,
},
path: x, // what matters
}) // build list
}
return result
}
func (obj *PkgResAutoEdges) Test(input []bool) bool {
if !obj.testIsNext {
log.Fatal("Expecting a call to Next()")
}
// ack the svcUUID's...
if x := obj.svcUUIDs; len(x) > 0 {
if y := len(x); y != len(input) {
log.Fatalf("Expecting %d value(s)!", y)
}
obj.svcUUIDs = []ResUUID{} // empty
obj.testIsNext = false
return true
}
count := len(obj.fileList)
if count != len(input) {
log.Fatalf("Expecting %d value(s)!", count)
}
obj.testIsNext = false // set after all the error paths are past
// while i do believe this algorithm generates the *correct* result, i
// don't know if it does so in the optimal way. improvements welcome!
// the basic logic is:
// 0) Next() returns whatever is in fileList
// 1) Test() computes the dirname of each file, and removes duplicates
// and dirname's that have been in the path of an ack from input results
// 2) It then simplifies the list by removing the common path prefixes
// 3) Lastly, the remaining set of files (dirs) is used as new fileList
// 4) We then iterate in (0) until the fileList is empty!
var dirs = make([]string, count)
done := []string{}
for i := 0; i < count; i++ {
dir := Dirname(obj.fileList[i]) // dirname of /foo/ should be /
dirs[i] = dir
if input[i] {
done = append(done, dir)
}
}
nodupes := StrRemoveDuplicatesInList(dirs) // remove duplicates
nodones := StrFilterElementsInList(done, nodupes) // filter out done
noempty := StrFilterElementsInList([]string{""}, nodones) // remove the "" from /
obj.fileList = RemoveCommonFilePrefixes(noempty) // magic
if len(obj.fileList) == 0 { // nothing more, don't continue
return false
}
return true // continue, there are more files!
}
// produce an object which generates a minimal pkg file optimization sequence
func (obj *PkgRes) AutoEdges() AutoEdge {
// in contrast with the FileRes AutoEdges() function which contains
// more of the mechanics, most of the AutoEdge mechanics for the PkgRes
// is contained in the Test() method! This design is completely okay!
// add matches for any svc resources found in pkg definition!
var svcUUIDs []ResUUID
for _, x := range ReturnSvcInFileList(obj.fileList) {
var reversed = false
svcUUIDs = append(svcUUIDs, &SvcUUID{
BaseUUID: BaseUUID{
name: obj.GetName(),
kind: obj.Kind(),
reversed: &reversed,
},
name: x, // the svc name itself in the SvcUUID object!
}) // build list
}
return &PkgResAutoEdges{
fileList: RemoveCommonFilePrefixes(obj.fileList), // clean start!
svcUUIDs: svcUUIDs,
testIsNext: false, // start with Next() call
name: obj.GetName(), // save data for PkgResAutoEdges obj
kind: obj.Kind(),
}
}
// include all params to make a unique identification of this object
func (obj *PkgRes) GetUUIDs() []ResUUID {
x := &PkgUUID{
BaseUUID: BaseUUID{name: obj.GetName(), kind: obj.Kind()},
name: obj.Name,
state: obj.State,
}
result := []ResUUID{x}
return result
}
// can these two resources be merged ?
// (aka does this resource support doing so?)
// will resource allow itself to be grouped _into_ this obj?
func (obj *PkgRes) GroupCmp(r Res) bool {
res, ok := r.(*PkgRes)
if !ok {
return false
}
objStateIsVersion := (obj.State != "installed" && obj.State != "uninstalled" && obj.State != "newest") // must be a ver. string
resStateIsVersion := (res.State != "installed" && res.State != "uninstalled" && res.State != "newest") // must be a ver. string
if objStateIsVersion || resStateIsVersion {
// can't merge specific version checks atm
return false
}
// FIXME: keep it simple for now, only merge same states
if obj.State != res.State {
return false
}
return true
}
func (obj *PkgRes) Compare(res Res) bool {
switch res.(type) {
case *PkgRes:
res := res.(*PkgRes)
if obj.Name != res.Name {
return false
}
if obj.State != res.State {
return false
}
if obj.AllowUntrusted != res.AllowUntrusted {
return false
}
if obj.AllowNonFree != res.AllowNonFree {
return false
}
if obj.AllowUnsupported != res.AllowUnsupported {
return false
}
default:
return false
}
return true
}
// return a list of svc names for matches like /usr/lib/systemd/system/*.service
func ReturnSvcInFileList(fileList []string) []string {
result := []string{}
for _, x := range fileList {
dirname, basename := path.Split(path.Clean(x))
// TODO: do we also want to look for /etc/systemd/system/ ?
if dirname != "/usr/lib/systemd/system/" {
continue
}
if !strings.HasSuffix(basename, ".service") {
continue
}
if s := strings.TrimSuffix(basename, ".service"); !StrInList(s, result) {
result = append(result, s)
}
}
return result
}

resources.go (new file, 353 lines)

// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"bytes"
"encoding/base64"
"encoding/gob"
"fmt"
"log"
)
//go:generate stringer -type=resState -output=resstate_stringer.go
type resState int
const (
resStateNil resState = iota
resStateWatching
resStateEvent // an event has happened, but we haven't poked yet
resStateCheckApply
resStatePoking
)
//go:generate stringer -type=resConvergedState -output=resconvergedstate_stringer.go
type resConvergedState int
const (
resConvergedNil resConvergedState = iota
//resConverged
resConvergedTimeout
)
// a unique identifier for a resource, namely its name, and the kind ("type")
type ResUUID interface {
GetName() string
Kind() string
IFF(ResUUID) bool
Reversed() bool // true means this resource happens before the generator
}
type BaseUUID struct {
name string // name and kind are the values of where this is coming from
kind string
reversed *bool // piggyback edge information here
}
type AutoEdge interface {
Next() []ResUUID // call to get list of edges to add
Test([]bool) bool // call until false
}
type MetaParams struct {
AutoEdge bool `yaml:"autoedge"` // metaparam, should we generate auto edges? // XXX should default to true
AutoGroup bool `yaml:"autogroup"` // metaparam, should we auto group? // XXX should default to true
}
// this interface is everything that is common to all resources
// everything here only needs to be implemented once, in the BaseRes
type Base interface {
GetName() string // can't be named "Name()" because of struct field
SetName(string)
Kind() string
GetMeta() MetaParams
SetVertex(*Vertex)
SetConvergedCallback(ctimeout int, converged chan bool)
IsWatching() bool
SetWatching(bool)
GetConvergedState() resConvergedState
SetConvergedState(resConvergedState)
GetState() resState
SetState(resState)
SendEvent(eventName, bool, bool) bool
ReadEvent(*Event) (bool, bool) // TODO: optional here?
GroupCmp(Res) bool // TODO: is there a better name for this?
GroupRes(Res) error // group resource (arg) into self
IsGrouped() bool // am I grouped?
SetGrouped(bool) // set grouped bool
GetGroup() []Res // return everyone grouped inside me
SetGroup([]Res)
}
// this is the minimum interface you need to implement to make a new resource
type Res interface {
Base // include everything from the Base interface
Init()
//Validate() bool // TODO: this might one day be added
GetUUIDs() []ResUUID // most resources only return one
Watch(chan Event) // send on channel to signal process() events
CheckApply(bool) (bool, error)
AutoEdges() AutoEdge
Compare(Res) bool
CollectPattern(string) // XXX: temporary until Res collection is more advanced
}
type BaseRes struct {
Name string `yaml:"name"`
Meta MetaParams `yaml:"meta"` // struct of all the metaparams
kind string
events chan Event
vertex *Vertex
state resState
convergedState resConvergedState
watching bool // is Watch() loop running ?
ctimeout int // converged timeout
converged chan bool
isStateOK bool // whether the state is okay based on events or not
isGrouped bool // am i contained within a group?
grouped []Res // list of any grouped resources
}
// wraps the IFF method when used with a list of UUID's
func UUIDExistsInUUIDs(uuid ResUUID, uuids []ResUUID) bool {
for _, u := range uuids {
if uuid.IFF(u) {
return true
}
}
return false
}
func (obj *BaseUUID) GetName() string {
return obj.name
}
func (obj *BaseUUID) Kind() string {
return obj.kind
}
// if and only if they are equivalent, return true
// if they are not equivalent, return false
// most resource will want to override this method, since it does the important
// work of actually discerning if two resources are identical in function
func (obj *BaseUUID) IFF(uuid ResUUID) bool {
res, ok := uuid.(*BaseUUID)
if !ok {
return false
}
return obj.name == res.name
}
func (obj *BaseUUID) Reversed() bool {
if obj.reversed == nil {
log.Fatal("Programming error!")
}
return *obj.reversed
}
// initialize structures like channels if created without New constructor
func (obj *BaseRes) Init() {
obj.events = make(chan Event) // unbuffered chan size to avoid stale events
}
// this method gets used by all the resources
func (obj *BaseRes) GetName() string {
return obj.Name
}
func (obj *BaseRes) SetName(name string) {
obj.Name = name
}
// return the kind of resource this is
func (obj *BaseRes) Kind() string {
return obj.kind
}
func (obj *BaseRes) GetMeta() MetaParams {
return obj.Meta
}
func (obj *BaseRes) GetVertex() *Vertex {
return obj.vertex
}
func (obj *BaseRes) SetVertex(v *Vertex) {
obj.vertex = v
}
func (obj *BaseRes) SetConvergedCallback(ctimeout int, converged chan bool) {
obj.ctimeout = ctimeout
obj.converged = converged
}
// is the Watch() function running?
func (obj *BaseRes) IsWatching() bool {
return obj.watching
}
// store status of if the Watch() function is running
func (obj *BaseRes) SetWatching(b bool) {
obj.watching = b
}
func (obj *BaseRes) GetConvergedState() resConvergedState {
return obj.convergedState
}
func (obj *BaseRes) SetConvergedState(state resConvergedState) {
obj.convergedState = state
}
func (obj *BaseRes) GetState() resState {
return obj.state
}
func (obj *BaseRes) SetState(state resState) {
if DEBUG {
log.Printf("%v[%v]: State: %v -> %v", obj.Kind(), obj.GetName(), obj.GetState(), state)
}
obj.state = state
}
// push an event into the message queue for a particular vertex
func (obj *BaseRes) SendEvent(event eventName, sync bool, activity bool) bool {
// TODO: isn't this race-y ?
if !obj.IsWatching() { // element has already exited
return false // if we don't return, we'll block on the send
}
if !sync {
obj.events <- Event{event, nil, "", activity}
return true
}
resp := make(chan bool)
obj.events <- Event{event, resp, "", activity}
for {
value := <-resp
// wait until true value
if value {
return true
}
}
}
// process events when a select gets one, this handles the pause code too!
// the return values specify if we should exit and poke respectively
func (obj *BaseRes) ReadEvent(event *Event) (exit, poke bool) {
event.ACK()
switch event.Name {
case eventStart:
return false, true
case eventPoke:
return false, true
case eventBackPoke:
return false, true // forward poking in response to a back poke!
case eventExit:
return true, false
case eventPause:
// wait for next event to continue
select {
case e := <-obj.events:
e.ACK()
if e.Name == eventExit {
return true, false
} else if e.Name == eventStart { // eventContinue
return false, false // don't poke on unpause!
} else {
// if we get a poke event here, it's a bug!
log.Fatalf("%v[%v]: Unknown event: %v, while paused!", obj.Kind(), obj.GetName(), e)
}
}
default:
log.Fatal("Unknown event: ", event)
}
return true, false // required to keep the stupid go compiler happy
}
func (obj *BaseRes) GroupRes(res Res) error {
if l := len(res.GetGroup()); l > 0 {
return fmt.Errorf("Res: %v already contains %d grouped resources!", res, l)
}
if res.IsGrouped() {
return fmt.Errorf("Res: %v is already grouped!", res)
}
obj.grouped = append(obj.grouped, res)
res.SetGrouped(true) // i am contained _in_ a group
return nil
}
func (obj *BaseRes) IsGrouped() bool { // am I grouped?
return obj.isGrouped
}
func (obj *BaseRes) SetGrouped(b bool) {
obj.isGrouped = b
}
func (obj *BaseRes) GetGroup() []Res { // return everyone grouped inside me
return obj.grouped
}
func (obj *BaseRes) SetGroup(g []Res) {
obj.grouped = g
}
func (obj *BaseRes) CollectPattern(pattern string) {
// XXX: default method is empty
}
// ResToB64 encodes a resource to a base64 encoded string (after serialization)
func ResToB64(res Res) (string, error) {
b := bytes.Buffer{}
e := gob.NewEncoder(&b)
err := e.Encode(&res) // pass with &
if err != nil {
return "", fmt.Errorf("Gob failed to encode: %v", err)
}
return base64.StdEncoding.EncodeToString(b.Bytes()), nil
}
// B64ToRes decodes a resource from a base64 encoded string (after deserialization)
func B64ToRes(str string) (Res, error) {
var output interface{}
bb, err := base64.StdEncoding.DecodeString(str)
if err != nil {
return nil, fmt.Errorf("Base64 failed to decode: %v", err)
}
b := bytes.NewBuffer(bb)
d := gob.NewDecoder(b)
err = d.Decode(&output) // pass with &
if err != nil {
return nil, fmt.Errorf("Gob failed to decode: %v", err)
}
res, ok := output.(Res)
if !ok {
return nil, fmt.Errorf("Output %v is not a Res", output)
}
return res, nil
}

resources_test.go (new file)

@@ -0,0 +1,105 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"bytes"
"encoding/base64"
"encoding/gob"
"testing"
)
func TestMiscEncodeDecode1(t *testing.T) {
var err error
//gob.Register( &NoopRes{} ) // happens in noop.go : init()
//gob.Register( &FileRes{} ) // happens in file.go : init()
// ...
// encode
var input interface{} = &FileRes{}
b1 := bytes.Buffer{}
e := gob.NewEncoder(&b1)
err = e.Encode(&input) // pass with &
if err != nil {
t.Errorf("Gob failed to Encode: %v", err)
}
str := base64.StdEncoding.EncodeToString(b1.Bytes())
// decode
var output interface{}
bb, err := base64.StdEncoding.DecodeString(str)
if err != nil {
t.Errorf("Base64 failed to Decode: %v", err)
}
b2 := bytes.NewBuffer(bb)
d := gob.NewDecoder(b2)
err = d.Decode(&output) // pass with &
if err != nil {
t.Errorf("Gob failed to Decode: %v", err)
}
res1, ok := input.(Res)
if !ok {
t.Errorf("Input %v is not a Res", res1)
return
}
res2, ok := output.(Res)
if !ok {
t.Errorf("Output %v is not a Res", res2)
return
}
if !res1.Compare(res2) {
t.Error("The input and output Res values do not match!")
}
}
func TestMiscEncodeDecode2(t *testing.T) {
var err error
//gob.Register( &NoopRes{} ) // happens in noop.go : init()
//gob.Register( &FileRes{} ) // happens in file.go : init()
// ...
// encode
var input Res = &FileRes{}
b64, err := ResToB64(input)
if err != nil {
t.Errorf("Can't encode: %v", err)
return
}
output, err := B64ToRes(b64)
if err != nil {
t.Errorf("Can't decode: %v", err)
return
}
res1, ok := input.(Res)
if !ok {
t.Errorf("Input %v is not a Res", res1)
return
}
res2, ok := output.(Res)
if !ok {
t.Errorf("Output %v is not a Res", res2)
return
}
if !res1.Compare(res2) {
t.Error("The input and output Res values do not match!")
}
}


@@ -1,339 +0,0 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// NOTE: docs are found at: https://godoc.org/github.com/coreos/go-systemd/dbus
package main
import (
"fmt"
systemd "github.com/coreos/go-systemd/dbus" // change namespace
"github.com/coreos/go-systemd/util"
"github.com/godbus/dbus" // namespace collides with systemd wrapper
"log"
)
type ServiceType struct {
BaseType `yaml:",inline"`
State string `yaml:"state"` // state: running, stopped
Startup string `yaml:"startup"` // enabled, disabled, undefined
}
func NewServiceType(name, state, startup string) *ServiceType {
return &ServiceType{
BaseType: BaseType{
Name: name,
events: make(chan Event),
vertex: nil,
},
State: state,
Startup: startup,
}
}
func (obj *ServiceType) GetType() string {
return "Service"
}
// Service watcher
func (obj *ServiceType) Watch() {
if obj.IsWatching() {
return
}
obj.SetWatching(true)
defer obj.SetWatching(false)
// obj.Name: service name
//vertex := obj.GetVertex() // stored with SetVertex
if !util.IsRunningSystemd() {
log.Fatal("Systemd is not running.")
}
conn, err := systemd.NewSystemdConnection() // needs root access
if err != nil {
log.Fatal("Failed to connect to systemd: ", err)
}
defer conn.Close()
bus, err := dbus.SystemBus()
if err != nil {
log.Fatal("Failed to connect to bus: ", err)
}
// XXX: will this detect new units?
bus.BusObject().Call("org.freedesktop.DBus.AddMatch", 0,
"type='signal',interface='org.freedesktop.systemd1.Manager',member='Reloading'")
buschan := make(chan *dbus.Signal, 10)
bus.Signal(buschan)
var service = fmt.Sprintf("%v.service", obj.Name) // systemd name
var send = false // send event?
var exit = false
var dirty = false
var invalid = false // does the service exist or not?
var previous bool // previous invalid value
set := conn.NewSubscriptionSet() // no error should be returned
subChannel, subErrors := set.Subscribe()
var activeSet = false
for {
// XXX: watch for an event for new units...
// XXX: detect if startup enabled/disabled value changes...
previous = invalid
invalid = false
// firstly, does service even exist or not?
loadstate, err := conn.GetUnitProperty(service, "LoadState")
if err != nil {
log.Printf("Failed to get property: %v", err)
invalid = true
}
if !invalid {
var notFound = (loadstate.Value == dbus.MakeVariant("not-found"))
if notFound { // XXX: in the loop we'll handle changes better...
log.Printf("Failed to find service: %v", service)
invalid = true // XXX ?
}
}
if previous != invalid { // if invalid changed, send signal
send = true
dirty = true
}
if invalid {
log.Printf("Waiting for: %v", service) // waiting for service to appear...
if activeSet {
activeSet = false
set.Remove(service) // no return value should ever occur
}
obj.SetState(typeWatching) // reset
select {
case _ = <-buschan: // XXX wait for new units event to unstick
obj.SetConvergedState(typeConvergedNil)
// loop so that we can see the changed invalid signal
log.Printf("Service[%v]->DaemonReload()", service)
case event := <-obj.events:
obj.SetConvergedState(typeConvergedNil)
if exit, send = obj.ReadEvent(&event); exit {
return // exit
}
if event.GetActivity() {
dirty = true
}
case _ = <-TimeAfterOrBlock(obj.ctimeout):
obj.SetConvergedState(typeConvergedTimeout)
obj.converged <- true
continue
}
} else {
if !activeSet {
activeSet = true
set.Add(service) // no return value should ever occur
}
log.Printf("Watching: %v", service) // attempting to watch...
obj.SetState(typeWatching) // reset
select {
case event := <-subChannel:
log.Printf("Service event: %+v", event)
// NOTE: the value returned is a map for some reason...
if event[service] != nil {
// event[service].ActiveState is not nil
if event[service].ActiveState == "active" {
log.Printf("Service[%v]->Started()", service)
} else if event[service].ActiveState == "inactive" {
log.Printf("Service[%v]->Stopped!()", service)
} else {
log.Fatal("Unknown service state: ", event[service].ActiveState)
}
} else {
// service stopped (and ActiveState is nil...)
log.Printf("Service[%v]->Stopped", service)
}
send = true
dirty = true
case err := <-subErrors:
obj.SetConvergedState(typeConvergedNil) // XXX ?
log.Println("error:", err)
log.Fatal(err)
//vertex.events <- fmt.Sprintf("service: %v", "error") // XXX: how should we handle errors?
case event := <-obj.events:
obj.SetConvergedState(typeConvergedNil)
if exit, send = obj.ReadEvent(&event); exit {
return // exit
}
if event.GetActivity() {
dirty = true
}
}
}
if send {
send = false
if dirty {
dirty = false
obj.isStateOK = false // something made state dirty
}
Process(obj) // XXX: rename this function
}
}
}
func (obj *ServiceType) StateOK() bool {
if obj.isStateOK { // cache the state
return true
}
if !util.IsRunningSystemd() {
log.Fatal("Systemd is not running.")
}
conn, err := systemd.NewSystemdConnection() // needs root access
if err != nil {
log.Fatal("Failed to connect to systemd: ", err)
}
defer conn.Close()
var service = fmt.Sprintf("%v.service", obj.Name) // systemd name
loadstate, err := conn.GetUnitProperty(service, "LoadState")
if err != nil {
log.Printf("Failed to get load state: %v", err)
return false
}
// NOTE: we have to compare variants with other variants, they are really strings...
var notFound = (loadstate.Value == dbus.MakeVariant("not-found"))
if notFound {
log.Printf("Failed to find service: %v", service)
return false
}
// XXX: check service "enabled at boot" or not status...
//conn.GetUnitProperties(service)
activestate, err := conn.GetUnitProperty(service, "ActiveState")
if err != nil {
log.Fatal("Failed to get active state: ", err)
}
var running = (activestate.Value == dbus.MakeVariant("active"))
if obj.State == "running" {
if !running {
return false // we are in the wrong state
}
} else if obj.State == "stopped" {
if running {
return false
}
} else {
log.Fatal("Unknown state: ", obj.State)
}
return true // all is good, no state change needed
}
func (obj *ServiceType) Apply() bool {
log.Printf("%v[%v]: Apply", obj.GetType(), obj.GetName())
if !util.IsRunningSystemd() {
log.Fatal("Systemd is not running.")
}
conn, err := systemd.NewSystemdConnection() // needs root access
if err != nil {
log.Fatal("Failed to connect to systemd: ", err)
}
defer conn.Close()
var service = fmt.Sprintf("%v.service", obj.Name) // systemd name
var files = []string{service} // the service represented in a list
if obj.Startup == "enabled" {
_, _, err = conn.EnableUnitFiles(files, false, true)
} else if obj.Startup == "disabled" {
_, err = conn.DisableUnitFiles(files, false)
} else {
err = nil
}
if err != nil {
log.Printf("Unable to change startup status: %v", err)
return false
}
result := make(chan string, 1) // catch result information
if obj.State == "running" {
_, err := conn.StartUnit(service, "fail", result)
if err != nil {
log.Fatal("Failed to start unit: ", err)
return false
}
} else if obj.State == "stopped" {
_, err = conn.StopUnit(service, "fail", result)
if err != nil {
log.Fatal("Failed to stop unit: ", err)
return false
}
} else {
log.Fatal("Unknown state: ", obj.State)
}
status := <-result
if &status == nil {
log.Fatal("Result is nil")
return false
}
if status != "done" {
log.Fatal("Unknown return string: ", status)
return false
}
// XXX: also set enabled on boot
return true
}
func (obj *ServiceType) Compare(typ Type) bool {
switch typ.(type) {
case *ServiceType:
typ := typ.(*ServiceType)
if obj.Name != typ.Name {
return false
}
if obj.State != typ.State {
return false
}
if obj.Startup != typ.Startup {
return false
}
default:
return false
}
return true
}


@@ -1,13 +1,13 @@
%global project_version __VERSION__
%define debug_package %{nil}
Name: mgmt
Name: __PROGRAM__
Version: __VERSION__
Release: __RELEASE__
Summary: A next generation config management prototype!
License: AGPLv3+
URL: https://github.com/purpleidea/mgmt
Source0: https://dl.fedoraproject.org/pub/alt/purpleidea/mgmt/SOURCES/mgmt-%{project_version}.tar.bz2
Source0: https://dl.fedoraproject.org/pub/alt/purpleidea/__PROGRAM__/SOURCES/__PROGRAM__-%{project_version}.tar.bz2
# graphviz should really be a "suggests", since technically it's optional
Requires: graphviz
@@ -38,26 +38,26 @@ make build
%install
rm -rf %{buildroot}
# _datadir is typically /usr/share/
install -d -m 0755 %{buildroot}/%{_datadir}/mgmt/
cp -a AUTHORS COPYING COPYRIGHT DOCUMENTATION.md README.md THANKS examples/ %{buildroot}/%{_datadir}/mgmt/
install -d -m 0755 %{buildroot}/%{_datadir}/__PROGRAM__/
cp -a AUTHORS COPYING COPYRIGHT DOCUMENTATION.md README.md THANKS examples/ %{buildroot}/%{_datadir}/__PROGRAM__/
# install the binary
mkdir -p %{buildroot}/%{_bindir}
install -m 0755 mgmt %{buildroot}/%{_bindir}/mgmt
install -m 0755 __PROGRAM__ %{buildroot}/%{_bindir}/__PROGRAM__
# profile.d bash completion
mkdir -p %{buildroot}%{_sysconfdir}/profile.d
install misc/mgmt.bashrc -m 0755 %{buildroot}%{_sysconfdir}/profile.d/mgmt.sh
install misc/bashrc.sh -m 0755 %{buildroot}%{_sysconfdir}/profile.d/__PROGRAM__.sh
# etc dir
mkdir -p %{buildroot}%{_sysconfdir}/mgmt/
install -m 0644 misc/mgmt.conf.example %{buildroot}%{_sysconfdir}/mgmt/mgmt.conf
mkdir -p %{buildroot}%{_sysconfdir}/__PROGRAM__/
install -m 0644 misc/example.conf %{buildroot}%{_sysconfdir}/__PROGRAM__/__PROGRAM__.conf
%files
%attr(0755, root, root) %{_sysconfdir}/profile.d/mgmt.sh
%{_datadir}/mgmt/*
%{_bindir}/mgmt
%{_sysconfdir}/mgmt/*
%attr(0755, root, root) %{_sysconfdir}/profile.d/__PROGRAM__.sh
%{_datadir}/__PROGRAM__/*
%{_bindir}/__PROGRAM__
%{_sysconfdir}/__PROGRAM__/*
# this changelog is auto-generated by git log
%changelog

svc.go (new file)

@@ -0,0 +1,436 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// DOCS: https://godoc.org/github.com/coreos/go-systemd/dbus
package main
import (
"encoding/gob"
"errors"
"fmt"
systemd "github.com/coreos/go-systemd/dbus" // change namespace
systemdUtil "github.com/coreos/go-systemd/util"
"github.com/godbus/dbus" // namespace collides with systemd wrapper
"log"
)
func init() {
gob.Register(&SvcRes{})
}
type SvcRes struct {
BaseRes `yaml:",inline"`
State string `yaml:"state"` // state: running, stopped, undefined
Startup string `yaml:"startup"` // enabled, disabled, undefined
}
func NewSvcRes(name, state, startup string) *SvcRes {
obj := &SvcRes{
BaseRes: BaseRes{
Name: name,
},
State: state,
Startup: startup,
}
obj.Init()
return obj
}
func (obj *SvcRes) Init() {
obj.BaseRes.kind = "Svc"
obj.BaseRes.Init() // call base init, b/c we're overriding
}
func (obj *SvcRes) Validate() bool {
if obj.State != "running" && obj.State != "stopped" && obj.State != "" {
return false
}
if obj.Startup != "enabled" && obj.Startup != "disabled" && obj.Startup != "" {
return false
}
return true
}
// Service watcher
func (obj *SvcRes) Watch(processChan chan Event) {
if obj.IsWatching() {
return
}
obj.SetWatching(true)
defer obj.SetWatching(false)
// obj.Name: svc name
//vertex := obj.GetVertex() // stored with SetVertex
if !systemdUtil.IsRunningSystemd() {
log.Fatal("Systemd is not running.")
}
conn, err := systemd.NewSystemdConnection() // needs root access
if err != nil {
log.Fatal("Failed to connect to systemd: ", err)
}
defer conn.Close()
// if we share the bus with others, we will get each others messages!!
bus, err := SystemBusPrivateUsable() // don't share the bus connection!
if err != nil {
log.Fatal("Failed to connect to bus: ", err)
}
// XXX: will this detect new units?
bus.BusObject().Call("org.freedesktop.DBus.AddMatch", 0,
"type='signal',interface='org.freedesktop.systemd1.Manager',member='Reloading'")
buschan := make(chan *dbus.Signal, 10)
bus.Signal(buschan)
var svc = fmt.Sprintf("%v.service", obj.Name) // systemd name
var send = false // send event?
var exit = false
var dirty = false
var invalid = false // does the svc exist or not?
var previous bool // previous invalid value
set := conn.NewSubscriptionSet() // no error should be returned
subChannel, subErrors := set.Subscribe()
var activeSet = false
for {
// XXX: watch for an event for new units...
// XXX: detect if startup enabled/disabled value changes...
previous = invalid
invalid = false
// firstly, does svc even exist or not?
loadstate, err := conn.GetUnitProperty(svc, "LoadState")
if err != nil {
log.Printf("Failed to get property: %v", err)
invalid = true
}
if !invalid {
var notFound = (loadstate.Value == dbus.MakeVariant("not-found"))
if notFound { // XXX: in the loop we'll handle changes better...
log.Printf("Failed to find svc: %v", svc)
invalid = true // XXX ?
}
}
if previous != invalid { // if invalid changed, send signal
send = true
dirty = true
}
if invalid {
log.Printf("Waiting for: %v", svc) // waiting for svc to appear...
if activeSet {
activeSet = false
set.Remove(svc) // no return value should ever occur
}
obj.SetState(resStateWatching) // reset
select {
case _ = <-buschan: // XXX wait for new units event to unstick
obj.SetConvergedState(resConvergedNil)
// loop so that we can see the changed invalid signal
log.Printf("Svc[%v]->DaemonReload()", svc)
case event := <-obj.events:
obj.SetConvergedState(resConvergedNil)
if exit, send = obj.ReadEvent(&event); exit {
return // exit
}
if event.GetActivity() {
dirty = true
}
case _ = <-TimeAfterOrBlock(obj.ctimeout):
obj.SetConvergedState(resConvergedTimeout)
obj.converged <- true
continue
}
} else {
if !activeSet {
activeSet = true
set.Add(svc) // no return value should ever occur
}
log.Printf("Watching: %v", svc) // attempting to watch...
obj.SetState(resStateWatching) // reset
select {
case event := <-subChannel:
log.Printf("Svc event: %+v", event)
// NOTE: the value returned is a map for some reason...
if event[svc] != nil {
// event[svc].ActiveState is not nil
if event[svc].ActiveState == "active" {
log.Printf("Svc[%v]->Started()", svc)
} else if event[svc].ActiveState == "inactive" {
log.Printf("Svc[%v]->Stopped!()", svc)
} else {
log.Fatal("Unknown svc state: ", event[svc].ActiveState)
}
} else {
// svc stopped (and ActiveState is nil...)
log.Printf("Svc[%v]->Stopped", svc)
}
send = true
dirty = true
case err := <-subErrors:
obj.SetConvergedState(resConvergedNil) // XXX ?
log.Printf("error: %v", err)
log.Fatal(err)
//vertex.events <- fmt.Sprintf("svc: %v", "error") // XXX: how should we handle errors?
case event := <-obj.events:
obj.SetConvergedState(resConvergedNil)
if exit, send = obj.ReadEvent(&event); exit {
return // exit
}
if event.GetActivity() {
dirty = true
}
}
}
if send {
send = false
if dirty {
dirty = false
obj.isStateOK = false // something made state dirty
}
resp := NewResp()
processChan <- Event{eventNil, resp, "", true} // trigger process
resp.ACKWait() // wait for the ACK()
}
}
}
func (obj *SvcRes) CheckApply(apply bool) (stateok bool, err error) {
log.Printf("%v[%v]: CheckApply(%t)", obj.Kind(), obj.GetName(), apply)
if obj.isStateOK { // cache the state
return true, nil
}
if !systemdUtil.IsRunningSystemd() {
return false, errors.New("Systemd is not running.")
}
conn, err := systemd.NewSystemdConnection() // needs root access
if err != nil {
return false, fmt.Errorf("Failed to connect to systemd: %v", err)
}
defer conn.Close()
var svc = fmt.Sprintf("%v.service", obj.Name) // systemd name
loadstate, err := conn.GetUnitProperty(svc, "LoadState")
if err != nil {
return false, fmt.Errorf("Failed to get load state: %v", err)
}
// NOTE: we have to compare variants with other variants, they are really strings...
var notFound = (loadstate.Value == dbus.MakeVariant("not-found"))
if notFound {
return false, fmt.Errorf("Failed to find svc: %v", svc)
}
// XXX: check svc "enabled at boot" or not status...
//conn.GetUnitProperties(svc)
activestate, err := conn.GetUnitProperty(svc, "ActiveState")
if err != nil {
return false, fmt.Errorf("Failed to get active state: %v", err)
}
var running = (activestate.Value == dbus.MakeVariant("active"))
var stateOK = ((obj.State == "") || (obj.State == "running" && running) || (obj.State == "stopped" && !running))
var startupOK = true // XXX DETECT AND SET
if stateOK && startupOK {
return true, nil // we are in the correct state
}
// state is not okay, no work done, exit, but without error
if !apply {
return false, nil
}
// apply portion
log.Printf("%v[%v]: Apply", obj.Kind(), obj.GetName())
var files = []string{svc} // the svc represented in a list
if obj.Startup == "enabled" {
_, _, err = conn.EnableUnitFiles(files, false, true)
} else if obj.Startup == "disabled" {
_, err = conn.DisableUnitFiles(files, false)
}
if err != nil {
return false, fmt.Errorf("Unable to change startup status: %v", err)
}
// XXX: do we need to use a buffered channel here?
result := make(chan string, 1) // catch result information
if obj.State == "running" {
_, err = conn.StartUnit(svc, "fail", result)
if err != nil {
return false, fmt.Errorf("Failed to start unit: %v", err)
}
} else if obj.State == "stopped" {
_, err = conn.StopUnit(svc, "fail", result)
if err != nil {
return false, fmt.Errorf("Failed to stop unit: %v", err)
}
}
status := <-result
if status != "done" {
return false, fmt.Errorf("Unknown systemd return string: %v", status)
}
// XXX: also set enabled on boot
return false, nil // success
}
type SvcUUID struct {
// NOTE: there is also a name variable in the BaseUUID struct, this is
// information about where this UUID came from, and is unrelated to the
// information about the resource we're matching. That data which is
// used in the IFF function, is what you see in the struct fields here.
BaseUUID
name string // the svc name
}
// IFF returns true if and only if the two UUIDs are equivalent
func (obj *SvcUUID) IFF(uuid ResUUID) bool {
res, ok := uuid.(*SvcUUID)
if !ok {
return false
}
return obj.name == res.name
}
type SvcResAutoEdges struct {
data []ResUUID
pointer int
found bool
}
func (obj *SvcResAutoEdges) Next() []ResUUID {
if obj.found {
log.Fatal("Shouldn't be called anymore!")
}
if len(obj.data) == 0 { // check length for rare scenarios
return nil
}
value := obj.data[obj.pointer]
obj.pointer++
return []ResUUID{value} // we return one, even though api supports N
}
// get results of the earlier Next() call, return if we should continue!
func (obj *SvcResAutoEdges) Test(input []bool) bool {
// if there aren't any more remaining
if len(obj.data) <= obj.pointer {
return false
}
if obj.found { // already found, done!
return false
}
if len(input) != 1 { // in case we get given bad data
log.Fatal("Expecting a single value!")
}
if input[0] { // if a match is found, we're done!
obj.found = true // no more to find!
return false
}
return true // keep going
}
func (obj *SvcRes) AutoEdges() AutoEdge {
var data []ResUUID
svcFiles := []string{
fmt.Sprintf("/etc/systemd/system/%s.service", obj.Name), // takes precedence
fmt.Sprintf("/usr/lib/systemd/system/%s.service", obj.Name), // pkg default
}
for _, x := range svcFiles {
var reversed = true
data = append(data, &FileUUID{
BaseUUID: BaseUUID{
name: obj.GetName(),
kind: obj.Kind(),
reversed: &reversed,
},
path: x, // what matters
})
}
return &FileResAutoEdges{
data: data,
pointer: 0,
found: false,
}
}
// include all params to make a unique identification of this object
func (obj *SvcRes) GetUUIDs() []ResUUID {
x := &SvcUUID{
BaseUUID: BaseUUID{name: obj.GetName(), kind: obj.Kind()},
name: obj.Name, // svc name
}
return []ResUUID{x}
}
func (obj *SvcRes) GroupCmp(r Res) bool {
_, ok := r.(*SvcRes)
if !ok {
return false
}
// TODO: depending on if the systemd service api allows batching, we
// might be able to build this, although not sure how useful it is...
// it might just eliminate parallelism by bunching up the graph
return false // not possible atm
}
func (obj *SvcRes) Compare(res Res) bool {
switch res.(type) {
case *SvcRes:
res := res.(*SvcRes)
if obj.Name != res.Name {
return false
}
if obj.State != res.State {
return false
}
if obj.Startup != res.Startup {
return false
}
default:
return false
}
return true
}


@@ -1,5 +1,4 @@
#!/bin/bash -e
# test suite...
echo running test.sh
echo "ENV:"
@@ -15,9 +14,9 @@ diff <(tail -n +$start AUTHORS | sort) <(tail -n +$start AUTHORS)
./test/test-gofmt.sh
./test/test-yamlfmt.sh
./test/test-bashfmt.sh
./test/test-headerfmt.sh
go test
echo running go vet # since it doesn't output an ok message on pass
go vet && echo PASS
./test/test-govet.sh
# do these longer tests only when running on ci
if env | grep -q -e '^TRAVIS=true$' -e '^JENKINS_URL=' -e '^BUILD_TAG=jenkins'; then
@@ -32,3 +31,4 @@ fi
if env | grep -q -e '^JENKINS_URL=' -e '^BUILD_TAG=jenkins'; then
./test/test-omv.sh
fi
./test/test-golint.sh # test last, because this test is somewhat arbitrary


@@ -1,7 +1,7 @@
---
:domain: example.com
:network: 192.168.123.0/24
:image: fedora-23
:image: centos-7.1
:cpus: ''
:memory: ''
:disks: 0

test/omv/pkg1a.yaml (new file)

@@ -0,0 +1,52 @@
---
:domain: example.com
:network: 192.168.123.0/24
:image: centos-7.1
:cpus: ''
:memory: ''
:disks: 0
:disksize: 40G
:boxurlprefix: ''
:sync: rsync
:syncdir: ''
:syncsrc: ''
:folder: ".omv"
:extern:
- type: git
repository: https://github.com/purpleidea/mgmt
directory: mgmt
:cd: ''
:puppet: false
:classes: []
:shell:
- mkdir /tmp/mgmt/
:docker: false
:kubernetes: false
:ansible: []
:playbook: []
:ansible_extras: {}
:cachier: false
:vms:
- :name: mgmt1
:shell:
- iptables -F
- cd /vagrant/mgmt/ && make path
- cd /vagrant/mgmt/ && make deps && make build && cp mgmt ~/bin/
- etcd -bind-addr "`hostname --ip-address`:2379" &
- cd && mgmt run --file /vagrant/mgmt/examples/pkg1.yaml --converged-timeout=5
:namespace: omv
:count: 0
:username: ''
:password: ''
:poolid: true
:repos: []
:update: false
:reboot: false
:unsafe: false
:nested: false
:tests:
- omv up
- vssh root@mgmt1 -c which powertop
- omv destroy
:comment: simple package install test case
:reallyrm: false

test/omv/pkg1b.yaml (new file)

@@ -0,0 +1,53 @@
---
:domain: example.com
:network: 192.168.123.0/24
:image: debian-8
:cpus: ''
:memory: ''
:disks: 0
:disksize: 40G
:boxurlprefix: ''
:sync: rsync
:syncdir: ''
:syncsrc: ''
:folder: ".omv"
:extern:
- type: git
repository: https://github.com/purpleidea/mgmt
directory: mgmt
:cd: ''
:puppet: false
:classes: []
:shell:
- mkdir /tmp/mgmt/
:docker: false
:kubernetes: false
:ansible: []
:playbook: []
:ansible_extras: {}
:cachier: false
:vms:
- :name: mgmt1
:shell:
- apt-get install -y make
- iptables -F
- cd /vagrant/mgmt/ && make path
- cd /vagrant/mgmt/ && make deps && make build && cp mgmt ~/bin/
- etcd -bind-addr "`hostname --ip-address`:2379" &
- cd && mgmt run --file /vagrant/mgmt/examples/pkg1.yaml --converged-timeout=5
:namespace: omv
:count: 0
:username: ''
:password: ''
:poolid: true
:repos: []
:update: false
:reboot: false
:unsafe: false
:nested: false
:tests:
- omv up
- vssh root@mgmt1 -c which powertop
- omv destroy
:comment: simple package install test case
:reallyrm: false


@@ -1,10 +1,21 @@
# NOTE: boiler plate to run etcd; source with: . etcd.sh; should NOT be +x
cleanup ()
{
echo "cleanup: $1"
killall etcd || killall -9 etcd || true # kill etcd
rm -rf /tmp/etcd/
}
trap cleanup INT QUIT TERM EXIT ERR
trap_with_arg() {
func="$1"
shift
for sig in "$@"
do
trap "$func $sig" "$sig"
done
}
trap_with_arg cleanup INT QUIT TERM EXIT # ERR
mkdir -p /tmp/etcd/
cd /tmp/etcd/ >/dev/null # shush the cd operation
etcd & # start etcd as job # 1


@@ -1,4 +1,4 @@
#!/bin/bash
#!/bin/bash -e
# NOTES:
# * this is a simple shell based `mgmt` test case
# * it is recommended that you run mgmt wrapped in the timeout command


@@ -1,12 +1,17 @@
#!/bin/bash
#!/bin/bash -e
if env | grep -q -e '^TRAVIS=true$'; then
# inotify doesn't seem to work properly on travis
echo "Travis and Jenkins give wonky results here, skipping test!"
exit
fi
. etcd.sh # start etcd as job # 1
# run till completion
timeout --kill-after=15s 10s ./mgmt run --file t2.yaml --converged-timeout=5 --no-watch &
#jobs # etcd is 1
wait -n 2 # wait for mgmt to exit
. wait.sh # wait for everything except etcd
test -e /tmp/mgmt/f1
test -e /tmp/mgmt/f2


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
noop:
- name: noop1
file:
@@ -27,15 +27,15 @@ types:
edges:
- name: e1
from:
type: file
res: file
name: file1
to:
type: file
res: file
name: file2
- name: e2
from:
type: file
res: file
name: file2
to:
type: file
res: file
name: file3


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
file:
- name: file1a
path: "/tmp/mgmt/mgmtA/f1a"
@@ -23,6 +23,6 @@ types:
i am f4, exported from host A
state: exists
collect:
- type: file
- res: file
pattern: "/tmp/mgmt/mgmtA/"
edges: []


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
file:
- name: file1b
path: "/tmp/mgmt/mgmtB/f1b"
@@ -23,6 +23,6 @@ types:
i am f4, exported from host B
state: exists
collect:
- type: file
- res: file
pattern: "/tmp/mgmt/mgmtB/"
edges: []


@@ -1,6 +1,6 @@
---
graph: mygraph
types:
resources:
file:
- name: file1c
path: "/tmp/mgmt/mgmtC/f1c"
@@ -23,6 +23,6 @@ types:
i am f4, exported from host C
state: exists
collect:
- type: file
- res: file
pattern: "/tmp/mgmt/mgmtC/"
edges: []


@@ -1,4 +1,10 @@
#!/bin/bash
#!/bin/bash -e
if env | grep -q -e '^TRAVIS=true$'; then
# inotify doesn't seem to work properly on travis
echo "Travis and Jenkins give wonky results here, skipping test!"
exit
fi
. etcd.sh # start etcd as job # 1


@@ -1,4 +1,4 @@
#!/bin/bash
#!/bin/bash -e
. etcd.sh # start etcd as job # 1


@@ -1,7 +1,7 @@
---
graph: mygraph
comment: simple exec fan in example to demonstrate optimization)
types:
resources:
exec:
- name: exec1
cmd: sleep 10s
@@ -56,22 +56,22 @@ types:
edges:
- name: e1
from:
type: exec
res: exec
name: exec1
to:
type: exec
res: exec
name: exec5
- name: e2
from:
type: exec
res: exec
name: exec2
to:
type: exec
res: exec
name: exec5
- name: e3
from:
type: exec
res: exec
name: exec3
to:
type: exec
res: exec
name: exec5


@@ -1,4 +1,4 @@
#!/bin/bash
#!/bin/bash -e
. etcd.sh # start etcd as job # 1


@@ -1,7 +1,7 @@
 ---
 graph: mygraph
 comment: simple exec fan in to fan out example to demonstrate optimization
-types:
+resources:
   exec:
   - name: exec1
     cmd: sleep 10s
@@ -86,43 +86,43 @@ types:
 edges:
 - name: e1
   from:
-    type: exec
+    res: exec
     name: exec1
   to:
-    type: exec
+    res: exec
     name: exec4
 - name: e2
   from:
-    type: exec
+    res: exec
     name: exec2
   to:
-    type: exec
+    res: exec
     name: exec4
 - name: e3
   from:
-    type: exec
+    res: exec
     name: exec3
   to:
-    type: exec
+    res: exec
     name: exec4
 - name: e4
   from:
-    type: exec
+    res: exec
     name: exec4
   to:
-    type: exec
+    res: exec
     name: exec5
 - name: e5
   from:
-    type: exec
+    res: exec
     name: exec4
   to:
-    type: exec
+    res: exec
     name: exec6
 - name: e6
   from:
-    type: exec
+    res: exec
     name: exec4
   to:
-    type: exec
+    res: exec
     name: exec7

test/shell/t6.sh (new executable file, 33 lines)

@@ -0,0 +1,33 @@
#!/bin/bash -e
if env | grep -q -e '^TRAVIS=true$'; then
# inotify doesn't seem to work properly on travis
echo "Travis and Jenkins give wonky results here, skipping test!"
exit
fi
. etcd.sh # start etcd as job # 1
# run till completion
timeout --kill-after=20s 15s ./mgmt run --file t6.yaml --no-watch &
sleep 1s # let it converge
test -e /tmp/mgmt/f1
test -e /tmp/mgmt/f2
test -e /tmp/mgmt/f3
test ! -e /tmp/mgmt/f4
rm -f /tmp/mgmt/f2
sleep 0.1s # let it converge or tests will fail
test -e /tmp/mgmt/f2
rm -f /tmp/mgmt/f2
sleep 0.1s
test -e /tmp/mgmt/f2
echo foo > /tmp/mgmt/f2
sleep 0.1s
test "`cat /tmp/mgmt/f2`" = "i am f2"
rm -f /tmp/mgmt/f2
sleep 0.1s
test -e /tmp/mgmt/f2
killall -SIGINT mgmt # send ^C to exit mgmt
. wait.sh # wait for everything except etcd


@@ -1,6 +1,6 @@
 ---
 graph: mygraph
-types:
+resources:
   noop:
   - name: noop1
   file:
@@ -27,15 +27,15 @@ types:
 edges:
 - name: e1
   from:
-    type: file
+    res: file
     name: file1
   to:
-    type: file
+    res: file
     name: file2
 - name: e2
   from:
-    type: file
+    res: file
     name: file2
   to:
-    type: file
+    res: file
     name: file3


@@ -1,6 +1,9 @@
 # NOTE: boiler plate to wait on mgmt; source with: . wait.sh; should NOT be +x
-for j in `jobs -p`
+while test "`jobs -p`" != "" && test "`jobs -p`" != "`pidof etcd`"
 do
-	[ $j -eq `pidof etcd` ] && continue	# don't wait for etcd
-	wait $j	# wait for mgmt job $j
+	for j in `jobs -p`
+	do
+		[ "$j" = "`pidof etcd`" ] && continue	# don't wait for etcd
+		wait $j || continue	# wait for mgmt job $j
+	done
 done
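
The rewritten loop above re-reads `jobs -p` on every pass, so jobs that finish while another `wait` is blocking are still reaped. A minimal standalone sketch of the same pattern, using hypothetical `sleep` jobs as stand-ins for the mgmt and etcd processes:

```shell
# spawn two placeholder background jobs (stand-ins for mgmt processes)
sleep 0.2 &
sleep 0.4 &
# keep re-scanning the job table until it is empty, as the new wait.sh does
while test "`jobs -p`" != ""
do
	for j in `jobs -p`
	do
		wait $j || continue	# reap one job; ignore its exit status here
	done
done
DONE="yes"	# reached only after every background job has been reaped
```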


@@ -1,7 +1,7 @@
 #!/bin/bash
 # check for any bash files that aren't properly formatted
 # TODO: this is hardly exhaustive
+echo running test-bashfmt.sh
 set -o errexit
 set -o nounset
 set -o pipefail


@@ -1,6 +1,6 @@
 #!/bin/bash
 # original version of this script from kubernetes project, under ALv2 license
+echo running test-gofmt.sh
 set -o errexit
 set -o nounset
 set -o pipefail
@@ -9,7 +9,7 @@ ROOT=$(dirname "${BASH_SOURCE}")/..
 GO_VERSION=($(go version))
-if [[ -z $(echo "${GO_VERSION[2]}" | grep -E 'go1.2|go1.3|go1.4|go1.5') ]]; then
+if [[ -z $(echo "${GO_VERSION[2]}" | grep -E 'go1.2|go1.3|go1.4|go1.5|go1.6') ]]; then
 	echo "Unknown go version '${GO_VERSION}', skipping gofmt."
 	exit 0
 fi

test/test-golint.sh (new executable file, 67 lines)

@@ -0,0 +1,67 @@
#!/bin/bash
# check that go lint passes or doesn't get worse by some threshold
echo running test-golint.sh
# TODO: output a diff of what has changed in the golint output
# FIXME: test a range of commits, since only the last patch is checked here
PREVIOUS='HEAD^'
CURRENT='HEAD'
THRESHOLD=15 # percent problems per new LOC
XPWD=`pwd`
ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && cd .. && pwd )" # dir!
cd "${ROOT}" >/dev/null
# if this branch has more than one commit as compared to master, diff to that
# note: this is a cheap way to avoid doing a fancy succession of golint's...
HACK=''
COMMITS="`git rev-list --count $CURRENT ^master`" # commit delta to master
# avoid: bad revision '^master' on travis for unknown reason :(
if [ "$COMMITS" != "" ] && [ "$COMMITS" -gt "1" ]; then
PREVIOUS='master'
HACK="yes"
fi
LINT=`golint` # current golint output
COUNT=`echo -e "$LINT" | wc -l` # number of golint problems in current branch
[ "$LINT" = "" ] && echo PASS && exit # everything is "perfect"
T=`mktemp --tmpdir -d tmp.XXX`
[ "$T" = "" ] && exit 1
cd $T || exit 1
git clone --recursive "${ROOT}" 2>/dev/null # make a copy
cd "`basename ${ROOT}`" >/dev/null || exit 1
if [ "$HACK" != "" ]; then
# ensure master branch really exists when cloning from a branched repo!
git checkout master &>/dev/null && git checkout - &>/dev/null
fi
DIFF1=0
NUMSTAT1=`git diff "$PREVIOUS" "$CURRENT" --numstat` # numstat diff since previous commit
while read -r line; do
add=`echo "$line" | cut -f1`
# TODO: should we only count added lines, or count the difference?
sum="$add"
#del=`echo "$line" | cut -f2`
#sum=`expr $add - $del`
DIFF1=`expr $DIFF1 + $sum`
done <<< "$NUMSTAT1" # three < is the secret to putting a variable into read
git checkout "$PREVIOUS" &>/dev/null # previous commit
LINT1=`golint`
COUNT1=`echo -e "$LINT1" | wc -l` # number of golint problems in older branch
# clean up
cd "$XPWD" >/dev/null
rm -rf "$T"
DELTA=$(printf "%.0f\n" `echo - | awk "{ print (($COUNT - $COUNT1) / $DIFF1) * 100 }"`)
echo "Lines of code: $DIFF1"
echo "Prev. # of issues: $COUNT1"
echo "Curr. # of issues: $COUNT"
echo "Issue count delta is: $DELTA %"
if [ "$DELTA" -gt "$THRESHOLD" ]; then
echo "Maximum threshold is: $THRESHOLD %"
echo '`golint` FAIL'
exit 1
fi
echo PASS
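
The threshold arithmetic in test-golint.sh can be spot-checked in isolation. A small sketch using made-up counts (12 current issues, 10 previous issues, 50 changed lines of code), reusing the same awk/printf rounding pipeline as the script above:

```shell
COUNT=12	# hypothetical current number of golint issues
COUNT1=10	# hypothetical previous number of golint issues
DIFF1=50	# hypothetical lines of changed code
# percent of new issues per changed line, rounded to the nearest integer
DELTA=$(printf "%.0f\n" `echo - | awk "{ print (($COUNT - $COUNT1) / $DIFF1) * 100 }"`)
echo "Issue count delta is: $DELTA %"	# (12 - 10) / 50 * 100 = 4
```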

test/test-govet.sh (new executable file, 8 lines)

@@ -0,0 +1,8 @@
#!/bin/bash
# check that go vet passes
echo running test-govet.sh
ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && cd .. && pwd )" # dir!
cd "${ROOT}"
go vet && echo PASS || exit 1 # since it doesn't output an ok message on pass
grep 'log.' *.go | grep '\\n' && exit 1 || echo PASS # no \n needed in log.Printf()

test/test-headerfmt.sh (new executable file, 30 lines)

@@ -0,0 +1,30 @@
#!/bin/bash
# check that headers are properly formatted
echo running test-headerfmt.sh
ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && cd .. && pwd )" # dir!
FILE="${ROOT}/main.go" # file headers should match main.go
COUNT=0
while IFS='' read -r line; do # find what header should look like
echo "$line" | grep -q '^//' || break
COUNT=`expr $COUNT + 1`
done < "$FILE"
cd "${ROOT}"
find_files() {
git ls-files | grep '\.go$'
}
bad_files=$(
for i in $(find_files); do
if ! diff -q <( head -n $COUNT "$i" ) <( head -n $COUNT "$FILE" ) &>/dev/null; then
echo "$i"
fi
done
)
if [[ -n "${bad_files}" ]]; then
echo 'FAIL'
echo 'The following file headers are not properly formatted:'
echo "${bad_files}"
exit 1
fi
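
The header-length detection above (count the consecutive leading `//` comment lines of main.go, then compare that many lines of every other file) can be exercised on its own. A sketch against a throwaway file standing in for main.go:

```shell
F=$(mktemp)	# throwaway stand-in for main.go
printf '// Mgmt\n// Copyright line\npackage main\n' > "$F"
COUNT=0
while IFS='' read -r line; do	# count consecutive leading // comment lines
	echo "$line" | grep -q '^//' || break
	COUNT=`expr $COUNT + 1`
done < "$F"
rm -f "$F"
echo "header is $COUNT lines"	# two comment lines precede 'package main'
```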


@@ -1,11 +1,21 @@
-#!/bin/bash -ie
+#!/bin/bash -i
 # simple test harness for testing mgmt via omv
+echo running test-omv.sh
 CWD=`pwd`
 DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"	# dir!
 cd "$DIR" >/dev/null	# work from test directory
 # vtest+ tests
-vtest+ omv/helloworld.yaml
+RET=0
+for i in omv/*.yaml; do
+	echo "running: vtest+ $i"
+	vtest+ "$i"
+	if [ $? -ne 0 ]; then
+		RET=1
+		break	# remove this if we should run all tests even if one fails
+	fi
+done
 # return to original dir
 cd "$CWD" >/dev/null
+exit $RET


@@ -1,6 +1,6 @@
 #!/bin/bash
 # simple test for reproducibility, probably needs major improvements
+echo running test-reproducible.sh
 set -o errexit
 set -o pipefail


@@ -1,7 +1,7 @@
 #!/bin/bash
 # simple test harness for testing mgmt
 # NOTE: this will rm -rf /tmp/mgmt/
+echo running test-shell.sh
 set -o errexit
 set -o pipefail
@@ -34,6 +34,7 @@ for i in $DIR/test/shell/*.sh; do
cd - >/dev/null
rm -rf '/tmp/mgmt/' # clean up after test
if [ $e -ne 0 ]; then
echo -e "FAIL\t$ii" # fail
# store failures...
failures=$(
# prepend previous failures if any


@@ -1,6 +1,6 @@
 #!/bin/bash
 # check for any yaml files that aren't properly formatted
+echo running test-yamlfmt.sh
 set -o errexit
 set -o nounset
 set -o pipefail

types.go (deleted, 407 lines)

@@ -1,407 +0,0 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"log"
"time"
)
//go:generate stringer -type=typeState -output=typestate_stringer.go
type typeState int
const (
typeNil typeState = iota
typeWatching
typeEvent // an event has happened, but we haven't poked yet
typeApplying
typePoking
)
//go:generate stringer -type=typeConvergedState -output=typeconvergedstate_stringer.go
type typeConvergedState int
const (
typeConvergedNil typeConvergedState = iota
//typeConverged
typeConvergedTimeout
)
type Type interface {
Init()
GetName() string // can't be named "Name()" because of struct field
GetType() string
Watch()
StateOK() bool // TODO: can we rename this to something better?
Apply() bool
SetVertex(*Vertex)
SetConvegedCallback(ctimeout int, converged chan bool)
Compare(Type) bool
SendEvent(eventName, bool, bool) bool
IsWatching() bool
SetWatching(bool)
GetConvergedState() typeConvergedState
SetConvergedState(typeConvergedState)
GetState() typeState
SetState(typeState)
GetTimestamp() int64
UpdateTimestamp() int64
OKTimestamp() bool
Poke(bool)
BackPoke()
}
type BaseType struct {
Name string `yaml:"name"`
timestamp int64 // last updated timestamp ?
events chan Event
vertex *Vertex
state typeState
convergedState typeConvergedState
watching bool // is Watch() loop running ?
ctimeout int // converged timeout
converged chan bool
isStateOK bool // whether the state is okay based on events or not
}
type NoopType struct {
BaseType `yaml:",inline"`
Comment string `yaml:"comment"` // extra field for example purposes
}
func NewNoopType(name string) *NoopType {
// FIXME: we could get rid of this New constructor and use raw object creation with a required Init()
return &NoopType{
BaseType: BaseType{
Name: name,
events: make(chan Event), // unbuffered chan size to avoid stale events
vertex: nil,
},
Comment: "",
}
}
// initialize structures like channels if created without New constructor
func (obj *BaseType) Init() {
obj.events = make(chan Event)
}
// this method gets used by all the types, if we have one of (obj NoopType) it would get overridden in that case!
func (obj *BaseType) GetName() string {
return obj.Name
}
func (obj *BaseType) GetType() string {
return "Base"
}
func (obj *BaseType) GetVertex() *Vertex {
return obj.vertex
}
func (obj *BaseType) SetVertex(v *Vertex) {
obj.vertex = v
}
func (obj *BaseType) SetConvegedCallback(ctimeout int, converged chan bool) {
obj.ctimeout = ctimeout
obj.converged = converged
}
// is the Watch() function running?
func (obj *BaseType) IsWatching() bool {
return obj.watching
}
// store status of if the Watch() function is running
func (obj *BaseType) SetWatching(b bool) {
obj.watching = b
}
func (obj *BaseType) GetConvergedState() typeConvergedState {
return obj.convergedState
}
func (obj *BaseType) SetConvergedState(state typeConvergedState) {
obj.convergedState = state
}
func (obj *BaseType) GetState() typeState {
return obj.state
}
func (obj *BaseType) SetState(state typeState) {
if DEBUG {
log.Printf("%v[%v]: State: %v -> %v", obj.GetType(), obj.GetName(), obj.GetState(), state)
}
obj.state = state
}
// GetTimestamp returns the timestamp of a vertex
func (obj *BaseType) GetTimestamp() int64 {
return obj.timestamp
}
// UpdateTimestamp updates the timestamp on a vertex and returns the new value
func (obj *BaseType) UpdateTimestamp() int64 {
obj.timestamp = time.Now().UnixNano() // update
return obj.timestamp
}
// can this element run right now?
func (obj *BaseType) OKTimestamp() bool {
v := obj.GetVertex()
g := v.GetGraph()
// these are all the vertices pointing TO v, eg: ??? -> v
for _, n := range g.IncomingGraphEdges(v) {
// if the vertex has a greater timestamp than any pre-req (n)
// then we can't run right now...
// if they're equal (eg: on init of 0) then we also can't run
// b/c we should let our pre-req's go first...
x, y := obj.GetTimestamp(), n.Type.GetTimestamp()
if DEBUG {
log.Printf("%v[%v]: OKTimestamp: (%v) >= %v[%v](%v): !%v", obj.GetType(), obj.GetName(), x, n.GetType(), n.GetName(), y, x >= y)
}
if x >= y {
return false
}
}
return true
}
// notify nodes after me in the dependency graph that they need refreshing...
// NOTE: this assumes that this can never fail or need to be rescheduled
func (obj *BaseType) Poke(activity bool) {
v := obj.GetVertex()
g := v.GetGraph()
// these are all the vertices pointing AWAY FROM v, eg: v -> ???
for _, n := range g.OutgoingGraphEdges(v) {
// XXX: if we're in state event and haven't been cancelled by
// apply, then we can cancel a poke to a child, right? XXX
// XXX: if n.Type.GetState() != typeEvent { // is this correct?
if true { // XXX
if DEBUG {
log.Printf("%v[%v]: Poke: %v[%v]", v.GetType(), v.GetName(), n.GetType(), n.GetName())
}
n.SendEvent(eventPoke, false, activity) // XXX: can this be switched to sync?
} else {
if DEBUG {
log.Printf("%v[%v]: Poke: %v[%v]: Skipped!", v.GetType(), v.GetName(), n.GetType(), n.GetName())
}
}
}
}
// poke the pre-requisites that are stale and need to run before I can run...
func (obj *BaseType) BackPoke() {
v := obj.GetVertex()
g := v.GetGraph()
// these are all the vertices pointing TO v, eg: ??? -> v
for _, n := range g.IncomingGraphEdges(v) {
x, y, s := obj.GetTimestamp(), n.Type.GetTimestamp(), n.Type.GetState()
// if the parent timestamp needs poking AND it's not in state
// typeEvent, then poke it. If the parent is in typeEvent it
// means that an event is pending, so we'll be expecting a poke
// back soon, so we can safely discard the extra parent poke...
// TODO: implement a stateLT (less than) to tell if something
// happens earlier in the state cycle and that doesn't wrap nil
if x >= y && (s != typeEvent && s != typeApplying) {
if DEBUG {
log.Printf("%v[%v]: BackPoke: %v[%v]", v.GetType(), v.GetName(), n.GetType(), n.GetName())
}
n.SendEvent(eventBackPoke, false, false) // XXX: can this be switched to sync?
} else {
if DEBUG {
log.Printf("%v[%v]: BackPoke: %v[%v]: Skipped!", v.GetType(), v.GetName(), n.GetType(), n.GetName())
}
}
}
}
// push an event into the message queue for a particular type vertex
func (obj *BaseType) SendEvent(event eventName, sync bool, activity bool) bool {
// TODO: isn't this race-y ?
if !obj.IsWatching() { // element has already exited
return false // if we don't return, we'll block on the send
}
if !sync {
obj.events <- Event{event, nil, "", activity}
return true
}
resp := make(chan bool)
obj.events <- Event{event, resp, "", activity}
for {
value := <-resp
// wait until true value
if value {
return true
}
}
}
// process events when a select gets one, this handles the pause code too!
// the return values specify if we should exit and poke respectively
func (obj *BaseType) ReadEvent(event *Event) (exit, poke bool) {
event.ACK()
switch event.Name {
case eventStart:
return false, true
case eventPoke:
return false, true
case eventBackPoke:
return false, true // forward poking in response to a back poke!
case eventExit:
return true, false
case eventPause:
// wait for next event to continue
select {
case e := <-obj.events:
e.ACK()
if e.Name == eventExit {
return true, false
} else if e.Name == eventStart { // eventContinue
return false, false // don't poke on unpause!
} else {
// if we get a poke event here, it's a bug!
log.Fatalf("%v[%v]: Unknown event: %v, while paused!", obj.GetType(), obj.GetName(), e)
}
}
default:
log.Fatal("Unknown event: ", event)
}
return true, false // required to keep the stupid go compiler happy
}
// useful for using as: return CleanState() in the StateOK functions when there
// are multiple `true` return exits
func (obj *BaseType) CleanState() bool {
obj.isStateOK = true
return true
}
// XXX: rename this function
func Process(obj Type) {
if DEBUG {
log.Printf("%v[%v]: Process()", obj.GetType(), obj.GetName())
}
obj.SetState(typeEvent)
var ok = true
var apply = false // did we run an apply?
// is it okay to run dependency wise right now?
// if not, that's okay because when the dependency runs, it will poke
// us back and we will run if needed then!
if obj.OKTimestamp() {
if DEBUG {
log.Printf("%v[%v]: OKTimestamp(%v)", obj.GetType(), obj.GetName(), obj.GetTimestamp())
}
if !obj.StateOK() { // TODO: can we rename this to something better?
if DEBUG {
log.Printf("%v[%v]: !StateOK()", obj.GetType(), obj.GetName())
}
// throw an error if apply fails...
// if this fails, don't UpdateTimestamp()
obj.SetState(typeApplying)
if !obj.Apply() { // check for error
ok = false
} else {
apply = true
}
}
if ok {
// update this timestamp *before* we poke or the poked
// nodes might fail due to having a too old timestamp!
obj.UpdateTimestamp() // this was touched...
obj.SetState(typePoking) // can't cancel parent poke
obj.Poke(apply)
}
// poke at our pre-req's instead since they need to refresh/run...
} else {
// only poke at the pre-req's that need to run
go obj.BackPoke()
}
}
func (obj *NoopType) GetType() string {
return "Noop"
}
func (obj *NoopType) Watch() {
if obj.IsWatching() {
return
}
obj.SetWatching(true)
defer obj.SetWatching(false)
//vertex := obj.vertex // stored with SetVertex
var send = false // send event?
var exit = false
for {
obj.SetState(typeWatching) // reset
select {
case event := <-obj.events:
obj.SetConvergedState(typeConvergedNil)
// we avoid sending events on unpause
if exit, send = obj.ReadEvent(&event); exit {
return // exit
}
case _ = <-TimeAfterOrBlock(obj.ctimeout):
obj.SetConvergedState(typeConvergedTimeout)
obj.converged <- true
continue
}
// do all our event sending all together to avoid duplicate msgs
if send {
send = false
// only do this on certain types of events
//obj.isStateOK = false // something made state dirty
Process(obj) // XXX: rename this function
}
}
}
func (obj *NoopType) StateOK() bool {
return true // never needs updating
}
func (obj *NoopType) Apply() bool {
log.Printf("%v[%v]: Apply", obj.GetType(), obj.GetName())
return true
}
func (obj *NoopType) Compare(typ Type) bool {
switch typ.(type) {
// we can only compare NoopType to others of the same type
case *NoopType:
typ := typ.(*NoopType)
if obj.Name != typ.Name {
return false
}
default:
return false
}
return true
}