32 Commits
0.0.5 ... 0.0.6

Author SHA1 Message Date
James Shubin
2e2658ab6f examples: make the libmgmt example more fun
You can try it out yourself by running `go build` and then calling it.
Use a bare integer argument to create that number of noop resources.
There are clearly some performance optimizations that we could do for
extremely large graphs.
2016-11-03 04:18:26 -04:00
James Shubin
1370f2a76b gapi: Split out graph generation into a proper graph API
This is a monster patch that splits out the yaml- and puppet-based graph
generation and pushes them behind a common API. In addition, alternate
pluggable GAPIs can easily be added! The important side benefit is that
you can now write a custom GAPI for embedding mgmt!

This also includes some slight cleanups that I didn't find worth
splitting into separate patches.
2016-11-03 03:56:16 -04:00
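To illustrate, here is a minimal sketch of what a custom GAPI might look like, modeled on the gapi.GAPI interface and the examples/lib code shown in the diffs further down; the staticGAPI name and the single-noop graph are illustrative only and not part of this patch. It pairs with the embedding sketch after the "Libify mgmt" commit below.
```
// A minimal custom GAPI sketch (illustrative only, not part of this patch).
// It implements the gapi.GAPI interface added in this series: Init, Graph,
// SwitchStream and Close. Compare with examples/lib/libmgmt1.go below.
package main

import (
	"fmt"

	"github.com/purpleidea/mgmt/gapi"
	"github.com/purpleidea/mgmt/pgraph"
	"github.com/purpleidea/mgmt/resources"
)

// staticGAPI always produces the same one-resource graph.
type staticGAPI struct {
	data        gapi.Data
	initialized bool
}

// Init stores the data that the mgmt main loop passes in.
func (obj *staticGAPI) Init(data gapi.Data) error {
	obj.data = data
	obj.initialized = true
	return nil
}

// Graph returns a graph containing a single noop resource.
func (obj *staticGAPI) Graph() (*pgraph.Graph, error) {
	if !obj.initialized {
		return nil, fmt.Errorf("staticGAPI is not initialized")
	}
	n, err := resources.NewNoopRes("noop0")
	if err != nil {
		return nil, fmt.Errorf("can't create resource: %v", err)
	}
	g := pgraph.NewGraph("static")
	g.AddVertex(pgraph.NewVertex(n))
	return g, nil
}

// SwitchStream returns nil because this graph never changes, so there is
// nothing to watch; the examples below use a ticker here instead.
func (obj *staticGAPI) SwitchStream() chan error { return nil }

// Close shuts the GAPI down.
func (obj *staticGAPI) Close() error {
	obj.initialized = false
	return nil
}
```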
Joe Julian
75dedf391a virt: don't set emulator path
Remove the implicit emulator path from the domain definition. Libvirt is
already configured to use the correct emulator for kvm or qemu and
specifying it creates distro dependence.

Fixes #85
2016-10-27 15:20:06 -04:00
Juergen Hoetzel
7b5c640d05 readme: Fix go get command
"go get" requires a package name
2016-10-27 18:30:00 +02:00
James Shubin
aa9a21b4d0 cli: Pass through program and version strings
We forgot to pass these through. If they're undefined, it errors.
2016-10-24 17:41:03 -04:00
James Shubin
71de8014d5 main: Libify mgmt with a golang API
This is an initial implementation of a possible golang API. In this
particular version, the *gconfig.GraphConfig data structures are
emitted, instead of possibly building a pgraph. As long as we can
represent any local graph as the data structure, this is fine!

Is there a way to merge the gconfig Vertex and the pgraph Vertex?
2016-10-24 17:33:31 -04:00
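A condensed sketch of how embedding looks once the GAPI split above lands, based on the Run() functions in examples/lib/ further down; together with the staticGAPI sketch above, this forms one runnable main package. Signal handling and the compile-time Program/Version injection are omitted.
```
// Embedding sketch: drives mgmt from your own main(), using any gapi.GAPI
// implementation (here the illustrative staticGAPI from the sketch above).
package main

import (
	"log"

	mgmt "github.com/purpleidea/mgmt/mgmtmain"
)

func main() {
	obj := &mgmt.Main{}
	obj.Program = "libmgmt" // the examples note these should be set on compilation
	obj.Version = "0.0.1"
	obj.TmpPrefix = true      // use a temporary working prefix
	obj.IdealClusterSize = -1 // accept the default
	obj.ConvergedTimeout = -1 // don't exit on convergence
	obj.Noop = true           // don't make real changes
	obj.GAPI = &staticGAPI{}  // the graph generator to embed

	if err := obj.Init(); err != nil {
		log.Fatal(err)
	}
	// Run blocks until obj.Exit() is called, e.g. from a signal handler as
	// shown in the full examples below.
	if err := obj.Run(); err != nil {
		log.Fatal(err)
	}
}
```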
Marc Antoine Dumont
80476d19f9 Add the link to a new dependency in README.md
Add the link to the dependency github.com/rgbkrk/libvirt-go
2016-10-24 16:16:05 -04:00
James Shubin
15103d18ef readme: Update README file to make it clearer for new hackers 2016-10-23 20:47:12 -04:00
James Shubin
0dbd2004ad main: Split apart logic in main
This splits most of the main logic from the cli logic so that they can
be used independently, in particular if we ever libify mgmt.
2016-10-23 20:23:04 -04:00
James Shubin
8c92566889 resources: virt: Update CPUs variable to new uint16 signature
Now things are consistent after my new patch upstream!
2016-10-23 02:41:32 -04:00
James Shubin
fb9449038b resources: Update constructor signature to return error as well
Update the helper functions so they're easier to properly use!
2016-10-23 01:36:34 -04:00
James Shubin
e06c4a873d resources: Set the defaults for metaparameters
This now lets us have defaults for metaparameters that aren't the zero
value for that type.
2016-10-23 01:14:02 -04:00
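A generic illustration of why this matters, with hypothetical field names rather than mgmt's actual metaparameter struct: a default that differs from Go's zero value has to be filled in explicitly, since a freshly constructed struct can't distinguish "unset" from false or 0.
```
// Hypothetical example only: a metaparameter whose default is true cannot
// rely on Go zero values, so a constructor has to set it explicitly.
package example

// metaParams is an illustrative stand-in for a resource's metaparameters.
type metaParams struct {
	AutoEdge  bool // desired default: true (the zero value would be wrong)
	AutoGroup bool // desired default: true
	Noop      bool // desired default: false (the zero value is already right)
}

// defaultMetaParams returns the non-zero defaults for new resources.
func defaultMetaParams() metaParams {
	return metaParams{
		AutoEdge:  true,
		AutoGroup: true,
		// Noop keeps its zero value, false.
	}
}
```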
James Shubin
c4c28c6c82 spec: Improve the rpm package
This still needs a lot of work by a packaging specialist.
2016-10-19 20:10:11 -04:00
James Shubin
42ff9b803a resources: Use Events() method instead of raw channel
This makes things easier if we ever split resources out into separate
packages.
2016-10-19 20:08:53 -04:00
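A generic sketch of the accessor pattern this refers to, with hypothetical names rather than the real resources package API: exposing the channel through a method keeps the field unexported, so code in other packages depends only on the method once resources are split out.
```
// Hypothetical illustration of the accessor pattern (not the real API).
package example

// event is a placeholder for whatever event type the resources use.
type event struct{}

// baseRes keeps its event channel unexported...
type baseRes struct {
	events chan event
}

// ...and exposes it via a method, which is all other packages need to see.
func (obj *baseRes) Events() chan event {
	return obj.events
}
```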
James Shubin
3831e9739c resources: virt: Update to new function signature
This changed in git master, and is now more idiomatic.
2016-10-19 13:59:20 -04:00
James Shubin
f196e5cca2 test: Fix travis so it pulls in our deps 2016-10-19 13:51:38 -04:00
Joe Julian
d3af9105ee Use the download-only flag when fetching dependencies 2016-10-19 09:54:15 -07:00
James Shubin
6d685ae4d6 misc: Add libvirt header file dependency 2016-10-19 04:23:15 -04:00
James Shubin
8381d8246a resources: virt: Add a virt resource based on libvirt
This adds an initial implementation of a virt resource based on libvirt.
It is not complete and requires more testing. The initial skeleton was
written by nseps but was not merged. It was later cleaned up and merged
in its current form by purpleidea. Many thanks to nseps for getting this
going, and hopefully we'll get you contributing more in the future!
2016-10-19 04:11:17 -04:00
James Shubin
b26322fc20 all: Rename UUID to UID.
Felix pointed out that these IDs are unique, but not universally unique
across the cluster, which might be confusing to new programmers. As a
result, rename them all.
2016-10-18 23:03:55 -04:00
James Shubin
1c1e8127d8 resources: Check that resource kind is set.
This could be a Fatal instead, but might as well fail gracefully.
2016-10-18 14:07:27 -04:00
James Shubin
1b3b4406ff resources: Interface parameters can be named to help documentation 2016-10-18 14:04:47 -04:00
James Shubin
cf0b77518a resources: List resources in alphabetical order 2016-10-18 14:04:21 -04:00
James Shubin
afdbf44e23 make: Sed needs g option to replace multiple PROGRAM names 2016-10-13 10:23:18 -04:00
Alexandre-Xavier Labonté-Lamoureux
ec87781956 test: Tokens should always have a colon 2016-10-11 13:46:59 -04:00
James Shubin
a6ae958be7 etcd: Fix type issue
I was lazy and pushed the previous fix too quickly. Sorry, fixed now!
2016-10-07 15:59:53 -04:00
James Shubin
312103ef1b test: update lint checker to support packages 2016-10-07 15:51:58 -04:00
James Shubin
c2911bb2b7 etcd: Verify struct is not nil before accessing retries value
This didn't happen often because it requires a nominateCallback race, but
it is a bug which happened occasionally.
2016-10-07 15:36:09 -04:00
James Shubin
8ca5e38121 readme: Update repository with information about remote execution 2016-10-07 15:35:29 -04:00
James Shubin
4b8ad3a8a7 godoc: Document packagekit package 2016-10-03 15:28:25 -04:00
James Shubin
f219c2649d godoc: Document resources package 2016-10-03 15:26:41 -04:00
James Shubin
cfde54261b golint: Outdent else statement 2016-10-03 15:21:08 -04:00
50 changed files with 2992 additions and 1160 deletions

View File

@@ -3,7 +3,8 @@ go:
- 1.6
- 1.7
- tip
sudo: false
sudo: true
dist: trusty
before_install: 'git fetch --unshallow'
install: 'make deps'
script: 'make test'

View File

@@ -70,7 +70,7 @@ Older videos and other material [is available](https://github.com/purpleidea/mgm
##Setup
During this prototype phase, the tool can be run out of the source directory.
You'll probably want to use ```./run.sh run --file examples/graph1.yaml``` to
You'll probably want to use ```./run.sh run --yaml examples/graph1.yaml``` to
get started. Beware that this _can_ cause data loss. Understand what you're
doing first, or perform these actions in a virtual environment such as the one
provided by [Oh-My-Vagrant](https://github.com/purpleidea/oh-my-vagrant).
@@ -170,7 +170,8 @@ which need to exchange information that is only available at run time.
####Blog post
An introductory blog post about this topic will follow soon.
You can read the introductory blog post about this topic here:
[https://ttboj.wordpress.com/2016/10/07/remote-execution-in-mgmt/](https://ttboj.wordpress.com/2016/10/07/remote-execution-in-mgmt/)
###Puppet support
@@ -222,6 +223,7 @@ parameter with the [Noop](#Noop) resource.
* [Pkg](#Pkg): Manage system packages with PackageKit.
* [Svc](#Svc): Manage system systemd services.
* [Timer](#Timer): Manage timer resources.
* [Virt](#Virt): Manage virtual machines with libvirt.
###Exec
The service resource is still very WIP. Please help us by improving it!
This resource needs better documentation. Please help us by improving it!
###Virt
The virt resource can manage virtual machines via libvirt.
##Usage and frequently asked questions
(Send your questions as a patch to this FAQ! I'll review it, merge it, and
respond by commit with the answer.)
@@ -333,7 +339,7 @@ starting up, and as a result, a default endpoint never gets added. The solution
is to either reconcile the mistake, or, if there is no important data saved, you
can remove the etcd dataDir. This is typically `/var/lib/mgmt/etcd/member/`.
###Why do resources have both a `Compare` method and an `IFF` (on the UUID) method?
###Why do resources have both a `Compare` method and an `IFF` (on the UID) method?
The `Compare()` methods are for determining if two resources are effectively the
same, which is used to make graph change deltas efficient. This is when we want
@@ -342,9 +348,9 @@ vertices. Since we want to make this process efficient, we only update the parts
that are different, and leave everything else alone. This `Compare()` method can
tell us if two resources are the same.
The `IFF()` method is part of the whole UUID system, which is for discerning if
a resource meets the requirements another expects for an automatic edge. This is
because the automatic edge system assumes a unified UUID pattern to test for
The `IFF()` method is part of the whole UID system, which is for discerning if a
resource meets the requirements another expects for an automatic edge. This is
because the automatic edge system assumes a unified UID pattern to test for
equality. In the future it might be helpful or sane to merge the two similar
comparison functions, although for now they are separate because they
actually answer different questions.
@@ -416,7 +422,7 @@ you can probably figure out most of it, as it's fairly intuitive.
The main interface to the `mgmt` tool is the command line. For the most recent
documentation, please run `mgmt --help`.
####`--file <graph.yaml>`
####`--yaml <graph.yaml>`
Point to a graph file to run.
####`--converged-timeout <seconds>`

View File

@@ -184,7 +184,7 @@ $(SRPM): $(SPEC) $(SOURCE)
$(SPEC): rpmbuild/ spec.in
@echo Running templater...
#cat spec.in > $(SPEC)
sed -e s/__PROGRAM__/$(PROGRAM)/ -e s/__VERSION__/$(VERSION)/ -e s/__RELEASE__/$(RELEASE)/ < spec.in > $(SPEC)
sed -e s/__PROGRAM__/$(PROGRAM)/g -e s/__VERSION__/$(VERSION)/g -e s/__RELEASE__/$(RELEASE)/g < spec.in > $(SPEC)
# append a changelog to the .spec file
git log --format="* %cd %aN <%aE>%n- (%h) %s%d%n" --date=local | sed -r 's/[0-9]+:[0-9]+:[0-9]+ //' >> $(SPEC)

View File

@@ -27,10 +27,21 @@ If you have a well phrased question that might benefit others, consider asking i
## Quick start:
* Make sure you have golang version 1.6 or greater installed.
* Clone the repository recursively, eg: `git clone --recursive https://github.com/purpleidea/mgmt/`.
* Get the remaining golang dependencies on your own, or run `make deps` if you're comfortable with how we install them.
* If you do not have a GOPATH yet, create one and export it:
```
mkdir $HOME/gopath
export GOPATH=$HOME/gopath
```
* You might also want to add the GOPATH to your `~/.bashrc` or `~/.profile`.
* For more information you can read the [GOPATH documentation](https://golang.org/cmd/go/#hdr-GOPATH_environment_variable).
* Next download the mgmt code base, and switch to that directory:
```
go get -u github.com/purpleidea/mgmt
cd $GOPATH/src/github.com/purpleidea/mgmt
```
* Get the remaining golang deps with `go get ./...`, or run `make deps` if you're comfortable with how we install them.
* Run `make build` to get a freshly built `mgmt` binary.
* Run `time ./mgmt run --file examples/graph0.yaml --converged-timeout=5 --tmp-prefix` to try out a very simple example!
* Run `time ./mgmt run --yaml examples/graph0.yaml --converged-timeout=5 --tmp-prefix` to try out a very simple example!
* To run continuously in the default mode of operation, omit the `--converged-timeout` option.
* Have fun hacking on our future technology!
@@ -38,7 +49,7 @@ If you have a well phrased question that might benefit others, consider asking i
Please look in the [examples/](examples/) folder for more examples!
## Documentation:
Please see: [DOCUMENTATION.md](DOCUMENTATION.md) or [PDF](https://pdfdoc-purpleidea.rhcloud.com/pdf/https://github.com/purpleidea/mgmt/blob/master/DOCUMENTATION.md).
Please see: the manually created [DOCUMENTATION.md](DOCUMENTATION.md) (also available as [PDF](https://pdfdoc-purpleidea.rhcloud.com/pdf/https://github.com/purpleidea/mgmt/blob/master/DOCUMENTATION.md)) and the automatically generated [GoDoc documentation](https://godoc.org/github.com/purpleidea/mgmt).
## Roadmap:
Please see: [TODO.md](TODO.md) for a list of upcoming work and TODO items.
@@ -53,7 +64,7 @@ Feel free to read my article on [debugging golang programs](https://ttboj.wordpr
## Dependencies:
* golang 1.6 or higher (required, available in most distros)
* golang libraries (required, available with `go get`)
```
go get github.com/coreos/etcd/client
go get gopkg.in/yaml.v2
go get gopkg.in/fsnotify.v1
@@ -61,11 +72,12 @@ Feel free to read my article on [debugging golang programs](https://ttboj.wordpr
go get github.com/coreos/go-systemd/dbus
go get github.com/coreos/go-systemd/util
go get github.com/coreos/pkg/capnslog
* stringer (required for building), available as a package on some platforms, otherwise via `go get`
go get github.com/rgbkrk/libvirt-go
```
* stringer (optional for building), available as a package on some platforms, otherwise via `go get`
```
go get golang.org/x/tools/cmd/stringer
```
* pandoc (optional, for building a pdf of the documentation)
* graphviz (optional, for building a visual representation of the graph)
@@ -90,6 +102,8 @@ We'd love to have your patches! Please send them by email, or as a pull request.
* James Shubin; video: [Recording from DebConf16](http://meetings-archive.debian.net/pub/debian-meetings/2016/debconf16/Next_Generation_Config_Mgmt.webm) ([Slides](https://annex.debconf.org//debconf-share/debconf16/slides/15-next-generation-config-mgmt.pdf))
* Felix Frank; blog: [Edging It All In (puppet and mgmt edges)](https://ffrank.github.io/features/2016/07/12/edging-it-all-in/)
* Felix Frank; blog: [Translating All The Things (puppet to mgmt translation warnings)](https://ffrank.github.io/features/2016/08/19/translating-all-the-things/)
* James Shubin; video: [Recording from systemd.conf 2016](https://www.youtube.com/watch?v=jB992Zb3nH0&html5=1)
* James Shubin; blog: [Remote execution in mgmt](https://ttboj.wordpress.com/2016/10/07/remote-execution-in-mgmt/)
##

View File

@@ -27,26 +27,26 @@ import (
)
// TODO: we could make a new function that masks out the state of certain
// UUID's, but at the moment the new Timer code has obsoleted the need...
// UID's, but at the moment the new Timer code has obsoleted the need...
// Converger is the general interface for implementing a convergence watcher
type Converger interface { // TODO: need a better name
Register() ConvergerUUID
IsConverged(ConvergerUUID) bool // is the UUID converged ?
SetConverged(ConvergerUUID, bool) error // set the converged state of the UUID
Unregister(ConvergerUUID)
Register() ConvergerUID
IsConverged(ConvergerUID) bool // is the UID converged ?
SetConverged(ConvergerUID, bool) error // set the converged state of the UID
Unregister(ConvergerUID)
Start()
Pause()
Loop(bool)
ConvergedTimer(ConvergerUUID) <-chan time.Time
ConvergedTimer(ConvergerUID) <-chan time.Time
Status() map[uint64]bool
Timeout() int // returns the timeout that this was created with
SetStateFn(func(bool) error) // sets the stateFn
}
// ConvergerUUID is the interface resources can use to notify with if converged
// ConvergerUID is the interface resources can use to notify with if converged
// you'll need to use part of the Converger interface to Register initially too
type ConvergerUUID interface {
type ConvergerUID interface {
ID() uint64 // get Id
Name() string // get a friendly name
SetName(string)
@@ -73,8 +73,8 @@ type converger struct {
status map[uint64]bool
}
// convergerUUID is an implementation of the ConvergerUUID interface
type convergerUUID struct {
// convergerUID is an implementation of the ConvergerUID interface
type convergerUID struct {
converger Converger
id uint64
name string // user defined, friendly name
@@ -95,13 +95,13 @@ func NewConverger(timeout int, stateFn func(bool) error) *converger {
}
}
// Register assigns a ConvergerUUID to the caller
func (obj *converger) Register() ConvergerUUID {
// Register assigns a ConvergerUID to the caller
func (obj *converger) Register() ConvergerUID {
obj.mutex.Lock()
defer obj.mutex.Unlock()
obj.lastid++
obj.status[obj.lastid] = false // initialize as not converged
return &convergerUUID{
return &convergerUID{
converger: obj,
id: obj.lastid,
name: fmt.Sprintf("%d", obj.lastid), // some default
@@ -110,30 +110,30 @@ func (obj *converger) Register() ConvergerUUID {
}
}
// IsConverged gets the converged status of a uuid
func (obj *converger) IsConverged(uuid ConvergerUUID) bool {
if !uuid.IsValid() {
panic(fmt.Sprintf("Id of ConvergerUUID(%s) is nil!", uuid.Name()))
// IsConverged gets the converged status of a uid
func (obj *converger) IsConverged(uid ConvergerUID) bool {
if !uid.IsValid() {
panic(fmt.Sprintf("Id of ConvergerUID(%s) is nil!", uid.Name()))
}
obj.mutex.RLock()
isConverged, found := obj.status[uuid.ID()] // lookup
isConverged, found := obj.status[uid.ID()] // lookup
obj.mutex.RUnlock()
if !found {
panic("Id of ConvergerUUID is unregistered!")
panic("Id of ConvergerUID is unregistered!")
}
return isConverged
}
// SetConverged updates the converger with the converged state of the UUID
func (obj *converger) SetConverged(uuid ConvergerUUID, isConverged bool) error {
if !uuid.IsValid() {
return fmt.Errorf("Id of ConvergerUUID(%s) is nil!", uuid.Name())
// SetConverged updates the converger with the converged state of the UID
func (obj *converger) SetConverged(uid ConvergerUID, isConverged bool) error {
if !uid.IsValid() {
return fmt.Errorf("Id of ConvergerUID(%s) is nil!", uid.Name())
}
obj.mutex.Lock()
if _, found := obj.status[uuid.ID()]; !found {
panic("Id of ConvergerUUID is unregistered!")
if _, found := obj.status[uid.ID()]; !found {
panic("Id of ConvergerUID is unregistered!")
}
obj.status[uuid.ID()] = isConverged // set
obj.status[uid.ID()] = isConverged // set
obj.mutex.Unlock() // unlock *before* poke or deadlock!
if isConverged != obj.converged { // only poke if it would be helpful
// run in a go routine so that we never block... just queue up!
@@ -143,7 +143,7 @@ func (obj *converger) SetConverged(uuid ConvergerUUID, isConverged bool) error {
return nil
}
// isConverged returns true if *every* registered uuid has converged
// isConverged returns true if *every* registered uid has converged
func (obj *converger) isConverged() bool {
obj.mutex.RLock() // take a read lock
defer obj.mutex.RUnlock()
@@ -155,16 +155,16 @@ func (obj *converger) isConverged() bool {
return true
}
// Unregister dissociates the ConvergedUUID from the converged checking
func (obj *converger) Unregister(uuid ConvergerUUID) {
if !uuid.IsValid() {
panic(fmt.Sprintf("Id of ConvergerUUID(%s) is nil!", uuid.Name()))
// Unregister dissociates the ConvergedUID from the converged checking
func (obj *converger) Unregister(uid ConvergerUID) {
if !uid.IsValid() {
panic(fmt.Sprintf("Id of ConvergerUID(%s) is nil!", uid.Name()))
}
obj.mutex.Lock()
uuid.StopTimer() // ignore any errors
delete(obj.status, uuid.ID())
uid.StopTimer() // ignore any errors
delete(obj.status, uid.ID())
obj.mutex.Unlock()
uuid.InvalidateID()
uid.InvalidateID()
}
// Start causes a Converger object to start or resume running
@@ -245,18 +245,18 @@ func (obj *converger) Loop(startPaused bool) {
// ConvergedTimer adds a timeout to a select call and blocks until then
// TODO: this means we could eventually have per resource converged timeouts
func (obj *converger) ConvergedTimer(uuid ConvergerUUID) <-chan time.Time {
func (obj *converger) ConvergedTimer(uid ConvergerUID) <-chan time.Time {
// be clever: if i'm already converged, this timeout should block which
// avoids unnecessary new signals being sent! this avoids fast loops if
// we have a low timeout, or in particular a timeout == 0
if uuid.IsConverged() {
if uid.IsConverged() {
// blocks the case statement in select forever!
return util.TimeAfterOrBlock(-1)
}
return util.TimeAfterOrBlock(obj.timeout)
}
// Status returns a map of the converged status of each UUID.
// Status returns a map of the converged status of each UID.
func (obj *converger) Status() map[uint64]bool {
status := make(map[uint64]bool)
obj.mutex.RLock() // take a read lock
@@ -279,53 +279,53 @@ func (obj *converger) SetStateFn(stateFn func(bool) error) {
obj.stateFn = stateFn
}
// Id returns the unique id of this UUID object
func (obj *convergerUUID) ID() uint64 {
// Id returns the unique id of this UID object
func (obj *convergerUID) ID() uint64 {
return obj.id
}
// Name returns a user defined name for the specific convergerUUID.
func (obj *convergerUUID) Name() string {
// Name returns a user defined name for the specific convergerUID.
func (obj *convergerUID) Name() string {
return obj.name
}
// SetName sets a user defined name for the specific convergerUUID.
func (obj *convergerUUID) SetName(name string) {
// SetName sets a user defined name for the specific convergerUID.
func (obj *convergerUID) SetName(name string) {
obj.name = name
}
// IsValid tells us if the id is valid or has already been destroyed
func (obj *convergerUUID) IsValid() bool {
func (obj *convergerUID) IsValid() bool {
return obj.id != 0 // an id of 0 is invalid
}
// InvalidateID marks the id as no longer valid
func (obj *convergerUUID) InvalidateID() {
func (obj *convergerUID) InvalidateID() {
obj.id = 0 // an id of 0 is invalid
}
// IsConverged is a helper function to the regular IsConverged method
func (obj *convergerUUID) IsConverged() bool {
func (obj *convergerUID) IsConverged() bool {
return obj.converger.IsConverged(obj)
}
// SetConverged is a helper function to the regular SetConverged notification
func (obj *convergerUUID) SetConverged(isConverged bool) error {
func (obj *convergerUID) SetConverged(isConverged bool) error {
return obj.converger.SetConverged(obj, isConverged)
}
// Unregister is a helper function to unregister myself
func (obj *convergerUUID) Unregister() {
func (obj *convergerUID) Unregister() {
obj.converger.Unregister(obj)
}
// ConvergedTimer is a helper around the regular ConvergedTimer method
func (obj *convergerUUID) ConvergedTimer() <-chan time.Time {
func (obj *convergerUID) ConvergedTimer() <-chan time.Time {
return obj.converger.ConvergedTimer(obj)
}
// StartTimer runs an invisible timer that automatically converges on timeout.
func (obj *convergerUUID) StartTimer() (func() error, error) {
func (obj *convergerUID) StartTimer() (func() error, error) {
obj.mutex.Lock()
if !obj.running {
obj.timer = make(chan struct{})
@@ -359,7 +359,7 @@ func (obj *convergerUUID) StartTimer() (func() error, error) {
}
// ResetTimer resets the counter to zero if using a StartTimer internally.
func (obj *convergerUUID) ResetTimer() error {
func (obj *convergerUID) ResetTimer() error {
obj.mutex.Lock()
defer obj.mutex.Unlock()
if obj.running {
@@ -370,7 +370,7 @@ func (obj *convergerUUID) ResetTimer() error {
}
// StopTimer stops the running timer permanently until a StartTimer is run.
func (obj *convergerUUID) StopTimer() error {
func (obj *convergerUID) StopTimer() error {
obj.mutex.Lock()
defer obj.mutex.Unlock()
if !obj.running {

View File

@@ -36,11 +36,12 @@
// * The elected leader should decide who to nominate/unnominate to keep the right number of servers.
//
// Smoke testing:
// ./mgmt run --file examples/etcd1a.yaml --hostname h1
// ./mgmt run --file examples/etcd1b.yaml --hostname h2 --tmp-prefix --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2381 --server-urls http://127.0.0.1:2382
// ./mgmt run --file examples/etcd1c.yaml --hostname h3 --tmp-prefix --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2383 --server-urls http://127.0.0.1:2384
// mkdir /tmp/mgmt{A..E}
// ./mgmt run --yaml examples/etcd1a.yaml --hostname h1 --tmp-prefix
// ./mgmt run --yaml examples/etcd1b.yaml --hostname h2 --tmp-prefix --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2381 --server-urls http://127.0.0.1:2382
// ./mgmt run --yaml examples/etcd1c.yaml --hostname h3 --tmp-prefix --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2383 --server-urls http://127.0.0.1:2384
// ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 put /_mgmt/idealClusterSize 3
// ./mgmt run --file examples/etcd1d.yaml --hostname h4 --tmp-prefix --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2385 --server-urls http://127.0.0.1:2386
// ./mgmt run --yaml examples/etcd1d.yaml --hostname h4 --tmp-prefix --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2385 --server-urls http://127.0.0.1:2386
// ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 member list
// ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2381 put /_mgmt/idealClusterSize 5
// ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2381 member list
@@ -691,22 +692,22 @@ func (obj *EmbdEtcd) CtxError(ctx context.Context, err error) (context.Context,
// CbLoop is the loop where callback execution is serialized
func (obj *EmbdEtcd) CbLoop() {
cuuid := obj.converger.Register()
cuuid.SetName("Etcd: CbLoop")
defer cuuid.Unregister()
cuid := obj.converger.Register()
cuid.SetName("Etcd: CbLoop")
defer cuid.Unregister()
if e := obj.Connect(false); e != nil {
return // fatal
}
// we use this timer because when we ignore un-converge events and loop,
// we reset the ConvergedTimer case statement, ruining the timeout math!
cuuid.StartTimer()
cuid.StartTimer()
for {
ctx := context.Background() // TODO: inherit as input argument?
select {
// etcd watcher event
case re := <-obj.wevents:
if !re.skipConv { // if we want to count it...
cuuid.ResetTimer() // activity!
cuid.ResetTimer() // activity!
}
if global.TRACE {
log.Printf("Trace: Etcd: CbLoop: Event: StartLoop")
@@ -739,7 +740,7 @@ func (obj *EmbdEtcd) CbLoop() {
// exit loop commit
case <-obj.exitTimeout:
log.Println("Etcd: Exiting callback loop!")
cuuid.StopTimer() // clean up nicely
cuid.StopTimer() // clean up nicely
return
}
}
@@ -747,19 +748,19 @@ func (obj *EmbdEtcd) CbLoop() {
// Loop is the main loop where everything is serialized
func (obj *EmbdEtcd) Loop() {
cuuid := obj.converger.Register()
cuuid.SetName("Etcd: Loop")
defer cuuid.Unregister()
cuid := obj.converger.Register()
cuid.SetName("Etcd: Loop")
defer cuid.Unregister()
if e := obj.Connect(false); e != nil {
return // fatal
}
cuuid.StartTimer()
cuid.StartTimer()
for {
ctx := context.Background() // TODO: inherit as input argument?
// priority channel...
select {
case aw := <-obj.awq:
cuuid.ResetTimer() // activity!
cuid.ResetTimer() // activity!
if global.TRACE {
log.Printf("Trace: Etcd: Loop: PriorityAW: StartLoop")
}
@@ -775,7 +776,7 @@ func (obj *EmbdEtcd) Loop() {
select {
// add watcher
case aw := <-obj.awq:
cuuid.ResetTimer() // activity!
cuid.ResetTimer() // activity!
if global.TRACE {
log.Printf("Trace: Etcd: Loop: AW: StartLoop")
}
@@ -786,7 +787,7 @@ func (obj *EmbdEtcd) Loop() {
// set kv pair
case kv := <-obj.setq:
cuuid.ResetTimer() // activity!
cuid.ResetTimer() // activity!
if global.TRACE {
log.Printf("Trace: Etcd: Loop: Set: StartLoop")
}
@@ -811,7 +812,7 @@ func (obj *EmbdEtcd) Loop() {
// get value
case gq := <-obj.getq:
if !gq.skipConv {
cuuid.ResetTimer() // activity!
cuid.ResetTimer() // activity!
}
if global.TRACE {
log.Printf("Trace: Etcd: Loop: Get: StartLoop")
@@ -837,7 +838,7 @@ func (obj *EmbdEtcd) Loop() {
// delete value
case dl := <-obj.delq:
cuuid.ResetTimer() // activity!
cuid.ResetTimer() // activity!
if global.TRACE {
log.Printf("Trace: Etcd: Loop: Delete: StartLoop")
}
@@ -862,7 +863,7 @@ func (obj *EmbdEtcd) Loop() {
// run txn
case tn := <-obj.txnq:
cuuid.ResetTimer() // activity!
cuid.ResetTimer() // activity!
if global.TRACE {
log.Printf("Trace: Etcd: Loop: Txn: StartLoop")
}
@@ -897,7 +898,7 @@ func (obj *EmbdEtcd) Loop() {
// exit loop commit
case <-obj.exitTimeout:
log.Println("Etcd: Exiting loop!")
cuuid.StopTimer() // clean up nicely
cuid.StopTimer() // clean up nicely
return
}
}
@@ -1085,7 +1086,7 @@ func (obj *EmbdEtcd) rawAddWatcher(ctx context.Context, aw *AW) (func(), error)
}
if err == nil { // watch from latest good revision
rev = response.Header.Revision // TODO +1 ?
rev = response.Header.Revision // TODO: +1 ?
useRev = true
if !locked {
retry = false
@@ -1399,7 +1400,7 @@ func (obj *EmbdEtcd) nominateCallback(re *RE) error {
if global.DEBUG {
log.Printf("Etcd: nominateCallback(): newCluster: %v; exists: %v; obj.server == nil: %t", newCluster, exists, obj.server == nil)
}
// XXX check if i have actually volunteered first of all...
// XXX: check if i have actually volunteered first of all...
if obj.server == nil && (newCluster || exists) {
log.Printf("Etcd: StartServer(newCluster: %t): %+v", newCluster, obj.nominated)
@@ -1408,8 +1409,12 @@ func (obj *EmbdEtcd) nominateCallback(re *RE) error {
obj.nominated, // other peer members and urls or empty map
)
if err != nil {
var retries uint
if re != nil {
retries = re.retries
}
// retry maxStartServerRetries times, then permanently fail
return &CtxRetriesErr{maxStartServerRetries - re.retries, fmt.Sprintf("Etcd: StartServer: Error: %+v", err)}
return &CtxRetriesErr{maxStartServerRetries - retries, fmt.Sprintf("Etcd: StartServer: Error: %+v", err)}
}
if len(obj.endpoints) == 0 {
@@ -1426,7 +1431,7 @@ func (obj *EmbdEtcd) nominateCallback(re *RE) error {
// XXX: just put this wherever for now so we don't block
// nominate self so "member" list is correct for peers to see
EtcdNominate(obj, obj.hostname, obj.serverURLs)
// XXX if this fails, where will we retry this part ?
// XXX: if this fails, where will we retry this part ?
}
// advertise client urls
@@ -1434,7 +1439,7 @@ func (obj *EmbdEtcd) nominateCallback(re *RE) error {
// XXX: don't advertise local addresses! 127.0.0.1:2381 doesn't really help remote hosts
// XXX: but sometimes this is what we want... hmmm how do we decide? filter on callback?
EtcdAdvertiseEndpoints(obj, curls)
// XXX if this fails, where will we retry this part ?
// XXX: if this fails, where will we retry this part ?
// force this to remove sentinel before we reconnect...
obj.endpointCallback(nil)
@@ -1650,7 +1655,7 @@ func (obj *EmbdEtcd) StartServer(newCluster bool, peerURLsMap etcdtypes.URLsMap)
} else {
cfg.ClusterState = embed.ClusterStateFlagExisting
}
//cfg.ForceNewCluster = newCluster // TODO ?
//cfg.ForceNewCluster = newCluster // TODO: ?
log.Printf("Etcd: StartServer: Starting server...")
obj.server, err = embed.StartEtcd(cfg)
@@ -1954,7 +1959,7 @@ func EtcdGetClusterSize(obj *EmbdEtcd) (uint16, error) {
// EtcdMemberAdd adds a member to the cluster.
func EtcdMemberAdd(obj *EmbdEtcd, peerURLs etcdtypes.URLs) (*etcd.MemberAddResponse, error) {
//obj.Connect(false) // TODO ?
//obj.Connect(false) // TODO: ?
ctx := context.Background()
var response *etcd.MemberAddResponse
var err error
@@ -1979,7 +1984,7 @@ func EtcdMemberAdd(obj *EmbdEtcd, peerURLs etcdtypes.URLs) (*etcd.MemberAddRespo
// if there was an error. This is because it might have run without error, but
// the member wasn't found, for example.
func EtcdMemberRemove(obj *EmbdEtcd, mID uint64) (bool, error) {
//obj.Connect(false) // TODO ?
//obj.Connect(false) // TODO: ?
ctx := context.Background()
for {
if obj.exiting { // the exit signal has been sent!
@@ -2005,7 +2010,7 @@ func EtcdMemberRemove(obj *EmbdEtcd, mID uint64) (bool, error) {
// The member ID's are the keys, because an empty names means unstarted!
// TODO: consider queueing this through the main loop with CtxError(ctx, err)
func EtcdMembers(obj *EmbdEtcd) (map[uint64]string, error) {
//obj.Connect(false) // TODO ?
//obj.Connect(false) // TODO: ?
ctx := context.Background()
var response *etcd.MemberListResponse
var err error
@@ -2036,7 +2041,7 @@ func EtcdMembers(obj *EmbdEtcd) (map[uint64]string, error) {
// EtcdLeader returns the current leader of the etcd server cluster
func EtcdLeader(obj *EmbdEtcd) (string, error) {
//obj.Connect(false) // TODO ?
//obj.Connect(false) // TODO: ?
var err error
membersMap := make(map[uint64]string)
if membersMap, err = EtcdMembers(obj); err != nil {
@@ -2109,7 +2114,7 @@ func EtcdWatch(obj *EmbdEtcd) chan bool {
// EtcdSetResources exports all of the resources which we pass in to etcd
func EtcdSetResources(obj *EmbdEtcd, hostname string, resourceList []resources.Res) error {
// key structure is /$NS/exported/$hostname/resources/$uuid = $data
// key structure is /$NS/exported/$hostname/resources/$uid = $data
var kindFilter []string // empty to get from everyone
hostnameFilter := []string{hostname}
@@ -2130,8 +2135,8 @@ func EtcdSetResources(obj *EmbdEtcd, hostname string, resourceList []resources.R
if res.Kind() == "" {
log.Fatalf("Etcd: SetResources: Error: Empty kind: %v", res.GetName())
}
uuid := fmt.Sprintf("%s/%s", res.Kind(), res.GetName())
path := fmt.Sprintf("/%s/exported/%s/resources/%s", NS, hostname, uuid)
uid := fmt.Sprintf("%s/%s", res.Kind(), res.GetName())
path := fmt.Sprintf("/%s/exported/%s/resources/%s", NS, hostname, uid)
if data, err := resources.ResToB64(res); err == nil {
ifs = append(ifs, etcd.Compare(etcd.Value(path), "=", data)) // desired state
ops = append(ops, etcd.OpPut(path, data))
@@ -2155,8 +2160,8 @@ func EtcdSetResources(obj *EmbdEtcd, hostname string, resourceList []resources.R
if res.Kind() == "" {
log.Fatalf("Etcd: SetResources: Error: Empty kind: %v", res.GetName())
}
uuid := fmt.Sprintf("%s/%s", res.Kind(), res.GetName())
path := fmt.Sprintf("/%s/exported/%s/resources/%s", NS, hostname, uuid)
uid := fmt.Sprintf("%s/%s", res.Kind(), res.GetName())
path := fmt.Sprintf("/%s/exported/%s/resources/%s", NS, hostname, uid)
if match(res, resourceList) { // if we match, no need to delete!
continue
@@ -2182,9 +2187,9 @@ func EtcdSetResources(obj *EmbdEtcd, hostname string, resourceList []resources.R
// If the kindfilter or hostnameFilter is empty, then it assumes no filtering...
// TODO: Expand this with a more powerful filter based on what we eventually
// support in our collect DSL. Ideally a server side filter like WithFilter()
// We could do this if the pattern was /$NS/exported/$kind/$hostname/$uuid = $data
// We could do this if the pattern was /$NS/exported/$kind/$hostname/$uid = $data
func EtcdGetResources(obj *EmbdEtcd, hostnameFilter, kindFilter []string) ([]resources.Res, error) {
// key structure is /$NS/exported/$hostname/resources/$uuid = $data
// key structure is /$NS/exported/$hostname/resources/$uid = $data
path := fmt.Sprintf("/%s/exported/", NS)
resourceList := []resources.Res{}
keyMap, err := obj.Get(path, etcd.WithPrefix(), etcd.WithSort(etcd.SortByKey, etcd.SortAscend))

188
examples/lib/libmgmt1.go Normal file
View File

@@ -0,0 +1,188 @@
// libmgmt example
package main
import (
"fmt"
"log"
"os"
"os/signal"
"sync"
"syscall"
"time"
"github.com/purpleidea/mgmt/gapi"
mgmt "github.com/purpleidea/mgmt/mgmtmain"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
"github.com/purpleidea/mgmt/yamlgraph"
)
// MyGAPI implements the main GAPI interface.
type MyGAPI struct {
Name string // graph name
Interval uint // refresh interval, 0 to never refresh
data gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}
// NewMyGAPI creates a new MyGAPI struct and calls Init().
func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
obj := &MyGAPI{
Name: name,
Interval: interval,
}
return obj, obj.Init(data)
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
return fmt.Errorf("Already initialized!")
}
if obj.Name == "" {
return fmt.Errorf("The graph name must be specified!")
}
obj.data = data // store for later
obj.closeChan = make(chan struct{})
obj.initialized = true
return nil
}
// Graph returns a current Graph.
func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
n1, err := resources.NewNoopRes("noop1")
if err != nil {
return nil, fmt.Errorf("Can't create resource: %v", err)
}
// we can still build a graph via the yaml method
gc := &yamlgraph.GraphConfig{
Graph: obj.Name,
Resources: yamlgraph.Resources{ // must redefine anonymous struct :(
// in alphabetical order
Exec: []*resources.ExecRes{},
File: []*resources.FileRes{},
Msg: []*resources.MsgRes{},
Noop: []*resources.NoopRes{n1},
Pkg: []*resources.PkgRes{},
Svc: []*resources.SvcRes{},
Timer: []*resources.TimerRes{},
Virt: []*resources.VirtRes{},
},
//Collector: []collectorResConfig{},
//Edges: []Edge{},
Comment: "comment!",
}
g, err := gc.NewGraphFromConfig(obj.data.Hostname, obj.data.EmbdEtcd, obj.data.Noop)
return g, err
}
// SwitchStream returns nil errors every time there could be a new graph.
func (obj *MyGAPI) SwitchStream() chan error {
if obj.data.NoWatch || obj.Interval <= 0 {
return nil
}
ch := make(chan error)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
ch <- fmt.Errorf("libmgmt: MyGAPI is not initialized")
return
}
// arbitrarily change graph every interval seconds
ticker := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer ticker.Stop()
for {
select {
case <-ticker.C:
log.Printf("libmgmt: Generating new graph...")
ch <- nil // trigger a run
case <-obj.closeChan:
return
}
}
}()
return ch
}
// Close shuts down the MyGAPI.
func (obj *MyGAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
close(obj.closeChan)
obj.wg.Wait()
obj.initialized = false // closed = true
return nil
}
// Run runs an embedded mgmt server.
func Run() error {
obj := &mgmt.Main{}
obj.Program = "libmgmt" // TODO: set on compilation
obj.Version = "0.0.1" // TODO: set on compilation
obj.TmpPrefix = true
obj.IdealClusterSize = -1
obj.ConvergedTimeout = -1
obj.Noop = true
obj.GAPI = &MyGAPI{ // graph API
Name: "libmgmt", // TODO: set on compilation
Interval: 15, // arbitrarily change graph every 15 seconds
}
if err := obj.Init(); err != nil {
return err
}
// install the exit signal handler
exit := make(chan struct{})
defer close(exit)
go func() {
signals := make(chan os.Signal, 1)
signal.Notify(signals, os.Interrupt) // catch ^C
//signal.Notify(signals, os.Kill) // catch signals
signal.Notify(signals, syscall.SIGTERM)
select {
case sig := <-signals: // any signal will do
if sig == os.Interrupt {
log.Println("Interrupted by ^C")
obj.Exit(nil)
return
}
log.Println("Interrupted by signal")
obj.Exit(fmt.Errorf("Killed by %v", sig))
return
case <-exit:
return
}
}()
if err := obj.Run(); err != nil {
return err
}
return nil
}
func main() {
log.Printf("Hello!")
if err := Run(); err != nil {
fmt.Println(err)
os.Exit(1)
return
}
log.Printf("Goodbye!")
}

188
examples/lib/libmgmt2.go Normal file
View File

@@ -0,0 +1,188 @@
// libmgmt example
package main
import (
"fmt"
"log"
"os"
"os/signal"
"strconv"
"sync"
"syscall"
"time"
"github.com/purpleidea/mgmt/gapi"
mgmt "github.com/purpleidea/mgmt/mgmtmain"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
)
// MyGAPI implements the main GAPI interface.
type MyGAPI struct {
Name string // graph name
Count uint // number of resources to create
Interval uint // refresh interval, 0 to never refresh
data gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}
// NewMyGAPI creates a new MyGAPI struct and calls Init().
func NewMyGAPI(data gapi.Data, name string, interval uint, count uint) (*MyGAPI, error) {
obj := &MyGAPI{
Name: name,
Count: count,
Interval: interval,
}
return obj, obj.Init(data)
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
return fmt.Errorf("Already initialized!")
}
if obj.Name == "" {
return fmt.Errorf("The graph name must be specified!")
}
obj.data = data // store for later
obj.closeChan = make(chan struct{})
obj.initialized = true
return nil
}
// Graph returns a current Graph.
func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
g := pgraph.NewGraph(obj.Name)
var vertex *pgraph.Vertex
for i := uint(0); i < obj.Count; i++ {
n, err := resources.NewNoopRes(fmt.Sprintf("noop%d", i))
if err != nil {
return nil, fmt.Errorf("Can't create resource: %v", err)
}
v := pgraph.NewVertex(n)
g.AddVertex(v)
if i > 0 {
g.AddEdge(vertex, v, pgraph.NewEdge(fmt.Sprintf("e%d", i)))
}
vertex = v // save
}
//g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.EmbdEtcd, obj.data.Noop)
return g, nil
}
// SwitchStream returns nil errors every time there could be a new graph.
func (obj *MyGAPI) SwitchStream() chan error {
if obj.data.NoWatch || obj.Interval <= 0 {
return nil
}
ch := make(chan error)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
ch <- fmt.Errorf("libmgmt: MyGAPI is not initialized")
return
}
// arbitrarily change graph every interval seconds
ticker := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer ticker.Stop()
for {
select {
case <-ticker.C:
log.Printf("libmgmt: Generating new graph...")
ch <- nil // trigger a run
case <-obj.closeChan:
return
}
}
}()
return ch
}
// Close shuts down the MyGAPI.
func (obj *MyGAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
close(obj.closeChan)
obj.wg.Wait()
obj.initialized = false // closed = true
return nil
}
// Run runs an embedded mgmt server.
func Run(count uint) error {
obj := &mgmt.Main{}
obj.Program = "libmgmt" // TODO: set on compilation
obj.Version = "0.0.1" // TODO: set on compilation
obj.TmpPrefix = true
obj.IdealClusterSize = -1
obj.ConvergedTimeout = -1
obj.Noop = true
obj.GAPI = &MyGAPI{ // graph API
Name: "libmgmt", // TODO: set on compilation
Count: count, // number of vertices to add
Interval: 15, // arbitrarily change graph every 15 seconds
}
if err := obj.Init(); err != nil {
return err
}
// install the exit signal handler
exit := make(chan struct{})
defer close(exit)
go func() {
signals := make(chan os.Signal, 1)
signal.Notify(signals, os.Interrupt) // catch ^C
//signal.Notify(signals, os.Kill) // catch signals
signal.Notify(signals, syscall.SIGTERM)
select {
case sig := <-signals: // any signal will do
if sig == os.Interrupt {
log.Println("Interrupted by ^C")
obj.Exit(nil)
return
}
log.Println("Interrupted by signal")
obj.Exit(fmt.Errorf("Killed by %v", sig))
return
case <-exit:
return
}
}()
if err := obj.Run(); err != nil {
return err
}
return nil
}
func main() {
log.Printf("Hello!")
var count uint = 1 // default
if len(os.Args) == 2 {
if i, err := strconv.Atoi(os.Args[1]); err == nil && i > 0 {
count = uint(i)
}
}
if err := Run(count); err != nil {
fmt.Println(err)
os.Exit(1)
return
}
log.Printf("Goodbye!")
}

20
examples/remote2a.yaml Normal file
View File

@@ -0,0 +1,20 @@
---
graph: mygraph
comment: remote noop example
resources:
file:
- name: file1a
path: "/tmp/file1a"
content: |
i am file1a
state: exists
- name: "@@file2a"
path: "/tmp/file2a"
content: |
i am file2a, exported from host a
state: exists
collect:
- kind: file
pattern: "/tmp/"
edges: []
remote: ssh://root:vagrant@192.168.121.201:22

20
examples/remote2b.yaml Normal file
View File

@@ -0,0 +1,20 @@
---
graph: mygraph
comment: remote noop example
resources:
file:
- name: file1b
path: "/tmp/file1b"
content: |
i am file1b
state: exists
- name: "@@file2b"
path: "/tmp/file2b"
content: |
i am file2b, exported from host b
state: exists
collect:
- kind: file
pattern: "/tmp/"
edges: []
remote: ssh://root:vagrant@192.168.121.202:22

11
examples/virt1.yaml Normal file
View File

@@ -0,0 +1,11 @@
---
graph: mygraph
resources:
virt:
- name: mgmt1
uri: 'qemu:///session'
cpus: 1
memory: 524288
state: shutoff
transient: true
edges: []

11
examples/virt2.yaml Normal file
View File

@@ -0,0 +1,11 @@
---
graph: mygraph
resources:
virt:
- name: mgmt2
uri: 'qemu:///session'
cpus: 1
memory: 524288
state: shutoff
transient: false
edges: []

11
examples/virt3.yaml Normal file
View File

@@ -0,0 +1,11 @@
---
graph: mygraph
resources:
virt:
- name: mgmt3
uri: 'qemu:///session'
cpus: 1
memory: 524288
state: running
transient: false
edges: []

View File

@@ -15,12 +15,27 @@
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package main
// Package gapi defines the interface that graph API generators must meet.
package gapi
import (
//"testing"
"github.com/purpleidea/mgmt/etcd"
"github.com/purpleidea/mgmt/pgraph"
)
//func TestT1(t *testing.T) {
// Data is the set of input values passed into the GAPI structs via Init.
type Data struct {
Hostname string // uuid for the host, required for GAPI
EmbdEtcd *etcd.EmbdEtcd
Noop bool
NoWatch bool
// NOTE: we can add more fields here if needed by GAPI endpoints
}
//}
// GAPI is a Graph API that represents incoming graphs and change streams.
type GAPI interface {
Init(Data) error // initializes the GAPI and passes in useful data
Graph() (*pgraph.Graph, error) // returns the most recent pgraph
SwitchStream() chan error // returns a stream of switch events
Close() error // shutdown the GAPI
}

565
main.go
View File

@@ -19,576 +19,21 @@ package main
import (
"fmt"
"io/ioutil"
"log"
"os"
"os/signal"
"sync"
"syscall"
"time"
"github.com/purpleidea/mgmt/converger"
"github.com/purpleidea/mgmt/etcd"
"github.com/purpleidea/mgmt/gconfig"
"github.com/purpleidea/mgmt/global"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/puppet"
"github.com/purpleidea/mgmt/recwatch"
"github.com/purpleidea/mgmt/remote"
"github.com/purpleidea/mgmt/util"
etcdtypes "github.com/coreos/etcd/pkg/types"
"github.com/coreos/pkg/capnslog"
"github.com/urfave/cli"
"github.com/purpleidea/mgmt/mgmtmain"
)
// set at compile time
var (
program string
version string
prefix = fmt.Sprintf("/var/lib/%s/", program)
)
// signal handler
func waitForSignal(exit chan error) error {
signals := make(chan os.Signal, 1)
signal.Notify(signals, os.Interrupt) // catch ^C
//signal.Notify(signals, os.Kill) // catch signals
signal.Notify(signals, syscall.SIGTERM)
select {
case sig := <-signals: // any signal will do
if sig == os.Interrupt {
log.Println("Interrupted by ^C")
return nil
} else {
log.Println("Interrupted by signal")
return fmt.Errorf("Killed by %v", sig)
}
case err := <-exit: // or a manual signal
log.Println("Interrupted by exit signal")
return err
}
}
// run is the main run target.
func run(c *cli.Context) error {
var start = time.Now().UnixNano()
log.Printf("This is: %v, version: %v", program, version)
log.Printf("Main: Start: %v", start)
hostname, _ := os.Hostname()
// allow passing in the hostname, instead of using --hostname
if c.IsSet("file") {
if config := gconfig.ParseConfigFromFile(c.String("file")); config != nil {
if h := config.Hostname; h != "" {
hostname = h
}
}
}
if c.IsSet("hostname") { // override by cli
if h := c.String("hostname"); h != "" {
hostname = h
}
}
noop := c.Bool("noop")
seeds, err := etcdtypes.NewURLs(
util.FlattenListWithSplit(c.StringSlice("seeds"), []string{",", ";", " "}),
)
if err != nil && len(c.StringSlice("seeds")) > 0 {
log.Printf("Main: Error: seeds didn't parse correctly!")
return cli.NewExitError("", 1)
}
clientURLs, err := etcdtypes.NewURLs(
util.FlattenListWithSplit(c.StringSlice("client-urls"), []string{",", ";", " "}),
)
if err != nil && len(c.StringSlice("client-urls")) > 0 {
log.Printf("Main: Error: clientURLs didn't parse correctly!")
return cli.NewExitError("", 1)
}
serverURLs, err := etcdtypes.NewURLs(
util.FlattenListWithSplit(c.StringSlice("server-urls"), []string{",", ";", " "}),
)
if err != nil && len(c.StringSlice("server-urls")) > 0 {
log.Printf("Main: Error: serverURLs didn't parse correctly!")
return cli.NewExitError("", 1)
}
idealClusterSize := uint16(c.Int("ideal-cluster-size"))
if idealClusterSize < 1 {
log.Printf("Main: Error: idealClusterSize should be at least one!")
return cli.NewExitError("", 1)
}
if c.IsSet("file") && c.IsSet("puppet") {
log.Println("Main: Error: the --file and --puppet parameters cannot be used together!")
return cli.NewExitError("", 1)
}
if c.Bool("no-server") && len(c.StringSlice("remote")) > 0 {
// TODO: in this case, we won't be able to tunnel stuff back to
// here, so if we're okay with every remote graph running in an
// isolated mode, then this is okay. Improve on this if there's
// someone who really wants to be able to do this.
log.Println("Main: Error: the --no-server and --remote parameters cannot be used together!")
return cli.NewExitError("", 1)
}
cConns := uint16(c.Int("cconns"))
if cConns < 0 {
log.Printf("Main: Error: --cconns should be at least zero!")
return cli.NewExitError("", 1)
}
if c.IsSet("converged-timeout") && cConns > 0 && len(c.StringSlice("remote")) > c.Int("cconns") {
log.Printf("Main: Error: combining --converged-timeout with more remotes than available connections will never converge!")
return cli.NewExitError("", 1)
}
depth := uint16(c.Int("depth"))
if depth < 0 { // user should not be using this argument manually
log.Printf("Main: Error: negative values for --depth are not permitted!")
return cli.NewExitError("", 1)
}
if c.IsSet("prefix") && c.Bool("tmp-prefix") {
log.Println("Main: Error: combining --prefix and the request for a tmp prefix is illogical!")
return cli.NewExitError("", 1)
}
if s := c.String("prefix"); c.IsSet("prefix") && s != "" {
prefix = s
}
// make sure the working directory prefix exists
if c.Bool("tmp-prefix") || os.MkdirAll(prefix, 0770) != nil {
if c.Bool("tmp-prefix") || c.Bool("allow-tmp-prefix") {
if prefix, err = ioutil.TempDir("", program+"-"); err != nil {
log.Printf("Main: Error: Can't create temporary prefix!")
return cli.NewExitError("", 1)
}
log.Println("Main: Warning: Working prefix directory is temporary!")
} else {
log.Printf("Main: Error: Can't create prefix!")
return cli.NewExitError("", 1)
}
}
log.Printf("Main: Working prefix is: %s", prefix)
var wg sync.WaitGroup
exit := make(chan error) // exit signal
var G, fullGraph *pgraph.Graph
// exit after `max-runtime` seconds for no reason at all...
if i := c.Int("max-runtime"); i > 0 {
go func() {
time.Sleep(time.Duration(i) * time.Second)
exit <- nil
}()
}
// setup converger
converger := converger.NewConverger(
c.Int("converged-timeout"),
nil, // stateFn gets added in by EmbdEtcd
)
go converger.Loop(true) // main loop for converger, true to start paused
// embedded etcd
if len(seeds) == 0 {
log.Printf("Main: Seeds: No seeds specified!")
} else {
log.Printf("Main: Seeds(%v): %v", len(seeds), seeds)
}
EmbdEtcd := etcd.NewEmbdEtcd(
hostname,
seeds,
clientURLs,
serverURLs,
c.Bool("no-server"),
idealClusterSize,
prefix,
converger,
)
if EmbdEtcd == nil {
// TODO: verify EmbdEtcd is not nil below...
exit <- fmt.Errorf("Main: Etcd: Creation failed!")
} else if err := EmbdEtcd.Startup(); err != nil { // startup (returns when etcd main loop is running)
exit <- fmt.Errorf("Main: Etcd: Startup failed: %v", err)
}
convergerStateFn := func(b bool) error {
// exit if we are using the converged-timeout and we are the
// root node. otherwise, if we are a child node in a remote
// execution hierarchy, we should only notify our converged
// state and wait for the parent to trigger the exit.
if depth == 0 && c.Int("converged-timeout") >= 0 {
if b {
log.Printf("Converged for %d seconds, exiting!", c.Int("converged-timeout"))
exit <- nil // trigger an exit!
}
return nil
}
// send our individual state into etcd for others to see
return etcd.EtcdSetHostnameConverged(EmbdEtcd, hostname, b) // TODO: what should happen on error?
}
if EmbdEtcd != nil {
converger.SetStateFn(convergerStateFn)
}
exitchan := make(chan struct{}) // exit on close
go func() {
startchan := make(chan struct{}) // start signal
go func() { startchan <- struct{}{} }()
file := c.String("file")
var configchan chan error
var puppetchan <-chan time.Time
if !c.Bool("no-watch") && c.IsSet("file") {
configchan = recwatch.ConfigWatch(file)
} else if c.IsSet("puppet") {
interval := puppet.PuppetInterval(c.String("puppet-conf"))
puppetchan = time.Tick(time.Duration(interval) * time.Second)
}
log.Println("Etcd: Starting...")
etcdchan := etcd.EtcdWatch(EmbdEtcd)
first := true // first loop or not
for {
log.Println("Main: Waiting...")
select {
case <-startchan: // kick the loop once at start
// pass
case b := <-etcdchan:
if !b { // ignore the message
continue
}
// everything else passes through to cause a compile!
case <-puppetchan:
// nothing, just go on
case e := <-configchan:
if c.Bool("no-watch") {
continue // not ready to read config
}
if e != nil {
exit <- e // trigger exit
continue
//return // TODO: return or wait for exitchan?
}
// XXX: case compile_event: ...
// ...
case <-exitchan:
return
}
var config *gconfig.GraphConfig
if c.IsSet("file") {
config = gconfig.ParseConfigFromFile(file)
} else if c.IsSet("puppet") {
config = puppet.ParseConfigFromPuppet(c.String("puppet"), c.String("puppet-conf"))
}
if config == nil {
log.Printf("Config: Parse failure")
continue
}
if config.Hostname != "" && config.Hostname != hostname {
log.Printf("Config: Hostname changed, ignoring config!")
continue
}
config.Hostname = hostname // set it in case it was ""
// run graph vertex LOCK...
if !first { // TODO: we can flatten this check out I think
converger.Pause() // FIXME: add sync wait?
G.Pause() // sync
}
// build graph from yaml file on events (eg: from etcd)
// we need the vertices to be paused to work on them
if newFullgraph, err := config.NewGraphFromConfig(fullGraph, EmbdEtcd, noop); err == nil { // keep references to all original elements
fullGraph = newFullgraph
} else {
log.Printf("Config: Error making new graph from config: %v", err)
// unpause!
if !first {
G.Start(&wg, first) // sync
converger.Start() // after G.Start()
}
continue
}
G = fullGraph.Copy() // copy to active graph
// XXX: do etcd transaction out here...
G.AutoEdges() // add autoedges; modifies the graph
G.AutoGroup() // run autogroup; modifies the graph
// TODO: do we want to do a transitive reduction?
log.Printf("Graph: %v", G) // show graph
err := G.ExecGraphviz(c.String("graphviz-filter"), c.String("graphviz"))
if err != nil {
log.Printf("Graphviz: %v", err)
} else {
log.Printf("Graphviz: Successfully generated graph!")
}
G.AssociateData(converger)
// G.Start(...) needs to be synchronous or wait,
// because if half of the nodes are started and
// some are not ready yet and the EtcdWatch
// loops, we'll cause G.Pause(...) before we
// even got going, thus causing nil pointer errors
G.Start(&wg, first) // sync
converger.Start() // after G.Start()
first = false
}
}()
configWatcher := recwatch.NewConfigWatcher()
events := configWatcher.Events()
if !c.Bool("no-watch") {
configWatcher.Add(c.StringSlice("remote")...) // add all the files...
} else {
events = nil // signal that no-watch is true
}
go func() {
select {
case err := <-configWatcher.Error():
exit <- err // trigger an exit!
case <-exitchan:
return
}
}()
// initialize the add watcher, which calls the f callback on map changes
convergerCb := func(f func(map[string]bool) error) (func(), error) {
return etcd.EtcdAddHostnameConvergedWatcher(EmbdEtcd, f)
}
// build remotes struct for remote ssh
remotes := remote.NewRemotes(
EmbdEtcd.LocalhostClientURLs().StringSlice(),
[]string{etcd.DefaultClientURL},
noop,
c.StringSlice("remote"), // list of files
events, // watch for file changes
cConns,
c.Bool("allow-interactive"),
c.String("ssh-priv-id-rsa"),
!c.Bool("no-caching"),
depth,
prefix,
converger,
convergerCb,
program,
)
// TODO: is there any benefit to running the remotes above in the loop?
// wait for etcd to be running before we remote in, which we do above!
go remotes.Run()
if !c.IsSet("file") && !c.IsSet("puppet") {
converger.Start() // better start this for empty graphs
}
log.Println("Main: Running...")
err = waitForSignal(exit) // pass in exit channel to watch
log.Println("Destroy...")
configWatcher.Close() // stop sending file changes to remotes
remotes.Exit() // tell all the remote connections to shutdown; waits!
G.Exit() // tell all the children to exit
// tell inner main loop to exit
close(exitchan)
// cleanup etcd main loop last so it can process everything first
if err := EmbdEtcd.Destroy(); err != nil { // shutdown and cleanup etcd
log.Printf("Etcd exited poorly with: %v", err)
}
if global.DEBUG {
log.Printf("Graph: %v", G)
}
wg.Wait() // wait for primary go routines to exit
// TODO: wait for each vertex to exit...
log.Println("Goodbye!")
return err
}
func main() {
var flags int
if global.DEBUG || true { // TODO: remove || true
flags = log.LstdFlags | log.Lshortfile
if err := mgmtmain.CLI(program, version); err != nil {
fmt.Println(err)
os.Exit(1)
return
}
flags = (flags - log.Ldate) // remove the date for now
log.SetFlags(flags)
// un-hijack from capnslog...
log.SetOutput(os.Stderr)
if global.VERBOSE {
capnslog.SetFormatter(capnslog.NewLogFormatter(os.Stderr, "(etcd) ", flags))
} else {
capnslog.SetFormatter(capnslog.NewNilFormatter())
}
// test for sanity
if program == "" || version == "" {
log.Fatal("Program was not compiled correctly. Please see Makefile.")
}
app := cli.NewApp()
app.Name = program
app.Usage = "next generation config management"
app.Version = version
//app.Action = ... // without a default action, help runs
app.Commands = []cli.Command{
{
Name: "run",
Aliases: []string{"r"},
Usage: "run",
Action: run,
Flags: []cli.Flag{
cli.StringFlag{
Name: "file, f",
Value: "",
Usage: "graph definition to run",
EnvVar: "MGMT_FILE",
},
cli.BoolFlag{
Name: "no-watch",
Usage: "do not update graph on watched graph definition file changes",
},
cli.StringFlag{
Name: "code, c",
Value: "",
Usage: "code definition to run",
},
cli.StringFlag{
Name: "graphviz, g",
Value: "",
Usage: "output file for graphviz data",
},
cli.StringFlag{
Name: "graphviz-filter, gf",
Value: "dot", // directed graph default
Usage: "graphviz filter to use",
},
// useful for testing multiple instances on same machine
cli.StringFlag{
Name: "hostname",
Value: "",
Usage: "hostname to use",
},
// if empty, it will startup a new server
cli.StringSliceFlag{
Name: "seeds, s",
Value: &cli.StringSlice{}, // empty slice
Usage:  "default etcd client endpoint",
EnvVar: "MGMT_SEEDS",
},
// port 2379 and 4001 are common
cli.StringSliceFlag{
Name: "client-urls",
Value: &cli.StringSlice{},
Usage: "list of URLs to listen on for client traffic",
EnvVar: "MGMT_CLIENT_URLS",
},
// port 2380 and 7001 are common
cli.StringSliceFlag{
Name: "server-urls, peer-urls",
Value: &cli.StringSlice{},
Usage: "list of URLs to listen on for server (peer) traffic",
EnvVar: "MGMT_SERVER_URLS",
},
cli.BoolFlag{
Name: "no-server",
Usage: "do not let other servers peer with me",
},
cli.IntFlag{
Name: "ideal-cluster-size",
Value: etcd.DefaultIdealClusterSize,
Usage: "ideal number of server peers in cluster, only read by initial server",
EnvVar: "MGMT_IDEAL_CLUSTER_SIZE",
},
cli.IntFlag{
Name: "converged-timeout, t",
Value: -1,
Usage: "exit after approximately this many seconds in a converged state",
EnvVar: "MGMT_CONVERGED_TIMEOUT",
},
cli.IntFlag{
Name: "max-runtime",
Value: 0,
Usage: "exit after a maximum of approximately this many seconds",
EnvVar: "MGMT_MAX_RUNTIME",
},
cli.BoolFlag{
Name: "noop",
Usage: "globally force all resources into no-op mode",
},
cli.StringFlag{
Name: "puppet, p",
Value: "",
Usage: "load graph from puppet, optionally takes a manifest or path to manifest file",
},
cli.StringFlag{
Name: "puppet-conf",
Value: "",
Usage: "supply the path to an alternate puppet.conf file to use",
},
cli.StringSliceFlag{
Name: "remote",
Value: &cli.StringSlice{},
Usage: "list of remote graph definitions to run",
},
cli.BoolFlag{
Name: "allow-interactive",
Usage: "allow interactive prompting, such as for remote passwords",
},
cli.StringFlag{
Name: "ssh-priv-id-rsa",
Value: "~/.ssh/id_rsa",
Usage: "default path to ssh key file, set empty to never touch",
EnvVar: "MGMT_SSH_PRIV_ID_RSA",
},
cli.IntFlag{
Name: "cconns",
Value: 0,
Usage: "number of maximum concurrent remote ssh connections to run, 0 for unlimited",
EnvVar: "MGMT_CCONNS",
},
cli.BoolFlag{
Name: "no-caching",
Usage: "don't allow remote caching of remote execution binary",
},
cli.IntFlag{
Name: "depth",
Hidden: true, // internal use only
Value: 0,
Usage: "specify depth in remote hierarchy",
},
cli.StringFlag{
Name: "prefix",
Usage: "specify a path to the working prefix directory",
EnvVar: "MGMT_PREFIX",
},
cli.BoolFlag{
Name: "tmp-prefix",
Usage: "request a pseudo-random, temporary prefix to be used",
},
cli.BoolFlag{
Name: "allow-tmp-prefix",
Usage: "allow creation of a new temporary prefix if main prefix is unavailable",
},
},
},
}
app.EnableBashCompletion = true
app.Run(os.Args)
}
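With this patch the flag definitions and logging setup shown above move into the new mgmtmain package, so the program's own main() shrinks to little more than a call into mgmtmain.CLI. A minimal sketch of that slimmed-down entry point (the program and version literals here are placeholders; they are normally injected at build time via the Makefile):

package main

import (
	"fmt"
	"log"
	"os"

	"github.com/purpleidea/mgmt/mgmtmain"
)

// program and version are normally set at build time; these literals are
// placeholder values for this sketch only.
var (
	program = "mgmt"
	version = "0.0.6"
)

func main() {
	log.SetFlags(log.LstdFlags | log.Lshortfile) // matches the logging setup above
	if err := mgmtmain.CLI(program, version); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}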

mgmtmain/cli.go (new file, 296 lines)

@@ -0,0 +1,296 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package mgmtmain
import (
"fmt"
"log"
"os"
"os/signal"
"syscall"
"github.com/purpleidea/mgmt/puppet"
"github.com/purpleidea/mgmt/yamlgraph"
"github.com/urfave/cli"
)
// run is the main run target.
func run(c *cli.Context) error {
obj := &Main{}
obj.Program = c.App.Name
obj.Version = c.App.Version
if h := c.String("hostname"); c.IsSet("hostname") && h != "" {
obj.Hostname = &h
}
if s := c.String("prefix"); c.IsSet("prefix") && s != "" {
obj.Prefix = &s
}
obj.TmpPrefix = c.Bool("tmp-prefix")
obj.AllowTmpPrefix = c.Bool("allow-tmp-prefix")
if _ = c.String("code"); c.IsSet("code") {
if obj.GAPI != nil {
return fmt.Errorf("Can't combine code GAPI with existing GAPI.")
}
// TODO: implement DSL GAPI
//obj.GAPI = &dsl.GAPI{
// Code: &s,
//}
return fmt.Errorf("The Code GAPI is not implemented yet!") // TODO: DSL
}
if y := c.String("yaml"); c.IsSet("yaml") {
if obj.GAPI != nil {
return fmt.Errorf("Can't combine YAML GAPI with existing GAPI.")
}
obj.GAPI = &yamlgraph.GAPI{
File: &y,
}
}
if p := c.String("puppet"); c.IsSet("puppet") {
if obj.GAPI != nil {
return fmt.Errorf("Can't combine puppet GAPI with existing GAPI.")
}
obj.GAPI = &puppet.GAPI{
PuppetParam: &p,
PuppetConf: c.String("puppet-conf"),
}
}
obj.Remotes = c.StringSlice("remote") // FIXME: GAPI-ify somehow?
obj.NoWatch = c.Bool("no-watch")
obj.Noop = c.Bool("noop")
obj.Graphviz = c.String("graphviz")
obj.GraphvizFilter = c.String("graphviz-filter")
obj.ConvergedTimeout = c.Int("converged-timeout")
obj.MaxRuntime = uint(c.Int("max-runtime"))
obj.Seeds = c.StringSlice("seeds")
obj.ClientURLs = c.StringSlice("client-urls")
obj.ServerURLs = c.StringSlice("server-urls")
obj.IdealClusterSize = c.Int("ideal-cluster-size")
obj.NoServer = c.Bool("no-server")
obj.CConns = uint16(c.Int("cconns"))
obj.AllowInteractive = c.Bool("allow-interactive")
obj.SSHPrivIDRsa = c.String("ssh-priv-id-rsa")
obj.NoCaching = c.Bool("no-caching")
obj.Depth = uint16(c.Int("depth"))
if err := obj.Init(); err != nil {
return err
}
// install the exit signal handler
exit := make(chan struct{})
defer close(exit)
go func() {
signals := make(chan os.Signal, 1)
signal.Notify(signals, os.Interrupt) // catch ^C
//signal.Notify(signals, os.Kill) // catch signals
signal.Notify(signals, syscall.SIGTERM)
select {
case sig := <-signals: // any signal will do
if sig == os.Interrupt {
log.Println("Interrupted by ^C")
obj.Exit(nil)
return
}
log.Println("Interrupted by signal")
obj.Exit(fmt.Errorf("Killed by %v", sig))
return
case <-exit:
return
}
}()
if err := obj.Run(); err != nil {
return err
//return cli.NewExitError(err.Error(), 1) // TODO: ?
//return cli.NewExitError("", 1) // TODO: ?
}
return nil
}
// CLI is the entry point for using mgmt normally from the CLI.
func CLI(program, version string) error {
// test for sanity
if program == "" || version == "" {
return fmt.Errorf("Program was not compiled correctly. Please see Makefile.")
}
app := cli.NewApp()
app.Name = program // App.name and App.version pass these values through
app.Version = version
app.Usage = "next generation config management"
//app.Action = ... // without a default action, help runs
app.Commands = []cli.Command{
{
Name: "run",
Aliases: []string{"r"},
Usage: "run",
Action: run,
Flags: []cli.Flag{
// useful for testing multiple instances on same machine
cli.StringFlag{
Name: "hostname",
Value: "",
Usage: "hostname to use",
},
cli.StringFlag{
Name: "prefix",
Usage: "specify a path to the working prefix directory",
EnvVar: "MGMT_PREFIX",
},
cli.BoolFlag{
Name: "tmp-prefix",
Usage: "request a pseudo-random, temporary prefix to be used",
},
cli.BoolFlag{
Name: "allow-tmp-prefix",
Usage: "allow creation of a new temporary prefix if main prefix is unavailable",
},
cli.StringFlag{
Name: "code, c",
Value: "",
Usage: "code definition to run",
},
cli.StringFlag{
Name: "yaml",
Value: "",
Usage: "yaml graph definition to run",
},
cli.StringFlag{
Name: "puppet, p",
Value: "",
Usage: "load graph from puppet, optionally takes a manifest or path to manifest file",
},
cli.StringFlag{
Name: "puppet-conf",
Value: "",
Usage: "the path to an alternate puppet.conf file",
},
cli.StringSliceFlag{
Name: "remote",
Value: &cli.StringSlice{},
Usage: "list of remote graph definitions to run",
},
cli.BoolFlag{
Name: "no-watch",
Usage: "do not update graph on stream switch events",
},
cli.BoolFlag{
Name: "noop",
Usage: "globally force all resources into no-op mode",
},
cli.StringFlag{
Name: "graphviz, g",
Value: "",
Usage: "output file for graphviz data",
},
cli.StringFlag{
Name: "graphviz-filter, gf",
Value: "dot", // directed graph default
Usage: "graphviz filter to use",
},
cli.IntFlag{
Name: "converged-timeout, t",
Value: -1,
Usage: "exit after approximately this many seconds in a converged state",
EnvVar: "MGMT_CONVERGED_TIMEOUT",
},
cli.IntFlag{
Name: "max-runtime",
Value: 0,
Usage: "exit after a maximum of approximately this many seconds",
EnvVar: "MGMT_MAX_RUNTIME",
},
// if empty, it will startup a new server
cli.StringSliceFlag{
Name: "seeds, s",
Value: &cli.StringSlice{}, // empty slice
Usage:  "default etcd client endpoint",
EnvVar: "MGMT_SEEDS",
},
// port 2379 and 4001 are common
cli.StringSliceFlag{
Name: "client-urls",
Value: &cli.StringSlice{},
Usage: "list of URLs to listen on for client traffic",
EnvVar: "MGMT_CLIENT_URLS",
},
// port 2380 and 7001 are common
cli.StringSliceFlag{
Name: "server-urls, peer-urls",
Value: &cli.StringSlice{},
Usage: "list of URLs to listen on for server (peer) traffic",
EnvVar: "MGMT_SERVER_URLS",
},
cli.IntFlag{
Name: "ideal-cluster-size",
Value: -1,
Usage: "ideal number of server peers in cluster; only read by initial server",
EnvVar: "MGMT_IDEAL_CLUSTER_SIZE",
},
cli.BoolFlag{
Name: "no-server",
Usage: "do not let other servers peer with me",
},
cli.IntFlag{
Name: "cconns",
Value: 0,
Usage: "number of maximum concurrent remote ssh connections to run; 0 for unlimited",
EnvVar: "MGMT_CCONNS",
},
cli.BoolFlag{
Name: "allow-interactive",
Usage: "allow interactive prompting, such as for remote passwords",
},
cli.StringFlag{
Name: "ssh-priv-id-rsa",
Value: "~/.ssh/id_rsa",
Usage: "default path to ssh key file, set empty to never touch",
EnvVar: "MGMT_SSH_PRIV_ID_RSA",
},
cli.BoolFlag{
Name: "no-caching",
Usage: "don't allow remote caching of remote execution binary",
},
cli.IntFlag{
Name: "depth",
Hidden: true, // internal use only
Value: 0,
Usage: "specify depth in remote hierarchy",
},
},
},
}
app.EnableBashCompletion = true
return app.Run(os.Args)
}

mgmtmain/main.go (new file, 480 lines)

@@ -0,0 +1,480 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package mgmtmain
import (
"fmt"
"io/ioutil"
"log"
"os"
"sync"
"time"
"github.com/purpleidea/mgmt/converger"
"github.com/purpleidea/mgmt/etcd"
"github.com/purpleidea/mgmt/gapi"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/recwatch"
"github.com/purpleidea/mgmt/remote"
"github.com/purpleidea/mgmt/util"
etcdtypes "github.com/coreos/etcd/pkg/types"
"github.com/coreos/pkg/capnslog"
multierr "github.com/hashicorp/go-multierror"
errwrap "github.com/pkg/errors"
)
// Main is the main struct for running the mgmt logic.
type Main struct {
Program string // the name of this program, usually set at compile time
Version string // the version of this program, usually set at compile time
Hostname *string // hostname to use; nil if undefined
Prefix *string // prefix passed in; nil if undefined
TmpPrefix bool // request a pseudo-random, temporary prefix to be used
AllowTmpPrefix bool // allow creation of a new temporary prefix if main prefix is unavailable
GAPI gapi.GAPI // graph API interface struct
Remotes []string // list of remote graph definitions to run
NoWatch bool // do not update graph on watched graph definition file changes
Noop bool // globally force all resources into no-op mode
Graphviz string // output file for graphviz data
GraphvizFilter string // graphviz filter to use
ConvergedTimeout int // exit after approximately this many seconds in a converged state; -1 to disable
MaxRuntime uint // exit after a maximum of approximately this many seconds
Seeds []string // default etcd client endpoint
ClientURLs []string // list of URLs to listen on for client traffic
ServerURLs []string // list of URLs to listen on for server (peer) traffic
IdealClusterSize int // ideal number of server peers in cluster; only read by initial server
NoServer bool // do not let other servers peer with me
CConns uint16 // number of maximum concurrent remote ssh connections to run, 0 for unlimited
AllowInteractive bool // allow interactive prompting, such as for remote passwords
SSHPrivIDRsa string // default path to ssh key file, set empty to never touch
NoCaching bool // don't allow remote caching of remote execution binary
Depth uint16 // depth in remote hierarchy; for internal use only
DEBUG bool
VERBOSE bool
seeds etcdtypes.URLs // processed seeds value
clientURLs etcdtypes.URLs // processed client urls value
serverURLs etcdtypes.URLs // processed server urls value
idealClusterSize uint16 // processed ideal cluster size value
exit chan error // exit signal
}
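Because Main is an exported struct with Init(), Run() and Exit() methods, mgmt can now be embedded in another Go program instead of being driven through the CLI. A minimal embedding sketch based only on the fields and methods shown in this file (the yaml file path, program name and version string are placeholder values):

package main

import (
	"log"

	"github.com/purpleidea/mgmt/mgmtmain"
	"github.com/purpleidea/mgmt/yamlgraph"
)

func main() {
	file := "graph.yaml" // placeholder path to a yaml graph definition
	obj := &mgmtmain.Main{
		Program:   "libmgmt-example", // placeholder; normally set at compile time
		Version:   "0.0.6",           // placeholder; normally set at compile time
		TmpPrefix: true,              // use a temporary working prefix so the sketch can run unprivileged
		GAPI: &yamlgraph.GAPI{ // any gapi.GAPI implementation works here
			File: &file,
		},
	}
	if err := obj.Init(); err != nil { // validate and process the inputs
		log.Fatalf("Init failed: %v", err)
	}
	if err := obj.Run(); err != nil { // blocks until obj.Exit(...) is called, e.g. from a signal handler like the one in cli.go
		log.Fatalf("Run failed: %v", err)
	}
}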
// Init initializes the main struct after it performs some validation.
func (obj *Main) Init() error {
if obj.Program == "" || obj.Version == "" {
return fmt.Errorf("You must set the Program and Version strings!")
}
if obj.Prefix != nil && obj.TmpPrefix {
return fmt.Errorf("Choosing a prefix and the request for a tmp prefix is illogical!")
}
obj.idealClusterSize = uint16(obj.IdealClusterSize)
if obj.IdealClusterSize < 0 { // value is undefined, set to the default
obj.idealClusterSize = etcd.DefaultIdealClusterSize
}
if obj.idealClusterSize < 1 {
return fmt.Errorf("IdealClusterSize should be at least one!")
}
if obj.NoServer && len(obj.Remotes) > 0 {
// TODO: in this case, we won't be able to tunnel stuff back to
// here, so if we're okay with every remote graph running in an
// isolated mode, then this is okay. Improve on this if there's
// someone who really wants to be able to do this.
return fmt.Errorf("The Server is required when using Remotes!")
}
if obj.CConns < 0 {
return fmt.Errorf("The CConns value should be at least zero!")
}
if obj.ConvergedTimeout >= 0 && obj.CConns > 0 && len(obj.Remotes) > int(obj.CConns) {
return fmt.Errorf("You can't converge if you have more remotes than available connections!")
}
if obj.Depth < 0 { // user should not be using this argument manually
return fmt.Errorf("Negative values for Depth are not permitted!")
}
// transform the url list inputs into etcd typed lists
var err error
obj.seeds, err = etcdtypes.NewURLs(
util.FlattenListWithSplit(obj.Seeds, []string{",", ";", " "}),
)
if err != nil && len(obj.Seeds) > 0 {
return fmt.Errorf("Seeds didn't parse correctly!")
}
obj.clientURLs, err = etcdtypes.NewURLs(
util.FlattenListWithSplit(obj.ClientURLs, []string{",", ";", " "}),
)
if err != nil && len(obj.ClientURLs) > 0 {
return fmt.Errorf("ClientURLs didn't parse correctly!")
}
obj.serverURLs, err = etcdtypes.NewURLs(
util.FlattenListWithSplit(obj.ServerURLs, []string{",", ";", " "}),
)
if err != nil && len(obj.ServerURLs) > 0 {
return fmt.Errorf("ServerURLs didn't parse correctly!")
}
obj.exit = make(chan error)
return nil
}
// Exit causes a safe shutdown. This is often attached to the ^C signal handler.
func (obj *Main) Exit(err error) {
obj.exit <- err // trigger an exit!
}
// Run is the main execution entrypoint to run mgmt.
func (obj *Main) Run() error {
var start = time.Now().UnixNano()
var flags int
if obj.DEBUG || true { // TODO: remove || true
flags = log.LstdFlags | log.Lshortfile
}
flags = (flags - log.Ldate) // remove the date for now
log.SetFlags(flags)
// un-hijack from capnslog...
log.SetOutput(os.Stderr)
if obj.VERBOSE {
capnslog.SetFormatter(capnslog.NewLogFormatter(os.Stderr, "(etcd) ", flags))
} else {
capnslog.SetFormatter(capnslog.NewNilFormatter())
}
log.Printf("This is: %s, version: %s", obj.Program, obj.Version)
log.Printf("Main: Start: %v", start)
hostname, err := os.Hostname() // a sensible default
// allow passing in the hostname, instead of using the system setting
if h := obj.Hostname; h != nil && *h != "" { // override by cli
hostname = *h
} else if err != nil {
return errwrap.Wrapf(err, "Can't get default hostname!")
}
if hostname == "" { // safety check
return fmt.Errorf("Hostname cannot be empty!")
}
var prefix = fmt.Sprintf("/var/lib/%s/", obj.Program) // default prefix
if p := obj.Prefix; p != nil {
prefix = *p
}
// make sure the working directory prefix exists
if obj.TmpPrefix || os.MkdirAll(prefix, 0770) != nil {
if obj.TmpPrefix || obj.AllowTmpPrefix {
var err error
if prefix, err = ioutil.TempDir("", obj.Program+"-"+hostname+"-"); err != nil {
return fmt.Errorf("Main: Error: Can't create temporary prefix!")
}
log.Println("Main: Warning: Working prefix directory is temporary!")
} else {
return fmt.Errorf("Main: Error: Can't create prefix!")
}
}
log.Printf("Main: Working prefix is: %s", prefix)
var wg sync.WaitGroup
var G, oldGraph *pgraph.Graph
// exit after `max-runtime` seconds for no reason at all...
if i := obj.MaxRuntime; i > 0 {
go func() {
time.Sleep(time.Duration(i) * time.Second)
obj.Exit(nil)
}()
}
// setup converger
converger := converger.NewConverger(
obj.ConvergedTimeout,
nil, // stateFn gets added in by EmbdEtcd
)
go converger.Loop(true) // main loop for converger, true to start paused
// embedded etcd
if len(obj.seeds) == 0 {
log.Printf("Main: Seeds: No seeds specified!")
} else {
log.Printf("Main: Seeds(%d): %v", len(obj.seeds), obj.seeds)
}
EmbdEtcd := etcd.NewEmbdEtcd(
hostname,
obj.seeds,
obj.clientURLs,
obj.serverURLs,
obj.NoServer,
obj.idealClusterSize,
prefix,
converger,
)
if EmbdEtcd == nil {
// TODO: verify EmbdEtcd is not nil below...
obj.Exit(fmt.Errorf("Main: Etcd: Creation failed!"))
} else if err := EmbdEtcd.Startup(); err != nil { // startup (returns when etcd main loop is running)
obj.Exit(fmt.Errorf("Main: Etcd: Startup failed: %v", err))
}
convergerStateFn := func(b bool) error {
// exit if we are using the converged timeout and we are the
// root node. otherwise, if we are a child node in a remote
// execution hierarchy, we should only notify our converged
// state and wait for the parent to trigger the exit.
if t := obj.ConvergedTimeout; obj.Depth == 0 && t >= 0 {
if b {
log.Printf("Converged for %d seconds, exiting!", t)
obj.Exit(nil) // trigger an exit!
}
return nil
}
// send our individual state into etcd for others to see
return etcd.EtcdSetHostnameConverged(EmbdEtcd, hostname, b) // TODO: what should happen on error?
}
if EmbdEtcd != nil {
converger.SetStateFn(convergerStateFn)
}
var gapiChan chan error // stream events are nil errors
if obj.GAPI != nil {
data := gapi.Data{
Hostname: hostname,
EmbdEtcd: EmbdEtcd,
Noop: obj.Noop,
NoWatch: obj.NoWatch,
}
if err := obj.GAPI.Init(data); err != nil {
obj.Exit(fmt.Errorf("Main: GAPI: Init failed: %v", err))
} else if !obj.NoWatch {
gapiChan = obj.GAPI.SwitchStream() // stream of graph switch events!
}
}
exitchan := make(chan struct{}) // exit on close
go func() {
startchan := make(chan struct{}) // start signal
go func() { startchan <- struct{}{} }()
log.Println("Etcd: Starting...")
etcdchan := etcd.EtcdWatch(EmbdEtcd)
first := true // first loop or not
for {
log.Println("Main: Waiting...")
select {
case <-startchan: // kick the loop once at start
// pass
case b := <-etcdchan:
if !b { // ignore the message
continue
}
// everything else passes through to cause a compile!
case err, ok := <-gapiChan:
if !ok { // channel closed
continue
}
if err != nil {
obj.Exit(err) // trigger exit
continue
//return // TODO: return or wait for exitchan?
}
if obj.NoWatch { // extra safety for bad GAPI's
log.Printf("Main: GAPI stream should be quiet with NoWatch!") // fix the GAPI!
continue // no stream events should be sent
}
case <-exitchan:
return
}
if obj.GAPI == nil {
log.Printf("Config: GAPI is empty!")
continue
}
// we need the vertices to be paused to work on them, so
// run graph vertex LOCK...
if !first { // TODO: we can flatten this check out I think
converger.Pause() // FIXME: add sync wait?
G.Pause() // sync
//G.UnGroup() // FIXME: implement me if needed!
}
// make the graph from yaml, lib, puppet->yaml, or dsl!
newGraph, err := obj.GAPI.Graph() // generate graph!
if err != nil {
log.Printf("Config: Error creating new graph: %v", err)
// unpause!
if !first {
G.Start(&wg, first) // sync
converger.Start() // after G.Start()
}
continue
}
// apply the global noop parameter if requested
if obj.Noop {
for _, m := range newGraph.GraphMetas() {
m.Noop = obj.Noop
}
}
// FIXME: make sure we "UnGroup()" any semi-destructive
// changes to the resources so our efficient GraphSync
// will be able to re-use and cmp to the old graph.
newFullGraph, err := newGraph.GraphSync(oldGraph)
if err != nil {
log.Printf("Config: Error running graph sync: %v", err)
// unpause!
if !first {
G.Start(&wg, first) // sync
converger.Start() // after G.Start()
}
continue
}
oldGraph = newFullGraph // save old graph
G = oldGraph.Copy() // copy to active graph
G.AutoEdges() // add autoedges; modifies the graph
G.AutoGroup() // run autogroup; modifies the graph
// TODO: do we want to do a transitive reduction?
log.Printf("Graph: %v", G) // show graph
if obj.GraphvizFilter != "" {
if err := G.ExecGraphviz(obj.GraphvizFilter, obj.Graphviz); err != nil {
log.Printf("Graphviz: %v", err)
} else {
log.Printf("Graphviz: Successfully generated graph!")
}
}
G.AssociateData(converger)
// G.Start(...) needs to be synchronous or wait,
// because if half of the nodes are started and
// some are not ready yet and the EtcdWatch
// loops, we'll cause G.Pause(...) before we
// even got going, thus causing nil pointer errors
G.Start(&wg, first) // sync
converger.Start() // after G.Start()
first = false
}
}()
configWatcher := recwatch.NewConfigWatcher()
events := configWatcher.Events()
if !obj.NoWatch {
configWatcher.Add(obj.Remotes...) // add all the files...
} else {
events = nil // signal that no-watch is true
}
go func() {
select {
case err := <-configWatcher.Error():
obj.Exit(err) // trigger an exit!
case <-exitchan:
return
}
}()
// initialize the add watcher, which calls the f callback on map changes
convergerCb := func(f func(map[string]bool) error) (func(), error) {
return etcd.EtcdAddHostnameConvergedWatcher(EmbdEtcd, f)
}
// build remotes struct for remote ssh
remotes := remote.NewRemotes(
EmbdEtcd.LocalhostClientURLs().StringSlice(),
[]string{etcd.DefaultClientURL},
obj.Noop,
obj.Remotes, // list of files
events, // watch for file changes
obj.CConns,
obj.AllowInteractive,
obj.SSHPrivIDRsa,
!obj.NoCaching,
obj.Depth,
prefix,
converger,
convergerCb,
obj.Program,
)
// TODO: is there any benefit to running the remotes above in the loop?
// wait for etcd to be running before we remote in, which we do above!
go remotes.Run()
if obj.GAPI == nil {
converger.Start() // better start this for empty graphs
}
log.Println("Main: Running...")
reterr := <-obj.exit // wait for exit signal
log.Println("Destroy...")
if obj.GAPI != nil {
if err := obj.GAPI.Close(); err != nil {
err = errwrap.Wrapf(err, "GAPI closed poorly!")
reterr = multierr.Append(reterr, err) // list of errors
}
}
configWatcher.Close() // stop sending file changes to remotes
if err := remotes.Exit(); err != nil { // tell all the remote connections to shutdown; waits!
err = errwrap.Wrapf(err, "Remote exited poorly!")
reterr = multierr.Append(reterr, err) // list of errors
}
G.Exit() // tell all the children to exit
// tell inner main loop to exit
close(exitchan)
// cleanup etcd main loop last so it can process everything first
if err := EmbdEtcd.Destroy(); err != nil { // shutdown and cleanup etcd
err = errwrap.Wrapf(err, "Etcd exited poorly!")
reterr = multierr.Append(reterr, err) // list of errors
}
if obj.DEBUG {
log.Printf("Graph: %v", G)
}
wg.Wait() // wait for primary go routines to exit
// TODO: wait for each vertex to exit...
log.Println("Goodbye!")
return reterr
}


@@ -11,13 +11,23 @@ fi
sudo_command=$(which sudo)
if [ $travis -eq 0 ]; then
YUM=`which yum 2>/dev/null`
APT=`which apt-get 2>/dev/null`
if [ -z "$YUM" -a -z "$APT" ]; then
echo "The package managers can't be found."
exit 1
fi
if [ ! -z "$YUM" ]; then
$sudo_command $YUM install -y libvirt-devel
fi
if [ ! -z "$APT" ]; then
$sudo_command $APT install -y libvirt-dev || true
$sudo_command $APT install -y libpcap0.8-dev || true
fi
if [ $travis -eq 0 ]; then
if [ ! -z "$YUM" ]; then
# some go dependencies are stored in mercurial
$sudo_command $YUM install -y golang golang-googlecode-tools-stringer hg
@@ -29,7 +39,6 @@ if [ $travis -eq 0 ]; then
# one of these two golang tools packages should work on debian
$sudo_command $APT install -y golang-golang-x-tools || true
$sudo_command $APT install -y golang-go.tools || true
$sudo_command $APT install -y libpcap0.8-dev || true
fi
fi
@@ -39,7 +48,7 @@ if go version | grep 'go1\.[0123]\.'; then
exit 1
fi
go get ./... # get all the go dependencies
go get -d ./... # get all the go dependencies
[ -e "$GOBIN/mgmt" ] && rm -f "$GOBIN/mgmt" # the `go get` version has no -X
# vet is built-in in go 1.6 - we check for go vet command
go vet 1> /dev/null 2>&1


@@ -26,29 +26,29 @@ import (
"github.com/purpleidea/mgmt/resources"
)
// add edges to the vertex in a graph based on if it matches a uuid list
func (g *Graph) addEdgesByMatchingUUIDS(v *Vertex, uuids []resources.ResUUID) []bool {
// add edges to the vertex in a graph based on if it matches a uid list
func (g *Graph) addEdgesByMatchingUIDS(v *Vertex, uids []resources.ResUID) []bool {
// search for edges and see what matches!
var result []bool
// loop through each uuid, and see if it matches any vertex
for _, uuid := range uuids {
// loop through each uid, and see if it matches any vertex
for _, uid := range uids {
var found = false
// uuid is a ResUUID object
// uid is a ResUID object
for _, vv := range g.GetVertices() { // search
if v == vv { // skip self
continue
}
if global.DEBUG {
log.Printf("Compile: AutoEdge: Match: %v[%v] with UUID: %v[%v]", vv.Kind(), vv.GetName(), uuid.Kind(), uuid.GetName())
log.Printf("Compile: AutoEdge: Match: %v[%v] with UID: %v[%v]", vv.Kind(), vv.GetName(), uid.Kind(), uid.GetName())
}
// we must match to an effective UUID for the resource,
// we must match to an effective UID for the resource,
// that is to say, the name value of a res is a helpful
// handle, but it is not necessarily a unique identity!
// remember, resources can return multiple UUID's each!
if resources.UUIDExistsInUUIDs(uuid, vv.GetUUIDs()) {
// remember, resources can return multiple UID's each!
if resources.UIDExistsInUIDs(uid, vv.GetUIDs()) {
// add edge from: vv -> v
if uuid.Reversed() {
if uid.Reversed() {
txt := fmt.Sprintf("AutoEdge: %v[%v] -> %v[%v]", vv.Kind(), vv.GetName(), v.Kind(), v.GetName())
log.Printf("Compile: Adding %v", txt)
g.AddEdge(vv, v, NewEdge(txt))
@@ -79,21 +79,21 @@ func (g *Graph) AutoEdges() {
continue // next vertex
}
for { // while the autoEdgeObj has more uuids to add...
uuids := autoEdgeObj.Next() // get some!
if uuids == nil {
for { // while the autoEdgeObj has more uids to add...
uids := autoEdgeObj.Next() // get some!
if uids == nil {
log.Printf("%v[%v]: Config: The auto edge list is empty!", v.Kind(), v.GetName())
break // inner loop
}
if global.DEBUG {
log.Println("Compile: AutoEdge: UUIDS:")
for i, u := range uuids {
log.Printf("Compile: AutoEdge: UUID%d: %v", i, u)
log.Println("Compile: AutoEdge: UIDS:")
for i, u := range uids {
log.Printf("Compile: AutoEdge: UID%d: %v", i, u)
}
}
// match and add edges
result := g.addEdgesByMatchingUUIDS(v, uuids)
result := g.addEdgesByMatchingUIDS(v, uids)
// report back, and find out if we should continue
if !autoEdgeObj.Test(result) {


@@ -22,6 +22,8 @@ import (
"log"
"github.com/purpleidea/mgmt/global"
errwrap "github.com/pkg/errors"
)
// AutoGrouper is the required interface to implement for an autogroup algorithm
@@ -283,8 +285,8 @@ func (g *Graph) VertexMerge(v1, v2 *Vertex, vertexMergeFn func(*Vertex, *Vertex)
g.DeleteVertex(v2) // remove grouped vertex
// 5) creation of a cyclic graph should throw an error
if _, dag := g.TopologicalSort(); !dag { // am i a dag or not?
return fmt.Errorf("Graph is not a dag!")
if _, err := g.TopologicalSort(); err != nil { // am i a dag or not?
return errwrap.Wrapf(err, "TopologicalSort failed") // not a dag
}
return nil // success
}


@@ -19,7 +19,6 @@
package pgraph
import (
"errors"
"fmt"
"io/ioutil"
"log"
@@ -36,6 +35,8 @@ import (
"github.com/purpleidea/mgmt/event"
"github.com/purpleidea/mgmt/global"
"github.com/purpleidea/mgmt/resources"
errwrap "github.com/pkg/errors"
)
//go:generate stringer -type=graphState -output=graphstate_stringer.go
@@ -163,6 +164,18 @@ func (g *Graph) AddEdge(v1, v2 *Vertex, e *Edge) {
g.Adjacency[v1][v2] = e
}
// DeleteEdge deletes a particular edge from the graph.
// FIXME: add test cases
func (g *Graph) DeleteEdge(e *Edge) {
for v1 := range g.Adjacency {
for v2, edge := range g.Adjacency[v1] {
if e == edge {
delete(g.Adjacency[v1], v2)
}
}
}
}
// GetVertexMatch searches for an equivalent resource in the graph and returns
// the vertex it is found in, or nil if not found.
func (g *Graph) GetVertexMatch(obj resources.Res) *Vertex {
@@ -285,11 +298,11 @@ func (g *Graph) ExecGraphviz(program, filename string) error {
switch program {
case "dot", "neato", "twopi", "circo", "fdp":
default:
return errors.New("Invalid graphviz program selected!")
return fmt.Errorf("Invalid graphviz program selected!")
}
if filename == "" {
return errors.New("No filename given!")
return fmt.Errorf("No filename given!")
}
// run as a normal user if possible when run with sudo
@@ -298,18 +311,18 @@ func (g *Graph) ExecGraphviz(program, filename string) error {
err := ioutil.WriteFile(filename, []byte(g.Graphviz()), 0644)
if err != nil {
return errors.New("Error writing to filename!")
return fmt.Errorf("Error writing to filename!")
}
if err1 == nil && err2 == nil {
if err := os.Chown(filename, uid, gid); err != nil {
return errors.New("Error changing file owner!")
return fmt.Errorf("Error changing file owner!")
}
}
path, err := exec.LookPath(program)
if err != nil {
return errors.New("Graphviz is missing!")
return fmt.Errorf("Graphviz is missing!")
}
out := fmt.Sprintf("%v.png", filename)
@@ -324,7 +337,7 @@ func (g *Graph) ExecGraphviz(program, filename string) error {
}
_, err = cmd.Output()
if err != nil {
return errors.New("Error writing to image!")
return fmt.Errorf("Error writing to image!")
}
return nil
}
@@ -468,7 +481,7 @@ func (g *Graph) OutDegree() map[*Vertex]int {
// TopologicalSort returns the sort of graph vertices in that order.
// based on descriptions and code from wikipedia and rosetta code
// TODO: add memoization, and cache invalidation to speed this up :)
func (g *Graph) TopologicalSort() (result []*Vertex, ok bool) { // kahn's algorithm
func (g *Graph) TopologicalSort() ([]*Vertex, error) { // kahn's algorithm
var L []*Vertex // empty list that will contain the sorted elements
var S []*Vertex // set of all nodes with no incoming edges
remaining := make(map[*Vertex]int) // amount of edges remaining
@@ -505,13 +518,13 @@ func (g *Graph) TopologicalSort() (result []*Vertex, ok bool) { // kahn's algori
if in > 0 {
for n := range g.Adjacency[c] {
if remaining[n] > 0 {
return nil, false // not a dag!
return nil, fmt.Errorf("Not a dag!")
}
}
}
}
return L, true
return L, nil
}
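Callers now branch on the returned error rather than a boolean; a small sketch of the updated calling convention, consistent with the VertexMerge and test changes elsewhere in this patch (a fragment that assumes the log and errwrap imports already present in this file):

// sketch only: sort a graph G and walk the result in dependency order
sorted, err := G.TopologicalSort()
if err != nil {
	return errwrap.Wrapf(err, "TopologicalSort failed") // not a dag
}
for _, v := range sorted {
	log.Printf("vertex: %v[%v]", v.Kind(), v.GetName())
}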
// Reachability finds the shortest path in a DAG from a to b, and returns the
@@ -792,7 +805,7 @@ func (g *Graph) Worker(v *Vertex) error {
// TODO: resources could have a separate exit channel to avoid this complexity!?
case event := <-obj.Events():
// NOTE: this code should match the similar Res code!
//cuuid.SetConverged(false) // TODO ?
//cuid.SetConverged(false) // TODO: ?
if exit, send := obj.ReadEvent(&event); exit {
return nil // exit
} else if send {
@@ -939,6 +952,94 @@ func (g *Graph) Exit() {
}
}
// GraphSync updates the oldGraph so that it matches the newGraph receiver. It
// leaves identical elements alone so that they don't need to be refreshed.
// FIXME: add test cases
func (g *Graph) GraphSync(oldGraph *Graph) (*Graph, error) {
if oldGraph == nil {
oldGraph = NewGraph(g.GetName()) // copy over the name
}
oldGraph.SetName(g.GetName()) // overwrite the name
var lookup = make(map[*Vertex]*Vertex)
var vertexKeep []*Vertex // list of vertices which are the same in new graph
var edgeKeep []*Edge // list of edges which are the same in new graph
for v := range g.Adjacency { // loop through the vertices (resources)
res := v.Res // resource
vertex := oldGraph.GetVertexMatch(res)
if vertex == nil { // no match found
if err := res.Init(); err != nil {
return nil, errwrap.Wrapf(err, "could not Init() resource")
}
vertex = NewVertex(res)
oldGraph.AddVertex(vertex) // call standalone in case not part of an edge
}
lookup[v] = vertex // used for constructing edges
vertexKeep = append(vertexKeep, vertex) // append
}
// get rid of any vertices we shouldn't keep (that aren't in new graph)
for v := range oldGraph.Adjacency {
if !VertexContains(v, vertexKeep) {
// wait for exit before starting new graph!
v.SendEvent(event.EventExit, true, false)
oldGraph.DeleteVertex(v)
}
}
// compare edges
for v1 := range g.Adjacency { // loop through the vertices (resources)
for v2, e := range g.Adjacency[v1] {
// we have an edge!
// lookup vertices (these should exist now)
//res1 := v1.Res // resource
//res2 := v2.Res
//vertex1 := oldGraph.GetVertexMatch(res1)
//vertex2 := oldGraph.GetVertexMatch(res2)
vertex1, exists1 := lookup[v1]
vertex2, exists2 := lookup[v2]
if !exists1 || !exists2 { // no match found, bug?
//if vertex1 == nil || vertex2 == nil { // no match found
return nil, fmt.Errorf("New vertices weren't found!") // programming error
}
edge, exists := oldGraph.Adjacency[vertex1][vertex2]
if !exists || edge.Name != e.Name { // TODO: edgeCmp
edge = e // use or overwrite edge
}
oldGraph.Adjacency[vertex1][vertex2] = edge // store it (AddEdge)
edgeKeep = append(edgeKeep, edge) // mark as saved
}
}
// delete unused edges
for v1 := range oldGraph.Adjacency {
for _, e := range oldGraph.Adjacency[v1] {
// we have an edge!
if !EdgeContains(e, edgeKeep) {
oldGraph.DeleteEdge(e)
}
}
}
return oldGraph, nil
}
// GraphMetas returns a list of pointers to each of the resource MetaParams.
func (g *Graph) GraphMetas() []*resources.MetaParams {
metas := []*resources.MetaParams{}
for v := range g.Adjacency { // loop through the vertices (resources)
res := v.Res // resource
meta := res.Meta()
metas = append(metas, meta)
}
return metas
}
// AssociateData associates some data with the object in the graph in question
func (g *Graph) AssociateData(converger converger.Converger) {
for v := range g.GetVerticesChan() {
@@ -956,6 +1057,16 @@ func VertexContains(needle *Vertex, haystack []*Vertex) bool {
return false
}
// EdgeContains is an "in array" function to test for an edge in a slice of edges.
func EdgeContains(needle *Edge, haystack []*Edge) bool {
for _, v := range haystack {
if needle == v {
return true
}
}
return false
}
// Reverse reverses a list of vertices.
func Reverse(vs []*Vertex) []*Vertex {
//var out []*Vertex // XXX: golint suggests, but it fails testing


@@ -26,6 +26,15 @@ import (
"time"
)
// NV is a helper function to make testing easier. It creates a new noop vertex.
func NV(s string) *Vertex {
obj, err := NewNoopRes(s)
if err != nil {
panic(err) // unlikely test failure!
}
return NewVertex(obj)
}
func TestPgraphT1(t *testing.T) {
G := NewGraph("g1")
@@ -38,8 +47,8 @@ func TestPgraphT1(t *testing.T) {
t.Errorf("Should have 0 edges instead of: %d.", i)
}
v1 := NewVertex(NewNoopRes("v1"))
v2 := NewVertex(NewNoopRes("v2"))
v1 := NV("v1")
v2 := NV("v2")
e1 := NewEdge("e1")
G.AddEdge(v1, v2, e1)
@@ -55,12 +64,12 @@ func TestPgraphT1(t *testing.T) {
func TestPgraphT2(t *testing.T) {
G := NewGraph("g2")
v1 := NewVertex(NewNoopRes("v1"))
v2 := NewVertex(NewNoopRes("v2"))
v3 := NewVertex(NewNoopRes("v3"))
v4 := NewVertex(NewNoopRes("v4"))
v5 := NewVertex(NewNoopRes("v5"))
v6 := NewVertex(NewNoopRes("v6"))
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
@@ -82,12 +91,12 @@ func TestPgraphT2(t *testing.T) {
func TestPgraphT3(t *testing.T) {
G := NewGraph("g3")
v1 := NewVertex(NewNoopRes("v1"))
v2 := NewVertex(NewNoopRes("v2"))
v3 := NewVertex(NewNoopRes("v3"))
v4 := NewVertex(NewNoopRes("v4"))
v5 := NewVertex(NewNoopRes("v5"))
v6 := NewVertex(NewNoopRes("v6"))
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
@@ -123,9 +132,9 @@ func TestPgraphT3(t *testing.T) {
func TestPgraphT4(t *testing.T) {
G := NewGraph("g4")
v1 := NewVertex(NewNoopRes("v1"))
v2 := NewVertex(NewNoopRes("v2"))
v3 := NewVertex(NewNoopRes("v3"))
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
@@ -145,12 +154,12 @@ func TestPgraphT4(t *testing.T) {
func TestPgraphT5(t *testing.T) {
G := NewGraph("g5")
v1 := NewVertex(NewNoopRes("v1"))
v2 := NewVertex(NewNoopRes("v2"))
v3 := NewVertex(NewNoopRes("v3"))
v4 := NewVertex(NewNoopRes("v4"))
v5 := NewVertex(NewNoopRes("v5"))
v6 := NewVertex(NewNoopRes("v6"))
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
@@ -174,12 +183,12 @@ func TestPgraphT5(t *testing.T) {
func TestPgraphT6(t *testing.T) {
G := NewGraph("g6")
v1 := NewVertex(NewNoopRes("v1"))
v2 := NewVertex(NewNoopRes("v2"))
v3 := NewVertex(NewNoopRes("v3"))
v4 := NewVertex(NewNoopRes("v4"))
v5 := NewVertex(NewNoopRes("v5"))
v6 := NewVertex(NewNoopRes("v6"))
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
@@ -212,9 +221,9 @@ func TestPgraphT6(t *testing.T) {
func TestPgraphT7(t *testing.T) {
G := NewGraph("g7")
v1 := NewVertex(NewNoopRes("v1"))
v2 := NewVertex(NewNoopRes("v2"))
v3 := NewVertex(NewNoopRes("v3"))
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
@@ -253,28 +262,28 @@ func TestPgraphT7(t *testing.T) {
func TestPgraphT8(t *testing.T) {
v1 := NewVertex(NewNoopRes("v1"))
v2 := NewVertex(NewNoopRes("v2"))
v3 := NewVertex(NewNoopRes("v3"))
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
if VertexContains(v1, []*Vertex{v1, v2, v3}) != true {
t.Errorf("Should be true instead of false.")
}
v4 := NewVertex(NewNoopRes("v4"))
v5 := NewVertex(NewNoopRes("v5"))
v6 := NewVertex(NewNoopRes("v6"))
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
if VertexContains(v4, []*Vertex{v5, v6}) != false {
t.Errorf("Should be false instead of true.")
}
v7 := NewVertex(NewNoopRes("v7"))
v8 := NewVertex(NewNoopRes("v8"))
v9 := NewVertex(NewNoopRes("v9"))
v7 := NV("v7")
v8 := NV("v8")
v9 := NV("v9")
if VertexContains(v8, []*Vertex{v7, v8, v9}) != true {
t.Errorf("Should be true instead of false.")
}
v1b := NewVertex(NewNoopRes("v1")) // same value, different objects
v1b := NV("v1") // same value, different objects
if VertexContains(v1b, []*Vertex{v1, v2, v3}) != false {
t.Errorf("Should be false instead of true.")
}
@@ -283,12 +292,12 @@ func TestPgraphT8(t *testing.T) {
func TestPgraphT9(t *testing.T) {
G := NewGraph("g9")
v1 := NewVertex(NewNoopRes("v1"))
v2 := NewVertex(NewNoopRes("v2"))
v3 := NewVertex(NewNoopRes("v3"))
v4 := NewVertex(NewNoopRes("v4"))
v5 := NewVertex(NewNoopRes("v5"))
v6 := NewVertex(NewNoopRes("v6"))
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
@@ -343,11 +352,11 @@ func TestPgraphT9(t *testing.T) {
t.Errorf("Outdegree of v6 should be 0 instead of: %d.", i)
}
s, ok := G.TopologicalSort()
s, err := G.TopologicalSort()
// either possibility is a valid toposort
match := reflect.DeepEqual(s, []*Vertex{v1, v2, v3, v4, v5, v6}) || reflect.DeepEqual(s, []*Vertex{v1, v3, v2, v4, v5, v6})
if !ok || !match {
t.Errorf("Topological sort failed, status: %v.", ok)
if err != nil || !match {
t.Errorf("Topological sort failed, error: %v.", err)
str := "Found:"
for _, v := range s {
str += " " + v.Res.GetName()
@@ -359,12 +368,12 @@ func TestPgraphT9(t *testing.T) {
func TestPgraphT10(t *testing.T) {
G := NewGraph("g10")
v1 := NewVertex(NewNoopRes("v1"))
v2 := NewVertex(NewNoopRes("v2"))
v3 := NewVertex(NewNoopRes("v3"))
v4 := NewVertex(NewNoopRes("v4"))
v5 := NewVertex(NewNoopRes("v5"))
v6 := NewVertex(NewNoopRes("v6"))
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
@@ -378,8 +387,8 @@ func TestPgraphT10(t *testing.T) {
G.AddEdge(v5, v6, e5)
G.AddEdge(v4, v2, e6) // cycle
if _, ok := G.TopologicalSort(); ok {
t.Errorf("Topological sort passed, but graph is cyclic.")
if _, err := G.TopologicalSort(); err == nil {
t.Errorf("Topological sort passed, but graph is cyclic!")
}
}
@@ -399,8 +408,8 @@ func TestPgraphReachability0(t *testing.T) {
}
{
G := NewGraph("g")
v1 := NewVertex(NewNoopRes("v1"))
v6 := NewVertex(NewNoopRes("v6"))
v1 := NV("v1")
v6 := NV("v6")
result := G.Reachability(v1, v6)
expected := []*Vertex{}
@@ -416,12 +425,12 @@ func TestPgraphReachability0(t *testing.T) {
}
{
G := NewGraph("g")
v1 := NewVertex(NewNoopRes("v1"))
v2 := NewVertex(NewNoopRes("v2"))
v3 := NewVertex(NewNoopRes("v3"))
v4 := NewVertex(NewNoopRes("v4"))
v5 := NewVertex(NewNoopRes("v5"))
v6 := NewVertex(NewNoopRes("v6"))
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
@@ -450,12 +459,12 @@ func TestPgraphReachability0(t *testing.T) {
// simple linear path
func TestPgraphReachability1(t *testing.T) {
G := NewGraph("g")
v1 := NewVertex(NewNoopRes("v1"))
v2 := NewVertex(NewNoopRes("v2"))
v3 := NewVertex(NewNoopRes("v3"))
v4 := NewVertex(NewNoopRes("v4"))
v5 := NewVertex(NewNoopRes("v5"))
v6 := NewVertex(NewNoopRes("v6"))
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
@@ -484,12 +493,12 @@ func TestPgraphReachability1(t *testing.T) {
// pick one of two correct paths
func TestPgraphReachability2(t *testing.T) {
G := NewGraph("g")
v1 := NewVertex(NewNoopRes("v1"))
v2 := NewVertex(NewNoopRes("v2"))
v3 := NewVertex(NewNoopRes("v3"))
v4 := NewVertex(NewNoopRes("v4"))
v5 := NewVertex(NewNoopRes("v5"))
v6 := NewVertex(NewNoopRes("v6"))
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
@@ -521,12 +530,12 @@ func TestPgraphReachability2(t *testing.T) {
// pick shortest path
func TestPgraphReachability3(t *testing.T) {
G := NewGraph("g")
v1 := NewVertex(NewNoopRes("v1"))
v2 := NewVertex(NewNoopRes("v2"))
v3 := NewVertex(NewNoopRes("v3"))
v4 := NewVertex(NewNoopRes("v4"))
v5 := NewVertex(NewNoopRes("v5"))
v6 := NewVertex(NewNoopRes("v6"))
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
@@ -556,12 +565,12 @@ func TestPgraphReachability3(t *testing.T) {
// direct path
func TestPgraphReachability4(t *testing.T) {
G := NewGraph("g")
v1 := NewVertex(NewNoopRes("v1"))
v2 := NewVertex(NewNoopRes("v2"))
v3 := NewVertex(NewNoopRes("v3"))
v4 := NewVertex(NewNoopRes("v4"))
v5 := NewVertex(NewNoopRes("v5"))
v6 := NewVertex(NewNoopRes("v6"))
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
@@ -589,12 +598,12 @@ func TestPgraphReachability4(t *testing.T) {
}
func TestPgraphT11(t *testing.T) {
v1 := NewVertex(NewNoopRes("v1"))
v2 := NewVertex(NewNoopRes("v2"))
v3 := NewVertex(NewNoopRes("v3"))
v4 := NewVertex(NewNoopRes("v4"))
v5 := NewVertex(NewNoopRes("v5"))
v6 := NewVertex(NewNoopRes("v6"))
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
if rev := Reverse([]*Vertex{}); !reflect.DeepEqual(rev, []*Vertex{}) {
t.Errorf("Reverse of vertex slice failed.")

puppet/gapi.go (new file, 121 lines)

@@ -0,0 +1,121 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package puppet
import (
"fmt"
"log"
"sync"
"time"
"github.com/purpleidea/mgmt/gapi"
"github.com/purpleidea/mgmt/pgraph"
)
// GAPI implements the main puppet GAPI interface.
type GAPI struct {
PuppetParam *string // puppet mode to run; nil if undefined
PuppetConf string // the path to an alternate puppet.conf file
data gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}
// NewGAPI creates a new puppet GAPI struct and calls Init().
func NewGAPI(data gapi.Data, puppetParam *string, puppetConf string) (*GAPI, error) {
obj := &GAPI{
PuppetParam: puppetParam,
PuppetConf: puppetConf,
}
return obj, obj.Init(data)
}
// Init initializes the puppet GAPI struct.
func (obj *GAPI) Init(data gapi.Data) error {
if obj.initialized {
return fmt.Errorf("Already initialized!")
}
if obj.PuppetParam == nil {
return fmt.Errorf("The PuppetParam param must be specified!")
}
obj.data = data // store for later
obj.closeChan = make(chan struct{})
obj.initialized = true
return nil
}
// Graph returns a current Graph.
func (obj *GAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("Puppet: GAPI is not initialized!")
}
config := ParseConfigFromPuppet(*obj.PuppetParam, obj.PuppetConf)
if config == nil {
return nil, fmt.Errorf("Puppet: ParseConfigFromPuppet returned nil!")
}
g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.EmbdEtcd, obj.data.Noop)
return g, err
}
// SwitchStream returns nil errors every time there could be a new graph.
func (obj *GAPI) SwitchStream() chan error {
if obj.data.NoWatch {
return nil
}
puppetChan := func() <-chan time.Time { // helper function
return time.Tick(time.Duration(PuppetInterval(obj.PuppetConf)) * time.Second)
}
ch := make(chan error)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
ch <- fmt.Errorf("Puppet: GAPI is not initialized!")
return
}
pChan := puppetChan()
for {
select {
case _, ok := <-pChan:
if !ok { // the channel closed!
return
}
log.Printf("Puppet: Generating new graph...")
pChan = puppetChan() // TODO: okay to update interval in case it changed?
ch <- nil // trigger a run
case <-obj.closeChan:
return
}
}
}()
return ch
}
// Close shuts down the Puppet GAPI.
func (obj *GAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("Puppet: GAPI is not initialized!")
}
close(obj.closeChan)
obj.wg.Wait()
obj.initialized = false // closed = true
return nil
}
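The puppet implementation above, together with the call sites in mgmtmain/main.go, shows every method the main loop expects from a pluggable graph provider. Reconstructed from those usages (a hedged sketch, not the literal gapi package source), the interface presumably looks roughly like this:

// sketch of the gapi package, reconstructed from its call sites in this patch
package gapi

import (
	"github.com/purpleidea/mgmt/etcd"
	"github.com/purpleidea/mgmt/pgraph"
)

// Data is what each GAPI implementation receives on Init.
type Data struct {
	Hostname string         // hostname of this node
	EmbdEtcd *etcd.EmbdEtcd // handle to the embedded etcd
	Noop     bool           // global noop flag
	NoWatch  bool           // suppress stream events if set
}

// GAPI is the interface a pluggable graph provider implements.
type GAPI interface {
	Init(Data) error               // store the runtime data and validate
	Graph() (*pgraph.Graph, error) // build and return the current graph
	SwitchStream() chan error      // nil values signal "a new graph may be available"
	Close() error                  // shut the provider down
}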


@@ -26,8 +26,8 @@ import (
"strconv"
"strings"
"github.com/purpleidea/mgmt/gconfig"
"github.com/purpleidea/mgmt/global"
"github.com/purpleidea/mgmt/yamlgraph"
)
const (
@@ -87,7 +87,7 @@ func runPuppetCommand(cmd *exec.Cmd) ([]byte, error) {
// ParseConfigFromPuppet takes a special puppet param string and config and
// returns the graph configuration structure.
func ParseConfigFromPuppet(puppetParam, puppetConf string) *gconfig.GraphConfig {
func ParseConfigFromPuppet(puppetParam, puppetConf string) *yamlgraph.GraphConfig {
var puppetConfArg string
if puppetConf != "" {
puppetConfArg = "--config=" + puppetConf
@@ -104,7 +104,7 @@ func ParseConfigFromPuppet(puppetParam, puppetConf string) *gconfig.GraphConfig
log.Println("Puppet: launching translator")
var config gconfig.GraphConfig
var config yamlgraph.GraphConfig
if data, err := runPuppetCommand(cmd); err != nil {
return nil
} else if err := config.Parse(data); err != nil {


@@ -96,11 +96,11 @@ func (obj *RecWatcher) Init() error {
return nil
}
//func (obj *RecWatcher) Add(path string) error { // XXX implement me or not?
//func (obj *RecWatcher) Add(path string) error { // XXX: implement me or not?
//
//}
//
//func (obj *RecWatcher) Remove(path string) error { // XXX implement me or not?
//func (obj *RecWatcher) Remove(path string) error { // XXX: implement me or not?
//
//}
@@ -209,7 +209,7 @@ func (obj *RecWatcher) Watch() error {
}
} else {
// TODO different watchers get each others events!
// TODO: different watchers get each others events!
// https://github.com/go-fsnotify/fsnotify/issues/95
// this happened with two values such as:
// event.Name: /tmp/mgmt/f3 and current: /tmp/mgmt/f2


@@ -62,12 +62,14 @@ import (
"time"
cv "github.com/purpleidea/mgmt/converger"
"github.com/purpleidea/mgmt/gconfig"
"github.com/purpleidea/mgmt/global"
"github.com/purpleidea/mgmt/util"
"github.com/purpleidea/mgmt/yamlgraph"
multierr "github.com/hashicorp/go-multierror"
"github.com/howeyc/gopass"
"github.com/kardianos/osext"
errwrap "github.com/pkg/errors"
"github.com/pkg/sftp"
"golang.org/x/crypto/ssh"
)
@@ -171,7 +173,7 @@ func (obj *SSH) Sftp() error {
// TODO: make the path configurable to deal with /tmp/ mounted noexec?
tmpdir := func() string {
return fmt.Sprintf(formatPattern, fmtUUID(10)) // eg: /tmp/mgmt.abcdefghij/
return fmt.Sprintf(formatPattern, fmtUID(10)) // eg: /tmp/mgmt.abcdefghij/
}
var ready bool
obj.remotewd = ""
@@ -189,7 +191,7 @@ func (obj *SSH) Sftp() error {
}
for i := 0; true; {
// NOTE: since fmtUUID is deterministic, if we don't clean up
// NOTE: since fmtUID is deterministic, if we don't clean up
// previous runs, we may get the same paths generated, and here
// they will conflict.
if err := obj.sftp.Mkdir(obj.remotewd); err != nil {
@@ -491,7 +493,7 @@ func (obj *SSH) Exec() error {
// TODO: do something less arbitrary about which one we pick?
url := cleanURL(obj.remoteURLs[0]) // arbitrarily pick the first one
seeds := fmt.Sprintf("--no-server --seeds 'http://%s'", url) // XXX: escape untrusted input? (or check if url is valid)
file := fmt.Sprintf("--file '%s'", obj.filepath) // XXX: escape untrusted input! (or check if file path exists)
file := fmt.Sprintf("--yaml '%s'", obj.filepath) // XXX: escape untrusted input! (or check if file path exists)
depth := fmt.Sprintf("--depth %d", obj.depth+1) // child is +1 distance
args := []string{hostname, seeds, file, depth}
if obj.noop {
@@ -698,8 +700,8 @@ type Remotes struct {
exitChan chan struct{} // closes when we should exit
semaphore Semaphore // counting semaphore to limit concurrent connections
hostnames []string // list of hostnames we've seen so far
cuuid cv.ConvergerUUID // convergerUUID for the remote itself
cuuids map[string]cv.ConvergerUUID // map to each SSH struct with the remote as the key
cuid cv.ConvergerUID // convergerUID for the remote itself
cuids map[string]cv.ConvergerUID // map to each SSH struct with the remote as the key
callbackCancelFunc func() // stored callback function cancel function
program string // name of the program
@@ -725,7 +727,7 @@ func NewRemotes(clientURLs, remoteURLs []string, noop bool, remotes []string, fi
exitChan: make(chan struct{}),
semaphore: NewSemaphore(int(cConns)),
hostnames: make([]string, len(remotes)),
cuuids: make(map[string]cv.ConvergerUUID),
cuids: make(map[string]cv.ConvergerUID),
program: program,
}
}
@@ -734,7 +736,7 @@ func NewRemotes(clientURLs, remoteURLs []string, noop bool, remotes []string, fi
// It takes as input the path to a graph definition file.
func (obj *Remotes) NewSSH(file string) (*SSH, error) {
// first do the parsing...
config := gconfig.ParseConfigFromFile(file)
config := yamlgraph.ParseConfigFromFile(file) // FIXME: GAPI-ify somehow?
if config == nil {
return nil, fmt.Errorf("Remote: Error parsing remote graph: %s", file)
}
@@ -791,7 +793,8 @@ func (obj *Remotes) NewSSH(file string) (*SSH, error) {
return nil, fmt.Errorf("No authentication methods available!")
}
hostname := config.Hostname
//hostname := config.Hostname // TODO: optionally specify local hostname somehow
hostname := ""
if hostname == "" {
hostname = host // default to above
}
@@ -894,12 +897,12 @@ func (obj *Remotes) passwordCallback(user, host string) func() (string, error) {
func (obj *Remotes) Run() {
// TODO: we can disable a lot of this if we're not using --converged-timeout
// link in all the converged timeout checking and callbacks...
obj.cuuid = obj.converger.Register() // one for me!
obj.cuuid.SetName("Remote: Run")
obj.cuid = obj.converger.Register() // one for me!
obj.cuid.SetName("Remote: Run")
for _, f := range obj.remotes { // one for each remote...
obj.cuuids[f] = obj.converger.Register() // save a reference
obj.cuuids[f].SetName(fmt.Sprintf("Remote: %s", f))
//obj.cuuids[f].SetConverged(false) // everyone starts off false
obj.cuids[f] = obj.converger.Register() // save a reference
obj.cuids[f].SetName(fmt.Sprintf("Remote: %s", f))
//obj.cuids[f].SetConverged(false) // everyone starts off false
}
// watch for converged state in the group of remotes...
@@ -926,7 +929,7 @@ func (obj *Remotes) Run() {
}
// if exiting, don't update, it will be unregistered...
if !sshobj.exiting { // this is actually racy, but safe
obj.cuuids[f].SetConverged(b) // ignore errors!
obj.cuids[f].SetConverged(b) // ignore errors!
}
}
@@ -953,10 +956,10 @@ func (obj *Remotes) Run() {
if !more {
return
}
obj.cuuid.SetConverged(false) // activity!
obj.cuid.SetConverged(false) // activity!
case <-obj.cuuid.ConvergedTimer():
obj.cuuid.SetConverged(true) // converged!
case <-obj.cuid.ConvergedTimer():
obj.cuid.SetConverged(true) // converged!
continue
}
obj.lock.Lock()
@@ -975,7 +978,7 @@ func (obj *Remotes) Run() {
}
}()
} else {
obj.cuuid.SetConverged(true) // if no watches, we're converged!
obj.cuid.SetConverged(true) // if no watches, we're converged!
}
// the semaphore provides the max simultaneous connection limit
@@ -993,7 +996,7 @@ func (obj *Remotes) Run() {
if obj.cConns != 0 {
obj.semaphore.V(1) // don't lock the loop
}
obj.cuuids[f].Unregister() // don't stall the converge!
obj.cuids[f].Unregister() // don't stall the converge!
continue
}
obj.sshmap[f] = sshobj // save a reference
@@ -1004,7 +1007,7 @@ func (obj *Remotes) Run() {
defer obj.semaphore.V(1)
}
defer obj.wg.Done()
defer obj.cuuids[f].Unregister()
defer obj.cuids[f].Unregister()
if err := sshobj.Go(); err != nil {
log.Printf("Remote: Error: %s", err)
@@ -1017,11 +1020,12 @@ func (obj *Remotes) Run() {
// Exit causes as much of the Remotes struct to shutdown as quickly and as
// cleanly as possible. It only returns once everything is shutdown.
func (obj *Remotes) Exit() {
func (obj *Remotes) Exit() error {
obj.lock.Lock()
obj.exiting = true // don't spawn new ones once this flag is set!
obj.lock.Unlock()
close(obj.exitChan)
var reterr error
for _, f := range obj.remotes {
sshobj, exists := obj.sshmap[f]
if !exists || sshobj == nil {
@@ -1030,7 +1034,8 @@ func (obj *Remotes) Exit() {
// TODO: should we run these as go routines?
if err := sshobj.Stop(); err != nil {
log.Printf("Remote: Error stopping: %s", err)
err = errwrap.Wrapf(err, "Remote: Error stopping!")
reterr = multierr.Append(reterr, err) // list of errors
}
}
@@ -1038,14 +1043,15 @@ func (obj *Remotes) Exit() {
obj.callbackCancelFunc() // cancel our callback
}
defer obj.cuuid.Unregister()
defer obj.cuid.Unregister()
obj.wg.Wait() // wait for everyone to exit
return reterr
}
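Exit() now reports stop failures instead of only logging them: each failing sshobj.Stop() is wrapped and appended into one combined error that the caller receives. A minimal, self-contained sketch of that accumulate-then-return pattern follows; the joinErrs helper is a hypothetical stand-in for the errwrap/multierr packages the real code imports.

package main

import (
	"fmt"
	"log"
	"strings"
)

// joinErrs collapses a slice of errors into one, roughly what the
// multierr-style Append calls above accomplish; illustrative only.
func joinErrs(errs []error) error {
	var msgs []string
	for _, e := range errs {
		if e != nil {
			msgs = append(msgs, e.Error())
		}
	}
	if len(msgs) == 0 {
		return nil
	}
	return fmt.Errorf("%s", strings.Join(msgs, "; "))
}

func main() {
	// pretend two of three remotes failed to stop cleanly
	stopErrs := []error{nil, fmt.Errorf("host1: connection reset"), fmt.Errorf("host2: timeout")}
	if err := joinErrs(stopErrs); err != nil {
		// this is roughly what a caller of the new Exit() signature sees
		log.Printf("Remote: Error stopping: %s", err)
	}
}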
// fmtUUID makes a random string of length n, it is not cryptographically safe.
// fmtUID makes a random string of length n, it is not cryptographically safe.
// This function actually usually generates the same sequence of random strings
// each time the program is run, which makes repeatability of this code easier.
func fmtUUID(n int) string {
func fmtUID(n int) string {
b := make([]byte, n)
for i := range b {
b[i] = formatChars[rand.Intn(len(formatChars))]

View File

@@ -51,7 +51,7 @@ type ExecRes struct {
}
// NewExecRes is a constructor for this resource. It also calls Init() for you.
func NewExecRes(name, cmd, shell string, timeout int, watchcmd, watchshell, ifcmd, ifshell string, pollint int, state string) *ExecRes {
func NewExecRes(name, cmd, shell string, timeout int, watchcmd, watchshell, ifcmd, ifshell string, pollint int, state string) (*ExecRes, error) {
obj := &ExecRes{
BaseRes: BaseRes{
Name: name,
@@ -66,8 +66,7 @@ func NewExecRes(name, cmd, shell string, timeout int, watchcmd, watchshell, ifcm
PollInt: pollint,
State: state,
}
obj.Init()
return obj
return obj, obj.Init()
}
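Because Init() can now return an error, every New*Res constructor hands that error back to the caller instead of swallowing it. A hedged usage fragment against the signature above (same package, placeholder argument values):

// exampleNewExec shows how a caller adapts to the (res, error) return;
// it is purely illustrative and not part of this commit.
func exampleNewExec() *ExecRes {
	res, err := NewExecRes("hello", "echo hello", "", 0, "", "", "", "", 0, "")
	if err != nil { // Init() failures now surface here instead of being ignored
		log.Fatalf("exec constructor failed: %v", err)
	}
	return res
}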
// Init runs some startup code for this resource.
@@ -99,7 +98,7 @@ func (obj *ExecRes) BufioChanScanner(scanner *bufio.Scanner) (chan string, chan
ch <- scanner.Text() // blocks here ?
if e := scanner.Err(); e != nil {
errch <- e // send any misc errors we encounter
//break // TODO ?
//break // TODO: ?
}
}
close(ch)
@@ -116,8 +115,8 @@ func (obj *ExecRes) Watch(processChan chan event.Event) error {
}
obj.SetWatching(true)
defer obj.SetWatching(false)
cuuid := obj.converger.Register()
defer cuuid.Unregister()
cuid := obj.converger.Register()
defer cuid.Unregister()
var startup bool
Startup := func(block bool) <-chan time.Time {
@@ -173,7 +172,7 @@ func (obj *ExecRes) Watch(processChan chan event.Event) error {
obj.SetState(ResStateWatching) // reset
select {
case text := <-bufioch:
cuuid.SetConverged(false)
cuid.SetConverged(false)
// each time we get a line of output, we loop!
log.Printf("%v[%v]: Watch output: %s", obj.Kind(), obj.GetName(), text)
if text != "" {
@@ -181,7 +180,7 @@ func (obj *ExecRes) Watch(processChan chan event.Event) error {
}
case err := <-errch:
cuuid.SetConverged(false)
cuid.SetConverged(false)
if err == nil { // EOF
// FIXME: add an "if watch command ends/crashes"
// restart or generate error option
@@ -190,18 +189,18 @@ func (obj *ExecRes) Watch(processChan chan event.Event) error {
// error reading input?
return fmt.Errorf("Unknown %s[%s] error: %v", obj.Kind(), obj.GetName(), err)
case event := <-obj.events:
cuuid.SetConverged(false)
case event := <-obj.Events():
cuid.SetConverged(false)
if exit, send = obj.ReadEvent(&event); exit {
return nil // exit
}
case <-cuuid.ConvergedTimer():
cuuid.SetConverged(true) // converged!
case <-cuid.ConvergedTimer():
cuid.SetConverged(true) // converged!
continue
case <-Startup(startup):
cuuid.SetConverged(false)
cuid.SetConverged(false)
send = true
}
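The same converged-detection idiom recurs in every Watch() in this diff: register a converger UID, flag any activity as not-converged, and let ConvergedTimer() flip it back after a quiet period. Distilled to a skeleton for reference, where the activity channel is a hypothetical stand-in for whatever a given resource actually watches:

cuid := obj.converger.Register() // one UID per watcher
defer cuid.Unregister()
for {
	select {
	case <-activity: // any real event resets the countdown
		cuid.SetConverged(false)
	case <-cuid.ConvergedTimer(): // nothing happened for the timeout window
		cuid.SetConverged(true)
		continue
	}
	// ...process the event, possibly DoSend()...
}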
@@ -236,8 +235,8 @@ func (obj *ExecRes) CheckApply(apply bool) (checkok bool, err error) {
//} else if obj.IfCmd != "" && obj.WatchCmd != "" {
if obj.PollInt > 0 { // && obj.WatchCmd == ""
// XXX have the Watch() command output onlyif poll events...
// XXX we can optimize by saving those results for returning here
// XXX: have the Watch() command output onlyif poll events...
// XXX: we can optimize by saving those results for returning here
// return XXX
}
@@ -340,17 +339,17 @@ func (obj *ExecRes) CheckApply(apply bool) (checkok bool, err error) {
return false, nil // success
}
// ExecUUID is the UUID struct for ExecRes.
type ExecUUID struct {
BaseUUID
// ExecUID is the UID struct for ExecRes.
type ExecUID struct {
BaseUID
Cmd string
IfCmd string
// TODO: add more elements here
}
// IFF aka if and only if they are equivalent, return true. If not, false.
func (obj *ExecUUID) IFF(uuid ResUUID) bool {
res, ok := uuid.(*ExecUUID)
func (obj *ExecUID) IFF(uid ResUID) bool {
res, ok := uid.(*ExecUID)
if !ok {
return false
}
@@ -389,16 +388,16 @@ func (obj *ExecRes) AutoEdges() AutoEdge {
return nil
}
// GetUUIDs includes all params to make a unique identification of this object.
// GetUIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *ExecRes) GetUUIDs() []ResUUID {
x := &ExecUUID{
BaseUUID: BaseUUID{name: obj.GetName(), kind: obj.Kind()},
func (obj *ExecRes) GetUIDs() []ResUID {
x := &ExecUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
Cmd: obj.Cmd,
IfCmd: obj.IfCmd,
// TODO: add more params here
}
return []ResUUID{x}
return []ResUID{x}
}
// GroupCmp returns whether two resources can be grouped together or not.

View File

@@ -60,7 +60,7 @@ type FileRes struct {
}
// NewFileRes is a constructor for this resource. It also calls Init() for you.
func NewFileRes(name, path, dirname, basename, content, source, state string, recurse, force bool) *FileRes {
func NewFileRes(name, path, dirname, basename, content, source, state string, recurse, force bool) (*FileRes, error) {
obj := &FileRes{
BaseRes: BaseRes{
Name: name,
@@ -74,8 +74,7 @@ func NewFileRes(name, path, dirname, basename, content, source, state string, re
Recurse: recurse,
Force: force,
}
obj.Init()
return obj
return obj, obj.Init()
}
// Init runs some startup code for this resource.
@@ -147,8 +146,8 @@ func (obj *FileRes) Watch(processChan chan event.Event) error {
}
obj.SetWatching(true)
defer obj.SetWatching(false)
cuuid := obj.converger.Register()
defer cuuid.Unregister()
cuid := obj.converger.Register()
defer cuid.Unregister()
var startup bool
Startup := func(block bool) <-chan time.Time {
@@ -181,7 +180,7 @@ func (obj *FileRes) Watch(processChan chan event.Event) error {
if !ok { // channel shutdown
return nil
}
cuuid.SetConverged(false)
cuid.SetConverged(false)
if err := event.Error; err != nil {
return fmt.Errorf("Unknown %s[%s] watcher error: %v", obj.Kind(), obj.GetName(), err)
}
@@ -191,19 +190,19 @@ func (obj *FileRes) Watch(processChan chan event.Event) error {
send = true
dirty = true
case event := <-obj.events:
cuuid.SetConverged(false)
case event := <-obj.Events():
cuid.SetConverged(false)
if exit, send = obj.ReadEvent(&event); exit {
return nil // exit
}
//dirty = false // these events don't invalidate state
case <-cuuid.ConvergedTimer():
cuuid.SetConverged(true) // converged!
case <-cuid.ConvergedTimer():
cuid.SetConverged(true) // converged!
continue
case <-Startup(startup):
cuuid.SetConverged(false)
cuid.SetConverged(false)
send = true
dirty = true
}
@@ -365,7 +364,7 @@ func (obj *FileRes) fileCheckApply(apply bool, src io.ReadSeeker, dst string, sh
// hash comparison (efficient because we can cache hash of content str)
if sha256sum == "" { // cache is invalid
hash := sha256.New()
// TODO file existence test?
// TODO: file existence test?
if _, err := io.Copy(hash, src); err != nil {
return "", false, err
}
@@ -674,15 +673,15 @@ func (obj *FileRes) CheckApply(apply bool) (checkOK bool, _ error) {
return checkOK, nil // w00t
}
// FileUUID is the UUID struct for FileRes.
type FileUUID struct {
BaseUUID
// FileUID is the UID struct for FileRes.
type FileUID struct {
BaseUID
path string
}
// IFF aka if and only if they are equivalent, return true. If not, false.
func (obj *FileUUID) IFF(uuid ResUUID) bool {
res, ok := uuid.(*FileUUID)
func (obj *FileUID) IFF(uid ResUID) bool {
res, ok := uid.(*FileUID)
if !ok {
return false
}
@@ -691,13 +690,13 @@ func (obj *FileUUID) IFF(uuid ResUUID) bool {
// FileResAutoEdges holds the state of the auto edge generator.
type FileResAutoEdges struct {
data []ResUUID
data []ResUID
pointer int
found bool
}
// Next returns the next automatic edge.
func (obj *FileResAutoEdges) Next() []ResUUID {
func (obj *FileResAutoEdges) Next() []ResUID {
if obj.found {
log.Fatal("Shouldn't be called anymore!")
}
@@ -706,7 +705,7 @@ func (obj *FileResAutoEdges) Next() []ResUUID {
}
value := obj.data[obj.pointer]
obj.pointer++
return []ResUUID{value} // we return one, even though api supports N
return []ResUID{value} // we return one, even though api supports N
}
// Test gets results of the earlier Next() call, & returns if we should continue!
@@ -731,13 +730,13 @@ func (obj *FileResAutoEdges) Test(input []bool) bool {
// AutoEdges generates a simple linear sequence of each parent directory from
// the bottom up!
func (obj *FileRes) AutoEdges() AutoEdge {
var data []ResUUID // store linear result chain here...
var data []ResUID // store linear result chain here...
values := util.PathSplitFullReversed(obj.path) // build it
_, values = values[0], values[1:] // get rid of first value which is me!
for _, x := range values {
var reversed = true // cheat by passing a pointer
data = append(data, &FileUUID{
BaseUUID: BaseUUID{
data = append(data, &FileUID{
BaseUID: BaseUID{
name: obj.GetName(),
kind: obj.Kind(),
reversed: &reversed,
@@ -752,14 +751,14 @@ func (obj *FileRes) AutoEdges() AutoEdge {
}
}
// GetUUIDs includes all params to make a unique identification of this object.
// GetUIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *FileRes) GetUUIDs() []ResUUID {
x := &FileUUID{
BaseUUID: BaseUUID{name: obj.GetName(), kind: obj.Kind()},
func (obj *FileRes) GetUIDs() []ResUID {
x := &FileUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
path: obj.path,
}
return []ResUUID{x}
return []ResUID{x}
}
// GroupCmp returns whether two resources can be grouped together or not.
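To make the auto edge ordering above concrete: for a file resource managing /var/lib/mgmt/state.yaml, the generator proposes the parent directories deepest-first, one per Next() call. A small self-contained approximation of that ordering, where parentsBottomUp is a hypothetical stand-in for util.PathSplitFullReversed (whose exact output format isn't shown in this diff):

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// parentsBottomUp lists every parent directory of a path, deepest first,
// which is the order FileRes proposes auto edges in; illustrative only.
func parentsBottomUp(p string) []string {
	var out []string
	dir := filepath.Dir(strings.TrimSuffix(p, "/"))
	for dir != "/" && dir != "." {
		out = append(out, dir+"/")
		dir = filepath.Dir(dir)
	}
	out = append(out, "/")
	return out
}

func main() {
	fmt.Println(parentsBottomUp("/var/lib/mgmt/state.yaml"))
	// prints: [/var/lib/mgmt/ /var/lib/ /var/ /]
}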

View File

@@ -47,14 +47,14 @@ type MsgRes struct {
syslogStateOK bool
}
// MsgUUID is a unique representation for a MsgRes object.
type MsgUUID struct {
BaseUUID
// MsgUID is a unique representation for a MsgRes object.
type MsgUID struct {
BaseUID
body string
}
// NewMsgRes is a constructor for this resource.
func NewMsgRes(name, body, priority string, journal, syslog bool, fields map[string]string) *MsgRes {
func NewMsgRes(name, body, priority string, journal, syslog bool, fields map[string]string) (*MsgRes, error) {
message := name
if body != "" {
message = body
@@ -71,8 +71,7 @@ func NewMsgRes(name, body, priority string, journal, syslog bool, fields map[str
Syslog: syslog,
}
obj.Init()
return obj
return obj, obj.Init()
}
// Init runs some startup code for this resource.
@@ -102,8 +101,8 @@ func (obj *MsgRes) Watch(processChan chan event.Event) error {
}
obj.SetWatching(true)
defer obj.SetWatching(false)
cuuid := obj.converger.Register()
defer cuuid.Unregister()
cuid := obj.converger.Register()
defer cuid.Unregister()
var startup bool
Startup := func(block bool) <-chan time.Time {
@@ -119,8 +118,8 @@ func (obj *MsgRes) Watch(processChan chan event.Event) error {
for {
obj.SetState(ResStateWatching) // reset
select {
case event := <-obj.events:
cuuid.SetConverged(false)
case event := <-obj.Events():
cuid.SetConverged(false)
// we avoid sending events on unpause
if exit, send = obj.ReadEvent(&event); exit {
return nil // exit
@@ -138,12 +137,12 @@ func (obj *MsgRes) Watch(processChan chan event.Event) error {
*/
send = true
case <-cuuid.ConvergedTimer():
cuuid.SetConverged(true) // converged!
case <-cuid.ConvergedTimer():
cuid.SetConverged(true) // converged!
continue
case <-Startup(startup):
cuuid.SetConverged(false)
cuid.SetConverged(false)
send = true
}
@@ -160,17 +159,17 @@ func (obj *MsgRes) Watch(processChan chan event.Event) error {
}
}
// GetUUIDs includes all params to make a unique identification of this object.
// GetUIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *MsgRes) GetUUIDs() []ResUUID {
x := &MsgUUID{
BaseUUID: BaseUUID{
func (obj *MsgRes) GetUIDs() []ResUID {
x := &MsgUID{
BaseUID: BaseUID{
name: obj.GetName(),
kind: obj.Kind(),
},
body: obj.Body,
}
return []ResUUID{x}
return []ResUID{x}
}
// AutoEdges returns the AutoEdges. In this case none are used.
@@ -218,7 +217,7 @@ func (obj *MsgRes) isAllStateOK() bool {
}
// JournalPriority converts a string description to a numeric priority.
// XXX Have Validate() make sure it actually is one of these.
// XXX: Have Validate() make sure it actually is one of these.
func (obj *MsgRes) journalPriority() journal.Priority {
switch obj.Priority {
case "Emerg":

View File

@@ -36,15 +36,14 @@ type NoopRes struct {
}
// NewNoopRes is a constructor for this resource. It also calls Init() for you.
func NewNoopRes(name string) *NoopRes {
func NewNoopRes(name string) (*NoopRes, error) {
obj := &NoopRes{
BaseRes: BaseRes{
Name: name,
},
Comment: "",
}
obj.Init()
return obj
return obj, obj.Init()
}
// Init runs some startup code for this resource.
@@ -66,8 +65,8 @@ func (obj *NoopRes) Watch(processChan chan event.Event) error {
}
obj.SetWatching(true)
defer obj.SetWatching(false)
cuuid := obj.converger.Register()
defer cuuid.Unregister()
cuid := obj.converger.Register()
defer cuid.Unregister()
var startup bool
Startup := func(block bool) <-chan time.Time {
@@ -83,19 +82,19 @@ func (obj *NoopRes) Watch(processChan chan event.Event) error {
for {
obj.SetState(ResStateWatching) // reset
select {
case event := <-obj.events:
cuuid.SetConverged(false)
case event := <-obj.Events():
cuid.SetConverged(false)
// we avoid sending events on unpause
if exit, send = obj.ReadEvent(&event); exit {
return nil // exit
}
case <-cuuid.ConvergedTimer():
cuuid.SetConverged(true) // converged!
case <-cuid.ConvergedTimer():
cuid.SetConverged(true) // converged!
continue
case <-Startup(startup):
cuuid.SetConverged(false)
cuid.SetConverged(false)
send = true
}
@@ -118,9 +117,9 @@ func (obj *NoopRes) CheckApply(apply bool) (checkok bool, err error) {
return true, nil // state is always okay
}
// NoopUUID is the UUID struct for NoopRes.
type NoopUUID struct {
BaseUUID
// NoopUID is the UID struct for NoopRes.
type NoopUID struct {
BaseUID
name string
}
@@ -129,14 +128,14 @@ func (obj *NoopRes) AutoEdges() AutoEdge {
return nil
}
// GetUUIDs includes all params to make a unique identification of this object.
// GetUIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *NoopRes) GetUUIDs() []ResUUID {
x := &NoopUUID{
BaseUUID: BaseUUID{name: obj.GetName(), kind: obj.Kind()},
func (obj *NoopRes) GetUIDs() []ResUID {
x := &NoopUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
name: obj.Name,
}
return []ResUUID{x}
return []ResUID{x}
}
// GroupCmp returns whether two resources can be grouped together or not.

View File

@@ -15,8 +15,9 @@
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// DOCS: https://www.freedesktop.org/software/PackageKit/gtk-doc/index.html
// Package packagekit provides an interface to interact with packagekit.
// See: https://www.freedesktop.org/software/PackageKit/gtk-doc/index.html for
// more information.
package packagekit
import (

View File

@@ -48,7 +48,7 @@ type PkgRes struct {
}
// NewPkgRes is a constructor for this resource. It also calls Init() for you.
func NewPkgRes(name, state string, allowuntrusted, allownonfree, allowunsupported bool) *PkgRes {
func NewPkgRes(name, state string, allowuntrusted, allownonfree, allowunsupported bool) (*PkgRes, error) {
obj := &PkgRes{
BaseRes: BaseRes{
Name: name,
@@ -58,8 +58,7 @@ func NewPkgRes(name, state string, allowuntrusted, allownonfree, allowunsupporte
AllowNonFree: allownonfree,
AllowUnsupported: allowunsupported,
}
obj.Init() // XXX: on error return nil, or separate error return?
return obj
return obj, obj.Init()
}
// Init runs some startup code for this resource.
@@ -116,8 +115,8 @@ func (obj *PkgRes) Watch(processChan chan event.Event) error {
}
obj.SetWatching(true)
defer obj.SetWatching(false)
cuuid := obj.converger.Register()
defer cuuid.Unregister()
cuid := obj.converger.Register()
defer cuid.Unregister()
var startup bool
Startup := func(block bool) <-chan time.Time {
@@ -151,7 +150,7 @@ func (obj *PkgRes) Watch(processChan chan event.Event) error {
obj.SetState(ResStateWatching) // reset
select {
case event := <-ch:
cuuid.SetConverged(false)
cuid.SetConverged(false)
// FIXME: ask packagekit for info on what packages changed
if global.DEBUG {
@@ -167,19 +166,19 @@ func (obj *PkgRes) Watch(processChan chan event.Event) error {
send = true
dirty = true
case event := <-obj.events:
cuuid.SetConverged(false)
case event := <-obj.Events():
cuid.SetConverged(false)
if exit, send = obj.ReadEvent(&event); exit {
return nil // exit
}
dirty = false // these events don't invalidate state
case <-cuuid.ConvergedTimer():
cuuid.SetConverged(true) // converged!
case <-cuid.ConvergedTimer():
cuid.SetConverged(true) // converged!
continue
case <-Startup(startup):
cuuid.SetConverged(false)
cuid.SetConverged(false)
send = true
dirty = true
}
@@ -362,16 +361,16 @@ func (obj *PkgRes) CheckApply(apply bool) (checkok bool, err error) {
return false, nil // success
}
// PkgUUID is the UUID struct for PkgRes.
type PkgUUID struct {
BaseUUID
// PkgUID is the UID struct for PkgRes.
type PkgUID struct {
BaseUID
name string // pkg name
state string // pkg state or "version"
}
// IFF aka if and only if they are equivalent, return true. If not, false.
func (obj *PkgUUID) IFF(uuid ResUUID) bool {
res, ok := uuid.(*PkgUUID)
func (obj *PkgUID) IFF(uid ResUID) bool {
res, ok := uid.(*PkgUID)
if !ok {
return false
}
@@ -382,30 +381,30 @@ func (obj *PkgUUID) IFF(uuid ResUUID) bool {
// PkgResAutoEdges holds the state of the auto edge generator.
type PkgResAutoEdges struct {
fileList []string
svcUUIDs []ResUUID
svcUIDs []ResUID
testIsNext bool // safety
name string // saved data from PkgRes obj
kind string
}
// Next returns the next automatic edge.
func (obj *PkgResAutoEdges) Next() []ResUUID {
func (obj *PkgResAutoEdges) Next() []ResUID {
if obj.testIsNext {
log.Fatal("Expecting a call to Test()")
}
obj.testIsNext = true // set after all the errors paths are past
// first return any matching svcUUIDs
if x := obj.svcUUIDs; len(x) > 0 {
// first return any matching svcUIDs
if x := obj.svcUIDs; len(x) > 0 {
return x
}
var result []ResUUID
// return UUID's for whatever is in obj.fileList
var result []ResUID
// return UID's for whatever is in obj.fileList
for _, x := range obj.fileList {
var reversed = false // cheat by passing a pointer
result = append(result, &FileUUID{
BaseUUID: BaseUUID{
result = append(result, &FileUID{
BaseUID: BaseUID{
name: obj.name,
kind: obj.kind,
reversed: &reversed,
@@ -422,12 +421,12 @@ func (obj *PkgResAutoEdges) Test(input []bool) bool {
log.Fatal("Expecting a call to Next()")
}
// ack the svcUUID's...
if x := obj.svcUUIDs; len(x) > 0 {
// ack the svcUID's...
if x := obj.svcUIDs; len(x) > 0 {
if y := len(x); y != len(input) {
log.Fatalf("Expecting %d value(s)!", y)
}
obj.svcUUIDs = []ResUUID{} // empty
obj.svcUIDs = []ResUID{} // empty
obj.testIsNext = false
return true
}
@@ -475,37 +474,37 @@ func (obj *PkgRes) AutoEdges() AutoEdge {
// is contained in the Test() method! This design is completely okay!
// add matches for any svc resources found in pkg definition!
var svcUUIDs []ResUUID
var svcUIDs []ResUID
for _, x := range ReturnSvcInFileList(obj.fileList) {
var reversed = false
svcUUIDs = append(svcUUIDs, &SvcUUID{
BaseUUID: BaseUUID{
svcUIDs = append(svcUIDs, &SvcUID{
BaseUID: BaseUID{
name: obj.GetName(),
kind: obj.Kind(),
reversed: &reversed,
},
name: x, // the svc name itself in the SvcUUID object!
name: x, // the svc name itself in the SvcUID object!
}) // build list
}
return &PkgResAutoEdges{
fileList: util.RemoveCommonFilePrefixes(obj.fileList), // clean start!
svcUUIDs: svcUUIDs,
svcUIDs: svcUIDs,
testIsNext: false, // start with Next() call
name: obj.GetName(), // save data for PkgResAutoEdges obj
kind: obj.Kind(),
}
}
// GetUUIDs includes all params to make a unique identification of this object.
// GetUIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *PkgRes) GetUUIDs() []ResUUID {
x := &PkgUUID{
BaseUUID: BaseUUID{name: obj.GetName(), kind: obj.Kind()},
func (obj *PkgRes) GetUIDs() []ResUID {
x := &PkgUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
name: obj.Name,
state: obj.State,
}
result := []ResUUID{x}
result := []ResUID{x}
return result
}

View File

@@ -15,6 +15,7 @@
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package resources provides the resource framework and idempotent primitives.
package resources
import (
@@ -44,17 +45,17 @@ const (
ResStatePoking
)
// ResUUID is a unique identifier for a resource, namely it's name, and the kind ("type").
type ResUUID interface {
// ResUID is a unique identifier for a resource, namely it's name, and the kind ("type").
type ResUID interface {
GetName() string
Kind() string
IFF(ResUUID) bool
IFF(ResUID) bool
Reversed() bool // true means this resource happens before the generator
}
// The BaseUUID struct is used to provide a unique resource identifier.
type BaseUUID struct {
// The BaseUID struct is used to provide a unique resource identifier.
type BaseUID struct {
name string // name and kind are the values of where this is coming from
kind string
@@ -63,14 +64,14 @@ type BaseUUID struct {
// The AutoEdge interface is used to implement the autoedges feature.
type AutoEdge interface {
Next() []ResUUID // call to get list of edges to add
Next() []ResUID // call to get list of edges to add
Test([]bool) bool // call until false
}
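The Next()/Test() pair is a small generator protocol: the engine asks for candidate UIDs, reports which of them matched a vertex, and keeps going until Test() says to stop. A hedged sketch of the driving loop, where graphHasMatch is a hypothetical placeholder for the lookup the graph engine really performs:

// driveAutoEdges is illustrative only and not part of this commit.
func driveAutoEdges(ae AutoEdge, graphHasMatch func(ResUID) bool) {
	for {
		uids := ae.Next() // candidate edges to try next
		if len(uids) == 0 {
			return // defensive: nothing left to propose
		}
		results := make([]bool, len(uids))
		for i, uid := range uids {
			results[i] = graphHasMatch(uid) // did this UID match a vertex?
		}
		if !ae.Test(results) { // feed the results back; false means stop
			return
		}
	}
}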
// MetaParams is a struct will all params that apply to every resource.
type MetaParams struct {
AutoEdge bool `yaml:"autoedge"` // metaparam, should we generate auto edges? // XXX should default to true
AutoGroup bool `yaml:"autogroup"` // metaparam, should we auto group? // XXX should default to true
AutoEdge bool `yaml:"autoedge"` // metaparam, should we generate auto edges?
AutoGroup bool `yaml:"autogroup"` // metaparam, should we auto group?
Noop bool `yaml:"noop"`
// NOTE: there are separate Watch and CheckApply retry and delay values,
// but I've decided to use the same ones for both until there's a proper
@@ -79,6 +80,29 @@ type MetaParams struct {
Delay uint64 `yaml:"delay"` // metaparam, number of milliseconds to wait between retries
}
// UnmarshalYAML is the custom unmarshal handler for the MetaParams struct. It
// is primarily useful for setting the defaults.
func (obj *MetaParams) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawMetaParams MetaParams // indirection to avoid infinite recursion
raw := rawMetaParams(DefaultMetaParams) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = MetaParams(raw) // restore from indirection with type conversion!
return nil
}
// DefaultMetaParams are the defaults to be used for undefined metaparams.
var DefaultMetaParams = MetaParams{
AutoEdge: true,
AutoGroup: true,
Noop: false,
Retry: 0, // TODO: is this a good default?
Delay: 0, // TODO: is this a good default?
}
// The Base interface is everything that is common to all resources.
// Everything here only needs to be implemented once, in the BaseRes.
type Base interface {
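The rawMetaParams indirection above is the usual trick for YAML defaults: decode into an alias type (which has no UnmarshalYAML of its own, avoiding recursion) that starts out pre-filled with DefaultMetaParams, so keys absent from the YAML keep their defaults. A minimal self-contained sketch of the effect, assuming the gopkg.in/yaml.v2 decoder that the yaml tags suggest:

package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

type metaParams struct {
	AutoEdge  bool `yaml:"autoedge"`
	AutoGroup bool `yaml:"autogroup"`
	Noop      bool `yaml:"noop"`
}

var defaults = metaParams{AutoEdge: true, AutoGroup: true}

func (obj *metaParams) UnmarshalYAML(unmarshal func(interface{}) error) error {
	type raw metaParams  // alias type: no UnmarshalYAML, so no recursion
	out := raw(defaults) // start from the defaults...
	if err := unmarshal(&out); err != nil {
		return err
	}
	*obj = metaParams(out) // ...and keep whatever the YAML overrode
	return nil
}

func main() {
	var m metaParams
	if err := yaml.Unmarshal([]byte("noop: true"), &m); err != nil { // autoedge/autogroup not given
		panic(err)
	}
	fmt.Printf("%+v\n", m) // {AutoEdge:true AutoGroup:true Noop:true}
}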
@@ -109,9 +133,9 @@ type Res interface {
Base // include everything from the Base interface
Init() error
//Validate() error // TODO: this might one day be added
GetUUIDs() []ResUUID // most resources only return one
GetUIDs() []ResUID // most resources only return one
Watch(chan event.Event) error // send on channel to signal process() events
CheckApply(bool) (bool, error)
CheckApply(apply bool) (checkOK bool, err error)
AutoEdges() AutoEdge
Compare(Res) bool
CollectPattern(string) // XXX: temporary until Res collection is more advanced
@@ -131,10 +155,10 @@ type BaseRes struct {
grouped []Res // list of any grouped resources
}
// UUIDExistsInUUIDs wraps the IFF method when used with a list of UUID's.
func UUIDExistsInUUIDs(uuid ResUUID, uuids []ResUUID) bool {
for _, u := range uuids {
if uuid.IFF(u) {
// UIDExistsInUIDs wraps the IFF method when used with a list of UID's.
func UIDExistsInUIDs(uid ResUID, uids []ResUID) bool {
for _, u := range uids {
if uid.IFF(u) {
return true
}
}
@@ -142,30 +166,30 @@ func UUIDExistsInUUIDs(uuid ResUUID, uuids []ResUUID) bool {
}
// GetName returns the name of the resource.
func (obj *BaseUUID) GetName() string {
func (obj *BaseUID) GetName() string {
return obj.name
}
// Kind returns the kind of resource.
func (obj *BaseUUID) Kind() string {
func (obj *BaseUID) Kind() string {
return obj.kind
}
// IFF looks at two UUID's and if and only if they are equivalent, returns true.
// IFF looks at two UID's and if and only if they are equivalent, returns true.
// If they are not equivalent, it returns false.
// Most resources will want to override this method, since it does the important
// work of actually discerning if two resources are identical in function.
func (obj *BaseUUID) IFF(uuid ResUUID) bool {
res, ok := uuid.(*BaseUUID)
func (obj *BaseUID) IFF(uid ResUID) bool {
res, ok := uid.(*BaseUID)
if !ok {
return false
}
return obj.name == res.name
}
// Reversed is part of the ResUUID interface, and true means this resource
// Reversed is part of the ResUID interface, and true means this resource
// happens before the generator.
func (obj *BaseUUID) Reversed() bool {
func (obj *BaseUID) Reversed() bool {
if obj.reversed == nil {
log.Fatal("Programming error!")
}
@@ -174,6 +198,9 @@ func (obj *BaseUUID) Reversed() bool {
// Init initializes structures like channels if created without New constructor.
func (obj *BaseRes) Init() error {
if obj.kind == "" {
return fmt.Errorf("Resource did not set kind!")
}
obj.events = make(chan event.Event) // unbuffered chan size to avoid stale events
return nil
}
@@ -252,7 +279,7 @@ func (obj *BaseRes) DoSend(processChan chan event.Event, comment string) (bool,
// }
//case event := <-obj.events:
// // NOTE: this code should match the similar code below!
// //cuuid.SetConverged(false) // TODO ?
// //cuid.SetConverged(false) // TODO: ?
// if exit, send := obj.ReadEvent(&event); exit {
// return true, nil // exit, without error
// } else if send {
@@ -299,7 +326,7 @@ func (obj *BaseRes) ReadEvent(ev *event.Event) (exit, poke bool) {
case event.EventPause:
// wait for next event to continue
select {
case e := <-obj.events:
case e := <-obj.Events():
e.ACK()
if e.Name == event.EventExit {
return true, false
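Every `case event := <-obj.events:` in this diff becomes `case event := <-obj.Events():`, but the accessor itself isn't part of these hunks. Presumably it is little more than a getter on BaseRes, roughly the following (a guess, included only to show why the call sites could change mechanically):

// hedged guess at the accessor; the real method may differ slightly
func (obj *BaseRes) Events() chan event.Event {
	return obj.events
}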

View File

@@ -105,16 +105,16 @@ func TestMiscEncodeDecode2(t *testing.T) {
}
func TestIFF(t *testing.T) {
uuid := &BaseUUID{name: "/tmp/unit-test"}
same := &BaseUUID{name: "/tmp/unit-test"}
diff := &BaseUUID{name: "/tmp/other-file"}
uid := &BaseUID{name: "/tmp/unit-test"}
same := &BaseUID{name: "/tmp/unit-test"}
diff := &BaseUID{name: "/tmp/other-file"}
if !uuid.IFF(same) {
t.Error("basic resource UUIDs with the same name should satisfy each other's IFF condition.")
if !uid.IFF(same) {
t.Error("basic resource UIDs with the same name should satisfy each other's IFF condition.")
}
if uuid.IFF(diff) {
t.Error("basic resource UUIDs with different names should NOT satisfy each other's IFF condition.")
if uid.IFF(diff) {
t.Error("basic resource UIDs with different names should NOT satisfy each other's IFF condition.")
}
}

View File

@@ -46,7 +46,7 @@ type SvcRes struct {
}
// NewSvcRes is a constructor for this resource. It also calls Init() for you.
func NewSvcRes(name, state, startup string) *SvcRes {
func NewSvcRes(name, state, startup string) (*SvcRes, error) {
obj := &SvcRes{
BaseRes: BaseRes{
Name: name,
@@ -54,8 +54,7 @@ func NewSvcRes(name, state, startup string) *SvcRes {
State: state,
Startup: startup,
}
obj.Init()
return obj
return obj, obj.Init()
}
// Init runs some startup code for this resource.
@@ -82,8 +81,8 @@ func (obj *SvcRes) Watch(processChan chan event.Event) error {
}
obj.SetWatching(true)
defer obj.SetWatching(false)
cuuid := obj.converger.Register()
defer cuuid.Unregister()
cuid := obj.converger.Register()
defer cuid.Unregister()
var startup bool
Startup := func(block bool) <-chan time.Time {
@@ -145,7 +144,7 @@ func (obj *SvcRes) Watch(processChan chan event.Event) error {
var notFound = (loadstate.Value == dbus.MakeVariant("not-found"))
if notFound { // XXX: in the loop we'll handle changes better...
log.Printf("Failed to find svc: %v", svc)
invalid = true // XXX ?
invalid = true // XXX: ?
}
}
@@ -163,13 +162,13 @@ func (obj *SvcRes) Watch(processChan chan event.Event) error {
obj.SetState(ResStateWatching) // reset
select {
case <-buschan: // XXX wait for new units event to unstick
cuuid.SetConverged(false)
case <-buschan: // XXX: wait for new units event to unstick
cuid.SetConverged(false)
// loop so that we can see the changed invalid signal
log.Printf("Svc[%v]->DaemonReload()", svc)
case event := <-obj.events:
cuuid.SetConverged(false)
case event := <-obj.Events():
cuid.SetConverged(false)
if exit, send = obj.ReadEvent(&event); exit {
return nil // exit
}
@@ -177,12 +176,12 @@ func (obj *SvcRes) Watch(processChan chan event.Event) error {
dirty = true
}
case <-cuuid.ConvergedTimer():
cuuid.SetConverged(true) // converged!
case <-cuid.ConvergedTimer():
cuid.SetConverged(true) // converged!
continue
case <-Startup(startup):
cuuid.SetConverged(false)
cuid.SetConverged(false)
send = true
dirty = true
}
@@ -220,11 +219,11 @@ func (obj *SvcRes) Watch(processChan chan event.Event) error {
dirty = true
case err := <-subErrors:
cuuid.SetConverged(false)
cuid.SetConverged(false)
return fmt.Errorf("Unknown %s[%s] error: %v", obj.Kind(), obj.GetName(), err)
case event := <-obj.events:
cuuid.SetConverged(false)
case event := <-obj.Events():
cuid.SetConverged(false)
if exit, send = obj.ReadEvent(&event); exit {
return nil // exit
}
@@ -232,12 +231,12 @@ func (obj *SvcRes) Watch(processChan chan event.Event) error {
dirty = true
}
case <-cuuid.ConvergedTimer():
cuuid.SetConverged(true) // converged!
case <-cuid.ConvergedTimer():
cuid.SetConverged(true) // converged!
continue
case <-Startup(startup):
cuuid.SetConverged(false)
cuid.SetConverged(false)
send = true
dirty = true
}
@@ -299,7 +298,7 @@ func (obj *SvcRes) CheckApply(apply bool) (checkok bool, err error) {
var running = (activestate.Value == dbus.MakeVariant("active"))
var stateOK = ((obj.State == "") || (obj.State == "running" && running) || (obj.State == "stopped" && !running))
var startupOK = true // XXX DETECT AND SET
var startupOK = true // XXX: DETECT AND SET
if stateOK && startupOK {
return true, nil // we are in the correct state
@@ -352,19 +351,19 @@ func (obj *SvcRes) CheckApply(apply bool) (checkok bool, err error) {
return false, nil // success
}
// SvcUUID is the UUID struct for SvcRes.
type SvcUUID struct {
// NOTE: there is also a name variable in the BaseUUID struct, this is
// information about where this UUID came from, and is unrelated to the
// SvcUID is the UID struct for SvcRes.
type SvcUID struct {
// NOTE: there is also a name variable in the BaseUID struct, this is
// information about where this UID came from, and is unrelated to the
// information about the resource we're matching. That data which is
// used in the IFF function, is what you see in the struct fields here.
BaseUUID
BaseUID
name string // the svc name
}
// IFF aka if and only if they are equivalent, return true. If not, false.
func (obj *SvcUUID) IFF(uuid ResUUID) bool {
res, ok := uuid.(*SvcUUID)
func (obj *SvcUID) IFF(uid ResUID) bool {
res, ok := uid.(*SvcUID)
if !ok {
return false
}
@@ -373,13 +372,13 @@ func (obj *SvcUUID) IFF(uuid ResUUID) bool {
// SvcResAutoEdges holds the state of the auto edge generator.
type SvcResAutoEdges struct {
data []ResUUID
data []ResUID
pointer int
found bool
}
// Next returns the next automatic edge.
func (obj *SvcResAutoEdges) Next() []ResUUID {
func (obj *SvcResAutoEdges) Next() []ResUID {
if obj.found {
log.Fatal("Shouldn't be called anymore!")
}
@@ -388,7 +387,7 @@ func (obj *SvcResAutoEdges) Next() []ResUUID {
}
value := obj.data[obj.pointer]
obj.pointer++
return []ResUUID{value} // we return one, even though api supports N
return []ResUID{value} // we return one, even though api supports N
}
// Test gets results of the earlier Next() call, & returns if we should continue!
@@ -412,15 +411,15 @@ func (obj *SvcResAutoEdges) Test(input []bool) bool {
// AutoEdges returns the AutoEdge interface. In this case the systemd units.
func (obj *SvcRes) AutoEdges() AutoEdge {
var data []ResUUID
var data []ResUID
svcFiles := []string{
fmt.Sprintf("/etc/systemd/system/%s.service", obj.Name), // takes precedence
fmt.Sprintf("/usr/lib/systemd/system/%s.service", obj.Name), // pkg default
}
for _, x := range svcFiles {
var reversed = true
data = append(data, &FileUUID{
BaseUUID: BaseUUID{
data = append(data, &FileUID{
BaseUID: BaseUID{
name: obj.GetName(),
kind: obj.Kind(),
reversed: &reversed,
@@ -435,14 +434,14 @@ func (obj *SvcRes) AutoEdges() AutoEdge {
}
}
// GetUUIDs includes all params to make a unique identification of this object.
// GetUIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *SvcRes) GetUUIDs() []ResUUID {
x := &SvcUUID{
BaseUUID: BaseUUID{name: obj.GetName(), kind: obj.Kind()},
func (obj *SvcRes) GetUIDs() []ResUID {
x := &SvcUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
name: obj.Name, // svc name
}
return []ResUUID{x}
return []ResUID{x}
}
// GroupCmp returns whether two resources can be grouped together or not.

View File

@@ -35,22 +35,21 @@ type TimerRes struct {
Interval int `yaml:"interval"` // Interval : Interval between runs
}
// TimerUUID is the UUID struct for TimerRes.
type TimerUUID struct {
BaseUUID
// TimerUID is the UID struct for TimerRes.
type TimerUID struct {
BaseUID
name string
}
// NewTimerRes is a constructor for this resource. It also calls Init() for you.
func NewTimerRes(name string, interval int) *TimerRes {
func NewTimerRes(name string, interval int) (*TimerRes, error) {
obj := &TimerRes{
BaseRes: BaseRes{
Name: name,
},
Interval: interval,
}
obj.Init()
return obj
return obj, obj.Init()
}
// Init runs some startup code for this resource.
@@ -73,8 +72,8 @@ func (obj *TimerRes) Watch(processChan chan event.Event) error {
}
obj.SetWatching(true)
defer obj.SetWatching(false)
cuuid := obj.converger.Register()
defer cuuid.Unregister()
cuid := obj.converger.Register()
defer cuid.Unregister()
var startup bool
Startup := func(block bool) <-chan time.Time {
@@ -97,17 +96,18 @@ func (obj *TimerRes) Watch(processChan chan event.Event) error {
case <-ticker.C: // received the timer event
send = true
log.Printf("%v[%v]: received tick", obj.Kind(), obj.GetName())
case event := <-obj.events:
cuuid.SetConverged(false)
case event := <-obj.Events():
cuid.SetConverged(false)
if exit, _ := obj.ReadEvent(&event); exit {
return nil
}
case <-cuuid.ConvergedTimer():
cuuid.SetConverged(true)
case <-cuid.ConvergedTimer():
cuid.SetConverged(true)
continue
case <-Startup(startup):
cuuid.SetConverged(false)
cuid.SetConverged(false)
send = true
}
if send {
@@ -121,17 +121,17 @@ func (obj *TimerRes) Watch(processChan chan event.Event) error {
}
}
// GetUUIDs includes all params to make a unique identification of this object.
// GetUIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *TimerRes) GetUUIDs() []ResUUID {
x := &TimerUUID{
BaseUUID: BaseUUID{
func (obj *TimerRes) GetUIDs() []ResUID {
x := &TimerUID{
BaseUID: BaseUID{
name: obj.GetName(),
kind: obj.Kind(),
},
name: obj.Name,
}
return []ResUUID{x}
return []ResUID{x}
}
// AutoEdges returns the AutoEdge interface. In this case no autoedges are used.

resources/virt.go (new file, 729 lines)
View File

@@ -0,0 +1,729 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"encoding/gob"
"fmt"
"log"
"math/rand"
"time"
"github.com/purpleidea/mgmt/event"
"github.com/purpleidea/mgmt/global"
errwrap "github.com/pkg/errors"
"github.com/rgbkrk/libvirt-go"
)
func init() {
gob.Register(&VirtRes{})
}
var (
libvirtInitialized = false
)
// VirtRes is a libvirt resource. A transient virt resource, which has its state
// set to `shutoff` is one which does not exist. The parallel equivalent is a
// file resource which removes a particular path.
type VirtRes struct {
BaseRes `yaml:",inline"`
URI string `yaml:"uri"` // connection uri, eg: qemu:///session
State string `yaml:"state"` // running, paused, shutoff
Transient bool `yaml:"transient"` // defined (false) or undefined (true)
CPUs uint16 `yaml:"cpus"`
Memory uint64 `yaml:"memory"` // in KBytes
Boot []string `yaml:"boot"` // boot order. values: fd, hd, cdrom, network
Disk []diskDevice `yaml:"disk"`
CDRom []cdRomDevice `yaml:"cdrom"`
Network []networkDevice `yaml:"network"`
Filesystem []filesystemDevice `yaml:"filesystem"`
conn libvirt.VirConnection
absent bool // cached state
}
// NewVirtRes is a constructor for this resource. It also calls Init() for you.
func NewVirtRes(name string, uri, state string, transient bool, cpus uint16, memory uint64) (*VirtRes, error) {
obj := &VirtRes{
BaseRes: BaseRes{
Name: name,
},
URI: uri,
State: state,
Transient: transient,
CPUs: cpus,
Memory: memory,
}
return obj, obj.Init()
}
// Init runs some startup code for this resource.
func (obj *VirtRes) Init() error {
if !libvirtInitialized {
if err := libvirt.EventRegisterDefaultImpl(); err != nil {
return errwrap.Wrapf(err, "EventRegisterDefaultImpl failed")
}
libvirtInitialized = true
}
obj.absent = (obj.Transient && obj.State == "shutoff") // machine shouldn't exist
obj.BaseRes.kind = "Virt"
return obj.BaseRes.Init() // call base init, b/c we're overriding
}
// Validate if the params passed in are valid data.
func (obj *VirtRes) Validate() error {
return nil
}
// Watch is the primary listener for this resource and it outputs events.
func (obj *VirtRes) Watch(processChan chan event.Event) error {
if obj.IsWatching() {
return nil
}
obj.SetWatching(true)
defer obj.SetWatching(false)
cuid := obj.converger.Register()
defer cuid.Unregister()
var startup bool
Startup := func(block bool) <-chan time.Time {
if block {
return nil // blocks forever
//return make(chan time.Time) // blocks forever
}
return time.After(time.Duration(500) * time.Millisecond) // 1/2 the resolution of converged timeout
}
conn, err := libvirt.NewVirConnection(obj.URI)
if err != nil {
return fmt.Errorf("Connection to libvirt failed with: %s", err)
}
eventChan := make(chan int) // TODO: do we need to buffer this?
errorChan := make(chan error)
exitChan := make(chan struct{})
defer close(exitChan)
// run libvirt event loop
// TODO: *trigger* EventRunDefaultImpl to unblock so it can shut down...
// at the moment this isn't a major issue because it seems to unblock in
// bursts every 5 seconds! we can do this by writing to an event handler
// in the meantime, terminating the program causes it to exit anyways...
go func() {
for {
// TODO: can we merge this into our main for loop below?
select {
case <-exitChan:
log.Printf("EventRunDefaultImpl exited!")
return
default:
}
//log.Printf("EventRunDefaultImpl started!")
if err := libvirt.EventRunDefaultImpl(); err != nil {
errorChan <- errwrap.Wrapf(err, "EventRunDefaultImpl failed")
return
}
//log.Printf("EventRunDefaultImpl looped!")
}
}()
callback := libvirt.DomainEventCallback(
func(c *libvirt.VirConnection, d *libvirt.VirDomain, eventDetails interface{}, f func()) int {
if lifecycleEvent, ok := eventDetails.(libvirt.DomainLifecycleEvent); ok {
domName, _ := d.GetName()
if domName == obj.GetName() {
eventChan <- lifecycleEvent.Event
}
} else if global.DEBUG {
log.Printf("%s[%s]: Event details isn't DomainLifecycleEvent", obj.Kind(), obj.GetName())
}
return 0
},
)
callbackID := conn.DomainEventRegister(
libvirt.VirDomain{},
libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
&callback,
nil,
)
defer conn.DomainEventDeregister(callbackID)
var send = false
var exit = false
var dirty = false
for {
select {
case event := <-eventChan:
// TODO: shouldn't we do these checks in CheckApply ?
switch event {
case libvirt.VIR_DOMAIN_EVENT_DEFINED:
if obj.Transient {
dirty = true
send = true
}
case libvirt.VIR_DOMAIN_EVENT_UNDEFINED:
if !obj.Transient {
dirty = true
send = true
}
case libvirt.VIR_DOMAIN_EVENT_STARTED:
fallthrough
case libvirt.VIR_DOMAIN_EVENT_RESUMED:
if obj.State != "running" {
dirty = true
send = true
}
case libvirt.VIR_DOMAIN_EVENT_SUSPENDED:
if obj.State != "paused" {
dirty = true
send = true
}
case libvirt.VIR_DOMAIN_EVENT_STOPPED:
fallthrough
case libvirt.VIR_DOMAIN_EVENT_SHUTDOWN:
if obj.State != "shutoff" {
dirty = true
send = true
}
case libvirt.VIR_DOMAIN_EVENT_PMSUSPENDED:
fallthrough
case libvirt.VIR_DOMAIN_EVENT_CRASHED:
dirty = true
send = true
}
case err := <-errorChan:
cuid.SetConverged(false)
return fmt.Errorf("Unknown %s[%s] libvirt error: %s", obj.Kind(), obj.GetName(), err)
case event := <-obj.Events():
cuid.SetConverged(false)
if exit, send = obj.ReadEvent(&event); exit {
return nil // exit
}
case <-cuid.ConvergedTimer():
cuid.SetConverged(true) // converged!
continue
case <-Startup(startup):
cuid.SetConverged(false)
send = true
dirty = true
}
if send {
startup = true // startup finished
send = false
// only invalid state on certain types of events
if dirty {
dirty = false
obj.isStateOK = false // something made state dirty
}
if exit, err := obj.DoSend(processChan, ""); exit || err != nil {
return err // we exit or bubble up a NACK...
}
}
}
}
// attrCheckApply performs the CheckApply functions for CPU, Memory and others.
// This shouldn't be called when the machine is absent; it won't be found!
func (obj *VirtRes) attrCheckApply(apply bool) (bool, error) {
var checkOK = true
dom, err := obj.conn.LookupDomainByName(obj.GetName())
if err != nil {
return false, errwrap.Wrapf(err, "conn.LookupDomainByName failed")
}
domInfo, err := dom.GetInfo()
if err != nil {
// we don't know if the state is ok
return false, errwrap.Wrapf(err, "domain.GetInfo failed")
}
// check memory
if domInfo.GetMemory() != obj.Memory {
checkOK = false
if !apply {
return false, nil
}
if err := dom.SetMemory(obj.Memory); err != nil {
return false, errwrap.Wrapf(err, "domain.SetMemory failed")
}
log.Printf("%s[%s]: Memory changed", obj.Kind(), obj.GetName())
}
// check cpus
if domInfo.GetNrVirtCpu() != obj.CPUs {
checkOK = false
if !apply {
return false, nil
}
if err := dom.SetVcpus(obj.CPUs); err != nil {
return false, errwrap.Wrapf(err, "domain.SetVcpus failed")
}
log.Printf("%s[%s]: CPUs changed", obj.Kind(), obj.GetName())
}
return checkOK, nil
}
// domainCreate creates a transient or persistent domain in the correct state. It
// doesn't check the state before hand, as it is a simple helper function.
func (obj *VirtRes) domainCreate() (libvirt.VirDomain, bool, error) {
if obj.Transient {
var flag uint32
var state string
switch obj.State {
case "running":
flag = libvirt.VIR_DOMAIN_NONE
state = "started"
case "paused":
flag = libvirt.VIR_DOMAIN_START_PAUSED
state = "paused"
case "shutoff":
// a transient, shutoff machine, means machine is absent
return libvirt.VirDomain{}, true, nil // returned dom is invalid
}
dom, err := obj.conn.DomainCreateXML(obj.getDomainXML(), flag)
if err != nil {
return dom, false, err // returned dom is invalid
}
log.Printf("%s[%s]: Domain transient %s", state, obj.Kind(), obj.GetName())
return dom, false, nil
}
dom, err := obj.conn.DomainDefineXML(obj.getDomainXML())
if err != nil {
return dom, false, err // returned dom is invalid
}
log.Printf("%s[%s]: Domain defined", obj.Kind(), obj.GetName())
if obj.State == "running" {
if err := dom.Create(); err != nil {
return dom, false, err
}
log.Printf("%s[%s]: Domain started", obj.Kind(), obj.GetName())
}
if obj.State == "paused" {
if err := dom.CreateWithFlags(libvirt.VIR_DOMAIN_START_PAUSED); err != nil {
return dom, false, err
}
log.Printf("%s[%s]: Domain created paused", obj.Kind(), obj.GetName())
}
return dom, false, nil
}
// CheckApply checks the resource state and applies the resource if the bool
// input is true. It returns error info and if the state check passed or not.
func (obj *VirtRes) CheckApply(apply bool) (bool, error) {
log.Printf("%s[%s]: CheckApply(%t)", obj.Kind(), obj.GetName(), apply)
if obj.isStateOK { // cache the state
return true, nil
}
var err error
obj.conn, err = libvirt.NewVirConnection(obj.URI)
if err != nil {
return false, fmt.Errorf("Connection to libvirt failed with: %s", err)
}
var checkOK = true
dom, err := obj.conn.LookupDomainByName(obj.GetName())
if err == nil {
// pass
} else if virErr, ok := err.(libvirt.VirError); ok && virErr.Domain == libvirt.VIR_FROM_QEMU && virErr.Code == libvirt.VIR_ERR_NO_DOMAIN {
// domain not found
if obj.absent {
obj.isStateOK = true
return true, nil
}
if !apply {
return false, nil
}
var c = true
dom, c, err = obj.domainCreate() // create the domain
if err != nil {
return false, errwrap.Wrapf(err, "domainCreate failed")
} else if !c {
checkOK = false
}
} else {
return false, errwrap.Wrapf(err, "LookupDomainByName failed")
}
defer dom.Free()
// domain exists
domInfo, err := dom.GetInfo()
if err != nil {
// we don't know if the state is ok
return false, errwrap.Wrapf(err, "domain.GetInfo failed")
}
isPersistent, err := dom.IsPersistent()
if err != nil {
// we don't know if the state is ok
return false, errwrap.Wrapf(err, "domain.IsPersistent failed")
}
isActive, err := dom.IsActive()
if err != nil {
// we don't know if the state is ok
return false, errwrap.Wrapf(err, "domain.IsActive failed")
}
// check for persistence
if isPersistent == obj.Transient { // if they're different!
if !apply {
return false, nil
}
if isPersistent {
if err := dom.Undefine(); err != nil {
return false, errwrap.Wrapf(err, "domain.Undefine failed")
}
log.Printf("%s[%s]: Domain undefined", obj.Kind(), obj.GetName())
} else {
domXML, err := dom.GetXMLDesc(libvirt.VIR_DOMAIN_XML_INACTIVE)
if err != nil {
return false, errwrap.Wrapf(err, "domain.GetXMLDesc failed")
}
if _, err = obj.conn.DomainDefineXML(domXML); err != nil {
return false, errwrap.Wrapf(err, "conn.DomainDefineXML failed")
}
log.Printf("%s[%s]: Domain defined", obj.Kind(), obj.GetName())
}
checkOK = false
}
// check for valid state
domState := domInfo.GetState()
switch obj.State {
case "running":
if domState == libvirt.VIR_DOMAIN_RUNNING {
break
}
if domState == libvirt.VIR_DOMAIN_BLOCKED {
// TODO: what should happen?
return false, fmt.Errorf("Domain %s is blocked!", obj.GetName())
}
if !apply {
return false, nil
}
if isActive { // domain must be paused ?
if err := dom.Resume(); err != nil {
return false, errwrap.Wrapf(err, "domain.Resume failed")
}
checkOK = false
log.Printf("%s[%s]: Domain resumed", obj.Kind(), obj.GetName())
break
}
if err := dom.Create(); err != nil {
return false, errwrap.Wrapf(err, "domain.Create failed")
}
checkOK = false
log.Printf("%s[%s]: Domain created", obj.Kind(), obj.GetName())
case "paused":
if domState == libvirt.VIR_DOMAIN_PAUSED {
break
}
if !apply {
return false, nil
}
if isActive { // domain must be running ?
if err := dom.Suspend(); err != nil {
return false, errwrap.Wrapf(err, "domain.Suspend failed")
}
checkOK = false
log.Printf("%s[%s]: Domain paused", obj.Kind(), obj.GetName())
break
}
if err := dom.CreateWithFlags(libvirt.VIR_DOMAIN_START_PAUSED); err != nil {
return false, errwrap.Wrapf(err, "domain.CreateWithFlags failed")
}
checkOK = false
log.Printf("%s[%s]: Domain created paused", obj.Kind(), obj.GetName())
case "shutoff":
if domState == libvirt.VIR_DOMAIN_SHUTOFF || domState == libvirt.VIR_DOMAIN_SHUTDOWN {
break
}
if !apply {
return false, nil
}
if err := dom.Destroy(); err != nil {
return false, errwrap.Wrapf(err, "domain.Destroy failed")
}
checkOK = false
log.Printf("%s[%s]: Domain destroyed", obj.Kind(), obj.GetName())
}
if !apply {
return false, nil
}
// remaining apply portion
// mem & cpu checks...
if !obj.absent {
if c, err := obj.attrCheckApply(apply); err != nil {
return false, errwrap.Wrapf(err, "attrCheckApply failed")
} else if !c {
checkOK = false
}
}
if apply || checkOK {
obj.isStateOK = true
}
return checkOK, nil // w00t
}
func (obj *VirtRes) getDomainXML() string {
var b string
b += "<domain type='kvm'>" // start domain
b += fmt.Sprintf("<name>%s</name>", obj.GetName())
b += fmt.Sprintf("<memory unit='KiB'>%d</memory>", obj.Memory)
b += fmt.Sprintf("<vcpu>%d</vcpu>", obj.CPUs)
b += "<os>"
b += "<type>hvm</type>"
if obj.Boot != nil {
for _, boot := range obj.Boot {
b += fmt.Sprintf("<boot dev='%s'/>", boot)
}
}
b += fmt.Sprintf("</os>")
b += fmt.Sprintf("<devices>") // start devices
if obj.Disk != nil {
for i, disk := range obj.Disk {
b += fmt.Sprintf(disk.GetXML(i))
}
}
if obj.CDRom != nil {
for i, cdrom := range obj.CDRom {
b += fmt.Sprintf(cdrom.GetXML(i))
}
}
if obj.Network != nil {
for i, net := range obj.Network {
b += fmt.Sprintf(net.GetXML(i))
}
}
if obj.Filesystem != nil {
for i, fs := range obj.Filesystem {
b += fmt.Sprintf(fs.GetXML(i))
}
}
b += "<serial type='pty'><target port='0'/></serial>"
b += "<console type='pty'><target type='serial' port='0'/></console>"
b += "</devices>" // end devices
b += "</domain>" // end domain
return b
}
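For orientation, a VirtRes named www1 with one vCPU, 524288 KiB of memory and no boot, disk, cdrom, network or filesystem entries produces a single unbroken string equivalent to the following (line breaks and indentation added here for readability only):

<domain type='kvm'>
  <name>www1</name>
  <memory unit='KiB'>524288</memory>
  <vcpu>1</vcpu>
  <os><type>hvm</type></os>
  <devices>
    <serial type='pty'><target port='0'/></serial>
    <console type='pty'><target type='serial' port='0'/></console>
  </devices>
</domain>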
type virtDevice interface {
GetXML(idx int) string
}
type diskDevice struct {
Source string `yaml:"source"`
Type string `yaml:"type"`
}
type cdRomDevice struct {
Source string `yaml:"source"`
Type string `yaml:"type"`
}
type networkDevice struct {
Name string `yaml:"name"`
MAC string `yaml:"mac"`
}
type filesystemDevice struct {
Access string `yaml:"access"`
Source string `yaml:"source"`
Target string `yaml:"target"`
ReadOnly bool `yaml:"read_only"`
}
func (d *diskDevice) GetXML(idx int) string {
var b string
b += "<disk type='file' device='disk'>"
b += fmt.Sprintf("<driver name='qemu' type='%s'/>", d.Type)
b += fmt.Sprintf("<source file='%s'/>", d.Source)
b += fmt.Sprintf("<target dev='vd%s' bus='virtio'/>", (string)(idx+97)) // TODO: 26, 27... should be 'aa', 'ab'...
b += "</disk>"
return b
}
func (d *cdRomDevice) GetXML(idx int) string {
var b string
b += "<disk type='file' device='cdrom'>"
b += fmt.Sprintf("<driver name='qemu' type='%s'/>", d.Type)
b += fmt.Sprintf("<source file='%s'/>", d.Source)
b += fmt.Sprintf("<target dev='hd%s' bus='ide'/>", (string)(idx+97)) // TODO: 26, 27... should be 'aa', 'ab'...
b += "<readonly/>"
b += "</disk>"
return b
}
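Both GetXML methods above map the device index to a letter with (string)(idx+97), which, as the TODOs note, only works for the first 26 devices. A hypothetical helper that extends the mapping to aa, ab, ... (not part of this commit, shown only as one way to resolve the TODO):

// driveSuffix converts 0,1,...,25,26,27,... to a,b,...,z,aa,ab,... in the
// style libvirt device names use; illustrative only.
func driveSuffix(idx int) string {
	s := ""
	for {
		s = string(rune('a'+idx%26)) + s
		idx = idx/26 - 1
		if idx < 0 {
			break
		}
	}
	return s // driveSuffix(0) == "a", driveSuffix(25) == "z", driveSuffix(26) == "aa"
}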
func (d *networkDevice) GetXML(idx int) string {
if d.MAC == "" {
d.MAC = randMAC()
}
var b string
b += "<interface type='network'>"
b += fmt.Sprintf("<mac address='%s'/>", d.MAC)
b += fmt.Sprintf("<source network='%s'/>", d.Name)
b += "</interface>"
return b
}
func (d *filesystemDevice) GetXML(idx int) string {
var b string
b += "<filesystem" // open
if d.Access != "" {
b += fmt.Sprintf(" accessmode='%s'", d.Access)
}
b += ">" // close
b += fmt.Sprintf("<source dir='%s'/>", d.Source)
b += fmt.Sprintf("<target dir='%s'/>", d.Target)
if d.ReadOnly {
b += "<readonly/>"
}
b += "</filesystem>"
return b
}
// VirtUID is the UID struct for FileRes.
type VirtUID struct {
BaseUID
}
// GetUIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *VirtRes) GetUIDs() []ResUID {
x := &VirtUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
// TODO: add more properties here so we can link to vm dependencies
}
return []ResUID{x}
}
// GroupCmp returns whether two resources can be grouped together or not.
func (obj *VirtRes) GroupCmp(r Res) bool {
_, ok := r.(*VirtRes)
if !ok {
return false
}
return false // not possible atm
}
// AutoEdges returns the AutoEdge interface. In this case no autoedges are used.
func (obj *VirtRes) AutoEdges() AutoEdge {
return nil
}
// Compare two resources and return if they are equivalent.
func (obj *VirtRes) Compare(res Res) bool {
switch res.(type) {
case *VirtRes:
res := res.(*VirtRes)
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
if obj.URI != res.URI {
return false
}
if obj.State != res.State {
return false
}
if obj.Transient != res.Transient {
return false
}
if obj.CPUs != res.CPUs {
return false
}
// TODO: can we skip the compare of certain properties such as
// Memory because this object (but with different memory) can be
// *converted* into the new version that has more/less memory?
// We would need to run some sort of "old struct update", to get
// the new values, but that's easy to add.
if obj.Memory != res.Memory {
return false
}
// TODO:
//if obj.Boot != res.Boot {
// return false
//}
//if obj.Disk != res.Disk {
// return false
//}
//if obj.CDRom != res.CDRom {
// return false
//}
//if obj.Network != res.Network {
// return false
//}
//if obj.Filesystem != res.Filesystem {
// return false
//}
default:
return false
}
return true
}
// CollectPattern applies the pattern for collection resources.
func (obj *VirtRes) CollectPattern(string) {
}
// randMAC returns a random mac address in the libvirt range.
func randMAC() string {
rand.Seed(time.Now().UnixNano())
return "52:54:00" +
fmt.Sprintf(":%x", rand.Intn(255)) +
fmt.Sprintf(":%x", rand.Intn(255)) +
fmt.Sprintf(":%x", rand.Intn(255))
}
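One caveat with randMAC: %x prints a value below 16 as a single digit, so the result is not always in the canonical two-digits-per-octet form (e.g. 52:54:00:7:a:3f). A hypothetical zero-padded variant, identical otherwise, would be:

// randMACPadded is an illustrative variant only: same range as above, but
// each random octet is always rendered as two hex digits.
func randMACPadded() string {
	rand.Seed(time.Now().UnixNano())
	return fmt.Sprintf("52:54:00:%02x:%02x:%02x",
		rand.Intn(255), rand.Intn(255), rand.Intn(255))
}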

View File

@@ -12,11 +12,14 @@ Source0: https://dl.fedoraproject.org/pub/alt/purpleidea/__PROGRAM__/SOURCES/__P
# graphviz should really be a "suggests", since technically it's optional
Requires: graphviz
BuildRequires: golang
# If go_compiler is not set to 1, there is no virtual provide. Use golang instead.
BuildRequires: %{?go_compiler:compiler(go-compiler)}%{!?go_compiler:golang}
BuildRequires: golang-googlecode-tools-stringer
BuildRequires: git-core
BuildRequires: mercurial
ExclusiveArch: %{go_arches}
%description
A next generation config management prototype!

View File

@@ -32,7 +32,7 @@
- iptables -F
- cd /vagrant/mgmt/ && make path
- cd /vagrant/mgmt/ && make deps && make build && cp mgmt ~/bin/
- cd && mgmt run --file /vagrant/mgmt/examples/pkg1.yaml --converged-timeout=5
- cd && mgmt run --yaml /vagrant/mgmt/examples/pkg1.yaml --converged-timeout=5
:namespace: omv
:count: 0
:username: ''

View File

@@ -33,7 +33,7 @@
- iptables -F
- cd /vagrant/mgmt/ && make path
- cd /vagrant/mgmt/ && make deps && make build && cp mgmt ~/bin/
- cd && mgmt run --file /vagrant/mgmt/examples/pkg1.yaml --converged-timeout=5
- cd && mgmt run --yaml /vagrant/mgmt/examples/pkg1.yaml --converged-timeout=5
:namespace: omv
:count: 0
:username: ''

View File

@@ -7,7 +7,7 @@ if env | grep -q -e '^TRAVIS=true$'; then
fi
# run till completion
timeout --kill-after=15s 10s ./mgmt run --file t2.yaml --converged-timeout=5 --no-watch --tmp-prefix &
timeout --kill-after=15s 10s ./mgmt run --yaml t2.yaml --converged-timeout=5 --no-watch --tmp-prefix &
pid=$!
wait $pid # get exit status
e=$?

View File

@@ -10,11 +10,11 @@ fi
mkdir -p "${MGMT_TMPDIR}"mgmt{A..C}
# run till completion
timeout --kill-after=15s 10s ./mgmt run --file t3-a.yaml --converged-timeout=5 --no-watch --tmp-prefix &
timeout --kill-after=15s 10s ./mgmt run --yaml t3-a.yaml --converged-timeout=5 --no-watch --tmp-prefix &
pid1=$!
timeout --kill-after=15s 10s ./mgmt run --file t3-b.yaml --converged-timeout=5 --no-watch --tmp-prefix &
timeout --kill-after=15s 10s ./mgmt run --yaml t3-b.yaml --converged-timeout=5 --no-watch --tmp-prefix &
pid2=$!
timeout --kill-after=15s 10s ./mgmt run --file t3-c.yaml --converged-timeout=5 --no-watch --tmp-prefix &
timeout --kill-after=15s 10s ./mgmt run --yaml t3-c.yaml --converged-timeout=5 --no-watch --tmp-prefix &
pid3=$!
wait $pid1 # get exit status

View File

@@ -1,7 +1,7 @@
#!/bin/bash -e
# should take slightly more than 25s, but fail if we take 35s)
timeout --kill-after=35s 30s ./mgmt run --file t4.yaml --converged-timeout=5 --no-watch --tmp-prefix &
timeout --kill-after=35s 30s ./mgmt run --yaml t4.yaml --converged-timeout=5 --no-watch --tmp-prefix &
pid=$!
wait $pid # get exit status
exit $?

View File

@@ -1,7 +1,7 @@
#!/bin/bash -e
# should take slightly more than 35s, but fail if we take 45s)
timeout --kill-after=45s 40s ./mgmt run --file t5.yaml --converged-timeout=5 --no-watch --tmp-prefix &
timeout --kill-after=45s 40s ./mgmt run --yaml t5.yaml --converged-timeout=5 --no-watch --tmp-prefix &
pid=$!
wait $pid # get exit status
exit $?

View File

@@ -7,7 +7,7 @@ if env | grep -q -e '^TRAVIS=true$'; then
fi
# run till completion
timeout --kill-after=20s 15s ./mgmt run --file t6.yaml --no-watch --tmp-prefix &
timeout --kill-after=20s 15s ./mgmt run --yaml t6.yaml --no-watch --tmp-prefix &
pid=$!
sleep 1s # let it converge
test -e /tmp/mgmt/f1
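
All of the shell test changes above track the same CLI rename: the yaml frontend is now selected with --yaml instead of --file, since yaml is just one pluggable GAPI among several. The real flag is wired up in mgmt's cli code, which is not part of these hunks; as an illustration only, the rename amounts to something like this stdlib-flag sketch, where only the flag name changes:

package main

import (
	"flag"
	"fmt"
)

func main() {
	// formerly the flag was named "file"; asking for "yaml" explicitly
	// leaves room for other GAPI frontends (puppet, embedded golang, ...)
	yamlFile := flag.String("yaml", "", "yaml graph definition to run")
	flag.Parse()
	fmt.Println("would load graph from:", *yamlFile)
}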

View File

@@ -20,9 +20,10 @@ if [ "$COMMITS" != "" ] && [ "$COMMITS" -gt "1" ]; then
HACK="yes"
fi
LINT=`golint` # current golint output
LINT=`find . -maxdepth 3 -iname '*.go' -not -path './old/*' -not -path './tmp/*' -exec golint {} \;` # current golint output
COUNT=`echo -e "$LINT" | wc -l` # number of golint problems in current branch
[ "$LINT" = "" ] && echo PASS && exit # everything is "perfect"
echo "$LINT" # display the issues
T=`mktemp --tmpdir -d tmp.XXX`
[ "$T" = "" ] && exit 1
@@ -46,7 +47,7 @@ while read -r line; do
done <<< "$NUMSTAT1" # three < is the secret to putting a variable into read
git checkout "$PREVIOUS" &>/dev/null # previous commit
LINT1=`golint`
LINT1=`find . -maxdepth 3 -iname '*.go' -not -path './old/*' -not -path './tmp/*' -exec golint {} \;`
COUNT1=`echo -e "$LINT1" | wc -l` # number of golint problems in older branch
# clean up

View File

@@ -8,4 +8,5 @@ for file in `find . -maxdepth 3 -type f -name '*.go' -not -path './old/*' -not -
go vet "$file" && echo PASS || exit 1 # since it doesn't output an ok message on pass
grep 'log.' "$file" | grep '\\n"' && echo 'no \n needed in log.Printf()' && exit 1 || echo PASS # no \n needed in log.Printf()
grep 'case _ = <-' "$file" && echo 'case _ = <- can be simplified to: case <-' && exit 1 || echo PASS # this can be simplified
grep -Ei "[\/]+[\/]+[ ]*+(FIXME[^:]|TODO[^:]|XXX[^:])" "$file" && echo 'Token is missing a colon' && exit 1 || echo PASS # tokens must end with a colon
done

View File

@@ -11,7 +11,7 @@ done < "$FILE"
cd "${ROOT}"
find_files() {
git ls-files | grep '\.go$'
git ls-files | grep '\.go$' | grep -v '^examples/'
}
bad_files=$(

120
yamlgraph/gapi.go Normal file
View File

@@ -0,0 +1,120 @@
// Mgmt
// Copyright (C) 2013-2016+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package yamlgraph
import (
"fmt"
"log"
"sync"
"github.com/purpleidea/mgmt/gapi"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/recwatch"
)
// GAPI implements the main yamlgraph GAPI interface.
type GAPI struct {
File *string // yaml graph definition to use; nil if undefined
data gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}
// NewGAPI creates a new yamlgraph GAPI struct and calls Init().
func NewGAPI(data gapi.Data, file *string) (*GAPI, error) {
obj := &GAPI{
File: file,
}
return obj, obj.Init(data)
}
// Init initializes the yamlgraph GAPI struct.
func (obj *GAPI) Init(data gapi.Data) error {
if obj.initialized {
return fmt.Errorf("Already initialized!")
}
if obj.File == nil {
return fmt.Errorf("The File param must be specified!")
}
obj.data = data // store for later
obj.closeChan = make(chan struct{})
obj.initialized = true
return nil
}
// Graph returns a current Graph.
func (obj *GAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("yamlgraph: GAPI is not initialized")
}
config := ParseConfigFromFile(*obj.File)
if config == nil {
return nil, fmt.Errorf("yamlgraph: ParseConfigFromFile returned nil")
}
g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.EmbdEtcd, obj.data.Noop)
return g, err
}
// SwitchStream returns nil errors every time there could be a new graph.
func (obj *GAPI) SwitchStream() chan error {
if obj.data.NoWatch {
return nil
}
ch := make(chan error)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
ch <- fmt.Errorf("yamlgraph: GAPI is not initialized")
return
}
configChan := recwatch.ConfigWatch(*obj.File)
for {
select {
case err, ok := <-configChan: // returns nil events on ok!
if !ok { // the channel closed!
return
}
log.Printf("yamlgraph: Generating new graph...")
ch <- err // trigger a run
if err != nil {
return
}
case <-obj.closeChan:
return
}
}
}()
return ch
}
// Close shuts down the yamlgraph GAPI.
func (obj *GAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("yamlgraph: GAPI is not initialized")
}
close(obj.closeChan)
obj.wg.Wait()
obj.initialized = false // closed = true
return nil
}
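
A rough sketch of how an embedding program might drive this GAPI, using only the names visible in this diff (NewGAPI, Graph, Close, and the gapi.Data fields Hostname, Noop, NoWatch, EmbdEtcd). It is illustration only: a real embedder would supply a working *etcd.EmbdEtcd, since NewGraphFromConfig exports resources through it, and gapi.Data may carry further fields not shown here.

package main

import (
	"log"

	"github.com/purpleidea/mgmt/gapi"
	"github.com/purpleidea/mgmt/yamlgraph"
)

func main() {
	file := "examples/pkg1.yaml" // the example the omv scripts above run
	data := gapi.Data{
		Hostname: "localhost",
		NoWatch:  true, // SwitchStream() will return nil; we build once
		// EmbdEtcd: ... // a real embedder passes a running embedded etcd
	}

	g, err := yamlgraph.NewGAPI(data, &file) // calls Init() for us
	if err != nil {
		log.Fatalf("init error: %v", err)
	}
	defer g.Close()

	graph, err := g.Graph() // parse the yaml and build a *pgraph.Graph
	if err != nil {
		log.Fatalf("graph error: %v", err)
	}
	log.Printf("built graph: %v", graph)
}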

View File

@@ -15,8 +15,8 @@
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package gconfig provides the facilities for loading a graph from a yaml file.
package gconfig
// Package yamlgraph provides the facilities for loading a graph from a yaml file.
package yamlgraph
import (
"errors"
@@ -27,7 +27,6 @@ import (
"strings"
"github.com/purpleidea/mgmt/etcd"
"github.com/purpleidea/mgmt/event"
"github.com/purpleidea/mgmt/global"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
@@ -41,33 +40,39 @@ type collectorResConfig struct {
Pattern string `yaml:"pattern"` // XXX: Not Implemented
}
type vertexConfig struct {
// Vertex is the data structure of a vertex.
type Vertex struct {
Kind string `yaml:"kind"`
Name string `yaml:"name"`
}
type edgeConfig struct {
// Edge is the data structure of an edge.
type Edge struct {
Name string `yaml:"name"`
From vertexConfig `yaml:"from"`
To vertexConfig `yaml:"to"`
From Vertex `yaml:"from"`
To Vertex `yaml:"to"`
}
// Resources is the data structure of the set of resources.
type Resources struct {
// in alphabetical order
Exec []*resources.ExecRes `yaml:"exec"`
File []*resources.FileRes `yaml:"file"`
Msg []*resources.MsgRes `yaml:"msg"`
Noop []*resources.NoopRes `yaml:"noop"`
Pkg []*resources.PkgRes `yaml:"pkg"`
Svc []*resources.SvcRes `yaml:"svc"`
Timer []*resources.TimerRes `yaml:"timer"`
Virt []*resources.VirtRes `yaml:"virt"`
}
// GraphConfig is the data structure that describes a single graph to run.
type GraphConfig struct {
Graph string `yaml:"graph"`
Resources struct {
Noop []*resources.NoopRes `yaml:"noop"`
Pkg []*resources.PkgRes `yaml:"pkg"`
File []*resources.FileRes `yaml:"file"`
Svc []*resources.SvcRes `yaml:"svc"`
Exec []*resources.ExecRes `yaml:"exec"`
Timer []*resources.TimerRes `yaml:"timer"`
Msg []*resources.MsgRes `yaml:"msg"`
} `yaml:"resources"`
Resources Resources `yaml:"resources"`
Collector []collectorResConfig `yaml:"collect"`
Edges []edgeConfig `yaml:"edges"`
Edges []Edge `yaml:"edges"`
Comment string `yaml:"comment"`
Hostname string `yaml:"hostname"` // uuid for the host
Remote string `yaml:"remote"`
}
@@ -82,36 +87,13 @@ func (c *GraphConfig) Parse(data []byte) error {
return nil
}
// ParseConfigFromFile takes a filename and returns the graph config structure.
func ParseConfigFromFile(filename string) *GraphConfig {
data, err := ioutil.ReadFile(filename)
if err != nil {
log.Printf("Config: Error: ParseConfigFromFile: File: %v", err)
return nil
}
var config GraphConfig
if err := config.Parse(data); err != nil {
log.Printf("Config: Error: ParseConfigFromFile: Parse: %v", err)
return nil
}
return &config
}
// NewGraphFromConfig returns a new graph from existing input, such as from the
// existing graph, and a GraphConfig struct.
func (c *GraphConfig) NewGraphFromConfig(g *pgraph.Graph, embdEtcd *etcd.EmbdEtcd, noop bool) (*pgraph.Graph, error) {
if c.Hostname == "" {
return nil, fmt.Errorf("Config: Error: Hostname can't be empty!")
}
// NewGraphFromConfig transforms a GraphConfig struct into a new graph.
// FIXME: remove any possibly left over, now obsolete graph diff code from here!
func (c *GraphConfig) NewGraphFromConfig(hostname string, embdEtcd *etcd.EmbdEtcd, noop bool) (*pgraph.Graph, error) {
// hostname is the uuid for the host
var graph *pgraph.Graph // new graph to return
if g == nil { // FIXME: how can we check for an empty graph?
graph = pgraph.NewGraph("Graph") // give graph a default name
} else {
graph = g.Copy() // same vertices, since they're pointers!
}
var lookup = make(map[string]map[string]*pgraph.Vertex)
@@ -141,18 +123,15 @@ func (c *GraphConfig) NewGraphFromConfig(g *pgraph.Graph, embdEtcd *etcd.EmbdEtc
if !ok {
return nil, fmt.Errorf("Config: Error: Can't convert: %v of type: %T to Res.", x, x)
}
if noop {
res.Meta().Noop = noop
}
//if noop { // now done in mgmtmain
// res.Meta().Noop = noop
//}
if _, exists := lookup[kind]; !exists {
lookup[kind] = make(map[string]*pgraph.Vertex)
}
// XXX: should we export based on a @@ prefix, or a metaparam
// like exported => true || exported => (host pattern)||(other pattern?)
if !strings.HasPrefix(res.GetName(), "@@") { // not exported resource
// XXX: we don't have a way of knowing if any of the
// metaparams are undefined, and as a result to set the
// defaults that we want! I hate the go yaml parser!!!
v := graph.GetVertexMatch(res)
if v == nil { // no match found
res.Init()
@@ -171,7 +150,7 @@ func (c *GraphConfig) NewGraphFromConfig(g *pgraph.Graph, embdEtcd *etcd.EmbdEtc
}
}
// store in etcd
if err := etcd.EtcdSetResources(embdEtcd, c.Hostname, resourceList); err != nil {
if err := etcd.EtcdSetResources(embdEtcd, hostname, resourceList); err != nil {
return nil, fmt.Errorf("Config: Could not export resources: %v", err)
}
@@ -213,9 +192,9 @@ func (c *GraphConfig) NewGraphFromConfig(g *pgraph.Graph, embdEtcd *etcd.EmbdEtc
matched = true
// collect resources but add the noop metaparam
if noop {
res.Meta().Noop = noop
}
//if noop { // now done in mgmtmain
// res.Meta().Noop = noop
//}
if t.Pattern != "" { // XXX: simplistic for now
res.CollectPattern(t.Pattern) // res.Dirname = t.Pattern
@@ -240,15 +219,6 @@ func (c *GraphConfig) NewGraphFromConfig(g *pgraph.Graph, embdEtcd *etcd.EmbdEtc
}
}
// get rid of any vertices we shouldn't "keep" (that aren't in new graph)
for _, v := range graph.GetVertices() {
if !pgraph.VertexContains(v, keep) {
// wait for exit before starting new graph!
v.SendEvent(event.EventExit, true, false)
graph.DeleteVertex(v)
}
}
for _, e := range c.Edges {
if _, ok := lookup[util.FirstToUpper(e.From.Kind)]; !ok {
return nil, fmt.Errorf("Can't find 'from' resource!")
@@ -267,3 +237,20 @@ func (c *GraphConfig) NewGraphFromConfig(g *pgraph.Graph, embdEtcd *etcd.EmbdEtc
return graph, nil
}
// ParseConfigFromFile takes a filename and returns the graph config structure.
func ParseConfigFromFile(filename string) *GraphConfig {
data, err := ioutil.ReadFile(filename)
if err != nil {
log.Printf("Config: Error: ParseConfigFromFile: File: %v", err)
return nil
}
var config GraphConfig
if err := config.Parse(data); err != nil {
log.Printf("Config: Error: ParseConfigFromFile: Parse: %v", err)
return nil
}
return &config
}
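
To make the reorganized config concrete, here is a small hedged sketch that feeds an inline YAML document through GraphConfig.Parse. The per-resource keys (e.g. name on a noop) follow the project's example YAMLs such as examples/pkg1.yaml; everything else sticks to the yaml tags visible in this diff.

package main

import (
	"log"

	"github.com/purpleidea/mgmt/yamlgraph"
)

func main() {
	// a tiny graph in the new shape: a named graph, a resources block
	// keyed by kind, and a (here empty) list of edges
	data := []byte(`
graph: example
comment: hand written graph for illustration
resources:
  noop:
  - name: noop1
edges: []
`)

	var config yamlgraph.GraphConfig
	if err := config.Parse(data); err != nil {
		log.Fatalf("parse error: %v", err)
	}
	log.Printf("graph %q has %d noop resource(s)", config.Graph, len(config.Resources.Noop))
}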