52 Commits

Author SHA1 Message Date
James Shubin
28a6430778 test: Add gometalinter to our test suite
Add a bunch of new linters to our tests! We can uncomment each sub
linter as we fix up the few remaining issues.
2017-06-03 02:04:10 -04:00
James Shubin
6e4157da35 test: Remove debugging echo from go vet test
I accidentally left it in which totally defeats the point of tests!
2017-06-03 01:34:02 -04:00
James Shubin
4f420dde05 etcd: Wait for server to start before continuing
I think there was a rare race where we would make use of the etcd server
before it had fully started up. I only ever saw this occur on travis,
and with this fix hopefully we'll never see it again.

It is worth mentioning that much of my etcd code and the lib Run()
function could use a solid cleaning.
2017-06-03 01:00:35 -04:00
James Shubin
d9601471df etcd: Small cleanup of the package
Split things into multiple files, and fix up some doc formatting.
2017-06-03 00:34:58 -04:00
James Shubin
9941a97e37 resources: pkg: Add a simple test based on internal logic
We expect the following to stay true. This has always been a bit weird
for me to either remember or expect, so I added a test for my sanity.
2017-06-03 00:15:30 -04:00
James Shubin
0a64b08669 resources: autoedges: Process in a deterministic order
The order you loop through maps isn't necessarily stable, so make sure
you sort everything before you go through it.
2017-06-02 22:29:42 -04:00
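
The fix described above follows the usual Go pattern: collect the map keys, sort them, and only then iterate. A minimal sketch of the idea (the names here are illustrative, not the actual mgmt code):

```golang
package main

import (
	"fmt"
	"sort"
)

func main() {
	// iterating this map directly gives an unspecified order on each run
	resources := map[string]int{"pkg": 2, "file": 1, "svc": 3}

	keys := []string{}
	for k := range resources {
		keys = append(keys, k)
	}
	sort.Strings(keys) // process in a stable, deterministic order

	for _, k := range keys {
		fmt.Println(k, resources[k])
	}
}
```
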
James Shubin
4d9d0d4548 resources: Improve AutoEdge API and pkg breakage
I previously broke the pkg auto edges because the package list wasn't
available by the time it was called. This fixes the pkg resource so that
it gets the necessary list of packages when needed. Since this means
that a possible failure could happen, we also update the AutoEdges API
to support errors. Errors can only be generated at AutoEdge struct
creation; once the struct has been returned (right before modification
of the graph structure), there is no possibility of returning any errors.

It's important to remember that the AutoEdges stuff gets called before
the Init of each resource, so make sure it doesn't depend on anything
that happens there or that gets cached as a result of Init.

This is all much nicer now and has a test too :)
2017-06-02 22:15:28 -04:00
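
The new method shape (also visible in the docs diff further down) is `AutoEdges() (AutoEdge, error)`. A rough, self-contained sketch of what a resource implementation might look like under that change; the `AutoEdge` interface, `FooRes` type and helper here are stand-ins, not the real mgmt types:

```golang
package main

import "fmt"

// AutoEdge is a minimal stand-in for the real interface; illustrative only.
type AutoEdge interface {
	Test([]bool) bool
}

type fooAutoEdges struct{ pkgs []string }

func (obj *fooAutoEdges) Test([]bool) bool { return false }

// FooRes is a hypothetical resource.
type FooRes struct{ Name string }

// AutoEdges can now fail, but only here at struct creation time: once the
// AutoEdge value is returned, graph modification proceeds without errors.
func (obj *FooRes) AutoEdges() (AutoEdge, error) {
	pkgs, err := listPackages() // hypothetical helper that may fail
	if err != nil {
		return nil, err
	}
	return &fooAutoEdges{pkgs: pkgs}, nil
}

func listPackages() ([]string, error) { return []string{"cowsay"}, nil }

func main() {
	res := &FooRes{Name: "foo1"}
	ae, err := res.AutoEdges()
	fmt.Println(ae, err)
}
```
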
James Shubin
5f6c8545c6 resources: Replace stored pgraph with mgraph and clean up hacks
Now that we're using our meta wrapper graph struct instead of the
pgraph, we can re-implement our SetValue hacks in terms of struct fields
and the implementation is now cleaner.
2017-06-02 18:50:23 -04:00
James Shubin
ddc335d65a resources: Reorganize package and split into multiple files
This should hopefully make finding and changing code easier.
2017-06-02 18:08:47 -04:00
James Shubin
9cbaa892d3 gapi: Allow the GAPI implementer to specify fast and exit
This allows the implementer of the GAPI to specify three parameters for
every Next message sent on the channel. The Fast parameter tells the
agent if it should do the pause quickly or if it should finish the
sequence. A quick pause means that it will cause a pause immediately
after the currently running resources finish, whereas a slow (default)
pause will allow the wave of execution to finish. A slow pause is usually
preferred for complex graphs where we want each step to complete. The
Exit parameter tells the engine to exit, and the
Err parameter tells the engine that an error occurred.
2017-06-02 04:03:10 -04:00
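
Going by the commit message, each message on the Next channel now carries three parameters; a hedged sketch of that shape (field names follow the description above, not a copy of the gapi package):

```golang
package main

import "fmt"

// Next is one message sent by a GAPI implementation on its event channel.
type Next struct {
	Fast bool  // pause quickly, right after the running resources finish
	Exit bool  // ask the engine to exit
	Err  error // report that the GAPI hit an error
}

func main() {
	ch := make(chan Next, 1)
	ch <- Next{Fast: true} // request a quick pause before the next graph swap
	fmt.Printf("%+v\n", <-ch)
}
```
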
James Shubin
9531465410 test: Make sure our examples build
Since there are occasional API changes, I'd like to at least remember to
keep the examples building, so we now have a test to remind us!
2017-06-02 03:32:53 -04:00
James Shubin
c35916fad1 resources: Rename the Data struct to ResData to avoid ambiguity
There's a similarly named gapi.Data struct which we could also rename.
2017-06-02 02:53:53 -04:00
James Shubin
bf476a058e resources: exec: Add send/recv for exec output, stdout and stderr
This adds send/recv output parameters from exec for stdout, stderr, and
output, which is a combination of those two. This also includes a few
tests, and a working example too!

Gone are the `some_command > some_file` days of puppet.
2017-06-02 02:52:03 -04:00
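
As an illustration of the idea only (not the mgmt Send/Recv API itself), an exec-like step captures its output and a file-like step receives it as content, which is what replaces the shell redirection. The types and field names below are toy stand-ins:

```golang
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// execRes and fileRes are toy stand-ins for the real resources.
type execRes struct {
	Cmd    string
	Output string // "sendable": combined stdout and stderr
}

type fileRes struct {
	Path    string
	Content string // "receivable"
}

func main() {
	e := &execRes{Cmd: "date"}
	out, err := exec.Command("sh", "-c", e.Cmd).CombinedOutput()
	if err != nil {
		panic(err)
	}
	e.Output = strings.TrimSpace(string(out))

	f := &fileRes{Path: "/tmp/date.txt"}
	f.Content = e.Output // the "recv" half: the engine copies the sent value in
	fmt.Printf("%+v\n", f)
}
```
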
James Shubin
d4e815a4cb resources: Clean up converger and make it easier for tests
This cleans up the resource converger code slightly and makes it easier
to write resource specific test cases.
2017-06-02 01:15:25 -04:00
James Shubin
0545c4167b pgraph: Remove NewVertex and NewEdge methods and fix examples
Since the pgraph graph can store arbitrary pointers, we don't need a
special method to create the vertices or edges as long as they implement
the String() string method. This cleans up the library and some of the
examples which I let rot previously.
2017-05-31 18:04:58 -04:00
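
In other words, anything that satisfies `fmt.Stringer` can now be a vertex or edge; a minimal sketch of such a type:

```golang
package main

import "fmt"

// myVertex is an ordinary struct; implementing String() string is the only
// requirement described above for pgraph vertices and edges.
type myVertex struct {
	name string
}

func (v *myVertex) String() string { return v.name }

func main() {
	var _ fmt.Stringer = &myVertex{} // compile-time check of the contract
	fmt.Println(&myVertex{name: "v1"})
}
```
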
James Shubin
6838dd02c0 resources: graph: Add partial implementation of a graph resource
This is something I've wanted to do for a while, but for the reasons
mentioned in the comments, I've been unable to complete yet. I figured
I'd at least merge what does exist so far in case someone else would
like to pick this up. It's a bit of a brain hurdle / monster, because
the tricky part is refactoring the core engine so that this fits in
nicely. Perhaps someone will have more time and/or less tunnel vision
than I to either merge something or sketch out some ideas on the path
forwards. I think it's a useful goal because if recursive resources are
possible, it could force the core engine into a more elegant design.

Happy hacking!
2017-05-31 17:27:34 -04:00
James Shubin
14c2fd1edd resources: Add proper edge compare method
Might as well do this cleanly in one place.
2017-05-31 17:27:34 -04:00
James Shubin
6e503cc79b resources: Simplify the resource Compare functions
This removes one level of indentation and simplifies the code.
2017-05-31 17:27:34 -04:00
James Shubin
bd4563b699 pgraph: Add sort function to sort a list of vertices
With tests too!
2017-05-31 17:27:34 -04:00
James Shubin
458e115490 pgraph: Add logic functions for adding subgraphs
These are helper functions to merge existing graphs into a main graph,
with or without adding an edge relationship between a vertex and the new
graph. These are particularly useful if using mgmt as a lib to break
apart units of work into functions that create subgraphs, which are
then added to the main graph when they're returned.
2017-05-31 17:27:25 -04:00
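
A toy sketch of what merging a subgraph into a main graph means here; the real pgraph helpers differ in names and details, this only shows the shape of the lib-style workflow described above:

```golang
package main

import "fmt"

// graph is a toy adjacency-list graph; pgraph's API is richer than this.
type graph struct {
	adj map[string][]string // vertex -> outgoing edges
}

func newGraph() *graph { return &graph{adj: map[string][]string{}} }

func (g *graph) addVertex(v string) {
	if _, ok := g.adj[v]; !ok {
		g.adj[v] = []string{}
	}
}

func (g *graph) addEdge(from, to string) {
	g.addVertex(from)
	g.addVertex(to)
	g.adj[from] = append(g.adj[from], to)
}

// addGraph merges sub into g; if anchor is non-empty, every vertex of sub
// also gets an incoming edge from anchor (the "with an edge" variant).
func (g *graph) addGraph(anchor string, sub *graph) {
	for v, outs := range sub.adj {
		g.addVertex(v)
		for _, to := range outs {
			g.addEdge(v, to)
		}
		if anchor != "" {
			g.addEdge(anchor, v)
		}
	}
}

func main() {
	mainGraph := newGraph()
	mainGraph.addVertex("setup")

	sub := newGraph() // a unit of work returned by some helper function
	sub.addEdge("pkg", "svc")

	mainGraph.addGraph("setup", sub)
	fmt.Println(mainGraph.adj)
}
```
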
James Shubin
51369adad1 pgraph: Add a GraphCmp method
This could probably be more efficient using a known algorithm, and it
could definitely use more tests, but it's good enough for now.
2017-05-31 16:45:39 -04:00
James Shubin
f65c5fb147 resources: nspawn: Fix small style issues 2017-05-31 15:36:15 -04:00
James Shubin
4150ae7307 pgraph: Replace edge struct with interface
This further cleans up the pgraph lib to be more generic.
2017-05-31 15:36:15 -04:00
James Shubin
a87288d519 pgraph, resources: Major refactoring continued
There was simply some technical debt I needed to kill off. Sorry for not
splitting this up into more patches.
2017-05-31 15:36:14 -04:00
James Shubin
3cf9639e99 pgraph, resources: Major refactor to remove pgraph to resource dep
This is the mechanical port of the remaining bits. Next, I'll clean it up
a bit.
2017-05-29 15:43:50 -04:00
James Shubin
4490c3ed1a resources: Map to semaphores doesn't need to be a pointer
A map in golang is a reference type.
2017-05-29 15:43:50 -04:00
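
For reference, the property being relied on above: passing a map hands over a handle to the same underlying data, so the callee's writes are visible to the caller without any pointer indirection. A tiny sketch with made-up names:

```golang
package main

import "fmt"

func addSemaphore(semas map[string]int, id string) {
	semas[id] = 1 // mutates the caller's map; no *map needed
}

func main() {
	semas := map[string]int{}
	addSemaphore(semas, "lock:1")
	fmt.Println(semas) // map[lock:1:1]
}
```
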
James Shubin
fbcb562781 pgraph: Move the timestamp storage into the resource 2017-05-29 15:43:50 -04:00
James Shubin
b1e035f96a pgraph: Move get/set state methods out to resource package 2017-05-29 15:43:50 -04:00
James Shubin
11c3a26c23 pgraph: Move the AutoEdges mechanism into the resource package
Remove the pgraph->resource dependency.
2017-05-29 15:43:50 -04:00
James Shubin
1fbe72b52d test: Run go vet across whole packages not individual files
The golang tooling is quite deficient, in that it makes it difficult to
get the tools to do_the_right_thing without ample bash script wrapping.
Go vet was finding issues because it didn't have the
full context available. Hopefully this package level context is
sufficient for now. It still lacks inter-package context though.
2017-05-29 15:43:50 -04:00
James Shubin
f4bb066737 test: Run go vet with -source flag in newer releases
This should hopefully eliminate some false positives.
https://github.com/golang/go/issues/20514
2017-05-29 15:43:50 -04:00
Julien Pivotto
aaac9cbeeb vagrant: Setup Packagekit in the box
Without packagekit, the 'pkg' resources cannot be used.

Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-05-17 09:54:23 +02:00
Julien Pivotto
0e68ff6923 vagrant: Install make in the Vagrant box
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
2017-05-17 06:41:43 +02:00
James Shubin
1c59712cbf pgraph: Move AssociateData function out of the package
This removes another dependency on the resource package.
2017-05-15 10:19:46 -04:00
James Shubin
c2cb1c9168 pgraph: Move GraphMetas function out of package
This removes a dependency on the resources package which wasn't
necessary.
2017-05-15 10:06:31 -04:00
James Shubin
cc8e2e40dd pgraph: Update graph API to remove Get prefix and add Adjacency
Simple cleanups.
2017-05-15 09:58:10 -04:00
James Shubin
e67d97d9da pgraph: Replace CompareMatch with VertexMatchFn
This removes a reference to the resources package in pgraph.
2017-05-13 13:55:42 -04:00
James Shubin
d74c2115fd pgraph: Untangle the semaphore code from the pgraph implementation
This re-implements the semaphore code on top of the graph kv store.
2017-05-13 13:28:41 -04:00
James Shubin
70e7ee2d46 pgraph: Remove use of Flags struct in favour of Value API
One small step towards completely cleaning up the pgraph package so that we
can eventually fix the code that would otherwise create a cycle!
2017-05-13 13:28:41 -04:00
James Shubin
d11854f4e8 pgraph: Clean up pgraph module to get ready for clean lib status
The graph of dependencies in golang is a DAG, and as such doesn't allow
cycles. Clean up this lib so that it eventually doesn't import our
resources module or anything else which might want to import it.

This patch makes adjacency private, and adds a generalized key store to
the graph struct.
2017-05-13 13:28:41 -04:00
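
A plausible shape for the generalized key store mentioned above; the method names and signatures here are assumptions, not copied from the pgraph package:

```golang
package main

import "fmt"

// Graph is a stand-in carrying only the key/value store under discussion.
type Graph struct {
	kv map[string]interface{} // arbitrary values attached to the graph
}

// SetValue stores an arbitrary value under key.
func (g *Graph) SetValue(key string, val interface{}) {
	if g.kv == nil {
		g.kv = make(map[string]interface{})
	}
	g.kv[key] = val
}

// Value returns the stored value and whether it exists.
func (g *Graph) Value(key string) (interface{}, bool) {
	val, exists := g.kv[key]
	return val, exists
}

func main() {
	g := &Graph{}
	g.SetValue("debug", true)
	fmt.Println(g.Value("debug"))
}
```
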
James Shubin
4bb553e015 pgraph: Use the correct vertex handle to prevent a race
A small typo that is now fixed! These need to get caught with golint!
2017-05-13 10:08:38 -04:00
James Shubin
0af9af44e5 etcd, resources, world: Add World API for shared keys
It's up to the end user to decide who is writing and/or overwriting
them.

It could also be useful to reimplement (refactor) some of the existing
World APIs in terms of these primitives.
2017-04-17 07:03:29 -04:00
James Shubin
3a0d73f740 readme: Add new links 2017-04-13 04:35:59 -04:00
James Shubin
9b9ff2622d resources: Make resource kind and baseuid fields public
This is required if we're going to have out-of-package resources, in
particular for third-party packages, and also if we decide to split
out each resource into a separate sub-package.
2017-04-11 01:52:21 -04:00
James Shubin
a4858be967 lib, gapi: Next method of GAPI should generate first event
This puts the generation of the initial event into the Next method of
the GAPI. If it does not happen, then we will never get a graph. This is
important because it notifies the GAPI when we're actually ready to
try to generate a graph, rather than blocking on the Graph method if we
have a long compile, for example.

This is also required for the etcd watch cleanup.
2017-04-10 03:20:58 -04:00
James Shubin
6fd5623b1f gapi: Move separate etcd Watch method into GAPI
This cleans up the API to not have a special case for etcd anymore. In
particular, this also adds the requirement that the GAPI must generate
an event on startup as soon as it is ready to generate a graph.
2017-04-10 03:20:58 -04:00
James Shubin
66d9c7091c lib: examples: Update to most recent API
At some point in the past the API changed. Fixed now.
2017-04-10 03:20:58 -04:00
Mildred Ki'Lya
525a1e8140 yamlgraph: Refactor parsing for dynamic resource registration
Avoid use of the reflect package, and use an extensible list of registered
resource kinds. This also has the benefit of removing the empty VirtRes and
AugeasRes struct types when compiling without libvirt and libaugeas.
2017-03-24 22:38:06 +01:00
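
One common way to get an extensible list of registered resource kinds without reflection is a registry of constructor functions keyed by kind; this is a hedged sketch of that pattern, with names that are illustrative rather than taken from the yamlgraph package:

```golang
package main

import "fmt"

// Res is a minimal stand-in for the resource interface.
type Res interface {
	Kind() string
}

type NoopRes struct{}

func (r *NoopRes) Kind() string { return "noop" }

// registeredResources maps a kind name to a constructor for that kind.
var registeredResources = map[string]func() Res{}

func RegisterResource(kind string, fn func() Res) {
	registeredResources[kind] = fn
}

// NewResource builds an empty resource of the requested kind, if registered.
func NewResource(kind string) (Res, error) {
	fn, ok := registeredResources[kind]
	if !ok {
		return nil, fmt.Errorf("unknown resource kind: %s", kind)
	}
	return fn(), nil
}

func main() {
	RegisterResource("noop", func() Res { return &NoopRes{} })
	res, err := NewResource("noop")
	if err != nil {
		panic(err)
	}
	fmt.Println(res.Kind())
}
```
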
James Shubin
64dc47d7e9 misc: Fixup documentation 2017-03-20 17:11:51 -04:00
James Shubin
f3fc7bb91e resources: svc: Add basic support for user services
These are user-specific services and are available on the session bus.
This doesn't use the private user API because
https://github.com/coreos/go-systemd/pull/225 was NACKed.
2017-03-17 10:15:02 -04:00
James Shubin
028ef14cc0 misc: Replace sloppy use of %v with %s 2017-03-16 13:18:36 -04:00
James Shubin
3e001f9a1c main: Update log messages for consistency 2017-03-16 13:14:50 -04:00
72 changed files with 6633 additions and 3353 deletions


@@ -12,7 +12,7 @@
Come join us in the `mgmt` community!
| Medium | Link |
|---|---|---|
|---|---|
| IRC | [#mgmtconfig](https://webchat.freenode.net/?channels=#mgmtconfig) on Freenode |
| Twitter | [@mgmtconfig](https://twitter.com/mgmtconfig) & [#mgmtconfig](https://twitter.com/hashtag/mgmtconfig) |
| Mailing list | [mgmtconfig-list@redhat.com](https://www.redhat.com/mailman/listinfo/mgmtconfig-list) |
@@ -78,6 +78,8 @@ We'd love to have your patches! Please send them by email, or as a pull request.
| James Shubin | video | [Recording from NLUUG 2016](https://www.youtube.com/watch?v=MmpwOQAb_SE&html5=1) |
| James Shubin | blog | [Send/Recv in mgmt](https://ttboj.wordpress.com/2016/12/07/sendrecv-in-mgmt/) |
| James Shubin | blog | [Metaparameters in mgmt](https://ttboj.wordpress.com/2017/03/01/metaparameters-in-mgmt/) |
| James Shubin | video | [Recording from Incontro DevOps 2017](https://vimeo.com/212241877) |
| Yves Brissaud | blog | [mgmt aux HumanTalks Grenoble (french)](http://log.winsos.net/2017/04/12/mgmt-aux-human-talks-grenoble.html) |
##

11
Vagrantfile vendored

@@ -21,7 +21,16 @@ Vagrant.configure(2) do |config|
config.vm.provision "file", source: "~/.gitconfig", destination: ".gitconfig"
# copied from make-deps.sh (with added git)
config.vm.provision "shell", inline: "dnf install -y libvirt-devel golang golang-googlecode-tools-stringer hg git"
config.vm.provision "shell", inline: "dnf install -y libvirt-devel golang golang-googlecode-tools-stringer hg git make"
# set up packagekit
config.vm.provision "shell" do |shell|
shell.inline = <<-SCRIPT
dnf install -y PackageKit
systemctl enable packagekit
systemctl start packagekit
SCRIPT
end
# set up vagrant home
script = <<-SCRIPT


@@ -80,7 +80,7 @@ work, and finish by calling the `Init` method of the base resource.
```golang
// Init initializes the Foo resource.
func (obj *FooRes) Init() error {
obj.BaseRes.kind = "foo" // must lower case resource kind
obj.BaseRes.Kind = "foo" // must lower case resource kind
// run the resource specific initialization, and error if anything fails
if some_error {
return err // something went wrong!
@@ -202,7 +202,7 @@ will likely find the state to now be correct.
### Watch
```golang
Watch(chan *Event) error
Watch() error
```
`Watch` is a main loop that runs and sends messages when it detects that the
@@ -344,25 +344,26 @@ some way.
#### Example
```golang
// Compare two resources and return if they are equivalent.
func (obj *FooRes) Compare(res Res) bool {
switch res.(type) {
case *FooRes: // only compare to other resources of the Foo kind!
res := res.(*FileRes)
func (obj *FooRes) Compare(r Res) bool {
// we can only compare FooRes to others of the same resource kind
res, ok := r.(*FooRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
if obj.whatever != res.whatever {
return false
}
if obj.Flag != res.Flag {
return false
}
default:
return false // different kind of resource
}
return true // they must match!
}
```
@@ -378,7 +379,7 @@ if another resource can match a dependency to this one.
### AutoEdges
```golang
AutoEdges() AutoEdge
AutoEdges() (AutoEdge, error)
```
This returns a struct that implements the `AutoEdge` interface. This struct
@@ -516,7 +517,7 @@ This can _only_ be done inside of the `CheckApply` function!
```golang
// inside CheckApply, probably near the top
if val, exists := obj.Recv["SomeKey"]; exists {
log.Printf("SomeKey was sent to us from: %s[%s].%s", val.Res.Kind(), val.Res.GetName(), val.Key)
log.Printf("SomeKey was sent to us from: %s.%s", val.Res, val.Key)
if val.Changed {
log.Printf("SomeKey was just updated!")
// you may want to invalidate some local cache


@@ -65,7 +65,6 @@ import (
"github.com/purpleidea/mgmt/converger"
"github.com/purpleidea/mgmt/event"
"github.com/purpleidea/mgmt/resources"
"github.com/purpleidea/mgmt/util"
etcd "github.com/coreos/etcd/clientv3" // "clientv3"
@@ -82,8 +81,8 @@ import (
const (
NS = "_mgmt" // root namespace for mgmt operations
seedSentinel = "_seed" // you must not name your hostname this
maxStartServerTimeout = 60 // max number of seconds to wait for server to start
maxStartServerRetries = 3 // number of times to retry starting the etcd server
MaxStartServerTimeout = 60 // max number of seconds to wait for server to start
MaxStartServerRetries = 3 // number of times to retry starting the etcd server
maxClientConnectRetries = 5 // number of times to retry consecutive connect failures
selfRemoveTimeout = 3 // give unnominated members a chance to self exit
exitDelay = 3 // number of sec of inactivity after exit to clean up
@@ -96,7 +95,7 @@ var (
errApplyDeltaEventsInconsistent = errors.New("inconsistent key in ApplyDeltaEvents")
)
// AW is a struct for the AddWatcher queue
// AW is a struct for the AddWatcher queue.
type AW struct {
path string
opts []etcd.OpOption
@@ -107,8 +106,8 @@ type AW struct {
cancelFunc func() // data
}
// RE is a response + error struct since these two values often occur together
// This is now called an event with the move to the etcd v3 API
// RE is a response + error struct since these two values often occur together.
// This is now called an event with the move to the etcd v3 API.
type RE struct {
response etcd.WatchResponse
path string
@@ -120,7 +119,7 @@ type RE struct {
retries uint // number of times we've retried on error
}
// KV is a key + value struct to hold the two items together
// KV is a key + value struct to hold the two items together.
type KV struct {
key string
value string
@@ -128,7 +127,7 @@ type KV struct {
resp event.Resp
}
// GQ is a struct for the get queue
// GQ is a struct for the get queue.
type GQ struct {
path string
skipConv bool
@@ -137,7 +136,7 @@ type GQ struct {
data map[string]string
}
// DL is a struct for the delete queue
// DL is a struct for the delete queue.
type DL struct {
path string
opts []etcd.OpOption
@@ -145,7 +144,7 @@ type DL struct {
data int64
}
// TN is a struct for the txn queue
// TN is a struct for the txn queue.
type TN struct {
ifcmps []etcd.Cmp
thenops []etcd.Op
@@ -161,7 +160,7 @@ type Flags struct {
Verbose bool // add extra log message output
}
// EmbdEtcd provides the embedded server and client etcd functionality
// EmbdEtcd provides the embedded server and client etcd functionality.
type EmbdEtcd struct { // EMBeddeD etcd
// etcd client connection related
cLock sync.Mutex // client connect lock
@@ -205,9 +204,10 @@ type EmbdEtcd struct { // EMBeddeD etcd
serverwg sync.WaitGroup // wait for server to shutdown
server *embed.Etcd // technically this contains the server struct
dataDir string // our data dir, prefix + "etcd"
serverReady chan struct{} // closes when ready
}
// NewEmbdEtcd creates the top level embedded etcd struct client and server obj
// NewEmbdEtcd creates the top level embedded etcd struct client and server obj.
func NewEmbdEtcd(hostname string, seeds, clientURLs, serverURLs etcdtypes.URLs, noServer bool, idealClusterSize uint16, flags Flags, prefix string, converger converger.Converger) *EmbdEtcd {
endpoints := make(etcdtypes.URLsMap)
if hostname == seedSentinel { // safety
@@ -240,6 +240,7 @@ func NewEmbdEtcd(hostname string, seeds, clientURLs, serverURLs etcdtypes.URLs,
flags: flags,
prefix: prefix,
dataDir: path.Join(prefix, "etcd"),
serverReady: make(chan struct{}),
}
// TODO: add some sort of auto assign method for picking these defaults
// add a default so that our local client can connect locally if needed
@@ -260,7 +261,7 @@ func NewEmbdEtcd(hostname string, seeds, clientURLs, serverURLs etcdtypes.URLs,
return obj
}
// GetConfig returns the config struct to be used for the etcd client connect
// GetConfig returns the config struct to be used for the etcd client connect.
func (obj *EmbdEtcd) GetConfig() etcd.Config {
endpoints := []string{}
// XXX: filter out any urls which wouldn't resolve here ?
@@ -342,7 +343,7 @@ func (obj *EmbdEtcd) Connect(reconnect bool) error {
return nil
}
// Startup is the main entry point to kick off the embedded etcd client & server
// Startup is the main entry point to kick off the embedded etcd client & server.
func (obj *EmbdEtcd) Startup() error {
bootstrapping := len(obj.endpoints) == 0 // because value changes after start
@@ -464,7 +465,7 @@ func (obj *EmbdEtcd) Destroy() error {
return nil
}
// CtxDelayErr requests a retry in Delta duration
// CtxDelayErr requests a retry in Delta duration.
type CtxDelayErr struct {
Delta time.Duration
Message string
@@ -474,7 +475,7 @@ func (obj *CtxDelayErr) Error() string {
return fmt.Sprintf("CtxDelayErr(%v): %s", obj.Delta, obj.Message)
}
// CtxRetriesErr lets you retry as long as you have retries available
// CtxRetriesErr lets you retry as long as you have retries available.
// TODO: consider combining this with CtxDelayErr
type CtxRetriesErr struct {
Retries uint
@@ -494,7 +495,7 @@ func (obj *CtxPermanentErr) Error() string {
return fmt.Sprintf("CtxPermanentErr: %s", obj.Message)
}
// CtxReconnectErr requests a client reconnect to the new endpoint list
// CtxReconnectErr requests a client reconnect to the new endpoint list.
type CtxReconnectErr struct {
Message string
}
@@ -503,7 +504,7 @@ func (obj *CtxReconnectErr) Error() string {
return fmt.Sprintf("CtxReconnectErr: %s", obj.Message)
}
// CancelCtx adds a tracked cancel function around an existing context
// CancelCtx adds a tracked cancel function around an existing context.
func (obj *EmbdEtcd) CancelCtx(ctx context.Context) (context.Context, func()) {
cancelCtx, cancelFunc := context.WithCancel(ctx)
obj.cancelLock.Lock()
@@ -512,7 +513,7 @@ func (obj *EmbdEtcd) CancelCtx(ctx context.Context) (context.Context, func()) {
return cancelCtx, cancelFunc
}
// TimeoutCtx adds a tracked cancel function with timeout around an existing context
// TimeoutCtx adds a tracked cancel function with timeout around an existing context.
func (obj *EmbdEtcd) TimeoutCtx(ctx context.Context, t time.Duration) (context.Context, func()) {
timeoutCtx, cancelFunc := context.WithTimeout(ctx, t)
obj.cancelLock.Lock()
@@ -699,7 +700,7 @@ func (obj *EmbdEtcd) CtxError(ctx context.Context, err error) (context.Context,
return ctx, obj.ctxErr
}
// CbLoop is the loop where callback execution is serialized
// CbLoop is the loop where callback execution is serialized.
func (obj *EmbdEtcd) CbLoop() {
cuid := obj.converger.Register()
cuid.SetName("Etcd: CbLoop")
@@ -755,7 +756,7 @@ func (obj *EmbdEtcd) CbLoop() {
}
}
// Loop is the main loop where everything is serialized
// Loop is the main loop where everything is serialized.
func (obj *EmbdEtcd) Loop() {
cuid := obj.converger.Register()
cuid.SetName("Etcd: Loop")
@@ -933,7 +934,7 @@ func (obj *EmbdEtcd) loopProcessAW(ctx context.Context, aw *AW) {
}
}
// Set queues up a set operation to occur using our mainloop
// Set queues up a set operation to occur using our mainloop.
func (obj *EmbdEtcd) Set(key, value string, opts ...etcd.OpOption) error {
resp := event.NewResp()
obj.setq <- &KV{key: key, value: value, opts: opts, resp: resp}
@@ -943,7 +944,7 @@ func (obj *EmbdEtcd) Set(key, value string, opts ...etcd.OpOption) error {
return nil
}
// rawSet actually implements the key set operation
// rawSet actually implements the key set operation.
func (obj *EmbdEtcd) rawSet(ctx context.Context, kv *KV) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: rawSet()")
@@ -960,7 +961,7 @@ func (obj *EmbdEtcd) rawSet(ctx context.Context, kv *KV) error {
return err
}
// Get performs a get operation and waits for an ACK to continue
// Get performs a get operation and waits for an ACK to continue.
func (obj *EmbdEtcd) Get(path string, opts ...etcd.OpOption) (map[string]string, error) {
return obj.ComplexGet(path, false, opts...)
}
@@ -1001,7 +1002,7 @@ func (obj *EmbdEtcd) rawGet(ctx context.Context, gq *GQ) (result map[string]stri
return
}
// Delete performs a delete operation and waits for an ACK to continue
// Delete performs a delete operation and waits for an ACK to continue.
func (obj *EmbdEtcd) Delete(path string, opts ...etcd.OpOption) (int64, error) {
resp := event.NewResp()
dl := &DL{path: path, opts: opts, resp: resp, data: -1}
@@ -1029,7 +1030,7 @@ func (obj *EmbdEtcd) rawDelete(ctx context.Context, dl *DL) (count int64, err er
return
}
// Txn performs a transaction and waits for an ACK to continue
// Txn performs a transaction and waits for an ACK to continue.
func (obj *EmbdEtcd) Txn(ifcmps []etcd.Cmp, thenops, elseops []etcd.Op) (*etcd.TxnResponse, error) {
resp := event.NewResp()
tn := &TN{ifcmps: ifcmps, thenops: thenops, elseops: elseops, resp: resp, data: nil}
@@ -1053,8 +1054,8 @@ func (obj *EmbdEtcd) rawTxn(ctx context.Context, tn *TN) (*etcd.TxnResponse, err
return response, err
}
// AddWatcher queues up an add watcher request and returns a cancel function
// Remember to add the etcd.WithPrefix() option if you want to watch recursively
// AddWatcher queues up an add watcher request and returns a cancel function.
// Remember to add the etcd.WithPrefix() option if you want to watch recursively.
func (obj *EmbdEtcd) AddWatcher(path string, callback func(re *RE) error, errCheck bool, skipConv bool, opts ...etcd.OpOption) (func(), error) {
resp := event.NewResp()
awq := &AW{path: path, opts: opts, callback: callback, errCheck: errCheck, skipConv: skipConv, cancelFunc: nil, resp: resp}
@@ -1065,7 +1066,7 @@ func (obj *EmbdEtcd) AddWatcher(path string, callback func(re *RE) error, errChe
return awq.cancelFunc, nil
}
// rawAddWatcher adds a watcher and returns a cancel function to call to end it
// rawAddWatcher adds a watcher and returns a cancel function to call to end it.
func (obj *EmbdEtcd) rawAddWatcher(ctx context.Context, aw *AW) (func(), error) {
cancelCtx, cancelFunc := obj.CancelCtx(ctx)
go func(ctx context.Context) {
@@ -1142,7 +1143,7 @@ func (obj *EmbdEtcd) rawAddWatcher(ctx context.Context, aw *AW) (func(), error)
return cancelFunc, nil
}
// rawCallback is the companion to AddWatcher which runs the callback processing
// rawCallback is the companion to AddWatcher which runs the callback processing.
func rawCallback(ctx context.Context, re *RE) error {
var err = re.err // the watch event itself might have had an error
if err == nil {
@@ -1161,8 +1162,8 @@ func rawCallback(ctx context.Context, re *RE) error {
return err
}
// volunteerCallback runs to respond to the volunteer list change events
// functionally, it controls the adding and removing of members
// volunteerCallback runs to respond to the volunteer list change events.
// Functionally, it controls the adding and removing of members.
// FIXME: we might need to respond to member change/disconnect/shutdown events,
// see: https://github.com/coreos/etcd/issues/5277
func (obj *EmbdEtcd) volunteerCallback(re *RE) error {
@@ -1351,8 +1352,8 @@ func (obj *EmbdEtcd) volunteerCallback(re *RE) error {
return nil
}
// nominateCallback runs to respond to the nomination list change events
// functionally, it controls the starting and stopping of the server process
// nominateCallback runs to respond to the nomination list change events.
// Functionally, it controls the starting and stopping of the server process.
func (obj *EmbdEtcd) nominateCallback(re *RE) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: nominateCallback()")
@@ -1419,8 +1420,8 @@ func (obj *EmbdEtcd) nominateCallback(re *RE) error {
if re != nil {
retries = re.retries
}
// retry maxStartServerRetries times, then permanently fail
return &CtxRetriesErr{maxStartServerRetries - retries, fmt.Sprintf("Etcd: StartServer: Error: %+v", err)}
// retry MaxStartServerRetries times, then permanently fail
return &CtxRetriesErr{MaxStartServerRetries - retries, fmt.Sprintf("Etcd: StartServer: Error: %+v", err)}
}
if len(obj.endpoints) == 0 {
@@ -1504,7 +1505,7 @@ func (obj *EmbdEtcd) nominateCallback(re *RE) error {
return nil
}
// endpointCallback runs to respond to the endpoint list change events
// endpointCallback runs to respond to the endpoint list change events.
func (obj *EmbdEtcd) endpointCallback(re *RE) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: endpointCallback()")
@@ -1570,7 +1571,7 @@ func (obj *EmbdEtcd) endpointCallback(re *RE) error {
return nil
}
// idealClusterSizeCallback runs to respond to the ideal cluster size changes
// idealClusterSizeCallback runs to respond to the ideal cluster size changes.
func (obj *EmbdEtcd) idealClusterSizeCallback(re *RE) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: idealClusterSizeCallback()")
@@ -1604,8 +1605,8 @@ func (obj *EmbdEtcd) idealClusterSizeCallback(re *RE) error {
return nil
}
// LocalhostClientURLs returns the most localhost like URLs for direct connection
// this gets clients to talk to the local servers first before searching remotely
// LocalhostClientURLs returns the most localhost like URLs for direct connection.
// This gets clients to talk to the local servers first before searching remotely.
func (obj *EmbdEtcd) LocalhostClientURLs() etcdtypes.URLs {
// look through obj.clientURLs and return the localhost ones
urls := etcdtypes.URLs{}
@@ -1671,8 +1672,8 @@ func (obj *EmbdEtcd) StartServer(newCluster bool, peerURLsMap etcdtypes.URLsMap)
select {
case <-obj.server.Server.ReadyNotify(): // we hang here if things are bad
log.Printf("Etcd: StartServer: Done starting server!") // it didn't hang!
case <-time.After(time.Duration(maxStartServerTimeout) * time.Second):
e := fmt.Errorf("timeout of %d seconds reached", maxStartServerTimeout)
case <-time.After(time.Duration(MaxStartServerTimeout) * time.Second):
e := fmt.Errorf("timeout of %d seconds reached", MaxStartServerTimeout)
log.Printf("Etcd: StartServer: %s", e.Error())
obj.server.Server.Stop() // trigger a shutdown
obj.serverwg.Add(1) // add for the DestroyServer()
@@ -1690,12 +1691,16 @@ func (obj *EmbdEtcd) StartServer(newCluster bool, peerURLsMap etcdtypes.URLsMap)
//log.Fatal(<-obj.server.Err()) XXX
log.Printf("Etcd: StartServer: Server running...")
obj.memberID = uint64(obj.server.Server.ID()) // store member id for internal use
close(obj.serverReady) // send a signal
obj.serverwg.Add(1)
return nil
}
// DestroyServer shuts down the embedded etcd server portion
// ServerReady returns on a channel when the server has started successfully.
func (obj *EmbdEtcd) ServerReady() <-chan struct{} { return obj.serverReady }
// DestroyServer shuts down the embedded etcd server portion.
func (obj *EmbdEtcd) DestroyServer() error {
var err error
log.Printf("Etcd: DestroyServer: Destroying...")
@@ -1710,544 +1715,11 @@ func (obj *EmbdEtcd) DestroyServer() error {
}
obj.server = nil // important because this is used as an isRunning flag
log.Printf("Etcd: DestroyServer: Unlocking server...")
obj.serverReady = make(chan struct{}) // reset the signal
obj.serverwg.Done() // -1
return err
}
// TODO: Could all these Etcd*(obj *EmbdEtcd, ...) functions which deal with the
// interface between etcd paths and behaviour be grouped into a single struct ?
// Nominate nominates a particular client to be a server (peer)
func Nominate(obj *EmbdEtcd, hostname string, urls etcdtypes.URLs) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Nominate(%v): %v", hostname, urls.String())
defer log.Printf("Trace: Etcd: Nominate(%v): Finished!", hostname)
}
// nominate someone to be a server
nominate := fmt.Sprintf("/%s/nominated/%s", NS, hostname)
ops := []etcd.Op{} // list of ops in this txn
if urls != nil {
ops = append(ops, etcd.OpPut(nominate, urls.String())) // TODO: add a TTL? (etcd.WithLease)
} else { // delete message if set to erase
ops = append(ops, etcd.OpDelete(nominate))
}
if _, err := obj.Txn(nil, ops, nil); err != nil {
return fmt.Errorf("nominate failed") // exit in progress?
}
return nil
}
// Nominated returns a urls map of nominated etcd server volunteers
// NOTE: I know 'nominees' might be more correct, but is less consistent here
func Nominated(obj *EmbdEtcd) (etcdtypes.URLsMap, error) {
path := fmt.Sprintf("/%s/nominated/", NS)
keyMap, err := obj.Get(path, etcd.WithPrefix()) // map[string]string, bool
if err != nil {
return nil, fmt.Errorf("nominated isn't available: %v", err)
}
nominated := make(etcdtypes.URLsMap)
for key, val := range keyMap { // loop through directory of nominated
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of nominee
if val == "" { // skip "erased" values
continue
}
urls, err := etcdtypes.NewURLs(strings.Split(val, ","))
if err != nil {
return nil, fmt.Errorf("nominated data format error: %v", err)
}
nominated[name] = urls // add to map
if obj.flags.Debug {
log.Printf("Etcd: Nominated(%v): %v", name, val)
}
}
return nominated, nil
}
// Volunteer offers yourself up to be a server if needed
func Volunteer(obj *EmbdEtcd, urls etcdtypes.URLs) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Volunteer(%v): %v", obj.hostname, urls.String())
defer log.Printf("Trace: Etcd: Volunteer(%v): Finished!", obj.hostname)
}
// volunteer to be a server
volunteer := fmt.Sprintf("/%s/volunteers/%s", NS, obj.hostname)
ops := []etcd.Op{} // list of ops in this txn
if urls != nil {
// XXX: adding a TTL is crucial! (i think)
ops = append(ops, etcd.OpPut(volunteer, urls.String())) // value is usually a peer "serverURL"
} else { // delete message if set to erase
ops = append(ops, etcd.OpDelete(volunteer))
}
if _, err := obj.Txn(nil, ops, nil); err != nil {
return fmt.Errorf("volunteering failed") // exit in progress?
}
return nil
}
// Volunteers returns a urls map of available etcd server volunteers
func Volunteers(obj *EmbdEtcd) (etcdtypes.URLsMap, error) {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Volunteers()")
defer log.Printf("Trace: Etcd: Volunteers(): Finished!")
}
path := fmt.Sprintf("/%s/volunteers/", NS)
keyMap, err := obj.Get(path, etcd.WithPrefix())
if err != nil {
return nil, fmt.Errorf("volunteers aren't available: %v", err)
}
volunteers := make(etcdtypes.URLsMap)
for key, val := range keyMap { // loop through directory of volunteers
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of volunteer
if val == "" { // skip "erased" values
continue
}
urls, err := etcdtypes.NewURLs(strings.Split(val, ","))
if err != nil {
return nil, fmt.Errorf("volunteers data format error: %v", err)
}
volunteers[name] = urls // add to map
if obj.flags.Debug {
log.Printf("Etcd: Volunteer(%v): %v", name, val)
}
}
return volunteers, nil
}
// AdvertiseEndpoints advertises the list of available client endpoints
func AdvertiseEndpoints(obj *EmbdEtcd, urls etcdtypes.URLs) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: AdvertiseEndpoints(%v): %v", obj.hostname, urls.String())
defer log.Printf("Trace: Etcd: AdvertiseEndpoints(%v): Finished!", obj.hostname)
}
// advertise endpoints
endpoints := fmt.Sprintf("/%s/endpoints/%s", NS, obj.hostname)
ops := []etcd.Op{} // list of ops in this txn
if urls != nil {
// TODO: add a TTL? (etcd.WithLease)
ops = append(ops, etcd.OpPut(endpoints, urls.String())) // value is usually a "clientURL"
} else { // delete message if set to erase
ops = append(ops, etcd.OpDelete(endpoints))
}
if _, err := obj.Txn(nil, ops, nil); err != nil {
return fmt.Errorf("endpoint advertising failed") // exit in progress?
}
return nil
}
// Endpoints returns a urls map of available etcd server endpoints
func Endpoints(obj *EmbdEtcd) (etcdtypes.URLsMap, error) {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Endpoints()")
defer log.Printf("Trace: Etcd: Endpoints(): Finished!")
}
path := fmt.Sprintf("/%s/endpoints/", NS)
keyMap, err := obj.Get(path, etcd.WithPrefix())
if err != nil {
return nil, fmt.Errorf("endpoints aren't available: %v", err)
}
endpoints := make(etcdtypes.URLsMap)
for key, val := range keyMap { // loop through directory of endpoints
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of volunteer
if val == "" { // skip "erased" values
continue
}
urls, err := etcdtypes.NewURLs(strings.Split(val, ","))
if err != nil {
return nil, fmt.Errorf("endpoints data format error: %v", err)
}
endpoints[name] = urls // add to map
if obj.flags.Debug {
log.Printf("Etcd: Endpoint(%v): %v", name, val)
}
}
return endpoints, nil
}
// SetHostnameConverged sets whether a specific hostname is converged.
func SetHostnameConverged(obj *EmbdEtcd, hostname string, isConverged bool) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: SetHostnameConverged(%s): %v", hostname, isConverged)
defer log.Printf("Trace: Etcd: SetHostnameConverged(%v): Finished!", hostname)
}
converged := fmt.Sprintf("/%s/converged/%s", NS, hostname)
op := []etcd.Op{etcd.OpPut(converged, fmt.Sprintf("%t", isConverged))}
if _, err := obj.Txn(nil, op, nil); err != nil { // TODO: do we need a skipConv flag here too?
return fmt.Errorf("set converged failed") // exit in progress?
}
return nil
}
// HostnameConverged returns a map of every hostname's converged state.
func HostnameConverged(obj *EmbdEtcd) (map[string]bool, error) {
if obj.flags.Trace {
log.Printf("Trace: Etcd: HostnameConverged()")
defer log.Printf("Trace: Etcd: HostnameConverged(): Finished!")
}
path := fmt.Sprintf("/%s/converged/", NS)
keyMap, err := obj.ComplexGet(path, true, etcd.WithPrefix()) // don't un-converge
if err != nil {
return nil, fmt.Errorf("converged values aren't available: %v", err)
}
converged := make(map[string]bool)
for key, val := range keyMap { // loop through directory...
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of key
if val == "" { // skip "erased" values
continue
}
b, err := strconv.ParseBool(val)
if err != nil {
return nil, fmt.Errorf("converged data format error: %v", err)
}
converged[name] = b // add to map
}
return converged, nil
}
// AddHostnameConvergedWatcher adds a watcher with a callback that runs on
// hostname state changes.
func AddHostnameConvergedWatcher(obj *EmbdEtcd, callbackFn func(map[string]bool) error) (func(), error) {
path := fmt.Sprintf("/%s/converged/", NS)
internalCbFn := func(re *RE) error {
// TODO: get the value from the response, and apply delta...
// for now, just run a get operation which is easier to code!
m, err := HostnameConverged(obj)
if err != nil {
return err
}
return callbackFn(m) // call my function
}
return obj.AddWatcher(path, internalCbFn, true, true, etcd.WithPrefix()) // no block and no converger reset
}
// SetClusterSize sets the ideal target cluster size of etcd peers
func SetClusterSize(obj *EmbdEtcd, value uint16) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: SetClusterSize(): %v", value)
defer log.Printf("Trace: Etcd: SetClusterSize(): Finished!")
}
key := fmt.Sprintf("/%s/idealClusterSize", NS)
if err := obj.Set(key, strconv.FormatUint(uint64(value), 10)); err != nil {
return fmt.Errorf("function SetClusterSize failed: %v", err) // exit in progress?
}
return nil
}
// GetClusterSize gets the ideal target cluster size of etcd peers
func GetClusterSize(obj *EmbdEtcd) (uint16, error) {
key := fmt.Sprintf("/%s/idealClusterSize", NS)
keyMap, err := obj.Get(key)
if err != nil {
return 0, fmt.Errorf("function GetClusterSize failed: %v", err)
}
val, exists := keyMap[key]
if !exists || val == "" {
return 0, fmt.Errorf("function GetClusterSize failed: %v", err)
}
v, err := strconv.ParseUint(val, 10, 16)
if err != nil {
return 0, fmt.Errorf("function GetClusterSize failed: %v", err)
}
return uint16(v), nil
}
// MemberAdd adds a member to the cluster.
func MemberAdd(obj *EmbdEtcd, peerURLs etcdtypes.URLs) (*etcd.MemberAddResponse, error) {
//obj.Connect(false) // TODO: ?
ctx := context.Background()
var response *etcd.MemberAddResponse
var err error
for {
if obj.exiting { // the exit signal has been sent!
return nil, fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
response, err = obj.client.MemberAdd(ctx, peerURLs.StringSlice())
obj.rLock.RUnlock()
if err == nil {
break
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return nil, err
}
}
return response, nil
}
// MemberRemove removes a member by mID and returns if it worked, and also
// if there was an error. This is because it might have run without error, but
// the member wasn't found, for example.
func MemberRemove(obj *EmbdEtcd, mID uint64) (bool, error) {
//obj.Connect(false) // TODO: ?
ctx := context.Background()
for {
if obj.exiting { // the exit signal has been sent!
return false, fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
_, err := obj.client.MemberRemove(ctx, mID)
obj.rLock.RUnlock()
if err == nil {
break
} else if err == rpctypes.ErrMemberNotFound {
// if we get this, member already shut itself down :)
return false, nil
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return false, err
}
}
return true, nil
}
// Members returns information on cluster membership.
// The member ID's are the keys, because an empty names means unstarted!
// TODO: consider queueing this through the main loop with CtxError(ctx, err)
func Members(obj *EmbdEtcd) (map[uint64]string, error) {
//obj.Connect(false) // TODO: ?
ctx := context.Background()
var response *etcd.MemberListResponse
var err error
for {
if obj.exiting { // the exit signal has been sent!
return nil, fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
if obj.flags.Trace {
log.Printf("Trace: Etcd: Members(): Endpoints are: %v", obj.client.Endpoints())
}
response, err = obj.client.MemberList(ctx)
obj.rLock.RUnlock()
if err == nil {
break
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return nil, err
}
}
members := make(map[uint64]string)
for _, x := range response.Members {
members[x.ID] = x.Name // x.Name will be "" if unstarted!
}
return members, nil
}
// Leader returns the current leader of the etcd server cluster
func Leader(obj *EmbdEtcd) (string, error) {
//obj.Connect(false) // TODO: ?
var err error
membersMap := make(map[uint64]string)
if membersMap, err = Members(obj); err != nil {
return "", err
}
addresses := obj.LocalhostClientURLs() // heuristic, but probably correct
if len(addresses) == 0 {
// probably a programming error...
return "", fmt.Errorf("programming error")
}
endpoint := addresses[0].Host // FIXME: arbitrarily picked the first one
// part two
ctx := context.Background()
var response *etcd.StatusResponse
for {
if obj.exiting { // the exit signal has been sent!
return "", fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
response, err = obj.client.Maintenance.Status(ctx, endpoint)
obj.rLock.RUnlock()
if err == nil {
break
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return "", err
}
}
// isLeader: response.Header.MemberId == response.Leader
for id, name := range membersMap {
if id == response.Leader {
return name, nil
}
}
return "", fmt.Errorf("members map is not current") // not found
}
// WatchAll returns a channel that outputs a true bool when activity occurs
// TODO: Filter our watch (on the server side if possible) based on the
// collection prefixes and filters that we care about...
func WatchAll(obj *EmbdEtcd) chan bool {
ch := make(chan bool, 1) // buffer it so we can measure it
path := fmt.Sprintf("/%s/exported/", NS)
callback := func(re *RE) error {
// TODO: is this even needed? it used to happen on conn errors
log.Printf("Etcd: Watch: Path: %v", path) // event
if re == nil || re.response.Canceled {
return fmt.Errorf("watch is empty") // will cause a CtxError+retry
}
// we normally need to check if anything changed since the last
// event, since a set (export) with no changes still causes the
// watcher to trigger and this would cause an infinite loop. we
// don't need to do this check anymore because we do the export
// transactionally, and only if a change is needed. since it is
// atomic, all the changes arrive together which avoids dupes!!
if len(ch) == 0 { // send event only if one isn't pending
// this check avoids multiple events all queueing up and then
// being released continuously long after the changes stopped
// do not block!
ch <- true // event
}
return nil
}
_, _ = obj.AddWatcher(path, callback, true, false, etcd.WithPrefix()) // no need to check errors
return ch
}
// SetResources exports all of the resources which we pass in to etcd
func SetResources(obj *EmbdEtcd, hostname string, resourceList []resources.Res) error {
// key structure is /$NS/exported/$hostname/resources/$uid = $data
var kindFilter []string // empty to get from everyone
hostnameFilter := []string{hostname}
// this is not a race because we should only be reading keys which we
// set, and there should not be any contention with other hosts here!
originals, err := GetResources(obj, hostnameFilter, kindFilter)
if err != nil {
return err
}
if len(originals) == 0 && len(resourceList) == 0 { // special case of no add or del
return nil
}
ifs := []etcd.Cmp{} // list matching the desired state
ops := []etcd.Op{} // list of ops in this transaction
for _, res := range resourceList {
if res.Kind() == "" {
log.Fatalf("Etcd: SetResources: Error: Empty kind: %v", res.GetName())
}
uid := fmt.Sprintf("%s/%s", res.Kind(), res.GetName())
path := fmt.Sprintf("/%s/exported/%s/resources/%s", NS, hostname, uid)
if data, err := resources.ResToB64(res); err == nil {
ifs = append(ifs, etcd.Compare(etcd.Value(path), "=", data)) // desired state
ops = append(ops, etcd.OpPut(path, data))
} else {
return fmt.Errorf("can't convert to B64: %v", err)
}
}
match := func(res resources.Res, resourceList []resources.Res) bool { // helper lambda
for _, x := range resourceList {
if res.Kind() == x.Kind() && res.GetName() == x.GetName() {
return true
}
}
return false
}
hasDeletes := false
// delete old, now unused resources here...
for _, res := range originals {
if res.Kind() == "" {
log.Fatalf("Etcd: SetResources: Error: Empty kind: %v", res.GetName())
}
uid := fmt.Sprintf("%s/%s", res.Kind(), res.GetName())
path := fmt.Sprintf("/%s/exported/%s/resources/%s", NS, hostname, uid)
if match(res, resourceList) { // if we match, no need to delete!
continue
}
ops = append(ops, etcd.OpDelete(path))
hasDeletes = true
}
// if everything is already correct, do nothing, otherwise, run the ops!
// it's important to do this in one transaction, and atomically, because
// this way, we only generate one watch event, and only when it's needed
if hasDeletes { // always run, ifs don't matter
_, err = obj.Txn(nil, ops, nil) // TODO: does this run? it should!
} else {
_, err = obj.Txn(ifs, nil, ops) // TODO: do we need to look at response?
}
return err
}
// GetResources collects all of the resources which match a filter from etcd
// If the kindfilter or hostnameFilter is empty, then it assumes no filtering...
// TODO: Expand this with a more powerful filter based on what we eventually
// support in our collect DSL. Ideally a server side filter like WithFilter()
// We could do this if the pattern was /$NS/exported/$kind/$hostname/$uid = $data
func GetResources(obj *EmbdEtcd, hostnameFilter, kindFilter []string) ([]resources.Res, error) {
// key structure is /$NS/exported/$hostname/resources/$uid = $data
path := fmt.Sprintf("/%s/exported/", NS)
resourceList := []resources.Res{}
keyMap, err := obj.Get(path, etcd.WithPrefix(), etcd.WithSort(etcd.SortByKey, etcd.SortAscend))
if err != nil {
return nil, fmt.Errorf("could not get resources: %v", err)
}
for key, val := range keyMap {
if !strings.HasPrefix(key, path) { // sanity check
continue
}
str := strings.Split(key[len(path):], "/")
if len(str) != 4 {
return nil, fmt.Errorf("unexpected chunk count")
}
hostname, r, kind, name := str[0], str[1], str[2], str[3]
if r != "resources" {
return nil, fmt.Errorf("unexpected chunk pattern")
}
if kind == "" {
return nil, fmt.Errorf("unexpected kind chunk")
}
// FIXME: ideally this would be a server side filter instead!
if len(hostnameFilter) > 0 && !util.StrInList(hostname, hostnameFilter) {
continue
}
// FIXME: ideally this would be a server side filter instead!
if len(kindFilter) > 0 && !util.StrInList(kind, kindFilter) {
continue
}
if obj, err := resources.B64ToRes(val); err == nil {
obj.SetKind(kind) // cheap init
log.Printf("Etcd: Get: (Hostname, Kind, Name): (%s, %s, %s)", hostname, kind, name)
resourceList = append(resourceList, obj)
} else {
return nil, fmt.Errorf("can't convert from B64: %v", err)
}
}
return resourceList, nil
}
//func UrlRemoveScheme(urls etcdtypes.URLs) []string {
// strs := []string{}
// for _, u := range urls {
@@ -2256,7 +1728,7 @@ func GetResources(obj *EmbdEtcd, hostnameFilter, kindFilter []string) ([]resourc
// return strs
//}
// ApplyDeltaEvents modifies a URLsMap with the deltas from a WatchResponse
// ApplyDeltaEvents modifies a URLsMap with the deltas from a WatchResponse.
func ApplyDeltaEvents(re *RE, urlsmap etcdtypes.URLsMap) (etcdtypes.URLsMap, error) {
if re == nil { // passthrough
return urlsmap, nil

412
etcd/methods.go Normal file

@@ -0,0 +1,412 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package etcd
import (
"fmt"
"log"
"strconv"
"strings"
etcd "github.com/coreos/etcd/clientv3"
rpctypes "github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
etcdtypes "github.com/coreos/etcd/pkg/types"
context "golang.org/x/net/context"
)
// TODO: Could all these Etcd*(obj *EmbdEtcd, ...) functions which deal with the
// interface between etcd paths and behaviour be grouped into a single struct ?
// Nominate nominates a particular client to be a server (peer).
func Nominate(obj *EmbdEtcd, hostname string, urls etcdtypes.URLs) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Nominate(%v): %v", hostname, urls.String())
defer log.Printf("Trace: Etcd: Nominate(%v): Finished!", hostname)
}
// nominate someone to be a server
nominate := fmt.Sprintf("/%s/nominated/%s", NS, hostname)
ops := []etcd.Op{} // list of ops in this txn
if urls != nil {
ops = append(ops, etcd.OpPut(nominate, urls.String())) // TODO: add a TTL? (etcd.WithLease)
} else { // delete message if set to erase
ops = append(ops, etcd.OpDelete(nominate))
}
if _, err := obj.Txn(nil, ops, nil); err != nil {
return fmt.Errorf("nominate failed") // exit in progress?
}
return nil
}
// Nominated returns a urls map of nominated etcd server volunteers.
// NOTE: I know 'nominees' might be more correct, but is less consistent here
func Nominated(obj *EmbdEtcd) (etcdtypes.URLsMap, error) {
path := fmt.Sprintf("/%s/nominated/", NS)
keyMap, err := obj.Get(path, etcd.WithPrefix()) // map[string]string, bool
if err != nil {
return nil, fmt.Errorf("nominated isn't available: %v", err)
}
nominated := make(etcdtypes.URLsMap)
for key, val := range keyMap { // loop through directory of nominated
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of nominee
if val == "" { // skip "erased" values
continue
}
urls, err := etcdtypes.NewURLs(strings.Split(val, ","))
if err != nil {
return nil, fmt.Errorf("nominated data format error: %v", err)
}
nominated[name] = urls // add to map
if obj.flags.Debug {
log.Printf("Etcd: Nominated(%v): %v", name, val)
}
}
return nominated, nil
}
// Volunteer offers yourself up to be a server if needed.
func Volunteer(obj *EmbdEtcd, urls etcdtypes.URLs) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Volunteer(%v): %v", obj.hostname, urls.String())
defer log.Printf("Trace: Etcd: Volunteer(%v): Finished!", obj.hostname)
}
// volunteer to be a server
volunteer := fmt.Sprintf("/%s/volunteers/%s", NS, obj.hostname)
ops := []etcd.Op{} // list of ops in this txn
if urls != nil {
// XXX: adding a TTL is crucial! (i think)
ops = append(ops, etcd.OpPut(volunteer, urls.String())) // value is usually a peer "serverURL"
} else { // delete message if set to erase
ops = append(ops, etcd.OpDelete(volunteer))
}
if _, err := obj.Txn(nil, ops, nil); err != nil {
return fmt.Errorf("volunteering failed") // exit in progress?
}
return nil
}
// Volunteers returns a urls map of available etcd server volunteers.
func Volunteers(obj *EmbdEtcd) (etcdtypes.URLsMap, error) {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Volunteers()")
defer log.Printf("Trace: Etcd: Volunteers(): Finished!")
}
path := fmt.Sprintf("/%s/volunteers/", NS)
keyMap, err := obj.Get(path, etcd.WithPrefix())
if err != nil {
return nil, fmt.Errorf("volunteers aren't available: %v", err)
}
volunteers := make(etcdtypes.URLsMap)
for key, val := range keyMap { // loop through directory of volunteers
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of volunteer
if val == "" { // skip "erased" values
continue
}
urls, err := etcdtypes.NewURLs(strings.Split(val, ","))
if err != nil {
return nil, fmt.Errorf("volunteers data format error: %v", err)
}
volunteers[name] = urls // add to map
if obj.flags.Debug {
log.Printf("Etcd: Volunteer(%v): %v", name, val)
}
}
return volunteers, nil
}
// AdvertiseEndpoints advertises the list of available client endpoints.
func AdvertiseEndpoints(obj *EmbdEtcd, urls etcdtypes.URLs) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: AdvertiseEndpoints(%v): %v", obj.hostname, urls.String())
defer log.Printf("Trace: Etcd: AdvertiseEndpoints(%v): Finished!", obj.hostname)
}
// advertise endpoints
endpoints := fmt.Sprintf("/%s/endpoints/%s", NS, obj.hostname)
ops := []etcd.Op{} // list of ops in this txn
if urls != nil {
// TODO: add a TTL? (etcd.WithLease)
ops = append(ops, etcd.OpPut(endpoints, urls.String())) // value is usually a "clientURL"
} else { // delete message if set to erase
ops = append(ops, etcd.OpDelete(endpoints))
}
if _, err := obj.Txn(nil, ops, nil); err != nil {
return fmt.Errorf("endpoint advertising failed") // exit in progress?
}
return nil
}
// Endpoints returns a urls map of available etcd server endpoints.
func Endpoints(obj *EmbdEtcd) (etcdtypes.URLsMap, error) {
if obj.flags.Trace {
log.Printf("Trace: Etcd: Endpoints()")
defer log.Printf("Trace: Etcd: Endpoints(): Finished!")
}
path := fmt.Sprintf("/%s/endpoints/", NS)
keyMap, err := obj.Get(path, etcd.WithPrefix())
if err != nil {
return nil, fmt.Errorf("endpoints aren't available: %v", err)
}
endpoints := make(etcdtypes.URLsMap)
for key, val := range keyMap { // loop through directory of endpoints
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of volunteer
if val == "" { // skip "erased" values
continue
}
urls, err := etcdtypes.NewURLs(strings.Split(val, ","))
if err != nil {
return nil, fmt.Errorf("endpoints data format error: %v", err)
}
endpoints[name] = urls // add to map
if obj.flags.Debug {
log.Printf("Etcd: Endpoint(%v): %v", name, val)
}
}
return endpoints, nil
}
// SetHostnameConverged sets whether a specific hostname is converged.
func SetHostnameConverged(obj *EmbdEtcd, hostname string, isConverged bool) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: SetHostnameConverged(%s): %v", hostname, isConverged)
defer log.Printf("Trace: Etcd: SetHostnameConverged(%v): Finished!", hostname)
}
converged := fmt.Sprintf("/%s/converged/%s", NS, hostname)
op := []etcd.Op{etcd.OpPut(converged, fmt.Sprintf("%t", isConverged))}
if _, err := obj.Txn(nil, op, nil); err != nil { // TODO: do we need a skipConv flag here too?
return fmt.Errorf("set converged failed") // exit in progress?
}
return nil
}
// HostnameConverged returns a map of every hostname's converged state.
func HostnameConverged(obj *EmbdEtcd) (map[string]bool, error) {
if obj.flags.Trace {
log.Printf("Trace: Etcd: HostnameConverged()")
defer log.Printf("Trace: Etcd: HostnameConverged(): Finished!")
}
path := fmt.Sprintf("/%s/converged/", NS)
keyMap, err := obj.ComplexGet(path, true, etcd.WithPrefix()) // don't un-converge
if err != nil {
return nil, fmt.Errorf("converged values aren't available: %v", err)
}
converged := make(map[string]bool)
for key, val := range keyMap { // loop through directory...
if !strings.HasPrefix(key, path) {
continue
}
name := key[len(path):] // get name of key
if val == "" { // skip "erased" values
continue
}
b, err := strconv.ParseBool(val)
if err != nil {
return nil, fmt.Errorf("converged data format error: %v", err)
}
converged[name] = b // add to map
}
return converged, nil
}
// AddHostnameConvergedWatcher adds a watcher with a callback that runs on
// hostname state changes.
func AddHostnameConvergedWatcher(obj *EmbdEtcd, callbackFn func(map[string]bool) error) (func(), error) {
path := fmt.Sprintf("/%s/converged/", NS)
internalCbFn := func(re *RE) error {
// TODO: get the value from the response, and apply delta...
// for now, just run a get operation which is easier to code!
m, err := HostnameConverged(obj)
if err != nil {
return err
}
return callbackFn(m) // call my function
}
return obj.AddWatcher(path, internalCbFn, true, true, etcd.WithPrefix()) // no block and no converger reset
}
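// Editor's note: a hypothetical sketch (not part of this change) of wiring
// the converged watcher; it assumes an already-started *EmbdEtcd and simply
// logs the per-hostname state each time the callback fires. The returned
// function unregisters the watcher.
func watchConverged(obj *EmbdEtcd) (func(), error) {
	callback := func(m map[string]bool) error {
		for host, ok := range m {
			log.Printf("Etcd: converged(%s): %t", host, ok)
		}
		return nil
	}
	return AddHostnameConvergedWatcher(obj, callback)
}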
// SetClusterSize sets the ideal target cluster size of etcd peers.
func SetClusterSize(obj *EmbdEtcd, value uint16) error {
if obj.flags.Trace {
log.Printf("Trace: Etcd: SetClusterSize(): %v", value)
defer log.Printf("Trace: Etcd: SetClusterSize(): Finished!")
}
key := fmt.Sprintf("/%s/idealClusterSize", NS)
if err := obj.Set(key, strconv.FormatUint(uint64(value), 10)); err != nil {
return fmt.Errorf("function SetClusterSize failed: %v", err) // exit in progress?
}
return nil
}
// GetClusterSize gets the ideal target cluster size of etcd peers.
func GetClusterSize(obj *EmbdEtcd) (uint16, error) {
key := fmt.Sprintf("/%s/idealClusterSize", NS)
keyMap, err := obj.Get(key)
if err != nil {
return 0, fmt.Errorf("function GetClusterSize failed: %v", err)
}
val, exists := keyMap[key]
if !exists || val == "" {
return 0, fmt.Errorf("function GetClusterSize failed: idealClusterSize is missing or empty")
}
v, err := strconv.ParseUint(val, 10, 16)
if err != nil {
return 0, fmt.Errorf("function GetClusterSize failed: %v", err)
}
return uint16(v), nil
}
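// Editor's note: a hypothetical sketch (not part of this change) pairing the
// two calls above; it assumes an initialized *EmbdEtcd. It only writes the
// ideal cluster size when the stored value differs from the desired one.
func ensureClusterSize(obj *EmbdEtcd, want uint16) error {
	current, err := GetClusterSize(obj)
	if err == nil && current == want {
		return nil // already at the desired size
	}
	// on a read error we still try to write the desired value
	return SetClusterSize(obj, want)
}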
// MemberAdd adds a member to the cluster.
func MemberAdd(obj *EmbdEtcd, peerURLs etcdtypes.URLs) (*etcd.MemberAddResponse, error) {
//obj.Connect(false) // TODO: ?
ctx := context.Background()
var response *etcd.MemberAddResponse
var err error
for {
if obj.exiting { // the exit signal has been sent!
return nil, fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
response, err = obj.client.MemberAdd(ctx, peerURLs.StringSlice())
obj.rLock.RUnlock()
if err == nil {
break
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return nil, err
}
}
return response, nil
}
// MemberRemove removes a member by mID and returns whether it worked, along
// with any error. Both are needed because the call can complete without
// error even though the member wasn't found, for example.
func MemberRemove(obj *EmbdEtcd, mID uint64) (bool, error) {
//obj.Connect(false) // TODO: ?
ctx := context.Background()
for {
if obj.exiting { // the exit signal has been sent!
return false, fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
_, err := obj.client.MemberRemove(ctx, mID)
obj.rLock.RUnlock()
if err == nil {
break
} else if err == rpctypes.ErrMemberNotFound {
// if we get this, member already shut itself down :)
return false, nil
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return false, err
}
}
return true, nil
}
// Members returns information on cluster membership.
// The member IDs are the keys, because an empty name means unstarted!
// TODO: consider queueing this through the main loop with CtxError(ctx, err)
func Members(obj *EmbdEtcd) (map[uint64]string, error) {
//obj.Connect(false) // TODO: ?
ctx := context.Background()
var response *etcd.MemberListResponse
var err error
for {
if obj.exiting { // the exit signal has been sent!
return nil, fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
if obj.flags.Trace {
log.Printf("Trace: Etcd: Members(): Endpoints are: %v", obj.client.Endpoints())
}
response, err = obj.client.MemberList(ctx)
obj.rLock.RUnlock()
if err == nil {
break
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return nil, err
}
}
members := make(map[uint64]string)
for _, x := range response.Members {
members[x.ID] = x.Name // x.Name will be "" if unstarted!
}
return members, nil
}
// Leader returns the current leader of the etcd server cluster.
func Leader(obj *EmbdEtcd) (string, error) {
//obj.Connect(false) // TODO: ?
var err error
membersMap := make(map[uint64]string)
if membersMap, err = Members(obj); err != nil {
return "", err
}
addresses := obj.LocalhostClientURLs() // heuristic, but probably correct
if len(addresses) == 0 {
// probably a programming error...
return "", fmt.Errorf("programming error")
}
endpoint := addresses[0].Host // FIXME: arbitrarily picked the first one
// part two
ctx := context.Background()
var response *etcd.StatusResponse
for {
if obj.exiting { // the exit signal has been sent!
return "", fmt.Errorf("exiting etcd")
}
obj.rLock.RLock()
response, err = obj.client.Maintenance.Status(ctx, endpoint)
obj.rLock.RUnlock()
if err == nil {
break
}
if ctx, err = obj.CtxError(ctx, err); err != nil {
return "", err
}
}
// isLeader: response.Header.MemberId == response.Leader
for id, name := range membersMap {
if id == response.Leader {
return name, nil
}
}
return "", fmt.Errorf("members map is not current") // not found
}
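// Editor's note: a hypothetical sketch (not part of this change) combining
// Members() and Leader(); it assumes an initialized *EmbdEtcd and logs the
// cluster membership along with the current leader.
func logClusterState(obj *EmbdEtcd) error {
	members, err := Members(obj)
	if err != nil {
		return err
	}
	leader, err := Leader(obj)
	if err != nil {
		return err
	}
	for id, name := range members {
		if name == "" {
			name = "<unstarted>" // an empty name means the member hasn't started yet
		}
		log.Printf("Etcd: member %d: %s", id, name)
	}
	log.Printf("Etcd: leader: %s", leader)
	return nil
}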

etcd/resources.go Normal file

@@ -0,0 +1,182 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package etcd
import (
"fmt"
"log"
"strings"
"github.com/purpleidea/mgmt/resources"
"github.com/purpleidea/mgmt/util"
etcd "github.com/coreos/etcd/clientv3"
)
// WatchResources returns a channel that outputs events when exported resources
// change.
// TODO: Filter our watch (on the server side if possible) based on the
// collection prefixes and filters that we care about...
func WatchResources(obj *EmbdEtcd) chan error {
ch := make(chan error, 1) // buffer it so we can measure it
path := fmt.Sprintf("/%s/exported/", NS)
callback := func(re *RE) error {
// TODO: is this even needed? it used to happen on conn errors
log.Printf("Etcd: Watch: Path: %v", path) // event
if re == nil || re.response.Canceled {
return fmt.Errorf("watch is empty") // will cause a CtxError+retry
}
// we normally need to check if anything changed since the last
// event, since a set (export) with no changes still causes the
// watcher to trigger and this would cause an infinite loop. we
// don't need to do this check anymore because we do the export
// transactionally, and only if a change is needed. since it is
// atomic, all the changes arrive together which avoids dupes!!
if len(ch) == 0 { // send event only if one isn't pending
// this check avoids multiple events all queueing up and then
// being released continuously long after the changes stopped
// do not block!
ch <- nil // event
}
return nil
}
_, _ = obj.AddWatcher(path, callback, true, false, etcd.WithPrefix()) // no need to check errors
return ch
}
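// Editor's note: a hypothetical consumer sketch (not part of this change); it
// assumes an initialized *EmbdEtcd. Each event on the watch channel triggers
// a fresh collection via GetResources (defined below).
func collectOnChange(obj *EmbdEtcd, hostnameFilter, kindFilter []string) {
	ch := WatchResources(obj)
	for range ch { // one (possibly coalesced) event per exported change
		resourceList, err := GetResources(obj, hostnameFilter, kindFilter)
		if err != nil {
			log.Printf("Etcd: collect failed: %v", err)
			continue
		}
		log.Printf("Etcd: collected %d exported resources", len(resourceList))
	}
}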
// SetResources exports all of the resources which we pass in to etcd.
func SetResources(obj *EmbdEtcd, hostname string, resourceList []resources.Res) error {
// key structure is /$NS/exported/$hostname/resources/$uid = $data
var kindFilter []string // empty to get from everyone
hostnameFilter := []string{hostname}
// this is not a race because we should only be reading keys which we
// set, and there should not be any contention with other hosts here!
originals, err := GetResources(obj, hostnameFilter, kindFilter)
if err != nil {
return err
}
if len(originals) == 0 && len(resourceList) == 0 { // special case of no add or del
return nil
}
ifs := []etcd.Cmp{} // list matching the desired state
ops := []etcd.Op{} // list of ops in this transaction
for _, res := range resourceList {
if res.GetKind() == "" {
log.Fatalf("Etcd: SetResources: Error: Empty kind: %v", res.GetName())
}
uid := fmt.Sprintf("%s/%s", res.GetKind(), res.GetName())
path := fmt.Sprintf("/%s/exported/%s/resources/%s", NS, hostname, uid)
if data, err := resources.ResToB64(res); err == nil {
ifs = append(ifs, etcd.Compare(etcd.Value(path), "=", data)) // desired state
ops = append(ops, etcd.OpPut(path, data))
} else {
return fmt.Errorf("can't convert to B64: %v", err)
}
}
match := func(res resources.Res, resourceList []resources.Res) bool { // helper lambda
for _, x := range resourceList {
if res.GetKind() == x.GetKind() && res.GetName() == x.GetName() {
return true
}
}
return false
}
hasDeletes := false
// delete old, now unused resources here...
for _, res := range originals {
if res.GetKind() == "" {
log.Fatalf("Etcd: SetResources: Error: Empty kind: %v", res.GetName())
}
uid := fmt.Sprintf("%s/%s", res.GetKind(), res.GetName())
path := fmt.Sprintf("/%s/exported/%s/resources/%s", NS, hostname, uid)
if match(res, resourceList) { // if we match, no need to delete!
continue
}
ops = append(ops, etcd.OpDelete(path))
hasDeletes = true
}
// if everything is already correct, do nothing, otherwise, run the ops!
// it's important to do this in one transaction, and atomically, because
// this way, we only generate one watch event, and only when it's needed
if hasDeletes { // always run, ifs don't matter
_, err = obj.Txn(nil, ops, nil) // TODO: does this run? it should!
} else {
_, err = obj.Txn(ifs, nil, ops) // TODO: do we need to look at response?
}
return err
}
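// Editor's note: a hypothetical export sketch (not part of this change); it
// assumes an initialized *EmbdEtcd and builds a single noop resource using
// the resources API shown elsewhere in this change (BaseRes, NoopRes,
// DefaultMetaParams). Re-exporting an unchanged list results in a no-op
// transaction, so no extra watch events are generated.
func exportNoop(obj *EmbdEtcd, hostname string) error {
	n := &resources.NoopRes{
		BaseRes: resources.BaseRes{
			Name:       "noop0",
			MetaParams: resources.DefaultMetaParams,
		},
	}
	return SetResources(obj, hostname, []resources.Res{n})
}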
// GetResources collects all of the resources which match a filter from etcd.
// If the kindFilter or hostnameFilter is empty, then it assumes no filtering...
// TODO: Expand this with a more powerful filter based on what we eventually
// support in our collect DSL. Ideally a server side filter like WithFilter()
// We could do this if the pattern was /$NS/exported/$kind/$hostname/$uid = $data.
func GetResources(obj *EmbdEtcd, hostnameFilter, kindFilter []string) ([]resources.Res, error) {
// key structure is /$NS/exported/$hostname/resources/$uid = $data
path := fmt.Sprintf("/%s/exported/", NS)
resourceList := []resources.Res{}
keyMap, err := obj.Get(path, etcd.WithPrefix(), etcd.WithSort(etcd.SortByKey, etcd.SortAscend))
if err != nil {
return nil, fmt.Errorf("could not get resources: %v", err)
}
for key, val := range keyMap {
if !strings.HasPrefix(key, path) { // sanity check
continue
}
str := strings.Split(key[len(path):], "/")
if len(str) != 4 {
return nil, fmt.Errorf("unexpected chunk count")
}
hostname, r, kind, name := str[0], str[1], str[2], str[3]
if r != "resources" {
return nil, fmt.Errorf("unexpected chunk pattern")
}
if kind == "" {
return nil, fmt.Errorf("unexpected kind chunk")
}
// FIXME: ideally this would be a server side filter instead!
if len(hostnameFilter) > 0 && !util.StrInList(hostname, hostnameFilter) {
continue
}
// FIXME: ideally this would be a server side filter instead!
if len(kindFilter) > 0 && !util.StrInList(kind, kindFilter) {
continue
}
if obj, err := resources.B64ToRes(val); err == nil {
obj.SetKind(kind) // cheap init
log.Printf("Etcd: Get: (Hostname, Kind, Name): (%s, %s, %s)", hostname, kind, name)
resourceList = append(resourceList, obj)
} else {
return nil, fmt.Errorf("can't convert from B64: %v", err)
}
}
return resourceList, nil
}
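// Editor's note: a hypothetical filter sketch (not part of this change); it
// assumes an initialized *EmbdEtcd and that "file" is a valid resource kind.
// An empty hostname filter means resources from every host are collected.
func collectFiles(obj *EmbdEtcd) ([]resources.Res, error) {
	return GetResources(obj, []string{}, []string{"file"})
}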


@@ -18,20 +18,22 @@
package etcd
import (
"errors"
"fmt"
"strings"
"github.com/purpleidea/mgmt/util"
etcd "github.com/coreos/etcd/clientv3"
errwrap "github.com/pkg/errors"
)
// ErrNotExist is returned when GetStr can not find the requested key.
// TODO: https://dave.cheney.net/2016/04/07/constant-errors
var ErrNotExist = errors.New("errNotExist")
// WatchStr returns a channel which spits out events on key activity.
// FIXME: It should close the channel when it's done, and spit out errors when
// something goes wrong.
func WatchStr(obj *EmbdEtcd, key string) chan error {
// new key structure is /$NS/strings/$key/$hostname = $data
// new key structure is /$NS/strings/$key = $data
path := fmt.Sprintf("/%s/strings/%s", NS, key)
ch := make(chan error, 1)
// FIXME: fix our API so that we get a close event on shutdown.
@@ -50,50 +52,38 @@ func WatchStr(obj *EmbdEtcd, key string) chan error {
return ch
}
// GetStr collects all of the strings which match a namespace in etcd.
func GetStr(obj *EmbdEtcd, hostnameFilter []string, key string) (map[string]string, error) {
// old key structure is /$NS/strings/$hostname/$key = $data
// new key structure is /$NS/strings/$key/$hostname = $data
// FIXME: if we have the $key as the last token (old key structure), we
// can allow the key to contain the slash char, otherwise we need to
// verify that one isn't present in the input string.
// GetStr collects the string which matches a global namespace in etcd.
func GetStr(obj *EmbdEtcd, key string) (string, error) {
// new key structure is /$NS/strings/$key = $data
path := fmt.Sprintf("/%s/strings/%s", NS, key)
keyMap, err := obj.Get(path, etcd.WithPrefix(), etcd.WithSort(etcd.SortByKey, etcd.SortAscend))
keyMap, err := obj.Get(path, etcd.WithPrefix())
if err != nil {
return nil, errwrap.Wrapf(err, "could not get strings in: %s", key)
}
result := make(map[string]string)
for key, val := range keyMap {
if !strings.HasPrefix(key, path) { // sanity check
continue
return "", errwrap.Wrapf(err, "could not get strings in: %s", key)
}
str := strings.Split(key[len(path):], "/")
if len(str) != 2 {
return nil, fmt.Errorf("unexpected chunk count of %d", len(str))
}
_, hostname := str[0], str[1]
if hostname == "" {
return nil, fmt.Errorf("unexpected chunk length of %d", len(hostname))
if len(keyMap) == 0 {
return "", ErrNotExist
}
// FIXME: ideally this would be a server side filter instead!
if len(hostnameFilter) > 0 && !util.StrInList(hostname, hostnameFilter) {
continue
if count := len(keyMap); count != 1 {
return "", fmt.Errorf("returned %d entries", count)
}
//log.Printf("Etcd: GetStr(%s): (Hostname, Data): (%s, %s)", key, hostname, val)
result[hostname] = val
val, exists := keyMap[path]
if !exists {
return "", fmt.Errorf("path `%s` is missing", path)
}
return result, nil
//log.Printf("Etcd: GetStr(%s): %s", key, val)
return val, nil
}
// SetStr sets a key and hostname pair to a certain value. If the value is nil,
// then it deletes the key. Otherwise the value should point to a string.
// SetStr sets a key and hostname pair to a certain value. If the value is
// nil, then it deletes the key. Otherwise the value should point to a string.
// TODO: TTL or delete disconnect?
func SetStr(obj *EmbdEtcd, hostname, key string, data *string) error {
// key structure is /$NS/strings/$key/$hostname = $data
path := fmt.Sprintf("/%s/strings/%s/%s", NS, key, hostname)
func SetStr(obj *EmbdEtcd, key string, data *string) error {
// key structure is /$NS/strings/$key = $data
path := fmt.Sprintf("/%s/strings/%s", NS, key)
ifs := []etcd.Cmp{} // list matching the desired state
ops := []etcd.Op{} // list of ops in this transaction (then)
els := []etcd.Op{} // list of ops in this transaction (else)

etcd/strmap.go Normal file

@@ -0,0 +1,115 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package etcd
import (
"fmt"
"strings"
"github.com/purpleidea/mgmt/util"
etcd "github.com/coreos/etcd/clientv3"
errwrap "github.com/pkg/errors"
)
// WatchStrMap returns a channel which spits out events on key activity.
// FIXME: It should close the channel when it's done, and spit out errors when
// something goes wrong.
func WatchStrMap(obj *EmbdEtcd, key string) chan error {
// new key structure is /$NS/strings/$key/$hostname = $data
path := fmt.Sprintf("/%s/strings/%s", NS, key)
ch := make(chan error, 1)
// FIXME: fix our API so that we get a close event on shutdown.
callback := func(re *RE) error {
// TODO: is this even needed? it used to happen on conn errors
//log.Printf("Etcd: Watch: Path: %v", path) // event
if re == nil || re.response.Canceled {
return fmt.Errorf("watch is empty") // will cause a CtxError+retry
}
if len(ch) == 0 { // send event only if one isn't pending
ch <- nil // event
}
return nil
}
_, _ = obj.AddWatcher(path, callback, true, false, etcd.WithPrefix()) // no need to check errors
return ch
}
// GetStrMap collects all of the strings which match a namespace in etcd.
func GetStrMap(obj *EmbdEtcd, hostnameFilter []string, key string) (map[string]string, error) {
// old key structure is /$NS/strings/$hostname/$key = $data
// new key structure is /$NS/strings/$key/$hostname = $data
// FIXME: if we have the $key as the last token (old key structure), we
// can allow the key to contain the slash char, otherwise we need to
// verify that one isn't present in the input string.
path := fmt.Sprintf("/%s/strings/%s", NS, key)
keyMap, err := obj.Get(path, etcd.WithPrefix(), etcd.WithSort(etcd.SortByKey, etcd.SortAscend))
if err != nil {
return nil, errwrap.Wrapf(err, "could not get strings in: %s", key)
}
result := make(map[string]string)
for key, val := range keyMap {
if !strings.HasPrefix(key, path) { // sanity check
continue
}
str := strings.Split(key[len(path):], "/")
if len(str) != 2 {
return nil, fmt.Errorf("unexpected chunk count of %d", len(str))
}
_, hostname := str[0], str[1]
if hostname == "" {
return nil, fmt.Errorf("unexpected chunk length of %d", len(hostname))
}
// FIXME: ideally this would be a server side filter instead!
if len(hostnameFilter) > 0 && !util.StrInList(hostname, hostnameFilter) {
continue
}
//log.Printf("Etcd: GetStr(%s): (Hostname, Data): (%s, %s)", key, hostname, val)
result[hostname] = val
}
return result, nil
}
// SetStrMap sets a key and hostname pair to a certain value. If the value is
// nil, then it deletes the key. Otherwise the value should point to a string.
// TODO: TTL or delete disconnect?
func SetStrMap(obj *EmbdEtcd, hostname, key string, data *string) error {
// key structure is /$NS/strings/$key/$hostname = $data
path := fmt.Sprintf("/%s/strings/%s/%s", NS, key, hostname)
ifs := []etcd.Cmp{} // list matching the desired state
ops := []etcd.Op{} // list of ops in this transaction (then)
els := []etcd.Op{} // list of ops in this transaction (else)
if data == nil { // perform a delete
// TODO: use https://github.com/coreos/etcd/pull/7417 if merged
//ifs = append(ifs, etcd.KeyExists(path))
ifs = append(ifs, etcd.Compare(etcd.Version(path), ">", 0))
ops = append(ops, etcd.OpDelete(path))
} else {
data := *data // get the real value
ifs = append(ifs, etcd.Compare(etcd.Value(path), "=", data)) // desired state
els = append(els, etcd.OpPut(path, data))
}
// it's important to do this in one transaction, and atomically, because
// this way, we only generate one watch event, and only when it's needed
_, err := obj.Txn(ifs, ops, els) // TODO: do we need to look at response?
return errwrap.Wrapf(err, "could not set strings in: %s", key)
}
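// Editor's note: a hypothetical sketch (not part of this change) showing the
// per-hostname string map in use; it assumes an initialized *EmbdEtcd. Each
// host stores its own value under the shared namespace, and the empty
// hostname filter returns the values from every host.
func shareValue(obj *EmbdEtcd, hostname, namespace, value string) (map[string]string, error) {
	if err := SetStrMap(obj, hostname, namespace, &value); err != nil {
		return nil, err
	}
	return GetStrMap(obj, []string{}, namespace)
}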


@@ -27,6 +27,12 @@ type World struct {
EmbdEtcd *EmbdEtcd
}
// ResWatch returns a channel which spits out events on possible exported
// resource changes.
func (obj *World) ResWatch() chan error {
return WatchResources(obj.EmbdEtcd)
}
// ResExport exports a list of resources under our hostname namespace.
// Subsequent calls replace the previously set collection atomically.
func (obj *World) ResExport(resourceList []resources.Res) error {
@@ -42,23 +48,48 @@ func (obj *World) ResCollect(hostnameFilter, kindFilter []string) ([]resources.R
return GetResources(obj.EmbdEtcd, hostnameFilter, kindFilter)
}
// SetWatch returns a channel which spits out events on possible string changes.
// StrWatch returns a channel which spits out events on possible string changes.
func (obj *World) StrWatch(namespace string) chan error {
return WatchStr(obj.EmbdEtcd, namespace)
}
// StrGet returns a map of hostnames to values in the given namespace.
func (obj *World) StrGet(namespace string) (map[string]string, error) {
return GetStr(obj.EmbdEtcd, []string{}, namespace)
// StrIsNotExist returns whether the error from StrGet is a key missing error.
func (obj *World) StrIsNotExist(err error) bool {
return err == ErrNotExist
}
// StrSet sets the namespace value to a particular string under the identity of
// its own hostname.
// StrGet returns the value for the given namespace.
func (obj *World) StrGet(namespace string) (string, error) {
return GetStr(obj.EmbdEtcd, namespace)
}
// StrSet sets the namespace value to a particular string.
func (obj *World) StrSet(namespace, value string) error {
return SetStr(obj.EmbdEtcd, obj.Hostname, namespace, &value)
return SetStr(obj.EmbdEtcd, namespace, &value)
}
// StrDel deletes the value in a particular namespace.
func (obj *World) StrDel(namespace string) error {
return SetStr(obj.EmbdEtcd, obj.Hostname, namespace, nil)
return SetStr(obj.EmbdEtcd, namespace, nil)
}
// StrMapWatch returns a channel which spits out events on possible string changes.
func (obj *World) StrMapWatch(namespace string) chan error {
return WatchStrMap(obj.EmbdEtcd, namespace)
}
// StrMapGet returns a map of hostnames to values in the given namespace.
func (obj *World) StrMapGet(namespace string) (map[string]string, error) {
return GetStrMap(obj.EmbdEtcd, []string{}, namespace)
}
// StrMapSet sets the namespace value to a particular string under the identity
// of its own hostname.
func (obj *World) StrMapSet(namespace, value string) error {
return SetStrMap(obj.EmbdEtcd, obj.Hostname, namespace, &value)
}
// StrMapDel deletes the value in a particular namespace.
func (obj *World) StrMapDel(namespace string) error {
return SetStrMap(obj.EmbdEtcd, obj.Hostname, namespace, nil)
}
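// Editor's note: a hypothetical sketch (not part of this change) contrasting
// the global string API with the per-hostname string-map API on World; it
// assumes a populated *World from this package.
func useWorld(w *World) (string, map[string]string, error) {
	if err := w.StrSet("flag", "on"); err != nil { // one global value, no hostname
		return "", nil, err
	}
	val, err := w.StrGet("flag")
	if err != nil && !w.StrIsNotExist(err) {
		return "", nil, err
	}
	if err := w.StrMapSet("heartbeat", "alive"); err != nil { // keyed by our own hostname
		return "", nil, err
	}
	m, err := w.StrMapGet("heartbeat") // hostname -> value, for all hosts
	if err != nil {
		return "", nil, err
	}
	return val, m, nil
}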

examples/file0.yaml Normal file

@@ -0,0 +1,10 @@
---
graph: mygraph
resources:
file:
- name: file0
path: "/tmp/mgmt/f1"
content: |
i am f0
state: exists
edges: []


@@ -0,0 +1,246 @@
// libmgmt example of send->recv
package main
import (
"fmt"
"log"
"os"
"os/signal"
"sync"
"syscall"
"time"
"github.com/purpleidea/mgmt/gapi"
mgmt "github.com/purpleidea/mgmt/lib"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
)
// MyGAPI implements the main GAPI interface.
type MyGAPI struct {
Name string // graph name
Interval uint // refresh interval, 0 to never refresh
data gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}
// NewMyGAPI creates a new MyGAPI struct and calls Init().
func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
obj := &MyGAPI{
Name: name,
Interval: interval,
}
return obj, obj.Init(data)
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
return fmt.Errorf("already initialized")
}
if obj.Name == "" {
return fmt.Errorf("the graph name must be specified")
}
obj.data = data // store for later
obj.closeChan = make(chan struct{})
obj.initialized = true
return nil
}
// Graph returns a current Graph.
func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, err
}
// FIXME: these are being specified temporarily until it's the default!
metaparams := resources.DefaultMetaParams
exec1 := &resources.ExecRes{
BaseRes: resources.BaseRes{
Name: "exec1",
MetaParams: metaparams,
},
Cmd: "echo hello world && echo goodbye world 1>&2", // to stdout && stderr
Shell: "/bin/bash",
}
g.AddVertex(exec1)
output := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "output",
MetaParams: metaparams,
// send->recv!
Recv: map[string]*resources.Send{
"Content": {Res: exec1, Key: "Output"},
},
},
Path: "/tmp/mgmt/output",
State: "present",
}
g.AddVertex(output)
g.AddEdge(exec1, output, &resources.Edge{Name: "e0"})
stdout := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "stdout",
MetaParams: metaparams,
// send->recv!
Recv: map[string]*resources.Send{
"Content": {Res: exec1, Key: "Stdout"},
},
},
Path: "/tmp/mgmt/stdout",
State: "present",
}
g.AddVertex(stdout)
g.AddEdge(exec1, stdout, &resources.Edge{Name: "e1"})
stderr := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "stderr",
MetaParams: metaparams,
// send->recv!
Recv: map[string]*resources.Send{
"Content": {Res: exec1, Key: "Stderr"},
},
},
Path: "/tmp/mgmt/stderr",
State: "present",
}
g.AddVertex(stderr)
g.AddEdge(exec1, stderr, &resources.Edge{Name: "e2"})
//g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.World, obj.data.Noop)
return g, nil
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
return
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
ticker := make(<-chan time.Time)
if obj.data.NoStreamWatch || obj.Interval <= 0 {
ticker = nil
} else {
// arbitrarily change graph every interval seconds
t := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer t.Stop()
ticker = t.C
}
for {
select {
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case <-ticker:
// pass
case <-obj.closeChan:
return
}
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
return
}
}
}()
return ch
}
// Close shuts down the MyGAPI.
func (obj *MyGAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
close(obj.closeChan)
obj.wg.Wait()
obj.initialized = false // closed = true
return nil
}
// Run runs an embedded mgmt server.
func Run() error {
obj := &mgmt.Main{}
obj.Program = "libmgmt" // TODO: set on compilation
obj.Version = "0.0.1" // TODO: set on compilation
obj.TmpPrefix = true // disable for easy debugging
//prefix := "/tmp/testprefix/"
//obj.Prefix = &p // enable for easy debugging
obj.IdealClusterSize = -1
obj.ConvergedTimeout = -1
obj.Noop = false // FIXME: careful!
obj.GAPI = &MyGAPI{ // graph API
Name: "libmgmt", // TODO: set on compilation
Interval: 60 * 10, // arbitrarily change graph every ten minutes
}
if err := obj.Init(); err != nil {
return err
}
// install the exit signal handler
exit := make(chan struct{})
defer close(exit)
go func() {
signals := make(chan os.Signal, 1)
signal.Notify(signals, os.Interrupt) // catch ^C
//signal.Notify(signals, os.Kill) // catch signals
signal.Notify(signals, syscall.SIGTERM)
select {
case sig := <-signals: // any signal will do
if sig == os.Interrupt {
log.Println("Interrupted by ^C")
obj.Exit(nil)
return
}
log.Println("Interrupted by signal")
obj.Exit(fmt.Errorf("killed by %v", sig))
return
case <-exit:
return
}
}()
if err := obj.Run(); err != nil {
return err
}
return nil
}
func main() {
log.Printf("Hello!")
if err := Run(); err != nil {
fmt.Println(err)
os.Exit(1)
return
}
log.Printf("Goodbye!")
}


@@ -0,0 +1,255 @@
// libmgmt example of flattened subgraph
package main
import (
"fmt"
"log"
"os"
"os/signal"
"sync"
"syscall"
"time"
"github.com/purpleidea/mgmt/gapi"
mgmt "github.com/purpleidea/mgmt/lib"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
errwrap "github.com/pkg/errors"
)
// MyGAPI implements the main GAPI interface.
type MyGAPI struct {
Name string // graph name
Interval uint // refresh interval, 0 to never refresh
data gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}
// NewMyGAPI creates a new MyGAPI struct and calls Init().
func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
obj := &MyGAPI{
Name: name,
Interval: interval,
}
return obj, obj.Init(data)
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
return fmt.Errorf("already initialized")
}
if obj.Name == "" {
return fmt.Errorf("the graph name must be specified")
}
obj.data = data // store for later
obj.closeChan = make(chan struct{})
obj.initialized = true
return nil
}
func (obj *MyGAPI) subGraph() (*pgraph.Graph, error) {
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, err
}
metaparams := resources.DefaultMetaParams
f1 := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "file1",
MetaParams: metaparams,
},
Path: "/tmp/mgmt/sub1",
State: "present",
}
g.AddVertex(f1)
n1 := &resources.NoopRes{
BaseRes: resources.BaseRes{
Name: "noop1",
MetaParams: metaparams,
},
}
g.AddVertex(n1)
return g, nil
}
// Graph returns a current Graph.
func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, err
}
// FIXME: these are being specified temporarily until it's the default!
metaparams := resources.DefaultMetaParams
content := "I created a subgraph!\n"
f0 := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "README",
MetaParams: metaparams,
},
Path: "/tmp/mgmt/README",
Content: &content,
State: "present",
}
g.AddVertex(f0)
subGraph, err := obj.subGraph()
if err != nil {
return nil, errwrap.Wrapf(err, "running subGraph() failed")
}
edgeGenFn := func(v1, v2 pgraph.Vertex) pgraph.Edge {
edge := &resources.Edge{
Name: fmt.Sprintf("edge: %s->%s", v1, v2),
}
// if we want to do something specific based on input
_, v2IsFile := v2.(*resources.FileRes)
if v1 == f0 && v2IsFile {
edge.Notify = true
}
return edge
}
g.AddEdgeVertexGraph(f0, subGraph, edgeGenFn)
//g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.World, obj.data.Noop)
return g, nil
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
return
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
ticker := make(<-chan time.Time)
if obj.data.NoStreamWatch || obj.Interval <= 0 {
ticker = nil
} else {
// arbitrarily change graph every interval seconds
t := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer t.Stop()
ticker = t.C
}
for {
select {
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case <-ticker:
// pass
case <-obj.closeChan:
return
}
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
return
}
}
}()
return ch
}
// Close shuts down the MyGAPI.
func (obj *MyGAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
close(obj.closeChan)
obj.wg.Wait()
obj.initialized = false // closed = true
return nil
}
// Run runs an embedded mgmt server.
func Run() error {
obj := &mgmt.Main{}
obj.Program = "libmgmt" // TODO: set on compilation
obj.Version = "0.0.1" // TODO: set on compilation
obj.TmpPrefix = true // disable for easy debugging
//prefix := "/tmp/testprefix/"
//obj.Prefix = &p // enable for easy debugging
obj.IdealClusterSize = -1
obj.ConvergedTimeout = -1
obj.Noop = false // FIXME: careful!
obj.GAPI = &MyGAPI{ // graph API
Name: "libmgmt", // TODO: set on compilation
Interval: 60 * 10, // arbitrarily change graph every ten minutes
}
if err := obj.Init(); err != nil {
return err
}
// install the exit signal handler
exit := make(chan struct{})
defer close(exit)
go func() {
signals := make(chan os.Signal, 1)
signal.Notify(signals, os.Interrupt) // catch ^C
//signal.Notify(signals, os.Kill) // catch signals
signal.Notify(signals, syscall.SIGTERM)
select {
case sig := <-signals: // any signal will do
if sig == os.Interrupt {
log.Println("Interrupted by ^C")
obj.Exit(nil)
return
}
log.Println("Interrupted by signal")
obj.Exit(fmt.Errorf("killed by %v", sig))
return
case <-exit:
return
}
}()
if err := obj.Run(); err != nil {
return err
}
return nil
}
func main() {
log.Printf("Hello!")
if err := Run(); err != nil {
fmt.Println(err)
os.Exit(1)
return
}
log.Printf("Goodbye!")
}


@@ -0,0 +1,243 @@
// libmgmt example of graph resource
package main
import (
"fmt"
"log"
"os"
"os/signal"
"sync"
"syscall"
"time"
"github.com/purpleidea/mgmt/gapi"
mgmt "github.com/purpleidea/mgmt/lib"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
)
// MyGAPI implements the main GAPI interface.
type MyGAPI struct {
Name string // graph name
Interval uint // refresh interval, 0 to never refresh
data gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}
// NewMyGAPI creates a new MyGAPI struct and calls Init().
func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
obj := &MyGAPI{
Name: name,
Interval: interval,
}
return obj, obj.Init(data)
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
return fmt.Errorf("already initialized")
}
if obj.Name == "" {
return fmt.Errorf("the graph name must be specified")
}
obj.data = data // store for later
obj.closeChan = make(chan struct{})
obj.initialized = true
return nil
}
// Graph returns a current Graph.
func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, err
}
// FIXME: these are being specified temporarily until it's the default!
metaparams := resources.DefaultMetaParams
content := "I created a subgraph!\n"
f0 := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "README",
MetaParams: metaparams,
},
Path: "/tmp/mgmt/README",
Content: &content,
State: "present",
}
g.AddVertex(f0)
// create a subgraph to add *into* a graph resource
subGraph, err := pgraph.NewGraph(fmt.Sprintf("%s->subgraph", obj.Name))
if err != nil {
return nil, err
}
// add elements into the sub graph
f1 := &resources.FileRes{
BaseRes: resources.BaseRes{
Name: "file1",
MetaParams: metaparams,
},
Path: "/tmp/mgmt/sub1",
State: "present",
}
subGraph.AddVertex(f1)
n1 := &resources.NoopRes{
BaseRes: resources.BaseRes{
Name: "noop1",
MetaParams: metaparams,
},
}
subGraph.AddVertex(n1)
e0 := &resources.Edge{Name: "e0"}
e0.Notify = true // send a notification from v0 to v1
subGraph.AddEdge(f1, n1, e0)
// create the actual resource to hold the sub graph
subGraphRes0 := &resources.GraphRes{ // TODO: should we name this SubGraphRes ?
BaseRes: resources.BaseRes{
Name: "subgraph1",
MetaParams: metaparams,
},
Graph: subGraph,
}
g.AddVertex(subGraphRes0) // add it to the main graph
//g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.World, obj.data.Noop)
return g, nil
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
return
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
ticker := make(<-chan time.Time)
if obj.data.NoStreamWatch || obj.Interval <= 0 {
ticker = nil
} else {
// arbitrarily change graph every interval seconds
t := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer t.Stop()
ticker = t.C
}
for {
select {
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case <-ticker:
// pass
case <-obj.closeChan:
return
}
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
return
}
}
}()
return ch
}
// Close shuts down the MyGAPI.
func (obj *MyGAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
close(obj.closeChan)
obj.wg.Wait()
obj.initialized = false // closed = true
return nil
}
// Run runs an embedded mgmt server.
func Run() error {
obj := &mgmt.Main{}
obj.Program = "libmgmt" // TODO: set on compilation
obj.Version = "0.0.1" // TODO: set on compilation
obj.TmpPrefix = true // disable for easy debugging
//prefix := "/tmp/testprefix/"
//obj.Prefix = &p // enable for easy debugging
obj.IdealClusterSize = -1
obj.ConvergedTimeout = -1
obj.Noop = false // FIXME: careful!
obj.GAPI = &MyGAPI{ // graph API
Name: "libmgmt", // TODO: set on compilation
Interval: 60 * 10, // arbitrarily change graph every ten minutes
}
if err := obj.Init(); err != nil {
return err
}
// install the exit signal handler
exit := make(chan struct{})
defer close(exit)
go func() {
signals := make(chan os.Signal, 1)
signal.Notify(signals, os.Interrupt) // catch ^C
//signal.Notify(signals, os.Kill) // catch signals
signal.Notify(signals, syscall.SIGTERM)
select {
case sig := <-signals: // any signal will do
if sig == os.Interrupt {
log.Println("Interrupted by ^C")
obj.Exit(nil)
return
}
log.Println("Interrupted by signal")
obj.Exit(fmt.Errorf("killed by %v", sig))
return
case <-exit:
return
}
}()
if err := obj.Run(); err != nil {
return err
}
return nil
}
func main() {
log.Printf("Hello!")
if err := Run(); err != nil {
fmt.Println(err)
os.Exit(1)
return
}
log.Printf("Goodbye!")
}


@@ -57,9 +57,11 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
n1, err := resources.NewNoopRes("noop1")
if err != nil {
return nil, fmt.Errorf("can't create resource: %v", err)
n1 := &resources.NoopRes{
BaseRes: resources.BaseRes{
Name: "noop1",
MetaParams: resources.DefaultMetaParams,
},
}
// we can still build a graph via the yaml method
@@ -86,32 +88,45 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan error {
if obj.data.NoWatch || obj.Interval <= 0 {
return nil
}
ch := make(chan error)
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
ch <- fmt.Errorf("libmgmt: MyGAPI is not initialized")
return
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
ticker := make(<-chan time.Time)
if obj.data.NoStreamWatch || obj.Interval <= 0 {
ticker = nil
} else {
// arbitrarily change graph every interval seconds
ticker := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer ticker.Stop()
t := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer t.Stop()
ticker = t.C
}
for {
select {
case <-ticker.C:
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- nil: // trigger a run
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case <-ticker:
// pass
case <-obj.closeChan:
return
}
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
return
}


@@ -59,19 +59,23 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
g := pgraph.NewGraph(obj.Name)
var vertex *pgraph.Vertex
for i := uint(0); i < obj.Count; i++ {
n, err := resources.NewNoopRes(fmt.Sprintf("noop%d", i))
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, fmt.Errorf("can't create resource: %v", err)
return nil, err
}
v := pgraph.NewVertex(n)
g.AddVertex(v)
var vertex pgraph.Vertex
for i := uint(0); i < obj.Count; i++ {
n := &resources.NoopRes{
BaseRes: resources.BaseRes{
Name: fmt.Sprintf("noop%d", i),
MetaParams: resources.DefaultMetaParams,
},
}
g.AddVertex(n)
if i > 0 {
g.AddEdge(vertex, v, pgraph.NewEdge(fmt.Sprintf("e%d", i)))
g.AddEdge(vertex, n, &resources.Edge{Name: fmt.Sprintf("e%d", i)})
}
vertex = v // save
vertex = n // save
}
//g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.World, obj.data.Noop)
@@ -79,32 +83,45 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan error {
if obj.data.NoWatch || obj.Interval <= 0 {
return nil
}
ch := make(chan error)
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
ch <- fmt.Errorf("libmgmt: MyGAPI is not initialized")
return
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
ticker := make(<-chan time.Time)
if obj.data.NoStreamWatch || obj.Interval <= 0 {
ticker = nil
} else {
// arbitrarily change graph every interval seconds
ticker := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer ticker.Stop()
t := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer t.Stop()
ticker = t.C
}
for {
select {
case <-ticker.C:
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- nil: // trigger a run
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case <-ticker:
// pass
case <-obj.closeChan:
return
}
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
return
}


@@ -14,8 +14,6 @@ import (
mgmt "github.com/purpleidea/mgmt/lib"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
"golang.org/x/time/rate"
)
// MyGAPI implements the main GAPI interface.
@@ -58,13 +56,13 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
}
g := pgraph.NewGraph(obj.Name)
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, err
}
// FIXME: these are being specified temporarily until it's the default!
metaparams := resources.MetaParams{
Limit: rate.Inf,
Burst: 0,
}
metaparams := resources.DefaultMetaParams
content := "Delete me to trigger a notification!\n"
f0 := &resources.FileRes{
@@ -77,8 +75,7 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
State: "present",
}
v0 := pgraph.NewVertex(f0)
g.AddVertex(v0)
g.AddVertex(f0)
p1 := &resources.PasswordRes{
BaseRes: resources.BaseRes{
@@ -88,8 +85,7 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
Length: 8, // generated string will have this many characters
Saved: true, // this causes passwords to be stored in plain text!
}
v1 := pgraph.NewVertex(p1)
g.AddVertex(v1)
g.AddVertex(p1)
f1 := &resources.FileRes{
BaseRes: resources.BaseRes{
@@ -105,8 +101,7 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
State: "present",
}
v2 := pgraph.NewVertex(f1)
g.AddVertex(v2)
g.AddVertex(f1)
n1 := &resources.NoopRes{
BaseRes: resources.BaseRes{
@@ -115,50 +110,62 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
},
}
v3 := pgraph.NewVertex(n1)
g.AddVertex(v3)
g.AddVertex(n1)
e0 := pgraph.NewEdge("e0")
e0.Notify = true // send a notification from v0 to v1
g.AddEdge(v0, v1, e0)
e0 := &resources.Edge{Name: "e0"}
e0.Notify = true // send a notification from f0 to p1
g.AddEdge(f0, p1, e0)
g.AddEdge(v1, v2, pgraph.NewEdge("e1"))
g.AddEdge(p1, f1, &resources.Edge{Name: "e1"})
e2 := pgraph.NewEdge("e2")
e2.Notify = true // send a notification from v2 to v3
g.AddEdge(v2, v3, e2)
e2 := &resources.Edge{Name: "e2"}
e2.Notify = true // send a notification from f1 to n1
g.AddEdge(f1, n1, e2)
//g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.World, obj.data.Noop)
return g, nil
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan error {
if obj.data.NoWatch || obj.Interval <= 0 {
return nil
}
ch := make(chan error)
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
ch <- fmt.Errorf("libmgmt: MyGAPI is not initialized")
return
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
ticker := make(<-chan time.Time)
if obj.data.NoStreamWatch || obj.Interval <= 0 {
ticker = nil
} else {
// arbitrarily change graph every interval seconds
ticker := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer ticker.Stop()
t := time.NewTicker(time.Duration(obj.Interval) * time.Second)
defer t.Stop()
ticker = t.C
}
for {
select {
case <-ticker.C:
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- nil: // trigger a run
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case <-ticker:
// pass
case <-obj.closeChan:
return
}
log.Printf("libmgmt: Generating new graph...")
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
return
}

examples/noop0.yaml Normal file

@@ -0,0 +1,7 @@
---
graph: mygraph
comment: simple noop example
resources:
noop:
- name: noop0
edges: []

examples/svc2.yaml Normal file

@@ -0,0 +1,8 @@
---
graph: mygraph
resources:
svc:
- name: purpleidea
state: running
session: true
edges: []


@@ -28,14 +28,28 @@ type Data struct {
Hostname string // uuid for the host, required for GAPI
World resources.World
Noop bool
NoWatch bool
NoConfigWatch bool
NoStreamWatch bool
// NOTE: we can add more fields here if needed by GAPI endpoints
}
// Next describes the particular response the GAPI implementer wishes to emit.
type Next struct {
// FIXME: the Fast pause parameter should eventually get replaced with a
// "SwitchMethod" parameter or similar that instead lets the implementer
// choose between fast pause, slow pause, and interrupt. Interrupt could
// be a future extension to the Resource API that lets an Interrupt() be
// called if we want to exit immediately from the CheckApply part of the
// resource for some reason. For now we'll keep this simple with a bool.
Fast bool // run a fast pause to switch?
Exit bool // should we cause the program to exit? (specify err or not)
Err error // if something goes wrong (use with or without exit!)
}
// GAPI is a Graph API that represents incoming graphs and change streams.
type GAPI interface {
Init(Data) error // initializes the GAPI and passes in useful data
Graph() (*pgraph.Graph, error) // returns the most recent pgraph
Next() chan error // returns a stream of switch events
Next() chan Next // returns a stream of switch events
Close() error // shutdown the GAPI
}
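// Editor's note: a hypothetical sketch (not part of this change) of what a
// GAPI implementer might send on its Next() channel to request a fast pause
// before the next graph swap; the helper name is made up.
func requestFastSwitch(ch chan Next) {
	ch <- Next{
		Fast: true, // pause right after currently running resources finish
	}
}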


@@ -26,6 +26,7 @@ import (
"github.com/purpleidea/mgmt/puppet"
"github.com/purpleidea/mgmt/yamlgraph"
"github.com/purpleidea/mgmt/yamlgraph2"
"github.com/urfave/cli"
)
@@ -71,6 +72,14 @@ func run(c *cli.Context) error {
File: &y,
}
}
if y := c.String("yaml2"); c.IsSet("yaml2") {
if obj.GAPI != nil {
return fmt.Errorf("can't combine YAMLv2 GAPI with existing GAPI")
}
obj.GAPI = &yamlgraph2.GAPI{
File: &y,
}
}
if p := c.String("puppet"); c.IsSet("puppet") {
if obj.GAPI != nil {
return fmt.Errorf("can't combine puppet GAPI with existing GAPI")
@@ -83,6 +92,9 @@ func run(c *cli.Context) error {
obj.Remotes = c.StringSlice("remote") // FIXME: GAPI-ify somehow?
obj.NoWatch = c.Bool("no-watch")
obj.NoConfigWatch = c.Bool("no-config-watch")
obj.NoStreamWatch = c.Bool("no-stream-watch")
obj.Noop = c.Bool("noop")
obj.Sema = c.Int("sema")
obj.Graphviz = c.String("graphviz")
@@ -205,6 +217,11 @@ func CLI(program, version string, flags Flags) error {
Value: "",
Usage: "yaml graph definition to run",
},
cli.StringFlag{
Name: "yaml2",
Value: "",
Usage: "yaml graph definition to run (parser v2)",
},
cli.StringFlag{
Name: "puppet, p",
Value: "",
@@ -223,8 +240,17 @@ func CLI(program, version string, flags Flags) error {
cli.BoolFlag{
Name: "no-watch",
Usage: "do not update graph under any switch events",
},
cli.BoolFlag{
Name: "no-config-watch",
Usage: "do not update graph on config switch events",
},
cli.BoolFlag{
Name: "no-stream-watch",
Usage: "do not update graph on stream switch events",
},
cli.BoolFlag{
Name: "noop",
Usage: "globally force all resources into no-op mode",


@@ -65,7 +65,10 @@ type Main struct {
GAPI gapi.GAPI // graph API interface struct
Remotes []string // list of remote graph definitions to run
NoWatch bool // do not update graph on watched graph definition file changes
NoWatch bool // do not change graph under any circumstances
NoConfigWatch bool // do not update graph due to config changes
NoStreamWatch bool // do not update graph due to stream changes
Noop bool // globally force all resources into no-op mode
Sema int // add a semaphore with this lock count to each resource
Graphviz string // output file for graphviz data
@@ -112,6 +115,15 @@ func (obj *Main) Init() error {
return fmt.Errorf("choosing a prefix and the request for a tmp prefix is illogical")
}
// if we've turned off watching, then be explicit and disable them all!
// if all the watches are disabled, then it's equivalent to no watching
if obj.NoWatch {
obj.NoConfigWatch = true
obj.NoStreamWatch = true
} else if obj.NoConfigWatch && obj.NoStreamWatch {
obj.NoWatch = true
}
obj.idealClusterSize = uint16(obj.IdealClusterSize)
if obj.IdealClusterSize < 0 { // value is undefined, set to the default
obj.idealClusterSize = etcd.DefaultIdealClusterSize
@@ -286,7 +298,11 @@ func (obj *Main) Run() error {
// TODO: Import admin key
}
var G, oldGraph *pgraph.Graph
oldGraph := &pgraph.Graph{}
graph := &resources.MGraph{}
// pass in the information we need
graph.Debug = obj.Flags.Debug
graph.Init()
// exit after `max-runtime` seconds for no reason at all...
if i := obj.MaxRuntime; i > 0 {
@@ -330,6 +346,16 @@ func (obj *Main) Run() error {
} else if err := EmbdEtcd.Startup(); err != nil { // startup (returns when etcd main loop is running)
obj.Exit(fmt.Errorf("Main: Etcd: Startup failed: %v", err))
}
// wait for etcd server to be ready before continuing...
select {
case <-EmbdEtcd.ServerReady():
log.Printf("Main: Etcd: Server: Ready!")
// pass
case <-time.After(((etcd.MaxStartServerTimeout * etcd.MaxStartServerRetries) + 1) * time.Second):
obj.Exit(fmt.Errorf("Main: Etcd: Startup timeout"))
}
convergerStateFn := func(b bool) error {
// exit if we are using the converged timeout and we are the
// root node. otherwise, if we are a child node in a remote
@@ -337,7 +363,7 @@ func (obj *Main) Run() error {
// state and wait for the parent to trigger the exit.
if t := obj.ConvergedTimeout; obj.Depth == 0 && t >= 0 {
if b {
log.Printf("Converged for %d seconds, exiting!", t)
log.Printf("Main: Converged for %d seconds, exiting!", t)
obj.Exit(nil) // trigger an exit!
}
return nil
@@ -355,43 +381,43 @@ func (obj *Main) Run() error {
EmbdEtcd: EmbdEtcd,
}
var gapiChan chan error // stream events are nil errors
graph.Data = &resources.ResData{
Hostname: hostname,
Converger: converger,
Prometheus: prom,
World: world,
Prefix: pgraphPrefix,
Debug: obj.Flags.Debug,
}
var gapiChan chan gapi.Next // stream events contain some instructions!
if obj.GAPI != nil {
data := gapi.Data{
Hostname: hostname,
World: world,
Noop: obj.Noop,
NoWatch: obj.NoWatch,
//NoWatch: obj.NoWatch,
NoConfigWatch: obj.NoConfigWatch,
NoStreamWatch: obj.NoStreamWatch,
}
if err := obj.GAPI.Init(data); err != nil {
obj.Exit(fmt.Errorf("Main: GAPI: Init failed: %v", err))
} else if !obj.NoWatch {
} else {
// this must generate at least one event for it to work
gapiChan = obj.GAPI.Next() // stream of graph switch events!
}
}
exitchan := make(chan struct{}) // exit on close
go func() {
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
log.Println("Etcd: Starting...")
etcdChan := etcd.WatchAll(EmbdEtcd)
first := true // first loop or not
for {
log.Println("Main: Waiting...")
// The GAPI should always kick off an event on Next() at
// startup when (and if) it indeed has a graph to share!
fastPause := false
select {
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case b := <-etcdChan:
if !b { // ignore the message
continue
}
// everything else passes through to cause a compile!
case err, ok := <-gapiChan:
case next, ok := <-gapiChan:
if !ok { // channel closed
if obj.Flags.Debug {
log.Printf("Main: GAPI exited")
@@ -399,21 +425,29 @@ func (obj *Main) Run() error {
gapiChan = nil // disable it
continue
}
if err != nil {
obj.Exit(err) // trigger exit
// if we've been asked to exit...
if next.Exit {
obj.Exit(next.Err) // trigger exit
continue // wait for exitchan
}
if obj.NoWatch { // extra safety for bad GAPI's
log.Printf("Main: GAPI stream should be quiet with NoWatch!") // fix the GAPI!
continue // no stream events should be sent
// the gapi lets us send an error to the channel
// this means there was a failure, but not fatal
if err := next.Err; err != nil {
log.Printf("Main: Error with graph stream: %v", err)
continue // wait for another event
}
// everything else passes through to cause a compile!
fastPause = next.Fast // should we pause fast?
case <-exitchan:
return
}
if obj.GAPI == nil {
log.Printf("Config: GAPI is empty!")
log.Printf("Main: GAPI is empty!")
continue
}
@@ -421,34 +455,31 @@ func (obj *Main) Run() error {
// run graph vertex LOCK...
if !first { // TODO: we can flatten this check out I think
converger.Pause() // FIXME: add sync wait?
G.Pause(false) // sync
graph.Pause(fastPause) // sync
//G.UnGroup() // FIXME: implement me if needed!
//graph.UnGroup() // FIXME: implement me if needed!
}
// make the graph from yaml, lib, puppet->yaml, or dsl!
newGraph, err := obj.GAPI.Graph() // generate graph!
if err != nil {
log.Printf("Config: Error creating new graph: %v", err)
log.Printf("Main: Error creating new graph: %v", err)
// unpause!
if !first {
G.Start(first) // sync
converger.Start() // after G.Start()
graph.Start(first) // sync
converger.Start() // after Start()
}
continue
}
newGraph.Flags = pgraph.Flags{Debug: obj.Flags.Debug}
// pass in the information we need
newGraph.AssociateData(&resources.Data{
Hostname: hostname,
Converger: converger,
Prometheus: prom,
World: world,
Prefix: pgraphPrefix,
Debug: obj.Flags.Debug,
})
if obj.Flags.Debug {
log.Printf("Main: New Graph: %v", newGraph)
}
for _, m := range newGraph.GraphMetas() {
// this edits the paused vertices, but it is safe to do
// so even if we don't use this new graph, since those
// value should be the same for existing vertices...
for _, v := range newGraph.Vertices() {
m := resources.VtoR(v).Meta()
// apply the global noop parameter if requested
if obj.Noop {
m.Noop = obj.Noop
@@ -461,51 +492,80 @@ func (obj *Main) Run() error {
}
}
// FIXME: make sure we "UnGroup()" any semi-destructive
// changes to the resources so our efficient GraphSync
// will be able to re-use and cmp to the old graph.
// We don't have to "UnGroup()" to compare, since we
// save the old graph to use when we compare.
// TODO: Does this hurt performance or graph changes ?
log.Printf("Main: GraphSync...")
newFullGraph, err := newGraph.GraphSync(oldGraph)
if err != nil {
log.Printf("Config: Error running graph sync: %v", err)
vertexCmpFn := func(v1, v2 pgraph.Vertex) (bool, error) {
return resources.VtoR(v1).Compare(resources.VtoR(v2)), nil
}
vertexAddFn := func(v pgraph.Vertex) error {
err := resources.VtoR(v).Validate()
return errwrap.Wrapf(err, "could not Validate() resource")
}
vertexRemoveFn := func(v pgraph.Vertex) error {
// wait for exit before starting new graph!
resources.VtoR(v).Exit() // sync
return nil
}
edgeCmpFn := func(e1, e2 pgraph.Edge) (bool, error) {
edge1 := e1.(*resources.Edge) // panic if wrong
edge2 := e2.(*resources.Edge) // panic if wrong
return edge1.Compare(edge2), nil
}
// on success, this updates the receiver graph...
if err := oldGraph.GraphSync(newGraph, vertexCmpFn, vertexAddFn, vertexRemoveFn, edgeCmpFn); err != nil {
log.Printf("Main: Error running graph sync: %v", err)
// unpause!
if !first {
G.Start(first) // sync
converger.Start() // after G.Start()
graph.Start(first) // sync
converger.Start() // after Start()
}
continue
}
oldGraph = newFullGraph // save old graph
G = oldGraph.Copy() // copy to active graph
G.AutoEdges() // add autoedges; modifies the graph
G.AutoGroup() // run autogroup; modifies the graph
// TODO: should we call each Res.Setup() here instead?
// add autoedges; modifies the graph only if no error
if err := resources.AutoEdges(oldGraph); err != nil {
log.Printf("Main: Error running auto edges: %v", err)
// unpause!
if !first {
graph.Start(first) // sync
converger.Start() // after Start()
}
continue
}
graph.Update(oldGraph) // copy in structure of new graph
resources.AutoGroup(graph.Graph, &resources.NonReachabilityGrouper{}) // run autogroup; modifies the graph
// TODO: do we want to do a transitive reduction?
// FIXME: run a type checker that verifies all the send->recv relationships
// Call this here because at this point the graph does not
// know anything about the prometheus instance.
// Call this here because at this point the graph does
// not know anything about the prometheus instance.
if err := prom.UpdatePgraphStartTime(); err != nil {
log.Printf("Main: Prometheus.UpdatePgraphStartTime() errored: %v", err)
}
// G.Start(...) needs to be synchronous or wait,
// Start() needs to be synchronous or wait,
// because if half of the nodes are started and
// some are not ready yet and the EtcdWatch
// loops, we'll cause G.Pause(...) before we
// loops, we'll cause Pause() before we
// even got going, thus causing nil pointer errors
G.Start(first) // sync
converger.Start() // after G.Start()
graph.Start(first) // sync
converger.Start() // after Start()
log.Printf("Graph: %v", G) // show graph
log.Printf("Main: Graph: %v", graph) // show graph
if obj.Graphviz != "" {
filter := obj.GraphvizFilter
if filter == "" {
filter = "dot" // directed graph default
}
if err := G.ExecGraphviz(filter, obj.Graphviz, hostname); err != nil {
log.Printf("Graphviz: %v", err)
if err := graph.ExecGraphviz(filter, obj.Graphviz, hostname); err != nil {
log.Printf("Main: Graphviz: %v", err)
} else {
log.Printf("Graphviz: Successfully generated graph!")
log.Printf("Main: Graphviz: Successfully generated graph!")
}
}
first = false
@@ -515,7 +575,7 @@ func (obj *Main) Run() error {
configWatcher := recwatch.NewConfigWatcher()
configWatcher.Flags = recwatch.Flags{Debug: obj.Flags.Debug}
events := configWatcher.Events()
if !obj.NoWatch {
if !obj.NoWatch { // FIXME: fit this into a clean GAPI?
configWatcher.Add(obj.Remotes...) // add all the files...
} else {
events = nil // signal that no-watch is true
@@ -567,7 +627,7 @@ func (obj *Main) Run() error {
reterr := <-obj.exit // wait for exit signal
log.Println("Destroy...")
log.Println("Main: Destroy...")
if obj.GAPI != nil {
if err := obj.GAPI.Close(); err != nil {
@@ -585,7 +645,7 @@ func (obj *Main) Run() error {
// tell inner main loop to exit
close(exitchan)
G.Exit() // tells all the children to exit, and waits for them to do so
graph.Exit() // tells all the children to exit, and waits for them to do so
// cleanup etcd main loop last so it can process everything first
if err := EmbdEtcd.Destroy(); err != nil { // shutdown and cleanup etcd
@@ -602,7 +662,7 @@ func (obj *Main) Run() error {
}
if obj.Flags.Debug {
log.Printf("Main: Graph: %v", G)
log.Printf("Main: Graph: %v", graph)
}
// TODO: wait for each vertex to exit...


@@ -80,4 +80,5 @@ if [[ $ret != 0 ]]; then
fi
go get golang.org/x/tools/cmd/stringer # for automatic stringer-ing
go get github.com/golang/lint/golint # for `golint`-ing
go get github.com/alecthomas/gometalinter && gometalinter --install # bonus
cd "$XPWD" >/dev/null


@@ -1,103 +0,0 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package pgraph represents the internal "pointer graph" that we use.
package pgraph
import (
"fmt"
"log"
"github.com/purpleidea/mgmt/resources"
)
// add edges to the vertex in a graph based on if it matches a uid list
func (g *Graph) addEdgesByMatchingUIDS(v *Vertex, uids []resources.ResUID) []bool {
// search for edges and see what matches!
var result []bool
// loop through each uid, and see if it matches any vertex
for _, uid := range uids {
var found = false
// uid is a ResUID object
for _, vv := range g.GetVertices() { // search
if v == vv { // skip self
continue
}
if g.Flags.Debug {
log.Printf("Compile: AutoEdge: Match: %v[%v] with UID: %v[%v]", vv.Kind(), vv.GetName(), uid.Kind(), uid.GetName())
}
// we must match to an effective UID for the resource,
// that is to say, the name value of a res is a helpful
// handle, but it is not necessarily a unique identity!
// remember, resources can return multiple UID's each!
if resources.UIDExistsInUIDs(uid, vv.UIDs()) {
// add edge from: vv -> v
if uid.Reversed() {
txt := fmt.Sprintf("AutoEdge: %v[%v] -> %v[%v]", vv.Kind(), vv.GetName(), v.Kind(), v.GetName())
log.Printf("Compile: Adding %v", txt)
g.AddEdge(vv, v, NewEdge(txt))
} else { // edges go the "normal" way, eg: pkg resource
txt := fmt.Sprintf("AutoEdge: %v[%v] -> %v[%v]", v.Kind(), v.GetName(), vv.Kind(), vv.GetName())
log.Printf("Compile: Adding %v", txt)
g.AddEdge(v, vv, NewEdge(txt))
}
found = true
break
}
}
result = append(result, found)
}
return result
}
// AutoEdges adds the automatic edges to the graph.
func (g *Graph) AutoEdges() {
log.Println("Compile: Adding AutoEdges...")
for _, v := range g.GetVertices() { // for each vertexes autoedges
if !v.Meta().AutoEdge { // is the metaparam true?
continue
}
autoEdgeObj := v.AutoEdges()
if autoEdgeObj == nil {
log.Printf("%v[%v]: Config: No auto edges were found!", v.Kind(), v.GetName())
continue // next vertex
}
for { // while the autoEdgeObj has more uids to add...
uids := autoEdgeObj.Next() // get some!
if uids == nil {
log.Printf("%v[%v]: Config: The auto edge list is empty!", v.Kind(), v.GetName())
break // inner loop
}
if g.Flags.Debug {
log.Println("Compile: AutoEdge: UIDS:")
for i, u := range uids {
log.Printf("Compile: AutoEdge: UID%d: %v", i, u)
}
}
// match and add edges
result := g.addEdgesByMatchingUIDS(v, uids)
// report back, and find out if we should continue
if !autoEdgeObj.Test(result) {
break
}
}
}
}


@@ -1,486 +0,0 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
import (
"testing"
)
// all of the following test cases are laid out with the following semantics:
// * vertices which start with the same single letter are considered "like"
// * "like" elements should be merged
// * vertices can have any integer after their single letter "family" type
// * grouped vertices should have a name with a comma separated list of names
// * edges follow the same conventions about grouping
// empty graph
func TestPgraphGrouping1(t *testing.T) {
g1 := NewGraph("g1") // original graph
g2 := NewGraph("g2") // expected result
runGraphCmp(t, g1, g2)
}
// single vertex
func TestPgraphGrouping2(t *testing.T) {
g1 := NewGraph("g1") // original graph
{ // grouping to limit variable scope
a1 := NewVertex(NewNoopResTest("a1"))
g1.AddVertex(a1)
}
g2 := NewGraph("g2") // expected result
{
a1 := NewVertex(NewNoopResTest("a1"))
g2.AddVertex(a1)
}
runGraphCmp(t, g1, g2)
}
// two vertices
func TestPgraphGrouping3(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
b1 := NewVertex(NewNoopResTest("b1"))
g1.AddVertex(a1, b1)
}
g2 := NewGraph("g2") // expected result
{
a1 := NewVertex(NewNoopResTest("a1"))
b1 := NewVertex(NewNoopResTest("b1"))
g2.AddVertex(a1, b1)
}
runGraphCmp(t, g1, g2)
}
// two vertices merge
func TestPgraphGrouping4(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
g1.AddVertex(a1, a2)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2"))
g2.AddVertex(a)
}
runGraphCmp(t, g1, g2)
}
// three vertices merge
func TestPgraphGrouping5(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
a3 := NewVertex(NewNoopResTest("a3"))
g1.AddVertex(a1, a2, a3)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2,a3"))
g2.AddVertex(a)
}
runGraphCmp(t, g1, g2)
}
// three vertices, two merge
func TestPgraphGrouping6(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
b1 := NewVertex(NewNoopResTest("b1"))
g1.AddVertex(a1, a2, b1)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2"))
b1 := NewVertex(NewNoopResTest("b1"))
g2.AddVertex(a, b1)
}
runGraphCmp(t, g1, g2)
}
// four vertices, three merge
func TestPgraphGrouping7(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
a3 := NewVertex(NewNoopResTest("a3"))
b1 := NewVertex(NewNoopResTest("b1"))
g1.AddVertex(a1, a2, a3, b1)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2,a3"))
b1 := NewVertex(NewNoopResTest("b1"))
g2.AddVertex(a, b1)
}
runGraphCmp(t, g1, g2)
}
// four vertices, two&two merge
func TestPgraphGrouping8(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
b1 := NewVertex(NewNoopResTest("b1"))
b2 := NewVertex(NewNoopResTest("b2"))
g1.AddVertex(a1, a2, b1, b2)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2"))
b := NewVertex(NewNoopResTest("b1,b2"))
g2.AddVertex(a, b)
}
runGraphCmp(t, g1, g2)
}
// five vertices, two&three merge
func TestPgraphGrouping9(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
b1 := NewVertex(NewNoopResTest("b1"))
b2 := NewVertex(NewNoopResTest("b2"))
b3 := NewVertex(NewNoopResTest("b3"))
g1.AddVertex(a1, a2, b1, b2, b3)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2"))
b := NewVertex(NewNoopResTest("b1,b2,b3"))
g2.AddVertex(a, b)
}
runGraphCmp(t, g1, g2)
}
// three unique vertices
func TestPgraphGrouping10(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
b1 := NewVertex(NewNoopResTest("b1"))
c1 := NewVertex(NewNoopResTest("c1"))
g1.AddVertex(a1, b1, c1)
}
g2 := NewGraph("g2") // expected result
{
a1 := NewVertex(NewNoopResTest("a1"))
b1 := NewVertex(NewNoopResTest("b1"))
c1 := NewVertex(NewNoopResTest("c1"))
g2.AddVertex(a1, b1, c1)
}
runGraphCmp(t, g1, g2)
}
// three unique vertices, two merge
func TestPgraphGrouping11(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
b1 := NewVertex(NewNoopResTest("b1"))
b2 := NewVertex(NewNoopResTest("b2"))
c1 := NewVertex(NewNoopResTest("c1"))
g1.AddVertex(a1, b1, b2, c1)
}
g2 := NewGraph("g2") // expected result
{
a1 := NewVertex(NewNoopResTest("a1"))
b := NewVertex(NewNoopResTest("b1,b2"))
c1 := NewVertex(NewNoopResTest("c1"))
g2.AddVertex(a1, b, c1)
}
runGraphCmp(t, g1, g2)
}
// simple merge 1
// a1 a2 a1,a2
// \ / >>> | (arrows point downwards)
// b b
func TestPgraphGrouping12(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
b1 := NewVertex(NewNoopResTest("b1"))
e1 := NewEdge("e1")
e2 := NewEdge("e2")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(a2, b1, e2)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2"))
b1 := NewVertex(NewNoopResTest("b1"))
e := NewEdge("e1,e2")
g2.AddEdge(a, b1, e)
}
runGraphCmp(t, g1, g2)
}
// simple merge 2
// b b
// / \ >>> | (arrows point downwards)
// a1 a2 a1,a2
func TestPgraphGrouping13(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
b1 := NewVertex(NewNoopResTest("b1"))
e1 := NewEdge("e1")
e2 := NewEdge("e2")
g1.AddEdge(b1, a1, e1)
g1.AddEdge(b1, a2, e2)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2"))
b1 := NewVertex(NewNoopResTest("b1"))
e := NewEdge("e1,e2")
g2.AddEdge(b1, a, e)
}
runGraphCmp(t, g1, g2)
}
// triple merge
// a1 a2 a3 a1,a2,a3
// \ | / >>> | (arrows point downwards)
// b b
func TestPgraphGrouping14(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
a3 := NewVertex(NewNoopResTest("a3"))
b1 := NewVertex(NewNoopResTest("b1"))
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(a2, b1, e2)
g1.AddEdge(a3, b1, e3)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2,a3"))
b1 := NewVertex(NewNoopResTest("b1"))
e := NewEdge("e1,e2,e3")
g2.AddEdge(a, b1, e)
}
runGraphCmp(t, g1, g2)
}
// chain merge
// a1 a1
// / \ |
// b1 b2 >>> b1,b2 (arrows point downwards)
// \ / |
// c1 c1
func TestPgraphGrouping15(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
b1 := NewVertex(NewNoopResTest("b1"))
b2 := NewVertex(NewNoopResTest("b2"))
c1 := NewVertex(NewNoopResTest("c1"))
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
e4 := NewEdge("e4")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(a1, b2, e2)
g1.AddEdge(b1, c1, e3)
g1.AddEdge(b2, c1, e4)
}
g2 := NewGraph("g2") // expected result
{
a1 := NewVertex(NewNoopResTest("a1"))
b := NewVertex(NewNoopResTest("b1,b2"))
c1 := NewVertex(NewNoopResTest("c1"))
e1 := NewEdge("e1,e2")
e2 := NewEdge("e3,e4")
g2.AddEdge(a1, b, e1)
g2.AddEdge(b, c1, e2)
}
runGraphCmp(t, g1, g2)
}
// re-attach 1 (outer)
// technically the second possibility is valid too, depending on which order we
// merge edges in, and if we don't filter out any unnecessary edges afterwards!
// a1 a2 a1,a2 a1,a2
// | / | | \
// b1 / >>> b1 OR b1 / (arrows point downwards)
// | / | | /
// c1 c1 c1
func TestPgraphGrouping16(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
b1 := NewVertex(NewNoopResTest("b1"))
c1 := NewVertex(NewNoopResTest("c1"))
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(b1, c1, e2)
g1.AddEdge(a2, c1, e3)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2"))
b1 := NewVertex(NewNoopResTest("b1"))
c1 := NewVertex(NewNoopResTest("c1"))
e1 := NewEdge("e1,e3")
e2 := NewEdge("e2,e3") // e3 gets "merged through" to BOTH edges!
g2.AddEdge(a, b1, e1)
g2.AddEdge(b1, c1, e2)
}
runGraphCmp(t, g1, g2)
}
// re-attach 2 (inner)
// a1 b2 a1
// | / |
// b1 / >>> b1,b2 (arrows point downwards)
// | / |
// c1 c1
func TestPgraphGrouping17(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
b1 := NewVertex(NewNoopResTest("b1"))
b2 := NewVertex(NewNoopResTest("b2"))
c1 := NewVertex(NewNoopResTest("c1"))
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(b1, c1, e2)
g1.AddEdge(b2, c1, e3)
}
g2 := NewGraph("g2") // expected result
{
a1 := NewVertex(NewNoopResTest("a1"))
b := NewVertex(NewNoopResTest("b1,b2"))
c1 := NewVertex(NewNoopResTest("c1"))
e1 := NewEdge("e1")
e2 := NewEdge("e2,e3")
g2.AddEdge(a1, b, e1)
g2.AddEdge(b, c1, e2)
}
runGraphCmp(t, g1, g2)
}
// re-attach 3 (double)
// similar to "re-attach 1", technically there is a second possibility for this
// a2 a1 b2 a1,a2
// \ | / |
// \ b1 / >>> b1,b2 (arrows point downwards)
// \ | / |
// c1 c1
func TestPgraphGrouping18(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
b1 := NewVertex(NewNoopResTest("b1"))
b2 := NewVertex(NewNoopResTest("b2"))
c1 := NewVertex(NewNoopResTest("c1"))
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
e4 := NewEdge("e4")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(b1, c1, e2)
g1.AddEdge(a2, c1, e3)
g1.AddEdge(b2, c1, e4)
}
g2 := NewGraph("g2") // expected result
{
a := NewVertex(NewNoopResTest("a1,a2"))
b := NewVertex(NewNoopResTest("b1,b2"))
c1 := NewVertex(NewNoopResTest("c1"))
e1 := NewEdge("e1,e3")
e2 := NewEdge("e2,e3,e4") // e3 gets "merged through" to BOTH edges!
g2.AddEdge(a, b, e1)
g2.AddEdge(b, c1, e2)
}
runGraphCmp(t, g1, g2)
}
// connected merge 0, (no change!)
// a1 a1
// \ >>> \ (arrows point downwards)
// a2 a2
func TestPgraphGroupingConnected0(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
e1 := NewEdge("e1")
g1.AddEdge(a1, a2, e1)
}
g2 := NewGraph("g2") // expected result ?
{
a1 := NewVertex(NewNoopResTest("a1"))
a2 := NewVertex(NewNoopResTest("a2"))
e1 := NewEdge("e1")
g2.AddEdge(a1, a2, e1)
}
runGraphCmp(t, g1, g2)
}
// connected merge 1, (no change!)
// a1 a1
// \ \
// b >>> b (arrows point downwards)
// \ \
// a2 a2
func TestPgraphGroupingConnected1(t *testing.T) {
g1 := NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTest("a1"))
b := NewVertex(NewNoopResTest("b"))
a2 := NewVertex(NewNoopResTest("a2"))
e1 := NewEdge("e1")
e2 := NewEdge("e2")
g1.AddEdge(a1, b, e1)
g1.AddEdge(b, a2, e2)
}
g2 := NewGraph("g2") // expected result ?
{
a1 := NewVertex(NewNoopResTest("a1"))
b := NewVertex(NewNoopResTest("b"))
a2 := NewVertex(NewNoopResTest("a2"))
e1 := NewEdge("e1")
e2 := NewEdge("e2")
g2.AddEdge(a1, b, e1)
g2.AddEdge(b, a2, e2)
}
runGraphCmp(t, g1, g2)
}

pgraph/graphsync.go Normal file

@@ -0,0 +1,129 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
import (
"fmt"
errwrap "github.com/pkg/errors"
)
// GraphSync updates the Graph so that it matches the newGraph. It leaves
// identical elements alone so that they don't need to be refreshed.
// It tries to mutate existing elements into new ones, if they support this.
// This updates the Graph on success only.
// FIXME: should we do this with copies of the vertex resources?
// FIXME: add test cases
func (obj *Graph) GraphSync(newGraph *Graph, vertexCmpFn func(Vertex, Vertex) (bool, error), vertexAddFn func(Vertex) error, vertexRemoveFn func(Vertex) error, edgeCmpFn func(Edge, Edge) (bool, error)) error {
oldGraph := obj.Copy() // work on a copy of the old graph
if oldGraph == nil {
var err error
oldGraph, err = NewGraph(newGraph.GetName()) // copy over the name
if err != nil {
return errwrap.Wrapf(err, "GraphSync failed")
}
}
oldGraph.SetName(newGraph.GetName()) // overwrite the name
var lookup = make(map[Vertex]Vertex)
var vertexKeep []Vertex // list of vertices which are the same in new graph
var edgeKeep []Edge // list of edges which are the same in new graph
for v := range newGraph.Adjacency() { // loop through the vertices (resources)
var vertex Vertex
// step one, direct compare with res.Compare
if vertex == nil { // redundant guard for consistency
fn := func(vv Vertex) (bool, error) {
b, err := vertexCmpFn(vv, v)
return b, errwrap.Wrapf(err, "vertexCmpFn failed")
}
var err error
vertex, err = oldGraph.VertexMatchFn(fn)
if err != nil {
return errwrap.Wrapf(err, "VertexMatchFn failed")
}
}
// TODO: consider adding a mutate API.
// step two, try and mutate with res.Mutate
//if vertex == nil { // not found yet...
// vertex = oldGraph.MutateMatch(res)
//}
if vertex == nil { // no match found yet
if err := vertexAddFn(v); err != nil {
return errwrap.Wrapf(err, "vertexAddFn failed")
}
vertex = v
oldGraph.AddVertex(vertex) // call standalone in case not part of an edge
}
lookup[v] = vertex // used for constructing edges
vertexKeep = append(vertexKeep, vertex) // append
}
// get rid of any vertices we shouldn't keep (that aren't in new graph)
for v := range oldGraph.Adjacency() {
if !VertexContains(v, vertexKeep) {
if err := vertexRemoveFn(v); err != nil {
return errwrap.Wrapf(err, "vertexRemoveFn failed")
}
oldGraph.DeleteVertex(v)
}
}
// compare edges
for v1 := range newGraph.Adjacency() { // loop through the vertices (resources)
for v2, e := range newGraph.Adjacency()[v1] {
// we have an edge!
// lookup vertices (these should exist now)
vertex1, exists1 := lookup[v1]
vertex2, exists2 := lookup[v2]
if !exists1 || !exists2 { // no match found, bug?
//if vertex1 == nil || vertex2 == nil { // no match found
return fmt.Errorf("new vertices weren't found") // programming error
}
edge, exists := oldGraph.Adjacency()[vertex1][vertex2]
if !exists {
edge = e // use edge
} else if b, err := edgeCmpFn(edge, e); err != nil {
return errwrap.Wrapf(err, "edgeCmpFn failed")
} else if !b {
edge = e // overwrite edge
}
oldGraph.Adjacency()[vertex1][vertex2] = edge // store it (AddEdge)
edgeKeep = append(edgeKeep, edge) // mark as saved
}
}
// delete unused edges
for v1 := range oldGraph.Adjacency() {
for _, e := range oldGraph.Adjacency()[v1] {
// we have an edge!
if !EdgeContains(e, edgeKeep) {
oldGraph.DeleteEdge(e)
}
}
}
// success
*obj = *oldGraph // save old graph
return nil
}
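
As a clarifying aside (not part of this commit): a minimal, self-contained sketch of driving GraphSync with user-supplied callbacks. The stringVertex and stringEdge types and the whole program below are hypothetical stand-ins; only the GraphSync signature above is taken from the actual code, and any fmt.Stringer with stable, unique output could serve as a vertex or edge.

package main

import (
	"log"

	"github.com/purpleidea/mgmt/pgraph"
)

// stringVertex and stringEdge are hypothetical example types for this sketch.
type stringVertex string

func (v stringVertex) String() string { return string(v) }

type stringEdge string

func (e stringEdge) String() string { return string(e) }

func main() {
	oldGraph, _ := pgraph.NewGraph("old")
	newGraph, _ := pgraph.NewGraph("new")
	newGraph.AddEdge(stringVertex("a1"), stringVertex("b1"), stringEdge("e1"))

	// compare vertices and edges by their stable string form
	vertexCmpFn := func(v1, v2 pgraph.Vertex) (bool, error) {
		return v1.String() == v2.String(), nil
	}
	edgeCmpFn := func(e1, e2 pgraph.Edge) (bool, error) {
		return e1.String() == e2.String(), nil
	}
	vertexAddFn := func(v pgraph.Vertex) error { return nil }    // nothing to validate here
	vertexRemoveFn := func(v pgraph.Vertex) error { return nil } // nothing to tear down here

	// on success, oldGraph is mutated in place to match newGraph
	if err := oldGraph.GraphSync(newGraph, vertexCmpFn, vertexAddFn, vertexRemoveFn, edgeCmpFn); err != nil {
		log.Fatalf("sync failed: %v", err)
	}
	log.Printf("synced: %v", oldGraph) // prints Vertices(2), Edges(1)
}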


@@ -45,16 +45,16 @@ func (g *Graph) Graphviz() (out string) {
out += fmt.Sprintf("\tlabel=\"%s\";\n", g.GetName())
//out += "\tnode [shape=box];\n"
str := ""
for i := range g.Adjacency { // reverse paths
out += fmt.Sprintf("\t\"%s\" [label=\"%s[%s]\"];\n", i.GetName(), i.Kind(), i.GetName())
for j := range g.Adjacency[i] {
k := g.Adjacency[i][j]
for i := range g.Adjacency() { // reverse paths
out += fmt.Sprintf("\t\"%s\" [label=\"%s\"];\n", i, i)
for j := range g.Adjacency()[i] {
k := g.Adjacency()[i][j]
// use str for clearer output ordering
if k.Notify {
str += fmt.Sprintf("\t\"%s\" -> \"%s\" [label=\"%s\",style=bold];\n", i.GetName(), j.GetName(), k.Name)
} else {
str += fmt.Sprintf("\t\"%s\" -> \"%s\" [label=\"%s\"];\n", i.GetName(), j.GetName(), k.Name)
}
//if fmtBoldFn(k) { // TODO: add this sort of formatting
// str += fmt.Sprintf("\t\"%s\" -> \"%s\" [label=\"%s\",style=bold];\n", i, j, k)
//} else {
str += fmt.Sprintf("\t\"%s\" -> \"%s\" [label=\"%s\"];\n", i, j, k)
//}
}
}
out += str


@@ -21,32 +21,10 @@ package pgraph
import (
"fmt"
"sort"
"sync"
"github.com/purpleidea/mgmt/event"
"github.com/purpleidea/mgmt/prometheus"
"github.com/purpleidea/mgmt/resources"
"github.com/purpleidea/mgmt/util/semaphore"
errwrap "github.com/pkg/errors"
)
//go:generate stringer -type=graphState -output=graphstate_stringer.go
type graphState int
const (
graphStateNil graphState = iota
graphStateStarting
graphStateStarted
graphStatePausing
graphStatePaused
)
// Flags contains specific constants used by the graph.
type Flags struct {
Debug bool
}
// Graph is the graph structure in this library.
// The graph abstract data type (ADT) is defined as follows:
// * the directed graph arrows point from left to right ( -> )
@@ -55,87 +33,71 @@ type Flags struct {
// * This is also the direction that the notify should happen in...
type Graph struct {
Name string
Adjacency map[*Vertex]map[*Vertex]*Edge // *Vertex -> *Vertex (edge)
Flags Flags
state graphState
fastPause bool // used to disable pokes for a fast pause
mutex *sync.Mutex // used when modifying graph State variable
wg *sync.WaitGroup
semas map[string]*semaphore.Semaphore
slock *sync.Mutex // semaphore mutex
prometheus *prometheus.Prometheus // the prometheus instance
adjacency map[Vertex]map[Vertex]Edge // Vertex -> Vertex (edge)
kv map[string]interface{} // some values associated with the graph
}
// Vertex is the primary vertex struct in this library.
type Vertex struct {
resources.Res // anonymous field
timestamp int64 // last updated timestamp ?
// Vertex is the primary vertex struct in this library. It can be anything that
// implements Stringer. The string output must be stable and unique in a graph.
type Vertex interface {
fmt.Stringer // String() string
}
// Edge is the primary edge struct in this library.
type Edge struct {
Name string
Notify bool // should we send a refresh notification along this edge?
// Edge is the primary edge struct in this library. It can be anything that
// implements Stringer. The string output must be stable and unique in a graph.
type Edge interface {
fmt.Stringer // String() string
}
refresh bool // is there a notify pending for the dest vertex ?
// Init initializes the graph which populates all the internal structures.
func (g *Graph) Init() error {
if g.Name == "" { // FIXME: is this really a good requirement?
return fmt.Errorf("can't initialize graph with empty name")
}
//g.adjacency = make(map[Vertex]map[Vertex]Edge) // not required
//g.kv = make(map[string]interface{}) // not required
return nil
}
// NewGraph builds a new graph.
func NewGraph(name string) *Graph {
return &Graph{
Name: name,
Adjacency: make(map[*Vertex]map[*Vertex]*Edge),
state: graphStateNil,
// ptr b/c: Mutex/WaitGroup must not be copied after first use
mutex: &sync.Mutex{},
wg: &sync.WaitGroup{},
semas: make(map[string]*semaphore.Semaphore),
slock: &sync.Mutex{},
}
}
// NewVertex returns a new graph vertex struct with a contained resource.
func NewVertex(r resources.Res) *Vertex {
return &Vertex{
Res: r,
}
}
// NewEdge returns a new graph edge struct.
func NewEdge(name string) *Edge {
return &Edge{
func NewGraph(name string) (*Graph, error) {
g := &Graph{
Name: name,
}
if err := g.Init(); err != nil {
return nil, err
}
return g, nil
}
// Refresh returns the pending refresh status of this edge.
func (obj *Edge) Refresh() bool {
return obj.refresh
// Value returns a value stored alongside the graph in a particular key.
func (g *Graph) Value(key string) (interface{}, bool) {
val, exists := g.kv[key]
return val, exists
}
// SetRefresh sets the pending refresh status of this edge.
func (obj *Edge) SetRefresh(b bool) {
obj.refresh = b
// SetValue sets a value to be stored alongside the graph in a particular key.
func (g *Graph) SetValue(key string, val interface{}) {
if g.kv == nil { // initialize on first use
g.kv = make(map[string]interface{})
}
g.kv[key] = val
}
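As a clarifying aside (not part of this commit): a minimal sketch of using the graph's key/value store shown above, assuming an existing graph g.

g.SetValue("debug", true) // kv map is initialized on first use
if val, exists := g.Value("debug"); exists {
	debug := val.(bool) // values come back as interface{}, so assert the type
	log.Printf("debug: %t", debug)
}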
// Copy makes a copy of the graph struct
// Copy makes a copy of the graph struct.
func (g *Graph) Copy() *Graph {
if g == nil { // allow nil graphs through
return g
}
newGraph := &Graph{
Name: g.Name,
Adjacency: make(map[*Vertex]map[*Vertex]*Edge, len(g.Adjacency)),
Flags: g.Flags,
state: g.state,
mutex: g.mutex,
wg: g.wg,
semas: g.semas,
slock: g.slock,
fastPause: g.fastPause,
prometheus: g.prometheus,
adjacency: make(map[Vertex]map[Vertex]Edge, len(g.adjacency)),
kv: g.kv,
}
for k, v := range g.Adjacency {
newGraph.Adjacency[k] = v // copy
for k, v := range g.adjacency {
newGraph.adjacency[k] = v // copy
}
return newGraph
}
@@ -150,87 +112,50 @@ func (g *Graph) SetName(name string) {
g.Name = name
}
// getState returns the state of the graph. This state is used for optimizing
// certain algorithms by knowing what part of processing the graph is currently
// undergoing.
func (g *Graph) getState() graphState {
//g.mutex.Lock()
//defer g.mutex.Unlock()
return g.state
}
// setState sets the graph state and returns the previous state.
func (g *Graph) setState(state graphState) graphState {
g.mutex.Lock()
defer g.mutex.Unlock()
prev := g.getState()
g.state = state
return prev
}
// AddVertex uses variadic input to add all listed vertices to the graph
func (g *Graph) AddVertex(xv ...*Vertex) {
// AddVertex uses variadic input to add all listed vertices to the graph.
func (g *Graph) AddVertex(xv ...Vertex) {
if g.adjacency == nil { // initialize on first use
g.adjacency = make(map[Vertex]map[Vertex]Edge)
}
for _, v := range xv {
if _, exists := g.Adjacency[v]; !exists {
g.Adjacency[v] = make(map[*Vertex]*Edge)
if _, exists := g.adjacency[v]; !exists {
g.adjacency[v] = make(map[Vertex]Edge)
}
}
}
// DeleteVertex deletes a particular vertex from the graph.
func (g *Graph) DeleteVertex(v *Vertex) {
delete(g.Adjacency, v)
for k := range g.Adjacency {
delete(g.Adjacency[k], v)
func (g *Graph) DeleteVertex(v Vertex) {
delete(g.adjacency, v)
for k := range g.adjacency {
delete(g.adjacency[k], v)
}
}
// AddEdge adds a directed edge to the graph from v1 to v2.
func (g *Graph) AddEdge(v1, v2 *Vertex, e *Edge) {
func (g *Graph) AddEdge(v1, v2 Vertex, e Edge) {
// NOTE: this doesn't allow more than one edge between two vertexes...
g.AddVertex(v1, v2) // supports adding N vertices now
// TODO: check if an edge exists to avoid overwriting it!
// NOTE: VertexMerge() depends on overwriting it at the moment...
g.Adjacency[v1][v2] = e
g.adjacency[v1][v2] = e
}
// DeleteEdge deletes a particular edge from the graph.
// FIXME: add test cases
func (g *Graph) DeleteEdge(e *Edge) {
for v1 := range g.Adjacency {
for v2, edge := range g.Adjacency[v1] {
func (g *Graph) DeleteEdge(e Edge) {
for v1 := range g.adjacency {
for v2, edge := range g.adjacency[v1] {
if e == edge {
delete(g.Adjacency[v1], v2)
delete(g.adjacency[v1], v2)
}
}
}
}
// CompareMatch searches for an equivalent resource in the graph and returns the
// vertex it is found in, or nil if not found.
func (g *Graph) CompareMatch(obj resources.Res) *Vertex {
for v := range g.Adjacency {
if v.Res.Compare(obj) {
return v
}
}
return nil
}
// TODO: consider adding a mutate API.
//func (g *Graph) MutateMatch(obj resources.Res) *Vertex {
// for v := range g.Adjacency {
// if err := v.Res.Mutate(obj); err == nil {
// // transmogrified!
// return v
// }
// }
// return nil
//}
// HasVertex returns if the input vertex exists in the graph.
func (g *Graph) HasVertex(v *Vertex) bool {
if _, exists := g.Adjacency[v]; exists {
func (g *Graph) HasVertex(v Vertex) bool {
if _, exists := g.adjacency[v]; exists {
return true
}
return false
@@ -238,33 +163,40 @@ func (g *Graph) HasVertex(v *Vertex) bool {
// NumVertices returns the number of vertices in the graph.
func (g *Graph) NumVertices() int {
return len(g.Adjacency)
return len(g.adjacency)
}
// NumEdges returns the number of edges in the graph.
func (g *Graph) NumEdges() int {
count := 0
for k := range g.Adjacency {
count += len(g.Adjacency[k])
for k := range g.adjacency {
count += len(g.adjacency[k])
}
return count
}
// GetVertices returns a randomly sorted slice of all vertices in the graph
// Adjacency returns the adjacency map representing this graph. This is useful
// for users who wish to operate on the raw data structure more efficiently.
// This works because maps are reference types so we can edit this at will.
func (g *Graph) Adjacency() map[Vertex]map[Vertex]Edge {
return g.adjacency
}
// Vertices returns a randomly sorted slice of all vertices in the graph.
// The order is random, because the map implementation is intentionally so!
func (g *Graph) GetVertices() []*Vertex {
var vertices []*Vertex
for k := range g.Adjacency {
func (g *Graph) Vertices() []Vertex {
var vertices []Vertex
for k := range g.adjacency {
vertices = append(vertices, k)
}
return vertices
}
// GetVerticesChan returns a channel of all vertices in the graph.
func (g *Graph) GetVerticesChan() chan *Vertex {
ch := make(chan *Vertex)
go func(ch chan *Vertex) {
for k := range g.Adjacency {
// VerticesChan returns a channel of all vertices in the graph.
func (g *Graph) VerticesChan() chan Vertex {
ch := make(chan Vertex)
go func(ch chan Vertex) {
for k := range g.adjacency {
ch <- k
}
close(ch)
@@ -273,17 +205,17 @@ func (g *Graph) GetVerticesChan() chan *Vertex {
}
// VertexSlice is a linear list of vertices. It can be sorted.
type VertexSlice []*Vertex
type VertexSlice []Vertex
func (vs VertexSlice) Len() int { return len(vs) }
func (vs VertexSlice) Swap(i, j int) { vs[i], vs[j] = vs[j], vs[i] }
func (vs VertexSlice) Less(i, j int) bool { return vs[i].String() < vs[j].String() }
// GetVerticesSorted returns a sorted slice of all vertices in the graph
// The order is sorted by String() to avoid the non-determinism in the map type
func (g *Graph) GetVerticesSorted() []*Vertex {
var vertices []*Vertex
for k := range g.Adjacency {
// VerticesSorted returns a sorted slice of all vertices in the graph.
// The order is sorted by String() to avoid the non-determinism in the map type.
func (g *Graph) VerticesSorted() []Vertex {
var vertices []Vertex
for k := range g.adjacency {
vertices = append(vertices, k)
}
sort.Sort(VertexSlice(vertices)) // add determinism
@@ -295,19 +227,14 @@ func (g *Graph) String() string {
return fmt.Sprintf("Vertices(%d), Edges(%d)", g.NumVertices(), g.NumEdges())
}
// String returns the canonical form for a vertex
func (v *Vertex) String() string {
return fmt.Sprintf("%s[%s]", v.Res.Kind(), v.Res.GetName())
}
// IncomingGraphVertices returns an array (slice) of all directed vertices to
// vertex v (??? -> v). OKTimestamp should probably use this.
func (g *Graph) IncomingGraphVertices(v *Vertex) []*Vertex {
func (g *Graph) IncomingGraphVertices(v Vertex) []Vertex {
// TODO: we might be able to implement this differently by reversing
// the Adjacency graph and then looping through it again...
var s []*Vertex
for k := range g.Adjacency { // reverse paths
for w := range g.Adjacency[k] {
var s []Vertex
for k := range g.adjacency { // reverse paths
for w := range g.adjacency[k] {
if w == v {
s = append(s, k)
}
@@ -318,9 +245,9 @@ func (g *Graph) IncomingGraphVertices(v *Vertex) []*Vertex {
// OutgoingGraphVertices returns an array (slice) of all vertices that vertex v
// points to (v -> ???). Poke should probably use this.
func (g *Graph) OutgoingGraphVertices(v *Vertex) []*Vertex {
var s []*Vertex
for k := range g.Adjacency[v] { // forward paths
func (g *Graph) OutgoingGraphVertices(v Vertex) []Vertex {
var s []Vertex
for k := range g.adjacency[v] { // forward paths
s = append(s, k)
}
return s
@@ -328,18 +255,18 @@ func (g *Graph) OutgoingGraphVertices(v *Vertex) []*Vertex {
// GraphVertices returns an array (slice) of all vertices that connect to vertex v.
// This is the union of IncomingGraphVertices and OutgoingGraphVertices.
func (g *Graph) GraphVertices(v *Vertex) []*Vertex {
var s []*Vertex
func (g *Graph) GraphVertices(v Vertex) []Vertex {
var s []Vertex
s = append(s, g.IncomingGraphVertices(v)...)
s = append(s, g.OutgoingGraphVertices(v)...)
return s
}
// IncomingGraphEdges returns all of the edges that point to vertex v (??? -> v).
func (g *Graph) IncomingGraphEdges(v *Vertex) []*Edge {
var edges []*Edge
for v1 := range g.Adjacency { // reverse paths
for v2, e := range g.Adjacency[v1] {
func (g *Graph) IncomingGraphEdges(v Vertex) []Edge {
var edges []Edge
for v1 := range g.adjacency { // reverse paths
for v2, e := range g.adjacency[v1] {
if v2 == v {
edges = append(edges, e)
}
@@ -349,9 +276,9 @@ func (g *Graph) IncomingGraphEdges(v *Vertex) []*Edge {
}
// OutgoingGraphEdges returns all of the edges that point from vertex v (v -> ???).
func (g *Graph) OutgoingGraphEdges(v *Vertex) []*Edge {
var edges []*Edge
for _, e := range g.Adjacency[v] { // forward paths
func (g *Graph) OutgoingGraphEdges(v Vertex) []Edge {
var edges []Edge
for _, e := range g.adjacency[v] { // forward paths
edges = append(edges, e)
}
return edges
@@ -359,18 +286,18 @@ func (g *Graph) OutgoingGraphEdges(v *Vertex) []*Edge {
// GraphEdges returns an array (slice) of all edges that connect to vertex v.
// This is the union of IncomingGraphEdges and OutgoingGraphEdges.
func (g *Graph) GraphEdges(v *Vertex) []*Edge {
var edges []*Edge
func (g *Graph) GraphEdges(v Vertex) []Edge {
var edges []Edge
edges = append(edges, g.IncomingGraphEdges(v)...)
edges = append(edges, g.OutgoingGraphEdges(v)...)
return edges
}
// DFS returns a depth first search for the graph, starting at the input vertex.
func (g *Graph) DFS(start *Vertex) []*Vertex {
var d []*Vertex // discovered
var s []*Vertex // stack
if _, exists := g.Adjacency[start]; !exists {
func (g *Graph) DFS(start Vertex) []Vertex {
var d []Vertex // discovered
var s []Vertex // stack
if _, exists := g.adjacency[start]; !exists {
return nil // TODO: error
}
v := start
@@ -390,31 +317,32 @@ func (g *Graph) DFS(start *Vertex) []*Vertex {
}
// FilterGraph builds a new graph containing only vertices from the list.
func (g *Graph) FilterGraph(name string, vertices []*Vertex) *Graph {
newgraph := NewGraph(name)
for k1, x := range g.Adjacency {
func (g *Graph) FilterGraph(name string, vertices []Vertex) (*Graph, error) {
newGraph := &Graph{Name: name}
if err := newGraph.Init(); err != nil {
return nil, errwrap.Wrapf(err, "could not run FilterGraph() properly")
}
for k1, x := range g.adjacency {
for k2, e := range x {
//log.Printf("Filter: %s -> %s # %s", k1.Name, k2.Name, e.Name)
if VertexContains(k1, vertices) || VertexContains(k2, vertices) {
newgraph.AddEdge(k1, k2, e)
newGraph.AddEdge(k1, k2, e)
}
}
}
return newgraph
return newGraph, nil
}
// GetDisconnectedGraphs returns a channel containing the N disconnected graphs
// in our main graph. We can then process each of these in parallel.
func (g *Graph) GetDisconnectedGraphs() chan *Graph {
ch := make(chan *Graph)
go func() {
var start *Vertex
var d []*Vertex // discovered
// DisconnectedGraphs returns a list containing the N disconnected graphs.
func (g *Graph) DisconnectedGraphs() ([]*Graph, error) {
graphs := []*Graph{}
var start Vertex
var d []Vertex // discovered
c := g.NumVertices()
for len(d) < c {
// get an undiscovered vertex to start from
for _, s := range g.GetVertices() {
for _, s := range g.Vertices() {
if !VertexContains(s, d) {
start = s
}
@@ -423,31 +351,31 @@ func (g *Graph) GetDisconnectedGraphs() chan *Graph {
// dfs through the graph
dfs := g.DFS(start)
// filter all the collected elements into a new graph
newgraph := g.FilterGraph(g.Name, dfs)
newgraph, err := g.FilterGraph(g.Name, dfs)
if err != nil {
return nil, errwrap.Wrapf(err, "could not run DisconnectedGraphs() properly")
}
// add number of elements found to found variable
d = append(d, dfs...) // extend
// return this new graph to the channel
ch <- newgraph
// append this new graph to the list
graphs = append(graphs, newgraph)
// if we've found all the elements, then we're done
// otherwise loop through to continue...
}
close(ch)
}()
return ch
return graphs, nil
}
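As a clarifying aside (not part of this commit): a minimal sketch of consuming the new list-based return value, assuming a populated graph g.

graphs, err := g.DisconnectedGraphs()
if err != nil {
	log.Fatalf("could not split graph: %v", err)
}
for _, sub := range graphs {
	log.Printf("subgraph: %v", sub) // each prints as Vertices(N), Edges(M)
}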
// InDegree returns the count of vertices that point to me in one big lookup map.
func (g *Graph) InDegree() map[*Vertex]int {
result := make(map[*Vertex]int)
for k := range g.Adjacency {
func (g *Graph) InDegree() map[Vertex]int {
result := make(map[Vertex]int)
for k := range g.adjacency {
result[k] = 0 // initialize
}
for k := range g.Adjacency {
for z := range g.Adjacency[k] {
for k := range g.adjacency {
for z := range g.adjacency[k] {
result[z]++
}
}
@@ -455,12 +383,12 @@ func (g *Graph) InDegree() map[*Vertex]int {
}
// OutDegree returns the count of vertices that point away in one big lookup map.
func (g *Graph) OutDegree() map[*Vertex]int {
result := make(map[*Vertex]int)
func (g *Graph) OutDegree() map[Vertex]int {
result := make(map[Vertex]int)
for k := range g.Adjacency {
for k := range g.adjacency {
result[k] = 0 // initialize
for range g.Adjacency[k] {
for range g.adjacency[k] {
result[k]++
}
}
@@ -468,12 +396,12 @@ func (g *Graph) OutDegree() map[*Vertex]int {
}
// TopologicalSort returns the sort of graph vertices in that order.
// based on descriptions and code from wikipedia and rosetta code
// It is based on descriptions and code from wikipedia and rosetta code.
// TODO: add memoization, and cache invalidation to speed this up :)
func (g *Graph) TopologicalSort() ([]*Vertex, error) { // kahn's algorithm
var L []*Vertex // empty list that will contain the sorted elements
var S []*Vertex // set of all nodes with no incoming edges
remaining := make(map[*Vertex]int) // amount of edges remaining
func (g *Graph) TopologicalSort() ([]Vertex, error) { // kahn's algorithm
var L []Vertex // empty list that will contain the sorted elements
var S []Vertex // set of all nodes with no incoming edges
remaining := make(map[Vertex]int) // amount of edges remaining
for v, d := range g.InDegree() {
if d == 0 {
@@ -490,7 +418,7 @@ func (g *Graph) TopologicalSort() ([]*Vertex, error) { // kahn's algorithm
v := S[last]
S = S[:last]
L = append(L, v) // add v to tail of L
for n := range g.Adjacency[v] {
for n := range g.adjacency[v] {
// for each node n remaining in the graph, consume from
// remaining, so for remaining[n] > 0
if remaining[n] > 0 {
@@ -505,7 +433,7 @@ func (g *Graph) TopologicalSort() ([]*Vertex, error) { // kahn's algorithm
// if graph has edges, eg if any value in rem is > 0
for c, in := range remaining {
if in > 0 {
for n := range g.Adjacency[c] {
for n := range g.adjacency[c] {
if remaining[n] > 0 {
return nil, fmt.Errorf("not a dag")
}
@@ -524,19 +452,19 @@ func (g *Graph) TopologicalSort() ([]*Vertex, error) { // kahn's algorithm
// actually return a tree if we cared about correctness.
// This operates by a recursive algorithm; a more efficient version is likely.
// If you don't give this function a DAG, you might cause infinite recursion!
func (g *Graph) Reachability(a, b *Vertex) []*Vertex {
func (g *Graph) Reachability(a, b Vertex) []Vertex {
if a == nil || b == nil {
return nil
}
vertices := g.OutgoingGraphVertices(a) // what points away from a ?
if len(vertices) == 0 {
return []*Vertex{} // nope
return []Vertex{} // nope
}
if VertexContains(b, vertices) {
return []*Vertex{a, b} // found
return []Vertex{a, b} // found
}
// TODO: parallelize this with go routines?
var collected = make([][]*Vertex, len(vertices))
var collected = make([][]Vertex, len(vertices))
pick := -1
for i, v := range vertices {
collected[i] = g.Reachability(v, b) // find b by recursion
@@ -549,126 +477,111 @@ func (g *Graph) Reachability(a, b *Vertex) []*Vertex {
}
}
if pick < 0 {
return []*Vertex{} // nope
return []Vertex{} // nope
}
result := []*Vertex{a} // tack on a
result := []Vertex{a} // tack on a
result = append(result, collected[pick]...)
return result
}
// GraphSync updates the oldGraph so that it matches the newGraph receiver. It
// leaves identical elements alone so that they don't need to be refreshed. It
// tries to mutate existing elements into new ones, if they support this.
// FIXME: add test cases
func (g *Graph) GraphSync(oldGraph *Graph) (*Graph, error) {
if oldGraph == nil {
oldGraph = NewGraph(g.GetName()) // copy over the name
}
oldGraph.SetName(g.GetName()) // overwrite the name
var lookup = make(map[*Vertex]*Vertex)
var vertexKeep []*Vertex // list of vertices which are the same in new graph
var edgeKeep []*Edge // list of edges which are the same in new graph
for v := range g.Adjacency { // loop through the vertices (resources)
res := v.Res // resource
var vertex *Vertex
// step one, direct compare with res.Compare
if vertex == nil { // redundant guard for consistency
vertex = oldGraph.CompareMatch(res)
}
// TODO: consider adding a mutate API.
// step two, try and mutate with res.Mutate
//if vertex == nil { // not found yet...
// vertex = oldGraph.MutateMatch(res)
//}
if vertex == nil { // no match found yet
if err := res.Validate(); err != nil {
return nil, errwrap.Wrapf(err, "could not Validate() resource")
}
vertex = v
oldGraph.AddVertex(vertex) // call standalone in case not part of an edge
}
lookup[v] = vertex // used for constructing edges
vertexKeep = append(vertexKeep, vertex) // append
}
// get rid of any vertices we shouldn't keep (that aren't in new graph)
for v := range oldGraph.Adjacency {
if !VertexContains(v, vertexKeep) {
// wait for exit before starting new graph!
v.SendEvent(event.EventExit, nil) // sync
v.Res.WaitGroup().Wait()
oldGraph.DeleteVertex(v)
// VertexMatchFn searches for a vertex in the graph and returns the vertex if
// one matches. It uses a user defined function to match. That function must
// return true on match, and an error if anything goes wrong.
func (g *Graph) VertexMatchFn(fn func(Vertex) (bool, error)) (Vertex, error) {
for v := range g.adjacency {
if b, err := fn(v); err != nil {
return nil, errwrap.Wrapf(err, "fn in VertexMatchFn() errored")
} else if b {
return v, nil
}
}
// compare edges
for v1 := range g.Adjacency { // loop through the vertices (resources)
for v2, e := range g.Adjacency[v1] {
// we have an edge!
// lookup vertices (these should exist now)
//res1 := v1.Res // resource
//res2 := v2.Res
//vertex1 := oldGraph.CompareMatch(res1)
//vertex2 := oldGraph.CompareMatch(res2)
vertex1, exists1 := lookup[v1]
vertex2, exists2 := lookup[v2]
if !exists1 || !exists2 { // no match found, bug?
//if vertex1 == nil || vertex2 == nil { // no match found
return nil, fmt.Errorf("new vertices weren't found") // programming error
}
edge, exists := oldGraph.Adjacency[vertex1][vertex2]
if !exists || edge.Name != e.Name { // TODO: edgeCmp
edge = e // use or overwrite edge
}
oldGraph.Adjacency[vertex1][vertex2] = edge // store it (AddEdge)
edgeKeep = append(edgeKeep, edge) // mark as saved
}
}
// delete unused edges
for v1 := range oldGraph.Adjacency {
for _, e := range oldGraph.Adjacency[v1] {
// we have an edge!
if !EdgeContains(e, edgeKeep) {
oldGraph.DeleteEdge(e)
}
}
}
return oldGraph, nil
return nil, nil // nothing found
}
// GraphMetas returns a list of pointers to each of the resource MetaParams.
func (g *Graph) GraphMetas() []*resources.MetaParams {
metas := []*resources.MetaParams{}
for v := range g.Adjacency { // loop through the vertices (resources)
res := v.Res // resource
meta := res.Meta()
metas = append(metas, meta)
// GraphCmp compares the topology of this graph to another and returns nil if
// they're equal. It uses a user defined function to compare topologically
// equivalent vertices, and edges.
// FIXME: add more test cases
func (g *Graph) GraphCmp(graph *Graph, vertexCmpFn func(Vertex, Vertex) (bool, error), edgeCmpFn func(Edge, Edge) (bool, error)) error {
n1, n2 := g.NumVertices(), graph.NumVertices()
if n1 != n2 {
return fmt.Errorf("base graph has %d vertices, while input graph has %d", n1, n2)
}
return metas
}
// AssociateData associates some data with the object in the graph in question.
func (g *Graph) AssociateData(data *resources.Data) {
// prometheus needs to be associated to this graph as well
g.prometheus = data.Prometheus
for k := range g.Adjacency {
*k.Res.Data() = *data
if e1, e2 := g.NumEdges(), graph.NumEdges(); e1 != e2 {
return fmt.Errorf("base graph has %d edges, while input graph has %d", e1, e2)
}
var m = make(map[Vertex]Vertex) // g to graph vertex correspondence
Loop:
// check vertices
for v1 := range g.Adjacency() { // for each vertex in g
for v2 := range graph.Adjacency() { // does it match in graph ?
b, err := vertexCmpFn(v1, v2)
if err != nil {
return errwrap.Wrapf(err, "could not run vertexCmpFn() properly")
}
// does it match ?
if b {
m[v1] = v2 // store the mapping
continue Loop
}
}
return fmt.Errorf("base graph, has no match in input graph for: %s", v1)
}
// vertices match :)
// is the mapping the right length?
if n1 := len(m); n1 != n2 {
return fmt.Errorf("mapping only has correspondence of %d, when it should have %d", n1, n2)
}
// check if mapping is unique (are there duplicates?)
m1 := []Vertex{}
m2 := []Vertex{}
for k, v := range m {
if VertexContains(k, m1) {
return fmt.Errorf("mapping from %s is used more than once to: %s", k, m1)
}
if VertexContains(v, m2) {
return fmt.Errorf("mapping to %s is used more than once from: %s", v, m2)
}
m1 = append(m1, k)
m2 = append(m2, v)
}
// check edges
for v1 := range g.Adjacency() { // for each vertex in g
v2 := m[v1] // lookup in map to get correspondence
// g.Adjacency()[v1] corresponds to graph.Adjacency()[v2]
if e1, e2 := len(g.Adjacency()[v1]), len(graph.Adjacency()[v2]); e1 != e2 {
return fmt.Errorf("base graph, vertex(%s) has %d edges, while input graph, vertex(%s) has %d", v1, e1, v2, e2)
}
for vv1, ee1 := range g.Adjacency()[v1] {
vv2 := m[vv1]
ee2 := graph.Adjacency()[v2][vv2]
// these are edges from v1 -> vv1 via ee1 (graph 1)
// to cmp to edges from v2 -> vv2 via ee2 (graph 2)
// check: (1) vv1 == vv2 ? (we've already checked this!)
// check: (2) ee1 == ee2
b, err := edgeCmpFn(ee1, ee2)
if err != nil {
return errwrap.Wrapf(err, "could not run edgeCmpFn() properly")
}
if !b {
return fmt.Errorf("base graph edge(%s) doesn't match input graph edge(%s)", ee1, ee2)
}
}
}
return nil // success!
}
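As a clarifying aside (not part of this commit): a minimal sketch of comparing two graphs g1 and g2 with GraphCmp, written as if calling from outside the pgraph package, using string-based comparison functions of the kind a test helper such as runGraphCmp might wire in.

vertexCmpFn := func(v1, v2 pgraph.Vertex) (bool, error) {
	return v1.String() == v2.String(), nil // equal if their stable strings match
}
edgeCmpFn := func(e1, e2 pgraph.Edge) (bool, error) {
	return e1.String() == e2.String(), nil
}
if err := g1.GraphCmp(g2, vertexCmpFn, edgeCmpFn); err != nil {
	log.Printf("graphs differ: %v", err) // a nil error means topologically equal
}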
// VertexContains is an "in array" function to test for a vertex in a slice of vertices.
func VertexContains(needle *Vertex, haystack []*Vertex) bool {
func VertexContains(needle Vertex, haystack []Vertex) bool {
for _, v := range haystack {
if needle == v {
return true
@@ -678,7 +591,7 @@ func VertexContains(needle *Vertex, haystack []*Vertex) bool {
}
// EdgeContains is an "in array" function to test for an edge in a slice of edges.
func EdgeContains(needle *Edge, haystack []*Edge) bool {
func EdgeContains(needle Edge, haystack []Edge) bool {
for _, v := range haystack {
if needle == v {
return true
@@ -688,12 +601,23 @@ func EdgeContains(needle *Edge, haystack []*Edge) bool {
}
// Reverse reverses a list of vertices.
func Reverse(vs []*Vertex) []*Vertex {
//var out []*Vertex // XXX: golint suggests, but it fails testing
out := make([]*Vertex, 0) // empty list
func Reverse(vs []Vertex) []Vertex {
out := []Vertex{}
l := len(vs)
for i := range vs {
out = append(out, vs[l-i-1])
}
return out
}
// Sort the list of vertices and return a copy without modifying the input.
func Sort(vs []Vertex) []Vertex {
vertices := []Vertex{}
for _, v := range vs { // copy
vertices = append(vertices, v)
}
sort.Sort(VertexSlice(vertices))
return vertices
// sort.Sort(VertexSlice(vs)) // this is wrong, it would modify input!
//return vs
}


@@ -20,29 +20,42 @@ package pgraph
import (
"fmt"
"reflect"
"sort"
"strings"
"testing"
"time"
"github.com/purpleidea/mgmt/resources"
"github.com/purpleidea/mgmt/util"
)
// vertex is a test struct to test the library.
type vertex struct {
name string
}
// String is a required method of the Vertex interface that we must fulfill.
func (v *vertex) String() string {
return v.name
}
// NV is a helper function to make testing easier. It creates a new noop vertex.
func NV(s string) *Vertex {
obj := &resources.NoopRes{
BaseRes: resources.BaseRes{
Name: s,
},
Comment: "Testing!",
}
return NewVertex(obj)
func NV(s string) Vertex {
return &vertex{s}
}
// edge is a test struct to test the library.
type edge struct {
name string
}
// String is a required method of the Edge interface that we must fulfill.
func (e *edge) String() string {
return e.name
}
// NE is a helper function to make testing easier. It creates a new noop edge.
func NE(s string) Edge {
return &edge{s}
}
func TestPgraphT1(t *testing.T) {
G := NewGraph("g1")
G := &Graph{}
if i := G.NumVertices(); i != 0 {
t.Errorf("should have 0 vertices instead of: %d", i)
@@ -54,7 +67,7 @@ func TestPgraphT1(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
e1 := NewEdge("e1")
e1 := NE("e1")
G.AddEdge(v1, v2, e1)
if i := G.NumVertices(); i != 2 {
@@ -68,19 +81,19 @@ func TestPgraphT1(t *testing.T) {
func TestPgraphT2(t *testing.T) {
G := NewGraph("g2")
G := &Graph{Name: "g2"}
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
e4 := NewEdge("e4")
e5 := NewEdge("e5")
//e6 := NewEdge("e6")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
e4 := NE("e4")
e5 := NE("e5")
//e6 := NE("e6")
G.AddEdge(v1, v2, e1)
G.AddEdge(v2, v3, e2)
G.AddEdge(v3, v1, e3)
@@ -95,19 +108,19 @@ func TestPgraphT2(t *testing.T) {
func TestPgraphT3(t *testing.T) {
G := NewGraph("g3")
G, _ := NewGraph("g3")
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
e4 := NewEdge("e4")
e5 := NewEdge("e5")
//e6 := NewEdge("e6")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
e4 := NE("e4")
e5 := NE("e5")
//e6 := NE("e6")
G.AddEdge(v1, v2, e1)
G.AddEdge(v2, v3, e2)
G.AddEdge(v3, v1, e3)
@@ -120,7 +133,7 @@ func TestPgraphT3(t *testing.T) {
t.Errorf("should have 3 vertices instead of: %d", i)
t.Errorf("found: %v", out1)
for _, v := range out1 {
t.Errorf("value: %v", v.GetName())
t.Errorf("value: %s", v)
}
}
@@ -129,20 +142,20 @@ func TestPgraphT3(t *testing.T) {
t.Errorf("should have 3 vertices instead of: %d", i)
t.Errorf("found: %v", out1)
for _, v := range out1 {
t.Errorf("value: %v", v.GetName())
t.Errorf("value: %s", v)
}
}
}
func TestPgraphT4(t *testing.T) {
G := NewGraph("g4")
G, _ := NewGraph("g4")
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
G.AddEdge(v1, v2, e1)
G.AddEdge(v2, v3, e2)
G.AddEdge(v3, v1, e3)
@@ -152,25 +165,25 @@ func TestPgraphT4(t *testing.T) {
t.Errorf("should have 3 vertices instead of: %d", i)
t.Errorf("found: %v", out)
for _, v := range out {
t.Errorf("value: %v", v.GetName())
t.Errorf("value: %s", v)
}
}
}
func TestPgraphT5(t *testing.T) {
G := NewGraph("g5")
G, _ := NewGraph("g5")
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
e4 := NewEdge("e4")
e5 := NewEdge("e5")
//e6 := NewEdge("e6")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
e4 := NE("e4")
e5 := NE("e5")
//e6 := NE("e6")
G.AddEdge(v1, v2, e1)
G.AddEdge(v2, v3, e2)
G.AddEdge(v3, v1, e3)
@@ -179,27 +192,31 @@ func TestPgraphT5(t *testing.T) {
G.AddEdge(v5, v6, e5)
//G.AddEdge(v6, v4, e6)
save := []*Vertex{v1, v2, v3}
out := G.FilterGraph("new g5", save)
save := []Vertex{v1, v2, v3}
out, err := G.FilterGraph("new g5", save)
if err != nil {
t.Errorf("failed with: %v", err)
}
if i := out.NumVertices(); i != 3 {
t.Errorf("should have 3 vertices instead of: %d", i)
}
}
func TestPgraphT6(t *testing.T) {
G := NewGraph("g6")
G, _ := NewGraph("g6")
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
e4 := NewEdge("e4")
e5 := NewEdge("e5")
//e6 := NewEdge("e6")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
e4 := NE("e4")
e5 := NE("e5")
//e6 := NE("e6")
G.AddEdge(v1, v2, e1)
G.AddEdge(v2, v3, e2)
G.AddEdge(v3, v1, e3)
@@ -208,30 +225,25 @@ func TestPgraphT6(t *testing.T) {
G.AddEdge(v5, v6, e5)
//G.AddEdge(v6, v4, e6)
graphs := G.GetDisconnectedGraphs()
HeisenbergGraphCount := func(ch chan *Graph) int {
c := 0
for x := range ch {
_ = x
c++
}
return c
graphs, err := G.DisconnectedGraphs()
if err != nil {
t.Errorf("failed with: %v", err)
}
if i := HeisenbergGraphCount(graphs); i != 2 {
if i := len(graphs); i != 2 {
t.Errorf("should have 2 graphs instead of: %d", i)
}
}
func TestPgraphT7(t *testing.T) {
G := NewGraph("g7")
G, _ := NewGraph("g7")
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
G.AddEdge(v1, v2, e1)
G.AddEdge(v2, v3, e2)
G.AddEdge(v3, v1, e3)
@@ -270,45 +282,45 @@ func TestPgraphT8(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
if VertexContains(v1, []*Vertex{v1, v2, v3}) != true {
if VertexContains(v1, []Vertex{v1, v2, v3}) != true {
t.Errorf("should be true instead of false.")
}
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
if VertexContains(v4, []*Vertex{v5, v6}) != false {
if VertexContains(v4, []Vertex{v5, v6}) != false {
t.Errorf("should be false instead of true.")
}
v7 := NV("v7")
v8 := NV("v8")
v9 := NV("v9")
if VertexContains(v8, []*Vertex{v7, v8, v9}) != true {
if VertexContains(v8, []Vertex{v7, v8, v9}) != true {
t.Errorf("should be true instead of false.")
}
v1b := NV("v1") // same value, different objects
if VertexContains(v1b, []*Vertex{v1, v2, v3}) != false {
if VertexContains(v1b, []Vertex{v1, v2, v3}) != false {
t.Errorf("should be false instead of true.")
}
}
func TestPgraphT9(t *testing.T) {
G := NewGraph("g9")
G, _ := NewGraph("g9")
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
e4 := NewEdge("e4")
e5 := NewEdge("e5")
e6 := NewEdge("e6")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
e4 := NE("e4")
e5 := NE("e5")
e6 := NE("e6")
G.AddEdge(v1, v2, e1)
G.AddEdge(v1, v3, e2)
G.AddEdge(v2, v4, e3)
@@ -317,7 +329,7 @@ func TestPgraphT9(t *testing.T) {
G.AddEdge(v4, v5, e5)
G.AddEdge(v5, v6, e6)
indegree := G.InDegree() // map[*Vertex]int
indegree := G.InDegree() // map[Vertex]int
if i := indegree[v1]; i != 0 {
t.Errorf("indegree of v1 should be 0 instead of: %d", i)
}
@@ -337,7 +349,7 @@ func TestPgraphT9(t *testing.T) {
t.Errorf("indegree of v6 should be 1 instead of: %d", i)
}
outdegree := G.OutDegree() // map[*Vertex]int
outdegree := G.OutDegree() // map[Vertex]int
if i := outdegree[v1]; i != 2 {
t.Errorf("outdegree of v1 should be 2 instead of: %d", i)
}
@@ -359,12 +371,12 @@ func TestPgraphT9(t *testing.T) {
s, err := G.TopologicalSort()
// either possibility is a valid toposort
match := reflect.DeepEqual(s, []*Vertex{v1, v2, v3, v4, v5, v6}) || reflect.DeepEqual(s, []*Vertex{v1, v3, v2, v4, v5, v6})
match := reflect.DeepEqual(s, []Vertex{v1, v2, v3, v4, v5, v6}) || reflect.DeepEqual(s, []Vertex{v1, v3, v2, v4, v5, v6})
if err != nil || !match {
t.Errorf("topological sort failed, error: %v", err)
str := "Found:"
for _, v := range s {
str += " " + v.Res.GetName()
str += " " + v.String()
}
t.Errorf(str)
}
@@ -372,19 +384,19 @@ func TestPgraphT9(t *testing.T) {
func TestPgraphT10(t *testing.T) {
G := NewGraph("g10")
G, _ := NewGraph("g10")
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
e4 := NewEdge("e4")
e5 := NewEdge("e5")
e6 := NewEdge("e6")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
e4 := NE("e4")
e5 := NE("e5")
e6 := NE("e6")
G.AddEdge(v1, v2, e1)
G.AddEdge(v2, v3, e2)
G.AddEdge(v3, v4, e3)
@@ -400,47 +412,47 @@ func TestPgraphT10(t *testing.T) {
// empty
func TestPgraphReachability0(t *testing.T) {
{
G := NewGraph("g")
G, _ := NewGraph("g")
result := G.Reachability(nil, nil)
if result != nil {
t.Logf("reachability failed")
str := "Got:"
for _, v := range result {
str += " " + v.Res.GetName()
str += " " + v.String()
}
t.Errorf(str)
}
}
{
G := NewGraph("g")
G, _ := NewGraph("g")
v1 := NV("v1")
v6 := NV("v6")
result := G.Reachability(v1, v6)
expected := []*Vertex{}
expected := []Vertex{}
if !reflect.DeepEqual(result, expected) {
t.Logf("reachability failed")
str := "Got:"
for _, v := range result {
str += " " + v.Res.GetName()
str += " " + v.String()
}
t.Errorf(str)
}
}
{
G := NewGraph("g")
G, _ := NewGraph("g")
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
e4 := NewEdge("e4")
e5 := NewEdge("e5")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
e4 := NE("e4")
e5 := NE("e5")
G.AddEdge(v1, v2, e1)
G.AddEdge(v2, v3, e2)
G.AddEdge(v1, v4, e3)
@@ -448,13 +460,13 @@ func TestPgraphReachability0(t *testing.T) {
G.AddEdge(v3, v5, e5)
result := G.Reachability(v1, v6)
expected := []*Vertex{}
expected := []Vertex{}
if !reflect.DeepEqual(result, expected) {
t.Logf("reachability failed")
str := "Got:"
for _, v := range result {
str += " " + v.Res.GetName()
str += " " + v.String()
}
t.Errorf(str)
}
@@ -463,19 +475,19 @@ func TestPgraphReachability0(t *testing.T) {
// simple linear path
func TestPgraphReachability1(t *testing.T) {
G := NewGraph("g")
G, _ := NewGraph("g")
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
e4 := NewEdge("e4")
e5 := NewEdge("e5")
//e6 := NewEdge("e6")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
e4 := NE("e4")
e5 := NE("e5")
//e6 := NE("e6")
G.AddEdge(v1, v2, e1)
G.AddEdge(v2, v3, e2)
G.AddEdge(v3, v4, e3)
@@ -483,13 +495,13 @@ func TestPgraphReachability1(t *testing.T) {
G.AddEdge(v5, v6, e5)
result := G.Reachability(v1, v6)
expected := []*Vertex{v1, v2, v3, v4, v5, v6}
expected := []Vertex{v1, v2, v3, v4, v5, v6}
if !reflect.DeepEqual(result, expected) {
t.Logf("reachability failed")
str := "Got:"
for _, v := range result {
str += " " + v.Res.GetName()
str += " " + v.String()
}
t.Errorf(str)
}
@@ -497,19 +509,19 @@ func TestPgraphReachability1(t *testing.T) {
// pick one of two correct paths
func TestPgraphReachability2(t *testing.T) {
G := NewGraph("g")
G, _ := NewGraph("g")
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
e4 := NewEdge("e4")
e5 := NewEdge("e5")
e6 := NewEdge("e6")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
e4 := NE("e4")
e5 := NE("e5")
e6 := NE("e6")
G.AddEdge(v1, v2, e1)
G.AddEdge(v1, v3, e2)
G.AddEdge(v2, v4, e3)
@@ -518,15 +530,15 @@ func TestPgraphReachability2(t *testing.T) {
G.AddEdge(v5, v6, e6)
result := G.Reachability(v1, v6)
expected1 := []*Vertex{v1, v2, v4, v5, v6}
expected2 := []*Vertex{v1, v3, v4, v5, v6}
expected1 := []Vertex{v1, v2, v4, v5, v6}
expected2 := []Vertex{v1, v3, v4, v5, v6}
// !xor test
if reflect.DeepEqual(result, expected1) == reflect.DeepEqual(result, expected2) {
t.Logf("reachability failed")
str := "Got:"
for _, v := range result {
str += " " + v.Res.GetName()
str += " " + v.String()
}
t.Errorf(str)
}
@@ -534,19 +546,19 @@ func TestPgraphReachability2(t *testing.T) {
// pick shortest path
func TestPgraphReachability3(t *testing.T) {
G := NewGraph("g")
G, _ := NewGraph("g")
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
e4 := NewEdge("e4")
e5 := NewEdge("e5")
e6 := NewEdge("e6")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
e4 := NE("e4")
e5 := NE("e5")
e6 := NE("e6")
G.AddEdge(v1, v2, e1)
G.AddEdge(v2, v3, e2)
G.AddEdge(v3, v4, e3)
@@ -555,13 +567,13 @@ func TestPgraphReachability3(t *testing.T) {
G.AddEdge(v5, v6, e6)
result := G.Reachability(v1, v6)
expected := []*Vertex{v1, v5, v6}
expected := []Vertex{v1, v5, v6}
if !reflect.DeepEqual(result, expected) {
t.Logf("reachability failed")
str := "Got:"
for _, v := range result {
str += " " + v.Res.GetName()
str += " " + v.String()
}
t.Errorf(str)
}
@@ -569,19 +581,19 @@ func TestPgraphReachability3(t *testing.T) {
// direct path
func TestPgraphReachability4(t *testing.T) {
G := NewGraph("g")
G, _ := NewGraph("g")
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
e1 := NewEdge("e1")
e2 := NewEdge("e2")
e3 := NewEdge("e3")
e4 := NewEdge("e4")
e5 := NewEdge("e5")
e6 := NewEdge("e6")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
e4 := NE("e4")
e5 := NE("e5")
e6 := NE("e6")
G.AddEdge(v1, v2, e1)
G.AddEdge(v2, v3, e2)
G.AddEdge(v3, v4, e3)
@@ -590,13 +602,13 @@ func TestPgraphReachability4(t *testing.T) {
G.AddEdge(v1, v6, e6)
result := G.Reachability(v1, v6)
expected := []*Vertex{v1, v6}
expected := []Vertex{v1, v6}
if !reflect.DeepEqual(result, expected) {
t.Logf("reachability failed")
str := "Got:"
for _, v := range result {
str += " " + v.Res.GetName()
str += " " + v.String()
}
t.Errorf(str)
}
@@ -610,249 +622,120 @@ func TestPgraphT11(t *testing.T) {
v5 := NV("v5")
v6 := NV("v6")
if rev := Reverse([]*Vertex{}); !reflect.DeepEqual(rev, []*Vertex{}) {
t.Errorf("reverse of vertex slice failed")
if rev := Reverse([]Vertex{}); !reflect.DeepEqual(rev, []Vertex{}) {
t.Errorf("reverse of vertex slice failed (empty)")
}
if rev := Reverse([]*Vertex{v1}); !reflect.DeepEqual(rev, []*Vertex{v1}) {
t.Errorf("reverse of vertex slice failed")
if rev := Reverse([]Vertex{v1}); !reflect.DeepEqual(rev, []Vertex{v1}) {
t.Errorf("reverse of vertex slice failed (single)")
}
if rev := Reverse([]*Vertex{v1, v2, v3, v4, v5, v6}); !reflect.DeepEqual(rev, []*Vertex{v6, v5, v4, v3, v2, v1}) {
t.Errorf("reverse of vertex slice failed")
if rev := Reverse([]Vertex{v1, v2, v3, v4, v5, v6}); !reflect.DeepEqual(rev, []Vertex{v6, v5, v4, v3, v2, v1}) {
t.Errorf("reverse of vertex slice failed (1..6)")
}
if rev := Reverse([]*Vertex{v6, v5, v4, v3, v2, v1}); !reflect.DeepEqual(rev, []*Vertex{v1, v2, v3, v4, v5, v6}) {
t.Errorf("reverse of vertex slice failed")
if rev := Reverse([]Vertex{v6, v5, v4, v3, v2, v1}); !reflect.DeepEqual(rev, []Vertex{v1, v2, v3, v4, v5, v6}) {
t.Errorf("reverse of vertex slice failed (6..1)")
}
}
type NoopResTest struct {
resources.NoopRes
func TestPgraphCopy1(t *testing.T) {
g1 := &Graph{}
g2 := g1.Copy() // check this doesn't panic
if !reflect.DeepEqual(g1.String(), g2.String()) {
t.Errorf("graph copy failed")
}
}
func (obj *NoopResTest) GroupCmp(r resources.Res) bool {
res, ok := r.(*NoopResTest)
if !ok {
return false
}
func TestPgraphDelete1(t *testing.T) {
G := &Graph{}
v1 := NV("v1")
G.DeleteVertex(v1) // check this doesn't panic
// TODO: implement this in vertexCmp for *testGrouper instead?
if strings.Contains(res.Name, ",") { // HACK
return false // element to be grouped is already grouped!
if i := G.NumVertices(); i != 0 {
t.Errorf("should have 0 vertices instead of: %d", i)
}
// group if they start with the same letter! (helpful hack for testing)
return obj.Name[0] == res.Name[0]
}
func NewNoopResTest(name string) *NoopResTest {
obj := &NoopResTest{
NoopRes: resources.NoopRes{
BaseRes: resources.BaseRes{
Name: name,
MetaParams: resources.MetaParams{
AutoGroup: true, // always autogroup
},
},
},
func vertexCmpFn(v1, v2 Vertex) (bool, error) {
if v1.String() == "" || v2.String() == "" {
return false, fmt.Errorf("oops, empty vertex")
}
return obj
return v1.String() == v2.String(), nil
}
// ListStrCmp compares two lists of strings
func ListStrCmp(a, b []string) bool {
//fmt.Printf("CMP: %v with %v\n", a, b) // debugging
if a == nil && b == nil {
return true
func edgeCmpFn(e1, e2 Edge) (bool, error) {
if e1.String() == "" || e2.String() == "" {
return false, fmt.Errorf("oops, empty edge")
}
if a == nil || b == nil {
return false
}
if len(a) != len(b) {
return false
}
for i := range a {
if a[i] != b[i] {
return false
}
}
return true
return e1.String() == e2.String(), nil
}
// GraphCmp compares the topology of two graphs and returns nil if they're equal
// It also compares if grouped element groups are identical
func GraphCmp(g1, g2 *Graph) error {
if n1, n2 := g1.NumVertices(), g2.NumVertices(); n1 != n2 {
return fmt.Errorf("graph g1 has %d vertices, while g2 has %d", n1, n2)
}
if e1, e2 := g1.NumEdges(), g2.NumEdges(); e1 != e2 {
return fmt.Errorf("graph g1 has %d edges, while g2 has %d", e1, e2)
func TestPgraphGraphCmp1(t *testing.T) {
g1 := &Graph{}
g2 := &Graph{}
g3 := &Graph{}
g3.AddVertex(NV("v1"))
g4 := &Graph{}
g4.AddVertex(NV("v2"))
if err := g1.GraphCmp(g2, vertexCmpFn, edgeCmpFn); err != nil {
t.Errorf("should have no error during GraphCmp, but got: %v", err)
}
var m = make(map[*Vertex]*Vertex) // g1 to g2 vertex correspondence
Loop:
// check vertices
for v1 := range g1.Adjacency { // for each vertex in g1
l1 := strings.Split(v1.GetName(), ",") // make list of everyone's names...
for _, x1 := range v1.GetGroup() {
l1 = append(l1, x1.GetName()) // add my contents
}
l1 = util.StrRemoveDuplicatesInList(l1) // remove duplicates
sort.Strings(l1)
// inner loop
for v2 := range g2.Adjacency { // does it match in g2 ?
l2 := strings.Split(v2.GetName(), ",")
for _, x2 := range v2.GetGroup() {
l2 = append(l2, x2.GetName())
}
l2 = util.StrRemoveDuplicatesInList(l2) // remove duplicates
sort.Strings(l2)
// does l1 match l2 ?
if ListStrCmp(l1, l2) { // cmp!
m[v1] = v2
continue Loop
}
}
return fmt.Errorf("graph g1, has no match in g2 for: %v", v1.GetName())
}
// vertices (and groups) match :)
// check edges
for v1 := range g1.Adjacency { // for each vertex in g1
v2 := m[v1] // lookup in map to get correspondance
// g1.Adjacency[v1] corresponds to g2.Adjacency[v2]
if e1, e2 := len(g1.Adjacency[v1]), len(g2.Adjacency[v2]); e1 != e2 {
return fmt.Errorf("graph g1, vertex(%v) has %d edges, while g2, vertex(%v) has %d", v1.GetName(), e1, v2.GetName(), e2)
if err := g1.GraphCmp(g3, vertexCmpFn, edgeCmpFn); err == nil {
t.Errorf("should have error during GraphCmp, but got nil")
}
for vv1, ee1 := range g1.Adjacency[v1] {
vv2 := m[vv1]
ee2 := g2.Adjacency[v2][vv2]
// these are edges from v1 -> vv1 via ee1 (graph 1)
// to cmp to edges from v2 -> vv2 via ee2 (graph 2)
// check: (1) vv1 == vv2 ? (we've already checked this!)
l1 := strings.Split(vv1.GetName(), ",") // make list of everyone's names...
for _, x1 := range vv1.GetGroup() {
l1 = append(l1, x1.GetName()) // add my contents
if err := g3.GraphCmp(g4, vertexCmpFn, edgeCmpFn); err == nil {
t.Errorf("should have error during GraphCmp, but got nil")
}
l1 = util.StrRemoveDuplicatesInList(l1) // remove duplicates
sort.Strings(l1)
l2 := strings.Split(vv2.GetName(), ",")
for _, x2 := range vv2.GetGroup() {
l2 = append(l2, x2.GetName())
}
l2 = util.StrRemoveDuplicatesInList(l2) // remove duplicates
sort.Strings(l2)
// does l1 match l2 ?
if !ListStrCmp(l1, l2) { // cmp!
return fmt.Errorf("graph g1 and g2 don't agree on: %v and %v", vv1.GetName(), vv2.GetName())
}
// check: (2) ee1 == ee2
if ee1.Name != ee2.Name {
return fmt.Errorf("graph g1 edge(%v) doesn't match g2 edge(%v)", ee1.Name, ee2.Name)
}
}
}
// check meta parameters
for v1 := range g1.Adjacency { // for each vertex in g1
for v2 := range g2.Adjacency { // does it match in g2 ?
s1, s2 := v1.Meta().Sema, v2.Meta().Sema
sort.Strings(s1)
sort.Strings(s2)
if !reflect.DeepEqual(s1, s2) {
return fmt.Errorf("vertex %s and vertex %s have different semaphores", v1.GetName(), v2.GetName())
}
}
}
return nil // success!
}
type testGrouper struct {
// TODO: this algorithm may not be correct in all cases. replace if needed!
nonReachabilityGrouper // "inherit" what we want, and reimplement the rest
}
func TestPgraphSort0(t *testing.T) {
vs := []Vertex{}
s := Sort(vs)
func (ag *testGrouper) name() string {
return "testGrouper"
}
func (ag *testGrouper) vertexMerge(v1, v2 *Vertex) (v *Vertex, err error) {
if err := v1.Res.GroupRes(v2.Res); err != nil { // group them first
return nil, err
}
// HACK: update the name so it matches full list of self+grouped
obj := v1.Res
names := strings.Split(obj.GetName(), ",") // load in stored names
for _, n := range obj.GetGroup() {
names = append(names, n.GetName()) // add my contents
}
names = util.StrRemoveDuplicatesInList(names) // remove duplicates
sort.Strings(names)
obj.SetName(strings.Join(names, ","))
return // success or fail, and no need to merge the actual vertices!
}
func (ag *testGrouper) edgeMerge(e1, e2 *Edge) *Edge {
// HACK: update the name so it makes a union of both names
n1 := strings.Split(e1.Name, ",") // load
n2 := strings.Split(e2.Name, ",") // load
names := append(n1, n2...)
names = util.StrRemoveDuplicatesInList(names) // remove duplicates
sort.Strings(names)
return NewEdge(strings.Join(names, ","))
}
func (g *Graph) fullPrint() (str string) {
str += "\n"
for v := range g.Adjacency {
if semas := v.Meta().Sema; len(semas) > 0 {
str += fmt.Sprintf("* v: %v; sema: %v\n", v.GetName(), semas)
if !reflect.DeepEqual(s, []Vertex{}) {
t.Errorf("sort failed!")
if s == nil {
t.Logf("output is nil!")
} else {
str += fmt.Sprintf("* v: %v\n", v.GetName())
str := "Got:"
for _, v := range s {
str += " " + v.String()
}
// TODO: add explicit grouping data?
t.Errorf(str)
}
for v1 := range g.Adjacency {
for v2, e := range g.Adjacency[v1] {
str += fmt.Sprintf("* e: %v -> %v # %v\n", v1.GetName(), v2.GetName(), e.Name)
}
}
return
}
// helper function
func runGraphCmp(t *testing.T, g1, g2 *Graph) {
ch := g1.autoGroup(&testGrouper{}) // edits the graph
for range ch { // bleed the channel or it won't run :(
// pass
}
err := GraphCmp(g1, g2)
if err != nil {
t.Logf(" actual (g1): %v%v", g1, g1.fullPrint())
t.Logf("expected (g2): %v%v", g2, g2.fullPrint())
t.Logf("Cmp error:")
t.Errorf("%v", err)
}
}
func TestDurationAssumptions(t *testing.T) {
var d time.Duration
if (d == 0) != true {
t.Errorf("empty time.Duration is no longer equal to zero")
func TestPgraphSort1(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
v6 := NV("v6")
vs := []Vertex{v3, v2, v6, v1, v5, v4}
s := Sort(vs)
if !reflect.DeepEqual(s, []Vertex{v1, v2, v3, v4, v5, v6}) {
t.Errorf("sort failed!")
str := "Got:"
for _, v := range s {
str += " " + v.String()
}
if (d > 0) != false {
t.Errorf("empty time.Duration is now greater than zero")
t.Errorf(str)
}
if !reflect.DeepEqual(vs, []Vertex{v3, v2, v6, v1, v5, v4}) {
t.Errorf("sort modified input!")
str := "Got:"
for _, v := range vs {
str += " " + v.String()
}
t.Errorf(str)
}
}
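The two Sort tests above pin down the helper's contract: the result is ordered by each vertex's String() value and the input slice is left untouched. A minimal sketch of an implementation that satisfies that contract (my own illustration, not necessarily the project's code, assuming the standard library "sort" package is imported) could look like this:

func sortByString(vs []Vertex) []Vertex {
	out := make([]Vertex, len(vs))
	copy(out, vs) // work on a copy so the caller's slice is never modified
	sort.Slice(out, func(i, j int) bool {
		return out[i].String() < out[j].String()
	})
	return out
}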

pgraph/subgraph.go Normal file

@@ -0,0 +1,106 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
// AddGraph adds the set of edges and vertices of a graph to the existing graph.
func (g *Graph) AddGraph(graph *Graph) {
g.addEdgeVertexGraphHelper(nil, graph, nil, false, false)
}
// AddEdgeVertexGraph adds a directed edge to the graph from a vertex.
// This is useful for flattening the relationship between a subgraph and an
// existing graph, without having to run the subgraph recursively. It adds the
// maximum number of edges, creating a relationship to every vertex.
func (g *Graph) AddEdgeVertexGraph(vertex Vertex, graph *Graph, edgeGenFn func(v1, v2 Vertex) Edge) {
g.addEdgeVertexGraphHelper(vertex, graph, edgeGenFn, false, false)
}
// AddEdgeVertexGraphLight adds a directed edge to the graph from a vertex.
// This is useful for flattening the relationship between a subgraph and an
// existing graph, without having to run the subgraph recursively. It adds the
// minimum number of edges, creating a relationship to the vertices with
// indegree equal to zero.
func (g *Graph) AddEdgeVertexGraphLight(vertex Vertex, graph *Graph, edgeGenFn func(v1, v2 Vertex) Edge) {
g.addEdgeVertexGraphHelper(vertex, graph, edgeGenFn, false, true)
}
// AddEdgeGraphVertex adds a directed edge to the vertex from a graph.
// This is useful for flattening the relationship between a subgraph and an
// existing graph, without having to run the subgraph recursively. It adds the
// maximum number of edges, creating a relationship from every vertex.
func (g *Graph) AddEdgeGraphVertex(graph *Graph, vertex Vertex, edgeGenFn func(v1, v2 Vertex) Edge) {
g.addEdgeVertexGraphHelper(vertex, graph, edgeGenFn, true, false)
}
// AddEdgeGraphVertexLight adds a directed edge to the vertex from a graph.
// This is useful for flattening the relationship between a subgraph and an
// existing graph, without having to run the subgraph recursively. It adds the
// minimum number of edges, creating a relationship from the vertices with
// outdegree equal to zero.
func (g *Graph) AddEdgeGraphVertexLight(graph *Graph, vertex Vertex, edgeGenFn func(v1, v2 Vertex) Edge) {
g.addEdgeVertexGraphHelper(vertex, graph, edgeGenFn, true, true)
}
// addEdgeVertexGraphHelper is a helper function to add a directed edges to the
// graph from a vertex, or vice-versa. It operates in this reverse direction by
// specifying the reverse argument as true. It is useful for flattening the
// relationship between a subgraph and an existing graph, without having to run
// the subgraph recursively. It adds the maximum number of edges, creating a
// relationship to or from every vertex if the light argument is false, and if
// it is true, it adds the minimum number of edges, creating a relationship to
// or from the vertices with an indegree or outdegree equal to zero depending on
// if we specified reverse or not.
func (g *Graph) addEdgeVertexGraphHelper(vertex Vertex, graph *Graph, edgeGenFn func(v1, v2 Vertex) Edge, reverse, light bool) {
var degree map[Vertex]int // compute all of the in/outdegree's if needed
if light && reverse {
degree = graph.OutDegree()
} else if light { // && !reverse
degree = graph.InDegree()
}
for _, v := range graph.VerticesSorted() { // sort to help out edgeGenFn
// forward:
// we only want to add edges to indegree == 0, because every
// other vertex is a dependency of at least one of those
// reverse:
// we only want to add edges to outdegree == 0, because every
// other vertex is a pre-requisite to at least one of these
if light && degree[v] != 0 {
continue
}
g.AddVertex(v) // ensure vertex is part of the graph
if vertex != nil && reverse {
edge := edgeGenFn(v, vertex) // generate a new unique edge
g.AddEdge(v, vertex, edge)
} else if vertex != nil { // && !reverse
edge := edgeGenFn(vertex, v)
g.AddEdge(vertex, v, edge)
}
}
// also remember to suck in all of the graph's edges too!
for v1 := range graph.Adjacency() {
for v2, e := range graph.Adjacency()[v1] {
g.AddEdge(v1, v2, e)
}
}
}
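As a quick illustration of the flattening helpers documented above (a hedged usage sketch built from the NV/NE test helpers, not code from this change, assuming "fmt" is imported): AddEdgeVertexGraph connects the vertex to every vertex of the subgraph, while the Light variant only connects it to the subgraph's indegree-zero roots.

g, _ := NewGraph("main")
v0 := NV("v0")
g.AddVertex(v0)

sub, _ := NewGraph("sub")
a, b := NV("a"), NV("b")
sub.AddEdge(a, b, NE("a,b"))

gen := func(v1, v2 Vertex) Edge { return NE(fmt.Sprintf("%s,%s", v1, v2)) }

g.AddEdgeVertexGraphLight(v0, sub, gen) // adds only v0 -> a, since a has indegree zero
// g.AddEdgeVertexGraph(v0, sub, gen)   // would add both v0 -> a and v0 -> b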

pgraph/subgraph_test.go Normal file

@@ -0,0 +1,210 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
import (
"fmt"
"testing"
)
// TODO: unify with the other function like this...
// TODO: where should we put our test helpers?
func runGraphCmp(t *testing.T, g1, g2 *Graph) {
err := g1.GraphCmp(g2, vertexCmpFn, edgeCmpFn)
if err != nil {
t.Logf(" actual (g1): %v%v", g1, fullPrint(g1))
t.Logf("expected (g2): %v%v", g2, fullPrint(g2))
t.Logf("Cmp error:")
t.Errorf("%v", err)
}
}
// TODO: unify with the other function like this...
func fullPrint(g *Graph) (str string) {
str += "\n"
for v := range g.Adjacency() {
str += fmt.Sprintf("* v: %s\n", v)
}
for v1 := range g.Adjacency() {
for v2, e := range g.Adjacency()[v1] {
str += fmt.Sprintf("* e: %s -> %s # %s\n", v1, v2, e)
}
}
return
}
// edgeGenFn generates unique edges for each vertex pair, assuming unique
// vertices.
func edgeGenFn(v1, v2 Vertex) Edge {
return NE(fmt.Sprintf("%s,%s", v1, v2))
}
func TestPgraphAddEdgeGraph1(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g := &Graph{}
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
sub := &Graph{}
sub.AddEdge(v4, v5, e3)
g.AddGraph(sub)
// expected (can re-use the same vertices)
expected := &Graph{}
expected.AddEdge(v1, v3, e1)
expected.AddEdge(v2, v3, e2)
expected.AddEdge(v4, v5, e3)
//expected.AddEdge(v3, v4, NE("v3,v4"))
//expected.AddEdge(v3, v5, NE("v3,v5"))
runGraphCmp(t, g, expected)
}
func TestPgraphAddEdgeVertexGraph1(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g := &Graph{}
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
sub := &Graph{}
sub.AddEdge(v4, v5, e3)
g.AddEdgeVertexGraph(v3, sub, edgeGenFn)
// expected (can re-use the same vertices)
expected := &Graph{}
expected.AddEdge(v1, v3, e1)
expected.AddEdge(v2, v3, e2)
expected.AddEdge(v4, v5, e3)
expected.AddEdge(v3, v4, NE("v3,v4"))
expected.AddEdge(v3, v5, NE("v3,v5"))
runGraphCmp(t, g, expected)
}
func TestPgraphAddEdgeGraphVertex1(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g := &Graph{}
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
sub := &Graph{}
sub.AddEdge(v4, v5, e3)
g.AddEdgeGraphVertex(sub, v3, edgeGenFn)
// expected (can re-use the same vertices)
expected := &Graph{}
expected.AddEdge(v1, v3, e1)
expected.AddEdge(v2, v3, e2)
expected.AddEdge(v4, v5, e3)
expected.AddEdge(v4, v3, NE("v4,v3"))
expected.AddEdge(v5, v3, NE("v5,v3"))
runGraphCmp(t, g, expected)
}
func TestPgraphAddEdgeVertexGraphLight1(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g := &Graph{}
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
sub := &Graph{}
sub.AddEdge(v4, v5, e3)
g.AddEdgeVertexGraphLight(v3, sub, edgeGenFn)
// expected (can re-use the same vertices)
expected := &Graph{}
expected.AddEdge(v1, v3, e1)
expected.AddEdge(v2, v3, e2)
expected.AddEdge(v4, v5, e3)
expected.AddEdge(v3, v4, NE("v3,v4"))
//expected.AddEdge(v3, v5, NE("v3,v5")) // not needed with light
runGraphCmp(t, g, expected)
}
func TestPgraphAddEdgeGraphVertexLight1(t *testing.T) {
v1 := NV("v1")
v2 := NV("v2")
v3 := NV("v3")
v4 := NV("v4")
v5 := NV("v5")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g := &Graph{}
g.AddEdge(v1, v3, e1)
g.AddEdge(v2, v3, e2)
sub := &Graph{}
sub.AddEdge(v4, v5, e3)
g.AddEdgeGraphVertexLight(sub, v3, edgeGenFn)
// expected (can re-use the same vertices)
expected := &Graph{}
expected.AddEdge(v1, v3, e1)
expected.AddEdge(v2, v3, e2)
expected.AddEdge(v4, v5, e3)
//expected.AddEdge(v4, v3, NE("v4,v3")) // not needed with light
expected.AddEdge(v5, v3, NE("v5,v3"))
runGraphCmp(t, g, expected)
}


@@ -75,37 +75,61 @@ func (obj *GAPI) Graph() (*pgraph.Graph, error) {
}
// Next returns nil errors every time there could be a new graph.
func (obj *GAPI) Next() chan error {
if obj.data.NoWatch {
return nil
}
func (obj *GAPI) Next() chan gapi.Next {
puppetChan := func() <-chan time.Time { // helper function
return time.Tick(time.Duration(RefreshInterval(obj.PuppetConf)) * time.Second)
}
ch := make(chan error)
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
ch <- fmt.Errorf("the puppet GAPI is not initialized")
next := gapi.Next{
Err: fmt.Errorf("the puppet GAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
return
}
pChan := puppetChan()
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
pChan := make(<-chan time.Time)
// NOTE: we don't look at obj.data.NoConfigWatch since emulating
// puppet means we do not switch graphs on code changes anyways.
if obj.data.NoStreamWatch {
pChan = nil
} else {
pChan = puppetChan()
}
for {
select {
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case _, ok := <-pChan:
if !ok { // the channel closed!
return
}
log.Printf("Puppet: Generating new graph...")
pChan = puppetChan() // TODO: okay to update interval in case it changed?
select {
case ch <- nil: // trigger a run (send a msg)
// unblock if we exit while waiting to send!
case <-obj.closeChan:
return
}
log.Printf("Puppet: Generating new graph...")
if obj.data.NoStreamWatch {
pChan = nil
} else {
pChan = puppetChan() // TODO: okay to update interval in case it changed?
}
next := gapi.Next{
//Exit: true, // TODO: for permanent shutdown!
Err: nil,
}
select {
case ch <- next: // trigger a run (send a msg)
// unblock if we exit while waiting to send!
case <-obj.closeChan:
return
}
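For context, here is a hedged sketch (my own illustration, not engine code from this change) of how a consumer of the channel returned by Next() might react to the Err and Exit fields shown above:

for next := range obj.Next() {
	if next.Err != nil {
		log.Printf("puppet GAPI error: %v", next.Err)
	}
	if next.Exit {
		break // the GAPI asked us to shut down
	}
	if next.Err != nil {
		continue // error, but not fatal; wait for the next message
	}
	// a nil Err means a new graph can be generated now
	if _, err := obj.Graph(); err != nil {
		log.Printf("could not generate graph: %v", err)
	}
}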


@@ -15,7 +15,7 @@
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
package resources
import (
"fmt"
@@ -26,36 +26,36 @@ import (
"time"
"github.com/purpleidea/mgmt/event"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/prometheus"
"github.com/purpleidea/mgmt/resources"
"github.com/purpleidea/mgmt/util"
multierr "github.com/hashicorp/go-multierror"
errwrap "github.com/pkg/errors"
"golang.org/x/time/rate"
)
// GetTimestamp returns the timestamp of a vertex
func (v *Vertex) GetTimestamp() int64 {
return v.timestamp
// SentinelErr is a sentinel error type that wraps an arbitrary error.
type SentinelErr struct {
err error
}
// UpdateTimestamp updates the timestamp on a vertex and returns the new value
func (v *Vertex) UpdateTimestamp() int64 {
v.timestamp = time.Now().UnixNano() // update
return v.timestamp
// Error is the required method to fulfill the error type.
func (obj *SentinelErr) Error() string {
return obj.err.Error()
}
// OKTimestamp returns true if this element can run right now.
func (g *Graph) OKTimestamp(v *Vertex) bool {
func (obj *BaseRes) OKTimestamp() bool {
// these are all the vertices pointing TO v, eg: ??? -> v
for _, n := range g.IncomingGraphVertices(v) {
for _, n := range obj.Graph.IncomingGraphVertices(obj.Vertex) {
// if the vertex has a greater timestamp than any pre-req (n)
// then we can't run right now...
// if they're equal (eg: on init of 0) then we also can't run
// b/c we should let our pre-req's go first...
x, y := v.GetTimestamp(), n.GetTimestamp()
if g.Flags.Debug {
log.Printf("%s[%s]: OKTimestamp: (%v) >= %s[%s](%v): !%v", v.Kind(), v.GetName(), x, n.Kind(), n.GetName(), y, x >= y)
x, y := obj.Timestamp(), VtoR(n).Timestamp()
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: OKTimestamp: (%v) >= %s(%v): !%v", obj, x, n, y, x >= y)
}
if x >= y {
return false
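To make the ordering rule above concrete, here is a toy illustration (not project code): a vertex may only run once its own timestamp is strictly less than every pre-requisite's timestamp, so on init (all zeros) the pre-requisites always go first.

okTimestamp := func(mine int64, prereqs []int64) bool {
	for _, t := range prereqs {
		if mine >= t { // equal (eg: both 0 on init) also blocks us
			return false
		}
	}
	return true
}
fmt.Println(okTimestamp(0, []int64{0}))  // false: let the pre-req run first
fmt.Println(okTimestamp(0, []int64{42})) // true: the pre-req already ran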
@@ -65,36 +65,35 @@ func (g *Graph) OKTimestamp(v *Vertex) bool {
}
// Poke tells nodes after me in the dependency graph that they need to refresh.
func (g *Graph) Poke(v *Vertex) error {
func (obj *BaseRes) Poke() error {
// if we're pausing (or exiting) then we should suspend poke's so that
// the graph doesn't go on running forever until it's completely done!
// this is an optional feature which we can do by default on user exit
if g.fastPause {
if obj.Graph.FastPause {
return nil // TODO: should this be an error instead?
}
var wg sync.WaitGroup
// these are all the vertices pointing AWAY FROM v, eg: v -> ???
for _, n := range g.OutgoingGraphVertices(v) {
for _, n := range obj.Graph.OutgoingGraphVertices(obj.Vertex) {
// we can skip this poke if resource hasn't done work yet... it
// needs to be poked if already running, or not running though!
// TODO: does this need an || activity flag?
if n.Res.GetState() != resources.ResStateProcess {
if g.Flags.Debug {
log.Printf("%s[%s]: Poke: %s[%s]", v.Kind(), v.GetName(), n.Kind(), n.GetName())
if VtoR(n).GetState() != ResStateProcess {
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: Poke: %s", obj, n)
}
wg.Add(1)
go func(nn *Vertex) error {
go func(nn pgraph.Vertex) error {
defer wg.Done()
//edge := g.Adjacency[v][nn] // lookup
//edge := obj.Graph.adjacency[v][nn] // lookup
//notify := edge.Notify && edge.Refresh()
return nn.SendEvent(event.EventPoke, nil)
return VtoR(nn).SendEvent(event.EventPoke, nil)
}(n)
} else {
if g.Flags.Debug {
log.Printf("%s[%s]: Poke: %s[%s]: Skipped!", v.Kind(), v.GetName(), n.Kind(), n.GetName())
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: Poke: %s: Skipped!", obj, n)
}
}
}
@@ -104,30 +103,30 @@ func (g *Graph) Poke(v *Vertex) error {
}
// BackPoke pokes the pre-requisites that are stale and need to run before I can run.
func (g *Graph) BackPoke(v *Vertex) {
func (obj *BaseRes) BackPoke() {
var wg sync.WaitGroup
// these are all the vertices pointing TO v, eg: ??? -> v
for _, n := range g.IncomingGraphVertices(v) {
x, y, s := v.GetTimestamp(), n.GetTimestamp(), n.Res.GetState()
for _, n := range obj.Graph.IncomingGraphVertices(obj.Vertex) {
x, y, s := obj.Timestamp(), VtoR(n).Timestamp(), VtoR(n).GetState()
// If the parent timestamp needs poking AND it's not running
// Process, then poke it. If the parent is in ResStateProcess it
// means that an event is pending, so we'll be expecting a poke
// back soon, so we can safely discard the extra parent poke...
// TODO: implement a stateLT (less than) to tell if something
// happens earlier in the state cycle and that doesn't wrap nil
if x >= y && (s != resources.ResStateProcess && s != resources.ResStateCheckApply) {
if g.Flags.Debug {
log.Printf("%s[%s]: BackPoke: %s[%s]", v.Kind(), v.GetName(), n.Kind(), n.GetName())
if x >= y && (s != ResStateProcess && s != ResStateCheckApply) {
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: BackPoke: %s", obj, n)
}
wg.Add(1)
go func(nn *Vertex) error {
go func(nn pgraph.Vertex) error {
defer wg.Done()
return nn.SendEvent(event.EventBackPoke, nil)
return VtoR(nn).SendEvent(event.EventBackPoke, nil)
}(n)
} else {
if g.Flags.Debug {
log.Printf("%s[%s]: BackPoke: %s[%s]: Skipped!", v.Kind(), v.GetName(), n.Kind(), n.GetName())
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: BackPoke: %s: Skipped!", obj, n)
}
}
}
@@ -137,10 +136,11 @@ func (g *Graph) BackPoke(v *Vertex) {
// RefreshPending determines if any previous nodes have a refresh pending here.
// If this is true, it means I am expected to apply a refresh when I next run.
func (g *Graph) RefreshPending(v *Vertex) bool {
func (obj *BaseRes) RefreshPending() bool {
var refresh bool
for _, edge := range g.IncomingGraphEdges(v) {
for _, edge := range obj.Graph.IncomingGraphEdges(obj.Vertex) {
// if we asked for a notify *and* if one is pending!
edge := edge.(*Edge) // panic if wrong
if edge.Notify && edge.Refresh() {
refresh = true
break
@@ -150,8 +150,9 @@ func (g *Graph) RefreshPending(v *Vertex) bool {
}
// SetUpstreamRefresh sets the refresh value to any upstream vertices.
func (g *Graph) SetUpstreamRefresh(v *Vertex, b bool) {
for _, edge := range g.IncomingGraphEdges(v) {
func (obj *BaseRes) SetUpstreamRefresh(b bool) {
for _, edge := range obj.Graph.IncomingGraphEdges(obj.Vertex) {
edge := edge.(*Edge) // panic if wrong
if edge.Notify {
edge.SetRefresh(b)
}
@@ -159,8 +160,9 @@ func (g *Graph) SetUpstreamRefresh(v *Vertex, b bool) {
}
// SetDownstreamRefresh sets the refresh value to any downstream vertices.
func (g *Graph) SetDownstreamRefresh(v *Vertex, b bool) {
for _, edge := range g.OutgoingGraphEdges(v) {
func (obj *BaseRes) SetDownstreamRefresh(b bool) {
for _, edge := range obj.Graph.OutgoingGraphEdges(obj.Vertex) {
edge := edge.(*Edge) // panic if wrong
// if we asked for a notify *and* if one is pending!
if edge.Notify {
edge.SetRefresh(b)
@@ -169,25 +171,24 @@ func (g *Graph) SetDownstreamRefresh(v *Vertex, b bool) {
}
// Process is the primary function to execute for a particular vertex in the graph.
func (g *Graph) Process(v *Vertex) error {
obj := v.Res
if g.Flags.Debug {
log.Printf("%s[%s]: Process()", obj.Kind(), obj.GetName())
func (obj *BaseRes) Process() error {
if obj.debug {
log.Printf("%s: Process()", obj)
}
// FIXME: should these SetState methods be here or after the sema code?
defer obj.SetState(resources.ResStateNil) // reset state when finished
obj.SetState(resources.ResStateProcess)
defer obj.SetState(ResStateNil) // reset state when finished
obj.SetState(ResStateProcess)
// is it okay to run dependency wise right now?
// if not, that's okay because when the dependency runs, it will poke
// us back and we will run if needed then!
if !g.OKTimestamp(v) {
go g.BackPoke(v)
if !obj.OKTimestamp() {
go obj.BackPoke()
return nil
}
// timestamp must be okay...
if g.Flags.Debug {
log.Printf("%s[%s]: OKTimestamp(%v)", obj.Kind(), obj.GetName(), v.GetTimestamp())
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: OKTimestamp(%v)", obj, obj.Timestamp())
}
// semaphores!
@@ -198,23 +199,23 @@ func (g *Graph) Process(v *Vertex) error {
// The exception is that semaphores with a zero count will always block!
// TODO: Add a close mechanism to close/unblock zero count semaphores...
semas := obj.Meta().Sema
if g.Flags.Debug && len(semas) > 0 {
log.Printf("%s[%s]: Sema: P(%s)", obj.Kind(), obj.GetName(), strings.Join(semas, ", "))
if obj.debug && len(semas) > 0 {
log.Printf("%s: Sema: P(%s)", obj, strings.Join(semas, ", "))
}
if err := g.SemaLock(semas); err != nil { // lock
if err := obj.Graph.SemaLock(semas); err != nil { // lock
// NOTE: in practice, this might not ever be truly necessary...
return fmt.Errorf("shutdown of semaphores")
}
defer g.SemaUnlock(semas) // unlock
if g.Flags.Debug && len(semas) > 0 {
defer log.Printf("%s[%s]: Sema: V(%s)", obj.Kind(), obj.GetName(), strings.Join(semas, ", "))
defer obj.Graph.SemaUnlock(semas) // unlock
if obj.debug && len(semas) > 0 {
defer log.Printf("%s: Sema: V(%s)", obj, strings.Join(semas, ", "))
}
var ok = true
var applied = false // did we run an apply?
// connect any senders to receivers and detect if values changed
if updated, err := obj.SendRecv(obj); err != nil {
if updated, err := obj.SendRecv(obj.Res); err != nil {
return errwrap.Wrapf(err, "could not SendRecv in Process")
} else if len(updated) > 0 {
for _, changed := range updated {
@@ -230,16 +231,16 @@ func (g *Graph) Process(v *Vertex) error {
var checkOK bool
var err error
if g.Flags.Debug {
log.Printf("%s[%s]: CheckApply(%t)", obj.Kind(), obj.GetName(), !noop)
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: CheckApply(%t)", obj, !noop)
}
// lookup the refresh (notification) variable
refresh = g.RefreshPending(v) // do i need to perform a refresh?
refresh = obj.RefreshPending() // do i need to perform a refresh?
obj.SetRefresh(refresh) // tell the resource
// changes can occur after this...
obj.SetState(resources.ResStateCheckApply)
obj.SetState(ResStateCheckApply)
// check cached state, to skip CheckApply; can't skip if refreshing
if !refresh && obj.IsStateOK() {
@@ -254,38 +255,37 @@ func (g *Graph) Process(v *Vertex) error {
// run the CheckApply!
} else {
// if this fails, don't UpdateTimestamp()
checkOK, err = obj.CheckApply(!noop)
checkOK, err = obj.Res.CheckApply(!noop)
if promErr := obj.Prometheus().UpdateCheckApplyTotal(obj.Kind(), !noop, !checkOK, err != nil); promErr != nil {
if promErr := obj.Data().Prometheus.UpdateCheckApplyTotal(obj.GetKind(), !noop, !checkOK, err != nil); promErr != nil {
// TODO: how to error correctly
log.Printf("%s[%s]: Prometheus.UpdateCheckApplyTotal() errored: %v", v.Kind(), v.GetName(), err)
log.Printf("%s: Prometheus.UpdateCheckApplyTotal() errored: %v", obj, err)
}
// TODO: Can the `Poll` converged timeout tracking be a
// more general method for all converged timeouts? this
// would simplify the resources by removing boilerplate
if v.Meta().Poll > 0 {
if obj.Meta().Poll > 0 {
if !checkOK { // something changed, restart timer
cuid, _, _ := v.Res.ConvergerUIDs() // get the converger uid used to report status
cuid.ResetTimer() // activity!
if g.Flags.Debug {
log.Printf("%s[%s]: Converger: ResetTimer", obj.Kind(), obj.GetName())
obj.cuid.ResetTimer() // activity!
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: Converger: ResetTimer", obj)
}
}
}
}
if checkOK && err != nil { // should never return this way
log.Fatalf("%s[%s]: CheckApply(): %t, %+v", obj.Kind(), obj.GetName(), checkOK, err)
log.Fatalf("%s: CheckApply(): %t, %+v", obj, checkOK, err)
}
if g.Flags.Debug {
log.Printf("%s[%s]: CheckApply(): %t, %v", obj.Kind(), obj.GetName(), checkOK, err)
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: CheckApply(): %t, %v", obj, checkOK, err)
}
// if CheckApply ran without noop and without error, state should be good
if !noop && err == nil { // aka !noop || checkOK
obj.StateOK(true) // reset
if refresh {
g.SetUpstreamRefresh(v, false) // refresh happened, clear the request
obj.SetUpstreamRefresh(false) // refresh happened, clear the request
obj.SetRefresh(false)
}
}
@@ -311,14 +311,14 @@ func (g *Graph) Process(v *Vertex) error {
}
if activity { // add refresh flag to downstream edges...
g.SetDownstreamRefresh(v, true)
obj.SetDownstreamRefresh(true)
}
// update this timestamp *before* we poke or the poked
// nodes might fail due to having a too old timestamp!
v.UpdateTimestamp() // this was touched...
obj.SetState(resources.ResStatePoking) // can't cancel parent poke
if err := g.Poke(v); err != nil {
obj.UpdateTimestamp() // this was touched...
obj.SetState(ResStatePoking) // can't cancel parent poke
if err := obj.Poke(); err != nil {
return errwrap.Wrapf(err, "the Poke() failed")
}
}
@@ -326,24 +326,11 @@ func (g *Graph) Process(v *Vertex) error {
return errwrap.Wrapf(err, "could not Process() successfully")
}
// SentinelErr is a sentinel error type that wraps an arbitrary error.
type SentinelErr struct {
err error
}
// Error is the required method to fulfill the error type.
func (obj *SentinelErr) Error() string {
return obj.err.Error()
}
// innerWorker is the CheckApply runner that reads from processChan.
// TODO: would it be better if this was a method on BaseRes that took in *Graph?
func (g *Graph) innerWorker(v *Vertex) {
obj := v.Res
func (obj *BaseRes) innerWorker() {
running := false
done := make(chan struct{})
playback := false // do we need to run another one?
_, wcuid, pcuid := obj.ConvergerUIDs() // get extra cuids (worker, process)
waiting := false
var timer = time.NewTimer(time.Duration(math.MaxInt64)) // longest duration
@@ -351,9 +338,9 @@ func (g *Graph) innerWorker(v *Vertex) {
<-timer.C // unnecessary, shouldn't happen
}
var delay = time.Duration(v.Meta().Delay) * time.Millisecond
var retry = v.Meta().Retry // number of tries left, -1 for infinite
var limiter = rate.NewLimiter(v.Meta().Limit, v.Meta().Burst)
var delay = time.Duration(obj.Meta().Delay) * time.Millisecond
var retry = obj.Meta().Retry // number of tries left, -1 for infinite
var limiter = rate.NewLimiter(obj.Meta().Limit, obj.Meta().Burst)
limited := false
wg := &sync.WaitGroup{} // wait for Process routine to exit
@@ -361,49 +348,49 @@ func (g *Graph) innerWorker(v *Vertex) {
Loop:
for {
select {
case ev, ok := <-obj.ProcessChan(): // must use like this
case ev, ok := <-obj.processChan: // must use like this
if !ok { // processChan closed, let's exit
break Loop // no event, so no ack!
}
if v.Res.Meta().Poll == 0 { // skip for polling
wcuid.SetConverged(false)
if obj.Meta().Poll == 0 { // skip for polling
obj.wcuid.SetConverged(false)
}
// if process started, but no action yet, skip!
if v.Res.GetState() == resources.ResStateProcess {
if g.Flags.Debug {
log.Printf("%s[%s]: Skipped event!", v.Kind(), v.GetName())
if obj.GetState() == ResStateProcess {
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: Skipped event!", obj)
}
ev.ACK() // ready for next message
v.Res.QuiesceGroup().Done()
obj.quiesceGroup.Done()
continue
}
// if running, we skip running a new execution!
// if waiting, we skip running a new execution!
if running || waiting {
if g.Flags.Debug {
log.Printf("%s[%s]: Playback added!", v.Kind(), v.GetName())
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: Playback added!", obj)
}
playback = true
ev.ACK() // ready for next message
v.Res.QuiesceGroup().Done()
obj.quiesceGroup.Done()
continue
}
// catch invalid rates
if v.Meta().Burst == 0 && !(v.Meta().Limit == rate.Inf) { // blocked
e := fmt.Errorf("%s[%s]: Permanently limited (rate != Inf, burst: 0)", v.Kind(), v.GetName())
if obj.Meta().Burst == 0 && !(obj.Meta().Limit == rate.Inf) { // blocked
e := fmt.Errorf("%s: Permanently limited (rate != Inf, burst: 0)", obj)
ev.ACK() // ready for next message
v.Res.QuiesceGroup().Done()
v.SendEvent(event.EventExit, &SentinelErr{e})
obj.quiesceGroup.Done()
obj.SendEvent(event.EventExit, &SentinelErr{e})
continue
}
// rate limit
// FIXME: consider skipping rate limit check if
// the event is a poke instead of a watch event
if !limited && !(v.Meta().Limit == rate.Inf) { // skip over the playback event...
if !limited && !(obj.Meta().Limit == rate.Inf) { // skip over the playback event...
now := time.Now()
r := limiter.ReserveN(now, 1) // one event
// r.OK() seems to always be true here!
@@ -411,12 +398,12 @@ Loop:
if d > 0 { // delay
limited = true
playback = true
log.Printf("%s[%s]: Limited (rate: %v/sec, burst: %d, next: %v)", v.Kind(), v.GetName(), v.Meta().Limit, v.Meta().Burst, d)
log.Printf("%s: Limited (rate: %v/sec, burst: %d, next: %v)", obj, obj.Meta().Limit, obj.Meta().Burst, d)
// start the timer...
timer.Reset(d)
waiting = true // waiting for retry timer
ev.ACK()
v.Res.QuiesceGroup().Done()
obj.quiesceGroup.Done()
continue
} // otherwise, we run directly!
}
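The rate limiting above leans on golang.org/x/time/rate: it reserves one event and, if the reservation carries a delay, postpones the run instead of executing immediately. A small self-contained sketch of that pattern (it sleeps for simplicity, whereas the worker loop above resets a timer and replays the event):

package main

import (
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Every(time.Second), 1) // 1 event/sec, burst of 1
	for i := 0; i < 3; i++ {
		now := time.Now()
		r := limiter.ReserveN(now, 1) // reserve one event, like the worker loop above
		if d := r.DelayFrom(now); d > 0 {
			fmt.Printf("limited, waiting %v before event %d\n", d, i)
			time.Sleep(d)
		}
		fmt.Printf("running event %d\n", i)
	}
}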
@@ -425,60 +412,60 @@ Loop:
wg.Add(1)
running = true
go func(ev *event.Event) {
pcuid.SetConverged(false) // "block" Process
obj.pcuid.SetConverged(false) // "block" Process
defer wg.Done()
if e := g.Process(v); e != nil {
if e := obj.Process(); e != nil {
playback = true
log.Printf("%s[%s]: CheckApply errored: %v", v.Kind(), v.GetName(), e)
log.Printf("%s: CheckApply errored: %v", obj, e)
if retry == 0 {
if err := obj.Prometheus().UpdateState(fmt.Sprintf("%v[%v]", v.Kind(), v.GetName()), v.Kind(), prometheus.ResStateHardFail); err != nil {
if err := obj.Data().Prometheus.UpdateState(obj.String(), obj.GetKind(), prometheus.ResStateHardFail); err != nil {
// TODO: how to error this?
log.Printf("%s[%s]: Prometheus.UpdateState() errored: %v", v.Kind(), v.GetName(), err)
log.Printf("%s: Prometheus.UpdateState() errored: %v", obj, err)
}
// wrap the error in the sentinel
v.Res.QuiesceGroup().Done() // before the Wait that happens in SendEvent!
v.SendEvent(event.EventExit, &SentinelErr{e})
obj.quiesceGroup.Done() // before the Wait that happens in SendEvent!
obj.SendEvent(event.EventExit, &SentinelErr{e})
return
}
if retry > 0 { // don't decrement the -1
retry--
}
if err := obj.Prometheus().UpdateState(fmt.Sprintf("%v[%v]", v.Kind(), v.GetName()), v.Kind(), prometheus.ResStateSoftFail); err != nil {
if err := obj.Data().Prometheus.UpdateState(obj.String(), obj.GetKind(), prometheus.ResStateSoftFail); err != nil {
// TODO: how to error this?
log.Printf("%s[%s]: Prometheus.UpdateState() errored: %v", v.Kind(), v.GetName(), err)
log.Printf("%s: Prometheus.UpdateState() errored: %v", obj, err)
}
log.Printf("%s[%s]: CheckApply: Retrying after %.4f seconds (%d left)", v.Kind(), v.GetName(), delay.Seconds(), retry)
log.Printf("%s: CheckApply: Retrying after %.4f seconds (%d left)", obj, delay.Seconds(), retry)
// start the timer...
timer.Reset(delay)
waiting = true // waiting for retry timer
// don't v.Res.QuiesceGroup().Done() b/c
// don't obj.quiesceGroup.Done() b/c
// the timer is running and it can exit!
return
}
retry = v.Meta().Retry // reset on success
retry = obj.Meta().Retry // reset on success
close(done) // trigger
}(ev)
ev.ACK() // sync (now mostly useless)
case <-timer.C:
if v.Res.Meta().Poll == 0 { // skip for polling
wcuid.SetConverged(false)
if obj.Meta().Poll == 0 { // skip for polling
obj.wcuid.SetConverged(false)
}
waiting = false
if !timer.Stop() {
//<-timer.C // blocks, docs are wrong!
}
log.Printf("%s[%s]: CheckApply delay expired!", v.Kind(), v.GetName())
log.Printf("%s: CheckApply delay expired!", obj)
close(done)
// a CheckApply run (with possibly retry pause) finished
case <-done:
if v.Res.Meta().Poll == 0 { // skip for polling
wcuid.SetConverged(false)
if obj.Meta().Poll == 0 { // skip for polling
obj.wcuid.SetConverged(false)
}
if g.Flags.Debug {
log.Printf("%s[%s]: CheckApply finished!", v.Kind(), v.GetName())
if b, ok := obj.Graph.Value("debug"); ok && util.Bool(b) {
log.Printf("%s: CheckApply finished!", obj)
}
done = make(chan struct{}) // reset
// re-send this event, to trigger a CheckApply()
@@ -488,21 +475,21 @@ Loop:
// TODO: can this experience indefinite postponement ?
// see: https://github.com/golang/go/issues/11506
// pause or exit is in process if not quiescing!
if !v.Res.IsQuiescing() {
if !obj.quiescing {
playback = false
v.Res.QuiesceGroup().Add(1) // lock around it, b/c still running...
obj.quiesceGroup.Add(1) // lock around it, b/c still running...
go func() {
obj.Event() // replay a new event
v.Res.QuiesceGroup().Done()
obj.quiesceGroup.Done()
}()
}
}
running = false
pcuid.SetConverged(true) // "unblock" Process
v.Res.QuiesceGroup().Done()
obj.pcuid.SetConverged(true) // "unblock" Process
obj.quiesceGroup.Done()
case <-wcuid.ConvergedTimer():
wcuid.SetConverged(true) // converged!
case <-obj.wcuid.ConvergedTimer():
obj.wcuid.SetConverged(true) // converged!
continue
}
}
@@ -513,22 +500,21 @@ Loop:
// Worker is the common run frontend of the vertex. It handles all of the retry
// and retry delay common code, and ultimately returns the final status of this
// vertex execution.
func (g *Graph) Worker(v *Vertex) error {
func (obj *BaseRes) Worker() error {
// listen for chan events from Watch() and run
// the Process() function when they're received
// this avoids us having to pass the data into
// the Watch() function about which graph it is
// running on, which isolates things nicely...
obj := v.Res
if g.Flags.Debug {
log.Printf("%s[%s]: Worker: Running", v.Kind(), v.GetName())
defer log.Printf("%s[%s]: Worker: Stopped", v.Kind(), v.GetName())
if obj.debug {
log.Printf("%s: Worker: Running", obj)
defer log.Printf("%s: Worker: Stopped", obj)
}
// run the init (should match 1-1 with Close function)
if err := obj.Init(); err != nil {
if err := obj.Res.Init(); err != nil {
obj.ProcessExit()
// always exit the worker function by finishing with Close()
if e := obj.Close(); e != nil {
if e := obj.Res.Close(); e != nil {
err = multierr.Append(err, e) // list of errors
}
return errwrap.Wrapf(err, "could not Init() resource")
@@ -538,16 +524,15 @@ func (g *Graph) Worker(v *Vertex) error {
// timeout, we could inappropriately converge mid-apply!
// avoid this by blocking convergence with a fake report
// we also add a similar blocker around the worker loop!
_, wcuid, pcuid := obj.ConvergerUIDs() // get extra cuids (worker, process)
// XXX: put these in Init() ?
wcuid.SetConverged(true) // starts off false, and waits for loop timeout
pcuid.SetConverged(true) // starts off true, because it's not running...
// get extra cuids (worker, process)
obj.wcuid.SetConverged(true) // starts off false, and waits for loop timeout
obj.pcuid.SetConverged(true) // starts off true, because it's not running...
wg := obj.ProcessSync()
wg.Add(1)
obj.processSync.Add(1)
go func() {
defer wg.Done()
g.innerWorker(v)
defer obj.processSync.Done()
obj.innerWorker()
}()
var err error // propagate the error up (this is a permanent BAD error!)
@@ -557,7 +542,7 @@ func (g *Graph) Worker(v *Vertex) error {
// NOTE: we're using the same retry and delay metaparams that CheckApply
// uses. This is for practicality. We can separate them later if needed!
var watchDelay time.Duration
var watchRetry = v.Meta().Retry // number of tries left, -1 for infinite
var watchRetry = obj.Meta().Retry // number of tries left, -1 for infinite
// watch blocks until it ends, & errors to retry
for {
// TODO: do we have to stop the converged-timeout when in this block (perhaps we're in the delay block!)
@@ -580,7 +565,7 @@ func (g *Graph) Worker(v *Vertex) error {
if exit, send := obj.ReadEvent(event); exit != nil {
obj.ProcessExit()
err := *exit // exit err
if e := obj.Close(); err == nil {
if e := obj.Res.Close(); err == nil {
err = e
} else if e != nil {
err = multierr.Append(err, e) // list of errors
@@ -610,7 +595,7 @@ func (g *Graph) Worker(v *Vertex) error {
}
}
timer.Stop() // it's nice to cleanup
log.Printf("%s[%s]: Watch delay expired!", v.Kind(), v.GetName())
log.Printf("%s: Watch delay expired!", obj)
// NOTE: we can avoid the send if running Watch guarantees
// one CheckApply event on startup!
//if pendingSendEvent { // TODO: should this become a list in the future?
@@ -622,13 +607,12 @@ func (g *Graph) Worker(v *Vertex) error {
// TODO: reset the watch retry count after some amount of success
var e error
if v.Res.Meta().Poll > 0 { // poll instead of watching :(
cuid, _, _ := v.Res.ConvergerUIDs() // get the converger uid used to report status
cuid.StartTimer()
e = v.Res.Poll()
cuid.StopTimer() // clean up nicely
if obj.Meta().Poll > 0 { // poll instead of watching :(
obj.cuid.StartTimer()
e = obj.Poll()
obj.cuid.StopTimer() // clean up nicely
} else {
e = v.Res.Watch() // run the watch normally
e = obj.Res.Watch() // run the watch normally
}
if e == nil { // exit signal
err = nil // clean exit
@@ -638,7 +622,7 @@ func (g *Graph) Worker(v *Vertex) error {
err = sentinelErr.err
break // sentinel means, perma-exit
}
log.Printf("%s[%s]: Watch errored: %v", v.Kind(), v.GetName(), e)
log.Printf("%s: Watch errored: %v", obj, e)
if watchRetry == 0 {
err = fmt.Errorf("Permanent watch error: %v", e)
break
@@ -646,8 +630,8 @@ func (g *Graph) Worker(v *Vertex) error {
if watchRetry > 0 { // don't decrement the -1
watchRetry--
}
watchDelay = time.Duration(v.Meta().Delay) * time.Millisecond
log.Printf("%s[%s]: Watch: Retrying after %.4f seconds (%d left)", v.Kind(), v.GetName(), watchDelay.Seconds(), watchRetry)
watchDelay = time.Duration(obj.Meta().Delay) * time.Millisecond
log.Printf("%s: Watch: Retrying after %.4f seconds (%d left)", obj, watchDelay.Seconds(), watchRetry)
// We need to trigger a CheckApply after Watch restarts, so that
// we catch any lost events that happened while down. We do this
// by getting the Watch resource to send one event once it's up!
@@ -656,128 +640,10 @@ func (g *Graph) Worker(v *Vertex) error {
obj.ProcessExit()
// close resource and return possible errors if any
if e := obj.Close(); err == nil {
if e := obj.Res.Close(); err == nil {
err = e
} else if e != nil {
err = multierr.Append(err, e) // list of errors
}
return err
}
// Start is a main kick to start the graph. It goes through in reverse topological
// sort order so that events can't hit un-started vertices.
func (g *Graph) Start(first bool) { // start or continue
log.Printf("State: %v -> %v", g.setState(graphStateStarting), g.getState())
defer log.Printf("State: %v -> %v", g.setState(graphStateStarted), g.getState())
t, _ := g.TopologicalSort()
indegree := g.InDegree() // compute all of the indegree's
reversed := Reverse(t)
wg := &sync.WaitGroup{}
for _, v := range reversed { // run the Setup() for everyone first
// run these in parallel, as long as we wait before continuing
wg.Add(1)
go func(vv *Vertex) {
defer wg.Done()
if !vv.Res.IsWorking() { // if Worker() is not running...
vv.Res.Setup() // initialize some vars in the resource
}
}(v)
}
wg.Wait()
// run through the topological reverse, and start or unpause each vertex
for _, v := range reversed {
// selective poke: here we reduce the number of initial pokes
// to the minimum required to activate every vertex in the
// graph, either by direct action, or by getting poked by a
// vertex that was previously activated. if we poke each vertex
// that has no incoming edges, then we can be sure to reach the
// whole graph. Please note: this may mask certain optimization
// failures, such as any poke limiting code in Poke() or
// BackPoke(). You might want to disable this selective start
// when experimenting with and testing those elements.
// if we are unpausing (since it's not the first run of this
// function) we need to poke to *unpause* every graph vertex,
// and not just selectively the subset with no indegree.
// let the startup code know to poke or not
// this triggers a CheckApply AFTER Watch is Running()
// We *don't* need to also do this to new nodes or nodes that
// are about to get unpaused, because they'll get poked by one
// of the indegree == 0 vertices, and an important aspect of the
// Process() function is that even if the state is correct, it
// will pass through the Poke so that it flows through the DAG.
v.Res.Starter(indegree[v] == 0)
var unpause = true
if !v.Res.IsWorking() { // if Worker() is not running...
unpause = false // doesn't need unpausing on first start
g.wg.Add(1)
// must pass in value to avoid races...
// see: https://ttboj.wordpress.com/2015/07/27/golang-parallelism-issues-causing-too-many-open-files-error/
go func(vv *Vertex) {
defer g.wg.Done()
defer v.Res.Reset()
// TODO: if a sufficient number of workers error,
// should something be done? Should these restart
// after perma-failure if we have a graph change?
log.Printf("%s[%s]: Started", vv.Kind(), vv.GetName())
if err := g.Worker(vv); err != nil { // contains the Watch and CheckApply loops
log.Printf("%s[%s]: Exited with failure: %v", vv.Kind(), vv.GetName(), err)
return
}
log.Printf("%s[%s]: Exited", vv.Kind(), vv.GetName())
}(v)
}
select {
case <-v.Res.Started(): // block until started
case <-v.Res.Stopped(): // we failed on init
// if the resource Init() fails, we don't hang!
}
if unpause { // unpause (if needed)
v.Res.SendEvent(event.EventStart, nil) // sync!
}
}
// we wait for everyone to start before exiting!
}
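The selective poke described in the comments above only sends the initial event to vertices with no incoming edges; everything else is reached through the DAG as execution propagates. A rough standalone illustration of the indegree idea on a plain adjacency map (hypothetical data, not the pgraph API):

package main

import "fmt"

func main() {
	// adjacency: edges go from key -> each listed value (a DAG)
	adj := map[string][]string{
		"a": {"b", "c"},
		"b": {"d"},
		"c": {"d"},
		"d": {},
	}
	indegree := make(map[string]int)
	for v := range adj {
		indegree[v] = 0
	}
	for _, outs := range adj {
		for _, w := range outs {
			indegree[w]++
		}
	}
	for v := range adj {
		if indegree[v] == 0 {
			fmt.Printf("%s: poke on startup (no incoming edges)\n", v)
		}
	}
}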
// Pause sends pause events to the graph in a topological sort order. If you set
// the fastPause argument to true, then it will ask future propagation waves to
// not run through the graph before exiting, and instead will exit much quicker.
func (g *Graph) Pause(fastPause bool) {
log.Printf("State: %v -> %v", g.setState(graphStatePausing), g.getState())
defer log.Printf("State: %v -> %v", g.setState(graphStatePaused), g.getState())
if fastPause {
g.fastPause = true // set flag
}
t, _ := g.TopologicalSort()
for _, v := range t { // squeeze out the events...
v.SendEvent(event.EventPause, nil) // sync
}
g.fastPause = false // reset flag
}
// Exit sends exit events to the graph in a topological sort order.
func (g *Graph) Exit() {
if g == nil { // empty graph that wasn't populated yet
return
}
// FIXME: a second ^C could put this into fast pause, but do it for now!
g.Pause(true) // implement this with pause to avoid duplicating the code
t, _ := g.TopologicalSort()
for _, v := range t { // squeeze out the events...
// turn off the taps...
// XXX: consider instead doing this by closing the Res.events channel instead?
// XXX: do this by sending an exit signal, and then returning
// when we hit the 'default' in the select statement!
// XXX: we can do this to quiesce, but it's not necessary now
v.SendEvent(event.EventExit, nil)
v.Res.WaitGroup().Wait()
}
g.wg.Wait() // for now, this doesn't need to be a separate Wait() method
}


@@ -40,6 +40,7 @@ const (
func init() {
gob.Register(&AugeasRes{})
RegisterResource("augeas", func() Res { return &AugeasRes{} })
}
// AugeasRes is a resource that enables you to use the augeas resource.
@@ -93,7 +94,7 @@ func (obj *AugeasRes) Validate() error {
// Init initiates the resource.
func (obj *AugeasRes) Init() error {
obj.BaseRes.kind = "augeas"
obj.BaseRes.Kind = "augeas"
return obj.BaseRes.Init() // call base init, b/c we're overriding
}
@@ -118,7 +119,7 @@ func (obj *AugeasRes) Watch() error {
for {
if obj.debug {
log.Printf("%s[%s]: Watching: %s", obj.Kind(), obj.GetName(), obj.File) // attempting to watch...
log.Printf("%s: Watching: %s", obj, obj.File) // attempting to watch...
}
select {
@@ -127,10 +128,10 @@ func (obj *AugeasRes) Watch() error {
return nil
}
if err := event.Error; err != nil {
return errwrap.Wrapf(err, "Unknown %s[%s] watcher error", obj.Kind(), obj.GetName())
return errwrap.Wrapf(err, "Unknown %s watcher error", obj)
}
if obj.debug { // don't access event.Body if event.Error isn't nil
log.Printf("%s[%s]: Event(%s): %v", obj.Kind(), obj.GetName(), event.Body.Name, event.Body.Op)
log.Printf("%s: Event(%s): %v", obj, event.Body.Name, event.Body.Op)
}
send = true
obj.StateOK(false) // dirty
@@ -177,7 +178,7 @@ func (obj *AugeasRes) checkApplySet(apply bool, ag *augeas.Augeas, set AugeasSet
// CheckApply method for Augeas resource.
func (obj *AugeasRes) CheckApply(apply bool) (bool, error) {
log.Printf("%s[%s]: CheckApply: %s", obj.Kind(), obj.GetName(), obj.File)
log.Printf("%s: CheckApply: %s", obj, obj.File)
// By default we do not set any option to augeas, we use the defaults.
opts := augeas.None
if obj.Lens != "" {
@@ -225,7 +226,7 @@ func (obj *AugeasRes) CheckApply(apply bool) (bool, error) {
return checkOK, nil
}
log.Printf("%s[%s]: changes needed, saving", obj.Kind(), obj.GetName())
log.Printf("%s: changes needed, saving", obj)
if err = ag.Save(); err != nil {
return false, errwrap.Wrapf(err, "augeas: error while saving augeas values")
}
@@ -247,15 +248,10 @@ type AugeasUID struct {
name string
}
// AutoEdges returns the AutoEdge interface. In this case no autoedges are used.
func (obj *AugeasRes) AutoEdges() AutoEdge {
return nil
}
// UIDs includes all params to make a unique identification of this object.
func (obj *AugeasRes) UIDs() []ResUID {
x := &AugeasUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
BaseUID: BaseUID{Name: obj.GetName(), Kind: obj.GetKind()},
name: obj.Name,
}
return []ResUID{x}
@@ -267,20 +263,19 @@ func (obj *AugeasRes) GroupCmp(r Res) bool {
}
// Compare two resources and return if they are equivalent.
func (obj *AugeasRes) Compare(res Res) bool {
switch res.(type) {
// we can only compare AugeasRes to others of the same resource
case *AugeasRes:
res := res.(*AugeasRes)
func (obj *AugeasRes) Compare(r Res) bool {
// we can only compare AugeasRes to others of the same resource kind
res, ok := r.(*AugeasRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
default:
return false
}
return true
}
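The Compare refactor shown here (and repeated for the other resources below) swaps the old type switch for a single comma-ok type assertion that exits early on a kind mismatch. A self-contained sketch of the pattern, with a made-up Comparable interface and fooRes/barRes types standing in for Res and the concrete resources:

package main

import "fmt"

// Comparable is a stand-in for the Res interface used by the resources.
type Comparable interface {
	Compare(Comparable) bool
}

type fooRes struct{ name string }

// Compare exits early via a comma-ok type assertion instead of a type switch.
func (obj *fooRes) Compare(r Comparable) bool {
	res, ok := r.(*fooRes) // only the same concrete kind can match
	if !ok {
		return false
	}
	return obj.name == res.name // then compare the kind-specific fields
}

type barRes struct{}

func (obj *barRes) Compare(r Comparable) bool { _, ok := r.(*barRes); return ok }

func main() {
	a := &fooRes{name: "x"}
	fmt.Println(a.Compare(&fooRes{name: "x"})) // true
	fmt.Println(a.Compare(&barRes{}))          // false: wrong kind
}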

resources/autoedge.go Normal file

@@ -0,0 +1,148 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"fmt"
"log"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util"
multierr "github.com/hashicorp/go-multierror"
errwrap "github.com/pkg/errors"
)
// The AutoEdge interface is used to implement the autoedges feature.
type AutoEdge interface {
Next() []ResUID // call to get list of edges to add
Test([]bool) bool // call until false
}
// UIDExistsInUIDs wraps the IFF method when used with a list of UID's.
func UIDExistsInUIDs(uid ResUID, uids []ResUID) bool {
for _, u := range uids {
if uid.IFF(u) {
return true
}
}
return false
}
// addEdgesByMatchingUIDS adds edges to the vertex in a graph based on whether
// it matches a uid list.
func addEdgesByMatchingUIDS(g *pgraph.Graph, v pgraph.Vertex, uids []ResUID) []bool {
// search for edges and see what matches!
var result []bool
// loop through each uid, and see if it matches any vertex
for _, uid := range uids {
var found = false
// uid is a ResUID object
for _, vv := range g.Vertices() { // search
if v == vv { // skip self
continue
}
if b, ok := g.Value("debug"); ok && util.Bool(b) {
log.Printf("Compile: AutoEdge: Match: %s with UID: %s", vv, uid)
}
// we must match to an effective UID for the resource,
// that is to say, the name value of a res is a helpful
// handle, but it is not necessarily a unique identity!
// remember, resources can return multiple UID's each!
if UIDExistsInUIDs(uid, VtoR(vv).UIDs()) {
// add edge from: vv -> v
if uid.IsReversed() {
txt := fmt.Sprintf("AutoEdge: %s -> %s", vv, v)
log.Printf("Compile: Adding %s", txt)
edge := &Edge{Name: txt}
g.AddEdge(vv, v, edge)
} else { // edges go the "normal" way, eg: pkg resource
txt := fmt.Sprintf("AutoEdge: %s -> %s", v, vv)
log.Printf("Compile: Adding %s", txt)
edge := &Edge{Name: txt}
g.AddEdge(v, vv, edge)
}
found = true
break
}
}
result = append(result, found)
}
return result
}
// AutoEdges adds the automatic edges to the graph.
func AutoEdges(g *pgraph.Graph) error {
log.Println("Compile: Adding AutoEdges...")
// initially get all of the autoedges to seek out all possible errors
var err error
autoEdgeObjVertexMap := make(map[pgraph.Vertex]AutoEdge)
sorted := g.VerticesSorted()
for _, v := range sorted { // for each vertexes autoedges
if !VtoR(v).Meta().AutoEdge { // is the metaparam true?
continue
}
autoEdgeObj, e := VtoR(v).AutoEdges()
if e != nil {
err = multierr.Append(err, e) // collect all errors
continue
}
if autoEdgeObj == nil {
log.Printf("%s: No auto edges were found!", v)
continue // next vertex
}
autoEdgeObjVertexMap[v] = autoEdgeObj // save for next loop
}
if err != nil {
return errwrap.Wrapf(err, "the auto edges had errors")
}
// now that we're guaranteed error free, we can modify the graph safely
for _, v := range sorted { // stable sort order for determinism in logs
autoEdgeObj, exists := autoEdgeObjVertexMap[v]
if !exists {
continue
}
for { // while the autoEdgeObj has more uids to add...
uids := autoEdgeObj.Next() // get some!
if uids == nil {
log.Printf("%s: The auto edge list is empty!", v)
break // inner loop
}
if b, ok := g.Value("debug"); ok && util.Bool(b) {
log.Println("Compile: AutoEdge: UIDS:")
for i, u := range uids {
log.Printf("Compile: AutoEdge: UID%d: %v", i, u)
}
}
// match and add edges
result := addEdgesByMatchingUIDS(g, v, uids)
// report back, and find out if we should continue
if !autoEdgeObj.Test(result) {
break
}
}
}
return nil
}
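For context, a minimal AutoEdge implementer might hand back a single batch of UIDs and then stop. This hypothetical sketch (not a real resource from the tree, but written against the package's own ResUID and AutoEdge types) shows how Next and Test pair up with the loop above: Next is called until it returns nil, and Test receives one bool per UID reporting whether an edge was added.

// simpleAutoEdge yields a fixed list of UIDs once, then signals completion.
type simpleAutoEdge struct {
	uids []ResUID
	done bool
}

// Next returns the remaining UIDs to try, or nil when there are none left.
func (obj *simpleAutoEdge) Next() []ResUID {
	if obj.done {
		return nil
	}
	return obj.uids
}

// Test receives one bool per UID from Next, telling us which ones matched.
// Returning false tells the caller to stop asking for more.
func (obj *simpleAutoEdge) Test(input []bool) bool {
	obj.done = true // one batch is enough for this sketch
	return false    // no more UIDs to offer
}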


@@ -15,49 +15,52 @@
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
package resources
import (
"fmt"
"log"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util"
errwrap "github.com/pkg/errors"
)
// AutoGrouper is the required interface to implement for an autogroup algorithm
// AutoGrouper is the required interface to implement for an autogroup algorithm.
type AutoGrouper interface {
// listed in the order these are typically called in...
name() string // friendly identifier
init(*Graph) error // only call once
vertexNext() (*Vertex, *Vertex, error) // mostly algorithmic
vertexCmp(*Vertex, *Vertex) error // can we merge these ?
vertexMerge(*Vertex, *Vertex) (*Vertex, error) // vertex merge fn to use
edgeMerge(*Edge, *Edge) *Edge // edge merge fn to use
init(*pgraph.Graph) error // only call once
vertexNext() (pgraph.Vertex, pgraph.Vertex, error) // mostly algorithmic
vertexCmp(pgraph.Vertex, pgraph.Vertex) error // can we merge these ?
vertexMerge(pgraph.Vertex, pgraph.Vertex) (pgraph.Vertex, error) // vertex merge fn to use
edgeMerge(pgraph.Edge, pgraph.Edge) pgraph.Edge // edge merge fn to use
vertexTest(bool) (bool, error) // call until false
}
// baseGrouper is the base type for implementing the AutoGrouper interface
// baseGrouper is the base type for implementing the AutoGrouper interface.
type baseGrouper struct {
graph *Graph // store a pointer to the graph
vertices []*Vertex // cached list of vertices
graph *pgraph.Graph // store a pointer to the graph
vertices []pgraph.Vertex // cached list of vertices
i int
j int
done bool
}
// name provides a friendly name for the logs to see
// name provides a friendly name for the logs to see.
func (ag *baseGrouper) name() string {
return "baseGrouper"
}
// init is called only once and before using other AutoGrouper interface methods
// the name method is the only exception: call it any time without side effects!
func (ag *baseGrouper) init(g *Graph) error {
func (ag *baseGrouper) init(g *pgraph.Graph) error {
if ag.graph != nil {
return fmt.Errorf("the init method has already been called")
}
ag.graph = g // pointer
ag.vertices = ag.graph.GetVerticesSorted() // cache in deterministic order!
ag.vertices = ag.graph.VerticesSorted() // cache in deterministic order!
ag.i = 0
ag.j = 0
if len(ag.vertices) == 0 { // empty graph
@@ -71,7 +74,7 @@ func (ag *baseGrouper) init(g *Graph) error {
// an intelligent algorithm would selectively offer only valid pairs of vertices
// these should satisfy logical grouping requirements for the autogroup designs!
// the desired algorithms can override, but keep this method as a base iterator!
func (ag *baseGrouper) vertexNext() (v1, v2 *Vertex, err error) {
func (ag *baseGrouper) vertexNext() (v1, v2 pgraph.Vertex, err error) {
// this does a for v... { for w... { return v, w }} but stepwise!
l := len(ag.vertices)
if ag.i < l {
@@ -106,48 +109,49 @@ func (ag *baseGrouper) vertexNext() (v1, v2 *Vertex, err error) {
return
}
func (ag *baseGrouper) vertexCmp(v1, v2 *Vertex) error {
func (ag *baseGrouper) vertexCmp(v1, v2 pgraph.Vertex) error {
if v1 == nil || v2 == nil {
return fmt.Errorf("the vertex is nil")
}
if v1 == v2 { // skip yourself
return fmt.Errorf("the vertices are the same")
}
if v1.Kind() != v2.Kind() { // we must group similar kinds
if VtoR(v1).GetKind() != VtoR(v2).GetKind() { // we must group similar kinds
// TODO: maybe future resources won't need this limitation?
return fmt.Errorf("the two resources aren't the same kind")
}
// someone doesn't want to group!
if !v1.Meta().AutoGroup || !v2.Meta().AutoGroup {
if !VtoR(v1).Meta().AutoGroup || !VtoR(v2).Meta().AutoGroup {
return fmt.Errorf("one of the autogroup flags is false")
}
if v1.Res.IsGrouped() { // already grouped!
if VtoR(v1).IsGrouped() { // already grouped!
return fmt.Errorf("already grouped")
}
if len(v2.Res.GetGroup()) > 0 { // already has children grouped!
if len(VtoR(v2).GetGroup()) > 0 { // already has children grouped!
return fmt.Errorf("already has groups")
}
if !v1.Res.GroupCmp(v2.Res) { // resource groupcmp failed!
if !VtoR(v1).GroupCmp(VtoR(v2)) { // resource groupcmp failed!
return fmt.Errorf("the GroupCmp failed")
}
return nil // success
}
func (ag *baseGrouper) vertexMerge(v1, v2 *Vertex) (v *Vertex, err error) {
func (ag *baseGrouper) vertexMerge(v1, v2 pgraph.Vertex) (v pgraph.Vertex, err error) {
// NOTE: it's important to use w.Res instead of w, b/c
// the w by itself is the *Vertex obj, not the *Res obj
// which is contained within it! They both satisfy the
// Res interface, which is why both will compile! :(
err = v1.Res.GroupRes(v2.Res) // GroupRes skips stupid groupings
err = VtoR(v1).GroupRes(VtoR(v2)) // GroupRes skips stupid groupings
return // success or fail, and no need to merge the actual vertices!
}
func (ag *baseGrouper) edgeMerge(e1, e2 *Edge) *Edge {
func (ag *baseGrouper) edgeMerge(e1, e2 pgraph.Edge) pgraph.Edge {
// FIXME: should we merge the edge.Notify or edge.refresh values?
return e1 // noop
}
// vertexTest processes the results of the grouping for the algorithm to know
// return an error if something went horribly wrong, and bool false to stop
// return an error if something went horribly wrong, and bool false to stop.
func (ag *baseGrouper) vertexTest(b bool) (bool, error) {
// NOTE: this particular baseGrouper version doesn't track what happens
// because since we iterate over every pair, we don't care which merge!
@@ -157,19 +161,20 @@ func (ag *baseGrouper) vertexTest(b bool) (bool, error) {
return true, nil
}
// NonReachabilityGrouper is the most straight-forward algorithm for grouping.
// TODO: this algorithm may not be correct in all cases. replace if needed!
type nonReachabilityGrouper struct {
type NonReachabilityGrouper struct {
baseGrouper // "inherit" what we want, and reimplement the rest
}
func (ag *nonReachabilityGrouper) name() string {
return "nonReachabilityGrouper"
func (ag *NonReachabilityGrouper) name() string {
return "NonReachabilityGrouper"
}
// this algorithm relies on the observation that if there's a path from a to b,
// This algorithm relies on the observation that if there's a path from a to b,
// then they *can't* be merged (b/c of the existing dependency) so therefore we
// merge anything that *doesn't* satisfy this condition or that of the reverse!
func (ag *nonReachabilityGrouper) vertexNext() (v1, v2 *Vertex, err error) {
func (ag *NonReachabilityGrouper) vertexNext() (v1, v2 pgraph.Vertex, err error) {
for {
v1, v2, err = ag.baseGrouper.vertexNext() // get all iterable pairs
if err != nil {
@@ -200,15 +205,15 @@ func (ag *nonReachabilityGrouper) vertexNext() (v1, v2 *Vertex, err error) {
// and then by deleting v2 from the graph. Since more than one edge between two
// vertices is not allowed, duplicate edges are merged as well. an edge merge
// function can be provided if you'd like to control how you merge the edges!
func (g *Graph) VertexMerge(v1, v2 *Vertex, vertexMergeFn func(*Vertex, *Vertex) (*Vertex, error), edgeMergeFn func(*Edge, *Edge) *Edge) error {
func VertexMerge(g *pgraph.Graph, v1, v2 pgraph.Vertex, vertexMergeFn func(pgraph.Vertex, pgraph.Vertex) (pgraph.Vertex, error), edgeMergeFn func(pgraph.Edge, pgraph.Edge) pgraph.Edge) error {
// methodology
// 1) edges between v1 and v2 are removed
//Loop:
for k1 := range g.Adjacency {
for k2 := range g.Adjacency[k1] {
for k1 := range g.Adjacency() {
for k2 := range g.Adjacency()[k1] {
// v1 -> v2 || v2 -> v1
if (k1 == v1 && k2 == v2) || (k1 == v2 && k2 == v1) {
delete(g.Adjacency[k1], k2) // delete map & edge
delete(g.Adjacency()[k1], k2) // delete map & edge
// NOTE: if we assume this is a DAG, then we can
// assume only v1 -> v2 OR v2 -> v1 exists, and
// we can break out of these loops immediately!
@@ -220,10 +225,10 @@ func (g *Graph) VertexMerge(v1, v2 *Vertex, vertexMergeFn func(*Vertex, *Vertex)
// 2) edges that point towards v2 from X now point to v1 from X (no dupes)
for _, x := range g.IncomingGraphVertices(v2) { // all to vertex v (??? -> v)
e := g.Adjacency[x][v2] // previous edge
e := g.Adjacency()[x][v2] // previous edge
r := g.Reachability(x, v1)
// merge e with ex := g.Adjacency[x][v1] if it exists!
if ex, exists := g.Adjacency[x][v1]; exists && edgeMergeFn != nil && len(r) == 0 {
// merge e with ex := g.Adjacency()[x][v1] if it exists!
if ex, exists := g.Adjacency()[x][v1]; exists && edgeMergeFn != nil && len(r) == 0 {
e = edgeMergeFn(e, ex)
}
if len(r) == 0 { // if not reachable, add it
@@ -236,21 +241,21 @@ func (g *Graph) VertexMerge(v1, v2 *Vertex, vertexMergeFn func(*Vertex, *Vertex)
continue
}
// this edge is from: prev, to: next
ex, _ := g.Adjacency[prev][next] // get
ex, _ := g.Adjacency()[prev][next] // get
ex = edgeMergeFn(ex, e)
g.Adjacency[prev][next] = ex // set
g.Adjacency()[prev][next] = ex // set
prev = next
}
}
delete(g.Adjacency[x], v2) // delete old edge
delete(g.Adjacency()[x], v2) // delete old edge
}
// 3) edges that point from v2 to X now point from v1 to X (no dupes)
for _, x := range g.OutgoingGraphVertices(v2) { // all from vertex v (v -> ???)
e := g.Adjacency[v2][x] // previous edge
e := g.Adjacency()[v2][x] // previous edge
r := g.Reachability(v1, x)
// merge e with ex := g.Adjacency[v1][x] if it exists!
if ex, exists := g.Adjacency[v1][x]; exists && edgeMergeFn != nil && len(r) == 0 {
// merge e with ex := g.Adjacency()[v1][x] if it exists!
if ex, exists := g.Adjacency()[v1][x]; exists && edgeMergeFn != nil && len(r) == 0 {
e = edgeMergeFn(e, ex)
}
if len(r) == 0 {
@@ -263,13 +268,13 @@ func (g *Graph) VertexMerge(v1, v2 *Vertex, vertexMergeFn func(*Vertex, *Vertex)
continue
}
// this edge is from: prev, to: next
ex, _ := g.Adjacency[prev][next]
ex, _ := g.Adjacency()[prev][next]
ex = edgeMergeFn(ex, e)
g.Adjacency[prev][next] = ex
g.Adjacency()[prev][next] = ex
prev = next
}
}
delete(g.Adjacency[v2], x)
delete(g.Adjacency()[v2], x)
}
// 4) merge and then remove the (now merged/grouped) vertex
@@ -277,7 +282,8 @@ func (g *Graph) VertexMerge(v1, v2 *Vertex, vertexMergeFn func(*Vertex, *Vertex)
if v, err := vertexMergeFn(v1, v2); err != nil {
return err
} else if v != nil { // replace v1 with the "merged" version...
*v1 = *v // TODO: is this safe? (replacing mutexes is undefined!)
//*v1 = *v // TODO: is this safe? (replacing mutexes is undefined!)
v1 = v
}
}
g.DeleteVertex(v2) // remove grouped vertex
@@ -289,8 +295,8 @@ func (g *Graph) VertexMerge(v1, v2 *Vertex, vertexMergeFn func(*Vertex, *Vertex)
return nil // success
}
// autoGroup is the mechanical auto group "runner" that runs the interface spec
func (g *Graph) autoGroup(ag AutoGrouper) chan string {
// autoGroup is the mechanical auto group "runner" that runs the interface spec.
func autoGroup(g *pgraph.Graph, ag AutoGrouper) chan string {
strch := make(chan string) // output log messages here
go func(strch chan string) {
strch <- fmt.Sprintf("Compile: Grouping: Algorithm: %v...", ag.name())
@@ -299,7 +305,7 @@ func (g *Graph) autoGroup(ag AutoGrouper) chan string {
}
for {
var v, w *Vertex
var v, w pgraph.Vertex
v, w, err := ag.vertexNext() // get pair to compare
if err != nil {
log.Fatalf("error running autoGroup(vertexNext): %v", err)
@@ -310,12 +316,12 @@ func (g *Graph) autoGroup(ag AutoGrouper) chan string {
wStr := fmt.Sprintf("%s", w)
if err := ag.vertexCmp(v, w); err != nil { // cmp ?
if g.Flags.Debug {
if b, ok := g.Value("debug"); ok && util.Bool(b) {
strch <- fmt.Sprintf("Compile: Grouping: !GroupCmp for: %s into %s", wStr, vStr)
}
// remove grouped vertex and merge edges (res is safe)
} else if err := g.VertexMerge(v, w, ag.vertexMerge, ag.edgeMerge); err != nil { // merge...
} else if err := VertexMerge(g, v, w, ag.vertexMerge, ag.edgeMerge); err != nil { // merge...
strch <- fmt.Sprintf("Compile: Grouping: !VertexMerge for: %s into %s", wStr, vStr)
} else { // success!
@@ -337,12 +343,12 @@ func (g *Graph) autoGroup(ag AutoGrouper) chan string {
return strch
}
// AutoGroup runs the auto grouping on the graph and prints out log messages
func (g *Graph) AutoGroup() {
// AutoGroup runs the auto grouping on the graph and prints out log messages.
func AutoGroup(g *pgraph.Graph, ag AutoGrouper) {
// receive log messages from channel...
// this allows test cases to avoid printing them when they're unwanted!
// TODO: this algorithm may not be correct in all cases. replace if needed!
for str := range g.autoGroup(&nonReachabilityGrouper{}) {
for str := range autoGroup(g, ag) {
log.Println(str)
}
}
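Since the runner now takes the grouping algorithm as an argument, callers pick it explicitly; the default from before the refactor would now be invoked roughly like this (a sketch, assuming g is an already-built *pgraph.Graph):

AutoGroup(g, &NonReachabilityGrouper{}) // groups compatible vertices in place, logging as it goes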

resources/autogroup_test.go Normal file

@@ -0,0 +1,732 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"fmt"
"reflect"
"sort"
"strings"
"testing"
"time"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util"
)
// NE is a helper function to make testing easier. It creates a new noop edge.
func NE(s string) pgraph.Edge {
obj := &Edge{Name: s}
return obj
}
type testGrouper struct {
// TODO: this algorithm may not be correct in all cases. replace if needed!
NonReachabilityGrouper // "inherit" what we want, and reimplement the rest
}
func (ag *testGrouper) name() string {
return "testGrouper"
}
func (ag *testGrouper) vertexMerge(v1, v2 pgraph.Vertex) (v pgraph.Vertex, err error) {
if err := VtoR(v1).GroupRes(VtoR(v2)); err != nil { // group them first
return nil, err
}
// HACK: update the name so it matches full list of self+grouped
obj := VtoR(v1)
names := strings.Split(obj.GetName(), ",") // load in stored names
for _, n := range obj.GetGroup() {
names = append(names, n.GetName()) // add my contents
}
names = util.StrRemoveDuplicatesInList(names) // remove duplicates
sort.Strings(names)
obj.SetName(strings.Join(names, ","))
return // success or fail, and no need to merge the actual vertices!
}
func (ag *testGrouper) edgeMerge(e1, e2 pgraph.Edge) pgraph.Edge {
edge1 := e1.(*Edge) // panic if wrong
edge2 := e2.(*Edge) // panic if wrong
// HACK: update the name so it makes a union of both names
n1 := strings.Split(edge1.Name, ",") // load
n2 := strings.Split(edge2.Name, ",") // load
names := append(n1, n2...)
names = util.StrRemoveDuplicatesInList(names) // remove duplicates
sort.Strings(names)
return &Edge{Name: strings.Join(names, ",")}
}
// helper function
func runGraphCmp(t *testing.T, g1, g2 *pgraph.Graph) {
AutoGroup(g1, &testGrouper{}) // edits the graph
err := GraphCmp(g1, g2)
if err != nil {
t.Logf(" actual (g1): %v%v", g1, fullPrint(g1))
t.Logf("expected (g2): %v%v", g2, fullPrint(g2))
t.Logf("Cmp error:")
t.Errorf("%v", err)
}
}
type NoopResTest struct {
NoopRes
}
func (obj *NoopResTest) GroupCmp(r Res) bool {
res, ok := r.(*NoopResTest)
if !ok {
return false
}
// TODO: implement this in vertexCmp for *testGrouper instead?
if strings.Contains(res.Name, ",") { // HACK
return false // element to be grouped is already grouped!
}
// group if they start with the same letter! (helpful hack for testing)
return obj.Name[0] == res.Name[0]
}
func NewNoopResTest(name string) *NoopResTest {
obj := &NoopResTest{
NoopRes: NoopRes{
BaseRes: BaseRes{
Name: name,
MetaParams: MetaParams{
AutoGroup: true, // always autogroup
},
},
},
}
return obj
}
// GraphCmp compares the topology of two graphs and returns nil if they're
// equal. It also compares if grouped element groups are identical.
// TODO: port this to use the pgraph.GraphCmp function instead.
func GraphCmp(g1, g2 *pgraph.Graph) error {
if n1, n2 := g1.NumVertices(), g2.NumVertices(); n1 != n2 {
return fmt.Errorf("graph g1 has %d vertices, while g2 has %d", n1, n2)
}
if e1, e2 := g1.NumEdges(), g2.NumEdges(); e1 != e2 {
return fmt.Errorf("graph g1 has %d edges, while g2 has %d", e1, e2)
}
var m = make(map[pgraph.Vertex]pgraph.Vertex) // g1 to g2 vertex correspondence
Loop:
// check vertices
for v1 := range g1.Adjacency() { // for each vertex in g1
l1 := strings.Split(VtoR(v1).GetName(), ",") // make list of everyone's names...
for _, x1 := range VtoR(v1).GetGroup() {
l1 = append(l1, x1.GetName()) // add my contents
}
l1 = util.StrRemoveDuplicatesInList(l1) // remove duplicates
sort.Strings(l1)
// inner loop
for v2 := range g2.Adjacency() { // does it match in g2 ?
l2 := strings.Split(VtoR(v2).GetName(), ",")
for _, x2 := range VtoR(v2).GetGroup() {
l2 = append(l2, x2.GetName())
}
l2 = util.StrRemoveDuplicatesInList(l2) // remove duplicates
sort.Strings(l2)
// does l1 match l2 ?
if ListStrCmp(l1, l2) { // cmp!
m[v1] = v2
continue Loop
}
}
return fmt.Errorf("graph g1, has no match in g2 for: %v", VtoR(v1).GetName())
}
// vertices (and groups) match :)
// check edges
for v1 := range g1.Adjacency() { // for each vertex in g1
v2 := m[v1] // look up in map to get the correspondence
// g1.Adjacency()[v1] corresponds to g2.Adjacency()[v2]
if e1, e2 := len(g1.Adjacency()[v1]), len(g2.Adjacency()[v2]); e1 != e2 {
return fmt.Errorf("graph g1, vertex(%v) has %d edges, while g2, vertex(%v) has %d", VtoR(v1).GetName(), e1, VtoR(v2).GetName(), e2)
}
for vv1, ee1 := range g1.Adjacency()[v1] {
vv2 := m[vv1]
ee1 := ee1.(*Edge)
ee2 := g2.Adjacency()[v2][vv2].(*Edge)
// these are edges from v1 -> vv1 via ee1 (graph 1)
// to cmp to edges from v2 -> vv2 via ee2 (graph 2)
// check: (1) vv1 == vv2 ? (we've already checked this!)
l1 := strings.Split(VtoR(vv1).GetName(), ",") // make list of everyone's names...
for _, x1 := range VtoR(vv1).GetGroup() {
l1 = append(l1, x1.GetName()) // add my contents
}
l1 = util.StrRemoveDuplicatesInList(l1) // remove duplicates
sort.Strings(l1)
l2 := strings.Split(VtoR(vv2).GetName(), ",")
for _, x2 := range VtoR(vv2).GetGroup() {
l2 = append(l2, x2.GetName())
}
l2 = util.StrRemoveDuplicatesInList(l2) // remove duplicates
sort.Strings(l2)
// does l1 match l2 ?
if !ListStrCmp(l1, l2) { // cmp!
return fmt.Errorf("graph g1 and g2 don't agree on: %v and %v", VtoR(vv1).GetName(), VtoR(vv2).GetName())
}
// check: (2) ee1 == ee2
if ee1.Name != ee2.Name {
return fmt.Errorf("graph g1 edge(%v) doesn't match g2 edge(%v)", ee1.Name, ee2.Name)
}
}
}
// check meta parameters
for v1 := range g1.Adjacency() { // for each vertex in g1
for v2 := range g2.Adjacency() { // does it match in g2 ?
s1, s2 := VtoR(v1).Meta().Sema, VtoR(v2).Meta().Sema
sort.Strings(s1)
sort.Strings(s2)
if !reflect.DeepEqual(s1, s2) {
return fmt.Errorf("vertex %s and vertex %s have different semaphores", VtoR(v1).GetName(), VtoR(v2).GetName())
}
}
}
return nil // success!
}
// ListStrCmp compares two lists of strings
func ListStrCmp(a, b []string) bool {
//fmt.Printf("CMP: %v with %v\n", a, b) // debugging
if a == nil && b == nil {
return true
}
if a == nil || b == nil {
return false
}
if len(a) != len(b) {
return false
}
for i := range a {
if a[i] != b[i] {
return false
}
}
return true
}
func fullPrint(g *pgraph.Graph) (str string) {
str += "\n"
for v := range g.Adjacency() {
if semas := VtoR(v).Meta().Sema; len(semas) > 0 {
str += fmt.Sprintf("* v: %v; sema: %v\n", VtoR(v).GetName(), semas)
} else {
str += fmt.Sprintf("* v: %v\n", VtoR(v).GetName())
}
// TODO: add explicit grouping data?
}
for v1 := range g.Adjacency() {
for v2, e := range g.Adjacency()[v1] {
edge := e.(*Edge)
str += fmt.Sprintf("* e: %v -> %v # %v\n", VtoR(v1).GetName(), VtoR(v2).GetName(), edge.Name)
}
}
return
}
func TestDurationAssumptions(t *testing.T) {
var d time.Duration
if (d == 0) != true {
t.Errorf("empty time.Duration is no longer equal to zero")
}
if (d > 0) != false {
t.Errorf("empty time.Duration is now greater than zero")
}
}
// all of the following test cases are laid out with the following semantics:
// * vertices which start with the same single letter are considered "like"
// * "like" elements should be merged
// * vertices can have any integer after their single letter "family" type
// * grouped vertices should have a name with a comma separated list of names
// * edges follow the same conventions about grouping
// empty graph
func TestPgraphGrouping1(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
g2, _ := pgraph.NewGraph("g2") // expected result
runGraphCmp(t, g1, g2)
}
// single vertex
func TestPgraphGrouping2(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{ // grouping to limit variable scope
a1 := NewNoopResTest("a1")
g1.AddVertex(a1)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a1 := NewNoopResTest("a1")
g2.AddVertex(a1)
}
runGraphCmp(t, g1, g2)
}
// two vertices
func TestPgraphGrouping3(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
g1.AddVertex(a1, b1)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
g2.AddVertex(a1, b1)
}
runGraphCmp(t, g1, g2)
}
// two vertices merge
func TestPgraphGrouping4(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
g1.AddVertex(a1, a2)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
g2.AddVertex(a)
}
runGraphCmp(t, g1, g2)
}
// three vertices merge
func TestPgraphGrouping5(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
a3 := NewNoopResTest("a3")
g1.AddVertex(a1, a2, a3)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2,a3")
g2.AddVertex(a)
}
runGraphCmp(t, g1, g2)
}
// three vertices, two merge
func TestPgraphGrouping6(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
g1.AddVertex(a1, a2, b1)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b1 := NewNoopResTest("b1")
g2.AddVertex(a, b1)
}
runGraphCmp(t, g1, g2)
}
// four vertices, three merge
func TestPgraphGrouping7(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
a3 := NewNoopResTest("a3")
b1 := NewNoopResTest("b1")
g1.AddVertex(a1, a2, a3, b1)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2,a3")
b1 := NewNoopResTest("b1")
g2.AddVertex(a, b1)
}
runGraphCmp(t, g1, g2)
}
// four vertices, two&two merge
func TestPgraphGrouping8(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
b2 := NewNoopResTest("b2")
g1.AddVertex(a1, a2, b1, b2)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b := NewNoopResTest("b1,b2")
g2.AddVertex(a, b)
}
runGraphCmp(t, g1, g2)
}
// five vertices, two&three merge
func TestPgraphGrouping9(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
b2 := NewNoopResTest("b2")
b3 := NewNoopResTest("b3")
g1.AddVertex(a1, a2, b1, b2, b3)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b := NewNoopResTest("b1,b2,b3")
g2.AddVertex(a, b)
}
runGraphCmp(t, g1, g2)
}
// three unique vertices
func TestPgraphGrouping10(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
c1 := NewNoopResTest("c1")
g1.AddVertex(a1, b1, c1)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
c1 := NewNoopResTest("c1")
g2.AddVertex(a1, b1, c1)
}
runGraphCmp(t, g1, g2)
}
// three unique vertices, two merge
func TestPgraphGrouping11(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
b2 := NewNoopResTest("b2")
c1 := NewNoopResTest("c1")
g1.AddVertex(a1, b1, b2, c1)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a1 := NewNoopResTest("a1")
b := NewNoopResTest("b1,b2")
c1 := NewNoopResTest("c1")
g2.AddVertex(a1, b, c1)
}
runGraphCmp(t, g1, g2)
}
// simple merge 1
// a1 a2 a1,a2
// \ / >>> | (arrows point downwards)
// b b
func TestPgraphGrouping12(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
e1 := NE("e1")
e2 := NE("e2")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(a2, b1, e2)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b1 := NewNoopResTest("b1")
e := NE("e1,e2")
g2.AddEdge(a, b1, e)
}
runGraphCmp(t, g1, g2)
}
// simple merge 2
// b b
// / \ >>> | (arrows point downwards)
// a1 a2 a1,a2
func TestPgraphGrouping13(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
e1 := NE("e1")
e2 := NE("e2")
g1.AddEdge(b1, a1, e1)
g1.AddEdge(b1, a2, e2)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b1 := NewNoopResTest("b1")
e := NE("e1,e2")
g2.AddEdge(b1, a, e)
}
runGraphCmp(t, g1, g2)
}
// triple merge
// a1 a2 a3 a1,a2,a3
// \ | / >>> | (arrows point downwards)
// b b
func TestPgraphGrouping14(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
a3 := NewNoopResTest("a3")
b1 := NewNoopResTest("b1")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(a2, b1, e2)
g1.AddEdge(a3, b1, e3)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2,a3")
b1 := NewNoopResTest("b1")
e := NE("e1,e2,e3")
g2.AddEdge(a, b1, e)
}
runGraphCmp(t, g1, g2)
}
// chain merge
// a1 a1
// / \ |
// b1 b2 >>> b1,b2 (arrows point downwards)
// \ / |
// c1 c1
func TestPgraphGrouping15(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
b2 := NewNoopResTest("b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
e4 := NE("e4")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(a1, b2, e2)
g1.AddEdge(b1, c1, e3)
g1.AddEdge(b2, c1, e4)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a1 := NewNoopResTest("a1")
b := NewNoopResTest("b1,b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1,e2")
e2 := NE("e3,e4")
g2.AddEdge(a1, b, e1)
g2.AddEdge(b, c1, e2)
}
runGraphCmp(t, g1, g2)
}
// re-attach 1 (outer)
// technically the second possibility is valid too, depending on which order we
// merge edges in, and if we don't filter out any unnecessary edges afterwards!
// a1 a2 a1,a2 a1,a2
// | / | | \
// b1 / >>> b1 OR b1 / (arrows point downwards)
// | / | | /
// c1 c1 c1
func TestPgraphGrouping16(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
c1 := NewNoopResTest("c1")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(b1, c1, e2)
g1.AddEdge(a2, c1, e3)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b1 := NewNoopResTest("b1")
c1 := NewNoopResTest("c1")
e1 := NE("e1,e3")
e2 := NE("e2,e3") // e3 gets "merged through" to BOTH edges!
g2.AddEdge(a, b1, e1)
g2.AddEdge(b1, c1, e2)
}
runGraphCmp(t, g1, g2)
}
// re-attach 2 (inner)
// a1 b2 a1
// | / |
// b1 / >>> b1,b2 (arrows point downwards)
// | / |
// c1 c1
func TestPgraphGrouping17(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
b1 := NewNoopResTest("b1")
b2 := NewNoopResTest("b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(b1, c1, e2)
g1.AddEdge(b2, c1, e3)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a1 := NewNoopResTest("a1")
b := NewNoopResTest("b1,b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1")
e2 := NE("e2,e3")
g2.AddEdge(a1, b, e1)
g2.AddEdge(b, c1, e2)
}
runGraphCmp(t, g1, g2)
}
// re-attach 3 (double)
// similar to "re-attach 1", technically there is a second possibility for this
// a2 a1 b2 a1,a2
// \ | / |
// \ b1 / >>> b1,b2 (arrows point downwards)
// \ | / |
// c1 c1
func TestPgraphGrouping18(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
b1 := NewNoopResTest("b1")
b2 := NewNoopResTest("b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1")
e2 := NE("e2")
e3 := NE("e3")
e4 := NE("e4")
g1.AddEdge(a1, b1, e1)
g1.AddEdge(b1, c1, e2)
g1.AddEdge(a2, c1, e3)
g1.AddEdge(b2, c1, e4)
}
g2, _ := pgraph.NewGraph("g2") // expected result
{
a := NewNoopResTest("a1,a2")
b := NewNoopResTest("b1,b2")
c1 := NewNoopResTest("c1")
e1 := NE("e1,e3")
e2 := NE("e2,e3,e4") // e3 gets "merged through" to BOTH edges!
g2.AddEdge(a, b, e1)
g2.AddEdge(b, c1, e2)
}
runGraphCmp(t, g1, g2)
}
// connected merge 0, (no change!)
// a1 a1
// \ >>> \ (arrows point downwards)
// a2 a2
func TestPgraphGroupingConnected0(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
e1 := NE("e1")
g1.AddEdge(a1, a2, e1)
}
g2, _ := pgraph.NewGraph("g2") // expected result ?
{
a1 := NewNoopResTest("a1")
a2 := NewNoopResTest("a2")
e1 := NE("e1")
g2.AddEdge(a1, a2, e1)
}
runGraphCmp(t, g1, g2)
}
// connected merge 1, (no change!)
// a1 a1
// \ \
// b >>> b (arrows point downwards)
// \ \
// a2 a2
func TestPgraphGroupingConnected1(t *testing.T) {
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewNoopResTest("a1")
b := NewNoopResTest("b")
a2 := NewNoopResTest("a2")
e1 := NE("e1")
e2 := NE("e2")
g1.AddEdge(a1, b, e1)
g1.AddEdge(b, a2, e2)
}
g2, _ := pgraph.NewGraph("g2") // expected result ?
{
a1 := NewNoopResTest("a1")
b := NewNoopResTest("b")
a2 := NewNoopResTest("a2")
e1 := NE("e1")
e2 := NE("e2")
g2.AddEdge(a1, b, e1)
g2.AddEdge(b, a2, e2)
}
runGraphCmp(t, g1, g2)
}

resources/edge.go Normal file

@@ -0,0 +1,56 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
// Edge is a struct that represents a graph's edge.
type Edge struct {
Name string
Notify bool // should we send a refresh notification along this edge?
refresh bool // is there a notify pending for the dest vertex ?
}
// String is a required method of the Edge interface that we must fulfill.
func (obj *Edge) String() string {
return obj.Name
}
// Compare returns true if two edges are equivalent. Otherwise it returns false.
func (obj *Edge) Compare(edge *Edge) bool {
if obj.Name != edge.Name {
return false
}
if obj.Notify != edge.Notify {
return false
}
// FIXME: should we compare this as well?
//if obj.refresh != edge.refresh {
// return false
//}
return true
}
// Refresh returns the pending refresh status of this edge.
func (obj *Edge) Refresh() bool {
return obj.refresh
}
// SetRefresh sets the pending refresh status of this edge.
func (obj *Edge) SetRefresh(b bool) {
obj.refresh = b
}
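As a small usage sketch (illustrative only, not from the tree): Notify is declared when the edge is built, while the unexported refresh flag is toggled through the accessors once a notification is pending for the destination vertex.

e := &Edge{Name: "file -> svc", Notify: true}
if e.Notify {
	e.SetRefresh(true) // a refresh is now pending for the destination
}
fmt.Printf("%s pending=%t\n", e, e.Refresh()) // prints: file -> svc pending=true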


@@ -25,6 +25,7 @@ import (
"log"
"os/exec"
"strings"
"sync"
"syscall"
"github.com/purpleidea/mgmt/util"
@@ -34,6 +35,7 @@ import (
func init() {
gob.Register(&ExecRes{})
RegisterResource("exec", func() Res { return &ExecRes{} })
}
// ExecRes is an exec resource for running commands.
@@ -46,6 +48,9 @@ type ExecRes struct {
WatchShell string `yaml:"watchshell"` // the (optional) shell to use to run the watch cmd
IfCmd string `yaml:"ifcmd"` // the if command to run
IfShell string `yaml:"ifshell"` // the (optional) shell to use to run the if cmd
Output *string // all cmd output, read only, do not set!
Stdout *string // the cmd stdout, read only, do not set!
Stderr *string // the cmd stderr, read only, do not set!
}
// Default returns some sensible defaults for this resource.
@@ -68,7 +73,7 @@ func (obj *ExecRes) Validate() error {
// Init runs some startup code for this resource.
func (obj *ExecRes) Init() error {
obj.BaseRes.kind = "exec"
obj.BaseRes.Kind = "exec"
return obj.BaseRes.Init() // call base init, b/c we're overriding
}
@@ -147,7 +152,7 @@ func (obj *ExecRes) Watch() error {
select {
case text := <-bufioch:
// each time we get a line of output, we loop!
log.Printf("%s[%s]: Watch output: %s", obj.Kind(), obj.GetName(), text)
log.Printf("%s: Watch output: %s", obj, text)
if text != "" {
send = true
obj.StateOK(false) // something made state dirty
@@ -219,7 +224,7 @@ func (obj *ExecRes) CheckApply(apply bool) (bool, error) {
}
// apply portion
log.Printf("%s[%s]: Apply", obj.Kind(), obj.GetName())
log.Printf("%s: Apply", obj)
var cmdName string
var cmdArgs []string
if obj.Shell == "" {
@@ -243,8 +248,12 @@ func (obj *ExecRes) CheckApply(apply bool) (bool, error) {
Pgid: 0,
}
var out bytes.Buffer
cmd.Stdout = &out
var out splitWriter
out.Init()
// from the docs: "If Stdout and Stderr are the same writer, at most one
// goroutine at a time will call Write." so we trick it here!
cmd.Stdout = out.Stdout
cmd.Stderr = out.Stderr
if err := cmd.Start(); err != nil {
return false, errwrap.Wrapf(err, "error starting cmd")
@@ -267,6 +276,21 @@ func (obj *ExecRes) CheckApply(apply bool) (bool, error) {
return false, fmt.Errorf("timeout for cmd")
}
// save in memory for send/recv
// we use pointers to strings to indicate if used or not
if out.Stdout.Activity || out.Stderr.Activity {
str := out.String()
obj.Output = &str
}
if out.Stdout.Activity {
str := out.Stdout.String()
obj.Stdout = &str
}
if out.Stderr.Activity {
str := out.Stderr.String()
obj.Stderr = &str
}
// process the err result from cmd, we process non-zero exits here too!
exitErr, ok := err.(*exec.ExitError) // embeds an os.ProcessState
if err != nil && ok {
@@ -287,10 +311,10 @@ func (obj *ExecRes) CheckApply(apply bool) (bool, error) {
// would be nice, but it would require terminal log output that doesn't
// interleave all the parallel parts which would mix it all up...
if s := out.String(); s == "" {
log.Printf("%s[%s]: Command output is empty!", obj.Kind(), obj.GetName())
log.Printf("%s: Command output is empty!", obj)
} else {
log.Printf("%s[%s]: Command output is:", obj.Kind(), obj.GetName())
log.Printf("%s: Command output is:", obj)
log.Printf(out.String())
}
@@ -311,17 +335,17 @@ type ExecUID struct {
}
// AutoEdges returns the AutoEdge interface. In this case no autoedges are used.
func (obj *ExecRes) AutoEdges() AutoEdge {
func (obj *ExecRes) AutoEdges() (AutoEdge, error) {
// TODO: parse as many exec params to look for auto edges, for example
// the path of the binary in the Cmd variable might be from in a pkg
return nil
return nil, nil
}
// UIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *ExecRes) UIDs() []ResUID {
x := &ExecUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
BaseUID: BaseUID{Name: obj.GetName(), Kind: obj.GetKind()},
Cmd: obj.Cmd,
IfCmd: obj.IfCmd,
// TODO: add more params here
@@ -339,17 +363,19 @@ func (obj *ExecRes) GroupCmp(r Res) bool {
}
// Compare two resources and return if they are equivalent.
func (obj *ExecRes) Compare(res Res) bool {
switch res.(type) {
case *ExecRes:
res := res.(*ExecRes)
func (obj *ExecRes) Compare(r Res) bool {
// we can only compare ExecRes to others of the same resource kind
res, ok := r.(*ExecRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
if obj.Cmd != res.Cmd {
return false
}
@@ -371,9 +397,7 @@ func (obj *ExecRes) Compare(res Res) bool {
if obj.IfShell != res.IfShell {
return false
}
default:
return false
}
return true
}
@@ -396,3 +420,71 @@ func (obj *ExecRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
*obj = ExecRes(raw) // restore from indirection with type conversion!
return nil
}
// splitWriter mimics what the ssh.CombinedOutput command does, but stores
// the stdout and stderr separately. This is slightly tricky because we don't
// want the combined output to be interleaved incorrectly. It creates sub writer
// structs which share the same lock and a shared output buffer.
type splitWriter struct {
Stdout *wrapWriter
Stderr *wrapWriter
stdout bytes.Buffer // just the stdout
stderr bytes.Buffer // just the stderr
output bytes.Buffer // combined output
mutex *sync.Mutex
initialized bool // is this initialized?
}
// Init initializes the splitWriter.
func (sw *splitWriter) Init() {
if sw.initialized {
panic("splitWriter is already initialized")
}
sw.mutex = &sync.Mutex{}
sw.Stdout = &wrapWriter{
Mutex: sw.mutex,
Buffer: &sw.stdout,
Output: &sw.output,
}
sw.Stderr = &wrapWriter{
Mutex: sw.mutex,
Buffer: &sw.stderr,
Output: &sw.output,
}
sw.initialized = true
}
// String returns the contents of the combined output buffer.
func (sw *splitWriter) String() string {
if !sw.initialized {
panic("splitWriter is not initialized")
}
return sw.output.String()
}
// wrapWriter is a simple writer which is used internally by splitWriter.
type wrapWriter struct {
Mutex *sync.Mutex
Buffer *bytes.Buffer // stdout or stderr
Output *bytes.Buffer // combined output
Activity bool // did we get any writes?
}
// Write writes to both bytes buffers with a parent lock to mix output safely.
func (w *wrapWriter) Write(p []byte) (int, error) {
// TODO: can we move the lock to only guard around the Output.Write ?
w.Mutex.Lock()
defer w.Mutex.Unlock()
w.Activity = true
i, err := w.Buffer.Write(p) // first write
if err != nil {
return i, err
}
return w.Output.Write(p) // shared write
}
// String returns the contents of the unshared buffer.
func (w *wrapWriter) String() string {
return w.Buffer.String()
}
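The exec tests below exercise this indirectly through CheckApply; a more direct sketch of the same trick (illustrative only, same package, using the exec and log imports already present) wires the two sub-writers into an exec.Cmd so stdout and stderr are captured separately while the shared buffer keeps the combined output in write order:

cmd := exec.Command("/bin/sh", "-c", "echo out; echo err 1>&2")
var out splitWriter
out.Init()
cmd.Stdout = out.Stdout // distinct writer structs...
cmd.Stderr = out.Stderr // ...sharing one mutex and one combined buffer
if err := cmd.Run(); err != nil {
	log.Printf("cmd failed: %v", err)
}
log.Printf("stdout: %q", out.Stdout.String())
log.Printf("stderr: %q", out.Stderr.String())
log.Printf("combined: %q", out.String())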

resources/exec_test.go Normal file

@@ -0,0 +1,178 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"testing"
)
func TestExecSendRecv1(t *testing.T) {
r1 := &ExecRes{
BaseRes: BaseRes{
Name: "exec1",
//MetaParams: MetaParams,
},
Cmd: "echo hello world",
Shell: "/bin/bash",
}
r1.Setup(nil, r1, r1)
defer func() {
if err := r1.Close(); err != nil {
t.Errorf("close failed with: %v", err)
}
}()
if err := r1.Init(); err != nil {
t.Errorf("init failed with: %v", err)
}
// run artificially without the entire engine
if _, err := r1.CheckApply(true); err != nil {
t.Errorf("checkapply failed with: %v", err)
}
t.Logf("output is: %v", r1.Output)
if r1.Output != nil {
t.Logf("output is: %v", *r1.Output)
}
t.Logf("stdout is: %v", r1.Stdout)
if r1.Stdout != nil {
t.Logf("stdout is: %v", *r1.Stdout)
}
t.Logf("stderr is: %v", r1.Stderr)
if r1.Stderr != nil {
t.Logf("stderr is: %v", *r1.Stderr)
}
if r1.Stdout == nil {
t.Errorf("stdout is nil")
} else {
if out := *r1.Stdout; out != "hello world\n" {
t.Errorf("got wrong stdout(%d): %s", len(out), out)
}
}
}
func TestExecSendRecv2(t *testing.T) {
r1 := &ExecRes{
BaseRes: BaseRes{
Name: "exec1",
//MetaParams: MetaParams,
},
Cmd: "echo hello world 1>&2", // to stderr
Shell: "/bin/bash",
}
r1.Setup(nil, r1, r1)
defer func() {
if err := r1.Close(); err != nil {
t.Errorf("close failed with: %v", err)
}
}()
if err := r1.Init(); err != nil {
t.Errorf("init failed with: %v", err)
}
// run artificially without the entire engine
if _, err := r1.CheckApply(true); err != nil {
t.Errorf("checkapply failed with: %v", err)
}
t.Logf("output is: %v", r1.Output)
if r1.Output != nil {
t.Logf("output is: %v", *r1.Output)
}
t.Logf("stdout is: %v", r1.Stdout)
if r1.Stdout != nil {
t.Logf("stdout is: %v", *r1.Stdout)
}
t.Logf("stderr is: %v", r1.Stderr)
if r1.Stderr != nil {
t.Logf("stderr is: %v", *r1.Stderr)
}
if r1.Stderr == nil {
t.Errorf("stderr is nil")
} else {
if out := *r1.Stderr; out != "hello world\n" {
t.Errorf("got wrong stderr(%d): %s", len(out), out)
}
}
}
func TestExecSendRecv3(t *testing.T) {
r1 := &ExecRes{
BaseRes: BaseRes{
Name: "exec1",
//MetaParams: MetaParams,
},
Cmd: "echo hello world && echo goodbye world 1>&2", // to stdout && stderr
Shell: "/bin/bash",
}
r1.Setup(nil, r1, r1)
defer func() {
if err := r1.Close(); err != nil {
t.Errorf("close failed with: %v", err)
}
}()
if err := r1.Init(); err != nil {
t.Errorf("init failed with: %v", err)
}
// run artificially without the entire engine
if _, err := r1.CheckApply(true); err != nil {
t.Errorf("checkapply failed with: %v", err)
}
t.Logf("output is: %v", r1.Output)
if r1.Output != nil {
t.Logf("output is: %v", *r1.Output)
}
t.Logf("stdout is: %v", r1.Stdout)
if r1.Stdout != nil {
t.Logf("stdout is: %v", *r1.Stdout)
}
t.Logf("stderr is: %v", r1.Stderr)
if r1.Stderr != nil {
t.Logf("stderr is: %v", *r1.Stderr)
}
if r1.Output == nil {
t.Errorf("output is nil")
} else {
// it looks like bash or golang race to the write, so whichever
// order they come out in is ok, as long as they come out whole
if out := *r1.Output; out != "hello world\ngoodbye world\n" && out != "goodbye world\nhello world\n" {
t.Errorf("got wrong output(%d): %s", len(out), out)
}
}
if r1.Stdout == nil {
t.Errorf("stdout is nil")
} else {
if out := *r1.Stdout; out != "hello world\n" {
t.Errorf("got wrong stdout(%d): %s", len(out), out)
}
}
if r1.Stderr == nil {
t.Errorf("stderr is nil")
} else {
if out := *r1.Stderr; out != "goodbye world\n" {
t.Errorf("got wrong stderr(%d): %s", len(out), out)
}
}
}


@@ -42,6 +42,7 @@ import (
func init() {
gob.Register(&FileRes{})
RegisterResource("file", func() Res { return &FileRes{} })
}
// FileRes is a file and directory resource.
@@ -147,7 +148,7 @@ func (obj *FileRes) Init() error {
obj.path = obj.GetPath() // compute once
obj.isDir = strings.HasSuffix(obj.path, "/") // dirs have trailing slashes
obj.BaseRes.kind = "file"
obj.BaseRes.Kind = "file"
return obj.BaseRes.Init() // call base init, b/c we're overriding
}
@@ -198,7 +199,7 @@ func (obj *FileRes) Watch() error {
for {
if obj.debug {
log.Printf("%s[%s]: Watching: %s", obj.Kind(), obj.GetName(), obj.path) // attempting to watch...
log.Printf("%s: Watching: %s", obj, obj.path) // attempting to watch...
}
select {
@@ -207,10 +208,10 @@ func (obj *FileRes) Watch() error {
return nil
}
if err := event.Error; err != nil {
return errwrap.Wrapf(err, "unknown %s[%s] watcher error", obj.Kind(), obj.GetName())
return errwrap.Wrapf(err, "unknown %s watcher error", obj)
}
if obj.debug { // don't access event.Body if event.Error isn't nil
log.Printf("%s[%s]: Event(%s): %v", obj.Kind(), obj.GetName(), event.Body.Name, event.Body.Op)
log.Printf("%s: Event(%s): %v", obj, event.Body.Name, event.Body.Op)
}
send = true
obj.StateOK(false) // dirty
@@ -635,7 +636,7 @@ func (obj *FileRes) syncCheckApply(apply bool, src, dst string) (bool, error) {
// contentCheckApply performs a CheckApply for the file existence and content.
func (obj *FileRes) contentCheckApply(apply bool) (checkOK bool, _ error) {
log.Printf("%s[%s]: contentCheckApply(%t)", obj.Kind(), obj.GetName(), apply)
log.Printf("%s: contentCheckApply(%t)", obj, apply)
if obj.State == "absent" {
if _, err := os.Stat(obj.path); os.IsNotExist(err) {
@@ -697,7 +698,7 @@ func (obj *FileRes) contentCheckApply(apply bool) (checkOK bool, _ error) {
// chmodCheckApply performs a CheckApply for the file permissions.
func (obj *FileRes) chmodCheckApply(apply bool) (checkOK bool, _ error) {
log.Printf("%s[%s]: chmodCheckApply(%t)", obj.Kind(), obj.GetName(), apply)
log.Printf("%s: chmodCheckApply(%t)", obj, apply)
if obj.State == "absent" {
// File is absent
@@ -743,7 +744,7 @@ func (obj *FileRes) chmodCheckApply(apply bool) (checkOK bool, _ error) {
// chownCheckApply performs a CheckApply for the file ownership.
func (obj *FileRes) chownCheckApply(apply bool) (checkOK bool, _ error) {
var expectedUID, expectedGID int
log.Printf("%s[%s]: chownCheckApply(%t)", obj.Kind(), obj.GetName(), apply)
log.Printf("%s: chownCheckApply(%t)", obj, apply)
if obj.State == "absent" {
// File is absent or no owner specified
@@ -897,17 +898,18 @@ func (obj *FileResAutoEdges) Test(input []bool) bool {
// AutoEdges generates a simple linear sequence of each parent directory from
// the bottom up!
func (obj *FileRes) AutoEdges() AutoEdge {
func (obj *FileRes) AutoEdges() (AutoEdge, error) {
var data []ResUID // store linear result chain here...
values := util.PathSplitFullReversed(obj.path) // build it
// build it, but don't use obj.path because this gets called before Init
values := util.PathSplitFullReversed(obj.GetPath())
_, values = values[0], values[1:] // get rid of first value which is me!
for _, x := range values {
var reversed = true // cheat by passing a pointer
data = append(data, &FileUID{
BaseUID: BaseUID{
name: obj.GetName(),
kind: obj.Kind(),
reversed: &reversed,
Name: obj.GetName(),
Kind: obj.GetKind(),
Reversed: &reversed,
},
path: x, // what matters
}) // build list
@@ -916,15 +918,15 @@ func (obj *FileRes) AutoEdges() AutoEdge {
data: data,
pointer: 0,
found: false,
}
}, nil
}
// UIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *FileRes) UIDs() []ResUID {
x := &FileUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
path: obj.path,
BaseUID: BaseUID{Name: obj.GetName(), Kind: obj.GetKind()},
path: obj.GetPath(), // not obj.path b/c we didn't init yet!
}
return []ResUID{x}
}
@@ -941,17 +943,19 @@ func (obj *FileRes) GroupCmp(r Res) bool {
}
// Compare two resources and return if they are equivalent.
func (obj *FileRes) Compare(res Res) bool {
switch res.(type) {
case *FileRes:
res := res.(*FileRes)
func (obj *FileRes) Compare(r Res) bool {
// we can only compare FileRes to others of the same resource kind
res, ok := r.(*FileRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
if obj.path != res.path {
return false
}
@@ -975,9 +979,7 @@ func (obj *FileRes) Compare(res Res) bool {
if obj.Force != res.Force {
return false
}
default:
return false
}
return true
}

resources/file_test.go Normal file

@@ -0,0 +1,79 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"testing"
"github.com/purpleidea/mgmt/pgraph"
)
func TestFileAutoEdge1(t *testing.T) {
g, err := pgraph.NewGraph("TestGraph")
if err != nil {
t.Errorf("error creating graph: %v", err)
return
}
r1 := &FileRes{
BaseRes: BaseRes{
Name: "file1",
Kind: "file",
MetaParams: MetaParams{
AutoEdge: true,
},
},
Path: "/tmp/a/b/", // some dir
}
r2 := &FileRes{
BaseRes: BaseRes{
Name: "file2",
Kind: "file",
MetaParams: MetaParams{
AutoEdge: true,
},
},
Path: "/tmp/a/", // some parent dir
}
r3 := &FileRes{
BaseRes: BaseRes{
Name: "file3",
Kind: "file",
MetaParams: MetaParams{
AutoEdge: true,
},
},
Path: "/tmp/a/b/c", // some child file
}
g.AddVertex(r1, r2, r3)
if i := g.NumEdges(); i != 0 {
t.Errorf("should have 0 edges instead of: %d", i)
}
// run artificially without the entire engine
if err := AutoEdges(g); err != nil {
t.Errorf("error running autoedges: %v", err)
}
// two edges should have been added
if i := g.NumEdges(); i != 2 {
t.Errorf("should have 2 edges instead of: %d", i)
}
}

resources/graph.go Normal file

@@ -0,0 +1,237 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"encoding/gob"
"fmt"
"github.com/purpleidea/mgmt/pgraph"
multierr "github.com/hashicorp/go-multierror"
errwrap "github.com/pkg/errors"
)
func init() {
RegisterResource("graph", func() Res { return &GraphRes{} })
gob.Register(&GraphRes{})
}
// GraphRes is a resource that recursively runs a sub graph of resources.
// TODO: should we name this SubGraphRes instead?
// TODO: we could also flatten "sub graphs" into the main graph to avoid this,
// and this could even be done with a graph transformation called flatten,
// similar to where autogroup and autoedges run.
// XXX: this resource is not complete, and hasn't even been tested
type GraphRes struct {
BaseRes `yaml:",inline"`
Graph *pgraph.Graph `yaml:"graph"` // TODO: how do we suck in a graph via yaml?
initCount int // number of successfully initialized resources
}
// GraphUID is a unique representation for a GraphRes object.
type GraphUID struct {
BaseUID
//foo string // XXX: not implemented
}
// Default returns some sensible defaults for this resource.
func (obj *GraphRes) Default() Res {
return &GraphRes{
BaseRes: BaseRes{
MetaParams: DefaultMetaParams, // force a default
},
}
}
// Validate the params and sub resources that are passed to GraphRes.
func (obj *GraphRes) Validate() error {
var err error
for _, v := range obj.Graph.VerticesSorted() { // validate everyone
if e := VtoR(v).Validate(); err != nil {
err = multierr.Append(err, e) // list of errors
}
}
if err != nil {
return errwrap.Wrapf(err, "could not Validate() graph")
}
return obj.BaseRes.Validate()
}
// Init runs some startup code for this resource.
func (obj *GraphRes) Init() error {
// Loop through each vertex and initialize it, but keep track of how far
// we've succeeded, because on failure we'll stop and prepare to reverse
// through from there running the Close operation on each vertex that we
// previously did an Init on. The engine always ensures that we run this
// with a 1-1 relationship between Init and Close, so we must do so too.
for i, v := range obj.Graph.VerticesSorted() { // deterministic order!
obj.initCount = i + 1 // store the number that we tried to init
if err := VtoR(v).Init(); err != nil {
return errwrap.Wrapf(err, "could not Init() graph")
}
}
obj.BaseRes.Kind = "graph"
return obj.BaseRes.Init() // call base init, b/c we're overriding
}
// Close runs some cleanup code for this resource.
func (obj *GraphRes) Close() error {
// The idea is to Close anything we did an Init on including the BaseRes
// methods which are not guaranteed to be safe if called multiple times!
var err error
vertices := obj.Graph.VerticesSorted() // deterministic order!
last := obj.initCount - 1 // index of last vertex we did init on
for i := range vertices {
v := vertices[last-i] // go through in reverse
// if we hit this condition, we haven't been able to get through
// the entire list of vertices that we'd have liked to, on init!
if obj.initCount == 0 {
// if we get here, we exit without calling BaseRes.Close
// because the matching BaseRes.Init did not get called!
return errwrap.Wrapf(err, "could not Close() partial graph")
//break
}
obj.initCount-- // count to avoid closing one that didn't init!
// try to close everyone that got an init, don't stop suddenly!
if e := VtoR(v).Close(); e != nil {
err = multierr.Append(err, e) // list of errors
}
}
// call base close, b/c we're overriding
if e := obj.BaseRes.Close(); err == nil {
err = e
} else if e != nil {
err = multierr.Append(err, e) // list of errors
}
// this returns nil if err is nil
return errwrap.Wrapf(err, "could not Close() graph")
}
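
The Init/Close bookkeeping above is easier to see in isolation. Below is a simplified, hypothetical sketch of the same idea (the thing type and run helper are illustrative only, not part of this changeset): remember how far Init got, then Close only that many elements, in reverse, so that Init and Close stay paired one to one.

package main

import "fmt"

// thing is a stand-in for a graph vertex with an Init/Close pair.
type thing struct{ name string }

func (t *thing) Init() error  { fmt.Println("init", t.name); return nil }
func (t *thing) Close() error { fmt.Println("close", t.name); return nil }

// run inits each thing in order, remembers how many were reached, and then
// closes exactly that many in reverse, keeping Init and Close 1-1 paired.
func run(things []*thing) error {
	count := 0
	for i, t := range things {
		if err := t.Init(); err != nil {
			break // nothing past this point ever got an Init
		}
		count = i + 1
	}
	for i := count - 1; i >= 0; i-- {
		if err := things[i].Close(); err != nil {
			return err
		}
	}
	return nil
}

func main() { _ = run([]*thing{{"a"}, {"b"}, {"c"}}) }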
// Watch is the primary listener for this resource and it outputs events.
// XXX: should this use mgraph.Start/Pause? if so then what does CheckApply do?
// XXX: we should probably refactor the core engine to make this work, which
// will hopefully lead us to a more elegant core that is easier to understand
func (obj *GraphRes) Watch() error {
return fmt.Errorf("Not implemented")
}
// CheckApply method for Graph resource.
// XXX: not implemented
func (obj *GraphRes) CheckApply(apply bool) (bool, error) {
return false, fmt.Errorf("Not implemented")
}
// UIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *GraphRes) UIDs() []ResUID {
x := &GraphUID{
BaseUID: BaseUID{
Name: obj.GetName(),
Kind: obj.GetKind(),
},
//foo: obj.foo, // XXX: not implemented
}
uids := []ResUID{}
for _, v := range obj.Graph.VerticesSorted() {
uids = append(uids, VtoR(v).UIDs()...)
}
return append([]ResUID{x}, uids...)
}
// XXX: hook up the autogrouping magic!
// Compare two resources and return if they are equivalent.
func (obj *GraphRes) Compare(r Res) bool {
// we can only compare GraphRes to others of the same resource kind
res, ok := r.(*GraphRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) {
return false
}
if obj.Name != res.Name {
return false
}
//if obj.Foo != res.Foo { // XXX: not implemented
// return false
//}
// compare the structure of the two graphs...
vertexCmpFn := func(v1, v2 pgraph.Vertex) (bool, error) {
if v1.String() == "" || v2.String() == "" {
return false, fmt.Errorf("oops, empty vertex")
}
return VtoR(v1).Compare(VtoR(v2)), nil
}
edgeCmpFn := func(e1, e2 pgraph.Edge) (bool, error) {
if e1.String() == "" || e2.String() == "" {
return false, fmt.Errorf("oops, empty edge")
}
edge1 := e1.(*Edge) // panic if wrong
edge2 := e2.(*Edge) // panic if wrong
return edge1.Compare(edge2), nil
}
if err := obj.Graph.GraphCmp(res.Graph, vertexCmpFn, edgeCmpFn); err != nil {
return false
}
// compare individual elements in structurally equivalent graphs
// TODO: is this redundant with the GraphCmp?
g1 := obj.Graph.VerticesSorted()
g2 := res.Graph.VerticesSorted()
for i, v1 := range g1 {
v2 := g2[i]
if !VtoR(v1).Compare(VtoR(v2)) {
return false
}
}
return true
}
// UnmarshalYAML is the custom unmarshal handler for this struct.
// It is primarily useful for setting the defaults.
func (obj *GraphRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawRes GraphRes // indirection to avoid infinite recursion
def := obj.Default() // get the default
res, ok := def.(*GraphRes) // put in the right format
if !ok {
return fmt.Errorf("could not convert to GraphRes")
}
raw := rawRes(*res) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = GraphRes(raw) // restore from indirection with type conversion!
return nil
}


@@ -35,6 +35,7 @@ var ErrResourceInsufficientParameters = errors.New(
"Insufficient parameters for this resource")
func init() {
RegisterResource("hostname", func() Res { return &HostnameRes{} })
gob.Register(&HostnameRes{})
}
@@ -87,7 +88,7 @@ func (obj *HostnameRes) Validate() error {
// Init runs some startup code for this resource.
func (obj *HostnameRes) Init() error {
obj.BaseRes.kind = "hostname"
obj.BaseRes.Kind = "hostname"
if obj.PrettyHostname == "" {
obj.PrettyHostname = obj.Hostname
}
@@ -227,16 +228,11 @@ type HostnameUID struct {
transientHostname string
}
// AutoEdges returns the AutoEdge interface. In this case no autoedges are used.
func (obj *HostnameRes) AutoEdges() AutoEdge {
return nil
}
// UIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *HostnameRes) UIDs() []ResUID {
x := &HostnameUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
BaseUID: BaseUID{Name: obj.GetName(), Kind: obj.GetKind()},
name: obj.Name,
prettyHostname: obj.PrettyHostname,
staticHostname: obj.StaticHostname,
@@ -251,16 +247,19 @@ func (obj *HostnameRes) GroupCmp(r Res) bool {
}
// Compare two resources and return if they are equivalent.
func (obj *HostnameRes) Compare(res Res) bool {
switch res := res.(type) {
// we can only compare HostnameRes to others of the same resource
case *HostnameRes:
func (obj *HostnameRes) Compare(r Res) bool {
// we can only compare HostnameRes to others of the same resource kind
res, ok := r.(*HostnameRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
if obj.PrettyHostname != res.PrettyHostname {
return false
}
@@ -270,9 +269,7 @@ func (obj *HostnameRes) Compare(res Res) bool {
if obj.TransientHostname != res.TransientHostname {
return false
}
default:
return false
}
return true
}


@@ -27,6 +27,7 @@ import (
)
func init() {
RegisterResource("kv", func() Res { return &KVRes{} })
gob.Register(&KVRes{})
}
@@ -88,7 +89,7 @@ func (obj *KVRes) Validate() error {
// Init initializes the resource.
func (obj *KVRes) Init() error {
obj.BaseRes.kind = "kv"
obj.BaseRes.Kind = "kv"
return obj.BaseRes.Init() // call base init, b/c we're overriding
}
@@ -100,7 +101,7 @@ func (obj *KVRes) Watch() error {
return err // bubble up a NACK...
}
ch := obj.Data().World.StrWatch(obj.Key) // get possible events!
ch := obj.Data().World.StrMapWatch(obj.Key) // get possible events!
var send = false // send event?
var exit *error
@@ -112,10 +113,10 @@ func (obj *KVRes) Watch() error {
return nil
}
if err != nil {
return errwrap.Wrapf(err, "unknown %s[%s] watcher error", obj.Kind(), obj.GetName())
return errwrap.Wrapf(err, "unknown %s watcher error", obj)
}
if obj.Data().Debug {
log.Printf("%s[%s]: Event!", obj.Kind(), obj.GetName())
log.Printf("%s: Event!", obj)
}
send = true
obj.StateOK(false) // dirty
@@ -176,7 +177,7 @@ func (obj *KVRes) lessThanCheck(value string) (checkOK bool, err error) {
// CheckApply method for Password resource. Does nothing, returns happy!
func (obj *KVRes) CheckApply(apply bool) (checkOK bool, err error) {
log.Printf("%s[%s]: CheckApply(%t)", obj.Kind(), obj.GetName(), apply)
log.Printf("%s: CheckApply(%t)", obj, apply)
if val, exists := obj.Recv["Value"]; exists && val.Changed {
// if we received on Value, and it changed, wooo, nothing to do.
@@ -184,7 +185,7 @@ func (obj *KVRes) CheckApply(apply bool) (checkOK bool, err error) {
}
hostname := obj.Data().Hostname // me
keyMap, err := obj.Data().World.StrGet(obj.Key)
keyMap, err := obj.Data().World.StrMapGet(obj.Key)
if err != nil {
return false, errwrap.Wrapf(err, "check error during StrGet")
}
@@ -204,7 +205,7 @@ func (obj *KVRes) CheckApply(apply bool) (checkOK bool, err error) {
return true, nil // nothing to delete, we're good!
} else if ok && obj.Value == nil { // delete
err := obj.Data().World.StrDel(obj.Key)
err := obj.Data().World.StrMapDel(obj.Key)
return false, errwrap.Wrapf(err, "apply error during StrDel")
}
@@ -212,7 +213,7 @@ func (obj *KVRes) CheckApply(apply bool) (checkOK bool, err error) {
return false, nil
}
if err := obj.Data().World.StrSet(obj.Key, *obj.Value); err != nil {
if err := obj.Data().World.StrMapSet(obj.Key, *obj.Value); err != nil {
return false, errwrap.Wrapf(err, "apply error during StrSet")
}
@@ -225,16 +226,11 @@ type KVUID struct {
name string
}
// AutoEdges returns the AutoEdge interface. In this case no autoedges are used.
func (obj *KVRes) AutoEdges() AutoEdge {
return nil
}
// UIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *KVRes) UIDs() []ResUID {
x := &KVUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
BaseUID: BaseUID{Name: obj.GetName(), Kind: obj.GetKind()},
name: obj.Name,
}
return []ResUID{x}
@@ -251,11 +247,12 @@ func (obj *KVRes) GroupCmp(r Res) bool {
}
// Compare two resources and return if they are equivalent.
func (obj *KVRes) Compare(res Res) bool {
switch res.(type) {
// we can only compare KVRes to others of the same resource
case *KVRes:
res := res.(*KVRes)
func (obj *KVRes) Compare(r Res) bool {
// we can only compare KVRes to others of the same resource kind
res, ok := r.(*KVRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
@@ -277,9 +274,7 @@ func (obj *KVRes) Compare(res Res) bool {
if obj.SkipCmpStyle != res.SkipCmpStyle {
return false
}
default:
return false
}
return true
}

resources/metaparams.go Normal file

@@ -0,0 +1,65 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"golang.org/x/time/rate"
)
// MetaParams is a struct with all params that apply to every resource.
type MetaParams struct {
AutoEdge bool `yaml:"autoedge"` // metaparam, should we generate auto edges?
AutoGroup bool `yaml:"autogroup"` // metaparam, should we auto group?
Noop bool `yaml:"noop"`
// NOTE: there are separate Watch and CheckApply retry and delay values,
// but I've decided to use the same ones for both until there's a proper
// reason to want to do something differently for the Watch errors.
Retry int16 `yaml:"retry"` // metaparam, number of times to retry on error. -1 for infinite
Delay uint64 `yaml:"delay"` // metaparam, number of milliseconds to wait between retries
Poll uint32 `yaml:"poll"` // metaparam, number of seconds between poll intervals, 0 to watch
Limit rate.Limit `yaml:"limit"` // metaparam, number of events per second to allow through
Burst int `yaml:"burst"` // metaparam, number of events to allow in a burst
Sema []string `yaml:"sema"` // metaparam, list of semaphore ids (id | id:count)
}
// UnmarshalYAML is the custom unmarshal handler for the MetaParams struct. It
// is primarily useful for setting the defaults.
func (obj *MetaParams) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawMetaParams MetaParams // indirection to avoid infinite recursion
raw := rawMetaParams(DefaultMetaParams) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = MetaParams(raw) // restore from indirection with type conversion!
return nil
}
// DefaultMetaParams are the defaults to be used for undefined metaparams.
var DefaultMetaParams = MetaParams{
AutoEdge: true,
AutoGroup: true,
Noop: false,
Retry: 0, // TODO: is this a good default?
Delay: 0, // TODO: is this a good default?
Poll: 0, // defaults to watching for events
Limit: rate.Inf, // defaults to no limit
Burst: 0, // no burst needed on an infinite rate // TODO: is this a good default?
//Sema: []string{},
}
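
To illustrate what the rawMetaParams indirection buys, here is a minimal sketch, assuming the gopkg.in/yaml.v2 package that the yaml struct tags suggest (the exampleMetaDefaults helper is hypothetical): any field omitted from the input keeps its value from DefaultMetaParams.

package resources

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

// exampleMetaDefaults unmarshals a document that only sets noop, relying on
// the UnmarshalYAML method above to seed every other field from the defaults.
func exampleMetaDefaults() error {
	var meta MetaParams
	if err := yaml.Unmarshal([]byte("noop: true\n"), &meta); err != nil {
		return err
	}
	// meta.Noop is now true, while meta.AutoEdge and meta.AutoGroup stay
	// true and meta.Limit stays rate.Inf, because they came from defaults.
	fmt.Printf("%+v\n", meta)
	return nil
}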

resources/mgraph.go Normal file

@@ -0,0 +1,215 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources // TODO: can this be a separate package or will it break the dag?
import (
"log"
"sync"
"github.com/purpleidea/mgmt/event"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/util/semaphore"
)
//go:generate stringer -type=graphState -output=graphstate_stringer.go
type graphState uint
const (
graphStateNil graphState = iota
graphStateStarting
graphStateStarted
graphStatePausing
graphStatePaused
)
// MGraph is a meta graph structure used to encapsulate a generic graph
// structure alongside some non-generic elements.
type MGraph struct {
//Graph *pgraph.Graph
*pgraph.Graph // wrap a graph, and use its methods directly
Data *ResData
FastPause bool
Debug bool
state graphState
// ptr b/c: Mutex/WaitGroup must not be copied after first use
mutex *sync.Mutex
wg *sync.WaitGroup
slock *sync.Mutex
semas map[string]*semaphore.Semaphore
}
// Init initializes the internal structures.
func (obj *MGraph) Init() {
obj.mutex = &sync.Mutex{}
obj.wg = &sync.WaitGroup{}
obj.slock = &sync.Mutex{} // semaphore lock
obj.semas = make(map[string]*semaphore.Semaphore)
}
// getState returns the state of the graph. This state is used for optimizing
// certain algorithms by knowing what part of processing the graph is currently
// undergoing.
func (obj *MGraph) getState() graphState {
//obj.mutex.Lock()
//defer obj.mutex.Unlock()
return obj.state
}
// setState sets the graph state and returns the previous state.
func (obj *MGraph) setState(state graphState) graphState {
obj.mutex.Lock()
defer obj.mutex.Unlock()
prev := obj.getState()
obj.state = state
return prev
}
// Update switches our graph structure to the new graph that we pass to it. This
// also updates any references to the old graph so that they're now correct. It
// also updates references to the Data structure that should be passed around.
func (obj *MGraph) Update(newGraph *pgraph.Graph) {
obj.Graph = newGraph.Copy() // store as new active graph
// update stored reference to graph and other values that need storing!
for _, v := range obj.Graph.Vertices() {
res := VtoR(v) // resource
*res.Data() = *obj.Data // push the data around
}
}
// Start is a main kick to start the graph. It goes through in reverse
// topological sort order so that events can't hit un-started vertices.
func (obj *MGraph) Start(first bool) { // start or continue
log.Printf("State: %v -> %v", obj.setState(graphStateStarting), obj.getState())
defer log.Printf("State: %v -> %v", obj.setState(graphStateStarted), obj.getState())
t, _ := obj.Graph.TopologicalSort()
indegree := obj.Graph.InDegree() // compute all of the indegree's
reversed := pgraph.Reverse(t)
wg := &sync.WaitGroup{}
for _, v := range reversed { // run the Setup() for everyone first
// run these in parallel, as long as we wait before continuing
wg.Add(1)
go func(vertex pgraph.Vertex, res Res) {
defer wg.Done()
// TODO: can't we do this check outside of the goroutine?
if !*res.Working() { // if Worker() is not running...
// NOTE: vertex == res here, but pass in both in
// case we ever wrap the res in something before
// we store it as the vertex in the graph struct
res.Setup(obj, vertex, res) // initialize some vars in the resource
}
}(v, VtoR(v))
}
wg.Wait()
// run through the topological reverse, and start or unpause each vertex
for _, v := range reversed {
res := VtoR(v)
// selective poke: here we reduce the number of initial pokes
// to the minimum required to activate every vertex in the
// graph, either by direct action, or by getting poked by a
// vertex that was previously activated. if we poke each vertex
// that has no incoming edges, then we can be sure to reach the
// whole graph. Please note: this may mask certain optimization
// failures, such as any poke limiting code in Poke() or
// BackPoke(). You might want to disable this selective start
// when experimenting with and testing those elements.
// if we are unpausing (since it's not the first run of this
// function) we need to poke to *unpause* every graph vertex,
// and not just selectively the subset with no indegree.
// let the startup code know to poke or not
// this triggers a CheckApply AFTER Watch is Running()
// We *don't* need to also do this to new nodes or nodes that
// are about to get unpaused, because they'll get poked by one
// of the indegree == 0 vertices, and an important aspect of the
// Process() function is that even if the state is correct, it
// will pass through the Poke so that it flows through the DAG.
res.Starter(indegree[v] == 0)
var unpause = true
if !*res.Working() { // if Worker() is not running...
*res.Working() = true // set Worker() running flag
unpause = false // doesn't need unpausing on first start
obj.wg.Add(1)
// must pass in value to avoid races...
// see: https://ttboj.wordpress.com/2015/07/27/golang-parallelism-issues-causing-too-many-open-files-error/
go func(vv pgraph.Vertex) {
defer obj.wg.Done()
// unset Worker() running flag just before exit
defer func() { *VtoR(vv).Working() = false }()
defer VtoR(vv).Reset()
// TODO: if a sufficient number of workers error,
// should something be done? Should these restart
// after perma-failure if we have a graph change?
log.Printf("%s: Started", vv)
if err := VtoR(vv).Worker(); err != nil { // contains the Watch and CheckApply loops
log.Printf("%s: Exited with failure: %v", vv, err)
return
}
log.Printf("%s: Exited", vv)
}(v)
}
select {
case <-res.Started(): // block until started
case <-res.Stopped(): // we failed on init
// if the resource Init() fails, we don't hang!
}
if unpause { // unpause (if needed)
res.SendEvent(event.EventStart, nil) // sync!
}
}
// we wait for everyone to start before exiting!
}
// Pause sends pause events to the graph in a topological sort order. If you set
// the fastPause argument to true, then it will ask future propagation waves to
// not run through the graph before exiting, and instead will exit much quicker.
func (obj *MGraph) Pause(fastPause bool) {
log.Printf("State: %v -> %v", obj.setState(graphStatePausing), obj.getState())
defer log.Printf("State: %v -> %v", obj.setState(graphStatePaused), obj.getState())
if fastPause {
obj.FastPause = true // set flag
}
t, _ := obj.Graph.TopologicalSort()
for _, v := range t { // squeeze out the events...
VtoR(v).SendEvent(event.EventPause, nil) // sync
}
obj.FastPause = false // reset flag
}
// Exit sends exit events to the graph in a topological sort order.
func (obj *MGraph) Exit() {
if obj.Graph == nil { // empty graph that wasn't populated yet
return
}
// FIXME: a second ^C could put this into fast pause, but do it for now!
obj.Pause(true) // implement this with pause to avoid duplicating the code
t, _ := obj.Graph.TopologicalSort()
for _, v := range t { // squeeze out the events...
// turn off the taps...
VtoR(v).Exit() // sync
}
obj.wg.Wait() // for now, this doesn't need to be a separate Wait() method
}
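
For orientation, a rough sketch of the order in which the engine is expected to call these methods follows; the lifecycleSketch helper is hypothetical, and all of the real error handling and event wiring is omitted.

package resources

import "github.com/purpleidea/mgmt/pgraph"

// lifecycleSketch strings the methods above together in their intended order.
func lifecycleSketch(data *ResData, first, next *pgraph.Graph) {
	mg := &MGraph{Data: data}
	mg.Init()        // allocate the internal mutexes, waitgroup and semaphore map
	mg.Update(first) // copy in the initial graph and push Data to every resource
	mg.Start(true)   // first start: walk the reverse topological sort
	mg.Pause(false)  // slow pause: let the current execution wave finish
	mg.Update(next)  // swap in a replacement graph while paused
	mg.Start(false)  // unpause, and start any newly added vertices
	mg.Exit()        // fast pause, send exit events, then wait for the workers
}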


@@ -28,6 +28,7 @@ import (
)
func init() {
RegisterResource("msg", func() Res { return &MsgRes{} })
gob.Register(&MsgRes{})
}
@@ -75,7 +76,7 @@ func (obj *MsgRes) Validate() error {
// Init runs some startup code for this resource.
func (obj *MsgRes) Init() error {
obj.BaseRes.kind = "msg"
obj.BaseRes.Kind = "msg"
return obj.BaseRes.Init() // call base init, b/c we're overriding
}
@@ -167,7 +168,7 @@ func (obj *MsgRes) CheckApply(apply bool) (bool, error) {
}
if !obj.logStateOK {
log.Printf("%s[%s]: Body: %s", obj.Kind(), obj.GetName(), obj.Body)
log.Printf("%s: Body: %s", obj, obj.Body)
obj.logStateOK = true
obj.updateStateOK()
}
@@ -195,27 +196,25 @@ func (obj *MsgRes) CheckApply(apply bool) (bool, error) {
func (obj *MsgRes) UIDs() []ResUID {
x := &MsgUID{
BaseUID: BaseUID{
name: obj.GetName(),
kind: obj.Kind(),
Name: obj.GetName(),
Kind: obj.GetKind(),
},
body: obj.Body,
}
return []ResUID{x}
}
// AutoEdges returns the AutoEdges. In this case none are used.
func (obj *MsgRes) AutoEdges() AutoEdge {
return nil
}
// Compare two resources and return if they are equivalent.
func (obj *MsgRes) Compare(res Res) bool {
switch res.(type) {
case *MsgRes:
res := res.(*MsgRes)
func (obj *MsgRes) Compare(r Res) bool {
// we can only compare MsgRes to others of the same resource kind
res, ok := r.(*MsgRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) {
return false
}
if obj.Body != res.Body {
return false
}
@@ -230,9 +229,7 @@ func (obj *MsgRes) Compare(res Res) bool {
return false
}
}
default:
return false
}
return true
}


@@ -24,6 +24,7 @@ import (
)
func init() {
RegisterResource("noop", func() Res { return &NoopRes{} })
gob.Register(&NoopRes{})
}
@@ -49,7 +50,7 @@ func (obj *NoopRes) Validate() error {
// Init runs some startup code for this resource.
func (obj *NoopRes) Init() error {
obj.BaseRes.kind = "noop"
obj.BaseRes.Kind = "noop"
return obj.BaseRes.Init() // call base init, b/c we're overriding
}
@@ -82,7 +83,7 @@ func (obj *NoopRes) Watch() error {
// CheckApply method for Noop resource. Does nothing, returns happy!
func (obj *NoopRes) CheckApply(apply bool) (checkOK bool, err error) {
if obj.Refresh() {
log.Printf("%s[%s]: Received a notification!", obj.Kind(), obj.GetName())
log.Printf("%s: Received a notification!", obj)
}
return true, nil // state is always okay
}
@@ -93,16 +94,11 @@ type NoopUID struct {
name string
}
// AutoEdges returns the AutoEdge interface. In this case no autoedges are used.
func (obj *NoopRes) AutoEdges() AutoEdge {
return nil
}
// UIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *NoopRes) UIDs() []ResUID {
x := &NoopUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
BaseUID: BaseUID{Name: obj.GetName(), Kind: obj.GetKind()},
name: obj.Name,
}
return []ResUID{x}
@@ -122,21 +118,20 @@ func (obj *NoopRes) GroupCmp(r Res) bool {
}
// Compare two resources and return if they are equivalent.
func (obj *NoopRes) Compare(res Res) bool {
switch res.(type) {
// we can only compare NoopRes to others of the same resource
case *NoopRes:
res := res.(*NoopRes)
// calling base Compare is unneeded for the noop res
//if !obj.BaseRes.Compare(res) { // call base Compare
// return false
//}
func (obj *NoopRes) Compare(r Res) bool {
// we can only compare NoopRes to others of the same resource kind
res, ok := r.(*NoopRes)
if !ok {
return false
}
// calling base Compare is probably unneeded for the noop res, but do it
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
default:
return false
}
return true
}


@@ -41,18 +41,19 @@ const (
)
func init() {
RegisterResource("nspawn", func() Res { return &NspawnRes{} })
gob.Register(&NspawnRes{})
}
// NspawnRes is an nspawn container resource
// NspawnRes is an nspawn container resource.
type NspawnRes struct {
BaseRes `yaml:",inline"`
State string `yaml:"state"`
// we're using the svc resource to start the machine because that's
// We're using the svc resource to start the machine because that's
// what machinectl does. We're not using svc.Watch because then we
// would have two watches potentially racing each other and producing
// potentially unexpected results. We get everything we need to
// monitor the machine state changes from the org.freedesktop.machine1 object.
// potentially unexpected results. We get everything we need to monitor
// the machine state changes from the org.freedesktop.machine1 object.
svc *SvcRes
}
@@ -74,7 +75,7 @@ func (obj *NspawnRes) Validate() error {
running: {},
}
if _, exists := validStates[obj.State]; !exists {
return fmt.Errorf("Invalid State: %s", obj.State)
return fmt.Errorf("invalid state: %s", obj.State)
}
if err := obj.svc.Validate(); err != nil { // composite resource
@@ -92,11 +93,11 @@ func (obj *NspawnRes) Init() error {
if err := obj.svc.Init(); err != nil {
return err
}
obj.BaseRes.kind = "nspawn"
obj.BaseRes.Kind = "nspawn"
return obj.BaseRes.Init()
}
// Watch for state changes and sends a message to the bus if there is a change
// Watch for state changes and sends a message to the bus if there is a change.
func (obj *NspawnRes) Watch() error {
// this resource depends on systemd ensure that it's running
if !systemdUtil.IsRunningSystemd() {
@@ -133,11 +134,11 @@ func (obj *NspawnRes) Watch() error {
case event := <-buschan:
// process org.freedesktop.machine1 events for this resource's name
if event.Body[0] == obj.GetName() {
log.Printf("%s[%s]: Event received: %v", obj.Kind(), obj.GetName(), event.Name)
log.Printf("%s: Event received: %v", obj, event.Name)
if event.Name == machineNew {
log.Printf("%s[%s]: Machine started", obj.Kind(), obj.GetName())
log.Printf("%s: Machine started", obj)
} else if event.Name == machineRemoved {
log.Printf("%s[%s]: Machine stopped", obj.Kind(), obj.GetName())
log.Printf("%s: Machine stopped", obj)
} else {
return fmt.Errorf("unknown event: %s", event.Name)
}
@@ -160,8 +161,8 @@ func (obj *NspawnRes) Watch() error {
}
// CheckApply is run to check the state and, if apply is true, to apply the
// necessary changes to reach the desired state. this is run before Watch and
// again if watch finds a change occurring to the state
// necessary changes to reach the desired state. This is run before Watch and
// again if Watch finds a change occurring to the state.
func (obj *NspawnRes) CheckApply(apply bool) (checkOK bool, err error) {
// this resource depends on systemd ensure that it's running
if !systemdUtil.IsRunningSystemd() {
@@ -194,13 +195,13 @@ func (obj *NspawnRes) CheckApply(apply bool) (checkOK bool, err error) {
}
}
if obj.debug {
log.Printf("%s[%s]: properties: %v", obj.Kind(), obj.GetName(), properties)
log.Printf("%s: properties: %v", obj, properties)
}
// if the machine doesn't exist and is supposed to
// be stopped or the state matches we're done
if !exists && obj.State == stopped || properties["State"] == obj.State {
if obj.debug {
log.Printf("%s[%s]: CheckApply() in valid state", obj.Kind(), obj.GetName())
log.Printf("%s: CheckApply() in valid state", obj)
}
return true, nil
}
@@ -211,12 +212,12 @@ func (obj *NspawnRes) CheckApply(apply bool) (checkOK bool, err error) {
}
if obj.debug {
log.Printf("%s[%s]: CheckApply() applying '%s' state", obj.Kind(), obj.GetName(), obj.State)
log.Printf("%s: CheckApply() applying '%s' state", obj, obj.State)
}
if obj.State == running {
// start the machine using svc resource
log.Printf("%s[%s]: Starting machine", obj.Kind(), obj.GetName())
log.Printf("%s: Starting machine", obj)
// assume state had to be changed at this point, ignore checkOK
if _, err := obj.svc.CheckApply(apply); err != nil {
return false, errwrap.Wrapf(err, "nested svc failed")
@@ -225,7 +226,7 @@ func (obj *NspawnRes) CheckApply(apply bool) (checkOK bool, err error) {
if obj.State == stopped {
// terminate the machine with
// org.freedesktop.machine1.Manager.KillMachine
log.Printf("%s[%s]: Stopping machine", obj.Kind(), obj.GetName())
log.Printf("%s: Stopping machine", obj)
if err := conn.TerminateMachine(obj.GetName()); err != nil {
return false, errwrap.Wrapf(err, "failed to stop machine")
}
@@ -234,17 +235,17 @@ func (obj *NspawnRes) CheckApply(apply bool) (checkOK bool, err error) {
return false, nil
}
// NspawnUID is a unique resource identifier
// NspawnUID is a unique resource identifier.
type NspawnUID struct {
// NOTE: there is also a name variable in the BaseUID struct, this is
// NOTE: There is also a name variable in the BaseUID struct, this is
// information about where this UID came from, and is unrelated to the
// information about the resource we're matching. That data which is
// used in the IFF function, is what you see in the struct fields here
// used in the IFF function, is what you see in the struct fields here.
BaseUID
name string // the machine name
}
// IFF aka if and only if they are equivalent, return true. If not, false
// IFF aka if and only if they are equivalent, return true. If not, false.
func (obj *NspawnUID) IFF(uid ResUID) bool {
res, ok := uid.(*NspawnUID)
if !ok {
@@ -253,17 +254,17 @@ func (obj *NspawnUID) IFF(uid ResUID) bool {
return obj.name == res.name
}
// UIDs includes all params to make a unique identification of this object
// most resources only return one although some resources can return multiple
// UIDs includes all params to make a unique identification of this object.
// Most resources only return one although some resources can return multiple.
func (obj *NspawnRes) UIDs() []ResUID {
x := &NspawnUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
BaseUID: BaseUID{Name: obj.GetName(), Kind: obj.GetKind()},
name: obj.Name, // svc name
}
return append([]ResUID{x}, obj.svc.UIDs()...)
}
// GroupCmp returns whether two resources can be grouped together or not
// GroupCmp returns whether two resources can be grouped together or not.
func (obj *NspawnRes) GroupCmp(r Res) bool {
_, ok := r.(*NspawnRes)
if !ok {
@@ -273,29 +274,25 @@ func (obj *NspawnRes) GroupCmp(r Res) bool {
return false
}
// Compare two resources and return if they are equivalent
func (obj *NspawnRes) Compare(res Res) bool {
switch res.(type) {
case *NspawnRes:
res := res.(*NspawnRes)
// Compare two resources and return if they are equivalent.
func (obj *NspawnRes) Compare(r Res) bool {
// we can only compare NspawnRes to others of the same resource kind
res, ok := r.(*NspawnRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) {
return false
}
if obj.Name != res.Name {
return false
}
if !obj.svc.Compare(res.svc) {
return false
}
default:
return false
}
return true
}
// AutoEdges returns the AutoEdge interface in this case no autoedges are used
func (obj *NspawnRes) AutoEdges() AutoEdge {
return nil
return true
}
// UnmarshalYAML is the custom unmarshal handler for this struct.


@@ -34,6 +34,7 @@ import (
)
func init() {
RegisterResource("password", func() Res { return &PasswordRes{} })
gob.Register(&PasswordRes{})
}
@@ -73,7 +74,7 @@ func (obj *PasswordRes) Validate() error {
// Init generates a new password for this resource if one was not provided. It
// will save this into a local file. It will load it back in from previous runs.
func (obj *PasswordRes) Init() error {
obj.BaseRes.kind = "password" // must be set before using VarDir
obj.BaseRes.Kind = "password" // must be set before using VarDir
dir, err := obj.VarDir("")
if err != nil {
@@ -187,7 +188,7 @@ func (obj *PasswordRes) Watch() error {
return nil
}
if err := event.Error; err != nil {
return errwrap.Wrapf(err, "unknown %s[%s] watcher error", obj.Kind(), obj.GetName())
return errwrap.Wrapf(err, "unknown %s watcher error", obj)
}
send = true
obj.StateOK(false) // dirty
@@ -228,7 +229,7 @@ func (obj *PasswordRes) CheckApply(apply bool) (checkOK bool, err error) {
if !obj.CheckRecovery {
return false, errwrap.Wrapf(err, "check failed")
}
log.Printf("%s[%s]: Integrity check failed", obj.Kind(), obj.GetName())
log.Printf("%s: Integrity check failed", obj)
generate = true // okay to build a new one
write = true // make sure to write over the old one
}
@@ -262,7 +263,7 @@ func (obj *PasswordRes) CheckApply(apply bool) (checkOK bool, err error) {
}
// generate the actual password
var err error
log.Printf("%s[%s]: Generating new password...", obj.Kind(), obj.GetName())
log.Printf("%s: Generating new password...", obj)
if password, err = obj.generate(); err != nil { // generate one!
return false, errwrap.Wrapf(err, "could not generate password")
}
@@ -279,7 +280,7 @@ func (obj *PasswordRes) CheckApply(apply bool) (checkOK bool, err error) {
output = password
}
// write either an empty token, or the password
log.Printf("%s[%s]: Writing password token...", obj.Kind(), obj.GetName())
log.Printf("%s: Writing password token...", obj)
if _, err := obj.write(output); err != nil {
return false, errwrap.Wrapf(err, "can't write to file")
}
@@ -294,16 +295,11 @@ type PasswordUID struct {
name string
}
// AutoEdges returns the AutoEdge interface. In this case no autoedges are used.
func (obj *PasswordRes) AutoEdges() AutoEdge {
return nil
}
// UIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *PasswordRes) UIDs() []ResUID {
x := &PasswordUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
BaseUID: BaseUID{Name: obj.GetName(), Kind: obj.GetKind()},
name: obj.Name,
}
return []ResUID{x}
@@ -321,18 +317,19 @@ func (obj *PasswordRes) GroupCmp(r Res) bool {
}
// Compare two resources and return if they are equivalent.
func (obj *PasswordRes) Compare(res Res) bool {
switch res.(type) {
// we can only compare PasswordRes to others of the same resource
case *PasswordRes:
res := res.(*PasswordRes)
func (obj *PasswordRes) Compare(r Res) bool {
// we can only compare PasswordRes to others of the same resource kind
res, ok := r.(*PasswordRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
if obj.Length != res.Length {
return false
}
@@ -344,9 +341,7 @@ func (obj *PasswordRes) Compare(res Res) bool {
if obj.CheckRecovery != res.CheckRecovery {
return false
}
default:
return false
}
return true
}


@@ -31,6 +31,7 @@ import (
)
func init() {
RegisterResource("pkg", func() Res { return &PkgRes{} })
gob.Register(&PkgRes{})
}
@@ -66,36 +67,17 @@ func (obj *PkgRes) Validate() error {
// Init runs some startup code for this resource.
func (obj *PkgRes) Init() error {
obj.BaseRes.kind = "pkg"
obj.BaseRes.Kind = "pkg"
if err := obj.BaseRes.Init(); err != nil { // call base init, b/c we're overriding
return err
}
bus := packagekit.NewBus()
if bus == nil {
return fmt.Errorf("can't connect to PackageKit bus")
if obj.fileList == nil {
if err := obj.populateFileList(); err != nil {
return errwrap.Wrapf(err, "error populating file list in init")
}
defer bus.Close()
result, err := obj.pkgMappingHelper(bus)
if err != nil {
return errwrap.Wrapf(err, "the pkgMappingHelper failed")
}
data, ok := result[obj.Name] // lookup single package (init does just one)
// package doesn't exist, this is an error!
if !ok || !data.Found {
return fmt.Errorf("can't find package named '%s'", obj.Name)
}
packageIDs := []string{data.PackageID} // just one for now
filesMap, err := bus.GetFilesByPackageID(packageIDs)
if err != nil {
return errwrap.Wrapf(err, "can't run GetFilesByPackageID")
}
if files, ok := filesMap[data.PackageID]; ok {
obj.fileList = util.DirifyFileList(files, false)
}
return nil
}
@@ -178,9 +160,9 @@ func (obj *PkgRes) getNames() []string {
// pretty print for header values
func (obj *PkgRes) fmtNames(names []string) string {
if len(obj.GetGroup()) > 0 { // grouped elements
return fmt.Sprintf("%s[autogroup:(%v)]", obj.Kind(), strings.Join(names, ","))
return fmt.Sprintf("%s[autogroup:(%s)]", obj.GetKind(), strings.Join(names, ","))
}
return fmt.Sprintf("%s[%s]", obj.Kind(), obj.GetName())
return obj.String()
}
func (obj *PkgRes) groupMappingHelper() map[string]string {
@@ -189,7 +171,7 @@ func (obj *PkgRes) groupMappingHelper() map[string]string {
for _, x := range g {
pkg, ok := x.(*PkgRes) // convert from Res
if !ok {
log.Fatalf("grouped member %v is not a %s", x, obj.Kind())
log.Fatalf("grouped member %v is not a %s", x, obj.GetKind())
}
result[pkg.Name] = pkg.State
}
@@ -222,6 +204,39 @@ func (obj *PkgRes) pkgMappingHelper(bus *packagekit.Conn) (map[string]*packageki
return result, nil
}
// populateFileList fills in the fileList structure with what is in the package.
// TODO: should this work properly if pkg has been autogrouped ?
func (obj *PkgRes) populateFileList() error {
bus := packagekit.NewBus()
if bus == nil {
return fmt.Errorf("can't connect to PackageKit bus")
}
defer bus.Close()
result, err := obj.pkgMappingHelper(bus)
if err != nil {
return errwrap.Wrapf(err, "the pkgMappingHelper failed")
}
data, ok := result[obj.Name] // lookup single package (init does just one)
// package doesn't exist, this is an error!
if !ok || !data.Found {
return fmt.Errorf("can't find package named '%s'", obj.Name)
}
packageIDs := []string{data.PackageID} // just one for now
filesMap, err := bus.GetFilesByPackageID(packageIDs)
if err != nil {
return errwrap.Wrapf(err, "can't run GetFilesByPackageID")
}
if files, ok := filesMap[data.PackageID]; ok {
obj.fileList = util.DirifyFileList(files, false)
}
return nil
}
// CheckApply checks the resource state and applies the resource if the bool
// input is true. It returns error info and if the state check passed or not.
func (obj *PkgRes) CheckApply(apply bool) (checkOK bool, err error) {
@@ -355,9 +370,9 @@ func (obj *PkgResAutoEdges) Next() []ResUID {
var reversed = false // cheat by passing a pointer
result = append(result, &FileUID{
BaseUID: BaseUID{
name: obj.name,
kind: obj.kind,
reversed: &reversed,
Name: obj.name,
Kind: obj.kind,
Reversed: &reversed,
},
path: x, // what matters
}) // build list
@@ -418,10 +433,16 @@ func (obj *PkgResAutoEdges) Test(input []bool) bool {
// AutoEdges produces an object which generates a minimal pkg file optimization
// sequence of edges.
func (obj *PkgRes) AutoEdges() AutoEdge {
func (obj *PkgRes) AutoEdges() (AutoEdge, error) {
// in contrast with the FileRes AutoEdges() function which contains
// more of the mechanics, most of the AutoEdge mechanics for the PkgRes
// is contained in the Test() method! This design is completely okay!
// are contained in the Test() method! This design is completely okay!
if obj.fileList == nil {
if err := obj.populateFileList(); err != nil {
return nil, errwrap.Wrapf(err, "error populating file list for automatic edges")
}
}
// add matches for any svc resources found in pkg definition!
var svcUIDs []ResUID
@@ -429,9 +450,9 @@ func (obj *PkgRes) AutoEdges() AutoEdge {
var reversed = false
svcUIDs = append(svcUIDs, &SvcUID{
BaseUID: BaseUID{
name: obj.GetName(),
kind: obj.Kind(),
reversed: &reversed,
Name: obj.GetName(),
Kind: obj.GetKind(),
Reversed: &reversed,
},
name: x, // the svc name itself in the SvcUID object!
}) // build list
@@ -442,15 +463,15 @@ func (obj *PkgRes) AutoEdges() AutoEdge {
svcUIDs: svcUIDs,
testIsNext: false, // start with Next() call
name: obj.GetName(), // save data for PkgResAutoEdges obj
kind: obj.Kind(),
}
kind: obj.GetKind(),
}, nil
}
// UIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *PkgRes) UIDs() []ResUID {
x := &PkgUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
BaseUID: BaseUID{Name: obj.GetName(), Kind: obj.GetKind()},
name: obj.Name,
state: obj.State,
}
@@ -481,17 +502,19 @@ func (obj *PkgRes) GroupCmp(r Res) bool {
}
// Compare two resources and return if they are equivalent.
func (obj *PkgRes) Compare(res Res) bool {
switch res.(type) {
case *PkgRes:
res := res.(*PkgRes)
func (obj *PkgRes) Compare(r Res) bool {
// we can only compare PkgRes to others of the same resource kind
res, ok := r.(*PkgRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
if obj.State != res.State {
return false
}
@@ -504,29 +527,8 @@ func (obj *PkgRes) Compare(res Res) bool {
if obj.AllowUnsupported != res.AllowUnsupported {
return false
}
default:
return false
}
return true
}
// ReturnSvcInFileList returns a list of svc names for matches like: `/usr/lib/systemd/system/*.service`.
func ReturnSvcInFileList(fileList []string) []string {
result := []string{}
for _, x := range fileList {
dirname, basename := path.Split(path.Clean(x))
// TODO: do we also want to look for /etc/systemd/system/ ?
if dirname != "/usr/lib/systemd/system/" {
continue
}
if !strings.HasSuffix(basename, ".service") {
continue
}
if s := strings.TrimSuffix(basename, ".service"); !util.StrInList(s, result) {
result = append(result, s)
}
}
return result
return true
}
// UnmarshalYAML is the custom unmarshal handler for this struct.
@@ -548,3 +550,22 @@ func (obj *PkgRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
*obj = PkgRes(raw) // restore from indirection with type conversion!
return nil
}
// ReturnSvcInFileList returns a list of svc names for matches like: `/usr/lib/systemd/system/*.service`.
func ReturnSvcInFileList(fileList []string) []string {
result := []string{}
for _, x := range fileList {
dirname, basename := path.Split(path.Clean(x))
// TODO: do we also want to look for /etc/systemd/system/ ?
if dirname != "/usr/lib/systemd/system/" {
continue
}
if !strings.HasSuffix(basename, ".service") {
continue
}
if s := strings.TrimSuffix(basename, ".service"); !util.StrInList(s, result) {
result = append(result, s)
}
}
return result
}

resources/pkg_test.go Normal file

@@ -0,0 +1,33 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"testing"
)
func TestNilList1(t *testing.T) {
var x []string
if x != nil { // we have this expectation for obj.fileList in pkg
t.Errorf("list should have been nil, was: %+v", x)
}
x = []string{} // empty list
if x == nil {
t.Errorf("list should have been empty, was: %+v", x)
}
}


@@ -31,6 +31,7 @@ import (
// TODO: should each resource be a sub-package?
"github.com/purpleidea/mgmt/converger"
"github.com/purpleidea/mgmt/event"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/prometheus"
"github.com/purpleidea/mgmt/util"
@@ -38,6 +39,24 @@ import (
"golang.org/x/time/rate"
)
var registeredResources = map[string]func() Res{}
// RegisterResource registers a new resource by providing a constructor
// function that returns a resource object ready to be unmarshalled from YAML.
func RegisterResource(name string, creator func() Res) {
registeredResources[name] = creator
}
// NewEmptyNamedResource returns an empty resource object from a registered
// type, ready to be unmarshalled.
func NewEmptyNamedResource(name string) (Res, error) {
fn, ok := registeredResources[name]
if !ok {
return nil, fmt.Errorf("no resource named %s available", name)
}
return fn(), nil
}
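
A small usage sketch of the new registry follows (the buildNamedResource helper and the example kind are illustrative only, not part of this changeset):

package resources

import "fmt"

// buildNamedResource looks up a registered constructor by kind and names the
// empty resource it returns; YAML unmarshalling would normally fill in the rest.
func buildNamedResource(kind, name string) (Res, error) {
	res, err := NewEmptyNamedResource(kind) // e.g. kind = "noop"
	if err != nil {
		return nil, fmt.Errorf("could not build %s resource: %v", kind, err)
	}
	res.SetName(name) // assumes SetName from the Base interface embedded in Res
	return res, nil
}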
//go:generate stringer -type=ResState -output=resstate_stringer.go
// The ResState type represents the current activity state of each resource.
@@ -57,18 +76,25 @@ const refreshPathToken = "refresh"
// the GAPI to store state and exchange information throughout the cluster. It
// is the interface each machine uses to communicate with the rest of the world.
type World interface { // TODO: is there a better name for this interface?
ResWatch() chan error
ResExport([]Res) error
// FIXME: should this method take a "filter" data struct instead of many args?
ResCollect(hostnameFilter, kindFilter []string) ([]Res, error)
StrWatch(namespace string) chan error
StrGet(namespace string) (map[string]string, error)
StrIsNotExist(error) bool
StrGet(namespace string) (string, error)
StrSet(namespace, value string) error
StrDel(namespace string) error
StrMapWatch(namespace string) chan error
StrMapGet(namespace string) (map[string]string, error)
StrMapSet(namespace, value string) error
StrMapDel(namespace string) error
}
// Data is the set of input values passed into the pgraph for the resources.
type Data struct {
// ResData is the set of input values passed into the pgraph for the resources.
type ResData struct {
Hostname string // uuid for the host
//Noop bool
Converger converger.Converger
@@ -79,92 +105,25 @@ type Data struct {
// NOTE: we can add more fields here if needed for the resources.
}
// ResUID is a unique identifier for a resource, namely it's name, and the kind ("type").
type ResUID interface {
GetName() string
Kind() string
IFF(ResUID) bool
Reversed() bool // true means this resource happens before the generator
}
// The BaseUID struct is used to provide a unique resource identifier.
type BaseUID struct {
name string // name and kind are the values of where this is coming from
kind string
reversed *bool // piggyback edge information here
}
// The AutoEdge interface is used to implement the autoedges feature.
type AutoEdge interface {
Next() []ResUID // call to get list of edges to add
Test([]bool) bool // call until false
}
// MetaParams is a struct will all params that apply to every resource.
type MetaParams struct {
AutoEdge bool `yaml:"autoedge"` // metaparam, should we generate auto edges?
AutoGroup bool `yaml:"autogroup"` // metaparam, should we auto group?
Noop bool `yaml:"noop"`
// NOTE: there are separate Watch and CheckApply retry and delay values,
// but I've decided to use the same ones for both until there's a proper
// reason to want to do something differently for the Watch errors.
Retry int16 `yaml:"retry"` // metaparam, number of times to retry on error. -1 for infinite
Delay uint64 `yaml:"delay"` // metaparam, number of milliseconds to wait between retries
Poll uint32 `yaml:"poll"` // metaparam, number of seconds between poll intervals, 0 to watch
Limit rate.Limit `yaml:"limit"` // metaparam, number of events per second to allow through
Burst int `yaml:"burst"` // metaparam, number of events to allow in a burst
Sema []string `yaml:"sema"` // metaparam, list of semaphore ids (id | id:count)
}
// UnmarshalYAML is the custom unmarshal handler for the MetaParams struct. It
// is primarily useful for setting the defaults.
func (obj *MetaParams) UnmarshalYAML(unmarshal func(interface{}) error) error {
type rawMetaParams MetaParams // indirection to avoid infinite recursion
raw := rawMetaParams(DefaultMetaParams) // convert; the defaults go here
if err := unmarshal(&raw); err != nil {
return err
}
*obj = MetaParams(raw) // restore from indirection with type conversion!
return nil
}
// DefaultMetaParams are the defaults to be used for undefined metaparams.
var DefaultMetaParams = MetaParams{
AutoEdge: true,
AutoGroup: true,
Noop: false,
Retry: 0, // TODO: is this a good default?
Delay: 0, // TODO: is this a good default?
Poll: 0, // defaults to watching for events
Limit: rate.Inf, // defaults to no limit
Burst: 0, // no burst needed on an infinite rate // TODO: is this a good default?
//Sema: []string{},
}
// The Base interface is everything that is common to all resources.
// Everything here only needs to be implemented once, in the BaseRes.
type Base interface {
GetName() string // can't be named "Name()" because of struct field
SetName(string)
SetKind(string)
Kind() string
GetKind() string
String() string
Meta() *MetaParams
Events() chan *event.Event
Data() *Data
IsWorking() bool
IsQuiescing() bool
QuiesceGroup() *sync.WaitGroup
WaitGroup() *sync.WaitGroup
Setup()
Data() *ResData
Working() *bool
Setup(*MGraph, pgraph.Vertex, Res)
Reset()
Converger() converger.Converger
ConvergerUIDs() (converger.UID, converger.UID, converger.UID)
Exit()
GetState() ResState
SetState(ResState)
Timestamp() int64
UpdateTimestamp() int64
Event() error
SendEvent(event.Kind, error) error
ReadEvent(*event.Event) (*error, bool)
@@ -185,10 +144,7 @@ type Base interface {
Stopped() <-chan struct{} // returns when the resource has stopped
Starter(bool)
Poll() error // poll alternative to watching :(
ProcessChan() chan *event.Event
ProcessSync() *sync.WaitGroup
ProcessExit()
Prometheus() *prometheus.Prometheus
Worker() error
}
// Res is the minimum interface you need to implement to define a new resource.
@@ -201,7 +157,7 @@ type Res interface {
UIDs() []ResUID // most resources only return one
Watch() error // send on channel to signal process() events
CheckApply(apply bool) (checkOK bool, err error)
AutoEdges() AutoEdge
AutoEdges() (AutoEdge, error)
Compare(Res) bool
CollectPattern(string) // XXX: temporary until Res collection is more advanced
//UnmarshalYAML(unmarshal func(interface{}) error) error // optional
@@ -209,14 +165,20 @@ type Res interface {
// BaseRes is the base struct that gets used in every resource.
type BaseRes struct {
Res Res // pointer to full res
Graph *MGraph // pointer to graph I'm currently in
Vertex pgraph.Vertex // pointer to vertex I currently am
Recv map[string]*Send // mapping of key to receive on from value
Kind string
Name string `yaml:"name"`
MetaParams MetaParams `yaml:"meta"` // struct of all the metaparams
Recv map[string]*Send // mapping of key to receive on from value
kind string
data Data
timestamp int64 // last updated timestamp
state ResState
isStateOK bool // whether the state is okay based on events or not
prefix string // base prefix for this resource
data ResData
eventsLock *sync.Mutex // locks around sending and closing of events channel
eventsDone bool
@@ -224,10 +186,9 @@ type BaseRes struct {
processLock *sync.Mutex
processDone bool
processChan chan *event.Event
processSync *sync.WaitGroup
processChan chan *event.Event // chan that resources send events to
processSync *sync.WaitGroup // blocks until the innerWorker closes
converger converger.Converger // converged tracking
cuid converger.UID
wcuid converger.UID
pcuid converger.UID
@@ -237,82 +198,18 @@ type BaseRes struct {
isStarted bool // did the started chan already close?
starter bool // does this have indegree == 0 ? XXX: usually?
quiescing bool // are we quiescing (pause or exit)
quiescing bool // are we quiescing (pause or exit), tell event replay
quiesceGroup *sync.WaitGroup
waitGroup *sync.WaitGroup
working bool // is the Worker() loop running ?
debug bool
isStateOK bool // whether the state is okay based on events or not
isGrouped bool // am i contained within a group?
grouped []Res // list of any grouped resources
refresh bool // does this resource have a refresh to run?
//refreshState StatefulBool // TODO: future stateful bool
}
// UnmarshalYAML is the custom unmarshal handler for the BaseRes struct. It is
// primarily useful for setting the defaults, in particular if meta is absent!
// FIXME: how come we can't get this to work properly without dropping fields?
//func (obj *BaseRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
// DefaultBaseRes := BaseRes{
// // without specifying a default here, if we don't specify *any*
// // meta parameters in the yaml file, then the UnmarshalYAML for
// // the MetaParams struct won't run, and we won't get defaults!
// MetaParams: DefaultMetaParams, // force a default
// }
// type rawBaseRes BaseRes // indirection to avoid infinite recursion
// raw := rawBaseRes(DefaultBaseRes) // convert; the defaults go here
// //raw := rawBaseRes{}
// if err := unmarshal(&raw); err != nil {
// return err
// }
// *obj = BaseRes(raw) // restore from indirection with type conversion!
// return nil
//}
// UIDExistsInUIDs wraps the IFF method when used with a list of UID's.
func UIDExistsInUIDs(uid ResUID, uids []ResUID) bool {
for _, u := range uids {
if uid.IFF(u) {
return true
}
}
return false
}
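For instance, a caller that wants to know whether a wanted UID is satisfied by any of a resource's UIDs might wrap it like this (hypothetical helper, for illustration only):
// hasMatch is a hypothetical example of typical UIDExistsInUIDs usage.
func hasMatch(wanted ResUID, res Res) bool {
	return UIDExistsInUIDs(wanted, res.UIDs())
}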
// GetName returns the name of the resource.
func (obj *BaseUID) GetName() string {
return obj.name
}
// Kind returns the kind of resource.
func (obj *BaseUID) Kind() string {
return obj.kind
}
// IFF looks at two UID's and if and only if they are equivalent, returns true.
// If they are not equivalent, it returns false.
// Most resources will want to override this method, since it does the important
// work of actually discerning if two resources are identical in function.
func (obj *BaseUID) IFF(uid ResUID) bool {
res, ok := uid.(*BaseUID)
if !ok {
return false
}
return obj.name == res.name
}
// Reversed is part of the ResUID interface, and true means this resource
// happens before the generator.
func (obj *BaseUID) Reversed() bool {
if obj.reversed == nil {
log.Fatal("Programming error!")
}
return *obj.reversed
debug bool
}
// Validate reports any problems with the struct definition.
@@ -327,15 +224,17 @@ func (obj *BaseRes) Validate() error {
// Init initializes structures like channels if created without New constructor.
func (obj *BaseRes) Init() error {
if obj.debug {
log.Printf("%s[%s]: Init()", obj.Kind(), obj.GetName())
log.Printf("%s: Init()", obj)
}
if obj.kind == "" {
if obj.Kind == "" {
return fmt.Errorf("resource did not set kind")
}
obj.cuid = obj.Converger().Register()
obj.wcuid = obj.Converger().Register() // get a cuid for the worker!
obj.pcuid = obj.Converger().Register() // get a cuid for the process
if converger := obj.Data().Converger; converger != nil {
obj.cuid = converger.Register()
obj.wcuid = converger.Register() // get a cuid for the worker!
obj.pcuid = converger.Register() // get a cuid for the process
}
obj.processLock = &sync.Mutex{} // lock around processChan closing and sending
obj.processDone = false // did we close processChan ?
@@ -345,9 +244,11 @@ func (obj *BaseRes) Init() error {
obj.quiescing = false // no quiesce operation is happening at the moment
obj.quiesceGroup = &sync.WaitGroup{}
// more useful than a closed channel signal, since it can be re-used
// safely without having to recreate it and worry about stale handles
obj.waitGroup = &sync.WaitGroup{} // Init and Close must be 1-1 matched!
obj.waitGroup.Add(1)
obj.working = true // Worker method should now be running...
//obj.working = true // Worker method should now be running...
// FIXME: force a sane default until UnmarshalYAML on *BaseRes works...
if obj.Meta().Burst == 0 && obj.Meta().Limit == 0 { // blocked
@@ -364,7 +265,7 @@ func (obj *BaseRes) Init() error {
// TODO: this StatefulBool implementation could be eventually swappable
//obj.refreshState = &DiskBool{Path: path.Join(dir, refreshPathToken)}
if err := obj.Prometheus().AddManagedResource(fmt.Sprintf("%v[%v]", obj.Kind(), obj.GetName()), obj.Kind()); err != nil {
if err := obj.Data().Prometheus.AddManagedResource(obj.String(), obj.GetKind()); err != nil {
return errwrap.Wrapf(err, "could not increase prometheus counter!")
}
@@ -374,18 +275,20 @@ func (obj *BaseRes) Init() error {
// Close shuts down and performs any cleanup.
func (obj *BaseRes) Close() error {
if obj.debug {
log.Printf("%s[%s]: Close()", obj.Kind(), obj.GetName())
log.Printf("%s: Close()", obj)
}
if converger := obj.Data().Converger; converger != nil {
obj.pcuid.Unregister()
obj.wcuid.Unregister()
obj.cuid.Unregister()
}
obj.working = false // Worker method should now be closing...
//obj.working = false // Worker method should now be closing...
close(obj.stopped)
obj.waitGroup.Done()
if err := obj.Prometheus().RemoveManagedResource(fmt.Sprintf("%v[%v]", obj.Kind(), obj.GetName()), obj.kind); err != nil {
if err := obj.Data().Prometheus.RemoveManagedResource(obj.String(), obj.GetKind()); err != nil {
return errwrap.Wrapf(err, "could not decrease prometheus counter!")
}
@@ -404,12 +307,17 @@ func (obj *BaseRes) SetName(name string) {
// SetKind sets the kind. This is used internally for exported resources.
func (obj *BaseRes) SetKind(kind string) {
obj.kind = kind
obj.Kind = kind
}
// Kind returns the kind of resource this is.
func (obj *BaseRes) Kind() string {
return obj.kind
// GetKind returns the kind of resource this is.
func (obj *BaseRes) GetKind() string {
return obj.Kind
}
// String returns the canonical string representation for a resource.
func (obj *BaseRes) String() string {
return fmt.Sprintf("%s[%s]", obj.GetKind(), obj.GetName())
}
// Meta returns the MetaParams as a reference, which we can then get/set on.
@@ -423,56 +331,47 @@ func (obj *BaseRes) Events() chan *event.Event {
}
// Data returns an associable handle to some data passed in to the resource.
func (obj *BaseRes) Data() *Data {
func (obj *BaseRes) Data() *ResData {
return &obj.data
}
// IsWorking tells us if the Worker() function is running. Not thread safe.
func (obj *BaseRes) IsWorking() bool {
return obj.working
// Working returns a pointer to the bool which should track Worker run state.
func (obj *BaseRes) Working() *bool {
return &obj.working
}
// IsQuiescing returns if there is a quiesce operation in progress. Pause and
// exit both meet this criteria, and this tells some systems to wind down, such
// as the event replay mechanism.
func (obj *BaseRes) IsQuiescing() bool { return obj.quiescing }
// QuiesceGroup returns the sync group associated with the quiesce operations.
func (obj *BaseRes) QuiesceGroup() *sync.WaitGroup { return obj.quiesceGroup }
// WaitGroup returns a sync.WaitGroup which is open when the resource is done.
// This is more useful than a closed channel signal, since it can be re-used
// safely without having to recreate it and worry about stale channel handles.
func (obj *BaseRes) WaitGroup() *sync.WaitGroup { return obj.waitGroup }
// Setup does some work which must happen before the Worker starts. It happens
// once per Worker startup. It can happen in parallel with other Setup calls, so
// add locks around any operation that's not thread-safe.
func (obj *BaseRes) Setup() {
func (obj *BaseRes) Setup(mgraph *MGraph, vertex pgraph.Vertex, res Res) {
obj.started = make(chan struct{}) // closes when started
obj.stopped = make(chan struct{}) // closes when stopped
obj.eventsLock = &sync.Mutex{}
obj.eventsDone = false
obj.eventsChan = make(chan *event.Event) // unbuffered chan to avoid stale events
obj.Res = res // store a pointer to the full object
obj.Vertex = vertex // store a pointer to the vertex I'm at
obj.Graph = mgraph // store a pointer to the graph we're in
}
// Reset from Setup. These can get called for different vertices in parallel.
func (obj *BaseRes) Reset() {
obj.Res = nil
obj.Vertex = nil
obj.Graph = nil
return
}
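A sketch of the lifecycle these two hooks imply: Setup wires the resource into the graph before its Worker starts, and Reset clears those pointers afterwards. The engine-side function below is hypothetical.
// runVertex is a hypothetical sketch of how an engine could bracket a
// resource Worker with Setup and Reset (pgraph import assumed).
func runVertex(g *MGraph, v pgraph.Vertex) error {
	res := VtoR(v)       // panics if the vertex is not a Res
	res.Setup(g, v, res) // once per Worker startup
	defer res.Reset()    // undo Setup when the worker is finished
	return res.Worker()  // blocks until the resource exits
}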
// Converger returns the converger object used by the system. It can be used to
// register new convergers if needed.
func (obj *BaseRes) Converger() converger.Converger {
return obj.data.Converger
}
// ConvergerUIDs returns the ConvergerUIDs for the resource. This is called by
// the various methods that need one of these ConvergerUIDs. They are registered
// by the Init method and unregistered on the resource Close.
func (obj *BaseRes) ConvergerUIDs() (cuid, wcuid, pcuid converger.UID) {
return obj.cuid, obj.wcuid, obj.pcuid
// Exit the resource. Wrapper function to keep the logic in one place for now.
func (obj *BaseRes) Exit() {
// XXX: consider instead doing this by closing the Res.events channel instead?
// XXX: do this by sending an exit signal, and then returning
// when we hit the 'default' in the select statement!
// XXX: we can do this to quiesce, but it's not necessary now
obj.SendEvent(event.EventExit, nil) // sync
obj.waitGroup.Wait()
}
// GetState returns the state of the resource.
@@ -483,11 +382,22 @@ func (obj *BaseRes) GetState() ResState {
// SetState sets the state of the resource.
func (obj *BaseRes) SetState(state ResState) {
if obj.debug {
log.Printf("%s[%s]: State: %v -> %v", obj.Kind(), obj.GetName(), obj.GetState(), state)
log.Printf("%s: State: %v -> %v", obj, obj.GetState(), state)
}
obj.state = state
}
// Timestamp returns the timestamp of a resource.
func (obj *BaseRes) Timestamp() int64 {
return obj.timestamp
}
// UpdateTimestamp updates the timestamp and returns the new value.
func (obj *BaseRes) UpdateTimestamp() int64 {
obj.timestamp = time.Now().UnixNano() // update
return obj.timestamp
}
// IsStateOK returns the cached state value.
func (obj *BaseRes) IsStateOK() bool {
return obj.isStateOK
@@ -498,12 +408,6 @@ func (obj *BaseRes) StateOK(b bool) {
obj.isStateOK = b
}
// ProcessChan returns the chan that resources send events to. Internal API!
func (obj *BaseRes) ProcessChan() chan *event.Event { return obj.processChan }
// ProcessSync returns the WaitGroup that blocks until the innerWorker closes.
func (obj *BaseRes) ProcessSync() *sync.WaitGroup { return obj.processSync }
// ProcessExit causes the innerWorker to close and waits until it does so.
func (obj *BaseRes) ProcessExit() {
obj.processLock.Lock() // lock to avoid a send when closed!
@@ -559,6 +463,13 @@ func (obj *BaseRes) SetGroup(g []Res) {
obj.grouped = g
}
// AutoEdges returns the AutoEdge interface. By default, none are created. This
// should be implemented by the specific resource to be used. This base method
// does not need to be called by the resource specific implementing method.
func (obj *BaseRes) AutoEdges() (AutoEdge, error) {
return nil, nil
}
// Compare is the base compare method, which also handles the metaparams cmp.
func (obj *BaseRes) Compare(res Res) bool {
// TODO: should the AutoEdge values be compared?
@@ -625,7 +536,7 @@ func (obj *BaseRes) VarDir(extra string) (string, error) {
if obj.prefix == "" {
return "", fmt.Errorf("the VarDir prefix is empty")
}
if obj.Kind() == "" {
if obj.GetKind() == "" {
return "", fmt.Errorf("the VarDir kind is empty")
}
if obj.GetName() == "" {
@@ -634,9 +545,9 @@ func (obj *BaseRes) VarDir(extra string) (string, error) {
// FIXME: is obj.GetName() sufficiently unique to use as a UID here?
uid := obj.GetName()
p := fmt.Sprintf("%s/", path.Join(obj.prefix, obj.Kind(), uid, extra))
p := fmt.Sprintf("%s/", path.Join(obj.prefix, obj.GetKind(), uid, extra))
if err := os.MkdirAll(p, 0770); err != nil {
return "", errwrap.Wrapf(err, "can't create prefix for %s[%s]", obj.Kind(), obj.GetName())
return "", errwrap.Wrapf(err, "can't create prefix for %s", obj)
}
return p, nil
}
@@ -653,8 +564,6 @@ func (obj *BaseRes) Starter(b bool) { obj.starter = b }
// Poll is the watch replacement for when we want to poll, which outputs events.
func (obj *BaseRes) Poll() error {
cuid, _, _ := obj.ConvergerUIDs() // get the converger uid used to report status
// create a time.Ticker for the given interval
ticker := time.NewTicker(time.Duration(obj.Meta().Poll) * time.Second)
defer ticker.Stop()
@@ -663,19 +572,19 @@ func (obj *BaseRes) Poll() error {
if err := obj.Running(); err != nil {
return err // bubble up a NACK...
}
cuid.SetConverged(false) // quickly stop any converge due to Running()
obj.cuid.SetConverged(false) // quickly stop any converge due to Running()
var send = false
var exit *error
for {
select {
case <-ticker.C: // received the timer event
log.Printf("%s[%s]: polling...", obj.Kind(), obj.GetName())
log.Printf("%s: polling...", obj)
send = true
obj.StateOK(false) // dirty
case event := <-obj.Events():
cuid.ResetTimer() // important
obj.cuid.ResetTimer() // important
if exit, send = obj.ReadEvent(event); exit != nil {
return *exit // exit
}
@@ -688,7 +597,45 @@ func (obj *BaseRes) Poll() error {
}
}
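Poll is only meant to run when the Poll metaparam is non-zero; a rough sketch of the dispatch a caller might do (the function name is invented):
// watchOrPoll is a hypothetical dispatcher: poll on a fixed interval when the
// metaparam asks for it, otherwise use the resource's event based Watch.
func watchOrPoll(res Res) error {
	if res.Meta().Poll > 0 {
		return res.Poll() // ticker based, marks state dirty every interval
	}
	return res.Watch() // event based watching
}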
// Prometheus returns the prometheus instance.
func (obj *BaseRes) Prometheus() *prometheus.Prometheus {
return obj.Data().Prometheus
// UnmarshalYAML is the custom unmarshal handler for the BaseRes struct. It is
// primarily useful for setting the defaults, in particular if meta is absent!
// FIXME: how come we can't get this to work properly without dropping fields?
//func (obj *BaseRes) UnmarshalYAML(unmarshal func(interface{}) error) error {
// DefaultBaseRes := BaseRes{
// // without specifying a default here, if we don't specify *any*
// // meta parameters in the yaml file, then the UnmarshalYAML for
// // the MetaParams struct won't run, and we won't get defaults!
// MetaParams: DefaultMetaParams, // force a default
// }
// type rawBaseRes BaseRes // indirection to avoid infinite recursion
// raw := rawBaseRes(DefaultBaseRes) // convert; the defaults go here
// //raw := rawBaseRes{}
// if err := unmarshal(&raw); err != nil {
// return err
// }
// *obj = BaseRes(raw) // restore from indirection with type conversion!
// return nil
//}
// VtoR casts the Vertex into a Res for use. It panics if it can't convert.
func VtoR(v pgraph.Vertex) Res {
res, ok := v.(Res)
if !ok {
panic("not a Res")
}
return res
}
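For example, graph-wide helpers can now cast each vertex back into a resource with VtoR (hypothetical helper, for illustration only):
// namesOf collects the canonical String() of every resource behind a slice of
// vertices; it panics, via VtoR, if any vertex is not a Res.
func namesOf(vertices []pgraph.Vertex) []string {
	names := []string{}
	for _, v := range vertices {
		names = append(names, VtoR(v).String())
	}
	return names
}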
// TODO: consider adding a mutate API.
//func (g *Graph) MutateMatch(obj resources.Res) Vertex {
// for v := range g.adjacency {
// if err := v.Res.Mutate(obj); err == nil {
// // transmogrified!
// return v
// }
// }
// return nil
//}


@@ -22,9 +22,49 @@ import (
"encoding/base64"
"encoding/gob"
"testing"
//"github.com/purpleidea/mgmt/event"
)
func TestCompare1(t *testing.T) {
r1 := &NoopRes{}
r2 := &NoopRes{}
r3 := &FileRes{}
if !r1.Compare(r2) || !r2.Compare(r1) {
t.Error("The two resources do not match!")
}
if r1.Compare(r3) || r3.Compare(r1) {
t.Error("The two resources should not match!")
}
}
func TestCompare2(t *testing.T) {
r1 := &NoopRes{
BaseRes: BaseRes{
Name: "noop1",
MetaParams: MetaParams{
Noop: true,
},
},
}
r2 := &NoopRes{
BaseRes: BaseRes{
Name: "noop1", // same name
MetaParams: MetaParams{
Noop: false, // different noop
},
},
}
if !r2.Compare(r1) { // going from noop(false) -> noop(true) is okay!
t.Error("The two resources do not match!")
}
if r1.Compare(r2) { // going from noop(true) -> noop(false) is not okay!
t.Error("The two resources should not match!")
}
}
func TestMiscEncodeDecode1(t *testing.T) {
var err error
//gob.Register( &NoopRes{} ) // happens in noop.go : init()
@@ -106,9 +146,9 @@ func TestMiscEncodeDecode2(t *testing.T) {
}
func TestIFF(t *testing.T) {
uid := &BaseUID{name: "/tmp/unit-test"}
same := &BaseUID{name: "/tmp/unit-test"}
diff := &BaseUID{name: "/tmp/other-file"}
uid := &BaseUID{Name: "/tmp/unit-test"}
same := &BaseUID{Name: "/tmp/unit-test"}
diff := &BaseUID{Name: "/tmp/other-file"}
if !uid.IFF(same) {
t.Error("basic resource UIDs with the same name should satisfy each other's IFF condition.")


@@ -15,7 +15,7 @@
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
package resources
import (
"fmt"
@@ -32,18 +32,19 @@ import (
const SemaSep = ":"
// SemaLock acquires the list of semaphores in the graph.
func (g *Graph) SemaLock(semas []string) error {
func (obj *MGraph) SemaLock(semas []string) error {
var reterr error
sort.Strings(semas) // very important to avoid deadlock in the dag!
for _, id := range semas {
g.slock.Lock() // semaphore creation lock
sema, ok := g.semas[id] // lookup
obj.slock.Lock() // semaphore creation lock
sema, ok := obj.semas[id] // lookup
if !ok {
size := SemaSize(id) // defaults to 1
g.semas[id] = semaphore.NewSemaphore(size)
sema = g.semas[id]
obj.semas[id] = semaphore.NewSemaphore(size)
sema = obj.semas[id]
}
g.slock.Unlock()
obj.slock.Unlock()
if err := sema.P(1); err != nil { // lock!
reterr = multierr.Append(reterr, err) // list of errors
@@ -53,11 +54,12 @@ func (g *Graph) SemaLock(semas []string) error {
}
// SemaUnlock releases the list of semaphores in the graph.
func (g *Graph) SemaUnlock(semas []string) error {
func (obj *MGraph) SemaUnlock(semas []string) error {
var reterr error
sort.Strings(semas) // unlock in the same order to remove partial locks
for _, id := range semas {
sema, ok := g.semas[id] // lookup
sema, ok := obj.semas[id] // lookup
if !ok {
// programming error!
panic(fmt.Sprintf("graph: sema: %s does not exist", id))
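Resources that share a semaphore id contend on the same counting semaphore, and the "id" or "id:count" strings come from the Sema metaparam. A hypothetical wrapper showing how a worker might bracket a critical section with these calls:
// withSemaphores is a hypothetical sketch: acquire every semaphore named in
// the resource's Sema metaparam, run fn, then release them again.
func withSemaphores(g *MGraph, res Res, fn func() error) error {
	semas := res.Meta().Sema // e.g. []string{"db:3", "net"}
	if err := g.SemaLock(semas); err != nil {
		return err // a semaphore could not be acquired
	}
	defer g.SemaUnlock(semas) // released in the same sorted order
	return fn()
}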


@@ -15,12 +15,12 @@
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package pgraph
package resources
import (
"testing"
"github.com/purpleidea/mgmt/resources"
"github.com/purpleidea/mgmt/pgraph"
)
func TestSemaSize(t *testing.T) {
@@ -38,10 +38,10 @@ func TestSemaSize(t *testing.T) {
func NewNoopResTestSema(name string, semas []string) *NoopResTest {
obj := &NoopResTest{
NoopRes: resources.NoopRes{
BaseRes: resources.BaseRes{
NoopRes: NoopRes{
BaseRes: BaseRes{
Name: name,
MetaParams: resources.MetaParams{
MetaParams: MetaParams{
AutoGroup: true, // always autogroup
Sema: semas,
},
@@ -52,54 +52,54 @@ func NewNoopResTestSema(name string, semas []string) *NoopResTest {
}
func TestPgraphSemaphoreGrouping1(t *testing.T) {
g1 := NewGraph("g1") // original graph
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTestSema("a1", []string{"s:1"}))
a2 := NewVertex(NewNoopResTestSema("a2", []string{"s:2"}))
a3 := NewVertex(NewNoopResTestSema("a3", []string{"s:3"}))
a1 := NewNoopResTestSema("a1", []string{"s:1"})
a2 := NewNoopResTestSema("a2", []string{"s:2"})
a3 := NewNoopResTestSema("a3", []string{"s:3"})
g1.AddVertex(a1)
g1.AddVertex(a2)
g1.AddVertex(a3)
}
g2 := NewGraph("g2") // expected result
g2, _ := pgraph.NewGraph("g2") // expected result
{
a123 := NewVertex(NewNoopResTestSema("a1,a2,a3", []string{"s:1", "s:2", "s:3"}))
a123 := NewNoopResTestSema("a1,a2,a3", []string{"s:1", "s:2", "s:3"})
g2.AddVertex(a123)
}
runGraphCmp(t, g1, g2)
}
func TestPgraphSemaphoreGrouping2(t *testing.T) {
g1 := NewGraph("g1") // original graph
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTestSema("a1", []string{"s:10", "s:11"}))
a2 := NewVertex(NewNoopResTestSema("a2", []string{"s:2"}))
a3 := NewVertex(NewNoopResTestSema("a3", []string{"s:3"}))
a1 := NewNoopResTestSema("a1", []string{"s:10", "s:11"})
a2 := NewNoopResTestSema("a2", []string{"s:2"})
a3 := NewNoopResTestSema("a3", []string{"s:3"})
g1.AddVertex(a1)
g1.AddVertex(a2)
g1.AddVertex(a3)
}
g2 := NewGraph("g2") // expected result
g2, _ := pgraph.NewGraph("g2") // expected result
{
a123 := NewVertex(NewNoopResTestSema("a1,a2,a3", []string{"s:10", "s:11", "s:2", "s:3"}))
a123 := NewNoopResTestSema("a1,a2,a3", []string{"s:10", "s:11", "s:2", "s:3"})
g2.AddVertex(a123)
}
runGraphCmp(t, g1, g2)
}
func TestPgraphSemaphoreGrouping3(t *testing.T) {
g1 := NewGraph("g1") // original graph
g1, _ := pgraph.NewGraph("g1") // original graph
{
a1 := NewVertex(NewNoopResTestSema("a1", []string{"s:1", "s:2"}))
a2 := NewVertex(NewNoopResTestSema("a2", []string{"s:2"}))
a3 := NewVertex(NewNoopResTestSema("a3", []string{"s:3"}))
a1 := NewNoopResTestSema("a1", []string{"s:1", "s:2"})
a2 := NewNoopResTestSema("a2", []string{"s:2"})
a3 := NewNoopResTestSema("a3", []string{"s:3"})
g1.AddVertex(a1)
g1.AddVertex(a2)
g1.AddVertex(a3)
}
g2 := NewGraph("g2") // expected result
g2, _ := pgraph.NewGraph("g2") // expected result
{
a123 := NewVertex(NewNoopResTestSema("a1,a2,a3", []string{"s:1", "s:2", "s:3"}))
a123 := NewNoopResTestSema("a1,a2,a3", []string{"s:1", "s:2", "s:3"})
g2.AddVertex(a123)
}
runGraphCmp(t, g1, g2)


@@ -46,9 +46,9 @@ func (obj *BaseRes) Event() error {
func (obj *BaseRes) SendEvent(ev event.Kind, err error) error {
if obj.debug {
if err == nil {
log.Printf("%s[%s]: SendEvent(%+v)", obj.Kind(), obj.GetName(), ev)
log.Printf("%s: SendEvent(%+v)", obj, ev)
} else {
log.Printf("%s[%s]: SendEvent(%+v): %v", obj.Kind(), obj.GetName(), ev, err)
log.Printf("%s: SendEvent(%+v): %v", obj, ev, err)
}
}
resp := event.NewResp()
@@ -129,7 +129,7 @@ func (obj *BaseRes) ReadEvent(ev *event.Event) (exit *error, send bool) {
continue // silently discard this event while paused
}
// if we get a poke event here, it's a bug!
err = fmt.Errorf("%s[%s]: unknown event: %v, while paused", obj.Kind(), obj.GetName(), e)
err = fmt.Errorf("%s: unknown event: %v, while paused", obj, e)
panic(err) // TODO: return a special sentinel instead?
//return &err, false
}
@@ -149,8 +149,7 @@ func (obj *BaseRes) Running() error {
// converge timeout is very short ( ~ 1s) and the Watch method doesn't
// immediately SetConverged(false) to stop possible early termination.
if obj.Meta().Poll == 0 { // if not polling, unblock this...
cuid, _, _ := obj.ConvergerUIDs()
cuid.SetConverged(true) // a reasonable initial assumption
obj.cuid.SetConverged(true) // a reasonable initial assumption
}
obj.StateOK(false) // assume we're initially dirty
@@ -179,7 +178,7 @@ type Send struct {
func (obj *BaseRes) SendRecv(res Res) (map[string]bool, error) {
if obj.debug {
// NOTE: this could expose private resource data like passwords
log.Printf("%s[%s]: SendRecv: %+v", obj.Kind(), obj.GetName(), obj.Recv)
log.Printf("%s: SendRecv: %+v", obj, obj.Recv)
}
var updated = make(map[string]bool) // list of updated keys
var err error
@@ -205,7 +204,7 @@ func (obj *BaseRes) SendRecv(res Res) (map[string]bool, error) {
// i think we probably want the same kind, at least for now...
if kind1 != kind2 {
e := fmt.Errorf("kind mismatch between %s[%s]: %s and %s[%s]: %s", v.Res.Kind(), v.Res.GetName(), kind1, obj.Kind(), obj.GetName(), kind2)
e := fmt.Errorf("kind mismatch between %s: %s and %s: %s", v.Res, kind1, obj, kind2)
err = multierr.Append(err, e) // list of errors
continue
}
@@ -213,21 +212,21 @@ func (obj *BaseRes) SendRecv(res Res) (map[string]bool, error) {
// if the types don't match, we can't use send->recv
// TODO: do we want to relax this for string -> *string ?
if e := TypeCmp(value1, value2); e != nil {
e := errwrap.Wrapf(e, "type mismatch between %s[%s] and %s[%s]", v.Res.Kind(), v.Res.GetName(), obj.Kind(), obj.GetName())
e := errwrap.Wrapf(e, "type mismatch between %s and %s", v.Res, obj)
err = multierr.Append(err, e) // list of errors
continue
}
// if we can't set, then well this is pointless!
if !value2.CanSet() {
e := fmt.Errorf("can't set %s[%s].%s", obj.Kind(), obj.GetName(), k)
e := fmt.Errorf("can't set %s.%s", obj, k)
err = multierr.Append(err, e) // list of errors
continue
}
// if we can't interface, we can't compare...
if !value1.CanInterface() || !value2.CanInterface() {
e := fmt.Errorf("can't interface %s[%s].%s", obj.Kind(), obj.GetName(), k)
e := fmt.Errorf("can't interface %s.%s", obj, k)
err = multierr.Append(err, e) // list of errors
continue
}
@@ -238,7 +237,7 @@ func (obj *BaseRes) SendRecv(res Res) (map[string]bool, error) {
value2.Set(value1) // do it for all types that match
updated[k] = true // we updated this key!
v.Changed = true // tag this key as updated!
log.Printf("SendRecv: %s[%s].%s -> %s[%s].%s", v.Res.Kind(), v.Res.GetName(), v.Key, obj.Kind(), obj.GetName(), k)
log.Printf("SendRecv: %s.%s -> %s.%s", v.Res, v.Key, obj, k)
}
}
return updated, err
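In other words, each entry in obj.Recv names a sending resource and key, and SendRecv copies that value into the matching field on the receiver. A hypothetical wiring example; the "Foo" key is invented, real resources expose their own send/recv capable fields:
// wireRecv is a hypothetical sketch: ask SendRecv to copy the sender's "Foo"
// value into the receiver's "Foo" field the next time it runs.
func wireRecv(sender Res, receiver *BaseRes) {
	if receiver.Recv == nil {
		receiver.Recv = make(map[string]*Send)
	}
	receiver.Recv["Foo"] = &Send{Res: sender, Key: "Foo"}
}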


@@ -33,6 +33,7 @@ import (
)
func init() {
RegisterResource("svc", func() Res { return &SvcRes{} })
gob.Register(&SvcRes{})
}
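RegisterResource pairs a kind name with a constructor, presumably so other code can build an empty resource from its kind string. The registry below is an assumption about how that might look, not the implementation from this patch (fmt import assumed):
// registrySketch is a hypothetical kind -> constructor map.
var registrySketch = make(map[string]func() Res)

// registerResourceSketch is a hypothetical stand-in for RegisterResource.
func registerResourceSketch(kind string, fn func() Res) {
	if _, exists := registrySketch[kind]; exists {
		panic("resource kind already registered: " + kind)
	}
	registrySketch[kind] = fn
}

// newResourceSketch returns a fresh, empty resource of the named kind.
func newResourceSketch(kind string) (Res, error) {
	fn, exists := registrySketch[kind]
	if !exists {
		return nil, fmt.Errorf("no resource registered for kind: %s", kind)
	}
	return fn(), nil
}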
@@ -41,6 +42,7 @@ type SvcRes struct {
BaseRes `yaml:",inline"`
State string `yaml:"state"` // state: running, stopped, undefined
Startup string `yaml:"startup"` // enabled, disabled, undefined
Session bool `yaml:"session"` // user session (true) or system?
}
// Default returns some sensible defaults for this resource.
@@ -65,7 +67,7 @@ func (obj *SvcRes) Validate() error {
// Init runs some startup code for this resource.
func (obj *SvcRes) Init() error {
obj.BaseRes.kind = "svc"
obj.BaseRes.Kind = "svc"
return obj.BaseRes.Init() // call base init, b/c we're overriding
}
@@ -76,7 +78,14 @@ func (obj *SvcRes) Watch() error {
return fmt.Errorf("systemd is not running")
}
conn, err := systemd.NewSystemdConnection() // needs root access
var conn *systemd.Conn
var err error
if obj.Session {
conn, err = systemd.NewUserConnection() // user session
} else {
// we want NewSystemConnection but New falls back to this
conn, err = systemd.New() // needs root access
}
if err != nil {
return errwrap.Wrapf(err, "failed to connect to systemd")
}
@@ -187,7 +196,7 @@ func (obj *SvcRes) Watch() error {
obj.StateOK(false) // dirty
case err := <-subErrors:
return errwrap.Wrapf(err, "unknown %s[%s] error", obj.Kind(), obj.GetName())
return errwrap.Wrapf(err, "unknown %s error", obj)
case event := <-obj.Events():
if exit, send = obj.ReadEvent(event); exit != nil {
@@ -210,7 +219,13 @@ func (obj *SvcRes) CheckApply(apply bool) (checkOK bool, err error) {
return false, fmt.Errorf("systemd is not running")
}
conn, err := systemd.NewSystemdConnection() // needs root access
var conn *systemd.Conn
if obj.Session {
conn, err = systemd.NewUserConnection() // user session
} else {
// we want NewSystemConnection but New falls back to this
conn, err = systemd.New() // needs root access
}
if err != nil {
return false, errwrap.Wrapf(err, "failed to connect to systemd")
}
@@ -252,7 +267,7 @@ func (obj *SvcRes) CheckApply(apply bool) (checkOK bool, err error) {
}
// apply portion
log.Printf("%s[%s]: Apply", obj.Kind(), obj.GetName())
log.Printf("%s: Apply", obj)
var files = []string{svc} // the svc represented in a list
if obj.Startup == "enabled" {
_, _, err = conn.EnableUnitFiles(files, false, true)
@@ -274,7 +289,7 @@ func (obj *SvcRes) CheckApply(apply bool) (checkOK bool, err error) {
return false, errwrap.Wrapf(err, "failed to start unit")
}
if refresh {
log.Printf("%s[%s]: Skipping reload, due to pending start", obj.Kind(), obj.GetName())
log.Printf("%s: Skipping reload, due to pending start", obj)
}
refresh = false // we did a start, so a reload is not needed
} else if obj.State == "stopped" {
@@ -283,7 +298,7 @@ func (obj *SvcRes) CheckApply(apply bool) (checkOK bool, err error) {
return false, errwrap.Wrapf(err, "failed to stop unit")
}
if refresh {
log.Printf("%s[%s]: Skipping reload, due to pending stop", obj.Kind(), obj.GetName())
log.Printf("%s: Skipping reload, due to pending stop", obj)
}
refresh = false // we did a stop, so a reload is not needed
}
@@ -298,7 +313,7 @@ func (obj *SvcRes) CheckApply(apply bool) (checkOK bool, err error) {
if refresh { // we need to reload the service
// XXX: run a svc reload here!
log.Printf("%s[%s]: Reloading...", obj.Kind(), obj.GetName())
log.Printf("%s: Reloading...", obj)
}
// XXX: also set enabled on boot
@@ -365,7 +380,7 @@ func (obj *SvcResAutoEdges) Test(input []bool) bool {
}
// AutoEdges returns the AutoEdge interface. In this case the systemd units.
func (obj *SvcRes) AutoEdges() AutoEdge {
func (obj *SvcRes) AutoEdges() (AutoEdge, error) {
var data []ResUID
svcFiles := []string{
fmt.Sprintf("/etc/systemd/system/%s.service", obj.Name), // takes precedence
@@ -375,9 +390,9 @@ func (obj *SvcRes) AutoEdges() AutoEdge {
var reversed = true
data = append(data, &FileUID{
BaseUID: BaseUID{
name: obj.GetName(),
kind: obj.Kind(),
reversed: &reversed,
Name: obj.GetName(),
Kind: obj.GetKind(),
Reversed: &reversed,
},
path: x, // what matters
})
@@ -386,14 +401,14 @@ func (obj *SvcRes) AutoEdges() AutoEdge {
data: data,
pointer: 0,
found: false,
}
}, nil
}
// UIDs includes all params to make a unique identification of this object.
// Most resources only return one, although some resources can return multiple.
func (obj *SvcRes) UIDs() []ResUID {
x := &SvcUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
BaseUID: BaseUID{Name: obj.GetName(), Kind: obj.GetKind()},
name: obj.Name, // svc name
}
return []ResUID{x}
@@ -412,26 +427,29 @@ func (obj *SvcRes) GroupCmp(r Res) bool {
}
// Compare two resources and return if they are equivalent.
func (obj *SvcRes) Compare(res Res) bool {
switch res.(type) {
case *SvcRes:
res := res.(*SvcRes)
func (obj *SvcRes) Compare(r Res) bool {
// we can only compare SvcRes to others of the same resource kind
res, ok := r.(*SvcRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
if obj.State != res.State {
return false
}
if obj.Startup != res.Startup {
return false
}
default:
if obj.Session != res.Session {
return false
}
return true
}


@@ -25,6 +25,7 @@ import (
)
func init() {
RegisterResource("timer", func() Res { return &TimerRes{} })
gob.Register(&TimerRes{})
}
@@ -58,7 +59,7 @@ func (obj *TimerRes) Validate() error {
// Init runs some startup code for this resource.
func (obj *TimerRes) Init() error {
obj.BaseRes.kind = "timer"
obj.BaseRes.Kind = "timer"
return obj.BaseRes.Init() // call base init, b/c we're overriding
}
@@ -84,7 +85,7 @@ func (obj *TimerRes) Watch() error {
select {
case <-obj.ticker.C: // received the timer event
send = true
log.Printf("%s[%s]: received tick", obj.Kind(), obj.GetName())
log.Printf("%s: received tick", obj)
case event := <-obj.Events():
if exit, _ := obj.ReadEvent(event); exit != nil {
@@ -120,36 +121,32 @@ func (obj *TimerRes) CheckApply(apply bool) (bool, error) {
func (obj *TimerRes) UIDs() []ResUID {
x := &TimerUID{
BaseUID: BaseUID{
name: obj.GetName(),
kind: obj.Kind(),
Name: obj.GetName(),
Kind: obj.GetKind(),
},
name: obj.Name,
}
return []ResUID{x}
}
// AutoEdges returns the AutoEdge interface. In this case no autoedges are used.
func (obj *TimerRes) AutoEdges() AutoEdge {
return nil
}
// Compare two resources and return if they are equivalent.
func (obj *TimerRes) Compare(res Res) bool {
switch res.(type) {
case *TimerRes:
res := res.(*TimerRes)
func (obj *TimerRes) Compare(r Res) bool {
// we can only compare TimerRes to others of the same resource kind
res, ok := r.(*TimerRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) {
return false
}
if obj.Name != res.Name {
return false
}
if obj.Interval != res.Interval {
return false
}
default:
return false
}
return true
}

resources/uid.go Normal file

@@ -0,0 +1,78 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package resources
import (
"fmt"
"log"
)
// ResUID is a unique identifier for a resource, namely its name and kind ("type").
type ResUID interface {
GetName() string
GetKind() string
fmt.Stringer // String() string
IFF(ResUID) bool
IsReversed() bool // true means this resource happens before the generator
}
// The BaseUID struct is used to provide a unique resource identifier.
type BaseUID struct {
Name string // name and kind are the values of where this is coming from
Kind string
Reversed *bool // piggyback edge information here
}
// GetName returns the name of the resource UID.
func (obj *BaseUID) GetName() string {
return obj.Name
}
// GetKind returns the kind of the resource UID.
func (obj *BaseUID) GetKind() string {
return obj.Kind
}
// String returns the canonical string representation for a resource UID.
func (obj *BaseUID) String() string {
return fmt.Sprintf("%s[%s]", obj.GetKind(), obj.GetName())
}
// IFF looks at two UID's and if and only if they are equivalent, returns true.
// If they are not equivalent, it returns false.
// Most resources will want to override this method, since it does the important
// work of actually discerning if two resources are identical in function.
func (obj *BaseUID) IFF(uid ResUID) bool {
res, ok := uid.(*BaseUID)
if !ok {
return false
}
return obj.Name == res.Name
}
// IsReversed is part of the ResUID interface, and true means this resource
// happens before the generator.
func (obj *BaseUID) IsReversed() bool {
if obj.Reversed == nil {
log.Fatal("Programming error!")
}
return *obj.Reversed
}
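Reversed piggybacks the intended edge direction onto the UID: true means the matched resource should happen before the generator. A hypothetical helper showing how an autoedge consumer might orient the new edge (pgraph import assumed):
// orientEdge is a hypothetical sketch: pick the edge direction based on the
// UID's IsReversed flag.
func orientEdge(uid ResUID, generator, matched pgraph.Vertex) (from, to pgraph.Vertex) {
	if uid.IsReversed() {
		return matched, generator // matched resource runs first
	}
	return generator, matched // generator runs first
}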


@@ -37,6 +37,7 @@ import (
)
func init() {
RegisterResource("virt", func() Res { return &VirtRes{} })
gob.Register(&VirtRes{})
}
@@ -136,7 +137,7 @@ func (obj *VirtRes) Init() error {
var u *url.URL
var err error
if u, err = url.Parse(obj.URI); err != nil {
return errwrap.Wrapf(err, "%s[%s]: Parsing URI failed: %s", obj.Kind(), obj.GetName(), obj.URI)
return errwrap.Wrapf(err, "%s: Parsing URI failed: %s", obj, obj.URI)
}
switch u.Scheme {
case "lxc":
@@ -147,7 +148,7 @@ func (obj *VirtRes) Init() error {
obj.conn, err = obj.connect() // gets closed in Close method of Res API
if err != nil {
return errwrap.Wrapf(err, "%s[%s]: Connection to libvirt failed in init", obj.Kind(), obj.GetName())
return errwrap.Wrapf(err, "%s: Connection to libvirt failed in init", obj)
}
// check for hard to change properties
@@ -155,14 +156,14 @@ func (obj *VirtRes) Init() error {
if err == nil {
defer dom.Free()
} else if !isNotFound(err) {
return errwrap.Wrapf(err, "%s[%s]: Could not lookup on init", obj.Kind(), obj.GetName())
return errwrap.Wrapf(err, "%s: Could not lookup on init", obj)
}
if err == nil {
// maxCPUs, err := dom.GetMaxVcpus()
i, err := dom.GetVcpusFlags(libvirt.DOMAIN_VCPU_MAXIMUM)
if err != nil {
return errwrap.Wrapf(err, "%s[%s]: Could not lookup MaxCPUs on init", obj.Kind(), obj.GetName())
return errwrap.Wrapf(err, "%s: Could not lookup MaxCPUs on init", obj)
}
maxCPUs := uint(i)
if obj.MaxCPUs != maxCPUs { // max cpu slots is hard to change
@@ -175,11 +176,11 @@ func (obj *VirtRes) Init() error {
// event handlers so that we don't miss any events via race?
xmlDesc, err := dom.GetXMLDesc(0) // 0 means no flags
if err != nil {
return errwrap.Wrapf(err, "%s[%s]: Could not GetXMLDesc on init", obj.Kind(), obj.GetName())
return errwrap.Wrapf(err, "%s: Could not GetXMLDesc on init", obj)
}
domXML := &libvirtxml.Domain{}
if err := domXML.Unmarshal(xmlDesc); err != nil {
return errwrap.Wrapf(err, "%s[%s]: Could not unmarshal XML on init", obj.Kind(), obj.GetName())
return errwrap.Wrapf(err, "%s: Could not unmarshal XML on init", obj)
}
// guest agent: domain->devices->channel->target->state == connected?
@@ -191,7 +192,7 @@ func (obj *VirtRes) Init() error {
}
}
obj.wg = &sync.WaitGroup{}
obj.BaseRes.kind = "virt"
obj.BaseRes.Kind = "virt"
return obj.BaseRes.Init() // call base init, b/c we're overriding
}
@@ -399,22 +400,22 @@ func (obj *VirtRes) Watch() error {
obj.guestAgentConnected = true
obj.StateOK(false) // dirty
send = true
log.Printf("%s[%s]: Guest agent connected", obj.Kind(), obj.GetName())
log.Printf("%s: Guest agent connected", obj)
} else if state == libvirt.CONNECT_DOMAIN_EVENT_AGENT_LIFECYCLE_STATE_DISCONNECTED {
obj.guestAgentConnected = false
// ignore CONNECT_DOMAIN_EVENT_AGENT_LIFECYCLE_REASON_DOMAIN_STARTED
// events because they just tell you that guest agent channel was added
if reason == libvirt.CONNECT_DOMAIN_EVENT_AGENT_LIFECYCLE_REASON_CHANNEL {
log.Printf("%s[%s]: Guest agent disconnected", obj.Kind(), obj.GetName())
log.Printf("%s: Guest agent disconnected", obj)
}
} else {
return fmt.Errorf("unknown %s[%s] guest agent state: %v", obj.Kind(), obj.GetName(), state)
return fmt.Errorf("unknown %s guest agent state: %v", obj, state)
}
case err := <-errorChan:
return fmt.Errorf("unknown %s[%s] libvirt error: %s", obj.Kind(), obj.GetName(), err)
return fmt.Errorf("unknown %s libvirt error: %s", obj, err)
case event := <-obj.Events():
if exit, send = obj.ReadEvent(event); exit != nil {
@@ -452,7 +453,7 @@ func (obj *VirtRes) domainCreate() (*libvirt.Domain, bool, error) {
if err != nil {
return dom, false, err // returned dom is invalid
}
log.Printf("%s[%s]: Domain transient %s", state, obj.Kind(), obj.GetName())
log.Printf("%s: Domain transient %s", state, obj)
return dom, false, nil
}
@@ -460,20 +461,20 @@ func (obj *VirtRes) domainCreate() (*libvirt.Domain, bool, error) {
if err != nil {
return dom, false, err // returned dom is invalid
}
log.Printf("%s[%s]: Domain defined", obj.Kind(), obj.GetName())
log.Printf("%s: Domain defined", obj)
if obj.State == "running" {
if err := dom.Create(); err != nil {
return dom, false, err
}
log.Printf("%s[%s]: Domain started", obj.Kind(), obj.GetName())
log.Printf("%s: Domain started", obj)
}
if obj.State == "paused" {
if err := dom.CreateWithFlags(libvirt.DOMAIN_START_PAUSED); err != nil {
return dom, false, err
}
log.Printf("%s[%s]: Domain created paused", obj.Kind(), obj.GetName())
log.Printf("%s: Domain created paused", obj)
}
return dom, false, nil
@@ -511,14 +512,14 @@ func (obj *VirtRes) stateCheckApply(apply bool, dom *libvirt.Domain) (bool, erro
return false, errwrap.Wrapf(err, "domain.Resume failed")
}
checkOK = false
log.Printf("%s[%s]: Domain resumed", obj.Kind(), obj.GetName())
log.Printf("%s: Domain resumed", obj)
break
}
if err := dom.Create(); err != nil {
return false, errwrap.Wrapf(err, "domain.Create failed")
}
checkOK = false
log.Printf("%s[%s]: Domain created", obj.Kind(), obj.GetName())
log.Printf("%s: Domain created", obj)
case "paused":
if domInfo.State == libvirt.DOMAIN_PAUSED {
@@ -532,14 +533,14 @@ func (obj *VirtRes) stateCheckApply(apply bool, dom *libvirt.Domain) (bool, erro
return false, errwrap.Wrapf(err, "domain.Suspend failed")
}
checkOK = false
log.Printf("%s[%s]: Domain paused", obj.Kind(), obj.GetName())
log.Printf("%s: Domain paused", obj)
break
}
if err := dom.CreateWithFlags(libvirt.DOMAIN_START_PAUSED); err != nil {
return false, errwrap.Wrapf(err, "domain.CreateWithFlags failed")
}
checkOK = false
log.Printf("%s[%s]: Domain created paused", obj.Kind(), obj.GetName())
log.Printf("%s: Domain created paused", obj)
case "shutoff":
if domInfo.State == libvirt.DOMAIN_SHUTOFF || domInfo.State == libvirt.DOMAIN_SHUTDOWN {
@@ -553,7 +554,7 @@ func (obj *VirtRes) stateCheckApply(apply bool, dom *libvirt.Domain) (bool, erro
return false, errwrap.Wrapf(err, "domain.Destroy failed")
}
checkOK = false
log.Printf("%s[%s]: Domain destroyed", obj.Kind(), obj.GetName())
log.Printf("%s: Domain destroyed", obj)
}
return checkOK, nil
@@ -579,7 +580,7 @@ func (obj *VirtRes) attrCheckApply(apply bool, dom *libvirt.Domain) (bool, error
if err := dom.SetMemory(obj.Memory); err != nil {
return false, errwrap.Wrapf(err, "domain.SetMemory failed")
}
log.Printf("%s[%s]: Memory changed to %d", obj.Kind(), obj.GetName(), obj.Memory)
log.Printf("%s: Memory changed to %d", obj, obj.Memory)
}
// check cpus
@@ -618,7 +619,7 @@ func (obj *VirtRes) attrCheckApply(apply bool, dom *libvirt.Domain) (bool, error
return false, errwrap.Wrapf(err, "domain.SetVcpus failed")
}
checkOK = false
log.Printf("%s[%s]: CPUs (hot) changed to %d", obj.Kind(), obj.GetName(), obj.CPUs)
log.Printf("%s: CPUs (hot) changed to %d", obj, obj.CPUs)
case libvirt.DOMAIN_SHUTOFF, libvirt.DOMAIN_SHUTDOWN:
if !obj.Transient {
@@ -630,7 +631,7 @@ func (obj *VirtRes) attrCheckApply(apply bool, dom *libvirt.Domain) (bool, error
return false, errwrap.Wrapf(err, "domain.SetVcpus failed")
}
checkOK = false
log.Printf("%s[%s]: CPUs (cold) changed to %d", obj.Kind(), obj.GetName(), obj.CPUs)
log.Printf("%s: CPUs (cold) changed to %d", obj, obj.CPUs)
}
default:
@@ -661,7 +662,7 @@ func (obj *VirtRes) attrCheckApply(apply bool, dom *libvirt.Domain) (bool, error
return false, errwrap.Wrapf(err, "domain.SetVcpus failed")
}
checkOK = false
log.Printf("%s[%s]: CPUs (guest) changed to %d", obj.Kind(), obj.GetName(), obj.CPUs)
log.Printf("%s: CPUs (guest) changed to %d", obj, obj.CPUs)
}
}
@@ -685,7 +686,7 @@ func (obj *VirtRes) domainShutdownSync(apply bool, dom *libvirt.Domain) (bool, e
return false, errwrap.Wrapf(err, "domain.GetInfo failed")
}
if domInfo.State == libvirt.DOMAIN_SHUTOFF || domInfo.State == libvirt.DOMAIN_SHUTDOWN {
log.Printf("%s[%s]: Shutdown", obj.Kind(), obj.GetName())
log.Printf("%s: Shutdown", obj)
break
}
@@ -697,7 +698,7 @@ func (obj *VirtRes) domainShutdownSync(apply bool, dom *libvirt.Domain) (bool, e
obj.processExitChan = make(chan struct{})
// if machine shuts down before we call this, we error;
// this isn't ideal, but it happened due to user error!
log.Printf("%s[%s]: Running shutdown", obj.Kind(), obj.GetName())
log.Printf("%s: Running shutdown", obj)
if err := dom.Shutdown(); err != nil {
// FIXME: if machine is already shutdown completely, return early
return false, errwrap.Wrapf(err, "domain.Shutdown failed")
@@ -718,7 +719,7 @@ func (obj *VirtRes) domainShutdownSync(apply bool, dom *libvirt.Domain) (bool, e
// https://libvirt.org/formatdomain.html#elementsEvents
continue
case <-timeout:
return false, fmt.Errorf("%s[%s]: didn't shutdown after %d seconds", obj.Kind(), obj.GetName(), MaxShutdownDelayTimeout)
return false, fmt.Errorf("%s: didn't shutdown after %d seconds", obj, MaxShutdownDelayTimeout)
}
}
@@ -790,7 +791,7 @@ func (obj *VirtRes) CheckApply(apply bool) (bool, error) {
if err := dom.Undefine(); err != nil {
return false, errwrap.Wrapf(err, "domain.Undefine failed")
}
log.Printf("%s[%s]: Domain undefined", obj.Kind(), obj.GetName())
log.Printf("%s: Domain undefined", obj)
} else {
domXML, err := dom.GetXMLDesc(libvirt.DOMAIN_XML_INACTIVE)
if err != nil {
@@ -799,7 +800,7 @@ func (obj *VirtRes) CheckApply(apply bool) (bool, error) {
if _, err = obj.conn.DomainDefineXML(domXML); err != nil {
return false, errwrap.Wrapf(err, "conn.DomainDefineXML failed")
}
log.Printf("%s[%s]: Domain defined", obj.Kind(), obj.GetName())
log.Printf("%s: Domain defined", obj)
}
checkOK = false
}
@@ -847,7 +848,7 @@ func (obj *VirtRes) CheckApply(apply bool) (bool, error) {
// we had to do a restart, we didn't, and we should error if it was needed
if obj.restartScheduled && restart == true && obj.RestartOnDiverge == "error" {
return false, fmt.Errorf("%s[%s]: needed restart but didn't! (RestartOnDiverge: %v)", obj.Kind(), obj.GetName(), obj.RestartOnDiverge)
return false, fmt.Errorf("%s: needed restart but didn't! (RestartOnDiverge: %v)", obj, obj.RestartOnDiverge)
}
return checkOK, nil // w00t
@@ -1054,7 +1055,7 @@ type VirtUID struct {
// Most resources only return one, although some resources can return multiple.
func (obj *VirtRes) UIDs() []ResUID {
x := &VirtUID{
BaseUID: BaseUID{name: obj.GetName(), kind: obj.Kind()},
BaseUID: BaseUID{Name: obj.GetName(), Kind: obj.GetKind()},
// TODO: add more properties here so we can link to vm dependencies
}
return []ResUID{x}
@@ -1069,23 +1070,20 @@ func (obj *VirtRes) GroupCmp(r Res) bool {
return false // not possible atm
}
// AutoEdges returns the AutoEdge interface. In this case no autoedges are used.
func (obj *VirtRes) AutoEdges() AutoEdge {
return nil
}
// Compare two resources and return if they are equivalent.
func (obj *VirtRes) Compare(res Res) bool {
switch res.(type) {
case *VirtRes:
res := res.(*VirtRes)
func (obj *VirtRes) Compare(r Res) bool {
// we can only compare VirtRes to others of the same resource kind
res, ok := r.(*VirtRes)
if !ok {
return false
}
if !obj.BaseRes.Compare(res) { // call base Compare
return false
}
if obj.Name != res.Name {
return false
}
if obj.URI != res.URI {
return false
}
@@ -1128,9 +1126,7 @@ func (obj *VirtRes) Compare(res Res) bool {
//if obj.Filesystem != res.Filesystem {
// return false
//}
default:
return false
}
return true
}


@@ -23,6 +23,7 @@ run-test ./test/test-bashfmt.sh
run-test ./test/test-headerfmt.sh
run-test ./test/test-commit-message.sh
run-test ./test/test-govet.sh
run-test ./test/test-examples.sh
run-test ./test/test-gotest.sh
# do these longer tests only when running on ci
@@ -31,6 +32,7 @@ if env | grep -q -e '^TRAVIS=true$' -e '^JENKINS_URL=' -e '^BUILD_TAG=jenkins';
run-test ./test/test-gotest.sh --race
fi
run-test ./test/test-gometalinter.sh
# FIXME: this now fails everywhere :(
#run-test ./test/test-reproducible.sh


@@ -13,8 +13,6 @@ import (
mgmt "github.com/purpleidea/mgmt/lib"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
"golang.org/x/time/rate"
)
// MyGAPI implements the main GAPI interface.
@@ -58,11 +56,11 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
return nil, fmt.Errorf("%s: MyGAPI is not initialized", obj.Name)
}
// FIXME: these are being specified temporarily until it's the default!
metaparams := resources.MetaParams{
Limit: rate.Inf,
Burst: 0,
metaparams := resources.DefaultMetaParams
g, err := pgraph.NewGraph(obj.Name)
if err != nil {
return nil, err
}
g := pgraph.NewGraph(obj.Name)
n0 := &resources.NoopRes{
BaseRes: resources.BaseRes{
@@ -70,46 +68,46 @@ func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
MetaParams: metaparams,
},
}
v := pgraph.NewVertex(n0)
g.AddVertex(n0)
g.AddVertex(v)
//g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.World, obj.data.Noop)
return g, nil
}
// Next returns nil errors every time there could be a new graph.
func (obj *MyGAPI) Next() chan error {
if obj.data.NoWatch || obj.Interval <= 0 {
return nil
}
ch := make(chan error)
func (obj *MyGAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
ch <- fmt.Errorf("%s: MyGAPI is not initialized", obj.Name)
next := gapi.Next{
Err: fmt.Errorf("%s: MyGAPI is not initialized", obj.Name),
Exit: true, // exit, b/c programming error?
}
ch <- next
return
}
log.Printf("%s: Generating a bunch of new graphs...", obj.Name)
ch <- nil
ch <- gapi.Next{}
log.Printf("%s: New graph...", obj.Name)
ch <- nil
ch <- gapi.Next{}
log.Printf("%s: New graph...", obj.Name)
ch <- nil
ch <- gapi.Next{}
log.Printf("%s: New graph...", obj.Name)
ch <- nil
ch <- gapi.Next{}
log.Printf("%s: New graph...", obj.Name)
ch <- nil
ch <- gapi.Next{}
log.Printf("%s: New graph...", obj.Name)
ch <- nil
ch <- gapi.Next{}
log.Printf("%s: New graph...", obj.Name)
ch <- nil
ch <- gapi.Next{}
log.Printf("%s: New graph...", obj.Name)
ch <- nil
ch <- gapi.Next{}
log.Printf("%s: New graph...", obj.Name)
ch <- nil
ch <- gapi.Next{}
time.Sleep(1 * time.Second)
log.Printf("%s: Done generating graphs!", obj.Name)
}()
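On the receiving side, each gapi.Next message either requests a graph swap, carries an error, or asks for an exit. The engine loop below is a hypothetical sketch; only the Err and Exit fields are taken from this patch (gapi and log imports assumed):
// consumeNext is a hypothetical consumer of the Next channel.
func consumeNext(ch chan gapi.Next, regenerate func() error) error {
	for next := range ch {
		if next.Err != nil {
			if next.Exit {
				return next.Err // fatal: shut the engine down
			}
			log.Printf("graph generation error: %v", next.Err)
			continue
		}
		if next.Exit {
			return nil // clean exit requested
		}
		if err := regenerate(); err != nil { // build and swap in a new graph
			log.Printf("could not swap in new graph: %v", err)
		}
	}
	return nil
}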

test/test-examples.sh Executable file

@@ -0,0 +1,43 @@
#!/bin/bash
# check that our examples still build, even if we don't run them here
. test/util.sh
echo running test-examples.sh
failures=''
function run-test()
{
$@ || failures=$( [ -n "$failures" ] && echo "$failures\\n$@" || echo "$@" )
}
ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && cd .. && pwd )" # dir!
cd "${ROOT}"
buildout='test-examples.out'
# make symlink to outside of package
linkto="`pwd`/examples/lib/"
tmpdir="`mktemp --tmpdir -d tmp.XXX`" # get a dir outside of the main package
cd "$tmpdir"
ln -s "$linkto" # symlink outside of dir
cd `basename "$linkto"`
# loop through individual *.go files in working dir
for file in `find . -maxdepth 3 -type f -name '*.go'`; do
#echo "running test on: $file"
run-test go build -i -o "$buildout" "$file" || fail_test "could not build: $file"
done
rm "$buildout" || true # clean up build mess
cd - # back to tmp dir
rm `basename "$linkto"`
cd ..
rmdir "$tmpdir" # cleanup
if [[ -n "$failures" ]]; then
echo 'FAIL'
echo 'The following tests have failed:'
echo -e "$failures"
exit 1
fi
echo 'PASS'


@@ -4,6 +4,7 @@
. test/util.sh
echo running test-golint.sh
# TODO: replace with gometalinter instead of plain golint
# TODO: output a diff of what has changed in the golint output
# FIXME: test a range of commits, since only the last patch is checked here
PREVIOUS='HEAD^'

test/test-gometalinter.sh Executable file

@@ -0,0 +1,63 @@
#!/bin/bash
# check a bunch of linters with the gometalinter
# TODO: run this from the test-golint.sh file instead to check for deltas
. test/util.sh
echo running test-gometalinter.sh
failures=''
function run-test()
{
$@ || failures=$( [ -n "$failures" ] && echo "$failures\\n$@" || echo "$@" )
}
ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && cd .. && pwd )" # dir!
cd "${ROOT}"
# TODO: run more linters here if we're brave...
gml='gometalinter --disable-all'
#gml="$gml --enable=aligncheck"
#gml="$gml --enable=deadcode" # TODO: only a few fixes needed
#gml="$gml --enable=dupl"
#gml="$gml --enable=errcheck"
#gml="$gml --enable=gas"
#gml="$gml --enable=goconst"
#gml="$gml --enable=gocyclo"
gml="$gml --enable=goimports"
#gml="$gml --enable=golint" # TODO: only a few fixes needed
#gml="$gml --enable=gosimple" # TODO: only a few fixes needed
gml="$gml --enable=gotype"
#gml="$gml --enable=ineffassign" # TODO: only a few fixes needed
#gml="$gml --enable=interfacer" # TODO: only a few fixes needed
#gml="$gml --enable=lll --line-length=200" # TODO: only a few fixes needed
gml="$gml --enable=misspell"
#gml="$gml --enable=safesql" # FIXME: made my machine slow
#gml="$gml --enable=staticcheck" # TODO: only a few fixes needed
#gml="$gml --enable=structcheck" # TODO: only a few fixes needed
gml="$gml --enable=unconvert"
#gml="$gml --enable=unparam" # TODO: only a few fixes needed
#gml="$gml --enable=unused" # TODO: only a few fixes needed
#gml="$gml --enable=varcheck" # TODO: only a few fixes needed
gometalinter="$gml"
# loop through directories in an attempt to scan each go package
# TODO: lint the *.go examples as individual files and not as a single *.go
for dir in `find . -maxdepth 5 -type d -not -path './old/*' -not -path './old' -not -path './tmp/*' -not -path './tmp' -not -path './.*' -not -path './vendor/*' -not -path './examples/*'`; do
match="$dir/*.go"
#echo "match is: $match"
if ! ls $match &>/dev/null; then
#echo "skipping: $match"
continue # no *.go files found
fi
run-test $gometalinter "$dir" || fail_test "gometalinter did not pass"
done
if [[ -n "$failures" ]]; then
echo 'FAIL'
echo 'The following tests have failed:'
echo -e "$failures"
exit 1
fi
echo 'PASS'


@@ -14,6 +14,8 @@ function run-test()
ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && cd .. && pwd )" # dir!
cd "${ROOT}"
GO_VERSION=($(go version))
function simplify-gocase() {
if grep 'case _ = <-' "$1"; then
return 1 # 'case _ = <- can be simplified to: case <-'
@@ -29,8 +31,25 @@ function token-coloncheck() {
return 0
}
# loop through directories in an attempt to scan each go package
for dir in `find . -maxdepth 5 -type d -not -path './old/*' -not -path './old' -not -path './tmp/*' -not -path './tmp' -not -path './.*' -not -path './vendor/*'`; do
match="$dir/*.go"
#echo "match is: $match"
if ! ls $match &>/dev/null; then
#echo "skipping: $match"
continue # no *.go files found
fi
#echo "matching: $match"
if [[ -z $(echo "${GO_VERSION[2]}" | grep -E 'go1.2|go1.3|go1.4|go1.5|go1.6|go1.7|go1.8') ]]; then
# workaround go vet issues by adding the new -source flag (go1.9+)
run-test go vet -source "$match" || fail_test "go vet -source did not pass pkg"
else
run-test go vet "$match" || fail_test "go vet did not pass pkg" # since it doesn't output an ok message on pass
fi
done
# loop through individual *.go files
for file in `find . -maxdepth 3 -type f -name '*.go' -not -path './old/*' -not -path './tmp/*'`; do
run-test go vet "$file" || fail_test "go vet did not pass" # since it doesn't output an ok message on pass
run-test grep 'log.' "$file" | grep '\\n"' && fail_test 'no newline needed in log.Printf()' # no \n needed in log.Printf()
run-test simplify-gocase "$file"
run-test token-coloncheck "$file"

util/interfaces.go Normal file

@@ -0,0 +1,36 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package util
// Bool returns the interface value if it is a bool, and otherwise it panics.
func Bool(x interface{}) bool {
b, ok := x.(bool)
if !ok {
panic("not a bool")
}
return b
}
// Uint returns the interface value if it is a uint, and otherwise it panics.
func Uint(x interface{}) uint {
u, ok := x.(uint)
if !ok {
panic("not a uint")
}
return u
}
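A hypothetical caller, to show why these helpers exist: they replace the two-value type assertion at each call site with a panic on mismatch.
// exampleFlags pulls typed values back out of a generic map; it panics if a
// value has the wrong dynamic type.
func exampleFlags(flags map[string]interface{}) (bool, uint) {
	noop := Bool(flags["noop"])   // panics unless flags["noop"] is a bool
	retry := Uint(flags["retry"]) // panics unless flags["retry"] is a uint
	return noop, retry
}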


@@ -77,36 +77,66 @@ func (obj *GAPI) Graph() (*pgraph.Graph, error) {
}
// Next returns nil errors every time there could be a new graph.
func (obj *GAPI) Next() chan error {
if obj.data.NoWatch {
return nil
}
ch := make(chan error)
func (obj *GAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
ch <- fmt.Errorf("yamlgraph: GAPI is not initialized")
next := gapi.Next{
Err: fmt.Errorf("yamlgraph: GAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
return
}
configChan := obj.configWatcher.ConfigWatch(*obj.File) // simple
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
watchChan, configChan := make(chan error), make(chan error)
if obj.data.NoConfigWatch {
configChan = nil
} else {
configChan = obj.configWatcher.ConfigWatch(*obj.File) // simple
}
if obj.data.NoStreamWatch {
watchChan = nil
} else {
watchChan = obj.data.World.ResWatch()
}
for {
var err error
var ok bool
select {
case err, ok := <-configChan: // returns nil events on ok!
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case err, ok = <-watchChan:
if !ok {
return
}
case err, ok = <-configChan: // returns nil events on ok!
if !ok { // the channel closed!
return
}
log.Printf("yamlgraph: Generating new graph...")
select {
case ch <- err: // trigger a run (send a msg)
if err != nil {
return
}
// unblock if we exit while waiting to send!
case <-obj.closeChan:
return
}
log.Printf("yamlgraph: Generating new graph...")
next := gapi.Next{
//Exit: true, // TODO: for permanent shutdown!
Err: err,
}
select {
case ch <- next: // trigger a run (send a msg)
// TODO: if the error is really bad, we could:
//if err != nil {
// return
//}
// unblock if we exit while waiting to send!
case <-obj.closeChan:
return
}
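On the receiving side, whatever consumes Next() would range over the channel and inspect the Err and Exit fields of each gapi.Next message. The following is only a hedged sketch of such a consumer, not the engine's actual loop:

package main

import (
	"log"

	"github.com/purpleidea/mgmt/gapi"
)

// consume is a hypothetical receiver for the stream produced by a GAPI's Next().
func consume(ch chan gapi.Next) {
	for next := range ch {
		if next.Exit {
			return // the GAPI asked for a permanent shutdown
		}
		if next.Err != nil {
			log.Printf("error while watching for a new graph: %v", next.Err)
			continue
		}
		// a nil Err means it is time to generate and swap in a new graph
	}
	// the channel was closed; the GAPI will not send further events
}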


@@ -29,6 +29,7 @@ import (
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
errwrap "github.com/pkg/errors"
"gopkg.in/yaml.v2"
)
@@ -57,6 +58,7 @@ type Resources struct {
Augeas []*resources.AugeasRes `yaml:"augeas"`
Exec []*resources.ExecRes `yaml:"exec"`
File []*resources.FileRes `yaml:"file"`
Graph []*resources.GraphRes `yaml:"graph"`
Hostname []*resources.HostnameRes `yaml:"hostname"`
KV []*resources.KVRes `yaml:"kv"`
Msg []*resources.MsgRes `yaml:"msg"`
@@ -96,16 +98,20 @@ func (c *GraphConfig) NewGraphFromConfig(hostname string, world resources.World,
// hostname is the uuid for the host
var graph *pgraph.Graph // new graph to return
graph = pgraph.NewGraph("Graph") // give graph a default name
var err error
graph, err = pgraph.NewGraph("Graph") // give graph a default name
if err != nil {
return nil, errwrap.Wrapf(err, "could not run NewGraphFromConfig() properly")
}
var lookup = make(map[string]map[string]*pgraph.Vertex)
var lookup = make(map[string]map[string]pgraph.Vertex)
//log.Printf("%+v", config) // debug
// TODO: if defined (somehow)...
graph.SetName(c.Graph) // set graph name
var keep []*pgraph.Vertex // list of vertex which are the same in new graph
var keep []pgraph.Vertex // list of vertex which are the same in new graph
var resourceList []resources.Res // list of resources to export
// use reflection to avoid duplicating code... better options welcome!
value := reflect.Indirect(reflect.ValueOf(c.Resources))
@@ -122,18 +128,25 @@ func (c *GraphConfig) NewGraphFromConfig(hostname string, world resources.World,
if !ok {
return nil, fmt.Errorf("Config: Error: Can't convert: %v of type: %T to Res", x, x)
}
res.SetKind(kind) // cheap init
//if noop { // now done in mgmtmain
// res.Meta().Noop = noop
//}
if _, exists := lookup[kind]; !exists {
lookup[kind] = make(map[string]*pgraph.Vertex)
lookup[kind] = make(map[string]pgraph.Vertex)
}
// XXX: should we export based on a @@ prefix, or a metaparam
// like exported => true || exported => (host pattern)||(other pattern?)
if !strings.HasPrefix(res.GetName(), "@@") { // not exported resource
v := graph.CompareMatch(res)
fn := func(v pgraph.Vertex) (bool, error) {
return resources.VtoR(v).Compare(res), nil
}
v, err := graph.VertexMatchFn(fn)
if err != nil {
return nil, errwrap.Wrapf(err, "could not VertexMatchFn() resource")
}
if v == nil { // no match found
v = pgraph.NewVertex(res)
v = res // a standalone res can be a vertex
graph.AddVertex(v) // call standalone in case not part of an edge
}
lookup[kind][res.GetName()] = v // used for constructing edges
@@ -142,7 +155,6 @@ func (c *GraphConfig) NewGraphFromConfig(hostname string, world resources.World,
} else if !noop { // do not export any resources if noop
// store for addition to backend storage...
res.SetName(res.GetName()[2:]) //slice off @@
res.SetKind(kind) // cheap init
resourceList = append(resourceList, res)
}
}
@@ -177,13 +189,13 @@ func (c *GraphConfig) NewGraphFromConfig(hostname string, world resources.World,
log.Printf("Collect: %v; Pattern: %v", kind, t.Pattern)
// XXX: expand to more complex pattern matching here...
if res.Kind() != kind {
if res.GetKind() != kind {
continue
}
if matched {
// we've already matched this resource, should we match again?
log.Printf("Config: Warning: Matching %v[%v] again!", kind, res.GetName())
log.Printf("Config: Warning: Matching %s again!", res)
}
matched = true
@@ -196,15 +208,22 @@ func (c *GraphConfig) NewGraphFromConfig(hostname string, world resources.World,
res.CollectPattern(t.Pattern) // res.Dirname = t.Pattern
}
log.Printf("Collect: %v[%v]: collected!", kind, res.GetName())
log.Printf("Collect: %s: collected!", res)
// XXX: similar to other resource add code:
if _, exists := lookup[kind]; !exists {
lookup[kind] = make(map[string]*pgraph.Vertex)
lookup[kind] = make(map[string]pgraph.Vertex)
}
fn := func(v pgraph.Vertex) (bool, error) {
return resources.VtoR(v).Compare(res), nil
}
v, err := graph.VertexMatchFn(fn)
if err != nil {
return nil, errwrap.Wrapf(err, "could not VertexMatchFn() resource")
}
v := graph.CompareMatch(res)
if v == nil { // no match found
v = pgraph.NewVertex(res)
v = res // a standalone res can be a vertex
graph.AddVertex(v) // call standalone in case not part of an edge
}
lookup[kind][res.GetName()] = v // used for constructing edges
@@ -229,8 +248,10 @@ func (c *GraphConfig) NewGraphFromConfig(hostname string, world resources.World,
}
from := lookup[strings.ToLower(e.From.Kind)][e.From.Name]
to := lookup[strings.ToLower(e.To.Kind)][e.To.Name]
edge := pgraph.NewEdge(e.Name)
edge.Notify = e.Notify
edge := &resources.Edge{
Name: e.Name,
Notify: e.Notify,
}
graph.AddEdge(from, to, edge)
}
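The CompareMatch calls are replaced by the more generic VertexMatchFn, which takes a predicate and returns either the matching vertex or nil, plus any error the predicate produced. A rough sketch of that contract, written against a plain vertex slice because the real pgraph internals are not shown here:

package main

import "github.com/purpleidea/mgmt/pgraph"

// matchVertex sketches the contract implied by the VertexMatchFn calls above;
// it is not the actual pgraph implementation.
func matchVertex(vertices []pgraph.Vertex, fn func(pgraph.Vertex) (bool, error)) (pgraph.Vertex, error) {
	for _, v := range vertices {
		match, err := fn(v)
		if err != nil {
			return nil, err // the predicate itself failed
		}
		if match {
			return v, nil // first match wins
		}
	}
	return nil, nil // a nil vertex means: no match was found
}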

yamlgraph2/gapi.go (new file, 158 lines)

@@ -0,0 +1,158 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package yamlgraph2
import (
"fmt"
"log"
"sync"
"github.com/purpleidea/mgmt/gapi"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/recwatch"
)
// GAPI implements the main yamlgraph GAPI interface.
type GAPI struct {
File *string // yaml graph definition to use; nil if undefined
data gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
configWatcher *recwatch.ConfigWatcher
}
// NewGAPI creates a new yamlgraph GAPI struct and calls Init().
func NewGAPI(data gapi.Data, file *string) (*GAPI, error) {
obj := &GAPI{
File: file,
}
return obj, obj.Init(data)
}
// Init initializes the yamlgraph GAPI struct.
func (obj *GAPI) Init(data gapi.Data) error {
if obj.initialized {
return fmt.Errorf("already initialized")
}
if obj.File == nil {
return fmt.Errorf("the File param must be specified")
}
obj.data = data // store for later
obj.closeChan = make(chan struct{})
obj.initialized = true
obj.configWatcher = recwatch.NewConfigWatcher()
return nil
}
// Graph returns a current Graph.
func (obj *GAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("yamlgraph: GAPI is not initialized")
}
config := ParseConfigFromFile(*obj.File)
if config == nil {
return nil, fmt.Errorf("yamlgraph: ParseConfigFromFile returned nil")
}
g, err := config.NewGraphFromConfig(obj.data.Hostname, obj.data.World, obj.data.Noop)
return g, err
}
// Next returns nil errors every time there could be a new graph.
func (obj *GAPI) Next() chan gapi.Next {
ch := make(chan gapi.Next)
obj.wg.Add(1)
go func() {
defer obj.wg.Done()
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
next := gapi.Next{
Err: fmt.Errorf("yamlgraph: GAPI is not initialized"),
Exit: true, // exit, b/c programming error?
}
ch <- next
return
}
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
watchChan, configChan := make(chan error), make(chan error)
if obj.data.NoConfigWatch {
configChan = nil
} else {
configChan = obj.configWatcher.ConfigWatch(*obj.File) // simple
}
if obj.data.NoStreamWatch {
watchChan = nil
} else {
watchChan = obj.data.World.ResWatch()
}
for {
var err error
var ok bool
select {
case <-startChan: // kick the loop once at start
startChan = nil // disable
// pass
case err, ok = <-watchChan:
if !ok {
return
}
case err, ok = <-configChan: // returns nil events on ok!
if !ok { // the channel closed!
return
}
case <-obj.closeChan:
return
}
log.Printf("yamlgraph: Generating new graph...")
next := gapi.Next{
//Exit: true, // TODO: for permanent shutdown!
Err: err,
}
select {
case ch <- next: // trigger a run (send a msg)
// TODO: if the error is really bad, we could:
//if err != nil {
// return
//}
// unblock if we exit while waiting to send!
case <-obj.closeChan:
return
}
}
}()
return ch
}
// Close shuts down the yamlgraph GAPI.
func (obj *GAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("yamlgraph: GAPI is not initialized")
}
obj.configWatcher.Close()
close(obj.closeChan)
obj.wg.Wait()
obj.initialized = false // closed = true
return nil
}
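Taken together, the intended lifecycle of the new yamlgraph2 GAPI is: construct it, range over Next() for change events, call Graph() to build each new graph, and Close() on shutdown. A hedged usage sketch follows; the gapi.Data passed in must be populated by the caller (Hostname, World, and so on) and the error handling is only illustrative:

package main

import (
	"log"

	"github.com/purpleidea/mgmt/gapi"
	"github.com/purpleidea/mgmt/yamlgraph2"
)

func run(data gapi.Data, file string) error {
	g, err := yamlgraph2.NewGAPI(data, &file) // calls Init() internally
	if err != nil {
		return err
	}
	defer g.Close()

	for next := range g.Next() { // one message per potential new graph
		if next.Err != nil {
			log.Printf("watch error: %v", next.Err)
			if next.Exit {
				return next.Err
			}
			continue
		}
		graph, err := g.Graph() // re-parse the yaml and build the graph
		if err != nil {
			return err
		}
		_ = graph // hand the new graph off to the engine here
	}
	return nil
}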

yamlgraph2/gconfig.go (new file, 321 lines)

@@ -0,0 +1,321 @@
// Mgmt
// Copyright (C) 2013-2017+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package yamlgraph2 provides the facilities for loading a graph from a yaml file.
package yamlgraph2
import (
"errors"
"fmt"
"io/ioutil"
"log"
"strings"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
errwrap "github.com/pkg/errors"
"gopkg.in/yaml.v2"
)
type collectorResConfig struct {
Kind string `yaml:"kind"`
Pattern string `yaml:"pattern"` // XXX: Not Implemented
}
// Vertex is the data structure of a vertex.
type Vertex struct {
Kind string `yaml:"kind"`
Name string `yaml:"name"`
}
// Edge is the data structure of an edge.
type Edge struct {
Name string `yaml:"name"`
From Vertex `yaml:"from"`
To Vertex `yaml:"to"`
Notify bool `yaml:"notify"`
}
// ResourceData are the parameters for resource format.
type ResourceData struct {
Name string `yaml:"name"`
Meta resources.MetaParams `yaml:"meta"`
}
// Resource is the object that unmarshalls resources.
type Resource struct {
ResourceData
unmarshal func(interface{}) error
resource resources.Res
}
// Resources is the object that unmarshalls list of resources.
type Resources struct {
Resources map[string][]Resource `yaml:"resources"`
}
// GraphConfigData contains the graph data for GraphConfig.
type GraphConfigData struct {
Graph string `yaml:"graph"`
Collector []collectorResConfig `yaml:"collect"`
Edges []Edge `yaml:"edges"`
Comment string `yaml:"comment"`
Remote string `yaml:"remote"`
}
// GraphConfig is the data structure that describes a single graph to run.
type GraphConfig struct {
GraphConfigData
ResList []resources.Res
}
// UnmarshalYAML unmarshalls the complete graph.
func (c *GraphConfig) UnmarshalYAML(unmarshal func(interface{}) error) error {
// Unmarshal the graph data, except the resources
if err := unmarshal(&c.GraphConfigData); err != nil {
return err
}
// Unmarshal resources
var list Resources
list.Resources = map[string][]Resource{}
if err := unmarshal(&list); err != nil {
return err
}
// Finish unmarshalling by giving to each resource its kind
// and store each resource in the graph
for kind, resList := range list.Resources {
for _, res := range resList {
err := res.Decode(kind)
if err != nil {
return err
}
c.ResList = append(c.ResList, res.resource)
}
}
return nil
}
// UnmarshalYAML is the first stage for unmarshaling of resources.
func (r *Resource) UnmarshalYAML(unmarshal func(interface{}) error) error {
r.unmarshal = unmarshal
return unmarshal(&r.ResourceData)
}
// Decode is the second stage for unmarshaling of resources (knowing their
// kind).
func (r *Resource) Decode(kind string) (err error) {
r.resource, err = resources.NewEmptyNamedResource(kind)
if err != nil {
return err
}
err = r.unmarshal(r.resource)
if err != nil {
return err
}
// Set resource name, meta and kind
r.resource.SetName(r.Name)
r.resource.SetKind(strings.ToLower(kind))
meta := r.resource.Meta()
*meta = r.Meta
return
}
// Parse parses a data stream into the graph structure.
func (c *GraphConfig) Parse(data []byte) error {
if err := yaml.Unmarshal(data, c); err != nil {
return err
}
if c.Graph == "" {
return errors.New("graph config: invalid graph")
}
return nil
}
// NewGraphFromConfig transforms a GraphConfig struct into a new graph.
// FIXME: remove any possibly left over, now obsolete graph diff code from here!
func (c *GraphConfig) NewGraphFromConfig(hostname string, world resources.World, noop bool) (*pgraph.Graph, error) {
// hostname is the uuid for the host
var graph *pgraph.Graph // new graph to return
var err error
graph, err = pgraph.NewGraph("Graph") // give graph a default name
if err != nil {
return nil, errwrap.Wrapf(err, "could not run NewGraphFromConfig() properly")
}
var lookup = make(map[string]map[string]pgraph.Vertex)
//log.Printf("%+v", config) // debug
// TODO: if defined (somehow)...
graph.SetName(c.Graph) // set graph name
var keep []pgraph.Vertex // list of vertex which are the same in new graph
var resourceList []resources.Res // list of resources to export
// Resources
for _, res := range c.ResList {
kind := res.GetKind()
if _, exists := lookup[kind]; !exists {
lookup[kind] = make(map[string]pgraph.Vertex)
}
// XXX: should we export based on a @@ prefix, or a metaparam
// like exported => true || exported => (host pattern)||(other pattern?)
if !strings.HasPrefix(res.GetName(), "@@") { // not exported resource
fn := func(v pgraph.Vertex) (bool, error) {
return resources.VtoR(v).Compare(res), nil
}
v, err := graph.VertexMatchFn(fn)
if err != nil {
return nil, errwrap.Wrapf(err, "could not VertexMatchFn() resource")
}
if v == nil { // no match found
v = res // a standalone res can be a vertex
graph.AddVertex(v) // call standalone in case not part of an edge
}
lookup[kind][res.GetName()] = v // used for constructing edges
keep = append(keep, v) // append
} else if !noop { // do not export any resources if noop
// store for addition to backend storage...
res.SetName(res.GetName()[2:]) // slice off @@
res.SetKind(kind) // cheap init
resourceList = append(resourceList, res)
}
}
// store in backend (usually etcd)
if err := world.ResExport(resourceList); err != nil {
return nil, fmt.Errorf("Config: Could not export resources: %v", err)
}
// lookup from backend (usually etcd)
var hostnameFilter []string // empty to get from everyone
kindFilter := []string{}
for _, t := range c.Collector {
kind := strings.ToLower(t.Kind)
kindFilter = append(kindFilter, kind)
}
// do all the graph look ups in one single step, so that if the backend
// database changes, we don't have a partial state of affairs...
if len(kindFilter) > 0 { // if kindFilter is empty, don't need to do lookups!
var err error
resourceList, err = world.ResCollect(hostnameFilter, kindFilter)
if err != nil {
return nil, fmt.Errorf("Config: Could not collect resources: %v", err)
}
}
for _, res := range resourceList {
matched := false
// see if we find a collect pattern that matches
for _, t := range c.Collector {
kind := strings.ToLower(t.Kind)
// use t.Kind and optionally t.Pattern to collect from storage
log.Printf("Collect: %v; Pattern: %v", kind, t.Pattern)
// XXX: expand to more complex pattern matching here...
if res.GetKind() != kind {
continue
}
if matched {
// we've already matched this resource, should we match again?
log.Printf("Config: Warning: Matching %s again!", res)
}
matched = true
// collect resources but add the noop metaparam
//if noop { // now done in mgmtmain
// res.Meta().Noop = noop
//}
if t.Pattern != "" { // XXX: simplistic for now
res.CollectPattern(t.Pattern) // res.Dirname = t.Pattern
}
log.Printf("Collect: %s: collected!", res)
// XXX: similar to other resource add code:
if _, exists := lookup[kind]; !exists {
lookup[kind] = make(map[string]pgraph.Vertex)
}
fn := func(v pgraph.Vertex) (bool, error) {
return resources.VtoR(v).Compare(res), nil
}
v, err := graph.VertexMatchFn(fn)
if err != nil {
return nil, errwrap.Wrapf(err, "could not VertexMatchFn() resource")
}
if v == nil { // no match found
v = res // a standalone res can be a vertex
graph.AddVertex(v) // call standalone in case not part of an edge
}
lookup[kind][res.GetName()] = v // used for constructing edges
keep = append(keep, v) // append
//break // let's see if another resource even matches
}
}
for _, e := range c.Edges {
if _, ok := lookup[strings.ToLower(e.From.Kind)]; !ok {
return nil, fmt.Errorf("can't find 'from' resource")
}
if _, ok := lookup[strings.ToLower(e.To.Kind)]; !ok {
return nil, fmt.Errorf("can't find 'to' resource")
}
if _, ok := lookup[strings.ToLower(e.From.Kind)][e.From.Name]; !ok {
return nil, fmt.Errorf("can't find 'from' name")
}
if _, ok := lookup[strings.ToLower(e.To.Kind)][e.To.Name]; !ok {
return nil, fmt.Errorf("can't find 'to' name")
}
from := lookup[strings.ToLower(e.From.Kind)][e.From.Name]
to := lookup[strings.ToLower(e.To.Kind)][e.To.Name]
edge := &resources.Edge{
Name: e.Name,
Notify: e.Notify,
}
graph.AddEdge(from, to, edge)
}
return graph, nil
}
// ParseConfigFromFile takes a filename and returns the graph config structure.
func ParseConfigFromFile(filename string) *GraphConfig {
data, err := ioutil.ReadFile(filename)
if err != nil {
log.Printf("Config: Error: ParseConfigFromFile: File: %v", err)
return nil
}
var config GraphConfig
if err := config.Parse(data); err != nil {
log.Printf("Config: Error: ParseConfigFromFile: Parse: %v", err)
return nil
}
return &config
}
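Finally, a small sketch of feeding the parser directly. The yaml keys mirror the struct tags above (graph, comment, resources, edges, collect); the minimal document below only sets the graph name and a comment, which is all Parse needs to succeed:

package main

import (
	"fmt"
	"log"

	"github.com/purpleidea/mgmt/yamlgraph2"
)

func main() {
	data := []byte("graph: mygraph\ncomment: a minimal example\n")
	var config yamlgraph2.GraphConfig
	if err := config.Parse(data); err != nil {
		log.Fatalf("parse error: %v", err) // an empty graph name would fail here
	}
	fmt.Printf("parsed graph: %s\n", config.Graph)
}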