lang: Initial implementation of the mgmt language

This is an initial implementation of the mgmt language. It is a
declarative (immutable), functional, reactive, domain-specific
programming language. It is intended to be a language that is:

* safe
* powerful
* easy to reason about

With these properties, we hope this language, and the mgmt engine will
allow you to model the real-time systems that you'd like to automate.

This also includes a number of other associated changes. Sorry for the
large size of this patch.
James Shubin
2018-01-20 08:09:29 -05:00
parent 1c8c0b2915
commit b19583e7d3
237 changed files with 25256 additions and 743 deletions

View File

@@ -1,5 +1,8 @@
## Tips:
* please read the style guide before submitting your patch:
[docs/style-guide.md](docs/style-guide.md)
* commit message titles must be in the form:
```topic: Capitalized message with no trailing period```
or:

View File

@@ -10,10 +10,10 @@ repository:
description: Next generation distributed, event-driven, parallel config management!
# A URL with more information about the repository
homepage: https://ttboj.wordpress.com/?s=mgmtconfig
homepage: https://purpleidea.com/tags/mgmtconfig/
# A comma-separated list of topics to set on the repository
topics: golang, go, configuration-management, config-management, devops, etcd, distributed-systems, graph-theory
topics: golang, go, configuration-management, config-management, devops, etcd, distributed-systems, graph-theory, choreography
# Either `true` to make the repository private, or `false` to make it public.
private: false

View File

@@ -16,7 +16,7 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
SHELL = /usr/bin/env bash
.PHONY: all art cleanart version program path deps run race bindata generate build crossbuild clean test gofmt yamlfmt format docs rpmbuild mkdirs rpm srpm spec tar upload upload-sources upload-srpms upload-rpms copr
.PHONY: all art cleanart version program lang path deps run race bindata generate build crossbuild clean test gofmt yamlfmt format docs rpmbuild mkdirs rpm srpm spec tar upload upload-sources upload-srpms upload-rpms copr
.SILENT: clean bindata
GO_FILES := $(shell find . -name '*.go')
@@ -104,12 +104,16 @@ race:
# generate go files from non-go source
bindata:
@echo "Generating: bindata..."
$(MAKE) --quiet -C bindata
generate:
go generate
build: bindata $(PROGRAM)
lang:
@# recursively run make in child dir named lang
@echo "Generating: lang..."
$(MAKE) --quiet -C lang
$(PROGRAM): $(GO_FILES)
@echo "Building: $(PROGRAM), version: $(SVERSION)..."
@@ -128,7 +132,11 @@ $(PROGRAM).static: $(GO_FILES)
go generate
go build -a -installsuffix cgo -tags netgo -ldflags '-extldflags "-static" -X main.program=$(PROGRAM) -X main.version=$(SVERSION) -s -w' -o $(PROGRAM).static $(BUILD_FLAGS);
build: bindata lang $(PROGRAM)
clean:
$(MAKE) --quiet -C bindata clean
$(MAKE) --quiet -C lang clean
[ ! -e $(PROGRAM) ] || rm $(PROGRAM)
rm -f *_stringer.go # generated by `go generate`
rm -f *_mock.go # generated by `go generate`
@@ -138,8 +146,8 @@ test: bindata
gofmt:
# TODO: remove gofmt once goimports has a -s option
find . -maxdepth 3 -type f -name '*.go' -not -path './old/*' -not -path './tmp/*' -exec gofmt -s -w {} \;
find . -maxdepth 3 -type f -name '*.go' -not -path './old/*' -not -path './tmp/*' -exec goimports -w {} \;
find . -maxdepth 6 -type f -name '*.go' -not -path './old/*' -not -path './tmp/*' -not -path './vendor/*' -exec gofmt -s -w {} \;
find . -maxdepth 6 -type f -name '*.go' -not -path './old/*' -not -path './tmp/*' -not -path './vendor/*' -exec goimports -w {} \;
yamlfmt:
find . -maxdepth 3 -type f -name '*.yaml' -not -path './old/*' -not -path './tmp/*' -not -path './omv.yaml' -exec ruby -e "require 'yaml'; x=YAML.load_file('{}').to_yaml.each_line.map(&:rstrip).join(10.chr)+10.chr; File.open('{}', 'w').write x" \;

View File

@@ -18,10 +18,13 @@ Come join us in the `mgmt` community!
| Mailing list | [mgmtconfig-list@redhat.com](https://www.redhat.com/mailman/listinfo/mgmtconfig-list) |
## Status:
Mgmt is a fairly new project.
We're working towards being minimally useful for production environments.
We aren't feature complete for what we'd consider a 1.x release yet.
With your help you'll be able to influence our design and get us there sooner!
Mgmt is a next generation automation tool. It has similarities to other tools in
the configuration management space, but has a fast, modern, distributed systems
approach. The project contains an engine and a language.
[Please have a look at an introductory video or blog post.](docs/on-the-web.md)
Mgmt is a fairly new project. It is usable today, but not yet feature complete.
With your help you'll be able to influence our design and get us to 1.0 sooner!
Interested developers should read the [quick start guide](docs/quick-start-guide.md).
## Documentation:
@@ -33,6 +36,8 @@ Please read, enjoy and help improve our documentation!
| [quick start guide](docs/quick-start-guide.md) | for mgmt developers |
| [frequently asked questions](docs/faq.md) | for everyone |
| [resource guide](docs/resource-guide.md) | for mgmt developers |
| [language guide](docs/language-guide.md) | for everyone |
| [style guide](docs/style-guide.md) | for mgmt developers |
| [godoc API reference](https://godoc.org/github.com/purpleidea/mgmt) | for mgmt developers |
| [prometheus guide](docs/prometheus.md) | for everyone |
| [puppet guide](docs/puppet-guide.md) | for puppet sysadmins |
@@ -49,7 +54,7 @@ Please get involved by working on one of these items or by suggesting something
## Bugs:
Please set the `DEBUG` constant in [main.go](https://github.com/purpleidea/mgmt/blob/master/main.go) to `true`, and post the logs when you report the [issue](https://github.com/purpleidea/mgmt/issues).
Bonus points if you provide a [shell](https://github.com/purpleidea/mgmt/tree/master/test/shell) or [OMV](https://github.com/purpleidea/mgmt/tree/master/test/omv) reproducible test case.
Feel free to read my article on [debugging golang programs](https://ttboj.wordpress.com/2016/02/15/debugging-golang-programs/).
Feel free to read my article on [debugging golang programs](https://purpleidea.com/blog/2016/02/15/debugging-golang-programs/).
## Patches:
We'd love to have your patches! Please send them by email, or as a pull request.

View File

@@ -24,7 +24,6 @@ level and how many hours you'd like to spend on the patch.
- [ ] increment algorithm (linear, exponential, etc...) [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## User/Group resource
- [ ] base resource [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
- [ ] automatic edges to file resource [:heart:](https://github.com/purpleidea/mgmt/labels/mgmtlove)
## Virt (libvirt) resource
@@ -55,8 +54,7 @@ level and how many hours you'd like to spend on the patch.
- [ ] base plumbing
## Language improvements
- [ ] language design
- [ ] lexer/parser
- [ ] more core functions
- [ ] automatic language formatter, ala `gofmt`
- [ ] gedit/gnome-builder/gtksourceview syntax highlighting
- [ ] vim syntax highlighting

View File

@@ -20,14 +20,19 @@
# `bytes, err := bindata.Asset("FILEPATH")`
# where FILEPATH is the path of the original input file relative to `bindata/`.
.PHONY: build
.PHONY: build clean
default: build
build: bindata.go
# add more input files as dependencies at the end here...
bindata.go: ../COPYING
# go-bindata --pkg bindata -o {OUTPUT} {INPUT}
# go-bindata --pkg bindata -o <OUTPUT> <INPUT>
go-bindata --pkg bindata -o ./$@ $^
# gofmt the output file
gofmt -s -w $@
@ROOT=$$(dirname "$${BASH_SOURCE}")/.. && $$ROOT/misc/header.sh '$@'
clean:
# remove generated bindata/*.go
@ROOT=$$(dirname "$${BASH_SOURCE}")/.. && rm *.go

View File

@@ -13,13 +13,13 @@ foundation in and for, new and existing software.
For more information, you may like to read some blog posts from the author:
* [Next generation config mgmt](https://ttboj.wordpress.com/2016/01/18/next-generation-configuration-mgmt/)
* [Automatic edges in mgmt](https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/)
* [Automatic grouping in mgmt](https://ttboj.wordpress.com/2016/03/30/automatic-grouping-in-mgmt/)
* [Automatic clustering in mgmt](https://ttboj.wordpress.com/2016/06/20/automatic-clustering-in-mgmt/)
* [Remote execution in mgmt](https://ttboj.wordpress.com/2016/10/07/remote-execution-in-mgmt/)
* [Send/Recv in mgmt](https://ttboj.wordpress.com/2016/12/07/sendrecv-in-mgmt/)
* [Metaparameters in mgmt](https://ttboj.wordpress.com/2017/03/01/metaparameters-in-mgmt/)
* [Next generation config mgmt](https://purpleidea.com/blog/2016/01/18/next-generation-configuration-mgmt/)
* [Automatic edges in mgmt](https://purpleidea.com/blog/2016/03/14/automatic-edges-in-mgmt/)
* [Automatic grouping in mgmt](https://purpleidea.com/blog/2016/03/30/automatic-grouping-in-mgmt/)
* [Automatic clustering in mgmt](https://purpleidea.com/blog/2016/06/20/automatic-clustering-in-mgmt/)
* [Remote execution in mgmt](https://purpleidea.com/blog/2016/10/07/remote-execution-in-mgmt/)
* [Send/Recv in mgmt](https://purpleidea.com/blog/2016/12/07/sendrecv-in-mgmt/)
* [Metaparameters in mgmt](https://purpleidea.com/blog/2017/03/01/metaparameters-in-mgmt/)
There is also an [introductory video](https://www.youtube.com/watch?v=LkEtBVLfygE&html5=1) available.
Older videos and other material [is available](on-the-web.md).
@@ -62,7 +62,7 @@ the meta attributes of that resource to `false`.
#### Blog post
You can read the introductory blog post about this topic here:
[https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/](https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/)
[https://purpleidea.com/blog/2016/03/14/automatic-edges-in-mgmt/](https://purpleidea.com/blog/2016/03/14/automatic-edges-in-mgmt/)
### Autogrouping
@@ -81,7 +81,7 @@ the meta attributes of that resource to `false`.
#### Blog post
You can read the introductory blog post about this topic here:
[https://ttboj.wordpress.com/2016/03/30/automatic-grouping-in-mgmt/](https://ttboj.wordpress.com/2016/03/30/automatic-grouping-in-mgmt/)
[https://purpleidea.com/blog/2016/03/30/automatic-grouping-in-mgmt/](https://purpleidea.com/blog/2016/03/30/automatic-grouping-in-mgmt/)
### Automatic clustering
@@ -97,7 +97,7 @@ with the `--seeds` variable.
#### Blog post
You can read the introductory blog post about this topic here:
[https://ttboj.wordpress.com/2016/06/20/automatic-clustering-in-mgmt/](https://ttboj.wordpress.com/2016/06/20/automatic-clustering-in-mgmt/)
[https://purpleidea.com/blog/2016/06/20/automatic-clustering-in-mgmt/](https://purpleidea.com/blog/2016/06/20/automatic-clustering-in-mgmt/)
### Remote ("agent-less") mode
@@ -124,7 +124,7 @@ which need to exchange information that is only available at run time.
#### Blog post
You can read the introductory blog post about this topic here:
[https://ttboj.wordpress.com/2016/10/07/remote-execution-in-mgmt/](https://ttboj.wordpress.com/2016/10/07/remote-execution-in-mgmt/)
[https://purpleidea.com/blog/2016/10/07/remote-execution-in-mgmt/](https://purpleidea.com/blog/2016/10/07/remote-execution-in-mgmt/)
### Puppet support
@@ -371,7 +371,7 @@ This is a project that I started in my free time in 2013. Development is driven
by all of our collective patches! Dive right in, and start hacking!
Please contact me if you'd like to invite me to speak about this at your event.
You can follow along [on my technical blog](https://ttboj.wordpress.com/).
You can follow along [on my technical blog](https://purpleidea.com/blog/).
To report any bugs, please file a ticket at: [https://github.com/purpleidea/mgmt/issues](https://github.com/purpleidea/mgmt/issues).
@@ -385,4 +385,4 @@ for more information.
* [github](https://github.com/purpleidea/)
* [&#64;purpleidea](https://twitter.com/#!/purpleidea)
* [https://ttboj.wordpress.com/](https://ttboj.wordpress.com/)
* [https://purpleidea.com/](https://purpleidea.com/)

View File

@@ -76,12 +76,16 @@ anguishing, I chose the name because it was short and I thought it was
appropriately descriptive. If you need a less ambiguous search term or phrase,
you can try using `mgmtconfig` or `mgmt config`.
It also doesn't stand for
[Methyl Guanine Methyl Transferase](https://en.wikipedia.org/wiki/O-6-methylguanine-DNA_methyltransferase)
which definitely existed before the band did.
### You didn't answer my question, or I have a question!
It's best to ask on [IRC](https://webchat.freenode.net/?channels=#mgmtconfig)
to see if someone can help you. Once we get a big enough community going, we'll
add a mailing list. If you don't get any response from the above, you can
contact me through my [technical blog](https://ttboj.wordpress.com/contact/)
contact me through my [technical blog](https://purpleidea.com/contact/)
and I'll do my best to help. If you have a good question, please add it as a
patch to this documentation. I'll merge your question, and add a patch with the
answer!

432
docs/language-guide.md Normal file
View File

@@ -0,0 +1,432 @@
# Language guide
## Overview
The `mgmt` tool has various frontends, each of which may produce a stream of
zero or more graphs that are passed to the engine for desired state
application. In almost all scenarios, you're going to want to use the language
frontend. This guide describes some of the internals of the language.
## Theory
The mgmt language is a declarative (immutable), functional, reactive programming
language. It is implemented in `golang`. A longer introduction to the language
is coming soon!
### Types
All expressions must have a type. A composite type such as a list of strings
(`[]str`) is different from a list of integers (`[]int`).
There _is_ a _variant_ type in the language's type system, but it is only used
internally and only appears briefly when needed for type unification hints
during static polymorphic function generation. This is an advanced topic which
is not required for normal usage of the software.
The implementation of the internal types can be found in
[lang/types/](https://github.com/purpleidea/mgmt/tree/master/lang/types/).
#### bool
A `true` or `false` value.
#### str
Any `"string!"` enclosed in quotes.
#### int
A number like `42` or `-13`. Integers are represented internally as golang's
`int64`.
#### float
A floating point number like `3.1415926`. Floats are represented internally as
golang's `float64`.
#### list
An ordered collection of values of the same type, eg: `[6, 7, 8, 9,]`. It is
worth mentioning that empty lists have a type, although without type hints it
can be impossible to infer the item's type.
#### map
An unordered set of unique keys of the same type and corresponding value pairs
of another type, eg: `{"boiling" => 100, "freezing" => 0, "room" => 25, "house" => 22, "canada" => -30,}`.
That is to say, all of the keys must have the same type, and all of the values
must have the same type. You can use any type for either, although it is
probably advisable to avoid using very complex types as map keys.
#### struct
An ordered set of field names and corresponding values, each of their own type,
eg: `struct{answer => "42", james => "awesome", is_mgmt_awesome => true,}`.
These are useful for combining more than one type into the same value. Note the
syntactical difference between these and maps: the keys in maps have types,
and as a result, string keys are enclosed in quotes, whereas struct _fields_
are not string values, and as such are bare and specified without quotes.
#### func
An ordered set of optionally named, differently typed input arguments, and a
return type, eg: `func(s str) int` or:
`func(bool, []str, {str: float}) struct{foo str; bar int}`.
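The type strings shown above (eg: `[]str`, `{str: int}`, or
`func(a str, b int) float`) can be parsed with the `lang/types` package. The
following is a minimal, illustrative sketch, not code from the repository; it
only assumes the `types.NewType` helper that also appears in the function API
examples later in this guide.
```golang
package main

import (
	"fmt"

	"github.com/purpleidea/mgmt/lang/types"
)

func main() {
	for _, s := range []string{
		"bool",
		"str",
		"int",
		"float",
		"[]str",                    // list of str
		"{str: int}",               // map from str to int
		"struct{answer str; is_mgmt_awesome bool}", // struct with two fields
		"func(a str, b int) float", // function type
	} {
		t := types.NewType(s) // parse the type string
		if t == nil {
			fmt.Printf("could not parse: %s\n", s)
			continue
		}
		fmt.Printf("parsed: %v\n", t)
	}
}
```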
### Expressions
Expressions, and the `Expr` interface need to be better documented. For now
please consume
[lang/interfaces/ast.go](https://github.com/purpleidea/mgmt/tree/master/lang/interfaces/ast.go).
These docs will be expanded on when things are more certain to be stable.
### Statements
Statements, and the `Stmt` interface need to be better documented. For now
please consume
[lang/interfaces/ast.go](https://github.com/purpleidea/mgmt/tree/master/lang/interfaces/ast.go).
These docs will be expanded on when things are more certain to be stable.
### Stages
The mgmt compiler runs in a number of stages. In order of execution they are:
* [Lexing](#lexing)
* [Parsing](#parsing)
* [Interpolation](#interpolation)
* [Scope propagation](#scope-propagation)
* [Type unification](#type-unification)
* [Function graph generation](#function-graph-generation)
* [Function engine creation and validation](#function-engine-creation-and-validation)
All of the above needs to be done every time the source code changes. After this
point, the [function engine runs](#function-engine-running-and-interpret) and
produces events. On every event, we "[interpret](#function-engine-running-and-interpret)"
which produces a resource graph. This series of resource graphs is passed
to the engine as each graph is produced.
What follows are some notes about each step.
#### Lexing
Lexing is done using [nex](https://github.com/blynn/nex). It is a pure-golang
implementation which is similar to _Lex_ or _Flex_, but which produces golang
code instead of C. It integrates reasonably well with golang's _yacc_ which is
used for parsing. The token definitions are in:
[lang/lexer.nex](https://github.com/purpleidea/mgmt/tree/master/lang/lexer.nex).
Lexing and parsing are run together by calling the `LexParse` method.
#### Parsing
The parser used is golang's implementation of
[yacc](https://godoc.org/golang.org/x/tools/cmd/goyacc). The documentation is
quite abysmal, so it's helpful to rely on the documentation from standard yacc
and trial and error. One small advantage goyacc has over standard yacc is that it
can produce error messages from examples. The best documentation is to examine
the source. There is a short write up available [here](https://research.swtch.com/yyerror).
The yacc file exists at:
[lang/parser.y](https://github.com/purpleidea/mgmt/tree/master/lang/parser.y).
Lexing and parsing are run together by calling the `LexParse` method.
#### Interpolation
Interpolation is used to transform the AST (which was produced from lexing and
parsing) into one which is either identical or different. It expands strings
which might contain expressions to be interpolated (eg: `"the answer is: ${foo}"`)
and can be used for other scenarios in which one statement or expression would
be better represented by a larger AST. Most nodes in the AST simply return their
own node address, and do not modify the AST.
#### Scope propagation
Scope propagation passes the parent scope (starting with the top-level, built-in
scope) down through the AST. This is necessary so that children nodes can access
variables in the scope if needed. Most AST nodes simply pass on the scope
without making any changes. The `ExprVar` node naturally consumes scopes and
the `StmtProg` node cleverly passes the scope through in the order expected for
the out-of-order bind logic to work.
#### Type unification
Each expression must have a known type. The unpleasant option is to force the
programmer to specify by annotation every type throughout their whole program
so that each `Expr` node in the AST knows what to expect. Type annotation is
allowed in situations when you want to explicitly specify a type, or when the
compiler cannot deduce it; however, most types can usually be inferred.
For type inference to work, each node in the AST implements a `Unify` method
which is able to return a list of invariants that must hold true. This starts at
the topmost AST node, and gets called through to its children to assemble a
giant list of invariants. The invariants can take different forms. They can
specify that a particular expression must have a particular type, or they can
specify that two expressions must have the same types. More complex invariants
allow you to specify relationships between different types and expressions.
Furthermore, invariants can allow you to specify that only one invariant out of
a set must hold true.
Once the list of invariants has been collected, they are run through an
invariant solver. The solver can either return successfully or with an
error. If the solver returns successfully, it means that it has found a trivial
mapping between every expression and its corresponding type. At this point it
is a simple task to run `SetType` on every expression so that the types are
known. If the solver returns in error, it is usually due to one of two
possibilities:
1. Ambiguity
The solver does not have enough information to make a definitive or
unique determination about the expression to type mappings. The set of
invariants is ambiguous, and we cannot continue. An error will be
returned to the programmer. In this scenario the user will probably need
to add a type annotation, possibly because of a design bug in the user's
program.
2. Conflict
The solver has conflicting information that cannot be reconciled. In
this situation an explicit conflict has been found. If two invariants
are found which both expect a particular expression to have different
types, then it is not possible to find a valid solution. This almost
always happens if the user has made a type error in their program.
Only one solver currently exists, but it is possible to easily plug in an
alternate implementation if someone more skilled in the art of solver design
would like to propose a more logical or performant variant.
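To make the ambiguity and conflict cases concrete, here is a toy, purely
illustrative solver sketch. It is not mgmt's solver and does not use its
invariant types; expressions are plain strings, and only two invariant kinds
exist: "expression has type T" and "two expressions have equal types".
```golang
package main

import "fmt"

// invariant is a toy constraint: either "expr has type typ", or "expr has the
// same type as the expression named in equals".
type invariant struct {
	expr   string
	typ    string // concrete type, or "" if unused
	equals string // other expression name, or "" if unused
}

// solve naively propagates concrete types across equality invariants until a
// fixed point, then reports either a solution, a conflict, or an ambiguity.
func solve(invs []invariant) (map[string]string, error) {
	sol := map[string]string{}
	for changed := true; changed; {
		changed = false
		for _, inv := range invs {
			if inv.typ != "" {
				if t, ok := sol[inv.expr]; ok && t != inv.typ {
					return nil, fmt.Errorf("conflict: %s is both %s and %s", inv.expr, t, inv.typ)
				} else if !ok {
					sol[inv.expr] = inv.typ
					changed = true
				}
			}
			if inv.equals == "" {
				continue
			}
			a, aok := sol[inv.expr]
			b, bok := sol[inv.equals]
			switch {
			case aok && bok && a != b:
				return nil, fmt.Errorf("conflict: %s (%s) != %s (%s)", inv.expr, a, inv.equals, b)
			case aok && !bok:
				sol[inv.equals] = a
				changed = true
			case bok && !aok:
				sol[inv.expr] = b
				changed = true
			}
		}
	}
	for _, inv := range invs { // anything still untyped is ambiguous
		for _, e := range []string{inv.expr, inv.equals} {
			if e == "" {
				continue
			}
			if _, ok := sol[e]; !ok {
				return nil, fmt.Errorf("ambiguous: no type found for %s", e)
			}
		}
	}
	return sol, nil
}

func main() {
	// roughly models: $x = 42; $y = $x  (both must end up as int)
	sol, err := solve([]invariant{
		{expr: "$x", typ: "int"},
		{expr: "$y", equals: "$x"},
	})
	fmt.Println(sol, err) // map[$x:int $y:int] <nil>
}
```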
#### Function graph generation
At this point we have a fully typed AST. The AST must now be transformed into a
directed, acyclic graph (DAG) data structure that represents the flow of data as
necessary for everything to be reactive. Note that this graph is *different*
from the resource graph which is produced and sent to the engine. It is just a
coincidence that both happen to be DAGs. (You don't freak out when you see a
list data structure show up in more than one place, do you?)
To produce this graph, each node has a `Graph` method which it can call. This
starts at the topmost node, and is called down through the AST. The edges in
the graphs must represent the individual expression values which are passed
from node to node. The names of the edges must match the function type argument
names which are used in the definition of the corresponding function. These
corresponding functions must exist for each expression node and are produced by
calling that expression's `Func` method. These are usually called by the
function engine during function creation and validation.
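To make the edge naming rule concrete, here is a tiny illustrative sketch (it
does not use the project's graph library) of a data-flow DAG whose edge names
match the argument names of a hypothetical `add` function with the signature
`func(a int, b int) int`.
```golang
package main

import "fmt"

// vertex is either a constant value or a function in this toy data-flow graph.
type vertex struct {
	name string
}

// edge carries a value from one vertex to another; its name must match the
// argument name of the destination function, eg: "a" or "b".
type edge struct {
	from, to *vertex
	name     string
}

func main() {
	// models something like: $sum = $x + $y
	x := &vertex{name: "const: 42"}
	y := &vertex{name: "const: 13"}
	add := &vertex{name: "func: add"}

	edges := []edge{
		{from: x, to: add, name: "a"}, // feeds argument a of add
		{from: y, to: add, name: "b"}, // feeds argument b of add
	}
	for _, e := range edges {
		fmt.Printf("%s --%s--> %s\n", e.from.name, e.name, e.to.name)
	}
}
```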
#### Function engine creation and validation
Finally we have a graph of the data flows. The function engine must first
initialize, which creates references to each of the necessary function
implementations and gets information about each one. It then needs to be type
checked to ensure that the data flows all correctly match what is expected. If
you were to pass an `int` to a function expecting a `bool`, this would be a
problem. If all goes well, the program should get run shortly.
#### Function engine running and interpret
At this point the function engine runs. It produces a stream of events which
cause the `Output()` method of the top-level program to run, which produces the
list of resources and edges. These are then transformed into the resource graph
which is passed to the engine.
### Function API
If you'd like to create a built-in, core function, you'll need to implement the
function API interface named `Func`. It can be found in
[lang/interfaces/func.go](https://github.com/purpleidea/mgmt/tree/master/lang/interfaces/func.go).
Your function must have a specific type. For example, a simple math function
might have a signature of `func(x int, y int) int`. As you can see, all the
types are known _before_ compile time.
What follows are each of the method signatures and a description of each.
Failure to implement the API correctly can cause the function graph engine to
block, or the program to panic.
### Info
```golang
Info() *Info
```
The Info method must return a struct containing some information about your
function. The struct has the following type:
```golang
type Info struct {
Sig *types.Type // the signature of the function, must be KindFunc
}
```
You must implement this correctly. Other fields in the `Info` struct may be
added in the future. This method is usually called before any other, and should
not depend on any other method being called first. Other methods must not depend
on this method being called first.
#### Example
```golang
func (obj *FooFunc) Info() *interfaces.Info {
return &interfaces.Info{
Sig: types.NewType("func(a str, b int) float"),
}
}
```
### Init
```golang
Init(*Init) error
```
Init is called by the function graph engine to create an implementation of this
function. It is passed in a struct of the following form:
```golang
type Init struct {
Hostname string // uuid for the host
Input chan types.Value // Engine will close `input` chan
Output chan types.Value // Stream must close `output` chan
World resources.World
Debug bool
Logf func(format string, v ...interface{})
}
```
These values and references may be used (wisely) inside your function. `Input`
will contain a channel of input structs matching the expected input signature
for your function. `Output` is the channel to which you must send a value
whenever a new value should be produced. This must be done in the `Stream()`
function. You may carefully use `World` to access functionality provided by the
engine. You may use `Logf` to log informational messages; however, there is no
guarantee that they will be displayed to the user. `Debug` specifies whether the
function is running in a user-requested debug mode, which might prompt you to
print more log messages. You will need to save references to any or all of these
info fields that you wish to use in the struct implementing this `Func`
interface. At a minimum you will need to save `Output`, since at least one value
must be produced.
#### Example
Please see the example functions in
[lang/funcs/public/](https://github.com/purpleidea/mgmt/tree/master/lang/funcs/public/).
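In addition, here is a minimal, hypothetical sketch of an `Init`
implementation. The `FooFunc` struct and its `init` and `closeChan` fields are
assumptions made for illustration only, not code from the repository.
```golang
// FooFunc is a hypothetical function implementation used only in this sketch.
type FooFunc struct {
	init      *interfaces.Init // saved so that Stream() can use Input/Output/Logf
	closeChan chan struct{}    // assumed helper used to unblock Stream() on Close()
}

// Init saves the engine-provided handles for later use in Stream().
func (obj *FooFunc) Init(init *interfaces.Init) error {
	obj.init = init
	obj.closeChan = make(chan struct{})
	return nil
}
```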
### Stream
```golang
Stream() error
```
Stream is called by the function engine when it is ready for your function to
start accepting input and producing output. You must always produce at least one
value. Failure to produce at least one value will probably cause the function
engine to hang waiting for your output. This function must close the `Output`
channel when it has no more values to send. The engine will close the `Input`
channel when it has no more values to send. This may or may not influence
whether or not you close the `Output` channel.
#### Example
Please see the example functions in
[lang/funcs/public/](https://github.com/purpleidea/mgmt/tree/master/lang/funcs/public/).
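Continuing the hypothetical `FooFunc` sketch from the Init section, a `Stream`
implementation might look roughly like the following. The value accessors
(`Struct`, `Str`, `Int`) and the `types.FloatValue` constructor are assumptions
about the `lang/types` package, and the computation itself is a placeholder.
```golang
// Stream reads input structs, computes a placeholder result, and sends it out.
func (obj *FooFunc) Stream() error {
	defer close(obj.init.Output) // always close Output when we're done
	for {
		select {
		case input, ok := <-obj.init.Input:
			if !ok {
				return nil // engine closed Input; we choose to finish
			}
			args := input.Struct()                 // assumed accessor for the input struct
			a := args["a"].Str()                   // matches func(a str, b int) float
			b := args["b"].Int()
			result := float64(len(a)) * float64(b) // placeholder computation
			select {
			case obj.init.Output <- &types.FloatValue{V: result}:
				// value sent
			case <-obj.closeChan:
				return nil
			}
		case <-obj.closeChan:
			return nil
		}
	}
}
```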
### Close
```golang
Close() error
```
Close asks the particular function to shutdown its `Stream()` function and
return.
#### Example
Please see the example functions in
[lang/funcs/public/](https://github.com/purpleidea/mgmt/tree/master/lang/funcs/public/).
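To finish the hypothetical `FooFunc` sketch, a matching `Close` simply unblocks
`Stream` so that it can return:
```golang
// Close signals Stream() to shut down and return.
func (obj *FooFunc) Close() error {
	close(obj.closeChan)
	return nil
}
```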
### Polymorphic Function API
For some functions, it might be helpful to be able to implement a function once,
but to have multiple polymorphic variants that can be chosen at compile time.
For this more advanced topic, you will need to use the
[Polymorphic Function API](#polymorphic-function-api). This will help with code
reuse when you have a small, finite number of possible type signatures, and also
for more complicated cases where you might have an infinite number of possible
type signatures. (eg: `[]str`, or `[][]str`, or `[][][]str`, etc...)
Suppose you want to implement a function which can assume different type
signatures. The mgmt language does not support polymorphic types; you must use
static types throughout the language. However, it is legal to implement a
function which can take different specific type signatures based on how it is
used. For example, you might wish to add a math function which could take the
form of `func(x int, y int) int` or `func(x float, y float) float` depending on
the input values. You might also want to implement a function which takes an
arbitrary number of input arguments (the number must be statically fixed at the
compile time of your program though) and which returns a string.
The `PolyFunc` interface adds additional methods which you must implement to
satisfy such a function implementation. If you'd like to implement such a
function, then please notify the project authors, and they will expand this
section with a longer description of the process.
#### Examples
What follows are a few examples that might help you understand some of the
language details.
##### Example Foo
TODO: please add an example here!
##### Example Bar
TODO: please add an example here!
## Frequently asked questions
(Send your questions as a patch to this FAQ! I'll review it, merge it, and
respond by commit with the answer.)
### What is the difference between `ExprIf` and `StmtIf`?
The language contains both an `if` expression, and an `if` statement. An `if`
expression takes a boolean conditional *and* it must contain exactly _two_
branches (a `then` and an `else` branch) which each contain one expression. The
`if` expression _will_ return the value of one of the two branches based on the
conditional.
#### Example:
```
# this is an if expression, and both branches must exist
$b = true
$x = if $b {
42
} else {
-13
}
```
The `if` statement also takes a boolean conditional, but it may have either one
or two branches. Branches must only directly contain statements. The `if`
statement does not return any value, but it does produce output when it is
evaluated. The output consists primarily of resources (vertices) and edges.
#### Example:
```
# this is an if statement, and in this scenario the else branch was omitted
$b = true
if $b {
file "/tmp/hello" {
content => "world",
}
}
```
### I don't like the mgmt language, is there an alternative?
Yes, the language is just one of the available "frontends" that passes a stream
of graphs to the engine "backend". While it _is_ the recommended way of using
mgmt, you're welcome to either use an alternate frontend, or write your own. To
write your own frontend, you must implement the
[GAPI](https://github.com/purpleidea/mgmt/blob/master/gapi/gapi.go) interface.
### I'm an expert in FRP, and you got it all wrong; even the names of things!
I am certainly no expert in FRP, and I've certainly got lots more to learn. One
thing FRP experts might notice is that some of the concepts from FRP are either
named differently, or are notably absent.
In mgmt, we don't talk about behaviours, events, or signals in the strict FRP
definitions of the words. Firstly, because we only support discretized streams
of values with no plan to add continuous semantics. Secondly, because we prefer
to use terms which are more natural and relatable to what our target audience is
expecting. Our users are more likely to have a background in Physiology, or
systems administration than a background in FRP.
Having said that, we hope that the FRP community will engage with us and help
improve the parts that we got wrong. Even if that means adding continuous
behaviours!
### This is brilliant, may I give you a high-five?
Thank you, and yes, probably. "Props" may also be accepted, although patches are
preferred. If you can't do either, [donations](https://purpleidea.com/misc/donate/)
to support the project are welcome too!
### Where can I find more information about mgmt?
Additional blog posts, videos and other material
[is available!](https://github.com/purpleidea/mgmt/blob/master/docs/on-the-web.md).
## Suggestions
If you have any ideas for changes or other improvements to the language, please
let us know! We're still pre 1.0 and pre 0.1 and happy to change it in order to
get it right!

View File

@@ -6,30 +6,30 @@ if we missed something that you think is relevant!
## Links
| Author | Format | Subject |
|---|---|---|
| James Shubin | blog | [Next generation configuration mgmt](https://ttboj.wordpress.com/2016/01/18/next-generation-configuration-mgmt/) |
| James Shubin | blog | [Next generation configuration mgmt](https://purpleidea.com/blog/2016/01/18/next-generation-configuration-mgmt/) |
| James Shubin | video | [Introductory recording from DevConf.cz 2016](https://www.youtube.com/watch?v=GVhpPF0j-iE&html5=1) |
| James Shubin | video | [Introductory recording from CfgMgmtCamp.eu 2016](https://www.youtube.com/watch?v=fNeooSiIRnA&html5=1) |
| Julian Dunn | video | [On mgmt at CfgMgmtCamp.eu 2016](https://www.youtube.com/watch?v=kfF9IATUask&t=1949&html5=1) |
| Walter Heck | slides | [On mgmt at CfgMgmtCamp.eu 2016](http://www.slideshare.net/olindata/configuration-management-time-for-a-4th-generation/3) |
| Marco Marongiu | blog | [On mgmt](http://syslog.me/2016/02/15/leap-or-die/) |
| Felix Frank | blog | [From Catalog To Mgmt (on puppet to mgmt "transpiling")](https://ffrank.github.io/features/2016/02/18/from-catalog-to-mgmt/) |
| James Shubin | blog | [Automatic edges in mgmt (...and the pkg resource)](https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/) |
| James Shubin | blog | [Automatic grouping in mgmt](https://ttboj.wordpress.com/2016/03/30/automatic-grouping-in-mgmt/) |
| James Shubin | blog | [Automatic edges in mgmt (...and the pkg resource)](https://purpleidea.com/blog/2016/03/14/automatic-edges-in-mgmt/) |
| James Shubin | blog | [Automatic grouping in mgmt](https://purpleidea.com/blog/2016/03/30/automatic-grouping-in-mgmt/) |
| John Arundel | tweet | [“Puppets days are numbered.”](https://twitter.com/bitfield/status/732157519142002688) |
| Felix Frank | blog | [Puppet, Meet Mgmt (on puppet to mgmt internals)](https://ffrank.github.io/features/2016/06/12/puppet,-meet-mgmt/) |
| Felix Frank | blog | [Puppet Powered Mgmt (puppet to mgmt tl;dr)](https://ffrank.github.io/features/2016/06/19/puppet-powered-mgmt/) |
| James Shubin | blog | [Automatic clustering in mgmt](https://ttboj.wordpress.com/2016/06/20/automatic-clustering-in-mgmt/) |
| James Shubin | blog | [Automatic clustering in mgmt](https://purpleidea.com/blog/2016/06/20/automatic-clustering-in-mgmt/) |
| James Shubin | video | [Recording from CoreOSFest 2016](https://www.youtube.com/watch?v=KVmDCUA42wc&html5=1) |
| James Shubin | video | [Recording from DebConf16](http://meetings-archive.debian.net/pub/debian-meetings/2016/debconf16/Next_Generation_Config_Mgmt.webm) ([Slides](https://annex.debconf.org//debconf-share/debconf16/slides/15-next-generation-config-mgmt.pdf)) |
| Felix Frank | blog | [Edging It All In (puppet and mgmt edges)](https://ffrank.github.io/features/2016/07/12/edging-it-all-in/) |
| Felix Frank | blog | [Translating All The Things (puppet to mgmt translation warnings)](https://ffrank.github.io/features/2016/08/19/translating-all-the-things/) |
| James Shubin | video | [Recording from systemd.conf 2016](https://www.youtube.com/watch?v=jB992Zb3nH0&html5=1) |
| James Shubin | blog | [Remote execution in mgmt](https://ttboj.wordpress.com/2016/10/07/remote-execution-in-mgmt/) |
| James Shubin | blog | [Remote execution in mgmt](https://purpleidea.com/blog/2016/10/07/remote-execution-in-mgmt/) |
| James Shubin | video | [Recording from High Load Strategy 2016](https://vimeo.com/191493409) |
| James Shubin | video | [Recording from NLUUG 2016](https://www.youtube.com/watch?v=MmpwOQAb_SE&html5=1) |
| James Shubin | blog | [Send/Recv in mgmt](https://ttboj.wordpress.com/2016/12/07/sendrecv-in-mgmt/) |
| James Shubin | blog | [Send/Recv in mgmt](https://purpleidea.com/blog/2016/12/07/sendrecv-in-mgmt/) |
| Julien Pivotto | blog | [Augeas resource for mgmt](https://roidelapluie.be/blog/2017/02/14/mgmt-augeas/) |
| James Shubin | blog | [Metaparameters in mgmt](https://ttboj.wordpress.com/2017/03/01/metaparameters-in-mgmt/) |
| James Shubin | blog | [Metaparameters in mgmt](https://purpleidea.com/blog/2017/03/01/metaparameters-in-mgmt/) |
| James Shubin | video | [Recording from Incontro DevOps 2017](https://vimeo.com/212241877) |
| Yves Brissaud | blog | [mgmt aux HumanTalks Grenoble (french)](http://log.winsos.net/2017/04/12/mgmt-aux-human-talks-grenoble.html) |
| James Shubin | video | [Recording from OSDC Berlin 2017](https://www.youtube.com/watch?v=LkEtBVLfygE&html5=1) |

View File

@@ -4,7 +4,7 @@
This guide is intended for developers. Once `mgmt` is minimally viable, we'll
publish a quick start guide for users too. If you're brand new to `mgmt`, it's
probably a good idea to start by reading the
[introductory article](https://ttboj.wordpress.com/2016/01/18/next-generation-configuration-mgmt/)
[introductory article](https://purpleidea.com/blog/2016/01/18/next-generation-configuration-mgmt/)
or to watch an [introductory video](https://www.youtube.com/watch?v=LkEtBVLfygE&html5=1).
Once you're familiar with the general idea, please start hacking...
@@ -38,14 +38,12 @@ cd $GOPATH/src/github.com/purpleidea/mgmt
* Run `make build` to get a freshly built `mgmt` binary.
### Running mgmt
* Run `time ./mgmt run --yaml examples/graph0.yaml --converged-timeout=5 --tmp-prefix` to try out a very simple example!
* To run continuously in the default mode of operation, omit the `--converged-timeout` option.
* Run `time ./mgmt run --lang examples/lang/hello0.mcl --tmp-prefix` to try out a very simple example!
* Look in that example file that you ran to see if you can figure out what it did!
* The yaml frontend is provided as a developer tool to test the engine until the language is ready.
* Have fun hacking on our future technology and get involved to shape the project!
## Examples
Please look in the [examples/](../examples/) folder for some more examples!
Please look in the [examples/lang/](../examples/lang/) folder for some more examples!
## Vagrant
If you would like to avoid doing the above steps manually, we have prepared a

View File

@@ -16,7 +16,7 @@ Resources in `mgmt` are similar to resources in other systems in that they are
uniquely different in that they can detect when their state has changed, and as
a result can run to revert or repair this change instantly. For some background
on this design, please read the
[original article](https://ttboj.wordpress.com/2016/01/18/next-generation-configuration-mgmt/)
[original article](https://purpleidea.com/blog/2016/01/18/next-generation-configuration-mgmt/)
on the subject.
## Resource API
@@ -465,14 +465,14 @@ func init() { // special golang method that runs once
```
## Automatic edges
Automatic edges in `mgmt` are well described in [this article](https://ttboj.wordpress.com/2016/03/14/automatic-edges-in-mgmt/).
Automatic edges in `mgmt` are well described in [this article](https://purpleidea.com/blog/2016/03/14/automatic-edges-in-mgmt/).
The best example of this technique can be seen in the `svc` resource.
Unfortunately no further documentation about this subject has been written. To
expand this section, please send a patch! Please contact us if you'd like to
work on a resource that uses this feature, or to add it to an existing one!
## Automatic grouping
Automatic grouping in `mgmt` is well described in [this article](https://ttboj.wordpress.com/2016/03/30/automatic-grouping-in-mgmt/).
Automatic grouping in `mgmt` is well described in [this article](https://purpleidea.com/blog/2016/03/30/automatic-grouping-in-mgmt/).
The best example of this technique can be seen in the `pkg` resource.
Unfortunately no further documentation about this subject has been written. To
expand this section, please send a patch! Please contact us if you'd like to
@@ -481,7 +481,7 @@ work on a resource that uses this feature, or to add it to an existing one!
## Send/Recv
In `mgmt` there is a novel concept called _Send/Recv_. For some background,
please [read the introductory article](https://ttboj.wordpress.com/2016/12/07/sendrecv-in-mgmt/).
please [read the introductory article](https://purpleidea.com/blog/2016/12/07/sendrecv-in-mgmt/).
When using this feature, the engine will automatically send the user specified
value to the intended destination without requiring any resource specific code.
Any time that one of the destination values is changed, the engine automatically
@@ -547,7 +547,7 @@ There are still many ideas for new resources that haven't been written yet. If
you'd like to contribute one, please contact us and tell us about your idea!
### Where can I find more information about mgmt?
Additional blog posts, videos and other material [is available!](https://github.com/purpleidea/mgmt/#on-the-web).
Additional blog posts, videos and other material [is available!](https://github.com/purpleidea/mgmt/blob/master/docs/on-the-web.md).
## Suggestions
If you have any ideas for API changes or other improvements to resource writing,

95
docs/style-guide.md Normal file
View File

@@ -0,0 +1,95 @@
# Style guide
## Overview
This document aims to be a reference for the desired style for patches to mgmt.
In particular it describes conventions which we use which are not officially
enforced by the `gofmt` tool, and which might not be clearly defined elsewhere.
Most of these are common sense to seasoned programmers, and we hope this will be
a useful reference for new programmers.
There are a lot of useful code review comments described
[here](https://github.com/golang/go/wiki/CodeReviewComments). We don't
necessarily follow everything strictly, but it is in general a very good guide.
## Basics
* All of our golang code is formatted with `gofmt`.
## Comments
All of our code is commented with the minimums required for `godoc` to function,
and so that our comments pass `golint`. Code comments should either be full
sentences (which end with a period, use proper punctuation, and capitalize the
first word when it is not a lower cased identifier), or short one-line
comments in the source which are not full sentences and don't end with a period.
They should explain algorithms, describe non-obvious behaviour, or situations
which would otherwise need explanation or additional research during a code
review. Notes about the use of unfamiliar APIs are a good idea for a code comment.
### Example
Here you can see a function with the correct `godoc` string. The first word must
match the name of the function. It is _not_ capitalized because the function is
private.
```golang
// square multiplies the input integer by itself and returns this product.
func square(x int) int {
return x * x // we don't care about overflow errors
}
```
## Line length
In general we try to stick to 80 character lines when it is appropriate. It is
almost *always* appropriate for function `godoc` comments and most longer
paragraphs. Exceptions are always allowed based on the will of the maintainer.
It is usually better to exceed 80 characters than to break code unnecessarily.
If your code often exceeds 80 characters, it might be an indication that it
needs refactoring.
Occasionally, inline two-line source code comments are used within a function.
These should usually be balanced so that you don't have one line with 78
characters and the second with only four; split the comment between the two lines.
## Method receiver naming
[Contrary](https://github.com/golang/go/wiki/CodeReviewComments#receiver-names)
to the specialized naming of the method receiver variable, we usually name all
of these `obj` for ease of code copying throughout the project, and for faster
identification when reviewing code. Some anecdotal studies have shown that it
makes the code easier to read since you don't need to remember the name of the
method receiver variable in each different method. This is very similar to what
is done in `python`.
### Example
```golang
// Bar does a thing, and returns the number of baz results found in our
// database.
func (obj *Foo) Bar(baz string) int {
if len(obj.s) > 0 {
return strings.Count(obj.s, baz)
}
return -1
}
```
## Consistent ordering
In general we try to preserve a logical ordering in source files which usually
matches the common order of execution that a _lazy evaluator_ would follow.
This is also the order which is recommended when creating interface types. When
implementing an interface, arrange your methods in the same order that they are
declared in the interface.
When implementing code for the various types in the language, please follow this
order: `bool`, `str`, `int`, `float`, `list`, `map`, `struct`, `func`.
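As a small illustration of matching implementation order to declaration order
(the `Renamer` interface and `fileRenamer` type here are hypothetical):
```golang
// Renamer is a hypothetical interface used only to illustrate ordering.
type Renamer interface {
	Validate() error
	Rename(string) error
	Close() error
}

type fileRenamer struct {
	path string
}

// implement the methods in the same order that the interface declares them
func (obj *fileRenamer) Validate() error       { return nil }
func (obj *fileRenamer) Rename(s string) error { obj.path = s; return nil }
func (obj *fileRenamer) Close() error          { return nil }
```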
## Suggestions
If you have any ideas for suggestions or other improvements to this guide,
please let us know!

94
etcd/client.go Normal file
View File

@@ -0,0 +1,94 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package etcd
import (
"time"
etcd "github.com/coreos/etcd/clientv3" // "clientv3"
errwrap "github.com/pkg/errors"
context "golang.org/x/net/context"
)
// ClientEtcd provides a simple etcd client for deploy and status operations.
type ClientEtcd struct {
Seeds []string // list of endpoints to try to connect
client *etcd.Client
}
// GetClient returns a handle to the raw etcd client object.
func (obj *ClientEtcd) GetClient() *etcd.Client {
return obj.client
}
// GetConfig returns the config struct to be used for the etcd client connect.
func (obj *ClientEtcd) GetConfig() etcd.Config {
cfg := etcd.Config{
Endpoints: obj.Seeds,
// RetryDialer chooses the next endpoint to use
// it comes with a default dialer if unspecified
DialTimeout: 5 * time.Second,
}
return cfg
}
// Connect connects the client to a server, and then builds the *API structs.
func (obj *ClientEtcd) Connect() error {
if obj.client != nil { // memoize
return nil
}
var err error
cfg := obj.GetConfig()
obj.client, err = etcd.New(cfg) // connect!
if err != nil {
return errwrap.Wrapf(err, "client connect error")
}
return nil
}
// Destroy cleans up the entire etcd client connection.
func (obj *ClientEtcd) Destroy() error {
err := obj.client.Close()
//obj.wg.Wait()
return err
}
// Get runs a get on the client connection. This has the same signature as our
// EmbdEtcd Get function.
func (obj *ClientEtcd) Get(path string, opts ...etcd.OpOption) (map[string]string, error) {
resp, err := obj.client.Get(context.TODO(), path, opts...)
if err != nil || resp == nil {
return nil, err
}
// TODO: write a resp.ToMap() function on https://godoc.org/github.com/coreos/etcd/etcdserver/etcdserverpb#RangeResponse
result := make(map[string]string)
for _, x := range resp.Kvs {
result[string(x.Key)] = string(x.Value)
}
return result, nil
}
// Txn runs a transaction on the client connection. This has the same signature
// as our EmbdEtcd Txn function.
func (obj *ClientEtcd) Txn(ifcmps []etcd.Cmp, thenops, elseops []etcd.Op) (*etcd.TxnResponse, error) {
return obj.client.KV.Txn(context.TODO()).If(ifcmps...).Then(thenops...).Else(elseops...).Commit()
}
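As an editorial aside, not part of this commit, a caller might use the new
`ClientEtcd` type roughly as follows. The endpoint and the key prefix are
assumptions for illustration.
```golang
package main

import (
	"fmt"
	"log"

	etcdv3 "github.com/coreos/etcd/clientv3"
	"github.com/purpleidea/mgmt/etcd"
)

func main() {
	client := &etcd.ClientEtcd{
		Seeds: []string{"127.0.0.1:2379"}, // assumed local etcd endpoint
	}
	if err := client.Connect(); err != nil {
		log.Fatalf("connect error: %v", err)
	}
	defer client.Destroy()

	// read all keys under the mgmt deploy prefix
	kvs, err := client.Get("/_mgmt/deploy/", etcdv3.WithPrefix())
	if err != nil {
		log.Fatalf("get error: %v", err)
	}
	for k, v := range kvs {
		fmt.Printf("%s => %s\n", k, v)
	}
}
```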

171
etcd/deploy.go Normal file
View File

@@ -0,0 +1,171 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package etcd
import (
"fmt"
"strconv"
"strings"
etcd "github.com/coreos/etcd/clientv3"
errwrap "github.com/pkg/errors"
)
const (
deployPath = "deploy"
payloadPath = "payload"
hashPath = "hash"
)
// WatchDeploy returns a channel which spits out events on new deploy activity.
// FIXME: It should close the channel when it's done, and spit out errors when
// something goes wrong.
func WatchDeploy(obj *EmbdEtcd) chan error {
// key structure is $NS/deploy/$id/payload = $data
path := fmt.Sprintf("%s/%s/", NS, deployPath)
ch := make(chan error, 1)
// FIXME: fix our API so that we get a close event on shutdown.
callback := func(re *RE) error {
// TODO: is this even needed? it used to happen on conn errors
//log.Printf("Etcd: Watch: Path: %v", path) // event
if re == nil || re.response.Canceled {
return fmt.Errorf("watch is empty") // will cause a CtxError+retry
}
if len(ch) == 0 { // send event only if one isn't pending
ch <- nil // event
}
return nil
}
_, _ = obj.AddWatcher(path, callback, true, false, etcd.WithPrefix()) // no need to check errors
return ch
}
// GetDeploys gets all the available deploys.
func GetDeploys(obj Client) (map[uint64]string, error) {
// key structure is $NS/deploy/$id/payload = $data
path := fmt.Sprintf("%s/%s/", NS, deployPath)
keyMap, err := obj.Get(path, etcd.WithPrefix(), etcd.WithSort(etcd.SortByKey, etcd.SortAscend))
if err != nil {
return nil, errwrap.Wrapf(err, "could not get deploy")
}
result := make(map[uint64]string)
for key, val := range keyMap {
if !strings.HasPrefix(key, path) { // sanity check
continue
}
str := strings.Split(key[len(path):], "/")
if len(str) != 2 {
return nil, fmt.Errorf("unexpected chunk count of %d", len(str))
}
if s := str[1]; s != payloadPath {
continue // skip, maybe there are other future additions
}
var id uint64
var err error
x := str[0]
if id, err = strconv.ParseUint(x, 10, 64); err != nil {
return nil, fmt.Errorf("invalid id of `%s`", x)
}
// TODO: do some sort of filtering here?
//log.Printf("Etcd: GetDeploys(%s): Id => Data: %d => %s", key, id, val)
result[id] = val
}
return result, nil
}
// GetDeploy gets the latest deploy if id == 0, otherwise it returns the deploy
// with the specified id if it exists.
// FIXME: implement this more efficiently so that it doesn't have to download *all* the old deploys from etcd!
func GetDeploy(obj Client, id uint64) (string, error) {
result, err := GetDeploys(obj)
if err != nil {
return "", err
}
if id != 0 {
str, exists := result[id]
if !exists {
return "", fmt.Errorf("can't find id `%d`", id)
}
return str, nil
}
// find the latest id
var max uint64
for i := range result {
if i > max {
max = i
}
}
if max == 0 {
return "", nil // no results yet
}
return result[max], nil
}
// AddDeploy adds a new deploy. It takes an id and ensures it's sequential. If
// hash is not empty, then it will check that the pHash matches what the
// previous hash was, and also adds this new hash alongside the id. This is
// useful to make sure you get a linear chain of git patches, and to avoid two
// contributors pushing conflicting deploys. This isn't git specific, and so any
// arbitrary string hash can be used.
// FIXME: prune old deploys from the store when they aren't needed anymore...
func AddDeploy(obj Client, id uint64, hash, pHash string, data *string) error {
// key structure is $NS/deploy/$id/payload = $data
// key structure is $NS/deploy/$id/hash = $hash
path := fmt.Sprintf("%s/%s/%d/%s", NS, deployPath, id, payloadPath)
tPath := fmt.Sprintf("%s/%s/%d/%s", NS, deployPath, id, hashPath)
ifs := []etcd.Cmp{} // list matching the desired state
ops := []etcd.Op{} // list of ops in this transaction (then)
// TODO: use https://github.com/coreos/etcd/pull/7417 if merged
// we're append only, so ensure this unique deploy id doesn't exist
ifs = append(ifs, etcd.Compare(etcd.Version(path), "=", 0)) // KeyMissing
//ifs = append(ifs, etcd.KeyMissing(path))
// don't look for previous deploy if this is the first deploy ever
if id > 1 {
// we append sequentially, so ensure previous key *does* exist
prev := fmt.Sprintf("%s/%s/%d/%s", NS, deployPath, id-1, payloadPath)
ifs = append(ifs, etcd.Compare(etcd.Version(prev), ">", 0)) // KeyExists
//ifs = append(ifs, etcd.KeyExists(prev))
if hash != "" && pHash != "" {
// does the previously stored hash match what we expect?
prevHash := fmt.Sprintf("%s/%s/%d/%s", NS, deployPath, id-1, hashPath)
ifs = append(ifs, etcd.Compare(etcd.Value(prevHash), "=", pHash))
}
}
ops = append(ops, etcd.OpPut(path, *data))
if hash != "" {
ops = append(ops, etcd.OpPut(tPath, hash)) // store new hash as well
}
// it's important to do this in one transaction, and atomically, because
// this way, we only generate one watch event, and only when it's needed
result, err := obj.Txn(ifs, ops, nil)
if err != nil {
return errwrap.Wrapf(err, "error creating deploy id %d", id)
}
if !result.Succeeded {
return fmt.Errorf("could not create deploy id %d", id)
}
return nil // success
}
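Again as an illustrative usage note (not part of this commit), a deployment
tool might chain these helpers roughly as follows; the package name and error
handling are assumptions.
```golang
package deploys // illustrative package name

import "github.com/purpleidea/mgmt/etcd"

// NextDeploy appends a new deploy after the latest one. AddDeploy verifies
// that the previous id exists, so two concurrent pushers can't both win.
func NextDeploy(client etcd.Client, payload, hash, prevHash string) error {
	deploys, err := etcd.GetDeploys(client)
	if err != nil {
		return err
	}
	var max uint64
	for id := range deploys { // find the highest existing deploy id
		if id > max {
			max = id
		}
	}
	return etcd.AddDeploy(client, max+1, hash, prevHash, &payload)
}
```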

View File

@@ -79,14 +79,14 @@ import (
// constant parameters which may need to be tweaked or customized
const (
NS = "_mgmt" // root namespace for mgmt operations
seedSentinel = "_seed" // you must not name your hostname this
MaxStartServerTimeout = 60 // max number of seconds to wait for server to start
MaxStartServerRetries = 3 // number of times to retry starting the etcd server
maxClientConnectRetries = 5 // number of times to retry consecutive connect failures
selfRemoveTimeout = 3 // give unnominated members a chance to self exit
exitDelay = 3 // number of sec of inactivity after exit to clean up
DefaultIdealClusterSize = 5 // default ideal cluster size target for initial seed
NS = "/_mgmt" // root namespace for mgmt operations
seedSentinel = "_seed" // you must not name your hostname this
MaxStartServerTimeout = 60 // max number of seconds to wait for server to start
MaxStartServerRetries = 3 // number of times to retry starting the etcd server
maxClientConnectRetries = 5 // number of times to retry consecutive connect failures
selfRemoveTimeout = 3 // give unnominated members a chance to self exit
exitDelay = 3 // number of sec of inactivity after exit to clean up
DefaultIdealClusterSize = 5 // default ideal cluster size target for initial seed
DefaultClientURL = "127.0.0.1:2379"
DefaultServerURL = "127.0.0.1:2380"
)
@@ -170,11 +170,12 @@ type EmbdEtcd struct { // EMBeddeD etcd
ctxErr error // permanent ctx error
// exit and cleanup related
cancelLock sync.Mutex // lock for the cancels list
cancels []func() // array of every cancel function for watches
exiting bool
exitchan chan struct{}
exitTimeout <-chan time.Time
cancelLock sync.Mutex // lock for the cancels list
cancels []func() // array of every cancel function for watches
exiting bool
exitchan chan struct{}
exitchanCb chan struct{}
exitwg *sync.WaitGroup // wait for main loops to shutdown
hostname string
memberID uint64 // cluster membership id of server if running
@@ -220,14 +221,15 @@ func NewEmbdEtcd(hostname string, seeds, clientURLs, serverURLs, advertiseClient
idealClusterSize = 0 // unset, get from running cluster
}
obj := &EmbdEtcd{
exitchan: make(chan struct{}), // exit signal for main loop
exitTimeout: nil,
awq: make(chan *AW),
wevents: make(chan *RE),
setq: make(chan *KV),
getq: make(chan *GQ),
delq: make(chan *DL),
txnq: make(chan *TN),
exitchan: make(chan struct{}), // exit signal for main loop
exitchanCb: make(chan struct{}),
exitwg: &sync.WaitGroup{},
awq: make(chan *AW),
wevents: make(chan *RE),
setq: make(chan *KV),
getq: make(chan *GQ),
delq: make(chan *DL),
txnq: make(chan *TN),
nominated: make(etcdtypes.URLsMap),
@@ -265,6 +267,11 @@ func NewEmbdEtcd(hostname string, seeds, clientURLs, serverURLs, advertiseClient
return obj
}
// GetClient returns a handle to the raw etcd client object for those scenarios.
func (obj *EmbdEtcd) GetClient() *etcd.Client {
return obj.client
}
// GetConfig returns the config struct to be used for the etcd client connect.
func (obj *EmbdEtcd) GetConfig() etcd.Config {
endpoints := []string{}
@@ -363,11 +370,11 @@ func (obj *EmbdEtcd) Startup() error {
go obj.Loop() // start main loop
// TODO: implement native etcd watcher method on member API changes
path := fmt.Sprintf("/%s/nominated/", NS)
path := fmt.Sprintf("%s/nominated/", NS)
go obj.AddWatcher(path, obj.nominateCallback, true, false, etcd.WithPrefix()) // no block
// setup ideal cluster size watcher
key := fmt.Sprintf("/%s/idealClusterSize", NS)
key := fmt.Sprintf("%s/idealClusterSize", NS)
go obj.AddWatcher(key, obj.idealClusterSizeCallback, true, false) // no block
// if we have no endpoints, it means we are bootstrapping...
@@ -393,7 +400,7 @@ func (obj *EmbdEtcd) Startup() error {
}
if !obj.noServer {
path := fmt.Sprintf("/%s/volunteers/", NS)
path := fmt.Sprintf("%s/volunteers/", NS)
go obj.AddWatcher(path, obj.volunteerCallback, true, false, etcd.WithPrefix()) // no block
}
@@ -431,7 +438,7 @@ func (obj *EmbdEtcd) Startup() error {
}
}
go obj.AddWatcher(fmt.Sprintf("/%s/endpoints/", NS), obj.endpointCallback, true, false, etcd.WithPrefix())
go obj.AddWatcher(fmt.Sprintf("%s/endpoints/", NS), obj.endpointCallback, true, false, etcd.WithPrefix())
if err := obj.Connect(false); err != nil { // don't exit from this Startup function until connected!
return err
@@ -461,7 +468,8 @@ func (obj *EmbdEtcd) Destroy() error {
}
obj.cancelLock.Unlock()
obj.exitchan <- struct{}{} // cause main loop to exit
close(obj.exitchan) // cause main loop to exit
close(obj.exitchanCb)
obj.rLock.Lock()
if obj.client != nil {
@@ -474,6 +482,7 @@ func (obj *EmbdEtcd) Destroy() error {
//if obj.server != nil {
// return obj.DestroyServer()
//}
obj.exitwg.Wait()
return nil
}
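Destroy now signals the loops by closing the exit channels and then blocks on the new WaitGroup until those loops have actually returned. A condensed, self-contained sketch of that handshake (the type and method names here are illustrative, not the patch itself):

```go
package main

import (
	"sync"
	"time"
)

type embd struct {
	exitchan chan struct{}
	exitwg   *sync.WaitGroup
}

func (obj *embd) loop() {
	defer obj.exitwg.Done()
	for {
		select {
		case <-obj.exitchan: // closed by destroy()
			return
		case <-time.After(time.Second):
			// periodic work would happen here...
		}
	}
}

func (obj *embd) destroy() {
	close(obj.exitchan) // broadcast the exit signal to every loop
	obj.exitwg.Wait()   // block until they have all finished
}

func main() {
	obj := &embd{exitchan: make(chan struct{}), exitwg: &sync.WaitGroup{}}
	obj.exitwg.Add(1)
	go obj.loop()
	time.Sleep(10 * time.Millisecond)
	obj.destroy()
}
```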
@@ -715,12 +724,15 @@ func (obj *EmbdEtcd) CtxError(ctx context.Context, err error) (context.Context,
// CbLoop is the loop where callback execution is serialized.
func (obj *EmbdEtcd) CbLoop() {
obj.exitwg.Add(1)
defer obj.exitwg.Done()
cuid := obj.converger.Register()
cuid.SetName("Etcd: CbLoop")
defer cuid.Unregister()
if e := obj.Connect(false); e != nil {
return // fatal
}
var exitTimeout <-chan time.Time // = nil is implied
// we use this timer because when we ignore un-converge events and loop,
// we reset the ConvergedTimer case statement, ruining the timeout math!
cuid.StartTimer()
@@ -760,8 +772,18 @@ func (obj *EmbdEtcd) CbLoop() {
log.Printf("Trace: Etcd: CbLoop: Event: FinishLoop")
}
// exit loop signal
case <-obj.exitchanCb:
obj.exitchanCb = nil
log.Println("Etcd: Exiting loop shortly...")
// activate exitTimeout switch which only opens after N
// seconds of inactivity in this select switch, which
// lets everything get bled dry to avoid blocking calls
// which would otherwise block us from exiting cleanly!
exitTimeout = util.TimeAfterOrBlock(exitDelay)
// exit loop commit
case <-obj.exitTimeout:
case <-exitTimeout:
log.Println("Etcd: Exiting callback loop!")
cuid.StopTimer() // clean up nicely
return
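The exit is two-phase: the first signal only arms a drain timer (via util.TimeAfterOrBlock), so queued events keep being processed, and the loop only returns once that timer fires. A rough, runnable sketch of the same select pattern with made-up work items (time.After stands in for the util helper):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	work := make(chan string, 3)
	work <- "a"
	work <- "b"
	work <- "c"
	exit := make(chan struct{})
	close(exit) // pretend Destroy was already called

	var exitTimeout <-chan time.Time // nil: this case blocks until armed
	for {
		select {
		case w := <-work:
			fmt.Println("drained:", w)
		case <-exit:
			exit = nil // never take this branch again
			exitTimeout = time.After(100 * time.Millisecond) // arm the drain timer
		case <-exitTimeout:
			fmt.Println("exiting loop!")
			return
		}
	}
}
```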
@@ -771,12 +793,15 @@ func (obj *EmbdEtcd) CbLoop() {
// Loop is the main loop where everything is serialized.
func (obj *EmbdEtcd) Loop() {
obj.exitwg.Add(1) // TODO: add these to other go routines?
defer obj.exitwg.Done()
cuid := obj.converger.Register()
cuid.SetName("Etcd: Loop")
defer cuid.Unregister()
if e := obj.Connect(false); e != nil {
return // fatal
}
var exitTimeout <-chan time.Time // = nil is implied
cuid.StartTimer()
for {
ctx := context.Background() // TODO: inherit as input argument?
@@ -911,15 +936,16 @@ func (obj *EmbdEtcd) Loop() {
// exit loop signal
case <-obj.exitchan:
obj.exitchan = nil
log.Println("Etcd: Exiting loop shortly...")
// activate exitTimeout switch which only opens after N
// seconds of inactivity in this select switch, which
// lets everything get bled dry to avoid blocking calls
// which would otherwise block us from exiting cleanly!
obj.exitTimeout = util.TimeAfterOrBlock(exitDelay)
exitTimeout = util.TimeAfterOrBlock(exitDelay)
// exit loop commit
case <-obj.exitTimeout:
case <-exitTimeout:
log.Println("Etcd: Exiting loop!")
cuid.StopTimer() // clean up nicely
return
@@ -1597,7 +1623,7 @@ func (obj *EmbdEtcd) idealClusterSizeCallback(re *RE) error {
log.Printf("Trace: Etcd: idealClusterSizeCallback()")
defer log.Printf("Trace: Etcd: idealClusterSizeCallback(): Finished!")
}
path := fmt.Sprintf("/%s/idealClusterSize", NS)
path := fmt.Sprintf("%s/idealClusterSize", NS)
for _, event := range re.response.Events {
if key := bytes.NewBuffer(event.Kv.Key).String(); key != path {
continue

543
etcd/fs/file.go Normal file
View File

@@ -0,0 +1,543 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package fs
import (
"bytes"
"encoding/gob"
"fmt"
"io"
"log"
"os"
"path"
"strings"
"syscall"
"time"
etcd "github.com/coreos/etcd/clientv3" // "clientv3"
errwrap "github.com/pkg/errors"
)
func init() {
gob.Register(&File{})
}
// File represents a file node. This is the node of our tree structure. This is
// not thread safe, and you can have at most one open file handle at a time.
type File struct {
// FIXME: add a rwmutex to make this thread safe
fs *Fs // pointer to file system
Path string // relative path to file, trailing slash if it's a directory
Mode os.FileMode
ModTime time.Time
//Size int64 // XXX: cache the size to avoid full file downloads for stat!
Children []*File // dir's use this
Hash string // string not []byte so it's readable, matches data
data []byte // cache of the data. private so it doesn't get encoded
cursor int64
dirCursor int64
readOnly bool // is the file read-only?
closed bool // is file closed?
}
// path returns the expected path to the actual file in etcd.
func (obj *File) path() string {
// keys are prefixed with the hash-type eg: {sha256} to allow different
// superblocks to share the same data prefix even with different hashes
return fmt.Sprintf("%s/{%s}%s", obj.fs.sb.DataPrefix, obj.fs.Hash, obj.Hash)
}
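Because blobs are addressed by their hash, the etcd key for a file's contents is fully determined by the data itself. A small sketch of what path() produces under the defaults (the file contents here are made up):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

func main() {
	data := []byte("hello world!\n") // hypothetical file contents
	sum := sha256.Sum256(data)
	h := hex.EncodeToString(sum[:])
	// same layout as File.path(): <DataPrefix>/{<hash-type>}<hash-of-data>
	fmt.Printf("/_etcdfs/data/{sha256}%s\n", h)
}
```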
// cache downloads the file contents from etcd and stores them in our cache.
func (obj *File) cache() error {
if obj.Mode.IsDir() {
return nil
}
h, err := obj.fs.hash(obj.data) // update hash
if err != nil {
return err
}
if h == obj.Hash { // we already have the correct data cached
return nil
}
p := obj.path() // get file data from this path in etcd
result, err := obj.fs.get(p) // download the file...
if err != nil {
return err
}
if result == nil || len(result) == 0 { // nothing found
return err
}
data, exists := result[p]
if !exists {
return fmt.Errorf("could not find data") // programming error?
}
obj.data = data // save
return nil
}
// findNode is the "in array" equivalent for searching through a dir's children.
// You must *not* specify an absolute path as the search string, but rather you
// should specify the name. To search for something named "bar" inside a dir
// named "/tmp/foo/", you just pass in "bar", not "/tmp/foo/bar".
func (obj *File) findNode(name string) (*File, bool) {
for _, node := range obj.Children {
if name == node.Path {
return node, true // found
}
}
return nil, false // not found
}
func fileCreate(fs *Fs, name string) (*File, error) {
if name == "" {
return nil, fmt.Errorf("invalid input path")
}
if !strings.HasPrefix(name, "/") {
return nil, fmt.Errorf("invalid input path (not absolute)")
}
cleanPath := path.Clean(name) // remove possible trailing slashes
// try to add node to tree by first finding the parent node
parentPath, filePath := path.Split(cleanPath) // looking for this
node, err := fs.find(parentPath)
if err != nil { // might be ErrNotExist
return nil, err
}
fi, err := node.Stat()
if err != nil {
return nil, err
}
if !fi.IsDir() { // is the parent a suitable home?
return nil, &os.PathError{Op: "create", Path: name, Err: syscall.ENOTDIR}
}
f, exists := node.findNode(filePath) // does file already exist inside?
if exists { // already exists, overwrite!
if err := f.Truncate(0); err != nil {
return nil, err
}
return f, nil
}
data := []byte("") // empty file contents
h, err := fs.hash(data) // TODO: use memoized value?
if err != nil {
return &File{}, err // TODO: nil instead?
}
f = &File{
fs: fs,
Path: filePath, // the relative path chunk (not incl. dir name)
Hash: h,
data: data,
}
// add to parent
node.Children = append(node.Children, f)
// push new file up if not on server, and then push up the metadata
if err := f.Sync(); err != nil {
return f, err // TODO: ok to return the file so user can run sync?
}
return f, nil
}
func fileOpen(fs *Fs, name string) (*File, error) {
if name == "" {
return nil, fmt.Errorf("invalid input path")
}
if !strings.HasPrefix(name, "/") {
return nil, fmt.Errorf("invalid input path (not absolute)")
}
cleanPath := path.Clean(name) // remove possible trailing slashes
node, err := fs.find(cleanPath)
if err != nil { // might be ErrNotExist
return &File{}, err // TODO: nil instead?
}
// download file contents into obj.data
if err := node.cache(); err != nil {
return &File{}, err // TODO: nil instead?
}
//fi, err := node.Stat()
//if err != nil {
// return nil, err
//}
//if fi.IsDir() { // can we open a directory? - yes we can apparently
// return nil, fmt.Errorf("file is a directory")
//}
node.readOnly = true // as per docs, fileOpen opens files as read-only
node.closed = false // mark the handle as open
return node, nil
}
// Close closes the file handle. This will try and run Sync automatically.
func (obj *File) Close() error {
if !obj.readOnly {
obj.ModTime = time.Now()
}
if err := obj.Sync(); err != nil {
return err
}
// FIXME: there is a big implementation mistake between the metadata
// node and the file handle, since they're currently sharing a struct!
// invalidate all of the fields
//obj.fs = nil
//obj.Path = ""
//obj.Mode = os.FileMode(0)
//obj.ModTime = time.Time{}
//obj.Children = nil
//obj.Hash = ""
//obj.data = nil
obj.cursor = 0
obj.readOnly = false
obj.closed = true
return nil
}
// Name returns the path of the file.
func (obj *File) Name() string {
return obj.Path
}
// Stat returns some information about the file.
func (obj *File) Stat() (os.FileInfo, error) {
// download file contents into obj.data
if err := obj.cache(); err != nil { // needed so Size() works correctly
return nil, err
}
return &FileInfo{ // everything is actually stored in the main file node
file: obj,
}, nil
}
// Sync flushes the file contents to the server and calls the filesystem
// metadata sync as well.
// FIXME: instead of a txn, run a get and then a put in two separate stages. if
// the get already found the data up there, then we don't need to push it all in
// the put phase. with the txn it is always all sent up even if the put is never
// needed. the get should just be a "key exists" test, and not a download of the
// whole file. if we *do* do the download, we can byte-by-byte check for hash
// collisions and panic if we find one :)
func (obj *File) Sync() error {
if obj.closed {
return ErrFileClosed
}
p := obj.path() // store file data at this path in etcd
// TODO: use https://github.com/coreos/etcd/pull/7417 if merged
cmp := etcd.Compare(etcd.Version(p), "=", 0) // KeyMissing
//cmp := etcd.KeyMissing(p))
op := etcd.OpPut(p, string(obj.data)) // this pushes contents to server
// it's important to do this in one transaction, and atomically, because
// this way, we only generate one watch event, and only when it's needed
result, err := obj.fs.txn([]etcd.Cmp{cmp}, []etcd.Op{op}, nil)
if err != nil {
return errwrap.Wrapf(err, "sync error with: %s (%s)", obj.Path, p)
}
if !result.Succeeded {
if obj.fs.Debug {
log.Printf("debug: data already exists in storage")
}
}
if err := obj.fs.sync(); err != nil { // push metadata up to server
return err
}
return nil
}
// Truncate trims the file to the requested size. Since our file system can only
// read and write data, but never edit existing data blocks, doing this will not
// cause more space to be available.
func (obj *File) Truncate(size int64) error {
if obj.closed {
return ErrFileClosed
}
if obj.readOnly {
return &os.PathError{Op: "truncate", Path: obj.Path, Err: ErrFileReadOnly}
}
if size < 0 {
return ErrOutOfRange
}
if size > 0 { // if size == 0, we don't need to run cache!
// download file contents into obj.data
if err := obj.cache(); err != nil {
return err
}
}
if size > int64(len(obj.data)) {
diff := size - int64(len(obj.data))
obj.data = append(obj.data, bytes.Repeat([]byte{00}, int(diff))...)
} else {
obj.data = obj.data[0:size]
}
h, err := obj.fs.hash(obj.data) // update hash
if err != nil {
return err
}
obj.Hash = h
obj.ModTime = time.Now()
// this pushes the new data and metadata up to etcd
if err := obj.Sync(); err != nil {
return err
}
return nil
}
// Read reads up to len(b) bytes from the File. It returns the number of bytes
// read and any error encountered. At end of file, Read returns 0, io.EOF.
// NOTE: This reads into the byte input. It's a side effect!
func (obj *File) Read(b []byte) (n int, err error) {
if obj.closed {
return 0, ErrFileClosed
}
if obj.Mode.IsDir() {
return 0, fmt.Errorf("file is a directory")
}
// download file contents into obj.data
if err := obj.cache(); err != nil {
return 0, err // TODO: -1 ?
}
// TODO: can we optimize by reading just the length from etcd, and also
// by only downloading the data range we're interested in?
if len(b) > 0 && int(obj.cursor) == len(obj.data) {
return 0, io.EOF
}
if len(obj.data)-int(obj.cursor) >= len(b) {
n = len(b)
} else {
n = len(obj.data) - int(obj.cursor)
}
copy(b, obj.data[obj.cursor:obj.cursor+int64(n)]) // store into input b
obj.cursor = obj.cursor + int64(n) // update cursor
return
}
// ReadAt reads len(b) bytes from the File starting at byte offset off. It
// returns the number of bytes read and the error, if any. ReadAt always returns
// a non-nil error when n < len(b). At end of file, that error is io.EOF.
func (obj *File) ReadAt(b []byte, off int64) (n int, err error) {
obj.cursor = off
return obj.Read(b)
}
// Readdir lists the contents of the directory and returns a list of file info
// objects for each entry.
func (obj *File) Readdir(count int) ([]os.FileInfo, error) {
if !obj.Mode.IsDir() {
return nil, &os.PathError{Op: "readdir", Path: obj.Name(), Err: syscall.ENOTDIR}
}
children := obj.Children[obj.dirCursor:] // available children to output
var l = int64(len(children)) // initially assume to return them all
var err error
// for count > 0, if we return the last entry, also return io.EOF
if count > 0 {
l = int64(count) // initial assumption
if c := len(children); count >= c {
l = int64(c)
err = io.EOF // this result includes the last dir entry
}
}
obj.dirCursor += l // store our progress
output := make([]os.FileInfo, l)
// TODO: should this be sorted by "directory order"? what does that mean?
// from `man 3 readdir`: "unlikely that the names will be sorted"
for i := range output {
output[i] = &FileInfo{
file: children[i],
}
}
// we've seen the whole directory, so reset the cursor
if err == io.EOF || count <= 0 {
obj.dirCursor = 0 // TODO: is it okay to reset the cursor?
}
return output, err
}
// Readdirnames returns a list of names in the current file handle's directory.
// TODO: this implementation shares the dirCursor with Readdir, is this okay?
// TODO: should Readdirnames even use a dirCursor at all?
func (obj *File) Readdirnames(n int) (names []string, _ error) {
fis, err := obj.Readdir(n)
if fis != nil {
for i, x := range fis {
if x != nil {
names = append(names, fis[i].Name())
}
}
}
return names, err
}
// Seek sets the offset for the next Read or Write on file to offset,
// interpreted according to whence: 0 means relative to the origin of the file,
// 1 means relative to the current offset, and 2 means relative to the end. It
// returns the new offset and an error, if any. The behavior of Seek on a file
// opened with O_APPEND is not specified.
func (obj *File) Seek(offset int64, whence int) (int64, error) {
if obj.closed {
return 0, ErrFileClosed
}
switch whence {
case io.SeekStart: // 0
obj.cursor = offset
case io.SeekCurrent: // 1
obj.cursor += offset
case io.SeekEnd: // 2
// download file contents into obj.data
if err := obj.cache(); err != nil {
return 0, err // TODO: -1 ?
}
obj.cursor = int64(len(obj.data)) + offset
}
return obj.cursor, nil
}
// Write writes to the given file.
func (obj *File) Write(b []byte) (n int, err error) {
if obj.closed {
return 0, ErrFileClosed
}
if obj.readOnly {
return 0, &os.PathError{Op: "write", Path: obj.Path, Err: ErrFileReadOnly}
}
// download file contents into obj.data
if err := obj.cache(); err != nil {
return 0, err // TODO: -1 ?
}
// calculate the write
n = len(b)
cur := obj.cursor
diff := cur - int64(len(obj.data))
var tail []byte
if n+int(cur) < len(obj.data) {
tail = obj.data[n+int(cur):]
}
if diff > 0 {
obj.data = append(bytes.Repeat([]byte{00}, int(diff)), b...)
obj.data = append(obj.data, tail...)
} else {
obj.data = append(obj.data[:cur], b...)
obj.data = append(obj.data, tail...)
}
h, err := obj.fs.hash(obj.data) // update hash
if err != nil {
return 0, err // TODO: -1 ?
}
obj.Hash = h
obj.ModTime = time.Now()
// this pushes the new data and metadata up to etcd
if err := obj.Sync(); err != nil {
return 0, err // TODO: -1 ?
}
obj.cursor = int64(len(obj.data))
return
}
// WriteAt writes into the given file at a certain offset.
func (obj *File) WriteAt(b []byte, off int64) (n int, err error) {
obj.cursor = off
return obj.Write(b)
}
// WriteString writes a string to the file.
func (obj *File) WriteString(s string) (n int, err error) {
return obj.Write([]byte(s))
}
// FileInfo is a struct which provides some information about a file handle.
type FileInfo struct {
file *File // pointer to the actual file node
}
// Name returns the base name of the file.
func (obj *FileInfo) Name() string {
return obj.file.Name()
}
// Size returns the length in bytes.
func (obj *FileInfo) Size() int64 {
return int64(len(obj.file.data))
}
// Mode returns the file mode bits.
func (obj *FileInfo) Mode() os.FileMode {
return obj.file.Mode
}
// ModTime returns the modification time.
func (obj *FileInfo) ModTime() time.Time {
return obj.file.ModTime
}
// IsDir is an abbreviation for Mode().IsDir().
func (obj *FileInfo) IsDir() bool {
//return obj.file.Mode&os.ModeDir != 0
return obj.file.Mode.IsDir()
}
// Sys returns the underlying data source (can return nil).
func (obj *FileInfo) Sys() interface{} {
return nil // TODO: should we do something better?
//return obj.file.fs // TODO: would this work?
}

821
etcd/fs/fs.go Normal file
View File

@@ -0,0 +1,821 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package fs implements a very simple and limited file system on top of etcd.
package fs
import (
"bytes"
"crypto/sha256"
"encoding/gob"
"encoding/hex"
"errors"
"fmt"
"hash"
"io"
"log"
"os"
"path"
"strings"
"syscall"
"time"
etcd "github.com/coreos/etcd/clientv3" // "clientv3"
rpctypes "github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
errwrap "github.com/pkg/errors"
"github.com/spf13/afero"
context "golang.org/x/net/context"
)
func init() {
gob.Register(&superBlock{})
}
const (
// EtcdTimeout is the timeout to wait before erroring.
EtcdTimeout = 5 * time.Second // FIXME: chosen arbitrarily
// DefaultDataPrefix is the default path for data storage in etcd.
DefaultDataPrefix = "/_etcdfs/data"
// DefaultHash is the default hashing algorithm to use.
DefaultHash = "sha256"
// PathSeparator is the path separator to use on this filesystem.
PathSeparator = os.PathSeparator // usually the slash character
)
// TODO: https://dave.cheney.net/2016/04/07/constant-errors
var (
IsPathSeparator = os.IsPathSeparator
// ErrNotImplemented is returned when something is not implemented by design.
ErrNotImplemented = errors.New("not implemented")
// ErrExist is returned when requested path already exists.
ErrExist = os.ErrExist
// ErrNotExist is returned when we can't find the requested path.
ErrNotExist = os.ErrNotExist
ErrFileClosed = errors.New("File is closed")
ErrFileReadOnly = errors.New("File handle is read only")
ErrOutOfRange = errors.New("Out of range")
)
// Fs is a specialized afero.Fs implementation for etcd. It implements a small
// subset of the features, and has some special properties. In particular, file
// data is stored with its unique reference being a hash of the data. In this
// way, you cannot actually edit a file, but rather you create a new one, and
// update the metadata pointer to point to the new blob. This might seem slow,
// but it has the unique advantage of being relatively straightforward to
// implement, and repeated uploads of the same file cost almost nothing. Since
// etcd isn't meant for large file systems, this fits the desired use case.
// This implementation is designed to have a single writer for each superblock,
// but as many readers as you like.
// FIXME: this is not currently thread-safe, nor is it clear if it needs to be.
// XXX: we probably aren't updating the modification time everywhere we should!
// XXX: because we never delete data blocks, we need to occasionally "vacuum".
// XXX: this is harder because we need to list of *all* metadata paths, if we
// want them to be able to share storage backends. (we do)
type Fs struct {
Client *etcd.Client
Metadata string // location of "superblock" for this filesystem
DataPrefix string // prefix of data storage (no trailing slashes)
Hash string // eg: sha256
Debug bool
sb *superBlock
mounted bool
}
// superBlock is the metadata structure of everything stored outside of the data
// section in etcd. Its fields need to be exported or they won't get marshalled.
type superBlock struct {
DataPrefix string // prefix of data storage
Hash string // hashing algorithm used
Tree *File // filesystem tree
}
// NewEtcdFs creates a new filesystem handle on an etcd client connection. You
// must specify the metadata string that you wish to use.
func NewEtcdFs(client *etcd.Client, metadata string) afero.Fs {
return &Fs{
Client: client,
Metadata: metadata,
}
}
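A minimal usage sketch, assuming a reachable etcd at localhost:2379 and the import paths used elsewhere in this tree; NewEtcdFs hands back a plain afero.Fs, so the standard afero helpers work on it:

```go
package main

import (
	"log"

	etcd "github.com/coreos/etcd/clientv3"
	etcdfs "github.com/purpleidea/mgmt/etcd/fs"
	"github.com/spf13/afero"
)

func main() {
	client, err := etcd.New(etcd.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		log.Fatalf("connect error: %+v", err)
	}
	defer client.Close()

	var fs afero.Fs = etcdfs.NewEtcdFs(client, "/some/superblock")
	if err := afero.WriteFile(fs, "/hello", []byte("hi\n"), 0666); err != nil {
		log.Fatalf("write error: %+v", err)
	}
	b, err := afero.ReadFile(fs, "/hello")
	if err != nil {
		log.Fatalf("read error: %+v", err)
	}
	log.Printf("read back: %s", b)
}
```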
// get a number of values from etcd.
func (obj *Fs) get(path string, opts ...etcd.OpOption) (map[string][]byte, error) {
ctx, cancel := context.WithTimeout(context.Background(), EtcdTimeout)
resp, err := obj.Client.Get(ctx, path, opts...)
cancel()
if err != nil || resp == nil {
return nil, err
}
// TODO: write a resp.ToMap() function on https://godoc.org/github.com/coreos/etcd/etcdserver/etcdserverpb#RangeResponse
result := make(map[string][]byte) // formerly: map[string][]byte
for _, x := range resp.Kvs {
result[string(x.Key)] = x.Value // formerly: bytes.NewBuffer(x.Value).String()
}
return result, nil
}
// put a value into etcd.
func (obj *Fs) put(path string, data []byte, opts ...etcd.OpOption) error {
ctx, cancel := context.WithTimeout(context.Background(), EtcdTimeout)
_, err := obj.Client.Put(ctx, path, string(data), opts...) // TODO: obj.Client.KV ?
cancel()
if err != nil {
switch err {
case context.Canceled:
return errwrap.Wrapf(err, "ctx canceled")
case context.DeadlineExceeded:
return errwrap.Wrapf(err, "ctx deadline exceeded")
case rpctypes.ErrEmptyKey:
return errwrap.Wrapf(err, "client-side error")
default:
return errwrap.Wrapf(err, "invalid endpoints")
}
}
return nil
}
// txn runs a txn in etcd.
func (obj *Fs) txn(ifcmps []etcd.Cmp, thenops, elseops []etcd.Op) (*etcd.TxnResponse, error) {
ctx, cancel := context.WithTimeout(context.Background(), EtcdTimeout)
resp, err := obj.Client.Txn(ctx).If(ifcmps...).Then(thenops...).Else(elseops...).Commit()
cancel()
return resp, err
}
// hash is a small helper that does the hashing for us.
func (obj *Fs) hash(input []byte) (string, error) {
var h hash.Hash
switch obj.Hash {
// TODO: add other hashes
case "sha256":
h = sha256.New()
default:
return "", fmt.Errorf("hash does not exist")
}
src := bytes.NewReader(input)
if _, err := io.Copy(h, src); err != nil {
return "", err
}
return hex.EncodeToString(h.Sum(nil)), nil
}
// sync overwrites the superblock with whatever version we have stored.
func (obj *Fs) sync() error {
b := bytes.Buffer{}
e := gob.NewEncoder(&b)
err := e.Encode(&obj.sb) // pass with &
if err != nil {
return errwrap.Wrapf(err, "gob failed to encode")
}
//base64.StdEncoding.EncodeToString(b.Bytes())
return obj.put(obj.Metadata, b.Bytes())
}
// mount downloads the initial cache of metadata, including the *file tree.
// Since there's no explicit mount API in the afero.Fs interface, we hide this
// method inside any operation that might do any real work, and make it
// idempotent so that it can be called as much as we want. If there's no
// metadata found (superblock) then we create one.
func (obj *Fs) mount() error {
if obj.mounted {
return nil
}
result, err := obj.get(obj.Metadata) // download the metadata...
if err != nil {
return err
}
if result == nil || len(result) == 0 { // nothing found, create the fs
if obj.Debug {
log.Printf("debug: mount: creating new fs at: %s", obj.Metadata)
}
// trim any trailing slashes from DataPrefix
for strings.HasSuffix(obj.DataPrefix, "/") {
obj.DataPrefix = strings.TrimSuffix(obj.DataPrefix, "/")
}
if obj.DataPrefix == "" {
obj.DataPrefix = DefaultDataPrefix
}
if obj.Hash == "" {
obj.Hash = DefaultHash
}
// test run an empty string to see if our hash selection works!
if _, err := obj.hash([]byte("")); err != nil {
return fmt.Errorf("cannot hash with %s", obj.Hash)
}
obj.sb = &superBlock{
DataPrefix: obj.DataPrefix,
Hash: obj.Hash,
Tree: &File{ // include a root directory
fs: obj,
Path: "", // root dir is "" (empty string)
Mode: os.ModeDir,
},
}
if err := obj.sync(); err != nil {
return err
}
obj.mounted = true
return nil
}
if obj.Debug {
log.Printf("debug: mount: opening old fs at: %s", obj.Metadata)
}
sb, exists := result[obj.Metadata]
if !exists {
return fmt.Errorf("could not find metadata") // programming error?
}
// decode into obj.sb
//bb, err := base64.StdEncoding.DecodeString(str)
//if err != nil {
// return errwrap.Wrapf(err, "base64 failed to decode")
//}
//b := bytes.NewBuffer(bb)
b := bytes.NewBuffer(sb)
d := gob.NewDecoder(b)
if err := d.Decode(&obj.sb); err != nil { // pass with &
return errwrap.Wrapf(err, "gob failed to decode")
}
if obj.DataPrefix != "" && obj.DataPrefix != obj.sb.DataPrefix {
return fmt.Errorf("the DataPrefix mount option `%s` does not match the remote value of `%s`", obj.DataPrefix, obj.sb.DataPrefix)
}
if obj.Hash != "" && obj.Hash != obj.sb.Hash {
return fmt.Errorf("the Hash mount option `%s` does not match the remote value of `%s`", obj.Hash, obj.sb.Hash)
}
// if all checks passed, copy these values down locally
obj.DataPrefix = obj.sb.DataPrefix
obj.Hash = obj.sb.Hash
// hook up file system pointers to each element in the tree structure
obj.traverse(obj.sb.Tree)
obj.mounted = true
return nil
}
// traverse adds the file system pointer to each element in the tree structure.
func (obj *Fs) traverse(node *File) {
if node == nil {
return
}
node.fs = obj
for _, n := range node.Children {
obj.traverse(n)
}
}
// find returns the file node corresponding to this absolute path if it exists.
func (obj *Fs) find(absPath string) (*File, error) { // TODO: function naming?
if absPath == "" {
return nil, fmt.Errorf("empty path specified")
}
if !strings.HasPrefix(absPath, "/") {
return nil, fmt.Errorf("invalid input path (not absolute)")
}
node := obj.sb.Tree
if node == nil {
return nil, ErrNotExist // no nodes exist yet, not even root dir
}
var x string // first value
sp := PathSplit(absPath)
if x, sp = sp[0], sp[1:]; x != node.Path {
return nil, fmt.Errorf("root values do not match") // TODO: panic?
}
for _, p := range sp {
n, exists := node.findNode(p)
if !exists {
return nil, ErrNotExist
}
node = n // descend into this node
}
return node, nil
}
// Name returns the name of this filesystem.
func (obj *Fs) Name() string { return "etcdfs" }
// URI returns a URI representing this particular filesystem.
func (obj *Fs) URI() string {
return fmt.Sprintf("%s://%s", obj.Name(), obj.Metadata)
}
// Create creates a new file.
func (obj *Fs) Create(name string) (afero.File, error) {
if err := obj.mount(); err != nil {
return nil, err
}
return fileCreate(obj, name)
}
// Mkdir makes a new directory.
func (obj *Fs) Mkdir(name string, perm os.FileMode) error {
if err := obj.mount(); err != nil {
return err
}
if name == "" {
return fmt.Errorf("invalid input path")
}
if !strings.HasPrefix(name, "/") {
return fmt.Errorf("invalid input path (not absolute)")
}
// remove possible trailing slashes
cleanPath := path.Clean(name)
for strings.HasSuffix(cleanPath, "/") { // bonus clean for "/" as input
cleanPath = strings.TrimSuffix(cleanPath, "/")
}
if cleanPath == "" {
if obj.sb.Tree == nil {
return fmt.Errorf("woops, missing root directory")
}
return ErrExist // root directory already exists
}
// try to add node to tree by first finding the parent node
parentPath, dirPath := path.Split(cleanPath) // looking for this
f := &File{
fs: obj,
Path: dirPath,
Mode: os.ModeDir,
// TODO: add perm to struct or let chmod below do it
}
node, err := obj.find(parentPath)
if err != nil { // might be ErrNotExist
return err
}
fi, err := node.Stat()
if err != nil {
return err
}
if !fi.IsDir() { // is the parent a suitable home?
return &os.PathError{Op: "mkdir", Path: name, Err: syscall.ENOTDIR}
}
_, exists := node.findNode(dirPath) // does file already exist inside?
if exists {
return ErrExist
}
// add to parent
node.Children = append(node.Children, f)
// push new file up if not on server, and then push up the metadata
if err := f.Sync(); err != nil {
return err
}
return obj.Chmod(name, perm)
}
// MkdirAll creates a directory named path, along with any necessary parents,
// and returns nil, or else returns an error. The permission bits perm are used
// for all directories that MkdirAll creates. If path is already a directory,
// MkdirAll does nothing and returns nil.
func (obj *Fs) MkdirAll(path string, perm os.FileMode) error {
if err := obj.mount(); err != nil {
return err
}
// Copied mostly verbatim from golang stdlib.
// Fast path: if we can tell whether path is a directory or file, stop
// with success or error.
dir, err := obj.Stat(path)
if err == nil {
if dir.IsDir() {
return nil
}
return &os.PathError{Op: "mkdir", Path: path, Err: syscall.ENOTDIR}
}
// Slow path: make sure parent exists and then call Mkdir for path.
i := len(path)
for i > 0 && IsPathSeparator(path[i-1]) { // Skip trailing path separator.
i--
}
j := i
for j > 0 && !IsPathSeparator(path[j-1]) { // Scan backward over element.
j--
}
if j > 1 {
// Create parent
err = obj.MkdirAll(path[0:j-1], perm)
if err != nil {
return err
}
}
// Parent now exists; invoke Mkdir and use its result.
err = obj.Mkdir(path, perm)
if err != nil {
// Handle arguments like "foo/." by
// double-checking that directory doesn't exist.
dir, err1 := obj.Lstat(path)
if err1 == nil && dir.IsDir() {
return nil
}
return err
}
return nil
}
// Open opens a path. It will be opened read-only.
func (obj *Fs) Open(name string) (afero.File, error) {
if err := obj.mount(); err != nil {
return nil, err
}
return fileOpen(obj, name) // this opens as read-only
}
// OpenFile opens a path with a particular flag and permission.
func (obj *Fs) OpenFile(name string, flag int, perm os.FileMode) (afero.File, error) {
if err := obj.mount(); err != nil {
return nil, err
}
chmod := false
f, err := fileOpen(obj, name)
if os.IsNotExist(err) && (flag&os.O_CREATE > 0) {
f, err = fileCreate(obj, name)
chmod = true
}
if err != nil {
return nil, err
}
f.readOnly = (flag == os.O_RDONLY)
if flag&os.O_APPEND > 0 {
if _, err := f.Seek(0, os.SEEK_END); err != nil {
f.Close()
return nil, err
}
}
if flag&os.O_TRUNC > 0 && flag&(os.O_RDWR|os.O_WRONLY) > 0 {
if err := f.Truncate(0); err != nil {
f.Close()
return nil, err
}
}
if chmod {
// TODO: the golang stdlib doesn't check this error, should we?
if err := obj.Chmod(name, perm); err != nil {
return f, err // TODO: should we return the file handle?
}
}
return f, nil
}
// Remove removes a path.
func (obj *Fs) Remove(name string) error {
if err := obj.mount(); err != nil {
return err
}
if name == "" {
return fmt.Errorf("invalid input path")
}
if !strings.HasPrefix(name, "/") {
return fmt.Errorf("invalid input path (not absolute)")
}
// remove possible trailing slashes
cleanPath := path.Clean(name)
for strings.HasSuffix(cleanPath, "/") { // bonus clean for "/" as input
cleanPath = strings.TrimSuffix(cleanPath, "/")
}
if cleanPath == "" {
return fmt.Errorf("can't remove root")
}
f, err := obj.find(name) // get the file
if err != nil {
return err
}
if len(f.Children) > 0 { // this file or dir has children, can't remove!
return &os.PathError{Op: "remove", Path: name, Err: syscall.ENOTEMPTY}
}
// find the parent node
parentPath, filePath := path.Split(cleanPath) // looking for this
node, err := obj.find(parentPath)
if err != nil { // might be ErrNotExist
if os.IsNotExist(err) { // race! must have just disappeared
return nil
}
return err
}
var index = -1 // int
for i, n := range node.Children {
if n.Path == filePath {
index = i // found here!
break
}
}
if index == -1 {
return fmt.Errorf("programming error")
}
// remove from list
node.Children = append(node.Children[:index], node.Children[index+1:]...)
return obj.sync()
}
// RemoveAll removes path and any children it contains. It removes everything it
// can but returns the first error it encounters. If the path does not exist,
// RemoveAll returns nil (no error).
func (obj *Fs) RemoveAll(path string) error {
if err := obj.mount(); err != nil {
return err
}
// Simple case: if Remove works, we're done.
err := obj.Remove(path)
if err == nil || os.IsNotExist(err) {
return nil
}
// Otherwise, is this a directory we need to recurse into?
dir, serr := obj.Lstat(path)
if serr != nil {
// TODO: I didn't check this logic thoroughly (edge cases?)
if serr, ok := serr.(*os.PathError); ok && (os.IsNotExist(serr.Err) || serr.Err == syscall.ENOTDIR) {
return nil
}
return serr
}
if !dir.IsDir() {
// Not a directory; return the error from Remove.
return err
}
// Directory.
fd, err := obj.Open(path)
if err != nil {
if os.IsNotExist(err) {
// Race. It was deleted between the Lstat and Open.
// Return nil per RemoveAll's docs.
return nil
}
return err
}
// Remove contents & return first error.
err = nil
for {
// TODO: why not do this in one shot? is there a syscall limit?
names, err1 := fd.Readdirnames(100)
for _, name := range names {
err1 := obj.RemoveAll(path + string(PathSeparator) + name)
if err == nil {
err = err1
}
}
if err1 == io.EOF {
break
}
// If Readdirnames returned an error, use it.
if err == nil {
err = err1
}
if len(names) == 0 {
break
}
}
// Close directory, because windows won't remove opened directory.
fd.Close()
// Remove directory.
err1 := obj.Remove(path)
if err1 == nil || os.IsNotExist(err1) {
return nil
}
if err == nil {
err = err1
}
return err
}
// Rename moves or renames a file or directory.
// TODO: it seems okay to move files or directories, but dirs can neither
// clobber nor be clobbered: a file can clobber another file, but a dir can't
// clobber a file, a file can't clobber a dir, and a dir can't clobber another
// dir. You can also transplant dirs or files into other dirs.
func (obj *Fs) Rename(oldname, newname string) error {
// XXX: do we need to check if dest path is inside src path?
// XXX: if dirs/files are next to each other, do we mess up the .Children list of the common parent?
if err := obj.mount(); err != nil {
return err
}
if oldname == newname {
return nil
}
if oldname == "" || newname == "" {
return fmt.Errorf("invalid input path")
}
if !strings.HasPrefix(oldname, "/") || !strings.HasPrefix(newname, "/") {
return fmt.Errorf("invalid input path (not absolute)")
}
// remove possible trailing slashes
srcCleanPath := path.Clean(oldname)
dstCleanPath := path.Clean(newname)
src, err := obj.find(srcCleanPath) // get the file
if err != nil {
return err
}
srcInfo, err := src.Stat()
if err != nil {
return err
}
srcParentPath, srcName := path.Split(srcCleanPath) // looking for this
parent, err := obj.find(srcParentPath)
if err != nil { // might be ErrNotExist
return err
}
var rmi = -1 // index of node to remove from parent
// find the thing to be deleted
for i, n := range parent.Children {
if n.Path == srcName {
rmi = i // found here!
break
}
}
if rmi == -1 {
return fmt.Errorf("programming error")
}
dst, err := obj.find(dstCleanPath) // does the destination already exist?
if err != nil && !os.IsNotExist(err) {
return err
}
if err == nil { // dst exists!
dstInfo, err := dst.Stat()
if err != nil {
return err
}
// dirs can't clobber anything or be clobbered, apparently
if srcInfo.IsDir() || dstInfo.IsDir() {
return ErrExist // dir's can't clobber anything
}
// remove from list by index
parent.Children = append(parent.Children[:rmi], parent.Children[rmi+1:]...)
// we're a file clobbering another file...
// move file content from src -> dst and then delete src
// TODO: run a dst.Close() for extra safety first?
save := dst.Path // save the "name"
*dst = *src // TODO: is this safe?
dst.Path = save // "rename" it
} else { // dst does not exist
// check if the dst's parent exists and is a dir, if not, error
// if it is a dir, add src as a child to it and then delete src
dstParentPath, dstName := path.Split(dstCleanPath) // looking for this
node, err := obj.find(dstParentPath)
if err != nil { // might be ErrNotExist
return err
}
fi, err := node.Stat()
if err != nil {
return err
}
if !fi.IsDir() { // is the parent a suitable home?
return &os.LinkError{Op: "rename", Old: oldname, New: newname, Err: syscall.ENOTDIR}
}
// remove from list by index
parent.Children = append(parent.Children[:rmi], parent.Children[rmi+1:]...)
src.Path = dstName // "rename" it
node.Children = append(node.Children, src) // "copied"
}
return obj.sync() // push up metadata changes
}
// Stat returns some information about the particular path.
func (obj *Fs) Stat(name string) (os.FileInfo, error) {
if err := obj.mount(); err != nil {
return nil, err
}
if !strings.HasPrefix(name, "/") {
return nil, fmt.Errorf("invalid input path (not absolute)")
}
f, err := obj.find(name) // get the file
if err != nil {
return nil, err
}
return f.Stat()
}
// Lstat does exactly the same as Stat because we currently do not support
// symbolic links.
func (obj *Fs) Lstat(name string) (os.FileInfo, error) {
if err := obj.mount(); err != nil {
return nil, err
}
// TODO: we don't have symbolic links in our fs, so we pass this to stat
return obj.Stat(name)
}
// Chmod changes the mode of a file.
func (obj *Fs) Chmod(name string, mode os.FileMode) error {
if err := obj.mount(); err != nil {
return err
}
if !strings.HasPrefix(name, "/") {
return fmt.Errorf("invalid input path (not absolute)")
}
f, err := obj.find(name) // get the file
if err != nil {
return err
}
f.Mode = f.Mode | mode // XXX: what is the correct way to do this?
return f.Sync() // push up the changed metadata
}
// Chtimes changes the access and modification times of the named file, similar
// to the Unix utime() or utimes() functions. The underlying filesystem may
// truncate or round the values to a less precise time unit. If there is an
// error, it will be of type *PathError.
// FIXME: make sure everything we error is a *PathError
// TODO: atime is not currently implemented and so it is silently ignored.
func (obj *Fs) Chtimes(name string, atime time.Time, mtime time.Time) error {
if err := obj.mount(); err != nil {
return err
}
if !strings.HasPrefix(name, "/") {
return fmt.Errorf("invalid input path (not absolute)")
}
f, err := obj.find(name) // get the file
if err != nil {
return err
}
f.ModTime = mtime
// TODO: add atime
return f.Sync() // push up the changed metadata
}
// PathSplit splits a path into an array of tokens excluding any trailing empty
// tokens.
func PathSplit(p string) []string {
if p == "/" { // TODO: can't this all be expressed nicely in one line?
return []string{""}
}
return strings.Split(path.Clean(p), "/")
}
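For reference, a tiny sketch of what PathSplit returns; note the leading empty token, which matches the root node's empty Path:

```go
package main

import (
	"fmt"

	etcdfs "github.com/purpleidea/mgmt/etcd/fs"
)

func main() {
	fmt.Printf("%q\n", etcdfs.PathSplit("/"))         // [""]
	fmt.Printf("%q\n", etcdfs.PathSplit("/tmp/foo/")) // ["" "tmp" "foo"]
}
```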

227
etcd/fs/fs_test.go Normal file
View File

@@ -0,0 +1,227 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package fs_test // named this way to make it easier for examples
import (
"io"
"testing"
"github.com/purpleidea/mgmt/etcd"
etcdfs "github.com/purpleidea/mgmt/etcd/fs"
"github.com/purpleidea/mgmt/util"
"github.com/spf13/afero"
)
// XXX: spawn etcd for this test, like `cdtmpmkdir && etcd` and then kill it...
// XXX: write a bunch more tests to test this
// TODO: apparently using 0666 is equivalent to respecting the current umask
const (
umask = 0666
superblock = "/some/superblock" // TODO: generate randomly per test?
)
func TestFs1(t *testing.T) {
etcdClient := &etcd.ClientEtcd{
Seeds: []string{"localhost:2379"}, // endpoints
}
if err := etcdClient.Connect(); err != nil {
t.Logf("client connection error: %+v", err)
return
}
defer etcdClient.Destroy()
etcdFs := &etcdfs.Fs{
Client: etcdClient.GetClient(),
Metadata: superblock,
DataPrefix: etcdfs.DefaultDataPrefix,
}
//var etcdFs afero.Fs = NewEtcdFs()
if err := etcdFs.Mkdir("/", umask); err != nil {
t.Logf("error: %+v", err)
if err != etcdfs.ErrExist {
return
}
}
if err := etcdFs.Mkdir("/tmp", umask); err != nil {
t.Logf("error: %+v", err)
if err != etcdfs.ErrExist {
return
}
}
fi, err := etcdFs.Stat("/tmp")
if err != nil {
t.Logf("stat error: %+v", err)
return
}
t.Logf("fi: %+v", fi)
t.Logf("isdir: %t", fi.IsDir())
f, err := etcdFs.Create("/tmp/foo")
if err != nil {
t.Logf("error: %+v", err)
return
}
t.Logf("handle: %+v", f)
i, err := f.WriteString("hello world!\n")
if err != nil {
t.Logf("error: %+v", err)
return
}
t.Logf("wrote: %d", i)
if err := etcdFs.Mkdir("/tmp/d1", umask); err != nil {
t.Logf("error: %+v", err)
if err != etcdfs.ErrExist {
return
}
}
if err := etcdFs.Rename("/tmp/foo", "/tmp/bar"); err != nil {
t.Logf("rename error: %+v", err)
return
}
//f2, err := etcdFs.Create("/tmp/bar")
//if err != nil {
// t.Logf("error: %+v", err)
// return
//}
//i2, err := f2.WriteString("hello bar!\n")
//if err != nil {
// t.Logf("error: %+v", err)
// return
//}
//t.Logf("wrote: %d", i2)
dir, err := etcdFs.Open("/tmp")
if err != nil {
t.Logf("error: %+v", err)
return
}
names, err := dir.Readdirnames(-1)
if err != nil && err != io.EOF {
t.Logf("error: %+v", err)
return
}
for _, name := range names {
t.Logf("name in /tmp: %+v", name)
}
//dir, err := etcdFs.Open("/")
//if err != nil {
// t.Logf("error: %+v", err)
// return
//}
//names, err := dir.Readdirnames(-1)
//if err != nil && err != io.EOF {
// t.Logf("error: %+v", err)
// return
//}
//for _, name := range names {
// t.Logf("name in /: %+v", name)
//}
}
func TestFs2(t *testing.T) {
etcdClient := &etcd.ClientEtcd{
Seeds: []string{"localhost:2379"}, // endpoints
}
if err := etcdClient.Connect(); err != nil {
t.Logf("client connection error: %+v", err)
return
}
defer etcdClient.Destroy()
etcdFs := &etcdfs.Fs{
Client: etcdClient.GetClient(),
Metadata: superblock,
DataPrefix: etcdfs.DefaultDataPrefix,
}
tree, err := util.FsTree(etcdFs, "/")
if err != nil {
t.Errorf("tree error: %+v", err)
return
}
t.Logf("tree: \n%s", tree)
tree2, err := util.FsTree(etcdFs, "/tmp")
if err != nil {
t.Errorf("tree2 error: %+v", err)
return
}
t.Logf("tree2: \n%s", tree2)
}
func TestFs3(t *testing.T) {
etcdClient := &etcd.ClientEtcd{
Seeds: []string{"localhost:2379"}, // endpoints
}
if err := etcdClient.Connect(); err != nil {
t.Logf("client connection error: %+v", err)
return
}
defer etcdClient.Destroy()
etcdFs := &etcdfs.Fs{
Client: etcdClient.GetClient(),
Metadata: superblock,
DataPrefix: etcdfs.DefaultDataPrefix,
}
tree, err := util.FsTree(etcdFs, "/")
if err != nil {
t.Errorf("tree error: %+v", err)
return
}
t.Logf("tree: \n%s", tree)
var memFs afero.Fs = afero.NewMemMapFs()
if err := util.CopyFs(etcdFs, memFs, "/", "/", false); err != nil {
t.Errorf("CopyFs error: %+v", err)
return
}
if err := util.CopyFs(etcdFs, memFs, "/", "/", true); err != nil {
t.Errorf("CopyFs2 error: %+v", err)
return
}
if err := util.CopyFs(etcdFs, memFs, "/", "/tmp/d1/", false); err != nil {
t.Errorf("CopyFs3 error: %+v", err)
return
}
tree2, err := util.FsTree(memFs, "/")
if err != nil {
t.Errorf("tree2 error: %+v", err)
return
}
t.Logf("tree2: \n%s", tree2)
}

88
etcd/fs/util.go Normal file
View File

@@ -0,0 +1,88 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package fs
import (
"os"
"path/filepath"
"github.com/spf13/afero"
)
// ReadAll reads from r until an error or EOF and returns the data it read.
// A successful call returns err == nil, not err == EOF. Because ReadAll is
// defined to read from src until EOF, it does not treat an EOF from Read
// as an error to be reported.
//func ReadAll(r io.Reader) ([]byte, error) {
// return afero.ReadAll(r)
//}
// ReadDir reads the directory named by dirname and returns
// a list of sorted directory entries.
func (obj *Fs) ReadDir(dirname string) ([]os.FileInfo, error) {
return afero.ReadDir(obj, dirname)
}
// ReadFile reads the file named by filename and returns the contents.
// A successful call returns err == nil, not err == EOF. Because ReadFile
// reads the whole file, it does not treat an EOF from Read as an error
// to be reported.
func (obj *Fs) ReadFile(filename string) ([]byte, error) {
return afero.ReadFile(obj, filename)
}
// TempDir creates a new temporary directory in the directory dir
// with a name beginning with prefix and returns the path of the
// new directory. If dir is the empty string, TempDir uses the
// default directory for temporary files (see os.TempDir).
// Multiple programs calling TempDir simultaneously
// will not choose the same directory. It is the caller's responsibility
// to remove the directory when no longer needed.
func (obj *Fs) TempDir(dir, prefix string) (name string, err error) {
return afero.TempDir(obj, dir, prefix)
}
// TempFile creates a new temporary file in the directory dir
// with a name beginning with prefix, opens the file for reading
// and writing, and returns the resulting *File.
// If dir is the empty string, TempFile uses the default directory
// for temporary files (see os.TempDir).
// Multiple programs calling TempFile simultaneously
// will not choose the same file. The caller can use f.Name()
// to find the pathname of the file. It is the caller's responsibility
// to remove the file when no longer needed.
func (obj *Fs) TempFile(dir, prefix string) (f afero.File, err error) {
return afero.TempFile(obj, dir, prefix)
}
// WriteFile writes data to a file named by filename.
// If the file does not exist, WriteFile creates it with permissions perm;
// otherwise WriteFile truncates it before writing.
func (obj *Fs) WriteFile(filename string, data []byte, perm os.FileMode) error {
return afero.WriteFile(obj, filename, data, perm)
}
// Walk walks the file tree rooted at root, calling walkFn for each file or
// directory in the tree, including root. All errors that arise visiting files
// and directories are filtered by walkFn. The files are walked in lexical
// order, which makes the output deterministic but means that for very
// large directories Walk can be inefficient.
// Walk does not follow symbolic links.
func (obj *Fs) Walk(root string, walkFn filepath.WalkFunc) error {
return afero.Walk(obj, root, walkFn)
}
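A short sketch under the same assumptions as the tests above (a local etcd and the same superblock), showing how Walk can print the whole tree through the afero delegation:

```go
package main

import (
	"fmt"
	"log"
	"os"

	etcd "github.com/coreos/etcd/clientv3"
	etcdfs "github.com/purpleidea/mgmt/etcd/fs"
)

func main() {
	client, err := etcd.New(etcd.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		log.Fatalf("connect error: %+v", err)
	}
	defer client.Close()

	fs := &etcdfs.Fs{
		Client:     client,
		Metadata:   "/some/superblock",
		DataPrefix: etcdfs.DefaultDataPrefix,
	}
	err = fs.Walk("/", func(p string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		fmt.Printf("%s (dir=%t)\n", p, info.IsDir())
		return nil
	})
	if err != nil {
		log.Fatalf("walk error: %+v", err)
	}
}
```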

30
etcd/interfaces.go Normal file
View File

@@ -0,0 +1,30 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package etcd
import (
etcd "github.com/coreos/etcd/clientv3" // "clientv3"
)
// Client provides a simple interface specification for client requests. Both
// EmbdEtcd and ClientEtcd implement this.
type Client interface {
// TODO: add more method signatures
Get(path string, opts ...etcd.OpOption) (map[string]string, error)
Txn(ifcmps []etcd.Cmp, thenops, elseops []etcd.Op) (*etcd.TxnResponse, error)
}
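A hypothetical consumer sketch (the helper name is made up): code that only needs Get or Txn can accept the Client interface and work against either implementation:

```go
package etcd

import "fmt"

// readOne is illustrative only: it fetches a single key through whichever
// Client implementation (EmbdEtcd or ClientEtcd) the caller happens to have.
func readOne(c Client, key string) (string, error) {
	keyMap, err := c.Get(key)
	if err != nil {
		return "", err
	}
	value, exists := keyMap[key]
	if !exists {
		return "", fmt.Errorf("key %s not found", key)
	}
	return value, nil
}
```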

View File

@@ -39,7 +39,7 @@ func Nominate(obj *EmbdEtcd, hostname string, urls etcdtypes.URLs) error {
defer log.Printf("Trace: Etcd: Nominate(%v): Finished!", hostname)
}
// nominate someone to be a server
nominate := fmt.Sprintf("/%s/nominated/%s", NS, hostname)
nominate := fmt.Sprintf("%s/nominated/%s", NS, hostname)
ops := []etcd.Op{} // list of ops in this txn
if urls != nil {
ops = append(ops, etcd.OpPut(nominate, urls.String())) // TODO: add a TTL? (etcd.WithLease)
@@ -57,7 +57,7 @@ func Nominate(obj *EmbdEtcd, hostname string, urls etcdtypes.URLs) error {
// Nominated returns a urls map of nominated etcd server volunteers.
// NOTE: I know 'nominees' might be more correct, but it's less consistent here
func Nominated(obj *EmbdEtcd) (etcdtypes.URLsMap, error) {
path := fmt.Sprintf("/%s/nominated/", NS)
path := fmt.Sprintf("%s/nominated/", NS)
keyMap, err := obj.Get(path, etcd.WithPrefix()) // map[string]string, bool
if err != nil {
return nil, fmt.Errorf("nominated isn't available: %v", err)
@@ -90,7 +90,7 @@ func Volunteer(obj *EmbdEtcd, urls etcdtypes.URLs) error {
defer log.Printf("Trace: Etcd: Volunteer(%v): Finished!", obj.hostname)
}
// volunteer to be a server
volunteer := fmt.Sprintf("/%s/volunteers/%s", NS, obj.hostname)
volunteer := fmt.Sprintf("%s/volunteers/%s", NS, obj.hostname)
ops := []etcd.Op{} // list of ops in this txn
if urls != nil {
// XXX: adding a TTL is crucial! (i think)
@@ -112,7 +112,7 @@ func Volunteers(obj *EmbdEtcd) (etcdtypes.URLsMap, error) {
log.Printf("Trace: Etcd: Volunteers()")
defer log.Printf("Trace: Etcd: Volunteers(): Finished!")
}
path := fmt.Sprintf("/%s/volunteers/", NS)
path := fmt.Sprintf("%s/volunteers/", NS)
keyMap, err := obj.Get(path, etcd.WithPrefix())
if err != nil {
return nil, fmt.Errorf("volunteers aren't available: %v", err)
@@ -145,7 +145,7 @@ func AdvertiseEndpoints(obj *EmbdEtcd, urls etcdtypes.URLs) error {
defer log.Printf("Trace: Etcd: AdvertiseEndpoints(%v): Finished!", obj.hostname)
}
// advertise endpoints
endpoints := fmt.Sprintf("/%s/endpoints/%s", NS, obj.hostname)
endpoints := fmt.Sprintf("%s/endpoints/%s", NS, obj.hostname)
ops := []etcd.Op{} // list of ops in this txn
if urls != nil {
// TODO: add a TTL? (etcd.WithLease)
@@ -167,7 +167,7 @@ func Endpoints(obj *EmbdEtcd) (etcdtypes.URLsMap, error) {
log.Printf("Trace: Etcd: Endpoints()")
defer log.Printf("Trace: Etcd: Endpoints(): Finished!")
}
path := fmt.Sprintf("/%s/endpoints/", NS)
path := fmt.Sprintf("%s/endpoints/", NS)
keyMap, err := obj.Get(path, etcd.WithPrefix())
if err != nil {
return nil, fmt.Errorf("endpoints aren't available: %v", err)
@@ -199,7 +199,7 @@ func SetHostnameConverged(obj *EmbdEtcd, hostname string, isConverged bool) erro
log.Printf("Trace: Etcd: SetHostnameConverged(%s): %v", hostname, isConverged)
defer log.Printf("Trace: Etcd: SetHostnameConverged(%v): Finished!", hostname)
}
converged := fmt.Sprintf("/%s/converged/%s", NS, hostname)
converged := fmt.Sprintf("%s/converged/%s", NS, hostname)
op := []etcd.Op{etcd.OpPut(converged, fmt.Sprintf("%t", isConverged))}
if _, err := obj.Txn(nil, op, nil); err != nil { // TODO: do we need a skipConv flag here too?
return fmt.Errorf("set converged failed") // exit in progress?
@@ -213,7 +213,7 @@ func HostnameConverged(obj *EmbdEtcd) (map[string]bool, error) {
log.Printf("Trace: Etcd: HostnameConverged()")
defer log.Printf("Trace: Etcd: HostnameConverged(): Finished!")
}
path := fmt.Sprintf("/%s/converged/", NS)
path := fmt.Sprintf("%s/converged/", NS)
keyMap, err := obj.ComplexGet(path, true, etcd.WithPrefix()) // don't un-converge
if err != nil {
return nil, fmt.Errorf("converged values aren't available: %v", err)
@@ -239,7 +239,7 @@ func HostnameConverged(obj *EmbdEtcd) (map[string]bool, error) {
// AddHostnameConvergedWatcher adds a watcher with a callback that runs on
// hostname state changes.
func AddHostnameConvergedWatcher(obj *EmbdEtcd, callbackFn func(map[string]bool) error) (func(), error) {
path := fmt.Sprintf("/%s/converged/", NS)
path := fmt.Sprintf("%s/converged/", NS)
internalCbFn := func(re *RE) error {
// TODO: get the value from the response, and apply delta...
// for now, just run a get operation which is easier to code!
@@ -258,7 +258,7 @@ func SetClusterSize(obj *EmbdEtcd, value uint16) error {
log.Printf("Trace: Etcd: SetClusterSize(): %v", value)
defer log.Printf("Trace: Etcd: SetClusterSize(): Finished!")
}
key := fmt.Sprintf("/%s/idealClusterSize", NS)
key := fmt.Sprintf("%s/idealClusterSize", NS)
if err := obj.Set(key, strconv.FormatUint(uint64(value), 10)); err != nil {
return fmt.Errorf("function SetClusterSize failed: %v", err) // exit in progress?
@@ -268,7 +268,7 @@ func SetClusterSize(obj *EmbdEtcd, value uint16) error {
// GetClusterSize gets the ideal target cluster size of etcd peers.
func GetClusterSize(obj *EmbdEtcd) (uint16, error) {
key := fmt.Sprintf("/%s/idealClusterSize", NS)
key := fmt.Sprintf("%s/idealClusterSize", NS)
keyMap, err := obj.Get(key)
if err != nil {
return 0, fmt.Errorf("function GetClusterSize failed: %v", err)

View File

@@ -34,7 +34,7 @@ import (
// collection prefixes and filters that we care about...
func WatchResources(obj *EmbdEtcd) chan error {
ch := make(chan error, 1) // buffer it so we can measure it
path := fmt.Sprintf("/%s/exported/", NS)
path := fmt.Sprintf("%s/exported/", NS)
callback := func(re *RE) error {
// TODO: is this even needed? it used to happen on conn errors
log.Printf("Etcd: Watch: Path: %v", path) // event
@@ -61,7 +61,7 @@ func WatchResources(obj *EmbdEtcd) chan error {
// SetResources exports all of the resources which we pass in to etcd.
func SetResources(obj *EmbdEtcd, hostname string, resourceList []resources.Res) error {
// key structure is /$NS/exported/$hostname/resources/$uid = $data
// key structure is $NS/exported/$hostname/resources/$uid = $data
var kindFilter []string // empty to get from everyone
hostnameFilter := []string{hostname}
@@ -83,7 +83,7 @@ func SetResources(obj *EmbdEtcd, hostname string, resourceList []resources.Res)
log.Fatalf("Etcd: SetResources: Error: Empty kind: %v", res.GetName())
}
uid := fmt.Sprintf("%s/%s", res.GetKind(), res.GetName())
path := fmt.Sprintf("/%s/exported/%s/resources/%s", NS, hostname, uid)
path := fmt.Sprintf("%s/exported/%s/resources/%s", NS, hostname, uid)
if data, err := resources.ResToB64(res); err == nil {
ifs = append(ifs, etcd.Compare(etcd.Value(path), "=", data)) // desired state
ops = append(ops, etcd.OpPut(path, data))
@@ -108,7 +108,7 @@ func SetResources(obj *EmbdEtcd, hostname string, resourceList []resources.Res)
log.Fatalf("Etcd: SetResources: Error: Empty kind: %v", res.GetName())
}
uid := fmt.Sprintf("%s/%s", res.GetKind(), res.GetName())
path := fmt.Sprintf("/%s/exported/%s/resources/%s", NS, hostname, uid)
path := fmt.Sprintf("%s/exported/%s/resources/%s", NS, hostname, uid)
if match(res, resourceList) { // if we match, no need to delete!
continue
@@ -134,10 +134,10 @@ func SetResources(obj *EmbdEtcd, hostname string, resourceList []resources.Res)
// If the kindFilter or hostnameFilter is empty, then it assumes no filtering...
// TODO: Expand this with a more powerful filter based on what we eventually
// support in our collect DSL. Ideally a server side filter like WithFilter()
// We could do this if the pattern was /$NS/exported/$kind/$hostname/$uid = $data.
// We could do this if the pattern was $NS/exported/$kind/$hostname/$uid = $data.
func GetResources(obj *EmbdEtcd, hostnameFilter, kindFilter []string) ([]resources.Res, error) {
// key structure is /$NS/exported/$hostname/resources/$uid = $data
path := fmt.Sprintf("/%s/exported/", NS)
// key structure is $NS/exported/$hostname/resources/$uid = $data
path := fmt.Sprintf("%s/exported/", NS)
resourceList := []resources.Res{}
keyMap, err := obj.Get(path, etcd.WithPrefix(), etcd.WithSort(etcd.SortByKey, etcd.SortAscend))
if err != nil {

View File

@@ -0,0 +1,49 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package scheduler // TODO: I'd like this to be a separate package, but cycles!
import (
"fmt"
"sort"
)
func init() {
Register("alpha", func() Strategy { return &alphaStrategy{} }) // must register the func and name
}
type alphaStrategy struct {
// no state to store
}
// Schedule returns the first host out of a sorted group of available hostnames.
func (obj *alphaStrategy) Schedule(hostnames map[string]string, opts *schedulerOptions) ([]string, error) {
if len(hostnames) <= 0 {
return nil, fmt.Errorf("strategy: cannot schedule from zero hosts")
}
if opts.maxCount <= 0 {
return nil, fmt.Errorf("strategy: cannot schedule with a max of zero")
}
sortedHosts := []string{}
for key := range hostnames {
sortedHosts = append(sortedHosts, key)
}
sort.Strings(sortedHosts)
return []string{sortedHosts[0]}, nil // pick first host
}
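To make the behaviour above concrete, here is a minimal in-package sketch (a hypothetical test, not part of this patch) showing that the alpha strategy always picks the lexicographically first host, assuming a maxCount of at least one:

package scheduler

import "testing"

// TestAlphaPicksFirstHost is a hypothetical example, not part of this patch.
// It uses the unexported option struct, so it only compiles inside this package.
func TestAlphaPicksFirstHost(t *testing.T) {
	hosts := map[string]string{"h3": "", "h1": "", "h2": ""}
	opts := &schedulerOptions{maxCount: 1}
	s := &alphaStrategy{}
	result, err := s.Schedule(hosts, opts)
	if err != nil {
		t.Fatal(err)
	}
	if len(result) != 1 || result[0] != "h1" { // sorted, first host wins
		t.Fatalf("unexpected result: %+v", result)
	}
}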

etcd/scheduler/options.go Normal file
View File

@@ -0,0 +1,100 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package scheduler
import (
"fmt"
)
// Option is a type that can be used to configure the scheduler.
type Option func(*schedulerOptions)
// schedulerOptions represents the different possible configurable options. Not
// all options necessarily work for each scheduler strategy algorithm.
type schedulerOptions struct {
debug bool
logf func(format string, v ...interface{})
strategy Strategy
maxCount int // TODO: should this be *int to know when it's set?
reuseLease bool
sessionTTL int // TODO: should this be *int to know when it's set?
hostsFilter []string
// TODO: add more options
}
// Debug specifies whether we should run in debug mode or not.
func Debug(debug bool) Option {
return func(so *schedulerOptions) {
so.debug = debug
}
}
// Logf passes a logger function that we can use if so desired.
func Logf(logf func(format string, v ...interface{})) Option {
return func(so *schedulerOptions) {
so.logf = logf
}
}
// StrategyKind sets the scheduler strategy used.
func StrategyKind(strategy string) Option {
return func(so *schedulerOptions) {
f, exists := registeredStrategies[strategy]
if !exists {
panic(fmt.Sprintf("scheduler: undefined strategy: %s", strategy))
}
so.strategy = f()
}
}
// MaxCount is the maximum number of hosts that should get simultaneously
// scheduled.
func MaxCount(maxCount int) Option {
return func(so *schedulerOptions) {
if maxCount > 0 {
so.maxCount = maxCount
}
}
}
// ReuseLease specifies whether we should try and re-use the lease between runs.
// Ordinarily it would get discarded with each new version (deploy) of the code.
func ReuseLease(reuseLease bool) Option {
return func(so *schedulerOptions) {
so.reuseLease = reuseLease
}
}
// SessionTTL is the amount of time to wait before expiring a key on abrupt
// host disconnect, or if ReuseLease is true.
func SessionTTL(sessionTTL int) Option {
return func(so *schedulerOptions) {
if sessionTTL > 0 {
so.sessionTTL = sessionTTL
}
}
}
// HostsFilter specifies a manual list of hosts, to use as a subset of whatever
// was auto-discovered.
// XXX: think more about this idea...
func HostsFilter(hosts []string) Option {
return func(so *schedulerOptions) {
so.hostsFilter = hosts
}
}
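For illustration, here is a hedged sketch of how a caller might compose these functional options when requesting a scheduler; the namespace path and hostname are placeholders, and it assumes a working etcd client:

package main

import (
	"log"

	etcd "github.com/coreos/etcd/clientv3"

	"github.com/purpleidea/mgmt/etcd/scheduler"
)

// schedule is a hypothetical caller-side sketch, not part of this patch.
func schedule(client *etcd.Client) (*scheduler.Result, error) {
	opts := []scheduler.Option{
		scheduler.StrategyKind("rr"), // must be a registered strategy, or this panics
		scheduler.MaxCount(2),        // schedule on at most two hosts at a time
		scheduler.SessionTTL(10),     // seconds before an abruptly dead host expires
		scheduler.Logf(log.Printf),
	}
	// the path must start with a slash and must not end with one
	return scheduler.Schedule(client, "/ns/scheduler/xsched", "h1", opts...)
}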

View File

@@ -0,0 +1,84 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package scheduler // TODO: I'd like this to be a separate package, but cycles!
import (
"fmt"
"sort"
"github.com/purpleidea/mgmt/util"
)
func init() {
Register("rr", func() Strategy { return &rrStrategy{} }) // must register the func and name
}
type rrStrategy struct {
// some stored state
hosts []string
}
// Schedule returns hosts in round robin style from the available hostnames.
func (obj *rrStrategy) Schedule(hostnames map[string]string, opts *schedulerOptions) ([]string, error) {
if len(hostnames) <= 0 {
return nil, fmt.Errorf("strategy: cannot schedule from zero hosts")
}
if opts.maxCount <= 0 {
return nil, fmt.Errorf("strategy: cannot schedule with a max of zero")
}
// always get a deterministic list of current hosts first...
sortedHosts := []string{}
for key := range hostnames {
sortedHosts = append(sortedHosts, key)
}
sort.Strings(sortedHosts)
if obj.hosts == nil {
obj.hosts = []string{} // initialize if needed
}
// add any new hosts we learned about, to the end of the list
for _, x := range sortedHosts {
if !util.StrInList(x, obj.hosts) {
obj.hosts = append(obj.hosts, x)
}
}
// remove any hosts we previously knew about that are no longer present
for ix := len(obj.hosts) - 1; ix >= 0; ix-- {
if !util.StrInList(obj.hosts[ix], sortedHosts) {
// delete entry at this index
obj.hosts = append(obj.hosts[:ix], obj.hosts[ix+1:]...)
}
}
// get the maximum number of hosts to return
max := len(obj.hosts) // can't return more than we have
if opts.maxCount < max { // found a smaller limit
max = opts.maxCount
}
result := []string{}
// now return the number of needed hosts from the list
for i := 0; i < max; i++ {
result = append(result, obj.hosts[i])
}
return result, nil
}
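Unlike the alpha strategy, this one remembers the order in which it first saw each host, so newly discovered hosts go to the back of the line. A hypothetical in-package sketch (not part of this patch) of that behaviour:

package scheduler

import "testing"

// TestRRRemembersOrder is a hypothetical example, not part of this patch.
func TestRRRemembersOrder(t *testing.T) {
	opts := &schedulerOptions{maxCount: 2}
	s := &rrStrategy{}

	// first pass: h1 and h2 are known, and come back in sorted order
	r1, err := s.Schedule(map[string]string{"h2": "", "h1": ""}, opts)
	if err != nil {
		t.Fatal(err)
	}
	if len(r1) != 2 || r1[0] != "h1" || r1[1] != "h2" {
		t.Fatalf("unexpected first result: %+v", r1)
	}

	// h0 appears later, so it's appended after the hosts we already knew about
	r2, err := s.Schedule(map[string]string{"h2": "", "h1": "", "h0": ""}, opts)
	if err != nil {
		t.Fatal(err)
	}
	if len(r2) != 2 || r2[0] != "h1" || r2[1] != "h2" {
		t.Fatalf("unexpected second result: %+v", r2)
	}
}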

etcd/scheduler/scheduler.go Normal file
View File

@@ -0,0 +1,570 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// Package scheduler implements a distributed consensus scheduler with etcd.
package scheduler
import (
"context"
"errors"
"fmt"
"sort"
"strings"
"sync"
etcd "github.com/coreos/etcd/clientv3"
"github.com/coreos/etcd/clientv3/concurrency"
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
errwrap "github.com/pkg/errors"
)
const (
// DefaultSessionTTL is the number of seconds to wait before a dead or
// unresponsive host is removed from the scheduled pool.
DefaultSessionTTL = 10 // seconds
// DefaultMaxCount is the maximum number of hosts to schedule on if not
// specified.
DefaultMaxCount = 1 // TODO: what is the logical value to choose? +Inf?
hostnameJoinChar = "," // char used to join and split lists of hostnames
)
// ErrEndOfResults is a sentinel error that signals that no more results will be coming.
var ErrEndOfResults = errors.New("scheduler: end of results")
var schedulerLeases = make(map[string]etcd.LeaseID) // process lifetime in-memory lease store
// schedulerResult represents output from the scheduler.
type schedulerResult struct {
hosts []string
err error
}
// Result is what is returned when you request a scheduler. You can call methods
// on it, and it stores the necessary state while you're running. When one of
// these is produced, the scheduler has already kicked off running for you
// automatically.
type Result struct {
results chan *schedulerResult
closeFunc func() // run this when you're done with the scheduler // TODO: replace with an input `context`
}
// Next returns the next output from the scheduler when it changes. This blocks
// until a new value is available, which is why you may wish to use a context to
// cancel any read from this. It returns ErrEndOfResults if the scheduler shuts
// down.
func (obj *Result) Next(ctx context.Context) ([]string, error) {
select {
case val, ok := <-obj.results:
if !ok {
return nil, ErrEndOfResults
}
return val.hosts, val.err
case <-ctx.Done():
return nil, ctx.Err()
}
}
// Shutdown causes everything to clean up. We no longer need the scheduler.
// TODO: should this be named Close() instead? Should it return an error?
func (obj *Result) Shutdown() {
obj.closeFunc()
// XXX: should we have a waitgroup to wait for it all to close?
}
// TODO: use: https://github.com/coreos/etcd/pull/8488 when available
func leaseValue(key string) etcd.Cmp {
return etcd.Cmp{Key: []byte(key), Target: pb.Compare_LEASE}
}
// Schedule returns a scheduler result which can be queried with its available
// methods. This automatically causes different etcd clients sharing the same
// path to discover each other and be part of the scheduled set. On close the
// keys expire and will get removed from the scheduled set. Different options
// can be passed in to customize the behaviour. Hostname represents the unique
// identifier for the caller. The behaviour is undefined if this is run more
// than once with the same path and hostname simultaneously.
func Schedule(client *etcd.Client, path string, hostname string, opts ...Option) (*Result, error) {
if strings.HasSuffix(path, "/") {
return nil, fmt.Errorf("scheduler: path must not end with the slash char")
}
if !strings.HasPrefix(path, "/") {
return nil, fmt.Errorf("scheduler: path must start with the slash char")
}
if hostname == "" {
return nil, fmt.Errorf("scheduler: hostname must not be empty")
}
if strings.Contains(hostname, hostnameJoinChar) {
return nil, fmt.Errorf("scheduler: hostname must not contain join char: %s", hostnameJoinChar)
}
// key structure is $path/election = ???
// key structure is $path/exchange/$hostname = ???
// key structure is $path/scheduled = ???
options := &schedulerOptions{ // default scheduler options
// If reuseLease is false, then on host disconnect, that hosts
// entry will immediately expire, and the scheduler will react
// instantly and remove that host entry from the list. If this
// is true, or if the host closes without a clean shutdown, it
// will take the TTL number of seconds to remove the key. This
// can be set using the concurrency.WithTTL option to Session.
reuseLease: false,
sessionTTL: DefaultSessionTTL,
maxCount: DefaultMaxCount,
}
for _, optionFunc := range opts { // apply the scheduler options
optionFunc(options)
}
if options.strategy == nil {
return nil, fmt.Errorf("scheduler: strategy must be specified")
}
sessionOptions := []concurrency.SessionOption{}
// here we try to re-use lease between multiple runs of the code
// TODO: is it a good idea to try and re-use the lease b/w runs?
if options.reuseLease {
if leaseID, exists := schedulerLeases[path]; exists {
sessionOptions = append(sessionOptions, concurrency.WithLease(leaseID))
}
}
// ttl for key expiry on abrupt disconnection or if reuseLease is true!
if options.sessionTTL > 0 {
sessionOptions = append(sessionOptions, concurrency.WithTTL(options.sessionTTL))
}
//options.debug = true // use this for local debugging
session, err := concurrency.NewSession(client, sessionOptions...)
if err != nil {
return nil, errwrap.Wrapf(err, "scheduler: could not create session")
}
leaseID := session.Lease()
if options.reuseLease {
// save for next time, otherwise run session.Close() somewhere
schedulerLeases[path] = leaseID
}
ctx, cancel := context.WithCancel(context.Background()) // cancel below
//defer cancel() // do NOT do this, as it would cause an early cancel!
// stored scheduler results
scheduledPath := fmt.Sprintf("%s/scheduled", path)
scheduledChan := client.Watcher.Watch(ctx, scheduledPath)
// exchange hostname, and attach it to session (leaseID) so it expires
// (gets deleted) when we disconnect...
exchangePath := fmt.Sprintf("%s/exchange", path)
exchangePathHost := fmt.Sprintf("%s/%s", exchangePath, hostname)
exchangePathPrefix := fmt.Sprintf("%s/", exchangePath)
// open the watch *before* we set our key so that we can see the change!
watchChan := client.Watcher.Watch(ctx, exchangePathPrefix, etcd.WithPrefix())
data := "TODO" // XXX: no data to exchange alongside hostnames yet
ifops := []etcd.Cmp{
etcd.Compare(etcd.Value(exchangePathHost), "=", data),
etcd.Compare(leaseValue(exchangePathHost), "=", int64(leaseID)), // XXX: remove int64() after 3.3.0
}
elsop := etcd.OpPut(exchangePathHost, data, etcd.WithLease(leaseID))
// it's important to do this in one transaction, and atomically, because
// this way, we only generate one watch event, and only when it's needed
// updating leaseID, or key expiry (deletion) both generate watch events
// XXX: context!!!
if txn, err := client.KV.Txn(context.TODO()).If(ifops...).Then([]etcd.Op{}...).Else(elsop).Commit(); err != nil {
defer cancel() // cancel to avoid leaks if we exit early...
return nil, errwrap.Wrapf(err, "could not exchange in `%s`", path)
} else if txn.Succeeded {
options.logf("txn did nothing...") // then branch
} else {
options.logf("txn did an update...")
}
// create an election object
electionPath := fmt.Sprintf("%s/election", path)
election := concurrency.NewElection(session, electionPath)
electionChan := election.Observe(ctx)
elected := "" // who we "assume" is elected
wg := &sync.WaitGroup{}
ch := make(chan *schedulerResult)
closeChan := make(chan struct{})
send := func(hosts []string, err error) bool { // helper function for sending
select {
case ch <- &schedulerResult{ // send
hosts: hosts,
err: err,
}:
return true
case <-closeChan: // unblock
return false // not sent
}
}
once := &sync.Once{}
onceBody := func() { // do not call directly, use closeFunc!
//cancel() // TODO: is this needed here?
// request a graceful shutdown, caller must call this to
// shutdown when they are finished with the scheduler...
// calling this will cause their hosts channels to close
close(closeChan) // send a close signal
}
closeFunc := func() {
once.Do(onceBody)
}
result := &Result{
results: ch,
// TODO: we could accept a context to watch for cancel instead?
closeFunc: closeFunc,
}
mutex := &sync.Mutex{}
var campaignClose chan struct{}
var campaignRunning bool
// goroutine to vote for someone as scheduler! each participant must be
// able to run this or nobody will be around to vote if others are down
campaignFunc := func() {
options.logf("starting campaign...")
// the mutex ensures we don't fly past the wg.Wait() if someone
// shuts down the scheduler right as we are about to start this
// campaigning loop up. we do not want to fail unnecessarily...
mutex.Lock()
wg.Add(1)
mutex.Unlock()
go func() {
defer wg.Done()
ctx, cancel := context.WithCancel(context.Background())
go func() {
defer cancel() // run cancel to stop campaigning...
select {
case <-campaignClose:
return
case <-closeChan:
return
}
}()
for {
// TODO: previously, this looped infinitely fast
// TODO: add some rate limiting here for initial
// campaigning which occasionally loops a lot...
if options.debug {
//fmt.Printf(".") // debug
options.logf("campaigning...")
}
// "Campaign puts a value as eligible for the election.
// It blocks until it is elected, an error occurs, or
// the context is cancelled."
// vote for ourselves, as it's the only host we can
// guarantee is alive, otherwise we wouldn't be voting!
// it would be more sensible to vote for the last valid
// hostname to keep things more stable, but if that
// information was stale, and that host wasn't alive,
// then this would defeat the point of picking them!
if err := election.Campaign(ctx, hostname); err != nil {
if err != context.Canceled {
send(nil, errwrap.Wrapf(err, "scheduler: error campaigning"))
}
return
}
}
}()
}
go func() {
defer close(ch)
if !options.reuseLease {
defer session.Close() // this revokes the lease...
}
defer func() {
// XXX: should we ever resign? why would this block and thus need a context?
if elected == hostname { // TODO: is it safe to just always do this?
if err := election.Resign(context.TODO()); err != nil { // XXX: add a timeout?
}
}
elected = "" // we don't care anymore!
}()
// this "last" defer (first to run) should block until the other
// goroutine has closed so we don't Close an in-use session, etc
defer wg.Wait()
go func() {
defer cancel() // run cancel to "free" Observe...
defer wg.Wait() // also wait here if parent exits first
select {
case <-closeChan:
// we want the above wg.Wait() to work if this
// close happens. lock with the campaign start
defer mutex.Unlock()
mutex.Lock()
return
}
}()
hostnames := make(map[string]string)
for {
select {
case val, ok := <-electionChan:
if options.debug {
options.logf("electionChan(%t): %+v", ok, val)
}
if !ok {
if options.debug {
options.logf("elections stream shutdown...")
}
electionChan = nil
// done
// TODO: do we need to send on error channel?
// XXX: maybe if context was not called to exit us?
// ensure everyone waiting on closeChan
// gets cleaned up so we free mem, etc!
if watchChan == nil && scheduledChan == nil { // all now closed
closeFunc()
return
}
continue
}
elected = string(val.Kvs[0].Value)
//if options.debug {
options.logf("elected: %s", elected)
//}
if elected != hostname { // not me!
// start up the campaign function
if !campaignRunning {
campaignClose = make(chan struct{})
campaignFunc() // run
campaignRunning = true
}
continue // someone else does the scheduling...
} else { // campaigning while I am already elected loops fast
// shutdown the campaign function
if campaignRunning {
close(campaignClose)
wg.Wait()
campaignRunning = false
}
}
// i was voted in to make the scheduling choice!
case watchResp, ok := <-watchChan:
if options.debug {
options.logf("watchChan(%t): %+v", ok, watchResp)
}
if !ok {
if options.debug {
options.logf("watch stream shutdown...")
}
watchChan = nil
// done
// TODO: do we need to send on error channel?
// XXX: maybe if context was not called to exit us?
// ensure everyone waiting on closeChan
// gets cleaned up so we free mem, etc!
if electionChan == nil && scheduledChan == nil { // all now closed
closeFunc()
return
}
continue
}
err := watchResp.Err()
if watchResp.Canceled || err == context.Canceled {
// channel will get closed shortly...
continue
}
if watchResp.Header.Revision == 0 { // by inspection
// received empty message ?
// switched client connection ?
// FIXME: what should we do here ?
continue
}
if err != nil {
send(nil, errwrap.Wrapf(err, "scheduler: exchange watcher failed"))
continue
}
if len(watchResp.Events) == 0 { // nothing interesting
continue
}
options.logf("running exchange values get...")
resp, err := client.Get(ctx, exchangePathPrefix, etcd.WithPrefix(), etcd.WithSort(etcd.SortByKey, etcd.SortAscend))
if err != nil || resp == nil {
if err != nil {
send(nil, errwrap.Wrapf(err, "scheduler: could not get exchange values in `%s`", path))
} else { // if resp == nil
send(nil, fmt.Errorf("scheduler: could not get exchange values in `%s`, resp is nil", path))
}
continue
}
// FIXME: the value key could instead be host
// specific information which is used for some
// purpose, eg: seconds active, and other data?
hostnames = make(map[string]string) // reset
for _, x := range resp.Kvs {
k := string(x.Key)
if !strings.HasPrefix(k, exchangePathPrefix) {
continue
}
k = k[len(exchangePathPrefix):] // strip
hostnames[k] = string(x.Value)
}
if options.debug {
options.logf("available hostnames: %+v", hostnames)
}
case scheduledResp, ok := <-scheduledChan:
if options.debug {
options.logf("scheduledChan(%t): %+v", ok, scheduledResp)
}
if !ok {
if options.debug {
options.logf("scheduled stream shutdown...")
}
scheduledChan = nil
// done
// TODO: do we need to send on error channel?
// XXX: maybe if context was not called to exit us?
// ensure everyone waiting on closeChan
// gets cleaned up so we free mem, etc!
if electionChan == nil && watchChan == nil { // all now closed
closeFunc()
return
}
continue
}
// event! continue below and get new result...
// NOTE: not needed, exit this via Observe ctx cancel,
// which will ultimately cause the chan to shutdown...
//case <-closeChan:
// return
} // end select
if len(hostnames) == 0 {
if options.debug {
options.logf("zero hosts available")
}
continue // not enough hosts available
}
// if we're currently elected, make a scheduling decision
// if not, lookup the existing leader scheduling decision
if elected != hostname {
options.logf("i am not the leader, running scheduling result get...")
resp, err := client.Get(ctx, scheduledPath)
if err != nil || resp == nil || len(resp.Kvs) != 1 {
if err != nil {
send(nil, errwrap.Wrapf(err, "scheduler: could not get scheduling result in `%s`", path))
} else if resp == nil {
send(nil, fmt.Errorf("scheduler: could not get scheduling result in `%s`, resp is nil", path))
} else if len(resp.Kvs) > 1 {
send(nil, fmt.Errorf("scheduler: could not get scheduling result in `%s`, resp kvs: %+v", path, resp.Kvs))
}
// if len(resp.Kvs) == 0, we shouldn't error
// in that situation it's just too early...
continue
}
result := string(resp.Kvs[0].Value)
hosts := strings.Split(result, hostnameJoinChar)
if options.debug {
options.logf("sending hosts: %+v", hosts)
}
// send that on channel!
if !send(hosts, nil) {
//return // pass instead, let channels clean up
}
continue
}
// i am the leader, run scheduler and store result
options.logf("i am elected, running scheduler...")
// run actual scheduler and decide who should be chosen
// TODO: is there any additional data that we can pass
// to the scheduler so it can make a better decision ?
hosts, err := options.strategy.Schedule(hostnames, options)
if err != nil {
send(nil, errwrap.Wrapf(err, "scheduler: strategy failed"))
continue
}
sort.Strings(hosts) // for consistency
options.logf("storing scheduling result...")
data := strings.Join(hosts, hostnameJoinChar)
ifops := []etcd.Cmp{
etcd.Compare(etcd.Value(scheduledPath), "=", data),
}
elsop := etcd.OpPut(scheduledPath, data)
// it's important to do this in one transaction, and atomically, because
// this way, we only generate one watch event, and only when it's needed
// updating leaseID, or key expiry (deletion) both generate watch events
// XXX: context!!!
if _, err := client.KV.Txn(context.TODO()).If(ifops...).Then([]etcd.Op{}...).Else(elsop).Commit(); err != nil {
send(nil, errwrap.Wrapf(err, "scheduler: could not set scheduling result in `%s`", path))
continue
}
if options.debug {
options.logf("sending hosts: %+v", hosts)
}
// send that on channel!
if !send(hosts, nil) {
//return // pass instead, let channels clean up
}
}
}()
// kick off an initial campaign if none exist already...
options.logf("checking for existing leader...")
leaderResult, err := election.Leader(ctx)
if err == concurrency.ErrElectionNoLeader {
// start up the campaign function
if !campaignRunning {
campaignClose = make(chan struct{})
campaignFunc() // run
campaignRunning = true
}
}
if options.debug {
if err != nil {
options.logf("leader information error: %+v", err)
} else {
options.logf("leader information: %+v", leaderResult)
}
}
return result, nil
}
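Since Schedule kicks the scheduler off as soon as it returns a Result, a typical consumer just loops on Next and calls Shutdown when it is finished. A hedged consumer-side sketch (the logging and error handling are illustrative only, not part of this patch):

package main

import (
	"context"
	"log"

	"github.com/purpleidea/mgmt/etcd/scheduler"
)

// consume is a hypothetical example, not part of this patch.
func consume(ctx context.Context, result *scheduler.Result) error {
	defer result.Shutdown() // we're done with the scheduler when we return
	for {
		hosts, err := result.Next(ctx) // blocks until the decision changes
		if err == scheduler.ErrEndOfResults {
			return nil // the scheduler shut down cleanly
		}
		if err != nil {
			return err // context cancelled, or a scheduler error
		}
		log.Printf("scheduled hosts: %+v", hosts)
	}
}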

View File

@@ -0,0 +1,51 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
package scheduler
import (
"fmt"
)
// registeredStrategies is a global map of all possible strategy implementations
// which can be used. You should never touch this map directly. Use methods like
// Register instead.
var registeredStrategies = make(map[string]func() Strategy) // must initialize
// Strategy represents the methods a scheduler strategy must implement.
type Strategy interface {
Schedule(hostnames map[string]string, opts *schedulerOptions) ([]string, error)
}
// Register takes a func and its name and makes it available for use. It is
// commonly called in the init() function of the file that implements the
// strategy, at program startup. There is no matching Unregister function.
func Register(name string, fn func() Strategy) {
if _, ok := registeredStrategies[name]; ok {
panic(fmt.Sprintf("a strategy named %s is already registered", name))
}
//gob.Register(fn())
registeredStrategies[name] = fn
}
type nilStrategy struct {
}
// Schedule returns an error for any scheduling request for this nil strategy.
func (obj *nilStrategy) Schedule(hostnames map[string]string, opts *schedulerOptions) ([]string, error) {
return nil, fmt.Errorf("scheduler: cannot schedule with nil scheduler")
}
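A third strategy would follow the same pattern the alpha and rr files use above: register a constructor in init() and implement the Strategy interface. A minimal hypothetical sketch (the "static" name and its behaviour are made up for illustration, and are not part of this patch):

package scheduler

import "fmt"

func init() {
	Register("static", func() Strategy { return &staticStrategy{} }) // hypothetical
}

// staticStrategy is a made-up example strategy, not part of this patch.
type staticStrategy struct{}

// Schedule always picks the host named "h1" if it is currently available.
func (obj *staticStrategy) Schedule(hostnames map[string]string, opts *schedulerOptions) ([]string, error) {
	if _, exists := hostnames["h1"]; !exists {
		return nil, fmt.Errorf("strategy: host h1 is not available")
	}
	return []string{"h1"}, nil
}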

View File

@@ -32,9 +32,13 @@ var ErrNotExist = errors.New("errNotExist")
// WatchStr returns a channel which spits out events on key activity.
// FIXME: It should close the channel when it's done, and spit out errors when
// something goes wrong.
// XXX: since the caller of this (via the World API) has no way to tell it it's
// done, does that mean we leak goroutines since it might still be running, or
// perhaps even blocked? Could this cause a deadlock? Should we instead return
// some sort of struct which has a close method on it to ask for a shutdown?
func WatchStr(obj *EmbdEtcd, key string) chan error {
// new key structure is /$NS/strings/$key = $data
path := fmt.Sprintf("/%s/strings/%s", NS, key)
// new key structure is $NS/strings/$key = $data
path := fmt.Sprintf("%s/strings/%s", NS, key)
ch := make(chan error, 1)
// FIXME: fix our API so that we get a close event on shutdown.
callback := func(re *RE) error {
@@ -54,8 +58,8 @@ func WatchStr(obj *EmbdEtcd, key string) chan error {
// GetStr collects the string which matches a global namespace in etcd.
func GetStr(obj *EmbdEtcd, key string) (string, error) {
// new key structure is /$NS/strings/$key = $data
path := fmt.Sprintf("/%s/strings/%s", NS, key)
// new key structure is $NS/strings/$key = $data
path := fmt.Sprintf("%s/strings/%s", NS, key)
keyMap, err := obj.Get(path, etcd.WithPrefix())
if err != nil {
return "", errwrap.Wrapf(err, "could not get strings in: %s", key)
@@ -82,8 +86,8 @@ func GetStr(obj *EmbdEtcd, key string) (string, error) {
// nil, then it deletes the key. Otherwise the value should point to a string.
// TODO: TTL or delete disconnect?
func SetStr(obj *EmbdEtcd, key string, data *string) error {
// key structure is /$NS/strings/$key = $data
path := fmt.Sprintf("/%s/strings/%s", NS, key)
// key structure is $NS/strings/$key = $data
path := fmt.Sprintf("%s/strings/%s", NS, key)
ifs := []etcd.Cmp{} // list matching the desired state
ops := []etcd.Op{} // list of ops in this transaction (then)
els := []etcd.Op{} // list of ops in this transaction (else)

View File

@@ -31,8 +31,8 @@ import (
// FIXME: It should close the channel when it's done, and spit out errors when
// something goes wrong.
func WatchStrMap(obj *EmbdEtcd, key string) chan error {
// new key structure is /$NS/strings/$key/$hostname = $data
path := fmt.Sprintf("/%s/strings/%s", NS, key)
// new key structure is $NS/strings/$key/$hostname = $data
path := fmt.Sprintf("%s/strings/%s", NS, key)
ch := make(chan error, 1)
// FIXME: fix our API so that we get a close event on shutdown.
callback := func(re *RE) error {
@@ -52,12 +52,12 @@ func WatchStrMap(obj *EmbdEtcd, key string) chan error {
// GetStrMap collects all of the strings which match a namespace in etcd.
func GetStrMap(obj *EmbdEtcd, hostnameFilter []string, key string) (map[string]string, error) {
// old key structure is /$NS/strings/$hostname/$key = $data
// new key structure is /$NS/strings/$key/$hostname = $data
// old key structure is $NS/strings/$hostname/$key = $data
// new key structure is $NS/strings/$key/$hostname = $data
// FIXME: if we have the $key as the last token (old key structure), we
// can allow the key to contain the slash char, otherwise we need to
// verify that one isn't present in the input string.
path := fmt.Sprintf("/%s/strings/%s", NS, key)
path := fmt.Sprintf("%s/strings/%s", NS, key)
keyMap, err := obj.Get(path, etcd.WithPrefix(), etcd.WithSort(etcd.SortByKey, etcd.SortAscend))
if err != nil {
return nil, errwrap.Wrapf(err, "could not get strings in: %s", key)
@@ -92,8 +92,8 @@ func GetStrMap(obj *EmbdEtcd, hostnameFilter []string, key string) (map[string]s
// nil, then it deletes the key. Otherwise the value should point to a string.
// TODO: TTL or delete disconnect?
func SetStrMap(obj *EmbdEtcd, hostname, key string, data *string) error {
// key structure is /$NS/strings/$key/$hostname = $data
path := fmt.Sprintf("/%s/strings/%s/%s", NS, key, hostname)
// key structure is $NS/strings/$key/$hostname = $data
path := fmt.Sprintf("%s/strings/%s/%s", NS, key, hostname)
ifs := []etcd.Cmp{} // list matching the desired state
ops := []etcd.Op{} // list of ops in this transaction (then)
els := []etcd.Op{} // list of ops in this transaction (else)

View File

@@ -18,13 +18,24 @@
package etcd
import (
"fmt"
"net/url"
"strings"
etcdfs "github.com/purpleidea/mgmt/etcd/fs"
"github.com/purpleidea/mgmt/etcd/scheduler"
"github.com/purpleidea/mgmt/resources"
)
// World is an etcd backed implementation of the World interface.
type World struct {
Hostname string // uuid for the consumer of these
EmbdEtcd *EmbdEtcd
Hostname string // uuid for the consumer of these
EmbdEtcd *EmbdEtcd
MetadataPrefix string // expected metadata prefix
StoragePrefix string // storage prefix for etcdfs storage
StandaloneFs resources.Fs // store an fs here for local usage
Debug bool
Logf func(format string, v ...interface{})
}
// ResWatch returns a channel which spits out events on possible exported
@@ -93,3 +104,49 @@ func (obj *World) StrMapSet(namespace, value string) error {
func (obj *World) StrMapDel(namespace string) error {
return SetStrMap(obj.EmbdEtcd, obj.Hostname, namespace, nil)
}
// Scheduler returns a scheduling result of hosts in a particular namespace.
func (obj *World) Scheduler(namespace string, opts ...scheduler.Option) (*scheduler.Result, error) {
modifiedOpts := []scheduler.Option{}
for _, o := range opts {
modifiedOpts = append(modifiedOpts, o) // copy in
}
modifiedOpts = append(modifiedOpts, scheduler.Debug(obj.Debug))
modifiedOpts = append(modifiedOpts, scheduler.Logf(obj.Logf))
return scheduler.Schedule(obj.EmbdEtcd.GetClient(), fmt.Sprintf("%s/scheduler/%s", NS, namespace), obj.Hostname, modifiedOpts...)
}
// Fs returns a distributed file system from a unique URI. For standalone
// execution that doesn't span more than a single host, this file system might
// actually be a local or memory backed file system, so it's actually only
// distributed within the boredom that is a single host cluster.
func (obj *World) Fs(uri string) (resources.Fs, error) {
u, err := url.Parse(uri)
if err != nil {
return nil, err
}
// we're in standalone mode
if u.Scheme == "memmapfs" && u.Path == "/" {
return obj.StandaloneFs, nil
}
if u.Scheme != "etcdfs" {
return nil, fmt.Errorf("unknown scheme: `%s`", u.Scheme)
}
if u.Path == "" {
return nil, fmt.Errorf("empty path: %s", u.Path)
}
if !strings.HasPrefix(u.Path, obj.MetadataPrefix) {
return nil, fmt.Errorf("wrong path prefix: %s", u.Path)
}
etcdFs := &etcdfs.Fs{
Client: obj.EmbdEtcd.GetClient(),
Metadata: u.Path,
DataPrefix: obj.StoragePrefix,
}
return etcdFs, nil
}
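The two URI shapes that Fs accepts can be seen in this hedged caller-side sketch; the metadata prefix and deploy path below are placeholders that only work if they match how the World struct was configured, and the helper is not part of this patch:

package main

import (
	"github.com/purpleidea/mgmt/etcd"
)

// openFilesystems is a hypothetical example, not part of this patch. It
// assumes world.MetadataPrefix is "/fs" and world.StandaloneFs is set.
func openFilesystems(world *etcd.World) error {
	// standalone, single host mode: returns the local/memory backed fs
	if _, err := world.Fs("memmapfs:///"); err != nil {
		return err
	}
	// distributed mode: an etcdfs backed fs under the metadata prefix
	if _, err := world.Fs("etcdfs:///fs/deploy-1"); err != nil {
		return err
	}
	return nil
}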

View File

@@ -0,0 +1,22 @@
$set = ["a", "b", "c", "d",]
$c1 = "x1" in ["x1", "x2", "x3",]
$c2 = 42 in [4, 13, 42,]
$c3 = "x" in $set
$c4 = "b" in $set
$s = printf("1: %t, 2: %t, 3: %t, 4: %t\n", $c1, $c2, $c3, $c4)
file "/tmp/mgmt/contains" {
content => $s,
}
$x = if hostname() in ["h1", "h3",] {
printf("i (%s) am one of the chosen few!\n", hostname())
} else {
printf("i (%s) was not chosen :(\n", hostname())
}
file "/tmp/mgmt/hello-${hostname()}" {
content => $x,
}

View File

@@ -0,0 +1,4 @@
$d = datetime()
file "/tmp/mgmt/datetime" {
content => template("Hello! It is now: {{ datetimeprint . }}\n", $d),
}

View File

@@ -0,0 +1,14 @@
$secplusone = datetime() + $ayear
# note the order of the assignment (year can come later in the code)
$ayear = 60 * 60 * 24 * 365 # is a year in seconds (31536000)
$tmplvalues = struct{year => $secplusone, load => $theload,}
$theload = structlookup(load(), "x1")
if 5 > 3 {
file "/tmp/mgmt/datetime" {
content => template("Now + 1 year is: {{ .year }} seconds, aka: {{ datetimeprint .year }}\n\nload average: {{ .load }}\n", $tmplvalues),
}
}

View File

@@ -0,0 +1,14 @@
$secplusone = datetime() + $ayear
# note the order of the assignment (year can come later in the code)
$ayear = 60 * 60 * 24 * 365 # is a year in seconds (31536000)
$tmplvalues = struct{year => $secplusone, load => $theload, vumeter => $vumeter,}
$theload = structlookup(load(), "x1")
$vumeter = vumeter("====", 10, 0.9)
file "/tmp/mgmt/datetime" {
content => template("Now + 1 year is: {{ .year }} seconds, aka: {{ datetimeprint .year }}\n\nload average: {{ .load }}\n\nvu: {{ .vumeter }}\n", $tmplvalues),
}

View File

@@ -0,0 +1,13 @@
# run this example with these commands
# watch -n 0.1 'tail *' # run this in /tmp/mgmt/
# time ./mgmt run --lang examples/lang/exchange0.mcl --hostname h1 --ideal-cluster-size 1 --tmp-prefix --no-pgp
# time ./mgmt run --lang examples/lang/exchange0.mcl --hostname h2 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2381 --server-urls http://127.0.0.1:2382 --tmp-prefix --no-pgp
# time ./mgmt run --lang examples/lang/exchange0.mcl --hostname h3 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2383 --server-urls http://127.0.0.1:2384 --tmp-prefix --no-pgp
# time ./mgmt run --lang examples/lang/exchange0.mcl --hostname h4 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2385 --server-urls http://127.0.0.1:2386 --tmp-prefix --no-pgp
$rand = random1(8)
$exchanged = exchange("keyns", $rand)
file "/tmp/mgmt/exchange-${hostname()}" {
content => template("Found: {{ . }}\n", $exchanged),
}

examples/lang/hello0.mcl Normal file
View File

@@ -0,0 +1,4 @@
file "/tmp/mgmt-hello-world" {
content => "hello world from @purpleidea\n",
state => "exists",
}

View File

@@ -0,0 +1,7 @@
$dt = datetime()
$hystvalues = {"ix0" => $dt, "ix1" => $dt{1}, "ix2" => $dt{2}, "ix3" => $dt{3},}
file "/tmp/mgmt/history" {
content => template("Index(0) {{.ix0}}: {{ datetimeprint .ix0 }}\nIndex(1) {{.ix1}}: {{ datetimeprint .ix1 }}\nIndex(2) {{.ix2}}: {{ datetimeprint .ix2 }}\nIndex(3) {{.ix3}}: {{ datetimeprint .ix3 }}\n", $hystvalues),
}

View File

@@ -0,0 +1,4 @@
file "/tmp/mgmt/${hostname()}" {
content => "hello from ${hostname()}!\n",
state => "exists",
}

View File

@@ -0,0 +1,31 @@
file "/tmp/mgmt/systemload" {
content => template("load average: {{ .load }} threshold: {{ .threshold }}\n", $tmplvalues),
}
$tmplvalues = struct{load => $theload, threshold => $threshold,}
$theload = structlookup(load(), "x1")
$threshold = 1.5 # change me if you like
# simple hysteresis implementation
$h1 = $theload > $threshold
$h2 = $theload{1} > $threshold
$h3 = $theload{2} > $threshold
$unload = $h1 || $h2 || $h3
virt "mgmt1" {
uri => "qemu:///session",
cpus => 1,
memory => 524288,
state => "running",
transient => true,
}
# this vm shuts down under load...
virt "mgmt2" {
uri => "qemu:///session",
cpus => 1,
memory => 524288,
state => if $unload { "shutoff" } else { "running" },
transient => true,
}

View File

@@ -0,0 +1,5 @@
$audience = "WORLD!"
file "/tmp/mgmt/hello" {
content => "hello ${audience}!\n",
state => "exists",
}

examples/lang/load0.mcl Normal file
View File

@@ -0,0 +1,9 @@
$theload = load()
$x1 = structlookup($theload, "x1")
$x5 = structlookup($theload, "x5")
$x15 = structlookup($theload, "x15")
print "print1" {
msg => printf("load average: %f, %f, %f", $x1, $x5, $x15),
}

View File

@@ -0,0 +1,13 @@
$m = {"k1" => 42, "k2" => 13,}
$found = maplookup($m, "k1", 99)
print "print1" {
msg => printf("found value of: %d", $found),
}
$notfound = maplookup($m, "k3", 99)
print "print2" {
msg => printf("notfound value of: %d", $notfound),
}

examples/lang/math1.mcl Normal file
View File

@@ -0,0 +1,4 @@
test "t1" {
int64 => (4 + 32) * 15 - 8,
anotherstr => printf("the answer is: %d", 42),
}

View File

@@ -0,0 +1,8 @@
test "printf-a" {
anotherstr => printf("the %s is: %d", "answer", 42),
}
$format = "a %s is: %f"
test "printf-b" {
anotherstr => printf($format, "cool number", 3.14159),
}

View File

@@ -0,0 +1,18 @@
# here are all the possible options:
#$opts = struct{strategy => "rr", max => 3, reuse => false, ttl => 10,}
# although an empty struct is valid too:
#$opts = struct{}
# we'll just use a smaller subset today:
$opts = struct{strategy => "rr", max => 2, ttl => 10,}
# schedule in a particular namespace with options:
$set = schedule("xsched", $opts)
# and if you want, you can omit the options entirely:
#$set = schedule("xsched")
file "/tmp/mgmt/scheduled-${hostname()}" {
content => template("set: {{ . }}\n", $set),
}

View File

@@ -0,0 +1,10 @@
$x = "hello"
if true {
$x = "i am shadowed" # this is allowed, but not a good practice to intentionally shadow
print "inner-scope" {
msg => $x, # contents are: i am shadowed
}
}
print "top-scope" {
msg => $x, # contents are: hello
}

examples/lang/states0.mcl Normal file
View File

@@ -0,0 +1,35 @@
$ns = "estate"
$exchanged = kvlookup($ns)
$state = maplookup($exchanged, $hostname, "default")
if $state == "one" || $state == "default" {
file "/tmp/mgmt/state" {
content => "state: one\n",
}
kv "${ns}" {
key => $ns,
value => "two",
}
}
if $state == "two" {
file "/tmp/mgmt/state" {
content => "state: two\n",
}
kv "${ns}" {
key => $ns,
value => "three",
}
}
if $state == "three" {
file "/tmp/mgmt/state" {
content => "state: three\n",
}
kv "${ns}" {
key => $ns,
value => "one",
}
}

View File

@@ -0,0 +1,13 @@
$st = struct{f1 => 42, f2 => true, f3 => 3.14,}
$f1 = structlookup($st, "f1")
print "print1" {
msg => printf("f1 field is: %d", $f1),
}
$f2 = structlookup($st, "f2")
print "print2" {
msg => printf("f2 field is: %t", $f2),
}

examples/lang/virt1.mcl Normal file
View File

@@ -0,0 +1,7 @@
virt "mgmt3" {
uri => "qemu:///session",
cpus => 1,
memory => 524288,
state => "running",
transient => true,
}

View File

@@ -14,6 +14,15 @@ import (
mgmt "github.com/purpleidea/mgmt/lib"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
"github.com/urfave/cli"
)
// XXX: this has not been updated to latest GAPI/Deploy API. Patches welcome!
const (
// Name is the name of this frontend.
Name = "libmgmt"
)
// MyGAPI implements the main GAPI interface.
@@ -36,6 +45,39 @@ func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
return obj, obj.Init(data)
}
// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
// should take the prefix of the registered name. On activation, if there are
// any validation problems, you should return an error. If this was not
// activated, then you should return a nil GAPI and a nil error.
func (obj *MyGAPI) Cli(c *cli.Context, fs resources.Fs) (*gapi.Deploy, error) {
if s := c.String(obj.Name); c.IsSet(obj.Name) {
if s != "" {
return nil, fmt.Errorf("input is not empty")
}
return &gapi.Deploy{
Name: obj.Name,
Noop: c.GlobalBool("noop"),
Sema: c.GlobalInt("sema"),
GAPI: &MyGAPI{
// TODO: add properties here...
},
}, nil
}
return nil, nil // we weren't activated!
}
// CliFlags returns a list of flags used by this deploy subcommand.
func (obj *MyGAPI) CliFlags() []cli.Flag {
return []cli.Flag{
cli.StringFlag{
Name: obj.Name,
Value: "",
Usage: "run",
},
}
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
@@ -53,7 +95,7 @@ func (obj *MyGAPI) Init(data gapi.Data) error {
// Graph returns a current Graph.
func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
return nil, fmt.Errorf("%s: MyGAPI is not initialized", Name)
}
g, err := pgraph.NewGraph(obj.Name)
@@ -135,7 +177,7 @@ func (obj *MyGAPI) Next() chan gapi.Next {
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Err: fmt.Errorf("%s: MyGAPI is not initialized", Name),
Exit: true, // exit, b/c programming error?
}
ch <- next
@@ -164,7 +206,7 @@ func (obj *MyGAPI) Next() chan gapi.Next {
return
}
log.Printf("libmgmt: Generating new graph...")
log.Printf("%s: Generating new graph...", Name)
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
@@ -178,7 +220,7 @@ func (obj *MyGAPI) Next() chan gapi.Next {
// Close shuts down the MyGAPI.
func (obj *MyGAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("libmgmt: MyGAPI is not initialized")
return fmt.Errorf("%s: MyGAPI is not initialized", Name)
}
close(obj.closeChan)
obj.wg.Wait()
@@ -199,10 +241,10 @@ func Run() error {
obj.ConvergedTimeout = -1
obj.Noop = false // FIXME: careful!
obj.GAPI = &MyGAPI{ // graph API
Name: "libmgmt", // TODO: set on compilation
Interval: 60 * 10, // arbitrarily change graph every 15 seconds
}
//obj.GAPI = &MyGAPI{ // graph API
// Name: "libmgmt", // TODO: set on compilation
// Interval: 60 * 10, // arbitrarily change graph every 600 seconds
//}
if err := obj.Init(); err != nil {
return err

View File

@@ -16,8 +16,20 @@ import (
"github.com/purpleidea/mgmt/resources"
errwrap "github.com/pkg/errors"
"github.com/urfave/cli"
)
// XXX: this has not been updated to latest GAPI/Deploy API. Patches welcome!
const (
// Name is the name of this frontend.
Name = "libmgmt"
)
func init() {
gapi.Register(Name, func() gapi.GAPI { return &MyGAPI{} }) // register
}
// MyGAPI implements the main GAPI interface.
type MyGAPI struct {
Name string // graph name
@@ -38,6 +50,39 @@ func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
return obj, obj.Init(data)
}
// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
// should take the prefix of the registered name. On activation, if there are
// any validation problems, you should return an error. If this was not
// activated, then you should return a nil GAPI and a nil error.
func (obj *MyGAPI) Cli(c *cli.Context, fs resources.Fs) (*gapi.Deploy, error) {
if s := c.String(obj.Name); c.IsSet(obj.Name) {
if s != "" {
return nil, fmt.Errorf("input is not empty")
}
return &gapi.Deploy{
Name: obj.Name,
Noop: c.GlobalBool("noop"),
Sema: c.GlobalInt("sema"),
GAPI: &MyGAPI{
// TODO: add properties here...
},
}, nil
}
return nil, nil // we weren't activated!
}
// CliFlags returns a list of flags used by this deploy subcommand.
func (obj *MyGAPI) CliFlags() []cli.Flag {
return []cli.Flag{
cli.StringFlag{
Name: obj.Name,
Value: "",
Usage: "run",
},
}
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
@@ -87,7 +132,7 @@ func (obj *MyGAPI) subGraph() (*pgraph.Graph, error) {
// Graph returns a current Graph.
func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
return nil, fmt.Errorf("%s: MyGAPI is not initialized", Name)
}
g, err := pgraph.NewGraph(obj.Name)
@@ -142,7 +187,7 @@ func (obj *MyGAPI) Next() chan gapi.Next {
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Err: fmt.Errorf("%s: MyGAPI is not initialized", Name),
Exit: true, // exit, b/c programming error?
}
ch <- next
@@ -171,7 +216,7 @@ func (obj *MyGAPI) Next() chan gapi.Next {
return
}
log.Printf("libmgmt: Generating new graph...")
log.Printf("%s: Generating new graph...", Name)
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
@@ -185,7 +230,7 @@ func (obj *MyGAPI) Next() chan gapi.Next {
// Close shuts down the MyGAPI.
func (obj *MyGAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("libmgmt: MyGAPI is not initialized")
return fmt.Errorf("%s: MyGAPI is not initialized", Name)
}
close(obj.closeChan)
obj.wg.Wait()
@@ -197,19 +242,19 @@ func (obj *MyGAPI) Close() error {
func Run() error {
obj := &mgmt.Main{}
obj.Program = "libmgmt" // TODO: set on compilation
obj.Version = "0.0.1" // TODO: set on compilation
obj.TmpPrefix = true // disable for easy debugging
obj.Program = Name // TODO: set on compilation
obj.Version = "0.0.1" // TODO: set on compilation
obj.TmpPrefix = true // disable for easy debugging
//prefix := "/tmp/testprefix/"
//obj.Prefix = &p // enable for easy debugging
obj.IdealClusterSize = -1
obj.ConvergedTimeout = -1
obj.Noop = false // FIXME: careful!
obj.GAPI = &MyGAPI{ // graph API
Name: "libmgmt", // TODO: set on compilation
Interval: 60 * 10, // arbitrarily change graph every 15 seconds
}
//obj.GAPI = &MyGAPI{ // graph API
// Name: Name, // TODO: set on compilation
// Interval: 60 * 10, // arbitrarily change graph every 600 seconds
//}
if err := obj.Init(); err != nil {
return err

View File

@@ -14,6 +14,15 @@ import (
mgmt "github.com/purpleidea/mgmt/lib"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
"github.com/urfave/cli"
)
// XXX: this has not been updated to latest GAPI/Deploy API. Patches welcome!
const (
// Name is the name of this frontend.
Name = "libmgmt"
)
// MyGAPI implements the main GAPI interface.
@@ -36,6 +45,39 @@ func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
return obj, obj.Init(data)
}
// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
// should take the prefix of the registered name. On activation, if there are
// any validation problems, you should return an error. If this was not
// activated, then you should return a nil GAPI and a nil error.
func (obj *MyGAPI) Cli(c *cli.Context, fs resources.Fs) (*gapi.Deploy, error) {
if s := c.String(obj.Name); c.IsSet(obj.Name) {
if s != "" {
return nil, fmt.Errorf("input is not empty")
}
return &gapi.Deploy{
Name: obj.Name,
Noop: c.GlobalBool("noop"),
Sema: c.GlobalInt("sema"),
GAPI: &MyGAPI{
// TODO: add properties here...
},
}, nil
}
return nil, nil // we weren't activated!
}
// CliFlags returns a list of flags used by this deploy subcommand.
func (obj *MyGAPI) CliFlags() []cli.Flag {
return []cli.Flag{
cli.StringFlag{
Name: obj.Name,
Value: "",
Usage: "run",
},
}
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
@@ -53,7 +95,7 @@ func (obj *MyGAPI) Init(data gapi.Data) error {
// Graph returns a current Graph.
func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
return nil, fmt.Errorf("%s: MyGAPI is not initialized", Name)
}
g, err := pgraph.NewGraph(obj.Name)
@@ -132,7 +174,7 @@ func (obj *MyGAPI) Next() chan gapi.Next {
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Err: fmt.Errorf("%s: MyGAPI is not initialized", Name),
Exit: true, // exit, b/c programming error?
}
ch <- next
@@ -161,7 +203,7 @@ func (obj *MyGAPI) Next() chan gapi.Next {
return
}
log.Printf("libmgmt: Generating new graph...")
log.Printf("%s: Generating new graph...", Name)
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
@@ -175,7 +217,7 @@ func (obj *MyGAPI) Next() chan gapi.Next {
// Close shuts down the MyGAPI.
func (obj *MyGAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("libmgmt: MyGAPI is not initialized")
return fmt.Errorf("%s: MyGAPI is not initialized", Name)
}
close(obj.closeChan)
obj.wg.Wait()
@@ -196,10 +238,10 @@ func Run() error {
obj.ConvergedTimeout = -1
obj.Noop = false // FIXME: careful!
obj.GAPI = &MyGAPI{ // graph API
Name: "libmgmt", // TODO: set on compilation
Interval: 60 * 10, // arbitrarily change graph every 15 seconds
}
//obj.GAPI = &MyGAPI{ // graph API
// Name: "libmgmt", // TODO: set on compilation
// Interval: 60 * 10, // arbitrarily change graph every 600 seconds
//}
if err := obj.Init(); err != nil {
return err

View File

@@ -15,6 +15,15 @@ import (
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
"github.com/purpleidea/mgmt/yamlgraph"
"github.com/urfave/cli"
)
// XXX: this has not been updated to latest GAPI/Deploy API. Patches welcome!
const (
// Name is the name of this frontend.
Name = "libmgmt"
)
// MyGAPI implements the main GAPI interface.
@@ -37,6 +46,39 @@ func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
return obj, obj.Init(data)
}
// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
// should take the prefix of the registered name. On activation, if there are
// any validation problems, you should return an error. If this was not
// activated, then you should return a nil GAPI and a nil error.
func (obj *MyGAPI) Cli(c *cli.Context, fs resources.Fs) (*gapi.Deploy, error) {
if s := c.String(obj.Name); c.IsSet(obj.Name) {
if s != "" {
return nil, fmt.Errorf("input is not empty")
}
return &gapi.Deploy{
Name: obj.Name,
Noop: c.GlobalBool("noop"),
Sema: c.GlobalInt("sema"),
GAPI: &MyGAPI{
// TODO: add properties here...
},
}, nil
}
return nil, nil // we weren't activated!
}
// CliFlags returns a list of flags used by this deploy subcommand.
func (obj *MyGAPI) CliFlags() []cli.Flag {
return []cli.Flag{
cli.StringFlag{
Name: obj.Name,
Value: "",
Usage: "run",
},
}
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
@@ -54,7 +96,7 @@ func (obj *MyGAPI) Init(data gapi.Data) error {
// Graph returns a current Graph.
func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
return nil, fmt.Errorf("%s: MyGAPI is not initialized", Name)
}
n1, err := resources.NewNamedResource("noop", "noop1")
@@ -96,7 +138,7 @@ func (obj *MyGAPI) Next() chan gapi.Next {
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Err: fmt.Errorf("%s: MyGAPI is not initialized", Name),
Exit: true, // exit, b/c programming error?
}
ch <- next
@@ -124,7 +166,7 @@ func (obj *MyGAPI) Next() chan gapi.Next {
return
}
log.Printf("libmgmt: Generating new graph...")
log.Printf("%s: Generating new graph...", Name)
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
@@ -138,7 +180,7 @@ func (obj *MyGAPI) Next() chan gapi.Next {
// Close shuts down the MyGAPI.
func (obj *MyGAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("libmgmt: MyGAPI is not initialized")
return fmt.Errorf("%s: MyGAPI is not initialized", Name)
}
close(obj.closeChan)
obj.wg.Wait()
@@ -157,10 +199,10 @@ func Run() error {
obj.ConvergedTimeout = -1
obj.Noop = true
obj.GAPI = &MyGAPI{ // graph API
Name: "libmgmt", // TODO: set on compilation
Interval: 15, // arbitrarily change graph every 15 seconds
}
//obj.GAPI = &MyGAPI{ // graph API
// Name: "libmgmt", // TODO: set on compilation
// Interval: 15, // arbitrarily change graph every 15 seconds
//}
if err := obj.Init(); err != nil {
return err

View File

@@ -15,6 +15,15 @@ import (
mgmt "github.com/purpleidea/mgmt/lib"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
"github.com/urfave/cli"
)
// XXX: this has not been updated to latest GAPI/Deploy API. Patches welcome!
const (
// Name is the name of this frontend.
Name = "libmgmt"
)
// MyGAPI implements the main GAPI interface.
@@ -39,6 +48,39 @@ func NewMyGAPI(data gapi.Data, name string, interval uint, count uint) (*MyGAPI,
return obj, obj.Init(data)
}
// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
// should take the prefix of the registered name. On activation, if there are
// any validation problems, you should return an error. If this was not
// activated, then you should return a nil GAPI and a nil error.
func (obj *MyGAPI) Cli(c *cli.Context, fs resources.Fs) (*gapi.Deploy, error) {
if s := c.String(obj.Name); c.IsSet(obj.Name) {
if s != "" {
return nil, fmt.Errorf("input is not empty")
}
return &gapi.Deploy{
Name: obj.Name,
Noop: c.GlobalBool("noop"),
Sema: c.GlobalInt("sema"),
GAPI: &MyGAPI{
// TODO: add properties here...
},
}, nil
}
return nil, nil // we weren't activated!
}
// CliFlags returns a list of flags used by this deploy subcommand.
func (obj *MyGAPI) CliFlags() []cli.Flag {
return []cli.Flag{
cli.StringFlag{
Name: obj.Name,
Value: "",
Usage: "run",
},
}
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
@@ -56,7 +98,7 @@ func (obj *MyGAPI) Init(data gapi.Data) error {
// Graph returns a current Graph.
func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
return nil, fmt.Errorf("%s: MyGAPI is not initialized", Name)
}
g, err := pgraph.NewGraph(obj.Name)
@@ -89,7 +131,7 @@ func (obj *MyGAPI) Next() chan gapi.Next {
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Err: fmt.Errorf("%s: MyGAPI is not initialized", Name),
Exit: true, // exit, b/c programming error?
}
ch <- next
@@ -117,7 +159,7 @@ func (obj *MyGAPI) Next() chan gapi.Next {
return
}
log.Printf("libmgmt: Generating new graph...")
log.Printf("%s: Generating new graph...", Name)
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
@@ -131,7 +173,7 @@ func (obj *MyGAPI) Next() chan gapi.Next {
// Close shuts down the MyGAPI.
func (obj *MyGAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("libmgmt: MyGAPI is not initialized")
return fmt.Errorf("%s: MyGAPI is not initialized", Name)
}
close(obj.closeChan)
obj.wg.Wait()
@@ -150,11 +192,11 @@ func Run(count uint) error {
obj.ConvergedTimeout = -1
obj.Noop = true
obj.GAPI = &MyGAPI{ // graph API
Name: "libmgmt", // TODO: set on compilation
Count: count, // number of vertices to add
Interval: 15, // arbitrarily change graph every 15 seconds
}
//obj.GAPI = &MyGAPI{ // graph API
// Name: "libmgmt", // TODO: set on compilation
// Count: count, // number of vertices to add
// Interval: 15, // arbitrarily change graph every 15 seconds
//}
if err := obj.Init(); err != nil {
return err
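
The `Cli` hook added above encodes a simple activation convention: return a `*gapi.Deploy` when this frontend's flag was set, an error when it was set but failed validation, and `nil, nil` when it was not selected at all. A hedged sketch of a dispatcher built on that convention follows; the `cliFrontend` interface, `pickDeploy` helper, and `frontends` slice are invented for illustration, and only the `Cli` signature and the nil/nil rule come from the diff.

```go
package example

import (
	"fmt"

	"github.com/purpleidea/mgmt/gapi"
	"github.com/purpleidea/mgmt/resources"

	"github.com/urfave/cli"
)

// cliFrontend is a local interface matching the Cli method shown above.
type cliFrontend interface {
	Cli(c *cli.Context, fs resources.Fs) (*gapi.Deploy, error)
}

// pickDeploy is a hypothetical helper: it returns the deploy of the first
// frontend that reports it was activated, following the nil, nil convention.
func pickDeploy(c *cli.Context, fs resources.Fs, frontends []cliFrontend) (*gapi.Deploy, error) {
	for _, f := range frontends {
		deploy, err := f.Cli(c, fs)
		if err != nil {
			return nil, err // an activated frontend failed validation
		}
		if deploy != nil {
			return deploy, nil // this frontend was activated
		}
	}
	return nil, fmt.Errorf("no frontend was activated")
}
```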

View File

@@ -14,6 +14,15 @@ import (
mgmt "github.com/purpleidea/mgmt/lib"
"github.com/purpleidea/mgmt/pgraph"
"github.com/purpleidea/mgmt/resources"
"github.com/urfave/cli"
)
// XXX: this has not been updated to latest GAPI/Deploy API. Patches welcome!
const (
// Name is the name of this frontend.
Name = "libmgmt"
)
// MyGAPI implements the main GAPI interface.
@@ -36,6 +45,39 @@ func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
return obj, obj.Init(data)
}
// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
// should take the prefix of the registered name. On activation, if there are
// any validation problems, you should return an error. If this was not
// activated, then you should return a nil GAPI and a nil error.
func (obj *MyGAPI) Cli(c *cli.Context, fs resources.Fs) (*gapi.Deploy, error) {
if s := c.String(obj.Name); c.IsSet(obj.Name) {
if s != "" {
return nil, fmt.Errorf("input is not empty")
}
return &gapi.Deploy{
Name: obj.Name,
Noop: c.GlobalBool("noop"),
Sema: c.GlobalInt("sema"),
GAPI: &MyGAPI{
// TODO: add properties here...
},
}, nil
}
return nil, nil // we weren't activated!
}
// CliFlags returns a list of flags used by this deploy subcommand.
func (obj *MyGAPI) CliFlags() []cli.Flag {
return []cli.Flag{
cli.StringFlag{
Name: obj.Name,
Value: "",
Usage: "run",
},
}
}
// Init initializes the MyGAPI struct.
func (obj *MyGAPI) Init(data gapi.Data) error {
if obj.initialized {
@@ -53,7 +95,7 @@ func (obj *MyGAPI) Init(data gapi.Data) error {
// Graph returns a current Graph.
func (obj *MyGAPI) Graph() (*pgraph.Graph, error) {
if !obj.initialized {
return nil, fmt.Errorf("libmgmt: MyGAPI is not initialized")
return nil, fmt.Errorf("%s: MyGAPI is not initialized", Name)
}
g, err := pgraph.NewGraph(obj.Name)
@@ -138,7 +180,7 @@ func (obj *MyGAPI) Next() chan gapi.Next {
defer close(ch) // this will run before the obj.wg.Done()
if !obj.initialized {
next := gapi.Next{
Err: fmt.Errorf("libmgmt: MyGAPI is not initialized"),
Err: fmt.Errorf("%s: MyGAPI is not initialized", Name),
Exit: true, // exit, b/c programming error?
}
ch <- next
@@ -166,7 +208,7 @@ func (obj *MyGAPI) Next() chan gapi.Next {
return
}
log.Printf("libmgmt: Generating new graph...")
log.Printf("%s: Generating new graph...", Name)
select {
case ch <- gapi.Next{}: // trigger a run
case <-obj.closeChan:
@@ -180,7 +222,7 @@ func (obj *MyGAPI) Next() chan gapi.Next {
// Close shuts down the MyGAPI.
func (obj *MyGAPI) Close() error {
if !obj.initialized {
return fmt.Errorf("libmgmt: MyGAPI is not initialized")
return fmt.Errorf("%s: MyGAPI is not initialized", Name)
}
close(obj.closeChan)
obj.wg.Wait()
@@ -201,10 +243,10 @@ func Run() error {
obj.ConvergedTimeout = -1
obj.Noop = false // FIXME: careful!
obj.GAPI = &MyGAPI{ // graph API
Name: "libmgmt", // TODO: set on compilation
Interval: 60 * 10, // arbitrarily change graph every 10 minutes
}
//obj.GAPI = &MyGAPI{ // graph API
// Name: "libmgmt", // TODO: set on compilation
// Interval: 60 * 10, // arbitrarily change graph every 10 minutes
//}
if err := obj.Init(); err != nil {
return err
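
This third example repeats the same changes, this time with a ten-minute regeneration interval. The log line and the select on `closeChan` inside `Next()` suggest a ticker-driven loop; the sketch below reconstructs that shape under that assumption, using a stripped-down `myGAPI` stand-in rather than the real struct, so field names and types here are illustrative.

```go
package example

import (
	"log"
	"time"

	"github.com/purpleidea/mgmt/gapi"
)

// Name matches the constant used by the frontends above.
const Name = "libmgmt"

// myGAPI is a stripped-down stand-in for MyGAPI; only the fields this loop
// needs are included, and their exact types in the real example may differ.
type myGAPI struct {
	Interval  uint          // seconds between graph regenerations
	closeChan chan struct{} // closed by Close() to stop the loop
}

// next emits an empty gapi.Next on every tick, which asks the engine to call
// Graph() again; it returns once closeChan has been closed.
func (obj *myGAPI) next(ch chan gapi.Next) {
	ticker := time.NewTicker(time.Duration(obj.Interval) * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			log.Printf("%s: Generating new graph...", Name)
			select {
			case ch <- gapi.Next{}: // trigger a run
			case <-obj.closeChan:
				return
			}
		case <-obj.closeChan:
			return
		}
	}
}
```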

Some files were not shown because too many files have changed in this diff.