lang: Add module imports and more
This enables imports in mcl code, and is one of the last remaining blockers
to using mgmt. Now we can start writing standalone modules, and adding
standard library functions as needed. There's still lots to do, but this was
a big missing piece. It was much harder to get right than I had expected,
but I think it's solid!

This unfortunately large commit is the result of some wild hacking I've been
doing for the past little while. It's the result of a rebase that reworked
many "wip" commits that tracked my private progress, into something that's
not gratuitously messy for our git logs. Since this was a learning and
discovery process for me, I've "erased" the confusing git history that
wouldn't have helped. I'm happy to discuss the dead-ends, and a small
portion of that code was even left in for possible future use.

This patch includes:

* A change to the cli interface:
  You now specify the front-end explicitly, instead of leaving it up to the
  front-end to decide when to "activate". For example, instead of:

      mgmt run --lang code.mcl

  we now do:

      mgmt run lang --lang code.mcl

  We might rename the --lang flag in the future to avoid the awkward word
  repetition. Suggestions welcome, but I'm considering "input". One
  side-effect of this change is that flags which are "engine" specific must
  now be specified with "run" before the front-end name. Eg:

      mgmt run --tmp-prefix lang --lang code.mcl

  instead of putting --tmp-prefix at the end. We also changed the GAPI
  slightly, but I've patched all code that used it. This also makes things
  consistent with the "deploy" command.

* The deploys are more robust and let you deploy after a run:
  This has been vastly improved and lets mgmt really run as a smart engine
  that can handle different workloads. If you've started with `run` and you
  don't want a new deploy to take over when one comes in, you can use the
  --no-watch-deploy option to block new deploys.

* The import statement exists and works:
  We now have a working `import` statement. Read the docs, and try it out.
  I think it's quite elegant how it fits in with `SetScope`. Have a look.
  As a result, we now have some built-in functions available in modules.
  This also adds the metadata.yaml entry-point for all modules. Have a look
  at the examples or the tests. The bulk of the patch is to support this.

* Improved lang input parsing code:
  I re-wrote the parsing that determined what ran when we passed different
  things to --lang. Deciding between running an mcl file or raw code is now
  handled in a more intelligent, and re-usable way. See the inputs.go file
  if you want to have a look. One casualty is that you can't stream code
  from stdin *directly* to the front-end; it's encapsulated into a deploy
  first. You can still use stdin though! I doubt anyone will notice this
  change.

* The scope was extended to include functions and classes:
  Go forth and import lovely code. All these exist in scopes now, and can
  be re-used!

* Function calls actually use the scope now. Glad I got this sorted out.

* There is import cycle detection for modules!
  Yes, this is another dag. I think that's #4. I guess they're useful.

* A ton of tests and new test infra was added!
  This should make it much easier to add new tests that run mcl code. Have
  a look at TestAstFunc1 to see how to add more of these.

As usual, I'll try to keep these commits smaller in the future!
@@ -137,15 +137,15 @@ Invoke `mgmt` with the `--puppet` switch, which supports 3 variants:
 
 1. Request the configuration from the Puppet Master (like `puppet agent` does)
 
-   `mgmt run --puppet agent`
+   `mgmt run puppet --puppet agent`
 
 2. Compile a local manifest file (like `puppet apply`)
 
-   `mgmt run --puppet /path/to/my/manifest.pp`
+   `mgmt run puppet --puppet /path/to/my/manifest.pp`
 
 3. Compile an ad hoc manifest from the commandline (like `puppet apply -e`)
 
-   `mgmt run --puppet 'file { "/etc/ntp.conf": ensure => file }'`
+   `mgmt run puppet --puppet 'file { "/etc/ntp.conf": ensure => file }'`
 
 For more details and caveats see [Puppet.md](Puppet.md).
@@ -164,6 +164,7 @@ If you feel that a well used option needs documenting here, please patch it!
 
 ### Overview of reference
 
 * [Meta parameters](#meta-parameters): List of available resource meta parameters.
+* [Lang metadata file](#lang-metadata-file): Lang metadata file format.
 * [Graph definition file](#graph-definition-file): Main graph definition file.
 * [Command line](#command-line): Command line parameters.
 * [Compilation options](#compilation-options): Compilation options.
@@ -249,11 +250,48 @@ integer, then that value is the max size for that semaphore. Valid semaphore
 id's include: `some_id`, `hello:42`, `not:smart:4` and `:13`. It is expected
 that the last bare example be only used by the engine to add a global semaphore.
 
+### Lang metadata file
+
+Any module *must* have a metadata file in its root. It must be named
+`metadata.yaml`, even if it's empty. You can specify zero or more values in yaml
+format which can change how your module behaves, and where the `mcl` language
+looks for code and other files. The most important top level keys are: `main`,
+`path`, `files`, and `license`.
+
+#### Main
+
+The `main` key points to the default entry point of your code. It must be a
+relative path if specified. If it's empty it defaults to `main.mcl`. It should
+generally not be changed. It is sometimes set to `main/main.mcl` if you'd like
+your module's code out of the root and into a child directory, for cases where
+you don't plan on having many deeper imports relative to `main.mcl`, and all
+those files would otherwise clutter things up.
+
+#### Path
+
+The `path` key specifies the module's import search directory to use for this
+module. You can specify this if you'd like to vendor something for your module.
+In general, if you use it, please use the convention: `path/`. If it's not
+specified, it defaults to the parent module's directory.
+
+#### Files
+
+The `files` key specifies some additional files that will get included in your
+deploy. It defaults to `files/`.
+
+#### License
+
+The `license` key allows you to specify a license for the module. Please specify
+one so that everyone can enjoy your code! Use a "short license identifier", like
+`LGPLv3+` or `MIT`. The former is a safe choice if you're not sure what to use.
+
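Putting the keys above together: a complete `metadata.yaml` sketch. The values are illustrative only, mirroring the documented defaults (except `license`, which has no default):

```yaml
# metadata.yaml -- every key here is optional
main: "main.mcl"   # default entry point (this is the default value)
path: "path/"      # vendored module search directory (conventional name)
files: "files/"    # extra files included in the deploy (the default)
license: "LGPLv3+" # a short license identifier
```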
 ### Graph definition file
 
 graph.yaml is the compiled graph definition file. The format is currently
-undocumented, but by looking through the [examples/](https://github.com/purpleidea/mgmt/tree/master/examples)
-you can probably figure out most of it, as it's fairly intuitive.
+undocumented, but by looking through the [examples/](https://github.com/purpleidea/mgmt/tree/master/examples/yaml/)
+you can probably figure out most of it, as it's fairly intuitive. It's not
+recommended that you use this, since it's preferable to write code in the
+[mcl language](language-guide.md) front-end.
 
 ### Command line
13
docs/faq.md
@@ -57,6 +57,8 @@ hacking!
 
 ### Is this project ready for production?
 
+It's getting pretty close. I'm able to write modules for it now!
+
 Compared to some existing automation tools out there, mgmt is a relatively new
 project. It is probably not as feature complete as some other software, but it
 also offers a number of features which are not currently available elsewhere.
@@ -146,7 +148,7 @@ requires a number of seconds as an argument.
 
 #### Example:
 
 ```
-./mgmt run --lang examples/lang/hello0.mcl --converged-timeout=5
+./mgmt run lang --lang examples/lang/hello0.mcl --converged-timeout=5
 ```
 
 ### What does the error message about an inconsistent dataDir mean?
@@ -167,14 +169,15 @@ starting up, and as a result, a default endpoint never gets added. The solution
 is to either reconcile the mistake, and if there is no important data saved, you
 can remove the etcd dataDir. This is typically `/var/lib/mgmt/etcd/member/`.
 
-### Why do resources have both a `Compare` method and an `IFF` (on the UID) method?
+### Why do resources have both a `Cmp` method and an `IFF` (on the UID) method?
 
-The `Compare()` methods are for determining if two resources are effectively the
+The `Cmp()` methods are for determining if two resources are effectively the
 same, which is used to make graph change delta's efficient. This is when we want
 to change from the current running graph to a new graph, but preserve the common
 vertices. Since we want to make this process efficient, we only update the parts
-that are different, and leave everything else alone. This `Compare()` method can
-tell us if two resources are the same.
+that are different, and leave everything else alone. This `Cmp()` method can
+tell us if two resources are the same. In case it is not obvious, `cmp` is an
+abbrev. for compare.
 
 The `IFF()` method is part of the whole UID system, which is for discerning if a
 resource meets the requirements another expects for an automatic edge. This is
@@ -342,11 +342,21 @@ also ensures they can be encoded and decoded. Make sure to include the following
 code snippet for this to work.
 
 ```golang
 import "github.com/purpleidea/mgmt/lang/funcs"
 
 func init() { // special golang method that runs once
 	funcs.Register("foo", func() interfaces.Func { return &FooFunc{} })
 }
 ```
 
+Functions inside of built-in modules will need to use the `ModuleRegister`
+method instead.
+
+```golang
+// moduleName is already set to "math" by the math package. Do this in `init`.
+funcs.ModuleRegister(moduleName, "cos", func() interfaces.Func { return &CosFunc{} })
+```
+
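The `funcs.Register` call shown above is ordinary init-time Go. A self-contained sketch of how such a name-to-constructor registry works (with simplified, hypothetical types here, not mgmt's real `interfaces.Func`):

```go
package main

import "fmt"

// Func stands in for the real interfaces.Func (simplified for this sketch).
type Func interface {
	Info() string
}

// FooFunc is a trivial implementation that a package would register.
type FooFunc struct{}

func (obj *FooFunc) Info() string { return "foo" }

// registry maps function names to constructors, as funcs.Register does.
var registry = map[string]func() Func{}

// Register records a constructor under a name.
func Register(name string, fn func() Func) { registry[name] = fn }

func init() { // special golang method that runs once
	Register("foo", func() Func { return &FooFunc{} })
}

func main() {
	f := registry["foo"]() // look up and construct a fresh instance by name
	fmt.Println(f.Info())
}
```

Registering a constructor (rather than an instance) means every lookup yields a fresh, independent value.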
 ### Composite functions
 
 Composite functions are functions which import one or more existing functions.
@@ -140,6 +140,31 @@ expression
 include bar("world", 13) # an include can be called multiple times
 ```
 
+- **import**: import a particular scope from this location at a given namespace
+
+```mcl
+# a system module import
+import "fmt"
+
+# a local, single file import (relative path, not a module)
+import "dir1/file.mcl"
+
+# a local, module import (relative path, contents are a module)
+import "dir2/"
+
+# a remote module import (absolute remote path, contents are a module)
+import "git://github.com/purpleidea/mgmt-example1/"
+```
+
+or
+
+```mcl
+import "fmt" as * # contents namespaced into top-level names
+import "foo.mcl" # namespaced as foo
+import "dir1/" as bar # namespaced as bar
+import "git://github.com/purpleidea/mgmt-example1/" # namespaced as example1
+```
+
 All statements produce _output_. Output consists of zero or more `edges` and
 `resources`. A resource statement can produce a resource, whereas an
 `if` statement produces whatever the chosen branch produces. Ultimately the goal
@@ -318,6 +343,45 @@ parameters, then the same class can even be called with different signatures.
 Whether the output is useful and whether there is a unique type unification
 solution is dependent on your code.
 
+#### Import
+
+The `import` statement imports a scope into the specified namespace. A scope can
+contain variable, class, and function definitions. All are statements.
+Furthermore, since each of these has different logical uses, you could
+theoretically import a scope that contains an `int` variable named `foo`, a
+class named `foo`, and a function named `foo` as well. Keep in mind that
+variables can contain functions (they can have a type of function) and are
+commonly called lambdas.
+
+There are a few different kinds of imports. They differ by the string contents
+that you specify. Short single-word tokens, or multiple-word tokens separated
+by one or more slashes, are system imports. Eg: `math`, `fmt`, or even
+`math/trig`.
+Local imports are path imports that are relative to the current directory. They
+can either import a single `mcl` file, or an entire well-formed module. Eg:
+`file1.mcl` or `dir1/`. Lastly, you can have a remote import. This must be an
+absolute path to a well-formed module. The common transport is `git`, and it can
+be represented via an FQDN. Eg: `git://github.com/purpleidea/mgmt-example1/`.
+
+The namespace that any of these are imported into depends on how you use the
+import statement. By default, each kind of import will have a logical namespace
+identifier associated with it. System imports use the last token in their name.
+Eg: `fmt` would be imported as `fmt` and `math/trig` would be imported as
+`trig`. Local imports do the same, except that the required `.mcl` extension or
+trailing slash is removed. Eg: `foo/file1.mcl` would be imported as `file1` and
+`bar/baz/` would be imported as `baz`. Remote imports use some more complex
+rules. In general, well-named modules that contain a final directory name in the
+form: `mgmt-whatever/` will be named `whatever`. Otherwise, the last path token
+will be converted to lowercase and the dashes will be converted to underscores.
+The rules for remote imports might change, and should not be considered stable.
+
+In any of the import cases, you can change the namespace that you're importing
+into. Simply add the `as whatever` text at the end of the import, and `whatever`
+will be the name of the namespace. Please note that `whatever` is not surrounded
+by quotes, since it is an identifier, and not a `string`. If you'd like to add
+all of the import contents into the top-level scope, you can use the `as *` text
+to dump all of the contents in. This is generally not recommended, as it might
+cause a conflict with another identifier.
+
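To make the default-namespace rules above concrete, here is a small Go sketch of the described behavior (`namespaceFor` is a hypothetical helper written for illustration, not mgmt's actual implementation):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// namespaceFor derives the default namespace for a remote import, following
// the documented rules: strip a leading "mgmt-" prefix from the final
// directory name; otherwise lowercase the last path token and convert
// dashes to underscores.
func namespaceFor(importPath string) string {
	base := path.Base(strings.TrimSuffix(importPath, "/"))
	if strings.HasPrefix(base, "mgmt-") {
		return strings.TrimPrefix(base, "mgmt-")
	}
	return strings.ReplaceAll(strings.ToLower(base), "-", "_")
}

func main() {
	fmt.Println(namespaceFor("git://github.com/purpleidea/mgmt-example1/")) // example1
	fmt.Println(namespaceFor("git://example.com/Some-Module/"))             // some_module
}
```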
 ### Stages
 
 The mgmt compiler runs in a number of stages. In order of execution they are:
@@ -143,7 +143,7 @@ you to specify which `puppet.conf` file should be used during
 translation.
 
 ```
-mgmt run --puppet /opt/my-manifest.pp --puppet-conf /etc/mgmt/puppet.conf
+mgmt run puppet --puppet /opt/my-manifest.pp --puppet-conf /etc/mgmt/puppet.conf
 ```
 
 Within this file, you can just specify any needed options in the
@@ -57,8 +57,8 @@ export PATH=$PATH:$GOPATH/bin
 
 ### Running mgmt
 
-* Run `time ./mgmt run --lang examples/lang/hello0.mcl --tmp-prefix` to try out
-  a very simple example!
+* Run `time ./mgmt run --tmp-prefix lang --lang examples/lang/hello0.mcl` to try
+  out a very simple example!
 * Look in that example file that you ran to see if you can figure out what it
   did!
 * Have fun hacking on our future technology and get involved to shape the
@@ -181,5 +181,5 @@ Other examples:
 
 ```
 docker/scripts/exec-development make build
-docker/scripts/exec-development ./mgmt run --tmp-prefix --lang examples/lang/load0.mcl
+docker/scripts/exec-development ./mgmt run --tmp-prefix lang --lang examples/lang/load0.mcl
 ```
@@ -1,22 +1,28 @@
 # Style guide
 
 ## Overview
+
+This document aims to be a reference for the desired style for patches to mgmt,
+and the associated `mcl` language. In particular it describes conventions which
+are not officially enforced by tools and in test cases, or that aren't clearly
+defined elsewhere. We try to turn as many of these into automated tests as we
+can. If something here is not defined in a test, or you think it should be,
+please write one! Even better, you can write a tool to automatically fix it,
+since this is more useful and can easily be turned into a test!
 
-This document aims to be a reference for the desired style for patches to mgmt.
-In particular it describes conventions which we use which are not officially
-enforced by the `gofmt` tool, and which might not be clearly defined elsewhere.
-Most of these are common sense to seasoned programmers, and we hope this will be
-a useful reference for new programmers.
+## Overview for golang code
+
+Most style issues are enforced by the `gofmt` tool. Other style aspects are
+often common sense to seasoned programmers, and we hope this will be a useful
+reference for new programmers.
 
 There are a lot of useful code review comments described
 [here](https://github.com/golang/go/wiki/CodeReviewComments). We don't
 necessarily follow everything strictly, but it is in general a very good guide.
 
-## Basics
+### Basics
 
 * All of our golang code is formatted with `gofmt`.
 
-## Comments
+### Comments
 
 All of our code is commented with the minimums required for `godoc` to function,
 and so that our comments pass `golint`. Code comments should either be full
@@ -28,7 +34,7 @@ They should explain algorithms, describe non-obvious behaviour, or situations
 which would otherwise need explanation or additional research during a code
 review. Notes about use of unfamiliar API's is a good idea for a code comment.
 
-### Example
+#### Example
 
 Here you can see a function with the correct `godoc` string. The first word must
 match the name of the function. It is _not_ capitalized because the function is
@@ -41,7 +47,7 @@ func square(x int) int {
 }
 ```
 
-## Line length
+### Line length
 
 In general we try to stick to 80 character lines when it is appropriate. It is
 almost *always* appropriate for function `godoc` comments and most longer
@@ -55,7 +61,7 @@ Occasionally inline, two line source code comments are used within a function.
 These should usually be balanced so that you don't have one line with 78
 characters and the second with only four. Split the comment between the two.
 
-## Method receiver naming
+### Method receiver naming
 
 [Contrary](https://github.com/golang/go/wiki/CodeReviewComments#receiver-names)
 to the specialized naming of the method receiver variable, we usually name all
@@ -65,7 +71,7 @@ makes the code easier to read since you don't need to remember the name of the
 method receiver variable in each different method. This is very similar to what
 is done in `python`.
 
-### Example
+#### Example
 
 ```golang
 // Bar does a thing, and returns the number of baz results found in our
@@ -78,7 +84,7 @@ func (obj *Foo) Bar(baz string) int {
 }
 ```
 
-## Consistent ordering
+### Consistent ordering
 
 In general we try to preserve a logical ordering in source files which usually
 matches the common order of execution that a _lazy evaluator_ would follow.
@@ -90,6 +96,55 @@ declared in the interface.
 When implementing code for the various types in the language, please follow this
 order: `bool`, `str`, `int`, `float`, `list`, `map`, `struct`, `func`.
 
+## Overview for mcl code
+
+The `mcl` language is quite new, so this guide will probably change over time as
+we find what's best, and hopefully we'll be able to add an `mclfmt` tool in the
+future so that less of this needs to be documented. (Patches welcome!)
+
+### Indentation
+
+Code indentation is done with tabs. The tab-width is a private preference, which
+is the beauty of using tabs: you can have your own personal preference. The
+inventor of `mgmt` uses and recommends a width of eight, and that is what should
+be used if your tool requires a modeline to be publicly committed.
+
+### Line length
+
+We recommend you stick to 80 char line width. If you find yourself with deeper
+nesting, it might be a hint that your code could be refactored in a more
+pleasant way.
+
+### Capitalization
+
+At the moment, variables, function names, and classes are all lowercase and do
+not contain underscores. We will probably figure out what style to recommend
+when the language is a bit further along. For example, we haven't decided if we
+should have a notion of public and private variables, and if we'd like to
+reserve capitalization for this situation.
+
+### Module naming
+
+We recommend you name your modules with an `mgmt-` prefix. For example, a module
+about bananas might be named `mgmt-banana`. This is helpful for the useful magic
+built-in to the module import code, which will by default take a remote import
+like: `import "https://github.com/purpleidea/mgmt-banana/"` and namespace it as
+`banana`. Of course you can always pick the namespace yourself on import with:
+`import "https://github.com/purpleidea/mgmt-banana/" as tomato` or something
+similar.
+
+### Licensing
+
+We believe that sharing code helps reduce unnecessary re-invention, so that we
+can [stand on the shoulders of giants](https://en.wikipedia.org/wiki/Standing_on_the_shoulders_of_giants)
+and hopefully make faster progress in science, medicine, exploration, etc... As
+a result, we recommend releasing your modules under the [LGPLv3+](https://www.gnu.org/licenses/lgpl-3.0.en.html)
+license for the maximum balance of freedom and re-usability. We strongly oppose
+any [CLA](https://en.wikipedia.org/wiki/Contributor_License_Agreement)
+requirements and believe that the ["inbound==outbound"](https://ref.fedorapeople.org/fontana-linuxcon.html#slide2)
+rule applies. Lastly, we do not support software patents and we hope you don't
+either!
+
 ## Suggestions
 
 If you have any ideas for suggestions or other improvements to this guide,
@@ -48,7 +48,7 @@ type Fs interface {
 	//IsDir(path string) (bool, error)
 	//IsEmpty(path string) (bool, error)
 	//NeuterAccents(s string) string
-	//ReadAll(r io.Reader) ([]byte, error) // not needed
+	//ReadAll(r io.Reader) ([]byte, error) // not needed, same as ioutil
 	ReadDir(dirname string) ([]os.FileInfo, error)
 	ReadFile(filename string) ([]byte, error)
 	//SafeWriteReader(path string, r io.Reader) (err error)
@@ -91,32 +91,53 @@ func GetDeploys(obj Client) (map[uint64]string, error) {
 	return result, nil
 }
 
-// GetDeploy gets the latest deploy if id == 0, otherwise it returns the deploy
-// with the specified id if it exists.
+// calculateMax is a helper function.
+func calculateMax(deploys map[uint64]string) uint64 {
+	var max uint64
+	for i := range deploys {
+		if i > max {
+			max = i
+		}
+	}
+	return max
+}
+
+// GetDeploy returns the deploy with the specified id if it exists. If you input
+// an id of 0, you'll get back an empty deploy without error. This is useful so
+// that you can pass through this function easily.
+// FIXME: implement this more efficiently so that it doesn't have to download *all* the old deploys from etcd!
 func GetDeploy(obj Client, id uint64) (string, error) {
 	result, err := GetDeploys(obj)
 	if err != nil {
 		return "", err
 	}
-	if id != 0 {
-		str, exists := result[id]
-		if !exists {
-			return "", fmt.Errorf("can't find id `%d`", id)
-		}
-		return str, nil
-	}
-	// find the latest id
-	var max uint64
-	for i := range result {
-		if i > max {
-			max = i
-		}
-	}
-	if max == 0 {
-		return "", nil // no results yet
-	}
-	return result[max], nil
+
+	// don't optimize this test to the top, because it's better to catch an
+	// etcd failure early if we can, rather than fail later when we deploy!
+	if id == 0 {
+		return "", nil // no results yet
+	}
+
+	str, exists := result[id]
+	if !exists {
+		return "", fmt.Errorf("can't find id `%d`", id)
+	}
+	return str, nil
 }
 
 // GetMaxDeployID returns the maximum deploy id. If none are found, this returns
 // zero. You must increment the returned value by one when you add a deploy. If
 // two or more clients race for this deploy id, then the loser is not committed,
 // and must repeat this GetMaxDeployID process until it succeeds with a commit!
 func GetMaxDeployID(obj Client) (uint64, error) {
 	// TODO: this was all implemented super inefficiently, fix up for perf!
 	deploys, err := GetDeploys(obj) // get previous deploys
 	if err != nil {
 		return 0, errwrap.Wrapf(err, "error getting previous deploys")
 	}
 	// find the latest id
+	max := calculateMax(deploys)
 	return max, nil // found! (or zero)
 }
 
 // AddDeploy adds a new deploy. It takes an id and ensures it's sequential. If
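The `calculateMax` helper factored out in this hunk is small enough to exercise on its own. This standalone copy (trimmed of the etcd plumbing) shows the behavior, including the zero value doubling as "no deploys yet":

```go
package main

import "fmt"

// calculateMax returns the largest deploy id key in the map, or zero if the
// map is empty or nil.
func calculateMax(deploys map[uint64]string) uint64 {
	var max uint64
	for i := range deploys {
		if i > max {
			max = i
		}
	}
	return max
}

func main() {
	deploys := map[uint64]string{1: "first", 3: "third", 2: "second"}
	fmt.Println(calculateMax(deploys)) // 3
	fmt.Println(calculateMax(nil))     // 0
}
```

A caller adding a new deploy would use `calculateMax(deploys) + 1` as the next sequential id, which is exactly the race that `GetMaxDeployID`'s doc comment warns about.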
10
etcd/etcd.go
@@ -37,12 +37,12 @@
 //
 // Smoke testing:
 // mkdir /tmp/mgmt{A..E}
-// ./mgmt run --yaml examples/etcd1a.yaml --hostname h1 --tmp-prefix --no-pgp
-// ./mgmt run --yaml examples/etcd1b.yaml --hostname h2 --tmp-prefix --no-pgp --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2381 --server-urls http://127.0.0.1:2382
-// ./mgmt run --yaml examples/etcd1c.yaml --hostname h3 --tmp-prefix --no-pgp --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2383 --server-urls http://127.0.0.1:2384
+// ./mgmt run --hostname h1 --tmp-prefix --no-pgp yaml --yaml examples/yaml/etcd1a.yaml
+// ./mgmt run --hostname h2 --tmp-prefix --no-pgp --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2381 --server-urls http://127.0.0.1:2382 yaml --yaml examples/yaml/etcd1b.yaml
+// ./mgmt run --hostname h3 --tmp-prefix --no-pgp --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2383 --server-urls http://127.0.0.1:2384 yaml --yaml examples/yaml/etcd1c.yaml
 // ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 put /_mgmt/idealClusterSize 3
-// ./mgmt run --yaml examples/etcd1d.yaml --hostname h4 --tmp-prefix --no-pgp --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2385 --server-urls http://127.0.0.1:2386
-// ./mgmt run --yaml examples/etcd1e.yaml --hostname h5 --tmp-prefix --no-pgp --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2387 --server-urls http://127.0.0.1:2388
+// ./mgmt run --hostname h4 --tmp-prefix --no-pgp --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2385 --server-urls http://127.0.0.1:2386 yaml --yaml examples/yaml/etcd1d.yaml
+// ./mgmt run --hostname h5 --tmp-prefix --no-pgp --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2387 --server-urls http://127.0.0.1:2388 yaml --yaml examples/yaml/etcd1e.yaml
 // ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 member list
 // ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2381 put /_mgmt/idealClusterSize 5
 // ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2381 member list
@@ -55,7 +55,7 @@ func runEtcd() (func() error, error) {
 	if err != nil {
 		return nil, errwrap.Wrapf(err, "error getting binary path")
 	}
-	cmd := exec.Command(cmdName, "run", "--tmp-prefix")
+	cmd := exec.Command(cmdName, "run", "--tmp-prefix", "empty") // empty GAPI
 	if err := cmd.Start(); err != nil {
 		return nil, errwrap.Wrapf(err, "error starting command %v", cmd)
 	}
@@ -28,7 +28,8 @@ import (
 // A successful call returns err == nil, not err == EOF. Because ReadAll is
 // defined to read from src until EOF, it does not treat an EOF from Read
 // as an error to be reported.
-//func ReadAll(r io.Reader) ([]byte, error) {
+//func (obj *Fs) ReadAll(r io.Reader) ([]byte, error) {
+//	// NOTE: doesn't need Fs, same as ioutil.ReadAll package
 //	return afero.ReadAll(r)
 //}
@@ -1,5 +1,5 @@
 # read and print environment variable
-# env TEST=123 EMPTY= ./mgmt run --tmp-prefix --lang=examples/lang/env0.mcl --converged-timeout=5
+# env TEST=123 EMPTY= ./mgmt run --tmp-prefix --converged-timeout=5 lang --lang=examples/lang/env0.mcl
 
 import "fmt"
 import "sys"
@@ -1,9 +1,9 @@
 # run this example with these commands
 # watch -n 0.1 'tail *' # run this in /tmp/mgmt/
-# time ./mgmt run --lang examples/lang/exchange0.mcl --hostname h1 --ideal-cluster-size 1 --tmp-prefix --no-pgp
-# time ./mgmt run --lang examples/lang/exchange0.mcl --hostname h2 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2381 --server-urls http://127.0.0.1:2382 --tmp-prefix --no-pgp
-# time ./mgmt run --lang examples/lang/exchange0.mcl --hostname h3 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2383 --server-urls http://127.0.0.1:2384 --tmp-prefix --no-pgp
-# time ./mgmt run --lang examples/lang/exchange0.mcl --hostname h4 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2385 --server-urls http://127.0.0.1:2386 --tmp-prefix --no-pgp
+# time ./mgmt run --hostname h1 --ideal-cluster-size 1 --tmp-prefix --no-pgp lang --lang examples/lang/exchange0.mcl
+# time ./mgmt run --hostname h2 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2381 --server-urls http://127.0.0.1:2382 --tmp-prefix --no-pgp lang --lang examples/lang/exchange0.mcl
+# time ./mgmt run --hostname h3 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2383 --server-urls http://127.0.0.1:2384 --tmp-prefix --no-pgp lang --lang examples/lang/exchange0.mcl
+# time ./mgmt run --hostname h4 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2385 --server-urls http://127.0.0.1:2386 --tmp-prefix --no-pgp lang --lang examples/lang/exchange0.mcl
 
 import "sys"
5
examples/lang/import0.mcl
Normal file
@@ -0,0 +1,5 @@
+import "fmt"
+
+test "printf" {
+	anotherstr => fmt.printf("the answer is: %d", 42),
+}
@@ -0,0 +1 @@
+# empty metadata file (use defaults)
3
examples/lang/modules/badexample1/metadata.yaml
Normal file
@@ -0,0 +1,3 @@
+main: "main/hello.mcl" # this is not the default, the default is "main.mcl"
+files: "files/" # these are some extra files we can use (is the default)
+path: "path/" # where to look for modules, defaults to using a global
@@ -0,0 +1,2 @@
+main: "main.mcl"
+files: "files/" # these are some extra files we can use (is the default)
@@ -0,0 +1,2 @@
+main: "main.mcl"
+files: "files/" # these are some extra files we can use (is the default)
@@ -31,14 +31,14 @@ type MyGAPI struct {
 	Name     string // graph name
 	Interval uint   // refresh interval, 0 to never refresh
 
-	data        gapi.Data
+	data        *gapi.Data
 	initialized bool
 	closeChan   chan struct{}
 	wg          sync.WaitGroup // sync group for tunnel go routines
 }
 
 // NewMyGAPI creates a new MyGAPI struct and calls Init().
-func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
+func NewMyGAPI(data *gapi.Data, name string, interval uint) (*MyGAPI, error) {
 	obj := &MyGAPI{
 		Name:     name,
 		Interval: interval,
@@ -46,28 +46,8 @@ func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
 	return obj, obj.Init(data)
 }
 
-// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
-// should take the prefix of the registered name. On activation, if there are
-// any validation problems, you should return an error. If this was not
-// activated, then you should return a nil GAPI and a nil error.
-func (obj *MyGAPI) Cli(c *cli.Context, fs engine.Fs) (*gapi.Deploy, error) {
-	if s := c.String(obj.Name); c.IsSet(obj.Name) {
-		if s != "" {
-			return nil, fmt.Errorf("input is not empty")
-		}
-
-		return &gapi.Deploy{
-			Name: obj.Name,
-			Noop: c.GlobalBool("noop"),
-			Sema: c.GlobalInt("sema"),
-			GAPI: &MyGAPI{},
-		}, nil
-	}
-	return nil, nil // we weren't activated!
-}
-
-// CliFlags returns a list of flags used by this deploy subcommand.
-func (obj *MyGAPI) CliFlags() []cli.Flag {
+// CliFlags returns a list of flags used by the passed in subcommand.
+func (obj *MyGAPI) CliFlags(string) []cli.Flag {
 	return []cli.Flag{
 		cli.StringFlag{
 			Name:  obj.Name,
@@ -77,8 +57,26 @@ func (obj *MyGAPI) CliFlags() []cli.Flag {
 	}
 }
 
+// Cli takes a cli.Context and some other info, and returns our GAPI. If there
+// are any validation problems, you should return an error.
+func (obj *MyGAPI) Cli(cliInfo *gapi.CliInfo) (*gapi.Deploy, error) {
+	c := cliInfo.CliContext
+	//fs := cliInfo.Fs // copy files from local filesystem *into* this fs...
+	//debug := cliInfo.Debug
+	//logf := func(format string, v ...interface{}) {
+	//	cliInfo.Logf(Name+": "+format, v...)
+	//}
+
+	return &gapi.Deploy{
+		Name: obj.Name,
+		Noop: c.GlobalBool("noop"),
+		Sema: c.GlobalInt("sema"),
+		GAPI: &MyGAPI{},
+	}, nil
+}
+
 // Init initializes the MyGAPI struct.
-func (obj *MyGAPI) Init(data gapi.Data) error {
+func (obj *MyGAPI) Init(data *gapi.Data) error {
 	if obj.initialized {
 		return fmt.Errorf("already initialized")
 	}
@@ -36,14 +36,14 @@ type MyGAPI struct {
 	Name     string // graph name
 	Interval uint   // refresh interval, 0 to never refresh
 
-	data        gapi.Data
+	data        *gapi.Data
 	initialized bool
 	closeChan   chan struct{}
 	wg          sync.WaitGroup // sync group for tunnel go routines
 }
 
 // NewMyGAPI creates a new MyGAPI struct and calls Init().
-func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
+func NewMyGAPI(data *gapi.Data, name string, interval uint) (*MyGAPI, error) {
 	obj := &MyGAPI{
 		Name:     name,
 		Interval: interval,
@@ -51,28 +51,8 @@ func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
 	return obj, obj.Init(data)
 }
 
-// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
-// should take the prefix of the registered name. On activation, if there are
-// any validation problems, you should return an error. If this was not
-// activated, then you should return a nil GAPI and a nil error.
-func (obj *MyGAPI) Cli(c *cli.Context, fs engine.Fs) (*gapi.Deploy, error) {
-	if s := c.String(obj.Name); c.IsSet(obj.Name) {
-		if s != "" {
-			return nil, fmt.Errorf("input is not empty")
-		}
-
-		return &gapi.Deploy{
-			Name: obj.Name,
-			Noop: c.GlobalBool("noop"),
-			Sema: c.GlobalInt("sema"),
-			GAPI: &MyGAPI{},
-		}, nil
-	}
-	return nil, nil // we weren't activated!
-}
-
-// CliFlags returns a list of flags used by this deploy subcommand.
-func (obj *MyGAPI) CliFlags() []cli.Flag {
+// CliFlags returns a list of flags used by the passed in subcommand.
+func (obj *MyGAPI) CliFlags(string) []cli.Flag {
 	return []cli.Flag{
 		cli.StringFlag{
 			Name:  obj.Name,
@@ -82,8 +62,26 @@ func (obj *MyGAPI) CliFlags() []cli.Flag {
 	}
 }
 
+// Cli takes a cli.Context and some other info, and returns our GAPI. If there
+// are any validation problems, you should return an error.
+func (obj *MyGAPI) Cli(cliInfo *gapi.CliInfo) (*gapi.Deploy, error) {
+	c := cliInfo.CliContext
+	//fs := cliInfo.Fs // copy files from local filesystem *into* this fs...
+	//debug := cliInfo.Debug
+	//logf := func(format string, v ...interface{}) {
+	//	cliInfo.Logf(Name+": "+format, v...)
+	//}
+
+	return &gapi.Deploy{
+		Name: obj.Name,
+		Noop: c.GlobalBool("noop"),
+		Sema: c.GlobalInt("sema"),
+		GAPI: &MyGAPI{},
+	}, nil
+}
+
 // Init initializes the MyGAPI struct.
-func (obj *MyGAPI) Init(data gapi.Data) error {
+func (obj *MyGAPI) Init(data *gapi.Data) error {
 	if obj.initialized {
 		return fmt.Errorf("already initialized")
 	}
@@ -31,14 +31,14 @@ type MyGAPI struct {
 	Name     string // graph name
 	Interval uint   // refresh interval, 0 to never refresh
 
-	data        gapi.Data
+	data        *gapi.Data
 	initialized bool
 	closeChan   chan struct{}
 	wg          sync.WaitGroup // sync group for tunnel go routines
 }
 
 // NewMyGAPI creates a new MyGAPI struct and calls Init().
-func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
+func NewMyGAPI(data *gapi.Data, name string, interval uint) (*MyGAPI, error) {
 	obj := &MyGAPI{
 		Name:     name,
 		Interval: interval,
@@ -46,28 +46,8 @@ func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
 	return obj, obj.Init(data)
 }
 
-// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
-// should take the prefix of the registered name. On activation, if there are
-// any validation problems, you should return an error. If this was not
-// activated, then you should return a nil GAPI and a nil error.
-func (obj *MyGAPI) Cli(c *cli.Context, fs engine.Fs) (*gapi.Deploy, error) {
-	if s := c.String(obj.Name); c.IsSet(obj.Name) {
-		if s != "" {
-			return nil, fmt.Errorf("input is not empty")
-		}
-
-		return &gapi.Deploy{
-			Name: obj.Name,
-			Noop: c.GlobalBool("noop"),
-			Sema: c.GlobalInt("sema"),
-			GAPI: &MyGAPI{},
-		}, nil
-	}
-	return nil, nil // we weren't activated!
-}
-
-// CliFlags returns a list of flags used by this deploy subcommand.
-func (obj *MyGAPI) CliFlags() []cli.Flag {
+// CliFlags returns a list of flags used by the passed in subcommand.
+func (obj *MyGAPI) CliFlags(string) []cli.Flag {
 	return []cli.Flag{
 		cli.StringFlag{
 			Name:  obj.Name,
@@ -77,8 +57,26 @@ func (obj *MyGAPI) CliFlags() []cli.Flag {
 	}
 }
 
+// Cli takes a cli.Context and some other info, and returns our GAPI. If there
+// are any validation problems, you should return an error.
+func (obj *MyGAPI) Cli(cliInfo *gapi.CliInfo) (*gapi.Deploy, error) {
+	c := cliInfo.CliContext
+	//fs := cliInfo.Fs // copy files from local filesystem *into* this fs...
+	//debug := cliInfo.Debug
+	//logf := func(format string, v ...interface{}) {
+	//	cliInfo.Logf(Name+": "+format, v...)
+	//}
+
+	return &gapi.Deploy{
+		Name: obj.Name,
+		Noop: c.GlobalBool("noop"),
+		Sema: c.GlobalInt("sema"),
+		GAPI: &MyGAPI{},
+	}, nil
+}
+
 // Init initializes the MyGAPI struct.
-func (obj *MyGAPI) Init(data gapi.Data) error {
+func (obj *MyGAPI) Init(data *gapi.Data) error {
 	if obj.initialized {
 		return fmt.Errorf("already initialized")
 	}
@@ -32,14 +32,14 @@ type MyGAPI struct {
 	Count    uint // number of resources to create
 	Interval uint // refresh interval, 0 to never refresh
 
-	data        gapi.Data
+	data        *gapi.Data
 	initialized bool
 	closeChan   chan struct{}
 	wg          sync.WaitGroup // sync group for tunnel go routines
 }
 
 // NewMyGAPI creates a new MyGAPI struct and calls Init().
-func NewMyGAPI(data gapi.Data, name string, interval uint, count uint) (*MyGAPI, error) {
+func NewMyGAPI(data *gapi.Data, name string, interval uint, count uint) (*MyGAPI, error) {
 	obj := &MyGAPI{
 		Name:  name,
 		Count: count,
@@ -48,28 +48,8 @@ func NewMyGAPI(data gapi.Data, name string, interval uint, count uint) (*MyGAPI,
 	return obj, obj.Init(data)
 }
 
-// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
-// should take the prefix of the registered name. On activation, if there are
-// any validation problems, you should return an error. If this was not
-// activated, then you should return a nil GAPI and a nil error.
-func (obj *MyGAPI) Cli(c *cli.Context, fs engine.Fs) (*gapi.Deploy, error) {
-	if s := c.String(obj.Name); c.IsSet(obj.Name) {
-		if s != "" {
-			return nil, fmt.Errorf("input is not empty")
-		}
-
-		return &gapi.Deploy{
-			Name: obj.Name,
-			Noop: c.GlobalBool("noop"),
-			Sema: c.GlobalInt("sema"),
-			GAPI: &MyGAPI{},
-		}, nil
-	}
-	return nil, nil // we weren't activated!
-}
-
-// CliFlags returns a list of flags used by this deploy subcommand.
-func (obj *MyGAPI) CliFlags() []cli.Flag {
+// CliFlags returns a list of flags used by the passed in subcommand.
+func (obj *MyGAPI) CliFlags(string) []cli.Flag {
 	return []cli.Flag{
 		cli.StringFlag{
 			Name:  obj.Name,
@@ -79,8 +59,26 @@ func (obj *MyGAPI) CliFlags() []cli.Flag {
 	}
 }
 
+// Cli takes a cli.Context and some other info, and returns our GAPI. If there
+// are any validation problems, you should return an error.
+func (obj *MyGAPI) Cli(cliInfo *gapi.CliInfo) (*gapi.Deploy, error) {
+	c := cliInfo.CliContext
+	//fs := cliInfo.Fs // copy files from local filesystem *into* this fs...
+	//debug := cliInfo.Debug
+	//logf := func(format string, v ...interface{}) {
+	//	cliInfo.Logf(Name+": "+format, v...)
+	//}
+
+	return &gapi.Deploy{
+		Name: obj.Name,
+		Noop: c.GlobalBool("noop"),
+		Sema: c.GlobalInt("sema"),
+		GAPI: &MyGAPI{},
+	}, nil
+}
+
 // Init initializes the MyGAPI struct.
-func (obj *MyGAPI) Init(data gapi.Data) error {
+func (obj *MyGAPI) Init(data *gapi.Data) error {
 	if obj.initialized {
 		return fmt.Errorf("already initialized")
 	}
@@ -31,14 +31,14 @@ type MyGAPI struct {
 	Name     string // graph name
 	Interval uint   // refresh interval, 0 to never refresh
 
-	data        gapi.Data
+	data        *gapi.Data
 	initialized bool
 	closeChan   chan struct{}
 	wg          sync.WaitGroup // sync group for tunnel go routines
 }
 
 // NewMyGAPI creates a new MyGAPI struct and calls Init().
-func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
+func NewMyGAPI(data *gapi.Data, name string, interval uint) (*MyGAPI, error) {
 	obj := &MyGAPI{
 		Name:     name,
 		Interval: interval,
@@ -46,28 +46,8 @@ func NewMyGAPI(data gapi.Data, name string, interval uint) (*MyGAPI, error) {
 	return obj, obj.Init(data)
 }
 
-// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
-// should take the prefix of the registered name. On activation, if there are
-// any validation problems, you should return an error. If this was not
-// activated, then you should return a nil GAPI and a nil error.
-func (obj *MyGAPI) Cli(c *cli.Context, fs engine.Fs) (*gapi.Deploy, error) {
-	if s := c.String(obj.Name); c.IsSet(obj.Name) {
-		if s != "" {
-			return nil, fmt.Errorf("input is not empty")
-		}
-
-		return &gapi.Deploy{
-			Name: obj.Name,
-			Noop: c.GlobalBool("noop"),
-			Sema: c.GlobalInt("sema"),
-			GAPI: &MyGAPI{},
-		}, nil
-	}
-	return nil, nil // we weren't activated!
-}
-
-// CliFlags returns a list of flags used by this deploy subcommand.
-func (obj *MyGAPI) CliFlags() []cli.Flag {
+// CliFlags returns a list of flags used by the passed in subcommand.
+func (obj *MyGAPI) CliFlags(string) []cli.Flag {
 	return []cli.Flag{
 		cli.StringFlag{
 			Name:  obj.Name,
@@ -77,8 +57,26 @@ func (obj *MyGAPI) CliFlags() []cli.Flag {
 	}
 }
 
+// Cli takes a cli.Context and some other info, and returns our GAPI. If there
+// are any validation problems, you should return an error.
+func (obj *MyGAPI) Cli(cliInfo *gapi.CliInfo) (*gapi.Deploy, error) {
+	c := cliInfo.CliContext
+	//fs := cliInfo.Fs // copy files from local filesystem *into* this fs...
+	//debug := cliInfo.Debug
+	//logf := func(format string, v ...interface{}) {
+	//	cliInfo.Logf(Name+": "+format, v...)
+	//}
+
+	return &gapi.Deploy{
+		Name: obj.Name,
+		Noop: c.GlobalBool("noop"),
+		Sema: c.GlobalInt("sema"),
+		GAPI: &MyGAPI{},
+	}, nil
+}
+
 // Init initializes the MyGAPI struct.
-func (obj *MyGAPI) Init(data gapi.Data) error {
+func (obj *MyGAPI) Init(data *gapi.Data) error {
 	if obj.initialized {
 		return fmt.Errorf("already initialized")
 	}
@@ -21,7 +21,6 @@ import (
 	"fmt"
 	"sync"
 
-	"github.com/purpleidea/mgmt/engine"
 	"github.com/purpleidea/mgmt/gapi"
 	"github.com/purpleidea/mgmt/pgraph"
 
@@ -39,44 +38,31 @@ func init() {
 
 // GAPI implements the main lang GAPI interface.
 type GAPI struct {
-	data        gapi.Data
+	data        *gapi.Data
 	initialized bool
 	closeChan   chan struct{}
 	wg          *sync.WaitGroup // sync group for tunnel go routines
 }
 
+// CliFlags returns a list of flags used by the specified subcommand.
+func (obj *GAPI) CliFlags(command string) []cli.Flag {
+	return []cli.Flag{}
+}
+
-// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
-// should take the prefix of the registered name. On activation, if there are
-// any validation problems, you should return an error. If this was not
-// activated, then you should return a nil GAPI and a nil error.
-func (obj *GAPI) Cli(c *cli.Context, fs engine.Fs) (*gapi.Deploy, error) {
-	if s := c.String(Name); c.IsSet(Name) {
-		if s == "" {
-			return nil, fmt.Errorf("input code is empty")
-		}
-
-		return &gapi.Deploy{
-			Name: Name,
-			//Noop: false,
-			GAPI: &GAPI{},
-		}, nil
-	}
-	return nil, nil // we weren't activated!
-}
-
-// CliFlags returns a list of flags used by this deploy subcommand.
-func (obj *GAPI) CliFlags() []cli.Flag {
-	return []cli.Flag{
-		cli.StringFlag{
-			Name:  Name,
-			Value: "",
-			Usage: "empty graph to deploy",
-		},
-	}
-}
+func (obj *GAPI) Cli(*gapi.CliInfo) (*gapi.Deploy, error) {
+	return &gapi.Deploy{
+		Name: Name,
+		//Noop: false,
+		GAPI: &GAPI{},
+	}, nil
+}
 
 // Init initializes the lang GAPI struct.
-func (obj *GAPI) Init(data gapi.Data) error {
+func (obj *GAPI) Init(data *gapi.Data) error {
 	if obj.initialized {
 		return fmt.Errorf("already initialized")
 	}
gapi/gapi.go (86 lines changed)
@@ -28,6 +28,18 @@ import (
 	"github.com/urfave/cli"
 )
 
+const (
+	// CommandRun is the identifier for the "run" command. It is distinct
+	// from the other commands, because it can run with any front-end.
+	CommandRun = "run"
+
+	// CommandDeploy is the identifier for the "deploy" command.
+	CommandDeploy = "deploy"
+
+	// CommandGet is the identifier for the "get" (download) command.
+	CommandGet = "get"
+)
+
 // RegisteredGAPIs is a global map of all possible GAPIs which can be used. You
 // should never touch this map directly. Use methods like Register instead.
 var RegisteredGAPIs = make(map[string]func() GAPI) // must initialize this map
@@ -42,6 +54,19 @@ func Register(name string, fn func() GAPI) {
 	RegisteredGAPIs[name] = fn
 }
 
+// CliInfo is the set of input values passed into the Cli method so that the
+// GAPI can decide if it wants to activate, and if it does, the initial handles
+// it needs to use to do so.
+type CliInfo struct {
+	// CliContext is the struct that is used to transfer in user input.
+	CliContext *cli.Context
+
+	// Fs is the filesystem the Cli method should copy data into. It usually
+	// copies *from* the local filesystem using standard io functionality.
+	Fs engine.Fs
+
+	Debug bool
+	Logf  func(format string, v ...interface{})
+}
+
 // Data is the set of input values passed into the GAPI structs via Init.
 type Data struct {
 	Program string // name of the originating program
@@ -69,13 +94,60 @@ type Next struct {
 	Err error // if something goes wrong (use with or without exit!)
 }
 
-// GAPI is a Graph API that represents incoming graphs and change streams.
+// GAPI is a Graph API that represents incoming graphs and change streams. It is
+// the frontend interface that needs to be implemented to use the engine.
 type GAPI interface {
-	Cli(c *cli.Context, fs engine.Fs) (*Deploy, error)
-	CliFlags() []cli.Flag
+	// CliFlags is passed a Command constant specifying which command it is
+	// requesting the flags for. If an invalid or unsupported command is
+	// passed in, simply return an empty list. Similarly, it is not required
+	// to ever return any flags, and the GAPI may always return an empty
+	// list.
+	CliFlags(string) []cli.Flag
 
-	Init(Data) error               // initializes the GAPI and passes in useful data
-	Graph() (*pgraph.Graph, error) // returns the most recent pgraph
-	Next() chan Next               // returns a stream of switch events
-	Close() error                  // shutdown the GAPI
+	// Cli is run on each GAPI to give it a chance to decide if it wants to
+	// activate, and if it does, then it will return a deploy struct. During
+	// this time, it uses the CliInfo struct as useful information to decide
+	// what to do.
+	Cli(*CliInfo) (*Deploy, error)
+
+	// Init initializes the GAPI and passes in some useful data.
+	Init(*Data) error
+
+	// Graph returns the most recent pgraph. This is called by the engine on
+	// every event from Next().
+	Graph() (*pgraph.Graph, error)
+
+	// Next returns a stream of switch events. The engine will run Graph()
+	// to build a new graph after every Next event.
+	Next() chan Next
+
+	// Close shuts down the GAPI. It asks the GAPI to close, and must cause
+	// Next() to unblock even if it is currently blocked and waiting to send
+	// a new event.
+	Close() error
 }
 
+// GetInfo is the set of input values passed into the Get method for it to run.
+type GetInfo struct {
+	// CliContext is the struct that is used to transfer in user input.
+	CliContext *cli.Context
+
+	Noop   bool
+	Sema   int
+	Update bool
+
+	Debug bool
+	Logf  func(format string, v ...interface{})
+}
+
+// GettableGAPI represents additional methods that need to be implemented in
+// this GAPI so that it can be used with the `get` Command. The methods in this
+// interface are called independently from the rest of the GAPI interface, and
+// you must not rely on shared state from those methods. Logically, this should
+// probably be named "Getable", however the correct modern word is "Gettable".
+type GettableGAPI interface {
+	GAPI // the base interface must be implemented
+
+	// Get runs the get/download method.
+	Get(*GetInfo) error
+}

@@ -62,3 +62,9 @@ func CopyStringToFs(fs engine.Fs, str, dst string) error {
 func CopyDirToFs(fs engine.Fs, src, dst string) error {
 	return util.CopyDiskToFs(fs, src, dst, false)
 }
+
+// CopyDirContentsToFs copies a dir's contents from a src path on the local fs
+// to a dst path on fs.
+func CopyDirContentsToFs(fs engine.Fs, src, dst string) error {
+	return util.CopyDiskContentsToFs(fs, src, dst, false)
+}
@@ -32,7 +32,8 @@ import (
 
 func TestInstance0(t *testing.T) {
 	code := `
-	$root = getenv("MGMT_TEST_ROOT")
+	import "sys"
+	$root = sys.getenv("MGMT_TEST_ROOT")
 
 	file "${root}/mgmt-hello-world" {
 		content => "hello world from @purpleidea\n",
@@ -42,6 +43,10 @@ func TestInstance0(t *testing.T) {
 	m := Instance{
 		Hostname: "h1", // arbitrary
 		Preserve: true,
+		Debug:    false, // TODO: set to true if not too wordy
+		Logf: func(format string, v ...interface{}) {
+			t.Logf("test: "+format, v...)
+		},
 	}
 	if err := m.SimpleDeployLang(code); err != nil {
 		t.Errorf("failed with: %+v", err)
@@ -72,7 +77,8 @@ func TestInstance1(t *testing.T) {
 
 	{
 		code := util.Code(`
-		$root = getenv("MGMT_TEST_ROOT")
+		import "sys"
+		$root = sys.getenv("MGMT_TEST_ROOT")
 
 		file "${root}/mgmt-hello-world" {
 			content => "hello world from @purpleidea\n",
@@ -96,6 +102,10 @@ func TestInstance1(t *testing.T) {
 	m := Instance{
 		Hostname: "h1",
 		Preserve: true,
+		Debug:    false, // TODO: set to true if not too wordy
+		Logf: func(format string, v ...interface{}) {
+			t.Logf(fmt.Sprintf("test #%d: ", index)+format, v...)
+		},
 	}
 	err := m.SimpleDeployLang(code)
 	d := m.Dir()
@@ -155,10 +165,11 @@ func TestCluster1(t *testing.T) {
 
 	{
 		code := util.Code(`
-		$root = getenv("MGMT_TEST_ROOT")
+		import "sys"
+		$root = sys.getenv("MGMT_TEST_ROOT")
 
 		file "${root}/mgmt-hostname" {
-			content => "i am ${hostname()}\n",
+			content => "i am ${sys.hostname()}\n",
 			state => "exists",
 		}
 		`)
@@ -179,10 +190,11 @@ func TestCluster1(t *testing.T) {
 	}
 	{
 		code := util.Code(`
-		$root = getenv("MGMT_TEST_ROOT")
+		import "sys"
+		$root = sys.getenv("MGMT_TEST_ROOT")
 
 		file "${root}/mgmt-hostname" {
-			content => "i am ${hostname()}\n",
+			content => "i am ${sys.hostname()}\n",
 			state => "exists",
 		}
 		`)
@@ -212,6 +224,10 @@ func TestCluster1(t *testing.T) {
 	c := Cluster{
 		Hostnames: hosts,
 		Preserve:  true,
+		Debug:     false, // TODO: set to true if not too wordy
+		Logf: func(format string, v ...interface{}) {
+			t.Logf(fmt.Sprintf("test #%d: ", index)+format, v...)
+		},
 	}
 	err := c.SimpleDeployLang(code)
 	if d := c.Dir(); d != "" {
@@ -39,6 +39,9 @@ type Cluster struct {
 	// This is helpful for running analysis or tests on the output.
 	Preserve bool
 
+	// Logf is a logger which should be used.
+	Logf func(format string, v ...interface{})
+
 	// Debug enables more verbosity.
 	Debug bool
 
@@ -62,7 +65,8 @@ func (obj *Cluster) Init() error {
 		}
 	}
 
-	for _, h := range obj.Hostnames {
+	for _, hostname := range obj.Hostnames {
+		h := hostname
 		instancePrefix := path.Join(obj.dir, h)
 		if err := os.MkdirAll(instancePrefix, dirMode); err != nil {
 			return errwrap.Wrapf(err, "can't create instance directory")
@@ -71,7 +75,10 @@ func (obj *Cluster) Init() error {
 		obj.instances[h] = &Instance{
 			Hostname: h,
 			Preserve: obj.Preserve,
-			Debug:    obj.Debug,
+			Logf: func(format string, v ...interface{}) {
+				obj.Logf(fmt.Sprintf("instance <%s>: ", h)+format, v...)
+			},
+			Debug: obj.Debug,
 
 			dir: instancePrefix,
 		}
@@ -75,6 +75,9 @@ type Instance struct {
 	// This is helpful for running analysis or tests on the output.
 	Preserve bool
 
+	// Logf is a logger which should be used.
+	Logf func(format string, v ...interface{})
+
 	// Debug enables more verbosity.
 	Debug bool
 
@@ -205,6 +208,9 @@ func (obj *Instance) Run(seeds []*Instance) error {
 		//s := fmt.Sprintf("--seeds=%s", strings.Join(urls, ","))
 		cmdArgs = append(cmdArgs, s)
 	}
+	gapi := "empty" // empty GAPI (for now)
+	cmdArgs = append(cmdArgs, gapi)
+	obj.Logf("run: %s %s", cmdName, strings.Join(cmdArgs, " "))
 	obj.cmd = exec.Command(cmdName, cmdArgs...)
 	obj.cmd.Env = []string{
 		fmt.Sprintf("MGMT_TEST_ROOT=%s", obj.testRootDirectory),
@@ -369,8 +375,12 @@ func (obj *Instance) DeployLang(code string) error {
 		"--seeds", obj.clientURL,
 		"lang", "--lang", filename,
 	}
+	obj.Logf("run: %s %s", cmdName, strings.Join(cmdArgs, " "))
 	cmd := exec.Command(cmdName, cmdArgs...)
-	if err := cmd.Run(); err != nil {
+
+	stdoutStderr, err := cmd.CombinedOutput() // does cmd.Run() for us!
+	obj.Logf("stdout/stderr:\n%s", stdoutStderr)
+	if err != nil {
 		return errwrap.Wrapf(err, "can't run deploy")
 	}
 	return nil
lang/download.go (new file, 153 lines)
@@ -0,0 +1,153 @@
+// Mgmt
+// Copyright (C) 2013-2018+ James Shubin and the project contributors
+// Written by James Shubin <james@shubin.ca> and the project contributors
+//
+// This program is free software: you can redistribute it and/or modify
+// it under the terms of the GNU General Public License as published by
+// the Free Software Foundation, either version 3 of the License, or
+// (at your option) any later version.
+//
+// This program is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License
+// along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+package lang // TODO: move this into a sub package of lang/$name?
+
+import (
+	"fmt"
+	"os"
+	"path"
+	"strings"
+
+	"github.com/purpleidea/mgmt/lang/interfaces"
+
+	errwrap "github.com/pkg/errors"
+	git "gopkg.in/src-d/go-git.v4"
+)
+
+// Downloader implements the Downloader interface. It provides a mechanism to
+// pull down new code from the internet. This is usually done with git.
+type Downloader struct {
+	info *interfaces.DownloadInfo
+
+	// Depth is the max recursion depth that we should descend to. A
+	// negative value means infinite. This is usually the default.
+	Depth int
+
+	// Retry is the max number of retries we should run if we encounter a
+	// network error. A negative value means infinite. The default is
+	// usually zero.
+	Retry int
+
+	// TODO: add a retry backoff parameter
+}
+
+// Init initializes the downloader with some core structures we'll need.
+func (obj *Downloader) Init(info *interfaces.DownloadInfo) error {
+	obj.info = info
+	return nil
+}
+
+// Get runs a single download of an import and stores it on disk.
+// XXX: this should only touch the filesystem via obj.info.Fs, but that is not
+// implemented at the moment, so we cheat and use the local fs directly. This is
+// not disastrous, since we only run Get on a local fs, since we don't download
+// to etcdfs directly with the downloader during a deploy. This is because we'd
+// need to implement the afero.Fs -> billy.Filesystem mapping layer.
+func (obj *Downloader) Get(info *interfaces.ImportData, modulesPath string) error {
+	if info == nil {
+		return fmt.Errorf("empty import information")
+	}
+	if info.URL == "" {
+		return fmt.Errorf("can't clone from empty URL")
+	}
+	if modulesPath == "" || !strings.HasSuffix(modulesPath, "/") || !strings.HasPrefix(modulesPath, "/") {
+		return fmt.Errorf("module path (`%s`) must be an absolute dir", modulesPath)
+	}
+	if stat, err := obj.info.Fs.Stat(modulesPath); err != nil || !stat.IsDir() {
+		if err == nil {
+			return fmt.Errorf("module path (`%s`) must be a dir", modulesPath)
+		}
+		if os.IsNotExist(err) {
+			return fmt.Errorf("module path (`%s`) must exist", modulesPath)
+		}
+		return errwrap.Wrapf(err, "could not read module path (`%s`)", modulesPath)
+	}
+
+	if info.IsSystem || info.IsLocal {
+		// NOTE: this doesn't prevent us from downloading from a remote
+		// git repo that is actually a .git file path instead of HTTP...
+		return fmt.Errorf("can only download remote repos")
+	}
+	// TODO: error early if we're provided *ImportData that we can't act on
+
+	pull := false
+	dir := modulesPath + info.Path // TODO: is this dir unique?
+	isBare := false
+	options := &git.CloneOptions{
+		URL: info.URL,
+		// TODO: do we want to add an option for infinite recursion here?
+		RecurseSubmodules: git.DefaultSubmoduleRecursionDepth,
+	}
+
+	msg := fmt.Sprintf("downloading `%s` to: `%s`", info.URL, dir)
+	if obj.info.Noop {
+		msg = "(noop) " + msg // add prefix
+	}
+	obj.info.Logf(msg)
+	if obj.info.Debug {
+		obj.info.Logf("info: `%+v`", info)
+		obj.info.Logf("options: `%+v`", options)
+	}
+	if obj.info.Noop {
+		return nil // done early
+	}
+	// FIXME: replace with:
+	// `git.Clone(s storage.Storer, worktree billy.Filesystem, o *CloneOptions)`
+	// that uses an `fs engine.Fs` wrapped to the git Filesystem interface:
+	// `billyFs := desfacer.New(obj.info.Fs)`
+	// TODO: repo, err := git.Clone(??? storage.Storer, billyFs, options)
+	repo, err := git.PlainClone(path.Clean(dir), isBare, options)
+	if err == git.ErrRepositoryAlreadyExists {
+		if obj.info.Update {
+			pull = true // make sure to pull latest...
+		}
+	} else if err != nil {
+		return errwrap.Wrapf(err, "can't clone repo: `%s` to: `%s`", info.URL, dir)
+	}
+
+	worktree, err := repo.Worktree()
+	if err != nil {
+		return errwrap.Wrapf(err, "can't get working tree: `%s`", dir)
+	}
+	if worktree == nil {
+		// FIXME: not sure how we're supposed to handle this scenario...
+		return fmt.Errorf("can't work with nil work tree for: `%s`", dir)
+	}
+
+	// TODO: do we need to checkout master first, before pulling?
+	if pull {
+		options := &git.PullOptions{
+			// TODO: do we want to add an option for infinite recursion here?
+			RecurseSubmodules: git.DefaultSubmoduleRecursionDepth,
+		}
+		err := worktree.Pull(options)
+		if err != nil && err != git.NoErrAlreadyUpToDate {
+			return errwrap.Wrapf(err, "can't pull latest from: `%s`", info.URL)
+		}
+	}
+
+	// TODO: checkout the requested sha1/tag if one was specified...
+	// if err := worktree.Checkout(opts *CheckoutOptions)
+
+	// does the repo have a metadata file present? (we'll validate it later)
+	if _, err := obj.info.Fs.Stat(dir + interfaces.MetadataFilename); err != nil {
+		return errwrap.Wrapf(err, "could not read repo metadata file `%s` in its root", interfaces.MetadataFilename)
+	}
+
+	return nil
+}
@@ -29,6 +29,7 @@ import (
)

func init() {
	// FIXME: should this be named sprintf instead?
	funcs.ModuleRegister(moduleName, "printf", func() interfaces.Func { return &PrintfFunc{} })
}

@@ -16,9 +16,9 @@
 // along with this program. If not, see <http://www.gnu.org/licenses/>.

 // test with:
-// time ./mgmt run --lang examples/lang/schedule0.mcl --hostname h1 --ideal-cluster-size 1 --tmp-prefix --no-pgp
-// time ./mgmt run --lang examples/lang/schedule0.mcl --hostname h2 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2381 --server-urls http://127.0.0.1:2382 --tmp-prefix --no-pgp
-// time ./mgmt run --lang examples/lang/schedule0.mcl --hostname h3 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2383 --server-urls http://127.0.0.1:2384 --tmp-prefix --no-pgp
+// time ./mgmt run --hostname h1 --ideal-cluster-size 1 --tmp-prefix --no-pgp lang --lang examples/lang/schedule0.mcl
+// time ./mgmt run --hostname h2 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2381 --server-urls http://127.0.0.1:2382 --tmp-prefix --no-pgp lang --lang examples/lang/schedule0.mcl
+// time ./mgmt run --hostname h3 --seeds http://127.0.0.1:2379 --client-urls http://127.0.0.1:2383 --server-urls http://127.0.0.1:2384 --tmp-prefix --no-pgp lang --lang examples/lang/schedule0.mcl
 // kill h2 (should see h1 and h3 pick [h1, h3] instead)
 // restart h2 (should see [h1, h3] as before)
 // kill h3 (should see h1 and h2 pick [h1, h2] instead)

@@ -81,16 +81,38 @@ func Lookup(name string) (interfaces.Func, error) {
 // prefix. This search automatically adds the period separator. So if you want
 // functions in the `fmt` package, search for `fmt`, not `fmt.` and it will find
 // all the correctly registered functions. This removes that prefix from the
-// result in the map keys that it returns.
-func LookupPrefix(prefix string) (map[string]interfaces.Func, error) {
-	result := make(map[string]interfaces.Func)
+// result in the map keys that it returns. If you search for an empty prefix,
+// then this will return all the top-level functions that aren't in a module.
+func LookupPrefix(prefix string) map[string]func() interfaces.Func {
+	result := make(map[string]func() interfaces.Func)
 	for name, f := range registeredFuncs {
+		// requested top-level functions, and no module separators...
+		if prefix == "" {
+			if !strings.Contains(name, ModuleSep) {
+				result[name] = f // copy
+			}
+			continue
+		}
 		sep := prefix + ModuleSep
 		if !strings.HasPrefix(name, sep) {
 			continue
 		}
-		s := strings.TrimPrefix(name, sep) // TODO: is it okay to remove the prefix?
-		result[s] = f() // build
+		s := strings.TrimPrefix(name, sep) // remove the prefix
+		result[s] = f // copy
 	}
-	return result, nil
+	return result
 }
+
+// Map returns a map from all registered function names to a function to return
+// that one. We return a copy of our internal registered function store so that
+// this result can be manipulated safely. We return the functions that produce
+// the Func interface because we might use this result to create multiple
+// functions, and each one must have its own unique memory address to work
+// properly.
+func Map() map[string]func() interfaces.Func {
+	m := make(map[string]func() interfaces.Func)
+	for name, fn := range registeredFuncs { // copy
+		m[name] = fn
+	}
+	return m
+}

lang/gapi.go
@@ -18,24 +18,33 @@
package lang

import (
	"bytes"
	"fmt"
	"strings"
	"sync"

	"github.com/purpleidea/mgmt/engine"
	"github.com/purpleidea/mgmt/gapi"
	"github.com/purpleidea/mgmt/lang/funcs"
	"github.com/purpleidea/mgmt/lang/interfaces"
	"github.com/purpleidea/mgmt/lang/unification"
	"github.com/purpleidea/mgmt/pgraph"
	"github.com/purpleidea/mgmt/util"

	multierr "github.com/hashicorp/go-multierror"
	errwrap "github.com/pkg/errors"
	"github.com/spf13/afero"
	"github.com/urfave/cli"
)

const (
	// Name is the name of this frontend.
	Name = "lang"
	// Start is the entry point filename that we use. It is arbitrary.
	Start = "/start." + FileNameExtension // FIXME: replace with a proper code entry point schema (directory schema)

	// flagModulePath is the name of the module-path flag.
	flagModulePath = "module-path"

	// flagDownload is the name of the download flag.
	flagDownload = "download"
)

func init() {
@@ -48,57 +57,321 @@ type GAPI struct {

	lang *Lang // lang struct

-	data gapi.Data
+	// this data struct is only available *after* Init, so as a result, it
+	// can not be used inside the Cli(...) method.
+	data        *gapi.Data
	initialized bool
	closeChan   chan struct{}
	wg          *sync.WaitGroup // sync group for tunnel go routines
}

// CliFlags returns a list of flags used by the specified subcommand.
func (obj *GAPI) CliFlags(command string) []cli.Flag {
	result := []cli.Flag{}
	modulePath := cli.StringFlag{
		Name:   flagModulePath,
		Value:  "", // empty by default
		Usage:  "choose the modules path (absolute)",
		EnvVar: "MGMT_MODULE_PATH",
	}

	// add this only to run (not needed for get or deploy)
	if command == gapi.CommandRun {
		runFlags := []cli.Flag{
			cli.BoolFlag{
				Name:  flagDownload,
				Usage: "download any missing imports (as the get command does)",
			},
			cli.BoolFlag{
				Name:  "update",
				Usage: "update all dependencies to the latest versions",
			},
		}
		result = append(result, runFlags...)
	}

	switch command {
	case gapi.CommandGet:
		flags := []cli.Flag{
			cli.IntFlag{
				Name:  "depth, d",
				Value: -1,
				Usage: "max recursion depth limit (-1 is unlimited)",
			},
			cli.IntFlag{
				Name:  "retry, r",
				Value: 0, // any error is a failure by default
				Usage: "max number of retries (-1 is unlimited)",
			},
			//modulePath, // already defined below in fallthrough
		}
		result = append(result, flags...)
		fallthrough // at the moment, we want the same code input arg...
	case gapi.CommandRun:
		fallthrough
	case gapi.CommandDeploy:
		flags := []cli.Flag{
			cli.StringFlag{
				Name:  fmt.Sprintf("%s, %s", Name, Name[0:1]),
				Value: "",
				Usage: "code to deploy",
			},
			// TODO: removed (temporarily?)
			//cli.BoolFlag{
			//	Name:  "stdin",
			//	Usage: "use passthrough stdin",
			//},
			modulePath,
		}
		result = append(result, flags...)
	default:
		return []cli.Flag{}
	}

	return result
}

// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
// should take the prefix of the registered name. On activation, if there are
// any validation problems, you should return an error. If this was not
// activated, then you should return a nil GAPI and a nil error.
func (obj *GAPI) Cli(c *cli.Context, fs engine.Fs) (*gapi.Deploy, error) {
	if s := c.String(Name); c.IsSet(Name) {
		if s == "" {
			return nil, fmt.Errorf("input code is empty")
		}

		// read through this local path, and store it in our file system
		// since our deploy should work anywhere in the cluster, let the
		// engine ensure that this file system is replicated everywhere!

		// TODO: single file input for now
		if err := gapi.CopyFileToFs(fs, s, Start); err != nil {
			return nil, errwrap.Wrapf(err, "can't copy code from `%s` to `%s`", s, Start)
		}

		return &gapi.Deploy{
			Name: Name,
			Noop: c.GlobalBool("noop"),
			Sema: c.GlobalInt("sema"),
			GAPI: &GAPI{
				InputURI: fs.URI(),
				// TODO: add properties here...
			},
		}, nil
// activated, then you should return a nil GAPI and a nil error. This is passed
// in a functional file system interface. For standalone usage, this will be a
// temporary memory-backed filesystem so that the same deploy API is used, and
// for normal clustered usage, this will be the normal implementation which is
// usually an etcd backed fs. At this point we should be copying the necessary
// local file system data into our fs for future use when the GAPI is running.
// IOW, running this Cli function, when activated, produces a deploy object
// which is run by our main loop. The difference between running from `deploy`
// or from `run` (both of which can activate this GAPI) is that `deploy` copies
// to an etcdFs, and `run` copies to a memFs. All GAPI's run off of the fs that
// is passed in.
func (obj *GAPI) Cli(cliInfo *gapi.CliInfo) (*gapi.Deploy, error) {
	c := cliInfo.CliContext
	cliContext := c.Parent()
	if cliContext == nil {
		return nil, fmt.Errorf("could not get cli context")
	}
	fs := cliInfo.Fs // copy files from local filesystem *into* this fs...
	prefix := ""     // TODO: do we need this?
	debug := cliInfo.Debug
	logf := func(format string, v ...interface{}) {
		cliInfo.Logf(Name+": "+format, v...)
	}
		return nil, nil // we weren't activated!
}

// CliFlags returns a list of flags used by this deploy subcommand.
func (obj *GAPI) CliFlags() []cli.Flag {
	return []cli.Flag{
		cli.StringFlag{
			Name:  fmt.Sprintf("%s, %s", Name, Name[0:1]),
			Value: "",
			Usage: "language code path to deploy",
	if !c.IsSet(Name) {
		return nil, nil // we weren't activated!
	}

	// empty by default (don't set for deploy, only download)
	modules := c.String(flagModulePath)
	if modules != "" && (!strings.HasPrefix(modules, "/") || !strings.HasSuffix(modules, "/")) {
		return nil, fmt.Errorf("module path is not an absolute directory")
	}

	// TODO: while reading through trees of metadata files, we could also
	// check the license compatibility of deps...

	osFs := afero.NewOsFs()
	readOnlyOsFs := afero.NewReadOnlyFs(osFs) // can't be readonly to dl!
	//bp := afero.NewBasePathFs(osFs, base) // TODO: can this prevent parent dir access?
	afs := &afero.Afero{Fs: readOnlyOsFs} // wrap so that we're implementing ioutil
	localFs := &util.Fs{Afero: afs}       // always the local fs
	downloadAfs := &afero.Afero{Fs: osFs}
	downloadFs := &util.Fs{Afero: downloadAfs} // TODO: use with a parent path preventer?

	// the fs input here is the local fs we're reading to get the files from
	// this is different from the fs variable which is our output dest!!!
	output, err := parseInput(c.String(Name), localFs)
	if err != nil {
		return nil, errwrap.Wrapf(err, "could not activate an input parser")
	}

	// no need to run recursion detection since this is the beginning
	// TODO: do the paths need to be cleaned for "../" before comparison?

	logf("lexing/parsing...")
	ast, err := LexParse(bytes.NewReader(output.Main))
	if err != nil {
		return nil, errwrap.Wrapf(err, "could not generate AST")
	}
	if debug {
		logf("behold, the AST: %+v", ast)
	}

	var downloader interfaces.Downloader
	if c.IsSet(flagDownload) && c.Bool(flagDownload) {
		downloadInfo := &interfaces.DownloadInfo{
			Fs: downloadFs, // the local fs!

			// flags are passed in during Init()
			Noop:   cliContext.Bool("noop"),
			Sema:   cliContext.Int("sema"),
			Update: c.Bool("update"),

			Debug: debug,
			Logf: func(format string, v ...interface{}) {
				// TODO: is this a sane prefix to use here?
				logf("get: "+format, v...)
			},
		}
		// this fulfills the interfaces.Downloader interface
		downloader = &Downloader{
			Depth: c.Int("depth"), // default of infinite is -1
			Retry: c.Int("retry"), // infinite is -1
		}
		if err := downloader.Init(downloadInfo); err != nil {
			return nil, errwrap.Wrapf(err, "could not initialize downloader")
		}
	}

	importGraph, err := pgraph.NewGraph("importGraph")
	if err != nil {
		return nil, errwrap.Wrapf(err, "could not create graph")
	}
	importVertex := &pgraph.SelfVertex{
		Name:  "",          // first node is the empty string
		Graph: importGraph, // store a reference to ourself
	}
	importGraph.AddVertex(importVertex)

	logf("init...")
	// init and validate the structure of the AST
	data := &interfaces.Data{
		Fs:         localFs,     // the local fs!
		Base:       output.Base, // base dir (absolute path) that this is rooted in
		Files:      output.Files,
		Imports:    importVertex,
		Metadata:   output.Metadata,
		Modules:    modules,
		Downloader: downloader,

		//World: obj.World, // TODO: do we need this?
		Prefix: prefix,
		Debug:  debug,
		Logf: func(format string, v ...interface{}) {
			// TODO: is this a sane prefix to use here?
			logf("ast: "+format, v...)
		},
	}
	// some of this might happen *after* interpolate in SetScope or Unify...
	if err := ast.Init(data); err != nil {
		return nil, errwrap.Wrapf(err, "could not init and validate AST")
	}

	logf("interpolating...")
	// interpolate strings and other expansionable nodes in AST
	interpolated, err := ast.Interpolate()
	if err != nil {
		return nil, errwrap.Wrapf(err, "could not interpolate AST")
	}

	// top-level, built-in, initial global scope
	scope := &interfaces.Scope{
		Variables: map[string]interfaces.Expr{
			"purpleidea": &ExprStr{V: "hello world!"}, // james says hi
			// TODO: change to a func when we can change hostname dynamically!
			"hostname": &ExprStr{V: ""}, // NOTE: empty b/c not used
		},
		// all the built-in top-level, core functions enter here...
		Functions: funcs.LookupPrefix(""),
	}

	logf("building scope...")
	// propagate the scope down through the AST...
	// We use SetScope because it follows all of the imports through. I did
	// not think we needed to pass in an initial scope because the download
	// operation should not depend on any initial scope values, since those
	// would all be runtime changes, and we do not support dynamic imports,
	// however, we need to since we're doing type unification to err early!
	if err := interpolated.SetScope(scope); err != nil { // empty initial scope!
		return nil, errwrap.Wrapf(err, "could not set scope")
	}

	// apply type unification
	unificationLogf := func(format string, v ...interface{}) {
		if debug { // unification only has debug messages...
			logf("unification: "+format, v...)
		}
	}
	logf("running type unification...")
	if err := unification.Unify(interpolated, unification.SimpleInvariantSolverLogger(unificationLogf)); err != nil {
		return nil, errwrap.Wrapf(err, "could not unify types")
	}

	// get the list of needed files (this is available after SetScope)
	fileList, err := CollectFiles(interpolated)
	if err != nil {
		return nil, errwrap.Wrapf(err, "could not collect files")
	}

	// add in our initial files

	// we can sometimes be missing our top-level metadata.yaml and main.mcl
	files := []string{}
	files = append(files, output.Files...)
	files = append(files, fileList...)

	// run some copy operations to add data into the filesystem
	for _, fn := range output.Workers {
		if err := fn(fs); err != nil {
			return nil, err
		}
	}

	// TODO: do we still need this, now that we have the Imports DAG?
	noDuplicates := util.StrRemoveDuplicatesInList(files)
	if len(noDuplicates) != len(files) {
		// programming error here or in this logical test
		return nil, fmt.Errorf("duplicates in file list found")
	}

	// sort by depth dependency order! (or mkdir -p all the dirs first)
	// TODO: is this natively already in a correctly sorted order?
	util.PathSlice(files).Sort() // sort it
	for _, src := range files { // absolute paths
		// rebase path src to root file system of "/" for etcdfs...
		dst, err := util.Rebase(src, output.Base, "/")
		if err != nil {
			// possible programming error
			return nil, errwrap.Wrapf(err, "malformed source file path: `%s`", src)
		}

		if strings.HasSuffix(src, "/") { // it's a dir
			// TODO: add more tests to this (it is actually CopyFs)
			if err := gapi.CopyDirToFs(fs, src, dst); err != nil {
				return nil, errwrap.Wrapf(err, "can't copy dir from `%s` to `%s`", src, dst)
			}
			continue
		}
		// it's a regular file path
		if err := gapi.CopyFileToFs(fs, src, dst); err != nil {
			return nil, errwrap.Wrapf(err, "can't copy file from `%s` to `%s`", src, dst)
		}
	}

	// display the deploy fs tree
	if debug || true { // TODO: should this only be shown on debug?
		logf("input: %s", c.String(Name))
		tree, err := util.FsTree(fs, "/")
		if err != nil {
			return nil, err
		}
		logf("tree:\n%s", tree)
	}

	return &gapi.Deploy{
		Name: Name,
		Noop: c.GlobalBool("noop"),
		Sema: c.GlobalInt("sema"),
		GAPI: &GAPI{
			InputURI: fs.URI(),
			// TODO: add properties here...
		},
	}, nil
}

 // Init initializes the lang GAPI struct.
-func (obj *GAPI) Init(data gapi.Data) error {
+func (obj *GAPI) Init(data *gapi.Data) error {
 	if obj.initialized {
 		return fmt.Errorf("already initialized")
 	}
@@ -117,20 +390,21 @@ func (obj *GAPI) LangInit() error {
	if obj.lang != nil {
		return nil // already ran init, close first!
	}
	if obj.InputURI == "-" {
		return fmt.Errorf("stdin passthrough is not supported at this time")
	}

	fs, err := obj.data.World.Fs(obj.InputURI) // open the remote file system
	if err != nil {
		return errwrap.Wrapf(err, "can't load code from file system `%s`", obj.InputURI)
	}
	// the lang always tries to load from this standard path: /metadata.yaml
	input := "/" + interfaces.MetadataFilename // path in remote fs

	b, err := fs.ReadFile(Start) // read the single file out of it
	if err != nil {
		return errwrap.Wrapf(err, "can't read code from file `%s`", Start)
	}

	code := strings.NewReader(string(b))
	obj.lang = &Lang{
		Input: code, // string as an interface that satisfies io.Reader
		Fs:    fs,
		Input: input,

		Hostname: obj.data.Hostname,
		World:    obj.data.World,
		Debug:    obj.data.Debug,
@@ -293,3 +567,127 @@ func (obj *GAPI) Close() error {
	obj.initialized = false // closed = true
	return nil
}

// Get runs the necessary downloads. This basically runs the lexer, parser and
// sets the scope so that all the imports are followed. It passes a downloader
// in, which can be used to pull down or update any missing imports. This will
// also work when called with the download flag during a normal execution run.
func (obj *GAPI) Get(getInfo *gapi.GetInfo) error {
	c := getInfo.CliContext
	cliContext := c.Parent()
	if cliContext == nil {
		return fmt.Errorf("could not get cli context")
	}
	prefix := "" // TODO: do we need this?
	debug := getInfo.Debug
	logf := getInfo.Logf

	// empty by default (don't set for deploy, only download)
	modules := c.String(flagModulePath)
	if modules != "" && (!strings.HasPrefix(modules, "/") || !strings.HasSuffix(modules, "/")) {
		return fmt.Errorf("module path is not an absolute directory")
	}

	osFs := afero.NewOsFs()
	readOnlyOsFs := afero.NewReadOnlyFs(osFs) // can't be readonly to dl!
	//bp := afero.NewBasePathFs(osFs, base) // TODO: can this prevent parent dir access?
	afs := &afero.Afero{Fs: readOnlyOsFs} // wrap so that we're implementing ioutil
	localFs := &util.Fs{Afero: afs}       // always the local fs
	downloadAfs := &afero.Afero{Fs: osFs}
	downloadFs := &util.Fs{Afero: downloadAfs} // TODO: use with a parent path preventer?

	// the fs input here is the local fs we're reading to get the files from
	// this is different from the fs variable which is our output dest!!!
	output, err := parseInput(c.String(Name), localFs)
	if err != nil {
		return errwrap.Wrapf(err, "could not activate an input parser")
	}

	// no need to run recursion detection since this is the beginning
	// TODO: do the paths need to be cleaned for "../" before comparison?

	logf("lexing/parsing...")
	ast, err := LexParse(bytes.NewReader(output.Main))
	if err != nil {
		return errwrap.Wrapf(err, "could not generate AST")
	}
	if debug {
		logf("behold, the AST: %+v", ast)
	}

	downloadInfo := &interfaces.DownloadInfo{
		Fs: downloadFs, // the local fs!

		// flags are passed in during Init()
		Noop:   cliContext.Bool("noop"),
		Sema:   cliContext.Int("sema"),
		Update: cliContext.Bool("update"),

		Debug: debug,
		Logf: func(format string, v ...interface{}) {
			// TODO: is this a sane prefix to use here?
			logf("get: "+format, v...)
		},
	}
	// this fulfills the interfaces.Downloader interface
	downloader := &Downloader{
		Depth: c.Int("depth"), // default of infinite is -1
		Retry: c.Int("retry"), // infinite is -1
	}
	if err := downloader.Init(downloadInfo); err != nil {
		return errwrap.Wrapf(err, "could not initialize downloader")
	}

	importGraph, err := pgraph.NewGraph("importGraph")
	if err != nil {
		return errwrap.Wrapf(err, "could not create graph")
	}
	importVertex := &pgraph.SelfVertex{
		Name:  "",          // first node is the empty string
		Graph: importGraph, // store a reference to ourself
	}
	importGraph.AddVertex(importVertex)

	logf("init...")
	// init and validate the structure of the AST
	data := &interfaces.Data{
		Fs:         localFs,     // the local fs!
		Base:       output.Base, // base dir (absolute path) that this is rooted in
		Files:      output.Files,
		Imports:    importVertex,
		Metadata:   output.Metadata,
		Modules:    modules,
		Downloader: downloader,

		//World: obj.World, // TODO: do we need this?
		Prefix: prefix,
		Debug:  debug,
		Logf: func(format string, v ...interface{}) {
			// TODO: is this a sane prefix to use here?
			logf("ast: "+format, v...)
		},
	}
	// some of this might happen *after* interpolate in SetScope or Unify...
	if err := ast.Init(data); err != nil {
		return errwrap.Wrapf(err, "could not init and validate AST")
	}

	logf("interpolating...")
	// interpolate strings and other expansionable nodes in AST
	interpolated, err := ast.Interpolate()
	if err != nil {
		return errwrap.Wrapf(err, "could not interpolate AST")
	}

	logf("building scope...")
	// propagate the scope down through the AST...
	// We use SetScope because it follows all of the imports through. I
	// don't think we need to pass in an initial scope because the download
	// operation shouldn't depend on any initial scope values, since those
	// would all be runtime changes, and we do not support dynamic imports!
	if err := interpolated.SetScope(nil); err != nil { // empty initial scope!
		return errwrap.Wrapf(err, "could not set scope")
	}

	return nil // success!
}

lang/inputs.go (new file)
@@ -0,0 +1,366 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

package lang

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
	"strings"

	"github.com/purpleidea/mgmt/engine"
	"github.com/purpleidea/mgmt/gapi"
	"github.com/purpleidea/mgmt/lang/interfaces"

	errwrap "github.com/pkg/errors"
)

// Module logic: We want one single file to start from. That single file will
// have `import` or `append` statements in it. Those pull in everything else. To
// kick things off, we point directly at a metadata.yaml file path, which points
// to the main.mcl entry file. Alternatively, we could point directly to a
// main.mcl file, but then we wouldn't know to import additional special dirs
// like files/ that are listed in the metadata.yaml file. If we point to a
// directory, we could read a metadata.yaml file, and if missing, read a
// main.mcl file. The ideal way to start mgmt is by pointing to a metadata.yaml
// file. It's ideal because *it* could list the dir path for files/ or
// templates/ or other modules. If we start with a single foo.mcl file, the file
// itself won't have files/; however, other files/ can come in from an import
// that contains a metadata.yaml file. Below we have the input parsers that
// implement some of this logic.

var (
	// inputOrder contains the correct running order of the input functions.
	inputOrder = []func(string, engine.Fs) (*ParsedInput, error){
		inputEmpty,
		inputStdin,
		inputMetadata,
		inputMcl,
		inputDirectory,
		inputCode,
		//inputFail,
	}
)

// ParsedInput is the output struct which contains all the information we need.
type ParsedInput struct {
	//activated bool // if struct is not nil we're activated
	Base     string   // base path (abs path with trailing slash)
	Main     []byte   // contents of main entry mcl code
	Files    []string // files and dirs to copy to fs (abs paths)
	Metadata *interfaces.Metadata
	Workers  []func(engine.Fs) error // copy files here that aren't listed!
}

// parseInput runs the list of input parsers to know how to run the lexer,
// parser, and so on... The fs input is the source filesystem to look in.
func parseInput(s string, fs engine.Fs) (*ParsedInput, error) {
	var err error
	var output *ParsedInput
	activated := false
	// I decided this was a cleaner way of input parsing than a big if-else!
	for _, fn := range inputOrder { // list of input detection functions
		output, err = fn(s, fs)
		if err != nil {
			return nil, err
		}
		if output != nil { // activated!
			activated = true
			break
		}
	}
	if !activated {
		return nil, fmt.Errorf("input is invalid")
	}

	return output, nil
}

// absify makes a path absolute if it's not already.
func absify(str string) (string, error) {
	if filepath.IsAbs(str) {
		return str, nil // done early!
	}
	x, err := filepath.Abs(str)
	if err != nil {
		return "", errwrap.Wrapf(err, "can't get abs path for: `%s`", str)
	}
	if strings.HasSuffix(str, "/") { // if we started with a trailing slash
		x = dirify(x) // add it back because filepath.Abs() removes it!
	}
	return x, nil // success, we're absolute now
}

// dirify ensures path ends with a trailing slash, so that it's a dir. Don't
// call this on something that's not a dir! It just appends a trailing slash if
// one isn't already present.
func dirify(str string) string {
	if !strings.HasSuffix(str, "/") {
		return str + "/"
	}
	return str
}

// inputEmpty is a simple empty string contents check.
// TODO: perhaps we could have a default action here to run from /etc/ or /var/?
func inputEmpty(s string, _ engine.Fs) (*ParsedInput, error) {
	if s == "" {
		return nil, fmt.Errorf("input is empty")
	}
	return nil, nil // pass (this test never succeeds)
}

// inputStdin checks if we're looking at stdin.
func inputStdin(s string, fs engine.Fs) (*ParsedInput, error) {
	if s != "-" {
		return nil, nil // not us, but no error
	}

	// TODO: stdin passthrough is not implemented (should it be?)
	// TODO: this reads everything into memory, which isn't very efficient!

	// FIXME: check if we have a contained OsFs or not.
	//if fs != OsFs { // XXX: https://github.com/spf13/afero/issues/188
	//	return nil, errwrap.Wrapf("can't use stdin for: `%s`", fs.Name())
	//}

	// TODO: can this cause a problem if stdin is too large?
	// TODO: yes, we could pass a reader directly, but we'd
	// need to have a convention for it to get closed after
	// and we need to save it to disk for deploys to use it
	b, err := ioutil.ReadAll(os.Stdin) // doesn't need fs
	if err != nil {
		return nil, errwrap.Wrapf(err, "can't read in stdin")
	}

	return inputCode(string(b), fs) // recurse
}

// inputMetadata checks to see if we have a metadata file path.
func inputMetadata(s string, fs engine.Fs) (*ParsedInput, error) {
	// we've got a metadata.yaml file
	if !strings.HasSuffix(s, "/"+interfaces.MetadataFilename) {
		return nil, nil // not us, but no error
	}
	var err error
	if s, err = absify(s); err != nil { // s is now absolute
		return nil, err
	}

	// does metadata file exist?
	f, err := fs.Open(s)
	if err != nil {
		return nil, errwrap.Wrapf(err, "file: `%s` does not exist", s)
	}

	// parse metadata file and save it to the fs
	metadata, err := interfaces.ParseMetadata(f)
	if err != nil {
		return nil, errwrap.Wrapf(err, "could not parse metadata file")
	}

	// base path on local system of the metadata file, with trailing slash
	basePath := dirify(filepath.Dir(s)) // absolute dir
	m := basePath + metadata.Main       // absolute file

	// does main.mcl file exist? open the file read-only...
	fm, err := fs.Open(m)
	if err != nil {
		return nil, errwrap.Wrapf(err, "can't read from file: `%s`", m)
	}
	defer fm.Close() // we're done reading by the time this runs
	b, err := ioutil.ReadAll(fm) // doesn't need fs
	if err != nil {
		return nil, errwrap.Wrapf(err, "can't read in file: `%s`", m)
	}

	// files that we saw
	files := []string{
		s, // the metadata.yaml input file
		m, // the main.mcl file
	}

	// real files/ directory
	if metadata.Files != "" { // TODO: nil pointer instead?
		filesDir := basePath + metadata.Files
		if _, err := fs.Stat(filesDir); err == nil {
			files = append(files, filesDir)
		}
	}

	// set this path since we know the location (it is used to find modules)
	if err := metadata.SetAbsSelfPath(basePath); err != nil { // set metadataPath
		return nil, errwrap.Wrapf(err, "could not build metadata")
	}
	return &ParsedInput{
		Base:     basePath,
		Main:     b,
		Files:    files,
		Metadata: metadata,
		// no Workers needed, this is the ideal input
	}, nil
}

// inputMcl checks if we have a path to a *.mcl file.
func inputMcl(s string, fs engine.Fs) (*ParsedInput, error) {
	// TODO: a regexp here would be better
	if !strings.HasSuffix(s, interfaces.DotFileNameExtension) {
		return nil, nil // not us, but no error
	}
	var err error
	if s, err = absify(s); err != nil { // s is now absolute
		return nil, err
	}
	// does *.mcl file exist? open the file read-only...
	fm, err := fs.Open(s)
	if err != nil {
		return nil, errwrap.Wrapf(err, "can't read from file: `%s`", s)
	}
	defer fm.Close() // we're done reading by the time this runs
	b, err := ioutil.ReadAll(fm) // doesn't need fs
	if err != nil {
		return nil, errwrap.Wrapf(err, "can't read in file: `%s`", s)
	}

	// build and save a metadata file to fs
	metadata := &interfaces.Metadata{
		//Main: interfaces.MainFilename, // TODO: use standard name?
		Main: filepath.Base(s), // use the name of the input
	}
	byt, err := metadata.ToBytes()
	if err != nil {
		// probably a programming error
		return nil, errwrap.Wrapf(err, "can't build metadata file")
	}
	dst := "/" + interfaces.MetadataFilename // eg: /metadata.yaml
	workers := []func(engine.Fs) error{
		func(fs engine.Fs) error {
			err := gapi.CopyBytesToFs(fs, byt, dst)
			return errwrap.Wrapf(err, "could not copy metadata file to fs")
		},
	}
	return &ParsedInput{
		Base: dirify(filepath.Dir(s)), // base path with trailing slash
		Main: b,
		Files: []string{
			s, // the input .mcl file
		},
		Metadata: metadata,
		Workers:  workers,
	}, nil
}

// inputDirectory checks if we're given the path to a directory.
func inputDirectory(s string, fs engine.Fs) (*ParsedInput, error) {
	if !strings.HasSuffix(s, "/") {
		return nil, nil // not us, but no error
	}
	var err error
	if s, err = absify(s); err != nil { // s is now absolute
		return nil, err
	}
	// does dir exist?
	fi, err := fs.Stat(s)
	if err != nil {
		return nil, errwrap.Wrapf(err, "dir: `%s` does not exist", s)
	}
	if !fi.IsDir() {
		// note: err is nil here, so don't wrap it or we'd return nil!
		return nil, fmt.Errorf("dir: `%s` is not a dir", s)
	}

	// try looking for a metadata file in the root
	md := s + interfaces.MetadataFilename // absolute file
	if _, err := fs.Stat(md); err == nil {
		if x, err := inputMetadata(md, fs); err != nil { // recurse
			return nil, err
		} else if x != nil {
			return x, nil // recursed successfully!
		}
	}

	// try looking for a main.mcl file in the root
	mf := s + interfaces.MainFilename // absolute file
	if _, err := fs.Stat(mf); err == nil {
		if x, err := inputMcl(mf, fs); err != nil { // recurse
			return nil, err
		} else if x != nil {
			return x, nil // recursed successfully!
		}
	}

	// no other options left, didn't activate!
	return nil, nil
}

// inputCode checks if this is raw code (the last possibility, so try to run it).
func inputCode(s string, fs engine.Fs) (*ParsedInput, error) {
	if len(s) == 0 {
		// handle empty strings in a single place by recursing
		if x, err := inputEmpty(s, fs); err != nil { // recurse
			return nil, err
		} else if x != nil {
			return x, nil // recursed successfully!
		}
	}

	wd, err := os.Getwd() // NOTE: not meaningful for stdin unless fs is an OsFs
	if err != nil {
		return nil, errwrap.Wrapf(err, "can't get working dir")
	}

	// since by the time we run this stdin will be gone, and
	// we want this to work with deploys, we need to fake it
	// by saving the data and adding a default metadata file
	// so that everything is built in a logical input state.
	metadata := &interfaces.Metadata{ // default metadata
		Main: interfaces.MainFilename,
	}
	byt, err := metadata.ToBytes() // build a metadata file
	if err != nil {
		return nil, errwrap.Wrapf(err, "could not build metadata file")
	}

	dst1 := "/" + interfaces.MetadataFilename // eg: /metadata.yaml
	dst2 := "/" + metadata.Main               // eg: /main.mcl
	b := []byte(s) // unfortunately we convert things back and forth :/

	workers := []func(engine.Fs) error{
		func(fs engine.Fs) error {
			err := gapi.CopyBytesToFs(fs, byt, dst1)
			return errwrap.Wrapf(err, "could not copy metadata file to fs")
		},
		func(fs engine.Fs) error {
			err := gapi.CopyBytesToFs(fs, b, dst2)
			return errwrap.Wrapf(err, "could not copy main file to fs")
		},
	}

	return &ParsedInput{
		Base:     dirify(wd),
		Main:     b,
		Files:    []string{}, // they're already copied in
		Metadata: metadata,
		Workers:  workers,
	}, nil
}

// inputFail fails, because we couldn't activate anyone. We might not need this.
//func inputFail(s string, _ engine.Fs) (*ParsedInput, error) {
//	return nil, fmt.Errorf("input is invalid") // fail (this test always succeeds)
//}

@@ -18,9 +18,14 @@
package interfaces

import (
	"fmt"
	"sort"

	"github.com/purpleidea/mgmt/engine"
	"github.com/purpleidea/mgmt/lang/types"
	"github.com/purpleidea/mgmt/pgraph"

	multierr "github.com/hashicorp/go-multierror"
)

// Node represents either a Stmt or an Expr. It contains the minimum set of
@@ -64,6 +69,60 @@ type Expr interface {

// Data provides some data to the node that could be useful during its lifetime.
type Data struct {
	// Fs represents a handle to the filesystem that we're running on. This
	// is necessary for opening files if needed by import statements. The
	// file() paths used to get templates or other files from our deploys
	// come from here; this is *not* used to interact with the host file
	// system to manage file resources or other aspects.
	Fs engine.Fs

	// Base directory (absolute path) that the running code is in. If an
	// import is found, that's a recursive addition, and naturally for that
	// run, this value would be different in the recursion.
	Base string

	// Files is a list of absolute paths seen so far. This includes all
	// previously seen paths, whereas the former Offsets parameter did not.
	Files []string

	// Imports stores a graph inside a vertex so we have a current cursor.
	// This means that as we recurse through our import graph (hopefully a
	// DAG) we can know what the parent vertex in our graph is to edge to.
	// If we ever can't topologically sort it, then it has an import loop.
	Imports *pgraph.SelfVertex

	// Metadata is the metadata structure associated with the given parsing.
	// It can be present, which is often the case when importing a module,
	// or it can be nil, which is often the case when parsing a single file.
	// When imports are nested (eg: an imported module imports another one)
	// the metadata structure can recursively point to an earlier structure.
	Metadata *Metadata

	// Modules is an absolute path to a modules directory on the current Fs.
	// It is the directory to use to look for remote modules if we haven't
	// specified an alternative with the metadata Path field. This is
	// usually initialized with the global modules path that can come from
	// the cli or an environment variable, but this only occurs for the
	// initial download/get operation, and obviously not once we're running
	// a deploy, since by then everything in here would have been copied to
	// the runtime fs.
	Modules string

	// Downloader is the interface that must be fulfilled to download
	// modules. If a missing import is found, and this is not nil, then it
	// will be run once in an attempt to get the missing module before it
	// fails outright. In practice, it is recommended to perform this
	// download phase in a separate step from the production running and
	// deploys, however that is not blocked at the level of this interface.
	Downloader Downloader

	//World engine.World // TODO: do we need this?

	// Prefix provides a unique path prefix that we can namespace in. It is
	// currently shared identically across the whole AST. Nodes should be
	// careful to not write on top of other nodes' data.
	Prefix string

	// Debug represents if we're running in debug mode or not.
	Debug bool

@@ -79,11 +138,15 @@ type Data struct {
// scope. This is useful so that someone in the top scope can't prevent a child
// module from ever using that variable name again. It might be worth revisiting
// this point in the future if we find it adds even greater code safety. Please
// report any bugs you have written that would have been prevented by this. This
// also contains the currently available functions. They function similarly to
// the variables, and you can add new ones with a function statement definition.
// An interesting note about these is that they exist in a distinct namespace
// from the variables, which could actually contain lambda functions.
type Scope struct {
	Variables map[string]Expr
	Functions map[string]func() Func
	Classes   map[string]Stmt

	Chain []Stmt // chain of previously seen stmt's
}

@@ -93,9 +156,9 @@ type Scope struct {
func EmptyScope() *Scope {
	return &Scope{
		Variables: make(map[string]Expr),
		Functions: make(map[string]func() Func),
		Classes:   make(map[string]Stmt),
		Chain:     []Stmt{},
	}
}

@@ -105,12 +168,16 @@ func EmptyScope() *Scope {
// we need those to be consistently pointing to the same things after copying.
func (obj *Scope) Copy() *Scope {
	variables := make(map[string]Expr)
	functions := make(map[string]func() Func)
	classes := make(map[string]Stmt)
	chain := []Stmt{}
	if obj != nil { // allow copying nil scopes
		for k, v := range obj.Variables { // copy
			variables[k] = v // we don't copy the expr's!
		}
		for k, v := range obj.Functions { // copy
			functions[k] = v // we don't copy the generator func's
		}
		for k, v := range obj.Classes { // copy
			classes[k] = v // we don't copy the StmtClass!
		}
@@ -120,11 +187,80 @@ func (obj *Scope) Copy() *Scope {
	}
	return &Scope{
		Variables: variables,
		Functions: functions,
		Classes:   classes,
		Chain:     chain,
	}
}

// Merge takes an existing scope and merges a scope on top of it. If any
// elements had to be overwritten, then the error result will contain some info.
// Even if this errors, the scope will have been merged successfully. The merge
// runs in a deterministic order so that errors will be consistent. Use Copy if
// you don't want to change this destructively.
// FIXME: this doesn't currently merge Chain's... Should it?
func (obj *Scope) Merge(scope *Scope) error {
	var err error
	// collect names so we can iterate in a deterministic order
	namedVariables := []string{}
	namedFunctions := []string{}
	namedClasses := []string{}
	for name := range scope.Variables {
		namedVariables = append(namedVariables, name)
	}
	for name := range scope.Functions {
		namedFunctions = append(namedFunctions, name)
	}
	for name := range scope.Classes {
		namedClasses = append(namedClasses, name)
	}
	sort.Strings(namedVariables)
	sort.Strings(namedFunctions)
	sort.Strings(namedClasses)

	for _, name := range namedVariables {
		if _, exists := obj.Variables[name]; exists {
			e := fmt.Errorf("variable `%s` was overwritten", name)
			err = multierr.Append(err, e)
		}
		obj.Variables[name] = scope.Variables[name]
	}
	for _, name := range namedFunctions {
		if _, exists := obj.Functions[name]; exists {
			e := fmt.Errorf("function `%s` was overwritten", name)
			err = multierr.Append(err, e)
		}
		obj.Functions[name] = scope.Functions[name]
	}
	for _, name := range namedClasses {
		if _, exists := obj.Classes[name]; exists {
			e := fmt.Errorf("class `%s` was overwritten", name)
			err = multierr.Append(err, e)
		}
		obj.Classes[name] = scope.Classes[name]
	}

	return err
}

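The reason Merge collects keys, sorts them, and only then applies them is that Go map iteration order is randomized, so reporting overwrites while ranging directly over the map would produce errors in a different order on every run. A standalone sketch of that pattern with a plain map (this is not the mgmt Scope type):

```go
package main

import (
	"fmt"
	"sort"
)

// merge overlays src onto dst, reporting any overwritten keys in a
// stable, sorted order so that errors are deterministic.
func merge(dst, src map[string]int) []string {
	overwritten := []string{}
	names := make([]string, 0, len(src))
	for name := range src { // collect names first...
		names = append(names, name)
	}
	sort.Strings(names) // ...then sort for a deterministic order
	for _, name := range names {
		if _, exists := dst[name]; exists {
			overwritten = append(overwritten, name)
		}
		dst[name] = src[name]
	}
	return overwritten
}

func main() {
	dst := map[string]int{"a": 1, "b": 2}
	src := map[string]int{"b": 3, "c": 4}
	fmt.Println(merge(dst, src)) // [b]
	fmt.Println(dst["b"], dst["c"]) // 3 4
}
```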
// IsEmpty returns whether or not a scope is empty.
// FIXME: this doesn't currently consider Chain's... Should it?
func (obj *Scope) IsEmpty() bool {
	//if obj == nil { // TODO: add me if this turns out to be useful
	//	return true
	//}
	if len(obj.Variables) > 0 {
		return false
	}
	if len(obj.Functions) > 0 {
		return false
	}
	if len(obj.Classes) > 0 {
		return false
	}
	return true
}

// Edge is the data structure representing a compiled edge that is used in the
// lang to express a dependency between two resources and optionally send/recv.
type Edge struct {

@@ -27,4 +27,9 @@ const (
	// ErrTypeCurrentlyUnknown is returned from the Type() call on Expr if
	// unification didn't run successfully and the type isn't obvious yet.
	ErrTypeCurrentlyUnknown = Error("type is currently unknown")

	// ErrExpectedFileMissing is returned when a file that is used by an
	// import is missing. This might signal the downloader, or it might
	// signal a permanent error.
	ErrExpectedFileMissing = Error("file is currently missing")
)

@@ -39,9 +39,10 @@ type Init struct {
	//Noop bool
	Input  chan types.Value // Engine will close `input` chan
	Output chan types.Value // Stream must close `output` chan
	// TODO: should we pass in a *Scope here for functions like template() ?
	World engine.World
	Debug bool
	Logf  func(format string, v ...interface{})
}

// Func is the interface that any valid func must fulfill. It is very simple,

lang/interfaces/import.go (new file, 98 lines)
@@ -0,0 +1,98 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

package interfaces

import (
	"github.com/purpleidea/mgmt/engine"
)

// ImportData is the result of parsing a string import when it has not errored.
type ImportData struct {
	// Name is the original input that produced this struct. It is stored
	// here so that you can parse it once and pass this struct around
	// without having to include a copy of the original data if needed.
	Name string

	// Alias is the name identifier that should be used for this import.
	Alias string

	// IsSystem specifies that this is a system import.
	IsSystem bool

	// IsLocal represents if a module is either local or a remote import.
	IsLocal bool

	// IsFile represents if we're referring to an individual file or not.
	IsFile bool

	// Path represents the relative path to the directory that this import
	// points to. Since it specifies a directory, it will end with a
	// trailing slash, which makes detection more obvious for other helpers.
	// If this points to a local import, that directory is probably not
	// expected to contain a metadata file, and it will be a simple path
	// addition relative to the current file this import was parsed from. If
	// this is a remote import, then it's likely that the file will be found
	// in a more distinct path, such as a search path that contains the full
	// fqdn of the import.
	// TODO: should system imports put something here?
	Path string

	// URL is the path that a `git clone` operation should use as the URL.
	// If it is a local import, then this is the empty value.
	URL string
}

// DownloadInfo is the set of input values passed into the Init method of the
// Downloader interface, so that it can have some useful information to use.
type DownloadInfo struct {
	// Fs is the filesystem to use for downloading to.
	Fs engine.Fs

	// Noop specifies if we should actually download or just fake it. The
	// one problem is that if we *don't* download something, then we can't
	// follow it to see if there's anything else to download.
	Noop bool

	// Sema specifies the max number of simultaneous downloads to run.
	Sema int

	// Update specifies if we should try and update existing downloaded
	// artifacts.
	Update bool

	// Debug represents if we're running in debug mode or not.
	Debug bool

	// Logf is a logger which should be used.
	Logf func(format string, v ...interface{})
}

// Downloader is the interface that must be fulfilled to download modules.
// TODO: this should probably be in a more central package like the top-level
// GAPI package, and not contain the lang specific *ImportData struct. Since we
// aren't working on a downloader for any other frontend at the moment, we'll
// keep it here, and keep it less generalized for now. If we *really* wanted to
// generalize it, Get would be implemented as part of the *ImportData struct and
// there would be an interface it helped fulfill for the Downloader GAPI.
type Downloader interface {
	// Init initializes the downloader with some core structures we'll need.
	Init(*DownloadInfo) error

	// Get runs a single download of an import and stores it on disk.
	Get(*ImportData, string) error
}

lang/interfaces/metadata.go (new file, 324 lines)
@@ -0,0 +1,324 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

package interfaces

import (
	"fmt"
	"io"
	"io/ioutil"
	"strings"

	errwrap "github.com/pkg/errors"
	"gopkg.in/yaml.v2"
)

const (
	// MetadataFilename is the filename for the metadata storage. This is
	// the ideal entry point for any running code.
	MetadataFilename = "metadata.yaml"

	// FileNameExtension is the filename extension used for language files.
	FileNameExtension = "mcl" // alternate suggestions welcome!

	// DotFileNameExtension is the filename extension with a dot prefix.
	DotFileNameExtension = "." + FileNameExtension

	// MainFilename is the default filename for code to start running from.
	MainFilename = "main" + DotFileNameExtension

	// PathDirectory is the path directory name we search for modules in.
	PathDirectory = "path/"

	// FilesDirectory is the files directory name we include alongside
	// modules. It can store any useful files that we'd like.
	FilesDirectory = "files/"

	// ModuleDirectory is the default module directory name. It gets
	// appended to whatever the running prefix is or relative to the base
	// dir being used for deploys.
	ModuleDirectory = "modules/"
)

// Metadata is a data structure representing the module metadata. Since it can
// get moved around to different filesystems, it should only contain relative
// paths.
type Metadata struct {
	// Main is the path to the entry file where we start reading code.
	// Normally this is main.mcl or the value of the MainFilename constant.
	Main string `yaml:"main"`

	// Path is the relative path to the local module search path directory
	// that we should look in. This is similar to golang's vendor directory.
	// If a module wishes to include this directory, it's recommended that
	// it have the contained directory be a `git submodule` if possible.
	Path string `yaml:"path"`

	// Files is the location of the files/ directory which can contain some
	// useful additions that might get used in the modules. You can store
	// templates, or any other data that you'd like.
	// TODO: also allow storing files alongside the .mcl files in their dir!
	Files string `yaml:"files"`

	// License is the listed license of the module. Use the short names, eg:
	// LGPLv3+, or MIT.
	License string `yaml:"license"`

	// ParentPathBlock specifies whether we're allowed to search in parent
	// metadata file Path settings for modules. We always search in the
	// global path if we don't find others first. This setting defaults to
	// false, which is important because the downloader uses it to decide
	// where to put downloaded modules. It is similar to the equivalent of
	// a `require vendoring` flag in golang if such a thing existed. If a
	// module sets this to true, and specifies a Path value, then only that
	// path will be used as long as imports are present there. Otherwise it
	// will fall back on the global modules directory. If a module sets this
	// to true, and does not specify a Path value, then the global modules
	// directory is automatically chosen for the import location for this
	// module. When this is set to true, in no scenario will an import come
	// from a directory other than the one specified here, or the global
	// modules directory. Module authors should use this sparingly when they
	// absolutely need a specific import vendored, otherwise they might
	// rouse the ire of module consumers. Keep in mind that you can specify
	// a Path directory, and include a git submodule in it, which will be
	// used by default, without specifying this option. In that scenario,
	// the consumer can decide to not recursively clone your submodule if
	// they wish to override it higher up in the module search locations.
	ParentPathBlock bool `yaml:"parentpathblock"`

	// Metadata stores a link to the parent metadata structure if it exists.
	Metadata *Metadata // this does *NOT* get a yaml struct tag

	// metadataPath stores the absolute path to this metadata file as it is
	// parsed. This is useful when we search upwards for parent Path values.
	metadataPath string // absolute path that this file was found in

	// TODO: is this needed anymore?
	defaultMain *string // set this to pick a default Main when decoding

	// bug395 is a flag to work around the terrible yaml parser resetting all
	// the default struct field values when it finds an empty yaml document.
	// We set this value to have a default of true, which enables us to know
	// if the document was empty or not, and if so, then we know this struct
	// was emptied, so we should then return a new struct with all defaults.
	// See: https://github.com/go-yaml/yaml/issues/395 for more information.
	bug395 bool
}

// DefaultMetadata returns the default metadata that is used for absent values.
func DefaultMetadata() *Metadata {
	return &Metadata{ // the defaults
		Main: MainFilename, // main.mcl
		// This MUST be empty for a top-level default, because if it's
		// not, then an undefined Path dir at a lower level won't search
		// upwards to find a suitable path, and we'll nest forever...
		//Path: PathDirectory, // do NOT set this!
		Files: FilesDirectory, // files/
		//License: "", // TODO: ???

		bug395: true, // workaround, lol
	}
}

// SetAbsSelfPath sets the absolute directory path to this metadata file. This
// method is used on a built metadata file so that it can internally know where
// it is located.
func (obj *Metadata) SetAbsSelfPath(p string) error {
	obj.metadataPath = p
	return nil
}

// ToBytes marshals the struct into a byte array and returns it.
func (obj *Metadata) ToBytes() ([]byte, error) {
	return yaml.Marshal(obj) // TODO: obj or *obj ?
}

// NOTE: this is not currently needed, but here for reference.
//// MarshalYAML modifies the struct before it is used to build the raw output.
//func (obj *Metadata) MarshalYAML() (interface{}, error) {
//	// The Marshaler interface may be implemented by types to customize
//	// their behavior when being marshaled into a YAML document. The
//	// returned value is marshaled in place of the original value
//	// implementing Marshaler.
//
//	if obj.metadataPath == "" { // make sure metadataPath isn't saved!
//		return obj, nil
//	}
//	md := obj.Copy() // TODO: implement me
//	md.metadataPath = "" // if set, blank it out before save
//	return md, nil
//}

// UnmarshalYAML is the standard unmarshal method for this struct.
func (obj *Metadata) UnmarshalYAML(unmarshal func(interface{}) error) error {
	type indirect Metadata // indirection to avoid infinite recursion
	def := DefaultMetadata()
	// support overriding
	if x := obj.defaultMain; x != nil {
		def.Main = *x
	}

	raw := indirect(*def) // convert; the defaults go here

	if err := unmarshal(&raw); err != nil {
		return err
	}

	*obj = Metadata(raw) // restore from indirection with type conversion!
	return nil
}

// ParseMetadata reads from some input and returns a *Metadata struct that
// contains plausible values to be used.
func ParseMetadata(reader io.Reader) (*Metadata, error) {
	metadata := DefaultMetadata() // populate this
	//main := MainFilename // set a custom default here if you want
	//metadata.defaultMain = &main

	// does not work in all cases :/ (fails with EOF files, ioutil does not)
	//decoder := yaml.NewDecoder(reader)
	////decoder.SetStrict(true) // TODO: consider being strict?
	//if err := decoder.Decode(metadata); err != nil {
	//	return nil, errwrap.Wrapf(err, "can't parse metadata")
	//}
	b, err := ioutil.ReadAll(reader)
	if err != nil {
		return nil, errwrap.Wrapf(err, "can't read metadata")
	}
	if err := yaml.Unmarshal(b, metadata); err != nil {
		return nil, errwrap.Wrapf(err, "can't parse metadata")
	}

	if !metadata.bug395 { // workaround, lol
		// we must have gotten an empty document, so use a new default!
		metadata = DefaultMetadata()
	}

	// FIXME: search for unclean paths containing ../ or similar and error!

	if strings.HasPrefix(metadata.Main, "/") || strings.HasSuffix(metadata.Main, "/") {
		return nil, fmt.Errorf("the Main field must be a relative file path")
	}
	if metadata.Path != "" && (strings.HasPrefix(metadata.Path, "/") || !strings.HasSuffix(metadata.Path, "/")) {
		return nil, fmt.Errorf("the Path field must be undefined or be a relative dir path")
	}
	if metadata.Files != "" && (strings.HasPrefix(metadata.Files, "/") || !strings.HasSuffix(metadata.Files, "/")) {
		return nil, fmt.Errorf("the Files field must be undefined or be a relative dir path")
	}
	// TODO: add more validation

	return metadata, nil
}

// FindModulesPath returns an absolute path to the Path dir where modules can
// be found. This can vary, because the current metadata file might not specify
// a Path value, meaning we'd have to return the global modules path.
// Additionally, we can search upwards for a path if our metadata file allows
// this. It searches with respect to the calling base directory, and uses the
// ParentPathBlock field to determine if we're allowed to search upwards. It
// does this logically, without performing any filesystem operations.
func FindModulesPath(metadata *Metadata, base, modules string) (string, error) {
	ret := func(s string) (string, error) { // return helper function
		// don't return an empty string without an error!
		if s == "" {
			return "", fmt.Errorf("can't find a module path")
		}
		return s, nil
	}
	m := metadata // start
	b := base     // absolute base path the current metadata file is in
	for m != nil {
		if m.metadataPath == "" { // a top-level module might be empty!
			return ret(modules) // so return this, there's nothing else!
		}
		if m.metadataPath != b { // these should be the same if no bugs!
			return "", fmt.Errorf("metadata inconsistency: `%s` != `%s`", m.metadataPath, b)
		}

		// does the metadata specify where to look?
		// search in the module specific space
		if m.Path != "" { // use this path, since it was specified!
			if !strings.HasSuffix(m.Path, "/") {
				return "", fmt.Errorf("metadata inconsistency: path `%s` has no trailing slash", m.Path)
			}
			return ret(b + m.Path) // join w/o cleaning trailing slash
		}

		// are we allowed to search incrementally upwards?
		if m.ParentPathBlock {
			break
		}

		// search upwards (search in parent dirs upwards recursively...)
		m = m.Metadata // might be nil
		if m != nil {
			b = m.metadataPath // get new parent path
		}
	}
	// by now we haven't found a metadata path, so we use the global path...
	return ret(modules) // often comes from an ENV or a default
}

// FindModulesPathList does what FindModulesPath does, except this function
// returns the entire linear list of possible module locations until it gets
// to the root. This can be useful if you'd like to know which possible
// locations are valid, so that you can search through them to see if there is
// downloaded code available.
func FindModulesPathList(metadata *Metadata, base, modules string) ([]string, error) {
	found := []string{}
	ret := func(s []string) ([]string, error) { // return helper function
		// don't return an empty list without an error!
		if len(s) == 0 { // a nil slice has len 0 too
			return nil, fmt.Errorf("can't find any module paths")
		}
		return s, nil
	}
	m := metadata // start
	b := base     // absolute base path the current metadata file is in
	for m != nil {
		if m.metadataPath == "" { // a top-level module might be empty!
			return ret([]string{modules}) // so return this, there's nothing else!
		}
		if m.metadataPath != b { // these should be the same if no bugs!
			return nil, fmt.Errorf("metadata inconsistency: `%s` != `%s`", m.metadataPath, b)
		}

		// does the metadata specify where to look?
		// search in the module specific space
		if m.Path != "" { // use this path, since it was specified!
			if !strings.HasSuffix(m.Path, "/") {
				return nil, fmt.Errorf("metadata inconsistency: path `%s` has no trailing slash", m.Path)
			}
			p := b + m.Path          // join w/o cleaning trailing slash
			found = append(found, p) // add to list
		}

		// are we allowed to search incrementally upwards?
		if m.ParentPathBlock {
			break
		}

		// search upwards (search in parent dirs upwards recursively...)
		m = m.Metadata // might be nil
		if m != nil {
			b = m.metadataPath // get new parent path
		}
	}
	// add the global path to everything we've found...
	found = append(found, modules) // often comes from an ENV or a default
	return ret(found)
}
lang/interfaces/metadata_test.go (new file, 196 lines)
@@ -0,0 +1,196 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

// +build !root

package interfaces

import (
	"fmt"
	"reflect"
	"strings"
	"testing"

	"github.com/purpleidea/mgmt/util"

	"github.com/davecgh/go-spew/spew"
	"github.com/kylelemons/godebug/pretty"
)

func TestMetadataParse0(t *testing.T) {
	type test struct { // an individual test
		name string
		yaml string
		fail bool
		meta *Metadata
	}
	testCases := []test{}

	//{
	//	testCases = append(testCases, test{
	//		"",
	//		``,
	//		false,
	//		nil,
	//	})
	//}
	{
		testCases = append(testCases, test{
			name: "empty",
			yaml: ``,
			fail: false,
			meta: DefaultMetadata(),
		})
	}
	{
		testCases = append(testCases, test{
			name: "empty file defaults",
			yaml: util.Code(`
			# empty file
			`),
			fail: false,
			meta: DefaultMetadata(),
		})
	}
	{
		testCases = append(testCases, test{
			name: "empty document defaults",
			yaml: util.Code(`
			--- # new document
			`),
			fail: false,
			meta: DefaultMetadata(),
		})
	}
	{
		testCases = append(testCases, test{
			name: "set values",
			yaml: util.Code(`
			main: "hello.mcl"
			files: "xfiles/"
			path: "vendor/"
			`),
			fail: false,
			meta: &Metadata{
				Main:  "hello.mcl",
				Files: "xfiles/",
				Path:  "vendor/",
			},
		})
	}
	{
		meta := DefaultMetadata()
		meta.Main = "start.mcl"
		testCases = append(testCases, test{
			name: "partial document defaults",
			yaml: util.Code(`
			main: "start.mcl"
			`),
			fail: false,
			meta: meta,
		})
	}

	names := []string{}
	for index, tc := range testCases { // run all the tests
		if tc.name == "" {
			t.Errorf("test #%d: not named", index)
			continue
		}
		if util.StrInList(tc.name, names) {
			t.Errorf("test #%d: duplicate sub test name of: %s", index, tc.name)
			continue
		}
		names = append(names, tc.name)

		//if index != 3 { // hack to run a subset (useful for debugging)
		//if (index != 20 && index != 21) {
		//if tc.name != "nil" {
		//	continue
		//}

		t.Run(fmt.Sprintf("test #%d (%s)", index, tc.name), func(t *testing.T) {
			name, yaml, fail, meta := tc.name, tc.yaml, tc.fail, tc.meta

			t.Logf("\n\ntest #%d (%s) ----------------\n\n", index, name)

			str := strings.NewReader(yaml)
			metadata, err := ParseMetadata(str)
			meta.bug395 = true // workaround for https://github.com/go-yaml/yaml/issues/395

			if !fail && err != nil {
				t.Errorf("test #%d: metadata parse failed with: %+v", index, err)
				return
			}
			if fail && err == nil {
				t.Errorf("test #%d: metadata parse passed, expected fail", index)
				return
			}
			if !fail && metadata == nil {
				t.Errorf("test #%d: metadata parse output was nil", index)
				return
			}

			if metadata != nil {
				if !reflect.DeepEqual(meta, metadata) {
					// double check with pretty, since DeepEqual
					// can differ when func fields are involved
					diff := pretty.Compare(meta, metadata)
					if diff != "" { // bonus
						t.Errorf("test #%d: metadata did not match expected", index)
						// TODO: consider making our own recursive print function
						t.Logf("test #%d: actual: \n\n%s\n", index, spew.Sdump(metadata))
						t.Logf("test #%d: expected: \n\n%s", index, spew.Sdump(meta))

						// more details, for tricky cases:
						diffable := &pretty.Config{
							Diffable:          true,
							IncludeUnexported: true,
							//PrintStringers: false,
							//PrintTextMarshalers: false,
							//SkipZeroFields: false,
						}
						t.Logf("test #%d: actual: \n\n%s\n", index, diffable.Sprint(metadata))
						t.Logf("test #%d: expected: \n\n%s", index, diffable.Sprint(meta))
						t.Logf("test #%d: diff:\n%s", index, diff)
						return
					}
				}
			}
		})
	}
}

func TestMetadataSave0(t *testing.T) {
	// Since we put local path information into metadataPath, we'd like to
	// test that we don't leak it into our remote filesystem. This isn't a
	// major issue, but it's not technically nice to tell anyone about it.
	sentinel := "nope!"
	md := &Metadata{
		Main:         "hello.mcl",
		metadataPath: sentinel, // this value should not get seen
	}
	b, err := md.ToBytes()
	if err != nil {
		t.Errorf("can't print metadata file: %+v", err)
		return
	}
	s := string(b) // convert
	if strings.Contains(s, sentinel) { // did we find the sentinel?
		t.Errorf("sentinel was found")
	}
	t.Logf("got:\n%s", s)
}
@@ -546,7 +546,7 @@ func TestInterpolateBasicExpr(t *testing.T) {
 	}
 	{
 		ast := &ExprStr{
-			V: "i am: ${hostname()}",
+			V: "i am: ${sys.hostname()}",
 		}
 		exp := &ExprCall{
 			Name: operatorFuncName,
@@ -558,7 +558,7 @@ func TestInterpolateBasicExpr(t *testing.T) {
 			V: "i am: ",
 		},
 		&ExprCall{
-			Name: "hostname",
+			Name: "sys.hostname",
 			Args: []interfaces.Expr{},
 		},
 	},

@@ -20,16 +20,24 @@
 package lang
 
 import (
+	"bytes"
 	"fmt"
+	"io/ioutil"
+	"os"
+	"sort"
 	"strings"
 	"testing"
 
+	"github.com/purpleidea/mgmt/engine"
+	"github.com/purpleidea/mgmt/engine/resources"
 	"github.com/purpleidea/mgmt/lang/funcs"
 	"github.com/purpleidea/mgmt/lang/interfaces"
 	"github.com/purpleidea/mgmt/lang/unification"
 	"github.com/purpleidea/mgmt/pgraph"
+	"github.com/purpleidea/mgmt/util"
 
 	"github.com/kylelemons/godebug/pretty"
+	"github.com/spf13/afero"
 )
 
 func vertexAstCmpFn(v1, v2 pgraph.Vertex) (bool, error) {
@@ -68,6 +76,8 @@ func TestAstFunc0(t *testing.T) {
 			"hello":  &ExprStr{V: "world"},
 			"answer": &ExprInt{V: 42},
 		},
+		// all the built-in top-level, core functions enter here...
+		Functions: funcs.LookupPrefix(""),
 	}
 
 	type test struct { // an individual test
@@ -183,6 +193,7 @@ func TestAstFunc0(t *testing.T) {
 			}
 			`,
 			fail:  false,
+			scope: scope,
 			graph: graph,
 		})
 	}
@@ -211,6 +222,7 @@ func TestAstFunc0(t *testing.T) {
 			}
 			`,
 			fail:  false,
+			scope: scope,
 			graph: graph,
 		})
 	}
@@ -242,6 +254,7 @@ func TestAstFunc0(t *testing.T) {
 			$i = 13
 			`,
 			fail:  false,
+			scope: scope,
 			graph: graph,
 		})
 	}
@@ -368,121 +381,446 @@ func TestAstFunc0(t *testing.T) {
 
 	names := []string{}
 	for index, tc := range testCases { // run all the tests
-		name, code, fail, scope, exp := tc.name, tc.code, tc.fail, tc.scope, tc.graph
-
-		if name == "" {
-			name = "<sub test not named>"
-		}
-		if util.StrInList(name, names) {
-			t.Errorf("test #%d: duplicate sub test name of: %s", index, name)
+		if tc.name == "" {
+			t.Errorf("test #%d: not named", index)
 			continue
 		}
-		names = append(names, name)
+		if util.StrInList(tc.name, names) {
+			t.Errorf("test #%d: duplicate sub test name of: %s", index, tc.name)
+			continue
+		}
+		names = append(names, tc.name)
 
 		//if index != 3 { // hack to run a subset (useful for debugging)
 		//if tc.name != "simple operators" {
 		//	continue
 		//}
 
-		t.Logf("\n\ntest #%d (%s) ----------------\n\n", index, name)
-		str := strings.NewReader(code)
-		ast, err := LexParse(str)
-		if err != nil {
-			t.Errorf("test #%d: FAIL", index)
-			t.Errorf("test #%d: lex/parse failed with: %+v", index, err)
+		t.Run(fmt.Sprintf("test #%d (%s)", index, tc.name), func(t *testing.T) {
+			name, code, fail, scope, exp := tc.name, tc.code, tc.fail, tc.scope, tc.graph
+
+			t.Logf("\n\ntest #%d (%s) ----------------\n\n", index, name)
+			str := strings.NewReader(code)
+			ast, err := LexParse(str)
+			if err != nil {
+				t.Errorf("test #%d: FAIL", index)
+				t.Errorf("test #%d: lex/parse failed with: %+v", index, err)
+				return
+			}
+			t.Logf("test #%d: AST: %+v", index, ast)
+
+			data := &interfaces.Data{
+				Debug: true,
+				Logf: func(format string, v ...interface{}) {
+					t.Logf("ast: "+format, v...)
+				},
+			}
+			// some of this might happen *after* interpolate in SetScope or Unify...
+			if err := ast.Init(data); err != nil {
+				t.Errorf("test #%d: FAIL", index)
+				t.Errorf("test #%d: could not init and validate AST: %+v", index, err)
+				return
+			}
+
+			iast, err := ast.Interpolate()
+			if err != nil {
+				t.Errorf("test #%d: FAIL", index)
+				t.Errorf("test #%d: interpolate failed with: %+v", index, err)
+				return
+			}
+
+			// propagate the scope down through the AST...
+			err = iast.SetScope(scope)
+			if !fail && err != nil {
+				t.Errorf("test #%d: FAIL", index)
+				t.Errorf("test #%d: could not set scope: %+v", index, err)
+				return
+			}
+			if fail && err != nil {
+				return // fail happened during set scope, don't run unification!
+			}
+
+			// apply type unification
+			logf := func(format string, v ...interface{}) {
+				t.Logf(fmt.Sprintf("test #%d", index)+": unification: "+format, v...)
+			}
+			err = unification.Unify(iast, unification.SimpleInvariantSolverLogger(logf))
+			if !fail && err != nil {
+				t.Errorf("test #%d: FAIL", index)
+				t.Errorf("test #%d: could not unify types: %+v", index, err)
+				return
+			}
+			// maybe it will fail during graph below instead?
+			//if fail && err == nil {
+			//	t.Errorf("test #%d: FAIL", index)
+			//	t.Errorf("test #%d: unification passed, expected fail", index)
+			//	continue
+			//}
+			if fail && err != nil {
+				return // fail happened during unification, don't run Graph!
+			}
+
+			// build the function graph
+			graph, err := iast.Graph()
+
+			if !fail && err != nil {
+				t.Errorf("test #%d: FAIL", index)
+				t.Errorf("test #%d: functions failed with: %+v", index, err)
+				return
+			}
+			if fail && err == nil {
+				t.Errorf("test #%d: FAIL", index)
+				t.Errorf("test #%d: functions passed, expected fail", index)
+				return
+			}
+
+			if fail { // can't process graph if it's nil
+				// TODO: match against expected error
+				t.Logf("test #%d: error: %+v", index, err)
+				return
+			}
+
+			t.Logf("test #%d: graph: %+v", index, graph)
+			// TODO: improve: https://github.com/purpleidea/mgmt/issues/199
+			if err := graph.GraphCmp(exp, vertexAstCmpFn, edgeAstCmpFn); err != nil {
+				t.Errorf("test #%d: FAIL\n\n", index)
+				t.Logf("test #%d: actual (g1): %v%s\n\n", index, graph, fullPrint(graph))
+				t.Logf("test #%d: expected (g2): %v%s\n\n", index, exp, fullPrint(exp))
+				t.Errorf("test #%d: cmp error:\n%v", index, err)
+				return
+			}
+
+			for i, v := range graph.Vertices() {
+				t.Logf("test #%d: vertex(%d): %+v", index, i, v)
+			}
+			for v1 := range graph.Adjacency() {
+				for v2, e := range graph.Adjacency()[v1] {
+					t.Logf("test #%d: edge(%+v): %+v -> %+v", index, e, v1, v2)
+				}
+			}
+		})
 	}
 }
|
||||
// TestAstFunc1 is a more advanced version which pulls code from physical dirs.
|
||||
func TestAstFunc1(t *testing.T) {
|
||||
const magicError = "# err: "
|
||||
const magicEmpty = "# empty!"
|
||||
dir, err := util.TestDirFull()
|
||||
if err != nil {
|
||||
t.Errorf("FAIL: could not get tests directory: %+v", err)
|
||||
return
|
||||
}
|
||||
t.Logf("tests directory is: %s", dir)
|
||||
scope := &interfaces.Scope{ // global scope
|
||||
Variables: map[string]interfaces.Expr{
|
||||
"purpleidea": &ExprStr{V: "hello world!"}, // james says hi
|
||||
// TODO: change to a func when we can change hostname dynamically!
|
||||
"hostname": &ExprStr{V: ""}, // NOTE: empty b/c not used
|
||||
},
|
||||
// all the built-in top-level, core functions enter here...
|
||||
Functions: funcs.LookupPrefix(""),
|
||||
}
|
||||
|
||||
type test struct { // an individual test
|
||||
name string
|
||||
path string // relative sub directory path inside tests dir
|
||||
fail bool
|
||||
//graph *pgraph.Graph
|
||||
expstr string // expected graph in string format
|
||||
}
|
||||
testCases := []test{}
|
||||
//{
|
||||
// graph, _ := pgraph.NewGraph("g")
|
||||
// testCases = append(testCases, test{
|
||||
// name: "simple hello world",
|
||||
// path: "hello0/",
|
||||
// fail: false,
|
||||
// expstr: graph.Sprint(),
|
||||
// })
|
||||
//}
|
||||
|
||||
// build test array automatically from reading the dir
|
||||
files, err := ioutil.ReadDir(dir)
|
||||
if err != nil {
|
||||
t.Errorf("FAIL: could not read through tests directory: %+v", err)
|
||||
return
|
||||
}
|
||||
for _, f := range files {
|
||||
if !f.IsDir() {
|
||||
continue
|
||||
}
|
||||
t.Logf("test #%d: AST: %+v", index, ast)
|
||||
|
||||
data := &interfaces.Data{
|
||||
Debug: true,
|
||||
Logf: func(format string, v ...interface{}) {
|
||||
t.Logf("ast: "+format, v...)
|
||||
},
|
||||
graphFile := f.Name() + ".graph" // expected graph file
|
||||
graphFileFull := dir + graphFile
|
||||
info, err := os.Stat(graphFileFull)
|
||||
if err != nil || info.IsDir() {
|
||||
t.Errorf("FAIL: missing: %s", graphFile)
|
||||
t.Errorf("(err: %+v)", err)
|
||||
continue
|
||||
}
|
||||
// some of this might happen *after* interpolate in SetScope or Unify...
|
||||
if err := ast.Init(data); err != nil {
|
||||
t.Errorf("test #%d: FAIL", index)
|
||||
t.Errorf("test #%d: could not init and validate AST: %+v", index, err)
|
||||
content, err := ioutil.ReadFile(graphFileFull)
|
||||
if err != nil {
|
||||
t.Errorf("FAIL: could not read graph file: %+v", err)
|
||||
return
|
||||
}
|
||||
str := string(content) // expected graph
|
||||
|
||||
iast, err := ast.Interpolate()
|
||||
if err != nil {
|
||||
t.Errorf("test #%d: FAIL", index)
|
||||
t.Errorf("test #%d: interpolate failed with: %+v", index, err)
|
||||
continue
|
||||
// if the graph file has a magic error string, it's a failure
|
||||
errStr := ""
|
||||
if strings.HasPrefix(str, magicError) {
|
||||
errStr = strings.TrimPrefix(str, magicError)
|
||||
str = errStr
|
||||
}
|
||||
|
||||
// propagate the scope down through the AST...
|
||||
err = iast.SetScope(scope)
|
||||
if !fail && err != nil {
|
||||
t.Errorf("test #%d: FAIL", index)
|
||||
t.Errorf("test #%d: could not set scope: %+v", index, err)
|
||||
continue
|
||||
}
|
||||
if fail && err != nil {
|
||||
continue // fail happened during set scope, don't run unification!
|
||||
}
|
||||
// add automatic test case
|
||||
testCases = append(testCases, test{
|
||||
name: fmt.Sprintf("dir: %s", f.Name()),
|
||||
path: f.Name() + "/",
|
||||
fail: errStr != "",
|
||||
expstr: str,
|
||||
})
|
||||
//t.Logf("adding: %s", f.Name() + "/")
|
||||
}
|
||||
|
||||
// apply type unification
|
||||
logf := func(format string, v ...interface{}) {
|
||||
t.Logf(fmt.Sprintf("test #%d", index)+": unification: "+format, v...)
|
||||
}
|
||||
err = unification.Unify(iast, unification.SimpleInvariantSolverLogger(logf))
|
||||
if !fail && err != nil {
|
||||
t.Errorf("test #%d: FAIL", index)
|
||||
t.Errorf("test #%d: could not unify types: %+v", index, err)
|
||||
names := []string{}
|
||||
for index, tc := range testCases { // run all the tests
|
||||
if tc.name == "" {
|
||||
t.Errorf("test #%d: not named", index)
|
||||
continue
|
||||
}
|
||||
// maybe it will fail during graph below instead?
|
||||
//if fail && err == nil {
|
||||
// t.Errorf("test #%d: FAIL", index)
|
||||
// t.Errorf("test #%d: unification passed, expected fail", index)
|
||||
if util.StrInList(tc.name, names) {
|
||||
t.Errorf("test #%d: duplicate sub test name of: %s", index, tc.name)
|
||||
continue
|
||||
}
|
||||
names = append(names, tc.name)
|
||||
|
||||
//if index != 3 { // hack to run a subset (useful for debugging)
|
||||
//if tc.name != "simple operators" {
|
||||
// continue
|
||||
//}
|
||||
if fail && err != nil {
|
||||
continue // fail happened during unification, don't run Graph!
|
||||
}
|
||||
|
||||
// build the function graph
|
||||
graph, err := iast.Graph()
|
||||
t.Run(fmt.Sprintf("test #%d (%s)", index, tc.name), func(t *testing.T) {
|
||||
name, path, fail, expstr := tc.name, tc.path, tc.fail, strings.Trim(tc.expstr, "\n")
|
||||
src := dir + path // location of the test
|
||||
|
||||
if !fail && err != nil {
|
||||
t.Errorf("test #%d: FAIL", index)
|
||||
t.Errorf("test #%d: functions failed with: %+v", index, err)
|
||||
continue
|
||||
}
|
||||
if fail && err == nil {
|
||||
t.Errorf("test #%d: FAIL", index)
|
||||
t.Errorf("test #%d: functions passed, expected fail", index)
|
||||
continue
|
||||
}
|
||||
t.Logf("\n\ntest #%d (%s) ----------------\npath: %s\n\n", index, name, src)
|
||||
|
||||
if fail { // can't process graph if it's nil
|
||||
// TODO: match against expected error
|
||||
t.Logf("test #%d: error: %+v", index, err)
|
||||
continue
|
||||
}
|
||||
|
||||
t.Logf("test #%d: graph: %+v", index, graph)
|
||||
// TODO: improve: https://github.com/purpleidea/mgmt/issues/199
|
||||
if err := graph.GraphCmp(exp, vertexAstCmpFn, edgeAstCmpFn); err != nil {
|
||||
t.Errorf("test #%d: FAIL\n\n", index)
|
||||
t.Logf("test #%d: actual (g1): %v%s\n\n", index, graph, fullPrint(graph))
|
||||
t.Logf("test #%d: expected (g2): %v%s\n\n", index, exp, fullPrint(exp))
|
||||
t.Errorf("test #%d: cmp error:\n%v", index, err)
|
||||
continue
|
||||
}
|
||||
|
||||
for i, v := range graph.Vertices() {
|
||||
t.Logf("test #%d: vertex(%d): %+v", index, i, v)
|
||||
}
|
||||
for v1 := range graph.Adjacency() {
|
||||
for v2, e := range graph.Adjacency()[v1] {
|
||||
t.Logf("test #%d: edge(%+v): %+v -> %+v", index, e, v1, v2)
|
||||
logf := func(format string, v ...interface{}) {
|
||||
t.Logf(fmt.Sprintf("test #%d", index)+": "+format, v...)
|
||||
}
|
||||
}
|
||||
mmFs := afero.NewMemMapFs()
|
||||
afs := &afero.Afero{Fs: mmFs} // wrap so that we're implementing ioutil
|
||||
fs := &util.Fs{Afero: afs}
|
||||
|
||||
// use this variant, so that we don't copy the dir name
|
||||
// this is the equivalent to running `rsync -a src/ /`
|
||||
if err := util.CopyDiskContentsToFs(fs, src, "/", false); err != nil {
|
||||
t.Errorf("test #%d: FAIL", index)
|
||||
t.Errorf("test #%d: CopyDiskContentsToFs failed: %+v", index, err)
|
||||
return
|
||||
}
|
||||
|
||||
// this shows us what we pulled in from the test dir:
|
||||
tree0, err := util.FsTree(fs, "/")
|
||||
if err != nil {
|
||||
t.Errorf("test #%d: FAIL", index)
|
||||
t.Errorf("test #%d: FsTree failed: %+v", index, err)
|
||||
return
|
||||
}
|
||||
logf("tree:\n%s", tree0)
|
||||
|
||||
input := "/"
|
||||
logf("input: %s", input)
|
||||
|
||||
output, err := parseInput(input, fs) // raw code can be passed in
|
||||
if err != nil {
|
||||
t.Errorf("test #%d: FAIL", index)
|
||||
t.Errorf("test #%d: parseInput failed: %+v", index, err)
|
||||
return
|
||||
}
|
||||
for _, fn := range output.Workers {
|
||||
if err := fn(fs); err != nil {
|
||||
t.Errorf("test #%d: FAIL", index)
|
||||
t.Errorf("test #%d: worker execution failed: %+v", index, err)
|
||||
return
|
||||
}
|
||||
}
|
||||
tree, err := util.FsTree(fs, "/")
|
||||
if err != nil {
|
||||
t.Errorf("test #%d: FAIL", index)
|
||||
t.Errorf("test #%d: FsTree failed: %+v", index, err)
|
||||
return
|
||||
}
|
||||
logf("tree:\n%s", tree)
|
||||
|
||||
logf("main:\n%s", output.Main) // debug
|
||||
|
||||
reader := bytes.NewReader(output.Main)
|
||||
ast, err := LexParse(reader)
|
||||
if !fail && err != nil {
|
||||
t.Errorf("test #%d: FAIL", index)
|
||||
t.Errorf("test #%d: lex/parse failed with: %+v", index, err)
|
||||
return
|
||||
}
|
||||
if fail && err != nil {
|
||||
// TODO: %+v instead?
|
||||
s := fmt.Sprintf("%s", err) // convert to string
|
||||
if s != expstr {
|
||||
t.Errorf("test #%d: FAIL", index)
|
||||
t.Errorf("test #%d: expected different error", index)
|
||||
t.Logf("test #%d: err: %s", index, s)
|
||||
t.Logf("test #%d: exp: %s", index, expstr)
|
||||
}
|
||||
return // fail happened during set scope, don't run unification!
|
||||
}
|
||||
t.Logf("test #%d: AST: %+v", index, ast)
|
||||
|
||||
importGraph, err := pgraph.NewGraph("importGraph")
|
||||
if err != nil {
|
||||
t.Errorf("test #%d: FAIL", index)
|
||||
t.Errorf("test #%d: could not create graph: %+v", index, err)
|
||||
return
|
||||
}
|
||||
importVertex := &pgraph.SelfVertex{
|
||||
Name: "", // first node is the empty string
|
||||
Graph: importGraph, // store a reference to ourself
|
||||
}
|
||||
importGraph.AddVertex(importVertex)
|
||||
|
||||
data := &interfaces.Data{
|
||||
Fs: fs,
|
||||
Base: output.Base, // base dir (absolute path) the metadata file is in
|
||||
Files: output.Files, // no really needed here afaict
|
||||
Imports: importVertex,
|
||||
Metadata: output.Metadata,
|
||||
Modules: "/" + interfaces.ModuleDirectory, // not really needed here afaict
|
||||
|
||||
Debug: true,
|
||||
Logf: func(format string, v ...interface{}) {
|
||||
logf("ast: "+format, v...)
|
||||
},
|
||||
}
|
||||
// some of this might happen *after* interpolate in SetScope or Unify...
|
||||
if err := ast.Init(data); err != nil {
|
||||
t.Errorf("test #%d: FAIL", index)
|
||||
t.Errorf("test #%d: could not init and validate AST: %+v", index, err)
|
||||
return
|
||||
}
|
||||
|
||||
iast, err := ast.Interpolate()
|
||||
if err != nil {
|
||||
t.Errorf("test #%d: FAIL", index)
|
||||
t.Errorf("test #%d: interpolate failed with: %+v", index, err)
|
||||
return
|
||||
}
|
||||
|
||||
// propagate the scope down through the AST...
|
||||
err = iast.SetScope(scope)
|
||||
if !fail && err != nil {
|
||||
t.Errorf("test #%d: FAIL", index)
|
||||
t.Errorf("test #%d: could not set scope: %+v", index, err)
|
||||
return
|
||||
}
|
||||
if fail && err != nil {
|
||||
// TODO: %+v instead?
|
||||
s := fmt.Sprintf("%s", err) // convert to string
|
||||
if s != expstr {
|
||||
t.Errorf("test #%d: FAIL", index)
|
||||
t.Errorf("test #%d: expected different error", index)
|
||||
t.Logf("test #%d: err: %s", index, s)
|
||||
t.Logf("test #%d: exp: %s", index, expstr)
|
||||
}
|
||||
return // fail happened during set scope, don't run unification!
|
			}

			// apply type unification
			xlogf := func(format string, v ...interface{}) {
				logf("unification: "+format, v...)
			}
			err = unification.Unify(iast, unification.SimpleInvariantSolverLogger(xlogf))
			if !fail && err != nil {
				t.Errorf("test #%d: FAIL", index)
				t.Errorf("test #%d: could not unify types: %+v", index, err)
				return
			}
			// maybe it will fail during graph below instead?
			//if fail && err == nil {
			//	t.Errorf("test #%d: FAIL", index)
			//	t.Errorf("test #%d: unification passed, expected fail", index)
			//	continue
			//}
			if fail && err != nil {
				// TODO: %+v instead?
				s := fmt.Sprintf("%s", err) // convert to string
				if s != expstr {
					t.Errorf("test #%d: FAIL", index)
					t.Errorf("test #%d: expected different error", index)
					t.Logf("test #%d: err: %s", index, s)
					t.Logf("test #%d: exp: %s", index, expstr)
				}
				return // fail happened during unification, don't run Graph!
			}

			// build the function graph
			graph, err := iast.Graph()

			if !fail && err != nil {
				t.Errorf("test #%d: FAIL", index)
				t.Errorf("test #%d: functions failed with: %+v", index, err)
				return
			}
			if fail && err == nil {
				t.Errorf("test #%d: FAIL", index)
				t.Errorf("test #%d: functions passed, expected fail", index)
				return
			}

			if fail { // can't process graph if it's nil
				// TODO: %+v instead?
				s := fmt.Sprintf("%s", err) // convert to string
				if s != expstr {
					t.Errorf("test #%d: FAIL", index)
					t.Errorf("test #%d: expected different error", index)
					t.Logf("test #%d: err: %s", index, s)
					t.Logf("test #%d: exp: %s", index, expstr)
				}
				return
			}

			t.Logf("test #%d: graph: %+v", index, graph)
			str := strings.Trim(graph.Sprint(), "\n") // text format of graph
			if expstr == magicEmpty {
				expstr = ""
			}
			// XXX: something isn't consistent, and I can't figure
			// out what, so workaround this by sorting these :(
			sortHack := func(x string) string {
				l := strings.Split(x, "\n")
				sort.Strings(l)
				return strings.Join(l, "\n")
			}
			str = sortHack(str)
			expstr = sortHack(expstr)
			if expstr != str {
				t.Errorf("test #%d: FAIL\n\n", index)
				t.Logf("test #%d: actual (g1):\n%s\n\n", index, str)
				t.Logf("test #%d: expected (g2):\n%s\n\n", index, expstr)
				diff := pretty.Compare(str, expstr)
				if diff != "" { // bonus
					t.Logf("test #%d: diff:\n%s", index, diff)
				}
				return
			}

			for i, v := range graph.Vertices() {
				t.Logf("test #%d: vertex(%d): %+v", index, i, v)
			}
			for v1 := range graph.Adjacency() {
				for v2, e := range graph.Adjacency()[v1] {
					t.Logf("test #%d: edge(%+v): %+v -> %+v", index, e, v1, v2)
				}
			}
		})
	}
}
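The `sortHack` closure above compensates for nondeterministic vertex/edge ordering in the graph's text output: both the actual and the expected multi-line dumps are sorted line-wise before comparison. A minimal standalone version of the same idea (names here are illustrative, not mgmt's API):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// sortLines returns the input with its lines sorted, so that two
// multi-line graph dumps compare equal regardless of line order.
func sortLines(x string) string {
	l := strings.Split(x, "\n")
	sort.Strings(l)
	return strings.Join(l, "\n")
}

func main() {
	a := "Vertex: b\nVertex: a"
	b := "Vertex: a\nVertex: b"
	fmt.Println(sortLines(a) == sortLines(b)) // order no longer matters
}
```

This keeps the comparison deterministic at the cost of ignoring genuine ordering differences, which is acceptable here because the edges themselves encode the dependencies.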
lang/interpret_test/TestAstFunc1/comment1.graph (new file, 1 line)
@@ -0,0 +1 @@
# empty!

lang/interpret_test/TestAstFunc1/comment1/main.mcl (new file, 1 line)
@@ -0,0 +1 @@
# this is a comment
lang/interpret_test/TestAstFunc1/fail1.graph (new file, 1 line)
@@ -0,0 +1 @@
# err: parser: `syntax error: unexpected COLON` @2:8

lang/interpret_test/TestAstFunc1/fail1/main.mcl (new file, 4 lines)
@@ -0,0 +1,4 @@
# this is not valid mcl code, this is puppet!
file { "/tmp/foo":
	ensure => present,
}
lang/interpret_test/TestAstFunc1/hello0.graph (new file, 8 lines)
@@ -0,0 +1,8 @@
Vertex: call:fmt.printf(str(hello: %s), var(s))
Vertex: str(greeting)
Vertex: str(hello: %s)
Vertex: str(world)
Vertex: var(s)
Edge: str(hello: %s) -> call:fmt.printf(str(hello: %s), var(s)) # a
Edge: str(world) -> var(s) # s
Edge: var(s) -> call:fmt.printf(str(hello: %s), var(s)) # b

lang/interpret_test/TestAstFunc1/hello0/main.mcl (new file, 7 lines)
@@ -0,0 +1,7 @@
import "fmt"

$s = "world"

test "greeting" {
	anotherstr => fmt.printf("hello: %s", $s),
}
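The hello0.graph fixture uses a simple textual form: one `Vertex:` line per function-graph node and one `Edge:` line per dependency, with the edge's argument name after `#`. A hypothetical sketch that emits the same shape (the types here are illustrative stand-ins, not mgmt's pgraph API):

```go
package main

import "fmt"

// edge models one labelled dependency in the textual graph format used
// by the .graph fixtures: "Edge: <from> -> <to> # <arg name>".
type edge struct{ from, to, name string }

// sprint renders vertices then edges, one per line, matching the
// fixture layout shown above.
func sprint(vertices []string, edges []edge) string {
	out := ""
	for _, v := range vertices {
		out += fmt.Sprintf("Vertex: %s\n", v)
	}
	for _, e := range edges {
		out += fmt.Sprintf("Edge: %s -> %s # %s\n", e.from, e.to, e.name)
	}
	return out
}

func main() {
	fmt.Print(sprint(
		[]string{"str(world)", "var(s)"},
		[]edge{{"str(world)", "var(s)", "s"}},
	))
}
```

Because the fixture comparison sorts lines first, the order in which vertices and edges are emitted does not have to be stable.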
lang/interpret_test/TestAstFunc1/module_search1.graph (new file, 72 lines)
@@ -0,0 +1,72 @@
Vertex: call:_operator(str(+), int(42), var(third.three))
Vertex: call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name))
Vertex: call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name))
Vertex: call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example2/ and i contain: ), var(ex1))
Vertex: call:_operator(str(+), str(this is module mod1 which contains: ), var(mod1.name))
Vertex: call:fmt.printf(str(i imported local: %s), var(mod1.name))
Vertex: call:fmt.printf(str(i imported remote: %s and %s), var(example1.name), var(example2.ex1))
Vertex: call:fmt.printf(str(the answer is: %d), var(answer))
Vertex: int(3)
Vertex: int(42)
Vertex: str(+)
Vertex: str(+)
Vertex: str(+)
Vertex: str(+)
Vertex: str(+)
Vertex: str(hello)
Vertex: str(hello2)
Vertex: str(hello3)
Vertex: str(i am github.com/purpleidea/mgmt-example1/ and i contain: )
Vertex: str(i am github.com/purpleidea/mgmt-example1/ and i contain: )
Vertex: str(i am github.com/purpleidea/mgmt-example2/ and i contain: )
Vertex: str(i imported local: %s)
Vertex: str(i imported remote: %s and %s)
Vertex: str(the answer is: %d)
Vertex: str(this is module mod1 which contains: )
Vertex: str(this is the nested git module mod1)
Vertex: str(this is the nested git module mod1)
Vertex: str(this is the nested local module mod1)
Vertex: var(answer)
Vertex: var(ex1)
Vertex: var(example1.name)
Vertex: var(example1.name)
Vertex: var(example2.ex1)
Vertex: var(h2g2.answer)
Vertex: var(mod1.name)
Vertex: var(mod1.name)
Vertex: var(mod1.name)
Vertex: var(mod1.name)
Vertex: var(third.three)
Edge: call:_operator(str(+), int(42), var(third.three)) -> var(h2g2.answer) # h2g2.answer
Edge: call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name)) -> var(example1.name) # example1.name
Edge: call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name)) -> var(example1.name) # example1.name
Edge: call:_operator(str(+), str(this is module mod1 which contains: ), var(mod1.name)) -> var(mod1.name) # mod1.name
Edge: int(3) -> var(third.three) # third.three
Edge: int(42) -> call:_operator(str(+), int(42), var(third.three)) # a
Edge: str(+) -> call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name)) # x
Edge: str(+) -> call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name)) # x
Edge: str(+) -> call:_operator(str(+), int(42), var(third.three)) # x
Edge: str(+) -> call:_operator(str(+), str(this is module mod1 which contains: ), var(mod1.name)) # x
Edge: str(+) -> call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example2/ and i contain: ), var(ex1)) # x
Edge: str(i am github.com/purpleidea/mgmt-example1/ and i contain: ) -> call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name)) # a
Edge: str(i am github.com/purpleidea/mgmt-example1/ and i contain: ) -> call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name)) # a
Edge: str(i am github.com/purpleidea/mgmt-example2/ and i contain: ) -> call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example2/ and i contain: ), var(ex1)) # a
Edge: str(i imported local: %s) -> call:fmt.printf(str(i imported local: %s), var(mod1.name)) # a
Edge: str(i imported remote: %s and %s) -> call:fmt.printf(str(i imported remote: %s and %s), var(example1.name), var(example2.ex1)) # a
Edge: str(the answer is: %d) -> call:fmt.printf(str(the answer is: %d), var(answer)) # a
Edge: str(this is module mod1 which contains: ) -> call:_operator(str(+), str(this is module mod1 which contains: ), var(mod1.name)) # a
Edge: str(this is the nested git module mod1) -> var(mod1.name) # mod1.name
Edge: str(this is the nested git module mod1) -> var(mod1.name) # mod1.name
Edge: str(this is the nested local module mod1) -> var(mod1.name) # mod1.name
Edge: var(answer) -> call:fmt.printf(str(the answer is: %d), var(answer)) # b
Edge: var(ex1) -> call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example2/ and i contain: ), var(ex1)) # b
Edge: var(example1.name) -> var(ex1) # ex1
Edge: var(example1.name) -> var(example2.ex1) # example2.ex1
Edge: var(example1.name) -> call:fmt.printf(str(i imported remote: %s and %s), var(example1.name), var(example2.ex1)) # b
Edge: var(example2.ex1) -> call:fmt.printf(str(i imported remote: %s and %s), var(example1.name), var(example2.ex1)) # c
Edge: var(h2g2.answer) -> var(answer) # answer
Edge: var(mod1.name) -> call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name)) # b
Edge: var(mod1.name) -> call:fmt.printf(str(i imported local: %s), var(mod1.name)) # b
Edge: var(mod1.name) -> call:_operator(str(+), str(this is module mod1 which contains: ), var(mod1.name)) # b
Edge: var(mod1.name) -> call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name)) # b
Edge: var(third.three) -> call:_operator(str(+), int(42), var(third.three)) # b

@@ -0,0 +1,3 @@
import "third.mcl"

$answer = 42 + $third.three

@@ -0,0 +1,19 @@
import "fmt"
import "h2g2.mcl"
import "mod1/"

# imports as example1
import "git://github.com/purpleidea/mgmt-example1/"
import "git://github.com/purpleidea/mgmt-example2/"

$answer = $h2g2.answer

test "hello" {
	anotherstr => fmt.printf("the answer is: %d", $answer),
}
test "hello2" {
	anotherstr => fmt.printf("i imported local: %s", $mod1.name),
}
test "hello3" {
	anotherstr => fmt.printf("i imported remote: %s and %s", $example1.name, $example2.ex1),
}

@@ -0,0 +1,3 @@
import "mod1/" # the nested version, not us

$name = "this is module mod1 which contains: " + $mod1.name

@@ -0,0 +1 @@
# empty metadata file (use defaults)

@@ -0,0 +1 @@
$name = "this is the nested local module mod1"

@@ -0,0 +1 @@
# empty metadata file (use defaults)

@@ -0,0 +1 @@
$three = 3

@@ -0,0 +1,3 @@
main: "main/hello.mcl" # this is not the default, the default is "main.mcl"
files: "files/" # these are some extra files we can use (is the default)
path: "path/" # where to look for modules, defaults to using a global

@@ -0,0 +1,4 @@
# this is a pretty lame module!
import "mod1/" # yet another similarly named "mod1" import

$name = "i am github.com/purpleidea/mgmt-example1/ and i contain: " + $mod1.name

@@ -0,0 +1,2 @@
main: "main.mcl"
files: "files/" # these are some extra files we can use (is the default)

@@ -0,0 +1 @@
$name = "this is the nested git module mod1"

@@ -0,0 +1 @@
# empty metadata file (use defaults)

@@ -0,0 +1,5 @@
# this is a pretty lame module!
import "git://github.com/purpleidea/mgmt-example1/" # import another module
$ex1 = $example1.name

$name = "i am github.com/purpleidea/mgmt-example2/ and i contain: " + $ex1

@@ -0,0 +1,4 @@
main: "main.mcl"
files: "files/" # these are some extra files we can use (is the default)
path: "path/" # specify this, even though we already imported in parent
parentpathblock: false
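The module_search1 fixtures exercise module lookup: a metadata file can name a local `path:` directory to search before any global modules directory, which is how three differently-nested modules all called "mod1" can coexist. A hedged sketch of that ordered-search idea (this is illustrative only, not mgmt's actual resolution code):

```go
package main

import "fmt"

// resolve looks for a module directory in an ordered list of search
// paths, mimicking a per-module "path:" entry being consulted before a
// global modules directory. The exists callback stands in for a real
// filesystem check.
func resolve(name string, searchPaths []string, exists func(string) bool) (string, bool) {
	for _, p := range searchPaths {
		candidate := p + name // module names end in a trailing slash
		if exists(candidate) {
			return candidate, true
		}
	}
	return "", false
}

func main() {
	fsys := map[string]bool{ // stand-in for a real filesystem
		"/project/path/mod1/": true,
		"/modules/mod1/":      true,
	}
	exists := func(s string) bool { return fsys[s] }
	// the local "path/" entry wins over the global modules directory:
	p, ok := resolve("mod1/", []string{"/project/path/", "/modules/"}, exists)
	fmt.Println(p, ok)
}
```

The first match wins, which is what lets a nested module shadow an identically-named one further up the search list.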
lang/interpret_test/TestAstFunc1/recursive_module1.graph (new file, 72 lines)
@@ -0,0 +1,72 @@
Vertex: call:_operator(str(+), int(42), var(third.three))
Vertex: call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name))
Vertex: call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name))
Vertex: call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example2/ and i contain: ), var(ex1))
Vertex: call:_operator(str(+), str(this is module mod1 which contains: ), var(mod1.name))
Vertex: call:fmt.printf(str(i imported local: %s), var(mod1.name))
Vertex: call:fmt.printf(str(i imported remote: %s and %s), var(example1.name), var(example2.ex1))
Vertex: call:fmt.printf(str(the answer is: %d), var(answer))
Vertex: int(3)
Vertex: int(42)
Vertex: str(+)
Vertex: str(+)
Vertex: str(+)
Vertex: str(+)
Vertex: str(+)
Vertex: str(hello)
Vertex: str(hello2)
Vertex: str(hello3)
Vertex: str(i am github.com/purpleidea/mgmt-example1/ and i contain: )
Vertex: str(i am github.com/purpleidea/mgmt-example1/ and i contain: )
Vertex: str(i am github.com/purpleidea/mgmt-example2/ and i contain: )
Vertex: str(i imported local: %s)
Vertex: str(i imported remote: %s and %s)
Vertex: str(the answer is: %d)
Vertex: str(this is module mod1 which contains: )
Vertex: str(this is the nested git module mod1)
Vertex: str(this is the nested git module mod1)
Vertex: str(this is the nested local module mod1)
Vertex: var(answer)
Vertex: var(ex1)
Vertex: var(example1.name)
Vertex: var(example1.name)
Vertex: var(example2.ex1)
Vertex: var(h2g2.answer)
Vertex: var(mod1.name)
Vertex: var(mod1.name)
Vertex: var(mod1.name)
Vertex: var(mod1.name)
Vertex: var(third.three)
Edge: call:_operator(str(+), int(42), var(third.three)) -> var(h2g2.answer) # h2g2.answer
Edge: call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name)) -> var(example1.name) # example1.name
Edge: call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name)) -> var(example1.name) # example1.name
Edge: call:_operator(str(+), str(this is module mod1 which contains: ), var(mod1.name)) -> var(mod1.name) # mod1.name
Edge: int(3) -> var(third.three) # third.three
Edge: int(42) -> call:_operator(str(+), int(42), var(third.three)) # a
Edge: str(+) -> call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name)) # x
Edge: str(+) -> call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name)) # x
Edge: str(+) -> call:_operator(str(+), int(42), var(third.three)) # x
Edge: str(+) -> call:_operator(str(+), str(this is module mod1 which contains: ), var(mod1.name)) # x
Edge: str(+) -> call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example2/ and i contain: ), var(ex1)) # x
Edge: str(i am github.com/purpleidea/mgmt-example1/ and i contain: ) -> call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name)) # a
Edge: str(i am github.com/purpleidea/mgmt-example1/ and i contain: ) -> call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name)) # a
Edge: str(i am github.com/purpleidea/mgmt-example2/ and i contain: ) -> call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example2/ and i contain: ), var(ex1)) # a
Edge: str(i imported local: %s) -> call:fmt.printf(str(i imported local: %s), var(mod1.name)) # a
Edge: str(i imported remote: %s and %s) -> call:fmt.printf(str(i imported remote: %s and %s), var(example1.name), var(example2.ex1)) # a
Edge: str(the answer is: %d) -> call:fmt.printf(str(the answer is: %d), var(answer)) # a
Edge: str(this is module mod1 which contains: ) -> call:_operator(str(+), str(this is module mod1 which contains: ), var(mod1.name)) # a
Edge: str(this is the nested git module mod1) -> var(mod1.name) # mod1.name
Edge: str(this is the nested git module mod1) -> var(mod1.name) # mod1.name
Edge: str(this is the nested local module mod1) -> var(mod1.name) # mod1.name
Edge: var(answer) -> call:fmt.printf(str(the answer is: %d), var(answer)) # b
Edge: var(ex1) -> call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example2/ and i contain: ), var(ex1)) # b
Edge: var(example1.name) -> var(ex1) # ex1
Edge: var(example1.name) -> var(example2.ex1) # example2.ex1
Edge: var(example1.name) -> call:fmt.printf(str(i imported remote: %s and %s), var(example1.name), var(example2.ex1)) # b
Edge: var(example2.ex1) -> call:fmt.printf(str(i imported remote: %s and %s), var(example1.name), var(example2.ex1)) # c
Edge: var(h2g2.answer) -> var(answer) # answer
Edge: var(mod1.name) -> call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name)) # b
Edge: var(mod1.name) -> call:fmt.printf(str(i imported local: %s), var(mod1.name)) # b
Edge: var(mod1.name) -> call:_operator(str(+), str(this is module mod1 which contains: ), var(mod1.name)) # b
Edge: var(mod1.name) -> call:_operator(str(+), str(i am github.com/purpleidea/mgmt-example1/ and i contain: ), var(mod1.name)) # b
Edge: var(third.three) -> call:_operator(str(+), int(42), var(third.three)) # b

@@ -0,0 +1,3 @@
import "third.mcl"

$answer = 42 + $third.three

@@ -0,0 +1,19 @@
import "fmt"
import "h2g2.mcl"
import "mod1/"

# imports as example1
import "git://github.com/purpleidea/mgmt-example1/"
import "git://github.com/purpleidea/mgmt-example2/"

$answer = $h2g2.answer

test "hello" {
	anotherstr => fmt.printf("the answer is: %d", $answer),
}
test "hello2" {
	anotherstr => fmt.printf("i imported local: %s", $mod1.name),
}
test "hello3" {
	anotherstr => fmt.printf("i imported remote: %s and %s", $example1.name, $example2.ex1),
}

@@ -0,0 +1,3 @@
import "mod1/" # the nested version, not us

$name = "this is module mod1 which contains: " + $mod1.name

@@ -0,0 +1 @@
# empty metadata file (use defaults)

@@ -0,0 +1 @@
$name = "this is the nested local module mod1"

@@ -0,0 +1 @@
# empty metadata file (use defaults)

@@ -0,0 +1 @@
$three = 3

@@ -0,0 +1,3 @@
main: "main/hello.mcl" # this is not the default, the default is "main.mcl"
files: "files/" # these are some extra files we can use (is the default)
path: "path/" # where to look for modules, defaults to using a global

@@ -0,0 +1,4 @@
# this is a pretty lame module!
import "mod1/" # yet another similarly named "mod1" import

$name = "i am github.com/purpleidea/mgmt-example1/ and i contain: " + $mod1.name

@@ -0,0 +1,2 @@
main: "main.mcl"
files: "files/" # these are some extra files we can use (is the default)

@@ -0,0 +1 @@
$name = "this is the nested git module mod1"

@@ -0,0 +1 @@
# empty metadata file (use defaults)

@@ -0,0 +1,5 @@
# this is a pretty lame module!
import "git://github.com/purpleidea/mgmt-example1/" # import another module
$ex1 = $example1.name

$name = "i am github.com/purpleidea/mgmt-example2/ and i contain: " + $ex1

@@ -0,0 +1,2 @@
main: "main.mcl"
files: "files/" # these are some extra files we can use (is the default)
lang/lang.go (70 changes)
@@ -18,8 +18,8 @@
package lang // TODO: move this into a sub package of lang/$name?

import (
+	"bytes"
	"fmt"
-	"io"
	"sync"

	"github.com/purpleidea/mgmt/engine"

@@ -28,14 +28,12 @@ import (
	"github.com/purpleidea/mgmt/lang/interfaces"
	"github.com/purpleidea/mgmt/lang/unification"
	"github.com/purpleidea/mgmt/pgraph"
	"github.com/purpleidea/mgmt/util"

	errwrap "github.com/pkg/errors"
)

const (
	// FileNameExtension is the filename extension used for languages files.
	FileNameExtension = "mcl" // alternate suggestions welcome!

	// make these available internally without requiring the import
	operatorFuncName = funcs.OperatorFuncName
	historyFuncName  = funcs.HistoryFuncName

@@ -44,7 +42,17 @@ const (

// Lang is the main language lexer/parser object.
type Lang struct {
-	Input io.Reader // os.Stdin or anything that satisfies this interface
+	Fs engine.Fs // connected fs where input dir or metadata exists
+	// Input is a string which specifies what the lang should run. It can
+	// accept values in several different forms. If it is passed a single
+	// dash (-), then it will use `os.Stdin`. If it is passed a single .mcl
+	// file, then it will attempt to run that. If it is passed a directory
+	// path, then it will attempt to run from there. Instead, if it is
+	// passed the path to a metadata file, then it will attempt to parse
+	// that and run from that specification. If none of those match, it
+	// will attempt to run the raw string as mcl code.
+	Input string

	Hostname string
	World    engine.World
	Prefix   string

@@ -76,9 +84,36 @@ func (obj *Lang) Init() error {
	once := &sync.Once{}
	loadedSignal := func() { close(obj.loadedChan) } // only run once!

	if obj.Debug {
		obj.Logf("input: %s", obj.Input)
		tree, err := util.FsTree(obj.Fs, "/") // should look like gapi
		if err != nil {
			return err
		}
		obj.Logf("run tree:\n%s", tree)
	}

	// we used to support stdin passthrough, but we got rid of it for now
	// the fs input here is the local fs we're reading to get the files
	// from, which is usually etcdFs.
	output, err := parseInput(obj.Input, obj.Fs)
	if err != nil {
		return errwrap.Wrapf(err, "could not activate an input parser")
	}
	if len(output.Workers) > 0 {
		// either programming error, or someone hacked in something here
		// by the time *this* parseInput runs, we should be standardized
		return fmt.Errorf("input contained file system workers")
	}
	reader := bytes.NewReader(output.Main)

	// no need to run recursion detection since this is the beginning
	// TODO: do the paths need to be cleaned for "../" before comparison?

	// run the lexer/parser and build an AST
	obj.Logf("lexing/parsing...")
-	ast, err := LexParse(obj.Input)
+	// this reads an io.Reader, which might be a stream of multiple files...
+	ast, err := LexParse(reader)
	if err != nil {
		return errwrap.Wrapf(err, "could not generate AST")
	}

@@ -86,10 +121,29 @@ func (obj *Lang) Init() error {
		obj.Logf("behold, the AST: %+v", ast)
	}

	importGraph, err := pgraph.NewGraph("importGraph")
	if err != nil {
		return errwrap.Wrapf(err, "could not create graph")
	}
	importVertex := &pgraph.SelfVertex{
		Name:  "",          // first node is the empty string
		Graph: importGraph, // store a reference to ourself
	}
	importGraph.AddVertex(importVertex)

	obj.Logf("init...")
	// init and validate the structure of the AST
	data := &interfaces.Data{
		Debug:    obj.Debug,
		Fs:       obj.Fs,
		Base:     output.Base, // base dir (absolute path) the metadata file is in
		Files:    output.Files,
		Imports:  importVertex,
		Metadata: output.Metadata,
		Modules:  "/" + interfaces.ModuleDirectory, // do not set from env for a deploy!

		//World: obj.World, // TODO: do we need this?
		Prefix: obj.Prefix,
		Logf: func(format string, v ...interface{}) {
			// TODO: is this a sane prefix to use here?
			obj.Logf("ast: "+format, v...)

@@ -115,6 +169,8 @@ func (obj *Lang) Init() error {
			// TODO: change to a func when we can change hostname dynamically!
			"hostname": &ExprStr{V: obj.Hostname},
		},
		// all the built-in top-level, core functions enter here...
		Functions: funcs.LookupPrefix(""),
	}

	obj.Logf("building scope...")
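The `Lang.Input` documentation above describes a dispatch on the input string's shape: a single dash, a `.mcl` file, a directory, a metadata file, or raw code. A hedged sketch of that dispatch logic (`classifyInput` is a hypothetical helper written for illustration; the exact rules in mgmt's `parseInput` may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// classifyInput sketches the input forms described in the Lang.Input
// docs: "-" means stdin, a *.mcl path means a single file, a trailing
// slash means a directory, a metadata.yaml path means a metadata file,
// and anything else is treated as raw mcl code.
func classifyInput(input string) string {
	switch {
	case input == "-":
		return "stdin"
	case strings.HasSuffix(input, ".mcl"):
		return "file"
	case strings.HasSuffix(input, "/"):
		return "dir"
	case strings.HasSuffix(input, "metadata.yaml"):
		return "metadata"
	default:
		return "code"
	}
}

func main() {
	for _, s := range []string{"-", "main.mcl", "project/", "metadata.yaml", `$x = 42`} {
		fmt.Printf("%q -> %s\n", s, classifyInput(s))
	}
}
```

Ordering the cases from most to least specific keeps the raw-code branch as the final fallback, mirroring the "if none of those match" wording in the doc comment.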
@@ -21,17 +21,18 @@ package lang

import (
	"fmt"
	"strings"
	"testing"

	"github.com/purpleidea/mgmt/engine"
	"github.com/purpleidea/mgmt/engine/resources"
	_ "github.com/purpleidea/mgmt/lang/funcs/core" // import so the funcs register
	"github.com/purpleidea/mgmt/lang/interfaces"
	"github.com/purpleidea/mgmt/pgraph"
	"github.com/purpleidea/mgmt/util"

	multierr "github.com/hashicorp/go-multierror"
	errwrap "github.com/pkg/errors"
	"github.com/spf13/afero"
)

// TODO: unify with the other function like this...

@@ -85,12 +86,32 @@ func edgeCmpFn(e1, e2 pgraph.Edge) (bool, error) {
}

func runInterpret(t *testing.T, code string) (*pgraph.Graph, error) {
-	str := strings.NewReader(code)
	logf := func(format string, v ...interface{}) {
		t.Logf("test: lang: "+format, v...)
	}
+	mmFs := afero.NewMemMapFs()
+	afs := &afero.Afero{Fs: mmFs} // wrap so that we're implementing ioutil
+	fs := &util.Fs{Afero: afs}
+
+	output, err := parseInput(code, fs) // raw code can be passed in
+	if err != nil {
+		return nil, errwrap.Wrapf(err, "parseInput failed")
+	}
+	for _, fn := range output.Workers {
+		if err := fn(fs); err != nil {
+			return nil, err
+		}
+	}
+
+	tree, err := util.FsTree(fs, "/")
+	if err != nil {
+		return nil, err
+	}
+	logf("tree:\n%s", tree)
+
	lang := &Lang{
-		Input: str, // string as an interface that satisfies io.Reader
+		Fs:    fs,
+		Input: "/" + interfaces.MetadataFilename, // start path in fs
		Debug: true,
		Logf:  logf,
	}

@@ -125,18 +146,19 @@ func runInterpret(t *testing.T, code string) (*pgraph.Graph, error) {
	return graph, closeFn()
}

-func TestInterpret0(t *testing.T) {
-	code := ``
-	graph, err := runInterpret(t, code)
-	if err != nil {
-		t.Errorf("runInterpret failed: %+v", err)
-		return
-	}
+// TODO: empty code is not currently allowed, should we allow it?
+//func TestInterpret0(t *testing.T) {
+//	code := ``
+//	graph, err := runInterpret(t, code)
+//	if err != nil {
+//		t.Errorf("runInterpret failed: %+v", err)
+//		return
+//	}

-	expected := &pgraph.Graph{}
+//	expected := &pgraph.Graph{}

-	runGraphCmp(t, graph, expected)
-}
+//	runGraphCmp(t, graph, expected)
+//}

func TestInterpret1(t *testing.T) {
	code := `noop "n1" {}`

@@ -307,24 +329,25 @@ func TestInterpretMany(t *testing.T) {
	}
	testCases := []test{}

-	{
-		graph, _ := pgraph.NewGraph("g")
-		testCases = append(testCases, test{ // 0
-			"nil",
-			``,
-			false,
-			graph,
-		})
-	}
-	{
-		graph, _ := pgraph.NewGraph("g")
-		testCases = append(testCases, test{ // 1
-			name:  "empty",
-			code:  ``,
-			fail:  false,
-			graph: graph,
-		})
-	}
+	// TODO: empty code is not currently allowed, should we allow it?
+	//{
+	//	graph, _ := pgraph.NewGraph("g")
+	//	testCases = append(testCases, test{ // 0
+	//		"nil",
+	//		``,
+	//		false,
+	//		graph,
+	//	})
+	//}
+	//{
+	//	graph, _ := pgraph.NewGraph("g")
+	//	testCases = append(testCases, test{ // 1
+	//		name:  "empty",
+	//		code:  ``,
+	//		fail:  false,
+	//		graph: graph,
+	//	})
+	//}
	{
		graph, _ := pgraph.NewGraph("g")
		r, _ := engine.NewNamedResource("test", "t")

@@ -859,6 +882,66 @@ func TestInterpretMany(t *testing.T) {
			fail: true,
		})
	}
	{
		graph, _ := pgraph.NewGraph("g")
		r1, _ := engine.NewNamedResource("test", "t1")
		x1 := r1.(*resources.TestRes)
		s1 := "the answer is: 42"
		x1.StringPtr = &s1
		graph.AddVertex(x1)
		testCases = append(testCases, test{
			name: "simple import 1",
			code: `
			import "fmt"

			test "t1" {
				stringptr => fmt.printf("the answer is: %d", 42),
			}
			`,
			fail:  false,
			graph: graph,
		})
	}
	{
		graph, _ := pgraph.NewGraph("g")
		r1, _ := engine.NewNamedResource("test", "t1")
		x1 := r1.(*resources.TestRes)
		s1 := "the answer is: 42"
		x1.StringPtr = &s1
		graph.AddVertex(x1)
		testCases = append(testCases, test{
			name: "simple import 2",
			code: `
			import "fmt" as foo

			test "t1" {
				stringptr => foo.printf("the answer is: %d", 42),
			}
			`,
			fail:  false,
			graph: graph,
		})
	}
	{
		graph, _ := pgraph.NewGraph("g")
		r1, _ := engine.NewNamedResource("test", "t1")
		x1 := r1.(*resources.TestRes)
		s1 := "the answer is: 42"
		x1.StringPtr = &s1
		graph.AddVertex(x1)
		testCases = append(testCases, test{
			name: "simple import 3",
			code: `
			import "fmt" as *

			test "t1" {
				stringptr => printf("the answer is: %d", 42),
			}
			`,
			fail:  false,
			graph: graph,
		})
	}

	names := []string{}
	for index, tc := range testCases { // run all the tests
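The reworked `runInterpret` above first runs `output.Workers` to stage files into an in-memory filesystem, and only then hands that filesystem to `Lang`. A hedged stand-in for that staging pattern, using a plain map instead of the real `engine.Fs`/afero types (all names here are illustrative):

```go
package main

import "fmt"

// fileSystem is a map-based stand-in for the engine.Fs interface.
type fileSystem map[string]string

// worker is a callback that populates the filesystem before the
// language interpreter runs, like output.Workers in runInterpret.
type worker func(fileSystem) error

// runWorkers runs each staging callback in order, stopping at the
// first error, mirroring the loop in runInterpret.
func runWorkers(fs fileSystem, workers []worker) error {
	for _, fn := range workers {
		if err := fn(fs); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	fs := fileSystem{}
	workers := []worker{
		func(fs fileSystem) error {
			fs["/metadata.yaml"] = "main: \"main.mcl\"\n"
			return nil
		},
		func(fs fileSystem) error {
			fs["/main.mcl"] = "$x = 42\n"
			return nil
		},
	}
	if err := runWorkers(fs, workers); err != nil {
		panic(err)
	}
	fmt.Println(len(fs), "files staged")
}
```

Separating staging from interpretation is what lets the same test harness feed raw code, single files, or whole module trees through one code path.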
@@ -147,6 +147,10 @@
/\./ {
	yylex.pos(lval) // our pos
	lval.str = yylex.Text()
+	// sanity check... these should be the same!
+	if x, y := lval.str, interfaces.ModuleSep; x != y {
+		panic(fmt.Sprintf("DOT does not match ModuleSep (%s != %s)", x, y))
+	}
	return DOT
}
/\$/ {

@@ -419,6 +423,8 @@ package lang

import (
	"fmt"
	"strconv"

	"github.com/purpleidea/mgmt/lang/interfaces"
)

// NOTE:
124
lang/lexparse.go
124
lang/lexparse.go
@@ -39,6 +39,9 @@ const (
|
||||
// remaining characters following the name. If this is the empty string
|
||||
// then it will be ignored.
|
||||
ModuleMagicPrefix = "mgmt-"
|
||||
|
||||
	// CoreDir is the directory prefix where core bindata mcl code is added.
	CoreDir = "core/"
)

// These constants represent the different possible lexer/parser errors.
@@ -110,7 +113,8 @@ func LexParse(input io.Reader) (interfaces.Stmt, error) {
// redirects directly to LexParse. This differs because when it errors it will
// also report the corresponding file the error occurred in based on some offset
// math. The offsets are in units of file size (bytes) and not length (lines).
-// FIXME: due to an implementation difficulty, offsets are currently in length!
+// TODO: Due to an implementation difficulty, offsets are currently in length!
+// NOTE: This was used for an older deprecated form of lex/parse file combining.
func LexParseWithOffsets(input io.Reader, offsets map[uint64]string) (interfaces.Stmt, error) {
	if offsets == nil || len(offsets) == 0 {
		return LexParse(input) // special case, no named offsets...
@@ -165,7 +169,8 @@ func LexParseWithOffsets(input io.Reader, offsets map[uint64]string) (interfaces
// source files, and as a result, this will skip over files that don't have the
// correct extension. The offsets are in units of file size (bytes) and not
// length (lines).
-// FIXME: due to an implementation difficulty, offsets are currently in length!
+// TODO: Due to an implementation difficulty, offsets are currently in length!
+// NOTE: This was used for an older deprecated form of lex/parse file combining.
func DirectoryReader(fs engine.Fs, dir string) (io.Reader, map[uint64]string, error) {
	fis, err := fs.ReadDir(dir) // ([]os.FileInfo, error)
	if err != nil {
@@ -181,7 +186,7 @@ func DirectoryReader(fs engine.Fs, dir string) (io.Reader, map[uint64]string, er
			continue // skip directories
		}
		name := path.Join(dir, fi.Name()) // relative path made absolute
-		if !strings.HasSuffix(name, "."+FileNameExtension) {
+		if !strings.HasSuffix(name, interfaces.DotFileNameExtension) {
			continue
		}

@@ -231,45 +236,12 @@ func DirectoryReader(fs engine.Fs, dir string) (io.Reader, map[uint64]string, er
	return io.MultiReader(readers...), offsets, nil
}
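The offsets map described above records, for each file combined into the stream, the cumulative position where that file begins, so a global error position can be mapped back to a file name plus a file-relative position. A minimal sketch of that offset math (the `resolveOffset` helper below is hypothetical, not part of this patch):

```go
package main

import "fmt"

// resolveOffset finds which file a global position falls in: it picks the
// largest starting offset that is still <= pos, and returns that file's name
// together with the position relative to the start of that file.
func resolveOffset(offsets map[uint64]string, pos uint64) (string, uint64) {
	var bestStart uint64
	best := ""
	for start, name := range offsets {
		if start <= pos && start >= bestStart {
			bestStart, best = start, name
		}
	}
	return best, pos - bestStart
}

func main() {
	// two files combined: a.mcl occupies [0,10), b.mcl starts at 10
	offsets := map[uint64]string{0: "a.mcl", 10: "b.mcl"}
	file, rel := resolveOffset(offsets, 12)
	fmt.Println(file, rel) // b.mcl 2
}
```

This is the same lookup an error reporter would do to turn a position in the combined reader into a per-file position.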

-// ImportData is the result of parsing a string import when it has not errored.
-type ImportData struct {
-	// Name is the original input that produced this struct. It is stored
-	// here so that you can parse it once and pass this struct around
-	// without having to include a copy of the original data if needed.
-	Name string
-
-	// Alias is the name identifier that should be used for this import.
-	Alias string
-
-	// System specifies that this is a system import.
-	System bool
-
-	// Local represents if a module is either local or a remote import.
-	Local bool
-
-	// Path represents the relative path to the directory that this import
-	// points to. Since it specifies a directory, it will end with a
-	// trailing slash which makes detection more obvious for other helpers.
-	// If this points to a local import, that directory is probably not
-	// expected to contain a metadata file, and it will be a simple path
-	// addition relative to the current file this import was parsed from. If
-	// this is a remote import, then it's likely that the file will be found
-	// in a more distinct path, such as a search path that contains the full
-	// fqdn of the import.
-	// TODO: should system imports put something here?
-	Path string
-
-	// URL is the path that a `git clone` operation should use as the URL.
-	// If it is a local import, then this is the empty value.
-	URL string
-}

// ParseImportName parses an import name and returns the default namespace name
// that should be used with it. For example, if the import name was:
// "git://example.com/purpleidea/Module-Name", this might return an alias of
// "module_name". It also returns a bunch of other data about the parsed import.
// TODO: check for invalid or unwanted special characters
-func ParseImportName(name string) (*ImportData, error) {
+func ParseImportName(name string) (*interfaces.ImportData, error) {
	magicPrefix := ModuleMagicPrefix
	if name == "" {
		return nil, fmt.Errorf("empty name")
@@ -286,6 +258,12 @@ func ParseImportName(name string) (*ImportData, error) {
		return nil, fmt.Errorf("empty path")
	}
	p := u.Path
+	// catch bad paths like: git:////home/james/ (note the quad slash!)
+	// don't penalize if we have a dir with a trailing slash at the end
+	if s := path.Clean(u.Path); u.Path != s && u.Path != s+"/" {
+		// TODO: are there any cases where this is not what we want?
+		return nil, fmt.Errorf("dirty path, cleaned it's: `%s`", s)
+	}

	for strings.HasSuffix(p, "/") { // remove trailing slashes
		p = p[:len(p)-len("/")]
@@ -302,7 +280,7 @@ func ParseImportName(name string) (*ImportData, error) {
		s = s[len(magicPrefix):]
	}

-	s = strings.Replace(s, "-", "_", -1)
+	s = strings.Replace(s, "-", "_", -1) // XXX: allow underscores in IDENTIFIER
	if strings.HasPrefix(s, "_") || strings.HasSuffix(s, "_") {
		return nil, fmt.Errorf("name can't begin or end with dash or underscore")
	}
@@ -312,13 +290,16 @@ func ParseImportName(name string) (*ImportData, error) {
	// if it's an fqdn import, it should contain a metadata file

	// if there's no protocol prefix, then this must be a local path
-	local := u.Scheme == ""
-	system := local && !strings.HasSuffix(u.Path, "/")
+	isLocal := u.Scheme == ""
+	// if it has a trailing slash or .mcl extension it's not a system import
+	isSystem := isLocal && !strings.HasSuffix(u.Path, "/") && !strings.HasSuffix(u.Path, interfaces.DotFileNameExtension)
+	// is it a local file?
+	isFile := !isSystem && isLocal && strings.HasSuffix(u.Path, interfaces.DotFileNameExtension)
	xpath := u.Path // magic path
-	if system {
+	if isSystem {
		xpath = ""
	}
-	if !local {
+	if !isLocal {
		host := u.Host // host or host:port
		split := strings.Split(host, ":")
		if l := len(split); l == 1 || l == 2 {
@@ -328,16 +309,24 @@ func ParseImportName(name string) (*ImportData, error) {
		}
		xpath = path.Join(host, xpath)
	}
-	if !local && !strings.HasSuffix(xpath, "/") {
+	if !isLocal && !strings.HasSuffix(xpath, "/") {
		xpath = xpath + "/"
	}
+	// we're a git repo with a local path instead of an fqdn over http!
+	// this still counts as isLocal == false, since it's still a remote
+	if u.Host == "" && strings.HasPrefix(u.Path, "/") {
+		xpath = strings.TrimPrefix(xpath, "/") // make it a relative dir
+	}
	if strings.HasPrefix(xpath, "/") { // safety check (programming error?)
		return nil, fmt.Errorf("can't parse strange import")
	}

	// build a url to clone from if we're not local...
	// TODO: consider adding some logic that is similar to the logic in:
	// https://github.com/golang/go/blob/054640b54df68789d9df0e50575d21d9dbffe99f/src/cmd/go/internal/get/vcs.go#L972
	// so that we can more correctly figure out the correct url to clone...
	xurl := ""
-	if !local {
+	if !isLocal {
		u.Fragment = ""
		// TODO: maybe look for ?sha1=... or ?tag=... to pick a real ref
		u.RawQuery = ""
@@ -345,12 +334,45 @@ func ParseImportName(name string) (*ImportData, error) {
		xurl = u.String()
	}

-	return &ImportData{
-		Name:   name, // save the original value here
-		Alias:  alias,
-		System: system,
-		Local:  local,
-		Path:   xpath,
-		URL:    xurl,
+	// if u.Path is local file like: foo/server.mcl alias should be "server"
+	// we should trim the alias to remove the .mcl (the dir is already gone)
+	if isFile && strings.HasSuffix(alias, interfaces.DotFileNameExtension) {
+		alias = strings.TrimSuffix(alias, interfaces.DotFileNameExtension)
+	}
+
+	return &interfaces.ImportData{
+		Name:     name, // save the original value here
+		Alias:    alias,
+		IsSystem: isSystem,
+		IsLocal:  isLocal,
+		IsFile:   isFile,
+		Path:     xpath,
+		URL:      xurl,
	}, nil
}
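The alias and classification rules that ParseImportName applies can be sketched in isolation. The two helpers below are simplified, hypothetical stand-ins (they skip URL parsing and error handling, and hard-code "mgmt-" and ".mcl" instead of the ModuleMagicPrefix and DotFileNameExtension constants): deriveAlias takes the last path element, strips the magic prefix, lowercases, and maps dashes to underscores; classify applies the scheme/trailing-slash/extension rules for local, system, and file imports:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// deriveAlias sketches the default namespace name derivation: last path
// element, magic "mgmt-" prefix stripped, lowercased, dashes to underscores.
func deriveAlias(importPath string) string {
	s := path.Base(importPath)
	s = strings.TrimPrefix(s, "mgmt-")
	s = strings.ToLower(s)
	return strings.Replace(s, "-", "_", -1)
}

// classify sketches the import kind rules: no URL scheme means local; a local
// name without a trailing slash or ".mcl" extension is a system import; a
// local ".mcl" path is a file import.
func classify(scheme, p string) (isLocal, isSystem, isFile bool) {
	isLocal = scheme == ""
	isSystem = isLocal && !strings.HasSuffix(p, "/") && !strings.HasSuffix(p, ".mcl")
	isFile = !isSystem && isLocal && strings.HasSuffix(p, ".mcl")
	return
}

func main() {
	fmt.Println(deriveAlias("example.com/purpleidea/Module-Name")) // module_name
	fmt.Println(deriveAlias("example.com/purpleidea/mgmt-foo"))    // foo
	fmt.Println(classify("", "fmt"))               // true true false
	fmt.Println(classify("", "server/foo.mcl"))    // true false true
	fmt.Println(classify("git", "example.com/x/")) // false false false
}
```

These mirror the expectations encoded in the TestImportParsing0 cases below.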

+// CollectFiles collects all the files used in the AST. You will see more files
+// based on how many compiling steps have run. In general, this is useful for
+// collecting all the files needed to store in our file system for a deploy.
+func CollectFiles(ast interfaces.Stmt) ([]string, error) {
+	// collect the list of files
+	fileList := []string{}
+	fn := func(node interfaces.Node) error {
+		// redundant check for example purposes
+		stmt, ok := node.(interfaces.Stmt)
+		if !ok {
+			return nil
+		}
+		prog, ok := stmt.(*StmtProg)
+		if !ok {
+			return nil
+		}
+		// collect into global
+		fileList = append(fileList, prog.importFiles...)
+		return nil
+	}
+	if err := ast.Apply(fn); err != nil {
+		return nil, errwrap.Wrapf(err, "can't retrieve paths")
+	}
+	return fileList, nil
+}

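CollectFiles works because Apply is a visitor: it walks every node in the AST and calls the supplied function on each one, so a closure can accumulate results into a slice. A self-contained sketch of that pattern with hypothetical toy types (not mgmt's real interfaces.Node or Stmt):

```go
package main

import "fmt"

// node is a toy AST node: it owns some file names and some children.
type node struct {
	files    []string
	children []*node
}

// Apply visits the children first, then calls fn on the node itself.
func (n *node) Apply(fn func(*node) error) error {
	for _, c := range n.children {
		if err := c.Apply(fn); err != nil {
			return err
		}
	}
	return fn(n)
}

// collectFiles gathers every file seen anywhere in the tree, in visit order.
func collectFiles(root *node) ([]string, error) {
	fileList := []string{}
	fn := func(n *node) error {
		fileList = append(fileList, n.files...)
		return nil
	}
	if err := root.Apply(fn); err != nil {
		return nil, err
	}
	return fileList, nil
}

func main() {
	tree := &node{
		files:    []string{"main.mcl"},
		children: []*node{{files: []string{"a.mcl", "b.mcl"}}},
	}
	out, _ := collectFiles(tree)
	fmt.Println(out) // [a.mcl b.mcl main.mcl]
}
```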
@@ -2090,13 +2090,14 @@ func TestLexParseWithOffsets1(t *testing.T) {

func TestImportParsing0(t *testing.T) {
	type test struct { // an individual test
-		name   string
-		fail   bool
-		alias  string
-		system bool
-		local  bool
-		path   string
-		url    string
+		name     string
+		fail     bool
+		alias    string
+		isSystem bool
+		isLocal  bool
+		isFile   bool
+		path     string
+		url      string
	}
	testCases := []test{}
	testCases = append(testCases, test{ // index: 0
@@ -2108,70 +2109,70 @@ func TestImportParsing0(t *testing.T) {
		fail: true, // can't be root
	})
	testCases = append(testCases, test{
-		name:  "git://example.com/purpleidea/mgmt",
-		alias: "mgmt",
-		local: false,
-		path:  "example.com/purpleidea/mgmt/",
-		url:   "git://example.com/purpleidea/mgmt",
+		name:    "git://example.com/purpleidea/mgmt",
+		alias:   "mgmt",
+		isLocal: false,
+		path:    "example.com/purpleidea/mgmt/",
+		url:     "git://example.com/purpleidea/mgmt",
	})
	testCases = append(testCases, test{
-		name:  "git://example.com/purpleidea/mgmt/",
-		alias: "mgmt",
-		local: false,
-		path:  "example.com/purpleidea/mgmt/",
-		url:   "git://example.com/purpleidea/mgmt/",
+		name:    "git://example.com/purpleidea/mgmt/",
+		alias:   "mgmt",
+		isLocal: false,
+		path:    "example.com/purpleidea/mgmt/",
+		url:     "git://example.com/purpleidea/mgmt/",
	})
	testCases = append(testCases, test{
-		name:  "git://example.com/purpleidea/mgmt/foo/bar/",
-		alias: "bar",
-		local: false,
-		path:  "example.com/purpleidea/mgmt/foo/bar/",
+		name:    "git://example.com/purpleidea/mgmt/foo/bar/",
+		alias:   "bar",
+		isLocal: false,
+		path:    "example.com/purpleidea/mgmt/foo/bar/",
		// TODO: change this to be more clever about the clone URL
		//url: "git://example.com/purpleidea/mgmt/",
		// TODO: also consider changing `git` to `https` ?
		url: "git://example.com/purpleidea/mgmt/foo/bar/",
	})
	testCases = append(testCases, test{
-		name:  "git://example.com/purpleidea/mgmt-foo",
-		alias: "foo", // prefix is magic
-		local: false,
-		path:  "example.com/purpleidea/mgmt-foo/",
-		url:   "git://example.com/purpleidea/mgmt-foo",
+		name:    "git://example.com/purpleidea/mgmt-foo",
+		alias:   "foo", // prefix is magic
+		isLocal: false,
+		path:    "example.com/purpleidea/mgmt-foo/",
+		url:     "git://example.com/purpleidea/mgmt-foo",
	})
	testCases = append(testCases, test{
-		name:  "git://example.com/purpleidea/foo-bar",
-		alias: "foo_bar",
-		local: false,
-		path:  "example.com/purpleidea/foo-bar/",
-		url:   "git://example.com/purpleidea/foo-bar",
+		name:    "git://example.com/purpleidea/foo-bar",
+		alias:   "foo_bar",
+		isLocal: false,
+		path:    "example.com/purpleidea/foo-bar/",
+		url:     "git://example.com/purpleidea/foo-bar",
	})
	testCases = append(testCases, test{
-		name:  "git://example.com/purpleidea/FOO-bar",
-		alias: "foo_bar",
-		local: false,
-		path:  "example.com/purpleidea/FOO-bar/",
-		url:   "git://example.com/purpleidea/FOO-bar",
+		name:    "git://example.com/purpleidea/FOO-bar",
+		alias:   "foo_bar",
+		isLocal: false,
+		path:    "example.com/purpleidea/FOO-bar/",
+		url:     "git://example.com/purpleidea/FOO-bar",
	})
	testCases = append(testCases, test{
-		name:  "git://example.com/purpleidea/foo-BAR",
-		alias: "foo_bar",
-		local: false,
-		path:  "example.com/purpleidea/foo-BAR/",
-		url:   "git://example.com/purpleidea/foo-BAR",
+		name:    "git://example.com/purpleidea/foo-BAR",
+		alias:   "foo_bar",
+		isLocal: false,
+		path:    "example.com/purpleidea/foo-BAR/",
+		url:     "git://example.com/purpleidea/foo-BAR",
	})
	testCases = append(testCases, test{
-		name:  "git://example.com/purpleidea/foo-BAR-baz",
-		alias: "foo_bar_baz",
-		local: false,
-		path:  "example.com/purpleidea/foo-BAR-baz/",
-		url:   "git://example.com/purpleidea/foo-BAR-baz",
+		name:    "git://example.com/purpleidea/foo-BAR-baz",
+		alias:   "foo_bar_baz",
+		isLocal: false,
+		path:    "example.com/purpleidea/foo-BAR-baz/",
+		url:     "git://example.com/purpleidea/foo-BAR-baz",
	})
	testCases = append(testCases, test{
-		name:  "git://example.com/purpleidea/Module-Name",
-		alias: "module_name",
-		local: false,
-		path:  "example.com/purpleidea/Module-Name/",
-		url:   "git://example.com/purpleidea/Module-Name",
+		name:    "git://example.com/purpleidea/Module-Name",
+		alias:   "module_name",
+		isLocal: false,
+		path:    "example.com/purpleidea/Module-Name/",
+		url:     "git://example.com/purpleidea/Module-Name",
	})
	testCases = append(testCases, test{
		name: "git://example.com/purpleidea/foo-",
@@ -2185,74 +2186,114 @@ func TestImportParsing0(t *testing.T) {
		name:  "/var/lib/mgmt",
		alias: "mgmt",
		fail:  true, // don't allow absolute paths
-		//local: true,
+		//isLocal: true,
		//path: "/var/lib/mgmt",
	})
	testCases = append(testCases, test{
		name:  "/var/lib/mgmt/",
		alias: "mgmt",
		fail:  true, // don't allow absolute paths
-		//local: true,
+		//isLocal: true,
		//path: "/var/lib/mgmt/",
	})
	testCases = append(testCases, test{
-		name:  "git://example.com/purpleidea/Module-Name?foo=bar&baz=42",
-		alias: "module_name",
-		local: false,
-		path:  "example.com/purpleidea/Module-Name/",
-		url:   "git://example.com/purpleidea/Module-Name",
+		name:    "git://example.com/purpleidea/Module-Name?foo=bar&baz=42",
+		alias:   "module_name",
+		isLocal: false,
+		path:    "example.com/purpleidea/Module-Name/",
+		url:     "git://example.com/purpleidea/Module-Name",
	})
	testCases = append(testCases, test{
-		name:  "git://example.com/purpleidea/Module-Name/?foo=bar&baz=42",
-		alias: "module_name",
-		local: false,
-		path:  "example.com/purpleidea/Module-Name/",
-		url:   "git://example.com/purpleidea/Module-Name/",
+		name:    "git://example.com/purpleidea/Module-Name/?foo=bar&baz=42",
+		alias:   "module_name",
+		isLocal: false,
+		path:    "example.com/purpleidea/Module-Name/",
+		url:     "git://example.com/purpleidea/Module-Name/",
	})
	testCases = append(testCases, test{
-		name:  "git://example.com/purpleidea/Module-Name/?sha1=25ad05cce36d55ce1c55fd7e70a3ab74e321b66e",
-		alias: "module_name",
-		local: false,
-		path:  "example.com/purpleidea/Module-Name/",
-		url:   "git://example.com/purpleidea/Module-Name/",
+		name:    "git://example.com/purpleidea/Module-Name/?sha1=25ad05cce36d55ce1c55fd7e70a3ab74e321b66e",
+		alias:   "module_name",
+		isLocal: false,
+		path:    "example.com/purpleidea/Module-Name/",
+		url:     "git://example.com/purpleidea/Module-Name/",
		// TODO: report the query string info as an additional param
	})
	testCases = append(testCases, test{
-		name:  "git://example.com/purpleidea/Module-Name/subpath/foo",
-		alias: "foo",
-		local: false,
-		path:  "example.com/purpleidea/Module-Name/subpath/foo/",
-		url:   "git://example.com/purpleidea/Module-Name/subpath/foo",
+		name:    "git://example.com/purpleidea/Module-Name/subpath/foo",
+		alias:   "foo",
+		isLocal: false,
+		path:    "example.com/purpleidea/Module-Name/subpath/foo/",
+		url:     "git://example.com/purpleidea/Module-Name/subpath/foo",
	})
	testCases = append(testCases, test{
-		name:  "foo/",
-		alias: "foo",
-		local: true,
-		path:  "foo/",
+		name:    "foo/",
+		alias:   "foo",
+		isLocal: true,
+		path:    "foo/",
	})
	testCases = append(testCases, test{
-		name:   "foo/bar",
-		alias:  "bar",
-		system: true, // system because not a dir (no trailing slash)
-		local:  true, // not really used, but this is what we return
+		// import foo.mcl # import a file next to me
+		name:     "foo.mcl",
+		alias:    "foo",
+		isSystem: false,
+		isLocal:  true,
+		isFile:   true,
+		path:     "foo.mcl",
	})
	testCases = append(testCases, test{
-		name:   "foo/bar/baz",
-		alias:  "baz",
-		system: true, // system because not a dir (no trailing slash)
-		local:  true, // not really used, but this is what we return
+		// import server/foo.mcl # import a file in a dir next to me
+		name:     "server/foo.mcl",
+		alias:    "foo",
+		isSystem: false,
+		isLocal:  true,
+		isFile:   true,
+		path:     "server/foo.mcl",
	})
	testCases = append(testCases, test{
-		name:   "fmt",
-		alias:  "fmt",
-		system: true,
-		local:  true, // not really used, but this is what we return
+		// import a deeper file (not necessarily a good idea)
+		name:     "server/vars/blah.mcl",
+		alias:    "blah",
+		isSystem: false,
+		isLocal:  true,
+		isFile:   true,
+		path:     "server/vars/blah.mcl",
	})
	testCases = append(testCases, test{
-		name:   "blah",
-		alias:  "blah",
-		system: true, // even modules that don't exist return true here
-		local:  true,
+		name:     "foo/bar",
+		alias:    "bar",
+		isSystem: true, // system because not a dir (no trailing slash)
+		isLocal:  true, // not really used, but this is what we return
	})
+	testCases = append(testCases, test{
+		name:     "foo/bar/baz",
+		alias:    "baz",
+		isSystem: true, // system because not a dir (no trailing slash)
+		isLocal:  true, // not really used, but this is what we return
+	})
+	testCases = append(testCases, test{
+		name:     "fmt",
+		alias:    "fmt",
+		isSystem: true,
+		isLocal:  true, // not really used, but this is what we return
+	})
+	testCases = append(testCases, test{
+		name:     "blah",
+		alias:    "blah",
+		isSystem: true, // even modules that don't exist return true here
+		isLocal:  true,
+	})
+	testCases = append(testCases, test{
+		name:     "git:///home/james/code/mgmt-example1/",
+		alias:    "example1",
+		isSystem: false,
+		isLocal:  false,
+		// FIXME: do we want to have a special "local" imports dir?
+		path: "home/james/code/mgmt-example1/",
+		url:  "git:///home/james/code/mgmt-example1/",
+	})
+	testCases = append(testCases, test{
+		name: "git:////home/james/code/mgmt-example1/",
+		fail: true, // don't allow double root slash
+	})

	t.Logf("ModuleMagicPrefix: %s", ModuleMagicPrefix)
@@ -2264,7 +2305,7 @@ func TestImportParsing0(t *testing.T) {
		}
		names = append(names, tc.name)
		t.Run(fmt.Sprintf("test #%d (%s)", index, tc.name), func(t *testing.T) {
-			name, fail, alias, system, local, path, url := tc.name, tc.fail, tc.alias, tc.system, tc.local, tc.path, tc.url
+			name, fail, alias, isSystem, isLocal, isFile, path, url := tc.name, tc.fail, tc.alias, tc.isSystem, tc.isLocal, tc.isFile, tc.path, tc.url

			output, err := ParseImportName(name)
			if !fail && err != nil {
@@ -2275,6 +2316,7 @@ func TestImportParsing0(t *testing.T) {
			if fail && err == nil {
-				t.Errorf("test #%d: FAIL", index)
+				t.Errorf("test #%d: ParseImportName expected error, not nil", index)
+				t.Logf("test #%d: output: %+v", index, output)
				return
			}
			if fail { // we failed as expected, don't continue...
@@ -2288,21 +2330,26 @@ func TestImportParsing0(t *testing.T) {
				t.Logf("test #%d: alias: %s", index, alias)
				return
			}
-			if system != output.System {
-				t.Errorf("test #%d: unexpected value for: `System`", index)
+			if isSystem != output.IsSystem {
+				t.Errorf("test #%d: unexpected value for: `IsSystem`", index)
				//t.Logf("test #%d: input: %s", index, name)
-				t.Logf("test #%d: output: %+v", index, output)
-				t.Logf("test #%d: system: %t", index, system)
+				t.Logf("test #%d: output: %+v", index, output)
+				t.Logf("test #%d: isSystem: %t", index, isSystem)
				return

			}
-			if local != output.Local {
-				t.Errorf("test #%d: unexpected value for: `Local`", index)
+			if isLocal != output.IsLocal {
+				t.Errorf("test #%d: unexpected value for: `IsLocal`", index)
				//t.Logf("test #%d: input: %s", index, name)
				t.Logf("test #%d: output: %+v", index, output)
+				t.Logf("test #%d: isLocal: %t", index, isLocal)
				return
			}
+			if isFile != output.IsFile {
+				t.Errorf("test #%d: unexpected value for: `isFile`", index)
+				//t.Logf("test #%d: input: %s", index, name)
+				t.Logf("test #%d: output: %+v", index, output)
-				t.Logf("test #%d: local: %t", index, local)
+				t.Logf("test #%d: isFile: %t", index, isFile)
				return

			}
			if path != output.Path {
				t.Errorf("test #%d: unexpected value for: `Path`", index)
@@ -2310,15 +2357,23 @@ func TestImportParsing0(t *testing.T) {
				t.Logf("test #%d: output: %+v", index, output)
				t.Logf("test #%d: path: %s", index, path)
				return

			}
			if url != output.URL {
				t.Errorf("test #%d: unexpected value for: `URL`", index)
				//t.Logf("test #%d: input: %s", index, name)
				t.Logf("test #%d: output: %+v", index, output)
-				t.Logf("test #%d: url: %s", index, url)
+				t.Logf("test #%d: url: %s", index, url)
				return
			}

+			// add some additional sanity checking:
+			if strings.HasPrefix(path, "/") {
+				t.Errorf("test #%d: the path value starts with a / (it should be relative)", index)
+			}
+			if !isSystem {
+				if !strings.HasSuffix(path, "/") && !strings.HasSuffix(path, interfaces.DotFileNameExtension) {
+					t.Errorf("test #%d: the path value should be a directory or a code file", index)
+				}
+			}
		})
	}

@@ -1152,7 +1152,7 @@ dotted_identifier:
	| dotted_identifier DOT IDENTIFIER
	{
		posLast(yylex, yyDollar) // our pos
-		$$.str = $1.str + "." + $3.str
+		$$.str = $1.str + interfaces.ModuleSep + $3.str
	}
;
// there are different ways the lexer/parser might choose to represent this...
@@ -1167,7 +1167,7 @@ dotted_var_identifier:
	| VAR_IDENTIFIER DOT dotted_identifier
	{
		posLast(yylex, yyDollar) // our pos
-		$$.str = $1.str + "." + $3.str
+		$$.str = $1.str + interfaces.ModuleSep + $3.str
	}
	// eg: $ foo.bar.baz (dollar prefix + dotted identifier)
	| DOLLAR dotted_identifier

lang/structs.go
@@ -18,13 +18,16 @@
package lang // TODO: move this into a sub package of lang/$name?

import (
	"bytes"
	"fmt"
	"reflect"
+	"sort"
	"strings"

	"github.com/purpleidea/mgmt/engine"
	engineUtil "github.com/purpleidea/mgmt/engine/util"
	"github.com/purpleidea/mgmt/lang/funcs"
+	"github.com/purpleidea/mgmt/lang/funcs/bindata"
	"github.com/purpleidea/mgmt/lang/funcs/structs"
	"github.com/purpleidea/mgmt/lang/interfaces"
	"github.com/purpleidea/mgmt/lang/types"
@@ -51,6 +54,17 @@ const (
	// EdgeDepend declares an edge a <- b, such that no notification occurs.
	// This is most similar to "require" in Puppet.
	EdgeDepend = "depend"
+
+	// AllowUserDefinedPolyFunc specifies if we allow user-defined
+	// polymorphic functions or not. At the moment this is not implemented.
+	// XXX: not implemented
+	AllowUserDefinedPolyFunc = false
+
+	// RequireStrictModulePath can be set to true if you wish to ignore any
+	// of the metadata parent path searching. By default that is allowed,
+	// unless it is disabled per module with ParentPathBlock. This option is
+	// here in case we decide that the parent module searching is confusing.
+	RequireStrictModulePath = false
)

// StmtBind is a representation of an assignment, which binds a variable to an
@@ -1352,6 +1366,12 @@ func (obj *StmtIf) Output() (*interfaces.Output, error) {
// their order of definition.
type StmtProg struct {
	data *interfaces.Data
+	// XXX: should this be copied when we run Interpolate here or elsewhere?
+	scope *interfaces.Scope // store for use by imports
+
+	// TODO: should this be a map? if so, how would we sort it to loop it?
+	importProgs []*StmtProg // list of child programs after running SetScope
+	importFiles []string    // list of files seen during the SetScope import

	Prog []interfaces.Stmt
}
@@ -1367,6 +1387,8 @@ func (obj *StmtProg) Apply(fn func(interfaces.Node) error) error {
			return err
		}
	}
+
+	// might as well Apply on these too, to make file collection easier, etc
+	for _, x := range obj.importProgs {
+		if err := x.Apply(fn); err != nil {
+			return err
+		}
+	}
	return fn(obj)
}

@@ -1374,6 +1401,8 @@
// validate.
func (obj *StmtProg) Init(data *interfaces.Data) error {
	obj.data = data
+	obj.importProgs = []*StmtProg{}
+	obj.importFiles = []string{}
	for _, x := range obj.Prog {
		if err := x.Init(data); err != nil {
			return err
@@ -1395,21 +1424,497 @@ func (obj *StmtProg) Interpolate() (interfaces.Stmt, error) {
		prog = append(prog, interpolated)
	}
	return &StmtProg{
-		data: obj.data,
-		Prog: prog,
+		data:        obj.data,
+		importProgs: obj.importProgs, // TODO: do we even need this here?
+		importFiles: obj.importFiles,
+		Prog:        prog,
	}, nil
}

+// importScope is a helper function called from SetScope. If it can't find a
+// particular scope, then it can also run the downloader if it is available.
+func (obj *StmtProg) importScope(info *interfaces.ImportData, scope *interfaces.Scope) (*interfaces.Scope, error) {
+	if obj.data.Debug {
+		obj.data.Logf("import: %s", info.Name)
+	}
+	// the abs file path that we started actively running SetScope on is:
+	// obj.data.Base + obj.data.Metadata.Main
+	// but recursive imports mean this is not always the active file...
+
+	if info.IsSystem { // system imports are the exact name, eg "fmt"
+		systemScope, err := obj.importSystemScope(info.Alias)
+		if err != nil {
+			return nil, errwrap.Wrapf(err, "system import of `%s` failed", info.Alias)
+		}
+		return systemScope, nil
+	}
+
+	// graph-based recursion detection
+	// TODO: is this sufficiently unique, but not incorrectly unique?
+	// TODO: do we need to clean uvid for consistency so the compare works?
+	uvid := obj.data.Base + ";" + info.Name // unique vertex id
+	importVertex := obj.data.Imports        // parent vertex
+	if importVertex == nil {
+		return nil, fmt.Errorf("programming error: missing import vertex")
+	}
+	importGraph := importVertex.Graph // existing graph (ptr stored within)
+	nextVertex := &pgraph.SelfVertex{ // new vertex (if one doesn't already exist)
+		Name:  uvid,        // import name
+		Graph: importGraph, // store a reference to ourself
+	}
+	for _, v := range importGraph.VerticesSorted() { // search for one first
+		gv, ok := v.(*pgraph.SelfVertex)
+		if !ok { // someone misused the vertex
+			return nil, fmt.Errorf("programming error: unexpected vertex type")
+		}
+		if gv.Name == uvid {
+			nextVertex = gv // found the same name (use this instead!)
+			// this doesn't necessarily mean a cycle. a dag is okay
+			break
+		}
+	}
+
+	// add an edge
+	edge := &pgraph.SimpleEdge{Name: ""} // TODO: name me?
+	importGraph.AddEdge(importVertex, nextVertex, edge)
+	if _, err := importGraph.TopologicalSort(); err != nil {
+		// TODO: print the cycle in a prettier way (with file names?)
+		obj.data.Logf("import: not a dag:\n%s", importGraph.Sprint())
+		return nil, errwrap.Wrapf(err, "recursive import of: `%s`", info.Name)
+	}
+
+	if info.IsLocal {
+		// append the relative addition of where the running code is, on
+		// to the base path that the metadata file (data) is relative to
+		// if the main code file has no additional directory, then it is
+		// okay, because Dirname collapses down to the empty string here
+		importFilePath := obj.data.Base + util.Dirname(obj.data.Metadata.Main) + info.Path
+		if obj.data.Debug {
+			obj.data.Logf("import: file: %s", importFilePath)
+		}
+		// don't do this collection here, it has moved elsewhere...
+		//obj.importFiles = append(obj.importFiles, importFilePath) // save for CollectFiles
+
+		localScope, err := obj.importScopeWithInputs(importFilePath, scope, nextVertex)
+		if err != nil {
+			return nil, errwrap.Wrapf(err, "local import of `%s` failed", info.Name)
+		}
+		return localScope, nil
+	}
+
+	// Now, info.IsLocal is false... we're dealing with a remote import!
+
+	// This takes the current metadata as input so it can use the Path
+	// directory to search upwards if we wanted to look in parent paths.
+	// Since this is an fqdn import, it must contain a metadata file...
+	modulesPath, err := interfaces.FindModulesPath(obj.data.Metadata, obj.data.Base, obj.data.Modules)
+	if err != nil {
+		return nil, errwrap.Wrapf(err, "module path error")
+	}
+	importFilePath := modulesPath + info.Path + interfaces.MetadataFilename
+
+	if !RequireStrictModulePath { // look upwards
+		modulesPathList, err := interfaces.FindModulesPathList(obj.data.Metadata, obj.data.Base, obj.data.Modules)
+		if err != nil {
+			return nil, errwrap.Wrapf(err, "module path list error")
+		}
+		for _, mp := range modulesPathList { // first one to find a file
+			x := mp + info.Path + interfaces.MetadataFilename
+			if _, err := obj.data.Fs.Stat(x); err == nil {
+				// found a valid location, so keep using it!
+				modulesPath = mp
+				importFilePath = x
+				break
+			}
+		}
+		// If we get here, and we didn't find anything, then we use the
+		// originally decided, most "precise" location... The reason we
+		// do that is if the sysadmin wishes to require all the modules
+		// to come from their top-level (or higher-level) directory, it
+		// can be done by adding the code there, so that it is found in
+		// the above upwards search. Otherwise, we just do what the mod
+		// asked for and use the path/ directory if it wants its own...
+	}
+	if obj.data.Debug {
+		obj.data.Logf("import: modules path: %s", modulesPath)
+		obj.data.Logf("import: file: %s", importFilePath)
+	}
+	// don't do this collection here, it has moved elsewhere...
+	//obj.importFiles = append(obj.importFiles, importFilePath) // save for CollectFiles
+
+	// invoke the download when a path is missing, if the downloader exists
+	// we need to invoke the recursive checker before we run this download!
+	// this should cleverly deal with skipping modules that are up-to-date!
+	if obj.data.Downloader != nil {
+		// run downloader stuff first
+		if err := obj.data.Downloader.Get(info, modulesPath); err != nil {
+			return nil, errwrap.Wrapf(err, "download of `%s` failed", info.Name)
+		}
+	}
+
+	// takes the full absolute path to the metadata.yaml file
+	remoteScope, err := obj.importScopeWithInputs(importFilePath, scope, nextVertex)
+	if err != nil {
+		return nil, errwrap.Wrapf(err, "remote import of `%s` failed", info.Name)
+	}
+	return remoteScope, nil
+}
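The recursion detection above adds one vertex per unique (base; import name) pair and re-runs a topological sort after each new edge, so a repeated import that forms a DAG is fine, while a cycle errors out. A standalone sketch of the same idea, using a plain DFS three-color walk instead of pgraph's TopologicalSort (the `detectCycle` helper is hypothetical, not part of the patch):

```go
package main

import "fmt"

// detectCycle walks the import graph and errors if any import chain leads
// back to a node that is still on the current DFS stack (a cycle). Nodes that
// are merely reachable twice (a dag) are fine.
func detectCycle(edges map[string][]string) error {
	const (
		white = 0 // unvisited
		grey  = 1 // on the current DFS stack
		black = 2 // fully explored
	)
	color := map[string]int{}
	var visit func(n string) error
	visit = func(n string) error {
		color[n] = grey
		for _, m := range edges[n] {
			switch color[m] {
			case grey: // back-edge: a recursive import!
				return fmt.Errorf("recursive import of: `%s`", m)
			case white:
				if err := visit(m); err != nil {
					return err
				}
			}
		}
		color[n] = black
		return nil
	}
	for n := range edges {
		if color[n] == white {
			if err := visit(n); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	// diamond-shaped dag: both main and a import b. this is okay.
	dag := map[string][]string{"main": {"a", "b"}, "a": {"b"}}
	fmt.Println(detectCycle(dag)) // <nil>
	// x imports y which imports x: a cycle, so this errors.
	loop := map[string][]string{"x": {"y"}, "y": {"x"}}
	fmt.Println(detectCycle(loop) != nil) // true
}
```

The patch uses a topological sort for the same check because it already keeps the import graph around for other purposes.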
|
||||
|
||||
// importSystemScope takes the name of a built-in system scope (eg: "fmt") and
// returns the scope struct for that built-in. This function is slightly less
// trivial than expected, because the scope is built from both native mcl code
// and golang code as well. The native mcl code is compiled in as bindata.
// TODO: can we memoize?
func (obj *StmtProg) importSystemScope(name string) (*interfaces.Scope, error) {
// this basically loops through the registeredFuncs and includes
// everything that starts with the name prefix and a period, and then
// lexes and parses the compiled in code, and adds that on top of the
// scope. we error if there's a duplicate!

isEmpty := true // assume empty (which should cause an error)

funcs := funcs.LookupPrefix(name)
if len(funcs) > 0 {
isEmpty = false
}

// initial scope, built from core golang code
scope := &interfaces.Scope{
// TODO: we could add core API's for variables and classes too!
//Variables: make(map[string]interfaces.Expr),
Functions: funcs, // map[string]func() interfaces.Func
//Classes: make(map[string]interfaces.Stmt),
}

// TODO: the obj.data.Fs filesystem handle is unused for now, but might
// be useful if we ever ship all the specific versions of system modules
// to the remote machines as well, and we want to load off of it...

// now add any compiled-in mcl code
paths := bindata.AssetNames()
// results are not sorted by default (ascertained by reading the code!)
sort.Strings(paths)
newScope := interfaces.EmptyScope()
// XXX: consider using a virtual `append *` statement to combine these instead.
for _, p := range paths {
// we only want code from this prefix
prefix := CoreDir + name + "/"
if !strings.HasPrefix(p, prefix) {
continue
}
// we only want code from this directory level, so skip children
// heuristically, a child mcl file will contain a path separator
if strings.Contains(p[len(prefix):], "/") {
continue
}

b, err := bindata.Asset(p)
if err != nil {
return nil, errwrap.Wrapf(err, "can't read asset: `%s`", p)
}

// to combine multiple *.mcl files from the same directory, we
// lex and parse each one individually, each of which produces a
// scope struct. we then merge the scope structs, while making
// sure we don't overwrite any values. (this logic is only valid
// for modules, as top-level code combines the output values
// instead.)

reader := bytes.NewReader(b) // wrap the byte stream

// now run the lexer/parser to do the import
ast, err := LexParse(reader)
if err != nil {
return nil, errwrap.Wrapf(err, "could not generate AST from import `%s`", name)
}
if obj.data.Debug {
obj.data.Logf("behold, the AST: %+v", ast)
}

obj.data.Logf("init...")
// init and validate the structure of the AST
// some of this might happen *after* interpolate in SetScope or Unify...
if err := ast.Init(obj.data); err != nil {
return nil, errwrap.Wrapf(err, "could not init and validate AST")
}

obj.data.Logf("interpolating...")
// interpolate strings and other expansionable nodes in AST
interpolated, err := ast.Interpolate()
if err != nil {
return nil, errwrap.Wrapf(err, "could not interpolate AST from import `%s`", name)
}

obj.data.Logf("building scope...")
// propagate the scope down through the AST...
// most importantly, we ensure that the child imports will run!
// we pass in *our* parent scope, which will include the globals
if err := interpolated.SetScope(scope); err != nil {
return nil, errwrap.Wrapf(err, "could not set scope from import `%s`", name)
}

// is the root of our ast a program?
prog, ok := interpolated.(*StmtProg)
if !ok {
return nil, fmt.Errorf("import `%s` did not return a program", name)
}

if prog.scope == nil { // pull out the result
continue // nothing to do here, continue with the next!
}

// check for unwanted top-level elements in this module/scope
// XXX: add a test case to test for this in our core modules!
if err := prog.IsModuleUnsafe(); err != nil {
return nil, errwrap.Wrapf(err, "module contains unused statements")
}

if !prog.scope.IsEmpty() {
isEmpty = false // this module/scope isn't empty
}

// save a reference to the prog for future usage in Unify/Graph/Etc...
// XXX: we don't need to do this if we can combine with Append!
obj.importProgs = append(obj.importProgs, prog)

// attempt to merge
// XXX: test for duplicate var/func/class elements in a test!
if err := newScope.Merge(prog.scope); err != nil { // errors if something was overwritten
return nil, errwrap.Wrapf(err, "duplicate scope element(s) in module found")
}
}

if err := scope.Merge(newScope); err != nil { // errors if something was overwritten
return nil, errwrap.Wrapf(err, "duplicate scope element(s) found")
}

// when importing a system scope, we only error if there are zero class,
// function, or variable statements in the scope. We error in this case,
// because it is non-sensical to import such a scope.
if isEmpty {
return nil, fmt.Errorf("could not find any non-empty scope named: %s", name)
}

return scope, nil
}
// importScopeWithInputs returns a local or remote scope from an inputs string.
// The inputs string is the common frontend for a lot of our parsing decisions.
func (obj *StmtProg) importScopeWithInputs(s string, scope *interfaces.Scope, parentVertex *pgraph.SelfVertex) (*interfaces.Scope, error) {
output, err := parseInput(s, obj.data.Fs)
if err != nil {
return nil, errwrap.Wrapf(err, "could not activate an input parser")
}

// TODO: rm this old, and incorrect, linear file duplicate checking...
// recursion detection (i guess following the imports has to be a dag!)
// run recursion detection by checking for duplicates in the seen files
// TODO: do the paths need to be cleaned for "../", etc before compare?
//for _, name := range obj.data.Files { // existing seen files
//	if util.StrInList(name, output.Files) {
//		return nil, fmt.Errorf("recursive import of: `%s`", name)
//	}
//}

reader := bytes.NewReader(output.Main)

// nested logger
logf := func(format string, v ...interface{}) {
obj.data.Logf("import: "+format, v...)
}

// build new list of files
files := []string{}
files = append(files, output.Files...)
files = append(files, obj.data.Files...)

// store a reference to the parent metadata
metadata := output.Metadata
metadata.Metadata = obj.data.Metadata

// now run the lexer/parser to do the import
ast, err := LexParse(reader)
if err != nil {
return nil, errwrap.Wrapf(err, "could not generate AST from import")
}
if obj.data.Debug {
logf("behold, the AST: %+v", ast)
}

logf("init...")
// init and validate the structure of the AST
data := &interfaces.Data{
Fs: obj.data.Fs,
Base: output.Base, // new base dir (absolute path)
Files: files,
Imports: parentVertex, // the parent vertex that imported me
Metadata: metadata,
Modules: obj.data.Modules,
Downloader: obj.data.Downloader,
//World: obj.data.World,

//Prefix: obj.Prefix, // TODO: add a path on?
Debug: obj.data.Debug,
Logf: logf,
}
// some of this might happen *after* interpolate in SetScope or Unify...
if err := ast.Init(data); err != nil {
return nil, errwrap.Wrapf(err, "could not init and validate AST")
}

logf("interpolating...")
// interpolate strings and other expansionable nodes in AST
interpolated, err := ast.Interpolate()
if err != nil {
return nil, errwrap.Wrapf(err, "could not interpolate AST from import")
}

logf("building scope...")
// propagate the scope down through the AST...
// most importantly, we ensure that the child imports will run!
// we pass in *our* parent scope, which will include the globals
if err := interpolated.SetScope(scope); err != nil {
return nil, errwrap.Wrapf(err, "could not set scope from import")
}

// we DON'T do this here anymore, since Apply() digs into the children!
//// this nested ast needs to pass the data up into the parent!
//fileList, err := CollectFiles(interpolated)
//if err != nil {
//	return nil, errwrap.Wrapf(err, "could not collect files")
//}
//obj.importFiles = append(obj.importFiles, fileList...) // save for CollectFiles

// is the root of our ast a program?
prog, ok := interpolated.(*StmtProg)
if !ok {
return nil, fmt.Errorf("import did not return a program")
}

// check for unwanted top-level elements in this module/scope
// XXX: add a test case to test for this in our core modules!
if err := prog.IsModuleUnsafe(); err != nil {
return nil, errwrap.Wrapf(err, "module contains unused statements")
}

// when importing a system scope, we only error if there are zero class,
// function, or variable statements in the scope. We error in this case,
// because it is non-sensical to import such a scope.
if prog.scope.IsEmpty() {
return nil, fmt.Errorf("could not find any non-empty scope")
}

// save a reference to the prog for future usage in Unify/Graph/Etc...
obj.importProgs = append(obj.importProgs, prog)

// collecting these here is more elegant (and possibly more efficient!)
obj.importFiles = append(obj.importFiles, output.Files...) // save for CollectFiles

return prog.scope, nil
}
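The recursion detection that the comments above call for ("following the imports has to be a dag!") amounts to a depth-first walk that rejects any file already on the current import chain. A simplified sketch under that assumption; the real code tracks vertices in a pgraph rather than a plain map.

```go
package main

import "fmt"

// hasImportCycle reports whether following imports from start ever revisits a
// file on the current chain, i.e. whether the import graph fails to be a DAG.
func hasImportCycle(imports map[string][]string, start string) bool {
	onChain := map[string]bool{} // files on the current import chain
	var visit func(string) bool
	visit = func(file string) bool {
		if onChain[file] {
			return true // recursive import detected!
		}
		onChain[file] = true
		for _, imp := range imports[file] {
			if visit(imp) {
				return true
			}
		}
		delete(onChain, file) // backtrack; siblings may share imports
		return false
	}
	return visit(start)
}

func main() {
	ok := map[string][]string{"main.mcl": {"a.mcl"}, "a.mcl": {"b.mcl"}}
	bad := map[string][]string{"main.mcl": {"a.mcl"}, "a.mcl": {"main.mcl"}}
	fmt.Println(hasImportCycle(ok, "main.mcl"), hasImportCycle(bad, "main.mcl"))
}
```

Note the backtracking delete: two files may legitimately import the same third file, so only membership on the current chain (not "seen anywhere") signals recursion, which is exactly why the old linear file-duplicate check above is flagged as incorrect.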
// SetScope propagates the scope into its list of statements. It does so
-// cleverly by first collecting all bind statements and adding those into the
-// scope after checking for any collisions. Finally it pushes the new scope
-// downwards to all child statements.
+// cleverly by first collecting all bind and func statements and adding those
+// into the scope after checking for any collisions. Finally it pushes the new
+// scope downwards to all child statements. If we support user defined function
+// polymorphism via multiple function definition, then these are built together
+// here. This SetScope is the one which follows the import statements. If it
+// can't follow one (perhaps it wasn't downloaded yet, and is missing) then it
+// leaves some information about these missing imports in the AST and errors, so
+// that a subsequent AST traversal (usually via Apply) can collect this detailed
+// information to be used by the downloader.
func (obj *StmtProg) SetScope(scope *interfaces.Scope) error {
newScope := scope.Copy()

binds := make(map[string]struct{}) // bind existence in this scope
// start by looking for any `import` statements to pull into the scope!
// this will run child lexing/parsing, interpolation, and scope setting
imports := make(map[string]struct{})
aliases := make(map[string]struct{})

// keep track of new imports, to ensure they don't overwrite each other!
// this is different from scope shadowing which is allowed in new scopes
newVariables := make(map[string]string)
newFunctions := make(map[string]string)
newClasses := make(map[string]string)
for _, x := range obj.Prog {
imp, ok := x.(*StmtImport)
if !ok {
continue
}
// check for duplicates *in this scope*
if _, exists := imports[imp.Name]; exists {
return fmt.Errorf("import `%s` already exists in this scope", imp.Name)
}

result, err := ParseImportName(imp.Name)
if err != nil {
return errwrap.Wrapf(err, "import `%s` is not valid", imp.Name)
}
alias := result.Alias // this is what we normally call the import

if imp.Alias != "" { // this is what the user decided as the name
alias = imp.Alias // use alias if specified
}
if _, exists := aliases[alias]; exists {
return fmt.Errorf("import alias `%s` already exists in this scope", alias)
}

// run the scope importer...
importedScope, err := obj.importScope(result, scope)
if err != nil {
return errwrap.Wrapf(err, "import scope `%s` failed", imp.Name)
}

// read from stored scope which was previously saved in SetScope
// add to scope, (overwriting, aka shadowing is ok)
// rename scope values, adding the alias prefix
// check that we don't overwrite a new value from another import
// TODO: do this in a deterministic (sorted) order
for name, x := range importedScope.Variables {
newName := alias + interfaces.ModuleSep + name
if alias == "*" {
newName = name
}
if previous, exists := newVariables[newName]; exists {
// don't overwrite in same scope
return fmt.Errorf("can't squash variable `%s` from `%s` by import of `%s`", newName, previous, imp.Name)
}
newVariables[newName] = imp.Name
newScope.Variables[newName] = x // merge
}
for name, x := range importedScope.Functions {
newName := alias + interfaces.ModuleSep + name
if alias == "*" {
newName = name
}
if previous, exists := newFunctions[newName]; exists {
// don't overwrite in same scope
return fmt.Errorf("can't squash function `%s` from `%s` by import of `%s`", newName, previous, imp.Name)
}
newFunctions[newName] = imp.Name
newScope.Functions[newName] = x
}
for name, x := range importedScope.Classes {
newName := alias + interfaces.ModuleSep + name
if alias == "*" {
newName = name
}
if previous, exists := newClasses[newName]; exists {
// don't overwrite in same scope
return fmt.Errorf("can't squash class `%s` from `%s` by import of `%s`", newName, previous, imp.Name)
}
newClasses[newName] = imp.Name
newScope.Classes[newName] = x
}

// everything has been merged, move on to next import...
imports[imp.Name] = struct{}{} // mark as found in scope
aliases[alias] = struct{}{}
}
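The three merge loops above share one pattern: prefix each imported name with the alias plus the module separator (or leave it bare for a `*` wildcard import), and refuse to squash a name that an earlier import in the same scope already claimed. A sketch of that rule; the string values and the "." separator are illustrative stand-ins for the real Expr/Stmt maps and interfaces.ModuleSep.

```go
package main

import "fmt"

// mergeWithAlias copies imported names into dst under their aliased names,
// recording each claim so a later import can't overwrite it. Shadowing the
// parent scope is fine; squashing a sibling import is an error.
func mergeWithAlias(dst, imported, claimed map[string]string, alias, modName string) error {
	const modSep = "." // stand-in for interfaces.ModuleSep
	for name, v := range imported {
		newName := alias + modSep + name
		if alias == "*" {
			newName = name // wildcard import: merge without a prefix
		}
		if previous, exists := claimed[newName]; exists {
			// don't overwrite in same scope
			return fmt.Errorf("can't squash `%s` from `%s` by import of `%s`", newName, previous, modName)
		}
		claimed[newName] = modName
		dst[newName] = v // merge
	}
	return nil
}

func main() {
	dst := map[string]string{}
	claimed := map[string]string{}
	_ = mergeWithAlias(dst, map[string]string{"printf": "fn1"}, claimed, "fmt", "fmt")
	// a wildcard import that collides with fmt.printf must fail:
	err := mergeWithAlias(dst, map[string]string{"fmt.printf": "fn2"}, claimed, "*", "other")
	fmt.Println(dst["fmt.printf"], err != nil)
}
```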
// collect all the bind statements in the first pass
// this allows them to appear out of order in this scope
binds := make(map[string]struct{}) // bind existence in this scope
for _, x := range obj.Prog {
bind, ok := x.(*StmtBind)
if !ok {
@@ -1425,6 +1930,44 @@ func (obj *StmtProg) SetScope(scope *interfaces.Scope) error {
newScope.Variables[bind.Ident] = bind.Value
}

// now collect all the functions, and group by name (if polyfunc is ok)
funcs := make(map[string][]*StmtFunc)
for _, x := range obj.Prog {
fn, ok := x.(*StmtFunc)
if !ok {
continue
}

_, exists := funcs[fn.Name]
if !exists {
funcs[fn.Name] = []*StmtFunc{} // initialize
}

// check for duplicates *in this scope*
if exists && !AllowUserDefinedPolyFunc {
return fmt.Errorf("func `%s` already exists in this scope", fn.Name)
}

// collect funcs (if multiple, this is a polyfunc)
funcs[fn.Name] = append(funcs[fn.Name], fn)
}

for name, fnList := range funcs {
// add to scope, (overwriting, aka shadowing is ok)
if len(fnList) == 1 {
fn := fnList[0].Func // local reference to avoid changing it in the loop...
f, err := fn.Func()
if err != nil {
return errwrap.Wrapf(err, "could not build func from: %s", fnList[0].Name)
}
newScope.Functions[name] = func() interfaces.Func { return f }
continue
}

// build polyfunc's
// XXX: not implemented
}
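The function-collection pass above can be reduced to its grouping rule: a repeated name is an error unless user-defined polymorphic functions are allowed, in which case the definitions are collected together to later form one polyfunc. A sketch with plain strings standing in for the *StmtFunc bodies.

```go
package main

import "fmt"

// groupFuncs groups function definitions by name. Each def is a (name, body)
// pair; duplicates are rejected unless allowPoly is set.
func groupFuncs(defs [][2]string, allowPoly bool) (map[string][]string, error) {
	funcs := make(map[string][]string)
	for _, d := range defs {
		name, body := d[0], d[1]
		_, exists := funcs[name]
		if !exists {
			funcs[name] = []string{} // initialize
		}
		// check for duplicates *in this scope*
		if exists && !allowPoly {
			return nil, fmt.Errorf("func `%s` already exists in this scope", name)
		}
		// collect funcs (if multiple, this is a polyfunc)
		funcs[name] = append(funcs[name], body)
	}
	return funcs, nil
}

func main() {
	_, err := groupFuncs([][2]string{{"f", "body1"}, {"f", "body2"}}, false)
	grouped, _ := groupFuncs([][2]string{{"f", "body1"}, {"f", "body2"}}, true)
	fmt.Println(err != nil, len(grouped["f"]))
}
```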
// now collect any classes
// TODO: if we ever allow poly classes, then group in lists by name
classes := make(map[string]struct{})
@@ -1443,6 +1986,8 @@ func (obj *StmtProg) SetScope(scope *interfaces.Scope) error {
newScope.Classes[class.Name] = class
}

obj.scope = newScope // save a reference in case we're read by an import

// now set the child scopes (even on bind...)
for _, x := range obj.Prog {
// skip over *StmtClass here (essential for recursive classes)
@@ -1478,6 +2023,15 @@ func (obj *StmtProg) Unify() ([]interfaces.Invariant, error) {
invariants = append(invariants, invars...)
}

// add invariants from SetScope's imported child programs
for _, x := range obj.importProgs {
invars, err := x.Unify()
if err != nil {
return nil, err
}
invariants = append(invariants, invars...)
}

return invariants, nil
}

@@ -1507,6 +2061,15 @@ func (obj *StmtProg) Graph() (*pgraph.Graph, error) {
graph.AddGraph(g)
}

// add graphs from SetScope's imported child programs
for _, x := range obj.importProgs {
g, err := x.Graph()
if err != nil {
return nil, err
}
graph.AddGraph(g)
}

return graph, nil
}

@@ -1536,6 +2099,8 @@ func (obj *StmtProg) Output() (*interfaces.Output, error) {
}
}

// nothing to add from SetScope's imported child programs

return &interfaces.Output{
Resources: resources,
Edges: edges,
@@ -3428,7 +3993,10 @@ type ExprStructField struct {
}

// ExprFunc is a representation of a function value. This is not a function
-// call, that is represented by ExprCall.
+// call, that is represented by ExprCall. This is what we build when we have a
+// lambda that we want to express, or the contents of a StmtFunc that needs a
+// function body (this ExprFunc) as well. This is used when the user defines an
+// inline function in mcl code somewhere.
+// XXX: this is currently not fully implemented, and parts may be incorrect.
type ExprFunc struct {
Args []*Arg
@@ -3622,15 +4190,12 @@ func (obj *ExprCall) buildType() (*types.Type, error) {
// this function execution.
// XXX: review this function logic please
func (obj *ExprCall) buildFunc() (interfaces.Func, error) {
-// TODO: if we have locally defined functions that can exist in scope,
-// then perhaps we should do a lookup here before we use the built-in.
-//fn, exists := obj.scope.Functions[obj.Name] // look for a local function
-// Remember that a local function might have Invariants it needs to add!

-fn, err := funcs.Lookup(obj.Name) // lookup the function by name
-if err != nil {
-return nil, errwrap.Wrapf(err, "func `%s` could not be found", obj.Name)
+// lookup function from scope
+f, exists := obj.scope.Functions[obj.Name]
+if !exists {
+return nil, fmt.Errorf("func `%s` does not exist in this scope", obj.Name)
}
+fn := f() // build

polyFn, ok := fn.(interfaces.PolyFunc) // is it statically polymorphic?
if !ok {
@@ -3711,9 +4276,14 @@ func (obj *ExprCall) SetType(typ *types.Type) error {
// Type returns the type of this expression, which is the return type of the
// function call.
func (obj *ExprCall) Type() (*types.Type, error) {
-fn, err := funcs.Lookup(obj.Name) // lookup the function by name
+f, exists := obj.scope.Functions[obj.Name]
+if !exists {
+return nil, fmt.Errorf("func `%s` does not exist in this scope", obj.Name)
+}
+fn := f() // build

_, isPoly := fn.(interfaces.PolyFunc) // is it statically polymorphic?
-if err == nil && obj.typ == nil && !isPoly {
+if obj.typ == nil && !isPoly {
if info := fn.Info(); info != nil {
if sig := info.Sig; sig != nil {
if typ := sig.Out; typ != nil && !typ.HasVariant() {
@@ -23,6 +23,7 @@ import (
"fmt"
"testing"

"github.com/purpleidea/mgmt/lang/funcs"
"github.com/purpleidea/mgmt/lang/interfaces"
"github.com/purpleidea/mgmt/lang/types"
"github.com/purpleidea/mgmt/lang/unification"
@@ -35,6 +36,7 @@ func TestUnification1(t *testing.T) {
ast interfaces.Stmt // raw AST
fail bool
expect map[interfaces.Expr]*types.Type
experr error // expected error if fail == true (nil ignores it)
}
testCases := []test{}

@@ -511,6 +513,136 @@ func TestUnification1(t *testing.T) {
fail: true,
})
}
// XXX: add these tests when we fix the bug!
//{
//	//test "t1" {
//	//	import "fmt"
//	//	stringptr => fmt.printf("hello %s and %s", "one"), # bad
//	//}
//	expr := &ExprCall{
//		Name: "fmt.printf",
//		Args: []interfaces.Expr{
//			&ExprStr{
//				V: "hello %s and %s",
//			},
//			&ExprStr{
//				V: "one",
//			},
//		},
//	}
//	stmt := &StmtProg{
//		Prog: []interfaces.Stmt{
//			&StmtImport{
//				Name: "fmt",
//			},
//			&StmtRes{
//				Kind: "test",
//				Name: &ExprStr{V: "t1"},
//				Contents: []StmtResContents{
//					&StmtResField{
//						Field: "stringptr",
//						Value: expr,
//					},
//				},
//			},
//		},
//	}
//	testCases = append(testCases, test{
//		name: "function, missing arg for printf",
//		ast:  stmt,
//		fail: true,
//	})
//}
//{
//	//test "t1" {
//	//	import "fmt"
//	//	stringptr => fmt.printf("hello %s and %s", "one", "two", "three"), # bad
//	//}
//	expr := &ExprCall{
//		Name: "fmt.printf",
//		Args: []interfaces.Expr{
//			&ExprStr{
//				V: "hello %s and %s",
//			},
//			&ExprStr{
//				V: "one",
//			},
//			&ExprStr{
//				V: "two",
//			},
//			&ExprStr{
//				V: "three",
//			},
//		},
//	}
//	stmt := &StmtProg{
//		Prog: []interfaces.Stmt{
//			&StmtImport{
//				Name: "fmt",
//			},
//			&StmtRes{
//				Kind: "test",
//				Name: &ExprStr{V: "t1"},
//				Contents: []StmtResContents{
//					&StmtResField{
//						Field: "stringptr",
//						Value: expr,
//					},
//				},
//			},
//		},
//	}
//	testCases = append(testCases, test{
//		name: "function, extra arg for printf",
//		ast:  stmt,
//		fail: true,
//	})
//}
{
//test "t1" {
//	import "fmt"
//	stringptr => fmt.printf("hello %s and %s", "one", "two"),
//}
expr := &ExprCall{
Name: "fmt.printf",
Args: []interfaces.Expr{
&ExprStr{
V: "hello %s and %s",
},
&ExprStr{
V: "one",
},
&ExprStr{
V: "two",
},
},
}
stmt := &StmtProg{
Prog: []interfaces.Stmt{
&StmtImport{
Name: "fmt",
},
&StmtRes{
Kind: "test",
Name: &ExprStr{V: "t1"},
Contents: []StmtResContents{
&StmtResField{
Field: "stringptr",
Value: expr,
},
},
},
},
}
testCases = append(testCases, test{
name: "function, regular printf unification",
ast:  stmt,
fail: false,
expect: map[interfaces.Expr]*types.Type{
expr: types.NewType("str"),
},
})
}
names := []string{}
for index, tc := range testCases { // run all the tests
@@ -524,7 +656,7 @@ func TestUnification1(t *testing.T) {
}
names = append(names, tc.name)
t.Run(fmt.Sprintf("test #%d (%s)", index, tc.name), func(t *testing.T) {
-ast, fail, expect := tc.ast, tc.fail, tc.expect
+ast, fail, expect, experr := tc.ast, tc.fail, tc.expect, tc.experr

//str := strings.NewReader(code)
//ast, err := LexParse(str)
@@ -535,6 +667,19 @@ func TestUnification1(t *testing.T) {
// TODO: print out the AST's so that we can see the types
t.Logf("\n\ntest #%d: AST (before): %+v\n", index, ast)

data := &interfaces.Data{
Debug: true,
Logf: func(format string, v ...interface{}) {
t.Logf(fmt.Sprintf("test #%d", index)+": ast: "+format, v...)
},
}
// some of this might happen *after* interpolate in SetScope or Unify...
if err := ast.Init(data); err != nil {
t.Errorf("test #%d: FAIL", index)
t.Errorf("test #%d: could not init and validate AST: %+v", index, err)
return
}

// skip interpolation in this test so that the node pointers
// aren't changed and so we can compare directly to expected
//astInterpolated, err := ast.Interpolate() // interpolate strings in ast
@@ -548,7 +693,10 @@ func TestUnification1(t *testing.T) {
scope := &interfaces.Scope{
Variables: map[string]interfaces.Expr{
"purpleidea": &ExprStr{V: "hello world!"}, // james says hi
//"hostname": &ExprStr{V: obj.Hostname},
},
// all the built-in top-level, core functions enter here...
Functions: funcs.LookupPrefix(""),
}
// propagate the scope down through the AST...
if err := ast.SetScope(scope); err != nil {
@@ -576,6 +724,12 @@ func TestUnification1(t *testing.T) {
t.Errorf("test #%d: unification passed, expected fail", index)
return
}
if fail && experr != nil && err != experr { // test for specific error!
t.Errorf("test #%d: FAIL", index)
t.Errorf("test #%d: expected fail, got wrong error", index)
t.Errorf("test #%d: got error: %+v", index, err)
t.Errorf("test #%d: exp error: %+v", index, experr)
}

if expect == nil { // test done early
return
@@ -592,6 +746,8 @@ func TestUnification1(t *testing.T) {

if err := typ.Cmp(exptyp); err != nil {
t.Errorf("test #%d: type cmp failed with: %+v", index, err)
t.Logf("test #%d: got: %+v", index, typ)
t.Logf("test #%d: exp: %+v", index, exptyp)
failed = true
break
}
@@ -23,7 +23,6 @@ import (
"strings"
"sync"

-"github.com/purpleidea/mgmt/engine"
"github.com/purpleidea/mgmt/gapi"
"github.com/purpleidea/mgmt/lang"
"github.com/purpleidea/mgmt/pgraph"
@@ -62,17 +61,43 @@ type GAPI struct {
puppetGraphReady bool // flag to indicate that a new graph from puppet is ready
graphFlagMutex *sync.Mutex

-data gapi.Data
+data *gapi.Data
initialized bool
closeChan chan struct{}
wg sync.WaitGroup // sync group for tunnel go routines
}

+// CliFlags returns a list of flags used by this deploy subcommand.
+// It consists of all flags accepted by lang and puppet mode,
+// with a respective "lp-" prefix.
+func (obj *GAPI) CliFlags(command string) []cli.Flag {
+langFlags := (&lang.GAPI{}).CliFlags(command)
+puppetFlags := (&puppet.GAPI{}).CliFlags(command)
+
+var childFlags []cli.Flag
+for _, flag := range append(langFlags, puppetFlags...) {
+childFlags = append(childFlags, &cli.StringFlag{
+Name: FlagPrefix + strings.Split(flag.GetName(), ",")[0],
+Value: "",
+Usage: fmt.Sprintf("equivalent for '%s' when using the lang/puppet entrypoint", flag.GetName()),
+})
+}
+
+return childFlags
+}
|
||||
|
||||
// Cli takes a cli.Context, and returns our GAPI if activated. All arguments
|
||||
// should take the prefix of the registered name. On activation, if there are
|
||||
// any validation problems, you should return an error. If this was not
|
||||
// activated, then you should return a nil GAPI and a nil error.
|
||||
func (obj *GAPI) Cli(c *cli.Context, fs engine.Fs) (*gapi.Deploy, error) {
|
||||
func (obj *GAPI) Cli(cliInfo *gapi.CliInfo) (*gapi.Deploy, error) {
|
||||
c := cliInfo.CliContext
|
||||
fs := cliInfo.Fs // copy files from local filesystem *into* this fs...
|
||||
debug := cliInfo.Debug
|
||||
logf := func(format string, v ...interface{}) {
|
||||
cliInfo.Logf(Name+": "+format, v...)
|
||||
}
|
||||
|
||||
if !c.IsSet(FlagPrefix+lang.Name) && !c.IsSet(FlagPrefix+puppet.Name) {
|
||||
return nil, nil
|
||||
}
|
||||
@@ -97,13 +122,25 @@ func (obj *GAPI) Cli(c *cli.Context, fs engine.Fs) (*gapi.Deploy, error) {
|
||||
|
||||
var langDeploy *gapi.Deploy
|
||||
var puppetDeploy *gapi.Deploy
|
||||
langCliInfo := &gapi.CliInfo{
|
||||
CliContext: cli.NewContext(c.App, flagSet, nil),
|
||||
Fs: fs,
|
||||
Debug: debug,
|
||||
			Logf:       logf, // TODO: wrap logf?
		}
		puppetCliInfo := &gapi.CliInfo{
			CliContext: cli.NewContext(c.App, flagSet, nil),
			Fs:         fs,
			Debug:      debug,
			Logf:       logf, // TODO: wrap logf?
		}
		var err error

		// we don't really need the deploy object from the child GAPIs
-		if langDeploy, err = (&lang.GAPI{}).Cli(cli.NewContext(c.App, flagSet, nil), fs); err != nil {
+		if langDeploy, err = (&lang.GAPI{}).Cli(langCliInfo); err != nil {
			return nil, err
		}
-		if puppetDeploy, err = (&puppet.GAPI{}).Cli(cli.NewContext(c.App, flagSet, nil), fs); err != nil {
+		if puppetDeploy, err = (&puppet.GAPI{}).Cli(puppetCliInfo); err != nil {
			return nil, err
		}

@@ -118,34 +155,15 @@ func (obj *GAPI) Cli(c *cli.Context, fs engine.Fs) (*gapi.Deploy, error) {
	}, nil
}

// CliFlags returns a list of flags used by this deploy subcommand.
// It consists of all flags accepted by lang and puppet mode,
// with a respective "lp-" prefix.
func (obj *GAPI) CliFlags() []cli.Flag {
	langFlags := (&lang.GAPI{}).CliFlags()
	puppetFlags := (&puppet.GAPI{}).CliFlags()

	var childFlags []cli.Flag
	for _, flag := range append(langFlags, puppetFlags...) {
		childFlags = append(childFlags, &cli.StringFlag{
			Name:  FlagPrefix + strings.Split(flag.GetName(), ",")[0],
			Value: "",
			Usage: fmt.Sprintf("equivalent for '%s' when using the lang/puppet entrypoint", flag.GetName()),
		})
	}

	return childFlags
}

// Init initializes the langpuppet GAPI struct.
-func (obj *GAPI) Init(data gapi.Data) error {
+func (obj *GAPI) Init(data *gapi.Data) error {
	if obj.initialized {
		return fmt.Errorf("already initialized")
	}
	obj.data = data // store for later
	obj.graphFlagMutex = &sync.Mutex{}

-	dataLang := gapi.Data{
+	dataLang := &gapi.Data{
		Program:  obj.data.Program,
		Hostname: obj.data.Hostname,
		World:    obj.data.World,
@@ -157,7 +175,7 @@ func (obj *GAPI) Init(data gapi.Data) error {
			obj.data.Logf(lang.Name+": "+format, v...)
		},
	}
-	dataPuppet := gapi.Data{
+	dataPuppet := &gapi.Data{
		Program:  obj.data.Program,
		Hostname: obj.data.Hostname,
		World:    obj.data.World,
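The hunks above replace `Cli`'s growing argument list (`*cli.Context`, `engine.Fs`, plus debug and logging state) with a single `*gapi.CliInfo` struct, and likewise switch `Init` to take `*gapi.Data`. A rough standalone sketch of this parameter-struct pattern (the types and fields here are simplified stand-ins, not mgmt's actual API):

```go
package main

import "fmt"

// CliInfo bundles everything a frontend needs to parse its CLI input.
// Adding a field later doesn't break existing call sites, unlike
// appending a parameter to every Cli() implementation.
type CliInfo struct {
	Fs    string // placeholder for a filesystem handle
	Debug bool
	Logf  func(format string, v ...interface{})
}

// Cli consumes the bundled info instead of a positional argument list.
func Cli(info *CliInfo) error {
	if info.Debug {
		info.Logf("parsing with fs: %s", info.Fs)
	}
	return nil
}

func main() {
	info := &CliInfo{
		Fs:    "memmapfs",
		Debug: true,
		Logf:  func(f string, v ...interface{}) { fmt.Printf(f+"\n", v...) },
	}
	if err := Cli(info); err != nil {
		fmt.Println("error:", err)
	}
}
```

This is why the langpuppet wrapper above can now build a `langCliInfo` and a `puppetCliInfo` with different `Logf` wrappers while forwarding the same underlying context.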
lib/cli.go (357 lines changed)
@@ -21,189 +21,32 @@ import (
	"fmt"
	"log"
	"os"
	"os/signal"
	"sort"
	"sync"
	"syscall"

	"github.com/purpleidea/mgmt/bindata"
	"github.com/purpleidea/mgmt/gapi"
	// these imports are so that GAPIs register themselves in init()
	_ "github.com/purpleidea/mgmt/lang"
	_ "github.com/purpleidea/mgmt/langpuppet"
	_ "github.com/purpleidea/mgmt/puppet"
	_ "github.com/purpleidea/mgmt/yamlgraph"

	"github.com/spf13/afero"
	"github.com/urfave/cli"
)

// Fs is a simple wrapper to a memory backed file system to be used for
// standalone deploys. This is basically a pass-through so that we fulfill the
// same interface that the deploy mechanism uses.
type Fs struct {
	*afero.Afero
}

// URI returns the unique URI of this filesystem. It returns the root path.
func (obj *Fs) URI() string { return fmt.Sprintf("%s://"+"/", obj.Name()) }

// run is the main run target.
func run(c *cli.Context) error {

	obj := &Main{}

	obj.Program = c.App.Name
	obj.Version = c.App.Version
	if val, exists := c.App.Metadata["flags"]; exists {
		if flags, ok := val.(Flags); ok {
			obj.Flags = flags
		}
	}

	if h := c.String("hostname"); c.IsSet("hostname") && h != "" {
		obj.Hostname = &h
	}

	if s := c.String("prefix"); c.IsSet("prefix") && s != "" {
		obj.Prefix = &s
	}
	obj.TmpPrefix = c.Bool("tmp-prefix")
	obj.AllowTmpPrefix = c.Bool("allow-tmp-prefix")

	// add the versions GAPIs
	names := []string{}
	for name := range gapi.RegisteredGAPIs {
		names = append(names, name)
	}
	sort.Strings(names) // ensure deterministic order when parsing

	// create a memory backed temporary filesystem for storing runtime data
	mmFs := afero.NewMemMapFs()
	afs := &afero.Afero{Fs: mmFs} // wrap so that we're implementing ioutil
	standaloneFs := &Fs{afs}
	obj.DeployFs = standaloneFs

	for _, name := range names {
		fn := gapi.RegisteredGAPIs[name]
		deployObj, err := fn().Cli(c, standaloneFs)
		if err != nil {
			log.Printf("GAPI cli parse error: %v", err)
			//return cli.NewExitError(err.Error(), 1) // TODO: ?
			return cli.NewExitError("", 1)
		}
		if deployObj == nil { // not used
			continue
		}
		if obj.Deploy != nil { // already set one
			return fmt.Errorf("can't combine `%s` GAPI with existing GAPI", name)
		}
		obj.Deploy = deployObj
	}

	obj.NoWatch = c.Bool("no-watch")
	obj.NoConfigWatch = c.Bool("no-config-watch")
	obj.NoStreamWatch = c.Bool("no-stream-watch")

	obj.Noop = c.Bool("noop")
	obj.Sema = c.Int("sema")
	obj.Graphviz = c.String("graphviz")
	obj.GraphvizFilter = c.String("graphviz-filter")
	obj.ConvergedTimeout = c.Int("converged-timeout")
	obj.ConvergedTimeoutNoExit = c.Bool("converged-timeout-no-exit")
	obj.ConvergedStatusFile = c.String("converged-status-file")
	obj.MaxRuntime = uint(c.Int("max-runtime"))

	obj.Seeds = c.StringSlice("seeds")
	obj.ClientURLs = c.StringSlice("client-urls")
	obj.ServerURLs = c.StringSlice("server-urls")
	obj.AdvertiseClientURLs = c.StringSlice("advertise-client-urls")
	obj.AdvertiseServerURLs = c.StringSlice("advertise-server-urls")
	obj.IdealClusterSize = c.Int("ideal-cluster-size")
	obj.NoServer = c.Bool("no-server")

	obj.NoPgp = c.Bool("no-pgp")

	if kp := c.String("pgp-key-path"); c.IsSet("pgp-key-path") {
		obj.PgpKeyPath = &kp
	}

	if us := c.String("pgp-identity"); c.IsSet("pgp-identity") {
		obj.PgpIdentity = &us
	}

	obj.Prometheus = c.Bool("prometheus")
	obj.PrometheusListen = c.String("prometheus-listen")

	if err := obj.Validate(); err != nil {
		return err
	}

	if err := obj.Init(); err != nil {
		return err
	}

	// install the exit signal handler
	wg := &sync.WaitGroup{}
	defer wg.Wait()
	exit := make(chan struct{})
	defer close(exit)
	wg.Add(1)
	go func() {
		defer wg.Done()
		// must have buffer for max number of signals
		signals := make(chan os.Signal, 3+1) // 3 * ^C + 1 * SIGTERM
		signal.Notify(signals, os.Interrupt) // catch ^C
		//signal.Notify(signals, os.Kill) // catch signals
		signal.Notify(signals, syscall.SIGTERM)
		var count uint8
		for {
			select {
			case sig := <-signals: // any signal will do
				if sig != os.Interrupt {
					log.Printf("Interrupted by signal")
					obj.Interrupt(fmt.Errorf("killed by %v", sig))
					return
				}

				switch count {
				case 0:
					log.Printf("Interrupted by ^C")
					obj.Exit(nil)
				case 1:
					log.Printf("Interrupted by ^C (fast pause)")
					obj.FastExit(nil)
				case 2:
					log.Printf("Interrupted by ^C (hard interrupt)")
					obj.Interrupt(nil)
				}
				count++

			case <-exit:
				return
			}
		}
	}()

	reterr := obj.Run()
	if reterr != nil {
		// log the error message returned
		log.Printf("Main: Error: %v", reterr)
	}

	if err := obj.Close(); err != nil {
		log.Printf("Main: Close: %v", err)
		//return cli.NewExitError(err.Error(), 1) // TODO: ?
		return cli.NewExitError("", 1)
	}

	return reterr
}

// CLI is the entry point for using mgmt normally from the CLI.
func CLI(program, version string, flags Flags) error {

	// test for sanity
	if program == "" || version == "" {
		return fmt.Errorf("program was not compiled correctly, see Makefile")
	}

	// All of these flags can be accessed in your GAPI implementation with
	// the `c.Parent().Type` and `c.Parent().IsSet` functions. Their own
	// flags can be accessed with `c.Type` and `c.IsSet` directly.
	runFlags := []cli.Flag{
		// common flags which all can use

		// useful for testing multiple instances on same machine
		cli.StringFlag{
			Name: "hostname",
@@ -237,6 +80,10 @@ func CLI(program, version string, flags Flags) error {
			Name:  "no-stream-watch",
			Usage: "do not update graph on stream switch events",
		},
+		cli.BoolFlag{
+			Name:  "no-deploy-watch",
+			Usage: "do not change deploys after an initial deploy",
+		},

		cli.BoolFlag{
			Name: "noop",
@@ -349,8 +196,53 @@ func CLI(program, version string, flags Flags) error {
			Usage: "specify prometheus instance binding",
		},
	}
	deployFlags := []cli.Flag{
		// common flags which all can use
		cli.StringSliceFlag{
			Name:   "seeds, s",
			Value:  &cli.StringSlice{}, // empty slice
			Usage:  "default etc client endpoint",
			EnvVar: "MGMT_SEEDS",
		},
		cli.BoolFlag{
			Name:  "noop",
			Usage: "globally force all resources into no-op mode",
		},
		cli.IntFlag{
			Name:  "sema",
			Value: -1,
			Usage: "globally add a semaphore to all resources with this lock count",
		},
-	subCommands := []cli.Command{} // build deploy sub commands
		cli.BoolFlag{
			Name:  "no-git",
			Usage: "don't look at git commit id for safe deploys",
		},
		cli.BoolFlag{
			Name:  "force",
			Usage: "force a new deploy, even if the safety chain would break",
		},
	}
	getFlags := []cli.Flag{
		// common flags which all can use
		cli.BoolFlag{
			Name:  "noop",
			Usage: "simulate the download (can't recurse)",
		},
		cli.IntFlag{
			Name:  "sema",
			Value: -1, // maximum parallelism
			Usage: "globally add a semaphore to downloads with this lock count",
		},
		cli.BoolFlag{
			Name:  "update",
			Usage: "update all dependencies to the latest versions",
		},
	}

	subCommandsRun := []cli.Command{}    // run sub commands
	subCommandsDeploy := []cli.Command{} // deploy sub commands
	subCommandsGet := []cli.Command{}    // get (download) sub commands

	names := []string{}
	for name := range gapi.RegisteredGAPIs {
@@ -361,24 +253,53 @@ func CLI(program, version string, flags Flags) error {
		name := x // create a copy in this scope
		fn := gapi.RegisteredGAPIs[name]
		gapiObj := fn()
-		flags := gapiObj.CliFlags() // []cli.Flag
-
-		runFlags = append(runFlags, flags...)

-		command := cli.Command{
+		commandRun := cli.Command{
			Name: name,
-			Usage: fmt.Sprintf("deploy using the `%s` frontend", name),
+			Usage: fmt.Sprintf("run using the `%s` frontend", name),
			Action: func(c *cli.Context) error {
-				if err := deploy(c, name, gapiObj); err != nil {
-					log.Printf("Deploy: Error: %v", err)
+				if err := run(c, name, gapiObj); err != nil {
+					log.Printf("run: error: %v", err)
					//return cli.NewExitError(err.Error(), 1) // TODO: ?
					return cli.NewExitError("", 1)
				}
				return nil
			},
-			Flags: flags,
+			Flags: gapiObj.CliFlags(gapi.CommandRun),
		}
+		subCommandsRun = append(subCommandsRun, commandRun)

+		commandDeploy := cli.Command{
+			Name:  name,
+			Usage: fmt.Sprintf("deploy using the `%s` frontend", name),
+			Action: func(c *cli.Context) error {
+				if err := deploy(c, name, gapiObj); err != nil {
+					log.Printf("deploy: error: %v", err)
+					//return cli.NewExitError(err.Error(), 1) // TODO: ?
+					return cli.NewExitError("", 1)
+				}
+				return nil
+			},
+			Flags: gapiObj.CliFlags(gapi.CommandDeploy),
+		}
+		subCommandsDeploy = append(subCommandsDeploy, commandDeploy)

+		if _, ok := gapiObj.(gapi.GettableGAPI); ok {
+			commandGet := cli.Command{
+				Name:  name,
+				Usage: fmt.Sprintf("get (download) using the `%s` frontend", name),
+				Action: func(c *cli.Context) error {
+					if err := get(c, name, gapiObj); err != nil {
+						log.Printf("get: error: %v", err)
+						//return cli.NewExitError(err.Error(), 1) // TODO: ?
+						return cli.NewExitError("", 1)
+					}
+					return nil
+				},
+				Flags: gapiObj.CliFlags(gapi.CommandGet),
+			}
+			subCommandsGet = append(subCommandsGet, commandGet)
+		}
-		subCommands = append(subCommands, command)
	}

	app := cli.NewApp()
@@ -416,48 +337,52 @@ func CLI(program, version string, flags Flags) error {
	}

	app.Commands = []cli.Command{
-		{
-			Name:    "run",
-			Aliases: []string{"r"},
-			Usage:   "run",
-			Action:  run,
-			Flags:   runFlags,
-		},
-		{
-			Name: "deploy",
		//{
		//	Name:    gapi.CommandTODO,
		//	Aliases: []string{"TODO"},
		//	Usage:   "TODO",
		//	Action:  TODO,
		//	Flags:   TODOFlags,
		//},
	}

	// run always requires a frontend to start the engine, but if you don't
	// want a graph, you can use the `empty` frontend. The engine backend is
	// agnostic to which frontend is running, in fact, you can deploy with
	// multiple different frontends, one after another, on the same engine.
	if len(subCommandsRun) > 0 {
		commandRun := cli.Command{
			Name:        gapi.CommandRun,
			Aliases:     []string{"r"},
			Usage:       "run",
			Subcommands: subCommandsRun,
			Flags:       runFlags,
		}
		app.Commands = append(app.Commands, commandRun)
	}

	if len(subCommandsDeploy) > 0 {
		commandDeploy := cli.Command{
			Name:    gapi.CommandDeploy,
			Aliases: []string{"d"},
			Usage:   "deploy",
-			Subcommands: subCommands,
-			Flags: []cli.Flag{
-				cli.StringSliceFlag{
-					Name:   "seeds, s",
-					Value:  &cli.StringSlice{}, // empty slice
-					Usage:  "default etc client endpoint",
-					EnvVar: "MGMT_SEEDS",
-				},
-
-				// common flags which all can use
-				cli.BoolFlag{
-					Name:  "noop",
-					Usage: "globally force all resources into no-op mode",
-				},
-				cli.IntFlag{
-					Name:  "sema",
-					Value: -1,
-					Usage: "globally add a semaphore to all resources with this lock count",
-				},
-
-				cli.BoolFlag{
-					Name:  "no-git",
-					Usage: "don't look at git commit id for safe deploys",
-				},
-				cli.BoolFlag{
-					Name:  "force",
-					Usage: "force a new deploy, even if the safety chain would break",
-				},
-			},
-		},
+			Subcommands: subCommandsDeploy,
+			Flags:       deployFlags,
		}
		app.Commands = append(app.Commands, commandDeploy)
	}

	if len(subCommandsGet) > 0 {
		commandGet := cli.Command{
			Name:        gapi.CommandGet,
			Aliases:     []string{"g"},
			Usage:       "get",
			Subcommands: subCommandsGet,
			Flags:       getFlags,
		}
		app.Commands = append(app.Commands, commandGet)
	}

	app.EnableBashCompletion = true
	return app.Run(os.Args)
}
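The `CLI` function above builds its `run`, `deploy`, and `get` subcommands by iterating over a registry of GAPI constructors, sorting the names first because Go map iteration order is random. A condensed sketch of that registry pattern (with a stand-in registry type, not mgmt's real `gapi.RegisteredGAPIs`):

```go
package main

import (
	"fmt"
	"sort"
)

// registry maps a frontend name to a constructor, the way GAPIs
// self-register in init(). (Stand-in values, not mgmt's real registry.)
var registry = map[string]func() string{
	"yamlgraph": func() string { return "yaml graph frontend" },
	"lang":      func() string { return "mcl language frontend" },
}

// sortedNames returns the registry keys in a deterministic order, so
// the generated subcommand list is stable across runs.
func sortedNames(reg map[string]func() string) []string {
	names := []string{}
	for name := range reg {
		names = append(names, name)
	}
	sort.Strings(names)
	return names
}

func main() {
	for _, name := range sortedNames(registry) {
		name := name // copy for any closures built per-subcommand
		fmt.Printf("subcommand %s: %s\n", name, registry[name]())
	}
}
```

The per-iteration `name := name` copy matters because each subcommand's `Action` closure captures the loop variable, just as the `name := x` line does in the diff above.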
@@ -25,11 +25,6 @@ import (
	"github.com/purpleidea/mgmt/etcd"
	etcdfs "github.com/purpleidea/mgmt/etcd/fs"
	"github.com/purpleidea/mgmt/gapi"
-	// these imports are so that GAPIs register themselves in init()
-	_ "github.com/purpleidea/mgmt/lang"
-	_ "github.com/purpleidea/mgmt/langpuppet"
-	_ "github.com/purpleidea/mgmt/puppet"
-	_ "github.com/purpleidea/mgmt/yamlgraph"

	"github.com/google/uuid"
	errwrap "github.com/pkg/errors"
@@ -46,17 +41,24 @@ const (

// deploy is the cli target to manage deploys to our cluster.
func deploy(c *cli.Context, name string, gapiObj gapi.GAPI) error {
-	program, version := c.App.Name, c.App.Version
+	cliContext := c.Parent()
+	if cliContext == nil {
+		return fmt.Errorf("could not get cli context")
+	}
+
+	program, version := safeProgram(c.App.Name), c.App.Version
	var flags Flags
+	var debug bool
	if val, exists := c.App.Metadata["flags"]; exists {
		if f, ok := val.(Flags); ok {
			flags = f
+			debug = flags.Debug
		}
	}
	hello(program, version, flags) // say hello!

	var hash, pHash string
-	if !c.GlobalBool("no-git") {
+	if !cliContext.Bool("no-git") {
		wd, err := os.Getwd()
		if err != nil {
			return errwrap.Wrapf(err, "could not get current working directory")
@@ -72,7 +74,7 @@ func deploy(c *cli.Context, name string, gapiObj gapi.GAPI) error {
		}

		hash = head.Hash().String() // current commit id
-		log.Printf("Deploy: Hash: %s", hash)
+		log.Printf("deploy: hash: %s", hash)

		lo := &git.LogOptions{
			From: head.Hash(),
@@ -88,8 +90,8 @@ func deploy(c *cli.Context, name string, gapiObj gapi.GAPI) error {
		if err == nil { // errors are okay, we might be empty
			pHash = commit.Hash.String() // previous commit id
		}
-		log.Printf("Deploy: Previous deploy hash: %s", pHash)
-		if c.GlobalBool("force") {
+		log.Printf("deploy: previous deploy hash: %s", pHash)
+		if cliContext.Bool("force") {
			pHash = "" // don't check this :(
		}
		if hash == "" {
@@ -100,27 +102,21 @@ func deploy(c *cli.Context, name string, gapiObj gapi.GAPI) error {
	uniqueid := uuid.New() // panic's if it can't generate one :P

	etcdClient := &etcd.ClientEtcd{
-		Seeds: c.GlobalStringSlice("seeds"), // endpoints
+		Seeds: cliContext.StringSlice("seeds"), // endpoints
	}
	if err := etcdClient.Connect(); err != nil {
		return errwrap.Wrapf(err, "client connection error")
	}
	defer etcdClient.Destroy()

-	// TODO: this was all implemented super inefficiently, fix up for perf!
-	deploys, err := etcd.GetDeploys(etcdClient) // get previous deploys
+	// get max id (from all the previous deploys)
+	max, err := etcd.GetMaxDeployID(etcdClient)
	if err != nil {
-		return errwrap.Wrapf(err, "error getting previous deploys")
+		return errwrap.Wrapf(err, "error getting max deploy id")
	}
-	// find the latest id
-	var max uint64
-	for i := range deploys {
-		if i > max {
-			max = i
-		}
-	}
	var id = max + 1 // next id
-	log.Printf("Deploy: Previous deploy id: %d", max)
+	log.Printf("deploy: max deploy id: %d", max)

	etcdFs := &etcdfs.Fs{
		Client: etcdClient.GetClient(),
@@ -129,7 +125,18 @@ func deploy(c *cli.Context, name string, gapiObj gapi.GAPI) error {
		DataPrefix: StoragePrefix,
	}

-	deploy, err := gapiObj.Cli(c, etcdFs)
+	cliInfo := &gapi.CliInfo{
+		CliContext: c, // don't pass in the parent context
+
+		Fs:    etcdFs,
+		Debug: debug,
+		Logf: func(format string, v ...interface{}) {
+			// TODO: is this a sane prefix to use here?
+			log.Printf("cli: "+format, v...)
+		},
+	}
+
+	deploy, err := gapiObj.Cli(cliInfo)
	if err != nil {
		return errwrap.Wrapf(err, "cli parse error")
	}
@@ -138,8 +145,8 @@ func deploy(c *cli.Context, name string, gapiObj gapi.GAPI) error {
	}

	// redundant
-	deploy.Noop = c.GlobalBool("noop")
-	deploy.Sema = c.GlobalInt("sema")
+	deploy.Noop = cliContext.Bool("noop")
+	deploy.Sema = cliContext.Int("sema")

	str, err := deploy.ToB64()
	if err != nil {
@@ -150,6 +157,6 @@ func deploy(c *cli.Context, name string, gapiObj gapi.GAPI) error {
	if err := etcd.AddDeploy(etcdClient, id, hash, pHash, &str); err != nil {
		return errwrap.Wrapf(err, "could not create deploy id `%d`", id)
	}
-	log.Printf("Deploy: Success, id: %d", id)
+	log.Printf("deploy: success, id: %d", id)
	return nil
}
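The hunk above replaces an inline scan over all previous deploys with a single `etcd.GetMaxDeployID` helper; the next deploy id is then `max + 1`. The loop being factored out is simple enough to sketch standalone (with a stand-in map instead of real etcd deploy data):

```go
package main

import "fmt"

// maxDeployID returns the highest id among previous deploys; the next
// deploy id is then max+1. This mirrors the manual loop the patch
// replaces with the etcd.GetMaxDeployID helper.
func maxDeployID(deploys map[uint64]string) uint64 {
	var max uint64
	for i := range deploys { // keys are deploy ids
		if i > max {
			max = i
		}
	}
	return max // zero when there are no previous deploys
}

func main() {
	deploys := map[uint64]string{1: "a", 3: "c", 2: "b"}
	max := maxDeployID(deploys)
	fmt.Printf("max deploy id: %d, next id: %d\n", max, max+1)
}
```

Moving the scan server-side also avoids fetching every deploy payload just to compute one integer, which is what the removed "implemented super inefficiently" TODO was pointing at.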
lib/get.go (new file, 73 lines)
@@ -0,0 +1,73 @@
// Mgmt
// Copyright (C) 2013-2018+ James Shubin and the project contributors
// Written by James Shubin <james@shubin.ca> and the project contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

package lib

import (
	"fmt"
	"log"

	"github.com/purpleidea/mgmt/gapi"

	"github.com/urfave/cli"
)

// get is the cli target to run code/import downloads.
func get(c *cli.Context, name string, gapiObj gapi.GAPI) error {
	cliContext := c.Parent()
	if cliContext == nil {
		return fmt.Errorf("could not get cli context")
	}

	program, version := safeProgram(c.App.Name), c.App.Version
	var flags Flags
	var debug bool
	if val, exists := c.App.Metadata["flags"]; exists {
		if f, ok := val.(Flags); ok {
			flags = f
			debug = flags.Debug
		}
	}
	hello(program, version, flags) // say hello!

	gettable, ok := gapiObj.(gapi.GettableGAPI)
	if !ok {
		// this is a programming bug as this should not get called...
		return fmt.Errorf("the `%s` GAPI does not implement: %s", name, gapi.CommandGet)
	}

	getInfo := &gapi.GetInfo{
		CliContext: c, // don't pass in the parent context

		Noop:   cliContext.Bool("noop"),
		Sema:   cliContext.Int("sema"),
		Update: cliContext.Bool("update"),

		Debug: debug,
		Logf: func(format string, v ...interface{}) {
			// TODO: is this a sane prefix to use here?
			log.Printf(name+": "+format, v...)
		},
	}

	if err := gettable.Get(getInfo); err != nil {
		return err // no need to errwrap here
	}

	log.Printf("%s: success!", name)
	return nil
}
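The new `get` target above checks whether a frontend supports downloads by asserting the base `gapi.GAPI` value against the optional `gapi.GettableGAPI` interface. A minimal sketch of that optional-capability pattern (interface method sets simplified here, not mgmt's real signatures):

```go
package main

import "fmt"

// GAPI is the base interface every frontend implements.
type GAPI interface{ Name() string }

// GettableGAPI is an optional capability: only frontends that support
// downloads implement it, mirroring the type assertion in get().
type GettableGAPI interface {
	GAPI
	Get() error
}

type langGAPI struct{}

func (g *langGAPI) Name() string { return "lang" }
func (g *langGAPI) Get() error   { return nil } // lang supports downloads

type yamlGAPI struct{}

func (g *yamlGAPI) Name() string { return "yamlgraph" } // no Get method

// supportsGet reports whether a frontend opted in to the capability.
func supportsGet(g GAPI) bool {
	_, ok := g.(GettableGAPI) // optional-interface check
	return ok
}

func main() {
	fmt.Println(supportsGet(&langGAPI{})) // true
	fmt.Println(supportsGet(&yamlGAPI{})) // false
}
```

This same assertion is what lets `CLI` (earlier in the patch) register a `get` subcommand only for frontends that implement it.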
@@ -43,6 +43,9 @@ func hello(program, version string, flags Flags) {
		capnslog.SetFormatter(capnslog.NewNilFormatter())
	}

-	log.Printf("This is: %s, version: %s", program, version)
-	log.Printf("main: Start: %v", start)
+	if program == "" {
+		program = "<unknown>"
+	}
+	log.Printf("this is: %s, version: %s", program, version)
+	log.Printf("main: start: %v", start)
}
lib/main.go (209 lines changed)
@@ -23,6 +23,7 @@ import (
	"log"
	"os"
	"path"
+	"strings"
	"sync"
	"time"

@@ -70,6 +71,7 @@ type Main struct {
	NoWatch       bool // do not change graph under any circumstances
	NoConfigWatch bool // do not update graph due to config changes
	NoStreamWatch bool // do not update graph due to stream changes
+	NoDeployWatch bool // do not change deploys after an initial deploy

	Noop bool // globally force all resources into no-op mode
	Sema int  // add a semaphore with this lock count to each resource
@@ -114,6 +116,9 @@ func (obj *Main) Validate() error {
	if obj.Program == "" || obj.Version == "" {
		return fmt.Errorf("you must set the Program and Version strings")
	}
+	if strings.Contains(obj.Program, " ") {
+		return fmt.Errorf("the Program string contains unexpected spaces")
+	}

	if obj.Prefix != nil && obj.TmpPrefix {
		return fmt.Errorf("choosing a prefix and the request for a tmp prefix is illogical")
@@ -139,7 +144,7 @@ func (obj *Main) Init() error {
	}

	if obj.idealClusterSize < 1 {
-		return fmt.Errorf("the IdealClusterSize should be at least one")
+		return fmt.Errorf("the IdealClusterSize (%d) should be at least one", obj.idealClusterSize)
	}

	// transform the url list inputs into etcd typed lists
@@ -187,7 +192,7 @@ func (obj *Main) Run() error {
	}

	hello(obj.Program, obj.Version, obj.Flags) // say hello!
-	defer Logf("Goodbye!")
+	defer Logf("goodbye!")

	defer obj.exit.Done(nil) // ensure this gets called even if Exit doesn't

@@ -216,7 +221,7 @@ func (obj *Main) Run() error {
			Logf("warning: working prefix directory is temporary!")

		} else {
-			return fmt.Errorf("can't create prefix")
+			return fmt.Errorf("can't create prefix: `%s`", prefix)
		}
	}
	Logf("working prefix is: %s", prefix)
@@ -472,7 +477,7 @@ func (obj *Main) Run() error {
	}
	gapiImpl = gapiObj // copy it to active

-	data := gapi.Data{
+	data := &gapi.Data{
		Program:  obj.Program,
		Hostname: hostname,
		World:    world,
@@ -666,109 +671,151 @@ func (obj *Main) Run() error {
}
}()

if obj.Deploy != nil {
deploy := obj.Deploy
// redundant
deploy.Noop = obj.Noop
deploy.Sema = obj.Sema
// get max id (from all the previous deploys)
// this is what the existing cluster is already running
// TODO: can this block since we didn't deploy yet?
max, err := etcd.GetMaxDeployID(embdEtcd)
if err != nil {
return errwrap.Wrapf(err, "error getting max deploy id")
}

select {
case deployChan <- deploy:
// send
case <-exitchan:
// pass
}
// improved etcd based deploy
wg.Add(1)
go func() {
defer wg.Done()
defer close(deployChan) // no more are coming ever!

// don't inline this, because when we close the deployChan it's
// the signal to tell the engine to actually shutdown...
wg.Add(1)
go func() {
defer wg.Done()
defer close(deployChan) // no more are coming ever!
select { // wait until we're ready to shutdown
// we've been asked to deploy, so do that first...
if obj.Deploy != nil {
deploy := obj.Deploy
// redundant
deploy.Noop = obj.Noop
deploy.Sema = obj.Sema

select {
case deployChan <- deploy:
// send
if obj.Flags.Debug {
Logf("deploy: sending new gapi")
}
case <-exitchan:
return
}
}()
} else {
// etcd based deploy
wg.Add(1)
go func() {
defer wg.Done()
defer close(deployChan)
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
for {
select {
case <-startChan: // kick the loop once at start
startChan = nil // disable

case err, ok := <-etcd.WatchDeploy(embdEtcd):
if !ok {
obj.exit.Done(nil) // regular shutdown
return
}
if err != nil {
// TODO: it broke, can we restart?
obj.exit.Done(fmt.Errorf("deploy: watch error"))
return
}
startChan = nil // disable it early...
}

// now we can wait for future deploys, but if we already had an
// initial deploy from run, don't switch to this unless it's new
var last uint64
startChan := make(chan struct{}) // start signal
close(startChan) // kick it off!
for {
if obj.NoDeployWatch && (obj.Deploy != nil || last > 0) {
// block here, because when we close the
// deployChan it's the signal to tell the engine
// to actually shutdown...
select { // wait until we're ready to shutdown
case <-exitchan:
return
}
}

select {
case <-startChan: // kick the loop once at start
startChan = nil // disable

case err, ok := <-etcd.WatchDeploy(embdEtcd):
if !ok {
obj.exit.Done(nil) // regular shutdown
return
}
if err != nil {
// TODO: it broke, can we restart?
obj.exit.Done(fmt.Errorf("deploy: watch error"))
return
}
startChan = nil // disable it early...
if obj.Flags.Debug {
Logf("deploy: got activity")
}
str, err := etcd.GetDeploy(embdEtcd, 0) // 0 means get the latest one
if err != nil {
Logf("deploy: error getting deploy: %+v", err)
continue
}
if str == "" { // no available deploys exist yet
// send an empty deploy... this is done
// to start up the engine so it can run
// an empty graph and be ready to swap!
Logf("deploy: empty")
deploy := &gapi.Deploy{
Name: empty.Name,
GAPI: &empty.GAPI{},
}
select {
case deployChan <- deploy:
// send
if obj.Flags.Debug {
Logf("deploy: sending empty deploy")
}

case <-exitchan:
return
}
continue
}
case <-exitchan:
return
}

// decode the deploy (incl. GAPI) and send it!
deploy, err := gapi.NewDeployFromB64(str)
if err != nil {
Logf("deploy: error decoding deploy: %+v", err)
continue
}
latest, err := etcd.GetMaxDeployID(embdEtcd) // or zero
if err != nil {
Logf("error getting max deploy id: %+v", err)
continue
}

// if we already did the built-in one from run, and this
// new deploy is not newer than when we started, skip it
if obj.Deploy != nil && latest <= max {
// if latest and max are zero, it's okay to loop
continue
}

// if we're doing any deploy, don't run the previous one
// (this might be useful if we get a double event here!)
if obj.Deploy == nil && latest <= last && latest != 0 {
// if latest and last are zero, pass through it!
continue
}
// if we already did a deploy, but we're being asked for
// this again, then skip over it if it's not a newer one
if obj.Deploy != nil && latest <= last {
continue
}

// 0 passes through an empty deploy without an error...
// (unless there is some sort of etcd error that occurs)
str, err := etcd.GetDeploy(embdEtcd, latest)
if err != nil {
Logf("deploy: error getting deploy: %+v", err)
continue
}
if str == "" { // no available deploys exist yet
// send an empty deploy... this is done
// to start up the engine so it can run
// an empty graph and be ready to swap!
Logf("deploy: empty")
deploy := &gapi.Deploy{
Name: empty.Name,
GAPI: &empty.GAPI{},
}
select {
case deployChan <- deploy:
// send
if obj.Flags.Debug {
-Logf("deploy: sending new gapi")
+Logf("deploy: sending empty deploy")
}

case <-exitchan:
return
}
continue
}
}()
}

// decode the deploy (incl. GAPI) and send it!
deploy, err := gapi.NewDeployFromB64(str)
if err != nil {
Logf("deploy: error decoding deploy: %+v", err)
continue
}

select {
case deployChan <- deploy:
last = latest // update last deployed
// send
if obj.Flags.Debug {
Logf("deploy: sent new gapi")
}

case <-exitchan:
return
}
}
}()

Logf("running...")
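The rewritten watch loop above compares three ids before switching deploys: `max` (what the cluster was running when we started), `last` (the last deploy this loop applied), and `latest` (the newest id in etcd). Those guards can be condensed into a small decision function, shown here as a simplified stand-in for the inline checks (not mgmt's actual code):

```go
package main

import "fmt"

// shouldSkip mirrors the guards in the deploy watch loop: skip a deploy
// event when it isn't newer than the deploy sent at startup (max) or
// the last one we already applied (last).
func shouldSkip(haveInitial bool, latest, max, last uint64) bool {
	if haveInitial && latest <= max {
		return true // not newer than what `run` already deployed
	}
	if !haveInitial && latest <= last && latest != 0 {
		return true // we already ran this one
	}
	if haveInitial && latest <= last {
		return true // asked again for a deploy we already did
	}
	return false // genuinely new: decode and send it
}

func main() {
	fmt.Println(shouldSkip(true, 3, 5, 0))  // stale vs initial deploy
	fmt.Println(shouldSkip(false, 7, 0, 7)) // already applied
	fmt.Println(shouldSkip(true, 9, 5, 8))  // genuinely new
}
```

This is what makes `run` followed by a later `deploy` safe: the engine ignores watch events for anything at or below the id it is already running.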
lib/run.go (new file, 188 lines)
@@ -0,0 +1,188 @@
+// Mgmt
+// Copyright (C) 2013-2018+ James Shubin and the project contributors
+// Written by James Shubin <james@shubin.ca> and the project contributors
+//
+// This program is free software: you can redistribute it and/or modify
+// it under the terms of the GNU General Public License as published by
+// the Free Software Foundation, either version 3 of the License, or
+// (at your option) any later version.
+//
+// This program is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License
+// along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+package lib
+
+import (
+	"fmt"
+	"log"
+	"os"
+	"os/signal"
+	"sync"
+	"syscall"
+
+	"github.com/purpleidea/mgmt/gapi"
+	"github.com/purpleidea/mgmt/util"
+
+	errwrap "github.com/pkg/errors"
+	"github.com/spf13/afero"
+	"github.com/urfave/cli"
+)
+
+// run is the main run target.
+func run(c *cli.Context, name string, gapiObj gapi.GAPI) error {
+	cliContext := c.Parent() // these are the flags from `run`
+	if cliContext == nil {
+		return fmt.Errorf("could not get cli context")
+	}
+
+	obj := &Main{}
+
+	obj.Program, obj.Version = safeProgram(c.App.Name), c.App.Version
+	if val, exists := c.App.Metadata["flags"]; exists {
+		if flags, ok := val.(Flags); ok {
+			obj.Flags = flags
+		}
+	}
+
+	if h := cliContext.String("hostname"); cliContext.IsSet("hostname") && h != "" {
+		obj.Hostname = &h
+	}
+
+	if s := cliContext.String("prefix"); cliContext.IsSet("prefix") && s != "" {
+		obj.Prefix = &s
+	}
+	obj.TmpPrefix = cliContext.Bool("tmp-prefix")
+	obj.AllowTmpPrefix = cliContext.Bool("allow-tmp-prefix")
+
+	// create a memory backed temporary filesystem for storing runtime data
+	mmFs := afero.NewMemMapFs()
+	afs := &afero.Afero{Fs: mmFs} // wrap so that we're implementing ioutil
+	standaloneFs := &util.Fs{Afero: afs}
+	obj.DeployFs = standaloneFs
+
+	cliInfo := &gapi.CliInfo{
+		CliContext: c, // don't pass in the parent context
+
+		Fs:    standaloneFs,
+		Debug: obj.Flags.Debug,
+		Logf: func(format string, v ...interface{}) {
+			log.Printf("cli: "+format, v...)
+		},
+	}
+
+	deploy, err := gapiObj.Cli(cliInfo)
+	if err != nil {
+		return errwrap.Wrapf(err, "cli parse error")
+	}
+	obj.Deploy = deploy
+	if obj.Deploy == nil {
+		// nobody activated, but we'll still watch the etcd deploy chan,
+		// and if there is deployed code that's ready to run, we'll run!
+		log.Printf("main: no frontend selected (no GAPI activated)")
+	}
+
+	obj.NoWatch = cliContext.Bool("no-watch")
+	obj.NoConfigWatch = cliContext.Bool("no-config-watch")
+	obj.NoStreamWatch = cliContext.Bool("no-stream-watch")
+	obj.NoDeployWatch = cliContext.Bool("no-deploy-watch")
+
+	obj.Noop = cliContext.Bool("noop")
+	obj.Sema = cliContext.Int("sema")
+	obj.Graphviz = cliContext.String("graphviz")
+	obj.GraphvizFilter = cliContext.String("graphviz-filter")
+	obj.ConvergedTimeout = cliContext.Int("converged-timeout")
+	obj.ConvergedTimeoutNoExit = cliContext.Bool("converged-timeout-no-exit")
+	obj.ConvergedStatusFile = cliContext.String("converged-status-file")
+	obj.MaxRuntime = uint(cliContext.Int("max-runtime"))
+
+	obj.Seeds = cliContext.StringSlice("seeds")
+	obj.ClientURLs = cliContext.StringSlice("client-urls")
+	obj.ServerURLs = cliContext.StringSlice("server-urls")
+	obj.AdvertiseClientURLs = cliContext.StringSlice("advertise-client-urls")
+	obj.AdvertiseServerURLs = cliContext.StringSlice("advertise-server-urls")
+	obj.IdealClusterSize = cliContext.Int("ideal-cluster-size")
+	obj.NoServer = cliContext.Bool("no-server")
+
+	obj.NoPgp = cliContext.Bool("no-pgp")
+
+	if kp := cliContext.String("pgp-key-path"); cliContext.IsSet("pgp-key-path") {
+		obj.PgpKeyPath = &kp
+	}
+
+	if us := cliContext.String("pgp-identity"); cliContext.IsSet("pgp-identity") {
+		obj.PgpIdentity = &us
+	}
+
+	obj.Prometheus = cliContext.Bool("prometheus")
+	obj.PrometheusListen = cliContext.String("prometheus-listen")
+
+	if err := obj.Validate(); err != nil {
+		return err
+	}
+
+	if err := obj.Init(); err != nil {
+		return err
+	}
+
+	// install the exit signal handler
+	wg := &sync.WaitGroup{}
+	defer wg.Wait()
+	exit := make(chan struct{})
+	defer close(exit)
+	wg.Add(1)
+	go func() {
+		defer wg.Done()
+		// must have buffer for max number of signals
+		signals := make(chan os.Signal, 3+1) // 3 * ^C + 1 * SIGTERM
+		signal.Notify(signals, os.Interrupt) // catch ^C
+		//signal.Notify(signals, os.Kill) // catch signals
+		signal.Notify(signals, syscall.SIGTERM)
+		var count uint8
+		for {
+			select {
+			case sig := <-signals: // any signal will do
+				if sig != os.Interrupt {
+					log.Printf("interrupted by signal")
+					obj.Interrupt(fmt.Errorf("killed by %v", sig))
+					return
+				}
+
+				switch count {
+				case 0:
+					log.Printf("interrupted by ^C")
+					obj.Exit(nil)
+				case 1:
+					log.Printf("interrupted by ^C (fast pause)")
+					obj.FastExit(nil)
+				case 2:
+					log.Printf("interrupted by ^C (hard interrupt)")
+					obj.Interrupt(nil)
+				}
+				count++
+
+			case <-exit:
+				return
+			}
+		}
+	}()
+
+	reterr := obj.Run()
+	if reterr != nil {
+		// log the error message returned
+		log.Printf("main: Error: %v", reterr)
+	}
+
+	if err := obj.Close(); err != nil {
+		log.Printf("main: Close: %v", err)
+		if reterr == nil {
+			return err
+		}
+	}
+
+	return reterr
+}
lib/util.go (new file, 35 lines)
@@ -0,0 +1,35 @@
+// Mgmt
+// Copyright (C) 2013-2018+ James Shubin and the project contributors
+// Written by James Shubin <james@shubin.ca> and the project contributors
+//
+// This program is free software: you can redistribute it and/or modify
+// it under the terms of the GNU General Public License as published by
+// the Free Software Foundation, either version 3 of the License, or
+// (at your option) any later version.
+//
+// This program is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License
+// along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+package lib
+
+import (
+	"strings"
+)
+
+// safeProgram returns the correct program string when given a buggy variant.
+func safeProgram(program string) string {
+	// FIXME: in sub commands, the cli package appends a space and the sub
+	// command name at the end. hack around this by only using the first bit
+	// see: https://github.com/urfave/cli/issues/783 for more details...
+	split := strings.Split(program, " ")
+	program = split[0]
+	//if program == "" {
+	//	program = "<unknown>"
+	//}
+	return program
+}

@@ -5,7 +5,7 @@ After=systemd-networkd.service
 Requires=systemd-networkd.service

 [Service]
-ExecStart=/usr/bin/mgmt run $OPTS
+ExecStart=/usr/bin/mgmt run empty $OPTS
 RestartSec=5s
 Restart=always
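Since the frontend is now named explicitly after `run`, a unit that should start the language frontend instead of the empty one has to name it and pass its flags after the frontend name, with engine flags before it. A hypothetical drop-in override (the /etc/mgmt/main.mcl path is illustrative, not shipped by this commit):

```
[Service]
ExecStart=
ExecStart=/usr/bin/mgmt run --tmp-prefix lang --lang /etc/mgmt/main.mcl
```

Note the ordering: engine-specific flags such as --tmp-prefix must come between `run` and the frontend name.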

@@ -234,10 +234,18 @@ func (g *Graph) VerticesChan() chan Vertex {
 // VertexSlice is a linear list of vertices. It can be sorted.
 type VertexSlice []Vertex

-func (vs VertexSlice) Len() int { return len(vs) }
-func (vs VertexSlice) Swap(i, j int) { vs[i], vs[j] = vs[j], vs[i] }
+// Len returns the length of the slice of vertices.
+func (vs VertexSlice) Len() int { return len(vs) }
+
+// Swap swaps two elements in the slice.
+func (vs VertexSlice) Swap(i, j int) { vs[i], vs[j] = vs[j], vs[i] }
+
+// Less returns the smaller element in the sort order.
+func (vs VertexSlice) Less(i, j int) bool { return vs[i].String() < vs[j].String() }
+
+// Sort is a convenience method.
+func (vs VertexSlice) Sort() { sort.Sort(vs) }

 // VerticesSorted returns a sorted slice of all vertices in the graph.
 // The order is sorted by String() to avoid the non-determinism in the map type.
 func (g *Graph) VerticesSorted() []Vertex {
@@ -259,14 +267,20 @@ func (g *Graph) String() string {

 // Sprint prints a full graph in textual form out to a string. To log this you
 // might want to use Logf, which will keep everything aligned with whatever your
-// logging prefix is.
+// logging prefix is. This function returns the result in a deterministic order.
 func (g *Graph) Sprint() string {
 	var str string
-	for v := range g.Adjacency() {
+	for _, v := range g.VerticesSorted() {
 		str += fmt.Sprintf("Vertex: %s\n", v)
 	}
-	for v1 := range g.Adjacency() {
-		for v2, e := range g.Adjacency()[v1] {
+	for _, v1 := range g.VerticesSorted() {
+		vs := []Vertex{}
+		for v2 := range g.Adjacency()[v1] {
+			vs = append(vs, v2)
+		}
+		sort.Sort(VertexSlice(vs)) // deterministic order
+		for _, v2 := range vs {
+			e := g.Adjacency()[v1][v2]
 			str += fmt.Sprintf("Edge: %s -> %s # %s\n", v1, v2, e)
 		}
 	}
@@ -728,6 +728,38 @@ func TestSort1(t *testing.T) {
 	}
 }

+func TestSprint1(t *testing.T) {
+	g, _ := NewGraph("graph1")
+	v1 := NV("v1")
+	v2 := NV("v2")
+	v3 := NV("v3")
+	v4 := NV("v4")
+	v5 := NV("v5")
+	v6 := NV("v6")
+	e1 := NE("e1")
+	e2 := NE("e2")
+	e3 := NE("e3")
+	e4 := NE("e4")
+	e5 := NE("e5")
+	g.AddEdge(v1, v2, e1)
+	g.AddEdge(v2, v3, e2)
+	g.AddEdge(v3, v4, e3)
+	g.AddEdge(v4, v5, e4)
+	g.AddEdge(v5, v6, e5)
+
+	str := g.Sprint()
+	t.Logf("graph is:\n%s", str)
+	count := 0
+	for count < 100000 { // about one second
+		x := g.Sprint()
+		if str != x {
+			t.Errorf("graph sprint is not consistent")
+			return
+		}
+		count++
+	}
+}
+
 func TestDeleteEdge1(t *testing.T) {
 	g, _ := NewGraph("g")
pgraph/selfvertex.go (new file, 30 lines)
@@ -0,0 +1,30 @@
+// Mgmt
+// Copyright (C) 2013-2018+ James Shubin and the project contributors
+// Written by James Shubin <james@shubin.ca> and the project contributors
+//
+// This program is free software: you can redistribute it and/or modify
+// it under the terms of the GNU General Public License as published by
+// the Free Software Foundation, either version 3 of the License, or
+// (at your option) any later version.
+//
+// This program is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License
+// along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+package pgraph
+
+// SelfVertex is a vertex that stores a graph pointer to the graph that it's on.
+// This is useful if you want to pass around a graph with a vertex cursor on it.
+type SelfVertex struct {
+	Name  string
+	Graph *Graph // it's up to you to manage the cursor safety
+}
+
+// String is a required method of the Vertex interface that we must fulfill.
+func (obj *SelfVertex) String() string {
+	return obj.Name
+}
Some files were not shown because too many files have changed in this diff.