This enables imports in mcl code, and is one of the last remaining blockers
to using mgmt. Now we can start writing standalone modules, and adding
standard library functions as needed. There's still lots to do, but this
was a big missing piece. It was much harder to get right than I had
expected, but I think it's solid!
This unfortunately large commit is the result of some wild hacking I've
been doing for the past little while. It's the result of a rebase that
reworked the many "wip" commits which tracked my private progress into
something that's not gratuitously messy for our git logs. Since this was
a learning and discovery process for me, I've "erased" the confusing git
history that wouldn't have helped. I'm happy to discuss the dead-ends,
and a small portion of that code was even left in for possible future
use.
This patch includes:
* A change to the cli interface:
You now specify the front-end explicitly, instead of leaving it up to
the front-end to decide when to "activate". For example, instead of:
mgmt run --lang code.mcl
we now do:
mgmt run lang --lang code.mcl
We might rename the --lang flag in the future to avoid the awkward word
repetition. Suggestions welcome, but I'm considering "input". One
side-effect of this change is that flags which are "engine" specific
must now be specified with "run", before the front-end name. E.g.:
mgmt run --tmp-prefix lang --lang code.mcl
instead of putting --tmp-prefix at the end. We also changed the GAPI
slightly, but I've patched all code that used it. This also makes things
consistent with the "deploy" command.
* The deploys are more robust and let you deploy after a run
This has been vastly improved and lets mgmt really run as a smart
engine that can handle different workloads. If you don't want a new
deploy to be applied after you've started with `run`, you can use the
--no-watch-deploy option to block new deploys.
* The import statement exists and works!
We now have a working `import` statement. Read the docs, and try it out.
I think it's quite elegant how it fits in with `SetScope`. Have a look.
As a result, we now have some built-in functions available in modules.
This also adds the metadata.yaml entry-point for all modules. Have a
look at the examples or the tests. The bulk of the patch is to support
this.
* Improved lang input parsing code:
I rewrote the parsing that determines what runs when different things
are passed to --lang. Deciding between running an mcl file or raw code
is now handled in a more intelligent and re-usable way. See the
inputs.go file if you want to have a look. One casualty is that you
can't stream code from stdin *directly* to the front-end; it's
encapsulated into a deploy first. You can still use stdin though! I
doubt anyone will notice this change.
* The scope was extended to include functions and classes:
Go forth and import lovely code. All these exist in scopes now, and can
be re-used!
* Function calls actually use the scope now. Glad I got this sorted out.
* There is import cycle detection for modules!
Yes, this is another DAG. I think that's #4. I guess they're useful.
(There's a small sketch of the idea after this list.)
* A ton of tests and new test infra was added!
This should make it much easier to add new tests that run mcl code. Have
a look at TestAstFunc1 to see how to add more of these.
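To make the import-cycle bullet a bit more concrete, here's a tiny,
self-contained sketch of the general idea: a DFS over a made-up import
map, looking for a back edge. It's only an illustration; the real check
runs over mgmt's own module and graph types.

package main

import "fmt"

// detectCycle returns true if following imports from start ever revisits
// a module that is still on the current DFS path (i.e. a cycle).
func detectCycle(imports map[string][]string, start string) bool {
	onPath := map[string]bool{}  // modules on the current DFS path
	visited := map[string]bool{} // modules already fully explored
	var dfs func(string) bool
	dfs = func(mod string) bool {
		if onPath[mod] {
			return true // back edge: cycle found
		}
		if visited[mod] {
			return false
		}
		onPath[mod] = true
		for _, dep := range imports[mod] {
			if dfs(dep) {
				return true
			}
		}
		onPath[mod] = false
		visited[mod] = true
		return false
	}
	return dfs(start)
}

func main() {
	imports := map[string][]string{
		"a": {"b"},
		"b": {"c"},
		"c": {"a"}, // closes the loop
	}
	fmt.Println(detectCycle(imports, "a")) // true
}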
As usual, I'll try to keep these commits smaller in the future!
This is a giant refactor to move functions into a hierarchical module
layout. While this isn't entirely implemented yet, it should work
correctly once all the import bits have landed. What's broken at the
moment is the template function, which currently doesn't understand the
period separator.
This requires breaking changes in gofmt. It is hilarious that this was
changed. Oh well. This also moves to the latest stable etcd. Lastly,
this changes the `go vet` testing to test by package, since the new go
vet changed how it works and now fails without this change.
This giant patch makes some much-needed improvements to the code base.
* The engine has been rewritten and lives within engine/graph/
* All of the common interfaces and code now live in engine/
* All of the resources are in one package called engine/resources/
* The Res API can use different "traits" from engine/traits/
* The Res API has been simplified to hide many of the old internals
* The Watch & Process loops were previously inverted, but this is now fixed
* The likelihood of package cycles has been reduced drastically
* And much, much more...
Unfortunately, some code had to be temporarily removed. The remote code
had to be taken out, as did the prometheus code. We hope to have these
back in new forms as soon as possible.
This adds the ability to specify internal, resource-specific edges, with
and without notifications. We use the special words: "Notify", "Before",
"Listen", and "Depend". They must have the first character capitalized.
They also support the "elvis" operator.
This allows you to omit a resource parameter programmatically, and
avoids the need for an `undef` or `nil` in our language, which would
contribute to programming errors, crashes, and overall reduced safety.
This adds a simple API for adding static, polymorphic, pure functions.
This lets you define a list of type signatures and the associated
implementations to overload a particular function name. The internals of
this API then do all of the hard work of matching the available
signatures to what statically type checks, and then calling the
appropriate implementation.
While it might seem that this would only work for function polymorphism
over a finite number of possible types, and this is mostly true, you can
also add the `variant` "wildcard" type into your signatures, which will
allow you to match a wider set of them.
A canonical use case for this is the len function which can determine
the length of both lists and maps with any contained type. (Either the
type of the list elements, or the types of the map keys and values.)
When using this functionality, you must be careful to ensure that there
is only a single mapping from possible type to signature so that the
"dynamic dispatch" of the function is unique.
It is worth noting that this API won't cover functions which support an
arbitrary number of input arguments. The well-known case of this,
printf, is implemented with the more general function API, which is
more complicated.
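As an illustration of that dispatch idea (and only an illustration: the
registry, the type strings, and the names below are invented for this
example, not the actual API), here's a toy "len" with two registered
signatures and a dispatcher that picks the one matching the statically
determined argument type:

package main

import (
	"errors"
	"fmt"
)

// impl pairs a type signature with the function that implements it.
type impl struct {
	sig string
	fn  func(arg interface{}) interface{}
}

// two implementations overloading the single name "len":
var lenImpls = []impl{
	{
		sig: "func([]str) int",
		fn:  func(a interface{}) interface{} { return len(a.([]string)) },
	},
	{
		sig: "func(map{str: str}) int",
		fn:  func(a interface{}) interface{} { return len(a.(map[string]string)) },
	},
}

// call picks the single implementation whose signature matches the
// statically determined argument type, and then runs it.
func call(argType string, arg interface{}) (interface{}, error) {
	wanted := "func(" + argType + ") int"
	for _, im := range lenImpls {
		if im.sig == wanted {
			return im.fn(arg), nil
		}
	}
	return nil, errors.New("no matching signature for: " + wanted)
}

func main() {
	out, _ := call("[]str", []string{"a", "b", "c"})
	fmt.Println(out) // 3
	out, _ = call("map{str: str}", map[string]string{"x": "y"})
	fmt.Println(out) // 1
}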
This patch also adds some necessary library improvements for comparing
types to partial types, and to types containing variants.
Lastly, this fixes a bug in the `NewType` parser which parsed certain
complex function types wrong.
I forgot to handle the special case of a function using this API that
received no inputs. It was waiting for the first input to come in, and
as a result was never producing any output.
Remember that functions like this should *almost* be thought of as
constants of the system. You would expect their output to never change
during the lifetime of a particular program invocation.
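A toy sketch of that special case (hypothetical names, not the real
function engine): with zero inputs there is never an "inputs changed"
event to react to, so the value has to be emitted once, up front:

package main

import "fmt"

// runFunc streams outputs for a pure function. The zero-input branch is
// the fix being described: without it, we'd block on `inputs` forever.
func runFunc(nArgs int, inputs <-chan []int, call func([]int) int, out chan<- int) {
	defer close(out)
	if nArgs == 0 {
		out <- call(nil) // behaves like a constant of the system
		return
	}
	for args := range inputs {
		out <- call(args)
	}
}

func main() {
	out := make(chan int)
	answer := func(_ []int) int { return 42 } // a nullary, pure function
	go runFunc(0, nil, answer, out)
	for v := range out {
		fmt.Println(v) // 42, exactly once
	}
}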
This patch adds a simple function API for writing simple, pure
functions. This should reduce the amount of boilerplate required for
most functions, and make growing a stdlib significantly easier. If you
need to build more complex, event-generating functions, or statically
polymorphic functions, then you'll still need to use the normal API for
now.
This also makes all of these pure functions available automatically
within templates. It might make sense to group these functions into
packages to make their logical organization easier, but this is a good
enough start for now.
Lastly, this adds some missing pieces to our types library. You can now
use `ValueOf` to convert from a `reflect.Value` to the corresponding
`Value` in our type system, if an equivalent exists.
Unfortunately, we're severely lacking in tests for these new types
library additions, but look forward to growing some in the future!
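For a rough feel of what such a conversion does, here is a toy valueOf
over a toy Value type; it's only meant to show the shape of the mapping,
not the real types package:

package main

import (
	"fmt"
	"reflect"
)

// Value is a stand-in for a value in our own type system.
type Value struct {
	Kind string
	Str  string
	Int  int64
	Bool bool
}

// valueOf maps a reflect.Value onto our Value, if an equivalent exists.
func valueOf(v reflect.Value) (*Value, error) {
	switch v.Kind() {
	case reflect.String:
		return &Value{Kind: "str", Str: v.String()}, nil
	case reflect.Int, reflect.Int64:
		return &Value{Kind: "int", Int: v.Int()}, nil
	case reflect.Bool:
		return &Value{Kind: "bool", Bool: v.Bool()}, nil
	default:
		return nil, fmt.Errorf("no equivalent for kind: %s", v.Kind())
	}
}

func main() {
	val, err := valueOf(reflect.ValueOf("hello"))
	fmt.Println(val, err) // &{str hello 0 false} <nil>
}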
This is an initial implementation of the mgmt language. It is a
declarative (immutable), functional, reactive, domain-specific
programming language. It is intended to be a language that is:
* safe
* powerful
* easy to reason about
With these properties, we hope this language, and the mgmt engine will
allow you to model the real-time systems that you'd like to automate.
This also includes a number of other associated changes. Sorry for the
large size of this patch.
This is an example of a race-free long-poll server and client. It uses a
redirection method to signal that the "Watch" is running.
Other race-free methods exist.
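The code isn't inlined here, but one plausible shape of the scheme
(endpoint names and details invented for this sketch) is: register the
watch channel first, and only then send the redirect, so that receiving
the redirect proves the watch is already live:

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"sync"
)

var (
	mu       sync.Mutex
	watchers = map[string]chan string{}
	counter  int
)

func main() {
	// 1) register the watch channel *before* redirecting; receiving the
	// redirect therefore proves the watch is already running.
	http.HandleFunc("/watch", func(w http.ResponseWriter, r *http.Request) {
		mu.Lock()
		counter++
		id := strconv.Itoa(counter)
		watchers[id] = make(chan string, 8) // buffered, so events queue up
		mu.Unlock()
		http.Redirect(w, r, "/events?id="+id, http.StatusTemporaryRedirect)
	})
	// 2) the long-poll itself: block on the already-registered channel.
	http.HandleFunc("/events", func(w http.ResponseWriter, r *http.Request) {
		mu.Lock()
		ch := watchers[r.URL.Query().Get("id")]
		mu.Unlock()
		if ch == nil {
			http.NotFound(w, r)
			return
		}
		fmt.Fprintln(w, <-ch) // blocks until an event arrives
	})
	// 3) anything that changes state notifies every registered watcher.
	http.HandleFunc("/poke", func(w http.ResponseWriter, r *http.Request) {
		mu.Lock()
		defer mu.Unlock()
		for _, ch := range watchers {
			select {
			case ch <- "event":
			default: // don't block if a watcher's buffer is full
			}
		}
	})
	http.ListenAndServe(":8080", nil)
}

A client can observe the redirect itself, for example with an
http.Client CheckRedirect hook, and so it knows the watch is active
before it makes any changes; that ordering is presumably what removes
the race.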
This patch adds autoedges between users and groups, and extends
users with additional fields for supplementary groups and a named
primary group. Also, some small fixes to log and error messages.
This allows the implementer of the GAPI to specify three parameters for
every Next message sent on the channel. The Fast parameter tells the
agent whether it should pause quickly or let the sequence finish. A
quick pause means the engine pauses immediately after the currently
running resources finish, whereas a slow (default) pause allows the
whole wave of execution to finish. The slow pause is usually preferred
for complex graphs where we want each step to complete. The Exit
parameter tells the engine to exit, and the Err parameter tells the
engine that an error occurred.
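For reference, the three parameters map naturally onto a struct shaped
roughly like the following; this is an illustrative sketch, not
necessarily the exact definition in the gapi package:

package main

import "fmt"

// Next is roughly what the GAPI sends on its channel for each message.
type Next struct {
	Fast bool  // pause quickly, rather than letting the wave of execution finish
	Exit bool  // ask the engine to shut down
	Err  error // report an error to the engine
}

func handle(n Next) {
	switch {
	case n.Err != nil:
		fmt.Println("error from GAPI:", n.Err)
	case n.Exit:
		fmt.Println("GAPI requested exit")
	default:
		fmt.Println("new graph available; fast pause:", n.Fast)
	}
}

func main() {
	next := make(chan Next, 2)
	next <- Next{}           // normal: slow pause, then swap in the new graph
	next <- Next{Exit: true} // shutdown request
	close(next)
	for n := range next {
		handle(n)
	}
}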
This adds send/recv output parameters from exec for stdout, stderr, and
output, which is a combination of the two. This also includes a few
tests and a working example!
Gone are the `some_command > some_file` days of puppet.
Since the pgraph graph can store arbitrary pointers, we don't need a
special method to create the vertices or edges as long as they implement
the String() string method. This cleans up the library and some of the
examples which I let rot previously.
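To show why String() string is all that's needed, here is a toy graph
keyed on that one-method interface; it's just the idea, not pgraph's
real API:

package main

import "fmt"

// Vertex is anything with a printable identity.
type Vertex interface {
	String() string
}

// Graph stores arbitrary pointers as vertices.
type Graph struct {
	adjacency map[Vertex]map[Vertex]struct{}
}

func NewGraph() *Graph {
	return &Graph{adjacency: make(map[Vertex]map[Vertex]struct{})}
}

func (g *Graph) AddVertex(v Vertex) {
	if _, ok := g.adjacency[v]; !ok {
		g.adjacency[v] = make(map[Vertex]struct{})
	}
}

func (g *Graph) AddEdge(v1, v2 Vertex) {
	g.AddVertex(v1)
	g.AddVertex(v2)
	g.adjacency[v1][v2] = struct{}{}
}

// task is any user type; it only needs a String method to be a Vertex.
type task struct{ name string }

func (t *task) String() string { return t.name }

func main() {
	g := NewGraph()
	a, b := &task{"a"}, &task{"b"}
	g.AddEdge(a, b)
	fmt.Println(a, "->", b) // a -> b
}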
This is something I've wanted to do for a while, but for the reasons
mentioned in the comments, I've been unable to complete yet. I figured
I'd at least merge what does exist so far in case someone else would
like to pick this up. It's a bit of a brain hurdle / monster, because
the tricky part is refactoring the core engine so that this fits in
nicely. Perhaps someone will have more time and/or less tunnel vision
than I to either merge something or sketch out some ideas on the path
forwards. I think it's a useful goal because if recursive resources are
possible, it could force the core engine into a more elegant design.
Happy hacking!
These are helper functions to merge existing graphs into a main graph,
with or without adding an edge relationship between a vertex and the new
graph. These are particularly useful when using mgmt as a lib to break
units of work apart into functions that create subgraphs, which are then
added to the main graph when they're returned.
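A rough sketch of the merge idea, using plain adjacency maps in place
of the real graph type (all names invented for the example):

package main

import "fmt"

type graph map[string][]string // vertex -> outgoing edges

// mergeGraph copies sub into dst; if from is non-empty, it also adds an
// edge from that existing vertex to each of the given subgraph roots.
func mergeGraph(dst, sub graph, from string, roots ...string) {
	for v, edges := range sub {
		dst[v] = append(dst[v], edges...)
	}
	if from != "" {
		dst[from] = append(dst[from], roots...)
	}
}

func main() {
	g := graph{"setup": nil}
	sub := graph{"pkg": {"svc"}, "svc": nil} // a unit of work built elsewhere
	mergeGraph(g, sub, "setup", "pkg")       // setup -> pkg -> svc
	fmt.Println(g) // map[pkg:[svc] setup:[pkg] svc:[]]
}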
The graph of package dependencies in golang is a DAG, and as such
doesn't allow
cycles. Clean up this lib so that it eventually doesn't import our
resources module or anything else which might want to import it.
This patch makes adjacency private, and adds a generalized key store to
the graph struct.
This puts the generation of the initial event into the Next method of
the GAPI. If it does not happen, then we will never get a graph. This is
important because this notifies the GAPI when we're actually ready to
try and generate a graph, rather than blocking on the Graph method if we
have a long compile, for example.
This is also required for the etcd watch cleanup.
This is a new resource for setting key value pairs in our global world
database. Currently only etcd is supported. Some of the implications and
possibilities of this resource will become more obvious with future
commits!
You can bother/test this resource with these commands:
ETCDCTL_API=3 etcdctl get "/_mgmt/strings/" --prefix=true
ETCDCTL_API=3 etcdctl put "/_mgmt/strings/KEY/HOSTNAME" 42
Replace the KEY and HOSTNAME variables with the actual values you'd like
to use. The 42 is the value that is set.
This adds a P/V style semaphore mechanism to the resource graph. This
enables the user to specify a number of "id:count" tags associated with
each resource which will reduce the parallelism of the CheckApply
operation to that maximum count.
This is particularly interesting because (assuming I'm not mistaken)
the implementation is deadlock-free, provided that no individual
resource ever blocks permanently during execution! I don't have a formal
proof of this, but I was able to convince myself on paper that it was
the case. An actual proof that N P/V counting semaphores in a DAG won't
ever deadlock would be particularly welcome! Hint: the trick is to
acquire them in alphabetical order while respecting the DAG flow.
Disclaimer: this assumes that the lock count is always > 0, of course.
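Here is a small, self-contained illustration of that alphabetical
acquisition trick, using channel-based counting semaphores; it's not
mgmt's implementation, just the idea:

package main

import (
	"fmt"
	"sort"
	"sync"
)

// semaphore is a counting semaphore: capacity == the count in "id:count".
type semaphore chan struct{}

func (s semaphore) P() { s <- struct{}{} } // acquire
func (s semaphore) V() { <-s }             // release

var semas = map[string]semaphore{
	"db":  make(semaphore, 1), // "db:1"
	"net": make(semaphore, 2), // "net:2"
}

// withSemas acquires the named semaphores in sorted order, runs fn, and
// then releases them.
func withSemas(ids []string, fn func()) {
	sort.Strings(ids) // the global ordering is what prevents deadlock
	for _, id := range ids {
		semas[id].P()
	}
	defer func() {
		for _, id := range ids {
			semas[id].V()
		}
	}()
	fn()
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			withSemas([]string{"net", "db"}, func() {
				fmt.Println("CheckApply", i) // "db:1" => at most one at a time
			})
		}(i)
	}
	wg.Wait()
}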
This allows hot (un)plugging of CPUs! It also includes some general
cleanups which were necessary to support this, as well as some other
features for the virt resource. Hot-unplugging requires Fedora 25.
It also comes with a mini shell script to help demo this capability.
Many thanks to pkrempa for his help with the libvirt API!