This requires reformatting due to breaking changes in gofmt's output. It
is hilarious that this was changed. Oh well. This also moves to the
latest stable etcd. Lastly, this changes the `go vet` testing to run per
package, since the new `go vet` changed how it works and now fails
without this change.
It turns out that some planned additions to the parser make it so that
the map type definition can be ambiguous. As a result, this patch
updates the definition so that the map definition is not confused with
an open curly bracket anywhere.
Thanks to pestle and stbenjamin for their help understanding yacc!
This adds the additional pieces to the class/include statements needed
to support or detect class recursion. Recursion is not currently
supported, but I figured I'd commit the detection code as a variant of
the recursion implementation, since I think this is correct, and it was
a bit tricky for me to get right.
This adds support for the class definition statement and the include
statement which produces the output from the corresponding class.
The classes in this language support optional input parameters.
In contrast with other tools, the class is *not* a singleton, although
it can be used as one. Using include with equivalent input parameters
will cause the class to act as a singleton, although it can also be used
to produce distinct output.
The output produced by including a class is actually a list of
statements (a prog) which is ultimately a list of resources and edges.
This is different from functions, which produce values.
This means that it's legal to produce two compatible (usually identical)
resources without a compile error and without causing two of them to get
run. It's too bad puppet never got this right.
It's probably worth checking if this could be done for edges too, and if
the logic can be contained in the engine and not in the frontend.
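To make those de-duplication semantics concrete, here's a toy golang
sketch (not the real engine code, and the Resource struct is made up)
where two includes that produce identical resources collapse into one:

```go
package main

import "fmt"

// Resource is a hypothetical, simplified stand-in for a real resource.
type Resource struct {
	Kind  string
	Name  string
	Param string
}

// Dedup collapses compatible (here: identical) resources into one, which is
// roughly what lets two includes with equivalent inputs act like a singleton.
func Dedup(resources []Resource) []Resource {
	seen := make(map[Resource]struct{})
	out := []Resource{}
	for _, r := range resources {
		if _, exists := seen[r]; exists {
			continue // a compatible duplicate, only run one of them
		}
		seen[r] = struct{}{}
		out = append(out, r)
	}
	return out
}

func main() {
	// Two includes of the same class with equivalent parameters...
	rs := []Resource{
		{Kind: "file", Name: "/tmp/hello", Param: "world"},
		{Kind: "file", Name: "/tmp/hello", Param: "world"},
	}
	// ...produce only a single resource to run.
	fmt.Println(Dedup(rs)) // [{file /tmp/hello world}]
}
```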
Turns out we can actually cause the parser to error instead of needing
to panic. It definitely seems to work, and is better than the panic. The
only awkward thing is how this plumbing works in yacc world. If anyone
knows why this is wrong, please let me know. Reading the generated code
seems to imply that this is correct.
This allows golang tests to be marked as root or !root using build tags.
The matching tests are then run as expected using our test runner.
This also disables test caching which is unfriendly to repeated test
running and is an absurd golang default to add.
Lastly this hooks up the testing verbose flag to tests that accept a
debug variable.
These tests aren't enabled on travis yet because of how it installs
golang.
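For reference, here's a minimal sketch of what a root-gated test file
can look like. The file contents are made up, but the build tag
mechanism and the `-count=1` cache bypass are standard golang:

```go
// +build root

package example

import (
	"os"
	"testing"
)

// TestNeedsRoot is only built when the "root" build tag is set, e.g.:
//   go test -tags root -count=1 ./...
// (-count=1 bypasses the test cache; a sibling file tagged "!root" would
// hold the tests that must run as a regular user instead.)
func TestNeedsRoot(t *testing.T) {
	if os.Geteuid() != 0 {
		t.Fatal("this test requires root")
	}
	// privileged test logic would go here...
}
```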
This giant patch makes some much needed improvements to the code base.
* The engine has been rewritten and lives within engine/graph/
* All of the common interfaces and code now live in engine/
* All of the resources are in one package called engine/resources/
* The Res API can use different "traits" from engine/traits/
* The Res API has been simplified to hide many of the old internals
* The Watch & Process loops were previously inverted, but this is now fixed
* The likelihood of package cycles has been reduced drastically
* And much, much more...
Unfortunately, some code had to be temporarily removed. The remote code
had to be taken out, as did the prometheus code. We hope to have these
back in new forms as soon as possible.
If we trigger a close, we must not run the LangClose before we've exited
from the loop, because that loop could race and run code which depends
on LangClose not having run first. So run the loop shutdown, then let
the wait group expire, before shutting down the lang.
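A minimal golang sketch of that ordering, with made-up stopLoop and
langClose functions standing in for the real ones:

```go
package example

import "sync"

// shutdown demonstrates the ordering described above: stop the loop first,
// wait for its goroutines to actually finish, and only then close the lang,
// so that no in-flight loop iteration can observe a closed lang.
func shutdown(stopLoop func(), wg *sync.WaitGroup, langClose func() error) error {
	stopLoop()         // 1) ask the loop to exit
	wg.Wait()          // 2) let the wait group expire
	return langClose() // 3) now it's safe to shut down the lang
}
```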
The golang race detector complains about some unimportant races, and as
a result, this patch adds some mutexes to prevent these test failures.
We actually lock more than necessary, because a more accurate version
would be more time consuming to implement. Secondarily, it's likely that
in the future we'll replace this function graph algorithm with something
that is guaranteed to be glitch-free and supports back pressure.
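As a rough sketch of the kind of locking added (the names are
illustrative, not the real ones), a coarse mutex around the shared table
is enough to satisfy the race detector, even if it locks more than
strictly necessary:

```go
package example

import "sync"

// valueTable is an illustrative shared table of produced values.
type valueTable struct {
	mutex  sync.Mutex
	values map[string]interface{}
}

// Set stores a value under the coarse lock.
func (t *valueTable) Set(name string, v interface{}) {
	t.mutex.Lock()
	defer t.mutex.Unlock()
	if t.values == nil {
		t.values = make(map[string]interface{})
	}
	t.values[name] = v
}

// Get reads a value under the same lock, which is broader than strictly
// required, but it keeps the race detector happy.
func (t *valueTable) Get(name string) (interface{}, bool) {
	t.mutex.Lock()
	defer t.mutex.Unlock()
	v, exists := t.values[name]
	return v, exists
}
```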
I noticed a very intermittent test failure where interpret would end up
running, but *fail* because a value wasn't present. This should never
happen, because the function engine is designed to only call interpret
when there has been at least one value produced for every node in the
AST. So what is the bug that would produce:
interpret error: could not interpret: func value does not yet exist
About 20 minutes ago while I was getting to bed, it occurred to me where
to look! Out of bed and to the laptop, and after briefly reminding
myself of the code, I think I've found the issue.
What I think was happening was that an AST node would produce a value,
and send a message on the aggregate channel. This channel is monitored,
and every time it receives a message, it checks to ensure that all the
values now exist before producing a message for interpret to run.
However, this AST node was not the final one to produce a value, and
before the message was read by the aggregate channel, the last remaining
AST node ran and set its "loaded" state to `true`, but *before* its
value was made available for the aggregate channel to read. That channel
then occasionally won the race and tried to access a value before it
existed, thus causing our intermittent bug.
At least I think that's what was going on. Hopefully this patch fixes
it; if not, then there's another bug hiding too! And of course, this
entire function engine could do with some proper analysis from someone
familiar with glitches, back pressure, and FRP parallelism.
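Here's a stripped-down golang sketch of the race and the ordering fix,
under the assumption that the fix is simply to store the value before
flipping the loaded flag and notifying the aggregator (the names are
illustrative, not the real ones):

```go
package example

import "sync"

// node is an illustrative stand-in for an AST function node.
type node struct {
	mutex  sync.Mutex
	value  interface{}
	loaded bool
}

// produce stores a new value and then marks the node as loaded, in that
// order, before notifying the aggregate channel. If loaded were set (or the
// notification sent) before the value was stored, the aggregator could see
// "all nodes loaded" and call interpret on a value that doesn't exist yet.
func (n *node) produce(v interface{}, aggregate chan<- struct{}) {
	n.mutex.Lock()
	n.value = v     // 1) make the value available first
	n.loaded = true // 2) only then mark the node as loaded
	n.mutex.Unlock()
	aggregate <- struct{}{} // 3) finally, wake up the aggregator
}

// ready reports whether this node has produced at least one value.
func (n *node) ready() bool {
	n.mutex.Lock()
	defer n.mutex.Unlock()
	return n.loaded && n.value != nil
}
```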
One particular note was that I used my brain, not some fancy debugging
tool, to find this. Maybe skilled debuggers can forklift their tools
onto this type of problem, but I don't have those skills!
¯\_(ツ)_/¯
This adds the ability to specify internal, resource specific edges, with
and without notifications. We use the special words: "Notify", "Before",
"Listen", and "Depend". They must have the first character capitalized.
They also support the "elvis" operator.
This allows you to omit a resource parameter programmatically, and
avoids the need for an `undef` or `nil` in our language, which would
contribute to programming errors, crashes, and overall reduced safety.
This adds a simple API for adding static, polymorphic, pure functions.
This lets you define a list of type signatures and the associated
implementations to overload a particular function name. The internals of
this API then do all of the hard work of matching the available
signatures to what statically type checks, and then calling the
appropriate implementation.
While it might seem as if this would only work for function polymorphism
with a finite number of possible types, and that is mostly true, it also
allows you to add the `variant` "wildcard" type into your signatures,
which lets you match a wider set of signatures.
A canonical use case for this is the len function which can determine
the length of both lists and maps with any contained type. (Either the
type of the list elements, or the types of the map keys and values.)
When using this functionality, you must be careful to ensure that there
is only a single mapping from possible type to signature so that the
"dynamic dispatch" of the function is unique.
It is worth noting that this API won't cover functions which support an
arbitrary number of input arguments. The well-known case of this,
printf, is implemented with the more general function API which is more
complicated.
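To make the idea concrete, here's a hedged sketch of what registering a
polymorphic `len` might look like. The `register` helper, the signature
string syntax, and the implementation shape are all illustrative
assumptions, not the exact API:

```go
package example

// Impl is an illustrative pairing of a type signature with an implementation.
type Impl struct {
	Sig string                               // e.g. "func([]variant) int"
	Fn  func(args []interface{}) interface{} // simplified implementation shape
}

// register is a stand-in for the real registration call; it would normally
// hand these signatures to the API that does the static type matching and
// picks the unique implementation that type checks.
func register(name string, impls []Impl) { /* ... */ }

func init() {
	// A hypothetical "len" that is overloaded for lists and maps with any
	// contained type, via the variant wildcard in the signatures.
	register("len", []Impl{
		{
			Sig: "func([]variant) int",
			Fn: func(args []interface{}) interface{} {
				return len(args[0].([]interface{}))
			},
		},
		{
			Sig: "func(map{variant: variant}) int",
			Fn: func(args []interface{}) interface{} {
				return len(args[0].(map[interface{}]interface{}))
			},
		},
	})
}
```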
This patch also adds some necessary library improvements for comparing
types to partial types, and to types containing variants.
Lastly, this fixes a bug in the `NewType` parser which parsed certain
complex function types wrong.
I forgot to handle the special case of a function using this API that
received no inputs. It was waiting for the first input to come in, and
as a result was never producing any output.
Remember that functions like this should *almost* be thought of as
constants of the system. You would expect their output to never change
during the lifetime of a particular program invocation.
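A minimal sketch of how that special case can be handled in a function's
streaming loop; the loop shape and channel names are assumptions for
illustration, not the real engine code:

```go
package example

// stream is an illustrative event loop for a pure function using this API.
// A function with no inputs will never receive anything on `input`, so it
// must emit its (effectively constant) result once up front instead of
// waiting forever for a first input that never arrives.
func stream(numInputs int, call func() interface{}, input <-chan struct{}, output chan<- interface{}, done <-chan struct{}) {
	if numInputs == 0 {
		output <- call() // special case: emit once, like a constant
		return
	}
	for {
		select {
		case <-input:
			output <- call() // recompute whenever new inputs arrive
		case <-done:
			return
		}
	}
}
```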
This patch adds a simple function API for writing simple, pure
functions. This should reduce the amount of boilerplate required for
most functions, and make growing a stdlib significantly easier. If you
need to build more complex, event-generating functions, or statically
polymorphic functions, then you'll still need to use the normal API for
now.
This also makes all of these pure functions available automatically
within templates. It might make sense to group these functions into
packages to make their logical organization easier, but this is a good
enough start for now.
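As a hedged sketch of how such an API might be used (the `register` name
and the signature string convention are assumptions, not the exact API):

```go
package example

import "strings"

// register is a stand-in for the real simple-function registration call.
// The idea is to pair a static type signature string with a pure golang
// implementation: no events, no polymorphism, and very little boilerplate.
func register(name, sig string, fn interface{}) { /* ... */ }

func init() {
	// A pure, static function: one signature, one implementation, which
	// would then also be available automatically within templates.
	register("to_lower", "func(str) str", func(s string) string {
		return strings.ToLower(s)
	})
}
```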
Lastly, this adds some missing pieces to our types library. You can now
use `ValueOf` to convert from a `reflect.Value` to the corresponding
`Value` in our type system, if an equivalent exists.
Unfortunately, we're severely lacking in tests for these new types
library additions, but we look forward to growing some in the future!
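The rough shape of such a conversion, sketched with a stand-in helper so
that it stays self-contained (the real `ValueOf` lives in the types
library and returns its `Value` type, which isn't shown here):

```go
package example

import (
	"fmt"
	"reflect"
)

// toValue mimics the rough shape of the new ValueOf helper: take a
// reflect.Value and return a language value if an equivalent exists. Here
// the "language value" is just an interface{} to keep the sketch small.
func toValue(v reflect.Value) (interface{}, error) {
	switch v.Kind() {
	case reflect.Bool, reflect.String, reflect.Int64, reflect.Float64:
		return v.Interface(), nil
	default:
		return nil, fmt.Errorf("no equivalent value for kind: %s", v.Kind())
	}
}

// example converts a native golang string into the corresponding value.
func example() {
	if v, err := toValue(reflect.ValueOf("hello")); err == nil {
		fmt.Println(v) // "hello"
	}
}
```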
The test for gometalinter got silently broken in an earlier commit.
Look for the missing space that was added back in this commit to see
why! In any case, this now fixes some of the things that weren't
previously being caught because of that breakage.
If anyone knows how to run these sorts of tests properly so that entire
packages are tested and so that we can enable additional tests, please
let me know!
It's also unclear why goreportcard catches a few additional problems
which aren't found by running this ourselves.
See:
https://goreportcard.com/report/github.com/purpleidea/mgmt
for more information.
This is an initial implementation of the mgmt language. It is a
declarative (immutable), functional, reactive, domain-specific
programming language. It is intended to be a language that is:
* safe
* powerful
* easy to reason about
With these properties, we hope this language, and the mgmt engine will
allow you to model the real-time systems that you'd like to automate.
This also includes a number of other associated changes. Sorry for the
large size of this patch.