We had mapped the field type to a dummy type instead of to T2, the return
type. This broke unification for the load function lookups. This is now
fixed, and some tests have been added.
This adds support for variant types in the simple poly definitions. It
is recommended that you avoid using these as much as possible, because
they're a bit harder for the type unification to solve. The way this
works is that these functions look at the available input types and
then generate a (recursive) set of invariants which might hold true.
Any impossible ones are filtered out, which is where this variant
matching is done. It's less likely that you'll get a solution with this
mechanism, but it is possible.
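For illustration only, here is a tiny standalone sketch of that
filter-the-impossible-ones idea. The candidate struct, the possible helper
and the string-based types are all invented for the example; this is not
the real unification code.

package main

import "fmt"

// candidate is a hypothetical, simplified function signature.
type candidate struct {
	args []string // input types; "variant" means "anything"
	ret  string   // return type
}

// possible reports whether a candidate could still match the known inputs.
func possible(c candidate, inputs []string) bool {
	if len(c.args) != len(inputs) {
		return false
	}
	for i, a := range c.args {
		if a != "variant" && a != inputs[i] {
			return false // definitely impossible, filter it out
		}
	}
	return true
}

func main() {
	candidates := []candidate{
		{args: []string{"int", "int"}, ret: "int"},
		{args: []string{"str", "str"}, ret: "str"},
		{args: []string{"variant", "variant"}, ret: "bool"},
	}
	inputs := []string{"str", "str"} // what we know so far

	var viable []candidate
	for _, c := range candidates {
		if possible(c, inputs) {
			viable = append(viable, c)
		}
	}
	fmt.Println(viable) // the remaining (possibly >1) options go to the solver
}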
This adds a safety check in case someone sneaks a variant type into the
simple function signature. These can be tricky to detect, and it's
simpler to catch them right here.
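Roughly, the guard looks like this. The checkSignature and hasVariant
names and the string-based signature are stand-ins for the real recursive
type inspection, not the actual API.

package main

import (
	"fmt"
	"strings"
)

// checkSignature refuses any signature that still contains a variant type.
func checkSignature(sig string) error {
	if hasVariant(sig) {
		return fmt.Errorf("simple function API does not allow variant types: %s", sig)
	}
	return nil
}

// hasVariant pretends to do the recursive type inspection.
func hasVariant(sig string) bool {
	return strings.Contains(sig, "variant")
}

func main() {
	fmt.Println(checkSignature("func(str, str) str"))    // <nil>
	fmt.Println(checkSignature("func(variant) variant")) // error
}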
From a design point of view, we might consider actually permitting
these, like we did with the simple poly API, but it's probably better
for them to get implemented in that API instead (if we decide to allow
this long-term) and keep this simple API very simple.
All polymorphic functions should use the new API, at least until we
implement a compat wrapper. But it's probably best if we get rid of the
old API as soon as all of this type unification works properly.
This adds a subtle unification invariant between the expression and the
function's return value. I am not entirely sure how often this will get
used, but it could be valuable in the right instance if this isn't
already learned through other sources. I'm fairly confident that it
isn't incorrect, so in the worst case it's redundant information for the
unification solver.
This is being added as a separate commit so that it's obvious how this
type of unification invariant can be applied.
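As a toy illustration of why a redundant-but-correct equality invariant is
harmless: if one side's type is already known, the other side learns it; if
both are known and agree, nothing changes. The applyEquality helper and the
string keys below are made up for the example.

package main

import "fmt"

// applyEquality propagates a known type across an equality invariant.
func applyEquality(known map[string]string, a, b string) {
	switch {
	case known[a] != "" && known[b] == "":
		known[b] = known[a]
	case known[b] != "" && known[a] == "":
		known[a] = known[b]
	}
}

func main() {
	known := map[string]string{"return:myfunc": "str"}
	// the new invariant: the call expression has the same type as the return
	applyEquality(known, "expr:myfunc(x)", "return:myfunc")
	fmt.Println(known["expr:myfunc(x)"]) // "str"
}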
We should probably add some tests for this function because it once had
type unification bugs, and while adding this new API method, I somehow
hit some temporary new ones that have since been fixed.
This is an implementation of the Unify approach for the simplepoly
function API, which wraps the full function API. It is unique in that a
lot of different functions use it, and it is easy to build functions
with it. It needs to use exclusives to represent the different options,
but at least it filters out any that aren't viable.
The Unify implementation here is fairly similar to the patterns in the
template() function.
To improve the filtering, it would be excellent if we could examine the
return type in `solved` somehow (if it is known) and use that to trim
our list of exclusives down even further! The smaller the exclusives
are, the faster everything in the solver can run.
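A hypothetical sketch of that trimming step. The sig struct and the
trimByReturn helper are invented names, not solver code.

package main

import "fmt"

// sig is a made-up stand-in for one option inside an exclusive.
type sig struct {
	args []string
	ret  string
}

// trimByReturn keeps only the options whose return type matches what the
// solver already knows, if anything.
func trimByReturn(options []sig, solvedReturn string) []sig {
	if solvedReturn == "" {
		return options // nothing known yet, keep everything
	}
	var out []sig
	for _, o := range options {
		if o.ret == solvedReturn {
			out = append(out, o)
		}
	}
	return out
}

func main() {
	exclusive := []sig{
		{args: []string{"int"}, ret: "int"},
		{args: []string{"str"}, ret: "str"},
	}
	fmt.Println(trimByReturn(exclusive, "str")) // only the str option remains
}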
This is an implementation of the Unify approach for the operator
function. It is unique in that it is a wrapper around the simple
operator function API.
As with the simplepoly implementation above, the filtering could be
improved if we could examine the return type in `solved` (when it is
known) and use it to trim the list of exclusives down even further,
since smaller exclusives let the solver run faster.
In case something in the type unification tries to speculatively call
Info before it's ready to produce a valid sig, make sure we only return
a definitive answer (non-nil, and no variant types) once we've
conclusively finished defining the signature.
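The shape of that guard, sketched with placeholder types: the Func struct,
the string-based signature and the hasVariant-style check below are
simplifications, not the real Info implementation.

package main

import (
	"fmt"
	"strings"
)

type Func struct {
	sig string // empty until the signature has been decided
}

// Info returns the signature, or "" (think: nil) if it isn't definitive yet.
func (f *Func) Info() string {
	if f.sig == "" || strings.Contains(f.sig, "variant") {
		return "" // not ready: speculative callers must handle this
	}
	return f.sig
}

func main() {
	f := &Func{}
	fmt.Printf("%q\n", f.Info()) // "" while still undecided
	f.sig = "func(str) int"
	fmt.Printf("%q\n", f.Info()) // now definitive
}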
This is mainly meant as a useful test case, but might as well have it be
fun too. As an aside, it taught me a surprising result about the %v verb
in printf, and we'll have to decide if it's an issue we care about.
https://github.com/golang/go/issues/46118
The interesting thing about this method is that it uses the simplepoly
API but has no input args -- only the output types are different. If it
had identical types in the input args, that might also have been
interesting, but it's rarer to have none. Hopefully this exercises
our type unification logic.
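As a standalone illustration of why this is a good exercise: with no input
args, the expected return type from the surrounding context is the only
thing that can disambiguate. The candidates map, the signatures and the
resolve helper below are invented for the example.

package main

import "fmt"

// Candidate signatures that differ only in return type.
var candidates = map[string]string{
	"func() int":   "int",
	"func() str":   "str",
	"func() float": "float",
}

func resolve(expectedReturn string) (string, error) {
	for sig, ret := range candidates {
		if ret == expectedReturn {
			return sig, nil
		}
	}
	return "", fmt.Errorf("no candidate returns %s", expectedReturn)
}

func main() {
	sig, err := resolve("str")
	fmt.Println(sig, err) // func() str <nil>
}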
This is an implementation of the Unify approach for the contains
function. It is unique in that its generator invariant can recursively
generate a new generator invariant once.
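A generic sketch of that shape, where a generator may emit one more
generator and then must bottom out. All of the names and the depth counter
below are invented to show the recursion pattern, not the contains()
internals.

package main

import "fmt"

type invariant interface{ String() string }

type plain string

func (p plain) String() string { return string(p) }

type generator struct {
	depth int // how many more times we may generate another generator
}

func (g generator) String() string { return fmt.Sprintf("generator(depth=%d)", g.depth) }

// generate returns the invariants this generator expands into. At depth > 0
// it may include another generator; at depth 0 it may not.
func (g generator) generate() []invariant {
	out := []invariant{plain("elem type == list elem type")}
	if g.depth > 0 {
		out = append(out, generator{depth: g.depth - 1})
	}
	return out
}

func main() {
	for _, inv := range (generator{depth: 1}).generate() {
		fmt.Println(inv)
	}
}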
If there is a programming error in any func Stream() implementation,
then the node could never output anything, causing the engine to hang
indefinitely waiting for an initial value that will never come.
Nodes keep track of whether they are loaded, so testing for this
occurrence is pretty simple. Any node that does not return output at
least once before it closes its output channel is treated as a fatal
error, on which the engine will exit.
Signed-off-by: Joe Groocock <me@frebib.net>
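Stripped of the engine details, the check is roughly this. The runNode
helper and the string-valued channel are placeholders for the real node and
value types.

package main

import (
	"errors"
	"fmt"
)

// runNode consumes a node's output channel and reports a fatal error if the
// channel closes before a single value was produced.
func runNode(output <-chan string) error {
	loaded := false
	for v := range output {
		loaded = true
		_ = v // hand the value off to downstream nodes...
	}
	if !loaded {
		return errors.New("node closed its output without ever producing a value")
	}
	return nil
}

func main() {
	ch := make(chan string)
	close(ch) // a buggy Stream() that never sent anything
	fmt.Println(runNode(ch))
}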
In case a programmer makes a mistake and passes in a function using the
simple function API without a type, or even without the entire value,
we'll now return a sensible error message and panic in init() instead of
relying on a test to catch it.
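Sketching the idea with an invented registry and FuncValue shape: fail
loudly at registration time instead of waiting for a test (or a user) to
trip over a half-registered function.

package main

import "fmt"

type FuncValue struct {
	T string // the type signature; "" means the programmer forgot it
	V func([]string) (string, error)
}

var registry = map[string]*FuncValue{}

func Register(name string, fn *FuncValue) {
	if fn == nil {
		panic(fmt.Sprintf("can't register nil function: %s", name))
	}
	if fn.T == "" {
		panic(fmt.Sprintf("function %s was registered without a type", name))
	}
	registry[name] = fn
}

func init() {
	Register("answer", &FuncValue{
		T: "func() int",
		V: func([]string) (string, error) { return "42", nil },
	})
}

func main() { fmt.Println(len(registry)) }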
The vumeter example was written quickly and without much care. This
fixes a possible race (panic) and also removes the busy loop that wastes
CPU while we're waiting for the first value to come in.
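The general pattern, not the vumeter code itself: block on the channel for
the first value instead of spinning in a busy loop polling a shared
variable, which both wastes CPU and is a data race.

package main

import (
	"fmt"
	"time"
)

func main() {
	values := make(chan int)
	go func() {
		time.Sleep(10 * time.Millisecond) // simulate the first value arriving
		values <- 42
	}()

	first := <-values // blocks without burning CPU, no shared state to race on
	fmt.Println("first value:", first)
}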
Since we focus on safety, it would be nice to reduce the chance of any
runtime errors caused by a typo in a resource parameter. With this
patch, each resource can export constants into the global namespace so
that typos cause a compile error.
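As a generic illustration of the mechanism (the exported map, the constant
names and the lookup helper below are all invented): an unknown name fails
at compile time instead of surfacing as a runtime error.

package main

import (
	"fmt"
	"os"
)

// Resources export their allowed parameter values as named constants.
var exported = map[string]string{
	"res.file.state.exists": "exists",
	"res.file.state.absent": "absent",
}

// lookup is what the compiler would do when it sees one of these constants.
func lookup(name string) (string, error) {
	v, ok := exported[name]
	if !ok {
		return "", fmt.Errorf("unknown constant: %s", name) // the typo is caught here
	}
	return v, nil
}

func main() {
	if _, err := lookup("res.file.state.exits"); err != nil { // typo!
		fmt.Println("compile error:", err)
		os.Exit(1)
	}
}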
Of course in the future if we had a more advanced type system, then we
could support precise types for each individual resource param, but in
an attempt to keep things simple, we'll leave that for another day. It
would add complexity too if we ever wanted to store a parameter
externally.
Lastly, we might consider adding "special case" parsing so that directly
specified fields would parse intelligently. For example, we could allow:
file "/tmp/hello" {
	state => exists, # magic sugar!
}
This isn't supported for now, but if it works after all the other parser
changes have been made, it might be something to consider.
This ensures that docstring comments are wrapped to 80 chars. ffrank
seemed to be making this mistake far too often, and it's a silly thing
to look for manually. As it turns out, I've made it too, as have many
others. Now we have a test that checks for most cases. There are still a
few stray cases that aren't checked automatically, but this can be
improved upon if someone is motivated to do so.
Before anyone complains about the 80 character limit: this only checks
docstring comments, not source code length or inline source code
comments. There's no excuse for having docstrings that are badly
reflowed or over 80 chars, particularly if you have an automated test.
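A stripped-down sketch of such a check using go/parser. The real test is
pickier about which comments count as docstrings and about the edge cases;
this only shows the general approach.

package main

import (
	"fmt"
	"go/parser"
	"go/token"
	"os"
	"strings"
)

func main() {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, os.Args[1], nil, parser.ParseComments)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	bad := 0
	for _, group := range f.Comments {
		for _, c := range group.List {
			for _, line := range strings.Split(c.Text, "\n") {
				if len(line) > 80 {
					pos := fset.Position(c.Pos())
					fmt.Printf("%s:%d: comment line over 80 chars\n", pos.Filename, pos.Line)
					bad++
				}
			}
		}
	}
	if bad > 0 {
		os.Exit(1)
	}
}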
I seem to have forgotten to differentiate between the empty string and
no data because the zero value for the stored result was the empty
string. This turns it into a pointer so that we don't block the function
engine if a template or one of the other patched functions sends an
empty string as the first value.
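The pointer trick in isolation (the result type here is just for
illustration): a *string distinguishes "no value yet" (nil) from "the first
value was the empty string", which a plain string cannot do.

package main

import "fmt"

type result struct {
	value *string // nil means no data has arrived yet
}

func (r *result) set(s string) { r.value = &s }

func (r *result) ready() bool { return r.value != nil }

func main() {
	var r result
	fmt.Println(r.ready()) // false: nothing received yet

	r.set("")              // a template legitimately sends "" as its first value
	fmt.Println(r.ready()) // true: empty string is real data, don't block
}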
The template function should be callable with just one arg (no input
vars), and we should correctly use the known arg name and not the
string "a" or "b".
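For example, with Go's text/template and an assumed arg name of "greeting"
(the name and data layout are hypothetical, not the template function's
actual internals):

package main

import (
	"os"
	"text/template"
)

func main() {
	tmpl := template.Must(template.New("t").Parse("value is: {{.greeting}}\n"))

	// Build the data from the actual argument name, not a hardcoded "a"/"b".
	data := map[string]interface{}{
		"greeting": "hello world",
	}
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}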
Send non-error log messages to stdout rather than stderr. Any messages
outside the main function are expected to be purely informational. By
sending to stdout rather than stderr, they can be discarded during the
build.
Fixes #568
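The general pattern (the exact logger setup in the project may differ):
informational messages go to stdout, which is easy to discard during the
build, while real errors still go to stderr.

package main

import (
	"log"
	"os"
)

var (
	infoLog  = log.New(os.Stdout, "INFO: ", log.LstdFlags)
	errorLog = log.New(os.Stderr, "ERROR: ", log.LstdFlags)
)

func main() {
	infoLog.Println("starting up")           // discardable: redirect stdout
	errorLog.Println("something went wrong") // still visible on stderr
}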
This adds a readfile function to actually access files from our deploy.
A fun side effect is that we can even access our own code! In general,
it's a good reminder that you should only run trusted code on your own
infrastructure. This also includes a fancy new test case.