This fixes two small data races we had: one between simultaneous reads
and writes of the vertex timestamp, and one between simultaneous writes
to the vertex isStateOK (dirty) flag.
They were actually "safe" races, in that it didn't matter whether the
read/write race saw the old or the new value, or that the double write
happened. The time sequencing was correct (I believe) in both cases, but
they now trigger the race detector since we have tests for it.
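Roughly, the fix is to guard those fields with a lock. A minimal Go
sketch, with hypothetical names rather than the real engine types:

    package engine

    import (
        "sync"
        "time"
    )

    // vertex is a hypothetical stand-in for the real vertex state.
    type vertex struct {
        mu        sync.Mutex
        timestamp time.Time
        isStateOK bool // the "dirty" flag, inverted
    }

    // Timestamp reads the timestamp under the lock.
    func (v *vertex) Timestamp() time.Time {
        v.mu.Lock()
        defer v.mu.Unlock()
        return v.timestamp
    }

    // SetTimestamp writes the timestamp under the same lock, so a
    // concurrent read and write can no longer race.
    func (v *vertex) SetTimestamp(t time.Time) {
        v.mu.Lock()
        defer v.mu.Unlock()
        v.timestamp = t
    }

    // SetDirty writes the flag under the lock, so two concurrent
    // writers can no longer race with each other.
    func (v *vertex) SetDirty(dirty bool) {
        v.mu.Lock()
        defer v.mu.Unlock()
        v.isStateOK = !dirty
    }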
Since this special one_instance function uses global state, it can't be
re-used in more than one test: the tests would all share the same global
state. Make a new one for each test.
This also breaks the count=2 feature (any count other than 1) when
running these tests, which is not ideal. Create a cleanup API that we
can run between tests to reset the global state.
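A minimal sketch of what such a cleanup API could look like (the names
here are illustrative, not the actual test helpers):

    package oneinstance

    import "sync"

    // Hypothetical global state used by the one_instance function.
    var (
        mu        sync.Mutex
        instances = make(map[string]struct{})
    )

    // Cleanup resets the global state. Running it between tests lets
    // each test (and each -count=N repetition) start fresh.
    func Cleanup() {
        mu.Lock()
        defer mu.Unlock()
        instances = make(map[string]struct{})
    }

A test can then register it with t.Cleanup(Cleanup) so every run starts
from a known state.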
This adds ExprTopLevel and ExprSingleton and ensures that ExprBind is
now monomorphic.
This corrects a previous design bug where it was not monomorphic and
would thus spawn many more copies than necessary. In most cases this
harmed only memory and performance, not behaviour, since these functions
were pure, and we didn't have a test for this.
This also adds a bunch more tests. Most notably, the graph shape tests
generally produce smaller graphs now.
Lastly, a lambda cannot have two different types when used at two
different call sites. It is rare that this would be used, and when it
would make sense, there are easy workarounds to accomplish equivalent
goals.
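The restriction mirrors how function values already behave in Go, where
a generic declaration must be instantiated at a single type before it
can be stored in a variable. As a rough analogy:

    package main

    import "fmt"

    // id is polymorphic as a declaration...
    func id[T any](x T) T { return x }

    func main() {
        // ...but a function value must be pinned to one type.
        f := id[int]
        fmt.Println(f(42)) // 42
        // f("hi") would not compile: the value f is monomorphic,
        // just like a bound mcl lambda is now.
    }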
This was mostly authored by Sam; James helped with some cleanup and
debugging.
Co-authored-by: James Shubin <james@shubin.ca>
We no longer allow a lambda variable to be polymorphic. Polymorphism
isn't bad per se, but it makes things too difficult, at least for now.
It's also a specific feature that we likely don't need very often.
We should not call either of these functions more than once for their
values. If we do, it means we have made a mistake with a compiler
optimization.
This is important, because otherwise if you had code like:

    $x = random_password()

then calling the function more than once would produce two different
values, which would obviously be a problem. Thankfully, functions that
generate unique data are rare, but it's probably something we should
take care of.
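In Go, sync.Once is the standard way to get this call-once behaviour; a
minimal sketch (randomPassword is a hypothetical stand-in, not our real
function):

    package main

    import (
        "fmt"
        "math/rand"
        "sync"
    )

    var (
        once     sync.Once
        password string
    )

    // randomPassword generates its value exactly once; every later
    // call returns the cached result instead of a fresh password.
    func randomPassword() string {
        once.Do(func() {
            password = fmt.Sprintf("pw-%d", rand.Int())
        })
        return password
    }

    func main() {
        fmt.Println(randomPassword() == randomPassword()) // true
    }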
Previously, we could not receive a value from within an autogrouped
resource. It turns out this would be quite useful, so this patch
implements the additional plumbing and testing to make it work!
Testing that an autogrouped resource can still send values has not been
done at this time.
This adds a new series of "get*" functions which can read values from
the associated "value" resources. The key name of the function must
match the name value of the resource for things to work.
Type unification isn't yet perfect in these scenarios, so you should
use these sparingly and with caution.
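Conceptually, it behaves like a keyed store; the names below are
hypothetical, not the actual API:

    package values

    import "fmt"

    // store maps a value resource's name to the value it holds.
    var store = make(map[string]interface{})

    // Set is what a "value" resource would do with its name value.
    func Set(name string, v interface{}) { store[name] = v }

    // Get is what a get* function would do: its key argument must
    // match the name of the corresponding value resource.
    func Get(name string) (interface{}, error) {
        v, ok := store[name]
        if !ok {
            return nil, fmt.Errorf("no value resource named %q", name)
        }
        return v, nil
    }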
This test detects a mistake which is easy to make: when making a
recursive call to the target of an ExprVar, it would be easy to
accidentally pass the environment, like we usually do with every other
recursive call. For variables, this is a mistake, because the lambda
parameters which are in scope where the variable is used must not be in
scope where the variable is defined.
In fact, ExprVar.Graph() currently makes this mistake. The test passes
anyway, because an earlier phase (SetScope) correctly clears the
environment and detects the problem before the Graph phase. Thus, this
test does not guarantee that all the phases correctly clear their
environment, it merely detects the unlikely case in which all the phases
make the same mistake.
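The invariant being tested is ordinary lexical scoping. A minimal Go
illustration of the same rule:

    package main

    import "fmt"

    func main() {
        x := 1
        f := func() int { return x } // closes over the definition site
        g := func(x int) int {
            // f must still see the outer x, not g's parameter,
            // even though g's x is in scope at this call site.
            return f()
        }
        fmt.Println(g(99)) // prints 1, not 99
    }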
This modifies the panic feature to accept a boolean or a string. If the
value is true or non-empty, it causes the panic. This makes some of the
error code a little less ugly.
This is a newer implementation of the panic magic. I kept the old commit
in for posterity and to show the difference. The two versions are
identical from the end-user's perspective, with one exception: the newer
version doesn't include a useless panic resource in the graph when there
is no panic. In this version, the panic function returns false, and the
if statement whose condition it is doesn't produce the resource within.
On error, we still consume the function in the if expression, and doing
so causes everything to shut down.
The other benefit is that the implementation is much cleaner and doesn't
need the interpolate hack.
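A tiny sketch of the accept-bool-or-string rule (a hypothetical helper,
not the engine's actual function):

    package main

    import "fmt"

    // shouldPanic reports whether a panic should fire: a true
    // boolean, or any non-empty string, triggers it.
    func shouldPanic(v interface{}) bool {
        switch x := v.(type) {
        case bool:
            return x
        case string:
            return x != ""
        }
        return false
    }

    func main() {
        fmt.Println(shouldPanic(false))    // false
        fmt.Println(shouldPanic("oh no!")) // true
    }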
It's valuable to check your runtime values and to shut down the entire
engine in case something doesn't match. This patch adds some magic
plumbing to support a "panic" mechanism.
A new "panic" statement gets transparently converted into a panic
function and panic resource. The former errors if the input is not
empty. The latter must be present to consume the value, but doesn't
actually do anything.
This is a strange resource which is probably most useful for passing
values between scopes. It supports a variant resource field, and should
only be used as a last resort and if you know exactly what you're doing.
This adds an interesting version of the struct lookup function. In the
situation where we can't type-check the field name, it will use the
optional value passed in. This makes it easy to write a function that
will pull in the desired value, even as the input struct changes type
between compilations, without having to rewrite your code.
It's structurally different from the other default lookup functions,
which is why it is named differently.
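A reflection-based Go sketch of the idea (illustrative only; the real
function operates on mcl values, not Go structs):

    package main

    import (
        "fmt"
        "reflect"
    )

    // lookupOrDefault returns the named field if the struct has one
    // with a matching type, and the fallback default otherwise.
    func lookupOrDefault(st interface{}, field string, def interface{}) interface{} {
        f := reflect.ValueOf(st).FieldByName(field)
        if f.IsValid() && f.Type() == reflect.TypeOf(def) {
            return f.Interface()
        }
        return def // the struct changed shape; fall back gracefully
    }

    func main() {
        type config struct{ Port int }
        fmt.Println(lookupOrDefault(config{Port: 8080}, "Port", 80))  // 8080
        fmt.Println(lookupOrDefault(config{Port: 8080}, "Host", "?")) // ?
    }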
These test both graph shape consistency and single value outputs.
Eventually we want to make the graph shape tests more precise, and also
verify specific outputs as we used to. For now, this is okay.
Co-authored-by: Samuel Gélineau <gelisam@gmail.com>
This adds a meta state store that is preserved between graph switches if
the kind and name match. This is useful so that rapid graph changes
don't necessarily reset their retry count if they've only changed one
resource field.
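A minimal sketch of the idea, with hypothetical names: the store is
keyed on kind+name, so state survives a graph swap:

    package meta

    import "sync"

    // state is whatever per-resource bookkeeping we preserve, such
    // as the current retry count.
    type state struct {
        retries int
    }

    var (
        mu    sync.Mutex
        store = make(map[string]*state)
    )

    // load returns the preserved state for kind+name, creating it on
    // first use. A graph switch that keeps the same kind and name
    // gets the same state back, so the retry count isn't reset.
    func load(kind, name string) *state {
        mu.Lock()
        defer mu.Unlock()
        key := kind + ":" + name
        if st, ok := store[key]; ok {
            return st
        }
        st := &state{}
        store[key] = st
        return st
    }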
There are many reasonable cases where we might want to allow a dynamic
format string. Support that situation by adding the new invariants that
are needed for those cases.
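In plain Go, the same distinction looks like this: with a constant
format the argument types are checkable up front, while a runtime
format defers any checking:

    package main

    import "fmt"

    func pickFormat() string { return "%s is %d\n" }

    func main() {
        // Static format: the argument types are derivable from the
        // format string itself, before the program runs.
        fmt.Printf("%s is %d\n", "a", 1)

        // Dynamic format: the string is only known at runtime, so
        // the arguments can't be checked against it in advance.
        format := pickFormat()
        fmt.Printf(format, "b", 2)
    }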
We were skipping full consistency checks against all of the generator
invariants when running the solver. This allowed us to miss some of the
conditions that a generator might impose. Usually this caused us to
report "solved" when in fact we had an invalid program.
There were some bugs with setting resource fields that were structs
containing various fields. This makes things stricter and more correct.
Now we check for duplicate field names earlier (duplicates can occur
due to identical aliases), and we no longer try to set private fields
or incorrectly set partial structs.
Most interestingly, this also cleans up all of the resources and ensures
that each one has nicer docs and a clear struct tag for fields that we
want to use in mcl. These are mandatory now, and if you're missing the
tag, then we will ignore the field.
We also add a backstop fix to avoid a panic: if we ever hit a new
unification bug that lets something through, we can at least turn it
into a runtime issue. This adds a test as well.
This ports TestAstFunc2 from our home-grown content storage system to
the txtar package. Since a single file can be used to represent the
entire folder hierarchy, this makes it much easier to see and edit
tests.
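The txtar format (from the golang.org/x/tools/txtar package) encodes a
whole directory tree in one file, and parsing it is trivial:

    package main

    import (
        "fmt"

        "golang.org/x/tools/txtar"
    )

    func main() {
        src := []byte("comment describing the test\n" +
            "-- main.mcl --\n" +
            "$x = \"hello\"\n" +
            "-- expected-graph.txt --\n" +
            "vertex: x\n")
        archive := txtar.Parse(src)
        for _, f := range archive.Files {
            fmt.Printf("%s: %d bytes\n", f.Name, len(f.Data))
        }
    }

Each "-- name --" section becomes one file in the represented folder
hierarchy.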
Dollar symbols failed to parse when not followed by a non-brace,
non-dollar, non-EOF token, causing tests that should pass to fail. This
simplifies the rules to allow the remaining tests to succeed.
Fix and reinstate the final few failing tests, and add another.
Allow any escape sequence to be matched so that invalid sequences
produce a meaningful error message instead of a generic "cannot parse":
    ast: interpolate: interpolating: V: \?
    unhandled escape sequence token: \?
Tidy the related Makefile rule for generating the ragel parser.
Signed-off-by: Joe Groocock <me@frebib.net>
We had mapped the field type to a dummy type instead of to T2, the
return type. This is fixed now, and some tests were added.
This bug broke the unification for the load function lookups.
This isn't perfect yet, but we're trying to do this incrementally, and
merge whatever we can as early as possible.
During this work, I realized that the Simplify method of the exclusive
could probably be improved, and possibly receive a better signature.
This work will have to happen later.
This adds support for variant types in the simple poly definitions. It
is recommended that you avoid using these as much as possible, because
they're a bit harder for type unification to solve. The way
this works is that these functions look at the available input types and
then generate a (recursive) set of invariants which might hold true. It
filters out any impossible ones, which is where this variant matching is
done. It's less likely that you'll get a solution with this mechanism,
but it is possible.
This is an implementation of the Unify approach for the operator
function. It is unique in that it is a wrapper around the simple
operator function API.
To improve the filtering, it would be excellent if we could somehow
examine the return type in `solved` (if it is known) and use that to
trim our list of exclusives down even further! The smaller the
exclusives are, the faster everything in the solver can run.