This adds support for `include as <identifier>` statements, which, in
addition to pulling in any defined resources, also make the contents
of the scope of the class available to the scope of the include
statement, prefixed by the specified identifier.
This makes passing data between scopes much more powerful, and it also
lets a class expose other useful classes for subsequent use.
This also improves the SetScope procedure and adds to the Ordering
stage. It's unclear if the current Ordering stage can handle all code,
or if there exist corner cases of valid code that would produce a
wrong or imprecise topological sort.
Some unrelated scoping bugs still exist, which expose certain
variables that we should not depend on in future code.
Co-authored-by: Samuel Gélineau <gelisam@gmail.com>
This is a new API that is similar in spirit and plumbing to the World
API, but it is intended for all local machine operations and will
likely only ever have one implementation.
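As an illustration only, here is a hedged Go sketch of the rough shape
such an API might take; the method names are hypothetical, not taken
from the actual implementation.

	package local

	import "context"

	// Local is a hedged sketch of a local machine operations API; the
	// method names are hypothetical illustrations.
	type Local interface {
		// ValueSet stores a named value on the local machine.
		ValueSet(ctx context.Context, key string, value string) error

		// ValueGet retrieves a previously stored value.
		ValueGet(ctx context.Context, key string) (string, error)
	}
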
This is a newer implementation of the panic magic. I kept the old
commit in for posterity and to show the difference. The two versions
are identical to the end-user with one exception: the newer version
doesn't include a useless panic resource in the graph when there is no
panic. In this version, the panic function returns false, and the if
statement that it's the condition of doesn't produce the resource
within. On error, we still consume the function in the if expression,
and doing so causes everything to shut down.
The other benefit is that the implementation is much cleaner and doesn't
need the interpolate hack.
It's valuable to check your runtime values and to shut down the entire
engine in case something doesn't match. This patch adds some magic
plumbing to support a "panic" mechanism.
A new "panic" statement gets transparently converted into a panic
function and panic resource. The former errors if the input is not
empty. The latter must be present to consume the value, but doesn't
actually do anything.
This adds a new implementation of the function engine that runs the
DAG function graph. This version is notable in that it can run a graph
that changes shape over time. To make changes to the shape of the
graph, you must use the new transaction (Txn) system. This system
implements a simple garbage collector (GC) for the scheduled removal
of nodes that the transaction system "reverses" out of the graph.
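A rough sketch of what such a transaction API might look like follows;
the exact method set is illustrative, not authoritative.

	package txnsketch

	// Vertex and Edge stand in for the real graph types.
	type Vertex interface{}
	type Edge interface{}

	// Txn is a hedged sketch of the transaction (Txn) API described
	// above; the real engine's method set may differ.
	type Txn interface {
		// AddVertex stages a new function node for addition.
		AddVertex(v Vertex) Txn
		// AddEdge stages a new edge between two nodes.
		AddEdge(v1, v2 Vertex, e Edge) Txn
		// Commit atomically applies all staged changes to the graph.
		Commit() error
		// Reverse undoes the committed changes; the garbage collector
		// then removes any nodes that are no longer referenced.
		Reverse() error
	}
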
Special thanks to Samuel Gélineau <gelisam@gmail.com> for his help
hacking on and debugging so much of this concurrency work with me.
This returns the type with the arg names we'll actually use. This is
helpful so that we can pass values to the right places. We have named
edges, so you can actually see what's going on in the graph.
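As a hedged sketch of the idea (not the exact data structures), a
signature that carries its arg names lets each edge feeding a function
node be labelled with the arg it supplies:

	package sigsketch

	// Arg is a sketch of a typed, named function argument.
	type Arg struct {
		Name string // the arg name we'll actually use, eg: "a"
		Type string // the type of this arg, eg: "str"
	}

	// Sig sketches a function type that keeps its arg names, so that
	// the edge carrying each input value can be named after the arg
	// it feeds.
	type Sig struct {
		Args []Arg  // ordered, named inputs
		Out  string // the return type, eg: "str"
	}
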
Co-authored-by: Samuel Gélineau <gelisam@gmail.com>
This plumbs through the new Output method signature that accepts a
table of function pointers to values, and relies on the function
pointers stored earlier to perform the lookup. This has the elegant
side-effect that Output generation could run in parallel with the
graph engine, as the engine only needs to pause to take a snapshot of
the current values table.
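Schematically (a hedged sketch, not the verbatim signature), the new
shape is something like this:

	package outsketch

	// Func and Value stand in for the real function and value types.
	type Func interface{}
	type Value interface{}

	// Output stands in for the produced output of the program.
	type Output struct{}

	// Producer is a hedged sketch of the new Output method shape: it
	// receives a snapshot table mapping function pointers to their
	// current values, so generation is decoupled from the engine.
	type Producer interface {
		Output(table map[Func]Value) (*Output, error)
	}
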
Co-authored-by: Samuel Gélineau <gelisam@gmail.com>
The Graph signature changes are needed for future function work, and
they also fit in nicely with the need to store the value pointer for
each function node. These are used to later extract values during the
Output stage.
Sam deserves all of the credit for realizing both of these points and
convincing me to make the change! It worked out great, cheers!
Co-authored-by: Samuel Gélineau <gelisam@gmail.com>
This makes some of the Graph sig changes to prepare the code for proper
functions. The remaining bits will happen later.
Co-authored-by: Samuel Gélineau <gelisam@gmail.com>
We were skipping over full consistency with all of the generator
invariants when running the solver. This allowed us to miss some of
the conditions that a generator might impose. Usually this caused us
to report "solved" when in fact we had an invalid program.
This removes the `Close() error` method and replaces it with a more
modern Stream API that takes a context. This removes boilerplate and
makes integration with concurrent code easier. The only downside is
that there isn't an explicit cleanup step, but only one function was
even using that, and it was possible to switch it to a defer in
Stream.
This also renames the functions from polyfunc to just func, which we
now determine by API rather than by naming.
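In rough terms (a sketch of the idea, not the verbatim interface), the
change looks like this:

	package funcsketch

	import (
		"context"
		"fmt"
	)

	// Func is a hedged sketch of the function API after this change:
	// the old `Close() error` method is gone, and Stream now takes a
	// context for cancellation instead.
	type Func interface {
		fmt.Stringer // the name is used for the graph vertex

		// Stream runs the function until the context is cancelled.
		// Any cleanup that previously happened in Close can now be
		// done with a defer at the top of Stream.
		Stream(ctx context.Context) error
	}
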
This adds the requirement that all function implementations provide a
String() string method so that these can be used as vertices in the
pgraph library. If we eventually move to generics for the pgraph DAG,
then this might not matter, but it's not bad for these to have names
either.
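In practice this just means every function satisfies fmt.Stringer; a
minimal sketch, with a hypothetical function name:

	package vertexsketch

	import "fmt"

	// nowFunc is a hypothetical example of a function implementation.
	// Implementing fmt.Stringer is what lets it be used as a vertex
	// in the pgraph library.
	type nowFunc struct{}

	// String satisfies fmt.Stringer so this can be a graph vertex.
	func (obj *nowFunc) String() string { return "datetime.now" }

	var _ fmt.Stringer = &nowFunc{} // compile-time check
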
There were a bunch of packages that weren't well documented. With the
recent split up of the lang package, I figured documenting them would
be helpful for new contributors who want to learn the structure of the
project.
This is a giant refactor to split the giant lang package into many
subpackages. The most difficult piece was figuring out how to extract
the extra ast structs into their own package, because they needed to
call two functions which also needed to import the ast.
The solution was to separate those functions out into their own
packages, pass them into the ast at the root when they're needed, and
let the relevant ast portions call through a handle.
This isn't terribly ugly because we already had a giant data struct
woven through the ast.
The bad part is rebasing any WIP work on top of this.
This isn't perfect yet, but we're trying to do this incrementally, and
merge whatever we can as early as possible.
During this work, I realized that the Simplify method of the exclusive
could probably be improved, and possibly receive a better signature.
This work will have to happen later.
This is a new invariant that I realized might be useful. It's not
guaranteed that it will be used or useful, but I wanted to get it out of
my WIP branch, to keep that work cleaner.
This is a new invariant that I realized might be useful. It's not
guaranteed that it will be used or useful, but I wanted to get it out of
my WIP branch, to keep that work cleaner.
This is a new invariant that I realized might be useful. It's not
guaranteed that it will be used or useful, but I wanted to get it out of
my WIP branch, to keep that work cleaner.
We should probably move these into the central interfaces package so
that these can be used from multiple places. They don't have any
dependencies, and it doesn't make sense to have the solver code mixed
into the same package. Overall the interface being implemented here
could probably be improved, but that's a project for another day.
Since we focus on safety, it would be nice to reduce the chance of any
runtime errors if we made a typo for a resource parameter. With this
patch, each resource can export constants into the global namespace so
that typos would cause a compile error.
Of course in the future if we had a more advanced type system, then we
could support precise types for each individual resource param, but in
an attempt to keep things simple, we'll leave that for another day. It
would add complexity too if we ever wanted to store a parameter
externally.
Lastly, we might consider adding "special case" parsing so that directly
specified fields would parse intelligently. For example, we could allow:
	file "/tmp/hello" {
		state => exists, # magic sugar!
	}
This isn't supported for now, but if it works after all the other parser
changes have been made, it might be something to consider.
This ensures that docstring comments are wrapped to 80 chars. ffrank
seemed to be making this mistake far too often, and it's a silly thing
to look for manually. As it turns out, I've made it too, as have many
others. Now we have a test that checks for most cases. There are still a
few stray cases that aren't checked automatically, but this can be
improved upon if someone is motivated to do so.
Before anyone complains about the 80 character limit: this only checks
docstring comments, not source code length or inline source code
comments. There's no excuse for having docstrings that are badly
reflowed or over 80 chars, particularly if you have an automated test.
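The core of the check is simple; here's a hedged Go sketch of the kind
of scan such a test performs (not the actual test code):

	package main

	import (
		"fmt"
		"go/parser"
		"go/token"
		"os"
	)

	// This sketches the flavour of the check: parse a Go file and
	// flag comment lines over 80 chars. The real test is more careful
	// about which comments count as docstrings, and about block
	// comments whose text spans multiple lines.
	func main() {
		fset := token.NewFileSet()
		f, err := parser.ParseFile(fset, os.Args[1], nil, parser.ParseComments)
		if err != nil {
			panic(err)
		}
		for _, group := range f.Comments {
			for _, c := range group.List {
				if len(c.Text) > 80 { // crude width check
					p := fset.Position(c.Pos())
					fmt.Printf("%s:%d: comment too long\n", p.Filename, p.Line)
				}
			}
		}
	}
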
This should give us options as to how a function should interact with
an FS. I feel like it's cleaner to go through the World API, and
passing in the FsURI lets us do that, but I passed in the Fs at the
same time in case it's useful for some reason. I think using it
directly is a boundary violation, but it's just a hunch. Does anything
break when we move from one deploy to the next?
Sometimes certain internal functions might want to get some data from
the AST or from something relating to the state of the language. This
adds a method to pass in that data. For now it's a very simple method,
but we could generalize it in the future if it becomes more useful.
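As a rough sketch of the shape this could take (the names here are
hypothetical, not the real API):

	package datasketch

	// FuncData is a hedged sketch of the data an internal function
	// might receive; the field is a hypothetical example.
	type FuncData struct {
		Base string // eg: the base path the code is running from
	}

	// DataFunc sketches the optional interface a function implements
	// to receive this data before it runs.
	type DataFunc interface {
		// SetData is called by the compiler to pass in the data.
		SetData(data *FuncData)
	}
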
This adds a giant missing piece of the language: proper function values!
It is lovely to now understand why early programming language designers
didn't implement these, but a joy to now reap the benefits of them. In
adding these, many other changes had to be made to get them to "fit"
correctly. This improved the code and fixed a number of bugs.
Unfortunately this touched many areas of the code, and since I was
learning how to do all of this for the first time, I've squashed most of
my work into a single commit. Some more information:
* This adds over 70 new tests to verify the new functionality.
* Functions, global variables, and classes can all be implemented
natively in mcl and built into core packages.
* A new compiler step called "Ordering" was added. It is called by the
SetScope step, and determines statement ordering and shadowing
precedence formally. It helped remove at least one bug and provided the
additional analysis required to properly capture variables when
implementing function generators and closures.
* The type unification code was improved to handle the new cases.
* Light copying of Nodes allowed our function graphs to be more
optimal and to share common vertices and edges. For example, if two
different closures capture a variable $x, they'll both use the same
copy when running the function, since the compiler can prove that
they're identical. (A small analogy is sketched after this list.)
* Some areas still need improvements, but this is ready for mainstream
testing and use!
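The shared-capture behaviour mirrors what closures do in most
languages; for example, this small Go analogy (illustrative only)
shows two closures sharing a single captured variable:

	package main

	import "fmt"

	// Both closures capture the same x, so there is only a single
	// underlying copy of the variable, not two.
	func main() {
		x := 42
		f := func() int { return x } // captures x
		g := func() int { return x } // captures the same x
		x = 13                       // both observe the update
		fmt.Println(f(), g())        // prints: 13 13
	}
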
This lets you specify which args are being used in the general
function API, which can make code readability and debuggability
slightly better. In an ideal world, we wouldn't need this at all, but
I can't figure out how to avoid it at the moment, so we'll include it
for now, as it's always easy to delete if we find a more elegant
solution.
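A hedged sketch of what such an optional interface could look like
(the names are illustrative, not necessarily the real ones):

	package argsketch

	// NamedArgsFunc is a sketch of an optional interface a function
	// could implement to name its args; the exact name and signature
	// in the real API may differ.
	type NamedArgsFunc interface {
		// ArgGen returns the name of the input arg at this index.
		ArgGen(index int) (string, error)
	}
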
The simple type unification algorithm suffered from some serious
performance and memory problems when used with certain code bases. This
adds some crucial optimizations that improve performance drastically.
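One classic optimization of this flavour (shown purely as an
illustration, not as the actual patch) is to deduplicate repeated
invariants before handing them to the solver:

	package unifysketch

	// dedupe removes duplicate invariants, keyed by a canonical
	// string form, so the solver only has to process each one once.
	// This illustrates the kind of optimization involved, not the
	// actual change.
	func dedupe(invariants []string) []string {
		seen := make(map[string]struct{}, len(invariants))
		out := []string{}
		for _, x := range invariants {
			if _, exists := seen[x]; exists {
				continue // skip the duplicate
			}
			seen[x] = struct{}{}
			out = append(out, x)
		}
		return out
	}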