This makes tar archives from a list of files and/or directories. It
correctly includes empty directories as well. The code structure is
similar to the gzip resource. While this resource is arguably more
useful than gzip, it was invaluable to write the gzip resource first
since that made writing this one much easier.
This may have lots of uses, particularly for bootstrapping and handoff
if we want to compress payloads. It is also a good model for how to
implement a resource that avoids re-computing its result on every
CheckApply call. Of course, if the computation is cheaper than hashing
the data, this isn't the optimal approach.
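As an illustration of that caching idea, here is a minimal, standalone
sketch (not the actual resource code; the state file path and helper
names are hypothetical): hash the inputs, and only rebuild the archive
when that digest differs from the one recorded on the previous run.

    package main

    import (
        "crypto/sha256"
        "fmt"
        "io"
        "os"
    )

    // hashInputs digests the input paths and their contents, so an
    // unchanged input set maps to an unchanged digest.
    func hashInputs(paths []string) (string, error) {
        h := sha256.New()
        for _, p := range paths {
            fmt.Fprintf(h, "%s\x00", p) // include the name itself
            f, err := os.Open(p)
            if err != nil {
                return "", err
            }
            if _, err := io.Copy(h, f); err != nil {
                f.Close()
                return "", err
            }
            f.Close()
        }
        return fmt.Sprintf("%x", h.Sum(nil)), nil
    }

    // needsRebuild compares the new digest to one stored in a state file.
    func needsRebuild(stateFile, digest string) bool {
        old, err := os.ReadFile(stateFile)
        if err != nil {
            return true // no previous state, so rebuild
        }
        return string(old) != digest
    }

    func main() {
        inputs := []string{"/tmp/a.txt", "/tmp/b.txt"} // hypothetical
        digest, err := hashInputs(inputs)
        if err != nil {
            panic(err)
        }
        if needsRebuild("/tmp/archive.tar.sha256", digest) {
            fmt.Println("inputs changed, rebuild the tar archive")
            // build with archive/tar here, then record the digest:
            // os.WriteFile("/tmp/archive.tar.sha256", []byte(digest), 0600)
        } else {
            fmt.Println("inputs unchanged, skip the expensive rebuild")
        }
    }

This is also exactly where the caveat above applies: if building the
archive is cheaper than hashing every input, skip the digest step.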
Apparently wget2 has a serious regression where the HTTP 102
(Processing) response throws it off... So let's not send it for now.
I'm pretty unhappy about this; wget used to always be rock solid. Maybe
curl deserves a chance? (This works fine with curl, by the way.)
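For reference, a minimal sketch of the workaround (the sendProcessing
flag is hypothetical, not the real server code; Go 1.19+ can emit 1xx
informational responses via WriteHeader): gate the 102 behind a flag so
clients like wget2 that choke on it are not affected.

    package main

    import (
        "fmt"
        "net/http"
    )

    // Disabled for now because of the wget2 regression.
    var sendProcessing = false

    func handler(w http.ResponseWriter, r *http.Request) {
        if sendProcessing {
            w.WriteHeader(http.StatusProcessing) // 102, informational
        }
        // ... do the slow work that motivated the 102 here ...
        w.WriteHeader(http.StatusOK)
        fmt.Fprintln(w, "done")
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }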
In 83a747794e a bug was introduced with the implementation of symbolic
modes that would prevent a file resource from passing the Validate step
if you were using a symbolic mode and the file didn't already exist. If
you weren't using symbolic modes, or the files already existed, you
wouldn't have noticed.
It might be worth looking into the symbolic mode parsing API as well.
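A rough sketch of the intended behaviour (hypothetical names, not the
actual resource code): validating a symbolic mode shouldn't depend on
the file already existing, since the base permission bits it gets
applied to are only known at apply time.

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // validateMode does the static checks only.
    func validateMode(path, mode string) error {
        symbolic := strings.ContainsAny(mode, "ugoa+-=")
        if !symbolic {
            // Octal modes can be fully checked without the file.
            if _, err := strconv.ParseUint(mode, 8, 32); err != nil {
                return fmt.Errorf("invalid octal mode %q", mode)
            }
            return nil
        }
        // Symbolic modes (e.g. "u+rw") need the current permission
        // bits to resolve, so an absent file is accepted here and the
        // mode is resolved at apply time. (This was the bug: an
        // absent file used to fail Validate.)
        if _, err := os.Stat(path); err != nil && !os.IsNotExist(err) {
            return err // a real stat error is still reported
        }
        return nil
    }

    func main() {
        fmt.Println(validateMode("/tmp/does-not-exist", "u+rw")) // <nil>
    }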
This adds a modern type unification algorithm, which drastically
improves performance, particularly for bigger programs.
This required a change to the AST to add TypeCheck methods (for Stmt)
and Infer/Check methods (for Expr). It also changed how functions
express their invariants, so those had to be rewritten as well.
This greatly improves the way we express these invariants, and as a
result it makes adding new polymorphic functions significantly easier.
It also makes the error output for the user a lot better in pretty much
all scenarios.
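As a rough illustration of the split (hypothetical toy interfaces, not
the actual mgmt API): statements get a TypeCheck step, and expressions
get bidirectional Infer/Check methods, so each node can express its own
invariants locally.

    package main

    import "fmt"

    type Type struct{ Name string }

    type Invariant struct{ Desc string } // a unification constraint

    // Stmt nodes type check themselves and return any invariants they
    // generate for the solver.
    type Stmt interface {
        TypeCheck() ([]*Invariant, error)
    }

    // Expr nodes either infer their type (bottom-up) or check against
    // an expected type (top-down).
    type Expr interface {
        Infer() (*Type, []*Invariant, error)
        Check(expected *Type) ([]*Invariant, error)
    }

    // strLit is a toy expression with a fixed type.
    type strLit struct{ Value string }

    func (e *strLit) Infer() (*Type, []*Invariant, error) {
        return &Type{Name: "str"}, nil, nil
    }

    func (e *strLit) Check(expected *Type) ([]*Invariant, error) {
        t, invars, err := e.Infer()
        if err != nil {
            return nil, err
        }
        if t.Name != expected.Name {
            return nil, fmt.Errorf("type error: %s != %s", t.Name, expected.Name)
        }
        return invars, nil
    }

    func main() {
        e := &strLit{Value: "hello"}
        if _, err := e.Check(&Type{Name: "str"}); err != nil {
            panic(err)
        }
        fmt.Println("checked ok")
    }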
The one downside of this patch is that a good chunk of it is merged in
this giant single commit since it was hard to do it step-wise. That's
not the end of the world.
This couldn't have been done without the guidance of Sam, who helped
by explaining, debugging, and writing all the sneaky algorithmic parts
and much more. Thanks again Sam!
Co-authored-by: Samuel Gélineau <gelisam@gmail.com>
The owner/group of a file should not be validated on the host until
runtime. This removes the checks in Validate() that were happening
before the execution of the resource graph (and therefore bound to fail
if the system was being bootstrapped).
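A minimal sketch of the change in shape (hypothetical resource struct,
not the actual code): the owner/group lookups move from Validate, which
runs before the graph executes, into CheckApply, which runs on the
target host at runtime, after earlier resources may have created the
user or group.

    package main

    import (
        "fmt"
        "os/user"
    )

    type fileRes struct {
        Owner string
        Group string
    }

    // Validate now only does static checks; it no longer requires the
    // owner or group to already exist on the host.
    func (r *fileRes) Validate() error {
        return nil
    }

    // CheckApply resolves the owner/group at runtime, when it matters.
    func (r *fileRes) CheckApply(apply bool) (bool, error) {
        if r.Owner != "" {
            if _, err := user.Lookup(r.Owner); err != nil {
                return false, err // genuinely missing at runtime
            }
        }
        if r.Group != "" {
            if _, err := user.LookupGroup(r.Group); err != nil {
                return false, err
            }
        }
        // ... the chown/chmod work would happen here ...
        return true, nil
    }

    func main() {
        r := &fileRes{Owner: "root", Group: "root"}
        fmt.Println(r.Validate())
        fmt.Println(r.CheckApply(true))
    }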
This adds the ability to offer a DHCP lease to someone when we don't
know their MAC address in advance.
This also uses the extended autogrouping API to keep the internal API
simpler.
This extends the autogrouping API so that a child can easily get a
reference to the parent that it is autogrouped into. This can simplify
the API for some resources when it makes sense to allow them access to
the parent handle. Use sparingly and intelligently!
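A rough sketch of the shape of this (hypothetical interfaces and names,
not the actual mgmt API): the grouping code hands the child a reference
to its parent, and the child stores it for later use.

    package main

    import "fmt"

    type Res interface {
        Name() string
    }

    // ChildRes is implemented by resources that want the parent handle.
    type ChildRes interface {
        Res
        SetParent(parent Res)
    }

    type serverRes struct{ name string }

    func (r *serverRes) Name() string { return r.name }

    type proxyRes struct {
        name   string
        parent Res // set during autogrouping
    }

    func (r *proxyRes) Name() string { return r.name }
    func (r *proxyRes) SetParent(p Res) { r.parent = p }

    // autogroup wires the child into the parent and passes the handle.
    func autogroup(parent Res, child Res) {
        if c, ok := child.(ChildRes); ok {
            c.SetParent(parent)
        }
    }

    func main() {
        srv := &serverRes{name: "http:server"}
        proxy := &proxyRes{name: "http:proxy"}
        autogroup(srv, proxy)
        fmt.Println(proxy.parent.Name()) // http:server
    }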
This adds a standard gate that prevents execution if a file exists. Of
note, this also adds a watch on that file, so we can have a proper
watched exec resource without a watch cmd.
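A standalone sketch of the gate (not the resource code itself; using
the fsnotify library here is an assumption made for the example): skip
the command while the gate file exists, and watch its parent directory
so we get events when the file appears or disappears.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"

        "github.com/fsnotify/fsnotify"
    )

    func shouldRun(gateFile string) bool {
        _, err := os.Stat(gateFile)
        return os.IsNotExist(err) // only run while the file is absent
    }

    func main() {
        gateFile := "/tmp/done" // hypothetical gate path

        watcher, err := fsnotify.NewWatcher()
        if err != nil {
            panic(err)
        }
        defer watcher.Close()

        // Watch the directory, since the file itself may not exist yet.
        if err := watcher.Add(filepath.Dir(gateFile)); err != nil {
            panic(err)
        }

        fmt.Println("run command now?", shouldRun(gateFile))

        for {
            select {
            case ev := <-watcher.Events:
                if ev.Name == gateFile {
                    fmt.Println("gate changed; run?", shouldRun(gateFile))
                }
            case err := <-watcher.Errors:
                fmt.Println("watch error:", err)
            }
        }
    }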
This adds basic support for streaming files directly from the download
server. This avoids clients timing out when they would otherwise be
blocked waiting for a giant file to fully download first.
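A minimal sketch of the streaming side (hypothetical path, not the
actual server code): serve the file straight from the open handle so
the client starts receiving bytes immediately instead of waiting for
the whole thing to be prepared.

    package main

    import (
        "net/http"
        "os"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        f, err := os.Open("/srv/big.iso") // hypothetical large file
        if err != nil {
            http.Error(w, "not found", http.StatusNotFound)
            return
        }
        defer f.Close()
        fi, err := f.Stat()
        if err != nil {
            http.Error(w, "stat error", http.StatusInternalServerError)
            return
        }
        // ServeContent streams from the reader and supports range
        // requests, so bytes start flowing right away.
        http.ServeContent(w, r, fi.Name(), fi.ModTime(), f)
    }

    func main() {
        http.HandleFunc("/big.iso", handler)
        http.ListenAndServe(":8080", nil)
    }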
The earlier path mangling code was incorrect. I've taken more time to
understand the correct use case and I've improved it. I've also split
out the parser logic and added tests, so this should either stay stable
or grow new tests and fixes if we find new issues.
With the recent merging of embedded package imports and the entry CLI
package, it is now possible for users to build mcl code into a single
binary. This additional permission makes it explicitly clear that this
is permitted, to make things easier for those users. The condition is
phrased so that the terms can be "patched" by the original author if
it's necessary for the project: for example, if the name of the
language (mcl) changes or gets a differently named new version, if
someone finds a phrasing improvement or a legal loophole, or for some
other reasonable circumstance. Now go write some beautiful embedded
tools!
This adds a new caching HTTP proxy resource that can be autogrouped
into the core http:server resource, and which caches and serves files
that it receives from a different HTTP server. It pulls files down when
they are first requested, which makes it possible to use this for
provisioning a Linux installation without having to pre-rsync the
entire package repository.
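An illustrative sketch of the cache-on-first-request behaviour
(hypothetical cache directory and upstream URL, not the actual
resource): serve from the local cache when we have the file, otherwise
fetch it from the upstream server and stream it to the client while
writing the cached copy.

    package main

    import (
        "io"
        "net/http"
        "os"
        "path/filepath"
    )

    const cacheDir = "/var/cache/proxy"          // hypothetical
    const upstream = "http://mirror.example.com" // hypothetical

    func handler(w http.ResponseWriter, r *http.Request) {
        cached := filepath.Join(cacheDir, filepath.Clean(r.URL.Path))

        // Serve from the cache if we already pulled this file down.
        if _, err := os.Stat(cached); err == nil {
            http.ServeFile(w, r, cached)
            return
        }

        // Otherwise fetch it from upstream on the first request...
        resp, err := http.Get(upstream + r.URL.Path)
        if err != nil {
            http.Error(w, "upstream error", http.StatusBadGateway)
            return
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            http.Error(w, "upstream error", http.StatusBadGateway)
            return
        }

        if err := os.MkdirAll(filepath.Dir(cached), 0755); err != nil {
            http.Error(w, "cache error", http.StatusInternalServerError)
            return
        }
        f, err := os.Create(cached)
        if err != nil {
            http.Error(w, "cache error", http.StatusInternalServerError)
            return
        }
        defer f.Close()

        // ...and stream it to the client while saving the cached copy.
        io.Copy(io.MultiWriter(w, f), resp.Body)
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }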
A regression in 4b0cdf9123 caused the
basic send/recv functionality to break for simple scenarios. This was
due to inadequate testing, and a partial misunderstanding of the
situation.
New testing should hopefully catch more cases, but send/recv and its
compile-time checks are still not as complete as they probably could be.
This adds a new http:flag resource which can autogroup into an
http:server resource to receive actions from client HTTP requests, and
forward these values on to other resources.
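A tiny sketch of the idea (hypothetical key and wiring, not the real
resource): a handler grouped into the server reads a value from the
client request and forwards it for other resources to consume.

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        values := make(chan string, 1) // forwarded on to other resources

        http.HandleFunc("/flag", func(w http.ResponseWriter, r *http.Request) {
            v := r.FormValue("state") // e.g. POST /flag with state=done
            select {
            case values <- v:
            default: // don't block the handler if nobody is reading
            }
            fmt.Fprintln(w, "ok")
        })

        go func() {
            for v := range values {
                fmt.Println("received flag value:", v) // send/recv here
            }
        }()

        http.ListenAndServe(":8080", nil)
    }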
When graph swapping (which is quite common) we only use the newly made
resource if the Cmp function between the two shows a difference. If the
old resource has previously received a value via send/recv, then when
it is compared to the new resource, it will almost always be different.
As a result, we need to run send/recv on the newly made graph to make
sure it has up-to-date values before we compare. This has to happen
after autogrouping, since resources are often autogrouped, and any
grouped child resource will cause a remake of all of the other children
and parents.
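A toy, self-contained sketch of why the ordering matters (the types and
names here are hypothetical, not the engine API): compare only after
the new resource has been given its send/recv values, otherwise the
comparison is almost always a false mismatch.

    package main

    import "fmt"

    type res struct {
        name     string
        received string // value previously filled in via send/recv
    }

    // cmp reports whether the two resources are equivalent.
    func cmp(a, b *res) bool {
        return a.name == b.name && a.received == b.received
    }

    // sendRecv simulates filling in a received value on a resource.
    func sendRecv(r *res, value string) { r.received = value }

    func main() {
        oldRes := &res{name: "file"}
        sendRecv(oldRes, "payload") // the running resource has data

        newRes := &res{name: "file"} // freshly built from the new graph

        // Naive compare: looks different only because newRes hasn't
        // received anything yet, so we'd swap it in needlessly.
        fmt.Println("before send/recv:", cmp(oldRes, newRes)) // false

        // Correct ordering: run send/recv on the new graph first
        // (after autogrouping), and only then compare.
        sendRecv(newRes, "payload")
        fmt.Println("after send/recv:", cmp(oldRes, newRes)) // true
    }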
It turns out that the actual send/recv properties were being compared
as well, and for unknown reasons (tunnel vision, perhaps) they are often
not identical. Skip comparing these for now, until we find a fix or an
understanding of how to make them identical.