It's not clear if this is absolutely necessary, but it probably doesn't
hurt. It's also not clear whether there is a way to read the previous
state before running this. Of course this isn't really thread-safe, so
use at your own risk.
It was non-trivial to do this, so I put it into a library. Strangely, I
couldn't directly wrap the ReadPassword function from the
golang.org/x/term package, as it wouldn't unblock for some reason.
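For illustration, here's the naive goroutine-plus-select shape (a minimal
sketch, not the library's actual API); since the underlying ReadPassword
call itself doesn't unblock, the library has to do something more
involved than this:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"golang.org/x/term"
)

// readPasswordWithContext is a hypothetical helper: it runs the blocking
// read in a goroutine so the caller can give up early. Note that the
// goroutine itself may linger until the terminal read finally returns.
func readPasswordWithContext(ctx context.Context, fd int) ([]byte, error) {
	type result struct {
		data []byte
		err  error
	}
	ch := make(chan result, 1) // buffered so the goroutine never blocks on send
	go func() {
		data, err := term.ReadPassword(fd)
		ch <- result{data, err}
	}()
	select {
	case <-ctx.Done():
		return nil, ctx.Err()
	case res := <-ch:
		return res.data, res.err
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	fmt.Print("password: ")
	pw, err := readPasswordWithContext(ctx, int(os.Stdin.Fd()))
	fmt.Println()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("read", len(pw), "bytes")
}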
If our machine has that pipe busy, don't panic the test. (We do want it
to fail though!)
We're also more careful to check for a nil object, as a convenience to
help programmers.
This adds the BoundedReadSemaphore mutex that I invented. It's not
necessarily revolutionary, but it is useful. I wish I had a better name
for it, but I couldn't think of anything. It's fairly obvious what it
does, so if you have a suggestion for a better name, please send it
along!
The old system with vendor/ and git submodules worked great.
Unfortunately, FUD around git submodules seemed to scare people away,
and golang moved to a go.mod system that adds a new lock file format
instead of using the version information already built into git. It's
now almost impossible to use modern golang without it, so we've
switched.
So much for the golang compatibility promise: it turns out it doesn't
apply to the useful parts that I actually care about, like this.
Thanks to frebib for his incredibly valuable contributions to this
patch. This snide commit message is mine alone.
This patch also mixes in some changes needed for legacy golang, as we've
bumped the minimum version to 1.16 in the docs and tests.
Lastly, we had to disable some tests and fix up a few other misc things
to get this passing. We've definitely hit bugs in the go.mod system, and
our Makefile tries to work around those.
This is a new path manipulation library that is designed to be safer
than using simple strings for everything. It is more work to use, but it
can help you keep track of the different path types.
It has been sitting unused in a git branch for too long, and I figured
it should see the light of day in case someone wants to start using it.
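As a rough sketch of the idea (the package, type, and method names here
are invented for illustration, not necessarily the library's actual
API), keeping absolute and relative paths as distinct types lets the
compiler catch mix-ups:

package safepath // hypothetical name, for illustration only

import (
	"fmt"
	"path/filepath"
	"strings"
)

// AbsPath is an absolute path and RelPath is a relative one. Since they
// are distinct types, you can't accidentally pass one where the other is
// expected, and each conversion point is explicit.
type AbsPath struct{ path string }
type RelPath struct{ path string }

// ParseAbs validates and cleans an absolute path string.
func ParseAbs(s string) (AbsPath, error) {
	if !strings.HasPrefix(s, "/") {
		return AbsPath{}, fmt.Errorf("not an absolute path: %s", s)
	}
	return AbsPath{path: filepath.Clean(s)}, nil
}

// ParseRel validates and cleans a relative path string.
func ParseRel(s string) (RelPath, error) {
	if strings.HasPrefix(s, "/") {
		return RelPath{}, fmt.Errorf("not a relative path: %s", s)
	}
	return RelPath{path: filepath.Clean(s)}, nil
}

// Join appends a relative path to an absolute one; the result stays absolute.
func (a AbsPath) Join(r RelPath) AbsPath {
	return AbsPath{path: filepath.Join(a.path, r.path)}
}

func (a AbsPath) String() string { return a.path }
func (r RelPath) String() string { return r.path }

A function that only accepts an AbsPath can then never be handed a
relative string by mistake.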
Since we focus on safety, it would be nice to reduce the chance of a
runtime error caused by a typo in a resource parameter. With this patch,
each resource can export constants into the global namespace so that
such typos become compile errors.
Of course in the future if we had a more advanced type system, then we
could support precise types for each individual resource param, but in
an attempt to keep things simple, we'll leave that for another day. It
would also add complexity if we ever wanted to store a parameter
externally.
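As a very rough sketch of the mechanism (the package and method name
here are invented for illustration; the real interface may differ), a
resource would simply publish the constants it exports, and a typo'd
name would then fail to resolve at compile time instead of being a
silently wrong string at runtime:

package resources // illustrative placement only

// FileRes stands in for the real file resource; only the constant
// export is sketched here.
type FileRes struct{}

// ExportedConsts is a hypothetical method: it lists the constants this
// resource exports into the global namespace, keyed by the name the
// language will see. The values shown ("exists", "absent") are the file
// state strings mentioned elsewhere in this series.
func (obj *FileRes) ExportedConsts() map[string]string {
	return map[string]string{
		"state.exists": "exists",
		"state.absent": "absent",
	}
}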
Lastly, we might consider adding "special case" parsing so that directly
specified fields would parse intelligently. For example, we could allow:
file "/tmp/hello" {
state => exists, # magic sugar!
}
This isn't supported for now, but once the other parser changes have
been made, it might be worth revisiting.
This ensures that docstring comments are wrapped to 80 chars. ffrank
seemed to be making this mistake far too often, and it's a silly thing
to look for manually. As it turns out, I've made it too, as have many
others. Now we have a test that checks for most cases. There are still a
few stray cases that aren't checked automatically, but this can be
improved upon if someone is motivated to do so.
Before anyone complains about the 80 character limit: this only checks
docstring comments, not source code length or inline source code
comments. There's no excuse for having docstrings that are badly
reflowed or over 80 chars, particularly if you have an automated test.
This adds a giant missing piece of the language: proper function values!
It is lovely to now understand why early programming language designers
didn't implement these, but a joy to now reap the benefits of them. In
adding these, many other changes had to be made to get them to "fit"
correctly. This improved the code and fixed a number of bugs.
Unfortunately this touched many areas of the code, and since I was
learning how to do all of this for the first time, I've squashed most of
my work into a single commit. Some more information:
* This adds over 70 new tests to verify the new functionality.
* Functions, global variables, and classes can all be implemented
natively in mcl and built into core packages.
* A new compiler step called "Ordering" was added. It is called by the
SetScope step, and determines statement ordering and shadowing
precedence formally. It helped remove at least one bug and provided the
additional analysis required to properly capture variables when
implementing function generators and closures.
* The type unification code was improved to handle the new cases.
* Light copying of Nodes allowed our function graphs to be more optimal
and share common vertices and edges. For example, if two different
closures capture a variable $x, they'll both use the same copy when
running the function, since the compiler can prove they're identical.
* Some areas still need improvements, but this is ready for mainstream
testing and use!
This is a giant cleanup of the etcd code. The earlier version was
written when I was less experienced with golang.
This is still not perfect, and does contain some races, but at least
it's a decent base to start from. The automatic elastic clustering
should be considered an experimental feature. If you need a more
battle-tested cluster, then you should manage etcd manually and point
mgmt at your existing cluster.
This adds a utility function to cancel a context via a closed signalling
channel, and also functions to wrap and unwrap a wait group into and out
of a context.
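Something along these lines (names and exact signatures are
illustrative, not necessarily the real ones):

package ctxutil // illustrative name only

import (
	"context"
	"sync"
)

// ctxWithCloseSignal returns a context that is cancelled once the given
// signalling channel is closed.
func ctxWithCloseSignal(ctx context.Context, closed <-chan struct{}) (context.Context, context.CancelFunc) {
	ctx, cancel := context.WithCancel(ctx)
	go func() {
		defer cancel()
		select {
		case <-closed: // the signalling channel was closed
		case <-ctx.Done(): // or the context ended some other way
		}
	}()
	return ctx, cancel
}

type wgKey struct{}

// wgCtx wraps a wait group into a context...
func wgCtx(ctx context.Context, wg *sync.WaitGroup) context.Context {
	return context.WithValue(ctx, wgKey{}, wg)
}

// ...and wgFromCtx unwraps it again, returning nil if none was stored.
func wgFromCtx(ctx context.Context) *sync.WaitGroup {
	wg, _ := ctx.Value(wgKey{}).(*sync.WaitGroup)
	return wg
}

This keeps the channel-to-cancel plumbing in one place instead of
repeating the select in every caller.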
Simplify working with errors across our code base. Instead of constantly
importing the various error helpers, assemble them all into one package,
and import that as needed.
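For instance, the combined package might look something like this sketch
(the real helpers and their signatures may differ):

package errwrap // one place to import error helpers from

import "fmt"

// Wrapf adds context to an error, passing through nil unchanged.
func Wrapf(err error, format string, args ...interface{}) error {
	if err == nil {
		return nil
	}
	return fmt.Errorf(format+": %w", append(args, err)...)
}

// Append merges two errors into one, tolerating nil on either side.
func Append(err1, err2 error) error {
	if err1 == nil {
		return err2
	}
	if err2 == nil {
		return err1
	}
	return fmt.Errorf("%v; %w", err1, err2)
}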
This adds the ability to test that functions return the expected
streams, and to model this behaviour over time. This is done via a
"timeline" which runs an ordered list of actions that can both push new
values into the function's input stream, and wait for and check
particular outputs.
Hopefully this will make our function implementations more robust!
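Conceptually, a test is just an ordered timeline of steps, roughly like
this sketch (names and value types are illustrative; the real framework
uses the engine's own types):

package functest // illustrative name only

import (
	"fmt"
	"time"
)

// step is one entry in a timeline: it either pushes an input value into
// the function's stream, or waits for the next output and checks it.
type step interface {
	run(in chan<- int, out <-chan int) error
}

// push feeds a new value to the function under test.
type push struct{ value int }

func (s push) run(in chan<- int, out <-chan int) error {
	in <- s.value
	return nil
}

// expect waits for the next output and compares it to what we wanted.
type expect struct{ want int }

func (s expect) run(in chan<- int, out <-chan int) error {
	select {
	case got := <-out:
		if got != s.want {
			return fmt.Errorf("got %d, want %d", got, s.want)
		}
		return nil
	case <-time.After(5 * time.Second):
		return fmt.Errorf("timed out waiting for %d", s.want)
	}
}

// runTimeline plays the steps back in order against the streams.
func runTimeline(steps []step, in chan<- int, out <-chan int) error {
	for i, s := range steps {
		if err := s.run(in, out); err != nil {
			return fmt.Errorf("step %d: %v", i, err)
		}
	}
	return nil
}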
After some investigation, it appears that SocketSet.Shutdown() and
SocketSet.Close() are not synchronous operations. The sendto system call
called in SocketSet.Shutdown() is not a blocking send. That means there
is a race in which SocketSet.Shutdown() sends a message to a file
descriptor to unblock select, while SocketSet.Close() will close the
file descriptor that the message is being sent to. If SocketSet.Close()
wins the race, select is listening on a dead file descriptor and will
hang indefinitely.
The current master works around this by putting SocketSet.Close() inside
of the goroutine in which data from the socket is being received. It
relies on SocketSet.Shutdown() being called to terminate the goroutine.
While this works most of the time, there is still a race here. All the
goroutines can also be terminated by a closeChan. If the goroutine
receives an event (thus unblocking select) and then closeChan is
triggered, both SocketSet.Shutdown() and SocketSet.Close() race, leading
to undefined behavior.
This patch ensures the ordering of the two function calls by pulling
them both out of the goroutine and separating them with a WaitGroup.
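In outline, the fixed ordering looks something like this (a simplified
sketch; method signatures are guessed and error handling is trimmed):

package socketset // illustrative sketch only

import "sync"

// receiver captures just the three calls involved in the race; the real
// SocketSet has more to it.
type receiver interface {
	ReceiveBytes() ([]byte, error)
	Shutdown() error
	Close() error
}

// run receives in a goroutine, and on exit calls Shutdown, waits for the
// goroutine to finish, and only then calls Close, so Close can never
// race with Shutdown.
func run(ss receiver, done <-chan struct{}) error {
	wg := &sync.WaitGroup{}
	wg.Add(1)
	go func() {
		defer wg.Done()
		for {
			data, err := ss.ReceiveBytes() // blocks in select on the fd
			if err != nil {
				return // unblocked by Shutdown(), or a real error
			}
			_ = data // process the event...
		}
	}()

	<-done            // time to exit (eg: closeChan was triggered)
	ss.Shutdown()     // send the wake-up so select returns
	wg.Wait()         // the goroutine is now done touching the fd
	return ss.Close() // only now is it safe to close it
}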
Co-authored-by: James Shubin <james@shubin.ca>
Test to ensure that SocketSet is nonblocking and will close when
SocketSet.Shutdown() is called. Create a SocketSet that will never
receive any data and leave it running in a goroutine with a WaitGroup
for a second. If Shutdown is working correctly, the goroutine will be
terminated after the timer expires.
Add the ReceiveBytes, ReceiveNetlinkMessage, and ReceiveUEvent methods.
This is because not everything passed through a netlink socket can be
reliably parsed using the ParseNetLinkMessage function.
With the ReceiveUEvent method, we add support for "uevent" kernel
events, which update us about the state of devices currently on the
system. To make using this method easier, we add a UEvent struct that
has the Action (what event occurred), Devpath (where the device lives in
/proc or /sysfs), and Subsystem (which subsystem this event belongs to).
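Based on the description above, the struct looks roughly like this
(field comments are paraphrased; the exact definition may differ):

package socketset // the real placement may differ

// UEvent describes a single "uevent" kernel event, as returned by the
// ReceiveUEvent method.
type UEvent struct {
	Action    string // what happened, eg: add, remove, change
	Devpath   string // where the device lives under /proc or /sys
	Subsystem string // which subsystem the event belongs to
}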
This enables imports in mcl code, and is one of the last remaining
blockers to using mgmt. Now we can start writing standalone modules, and
adding
standard library functions as needed. There's still lots to do, but this
was a big missing piece. It was much harder to get right than I had
expected, but I think it's solid!
This unfortunately large commit is the result of some wild hacking I've
been doing for the past little while. It comes from rebasing many "wip"
commits that tracked my private progress into something that's not
gratuitously messy for our git logs. Since this was
a learning and discovery process for me, I've "erased" the confusing git
history that wouldn't have helped. I'm happy to discuss the dead-ends,
and a small portion of that code was even left in for possible future
use.
This patch includes:
* A change to the cli interface:
You now specify the front-end explicitly, instead of leaving it up to
the front-end to decide when to "activate". For example, instead of:
mgmt run --lang code.mcl
we now do:
mgmt run lang --lang code.mcl
We might rename the --lang flag in the future to avoid the awkward word
repetition. Suggestions welcome, but I'm considering "input". One
side-effect of this change is that flags which are "engine" specific
must now be specified with "run", before the front-end name. E.g.:
mgmt run --tmp-prefix lang --lang code.mcl
instead of putting --tmp-prefix at the end. We also changed the GAPI
slightly, but I've patched all code that used it. This also makes things
consistent with the "deploy" command.
* The deploys are more robust and let you deploy after a run
This has been vastly improved and lets mgmt really run as a smart
engine that can handle different workloads. If you've started with `run`
and don't want a new deploy to take over when one comes in, you can use
the --no-watch-deploy option to block new deploys.
* The import statement exists and works!
We now have a working `import` statement. Read the docs, and try it out.
I think it's quite elegant how it fits in with `SetScope`. Have a look.
As a result, we now have some built-in functions available in modules.
This also adds the metadata.yaml entry-point for all modules. Have a
look at the examples or the tests. The bulk of the patch is to support
this.
* Improved lang input parsing code:
I re-wrote the parsing that determines what runs when we pass different
things to --lang. Deciding between running an mcl file or raw code is
now handled in a more intelligent and re-usable way. See the inputs.go
file if you want to have a look. One casualty is that you can't stream
code from stdin *directly* to the front-end; it's encapsulated into a
deploy first. You can still use stdin though! I doubt anyone will notice
this change.
* The scope was extended to include functions and classes:
Go forth and import lovely code. All these exist in scopes now, and can
be re-used!
* Function calls actually use the scope now. Glad I got this sorted out.
* There is import cycle detection for modules!
Yes, this is another DAG. I think that's #4. I guess they're useful.
* A ton of tests and new test infra was added!
This should make it much easier to add new tests that run mcl code. Have
a look at TestAstFunc1 to see how to add more of these.
As usual, I'll try to keep these commits smaller in the future!