I stumbled upon a very elegant interpretation

I had heard Rust described as "compile-time memory management" before, and I liked the term quite a bit. Nonetheless, I didn't realize how true it actually is until I observed something a little while ago.

Niko Matsakis already alludes to this in his blog, referring to "liveness" analysis as the basis for Non-lexical lifetime analysis in Rust.

He gives a lot of details here (http://smallcultfollowing.com/babysteps/blog/2016/05/04/non-lexical-lifetimes-based-on-liveness/) making it clear that RAII and stack allocation (a.k.a. liveness) are essentially the same thing.

http://smallcultfollowing.com/babysteps/blog/2017/02/21/non-lexical-lifetimes-using-liveness-and-location/

That is to say, there is a point in the control-flow graph where the value is created (an "allocation") and a point where it is destroyed (a "deallocation"): an `.allocate()` call and a `.free()` call.

What we want to verify is that every possible path through this program calls `.free()` before `.allocate()`.
-> In particular, this means that we'd like comptime state to branch just like the control flow does

-> This also means that we'd like something similar to a "comptime defer" (which is tightly scoped)

if (b) {
    // x deallocates early here
    comptime var b = false;
    x.deallocate();
    return x;
} else {
    // ...
}

// If control flow doesn't return in the branch, then we either (de)allocate in both branches (at different points), or in neither.
// So basically, returns deallocate always, wherever they appear.

// So we really want our comptime state to be copied in each branch.
// -> Is that true for every abstract analysis? It is, but sometimes the re-join is not as simple as in the case of liveness, where the re-join is guaranteed to be identical.

// And for a loop? Loops won't have deallocation in this case.
// -> For a typical analysis you'd employ a "widening" operator.
// -> Related to the HOAS wishes for reverse-mode AD.

// Could always just work around this by not allowing allocation/frees partially spanning an if-else
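For reference, Rust's drop insertion already implements the branch rule above (returns deallocate always, unless the value is moved out instead). A small sketch, using `Drop` to make the inserted deallocation points visible (`Resource`/`use_it` are made-up names):

```rust
struct Resource(u32);

impl Drop for Resource {
    fn drop(&mut self) {
        // the compiler inserts this call at the point where the
        // value dies on each control-flow path
        println!("freed {}", self.0);
    }
}

fn use_it(early: bool) -> Option<Resource> {
    let x = Resource(1);
    if early {
        // x is moved out on this path, so no drop is inserted here;
        // the caller inherits the deallocation obligation
        return Some(x);
    }
    // x is still owned on this path, so drop(x) is inserted here
    None
}

fn main() {
    let kept = use_it(true); // no drop inside use_it on this path
    drop(kept);              // the caller discharges the obligation
    use_it(false);           // drop ran inside use_it, at x's death point
}
```

Every path frees the `Resource` exactly once, either inside `use_it` or in the caller, which is exactly the "(de)allocate in both branches or in neither" invariant.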

The interesting thing is that liveness analysis is exactly like stack-based allocation. So we might as well say that the compiler is inserting automatic stack allocation and deallocation points into the control flow.

This is really interesting because it explains that non-lexical lifetimes in Rust were really about tightening the "deallocation" points of references to match liveness, rather than conservatively marking them at the end of the lexical scope.
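A concrete illustration (standard Rust behavior since NLL landed): the shared borrow below ends at its last use, not at the end of its lexical scope, so the later mutation is accepted.

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    let first = &v[0];     // shared borrow begins...
    println!("{}", first); // ...and ends here, at its last use (liveness),
                           // not at the closing brace

    v.push(4);             // so this mutable use is allowed under NLL
    assert_eq!(v.len(), 4);
}
```

Under the old lexical borrow checker, `first`'s "deallocation" was conservatively placed at the end of the scope, and `v.push(4)` was rejected.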

In theory, if you already have tight stack-allocation inference for your variables, and a consistent method of inserting the matching allocation and deallocation calls, you get most of this analysis for free.

This makes it much easier to understand Rust's ownership and borrowing rules: they are essentially the same rules required to handle heap memory correctly, but with a recursive set of "local allocators" on the stack.

When a reference is borrowed, it's turned into an allocator for other references and cannot be used until all of those references are freed.
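In Rust terms, that "reference as allocator" reading looks like this (the allocator framing in the comments is my gloss, not official terminology):

```rust
fn main() {
    let mut x = 5;

    let m = &mut x; // x is now an "allocator" that has handed out one
                    // mutable reference; x itself cannot be used
    *m += 1;        // the "allocation" m is live in this region
                    // (reading or writing x here would be an error)

    // m's last use is above, so liveness "frees" it here,
    // returning the capability to x
    assert_eq!(x, 6); // x is usable again
}
```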

This even makes it easier to understand the smorgasbord of pointer types available in Pony.

This also means we can literally prototype this today, by providing something that lets you turn a pointer into an allocator.

TODO: How do we handle moves?
TODO: How do we handle "interior mutability"?
    -> Just return a different type of pointer and don't allow it across threads, right?
    -> Should be similar for Pony's other capabilities
TODO: How do we handle stacked borrows?

-> A box just yells if it's ever de-allocated, and it doesn't allow itself to be allocated into new copies
   -> i.e. its allocator won't give you more of it
-> Moving into a function says "don't de-allocate me"
-> Is this consistent? If I use a mutable reference and pass it in, and it's not returned, that mutable reference is dead, so it deallocates after the function call.
-> If I give a function a box, though, it's gone already and is not considered deallocated - it's been moved
   -> Or, I guess you could say, it's been "safely consumed"
-> These are different because they "box up" the result of an allocation.
-> Does that mean we could...
-> Not happy until it's passed into a function that consumes it (specifically)
-> i.e. this is deallocation "with an obligation" (so it's like a two-step free - which could also model the other pointers, TBH). In Rust, this obligation is the Drop trait and is executed automatically.
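The "box yells if it's ever de-allocated" idea can be prototyped in Rust today with a panicking `Drop` impl plus a consuming function (the `Obligation`/`discharge` names are made up for illustration):

```rust
struct Obligation;

impl Drop for Obligation {
    fn drop(&mut self) {
        // "yells" if it is ever deallocated without being discharged
        panic!("obligation dropped without being consumed");
    }
}

// taking Obligation by value is the obligation-discharging step:
// the caller's value is moved in, i.e. "safely consumed"
fn discharge(o: Obligation) {
    std::mem::forget(o); // the consuming function suppresses the drop
}

fn main() {
    let o = Obligation;
    discharge(o); // ok: moved into the consumer, "don't de-allocate me"
    // if we had let `o` go out of scope instead, drop() would panic
}
```

This is the two-step free: the first step is the move, and the second (suppressing or running the destructor) belongs to whoever received the value.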

			-> Same thing is true for our pointers, EXCEPT
			    that mutable pointers act as allocators (of mutable pointers and shared pointers)
			    and shared pointers defer to their upstream allocator

			-> It _seems_ like this does it _without_ sub-structural logics. Interesting...
			        -> Technically, it encodes a copy/cast as a compile-time allocation _from_ a type

				-> It also suggests that "with an obligation" can get us most of Rust.
				   The key open question is the if { ... } else { ... } handling, which I despise,
				   but an "obligation" encoding would sort of side-step the abstract-interpretation desires.
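A quick Rust illustration of that chain: the mutable pointer hands out a shared reborrow, and it can't be used again until that reborrow is dead (the "allocator" language in the comments is my framing):

```rust
fn main() {
    let mut x = 0;

    let m = &mut x;     // m is an allocator of &mut and & references to x
    let s1: &i32 = &*m; // a shared reborrow, "allocated" from m
    println!("{}", s1); // while s1 is live, m defers to it: using *m here
                        // would be an error

    *m += 1;            // s1 is dead, so m is usable again
    assert_eq!(x, 1);   // and once m is dead, x is usable again
}
```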

On the one hand, this shows that the "core" of Rust is really just liveness analysis (which, in theory, is already buried deep in the compiler somewhere) plus "function obligations" - that is, `comptime RAII`. This gives us a couple of options to adopt a similar OBS system in Zig:

If we only get "function obligations":
     (a) Make it _all_ manually managed. The user _must_ call free on the element they borrowed from

	 The main thing that is saved here is that we don't need any hidden comptime function calls or user
	 hooks into the liveness tracking system.

    If we also get the ability to ask for liveness information:
     (b) Support "comptime RAII". This will ask the compiler to insert comptime calls to the appropriate "allocator",
         based on each variable's liveness

	 Note: this is basically a `comptime defer`, except that we'd want it to be an obligation given
	 by the function passing to you, and we'd want the "comptime deallocation" to be relatively tight
	 (although Rust went a long time without non-lexical lifetimes, so maybe not)

           -> Even this requires some comptime magic to handle branches correctly
	       TODO: Check the "compositional formula" for semantics, 
		but I think copy is universal, while the widening/join after the fact is less certain
		   -> Would it be suitable for now to require that both branches reach the same conclusion?
		   -> That OR we could dig back into traits again and really try to define this 
		      as a consistent abstract interpretation...

	 Function types would need some way to specify the "pseudoallocation" trait
	 (which is really a cast/copy, with a wait for things to return)

           If Zig doesn't wish to support RAII in general

This really does confirm something interesting to me:
	- Parameterizing a function by an allocator (even a comptime one) is like providing a comonad, and 
	  this can enforce very strong invariants
	- It does seem like this analysis could all be done "after the fact", but it requires a special type-synthesis
	  algorithm that, e.g. knows whether a mutable or immutable reference is required by a function
	     -> This is really just casting *T and *const T to &mut T and &T, though - Not that bad at all, since
	        const-ness is considered a very "inherent" property

	- Hmm, my hopes for colorlessness are being rapidly dashed............


This betrays the "evil" of Zig's current freely-copying type system. 

TODO: Can this handle alias types? Probably yes?

           -> So arguments passed into functions are like "temporary allocations", and so are their return types
           -> If it comes back, it's considered "allocated" by the function below us

HOAS for Zig is almost certainly necessary for abstract interpretation... :(
-> Unless we can find a trick to solve the "binder problem"
-> Isn't one-pass re-composition that trick exactly?
-> Kind of, but it's not clearly specified and, really, it's kind of a strange rewrite rule

There's some flaws with this idea:

Zig's handling of mutable comptime state in if-else conditionals is inconsistent with this approach:

  Specifically, we'd ideally like to _copy_ the comptime state of x in each branch (and then join it at the end). This is the approach followed by abstract interpretation, which is effectively what we're doing here.

  What happens if x is borrowed in one branch but alive in the other?
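For what it's worth, Rust's NLL handles exactly this case by tracking borrows per control-flow path and joining at the merge point; a borrow that dies inside its branch leaves nothing outstanding at the join:

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let flag = v.len() > 2;

    if flag {
        let r = &v;           // borrowed in this branch...
        println!("{}", r[0]); // ...but the borrow dies at its last use,
                              // before the branches re-join
    } else {
        v.push(4);            // alive (and mutated) in this branch
    }

    // at the join point both abstract states agree: no outstanding borrows
    v.push(5);
    assert_eq!(*v.last().unwrap(), 5);
}
```

So the "copy per branch, then join" discipline is workable here precisely because the liveness-based join is trivial: a dead borrow contributes nothing to either side.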

In fact, we can even look at this as a simulation (technically called abstract interpretation) of the following "stack allocator":

Stack allocator derived from trait Handle<T, :x> {
    borrowed_mut: bool = false;
    shared_cnt: usize = 0;

    // This would be a "standard" scoped allocator:

    inline fn allocate(comptime T: type) {
        if (T.is_mutable_ref() and self.shared_cnt == 0 and !self.borrowed_mut) {
            self.borrowed_mut = true;
        } else if (!T.is_mutable_ref() and !self.borrowed_mut) {
            self.shared_cnt += 1;
        }
        return self;
    }

    inline fn free(comptime T: type) {
        // How do I check that all the borrowed references have been returned?
        // Literally with reference counts, right?
        // Holy crap, yes, that's one way!

        if (T.is_mutable_ref()) {
            self.borrowed_mut = false;
        } else {
            self.shared_cnt -= 1; // one shared borrow returned
        }
    }

    // mutable pointer can't be used while borrowed
    inline fn get() {
        if (self.shared_cnt > 0 or self.borrowed_mut) {
            abort!();
        }
        return self;
    }

    inline fn created() {
        @localVariable(:x).allocate(Self);
    }

    inline fn freed() {
        @localVariable(:x).free(Self);
    }
}
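This state machine is easy to prototype at runtime. Here's a rough Rust sketch (essentially a hand-rolled `RefCell`; the `Handle` struct and method names are made up), where allocate/free mirror the transitions above:

```rust
#[derive(Default)]
struct Handle {
    borrowed_mut: bool,
    shared_cnt: usize,
}

impl Handle {
    // hand out a mutable borrow: legal only with no outstanding borrows
    fn allocate_mut(&mut self) -> bool {
        if self.shared_cnt == 0 && !self.borrowed_mut {
            self.borrowed_mut = true;
            true
        } else {
            false
        }
    }

    // hand out a shared borrow: legal only with no mutable borrow
    fn allocate_shared(&mut self) -> bool {
        if !self.borrowed_mut {
            self.shared_cnt += 1;
            true
        } else {
            false
        }
    }

    // return a borrow (the "free" in the pseudocode)
    fn free_mut(&mut self) {
        self.borrowed_mut = false;
    }

    fn free_shared(&mut self) {
        self.shared_cnt -= 1;
    }

    // "mutable pointer can't be used while borrowed"
    fn usable(&self) -> bool {
        self.shared_cnt == 0 && !self.borrowed_mut
    }
}

fn main() {
    let mut h = Handle::default();
    assert!(h.allocate_shared());
    assert!(h.allocate_shared());
    assert!(!h.allocate_mut()); // rejected: shared borrows outstanding
    assert!(!h.usable());
    h.free_shared();
    h.free_shared();
    assert!(h.allocate_mut()); // now legal
    h.free_mut();
    assert!(h.usable());
}
```

The borrow checker is then the comptime abstract interpretation of exactly this runtime machine, which is what the "simulation" framing above claims.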

// Two dual views:
// Once a variable x is borrowed, it must wait until all of its children are gone before it can be used
//   -> This can be a for loop on the liveness of the variables
// A variable y can only be used as long as its parent x is still alive and de-activated?

// A use needs to make sure my children are dead and that my parents are alive -> simple as that
// -> Does Zig have problems keeping parents alive for borrowed variables? That would explain much of the interaction with liveness...

  //     Ah, there's the rub: normal liveness won't actually realize that a variable location has been taken
  //     -> But that's easily included, right? Yes, why not (well, return values that extend the life of a parameter are one example)
  //     -> Fuck, good idea dead again :(

// Let's think it over some more in the morning

// Have to argue: Why this respects loop invariants (it's not obvious it would)

trait StackAllocator<T,:x>