Moving beyond Terra and the "comp-time Python" approach.

Ok, so the idea is that every function call can be checked on its own, with the result memoized

Oh, fuck. Hold on. If we don't modify the comptime state, then memoization should save us (the analysis is truly compositional)

If we do modify the comptime state though, this becomes whole-program analysis, which will not do...

In particular, some analyses are valid as whole-program analyses but run much faster when weakest preconditions can be specified at the function level

So maybe instead of comptime state changing over time, we should be thinking about a type check between one comptime object (used to instantiate the function) and another...

Yes, that would fit the bill for a lot of what we do...

Hmm, this is nothing other than a cast! Fascinating!! So then, how do we go about implementing that?
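A minimal sketch of the cast-as-type-state idea (all names here — `TypeState`, `cast`, the particular states — are invented for illustration, not a real API):

```python
# Sketch: a "cast" is the only type-state transition, and it is legal
# only if the target state is a declared successor of the current one.

class TypeState:
    def __init__(self, name, transitions):
        self.name = name
        self.transitions = transitions  # names of states we may cast into

def cast(value_state, target):
    # The cast itself is the type check: reject undeclared transitions.
    if target.name not in value_state.transitions:
        raise TypeError(f"cannot cast {value_state.name} -> {target.name}")
    return target

# Example states: an owned pointer may become borrowed or dead,
# but a dead one may not come back.
dead     = TypeState("dead", set())
borrowed = TypeState("borrowed", {"dead"})
owned    = TypeState("owned", {"borrowed", "dead"})
```

This frames "modifying comptime state" as nothing more than replacing one comptime object with a compatible successor.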

Proposal: Support type-state

-> Make cast lazy (T.cast(target: type))
	-> Now, what prevents us from casting x to _two_ mutable pointers?
		-> Well, this is where I would like some comptime state
	-> So cast will prevent us from re-monomorphizing the function, BUT it does need to modify comptime
	   state in x to verify its usage

	-> Furthermore, this means that T.cast(T) 

	-> What if we exploit memoization _by_ casting the value? Hmm...
	   -> Like a wrapper function around deref
	   -> cast and copy would be part of the same operation set
	      -> Biggest problem: copy would still be part of the "default" ops on the pointer
	         (which is why linearity is such a nice base to work from normally)
	         -> This means that we'd have no way to "turn it off"
		        -> We'd need to opt out of the default "copy" operator, basically
		 	-> Except maybe by treating it as a default implementation that we can modify?
		 -> It _is_ much nicer to think about it as a type that doesn't support copying, rather
		    than a `linear` type or whatever
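A sketch of the "default implementation that we can modify" angle — copy is a default operation that a type opts out of, rather than linearity being baked into the base (`Defaults`/`NoCopy` are made-up names):

```python
# Sketch: "copy" as a default operation in the type's op set,
# which a particular type can override away.

class Defaults:
    copyable = True   # copy is part of the default operation set

class NoCopy(Defaults):
    copyable = False  # opting out of the default "copy" operator

def copy(ty, value):
    # The comptime check: refuse to copy a type that turned copy off.
    if not ty.copyable:
        raise TypeError(f"{ty.__name__} does not support copying")
    return value
```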

	-> one big hack: .cast(.{x, y, z}, .{T, T, T});
	    -> We are allowed to turn x into a new x, while "deactivating" the original x

	    -> Wait, what's wrong with my cast again? Sometimes I do need to "move" the variable by de-activating it...
	         -> The problem right now is that without sub-structural typing, I am going to lose track of my x somewhere
		    -> Right? Or is the only problem when I call a function?
		       -> Hmm, if all of my x's implicitly refer to their original location, maybe this is actually OK!
		           -> So then, if it's a certain kind of function, we move into the function (casting all the arguments)
			      then we move back out (casting all the arguments back)

MutablePointer(T, l)
	-> can be converted into many borrowed pointers (for the duration of a function call)
	-> or into a single mutable pointer (which is given back after the function call)
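Those two conversions can be sketched as a tiny comptime checker (`MutPtr` and its methods are invented for illustration):

```python
# Sketch: a mutable pointer can be split into many shared borrows,
# or lent out as a single mutable borrow that must be given back.

class MutPtr:
    def __init__(self):
        self.shared = 0        # count of live shared borrows
        self.lent_mut = False  # is the single mutable borrow out?

    def borrow_shared(self):
        if self.lent_mut:
            raise TypeError("already mutably lent")
        self.shared += 1

    def borrow_mut(self):
        # Only one mutable borrow, and no shared borrows alongside it.
        if self.lent_mut or self.shared:
            raise TypeError("already borrowed")
        self.lent_mut = True

    def give_back_mut(self):
        # The mutable borrow is returned after the call.
        self.lent_mut = False

    def end_shared(self):
        self.shared -= 1
```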

Very simple Hoare-like logic, really
	-> So what if we had a function designed to propagate Hoare triples? {P} f {Q}
	-> And we'll make this object-oriented: f(HoareTriple(T))
	   -> P = the receiving type can be cast to something
	   -> Q = the type afterwards can be something

  • Rust's logic is simple enough that dependencies between these are straightforward. The separating conjunction (the star) is not obvious here, but this might still be good enough (no chains of mutable points-to, points-to, points-to to track)

  • So what I really want is something that inspects the type of a function, verifies that the pre-condition matches and then possibly transforms the state to reflect the post-condition. Oh my!

  • For a mutable pointer, this means: check that I'm not already mutably borrowed; if I'm only shared-borrowed, carry on -> For a moved object, this means: check that I can be moved, then mark me dead afterward

so in this case, I receive x and I update some state in its type
   and then we cast to a type that is deemed "compatible" (but not necessarily the same)
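A minimal sketch of that inspect-verify-transform step, with invented state names and a hard-coded compatibility table standing in for the real type check:

```python
# Sketch: a function signature carries a (pre, post) pair per argument;
# "calling" it checks the precondition cast and applies the postcondition.

def apply_triple(state, pre, post):
    # P: the receiving type-state must be castable to `pre`.
    compatible = {
        ("owned", "owned"),
        ("owned", "borrowed"),
        ("borrowed", "borrowed"),
    }
    if (state, pre) not in compatible:
        raise TypeError(f"precondition failed: {state} is not {pre}")
    # Q: afterwards, the argument's type-state becomes `post`.
    return post

# A borrowing call leaves the state intact; a move kills it.
assert apply_triple("owned", "borrowed", "owned") == "owned"
assert apply_triple("owned", "owned", "dead") == "dead"
```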

Within the "whole program", we see this as x is cast to something, either preserving the original type, or disturbing it

I need to think this through more completely still

// Mutable: "every use of the reference (and everything derived from it) must occur before the next use of the referent (after the reference got created)"
// Shared: "every use of the reference (and everything derived from it) must occur before the next mutating use of the referent (after the reference got created), and moreover the reference must not be used for mutation."
// We -1 to convert to a different pointer, but how do we know when we get it back?
//   -> Ah, pointer invalidation requires this to be very strict, since we assume any mutation can actually cause a move

// So when I create a borrow, it's a lot more like a loan obligation, where I need that borrow "returned to me"
//   -> If the function returns without a lifetime, then everything is fine
//   -> If the function returns with a lifetime, then I mark myself sort of half-dead

  // half-dead means that if you use me and then use the return value, we explode
     // So really I save a comptime reference to y as a "lingering clone"
     //    and if I get used again I mark my lingering clones as dead

 //  This really _is_ compile-time GC: I need to report to my heap (the pointer I'm from)
 //     that I'm dead so it can exist again

 // This is _very_ verbose, but technically could be done with a "comptime dealloc noop", which
 // would report x as safe to use again

 // WAIT, so can I literally do Rust but with comptime manual "allocation" and "de-allocation"?
 // All of the extra fanciness is really just to infer when the variable is dropped...

 // WAIT, does that mean that a "comptime stack allocator" would be enough???
 // Holy shit, it seems like it would..........
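A sketch of the "comptime allocator" idea, with the no-op comptime dealloc reporting the original as safe to use again (`ComptimeAlloc`/`Borrow` are invented names):

```python
# Sketch: each variable is a comptime allocation; a borrow records a
# loan against it; "deallocating" the borrow (a no-op at runtime)
# reports the original as usable again.

class ComptimeAlloc:
    def __init__(self, name):
        self.name = name
        self.loans = set()   # live borrows taken from this allocation
        self.dead = False    # has this allocation been moved/dropped?

    def use(self):
        if self.dead or self.loans:
            raise TypeError(f"{self.name} is not usable here")

class Borrow:
    def __init__(self, source):
        source.loans.add(self)
        self.source = source

    def dealloc(self):
        # Comptime no-op: just return the loan to its source.
        self.source.loans.discard(self)
```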


 // And does that mean we can easily implement the auto-debug-tracking approach? Almost certainly yes.
 // Let's chill and just calmly think this one over soon

 // We already have logic that carefully measures the lifetime of a variable on the stack

 // Pointer(A).get_mut() -> marks x as "borrowed", and asks the compiler to notify us when the resulting object dies
 //                          -> This is _also_ RAII, but we can disallow that

 // Doesn't this mean that borrow checking is just type-states + RAII? Potentially, yes

 // Is a lifetime just a reference to the stack allocation we were borrowed from? Essentially, yes!
 //    -> And then if I can move that obligation to somewhere else, we are good to go...

 // WOW -> So all the borrow rules are basically applying comptime logic on top of the existing stack allocator

     // Weirdness -> Marking the parameters that go into functions as dying or not
     //   -> It's perfectly sufficient to say that the first input parameter is killed automatically, but 
     //      then the return creates a new one, so that will have to be included somehow
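A sketch of that kill-then-recreate bookkeeping, assuming a plain dict stands in for the comptime allocator's state (`call_by_move` is an invented name):

```python
# Sketch: calling a by-move function kills the argument's comptime
# allocation, and the return creates a fresh live one in its place.

def call_by_move(env, arg, result):
    if env.get(arg) != "live":
        raise TypeError(f"{arg} cannot be moved from state {env.get(arg)}")
    env[arg] = "dead"       # the first input parameter is killed
    env[result] = "live"    # the return creates a new allocation
```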

     // shared references re-use the "comptime allocator" of their mutable parent
     // -> the allocator for an argument is just the "context"
          // (the function provides a way of sourcing these, but it will never give you two mutable pointers)
	  // (and it expects you to prove that you give them all back)
	  //    Wow, this is very effect/co-effectual...

     	// -> Does "pointers as allocators" generalize some of Pony's crazy bullshit? Probably!!
	       // Not totally clear how to mark cross-agent vs. same-agent functions - perhaps a thread id of some kind?

     //   The "lifetime" is _really_ a compile-time reference to x

     // Simple as that, right?

 // So it really is kind of "stack-like"
    // -> In fact, this can probably be implemented with a comptime stack
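Since borrow lifetimes nest, a plain stack really does suffice: push when a borrow is created, require pops in LIFO order, and an out-of-order end is exactly an escaping borrow (sketch with invented names):

```python
# Sketch: borrow lifetimes nest, so a comptime stack can track them.

borrow_stack = []

def begin_borrow(name):
    borrow_stack.append(name)

def end_borrow(name):
    # LIFO discipline: a borrow must end before any borrow created
    # earlier than it does.
    if not borrow_stack or borrow_stack[-1] != name:
        raise TypeError(f"borrow {name!r} does not end in stack order")
    borrow_stack.pop()
```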



	-> Hmm, maybe modalities and capabilities coincide?


	-> Might need a new name, but this sounds a lot like comonads/monads
		-> We'd like to allow effects in both directions
			(I call a function, which changes my type)
			    -> Technically, this is like return type inference
			      -> The other modifications we're talking about are "observationally pure" since
			         they are just borrowing!
			          -> However, moves _are_ moves. These _do_ need to change type-state
			    -> It's more like I want to evaluate the function call "all together"
			       and check the arguments that way...
			(A function is parametric in the type I give it) <- We already have this one
			(I call a function, which changes my return type?) <- We already have this one

			-> I may also want to infer a function's type, based on the signature of its internal
			functions (this is perfectly composable, although long-distance, like return type inference)

			-> Need to have a sense of "before/after" the function call

			-> We'd also like reverse type-flow
				-> Can we make it bidirectional?

			-> If we had linearity, it would seem like we could just do this like casting, but we need
			   some kind of connection between the two instances in order to enforce uniqueness

			-> Another option to consider: explicit linear/move semantics

So let's consider return values and lifetimes and we might be done?

Most of the other changes seem like they can be done atomically (i.e. literally cast x to a different type in place, as usual)
	-> "method-casting", rather than binding the object and making its dispatch flow-sensitive
	-> We already have to do this for other structurally-compatible types