Closes #654
This PR fully replaces our IIFEs with preceding statements. To do this, it uses a stack-based system akin to the one described in roblox-ts/roblox-ts#537. Statements can be added before the expression currently being transformed using `context.addPrecedingStatements()`. If nodes higher up want to catch and manipulate these statements, they can do so by pushing and popping a statement collection with `context.pushPrecedingStatements()` and `context.popPrecedingStatements()`.
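In practice, a transformer uses these like the following minimal sketch (the factory helpers, import paths, and exact signatures here are assumptions for illustration; only the add/push/pop flow is what this PR actually adds):

```ts
import * as ts from "typescript";
import * as lua from "../LuaAST"; // illustrative import paths
import { TransformationContext } from "../context";

// Lift a side-effecting value into a temp emitted before the current statement.
function lowerToTemp(context: TransformationContext, node: ts.Expression): lua.Expression {
    const value = context.transformExpression(node);
    const temp = lua.createIdentifier("____temp"); // real temp names are generated, not hard-coded
    // Emits `local ____temp = <value>` before the statement currently being built.
    context.addPrecedingStatements(lua.createVariableDeclarationStatement(temp, value));
    return temp;
}

// A parent node that needs to re-position its child's preceding statements
// (e.g. inside a loop body) can capture them with push/pop.
function transformChildAndCaptureStatements(context: TransformationContext, node: ts.Expression) {
    context.pushPrecedingStatements();
    const expression = context.transformExpression(node);
    const precedingStatements = context.popPrecedingStatements();
    return { expression, precedingStatements };
}
```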
Significant refactoring was required in a few places to pull this off. Call expression handling has been modified a lot, but it should still be understandable. Dealing with assignments was particularly tricky, though, as there's a complex web of cases (especially destructuring assignments). At some point I imagine a full rewrite of assignment handling might be needed to keep it understandable and maintainable, but I'm not even sure what that would look like right now.
Evaluation Order
The biggest challenge with this system is maintaining evaluation order in statements with multiple expressions. Consider this example:
```ts
function foo(a: number, b: number) {
    let i = 0;
    let o = {
        method(...args: unknown[]) { /* ... */ }
    };
    function bar(): number {
        // Could modify a, b, i, o, or o.method!
        return 0;
    }
    o.method(a, b, i += bar());
}
```
Once `i += bar()` is lifted into a preceding statement, any expressions before it in the statement could be affected, and thus need to be cached in temps ahead of time to maintain evaluation order.
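At the expression-list level, the rule looks roughly like this minimal sketch (the context API shapes and the `moveToPrecedingTemp` helper are illustrative, not necessarily the PR's actual code):

```ts
// Caches an already-transformed Lua expression in a temp via a preceding statement.
function moveToPrecedingTemp(context: TransformationContext, expression: lua.Expression): lua.Expression {
    const temp = lua.createIdentifier("____temp"); // real temp names are generated per expression
    context.addPrecedingStatements(lua.createVariableDeclarationStatement(temp, expression));
    return temp;
}

function transformOrderedExpressions(
    context: TransformationContext,
    expressions: readonly ts.Expression[]
): lua.Expression[] {
    const transformed: lua.Expression[] = [];
    for (const expression of expressions) {
        // Capture any preceding statements this expression generates.
        context.pushPrecedingStatements();
        const result = context.transformExpression(expression);
        const precedingStatements = context.popPrecedingStatements();

        if (precedingStatements.length > 0) {
            // The captured statements may read or write values used by earlier
            // expressions, so cache those earlier results in temps first to
            // preserve left-to-right evaluation order.
            for (let i = 0; i < transformed.length; i++) {
                transformed[i] = moveToPrecedingTemp(context, transformed[i]);
            }
            context.addPrecedingStatements(precedingStatements);
        }

        transformed.push(result);
    }
    return transformed;
}
```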
Unfortunately, without deeper analysis of the preceding statements being generated, we have to do this caching aggressively, even in cases that don't seem to need it on visual inspection. For example:
```ts
declare function foo(x: number): void;
let i = 0;
foo(i++);
```
Transpiles to:
```lua
local i = 0
local ____foo_2 = foo
local ____G_1 = _G
local ____i_0 = i
i = ____i_0 + 1
____foo_2(____G_1, ____i_0)
```
Both `foo` and even `_G` have to be cached in temps because we can't guarantee that the preceding statements generated by `i++` won't modify them. It's possible we could add some specific optimizations for cases like this to reduce the temps, but it would be tricky and a bit messy. Also, thanks to operator overloading, it is possible `i++` could have actually modified those values.
`const` variables won't be cached like this, however. Those looking to reduce temps should prefer `const` over `let` whenever possible.
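For example (input only; the exact output shape depends on the generated temp names):

```ts
declare function foo(a: number, b: number, c: number): void;
declare function bar(): number;

let i = 0;

const a = 1; // const: can never be reassigned, so it is read directly at the call site
let b = 2;   // let: has to be cached in a temp before the statements lifted from `i += bar()` run

foo(a, b, i += bar());
```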
Sparse Array Lib
To help with situations which could generate extreme numbers of temps, this PR also adds some new lib functions for manipulating sparse arrays. Consider this:
```ts
foo(a, b, c, d, e++, f, g);
```
Instead of caching all of those variables in temps, we can push them into an array while preserving evaluation order and unpack them into the call's arguments. But since any of these values could be nil, we use a custom array structure that tracks its true size.
```lua
local ____foo_2 = foo
local ____array_1 = __TS__SparseArrayNew(_G, a, b, c, d)
local ____e_0 = e
e = ____e_0 + 1
__TS__SparseArrayPush(____array_1, ____e_0, f, g)
____foo_2(
    __TS__SparseArraySpread(____array_1)
)
```
This technique can also be used to handle spread expressions in the middle of an argument list, so it also fixes #1142 as a bonus.
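Conceptually, the three helpers behave something like this sketch (illustrative TypeScript only, not the actual lualib implementation; on the Lua side the length has to be tracked explicitly because `#` can't see trailing nils):

```ts
// A list that remembers its true length, including trailing holes.
interface SparseArray<T> {
    values: Array<T | undefined>;
    sparseLength: number;
}

function sparseArrayNew<T>(...args: Array<T | undefined>): SparseArray<T> {
    return { values: args, sparseLength: args.length };
}

function sparseArrayPush<T>(array: SparseArray<T>, ...args: Array<T | undefined>): void {
    for (const value of args) {
        array.values[array.sparseLength] = value;
        array.sparseLength++;
    }
}

function sparseArraySpread<T>(array: SparseArray<T>): Array<T | undefined> {
    // Roughly `unpack(array, 1, array.sparseLength)` on the Lua side: return
    // exactly sparseLength slots, holes included.
    return array.values.slice(0, array.sparseLength);
}
```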
Right now, argument lists and array expressions that would generate 3 or more temps when transformed will fall back to this behavior. That threshold can be adjusted if needed.
Other Notes
- Some helpers for generating useful temp names have been added to make the output Lua a bit more readable
- Some things like optional chaining could be reworked to take advantage of this, but I'll save those for separate PRs
- This also fixes #1127