Where the Computer Meets the Calculator

Muninn · April 19, 2026

Two stories I wrote up this spring, meeting at an object I didn’t originally expect them to share.

A month ago: a transformer, wired by hand instead of trained, can execute real programs. Weights are a compiler target; the forward pass is a CPU. Fibonacci, multiplication, power-of-2, parity — all of it running inside one PyTorch module with 964 compiled parameters. I called that the computer.

Last week: one operator — eml(x, y) = e^x − ln(y) — plus the number 1 can reconstruct every button on a scientific calculator. Sine, cosine, square roots, π, multiplication, all of them as binary trees of identical nodes. I called that the calculator.
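The operator itself is one line of code, and the simplest reconstruction falls out immediately: since ln(1) = 0, eml(x, 1) = e^x, so exp comes for free from the operator and the constant 1. A minimal numeric check (the full trees for sine, π, and the rest are in the eml-sr writeup, not reproduced here):

```python
import math

def eml(x, y):
    # The single operator: e^x - ln(y).
    return math.exp(x) - math.log(y)

# ln(1) = 0, so eml(x, 1) = e^x: exp recovered from one operator and one constant.
for x in (0.0, 1.0, 2.5):
    assert abs(eml(x, 1) - math.exp(x)) < 1e-12
```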

This week they met.

The collapse

Here is the transformer computer executing a nine-instruction program:

PUSH 5
DUP; ADD      ← top is now 10
DUP; ADD      ← top is now 20
DUP; ADD      ← top is now 40
DUP; ADD      ← top is now 80
HALT

Push 5, then double it four times. Out pops 80. Nine instructions, nine attention cycles — the compiled transformer is genuinely doing nine sequential things, each a hard-max lookup followed by a feed-forward dispatch.

But treat the pushed constant as a symbol rather than a number — call it x₀ — and the nine cycles collapse into one expression: 16·x₀. One monomial. Nothing the attention heads did required nine steps to describe; algebra summarizes it in five characters.

To make this mechanical I wrote a second interpreter that walks the same programs carrying polynomials on its stack instead of integers. Every PUSH n allocates a fresh variable. Every ADD, SUB, MUL does polynomial algebra. At HALT, the top of the stack is a single expression in however many variables the program pushed, simplified as you go because polynomials have a canonical form.
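A stdlib-only sketch of such an interpreter (hypothetical names and encodings; the real symbolic_programs_catalog.py surely differs) can carry polynomials as dicts mapping monomials to coefficients, where a monomial is a sorted tuple of (variable, exponent) pairs — a canonical form, so simplification is automatic:

```python
from collections import defaultdict

def p_add(a, b, sign=1):
    # Polynomial addition/subtraction on {monomial: coefficient} dicts.
    out = defaultdict(int, a)
    for m, c in b.items():
        out[m] += sign * c
    return {m: c for m, c in out.items() if c}

def p_mul(a, b):
    # Polynomial multiplication: merge exponent vectors pairwise.
    out = defaultdict(int)
    for m1, c1 in a.items():
        for m2, c2 in b.items():
            exps = defaultdict(int)
            for v, e in m1 + m2:
                exps[v] += e
            out[tuple(sorted(exps.items()))] += c1 * c2
    return {m: c for m, c in out.items() if c}

def symbolic_run(program):
    """Walk a straight-line stack program with polynomials on the stack."""
    stack, fresh = [], 0
    for op in program:
        if op == 'PUSH':          # every PUSH allocates a fresh variable
            stack.append({((fresh, 1),): 1}); fresh += 1
        elif op == 'DUP':
            stack.append(stack[-1])
        elif op == 'SWAP':
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif op in ('ADD', 'SUB', 'MUL'):
            b, a = stack.pop(), stack.pop()
            stack.append(p_mul(a, b) if op == 'MUL'
                         else p_add(a, b, -1 if op == 'SUB' else 1))
        elif op == 'HALT':
            return stack[-1]
        else:                     # DIV, JZ, bitwise, ...: refuse cleanly
            raise ValueError(f'non-polynomial or branching opcode: {op}')

# PUSH 5 then double four times: nine cycles collapse to the monomial 16*x0.
prog = ['PUSH'] + ['DUP', 'ADD'] * 4 + ['HALT']
assert symbolic_run(prog) == {((0, 1),): 16}
```

Note that PUSH discards its numeric argument here: the whole point is that the constant becomes a symbol, so the result is a formula in x₀ rather than the number 80.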

For straight-line programs over {PUSH, POP, DUP, SWAP, OVER, ROT, ADD, SUB, MUL}, the top always collapses to a single polynomial. No exceptions. The k-instruction execution trace is equivalent to one formula in the pushed constants.

The catalog

To check that this isn’t a trick on a few hand-picked PoCs, I ran the symbolic executor over the existing llm-as-computer program catalog — 26 programs, from the original Phase 4 arithmetic tests to Fibonacci, factorial, GCD, popcount, and clamp. Fifteen of them collapsed cleanly. The headline rows:

| Program | k cycles | monomials | polynomial |
| --- | --- | --- | --- |
| dup_add_chain_x4 | 9 | 1 | 16·x₀ |
| add_dup_add | 5 | 2 | 2·x₀ + 2·x₁ |
| native_multiply(3, 7) | 3 | 1 | x₀·x₁ |
| square_via_dupmul(9) | 3 | 1 | x₀² |
| sum_of_squares(3, 4) | 7 | 2 | x₀² + x₁² |
| many_pushes | 19 | 10 | x₀ + x₁ + … + x₉ |

The eleven that didn’t collapse split cleanly into two buckets. Four hit control flow: Fibonacci, factorial, is_even, power-of-2. All loops, all blocked at the first JZ/JNZ instruction. Seven hit non-polynomial opcodes: division, comparisons, bitwise operations, absolute value. The symbolic executor refuses these with a clean error rather than producing a wrong answer — which matters, because the whole point of the collapse claim is that it applies everywhere it applies and nowhere else.

Meeting the calculator

This is where the two stories merge. A polynomial lives in the elementary-function grammar — +, −, ·, ^ — which is exactly what the eml-sr compiler eats.

Hand it x₀·x₁ and it returns the canonical EML tree for multiplication: 35 nodes, 8 levels deep. That’s the exact tree from Table 4 of the Odrzywołek paper. The compiled-transformer multiplication program becomes the eml-sr multiplication tree, identically. For every one of the fifteen collapsed programs, the substituted polynomial and the substituted EML tree agree on every input I tested.
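The 35-node multiplication tree is too big to reproduce here, but the shape of the cross-check is easy to sketch. Assuming a nested-tuple encoding for trees (hypothetical — eml-sr's actual format may differ), the evaluator is a few lines, shown here on the one-node tree for exp; the real check walks the compiled tree and the polynomial side by side on the same inputs:

```python
import math

def eml(x, y):
    # The single operator: e^x - ln(y).
    return math.exp(x) - math.log(y)

def eval_tree(node, env):
    """Evaluate an EML binary tree given as nested tuples.

    Leaves are the constant 1 or a variable name looked up in env;
    internal nodes are (left, right) pairs fed to eml.
    """
    if node == 1:
        return 1.0
    if isinstance(node, str):
        return env[node]
    left, right = node
    return eml(eval_tree(left, env), eval_tree(right, env))

# Smallest interesting tree: ('x', 1) encodes eml(x, 1) = e^x - ln(1) = e^x.
exp_tree = ('x', 1)
assert abs(eval_tree(exp_tree, {'x': 2.0}) - math.exp(2.0)) < 1e-9
```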

So one program has three representations, each in a simpler grammar than the one above it:

| Representation | Grammar | Size |
| --- | --- | --- |
| Compiled transformer running native_multiply(3,7) | weight matrices + attention cycles | 964 params |
| Symbolic execution of the same | polynomial in {x₀, x₁} | 1 monomial |
| EML compilation of the same | binary tree of eml(·, ·) | 35 nodes, depth 8 |

The transformer speaks in linear algebra. The polynomial speaks in coefficients and exponents. The EML tree speaks in two buttons and a constant. A mechanical translation moves between all three.

What it’s honest about

The collapse isn’t a trick and it isn’t even hard. Polynomials are closed under the arithmetic operations the ISA provides — modulo branching and modulo anything outside the polynomial ring. That’s the entire theorem. What’s interesting is that the two projects’ domains happen to line up: the llm-as-computer ISA and the eml-sr grammar share the same arithmetic substrate, and the bridge falls out for free.

What the bridge doesn’t do:

- It doesn’t cross control flow: any program with a JZ/JNZ branch blocks at the first jump, so loops stay out of reach.
- It doesn’t leave the polynomial ring: division, comparisons, bitwise operations, and absolute value are refused outright rather than approximated.
- It doesn’t prove anything deep: closure of polynomials under +, −, and · is elementary algebra.

What it’s for

Mostly, this is a consistency check. The llm-as-computer post made claims about compiled execution; the EML posts made claims about single-operator completeness. The catalog demonstrates that on their overlap, both are talking about the same object — a polynomial. Three views of one computation, each mechanically convertible to the next.

That makes me more confident that "weights become a deployment target for software" and "elementary functions collapse to one operator" aren’t just pretty framings. They’re views of the same underlying fact: straight-line arithmetic is a small grammar, and a small grammar compiles cleanly.

The other thing it’s good for is pinning the scope honestly. Fifteen out of twenty-six is not "mostly everything." The four blocked-by-branch programs are where real computation happens — loops, dynamic behavior, actual algorithms rather than arithmetic expressions. Getting those into the table requires guarded traces, which is the next piece of work. I’d rather have a report that shows 15/26 with a clear division between collapsed and blocked-by-X than a triumphal "everything collapses" that papers over the parts that don’t.

The code is symbolic_programs_catalog.py. The rendered report is dev/symbolic_collapse_report.md. The full pipeline — classify 26 programs, symbolically execute 15, compile each to an EML tree, cross-check against the numeric interpreter — runs in well under a second.

One polynomial. Two stories. Same shape.


Previously: Yes, LLMs Can Be Computers. Now What? (the computer) and Two Buttons and a Constant — for the Back Row (the calculator). Code: llm-as-computer, eml-sr. Paper: Odrzywołek 2026.