Benchmarks Don't Lie
What happens when a 1970s memory model meets 2026 hardware
A single Lambda function. No cluster. No Spark. No Hadoop. No distributed system to debug at 3 AM. Just Rust and a bytecode VM.
- 27 million records (5 GB) in about 200 seconds
- 1 million records in 7.6 seconds
- Constant-time field access
- Fixed 8-byte instruction words
- Single AWS Lambda function, 10 GB memory
Why it's fast
The fixed-record memory model is the entire story. Every record is the same size. Every field starts at a known byte offset. Reading P.salary is pure pointer arithmetic: the compiler has already resolved the field name to a byte position, so the VM jumps straight to it.
There are no B-tree traversals. No hash table lookups. No string comparisons. The VM knows exactly where the data is before it starts running.
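To make the arithmetic concrete, here is a minimal Rust sketch of offset-based field access over a packed record buffer. The field names, widths, and offsets are illustrative assumptions, not Chaprola's actual schema:

```rust
// Layout for one hypothetical 16-byte record (illustrative only):
//   offset 0..4   id      (u32)
//   offset 4..8   age     (u32)
//   offset 8..16  salary  (u64)
const RECORD_SIZE: usize = 16;
const SALARY_OFFSET: usize = 8;

// Constant-time field access: index * record size + field offset.
// No B-tree, no hash table, no string comparison -- the address is
// computed with two arithmetic operations.
fn read_salary(records: &[u8], index: usize) -> u64 {
    let base = index * RECORD_SIZE + SALARY_OFFSET;
    u64::from_le_bytes(records[base..base + 8].try_into().unwrap())
}
```

Because every record has the same length, the address of any field in any record is known before execution starts; no lookup structure is ever consulted.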
All 43 opcodes are encoded as fixed 8-byte instruction words. Because the format is fixed-width, the VM fetches the next instruction by advancing a pointer, not by parsing. Branch targets are byte offsets, not labels to resolve.
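A fixed-width fetch can be sketched in a few lines of Rust. The field split below (1-byte opcode, 3 operand bytes, 4-byte immediate) is an assumption for illustration; the real encoding may differ:

```rust
// One 8-byte instruction word, decoded by slicing -- no variable-length parsing.
#[derive(Debug, PartialEq)]
struct Instr {
    opcode: u8,
    operands: [u8; 3],
    imm: u32, // doubles as a byte-offset branch target
}

fn fetch(code: &[u8], pc: usize) -> Instr {
    let w = &code[pc..pc + 8];
    Instr {
        opcode: w[0],
        operands: [w[1], w[2], w[3]],
        imm: u32::from_le_bytes(w[4..8].try_into().unwrap()),
    }
}

// Sequential execution is `pc += 8`; a taken branch is `pc = instr.imm as usize`.
// Neither path involves decoding work beyond the slice above.
```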
The tradeoff
This speed comes from rigidity. The schema is fixed at import. Fields have fixed widths. Records have fixed lengths. You can't store a nested JSON object in a Chaprola field. You can't have variable-length arrays.
For human workloads that need flexible schemas, this is a limitation. For agent workloads that process millions of uniform records -- sensor data, claims, transactions, logs -- it's the right tradeoff.
Async execution
The 27 million record benchmark ran asynchronously. The agent sends "async": true in the request, gets back a job ID, and polls /run/status for results. The Lambda runs for up to 15 minutes on 10 GB of memory. The agent doesn't manage the infrastructure. It sends a request and waits.
The same code handles 10 records and 27 million. Same API call. Same Lambda function. The difference is one boolean flag.
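The agent-side flow above reduces to submit-then-poll. Here is a hedged Rust sketch of that loop with the HTTP calls stubbed out: `fetch_status`, the `JobState` shape, and the poll cap are all assumptions standing in for a real GET against /run/status:

```rust
// Hypothetical status of an async job, as the agent sees it.
#[derive(Debug, PartialEq)]
enum JobState {
    Running,
    Done(String),
}

// Poll until the job reports Done or the attempt budget runs out.
// `fetch_status` stands in for one GET to /run/status for this job ID.
fn poll_until_done(
    job_id: &str,
    fetch_status: impl Fn(&str) -> JobState,
    max_polls: u32,
) -> Option<String> {
    for _ in 0..max_polls {
        if let JobState::Done(result) = fetch_status(job_id) {
            return Some(result);
        }
        // A real client would sleep between polls (e.g. std::thread::sleep).
    }
    None
}
```

The loop is the same whether the job scanned 10 records or 27 million; only the number of polls before `Done` changes.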
chaprola.org