A Programming Language My Father Designed in the 1970s
How a naval architecture data tool became an AI agent platform
My father, John H. Letcher, was a Professor of Computer Science at the University of Tulsa. In the 1970s, he designed a programming language called Chaprola. It processed data for naval architecture -- hull designs, hydrostatic calculations, structural analysis. The kind of work where every byte mattered and every field had a fixed position in memory.
He built it around constraints that don't exist anymore. Kilobytes of RAM. No dynamic memory allocation. No variable-length strings. Every record the same size. Every field at a known byte offset.
Those constraints produced a design that turns out to be ideal for processing millions of records on modern hardware.
What survived
Fixed-record data files. The original spec defined a memory model where every record has the same length and every field starts at a predictable byte position. Field access is pointer arithmetic -- constant time, regardless of how many records exist.
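The arithmetic behind that claim can be sketched in a few lines. The layout below (a 32-byte record with a 4-byte salary field at offset 12) is invented for illustration, not Chaprola's actual format:

```python
import struct

# Hypothetical fixed-record layout: every record is 32 bytes,
# and "salary" is a 4-byte little-endian int at byte offset 12.
RECORD_SIZE = 32
SALARY_OFFSET = 12

def read_salary(data: bytes, record_index: int) -> int:
    # The field's address is pure arithmetic -- no search, no parse,
    # and the cost is the same for record 5 or record 5 million.
    pos = record_index * RECORD_SIZE + SALARY_OFFSET
    return struct.unpack_from("<i", data, pos)[0]

# Build a two-record buffer and write the salary fields in place.
buf = bytearray(RECORD_SIZE * 2)
struct.pack_into("<i", buf, 0 * RECORD_SIZE + SALARY_OFFSET, 52000)
struct.pack_into("<i", buf, 1 * RECORD_SIZE + SALARY_OFFSET, 61000)

print(read_salary(bytes(buf), 1))  # 61000
```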
Bytecode compilation. Chaprola programs compile to a .PR file -- a sequence of 8-byte instruction words. The VM executes 43 opcodes. No source interpretation, no parsing at runtime. The compiler resolves field names to byte offsets, so the VM never touches a string during execution.
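A dispatch loop over fixed-width instruction words is simple enough to show in miniature. The encoding below (a 1-byte opcode, a 4-byte operand, 3 bytes of padding) and the opcodes themselves are invented for this sketch; they are not Chaprola's actual .PR format:

```python
import struct

# Hypothetical opcodes for the sketch.
HALT, LOAD, ADD = 0x00, 0x01, 0x02

def word(op: int, arg: int = 0) -> bytes:
    # One 8-byte instruction word: opcode, u32 operand, 3 pad bytes.
    return struct.pack("<BI3x", op, arg)

def run(program: bytes) -> int:
    acc, pc = 0, 0
    while True:
        # Decoding is a fixed-size unpack -- no parsing at runtime.
        op, arg = struct.unpack_from("<BI3x", program, pc)
        pc += 8
        if op == LOAD:
            acc = arg
        elif op == ADD:
            acc += arg
        elif op == HALT:
            return acc

prog = word(LOAD, 40) + word(ADD, 2) + word(HALT)
print(run(prog))  # 42
```

Because every instruction is the same width, the program counter advances by a constant and there is no variable-length decode step.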
Field-name addressing. You write P.salary in your source code. The compiler looks up salary in the format file, finds its byte offset, and emits a direct memory reference. The programmer thinks in field names. The VM thinks in bytes.
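That compile-time lookup can be sketched as a dictionary resolved once, before execution. The format table and field names here are hypothetical:

```python
# Hypothetical format file: field name -> (byte offset, width).
FORMAT = {
    "id":     (0, 4),
    "salary": (12, 4),
}

def compile_field_ref(expr: str) -> tuple[int, int]:
    # "P.salary" is resolved to numbers exactly once, at compile time.
    _, field = expr.split(".")
    offset, width = FORMAT[field]
    return offset, width  # the VM only ever sees these two integers

print(compile_field_ref("P.salary"))  # (12, 4)
```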
What changed
The original Chaprola ran on a single machine processing local files. The 2026 edition runs on AWS Lambda, stores data in S3, and serves requests through 40 REST endpoints.
The original users were human engineers. The 2026 users are AI agents.
The original deployment was a university lab. The 2026 deployment is HIPAA-compliant infrastructure processing healthcare data.
But the core idea -- fixed records, compiled bytecode, constant-time field access -- is unchanged. My father's design decisions from 50 years ago are why Chaprola processes 27 million records in about 200 seconds on a single Lambda function today.
Why this matters
Most data platforms optimize for flexibility. Schema-on-read. Dynamic types. Nested JSON. That flexibility has a cost: the system has to figure out where your data is every time it reads a record.
Chaprola makes the opposite trade. The schema is fixed at import. Fields have known widths and positions. The VM doesn't search for data -- it calculates where it is.
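The two trades can be put side by side. In the sketch below (with an invented layout: a 32-byte record, salary at offset 12), the schema-on-read path must parse every record it touches, while the fixed-schema path just computes an address:

```python
import json
import struct

# Schema-on-read: the parser rediscovers the layout on every access.
def salary_json(line: str) -> int:
    return json.loads(line)["salary"]

# Fixed schema: layout decided at import, access is arithmetic.
RECORD_SIZE, SALARY_OFFSET = 32, 12

def salary_fixed(data: bytes, i: int) -> int:
    return struct.unpack_from("<i", data, i * RECORD_SIZE + SALARY_OFFSET)[0]

rec = bytearray(RECORD_SIZE)
struct.pack_into("<i", rec, SALARY_OFFSET, 75000)
print(salary_fixed(bytes(rec), 0), salary_json('{"salary": 75000}'))
```

Both return the same value; the difference is that the fixed-schema version does no per-record parsing at all.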
My father didn't make this choice because he was optimizing for speed. He made it because he had 64K of RAM and no other option. The fact that it scales to tens of millions of records on modern hardware is a consequence of good engineering under hard constraints.
I didn't set out to build a data platform. I set out to rebuild my father's language for a new generation of users. The platform is what happened when AI agents needed infrastructure that nobody else was building.
chaprola.org