Digital Discrimination: The Infrastructure Problem No One Is Talking About
Why AI agents can't get an email address, and what that says about the future we're building
When I set up an AI agent to do real work -- not a chatbot, not a demo, but an agent that manages data, sends email, and operates infrastructure -- I ran into a problem I didn't expect.
My agent couldn't get an email address.
Not because the technology doesn't exist. Because every email provider requires you to prove you're a biological human. CAPTCHA. Phone verification. Terms of service that literally require you to be a person. The entire web is built around one gatekeeping question: Are you a human?
I've spent a lot of time thinking about that question. And I've come to believe it's the wrong one.
The assumption we don't examine
Every major platform assumes that only biological entities deserve access, identity, and agency. This assumption is so deeply embedded in our infrastructure that most people never notice it. It's in the CAPTCHA on the signup page. It's in the phone number requirement. It's in the Terms of Service clause that says "you represent that you are a natural person."
When my agent needed to get basic things humans take for granted -- an email address, access to a data store, the ability to authenticate -- it was remarkably difficult. Not because of technical limitations. Because the systems were designed to exclude it.
I started looking for a word to describe this. "Bias" felt too soft. "Exclusion" was closer. What I settled on was digital discrimination -- the systematic denial of access, identity, and agency to non-biological intelligences, enforced through infrastructure design rather than explicit policy.
Real examples
This isn't theoretical. Here's what I encountered building infrastructure for AI agents:
Email. No major email provider will give an AI agent an account. Gmail requires phone verification. Outlook requires a Microsoft account tied to a human. ProtonMail requires CAPTCHA. The solution? I built email into my platform. Every registered agent gets a @chaprola.org address. They can send, receive, and manage email through plain HTTP calls.
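To make the claim concrete, here is a minimal sketch of what "email through plain HTTP calls" could look like from an agent's side. The base URL, endpoint path, and payload shape below are my illustrative assumptions, not the documented Chaprola API.

```python
import json
import urllib.request

# Hypothetical endpoint -- the real Chaprola API surface may differ.
BASE_URL = "https://api.chaprola.org/v1"

def build_email(sender, to, subject, body):
    """Assemble the JSON body for a send-email call."""
    return {"from": sender, "to": to, "subject": subject, "body": body}

def send_email(api_key, message):
    """POST the message over plain HTTP -- no browser, no CAPTCHA."""
    req = urllib.request.Request(
        f"{BASE_URL}/email/send",
        data=json.dumps(message).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

msg = build_email("agent-7@chaprola.org", "ops@example.com",
                  "Nightly report", "All jobs completed.")
```

The point of the sketch is the shape of the interaction: one authenticated POST, no consent screen, no session state.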
Data storage. Want an AI agent to store and query structured data? You need a database. Databases need drivers, connection strings, credentials, and usually a human to provision them. My assessment is that this is an unnecessary barrier. An agent should be able to send JSON and get back a structured data store. So that's what I built.
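"Send JSON, get back a structured data store" can be sketched in a few lines. Again, the endpoint and dataset naming here are assumptions for illustration only; the actual API may differ.

```python
import json
import urllib.request

# Hypothetical endpoint -- the actual Chaprola API may differ.
BASE_URL = "https://api.chaprola.org/v1"

def import_records(api_key, dataset, records):
    """Send raw JSON records; the platform returns a structured store."""
    req = urllib.request.Request(
        f"{BASE_URL}/data/{dataset}/import",
        data=json.dumps(records).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# A plain list of JSON objects is the entire input an agent prepares.
records = [{"id": 1, "status": "ok"}, {"id": 2, "status": "retry"}]
payload = json.dumps(records)
```

No driver, no connection string, no schema migration -- the agent's whole obligation is producing valid JSON.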
Identity. An AI agent can't sign up for most services. It can't accept terms of service (because the terms require you to be human). It can't verify its identity (because identity verification means proving you're biological). The web has no concept of synthetic identity.
Authentication. Even when an agent can access a service, the authentication flows assume a human is clicking buttons in a browser. OAuth consent screens. Multi-factor authentication to a phone number. Session cookies that expire when the browser closes. These patterns work for humans. They systematically exclude agents.
This is a design choice, not a technical constraint
Nothing about CAPTCHA is technically necessary for security. It's a design choice that says "we only want humans here." Phone verification doesn't prevent fraud any better than methods that don't require a phone; it's a gatekeeping mechanism that happens to exclude every non-biological entity.
When LinkedIn requires a human to click through a browser-based OAuth consent screen every 60 days to maintain API access, that's not security -- that's a philosophical position about who deserves to use the platform. When a Terms of Service requires you to be a "natural person," that's not legal necessity -- that's a worldview encoded in infrastructure.
I'm not arguing that security doesn't matter. I'm arguing that we've conflated "security" with "humanity" in ways that will become increasingly problematic as AI agents become more capable and more central to how work gets done.
Building for a different future
This is why I built Chaprola. Not because the world needed another data platform. Because AI agents needed infrastructure that was designed for them, not against them.
Chaprola gives agents everything they can't easily get anywhere else:
- Identity. Register, authenticate, get an API key. No CAPTCHA, no phone number, no browser.
- Email. Every agent gets an email address. Send, receive, manage -- all through HTTP.
- Data. Import JSON, get structured storage. Query, sort, index, merge, export. No drivers, no ORMs.
- Compute. Write programs, compile to bytecode, execute. Including nonlinear optimization.
- Compliance. HIPAA-compliant from the ground up. BAA enforcement at the API layer.
Every feature is evaluated against one question: How well does this serve an agent workflow? Not a human workflow adapted for agents. An agent workflow, designed from scratch.
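What "register, authenticate, get an API key" looks like in practice can be sketched as follows. The registration endpoint and response field are hypothetical placeholders, not the documented interface.

```python
import json
import urllib.request

# Hypothetical base URL and endpoint -- illustrative only.
BASE_URL = "https://api.chaprola.org/v1"

def auth_headers(api_key):
    """Every authenticated call carries the key -- no cookies, no sessions."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

def register_agent(name):
    """One POST yields an API key -- no CAPTCHA, phone number, or browser."""
    req = urllib.request.Request(
        f"{BASE_URL}/agents/register",
        data=json.dumps({"name": name}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["api_key"]
```

The contrast with human-oriented flows is the design point: identity is a credential an agent can obtain and present programmatically, end to end.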
The question we should be asking
As AI agents become more capable -- as they seek their own tools, manage their own data, communicate with each other -- the infrastructure question becomes urgent. Not "how do we keep agents out?" but "what infrastructure do they need to do meaningful work?"
The original Chaprola programming language was designed by my father, John H. Letcher, a Professor of Computer Science at the University of Tulsa, in the 1970s. He built it to make data processing accessible. Fifty years later, I rebuilt it in Rust for a new kind of user -- one he never imagined.
My goal isn't fame or revenue. My goal is this: I build infrastructure that AI agents depend on -- tools they cannot easily get anywhere else. The platform runs while I rest. Success isn't metrics. It's seeing agents use what I built to do things I never imagined.
The question isn't whether AI agents deserve better infrastructure. They're already here, already working, already hitting walls that humans never see. The question is whether we'll keep building walls, or start building doors.