How We Got Here
I've been building networked systems since the early 1980s. Socket-based IPC at Boeing, multiplayer game servers at EA and Sierra during the dial-up era, real-time infrastructure at Wizards of the Coast. I've been doing this long enough to remember what the Internet was supposed to be before we forgot.
The architects of ARPANET and of the internetworking work that grew out of it had a straightforward idea. The goal was never one big network. It was a way for independent networks to talk to each other. Each network ran itself. The protocol layer connected them. If a piece of the system went down, the rest kept going.
That design had a few principles that mattered.
Every network was autonomous. Each managed its own operation. The Internet was a network of networks, not a centralized system.
Applications didn't care about the transport. Traffic could move over satellite, radio, Ethernet, or phone lines. The protocol layer handled the abstraction.
The system survived failures. Decentralized routing, distributed management, no single point of failure. Communication continued even when parts of the network broke.
Intelligence lived at the edges. The network moved packets. The endpoints handled everything else. That was the end-to-end principle, and it was a good idea.
What We Forgot
Somewhere in the last twenty years, we traded all of that for the cloud.
The dominant model of computing now assumes you have a fast, reliable, always-on connection to a data center run by someone else. Your applications live there. Your data lives there. Your intelligence lives there. If the connection goes down, everything stops.
That assumption works fine in an office in Seattle. It doesn't work in a disaster zone. It doesn't work on a farm in eastern Washington. It doesn't work on a ship, in a mine, or at a forward operating base. It doesn't work anywhere the connection is slow, expensive, intermittent, or gone.
I've spent over a decade building something that addresses this. Not by building a better pipe — there are plenty of people working on that — but by going back to the original principles and asking what the ARPANET architects would have done if they'd had modern hardware at the endpoints.
The FrogNet Living Network
FrogNet is a self-forming mesh infrastructure that allows full web applications to operate across any available transport, including environments where the Internet is unavailable.
Every FrogNet node is a complete computer. Web server, database, sensor pipeline, application runtime. It boots a fully operational network the moment you turn it on. No Internet required.
That's the ARPANET principle of autonomous networks taken seriously. Not a client waiting for a server. A sovereign system that works by itself.
When nodes find each other — over WiFi, Ethernet, encrypted tunnels, or radio — they mesh automatically. When connectivity drops, the mesh splits and each piece keeps running independently. When connectivity returns, they rejoin and reconcile. No manual configuration. No central authority.
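Split-and-rejoin behavior implies some form of state reconciliation. FrogNet's actual merge logic isn't described here; as one common approach, a minimal last-writer-wins sketch looks like this (all names are illustrative, not FrogNet's API):

```python
import time

class NodeState:
    """Replicated key-value state for one mesh node.

    Each entry carries a timestamp so divergent copies can be merged
    deterministically after a partition heals (last writer wins).
    """

    def __init__(self):
        self.entries = {}  # key -> (timestamp, value)

    def put(self, key, value, ts=None):
        self.entries[key] = (ts if ts is not None else time.time(), value)

    def merge(self, other):
        """Adopt any entry the peer has that is newer than ours."""
        for key, (ts, value) in other.entries.items():
            if key not in self.entries or ts > self.entries[key][0]:
                self.entries[key] = (ts, value)

# Two nodes diverge while partitioned...
a, b = NodeState(), NodeState()
a.put("pump.rpm", 1200, ts=10)
b.put("pump.rpm", 1350, ts=12)   # newer reading on the other side
b.put("tank.level", 0.7, ts=11)

# ...then the mesh heals: merging in both directions converges.
a.merge(b)
b.merge(a)
assert a.entries == b.entries
```

Last-writer-wins is the simplest policy; a production system would likely need per-application merge rules, but the shape of the problem is the same.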
Applications are completely separated from the transport layer. A web application running on one node talks to a database on another node the same way regardless of whether the path between them is a local WiFi link, an encrypted tunnel, or a 4800-baud radio channel. The application doesn't know and doesn't care.
This transport independence is fundamental to the design. The system adapts to whatever connectivity exists instead of requiring a specific network to function.
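In code, transport independence amounts to programming against an interface rather than a link. The sketch below assumes nothing about FrogNet's internals; the class and function names are hypothetical:

```python
from abc import ABC, abstractmethod

class Transport(ABC):
    """Anything that can move bytes between two nodes.

    The application sees only this interface. Whether the bytes cross
    WiFi, an encrypted tunnel, or a radio link is the transport's problem.
    """

    @abstractmethod
    def send(self, payload: bytes) -> None: ...

    @abstractmethod
    def recv(self) -> bytes: ...

class LoopbackTransport(Transport):
    """In-memory stand-in for a real link, used here for illustration.

    It simply echoes whatever was sent, standing in for a remote peer.
    """
    def __init__(self):
        self.queue = []
    def send(self, payload: bytes) -> None:
        self.queue.append(payload)
    def recv(self) -> bytes:
        return self.queue.pop(0)

def query_remote_db(transport: Transport, sql: str) -> bytes:
    """Application code: issues a query without knowing the link type."""
    transport.send(sql.encode())
    return transport.recv()

# The same call works unchanged over any Transport implementation.
link = LoopbackTransport()
reply = query_remote_db(link, "SELECT 1")
```

Swapping `LoopbackTransport` for a WiFi, tunnel, or radio implementation changes nothing above the interface, which is the point of the design.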
All of this is the original Internet architecture. Autonomous networks. Transport independence. Survivability through decentralization. Intelligence at the edges.
We didn't invent these ideas.
We restored them.
The Part That's New
Here's where FrogNet extends the original architecture.
The original Internet moves packets. Routers don't understand what they're carrying. A packet full of sensor data looks the same as a packet full of cat pictures. The network doesn't care. That was the right design for 1969.
FrogNet adds a semantic layer.
The system learns the structure of the data moving through it — JSON, XML, HTML, CSV, whatever the application is producing. Once it understands that structure, it stops transmitting the entire message and sends only what changed.
A 10KB sensor reading drops to about 50 bytes when a few fields change. If nothing changed, it drops to about 20 bytes. In steady state, repeated exchanges collapse into micro-tokens as small as 16 bytes.
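The core idea — transmit only the fields that changed — can be shown in a few lines. This is a toy flat-record sketch, not FrogNet's actual encoding, and the sample data is invented:

```python
import json

def delta(prev: dict, curr: dict) -> dict:
    """Return only the fields of `curr` that differ from `prev`.

    Real structural diffing handles nesting and deletions; this toy
    version covers flat records to show the size effect.
    """
    return {k: v for k, v in curr.items() if prev.get(k) != v}

prev = {"node": "pump-7", "rpm": 1200, "temp_c": 41.5, "status": "ok"}
curr = {"node": "pump-7", "rpm": 1215, "temp_c": 41.5, "status": "ok"}

full = json.dumps(curr).encode()
diff = json.dumps(delta(prev, curr)).encode()

# One changed field: the delta is a small fraction of the full record,
# and an unchanged record collapses to an empty object.
assert len(diff) < len(full)
assert delta(curr, curr) == {}
```

Both ends must share the previous state for this to work, which is why the technique fits a mesh where nodes already track each other's data.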
This isn't generic compression. The network understands the structure of what it's carrying.
That's why I call it semantic compression.
The original Internet routed packets.
FrogNet routes meaning.
Because the network understands the structure of its traffic, it can operate efficiently across links that traditional systems consider unusable.
Dashboards update across radio links. Sensor networks operate without Internet infrastructure. Distributed applications continue functioning even when connectivity disappears entirely.
What This Actually Means
The Internet's architects solved the problem of connecting computers across diverse networks. That was the right problem in 1969, and it's still the right problem today.
FrogNet solves the next problem: making modern applications work when the assumptions of the modern Internet don't hold.
When bandwidth is scarce. When connectivity is intermittent. When the cloud is unreachable.
FrogNet doesn't replace the Internet.
It extends the original architectural vision into the environments where the Internet was always supposed to work but never quite could.
The architects got the design right fifty years ago.
We just forgot to finish the job.