Agentic AI Needs Its TCP/IP Moment

11 comments

·March 31, 2025

usecodenaija

Standardizing agentic AI prematurely could stifle innovation and diverse approaches.

Unlike networking protocols, AI agency may benefit more from competing frameworks that explore different philosophical and technical foundations before converging on standards.

Klaster_1

Makes sense. Current networking standards ultimately emerged from ARPA, a single actor - this was a single-vendor solution being adopted. The AI agent field has many more actors, so the situation resembles the browser wars, or something even looser in terms of a common approach.

noosphr

This is easily solvable by using _plain English_ as the interop layer. We have more or less solved natural language input with open-source LLMs that are about as capable as llama3-70b. Searching the docs of the next tool to be called is all you need to get an invocation that does what an agent needs to do.
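The idea above can be sketched as a toy: the agent matches a natural-language request against each tool's documentation, then invokes the best match. In a real system an LLM would do both the search and the invocation; here simple keyword overlap stands in for the model, and all function names are hypothetical.

```python
# Toy sketch of "plain English as the interop layer": route a request to a
# tool by scoring keyword overlap with each tool's docstring. Hypothetical
# tools; an LLM would replace the scoring step in practice.

def convert_currency(amount: float, rate: float) -> float:
    """Convert an amount of money using an exchange rate."""
    return amount * rate

def word_count(text: str) -> int:
    """Count the number of words in a piece of text."""
    return len(text.split())

TOOLS = [convert_currency, word_count]

def pick_tool(request: str):
    """Pick the tool whose docs share the most words with the request."""
    words = set(request.lower().split())
    return max(TOOLS, key=lambda t: len(words & set(t.__doc__.lower().split())))

tool = pick_tool("please count the words in this text")
print(tool.__name__)  # word_count
```

The docstring is doing the job of an interop schema: no shared protocol, just natural language on both sides.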

spwa4

The problem is the reverse. It's in companies' interest to wall off all their customers' info (and to block all agents except, perhaps, their own). Agents are a nightmare "for personal information" - not really, of course, or they are, but no one cares. For companies, they're just a really easy way to download a competitor's customer list, or to have yours downloaded by a competitor.

Just like we have anti-scraping measures now, it seems to me to be in the vast majority of companies' interests to block agents from working at all. The big innovation of AI for companies and ad companies is simply that it may bypass companies' security measures in a legal way.

jameslk

The predecessors of agentic AI, humans, have figured this out. They use this thing called user interfaces to do computing. If agentic AI can't use UIs like humans, why bother calling them agents?

I feel like there is momentum around this idea that all of computing needs to be re-invented (e.g. MCP, "agentic AI needs its TCP/IP moment") to address the deficiencies of agentic AI, when what really should be addressed is their inability to be true agents for humans, using the tools humans already use.

Using UIs doesn't even have to be visual. Visually impaired users regularly use computers as well. Since UI for the visually impaired is all text-based, it seems reasonable for LLMs to start there.
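The text-based-UI idea can be sketched concretely: the same linearized view a screen reader exposes could be consumed by an LLM agent. The widget tree below is hypothetical; the point is that no new protocol is needed, just a text rendering of the existing UI.

```python
# Minimal sketch: flatten a UI widget tree into the linear text a screen
# reader would emit, which an agent could read and act on. Hypothetical
# structure, not any real accessibility API.
from dataclasses import dataclass, field

@dataclass
class Widget:
    role: str                 # "button", "textbox", "heading", ...
    label: str
    children: list = field(default_factory=list)

def render_for_agent(widget: Widget, depth: int = 0) -> str:
    """Render the widget tree as indented role/label lines."""
    lines = [f"{'  ' * depth}{widget.role}: {widget.label}"]
    for child in widget.children:
        lines.extend(render_for_agent(child, depth + 1).splitlines())
    return "\n".join(lines)

page = Widget("window", "Checkout", [
    Widget("heading", "Your cart"),
    Widget("textbox", "Promo code"),
    Widget("button", "Place order"),
])
print(render_for_agent(page))
```

An agent that can read this and reply "click the button labeled 'Place order'" needs no site-specific integration, which is the commenter's point.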

aurareturn

Disagreed. I think LLMs will simply get smart enough to figure out how to call into another API/agent.

lgas

There are many examples of this already working out in the wild, e.g. https://github.com/openai/swarm
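The core pattern behind frameworks like the linked repo is the "handoff": an agent can return another agent instead of a final answer, and the loop keeps running until it gets plain text back. A toy version (this is not Swarm's actual API, and the keyword triage stands in for an LLM's routing decision):

```python
# Toy handoff loop: an agent's handler returns either a final string or
# another Agent to hand control to. Hypothetical names throughout.

class Agent:
    def __init__(self, name, handle):
        self.name = name
        self.handle = handle   # request -> str (answer) or Agent (handoff)

def run(agent, request, max_hops=5):
    for _ in range(max_hops):
        result = agent.handle(request)
        if isinstance(result, Agent):
            agent = result     # handoff: control passes to the new agent
        else:
            return agent.name, result
    raise RuntimeError("too many handoffs")

billing = Agent("billing", lambda req: "Refund issued.")
support = Agent("support", lambda req: "Try turning it off and on.")
triage = Agent("triage", lambda req: billing if "refund" in req else support)

print(run(triage, "I want a refund"))  # ('billing', 'Refund issued.')
```

The `max_hops` cap matters: without it, two agents that keep handing off to each other would loop forever.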

raffraffraff

TCP/IP v4 or v6? Because depending on the version, that sentence probably means different things

paul7986

I'm excited for when OpenAI or another AI company releases its own AI phone (a "GPT Phone", maybe). On your lock screen is a FaceTime-style call with your AI assistant/agent, which interfaces with all other agents (businesses, orgs, family members, friends, etc.) to auto-magically get things done for you via text, chat, hand/facial gestures, etc. Basically, it's a super-powered human at your beck and call on your lock screen.

It can even take the best selfies for you ... guiding you to the best lighting. (Sorry Zuckerberg, smart glasses are not the next big thing - you can't take selfies with them - but I do love my Meta Ray-Bans.)

Nullabillity

Let me tell you the twin tales of Rabbit and Humane…

paul7986

Neither is or was the Google or Apple of AI the way ChatGPT is. Just my guess, but if ChatGPT released its own phone (Microsoft could help with the hardware) that was like Her (the movie), my bet is it would become the next big thing.