
Understanding gRPC, OpenAPI and REST and when to use them in API design (2020)

jdwyah

If I could go back in time I would stop myself from ever learning about gRPC. I was so into the dream, but years later way too many headaches. Don’t do it to yourself.

Saying gRPC hides the internals is a joke. You’ll get internals all right, when you’re blasting debug logging trying to figure out what the f is going on causing 1/10 requests to fail and fine-tuning 10-20 different poorly named timeout/retry settings.

Hours lost fighting with Maven plugins. Hours lost debugging weird deadline-exceeded errors. Hours lost with LBs that don’t like the esoteric http2. Firewall pain meaning we had to use the standard API anyway. Crappy docs. Hours lost trying to get error messages that don’t suck into observability.

I wish I’d never heard of it.

stickfigure

IMO the problem with gRPC isn't the protocol or the protobufs, but the terrible tooling - at least on the Java end. It generates shit code with awful developer ergonomics.

When you run the protobuf builder...

* The client stub is a concrete final class. It can't be mocked in tests.

* When implementing a server, you have to extend a concrete class (not an interface).

* The server method has an async method signature. Screws up AOP-oriented behavior like `@Transactional`

* No support for exceptions.

* Immutable value classes yes, but you have to construct them with builders.

The net result is that if you want to use gRPC in your SOA, you have to write a lot of plumbing to hide the gRPC noise and get clean, testable code.

There's no reason it has to be this way, but it is that way, and I don't want to write my own protobuf compiler.

Thrift's rpc compiler has many of the same problems, plus some others. Sigh.
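
For illustration, the "plumbing" being described usually amounts to hiding the generated stub behind a small interface the rest of the code can depend on and mock. The complaint above is about Java, but the pattern looks the same in most languages; here is a minimal Go sketch, with hypothetical names standing in for the generated types:

  package widgets

  import "context"

  // Stand-ins for what the protobuf compiler would generate (hypothetical
  // shapes; real generated clients also take grpc.CallOption arguments).
  type GetWidgetRequest struct{ Id string }
  type GetWidgetResponse struct{ Id, Name string }
  type WidgetServiceClient interface {
      GetWidget(ctx context.Context, in *GetWidgetRequest) (*GetWidgetResponse, error)
  }

  // Domain type plus the narrow, mockable interface the rest of the code uses.
  type Widget struct{ ID, Name string }
  type WidgetReader interface {
      GetWidget(ctx context.Context, id string) (*Widget, error)
  }

  // The adapter: transport details (status codes, deadlines, retries) are
  // translated to domain terms here and nowhere else.
  type grpcWidgetReader struct{ stub WidgetServiceClient }

  func (r *grpcWidgetReader) GetWidget(ctx context.Context, id string) (*Widget, error) {
      resp, err := r.stub.GetWidget(ctx, &GetWidgetRequest{Id: id})
      if err != nil {
          return nil, err // map gRPC status codes to domain errors here
      }
      return &Widget{ID: resp.Id, Name: resp.Name}, nil
  }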

bjackman

> The client stub is a concrete final class. It can't be mocked in tests.

I believe this is deliberate, you are supposed to substitute a fake server. This is superior in theory since you have much less scope to get error reporting wrong (since errors actually go across a gRPC transport during the test).

Of course... at least with C++, there is no well-lit path for actually _doing_ that, which seems bonkers. In my case I had to write a bunch of undocumented boilerplate to make this happen.

IIUC for Stubby (Google's internal precursor to gRPC) those kinda bizarre ergonomic issues are solved.
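
For what it's worth, grpc-go does have a reasonably well-lit path for the fake-server approach via its bufconn package: start a real server on an in-memory listener and dial it from the test. A sketch (the service registration line is a hypothetical placeholder):

  package service_test

  import (
      "context"
      "net"
      "testing"

      "google.golang.org/grpc"
      "google.golang.org/grpc/credentials/insecure"
      "google.golang.org/grpc/test/bufconn"
  )

  // newTestConn starts a real gRPC server on an in-memory listener and returns
  // a client connection to it, so tests exercise the full transport path.
  func newTestConn(t *testing.T, register func(*grpc.Server)) *grpc.ClientConn {
      lis := bufconn.Listen(1 << 20)
      srv := grpc.NewServer()
      register(srv) // e.g. pb.RegisterGreeterServer(srv, &fakeGreeter{}) -- hypothetical
      go srv.Serve(lis)
      t.Cleanup(srv.Stop)

      conn, err := grpc.DialContext(context.Background(), "bufconn",
          grpc.WithContextDialer(func(context.Context, string) (net.Conn, error) {
              return lis.Dial()
          }),
          grpc.WithTransportCredentials(insecure.NewCredentials()),
      )
      if err != nil {
          t.Fatal(err)
      }
      t.Cleanup(func() { conn.Close() })
      return conn
  }

The test then builds the normal generated client against the returned connection, so errors really do cross a gRPC transport.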

Degorath

Stubby calls (at least in Java) just use something called a GenericServiceMocker which is akin to a more specialised mockito.

tbarbugli

In my experience, only Swift has a generator that produces good-quality code. Ironically, it’s developed by Apple.

rkagerer

Any alternatives that take a similar philosophy but get the tooling right?

stickfigure

Depends what you mean by "similar philosophy". We (largeish household name though not thought of as a tech company) went through a pretty extensive review of the options late last year and standardized on this for our internal service<->service communication:

https://github.com/stickfigure/trivet

It's the dumbest RPC protocol you can imagine, less than 400 lines of code. You publish a vanilla Java interface in a jar; you annotate the implementation with `@Remote` and make sure it's in the spring context. Other than a tiny bit of setup, that's pretty much it.

The main downside is that it's based on Java serialization. For us this is fine, we already use serialization heavily and it's a known quantity for our team. Performance is "good enough". But you can't use this to expose public services or talk to nonjava services. For that we use plain old REST endpoints.

The main upsides are developer ergonomics, easy testability, spring metrics/spans pass through remote calls transparently, and exceptions (with complete stacktraces) propagate to clients (even through multiple layers of remote calls).

I wrote it some time ago. It's not for everyone. But when our team (well, the team making this decision for the company) looked at the proof-of-concepts, this is what everyone preferred.

crabbone

Protobuf is an atrocious protocol. Whatever other problems gRPC has may be worse, but Protobuf doesn't make anything better, that's for sure.

The reason to use it may be that you are required to by the side you cannot control, or that this is the only thing you know. Otherwise it's a disaster. It's really upsetting that a lot of things used in this domain are the author's first attempt at making something of the sort. So many easily preventable disasters exist in this protocol for no reason.

morganherlocker

Agree. As an example, this proto generates 584 lines of C++, links to 173k lines of dependencies, and generates a 21Kb object file, even before adding grpc:

  syntax = "proto3";

  message LonLat {
    float lon = 1;
    float lat = 2;
  }

Looking through the generated headers, they are full of autogenerated slop with loads of dependencies, all to read a struct with 2 primitive fields. For a real monorepo, this adds up quickly.
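
To make the contrast concrete, a hand-rolled fixed-width encoding of the same two fields is a few lines of Go (a sketch only; it gives up proto's schema evolution, optional fields, and cross-language codegen):

  package main

  import (
      "bytes"
      "encoding/binary"
      "fmt"
  )

  // The entire "schema": two float32s, 8 bytes on the wire.
  type LonLat struct {
      Lon float32
      Lat float32
  }

  func main() {
      var buf bytes.Buffer
      if err := binary.Write(&buf, binary.LittleEndian, LonLat{Lon: 13.40, Lat: 52.52}); err != nil {
          panic(err)
      }
      var out LonLat
      if err := binary.Read(&buf, binary.LittleEndian, &out); err != nil {
          panic(err)
      }
      fmt.Println(out) // {13.4 52.52}
  }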


bellgrove

Can you elaborate?

dtquad

Your problems have more to do with some implementations than with the gRPC/protobuf specs themselves.

The modern .NET and C# experience with gRPC is so good that Microsoft has sunset its legacy RPC tech like WCF and gone all in on gRPC.

junto

Agreed. The newest versions of .NET are now chef’s kiss and so damn fast.

zigzag312

I would really like it if the proto-to-C# compiler created nullable members. Hasers (the generated HasX members) IMO give poor DX and are error-prone.

hedora

The biggest project I’ve used it with was in Java.

Validating the output of the bindings protoc generated was more verbose and error prone than hand serializing data would have been.

The wire protocol is not type safe. It has type tags, but they reuse the same tags for multiple datatypes.

Also, zig-zag integer encoding is slow.

Anyway, it’s a terrible RPC library. Flatbuffer is the only one that I’ve encountered that is worse.

TeeWEE

What do you mean by validating the bindings? gRPC is type-safe. You don’t have to think about that part anymore.

But as the article mentions OpenAPI is also an RPC library with stub generation.

Manual parsing of the json is imho really Oldskool.

But it depends on your use case. That’s the whole point: it depends.

matrix87

> The wire protocol is not type safe. It has type tags, but they reuse the same tags for multiple datatypes.

When is this ever an issue in practice? Why would the client read int32 but then all of a sudden decide to read uint32?

sagarm

I guess backwards incompatible changes to the protocol? But yeah, don't do that if you're using protobuf; it's intentionally not robust to it.

bborud

Since you mention Maven I'm going to make the assumption that you are using Java. I haven't used Java in quite a while. The last 8 years or so I've been programming Go.

Your experience of gRPC seems to be very different from mine. How much of the difference in experience do you think might be down to Java and how much is down to gRPC as a technology?

piva00

It's not Java itself, it's design decisions on the tooling that Google provides for Java, mostly the protobuf-gen plugin.

At my company we found some workarounds for the issues brought up in the GP, but it's annoying that the tooling is a bit subpar.

bborud

Have you tried the buf.build tools? Especially the remote code generation and package generation may make life easier for you.

A couple of links:

https://buf.build/protocolbuffers/java?version=v29.3 https://buf.build/docs/bsr/generated-sdks/maven

divan

I use gRPC with Go+Dart stack for years and never experienced these issues. Is it something specific to Java+gRPC?

robertlagrant

Go and Dart are probably the languages most likely to work well with gRPC, given their provenance.

throwaway127482

Google has massive amounts of code written in Java so one would think the Java tooling would be excellent as well.

drtse4

As someone that used it for years with the same problems he describes... spot-on analysis. The library does too much for you (e.g. reconnection handling) and handling even basic recovery is a bit of a nuisance for newbies. And yes, when you get random failures, good luck figuring out that maybe it's just a router in the middle of the path dropping packets because its http2 filtering is full of bugs.

I like a lot of things about it and used it extensively instead of the inferior REST alternative, but I recommend being aware of the limitations/nuisances. Not all issues will be solved simply by looking at Stack Overflow.

azemetre

What would you recommend doing instead?

Atotalnoob

Web sockets would probably be easy.

Some web socket libraries support automatic fallback to polling if the infrastructure doesn’t support web sockets.

doctorpangloss

Do you need bidirectional streams? If so, you should write a bespoke protocol, on top of UDP, TCP or websockets.

If you don't, use GraphQL.

nithril

"Write a protocol and GraphQL", god damn it escalates quickly.

Fortunately, there are intermediate steps.

galangalalgol

What about single-directional streams? GraphQL streams aren't widely supported yet, are they? GraphQL also strikes me as a weird alternative to protobufs, as the latter works so hard for performance with binary payloads, and GraphQL is typically human-readable, bloaty text. And they aren't really queries; you can just choose to ignore parts of the return of an RPC.

oppositelock

I've been building APIs for a long time, using gRPC and HTTP/REST (we'll not go into CORBA or DCOM, because I'll cry). To that end, I've open sourced a Go library for generating your clients and servers from OpenAPI specs (https://github.com/oapi-codegen/oapi-codegen).

I disagree with the way this article breaks down the options. There is no difference between OpenAPI and REST, it's a strange distinction. OpenAPI is a way of documenting the behavior of your HTTP API. You can express a RESTful API using OpenAPI, or something completely random, it's up to you. The purpose of OpenAPI is to have a schema language to describe your API for tooling to interpret, so in concept, it's similar to Protocol Buffer files that are used to specify gRPC protocols.

gRPC is an RPC mechanism for sending protos back and forth. When Google open sourced protobufs, they didn't open source the RPC layer, called "stubby" at Google, which made protos really great. gRPC is not stubby, and it's not as awesome, but it's still very efficient at transport, and fairly easy to extend and hook into. The problem is, it's a self-contained ecosystem that isn't as robust as mainstream HTTP libraries, which give you all kinds of useful middleware like logging or auth. You'll be implementing lots of these yourself with gRPC, particularly if you are making RPC calls across services implemented in different languages.

To me, the problem with gRPC is proto files. Every client must be built against .proto files compatible with the server; it's not a discoverable protocol. With an HTTP API, you can make calls to it via curl or your own code without having the OpenAPI description, so it's a "softer" binding. This fact alone makes it easier to work with and debug.

mandevil

There is a distinction between (proper) REST and what this blog calls "OpenAPI". But the thing is, almost no one builds a true, proper REST API. In practice, everyone uses the OpenAPI approach.

The way that REST was defined by Roy Fielding in his 2000 Ph.D dissertation ("Architectural Styles and the Design of Network-based Software Architectures") it was supposed to allow a web-like exploring of all available resources. You would GET the root URL, and the 200 OK Response would provide a set of links that would allow you to traverse all available resources provided by the API (it was allowed to be hierarchical- but everything had to be accessible somewhere in the link tree). This was supposed to allow discoverability.

In practice, everywhere I've ever worked over the past two decades has just used POST resource_name/resource_id/sub_resource/sub_resource_id/mutation_type - or PUT resource_name/resource_id/sub_resource/sub_resource_id, depending on how that company handled the idempotency issues that PUT creates - with all of those being magic URLs assembled by the client with knowledge of the structure (often defined in something like Swagger/OpenAPI), lacking the link-traversal from root that was a hallmark of Fielding's original work.

Pedants (which let's face it, most of us are) will often describe what is done in practice as "RESTful" rather than "REST" just to acknowledge that they are not implementing Fielding's definition of REST.

bborud

I tend to prefer RESTish rather than RESTful since RESTful almost suggests attempting to implement Fielding's ideas but not quite getting there. I think the subset of approaches that try and fail to implement Fielding's ideas is an order of magnitude (or two) smaller than those who go for something that is superficially similar, but has nothing to do with HATEOAS :-).

REST is an interesting idea, but I don't think it is a practical idea. It is too hard to design tools and libraries that help/encourage/force the user to implement HATEOAS sensibly, easily and consistently.

mandevil

While it is amazing for initial discovery to have everything presented for the developer's inspection, in production it ends up requiring too many network round-trips to actually traverse from root to /resource_name/resource_id/sub_resource_name/sub_resource_id, or an already verbose transaction (everything is serialized and deserialized into strings!) becomes gigantic if you don't make it hierarchical and just drop every URL into the root response.

This is why everyone just builds magic URL endpoints, and hopefully also includes OpenAPI/Swagger documentation for them so the developer can figure it out. And then keeps the documentation up to date as they add new sub_resource endpoints!

nicholasjarnold

> Pedants (which let's face it, most of us are) will often describe what is done in practice as "RESTful" rather than "REST" just to acknowledge that they are not implementing Fielding's definition of REST.

Yes, exactly. I've never actually worked with any group who had actually implemented full REST. When working with teams on public interface definitions I've personally tended to use the so-called Richardson Maturity Model[0] and advocated for what it calls 'Level 2', which is what I think most of us find canonical and least surprising in a RESTful interface.

[0] - https://en.wikipedia.org/wiki/Richardson_Maturity_Model

physicles

> There is no difference between OpenAPI and REST, it's a strange distinction.

That threw me off too. What the article calls REST, I understand to be closer to HATEOAS.

> I've open sourced a Go library for generating your clients and servers from OpenAPI specs

As a maintainer of a couple pretty substantial APIs with internal and external clients, I'm really struggling to understand the workflow that starts with generating code from OpenAPI specs. Once you've filled in all those generated stubs, how can you then iterate on the API spec? The tooling will just give you more stubs that you have to manually merge in, and it'll get harder and harder to find the relevant updates as the API grows.

This is why I created an abomination that uses go/ast and friends to generate the OpenAPI spec from the code. It's not perfect, but it's a 95% solution that works with both Echo and Gin. So when we need to stand up a new endpoint and allow the front end to start coding against it ASAP, the workflow looks like this:

1. In a feature branch, define the request and response structs, and write an empty handler that parses parameters and returns an empty response.

2. Generate the docs and send them to the front end dev.

Now, most devs never have to think about how to express their API in OpenAPI. And the docs will always be perfectly in sync with the code.
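
A sketch of what step 1 above can look like with Echo (the endpoint, struct, and field names are hypothetical); the doc generator then only needs the structs and the route:

  package api

  import (
      "net/http"

      "github.com/labstack/echo/v4"
  )

  type ListWidgetsRequest struct {
      Page int `query:"page" json:"page"`
  }

  type ListWidgetsResponse struct {
      Widgets []string `json:"widgets"`
  }

  // Empty handler: parses parameters and returns an empty response, which is
  // enough for the frontend to start coding against the generated docs.
  func ListWidgets(c echo.Context) error {
      var req ListWidgetsRequest
      if err := c.Bind(&req); err != nil {
          return echo.NewHTTPError(http.StatusBadRequest, err.Error())
      }
      return c.JSON(http.StatusOK, ListWidgetsResponse{Widgets: []string{}})
  }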

plorkyeran

HATEOAS is just REST as originally envisioned but accepting that the REST name has come to be attached to something different.

jpc0

> This is why I created an abomination that uses go/ast and friends to generate the OpenAPI spec from the code

OpenAPI is a spec not documentation. Write the spec first then generate the code from the spec.

You are doing it backwards, at least in my opinion.

mdaniel

That's conceptually true, and yet if the hundreds of code generators don't support Your Favorite OAPI Feature™ then you're stuck, whereas the opposite is that unless your framework is braindead it's going to at least support some mapping from your host language down to the OAPI spec. I doubt very seriously it's pretty, and my life experience is that it will definitely not be bright enough to have #/component reuse, but it's also probably closer to 30 seconds to run $(go generate something) than to launch an OAPI editor, and now you have a 2nd job.

I'd love an OAPI compliance badge (actually what I'm probably complaining about is the tooling's support for JSON Schema) so one could readily know which tools were conceived in a hackathon, worked for that purpose, but should be avoided for real work.

oppositelock

This comes down to your philosophical approach to API development.

If you design the API first, you can take the OpenAPI spec through code review, making the change explicit, forcing others to think about it. Breaking changes can be caught more easily. The presence of this spec allows for a lot of work to be automated, for example, request validation. In unit tests, I have automated response validation, to make sure my implementation conforms to the spec.

Iteration is quite simple, because you update your spec, which regenerates your models, but doesn't affect your implementation. It's then on you to update your implementation, that can't be automated without fancy AI.

When the spec changes follow the code changes, you have some new worries. If someone changes the schema of an API in the code and forgets to update the spec, what then? If you automate spec generation from code, what happens when you express something in code which doesn't map to something expressible in OpenAPI?

I've done both, and I've found that writing code spec-first, you end up constraining what you can do to what the spec can express, which allows you to use all kinds of off-the-shelf tooling to save you time. As a developer, my most precious resource is time, so I am willing to lose generality going with a spec-first approach to leverage the tooling.

ak217

In my part of the industry, a rite of passage is coming up with one's own homegrown data pipeline workflow manager/DAG execution engine.

In the OpenAPI world, the equivalent must be writing one's own OpenAPI spec generator that scans an annotated server codebase, probably bundled with a client codegen tool as well. I know I've written one (mine too was a proper abomination) and it sounds like so have a few others in this thread.

foobarian

> In the OpenAPI world, the equivalent must be writing one's own OpenAPI spec generator

Close, it's writing custom client and server codegen that actually have working support for oneOf polymorphism and whatever other weird home-grown extensions there are.

Cthulhu_

> Once you've filled in all those generated stubs, how can you then iterate on the API spec? The tooling will just give you more stubs that you have to manually merge in, and it'll get harder and harder to find the relevant updates as the API grows.

This is why I have never used generators to generate the API clients, only the models. Consuming an HTTP-based API is just a single-line function nowadays in the web world, if you use e.g. React / TanStack Query or write some simple utilities. The generated clients are almost never good enough. That said, replacing the generator templates is an option in some of the generators. I've used the official OpenAPI generator for a while, which has many different generators, but I don't know if I'd recommend it because the generation is split between Java code and templates.

talideon

I'm scratching my head here. HATEOAS is the core of REST. Without it and the uniform interface principle, you're not doing REST. "REST" without it is charitably described as "RESTish", though I prefer the term "HTTP API". OpenAPI only exists because it turns out that developers have a very weak grasp on hypertext and indirection, but if you reframe things in a more familiar RPC-ish manner, they can understand it better as they can latch onto something they already understand: procedure calls. But it's not REST.

mkleczek

> This is why I created an abomination that uses go/ast and friends to generate the OpenAPI spec from the code.

This is against "interface first" principle and couples clients of your API to its implementation.

That might be OK if the only consumer of the API is your own application as in that case API is really just an internal implementation detail. But even then - once you have to support multiple versions of your own client it becomes difficult not to break them.

physicles

I don't see why it couples clients to the implementation.

Effectively, there's no difference between writing the code first and updating the OpenAPI spec, and updating the spec first and then doing some sort of code gen to update the implementation. The end state of the world is the same.

In either case, modifications to the spec will be scrutinized to make sure there are no breaking changes.

jitl

Whether the OpenAPI spec is authored by a human or a machine, it can still be the same YAML at the end of the day, so why would one approach be more brittle / break your clients more than the other?

XorNot

The oapi-codegen tool the OP put out (which I use) solves this by emitting an interface though. OpenAPI has the concept of operation names (which also have a standard pattern), so your generated code is simply implementing operation names. You can happily rewrite the entire spec and, provided the operation names are the same, everything will still map correctly - which solves the coupling problem.
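
Roughly what that looks like in practice (the shape below is illustrative rather than copied from the tool's output; GetWidget is a hypothetical operation, shown for the Echo server flavor):

  package api

  import (
      "net/http"

      "github.com/labstack/echo/v4"
  )

  // Generated (approximately): one method per operation ID in the spec.
  type ServerInterface interface {
      // (GET /widgets/{id})
      GetWidget(ctx echo.Context, id string) error
  }

  // Hand-written: implements the interface. Regenerating from an edited spec
  // changes the interface, and the compiler flags every handler that needs
  // updating, as long as operation IDs stay stable.
  type Server struct{}

  func (s *Server) GetWidget(ctx echo.Context, id string) error {
      return ctx.JSON(http.StatusOK, map[string]string{"id": id})
  }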

arccy

These days there's gRPC reflection for discovery: https://grpc.io/docs/guides/reflection/
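
Enabling it server-side in grpc-go is a single call; tools like grpcurl can then list and describe services without the .proto files. A sketch (the commented-out registration line is a hypothetical placeholder for your generated code):

  package main

  import (
      "log"
      "net"

      "google.golang.org/grpc"
      "google.golang.org/grpc/reflection"
  )

  func main() {
      lis, err := net.Listen("tcp", ":50051")
      if err != nil {
          log.Fatal(err)
      }
      s := grpc.NewServer()
      // pb.RegisterWidgetServiceServer(s, &widgetService{}) // your generated registration
      reflection.Register(s) // expose service descriptors for discovery
      log.Fatal(s.Serve(lis))
  }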

cpursley

I'm piggybacking on the OpenAPI spec as well to generate a SQL-like query syntax along with generated types which makes working with any 3rd party API feel the same.

What if you could query any ole' API like this?:

  Apipe.new(GitHub) |> from("search/repositories") |> eq(:language, "elixir") |> order_by(:updated) |> limit(1) |> execute()
This way, you don't have to know about all the available gRPC functions or the 3rd party API's RESTful quirks, while retaining built-in documentation and having access to types.

https://github.com/cpursley/apipe

I'm considering building a TS adapter layer so that you can just drop this into your JS/TS project like you would with Supabase:

  const { data, error } = await apipe.from('search/repositories').eq('language', 'elixir').order_by('updated').limit(1)
Where this would run through the Elixir proxy which would do the heavy lifting like async, handle rate limits, etc.

cyberax

> To me, the problem with gRPC is proto files. Every client must be built against .proto files compatible with the server; it's not a discoverable protocol.

That's not quite true. You can build an OpenAPI description based on JSON serialization of Protobufs and serve it via Swagger. The gRPC itself also offers built-in reflection (and a nice grpcurl utility that uses it!).

Pooge

> https://github.com/oapi-codegen/oapi-codegen

I'm using it for a small personal project! Works very well. Thank you!

TheGoodBarn

Just chiming in to say we use oapi-codegen everyday and it’s phenomenal.

Migrated away from Swaggo -> oapi during a large migration to be interface first for separating out large vertical slices and it’s been a godsend.


toprerules

As someone who has worked at a few of the FAANGs, having thrift/grpc is a godsend for internal service routing, but a lot of the complexity is managed by teams building the libraries, creating the service discovery layers, doing the routing, etc. But using an RPC protocol enables those things to happen on a much greater scale and speed than you could ever do with your typical JSON/REST service. I've also never seen a REST API that didn't leak verbs. If I need to build a backend service mesh or wire two local services together via a networked stream, I will always reach for grpc.

That said, I absolutely would not use grpc for anything customer or web facing. RPC is powerful because it locks you into a lot of decisions and gives you "the one way". REST is far superior when you have many different clients with different technology stacks trying to use your service.

jitl

For a public API I wouldn’t do this, but for private APIs we just do POST /api/doThingy with a JSON body, easy peasy RPC anyone can participate in with the most basic HTTP client. Works great on every OS and in every browser, no fucking around with “what goes in the URL path” vs “what goes in query params” vs “what goes in the body”.

You can even do this with gRPC if you’re using Buf or Connect - one of the server thingies that try not to suck; they will accept JSON via HTTP happily.
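
A minimal sketch of that POST-plus-JSON style with Go's net/http (the method name and payload are hypothetical):

  package main

  import (
      "encoding/json"
      "log"
      "net/http"
  )

  type doThingyRequest struct {
      Name string `json:"name"`
  }

  type doThingyResponse struct {
      Greeting string `json:"greeting"`
  }

  func main() {
      // One route per RPC; everything goes in the JSON body.
      http.HandleFunc("/api/doThingy", func(w http.ResponseWriter, r *http.Request) {
          if r.Method != http.MethodPost {
              http.Error(w, "POST only", http.StatusMethodNotAllowed)
              return
          }
          var req doThingyRequest
          if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
              http.Error(w, err.Error(), http.StatusBadRequest)
              return
          }
          w.Header().Set("Content-Type", "application/json")
          json.NewEncoder(w).Encode(doThingyResponse{Greeting: "hello " + req.Name})
      })
      log.Fatal(http.ListenAndServe(":8080", nil))
  }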

ryathal

I'd argue just making everything POST is the correct way to do a public API too. REST tricks you into endpoints no one really wants, or you break it anyway to support functionality needed. SOAP was heavy with its request/response, but it was absolutely correct that just sending everything as POST across the wire is easier to work with.

curt15

Some of the AWS APIs work this way too. See for example the Cloudwatch API: https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIRefer..., which is really JSON-RPC, not REST.

porridgeraisin

Yeah, I like doing this as well. And all the data goes in the request body. No query parameters.

Especially when the primary intended client is an SPA, where the URL shown is decoupled from the API URL.

Little bit of a memory jolt: I once built a (not for prod) backend in python as follows:

write a list of functions, one for each RPC, in a file `functions.py`

then write this generic function for flask:

  from typing import Any

  from flask import Flask, request

  import server.functions as functions

  server = Flask(__name__)

  @server.post("/<method>")
  def api(method: str):
      data: Any = request.json if request.is_json else {}

      fn = lookup(functions, method)
      if fn is None:
          return {"error": "Method not found."}
      return fn(data)

And `lookup()` looks like:

  import inspect
  from types import ModuleType

  def lookup(module: ModuleType, method: str):
      md = module.__dict__
      mn = module.__name__
      is_present = method in md
      # short-circuit so a missing or non-function name doesn't raise below
      is_a_function = is_present and inspect.isfunction(md[method])
      is_not_imported = is_a_function and md[method].__module__ == mn

      if is_present and is_not_imported and is_a_function:
          return md[method]
      return None
So writing a new RPC is just writing a new function, and it all gets automatically wired up to `/api/function_name`. Quite nice.

The other nice feature there was automatic "docs" generation, from the python docstring of the function. You see, in python you can dynamically read the docstring of an object. So, I wrote this:

  def get_docs(module: ModuleType):
      md = module.__dict__
      mn = module.__name__
      docs = ""

      for name in md:
          if not inspect.isfunction(md[name]) or md[name].__module__ != mn:
              continue
          # fall back to an empty string for functions without a docstring
          docs += (md[name].__doc__ or "") + "\n<br>\n"

      return docs[:-6]
This gives simple text documentation, which I served at an endpoint. Of course you could also write the docstrings in OpenAPI YAML format and serve them that way too.

Quite cursed overall, but hey, its python.

One of the worst footguns here is that you could accidentally expose helper functions, so you have to be sure to not write those in the functions file :P

pandemic_region

This. The amount of time lost debating correct REST semantics for a use case is staggering.

spelunker

Arguing the Right Way To Do REST was a favorite pastime amongst people at one of my previous jobs. Huge waste of time.

porridgeraisin

Yeah, and it matters in close to 0% of cases. Everyone reads the docs for everything anyway; any shared knowledge granting implicit meaning to things is very close to useless in practice with REST APIs.

Cthulhu_

What about non-web client/server applications though? I'm thinking of online games / MMOs that require much more real-time communication than REST provides. I have no idea what is used now; socket connections with something on the wire, I suppose.

kyrra

For a game, I would maybe use Protobuf and gRPC, but there is serialization and deserialization required. Something like FlatBuffers or Cap'n Proto, where the wire format matches the language data layout, makes for extremely efficient parsing (though it may not be as network-efficient). Really depends on how you structure your data.

crabbone

> thrift/grpc is a godsend for internal service routing

Compared to what? What else did you try?

rfw300

What do you mean by “leak verbs”?

jon_richards

Not OP, but https://cloud.google.com/blog/products/api-management/restfu...

The problem is that clients generally have a bunch of verbs they need to do. You have to design your objects and permissions just right such that clients can do all their verbs without an attacker being able to PATCH "payment_status" from "Requires Payment" to "Payment Confirmed".

RPC uses verbs, so that could just be the SubmitPayment RPC's job. In REST, the correct design would be to give permission to POST a "Payment" object and base "payment_status" on whether that has been done.

robertlagrant

This is the most painful bit of REST for sure.

bitzun

Unless you are doing bidirectional streaming (for which it seems pretty well suited, but I haven't used it, so it might be a fucking mess), grpc is usually a waste of time. Runtime transitive dependency hell, toolchain hell, and the teams inside Google that manage various implementations philosophically disagree on how basic features should work. Try exposing a grpc api to a team that doesn't use your language (particularly if they're using a language that isn't Go, Python or Java, or is an old version of those). Try exposing a grpc api to integrate with a COTS product. Try exposing a grpc api to a browser. All will require a middleware layer.

lordofgibbons

I've used grpc at multiple companies and teams within these companies, all of them 100-500ish engineering team size, and never had these dependency and tool chain issues. It was smooth sailing with grpc.

hamandcheese

I have now worked full-time at two companies of that size, making the dependency and toolchain problems not be a problem for all the normies.

drtse4

In my opinion, you shouldn't expose it to a browser; that's not what it's good at. Build something custom that converts to JSON. Likewise, using REST to talk between backend services makes no sense: there's no point in a human-readable protocol/API, especially if there are performance requirements (not just a call every now and then with a small amount of data returned).

9rx

To be fair, it was intended to be for browsers. But it was designed alongside the HTTP/2 spec, before browsers added HTTP/2 support, and they didn't anticipate that browsers wouldn't end up following the spec. So now it only works where you can rely on a spec-compliant HTTP/2 implementation.

robertlagrant

The article seems to be an advert for this, with its plug of that hosted gRPC<->JSON service.

txdv

> Try exposing a grpc api to a browser

I remember being grilled for not creating "jsony" interfaces:

  message Response {
    string id = 1;
    oneof sub {
      SubTypeOne sub_type_one = 2;
      SubTypeTwo sub_type_two = 3;
    }
  }

  message SubTypeOne { string field = 1; }

  message SubTypeTwo { }

In your current model you just don't have any fields in this subtype, but the response looked like this with our auto-translator:

  { "id": "id", "sub_type_two": { } }

Functionally, it works, and code written for this will work if new fields appear. However, returning empty objects to signify the type of response is strange in the web world. But when you write the protobuf you might not notice.

aaomidi

Bidirectional streaming is generally a bad idea for anything you’re going to want to run “at scale” for what it’s worth.

mvdtnz

Why do you say that? I'm involved in the planning for bidi streaming for a product that supports over 200M monthly active users. I am genuinely curious what landmines we're about to step on.

joatmon-snoo

bidi streaming screws with a whole bunch of assumptions you rely on in usual fault-tolerant software:

- there are multiple ways to retry - you can retry establishing the connection (e.g. say DNS resolution fails for a 30s window) _or_ you can retry establishing the stream

- your load-balancer needs to persist the stream to the backend; it can't just re-route per single HTTP request/response

- how long are your timeouts? if you don't receive a message for 1s, OK, the client can probably keep the stream open, but what if you don't receive a message for 30s? this percolates through the entire request path, generally in the form of "how do I detect when a service in the request path has failed"
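
As a sketch of the first point, a streaming client typically ends up with two retry layers stacked: re-establish the stream, and back off between attempts. The types below are hypothetical stand-ins for a generated streaming client:

  package watch

  import (
      "context"
      "fmt"
      "time"
  )

  type Event struct{ Payload string }

  // Stand-ins for a generated streaming client (hypothetical).
  type EventStream interface {
      Recv() (*Event, error) // blocks; returns an error when the stream breaks
  }
  type WatchClient interface {
      Watch(ctx context.Context) (EventStream, error)
  }

  // consume keeps a stream alive: reconnect on failure with capped backoff.
  func consume(ctx context.Context, c WatchClient, handle func(*Event)) error {
      backoff := time.Second
      for {
          if stream, err := c.Watch(ctx); err == nil { // retry layer 1: re-open the stream
              backoff = time.Second
              if err := recvLoop(stream, handle); err != nil {
                  fmt.Println("stream broke:", err) // decide here which errors are retryable
              }
          }
          select {
          case <-ctx.Done():
              return ctx.Err()
          case <-time.After(backoff): // retry layer 2: wait before reconnecting
          }
          if backoff < 30*time.Second {
              backoff *= 2
          }
      }
  }

  func recvLoop(s EventStream, handle func(*Event)) error {
      for {
          ev, err := s.Recv()
          if err != nil {
              return err
          }
          handle(ev)
      }
  }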

jpc0

Not going to give you any proper advice but rather a question to have an answer for. It's not unsolvable or even difficult but needs an answer at scale.

How do you scale horizontally?

User A connects to server A. User A's connection drops. User A reconnects to your endpoint. Did you have anything stateful you had to remember? Did the load balancer need to remember to reconnect user A to server A? What happens if the server dropped; how do you reconnect the user?

Now if your streaming is server to server over gRPC on your own internal backend then sure, build actors with message passing. You will probably need an orchestration layer (not k8s, that's for infra; you need an orchestrator for your services, probably written by you), for the same reason as above: what happens if Server A goes down, but instead of User A it was Server B? The orchestrator acts as your load balancer would have, but it just remembers who exists and who they need to speak to.


crabbone

Nothing in Protobuf is suited for streaming. It's anti-streaming compared to almost any binary protocol you can imagine (unless you want to stream VHD, which would be a sad joke... for another time).

cyberax

> Nothing in Protobuf is suited for streaming.

Uhh... Why? Protobuf supports streaming replies and requests. Do you mean that you need to know the message size in advance?

crabbone

No, Protobuf doesn't support streaming.

Streaming means that it's possible to process the payload in small chunks, preferably of fixed size. Here are some examples of formats that can be considered streaming:

* IP protocol. Comes in uniformly sized chunks, payload doesn't have a concept of "headers". Doesn't even have to come in any particular order (which might be both a curse and a blessing for streaming).

* MP4 format. Comes in frames, not necessarily uniformly sized, but more-or-less uniform (the payload size will vary based on compression outcome, but will generally be within certain size). However, it has a concept of "headers", so must be streamed from a certain position onward. There's no way to jump into the middle and start streaming from there. If the "header" was lost, it's not possible to resume.

* Sun RPC, specifically the part that's used in NFS. Payload is wildly variable in size and function, but when it comes to transferring large files, it still can be streamed. Reordering is possible to a degree, but the client / server need to keep count of messages received, also are able to resume with minimal re-negotiation (not all data needs to be re-synced in order to resume).

Protobuf, in principle, cannot be processed unless the entire message has been received (because, by design, the keys in messages don't have to be unique, and the last one wins). Messages are hierarchical, so, there's no way to split them into fixed or near-fixed size chunks. Metadata must be communicated separately, ahead of time, otherwise sides have no idea what's being sent. So, it's not possible to resume reading the message if the preceding data was lost.

It's almost literally the collection of all things you don't want to have in a streaming format. It's like picking a strainer with the largest holes to make soup. Hard to think about a worse tool for the job.

9rx

> Try exposing a grpc api to a team that doesn't use your language

Because of poor HTTP/2 support in those languages? Otherwise, it's not much more than just a run of the mill "Web API", albeit with some standardization around things like routing and headers instead of the randomly made up ones you will find in a bespoke "Look ma, I can send JSON with a web server" API. That standardization should only make implementing a client easier.

If HTTP/2 support is poor, then yeah, you will be in for a world of hurt. Which is also the browser problem with no major browser (and maybe no browser in existence) ever ending up supporting HTTP/2 in full.

NAHWheatCracker

My only work experience with gRPC was on a project where another senior dev pushed for it because we "needed the performance". We ended up creating a JSON API anyways. Mostly because that's what the frontend could consume. No one except for that developer had experience with gRPC. He didn't go any deeper than the gRPC Python Quick start guide and wouldn't help fix bugs.

The project was a mess for a hundred reasons and never got any sort of scale to justify gRPC.

That said, I've used gRPC in bits outside of work and I like it. It requires a lot more work and thought. That's mostly because I've worked on so many more JSON APIs.

lordofgibbons

That sounds more like a critique of the "senior" developer who didn't know grpc isn't compatible with browsers before adopting it than grpc itself.

NAHWheatCracker

Correct, I wasn't critiquing gRPC. I was critiquing a type of person who might push for gRPC. That developer probably thought of it as a novelty and made up reasons to use it. It was a big hassle that added to that team's workload with no upside.

reactordev

When all you have is a hammer…

gRPC is fantastic for its use case. Contract first services with built in auth. I can make a call to a service using an API that’s statically typed due to code generation and I don’t have to write it. That said, it’s not for browsers so Mr gRPC dev probably had no experience in browser technologies.

A company I worked for about 10 years ago was heavy on gRPC, but only as a service bridge that would call the REST handler (if you came in over REST, it would just invoke this handler anyway). Everything was great and DTOs (messages) were automatically generated! Downside was the serialization hit.

awinter-py

Yes, who would imagine that the homegrown RPC of the internet-and-browser company would work on the internet and in a browser.

Very fair critique.

jon_richards

I've been having fun with connectrpc https://connectrpc.com/

It fixes a lot of the problematic stuff with grpc and I'm excited for webtransport to finally be accepted by safari so connectrpc can develop better streaming.

I initially thought https://buf.build was overkill, but the killer feature was being able to import 3rd party proto files without having to download them individually:

    deps:
      - buf.build/landeed/protopatch
      - buf.build/googleapis/googleapis

The automatic SDK creation is also huge. I was going to grab a screenshot praising it auto-generating SDKs for ~9 languages, but it looks like they updated in the past day or two and now I count 16 languages, plus OpenAPI and some other new stuff.

Edit: I too was swayed by false promises of gRPC streaming. This document exactly mirrored my experiences https://connectrpc.com/docs/go/streaming/

cyberax

> It fixes a lot of the problematic stuff with grpc and I'm excited for webtransport to finally be accepted by safari so connectrpc can develop better streaming.

We developed a small WebSocket-based wrapper for ConnectRPC streaming, just to make it work with ReactNative. But it also allows us to use bidirectional streaming in the browser.

jon_richards

Awesome! Could you share? I also use react native.

thayne

It still uses protocol buffers though, which is where many of the problems I have with gRPC come from.

jon_richards

The auto-generated SDKs are very useful here. An API customer doesn't have to learn protobuf or install any tooling. Plus they can fall back to JSON without any fuss. Connectrpc is much better at that than my envoy transcoder was.

If you're thinking from the API author's point of view, I might agree with you if there was a ubiquitous JSON annotation standard for marking optional/nullable values, but I am sick of working with APIs that document endpoints with a single JSON example and I don't want to inflict that on anyone else.

9rx

It doesn't use protocol buffers any more than gRPC does, which is to say it only uses them if you choose to use them. gRPC is payload agnostic by design. Use CSV if you'd rather. It's up to you.

masterj

You can also choose to use JSON instead. Works great with curl and browser dev tools.

nazcan

Is there recent news on safari supporting webtransport?

rednafi

Google somehow psyoped the entire industry into using gRPC for internal service communications. The devex of gRPC is considerably worse than REST's.

You can’t just give someone a simple command to call an endpoint—it requires additional tooling that isn’t standardized. Plus, the generated client-side code is some of the ugliest gunk you’ll find in any language.

echelon

> The devex of gRPC is considerably worse than REST.

Hard disagree from the backend world.

From one protocol change you can statically determine which of your downstream consumers need to be updated and redeployed. That can turn weeks of work into an hour-long change.

You know that the messages you accept and emit are immediately validated. You can also store them cheaply for later rehydration.

You get incredibly readable API documentation with protos that isn't muddled with code and business logic.

You get baked in versioning and deprecation semantics.

You have support for richer data structures (caveat: except for maps).

In comparison, JSON feels bloated and dated. At least on the backend.

danpalmer

I also disagree, at Google everything is RPCs in a similar way to gRPC internally, and I barely need to think about the mechanics of them most of the time, whereas with REST/raw HTTP, you need to think about so much of the process – connection lifecycle, keepalive, error handling at more layers, connection pools, etc.

However, I used to work in a company that used HTTP internally, and moving to gRPC would have sucked. If you're the one adding gRPC to a new service, that's more of a pain than `import requests; requests.get(...)`. There is no quick and hacky solution for gRPC, you need a fully baked, well integrated solution, rolled out across everyone who will need it.

pianoben

The flexibility of HTTP has advantages, too; it's simple to whip up a `curl` command to try things out. How does Google meet that need for gRPC APIs?

rednafi

My perspective stems from working with it in backend services as well. The type safety and the declarative nature of protobufs are nice, but writing clients and servers isn’t.

The tooling is rough, and the documentation is sparse. Not saying REST doesn’t have its fair share of faults, but gRPC feels like a weird niche thing that’s hard to use for anything public-facing. No wonder none of the LLM vendors offer gRPC as an alternative to REST.

spockz

The benefits you mention stem from having a total view of all services and which protos they are using.

The same is achievable with a registry of OpenAPI documents. The only thing you need to ensure is that teams share schema definitions. This holds for gRPC as well: if teams create new types, just copying some of the fields they need, your analysis will be lost as well.

matrix87

> You get incredibly readable API documentation with protos that isn't muddled with code and business logic.

I mean, ideally (hopefully) in the JSON case there's some class defined in code that they can document in the comments

If it's a shitty shop that's sometimes less likely. Nice thing about protos is that the schemas are somewhere

lmm

> You can’t just give someone a simple command to call an endpoint—it requires additional tooling that isn’t standardized.

gRPC is a standard in all the ways that matter. It (or Thrift) is a breath of fresh air compared to doing it all by hand - write down your data types and function signatures, get something that you can actually call like a function (clearly separated from an actual in-process function - as it should be, since it behaves differently - but usable like one). Get on with your business logic instead of writing serialisation/deserialisation boilerplate. GraphQL is even better.

coolhand2120

> GraphQL is even better.

Letting clients introduce load into the system without understanding the big O impact of the SOA upstream is a foot gun. This does not scale and results in a massive waste of money on unnecessary CPU cycles on O(log n) FK joins and O(n^2) aggregators.

Precomputed data in the shape of the client's data access pattern is the way to go. Frontload your CPU cycles with CQRS. Running all your compute at runtime is a terrible experience for users (slow, uncachable, geo origin slow too) and creates total chaos for backend service scaling (Who's going to use what resource next? Nobody knows!).

tshaddox

Any non-trivial REST API is also going to have responses which embed lists of related resources.

If your REST API doesn't have a mechanism for each request to specify which related resources get included, you'll also be wasting resources including related resources which some requesters don't even need!

If your REST API does have a mechanism for each request to specify which related resources get included (e.g. JSON API's 'include' query param [0]), then you have the same problem as GraphQL where it's not trivial to know the precise performance characteristics of every possible request.

[0] https://jsonapi.org/format/#fetching-includes

lmm

Premature optimisation is the root of all evil. Yes, for the 20% of cases that are loading a lot of data and/or used a lot, you need to do CQRS and precalculate the thing you need. But for the other 80%, you'll spend more developer time on that than you'll ever make back in compute time savings (and you might not even save compute time if you're precomputing things that are rarely queried).

nsonha

> GraphQL is even better

just a casual sentence at the end? How about no. It's in the name, a query-oriented API, useless if you don't need flexible queries.

Why don't you address the problem they talked about: what is the CLI tool I can use to test gRPC? What about a GUI client?

mjr00

For GUI, I've been very happy with grpcui-web[0]. It really highlights the strengths of GRPC: you get a full list of available operations (either from the server directly if it exposes metadata, or by pointing to the .proto file if not), since everything is strongly typed you get client-side field validation and custom controls e.g. a date picker for timestamp types or drop-down for enums. The experience is a lot better than copy & pasting from docs for trying out JSON-HTTP APIs.

In general though I agree devex for gRPC is poor. I primarily work with the Python and Go APIs and they can be very frustrating. Basic operations like "turn pbtypes.Timestamp into a Python datetime or Go time.Time" are poorly documented and not obvious. proto3 removing `optional` was a flub and then adding it back was an even bigger flub; I have a bunch of protos which rely on the `google.protobuf.Int64Value` wrapper types which can never be changed (without a massive migration which I'm not doing). And even figuring out how to build the stuff consistently is a challenge! I had to build out a centralized protobuf build server that could use consistent versions of protoc plus the appropriate proto-gen plugins. I think buf.build basically does this now but they didn't exist then.

[0] https://github.com/fullstorydev/grpcui
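
For reference, the Go side of the Timestamp round-trip mentioned above lives in google.golang.org/protobuf/types/known/timestamppb (once you know to look there):

  package main

  import (
      "fmt"
      "time"

      "google.golang.org/protobuf/types/known/timestamppb"
  )

  func main() {
      ts := timestamppb.New(time.Now()) // time.Time -> *timestamppb.Timestamp
      t := ts.AsTime()                  // *timestamppb.Timestamp -> time.Time (in UTC)
      fmt.Println(t, ts.IsValid())
  }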

apayan

grpcurl is what I use to inspect gRPC apis.

https://github.com/fullstorydev/grpcurl

cloverich

> a query-oriented API, useless if you don't need flexible queries

Right, but the typical web service at the typical startup does need flexible queries. I feel people both overestimate its implications and underestimate its value.

    - Standard "I need everything" in the model call
    - Simplified "I need two properties" call, like id + display name for a dropdown
    - I need everything + a few related fields, which maybe require elevated permissions

GraphQL makes that very easy to support, test, and monitor in a very standard way. You can build something similar with REST; it's just very ergonomic and natural in GraphQL. And it's especially valuable as your startup grows, and some of your services become "key" services used by a wider variety of use cases. It's not perfect or something everyone should use, sure, but I believe a _lot_ of startup developers would be more efficient and satisfied using GraphQL.

reactordev

Take the protobuf and generate a client… gRPC makes no assumptions about your topology, only that there’s a server, there’s a client, and it’s up to you to fill in the logic. Or use grpcurl, or BloomRPC, or Kreya.

The client is the easy part if you just want to test calls.

lmm

> It's in the name, a query-oriented API, useless if you don't need flexible queries.

It's actually still nice even if you don't use the flexibility. Throw up GraphiQL and you've got the testing tool you were worried about. (Sure, it's not a command line tool, but people don't expect that for e.g. SQL databases).

alexandre_m

> what is the cli tool I can use to test grpc

Use https://connectrpc.com/ and then you can use curl, postman, or any HTTP tool of your choosing that supports sending POST requests.


sitzkrieg

I agree; I was forced to use it at several companies and it was 99% not-needed tech-debt-investment garbage.

Even in Go it's a pain in the ass to have to regen and figure out versioning shared protos, and it only gets worse with each additional language.

But every startup thinks they need 100 microservices and gRPC, so whatever.

hamandcheese

> Even in Go it's a pain in the ass to have to regen and figure out versioning shared protos, and it only gets worse with each additional language

The secret is: don't worry about it. There is no need to regenerate your proto bindings for every change to the proto defs. Only do it when you need to access something new in your application (which only happens when you will be making changes to the application anyway). Don't try to automate it. That is, assuming you don't make breaking changes to your protos (or if you do, you do so under a differently named proto).

recursivedoubts

> If your API is a REST API, then your clients never have to understand the format of your URLs and those formats are not part of the API specification given to clients.

Roy Fielding, who coined the term REST:

"A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations."

https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...

I know it's a dead horse, but it's so funny: the "API specification" given to clients, in a truly RESTful system, should only be the initial entry point URI/URL.

jahewson

This idea of self-describing REST is now better known as HATEOAS. Personally I think it’s bloated and doesn’t solve a real problem.

https://en.m.wikipedia.org/wiki/HATEOAS

recursivedoubts

HATEOAS is one sub-constraint of the uniform interface constraint of REST, see chapter 2 of my book:

https://hypermedia.systems/components-of-a-hypermedia-system...

It's an important aspect of a truly RESTful network architecture

crabmusket

HATEOAS is fantastic when your clients are humans. Not so much when they're code.

curt15

How does one even write an API client against a REST API that only publishes the initial entry point? In particular, how should the client discover the resources that can be manipulated by the API, or the request/response models?

deathanatos

The responses from prior requests give you URLs which form subsequent requests.

For example, if I,

  GET <account URL>
that might return the details of my account, which might include a list of links (URLs) to all subscriptions (or perhaps a URL to the entire collection) in the account.

(Obviously you have to get the account URL in this example somewhere too, and usually you just keep tugging on the objects in whatever data model you're working with and there are a few natural, easy top-level URLs that might end up in a directory of sorts, if there's >1.)

See ACME for an example; it's one of the few APIs I'd class as actually RESTful. https://datatracker.ietf.org/doc/html/rfc8555#section-7.1.1.

Needing only a single URL is beautiful, IMO: it's simple configuration-wise, it easily lets one put in alternate implementations, mocks, etc., and you're not guessing at URLs, which I've had to do a few times with non-RESTful HTTP APIs. (Most recently being Google Cloud's…)
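
A sketch of what consuming such an API looks like from the single entry URL (the document shape and the "links" field are hypothetical; real APIs use media types like HAL or ACME's directory object):

  package main

  import (
      "encoding/json"
      "fmt"
      "log"
      "net/http"
  )

  // directory is the hypothetical root document: link names mapped to URLs.
  type directory struct {
      Links map[string]string `json:"links"`
  }

  func getJSON(url string, v any) error {
      resp, err := http.Get(url)
      if err != nil {
          return err
      }
      defer resp.Body.Close()
      return json.NewDecoder(resp.Body).Decode(v)
  }

  func main() {
      var root directory
      if err := getJSON("https://api.example.com/", &root); err != nil {
          log.Fatal(err)
      }
      // The client never assembles a URL itself; it follows what the server returned.
      var account map[string]any
      if err := getJSON(root.Links["account"], &account); err != nil {
          log.Fatal(err)
      }
      fmt.Println(account)
  }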

AdieuToLogic

> How does one even write an API client against a REST API that only publishes the initial entry point? in particular, how should the client discover the resources that can be manipulated by the API or the request/response models?

HAL[0] is very useful for this requirement IMHO. That in conjunction with defining contracts via RAML[1] I have found to be highly effective.

0 - https://datatracker.ietf.org/doc/html/draft-kelly-json-hal

1 - https://github.com/raml-org/raml-spec/blob/master/versions/r...

pests

Look up HATEOAS. The initial endpoint will give you the next set of resources - maybe the user list and then the post list. Then as you navigate to, say, the post list, it will have embedded pagination links. Once you have resource URLs from this list you can POST/PUT/DELETE as usual.

recursivedoubts

Your browser is a client that works against RESTful entry points that only publish an initial entry point, such as https://news.ycombinator.com

From that point forward the client discovers resources (articles, etc.) that can be manipulated (e.g. comments posted and updated) via hypermedia responses from the server.

wstrange

The browser is also driven by an advanced wetware AI system that knows which links to click on and how to interpret the results.

loudgas

Your Web browser is probably the best example. When you visit a Web site, your browser discovers resources and understands how it can interact with them.

Thiez

It certainly does not. Sure it can crawl links, but the browser doesn't understand the meaning of the pages, nor can it intelligently fill out forms. It is the user that can hopefully divine how to interact with the pages you serve their browser.

Most APIs however are intended to be consumed by another service, not by a human manually interpreting the responses and picking the next action from a set of action links. HATEOAS is mostly pointless.

deathanatos

> the "API specification" given to clients, in a truly RESTful system, should only be the initial entry point URI/URL

I don't know that I fully agree? The configuration, perhaps, but I think the API specification will be far more than just a URL. It'll need to detail whatever media types the system the API is for uses. (I.e., you'll need to spend a lot of words on the HTTP request/response bodies, essentially.)

From your link:

> A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state

That. I.e., you're not just returning `application/json` to your application, you're returning `<something specific>+json`. (Unless you truly are working with JSON generically, but I don't think most are; the JSON is holding business specific data that the application needs to understand & work with.)

That is, "and [the] set of standardized media types that are appropriate for the intended audience" is also crucial.

(And I think this point gets lost in the popular discourse: it focuses on that initial entry URL, but the "describe the media types", as Fielding says, should be the bulk of the work — sort of the "rest of the owl" of the spec. There's a lot of work there, and I think sometimes people hearing "all you need is one URL" are right to wonder "but where's the rest of the specification?")
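
As a small illustration of what "spend your effort on media types" means in practice (the media type name here is made up), the client asks for a specific representation and then checks what it actually received, instead of assuming "it's JSON, so we're fine":

  package main

  import (
    "fmt"
    "mime"
    "net/http"
  )

  func main() {
    req, err := http.NewRequest("GET", "https://api.example/accounts/42", nil)
    if err != nil {
      panic(err)
    }
    // Ask for the specific representation the spec documents, not just "json".
    req.Header.Set("Accept", "application/vnd.example.account+json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
      panic(err)
    }
    defer resp.Body.Close()

    // The media type, not the URL, tells us how to interpret the body.
    mt, _, err := mime.ParseMediaType(resp.Header.Get("Content-Type"))
    if err != nil || mt != "application/vnd.example.account+json" {
      panic(fmt.Sprintf("unexpected representation: %q", mt))
    }
    fmt.Println("got an account representation we know how to parse")
  }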

recursivedoubts

that <something specific> should not be API specific, otherwise you are just smuggling an API specification into a second-order aspect of your system and violating the uniform interface.

eadmund

You both agree: when he writes ‘format of your URLs,’ he literally means the format of the URLs, not the format of the resources. Like you, I clicked on the article expecting yet another blogger who doesn’t understand REST but it appears this author has at least some basic knowledge of the concepts. Good for him!

I like gRPC too, and honestly for a commercial project it is pretty compelling. But for a personal or idealistic project I think that REST is preferable.

resonious

Classic case of a good idea going viral, followed by people misunderstanding the idea but continuing to spread it anyway.

est

I think the original REST is only suitable for "file" resources; that's essentially what WebDAV is, and nobody bothers to use it these days.

gghoop

I dislike the use of gRPC within the data center. People reach for it citing performance, but gRPC is not high performance, and the quality of the available open-source clients is very poor outside of the core C++/Java implementations (the Node.js implementation in particular). I am not against the use of protobuf as an API spec, but it should be possible to use it with a framing protocol over TCP (sketched below); there just isn't a clear dominant choice for that way of doing RPC.

When it comes to web-based APIs I am more in favour of readable payloads, but there are issues there too: we tend to use JSON, whose type specificity is loose, which leads to interop problems between backend languages, particularly in Node.js where JSON.parse is used to implement a schema mapping. To do this properly, encoders and decoders need to be generated explicitly from schemas, which somewhat diminishes the benefit of JSON within the context of JS.
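
The framing idea doesn't need much. A rough Go sketch of length-prefixed framing follows; the payload here is a raw string standing in for a marshaled protobuf message, and a real implementation would also want size limits, timeouts, and versioning:

  package main

  import (
    "encoding/binary"
    "fmt"
    "io"
    "net"
  )

  // writeFrame sends one message as a 4-byte big-endian length followed by the payload.
  func writeFrame(w io.Writer, payload []byte) error {
    var hdr [4]byte
    binary.BigEndian.PutUint32(hdr[:], uint32(len(payload)))
    if _, err := w.Write(hdr[:]); err != nil {
      return err
    }
    _, err := w.Write(payload)
    return err
  }

  // readFrame reads one length-prefixed message.
  func readFrame(r io.Reader) ([]byte, error) {
    var hdr [4]byte
    if _, err := io.ReadFull(r, hdr[:]); err != nil {
      return nil, err
    }
    payload := make([]byte, binary.BigEndian.Uint32(hdr[:]))
    _, err := io.ReadFull(r, payload)
    return payload, err
  }

  func main() {
    // Loopback demo: an in-memory pipe stands in for a TCP connection.
    client, server := net.Pipe()
    go func() {
      defer client.Close()
      // A marshaled protobuf message would go here instead of a raw string.
      _ = writeFrame(client, []byte("hello"))
    }()

    msg, err := readFrame(server)
    if err != nil {
      panic(err)
    }
    fmt.Printf("received %d-byte frame: %s\n", len(msg), msg)
  }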

mvdtnz

In what situation is performance enough of a concern that you would consider gRPC, but not enough of a concern to keep Node.js away from your stack?

gghoop

No one is picking Node.js for high performance, but when it is chosen for other reasons it's still expected to perform well. The Node.js gRPC library performs poorly relative to the overall performance characteristics of Node.js, and this is a problem because most of the work performed by typical Node.js services is API-related work (sending data, encoding and decoding payloads, managing sockets, etc.). That's not even touching on the bugs in the http2 implementation in Node core or the gRPC library itself. Much of the selling point of gRPC is supposedly the language interop, and this seems like false advertising to me.

MobiusHorizons

I would imagine the reason is really that Google internally doesn't allow NodeJS in production, so the tooling for gRPC for NodeJS does not benefit from the same level of scrutiny as languages Google uses internally.

jahewson

I agree, though Zod greatly helps with the JS schema issue. I’m keeping an eye on Microsoft’s TypeSpec project too: typespec.io for interoperable schema generation.

kyrra

The main benefit of protos is interop between various languages. If you have a single-language tech stack, it matters less.

Also, if you use languages outside of Google's primary languages, you're likely not going to get as good of an experience.

whoevercares

There was a talk in 2023 about Homa, a non-TCP-based protocol for RPC in the data center use case: https://youtu.be/xQQT8YUvWg8?si=g3u5TogBe0_QpPpj

swyx

always felt like grpc was unnecessarily inaccessible to those of us outside google land. the grpc js client is unnecessarily heavy and kinda opaque. good idea, but poorly executed compared to the "simplicity" of REST that people are already familiar with

echelon

The frontend / backend split is where you have the REST and JSON camps fighting with the RPC / protobuf / gRPC factions.

RPCs have more maintainable semantics than REST by virtue of not trying to shoehorn your data model (cardinality, relationships, etc.) into a one-size-fits-all prescriptive pattern. Very few entities ever organically evolve to fit cleanly within RESTful semantics unless you design everything upfront with perfect foresight. In a world of rapidly evolving APIs, you're never going to hit upon beautiful RESTful entities. In bigger teams with changing requirements and ownership, it's better to design around services.

The frontend folks don't maintain your backend systems. They want easy to reason about APIs, and so they want entities they can abstract into REST. They're the ultimate beneficiaries of such designs.

The effort required for REST has a place in companies that sell APIs and where third party developers are your primary customers.

Protobufs and binary wire encodings are easier for backend development. You can define your API and share it across services in a statically typed way, and your services spend less time encoding and decoding messages. JSON isn't semantic or typed, and it requires a lot of overhead.

The frontend folks natively deal with text and JSON. They don't want to download protobuf definitions or handle binary data as second class citizens. It doesn't work as cleanly with their tools, and JSON is perfectly elegant for them.

gRPC includes excellent routing, retry, side channel, streaming, and protocol deprecation semantics. None of this is ever apparent to the frontend. It's all for backend consumers.

This is 100% a frontend / backend tooling divide. There's an interface and ergonomic mismatch.

eadmund

Protobufs vs. JSON are orthogonal to REST vs. RPC: you can have REST where the representations are protobufs or JSON objects; you can have RPC where the requests and responses are protobufs or JSON objects.

rgbrgb

yes!

REST is kind of like HTML... source available by default, human-readable, easy to inspect

GRPC is for machines efficiently talking to other machines... slightly inconvenient for any human in the loop (whether that's coding or inspecting requests and responses)

The different affordances make sense given the contexts and goals they were developed in, even if they are functionally very similar.

kyrra

The official grpc JavaScript implementation is sort of bad. The one by buf.build is good from what I've seen.

https://buf.build/blog/protobuf-es-the-protocol-buffers-type...

tempest_

gRPC is a nice idea weighed down by the fact that it is full of solutions to Google-type problems I don't have. It seems like a lot of projects have chosen it because a binary RPC protocol with a contract is a nice thing to have, but the further away from Go you get, the worse it is.

limaoscarjuliet

There are uses where gRPC shines. Streaming is one of them: you can transparently send a stream of messages over one "connection". For a simple CRUD service, REST is indeed more than enough.
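
For example, with a server-streaming RPC the handler just keeps calling Send on one stream, and the client reads messages off a single HTTP/2 stream. A rough Go sketch, assuming a hypothetical generated package pb from a proto with `rpc Watch(WatchRequest) returns (stream Event)`; the service and message names are made up:

  package main

  import (
    "fmt"
    "log"
    "net"

    "google.golang.org/grpc"
    // pb stands for a hypothetical package generated from a proto file with:
    //   rpc Watch(WatchRequest) returns (stream Event);
    pb "example.com/gen/monitor"
  )

  // watchServer implements the (assumed) generated pb.MonitorServer interface.
  type watchServer struct {
    pb.UnimplementedMonitorServer
  }

  // Watch pushes a sequence of messages back over a single HTTP/2 stream;
  // the generated client consumes them with a Recv() loop.
  func (s *watchServer) Watch(req *pb.WatchRequest, stream pb.Monitor_WatchServer) error {
    for i := 0; i < 3; i++ {
      if err := stream.Send(&pb.Event{Message: fmt.Sprintf("event %d", i)}); err != nil {
        return err // the client went away or the connection dropped
      }
    }
    return nil // clean end of stream
  }

  func main() {
    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
      log.Fatal(err)
    }
    s := grpc.NewServer()
    pb.RegisterMonitorServer(s, &watchServer{})
    log.Fatal(s.Serve(lis))
  }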

dlahoda

afaik grpc did callbacks before we got sse/ws/webrtc/webtransport, so grpc was kind of needed.

grpc also had a canonical way to stream content; in http there was no commonly accepted solution back then.

coder543

Your memory appears to be incorrect.

SSE was first built into a web browser back in 2006. By 2011, it was supported in all major browsers except IE. SSE is really just an enhanced, more efficient version of long polling, which I believe was possible much earlier.

Websocket support was added by all major browsers (including IE) between 2010 and 2012.

gRPC wasn't open source until 2015.

dilyevsky

I'm old enough to have worked with ASN.1 and its various proprietary “improvements”, as well as SOAP/WSDL, and compared to that, working with protobuf/Stubby (Google's internal predecessor to gRPC) was the best thing since sliced bread.

kybernetikos

Even in 2025 grpc is still awful for streaming to browsers. I was doing Browser streaming via a variety of different methods back in 2006, and it wasn't like we were the only ones doing it back then.

masterj

You should check out https://connectrpc.com/ It's based on grpc but works a lot better with web tooling

pphysch

How could gRPC be simpler without sacrificing performance?

jeeyoungk

There are two parts to gRPC's performance:

- 1. a multiplexing protocol implemented on top of HTTP/2
- 2. a serialization format via protobuf

For most companies, neither 1 nor 2 is needed, but the side effect of 2 (having a structured schema) is good enough on its own. This was the idea behind Twirp - https://github.com/twitchtv/twirp - not sure whether it's still actively used / maintained, but it's protobuf as JSON over HTTP.
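
The Twirp-style approach is basically "RPC shapes, boring transport": one plain HTTP POST per call, routed by service/method name, with a JSON body. A rough Go sketch of a client; the URL and message shapes loosely follow Twirp's example service and are illustrative, not generated code:

  package main

  import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
  )

  // The request/response shapes would normally come from a .proto file;
  // these hand-written structs stand in for the generated types.
  type makeHatRequest struct {
    Inches int32 `json:"inches"`
  }

  type hat struct {
    Size  int32  `json:"size"`
    Color string `json:"color"`
  }

  func main() {
    body, _ := json.Marshal(makeHatRequest{Inches: 12})

    // One plain HTTP/1.1 POST per call, routed by service/method name:
    // no HTTP/2 multiplexing, no binary framing, just JSON over HTTP.
    resp, err := http.Post(
      "https://haberdasher.example/twirp/example.Haberdasher/MakeHat",
      "application/json",
      bytes.NewReader(body),
    )
    if err != nil {
      panic(err)
    }
    defer resp.Body.Close()

    var h hat
    if err := json.NewDecoder(resp.Body).Decode(&h); err != nil {
      panic(err)
    }
    fmt.Printf("got a %d-inch %s hat\n", h.Size, h.Color)
  }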

liontwist

What kind of performance? Read? Write? Bandwidth?

dlahoda

grpc "urls" and data are binary.

binary with schema separation.

3x smaller payload.

turnsout

I like this article format. Here, let me try. In my opinion, there are three significant and distinct formats for serializing data:

  - JSON
  - .NET Binary Format for XML (NBFX)
  - JSON Schema

JSON: The least-commonly used format is JSON—only a small minority use it, even though the word JSON is used (or abused) more broadly. A signature characteristic of JSON is that the consumer of JSON can never know anything about the data model.

NBFX: A second serialization model is NBFX. The great thing about NBFX is that nobody has to worry about parsing XML text—they just have to learn NBFX.

JSON Schema: Probably the most popular way to serialize data is to use something like JSON Schema. A consumer of JSON Schema just reads the schema, and then uses JSON to read the data. It should be obvious that this is the total opposite of JSON, because again, in JSON it's illegal to know the format ahead of time.

jackman3005

This is great. I feel like this perfectly captures the strangeness of how this article was written.

abalaji

Everyone is hating on gRPC in this thread, but I thought I'd chime in as to where it shines. Because of the generated message definition stubs (which require additional tooling), clients almost never send malformed requests and the servers send a well understood response.

This makes stable APIs so much easier to integrate with.

inetknght

> Because of the generated message definition stubs (which require additional tooling), clients almost never send malformed requests and the servers send a well understood response.

Sure. Until you need some fields to be optional.

> This makes stable APIs so much easier to integrate with.

Only on your first iteration. After a year or two of iterating you're back to JSON, checking if fields exist, and re-validating your data. Also, there are a half-dozen bugs that you can't reproduce and don't know the cause of, so you just work around them with retries.

hedora

There’s also a gaping security hole in its design.

They don’t have sane support for protocol versioning or required fields, so every field of every type ends up being optional in practice.

So, if a message has N fields, there are 2^N combinations of fields that the generated stubs will accept and pass to you, and it's up to business logic to decide which combinations are valid.

It's actually worse than that, since the other side of the connection could be newer than your code and send fields you don't understand. In that case, the bindings just silently accept messages with unknown fields, and it's up to you to decide how to handle them.

All of this means that, in practice, the endpoints and clients will accumulate validation bugs over time. At that point maliciously crafted messages can bypass validation checks, and exploit unexpected behavior of code that assumes validated messages are well-formed.

I've never met a gRPC proponent who understands these issues, and all the gRPC applications I've worked with have had these problems.
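
A sketch of what that looks like in practice, with plain Go structs standing in for generated message types (all names invented): presence is only knowable at runtime, so a hand-written validation layer has to carve the valid combinations out of the 2^N possible ones, and every endpoint has to remember to call it:

  package main

  import (
    "errors"
    "fmt"
  )

  // transfer mimics what a generated message looks like in practice:
  // every field is effectively optional, so presence is only known at runtime.
  type transfer struct {
    FromAccount *string
    ToAccount   *string
    AmountCents *int64
  }

  // validate is the layer the stubs don't give you: it has to pick out the
  // valid combinations from the 2^N possible presence patterns, and every
  // endpoint must remember to call it before trusting the message.
  func validate(t *transfer) error {
    if t.FromAccount == nil || t.ToAccount == nil {
      return errors.New("both accounts are required")
    }
    if t.AmountCents == nil || *t.AmountCents <= 0 {
      return errors.New("amount must be present and positive")
    }
    return nil
  }

  func main() {
    from := "acct-1"
    amount := int64(500)
    // A partial message: ToAccount is missing, but nothing in the type
    // system stops it from being constructed or decoded off the wire.
    msg := &transfer{FromAccount: &from, AmountCents: &amount}
    fmt.Println(validate(msg)) // both accounts are required
  }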

bluGill

I have yet to see a good way to do backward compatibility in anything. The only thing I've found that really works is that sometimes you can add an argument with a default value. Removing an argument only works if everyone is passing the same value anyway; otherwise some callers are expecting the behavior their particular value causes, and so you can't remove it.

Thus all arguments should be required, in my opinion. If you make a change, add a whole new function with the new arguments. If allowed, the new function can have the same name (whether overloading should be done this way is somewhat controversial; I come out in favor, but the arguments against do make good points that may be compelling to you). That way the complexity is managed, since only a limited subset of the combinatorial explosion is possible.
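
A small Go sketch of that evolution style (function names invented; in a language without overloading, like Go here, the new function simply gets a new name). The old call is frozen and delegates to the new one with a fixed default, and all arguments stay required:

  package main

  import "fmt"

  // CreateInvoice is the original call, kept exactly as-is so existing callers never break.
  func CreateInvoice(customerID string, amountCents int64) error {
    // The old behavior becomes a fixed default of the newer call.
    return CreateInvoiceWithCurrency(customerID, amountCents, "USD")
  }

  // CreateInvoiceWithCurrency is the new function with the extra required argument.
  // Callers that need the new behavior opt in explicitly; nothing is optional.
  func CreateInvoiceWithCurrency(customerID string, amountCents int64, currency string) error {
    fmt.Printf("invoicing %s for %d (%s)\n", customerID, amountCents, currency)
    return nil
  }

  func main() {
    _ = CreateInvoice("cust-42", 1999)                    // old callers unchanged
    _ = CreateInvoiceWithCurrency("cust-42", 1999, "EUR") // new callers are explicit
  }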

kybernetikos

> every field of every type ends up being optional in practice.

This also means that you can't write a client without loads of branches, which harms performance.

I find it odd that gRPC has a reputation for high performance. It's at best good performance, given a bunch of assumptions about how schemas will be maintained and evolved.

abalaji

Hence the qualification of a stable API. You can mark fields as unused (reserved) and, more recently, as optional:

https://stackoverflow.com/a/62566052

When your API changes that dramatically, you should use a new message definition on the client and server and deprecate the old RPC.

matrix87

> After a year or two of iterating you're back to JSON, checking if fields exist, and re-validating your data.

Every time this has happened to me, it's because of one-sided contract negotiation and dealing with teams where their incentives are not aligned

i.e. they can send whatever shit they want, and we have to interpret it and make it work