
Context should go away for Go 2 (2017)

175 comments · January 21, 2025

captainmuon

This is about an explicit argument of type "Context". I'm not a Go user, and at first I thought it was about something else: an implicit context variable that allows you to pass stuff deep down the call stack, without intermediate functions knowing about it.

React has "Context", SwiftUI has "@Environment", Emacs LISP has dynamic scope (so I heard). C# has AsyncLocal, Node.JS AsyncLocalStorage.

This is one of those ideas that at first seems really wrong (isn't it just a global variable in disguise?) but is actually very useful and can result in cleaner code with fewer globals and fewer superfluous function arguments. Imagine passing a logger like this, or feature flags. Or imagine setting "debug = True" before a function, and having it apply to everything down the call stack (but not in other threads/async contexts).

Implicit context (properly integrated into the type system) is something I would consider in any new language. And it might also be a solution here (although I would say such a "clever" and unusual feature would be against the goals of Go).

kgeist

Passing the current user ID/tenant ID inside ctx has been super useful for us. We’re already using contexts for cancellation and graceful termination, so our application-layer functions already have them. Makes sense to just reuse them to store user and tenant IDs too (which we pull from access tokens in the transport layer).

We have DB sharding, so the DB layer needs to figure out which shard to choose. It does that by grabbing the user/tenant ID from the context and picking the right shard. Without contexts, this would be way harder—unless we wanted to break architecture rules, like exposing domain logic to DB details, and it would generally just clutter the code (passing tenant ID and shard IDs everywhere). Instead, we just use the "current request context" from the standard lib that can be passed around freely between modules, with various bits extracted from it as needed.

What are the alternatives, though? Syntactic sugar for retrieving variables from some sort of goroutine-local storage? Not good; we want things to be explicit. Force everyone to roll their own context-like interfaces, since a standard lib's implementation can't generalize well for all situations? That's exactly why contexts were introduced—because nobody wanted to deal with mismatched custom implementations from different libs. Split it into separate "data context" and "cancellation context"? Okay, now we're passing around two variables instead of one in every function call. DI to the rescue? You can hide userID/tenantID with clever dependency injection, and that's what we did before we introduced contexts to our codebase, but that resulted in allocations of individual dependency trees for each request (i.e. we embedded userID/tenantID inside request-specific service instances, to hide the current userID/tenantID, and other request details, from the domain layer to simplify domain logic), and it stressed the GC.

vbezhenar

An alternative is to add all dependencies explicitly to the function argument list or as object fields, instead of pulling them implicitly from the context without documentation or static typing. Including the logger.

kgeist

I already talked about it above.

Main problems with passing dependencies in function argument lists:

1) it pollutes the code and makes refactoring harder (a small change in one place must be propagated to all call sites in the dependency tree which recursively accept user ID/tenant ID and similar info)

2) it violates various architectural principles, for example, from the point of view of our business logic, there's no such thing as "tenant ID", it's an implementation detail to more efficiently store data, and if we just rely on function argument lists, then we'd have to litter actual business logic with various infrastructure-specific references to tenant IDs and the like so that the underlying DB layer could figure out what to do.

Sure, it can be solved with constructor-based dependency injection (i.e. request-specific service instances are generated for each request, and we store user ID/tenant ID & friends as object fields of such request-scoped instances), and that's what we had before switching to contexts, but it resulted in excessive allocations and unnecessary memory pressure for our high-load services. In complex enterprise code, those dependency trees can be quite large -- and we ended up allocating huge dependency trees for each request. With contexts, we now have a single application-scoped service dependency tree, and request-specific stuff just comes inside contexts.

Both problems can be solved by trying to group and reuse data cleverly, and eventually you'll get back to square one with an implementation which looks similar to ctx.Context but which is not reusable/composable.

>Including logger.

We don't store loggers in ctx, they aren't request-specific, so we just use constructor-based DI.

cle

> instead of using them implicitly from the context, without documentation and static typing

This is exactly what context is trying to avoid, and makes a tradeoff to that end. There's often intermediate business logic that shouldn't need to know anything about logging or metrics collection or the authn session. So we stuff things into an opaque object, whether it's a map, a dict, a magic DI container, "thread local storage", or whatever. It's a technique as old as programming.

There's nothing preventing you from providing well-typed and documented accessors for the things you put into a context. The context docs themselves recommend it and provide examples.

If you disagree that this is even a tradeoff worth making, then there's not really a discussion to be had about how to make it.

bvrmn

You can't add arguments to vendor library functions. It's super convenient to have context-aware logging work for any logging call.

danudey

Other responses cover this well, but: the idea of having to change 20 functions to accept and propagate a `user` field just so that my database layer can shard based on userid is gross/awful.

...but doing the same with a context object is also gross/awful.

dang

We added exactly this feature to Arc* and it has proven quite useful. Long writeup in this thread:

https://news.ycombinator.com/item?id=11240681 (March 2016)

* the Lisp that HN is written in

cesarb

> an implicit context variable that allows you to pass stuff deep down the call stack, without intermediate functions knowing about it. [...] but is actually very useful and can result in cleaner code with less globals or less superfluous function arguments. [...] and it applies to everything down the call stack (but not in other threads/async contexts).

In my experience, these "thread-local" implicit contexts are a pain, for several reasons. First of all, they make refactoring harder: things like moving part of the computation to a thread pool, making part of the computation lazy, calling something which ends up modifying the implicit context behind your back without you knowing, etc. All of that means you have to manually save and restore the implicit context (inheritance doesn't help when the thread doing the work is not under your control). And for that, you have to know which implicit contexts exist (and how to save and restore them), which leads to my second point: they make the code harder to understand and debug. You have to know and understand each and every implicit context which might affect code you're calling (or code called by code you're calling, and so on). As proponents of another programming language would say, explicit is better than implicit.

mst

They're basically dynamic scoping and it's both a very useful and powerful and very dangerous feature ... scheme's dynamic-wind model makes it more obvious when the particular form of magic is in use but isn't otherwise a lot different.

I would like to think that somebody better at type systems than me could provide a way to encode it into one that doesn't require typing out the dynamic names and types on every single function but can instead infer them based on what other functions are being called therein, but even assuming you had that I'm not sure how much of the (very real) issues you describe it would ameliorate.

I think for golang the answer is probably "no, that sort of powerful but dangerous feature is not what we're going for here" ... and yet when used sufficiently sparingly in other languages, I've found it incredibly helpful.

Trade-offs all the way down as ever.

wbl

Basically you'd be asking for inferring a record type largely transparently. That's going to quickly explode to the most naive form because it's very hard to tell what could be called, especially in Go.

flohofwoe

I haven't seen it mentioned yet, but Odin also has an implicit `context` variable:

https://odin-lang.org/docs/overview/#implicit-context-system

TeMPOraL

> React has "Context", SwiftUI has "@Environment", Emacs LISP has dynamic scope (so I heard). C# has AsyncLocal, Node.JS AsyncLocalStorage.

Emacs Lisp retains dynamic scope, but it's no longer a default for some time now, in line with other Lisps that remain in use. Dynamic scope is one of the greatest features in the Lisp language family, and it's sad to see it missing almost everywhere else - where, as you noted, it's being reinvented, but poorly, because it's not a first-class language feature.

On that note, the most common case of dynamic scope that almost everyone is familiar with, are environment variables. That's what they're for. Since most devs these days are not familiar with the idea of dynamic scope, this leads to a lot of peculiar practices and footguns the industry has around environment variables, that all stem from misunderstanding what they are for.

> This is one of those ideas that at first seem really wrong (isn't it just a global variable in disguise?)

It's not. It's about scoping a value to the call stack. Correctly used, rebinding a value to a dynamic variable should only be visible to the block doing the rebinding, and everything below it on the call stack at runtime.

> Implicit context (properly integrated into the type system) is something I would consider in any new language.

That's the problem I believe is currently unsolved, and possibly unsolvable in the overall programming paradigm we work under. One of the main practical benefits of dynamic scope is that place X can set up some value for place Z down on the call stack, while keeping everything in between X and Z oblivious of this fact. Now, this is trivial in dynamically typed language, but it goes against the principles behind statically-typed languages, which all hate implicit things.

(FWIW, I love types, but I also hate having to be explicit about irrelevant things. Since whether something is relevant or not isn't just a property of code, but also a property of a specific programmer at specific time and place, we're in a bit of a pickle. A shorter name for "stuff that's relevant or not depending on what you're doing at the moment" is cross-cutting concerns, and we still suck at managing them.)

masklinn

> Emacs Lisp retains dynamic scope, but it's no longer a default for some time now

https://www.gnu.org/software/emacs/manual/html_node/elisp/Va...

> By default, the local bindings that Emacs creates are dynamic bindings. Such a binding has dynamic scope, meaning that any part of the program can potentially access the variable binding. It also has dynamic extent, meaning that the binding lasts only while the binding construct (such as the body of a let form) is being executed.

It’s also not really germane to the GP’s comment, as they’re just talking about dynamic scoping being available, which it will almost certainly always be (because it’s useful).

TeMPOraL

Sorry, you're right. It's not a cultural default anymore. I.e. Emacs Lisp got proper lexical scope some time ago, and since then, you're supposed to start every new .elisp file with:

  ;; -*- mode: emacs-lisp; lexical-binding: t; -*-
i.e. explicitly switching the interpreter/compiler to work in lexical binding mode.

siknad

> against the principles behind statically-typed languages, which all hate implicit things

But many statically typed languages allow throwing exceptions of any type. Contexts can be similar: "try catch" becomes "with value", "throw" becomes "get".

TeMPOraL

Yes, but then those languages usually implement only unchecked exceptions, since propagating error types up the call tree is seen as annoying. And then, because there are good reasons to want typed error values (instead of just "any"), there is now pressure to use result types (aka "expected", "maybe") instead - turning your return type Foo into Result<Foo, ErrorType>.

And all that does is make you spell out the entire exception-handling mechanism explicitly in your code - not just propagating the types up the call tree, but also making every function explicitly wrap, unwrap, and branch on Result types. The latter is so annoying that people invent new syntax to hide it - like tacking ? onto the end of the function call, or whatever.

nagaiaida

raku's take on gradual typing may be to your taste; i likewise prefer to leave irrelevant types out and use maximally-expressive types where it makes sense¹. i feel this is helped by the insistence on sigils because you then know the rough shape of things (and thus a minimal interface they implement: $scalar, @positional, %associative, &callable) even when you lack their specific types. in the same vein, dynamically scoped variables are indicated with the asterisk as a twigil (second level sigil).

  @foo
is a list (well, it does Positional anyway), while

  @*foo
is a different variable that is additionally dynamically scoped.

it's idiomatic to see

  $*db
as a database handle to save passing it around explicitly, env vars are in

  %*ENV
things like that. it's nice to have the additional explicit reminder whenever you're dealing with a dynamic variable in a way the language checks for you and yells at you for forgetting.

i would prefer to kick more of the complex things i do with types back to compile time, but a lot of static checks are there. more to the point, raku's type system is quite expressive at runtime (that's what you get when you copy common lisp's homework, after all) and helpful to move orthogonal concerns out into discrete manageable things that feel like types to use even if what they're doing is just a runtime branch that lives in the function signature. doing stuff via subset types or roles or coercion types means whatever you do plays nicely with polymorphic dispatch, method resolution order, pattern matching, what have you.

in fact, i just wrote a little entirely type level... thing? to clean up the body of an http handler that lifts everything into a role mix-in pipeline that runs from the database straight on through to live reloading of client-side elements. processing sensor readings for textual display, generating html, customizing where and when the client fetches the next live update, it's all just the same pipeline applying roles to the raw values from the db with the same infix operator (which just wraps a builtin non-associative operator to be left associative to free myself from all the parentheses).

not getting bogged down in managing types all the time frees you up to do things like this when it's most impactful, or at least that's what i tell myself whenever i step on a rake i should have remembered was there.

¹ or times where raku bubbles types up to the end-user, like the autogenerated help messages generated from the type signature of MAIN. i often write "useless" type declarations such as subset Domain-or-IP; which match anything² so that the help message says --host[=Domain-or-IP] instead of --host[=Str] or whatever

² well, except junctions, which i consider the current implementation of to be somewhat of a misstep since they're not fundamentally also a list plus a context. it's a whole thing. in any case, this acts at the level of the type hierarchy that you want anyway.

crowcountry

Scala has implicit contextual parameters: https://docs.scala-lang.org/tour/implicit-parameters.html.

agumonkey

I've always been curious about how this feature ends up in day to day operations and long term projects. You're happy with it ?

kloop

As a veteran of a large scala project (which was re-written in go, so I'm not unbiased), no. I was generally not happy.

This was scala 2, so implicit resolution lookup was a big chunk of the problem. There's nothing at the call site that tells you what is happening. But even when it wasn't hidden in a companion object somewhere, it was still difficult because every import change had to be scrutinized as it could cause large changes in behavior (this caused a non-zero number of production issues).

They work well for anything you would use environment variables for, but a chunk of the ecosystem likes to use them for handlers (the signature being a Functor generally), which was painful.

xmodem

Not OP, but I was briefly seconded to a team that used Scala at a big tech co, and I was often frustrated by this feature specifically. They had a lot of code that consumed implicit parameters that I was trying to call from contexts where they were not available.

Then again I guess it's better than a production outage because the thread-local you didn't know was a requirement wasn't available.

segfaltnh

Scala has everything, and therefore nothing.

lmm

> Implicit context (properly integrated into the type system) is something I would consider in any new language.

Those who forget monads are doomed to reinvent dozens of limited single-purpose variants of them as language features.

mananaysiempre

Algebraic effects and implicit arguments with explicit records are perfectly cromulent language features. GHC Haskell already has implicit arguments, and IIRC Scala uses them instead of a typeclass/trait system. The situation with extensible records in Haskell is more troublesome, but it’s more because of the endless bikeshedding of precisely how powerful they should be and because you can get almost all the way there with the existing type-system features except the ergonomics invariably suck.

It’s reasonable, I think, to want the dynamic scope but not the control-flow capabilities of monads, and in a language with mutability that might even be a better choice. (Then again, maybe not—SwiftUI is founded on Swift’s result builders, and those seem pretty much like monads by another name to me.) And I don’t think anybody likes writing the boilerplate you need to layer a dozen MonadReaders or -States on each other and then compose meaningful MonadMyLibraries out of them.

Finally, there’s the question of strong typing. You do want the whole thing to be strongly typed, but you don’t want the caller to write the entire dependency tree of the callee, or perhaps even to know it. Yet the caller may want to declare a type for itself. Allowing type signatures to be partly specified and partly inferred is not a common feature, and in general development seems to be backing away from large-scale type inference of this sort due to issues with compile errors. Not breaking ABI when the dependencies change (perhaps through default values of some sort) is a more difficult problem still.

(Note the last part can be repeated word for word for checked exceptions/typed errors. Those are also, as far as I’m aware, largely unsolved—and no, Rust doesn’t do much here except make the problem more apparent.)

segfaltnh

Thread local storage means all async tasks (goroutines) must run in the same thread. This isn't how tasks are actually scheduled. A request can fan out, or contention can move parts of the computation between threads, which is why context exists.

Furthermore in Go threads are spun up at process start, not at request time, so thread-local has a leak risk or cleanup cost. Contexts are all releasable after their processing ends.

I've grown to be a huge fan of Go for servers and context is one reason. That said, I agree with a lot of the critique and would love to see an in-language solution, but thread-local ain't it.

cyberax

A more correct term is "goroutine-local" storage, which Go _already_ has. It's used for pprof labels, they are even inherited when a new Goroutine is started.

kalekold

> If you use ctx.Value in my (non-existent) company, you’re fired

This is such a bad take.

ctx.Value is incredibly useful for passing around context of api calls. We use it a lot, especially for logging such context values as locales, ids, client info, etc. We then use these context values when calling other services as headers so they gain the context around the original call too. Loggers in all services pluck out values from the context automatically when a log entry is created. It's a fantastic system and serves us well. e.g.

    log.WithContext(ctx).Errorf("....", err)

sluongng

Let me try to take the other side:

`ctx.Value` is an `any -> any` kv store that comes with no documentation or type checking for which keys and values should be available. It's quick and dirty, but in a large code base it can be quite tricky to check whether you are passing too many values down the chain, or too few, and to handle the failure cases.

What if you just use a custom struct with all the fields you may need to be defined inside? Then at least all the field types are properly defined and documented. You can also use multiple custom "context" structs in different call paths, or even compose them if there are overlapping fields.
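A minimal sketch of that struct-based alternative, with illustrative field names; every value is typed and visible at the call site, at the cost of threading the struct through explicitly:

```go
package main

import "fmt"

// RequestInfo carries exactly the request-scoped values this code path needs:
// every field is typed and documented, unlike an opaque ctx.Value lookup.
type RequestInfo struct {
	UserID   string
	TenantID string
	Locale   string
}

func handle(info RequestInfo) string {
	// Downstream code accesses fields directly; missing data is a zero value,
	// caught by review or validation rather than a runtime type assertion.
	return fmt.Sprintf("user=%s tenant=%s locale=%s", info.UserID, info.TenantID, info.Locale)
}

func main() {
	fmt.Println(handle(RequestInfo{UserID: "u1", TenantID: "t1", Locale: "en"}))
}
```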

Thaxll

Because you should wrap that in a type-safe function. You should not call ctx.Value() directly but use your own accessor function; the context is just a transport mechanism.

kflgkans

If it is just a transport mechanism, why use context at all and not a typed struct?

bluetech

> `ctx.Value` is an `any -> any` kv store that does not come with any documentation, type checking for which key and value should be available

The docs https://pkg.go.dev/context#Context suggest a way to make it type-safe (use an unexported key type and provide getter/setter). Seems fine to me.

> What if you just use a custom struct with all the fields you may need to be defined inside?

Can't seamlessly cross module boundaries.

smarkov

> `ctx.Value` is an `any -> any` kv store that does not come with any documentation, type checking for which key and value should be available.

On a similar note, this is also why I highly dislike struct tags. They're string magic that should be used sparingly, yet we've integrated them into data parsing, validation, type definitions and who knows what else just to avoid a bit of verbosity.

homebrewer

Most popular languages support annotations of one kind or another, which let you do all that in a type-safe way. It's Go that decided to be different for difference's sake, and produced a complete mess.

throw_m239339

> `ctx.Value` is an `any -> any`

It did not have to be this way; this is a shortcoming of Go itself. Generic interfaces make things a bit better, but Go's designers chose that dumb typing in the first place. The std lib itself is full of interface{} use.

context itself is an afterthought, because people were building thread-unsafe, leaky code on top of http requests with no good way to easily scope variables that would scale concurrently.

I remember the web session lib back then, for instance - a hack.

ctx.Value is made for per-goroutine-scoped data; that's the whole point.

If it is an antipattern, well, it is an antipattern designed by the Go designers themselves.

b1-88er

Maybe he doesn't have a company because he is too dogmatic about things that don't really matter.

PUSH_AX

100%

People who have takes like this have likely never zoomed out enough to understand how their software delivery ultimately affects the business. And if you haven't stopped to think about that you might have a bad time when it's your business.

pm90

Someone has to question the status quo. If we just did the same things there would be a lot less progress. The author took the time to articulate their argument, and publish it. I appreciate their effort even if I may not agree with their argument.

daviddever23box

Bingo. Everything that can be wrongly used or abused started out its existence within sane constraints and use patterns.

frankie_t

The author gave pretty good reasoning for why it's a bad idea, in the same section. However, for demonstration purposes I think they should have included their vision of how the request-scoped data should be passed.

As I understand they propose to pass the data explicitly, like a struct with fields for all possible request-scoped data.

I personally don't like context for value passing either, as it is easy to abuse in a way that it becomes part of the API: the callee is expecting something from the caller but there is no static check that makes sure it happens. Something like passing an argument in a dictionary instead of using parameters.

However, for "optional" data whose presence is not required for the behavior of the call, it should be fine. That sort of discipline has to be enforced on the human level, unfortunately.

rubenv

> As I understand they propose to pass the data explicitly, like a struct with fields for all possible request-scoped data.

So basically context.Context, except it can't propagate through third party libraries?

frankie_t

If you use a type like `map[string]any` then yes, it's going to be the same as Context. However, you can make a struct with fields of exactly the types you want.

It won't propagate to the third-party libraries, yes. But then again, why don't they just provide an explicit way of passing values instead of hiding them in the context?

elAhmo

We effectively use this approach in most of our go services. Other than logging purposes, we sometimes use it to pass stuff that is not critical but highly useful to have, like some request and response bodies from HTTP calls, tenant information and similar info.

bheadmaster

Contexts in Go are generally used for convenience in request cancellation, but they're not required, and they're not the only way to do it. Under the hood, a context is just a channel that's closed on cancellation. The way it was done before contexts was pretty much the same:

    func CancellableOp(done chan error /* , args... */) {
        for {
            // ...

            // cancellable code:
            select {
            case <-something:
                // ...
            case err := <-done:
                log.Println(err) // log error or whatever
                return
            }
        }
    }
Some compare context "virus" to async virus in languages that bolt-on async runtime on top of sync syntax - but the main difference is you can compose context-aware code with context-oblivious code (by passing context.Background()), and vice versa with no problems. E.g. here's a context-aware wrapper for the standard `io.Reader` that is completely compatible with `io.Reader`:

    type ioContextReader struct {
        io.Reader
        ctx context.Context
    }

    func (rc ioContextReader) Read(p []byte) (n int, err error) {
        done := make(chan struct{})
        go func() {
            n, err = rc.Reader.Read(p)
            close(done)
        }()

        select {
        case <-rc.ctx.Done():
            return 0, rc.ctx.Err()
        case <-done:
            return n, err
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()

        rc := ioContextReader{Reader: os.Stdin, ctx: ctx}

        // we can use rc in io.Copy as it is an io.Reader
        _, err := io.Copy(os.Stdout, rc)
        if err != nil {
            log.Println(err)
        }
    }
For io.ReadCloser, we could call `Close()` method when context exits, or even better, with `context.AfterFunc(ctx, rc.Close)`.

Contexts definitely have flaws - verbosity being the one I hate the most - but having them behave as ordinary values, just like errors, makes context-aware code more understandable and flexible.

And just like errors, having cancellation done automatically makes code more prone to errors. When you don't put "on-cancel" code, your code gets cancelled but doesn't clean up after itself. When you don't select on `ctx.Done()` your code doesn't get cancelled at all, making the bug more obvious.

kbolino

You are half right. A context also carries a deadline. This is important for those APIs which don't allow asynchronous cancellation but which do support timeouts as long as they are set up in advance. Indeed, your ContextReader is not safe to use in general, as io.ReadCloser does not specify the effect of concurrent calls to Close during Read. Not all implementations allow it, and even when they do tolerate it, they don't always guarantee that it interrupts Read.

bryancoxwell

This works, but goes against convention in that (from the context package docs) you shouldn’t “store Contexts inside a struct type; instead, pass a Context explicitly to each function that needs it.”

dfawcus

It does seem an unnecessarily limiting convention.

What will go wrong if one stores a Context in a struct?

I've done so for a specific use case, and did not notice any issues.

lawrjone

This guidance is actually super important, as contexts are expected to be modified in a code flow and apply to all functions that are downstream of your current call stack.

If you store contexts on your structs it’s very likely you won’t thread them correctly, leading to errors like database code not properly handling transactions.

Actually super fragile and you should avoid doing this as much as is possible. It’s never a good idea!

eadmund

> What will go wrong if one stores a Context in a struct?

Contexts are about the dynamic contour, i.e. the dynamic call stack. Storing the current context in a struct and then referring to it in some other dynamic … context … is going to lead to all sorts of pain: timeouts or deadlines which have already expired and/or values which are no longer pertinent.

While there are some limited circumstances in which it may be appropriate, in general it is a very strong code smell. Any code which passes a context should receive a context. And any code which may pass a context in the future should receive one now, to preserve API compatibility. So any exported function really should have a context as its first argument for forwards-compatibility.

bheadmaster

True. But this code is only proof-of-concept of how non-context-aware functions can be wrapped in a context. Such usage of context is not standard.

the_gipsy

Consider this:

    ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
    reader := ioContextReader{Reader: r, ctx: ctx}
    ...
    ctx, cancel2 := context.WithTimeout(ctx, 1*time.Second)
    ctx = context.WithValue(ctx, "hello", "world")
    ...
    func(ctx context.Context) {
        reader.Read(p) // does not time out after one second, does not see hello/world
        ...
    }(ctx)

bheadmaster

There are two solutions, depending on your real use case:

1) You're calling Read() directly and don't need to use functions that strictly accept io.Reader - then just implement ReadContext:

    func (rc ioContextReader) ReadContext(ctx context.Context, p []byte) (n int, err error) {
        done := make(chan struct{})
        go func() {
            n, err = rc.Reader.Read(p)
            close(done)
        }()

        select {
        case <-ctx.Done():
            return 0, ctx.Err()
        case <-done:
            return n, err
        }
    }
2) Otherwise, just wrap the ioContextReader with another ioContextReader:

    reader = ioContextReader(ctx, r)

the_gipsy

Changing the interface (option 1) is obviously not relevant.

Re-wrapping works only for the toy example. In the real world, the reader isn't some local variable, but there could be many, across different structs, behind private fields.

To circle back, and not to focus too much on the io.Reader example: the virality of ctx is real, and making wrapper structs is not a good solution. Updating stale references may not be possible, and would quickly become overwhelming. Not to forget the performance overhead.

Personally I think it's okay, go is fine as a "webservices" language. The go gospel is, You can have your cake and eat it too, but it's almost never true unless you twist the meaning of "cake" and "eat".

rad_gruchalski

Of course not - you're not handling the context at all in the called function. What's there to consider, reader.Read() has no idea about your timeout and value store intent. How would it, telepathy?

kiitos

You're spawning a goroutine per Read call? This is pretty bonkers inefficient, to start, and a super weird approach in any case...

bheadmaster

Yes, but this is just proof of concept. For any given case, you can optimize your approach to your needs. E.g. single goroutine ReadCloser:

    type ioContextReadCloser struct {
        io.ReadCloser
        ctx context.Context

        ch chan *readReq
    }

    type readReq struct {
        p   []byte
        n   *int
        err *error
        m   sync.Mutex
    }

    func NewIoContextReadCloser(ctx context.Context, rc io.ReadCloser) *ioContextReadCloser {
        rcc := &ioContextReadCloser{
            ReadCloser: rc,
            ctx:        ctx,

            ch: make(chan *readReq),
        }
        go rcc.readLoop()
        return rcc
    }

    func (rcc *ioContextReadCloser) readLoop() {
        for {
            select {
            case <-rcc.ctx.Done():
                return
            case req := <-rcc.ch:
                *req.n, *req.err = rcc.ReadCloser.Read(req.p)
                if *req.err != nil {
                    req.m.Unlock()
                    return
                }
                req.m.Unlock()
            }
        }
    }

    func (rcc *ioContextReadCloser) Read(p []byte) (n int, err error) {
        req := &readReq{p: p, n: &n, err: &err}
        req.m.Lock() // use plain mutex as signalling for efficiency
        select {
        case <-rcc.ctx.Done():
            return 0, rcc.ctx.Err()
        case rcc.ch <- req:
        }
        req.m.Lock() // wait for readLoop to unlock
        return n, err
    }
Again, this is not to say this is the right way, only that it is possible and does not require any of the shenanigans that e.g. Python needs when mixing sync & async, or even different async libraries.

kiitos

I think you're missing the forest for the trees, here.

The io.Reader/Writer interfaces, and their implementations, are meant to provide a streaming model for reading and writing bytes, which is as efficient as reasonably possible, within the constraints of the core language.

If your goal is to make an io.Reader that respects a context.Context cancelation, then you can just do

    type ContextReader struct {
        ctx context.Context
        r   io.Reader
    }

    func NewContextReader(ctx context.Context, r io.Reader) *ContextReader {
        return &ContextReader{
            ctx: ctx,
            r:   r,
        }
    }
    
    func (cr *ContextReader) Read(p []byte) (int, error) {
        if err := cr.ctx.Err(); err != nil {
            return 0, err
        }
        return cr.r.Read(p)
    }
No goroutines or mutexes or whatever else required.

Extending to a ReadCloser is a simple exercise left to the, er, reader.

orf

A mutex in a hot Read (or any IO) path isn’t efficient.

rednafi

This article is from 2017!

As others have already mentioned, there won't be a Go 2. Besides, I really don't want another verbose method for cancellation; error handling is already bad enough.

incognito124

I thought go 2 was considered harmful

TeMPOraL

Yes, that's why you should instead use "COMEFROM", or its more general form, "LET'S HAVE A WALK".

rednafi

Oh, don't even start about Go's knack for being pithy to a fault.

robertlagrant

I came here to say this.

the_gipsy

> This probably doesn’t happen often, but it’s prone to name collisions.

It's funny, it really was just using strings as keys until quite recently, and obviously there were collisions and there was no way to "protect" a key/value, etc.

Now the convention is to use a key with a private type, so no more collisions. The value you get back is still untyped and needs a type assertion, though. Also, many older libraries still use strings.

grose

The blog post from 2014 introducing context uses a private key type, so there's really no excuse: https://go.dev/blog/context#package-userip

mukunda_johnson

> It’s very similar to thread-local storage. We know how bad of an idea thread-local storage is. Non-flexible, complicates usage, composition, testing.

I kind of do wish we had goroutine local storage though :) Passing down the context of the request everywhere is ugly.

kflgkans

I like explicit over implicit. I will take passing down context (in the sense of the concept, not the specific Go implementation) explicitly everywhere over implicit ("put it somewhere and I'll trust I can [probably, hopefully] get it back later") any day of the week.

I've seen plenty of issues in Java codebases where there was an assumption some item was in the Thread Local storage (e.g. to add some context to a log statement or metric) and it just wasn't there (mostly because code switched to a different thread, sometimes due to a "refactor" where stuff was renamed in one place but not in another).

pm90

Most recently I've been bitten by this with Datadog. The Python version does some monkeypatching to inject trace info. The Go version requires you to inject the trace info explicitly. While the latter takes more setup, it was much easier to understand what was going on and to debug when we ran into issues.

kflgkans

Sounds very familiar. I was a Java developer for a long time, and in that ecosystem adding a library to your project can be enough for code to be activated and run. There are plenty of libraries where the idea is: just include it, magic stuff will happen, and everything works! That is, until it doesn't work. And then you have to try and debug all this magic stuff of how Java automatically loads classes, how these classes are created and run, and what they do. Didn't happen very often, but when it happened usually a full week was wasted with this.

I really prefer spending a bit more time to set it up myself (and learn something about what I'm using in the process) and knowing how it works, than all the implicit magic.

xlii

This is why I avoid Python. I started doing Go after looking at a few solutions written in Python and finding I couldn't use them.

Magic values inside objects, nested arbitrarily deep, changing dynamically at runtime. After working for some time with functional languages and languages with immutable structures, I'm wary of such features today.

Context is nice because it's explicit. Even the function signature spills the detail. `GetXFromName(context.Context, string)` already says that this call will do some IO/remote call and might never return or be subject to cancellation.

arccy

now your stuff breaks when you pass messages between channels

rednafi

Goroutines start with a tiny stack, 2KB in current Go. Having goroutine-local storage would probably open a can of worms there.

PaulKeeble

Contexts spread just like exceptions do: the moment you introduce one, it flies up and down all the functions to get where it needs to be. I can't help but think that goroutine-local storage and operations, like Java has for threads, would be a cleaner solution to the problem.

nickcw

Contexts implement the idea of cancellation along with goroutine-local storage, and at that they work very well.

What if for the hypothetical Go 2 we add an implicit context for each goroutine. You'd probably need to call a builtin, say `getctx()` to get it.

The context would be inherited by all go routines automatically. If you wanted to change the context then you'd use another builtin `setctx()` say.

This would have the usefulness of the current context without having to pass it down the call chain everywhere.

The cognitive load is two builtins, getctx() and setctx(). It would probably be quite easy to implement too - just stuff a context.Context in the G.

alkonaut

Was this solved? Is this context only a cancellation flag or does it do something more? The obvious solution for a cancellation trigger would be to have cancellation as an optional second argument. That's how it's solved in e.g. C#. Failing to pass the argument just makes it CancellationToken.None, which is simply never cancelled. So I/O without cancellation is simply foo.ReadAsync(x) and with cancellation it's foo.ReadAsync(x, ct).

whstl

It's not just for cancellation and timeouts; it is also used for passing down metadata and for cross-cutting concerns like structured loggers.

the_duke

Needs a (2017)!

pansa2

Yes, I was about to comment that “there won’t be a Go 2”, but I guess that wasn’t settled when the article was written.

riffraff

as someone who's not in the community: why not?

jerf

The major features that may have required a 2.0 were implemented in a backwards-compatible way, removing the utility of a Go 2.0.

Go 2.0 was basically a blank check for the future that said "We may need to break backwards compatibility in a big way". It turns out the Go team does not see the need to cash that check and there is no anticipated upcoming feature in the next several years that would require it.

The last one that I was sort of wondering about was the standard library, but the introduction of math/rand/v2 has made it clear the devs are comfortable revving standard library packages without a Go 2. There are a number of standard library packages that I think could stand to get a v2; there aren't any that are so broken that it's worth a v2 to hard-remove them. (Except arguably syscall [1], which it turns out doesn't belong in the standard library because it can't maintain the standard library's backwards compatibility and should have been in the extended standard library from the beginning, but that's been the way it is now for a long time and also doesn't rate a v2.)

(And again let me underline I'm not saying all the standard library is perfect. There is some brokenness here and there, for various definitions of "brokenness". I'm just saying it's not so broken that it's worth a v2 hard break at the language level and hard elimination of the libraries such that old code is forcibly broken and forced to update to continue on.)

[1]: https://pkg.go.dev/syscall

orian

To not repeat other's (Python) mistakes ;-)

miffy900

> First things first, let’s establish some ground. Go is a good language for writing servers, but Go is not a language for writing servers. Go is a general purpose programming language, just like C, C++, Java or Python

Really? Even years later in 2025, this never ended up being true. Unless your definition of 'general purpose' specifically excludes anything UI-related, like on desktop, web or mobile, or AI-related.

I know it's written in 2017, but reading it now in 2025 and seeing the author comparing it to Python of all languages in the context of its supposed 'general purpose'-ness is just laughable. Even Flutter doesn't support Go. Granted, that seems like a very deliberate decision to justify Dart's existence.

j16sdiz

It is not.

Link to previous discussion: https://news.ycombinator.com/item?id=14958989

> https://golang.org/doc/faq#What_is_the_purpose_of_the_projec...: "By its design, Go proposes an approach for the construction of system software on multicore machines."

> That page points to https://talks.golang.org/2012/splash.article for "A much more expansive answer to this question". That article states:

> "Go is a programming language designed by Google to help solve Google's problems [...] More than most general-purpose programming languages, Go was designed to address a set of software engineering issues that we had been exposed to in the construction of large server software."

pjmlp

In an alternative timeline, had Rust 1.0 been available when Docker pivoted away from Python to Go, and Kubernetes from Java to Go, the folks pushing for those rewrites would most likely have been taken by RIIR instead, nowadays spreading across the Python and JavaScript ecosystems, including rewriting tools originally written in Go.

cyberax

Nope. Rust is not a good tool for servers. It's downright terrible, in fact. Goroutines help _a_ _lot_ with concurrency.

pjmlp

Go tell that to Amazon, Facebook and Microsoft.

dlisboa

> Unless your definition of 'general purpose' specifically excludes anything UI-related, like on desktop, web or mobile, or AI-related.

By that definition no language is general purpose. There is no language today that excels at GUI (desktop/mobile), web development, AI, cloud infrastructure, and all the other stuff like systems and embedded - all at the same time.

For instance I have never seen or heard of a successful Python desktop app (or mobile for that matter).

pkilgore

I think the whole argument here is silly, but I do know kitty (terminal) and Calibre (ebook manager) are two rather popular cross platform python desktop apps.

mrkeen

> If the Go language ever comes to the point where I’d have to write this

  n, err := r.Read(context.TODO(), p)
> put a bullet in my head, please.

Manually passing around a context everywhere sounds about as palatable as manually checking every return for error.

the_gipsy

Exactly, the snippet needs at least three more lines of inane error-checking boilerplate and variable juggling.

disintegrator

Consider what happens in JavaScript when you declare a function as async. Now everything calling it is infected. Passing around runtime constructs like context in Go (AbortSignal in JS) or an allocator in Zig gives exactly the right level of control back to the caller, and I love it. You can bail out of context propagation at any level of your program if that's your desire.