How to Think about Parallel Programming: Not! [video] (2021)

Jtsummers

(2011) and was submitted back then: https://news.ycombinator.com/item?id=2105661

fifilura

Disclaimer: at work so I didn't watch the video.

For loops are the "goto"s of the parallel programming era.

Ditch them and the rest can be handled by the programming language abstraction.

Why? Because they (1) enforce an order of execution and (2) allow breaking out of the computation after a certain number of iterations.

bee_rider

I’ve always been surprised that we don’t have a really widely supported construct in programming that is like a for loop, but with no dependency allowed between iterations. It would be convenient for stuff like multi-core parallelism… and also for stuff like out of order execution!

Not sure how “break” would be interpreted in this context. Maybe it should make the program crash, or it could be equivalent to “continue” (in the programming model, all of the iterations would be happening in parallel anyway).

I vaguely feel like "for" would actually have been the best English word for this construct, if we stripped out the existing programming context. I mean, if somebody gives you instructions like:

For each postcard, sign your name and put it in an envelope

You don’t expect there to be any non-trivial dependencies between iterations, right? Although, we don’t often give each other complex programs in English, so maybe the opportunity for non-trivial dependencies just doesn’t really arise anyway…

In math, usually when you encounter “for,” it is being applied to a whole set of things without any loop dependency implied (for all x in X, x has some property). But maybe that’s just an artifact of there being less of a procedural bias in math…
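The construct described above does exist at the library level in many languages; a minimal Python sketch of the "postcards" instruction, using the standard-library thread pool (the function name is illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def sign_and_stuff(postcard):
    # Each iteration is independent: no shared state, no ordering constraint.
    return f"{postcard}: signed and enveloped"

postcards = ["card-1", "card-2", "card-3"]

# "For each postcard..." as a dependency-free loop: the pool may run
# iterations in any order, though map returns results in input order.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(sign_and_stuff, postcards))

print(results)
```

Note there is no "break" here at all: the iteration space is fixed up front, which is exactly what makes the parallel interpretation safe.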

vlovich123

We actually do have the abstractions, but the problem is that the vast majority of for loops don't benefit: you need enough work per iteration that the overhead of coordinating the threads is amortized. Additionally, you've got all sorts of secondary effects, like cache write contention, that will fight any win you try to extract from for-loop parallelism. What we've been learning for a long time as an industry is that you benefit most from task-level parallelism with minimal to no synchronization.

dkarl

Granted, this probably isn't the parallel application that the other poster was envisioning, but it can be extremely useful when a computation depends on a large number of I/O-bound tasks that may fail, like when you are servicing a request with a high fan-out to other services, and you need to respond in a fixed time with the best information you have.

For example, if you need to respond to a request in 100ms and it depends on 100 service calls, you can make 100 calls with an 80ms timeout; get 90 quick responses, including two transient errors, and immediately retry those errors; get eight more successful responses and two timeouts from the remaining calls, plus the two successful retries; and then send the response within the SLA using the 98 responses you received.
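A sketch of that best-effort fan-out with Python's concurrent.futures; the service behavior is simulated (every 10th call fails transiently on its first attempt), and the names and timings are illustrative:

```python
import time
import concurrent.futures as cf

def call_service(i, attempt=1):
    # Stand-in for a remote call: every 10th service fails
    # transiently on the first attempt only.
    if i % 10 == 0 and attempt == 1:
        raise RuntimeError("transient error")
    return f"result-{i}"

def gather_best_effort(n=100, budget_s=1.0):
    """Fan out n calls, retry transient failures once, and return
    whatever succeeded before the deadline."""
    deadline = time.monotonic() + budget_s
    results = {}
    with cf.ThreadPoolExecutor(max_workers=32) as pool:
        pending = {pool.submit(call_service, i): (i, 1) for i in range(n)}
        while pending and time.monotonic() < deadline:
            done, _ = cf.wait(pending, timeout=deadline - time.monotonic(),
                              return_when=cf.FIRST_COMPLETED)
            for fut in done:
                i, attempt = pending.pop(fut)
                try:
                    results[i] = fut.result()
                except RuntimeError:
                    if attempt == 1:  # one immediate retry
                        pending[pool.submit(call_service, i, 2)] = (i, 2)
    return results

print(len(gather_best_effort(n=100)))
```

The key design point is that the deadline, not the number of completed calls, decides when to respond: whatever is in `results` when the budget expires is what goes out the door.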

mannykannot

The tricky cases are the very many where there are dependencies between iterations, but not demanding the strict serialization that a simple loop enforces. We have constructs for that, but there's an irreducible complexity to using them correctly.

two_handfuls

They're not in the language proper, but "parallel for" is a common construct. I've seen it in C# and Rust, but I'm sure other languages have it too.

It may be a good idea to use a framework with explicitly stateless "tasks" and an orchestrator (parallel, distributed, or both). This is what Spark, Tensorflow, Beam and others do. Those will have a "parallel for" as well, but now, with a configuration change, you can use remote computers in addition to threads.
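The "stateless task plus swappable executor" shape can be sketched in plain Python; swapping `ProcessPoolExecutor` for a thread pool, or for a distributed scheduler from one of the frameworks above, is roughly the configuration change described:

```python
from concurrent.futures import ProcessPoolExecutor

def task(x):
    # A stateless task: the output depends only on the input,
    # so it can run on any worker (local thread, process, or remote node).
    return x * x

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(task, range(5))))  # [0, 1, 4, 9, 16]
```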

Weryj

Sounds like you're talking about real-time operating systems. I don't know if there are many/any programming languages that build those operational requirements into the syntax/abstraction.

epgui

> we don’t have a really widely supported construct in programming that is like a for loop, but with no dependency allowed between iterations

Uhhh... we don't? It seems to me like we do. This is a solved problem. Depending on what you're trying to do, there's map, reduce, comprehensions, etc.
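For instance, map/reduce in Python already expresses a dependency-free iteration: `map` declares "apply f to each element" with no cross-iteration dependency, and `reduce` with an associative operation combines the results in a way a parallel runtime could regroup:

```python
from functools import reduce

nums = range(1, 6)

# No iteration depends on another; a runtime is free to
# evaluate the elements in any order.
squares = list(map(lambda x: x * x, nums))

# An associative combine step: (a+b)+c == a+(b+c), so the
# reduction tree can be reshaped for parallelism.
total = reduce(lambda acc, x: acc + x, squares, 0)

print(squares, total)  # [1, 4, 9, 16, 25] 55
```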

dkarl

And for those who also don't want to be forced to sequence the computations, i.e., wanting to run them concurrently and potentially in parallel, each approach to concurrency supports its own version of this.

For example, choosing Scala on the JVM because that's what I know best, the language provides a rich set of maps, folds, etc., and the major libraries for different approaches to concurrency (futures, actors, effect systems) all provide ways to transform a collection of computations into a collection of concurrent operations.
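The Scala pattern described here (transforming a collection of computations into a collection of concurrent operations) has a direct analogue in Python's asyncio, to pick one concrete ecosystem; a minimal sketch:

```python
import asyncio

async def compute(x):
    await asyncio.sleep(0)  # stand-in for real asynchronous work
    return x + 1

async def main():
    # A collection of computations becomes a collection of
    # concurrent operations, then a collection of results.
    return await asyncio.gather(*(compute(i) for i in range(5)))

print(asyncio.run(main()))  # [1, 2, 3, 4, 5]
```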

Curious if the poster who said "we don't have a really widely supported construct" works in a language that lacks a rich concurrency ecosystem or if they want support baked into their language.
