
Ruby “Thread Contention” Is Simply GVL Queuing

Alifatisk

I've started using Ruby for mathematical and scientific computing via IRuby. I don't need high-performance computing yet, so Ruby is still sufficient. Oh boy, what a joy it is to use.

Toutouxc

How I wish Ruby had been adopted as the glue-lingua franca instead of Python. I get zero joy from writing Python.

agos

I've probably not tried hard enough, but I always got zero joy from writing Ruby. Way too much magic.

alberth

> Way too much magic.

Are you referring to Ruby that has too much magic, or specifically Rails?

graypegg

Totally valid point, but I do think the magic is more a property of the clever (usually reflection-based) things people build with Ruby than of the language itself. The draw toward magic actually comes from the logically consistent object model Ruby is built around, which I find less magical and edge-case-filled than other script-y languages.

If you just use it as a glue language to call out to other things, prepare data, and iterate over results: it's pretty clear and concise.

(I do personally like the reflection-magic in a lot of Ruby apps, so I might be overlooking something that feels normal to me, but is some very weird behaviour to anyone else!)

IshKebab

I agree. And it's impossible to follow. No type hints, missing syntax, generated identifiers all over the place, etc. Awful, awful language. I'd rather write Perl.

dismalaf

Too much magic? Ruby is super straightforward and consistent.

Unless you're talking about Rails, which indeed uses a lot of metaprogramming under the hood and is quite different from programming in plain Ruby.

pjmlp

Python was in the right spot as the Tcl/Perl successor; Ruby only became widely known thanks to Rails and arrived too late for that role.

Already in the early 2000s, CERN was scripting its builds and its Fortran and C++ tooling with Python.

Their Grid Computing tutorials, predating what we now call the cloud, already used Python.

masklinn

Yeah, like many such things, Python's overnight success was the result of a decade of work. Python started involving scientists (and creating science-oriented SIGs) in the mid-90s. NumPy's original ancestor was Numeric, first released in 1995.

te_chris

Agree 100%.

I've made peace with Python and accepted that I need to use it, but I hate it. Death to for loops.

kstrauser

Really? That's one of the things I like about Python: how its for loops consistently consume an iterator, and that's it. Anything that looks like an iterator can be looped across or used in a comprehension without special casing.

amw-zero

It's the most "enjoyable" language to write, in my opinion. I totally agree. I wish I got to use it more at work; now it's all Go and Python.

taf2

If I'm reading this correctly, then for web applications, setting RUBY_THREAD_DEFAULT_QUANTUM_MS=10 or maybe even RUBY_THREAD_DEFAULT_QUANTUM_MS=1 could be better than the current default of 100, allowing for more throughput at the cost of potentially slower single-shot response times?

masklinn

IIRC it's complicated because it depends on the relative run times and how fast the IO ops are. By lowering the quantum you're increasing the switching overhead (because there are more handoffs), but it might allow the IO thread to complete much sooner and stop contending with the CPU thread.

Python's "new GIL" (it's some 15 years old...) uses a similar scheme with a much lower switch interval (5 ms by default), but it still suffers from this sort of contention, and things get worse as the number of CPU threads increases, because they can hand the GIL off to one another, bypassing the IO thread.

jewel

I think you have that backwards; it should decrease latency and decrease throughput.

I don't imagine the overhead will be too bad though, so it may decrease latency while keeping throughput essentially the same.

Of course you'll want to benchmark on your specific load. For example, at my day job it'd make no difference because we don't use threads; each Passenger worker has just one thread and handles one request at a time.
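A minimal sketch of the kind of benchmark being suggested here, assuming a Ruby build that honors the RUBY_THREAD_DEFAULT_QUANTUM_MS variable discussed above; the script name, thread count, and iteration counts are illustrative, not from the article:

  # quantum_probe.rb -- hypothetical micro-benchmark, not from the article.
  # Run it under different quantum settings and compare, e.g.:
  #   RUBY_THREAD_DEFAULT_QUANTUM_MS=100 ruby quantum_probe.rb
  #   RUBY_THREAD_DEFAULT_QUANTUM_MS=10  ruby quantum_probe.rb

  def now_ms
    Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
  end

  # One thread hogs the GVL with pure CPU work.
  cpu = Thread.new do
    30_000_000.times { |i| i * i }
  end

  # One thread does short sleeps (a stand-in for fast IO) and records how long
  # each wakeup really took: the sleep itself plus time spent queued for the GVL.
  latencies = []
  io = Thread.new do
    200.times do
      t0 = now_ms
      sleep 0.001
      latencies << (now_ms - t0)
    end
  end

  [cpu, io].each(&:join)

  latencies.sort!
  puts "p50: #{latencies[latencies.size / 2]}ms"
  puts "p99: #{latencies[(latencies.size * 0.99).floor]}ms"

With a CPU-bound thread able to hold the GVL for a full quantum, the IO thread's tail latency should sit near the quantum setting rather than near the 1 ms sleep, which is the 100 ms-multiple pattern described in the next comment.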

codesnik

I actually had a less-than-pleasant surprise when debugging a Sidekiq process with a higher-than-usual thread count and noticing 100 ms multiples in my traces. 100 ms before a switch indeed seems too high for an app with the potential to misbehave.

amw-zero

Loved this writeup, because queues are the most important and general concept in reasoning about performance. When you realize this, you start seeing them everywhere. Locks, async IO... everything is just interacting queues.

jeffbee

This is what thread contention means in any language and runtime. Threads "contend" for mutexes by waiting.

electroly

Most mutexes aren't fair; it's not strictly equivalent to queueing. That said, I agree with your larger point; I don't understand what the author finds revelatory about this.

jeffbee

Maybe not, but even spinlocks logically contend this way. For example, the Go and Abseil mutex libraries, which are similar, count cycles spent spinning toward contention statistics.

amw-zero

Yes, but I think it's a revelation to many people that things like this map to queueing. Literally _everything_ related to performance is queueing, but we use different words for different scenarios, like "contention."
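A toy Ruby sketch (not from the article) of that framing: each thread measures how long Mutex#synchronize blocks before it acquires the lock, making the "contention" visible as time spent waiting in line. The thread count and hold time are arbitrary:

  lock  = Mutex.new
  waits = Queue.new

  threads = 4.times.map do
    Thread.new do
      t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
      lock.synchronize do
        waits << Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0
        sleep 0.1   # hold the lock; everyone behind us waits in line
      end
    end
  end
  threads.each(&:join)

  printf("waited %.3fs for the lock\n", waits.pop) until waits.empty?

With a 0.1 s hold time, the reported waits should step up in roughly 0.1 s increments: each thread's "contention" is just the queue of lock holders ahead of it.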