
Too Many Open Files

83 comments · June 6, 2025

xorvoid

The real fun thing is when the same application is using “select()” and then somewhere else you open like 5000 files. Then you start getting weird crashes and eventually trace it down to the select bitset having a hardcoded max of 4096 entries and no bounds checking! Fun fun fun.
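
A minimal C sketch of that failure mode (the struct layout is purely illustrative): FD_SET() just sets a bit in a fixed-size array, so an fd at or above FD_SETSIZE silently scribbles over whatever happens to sit next to the fd_set.

    #include <stdio.h>
    #include <sys/select.h>

    /* Illustrative layout: something happens to live right after the fd_set. */
    struct {
        fd_set readfds;          /* 1024 bits (128 bytes) on glibc */
        unsigned long neighbour; /* stand-in for whatever comes next in memory */
    } s;

    int main(void) {
        FD_ZERO(&s.readfds);
        s.neighbour = 0;

        int fd = FD_SETSIZE + 3;   /* 1027: a perfectly legal descriptor number */
        FD_SET(fd, &s.readfds);    /* no bounds check: the bit lands in neighbour */

        /* Typically prints 0x8 on an unfortified -O0 build; a fortified build
         * aborts with "buffer overflow detected" instead. */
        printf("neighbour = %#lx\n", s.neighbour);
        return 0;
    }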

moyix

I made a CTF challenge based on that lovely feature of select() :D You could use the out-of-bounds bitset memory corruption to flip bits in an RSA public key in a way that made it factorable, generate the corresponding private key, and use that to authenticate.

https://threadreaderapp.com/thread/1723398619313603068.html

malux85

Oh that’s clever!

reisse

Oh the real fun thing is when the select() is not even in your code! I remember having to integrate a closed-source third-party library vendored by an Australian fin(tech?) company which used select() internally, into a bigger application which really liked to open a lot of file descriptors. Their devs refused to rewrite it to use something more contemporary (it was 2019 iirc!), so we had to improvise.

In the end we came up with a hack: open 4k file descriptors on /dev/null at start, then open the real files and sockets necessary for our app, then close those /dev/null descriptors and initialize the library.
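
A rough C sketch of that workaround (helper names made up, and it assumes the process fd limit has already been raised well above 4096): burn the low descriptor numbers on /dev/null so the application's own files and sockets land above them, then hand the low numbers back so the select()-based library only ever gets descriptors it can represent.

    #include <fcntl.h>
    #include <unistd.h>

    #define RESERVED 4096

    static int placeholders[RESERVED];

    /* Grab the low descriptor numbers so later opens land above this range. */
    static void reserve_low_fds(void) {
        for (int i = 0; i < RESERVED; i++)
            placeholders[i] = open("/dev/null", O_RDONLY);
    }

    /* Hand the low descriptor numbers back before the vendor library starts. */
    static void release_low_fds(void) {
        for (int i = 0; i < RESERVED; i++)
            if (placeholders[i] >= 0)
                close(placeholders[i]);
    }

    /*
     * Intended order of operations:
     *   reserve_low_fds();                      // burn fds up to ~4096
     *   ...open the app's own files/sockets...  // these get high fd numbers
     *   release_low_fds();
     *   ...initialize the vendor library...     // its fds stay select()-safe
     */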

ape4

Yeah, the man page says:

    WARNING: select() can monitor only file descriptors numbers that are
    less than FD_SETSIZE (1024)—an unreasonably low limit for many modern
    applications—and this limitation will not change. All modern
    applications should instead use poll(2) or epoll(7), which do not
    suffer this limitation.

danadam

> trace it down to the select bitset having a hardcoded max of 4096

Did it change? Last time I checked it was 1024 (though it was a long time ago).

> and no bounds checking!

_FORTIFY_SOURCE is not set? When I try to pass 1024 to FD_SET and FD_CLR on my (very old) machine I immediately get:

  *** buffer overflow detected ***: ./a.out terminated
  Aborted
(ok, with -O1 and higher)
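
For reference, a tiny program along those lines that triggers the fortified check (exact behaviour varies by toolchain, since some distros enable _FORTIFY_SOURCE by default):

    /* Build and run, e.g.:
     *   cc -O2 -D_FORTIFY_SOURCE=2 fdset_fortify.c && ./a.out
     * The fortified FD_SET aborts with "*** buffer overflow detected ***";
     * without fortification the out-of-bounds write goes unnoticed. */
    #include <sys/select.h>

    int main(void) {
        fd_set set;
        FD_ZERO(&set);
        FD_SET(FD_SETSIZE, &set);   /* 1024: one past the last valid fd */
        return 0;
    }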

xorvoid

You’re right. I think it ends up working out to a 4096-byte page on x86 machines; that’s probably what I remembered.

Yes, _FORTIFY_SOURCE is a fabulous idea. I was just a bit shocked it wasn’t checked without _FORTIFY_SOURCE. If you’re doing FD_SET/FD_CLR, you’re about to make an (expensive) syscall. Why do you care to elide a cheap not-taken branch that’ll save your bacon some day? The overhead is so incredibly negligible.

Anyways, seriously just use poll(). The select() syscall needs to go away for good.
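
For comparison, a minimal poll()-based helper (a sketch, the name is made up here): poll() takes an array of struct pollfd, so the descriptor values themselves can be arbitrarily large and there is no FD_SETSIZE to overflow.

    #include <poll.h>

    /* Wait for fd to become readable: >0 means ready, 0 timeout, -1 error.
     * Unlike select(), the fd value itself can be far beyond 1024. */
    int wait_readable(int fd, int timeout_ms) {
        struct pollfd pfd = { .fd = fd, .events = POLLIN, .revents = 0 };
        return poll(&pfd, 1, timeout_ms);
    }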

reisse

You may well have really seen 4096 descriptors in select() somewhere. The man page is misleading because it refers to the stubbornly POSIX-compliant glibc wrapper around the actual syscall. Any sane modern kernel (Linux; FreeBSD; NT (although select() on NT is a very different beast); well, maybe except macOS, never had a chance to write network code there) supports passing descriptor sets of arbitrary size to select(). It's mentioned further down in the man page, in the BUGS section:

> POSIX allows an implementation to define an upper limit, advertised via the constant FD_SETSIZE, on the range of file descriptors that can be specified in a file descriptor set. The Linux kernel imposes no fixed limit, but the glibc implementation makes fd_set a fixed-size type, with FD_SETSIZE defined as 1024, and the FD_*() macros operating according to that limit.

The code I've had a chance to work with (it had its roots in the 90s-00s, therefore the select()) mostly used 2048 and 4096.
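
A sketch of what that looks like in practice on Linux (the type and helper names here are made up): allocate a wider bitmap than glibc's 1024-bit fd_set and hand it to select() anyway, since the kernel only looks at the nfds argument.

    #include <string.h>
    #include <sys/select.h>

    #define BIG_SETSIZE 4096                     /* bits, i.e. max fd + 1 */
    #define LONG_BITS   (8 * sizeof(unsigned long))

    typedef struct {
        unsigned long bits[BIG_SETSIZE / LONG_BITS];
    } big_fd_set;

    static void big_fd_zero(big_fd_set *s) {
        memset(s, 0, sizeof *s);
    }

    static void big_fd_set_fd(int fd, big_fd_set *s) {
        s->bits[fd / LONG_BITS] |= 1UL << (fd % LONG_BITS);
    }

    /* The cast is the ugly part: glibc's prototype still says fd_set *,
     * but the kernel just reads nfds bits from the buffer it is given. */
    static int big_select(int maxfd, big_fd_set *readset, struct timeval *tv) {
        return select(maxfd + 1, (fd_set *)readset, NULL, NULL, tv);
    }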

> Anyways, seriously just use poll().

Oh please don't. poll() should be in the same grave as select() really. Either use libev/libuv or go down the rabbit hole of what is the bleeding edge IO multiplexer for your platform (kqueue/epoll/IOCP/io_uring...).
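
For Linux, the epoll flavour of that looks roughly like the following (a bare-bones sketch with error handling trimmed; the helper name is made up). The number of watched descriptors is bounded only by RLIMIT_NOFILE, not by any fixed set size.

    #include <sys/epoll.h>
    #include <unistd.h>

    /* Block until fd becomes readable. Returns 1 when ready, -1 on error. */
    int epoll_wait_readable(int fd) {
        int epfd = epoll_create1(0);
        if (epfd < 0)
            return -1;

        struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
        if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) < 0) {
            close(epfd);
            return -1;
        }

        struct epoll_event out;
        int n = epoll_wait(epfd, &out, 1, -1);   /* -1 = no timeout */
        close(epfd);
        return n;
    }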

jeroenhd

I think there's something ironic about combining UNIX's "everything is a file" philosophy with a rule like "every process has a maximum number of open files". Feels a bit like Windows programming back when GDI handles were a limited resource.

Nowadays Windows seems to have capped the maximum number of file handles per process at 2^16 (or 8096 if you're using raw C rather than Windows APIs). However, since not everything on Windows is a file, the number of open handles is limited "only by memory", so Windows programs can do a lot of things UNIX programs can't do anymore once the file handle limit has been reached.

jchw

I'm not even 100% certain there's really much of a specific reason why there has to be a low hard limit on file descriptors. I would guess that Windows NT handles take up more system resources since NT handles have a lot of things that file descriptors do not (e.g. ACLs).

Still, on the other hand, opening a lot of file descriptors will necessarily incur a lot of resource usage, so really if there's a more efficient way to do it, we should find it. That's definitely the case with the old way of doing inotify for recursive file watching; I believe most or all uses of inotify that work this way can now use fanotify instead much more efficiently (and kqueue exists on other UNIX-likes.)

In general, having the limit be low is probably useful for sussing out issues like this, though it definitely can result in a worse experience for users for a while...

> Feels a bit like Windows programming back when GDI handles were a limited resource.

IIRC it was also amusing because the limit was global (right?) and so you could have a handle leak cause the entire UI to go haywire. This definitely led to some very interesting bugs for me over the years.

kevincox

The reason for this limit, at least on modern systems, is that select() has a fixed limit (usually 1024). So it would cause issues if there was an fd higher than that.

The correct solution is basically:
1. On startup, every process should set the soft limit to the hard limit.
2. Don't use select() ever.
3. Before exec'ing any processes, set the limit back down (in case the thing you exec uses select()).

This silly dance is explained in more detail here: https://0pointer.net/blog/file-descriptor-limits.html
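
In C, steps 1 and 3 of that dance come out to roughly the following (a sketch along the lines of the linked post, not code from it; helper names are made up):

    #include <sys/resource.h>
    #include <sys/select.h>

    static struct rlimit saved_nofile;

    /* Step 1: at startup, raise the soft limit to the hard limit. */
    int raise_nofile_limit(void) {
        if (getrlimit(RLIMIT_NOFILE, &saved_nofile) < 0)
            return -1;
        struct rlimit rl = saved_nofile;
        rl.rlim_cur = rl.rlim_max;
        return setrlimit(RLIMIT_NOFILE, &rl);
    }

    /* Step 3: before exec'ing children, drop the soft limit back so a
     * select()-based child never gets new fds at or above FD_SETSIZE. */
    int lower_nofile_limit_for_child(void) {
        struct rlimit rl = saved_nofile;
        if (rl.rlim_cur > FD_SETSIZE)
            rl.rlim_cur = FD_SETSIZE;
        return setrlimit(RLIMIT_NOFILE, &rl);
    }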

0xbadcafebee

> I'm not even 100% certain there's really much of a specific reason why there has to be a low hard limit on file descriptors

Same reason disks have quotas and containers have cpu & memory limits: to keep one crappy program from doinking the whole system. In general it's seen as poor form to let your server crash just because somebody allowed infinite loops/resource use in their program.

A lot of crashed desktops, servers, even networks come down to a program that was allowed to take up too many resources. Limits/quotas help more than they hurt.

bombcar

> I'm not even 100% certain there's really much of a specific reason why there has to be a low hard limit on file descriptors.

There was. Even if a file handle is 128 bytes or so, on a system with only 10s or 100s of KB you wouldn't want it to get out of control. On multi-user especially, you don't want one process going nuts to open so many files that it eats all available kernel RAM.

Today, not so much, though an out-of-control program is still out of control.

mrguyorama

The limit was global, so you could royally screw things up, but it was also a very high limit for the time, 65k GDI handles. In practice, hitting this before running out of hardware resources was unlikely, and basically required leaking the handles or doing something fantastically stupid (as was the style at the time). There was also a per process 10k GDI handle limit that could be modified, and Windows 2000 reduced the global limit to 16k.

It was the Windows 9x days, so of course you could also just royally screw things up by just writing to whatever memory or hardware you felt like, with few limits.

jchw

> It was the Windows 9x days, so of course you could also just royally screw things up by just writing to whatever memory or hardware you felt like, with few limits.

You say that, but when I actually tried I found that despite not actually having robust memory protection, it's not as though it's particularly straightforward. You certainly wouldn't do it by accident... I can't imagine, anyway.

taeric

I'm not sure I see irony? I can somewhat get that it is awkward to have a limit that covers many use cases, but this feels a bit easier to reason about than having to check every possible thing you would want to limit.

Granted, I can agree it is frustrating to hit an overall limit if you have tuned lower limits.

CactusRocket

I actually think it's not ironic, but a synergy. If not everything is a file, you need to limit everything in their own specific way (because resource limits are always important, although it's convenient if they're configurable). If everything is a file, you just limit the maximum number of open files and you're done.

eddd-ddde

That's massively simplifying things, however; every "file" uses resources in its own magical little way under the hood.

Brian_K_White

saying "everything is a file" is massively simplifying, so fair is fair

raggi

        use std::io;
        
        #[cfg(unix)]
        fn raise_file_limit() -> io::Result<()> {
            use libc::{getrlimit, setrlimit, rlimit, RLIMIT_NOFILE};
            
            unsafe {
                let mut rlim = rlimit {
                    rlim_cur: 0,
                    rlim_max: 0,
                };
                
                if getrlimit(RLIMIT_NOFILE, &mut rlim) != 0 {
                    return Err(io::Error::last_os_error());
                }
                
                rlim.rlim_cur = rlim.rlim_max;
                
                if setrlimit(RLIMIT_NOFILE, &rlim) != 0 {
                    return Err(io::Error::last_os_error());
                }
            }
            
            Ok(())
        }

a_t48

Years ago I had the fun of hunting down a bug at 3am before a game launch. Randomly, we’d save the game and instead get an empty file. This is pretty much the worst thing a game can do (excepting wiping your hard drive, hello Bungie). Turned out some analytics framework was leaking network connections and thus stealing all our file handles. :(

Izkata

  lsof -p $(echo $$)
The subshell isn't doing anything useful here, could just be:

  lsof -p $$

codedokode

The problem with lsof is that it outputs a lot of duplicates, for example:

- it outputs memory-mapped files whose descriptor was closed (with "mem" type)

- for multi-threaded processes it repeats every file for every thread

For example, my system has 400,000 lines of lsof output and it is really difficult to figure out which of them count against the system-wide limit.

zx8080

This code has AI smell

mattrighetti

Or writing blogs at 2AM is not a smart thing to do

css

I ran into this issue recently [0]. Apparently the integrated VSCode terminal sets its own (very high) cap by default, but other shells don't, so all of my testing in the VSCode shell "hid" the bug that other shells exposed.

[0]: https://github.com/ReagentX/imessage-exporter/issues/314#iss...

oatsandsugar

Yeah I ran into this too when testing a new feature. My colleague sent me this: https://apple.stackexchange.com/questions/32235/how-to-prope...

But I reckon it's unreasonable for us to ask our users to know this, and we'll have to fix the underlying cause.

geocrasher

Back in the earlier days of web hosting we'd run into this with Apache. In fact, I have a note from 2014 (much later than the early days actually):

  ulimit -n 10000

  to set permanently:
  /etc/security/limits.conf
  * - nofile 10000

bombcar

I seem to remember this was a big point of contention when threaded Apache (vs just forking a billion processes) appeared - that if you went from 20 processes to 4 processes of 5 threads each you could hit the ulimit.

But ... that's a bad memory from long ago and far away.

database64128

This is one of the many things that Go just takes care of automatically. Since Go 1.19, if you import the os package, the open file soft limit will be raised to the hard limit at startup: https://github.com/golang/go/commit/8427429c592588af8c49522c...

nritchie

Seems like a good idea, but I do wonder what the cost is, as the overhead of allocating the extra resource space (whatever it is) would be added to every Go application.

nasretdinov

Yeah, macOS has a very low default limit, and apparently it affects more than just cargo test, e.g. ClickHouse, and there's even quite a good article about how to increase it permanently: https://clickhouse.com/docs/development/build-osx

mhink

I actually tried another method for doing this not too long ago (adding `kern.maxfiles` and `kern.maxfilesperproc` to `/etc/sysctl.conf` with higher limits than the default) and it made my system extremely unstable after rebooting. I'm not entirely sure why, though.

AdmiralAsshat

Used to run into this error frequently with a piece of software I supported. I don't remember the specifics, but it was your basic C program to process a record-delimited datafile: a for-loop with an fopen() that didn't have a corresponding fclose() at the end of the loop. For a sufficiently large datafile, eventually we'd run out of file handles.
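
A condensed illustration of that bug pattern (names invented here), shown with the fix in place: without the fclose() at the end of the loop body, descriptors leak until fopen() starts failing with EMFILE.

    #include <stdio.h>

    void process_records(char *const paths[], int n) {
        for (int i = 0; i < n; i++) {
            FILE *f = fopen(paths[i], "r");
            if (f == NULL) {
                perror("fopen");   /* with the leak: "Too many open files" */
                continue;
            }
            /* ... parse and process the record ... */
            fclose(f);             /* the call the buggy version was missing */
        }
    }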

L3viathan

Nitpick, but:

> At its core, a file descriptor (often abbreviated as fd) is simply a positive integer

A _non-negative_ integer.