
Why Nextcloud feels slow to use

234 comments

· November 3, 2025

palata

I would love to like Nextcloud; it's pretty great that it exists at all. Just that makes it better than... well, everything else, which I haven't found.

What frustrates me is that it looks like it works, but once in a while it breaks in a way that is pretty much irreparable (or at least not repairable in any practical way).

I want to run an iOS/Android app that backs up images to my server. I tried the iOS app, and when it works, it's cool. It's just that once in a while I get errors like "locked webdav" files and it never seems to recover, or sometimes it just stops synchronising and the only way to recover seems to be to restart the sync from zero. It will then gladly upload 80GB of pictures "for nothing", discarding each one when it arrives on the server because it already exists (or so it seems; maybe it just overwrites everything).

The thing is that I want my family to use the app, so I can't be accessing their phones for multiple hours every two weeks; it has to work reliably.

If it was just for backing up my photos... well I don't need Nextcloud for that.

Again, alternatives just don't seem to exist where I can install an app on my parents' iOS devices and have it synchronise their photo gallery in the background. Except iCloud, I guess.

benhurmarcel

I stopped using Nextcloud when the iOS app lost data.

For some reason the app disconnected from my account in the background from time to time (annoying, but I didn't think it was critical). Once, I pasted data into Nextcloud through the Files app integration; it didn't sync because the app was disconnected, it didn't say anything, and the data was lost.

lompad

Recently people built a super-lightweight alternative named copyparty[0]. To me it looks like it does everything people tend to need, without all the bloat.

[0]: https://github.com/9001/copyparty

nucleardog

I think "people" deserves clarification: Almost the entire thing was written by a single person and with a _seriously_ impressive feature set. The launch video is well worth a quick watch: https://www.youtube.com/watch?v=15_-hgsX2V0&pp=ygUJY29weXBhc...

I don't say this to diminish anyone else's contribution or criticize the software, just to call out the absolutely herculean feat this one person accomplished.

Dylan16807

> everything people tend to need

> NOTE: full bidirectional sync, like what nextcloud and syncthing does, will never be supported! Only single-direction sync (server-to-client, or client-to-server) is possible with copyparty

Is sync not the primary use of nextcloud?

chappi42

This is not an alternative, as it only covers files. Mind what the article says: "I like what Nextcloud offers with its feature set and how easily it replaces a bunch of services under one roof (files, calendar, contacts, notes, to-do lists, photos etc.), but ".

For us Nextcloud AIO is the best thing under the sun. It works reasonably well for our small company (about 10 ppl) and saves us from Microsoft. I'm very grateful to the developers.

Hopefully they are able to act upon such findings, or rewrite it in Go :-). Mmh, if Berlin (Germany) didn't waste so much money on ill-advised, ideology-driven and long-term state-destroying actions and "NGOs", they would have enough money to fund hundreds of such rewrites. Alas...

lachiflippi

Why should Germany be wasting public money on a private company that keeps piling more and more restrictions onto its open-source-washed "community" offering, and whose "enterprise" pricing comes in at twice* the price of MS365 for fewer features, worse integration, and added costs for hosting, storage, and maintenance?

* or the same, if you exclude Nextcloud Talk, but then you're missing a chat feature

mynameisvlad

There is no way it's going to be completely rewritten from scratch in Go, and none of whatever Germany is or isn't doing affects that in any way, shape, or form.

upboundspiral

I think what you described is basically ownCloud Infinite Scale (oCIS). I haven't tested it myself, but it's something I've been considering. I run plain ownCloud right now rather than Nextcloud, as it avoided a few hiccups that I had.

cbondurant

It makes perfect sense to me that nextcloud is a good fit for a small company.

My biggest gripe, having used it for far longer than I should have, was always that it expected far too much maintenance (a 4-month release cadence) to make sense for individual use.

Doing that kind of regular upkeep on a tool meant for a whole team of people is a far more reasonable cost-benefit trade-off, especially since it only needs one technically savvy person working behind the scenes and is very intuitive and familiar on its front-end. That makes for great savings overall.

seemaze

I found copyparty to be too busy on the UI/UX side of things. I've settled on dufs[0]: quick to deploy, fast to use, and cross-platform.

[0] https://github.com/sigoden/dufs

davidcollantes

Do you have a systemd unit for it, run it with Docker, or simply run it manually as needed? I find its simplicity perfect!

Larrikin

For your specific use case of photos, Immich is the front-runner and a much better experience. Sadly, for a general Dropbox replacement I haven't found anything either.

nucleardog

> Sadly, for a general Dropbox replacement I haven't found anything either.

I had really good luck with Seafile[0]. It's not a full groupware solution; it's primarily just a really good file-syncing/Dropbox solution.

Upsides: everything worked reliably for me, it was much faster, it does chunk-level deduplication and some other things, has native apps for everything, is supported by rclone, has a FUSE mount option, supports mounting as a "virtual drive" on Windows, supports publicly sharing files and shared "drives", offers end-to-end encryption, and has practically everything else I'd want out of a "file syncing solution".

The only thing I didn't like about it is that it stores all of your data as, essentially, opaque chunks on disk that are pieced together using the data in the database. This is how it achieves the performance, deduplication, and other things I _liked_. However it made me a little nervous that I would have a tough time extracting my data if anything went horribly wrong. I took backups. Nothing ever went horribly wrong over 4 or 5 years of running it. I only stopped because I shelved a lot of my self-hosting for a bit.

[0]: https://www.seafile.com/en/home/

Semaphor

Yeah, went with that as well. It’s blazingly fast compared to NC.

justinparus

thanks for sharing. been looking for something like this for a while

thuttinger

For a general file sharing / storage solution there is also OpenCloud: https://opencloud.eu/de

It's what I want to try next. Written in Go, it looks promising.

karamanolev

Too many Cloud things! OwnCloud, NextCloud, OpenCloud. There have to be better names available...

63stack

Look into Syncthing for a Dropbox replacement; I have been using it for years and am very satisfied.

troyvit

Syncthing is on my "want to like" list, but I gave up on it. I'm a one-person show who just wants to sync a few dozen markdown files across a few laptops and a phone. Every time I ran it, I'd invariably end up with conflict files. It got to the point where I was spending more time merging diffs than writing. How it could do that with just one person running it, I have no idea.

layer8

If you just need a Dropbox replacement for file syncing, Nextcloud is fine if you use the native file system integrations and ignore the web and WebDAV interfaces.

guilamu

I'd say Ente Photos is at least as good as Immich, if not better.

https://github.com/ente-io/ente

omnimus

I would say the opposite. Ente has one huge advantage: it is E2EE, so it's a must if you are hosting someone else's photos. But if you are planning to run something on your server/NAS for yourself, then Immich has many advantages (which often relate to that E2EE trade-off). For example, your files are still plain files on the disk, so there's less worry about something breaking unrecoverably. And you can add external locations. With Ente it is just about backing up your phone photos; Immich works pretty well as a camera photo organizer.

palata

Does it have a mobile app that backs up the photos in the background and can essentially be "forgotten"? That's pretty much what I need for my family: their photos need to get to my server magically.

fauigerzigerk

I'm a very happy Ente Photos user as well.

palata

Does its iOS/Android app automatically back up the photos in the background? When I looked into Immich (I didn't try it), it sounded like it was more of a server thing. I need the automation so that my family can forget about it.

treve

I replaced all my Dropbox uses with SyncThing (and love it). I run an instance on my server at all times and on every client.

redrblackr

There is also Memories for Nextcloud, which basically matches Immich in feature set (it was ahead until last month); Nextcloud + Memories makes a very strong replacement for Google Drive or Dropbox.

palata

Yeah I guess my issue is that if I can't trust the mobile app not to lose my photos (or stop syncing, or not sync everything), then I just can't use it at all. There is no point in having Nextcloud AND iCloud just because I don't trust Nextcloud :D.

conradev

I use Syncthing as a Dropbox replacement, and I like it. I have a machine at home running it that is accessible over the net. Not the prettiest, but it works!

jacomoRodriguez

I switched to FolderSync for the upload from mobile. Works like a charm!

I know, it sucks that the official apps are buggy as hell, but the server side is really solid.

stavros

For photos, you can't beat Immich.

nolan879

This also happened to me with my Nextcloud; thankfully I did not lose any photos. I transitioned to Immich for my photos and have not looked back.

pjs_

I’ve tried every scheme under the sun and Immich is the only thing I’ve ever seen that actually works for this use case

exe34

I use Syncthing: I've got a folder shared between my phone, laptop, and media center, and it just syncs everything easily.

dns_snek

It works well for smaller folders, but it slows to a crawl with folders that contain thousands of files. If I add a file to an empty shared folder, it will sync almost instantly; but if I take a photo, both sides become aware of the change rather quickly and then just sit around for 5 minutes doing nothing before starting the transfer.

kelvinjps10

I do the same it's so convenient

PaulKeeble

I don't doubt that large amounts of JavaScript can often cause issues, but even when cached, Nextcloud feels sluggish. When I look at just the network tab during a refresh of the calendar page, it makes 124 network calls, 31 of which aren't cached. It seems to be making a call per calendar, each of which takes over 30ms. So that stacks up the more calendars you have (and you have a number by default, like contact birthdays).

The JavaScript performance trace shows that over 50% of the work is in making the asynchronous calls to pull those calendars and other network calls one by one, and then in all the refresh updates this causes when putting them onto the page.

Supporting all these N calendar calls, it also pulls calendar rooms, calendar resources, and the "principals" for the user individually. All of these are separate network calls, some of which must be gating the later per-calendar calls.

It's not just that: it also makes a call for notifications, groups, user status, and multiple heartbeats to complete the page, all before it tries to get the calendar details.

This is why I think it feels slow: it pulls down the page, and then the JavaScript pulls down all the bits of data for everything on the screen with individual calls, in many cases waiting for the responses before it can progress to make further calls, of which there can be N many depending on what the user is doing.

So across the local network (2.5Gbps) that is a second, most of it spent waiting on the network. If I use the regular 4G level of throttling, it takes 33.10 seconds! It really goes to show how badly this design copes with extra latency.
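To see why the waterfall hurts, here's a minimal sketch (calendarUrls is a hypothetical stand-in for the per-calendar endpoints, not Nextcloud's actual API): awaiting each call in a loop costs roughly N round trips, while issuing them all at once costs roughly one.

    // Waterfall: each ~30ms call waits for the previous one (≈ N × RTT).
    async function loadSequential(calendarUrls) {
      const results = [];
      for (const url of calendarUrls) {
        results.push(await fetch(url).then((r) => r.json()));
      }
      return results;
    }

    // Concurrent: all calls in flight at once (≈ 1 × RTT, modulo the
    // browser's per-host connection limit on HTTP/1.1).
    function loadConcurrent(calendarUrls) {
      return Promise.all(
        calendarUrls.map((url) => fetch(url).then((r) => r.json()))
      );
    }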

riskable

I was going to say... The size of the JS only matters the first time you download it, unless there are a lot of tiny files instead of a bundle or two. What the article is complaining about doesn't seem like the root cause of the slowness.

When it comes to JS optimization in the browser there's usually a few great big smoking guns:

    1. Tons of tiny files: Bundle them! Big bundle > zillions of lazy-loaded files.
    2. Lots of AJAX requests: We have WebSockets for a reason!
    3. Race conditions: Fix your bugs :shrug:
    4. Too many JS-driven animations: Use CSS or JS that just manipulates CSS.
Nextcloud appears to be slow because of #2. Both #1 and #2 are dependent on round-trip times (HTTP request to server -> HTTP response to client) which are the biggest cause of slowness on mobile networks (e.g. 5G).

Modern mobile network connections have plenty of bandwidth to deliver great big files/streams but they're still super slow when it comes to round-trip times. Knowing this, it makes perfect sense that Nextcloud would be slow AF on mobile networks because it follows the REST philosophy.

My controversial take: GIVE REST A REST already! WebSockets are vastly superior, and they've been around for FIFTEEN YEARS now. Do I understand why they're so much lower latency than REST calls on mobile networks? Not really: in theory it's still a round trip, but for some reason an open connection can pass data with an order of magnitude (or more) lower latency on something like a 5G connection.

fwlr

15MB of JavaScript is 15MB of code that your browser is trying to execute. It’s the same principle as “compiling a million lines of code takes a lot longer than compiling a thousand lines”.

riskable

It's a lot more complicated than that. If I have a 15MB .js file and it's just a collection of functions that get called on-demand (later), that's going to have a very, very low overhead because modern JS engines JIT compile on-the-fly (as functions get used) with optimization happening for "hot" stuff (even later).

If there's 15MB of JS that gets run immediately after page load, that's a different story. Especially if there's lots of nested calls. Ever drill down deep into a series of function calls inside the performance report for the JS on a web page? The more layers of nesting you have, the greater the overhead.

DRY as a concept is great from a code readability standpoint but it's not ideal performance when it comes to things like JS execution (haha). I'm actually disappointed that modern bundlers don't normally inline calls at the JS layer. IMHO, they rely too much on the JIT to optimize hot call sites when that could've been done by the bundler. Instead, bundlers tend to optimize for file size which is becoming less and less of a concern as bandwidth has far outpaced JS bundle sizes.

The entire JS ecosystem is a giant mess of "tiny package does one thing well" that depends on n layers of "other tiny package does one thing well." This results in LOADS of unnecessary nesting, when the "tiny package that does one thing well" could've just written its own implementation of the simple thing it relies on.

Don't think of it from the perspective of "tree shaking is supposed to take care of that." Think of it from the perspective of "tree shaking is only going to remove dead/duplicated code to save file size." It's not going to take that 10-line function that handles <whatever> and put that logic right where it's used (in order to shorten the call tree).

fluoridation

>Do I understand why they're so much lower latency than REST calls on mobile networks? Not really: in theory it's still a round trip, but for some reason an open connection can pass data with an order of magnitude (or more) lower latency on something like a 5G connection.

It's because a TLS handshake takes more than one round trip to complete. Keeping the connection open means the handshake only needs to be done once, instead of over and over again.
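For illustration, a rough Node sketch (example.com is a placeholder; run it as an ES module): the first request pays DNS + TCP + TLS, while the rest reuse the open keep-alive socket and skip the handshakes.

    import https from "node:https";

    const agent = new https.Agent({ keepAlive: true });

    function timedGet(url) {
      return new Promise((resolve, reject) => {
        const start = performance.now();
        https
          .get(url, { agent }, (res) => {
            res.resume(); // drain the body; we only care about timing
            res.on("end", () => resolve(performance.now() - start));
          })
          .on("error", reject);
      });
    }

    for (let i = 0; i < 3; i++) {
      const ms = await timedGet("https://example.com/");
      console.log(`request ${i}: ${ms.toFixed(0)} ms`);
    }
    agent.destroy();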

binary132

doesn’t HTTP keep connections open?

riskable

Yes and no: There's still a rather large latency improvement even when you're using plain HTTP (not that you should go without encryption).

I was very curious, so I asked an AI to explain why WebSockets would have such lower latency than regular HTTP, and it gave some (uncited, but logical) reasons:

Once a WebSocket is open, each message avoids several sources of delay that an HTTP request can hit—especially on mobile. The big wins are skipping connection setup and radio wakeups, not shaving a few header bytes.

Why WebSocket “ping/pong” often beats HTTP GET /ping on mobile

    No connection setup on the hot path
        HTTP (worst case): DNS + TCP 3‑way handshake + TLS handshake (HTTPS) before you can send the request. On mobile RTTs (60–200+ ms), that’s 1–3 extra RTTs, i.e., 100–500+ ms just to get started.
        HTTP with keep‑alive/H2/H3: Better (no new TCP/TLS), but pools can be empty or closed by OS/radios/idle timers, so you still pay setup sometimes.
        WebSocket: You pay the TCP+TLS+Upgrade once. After that, a ping is just one round trip on an already‑open connection.


    Mobile radio state promotions
        Cellular modems drop to low‑power states when idle. A fresh HTTP request can force an RRC “promotion” from idle to connected, adding tens to hundreds of ms.
        A long‑lived WebSocket with periodic keepalives tends to keep the radio in a faster state or makes promotion more likely to already be done, so your message departs immediately.
        Trade‑off: keeping the radio “warm” costs battery; most realtime apps tune keepalive intervals to balance latency vs power.


    Fewer app/stack layers per message
        HTTP request path: request line + headers (often cookies, auth), routing/middleware, logging, etc. Even with HTTP/2 header compression, the server still parses and runs more machinery.
        WebSocket after upgrade: tiny frame parsing (client→server frames are 2‑byte header + 4‑byte mask + payload), often handled in a lightweight event loop. Much less per‑message work.
         

    No extra round trips from CORS preflight
        A simple GET usually avoids preflight, but if you add non‑safelisted headers (e.g., Authorization) the browser will first send an OPTIONS request. That’s an extra RTT before your GET.
        WebSocket doesn’t use CORS preflights; the Upgrade carries an Origin header that servers can validate.


    Warm path effects
        Persistent connections retain congestion window and NAT/firewall state, reducing first‑packet delays and occasional SYN drops that new HTTP connections can encounter on mobile networks.

What about encryption (HTTPS/WSS)?

    Handshake cost: TLS adds 1–2 RTTs (TLS 1.3 is 1‑RTT; 0‑RTT is possible but niche). If you open and close HTTP connections frequently, you keep paying this. A WebSocket pays it once, then amortizes it over many messages.
    After the connection is up, the per‑message crypto cost is small compared to network RTT; the latency advantage mainly comes from avoiding repeated handshakes.
     
How much do headers/bytes matter?

    For tiny messages, both HTTP and WS fit in one MTU. The few hundred extra bytes of HTTP headers rarely change latency meaningfully on mobile; the dominant factor is extra round trips (connection setup, preflight) and radio state.
     
When the gap narrows

    If your HTTP requests reuse an existing HTTP/2 or HTTP/3 connection, have no preflight, and the radio is already in a connected state, a minimal GET /ping and a WS ping/pong both take roughly one network RTT. In that best case, latencies can be similar.
    In real mobile conditions, the chances of hitting at least one of the slow paths above are high, so WebSocket usually looks faster and more consistent.

Yokolos

I've never seen anybody recommend WebSockets instead of REST. I take it this isn't a widely recommended solution? Do you mean specifically for mobile clients only?

DecoPerson

WebSockets are the secret ingredient to amazing low- to medium-user-count software. If you practice using them enough and build a few abstractions over them, you can produce incredible “live” features that REST-designs struggle with.

Having used WebSockets a lot, I've realised that it's not the simple fact that WebSockets are duplex, or that they're more efficient than HTTP long-polling or SSEs or something else… No, the real benefit is that once you have a "socket" object in your hands, and this object lives beyond the normal "request->response" lifecycle, you realise that your users DESERVE a persistent presence on your server.

You start letting your route handlers run longer, so that you can send the result of an action, rather than telling the user to “refresh the page” with a 5-second refresh timer.

You start connecting events/pubsub messages to your users and forwarding relevant updates over the socket you already hold. (Trying to build a delta update system for polling is complicated enough that the developers of most bespoke business software I've seen do not go to the effort of building such things… But with WebSockets it's easy, as you just subscribe before starting the initial DB query and send all broadcast update events for your set of objects on the fly.)

You start wanting to output the progress of a route handler to the user as it happens (“Fetching payroll details…”, “Fetching timesheets…”, “Correlating timesheets and clock in/out data…”, “Making payments…”).

Suddenly, as a developer, you can get live debug log output IN THE UI as it happens. This is amazing.

AND THEN YOU WANT TO CANCEL SOMETHING because you realise you accidentally put in the actual payroll system API key. And that gets you thinking… can I add a cancel button in the UI?

Yes, you can! Just make a "ctx.progress()" method. When called, if the user has cancelled the current RPC, it throws an RPCCancelled error that's caught by the route-handling system. There's an optional first argument for a progress message to the end user. Maybe add a "no-cancel" flag too for critical sections.

And then you think about live collaboration for a bit… that’s a fun rabbit hole to dive down. I usually just do “this is locked for editing” or check the per-document incrementing version number and say “someone else edited this before you started editing, your changes will be lost — please reload”. Figma cracked live collaboration, but it was very difficult based on what they’ve shared on their blog.

And then… one day… the big one hits… where you have a multistep process and you want Y/N confirmation from the user or some other kind of selection. The sockets are duplex! You can send a message BACK to the RPC client, and have it handled by the initiating code! You just need to make it so devs can add event listeners on the RPC call handle on the client! Then, your server-side route handler can just “await” a response! No need to break up the handler into multiple functions. No need to pack state into the DB for resumability. Just await (and make sure the Promise is rejected if the RPC is cancelled).

If you have a very complex UI page with live-updating pieces, and you want parts of it to be filterable or searchable… This is when you add “nested RPCs”. And if the parent RPC is cancelled (because the user closes that tab, or navigates away, or such) then that RPC and all of its children RPCs are cancelled. The server-side route handler is a function closure, that holds a bunch of state that can be used by any of the sub-RPC handlers (they can be added with ‘ctx.addSubMethod’ or such).

The end result is: while building out any feature of any "non-web-scale" app, you can easily add levels of polish that are simply too annoying to attain when stuck in a REST point of view. Sure, it's possible to do the same thing there, but you'll get frustrated (and so development of such features will not be prioritised). Also, perf-wise, REST is good for "web scale" / high user counts, but you will hit weird latency issues if you try to use it for live, duplex comms.

WebSockets (and soon the WebTransport API over HTTP/3) are game-changing. I highly recommend trying some of these things.
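To make the progress/cancel idea concrete, here's a minimal JavaScript sketch (all names, including ctx and RPCCancelled, are hypothetical and follow this comment's naming, not any real library):

    class RPCCancelled extends Error {}

    // Build a per-RPC context around a send function, e.g. one that
    // writes a frame to the WebSocket carrying this request.
    function makeCtx(send) {
      let cancelled = false;
      return {
        cancel() { cancelled = true; },  // invoked when the client sends "cancel"
        progress(text) {                 // called by the handler at each step
          if (cancelled) throw new RPCCancelled();
          send({ type: "progress", text });
        },
      };
    }

    // A route handler can now report progress and be aborted between steps.
    async function runPayroll(ctx, steps) {
      ctx.progress("Fetching payroll details…");
      const payroll = await steps.fetchPayroll();
      ctx.progress("Fetching timesheets…");
      const sheets = await steps.fetchTimesheets();
      ctx.progress("Making payments…"); // throws RPCCancelled if cancelled by now
      await steps.makePayments(payroll, sheets);
    }

The route-handling system catches RPCCancelled, cleans up, and notifies the client, which is all the machinery a cancel button needs.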

riskable

After all my years of web development, my rules are thus:

    * If the browser has an optimal path for it, use HTTP (e.g. images where it caches them automatically or file uploads where you get a "free" progress API).
    * If I know my end users will be behind some shitty firewall that can't handle WebSockets (like we're still living in the early 2010s), use HTTP.
    * Requests will be rare (per client):  Use HTTP.
    * For all else, use WebSockets.
WebSockets are just too awesome! You can use a simple event dispatcher on both the frontend and the backend to handle any given request/response, and it makes the code sooooo much simpler than REST. Example:

    WSDispatcher.on("pong", pongFunc);
...and `WSDispatcher` would be the (singleton) object that holds the WebSocket connection and has `on()`, `off()`, and `dispatch()` functions. When the server sends a message like `{"type": "pong", "payload": "<some timestamp>"}`, the client calls `WSDispatcher.dispatch("pong", "<some timestamp>")` which results in `pongFunc("<some timestamp>")` being called.

It makes reasoning about your API so simple and human-readable! It's also highly performant and fully async. With a bit of Promise wrapping, you can even make it behave like a synchronous call in your code which keeps the logic nice and concise.

In my latest pet project (collaborative editor) I've got the WebSocket API using a strict "call"/"call:ok" structure. Here's an example from my WEBSOCKET_API.md:

    ### Create Resource
    ```javascript
    // Create story
    send('resources:create', {
      resource_type: 'story',
      title: 'My New Story',
      content: '',
      tags: {},
      policy: {}
    });
    
    // Create chapter (child of story)
    send('resources:create', {
      resource_type: 'chapter',
      parent_id: 'story_abc123', // This would actually be a UUID
      title: 'Chapter 1'
    });
    
    // Response:
    {
      type: 'resources:create:ok', // <- Note the ":ok"
      resource: { id: '...', resource_type: '...', ... }
    }
    ```
I've got a `request()` helper that makes the async nature of the WebSocket feel more like a synchronous call. Here's what that looks like in action:

    const wsPromise = getWsService(); // Returns the WebSocket singleton
    
    // Create resource (story, chapter, or file)
    async function createResource(data: ResourcesCreateRequest) {
      loading.value = true;
      error.value = null;
      try {
        const ws = await wsPromise;
        const response = await ws.request<ResourcesCreateResponse>(
          "resources:create",
          data // <- The payload
        );
        // resources.value because it's a Vue 3 `ref()`:
        resources.value.push(response.resource); 
        return response.resource;
      } catch (err: any) {
        error.value = err?.message || "Failed to create resource";
        throw err;
      } finally {
        loading.value = false;
      }
    }
For reference, errors are returned in a different, more verbose format where "type" is "error" in an object that the `request()` function knows how to deal with. It used to be ":err" instead of ":ok", but I made it different for a good reason I can't remember right now (LOL).
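One possible shape for such a `request()` helper (a hypothetical sketch of the pattern; the request_id pairing is my assumption, since the comment doesn't say how responses are matched to calls):

    // Resolve on "<type>:ok", reject on "error", pairing replies by id.
    function makeRequester(ws) {
      let nextId = 0;
      const pending = new Map();

      ws.addEventListener("message", (e) => {
        const msg = JSON.parse(e.data);
        const entry = pending.get(msg.request_id);
        if (!entry) return; // a broadcast, not a reply to a request
        pending.delete(msg.request_id);
        if (msg.type === "error") entry.reject(new Error(msg.message));
        else entry.resolve(msg);
      });

      return function request(type, payload) {
        return new Promise((resolve, reject) => {
          const request_id = ++nextId;
          pending.set(request_id, { resolve, reject });
          ws.send(JSON.stringify({ type, request_id, ...payload }));
        });
      };
    }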

Aside: there are still THREE firewalls that suck so badly they can't handle WebSockets: Sophos XG Firewall, WatchGuard, and McAfee Web Gateway.

bityard

The thing that kills me is that Nextcloud had an _amazing_ calendar a few years ago. It was way better than anything else I have used. (And I tried a lot, even the calendar add-on for Thunderbird, which may or may not be built in these days; I can't keep track.)

Then at some point the Nextcloud calendar was "redesigned", and now it's completely terrible. Aesthetically, it looks like it was designed for toddlers. Functionally, adding and editing events is flat-out painful. Trying to specify a time range for an event is weird and frustrating. It's better than not having a calendar, but only just.

There are plenty of open source calendar _servers_, but no good open source web-based calendars that I have been able to find.

jauntywundrkind

Sync Conf is next week, and this sort of issue is exactly the kind of thing I hope can just go away. https://syncconf.dev/

Efforts like Electric SQL to have APIs/protocols for bulk-fetching all changes (to a "table") are where it's at. https://electric-sql.com/docs/api/http

It's so rare for teams to do data loading well, and rarer still that we get effective caching; often a product's footing here only degrades with time. The various sync ideas out there offer such alluring potential: a consistent way to get the client the updated, live data it needs.

Side note: I'm also hoping the JS / TC39 source phase imports proposal, aka `import source`, can help large apps like Nextcloud defer loading more of their JS until needed. But the waterfall you call out here seems like the real bad side (of Nextcloud's architecture)! https://github.com/tc39/proposal-source-phase-imports
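Until that proposal lands, dynamic import() already buys part of this. A sketch (./calendar.js and its render export are placeholders): the module is only fetched, parsed, and evaluated the first time the user needs it.

    let calendarModule = null;

    async function openCalendar() {
      // Fetched/parsed/evaluated on first use only; reused afterwards.
      calendarModule ??= await import("./calendar.js");
      calendarModule.render(document.body);
    }

    document.querySelector("#calendar-tab")
      ?.addEventListener("click", openCalendar);

Source phase imports go further by separating fetching/parsing from evaluation, so an app could warm the cache without paying the execution cost up front.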

dingdingdang

Having at some point maintained a soft fork / patch set for Nextcloud... yes, there is so much performance left on the table. With a few basic patches, the file manager, for example, sped up by orders of magnitude in render speed.

The issue remains that the core itself feels like layers upon layers of encrusted code that, instead of being fixed, has just had another layer added... "Something fundamentally wrong? Just add Redis as a dependency. Does it help? Unsure. Let's add something else. Don't like having the config in a DB? Let's move some of it to ini files (or vice versa), etc., etc." It feels like that's the cycle, and it ain't pretty, and I don't trust the result at all. Eventually I abandoned the project.

Edit: at some point, I reckon, part of the ecosystem recognised some of these issues, and hence ownCloud remade a large part of the fundamentals in Go. It remains unknown to me whether this sorted things out or not. All of these projects feel like they suffer badly from "overbuild".

Edit-edit: another layer to add to the mix is that the "overbuild" situation is probably largely what allows the hosting economy around these open-source solutions to thrive, since Nextcloud and co. are so over-engineered and badly documented that they -require- a dedicated sysadmin team to run well.

INTPenis

This is my theory as well. NC has grown gradually, in silos almost; every piece of it is some plugin they've imported from contributions at some point.

For example, the reason there's no cohesiveness, such as a common WebSocket bus for all those AJAX calls, is that they all started out as separate plugins.

NC has gone full modularity and lost performance for it. What we need is a more focused and cohesive tool for document sharing.

Honestly, I think that today, with IaC and containers, a better approach for self-hosting is to use many tools connected by SSO instead of one monstrosity. The old Unix philosophy: do one thing, but do it well.

rahkiin

This still needs cohesive authorization, central file sharing, and access rules across apps. And some central concept of projects, to move all content away from individual people and into the org and roles.

redrblackr

Two things:

1. Did you open backport requests upstream with these basic patches? If you have order-of-magnitude speed improvements, it would be awesome to share them!

2. You definitely don't need an entire sysadmin team to run Nextcloud. At my work (a large organisation) there are three instances running (for different parts/purposes), of which only one is run by more than one person, and I myself run both my personal instance and one for a nonprofit with ~100 people. It's really not much work after setup (and plenty of other systems are a lot more complicated to set up, trust me).

aeldidi

Nextcloud is something I have a somewhat love-hate relationship with. On one hand, I've used Nextcloud for ~7 years to back up and provide access to all of my family's photos. We can look at our family pictures and memories from any computer, and it's all private and runs mostly without any headaches.

On the other hand, Nextcloud is so far from being something like Google Docs, and I would never recommend it as a general replacement to someone who can't tolerate "jank", for lack of a better word. There are so many small papercuts you'll notice when using it as a power user. Right off the top of my head: uploading large files is finicky, and no amount of web-server config tinkering gets it to always work; thumbnail loading is always spotty; and it's significantly slower than it needs to be (I'm talking orders of magnitude).

With all that said, I'm so grateful for Nextcloud since I don't have a replacement, and I would prefer not having all our baby and vacation pictures feeding some big corporation's AI. We really ought to have a safe, private place to store files in 2025 that the average person can wrap their head around. I only wish my family took better advantage of it, since I'm essentially providing them with unlimited storage.

madeofpalk

I don't think this article actually does a great job of explaining why Nextcloud feels slow. It shows lots of big numbers for MBs of JavaScript being downloaded, but how does that actually impact the user experience? Is the "slow" Nextcloud just sitting around waiting for these JS assets to load and parse?

From my experience, this doesn't meaningfully impact performance. Performance problems come from "accidentally quadratic" logic in the frontend, poorly optimised UI updates, and too many API calls.

hamburglar

It downloads a lot of JavaScript, it decompresses a lot of JavaScript, it parses a lot of JavaScript, it runs a lot of JavaScript, it creates a gazillion onFoundMyNavel event callbacks which all run JavaScript, it does all manner of uncontrolled DOM-touching while its millions of script fragments do their thing, it xhr’s in response to xhrs in response to DOM content ready events, it throws and swallows untold exceptions, has several dozen slightly unoptimized (but not too terrible) page traversals, … the list goes on and on. The point is this all adds up, and having 15MB of code gives a LOT of opportunity for all this to happen. I used to work on a large site where we would break out the stopwatch and paring knife if the homepage got to more than 200KB of code, because it meant we were getting sloppy.

bob1029

15+ megabytes of executable code begins to look quite insane when you start to take a gander at many AAA games. You can produce a non-trivial Unity WebGL build that fits in <10 megabytes.

hamburglar

It's the kind of code size where you analyze it and find 13 different versions of jQuery and a hundred different bespoke console.log wrappers.

72deluxe

Yes, and Windows 3.11 came on six 1.44MB floppy disks. Modern software is so offensive.

shermantanktop

Agreed. Plus if it truly downloads all of that every time, something has gone wrong with caching.

Overeager warming/precomputation of resources on page load (rather than on use) can be a culprit as well.

hamburglar

Relying on cache to cover up a 15MB JavaScript load is a serious crutch.

RiverCrochet

I've played around with many self-hosted file-manager apps. My first one was AjaXplorer, which then became Pydio. I really liked Pydio but didn't stick with it because it was too slow. I briefly played with Nextcloud but didn't stick with it either.

Eventually I ran into FileRun and loved it, even though it isn't completely open source. FileRun is fast, works nicely on both desktop and mobile via the browser, and I never had an issue with it. It was free for personal use a few years ago, but unfortunately it is not anymore. It's worth the license if you have the money for it.

I tried setting up Seafile, but I had issues getting it working via a reverse proxy and gave up on it.

I like copyparty (https://github.com/9001/copyparty) - really dead simple to use and quick like FileRun - but the web interface is not geared towards casual users. I also miss FileRun's "Request a file" feature, which worked very nicely if you just wanted someone to upload a file to you and then be done.

tripflag

> I also miss FileRun's "Request a file" feature, which worked very nicely if you just wanted someone to upload a file to you and then be done.

With the disclaimer that I've never used FileRun, I think this can be replicated with copyparty by means of the "shares" feature (--shr). That way, you can create a temporary link for other people to upload to, without granting access to browse or download existing files. It works like this: https://a.ocv.me/pub/demo/#gf-bb96d8ba&t=13:44

accrual

On the topic of self-hosted file manager apps, I've really liked "filebrowser". Pair it with Syncthing or another sync daemon and you've got a minimal self-hosted Dropbox clone.

* https://github.com/filebrowser/filebrowser

* https://github.com/hurlenko/filebrowser-docker

t_mann

Copyparty can't (and doesn't want to) replace Nextcloud for many use cases because it supports one-way sync only. The readme is pretty clear about that. I'm toying with the idea of combining it with Syncthing (for all those devices where I don't want to do a full sync), does anybody have experience with that? I've seen some posts that it can lead to extreme CPU usage when combined with other tools that read/write/index the same folders, but nothing specifically about Syncthing.

tripflag

Combining copyparty with Syncthing is not something I have tested extensively, but I know people are doing this, and I have yet to hear about any related issues. It's also a usecase I want to support, so if you /do/ hit any issues, please give word! I've briefly checked how Syncthing handles the symlink-based file deduplication, and it seemed to work just fine.

The only precaution I can think of is that copyparty's .hist folder should probably not be synced between devices. So if you intend to share an entire copyparty volume, or a folder which contains a copyparty volume, then you could use the `--hist` global-option or `hist` volflag to put it somewhere else.

As for high CPU usage, this would arise from copyparty deciding to reindex a file when it detects that the file has been modified. This shouldn't be a concern unless you point it at a folder with continuously modified files, such as a file that is currently being downloaded or otherwise slowly written to.

aborsy

A good thing about Nextcloud is that by learning one tool, you get a full suite of collaboration apps: sync, file sharing, calendar, notes, collectives, office (via Collabora or OnlyOffice), and more. These features are pretty good; plus, you get things like photo management and Talk, which are decent.

Sure, some people might argue that there are specialized tools for each of these functions. And that's true. But the trade-off is that you'd need to manage a lot more individual services. With Nextcloud, you get a unified platform that might be good enough to run a company, even if it's not very fast and some features might have bugs.

The AIO has addressed issues like update management and reliability; it's been very good in my experience. You get a fully tested, ready-to-go package from Nextcloud.

That said, I wonder: if the platform were rewritten in a more performance-efficient language than PHP, with a simplified codebase and trimmed-down features, would it run faster? The UI could also be more polished; the Synology DSM web interface, for example, looks really nice!

s1mplicissimus

Rewriting in a lower-level language won't do too much for NC, because it's mostly slow due to inefficient I/O organization: things like mountains of XHRs, inefficient fetching, DB querying, etc. None of that will be implicitly fixed by a rewrite in any language, and it can be fixed in the PHP stack as well. I think one of the things that helped OC/NC get off the ground was precisely that the sysadmins running it can often do a little PHP, which is just enough to get it customized for the client. Raising the bar for contribution by using lower-level languages might not be a desirable change of direction in that case.

troyvit

The thing I don't get is that, based on the article, the front-end is as bloated as the back-end.

That said, there's an ownCloud version called Infinite Scale, which is written in Go.[1] Honestly, I tried to go that route, but its requirements are pretty opinionated (Ubuntu LTS 22.04 or 24.04 and lots of Docker containers littering your system). It does look like it's getting a lot of development, though.

[1] https://doc.owncloud.com/

c-hendricks

> its requirements are pretty opinionated (Ubuntu LTS 22.04 or 24.04

Hm?

> This guide describes an installation of Infinite Scale based on Ubuntu LTS and docker compose. The underlying hardware of the server can be anything as listed below as long it meets the OS requirements defined in the Software Stack

https://doc.owncloud.com/ocis/next/depl-examples/ubuntu-comp...

The Software Stack section goes on to say it just needs Docker, Docker Compose, shell access, and sudo.

Ubuntu and sudo are probably only mentioned because the guide walks you through installing docker and docker compose.

tripplyons

I once discovered and reported a vulnerability in Nextcloud's web client that was due to them including an outdated version of a JavaScript-based PDF viewer. I always wondered why they couldn't just use the browser's PDF viewer. I made $100, which was a large amount to me as a 16 year old at the time.

Here is a blog post I wrote at the time about the vulnerability (CVE-2020-8155): https://tripplyons.com/blog/nextcloud-bug-bounty

rahkiin

I recently needed to show a PDF file inside a div in my app. All I wanted was to show it and make it scrollable. The file comes from a fetch() with authorization headers.

I could not find a way to do this without pdf.js.

rahkiin

This made me try it once more, and I got something to work with some Blobs, resource URLs, sanitization, and iframes.

So I guess it is possible
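For reference, a sketch of that Blob + object-URL approach (the endpoint and token are placeholders): fetch with your auth headers, wrap the bytes in an object URL, and hand it to an iframe (an <object> element works the same way).

    async function showPdf(container) {
      const res = await fetch("/api/files/report.pdf", {
        headers: { Authorization: "Bearer <token>" },
      });
      const blob = await res.blob(); // keeps the response's application/pdf type
      const url = URL.createObjectURL(blob);

      const frame = document.createElement("iframe");
      frame.src = url; // the browser's built-in viewer renders it
      frame.style.width = "100%";
      frame.style.height = "100%";
      container.appendChild(frame);
      // Call URL.revokeObjectURL(url) when the view is torn down.
    }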

tripplyons

Yeah, blobs seem like the right way to do it.

moi2388

The HTML <object> tag can just show a PDF file by default. Just fetch it and pass the source there.

What is the problem with that exactly in your case?

jrochkind1

I think it can't do that on iOS? I don't know if that's the relevant factor in the choice being discussed, though. Not sure about Android.

bogwog

Nextcloud is bloated and slow, but it works and is reliable. I've been running a small instance in a business setting with around 8 daily users for many years. It is rock solid and requires zero maintenance.

But people rarely use the web apps. Instead, it's used more like a NAS, with the desktop sync client being the primary interface. Nobody likes the web apps because they're slow. The Windows desktop sync client has a really annoying update process, but other than that it's excellent.

I could replace it with a traditional NAS, but the main feature keeping me there is an IMAP authentication plugin. This allows users to sign in with their business email/password. It works so well and makes it so much easier to manage user accounts, revoke access, do password resets, etc.

imcritic

> Nobody likes the web apps because they're slow.

Web apps don't have to be slow. I prefer web apps over native apps, as I don't have to install extra programs on my system and I have more control over the apps:

- a service decides it's a good idea to load some tracking stuff from 3rd-party? I just uMatrix block it;

- a page has an unwanted element? I just uBlock block it;

- a page could have a better look? I just userstyle style it;

- a page is missing something that could be added on client side? I just userscript script it

Jaxan

Do you also prefer a web-based file browser? My main use for Nextcloud is files, and desktop sync that integrates with the OS is crucial.

skeptrune

I know that this is supposed to be targeted at NextCloud in particular, but I think it's a good standalone "you should care about how much JavaScript you ship" post as well.

What frustrates me about modern web development is that everyone is focused on making it work much more than on making sure it works fast. Then, when you push back, the response is always something like "we need to not spend time over-optimizing."

Sent this straight to the team Slack, haha.

xingped

I gave up on using Nextcloud because every time it updated, it accumulated more and more errors, and there was no way I was going to use software that I had to troubleshoot after every single update. Also, the defaults for pictures are apparently quite stupid: instead of making and showing tiny thumbnails, it produces unnecessarily large ones, and loading the thumbnails for a folder of pictures takes forever. You can apparently fix this and tell it to make smaller thumbnails, but again, why am I having to fix everything myself? These should be sane defaults. Unfortunately, I just can't trust Nextcloud.

paularmstrong

I gave up updating Nextcloud. It works for what I use it for and I don't feel like I'm missing anything. I'd rather not spend 4+ hours updating and fixing confusing issues without any tangible benefit.

gloosx

I was expecting the author to open the profiler tab instead of just staring at the network tab. But it's yet another "heavy JavaScript bad" rant.

Do you really consider 1 MB of JS too heavy for an application with hundreds of features? How exactly are developers supposed to fit an entire web app into that? Why does this minimalism suddenly apply only to JavaScript? Should every desktop app be under 1 MB too? Is the Windows Calculator's 30 MB binary also an offense to your principles?

What year is it, 2002? Even low-band 5G gives you 30–250 Mbps down. At those speeds, 20 MB of JS downloads in well under a second. So what's the math behind the 5–10 second figure? What about the cache? Is it turned off for you, so you redownload the whole of Nextcloud from scratch every time?

Nextcloud is undeniably slow, but the real reasons show up in the profiler, not the network tab.

j1elo

> low-band 5G gives you 30–250

First and foremost, I agree with the meat of your comment.

But I wanted to point out, regarding your comment, that it DOES very much matter that apps meant to be transmitted over a remote connection are, indeed, as slim as possible.

You must be thinking about 5G in a city with good infrastructure, right?

I'm right now having a coffee on a road trip, with a 4G connection, and just loading this HN page took like 8~10 seconds. Imagine a bulky, bloated web app when I need to quickly check a copy of my ID stored in Nextcloud.

It's time we normalized testing network-bound apps through low-bandwidth, high-latency network simulators.

znpy

> Do you really consider 1 MB of JS too heavy for an application with hundreds of features? How exactly are developers supposed to fit an entire web app into that? Why does this minimalism suddenly apply only to JavaScript? Should every desktop app be under 1 MB too? Is the Windows Calculator's 30 MB binary also an offense to your principles?

Yes, I don't know, because it runs in the browser, yes, yes.