Apple unveils new Mac Studio

298 comments · March 5, 2025

lenerdenator

> 512GB of RAM

Keep these things the hell away from the people who develop Chrome and desktop JS apps.

whizzter

In 2025 the question isn't "will it run crysis", it's "will it run a simple CRUD app".

esafak

Will it run Electron?

bloomingkales

Gonna pile on:

At this point we may need TSMC to make a specialized chip to run Electron.

belter

You need an AWS Region for that...

gjsman-1000

In the future, we’ll decide HTML, CSS, and JS are too much of an inconsistent burden; so every website will bundle their own renderers into a <canvas> tag running off a WASM blob. Accessibility will be figured out later - just like it was for the early JavaScript frontends.

I am looking forward to the HTML Frameworks explosion. You thought there were too many JS options? Imagine when anyone can fork HTML.

lynx97

<canvas> is already a middle finger in the direction of accessibility. You don't need WASM to put blind people at an extra disadvantage. SVG accessibility, anyone? No? What a surprise. Classical web accessibility has basically ended. We (blind people) are only using sites which are old enough to still be usable.

jsheard

Why stop there? LLMs will free us from the shackles of having to ship actual code, instead we'll ship natural language descriptions and JIT them at runtime. It may use orders of magnitude more resources and only work half of the time but imagine the Developer Velocity™

peatmoss

The state of web deployment in 2025 is the universe punishing me for calling Java applets and other Java web deployment tech "heavyweight" back in the day.

asdajksah2123

> every website will bundle their own renderers into a <canvas> tag running off a WASM blob

Isn't that Flutter?

catapart

Not that I intend to scale this in any way, but I'm working on an in-game UI rendered on the canvas, and I'm thinking I might be able to hack something together based on this YouTuber's library and excellent explainer video[0]. The thought had definitely occurred to me that if someone wanted to really roll up their sleeves and maintain a JS port of the library, it would provide a translatable UI from native C to native JS and back. At least, I can imagine a Vite/Webpack-like CLI that reads the C implementation and spits out a JS implementation.

Of course, I could also imagine one that reads the C and produces the equivalent HTML/CSS/JS. And others might scoff, "why not just compile the whole C app into WASM?", which would certainly be plenty performant in a lot of cases. So I guess I don't know why it isn't already being done, and that usually means I don't know enough about the problems to have any clue what it would actually take to make such things.

In any case, I'm also looking forward to a quantum leap in web app UI! I'm not quite as optimistic that it's ever going to happen, I guess, but I can see a lot of benefit, if it did.

[0]https://www.youtube.com/watch?v=by9lQvpvMIc

fumar

I'm thinking about this space now. Ideally, I want a new browser-like platform with stricter security properties than browsers but better out-of-the-box rendering capabilities.

swiftcoder

You jest, but isn't this Web Components? Or alternatively, Flutter?

cellularmitosis

Speaking of CRUD, would Apple's on-chip memory have significant advantages for running Postgres vs. a Threadripper with a mobo full of RAM?

It seems like vertical scaling has fallen out of fashion lately, and I’m curious if this might be the new high-water mark for “everything in one giant DB”.

vaxman

Better get to the bottom of the mystery surrounding Apple's patents on LPDDR ECC, or you will have to make a leap of faith that your database on their chips won't wind up cruddy in a Bad Way. All we have now are assumptions and educated guesses about what they may be doing. It's also going to be an issue with the AMD 395+ and Nvidia+MediaTek GB10 (but I would assume NO ECC on those SoCs, based on their history).

It may only be a few mm to the LPDDR5 arrays inside the SoC, but there are all sorts of environmental/thermal/power and RFI considerations, especially on tiny (3-5nm) nodes! Switch on the numerical control machine on the other side of the wall from your office and hope your data doesn't change.

hot_gril

There are already big servers designed for huge single databases, for example the 8-socket Xeon types. Tbh I don't understand exactly why RAM is such a concern, but these machines have TBs of it.

ffsm8

I'm not sure how this would impact the server market in any way, considering that EPYC/Threadripper platforms have supported 4TB of RAM for over 5 years now.

Is it the usual Apple distortion effect where fanboys just can't help themselves?

It's definitely a sizeable amount of RAM though, and definitely enough to run the majority of websites out there. But so could a budget Linux server costing maybe 100-200 bucks per month.

Maken

Will it run Discord?

layer8

They should make a “webdev” edition with like 4 GB.

DwnVoteHoneyPot

Raspberry Pi 4GB

Hamuko

The Mac Mini M2.

kees99

Chrome has to run on chromebooks, quite a few of which are still-supported models with 4GB of non-upgradeable RAM.

superjan

So that means it can run with 4GB. Is there a way to block it from using more?

NikkiA

If you have unused ram, why would you want an app not to use it?

amelius

You could try to use cgroups to accomplish that.
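
A minimal sketch of that idea, assuming Linux with cgroup v2 mounted at /sys/fs/cgroup, root privileges, and a `google-chrome` binary on PATH ("chrome-cap" is just a made-up cgroup name):

    # Cap a browser's process tree at 4 GiB via the cgroup v2 memory controller.
    import os
    import subprocess

    CGROUP = "/sys/fs/cgroup/chrome-cap"   # hypothetical cgroup name
    LIMIT_BYTES = 4 * 1024 ** 3            # 4 GiB

    os.makedirs(CGROUP, exist_ok=True)
    with open(os.path.join(CGROUP, "memory.max"), "w") as f:
        f.write(str(LIMIT_BYTES))

    # Launch the browser, then move its PID into the cgroup; the renderer
    # processes it forks inherit the cgroup membership automatically.
    proc = subprocess.Popen(["google-chrome"])
    with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
        f.write(str(proc.pid))
    proc.wait()

When the limit is hit, the kernel reclaims and, failing that, OOM-kills inside the cgroup rather than letting the browser eat the whole machine.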

lippihom

Now wouldn't that be the dream.

thesmok

Run it in a VM.

ant6n

These chromebooks won’t run chrome, they’ll meander it.

lenerdenator

I wouldn't even call it meandering.

Know that scene from one episode of Aqua Teen Hunger Force where George Lowe (RIP) is a police officer and has his feet amputated, so he drags himself while pursuing a suspect?

Yeah. It does that.

reustle

That’s almost the full DeepSeek R1!

seunosewa

Almost is a painful word in this case. Imagine if it could actually run R1. They'd make so much money. Falling short by a few dozen GB is such a shame.

amy_petrik

My first thought was, "What does it look like fully specced out? 512GB of RAM cannot be cheap." Fully specced out, it's ~$15k. Now, I bet that'd be a fine $15k AI machine, but if I wanted a CPU AI rig, a cobbling-together of multi-core motherboards could get higher performance at a lower cost, and/or some array of used Nvidia cards. The good news is that 3 or 4 years from now, hardware specs like this will be much cheaper, which is exciting.

singularity2001

512GB is only available on the M3 Ultra.

asah

$10k and up

Mistletoe

Who do you think buys these? :)

doublerabbit

Render farms, animation studios.

We had some hefty rigs at the last studio I worked at.

nicce

Are these really cost-effective for that use case?

yjftsjthsd-h

You run a render farm made of macs?

geerlingguy

The buzz is all around AI and unified memory... but after editing 4K content on an M4 mini (versus my M1 Max Mac Studio), I've realized the few-generations-newer media processing in the M4 is a huge boost over the M1.

Coupled with the CPU just having more oomph, I ordered an M4 Max with 64 GB of RAM for my video/photo editing; I may be able to export H.265 content at 4K/high settings with greater-than-realtime performance...

I'm a little sad that the AI narratives have taken over all discussion of mid-tier workstation-ish builds now.

cosmic_cheese

It feels a bit like we entered the "consumer-grade workstation" era a while back, when AMD started selling 16-core CPUs that will happily socket into run-of-the-mill consumer motherboards, and that continued with the higher-end M-series SoCs.

It really is cool to see. It's nice that that kind of horsepower isn't limited to the likes of proper "big iron" like it once was, and can even reasonably be packaged into a laptop that is decent at being mobile and not an ungainly portable-on-a-technicality behemoth.

TylerE

The one thing that has me a bit bummed with this is that the Ultra, which I had planned to upgrade to, is only an M3 not an M4. Bit disappointing after waiting this long.

tromp

Not all that disappointing considering that most of the performance improvement in M4 seems to come from increased power consumption. In some applications, M4 performs worse per watt than M3.

Uehreka

Yeah but if you’re buying an Ultra you’re probably more concerned with raw performance than perf-per-watt. These aren’t exactly used in laptops.

raydev

> Not all that disappointing considering that most of the performance improvement in M4 seems to come from increased power consumption

Disappointing for those of us who don't care about power consumption in a desktop.

perfmode

Is the M4's media processing superior to the M3? Would the M3 Ultra not perform as well on video editing?

schainks

The M3 Ultra is two M3 Max chips fused together in one package. In aggregate they should outperform an M4 Max by quite a bit.

perfmode

I meant single-core performance.

2OEH8eoCRo0

I'm surprised you use Macs since you usually lean toward more open HW and FOSS.

silvestrov

The SSD prices are insane.

$400 to go from 1TB to 2TB.

$307/TB to go from 1TB to 16TB.

That is 3 times the Amazon prices: https://diskprices.com/?locale=us&condition=new&capacity=4-&...

rsynnott

Given that it's a desktop, most people should just get it with the default size and get an external Thunderbolt NVMe disk. Only if you need more than Thunderbolt 5 speeds (i.e. 80 Gbit/s) do you really need the internal drive, and most NVMe is slower than that in any case.

staplung

I did this recently with a new Mac Mini that I set up. macOS recently added the ability to locate the home directories on any volume. There's a somewhat hidden feature too: if you drag the Applications directory onto an external drive, it will move selected apps there (the larger ones like Pages, etc.). Combine that with the option in the App Store to keep large downloads on a separate disk.

So far it's been working quite well, with the exception that VS Code does not seem to understand how to update itself if you keep it in the external Applications folder: every time it tries to update itself, it just deletes itself instead. I moved it back into the /Applications folder and it's been fine.

zuhsetaqi

Instead of dragging them over, you should create a link. That way it's the same as before for applications like VS Code.

wpm

> macOS recently added the ability to locate the home directories on any volume

Mac OS has always been able to do this.

rsynnott

You can just use it as a boot drive, IIRC.

WWLink

Or if you don't want ugly-ass external boxes cluttering up your desk.

I don't get why they couldn't be arsed to stuff a few M.2 slots in there. They could keep the main NAND their weird soldered-on BS with the firmware stuffed in a special partition if they want. Just give us more room!

rsynnott

https://en.wikipedia.org/wiki/Mac_Pro#Apple_silicon_(2023), kinda. The most ultra-niche of Apple's products.

outime

You seriously don't get why?

ravetcofx

I don't know about Thunderbolt, but the Apple Silicon Macs I help my clients with have something really wrong and screwed up in how macOS or the firmware deals with USB 3.1+ external drives: constant disconnects, despite the sleeping-hard-drives setting being turned off, etc. Searching on forums turns up others having similar issues.

Citizen8396

What brand and model of drive? This sounds similar to a hardware defect in some SanDisk Extreme SSDs; IIRC it was caused by firmware and/or overheating.

moralestapia

This is also quite convenient when you buy a new laptop: just unplug/plug and that's it, you have everything.

ohgr

Yeah, they really need to get that under control. It's a complete rip-off at this point.

I don't mind them charging, say, a $50 "Apple premium" for the fact that it's a proprietary board and needs firmware loaded onto the flash, but the multiplicative pricing is bullshit price gouging and nothing more.

LeafItAlone

Get what under control? People (me included) still pay it.

And most (me included) would still end up buying the device anyways, maybe just with less storage than they want. And then need to upgrade earlier.

From Apple’s perspective, they seem to have figured it out.

And maybe the upgraded configurations somewhat subsidize the lower end configurations?

xtracto

Exactly!! The prices are a result of extensive market research. Apple prices these things at a price they know people will pay.

It's the beauty of having a product with no real competition in the market.

(BTW, I use Linux as my home and work OS, but I'm a super geek and 20+ year full-stack dev... not their target market, as I can handle the quirks and thousand papercuts of Linux)

ohgr

I don't. I've got a 256 gig M4 mini with a 2TB disk hanging off it.

Saris

Tons of people happily pay for it, so I'd say it works out pretty well for them.

rootbear

Years ago, someone on Usenet explained that Apple upgrade prices are so high because they use components made from the powdered bones of Unicorns and I truly believe that is the truth.

protocolture

I remember being a PC enthusiast in high school, spending my lunch hours pricing up Macs and comparing them to market PC component prices, to laugh at the cost of add-ons. Seems like nothing has changed.

naikrovek

The Studio doesn't use NVMe, but it does put its storage on a removable card. The Mac Mini does as well, so you don't have to pay Apple for the storage you want. There are places that sell storage upgrades for the Mini and the M1 Studio, and they are, of course, cheaper than what Apple charges for the upgrade when you buy the machine. dosdude1 on YouTube has some videos of this exact upgrade, and a bit of googling will help you find vendors. I am assuming this M3 and M4 Studio will be the same, but that's not a guarantee.

blacksmith_tb

I see iFixit[1] rates the storage swap for the M4 Mini as "moderate". I remember thinking that popping RAM into my old 2018 Intel Mini was harder than most laptop repairs I have done... I think I will probably settle for an external NVMe enclosure when I get one.

1: https://www.ifixit.com/Guide/How+to+Replace+the+SSD+in+your+...

Lammy

They've obviously done the math on what percentage of Mac buyers will subscribe to what tier of iCloud storage, times how long people tend to keep each computer, then priced the local storage options above that: https://support.apple.com/en-us/108047

canucker2016

One can upgrade the SSD storage for a M1/M2 Mac Studio through a third party for a lot less money than what Apple requires at purchase time.

I'd expect an upgrade route for the new Mac Studio will appear.

Here's one YouTube video showing an upgrade to 8TB of SSD storage. see https://www.youtube.com/watch?v=HDFCurB3-0Q

metadat

Are the SSDs soldered in place for the desktop machines? Criminal.

sdf4j

The first paragraph that talks about the OS itself is depressing:

>macOS Sequoia completes the new Mac Studio experience with a host of exciting features, including iPhone Mirroring, which allows users to wirelessly interact with their iPhone, its apps, and notifications directly from their Mac.

So that's their highlight for a pro workstation user.

Nevermark

Just be glad they didn't focus on movies, music, and cute apps. Macs seem to be the only product line that continues to semi-dodge the myopic media/services/social kiosk lens Apple now views all their other product lines through.

If that sounds too negative, compare their current vision for their products with Steve Jobs' old vision of "a bicycle for the mind". iOS-type devices are very useful, but unleashing new potential, enabling generational software innovation, just isn't their thing.

(The Vision Pro is "just" another kiosk product for now, but it is hard to tell. The Mac support suggests they MIGHT get it. They should bifurcate:

1. A "Vision" can be the lower-cost iOS-type device: cool apps and movies, plus a virtual Mac screen.

2. A future "Vision Pro" that is a complete Mac replacement: the new high-end Apple device, with a filled-out spatial user interface for real work, etc. No development sandbox, Mx Ultra, top-end resolution and field of view; raise the price, raise the price again, please. It could even do the reverse kind of support: power external screens that continue working like first-class virtual screens when you need to share into the real world.

The Vision Pro should become a maximum-powered post-Mac device, not another Mac satellite. Its user interface possibilities go far beyond what Mac/physical screens will ever do. The new nuclear-powered bicycle for the mind. But I greatly fear they want to box it in and "iPad" everything, even the Mac someday.)

nullpoint420

I agree, except I wonder how they'll do this securely. Imagine if a VS Code plugin could spy on everything in front of me. Opens up a whole new level of security concerns.

rubslopes

It’s like they’re marketing a pro workstation as a glorified iPhone accessory.

rafram

They use a similar line on the MacBook Air page. If you're buying an (up to) $13,000 Mac, hopefully you already understand macOS and its features, I guess.

FloatArtifact

They didn't increase the memory bandwidth. You get the same memory bandwidth that was already available on the M2 Studio. Yes, yes, of course you can get 512GB of unified RAM for 10 grand.

The question is whether an LLM will run with usable performance at that scale. The point is that there are diminishing returns: even with enough unified RAM and the increased processing speed of the new M3 chip for AI, the memory bandwidth stays the same.

espadrine

> whether an LLM will run with usable performance at that scale

Yes.

The reason: MoE. They are able to run at a good speed because they don't load all of the weights into the GPU cores.

For instance, DeepSeek R1 uses 404 GB in Q4 quantization[0], containing 256 experts, of which 8 are routed to[1] (very roughly 13 GB per forward pass). With a memory bandwidth of 800 GB/s[2], the Mac Studio will be able to output roughly 800/13 ≈ 62 tokens per second.

[0]: https://ollama.com/library/deepseek-r1

[1]: https://arxiv.org/pdf/2412.19437

[2]: https://www.apple.com/newsroom/2025/03/apple-unveils-new-mac...
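
To make the arithmetic explicit, a back-of-envelope using the numbers cited above (it ignores the dense attention and shared weights that are also read every token, so treat it as a rough upper bound):

    # Rough decode-speed ceiling for a MoE model on unified memory:
    # per token, only the routed experts' weights must be read from RAM.
    total_weights_gb = 404   # DeepSeek R1 at Q4 quantization [0]
    num_experts = 256        # routed experts [1]
    active_experts = 8       # experts activated per token [1]
    bandwidth_gb_s = 800     # approximate M3 Ultra memory bandwidth [2]

    gb_per_token = total_weights_gb * active_experts / num_experts  # ~12.6 GB
    tokens_per_s = bandwidth_gb_s / gb_per_token                    # ~63
    print(f"~{gb_per_token:.1f} GB/token -> ~{tokens_per_s:.0f} tokens/sec")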

_aavaa_

This doesn’t sound correct.

You don’t know which experts you’ll need for each layer, so you either keep them all loaded in memory or stream them from disk.

espadrine

In RAM, yes. But if you compute an activation, you need to load the weights from RAM to the GPU core.

kgwgk

Note that 404 < 512

fullstackchris

You seem like you know what you are talking about... mind if I ask what your thoughts on quantization are? It's unclear to me whether quantization affects quality... I feel like I've heard both yes and no arguments.

espadrine

There is no question that quantization degrades quality. The GGUF R1 uses Q4_K_M, which, on Llama-3-8B, increases the perplexity by 0.18[0]. Many plots show increasing degradation as you quantize more[1].

That said, it is possible to train a model in a quantization-aware way[2][3], which improves the quality a bit, although not higher than the raw model.

Also, a loss in quality may not be perceptible in a specific use case. Famously, LMArena.ai tested Llama 3.1 405B with bf16 and fp8, and the latter was only 2 Elo points below, well within measurement error.

[0]: https://github.com/ggml-org/llama.cpp/blob/master/examples/q...

[1]: https://github.com/ggml-org/llama.cpp/discussions/5063#discu...

[2]: https://pytorch.org/blog/quantization-aware-training/

[3]: https://mistral.ai/news/ministraux
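
For intuition about where the degradation comes from, here is a toy round-trip through plain 4-bit symmetric quantization (real schemes like Q4_K_M quantize block-wise with extra scale/min parameters, so they lose less than this):

    # Quantize a fake weight vector to 4-bit integers and back, then
    # measure the rounding error that perplexity ultimately reflects.
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.02, size=4096).astype(np.float32)  # toy layer weights

    scale = np.abs(w).max() / 7.0              # signed 4-bit range is [-8, 7]
    q = np.clip(np.round(w / scale), -8, 7)    # 16 representable levels
    w_hat = (q * scale).astype(np.float32)     # dequantized approximation

    rel_err = np.abs(w - w_hat).mean() / np.abs(w).mean()
    print(f"mean relative weight error: {rel_err:.1%}")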

sosuke

I don't know what I'm talking about, but when I first asked your question, this https://gist.github.com/Artefact2/b5f810600771265fc1e3944228... helped start me on a path to understanding. I think.

But if you don't already know, the question you're asking is not at all something I could distill down into a sentence or two that would make sense to a layperson. Even then, I know I couldn't distill it at all, sorry.

Edit: I found the link I referenced above on quantized models by bartowski on Hugging Face: https://huggingface.co/bartowski/Qwen2.5-Coder-14B-GGUF#whic...

Ambix

I did my own experiments and it looks like (surprisingly) Q4_K_M models often outperform Q6 and Q8 quantized models.

For bigger models (in the 8B-70B range), Q4_K_M is very good; there is no degradation compared to full FP16 models.

jazzyjackson

I returned an M2 Max Studio with 96GB RAM; unquantized Llama 3.1 70B was dog slow, not an interactive pace. I'm interested in offline LLMs but couldn't see how it was going to produce $3,000 of ROI.

FloatArtifact

It would be really cool if there was an "are we there yet" website for reasonable offline AI.

It could track different hardware configurations and reasonably standardized benchmark performance per model. I know there are benchmarks buried in the Llama GitHub repository.

robbomacrae

There seems to be a LOT of interest in such a site in the comments here. There seem to be multiple IP issues with sharing your code repo with an online service, so I feel a lot of folks are waiting for the hardware to make this possible.

We need a SWE-bench for open-source LLMs, and for each model to have 3DMark-like benchmarks on various hardware setups.

I did find this, which seems very helpful but is missing the latest models and hardware options: https://kamilstanuch.github.io/LLM-token-generation-simulato...

slama

The M3 Ultra is the only configuration that supports 512GB and it has memory bandwidth of 819GB/s.

wkat4242

True, I also noticed that bigger models run slower at the same memory bandwidth (makes sense).

memhole

Yeah, I don’t think RAM is the bottleneck. Which is unfortunate. It feels like a missed opportunity for them. I think Apple partly became popular because it enabled creatives and developers.

throw-qqqqq

> I don’t think RAM is the bottleneck

Not the size/amount, but the memory bandwidth usually is.
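
A sketch of why that is for dense models: every weight has to stream from memory once per generated token, so bandwidth divided by model size caps throughput no matter how much RAM is left over (real numbers come in lower once compute and overhead are counted):

    # Bandwidth-bound decode ceiling: tokens/sec <= bandwidth / bytes per token,
    # and for a dense model the bytes read per token are the whole model.
    def ceiling_tokens_per_s(model_gb: float, bandwidth_gb_s: float) -> float:
        return bandwidth_gb_s / model_gb

    for model_gb in (8, 70, 400):  # illustrative model sizes in GB
        print(f"{model_gb:>4} GB model @ 800 GB/s -> "
              f"~{ceiling_tokens_per_s(model_gb, 800):.0f} tok/s ceiling")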

blobbers

The previous ranking article said the M3 Ultra was the most powerful chip ever.

The Mac ecosystem is starting to feel like the PC world. Just give me 3 options: cheap, good, and expensive. Having to decide how many dedicated graphics cores a teenager's laptop needs is impossible.

bee_rider

For chips, Ultra and Max are like their workstation chips or something, right? It seems expected that they should be a little more differentiated; they're specialist parts, aren't they?

erickhill

The way I think about it: if I buy a Max chip, I'm getting the performance of the generation that will be released a year later, now, in the current form factor, and then some.

For example, I got the M1 Max when it was new. A year later the M2 came out. Spec-wise, the M1 Max was still a bit better than the M2 Pro in many regards. To me, getting a Max buys you some future-proofing, if you or your company can afford it (and you need that kind of performance). I use the Max for a lot of video work, and it's been fantastic.

fckgw

They have that. On laptops they have the M4, M4 Pro, and M4 Max: cheap, good, and expensive.

eyelidlessness

My buying strategy has been the same since they started soldering RAM: buy the lowest-spec CPU/GPU they offer with the amount of RAM I will need (which, all but once, has been the maximum RAM they offer, and which unfortunately usually means also buying the max CPU/GPU).

zitterbewegung

If you are in college or school, a MacBook Air would be best, and the size of the screen (13 vs. 15 inch) is going to have a bigger impact than the dedicated graphics cores. I would advise against getting a MacBook Pro.

hot_gril

For the teen's laptop, you can simply get the base model. Even a base M1 is more than fast enough.

hart_russell

$14,000 fully configured by the way

shrx

Why don't they provide performance comparisons between the two chips offered, M3 Ultra and M4 Max?

relium

It'll likely be very workload dependent. The M4 Max will probably do a little better in single threaded tasks like browser benchmarks and the M3 Ultra will do better in things like video transcoding and 3D rendering.

shrx

Yes but I'd still like to know what tradeoffs I am making when deciding to get one or the other option. Right now it's all hand-wavy.

WorldWideWebb

So they wouldn’t put the power button on the back of the latest Mini, but they did on the Studio? That’s frustrating (yes, minor nit).

zitterbewegung

This was always part of the original design of the Mac Studio so they have never changed the design. This is a spec bump.

wpm

I have an old-style M1 Mac Mini on my desk, and I could probably count on one hand the number of times I've had to hit the power button. Apple knows this, so they decided it wasn't worth the machining cost to drill a hole in the back of the top shell and engineer a power button to the tolerances you'd expect.

Imagine, my Apple TV doesn't even have a power button! My MacBook yells at me if I accidentally press it when doing a Touch ID!

pourred

I have to hit that power button multiple times a day, because the Mac mini just won't wake up from the USB keyboard/mouse...

Worst of all, it always worked fine on my previous Hackintosh!

jonnrb

Power buttons are for power users. lol

bigtex

You are holding it wrong - Steve Jobs

cyberlimerence

What model can you realistically run with 512 GB of unified memory? I'm curious who the market even is for such an offering.

wkat4242

DeepSeek R1 for one, quantised but not too cripplingly.

numpad0

The full R1 takes >512GB and the 1.52-bit takes >128GB. So enough for agent + app to realize a fully autonomous monolithic AGI humanoid head, potentially, but then it'll be compute-limited...

wkat4242

Yeah, I was thinking more about Q6_K or so. The Q4_K_M is 404GB, so you can still push it a bit higher than that. Obviously the 1.52-bit doesn't make sense.

I'm never going to pay 10k for that though. Hopefully cheaper hardware options are coming soon.

saganus

I assume they are getting ready for the next year or two of LLM development.

Maybe there's not much of a market right now, but who knows if DeepSeek R3 or whatever will need something like this.

It would be awesome to be able to have a high-performance, local-only coding assistant, for example (or any other LLM application, for that matter).

mlboss

The future is local AI agents running on your desktop 24x7. This and NVIDIA Digits will be the hardware to do that.

fluidcruft

At this point, having the power button not be on the bottom is a major selling point for me vs the annoying-as-hell mini.

electriclove

I’ve had the new Mini for a few months and can’t recall having to use the power button.

How often are you using the power button on your Mini? What is your use case?

fluidcruft

It's a shared computer in a hospital used for research data management. Basically, every time I walk up to it to use it, it's turned off.

Maybe Apple should remove power off from the UI menus if they're claiming it uses less energy to leave it on.

(I'm dubious of that claim people are repeating here, but what the hell do I know, I'm just a physicist. Reality distortion isn't my thing.)

marci

If you have a laptop, do you turn it off or just close the lid?

The Mini is probably less power-hungry than the MacBooks (fewer components). I have some WiFi 5/ac routers that consume more power at idle (nothing connected to them) than Apple laptops.

1123581321

Get a label maker and print "LEAVE ON" on the monitor.

dewey

> How often are you using the power button on your Mini? What is your use case?

Every single day, not by choice but because it's constantly waking up from sleep to do maintenance tasks, then overheating and shutting down again. Something about macOS and Bluetooth devices not playing nice.

mort96

How do you turn it on?

If you never turn off your computer, it makes sense that you never use the power button. But some people do turn their computers off, and for us, it's really useful to be able to turn them on again.

dylan604

I'm still on a wired USB full sized keyboard from at least a decade ago, but didn't the newer keyboards see the return of the power button? Did I dream that?

TylerE

Why? Sleep/suspend on macs is incredibly good, and power usage rounds to zero.

7e

An idle Mac doesn't use much power. Why are you turning it off?

yborg

I think another tier needs to be added to the Maslow pyramid for this particular class of complaint. I have had to reboot the M4 Mini on my desk a number of times now and it takes less than 3 seconds to lift the corner an inch and depress the switch.

zie

My thinking is, why would you ever turn it off? They go to sleep and wake up great and barely even sip power when on, let alone when asleep.

fluidcruft

It's a lab computer. You can tell people not to shut it off, but it's still always turned off when I try to use it. Could be being shut down via ITs management tools/policies for all I know.

adamredwoods

>> Mac Studio with M3 Ultra starts with 96GB of unified memory

I still see laptops selling with 8GB of memory, and IMO we should be well past this by now, with 32GB as the minimum. My work laptop still only has 16GB.