8GB RAM on M3 MacBook Pro ‘Analogous to 16GB’ on PCs, Claims Apple
Following the unveiling of new MacBook Pro models last week, Apple surprised some with the introduction of a base 14-inch MacBook Pro with M3 chip,…
I think they simply mean analogous in price.
What, no! 8 GB of Apple RAM costs 4 times as much as a regular 16 GB DDR4 module.
Edit: 5 times.
The interviewee seems to mean it as memory usage (quoting them): "Comparing our memory to other system’s memory actually isn’t equivalent, because of the fact that we have such an efficient use of memory, and we use memory compression, and we have a unified memory architecture.
Actually, 8GB on an M3 MacBook Pro is probably analogous to 16GB on other systems. We just happen to be able to use it much more efficiently."
I mean, I played with memory compression on Linux too, but it’s not a factor of 2, and you trade it for more CPU utilization/less battery life. And even though software on this side isn’t any less efficient, web browsers, VMs, and games still need the RAM.
I’ve seen this a lot in this thread, but this is Apple we’re talking about. They have billions of dollars to throw at making their memory compression far better than what’s on Linux. I still regularly use an 8GB DDR3 Apple MacBook Air from 2017. It’s not as fast at computing as my 32GB Windows laptop, but it feels more snappy. I also have a 16GB desktop, also Windows, and the MacBook feels just a little slower than that. A little. And that’s DDR3 vs DDR4.
Ok, honestly, snappiness is mostly the duration of animations, hard to judge from that. Billions for better compression is a mixed bag; it must be cheaper than more RAM at manufacturer prices. And, at least on Linux, you can choose the compression algorithm, and lz4 runs at nearly memory speed.
Now I’m curious, are there memory load tests for Linux as well as Mac?
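For anyone who wants to check what compression actually buys them, here’s a rough sketch that reads zram’s counters. It assumes a Linux box with /dev/zram0 already set up (e.g. as swap via zramctl + swapon) and a kernel recent enough to expose mm_stat; the field layout has shifted a bit across kernel versions:

```python
# Rough sketch: read zram's mm_stat counters to see the real compression ratio.
# Assumes /dev/zram0 exists and a reasonably recent kernel.
def zram_stats(dev="zram0"):
    # First three fields: orig_data_size, compr_data_size, mem_used_total (bytes)
    with open(f"/sys/block/{dev}/mm_stat") as f:
        orig, compr, used = (int(x) for x in f.read().split()[:3])
    return orig, compr, used

orig, compr, used = zram_stats()
if compr:
    print(f"{orig / 2**20:.0f} MiB stored in {used / 2**20:.0f} MiB of RAM "
          f"({orig / compr:.2f}x raw ratio)")
```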
If anything, the memory being unified between the GPU and CPU makes it even less than an 8GB equivalent
They were making a joke about it being expensive
because of the fact that we have such an efficient use of memory,
Do they use under 24 MB of RAM in a console session? Or under 500 MB in a GUI? Well, 500 MB is an upper bound; I should probably compare to something less bloated than KDE.
and we have a unified memory architecture.
Really? They still doing UMA?
It’s all unified: CPU, GPU, RAM and SSD as far as I remember.
My dick is 2 inches long. That’s analogous to 10 inches on Mars.
Enough with this Martian slander
Holy crap, Apple charges $200 to add 8GB of RAM. I just bought 32GB of DDR5 for $95 for the Framework I have on order.
And now look at the actual chip prices and you’ll know why Apple is so fucking rich
They have so much prestige and influence under their name that their super fans would buy anything from them at 1000% markup all because it’s a status symbol.
Hell, they could sell bottles of piss and the super fans would gladly sell off all their sperm/eggs and all unnecessary organs just to get a drop of it because sTaTuS SyMbOl.
The infamous Apple Water…
They’d call it the AppleZeus
That might be true, but it’s also embarrassing for all the PC brands who get slaughtered by Apple’s performance as soon as they actually attempt to make things. For example, there isn’t a single laptop in the Windows world that can match anything Apple does
Any laptop with a recent GPU will beat an Apple laptop on performance. Some higher-end CPUs too.
Apple has really good power efficiency, which is great when unplugged, but plugged in, laptops with 200 to 300W TDPs are still better.
lol sure, if you don’t mind an aircraft carrier and mini oven on your legs. Trust me, I used to use one of those 200W laptops, and honestly I didn’t even need a blanket in the winter because they run SO hot. Not to mention the Mac will probably last a lot longer, not running at those insane temps (which literally approach the max safe operating temp of the CPU on some laptops)
So? I know they get hot, that’s what 200-300W means. But when you said “there isn’t a single laptop in the windows world that can match anything apple does”, you were wrong. Laptops with GPUs still win in performance.
Stop moving the goalposts.
Hahahahaha
I have friends who spec for their clients every day (I only do it occasionally). Mac laptops cost anywhere from 50% to 100% more than equivalent Dell and Lenovo laptops (ignoring the even less-costly brands, because none of us spec those).
They all have access to the same hardware. And MacOS, despite the gaslighting in this article, isn’t any more performant on the same hardware in the real world.
They all have access to the same hardware. And MacOS, despite the gaslighting in this article, isn’t any more performant on the same hardware in the real world.
Ah yes please tell me which PC OEMs are using the M-series chips 🙄
Also, AFAIK no other company makes their laptop top case/lid and hinges out of a single piece of metal. ALL other laptops glue/epoxy the hinges to the lid, and they will (sooner rather than later) break. So if durability is what you’re looking for, Macs are still built better than all other laptops.
What a strange boogeyman here. I have a 15 year old HP that was $400 when I bought it and the hinges are still fine.
Have you ever actually seen a laptop lid just break off because the epoxy failed, or is this just a hypothetical? I used my last laptop for around 8 years, I took it with me to college every day in a backpack, on public transit. It got thrown around, scratched up, but the hinges didn’t break lol
I have tbh
It was a shitty gaming laptop, but the hinges broke within a couple years. Still cost me nearly $2k at the time. I highly regret not buying the equivalent MacBook I was offered
Circumstances outside gaming where any high end laptop isn’t good enough is pretty niche and I don’t think this really matters to most consumers. I would prefer to run Linux but at work, my options are Windows or MacOS. It’s a pretty easy choice. Apple products are great when someone else is paying for them.
This is mainly due to Microsoft being incompetent at getting an ARM version of Windows into a usable state.
Snapdragon claims to have an M2-level ARM chip for desktop/laptop. It will likely be pretty comparable to the M2, but what OS will it run? Not Windows. In fact, at the enterprise level, Lenovo is selling laptops with Android in desktop mode.
So, once that chip is released, I’m guessing you’ll see great options with Linux and Android. Who knows, maybe Microsoft will surprise us with an update that lets Windows run reasonably well on ARM.
Won’t someone please consider that Apple has to spend all the extra effort soldering the ram to the motherboard?? Only $200 extra is a steal!
DDR5 runs at 52GB/s. Apple uses RAM that runs at up to 800GB/s (if you have enough, gets faster the more you have since it runs in parallel… but it’s never as slow as DDR5).
Huge doubt here. Apple RAM is LPDDR5. That’s Low Power DDR5.
Citing this site:
LPDDR5 runs up to 6400 Mbps with many low-power and RAS features including a novel clocking architecture for easier timing closure. DDR5 DRAMs with a data-rate up to 6400 Mbps support higher density including a dual-channel DIMM topology for higher channel efficiency and performance.
I’m looking at the Apple M2 Wikipedia page and it has the 800GB/s number you have, but that’s gotta be something like RAM speed times number of RAM unit blocks for overall bandwidth.
Apple RAM is not magically 15 times faster than DDR5.
tl;dr
The memory bandwidth isn’t magic, nor special, but generally meaningless. MT/s matter more, but Apple’s non-magic is generally higher than the industry standard in compact form factors.
Long version:
How are such wrong numbers so widely upvoted? The 6400Mbps is per pin.
Generally, DDR5 has a 64-bit data bus. The standard names also indicate the speed per module: PC5-32000 transfers 32GB/s (64 bits at 4000MT/s), and PC5-64000 transfers 64GB/s (64 bits at 8000MT/s). At those speeds, it isn’t hard for a DDR5 desktop or server to reach similar bandwidth.
Apple doubles the data bus from 64 bits to 128 bits (which is still nothing compared to something like an RTX 4090, with a 384-bit data bus). With that, Apple can get 102.4GB/s from just one module instead of the standard 51.2GB/s. The cited 800GB/s is with 8 of them; most comparable hardware does not allow 8 memory modules.
Ironically, the memory bandwidth is pretty much irrelevant compared to the MT/s. To quote Dell defending their CAMM modules:
In a 12th-gen Intel laptop using two SO-DIMMs, for example, you can reach DDR5/4800 transfer speeds. But push it to a four-DIMM design, such as in a laptop with 128GB of RAM, and you have to ratchet it back to DDR5/4000 transfer speeds.
That contradiction makes it hard to balance speed, capacity, and upgradability. Even the upcoming Core Ultra 9 185H seems rated for 5600 MT/s; after 2 years, we’re almost getting PC laptops that match the memory speed of MacBooks. This wasn’t Apple being magical, just Apple taking advantage of OEMs dropping the ball on how important memory can be to performance. The memory bandwidth is just the cherry on top.
The standard supports these speeds and faster. To be clear, these speeds and capacities don’t do ANYTHING to support "8GB is analogous to…" statements. It won’t take magic to beat, but the PC industry doesn’t yet have much competition in the performance and form factors Apple is targeting. In the meantime, Apple is milking its customers: the M3s have the same MT/s and memory technology as two years ago. It’s almost as if they looked at the next 6-12 months and went: "They still haven’t caught up, so we don’t need to go much faster yet, but we can make a lot of money while we wait."
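If you want to sanity-check any of these numbers, the arithmetic is just transfer rate times bus width. A quick sketch (peak theoretical figures, ignoring real-world overhead):

```python
# Peak theoretical bandwidth = transfers per second * bus width in bytes.
def bandwidth_gbs(mts, bus_bits):
    return mts * 1e6 * (bus_bits / 8) / 1e9  # GB/s

print(bandwidth_gbs(4000, 64))       # PC5-32000 module: 32.0 GB/s
print(bandwidth_gbs(8000, 64))       # PC5-64000 module: 64.0 GB/s
print(bandwidth_gbs(6400, 128))      # one 128-bit LPDDR5 package: 102.4 GB/s
print(bandwidth_gbs(6400, 128) * 8)  # 8 of them: 819.2 GB/s, the "800GB/s"
```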
Faster isn’t everything. Less, faster RAM only helps a few applications, whereas more, slower RAM benefits everything.
It’s definitely comparable, because that’s what it’s competing against. 16GB of RAM is 16GB of RAM, no matter how fast it is. Pricing it at 2-3x the cost of any other equivalent isn’t competitive at all.
You’re comparing single channel performance to entire system performance.
That statement simply means the highest-end Mac has 16 memory channels (admittedly more than EPYC’s 12, but EPYC is in the ballpark). The mere-mortal entry M2 has two channels, just like almost every desktop/laptop-grade x86 CPU. They are not getting 800 out of only 8GB of modules.
I’m really interested in this kinda thing, do you have sources I can read?
What I found was DDR5 runs at a max of 64 GB/s, and the M2 Pro runs at 400 GB/s.
I can’t find anything about it being faster due to running in parallel.
Edit: I found it, looks like the M2 Ultra runs at 800 GB/s, cool. If I’m reading correctly, this was done by connecting two M2 Max dies. Also, the PS5 allegedly has over 400GB/s of bandwidth, just for perspective.
Note those are comparing different numbers.
The number you quoted was for a single memory channel.
A processor has as many memory channels as its designer feels like. So that 800 number basically means about 16 channels. The plain M2 seems to be a two-channel design.
For comparison, x86 desktop CPUs have long been 2-channel designs. Go up the stack and you have things like EPYC with 12 channels.
So for a single-socket design, Apple likely has a higher max memory bandwidth than you can get single-socket in x86 (but would likely turn in lower numbers than a dual-socket x86 box).
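The scaling math is dead simple, if a sketch helps (assuming DDR5-6400, where one 64-bit channel moves 51.2 GB/s peak; channel counts are ballpark):

```python
# Total peak bandwidth scales linearly with memory channel count.
per_channel = 6400e6 * 8 / 1e9  # DDR5-6400, 64-bit channel: 51.2 GB/s

for channels, label in [(2, "typical desktop"), (12, "EPYC"),
                        (16, "roughly what 800GB/s implies")]:
    print(f"{channels:>2} channels ({label}): {channels * per_channel:.0f} GB/s")
```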
So to clarify, the M2 Ultra runs at 800 GB/s because it’s utilizing multiple memory channels, which is like running dual/quad/etc. channel RAM in an x86 PC. So at the max 64 GB/s bandwidth of DDR5 ram, you could run quad channel and get 256 GB/s right? And getting up to 12 channels of DDR5 could mean a bandwidth of 768 GB/sec?
Yeah, in that case Apple is definitely overcharging. To be fair, my motherboard can’t run 12 channels of RAM, but it also won’t cost me an arm and a leg and a kidney to get similar performance per GB
Note that I can’t think of a modern four-channel x86. They’re either the usual two channels (two DIMMs per channel is how four DIMM slots are organized) or way more (Sapphire Rapids, Bergamo).
To map the M2 line: the base is about the same as most x86 consumer grade, the Pro is about Threadripper, and the Ultra is somewhere between single- and dual-socket Bergamo, at least in terms of memory bandwidth, which is a highly specific metric.
Oh gotcha, thanks for straightening me out on that. I’m still learning tech so the examples are really helpful
The PS5 uses a unified 16GB of GDDR6, which has really high bandwidth for graphics applications. Apple is full of shit about their LPDDR5.
Sorry you got downvoted so hard. Your comment spawned a lot of discussion, which is a good thing.
RAM is RAM. If you’re able to manage it better, that’s nice, but programs will still use whatever RAM they were designed to use. If you need to store 5 GiB of something in memory, what happens with the other 2.5 GiB, if they claim that it’s 2x as “efficient?”
Definitely true, but I will say Mac has pretty decent compression on RAM. I’m assuming that’s why they feel this way. My old MBP 2013 had 8, and I used it constantly until earlier this year when I finally upgraded. It was doing pretty well all things considered, mostly because of on the fly RAM compression.
Lower end macs tend to have slower SSDs so this could be a double whammy on these machines.
I’m specifically talking about the in memory compression, not swap.
But memory compression works the same way swap works. When memory is needed, the LRU page is compressed (rather than written to disk), and when an application needs to read data from a compressed page, it generates a page fault and the OS decompresses the page back into RAM. That’s it.
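A toy sketch of that flow, if it helps picture it (purely illustrative; real kernels do this per 4KiB page with specialized allocators, not Python dicts):

```python
# Toy model of memory compression: evict the least-recently-used "page" by
# compressing it; a later access "faults" and decompresses it back.
import zlib
from collections import OrderedDict

PAGE_LIMIT = 2        # uncompressed pages allowed "in RAM"
hot = OrderedDict()   # uncompressed pages, in LRU order
cold = {}             # compressed pages

def _evict():
    while len(hot) > PAGE_LIMIT:
        victim, raw = hot.popitem(last=False)  # pop the LRU page
        cold[victim] = zlib.compress(raw)

def write(page, data):
    hot[page] = data
    hot.move_to_end(page)
    _evict()

def read(page):
    if page not in hot:                        # "page fault"
        hot[page] = zlib.decompress(cold.pop(page))
        _evict()
    hot.move_to_end(page)
    return hot[page]

write(1, b"a" * 4096); write(2, b"b" * 4096); write(3, b"c" * 4096)
print(read(1)[:1], sorted(cold))  # page 1 faulted back in; page 2 now compressed
```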
Pretty sure Windows has been doing some pretty efficient RAM compression of its own since 98SE or something
They actually just added it in Windows 10. There were third-party add-ons to do it before then, but they had marginal impact in my experience.
Did you know that you could do RAM compression on an "old" MBP 2013? All you had to do was install Linux and enable memory compression.
RAM is not RAM though. If one RAM is twice as fast as another, it can swap shit back and forth really fast, making it more efficient per GB. Because Apple solders the RAM next to the chip, they can make their RAM a lot faster. The M3 Max’s RAM has 6x the bandwidth of DDR5 and much lower latency too.
Also, macOS needs less RAM in general. Is 8GB of RAM enough? No. But I would bet money on a 12GB M3 over a 16GB PC to have fewer RAM issues and faster performance.
Most of the things that "use" RAM in everyday PC use don’t actually need RAM. It’s just parked assets, web pages, etc. Things that, if you have really fast RAM, can be re-cached into RAM pretty quickly, especially if your storage is also really fast.
RAM transfer rate is not important when swapping, as the bottleneck will be the storage transfer rate when reading and writing to swap.
Which I doubt Apple can make as fast as DDR4 bandwidth.
I have a tank that can hold 500 gallons of water. It’s connected to a pumping system that can do 1000 gallons a minute on the discharge side. So it’s just as good as a 2000 gallon tank!
What do you mean incoming water? Look at my discharge rate!
Because Apple is soldering ram next to the chip, it enables them to make their RAM a lot faster.
What a load of bullshit.
Of all the points in their blatantly wrong comment, this probably wasn’t the one to single out. The reason for the soldered RAM is the speed and length of the traces. The longer the trace, the more chance there is for signal loss. By soldering the RAM close to the CPU, the traces are shorter, allowing for a minuscule improvement in latency.
To be clear, I don’t like it either. It’s one of the major things holding me back from buying a MacBook right now.
The longer the trace, the more chance there is for signal loss.
While this is true on paper, we don’t need to pretend that this is an unsolved problem in reality. It’s not like large-scale motherboard manufacturers simply refuse to put their RAM closer to the CPU, and it’s littered with data loss. Apple also didn’t do anything innovative by soldering the RAM onto their motherboards. This is simply bootlicking Apple for what’s actually planned obsolescence.
I can’t speak for this particular instance but the reason swappable RAM sticks aren’t “littered with data loss” is because they are designed not to. I.e. Only rated up to a certain speed and timings. Putting RAM physically closer to the CPU does allow you to utilize the RAM better. It’s physics.
Personally, I’d rather take a performance hit than be stuck with a set amount of RAM unless there was some ungodly performance gain.
Putting RAM physically closer to the CPU does allow you to utilize the RAM better. It’s physics.
If the RAM was 3x closer, would it somehow be faster? I’m looking for metrics. With the same stick of any given DDR5, how much performance loss is there on a few example motherboards of your choice?
My point, again, is that yes, on paper, shorter wires means less opportunity for inductance issues, noise, voltage drop, cross-talk, etc. But this is all on paper.
It’s not like every motherboard manufacturer doesn’t know what they’re doing and Apple’s brilliant engineers somehow got a higher clock speed than what the RAM is rated for because… shorter wires?
Case in point: DDR4 is meant to operate at a maximum clock speed per the DDR4 spec. Yet plenty of overclock-capable motherboards support memory at more than 3x the clock DDR4 should be capable of. How does this work with memory that is not soldered to the motherboard?
Additionally, without overclocking, the memory is designed to operate at a clock speed. Will shorter traces to the RAM magically increase the capable clock speed of the RAM? Are these the “physics” you’re referring to?
I know I’ve seen something about this topic. I want to say it was from LTT but I can’t find the video.
I didn’t say anything about it being faster. I said utilize it better. Lower latency can be a big help as it allows quicker access. Think of HDD vs SSD. The biggest advantage in the beginning was the much lower latency SSDs provided. Made things a lot snappier even if the speed/throughput wasn’t all that different.
I don’t know what kind of difference we’re talking about here, or how much real-world performance benefit there is, but there’s a reason CPUs have caches on the die.
And that doesn’t include whatever other benefits shorter traces provide. Less voltage drop might be helpful.
But flexibility must still be worth more than those gains, or most manufacturers would have switched. At some point you start running out of better ways to improve performance, though. That’s why things are going back to being integrated with the CPU again.
More apple gaslighting.
Since 1990.
I remember their CPU cycles were “worth more”.
They also always cheap out on stuff, even when they used "PC" hardware: CPUs from 4 years ago, etc., and the RAM & HDD/SSD were so small you basically had to buy a "next tier" machine (much more expensive).
I remember their CPU cycles were “worth more”.
But this exists… There are still differences among PC CPUs in per-cycle performance…
Sure, but their claim was just bullshit back then.
They did optimise Photoshop wildly for Mac to try to show they were right IIRC.
Lol no. My poor Linux kernel barely keeps everything stable in 8GB, and even then by shoving stuff into swap on zram.
I can just barely run a game and have a ton of FF tabs open + an IDE + discord + multiple desktops
Windows basically dies once you hit the swap, and it usually starts at like 2GB used.
I’m assuming MacOS lies between Linux and Windows in memory management and performance, so it’ll definitely start lagging if you open too much.
And this is all ignoring the fact that this is a scam statement that should be struck down by the FTC. You can’t call an 8 gallon gas tank equivalent to a 16 gallon gas tank even if your car has better MPG. In that case you advertise the MPG. And in Apple’s case, it would be something like "X% less RAM usage per system process", which we all know doesn’t actually exist because it’s snake-ass Apple.
And this is all ignoring the fact that this is a scam statement that should be struck down by the FTC. You can’t call an 8 gallon gas tank equivalent to a 16 gallon gas tank even if your car has better MPG.
lol good luck when Tesla literally charges $12k for “full self driving” software that does not do what’s advertised nor do what was promised over the last 10 years the CEO has been selling it. FTC and other orgs are toothless when it comes to false advertising, they’ll do nothing.
What helps these machines are built-in SSDs that operate at about 2 GB/s. If swapping out 2 GB of background tabs you’re not looking at when you switch to your IDE takes a second, you’re not really going to notice it. Only if you’re actually trying to operate with all the memory at the same time (big Kubernetes test suites or something) is when the swapping becomes noticeable.
So are they going to make the software smaller? What about iOS? Physically how does 8GB = 16GB? Can’t wait to see Photoshop open a RAW and run out of memory. I will say the M2 CPU was pretty slick and if I got one cheap I’d throw Linux on there.
This ^
Architecture changes can happen as much as they want, but there’s certain tasks that require a fixed amount of memory, and between that and poor developer optimization I doubt these improvements will be seen by the end user.
The CPUs really are great. It’s hard to want any other laptop when the performance/battery life are so great on the M series
I find it pretty easy to want other laptops because I don’t use Apple stuff because I dislike their UX. I know I’m weird but if I never have to get close to OSX or iOS I’m pretty happy.
Thanks for saying this, it’s such an unpopular opinion.
I got a Mac Mini last year and it was dreadful. I used nothing but the Mac for 2 months and still couldn’t get used to it. Half the things required the use of both mouse and keyboard; neither is sufficient on its own for the most basic of things. Finally sold it off and went back to my PC dual-booting Windows and Ubuntu.
To be fair, the Mac mini is the worst way to experience macOS. macOS works much better with a trackpad and Apple keyboard, which eliminates a lot of your problems I think.
I also have a Mac mini and it barely gets used because it’s a shitty experience to use with a typical windows based mouse and keyboard.
Why don’t you use an Apple keyboard/mouse/trackpad with your Mac mini?
Honestly just because I didn’t have one and didn’t want to spend another ~$250 for the combo. I had spare windows keyboards already, so didn’t feel the need to replace them.
If I was to seriously use it again, I’d probably get one, but it’s more likely I’ll sell it to fund a new MacBook Pro
But how is that a criticism of the Mac mini? All it does is give you purchasing flexibility (eg if you already own Apple kbd/mouse). It is like you are implicitly arguing that they should raise the price and include those components. But that would be bad for some consumers that already have those items and would help nobody because you can already buy those separately.
I would definitely agree to that. Even as I used it, I could see certain elements designed in a way that would suit a trackpad better.
The worst part was the scrolling experience. It was either too slow or too fast. Could never scroll at a comfortable speed. Never feel this way when I sometimes use my colleagues’ macbooks (my company provides Macs, but I need certain applications which necessitate a Windows machine for me).
Yep, scroll speed is always wrong on a regular mouse from my experience.
Can’t blame you for not liking it, they really do somewhat punish you for using accessories outside their ecosystem.
It’s good at what it does, it certainly isn’t perfect. I have a mix of all 3 major OS’s running at most times and don’t really hold any loyalty to any one of them besides what’s easiest to do a current task with.
I really like Apple’s OS, and their UX is miles better than Windows and any Linux desktop I’ve used. The main computer runs Windows, but if I never have to touch another Windows laptop I’d be very happy. The trackpad experience alone makes it better than any Windows laptop I’ve ever used.
Linux is flat out not an option in any way since Lightroom and photoshop don’t run.
Fair enough. As I’ve gotten into photography, I have avoided all Adobe products due to how shit they are from an OS-clean-living point of view, and then there’s the subscription. But I also don’t heavily edit my photos, so ART/RawTherapee and GIMP work OK for me.
I’m working on becoming a freelance photographer, so adobe isn’t even close to optional. Unfortunately adobe makes good products on a shitty subscription model.
I’m going to take another look at darktable, but the size of my library doesn’t help anything with open source programs
deleted by creator
First looks don’t look super positive. Sounds like they’re going down the enshittification route though. I figure if I’ll pay I’m going to pay for adobe since I really do hate how much I like using their products
I think this is another place I just don’t get because I never used Adobe seriously - what is a size of library? And why would it affect OSS programs specifically? I just use my file manager (thunar or krusader) or CLI (bash) and both work pretty well with dozens to hundreds of files per folder, and I try to not have thousands of pictures in a given folder because that just means I’ve got a messed up pile of photos to ever refer back to. My current trip length and amount of photos will mean I need to break it up when I copy them over to my RAID, but I’d want to break up by day / location anyway so I can go back and find them later.
My 2023 Lightroom library sits at ~70k images and is about 1.5TB of files (I shoot 45MP RAW files; a memory card is 256GB). Simply put, if the optimization of whatever I’m using isn’t extremely well done, it bogs down quickly. Even Lightroom struggles occasionally.
All of my organization is done through Lightroom, pictures get dumped to a 16TB RAID array in my local machine, then go into archival storage. I don’t do manual folder organization because it’s extremely slow for my uses.
Damn, I find it the opposite. Apple’s UX is a nightmare, though better than Windows. The window management is dreadful, tiling doesn’t exist, and you have to remember the craziest keyboard shortcuts to do the most basic of things.
Tiling can be replaced by 3rd party apps pretty easily.
I’m more interested in what keybinds you’re needing to use constantly. I can’t think of more than 3 or 4 that get used, all the others are extreme niche cases
Tiling can be replaced by 3rd party apps pretty easily.
I’m sorry but "you can get [basic OS functionality that has existed elsewhere for 15+ years] by installing this random third party tool and praying it works in future updates" isn’t acceptable.
Linux would rightly be criticised if it were like this. MacOS not having tiling is a joke.
There are other bizarre usability issues. Like not being able to minimise a program by clicking on its icon in the dock. An action that’s far easier than aiming for the small minimise button on the top left of the window.
The app grid isn’t integrated with the multitasking view, which is a bit limiting - you can’t drag apps onto virtual desktops as you open them, which is a really good way to organise your workflow.
But again I can’t get over just how terrible window management is on MacOS. It is a nightmare. Pressing close doesn’t close an app it just hides it and keeps the app running, what? Pressing maximise doesn’t maximise the window, it goes full screen. Whyyyyyyy? Maximising a window is such a basic thing, it should be easy! But I’m sure in MacOS style, that’s something you can probably do with some crazy keyboard shortcut+clicking on the green button.
About the shortcuts thing - for example, holding Alt while clicking maximise enlarges the window to show more content, without fully maximising the window. Neat. That’s a useful feature. But it’s not discoverable. It’s not in the GUI, there’s no tutorial, there’s no hint at all.
MacOS is full of random keyboard shortcuts like this, ones that you’re never told about.
Mission control is alright but it’s faaaaaar behind Gnome’s Activities view or KDE’s Overview.
The MacOS store looks pretty but it’s so limited in terms of programs. E.g. I couldn’t find VLC, probably the most popular video player that exists. On Linux, if an app exists, it’s 99% guaranteed to be in the app store.
App management is good if the app is in the store, but my god if you have to do the Windows-style hunt for a download online then manually install it’s annoying.
Search for app online. Find the right website. Navigate to the download page. Download the DMG file. Open the DMG file. Drag the app to your applications folder. Delete the DMG file you downloaded.
It’s just needlessly complex. And I’ve seen people get confused and run the app directly from the DMG rather than moving it to the applications folder, which causes issues sometimes when the app runs in read only mode. Why doesn’t it just pop up with a “would you like to install this app to your applications folder? Yes/No” on first running? And if you say yes, it moves the files and deletes the DMG? It’s needlessly complicated.
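The steps are mechanical enough that you could script the whole dance yourself, which rather proves the point. A rough sketch using macOS’s stock hdiutil tool (the DMG path is a placeholder):

```python
# Rough sketch: mount a DMG, copy the .app bundle into /Applications, unmount.
# Uses macOS's stock hdiutil; the DMG path below is a placeholder.
import glob, shutil, subprocess

DMG = "/Users/me/Downloads/SomeApp.dmg"  # placeholder
MNT = "/tmp/dmg_mount"

subprocess.run(["hdiutil", "attach", DMG, "-nobrowse", "-mountpoint", MNT],
               check=True)
try:
    app = glob.glob(f"{MNT}/*.app")[0]                      # find the bundle
    shutil.copytree(app, "/Applications/" + app.rsplit("/", 1)[-1],
                    symlinks=True)                          # keep bundle symlinks
finally:
    subprocess.run(["hdiutil", "detach", MNT], check=True)
```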
There are good things. Most of it is sleek and pretty like Gnome, being able to “stack” files of a certain type is a very cool feature, the widgets are all well designed and consistent, and all use a standardised API. The system tray icons work in a better and more standardised way than anywhere else IMO. Spotlight works just as well as Gnome’s search or kRunner, possibly better, even.
Look I could go on all day with this, good and bad, I’m a bit obsessive over these things, UX is a passion of mine, I even did my thesis on UX design trends. I could talk about Windows or various Linux DEs the same way.
But overall, MacOS just feels old. It’s a bizarre mix of looking very polished whilst also being clunky and feeling 15 years out of date in terms of how it actually works and the way you interact with it.
there’s certain tasks that require a fixed amount of memory
Sure… and for editing a 12 megapixel photo that number is 384MB (raw or jpeg is irrelevant by the way - it’s the megapixels that matter).
As you add layers, you need more memory… but to run into issues at 8GB you’d need a lot of layers. And nobody is saying 8GB is enough for everyone, Apple does sell laptops with 128GB of RAM. They wouldn’t do that if nobody needed it.
And Photoshop, which has its origins in the late 1980s, is actually pretty lean. Back in those days it was common to have only one megabyte of RAM, and Adobe has kept a lot of the memory-management gymnastics they needed to fit within that limit. If you run out of memory, it will make smart decisions about what to keep in RAM vs. move to swap.
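For the curious, that 384MB figure is straight pixel arithmetic: uncompressed buffer size in MB is just megapixels times bytes per pixel, and 384MB works out to 32 bytes per pixel (my guess at the breakdown would be a high-bit-depth RGBA working buffer plus compositing copies; Adobe doesn’t publish the exact split):

```python
# Uncompressed buffer size: megapixels * bytes per pixel = megabytes
# (12e6 px * 32 B = 384e6 B = 384 MB).
def buffer_mb(megapixels, bytes_per_px):
    return megapixels * bytes_per_px

print(buffer_mb(12, 4))   # 48 MB  - one 8-bit RGBA layer
print(buffer_mb(12, 8))   # 96 MB  - one 16-bit RGBA layer
print(buffer_mb(12, 32))  # 384 MB - the figure above: 32 B/px of working buffers
```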
photoshop […] lean
???
Yeah, not sure if they’ve used PS in the last few years, but lean is not a word I’d use to describe it
You’re entirely leaving out the ~2-4GB of system overhead, the 1-2GB just to have PS open, and the headroom you need on top.
Oh, and by the way, Lightroom eats ~45GB of RAM when importing. File sizes are also much bigger for any decent camera now; I shoot 45MP and the files are huge.
I call BS. My 8gb Mac Mini is terrible and constantly running out of memory.
I’m in need of a new laptop, but the lack of upgradeable RAM in these has really made it hard to justify. A minimum of 32GB, preferably 64GB (I’m a photographer working with very large files), costs hundreds extra and can’t be added by myself anymore. It’s also hard to find these configurations used, as the people who buy them have a specific use case and don’t replace them often.
Same here with my M1 MBA. Simply opening a browser with a bunch of tabs + a couple of productivity apps eats up your RAM.
Thanks for sharing. I was eyeing the M2 MBA for my wife because I’m pretty satisfied with my work M1 MBP (32GB RAM), but it seems even Apple Silicon won’t really do much with just 8GB of RAM anyway.
Is it an M3?
M1, but same idea. One app can easily eat 8GB of RAM on it. No matter the improvements to the architecture, they’re not going to be able to solve for poor implementation on the dev side
deleted by creator
Just waiting for Apple to just start trademarking and “inventing” units…
Our new Mac has the highest amount of “Rapid Storage Blocks ™” of any Mac ever! Enough to run 30 “Safari Experiences ™”.
Check out their "SSDs": hard drives with SSD cache memory.
Not a bad idea (if done correctly, without creating two points of failure), but both parts were so ridiculously small, like a 32GB SSD for a 1TB HDD, when a 1TB SSD cost like 200€…
They haven’t sold Fusion Drives in years.
My 0.7l wine bottle can fit 3l of water
Absolutely hilarious. But I’ve heard MacBoys parrot this exact same line of propaganda for years now.
16GB is totally enough! Apple will manage my 20GB of required RAM in such a way that it’ll fit in my 16GB (actually ~15)
I have not. As someone in the Mac community I can tell you that Apple enthusiasts are Apple’s harshest critics. They are the type of people who care a lot about details like this, and have been criticizing Apple for years on the amount of RAM in entry level systems, as well as the absolute rip-off prices they charge for RAM upgrades.
If they are rip-off prices and bad RAM, then why continue buying them? Doesn’t that imply getting ripped off?
When I argued that for the money you shell out for a Mac, you could get a machine with 1.5-2x the specs, the constant thing I heard was "yeah, but you can’t compare a SoC with a normal laptop/PC". Then came the argument about RAM not mattering, CPU speed not mattering, and so on. It would be cheaper for companies to give devs Linux machines than Macs, but they don’t want to hear it, because Macs are now the table tennis tables, pool tables, arcade machines, avocado toast, and "we’re a family" of companies.
the mac community
holy fuck i hate consumerism why don’t you just kick me in the nuts instead of burning my eyes out like this
MacOS is pretty decent at memory management. That being said, 8GB of RAM is ridiculous in 2023. Newer and updated applications are tossing memory management out the window. Platforms like Electron wreck memory usage, and many popular desktop apps use Electron now. 12GB is the new 8GB for Macs. 32GB is the new 16GB for PCs. I wouldn’t recommend a computer with less than 12GB of RAM for more than $300.
Yes, which is why anyone spending anywhere close to this on a laptop gets 32 GB of RAM.