tl;dr: he says x86 took over the server market because it was the same architecture developers already had on their own machines, which made it very easy to develop applications locally and then ship them to the servers.
Now these are, among the other points he made, very good arguments about how and why it is hard for ARM to go mainstream in the datacenter. However, I also feel like he kind of lost touch with reality on this one…
He’s comparing two very different situations, or more specifically eras. Developers aren’t as tied to the underlying hardware as they used to be. The software development market evolved from C to very high-level languages such as JavaScript/TypeScript, and the majority of new software is (or will be) written in those languages, so the CPU architecture becomes irrelevant.
Obviously, very big companies such as Google, Microsoft and Amazon are more than happy to pay the small “tax” of making sure JavaScript runs fine on ARM rather than keep paying the big bucks they pay for x86…
What are your thoughts?
He has a strong opinion, but he hasn’t lost the plot. It’s very reasonable to say you need to develop on the architecture you want to deploy to if you want to be efficient, so most companies are going to deploy to the architecture they have locally.
But you’re quoting comments from 2019. Nowadays lots of Mac developers develop directly on ARM. So by his own argument, those Mac developers should be more comfortable deploying to an ARM-based architecture because they’re already running on one.
So broadly I agree with him, or with his past comments from 2019: you’re going to need local developer environments before you’re going to get efficient server software.
I hate my M2 Mac because I hate Macs and Docker doesn’t always work correctly.
hates Macs
buys M2 Mac
Have job
Get paid to suffer
As someone dealing with enterprise software for a living, what he’s saying absolutely makes sense, and I deal mostly in web applications (where I never really have to worry about the low-level stuff).
Just because the top layer seems to be the same doesn’t mean the underlying ones are. There’s a reason why perfect bug compatibility is a thing (or maybe was, in the RHEL ecosystem?).
Things that look like slam dunks in theory never are in practice. Weird bugs pop up from time to time, and believe me, they will!
It might be rare, and you may only see it once or twice in a project; but when it happens, you’re gonna want to be ready, or people will question your ability to do your job.
The cross-compiling point makes sense, but since this is a 4.5-year-old message, the state of ARM in the cloud has changed. Developers now actually do have ARM-based machines, thanks to Apple. AWS has Graviton2 instances, and they are a lot cheaper than similarly specced x86_64 instances. ARM is now a viable option.
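To make the cross-compilation point concrete, here is a minimal sketch (my illustration, not from the thread) of how little the target architecture matters in a language with mature cross-compilation; Go is used as the example, and the file and binary names are made up.

```go
// main.go — hypothetical, minimal service entry point used only to
// illustrate cross-compilation; nothing here is specific to x86 or ARM.
//
// From an x86 laptop you could build an ARM64 Linux binary (for, say, a
// Graviton2 instance) with:
//
//	GOOS=linux GOARCH=arm64 go build -o server-arm64 .
//
// and the equivalent x86_64 binary with:
//
//	GOOS=linux GOARCH=amd64 go build -o server-amd64 .
package main

import (
	"fmt"
	"net/http"
	"runtime"
)

func main() {
	// Report which OS/architecture this binary was compiled for, e.g.
	// "linux/arm64" on Graviton or "darwin/arm64" on an M-series Mac.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "hello from %s/%s\n", runtime.GOOS, runtime.GOARCH)
	})
	http.ListenAndServe(":8080", nil)
}
```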
He was sort of right, back in 2019. Even then, IBM PowerPC mainframes were still thriving.
Now, new languages with reasonably mature cross-compilation are here. Major cloud providers have ARM-based machines ready, some even designing their own chips to fit their needs.
ARM is in the datacenter market and has become a trend.
The only thing I’m worried about is that the ARM landscape is too fragmented. AWS Graviton might behave differently from Ampere Altra, despite both implementing the ARM ISA.
With x86, there are AMD and Intel. With ARM, how many designers are there? The more designers there are, the smaller the potential common ground is, and the more code paths there are to optimize, which costs more to build.
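As a sketch of what those extra code paths can look like in practice (my illustration, not from the thread), a program can probe which optional ARMv8 features a given chip actually exposes, for example via the golang.org/x/sys/cpu package, and dispatch accordingly; the dispatch targets below are hypothetical placeholders.

```go
// Hypothetical sketch: choosing between code paths based on the optional
// ARM64 features a particular chip (Graviton, Ampere Altra, ...) exposes.
package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

func main() {
	// These flags are populated at startup from the kernel's hardware
	// capabilities; they are all false on non-ARM64 hosts, so this compiles
	// and runs anywhere.
	fmt.Println("AES instructions:", cpu.ARM64.HasAES)
	fmt.Println("LSE atomics:     ", cpu.ARM64.HasATOMICS)
	fmt.Println("SVE:             ", cpu.ARM64.HasSVE)

	if cpu.ARM64.HasSVE {
		// hypothetical vectorized implementation tuned for SVE-capable cores
		fmt.Println("would dispatch to the SVE code path")
	} else {
		// portable fallback that any ARMv8-A (or x86) core can run
		fmt.Println("would dispatch to the generic code path")
	}
}
```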