> Nothing on the ARM, Qualcomm, or MediaTek roadmap is even close.
Don't count on the MediaTeks of the world to spend any time on this market. Most of the licensees are focusing on high-volume embedded cases where they can try to drag themselves above the commodity pricing floor by providing a package of SoC and (in my experience crappy) drivers and an Android port. In fact "package" is the key word: they typically won't sell you the chip in a useful way (i.e. no data sheets) but instead want to sell you a reference board with ported software running on top.
Apple is unusual in that they have two value add products that matter and have managed to, over time, build the capability to handle it. The economic calculus for them is quite different from that of the merchant chip vendors.
There are a few possible alternatives like Qualcomm and Nvidia (!!) but the market dynamics aren't the same for them.
It is possible that an Nvidia/Microsoft relationship could develop somewhat similar to the Intel/Microsoft partnership, but unlikely. The market is different and the regulatory environment may become different. Terminal devices (laptops, desktops, phones) aren't really where the money is any more (except for Apple, an exception on several dimensions), so the push for such a partnership is diminished.
More likely is a partnership addressing cloud datacenters between say MS or Amazon and Nvidia, but nobody is going to be willing to allow one chip vendor to have any say over their destiny. It would make sense for one of the top cloud vendors (AWS, Azure, IBM) to buy Nvidia but that would never be allowed.
Another option is AMD. Originally they were going to have a Zen variant that ran ARM. They shelved that plan once the x86 version of Zen showed promise, but there are rumours that the plan continues on the back burner.
Dual ISA would be pretty cool. The more I learn about the differences between ARM and x86, the more it starts to seem like it really is not that much of a stretch. Just need the market for it.
At this point it seems more and more that x86 is still around purely for legacy reasons and path dependency. Intel's failure to execute on mobile really opened the door for ARM to have a decade's worth of resources thrown into its development as a platform, and now the M1 is an extremely solid proof of concept that tentative forays into ARM on desktops/laptops/servers will absolutely warrant the shift in platforms. A few years for software ports and an extra two generations of chip improvements, and I think Intel's failure to execute on its fab processes will just become a moot point. I have more confidence in AMD's ability to pivot, and they have their GPU business as well. Intel may be relegated to aging obsolescence the same way HP and IBM were with their "big iron".
Hah, true. That was absolutely the wrong chip at the wrong time, poorly implemented and lacking the software base of other platforms. I have no idea why they built the thing rather than put more resources behind Xeon, which was basically the end result anyway.
Multi-ISA chips have been announced from a couple of vendors (Loongson and Tachyum). The Loongson one does a MIPSish native ISA, MIPS, ARM, x86 and RISC-V, with the last three being slower.
That has always been a question of when, not if. And the M1 shows that it is quickly approaching. In a market where physical limits have slowed down miniaturization, perhaps just 5-10 years from a standstill, a low-efficiency instruction set focused on backwards binary compatibility is going to get washed out of most use cases very quickly. It's going the way of the IBM mainframe - still sold decades after its heyday, but marginalized to a niche (a bunch of enterprise cloud instances for software that is too expensive to recompile/port).
Not true from what I’ve read. The problem with x86 is not complexity but variable instruction lengths. This makes it hard to determine instruction boundaries, complicating decoders and making more than 4-way decode hard. M1 has 8 decoders for comparison, and it would be easy to have 100 with ARM if there were a benefit to that many.
1. AMD64 is a lot easier to decode than the legacy x86 8/16/32-bit instruction set.
2. Decoding took a significant portion of the chip when chips were a million transistors. The M1 has 16 billion transistors; decoding is a much less significant fraction of the chip.
> The limitation of the x86 decoder is that each clock cycle can only process up to 4 instructions, but if it is a familiar instruction, it can be placed in the operation cache, and 8 instructions can be processed per cycle.
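The serial-dependency problem described above can be made concrete with a toy sketch. Note the encoding here is entirely made up (the first byte of each instruction gives its length) and is not real x86 or ARM machine code; it only illustrates why fixed-width boundaries are known up front while variable-length boundaries form a dependency chain:

```python
def decode_fixed(stream: bytes, width: int = 4):
    """Fixed-width ISA (ARM-style): every instruction boundary is known
    in advance, so N decoders can all start in the same cycle."""
    return [stream[i:i + width] for i in range(0, len(stream), width)]

def decode_variable(stream: bytes):
    """Variable-length ISA (x86-style, 1-15 bytes): the start of
    instruction N+1 is unknown until instruction N has been
    length-decoded, so boundary-finding is inherently serial.
    Toy encoding: first byte of each instruction = its total length."""
    insns, i = [], 0
    while i < len(stream):
        length = stream[i]          # must decode this before moving on
        insns.append(stream[i:i + length])
        i += length
    return insns

# Fixed width: boundaries at 0, 4, 8, 12 regardless of content.
fixed = decode_fixed(bytes(range(16)))
# Variable width: lengths 2, 3, 1 only emerge as decoding proceeds.
variable = decode_variable(bytes([2, 9, 3, 0, 0, 1]))
```

Real x86 decoders mitigate this with predecode bits and the micro-op cache mentioned in the quote above, which is exactly why cached (familiar) instructions can bypass the 4-wide length-decode bottleneck.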
Back in the Andy Grove era Intel exited its original business (memory) while it was profitable, foreseeing its rapid commoditization. The Intel of today sold off its excellent ARM business (acquired from DEC) and is incapable of doing anything but ride a sinking ship to doom.
Eh, Intel was so scared of itself it even stopped Atom getting properly powerful, and that was x86. No way they will get into the ARM team, the instant what they build starts being powerful they will get scared of it too and kill it.
That's a good part of why they take such a beating in x86 too: Intel has no backup plan because they're scared of their own backup plan, while being oblivious to the fact that it was a backup plan that saved them during the post-P4 era.
At least AMD is not scared of throwing away their arch every few years and restart, they've pretty much been doing that like clockwork every five years for the past two decades.
If you mean AMD couldn't match what Apple was willing to pay when you say "Apple was the only customer willing to pay", then absolutely. Otherwise, when push comes to shove in a bidding war with Apple, I don't think AMD really has a chance.
> Originally they were going to have a Zen variant that ran ARM.
Was AMD's K12 related to Zen in any way? AMD's predecessor to that, the Opteron A1100 series, was bog standard Cortex A57 cores. I figured K12 would be at best an evolution of a standard ARM design, but if it was more that would be very interesting. Would love to know more about this.
It wasn't a Zen variant; but the K12 was intended to be AMD's custom ARM uArch aimed at the server space. So there's a good chance it would have been focused on performance over power efficiency.
> It is possible that an Nvidia/Microsoft relationship could develop somewhat similar to the Intel/Microsoft partnership, but unlikely. The market is different and the regulatory environment may become different.
In addition, I think Nvidia's tendency toward extreme secrecy would quickly become an obstacle.
OK, how is that different from Apple? They replaced all the protocols for keyboard/trackpad communication with their own at some point and it took forever for people to reverse engineer them. Same with the SSD and other components. The M1 was shrouded in secrecy, and Apple tried to ditch all GPL software so that they wouldn't have to upstream anything if they didn't want to.
Let's at least compare apples to apples. Pun intended.
> ... apple tried to ditch all GPL software so that they wouldn't have to upstream anything if they don't want to.
A side point: it was the patent clause of the GPL v3 that frightened them off. The GPL doesn’t require “upstreaming” of anything — you merely have to release the source code you used; any use of that code by another person is that person’s responsibility.
More generally on Apple: they just treat open / proprietary as another tactical tool. When they were on the ropes (e.g. late 90s) they happily embraced open standards like MP3 and JPEG, yet also happily purchased a license to .rtf. They quickly realized the iPod needed Windows support. Then as (and where) they became stronger, they stopped caring.
I seem to recall a wide variety of people objected to the patent clause during discussions on the GPLv3. Quite a few companies won't touch GPLv3 software which is unfortunate because it was gaining a lot of momentum at the time.
The difference is that Apple is a single company, so secrecy doesn't interfere with product development. Communication happens between the involved internal teams and the magic happens.
In the case of collaborations between separate companies, secrecy becomes a problem because inevitably, details that are held close and not communicated turn into stumbling blocks. For a collaboration between Nvidia and Microsoft for instance, the teams at Microsoft involved with the project would need the same level of access to Nvidia's half as Nvidia's employees do in order to be as effective as Apple's internal teams have been. This isn't impossible of course, but historically Nvidia is not inclined to do things like this.
A firm is NOT a market. In fact, in property-rights economics, why firms exist at all is a big area of study: why such contractual transactions are efficient compared with the free market. (Maybe they aren't, as the open-source "economy" has organized transactions in ways that are not possible within firms. And firms do cooperate.)
The further question is whether this model extends to, say, China. Many things that happened in China were closed to the outside - not so much as a secret, but more like a big firm operating cheaply within the world economy. Would others compete, or, as with Apple, would others have to fall in line to get the supply? One wonders.
All major tech has a new existential threat. Once Apple begins replacing its data centers with Apple Silicon chips it will create unbeatable economies in services.
The rest of FAANG must find a way to catch up. They will already pay huge amounts in the meantime, giving Apple even greater latitude to win coming new markets.
There’s also clear opportunity for Apple to enable developers with cheaper services than other providers. This is a major issue for Amazon and Microsoft.
I get the sense that hyperscalers don’t give Intel a ton of margin in negotiations, so they already have access to great server chips at competitive prices. And they’re bringing in AMD now to compete. Where M1 really shines is client-side applications like video handling, ML inference, and crazy branchy nearly single-threaded code. Servers can afford to run much lower clock speeds (see GCP n1 family, for example) because web serving is an embarrassingly parallel problem. Sure, some ARM might help lower costs of internal things like S3, but Graviton2 isn’t that incredible compared to the M1-vs-Coffee Lake comparisons going on in the laptop world.