Let's find out whether or not (and if so, how and when) Apple will obsolete x86-based computers in favor of its own SoC-powered successors.
In a recent post on Apple's latest smartphones, tablets, and wearables, I chose not to direct the bulk of my kudos at the company's system designers, instead focusing my attention on the developers of the SoCs (and the IP cores within those SoCs) inside those systems. Toward the end of that piece, I also noted that I'd shortly begin working on "a broader treatise of whether or not (and if so, how and when) Apple will obsolete x86-based computers in favor of its own SoC-powered successors." The time for that "near-future post of its own" is now.
Rumors and prognostications about Apple migrating Macs away from Intel and to its own Arm-based application processors periodically rise (and fall away) in the tech press and analyst world, but in recent times they've built to a crescendo. Why? In a big-picture sense, this is the latest (potential) step in a processor architecture transition within the company that began with the Apple-designed A4 found in the first-generation iPad, an ironic launch platform choice given that iPads are now being bandied about as potential laptop successors. To that point, and in the spirit of "a picture is worth a thousand words," here's a stock photo of the latest-generation Apple iPad Pro tablet mated to its keyboard accessory:
Now here's a stock photo of Microsoft's Surface Pro laptop/tablet hybrid PC, mated to its keyboard accessory:
See how similar they appear? And see what all the hubbub is therefore about?
To set the stage, let's step back for a second: why did Apple get into the buy-an-Arm-license-and-develop-it-yourself business in the first place? Once shipping product volumes get high enough to counterbalance the license-fee and R&D expenses, it becomes fiscally attractive to bypass the "middleman" (specifically Samsung, Apple's SoC supplier up to the iPhone 4, in the A4 case) and do more of the total development yourself, leaving only IC foundry fabrication, packaging, and testing to third parties. Apple-developed Arm cores now power not only all of the company's iPhones and iPads, but also the various Apple Watch generations, Apple TV, HomePod, and other products.
And now, of course, Apple's expanding beyond the primary processor core into other system building block areas; the company's reportedly now doing its own graphics IP (to the detriment of longstanding partner Imagination Technologies' business), has brought power management IC development in-house, and is even rumored to be pursuing development of its own cellular voice-plus-data technologies. What's next: flash memory?
Conceptually, therefore, you can also see the appeal to Apple of cutting out the Intel middleman and doing its own PC processor designs. But the situation's quite different here, for a variety of reasons; not least, it would be difficult for an Arm-based SoC running a legacy x86 binary in emulation mode to deliver a competitive price/performance/power consumption combination versus its x86 native alternative.
Difficult … but not impossible. After all, Apple's gone down this path before … several times, in fact. Initial Macs were based on Motorola's 68000 processors; the company transitioned to IBM- and Motorola-developed PowerPC CPUs beginning in 1994. And of course, as yours truly wrote at the time, Steve Jobs announced at the June 2005 Apple Worldwide Developer Conference keynote that the company's various Mac product lines would begin rapidly transitioning from PowerPC to Intel x86 CPUs. That's where we still are, in fact, more than 13 years later.
How Apple handled that quite successful mid-2000s transition gives some hints as to what it might do this time. The company rapidly began shipping x86 builds of its Mac OS operating system alongside x86-based computer hardware, along with "universal binary" application development kits (for both its internal teams and partners) that compiled code for both legacy PowerPC and new x86 hardware. Additionally, the new x86-based operating system releases included an emulation layer called "Rosetta" that enabled legacy PowerPC applications to run efficiently on new x86 hardware.
Apple's even better positioned this time around, actually. Analogous to what Microsoft attempted with Windows RT-based systems (both its own and partners'), Apple has largely, albeit not completely, succeeded in migrating both its own and partners' application sales and distribution channels to its own App Store infrastructure. (I should also note here that Microsoft hasn't given up on the idea, either, most recently partnering with Qualcomm and system makers on Arm-based "Always On, Always Connected PCs.") After unveiling Arm-aware Mac OS application development suites, it would be a relatively straightforward step to either begin distributing "universal binary" applications via the App Store that run on both legacy x86-based and new Arm-based hardware, or (in the interest of minimizing code payload size) to automatically download the x86 or Arm build of a particular application depending on what customer hardware is detected.
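The second, payload-minimizing option amounts to simple server-side dispatch on the client's reported CPU architecture. Here's a minimal Python sketch of the idea; the catalog entries and file names are hypothetical illustrations, not Apple's actual App Store scheme:

```python
import platform

# Hypothetical payload catalog: one pre-built binary per architecture.
# The names here are illustrative assumptions for the sketch.
BINARIES = {
    "x86_64": "MyApp-x86_64.bin",
    "arm64": "MyApp-arm64.bin",
}

def select_binary(machine: str) -> str:
    """Map a client's reported CPU architecture string to the matching payload."""
    arch = machine.lower()
    if arch in ("x86_64", "amd64"):
        return BINARIES["x86_64"]
    if arch in ("arm64", "aarch64"):
        return BINARIES["arm64"]
    raise ValueError(f"unsupported architecture: {machine!r}")

if __name__ == "__main__":
    # On a real client, platform.machine() reports the local architecture.
    print(select_binary(platform.machine()))
```

The "universal binary" alternative instead ships both code slices in a single fat file, trading download size for the simplicity of one artifact that runs anywhere.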
Apple's Transitive-developed "Rosetta" binary translation technology was impressive back in 2005, and as I previously mentioned, both it and broader virtualization have only gotten better in the intervening decade-plus. To that point, as well as addressing my earlier comment that it would be "difficult for an Arm-based SoC" running an x86 binary in emulation mode to "deliver a competitive price/performance/power consumption combination" versus its x86 native alternative, I should also point out what's specifically spiking the latest "Apple on Arm" interest.
Shortly after Apple unveiled the new A12X Bionic SoC-based iPad Pros in late October, which it claimed at the time were faster than "92% of portable PCs," Geekbench benchmark results for the new tablet mysteriously appeared online. The iPad Pro's single-core score of 5030 goes toe-to-toe with the 5053 generated by a 2.6 GHz Intel Core i7-based 2018 15" MacBook Pro, for example. Note, however, that the iPad Pro's multi-core score of 17995 still trails the 15" MacBook Pro's 21421, even though the A12X Bionic is an eight-core design versus the six-core Core i7. And more generally, the "fine print" list is quite long. Geekbench is a synthetic benchmark, for one thing, whose relevance to real-life results is historically hit-and-miss. The systems being compared also run different operating systems and contain different amounts and types of DRAM, different graphics processors, different screen sizes, and so on. Of note in Apple's favor: the 2.6 GHz Intel Core i7 is actively fan-cooled, while the Apple A12X Bionic relies on a completely passive thermal subsystem.
Fine print aside, it's indisputable (IMHO) that Apple's SoCs are becoming compelling alternatives not only to other suppliers' (Huawei/HiSilicon, MediaTek, Qualcomm, Samsung, etc.) Arm-based application processors but also to Intel's x86-based products. And Apple's rapid rate of improvement is equally impressive, if not more so; as 9to5Mac's coverage points out, "The 2017 iPad Pro [editor note: based on the A10X Fusion SoC] can achieve 3908 single-core and 9310 multi-core scores. The new iPad Pro is 30% faster than its predecessor in single-core and effectively doubles multi-core performance."
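A quick back-of-the-envelope check of the quoted Geekbench figures bears those percentages out. The Python sketch below uses only the scores cited above (and inherits all of the same "synthetic benchmark" caveats):

```python
# Geekbench scores as quoted in the article.
a10x_single, a10x_multi = 3908, 9310    # 2017 iPad Pro (A10X Fusion)
a12x_single, a12x_multi = 5030, 17995   # 2018 iPad Pro (A12X Bionic)
mbp_single, mbp_multi = 5053, 21421     # 2018 15" MacBook Pro (2.6 GHz Core i7)

# Generation-over-generation gains for Apple's tablet SoCs.
single_gain = a12x_single / a10x_single - 1   # roughly 0.29, i.e. ~30% faster
multi_gain = a12x_multi / a10x_multi - 1      # roughly 0.93, i.e. nearly doubled

# How close the passively cooled A12X comes to the fan-cooled Core i7.
single_ratio = a12x_single / mbp_single       # just under parity
multi_ratio = a12x_multi / mbp_multi          # ~84% of the MacBook Pro's score

print(f"gen-over-gen: {single_gain:.1%} single-core, {multi_gain:.1%} multi-core")
print(f"vs MacBook Pro: {single_ratio:.1%} single-core, {multi_ratio:.1%} multi-core")
```

So "30% faster ... effectively doubles" is an accurate summary of the single- and multi-core deltas, and the single-core gap to the MacBook Pro is well under one percent.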
I'm still not convinced that Apple's planning a slam-dunk conversion from Intel to its own processors in computers any time soon. But a more gradual, consumer "pull"-based transition, beginning with systems that value improved battery life, smaller form factor, and lighter weight over absolute performance, is quite likely … and is arguably already underway. If Intel ever gets its 10 nm process (and products based on it) into full production, the resulting transistor count, clock speed, and energy efficiency improvements will slow, but likely won't completely stall, this transition. And keep in mind that AMD's x86-based processors are becoming steadily stronger from a price/performance/power consumption perspective, too. Agree or disagree? Sound off in the comments.
— This article first appeared on sister publication EDN and was contributed by Brian Dipert, Editor-in-Chief of the Embedded Vision Alliance, and Senior Analyst at BDTI.