The selection of a CPU in any embedded design has long been considered a "hardware issue". As it is part of the hardware, this seems logical. However, the implications of the choice on software development are profound. This article puts the case for a stronger influence from software developers in the CPU selection process.
A warning: There is no silver bullet—no One Solution to this problem. Every embedded system is different, so the selection of a processor (or processors) will vary from one design to another. The best that an article such as this can offer is some guidelines on how to approach the issue.
The end game
Ultimately, an embedded device provides a bunch of functionality, which is delivered at a specific speed (performance), while consuming a certain amount of power. Many design decisions are driven by the need to manage the tension between these three parameters: functionality, performance and power. The choice of CPU is a critical factor in this decision-making process.
CPU selection in the past
Some 30+ years ago, embedded systems were just becoming a mainstream type of electronic design, and CPU selection was reasonably straightforward. If 8 bits would suffice, the choice was broadly between the 8051, the Z80, or one of the 6800 family. For 16-bit devices, the big players were the 80186 and the 68000 family, which offered an entrance to the 32-bit world via the 68020. As 32-bit devices became more economical and applications needed that kind of power, numerous options arose, many of which have since disappeared. The 68000 family expanded and morphed into ColdFire. PowerPC was a very successful option. Intel never really succeeded with embedded devices, despite offering three separate families at one point (x86, 860 and 960). AMD's 29K came and went. Even Motorola introduced a new family, the 88K, which was somewhat short-lived. Then, quietly, with no particular fanfare, along came ARM, heralding the embedded CPU landscape that we see today.
The bigger picture: System-on-Chip and FPGA
Historically, selecting the CPU was a single decision that could be made in isolation, quite separate from other design considerations. There were exceptions. When it made sense to use a microcontroller, its selection would be influenced not only by the CPU on which the device was based, but also by the array of peripherals included on the chip. This is all still true today, except that there are two other clear scenarios. SoC (System-on-Chip) devices are commonly a very cost-effective means of implementing a design. These devices include one or more CPUs (which may be of several different architectures), memory and a wide range of peripheral electronics. Which CPU(s) are used in a given SoC may determine whether it is suitable for a specific application. If a design utilises an FPGA, there are two distinct ways that a CPU (or CPUs) may be incorporated. First, there are FPGA devices that incorporate other hardware IP alongside the FPGA fabric; this may include one or more processor cores, as in Xilinx's Zynq family. Also, a number of CPU architectures have been adapted for implementation as soft cores in the FPGA fabric or, indeed, have been designed from the ground up with this in mind; notable examples of the latter are Xilinx's MicroBlaze and Altera's Nios II.
Hardware-based selection
The selection of a CPU has traditionally been the sole province of the hardware designers. Typical selection criteria are:
- How much power does the chip consume? This is difficult, as, with modern devices, the power consumption may vary drastically according to the CPU's current status and activity.
- How much computing power can the CPU deliver? It is hard to know what the hardware designer can do with this information.
- What facilities does the device include on-chip? Clearly the closer this is to the final design, the less work required to get there.
- What is the device's price and availability? These are obvious parameters.
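The power-consumption point is worth quantifying. A simple duty-cycle calculation shows why a CPU's activity profile matters as much as its headline current draw. The sketch below uses entirely illustrative figures (a 30 mA active current, a 2 µA sleep current, a 3.3 V supply and a 220 mAh coin cell), not values from any datasheet:

```python
# Illustrative figures only, not from any datasheet: an MCU drawing
# 30 mA when active and 2 uA in deep sleep, from a 3.3 V supply,
# awake 1% of the time.
V_SUPPLY = 3.3      # volts
I_ACTIVE = 0.030    # amps (active)
I_SLEEP = 0.000002  # amps (deep sleep)
DUTY = 0.01         # fraction of time spent active

def average_power(volts, i_active, i_sleep, duty):
    """Average power is the duty-cycle-weighted mean of the two states."""
    return volts * (i_active * duty + i_sleep * (1.0 - duty))

p_avg = average_power(V_SUPPLY, I_ACTIVE, I_SLEEP, DUTY)

# Battery life for a hypothetical 220 mAh cell.
energy_wh = 0.220 * V_SUPPLY           # watt-hours available
lifetime_days = energy_wh / p_avg / 24 # hours -> days
print(f"average power: {p_avg * 1000:.3f} mW")
print(f"estimated lifetime: {lifetime_days:.0f} days")
```

With these numbers the sleep current is almost irrelevant and the duty cycle dominates, which is exactly the kind of conclusion that only emerges once the software's behaviour is understood.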
An alternative approach to selecting the CPU is to consider the factors that matter to the software team. These might include:
- Is the team familiar with the CPU architecture? If not, there will be a cost/time penalty incurred by training. If they do know the chip, they are likely to be confident about extracting the best functionality and performance from it.
- Does the team have development tools for the CPU? If not, are they readily available and of good quality? To some extent, any workman is only as good as his tools.
- Are simulation models available at a variety of abstraction and performance levels? Waiting for real hardware is not practical and developing on "similar" targets can be inefficient.
- Given that an operating system will be used and has been selected, is it supported on the proposed CPU? This issue applies to other licensed software as well.
- Are there any low-power modes available? Many applications are power-sensitive, and it falls to the software team to ensure that power budgets are not exceeded. For this to be feasible, it may be essential that the CPU can be put to sleep. The inclusion of DVFS (Dynamic Voltage and Frequency Scaling) in the design may be essential, in which case a CPU that supports this functionality would be a requirement.
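The DVFS point can be illustrated with the standard approximation that dynamic CPU power is proportional to C·V²·f. The sketch below (with made-up operating points, not from any real device) shows why it is the voltage reduction that a lower frequency permits, rather than the lower frequency itself, that saves energy on a fixed workload:

```python
def dynamic_energy(cycles, volts, freq_hz, cap_farads=1e-9):
    """Energy for a fixed workload: (C * V^2 * f) * (cycles / f) = C * V^2 * cycles.
    Note that frequency cancels out: dynamic energy depends on voltage, not speed."""
    power_w = cap_farads * volts ** 2 * freq_hz  # P ~ C * V^2 * f
    time_s = cycles / freq_hz                    # run time for the workload
    return power_w * time_s

WORK = 100e6  # cycles for the task (illustrative)

# A lower frequency permits a lower supply voltage, and that is where
# the dynamic-energy saving comes from.
e_fast = dynamic_energy(WORK, volts=1.2, freq_hz=800e6)  # high operating point
e_slow = dynamic_energy(WORK, volts=0.9, freq_hz=200e6)  # low operating point
print(f"1.2 V / 800 MHz: {e_fast * 1000:.0f} mJ")
print(f"0.9 V / 200 MHz: {e_slow * 1000:.0f} mJ")
```

Static (leakage) power is ignored here; in practice it often favours "race to idle", running fast and then sleeping, which is exactly the kind of trade-off the software team is best placed to evaluate.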
As there are more software criteria than hardware considerations that appertain to the selection of a CPU, it would clearly be appropriate for a different selection process to be applied.
It is increasingly common for designs to be implemented using multiple CPUs. As mentioned earlier, a number of standard SoC devices incorporate multiple CPU cores. The criteria that determine the suitability of the CPUs are broadly similar to those for a single-CPU design, but with some subtle nuances. If a number of identical CPUs are used simply to provide more computing power, it is very likely that the system software architecture will be Symmetric Multiprocessing (SMP), where a single operating system instance runs across all the CPUs and manages the distribution of work between the cores and their interoperation. In this case, the key factor would be the availability of an SMP operating system for the proposed CPU architecture. Other designs will be configured as Asymmetric Multiprocessing (AMP), where each CPU has its own OS, or maybe none at all. In this case, the CPU selection process discussed above would need to be applied to each one. An additional consideration is the overall control and coordination of an AMP system, which will probably be performed by an AMP framework or a hypervisor. In either case, the availability of suitable software may influence CPU selection.
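A user-space analogue of the SMP model can make the distinction concrete: one scheduler is free to place identical work items on any available core. The sketch below uses Python's standard thread pool as that analogue (the checksum workload is a made-up stand-in, and CPython's interpreter lock limits true parallel compute, so this illustrates the scheduling model rather than real multi-core throughput):

```python
from concurrent.futures import ThreadPoolExecutor

def checksum(block):
    """Stand-in for per-core work: a 16-bit additive checksum of a data block."""
    return sum(block) & 0xFFFF

def parallel_checksums(blocks):
    """Fan the blocks out to a pool of workers; under an SMP OS the kernel
    scheduler is free to run each worker on any core."""
    with ThreadPoolExecutor(max_workers=len(blocks)) as pool:
        return list(pool.map(checksum, blocks))

blocks = [list(range(i, i + 1000)) for i in range(0, 4000, 1000)]
print(parallel_checksums(blocks))
```

In an AMP configuration there is no single scheduler to do this placement: work is statically assigned to each CPU, and the coordination between them is handled by an AMP framework or a hypervisor instead.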
When to select the CPU
As the software development for most embedded systems is a bigger undertaking than the hardware design, it is obvious that work on the code should start first in order to meet time-to-market goals. That is easy enough. However, the further the software development has advanced, the better defined the requirements for the CPU will be. For example, it may turn out that a design would benefit from the CPU having low-power modes, a hardware multiplier, or a large cache. These factors may not be apparent until a large amount of software design and analysis of use cases has taken place. In other words, the suggestion is that the hardware design team hold off on the selection of the CPU until the last possible moment. That will give the software engineers a chance to assess how much computing power (and memory) they will need, and also what power management capabilities and other functionality will be required to meet their design goals.
About the author
Colin Walls has over thirty years experience in the electronics industry, largely dedicated to embedded software. A frequent presenter at conferences and seminars and author of numerous technical articles and two books on embedded software, Colin is an embedded software technologist with Mentor Embedded [the Mentor Graphics Embedded Software Division], and is based in the UK.