Accelerating energy efficiency when Moore’s Law slows

The energy-related benefits that result from Moore’s Law are slowing down, potentially threatening future advances in computing

Mark Papermaster is senior vice president and chief technology officer at AMD.
By Mark Papermaster. Published August 12, 2015.

Energy efficiency is increasingly important as the computing revolution marches forward. With the explosion of computing over the last 20 years and the resulting societal benefits across business, education, research, health care, and other sectors, the energy and environmental footprint of computing has grown correspondingly. The evolution toward cloud computing, “always on” connectivity, and immersive experiences, including virtual and augmented reality, is adding demand for efficient compute performance. There are projections that information and communications technologies, including computers and mobile phones, will consume 14% of worldwide electricity by 2020.

The result of all these factors is a strong market pull for technologies that improve processor performance while also reducing energy use. In this context, energy efficiency means improving performance while maintaining or reducing energy use.
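
As a rough illustration, energy efficiency in this sense is often quantified as performance per watt. A minimal sketch in Python, with entirely hypothetical chip figures:

```python
# Hypothetical illustration: energy efficiency as performance per watt.
# The chip figures below are invented for the example.
def perf_per_watt(ops_per_second: float, watts: float) -> float:
    """Efficiency expressed as operations per second per watt."""
    return ops_per_second / watts

chip_a = perf_per_watt(ops_per_second=2.0e12, watts=100.0)
chip_b = perf_per_watt(ops_per_second=2.4e12, watts=80.0)

# Chip B is more energy efficient: more work per second AND less power.
print(f"Chip A: {chip_a / 1e9:.0f} GOPS/W, Chip B: {chip_b / 1e9:.0f} GOPS/W")
```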

What’s next

Historically, improvements in energy efficiency have largely come as a byproduct of Moore’s Law: the doubling of the number of transistors on a chip about every two years through ever smaller circuitry. In general, more transistors on a single computer chip and less physical distance between them lead to better performance and energy efficiency.
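
As a back-of-the-envelope sketch of what that doubling cadence implies (the baseline transistor count below is hypothetical, not actual chip data):

```python
# Illustrative projection of Moore's Law: transistor count doubles
# every two years from a hypothetical baseline.
BASE_YEAR = 2000
BASE_TRANSISTORS = 40e6  # made-up baseline: 40 million transistors in 2000

def projected_transistors(year: int) -> float:
    doublings = (year - BASE_YEAR) / 2  # one doubling per two years
    return BASE_TRANSISTORS * 2 ** doublings

for year in (2000, 2010, 2020):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
# 2000: ~40,000,000; 2010: ~1,280,000,000; 2020: ~40,960,000,000
```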

However, the energy-related benefits that result from Moore’s Law are slowing down, potentially threatening future advances in computing. We’ve reached a point where the miniaturization of transistors is bumping against physical limits. As transistors continue to shrink, leakage current becomes an ever greater engineering challenge. This, in part, has led many to question whether Moore’s Law will continue at its traditional pace.

Historically, and according to the International Energy Efficiency Agency, as transistors became smaller, power efficiency improved in tandem with processor speed. Now this steady increase in efficiency has slowed. It is therefore increasingly clear that semiconductor designers will need to develop creative measures to supplement the slowing energy efficiency gains.

At AMD, we have taken on a significant goal: to improve the energy efficiency of our mobile processors by 25 times between 2014 and 2020. We refer to this as our 25x20 initiative. How well we and others in the industry respond to the slowing of efficiency gains may have profound implications for the global economy and the environment as society further relies on digital technologies.
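
To put the target in perspective, a 25-fold efficiency gain over six years implies a compound improvement of roughly 71% per year, as this back-of-the-envelope calculation shows:

```python
# Back-of-the-envelope: what annual improvement does a 25x gain
# over six years (2014 to 2020) imply?
target_gain = 25.0
years = 2020 - 2014  # six years

annual = target_gain ** (1 / years)  # ~1.71x per year
print(f"Required compound annual gain: {annual:.2f}x ({annual - 1:.0%} per year)")
```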

But if the historical method, namely manufacturing technologies that deliver greater transistor density, no longer has the same impact as before, what can the industry do to offset it and keep making energy efficiency gains? At the processor level, the answers for now are new processor architectures, power-efficient technologies, and power management techniques.
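
One widely used power management technique is dynamic voltage and frequency scaling (DVFS). Dynamic power scales roughly with frequency and with the square of supply voltage (P ≈ C·V²·f), so a modest drop in voltage and clock speed yields an outsized power saving. A minimal sketch, with invented component values:

```python
# Why voltage/frequency scaling saves power: dynamic power is
# approximately P = C * V^2 * f. All values below are invented.
def dynamic_power(capacitance_f: float, volts: float, hertz: float) -> float:
    return capacitance_f * volts ** 2 * hertz

nominal = dynamic_power(capacitance_f=1e-9, volts=1.2, hertz=3.0e9)
# Run at 80% frequency, assuming the voltage can drop to 90%.
scaled = dynamic_power(capacitance_f=1e-9, volts=1.08, hertz=2.4e9)

print(f"Nominal: {nominal:.2f} W, scaled: {scaled:.2f} W "
      f"({scaled / nominal:.0%} of nominal)")  # ~65% of nominal power
```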

For decades, the central processing unit (CPU) of a computer has been designed to run general programming tasks. These processors excel at running computing instructions serially (if condition A, then do B, then C, and so on, one step after the other) and increasingly use a variety of complex techniques and algorithms to improve speed. By contrast, graphics processing units (GPUs) are specialized accelerators originally designed to paint millions of pixels simultaneously across a screen. GPUs do this by performing calculations in parallel using a comparatively simple architecture. CPUs and GPUs have traditionally run as separate processors, on separate chips or integrated circuit boards, within PCs, gaming consoles, tablets, smartphones, and most recently some servers and supercomputers.
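
To make the serial-versus-parallel contrast concrete, here is a small Python sketch: an element-by-element loop stands in for CPU-style serial execution, and a NumPy vectorized operation stands in for GPU-style data parallelism (NumPy itself runs on the CPU; the point is the programming model):

```python
import numpy as np

pixels = np.random.rand(1_000_000)  # stand-in for a million pixel values

# CPU-style serial execution: one element after another.
def brighten_serial(values):
    out = []
    for v in values:                    # step 1, then step 2, then step 3...
        out.append(min(v * 1.5, 1.0))
    return out

# GPU-style data parallelism: the same simple operation applied to
# every element at once (conceptually, one thread per pixel).
def brighten_parallel(values):
    return np.minimum(values * 1.5, 1.0)

assert np.allclose(brighten_serial(pixels), brighten_parallel(pixels))
```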

Today, the CPU and GPU are increasingly being integrated into a single entity, known in the industry as an Accelerated Processing Unit (APU).

While a significant step in the right direction, work remains to bring the two processor types into a harmonious, heterogeneous alignment that can improve performance and minimize energy consumption. This has led to an emerging industry standard known as the Heterogeneous System Architecture (HSA).
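
Conceptually, HSA lets software dispatch each piece of work to whichever processor suits it best, over a shared view of memory. A purely conceptual sketch follows; the dispatcher and helper names are invented for illustration and are not the HSA runtime API:

```python
# Conceptual sketch of heterogeneous dispatch; NOT the HSA runtime API.
# Idea: branchy serial work goes to the CPU, uniform data-parallel work
# to the GPU, with both operating on one shared data structure.
from typing import Callable, Sequence

def run_on_cpu(task: Callable, data: Sequence) -> list:
    return [task(x) for x in data]      # step-by-step execution

def run_on_gpu(task: Callable, data: Sequence) -> list:
    return list(map(task, data))        # stand-in for a parallel kernel launch

def dispatch(task: Callable, data: Sequence, data_parallel: bool) -> list:
    """Route the task to the processor best suited to its shape."""
    return run_on_gpu(task, data) if data_parallel else run_on_cpu(task, data)

shared_data = list(range(8))            # both paths see the same memory
print(dispatch(lambda x: x * x, shared_data, data_parallel=True))
```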
