Intel bets on ‘transistor-resilient design’ to avoid past mistakes


Intel is embracing a new “transistor-resilient design” approach so that it can continue to push the envelope for its products without getting held back by manufacturing issues.

This new approach was detailed Tuesday during the Intel Architecture Day pre-briefing event, alongside several major new technology and product disclosures, less than three weeks after the company revealed that a defect in its manufacturing process would delay the delivery of its next-generation 7-nanometer products by six months.

Raja Koduri, the former AMD chief architect who was hired away by Intel in 2017 to drive new silicon architectures, opened the event by acknowledging that Intel's traditional approach to designing processors has been so tightly paired with the company's ability to manufacture chips at higher transistor densities that it has resulted in delays of multiple products, including its 10nm processors.

“Our design methodology has historically been tightly coupled to a single transistor target. This makes it very hard to move our new architectures, new features and new IP quickly to different process technologies, either external or internal,” said Koduri, Intel’s chief architect who leads the Architecture, Graphics and Software Group. “Our customers rely on our execution. A ‘transistor-resilient design’ would have allowed us to deliver PCIe Gen 4 or a Sunny Cove CPU or an Xe GPU to the market sooner during the time when we had challenges with our 10nm process transition.”

This new approach is evident in the company’s forthcoming 10nm Tiger Lake mobile processors for laptops, which are expected to launch in September.

While Intel has previously made smaller, incremental improvements within its six-year-old 14nm node—in part to make up for the 10nm delays—the company said it has devised a way to deliver what it called the "largest single intranode enhancement in its history," comparable to a full node transition, such as the company's move from 14nm to 10nm.

The company is using this historic intranode improvement for the Willow Cove cores going into Intel's new Tiger Lake processors, which it said is allowing it to deliver a "more than generational performance leap" over Ice Lake, its first 10nm processors for volume production.

“We were able to deliver a greater-than-generational performance improvement by not only dramatically lowering the voltage at which Willow Cove achieves its operating frequencies versus Sunny Cove, but we were also able to extend the range,” said Boyd S. Phelps, vice president of the Client Engineering Group and general manager of the Client and Core Development Group.

This major intranode boost was made possible by a new technology Intel calls 10nm SuperFin, which is a reference to how the technology redefines its 3D FinFET transistor technology and combines it with a new super metal-insulator-metal capacitor to drive performance and efficiency in a variety of ways.

“The era of getting massive performance boost from simply shrinking transistor features is behind us,” said Ruth Brain, an Intel Fellow and director of interconnect technology and integration.

Intel is also using its 10nm SuperFin technology for its DG1 and SG1 discrete GPUs, which the company plans to start shipping later this year for laptops and servers, respectively.

The company is already working on an enhanced version of 10nm SuperFin, which will provide additional performance, new interconnect innovations and optimizations for data center applications.

Intel’s newly disclosed Xe-HP discrete GPU for high-performance data centers will use the enhanced 10nm SuperFin while the previously announced Ponte Vecchio discrete GPU will rely on 10nm SuperFin and enhanced 10nm SuperFin for its base tile and Rambo Cache tile, respectively.

This new approach to intranode improvements means Intel is ditching the plus sign that has been used to name previous intranode enhancements, like 14nm+++.

“There were so many pluses that we often internally mixed up the actual plus count,” Koduri said. “Left unchecked, we were going down the same path in our intranode improvement in 10nm. The number of repetitive calls or emails I sent to my designers, querying about how many pluses were on one chip versus another, was getting ridiculous.”

Intel is looking to push the design envelope in other ways, including how it packages different silicon functions and IPs together on a system-on-chip. The company has already been heading down this path with its Embedded Multi-die Interconnect Bridge and Foveros 3D packaging technologies—the latter of which is used for Intel’s new Lakefield hybrid processors—but it’s already looking at how it can slice and dice the various elements of its processors into more specialized chips.

With a new design methodology Intel is calling “Client 2.0,” the company said it is laying the groundwork to build purpose-built client processors that use tiny silicon functions and IPs as building blocks. The goal is to cut development time to one year, down from the three to four years typical for monolithic chips and the two to three years for multi-die chips.

While Intel didn’t share any plans for when it will release new products using the new approach, the company said it represents a long-term vision.

“Overall, Client 2.0 is about delivering winning products at an annual cadence,” said Brijesh Tripathi, vice president and chief technology officer for Intel’s Client Computing Group.

By mixing and matching different building blocks—for things like graphics, compute, I/O and artificial intelligence—these chips can suit different user types, like gamers and commercial users.

“For example, a corporate employee could be using a lot of productivity tools and want a lot of AI capabilities. A gamer might want a large graphics and AI engine while a content creator might want to have a lot of graphics and compute,” Tripathi said.

While Intel is taking advantage of a number of new design methodologies to bring performance and efficiency to the next level, the company is also turning to third-party chip foundries for new products—something CEO Bob Swan said may happen more often in the future if manufacturing issues persist.

This reliance on foundries was evident in the company’s discrete GPU plans. While Ponte Vecchio, for example, will use Intel’s new 10nm SuperFin processes, it will rely on external processes for the GPU’s Xe Link tile and for some of the product’s compute tile, which Koduri said was planned from the beginning to give Intel more flexibility. The company’s newly disclosed Xe-HPG GPU for desktop PCs, on the other hand, will entirely rely on a foundry for manufacturing.

“This helps with our execution immensely,” he said.

This article originally appeared at crn.com

Copyright © 2018 The Channel Company, LLC. All rights reserved.