Temporary Bonding, Debonding Remains Challenging For TSV Adoption

While compound semis have used TSVs for years, hurdles remain for adoption in high-volume memory and logic apps.

One issue with the adoption of TSVs in 3D ICs in mainstream semiconductor applications revolves around the throughput of the temporary wafer bonding and debonding process. This doesn’t necessarily equate to a roadblock, but work certainly remains to be done on this and related issues.

On one hand, TSVs already are being used in the manufacturing of compound semiconductors and MEMS, and have been for a decade or so. Many of the image sensors used in current consumer devices, for example, employ 3D ICs, and these are being produced in high volume.

In fact, in some products shipping today, TSVs have enabled better end results, noted Thorsten Matthias, business development director at bonding equipment vendor EV Group (EVG). In the case of image sensors, using TSVs has enabled sensors that produce a better-quality image, or produce an image faster. “Even in some of the cheapest devices, TSVs have been implemented where it makes sense,” Matthias said.

But if one is talking about memory and logic for consumer devices, there are reasons the forecast adoption of TSVs has been pushed out. Before TSVs in stacked silicon are utilized in such production, there are still a few hurdles to clear. One is the throughput of the temporary bonding and debonding process. Another significant hurdle for the mainstreaming of TSVs involves wafer-level testing.

“The throughputs on that process are still pretty low,” said Mark Stromberg, an analyst with Gartner Inc. Current systems on the market can manage throughputs of 20 to 25 wafers per hour. Compared with other fab-line tools, which typically run at some 60 wafers per hour, that’s a bottleneck for high-volume production. “This is one of the reasons TSVs haven’t taken off, especially in the current market conditions,” Stromberg said.
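For a rough sense of the arithmetic, here is a back-of-the-envelope sketch in Python, using only the throughput figures quoted above; everything else is illustrative assumption:

```python
import math

# Throughput figures quoted above (wafers per hour).
line_wph = 60      # typical throughput of other fab-line tools
bonder_wph = 25    # upper end of current temporary bond/debond throughput

# The slowest step paces the whole flow.
print(f"Line gated to {min(line_wph, bonder_wph)} wph by bonding/debonding")

# How many bonders running in parallel would keep pace with the line.
print(f"Bonders needed to match {line_wph} wph: {math.ceil(line_wph / bonder_wph)}")
```

At 25 wafers per hour, it takes three bonding/debonding tools to keep a 60-wafer-per-hour line fed, which is exactly the cost problem described next.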

Theoretically, one could add more than one bonding/debonding tool to the flow, but at a cost of several million dollars or more per tool, that is an expensive proposition. However, as with other fab tools, bonding and debonding equipment vendors have added multiple chambers to their tools to improve throughput, as well as to accommodate differing bonding and debonding process flows. Austria-based EV Group, for example, unveiled its EVG850 temporary bonding and debonding (TB/DB) tool last year. Built on the company’s XT Frame platform, the EVG850 TB/DB can accommodate up to nine process modules, doubling the process capacity of EVG’s previous temporary bonding and debonding tools.

Performance gains vs. cost gains

Still, there is work to be done before temporary wafer bonding and debonding is ready for the production of memory and logic with TSVs, and not just in terms of throughput. A number of existing process flows have the potential to be extended to memory production, Matthias said. Even so, when it comes to high-volume production of these devices, the industry is looking at a steep learning curve, moving from, say, thousands of wafers in a production run of compound semiconductors to the tens of thousands of wafers in a production run of memory and logic destined for consumer devices such as mobile phones.

“In terms of design, it can be quite challenging,” Matthias observed, referring to the question of whether or not to implement TSVs in such a production environment.

“But I would say the industry has moved beyond the feasibility study phase and is now in the reliability and infrastructure phase,” suggested Matthias’ colleague, Thomas Uhrmann, also business development manager at EV Group. This is particularly true in terms of the throughput question, Uhrmann said, acknowledging that this is a critical issue in terms of a wider adoption of TSVs.

But it’s not just a matter of improving throughput. Multiple modules on a temporary bonding and debonding tool are also important for accommodating differing steps and process flows. Another challenge has been the adhesives used to bond a device wafer to a temporary carrier wafer so that the device wafer can go through the backside thinning and other process steps required in TSV formation.

The thermal characteristics of some adhesives used in the process are such that they can’t survive the high temperatures of the chemical vapor deposition (CVD) and physical vapor deposition (PVD) processes typically used in the manufacturing of memory and logic devices. Much work is being done within the industry to improve the thermal stability of temporary bonding adhesives, while some companies have implemented low-temperature CVD/PVD processes that are compatible with existing adhesives, Uhrmann said.

“We’re still working on it, the ability to withstand the higher temperatures,” he said. “It’s an ongoing process.”

With logic and memory chipmakers definitively planning—or at least mulling over—the implementation of TSVs in production within two to three years, the type of bonding/debonding used may depend on the specific device involved and the related production costs. Some devices require high process temperatures, and there are some adhesives in production today that can withstand them, Matthias said. There are practical production reasons to maintain a process flow with higher process temperatures, Uhrmann suggested, although these are somewhat less compelling than they were three to five years ago.

Furthermore, being able to process at lower temperatures provides additional flexibility in terms of process flows, and most mainstream chipmakers are looking at bonding and debonding processes coupled with CVD/PVD processes in the range of 200 to 320 degrees Celsius, Matthias said.
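To make the adhesive constraint concrete, here is a minimal thermal-budget check. The step names, temperatures and adhesive rating are hypothetical placeholders; only the 200-to-320-degree CVD/PVD window comes from Matthias’ comment above:

```python
# Hypothetical post-bonding backside flow; peak temperatures in degrees Celsius.
ADHESIVE_MAX_C = 250  # assumed rating for one temporary-bond adhesive

backside_steps = {
    "backgrind":          80,
    "via-reveal etch":   150,
    "PVD barrier/seed":  200,   # low end of the CVD/PVD window cited above
    "CVD passivation":   320,   # high end of the CVD/PVD window cited above
}

for step, temp_c in backside_steps.items():
    verdict = "ok" if temp_c <= ADHESIVE_MAX_C else "EXCEEDS adhesive rating"
    print(f"{step:18s} {temp_c:3d} C  {verdict}")
```

In this toy flow the 320-degree CVD step fails the check, which is precisely the mismatch driving either better adhesives or lower-temperature deposition.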

Uhrmann suggested that while there is still a learning process ahead for the industry when it comes to bonding and debonding and TSV production, as more wafers are processed and more statistical data is gathered, improvements can be made in terms of integration with upstream and downstream production flows. “This will allow us to optimize the temporary bonding and debonding process flow,” he said.

As Matthias summed it up: “It’s very challenging and interesting to work in this field.”

Editor’s Note: As explained at length elsewhere on this site, this is a news story written by me for another publication. It originally appeared on Semiconductor Engineering, which holds the copyright, of course.

TSVs: Welcome To The Era Of Probably Good Die

Physical probing of devices using TSVs is proving a challenge to traditional test.

Among the challenges of widespread adoption of 3D ICs is how to test them, particularly when it comes to through-silicon vias (TSVs). While not necessarily a roadblock, TSV use in the mainstream will almost certainly change traditional test strategies.

In fact, many chipmakers looking to stack their silicon may come to rely less on the traditional known good die (KGD) at final test and instead opt for so-called “probably good” die.

If one looks at the semiconductor industry as a whole, this issue of testing a device that relies on TSVs is nothing new. Compound semiconductors, such as image sensors, and MEMS have been utilizing TSVs for years. Furthermore, the problems with probing TSVs are not dissimilar to those introduced in years past with advanced packaging.

The use of delicate copper pillar bumps in flip-chip interconnects, for example, also has proven problematic in terms of physical contact during probing at final test. Physical contact can stress and ultimately damage the pillars. “Old-time spring contact probe doesn’t cut it anymore,” said Gary Fleeman, vice president of marketing at Advantest Corp. “It’s becoming difficult to make contact. It is becoming quite challenging.”

So in a sense, the difficulties of testing devices with TSVs aren’t new. As with copper pillars, TSVs can be subject to damage with physical probing. But testing an image sensor and testing a stack of logic die, or something more complex—say a processor with memory—ultimately involve different challenges. “There are certain things that have been learned and can be leveraged,” said Mike Slessor, senior vice president and general manager of the MicroProbe Product Group at probe card maker FormFactor. But the devices are structurally very different, he added. It’s not a matter of simply taking the test strategy from one and applying it to the other.

Furthermore, in terms of stacking silicon, it’s one thing to have a single die with a damaged I/O; it’s something else to have a single damaged die that is part of a stack of several die, particularly if it renders the entire stack defective. At first glance that would seem to imply that 100% KGD are essential. However, as makers of high volume, mainstream semiconductor applications have begun to look to stacked die as a means of continuing device performance gains at advanced manufacturing nodes, in all likelihood it will mean new and different test strategies. Demanding 100% KGD may not prove economical in some cases, or even necessary.
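The stack-yield arithmetic behind that concern is simple compounding; a quick sketch (the per-die yield is an illustrative assumption, not a figure from anyone quoted here):

```python
# If any one bad die condemns the stack, stack yield is the product of the
# individual die yields. 98% per-die yield is an assumed, illustrative number.
die_yield = 0.98
for n_die in (1, 2, 4, 8):
    print(f"{n_die} die per stack: {die_yield ** n_die:.1%} of stacks all-good")
```

Even at 98% per die, an eight-die stack comes out all-good only about 85% of the time, which is why untested or lightly tested die get expensive fast.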

But as with so many other aspects of mainstream 3D IC adoption, testing with TSVs is a question mark. “It creates a problem. How are you going to determine KGD?” said Mark Stromberg, semiconductor ATE analyst with Gartner Inc. “The industry is still kind of undecided how it’s going to address the problem,” he said.

Chipmakers are considering and evaluating several different test methodologies with regard to TSVs, Slessor said. In fact traditional physical probing of the device contacts—in this case, TSVs—hasn’t been completely dismissed. But it’s proving difficult, and the prevailing sentiment among MicroProbe’s customers is to avoid it. “We’ve done it, but it’s a challenge,” Slessor said, noting that this is the crux of the TSV/test debate. “If you don’t have to touch it, then you shouldn’t. The jury is still out on whether or not you have to.”

If not known good die, then what?

So if a chipmaker isn’t going to physically probe a device under test and drive current through it, that raises the obvious question: how does one test said device? As Slessor said, the jury is still out, but as always, the answer ultimately will come down to cost—both test costs and device manufacturing costs.

Contactless probing is a potential solution, but so far it has proven problematic. “It hasn’t really developed. It isn’t progressing at all,” observed Advantest’s Fleeman.

The methods of contactless probing under consideration involve RF technology, but the physics of RF antennas is proving a limiting factor. The high frequencies and power densities of the electrical currents that mainstream semiconductors employ are a stumbling block for this method. “The tests require quite a bit more power than what can be generated,” MicroProbe’s Slessor said. “It’s something we continue to play around with,” he added, noting that there are inherent advantages to the approach, particularly as pitches shrink. But the technology won’t be ready anytime soon, he said.

BIST, or built-in self-test, is another option: building extra structures into a device specifically for testing it. This adds complexity to the manufacturing process, however, and thereby cost. Consequently it may not prove to be the best test strategy for low-cost, high-volume device production when it comes to 3D ICs.

“Anything that adds significant potential costs is going to be a potential roadblock,” said Gartner’s Stromberg.

Another method under consideration is the use of dedicated test points: test or dummy pads, placed among the TSVs themselves or around them, that are used to contact and test the device. These can be fabricated in parallel with the TSVs, adding relatively little in manufacturing cost, Slessor said. Dummy pads can provide probe access to most of the structures in a device under test (hence the term “probably good die”). The approach also has the benefit of familiarity; DRAM manufacturers have employed it for a long time.

Will known good die prove too expensive?

Whichever test strategy chipmakers adopt may depend on their specific application and the associated costs of fabrication and test. In other words, what proves most cost-effective: determining probably good die vs. known good die? In some manufacturing scenarios, particularly among high-yield devices such as memory, it may prove cheaper to depend on a probably good die test strategy, even though it means some yield loss at final packaging, as the cost of that loss would still be less than that of testing for 100% KGD prior to packaging.
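A toy cost model makes the trade-off visible; every number here is an illustrative assumption, not data from the article:

```python
# Compare cost per good packaged stack: exhaustive test for 100% KGD vs.
# a cheaper "probably good die" (PGD) screen that lets a few bad die escape.
n_die         = 4     # die per stack
package_cost  = 1.00  # $ to assemble and package one stack
kgd_test_cost = 0.40  # $/die for exhaustive wafer-level test (assume no escapes)
pgd_test_cost = 0.10  # $/die for dummy-pad screening
pgd_escape    = 0.02  # assumed fraction of screened die that are still bad

def cost_per_good_stack(test_cost_per_die, escape_rate):
    stack_survival = (1 - escape_rate) ** n_die  # no escaped bad die inside
    return (n_die * test_cost_per_die + package_cost) / stack_survival

print(f"KGD: ${cost_per_good_stack(kgd_test_cost, 0.0):.2f} per good stack")
print(f"PGD: ${cost_per_good_stack(pgd_test_cost, pgd_escape):.2f} per good stack")
```

Under these assumptions the probably-good-die route wins ($1.52 vs. $2.60 per good stack) despite scrapping some finished packages; tilt the numbers toward costlier packages or taller stacks and 100% KGD starts to pay again.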

This, of course, differs from tradition; wafer probe originally was designed to sort good die from bad die prior to packaging, sending 100 percent KGD off to be packaged.

“I don’t see it being a major roadblock to (widespread) 3D adoption,” said Slessor. “Instead of a roadblock, we’re looking at changes in test strategies.”

A lot depends on the combination of die involved in a 3D stack and the nature of the individual die along with what can and cannot be tested via dummy pads (or some other strategy alternative to physically probing TSVs). “Can they drive enough power through a device under test through dummy pads to get the results they want? That’s the question,” Slessor said, adding that there are related methods to infer the “goodness” of TSVs.

In terms of memory, this approach almost certainly will work, he said. With a combination of a stacked processor and memory, or stacked FPGAs, it should work just fine in most cases. With 3D ICs involving high-performance RF die, perhaps not.

In any event, the era of probably good die may be on the horizon.

Editor’s Note: As explained at length elsewhere on this site, this is a news story written by me for another publication. It originally appeared on Semiconductor Engineering, which holds the copyright, of course.

Front End Comes To The Back End

The adoption of through-silicon vias has meant putting front-end wafer fab tools and processes in assembly and test houses.

For outsourced assembly and test (OSAT) houses either planning for or already offering through-silicon via (TSV) capability for their 3D packaging efforts, this has meant the front end is coming to the back end, in a manner of speaking.

A bit of an exaggeration, perhaps, as most generalizations are. But thanks to TSVs, in a very real sense some of what would typically be the last steps of front-end wafer fab processing are also being implemented at OSATs, the traditional purveyors of back-end packaging, assembly and test.

Whether this expensive investment will pay off for them in the long run remains to be seen.

As always, the questions are “When” and “If”

TSVs have proved a bit of a headache for the industry in general and OSATs in particular, as the technology — really several technologies or methodologies — has generated a lot of hype and consequently research in the last several years, but has yet to see widespread adoption. As always in the semiconductor industry, this has been because of a combination of factors: economics, chipmakers’ roadmaps and more expedient technical or economic solutions available in the near term, such as so-called 2.5D IC technology.

In fact, talk out of the recent Semicons—West and Taiwan—indicates there won’t be widespread industry adoption of TSVs in 3D ICs until about the 2016 time frame, or beyond the planar 20nm node.

Currently, in terms of TSVs, the market largely comprises FPGAs using 2.5D technology, namely from Xilinx and Altera. There has been some use of vertically stacked memory to date, but only in the high-end server market, said Mark Stromberg, a principal research analyst at Gartner Inc.

Stacked memory may find its way into higher end communications products in 2014, but it will likely be 2015 or later before it could become widespread, Stromberg said. It won’t be until 2016 at the earliest that the chip industry could see TSVs used to connect multiple groups of stacked die in a single 3D package, such as processors, a graphics processor, memory and peripheral logic.

One of the reasons 2.5D has come into play in the FPGA space is the die sizes involved: larger than 20mm on a side. In high-end applications the economics consequently make sense, said Raj Pendse, vice president and chief marketing officer at STATS ChipPAC Ltd. In the coming years, when die sizes get below 20mm, it’s possible the market will see 3D ICs utilizing TSVs in mobile applications processors.

“If it becomes real, beyond a critical-mass level, TSVs will continue beyond 16nm,” Pendse said. “This is providing a new dimension to scaling and Moore’s Law. That is a tremendous benefit,” largely in increased I/O bandwidth available, he said.

While TSVs are, and could continue to prove, a boon to makers of ICs for computing applications — FPGAs and ASICs — mobile device makers, and consequently consumer OEMs, have mixed feelings, Pendse said. They are naturally most concerned with what will enable them to stick to their various roadmaps in the most economical manner. Current alternatives in 3D ICs and packaging, such as extending fan-out wafer-level packaging, or future alternatives, such as new packaging substrates, may provide more cost-effective means of getting the device performance needed.

On the other hand, there seems to be little doubt in terms of consensus that 3D ICs are the wave of the immediate future. “At 15nm, if you’re not vertically integrating the silicon, you’re not going to get the device performance you need,” Stromberg said.

Old OSATs learning new tricks

While the widespread adoption of TSVs remains a question, the larger OSATs have nevertheless been making preparations for a more widespread adoption, climbing a steep and expensive learning curve. As Pendse observed, I/O densities required in advanced assemblies and packaging also require technologies that are outside the realm of traditional packaging.

TSVs connecting two die within a package through a thin passive interposer layer—so-called 2.5D tech—aren’t far from what advanced packaging houses have already been doing, he said. But exposing TSVs used to connect die stacked on top of each other—true 3D—involves something fairly new to the OSATs.

In general there are different methods and technologies for implementing TSVs. To put it simply, these vary with the application or chips involved—say, memory or logic—and the type of packaging that will ultimately be used. Whether it can or should be done in the fab or at the OSAT depends on the specific method of TSV formation and whether or not the OSAT has the capability. Economics, as always, also come into play.

But much of the TSV work currently being done in the chip industry is old hat to the MEMS industry. The concept involves middle-end-of-line (MEOL) processes done at OSATs. While some of the tools and processes involved are familiar from wafer-level packaging methods, such as wafer bumping, TSV formation requires wafer etch, vapor deposition and some element of polish: not just grinding, but chemical mechanical planarization (CMP). And regardless of the type of TSV implementation, all involve exposing vertical copper vias.

“Four years ago, no one thought OSATs would do something in this area,” said Sesh Ramaswami, managing director of TSV and advanced packaging product development at equipment maker Applied Materials. “But for their own market growth and survival, they have to participate somewhere in the TSV adoption.”

And that’s meant a substantial investment for the handful of OSATs that endeavor to be players in TSVs, not to mention part of the aforementioned headache. A single TSV production line can cost somewhere in the vicinity of $30 million. CMP tools don’t come cheap.

Unless costs are recouped within the first couple of years, such an investment can become a financial burden, STATS ChipPAC’s Pendse said. As noted above, other than FPGAs and some high-end memory applications, the market for TSV applications hasn’t really blossomed in the current 2013 to 2014 time frame as many had originally predicted. But if OSATs want to be able to expose vertical copper vias in stacked/3D devices, it’s necessary. With MEOL processes, vias only 50 to 100 microns deep must be exposed in the backside of a wafer that’s approximately 750 microns thick.

“It has to be granular enough to expose these vias; it can’t just wipe them out,” Pendse said. Hence the use of CMP. “We’ve never used CMP in packaging before,” he added.
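The removal arithmetic explains why. Here is a rough sketch, with only the wafer thickness and via depth taken from the figures above; the grind margin is an assumed number:

```python
# Back-of-envelope via-reveal budget: coarse-grind most of the wafer quickly,
# then switch to gentler CMP for the final microns that expose the vias.
wafer_um     = 750   # starting wafer thickness, per the figure above
via_depth_um = 100   # deepest vias, per the 50-100 micron range above
grind_margin = 20    # assumed cushion left above the via tips after grinding

grind_removal = wafer_um - via_depth_um - grind_margin
print(f"Coarse grind: remove ~{grind_removal} um")
print(f"CMP:          remove final ~{grind_margin} um to expose the vias")
```

Grinding alone is too brutal for those last microns; CMP takes the wafer gently down to the copper.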

After this step, there is the handling of the thinned wafer, which in some cases needs to subsequently be metalized and wafer bumped, with temporary bonding and debonding processes. “That’s also new to us, handling the thin wafer,” Pendse said.

These steps also involve more stringent clean room requirements than what OSATs are used to.

So perhaps not surprisingly, there are only a handful of OSATs that currently have this capability. Applied Materials has been working with several over the past few years, said Ramaswami. It has required more than the traditional tool sales and support an IDM or foundry receives, given the integration challenges. “Wafer thinning isn’t straightforward,” he said. “It requires some special knowledge.”

Furthermore, OSATs haven’t been able to rely on their customers, most of whom are naturally fabless chipmakers lacking the necessary in-house expertise. “How do we develop this capability? I’d say 50% we borrow from … in-house,” Pendse said, noting STATS ChipPAC’s expertise in fan out. The remainder has meant hiring people with expertise in the necessary areas.

Editor’s Note: As explained at length elsewhere on this site, this is a news story written by me for another publication. It originally appeared on Semiconductor Engineering, which holds the copyright, of course.

ATE Market Changes With The Times

A consumer-device-driven chip industry demands more known good die and quick time to market.

A declining PC market in recent years coupled with the continuing growth of mobile phones and tablets has meant changes throughout the semiconductor supply chain, and automated test equipment is no exception.

For example, a decade ago memory test—namely DRAM—was a large market compared with that of nascent system-on-a-chip (SoC) testing. In fact, at the time some test executives questioned the marketing hubbub over SoCs. Of course, the PC was still king, even in a post-dotcom-bubble world. Smartphones were still expensive and uncommon outside the business world, while tablet computers were a rarity (and still thick and heavy).

By 2008, however, the SoC test market and the memory test market were essentially the same size, as the market for consumer devices continued to grow, led by handsets.

In the ensuing years SoC test continued to outgrow memory test. In 2012 the memory test market was $362 million, while the SoC test market was $1.7 billion, according to Mark Stromberg, a semiconductor ATE analyst with Gartner Inc. The firm forecasts that the SoC test market will continue to outstrip memory test: the memory test market will hit $620 million by 2017, while the SoC test market will reach $2.85 billion. In fact, at an annual growth rate of 2.5 to 3 percent between 2012 and 2017, the SoC test market is set to slightly outpace the overall market growth for semiconductor ATE.

[Chart: Worldwide Shipments by Device]

While the overall memory test market may be declining in terms of annual growth, the use of NAND flash in all those phones and tablets has driven an increase in demand for NAND ATE. “NAND testers have really kind of accelerated nicely,” said Stromberg. “It’s a really strong market this year.”

As the markets for test have changed, so have the players. As elsewhere in the semiconductor supply chain, there are considerably fewer of them today than there were a decade ago, as exits and mergers have reduced their numbers.

Viewed in terms of sales, there are two major semiconductor ATE vendors, Advantest Corp. and Teradyne Inc., with LTX-Credence a distant third. Advantest completed its merger with Verigy (itself the former semiconductor test business spun out of Agilent Technologies) a year and a half ago; last month at Semicon West it debuted the T5831, its first product developed since that merger. Not surprisingly, Advantest is billing the T5831 as, among other things, an advanced NAND tester.

No Time to Lose

Of course, some things never change. Cost of test, time to yield and time to market remain primary drivers for ATE, and likely always will. Each generation of tester can test more devices in parallel than the previous one. Today, memory testers can test some 1,000 devices in parallel, while non-memory ATE and probe cards have evolved to test as many as 16 to 32 devices in parallel.
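Parallelism maps directly onto cost of test; here is a quick sketch using the site counts above (tester cost and test time are assumed, illustrative numbers):

```python
# Cost per device tested, as a function of parallelism (sites per touchdown).
tester_cost_per_hr = 300.0  # assumed fully loaded cost of a test cell, $/hr
test_time_s        = 10.0   # assumed test time per touchdown, seconds

for label, sites in (("memory ATE", 1000), ("SoC ATE", 32)):
    devices_per_hr = sites * 3600 / test_time_s
    print(f"{label:10s} {sites:4d} sites -> ${tester_cost_per_hr / devices_per_hr:.4f}/device")
```

Under identical assumptions, the thousand-site memory tester comes out roughly 30 times cheaper per device than the 32-site SoC tester, which is why site count is a headline spec.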

But mobile devices, which have given rise to the prevalence of not just SoCs and NAND flash but multi-chip modules and packages, are providing new challenges and drivers for ATE companies.

“The thing we are seeing becoming more important over the last two years is that our customers who are dealing with (their) Tier 1 customers, large handset manufacturers and computer manufacturers are beginning to institute really strict quality standards,” said Greg Smith, computing and communications business unit manager at Teradyne.

These customers are striving for extremely low defective-parts-per-million (DPPM) levels, namely because these consumer-driven markets move and react extremely fast. Customers playing in mobile consumer end markets often want to move from sample devices into volume production within the span of one quarter—just three months, noted Gary Fleeman, vice president of marketing at Advantest.

The fact that end markets react quickly is a factor as well. Take, for example, the introduction of a new name-brand flagship mobile phone, such as a Samsung Galaxy or Apple iPhone, which will sell 100 million units within a few months of first hitting the market. Even with a relatively low DPPM of 100, that translates into 10,000 customers with defective devices, observed Smith. Those customers will spread the news of their faulty devices via the Internet and social networks.
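Smith’s arithmetic checks out directly:

```python
# The DPPM math described above: defect rate times shipped volume.
units_shipped = 100_000_000  # flagship handset volume cited above
dppm          = 100          # defective parts per million

print(f"{units_shipped * dppm // 1_000_000:,} defective units in the field")
```

That is 10,000 defective units, each one a potential public complaint.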

“Because of how connected the world is, you can end up with these relatively low-rate problems becoming a big reputation problem,” he said, citing Apple’s notorious iPhone antenna issue. “All of these suppliers to Tier 1 developers of smartphones and tablets understand the asymmetric risk of a quality problem.” Thus those concerns filter down to ATE suppliers.

This pressure for low defectivity in a timely manner is a particularly thorny issue for ATE vendors when dealing with NAND flash. Ubiquitous NAND devices have become so dense and complex, and manufacturing turnaround times so fast, that it’s virtually impossible to fabricate a perfect NAND device. It’s up to the device controller to manage the errors.

This can lead to an unwanted increase in test times, said Ira Leventhal, senior director of R&D for Advantest’s Americas Memory unit. In response to this problem, the company designed its new tester, the T5831, to provide error-related analysis in the background while the device is in operation under test. The tester also features a real-time source-synchronous interface in which the device under test provides timing clock data to the tester while it is itself being tested.

Interconnects and stacked devices

While managing the ever-present time-to-market and test-cost issues, ATE vendors also have to keep an eye on the near future. Current multi-chip modules and packages, stacked packages and 3D packaging are keeping ATE vendors on their toes. “This is the age of interconnect,” observed Advantest’s Fleeman. “Even conservative businesses like automotive (electronics) are moving into multi-chip packaging and multi-chip dies.”

While packages have gotten more complex and interconnects more dense, the end products they are going into keep getting smaller and thinner, which means the packages have to be thinner as well, and consequently more delicate. “It’s changing the handling environment,” Fleeman said. “Handlers aren’t sexy, they’re utilitarian, but we have to think about it,” he added. Thermal issues are also more prevalent than ever, thanks to more powerful devices in ever-thinner packages.

The need for dense interconnects, coupled with corresponding technologies such as copper micropillars, is bringing further challenges, particularly for probe card makers. “Companies like Amkor are doing a good job of bringing dense contacts to the industry,” Fleeman said, noting a single device may contain some 10,000 to 20,000 delicate copper micropillars. “Contact is becoming quite challenging.”

Through-silicon vias (TSVs) and 3D ICs are another potential headache for ATE vendors. “We’ve spent a fair amount of time thinking about it, but it is still very much up in the air,” said Teradyne’s Smith.

The attraction of TSV and 3D methodologies is the potential to create a device that contains stacked memory on top of a mobile processor, for example. Such a device would provide memory with lower power requirements yet greater bandwidth than what is possible today. “That’s the Holy Grail. That’s what people have been trying to achieve,” said Smith. While no one has achieved such a device just yet, the efforts have nevertheless driven a lot of innovation among memory makers.

And anytime you stack assemblies of devices before they are packaged together, testing those devices naturally gets more complex. This is driving more test to the wafer level, to ensure the devices going into those assemblies and packages are good; the potential problem with multi-chip devices is that if one chip is bad, the entire device is bad. It’s also driving the expanded use of test methods such as boundary scan and built-in self-test (BIST), which ATE will have to support.
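For a flavor of what BIST hardware actually does, here is a minimal software model of a linear-feedback shift register (LFSR), the classic on-chip pseudo-random pattern generator behind many logic BIST schemes. The 4-bit width and tap positions are a textbook example, not tied to any product mentioned here:

```python
# Model of a 4-bit Fibonacci LFSR: a few flip-flops generate pseudo-random
# test vectors on-chip, so external ATE only needs to check a compacted
# signature rather than drive every pattern through the pins.
def lfsr_patterns(seed=0b1001, taps=(3, 2), width=4):
    state = seed
    for _ in range((1 << width) - 1):  # maximal-length: 2^n - 1 distinct states
        yield state
        feedback = 0
        for t in taps:                 # XOR the tapped bits
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)

for vector in lfsr_patterns():
    print(f"{vector:04b}")
```

A real BIST controller pairs a generator like this with a signature register that compacts the responses, so pass/fail comes down to comparing one value.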

Then there is the need to test such a completed device or module as a system. “Imagine you have a baseband processor, RF chip, a power management chip, and some memory, and it’s all stacked into this complex 3D IC,” said Smith. “The best way to ensure quality is to perform all of those functions involved on all of the die at the same time, the equivalent of placing a call, browsing the Web, or sending a text message. It’s driven us to add features to our testers to communicate in the protocols of the devices in real time. We’ve developed our current generation of testers to handle this type of stuff.”

It’s still relatively specialized, and consequently small, parts of the market that utilize TSVs and 3D ICs, such as MEMS and certain image sensors. But as for highly complex digital devices using these technologies, “We’re still waiting for that to emerge as a real factor,” Smith said.

Editor’s Note: As explained at length elsewhere on this site, this is a news story written by me for another publication. It originally appeared on Semiconductor Engineering, which holds the copyright, of course.

Foundry Models In Transition

Market forces have forced some foundries to the cutting edge—and left huge opportunities for others.

There may have been a time when AMD founder Jerry Sanders’ famous quote, “real men (i.e., real companies) have their own fabs,” rang true, but in today’s business climate it seems quaint at best.

Fabless and fab-lite business models are more popular than ever, while some IDMs have turned back the clock, so to speak, looking to improve capacity utilization and revenues by offering foundry services—Intel and Samsung among them. Then there is the fact that the third-largest chipmaker in 2012, in terms of revenue, was a pure-play foundry.

As the 28nm node capacity ramp continues in the foundry market in 2013, following unexpected demand and capacity bottlenecks in 2012, today’s foundry market is the end result of market trends and forces with old roots. But those trends and forces have been compounded in modern times by extreme financial and market necessities, not to mention technology.

In one sense, however, at its core, the foundry market hasn’t changed since Taiwan Semiconductor Manufacturing Co. (TSMC) launched as the industry’s first pure-play foundry in 1987: Chip companies look to foundries, either as a customer or as a provider, to maximize productivity and thereby minimize costs. That part of the game hasn’t changed, whether it involves a component supplier designing power modules with 0.18-micron design rules for manufacturing on 200mm wafers, or one of the two GPU giants producing their next-generation graphics processors based on the latest technology.

The trend for years now has been fabless or fab-lite; even Sanders’ own AMD spun out its manufacturing arm several years ago to create one of the world’s largest pure-play foundries, GlobalFoundries. This has naturally in turn spawned the growth of the pure-play foundry market from its birth some 26 years ago.

Indeed, last year the overall foundry market enjoyed revenues of $29.6 billion, managing year-over-year growth of 12%, three times that of the chip industry overall in 2012. That growth caught everyone by surprise, including the foundries themselves; 28nm capacity was tight for much of the year, even as yields improved dramatically—so much so that it reportedly impacted some capital equipment purchases, in spite of the tight foundry capacity.

But that illustrates the biggest and most obvious change in the foundry industry in modern times: The foundries themselves are directly involved in developing leading-edge semiconductor technology. In fact, with the industry looking at the end of planar CMOS at the leading edge for some devices, given the advent of 3D transistor architectures and the high-k materials they require, leading foundries no longer can rely on a mix of conventional scaling, publicly available data, and equipment and process technology suppliers to get their jobs done. Research and development now must be within their purview, at least for those playing at the leading edge.

“Historically foundries don’t do R&D; their clients do it,” noted Dean Freeman, a research vice president at Gartner. That’s not so today.

Nothing illustrates that fact better than TSMC’s R&D budget. In 2012 the company spent 33.8 billion NT, or about $1.13 billion, on R&D. This year the company plans to spend 40.4 billion NT, or about $1.35 billion, which includes adding some 500 people to its R&D staff, bolstering it from 3,400 people to 3,900.

Indeed, leading foundries have joined the leading IDMs and technology consortia as purveyors of—not just manufacturers of—advanced technology.

While TSMC and its foundry brethren in the first tier of the pure-play market—GlobalFoundries and United Microelectronics Corp. (UMC)—continue to build out 28nm capacity, they are also hard at work on the 20nm node and subsequent hybrid 16nm/14nm finFET processes based on a 20nm back end of line. In fact, TSMC just announced first tapeouts of an ARM Cortex-A57 processor, based on the 64-bit ARMv8 architecture and built with 16nm transistor technology, including finFETs. This followed its rival’s announcement of a few months earlier: in February, GlobalFoundries announced a “first implementation” of a dual-core ARM Cortex-A9 processor using the company’s 14nm-XM finFET transistor architecture.

Follow the money

Being on the very leading edge of technology is driving growth among the first-tier foundries.

Like many others in the industry, TSMC and its chairman and CEO, Morris Chang, are quite bullish on the continued demand for 28nm technology as well as the development of 20nm technology. In general, 28nm designs, with their combination of lower power consumption and speedier transistors, have consequently proven cost-effective for a chip industry currently driven by mobile devices—smartphones, tablets and ultra lightweight notebooks. During TSMC’s review of its 2012 results earlier this year, Chang said the company will continue to aggressively grow its 28nm capacity and output; 2013 capacity and output will triple that of 2012, he said.

“It’s all about lower power with functionality and no sacrifice on the power requirements,” observed Kathryn Ta, managing director of strategic marketing for Applied Materials’ Silicon Systems Group. The equipment and process technology supplier’s foundry customers are seeing a need to move to 3D transistor architectures with minimal leakage, she said, because of those power requirements.

Development will continue at 20nm and 16nm as well at TSMC and its rivals. This year, 88% of the $9 billion TSMC will spend on capital expenditures will go to 28nm, 20nm and 16nm capacity; another 5% will go to R&D equipment. Chang predicted that by Q3 of this year, high-k metal gate production will surpass that of standard oxynitride gates, a gap that naturally will widen in Q4 and beyond.

“Enough discussions have taken place with enough customers … to lead us to believe that in both its first and second year of production (2014 and 2015, respectively) the volume of 20nm SoCs will be larger than that of 28nm in its first and second years of production (2012 and 2013),” Chang said.

He further noted that this represented the state of the art, and not just for the foundry industry, but for the industry as whole. This may indeed prove to be true in a few years as those 20nm and 16nm/14nm SoC devices move into production. It’s a far cry from the days when foundries were traditionally technological also-rans.

But the first-tier foundries at the leading edge are still playing catch-up with the IDMs at the leading edge, namely Intel. The world’s biggest chipmaker has kept Moore’s Law on track on the CPU side of the ITRS roadmap, having brought its Ivy Bridge processors to market last year. These feature 22nm transistors, replete with finFETs. Intel’s own roadmap calls for 14nm designs to be in production in 2014; in terms of mobile SoCs like those the foundries are talking about, the company has promised its 22nm Atom SoCs will be in production in 2015.

“Intel seems to be able to continue to shrink because they spend a fortune on R&D,” said Gartner’s Freeman. “The foundries are pushing hard to catch up.” He noted that while both GlobalFoundries and TSMC have 16nm/14nm chips featuring finFETs in development, they are taking a shortcut, so to speak, by employing 20nm metal interconnects. “It’s close to what Intel is doing. Intel’s design may be more sophisticated, but the lithography is the same.”

Plenty of room, and business, at the trailing end

But not everybody in the foundry market is playing at the leading edge. The same market and industry forces that have induced the bigger pure-play foundries to move beyond their historical roles also have created a two-tiered pure-play foundry market. In the first tier are those with the deep pockets to play in this space: TSMC, GlobalFoundries, UMC and, to a lesser extent, China’s Semiconductor Manufacturing International Corp. (SMIC).

Then there are the second-tier companies, those still fulfilling a traditional foundry role—at trailing-edge processes, but nevertheless offering needed, even essential, semiconductor manufacturing technology and capacity. Indeed, many second-tier foundries do quite well with their particular market niches and technologies. In the world of mobile consumer gadgets, including but not limited to smartphones and tablets, there are still many components fabricated with established, trailing-edge technology, such as sensors, microcontrollers and power components.

Even in 2013, where CPUs with 22nm transistors and mobile SoCs with 28nm transistors represent the current state of the art, some 40% of all silicon used to manufacture chips goes into mature devices fabricated on 200mm wafers. That’s typically 0.18-micron designs or larger. And much, if not most, of that is coming from pure-play foundries.

At the top of that second-tier foundry market, Israel’s TowerJazz, for example, has found a relatively comfortable niche making high-speed devices for a broad range of consumer applications utilizing 0.13-micron designs and larger. It also makes CMOS image sensors with 0.16- and 0.11-micron design rules. In terms of financials, this has translated into record revenues: last year TowerJazz posted revenues of $638.8 million, an increase of 5% over the previous year.

Freeman suggested there are plenty of opportunities for these second-tier foundries. The so-called “Internet of Things,” for example, is a major driver behind sensor applications, as it is for the controllers needed to coordinate the data these sensors produce—data that can be managed via mobile Internet devices. These supplemental and complementary applications typically don’t need cutting-edge technology.

As has always been the case in the foundry industry, as leading-edge technology becomes trailing-edge, there will be new opportunities for second-tier foundries, as well. Some of the larger second-tier foundries eventually may have the opportunity to compete with first-tier companies head-to-head with 28nm capacity if they have deep-enough pockets to invest.

In the bifurcated smartphone market, for example, low-end smartphones that originally utilized chips manufactured with 40nm technology soon will migrate to chips with 28nm technology, as capacity ramps and it becomes even more cost effective, said Applied’s Ta. Even as the leading-edge players are driven beyond the 28nm node and the adoption of 3D gate architectures, the industry could very well see an extended 28nm node, driven by this market for lower-end smartphones and other mobile devices, she said.

But What About …

Things rarely prove to be so clearly defined in the chip industry. With players such as Samsung, Intel and IBM, among others, flirting with the foundry business, and some of the larger first-tier foundries suffering the same financial headaches that plagued IDMs in the past—problems that drove some of them to a fabless model in the first place—there are some significant unknowns.

While 3D, high-k metal gate architectures (i.e., finFETs and the like) seem to be the wave of the near future, there are still those in the industry who tout the efficacy of fully depleted silicon-on-insulator (FD-SOI), either as an alternative or as a complement to 3D gate technology.

IBM and its technology alliance partners have considered FD-SOI as a possible outcome of the semiconductor technology roadmap in the near future, Ta noted. “We see most of the effort on the finFET/Intel approach, but some of our customers are still talking about SOI,” perhaps used in some combination with finFETs, she added.

Gartner’s Freeman noted that Intel’s finFET devices are already fully depleted, although SOI could conceivably provide a bit less leakage; as such, it may be an option at future nodes. Given the transistor speed and power usage achieved by Intel’s 22nm Atom processors, which are manufactured on bulk silicon, that seems unlikely for Intel and those choosing to follow its lead, though. Freeman further observed that GlobalFoundries, once a proponent of FD-SOI, has backed off somewhat, although some of its largest customers remain committed to an FD-SOI strategy for the foreseeable future. IBM, for one, has publicly stated it will use FD-SOI, finFETs and stacked die together at future nodes.

But what does this mean for the leading-edge foundries? As always they will have to be able to manufacture what their customers want. It may be that some chipmakers will choose to go the FD-SOI route and that could prove a competitive opportunity for any foundry.

Another wild card that the top-tier foundries will need to take into account is the overlapping of technology nodes, which may become more pronounced with the extension of the 28nm node coupled with the rush to get 20nm devices into production. “It’s happening faster than previous node transitions have happened,” said Applied’s Ta, noting that it’s driven by the low-power promise of finFETs. In the past, node transitions typically took two to two-and-a-half years. “This time we may see a 1.5-year transition to finFETs,” she added.

Another question mark in the foundry market is SMIC. While most would still classify the Chinese foundry as top tier, it is in a very real way straddling the gap between first and second tier. The company, once relatively close behind TSMC and UMC, has foundered in red ink and legal woes in recent years. And while it experienced an impressive financial turnaround under current CEO Tzu-Yin Chiu in 2012, its capital expenditures fell dramatically, even as capacity utilization hit 95% in Q2, and it is well behind its rivals in terms of technology.

Customer tapeouts of 28nm devices won’t take place until the end of this year; one of SMIC’s largest domestic customers, Spreadtrum, already has been forced to move to rival TSMC to meet its current plans for 28nm devices.

SMIC’s Chiu has said that the company’s 28nm technology will include both standard polysilicon oxynitride devices and high-k metal gates, and that it has plans to manufacture finFET devices at the 20nm node. In the meantime, it has found a saving grace in applications typically manufactured by second-tier players: smart cards, CMOS image sensors and power management chips.

Which way will SMIC go? Will it continue its impressive turnaround by abandoning the leading edge, or will it continue to play technological catch-up? Or perhaps a little of both?

Time will tell. But it’s certainly an interesting time for the foundry business, and it’s certain that for the foreseeable future the pure-play foundries will have to work hard at the cutting edge of semiconductor technology.

Editor’s Note: As explained at length elsewhere on this site, this is a news story written by me for another publication. It originally appeared on Semiconductor Engineering, which holds the copyright, of course.