Temporary Bonding, Debonding Remains Challenging For TSV Adoption

While compound semis have used TSVs for years, hurdles remain for adoption in high-volume memory and logic apps.

One issue with the adoption of TSVs in 3D ICs in mainstream semiconductor applications revolves around the throughput of the temporary wafer bonding and debonding process. This doesn’t necessarily equate to a roadblock, but work certainly remains to be done on this and related issues.

On one hand, TSVs already are being used in the manufacturing of compound semiconductors and MEMS, and have been for a decade or so. Many of the image sensors used in current consumer devices, for example, employ 3D ICs, and these are being produced in high volume.

In fact, in some products in production today, TSVs have enabled better products, noted Thorsten Matthias, business development director at bonding equipment vendor EV Group (EVG). In the case of image sensors, using TSVs has enabled sensors that produce a better-quality image, or produce an image faster. “Even in some of the cheapest devices, TSVs have been implemented where it makes sense,” Matthias said.

But if one is talking about memory and logic for consumer devices, there are reasons the forecast adoption of TSVs has been pushed out. Before TSVs in stacked silicon are utilized in production, there are still a few hurdles to be cleared. One of the issues is the throughput of the temporary bonding and debonding process. Another significant hurdle for the mainstreaming of TSVs involves wafer-level testing.

“The throughputs on that process are still pretty low,” said Mark Stromberg, an analyst with Gartner Inc. Current systems on the market can manage throughputs of 20 to 25 wafers per hour. Compared to other fab-line tools with a typical throughput of some 60 wafers per hour, that’s a bottleneck in terms of high volume production. “This is one of the reasons TSVs haven’t taken off, especially in the current market conditions,” Stromberg said.

Theoretically, one could add more than one bonding/debonding tool to the flow, but at a cost of several million dollars or more per tool, that is an expensive proposition. However, as with other fab tools, bonding and debonding equipment vendors have added multiple chambers to their tools to improve throughput (as well as to accommodate differing bonding and debonding process flows). Austria-based EV Group, for example, unveiled its EVG850 temporary bonding and debonding (TB/DB) tool last year. Built on the company’s XT Frame platform, the EVG850TB/DB can accommodate up to nine process modules, doubling the process capacity of EVG’s previous temporary bonding and debonding tools.
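As a back-of-the-envelope sketch of the throughput math (the 60 wph and 20 to 25 wph figures come from the discussion above; the midpoint and the module-scaling assumption are illustrative only):

```python
import math

def line_throughput(station_wph):
    """A serial wafer flow runs no faster than its slowest station (wafers/hour)."""
    return min(station_wph)

typical_tool_wph = 60   # typical fab-line tool, per the article
bonder_wph = 22         # assumed midpoint of the 20-25 wph bonder/debonder range

# With a single bonder/debonder in the flow, the whole line is throttled to it.
print(line_throughput([typical_tool_wph, bonder_wph]))  # 22

# Parallel tools (or process modules on one platform) needed to keep pace
# with the rest of the line:
bonders_needed = math.ceil(typical_tool_wph / bonder_wph)
print(bonders_needed)  # 3
```

Which is essentially the motivation for multi-module platforms like the EVG850: parallel chambers raise the effective wafers-per-hour of the bonding step without buying entire additional tools.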

Performance gains vs. cost gains

Still, there is work to be done before temporary wafer bonding and debonding is ready for TSV fabrication in memory and logic production, and not just in terms of throughput. There are a number of process flows that have the potential to be extended to memory production, Matthias said. Even so, when it comes to high-volume production of these devices, the industry is looking at a steep learning curve, moving from, say, thousands of wafers in a production run of compound semiconductors to the tens of thousands of wafers involved in a production run of memory and logic destined for consumer devices such as mobile phones.

“In terms of design, it can be quite challenging,” Matthias observed, referring to the question of whether or not to implement TSVs in such a production environment.

“But I would say the industry has moved beyond the feasibility study phase and is now in the reliability and infrastructure phase,” suggested Matthias’ colleague Thomas Uhrmann, also of EV Group’s business development team. This is particularly true in terms of the throughput question, Uhrmann said, acknowledging that this is a critical issue for wider adoption of TSVs.

But it’s not just a matter of improving throughput. Multiple modules on a temporary bonding and debonding tool are also important for accommodating differing steps and process flows. Another challenge in this process has been the use of adhesives to bond a thin device wafer to a temporary carrier wafer so it can go through the backside thinning and other process steps required in TSV formation.

The thermal characteristics of some adhesives used in the process are such that they can’t survive in the high temperatures of the chemical vapor deposition (CVD) and physical vapor deposition (PVD) processes typically used in the manufacturing of memory and logic devices. There is much work being done within the industry at the moment to improve the thermal stability of adhesives used in the temporary bonding process, while some companies have implemented low temperature CVD/PVD processes that are compatible with the adhesives, Uhrmann said.

“We’re still working on it, the ability to withstand the higher temperatures,” he said. “It’s an ongoing process.”

With logic and memory chipmakers planning definitively—or at least mulling over—the implementation of TSVs in production within two to three years, the type of bonding/debonding used may depend on the specific device involved and the related production costs. Some devices require high process temperatures, and there are some adhesives in production today that can withstand those higher temperatures, Matthias said. There are related practical reasons in terms of production to maintain a process flow with higher process temperatures, Uhrmann suggested, although these are somewhat less necessary than they were three to five years ago.

Furthermore, being able to process at lower temperatures provides additional flexibility in terms of process flows, and most mainstream chipmakers are looking at bonding and debonding processes coupled with CVD/PVD processes in the range of 200 to 320 degrees Celsius, Matthias said.

Uhrmann suggested that while there is still a learning process ahead for the industry when it comes to bonding and debonding and TSV production, as more wafers are processed and more statistical data is gathered, improvements can be made in terms of integration with upstream and downstream production flows. “This will allow us to optimize the temporary bonding and debonding process flow,” he said.

As Matthias summed it up: “It’s very challenging and interesting to work in this field.”

Editor’s Note: As explained at length elsewhere on this site this is a news story written by me for another publication. This originally appeared on Semiconductor Engineering; it holds the copyright, of course.

TSVs: Welcome To The Era Of Probably Good Die

Physical probing of devices using TSVs is proving a challenge to traditional test.

Among the challenges of widespread adoption of 3D ICs is how to test them, particularly when it comes to through-silicon vias (TSVs). While not necessarily presenting a roadblock, TSV use in the mainstream will almost certainly change traditional test strategies.

In fact, many chipmakers looking to stack their silicon may come to rely less on traditional known good die (KGD) at final test and instead opt for so-called “probably good” die.

If one looks at the semiconductor industry as a whole, this issue of testing a device that relies on TSVs is nothing new. Compound semiconductors, such as image sensors, and MEMS have been utilizing TSVs for years. Furthermore, the problems with probing TSVs are not dissimilar to those introduced in years past with advanced packaging.

The use of delicate copper pillar bumps in flip-chip interconnects, for example, also has proven problematic in terms of physical contact during probing at final test. Physical contact can stress and ultimately damage the pillars. “Old-time spring contact probe doesn’t cut it anymore,” said Gary Fleeman, vice president of marketing at Advantest Corp. “It’s becoming difficult to make contact. It is becoming quite challenging.”

So in a sense, the difficulties of testing devices with TSVs aren’t new. As with copper pillars, TSVs can be subject to damage with physical probing. But testing an image sensor and testing a stack of logic die, or something more complex—say a processor with memory—ultimately involve different challenges. “There are certain things that have been learned and can be leveraged,” said Mike Slessor, senior vice president and general manager of the MicroProbe Product Group at probe card maker FormFactor. But the devices are structurally very different, he added. It’s not a matter of simply taking the test strategy from one and applying it to the other.

Furthermore, in terms of stacking silicon, it’s one thing to have a single die with a damaged I/O; it’s something else to have a single damaged die that is part of a stack of several die, particularly if it renders the entire stack defective. At first glance that would seem to imply that 100% KGD are essential. However, as makers of high volume, mainstream semiconductor applications have begun to look to stacked die as a means of continuing device performance gains at advanced manufacturing nodes, in all likelihood it will mean new and different test strategies. Demanding 100% KGD may not prove economical in some cases, or even necessary.
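The yield arithmetic behind that concern is simple compounding: if any one bad die scraps the whole stack, the probability of a good stack is the product of the per-die yields. A minimal sketch, with hypothetical yield numbers:

```python
def stack_yield(die_yield, n_die):
    """If one bad die kills the stack, good-stack probability compounds as die_yield**n."""
    return die_yield ** n_die

# Hypothetical: even die that are 98% good compound quickly in a stack.
print(round(stack_yield(0.98, 1), 4))  # 0.98
print(round(stack_yield(0.98, 4), 4))  # 0.9224
print(round(stack_yield(0.98, 8), 4))  # 0.8508
```

Hence the tension: the taller the stack, the more each untested escape costs, yet testing to 100% KGD has costs of its own.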

But as with so many other aspects of mainstream 3D IC adoption, testing with TSVs is a question mark. “It creates a problem. How are you going to determine KGD?” said Mark Stromberg, semiconductor ATE analyst with Gartner Inc. “The industry is still kind of undecided how it’s going to address the problem,” he said.

Chipmakers are considering and evaluating several different test methodologies with regard to TSVs, Slessor said. In fact traditional physical probing of the device contacts—in this case, TSVs—hasn’t been completely dismissed. But it’s proving difficult, and the prevailing sentiment among MicroProbe’s customers is to avoid it. “We’ve done it, but it’s a challenge,” Slessor said, noting that this is the crux of the TSV/test debate. “If you don’t have to touch it, then you shouldn’t. The jury is still out on whether or not you have to.”

If not known good die, then what?

So if a chipmaker isn’t going to physically probe a device under test and drive current through it, that raises the obvious question: how to test said device? As Slessor said, the jury is still out, but as always, the answer ultimately will come down to cost—both test costs and device manufacturing costs.

Contactless probing would be a potential solution, but so far has proven problematic. “It hasn’t really developed. It isn’t progressing at all,” observed Advantest’s Fleeman.

The methods of contactless probing under consideration involve RF technology, but the physics involved with RF antennas are proving a limiting factor. The high frequencies and power densities of the electrical currents that mainstream semiconductors employ are proving a stumbling block to this method. “The tests require quite a bit more power than what can be generated,” MicroProbe’s Slessor said. “It’s something we continue to play around with,” he added, noting that there are inherent advantages to this approach, particularly as pitches shrink. But the technology won’t be ready anytime soon, he said.

BiST, or built-in self-test, is another option: building extra structures into a device specifically for testing it. This adds complexity to the manufacturing process, however, and thereby cost. Consequently it may not prove to be the best test strategy for low-cost, high-volume device production when it comes to 3D ICs.

“Anything that adds significant potential costs is going to be a potential roadblock,” said Gartner’s Stromberg.

Another method under consideration is the use of specific test points: test or dummy pads placed among the TSVs themselves, or outside of them, that are used to contact and test the device. These can be fabricated in parallel with TSVs, adding relatively little in terms of manufacturing costs, Slessor said. Dummy pads can provide probe access to most of the structures in a device under test (hence the term “probably good die”). The approach also has the benefit of being familiar; it is one that DRAM manufacturers have employed for a long time.

Will known good die prove too expensive?

Whichever test strategy chipmakers adopt may depend on their specific application and the associated costs of fabrication and test. In other words, what proves most cost effective—determining probably good die vs. known good die? In some manufacturing scenarios, particularly among high-yield devices such as memory, it may prove cheaper to depend on a probably good die test strategy, even though it means some yield loss at final packaging, as the cost of that loss would still be less than that of testing for 100% KGD prior to packaging.

This, of course, differs from tradition; wafer probe originally was designed to sort good die from bad die prior to packaging, sending 100 percent KGD off to be packaged.
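That trade-off can be framed as a simple expected-cost comparison. All of the numbers below are hypothetical, purely to illustrate how a probably-good-die strategy can beat 100% KGD on cost despite scrapping some packaged stacks:

```python
def cost_per_good_stack(test_cost_per_die, package_cost, n_die, packaged_yield):
    """Expected cost of one shippable stack: test every die, package the stack,
    and amortize the stacks lost to escapes over the good ones."""
    build_cost = n_die * test_cost_per_die + package_cost
    return build_cost / packaged_yield

n = 4  # hypothetical 4-die stack
# Full KGD: thorough (expensive) wafer test, essentially no escapes.
kgd = cost_per_good_stack(test_cost_per_die=1.50, package_cost=4.00,
                          n_die=n, packaged_yield=0.999)
# Probably good die: cheap dummy-pad test, a few percent of stacks scrapped.
pgd = cost_per_good_stack(test_cost_per_die=0.25, package_cost=4.00,
                          n_die=n, packaged_yield=0.96)
print(round(kgd, 2), round(pgd, 2))  # 10.01 5.21
```

The crossover obviously moves with real test costs, die yields and package costs; for high-yield devices such as memory, the cheap-test column tends to win, which is the point made above.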

“I don’t see it being a major roadblock to (widespread) 3D adoption,” said Slessor. “Instead of a roadblock, we’re looking at changes in test strategies.”

A lot depends on the combination of die involved in a 3D stack and the nature of the individual die along with what can and cannot be tested via dummy pads (or some other strategy alternative to physically probing TSVs). “Can they drive enough power through a device under test through dummy pads to get the results they want? That’s the question,” Slessor said, adding that there are related methods to infer the “goodness” of TSVs.

In terms of memory, this approach almost certainly will work, he said—with a combination of a stacked processor and memory, or stacked FPGAs, it should work just fine in most cases. With 3D ICs involving high-performance RF die, perhaps not.

In any event, the era of probably good die may be on the horizon.


Front End Comes To The Back End

The adoption of through silicon vias has meant putting front-end wafer fab tools and processes in assembly and test houses.

For outsourced assembly and test (OSAT) houses either planning for or already offering through-silicon via (TSV) capability for their 3D packaging efforts, this has meant the front end is coming to the back end, in a manner of speaking.

A bit of an exaggeration perhaps, as most generalizations are. But thanks to TSVs, in a very real sense some of what would typically be the last steps involved in front-end wafer fab processes are also being implemented at OSATs, the traditional purveyors of back-end packaging, assembly and test.

Whether this expensive investment will pay off for them in the long run remains to be seen.

As always, the questions are “When” and “If”

TSVs have proved a bit of a headache for the industry in general and OSATs in particular, as the technology — really several technologies or methodologies — has generated a lot of hype and consequently research in the last several years, but has yet to see widespread adoption. As always in the semiconductor industry, this has been because of a combination of factors: economics, chipmakers’ roadmaps and more expedient technical or economic solutions available in the near term, such as so-called 2.5D IC technology.

In fact, talk out of the recent Semicons—West and Taiwan—indicates there won’t be widespread industry adoption of TSVs in 3D ICs until about the 2016 time frame, or beyond the planar 20nm node.

Currently in terms of TSVs, the market largely comprises FPGAs using 2.5D technology, namely from Xilinx and Altera. There has been some use of vertically stacked memory to date, but only in the high-end server market, said Mark Stromberg, a principal research analyst at Gartner Inc.

Stacked memory may find its way into higher end communications products in 2014, but it will likely be 2015 or later before it could become widespread, Stromberg said. It won’t be until 2016 at the earliest that the chip industry could see TSVs used to connect multiple groups of stacked die in a single 3D package, such as processors, a graphics processor, memory and peripheral logic.

One of the reasons 2.5D has come into play in the FPGA space is the die sizes involved: larger than 20mm on a side; in high-end applications the economics consequently make sense, said Raj Pendse, vice president and chief marketing officer at STATS ChipPAC Ltd. In the coming years, when die sizes get below 20mm, it’s possible the market will then see 3D ICs utilizing TSVs in mobile applications processors.

“If it becomes real, beyond a critical-mass level, TSVs will continue beyond 16nm,” Pendse said. “This is providing a new dimension to scaling and Moore’s Law. That is a tremendous benefit,” largely in increased I/O bandwidth available, he said.

While TSVs are and could continue to prove a boon to makers of ICs for computing applications — FPGAs and ASICs — mobile device makers, and consequently consumer OEMs, have mixed feelings, Pendse said. They are naturally most concerned with what will enable them to stick to their various roadmaps in the most economical manner. Current alternatives in 3D-ICs and packaging, such as extending fan-out wafer level packaging, or future alternatives, such as new packaging substrates, may provide more cost-effective means of getting the device performance needed.

On the other hand, there seems to be little doubt in terms of consensus that 3D ICs are the wave of the immediate future. “At 15nm, if you’re not vertically integrating the silicon, you’re not going to get the device performance you need,” Stromberg said.

Old OSATs learning new tricks

While the widespread adoption of TSVs remains a question, the larger OSATs have nevertheless been making preparations for a more widespread adoption, climbing a steep and expensive learning curve. As Pendse observed, I/O densities required in advanced assemblies and packaging also require technologies that are outside the realm of traditional packaging.

TSVs connecting two die within a package through a thin passive interposer layer—so-called 2.5D tech—aren’t far from what advanced packaging houses have already been doing, he said. But exposing TSVs used to connect die stacked on top of each other—true 3D—involves something fairly new to the OSATs.

In general there are different methods and technologies for implementing TSVs. To put it simply, these vary with the application or chips involved—say, memory or logic—and the type of packaging that will ultimately be used. Whether it can or should be done in the fab or at the OSAT depends on the specific method of TSV formation and whether or not the OSAT has the capability. Economics, as always, also come into play.

But much of the TSV work currently being done in the chip industry is old hat to the MEMS industry. The concept involves middle-end-of-line (MEOL) processes done at OSATs. While some of the tools and processes involved are familiar from wafer level packaging methods, such as wafer bumping, TSV formation requires wafer etch, vapor deposition and some element of polish, and not just grinding, but chemical mechanical planarization (CMP). And regardless of the type of TSV implementation, they all involve exposing vertical copper vias.

“Four years ago, no one thought OSATs would do something in this area,” said Sesh Ramaswami, managing director of TSV and advanced packaging product development at equipment maker Applied Materials. “But for their own market growth and survival, they have to participate somewhere in the TSV adoption.”

And that’s meant a substantial investment for the handful of OSATs that endeavor to be players in TSVs, not to mention part of the aforementioned headache. A single TSV production line can cost somewhere in the vicinity of $30 million. CMP tools don’t come cheap.

Unless costs are recouped within the first couple of years, such an investment can become a financial burden, STATS ChipPAC’s Pendse said. As noted above, other than FPGAs and some high-end memory applications, the market for TSV applications hasn’t really blossomed in the current 2013 to 2014 time frame as many had originally predicted. But if OSATs want to be able to expose vertical copper vias in stacked/3D devices, it’s necessary. With MEOL processes, vias only 50 to 100 microns deep must be exposed in the backside of a wafer that’s approximately 750 microns thick.

“It has to be granular enough to expose these vias; it can’t just wipe them out,” Pendse said. Hence the use of CMP. “We’ve never used CMP in packaging before,” he added.
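The numbers above imply a substantial removal budget. A quick sketch of the backside arithmetic (the 5-micron over-thinning margin is an assumption for illustration; real reveal processes vary):

```python
wafer_um = 750        # starting wafer thickness, per the article
via_depth_um = 50     # shallow end of the 50-100 micron via range
reveal_margin_um = 5  # assumed over-thinning so the via tips are exposed

final_thickness_um = via_depth_um - reveal_margin_um   # 45 microns of silicon left
silicon_removed_um = wafer_um - final_thickness_um     # 705 microns to take off
print(final_thickness_um, silicon_removed_um)  # 45 705
```

Grinding removes the bulk quickly but too roughly to stop precisely on 50-micron vias, which is why the final microns fall to CMP.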

After this step, there is the handling of the thinned wafer, which in some cases needs to subsequently be metalized and wafer bumped, with temporary bonding and debonding processes. “That’s also new to us, handling the thin wafer,” Pendse said.

These steps also involve more stringent clean room requirements than what OSATs are used to.

So perhaps not surprisingly, there are only a handful of OSATs that currently have this capability. Applied Materials has been working with several over the past few years, said Ramaswami. It has required more than the traditional tool sales and support that an IDM or foundry receives, given the integration challenges. “Wafer thinning isn’t straightforward,” he said. “It requires some special knowledge.”

Furthermore, OSATs haven’t been able to rely on their customers, most of whom are naturally fabless chipmakers lacking the necessary in-house expertise. “How do we develop this capability? I’d say 50% we borrow from … in-house,” Pendse said, noting STATS ChipPAC’s expertise in fan out. The remainder has meant hiring people with expertise in the necessary areas.


ATE Market Changes With The Times

A consumer-device-driven chip industry is fueling demand for more known good die and quick time to market.

A declining PC market in recent years coupled with the continuing growth of mobile phones and tablets has meant changes throughout the semiconductor supply chain, and automated test equipment is no exception.

For example, a decade ago memory test—namely DRAM—was a large market compared with that of nascent system-on-a-chip (SoC) testing. In fact, at the time some test executives questioned the marketing hubbub over SoCs. Of course the PC was still king at the time, even in a post-dotcom-bubble world. Smartphones were still expensive and uncommon outside the business world, while tablet computers were a rarity (and still thick and heavy).

By 2008, however, the SoC test market and the memory test market were essentially the same size, as the market for consumer devices continued to grow, led by handset growth.

In the ensuing years SoC test continued to outgrow memory test. In 2012 the memory test market was $362 million, while the SoC test market was $1.7 billion, according to Mark Stromberg, a semiconductor ATE analyst with Gartner Inc. The company forecasts that the SoC test market will continue to outstrip that of memory: the memory test market will hit $620 million by 2017, while the SoC test market will reach $2.85 billion. In fact, at an annual growth rate of 2.5 to 3 percent between 2012 and 2017, the SoC test market is set to slightly outpace the overall market growth for semiconductor ATE.

[Chart: Worldwide Shipments by Device]

While the overall memory test market may be declining in terms of annual growth, the use of NAND flash in all those phones and tablets has driven an increase in demand for NAND ATE. “NAND testers have really kind of accelerated nicely,” said Stromberg. “It’s a really strong market this year.”

As the markets for test have changed, so have the players. Like elsewhere in the semiconductor supply chain, today there are considerably fewer than there were a decade ago, as exits or mergers have reduced their numbers.

Viewed in terms of sales there are two major semiconductor ATE vendors, Advantest Corp. and Teradyne Inc., with LTX-Credence a distant third. Advantest, incidentally, completed its merger with Verigy (itself the former semiconductor test business spun out from Agilent Technologies) a year and a half ago; it debuted the first product developed since that merger last month at Semicon West, the T5831. Not surprisingly, Advantest is billing the T5831 as, among other things, an advanced NAND tester.

No Time to Lose

Of course some things never change. Cost-of-test, time-to-yield and time-to-market remain primary drivers, and likely always will for ATE. Each generation of tester seems to be able to test more devices in parallel than the previous generation. Today memory testers can test some 1,000 devices in parallel, while non-memory ATE and probe cards have evolved to test as many as 16 to 32 devices in parallel.
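The economics of that parallelism are straightforward: per-device test time, and thus cost of test, scales inversely with the number of devices tested at once. A trivial sketch, using an assumed 60-second test program and the parallelism figures above:

```python
def per_device_test_time(program_s, sites):
    """Effective tester seconds consumed per device when `sites` devices
    run the same test program in parallel."""
    return program_s / sites

program_s = 60  # hypothetical test-program length, for illustration only
print(per_device_test_time(program_s, 1000))  # memory ATE at 1,000 in parallel
print(per_device_test_time(program_s, 32))    # SoC ATE at 32 in parallel
```

The two orders of magnitude between those figures go a long way toward explaining why each tester generation pushes parallelism higher.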

But mobile devices, which have given rise to the prevalence of not just SoCs and NAND flash but multi-chip modules and packages, are providing new challenges and drivers for ATE companies.

“The thing we are seeing becoming more important over the last two years is that our customers who are dealing with (their) Tier 1 customers, large handset manufacturers and computer manufacturers are beginning to institute really strict quality standards,” said Greg Smith, computing and communications business unit manager at Teradyne.

These customers are striving for extremely low defective-parts-per-million (DPPM) levels, largely because these consumer-driven markets move and react extremely fast. Customers playing in mobile consumer end markets often want to move from sample devices into volume production within the span of one quarter—just three months, noted Gary Fleeman, vice president of marketing at Advantest.

The fact that end markets react quickly is a factor, as well. Take for example the introduction of a new name-brand flagship mobile phone, such as a Samsung Galaxy or Apple iPhone, which will sell 100 million units within a few months of first hitting the market. Even with a relatively low DPPM of 100, that translates into 10,000 customers, observed Smith. Those customers will spread the news of their faulty device via the Internet and social networks.

“Because of how connected the world is, you can end up with these relatively low-rate problems becoming a big reputation problem,” he said, citing Apple’s notorious iPhone antenna issue. “All of these suppliers to Tier 1 developers of smartphones and tablets understand the asymmetric risk of a quality problem.” Thus those concerns filter down to ATE suppliers.
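The arithmetic behind that example is worth spelling out:

```python
def affected_units(units_shipped, dppm):
    """Defective units implied by a given defective-parts-per-million level."""
    return units_shipped * dppm // 1_000_000

# 100 million units at 100 DPPM, per the example above:
print(affected_units(100_000_000, 100))  # 10000
```

Even an order-of-magnitude improvement to 10 DPPM would still leave a thousand unhappy, well-connected customers, which is why Tier 1 quality targets keep tightening.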

This pressure for low defectivity in a timely manner is a peculiar issue, perhaps, for ATE vendors when dealing with NAND flash. Ubiquitous NAND devices have become so dense and complex, and manufacturing turnaround times so fast, that it’s virtually impossible to fabricate a perfect NAND device. It’s up to the related device controller to manage the errors.

This can lead to an unwanted increase in test times, said Ira Leventhal, senior director of R&D for Advantest’s Americas Memory unit. In response to this problem, the company designed its new tester, the T5831, to provide error-related analysis in the background while the device is in operation under test. The tester also features a real-time source-synchronous interface in which the device under test provides timing clock data to the tester while it is itself being tested.

Interconnects and stacked devices

While managing the ever-present time-to-market and cost-of-test issues, ATE vendors also have to keep an eye on the near future. Current multi-chip modules and packages, along with stacked or 3D packaging, are keeping ATE vendors on their toes. “This is the age of interconnect,” observed Advantest’s Fleeman. “Even conservative businesses like automotive (electronics) are moving into multi-chip packaging and multi-chip dies.”

While packages have gotten more complex and interconnects more dense, the end products they are going into keep getting smaller and thinner, which means the packages have to be thinner as well, and consequently more delicate. “It’s changing the handling environment,” Fleeman said. “Handlers aren’t sexy, they’re utilitarian, but we have to think about it,” he added. Thermal issues are also more prevalent than ever, thanks to more powerful devices in ever-thinner packages.

The need for dense interconnects coupled with the use of corresponding technologies such as copper micropillars are bringing further challenges, particularly for probe card makers. “Companies like Amkor are doing a good job of bringing dense contacts to the industry,” Fleeman said, noting a single device may contain some 10,000 to 20,000 delicate copper micropillars. “Contact is becoming quite challenging.”

Through-silicon vias (TSV) and 3D ICs are another potential headache for ATE vendors. “We’ve spent a fair amount of time thinking about it, but it is still very much up in the air,” said Teradyne’s Smith.

The attraction of TSV and 3D methodologies is the potential to create a device that contains stacked memory on top of a mobile processor, for example. Such a device would provide memory with lower power requirements yet greater bandwidth than what is possible today. “That’s the Holy Grail. That’s what people have been trying to achieve,” said Smith. While no one has achieved such a device just yet, the efforts have nevertheless driven a lot of innovation among memory makers.

And anytime you stack assemblies of devices before they are packaged together, the testing of said devices naturally gets complex. This is driving more test to be done at the wafer level to ensure the devices going into those assemblies and packages are good. The potential problem with multi-chip devices is that if one chip is bad, the entire device is bad. It’s also driving the expanded use of such test methods as boundary scan and built-in self test (BIST), which will require ATE to support such methods.

Then there is the need to test such a completed device or module as a system. “Imagine you have a baseband processor, RF chip, a power management chip, and some memory, and it’s all stacked into this complex 3D IC,” said Smith. “The best way to ensure quality is to perform all of those functions involved on all of the die at the same time, the equivalent of placing a call, browsing the Web, or sending a text message. It’s driven us to add features to our testers to communicate in the protocols of the devices in real time. We’ve developed our current generation of testers to handle this type of stuff.”

TSV and 3D IC use is still confined to relatively specialized, and consequently small, parts of the market, such as MEMS and certain image sensors. But as for highly complex digital devices using these technologies, “We’re still waiting for that to emerge as a real factor,” Smith said.


High NA EUV Litho May Require Larger Photomask Size

In the meantime, will the mask supply chain have six-inch EUV masks ready by 2015?

With extreme ultraviolet (EUV) lithography potentially being used in pilot production in a few years, the question of larger photomask sizes arises—will the industry need them, and if so, when?

While there has been discussion of late about the possible need to transition to a larger mask size, veterans of the mask business may feel it’s déjà vu all over again. Back in the mid-1990s there was much discussion about transitioning from six-inch to nine-inch masks—so much so that standards were written. Then, as now, the transition (or more accurately, the lack of one) had to do with economics and the choice of lithography technologies used in semiconductor manufacturing.

The choice this time involves EUV, and as always with EUV, answering these questions takes a combination of extrapolation and hypothesis. But the industry is finally getting close to putting EUV tools in fabs.

ASML suggested just last week that it is on track to deliver a throughput of 70 wafers per hour (wph) on its first production EUV lithography tool, the NXE:3300B, sometime next year. Ostensibly that will be with an 80-watt source, improving on the source in its current development tool, the NXE:3100, which can sustain 50 watts over long periods, according to the litho tool vendor.

If this holds true, the chip industry could see EUV exposure tools and pilot lines in chipmakers’ fabs within a few years, although throughput will have to keep improving for the technology to move into mainstream production. The current consensus is that widespread use of EUV—assuming current estimates of power source improvements hold true—won’t happen until the end of the decade and beyond, at the 10nm and 7nm nodes.

So in terms of the mask industry, it could be looking at a size transition around the 2018 to 2020 time frame. But worrying about that may be putting the cart before the horse, cautions Stefan Wurm, director of lithography for Sematech. “The industry needs to make the decision on doing high NA or not, and if it proves the right choice, it’s got to be a high NA solution that shares multiple nodes,” he said. While the question of high NA EUV is coupled with the need for a larger mask size, “it’s not something that will be decided tomorrow.”

Of more pressing concern is the availability of six-inch EUV photomasks in the 2015 time frame for those pilot lines, Wurm said. “The goal is very simple: make sure there is an adequate supply that supports the yield requirements for EUV ramp up.”

In fact, mask availability is of more concern than source power at this point, he said. Chipmakers are making a huge effort with regard to supporting lithography vendors on EUV source development to ensure success, he noted. Intel’s investment in ASML is a primary example.

“On the mask side it’s a little different because you have to look at the whole supply chain,” Wurm said. While suppliers are waiting to see the outcome of source development, that raises the question: Will they have time and resources to catch up once the source power is there? “We’re more concerned about the mask blanks supply chain than we’re concerned about the source,” he added.

There are still a number of technical issues to address if six-inch EUV masks are going to be ready for pilot production in a few years. “Everything that’s related to yield and masks and mask lifetime and blank defectivity is certainly at the center of that,” Wurm said.

Why larger photomasks?

Even with the adoption of EUV there aren’t necessarily economic or technical reasons for the industry to move to a larger mask size, or at least not right away. It depends largely on which way the industry goes to get to the resolution needed at the 10nm node and beyond, whether it adopts some sort of double patterning scheme with EUV or opts for a higher numerical aperture (NA) EUV exposure technology.

Increasing the NA—seen as necessary if the industry is going to avoid double patterning—will mean increasing the magnification of EUV exposure tools, which means a smaller exposure field size and consequently more exposures (and lower throughput), unless a larger mask size is used.
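The geometry behind that trade-off is simple to sketch. The specific figures below (4× magnification and a 26mm × 33mm field for standard EUV, 8× in one axis for a hypothetical anamorphic high NA system) are illustrative assumptions drawn from general EUV optics, not from the article itself:

```latex
% Wafer-level field dimension = usable mask dimension / magnification M.
% Standard EUV scanner, M = 4x in both axes:
\text{field}_{4\times} \approx 26\,\text{mm} \times 33\,\text{mm}

% If high NA forces M = 8x in one axis (anamorphic optics) on the same
% six-inch mask, the field is halved in that axis:
\text{field}_{4\times/8\times} \approx 26\,\text{mm} \times 16.5\,\text{mm}

% Half the field area means roughly twice the exposures per wafer --
% consistent with the throughput cut of up to 50% cited below.
% A larger mask (e.g. nine-inch, about 1.5x the linear dimension) would
% recover much of the lost field, though not all of the throughput.
```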

Throughput, and thereby economics, is the key part of the equation. Based on technical papers presented at SPIE and elsewhere in recent years, it appears the techniques used to achieve the higher NA would cut throughput by as much as 50%. This can be alleviated somewhat with a larger mask size, noted Franklin Kalk, CTO of Toppan Photomasks.

“It’s interesting because the mask size can help the throughput, but it doesn’t bring it back to where it was,” he said.

Furthermore, it all comes back to EUV source power as well. “If we increase the mask size, it won’t improve the throughput without the (EUV source) power,” said Banqui Wu, Applied Materials’ CTO for its photomask etch products business. “People assume we have the power. If we get the power, we can improve both the resolution and the throughput.”

But what about bigger wafers, too?

As Kalk and Wu suggest, if the source power doesn’t continue to scale as hoped, there would be little need for a larger mask size even if EUV reaches production, because high NA EUV wouldn’t be feasible without the requisite source power. But, as Sematech’s Wurm notes, if the source power isn’t available for high NA EUV at the end of the decade, it probably means EUV never made it into production in the first place, and the question becomes moot.

Even if high NA EUV proves viable, the smaller exposure field could conceivably prove beneficial at the 10nm and 7nm nodes, in spite of the extra steps and lower throughput that would result. Yield and defect control would likely be easier to manage with the smaller exposure field, Wu noted.

Wrapped up in the argument for larger mask sizes is the transition from 300mm wafers to 450mm wafers, although a wafer size transition wouldn’t necessarily require larger masks, just as the migration from 200mm to 300mm wafers did not. “If the industry doesn’t adopt EUV for production, or it is used on a very limited basis, it seems unlikely the industry would opt to migrate to a larger mask size,” Kalk said. “In principle, on 450mm (wafers), it doesn’t really require a larger mask.”

Applied’s Wu said that a larger mask size in combination with 450mm wafers could provide benefits in terms of wafer etch and chemical-mechanical planarization (CMP). However, in terms of throughput, a larger mask size in and of itself wouldn’t result in any improvement without the adoption of EUV, regardless of wafer size, he said.

Bigger mask size means bigger—much bigger—CapEx

So if high NA EUV is ready for the 10nm node, a transition to a larger mask size, most likely nine-inch masks, seems probable. Will the industry be ready? That remains to be seen, but one thing everyone can agree on: it will require a considerable capital investment.

“We’re always used to scaling equipment; we’ve been doing it since the three-inch (mask) days,” said Amitabh Sabharwal, general manager for photomask etch products at Applied. “If there is significant pull and there is an industry demand, we can do it.”

But a transition won’t be cheap.

“The bottom line is it’s going to cost a lot of money to do it,” said Toppan’s Kalk. “We haven’t done a thorough analysis of a nine-inch EUV mask (manufacturing) line or a 12-inch EUV-capable mask line, but it has to be approximately $200 million,” he said – roughly half the cost of a leading-edge line producing photomasks for 28nm manufacturing today.

It could mean changes in the photomask supply chain as well. While the big three semiconductor photomask suppliers—Toppan, Photronics and Dai Nippon—have kept their hands in the leading edge by partnering with large IDMs, the pool of those playing at the leading edge dwindles with each technology node. With only a handful of companies likely to be developing chips at the 10nm node, and the considerable capital expense involved, further consolidation among merchant mask suppliers could be in the offing in the years ahead.

Furthermore, with only a small number of chipmakers producing chips at the 10nm node there may not be enough tools sold to justify having two or more suppliers for each piece of equipment. For example, “we’re not going to find multiple providers of writers or etchers,” Kalk said. “I just don’t think that’s going to happen.”

Aside from the economic issues, there will be many technical issues to address when it comes to migrating to a larger mask size along with EUV, such as critical dimension (CD) resolution and mask metrology and defectivity. This is not to mention the technical hurdles that still exist for six-inch EUV masks.

There also is the question of mixing nine-inch mask sets with six-inch mask sets. While at first glance it may seem plausible to keep using six-inch masks for non-critical layers even as nine-inch masks are used for critical layers, thereby saving costs, this approach would introduce its own technical hurdles, such as alignment.

EUV mask availability—be it six-inch or, later, nine-inch masks—is perhaps indicative of a larger phenomenon in the chip industry beyond EUV and photomasks. “The health of the supply chain in general, not just on the mask side, will need more attention in the industry,” said Wurm. The costs for equipment and materials vendors continue to increase, and their capability to support R&D doesn’t always keep pace with what the industry requires of them.

“That’s something the industry needs to keep in mind,” he said. “How can we work together to make sure we have a healthy supply chain in all areas?”

Editor’s Note: As explained at length elsewhere on this site, this is a news story written by me for another publication. This originally appeared on Semiconductor Engineering; it holds the copyright, of course.