
Scalable Platform Meets the Test Challenges of Ultra-Wideband Chipsets

This article is adapted from a paper and presentation at SEMICON China, March 2023.

By Kevin Yan and Daniel Sun, Advantest (China) Co., Ltd.

Ultra-wideband (UWB) technology, as defined by IEEE 802.15.4 and 802.15.4z standards, enables short-range, low-power RF location-based services and wireless communication. A variety of devices have reached the market to help implement UWB capability, but these devices present significant test challenges related to the high RF frequencies at which UWB operates, the ultrawide bandwidths of UWB’s multiple channels, and the technology’s complex modulation schemes. An effective test platform requires flexibility and scalability to handle the frequencies and bandwidths involved, as well as the compute power to effectively analyze the test results.

UWB markets and capabilities

The UWB market is expanding at a rapid pace. One forecast estimates that UWB unit shipments are growing at a 40% CAGR, with the market for UWB chips expected to reach $1.259 billion by 2025 [1]. UWB serves both business and consumer end markets, with the consumer segment now representing the majority. UWB is hitting the mainstream in the mobile smartphone market, with the automotive and wearables/tags segments also seeing UWB adoption.

Some specific applications that UWB can serve include indoor navigation, item tracking, secure hands-free access, credential sharing, and hands-free payments. In addition, automotive applications are likely to expand, with the Car Connectivity Consortium incorporating UWB into the Digital Key Release 3.0 specification currently under development. 

Compared with other technologies, UWB provides superior positional accuracy. Table 1 compares UWB with RFID, Wi-Fi, and Bluetooth. UWB, which employs time of flight (ToF) and angle of arrival (AoA) technology, achieves a positional accuracy of better than 30 cm, outperforming the others. Bluetooth incorporates AoA and angle of departure (AoD) to provide < 1 m of accuracy (version 5.1). Wi-Fi, which relies only on its Received Signal Strength Indicator (RSSI) functionality for distance estimates, has a limited accuracy of 15 m. RFID can only detect presence, not distance.

Table 1. Positional accuracy comparison of four wireless technologies

UWB basics

The United States Federal Communications Commission (FCC) and the International Telecommunication Union Radiocommunication Sector (ITU-R) define UWB as a communications technology that transmits and receives a signal whose bandwidth exceeds the lesser of 500 MHz or 20% of the arithmetic center frequency fc. The UWB physical (PHY) layer includes 16 channels in three groupings, as shown in Table 2. The sub-gigahertz band (group 0, shaded yellow) includes one channel at an fc of 499.2 MHz, the gigahertz low band (group 1, shaded blue) includes four channels with fc ranging from approximately 3.5 GHz to 4.5 GHz, and the gigahertz high band (group 2, shaded red) includes 11 channels with fc extending from approximately 6.5 GHz to 10 GHz.

Table 2. UWB PHY group and channel allocation                

Most UWB products currently on the market focus on the high band, particularly channels 5 and 9. The UWB PHY also comes in low-rate pulse repetition frequency (LRP) and high-rate pulse repetition frequency (HRP) configurations. HRP high-band configurations are becoming the preferred choice, finding success in industrial applications for location and ranging and for device-to-device communications. Table 3 outlines the IEEE 802.15.4z HRP UWB high-band specifications, including a range up to approximately 100 m and data rates from 110 kbps to 27.4 Mbps. In addition, HRP UWB uses two types of modulation: burst position modulation (BPM) and binary phase-shift keying (BPSK).

Table 3. IEEE 802.15.4z HRP UWB high-band specifications

Figure 1 compares the UWB and Bluetooth spectra. Narrowband Bluetooth (left) has a 1-MHz bandwidth at 2.4 GHz. In contrast, UWB (right) has a 500-MHz or greater bandwidth at center frequencies (fc) extending up to approximately 10 GHz, as shown in Table 2. Each band’s upper and lower bounds (fH and fL, respectively) exhibit power levels 10 dB below the maximum power level at fc.


Figure 1. Bluetooth has a 1-MHz bandwidth at 2.4 GHz, while high-band HRP UWB has a > 500-MHz bandwidth at center frequencies fc from approximately 6.5 GHz to 9.5 GHz.
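To make the definition concrete, the following Python sketch (a hypothetical helper, not part of any Advantest software) computes the center frequency and -10 dB bandwidth from the band edges fH and fL and applies the FCC/ITU-R criterion described above.

```python
def is_uwb(f_low_hz: float, f_high_hz: float) -> bool:
    """Apply the FCC/ITU-R UWB criterion to the -10 dB band edges.

    f_low_hz, f_high_hz: frequencies (Hz) at which the transmitted PSD is
    10 dB below its peak at the center frequency fc.
    """
    bandwidth = f_high_hz - f_low_hz        # -10 dB bandwidth
    fc = (f_high_hz + f_low_hz) / 2.0       # arithmetic center frequency
    # UWB: bandwidth exceeds the lesser of 500 MHz or 20% of fc
    return bandwidth > min(500e6, 0.2 * fc)

# Hypothetical transmission centered near channel 9 with a 560-MHz -10 dB bandwidth
print(is_uwb(7.7072e9, 8.2672e9))   # True
```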

Test requirements

A typical UWB transceiver chipset includes an analog front end containing a receiver (Rx) and a transmitter (Tx), plus a digital back end that interfaces to an off-chip host processor. It also includes a Tx/Rx switch that connects either the receiver or the transmitter to an antenna port. Some versions come with two RF antenna ports to serve phase-difference AoA applications, and some add even more ports to improve positional accuracy. AoA capability can help pinpoint the specific location of an object as well as its distance, as shown in Figure 2.


Figure 2. AoA capability can help pinpoint a tag’s angular location as well as distance.

The receiver includes an RF front end that employs a low-noise amplifier that amplifies the received signal before down-converting it to the baseband. The chipset’s transmitter applies digitally encoded transmit data to an analog pulse generator. The chipset also includes a phase-locked loop (PLL) that provides local oscillator signals for receive and transmit mixers.

Typical UWB production tests involve transmit measurements and pulse-related measurements as specified in the 802.15.4z standard as well as direct receiver measurements, ToF measurements, and AoA measurements. 

Transmit measurements ensure the devices meet all emissions rules established by the FCC or other relevant governmental authorities. The tests involve power spectral density (PSD) measurements in accordance with a transmit-spectrum mask (Figure 3) as well as center-frequency tolerance measurements.


Figure 3. A power spectrum mask defines limits for PSD measurements.
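As an illustration of how such a mask check can be automated in post-processing, the sketch below estimates the PSD of a captured baseband waveform with SciPy and reports the worst-case margin against a piecewise mask. The mask breakpoints shown are illustrative placeholders, not the normative 802.15.4z limits, and the helper is not part of SmarTest 8.

```python
import numpy as np
from scipy.signal import welch

def psd_mask_margin(iq, fs, mask_points):
    """Worst-case margin (dB) of a measured PSD against a relative mask.

    iq          : complex baseband capture of the transmitter output
    fs          : capture sample rate (Hz)
    mask_points : (offset_hz, limit_dbr) pairs defining a piecewise-linear
                  mask relative to the peak PSD
    A positive return value means the capture stays below the mask everywhere.
    """
    f, pxx = welch(iq, fs=fs, nperseg=4096, return_onesided=False)
    pxx_db = 10 * np.log10(pxx / pxx.max())            # normalize peak to 0 dBr
    offsets = [p[0] for p in mask_points]
    limits = [p[1] for p in mask_points]
    mask_db = np.interp(np.abs(f), offsets, limits)    # mask limit at each bin
    return float(np.min(mask_db - pxx_db))

# Illustrative mask: flat in-band, -10 dBr beyond a 325-MHz offset, -18 dBr beyond 400 MHz
mask = [(0, 0.0), (325e6, 0.0), (326e6, -10.0), (400e6, -10.0), (401e6, -18.0)]
```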

Pulse-related measurements ensure the interoperability of UWB devices and are performed using time-domain analysis (Figure 4). Specific tests measure the baseband impulse response, including pulse main-lobe width, pulse side-lobe power, and normalized mean square error (NMSE). Additional tests look for chip clock error and chip frequency offset.


Figure 4. This compliant pulse example uses time-domain analysis.
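For reference, the NMSE of a measured pulse against a reference pulse can be computed as in the minimal NumPy helper below. The normative reference waveform, alignment procedure, and pass limits are defined in the 802.15.4z standard and are not reproduced here.

```python
import numpy as np

def nmse(measured, reference):
    """Normalized mean square error between a measured pulse and a reference.

    Both arguments are complex baseband sample arrays, assumed to be
    time-aligned and amplitude-normalized before the comparison.
    """
    measured = np.asarray(measured, dtype=complex)
    reference = np.asarray(reference, dtype=complex)
    error_energy = np.sum(np.abs(measured - reference) ** 2)
    return float(error_energy / np.sum(np.abs(reference) ** 2))
```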

Although not specified in the 802.15.4z standard, direct receiver measurements must be performed to ensure quality parts. A typical receiver test measures the minimum power level at which the device can operate with minimum error. A typical way to perform this test is to send a minimum power stimulus to the device under test and measure the device’s packet error rate (PER).
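The PER figure itself is a simple ratio, as the short sketch below illustrates with hypothetical numbers; the stimulus generation and packet decoding are of course handled by the tester hardware and the device.

```python
def packet_error_rate(packets_sent: int, packets_received_ok: int) -> float:
    """PER: fraction of transmitted packets not received error-free."""
    return (packets_sent - packets_received_ok) / packets_sent

# Example: 1,000 packets sent at the minimum specified input power, 992 decoded cleanly
print(packet_error_rate(1000, 992))   # 0.008, i.e., 0.8% PER
```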

Finally, ToF and AoA measurements characterize the positioning performance. In high-volume production test, such measurements are often performed using phase shifts between two Rx antenna inputs.
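The standard two-antenna relation between the measured phase difference and the angle of arrival is sketched below. This is the generic textbook formula, offered for orientation only, not a description of the V93000 implementation; the antenna spacing and carrier values are illustrative.

```python
import numpy as np

def angle_of_arrival_deg(delta_phi_rad: float, spacing_m: float, fc_hz: float) -> float:
    """AoA (degrees) from the phase difference between two Rx antenna ports.

    Two-element relation: delta_phi = 2*pi*d*sin(theta)/lambda, so
    theta = arcsin(delta_phi * lambda / (2*pi*d)). Spacing d <= lambda/2
    avoids ambiguity.
    """
    lam = 3e8 / fc_hz
    arg = delta_phi_rad * lam / (2 * np.pi * spacing_m)
    return float(np.degrees(np.arcsin(np.clip(arg, -1.0, 1.0))))

# Example: channel-9 carrier (~7.99 GHz), half-wavelength spacing, 45-degree phase difference
fc = 7.9872e9
print(angle_of_arrival_deg(np.radians(45), 0.5 * 3e8 / fc, fc))   # ~14.5 degrees
```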

UWB devices present three specific test challenges that traditional ATE systems cannot address. The first relates to the high RF frequencies involved, which range up to more than 10 GHz—exceeding the typical sub-6-GHz capability of many traditional ATE RF instruments. Second, UWB devices require wideband measurements extending to 1.35 GHz—well beyond the 200-MHz limits of traditional instruments. Third, UWB devices must be tested using both frequency-domain PSD measurements and time-domain impulse-response measurements, a combination that requires test software with complex algorithms and an efficient architecture to handle the huge amount of data processing required.

UWB test platform

The flexible Advantest V93000 platform can be configured with appropriate hardware and software to support the thorough test of UWB devices. The platform’s Wave Scale RF instruments cover the frequency range from 10 MHz to 70 GHz. The V93000 platform can also support the necessary wide bandwidths. For example, the Wave Scale RF18 card supports 5.85 GHz to 18 GHz frequency stimulus and measurements with a 200 MHz bandwidth. Adding the optional Wave Scale Wideband card to the Wave Scale RF18 extends the bandwidth up to 2 GHz. The combination also has built-in event triggering—useful for testing asynchronous UWB chips’ Tx packets. The platform can accommodate 128 RF ports to enable efficient multisite parallel testing.

V93000 SmarTest 8 software contains a UWB demodulation library and can analyze a UWB signal in the time and frequency domains, measuring such items as the transmit PSD mask, transmit center-frequency tolerance, baseband impulse response, chip clock rate, and chip carrier alignment. To enable rapid test-data analysis, SmarTest 8 supports hidden uploads of captured waveforms and multi-threaded background processing of previously captured data while simultaneously capturing the next measurement. In addition, standard existing SmarTest 8 features can make receiver and ToF-related measurements. Finally, AoA tests can be performed by applying signals of different phases to different receiver ports, as shown in Figure 5.


Figure 5. AoA tests require relative phase measurements between antenna receiver ports in response to applied stimulus from V93000 instruments.

Conclusion

As the UWB market rapidly expands, test platforms are adapting to accommodate the high RF frequencies, wide bandwidths, and complex modulation schemes involved. The Advantest V93000 platform’s hardware and software include the standard and UWB-specific features necessary to test UWB devices.

Acknowledgments

We would like to acknowledge and give our warmest thanks to Frank Goh, who supports the UWB V93000 solution and provided professional guidance for this paper. Frank Goh is a principal consultant at the Center of Expertise Asia, Advantest Singapore.

Reference

 

  1. Amended Comments of the Ultra Wide Band (UWB) Alliance Before the Federal Communications Commission, July 14, 2020.

True Zero Trust Combats IC Manufacturing Security Challenges

By Michael Chang, ACS VP and General Manager

The semiconductor manufacturing industry is facing a host of unprecedented technology and security challenges. A common catchphrase these days is that “data is the new oil.” Data is everywhere, in everything we do, and there is both good and bad associated with this trend. Data everywhere creates new security issues that need to be addressed to protect the integrity of your information and your devices. Advantest has done this through a new infrastructure setup that enables a True Zero Trust environment on the fab test floor – in turn, allowing us to truly embrace AI without having to fear security repercussions.

Addressing core challenges

Some of the key technology challenges for chipmakers include chip-level scale integration, which requires new types of setup tools and data to be integrated for making measurements; system-level quality; and chip-scale sensors. Another area of focus is manufacturing 2.5D and 3D chiplets.

A paper published in 2021 by three Google engineers identified an issue with cores failing early due to fleeting computational errors not detected during manufacturing test, which they call “silent” corrupt execution errors. The paper goes on to propose that researchers and hardware designers and vendors collaborate to develop new measurements and procedures to avoid this problem. The interim solution is to isolate and turn off cores that are failing, but they hint that because of chip-level integration and 2.5D/3D, new approaches are needed to measure and screen out these failing cores automatically.

The other side of this coin is security concerns. Access to systems is limited, and arbitrary software cannot be installed on machines. We use firewalls, antivirus and anti-spyware tools, encryption, password management, and other technologies to protect our computers, but they’re not infallible. Experts agree that cyberattacks are inevitable, so there needs to be a means of using data to protect all the data on our systems. Advantest is doing this through our ACS offerings, which enable real-time data security, as shown in Figure 1: ACS Nexus™ for data access, ACS Edge™ for edge computing, and the ACS Unified Server for True Zero Trust™ security.

Figure 1. Advantest’s open solution ecosystem. Data is needed from all sources to mitigate new challenges.

As Figure 1 illustrates, through our Real-Time Data Infrastructure, we can integrate data sources from across the chip manufacturing supply chain, leveraging that data to continually improve our insights and solutions. We can implement test throughout the product lifecycle, taking real-time action during production. Nothing has to be done away from the test floor; all analyses and actions occur during actual test, maintaining fully secure zero trust protection of the data.

Security is more than protection

One way to illustrate the approach that we take to security in semiconductor manufacturing is to look at a seemingly unrelated example: the International Space Station (ISS). Designed to protect against damage from space debris, the outer hull of the ISS is outfitted with Whipple bumpers. These multi-layered shields are placed on the hull with spaces between the layers. The intent is that impact with a layer will slow and, ideally, break apart the projectile, so that by the time it reaches the bottom layer, any potential harm has been prevented. While the bumpers slow the kinetic energy of the debris, something will eventually get through. The second line of defense is the ISS’s containment doors, which ensure any areas where air leaks have been detected can be isolated so that the astronauts are protected. Clearly, this is mission-critical.

The key word: “containment.” It’s not enough to protect – no system is infallible. You also have to contain it so that potential security issues don’t become pervasive and cause a major breach. The challenge when looking at this from the test cell perspective is the test cell is located on the test floor, which is surrounded by all kinds of other equipment that you have no control over. And not just other manufacturing tools. Everyday office products can be vulnerable to hacking – computers, software, printers, routers… even smart appliances in the break room such as IoT-enabled coffee pots. Hackers are increasingly finding ways to get to us through software update servers, routers, printers, and even bypassing firewalls.

Figure 2. The ACS-enabled True Zero Trust environment for the test floor is a must to ensure containment.

The bottom line is that your infrastructure is going to be vulnerable, so you must add a reliable containment structure such that, when there is an attack, you can shut down. This is what our True Zero Trust™ environment is designed to enable. The “zero trust” concept is just what it sounds like – the complete elimination of the assumption of trust from within networks and systems. This means that no default trust is granted to any user or device, either inside or outside an organization’s network. This model grants resource access on a need-to-know basis only, requiring stringent identity verification and contextual information that cannot be known or provided by another source. By preventing unauthorized access to sensitive data, companies mitigate the risks of data breaches and attacks, whether external or internal.

What does this mean for AI/machine learning?
New chip technologies require new measurements, relying on multi-dimensional data. Large language models (LLMs) are creating vast new opportunities in all domains. LLMs are machine learning models that can perform natural-language processing tasks such as generating and classifying text, answering questions, and translating text. LLMs are trained on massive amounts of text data and use deep learning models to process and analyze complex data. This can take several months and result in a pretty hefty power bill.

However, during training, LLMs learn patterns and relationships within a language while aiming to predict the likelihood of the next word based on the words that came before it. We’re talking about a very large number of parameters and petabytes of data. LLMs are used in a variety of fields, including natural language processing, healthcare, and software development.

Currently, LLMs can comprehend and link topics, and they have some understanding of math. But an application like ChatGPT – built on the most popular and widely used LLMs – does not understand new developments because it is not connected to the Internet. LLMs can recognize, summarize, translate, predict, and generate human-like text and other content based on knowledge from large datasets, and they can perform such natural-language processing tasks as:

  • Sentiment analysis
  • Text categorization
  • Text editing
  • Language translation
  • Information extraction
  • Summarization

Using LLMs to summarize knowledge and feed it into the test cell or test floor can be done in a True Zero Trust environment because there is no danger of the data being manipulated in undesirable ways. With that said, LLMs aren’t self-aware – they don’t know when they make mistakes, so an LLM should be considered a data exoskeleton.

Conclusion 

Over the next few years, we can anticipate a significant shift in the types of applications being developed, moving away from traditional statistical machine learning and towards more sophisticated autonomous or semi-autonomous agents that can automate testing. In order to effectively safeguard the valuable assets and intellectual property of OSAT and fabless organizations, containment is necessary. The ACS Real-time Data Infrastructure offers a highly secure containment system called True Zero Trust. Through its innovative design, this infrastructure establishes a cutting-edge paradigm that allows for the creation of secure data highways and paves the way for building novel applications with enhanced security.

 


Global Customers Rank Advantest THE BEST Test Equipment Supplier in 2023 and the #1 Large Supplier of Chip Making Equipment in Annual Customer Satisfaction Survey

By GO SEMI & Beyond Staff

Advantest has again topped the ratings chart of the 2023 TechInsights (formerly VLSIresearch) Customer Satisfaction Survey, capturing the No. 1 spot on this prestigious annual survey of global semiconductor companies for the fourth consecutive year. The company has now been named to the 10 BEST list for each of the 35 years that the survey has been in existence. The survey ratings are based on direct customer feedback representing more than 60% of the world’s chip producers, which include integrated device manufacturers (IDMs), fabless companies, and outsourced assembly and test (OSAT) providers.

According to TechInsights, Advantest, THE BEST supplier of test equipment in 2020, 2021 and 2022, was again the leading test equipment supplier in 2023. The company also RANKED 1st in the 10 BEST list of large suppliers of chip making equipment for the fourth consecutive year. Advantest achieved superior customer ratings in the areas of Partnering, Recommended Supplier, Trust in Supplier, Technical Leadership, and Commitment. According to TechInsights, Advantest continually ranks high among THE BEST Suppliers of Test Equipment and, in 2023, was once again the only ATE supplier to receive a Five-Star designation.

The TechInsights annual Customer Satisfaction Survey has been the only publicly available opportunity since 1988 for customers to provide feedback on suppliers of semiconductor equipment and subsystems. Worldwide participants rated equipment suppliers across 14 categories based on three key factors: supplier performance, customer service, and product performance. The categories span a set of criteria, including cost of ownership, quality of results, field engineering support, trust, and partnership.

“Advantest continues to set new industry standards for product development and customer service, prioritizing its customers’ needs and supporting their success,” stated G. Dan Hutcheson, Vice Chair of TechInsights. “Through its broad product portfolio and expansive global network, Advantest enables its customers to create groundbreaking innovations that drive the semiconductor industry forward. Year after year, Advantest earns the highest ratings from the world’s global manufacturers and has topped the ratings charts once again.” 

“We are honored to be recognized in such high regard by our global customers and grateful to know that our partnering efforts are valued,” said Yoshiaki Yoshida, Advantest Corporation president and CEO. “As semiconductors become increasingly essential to our society, we will continue to deliver the leading-edge test solutions our customers have come to expect from us while driving innovation forward with sustainable products that meet not only their needs but those of the environment.”  

As a global provider of test solutions for SoC, logic and memory semiconductors, Advantest has long been the industry’s only ATE supplier to design and manufacture its own fully integrated suite of test-cell solutions comprising testers, handlers, device interfaces, and software – assuring the industry’s highest levels of integrity and compatibility.

 


MMAF Option Enables Picoampere Measurements

By Yoshiyuki Aoki, T2000 R&D Hardware Manager, and Tsunetaka Akutagawa, SoC Marketing Manager, Advantest Corp.

Demand for low-current devices is increasing, as many new sensors are being created for medical, automotive, industrial, and other applications. Chief among the heightened production and test requirements for these low-current devices is the need to achieve picoampere (pA)-class measurements. Sensors’ functionality and efficacy, especially in medical and other highly sensitive applications, can be radically impacted by leakage current and drift characteristics. Thus, measuring very low current and detecting current leakage on the order of a few pA is becoming an important capability for general-purpose testers. 

However, conventional mass-production testers cannot easily measure pA-level currents and require a dedicated machine to do so. To address this, Advantest developed its Micro Micro-Ammeter-Frontend (MMAF) module option, which connects to the MMXHE module for its T2000 SoC tester (Figure 1). Created for testing ICs in power trains, controls, and sensors in electric and hybrid vehicles, the MMXHE (multifunction mixed high voltage) mixed-signal module provides 64 output channels to enable massively parallel, high-performance testing.

Figure 1. The current flowing through the sensor device is measured on the T2000 using the MMAF/MMXHE solution. High accuracy is achieved by taking the difference between the current measured with the device connected and the current measured with the device path open.

The MMAF module measures pA-class microcurrents by increasing the current-measurement sensitivity of the voltage source current measurement (VSIM) functions that the MMXHE provides by a factor of 1,000. This enables measurement of optical sensors and MEMS sensors, calculation of the reverse leakage current of diodes, and so on.
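The measurement principle described in Figure 1 amounts to a simple offset subtraction, illustrated by the hypothetical sketch below; the function and values are illustrative and do not represent the T2000 programming interface.

```python
def corrected_leakage_a(i_device_a: float, i_open_a: float) -> float:
    """Leakage attributed to the DUT: measured current minus the 'device open' baseline.

    i_device_a : current measured with the device connected (A)
    i_open_a   : current measured with the device path open, capturing the
                 fixture and instrument offset (A)
    """
    return i_device_a - i_open_a

# Example: 12.4 pA measured with the device, 3.1 pA open-path offset
print(corrected_leakage_a(12.4e-12, 3.1e-12))   # ~9.3e-12 A attributed to the DUT
```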

Figure 2. Shown here is a mother test board with multiple MMAF installed.

Combining the two modules (MMXHE and MMAF) expands the T2000’s test coverage, enabling the system to deliver current measurements down to the pA level. The addition of multiple MMAFs makes it possible to measure a large number of devices simultaneously. With the MMAF module option, pA measurement can be performed at low cost without adding any modules to the existing tester configuration. In addition, because the module is small, it can be mounted on a performance board (Figure 2), so it can be easily adapted regardless of tester configuration.

In addition to its compact size/footprint and ease of multi-channelization, the combined MMAF/MMXHE module solution offers multiple benefits:

  • Wide source voltage range: 2V range / 7V range
  • Varied current measurement options: 3nA range / 30nA range / 80nA range
  • Low noise
  • Good measurement repeatability (Figure 3)

Figure 3. Using the MMAF pA measurement module, the T2000 can perform highly repeatable current leak measurements.

The new MMAF module option for the MMXHE joins a robust set of module options for the T2000 designed to address a variety of specialized testing requirements. These include multiple digital modules, device power supplies, a multipurpose parametric measurement unit (PMU), analog/RF modules (including arbitrary waveform generator/digitizers), additional multifunction modules for automotive and power-management ICs, and image capture modules for testing CMOS image sensors. Advantest continues to expand the capabilities of its T2000 tester to ensure its ability to accommodate a broad range of testing needs while ensuring full compatibility and ease of installation and helping to keep the overall cost of test as low as possible.


Design Considerations for Ultra-High Current Power Delivery Networks

This article is adapted from a presentation at TestConX, March 5-8, 2023, Mesa, AZ.

By Quaid Joher Furniturewala, Global SI/PI Manager, R&D Altanova, Advantest

A power-delivery network (PDN), also called a power-distribution network, is a localized network that delivers power from voltage-regulator modules (VRMs) throughout a load board to the package’s chip pads or wafer’s die pads. The PDN includes the VRM itself, all bulk and localized capacitance, board vias, planes and traces, solder balls, and other interconnects up to the device under test. An optimized power-delivery approach will employ a decoupling scheme that provides low impedance to ensure a clean power supply. An optimized PDN will result in more power being transferred from the VRM to the DUT, with supply voltage held constant within a narrow tolerance band with minimal ripple under load.

PDN optimization is becoming increasingly important as more and more high-current applications appear. Keep in mind that power dissipation equals I²R, so even a slight amount of load-board resistance imposes significant power dissipation at high currents. For a 2.5-kW device, a 5% drop in power is 125 W! Table 1 shows how device voltage and current and load-board dissipation are trending over time.

Table 1. Device-power and load-board-dissipation trends
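The arithmetic behind that point is straightforward, as the sketch below shows with hypothetical numbers: for a given current and allowable IR drop, the resistance budget and the power lost in the PDN follow directly.

```python
def pdn_dissipation_w(current_a: float, resistance_ohm: float) -> float:
    """Power dissipated in the PDN path: P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

def max_path_resistance_ohm(current_a: float, allowed_drop_v: float) -> float:
    """Largest series resistance that keeps the IR drop within budget."""
    return allowed_drop_v / current_a

# Hypothetical example: a 0.8-V rail delivering 1,000 A with a 2% (16-mV) drop budget
r_max = max_path_resistance_ohm(1000, 0.8 * 0.02)
print(r_max, pdn_dissipation_w(1000, r_max))   # 1.6e-05 ohm budget, 16 W lost in the PDN
```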

PDN optimization

For effective PDN optimization, prioritize the supplies, keeping in mind that not every supply can be optimized to achieve tight margins, and tradeoffs may be required. Also, plan your DUT board stacking based on power dissipation (Figure 1), and note that usually, the DUT vias account for the highest inductances in the PDN path. Plane inductance is negligible compared with via inductances.

As the industry moves to devices with high current demands, the historical rules of thumb and reference guidelines are no longer adequate. Following antiquated rules can lead to poor hardware design, costly re-spins, dropped yields, and lost time. The PDN needs to be designed from the device’s actual specifications to ensure robust power delivery.

Figure 1. Plan your DUT board stacking to keep critical power near the device.

You can follow several recommendations when optimizing your load-board layout:

  • Move or replicate critical power close to the DUT to reduce via impedance.
  • Increase capacitor via size and use multiple vias at all capacitor pads.
  • Put high-speed capacitors on the top side of the board.
  • Use low equivalent series inductance (ESL) capacitors.
  • Increase DUT via size to the extent permitted by your design-for-manufacturing (DFM) rules.

Contrary to popular belief, under-DUT capacitance (capacitors placed on the power vias on the side of the board opposite the device) is not always effective: its benefit may be dominated by the inductance of the long, thin DUT vias. Choose the value of any capacitance placed under the DUT carefully. The least inductive way to use decoupling effectively is to route the power closer to the device so that the DUT vias are short, and to place decoupling capacitors with larger vias on the top side, very close to the DUT.

To deliver power to the DUT with the least electrical resistance, offsetting the DUT vias toward the corners of the device to create a channel for current to flow into the device core can be a good strategy (Figure 2). If the DUT has high-speed pins or channels in a few quadrants, the vias in the remaining quadrants can still be offset to create such a channel.

Figure 2. Offsetting the DUT vias toward the corner of the device can be a good strategy.

On most ATE designs, the PDN return path is shared with the signal lines, so it carries return currents for both signals and power. Consequently, the return path becomes a non-trivial consideration. If the design has shared signal and power return paths, the return path needs to be wide enough to ensure the current does not get constricted and create ground-bounce issues.

If the layer stacking allows for a GND-PWR-GND structure, it is always recommended because of its noise-coupling isolation and better power impedance. However, this is not practical in very dense, high-site-count designs, where board thickness is limited by fabrication aspect-ratio concerns (the aspect ratio is the ratio of board thickness to drill size). In such cases, a GND-PWR-PWR-GND approach can be used (Table 2). It offers somewhat poorer noise isolation but is adequate for low-current, less noisy supplies, while GND-PWR-GND can be reserved for high-current supplies.


Table 2. Return-path considerations

PDN power-integrity (PI) analysis is key to delivering ripple-free, low-noise, stable voltage to the device pads. PI analysis begins with a pre-layout analysis of all the power rails, defining the target impedance and decoupling strategy for each. Post-layout analysis is done after the decoupling capacitance is placed and power is routed. It includes all the PDN elements from VRM to DUT and involves DC, AC, and sometimes thermal analysis.

DC analysis
DC analysis examines via currents, current density, and voltage drop, including return-path voltage drop, due to resistances in the board current path. DC analysis helps identify bottlenecks due to copper depletion. Performance can generally be improved by increasing the copper area, replicating power planes, and increasing copper weights on stacks.

A case study involving a 2.4-kW device provides an example of DC analysis. The package includes a channel to provide better current flow near the core. The load board includes 2-oz copper layers with multiple high-current supply layers. A 1-mm pitch allows larger 14.5-mil power and return vias. Power shapes added in the signal layers, based on available space, help to improve performance. Table 3 shows IR-drop simulation results for the various supplies. Total power dissipation is 83 W, or less than 3.5% of the device’s 2.4-kW rating.


Table 3. DC analysis of load board for 2.4-kW device

AC analysis

AC analysis studies how the load ripple behaves at varying frequencies. Impedance-versus-frequency plots are used to determine whether the decoupling strategy is sufficient to meet the target impedance for each supply.

From DC up to the lower frequency range (<10 kHz), the lowest-impedance region on the ATE board is the ATE power supply itself, and the path of least impedance is through the DUT power and return planes. As the frequency increases, the path of least impedance shifts to the bulk capacitors, then the high-frequency capacitors, and finally the on-chip capacitance. Capacitance on the PDN is designed to cover the entire device clock-frequency spectrum in order to eliminate the voltage ripple generated by the device’s switching currents.

Each power rail requires a power-supply target impedance ZT that is a function of the supply voltage VDD, the allowable percent ripple, and the transient current:

ZT = (VDD × %ripple) / ITransient

The target impedance calculation needs to factor in the maximum ripple voltage that the DUT can tolerate (for example, 5% of VDD). It must also factor in the maximum transient current, which is not always known; as a rule of thumb, assume ITransient is 50% of IMAX.

As an example, for a 10-A supply with a 0.75-V VDD, a 5% ripple spec, and ITransient equal to 50% of IMAX, the target impedance is (0.75 V × 0.05) / 5 A = 7.5 mΩ.
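A minimal helper implementing the target-impedance formula above, reproducing the worked example; the 50% transient-current assumption is the rule of thumb mentioned in the text.

```python
def target_impedance_ohm(vdd_v: float, ripple_pct: float, i_max_a: float,
                         transient_fraction: float = 0.5) -> float:
    """ZT = (VDD * %ripple) / ITransient, with ITransient a fraction of IMAX."""
    return (vdd_v * ripple_pct / 100.0) / (i_max_a * transient_fraction)

# Worked example from the text: 0.75-V rail, 5% ripple, 10-A maximum current
print(target_impedance_ohm(0.75, 5, 10))   # 0.0075 ohm, i.e., 7.5 milliohms
```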

When determining target impedance, keep in mind that keeping impedance much lower than necessary will result in an overdesigned PDN and unnecessary cost.

Thermal analysis

Finally, thermal analysis involves studying temperature rise in circuit-board structures as currents increase. An effective strategy for thermal analysis is to employ PI-thermal co-simulation, which calculates heat generated as current flows through the metal structures of a load board from the VRM to the DUT.

Thermal analysis must consider the current flow from all supply rails but take into account the fact that not all supplies are necessarily activated at the same time. PI-thermal co-simulation is particularly useful for very high-power designs to identify hot spots that could cause damage to the board or DUT.

Thermal vias spread throughout the board with copper ground-flooding on the outer layers can minimize thermal problems. So can any additional structures, including frames and stiffeners, as they also act as heatsinks.

Figure 3 shows a thermal analysis that confirms satisfactory board temperatures resulting from supply currents. Supplies were run individually and in combinations of multiple supplies with a common return path. This simulation did not consider heat generated by components or the DUT itself.

Figure 3. This thermal simulation shows heat generated by currents from individual supplies and combinations of supplies.

No matter how careful the design, thermal problems can appear during normal load-board operation. Consider adding temperature sensors such as the Sensirion SHT35 and Texas Instruments TMP1075 to the board, placing multiple sensors at different locations on the top and bottom sides. The sensors communicate over an I2C interface and can assert an alert signal when a temperature threshold is exceeded; the tester reads this signal on its pin-electronics channels and can shut down the supplies when needed.
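A minimal sketch of the kind of monitoring this enables is shown below, assuming a TMP1075 accessed from a Linux host via the smbus2 package. The bus number, device address, register layout, and threshold are assumptions to be verified against the datasheet and the board design; a production implementation would typically use the sensor’s hardware alert pin rather than polling.

```python
from smbus2 import SMBus

# Assumptions (verify against the TMP1075 datasheet and the board wiring):
I2C_BUS = 1           # host I2C bus number
SENSOR_ADDR = 0x48    # default TMP1075 address
TEMP_REG = 0x00       # temperature result register, 12-bit left-justified
ALERT_LIMIT_C = 85.0  # shutdown threshold chosen for this example

def read_temp_c(bus: SMBus) -> float:
    """Read one temperature sample and convert it to degrees Celsius."""
    msb, lsb = bus.read_i2c_block_data(SENSOR_ADDR, TEMP_REG, 2)
    raw = ((msb << 8) | lsb) >> 4          # 12-bit signed result
    if raw & 0x800:                        # sign-extend negative readings
        raw -= 1 << 12
    return raw * 0.0625                    # 0.0625 C per LSB (assumed datasheet scaling)

with SMBus(I2C_BUS) as bus:
    temp = read_temp_c(bus)
    if temp > ALERT_LIMIT_C:
        print(f"Over-temperature ({temp:.1f} C): request supply shutdown")
```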

Thin-core dielectrics and thick stacks

Other considerations in load-board design include the use of thin-core dielectrics and thicker stacks. Thin cores, such as 12-µm cores, are useful for printed-circuit-board power and ground structures. They permit higher layer density and lower plane inductances, offering impedance reductions of 10% to 45% compared to normal-thickness dielectrics (Figure 4). Note, however, that they are more costly, present handling risks, and may be hard to source.

Figure 4. Thin dielectrics can provide a 10% to 45% improvement compared to normal-thickness dielectrics.

As for thicker stacks, existing ATE fabs offer board thicknesses up to 0.330 in. with a single lamination. Advanced fabs can create boards with thicknesses up to 0.400 in., increasing layer density by 21% (Figure 5). Thicker, higher-density stacks enable more layers for power planes. They are useful for CPU, GPU, and AI accelerator ATE boards, as well as memory probe and other probe tests. In addition, they support an increased number of layers with 2-oz copper cores to help improve PDN performance. R&D Altanova is in production of such boards effective this quarter.

Figure 5. Thicker, higher-density stacks can help improve PDN performance.

Conclusion

PDN performance is critical for the design of load boards for today’s high-current devices. Thermal concerns are increasingly significant as DUT power ratings increase. Design optimization and proper PDN power-integrity analysis ensure clean, stable power delivery, increasing yields for the device under test. They also help deliver working hardware the first time, saving precious time and the cost of board re-spins.


Comparison of State-of-the-Art Models for Socket Pin Defect Detection

This article is adapted from a presentation at TestConX, March 5-8, 2023, Mesa, AZ, by Vijayakumar Thangamariappan, Nidhi Agrawal, Jason Kim, Constantinos Xanthopoulos, Ira Leventhal, and Ken Butler, Advantest America Inc., and Joe Xiao, Essai, Advantest Group.

By Vijayakumar Thangamariappan, R&D Engineer, Expert, Advantest America Inc.

Test sockets have a key role to play in the semiconductor test industry. A socket serves as the critical interface between a tester and the device under test (DUT). Although seemingly simple in concept, a socket can have thousands of pins, depending on the number of I/O connections to the target device. A typical socket size might be 150 mm x 200 mm x 25 mm, and protruding pin height may be about 50 to 250 microns (Figure 1). Manufacturers may produce thousands of sockets per month or more, and each pin of each socket must be inspected so that pin defects do not impact semiconductor production test and cause expensive downtime.

Figure 1. A socket (top) may include thousands of pins, shown back (bottom left) and front (bottom right).

During socket assembly, several problems can arise. Too much pressure may be applied, one or more holes may be skipped, a pin meant for one hole may be inserted in another, or foreign material may contaminate a pin location. Figure 2 shows several defect types, including apparent pin defects caused by image capture errors.

Figure 2. Pins can exhibit several defects, some of which may be artifacts of the imaging system.

Traditionally, an inspection engineer has used a microscope to identify pin defects. But even for a highly trained engineer, the process is highly subjective, time-consuming and error-prone. The manual approach makes it particularly difficult to identify mixed-pin issues, which occur when a pin meant for one hole is inserted into another, and wrong pin issues, which occur when a pin meant for one socket type is inserted into another (Figure 3).

Figure 3. Manual inspection makes identifying mixed-pin (left) and wrong-pin (right) issues difficult.

In addition, manual inspection is difficult to scale for high-volume manufacturing. In general, it can lead to test escapes, reducing customer satisfaction, functionality, reliability, efficiency and productivity. A single pin failure can lead to system application failure or damage to the DUT, and a defect found at a customer site would require tester downtime to troubleshoot. Once the defective pin is identified, the socket assembly will require rework and retest, negatively impacting production throughput and imposing shipping delays.

Automating the inspection process

Consequently, it becomes desirable to automate the inspection process by applying artificial intelligence and machine learning. The first step involves considering concepts such as object classification and object detection. Object classification returns the class of an object, such as “cat” (Figure 4, left). Object classification provides no localization information regarding the position of the object—it merely indicates whether an object of a particular class, such as “cat,” is or is not present. In contrast, object detection identifies the classes of objects in an image (for example, “dog” and “cat” in Figure 4, right) and surrounds them with bounding boxes (green and red rectangles in Figure 4, right) to indicate their locations.

Figure 4. Object classification can identify the class of an object in an image (left), while object detection identifies object classes and locates them within bounding boxes (right).

For socket pin-defect inspection, object detection is the preferred approach. For object classification, limited interpretability (that is, distinguishing the class of “good pins” from the class of “defective pins”) makes identifying corrective actions difficult, and background variability (such as socket surface patterns) greatly affects results. In contrast, object detection can help identify and locate different object types, with background variability ignored.

Having decided on object detection, we evaluated three object-detection algorithms:

  • YOLO (You Only Look Once) employs a one-step process that performs classification and establishes bounding boxes at the same time.
  • Faster R-CNN (Faster Regions with Convolutional Neural Networks) employs a two-step process providing, first, a region proposal, and second, object detection within the proposed region.
  • SSD (Single Shot Detector) employs a one-step process that divides an image into a grid to locate objects within the image.

Training these algorithms requires many images for every class of object of interest. Because the pin defect rate in a manufacturing line is low and some defect types are rare, it is difficult to select a balanced dataset. Our approach was to group all defective pins under a single class named “defective.” We then defined two additional classes, “big pin” and “small pin,” to train a single three-class model. Each pin image has a size of 792 by 792 pixels. 

Figure 5 shows our training and validation dataset on the left and the number of defect types that make up our “defective” class on the right.

 

Figure 5. The defective class in the training dataset (left) includes jammed pin, missing pin, foreign material (FM), bent pin, image capture error (ICE) and wrong pin defects (right).

We next employed the semiautomated bounding box preparation process outlined in Figure 6.

Figure 6. Bounding box preparation requires a five-step process.

The steps are as follows (a minimal sketch implementing them appears after the list):

  1. Apply Gaussian blur
  2. Find the mean pixel value and set all pixels greater than the mean to white
  3. Do a binary invert
  4. Find max area contours
  5. Draw the bounding box
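The sketch below shows one way these five steps can be expressed with OpenCV; the blur kernel and other parameters are illustrative choices, not the production settings.

```python
import cv2

def pin_bounding_box(gray_img):
    """Semiautomated bounding-box preparation for one grayscale pin image."""
    # 1. Apply Gaussian blur to suppress high-frequency noise
    blurred = cv2.GaussianBlur(gray_img, (5, 5), 0)
    # 2. Threshold at the mean: pixels above the mean become white
    mean_val = float(blurred.mean())
    _, thresh = cv2.threshold(blurred, mean_val, 255, cv2.THRESH_BINARY)
    # 3. Binary invert so the pin region becomes the foreground
    inverted = cv2.bitwise_not(thresh)
    # 4. Find the contour with the largest area
    contours, _ = cv2.findContours(inverted, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    # 5. Return the bounding box (x, y, width, height) around that contour
    return cv2.boundingRect(largest)

gray = cv2.imread("pin.png", cv2.IMREAD_GRAYSCALE)   # one 792 x 792 pin image
x, y, w, h = pin_bounding_box(gray)
```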

Figure 7 outlines the confusion matrix of possible outcomes. False positives imply test escapes, while false negatives require more time to evaluate bad images.

Figure 7. In this confusion matrix, false positives imply test escapes and false negatives require more time to investigate.

To evaluate algorithm performance, we focused on time and accuracy as key metrics. Speed is crucial because the model will be deployed in a post-assembly socket manufacturing line. In addition, high-volume manufacturing generates a large amount of input data, so a model that can predict an object class quickly is necessary. Accuracy is necessary to minimize false negatives and prevent test escapes.

To measure the inference time of all three models, we deployed them on Amazon EC2 instances, which are commonly used to host machine-learning models used in image classification and object detection. We chose instance type g4dn.16xlarge, which has an NVIDIA Tesla T4 16-GB GPU. Table 1 shows the results.
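As a rough illustration of how such an inference-time measurement can be scripted, the sketch below times a YOLO detector over a batch of pin images. It assumes the ultralytics package and a hypothetical weights file; the paper does not specify the exact framework, model version, or file names used.

```python
import time
from ultralytics import YOLO   # assumed framework, not necessarily the authors' choice

model = YOLO("pin_detector.pt")                     # hypothetical 3-class model weights
image_paths = [f"pins/{i:05d}.png" for i in range(100)]

start = time.perf_counter()
for path in image_paths:
    model.predict(path, imgsz=792, verbose=False)   # one 792 x 792 pin image at a time
avg_ms = (time.perf_counter() - start) * 1000 / len(image_paths)
print(f"average inference time: {avg_ms:.1f} ms per image")
```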

Table 1. Model Inference Time

The Faster R-CNN algorithm required the longest processing time, as expected, because it has a two-stage architecture. YOLO and SSD have single-stage architectures and had shorter inference times, with YOLO outperforming the other two.

Our results show that YOLO also outperformed the other two algorithms in terms of accuracy. Accuracy metrics primarily focus on false positive (test escape) and false negative (needs review) results. YOLO misclassified only five good pins as bad pins (false negatives). The low false-negative count drastically simplified the post-prediction review process. The following list summarizes our observations regarding the test escapes:

 

  1. Test escapes: All models performed well in identifying jammed pin, bent pin, and wrong pin defects. YOLO correctly identified all missing pin defects, while Faster R-CNN and SSD had eleven and two misclassifications, respectively; both therefore had test escapes.
  2. Conditional test escapes: YOLO outperformed the other two models in identifying foreign material and image capture error classes. YOLO’s 10 false positives are from seven foreign-material (FM) defects and three image-capturing errors (ICEs).
    1. FMs that clog the whole pin region are the real problem. YOLO’s seven FM misclassifications resulted from either a tiny FM particle in the pin region or FM that did not affect the pin-hole region. We recommended additional cleaning procedures before inspection to avoid this issue.
    2. An ICE is an issue caused by the image-capture equipment. ICEs do not represent actual pin defects but do add noise to the image. In YOLO’s three misclassified ICEs, the pin regions are clearly visible, and the issue occurs outside the pin-hole region. We took additional measures to avoid these randomly generated ICE issues.

Table 2 summarizes our overall results.

 

Table 2. Model comparison with metrics

Advantest ACS Edge solution

As mentioned, we performed our model evaluations in an Amazon AWS cloud environment. To achieve faster prediction speeds in an actual manufacturing facility, you can forgo cloud-service hosting and instead use Advantest ACS Edge™, a highly secure edge compute and analytics solution that can host computationally intensive workloads adjacent to the test equipment. The ACS Edge solution provides consistent, reliably low latencies compared with datacenter-hosted alternatives.

Figure 8. ACS Edge can host models with low latency.

Conclusion

The primary goal of socket pin-defect detection is to reduce the need for manual inspection while maintaining zero test escapes. We compared three different object-detection algorithms to find the best combination of accuracy and processing speed. The YOLO model was able to learn pin-type features quickly, achieving higher accuracy with fewer iterations compared with the other models.
