
Trustable AI for Dependable Systems

By Ira Leventhal, Vice President, Applied Research Technology and Ventures, Advantest America, and Jochen Rivoir, Fellow, Advantest Europe

Interest in implementing artificial intelligence (AI) for a wide range of industries has been growing steadily due to its potential for streamlining functions and delivering time and cost savings. However, in order for electrical and electronic systems utilizing AI to be truly dependable, the AI itself must be trustable.

As Figure 1 shows, dependable systems have a number of shared characteristics: availability, reliability, safety, integrity, and maintainability. This is particularly essential for mission-critical environments such as those illustrated. Users need to be confident that the system will perform the appropriate actions in a given situation, and that it can’t be hacked into, which means the AI needs to be trustable from the ground up. As a test company, we’re looking at what we can do down at the semiconductor level to apply trustable, explainable AI in the process.


Figure 1. Dependable systems are essential for electrical and electronic applications,
particularly those with life-or-death implications.

What is trustable AI?

Currently, much of AI is a black box; we don’t always know why the AI is telling us to do something. Let’s say you’re using AI to determine pass or fail on a test. You need to understand what conditions will cause the test to fail – how much can you trust the results? And how do you deal with errors? You need to understand what’s going on inside the AI, particularly with deep learning models: which errors are critical, which aren’t, and why a decision is made.

A recent, tragic example is the Boeing 737 MAX 8 jet. At the end of the day, the crashes that occurred were due to the failure of an AI system. Based on the sensor data it continually monitored, the autopilot system was designed to engage and prevent stalling at a high angle of attack – all behind the scenes, without the pilot knowing it had taken place. The problem was that the system engaged corrective action at the wrong time because it was getting bad data from a sensor. This makes it an explainable failure – technically, the AI algorithm worked the way it was supposed to, but the sensor malfunctioned. Boeing could potentially rebuild confidence in the airplane by explaining what happened and what the company is doing to prevent future disasters – e.g., adding more redundancy, taking data from more sensors, improving pilot training, etc.

But what if a deep learning model were the autopilot rather than the simpler model that acts based on sensor data? Due to the black-box nature of deep learning models, it would be difficult to assure the public that the manufacturer knew exactly what caused the problem – the best the manufacturer could do would be to take what seem like logical measures to correct the issue, but the system would NOT be trustable.

What does this mean for AI going forward? What are the implications of not having trustable AI? To understand this, we need to look briefly at the evolution of AI.

The “next big thing”… for 70 years

As Figure 2 shows, for seven decades now, AI has been touted as the next big thing. Early on, AI pioneer Alan Turing recognized that a computer equivalent to a child’s brain could be trained to learn and evolve into an adult-like brain, but bringing this to fruition has taken longer than he likely anticipated. During the first 25 years of the AI boom, many demos and prototypes were created to show the power of neural networks, but they couldn’t be used for real-world applications because the hardware was too limited – the early computers were very slow, with minuscule amounts of memory. What followed in the 1970s was the first AI winter. The second boom arose in the 1980s and ‘90s around expert systems and their ability to answer complex questions. The industry created highly customized expert-system hardware that was expensive and tough to maintain, and the applications were mediocre at best. The second AI winter ensued.

Figure 2. The evolution of AI has been marked by hype cycles, followed by AI winters.

For the past 20 years, AI has enjoyed a fairly steady upward climb due to the confluence of parallel processing, higher memory capacity, and more massive data collection, with data being put into lakes rather than silos to enable better flow of data in and out. Having all these pieces in place has enabled much better algorithms to be created, and Internet of Things (IoT) devices have created a massive, continuous flow of data, aiding in this steady progression.

What this means, however, is that we are currently in the next hype cycle. The main focus of the current hype is autonomous cars, medical applications such as smart pacemakers, and aerospace/defense – all areas with life-and-death implications. We need trustable AI; otherwise, we will not have dependable systems, which will lead to disappointment and the next AI winter. Clearly, we need to avoid this.

AI in the semiconductor industry

With this backdrop, what are some challenges of applying AI within the semiconductor industry?

  • Fast rate of technological advancement. AI is great for object recognition because it’s learning to recognize things that don’t change that much, e.g., human beings, trees, buildings. But in the semiconductor industry, we see a steady parade of process shrinks and more complex designs that bring new and different failure modes.
  • Difficulty applying supervised learning. New process nodes and designs lack the labeled training data that supervised learning requires.
  • High cost of test escapes. If a faulty device is passed and sent out for use in an application – an autonomous driving system, for example – and subsequently fails, the consequences could be a matter of life and death. Therefore, both risk aversion and the need for certainty are very high.

Meeting these challenges requires a different type of AI. A major research focus in the AI community is on developing explainable AI techniques designed to provide greater transparency and interpretability, but these techniques are currently far from fully opening AI model black boxes. Today, our focus is on developing explaining AI. With this approach, we look for opportunities to use machine learning models and algorithms – deep learning, clustering, etc. – to provide insight into the data so that we can make better decisions based on that information. By looking for ways to use AI that offer more upside potential for insight, and staying away from those that increase risk, we can create more trustable AI. This will allow us to make semiconductors that operate more accurately, reliably, and safely – that is, devices that exhibit all the characteristics associated with dependable systems.

Reduced test time or higher test quality?

If we use deep learning to analyze test results, we may find that we don’t need to run as many tests – for example, 10 tests could replace a previous test flow that required 30, greatly reducing the test time required. But if the models are imperfect and result in more test escapes, you end up losing dependability for the devices and the systems they go into.

Machine learning exploits correlations between measurements, but every machine learning algorithm makes mistakes. There are two kinds of risk you can take: (a) in removing outliers, you risk failing good devices, losing yield and money; or (b) in reducing test time, you risk passing bad devices and losing dependability. Multivariate outlier detection can be used to find additional failures, while deep learning can be employed to detect complex, but well-characterized, failures. Either way, you need explainable decisions.
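To make the multivariate idea concrete, the sketch below screens parametric test data for devices that pass every individual limit but break the joint correlation structure. It uses scikit-learn’s robust covariance estimator with a chi-square cutoff; the measurement names, distributions, and threshold are illustrative assumptions, not Advantest’s production algorithm.

```python
# Minimal sketch of multivariate outlier screening on parametric test data.
# Column meanings and thresholds are illustrative, not a production test flow.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
# Pretend measurements: rows = devices, columns = correlated parametric tests
# (e.g., leakage, offset, gain). Real data would come from the test program.
good = rng.multivariate_normal([1.0, 0.2, 3.0],
                               [[0.01, 0.005, 0.0],
                                [0.005, 0.01, 0.0],
                                [0.0, 0.0, 0.04]], size=2000)
suspect = np.array([[1.3, -0.1, 3.0]])          # passes each limit alone,
measurements = np.vstack([good, suspect])       # but breaks the correlation

# Robust covariance fit, then squared Mahalanobis distance per device
mcd = MinCovDet(random_state=0).fit(measurements)
d2 = mcd.mahalanobis(measurements)

# Flag devices beyond the 99.9th percentile of the chi-square distribution
limit = chi2.ppf(0.999, df=measurements.shape[1])
outliers = np.where(d2 > limit)[0]
print(f"{len(outliers)} multivariate outlier(s) flagged, e.g. index {outliers[-1]}")
```

A decision flagged this way remains explainable: the distance score and the contributing measurements can be shown for each rejected device.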

Explaining AI for engineering applications

Applying AI algorithms to your process requires powerful visualization tools to help you gain further insights into your data. Understanding what the machine learning is telling you will enable you to make decisions based on the data. Let’s take, as an example, machine learning-based debug during post-silicon validation. After your design is complete and you have your first chips, you now want to perform a variety of exploratory measurements on the device to determine that it’s doing what you want it to do.

We are currently researching an innovative approach for applying machine learning in post-silicon validation, as shown in Figure 3:

  1. Generate. Proprietary machine learning algorithms are used to smartly generate a set of constrained random tests that are designed to efficiently find complex relationships and hidden effects.
  2. Execute. The constrained random tests are then executed on the test system. When the results show relationships under certain conditions, we want to zero in on these and find out more about what’s going on in these specific areas. The data collected creates a model of the system.
  3. Analyze. Now that we have our model, we can perform offline analysis, running through a wide range of different I/O combinations and using proprietary machine learning algorithms to analyze the data and determine where there may be effects or issues we need to be aware of.
Figure 3. Machine-learning based post-silicon validation comprises the three steps shown above.
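As a rough illustration of this generate / execute / analyze flow, the sketch below draws constrained random stimuli, “executes” them against a stand-in device function, and then fits a surrogate model that is probed offline to surface a hidden input dependence. The input names, ranges, and choice of model are hypothetical; the algorithms Advantest uses are proprietary.

```python
# Minimal sketch of the generate / execute / analyze loop described above.
# The "device" here is a stand-in function; in practice step 2 runs on the
# test system, and the generation/analysis algorithms are proprietary.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)

# 1. Generate: constrained random stimuli over the device inputs
#    (ranges below are made-up constraints for illustration)
n_tests = 5000
X = np.column_stack([
    rng.uniform(-1.0, 1.0, n_tests),    # e.g., input voltage offset
    rng.uniform(0.0, 85.0, n_tests),    # e.g., temperature
    rng.uniform(1.0, 5.0, n_tests),     # e.g., supply voltage
])

def device_response(x):
    """Stand-in for executing one test on real silicon."""
    v_off, temp, vdd = x
    # A hidden, unexpected dependence on vdd, plus measurement noise
    return 2.0 * v_off + 0.01 * temp + 0.3 * (vdd - 3.3) ** 2 + rng.normal(0, 0.02)

# 2. Execute: run the constrained random tests and collect results
y = np.apply_along_axis(device_response, 1, X)

# 3. Analyze: fit a surrogate model of the device, then probe it offline
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for name, score in zip(["v_offset", "temperature", "vdd"], imp.importances_mean):
    print(f"{name:12s} importance = {score:.3f}")   # exposes the hidden vdd effect
```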

In one example, we implemented this machine learning-based process to debug the calibration for a driver chip from one of our pin electronics cards. We generated 500,000 test cases with all inputs varied and analyzed the results to find hidden relationships. Using the resulting model, virtual calibrations of the device were run with varying input conditions, and the resulting root-mean-square (RMS) error for each calibration was predicted. The machine learning algorithm uncovered a hidden and unexpected effect of an additional input on the calibrated output. With this input included in the calibration, the RMS error was reduced from approximately 600 microvolts (µV) to under 200 µV. When we took the results, including visualizations and plots, back to the IP provider for the chip, they were initially surprised by this unexpected effect but were able to review the design and find the problem within just one day of obtaining the data. Two key benefits resulted: the calibration was improved, and the IP designer was able to tune the design for future generations of parts.

Another application for explaining AI is fab machine maintenance, where sensor and measurement data are collected continuously while the machines are running. The question is what we do with that data. With a reactive approach, we’re flying blind, so to speak – we don’t know there’s a problem until we’re alerted to a machine operating out of spec. This causes unexpected downtime and creates problems with trustability and reliability. Far better is to take a predictive approach – ideally, one that does not rely on simple conditional triggers alone, but employs machine learning to examine the data and spot hidden outliers or other complex issues, so that a potential problem with a machine is identified long before production problems result. By catching hidden issues – as well as avoiding false alarms – we obtain more trustable results.
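A minimal sketch of the predictive idea follows: a model is trained on sensor snapshots from a healthy tool and then scores new readings, flagging joint drift before a simple threshold alarm would fire. The sensor names, values, and model choice are assumptions for illustration, not a description of any specific fab system.

```python
# Minimal sketch of the predictive approach: learn what "normal" machine
# sensor behavior looks like, then flag drifting readings before a hard
# out-of-spec alarm. Sensor names and values here are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# Historical sensor snapshots from a healthy tool (chamber pressure, RF power,
# coolant temperature) -- in practice this streams from the fab host.
healthy = rng.normal(loc=[100.0, 500.0, 21.0],
                     scale=[0.5, 2.0, 0.1], size=(5000, 3))
detector = IsolationForest(contamination=0.001, random_state=0).fit(healthy)

# New readings: one normal, one drifting jointly across several sensors
new_readings = np.array([
    [100.2, 501.0, 21.0],    # normal
    [102.0, 493.0, 21.5],    # joint drift a single-sensor trigger may miss
])
for reading, label in zip(new_readings, detector.predict(new_readings)):
    status = "OK" if label == 1 else "investigate before failure"
    print(reading, "->", status)
```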

The bottom line

Dependable systems require trustability and explainability. Machine learning algorithms hold great promise, but they must be carefully and intelligently applied in order to increase trust and understanding. Explaining AI approaches, which can provide greater insight and help us make better decisions, have many powerful applications in the semiconductor industry.

TDR with Recursive Modeling Optimizes Advanced-Package FA

by Shang Yang, Ph.D., Senior R&D and Application Engineer, Advantest Corp.

As the range and volume of chips developed for a host of Internet of Things (IoT) applications continue to escalate, conventional failure analysis (FA) techniques are increasingly challenged by the higher input/output (I/O) density and data throughput associated with complex 2.5D and 3D IC packages. These structures are not flat, two-dimensional layouts; they more closely resemble skyscrapers, with many “floors” or layers, as Figure 1 illustrates. In this example, the layers sit on a complex foundation of microbumps, interposers and through-silicon vias (TSVs), on top of a laminate material that is attached to the printed circuit board (PCB) using ball grid array (BGA) bumps. This type of complexity makes it increasingly difficult, when conducting FA on the chip structure, to pinpoint the location of a failure from the package level down to the die level.

Figure 1: Multidimensional chips, such as the 3D IC package shown here, face significant challenges with respect to performing failure analysis.

Techniques such as x-ray scanning can perform FA on these devices, but these processes are lengthy, which is problematic given the fast time-to-market windows that IoT devices and applications require. For example, if a 5-micron solder bump is determined to be the source of a failure, it is highly challenging to determine whether the crack is on the top or the bottom surface of the bump. Conducting FA by performing x-ray scanning through the entire chip can take up to a few days depending on the chip complexity.

Time-domain reflectometry (TDR) is increasingly being deployed in order to determine the location of the problem more quickly. However, applying TDR analysis for defect characterization inside the die creates its own set of challenges, as this method becomes less accurate if the failure point is between the package-die interface and the transistors. This combination of challenges points to the need for a new approach to TDR.

Effective defect searching

To further aid in understanding why a revised TDR technique is necessary, let’s take a look at a general chip FA process (Figure 2), which leverages two kinds of inspection – structural and functional – both of which are needed to debug the defect down to the device level. The first step is a visual inspection using the naked eye or a microscope. Obvious cracks in the chip may be detected and the failure location narrowed down to the package level with approximately 1,000-micron resolution.

Figure 2: Structural and functional inspection techniques are both necessary for failure analysis, but a gap exists on the functional side that conventional TDR cannot fill.

Step 2, electrical evaluation, uses an oscilloscope or curve tracer to verify the functionality of each pin. At this point, the failure location may be further narrowed down to the pin level with resolution of about 300 microns. Next, using TDR, x-ray or ultrasonic imaging, the failure point is further investigated at the interconnect level, down to a resolution of around 100 microns.

While there are a number of powerful tools that can conduct further structural inspection and analysis at the die level, a large gap exists between functional inspection steps 3 and 4, as the figure illustrates. If the density of devices inside the 100-micron scale is very high, conducting step 4 efficiently and getting down to the submicron device level for FA becomes highly difficult. Further complicating the matter is that functional solutions are faster with lower accuracy whereas structural methods are more accurate, but take much longer. A high-resolution TDR system that can deliver accurate results quickly is needed to fill this gap.

TS9000 TDR enables high-res die-level accuracy

Advantest has addressed these challenges by developing a TDR option for its TS9000 terahertz analysis system to achieve real-time analysis with ultra-high measurement resolution. The TS9000 TDR Option relies on Advantest’s TDR measurement technology to pinpoint and map circuit defects utilizing short-pulse signal processing. Figure 3 shows the difference between conventional TDR and the Advantest approach.

Figure 3. Conventional TDR is intrinsically a high-noise, high-jitter process. High-res TDR with the TS9000 option replaces the sampler and step source with photoconductive receptors, enabling low noise and very low jitter.

Using laser-based pulse generation and detection, the Advantest solution delivers impulse-based TDR analysis with ultra-low jitter, high spatial precision of less than 5 microns, and a maximum measurement range of 300mm, including internal circuitry used in TSVs and interposers.

Having a high-resolution TDR solution alone does not guarantee the ability to detect the defect all the way down to the design level. Another problem is signal loss – if it is very high, it will have two effects on the reflected pulse from the front end of line: the pulse will have reduced amplitude and a large spread. This makes it difficult to pinpoint the specific defect location.

Recursive modeling (see Figure 4) simulates “removing” all the layers to enable virtual probing at the desired level without destroying the device or being hampered by the hurdles that conventional FA techniques present. This overcomes the challenge of the probe point not always being available, due to probes’ minimum pad-size requirement and limited accessibility to points far inside the die. The probe can move down layer by layer, de-embedding each trace and recursively measuring the signal pulse down to the interface before the front end of line (FEOL), until the defect point can be clearly observed and characterized in the TDR waveform.
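For intuition, the sketch below shows the classic layer-peeling relationship that underlies this kind of de-embedding: each reflection coefficient is converted into the impedance of the next section as the analysis walks deeper into the package. It is a toy model that ignores loss and multiple reflections, and the reflection values are hypothetical; it is not the TS9000’s recursive-modeling algorithm.

```python
# Minimal sketch of the layer-peeling idea behind recursive de-embedding:
# walk inward section by section, converting each reflection coefficient
# into the impedance of the next section. Ignores loss and multiple
# reflections; reflection values below are hypothetical.

def peel_layers(z_source, reflections):
    """Recover an impedance profile from per-section reflection coefficients."""
    z = z_source
    profile = []
    for rho in reflections:
        # rho = (Z_next - Z) / (Z_next + Z)  ->  solve for Z_next
        z = z * (1.0 + rho) / (1.0 - rho)
        profile.append(z)
    return profile

# Hypothetical reflections seen at successive interfaces of a package path:
# BGA ball -> substrate trace -> TSV -> microbump -> die-level interconnect
reflections = [0.02, -0.05, 0.01, 0.45, 0.00]   # the 0.45 step suggests a defect
for depth, z in enumerate(peel_layers(50.0, reflections), start=1):
    flag = "  <-- large impedance step, likely defect" if depth == 4 else ""
    print(f"section {depth}: {z:6.1f} ohms{flag}")
```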

This impulse-based TDR approach has proven to be a highly effective method for quickly localizing failure points in 2.5D/3D chip packages, with ultra-high resolution. The recursive modeling technique described, when implemented with the Advantest TS9000 TDR, can greatly increase the strength of the reflected signal and reduce the spread effect to ensure high-accuracy defect detection.

Figure 4. In recursive modeling, the layers of the device can be virtually peeled away like an onion and probing conducted far inside the die to determine a defect’s nature and location.

Next-Generation Vehicles Pose Automotive Semiconductor Test Challenges

By Jerry Koo, Advantest Korea Co., Ltd., Business Promotion Division, Team Leader

Introduction

Various market trends are driving requirements for automotive semiconductor test as technology increasingly defines the future of the automobile. According to IHS Markit,1 the total market for semiconductors, having reached nearly $500 billion in 2018, will grow at a CAGR of 4.88% through 2022, while the automotive electronics category, reaching more than $40 billion in 2018, will outpace the total market, growing at a CAGR of 8.74% through 2022.

This market growth accompanies a paradigm shift toward the technologies that define the future of next-generation vehicles.

Alternative propulsion systems, including hybrid and EV drivetrains, will require a new charging/refueling infrastructure, possibly including wireless charging, and will be accompanied by efficient motor drives, vehicle weight reduction, and a move to 48-V batteries. To ensure high quality, it will be necessary to thoroughly test sensors, MCUs, power devices, power-management ICs, and other related components.

Connected cars will feature IoT, GPS, and cellular connectivity to enable infotainment and telematics functionality while integrating media, smartphones, and apps. Communications will extend beyond the car to other vehicles and infrastructure and to datacenters in the cloud.

Vehicles will also require various highly accurate sensors to enable ADAS and fully autonomous vehicle functionality. Complementing cloud connectivity will be high-speed networks within the car and onboard high-performance computing. Technologies will evolve from driver assistance in 2015 to automation (with the car operating as a copilot) driven by sensor fusion in 2020 and on to fully autonomous operation in 2030.

All these technologies will impose cost-management and product-planning challenges on auto manufacturers – on the factory floor, in the car, and throughout the supply chain – while driving a zero-defect requirement from the process to the field.

High voltage and parallelism

High-voltage semiconductor processes for automotive propulsion and cost management include 0.32-μm to 1.0-μm high-voltage (700 V) bipolar-CMOS-DMOS (BCD), 0.18-μm to 0.32-μm (200 V/300 V) SOI BCD, and 0.18-μm to 0.35-μm (80 V/150 V) BCD. Semiconductors fabricated in these processes serve application areas extending from EV and HEV powertrains to braking, airbag deployment, and other safety/body functions.

To test such semiconductors, Advantest offers T2000 IPS test modules, including the MMXHE (120 V), the MFHPE (300 V), and the SHV2KV (2,000 V), as well as the EVA100 Evolutionary Value-Added Measurement System, which offers analog VI sources with outputs to 96 V plus medium- and high-power VI sources with outputs to 128 V and 2,000 V, respectively (Figure 1).

Figure 1. T2000 and EVA100 modules facilitate the test of semiconductor devices for a variety of automotive applications, from the powertrain to safety and body systems.

Compared with alternatives, the T2000 MMXHE enhanced multifunction mixed high-voltage module offers a simple board design with high parallelism, including 64 cross-functional ports with PMUs, 10-ps-resolution TMUs, 32 digitizer channels, and 32 AWG channels. It supports IDDQ test as well as 20-bit differential voltage measurements.

The T2000 MFHPE multifunctional floating high-power module can operate in single, gang, and stack configurations with up to 18 channels or 36 ports per module to support multisite test.

T2000 IPS supports highly accurate and highly parallel automotive PMIC test, with only two modules able to perform DC/DC converter tests, LDO tests, and MCU I/F functional tests. A key benefit of the T2000 IPS is its minimization of test-board components (such as buffers and electromechanical relays), thereby simplifying PCB design and maintenance (Figure 2).

Figure 2. The T2000 IPS minimizes the number of components required on a test board, thereby easing board design and system maintenance.

Sensor test

Autonomous vehicles present the need to test highly accurate sensors, including CO2 sensors, airbag pressure sensors, vehicle-stability-control (inertial) sensors, and pedestrian pressure sensors. Automotive angle sensors (for electric power steering and integrated starter-generator applications) present particular test challenges, requiring extensive screening to ensure accuracy and reliability. For such applications, the Advantest EVA100 provides the necessary fF and pA testing capabilities.

Electric, hybrid, and plug-in hybrid-electric vehicles also incorporate current sensors to monitor main and auxiliary batteries as well as inverters, motors, and chargers. Conventional methods of testing these sensors, including the use of current clamps or Helmholtz coils, have drawbacks. The Helmholtz-coil method, for example, presents challenges related to magnetic flux intensity and uniformity, and it requires a large chamber (1.5 x 1.5 x 1.5 m) to keep the coils and DUTs at the required -40°C temperature. In contrast, a new high-current sensor-test method based on the Advantest EVA100 offers a 20-fold size reduction (600 x 500 x 300 mm) while testing four DUTs in parallel. The method employs dual-fluid direct temperature control; electromagnets apply guaranteed magnetic flux levels to the DUTs.

SiPs and system-level test

Connected cars are driving a trend toward increasing integration in automotive modules, SiPs, and SoCs. Such devices increasingly combine processors and memory as well as imaging, magnetic, or pressure sensors; power devices and MCUs; memory and MCUs; MCUs plus baseband and RF circuits as well as antennas; and motor drivers and MCUs. SiPs present many technical test challenges related to interposer connectivity, warpage, die shifting, die-to-die communications (with marginal timing), limited test ports on the package, package handling, and stress related to level, timing, and processing.

Such highly integrated devices impose stringent test requirements. System-level test (SLT) can play a role in boosting quality, coming after wafer sort and final test and before installation in the end product. However, legacy SLT environments based on rack-and-stack equipment can be cumbersome and slow. In contrast, the Advantest T2000 SLT Solution automates SLT, enabling a compact SLT cell for high-mix low-volume devices or a large-scale SLT cell for ultrahigh-volume production and long SLT times.

Conclusion

From sensors and processors to power-management and motor-controller ICs, semiconductors for next-generation automotive applications will present significant test challenges, including SLT. Advantest’s T2000 IPS and EVA100 systems are available today to meet the test requirements of the devices that will serve in tomorrow’s hybrid, electric, and autonomous vehicles and the infrastructure that supports them.

Reference

1. “Semiconductor Application Forecast AMFT Intelligence Service,” IHS Markit Technology Group, Q4 2018.

Looking to the Next (5th) Generation

By Judy Davies, Vice President of Global Marketing Communications, Advantest

The global semiconductor business is constantly on the lookout for the “next big thing” (NBT): the mass-market killer app that will drive the next wave of market growth for our industry. While candidates abound – thanks to the continued rise of applications utilizing technologies such as flexible sensors and augmented reality – the new NBT is shaping up to be the next generation of highly efficient 5G mobile networks. Long promised and finally on the cusp of coming to market fruition, 5G will far surpass current 4G LTE technology in both its highs (speed) and lows (latency).

With more than 14 billion connected devices predicted to come into use this year (according to Gartner, Inc.), advanced 5G networks will provide the scalability and energy efficiency necessary to serve the skyrocketing number of connections. The five major sectors that Advantest sees driving rapid development and adoption of 5G technology are automotive, medical, retail, mobile and Big Data. Each will benefit significantly from 5G’s inherent advantages, which include broader connectivity, speedier response times, greater memory capacity and – everyone’s favorite – longer battery life.

However, the new applications that 5G will enable, such as mobile broadband and massive Internet of Things (IoT) connections, will require new approaches. One key need is network slicing, which entails delivering multiple network instances (such as 4G LTE and 5G) over a single common infrastructure. This technique will provide the flexibility and cost efficiency customers demand while reducing their cost of ownership, as well as facilitating development of new networking products and services.

Another major benefit of 5G is that, while providing much faster signal speeds over greater bandwidths, it will also optimize the benefits associated with lower-speed operation – a “speed as needed” capability, as it were. In the case of IoT devices, 5G allows narrow-spectrum operation in order to achieve connectivity over greater distances while conserving power usage. Connected devices that don’t require constant monitoring, for example, can check in with the network on an as-needed basis so they are not consuming power constantly. This efficiency will go a long way toward preserving and extending battery life.

When 5G is fully implemented, signal latency will drop below 10 milliseconds, yielding network operating speeds up to 100 times faster than what we experience today. This low latency will not only benefit current applications but will also enable numerous next-generation, mission-critical applications, including industrial automation, virtual and augmented reality, online health and medical services, and aerospace and military systems.

Among the aspects of 5G that remain to be worked out is the question of industry standards. Thanks to the massive number of networked IoT devices, connectivity standards must evolve to accommodate much higher connection densities than have ever been required. Specifications indicate that 5G networks will be able to accommodate as many as 1 million connected devices packed into an area of 0.38 square mile, compared to around 2,000 such devices on today’s networks.

Advances must also be made in edge computing to avoid data overload and reduce round-trip latency. Essentially, edge computing refers to processing data near the edge of the network, on smart devices, instead of in a centralized cloud environment. By applying edge computing to information collected by IoT sensors, the findings can be pre-processed and only selected data passed along for central processing. This will aid in managing the immense increase in data that is coming with 5G.

The good news is that, despite these remaining hurdles, it’s clear that there is a finish line in sight. In the new 5G world, the winning companies will be those that collaborate and align with their customers to design and create 5G components that will enable the fast-approaching new world of computing and communications.

Adaptable, Modular Platforms Are Key to Future-Focused Test

By GO SEMI & BEYOND Staff

The electronics industry evolves continually, introducing potentially disruptive technologies and driving new applications at a pace that requires companies to respond quickly and nimbly. Being able to recognize trends early on and provide solutions that can adapt to meet emerging demands is key to remaining competitive.

This is especially true of suppliers to the semiconductor ecosystem, including test and measurement solution providers, who must be able to meet the increasingly stringent testing requirements associated with devices developed for everything from smartphones and displays to AI and automotive applications.

A dominant trend is the demand for smart portable devices such as smartphones and tablets to deliver processing performance without significantly compromising battery life. A long battery-charging interval is a huge differentiator that can make or break even the most promising products and technologies. Simply put, people demand long battery life, but still crave faster, smaller, more feature-packed devices with power-hungry connectivity technologies like 5G.

However, solving one problem often produces another. This axiom applies to developing more sophisticated devices, where testing, especially system-on-chip (SoC) testing, has come up against such daunting challenges as higher voltages, data encryption, low-leakage battery-powered designs, more complex chips, and rapid development cycles. Yet test technology providers need to continue to meet the demand for low-cost solutions in high-volume manufacturing environments. Today, the testing space is defined by a broad range of different applications, requiring a similarly large variety of test methodologies. By looking ahead, companies can position themselves early on to benefit over the course of a product’s lifecycle.

Autonomous cars and e-mobility are leading trends that are under continuous development. These applications have evolved rapidly over the past few years, with the number of electronic components in today’s vehicles having rocketed into the near-triple digits. From infotainment (car navigation, center console control) to autonomous driving (image sensors, AI) to vehicle control (driving assistance, tire pressure monitoring, engine control), this market offers phenomenal potential, both current and future. The more innovations that are developed, the more markets created and the greater the demand. Ensuring automotive-grade, zero-defect quality is essential to guaranteeing safety, reliability, and market success.

Enabling high-quality testing

Advantest has a wide portfolio of solutions with the flexibility and capabilities essential for expanding into sectors where innovation is on the rise. These solutions are all designed to contribute to improving test quality and flexibility while lowering test costs.

The V93000 system is configurable to match device needs, providing DC, digital, analog and RF capabilities on one tester platform. As testing needs change and develop over time, the platform can adapt with the addition of new modules to expand functionality. The RF solution, for example, can accommodate a wide range of devices with varying levels of complexity (such as mobile phones, navigation systems, Wi-Fi- and Bluetooth-enabled devices, and IoT systems), testing up to 32 devices or RF ports in parallel.

Complementing the platform with the power analog FVI16 card enables flexible and transparent high-quality power testing (see Figure 1). The card, which is mainly used for automotive, industrial and consumer mobile-charging applications, utilizes shorter test pulses, which prevent heating of the device under test and save test time, and features a digital feedback loop design for accurate and reliable measurements. It also houses test-processor technology with 16 units per card, enabling customers to run tests in parallel, time-synchronized and with high throughput.

Figure 1: The V93000 FVI16 floating power VI source is used primarily in the automotive, industrial, and e-mobility markets. (Source: Charlene Perrin)

The Wave Scale RF, MX, and MX HR channel cards are used on the same platform for multi-site and in-site testing of RF and mixed-signal devices. The cards, which each have different capabilities, bandwidths and application targets, were specifically developed to be adaptable to future device test demands.

The T2000 test platform, with air and liquid dual capability, is also available for many different applications, including IoT/module test solutions, automotive and power-management IC (PMIC) solutions (Figure 2). This single test platform can cover all segments, including mobile-charging technologies, automotive application-specific standard products (ASSPs), and battery monitoring. It features high parallelism and multi-site test technology for measuring devices under test (DUTs). The platform’s benefits, in addition to reducing test costs and time to market, include providing consistent quality and traceability.

Figure 2: The flexible T2000 test platform performs high-volume, parallel testing of a wide range of SoC devices. (Source: Advantest)

Primarily focused on the automotive and consumer markets, Advantest’s SoC pick-and-place handling systems handle fine-pitch devices while applying precise temperatures. The M4841 system features individual thermal accuracy with high reliability, contact force and throughput. It can operate across a wide temperature range, with very low jam rates. The M4872 has active thermal control with a vision alignment option and fast temperature boost. It also has high contact accuracy and high-power dissipation, to help optimize yield. This system provides failure detection for applications that demand the highest quality.

As technologies evolve into more demanding and complex systems with higher performance capabilities, the future of semiconductor testers will require ongoing development. Advantest is one company that intends to grow along with these and other future innovations, adhering to its strategy of keeping test costs low while delivering high-quality, reliable testing solutions.

Parallelism Reduces Cost of Test for IoT, 4G, 5G, and Beyond

By Dieter Ohnesorge, Product Manager for RF Solutions

Introduction

The proliferation of the Internet of Things and the move from 4G to 5G are bringing about pressing test problems. The challenges will increase as billions of IoT devices incorporate GPS, Bluetooth, WLAN, NB-IoT, LTE-A, LTE-M, and other connectivity technologies and as smartphones begin connecting with 5G networks. Applications extend from M2M communications to fixed and mobile wireless access in smart cities. Venues for deployment will extend from factories and vehicles to stadiums and airports.

Transceiver chips for such applications include an increasing number of bands and RF ports carrying high-quality signals. The result can be longer test times leading to increasing cost of test.

Parallel test flow

Many transceivers have architectures that support testing paths and bands in parallel to reduce the cost of test with test techniques that are closer to mission mode, with the test mimicking real-life operation. For example, while you are making a phone call in your car, your smartphone is connected to a cell tower but also to your hands-free Bluetooth connection. You are also likely navigating by GPS and may use a WLAN connection to download a video for your kids to watch. All these functions are taking place in parallel, and an effective production-test strategy should come as close as possible to applying these mission-mode parallel operations.

Traditional “serial” test-flow techniques, based on a fanout RF architecture with shared stimulus and measurement resources, cannot cost-effectively test complex devices. For an LTE-A transceiver with carrier aggregation, a serial test approach would need to test the various uplink and downlink channels sequentially in a series of RF stimulus and baseband measurement operations followed by baseband stimulus and RF measurement operations—leading to long test times.

An alternative is the parallel test flow, enabled by an architecture incorporating independent RF subsystems with truly parallel stimulus and measurement ports. A parallel test flow can speed the test of multiple ports in a single device and can also support multisite test.
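The scheduling difference between the two flows can be illustrated with a small sketch: the same per-port measurement is run one port at a time and then on all ports concurrently, mimicking independent stimulus/measurement subsystems. The port names and timings are made up, and this is only an analogy for the resource architecture, not actual ATE code.

```python
# Minimal sketch contrasting a serial test flow with a parallel,
# "mission-mode" flow in which independent stimulus/measure resources
# exercise several RF ports at once. Port names and times are made up.
import time
from concurrent.futures import ThreadPoolExecutor

PORTS = ["LTE_UL", "LTE_DL", "WLAN", "GPS"]

def measure(port):
    """Stand-in for one stimulus + measurement sequence on one RF port."""
    time.sleep(0.05)            # pretend each measurement takes 50 ms
    return f"{port}: pass"

# Serial flow: shared resources force one port at a time
t0 = time.perf_counter()
serial_results = [measure(p) for p in PORTS]
t_serial = time.perf_counter() - t0

# Parallel flow: independent subsystems measure all ports concurrently
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(PORTS)) as pool:
    parallel_results = list(pool.map(measure, PORTS))
t_parallel = time.perf_counter() - t0

print(f"serial: {t_serial:.2f} s, parallel: {t_parallel:.2f} s "
      f"({100 * (1 - t_parallel / t_serial):.0f}% test-time reduction)")
```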

WSRF LTE-A/RF combo device test example

The parallel test technique is enabled by instruments such as the V93000 Wave Scale RF (WSRF) card, which offers test-processor-based synchronization and parallel mission-mode test capability. WSRF can test multiple transceiver channels in parallel, thereby improving multisite efficiency (MSE) and significantly reducing test time.

The WSRF includes four independent RF subsystems on each card, with 32 truly parallel stimulus and measurement RF ports per card. Each RF subsystem includes an embedded arbitrary waveform generator and digitizer. The WSRF supports 16x multisite test with native ATE resources and includes embedded RF calibration standards.

For less demanding IoT applications, the WSRF scales down to a single RF subsystem – one-fourth of a card – for use in an A-Class V93000 system, enabling quad-site testing. At the other end of the spectrum, you may need four WSRF cards to cover the different needs of both sub-6-GHz and mmWave frequencies.

A concept study involving an LTE-A RF transceiver/RF combo device with 802.11ac support and a 3G/4G front-end module showed that the WSRF resulted in test-time improvements of up to 50% as compared with the PSRF, the predecessor to the WSRF.

Figure 1 (not to scale) depicts receive-channel, transmit-channel, and other tests performed serially (top) and the same tests using a mission-mode parallel technique (bottom). Parallel mission-mode test coupled with test-processor-based synchronization can provide a 40% to 60% test-time reduction. Figure 2 provides specific test-time-reduction values for testing parameters such as gain and EVM in single- and quad-site formats, showing MSE and test-time improvement. The results are based on similar setups and sample rates, with identical test patterns.

Figure 1. A serial test technique (top) cannot cost-effectively test complex devices, whereas a parallel mission-mode test (bottom) can result in a 50% test-time reduction.

 

Figure 2. This overview shows multisite efficiencies (MSE) and test-time improvements for parallel vs. serial receiver tests.
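For readers unfamiliar with the MSE metric, a commonly used definition compares the N-site test time against the single-site time; the small sketch below computes it, with placeholder numbers rather than the values behind Figure 2.

```python
# Sketch of a commonly used multisite-efficiency (MSE) calculation.
# The times below are illustrative placeholders, not Figure 2 data.
def multisite_efficiency(t_single, t_multi, sites):
    """MSE = 1 - (T_N - T_1) / ((N - 1) * T_1), expressed as a percentage."""
    return 100.0 * (1.0 - (t_multi - t_single) / ((sites - 1) * t_single))

t1, t4 = 1.00, 1.10   # seconds: single-site vs. quad-site run of the same tests
print(f"MSE = {multisite_efficiency(t1, t4, 4):.1f}%")            # ~96.7%
print(f"throughput gain = {4 * t1 / t4:.2f}x vs. serial single-site")
```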


Testing 802.11ax

Test of 802.11ax devices offers another example of the benefits of parallel test flow. The successor to 802.11ac, 802.11ax offers an expected fourfold increase in user throughput. Designed to improve overall spectral efficiency in dense deployment scenarios, 802.11ax incorporates multiuser MIMO on the downlink and uplink. It operates in both the 2.4-GHz and 5-GHz ISM bands.

These characteristics impose significant ATE challenges. Multiuser MIMO places more demands on RF/analog resources, resulting in longer test times. ATE RF and baseband instruments (AWGs and digitizers) must accommodate the standard’s 160-MHz bandwidth, and the 1024 QAM modulation scheme demands improved phase noise and linearity.

An eight-site test of an 802.11ax transceiver operating in the 5-GHz band with 4×4 MIMO demonstrates how Wave Scale technology and SmarTest 8 software can test over 4,000 test items, including transmitter, receiver, power-detection, DC, and functional test parameters. The Wave Scale technology includes the Wave Scale RF plus the Wave Scale MX, which includes 16 AWGs, 16 digitizers, 64 PMUs, a hardware sequencer, a real-time signal-processing unit, and a large waveform memory.

Complementing the Wave Scale cards, SmarTest 8 protocol-aware software works directly with user-defined register files and generates a protocol-aware sequence using device-setup APIs with no additional conversion required. The software supports the easy-to-implement, flexible A-Class parallel programming required for concurrent testing. An automated bursting capability works with any type of test, including DC, RF, and digital, and runs as fast as tests based on flat patterns, eliminating the need for manual test-time-reduction efforts, thereby providing an early throughput advantage.

In the 802.11ax example, the Wave Scale instruments powered by SmarTest 8 can test four transmitters concurrently in about 23 ms, vs. 80 ms for a serial-measurement approach, resulting in a test-time reduction of about 70%.

Moving to 5G

5G chips are appearing on the market and can be expected to find their way into 5G handsets and infrastructure equipment in the coming months as 5G deployments roll out. Such devices will increasingly need to rely on parallel test flows to handle the complexities of 5G while continuing to provide backwards compatibility with 3G and 4G technologies, and as they continue to support WLAN, GPS, ZigBee, Bluetooth, and various IoT connectivity applications.

With respect to 5G, new smartphones and other devices will achieve high peak speeds, and 5G will rely heavily on eMBB (enhanced mobile broadband). eMBB will provide not only improved data rates but also broadband everywhere, including in vehicles, extending even to high-speed trains. Coupled with carrier aggregation, eMBB provides a further example of the benefit of having a parallel test flow that goes hand in hand with test-time reduction and lower cost of test (COT).

The Wave Scale cards, available now, stand ready to help customers keep pace with the parallel test demands of current and next-generation semiconductor devices.
