Speedier, More Accurate Testing of Automotive Sensors Is Here

By Zain Abadin, Director, Handler Product Engineering and Marketing, Advantest

The amount of electronic content in automobiles continues to grow at a brisk pace, and sensors represent a significant percentage of cars’ electronics. MarketsandMarkets estimates that the automotive sensors market alone will reach US$36.42 billion in value by 2023, at a compound annual growth rate (CAGR) of 6.7 percent between 2017 and 2023. 

Sensors in cars are used to monitor and control a host of functions. Pressure sensors are growing at nearly the same rate as the overall automotive sensor market: Technavio reports that the global automotive pressure sensors market is anticipated to post a CAGR of more than 6% between 2017 and 2021, driven by growing demand for fuel efficiency, safety, and reduced emissions. Many different types of pressure sensors exist, with varying requirements as to the pressure levels at which they operate.

One key subset of pressure sensors – exhaust sensors and side and center airbag sensors – operates at very low pressures. The thresholds at which these sensors should activate and deploy are well below 1 megapascal (MPa), and as low as 600 kilopascals (0.6 MPa). Because their ability to deploy when needed is absolutely critical to passengers’ health and safety, it’s imperative that they be accurately tested to ensure their functionality before the vehicle into which they’re integrated is purchased and used.

The traditional test flow for these sensors is performed at the wafer level: logic and DC tests are conducted on the sensor ASIC first, and DC test is then performed separately on the sensor element, i.e., the part of the device that actually triggers deployment. Normally these tests are performed separately, and the ASIC and element are then tested again as a unit, with manual handling used to move the sensors between test steps. The flow includes a range of temperature tests, which are essential to ensure desired functionality whether the car is being driven in Palm Springs in the summer or Minnesota in the winter. The multiple steps and manual handling associated with the typical test approach increase test time and cost, and can delay time to market (TTM) for the carmaker.

New solution eases pressure on users

Advantest proposes a new approach, combining a stimulus test cell with an automated handler, creating a module that can accommodate trimming, temperature, logic and DC test all in one unit. Once these are complete, all that is left is to install the module and perform a quick production test to ensure the module is installed properly. This solution allows the user to omit several individual tests and perform the necessary tests in one solution, all at the same time.

Figure 1 illustrates the difference between the current methodology and the Advantest solution, which combines a test handler and EVA100 measurement system with an HA7200/7300 temperature and pressure stimulus unit. Together, they create a compact and easy-to-use production-volume test environment.

Figure 2 shows the test cell setup, which is basically the same regardless of the desired pressure unit. The HA7200 can measure absolute pressure on up to four devices under test (DUTs), which is the ideal choice for airbag sensors. The HA7300 is designed for testing differential pressure sensors (e.g., exhaust sensors), whose use is becoming more pervasive as vehicle designs continue to focus on improving fuel economy and reducing hazardous emissions. The HA7300 enables accurate application of two separate pressures within a short time, using two ports, and can test up to eight DUTs. The setup is flexible, similar to a rack, so the pressure modules can be easily swapped out to test both types of pressure sensors. Figure 2 also includes some of the key specs associated with the setup – notably, the wide range of temperatures and pressures that can be accommodated, and the associated high degree of accuracy that can be attained.

Two major benefits of the Advantest stimulus solution are its ability to perform temperature and pressure test simultaneously, and to minimize the stability time for both. This is due to the use of dual fluid active thermal control (DF ATC), which works together with the conduction employed in the unit to maintain device temperature.

In the automotive market, testing and specifications are highly restrictive. Because public safety is paramount, the test levels are fixed and then performed over and over. There is no margin for error, which means there is no sampling; every device must be tested. This has led test houses to create their own custom solutions, which are costly. As the left side of Figure 3 shows, the in-house solution requires four setups, necessitating a very large footprint. In addition, multiple operators are needed due to manual handling, which is a drain on time and resources.

Devices are trimmed first at high temperature (HT) and high pressure, next at room temperature (RT); they are then tested at low temperature (LT), after which they are brought back up to HT and tested again prior to output. At each step, the device is heated or cooled to the desired temperature and then brought to the tester – but during transport, the device temperature can shift by several degrees before the test is performed.

The right side of Figure 3 shows the process enabled by the Advantest test cell system. The setup includes the same number of tests, but because it is based on a handler, the DF ATC technology and a pressure sensor module with much smaller chambers, the system footprint is considerably smaller than that of the in-house approach. Also, because the test cell uses conduction rather than convection, the device is always in contact with the thermal source, ensuring the desired temperature is accurately maintained – simultaneously with the pressure. With this approach, system cost is cut by about half, power consumption is reduced by 25 percent, and operator resources are used much more efficiently.

Looking ahead – to the current sensor

Another automotive sensor challenge for which a new test approach will soon be needed relates to the current sensors used in electric vehicle (EV) batteries and motors. New batteries and motors will be much larger, and the current needed to test these sensors may exceed 1,000 amps (A), while accommodating the requisite wide ranges of temperature within an acceptable guard band.

Testing such current sensors under a 1,000 A application condition requires extensive heat-dissipation and safety measures, which in turn demand very large test and stimulus equipment. A more practical method is therefore to apply a magnetic field at the module level instead of applying a current at the unit level, which allows much smaller equipment. However, applying magnetic flux uniformly while maintaining temperature remains a major technical challenge.

Together, these challenges have created major hurdles that the test industry needs to address. Thanks to increased regulation, demand for electric vehicles is on the rise – Technavio anticipates a CAGR of 42% for current sensors, with the market reaching $87 million by 2021. Meeting this demand will require better and faster testing of current sensors than is being done today. Advantest is leveraging its expertise in sensor testing to investigate new advanced solutions. We look forward to sharing the results of these efforts with you in the near future.

 

Preparing Solid-State Drives for Qualification Testing

By Vishal Devadiya, R&D Applications Engineer, Advantest

The market for solid-state drives (SSDs) remains strong. International Data Corp. (IDC) recently released figures forecasting a five-year compound annual growth rate (CAGR) of 15.1 percent in worldwide SSD unit shipments, with SSD industry revenue expected to reach $33.6 billion in 2021. With SSD usage growing in PCs, consumer electronics and other applications, qualification testing has become increasingly critical, as has finding ways to make the process faster and less costly so that SSDs can be brought to market more quickly.

Qualification testing, in essence, is a formally defined series of tests for evaluating a component or system to ensure its functionality, robustness and reliability prior to final approval and acceptance for release to production. Three types of qualification tests must be performed on SSDs before they enter the manufacturing phase:

  1. Engineering verification test (EVT) and
  2. Design verification test (DVT), both of which are run on a number of samples to check an SSD’s functionality, typically taking one to two weeks; and
  3. Reliability demonstration test (RDT), which is run on every device (not just samples) to check each SSD’s reliability and data integrity. RDT is run for a minimum of 1,000 hours and involves thousands of drives.

What is required to prepare an SSD for qualification testing? It is essential to make sure there are no functionality issues with the drive – most importantly, that it powers up correctly, and then that it works as expected in terms of running input/output (I/O) operations. If any issues arise, finding and fixing the root cause must be achieved as quickly as possible to avoid time-to-market (TTM) delays.

Several key issues can arise during the preparation process. Power-up failure, the most serious, typically happens because of a link training issue. This problem generally applies to PCIe drives because the PCIe protocol is quite complex with different layers in the architecture. Another issue is link retrain/drop. In this instance, the system may power up properly, but essentially becomes stuck in a non-ready loop shortly thereafter. A third type of problem is failure during I/O operations, which comprises three types of failures: write, read or data compare (write/read don’t match).

If one of these issues is discovered during preparation, the problem must be debugged. Traditional debugging methods are less than satisfactory. One way is to perform analysis on the available logs from the host and the drive, but the logs provide few details useful for analysis. The more typical approach is to use a protocol analyzer (PA) to capture the bus trace and analyze link issues (see Figure 1).

Figure 1. A PCIe analyzer on an engineering tester

But using a PA for this purpose has its own challenges:

  • The issue may not occur on a fixed slot number on the tester. If an issue occurs on a particular device under test (DUT) during DVT, the problem can be captured only if it is reproducible and consistently appears on that DUT slot.
  • If this does not work, it may be necessary to connect multiple PAs to avoid having to keep moving the PA from slot to slot. This creates a huge time sink and adds cost.
  • The large interposer required to connect the PA to the tester may temporarily change the signal properties, which can mask the issue from the tester and prevent its discovery.
  • Ongoing DVT testing on other DUTs cannot be interrupted or stopped in order to debug. EVT takes a week and RDT requires at least 1,000 hours. If an issue occurs within these time periods and a device in a specific slot experiences a failure, testing on all devices must be stopped so that the PA can be connected to that specific slot and then started up again following a period of downtime.
  • Thus, it becomes necessary to reproduce the issue. If there are insufficient or no data logs and a protocol trace must be captured, the test must be rerun. If it is not consistent, reproduction can be difficult, if not impossible. If a failure that happened at 120 hours initially does not happen again, the cause cannot be determined.
  • Additional considerations arise if the test is running under a thermal environment. Some SSD manufacturers run devices at a high temperature during RDT; if an issue arises, there is no way to connect a PA.

The bottom-line impact of these challenges is that it takes longer to identify the issue, resulting in delayed TTM and loss of revenue. One solution is to use the traffic capture tool created by Advantest and available as an add-on to the proven MPT3000 platform for system-level testing of SSDs.

The traffic capture tool enables transaction layer packet capture and link training/status state machine (LTSSM) capture, both of which are critical for debugging, as the following example illustrates. The tool also captures submission and completion queue information for each command and performs a command log dump to assess the number of commands issued and completed. Essentially, the traffic capture tool captures whatever is happening on the bus between the FPGA-based test system and the DUTs.

The following figures illustrate how the traffic capture tool detects a power-up failure. In Figure 2, the link is good, but there is an error on the last line of code, indicating that the block device is not present. This means the device did not get ready within 120 seconds and thus timed out.

Figure 2. The drive linked up successfully, but did not get ready within the specified timeout.

Figure 3. The highlighted lines of code indicate that the SSD never got ready.

In Figure 3, the transaction layer packets (TLP) capture screens indicate that the device kept repolling and returning a value of 0 until hitting the 120-second mark. This means the device did not get ready (CSTS.RDY) and experienced a power-up failure. Once the failure is correctly identified, the information is relayed to the SSD manufacturer, whose challenge is to determine why the failure occurred.
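For readers unfamiliar with the handshake the trace exposes: per the NVMe specification, after the host enables the controller it polls the RDY bit (bit 0) of the Controller Status register (CSTS) until the bit is set or a timeout expires. The minimal Python sketch below illustrates that polling logic only; `read_csts` is a hypothetical callback standing in for the tester’s access to the memory-mapped CSTS register, and the 120-second timeout mirrors the behavior shown in Figure 2.

```python
import time

CSTS_RDY_MASK = 0x1  # bit 0 of the NVMe Controller Status (CSTS) register

def wait_for_ready(read_csts, timeout_s=120.0, poll_interval_s=0.01):
    """Poll CSTS.RDY until the controller reports ready or the timeout expires.

    read_csts: hypothetical callback returning the 32-bit CSTS register value.
    Returns the elapsed seconds on success; raises TimeoutError on the kind of
    power-up failure described above.
    """
    start = time.monotonic()
    while True:
        if read_csts() & CSTS_RDY_MASK:
            return time.monotonic() - start
        if time.monotonic() - start > timeout_s:
            raise TimeoutError("CSTS.RDY never set within timeout: power-up failure")
        time.sleep(poll_interval_s)
```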

When selected as an option, Advantest’s traffic capture tool runs continually in the background on the MPT3000 platform – essentially as an in-line process, capturing data that may be needed to rerun a test or reproduce an issue. Using the traffic capture tool on the tester allows the user to:

  • Run tests on all slots at the same time and capture the information required to debug issues;
  • Capture the traffic log at the time of the failure without having to reproduce the issue; and
  • Change the amount of logic in the design to capture more information if required. Because the test system is FPGA-based, it is easy to adjust the amount of logic for data capture.

The bottom-line benefit is earlier identification and resolution of device issues, resulting in the faster TTM that device makers require to keep pace with continuing market growth.

 

SmartShell – A Unique Software Interface for Design and Production

By Shu Li, Business Development Manager, Advantest America and Michael Braun, Product Manager, Advantest Europe

Before device test can take place on automated test equipment (ATE), device-specific test programs need to be developed for the target device and test system. As part of this process, a large amount of digital test content (patterns) gets translated from EDA (design/simulation) to ATE (test) format and needs to be debugged and characterized on the target tester.

In the mixed-signal (MX) and radio-frequency (RF) domain, scripts in various languages (tcl, Python, LabView, etc.) are often used for device bring-up and characterization on bench instruments, using early device samples on an evaluation board, either before ATE test program development starts or sometimes in parallel.

These often interactive scripts are not natively applicable to the production test system, so ATE users have developed proprietary solutions to bridge the gap between ‘bench type’ engineering test and production test environments. This enables leveraging some of the early device learnings for volume testing, or simply running the same test scripts in the two very different environments.

Both digital pattern validation and MX/RF script execution or conversion to ATE have potential for improvement and standardization, which will benefit both time-to-market (TTM) and time-to-quality (TTQ). This article will provide further details for both areas.

Digital (DFT) pattern bring-up and validation

Test patterns for scan, built-in self-test (BIST), functional, or other digital tests are typically created by design or DFT engineers in their design/simulation (EDA) environment and then handed over to the test department, where they are converted to the native ATE pattern format and integrated into the production test program. As part of this process, all patterns need to be validated and characterized on the tester, to make sure that they work as intended and have enough margin to guarantee a stable production test.

This pattern bring-up and validation process can be very time consuming because initial pattern generation and bring-up/validation is typically done in two very different environments: design/DFT/simulation versus test engineering. The design or DFT engineer creates the test patterns, but it is the test engineer’s responsibility to convert and run them against the actual silicon. If they don’t work, the test engineer will produce a log file with failing cycles for the pattern at hand and send it to the designer, whose task is then to identify the root cause of the failures in the simulation environment and to re-generate a corrected test pattern as needed. The corrected pattern needs to be translated and validated on the tester again, going back and forth between design and test. Often, design/DFT and test engineering are isolated from each other, in two different locations, communicating by email or FTP. The test engineer will thus notify the DFT engineer of discovered errors, but the latter may not get around to re-simulating the test patterns immediately. As a result, the test development process will incur some delays. The majority of patterns may pass, but some tricky ones can take months of re-spins, which will not help with getting working products to market quickly. This traditionally manual process – offline pattern generation, conversion and download, then emailing feedback about errors – is painful and time consuming (Figure 1).

If there were a way to execute and validate the generated patterns directly from the DFT/simulation environment without going through the full circle of pattern translation and fail cycle collection for every minor change, it would benefit all parties involved and reduce the pattern bring-up cycle time.

 

Figure 1. The debugging process involves lengthy communication between design and test, requires significant learning, and is prone to errors, leading to lengthy cycle times.

 

Scripts for mixed-signal/RF ‘bench instrument’ test on ATE

Mixed-signal and RF testing involves, besides some digital resources to set up and control the device, additional analog and RF instrumentation. In a lab environment, these resources are benchtop instruments such as oscilloscopes, spectrum analyzers, waveform generators and other tools. On the bench, each test requires specific control scripts for both the device and the various lab instruments involved. On the ATE system, fully integrated hardware instruments are used and controlled by standardized software components that are part of a generic test program.

Bench instruments often have higher precision for specific tasks, but they are not as universal as ATE resources and cannot come close to the throughput ATE can deliver. For volume data collection in characterization, significant effort is required to collect data from many devices in a reasonable amount of time. Leveraging ATE for tasks normally done in the lab/bench environment speeds up this data collection significantly and helps smooth the transition between design/bench and ATE. In this context, a solution that allows moving back and forth seamlessly between the lab/bench environment and the ATE – without converting bench-type scripts into ATE ‘native’ test programs – would be very helpful. Running the exact same scripts on the bench AND on the ATE system improves correlation and TTM, while leveraging knowledge from both environments.

Figure 2. Time to market is a major issue when dealing with scripting for mixed-signal/RF devices. Producing a working customer sample can take 9-12 months, depending on chip size, type, etc.

Building a unified interface to bridge between design and test

What’s needed to address these challenges is an easy-to-use client/server environment that simplifies the communication between design and test to enable smart debugging. Advantest has developed a software option for its V93000 system-on-chip (SoC) test system that provides such a solution.

The newly developed SmartShell is a software environment for digital pattern validation and native script execution on ATE. The interface links directly between the DFT/bench environment and the V93000 tester, without the need to convert patterns and scripts to the tester’s ‘native’ data format. This allows fast pattern bring-up and characterization, enabling DFT engineers to validate their patterns faster and designs to be characterized more efficiently before they are released to production on the V93000 system. The block diagram in Figure 3 illustrates the dataflow process.

Figure 3. SmartShell data flow, from pattern/script generation to ATE and back.

With this new tool, porting different test content becomes easier and more straightforward, giving designers the freedom to incorporate various tasks into their test program without having to think about how to port them to an ATE system. The tests that work best for the device being developed will be converted when it comes time for manufacturing.

Engineers in both design and test can use the tool. The DFT engineer can run a simple script instructing the tool to check a new pattern or to loop over a number of patterns while varying conditions like voltage or frequency. He or she can access the results directly from their environment, without having to learn the native formats and software environment of the test system. The test engineer can run scripts originally generated for a totally different environment, and then quickly compare ATE results with results from the bench instrumentation. The command interface controls functionality and execution, and allows the results to be viewed in the engineer’s preferred format (see Figure 4).

Figure 4. The software package features an interface that is easy to use for design and test engineers alike.
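To give a feel for the script-driven workflow described above, here is a short Python sketch. The `ShellClient` class and its method names are illustrative stand-ins, not SmartShell’s actual command set; the point is the shape of the loop – load a pattern, sweep a condition such as supply voltage, execute, and collect pass/fail results without leaving the scripting environment.

```python
class ShellClient:
    """Stand-in for a tester shell session; method names are hypothetical."""
    def load_pattern(self, name):
        print(f"load {name}")             # would translate and load the pattern
    def set_level(self, rail, volts):
        print(f"set {rail} = {volts} V")  # would program a tester supply
    def run_pattern(self, name):
        print(f"run {name}")              # would execute and return pass/fail
        return True

shell = ShellClient()
results = {}
for pattern in ["scan_stuck_at", "mbist_all"]:      # example pattern names
    shell.load_pattern(pattern)
    for vdd in [0.72, 0.80, 0.88]:                  # shmoo-style supply sweep
        shell.set_level("VDD_CORE", vdd)
        results[(pattern, vdd)] = shell.run_pattern(pattern)

for (pattern, vdd), passed in sorted(results.items()):
    print(f"{pattern} @ {vdd:.2f} V: {'PASS' if passed else 'FAIL'}")
```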

SmartShell’s key capabilities include:

  • On-the-fly control of tester resources for digital, mixed-signal, RF and DC measurements
  • Fast internal pattern conversion, execution, and back-propagation of results
  • Ease of programming using any command-based script language
  • Accommodates customized script language using a bridge to its standard set of commands
  • Auto-recording/generation of setups for early production to ensure reusability
  • Compatible with SmarTest 7 (DFT/pattern validation only) and SmarTest 8 (Scripting)

Summary

SmartShell represents a solution to bridge the gap between design and test, delivering capabilities for pattern validation and script execution that are beneficial regardless of company size or device type. Early validation can be done in a well-contained design or bench environment, without the need to ‘learn the tester.’ The highly programmable SmartShell interface for the V93000 allows experts to best utilize their individual skillsets to debug devices effectively and efficiently in a highly integrated manner. The tool significantly shortens the turnaround times for high-quality test patterns and scripts, enabling device makers to achieve both faster TTM and lower overall cost of test.

Inline Contact Resistance Solves Wafer Probing Challenges

By Dave Armstrong, Director of Business Development, Advantest America, Inc.

Testing is increasingly being conducted at the wafer level, as well as at lower voltage levels, necessitating even greater test accuracy. Achieving this accuracy is often hampered by poor contact resistance (Cres). In this environment, conventional continuity tests – in which current is applied in parallel to all pins and per-pin diode voltage is measured to verify continuity between the tester and the internal die – are not adequate for identifying yield-limiting contact problems. Moreover, they have no value in determining potential issues with probe card degradation over time.

Probe cards often are subjected to offline electrical contact resistance measurements, but contact resistance grows as the probes become dirty. As wafer probing progresses, if one pin starts to make poor contact, the situation will worsen, resulting in a large quantity of potentially good parts being rejected. To prevent accumulated residue from affecting probe and test quality, inline contact resistance measurement is becoming essential. Similar to the shift toward inline process control that became industry-standard in the early 2000s, measuring contact resistance inline – unlike continuity testing – will serve to mitigate the problem of dirty or damaged probes.

Key industry trends

Moving to inline contact resistance is critical for a number of reasons. First, systems-on-chip (SoCs) face a daunting roadmap for wafer probing, as Figure 1 indicates.

Figure 1. The International Technology Roadmap for Semiconductors (ITRS) spotlights significant challenges for wafer probing through 2020.
*International Technology Roadmap for Semiconductors 2015
**Full Wafer Contact

The data shown in the table for 2016 – 66,000 individual probes making contact with one die – was fairly conservative. Today, the industry is exceeding those numbers, which were published in 2015, by more than 150%. Every probe must be clean and accurate, and therein lies the challenge. Sometimes, the same die is being probed three to four times – perhaps at a different temperature each time – and each probe kicks up oxide “dirt.” In addition, dirt accumulates as the probe moves across the wafer (through the lot) since the same probe is used for each die on the wafer. It’s a problem of numbers, repeatability and trends for a series of wafers and a series of die.

Further complicating this, engineers are now doing more than final test at wafer probe, e.g., at-speed testing and Known-Good-Die (KGD) testing. Getting to KGD is, of course, the ultimate goal: engineers want to touch down on the die and know that everything works before packaging. This is not easy to achieve. More types of testing, such as high- and low-temperature tests, must be performed – all of which will be challenged by poor contact resistance.

Moving to contact resistance measurements is also critical because of the need to accommodate probe planarity issues. In the environment of the ATE, the probe needs to be held in as planar a manner as possible while, at the same time, a force of 200kg or more is pushing down on the probes. This pressure will cause the probe assembly to bow, making true planarity difficult to maintain. The probe micrograph in Figure 2 provides an example of this problem. The contact resistance in the center of the die was much higher than at the periphery, and much lower at the southern edge. The culprit: bowing up in the center of the die, which created a balloon effect that caused the center and part of the northeast portion to be bad. While a reading of anything less than 8 ohms is considered acceptable, high-power probing requires less than 4 ohms, making Cres requirements much tighter in high-power and extreme-temperature environments.

Figure 2. This example contact resistance (Cres) plot clearly shows bowing in the center of the probe.

Contact resistance measurement process

The diagram in Figure 3 represents circuits typically found in a variety of devices, each of which can benefit from inline Cres measurement. While many readers will be familiar with these diagrams, here is a brief summary of each type: 3a is a traditional I/O circuit with two diodes, one connected to a power supply and one to ground; 3b is similar, with a resistor added to each diode in series; 3c consists of one large diode with a resistor in series; 3d contains a single diode going to ground only and not a power supply; 3e is the opposite, with the diode going to the power supply and not to ground; and 3f represents the class of interfaces known as SerDes, used in high-speed communication. These circuits are unknowns in many respects – they are very different from any other type of circuit and by far the most difficult to assess for contact resistance.

Figure 3. Each type of circuit illustrated in this black box diagram can benefit from inline Cres measurement.

Inline contact resistance measurements can be performed in a variety of ways. Using standard digital pin parametric measurement unit (PPMU) resources enables tracking changes to contact resistance – either over time or positionally. To measure the contact resistance of I/O pins, the engineer basically forces currents, measures voltages, and then performs calculations to determine Cres.

Conceptually, Cres is calculated from two forced currents $I_1 < I_2$ and the corresponding measured voltages $V_1$ and $V_2$:

$$C_{res} = \frac{(V_2 - V_1) - \Delta V_{diode}}{I_2 - I_1}$$

where the change in diode voltage, $\Delta V_{diode}$, can be calculated by looking at the diode equation:

$$I = I_S\left(e^{\,qV_{diode}/\eta kT} - 1\right)$$

which, in forward bias, can be reordered to calculate the change in diode voltage between the two current levels:

$$\Delta V_{diode} = \frac{\eta kT}{q}\,\ln\frac{I_2}{I_1}$$

The challenge with this equation is what value to use for the diode ideality factor η. This value is not a constant; it varies with technology, process, and transistor geometry. All the other values are known, e.g., q = electron charge, k = Boltzmann constant, T = die temperature. Because, as noted above, different device pins have different (or no) diodes, the diode configuration becomes critical when trying to determine the value of η. Since diodes may exist to ground, to the supply, or to both, either positive or negative currents/voltages may need to be used in order to obtain valid Cres measurements.

The curves for each pin type also need to be individually analyzed to solve for diode ideality η. Once determined, ideality values don’t seem to change for a given process and design. However, ideality values can vary broadly – the pins shown on the graph in Figure 4 have idealities between +60 and -2.6. Many different pins are superimposed on top of each other, all showing very different performance. The key point to note here is that, when the correct value is determined for η, Cres doesn’t change with different current levels.

Figure 4. By looking at this plot, the test engineer can determine which type of ESD protection circuit is being employed, and use portions of the plot to calculate η.

 

Determining values for η

The process for determining η involves the following steps:

  1. Force different currents into and out of all DUT pins and measure the voltage. For the purpose of this article, the currents selected were ±20mA, ±10mA, ±5mA, ±2mA, ±1mA and ±0.5mA.
  2. Select three positive or three negative measurements, and calculate the ideality by eliminating Cres between the two measurement pairs:

$$\eta = \frac{q}{kT}\cdot\frac{\dfrac{V_2 - V_1}{I_2 - I_1} - \dfrac{V_3 - V_2}{I_3 - I_2}}{\dfrac{\ln(I_2/I_1)}{I_2 - I_1} - \dfrac{\ln(I_3/I_2)}{I_3 - I_2}}$$
  3. Try different current values in the equation to check if the ideality stays relatively constant. With the right value for η the result will change very little. Also, all pins with a similar I/O buffer design will have the same η value.
  4. Perform a final check of the ideality selected by using the value in the Cres equation. The resistance value should be positive and will not change with different current levels.

At this point, production measurements of Cres can be performed by simply making two current-force/voltage-measure operations (of the same polarity that was used for calculating η) and then performing the following calculation for Cres:

$$C_{res} = \frac{(V_2 - V_1) - \dfrac{\eta kT}{q}\ln\dfrac{I_2}{I_1}}{I_2 - I_1}$$
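To make the procedure concrete, below is a minimal Python sketch of steps 1–4 plus the production calculation, assuming the two-terminal model used above (measured voltage = I·Cres plus the diode voltage). The function names and the sample current/voltage values are illustrative, not taken from the article.

```python
import math

K = 1.380649e-23     # Boltzmann constant, J/K
Q = 1.602176634e-19  # electron charge, C

def ideality(points, temp_k=298.15):
    """Solve for eta from three same-polarity (I, V) measurements.

    Differencing pairs of measurements under V = I*Cres + (eta*k*T/q)*ln(I/Is)
    eliminates both Cres and the saturation current Is.
    """
    (i1, v1), (i2, v2), (i3, v3) = points
    d21 = (v2 - v1) / (i2 - i1)
    d32 = (v3 - v2) / (i3 - i2)
    l21 = math.log(i2 / i1) / (i2 - i1)
    l32 = math.log(i3 / i2) / (i3 - i2)
    return (Q / (K * temp_k)) * (d21 - d32) / (l21 - l32)

def cres(i1, v1, i2, v2, eta, temp_k=298.15):
    """Production Cres from two same-polarity force/measure pairs."""
    dv_diode = eta * K * temp_k / Q * math.log(i2 / i1)
    return ((v2 - v1) - dv_diode) / (i2 - i1)

# Illustrative values only: 5, 10 and 20 mA forced into an I/O pin.
pts = [(5e-3, 0.712), (10e-3, 0.745), (20e-3, 0.788)]
eta = ideality(pts)
r = cres(*pts[0], *pts[2], eta)
print(f"eta ~ {eta:.2f}, Cres ~ {r:.2f} ohm")  # ~1.29 and ~2.0 ohm here
```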

Example Cres measurement results are shown in Figure 5. Measurements were taken at two different force levels (140 lbs. and 92 lbs.), and on a pin-by-pin basis, Cres rose by about 2 ohms between them. The orange plot at 69 ohms highlights a failure in the making. Cres should be lower with higher force, so this tells us that the contactor bowed.

Figure 5. This graph provides the distribution of measurement results obtained at two different overdrive levels. Pins with resistors in series with their ESD diode are clearly visible at ~ 33Ω. Those without are at ~9Ω.

Determining Cres of SerDes pins

SerDes pins are difficult to analyze. They often have pre-emphasis and equalization circuits on their inputs and outputs to match the on-die circuitry to the transmission lines, which greatly complicates Cres measurement on these pins.

Figure 6. Changes in termination resistance values can help determine Cres for SerDes pins.

As seen in Figure 6, over part of the I-V curve the slope is about 100 ohms: SerDes pins typically have 100-ohm termination resistors, so the I-V curve shows this termination resistance, not Cres. The good news is that these termination resistance values do change as Cres changes – so the engineer can measure the nominal 100 ohms using traditional Ohm’s-law equations, without the diode voltage adjustment, and then watch to see whether the measured value increases as Cres degrades.

The test methods described so far are all two-terminal inline test methods. It’s important to recognize that two-terminal measurements will inherently include additional resistances in addition to the key Cres value to be determined. This is shown in Figure 7.

Figure 7. Two-terminal contact resistance stray values are compensated for over time.

While the test system is designed to compensate for all the resistances in the grey fields, it cannot compensate for the green resistors. As a consequence, the Cres values measured by these techniques will 1) be higher than normally expected, and 2) vary from pin to pin due to fixture design differences. The best way to deal with this situation is to simply save a baseline set of Cres as measured by these two-terminal techniques and then monitor the difference between the baseline and the Cres values measured over time.
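A minimal sketch of that baseline-delta bookkeeping, with assumed pin names and an assumed 2-ohm drift limit (the article does not prescribe a threshold):

```python
# Baseline two-terminal Cres per pin, in ohms, saved at probe-card install.
baseline = {"pin_A1": 9.2, "pin_A2": 33.1}

def cres_drift(current, baseline, limit_ohms=2.0):
    """Return pins whose measured Cres rose more than limit_ohms above baseline."""
    return {pin: current[pin] - baseline[pin]
            for pin in baseline
            if current[pin] - baseline[pin] > limit_ohms}

# pin_A2 has drifted ~2.9 ohms and would be flagged for cleaning/inspection.
print(cres_drift({"pin_A1": 9.4, "pin_A2": 36.0}, baseline))
```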

Determining Cres of supply pins

A supply’s contact resistance can also be measured using a PPMU and a device power supply (DPS) monitor pin. When connecting the DPS in the test system to the DUT pins, it is becoming common practice to connect one of the DUT supply pins to a digital PMU pin. In addition to providing an enhanced ability to monitor the on-die supply voltage, this approach allows direct measurement of Cres from the digital pin to the DPS signal itself, and thus direct calculation of the average contact resistance of the DPS interconnection. Using the monitor pin, a simple I-V curve is observed (Figure 8), which allows straightforward calculation of Cres. Complicating this for high-power designs is the very large number of supply pins in parallel: while the technique will still measure the average contact resistance, it is less sensitive to changes in Cres at the per-pin level.

Figure 8. I-V curve for supply-pin Cres measurement. The sensitivity of this measurement to single-pin Cres issues drops as a large number of probes are connected in parallel.

Determining Cres of ground pins

This designed-in capability is unique to Advantest. In power-supply modules, a current is forced through a primary path, and then another path is used to sense voltage-out on the device under test (DUT) board. One of the available modules for the V93000 test system is called the UHC4. The UHC4 has a contact resistance monitor circuit built directly into the supply, giving it the unique ability to measure the voltage difference between force and sense right in the instrument (Figure 9).

Measuring a low value of resistance requires a high current (i.e., measurements must be taken during a power-up condition). As a simple example: with the part in an active mode consuming, say, 100 amps, a 1-volt drop between force and sense tells the user that the ground path is exhibiting 10 milliohms of total resistance (1 V ÷ 100 A). A shift in this resistance indicates a ground connection problem. Continually monitoring the module provides a good level of sensitivity to any big issues that arise.

Figure 9. The V93000 UHC4 module is uniquely able to measure voltage difference between force and sense.

High-accuracy Cres measurement

Using the precision DC measurement resources available in the V93000, high-accuracy Cres measurements can be made using thermal measurement diodes with four-terminal techniques. Several resources can be used to make these measurements. The results of a Monte-Carlo analysis of the measurement accuracy with the available instruments are provided in the table at right, which clearly shows the benefits afforded by Advantest’s DC-scale AVI64 universal analog pin module over the per-pin PMU of the PS1600.

In summary

The Advantest V93000 is able to measure both inline (2-terminal) and high-accuracy (4-terminal) contact resistance. These measurements will become more critical as the industry moves to higher pin counts, higher power levels and lower voltages. Expanded testing at the wafer probe will also drive this trend, as will extreme-temperature testing, which makes everything more difficult.

Scalable Platform Key to Controlling SSD Test Costs

By Ben Rogel-Favila, Senior Director, System-Level Test, Advantest America

Manufacturers of solid-state drives (SSDs) want to keep test costs under control even as device performance, density, and variety all increase. In addition, the SSD product life cycle is accelerating. Test equipment manufacturers must strike a reasonable balance here.  They must provide more capable but less expensive systems that can cover a wider variety of devices.  One approach is to make the test system modular and scalable, so companies can buy what they need now and add on later as they need more features and more capabilities.  Another approach is to define more parameters and device characteristics in software rather than in hardware, providing more flexibility and allowing upgrades and changes to be made locally without buying an entire new system.  Additionally, the value-add of test equipment is moving beyond traditional “test electronics.” SSDs present a variety of unique test challenges, all of which an SSD test platform must address while keeping test costs in line.

How test affects time to market

Time to market (TTM) is a critical metric in terms of new product success. Manufacturers must hit the market window as early as possible – later entry means realizing less profit. This concept is well understood. So, what role does test play? In the SSD space, a reliability demonstration test (RDT) must be performed to qualify the product. If this step isn’t done correctly, it can affect TTM. To ensure that device testing doesn’t hinder TTM entry, manufacturers must recognize that their device will have some issues – the sooner they’re identified, the greater the chances of achieving optimal TTM.

Several factors can help mitigate these risks. First, it’s essential to employ a thorough set of engineering tools that can help pinpoint more quickly and accurately where any problems lie. Also required is the support of the tester provider. Test products are highly complex, comprising hardware, software and firmware all interacting with one another, so having someone competent in these environments actively supporting efforts to find these problems is key.

Next is test development – a significant undertaking that requires a robust environment able to accommodate the different test stages that a device undergoes during its lifecycle. If all the same tools can be applied at each stage, testing becomes easier and much more efficient. Finally, being able to reuse the test throughout all the various test cycles also saves time, as well as helping minimize the introduction of new test conditions from engineering through production. The bottom line is that test can substantially impact TTM, and if not done correctly, this impact can be negative.

SSD test stages and requirements

An SSD is put through a range of different tests during its lifecycle, each of which has different objectives and different needs. The SSD test lifecycle involves two distinct test stages: the first is focused on engineering/R&D, the second on manufacturing (see Figure 1). In stage 1, once the developer has confirmed the product design is correct, he or she must verify that the design architecture lends itself to a reliable product. Once these steps are completed and the RDT conducted, the design is ready to move into stage 2, high-volume manufacturing (HVM). Following assembly, two tests are typically employed: built-in self-test (BIST) and full-speed (or at-speed) functional test. Being able to utilize a single test solution throughout these stages helps ensure both test process consistency and quality of results.

Figure 1. SSD lifecycle test stages

Figure 2. SSD test requirements include a wide range of variables.

The test requirements for SSDs comprise a wide range of variables that span many different engineering disciplines, as shown in Figure 2:

  • Different protocols – As we have examined in prior articles, SSDs make use of several different protocols, e.g., Non-Volatile Memory Express (NVMe), PCI Express (PCIe), Serial Attached SCSI (SAS) and Serial ATA (SATA), which vary significantly in functionality.
  • Different form factors – Testing chips is essentially the same process regardless of IC shape or size. With SSDs, form factors range from heavy, 8-inch PCIe cards to small, gumstick-sized M.2 devices, which are very fragile; the same system must be able to test each type of SSD.
  • Different test methods – As mentioned above, a robust system is needed to perform both BIST and functional testing.
  • Different speeds – This is a key requirement. System complexity and cost rise exponentially with increased device speed. With SSDs, speeds currently range from 1.5 Gigabits/second (Gbps) to 12 Gbps, with 16 Gbps (for PCIe Gen 4) and 22.5Gbps (for SAS 24) on the horizon.
  • Enterprise vs. consumer – This distinction is important because, at least intuitively, we can assume enterprise has a bigger budget for test given that the price tag for enterprise-level SSDs is about 50x that for consumer-level products.
  • Manual vs. automated – All SSD devices are being tested manually today, but as volumes increase, three things are happening: a) demand is growing at a rate that makes continually adding operators not economically feasible; b) operator error is growing in parallel with the number of operators; c) operator turnover is on the rise, creating a significant problem for manufacturers and pointing up the need to use robots on the line.
  • Different temperatures – Testing a device at ambient temperature is very different from testing it at -10°C. The automotive market is a prime example of this challenge – there has been a huge increase in electronic content, and test temperatures can range from -45°C to +125°C or more in order to ensure vehicle electronics can handle a wide range of climates.

ALL of these requirements have to be addressed – the question is, how? An all-inclusive test cell, a “Rolls Royce” approach, could be developed to do everything needed, but would be extremely costly, and customers would invariably pay for features they didn’t need. At the other end of the spectrum is an application-specific test cell, which is less costly, but also less flexible. Because this type of solution can only do one particular type of test, if an application changes, a new tester would have to be purchased.

Can one SSD test system accommodate all TTM needs, handle both engineering and manufacturing environments, address the wide range of test requirements, and grow with a customer as their test needs evolve?

An SSD test platform is the answer. Comprising a family of components that can be easily mixed and matched to create new products quickly, such a system allows customers to meet their needs exactly, with no scrimping and no waste. Tomorrow, if a faster module is needed to test PCIe Gen 4, the customer only has to purchase that module – the rest of the system (thermal, power supplies, and other constants), equating to 70-80 percent of the components, can be reused. By paying only for what they need, customers can extend the life of the platform to 10-15 years or more.

Scalable, flexible, affordable test solution

Advantest’s offering in this space is its modular MPT3000HVM test platform. Figure 3 illustrates how easily the system can be reconfigured to accommodate a new mix of products with different protocols.

Figure 3. As shown above, a shift in protocol mix for the SSD devices to be tested can be completed in minutes with the SSD test platform approach.

The base unit for the MPT3000HVM, called the primitive, is the secret to the platform’s success. The user starts with this unit, and then adds in components as needed to accommodate specific test demands, e.g., full protocol test or different types of test electronics. The personality of the primitive changes according to what it incorporates. If, for instance, an engineer is using a tester full of primitives doing full functional and needs to switch to BIST, he or she can reconfigure the primitive and continue to use it, simply by changing the modules.

Similarly, different form factors can be easily accommodated. Figure 4 shows several racks of MPT3000HVM primitives running tests on different device sizes. Just by changing the device interface board, any type of SSD – or even chips – can be tested in parallel. Different primitives in the same system can run at the same time with different protocols, form factors and test electronics, providing all the flexibility needed; the system then becomes an accumulation of primitives in a 19-inch rack.

Figure 4. Multiple MPT3000HVM primitives, each testing a different type of SSD, can be run in parallel within the same rack.

Unique capabilities

One crucial characteristic of the platform is its powerful, easy-to-use software, which can be applied across the different test stages. It employs a universal GUI, so the user always sees the same interface whether working on device verification, RDT or HVM. Whether different protocols, form factors, or types of tests are involved, it’s the same software, so engineers need to be trained only once. Because of the software’s capabilities, the same set of tools can be used across all test stages, eliminating the problem of different people having access to different aspects of the software and thus being unable to reproduce issues. Tools in the suite include stylus main, datalog & test flow control; graph characterization; power profile; production operator interface; oven control; and calibration & diagnostics.

ATE is traditionally focused on fault detection – i.e., the chip is tested and, in general, the result is pass or fail. Testing beyond this level is unnecessary with chip test because if the chip doesn’t pass, it’s not repairable, so it’s discarded. With SSDs, we are learning that pass/fail testing is not enough, because the drives can be repaired. Customers now expect not only fault detection but also fault location – they need to know the root cause of the failure and need tools to help them find it. Advantest is developing these tools, which no other test provider currently offers, to enable a wide range of fault location capabilities, incorporating FPGA technology to detect deviations as they happen.

Summary

The SSD industry requires a test platform that optimally addresses shortening time to market, ever-decreasing SSD product lifecycles, and the need to keep test costs under control even as device performance, density and variety all increase. Advantest’s proven MPT3000HVM was created to resolve all these challenges, delivering a flexible, scalable platform that can handle a wide range of device types, speeds, form factors and other variables, simply by swapping out modules to create a configuration that meets the user’s needs.

This also includes handling a variety of protocols. SATA still has high usage for low-end consumer SSD devices, but the world is moving to PCIe, both standalone and as a transport mechanism for NVMe. PCIe Gen 4 is coming soon, as is SAS 24, and the Advantest solution can handle them all, leveraging the company’s long tradition of creating platforms in a system-level offering.

5G Lessons Learned from Automotive Radar Test

By Roger McAleenan, Director, Millimeter-Wave Test Solutions, Advantest America

Situated between microwave and infrared waves, the millimeter-wave spectrum is the band of spectrum between 30 gigahertz (GHz) and 300 GHz. It is used for high-speed wireless communications and is widely considered the means to carry 5G into the future by allocating more bandwidth to deliver faster, higher-quality video and multimedia content and services. Automotive radar is the entry point into millimeter wave for testing purposes.

Automotive radar has been evolving for the past several years, with Tier One companies producing and developing designs for a variety of different applications. As automotive is considered one of the key vertical markets for 5G technology – others include mobile broadband, healthcare wearables, augmented and virtual reality (AR/VR), and smart homes – radar systems in vehicles can provide valuable insight into the other millimeter-wave applications.

The 5G standard promises new levels of speed and capacity for mobile and wireless communications with greatly improved flexibility and latency compared to 3G and 4G/LTE technologies. However, its unique chip structures will create new challenges for test and measurement. By understanding the limits of test equipment, systems and hardware, we can better address the practical aspects associated with delivering on the promise of this technology.

Test and measurement challenges
From a measurement perspective, 5G and auto radar have functional characteristics in common that need to be measured, such as signal blockage, radiation interference and beamwidth selection. Another shared aspect is signal penetration loss, an area where radar has an advantage over optical techniques, which can be confounded by rain or snow. The band assigned to automotive radar, 76-81 GHz, provides greater accuracy in range resolution and is sandwiched between point-to-point (P2P) bands on each side.

The challenges to be addressed in 5G test are similar to those associated with automotive radar, as well. Challenges in millimeter-wave applications include:

  • Handling multiple port devices economically
  • Providing features and testing optimized for characterization and production
  • Over-the-air environment due to packages with integrated antennas
  • High-port-count switching/multiplexing (4×4, 8×8, etc.), often in the same device
  • High levels of device features on a die – MCU + memory + radio + high-speed digital

Multiple antennas improve power efficiency, since more energy is pointed where it needs to be, and with beam steering, multiple targets can be tracked. These improved capabilities allow applications to expand broadly into “surround” safety features and vehicle-to-vehicle coordination/communication. The increased complexity in devices extends to multiple combinations of transmit and receive channels. This functionality will significantly improve vehicle-to-animal/human/object recognition and avoidance, as well as allowing more targets to be tracked simultaneously.

Transceiver design is important; transceivers can be optimized as required as low- or zero-intermediate-frequency (IF) designs. Automotive and 5G radios look nearly the same, with similar IP blocks, e.g., phase shifters, local oscillators, RF amplifiers and mixers (Figure 1). The primary distinction is 5G radios’ modulation capability. Both may include up- and down-conversion, but for 5G, the market is looking for an increase in information bandwidth. This is actually quite difficult from a test perspective because it requires elaborate analog equipment such as high-performance oscilloscopes. This aspect is still a work in progress.

Figure 1. Transceiver design in automotive and 5G systems is highly similar.

Four main millimeter-wave issues and considerations must be addressed in auto radar, and they apply to 5G as well: rain attenuation, the Fresnel zone, path loss, and ground reflection are all problematic, whether you’re driving a car or the equipment is on a tower. Figure 2 shows all the areas in which radar is being used in cars, and further underscores the challenges associated with effective testing of these systems from a system-level perspective.

Figure 2. Radar zones in vehicles continue to multiply as automated content increases.
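To put the path-loss problem in numbers, the standard free-space path loss (FSPL) relation – general RF background, not a figure from this article – is:

$$\mathrm{FSPL\,(dB)} = 20\log_{10}(d_{\mathrm{km}}) + 20\log_{10}(f_{\mathrm{GHz}}) + 92.45$$

At 77 GHz over a 100 m range, this gives $20\log_{10}(0.1) + 20\log_{10}(77) + 92.45 \approx 110$ dB of loss before rain attenuation or ground reflection is even considered, which is why transmit power, antenna gain and receiver noise figure have so little margin at these frequencies.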

One way to address some of the operational millimeter-wave challenges is through beamforming, a technique that focuses the radar transmitter and receiver in a particular direction. Beamforming can be passive or active, although the former is limited in its effectiveness. Active RF beamforming, the increasingly preferred approach, will be game-changing: it enables tracking multiple objects, both moving and static (people, vehicles, buildings, etc.), at various speeds simultaneously. This allows auto radar to actively steer the beam toward objects and track them independently. Because the beam can be positioned in so many ways, how to test all of these positions economically is still an open question, although several automakers are working on solutions. For 5G, the beams would normally point either to other towers or to individual handsets and be able to track them. Basestations will have antenna arrays that can be steered to track people carrying 5G handsets – an essential success factor in delivering the promised information bandwidth.
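The steering arithmetic behind active beamforming helps explain why exhaustive test coverage is so hard (standard phased-array background, not specific to any product): for a uniform linear array with element spacing $d$ and wavelength $\lambda$, pointing the beam at angle $\theta$ off boresight requires a progressive inter-element phase shift of

$$\Delta\phi = \frac{2\pi d}{\lambda}\sin\theta$$

At 77 GHz ($\lambda \approx 3.9$ mm) with half-wavelength spacing, steering to $\theta = 30°$ requires $\Delta\phi = \pi\sin 30° = 90°$ per element; a production test would need to verify phase-shifter accuracy over many such settings and elements, which is why the beam-position test space grows so quickly.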

Test lessons learned
Advantest’s automated test equipment has been deployed for testing automotive radar for more than four years, testing from 18GHz to 81GHz, including wireless gigabit (WiGig) test in the 60GHz range, which may also be applicable to 5G.

At the moment, the focus remains on device test, but this is changing. Millimeter-wave applications provide an ideal opportunity to move away from component-level test and more toward higher-level models and end-to-end system-level testing. Figure 3 highlights the growing trends associated with system-level test. With that noted, here are some key lessons learned from Advantest’s work in the auto radar space, using its proven V93000 test platform.

Figure 3. Demand and opportunity for system-level testing is on the rise.

  1. Power accuracy is critical. This will be very important to understand and address because, as we move closer to built-in self-test (BIST), the device must be able to accurately measure the power it’s generating. Right now, we’re still learning how to get RF CMOS and BIST working together to give an accurate power measurement.
  2. Metrology is difficult. Given the various connectors and waveguides that must be navigated, there are few reliable ways to perform accurate metrology of fixtures, connectors, loadboards, and other components. Also, there is the issue of system degradation – every time a new part is tested, it degrades slightly due to the materials used, and over time, the sockets or membranes that begin to deteriorate. In addition, when something finally needs to be changed out on the test system, recalibration must be performed, and that can cause a slight change in measurement results when combined with the degradation issue.
  3. Limits need to be established. As devices grow more complex and better – and as efforts are made to extend radar range – two key factors come into play:
    • Phase noise – This key parameter of RF signals affects the performance of radio systems in various ways. It’s important to understand at what point phase noise begins to impact performance, and the associated cost-benefit tradeoff.
    • Noise figure – This measure of the degradation of the signal-to-noise ratio, caused by components in an RF signal chain, is essential to making radar more effective. The key question in this regard is: what is the smallest signal that can be seen? (This relates to dynamic range; see the worked example following this list.)
  4. Millimeter “anything” is expensive. Currently, there are significant costs associated with millimeter-wave technology that will likely decline over the next few years. In the meantime, some chipmakers are trying to implement millimeter-wave technology for smaller end products, such as radar distance measuring devices, but they can’t build them because they can’t figure out how to test them economically on a small scale. The solution may rely on future technology that is still being developed.
  5. Test engineering knowledge is scarce. This is perhaps the most critical factor of all – hence, saving it for last. The number of engineers working in millimeter-wave technology is relatively small, and companies wanting to enter the space can’t simply materialize engineers versed in radar technology to help them with product development – particularly when the primary emphasis in most engineering programs is digital technology rather than analog/RF. This means that talent is expensive, which can put a real damper on what companies are able to do. We need to train competent engineers who are strongly motivated and passionate about millimeter-wave technology.
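As a sketch of how noise figure bounds “the smallest signal I can see” (standard receiver math, not from the article): the minimum detectable signal (MDS) of a receiver is

$$\mathrm{MDS\,(dBm)} = -174\;\mathrm{dBm/Hz} + 10\log_{10}(B) + NF + \mathrm{SNR_{min}}$$

where $-174$ dBm/Hz is the room-temperature thermal noise floor, $B$ is the detection bandwidth in Hz, $NF$ is the noise figure in dB, and $\mathrm{SNR_{min}}$ is the signal-to-noise ratio the detector requires. For example, a 1 MHz bandwidth ($10\log_{10}(10^6) = 60$ dB), a 10 dB noise figure and a 10 dB required SNR give $-174 + 60 + 10 + 10 = -94$ dBm; every dB shaved off the noise figure directly lowers the smallest detectable return and extends radar range.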

Summary
Automotive radar technology is here now, and while it’s currently found primarily in premium-brand vehicles, the goal is to bring down the unit cost so that it becomes standard equipment throughout the automotive industry. To do this, a number of challenges must be addressed, including solving the complexities associated with testing. Advantest is strongly committed to this market and to taking a leading role in finding these solutions and applying them to other millimeter-wave applications as the market continues to grow – including the fast-emerging 5G.
