
Posted in Top Stories

2025 Viewpoint: AI, Device Complexity Will Continue to Drive Packaging and Test Demands

This article is adapted with permission from Semiconductor Digest. The original article can be viewed here.

By Doug Lefever, Representative Director and Group CEO, Advantest Corporation

I believe many of the technology trends that we’ve seen in 2024 will continue through 2025, especially silicon development related to artificial intelligence (AI) and machine learning (ML) applications. Although socioeconomic factors, supply chain issues, and the risk of a global recession will persist, demand for AI should remain strong throughout 2025, spurring major long-term growth in the semiconductor industry. I expect much of this growth will stem from high-performance computing (HPC) products used in data centers, such as GPUs and CPUs, as well as the DRAM used in high-bandwidth memory (HBM).

HPC devices are incredibly complex, and their development presents the industry with numerous unique challenges. To increase processing speed and improve performance, engineers are developing chips with more and more transistors – a single data center GPU can contain billions of transistors. Additionally, many manufacturers are employing heterogeneous integration to group multiple ICs together on a single substrate, reducing chip-to-chip communication delays. So, not only are these HPC devices densely packed with transistors, but they also contain various components, all with different needs and processing speeds, packaged together in a single device. This can lead to hot spots and mechanical failures that have the potential to damage the device and render it unusable. Also, power requirements for these devices are increasing exponentially, demanding test and handling equipment with substantial current supplies and advanced thermal control capabilities. 

Most HPC devices are designed for use in data centers running large language models (LLMs), which require massive computational resources. To run LLMs efficiently, data centers can pool hundreds or even thousands of these AI/ML processors together for long periods of time. If a single processor fails, the entire workload may have to be restarted. This is driving the industry to incorporate more test insertions and added fault coverage in the manufacturing process to ensure higher reliability. Some customers are adding system-level test (SLT) and burn-in (BI) insertions for an added layer of quality assurance. As the demand for AI continues to grow in 2025, we anticipate an increase in the demand for semiconductor testing to support the development of these high-performance devices.

The products used to power today’s AI/ML applications are far more complex than anything our industry has seen before and will only become more complex with the influx of generative AI in consumer products like smartphones and laptops. Advantest is prepared to provide our industry with the test solutions needed to navigate this complexity and drive innovation in 2025.


Posted in Top Stories

Optimizing Tester Memory Resources with Xtreme Pooling Technology

By Ronald Goerke, Senior Application Consultant, Performance Digital Business Group COE, Advantest

The rapid evolution of semiconductor devices has amplified the demand for advanced automated test equipment (ATE) that can handle increasingly complex test scenarios for logic devices. ATE vector memory is becoming an increasingly valuable commodity as scan-pattern volume soars. Extrapolations based on data from the International Technology Roadmap for Semiconductors (ITRS) indicate that scan data volume will double every three years, and some new data suggests that with the growth of AI products, scan data could begin increasing tenfold over future three-year periods. Furthermore, as parallel and multiplexed scan give way to multi-gigabit high-speed serial I/O (HSIO) scan (as specified in the IEEE 1149.10 standard or in proprietary implementations), devices with fewer pins require even more vector memory behind every single device pin. 
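
To put these growth rates in perspective, the short sketch below projects normalized scan data volume under both assumptions, doubling every three years versus increasing tenfold every three years. The baseline value and time horizon are arbitrary illustrations, not ITRS or Advantest figures.

```python
# Back-of-the-envelope projection of scan data volume growth.
# The normalized baseline of 1.0 and the 12-year horizon are arbitrary
# assumptions chosen only to contrast the two growth rates cited above.
baseline = 1.0  # normalized scan data volume in year 0

for years in range(0, 13, 3):
    periods = years // 3
    doubling = baseline * 2 ** periods    # doubles every three years
    tenfold = baseline * 10 ** periods    # grows tenfold every three years
    print(f"year {years:2d}:  2x/3yr = {doubling:6.1f}   10x/3yr = {tenfold:8.1f}")
```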

Contending with the data

Key drivers of this data explosion include higher gate counts, new and more intricate fault models, and chiplet-based designs, which demand lower DPPM (defective parts per million). Consequently, ATE systems are increasingly likely to run out of memory when testing complex devices. Several approaches can help use the available memory more efficiently: applying higher levels of pattern compression, avoiding pattern duplication, simplifying instructions, or combining patterns to avoid complex operating sequences, for example. If such steps are not sufficient, you can use site memory sharing, which must be enabled on a per-pattern basis, or traditional memory pooling, which occurs automatically, although the user must consider load-board design. In either case, sharing is restricted to one memory pool, which can create bottlenecks for data-intensive scan and functional tests and can complicate load-board design.

As an example, Advantest’s Pin Scale 5000 digital card contains eight modules, each with 32 channels and four test processors, providing eight channels per test processor. The eight channels represent one memory pool, and traditional memory pooling can stack all eight channels of memory behind one pin, with fanout supporting up to eight channels for multisite memory sharing (Figure 1). However, with the traditional implementation, a given memory pool in the Pin Scale 5000 cannot extend beyond eight channels, potentially forcing a choice between a costly hardware upgrade and compromised test coverage and efficiency.

Figure 1. Traditional memory pooling with the Pin Scale 5000 card can stack eight channels behind one pin.
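
For reference, the per-card numbers described above can be tallied directly. The sketch below simply restates the stated configuration (eight modules, each with 32 channels and four test processors) as arithmetic; it reflects the figures quoted in this article rather than a formal datasheet specification.

```python
# Pin Scale 5000 channel and test-processor arithmetic, as described above.
modules_per_card = 8
channels_per_module = 32
test_processors_per_module = 4

channels_per_card = modules_per_card * channels_per_module                 # 256
test_processors_per_card = modules_per_card * test_processors_per_module   # 32
channels_per_pool = channels_per_module // test_processors_per_module      # 8

print(channels_per_card, test_processors_per_card, channels_per_pool)
```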

Extending the memory pool

To overcome this limitation and avoid unpleasant tradeoffs, Advantest has introduced Xtreme Pooling technology with SmarTest release 8.7.2.0. By extending the vector-memory pool of the Pin Scale 5000 card beyond eight channels, Xtreme Pooling delivers unmatched flexibility and efficiency and sets a new standard for high-speed, high-data-volume test applications. The capability is enabled by Xtreme Link, Advantest’s proprietary communication-network technology for ATE systems.

Xtreme Pooling implementations are possible because a test program usually does not fill the vector-memory pool of all test processors. In Figure 2, moving from left to right, each group of eight vertical bars represents eight channels of memory available for test processors TP2, TP3, and TP4. The dark areas represent memory that the respective test processors utilize, while the lightly shaded areas represent unused memory that could be allocated to other test processors.

 

Figure 2. Xtreme Pooling makes underutilized memory, indicated by the lightly shaded area of each bar, available to other test processors.

Several naming conventions help to clarify how Xtreme Pooling works:

  • Xtreme Pool refers to all free vector memory.
  • Donor refers to a test processor whose memory can store data that can be executed on other test processors.
  • Recipient refers to a test processor that can execute vector data copied from other test processors.

In addition, a new pattern property describes two memory locations: local (standard, associated with a particular test processor) and remote (in the Xtreme Pool).

Xtreme Pooling allows any test processor on a Pin Scale 5000 card to store vector data in other test processors’ underutilized memory. Xtreme Pooling can serve in HSIO applications with data rates up to 4 Gb/s in multisite configurations as well as in any application with high data volumes.

By enabling memory pooling and sharing across all channels and test processors within a single Pin Scale 5000 card, Xtreme Pooling extends the vector-memory pooling limit from 28 giga-vectors (GV) to 896 GV, as shown by the equations at the bottom right of Figure 3. Xtreme Pooling also increases the vector-memory allocation flexibility to 256 sites.

 

Figure 3. This Xtreme Pooling example expands the memory pool to 896 GV, as shown in the equations on the bottom right, and provides fanout to 256 sites.
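
The Figure 3 arithmetic can be reproduced from the numbers quoted above. Dividing the 28 GV traditional pool limit by its eight channels implies 3.5 GV per channel, and extending pooling across all 256 channels on the card then yields 896 GV; note that the per-channel value is inferred here from the stated pool sizes, not taken from a datasheet.

```python
# Reproducing the Figure 3 equations from the pool sizes quoted in the text.
# The 3.5 GV-per-channel value is inferred from 28 GV / 8 channels; it is an
# assumption derived from the article, not a datasheet figure.
gv_per_channel = 28 / 8                      # 3.5 GV per channel (inferred)
traditional_pool_gv = 8 * gv_per_channel     # 28 GV behind one pin
xtreme_pool_gv = 256 * gv_per_channel        # 896 GV across the whole card

print(traditional_pool_gv, xtreme_pool_gv)   # 28.0 896.0
```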

To enable Xtreme Pooling, the user needs to set a flag for those patterns that should be placed in a remote location. While loading the program, the system analyzes the patterns and test program to configure the recipient and donor channels as well as the required buffer sizes. The remote pattern data content is symmetrically distributed to the available donor channels. During test program execution, the remote vector data content required for the execution of a test suite is copied from the donor pools to the recipient buffers (Figure 4). This preload occurs prior to executing a test suite with Xtreme Pooling patterns, and a user can optionally trigger this process early in the test flow as a background operation. After the execution of a test suite, the relevant buffer is freed up for the next preload.

 

Figure 4. A bind step downloads patterns for donor channels, and a preload step copies content from donors to recipients.
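
The bind, preload, and release cycle described above can also be summarized as pseudocode. The Python model below is purely conceptual, assuming a round-robin distribution of remote patterns across donors; the class and method names are hypothetical and do not correspond to the SmarTest API.

```python
# Conceptual model of the Xtreme Pooling flow described in the text.
# All names are hypothetical illustrations, not SmarTest 8 APIs.
class XtremePoolingModel:
    def __init__(self, num_donors):
        self.donor_pools = [[] for _ in range(num_donors)]  # hold remote vector data
        self.recipient_buffer = []                          # scratch space on the recipient

    def bind(self, remote_patterns):
        """Program load: distribute remote-flagged patterns across donor pools."""
        for i, pattern in enumerate(remote_patterns):
            self.donor_pools[i % len(self.donor_pools)].append(pattern)

    def preload(self, test_suite_patterns):
        """Before a test suite runs, copy its remote patterns from donors to the recipient."""
        needed = set(test_suite_patterns)
        self.recipient_buffer = [p for pool in self.donor_pools for p in pool if p in needed]

    def release(self):
        """After execution, free the recipient buffer for the next preload."""
        self.recipient_buffer = []

# Example: two remote patterns spread over two donors, then preloaded for one suite.
model = XtremePoolingModel(num_donors=2)
model.bind(["scan_a", "scan_b"])
model.preload(["scan_a"])
print(model.recipient_buffer)  # ['scan_a']
model.release()
```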

Use cases

Advantest is currently addressing two primary use cases with Xtreme Pooling for high-performance computing and advanced-packaging devices. First, if some scan pins require more memory than is available in a single test processor’s pool, Xtreme Pooling enables the use of remote memory from lightly utilized or empty pools, and SmarTest automatically distributes the vector data to those remote pools. In Figure 5, for example, unused memory in pools 1 and 2 can be allocated to memory pool 3 to provide sufficient vector memory for a single HSIO pin. When a test requests the remote vector data, SmarTest copies the data to the recipient before execution.

 

Figure 5. Unused memory from memory pools 1 and 2 can be allocated to memory pool 3.

The second use case involves multisite test programs in which some signals are used heavily. As in the first case, SmarTest enables the use of memory from lightly utilized or empty pools. Unlike traditional memory sharing among sites, this approach does not require all sites that need access to one signal to reside in the same pool: the same vector data from the donor pool (memory pool 1 in Figure 6) can be copied to multiple memory pools (memory pools 2 and 3 in Figure 6) for zero-overhead fanout to multiple sites (two sites in Figure 6). Note that the use cases illustrated in Figures 5 and 6 are not mutually exclusive; both can be applied simultaneously.

Figure 6. The same vector data from donor pool 1 can be copied to memory pools 2 and 3 for zero-overhead fanout to two sites.
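
A compact way to see how the two use cases fit together is to model the copy relationships: spare capacity in neighboring pools backs an oversized pin (Figure 5), while a single donor’s data is copied to several recipient pools serving different sites (Figure 6). The pool names and capacity values below are hypothetical; only the relationships mirror the figures.

```python
# Illustrative model of the two Xtreme Pooling use cases. Pool names and
# the spare-capacity values are hypothetical assumptions.

# Use case 1 (Figure 5): pools 1 and 2 donate spare capacity so that pool 3
# can hold more vector data than one eight-channel pool normally could.
spare_gv = {"pool1": 10.0, "pool2": 6.0}             # assumed spare capacity, GV
pool3_effective_gv = 28.0 + sum(spare_gv.values())   # local pool plus donated memory

# Use case 2 (Figure 6): the same donor data is copied to multiple recipient
# pools, giving zero-overhead fanout to the sites behind each pool.
donor_data = "heavy_scan_block"
recipients = {name: donor_data for name in ("pool2", "pool3")}

print(pool3_effective_gv, recipients)
```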

Load-board considerations

Xtreme Pooling also has implications for the load-board layout. 

For applications requiring multiple cards, it is important to distribute high-usage vectors among the cards. To find the optimal solution, first determine which DUT pins require the most vector memory and calculate the number of Pin Scale 5000 cards required for that DUT. Then, distribute the memory-intensive signals evenly among all available cards. Figure 7 shows 40 high-usage scan signals distributed among four cards, with 10 high-usage signals per card.

Figure 7. High-memory-usage signals are distributed evenly among four cards.
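
In code form, the distribution step in Figure 7 amounts to a simple round-robin assignment of the most memory-hungry signals across the available cards. The helper below is an illustrative sketch of that procedure, not a SmarTest utility, and the signal names are placeholders.

```python
# Round-robin distribution of high-usage scan signals across cards, as in
# Figure 7. Illustrative sketch only; signal names are placeholders.
def distribute_signals(signals, num_cards):
    cards = [[] for _ in range(num_cards)]
    for i, signal in enumerate(signals):
        cards[i % num_cards].append(signal)
    return cards

high_usage_signals = [f"scan_hs_{n}" for n in range(40)]   # 40 high-usage signals
per_card = distribute_signals(high_usage_signals, num_cards=4)
print([len(card) for card in per_card])                    # [10, 10, 10, 10]
```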

For success when employing Xtreme Pooling, keep in mind that it works only within one Pin Scale 5000 card; it does not work across multiple cards in one system. In addition, a memory pool can be configured as a donor pool or a recipient pool, but not both, and only vectors can be stored in donor pools, while sequencer programs can be stored only in recipient pools. 

Summary

Xtreme Pooling extends vector-memory capacity up to 896 GV per pin by dynamically redistributing unused memory across test processors. The technology enhances memory availability and simplifies load-board design, mitigating the complexities of wiring and signal routing on dense test boards. Moreover, by providing a software-driven solution to overcome memory constraints, Xtreme Pooling supports cost-effective scaling and boosts overall test efficiency with reduced reliance on specialized hardware configurations. Advantest, in collaboration with leading semiconductor companies, has already demonstrated Xtreme Pooling’s value for cutting-edge applications. As data volumes continue to grow exponentially, Xtreme Pooling offers a scalable, cost-effective solution that enhances test versatility, reinforcing Advantest’s position as a leader in the era of increasing device complexity.


Posted in Uncategorized

Beyond Black Boxes: Meet AI that Justifies Its Choices

Unlock the secrets of AI innovation with our esteemed guest, Jem Davies, non-executive director at Literal Labs. Jem shares his transition from Arm to Literal Labs, revealing how the revolutionary Tsetlin machine sets new benchmarks in efficiency, power usage, and processing speed.

Jem is a highly experienced business leader and technologist, having previously served 18 years at Arm. He is an engineer and was an Arm Fellow, holding multiple patents on CPU and GPU design. Jem’s career moved into business management and he became a general manager first in Arm’s Media Processing Groups, then the founding general manager of their Machine Learning group. In addition to setting future technology roadmaps, he also worked on several acquisitions leading to building new businesses inside Arm, including the Mali GPU (the world’s #1 shipping GPU) and Arm’s AI processors. Jem left Arm in 2021 and is currently chair of NAG and a non-executive director of Literal Labs, BOW, CamAI, and Cambridge Future Tech.

Explore the crucial role of explainable AI and why it matters more than ever in today’s regulated industries like healthcare and finance. Jem discusses Literal Labs’ Tsetlin Machine, which offers an intuitive audit trail of AI decision-making through propositional logic. This approach is breaking new ground by enhancing model efficiency without compromising on performance. We also tackle the challenge of unbiased training data and how tailored levels of explainability can make AI accessible to everyone, from everyday users to industry experts.

As we gaze into the future of AI, we tackle the pressing issues of bias, energy consumption, and the potential impact of quantum computing. Jem provides insight into how Literal Labs is pioneering tools to promote ethical AI development, mitigate biases, and democratize AI innovation. From practical applications like water leak monitoring to the potential for AI to evolve into a tool of unimaginable uses, we reflect on how the intersection of explainability, energy efficiency, and bias shapes a responsible AI future. Join us for an episode that promises to broaden your understanding of AI’s profound societal impact.


Posted in Upcoming Events

Advantest Completes Another Successful SEMICON West

 

SEMICON West 2024 took place at the Moscone Center in San Francisco, California, on July 9-11, where Advantest once again had a strong presence. The event marked the 25th anniversary of the V93000, which was unveiled at SEMICON West in 1999.

This year’s SEMICON West hosted 650 exhibiting companies, a 15% increase over last year, and attendance rose 19% from 2023.

Advantest’s booth attracted 242 visitors—an astonishing 50% more than last year—including 12 VIP customers. The booth featured Advantest’s 70th anniversary and corporate theme videos, as well as ACS RTDI, automotive device test solutions, memory platforms, the HA12000 die-level handler, and the V93000 EXA Scale PSML and XHC32 with RDA sockets and boards. Twelve press meetings were conducted, nearly four times the number Advantest usually conducts.

Our customer hospitality event on Wednesday, July 10, proved to be a success as well, attracting almost 250 attendees, 67% of whom were customers or other non-Advantest guests. We celebrated Advantest’s 70th anniversary and the V93000’s 25th anniversary with special cupcakes.

Advantest had a strong presence at the TestVision Symposium as well, presenting multiple papers and posters:

  • Keynote – The Rise of AI-Enhanced Test Engineering: Transforming Challenges into Opportunities by Keith Schaub
  • Revolutionizing AI Chip Testing with AI-Driven Solutions by Ira Leventhal
  • Chiplet Ecosystem Testability for HVM by Bob Bartlett
  • Poster presentations
    • DPD (Digital Pre-Distortion) for RF Power Amplifier Test by Yichuan Lu
    • Optimizing ATE Test Cell Operation with IoT and Cloud Technologies by Vincent Chu

In addition, Ken Butler represented Advantest in the panel discussion, “A Foundation for a Data Driven Future: How to Define, Adapt, and Adopt Standards to Enable a More Intelligent Test Flow and Seamless Operations.”

In another highlight, Advantest’s HA1200 test handler was named one of three finalists for the “Best of West” award, a prestigious honor presented each year by SEMI and Semiconductor Digest to recognize innovative new products that significantly advance electronics manufacturing capability.

SEMICON West 2025 will take place in October in Phoenix, Arizona, as the conference begins its new rotation between Phoenix and San Francisco.
