
Q&A Interview with Leslie Tugman, SEMI

By GO SEMI & Beyond staff

As most of us in the electronics manufacturing supply chain are aware, the industry is facing a talent crisis and needs to fill the tech workforce pipeline with employees qualified to perform a plethora of available jobs. In this issue, we talk with Leslie Tugman, SEMI’s Vice President of Global Workforce Development and Diversity, about what SEMI and its member companies, which include Advantest, are doing to address this challenge.

Q. How are SEMI and its High Tech University (HTU) program driving industry workforce development efforts?

A. SEMI has made workforce development and talent advocacy a top priority and dedicated significant resources and expertise to tackle the talent shortage. We offer an extensive suite of programs and initiatives addressing the problem. All are available under our umbrella program called SEMI Works™, a holistic approach to workforce development that includes SEMI High Tech U, our University Connections and SEMI Mentoring programs, and SEMI Certs. These initiatives are anchored by an industry-wide competency model we are developing that will standardize and prioritize industry-acknowledged skills and support training programs linked to the skill sets the industry needs most. 

Right now, the electronics manufacturing supply chain has thousands of jobs that it can’t fill. All of these jobs require skills across science, technology, engineering and/or math (STEM). This need intensifies as technology advances, and many K-12 public school systems around the world aren’t producing enough students with an interest or aptitude for high-tech jobs. The purpose of SEMI HTU is to inspire high-school students to pursue careers in our industry by showing them how these STEM skills are relevant and can be applied in the real world.

We take students out of their traditional classrooms and bring them to an industry site for a three-day intensive course. The company facility becomes their classroom, led by an instructor who works at that site and can tell students how they’ll use what they’re learning. The program combines lectures with hands-on learning and STEM exercises as well as lessons in communication, critical thinking, teamwork, and other career/life skills. The instructors serve as role models and provide a positive industry image.

Q. How do members participate in supporting SEMI’s workforce development programs?

A. There are a number of ways that members can participate in and support SEMI HTU. Members can sponsor HTU through financial and/or in-kind contributions. They can also participate by volunteering to teach a module at an HTU program. Participating in HTU is a great way for companies to support their corporate social responsibility (CSR) programs. SEMI can deliver the program for members, or we can train member companies to be certified partners to deliver the program independently. SEMI is currently delivering two HTU sessions per month around the world.  

Q. How have HTU’s workforce development efforts evolved over the past five years? 

A. The constriction in the semiconductor industry’s talent pipeline didn’t happen overnight; it’s been worsening for some time. In the last several years, a number of factors – including a deepening talent shortage, a lack of STEM-educated students, biases related to gender and diversity, and an aging workforce – have converged to narrow the pipeline even more. At the same time, the number of job vacancies has skyrocketed. SEMI has become a leader in addressing workforce development in a broad, comprehensive manner. High Tech U, our mentoring program and our diversity and inclusion initiatives all focus on employee recruitment and retention.

We also have a University Connections program that puts companies such as Advantest in contact with recent or imminent graduates and helps those students understand why the company would be a great place to work. In the past five years, we have really embraced university students and young professionals as part of the audience we want to reach. SEMICON West will again feature a Workforce Development Pavilion that connects members with emerging talent through our HTU mentoring and University Connections programs. This is a significant area of focus at SEMICON West 2019. In addition, this year, we will conduct a High Tech U – which Advantest is co-sponsoring – in a classroom adjacent to the Workforce Development Pavilion.

It’s important to note that SEMI offers global workforce development initiatives. The need to fill thousands of industry jobs is global, although causes differ by region. For example, the aging workforce is a critical factor in Japan, lagging STEM skills are a key issue in the U.S., while shoring up the industry’s image in terms of diversity and inclusion is an issue worldwide. We tie this all together with a Workforce Development Council in each region that provides guidance and validation of our initiatives.

Q. Clearly, providing inclusive work environments will be vital to attracting new workers. How are you helping members rethink their corporate culture in this regard?

A. This is a critical component in terms of attracting future tech workers. Our CEO, Ajit Manocha, is passionate about diversity and inclusion. These kinds of efforts can fail when they don’t have executive support, and he is making this a top priority.

Mentoring is an important element in recruiting and retaining women in the workforce. Our new Spotlight on SEMI Women program honors women who are working at SEMI member companies and making a difference at every level. At SEMICON West, we will be celebrating our spotlight women at the welcome reception.

We also hold diversity forums on various topics – including unconscious bias and the importance of collecting data – to aid member companies in effecting internal change. Members like Advantest have been instrumental in supporting these efforts. In addition to its HTU sponsorship and partnering in workforce development, Advantest is active on both our Workforce Development Council and our Diversity and Inclusion Council.

Q. How can readers get involved?

A. There are a variety of ways to involve your company in SEMI educational activities. Here are a few specifics to pique your interest:


Q&A Interview with Ira Leventhal

By GO SEMI & Beyond staff

In this issue, we delve into a subject of growing interest in the test world and beyond: artificial intelligence. Our Q&A interviewee is Ira Leventhal, Vice President of Advantest America’s New Concept Product Initiative, a position he has held since June 2017. Ira has more than 25 years of ATE industry experience with Hewlett-Packard, Agilent Technologies, Verigy, and Advantest.

Q. Why is now the time for AI to be implemented in the semiconductor industry, given that it’s been discussed for many years?

A. Since Alan Turing first postulated in 1950 that the computer equivalent to a child’s brain could be developed and then trained to learn – evolving into an adult-like brain – we’ve been waiting for the technology to catch up to his theory. Today, all the key components essential to enabling AI are in place. First, you need a lot of data, and the Internet of Things facilitates this. Second, you need access to the data; using cloud computing and Big Data technologies, data silos become data lakes with easy access. Third, you need fast data crunching, which we can achieve thanks to the tremendous advances in computational power and parallel processing. And finally, you need better algorithms for a wide variety of applications – the first three items have enabled rapid advancements in algorithm design.

Q. You state that advancements in deep learning will fuel the next semiconductor industry revolution. How so?

A. For years, the test industry has used adaptive test and other techniques to streamline and focus test efforts for maximum value (and minimum test times). With the advent of AI technologies such as neural networks, new possibilities are coming to light. Merging these approaches will allow the industry to improve device quality, reduce cost of test, and automate the control of functions best suited to the computers supporting us – freeing humans to concentrate on new developments and innovations.

Q. What is deep learning? Is it synonymous with AI?

A. Many people don’t realize that AI, machine learning and deep learning are not interchangeable terms. AI is actually an umbrella term, and the others are nested subsets of AI. [See Figure 1.]

Figure 1. AI vs. machine learning vs. deep learning

Q. Why should we focus on deep learning?

A. Deep learning is analogous to building a skyscraper. When you don’t have sufficient land to build a very large building, you go vertical. When you lack the infinite storage, computing power and training data needed to build a very large single-layer neural network – and we all do – you go deep. Deep learning promotes efficient use of available resources, much like a skyscraper, and it enables complex problems to be broken up into a series of steps, similar to an automobile assembly line.

Convolutional neural networks (CNNs) are used heavily in deep learning network architectures. When the network is fed images during the training process, convolutional filtering layers recognize specific attributes of the images. As each layer views an image through a convolutional filter, it passes a reduced set of data to the next layer, keeping network complexity under control from one layer to the next. This reduction in complexity means a CNN requires far less processing, memory, and time for image recognition than a fully connected network. [See Figure 2.]

Figure 2. How a convolutional neural network works
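
To make that layer-by-layer data reduction concrete, here is a minimal sketch in PyTorch; the layer sizes are illustrative choices of ours, not drawn from any production system. Each convolution/pooling stage hands a smaller feature map to the next layer – exactly the complexity control Figure 2 depicts.

```python
# A minimal CNN sketch of the layered reduction described above (PyTorch).
# All layer sizes are illustrative, not taken from any Advantest system.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Each conv/pool stage passes a smaller spatial map to the next layer,
    keeping network complexity under control as depth grows."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # 14x14 -> 14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# A 28x28 grayscale image: the data handed onward shrinks from 784 raw pixels
# to a 7x7x16 feature map before the final classification layer.
logits = TinyCNN()(torch.randn(1, 1, 28, 28))
print(logits.shape)  # torch.Size([1, 10])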

Q. How can deep learning be applied in semiconductor testing?

A. A type of deep learning called transfer learning is well suited to our industry. Transfer learning enables you to start with an existing set of trained data instead of having to train a network from scratch. If you take a network that was trained with millions of images and you keep the initial layers that can understand low-level aspects of the images, you can replace later layers, training them on a new set of data for which you may only have a few hundred images. The result is a trained network that performs with significantly greater accuracy than if you’d started training from scratch. The reality is that a network trained from scratch would never catch up, no matter how long you trained it.
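
As a rough illustration of that recipe – our sketch, not Advantest’s implementation, and assuming PyTorch with torchvision 0.13+ – transfer learning typically looks like this: keep the pretrained layers that capture low-level image structure, then replace and retrain only the final layers on the small new dataset.

```python
# A hedged transfer-learning sketch: freeze a network pretrained on millions
# of images, replace its head, and train only the head on a small dataset
# (e.g., a few hundred labeled wafer or defect images).
import torch.nn as nn
from torchvision import models

# Start from a network trained on millions of ImageNet images.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the early layers; their low-level feature detectors transfer as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer and train it on the new, small dataset.
num_new_classes = 4  # illustrative, e.g., four defect categories
model.fc = nn.Linear(model.fc.in_features, num_new_classes)

# During fine-tuning, only model.fc.parameters() are given to the optimizer.
```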

A key application is wafer metrology. Metrology involves monitoring your wafer process to make sure it stays within set limits by taking measurements on the wafers over time. Measuring every wafer, however, can be costly and cumbersome.

Virtual metrology (VM) is the prediction of wafer properties based on equipment settings and sensor data.  This data is used with real metrology data from a sample set of wafers to create a deep learning model that maps process data to wafer metrology parameters such as layer thicknesses, photolithography critical dimensions, and others. Instead of measuring every wafer, you can measure a sample set, and then use VM to predict the metrology performance of the rest.
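
A minimal sketch of the VM idea on synthetic data follows; the sensor features, model choice and units are our assumptions, not Advantest’s actual pipeline. The model is fit on the measured sample set and then predicts metrology values for the unmeasured wafers.

```python
# An illustrative virtual-metrology sketch: map equipment sensor readings to
# a measured wafer parameter (here, layer thickness) for a sampled subset,
# then predict it for unmeasured wafers. All data below is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_wafers, n_sensors = 500, 12  # e.g., chamber pressure, RF power, temperatures
X = rng.normal(size=(n_wafers, n_sensors))           # equipment/sensor data
true_weights = rng.normal(size=n_sensors)
thickness = 100 + X @ true_weights + rng.normal(scale=0.5, size=n_wafers)

# Real metrology exists only for a 20% sample set; the rest is predicted.
X_meas, X_rest, y_meas, y_rest = train_test_split(
    X, thickness, test_size=0.8, random_state=0)

vm_model = GradientBoostingRegressor().fit(X_meas, y_meas)
pred = vm_model.predict(X_rest)
print(f"mean abs error: {np.mean(np.abs(pred - y_rest)):.2f} nm (synthetic)")
```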

As geometries shrink and capacity increases, new wafer processing equipment is constantly brought online, and it is a big challenge to generate enough training data to keep the deep learning models current. Transfer learning enables you to build up a network that’s been trained on many different types of equipment. When a new piece of equipment is added to the line, you can tune a pre-trained network to operate using only a small set of data collected on that new tool.

Q. This is a fascinating subject. What other kinds of deep learning are there?

A. Reinforcement learning involves training a deep learning network on which actions will achieve the best ultimate reward. In this case, the network is like the brain of a mouse learning the fastest path through a maze to get to the cheese – it learns to navigate complex problems and come up with the optimal solution. An example is using deep reinforcement learning for production scheduling. Let’s say you’re trying to figure out how to minimize the overall time it takes to work through a complex multi-step production process from start to finish – the network will try different types of scenarios and figure out what works best. 
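
Here is a toy tabular Q-learning sketch of that scheduling idea – entirely our illustration, with an invented three-job changeover-cost matrix. The agent learns which job to run next so that total changeover time through the sequence is minimized.

```python
# A toy Q-learning sketch: learn a job order that minimizes total changeover
# time when the cost of starting a job depends on the previous job.
import random

jobs = [0, 1, 2]
# Invented changeover costs: switching between "similar" jobs is cheaper.
setup = {(i, j): abs(i - j) * 2 + 1 for i in jobs for j in jobs}
Q = {}  # Q[((last_job, frozenset(remaining)), action)] -> expected -cost

def best(state, remaining):
    return max(remaining, key=lambda a: Q.get((state, a), 0.0))

alpha, eps = 0.5, 0.2
for episode in range(2000):
    last, remaining = -1, set(jobs)  # -1 marks an idle machine at the start
    while remaining:
        state = (last, frozenset(remaining))
        if random.random() < eps:
            a = random.choice(list(remaining))   # explore
        else:
            a = best(state, remaining)           # exploit
        cost = setup.get((last, a), 1)           # first job: fixed cost of 1
        nxt = remaining - {a}
        future = max((Q.get(((a, frozenset(nxt)), b), 0.0) for b in nxt),
                     default=0.0)
        Q[(state, a)] = ((1 - alpha) * Q.get((state, a), 0.0)
                         + alpha * (-cost + future))
        last, remaining = a, nxt

# Greedy rollout of the learned policy yields a low-changeover job order.
last, remaining, order = -1, set(jobs), []
while remaining:
    a = best((last, frozenset(remaining)), remaining)
    order.append(a)
    last, remaining = a, remaining - {a}
print("learned order:", order)  # e.g., [0, 1, 2] or [2, 1, 0]
```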

Unsupervised deep learning has great potential for semiconductor manufacturing and test applications. Instead of telling the network what kind of data you’re giving it, you feed in unclassified data, and the network identifies things it sees that are similar to each other. It doesn’t know what those things are, just that they’re similar. It trains itself to classify things that look alike. This is powerful because you can throw a lot of unlabeled data at the network, and it will be able to identify relationships and act on them. It can find hidden relationships that humans might not have thought of, so unsupervised DL can do things that supervised DL can’t.
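
A compact stand-in for the idea (k-means here in place of the deep unsupervised methods the text alludes to, purely to illustrate label-free grouping): feed in unlabeled, synthetic parametric data and let the model discover the two hidden device populations on its own.

```python
# Unsupervised grouping on unlabeled data, as described above. k-means stands
# in for deep methods (e.g., autoencoders); the two populations are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Unlabeled parametric measurements from two hidden populations of devices.
data = np.vstack([rng.normal(0.0, 0.3, size=(100, 5)),
                  rng.normal(1.5, 0.3, size=(100, 5))])

labels = KMeans(n_clusters=2, n_init=10).fit_predict(data)
# The model never saw labels, yet it separates the two populations,
# surfacing a relationship no one told it to look for.
print(np.bincount(labels))  # roughly [100, 100]
```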

Advantest is working with university teams to investigate these techniques in detail, and we’re in discussions with multiple customers about ways to apply AI. We view it as a vital competitive advantage going forward.


High-Resolution Audio Requires Advanced Measurement Capabilities  

By Takahiro Nakajima, Senior Expert, Analog/Mixed-signal, Advantest Corp.

Smartphones supporting High-Resolution (Hi-Res) Audio are growing more widely available, enabling consumers to experience high sound quality when streaming music, movies or other content. To accommodate Hi-Res Audio, these devices integrate an increased number of power management ICs (PMICs) equipped with digital-to-analog converters (DACs), which require high-dynamic-range testing with 24-bit resolution.

This has, in turn, led to manufacturers’ increased demand for automated test equipment (ATE) with analog performance exceeding a total harmonic distortion (THD) of -130 dBc*, as well as the ability to test 16 devices in parallel (16 multi-site testing). This article details a solution for achieving both ultrahigh dynamic range performance and 16 multi-site testing.

Figure 1 shows a block diagram of a smartphone. Smartphones incorporate numerous semiconductors to drive power management, connectivity, sensors, displays, audio, cameras, and memory. In recent years, there has been a trend toward integrating the PMIC and audio coder/decoder (CODEC) into a single chip, as the figure illustrates. There has also been an increase in 24-bit resolution DACs, needed to support Hi-Res Audio.

What is Hi-Res Audio?

The Hi-Res Audio specification – defined by the Japan Electronics and Information Technology Industries Association (JEITA) – allows a much wider dynamic range than CDs provide. A Hi-Res sound source, such as 24-bit/96 kHz or 24-bit/192 kHz, is converted to data at a finer resolution than a CD sound source (Figure 2), so it carries much more sound information. This means that Hi-Res Audio is as close as possible to the original sound, enabling the listener to experience sound quality comparable to being in a studio or concert hall.
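
For reference, the ideal dynamic range of an N-bit converter follows the standard quantization formula (textbook arithmetic, not a figure from the article):

```latex
% Ideal dynamic range of an N-bit converter (standard quantization result):
\mathrm{DR} \approx 6.02\,N + 1.76~\mathrm{dB}
% N = 16 (CD audio):  DR ~ 98 dB
% N = 24 (Hi-Res):    DR ~ 146 dB
```

This is why moving from 16 to 24 bits widens the available dynamic range by roughly 48 dB.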

Audio testing

The four test methods required for audio devices are total harmonic distortion (THD); total harmonic distortion plus noise (THD+N); dynamic range (DR); and signal-to-noise ratio (SNR). Together, these tests cover the requirements associated with Hi-Res Audio, creating a set of parameters that must be met in order to assure the highest-quality audio performance.
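
For readers who want the math, the conventional definitions are as follows (standard formulas, not taken from the article), where V1 is the rms amplitude of the fundamental, Vk that of the k-th harmonic, and Nrms the rms noise excluding harmonics:

```latex
\mathrm{THD}   = 10\log_{10}\!\frac{\sum_{k\ge 2} V_k^2}{V_1^2}~\mathrm{dBc}
\qquad
\mathrm{THD{+}N} = 10\log_{10}\!\frac{\sum_{k\ge 2} V_k^2 + N_\mathrm{rms}^2}{V_1^2}~\mathrm{dBc}
\qquad
\mathrm{SNR}   = 10\log_{10}\!\frac{V_1^2}{N_\mathrm{rms}^2}~\mathrm{dB}
```

DR is conventionally measured as SNR with a small input signal (e.g., -60 dBFS) applied.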

Once these tests are completed, frequency weighting is applied to obtain measurement values matching the sensitivity of the human ear. People hear best in the range of 2 to 4 kHz, and sensitivity declines at higher and lower frequencies. A-weighting is the weighting network most commonly used; SNR and DR results often reflect analog performance with A-weighting applied.

Measured noise includes the instrument’s own noise, so measurement error can be calculated from the difference between device performance and measurement instrument performance. For example, if the difference between device performance and measurement instrument performance is 0 dB, the measurement error is 3 dB. If the difference is -5 dB, the measurement error is 1.19 dB. This clearly indicates that the better the performance of the measurement instrument, the lower the measurement error.
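
The arithmetic here is simple power addition; the following minimal sketch (our illustration of the standard calculation, not Advantest’s published method) reproduces the figures above:

```python
# Additive-noise measurement error: the instrument's noise power adds to the
# device's, inflating the measured level by the "measurement error".
import math

def measurement_error_db(margin_db: float) -> float:
    """Error in a noise measurement when the instrument's own noise sits
    margin_db dB relative to the device's true noise (0 dB = equal)."""
    return 10 * math.log10(1 + 10 ** (margin_db / 10))

print(f"{measurement_error_db(0):.2f} dB")    # 3.01 dB: instrument = device
print(f"{measurement_error_db(-5):.2f} dB")   # 1.19 dB: instrument 5 dB better
print(f"{measurement_error_db(-20):.2f} dB")  # 0.04 dB: instrument 20 dB better
```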

Advantest solution

The T2000 supports three mixed-signal modules (GPWGD, BBWGD and 8GWGD), as shown in Figure 3.

Advantest has developed an ultrahigh-dynamic-range measurement technique that achieves industry-leading levels of analog performance for its 24-bit DAC solution by adding high-precision analog circuits, such as a band elimination filter (BEF), at the front end of its T2000 general-purpose waveform generator/digitizer module (GPWGD).

The target performance for the T2000 solution was set to be 5 dB better than the target device performance in order to enable analog measurement with higher precision from characterization to mass production (Table 1). Test results indicated that the T2000 Integrated Power System (IPS) + GPWGD solution can address multiple challenges associated with Hi-Res Audio testing, including high-dynamic-range measurement, power supply/ground design and isolation, and high multi-site counts.

Mobile PMICs require digital, high-precision mixed-signal/analog, and power testing. Designed for automotive/industrial devices and PMICs, the T2000 IPS system can have a number of modules installed, as shown in Table 2. A high-precision analog function can also be added to the front end of the GPWGD as a 24-bit DAC solution for Hi-Res Audio. For wafer-level test, a wafer prober, probe card, and pogo tower can be combined. Sixteen channels of analog circuitry can be provided by mounting additional analog circuits in the user area of the wafer prober.

Measurement results

On the T2000 platform, analog performance was demonstrated with an ultrahigh dynamic range, showing that the platform can exceed the target performance across all of the audio tests listed earlier. Moreover, the results are consistent and repeatable, as indicated in Figure 4. When measurements were performed 200 times continuously with 16 multi-site testing, a typical THD result of -134 dBc was consistently obtained.

The results detailed in this article show that twice as many multi-site tests can be achieved compared to conventional systems when the T2000 is combined with the IPS and GPWGD module. This makes it possible for the solution to support everything from characterization to mass production for PMICs associated with Hi-Res Audio. Future test efforts will take on the challenge of solutions for 32-bit DACs, which require an even higher dynamic range.

* dBc = decibels relative to the fundamental carrier power level; standard measurement for total harmonic distortion (THD)


New Solution Available for System-Level Testing of Advanced, High-Speed Semiconductor Memories for Mobile Applications

Advantest unveiled its new T5851 STM16G memory tester for evaluating high-speed protocol NAND flash memories, including UFS 3.0 universal flash storage and PCIe Gen 4 NVMe solid-state drives (SSDs), both of which are expected to be in high demand for the 5G communications market.

The mobile and automotive communication markets are booming.  It is estimated that soon almost all NAND memory in smartphones will use high-speed serial protocol interface controllers such as PCIe and UFS, the majority of which will require system-level testing (SLT).  In addition, shipments of UFS memories are expected to nearly triple in the next three years and surpass embedded multi-media cards (eMMC), the current market leader.  By 2021, more than 800 million total eMMC, UFS and NAND smartphone units will be shipped, according to the market research firm IHS Markit.

The new T5851 STM16G system’s multi-protocol architecture makes it suitable for testing all SSDs with ball grid arrays (BGA) or land grid arrays (LGA) in both engineering and high-volume production environments.  Using one common platform reduces deployment risks while the system’s modular upgradability enhances users’ return on investment.

The universal, extendable platform has the versatility to test multi-protocol NAND devices with speeds up to 16 Gbps. The system’s tester-per-DUT architecture supports the test flows required for fast SLT of up to 768 devices in parallel.

The configuration and performance of the T5851 STM16G can be optimized for any generation of devices. Advantest’s FutureSuite™ software ensures that the new tester can be easily integrated with all other members of the T5800 product family.

As additional benefits, the new memory tester can be combined with Advantest’s M6242 automated component handler to create a turnkey test cell; it also features a liquid-cooling system for reliable thermal management and delivers superior reliability.

Shipments to customers are scheduled to begin in the second quarter of calendar 2019.


In Vivo Skin Imaging Technology Developed to Aid in Early Diagnosis

Advantest has developed a non-invasive method to achieve real-time 3D imaging of the vascular network and blood condition (oxygen saturation) of the living body, using a photoacoustic method to detect ultrasonic waves generated by laser irradiation. This method may be used for early diagnosis and monitoring of physical functions related to beauty and health.

As part of the “Innovative Visualization Technology to Lead to Creation of a New Growth Industry” project – operated by Advantest’s Takayuki Yagi under the auspices of the Impulsing Paradigm Change through Disruptive Technologies Program (ImPACT) of Japan’s Council for Science, Technology and Innovation – an R&D group led by Professor Yoshifumi Saijo of Tohoku University and Noriyuki Masuda of Advantest has succeeded in developing in vivo skin imaging technology that can simultaneously generate dual-wavelength photoacoustic images and ultrasound images.

Photoacoustic imaging is a method of imaging the interior of a living body by irradiating light into the body and measuring ultrasonic waves generated from blood or tissues that selectively absorb light energy. It is attracting interest as a new noninvasive imaging method suitable for measuring small blood vessels in the skin, which is difficult with conventional imaging techniques.

However, when using only photoacoustic imaging, even if microvessels in the skin measuring several tens of microns or less in diameter are imaged, it is impossible to ascertain which layer of the skin they lie in. In addition, it is possible to photoacoustically measure the oxygen saturation level of blood vessels by using light sources of multiple wavelengths, but the movement of living bodies affects measurement results, so the use of this method has hitherto been limited to research applications such as animal experiments.

The newly developed in vivo imaging technology utilizes a focused ultrasonic sensor that can detect multiple ultrasonic signals. Thus, photoacoustic waves and ultrasonic waves can be measured with the same sensor while the laser alternates between two wavelengths, allowing the detection of ultrasonic waves that image the microvascular network in the dermis as well as blood oxygen saturation (Fig. 1). A 6 mm square area to a depth of 2 mm can be imaged in about 4 minutes. Also, using the acquired data, mapping of oxygen saturation and superposition of photoacoustic and ultrasound images are possible.
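
For intuition, here is a hedged sketch of how two-wavelength measurements yield oxygen saturation: the absorption at each wavelength is modeled as a linear mix of oxy- and deoxyhemoglobin, so the saturation falls out of a 2x2 linear solve. The coefficient values below are illustrative placeholders, not the calibrated values such an instrument would use.

```python
# Two-wavelength spectral unmixing sketch: solve for relative HbO2 and Hb
# concentrations from measured absorption, then compute oxygen saturation.
import numpy as np

# Rows: wavelength 1, wavelength 2. Columns: [HbO2, Hb] extinction coefficients.
E = np.array([[0.5, 1.5],    # illustrative: Hb dominates at wavelength 1
              [1.2, 0.8]])   # illustrative: HbO2 dominates at wavelength 2

mu_a = np.array([1.1, 1.0])  # measured absorption at the two wavelengths

c_hbo2, c_hb = np.linalg.solve(E, mu_a)  # relative concentrations
so2 = c_hbo2 / (c_hbo2 + c_hb)
print(f"estimated blood oxygen saturation: {so2:.0%}")
```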

Biopsy studies have proved that signs of skin aging such as spots and wrinkles are related to microvessels in the skin. The newly developed photoacoustic imaging method is expected to be used for monitoring of photoaging of the skin as well as other applications.

Figure 1: Example of forearm skin imaging.

Blue indicates lower oxygen saturation of blood vessels, red higher.


The Duality of Machine Learning

By Judy Davies, Vice President of Global Marketing Communications, Advantest America

The term “binary,” with which we in the semiconductor industry are quite familiar, refers to more than the 1s and 0s found in binary code. It implies a balance, a duality that is present throughout the industry. This duality is found in our human makeup, as well. We use both intellect and feeling in living our lives, as we identify challenges and determine solutions.

If artificial intelligence and machine-learning systems are to truly think as humans do, it would seem that moving beyond purely digital computations will be essential. This means finding a way to teach machines to combine left-brained (analytical, data-based) with right-brained (intuitive, perception-based) thinking – i.e., the true duality of the human brain.

The work of John von Neumann has come to represent the left-brained approach. Beginning in the 1920s, von Neumann applied his genius in mathematics across a wide spectrum of projects. These included working on the Manhattan Project to construct the first atomic bomb; creating the landmark von Neumann architecture for digital computers that store both programs and data; and developing the field of game theory, which many high-stakes poker players use today to deduce future outcomes and win tens of millions of dollars.

The right-brained approach can also be described as emotional intellect. It represents more analog or interpretive thinking that takes into account human feelings and attempts to inform actions that are difficult to quantify. As an example, whereas von Neumann’s game theory is used to arrive at decisions through logical reasoning, poker players also gather information about their opponents by reading their body language and demeanor at the table. This is the right brain at work.

Neuromorphic computing involves making machines that more closely replicate the way the human brain works. Rather than being limited to solely digital processing, neuromorphic chips assimilate analog information, which is then interpreted for shades of meaning. This forges a path to creating neural networks that are aligned with how we think.

Already present in our lives is what can be viewed as a precursor to neuromorphic computing. When we visit an online retailer’s site, our interest in the products viewed and/or purchased is catalogued, grouped with the interests of other buyers, compared with those buyers’ previous purchases, and used to pitch us on buying other products that people within that demographic have bought. Pop-up ads, emails and texts claiming “You may also be interested in …” demonstrate how computing power is being applied to get into consumers’ heads and not just understand but influence their spending patterns.

Similarly, machine learning can be applied when it comes to guiding consumers’ future actions. Databases are being used both to predict our needs and to stock local inventories accordingly, ensuring that our local store or distributor will know as soon as we exhaust our supply of a particular item and will be able to offer same-day delivery of a replacement.

Factoring in product reviews from other members of our demographic group would allow retailers to draw high-probability conclusions about both our level of satisfaction with products we’re currently using and the likelihood that we may be willing to switch to a similar product from a different supplier. This educated guesswork will be based on “reading” our emotional decision-making processes. With this ability to predict future behavior, poker-playing computers are assured continued dominance.

The state of the art in neuromorphic computing does not yet involve precisely predicting all of our next moves. The world of the Steven Spielberg movie “Minority Report” – in which savant-like “pre-cogs” can predict future crimes before they occur, enabling law enforcement to arrest criminals-to-be in advance – does not yet exist. But it’s intriguing to consider, and to wonder if we may actually get there at some point.

Would bringing the duality of digital processing and emotional intellect to fruition be highly beneficial, enabling digital assistants like Alexa and Siri to more accurately anticipate our desires? Or would it bring us a step closer to having our lives actually be run by the machines in our lives? One thing seems sure: If and when full-blown neuromorphic computing becomes a reality, it will definitely be put to use.
