
Posted in Q&A

Q&A Interview with Scott West – Expanding SSD Test Capabilities for Extreme Temperatures

Advantest has found great success with its test solutions aimed at the solid-state drive (SSD) market. In this issue of GO SEMI & Beyond, Scott West, Product Marketing Manager for System-Level Test, provides some background on how the company began to address this market, and shares details on its latest SSD offering, the MPT3000ARC, for validation testing to accommodate extreme thermal standards. 

Q: What was the catalyst for developing this latest addition to the MPT3000 family?

A: Let me briefly recap the MPT3000’s evolution. After spending several years on product research and development, we launched the SSD platform in 2014, signaling a branching out for Advantest from chip test to system-level test. While chips are single units, manufactured all at once, with everything controlled by the chipmaker, SSDs are themselves systems. They’re modules that contain a great deal of flash memory, a controller, controller circuitry, protection capacitors, and other components, which adds further complexity from a test standpoint.

Our focus from the start has been to test SSDs through a protocol interface, including the three primary SSD protocols: SATA [Serial ATA], SAS [Serial Attached SCSI] and PCIe [PCI Express]. All products in the platform test all of these protocols, including PCIe Gen 4, the latest version of PCIe, which is about twice the speed of Gen 3. However, with the ARC system (see Figure 1), we’re expanding in a different direction – we’re looking not just at all SSD protocols, but at all test insertions.
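As an aside on the Gen 3-to-Gen 4 comparison, the doubling comes straight from the transfer rates: PCIe Gen 3 runs at 8 GT/s and Gen 4 at 16 GT/s, both with 128b/130b encoding. A minimal sketch of the arithmetic:

```python
# Per-lane PCIe throughput after 128b/130b line encoding, illustrating
# why PCIe Gen 4 is about twice the speed of Gen 3.

def pcie_lane_gbps(transfer_rate_gt: float, encoding: float = 128 / 130) -> float:
    """Usable per-lane throughput in Gbit/s."""
    return transfer_rate_gt * encoding

gen3 = pcie_lane_gbps(8.0)    # PCIe Gen 3: 8 GT/s
gen4 = pcie_lane_gbps(16.0)   # PCIe Gen 4: 16 GT/s
print(f"Gen 3: {gen3:.2f} Gbit/s/lane")            # ~7.88
print(f"Gen 4: {gen4:.2f} Gbit/s/lane")            # ~15.75
print(f"Ratio: {gen4 / gen3:.1f}x")                # 2.0x
```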

Q: How does this differ from the other MPT3000 products?

A: Each product in the platform has a slightly different, and specific, purpose. The MPT3000EV2 (the second generation of the 3000ENV) is a large-chamber system for reliability demonstration testing (RDT), which focuses on validating the drive design. This involves constant hot and cold temperature cycling of several hundred drives over many months. For example, one project we are working on requires more than six months of constant temperature cycling and testing.

The MPT3000HVM is a rack system designed for production test. It tests each individual drive during high-volume manufacturing to make sure it’s good. It requires not months but several hours, under hot conditions only, with a large amount of power pushed through, to validate that each drive works as expected. It can test quickly because of the rack design – you can put in one drive and immediately begin testing while you’re loading more.

The ARC system addresses a couple of key parameters that the other systems in the family don’t with respect to test insertions. The first is extreme temperature range. The standard is referred to as automotive range (-40°C to +105°C), which is where the product name comes from – ARC stands for Automotive Range Chamber. But automotive is only one application; the standard is also used in aerospace, defense and other ruggedized applications that require extreme temperature range.

The second parameter is high-volume production cold-temperature test. The new chamber was designed to accommodate this type of testing in the production environment, for which the EV2 is less well suited because it is optimized for very long test times. In the enterprise, cold temperature – down to as low as -40°C – is often required for production test. The MPT3000ARC can test up to 128 PCIe Gen 4 DUTs in parallel. Because you have to load the entire chamber at once at ambient temperature, it doesn’t make sense to make it too big, as the chamber needs to stay within an ergonomic, production-friendly range.

Q: Do all the MPT3000 products have the same pin electronics?

A: Yes, all the systems are compatible with respect to the software and firmware, and all electronics that go to the DUT come from the same test electronics boards. The systems feature up to 22.5G electronics with tester-per-DUT architecture.

However, the ARC system interface boards, which are designed for the system’s thermal interface, are different from those for the HVM. In the ARC system, the chamber is turned on its side, with up to four primitives inserted vertically instead of horizontally (see Figures 2 and 3). This creates a totally closed system that allows air to circulate from right to left inside the chamber, whereas the HVM rack-based system pulls air from the room, blows it to the back of the system and out into the room. The ARC chamber also features a pocket door designed to accommodate the ergonomic requirements of manual loading and to be automation friendly as well.

Q: What are some other key features of the MPT3000ARC?

A: As I mentioned earlier, the system has the ability to test up to 128 DUTs of 50W each. The primary compressor stage has a water-cooled condenser that transfers the energy generated by the chamber load to the test floor’s water-cooling system. This construction helps regulate chamber temperature to ensure consistency. The system has two programmable power supplies for each PCIe DUT, targeting high-power enterprise SSDs, and performs current and voltage measurements with high accuracy.
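As a rough illustration of the thermal budget involved, a fully loaded chamber dissipates 128 × 50W = 6.4kW, all of which the water-cooled condenser must move into the facility loop. The sketch below estimates the required coolant flow; the 10K coolant temperature rise is an assumed value for illustration, not an Advantest specification.

```python
# Rough thermal budget for a fully loaded chamber: 128 DUTs at 50 W each
# must be removed by the water-cooled condenser. The 10 K coolant rise is
# an assumed illustrative value, not a published specification.

SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K)

dut_count = 128
watts_per_dut = 50.0
chamber_load_w = dut_count * watts_per_dut   # 6400 W of DUT dissipation
delta_t_k = 10.0                             # assumed coolant temperature rise

flow_kg_per_s = chamber_load_w / (SPECIFIC_HEAT_WATER * delta_t_k)
print(f"Chamber load: {chamber_load_w / 1000:.1f} kW")             # 6.4 kW
print(f"Required water flow: {flow_kg_per_s * 60:.1f} kg/min")     # ~9.2 kg/min
```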

We introduced the MPT3000ARC at the Flash Memory Summit in August 2019, where it was very well received, and the system has shipped to the first customer. We look forward to sharing further successes and advancements in the future.

FIGURES

Figure 1. The MPT3000ARC is the latest addition to the MPT3000 family of SSD testers, and tests devices at extreme temperatures.

Figure 2. The chamber for the MPT3000ARC is oriented vertically to allow a fully closed system with right-to-left air flow. The primitives are stacked 2×2, back to front. 

Figure 3. This view inside the ARC chamber shows the positioning of one of the lower primitives in a 64-DUT system.

 


Posted in Top Stories

High-Speed I/O Testing with the Power of Two

By Dave Armstrong, Director of Business Development, Advantest America, Inc.

As the internet backbone speed continues to spiral upward, the interface speeds of the devices making up the cloud continue to scale up as well. As many of these interface, server and artificial intelligence (AI) devices move to both 112Gbps data rates and multi-chip heterogeneous integrations, the industry faces an increased need for at-speed testing in order to confirm truly known-good devices (KGDs). Until now, an elegant, efficient ATE-based solution for conducting these tests hasn’t been available.

In 2018, Advantest and MultiLane, Inc., a leading supplier of high-speed I/O (HSIO) test instruments, began to explore partnering together to provide a single-platform solution leveraging the best capabilities and qualities of both companies. A fast-growing company founded in 2007, MultiLane is based in Lebanon, where CEO and industry veteran Fadi Daou is committed to expanding the tech industry. With more than 200 products and over 500 customers, MultiLane’s product and technology portfolio, as well as its corporate culture, are highly complementary to Advantest’s.

The concept of the joint solution is straightforward: existing MultiLane instruments are ported to a form factor compatible with Advantest’s V93000 test head extension frame, as illustrated in Figure 1. As the figure shows, the combined solution consists of an Advantest V93000 tester and twinning test head extension, to which MultiLane adds power, cooling and a backplane to create the HSIO card cage. MultiLane then takes existing off-the-shelf instruments and re-lays them out for inclusion in the card cage, which sits on top of the V93000 test head. The use of existing instruments is a key aspect because it contributes to lower cost of test while delivering an already proven capability – just in an ATE environment.

Figure 1. The basic components of the Advantest-MultiLane solution combine to create a unique test offering.

Delving further into the specifics, Figure 2 illustrates the build-up of the solution. On the bottom is a family board – one of two DUT boards in the build-up – which the customer can typically purchase once and reuse for a variety of testing needs. This bottom board routes the V93000 signals being used to the pogo-block segments located in the HSIO card cage just above, which are then routed to the twinning DUT board at the top of the stack. Multiple MultiLane backplane cassettes sit just underneath the DUT board and device socket, enabling the shortest possible interconnect lead length via high-performance coaxial cabling. The number of cassettes is expandable to include as many as 32 digital storage oscilloscope (DSO) channels or 32 bit-error-rate tester (BERT) channels.

Figure 2. The photo at left shows the view from the top of the HSIO card cage with the twinning DUT board and MultiLane instruments removed. 

The setup is designed to be highly configurable. High-speed signals are routed from blind-mate wide-bandwidth connectors to the twinning DUT board mounted connectors adjacent to the DUT. These connectors may be either on the top or on the bottom of the twinning DUT board to provide an optimal signal-integrity solution. Putting the connections on the top of the DUT board allows for direct connection to device signals without the need for routing through vias. For probing, the probe is typically installed on top of the DUT board, with the wide-band connections made on the bottom.  Moving the connectors to the bottom allows the probe to be the only thing extending from the top of the DUT board, as required in a wafer-probe environment.

Another configurable aspect of this solution set is how bias-tees and splitters are utilized. While these are very wideband components, they always cause some signal attenuation and distortion. Some users prefer to maximize the signal swing and integrity by not including these circuits in the path. Other users have plenty of amplitude and want the added testability afforded by these components to perform DC tests and/or feed low-frequency scan signals through their HSIO. The flexibility of this approach supports both solutions and allows users to change between them on a part-by-part basis.

Multiple instruments broaden capabilities

MultiLane presently has three pluggable instruments available to coordinate with the V93000 and HSIO card cage. The first can accommodate 58Gbps four-level pulse amplitude modulation (PAM4), while the second doubles that data rate to 112Gbps – the “new normal” data rate. The third is a full, four-channel 50GHz-bandwidth sampling oscilloscope, integrated into the solution at a cost far lower than that of a standalone scope, with the same capabilities.
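Because PAM4 encodes two bits per symbol, the symbol rate of these links is half the bit rate – one reason the modulation is attractive at these speeds. A minimal sketch of that arithmetic, assuming ideal encoding with no overhead:

```python
# PAM4 carries log2(4) = 2 bits per symbol, so the symbol (baud) rate
# is half the bit rate. Illustrative only; line-coding overhead ignored.

def baud_rate_gbd(bit_rate_gbps: float, bits_per_symbol: int = 2) -> float:
    return bit_rate_gbps / bits_per_symbol

for rate in (58.0, 112.0):
    print(f"{rate:g} Gbps PAM4 -> {baud_rate_gbd(rate):g} GBd")
# 58 Gbps PAM4 -> 29 GBd
# 112 Gbps PAM4 -> 56 GBd
```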

Figure 3. MultiLane instruments are packaged in cassettes for insertion into the HSIO card cage.   

To ensure the platform solution meets customers’ needs and complementary roadmaps, the MultiLane software and tools are tightly integrated with the V93000. MultiLane eye diagrams and scope plots can be brought up in standard V93000 SmarTest tools (see samples in Figure 4). The scope can also analyze results in the frequency domain to provide a distortion analysis, as is typically done on a vector network analyzer (VNA).   
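To give a flavor of what such a frequency-domain distortion analysis involves, here is a minimal NumPy sketch that computes the harmonic content of a sampled waveform. The waveform, sample rate and harmonic level are synthetic stand-ins; a real trace would come from the DSO through SmarTest.

```python
# Minimal sketch of a frequency-domain distortion analysis of a sampled
# waveform, similar in spirit to what a DSO (or VNA) reports. The signal
# here is synthetic; a real measurement would come from the instrument.
import numpy as np

fs = 256e9                      # sample rate, Hz (illustrative)
f0 = 10e9                       # fundamental, Hz
n = 4096
t = np.arange(n) / fs

# Fundamental plus a small 3rd harmonic standing in for distortion.
x = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * 3 * f0 * t)

spectrum = np.abs(np.fft.rfft(x * np.hanning(n))) / n
freqs = np.fft.rfftfreq(n, d=1 / fs)

fund_bin = np.argmin(np.abs(freqs - f0))
h3_bin = np.argmin(np.abs(freqs - 3 * f0))
h3_db = 20 * np.log10(spectrum[h3_bin] / spectrum[fund_bin])
print(f"3rd harmonic relative to fundamental: {h3_db:.1f} dB")  # ~ -26 dB
```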


Figure 4a. MultiLane BERT output waveforms shown on the V93000.


Figure 4b. MultiLane DSO measurements shown on the V93000.

Conserving tester resources

A noteworthy capability of the solution is that the entire HSIO card cage and MultiLane instrument assembly can be used on the bench together with the V93000 DUT boards – i.e., they can run independently of the tester. In some cases, it may be possible to add a simple bench power supply and a PC interface to allow some long-running measurements to be made without the V93000.

With the HSIO card cage back on the tester, a local PC can also be used to talk to the MultiLane instruments via the internet. For example, the tester’s SmarTest program can be sequenced to an area of interest and paused, at which point a PC can interact with the MultiLane hardware to interactively explore and analyze the results – much as a scope would have been used in the old days, only without the need for probing the fine-geometry wide-bandwidth interfaces. This unique capability both improves the utilization of the HSIO instruments and allows the user’s offline experience with the device and instruments to be leveraged in the ATE environment, improving efficiency in both locations.

Bringing it all together

Developing leading-edge test solutions in the 112Gbps area requires close collaboration and involvement with experienced high-speed I/O experts. Working together with our mutual customers, Advantest and MultiLane can leverage the strengths of both companies to help ensure success and provide the full benefits of this truly unique ATE-meets-HSIO test-platform solution.

 


Posted in Featured Products

Evaluating a spring probe card solution for 5G WLCSP

By Krzysztof Dabrowiecki, Feinmetall GmbH; Thomas Gneiting, AdMOS GmbH; and Jose Moreira, Advantest

With the deployment of the wireless 5G standard and its support for mmWave frequencies that bring gigabits-per-second data rates to the consumer market, the semiconductor industry needs reliable and low-cost test solutions. The 5G standard encompasses mmWave frequencies from 24GHz to 28GHz, extending to frequencies as high as 44GHz, and possibly even higher. Achieving these frequencies requires reliable, highly efficient, cost-effective chip packaging technology.

From that point of view, wafer-level chip-scale packaging (WLCSP) offers one of the most compact package footprints, providing a high level of functionality and a frequency range with a low-resistance and low-inductance path. Besides having good thermal performance with a finer-pitch interconnection to the printed circuit board (PCB), WLCSP is resilient to extreme variations in stress, drop, and vibration. At the wafer test level, WLCSP technology requires good and consistent contact resistance, a relatively high contact force with short probes, and above all, effective online cleaning together with easy onsite repair [1]. With respect to those electromechanical wafer test requirements, and with added value such as frequency performance higher than 28GHz or high current capability, spring pin probe card technology is always a favorite on the test floor on account of its cost and versatility, and is worthwhile to evaluate for high-frequency 5G mmWave applications [2-4].

To define the best possible probe card structure, detailed electromagnetic simulations and analyses are required. RF engineers have several modeling approaches available for this type of simulation, such as a lumped-element model (SPICE), a distributed-element model, or 3D electromagnetic (EM) models. For this study, it was decided to utilize CST Studio Suite 3D EM simulation software, which allows us to build and analyze an exact and detailed 3D model of the probe card. A probe card acts as an interconnect on the signal transmission path between the wafer chip and automated test equipment (ATE). Therefore, it is vital to keep in mind that, besides the probe card, there are other challenges with respect to the ATE and PCB side.

On the ATE side, mmWave frequencies already present significant implementation challenges, including the measurement instrumentation and interconnect to the ATE device under test (DUT) test fixture PCB.

Figure 1

Figure 1 shows a picture of the bottom of an ATE mmWave test fixture, where it is possible to observe the blind mating connectors to the ATE system and the coaxial cables. They are connected to coaxial connectors, very close to the socket. The use of coaxial cables in the test fixture is essential because a coaxial cable is significantly less lossy than any PCB signal trace. The PCB test fixture challenges, however, are not the main subject of this paper.

The system assembly and modeling (SAM) framework was used to investigate and optimize a signal path. It consists of multiple individual components, such as the wafer bump, probe head, and PCB, which are described by relevant physical quantities such as field magnitudes or s-parameters. This paper explores three objectives: 1) the impact of different materials and probe head designs on mmWave performance; 2) analysis of s-parameters and crosstalk, an important parameter in its own right; and 3) optimization of the probe head design to improve them. The presented analysis results reveal the impact of different probe head structural elements on the s-parameter results.
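As a software illustration of this building-block approach, the sketch below cascades per-component s-parameter models into an end-to-end path using the open-source scikit-rf package. The Touchstone file names are hypothetical placeholders for models exported from the EM simulation; this is a sketch of the concept, not the authors’ actual tool flow.

```python
# Sketch of the SAM-style approach: describe each element of the signal
# path (bump, probe head, PCB trace) by its s-parameters and cascade them
# to get the end-to-end response. Uses scikit-rf; the Touchstone file
# names below are hypothetical placeholders for exported models.
import skrf as rf

bump  = rf.Network("wafer_bump.s2p")   # hypothetical exported model
probe = rf.Network("probe_head.s2p")
trace = rf.Network("pcb_trace.s2p")

# The ** operator cascades two-port networks in scikit-rf.
path = bump ** probe ** trace

print(path)               # summary of the cascaded network
path.plot_s_db(m=1, n=0)  # insertion loss |S21| in dB
path.plot_s_db(m=0, n=0)  # return loss |S11| in dB
```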

Simulation model

Figure 2

Figure 2 shows an example of what mmWave RF peripheral ports (AN1, AN2) might look like on a 5G DUT. The diagonal bump pitch is 0.4mm, with a bump height of 100µm. The distance, in a row, between RF bumps is 0.566mm. Initially, a spring pin was chosen with an uncompressed length L=3.7mm and a working-mode length L1=3.5mm. The PCB thickness was 3.8mm, using a hybrid stack-up of FR4 and Tachyon 100G dielectric materials. The matched trace lengths were designed at 38.8mm. Because of the symmetrical PCB trace layout, the simulation was performed for the critical traces only and one-quarter of the PCB. The RF 3D model analysis includes the wafer solder bump, probe head, and contact with the PCB, in which the traces are included up to the connector locations.

Figure 3

Figure 3 illustrates a quarter of the probe card model and trace topology.

Figure 4

Figure 4 shows a model of a probe head in contact with the wafer at the bottom and the PCB at the top. The probe head is a two-layer structure comprising guiding plates and fillers between the plates. The filler layers are additional materials, with various dielectric constants and loss tangents, added between the guiding plates. The double-plunger spring pins are inserted into drilled holes in the guiding plates and fillers. At working mode, the pin plunger protrusions form a uniform air gap of 0.1mm between the head and the PCB, and 0.25mm between the head and the wafer. The 3D simulation model allows quick verification of results to identify appropriate material properties and geometry before building a test probe card.

Initial simulation results

It is well known that any impedance mismatch in the signal path will have an impact on the return loss and thereby degrade the measurement path performance. Therefore, impedance is a crucial parameter to be checked and controlled. In the PCB industry, the common impedance specification for a single-ended signal is 50 Ohms ±10%. Tolerances of ±5% are possible in certain cases, though at a very high cost.

Figure 5

Figure 5 shows the simulated time-domain reflectometry (TDR) plot for the model with various filler materials, with a rise time of 29.2ps (for 30GHz). The dashed lines indicate the maximum and minimum impedance tolerances. In the figure, it can be seen that the air gap between the guiding plates causes an impedance discontinuity that peaks at 70 Ohms. Material option 1 shows a reduced impedance discontinuity that dips to 41 Ohms. Material B options 2 and 3 significantly reduce the impedance discontinuity to an acceptable range. As a consequence, with the air filler and material option 1, the insertion loss and return loss had a limited frequency bandwidth, as shown in Figure 6. In this case, the dashed lines mark acceptable limits of -1dB for insertion loss and -10dB for return loss.
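To connect the TDR numbers to the -10dB return-loss limit, recall that a single impedance step Z seen from a Z0 = 50 Ohm system reflects with coefficient Γ = (Z − Z0)/(Z + Z0). A quick check of the values quoted above – a simplified, single-discontinuity view, whereas the full model is frequency dependent:

```python
# Return loss of a single impedance step in a 50-Ohm system:
# gamma = (Z - Z0)/(Z + Z0), RL = -20*log10(|gamma|).
import math

def return_loss_db(z: float, z0: float = 50.0) -> float:
    gamma = abs((z - z0) / (z + z0))
    return -20 * math.log10(gamma)

for z in (70.0, 41.0, 55.0):  # air-gap peak, option-1 dip, upper +10% limit
    print(f"Z = {z:4.0f} Ohm -> {return_loss_db(z):.1f} dB return loss")
# Z =   70 Ohm -> 15.6 dB
# Z =   41 Ohm -> 20.1 dB
# Z =   55 Ohm -> 26.4 dB
```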

To read the full article, please visit the Chip Scale Review December 2019 issue, page 14: http://fbs.advantageinc.com/chipscale/nov-dec_2019/52/

 


Posted in Top Stories

Semiconductor Test – Toward a Data-Driven Future

By Keith Schaub, Vice President, Marketing and Business Development, Applied Research and Technology Group, Advantest America

Integrating new and emerging technologies into Advantest’s offerings is vital to ensuring we are on top of future requirements so that we are continually expanding the value we provide to our customers. Industry 4.0 is changing the way we live and work, as well as how we interact with each other and our environment.

This article will look at some key trends driving this new Industry 4.0 era – how they evolved and where they’re headed. Then, we’ll highlight some use cases that could become part of semiconductor test as it drives towards a data-driven future. 

The past

To understand where we’re headed, we need to understand where we’ve been. In the past, we tested to collect data (and we still do today). We’ve accomplished tremendous things – optimizing test-cell automation and gathering and analyzing yield learnings, process drift and statistical information, to name a few. But we ran into limitations.

For instance, we lacked the tools necessary to make full use of the data. Data is often siloed, or disconnected. Moreover, it’s not in a useful format, so you can’t take data from one insertion and use it in another. Not having a way to utilize data across multiple insertions reduces its value. Sometimes, we were simply missing high-value data, or collecting and testing the wrong type of data.

The future

Moving forward, we think the data we collect will drive the way we test. Siloed data systems will start to be connected, so that we can move data quickly and seamlessly from one insertion to another – feeding the data both forward and backward – as we move further into Industry 4.0. This will allow us to tie together all of the different datasets from the test chain: from wafer, from package, from system-level test. All of this data will be very large (terabytes and petabytes), and when we apply artificial intelligence (AI) techniques to it, we’ll gain new insights and new intelligence that will help guide us as to what and where we should be testing.
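A minimal sketch of what “connecting the silos” can mean in practice: joining wafer-sort, package-test and system-level-test results on a shared die identifier so one insertion’s results can feed the next. The column names and thresholds are hypothetical illustrations, not an Advantest schema.

```python
# Joining formerly siloed test data on a common die identifier so that
# upstream results can feed forward to later insertions. All column
# names, values and thresholds are hypothetical illustrations.
import pandas as pd

wafer = pd.DataFrame({"die_id": [1, 2, 3], "leakage_ua": [0.8, 4.1, 0.9]})
package = pd.DataFrame({"die_id": [1, 2, 3], "vmin_v": [0.62, 0.71, 0.60]})
slt = pd.DataFrame({"die_id": [1, 2, 3], "slt_pass": [True, False, True]})

history = wafer.merge(package, on="die_id").merge(slt, on="die_id")
print(history)

# Feed-forward example: flag dice whose upstream data suggests SLT risk.
at_risk = history[(history.leakage_ua > 2.0) | (history.vmin_v > 0.70)]
print(at_risk[["die_id", "slt_pass"]])
```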

We’ll ask new questions we hadn’t thought to ask before, as well as explore long-standing questions. For example, one dilemma we’ve faced for years is how best to optimize the entire test flow, from inception to the end of the test cycle. Should the test be performed earlier? Later? Is the test valuable, or should it come out? Do we need more tests? How much testing do we need to do to achieve the quality metric that we’re shooting for? In the Industry 4.0 era, we’ll start seeing the answers to these questions that resonate throughout the test world.

Data…and more data

Today, thanks to the convergence of data lakes and streams, we have more data available to us than ever before. In the last two years alone, we’ve generated more data than in all prior human history, and this trend will only accelerate. According to some estimates, in the next few years we will be generating 44 exabytes per day – about 5 billion DVDs’ worth of data every day. Stacked up, those DVDs would be higher than 36,700 Washington Monuments, and the data they contain would circle the globe in about a week (see Figure 1).
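As a quick sanity check on those figures (assuming dual-layer 8.5GB DVDs, which is what makes the 5-billion number work out; single-layer 4.7GB discs would give closer to 9 billion):

```python
# Sanity check on the "44 exabytes per day ~ 5 billion DVDs" figure.
# The ~5-billion number works out assuming dual-layer DVDs (8.5 GB);
# single-layer 4.7 GB discs would give closer to 9 billion.

EXABYTE = 1e18
dvd_dual_layer_bytes = 8.5e9
dvd_single_layer_bytes = 4.7e9

daily_bytes = 44 * EXABYTE
print(f"Dual layer:   {daily_bytes / dvd_dual_layer_bytes / 1e9:.1f} billion DVDs/day")   # ~5.2
print(f"Single layer: {daily_bytes / dvd_single_layer_bytes / 1e9:.1f} billion DVDs/day") # ~9.4
```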

Figure 1. The volume of data we generate will soon reach 44 exabytes, or 5 billion DVDs, per day. Since this amount of data could circle the earth in about seven days, an “earth byte” could equate to a week’s worth of data.

These kinds of numbers are so massive that the term “Big Data” doesn’t really suffice. We need a global image to help visualize just how much data we will be generating on a daily basis. Based on these numbers, we could begin using the term “earth byte” to describe how much data is generated per week. Regardless of what we call it, it’s an unprecedented amount of data, and it is the fuel behind Industry 4.0.

Industry 4.0 pillars

Five key pillars are driving and sustaining the Industry 4.0 era (Figure 2):

  • Big Data – as noted above, we are generating an unprecedented and near-infinite amount of data; roughly half of it comes from our cell phones and much of the rest from the IoT
  • IoT – sensor-rich and fully connected, the IoT is generating a wealth of data related to monitoring our environment – temperature, humidity, location, etc.
  • 5G – the 5G global wireless infrastructure will enable near-zero-latency access to all of the data being generated
  • Cloud computing – allows us to easily and efficiently store and access all our earth bytes of data
  • AI – we need AI techniques (machine learning, deep learning) to analyze these large datasets being sent to the cloud in real time, in order to produce high-value, actionable insights

Figure 2. The five key pillars of Industry 4.0 are all interconnected and interdependent.

Because they are all reinforcing and accelerating each other, these Industry 4.0 trends are driving entire industries and new business models, creating an environment and level of activity that’s unprecedented.

Hypothetical use cases

Now that we’ve looked at where the test industry has been and what is driving where we’re headed, let’s examine some theoretical use cases (grounded in reality) that provide a visionary snapshot of ways we may be able to leverage the Industry 4.0 era to heighten and improve the test function and customers’ results. Figure 3 provides a snapshot of these five use cases.

 

Figure 3. Industry 4.0 will enable advancements in many areas of the test business.

1) Understanding customers better – across the supply chain

This use case encompasses various customer-related aspects that Industry 4.0 will enable us to understand and tie together to create new solutions. These include:

    • Customers’ march toward and beyond 5nm and how wafer, package, and system-level testing will work together for them
    • The entire supply chain’s cost challenges, which will help us optimize products and services across the value chain
    • How automotive quality requirements are driving into other business segments – as autonomous vehicles will be connected to everything across 5G, the quality of the connected network and its components will be forced to improve
    • 5G’s advanced technologies, including phased arrays, over-the-air testing, and millimeter-wave, all of which are already mature in the aerospace and military sectors – we will need to be able to leverage those technologies, bring their costs down appropriately, and support them for high-volume testing

2) Decision making – yield prediction
The ability to predict yields will change everything. If you know, based on historical process data, that you’ll experience a yield drop within the next one to two months, you can start additional wafers to offset the drop. This easy fix would cause very little disruption to the supply chain.

If you can solve this problem, however, the next obvious question is, what’s causing it? Why don’t I just fix it before it happens? This involves prescriptive analytics, which will follow predictive analytics. Say you have developed a new generation of a product. You’ve collected yield data at all test insertions for previous generations of the product, which share DNA with the new incarnation. Combining past data with present data creates a model that enables highly accurate predictions about how the wafer will perform as it moves through the supply chain.
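A minimal sketch of the predictive piece, assuming historical lot-level features and final-test yields are available: train a regressor on past generations and score new lots. The features and data here are synthetic placeholders, not real fab data.

```python
# Hedged sketch of yield prediction: fit a regressor on historical
# process/test features and predict final-test yield for new lots.
# The features and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_lots = 500

# Synthetic "process DNA": e.g. mean wafer-sort leakage, parametric drift.
X = rng.normal(size=(n_lots, 4))
y = 0.95 - 0.02 * X[:, 0] + 0.01 * X[:, 2] + rng.normal(0, 0.005, n_lots)

model = GradientBoostingRegressor().fit(X[:400], y[:400])  # "past" lots
pred = model.predict(X[400:])                              # "new" lots
print(f"Predicted yield, first new lot: {pred[0]:.3f}")
```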

3) Creating new customer value – predictive maintenance
This use case is the most likely to come to fruition in the near term. Maintenance contracts require carrying inventory, spare parts and myriad logistics – they represent a huge cost. Soon, by combining tester fleet data with customer data and implementing machine learning, we’ll be able to dramatically improve tester availability, reduce planned maintenance, and decrease losses due to service interruptions. This will allow us to replace modules before they fail.

Predictive maintenance is a proven practice that’s already used in other industries, such as oil and gas. IoT sensor arrays are applied to the huge pipes and pumps controlling the flow of chemicals, measuring stress, flow rates, and other parameters. The data from these sensors predicts when a pump is going to wear out or a pipe needs to be replaced before it fails. We can leverage, redefine and redeploy this implementation for our use case. Soon, a field service engineer could show up with a replacement module before you even know that you need it.
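Sketched below is one way such a predictor could look, assuming simple health features are logged per tester module; the feature names, data and failure model are illustrative assumptions only.

```python
# Sketch of predictive maintenance on tester fleet telemetry: classify
# whether a module will fail within the next service window from simple
# health features. Feature names, data and labels are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 1000

hours_in_service = rng.uniform(0, 20000, n)
temp_excursions = rng.poisson(2, n)
error_retries = rng.poisson(5, n)

X = np.column_stack([hours_in_service, temp_excursions, error_retries])
# Synthetic label: wear plus thermal stress raises failure odds.
p_fail = 1 / (1 + np.exp(-(hours_in_service / 5000 + temp_excursions - 5)))
y = rng.random(n) < p_fail

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
module = [[18000, 4, 9]]  # a heavily used module
print(f"Failure probability: {clf.predict_proba(module)[0][1]:.2f}")
```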

4) Monetization – using data in new ways to drive our business
Data is an asset, and we’ll start to derive new business from sharing access to, or leasing use of, our data assets. One example might be a tester digital twin that resides in the cloud. Imagine that customers’ chip model data could be fed into this digital twin as a kind of virtual insertion, with outputs such as predicted performance and yield. Customer benefits would include optimized programs, recommended tests, and predicted test coverage at each virtual insertion. This would enable them to optimize the entire flow depending on the product life cycle – perhaps the test order could be changed, or a test added, in order to improve quality. Because Advantest owns all the data that comes from our testers, we could lease or sell chipmakers access to that data, creating a significant business opportunity.

5) Automating and improving business operations – driving efficiencies
The test engineering community struggles to find ways to improve process efficiencies. One way to do this is with intelligent assistants. Still in its infancy, this category of AI can best be described as a trained assistant that guides you in a helpful way as you perform a task.

For example, say we are validating a new 5G RF product on our Wave Scale RF card on the V93000 tester. All the pieces are being brought together – load board, tester, socket, chip, test program – and if there are any problems, the whole thing won’t work, or you’ll get partial functionality. An intelligent assistant, or ‘bot,’ trained in the necessary skillset can dynamically monitor the inputs, outputs and engineers’ interactions and provide real-time suggestions or recommendations on how to resolve the issues. At first it won’t be smart, but it will learn quickly from the volume of data and improve its recommendations over time.

As you can see, AI’s potential is vast. It will touch all aspects of our lives, but at its core, AI is really just another tool. Just as the computer age revolutionized our lives in the ’80s and ’90s, AI and Big Data will disrupt every industry we can think of – and some we haven’t yet imagined. Those slow to adopt AI as a tool risk being left behind, while those that embrace AI and learn to fully utilize it for their industries will be the future leaders and visionaries of Industry 5.0, whatever that may be.


Posted in Featured

I Think, Therefore I Am… Machine?

By Judy Davies, Vice President, Global Marketing Communications, Advantest

The ability to think has been a central, defining aspect of humanity since our beginning. Today, technologists are using artificial intelligence to instill that capability into machines. Through statistical models and algorithms, machine learning enables computers to perform specific tasks without receiving explicit instructions from a human. This means that the computer reaches conclusions by accessing available data, identifying patterns and using logical deduction. This does NOT mean AI systems can generate original ideas (at least, not yet). Rather, their intellect stems from their near-instant ability to crunch large volumes of data and then employ their massive memory capacity to compare and search for linkages that yield logical answers.

An emerging area of machine learning is generative adversarial networks (GANs): deep neural network architectures comprising two nets, in which one is pitted against the other in an unsupervised learning environment. For example, one computer might generate a realistic image, and another is then tasked with determining whether or not the image is authentic. By having these two neural nets engage in game-playing to repeatedly fabricate and then detect realistic likenesses, GANs can be used to produce images that a human observer would assess as genuine.
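For readers who want to see the two-net game concretely, here is a minimal, self-contained PyTorch sketch that trains a generator and discriminator adversarially on a toy one-dimensional distribution – an illustration of the GAN idea described above, not a production image GAN.

```python
# Minimal GAN sketch: generator vs. discriminator on a toy 1-D
# distribution. Illustrates the adversarial game, nothing more.
import torch
import torch.nn as nn

def real_data(n):
    """'Real' samples drawn from N(mean=2.0, std=0.5)."""
    return torch.randn(n, 1) * 0.5 + 2.0

# Generator maps 8-D noise to a 1-D sample; discriminator scores realness.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Discriminator: label real samples 1, generated samples 0.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Generator: try to make the discriminator call its fakes real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# The generated distribution should drift toward the "real" mean of ~2.0.
print(G(torch.randn(1000, 8)).mean().item())
```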

It should come as no surprise that training GANs is challenging. To use an analogy easily understood by the human mind: it’s easier to recognize an M.C. Escher drawing than it is to replicate one. Nevertheless, GANs hold extraordinary potential. Working from motion patterns captured on video, they can create 3D models of a wide range of objects, from industrial product designs to online avatars. They can also be used to digitally age a person’s image, showing how he or she may look a decade or more in the future – which may be useful in helping to identify teenagers or adults who went missing as children. Going a step further, GANs can sort through many terabytes of images culled from security monitors and traffic cameras to perform facial recognition. This can help to identify and track the whereabouts of missing kids or wandering Alzheimer’s patients – not to mention wanted criminals.

As with most technology, there is a cautionary aspect to GANs. For example, they could potentially be used to generate artificial images for nefarious purposes, such as creating fake photographs or video clips that unsavory types might use to make innocent people appear guilty for political or financial gain. They may also be used to circumvent the CAPTCHA security feature of wavy letters and numbers that many websites use to deter bots from accessing the sites in the guise of human viewers. How to build in safeguards that prevent these types of illicit deployment of GANs is an important consideration.

GANs can be applied to synthesize or fine-tune everything from voice-activated smart electronics to robotic medical procedures. As the technology is further developed and applied, machine learning and GANs are becoming reality. Self-improving AI is increasingly being used to affect the authenticity of what we perceive and think – it’s a literal (human) brain-teaser.

