The rise of system-on-a-chip technology is setting challenging new parameters for test equipment manufacturers and vendors. But with standardisation yet to emerge, what is the best testing approach? Shekar Gopalan of Frost & Sullivan explores built-in self-test functionality as a possible solution.
The semiconductor industry has become increasingly cyclical, driven over the years by economic conditions and successive shifts in technology. Cycles of market growth and decline have shortened to 18 to 24 months, while technological advances – greater integration complexity, the transition to submicron manufacturing and wider use of systems on a chip (SOC) – have had an evolutionary impact on testing requirements.
Today, design houses are seeking lower cost tests, shorter time-to-market and added functionalities. These demands are driving out conventional testing methods and prompting the development of versatile embedded testers. Meanwhile, the growing significance of mixed-signal design in devices has magnified the complexities that test equipment must address.
The emergence and growth of the fabless business model has also added to the demands placed on test equipment vendors. Automatic Test Equipment (ATE) suppliers have had to diversify their offerings to meet testing needs (previously consolidated by the integrated device manufacturers), in turn forcing test equipment vendors to modify their business models.
From the test industry perspective, the development of highly flexible, single platform test systems is one of the most significant ways in which manufacturers are meeting new cost and high functional testing requirements. Other strategies include attempts by test equipment vendors to promote open standard tester architecture and the potential development of scalable solutions that can evolve according to the needs of chip design houses.
SOC technology has been around for a while. Recently, it has begun to play a bigger role, especially in the digital consumer space. More functions are converging on devices and smaller size solutions are becoming the norm. As a consequence, end-users in various market segments are demanding higher levels of integration from semiconductor companies.
The popularity of smaller size mobile handsets and the shift toward smart phones are strong trends in the consumer electronics market. Also important is the demand for more features, which is driving the elimination of discrete components and the development of integrated systems. Some semiconductor companies have responded by adopting a system-level approach. This means that instead of using discrete components to build electronic circuits, they are integrating functions on a single chip.
Cost sensitivity is another key factor in SOC market growth. It has led to the incorporation of logic, analogue, memory and processor technologies in chip design, thus creating new intellectual properties (IP).
SOC solutions have evolved from block-based to reusable core-based designs. As a result, chip-level testing has become analogous to testing a system on a board: SOC cores and user-defined logic (UDL) must be tested simultaneously, which calls for tools that test cores and UDL at the integrated system level.
PRICING AND COST
Various discrete functionalities or systems are now part of a single SOC. Speed and integration complexity in SOCs continue to evolve apace, pushing devices well beyond the capabilities of most ATE. As a result, the price of chip testing has risen faster than the cost of manufacture, so test accounts for a growing share of total chip cost.
To compound these challenges and the pressure on component manufacturers, the price of end products has been falling steadily. This impacts test vendors directly; they must now come up with solutions that remain profitable. Pricing and cost are undoubtedly vital issues, but they obscure an even more important one: what the next big 'killer application' is likely to be.
The use of deeply embedded cores has limited access to ports for testing. Hence, a test access mechanism is needed that transports test data from a source to the core and on to a sink. Meanwhile, the array of technologies now incorporated on single chips (and their impact on yield) demands specific debug and diagnosis methods. I/O bandwidth is another critical parameter that determines the performance level of SOC solutions, so test sources and sinks must scale with that bandwidth.
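The source-to-core-to-sink transport described above is essentially a test access mechanism, in the spirit of the core wrappers used for embedded core test. The toy model below is purely illustrative – the class names, the wrapper behaviour and the example core are all invented for this sketch; a real access mechanism is on-chip hardware, not software:

```python
# Toy model of a test access mechanism: stimulus flows from a source,
# through a wrapped core, to a sink that compares responses against
# golden values. All names and behaviours here are illustrative.

class WrappedCore:
    """An embedded core behind a test wrapper; only the wrapper is
    externally reachable, mirroring deeply embedded SOC cores."""
    def __init__(self, name, func):
        self.name = name
        self._func = func  # the core's (hidden) logic

    def apply(self, vector):
        # The wrapper shifts the vector into the core and captures the
        # response, so no functional I/O pins are needed for test.
        return self._func(vector)

def run_tam(source_vectors, core, expected):
    """Transport each vector source -> core -> sink; collect mismatches."""
    failures = []
    for vec, exp in zip(source_vectors, expected):
        resp = core.apply(vec)
        if resp != exp:
            failures.append((vec, resp, exp))
    return failures

# A stand-in 8-bit core and its expected (golden) responses.
adder = WrappedCore("adder_core", lambda v: (v + 7) & 0xFF)
vectors = [0, 1, 0x80, 0xFF]
golden = [(v + 7) & 0xFF for v in vectors]
assert run_tam(vectors, adder, golden) == []  # every vector passes
```

The point of the sketch is structural: the source, the transport and the sink are independent of the core's internals, which is what lets one mechanism serve many embedded cores.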
An interesting trend in various market segments, including consumer electronics and automotive, is the demand for reusable cores. Reusability is made possible by plug-and-play operation, and has already had a positive impact on various end-user segments. However, it has put additional pressure on the testing industry, which must now develop methods for testing reusable SOC cores.
Alongside smaller sizes, faster solutions are in high demand for some real-time applications. At-speed testing of SOC solutions is therefore necessary to verify operation at the target clock speed. For some original equipment and systems vendors, flexibility has been the main objective. This has encouraged the evolution of flexible software applications on various hardware platforms, resulting in system-level solutions. SOC solutions have various levels that bind software to hardware cores, a combination known as firmware, so testing is required for register-transfer-level code, behavioural models and layouts.
As technological development in semiconductors gathers pace, new generations of cores emerge, and test tools must support both new and older generations. The existence of proprietary SOC cores is a problem because SOC integrators cannot gain a deeper understanding of the core architecture. As a result, several testing methods are available, none of which suits every type of core, and product quality ultimately suffers. Built-in self-test (BIST) functionality managed within the cores would be an ideal solution.
Despite all the challenges, semiconductor suppliers can still provide complete SOC solutions for markets facing annual price erosion. This trend has left test vendors with no choice other than to find ways of providing optimised, cost-effective test solutions. Yet SOC test tools are expensive in comparison with conventional solutions, mainly because of the complications described.
Test vendors are looking for cost-effective test solutions that complement the existing range. One route that semiconductor manufacturers, systems vendors and test vendors are taking is to share test processes, while testing cores in parallel. Such a strategy produces optimum test solutions that create good profit margins for all types of supplier in the value chain.
With the widespread proliferation of SOC, design for test (DFT) and/or BIST is likely to be the proposition that makes or breaks the market success of a chip. Traditionally, DFT/BIST has been considered a back-end process, but it is now accepted that tests must be facilitated during the design process to ensure the highest fault coverage and shortest production test time. The consensus also seems to be that an optimal DFT/BIST strategy would handle the challenges posed by SOC technology better than conventional ATE and test engineering alone.
The emergence of SOC has seen extensive integration of analogue, mixed-signal and RF blocks, with the average number of IP cores on a single chip reaching between 30 and 60. So, in addition to established DFT approaches such as scan techniques, there is a need for BIST to provide an extremely efficient fault coverage mechanism.
There are different BIST schemes for logic, memory, data converters and phase-locked loops. A typical SOC contains a large number of BIST controllers, each associated with a specific block under test. The BIST circuitry implements vector generation and response analysis on the chip, and runs all tests at speed – a great advantage. The BIST controller can initiate tests using the same access points as the boundary scan. Additionally, DFT/BIST addresses many current SOC testing problems.
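The on-chip vector generation and response analysis that logic BIST implements is typically built from a linear-feedback shift register (LFSR) for pseudo-random stimulus and a signature register that compacts responses. The sketch below is illustrative only – the register width, tap positions and the stand-in circuit are assumptions, not taken from any particular scheme:

```python
# Illustrative sketch of the two building blocks of a logic BIST
# controller: LFSR pattern generation and signature compaction.
# Width, taps and the stand-in circuit are invented for illustration.

def lfsr_patterns(seed, taps, width, count):
    """Generate pseudo-random test vectors from a Fibonacci LFSR."""
    state = seed
    for _ in range(count):
        yield state
        # XOR the tapped bits to form the feedback bit.
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

def compact(responses, width):
    """Fold a response stream into one signature (simplified compactor)."""
    sig = 0
    for r in responses:
        # Rotate the running signature left by one, then mix in the response.
        sig = (((sig << 1) | (sig >> (width - 1))) & ((1 << width) - 1)) ^ r
    return sig

# A stand-in "circuit under test": any deterministic function of the vector.
cut = lambda v: (v * 3 + 1) & 0xFF

vectors = list(lfsr_patterns(seed=0x5A, taps=(7, 5, 4, 3), width=8, count=16))
signature = compact((cut(v) for v in vectors), width=8)
# On silicon, the controller would compare the signature against a
# pre-computed golden value and raise a pass/fail flag accordingly.
```

Because both the generator and the compactor are tiny, deterministic state machines, the whole test can be clocked at functional speed – which is why BIST runs at-speed so naturally.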
Implementing DFT and BIST test structures does have its drawbacks, however. For example, interference with high-speed critical paths could degrade the chip's performance. Numerous issues relating to wafer yield, die size and device packaging also have to be considered.
Adding DFT/BIST structures creates a silicon overhead, which lowers the number of devices per wafer. Bigger dies could mean lower yields. Implementing BIST also requires early planning and considerable development time.
DFT/BIST is the only option available for controlling the ever-increasing cost of ATE. Too little DFT/BIST means high-cost ATE, long test development times and high manufacturing test costs. Too much DFT/BIST means designs that are too big to fit into the target package or too expensive in terms of silicon area (particularly in high-volume consumer applications). So trade-offs must be made early in the device architecture development and detailed design phases to arrive at the right strategy.
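The silicon-overhead side of this trade-off can be made concrete with back-of-the-envelope arithmetic. All the figures below (die size, defect density, wafer diameter, BIST area overhead) are invented for illustration, and a simple Poisson yield model stands in for whatever model a fab would actually use:

```python
import math

# Hypothetical numbers, purely illustrative.
WAFER_DIAMETER_MM = 300.0
DEFECT_DENSITY_PER_MM2 = 0.001   # defects per square millimetre
BASE_DIE_AREA_MM2 = 50.0
BIST_OVERHEAD = 0.05             # 5% extra silicon for DFT/BIST logic

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=WAFER_DIAMETER_MM):
    """Crude gross-die estimate, ignoring edge loss and scribe lanes."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_area_mm2)

def poisson_yield(die_area_mm2, d0=DEFECT_DENSITY_PER_MM2):
    """Classic Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-die_area_mm2 * d0)

for label, area in (("without BIST", BASE_DIE_AREA_MM2),
                    ("with BIST", BASE_DIE_AREA_MM2 * (1 + BIST_OVERHEAD))):
    good = dies_per_wafer(area) * poisson_yield(area)
    print(f"{label}: {dies_per_wafer(area)} gross dies, "
          f"yield {poisson_yield(area):.1%}, ~{good:.0f} good dies")
```

The extra area costs good dies twice over – fewer dies fit on the wafer, and each larger die has a lower yield – and it is this double penalty that must be weighed against cheaper ATE and shorter test times.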
SOLUTION IN SIGHT?
As the global economy recovers, the market shows more signs of growth. Short product life cycles and the absence of clear volume projections for specific products or technologies are the main issues. These translate into a huge demand for highly flexible ATE that can be configured from one technology type to another.
A single, scalable tester architecture is one solution. Dedicated testing methods for specific products still work, but their performance advantage is eroding. Many vendors are developing open-architecture test systems to meet the demand for flexible ATE.