Electronic systems are increasingly used in safety-critical applications, not only in “traditional” domains (such as aerospace) that have employed fault tolerance for decades but also in new domains in which embedded systems are common, such as the automotive, biomedical, and telecommunications domains. In most cases, it’s important to detect faults that arise while the system is deployed in the field before they lead to serious failures. So, solutions for performing in-field tests of embedded systems are becoming more common. In most cases, regulations and standards exist (for example, IEC 61508 for generic safety-related systems, ISO 26262 for automotive applications, and DO-254 for avionics). This makes selecting the most suitable solution even more important, not only from a technical viewpoint but also from economic and legal ones.
Computing Now’s February theme provides an overview of the challenges this scenario raises, as well as solutions proposed to overcome them effectively.
The Need for System-Level In-Field Test Approaches
In most new domains involving electronic systems with strict safety-critical constraints, several companies are involved in design and manufacturing, each having its own interests and constraints. For example, electronic control units for the complex distributed electronic systems in today’s cars are integrated by the automakers but produced by different original equipment manufacturers. In turn, the latter build their products from both commercial and custom devices delivered by different semiconductor companies. In this scenario, products intended to guarantee different levels of safety might use the same device, and the device manufacturer might not provide or disclose suitable mechanisms for performing in-field tests (for example, mechanisms based on design for testability). Additionally, the original equipment manufacturer is often unaware of each device’s internal structure, which can make test development a challenge. Moreover, a component can contain intellectual-property cores, making details of its internal structure unknown even to the device manufacturer.
Generally, solutions must reconcile technical and commercial constraints while preserving each product’s intellectual property. Also, because the test often runs during the operational phase (for example, during idle time slots), the adopted solution must match the application’s constraints in terms of duration and resources. Finally, requirements increasingly include security, which typically mandates blocking access to each device’s inner workings as much as possible. Unfortunately, this requirement clashes with testing, which benefits from high controllability and observability of the inner circuitry.
The Industry Perspective
Industry typically addresses these problems with a variety of solutions, sometimes relying heavily on design for testability (for example, built-in self-test) and sometimes resorting to a functional approach. Hybrid solutions are also popular. In this video, Davide Appello of STMicroelectronics talks about the importance of addressing these problems. He mentions the importance of techniques that share resources between end-of-line and in-field tests, as well as of devising suitable fault models for current semiconductor technologies.
In This Issue
The IEEE Computer Society Digital Library offers interesting articles that address different aspects of these challenges. One increasingly common solution in industry is to employ functional tests: forcing the targeted system’s processor to execute a software-based self-test (SBST) program that detects faults in the processor or in other system components. This solution is flexible, reduces hardware costs, and achieves high defect coverage. However, it might prove invasive in terms of the required memory. In “MIHST: A Hardware Technique for Embedded Microprocessor Functional On-Line Self-Test,” Paolo Bernardi and his colleagues offer a solution that minimizes such limitations while reducing test time and increasing defect coverage.
Frequently, embedded systems include reconfigurable components based on field-programmable gate arrays. So, it’s crucial to check these components’ ability to work correctly in the field. In “Test Strategies for Reliable Runtime Reconfigurable Architectures,” Lars Bauer and his colleagues detail such testing. They also discuss the difference between testing a reconfigurable component before and after it has been programmed.
Security has become increasingly important to embedded systems. In “Test Versus Security: Past and Present,” Ingrid Verbauwhede presents a test case that shows how testability and security might become opposing targets. However, she offers possible ways to combine the corresponding constraints.
Several recent papers deal with in-field tests of safety-critical embedded systems. An important issue is the generation of suitable functional test programs to detect permanent faults occurring during the operational phase. These programs address not only the CPU core and its components but also other system components, such as memory and peripherals. In “Software-Based Self Test Methodology for On-Line Testing of L1 Caches in Multithreaded Multicore Architectures,” George Theodorou and his colleagues describe how to write a test program to detect possible faults affecting the L1 caches in a multicore architecture.
Similarly, in “On the Functional Test of Branch Prediction Units,” Ernesto Sanchez and I describe a technique to write a program that tests the branch prediction unit (BPU) found in most processors. This approach requires no detailed information about the BPU’s structure and relies only on information from the processor manual.
Felix Reimann and his colleagues’ “Advanced Diagnosis: SBST and BIST Integration in Automotive E/E Architectures” is particularly interesting. It shows how to integrate both SBST and built-in self-test (BIST) techniques to achieve not only testing but also diagnostic capabilities in automotive devices.
See more at: http://www.computer.org/web/computingnow/archive/in-field-tests-of-safety-critical-electronic-systems-february-2016