As avionics systems grow in complexity, and with the increased use of field-programmable gate arrays (FPGAs), hardware verification is a major concern within the aerospace sector. Indeed, in addition to recognizing the use of commonly used electronic design automation (EDA) tools – for design entry, synthesis, place and route, and static timing analysis – Federal Aviation Administration (FAA) officials increasingly appreciate how EDA tools can improve verification, traceability, and project management. Thankfully, engineers working on FPGA-based designs for avionics applications can draw on certain methodologies used in the broader semiconductor industry to verify application-specific integrated circuit (ASIC) designs.
Although they have different business models, in which volume is a key differentiator, an avionics company designing for an FPGA and a semiconductor company designing an ASIC can use many of the same hardware verification techniques – with the proviso that the avionics company must also pass stringent certification processes, as formalized by RTCA/DO-254 (“Design Assurance Guidance for Airborne Electronic Hardware”).
The DO-254 standard dictates that hardware design and hardware verification should be independent. In terms of the design lifecycle, this means designers work to meet defined requirements and verification engineers seek to prove that the design meets the requirements (i.e., requirements-based verification or RBV).
Modern, and especially automated, verification techniques may be used for the verification of avionics designs but may not be suitable for certification purposes. For instance, transaction-based verification (TBV), which is used extensively within the broader semiconductor industry, operates with high-level requirements but may not be suitable for the verification of low-level requirements, such as the timing of individual signals. However, that is not to say TBV does not have a role to play in avionics design, now and increasingly so in the future. The growing complexity of avionics systems will drive RBV toward TBV, most likely through the requirements being expressed in ways more conducive to TBV.
Why simulation alone is not enough
Simulators play a huge role in verification. Whilst very useful, they are, to a degree, rather restricted tools in that they can only provide confidence in the design at certain points within the design flow – hence the need to perform different simulations (using different simulators).
For instance, an HDL simulator verifies that the design (as coded in VHDL, for example) behaves as intended. It does this by exercising the synthesizable RTL subset of the code (i.e. the intended design at a Register Transfer Level) using an HDL testbench. The simulator is fully deterministic, in that it will always produce the same results for a given design and testbench. However, real hardware is not as deterministic. Multiple clock domains, for instance, introduce uncertainty. Also, the RTL simulation is performed with ideal clocks, and timing inconsistencies and effects like metastability are not modeled – nor are clock phase or frequency drifts.
Timing simulation is much more accurate but seldom practical because of the disproportionate amount of time it takes to simulate even a few seconds’ worth of FPGA operation. Fortunately, RTL simulation can be enhanced to introduce the modeling of uncertainty in simulated clock domain crossing paths.
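The effect of that modeled uncertainty can be illustrated with a minimal Python sketch (a conceptual model only, not HDL): each value crossing into a new clock domain may be captured one or two destination-clock cycles later, so two otherwise identical runs agree on the data but disagree on the cycles at which it appears.

```python
import random

def cross_domain(events, seed):
    """Toy model of a clock-domain crossing: each event arriving from the
    source domain is delayed by one or two destination-clock cycles, chosen
    at random to mimic the capture uncertainty of a two-flop synchronizer.
    `events` is a list of (cycle, value) pairs. Illustrative only -- real
    CDC behavior depends on the actual clock relationships."""
    rng = random.Random(seed)
    return [(cycle + rng.choice([1, 2]), value) for cycle, value in events]

events = [(10, 0xA), (20, 0xB), (30, 0xC)]
run1 = cross_domain(events, seed=1)
run2 = cross_domain(events, seed=2)

# The values are identical across runs, but the capture cycles may differ:
print(run1)
print(run2)
```

A testbench that expects each value on one exact cycle would flag one of these runs as a failure even though the data is correct – which is precisely the brittleness discussed below.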
It is worth noting that in the broader semiconductor industry, some ASIC verification teams claim that HDL simulators are not used at all within their verification environments. Also, for very large designs, extensive RTL simulation may not be realistic because of the time it takes. Instead, emulators are used to speed up the verification process.
Emulators use multiple FPGAs to implement the design. By putting hardware in the loop, confidence is increased; but not necessarily to 100 percent, as emulated designs may or may not be deterministic and may or may not be able to model real metastability issues and non-ideal clocking. It all depends on how well the emulated design is mapped onto the emulator’s FPGAs and how the clocks are generated.
It is possible to observe the behavior of real hardware (i.e., the FPGAs) while running an emulation, provided the ASIC testbenches are prepared for non-deterministic behavior. Also, they must not contain any assumptions, such as responses from tests always appearing on a given interface at a given time. This is because, in real hardware, the responses may appear earlier or later – and possibly even be reordered – due to the non-deterministic behavior of the emulated design.
It is quite easy to implement such assumptions in “directed” and timed testbenches, which are run only in simulation, whereas semiconductor companies use untimed “transactional” testbenches (see Figure 1), in which time-related assumptions are difficult to implement. Interface protocols required for communication with tested designs are encapsulated in transactors. Only the transactors may contain timed HDL code, as usually required by interface protocols.
Figure 1: Directed and transactional testbenches
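The contrast between the two checking styles can be sketched in Python (a conceptual illustration only; real directed testbenches are HDL and real transactional testbenches sit behind transactors). The directed check bakes in a timing assumption; the untimed check compares content and ignores when it arrived.

```python
def directed_check(trace, expected_value, expected_cycle):
    """Directed, timed check: passes only if the response appears on exactly
    the cycle the testbench assumed -- fine in a deterministic simulation,
    brittle against real hardware, where latency may shift."""
    return (expected_cycle, expected_value) in trace

def transactional_check(trace, expected_values):
    """Untimed, transactional check: compares the sequence of observed
    values and ignores the cycles on which they occurred."""
    return [v for _, v in trace] == expected_values

# Simulation run: response at cycle 12; hardware run: same response, cycle 14.
sim_trace = [(12, 0xCAFE)]
hw_trace  = [(14, 0xCAFE)]

print(directed_check(sim_trace, 0xCAFE, 12))    # True
print(directed_check(hw_trace, 0xCAFE, 12))     # False -- timing shifted
print(transactional_check(hw_trace, [0xCAFE]))  # True -- content matches
```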
In-hardware verification benefits
Understandably, verification with hardware in the loop brings more reality to the verification process. However, for an FPGA-based avionics application, it would be overkill to map an FPGA design to a multi-FPGA-based emulator.
It is much easier to take the target FPGA and connect its interface to a verification environment. Many avionics companies, for example, use Aldec’s DO-254 Compliance Test System (CTS) (see Figure 2) not only to verify their designs, but also to obtain the required certification credits according to the RTCA/DO-254 specification.
Figure 2: Aldec’s DO-254/CTS platform – used for ‘at-speed, in-hardware’ verification
On the CTS, the target design runs at-speed in the target device (which is mounted on a custom daughter board). Test vectors derived from the simulation testbench support RBV testing, with the 100 percent FPGA pin-level controllability and visibility necessary to implement normal-range and abnormal-range tests.
Here the question arises: Can traditional directed testbenches, popular within the avionics industry, be applied to real hardware? Yes. In the case of Aldec’s DO-254/CTS, for example, it automatically applies the test vectors used for simulation to the real hardware. As mentioned, because real hardware is not so deterministic, mismatches may occur between simulation results and the real hardware tests. Such differences – for example, the metastability of signals crossing clock domains or non-ideal clocking – can be investigated using a graphical waveform viewer tool.
DO-254/CTS™ Mother Board
The verification engineer must decide, during the investigation, whether any differences stem from an oversimplified functional simulation or whether a real hardware problem is being flagged. To aid in the decision-making process, the comparison tool can be configured to accept differences caused by non-deterministic behavior of the real hardware.
A variety of configuration options are required, ranging from simple tolerance or offset settings to the detection and matching of whole transactions on selected interfaces. Once the whole system is configured correctly, all the requirements covered by the simulation testbenches can be quickly verified with the real hardware. What’s more, the process is automated and repeatable.
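As a hypothetical illustration of the simplest of those options, a comparison accepting a constant offset plus a per-event tolerance (in clock cycles) might look like the Python sketch below. The function name and parameters are assumptions for illustration; the actual CTS configuration options are richer than this.

```python
def traces_match(sim_trace, hw_trace, offset=0, tolerance=0):
    """Compare a simulated trace against a hardware trace, each a list of
    (cycle, value) pairs. The hardware event may lag the simulated one by
    `offset` cycles, give or take `tolerance` cycles. Hypothetical sketch of
    the kind of configurable comparison described above."""
    if len(sim_trace) != len(hw_trace):
        return False
    return all(
        sv == hv and abs((hc - sc) - offset) <= tolerance
        for (sc, sv), (hc, hv) in zip(sim_trace, hw_trace)
    )

sim = [(10, 1), (20, 0), (30, 1)]
hw  = [(13, 1), (24, 0), (33, 1)]  # shifted by ~3 cycles, with jitter

print(traces_match(sim, hw))                          # False: strict compare
print(traces_match(sim, hw, offset=3, tolerance=1))   # True: accepted shift
```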
Accordingly, verification engineers can be spared months of work with application boards trying to cover the requirements in physical tests. Some physical tests must still be performed with an application board, because the FPGA will need to interact with other board components. Having confidence in the integrity of what’s inside the FPGA is a great boost, though.
DO-254/CTS™ Daughter Board
Let’s switch back to the ASIC world for a moment. ASIC verification engineers can run testbenches on an emulator that behaves like real hardware. The testbenches are designed to work with non-deterministic hardware (i.e. the emulator’s FPGAs). The correctness of the results is checked at an abstract level using transactions, and the uncertainty introduced by signals crossing clock domains (or other timing effects) leads to changes in the positions of the transactions with respect to time.
This is not a problem for untimed transactional testbenches (see Figure 3), which can automatically deal with varying arrival times and orderings of transactions. No manual review is required to check the correctness of results; it can be done automatically.
Figure 3: A transactional testbench
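The out-of-order matching that such a testbench relies on reduces, at its simplest, to a scoreboard check: every expected transaction must be observed exactly once, while arrival order and timing are ignored. A minimal Python sketch (the transaction strings are invented for illustration):

```python
from collections import Counter

def scoreboard_ok(expected, observed):
    """Out-of-order scoreboard check: pass if every expected transaction is
    observed exactly once, regardless of the order in which the (possibly
    non-deterministic) hardware delivered them."""
    return Counter(expected) == Counter(observed)

expected = ["WRITE addr=0x10 data=0xAA",
            "READ addr=0x10",
            "WRITE addr=0x14 data=0xBB"]
observed = ["WRITE addr=0x14 data=0xBB",   # reordered by the hardware
            "WRITE addr=0x10 data=0xAA",
            "READ addr=0x10"]

print(scoreboard_ok(expected, observed))   # True despite the reordering
```

Real UVM-style scoreboards add per-interface ordering rules and richer transaction objects, but the principle is the same: correctness is judged at the transaction level, not the cycle level.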
However, traditional directed testbenches can also be interactive, and can react correctly to lags or the reordering of operations on the interfaces of the tested design, so shouldn’t they work just as well as transactional testbenches on real hardware? Unfortunately, no. Even if a traditional testbench is interactive, it cannot communicate directly with real hardware because of speed constraints: HDL simulators are simply not fast enough. If the simulated testbench is too slow to provide test vectors to the hardware in real time, the test vectors must be collected in a file and applied later to the hardware at real speed.
Thankfully, the TBV methodology is more flexible. A TBV testbench communicates with the design over transactors (Figure 1), which are implemented in the emulator along with, and working at the same speed as, the tested design.
The transactors can also act as a speed bridge between the testbench and the design under test. If the testbench is too slow, the transactors usually have the ability to keep the tested design in a wait state. Moreover, transactional testbenches are much faster because they operate at a higher abstraction level, communicating with transactors using relatively short messages. Additionally, the speed of the verification system can be increased by dumping the messages exchanged with transactors to a file and applying them later to hardware without unnecessary delays.
The TBV methodology is traditionally associated with the SystemC and SystemVerilog languages and with libraries such as TLM, SCV, VMM, OVM, and UVM. The reason for the association is probably that TBV is frequently used with the constrained random generation approach, and the libraries mentioned contain useful elements for implementing TBV testbenches with constrained random generation.
SystemVerilog testbenches with UVM libraries are already used in the avionics community; however, the TBV methodology need not be restricted to any specific language or library.
Transactional testbenches can be implemented in any language. That said, constrained random generation is much more difficult to implement in an HDL that lacks built-in constructs for constraining the randomization of data structures.
A VHDL testbench can also be transactional, even if constrained random generation is not used. It can be written at a higher level of abstraction and communicate with the tested design over transactors. (Figure 4 shows how Aldec’s DO-254/CTS can accommodate this.)
Figure 4: Aldec’s DO-254/CTS working with transactors.
Such a testbench architecture is enough to benefit from fast and flexible verification with hardware in the loop; a technique already in use within the aerospace industry.
Just to make a personal observation: having verified more than 50 FPGA-based designs for avionics applications using Aldec’s DO-254/CTS, I have noticed that high-speed interfaces like ARINC 818 (or others based on LVDS signaling) are always verified using the transaction-level methodology, because high-speed interface operations cannot practically be analyzed at the bit level. They must be decoded and provided for analysis at a more abstract level. The traditional bit-level approach is used for low-speed interfaces.
In conclusion, TBV is being adopted by the designers of avionics equipment. It is being correlated with RBV for certification purposes and, although TBV is used mostly for high-speed interfaces, avionics testbenches will probably evolve into fully transactional testbenches in the near future, particularly in light of the growing popularity of SystemVerilog and the UVM library in testbenches, and of SoC FPGAs in avionics projects.
Also, as mentioned at the beginning of this article, FPGA designs seeking DO-254 compliance face considerable challenges, with a strict requirements-based design and verification process that must be followed to ensure that the product functions as intended. Traceability is therefore essential, and in this respect Aldec’s Spec-TRACER, very much part of the company’s DO-254 solutions portfolio, is used by many avionics companies to support their RBV.
About the author: Slawek Grabowski, engineering manager
Slawek has over 17 years of experience in the EDA industry in the areas of HDL simulation, FPGA design and verification, and ASIC emulation, and is a graduate of AGH University of Science and Technology in Krakow, Poland. He is one of the main architects of the emulation system deployed by leading ASIC manufacturers in the smartphone industry and has patented technologies for mapping ASIC design clocking to FPGAs. Since 2008, Slawek has worked on several projects with leading suppliers of avionics systems regarding FPGA chip verification for DO-254 compliance.