
The thread Writing synthesizable testbenches partially answers my question, but I'm still not fully satisfied.

I come from a hardware engineering background. Out of curiosity, I was discussing with a good friend of mine, who specializes in software engineering, how verification is done for ASICs, and he asked me:

How do you ensure the quality of the testbenches used in the chip verification process?

From my perspective, there are two things to consider: first, whether the testbenches themselves contain errors; second, whether the testbenches are doing what we want them to do. In short, we need to guarantee both the code correctness and the functional correctness of the testbenches.

The first item can be addressed at the compilation stage: if there are errors in the code, the compiler informs us about them.

But what about the second item?

For example, if I am using the UVM framework, how do I guarantee that my testbenches are well written and correctly designed for their testing purpose?


2 Answers

You can use coverage to measure the quality of the testbench.

Many simulators have the capability to measure the coverage of your Verilog RTL design code with metrics for Block, Expression, Toggle and FSM state/transition. They also measure coverage for SystemVerilog assertions. These metrics can also be measured for the testbench code itself, but that is less common. This type of coverage comes for "free" with your simulator in the sense that you do not need to add any Verilog code; you simply need to enable coverage collection and reporting in the simulator. Coverage reports can be generated after running simulations.
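
For instance, a concurrent SystemVerilog assertion bound into the design contributes attempt, pass, and fail counts to the simulator's assertion-coverage report. Below is a minimal sketch, assuming a hypothetical req/gnt handshake; the signal names and the timing window are illustrative, not from the question.

    // Hypothetical handshake checker: the req/gnt names and the
    // 1-to-4-cycle window are illustrative assumptions.
    module handshake_checker (input logic clk, req, gnt);
      // After a request, a grant must arrive within 1 to 4 clock cycles.
      property p_req_gnt;
        @(posedge clk) req |-> ##[1:4] gnt;
      endproperty

      // Both statements below appear in the simulator's
      // assertion-coverage metrics once coverage is enabled.
      a_req_gnt : assert property (p_req_gnt)
        else $error("gnt did not follow req within 4 cycles");
      c_req_gnt : cover property (p_req_gnt);
    endmodule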

You can also add covergroups to the testbench code to measure functional coverage. Refer to IEEE Std 1800-2023, Section 19 (Functional coverage). You need to create covergroups and coverpoints and add code to sample the coverage when needed. Coverage reports can be generated after running simulations. You can also report intermediate coverage results during a simulation.
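
Here is a minimal sketch of a testbench covergroup, assuming a hypothetical DUT interface with an opcode and a done flag; the signal names and bin choices are illustrative assumptions.

    // Hypothetical testbench fragment: opcode/done and the bin
    // choices are illustrative, not from the question.
    module tb;
      logic       clk = 0;
      logic [3:0] opcode;
      logic       done;

      covergroup cg @(posedge clk);   // sampled automatically each clock
        option.per_instance = 1;
        cp_opcode : coverpoint opcode {
          bins low[] = {[0:7]};       // one bin per low opcode
          bins high  = {[8:15]};      // single bin for all high opcodes
        }
        cp_done : coverpoint done;
        x_op_done : cross cp_opcode, cp_done;
      endgroup

      cg cg_inst = new();

      always #5 clk = ~clk;

      initial begin
        repeat (200) begin
          @(negedge clk);
          opcode <= $urandom_range(0, 15);
          done   <= $urandom_range(0, 1);
        end
        // Coverage can be queried during simulation, not only
        // from post-simulation reports.
        $display("Functional coverage: %0.2f%%", cg_inst.get_inst_coverage());
        $finish;
      end
    endmodule

The final $display illustrates the last point above: intermediate results can be queried mid-simulation with the built-in covergroup methods such as get_inst_coverage().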

  • Yes, that makes sense, but a follow-up question: I would only be able to view coverage reports after testing with my DUT, so I would have no knowledge about the quality of my testbench before that happens? Commented Apr 30 at 13:17
  • @LannanJiang: I updated my answer to explicitly mention when coverage results can be made available.
    – toolic
    Commented Apr 30 at 13:28

Your question is recursive. How does one define "correct behavior" of a testbench?

Processes defined in ISO 26262 and similar standards attempt to address this by making sure design requirements are tied to specific test plan steps, which are in turn tied to coverage metrics. I'm using coverage here in the broad sense, of which functional coverage and code coverage are both subcomponents of the overall metric.
