EAST GREENWICH, R.I., 27 Aug. 2012. The founder and chief operating officer (COO) of Vector Software Inc., a provider of dynamic automated test tools for embedded software applications headquartered in East Greenwich, R.I., objects to a recent article and clarifies the difference between "testing" and static analysis, as well as the value of finding errors and defects in software code.
Read the original article, "Coverity tests 2 million lines of mission-critical flight software for defects on Curiosity Mars Rover," here: http://www.avionics-intelligence.com/articles/2012/08/coverity-test.html
The Letter to the Editor reads as follows:
I didn't see a place under the article to comment, but I think the word "test" is very misleading. Static analysis tools do not really test software. They analyze software to check for coding errors and for adherence to coding standards such as MISRA. They also help with code reviews and with detecting buffer overflows and memory leaks. Tools that test software are considered "dynamic test tools." They actually stimulate the code with input data, usually based on requirements, and check the associated expected result. Static analysis tools do not do this.
Static analysis tools, while very valuable, are really only a small part of the overall verification and validation effort that should be performed to ensure that safety-critical code is properly tested.
The next question to ask is: Did the contractor do full unit testing and achieve 100% code coverage on the target platform before declaring these 2 million lines of code tested?
Another question is: Would a static analysis tool have caught the Mars Climate Orbiter unit-of-measure conversion problem (one team used English units, the other metric units) back in 1999? That problem might easily have been uncovered during dynamic unit and integration testing.
Founder and COO
Vector Software Inc.