NAME: Arthur Hicken
CO.: Parasoft Corp.
ROLE: Research and develop software solutions that help organizations deliver defect-free software efficiently
Arthur Hicken, software evangelist and self-proclaimed code curmudgeon, discusses the need for test and measurement.
Why are test and measurement tools important?
In some sense, quality gets more important as the computer gets smaller. It's an oversimplification, but if you think about the ramifications of a slightly flaky word processor, they are minimal. It crashes, you restart. Maybe you lose a little bit of editing, perhaps not even that if it has crash recovery and auto-save. No harm, no foul.
Once the software is sitting somewhere that is hard, expensive, or impossible to update, quality takes on a stronger meaning. In the realm of a mission-critical device, a failure, even a small one, can endanger lives or even kill. The cost of updating is secondary to the risk, and quality becomes paramount. You've got to know that what you're deploying is thoroughly tested against your business needs, requirements, and risk assessment.
What is driving the need for advanced, embedded test and measurement?
An interesting thing that's happening is that system complexity is mirroring the enterprise space. Where we once had a few specific embedded systems doing isolated tasks, now everything is interconnected. It's not just that the parts can talk to each other, but that they rely on each other. If one component fails, it can bring the whole system down. Even relatively simple devices like cars can have over 100 SoCs in them with tens of millions of lines of code. A modern complex aircraft depends on all of these pieces working together effectively. Testing has gotten even more difficult because you not only have more devices and more code to test, but all of the interactions need to be fully covered as well.
What advice would you give related to selecting or upgrading test and measurement technology?
From my perspective, the process is what's frequently missing from software. It's not just about having good testing tools available, it's about having a process that makes sure that requirements are fulfilled and code is tested. This means consistent practices, training, coding, and testing.
Business needs and requirements can be codified in the SDLC through tools like a development testing platform where you can set policy and see what development is doing and what the quality is in real time, rather than waiting until you hit QA.
The funny thing is that for years I've been saying that software should be more like manufacturing. Sure, software has a certain degree of art to it, but if you treat it like engineering as much as possible, you get better results. This means proper design and planning, risk assessment, and a controllable, measurable, repeatable process. It's been known for decades that this is the path to quality. Software tends to have too much of a "let's test quality into the product" mentality. That's simply not possible.
An interesting corollary to that is the potential for prevention in static analysis. The current trend in static analysis is flow analysis. While flow analysis is cool because it finds potential defects, the mentality behind chasing bugs ignores the better process of building bullet-proof code. The great benefit that static analysis can provide is preventing bugs in the first place. By this I don't mean "we found it and fixed it," but creating code that can't have the specific bug. Some call this "best practices." So, for example, if I worry about using uninitialized memory, I can create a static analysis rule that requires me to initialize all variables when they are declared. This may be overkill for some, but it ensures that you'll never get any uninitialized memory errors. When it comes to safety- and mission-critical systems, I think the prevention route of defensive programming like this is a much better idea.