Design News, April 2013

Issue link: http://dc.ee.ubm-us.com/i/118407

My Opinion On .... Testing Without ATS

Richard Nass, Brand Director, rich.nass@ubm.com

Automated test software, or ATS, is nothing new. But as systems grow in complexity, the need for automated testing, and likewise for automated test software, grows correspondingly. The data points of current systems are becoming so numerous that it's not feasible to deploy anything other than automated testing. A recent report published by National Instruments (NI) states that the Large Hadron Collider at CERN can generate up to 40 Tbytes of data per second while running an experiment (yes, 40 Tbytes/s). A Boeing jet engine can generate 10 Tbytes of data every 30 minutes. Hence, a trans-Atlantic (eight-hour) flight in a four-engine plane can generate 640 Tbytes of information.

This growing mass of data has forced the development of highly customizable measurement systems, with software-centric test solutions arguably being the only viable approach for testing such systems. It's no surprise that there's a correlation between increased system complexity and the need for a greater focus on test software quality. As the folks at NI will tell you, "Software engineering best practices ensure that test systems meet increasingly demanding feature and performance requirements."

One of the keys to automated test software is that it can execute tasks, especially repetitive tasks, very quickly. The downside is that if you have set up the wrong test, or are testing for the wrong variable, characteristic, or criterion, you'll get a ton of data, all of it useless. And that doesn't account for any potential bugs in the test software itself. So care must be taken, because you often don't get a second chance to record specific instances.
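As a sanity check (not part of the original NI report), the 640-Tbyte flight figure follows directly from the other two numbers cited above. A minimal back-of-the-envelope sketch in Python:

```python
# Back-of-the-envelope check of the data volumes cited in the column.
# Figures from the article: one jet engine generates 10 Tbytes per
# 30 minutes; a trans-Atlantic flight is about eight hours; the plane
# has four engines.

TB_PER_ENGINE_PER_HALF_HOUR = 10
FLIGHT_HOURS = 8
ENGINES = 4

half_hours = FLIGHT_HOURS * 2          # 16 half-hour intervals per flight
total_tb = TB_PER_ENGINE_PER_HALF_HOUR * half_hours * ENGINES

print(total_tb)  # 640, matching the 640 Tbytes cited in the article
```

At roughly 0.15 Tbytes per engine per minute, it is easy to see why recording, let alone analyzing, such a stream by hand is out of the question.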
Another aspect of automated test software is that it won't necessarily reduce the costs associated with your testing process. It can make immense improvements in the process, but it isn't likely to save you any money, at least in the short term. That's a common misconception. The real payoff is the savings you'll reap in the long term.

While doing some research on this subject, I realized that there's a lot of confusion around the difference between "automated test software" and "automated software testing." I'm mostly referring here to the former, which is software used to test both hardware and software systems (although I really mean to discuss just the hardware test). Designing software to test software is a whole other topic. It's an important one, but I'll tackle it at a later date.
