Who's afraid of test automation?

May 25, 2012
What's a manager or engineer to do if he genuinely wants to automate part or all of his test procedures? The answer does not lie in improving the current methods by which most automation tool vendors and in-house projects approach the problem. Instead, the requirement calls for a new approach based on the creation of basic building blocks that are repeatedly reused to create test automations with little or no need for coding knowledge.

When Aristotle said, "We are what we repeatedly do. Excellence, then, is not an act but a habit," he probably wasn't thinking about automated network testing… but the truth in his words rings loudly in every testing environment and QA lab today. In thousands of telecom provider and manufacturer testing labs, both management and engineers face the daunting task of testing and validating components and infrastructures by performing hundreds of thousands of repetitive actions and procedures. The ongoing proper execution of these actions is essential to ensure the quality of the network and products under test.

Unfortunately, most of these procedures are script-based and not reusable; every small change in an element or network layout requires manual retyping of the script. When the question of automating these processes arises, more often than not test teams shy away from the subject. And not without reason. Problematic past experiences leave even the most seasoned testing personnel skeptical that user-friendly, cost-saving automated frameworks can be built. These experiences may include:

  • self-developed automated systems that were so complicated that only the developing team could use them
  • long automation projects that weren't worth the development time that was spent on them instead of performing the actual testing work
  • "automated" systems that relied heavily on "capture and replay" scripting, requiring enormous effort to record thousands of component-specific scripts.

So what's a manager or engineer to do if he genuinely wants to automate part or all of his test procedures? How can he overcome the common pitfalls that cause automation to regress into manual labor in so many companies that have tried to automate and failed?

The way forward

The answer to these questions does not lie in improving the current methods by which most automation tool vendors and in-house projects approach the problem. Instead, the requirement calls for a new approach based on the creation of basic building blocks that are repeatedly reused to create test automations with little or no need for coding knowledge. If we can create an infrastructure that enables test engineers to "drag and drop" actions into a logical flow chart and then operate these actions in the order they were placed, without having to perform actual scripting, we could create a winning approach for easy-to-use, cost-efficient automation. But how would we do this? There may be several ways to implement such a framework, but the following guidelines will ensure the needed functionality.

Automation needs to be accessible to all, especially to the test engineers, who have to be able to build the automation on their own. If building the automation process requires code developers, we can't expect high adoption rates. Having to relay our requirements to other people so that they can build it for us is not only a highly inefficient process, it's also a nuisance.

The "time to automate" a new feature needs to be short. If the time to create automation for testing a new feature is much greater than the time to test it manually, no one is going to do it.

Automation needs to be reusable. All the automated actions we create (commands, parsing, reporting, etc.) should be easily contained within reusable and replaceable building blocks from which the structure of the automation process can be designed and built top-down (a brief illustrative sketch follows the list below). This means that:
  • Once we've created automated testing for a certain network layout with certain components, changing a component or a small part of the layout will only require us to change a certain building block, not recreate the whole automation from scratch.
  • We can leverage the knowledge of our domain experts by letting others use their setups later on.
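
To make the building-block idea concrete, here is a minimal Python sketch. It is illustrative only: the class names, blocks, and parameters are assumptions made for this article, not the API of any particular tool. Each test action is a self-contained block, and a flow is just an ordered arrangement of blocks that a test engineer could assemble without writing new code for every layout.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ActionBlock:
    """A reusable, self-contained test action (send commands, parse output, report, ...)."""
    name: str
    run: Callable[[Dict], Dict]          # consumes the shared context, returns new results
    params: Dict = field(default_factory=dict)

@dataclass
class TestFlow:
    """An ordered arrangement of blocks: the 'flow chart' the engineer assembles."""
    blocks: List[ActionBlock]

    def execute(self) -> Dict:
        context: Dict = {}
        for block in self.blocks:
            print(f"Running block: {block.name}")
            context.update(block.run({**context, **block.params}))
        return context

# Hypothetical blocks for a traffic test; swapping a component in the layout
# means replacing one block, not rewriting the whole automation.
init_dut     = ActionBlock("Initialize DUT", lambda ctx: {"dut_ready": True},
                           params={"ip": "10.0.0.1"})
send_traffic = ActionBlock("Send traffic",   lambda ctx: {"frames_sent": 10_000})
check_loss   = ActionBlock("Check loss",
                           lambda ctx: {"verdict": "pass" if ctx["frames_sent"] > 0 else "fail"})

if __name__ == "__main__":
    print(TestFlow([init_dut, send_traffic, check_loss]).execute())
```

If the layout changes, only the affected block (here, init_dut) needs to be replaced; the rest of the flow, and the domain knowledge embedded in it, is reused as-is.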

All processes must be standardized into one type of user interface. From a user's perspective, the method of using the different types of building blocks described above should be the same regardless of the technology behind them. For example, the way commands are sent to initialize a component from a certain vendor should be similar to the way commands are sent to operate a totally different type of component from another vendor. The fact that different types of communications are taking place "behind the scenes" should be totally irrelevant to the user; the user should need to learn only one type of user interface. In keeping with this guideline, all relevant test activities need to be supported within a single integrated platform. This includes topologies, devices, interfaces, procedures, tests, regressions, results, reports, and dashboards. If we need to operate more than one software product to complete a full test cycle, we're probably not moving in the right direction.

Data results need to be easily understood. One of the main pitfalls of test automation is not the actual testing, but rather the ease (or lack of ease) with which we can understand the results. Automation generates massive amounts of data, so it's important to understand what it takes to turn this data into a beneficial test report. Do we need to manually parse hundreds of files, or can we easily aggregate information from multiple tests, testers, and stations and then filter, group, and tabulate these results according to our requirements into a clear and coherent report? If creating a comprehensive report is so difficult that we would rather concentrate on only one aspect of the data, we're doing something wrong.
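
Returning to the single-interface guideline, the idea can be sketched as a thin abstraction layer: every device, whatever vendor or protocol sits behind it, is driven through the same small set of calls. The driver classes and commands below are invented for illustration; a real framework would map them onto actual vendor APIs.

```python
from abc import ABC, abstractmethod

class DeviceDriver(ABC):
    """One user-facing interface, regardless of the technology behind it."""

    @abstractmethod
    def connect(self) -> None: ...

    @abstractmethod
    def send_command(self, command: str) -> str: ...

class TelnetCliDriver(DeviceDriver):
    """Hypothetical driver for a CLI-managed component from vendor A."""
    def __init__(self, host: str):
        self.host = host
    def connect(self) -> None:
        print(f"opening telnet session to {self.host}")    # placeholder for real transport code
    def send_command(self, command: str) -> str:
        return f"cli-output({command})"

class RestApiDriver(DeviceDriver):
    """Hypothetical driver for a REST-managed component from vendor B."""
    def __init__(self, url: str):
        self.url = url
    def connect(self) -> None:
        print(f"opening HTTP session to {self.url}")        # placeholder for real transport code
    def send_command(self, command: str) -> str:
        return f"json-response({command})"

# To the test engineer, both components look and behave identically:
for device in (TelnetCliDriver("10.0.0.1"), RestApiDriver("https://10.0.0.2/api")):
    device.connect()
    print(device.send_command("init"))
```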
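
On the reporting side, the same point can be made with a small example: if results are captured in a structured form, aggregating them across tests, testers, and stations becomes a few lines of work rather than a manual parsing exercise. This sketch assumes pandas and an invented column layout.

```python
import pandas as pd

# Assume every automated run appends one structured row of results.
results = pd.DataFrame([
    {"station": "lab-1", "tester": "alice", "feature": "throughput", "verdict": "pass"},
    {"station": "lab-1", "tester": "alice", "feature": "latency",    "verdict": "fail"},
    {"station": "lab-2", "tester": "bob",   "feature": "throughput", "verdict": "pass"},
    {"station": "lab-2", "tester": "bob",   "feature": "latency",    "verdict": "pass"},
])

# Filter, group, and tabulate into a coherent pass/fail summary per feature and station.
report = (results
          .groupby(["feature", "station"])["verdict"]
          .value_counts()
          .unstack(fill_value=0))
print(report)
```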

What we're looking for

Having discussed the guidelines for building successful test automations, we can now define what we are looking for in an "out of the box" approach:
  1. Ease of use. How many people in our team will be able to use it, how complex is the training process, and how quickly will we be able to ramp up its use?
  2. Robustness of the interface library. Does it support all the device and application controls we need? How easily (if at all) can we deal with steps that are not "out of the box"?
  3. Flexibility and future-proofing. Will we be dependent on the vendor to add future controls? What are the commercial implications of additional capabilities, and what happens when we buy new test equipment? Will the system support large-scale operations as the company grows?
  4. Maximum functionality. Apart from the testing itself, any good automation framework should support the tasks surrounding the testing, as they too lead to a more cost-effective process. These include lab management, asset reservation and scheduling, device integration, topology definition and setup, automation development and execution, data collection and aggregation, and reporting and dashboards.
  5. Vendor. And finally, when choosing such a framework, we need to take a good look at the vendor. What is its core business? Does the product have a roadmap, and how aligned is it with our requirements? What support can we expect? And perhaps the most important factor: what references from successful automation deployments can the vendor provide?

In summary, test automation doesn't have to be painful. Adopting the right approach can lead to great results.

Eitan Lavie serves as vice president of product marketing at QualiSystems, where he is responsible for product strategy, marketing activities, and for overseeing key product management functions. QualiSystems provides enterprise software products for lab management, device provisioning, and test automation.
