Software testing automation – why & how

Getronics Editorial Team


It is crucial to answer several questions before starting the software testing automation journey. In this article, we also help you to analyse some of the most important ones yourself.


The race for software testing automation can end in unmet customer expectations, countless "junk" scripts that no one will use because of a single functional change or a change in the system's architecture, and a long wait for the promised return on investment and expected savings.

All this because the direction and goals of our software testing automation initiatives were not clearly established from the start of the journey.

A clear path starts with the why

To avoid these unwanted outcomes, it's crucial to answer several questions before starting the automation journey. Below, we help you analyse some of the most important ones yourself:

Why software testing automation?

This question might seem obvious, and many organisations may already have several clear answers in mind: "to save hours in regression tests", "to promote agility", or "to monitor pre-production environments", to name a few. Regardless, "why software testing automation?" is still the most important question.

And as key as it is to define the why, it is equally crucial to communicate it to your team so that no one loses sight of the automation goals.

In one case, although the automation objectives were clear and had been shared with the partner, the partner took a different path and created a huge number of scripts over the same workflow.

With so many scripts, managing and monitoring them became very difficult. Then the software changed. It became impossible to maintain all the scripts, so it was more convenient to develop everything again from scratch, this time doing all the validations of the flow in a single script.

The many hows of automation

Once the objective of the automation is clear, another important question is: “how to do it?”. And several others emerge from this, such as:

  • What do we know about automation?
    The first thing I’d recommend is to identify the AS-IS in terms of what we know about the subject. The gap between the AS-IS and the TO-BE is one of the first things to be eliminated or reduced. Anything we do must be founded on a solid knowledge of the subject. Having skilled people internally or an experienced partner can be very helpful at this phase.
  • Do I have a team or partner with the skills to do it?
    Having the right people, or the right partner, is key to success: not only for their technical knowledge, but also for the ownership they take of the objective.
  • How much will the project cost?
    Usually, a PoC (proof of concept) or an MVP (minimum viable product) is enough to give us an idea of costs. Although many PoCs end up staying in production, the way they are put together should by no means become the process for the future. A PoC aims to find out whether "it can be done" – and if it can, it's time to start planning!
  • When should I expect to see results?
    At the end of the day, automation is a development project, which makes planning straightforward; it can even be handled within a sprint, so the timescales will be determined by the size of what we want to do. The ideal scenario is to take a system that is a good candidate for automation, automate a flow of medium complexity, and then use this reference time to plan the rest of the flows in the system.
  • Should I automate everything?
    Not necessarily: quality over quantity. Technically, I would dare to say that "100% of the tests can be automated". However, achieving this would incur such high costs that we would have to discard the idea.
  • What is really convenient to automate?
    Prioritising the urgent over the important is a good technique, as is the well-worn 80/20 rule (Pareto). After all, what matters is to be very clear about the value that the script I am developing contributes to the fulfilment of the business objectives.
  • How will I measure the result of my automation?
    Set KPIs. Like any strategic planning, KPIs must be set based on the main goal and secondary objectives. Consider including some quality indicators, which are also relevant in this area. Some examples of KPIs are listed below (a minimal calculation sketch follows the list):
    • Defect rate: defects found by the script (ideally using defect density).
    • Types of defects: coding, environment, data, etc.
    • Failure rate: the percentage of a script's executions that result in error.
    • Minutes of manual execution vs minutes of automated execution: this allows you to identify the time savings generated by your script.
    • % Automation coverage: automatable flows over the total universe of test flows in the system (it is not recommended to set a high target; in fact, we should not even have a target).
    • % Progress in automation development: automated flows vs automatable flows. Depending on your plan, compare the planned against the realised.
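
As a rough illustration of how these indicators can be tracked, here is a minimal Python sketch. The function name and all the figures in the example are hypothetical; the formulas simply follow the definitions above.

```python
# Minimal sketch of KPI calculations for a test automation initiative.
# All names and figures are hypothetical examples, not prescribed values.

def automation_kpis(
    defects_found: int,
    scripted_checks: int,        # total validations performed by the scripts
    failed_executions: int,
    total_executions: int,
    manual_minutes: float,       # time to run the same flows manually
    automated_minutes: float,
    automatable_flows: int,
    total_flows: int,            # total universe of test flows in the system
    automated_flows: int,
) -> dict:
    return {
        "defect_density": defects_found / scripted_checks,
        "failure_rate_pct": 100 * failed_executions / total_executions,
        "time_saved_minutes": manual_minutes - automated_minutes,
        "automation_coverage_pct": 100 * automatable_flows / total_flows,
        "automation_progress_pct": 100 * automated_flows / automatable_flows,
    }


if __name__ == "__main__":
    kpis = automation_kpis(
        defects_found=4,
        scripted_checks=120,
        failed_executions=6,
        total_executions=200,
        manual_minutes=480,
        automated_minutes=35,
        automatable_flows=40,
        total_flows=90,
        automated_flows=18,
    )
    for name, value in kpis.items():
        print(f"{name}: {value:.2f}")
```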

Key lesson: planning software testing automation

So, up to this point, we understand that automating is not as trivial as we were told: assembling a random server (usually any spare workstation), generating a script with any tool, and dedicating ourselves to executing it over and over, as if the more executions or scripts we have, the closer we will be to our goal. It just doesn't work like that.

What should stay with you after this analysis is that, before embarking on the path of automation, you must stop for as long as needed to clearly: define objectives > plan > prioritise > define KPIs and targets > and establish control points and tracking methods focused on the value of the automation, not on the number of scripts we can show off.

And if you are already on this path and the outlook is not positive, ask yourself these questions to guide you towards opportunities for improvement with your team.

A final word on software testing automation

To finish, and to come back to the "why", below are some approaches to automation that add great value to the organisation:

  • Automate to promote agility
    From this perspective, it is not convenient to automate 100% of test cases. Rather, consider only the most critical functional workflows, to avoid any business impact from a new version (regression tests). It is also good to include the tests that take the longest to execute manually (so that manual effort can focus on the new cases and flows created by the change in the code), as well as the most complex ones that require advanced functional knowledge (this also frees the expert functional analyst to focus on other flows). The execution of these tests is ideally triggered in a continuous integration scheme before a new code modification is pushed and after the corresponding code inspection (ideally also automated). A sketch of this selective approach follows after this list.
  • Automate to monitor the stability of the systems
    When making a major modification to a system, impact analyses often miss some integrations with other systems, and it is common for an unidentified impact to affect the operation of one or more systems, causing tests to stop until the change is corrected or rolled back. In this automation focus, it is advisable to consider only "happy paths" and to run these workflows several times a day on a scheduled basis (for example, with Jenkins), allowing us to find out quickly through an online dashboard whether a system, service or server is running into problems. To go further with the "online" notice, we can trigger an email, SMS or a warning message in tools such as HipChat so that team members find out quickly without having to look at the dashboard. A sketch of such a scheduled probe also follows below.
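
For the agility approach, here is a minimal pytest sketch of keeping the fast pre-push regression gate to only the critical flows. The test names, the `critical` marker and the `place_order` stand-in are hypothetical, and the marker would need to be registered in your pytest configuration to avoid warnings.

```python
# Sketch: keep the fast pre-push regression gate to the most critical workflows
# by tagging them, then run only those tests in CI with `pytest -m critical`.
# `place_order` is a hypothetical stand-in for the system under test, and the
# `critical` marker should be registered in pytest.ini to silence warnings.
import pytest


def place_order(items):
    """Hypothetical stand-in for a real business flow of the application."""
    return {"status": "CONFIRMED", "items": items}


@pytest.mark.critical
def test_order_flow_protected_on_every_push():
    # A critical business flow: a regression here has direct business impact.
    result = place_order(["SKU-1", "SKU-2"])
    assert result["status"] == "CONFIRMED"


def test_cosmetic_label_change():
    # Untagged: runs in the full nightly suite, not in the pre-push gate.
    assert "Order" in "Order summary"
```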
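
For the stability-monitoring approach, below is a minimal Python sketch of a scheduled happy-path probe. It is meant to be triggered by a scheduler such as Jenkins or cron; the endpoints, addresses and SMTP host are hypothetical placeholders.

```python
# Sketch: a scheduled "happy path" probe, meant to be triggered by Jenkins or
# cron several times a day. Endpoints and mail settings are hypothetical.
import smtplib
from email.message import EmailMessage

import requests  # third-party HTTP client

HAPPY_PATHS = {
    "orders-service": "https://example.internal/orders/health",
    "billing-service": "https://example.internal/billing/health",
}


def check_happy_paths() -> list[str]:
    """Return a list of human-readable failure descriptions."""
    failures = []
    for name, url in HAPPY_PATHS.items():
        try:
            response = requests.get(url, timeout=10)
            if response.status_code != 200:
                failures.append(f"{name}: HTTP {response.status_code}")
        except requests.RequestException as exc:
            failures.append(f"{name}: {exc}")
    return failures


def alert(failures: list[str]) -> None:
    """Email the team so nobody has to keep watching the dashboard."""
    msg = EmailMessage()
    msg["Subject"] = "Happy-path monitoring: failures detected"
    msg["From"] = "monitoring@example.internal"
    msg["To"] = "qa-team@example.internal"
    msg.set_content("\n".join(failures))
    with smtplib.SMTP("smtp.example.internal") as server:
        server.send_message(msg)


if __name__ == "__main__":
    problems = check_happy_paths()
    if problems:
        alert(problems)
```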

Contact us

For more information about software testing automation, contact our experts or visit our Getronics site.
