It’s hard to think of software testing and not come upon the idea of automated tests. In theory, automated tests are great – they expedite the testing process, allowing more time to be spent on new features and building out the software. But oftentimes automated testing is spoken of as if it’s the be-all and end-all of testing. There are many scenarios where putting automated tests in place makes sense, but this article doesn’t aim to discuss the intricacies of when and when not to utilize automated testing. Rather, the primary focus here is on the downsides of using automated tests.

Automated Testing Has Benefits

First, it’s important to acknowledge that automated testing does have its place and can be incredibly valuable. For products that are relatively stable or for repetitive or mundane tasks – such as testing a sign-in page, the creation of valid passwords, or other features that can be re-used across multiple projects – having automated tests in place makes sense. It’s less likely these tests will be disrupted down the line, so the automations can help prevent tester burnout while saving considerable effort that can be devoted to more business-critical areas.
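As a rough illustration, consider the password-validation case. The sketch below is hypothetical – the `is_valid_password` rules are invented for the example, not drawn from any particular project – but it shows the kind of stable, repetitive check that pays off as an automated test:

```python
import re

def is_valid_password(password: str) -> bool:
    """Hypothetical rules: 8+ characters, at least one digit and one uppercase letter."""
    return (
        len(password) >= 8
        and re.search(r"\d", password) is not None
        and re.search(r"[A-Z]", password) is not None
    )

def test_accepts_valid_password():
    assert is_valid_password("Secur3Pass")

def test_rejects_short_password():
    assert not is_valid_password("Ab1")

def test_rejects_password_without_digit():
    assert not is_valid_password("NoDigitsHere")
```

Once written, checks like these cost almost nothing to re-run on every build.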

Automated testing also helps protect against human error. When all test cases are carefully thought out and planned for, the chances of forgetting to test something or inadvertently putting in bad data are virtually eliminated with automation. Mistake-prone humans are largely removed from the equation, ensuring all tests are run, and run the same way every time. In these instances, having automations set up can provide a greater amount of confidence that unintentional errors won’t happen.
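For instance, with a parameterized test the exact inputs are pinned down once, and the framework replays them identically on every run – no case gets skipped, and no one fat-fingers the data. This is a minimal pytest sketch, assuming a hypothetical `is_valid_username` rule:

```python
import pytest

# Hypothetical rule under test: usernames must be 3-20 lowercase letters.
def is_valid_username(name: str) -> bool:
    return 3 <= len(name) <= 20 and name.isalpha() and name.islower()

# Every input is spelled out once; pytest replays the exact same
# cases on every run, so nothing is forgotten or mistyped later.
@pytest.mark.parametrize("name,expected", [
    ("alice", True),
    ("bo", False),       # too short
    ("Alice", False),    # uppercase not allowed
    ("user123", False),  # digits not allowed
])
def test_username_validation(name, expected):
    assert is_valid_username(name) == expected
```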

But There Are Downsides Too

Significant amounts of time and labor go into planning and creating tests. Taking effort at the start of a project to understand the various scenarios and pieces that need to be tested is certainly encouraged, but writing out those tests early on could prove to be a misallocation of resources. If the tests are written as the software is being developed, they will likely need to be rewritten multiple times as the software changes and iterates. Adding new features, or simply changing the layout of a page, could render tests entirely invalid or impossible, resulting in continual failures when the automations are run. The time needed to fix the automations can rival the time it would take to start from scratch, or to simply test the software manually. As a result, some failing automated tests are never rectified.
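To make that fragility concrete, here is a hypothetical Selenium sketch – the URL and locators are placeholders – where the test is coupled to the page’s layout rather than to the feature itself:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder URL

# Brittle: this XPath encodes the page's exact structure. Wrapping the
# form in a new <div>, or reordering the page, invalidates the locator
# and the test fails even though sign-in itself still works.
submit = driver.find_element(
    By.XPATH, "/html/body/div[2]/form/div[3]/button"
)
submit.click()

# More resilient alternative: target a stable attribute instead of
# layout position, e.g. driver.find_element(By.ID, "login-submit")

driver.quit()
```

Every cosmetic redesign forces a pass over tests written this way, which is exactly where the maintenance cost piles up.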

Chronic automated test failures, whether through poor design or changes to the software that broke a previously successful test, can have a numbing effect. When people see these tests fail over and over, it’s easy (particularly if addressing them is a huge time sink) to ignore the failures or even turn the tests off, creating situations where real, valid new failures are missed.

Furthermore, automated tests are only as good as the person writing them. If a developer is writing tests and wants to make sure a given scenario will pass, it’s possible to inadvertently set the tests up so that they can’t fail. As the automations run, everything can appear to be going smoothly, but there’s no indication if or when something is actually amiss. Similarly, while automated tests can help prevent a tester from forgetting to re-test a given test case, the person writing the script still needs to consider that scenario. If they forget to write a test for it, it will not magically be covered when the tests are run.
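One common way a test ends up unable to fail is an overly broad try/except around the assertion. The snippet below is a contrived sketch – `charge_customer` is a hypothetical stand-in for real code under test – but the anti-pattern is real:

```python
def charge_customer(amount: float) -> bool:
    # Stand-in for the real code under test; imagine a bug
    # makes it reject every charge.
    return False

def test_charge_succeeds():
    # Anti-pattern: AssertionError is a subclass of Exception,
    # so the except clause swallows the failure and this test
    # "passes" even though the feature is broken.
    try:
        assert charge_customer(10.0)
    except Exception:
        pass
```

Deleting the try/except would immediately expose the bug; with it in place, the green check mark is meaningless.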

Importantly, while some tools can get pretty close, there are things automated testing simply cannot do. The tests can track how long an action takes, or check that a pop-up is triggered where and when it should be, for example. But they can’t tell if text is too hard to read, determine that the layout of a page looks off, or judge how intuitive the app is. A person manually testing the software can better handle such cases and should do so.
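The dividing line is easy to see in code. A duration check like the one below (using a hypothetical `render_dashboard` action as a placeholder) is trivial to automate, but no assertion can express “the dashboard is easy to read”:

```python
import time

def render_dashboard() -> None:
    # Hypothetical stand-in for the action being measured.
    time.sleep(0.1)

def test_dashboard_renders_quickly():
    start = time.perf_counter()
    render_dashboard()
    elapsed = time.perf_counter() - start
    # The machine can verify speed; it cannot verify that the
    # rendered dashboard is legible or sensibly laid out.
    assert elapsed < 2.0, f"render took {elapsed:.2f}s"
```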

Other tools claim to make writing automated tests easier by recording your actions as you interact with your app; creating such tests can otherwise require deep technical knowledge. However, many of these tools are disappointingly lacking. I’ve encountered ones that supposedly allow for running tests in the background, only for those tests to fail consistently every time I triggered them. Moving them to the foreground got them to work, but that defeated the entire purpose of the tool. Others couldn’t even successfully enter data in a text field or select an option from a drop-down menu – fairly basic tasks.

Conclusion

While automated testing certainly provides value in reducing the time and effort spent testing software, there are significant drawbacks that are often overlooked. Although it may seem like you can set it and forget it once automated tests are in place, doing so can be a devastating mistake. Failures need to be addressed, and tests need to be maintained and tweaked as system functionality changes. The human eye, while not infallible, still provides immense value in the testing process and should not be forgotten.