The IT Executive’s Guide to Employing AI for Software Testing

What every executive needs to know about reducing costs and improving quality through artificial intelligence and robotic test automation

Peter Varhol, Principal, Technology Strategy Research and a recognized authority in test automation.

IT leadership, from the CIO and CTO down through the chain of command, is under constant pressure to deliver software faster, ensure users are satisfied with features and quality, and reduce costs in the process. There have been incremental improvements, from more powerful programming languages and development environments to processes like Agile and DevOps, but increasingly complex applications are counterbalancing many of the presumed benefits.

One area of concern in particular is testing. In old waterfall paradigms, testing was a distinct and long-lasting phase of the application development lifecycle, with final tests often taking weeks and involving multiple beta release cycles. Further, traditional testing practices often fail to find defects prior to production deployment, creating user dissatisfaction, rework, and often lost revenue.

Testing has struggled to adapt to more modern, faster-moving methodologies. Many consider test automation the silver bullet for accelerating testing, and most companies are investing heavily in automation tools and skills.

However, automation has serious issues of its own. Most automation tools require testers either to code actions by hand or to step through an application manually, recording those steps so that the tools can replay them later. It sounds straightforward, but it isn’t. If UI controls are moved or added, or underlying code is significantly changed, either the test script has to be re-recorded or the original script has to be modified. The maintenance involved in this process is significant and can erase many of the efficiencies of automation.
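To illustrate why recorded scripts are so fragile, here is a minimal sketch in pure Python. The control names and the replay logic are hypothetical, standing in for what a record-and-playback tool does internally: the script is a fixed list of steps bound to specific locators, so renaming a single control in a later build breaks the replay and forces maintenance.

```python
# A recorded test script is essentially a fixed list of (action, locator, value)
# steps captured while a tester stepped through the application.
# All names here are hypothetical, for illustration only.
recorded_script = [
    ("type",  "username_field", "alice"),
    ("type",  "password_field", "secret"),
    ("click", "login_button",   None),
]

def replay(script, ui_controls):
    """Replay recorded steps against the current UI; fail on missing locators."""
    for action, locator, value in script:
        if locator not in ui_controls:
            return f"FAILED: locator '{locator}' not found (UI changed?)"
    return "PASSED"

# The set of UI controls present in the build the script was recorded against.
ui_v1 = {"username_field", "password_field", "login_button"}

# A later build renames the button -- the unchanged script now fails and
# must be re-recorded or edited by hand.
ui_v2 = {"username_field", "password_field", "sign_in_button"}

print(replay(recorded_script, ui_v1))  # PASSED
print(replay(recorded_script, ui_v2))  # FAILED: locator 'login_button' ...
```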

Further, the automation scripts themselves are almost always artificial, in that they represent test cases designed to exercise specific features or functionality rather than real use. While individual features may work as intended, a more complex sequence of features may not.

As a result, large investments in automation skills and tools have not delivered the expected return. In some cases, automation strategies have failed outright and teams have returned to manual testing, negating any benefits.

In order to fulfill time-to-market, cost, and quality goals, a different approach to the testing practice is needed.

Enter Artificial Intelligence

Artificial intelligence (AI), and its subset machine learning, involves applying layers of algorithms to make decisions in a problem domain. While the data scientist provides the initial algorithms, the AI engine itself adjusts those algorithms based on a feedback mechanism.

Applied to testing, one immediate benefit of AI and machine learning is that they can accelerate test case definition, automation, and use. The fact of the matter is that applications are already generating data that can be applied to their own testing. It’s often too much data for teams to analyze and use effectively, making it better suited for AI-based analysis and action. Data is available from server logs as well as from testing and monitoring tools such as Splunk and Sumo Logic. This data covers basic user actions and helps the AI system learn more about what real users actually do, and thus what must be tested for every subsequent build.

The Appvance AI approach

With its AI-based autonomous testing software, Appvance uses AI technologies to analyze these logs and extract essential user information. Appvance AI represents a quality and productivity breakthrough that can radically change how we test and deploy software. It has the potential to make testing far faster and more effective than it is today.

Appvance AI generates automated test scripts based on real user activity, rather than artificial tests designed to exercise specific features. It analyzes existing data from log files on servers, plus data collected through popular monitoring systems, along with a number of other data sources, in order to generate valid test cases. And it does so very rapidly; using a complex regimen of rules and machine learning, it extracts user data and activity, determines the steps the user took while navigating through the application, and automatically builds executable scripts, including all page navigation, inputs, database calls, and displayed results. This information, combined with deep learning of the application itself, correlations, and data for forms and other inputs, forms the basis of robotic test automation.
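A drastically simplified sketch of that last step follows. It assumes the per-user action sequences have already been extracted from logs (as in the earlier session-reconstruction example) and simply ranks observed navigation paths by frequency, emitting the most common paths first as candidate test scripts. This is an illustration of the principle, not Appvance’s actual algorithm.

```python
from collections import Counter

# Action sequences already reconstructed from log data (assumed input).
user_sessions = [
    ("GET /login", "POST /login", "GET /account"),
    ("GET /login", "POST /login", "GET /account"),
    ("GET /login", "POST /login", "POST /transfer"),
]

def generate_test_scripts(sessions):
    """Rank observed navigation paths by frequency. The most-used paths are
    emitted first, so test coverage mirrors real production use."""
    ranked = Counter(sessions).most_common()
    return [
        {"name": f"script_{i}", "steps": list(path), "observed": count}
        for i, (path, count) in enumerate(ranked, start=1)
    ]

scripts = generate_test_scripts(user_sessions)
print(scripts[0]["steps"], scripts[0]["observed"])
# ['GET /login', 'POST /login', 'GET /account'] 2
```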

Because these scripts represent actual use, they accurately correlate with production use.  Organizations have the confidence of knowing that the most tested parts of the application are the activities users perform.

Before and after deployment

Testers have long been accustomed to writing artificial test cases and proclaiming an application ready to deploy before it moves into production. However, the world is very different today. It isn’t possible to replicate the production environment, which is increasingly a distributed cloud, inside the enterprise prior to deployment.

With Appvance AI, teams can build test scripts using scenarios derived from user stories, or from actual use by beta testers. That is a good start. Once the application is in production, however, it can use logs with data from actual customers or other users, including sessions by testers seeking to exercise specific features or to continue testing user stories. User actions in navigating through applications can also be combined to create unique performance and load tests.
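To show how recorded sessions might be recombined into a load scenario, here is a toy sketch. It merely interleaves per-user steps round-robin to approximate the request order a server would see from several concurrent users; a real load test would replay the sessions concurrently against a live environment.

```python
from itertools import zip_longest

# User sessions (action sequences derived from logs); contents are hypothetical.
sessions = [
    ["GET /login", "POST /login", "GET /account"],
    ["GET /login", "GET /search", "GET /item/7"],
]

def interleave(sessions):
    """Interleave per-user steps round-robin, approximating the order in
    which a server would receive requests from concurrent users."""
    mixed = []
    for steps in zip_longest(*sessions):
        mixed.extend(s for s in steps if s is not None)
    return mixed

load_script = interleave(sessions)
print(load_script)
# ['GET /login', 'GET /login', 'POST /login', 'GET /search',
#  'GET /account', 'GET /item/7']
```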

This is important because there are many things that can cause issues in distributed cloud applications that cannot be found in traditional test environments. There could be network outages, poor performance in DNS lookups, or any one of a number of factors that are difficult or impossible to replicate in a controlled environment. And when applications in production have problems, typically the first indication is emails from irate users. This could result in lost business or a bad reputation for the organization.

So it makes a great deal of sense to continue testing an application after deployment. Using Appvance AI, your project teams can build user stories and test scripts based on real user sessions and experiences. This provides a validity check on the original assumptions used to build and test the application, and also provides a fresh set of test cases reflecting the most recent changes and updates to the application. Organizations can be confident that applications continue to effectively serve the business throughout their production lifecycle.

Sleep better at night

Basing an automated testing strategy on real user activities has a number of advantages. Because test cases are automatically generated, it’s both fast and accurate in terms of creating scripts that exercise the capabilities of the application. Given enough log data, thousands of individual test cases can be generated in a matter of minutes.

Further, it tests for the total experience of actual users, turning testing into a user-focused activity. Testers typically test features rather than real user experience, making at least some testing activities artificial and incomplete. Generated test cases can also be automatically updated through log data even as the application changes.

Last, these test cases are executed through automation rather than manually. They are efficient enough to form part of the smoke test effort that validates every build. This provides more coverage for builds, increasing their reliability and saving time in finding and fixing broken builds. The result is a faster and more efficient development and integration pipeline.

Use AI for your upcoming projects

Appvance AI represents a quantum leap in productivity and speed for the testing process. For the organization, it makes it possible to reduce application time to market and, in doing so, reduce overall application development costs. IT management can deliver applications more reliably and without the negative feedback commonly associated with untested or poorly tested applications.

What does this mean for enterprise IT and customer-facing applications? First, testing can fit seamlessly into accelerated methodologies and development schedules, such as Agile and DevOps.  Your teams don’t have to marginalize testing in order to meet delivery schedules. Testing itself will be accelerated, enabling organizations to deliver business-critical applications quickly and with a known level of quality and performance.

This benefits the business in multiple ways. First, it improves the productivity of your agile and DevOps teams. Testers and other team members aren’t spending weeks manually recording test scripts; instead, hundreds or even thousands of scripts can be recorded in minutes.

Testing also becomes more relevant to product acceptance. It can become an integral part of the continuous delivery process rather than a roadblock. Testing gains a quantum leap in productivity that finally matches the speed of the rest of the application development process.

The organization itself is better able to keep to product schedules, incorporate continuous testing based on actual user activities, and avoid the need for expensive and hard-to-obtain automation resources and skills. There are also significant cost savings in reducing overall testing schedules from weeks or months to days.

Last, applications can be more fully and completely tested than in the past. Organizations will no longer have to be concerned about an application failure and business meltdown that causes lost revenue and bad publicity. IT can once again sleep well at night.

For more information on Appvance AI, or to see a demonstration of its power, contact Appvance to learn more and schedule a live demonstration.