Software Test Metrics | Defect Metrics | Defect Slippage Ratio


Introduction

Metrics can be defined as "standards of measurement." A metric is a unit used to describe or measure an attribute. Test metrics are the means by which software quality can be measured. They provide visibility into the readiness of the product and give a clear measurement of its quality and completeness.

What are Test Metrics?

Test metrics are quantitative measures used to estimate the progress, quality, and other activities of the software testing process.

Why do we Need Metrics?

"You cannot improve what you cannot measure." "You cannot control what you cannot measure." Test metrics help you:

- Make decisions for the next phase of activities
- Provide evidence for a claim or prediction
- Understand the type of improvement required
- Make decisions on process or technology changes

Types of Test Metrics

Base Metrics (Direct Measure): Base metrics constitute the raw data gathered by a test analyst throughout the testing effort. These metrics are used to provide project status reports to the test lead and project manager; they also feed into the formulas used to derive calculated metrics. Examples: number of test cases, number of test cases executed.

Calculated Metrics (Indirect Measure): Calculated metrics convert base metrics data into more useful information. These metrics are generally the responsibility of the test lead and can be tracked at many different levels (by module, tester, or project). Examples: % complete, % test coverage.

Metrics Life Cycle: defect metrics, release criteria, and defect patterns.

Test Plan Coverage on Functionality: The total number of requirements versus the number of requirements covered by test scripts.

Coverage = (Number of requirements covered / Total number of requirements) * 100

Define the requirements at the time of effort estimation. Example: the total number of requirements estimated is 46, of which 39 were tested and 7 were blocked. The coverage is (39 / 46) * 100 = 84.8%.

Note: Define requirements clearly at the project level.

Test Case Defect Density: The number of test scripts that found errors versus the number of test scripts developed and executed.

Defect Density = (Defective test scripts / Total test scripts executed) * 100

Example: 1,360 test scripts were developed, 1,280 were executed, 1,065 passed, and 215 failed. The test case defect density is (215 / 1280) * 100 = 16.8%. This value can also be called test case efficiency %, since it depends on the number of test cases that uncovered defects.

Defect Slippage Ratio: The number of defects that slipped to production versus the number of defects reported during test execution.

Slippage Ratio = Number of defects slipped / (Number of defects raised - Number of defects withdrawn)

Example: customers filed 21 defects, 267 defects were found during testing, and 17 of those were invalid. The slippage ratio is [21 / (267 - 17)] * 100 = 8.4%.

Requirement Volatility: The number of requirements agreed versus the number of requirements changed.

Requirement Volatility = (Requirements added + deleted + modified) * 100 / Number of original requirements

Ensure that the requirements are normalized and defined properly while estimating. Example: the VSS 1.3 release had 67 requirements initially; later, 7 new requirements were added, 3 were removed from the initial set, and 11 were modified. The requirement volatility is (7 + 3 + 11) * 100 / 67 = 31.34%.

Review Efficiency: A metric that offers insight into the quality of reviews and testing. Some organizations also call this "static testing" efficiency and aim to find a minimum of 30% of defects in static testing.

Review Efficiency = 100 * (Defects found by reviews / Total project defects)

Example: a project found 269 defects in various reviews, which were fixed, and the test team reported 476 valid defects. The review efficiency is [269 / (269 + 476)] * 100 = 36.1%.

Efficiency and Effectiveness of Processes

Effectiveness: doing the right thing. It deals with meeting the desirable attributes expected by the customer.
Efficiency: doing the thing right. It concerns the resources used for the service rendered.

Have questions? Contact the software testing experts at InApp to learn more.

Types of Project Metrics


A metric is an inevitable part of any piece of work being performed. It is a system in place to measure the excellence, or rather the performance, of the work delivered. Work that is not controlled and measured can prove equivalent to incorrect work being delivered. Technology grows at such a tremendous pace that enterprises always strive to keep well-defined project metrics.

Project metrics are pre-defined measures or benchmarks that a deliverable is supposed to attain in order to provide the expected value. With clearly defined project metrics, business groups are able to assess the success of a project. Though certain unstated measures, such as whether the project was delivered on time and within budget, have existed ever since the advent of enterprises, the need for more analytics in this area has seen a sharp spike. Different types of project metric analysis systems are in place across the industry, such as cost-based, resource-based, and hours-based analyses. The following are some common project metrics related to the person-hours delivered in a project.

Effort Variance (Ev)

Effort variance is a derived metric that alerts you to whether the project is under control. Consider a project A with the following current attributes:

- Planned effort: 100 hours
- Actual effort: 150 hours
- Project progress: 50%

If 150 hours were taken to reach 50% progress, then the projected effort at 100% is:

X = (100 * 150) / 50 = 300 hours

where X is the predicted effort within which the project is going to complete. Hence, the variance is:

Ev = ((Projected - Planned) / Planned) * 100 = ((300 - 100) / 100) * 100 = 200%

This variance indicates that the project requires attention, or it will complete at a much higher cost in terms of the effort delivered.

Schedule Variance (Sv)

Schedule variance uses the same calculation, with the number of days considered instead of hours.
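The effort-variance projection above can be sketched in a few lines of Python:

```python
# A minimal sketch of the effort variance (Ev) calculation above.
def projected_effort(actual_hours, progress_pct):
    """Extrapolate total effort from the effort spent so far."""
    return actual_hours * 100 / progress_pct

def effort_variance(planned_hours, actual_hours, progress_pct):
    projected = projected_effort(actual_hours, progress_pct)
    return (projected - planned_hours) / planned_hours * 100

# Worked example from the text: 150 of 100 planned hours spent at 50% progress.
print(effort_variance(100, 150, 50))  # 200.0
```

Schedule variance follows the same shape with days substituted for hours.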
Weighted Defect Rate (WDR)

WDR is a defect metric calculated from the weightage assigned to reported bugs. The weightage depends on two factors: severity and reporter.

- Weightage by severity, in descending order: Block, Crash, Major, Minor
- Weightage by reporter, in descending order: Client, SQC, Team

The rate is calculated against the total planned hours for the project.

Quality Costs

Cost of Quality: the total time spent on review activities in the project. Examples are requirements review, design review, code review, test plan review, team meetings for clarifications, and client calls.

COQ = (Total review hours / Total project planned hours) * 100

Cost of Detection: the total time spent on testing activity.

Cost of Detection = (Total testing hours / Total project planned hours) * 100

Cost of Failure: the total time spent on rework in the project. Rework includes bug fixing, design changes, test plan changes, etc.

Cost of Failure (Cost of Poor Quality, CoPQ) = (Total rework or bug-fixing hours / Total project planned hours) * 100
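The quality-cost ratios can be sketched as below; the hour figures in the example are hypothetical, not from the text:

```python
# A minimal sketch of the quality-cost ratios above; all inputs are hours.
def cost_of_quality(review_hours, planned_hours):
    return review_hours / planned_hours * 100

def cost_of_detection(testing_hours, planned_hours):
    return testing_hours / planned_hours * 100

def cost_of_failure(rework_hours, planned_hours):
    return rework_hours / planned_hours * 100

# Hypothetical figures for a 1,000-hour project:
print(cost_of_quality(80, 1000))     # 8.0
print(cost_of_detection(250, 1000))  # 25.0
print(cost_of_failure(120, 1000))    # 12.0
```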

Types of Software Testing


Dry Run Testing: Testing in which the effects of a possible failure are intentionally mitigated. It is usually done on a different server with customer data before moving to the actual production release.

Mutation Testing: Checks whether our unit tests are robust enough. A mutation is a small change in code: we deliberately alter a program's code and then re-run our valid unit test suite against the mutated program. A good unit test will detect the change in the program and fail accordingly.

Incremental Testing: Partial testing of an incomplete product, usually done to provide early feedback to the developers.

Bucket Testing (A/B Testing): Compares the effectiveness of two versions of a webpage or marketing email to discover which has a better response rate or sales conversion rate.

Soak Testing: Testing a system with a significant load extended over a significant period of time to discover how the system behaves under sustained use.

Sandbox Testing: A testing environment that isolates untested code changes and outright experimentation from the production environment, somewhat like a working directory, test server, or development server in which developers "check out" a copy of the source code tree or a branch to examine and work on. Only after a developer has fully tested the code changes in their own sandbox should the changes be checked back in and merged with the repository, thereby becoming available to other developers or end users of the software.
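To make the mutation-testing idea concrete, here is a minimal hand-rolled sketch in Python. Real projects use a mutation tool rather than writing mutants by hand, and the functions here are hypothetical:

```python
# Function under test and its unit test.
def is_adult(age):
    return age >= 18

def test_is_adult():
    # A boundary assertion is what catches the >= -> > mutation below.
    assert is_adult(18) is True
    assert is_adult(17) is False

# A "mutant": the >= operator is deliberately changed to >.
def is_adult_mutant(age):
    return age > 18

# Running the same assertions against the mutant should fail,
# proving the test suite is strong enough to "kill" the mutant.
def mutant_is_killed():
    return not (is_adult_mutant(18) is True and is_adult_mutant(17) is False)

test_is_adult()
print(mutant_is_killed())  # True
```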

Cross-Site Scripting (XSS)


What is Cross-Site Scripting?

Cross-site scripting, also known as XSS, is a type of security vulnerability typically found in web applications. It occurs when a web application gathers malicious data from a user, usually in the form of a hyperlink that contains malicious content. Browsers are capable of displaying HTML content and executing JavaScript. If the application does not escape special characters in the input/output and sends the user input back to the browser, an attacker may be able to launch an XSS attack successfully, through which malicious files can be executed, the session details of a logged-in user can be stolen, or Trojans can be installed.

Types of XSS

Non-persistent (reflected) XSS is the most common type. It occurs when the data provided by the attacker is immediately executed and the generated page is returned to that user.

Persistent (stored) XSS occurs when the data provided by the attacker is saved on the server and permanently displayed on web pages returned to other users.

DOM-based XSS (type-0 XSS) is an attack wherein the attack payload is executed as a result of modifying the DOM environment in the victim's browser.

How to Perform XSS Testing

Submitting malicious script through text inputs:
- List all the text input fields (text boxes, text areas) in the application.
- Submit simple JavaScript code, such as <script>alert("XSS")</script>, through each identified text input field.
- If the field is vulnerable, an alert with the text mentioned in the quotes will be displayed.

Submitting malicious script through an application URL, by modifying requests using a security testing tool such as Burp Suite:
- Capture the request using the Burp tool.
- Append the malicious script to the captured request.
- Forward the modified request.
- Validate the result.

How to Prevent XSS

XSS attacks are possible mainly because the server does not handle special characters in the output. There are two broad strategies for defeating XSS:

Whitelisting good input: Create a whitelist of the characters required by the application. Once the whitelist is ready, the application should disallow any request containing a character not in the list.

Blacklisting bad input: The application should not accept any script, special character, or HTML in fields where they are not required. It should escape special characters that may prove harmful. Some of the special characters used in scripts that must be escaped are <>()[]{}/\*;:=%+^!
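As a minimal sketch of output escaping, Python's standard-library `html.escape` replaces the characters most often abused in XSS payloads before the value is echoed back to the browser:

```python
import html

# User input containing a typical XSS probe.
user_input = '<script>alert("XSS")</script>'

# Escape special characters before sending the value back to the browser.
safe = html.escape(user_input)
print(safe)
# &lt;script&gt;alert(&quot;XSS&quot;)&lt;/script&gt;
```

Real applications should also use the context-appropriate escaping built into their template engine rather than escaping by hand.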

Creating an Advanced Test Plan in JMeter


The need for an advanced test plan arises when the test requires any of the following:

- Validating results based on updates to a field in the database
- Using an input file to parameterize input variables
- Using While and If-Else controllers

Steps to follow while recording an advanced script:

1. Open a new Test Plan.
2. Right-click on Test Plan -> Add -> Threads (Users) -> Thread Group.
3. Right-click on Thread Group -> Add -> Config Element -> CSV Data Set Config.
4. Right-click on Thread Group -> Add -> Config Element -> HTTP Cookie Manager.
5. Right-click on Thread Group -> Add -> Config Element -> HTTP Header Manager.
6. Right-click on Thread Group -> Add -> Logic Controller -> Transaction Controller.
7. Right-click on Thread Group -> Add -> Logic Controller -> Recording Controller.
8. Right-click on WorkBench -> Add -> Non-Test Elements -> HTTP Proxy Server.
9. Click on HTTP Proxy Server, and from the Target Controller dropdown select 'Transaction Controller > Recording Controller'.
10. To exclude images, add rows under 'URL Patterns to Exclude' with the patterns ".*\.jpg", ".*\.gif", ".*\.png".
11. Right-click on Recording Controller -> Add -> Config Element -> JDBC Connection Configuration. This creates a database connection; include the Database URL, JDBC Driver Class, Username, and Password.
12. Right-click on Recording Controller -> Add -> Config Element -> User Defined Variables. This is used to replace hard-coded values such as Username. The syntax is 'Variable name = Variable value'; use ${variable name} in the script instead of the hard-coded value(s).
13. Right-click on Recording Controller -> Add -> Logic Controller -> While Controller.
14. Right-click on While Controller -> Add -> Sampler -> JDBC Request. This sampler sends an SQL query to the database. Note: before using the JDBC Request sampler, the 'JDBC Connection Configuration' config element must be set up.
15. Right-click on While Controller -> Add -> Sampler -> Debug Sampler. The Debug Sampler generates a sample of all JMeter variable values. Similarly, it can generate a sample of all JMeter properties and system properties; presently, in our scripts, these two properties are set to false.
16. Right-click on While Controller -> Add -> Timer -> Constant Timer. This timer pauses for a set amount of time between requests. Without a delay, JMeter could overwhelm the server by making too many requests in a short amount of time.

Please note: before recording, go to Internet Explorer -> Tools -> Internet Options -> Connections -> LAN settings. Check the proxy server option, set address: localhost and port: 8080, and click OK. Click Start to record the script. Once the recording is over, click Stop and save it as a JMX file.

Using Transaction Controllers in the Test Plan

Grouping test actions within a Transaction Controller makes the script easier to understand than recording an entire script in one full stretch.

- To add a Transaction Controller, right-click on Thread Group -> Add -> Logic Controller -> Transaction Controller.
- A Transaction Controller measures the overall time taken to perform the nested test elements.
- Once the entire script is executed, click on View Results Tree and select the Transaction Controller; it shows the overall time taken to process the request (load time), in milliseconds.
- For n users, once the scripts are executed there will be n Transaction Controllers within the View Results Tree. Each Transaction Controller, when clicked, shows three tabs: Sampler, Request, and Response. By default, the Sampler tab is shown.

Assume a user logs in to an online shopping website and searches for 5 different products. While the script is executing, its progress through the 5 products is reflected in a statement such as "Search Transaction Controller 1-4", meaning that of the 5 searches, the 4th has been completed.

Merging Scripts

Two or more scripts can be merged into a single test plan. Assume we have three scripts merged in this order:

1. Create Profile (executed by 10 users)
2. Basic Search (executed by another 10 users)
3. Signing up on an online shopping website (executed by another 5 users)

Steps to follow while merging the scripts:

1. Open any existing test plan, say 'CreateProfile.jmx'.
2. Right-click on Test Plan -> Merge.
3. Select the test plan you want to merge, say 'BasicSearch.jmx'.
4. A new thread group is displayed along with the existing thread group. To avoid confusion, it is better to rename the merged thread group(s).

When the merged script is executed, there will be 25 (10 + 10 + 5 users) Transaction Controllers in the test plan. During execution, each of the 3 merged scripts is tracked independently; for example, if the Transaction Controller shows 'Basic Search Transaction Controller 2-6', the 6th search of the 2nd merged script has just completed execution.

Input File (Comma-Delimited (CSV) File)

Recorded scripts can be executed with multiple users by parameterizing them. This can be done in two ways:

- User-defined variables
- An input file

A CSV file is very useful when executing JMeter scripts with n multiple users. The attached screenshot (CSV.jpg) has 3 columns: the first column is the username, the second is the password, and the third is the server name. On executing the parameterized script, JMeter fetches each value from the CSV file, substitutes it into the corresponding request, and sends it to the server.

Points to note while using a CSV file:

1. Open an Excel file and enter the required information (say username, password, server name, etc.) under each column that you would like to pass as an input parameter to the script.
2. Save the Excel file as .csv.
3. In the test plan, click on CSV Data Set Config and configure it as follows:
   - Filename: filename.csv
   - File encoding: leave it blank
   - Variable Names (comma-delimited): specify names for the parameters (values) in each column of the CSV file; later, use each variable name in the script as '${variablename}'
   - Delimiter (use \t for tab): ,
   - Allow quoted data?: False
   - Recycle on EOF?: True
   - Stop thread on EOF?: False
   - Sharing mode: All threads

Note: The first variable name entered in the Variable Names text box holds the value of the first column of the CSV file, and so on. Replace values with the corresponding variable names in the form '${variable name}' throughout the script. It would be even better to rename the
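As a sketch, a data file like the one described above can also be generated with Python's `csv` module instead of Excel. The filename and credentials here are hypothetical:

```python
import csv

# Hypothetical credentials for three virtual users; JMeter's CSV Data Set
# Config reads one row per thread iteration.
rows = [
    ("user1", "pass1", "test.example.com"),
    ("user2", "pass2", "test.example.com"),
    ("user3", "pass3", "test.example.com"),
]

# No header row: the Variable Names field in CSV Data Set Config
# (e.g. USERNAME,PASSWORD,SERVER) supplies the column names.
with open("users.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```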

Creating a Basic Test Plan in JMeter


How to Create a Basic Test Plan

Steps to follow while recording a script:

1. Open a new Test Plan.
2. Right-click on Test Plan -> Add -> Threads (Users) -> Thread Group.
3. Right-click on Thread Group -> Add -> Config Element -> HTTP Cookie Manager.
4. Right-click on Thread Group -> Add -> Config Element -> HTTP Header Manager.
5. Right-click on Thread Group -> Add -> Config Element -> HTTP Request Defaults.
6. Right-click on Thread Group -> Add -> Logic Controller -> Recording Controller.
7. Right-click on WorkBench -> Add -> Non-Test Elements -> HTTP Proxy Server.*
8. Click on HTTP Proxy Server, and from the Target Controller dropdown select 'Thread Group > Recording Controller'.**
9. Click Start to record the script.
10. Once the recording is over, click Stop and save it as a ".jmx" file.

*Before recording, go to Internet Explorer -> Tools -> Internet Options -> Connections -> LAN settings. Check the proxy server option, set address: localhost and port: 8080, and click OK.

**To exclude images, add rows under 'URL Patterns to Exclude' with the patterns ".*\.jpg", ".*\.gif", ".*\.png".

To set the number of users:

- Click on Thread Group.
- Set the Number of Threads (users).
- Set the Ramp-Up Period (in seconds): the time JMeter takes to start the full set of users.

To add test results (Listeners) to the test plan, right-click on Thread Group -> Add -> Listener -> View Results Tree. This report gives details of the Sampler (HTML page), request info, and response info.

Test Automation Frameworks


A framework is a set of assumptions, concepts, and practices that support automation.

Types of Frameworks

- Test Script Modularity Framework
- Test Library Architecture Framework
- Keyword-Driven Framework
- Data-Driven Framework
- Hybrid Framework

Test Script Modularity Framework: Requires the creation of small, independent scripts that represent the modules, sections, and functions of the application under test (AUT).

Test Library Architecture Framework: Divides the AUT into procedures and functions, creating library files that represent the modules, sections, and functions of the AUT.

Keyword-Driven Framework: Requires the development of data tables and keywords, independent of the test automation tool used to execute them and of the test script code that "drives" the application under test and the data. Keyword-driven tests look very similar to manual test cases: the functionality of the application under test is documented in a table, with step-by-step instructions for each test.

Data-Driven Framework: Input and output values are read from data files.

Hybrid Framework: Combines elements of the above frameworks, typically pairing keyword-driven tables with data-driven input files.
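As a minimal sketch of the data-driven approach, one generic test body runs once per row of a data table. The validator function and the rows here are hypothetical; in a larger suite the rows would be read from a CSV or Excel file:

```python
# Hypothetical function under test.
def is_valid_username(name):
    return 3 <= len(name) <= 12 and name.isalnum()

# Data table of (input, expected) pairs driving the test.
TEST_DATA = [
    ("alice", True),
    ("ab", False),        # too short
    ("a" * 13, False),    # too long
    ("bad name!", False), # non-alphanumeric
]

# Data-driven: the same test logic runs once per row.
def run_data_driven_tests():
    return [is_valid_username(name) == expected for name, expected in TEST_DATA]

print(run_data_driven_tests())  # [True, True, True, True]
```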

Everything about Performance Testing


What is Performance Testing?

Performance testing of an application is basically the process of understanding how the web application and its operating environment respond at various user load levels. In general, we want to measure the latency, throughput, and utilization of the website while simulating attempts by virtual users to simultaneously access the site. One of the main objectives of performance testing is to maintain a website with low latency, high throughput, and low utilization.

A performance test measures how well the application meets customer expectations in terms of:

- Speed: determines whether the application responds quickly
- Scalability: determines how much user load the application can handle
- Stability: determines whether the application is stable under varying loads

Why Performance Testing?

Performance problems are usually the result of contention for, or exhaustion of, some system resource. When a system resource is exhausted, the system is unable to scale to higher levels of performance. Maintaining optimum web application performance is a top priority for application developers and administrators. Performance analysis is also carried out for various purposes:

- During a design or redesign of a module or part of the system, more than one alternative may present itself. In such cases, evaluating a design alternative is the prime mover for an analysis.
- Post-deployment realities create a need for tuning the existing system. A systematic approach like performance analysis is essential to extract the maximum benefit from an existing system.
- Identifying bottlenecks in a system is largely a troubleshooting effort. It helps focus efforts on improving overall system response.
- As the user base grows, the cost of failure becomes increasingly unbearable. To increase confidence and provide advance warning of potential problems under load, analysis must be done to forecast performance under load.

Typically, to debug applications, developers execute their applications using different execution streams (i.e., they completely exercise the application) in an attempt to find errors. When looking for errors in the application, performance is a secondary issue to features; however, it is still an issue.

Objectives of Performance Testing

- End-to-end transaction response time measurements
- Measure the application server components' performance under various loads
- Measure the database components' performance under various loads
- Monitor system resources under various loads
- Measure the network delay between the server and the clients

Performance Testing Approach

Identify the Test Environment: Identify the physical test environment and the production environment, as well as the tools and resources available to the test team. The physical environment includes hardware, software, and network configurations. A thorough understanding of the entire test environment at the outset enables more efficient test design and planning, and helps you identify testing challenges early in the project. In some situations, this process must be revisited periodically throughout the project's life cycle.

Identify Performance Acceptance Criteria: Identify the response time, throughput, and resource utilization goals and constraints. In general, response time is a user concern, throughput is a business concern, and resource utilization is a system concern. Additionally, identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate which combination of configuration settings will result in the most desirable performance characteristics.

Plan and Design Tests: Identify key scenarios, determine variability among representative users and how to simulate that variability, define test data, and establish the metrics to be collected. Consolidate this information into one or more models of system usage to be implemented, executed, and analyzed.

Configure the Test Environment: Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary.

Implement the Test Design: Develop the performance tests in accordance with the test design.

Execute the Test: Run and monitor your tests. Validate the tests, test data, and results collection. Execute validated tests for analysis while monitoring the test and the test environment.

Analyze Results, Report, and Retest: Consolidate and share results data. Analyze the data both individually and as a cross-functional team. Reprioritize the remaining tests and re-execute them as needed. When all of the metric values are within accepted limits, none of the set thresholds have been violated, and all of the desired information has been collected, you have finished testing that particular scenario on that particular configuration.

Functions of a Typical Tool

- Record and replay: record the application workflow and play back the script to verify the recording.
- Execute: run the fully developed test script for a stipulated number of virtual users to generate load on the AUT (application under test). The dashboard displays the values for the desired parameters; the tool connects remotely to the app/web servers (Linux/Windows) and gathers resource utilization data.
- Analyze: generate the report; helps to analyze the results and troubleshoot issues.

Attributes Considered for Performance Testing

The following are only a few of the many attributes considered during performance testing:

- Throughput
- Response time
- Time (session time, reboot time, printing time, transaction time, task execution time)
- Hits per second, requests per second, transactions per second
- Performance measurement with a number of users
- Performance measurement with other interacting applications or tasks
- CPU usage
- Memory usage (memory leaks, thread leaks)
- All queues and I/O waits
- Bottlenecks (memory, cache, process, processor, disk, and network), such as:
  - Highly iterative loops in the code
  - Data not optimally aligned in memory
  - Poor structuring of joins in SQL queries
  - Too many static variables
  - Indexes on the wrong columns; inappropriate combinations of columns in composite indexes
- Network usage (bytes, packets, segments, and frames received and sent per second, bytes total/sec, current bandwidth, connection failures, connections active, failures at the network interface and protocol levels)
- Database problems (settings and configuration, usage, reads/sec, writes/sec, locking, queries, compilation errors)
- Web server (requests and responses per second, services succeeded and failed, server problems if any)
- Screen transitions
- Throughput and response time under different user loads
- CPU and memory usage under different user loads
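As a minimal sketch of the core measurements, throughput and response-time statistics for a test run can be derived from a list of recorded request timings. The sample data and test window here are hypothetical:

```python
import statistics

# Hypothetical response times (seconds) recorded for each request
# during a 10-second load test window.
response_times = [0.12, 0.15, 0.11, 0.30, 0.14, 0.22, 0.13, 0.95, 0.16, 0.18]
test_duration_s = 10.0

throughput = len(response_times) / test_duration_s  # requests per second
avg_latency = statistics.mean(response_times)
# Last of 19 cut points approximates the 95th percentile.
p95_latency = statistics.quantiles(response_times, n=20)[-1]

print(f"throughput: {throughput:.1f} req/s")  # throughput: 1.0 req/s
print(f"average latency: {avg_latency:.3f} s")
print(f"p95 latency: {p95_latency:.3f} s")
```

Note how a single slow outlier (0.95 s) barely moves the mean but dominates the p95, which is why load-test reports favor percentiles over averages.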

Working with Regular Expression Extractor in JMeter


Using the Regular Expression Extractor in JMeter

When automating tests, the test scripts often depend on input values that are generated during the test run. These values can be stored in a variable, but sometimes the test requires only part of the value. In such cases, a string extractor is needed. The Regular Expression Extractor serves this purpose by pulling out the values that match a pattern. Some common regular expression constructs:

- [ ] matches anything within the square brackets
- A dash inside square brackets specifies a range; e.g. [0-9] means all digits from 0 to 9
- ^ negates the expression; e.g. [^a-z] means everything except lowercase a to z
- $ checks for a match at the end of the target string

More are listed below. While scripting with JMeter, a Regular Expression Extractor is used to retrieve values from the server response. The value is passed as a parameter to the While and If controllers. It can also be used to replace any pre-defined variable. The regular expressions used are Perl-style.

To add a Regular Expression Extractor element to a test plan in JMeter:

1. Right-click the sampler element (the request to the server from which the value needs to be extracted).
2. Select Add -> Post Processors -> Regular Expression Extractor.

The Regular Expression Extractor element is explained in detail at http://jmeter.apache.org/usermanual/component_reference.html#Regular_Expression_Extractor

Extracting a Single String from the Response

Consider an example where a user successfully logs in to an online shopping website and is navigated to the user's home page, where 'Welcome Username' is displayed. To extract the username, the following settings can be used:

- Reference Name: Username
- Regular Expression: Welcome (.+?)
- Template: $1$
- Match No. (0 for Random): 1
- Default Value: match not found

Note on the special characters above:

- ( ) encloses the portion of the match string to be returned
- . matches any character
- + means one or more times
- ? stops when the first match is found; without the ?, the .+ would continue until the last possible match

Extracting Multiple Strings from the Response

Consider a scenario where the user selects an item that has a product ID and a category ID. To extract both IDs, the following settings can be used:

- Reference Name: My_ID
- Regular Expression: Product_ID = (.+?)\&Category_ID = (.+?)
- Template: $1$$2$
- Match No. (0 for Random): 1
- Default Value: match not found

Since we need to extract two values from the response, two groups are created, so the template is $1$$2$. The JMeter regex extractor saves the group values in additional variables. The following variables would be set:

- My_ID -> PR_001CAT_001
- My_ID_g0 -> Product_ID = "PR_001" Category_ID = "CAT_001"
- My_ID_g1 -> PR_001
- My_ID_g2 -> CAT_001

These variables can later be referred to in the JMeter test plan as ${My_ID_g1} and ${My_ID_g2}.

Extracting Only Numbers from the String

Consider a case where we need to extract only the numbers, for example from a product ID such as PR_001. To extract 001, the following settings can be used:

- Reference Name: ProductID
- Regular Expression: Product_ID = "PR_(.+?)"
- Template: $1$
- Match No. (0 for Random): 1
- Default Value: match not found
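The same extractions can be sketched with Python's `re` module. The response strings below are hypothetical, and a terminating character is added after each lazy group so the match has a clear boundary:

```python
import re

# Single string: pull the username out of a greeting.
greeting = "Welcome JohnDoe! Your cart is empty."
username = re.search(r"Welcome (.+?)!", greeting).group(1)
print(username)  # JohnDoe

# Multiple strings: pull both IDs out of the response.
item_response = 'Product_ID = "PR_001"&Category_ID = "CAT_001"'
product_id, category_id = re.search(
    r'Product_ID = "(.+?)"&Category_ID = "(.+?)"', item_response).groups()
print(product_id, category_id)  # PR_001 CAT_001

# Numbers only: anchor the fixed prefix and capture the rest.
number = re.search(r'"PR_(.+?)"', item_response).group(1)
print(number)  # 001
```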

Automation Index Formula – A checklist to help in identifying the tests that are feasible to automate


"Just because a test is automatable does not mean it should be automated." (Elfriede Dustin)

Automation testing begins with an analysis of what is feasible to automate, taking into account the budget, resources, schedule, and available expertise. Given limited resources and tight deadlines, we first need to prioritize what is to be automated. The required effort can be gauged with the help of the Automation Index.

Automation Index Formula

The Automation Index is the ratio of the number of test cases that are feasible to automate to the total number of test cases:

AI = TFA / TC

where
- AI = Automation Index
- TFA = Tests feasible to automate
- TC = Total number of test cases

A checklist helps identify the tests that are feasible to automate; tests that answer "yes" to the checklist items are good candidates for automation. Additional factors beyond the checklist should also be considered.

Have questions? Contact the software testing experts at InApp to learn more.
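The Automation Index formula above is a one-liner; the suite sizes in this sketch are hypothetical:

```python
# A minimal sketch of the Automation Index formula: AI = TFA / TC.
def automation_index(tests_feasible_to_automate, total_test_cases):
    return tests_feasible_to_automate / total_test_cases

# Hypothetical suite: 120 of 200 test cases are feasible to automate.
print(automation_index(120, 200))  # 0.6
```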
