Open Source Tools for Test Management
The test case is a set of test inputs, execution conditions, and expected results developed for a particular objective, such as exercising a particular program path or verifying compliance with a specific requirement. In software testing, we might deal with either actual requirements or self-imposed requirements, no matter how well the formal requirements and specifications are defined. The software tester develops and executes test cases in the course of testing the software product. Many companies are outsourcing the execution of test procedures, and as a result, we are seeing more and more people for whom test cases are about execution, not planning. The following figure depicts a test case cycle.

Some of the most popular open source test management tools are:

1. Test Environment Toolkit (TETware)
The Test Environment Toolkit is a test execution management system that can be used to test products across multiple operating systems. It provides an easy-to-use framework that can be built to support local testing, remote testing, distributed testing, and testing on real-time and embedded systems.

2. Bugzilla Testopia
Bugzilla Testopia is a web-based test case management system designed to be a generic tool for tracking test cases on virtually anything in the engineering process, integrating bug reports with test case run results for centralized management of the software engineering process.

3. Mantis
MantisBT is a web-based bug-tracking system to aid product bug tracking. It is written in the PHP scripting language and works with MySQL, MS SQL, and PostgreSQL databases and a web server. MantisBT can be installed on Windows, Linux, Mac OS, OS/2, and others, and almost any web browser should be able to function as a client.

4. RTH (Requirements and Testing Hub)
RTH is a test-management tool that has requirements-management and bug-tracking capabilities. It offers a large variety of features designed to manage test cases, releases, test results, issue tracking, and reporting. The tool creates a common repository for all test assets and provides a structured approach to software testing.

5. qaManager
qaManager is a platform-independent, web-based application for managing software QA projects effectively. It facilitates project tracking, resource management, test case management, an online library, alerts, and more. It is powered by OpenXava and has a very simple installation.

6. Litmus (Mozilla)
Litmus is an integrated test case management and QA tool maintained by Mozilla. It is designed to improve workflow, visibility, and turnaround time in the Mozilla QA process. Litmus serves as a repository for test cases and test results and provides a query interface for viewing, reporting, and comparing test results.

7. TestLink
TestLink is a web-based test management tool that provides test specifications, test plans and execution, reporting, requirements specification, and collaboration with well-known bug trackers. Requirements specification and test specification are integrated, which allows users to create test projects and document test cases using this tool.

8. FitNesse
FitNesse is an integrated wiki and acceptance testing framework. The wiki facilitates the creation of web pages that are run as tests, so any user can go to a page and see whether the tests are passing. It also provides the means to write acceptance tests and run them automatically.

Have questions? Contact the software testing experts at InApp to learn more.
Software Testing Frameworks used at InApp
Software testing at InApp is tailored to meet client-specific needs, manage critical testing processes, and ensure consistently high quality through repeatable processes. The software testing frameworks employed here are as follows:

- Unified Selenium API Automation Framework
- Robot Framework
- QTP Modular Framework
- In-house automation frameworks

Unified Selenium API Automation Framework
In the Unified Selenium API Automation Framework, all objects that will be used to perform actions are identified and grouped under different nodes in an XML file. Updating a locator in the XML file reflects the change in all the areas where the locator is referred to, so the main advantage of this framework is ease of maintenance.

- The API is modeled after human language: if it looks like a button, we call it a button, regardless of implementation.
- Test code does not use locators verbatim; locators are aliased through the XML file.
- usAPI exposes locators by UI element type, e.g., button, link, tab, tree node, etc.
- usAPI transparently handles timing issues, logging, and setup/tearDown.

All tests are derived from org.usapi.BaseSeleniumTest. This exposes (among others) an object named 'app', of type BaseApplication. All interactions with the GUI go through this 'app' object; at no time should there be any need to invoke Selenium methods directly. BaseSeleniumTest configures, starts, and stops the Selenium client transparently to the test developer. It also provides generic methods for use in tests, such as assertTrue, assertFalse, isElementPresent, etc. Note that these methods are application-agnostic; methods required for a particular application (e.g., to execute SQL queries) do not belong in this class.

Robot Framework
Robot Framework is a generic test automation framework for acceptance testing and acceptance test-driven development (ATDD). Robot Framework has a modular architecture that can be extended with bundled and self-made test libraries.
Test data is defined in files; a file containing test cases creates a test suite, and placing these files into directories creates a nested structure of test suites. When test execution is started, the framework first parses the test data. It then utilizes keywords provided by the test libraries to interact with the system under test. Libraries can communicate with the system either directly or by using other test tools as drivers. Test execution is started from the command line or from continuous integration tools like Jenkins, Hudson, and the like. As a result, you get a report and log in HTML format as well as an XML output, which provide an extensive look into what your system does.

QTP Modular Framework
The QTP Modular Framework (also known as the Functional Decomposition Framework) is the approach where you first identify the reusable code in your test cases, then write this reusable code inside different functions and call these functions wherever required. The advantage of this approach is that the reusable code always stays in one place, which makes it easy to maintain: you only have to make changes in a single place. To reuse a piece of code, all you have to do is call the function wherever required. The only challenging part is identifying the reusable portions of your test case flow; once that is done, you just create functions and use them wherever required.

In addition to these popular frameworks, we customize and use frameworks based on client needs, such as combining the Unified Selenium API Automation Framework with hash maps used as temporary buffer space and an automated email program.
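Robot Framework's extension mechanism mentioned above is simple enough to sketch: a self-made test library is just a plain Python class whose public methods become keywords usable from test data files. The LoginLibrary class, its keywords, and the login logic below are invented purely for illustration; a real library would drive the actual system under test.

```python
# Minimal illustrative Robot Framework test library (all names invented).
# Robot Framework imports the class and exposes its public methods as
# keywords, e.g. "Log In" and "Status Should Be".

class LoginLibrary:
    # Share one library instance across all tests in a suite.
    ROBOT_LIBRARY_SCOPE = "SUITE"

    def __init__(self):
        self._logged_in = False

    def log_in(self, username, password):
        # A real keyword would interact with the system under test,
        # e.g. through Selenium2Library or an HTTP client.
        if username and password:
            self._logged_in = True

    def status_should_be(self, expected):
        actual = "logged in" if self._logged_in else "logged out"
        if actual != expected:
            raise AssertionError(f"Expected {expected!r} but was {actual!r}")

# Direct use, the way Robot Framework would invoke the keywords:
demo = LoginLibrary()
demo.log_in("alice", "secret")
demo.status_should_be("logged in")
```

In a test data file, the same steps would read `Log In    alice    secret` followed by `Status Should Be    logged in`.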
An Overview of Testing Frameworks
What is a Testing Framework?
A test automation framework is an overall system in which tests are designed, created, and implemented. It includes the physical structures used for test creation and implementation as well as the logical interactions among those components.

Need for a Testing Framework
If a group of testers is working on the same or different projects and each tester applies their own strategy to automate the application under test, the possibility of duplication is high, and the time taken to understand each strategy is also high. So we need an environment that is independent of the application and can scale with the application under test. For this purpose, we use a testing framework.

The testing framework is responsible for:
- Designing a centralized and standardized logging facility, resulting in self-documenting test output
- Creating a mechanism to drive the application under test
- Creating a mechanism to execute the tests
- Creating a mechanism to report results

Advantages of testing frameworks:
- Improved code re-use
- Reduced script maintenance
- Independence from the application under test
- Easy reporting

Types of Testing Framework
- Modular-Based Testing Framework
- Data-Driven Testing Framework
- Keyword-Driven Testing Framework
- Hybrid Testing Framework

Modular-Based Testing Framework
A module is a small independent script that performs a specific set of tasks. It creates a layer in front of the component and hides the components from non-technical users as well as applications. The small components are added up to build a large test set.
Advantages of the Modular-Based Testing Framework:
- The fastest way to generate a script
- Modular division of scripts leads to easier maintenance

Data-Driven Testing Framework
In a data-driven framework, test input and output values are read from data pools, DB sources, CSV files, Excel files, DAO objects, ADO objects, etc. Navigation through the program, reading the data files, and logging test status information are all coded in the test script.

Advantages of the Data-Driven Testing Framework:
- Datasheets can be designed while application development is still in progress
- Reduces data redundancy
- Data input/output and expected results are stored as easily maintainable text records in the database
- Changes to the test scripts do not affect the test data
- Test cases can be executed with multiple sets of data

Keyword-Driven or Table-Driven Testing Framework
The keyword-driven framework requires the development of data tables and keywords, independent of the test automation tool used to execute them, and test script code that "drives" the application under test and the data. Keyword-driven tests look very similar to manual test cases: the functionality of the application under test is documented in a table as well as in step-by-step instructions for each test. There are two basic components in a keyword-driven framework: the keyword and the application map.

Keyword or Action
A keyword is an action that can be performed on a GUI component. For example, for the GUI component Textbox, some keywords (actions) would be InputText, VerifyValue, VerifyProperty, and so on.

Application Map or Control
An application map provides named references for GUI components. Application maps are nothing but an 'object repository'.

Hybrid Testing Framework
The most commonly implemented framework is the best combination of all the techniques: it combines the keyword-driven, modular, and data-driven frameworks.
The Hybrid Testing Framework allows data-driven scripts to take advantage of the powerful libraries and utilities that usually accompany a keyword-driven architecture. The framework utilities can make the data-driven scripts more compact and less prone to failure. Tests are fully scripted in a Hybrid Testing Framework, thus increasing the automation effort. The Hybrid Testing Framework also implements extensive error and unexpected-window handling. It is used for the automation of medium to large applications with a long shelf life.

Advantages of the Hybrid Testing Framework:
- The fastest and least costly way to develop automation scripts, due to higher code re-usability
- Utilizing a modular design, and using files or records to both input and verify data, reduces redundancy and duplication of effort in creating automated test scripts
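The hybrid combination described above can be made concrete in a short sketch: keywords map to functions, an application map aliases GUI components, and the test itself is just data rows. Everything here (the map entries, the keyword names, and the in-memory dictionary standing in for a real GUI driver) is invented for illustration.

```python
# Illustrative hybrid framework: keyword-driven structure fed by
# data-driven rows. A dict stands in for the real GUI under test.

APPLICATION_MAP = {  # "object repository": named references for GUI components
    "username_box": "id=user",
    "password_box": "id=pass",
}

def input_text(gui, element, value):
    # Keyword: type a value into a component looked up via the map.
    gui[APPLICATION_MAP[element]] = value

def verify_value(gui, element, expected):
    # Keyword: check the component's current value.
    if gui.get(APPLICATION_MAP[element]) != expected:
        raise AssertionError(f"{element} != {expected!r}")

KEYWORDS = {"InputText": input_text, "VerifyValue": verify_value}

# Data rows (keyword, element, value); in practice these would be read
# from a CSV or Excel sheet, which is the data-driven part.
TEST_TABLE = [
    ("InputText", "username_box", "alice"),
    ("InputText", "password_box", "secret"),
    ("VerifyValue", "username_box", "alice"),
]

def run(table):
    gui = {}
    for keyword, element, value in table:
        KEYWORDS[keyword](gui, element, value)
    return gui

final_state = run(TEST_TABLE)
```

Because the keyword functions and the data rows are independent, either can change without touching the other, which is the point of the hybrid approach.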
How to Write a Quality Bug Report?
One of the important deliverables in software testing is the bug report. Writing a good bug report is an important skill for a software tester; documenting a well-written bug report requires a combination of testing and communication skills. The bug report is the medium of communication between the tester and the developer when the tester uncovers a defect: it explains the gap between the expected result and the actual result.

A bug report should have the following:
- The title
- Steps to reproduce
- Test data
- Expected and actual results
- Attachments

The Title
A good title helps reduce duplicate issues and accurately summarizes the issue. Include the specific names of the components involved in your bug in the title. A good summary will not be more than 50-60 characters. Avoid generic titles such as "ABC is not working properly" or "The issue with the page"; when we write a title, we should specify what makes it "not working".
Bad: "Application crashed"
Good: "Canceling from Change Password dialog caused application crash"
Bad: "Issues with GUI on the navigation bar"
Good: "The navigation bar is wrapping to a second line"

Steps To Reproduce
This is the body of the report; it tells how to reproduce the bug. Keep this section concise and easy to read: the steps should be short and to the point, and it is better to state prerequisites to reduce the number of steps. It's a good exercise to reproduce the bug by following the steps you've just outlined; this helps ensure you've included everything the developer will need to reproduce it as well.

Test Data
If the bug is specific to a particular scenario, it is better to give the test data so that the developer can recreate the scenario.

Expected and Actual Results
When describing expected results, explain what should happen, not what shouldn't happen. Instead of writing "The app got crashed", we can write "The user should be taken to the XYZ screen".
When describing actual results, describe what did happen, not what didn't happen. Instead of writing "The user wasn't taken to the page", we can write "The user remained on the ABC page".

Attachments
Attachments add value to the bug report by offering proof. Attachments can be images, videos, or log files.

Images
Images are an essential part of the bug report. The bug report should be effective enough on its own to enable the developers to reproduce the problem; screenshots should serve as a medium for verification. If you attach screenshots to your bug reports, ensure that they are not too heavy in terms of size: use a format like JPG or GIF, but definitely not BMP. Attach the image files directly to the report; don't put images in a Word document or a zip file. Highlight the areas of the bug in the image.

Video
A video should be provided if the steps are complex. Actions in the video should match the steps listed in the bug report, and videos should be trimmed to show only the bug.

Log Files
Make it a point to attach the relevant log files; this helps the developers analyze and debug the system easily. If the logs are not too large, say about 20-25 lines, you can paste them into the bug report. If they are larger, add them to your bug report as an attachment. Avoid proprietary file types (like .docx); use .txt instead.
TestLink – Test Management System
TestLink is a web-based test management system that offers support for test cases, test suites, test plans, test projects, and user management, as well as various reports and statistics. It is developed and maintained by Team Test and facilitates software quality assurance.

How to work with TestLink:
1. Create a project
2. Create test cases (test suites) for this project
3. Create a test plan
4. Specify the build of the project you are going to test
5. Add test cases to the test plan
6. Assign test cases to test engineers
7. Execute test cases (test engineers)
8. See reports and charts

Additional facilities:
- Assigning keywords (we may form a group of test cases for regression tests)
- Specifying requirements (we may bind them with test cases in a many-to-many relation and see whether our test cases cover our requirements)
- Events log (you can see the history of all the changes)

STEP 1. CREATE A PROJECT
To create a project, go to the Test Project Management section.

STEP 2. CREATE A PROJECT – IMPORTANT FIELDS
- Name
- ID (used to form a unique test case ID; e.g., FT-03 means the test case is created for the Fenestra project and has ID=3)
- Project description (the aim of the project, the target group, the business logic, the test environment)
Enhanced features:
- Requirements feature – we may specify requirements and see whether they are well covered by test cases
- Testing priority – we may assign a priority to test cases (high, medium, low)
- Test automation – we may specify whether a test should be performed manually or automatically
You can now set this project, as in Mantis, in the top right corner.

STEP 3. CREATE TEST CASES

STEP 4. CREATE TEST CASES – CREATE A TEST SUITE
A test case includes:
- Title
- Summary
- Preconditions
- Execution type (manual or automated)
- Test importance (high, medium, or low)
We may also import and export test suites and test cases (in .XML/.XLS format): we export them from one project and import the file into another.

STEP 5. SPECIFY A TEST PLAN
TestLink will not allow you to execute test suites if you do not create a test plan and specify a test build. Begin with the plan; the current test plan will appear in the top right corner.

STEP 6. SPECIFY A BUILD
After you've added a test plan, the menu for adding a test build appears. Add a new build there.

STEP 7. ADD TEST CASES TO THE PLAN
Unfortunately, only test cases, not test suites or the whole test specification, can be added to a test plan. Until you select an individual test case, the "Add to Test Plans" button will not appear. Then you can choose which test plans you want to add the selected test case to.

STEP 8. ASSIGN TEST CASE EXECUTION TO TESTERS
Before assigning test cases to testers, you should create a database of users with appropriate roles: add the users you need by filling in the form. Then you can assign test case execution. You can assign test cases to testers and send them email notifications.

STEP 9. EXECUTE TESTS
To start executing tests, the test engineer goes to the Test Execution section and chooses a test case. You may also connect TestLink with our bug-tracking system, Mantis. During execution, after clicking on "Create New Bug", the bug is created through the Mantis user interface, and the test engineer writes the issue ID in TestLink. Execution history is saved.

STEP 10. SEE REPORTS AND CHARTS
After test case execution is finished, you may see the results using the Test Reports section, which offers the following:
- Test Plan Report – the document has options to define the content and document structure; you may choose the information you want to get
- Test Report – also has options to define content and document structure; it includes test cases together with test results
- Test result matrix
- Charts – results by tester (only unassigned test cases appear in the diagram) and results for top-level suites (e.g., 1. Log in to the application, 2. News module)
- Blocked, Failed, and Not Run test case reports – these show all of the currently blocked, failing, or not-run test cases
- General Test Plan Metrics – shows only the most current status of a test plan by test suite, owner, and keyword
- Query metrics – work like filters in Mantis
- Requirements-based report – if we have specified some requirements and connected them with test cases, we can see how well they are covered

ADDITIONAL FACILITIES – ASSIGNING KEYWORDS
Go to the "Assign Keywords" section. Select a test suite, and then you will be able to go to "Keywords Management". Add keywords if there are none at all, or none that you need. Now you can add keywords to both test suites and test cases, either all the keywords (>>) or only one keyword (>). Then you will be able to see a useful chart demonstrating the results by keyword.

ADDITIONAL FACILITIES – SPECIFYING REQUIREMENTS
Open the Requirements Specification section and create requirements; note that there are different types of requirements. Then assign requirements to test cases: select a test suite or test case and assign it to one or more requirements (requirements can be assigned to test cases in a many-to-many relation).

Benefits of using TestLink:
1. We have all the documentation structured and organized.
2. We solve the problem of version control.
3. We can control the testing process (events log + different kinds of reports).
4. We can see whether all the requirements are covered with test cases.
5. We can select test cases for regression testing.
6. We can see the results of testing in a very clear and easy-to-use form.
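The import/export facility mentioned in the steps above works on XML files. As a rough illustration of generating such a file programmatically, the sketch below builds a test-case document with Python's standard library; the element names approximate TestLink's import format and may need adjusting for a given TestLink version, and the test case content is invented.

```python
# Sketch: building a test-case XML file for import, using only the
# standard library. Element/attribute names are an approximation of
# TestLink's import format, not a guaranteed schema.
import xml.etree.ElementTree as ET

def build_testcase_xml(name, summary, steps):
    root = ET.Element("testcases")
    tc = ET.SubElement(root, "testcase", name=name)
    ET.SubElement(tc, "summary").text = summary
    steps_el = ET.SubElement(tc, "steps")
    for number, (action, expected) in enumerate(steps, start=1):
        step = ET.SubElement(steps_el, "step")
        ET.SubElement(step, "step_number").text = str(number)
        ET.SubElement(step, "actions").text = action
        ET.SubElement(step, "expectedresults").text = expected
    return ET.tostring(root, encoding="unicode")

xml_text = build_testcase_xml(
    "Login with valid credentials",
    "Verify a registered user can log in.",
    [("Open the login page", "Login form is shown"),
     ("Submit valid credentials", "User lands on the dashboard")],
)
```

Generating import files this way is handy for migrating test cases in bulk instead of entering them one by one through the UI.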
Good Cyber Security Practices
1. XSS – Cross-Site Scripting Vulnerability
Parameter values sent by the client browser to the web application are not sufficiently inspected by the server, so an attacker can inject HTML or JavaScript code in place of legitimate values. This vulnerability can be exploited by an attacker to carry out cross-site scripting (XSS) in order to execute code in the victim's browser. It is often used to recover the session cookies of a legitimate user in order to steal his session or to usurp his identity and his rights.
Recommendations: Filter or encode every parameter sent to the application; for example, drop or escape special characters such as <, >, /, ', " ...

// To encode
public static string Encode(this string Instance)
{
    return System.Web.HttpUtility.HtmlEncode(Instance);
}
public static string UriEncode(this string Instance)
{
    return System.Uri.EscapeDataString(Instance);
}
// To decode
public static string Decode(this string Instance)
{
    return System.Web.HttpUtility.HtmlDecode(Instance);
}
public static string UriDecode(this string Instance)
{
    return System.Uri.UnescapeDataString(Instance);
}

2. Weakness in the Management of Rights
Ensure that all features or business functions are protected by an effective access control mechanism. A matrix should map user roles to features to avoid any unauthorized access. Do not assume that users will be unaware of special or hidden URLs or APIs. Implement an authentication process in order to protect sensitive resources or features against anonymous access.
Recommendations:
- Protect passwords by encryption or hash mechanisms.
- Ensure only POST calls to the server to avoid logging of sensitive parameters.
- Implement industry-standard token-based authentication from the server.
- Check authorization based on the server token.
- Ensure that all parameters are encrypted before use.
- Cross-check calculations and selections from the client before saving transactions on the server.

3. Information Leak
Parameters can be passed to dynamic websites via the URL (GET method). Explicit and sensitive information may be present in these parameters, such as the Active Directory domain, the user name, the user password, or information about the software architecture. This information can be retrieved by observing the clear stream on the network or by observing the logs of a proxy server possibly located between the client and the server.
Recommendations:
- Ensure only POST methods to the server.
- Only use encrypted passwords.
- Ensure no credit card information is passed via GET.

4. Cookie Contains Sensitive Data
User credentials, such as logins and/or passwords, may be stored in the browser's cookies. An attacker having access to these cookies may be able to steal the credentials and so spoof users' identities on the service. Cookies can be retrieved, for example, on public workstations when a user forgets to log off, or through a cross-site scripting (XSS) attack.
Recommendations: Switching to server-based cookies and tokens eliminates the possibility of sensitive data in cookies. In order to maintain a user's session across his browsing, cookies should only contain a randomly generated session identifier, which cannot be predicted. This kind of feature is already implemented in most web development languages and frameworks.

HttpCookie _Cookie = new HttpCookie("CookieName", "CookieValue");
_Cookie.Expires = DateTime.Now.AddDays(7);

5. HttpOnly Option
Set the HttpOnly option on the cookie. A cookie is a small piece of data sent from a website and stored in a user's web browser while the user is browsing a website. When the user browses the same website in the future, the data stored in the cookie can be retrieved by the website to notify it of the user's previous activity. Cookies are typically used to store session identifiers in order to allow the user to browse the website without re-entering his credentials.
If the HttpOnly flag is included in the server's HTTP response header, the cookie cannot be accessed through client-side script. As a result, even if a cross-site scripting (XSS) flaw exists and a user accidentally accesses a link that exploits this flaw, the browser will not reveal the cookie to a third party. If the server does not set the HttpOnly flag on session cookies, users' browsers create a traditional, script-accessible cookie, and the session identifier stored in the cookie becomes vulnerable to theft or modification by malicious script.
Recommendations: Set the HttpOnly flag on session cookies in order to prevent any access from client-side scripts. As far as possible, renew session cookies on every request in order to prevent their exploitation by an attacker.

HttpCookie _Cookie = new HttpCookie("CookieName", "CookieValue");
_Cookie.Expires = DateTime.Now.AddDays(7);
// HttpOnly: gets or sets a value that specifies whether the cookie is accessible by client-side script.
// true if the cookie has the HttpOnly attribute and cannot be accessed through a client-side script; otherwise, false. The default is false.
_Cookie.HttpOnly = true;
// Secure: gets or sets a value indicating whether to transmit the cookie using Secure Sockets Layer (SSL), i.e., over HTTPS only.
// true to transmit the cookie over an SSL connection (HTTPS); otherwise, false. The default is false.
_Cookie.Secure = true;
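The same HttpOnly/Secure recommendation carries over to other stacks. As an illustration alongside the C# snippet, here is a sketch with Python's standard library that puts only a random session identifier in the cookie and marks it HttpOnly and Secure:

```python
# Build a Set-Cookie header whose cookie holds only a random session id
# and is flagged HttpOnly (no client-side script access) and Secure
# (transmitted over HTTPS only), using only the standard library.
from http.cookies import SimpleCookie
import secrets

cookie = SimpleCookie()
cookie["session_id"] = secrets.token_hex(16)  # unpredictable identifier
cookie["session_id"]["httponly"] = True
cookie["session_id"]["secure"] = True
cookie["session_id"]["path"] = "/"

header = cookie.output(header="Set-Cookie:")
```

The resulting header looks like `Set-Cookie: session_id=...; HttpOnly; Path=/; Secure`, so even a successful XSS injection cannot read the session identifier from `document.cookie`.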
Importance of Security Testing
Why Security Testing?
With the cyber world becoming more and more vulnerable to attacks, security is something that cannot be compromised. In order to develop secure applications, one really needs to use a security development lifecycle: security must be considered and tested throughout the project lifecycle of any application.

What are the processes involved in Security Testing?
The security testing process involves evaluating the quantum of risk within the application under test and pointing out the security vulnerabilities using various techniques and tools. By this it is possible to ensure that there is no data theft, no unauthorized access, and no security compromise. Security testing involves vulnerability scanning, security scanning, penetration testing, security auditing, and security review.

Vulnerability scanning is an automated process, usually performed with a software tool such as SARA that scans for basic known vulnerabilities.

Next in line is security scanning, where an assessment is done manually along with the software scanning. Although tools help in building a robust application, every tool has its own bottlenecks. That is the reason, in addition to automated scanning, one is required to perform manual testing: going through system responses, examining the log files, error messages, error codes, and the like.

The other aspect is pen testing, or penetration testing. A real-time simulation environment is used to perform penetration testing. It is a black-box, hacker's approach, applied the way hackers would, but carried out in a controlled environment. It is performed internally within the organization without breaching any security terms.

Security auditing is for specific control or compliance issues. Usually, the compliance team or the risk evaluation team performs this security assessment.
Frequent audits make the application less error-prone and less vulnerable. Finally, there is security review, which is static testing: a security review is performed as per industry standards by reviewing documents and architecture diagrams and performing gap analysis. It is basically done through code reviews, considering the architecture diagrams and documents, which are very important. All these processes in security testing ensure that the applications developed are resilient to any kind of security risk.
Test Automation with Selenium
Selenium 2 is the newest addition to the Selenium toolkit. This automation tool provides all sorts of test features, including a more cohesive and object-oriented API as well as an answer to the limitations of the old implementation. Selenium2Library is a popular Robot Framework test library. It runs tests in a real browser instance, works with most modern browsers, and can be used with both the Python and Jython interpreters. Selenium is a set of different software tools, each with a different approach to supporting test automation. The entire suite of tools results in a rich set of testing functions specifically geared to the needs of testing web applications of all types. One of Selenium's key features is support for executing one's tests on multiple browser platforms. Selenium is highly flexible: there are many ways one can add functionality to both Selenium test scripts and Selenium's framework to customize test automation, and since Selenium is open source, the source code can always be downloaded and modified. The operations it performs are equally flexible, allowing many options for locating UI elements and comparing expected test results against actual application behavior. This is perhaps Selenium's greatest strength when compared with other automation tools.
Cloud Testing – Nuts & Bolts
Need for Cloud Testing – Issues and Challenges
Traditional testing has limitations such as latency, performance, concurrency, and planning issues, and is far too expensive. Cloud testing is a big game changer that surpasses the challenges faced by traditional testing. It can be used to provide a flexible, scalable, and affordable testing environment at all times or on demand. Cloud testing typically involves monitoring and reporting on real-world user traffic conditions as well as load-balance and stress testing for a range of simulated usage conditions. The availability of virtual machines eases the process of setting up, using, reusing, and running test setups. Complex test setups are available as stacked templates, making it easy to integrate complex automation into various processes to build complex cloud testing systems.

Cloud testing is a great fit for an agile environment. It can leverage the whole life cycle of web or mobile app development, right from the beginning of development until the application is in production. Today, if you need to generate thousands of virtual users to test a specific web application, the number of servers required for that test can be deployed within a couple of minutes. Best of all, you only need to pay for those servers for the duration of the test, making it more economical and viable. Cloud testing is flexible enough that it can be used for continuous performance testing. TestMaker runs tests in multiple cloud testing environments, making it possible to manage performance from different geographical locations. Testers get a real-time testing experience of applications on real browsers and operating systems rather than simulated environments. Cloud testing eliminates the cost of building and maintaining a test lab for load and performance testing. If a specific test environment is required, just use it via the cloud; there is no need to provision expensive and difficult-to-manage quality test labs.
Cloud-based testing poses its own operational challenges in real-world scenarios. One of the major challenges is creating an on-demand test environment: current cloud technology offers no supporting solutions that help cloud engineers build a cost-effective cloud test environment. For scalability and performance testing, current frameworks and solutions do not support features such as dynamic scalability, scalable testing environments, SLA-based requirements, and cost models. Testing security is yet another concern inside clouds, as security services become a necessary part of modern cloud technology; engineers must deal with issues and challenges in security validation and quality assurance for SaaS (Software as a Service) and clouds. Integration testing in the cloud may be skipped for lack of time or because of additional integration cost, which subsequently affects the performance of the application.

Cloud testing is under constant evolution, continuously bringing new opportunities and challenges. It reduces the need for hardware and software resources and offers a flexible and efficient alternative to traditional testing. Finally, moving testing to the cloud is seen as a safe bet, since it does not involve sensitive corporate data and has minimal impact on the organization’s business activities. Migrating self-testing to the cloud would bring about a notion of test support as a service. Have questions? Contact the cloud testing experts at InApp to learn more.
Testing Web Services using ApacheBench

ApacheBench (ab) is a tool for benchmarking an Apache Hypertext Transfer Protocol (HTTP) server. It shows how many requests per second the server is capable of handling. Note that ApacheBench uses only one operating system thread regardless of the concurrency level specified by the -c parameter. In some cases, especially when benchmarking high-capacity servers, a single instance of ApacheBench can itself be a bottleneck; to overcome this, additional instances of ApacheBench may be run in parallel to more fully saturate the target URL. ApacheBench was recently used to test the capability of the Caleum server, to find the threshold of the total number of web requests it can serve concurrently in its current configuration.

Working with ApacheBench

Installing on a Windows machine:
1. Download the software from http://www.apache.org/dist/httpd/binaries/win32/ by selecting any of the mirrors on the site, and choose the latest version.
2. Double-click the installer. While installing, provide the following information: Network Domain: localhost; Server Name: localhost; Admin Email: a real or placeholder email. Leave all default checkboxes checked.
3. After installation, an icon is displayed in the system tray, which means Apache 2.2 has been installed and started. To verify, type http://localhost/ in the browser; if Apache 2.2 has started, the message “It works!” is displayed in bold. To stop or restart the server, click the tray icon -> Apache 2.2 -> Stop/Restart.
4. To measure the performance of a server, you may need to point your files to Apache. Since we are doing web service testing, this step is optional.

Execution: Open the command prompt, go to the path where ApacheBench is installed, for example “C:\Program Files\Apache Software Foundation\Apache2.2\bin”, and type:

ab -n 100 -c 10 http://{webserver hostname:port}/{document path}

You can also provide authentication details as parameters along with the document path.
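When an ab run is driven from a script, it helps to assemble the argument list programmatically rather than concatenating strings. The following is a minimal sketch; the function name, defaults, and URL are illustrative, not part of ApacheBench itself:

```python
# Build an ApacheBench command line programmatically.
# build_ab_command is a hypothetical helper; the URL below is a placeholder.

def build_ab_command(url, requests=100, concurrency=10, auth=None):
    """Return the ab argument list for a simple benchmark run.

    auth, if given, is a "username:password" string passed to -A
    (Basic WWW Authentication).
    """
    cmd = ["ab", "-n", str(requests), "-c", str(concurrency)]
    if auth:
        cmd += ["-A", auth]
    cmd.append(url)
    return cmd

cmd = build_ab_command("http://localhost:8080/service")
print(" ".join(cmd))  # prints: ab -n 100 -c 10 http://localhost:8080/service
```

The resulting list can be handed to a process launcher (for example Python’s subprocess module) on a machine where ab is installed.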
Other options that can be used are:

-n requests      Number of requests to perform
-t timelimit     Seconds to max. wait for responses
-v verbosity     How much troubleshooting info to print
-b windowsize    Size of TCP send/receive buffer, in bytes
-C attribute     Add cookie, e.g. ‘Apache=1234’ (repeatable)
-H attribute     Add arbitrary header line, e.g. ‘Accept-Encoding: gzip’; inserted after all normal header lines (repeatable)
-A attribute     Add Basic WWW Authentication; the attribute is a colon-separated username and password
-P attribute     Add Basic Proxy Authentication; the attribute is a colon-separated username and password
-x attributes    String to insert as table attributes
-y attributes    String to insert as tr attributes
-z attributes    String to insert as td or th attributes
-Z ciphersuite   Specify SSL/TLS cipher suite (see openssl ciphers)
-c concurrency   Number of multiple requests to make
-T content-type  Content-type header for POSTing, e.g. ‘application/x-www-form-urlencoded’; the default is ‘text/plain’
-g filename      Output collected data to a gnuplot-format file
-e filename      Output a CSV file with percentages served
-p postfile      File containing data to POST; remember also to set -T
-f protocol      Specify SSL/TLS protocol (SSL2, SSL3, TLS1, or ALL)
-X proxy:port    Proxy server and port number to use
-i               Use HEAD instead of GET
-V               Print the version number and exit
-k               Use the HTTP KeepAlive feature
-d               Do not show the percentiles-served table
-S               Do not show confidence estimators and warnings
-r               Don’t exit on socket receive errors
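Several of these options are usually combined in one invocation. The sketch below assembles an ab command that POSTs form-encoded data, combining -p, -T, and -k from the list above; the file name post.txt and the URL are hypothetical placeholders:

```python
# Sketch: an ab invocation that POSTs form-encoded data.
# post.txt and the target URL are hypothetical placeholders.
post_cmd = [
    "ab",
    "-n", "200",               # total number of requests
    "-c", "20",                # concurrency level
    "-p", "post.txt",          # file containing the POST body
    "-T", "application/x-www-form-urlencoded",  # matching Content-Type header
    "-k",                      # reuse connections via HTTP KeepAlive
    "http://localhost/submit",
]
print(" ".join(post_cmd))
```

As the option list notes, -p has no effect without a matching -T, so the two are set together here.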
-h               Display usage information (this message)
-w               Print out results in HTML tables

Output like the following is displayed in the command prompt after the execution:

Concurrency Level:      10
Time taken for tests:   321.212 sec
Complete requests:      1000
Failed requests:        11
   (Connect: 0, Receive: 0, Length: 11, Exceptions: 0)
Write errors:           0
Document Length:        21 bytes
Total transferred:      22124 bytes
HTML transferred:       11994 bytes
Requests per second:    1.01 [#/sec] (mean)
Time per request:       1216.319 [ms] (mean)
Time per request:       156.272 [ms] (mean, across all concurrent requests)
Transfer rate:          1.81 [Kbytes/sec] received
                        1.61 kb/s sent
                        0.42 kb/s total

Connection Times (ms)
              min   mean  [+/-sd]  median    max
Connect:      200    200   121        212   3000
Processing:   301   2121   612.8     1921   3267
Waiting:      211   2112    21        121   1211
Total:        711   3546   799.3     3281   6547

Percentage of the requests served within a certain time (ms)
  50%   1212
  66%   3823
  75%   2211
  80%   4555
  90%   5555
  95%   6666
  98%   7777
  99%   8888
 100%   8899 (longest request)

The report shows the total time taken to complete the entire test and the numbers of completed and failed requests. If there are any failures, an additional line breaks them down by type: Connect, Receive, Length, and Exceptions. While testing the web server, we mainly focus on Connect and Receive failures. Length failures occur when a response’s content length differs from the expected length, for example when the content length is not fixed or additional data such as ads appears on the page and goes beyond the specified length.
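When many ab runs are compared, the headline figures are often pulled out of the captured report automatically. The sketch below parses an abbreviated version of the sample output above; the function name is hypothetical, and the text layout assumed is the standard ab report format:

```python
import re

# Abbreviated sample of the ab report shown above.
SAMPLE = """\
Concurrency Level:      10
Time taken for tests:   321.212 sec
Complete requests:      1000
Failed requests:        11
Requests per second:    1.01 [#/sec] (mean)
"""

def parse_ab_report(text):
    """Extract completed/failed request counts and mean throughput
    from ab's plain-text report."""
    complete = int(re.search(r"Complete requests:\s+(\d+)", text).group(1))
    failed = int(re.search(r"Failed requests:\s+(\d+)", text).group(1))
    rps = float(re.search(r"Requests per second:\s+([\d.]+)", text).group(1))
    return {"complete": complete, "failed": failed, "rps": rps}

print(parse_ab_report(SAMPLE))
```

Feeding each run’s report through a parser like this makes it easy to chart failure counts and throughput across configurations.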