Mobile Application Testing

Introduction

Handheld devices are evolving and becoming increasingly complex with the continuous addition of features and functionality. Testing is challenging in the handheld, wireless world because problems are new, or they show up in new ways. This paper highlights certain crucial areas a tester needs to concentrate on while testing mobile applications.

The four main areas to consider are:

- Understanding the behavior of the device
- UI & usability testing
- External factors
- Stress testing

Understanding the Behavior of the Device

If you are new to a device, the first thing you should do is get familiar with how the common device functions work, such as the phone, camera, contacts, calendar, and programs. Things to note while exploring the built-in applications:

- Overall color scheme/theme of the device
- Style and color of icons
- Progress indicators when pages are loading
- Menus: how they are invoked and the typical items they contain
- Overall responsiveness of applications on the device

UI & Usability Testing

The unique features of mobile devices pose a number of significant challenges for examining the usability of mobile applications, including screen orientation, multi-modality, small screen sizes, different display resolutions, soft keyboards, and touch screens.

Screen Resolution

If your application is supported on devices with different screen resolutions, test on the device with the smallest screen first, and verify that the application still looks good on larger screens as well.

Screen Orientation (Landscape/Portrait Modes)

If your device supports screen orientation changes, be sure to include plenty of testing where you rotate the device from portrait to landscape display, and vice versa, on every page of your application. It is also important to test input handling when the screen orientation changes. Try using the soft keyboard while changing the orientation repeatedly.
Attempt this repeatedly and quickly to see whether rapid changes in orientation have a negative effect on the application.

Touch Screens

Make sure the application supports the gestures its requirements call for: multi-touch (e.g., pinch, two-finger tap, two-finger scroll, spread, two-hand spread) and single touch (e.g., tap, double tap, scroll). The application should also be tested for long-touch and soft-touch behavior.

Soft Keyboards: Points to Consider

- Does the soft keyboard appear automatically?
- Does the first layer of the soft keyboard include shortcuts relevant to the highlighted field?
- Does a long touch on a soft character key bring up several different character choices?
- Can the soft keyboard be dismissed and re-displayed easily?
- Can the soft and hard keyboards be used interchangeably (if the device has both)?
- Do characters entered in password fields show up only as ****?

Multi-Modality

Multi-modality combines voice and touch (via a keypad or stylus) as input with relevant spoken output (e.g., users can hear synthesized, prerecorded, streaming, or live instructions, sounds, and music on their mobile devices) and onscreen visual displays, in order to enhance the mobile user experience and expand network operator service offerings. Make sure the application supports this functionality where the requirements call for it.

External Factors Affecting Mobile Application Testing

Network Connections

Since the app is going to be used on devices in various locations with various network connection speeds, it is important to plan testing coverage for the following scenarios:

- Only a Wi-Fi connection
- Only a 3G/2G connection
- No SIM card in the device
- Airplane mode (or all connections disabled)
- Using the network through a USB connection to a PC

Also test the intermittent network scenarios that a user might encounter in the real world.

Phone Calls

The tester has to check the application's behavior during incoming and outgoing calls.
Make sure that the application works fine during the following phone-call scenarios:

- The application is interrupted by an incoming call, and the originator hangs up the call.
- The application is interrupted by an incoming call, and the terminator hangs up the call.
- The application is interrupted by placing an outgoing call, and the originator hangs up the call.
- The application is interrupted by placing an outgoing call, and the terminator hangs up the call.

Other Interruptions

The tester also has to consider the following interruptions, which could have an impact on the functionality or overall responsiveness of the application:

- Text messages
- Voicemail notifications
- Calendar events
- Social media notifications (Facebook, Twitter, etc.)
- Alarm clocks
- Low-battery notifications

Device Settings

Explore your device's options, and change settings such as the following to see how they affect your application:

- Sound profiles: Does your application respect the device's sound settings?
- Device password/unlock pattern: Does your application still install correctly when prompted for a password or unlock pattern?
- Font: How does choosing a different font family, size, or style affect the appearance and usability of your application?
- Screen timeout/auto on-off: Is your application subject to screen dimming, or the screen automatically turning off, even while it is actually busy?
- Screen orientation: Does your application respect this setting?
- Connections: How does enabling or disabling Bluetooth or other connection types affect your application's behavior?

Stress Testing

Certain mobile applications consume more memory and CPU than desktop applications. Stress testing is a must to identify exceptions, hangs, and deadlocks that may go unnoticed during functional and user interface testing. Note the behavior of the application in the following scenarios:

- Load your application with as much data as possible, in an attempt to reach its breaking point.
- Perform the same operations over and over again, particularly those that load large amounts of data repeatedly.
- Perform the repeated operations at varying speeds: very quickly or very slowly.
- Leave your application running for a long period of time, both interacting with the device and just letting it sit idle, or have it perform some automatic task that takes a long time.
- Run multiple applications on your device so you can switch between your application and the others.
- After testing several functions, switch the device off and on again.

Have questions? Contact the software testing experts at InApp to learn more.
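As a rough sketch of the orientation and repetition scenarios above, the loop below drives a stand-in for a mobile automation driver. Note that FakeDriver, its orientation attribute, and perform_operation are illustrative stand-ins invented for this sketch, not a real device automation API; a real session would use a framework such as Appium against an actual device.

```python
import time

class FakeDriver:
    """Illustrative stand-in for a mobile automation driver.
    A real driver would talk to a device; this one just records state."""
    def __init__(self):
        self.orientation = "PORTRAIT"
        self.rotations = 0
        self.operations = 0

    def rotate(self, orientation):
        # Count only actual orientation changes
        if orientation != self.orientation:
            self.orientation = orientation
            self.rotations += 1

    def perform_operation(self):
        # Stand-in for a data-heavy action (loading a list, opening a page, ...)
        self.operations += 1

def stress_session(driver, cycles=50, delay=0.0):
    """Rotate rapidly and repeat the same operation at a chosen pace."""
    for _ in range(cycles):
        driver.rotate("LANDSCAPE")
        driver.perform_operation()
        driver.rotate("PORTRAIT")
        driver.perform_operation()
        if delay:
            time.sleep(delay)  # vary the pace: very fast (0) or slow
    return driver.rotations, driver.operations

driver = FakeDriver()
print(stress_session(driver, cycles=50))  # (100, 100)
```

Running the same session with different delay values covers the "very quickly or very slowly" variation; in a real run, the assertions would be on the application's state after each cycle rather than on simple counters.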
Software Quality Control vs Software Quality Assurance (QC vs QA)

Difference between QA and QC

This is one of the most frequently asked questions, with many different versions of the definition.

What is Software Quality Control (SQC)?

Software Quality Control (SQC) is the set of procedures used by an organization to ensure that a software product will meet its quality goals at the best value to the customer, and to continually improve the organization's ability to produce software products in the future. [Source: Wikipedia: http://en.wikipedia.org/wiki/Software_quality_control]

What is Software Quality Assurance (SQA)?

Software Quality Assurance (SQA) consists of a means of monitoring the software engineering processes and methods used to ensure quality. The methods by which this is accomplished are many and varied, and may include ensuring conformance to one or more standards, such as ISO 9000, or a model such as CMMI. [Source: Wikipedia: http://en.wikipedia.org/wiki/Software_quality_assurance]

"Product" and "process" are the keywords that distinguish QC from QA.

What is Quality Control?

Quality Control is product-oriented: it focuses on the product itself, and on whether it meets its quality goals and user requirements. Testing and reviews, for example, fall into this category.

What is Quality Assurance?

Quality Assurance is process-oriented: it focuses on whether the processes in a project conform to organizational standards and methodologies as defined. Since QA looks at the project as a whole, it can also oversee quality control.
How to identify dynamically changing objects in QTP ?

Consider an example where you have a tree with nodes (each node can be a folder or a directory). The tree as a whole is designed as a web table, and each sub-folder is again a sub-web-table. It is easy to identify the index of a tree node while recording, but during playback, when an additional folder or directory has been added, the index will have changed. In this kind of situation, where the index of the objects changes dynamically, there are two steps to perform:

1. Identify the properties of the object
2. Identify the index at run time

Identify the properties of the object

- Use Object Spy
- Add the object to the Object Repository (OR)

Identify the index at run time

Once the properties are identified, we can use the following piece of code:

For i = 0 To 1000
    ' Build a reference to the web table with the current index
    Set sObjTable = Browser("Browser").Page("Page").WebTable("index:=" & i)
    If sObjTable.Exist Then
        ' Match on a property whose value does not change at run time
        If sObjTable.GetROProperty("property name that is not changed") = <Value that is expected> Then
            Set sRootFolder = sObjTable
        End If
    Else
        Exit For
    End If
Next

In a similar manner, we can identify checkbox and radio-button objects whose index changes at run time. This worked for me; I hope it will be useful for you too.
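The probe-by-index idea above is not specific to QTP. The sketch below re-expresses the loop's logic in plain Python; find_by_stable_property, the find_table callback, and the "name" property are hypothetical stand-ins for this illustration, with dictionaries standing in for test objects.

```python
def find_by_stable_property(find_table, stable_property, expected, max_index=1000):
    """Probe indices 0..max_index until an object whose stable property
    matches the expected value is found; stop at the first missing index."""
    for i in range(max_index + 1):
        table = find_table(i)          # stand-in for WebTable("index:=" & i)
        if table is None:              # stand-in for .Exist returning False
            break
        if table.get(stable_property) == expected:
            return table
    return None

# Tiny usage example with dictionaries standing in for test objects
tables = [{"name": "root"}, {"name": "Documents"}, {"name": "Photos"}]
lookup = lambda i: tables[i] if i < len(tables) else None
print(find_by_stable_property(lookup, "name", "Photos"))  # {'name': 'Photos'}
```

The key design point is the same as in the QTP version: never rely on the recorded index, only on a property that stays stable between recording and playback.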
Work with multiple IE instances using QTP

If your IE-based application opens another window whose properties are the same as the first, it is difficult to identify the objects in the newly opened browser. For example, consider an application where, after login, the user is taken to a launch page from which the application can be launched in a new window. All the windows opened have the same set of properties, such as title and name.

Solution: We can select the objects on any page based on the creation time of the browser window.

Set oDesc = Description.Create
oDesc("micclass").Value = "Browser"
Set sDesk = Desktop.ChildObjects(oDesc)
' creationtime is 0 for the first browser opened, 1 for the second, and so on
Set objBrowser = Browser("creationtime:=1")
Set App_objPage = objBrowser.Page("title:=NAME")

Make sure all other IE instances are closed before the script starts execution.
Harness Test Automation

What is Harness Test Automation?

Harness is a server-side testing framework used for testing server-side functionality. Cactus (a Jakarta project for Java server-side testing) is a simple open-source test framework for unit testing server-side Java code (servlets, EJBs, tag libraries, filters, etc.), and Harness is built on top of it.

JUnit vs. Harness

JUnit tests run in the same JVM as the test subject, whereas Harness tests start in one JVM and are sent to the app server's JVM to be run. The package is sent via HTTP to the redirector. The redirector then unpacks the information, finds the test class and method, and performs the test.

How does Cactus work?

Cactus has a class called ServletTestCase with three kinds of methods:

1. beginXXX(WebRequest webRequest) { }
2. testXXX() { }
3. endXXX(WebResponse webResponse) { }

beginXXX() – Parameters are added to the request here. It is executed on the client side. For example, in a login test we can add a username and password to the WebRequest.

testXXX() – Here we call the respective actions to be tested, which pass or fail based on assertions. We have access to the implicit objects (request, response, and session), as well as the parameters that we pass via TestCases.xml, which are added in beginXXX(). It is executed on the server side.

endXXX() – The values in the response from the action can be tested here. It is executed on the client side.

We have extended ServletTestCase in our Harness framework with a class called FrameworkServletTestCase, which has the functionality to get configuration files, test cases, etc. FrameworkServletTestCase is further extended by another class called QAToolTestCase, which has postAssertions() and preAssertions() methods that form the basis of whether a test case passes or fails.

Cactus is configured in its property file, cactus.properties.
The parameters configured are:

cactus.contextURL = http://localhost:8080/CServer
cactus.servletRedirectorName = ServletRedirector
cactus.enableLogging = true

We need to add the servlet class and servlet mapping for the ServletRedirector in web.xml:

<!-- Cactus Servlet Redirector configuration -->
<servlet>
    <servlet-name>ServletRedirector</servlet-name>
    <servlet-class>org.apache.cactus.server.ServletTestRedirector</servlet-class>
</servlet>

<!-- Cactus Servlet Redirector URL mapping -->
<servlet-mapping>
    <servlet-name>ServletRedirector</servlet-name>
    <url-pattern>/ServletRedirector</url-pattern>
</servlet-mapping>

Test cases are configured in the test-case XMLs, where the parameters to be passed are defined. Test cases can be broken into smaller units called snippets, which are called from TestCases.xml.
Introduction to Exploratory Testing

With this procedure, you will walk through the product, find out what it is, and test it. This approach is called exploratory because you test while you explore. Exploratory testing is an interactive test process. It is a free-form process in some ways, and has much in common with informal approaches to testing that go by names like ad hoc testing, guerrilla testing, or intuitive testing. Unlike traditional informal testing, however, this procedure consists of specific tasks, objectives, and deliverables that make it a systematic process.

In operational terms, exploratory testing is an interactive process of concurrent product exploration, test design, and test execution. The outcome of an exploratory testing session is a set of notes about the product, the failures found, and a concise record of how the product was tested. When practiced by trained testers, it yields consistently valuable and auditable results.

The elements of exploratory testing are:

- Product exploration: Discover and record the purposes and functions of the product, the types of data it processes, and areas of potential instability. Your ability to perform exploration depends on your general understanding of technology, the information you have about the product and its intended users, and the amount of time you have to do the work.
- Test design: Determine strategies for operating, observing, and evaluating the product.
- Test execution: Operate the product, observe its behavior, and use that information to form hypotheses about how the product works.
- Heuristics: Heuristics are guidelines or rules of thumb that help you decide what to do. This procedure employs a number of heuristics that help you decide what should be tested and how to test it.
- Reviewable results: Exploratory testing is a results-oriented process. It is finished once you have produced deliverables that meet the specified requirements.
It is especially important for the test results to be reviewable and defensible for certification. As the tester, you must be prepared to explain any aspect of your work to the test manager and show how it meets the requirements documented in the procedure.