4 Leading Test Automation Tools

Test automation has come a long way. Modern automation testing tools allow things that were unimaginable in the past. It is now possible to open your iPad while sitting in Central Park and run your tests over your company's VPN. The 4 leading test automation tools that have changed the automation landscape are:

- UFT (QTP) – A popular tool from HP that can be used to automate web and desktop applications
- Selenium – The most popular open-source tool available for web automation
- TestComplete – A popular tool for web, desktop, and mobile automation testing
- TestCafe – A popular tool that uses any browser supporting HTML5 to record and run functional tests across operating systems (Windows, Mac, and Linux, including remote computers) and mobile devices (iPhone, iPad, Android, and Windows Phone)

Let's discuss each of these leading automation testing tools in greater detail.

UFT (QTP)

UFT stands for Unified Functional Testing; the tool was formerly known as QuickTest Professional (QTP). It comes from Hewlett-Packard. UFT combines manual, automated, and framework-based testing in a single Integrated Development Environment. Some of the key functionalities of UFT are:

- Cross-browser and multi-platform support
- Image-based object recognition
- Visual representation of test flow

Selenium

Selenium was developed by Jason Huggins while he worked for ThoughtWorks. Since the introduction of Selenium 2.0 (commonly referred to as Selenium WebDriver) in 2009, testers around the world have sworn by it. Selenium is the most popular testing framework for web applications. Some of the key features of Selenium that make it one of the leading automation testing tools are:

- Open-source, robust, browser-based regression automation suites and tests
- Ability to scale and distribute scripts across many environments
- Runs in many browsers and operating systems
- Can be controlled by many programming languages and testing frameworks
TestComplete

TestComplete is a product from SmartBear Software. Using TestComplete, a tester can automate the testing of software developed in various technologies. The platforms supported by TestComplete are desktop, web, and mobile (iOS and Android). It is used for automating functional testing and database testing. The features of TestComplete that make it one of the leading automation testing tools are:

- Keyword testing: TestComplete uses easily identifiable keywords to represent automation testing actions.
- Test record and playback: Using TestComplete, the user can easily record and play back test scenarios.
- Bug tracking integration: TestComplete allows integration with a bug tracking system.
- Data-driven testing: With TestComplete, it is no longer necessary to hardcode test data. You can feed data from an external data source and run the test against different types of data.

TestCafe

TestCafe is a functional web testing tool from DevExpress. The key features that differentiate TestCafe and make it one of the leading automation testing tools are:

- Ability to work without plugins on any browser
- Compatible with any browser that supports HTML5 (nowadays almost all browsers do)
- Support for all the major operating systems

Have questions about Test Automation? Contact the software testing experts at InApp to learn more.
10 Common Selenium Automation Testing Commands for Newbies

Before we come to the Selenium automation testing commands, let me relate an interesting anecdote about the christening of Selenium. Jason Huggins (the creator of Selenium) worked at ThoughtWorks. One day he got really irked with the competitor product "Mercury" (later acquired by HP). In his frustration, Jason wrote an email to his colleagues at ThoughtWorks saying that selenium supplements could cure mercury poisoning. Ever since that day, the name Selenium has been associated with the product.

Since the introduction of Selenium 2.0 (commonly referred to as Selenium WebDriver) in 2009, test engineers around the world have sworn by it. Selenium is the most popular testing framework for web applications. If you are a budding tester or a developer looking for some common Selenium automation testing commands to make your life easier while doing automation, you have come to the right place.

The 10 most common Selenium automation testing commands:

1. Open a browser

Web applications are the flavor of the season. Why? Because they run on browsers! A web application is essentially an application running on a browser, and the browser's UI works on the client side. So, the most basic thing for a budding automation tester working on a web application is to simulate opening a browser. Here is the command for it:

WebDriver driver = new <BrowserName>Driver(); // e.g., new ChromeDriver()

2. Navigate to the web application

As mentioned earlier, if you compare the web application architecture to the client-server architecture, the web application's client is the browser's UI. In order to access the application, one needs to query the server, which is done by typing in the URL of the server. To automate this scenario via Selenium, you have a couple of options:

Option 1 – driver.navigate().to("url")
Option 2 – driver.get("url")

3. Maximize the browser window

Testing is really about exploring real-life user interaction with any application.
One of the most common user behaviors is to maximize the browser before using any application. The command to execute this action is:

driver.manage().window().maximize();

4. Close the browser

The most basic actions are opening and closing the browser. We already saw how to open a browser; now let's see how to close it automatically with Selenium code. There are 2 options:

driver.close() (closes the current window in focus)
driver.quit() (closes all windows, including child windows, and safely ends the session)

5. Switching windows

How does the application handle a pop-up (child) window? This is a critical part of testing the application. Below is the command for switching to another window:

driver.switchTo().window(handle)

6. Finding an element

Another frequently used Selenium command is finding an element on the UI. An element can be located by a unique identifier, such as its "id" or "name." Selenium provides many locator strategies: className, cssSelector, id, linkText, name, partialLinkText, tagName, and xpath. Of these, cssSelector and xpath are the most frequently used. Here is the syntax for using them:

driver.findElement(By.xpath(""))
driver.findElement(By.cssSelector(selector))

7. Writing into a text box

Web pages frequently contain forms with text boxes, so a basic test would include commands to write into a text box. To do so, the existing text needs to be cleared and the new text inserted using the following commands:

driver.find_element_by_xpath(xpath).clear()
driver.find_element_by_xpath(xpath).send_keys("data")

8. Counting elements

One use of Selenium-based automation is running a set of scenarios on a group of elements in a UI. In this case, it is helpful to have a count of the elements that will be subjected to the automation scenarios.
Below is a command to count the number of elements:

iCount = driver.findElements(By.xpath("xxx")).size();

Waits

In most modern web applications, elements load at different times, which adds complexity to Selenium-based automation testing. This is solved by using "wait" commands.

9. Implicit wait

An implicit wait tells Selenium to wait up to a specified period of time for an element before throwing a NoSuchElementException:

driver.implicitly_wait(55)

It is a quick and easy way to handle the problem of elements loading at different times on a web application. It is applied at a global level and affects all elements.

10. Explicit wait

An explicit wait means waiting for a condition:

WebDriverWait(driver, 55).until(condition)

The explicit wait tells the Selenium WebDriver to wait until the given condition is met.

Best practice while running automation tests – use tear-down techniques

It is generally good practice to clean up the database after each automation run. This is important because the size of the data in the database increases after each run, and over time a huge amount of data could slow down the application.

Have questions about Automation Testing? Contact the software testing experts at InApp to learn more.
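The tear-down best practice above can be sketched in a few lines. This is a minimal illustration using Python's unittest, with an in-memory list standing in for the application database (the "checkout" test and database are hypothetical; a real suite would connect to its actual test database):

```python
import unittest

# Hypothetical in-memory stand-in for the application database.
test_db = []

class CheckoutTests(unittest.TestCase):
    def test_order_is_recorded(self):
        # The automation run inserts data into the database...
        test_db.append({"order_id": 1, "status": "placed"})
        self.assertEqual(len(test_db), 1)

    def tearDown(self):
        # ...and the tear-down removes it after each test, so data
        # does not pile up across repeated automation runs.
        test_db.clear()

# Run the suite programmatically so the tear-down is exercised.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CheckoutTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same pattern applies whatever the framework: put the cleanup in the hook that runs after every test (tearDown, a pytest fixture finalizer, an @After method) rather than at the end of the test body, so it runs even when the test fails.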
Benefits of Test Automation

The key benefits of test automation are:

- Saving time spent on test execution
- Increased test coverage
- Sustained reliability despite repeated runs
- Reduced dependency on the QA team

To understand these benefits, let us see how test automation helps its 3 main stakeholders: the developer, the product manager, and the CEO/CIO.

Benefits of Test Automation for a Developer

If you are a developer practicing Test-Driven Development, you write code until the tests for that code pass. Once your code passes, does it mean you were successful? Yes, but only at the functional level. Now, as happens in all products, changes are introduced. Their impact can touch the UI, the database, and already-written functions. This brings in the need for regression testing. For a quality-conscious developer who hates the thought that his code introduced bugs, an automated test suite is a face-saver. Without depending on QA to validate that his changes have not introduced bugs, he can run the regression test suite and fix any issues right away. This saves a great deal of time in the long run.

Benefits of Test Automation for a Product Manager

As a product manager, your responsibility is to deliver the product, with all its features, within the committed timeline. Now suppose you are close to the final release. You have worked with your team for over 12 months and you are close to completion. Suddenly you realize that an important feature may have been missed. You rally your team and make sure you deliver within the timeline. In the process, you may have to compromise on regression testing. Keep in mind that whatever changes you make have cascading and often interrelated effects on other modules of your product. In addition, you don't know whether your developers skipped basic checks in the rush toward the approaching deadline. If you had planned on using test automation, you could have easily accommodated QA without affecting the delivery date.
You could have run regression tests automatically. Your developers could have used test scripts, resulting in fewer regression and code-quality issues. You could also have planned cross-platform automated testing.

Benefits of Test Automation for a CIO/CEO

As CIO/CEO, you can reap the rewards of automation across your organization. With test automation, you can ensure that large parts of the organization (developers and product managers) work efficiently. This means they have enough time to produce the best-quality product. It also means your QA team has enough resources at their disposal to ensure your product is of the highest quality. You will have successfully reduced your dependency on your testing team and acquired artifacts (test scripts) that will be of value for future releases of your software product.

Have questions about Automation Testing? Contact the software testing experts at InApp to learn more.
RequireJS with AngularJS
What is RequireJS?

RequireJS is a JavaScript file loader or module loader. As the name suggests, RequireJS helps us load JavaScript files, modules, libraries, or plugins (along with their dependencies) only when we require them.

Why RequireJS?

Normal web applications that use MVC patterns of coding in the front end work in such a way that:

- We have to specify all the JavaScript files, plugins, or libraries in the index.html of the application.
- The order of plugins, libraries, and custom files has to be maintained, because library functions used inside the custom files will not work if the libraries are mentioned after the custom files.
- The very first loading of the application downloads all the JavaScript files needed for the whole application into the browser.

These facts clearly imply that the index.html gets uglier as the application grows. You can neither blindly guess the order of the plugins, libraries, and custom files nor place them in the index.html at random. And the very first loading of the application is going to be time-consuming, even if the launching page – usually the login page – doesn't require most of the custom files, libraries, or plugins being loaded. Here comes the relevance of RequireJS.

How RequireJS?

RequireJS:

- Allows us to keep the index.html page clean by adding only one script tag:

<script data-main="scripts/main" src="scripts/require.js"></script>

During development you will have more script files, but you don't have to mention any of them inside the index.html page; the main.js specified in the data-main attribute of this script tag will include them. Hence the index.html page stays clean.

- Allows us to provide the libraries and plugins that a specific custom script or library needs, or depends on, in any order we want.
- Allows us to load only those modules, files, plugins, and libraries that are needed for the current scope of the application.
RequireJS Concepts

Driving fast into the concepts of RequireJS: there are four dependency types, four great features, and three simple APIs.

The four dependency types are as follows:

- Load dependency – Determines which classes or files need to be loaded in what order.
- Constructor dependency – Determines what parameters or arguments you need to pass before constructing an instance of a class.
- Runtime dependency – Determines what functions and utilities you need during pre-instantiation or post-instantiation.
- Module dependency – Determines where one module depends on another module. This one is special because it is the Angular kind, and hence applies to AngularJS.

The four great features are as follows:

- Package dependency manager – Allows us to provide dependencies in their order.
- Injector – Allows us to inject classes or dependencies into the module. These are not injectors in the AngularJS sense: AngularJS injects instances, whereas RequireJS injects classes.
- JavaScript file loader – Allows us to load JavaScript files only when they are needed.
- Concatenator – A concatenator, uglifier, and minifier.

Before discussing the three APIs, take a look at how RequireJS applies to AngularJS. By combining the 4 dependency types and 4 great features above with AngularJS, a nice scenario evolves: RequireJS can manage load dependency and runtime dependency, whereas AngularJS can manage constructor dependency and module dependency. The outcome is that confusion about the order of dependencies vanishes, and anxiety about the initial load is resolved.

Back to the essentials of RequireJS, there are 3 simple APIs: define(), require(), and config().

define()

This allows us to create an AMD (Asynchronous Module Definition). The definition will return a value, which is cached in the internal registry of RequireJS.
This returned value is provided by the ready handler of the same AMD. The ready handler will return the value only after the dependencies specified inside the AMD are resolved. Sample code:

define("file3", ["file1", "file2"], function(file1, file2) {
  return {
    // code depending on file1 and file2 goes here
  };
});

So the consolidation is that an AMD becomes useful only after all its dependencies are resolved, and we can specify these dependencies in any order we want. If there are no dependencies, the ready handler returns the AMD value immediately. The key thing is that this returned value can in turn be injected into other AMDs as a dependency; that is, there exists a tree of dependencies. We can use AMD1 only after its chain of dependencies is resolved. So the global define() builds the tree of dependencies we mention. Then all the ready handlers are fired, and they build a flat registry of values stored by their dependency IDs. These values are usually references or classes. Hence define() allows us to build a dependency tree, but nothing happens until we pull the trigger. require() helps us do this.

require()

Acts as the root or initialization of the dependency tree. The values specified inside require() may depend on many other values, and so on. require() starts the cascade of dependency-tree checks and script file loading. Sample code:

require(["modulename"]);

config()

Allows us to configure the location of and path to the source files. It also provides aliases for source paths. Sample code:

require.config({
  paths: {
    'filepath1': bowerPath + 'file1/file1.min'
  }
});

It is better to use the Bower tool for downloading plugins, libraries, and CDN files, because Bower takes care of hunting down, finding, downloading, and saving the JavaScript assets we are looking for. What is the advantage of using Bower over mentioning the libraries manually? Normally we specify the JavaScript files by their CDN/source path within the application.
What about changes to a library, such as needing an updated version of the same library? Of course, we would have to change the version specified in the CDN path, or download the new version and replace it within the application, then build the application again and deploy it. For a deployed application that doesn't sound good. Think about updating
How to Develop a Cloud App Using Microservices Architecture

Cloud-based apps developed using a microservices architecture are a radical change from our present-day monolithic, on-premise applications. These next-generation applications provide software with the robustness and agility it requires in today's world. They are also much cheaper to develop and maintain. Though these applications are economical, they pack a punch! I am sure you must be mighty intrigued. So let me explain these applications in detail, and then we will discuss how to develop such a power-packed application.

What do we mean by calling today's applications "monolithic"?

A monolith is a single large unit. Today's monolithic applications are as depicted below.

The application that you may have today generally has 3 parts. The first part is the front-end user interface, also referred to as the client part. For a web interface, it is made up of an HTML page and some JavaScript code running on the client-side machine. The second part is the database, which is a series of related tables. Finally, there is the back end, or server end. The server end is the brains of the operation; it contains all the business logic of the application. A request first comes in from the front end and is processed by the back end: whatever information and requests come from the front end are brought to the back end, additional information is queried from the database, business logic is executed, the processed information is written back to the database, and the resulting information is displayed on the front end. Hence the back end becomes like one huge machine, with a single process doing everything.

There are numerous problems with such a structure. One error can crash the whole application, so the application is not robust. Errors are difficult to find, and responsibility for failure is difficult to assign. Scaling up is a huge problem.
It requires huge instances of the entire application to be created each time. Small changes require the entire application to be rebuilt and redeployed. These problems are accentuated in today's environment, where we expect the application to change continuously according to real-time requirements. The application is deployed in the cloud and is directly accessible to the user instead of being installed on a local computer. Continuous development happens in real time as per the requirements of the consumer, which means there are chances of breakages, and continuous development cannot happen if breakages bring down the entire system.

Microservices Architecture

The problems stated above have driven a movement toward a microservices architecture. Microservices are multiple small modular units, each designed to do one specific thing. They are completely independent in that each has its own OS, platform, framework, and runtime, all packaged as one single executable unit, and each is independent of the other microservices. These processes communicate through platform-agnostic API calls, generally using Representational State Transfer (REST). These calls generally don't require the processes to maintain state, so processes don't lock up. Because the calls are platform-agnostic, developers can independently develop separate processes on whichever platform they are comfortable with or that is more suitable for a particular process. With cloud apps, the infrastructure itself becomes lines of code, which gives developers immense flexibility. The cloud also enables container technology: containers package both the OS layer and the executable code that runs on it. Developers are now using the two (containers and the cloud) to create code on their own platform and in their own language. There are many benefits to this architecture. Scaling out becomes much easier.
Instead of creating multiple instances of the whole application, it is now possible to create multiple instances of only the particular services that need to be scaled. The overhead of creating additional virtual machines is avoided; new processes can run on the same virtual machine. Fault tolerance increases manifold because the processes are separated, so the failure of one process doesn't affect the others. Lastly, by adopting microservices, you invest in reusable building blocks that can be kept in continuous development. Each microservice becomes like a Lego block that can be plugged into an application stack. By investing in a set of core microservices, you can assemble them to build cloud apps catering to a variety of uses. And because the app is hosted on the cloud and is highly robust, you can keep it in continuous development.

How to develop cloud apps using microservices

One can easily take an app to the cloud without adopting a microservices architecture. If you do so, you immediately gain the benefits of hardware flexibility and cost savings. However, this is only a fraction of the benefit possible with the cloud; it is a very sub-optimal use of cloud computing capability. For a fuller migration, an approach proposed by folks from Carnegie Mellon University is "the horseshoe." As the picture above shows, one should begin at the technical architecture level to start cloud app development: first migrate your existing app to the cloud, then proceed to change the application itself, making it microservices-oriented and thereby making changes at the architecture level. As we move up each level, the path becomes longer and costlier. But in this way, an old legacy system that has become gridlocked against change – because it contains critical code whose expertise has been lost to retiring personnel – can be unlocked and transformed into a modern cloud app.

Have questions? Contact the cloud computing experts at InApp to learn more.
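The stateless, platform-agnostic REST calls between microservices described above can be sketched in a few lines. This is a minimal illustration using only Python's standard library; the "inventory" service, its endpoint, and the stock data are all hypothetical:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "inventory" microservice: one small, independent unit
# exposing one capability over a stateless REST endpoint.
STOCK = {"sku-1": 12, "sku-2": 0}

def stock_response(path):
    """Build the (status, body) pair for GET /stock/<sku>.

    Pure logic, kept separate from the HTTP transport so it is easy
    to test in isolation."""
    sku = path.rsplit("/", 1)[-1]
    if sku in STOCK:
        return 200, {"sku": sku, "in_stock": STOCK[sku]}
    return 404, {"error": "unknown sku"}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Each request is self-contained: no session state is kept
        # between calls, so instances can be scaled out freely.
        status, body = stock_response(self.path)
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

def serve(port=8080):
    # A peer microservice (say, "orders") would call this endpoint
    # over HTTP rather than importing this code directly.
    HTTPServer(("", port), InventoryHandler).serve_forever()
```

Because the only contract between services is the HTTP/JSON interface, the hypothetical "orders" service could be written in Go or Java and deployed in its own container without this service noticing.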
Chef vs Puppet vs Ansible | A Comparison Infographic
Chef, Puppet, and Ansible are all open-source deployment management tools used to competently manage large-scale server infrastructure, enabling speed and ensuring reliability with minimal input from developers and system admins, each taking a different path.

Initial Setup
- Ansible: Less concentration on configuration management
- Puppet: Simple installation and initial setup
- Chef: Initial setup is complicated

Interface
- Ansible: Simple and structured (built on Playbooks)
- Puppet: Very intuitive and complete web UI
- Chef: Designed exclusively for programmers

Security
- Ansible: High security with SSH
- Puppet: Vulnerable
- Chef: Chef Vault

Scalability
- Ansible: Yes
- Puppet: Yes
- Chef: Yes

Code
- Ansible: Written in Python
- Puppet: Built with Ruby
- Chef: Configured in Ruby DSL

Pricing
- Ansible: Starts at $5,000 per year
- Puppet: Standard pricing starts at $120 per node
- Chef: Standard pricing starts at $72 per node

Features
- Ansible: Automated workflow option for continuous delivery
- Puppet: Strong compliance automation and reporting tools
- Chef: Offers hybrid and SaaS solutions for Chef server, analytics, and reporting

Have questions? Contact the technology experts at InApp to learn more.
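For a flavor of the Playbook-based interface that the comparison above credits to Ansible, here is a minimal illustrative playbook (the "web" host group is a hypothetical inventory group; the tasks use Ansible's standard apt and service modules):

```yaml
# Hypothetical playbook: install and start nginx on the "web" host group.
- hosts: web
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
```

A playbook like this is applied with `ansible-playbook site.yml` against an inventory file; the declarative YAML style is what the comparison means by "simple and structured."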
The 4 Key Factors to Keep in Mind while Choosing the Right Cloud Service Provider
The following are the 4 key factors you should keep in mind while choosing a cloud computing service provider.

Business Health and Processes
- Business knowledge and technical know-how
- Compliance audit
- Financial health
- Organization, governance, planning, and risk management
- Trust

Technical Capabilities
- Change management
- Hybrid capability
- Ease of deployment, management, and upgrade
- Standard interfaces
- Event management

Security Practices
- Security infrastructure
- Security policies
- Identity management
- Data backup and retention

Administration and Support
- Performance reporting
- Service Level Agreements (SLAs)
- Billing and accounting
- Resource monitoring and configuration management

Have questions? Contact the cloud computing experts at InApp to learn more.
6 Points to Remember Before Deploying Cloud Apps – An Infographic
1. Evaluate the different ways employees, partners, and customers access information, and then map out data access and sharing points.
2. Implement during slow periods – if people are focused on completing year-end, mission-critical activities, it isn't the best time to implement the cloud.
3. Don't roll out all features at once – it's just too much for the end user to absorb.
4. Prepare for change management – cloud computing providers typically update their applications a few times a year, so a cloud app user should have an update process, test scripts, and a team ready to respond.
5. Create a repeatable process for testing and deploying product updates, and don't rush it.
6. Communicate any resulting shifts in job responsibilities.

Have questions? Contact the cloud computing experts at InApp to learn more.
Amazon Web Services – AWS’s Latest Additions

AWS’s Latest Additions Athena Start querying data instantly. Get results in seconds. pay only for the queries your run. Blox Open Source scheduler for Amazon EX2 container service. EC2 F1 Instances Run custom FPGAs in the AWS Cloud. Glue Easily understand your data sources, prepare the data, and load it reliably to data stores. Lambda@Edge Allows you to run Lambda functions at the AWS Edge locations in response to CloudFront events, without provisioning or managing servers, by using the AWS Lambda serverless programming model. Rekognition Deep learning-based image recognition – Search, verify, and organize millions of images. Snowball Edge Petabyte scale data transport with onboard storage and computing. Pinpoint Targeted push notifications for mobile apps. Polly Turn text into lifelike speech using deep learning X-Ray Analyze and debug production, and distributed applications. Have questions? Contact the cloud computing experts at InApp to learn more.
Key Steps for Successful ERP Cloud Migration

Have questions? Contact the cloud computing experts at InApp to learn more.