Basics of a Messaging Platform

There are different types of messages that can be sent using a messaging platform. Some of these are:

- Text messages
- Multimedia messages
- WAP messages
- Service messages

Here we take a deeper look at text messages. There are basically three types of text messages:

- UTF-16 encoded (16-bit Unicode Transformation Format)
- UTF-8 encoded (8-bit Unicode Transformation Format)
- Flash

UTF-16

Normal English characters come under this category, and special characters such as semicolons, full stops, etc. are supported. A message can be up to 160 characters long; if it goes beyond the 160-character limit, it is split into segments of 153 characters each (the remaining characters carry the concatenation header) and then concatenated back together on the mobile device. These messages are class 1.

UTF-8

The purpose of this encoding is to support international characters; languages such as French, Spanish, Arabic, Hindi, and Malayalam are supported. For these messages the length limit is 70 characters, and concatenation and slicing take place if the character count goes beyond that limit.

Flash Messages

These are normal messages with a length of 160 characters in English and 70 characters in other languages such as French and Spanish. The only difference is that these messages are not saved to the phone's memory. The message class is set to 0 to generate flash messages.

The figure displays the transaction process that takes place between the SMPP client and the SMSC. The SMPP client sends a bind request to the SMSC, and the SMSC responds to the request. If the bind is successful, the SMPP client sends a submit_sm to the SMSC and receives a success response if the submission is accepted. The SMSC then identifies the originator, destination, sender, message text, etc., and forwards the message to the destination address. On receiving a successful delivery acknowledgment from the mobile device, the SMSC forwards it to the SMPP client, and the client sends a response.
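The segmentation rules described earlier can be sketched in a few lines of Python. This is an illustration, not code from the article; it uses the commonly cited limits of 160 characters for a single GSM 7-bit message (153 per concatenated part) and 70 for a single Unicode message (67 per part), since each part of a concatenated message reserves room for a concatenation header.

```python
import math

def sms_segments(text: str, unicode_encoded: bool = False) -> int:
    """Estimate how many SMS segments `text` will occupy.

    Limits are the commonly cited values: 160/153 characters for
    GSM 7-bit messages and 70/67 for Unicode (UCS-2) messages.
    """
    single_limit, multi_limit = (70, 67) if unicode_encoded else (160, 153)
    if len(text) <= single_limit:
        return 1
    # Concatenated parts are smaller because each carries a UDH header.
    return math.ceil(len(text) / multi_limit)
```

For example, a 160-character English message fits in one segment, while a 161-character message needs two.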
After the delivery response has been sent to the SMSC, the SMPP client sends an unbind request to the SMSC; on successful reception of the unbind request, the SMSC unbinds by sending an unbind response.

The above diagram explains the general internal architecture of a messaging application and its routing system. The system consists of:

- A messaging application at the user end for pushing bulk messages
- Input queues, operator queues, response queues, etc.
- A database to store the messages, responses, and delivery details
- Operators to which messages are pushed
- Finally, the mobile devices to which the messages are delivered

The messaging application at the end-user side pushes bulk messages into the input queues. From the input queues, the messages are pushed in parallel to the database and the routing application. The routing application is responsible for identifying the exact route for each message; once the route is identified, it is updated in the database. Based on the identified route, the message falls into the corresponding operator queue. From the operator queue, messages are pushed to the operators, and when an operator successfully receives a message, it pushes a response to the response queue. After pushing the response, the operator sends the message on to the mobile device. The mobile device acknowledges receipt to the operator, the operator pushes this delivery response to the delivery response queue, and all the statuses are updated in the database.

Have questions? Contact the technology experts at InApp to learn more.
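The queue-based routing flow described above can be sketched with Python's standard `queue` module. All names here (the routing table, operator names, message fields) are invented for illustration; a real system would use durable message queues and a database rather than in-process queues.

```python
import queue

# Hypothetical routing table: destination number prefix -> operator name.
ROUTES = {"44": "operator_uk", "91": "operator_in"}

input_queue: "queue.Queue[dict]" = queue.Queue()
operator_queues = {op: queue.Queue() for op in ROUTES.values()}

def route_messages() -> None:
    """Drain the input queue, routing each message to its operator queue."""
    while not input_queue.empty():
        msg = input_queue.get()
        prefix = msg["destination"][:2]
        operator = ROUTES.get(prefix)
        if operator is not None:
            # Route identified: message falls into the operator queue.
            operator_queues[operator].put(msg)

# Bulk messages pushed by the messaging application.
input_queue.put({"destination": "447700900000", "text": "hello"})
input_queue.put({"destination": "919800000000", "text": "hi"})
route_messages()
```

In the full architecture, a parallel consumer would also persist each message and its route to the database, and operator responses would flow back through the response queues.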
Everything about Performance Testing

What is Performance Testing?

Performance testing of an application is basically the process of understanding how the web application and its operating environment respond at various user load levels. In general, we want to measure the latency, throughput, and utilization of the website while simulating attempts by virtual users to simultaneously access the site. One of the main objectives of performance testing is to maintain a website with low latency, high throughput, and low utilization.

A performance test measures how well the application meets customer expectations in terms of:

- Speed – determines if the application responds quickly
- Scalability – determines how much user load the application can handle
- Stability – determines if the application is stable under varying loads

Why Performance Testing?

Performance problems are usually the result of contention for, or exhaustion of, some system resource. When a system resource is exhausted, the system is unable to scale to higher levels of performance. Maintaining optimum web application performance is a top priority for application developers and administrators. Performance analysis is also carried out for various purposes, such as:

- During a design or redesign of a module or a part of the system, more than one alternative may present itself. In such cases, evaluating the design alternatives is the prime mover for an analysis.
- Post-deployment realities create a need for tuning the existing system. A systematic approach like performance analysis is essential to extract maximum benefit from an existing system.
- Identification of bottlenecks in a system is more of a troubleshooting effort. It helps to focus efforts on improving overall system response.
- As the user base grows, the cost of failure becomes increasingly unbearable. To increase confidence and to provide advance warning of potential problems under load conditions, analysis must be done to forecast performance under load.
Typically, to debug applications, developers execute their applications using different execution streams (i.e., completely exercising the application) in an attempt to find errors. When looking for errors in the application, performance is a secondary issue to features; however, it is still an issue.

Objectives of Performance Testing

- End-to-end transaction response time measurements
- Measure the application server components' performance under various loads
- Measure database components' performance under various loads
- Monitor system resources under various loads
- Measure the network delay between the server and the clients

Performance Testing Approach

Identify the Test Environment

Identify the physical test environment and the production environment, as well as the tools and resources available to the test team. The physical environment includes hardware, software, and network configurations. Having a thorough understanding of the entire test environment at the outset enables more efficient test design and planning and helps you identify testing challenges early in the project. In some situations, this process must be revisited periodically throughout the project's life cycle.

Identify Performance Acceptance Criteria

Identify the response time, throughput, and resource utilization goals and constraints. In general, response time is a user concern, throughput is a business concern, and resource utilization is a system concern. Additionally, identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate what combination of configuration settings will result in the most desirable performance characteristics.

Plan and Design Tests

Identify key scenarios, determine variability among representative users and how to simulate that variability, define test data, and establish the metrics to be collected.
Consolidate this information into one or more models of system usage to be implemented, executed, and analyzed.

Configure the Test Environment

Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary.

Implement the Test Design

Develop the performance tests in accordance with the test design.

Execute the Test

Run and monitor your tests. Validate the tests, test data, and results collection. Execute validated tests for analysis while monitoring the test and the test environment.

Analyze Results, Report, and Retest

Consolidate and share results data. Analyze the data both individually and as a cross-functional team. Reprioritize the remaining tests and re-execute them as needed. When all of the metric values are within accepted limits, none of the set thresholds have been violated, and all of the desired information has been collected, you have finished testing that particular scenario on that particular configuration.

Functions of a Typical Tool

- Record & Replay: Record the application workflow and play back the script to verify the recording.
- Execute: Run the fully developed test script for a stipulated number of virtual users to generate load on the AUT (Application Under Test). The dashboard displays the values for the desired parameters. The tool connects remotely to the app/web servers (Linux/Windows) and gathers resource utilization data.
- Analyze: Generates the report; helps to analyze the results and troubleshoot issues.

Attributes Considered for Performance Testing

The following are only a few of the many attributes that can be considered during performance testing:

- Throughput
- Response time
- Time {session time, reboot time, printing time, transaction time, task execution time}
- Hits per second, requests per second, transactions per second
- Performance measurement with a number of users
- Performance measurement with other interacting applications or tasks
- CPU usage
- Memory usage {memory leaks, thread leaks}
- All queues and I/O waits
- Bottlenecks {memory, cache, process, processor, disk, and network}
- Highly iterative loops in the code
- Data not optimally aligned in memory
- Poor structuring of joins in SQL queries
- Too many static variables
- Indexes on the wrong columns; inappropriate combination of columns in composite indexes
- Network usage {bytes, packets, segments, frames received and sent per second; Bytes Total/sec; current bandwidth; connection failures; connections active; failures at the network interface level and protocol level}
- Database problems {settings and configuration, usage, reads/sec, writes/sec, locking, queries, compilation errors}
- Web server {requests and responses per second, services succeeded and failed, server problems if any}
- Screen transitions
- Throughput and response time with different user loads
- CPU and memory usage with different user loads

Have questions? Contact the testing experts at InApp to learn more.
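To make the core measurements concrete, here is a minimal sketch (all names invented, and `fake_request` standing in for a real HTTP call) of what a load tool automates: a number of virtual users issue requests concurrently while per-request response times are recorded, and throughput and a 95th-percentile response time are reported.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def fake_request() -> float:
    """Stand-in for a real request; returns its response time in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate server-side work
    return time.perf_counter() - start

def run_load_test(virtual_users: int, requests_per_user: int) -> dict:
    """Drive the workload concurrently and summarize the key metrics."""
    started = time.perf_counter()
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        times = list(pool.map(lambda _: fake_request(),
                              range(virtual_users * requests_per_user)))
    elapsed = time.perf_counter() - started
    return {
        "throughput_rps": len(times) / elapsed,          # requests per second
        "mean_seconds": statistics.mean(times),          # average response time
        "p95_seconds": statistics.quantiles(times, n=20)[-1],  # 95th percentile
    }

results = run_load_test(virtual_users=5, requests_per_user=4)
```

A real test would repeat this at increasing user counts and compare each run's metrics against the acceptance criteria identified earlier.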
Working with Regular Expression Extractor in JMeter

Using Regular Expression Extractor in JMeter

When automating tests, the test scripts often depend on input values that are generated during the test run. These values can be stored in a variable, but sometimes the test requires only a part of the value. In such cases, the need for a string extractor is felt. The Regular Expression Extractor serves this purpose by pulling out the required values that match a pattern.

- [ ] matches anything within the square brackets
- A dash inside square brackets specifies a range, e.g., [0-9] means all digits from 0 to 9
- ^ negates the expression, e.g., [^a-z] means everything except lowercase a to z
- $ checks for the match at the end of a target string

More are listed below. While scripting with JMeter, a Regular Expression Extractor is used to retrieve values from the server response. The extracted value can be passed as a parameter to a While Controller or If Controller, and it can also be used to replace any pre-defined variable. The regular expressions used are Perl-type regular expressions.

Working with JMeter

To add a Regular Expression Extractor element to a test plan in JMeter:

- Right-click the sampler element (the request to the server from which the value needs to be extracted)
- Select Add -> Post Processors -> Regular Expression Extractor

The Regular Expression Extractor element is explained in detail at http://jmeter.apache.org/usermanual/component_reference.html#Regular_Expression_Extractor

How to Extract Single or Multiple Strings Using the Regular Expression Extractor Element

Extracting a single string from the response

Consider an example where a user successfully logs in to an online shopping website and is navigated to the user's home page, where 'Welcome Username' is displayed. To extract the username, the regular expression below can be used:

- Reference Name: Username
- Regular Expression: Welcome (.+?)
- Template: $1$
- Match No. (0 for Random): 1
- Default Value: match not found

Note: the special characters above mean:

- ( ) encloses the portion of the matched string to be returned
- . matches any character
- + one or more times
- ? stop when the first match is found

Without the ?, the .+ would continue until it finds the last possible match.

Extracting multiple strings from the response

Consider a scenario where the user selects an item that has a product ID and a category ID. To extract both IDs, the regular expression below can be used:

- Reference Name: My_ID
- Regular Expression: Product_ID = (.+?)\&Category_ID = (.+?)
- Template: $1$$2$
- Match No. (0 for Random): 1
- Default Value: match not found

Since we need to extract two values from the response, two groups are created, so the template is $1$$2$. The JMeter Regular Expression Extractor saves the values of the groups in additional variables. The following variables would be set:

- My_ID -> PR_001CAT_001
- My_ID_g0 -> Product_ID = PR_001&Category_ID = CAT_001
- My_ID_g1 -> PR_001
- My_ID_g2 -> CAT_001

These variables can later be referred to in the JMeter test plan as ${My_ID_g1} and ${My_ID_g2}.

Extracting only numbers from the string

Consider a case where we need to extract only the numbers; for example, the product ID is PR_001. To extract 001, the regular expression below can be used:

- Reference Name: ProductID
- Regular Expression: Product_ID = "PR_(.+?)"
- Template: $1$
- Match No. (0 for Random): 1
- Default Value: match not found

Have questions? Contact the software testing experts at InApp to learn more.
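The same extraction patterns can be tried outside JMeter with any Perl-style regex engine. The sketch below uses Python's `re` module with invented sample responses; note that a lazy `(.+?)` needs some boundary after it (here a `!`, an `&`, or end of string), which JMeter patterns get from the surrounding response text.

```python
import re

# Single value: extract the username following 'Welcome'.
home_page = "Welcome John! Your cart is empty."
username = re.search(r"Welcome (.+?)!", home_page).group(1)

# Two values: two groups capture the product and category IDs.
item_response = "Product_ID = PR_001&Category_ID = CAT_001"
match = re.search(r"Product_ID = (.+?)&Category_ID = (.+?)$", item_response)
product_id, category_id = match.group(1), match.group(2)  # like $1$ and $2$

# Numbers only: anchor on the literal 'PR_' prefix and capture the rest.
number = re.search(r"PR_(.+?)$", product_id).group(1)
```

Here `match.group(0)` corresponds to JMeter's `_g0` variable (the whole match), while `group(1)` and `group(2)` correspond to `_g1` and `_g2`.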