Basics Of Messaging Platform

A messaging platform can send several types of messages: text messages, multimedia messages, WAP messages, and service messages. Let's take a closer look at text messages, which fall into three basic categories:

- UTF-16 encoded (16-bit Unicode Transformation Format)
- UTF-8 encoded (8-bit Unicode Transformation Format)
- Flash

UTF-16

Normal English characters fall under this category, and special characters such as semicolons and full stops are supported. Messages can be up to 160 characters long; if a message goes beyond the 160-character limit, it is split in two at the 154th character and the parts are concatenated back together on the mobile device. These messages are sent as class 1.

UTF-8

The purpose of this encoding is to support international characters; languages such as French, Spanish, Arabic, Hindi, and Malayalam are supported. For UTF-8 encoded messages the limit is 60 characters; concatenation and slicing take place if the character count goes beyond that limit.

Flash Messages

These are normal messages, up to 160 characters long in English and 60 characters in other languages such as French and Spanish. The only difference is that flash messages are not saved to the phone's memory. The class is set to 0 to generate a flash message.

The figure displays the transaction process between the SMPP client and the SMSC. The SMPP client sends a bind request to the SMSC, and the SMSC responds to the request. If the bind is successful, the SMPP client sends a Submit SM to the SMSC and receives a success response if the submission is accepted. The SMSC then identifies the originator, destination, sender, message text, and so on, and forwards the message to the destination address. On receiving a delivery confirmation from the mobile device, the SMSC forwards it to the SMPP client, which sends back a response.
After the delivery response has been acknowledged, the SMPP client sends an unbind request to the SMSC; on successful reception of the unbind request, the SMSC completes the unbind by sending an unbind response.

The diagram above explains the general internal architecture of a messaging application and its routing system. The system consists of:

- A messaging application at the user end for pushing bulk messages
- Input queues, operator queues, response queues, and so on
- A database to store the messages, responses, and delivery details
- The operators to which messages are pushed
- Finally, the mobile devices to which the messages are delivered

The messaging application on the end-user side pushes bulk messages into the input queues. From the input queues, the messages are pushed in parallel to the database and to the routing application. The routing application is responsible for identifying the exact route for each message; once the route is identified, it is updated in the database. Based on the identified route, each message falls into an operator queue, from which messages are pushed to the operator. When the operator receives a message successfully, it pushes a response into the response queue and then sends the message on to the mobile device. The mobile device acknowledges reception to the operator, the operator pushes this delivery response into the delivery response queue, and all statuses are updated in the database.

Have questions? Contact the technology experts at InApp to learn more.
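The segment limits described above reduce to a simple calculation. A minimal sketch in Python, assuming the per-segment limits quoted in the text (160 characters split at the 154th for English text, 60 for international text); real operator limits may differ:

```python
import math

# Limits taken from the description above (assumptions, not a standard):
# English ("UTF-16" category): 160 chars single, split at the 154th.
# International ("UTF-8" category): 60 chars per segment.
SINGLE_LIMIT = {"english": 160, "international": 60}
SEGMENT_LIMIT = {"english": 154, "international": 60}

def segment_count(message: str, category: str = "english") -> int:
    """How many parts the platform would slice the message into."""
    if len(message) <= SINGLE_LIMIT[category]:
        return 1
    return math.ceil(len(message) / SEGMENT_LIMIT[category])
```

For example, a 160-character English message goes out as one part, while 161 characters force a split into concatenated parts that the handset reassembles.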

Everything about Performance Testing

What is Performance Testing?

Performance testing of an application is basically the process of understanding how the web application and its operating environment respond at various user load levels. In general, we want to measure the latency, throughput, and utilization of the website while simulating attempts by virtual users to simultaneously access the site. One of the main objectives of performance testing is to maintain a website with low latency, high throughput, and low utilization.

A performance test measures how well the application meets customer expectations in terms of:

- Speed – determines whether the application responds quickly
- Scalability – determines how much user load the application can handle
- Stability – determines whether the application is stable under varying loads

Why Performance Testing?

Performance problems are usually the result of contention for, or exhaustion of, some system resource. When a system resource is exhausted, the system is unable to scale to higher levels of performance. Maintaining optimum web application performance is a top priority for application developers and administrators.

Performance analysis is also carried out for various other purposes:

- During a design or redesign of a module or part of the system, more than one alternative may present itself. In such cases, evaluating the design alternatives is the prime mover for an analysis.
- Post-deployment realities create a need for tuning the existing system. A systematic approach like performance analysis is essential to extract maximum benefit from an existing system.
- Identifying bottlenecks in a system is more of a troubleshooting effort. It helps focus efforts on improving overall system response.
- As the user base grows, the cost of failure becomes increasingly unbearable. To increase confidence and to provide advance warning of potential problems under load conditions, analysis must be done to forecast performance under load.
Typically, to debug applications, developers execute them using different execution streams (i.e., completely exercising the application) in an attempt to find errors. When looking for errors in the application, performance is a secondary issue to features; however, it is still an issue.

Objectives of Performance Testing

- End-to-end transaction response time measurements
- Measure the application server components' performance under various loads
- Measure database components' performance under various loads
- Monitor system resources under various loads
- Measure the network delay between the server and the clients

Performance Testing Approach

Identify the Test Environment. Identify the physical test environment and the production environment, as well as the tools and resources available to the test team. The physical environment includes hardware, software, and network configurations. A thorough understanding of the entire test environment at the outset enables more efficient test design and planning and helps you identify testing challenges early in the project. In some situations, this process must be revisited periodically throughout the project's life cycle.

Identify Performance Acceptance Criteria. Identify the response time, throughput, and resource utilization goals and constraints. In general, response time is a user concern, throughput is a business concern, and resource utilization is a system concern. Additionally, identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate which combination of configuration settings will result in the most desirable performance characteristics.

Plan and Design Tests. Identify key scenarios, determine variability among representative users and how to simulate that variability, define test data, and establish the metrics to be collected.
Consolidate this information into one or more models of system usage to be implemented, executed, and analyzed.

Configure the Test Environment. Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary.

Implement the Test Design. Develop the performance tests in accordance with the test design.

Execute the Test. Run and monitor your tests. Validate the tests, test data, and results collection. Execute validated tests for analysis while monitoring the test and the test environment.

Analyze Results, Report, and Retest. Consolidate and share results data. Analyze the data both individually and as a cross-functional team. Reprioritize the remaining tests and re-execute them as needed. When all of the metric values are within accepted limits, none of the set thresholds have been violated, and all of the desired information has been collected, you have finished testing that particular scenario on that particular configuration.

Functions of a Typical Tool

- Record & Replay: Record the application workflow and play back the script to verify the recording.
- Execute: Run the fully developed test script for a stipulated number of virtual users to generate load on the AUT (Application Under Test).
- Dashboard: Displays the values for the desired parameters.
- Monitor: Connects remotely to the app/web servers (Linux/Windows) and gathers resource utilization data.
- Analyze: Generates the report; helps to analyze the results and troubleshoot issues.

Attributes Considered for Performance Testing

The following are only a few of the many attributes considered during performance testing:

- Throughput
- Response time {session time, reboot time, printing time, transaction time, task execution time}
- Hits per second, requests per second, transactions per second
- Performance measurement with a number of users
- Performance measurement with other interacting applications or tasks
- CPU usage
- Memory usage {memory leaks, thread leaks}
- All queues and I/O waits
- Bottlenecks {memory, cache, process, processor, disk, and network}
- Highly iterative loops in the code
- Data not optimally aligned in memory
- Poor structuring of joins in SQL queries
- Too many static variables
- Indexes on the wrong columns; inappropriate combinations of columns in composite indexes
- Network usage {bytes, packets, segments, frames received and sent per second; Bytes Total/sec; current bandwidth; connection failures; connections active; failures at the network interface level and protocol level}
- Database problems {settings and configuration, usage, reads/sec, writes/sec, locking, queries, compilation errors}
- Web server {requests and responses per second, services succeeded and failed, server problems if any}
- Screen transitions
- Throughput and response time with different user loads
- CPU and memory usage with different user loads

Have questions? Contact the software testing experts at InApp to learn more.
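Several of the attributes above (throughput, response time under different user loads) reduce to summarizing the latency samples collected during a run. A minimal sketch, with made-up numbers rather than any particular tool's output:

```python
import statistics

def summarize(latencies_ms, duration_s):
    """Reduce raw response-time samples to the usual report figures."""
    ordered = sorted(latencies_ms)
    p95_index = max(0, round(0.95 * len(ordered)) - 1)  # nearest-rank 95th percentile
    return {
        "samples": len(ordered),
        "mean_ms": statistics.mean(ordered),
        "p95_ms": ordered[p95_index],
        "throughput_rps": len(ordered) / duration_s,  # requests per second
    }

# 100 requests observed over a 10-second window
report = summarize([120 + i for i in range(100)], duration_s=10.0)
```

Comparing these figures across user-load levels is what reveals whether latency degrades or throughput plateaus as load grows.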

Working with Regular Expression Extractor in JMeter

Working with JMeter Regular Expression Extractor

Using the Regular Expression Extractor in JMeter

While automating tests, test scripts often depend on input values that are generated during the test run. These values can be stored in a variable, but sometimes the test requires only a part of the value. In such cases the need for a string extractor is felt. The Regular Expression Extractor serves this purpose by pulling out the values that match a pattern.

Some common regular expression constructs:

- [ ] matches anything within the square brackets
- A dash inside square brackets specifies a range, e.g. [0-9] means all digits from 0 to 9
- ^ negates the expression, e.g. [^a-z] means everything except lowercase a to z
- $ checks for a match at the end of the target string

While scripting with JMeter, a Regular Expression Extractor is used to retrieve values from the server response. The extracted value can be passed as a parameter to a While Controller or If Controller, and it can also be used to replace any pre-defined variable. The regular expressions used are Perl-style.

Working with JMeter

To add a Regular Expression Extractor element to a test plan in JMeter:

1. Right-click the sampler element (the request to the server from which the value needs to be extracted).
2. Select Add -> Post Processors -> Regular Expression Extractor.

The Regular Expression Extractor element is explained in detail at http://jmeter.apache.org/usermanual/component_reference.html#Regular_Expression_Extractor

Extracting a single string from the response

Consider an example where a user successfully logs in to an online shopping website and is taken to a home page that displays 'Welcome Username'. To extract the username, the following settings can be used:

- Reference Name: Username
- Regular Expression: Welcome (.+?)
- Template: $1$
- Match No. (0 for Random): 1
- Default Value: match not found

Note on the special characters above: ( ) encloses the portion of the match to be returned; . matches any character; + means one or more times; ? stops when the first match is found. Without the ?, the .+ would continue until it finds the last possible match.

Extracting multiple strings from the response

Consider a scenario where the user selects an item that has a product ID and a category ID. To extract both IDs, the following settings can be used:

- Reference Name: My_ID
- Regular Expression: Product_ID = (.+?)\&Category_ID = (.+?)
- Template: $1$$2$
- Match No. (0 for Random): 1
- Default Value: match not found

Since we need to extract two values from the response, two groups are created, so the template is $1$$2$. The JMeter regex extractor saves the values of the groups in additional variables. The following variables would be set:

- My_ID -> PR_001CAT_001
- My_ID_g0 -> Product_ID ="PR_001"&Category_ID ="CAT_001"
- My_ID_g1 -> PR_001
- My_ID_g2 -> CAT_001

These variables can later be referred to in the JMeter test plan as ${My_ID_g1} and ${My_ID_g2}.

Extracting only numbers from a string

Consider a case where we need to extract only the numbers, for example from a product ID such as PR_001. To extract 001, the following settings can be used:

- Reference Name: ProductID
- Regular Expression: Product_ID = "PR_(.+?)"
- Template: $1$
- Match No. (0 for Random): 1
- Default Value: match not found

Have questions? Contact the software testing experts at InApp to learn more.
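Since JMeter's extractor uses Perl-style regular expressions, the patterns above behave the same in any PCRE-like engine and can be tried outside JMeter first. A sketch in Python (the sample response strings are invented for illustration; note that a lazy `(.+?)` needs some trailing context in the pattern to capture more than one character):

```python
import re

# Single value: "Welcome (.+?)" needs a boundary after the group,
# otherwise the lazy +? stops after a single character.
m = re.search(r"Welcome (.+?)\n", "Welcome John\n<html>...")
username = m.group(1)

# Multiple values: one group per id, mirroring the $1$$2$ template.
response = 'Product_ID = "PR_001"&Category_ID = "CAT_001"'
m = re.search(r'Product_ID = "(.+?)"&Category_ID = "(.+?)"', response)
product_id, category_id = m.group(1), m.group(2)

# Numbers only: anchor the constant "PR_" prefix and capture the rest.
number = re.search(r'Product_ID = "PR_(.+?)"', response).group(1)
```

The same lazy-quantifier caveat applies inside JMeter: give the extractor some literal text after the group so it knows where to stop.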

Automation Index Formula – A checklist to help in identifying the tests that are feasible to automate

Automation Index

“Just because a test is automatable does not mean it should be automated.” – Elfriede Dustin

Automation testing begins with an analysis of what is feasible to automate, taking into account the budget, resources, schedule, and available expertise. Given limited resources and tight deadlines, we first need to prioritize what is to be automated. The effort required can be measured with the help of the Automation Index.

Automation Index Formula

The Automation Index is the ratio of the number of test cases that are feasible to automate to the total number of test cases:

AI = TFA / TC

where
AI = Automation Index
TFA = Tests feasible to be automated
TC = Total number of test cases

A checklist helps identify the tests that are feasible to automate: tests that answer yes to the checklist items are good candidates for automation. In addition to the checklist, factors such as the available budget, resources, schedule, and expertise should also be considered.

Have questions? Contact the software testing experts at InApp to learn more.
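As a quick worked example of the formula (the counts here are invented):

```python
def automation_index(tests_feasible: int, total_cases: int) -> float:
    """AI = TFA / TC, as defined above."""
    if total_cases <= 0:
        raise ValueError("total test cases must be positive")
    return tests_feasible / total_cases

# 120 of 200 test cases pass the feasibility checklist -> AI of 0.6,
# i.e. roughly 60% of the suite is worth considering for automation.
ai = automation_index(120, 200)
```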

Mobile Application Testing

Introduction

Handheld devices are evolving and becoming increasingly complex with the continuous addition of features and functionality. Testing is challenging in the handheld, wireless world because the problems are new, or they show up in new ways. This paper highlights certain crucial areas a tester needs to concentrate on while testing mobile applications.

The four main areas to consider:

1. Understanding the behavior of the device
2. UI & usability testing
3. External constraints
4. Stress testing

Understanding the behavior of the device

If you are new to a device, the first thing you should do is get familiar with how the common device functionality works – the phone, camera, contacts, calendar, programs, and so on. Things to note while exploring the built-in applications:

- The overall color scheme/theme of the device
- The style and color of icons
- Progress indicators when pages are loading
- Menus – how they are invoked and the typical items they contain
- The overall responsiveness of applications on the device

UI & Usability Testing

The unique features of mobile devices pose a number of significant challenges for examining the usability of mobile applications, including screen orientation, multi-modality, small screen size, different display resolutions, soft keyboards, and touch screens.

Screen resolution: If your application is supported on various devices with different screen resolutions, test with the device that has the smallest screen and make sure the application still looks good on larger screens as well.

Screen orientation (landscape/portrait modes): If the device supports screen orientation changes, be sure to include plenty of testing where you rotate the device from portrait to landscape display, and vice versa, on all of the pages within your application. It is also important to test input reactions when the screen orientation is changed. Try using the soft keyboard while changing the orientation repeatedly.
Attempt this repeatedly and quickly to see whether rapid changes in orientation have a negative effect on the application.

Touch screens: Make sure the application supports multi-touch (e.g., pinch, two-finger tap, two-finger scroll, spread, two-hand spread), single touch (e.g., tap, double tap, scroll), and other touch gestures based on the requirements. The application should also be tested for long-touch and soft-touch behavior.

Soft keyboards – points to consider:

- Does the soft keyboard appear automatically?
- Does the first layer of the soft keyboard include shortcuts relevant to the highlighted field?
- Does a long touch on a soft character key bring up several different character choices?
- Can the soft keyboard be dismissed and re-displayed easily?
- Can the soft and hard keyboards be used interchangeably (if the device has both)?
- Do characters entered via the soft keyboard in password fields only show up as ****?

Multi-modality: Multi-modality combines voice and touch (via a keypad or stylus) as input with relevant spoken output (e.g., users are able to hear synthesized, prerecorded, streaming, or live instructions, sounds, and music on their mobile devices) and on-screen visual displays in order to enhance the mobile user experience and expand network operator service offerings. Make sure the application supports this functionality where the requirements call for it.

External Factors Affecting Mobile Application Testing

Network connections: If the app is going to be used on devices in various locations with various network connection speeds, it is important to plan testing coverage for the following scenarios:

- Only a Wi-Fi connection
- Only a 3G/2G connection
- With no SIM card in the device
- In airplane mode (or with all connections disabled)
- Using the network through a USB connection to a PC

Also test intermittent network scenarios that a user might encounter in the real world.

Phone calls: The tester has to check the application's behavior during incoming and outgoing calls.
Make sure that the application works correctly during the following phone-call scenarios:

- The application is interrupted by an incoming call, and the originator hangs up the call
- The application is interrupted by an incoming call, and the terminator hangs up the call
- The application is interrupted by placing an outgoing call, and the originator hangs up the call
- The application is interrupted by placing an outgoing call, and the terminator hangs up the call

Other interruptions: The tester has to consider the interruptions below, which could have an impact on the functionality or overall responsiveness of the application:

- Text messages
- Voicemail notifications
- Calendar events
- Social media notifications (Facebook, Twitter, etc.)
- Alarm clocks
- Low battery notifications

Device settings: Explore the device's options and change settings such as the following to see how they affect your application:

- Sound profiles – Does your application respect the device's sound settings?
- Device password/unlock pattern – Does your application still install correctly when a password/unlock pattern is prompted for?
- Font – How does choosing a different font family, size, or style affect the appearance and usability of your application?
- Screen timeout/auto on-off – Is your application subject to screen dimming or automatic shutoff even when it is actually busy?
- Screen orientation – Does your application respect this setting?
- Connections – How does enabling/disabling Bluetooth or other connection types affect your application's behavior?

Stress Testing

Certain mobile applications consume more memory and CPU than desktop applications. Stress testing is a must to identify exceptions, application hangs, and deadlocks that may go unnoticed during functional and user-interface testing. Note the behavior of the application while testing the following scenarios:

- Load your application with as much data as possible in an attempt to reach its breaking point.
- Perform the same operations over and over again, particularly those that load large amounts of data repeatedly.
- Perform the repeated operations at varying speeds – very quickly or very slowly.
- Leave your application running for a long period of time, both interacting with the device and just letting it sit idle, or performing some automatic task that takes a long time.
- Run multiple applications on the device so you can switch between your application and the others.
- After testing several functions, switch the device off and on again.

Have questions? Contact the software testing experts at InApp to learn more.

Everything About Single Sign On (SSO)

Single Sign On

What is Single Sign-On (SSO)?

Single sign-on is authentication that lets a user access different applications from a single environment without giving multiple usernames or passwords: the user logs in once and, through that one login, accesses the different applications. Single sign-off is the reverse action: a single act of signing out terminates access to the multiple applications.

SAML (Security Assertion Markup Language)

SAML is an XML standard that allows secure web domains to exchange user authentication and authorization data. The SSO authentication process only applies to web-based applications; for desktop clients such as Windows applications, users need usable passwords, which can be synchronized with the internal user database using a provisioning API. Using SAML, an online service provider can contact a separate online identity provider to authenticate users who are trying to access secure content.

How SAML works in SSO

Let's see how SAML works in SSO authentication, using the workflow of SSO authentication with a Google application as an example. Before the whole process starts, the partner must provide Google with the URL of its SSO service as well as the public key that Google should use to verify SAML responses. The following steps form the workflow of the entire SSO authentication process; each step is plotted in the figure above.

The user attempts to reach a hosted Google application, such as Gmail, Start Pages, or another Google service. Google generates a SAML authentication request. The SAML request is encoded and embedded into the URL for the partner's SSO service. The RelayState parameter, containing the encoded URL of the Google application that the user is trying to reach, is also embedded in the SSO URL. The RelayState parameter is meant to be an opaque identifier that is passed back without any modification or inspection. Google sends a redirect to the user's browser.
The redirect URL includes the encoded SAML authentication request, which is submitted to the partner's SSO service. The partner decodes the SAML request and extracts the URL for both Google's ACS (Assertion Consumer Service) and the user's destination (the RelayState parameter). The partner then authenticates the user, either by asking for valid login credentials or by checking for valid session cookies.

The partner generates a SAML response that contains the authenticated user's username. In accordance with the SAML 2.0 specification, this response is digitally signed with the partner's DSA/RSA private key.

The partner encodes the SAML response and the RelayState parameter and returns that information to the user's browser, along with a mechanism for forwarding it to Google's ACS. For example, the partner could embed the SAML response and destination URL in a form and provide a button that the user can click to submit the form to Google, or include JavaScript on the page that submits the form automatically.

Google's ACS verifies the SAML response using the partner's public key. If the response is successfully verified, the ACS redirects the user to the destination URL, and the user is logged in to Google Apps.

That is the basic principle behind SSO authentication.

Advantages of SSO applications

- Reduces phishing success, because users are not trained to enter passwords everywhere without thinking
- Reduces password fatigue from different username and password combinations
- Reduces time spent re-entering passwords for the same identity
- Can support conventional authentication such as Windows credentials (i.e., username/password)
- Reduces IT costs due to a lower number of IT help desk calls about passwords
- Provides security on all levels of entry/exit/access to systems without the inconvenience of re-prompting users
- Provides centralized reporting for compliance adherence

Criticisms

- SSO servers can introduce a single point of network failure.
- The SSO server and other host security must be hardened.

Areas that need to be tested in SSO authentication applications

- Check the browser cache
- Check authenticated and non-authenticated users
- Check users with different privileges
- Check URL security
- Check server performance

Basic test scenarios for SSO login authentication

1. A valid user logs in to their own intranet application and accesses an external host application.
Result: The user should be able to access the external host application, since they are a valid SSO user.

2. An invalid user logs in to their own intranet application and accesses the external host application.
Result: The external application's login prompt should show up, since they are an invalid SSO user.

3. A valid user logs in to the intranet and accesses the external host application, and at the same time the intranet session expires.
Result: The user should be able to keep working in the external host application even if the intranet session expires.

4. A valid user logs in to the intranet, accesses an external application, logs out, and again accesses the external application.
Result: The user should be logged in to the external application without a login prompt.

5. A valid user logs in to the intranet, accesses the external application, and then the session times out in the external application.
Result: The user should be able to continue working in the external application after the timeout, since a session has already been established with the intranet application; the user's credentials are automatically fetched from the browser cache and the user is logged in again.

Have questions? Contact the technology experts at InApp to learn more.
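The "encoded and embedded into the URL" step of the workflow above commonly refers to SAML's HTTP-Redirect binding: the request XML is compressed with raw DEFLATE, Base64-encoded, and URL-encoded into the query string alongside RelayState. A minimal sketch (the XML below is a stub, not a complete AuthnRequest, and real deployments also sign the request):

```python
import base64
import urllib.parse
import zlib

def redirect_query(saml_request_xml: str, relay_state: str) -> str:
    """Build the query string for a SAML HTTP-Redirect binding request."""
    compressor = zlib.compressobj(9, zlib.DEFLATED, -15)  # -15 => raw DEFLATE, no zlib header
    deflated = compressor.compress(saml_request_xml.encode("utf-8")) + compressor.flush()
    return urllib.parse.urlencode({
        "SAMLRequest": base64.b64encode(deflated).decode("ascii"),
        "RelayState": relay_state,  # passed back untouched, as described above
    })

query = redirect_query("<samlp:AuthnRequest ID='_abc'/>", "https://mail.google.com/a/example.com")
```

The receiving SSO service reverses the steps: URL-decode, Base64-decode, then inflate to recover the XML.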

Software Quality Control vs Software Quality Assurance (QC vs QA)

Difference between QA and QC

This is one of the most frequently asked questions, with many different versions of the definition.

What is Software Quality Control (SQC)?

Software Quality Control (SQC) is the set of procedures used by an organization to ensure that a software product will meet its quality goals at the best value to the customer, and to continually improve the organization's ability to produce software products in the future. [Source: Wikipedia: http://en.wikipedia.org/wiki/Software_quality_control]

What is Software Quality Assurance (SQA)?

Software Quality Assurance (SQA) consists of a means of monitoring the software engineering processes and methods used to ensure quality. The methods by which this is accomplished are many and varied and may include ensuring conformance to one or more standards, such as ISO 9000, or a model such as CMMI. [Source: Wikipedia: http://en.wikipedia.org/wiki/Software_quality_assurance]

"Product" and "process" are the keywords that distinguish QA from QC.

What is Quality Control? Quality Control is product-oriented: it focuses on the product itself and whether it meets its quality goals and user requirements. Testing and reviews, for example, fall into this category.

What is Quality Assurance? Quality Assurance is process-oriented: it focuses on whether the processes in a project conform to the organizational standards and methodologies as defined. Since QA focuses on the whole project, it can supervise quality control.

Service Oriented Architecture

Service-Oriented Architecture

SOA is an evolution of distributed computing designed to allow the interaction of software components, called "services", across a network. Applications are created from a composition of these services, and the services can be shared among multiple applications.

The Need for Service-Oriented Architecture

- Systems today are bigger
- Systems need to be interconnected
- Object orientation (OO) works for small to medium-sized systems
- Component orientation (CO) works for medium to large systems
- SOA is an evolution of OO and CO and fulfills the need for very large systems

What is Service-Oriented Architecture?

SOA is an approach that helps design, implement, and deploy large systems created from components that implement discrete business functions. These components, called "services", can be distributed across geographies and enterprises, and can be reconfigured into new business processes as needed.

What is a Service?

A service is a program that can be interacted with through well-defined message exchanges. Messages can be sent from one service to another without consideration of how the service handling those messages is implemented, thus providing interoperability and adaptability. Services must be designed for both availability and stability. Services are loosely coupled, which provides the most important benefit of SOA: agility.

The 4 Tenets of Service-Oriented Architecture

1. Services are autonomous. Services are entities that are independently deployed, versioned, and managed.

2. A service has explicit boundaries. Services interact through explicit message passing over well-defined boundaries, published via WSDL.

3. A service exposes schema and contract, not class or type. Services communicate using XML, which is agnostic to both programming languages and platforms. The schema defines the structure and content of the messages, while the service's contract defines the behavior of the service itself.

4.
A service allows or denies use based on policy. A service decides which messages it should process.

Example 1

A human resources management application could be created from the following services:

- An Employee Administration Service to manage hiring, changes in status, and termination
- A Salary and Review Administration Service to manage salaries and employee performance reviews according to corporate standards
- An IT Security Service to manage the addition and removal of access rights for employees according to their role and employment status
- A Payroll Service provided securely over the Internet by an outside provider
- An HR Department Portal Service that provides a web-browser-based user interface for members of the HR department, presenting the functions of the above services

Example 2

A travel application used on a mobile device can use SOA to obtain all of the following information from different service providers:

- Hotel rates
- Currency exchange rates
- Weather information
- A list of places to visit

Have questions? Contact the technology experts at InApp to learn more.
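The tenet "exposes schema and contract, not class or type" can be sketched in a few lines: the consumer only ever sees XML messages that match the published schema, never the service's internal types. The element names and the hiring example here are invented for illustration:

```python
import xml.etree.ElementTree as ET

def employee_administration_service(message_xml: str) -> str:
    """Contract: accept <EmployeeHire><Name>...</Name></EmployeeHire>,
    reply with <HireResult status="...">. The implementation stays hidden."""
    name = ET.fromstring(message_xml).findtext("Name")
    # ...internal hiring logic would run here...
    return f'<HireResult status="hired"><Name>{name}</Name></HireResult>'

# The consumer composes and parses messages; it never imports the
# service's classes, so either side can be reimplemented freely.
reply = employee_administration_service("<EmployeeHire><Name>Ada</Name></EmployeeHire>")
status = ET.fromstring(reply).get("status")
```

Because only the message format is shared, the same exchange works whether the service runs in-process, across the network, or on a different platform entirely.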

How to identify dynamically changing objects in QTP?

Consider an example where you have a tree with nodes (folders or directories). The tree as a whole is rendered as a web table, and each sub-folder as a nested web table. It is easy to identify the index of a tree node while recording, but during playback, when an additional folder or directory has been added, the index will have changed. In situations like this, where the index of an object changes dynamically, two actions need to be performed:

1. Identify the properties of the object
2. Identify the index at run time

Identify the properties of the object

- Use Object Spy
- Add the object to the Object Repository (OR)

Identify the index at run time

Once the properties are identified, we can use the following piece of code:

For i = 0 To 1000
    If Browser("Browser").Page("Page").WebTable("index:=" & i).Exist Then
        Set sObjTable = Browser("Browser").Page("Page").WebTable("index:=" & i)
        ' Match on a property that does not change between runs
        If sObjTable.GetROProperty("<property name that is not changed>") = <value that is expected> Then
            Set sRootFolder = sObjTable
        End If
    Else
        Exit For
    End If
Next

In a similar manner, we can identify checkbox and radio-button objects whose index changes at run time. This worked for me; hope it will be useful for you too.

Have questions? Contact the software testing experts at InApp to learn more.

Work with multiple IE instances using QTP

If your IE-based application opens another window whose properties are the same as the first, it is difficult to identify the objects in the newly opened browser. For example, consider an application where, after login, the user is navigated to a launch page from which the application can be launched in a new window. All the windows opened have the same set of properties, such as title and name.

Solution: We can select the objects on any page based on the creation time property of the browser window:

Set oDesc = Description.Create
oDesc("micclass").Value = "Browser"
Set sDesk = Desktop.ChildObjects(oDesc)   ' collection of open browser windows
Set objBrowser = Browser("creationtime:=1")  ' select the browser by its creation-time ordinal
Set App_objPage = objBrowser.Page("title:=NAME")

Make sure all other IE instances are closed before the script starts execution.

Have questions? Contact the software testing experts at InApp to learn more.

InApp India Office

121 Nila, Technopark Campus
Trivandrum, Kerala 695581
+91 (471) 277-1800
mktg@inapp.com

InApp USA Office

999 Commercial St. Ste 210 Palo Alto, CA 94303
+1 (650) 283-7833
mktg@inapp.com

InApp Japan Office

6-12 Misuzugaoka, Aoba-ku
Yokohama, 225-0016
+81-45-978-0788
mktg@inapp.com
Terms Of Use
© 2000-2026 InApp, All Rights Reserved