How to edit more than 200 rows in SQL Server Management Studio 2008

There are two options to edit more than 200 rows in SQL Server Management Studio 2008.

Option 1: change the 200-row limit permanently. Go to Tools -> Options -> SQL Server Object Explorer -> Commands and change the value for "Edit Top <n> Rows command".

Option 2: change the limit temporarily. Right-click the table and click Edit Top 200 Rows; a new query window opens. Change the SELECT TOP (n) statement, then click the red exclamation mark (!) to refresh the selection.

Disable "Prevent saving changes that require the table to be re-created"

By default, Microsoft SQL Server Management Studio has the option "Prevent saving changes that require the table to be re-created" enabled. This causes an error when you try to save changes to a table that require the table to be dropped and re-created. To turn the setting off, go to Tools -> Options -> Designers -> Table and Database Designers on the menu bar, then under the Table Options section, uncheck "Prevent saving changes that require the table to be re-created". Have questions? Contact the technology experts at InApp to learn more.
Insert Generator Script

By executing the following stored procedure, we can create an INSERT statement for every record in a table:

EXECUTE [InsertGenerator] 'tableName'

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROC [dbo].[InsertGenerator] (@tableName varchar(max)) AS
-- Declare a cursor to retrieve column-specific information for the specified table
DECLARE cursCol CURSOR FAST_FORWARD FOR
    SELECT column_name, data_type FROM information_schema.columns WHERE table_name = @tableName
OPEN cursCol
DECLARE @string nvarchar(max)     -- for storing the first half of the INSERT statement
DECLARE @stringData nvarchar(max) -- for storing the data (VALUES) related statement
DECLARE @dataType nvarchar(max)   -- data types returned for the respective columns
SET @string = 'INSERT ' + @tableName + '('
SET @stringData = ''
DECLARE @colName nvarchar(max)
FETCH NEXT FROM cursCol INTO @colName, @dataType
IF @@FETCH_STATUS <> 0
BEGIN
    PRINT 'Table ' + @tableName + ' not found, processing skipped.'
    CLOSE cursCol
    DEALLOCATE cursCol
    RETURN
END
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @dataType IN ('varchar', 'char', 'nchar', 'nvarchar')
    BEGIN
        SET @stringData=@stringData+''''+'''+isnull('''''+'''''+['+@colName+']+'''''+''''',''NULL'')+'',''+'
    END
    ELSE IF @dataType IN ('text', 'ntext') -- if the data type is text or ntext
    BEGIN
        SET @stringData=@stringData+'''''''''+isnull(cast(['+@colName+'] as varchar(2000)),'''')+'''''',''+'
    END
    ELSE IF @dataType = 'money' -- because money doesn't get converted from varchar implicitly
    BEGIN
        SET @stringData=@stringData+'''convert(money,''''''+isnull(cast(['+@colName+'] as varchar(200)),''0.0000'')+''''''),''+'
    END
    ELSE IF @dataType = 'datetime'
    BEGIN
        SET @stringData=@stringData+'''convert(datetime,'+'''+isnull('''''+'''''+convert(varchar(200),['+@colName+'],121)+'''''+''''',''NULL'')+'',121),''+'
    END
    ELSE IF @dataType = 'image'
    BEGIN
        SET @stringData=@stringData+'''''''''+isnull(cast(convert(varbinary,['+@colName+']) as varchar(6)),''0'')+'''''',''+'
    END
    ELSE -- presuming the data type is int, bit, numeric, or decimal
    BEGIN
        SET @stringData=@stringData+''''+'''+isnull('''''+'''''+convert(varchar(200),['+@colName+'])+'''''+''''',''NULL'')+'',''+'
    END
    SET @string=@string+'['+@colName+'],'
    FETCH NEXT FROM cursCol INTO @colName, @dataType
END
DECLARE @query nvarchar(max)
SET @query='SELECT '''+substring(@string,0,len(@string))+') VALUES(''+ '+substring(@stringData,0,len(@stringData)-2)+'''+'')'' FROM '+@tableName
exec sp_executesql @query
CLOSE cursCol
DEALLOCATE cursCol
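The same technique the procedure uses — read the table's column metadata, then build one INSERT per row, escaping string values — can be sketched outside T-SQL as well. This is a minimal, hedged Python version using the standard sqlite3 module; the `person` table and its columns are purely illustrative:

```python
import sqlite3

def insert_generator(conn, table_name):
    """Yield an INSERT statement for every row in table_name."""
    cur = conn.execute(f"SELECT * FROM {table_name}")
    # column metadata, analogous to the cursor over information_schema.columns
    columns = [d[0] for d in cur.description]
    for row in cur:
        values = []
        for value in row:
            if value is None:
                values.append("NULL")
            elif isinstance(value, str):
                # double embedded single quotes, as the '' escaping does in T-SQL
                values.append("'" + value.replace("'", "''") + "'")
            else:
                values.append(str(value))
        yield (f"INSERT INTO {table_name} ({', '.join(columns)}) "
               f"VALUES ({', '.join(values)})")

# usage: a throwaway in-memory table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER, name TEXT)")
conn.execute("INSERT INTO person VALUES (1, 'O''Brien')")
for stmt in insert_generator(conn, "person"):
    print(stmt)   # INSERT INTO person (id, name) VALUES (1, 'O''Brien')
```

The stored procedure does the same work entirely in dynamic SQL, which is why its quoting is so heavily nested.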
How to Clear SQL Server Transaction Log

In some cases, the Microsoft SQL Server transaction log (.LDF) file becomes very large. It wastes a lot of disk space and causes problems when you want to back up and restore the database. We can delete the log file and let SQL Server create a new log file at the minimum size. To delete SQL Server transaction log files, follow the steps given below:

1. Back up the database.
2. Detach the database: right-click the database -> Tasks -> Detach.
3. Delete or rename the big log file (path: C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA).
4. Attach the database again: right-click Databases -> Attach. In the Attach Databases box, click Add..., browse to the database (.mdf) file, and click OK. Select the log (.ldf) file, click Remove, and finally click OK.

You can now see the new log file with the minimum size.
Software Test Metrics | Defect Metrics | Defect Slippage Ratio

Introduction: Metrics can be defined as "standards of measurement". A metric is a unit used for describing or measuring an attribute. Test metrics are the means by which software quality can be measured; they provide visibility into the readiness of the product and give a clear measurement of its quality and completeness.

What are Test Metrics? Test metrics are quantitative measures used to estimate the progress, quality, and other activities of the software testing process.

Why do we Need Metrics? "You cannot improve what you cannot measure." "You cannot control what you cannot measure." Test metrics help to:

Take decisions for the next phase of activities
Provide evidence for a claim or prediction
Understand the type of improvement required
Take decisions on process or technology changes

Types of Test Metrics

Base Metrics (Direct Measure): Base metrics constitute the raw data gathered by a test analyst throughout the testing effort. These metrics are used to provide project status reports to the test lead and project manager; they also feed into the formulas used to derive calculated metrics. Ex: # of test cases, # of test cases executed.

Calculated Metrics (Indirect Measure): Calculated metrics convert the base metrics data into more useful information. These types of metrics are generally the responsibility of the test lead and can be tracked at many different levels (by module, tester, or project). Ex: % complete, % test coverage.

Metrics Life Cycle

Defect Metrics

Release Criteria

Defect Pattern

Test Plan Coverage on Functionality: The total number of requirements vs. the number of requirements covered through test scripts.

(Number of requirements covered / Total number of requirements) * 100

Define the requirements at the time of effort estimation. Example: the total number of requirements estimated is 46, the total number of requirements tested is 39, and 7 are blocked. The coverage is (39 / 46) * 100 = 84.8%.

Note: Define requirements clearly at the project level.

Test Case Defect Density: The total number of errors found in test scripts vs. those developed and executed.

(Defective test scripts / Total test scripts executed) * 100

Example: 1360 test scripts were developed, 1280 were executed, 1065 passed, and 215 failed. So, the test case defect density is (215 / 1280) * 100 = 16.8%. This 16.8% value can also be called test case efficiency %, which depends on the total number of test cases that uncovered defects.

Defect Slippage Ratio: The number of defects slipped (reported from production) vs. the number of defects reported during execution.

Number of defects slipped / (Number of defects raised - Number of defects withdrawn)

Example: 21 defects were filed by the customer, 267 defects were found while testing, and 17 defects were invalid. So, the slippage ratio is [21 / (267 - 17)] * 100 = 8.4%.

Requirement Volatility: The number of requirements agreed vs. the number of requirements changed.

(Number of requirements added + deleted + modified) * 100 / Number of original requirements

Ensure that the requirements are normalized or defined properly while estimating. Example: the VSS 1.3 release had a total of 67 requirements initially; later, 7 new requirements were added, 3 initial requirements were removed, and 11 were modified. So, the requirement volatility is (7 + 3 + 11) * 100 / 67 = 31.34%.

Review Efficiency: Review efficiency is a metric that offers insight into the quality of reviews and testing. Some organizations also use this term for "static testing" efficiency, aiming to find a minimum of 30% of defects in static testing.

Review efficiency = (Total number of defects found by reviews / Total number of project defects) * 100

Example: a project found a total of 269 defects in different reviews, which were fixed, and the test team reported 476 valid defects. So, the review efficiency is [269 / (269 + 476)] * 100 = 36.1%.

Efficiency & Effectiveness of Processes

Effectiveness: Doing the right thing. It deals with meeting the desirable attributes that are expected by the customer.

Efficiency: Doing the thing right. It concerns the resources used for the service to be rendered.
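The calculated metrics above can be reproduced with a short script. This is a minimal sketch using the section's own example figures; the `pct` helper is just for illustration:

```python
def pct(numerator, denominator, digits=1):
    """Generic (numerator / denominator) * 100 helper used by all the ratios below."""
    return round(numerator / denominator * 100, digits)

# Test plan coverage on functionality: requirements covered vs. estimated
coverage = pct(39, 46)                        # 84.8

# Test case defect density: failed scripts vs. executed scripts
defect_density = pct(215, 1280)               # 16.8

# Defect slippage ratio: production defects vs. valid defects raised in testing
slippage = pct(21, 267 - 17)                  # 8.4

# Requirement volatility: (added + deleted + modified) vs. original requirements
volatility = pct(7 + 3 + 11, 67, digits=2)    # 31.34

# Review efficiency: review defects vs. all project defects (review + test)
review_efficiency = pct(269, 269 + 476)       # 36.1

print(coverage, defect_density, slippage, volatility, review_efficiency)
```

Each value matches the worked examples in the text, which makes the script a convenient sanity check when the raw counts change.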
Types of Project Metrics

A metric is an inevitable part of any piece of work being performed. It is a system put in place to measure the excellence, or rather the performance, of the work delivered. Work that is not controlled and measured can prove equivalent to incorrect work being delivered. Technology grows at such a tremendous pace that enterprises always strive to keep well-defined project metrics. Project metrics can be stated as pre-defined measures or benchmarks that the deliverable is supposed to attain in order to provide the expected value. With clearly defined project metrics, business groups are able to assess the success of a project. Though certain unstated measures, such as whether the project was delivered on time and within budget, have existed ever since the advent of enterprises, the need for more analytics in this area has seen a sharp rise. There are different types of project metric analysis systems in place across the industry, based on cost, resources, hours, etc. Let me take you through some common project metrics that relate to the person-hours delivered in a project.

Effort Variance (Ev)

This is a derived metric that alerts you to whether the project is under control. Let there be a project A with the following current attributes:

Planned effort: 100 hrs
Actual effort: 150 hrs
Project progress: 50%

If 150 hrs were taken at 50% progress, then at 100% progress X hrs will be taken:

X = (100 * 150) / 50 = 300 hrs

where X is the predicted effort within which the project is going to complete. Hence, the variance is

Ev = ((Predicted - Planned) / Planned) * 100 = ((300 - 100) / 100) * 100 = 200%

The predicted variance indicates that the project requires attention, or it will complete at a much higher cost in terms of the effort delivered.

Schedule Variance (Sv)

Schedule variance uses the same calculation, with the number of days considered instead of hours.
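The projection and variance calculation for project A can be sketched as a small function; the figures are the ones from the example above:

```python
def effort_variance(planned, actual, progress_pct):
    """Project the total effort from progress so far, then compare it to the plan."""
    predicted = actual * 100 / progress_pct      # effort expected at 100% progress
    ev = (predicted - planned) / planned * 100   # variance vs. the planned effort
    return predicted, ev

predicted, ev = effort_variance(planned=100, actual=150, progress_pct=50)
print(predicted, ev)   # 300.0 200.0
```

Schedule variance would use the same function with days in place of hours.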
Weighted Defect Rate (WDR)

WDR is a defect metric calculated based on the weightage assigned to the reported bugs. The weightage depends on two factors: severity and reporter.

Weightage against severity, in descending order: Block, Crash, Major, Minor
Weightage against reporter, in descending order: Client, SQC, Team

The rate is calculated against the total planned hours for the project.

Quality Costs

Cost of Quality: the total time spent on review activities in the project. Examples are requirements review, design review, code review, test plan review, team meetings for clarifications, client calls, etc.

COQ = (Total review hrs / Total project planned hrs) * 100

Cost of Detection: the total time spent on testing activity.

Cost of Detection = (Total testing hrs / Total project planned hrs) * 100

Cost of Failure: the total time spent on rework in the project. Rework includes bug fixing, design changes, test plan changes, etc.

Cost of Failure (Cost of Poor Quality, CoPQ) = (Total rework or bug-fixing hrs / Total project planned hrs) * 100
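The three quality-cost ratios share the same shape, so they can be sketched together. The hour figures below are hypothetical, invented purely to exercise the formulas:

```python
def quality_costs(review_hrs, testing_hrs, rework_hrs, planned_hrs):
    """Express each quality cost as a percentage of total planned project hours."""
    coq = review_hrs / planned_hrs * 100    # Cost of Quality (reviews, meetings)
    cod = testing_hrs / planned_hrs * 100   # Cost of Detection (testing effort)
    copq = rework_hrs / planned_hrs * 100   # Cost of Failure / CoPQ (rework)
    return coq, cod, copq

# hypothetical figures for a 2000-hour project
coq, cod, copq = quality_costs(review_hrs=120, testing_hrs=500,
                               rework_hrs=180, planned_hrs=2000)
print(coq, cod, copq)   # 6.0 25.0 9.0
```

Tracking the three side by side shows where quality effort is actually going: prevention, detection, or failure.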
Types of Software Testing

Dry Run Testing: In this type of testing, the effects of a possible failure are intentionally mitigated. It is usually done on a different server with customer data before moving to the actual production release.

Mutation Testing: This type of testing checks whether our unit tests are robust enough. A mutation is a small change in code: we deliberately alter a program's code and then re-run our valid unit test suite against the mutated program. A good unit test will detect the change in the program and fail accordingly.

Incremental Testing: Partial testing of an incomplete product, usually done to provide early feedback to the developers.

Bucket Testing (A/B Testing): A/B testing compares the effectiveness of two versions of a webpage or marketing email in order to discover which has the better response rate or sales conversion rate.

Soak Testing: Involves testing a system with a significant load extended over a significant period of time to discover how the system behaves under sustained use.

Sandbox Testing: A sandbox is a testing environment that isolates untested code changes and outright experimentation from the production environment, somewhat like a working directory, test server, or development server in which developers "check out" a copy of the source code tree or a branch to examine and work on. Only after developers have fully tested the code changes in their own sandbox should the changes be checked back in, merged with the repository, and thereby made available to other developers or end users of the software.
Cross-Site Scripting (XSS)

What is Cross-Site Scripting? Cross-site scripting, also known as XSS, is a type of security vulnerability typically found in web applications. It occurs when a web application gathers malicious data from a user, usually in the form of a hyperlink that contains malicious content. Browsers are capable of displaying HTML content and executing JavaScript. If the application does not escape special characters in its input/output and sends the user input back to the browser, an attacker may be able to launch an XSS attack successfully, through which malicious files can be executed, session details of a logged-in user can be stolen, or Trojans can be installed.

Types of XSS:

Non-persistent (reflected) XSS is the most common type. A non-persistent XSS vulnerability occurs when the data provided by the attacker is immediately executed and the generated page is returned to that user.

Persistent (stored) XSS occurs when the data provided by the attacker is saved on the server and permanently displayed on web pages returned to other users.

Another type of XSS attack is DOM-based XSS (type-0 XSS), an attack wherein the attack payload is executed as a result of modifying the DOM environment in the victim's browser.

How to Perform XSS Testing:

Submitting a malicious script through text inputs:

List all the text input fields (text box, text area) in the application.
Submit simple JavaScript code, like <script>alert("XSS")</script>, through each identified text input field.
If the field is vulnerable, an alert with the text mentioned in the quotes will be returned.

Submitting a malicious script through the application URL, modifying the requests using a security testing tool like Burp Suite:

Capture the request using the Burp tool.
Append the malicious script to the captured request.
Forward the modified request.
Validate the result.

How to Prevent XSS: XSS attacks are possible mainly because the server does not handle special characters in the output. There are two broad strategies for defeating XSS:

Whitelisting good input: Create a whitelist of characters required by the application. Once the whitelist is ready, the application should reject any request containing a character outside the list.

Blacklisting bad input: The application should not accept any script, special character, or HTML in fields where it is not required. It should escape special characters that may prove harmful. Some of the special characters used in scripts that must be escaped are <>()[]{}/\*;:=%+^!
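Both strategies can be sketched in a few lines. This is a minimal illustration, not a complete defense: the whitelist pattern (letters, digits, spaces) is an assumption about what the field needs, and output escaping here uses Python's standard html.escape:

```python
import html
import re

# Whitelisting: accept only the characters the field actually needs
# (assumed here to be letters, digits, and spaces)
WHITELIST = re.compile(r"^[A-Za-z0-9 ]*$")

def is_allowed(user_input: str) -> bool:
    return bool(WHITELIST.match(user_input))

# Output escaping: neutralize special characters before echoing input
# back to the browser, so reflected markup is not executable
def render(user_input: str) -> str:
    return html.escape(user_input, quote=True)

payload = '<script>alert("XSS")</script>'
print(is_allowed(payload))   # False: rejected by the whitelist
print(render(payload))       # &lt;script&gt;alert(&quot;XSS&quot;)&lt;/script&gt;
```

In practice, escaping on output is the more robust of the two, because it works even for input that legitimately contains special characters.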
Creating Advanced Test Plan in JMeter

The need for creating an advanced test plan arises when the test requires any of the following:

The need to validate results based on updates to a field in the DB
The use of an input file in order to parameterize input variables
The use of While and If-Else controllers

Steps to be followed while recording an advanced script:

Open a new Test Plan
Right-click on Test Plan->Add->Threads (Users)->Thread Group
Right-click on Thread Group->Add->Config Element->CSV Data Set Config
Right-click on Thread Group->Add->Config Element->HTTP Cookie Manager
Right-click on Thread Group->Add->Config Element->HTTP Header Manager
Right-click on Thread Group->Add->Logic Controller->Transaction Controller
Right-click on Thread Group->Add->Logic Controller->Recording Controller
Right-click on WorkBench->Add->Non-Test Elements->HTTP Proxy Server
Click on HTTP Proxy Server, and from the Target Controller drop-down select 'Transaction Controller->Recording Controller'
To exclude images, add rows in the 'URL Patterns to Exclude' and use the patterns ".*\.jpg", ".*\.gif", ".*\.png"
Right-click on Recording Controller->Add->Config Element->JDBC Connection Configuration. This is used to create a database connection; we include the Database URL, JDBC Driver Class, Username, and Password
Right-click on Recording Controller->Add->Config Element->User Defined Variables. This is used to replace hard-coded values, say Username. The syntax is 'Variable name = Variable value'; use ${variable name} in the script instead of the hard-coded value(s)
Right-click on Recording Controller->Add->Logic Controller->While Controller
Right-click on While Controller->Add->Sampler->JDBC Request. This sampler is used to send a SQL query to the database. Note: before using the JDBC Request sampler, we need to set up the 'JDBC Connection Configuration' configuration element
Right-click on While Controller->Add->Sampler->Debug Sampler. It generates a sample of all values of JMeter variables. Similarly, we can generate a sample of all values of JMeter properties and system properties; presently in our scripts, we set these 2 properties to false
Right-click on While Controller->Add->Timer->Constant Timer. This timer pauses for the set amount of time between requests. If we don't add a delay, JMeter could overwhelm the server by making too many requests in a short amount of time

Please note: before recording, go to Internet Explorer->Tools->Internet Options->Connections->LAN settings, check the proxy server, enter address: localhost and port: 8080, and click OK. Click on Start to record the script. Once the recording is over, click Stop and save it as a JMX file.

Using Transaction Controllers in the Test Plan

Grouping the test actions within a Transaction Controller helps the normal user understand the script more effectively than recording an entire script in one full stretch. To add a Transaction Controller, right-click on Thread Group->Add->Logic Controller->Transaction Controller. A Transaction Controller measures the overall time taken to perform the nested test elements. Once the entire script is executed, click on View Results Tree and select a Transaction Controller; this controller gives the overall time taken to process the request (load time). The load time is shown in milliseconds. For n multi-users, once the scripts are executed there will be 'n' Transaction Controllers within the View Results Tree. Each Transaction Controller, when clicked, shows 3 tabs: Sampler Result, Request, and Response. By default, the Sampler Result tab is shown.

Assume a user logs in to an online shopping website and performs a search on 5 different products. While the script is under execution, the progress of the script parsing through the 5 different products is reflected in the statement "Search Transaction Controller 1-4". This means that of the 5 searches, the 4th has been completed.
Merging Scripts

Two or more scripts can be merged into a single test plan. Assume we have three scripts merged in the following order:

Create Profile (executed by 10 users)
Basic Search (executed by another 10 users)
Signing up on an online shopping website (executed by another 5 users)

Steps to be followed while merging the scripts:

Open any existing test plan, say 'CreateProfile.jmx'
Right-click on Test Plan->Merge
Select the test plan you want to merge, say 'BasicSearch.jmx'
A new thread group is displayed along with the existing thread group
In order to avoid confusion, it is better to rename the merged thread group(s)

When the merged script is executed, there will be 25 (10+10+5 users) Transaction Controllers present in the test plan. During execution, each of the 3 merged scripts is tracked independently; i.e., if the Transaction Controller shows 'Basic Search Transaction Controller 2-6', this implies the 6th search of the 2nd merged script has just completed execution.

Input File (Comma-Delimited (CSV) File)

The recorded scripts can be executed with multiple users by parameterizing our scripts. This can be done in two ways:

User-defined variables
Input file

A CSV file is very useful while executing the JMeter script(s) with 'n' multiple users. The attached screenshot (CSV.jpg) has 3 columns: the first column is for the username, the second for the password, and the third for the server name. On executing the script after parameterizing, the script fetches the value from the CSV file, substitutes it into the corresponding request, and sends it to the server.

Points to be noted while using a CSV file:

Open an Excel file and provide the required information, say Username, Password, Server name, etc., under each column that we would like to pass as an input parameter to the script.
Save the Excel file as .csv
In the test plan, configure the CSV Data Set Config as follows. Click on CSV Data Set Config:
Filename: filename.csv
File encoding: leave it blank
Variable Names (comma-delimited): specify the names for the parameters (values) in each column of the CSV file. Later, use these variable names in the script as '${variablename}'
Delimiter (use \t for tab): ,
Allow quoted data?: False
Recycle on EOF?: True
Stop thread on EOF?: False
Sharing mode: All threads

Note: The first variable name entered in the Variable Names text box holds the value of the first column of the CSV file, and so on. Replace values with the corresponding variable names in the form '${variable name}' throughout the script. It would be even better to rename the
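The credentials file that CSV Data Set Config reads can also be generated programmatically instead of through Excel. A minimal sketch, assuming the three columns described above (username, password, server) and the filename used in the configuration:

```python
import csv

# rows matching the assumed Variable Names entry: USERNAME,PASSWORD,SERVER
rows = [
    ("user1", "pass1", "test.example.com"),
    ("user2", "pass2", "test.example.com"),
]

with open("filename.csv", "w", newline="") as f:
    # no header row: CSV Data Set Config maps columns to the Variable
    # Names list by position, first name to first column and so on
    csv.writer(f).writerows(rows)
```

Each JMeter thread then picks up the next row on each iteration, so two threads running this file would log in as user1 and user2 respectively.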
Creating Basic Test Plan in JMeter

How to Create a Basic Test Plan?

Steps to be followed while recording a script:

Open a new Test Plan
Right-click on Test Plan->Add->Threads (Users)->Thread Group
Right-click on Thread Group->Add->Config Element->HTTP Cookie Manager
Right-click on Thread Group->Add->Config Element->HTTP Header Manager
Right-click on Thread Group->Add->Config Element->HTTP Request Defaults
Right-click on Thread Group->Add->Logic Controller->Recording Controller
Right-click on WorkBench->Add->Non-Test Elements->HTTP Proxy Server*
Click on HTTP Proxy Server, and from the Target Controller drop-down select 'Thread Group > Recording Controller'**
Click on Start to record the script
Once the recording is over, click Stop and save it as a ".jmx" file

*Before recording, go to Internet Explorer->Tools->Internet Options->Connections->LAN settings. Check the proxy server, enter address: localhost and port: 8080, and click OK.

**To exclude images, add rows in the 'URL Patterns to Exclude' and use the patterns ".*\.jpg", ".*\.gif", ".*\.png"

To set the number of users:

Click on Thread Group
Set the Number of Threads (users)
Set the Ramp-Up Period (in seconds): the time over which JMeter starts all of the threads

To add appropriate test results (Listeners) to the test plan:

Right-click on Thread Group->Add->Listener->View Results Tree. This report will give details of the Sampler (HTML page), request info, and response info.
Test Automation Frameworks

A framework is a set of assumptions, concepts, and practices that support automation.

Types of Frameworks

Test Script Modularity Framework
Test Library Architecture Framework
Keyword-Driven Framework
Data-Driven Framework
Hybrid Framework

Test Script Modularity Framework: This framework requires the creation of small, independent scripts that represent the modules, sections, and functions of the application under test (AUT).

Test Library Architecture Framework: This framework divides the AUT into procedures and functions, and requires the creation of library files that represent the modules, sections, and functions of the AUT.

Keyword-Driven Framework: This framework requires the development of data tables and keywords, independent of the test automation tool used to execute them, and test script code that "drives" the application under test and the data. Keyword-driven tests look very similar to manual test cases: the functionality of the application under test is documented in a table, as step-by-step instructions for each test.

Data-Driven Framework: Input and output values are read from data files.

Hybrid Framework: A hybrid framework combines two or more of the approaches above, most commonly data-driven and keyword-driven, to take advantage of their respective strengths.
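The keyword-driven idea can be sketched in a few lines: a table of (keyword, arguments) rows drives the application through a small dispatcher. The keyword names and the fake application below are illustrative only, not part of any particular tool:

```python
# Each test step is a (keyword, *arguments) row, as it would appear in the table.

class FakeApp:
    """Stand-in for the application under test."""
    def __init__(self):
        self.logged_in = False
        self.cart = []

    def login(self, user, password):
        self.logged_in = (password == "secret")   # stand-in for real authentication

    def add_to_cart(self, item):
        self.cart.append(item)

def run_keyword_test(app, table):
    """Drive the application from a table of keyword rows."""
    keywords = {
        "login": app.login,
        "add_to_cart": app.add_to_cart,
    }
    for keyword, *args in table:
        keywords[keyword](*args)

# the "data table": readable by non-programmers, like a manual test case
test_table = [
    ("login", "user1", "secret"),
    ("add_to_cart", "book"),
]

app = FakeApp()
run_keyword_test(app, test_table)
print(app.logged_in, app.cart)   # True ['book']
```

The dispatcher is the only tool-specific code; new tests are added by writing new table rows, which is what makes keyword-driven suites maintainable by non-programmers.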