PushToTest built TestMaker to operate tests, monitor the systems as the test operates, and correlate the results into a set of metrics and charts for root cause analysis and remediation.
This section covers the following topics:
PushToTest designed TestMaker to operate a test and analyze the results for performance bottlenecks and remediation. PushToTest presents results depending on the type of test: functional tests show a check list of step operations, load and performance tests have a choice of 350 or more charts and statistics, and business service monitors show a dashboard of monitor statistics and service up/down status. This document will focus on the results analysis possible from load and performance testing.
As TestMaker operates a load test scenario the Controller Panel displays a live view of the Scalability Index for the target service. TestMaker polls the TestNodes approximately every 10 seconds and updates the Real Time Scalability Index.
The Scalability Index shows the average throughput in Transactions Per Second (TPS), as measured by the TestNodes, at each level of concurrently running simulated users (CRs). For multiple-dimension tests, the Scalability Index also shows TPS at each Data Index level. The following shows the live results chart for a load test scenario reporting 6.35 TPS at the first level of concurrent virtual users (CVUs) and 8.7 TPS at the second:
The higher the TPS the better, and the more TPS at each level of concurrent users the better. In a system with linear scalability, 4 concurrently running users should deliver 4 times the TPS of 1 concurrently running user.
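The linear-scalability rule of thumb above can be expressed as a quick calculation. The following stdlib-only Python sketch (using the illustrative TPS figures from this section, not output from a real TestMaker run) computes how close each CVU level comes to perfectly linear scaling:

```python
# Sketch: checking how close measured throughput is to linear scaling.
# The TPS figures below are illustrative, not from a real test run.
def scaling_efficiency(tps_by_level):
    """Return (users, ratio) pairs; a ratio of 1.0 means perfectly linear
    scaling relative to the first CVU level."""
    baseline_users, baseline_tps = tps_by_level[0]
    per_user_baseline = baseline_tps / baseline_users
    return [
        (users, tps / users / per_user_baseline)
        for users, tps in tps_by_level
    ]

# (concurrent users, measured TPS) pairs
levels = [(1, 6.35), (2, 8.7), (4, 12.0)]
for users, efficiency in scaling_efficiency(levels):
    print(f"{users} CVUs: {efficiency:.2f} of linear")
```

A ratio well below 1.0 at higher CVU levels is the signature of a scalability bottleneck worth investigating with the Resource Monitor Charts described later in this section.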
The Scalability Index appears as the default chart in load and performance tests because it helps identify server, network, and memory needs, it identifies the population of users that the application or service can handle, and it surfaces scalability and performance issues before your users and customers experience them. Click here to learn about the Scalability Index.
The live results chart provides several functions accessible by right-clicking the chart. A pop-up menu appears with options to save the chart to a PNG graphic file, zoom in and out, print, and set preferences for the chart, including fonts, axis settings, and titles.
TestMaker test operations generate a great deal of useful data. TestMaker 5.0 introduced standard charts to turn that data into actionable knowledge, including the Scalability Index, Transaction Distribution Charts, and Resource Monitor Charts. TestMaker 5.0 also introduced a result log archiving system and a Performance Comparison Utility to generate charts that compare results between operations of test scenarios. TestMaker 5.2 introduced two new results analysis features:
1. Transaction logs include the "step" times for the <run> operations within a <transaction>
2. A new charting function summarizes logged results data in new and flexible ways
TestMaker 5 creates a great deal of data that can be put into a chart. Here are the base data series:
1. Test Scenario (name)
2. Test Node (name)
3. Concurrent User levels (CRs)
4. Data Index (Message size)
5. Transaction (Use Case)
6. Sequence name
7. Step (run) name
8. Pass/Fail status
9. Error Name and Details
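One way to picture the base data series is as the fields of a single transaction log record. The following Python sketch uses illustrative field names and values; TestMaker's actual log schema may differ:

```python
# Sketch: one transaction log record carrying the base data series above.
# Field names and values are illustrative, not TestMaker's real schema.
record = {
    "test_scenario": "BrewBiz Data-Driven Load Test",
    "test_node": "localhost",
    "concurrent_users": 2,        # CR level
    "data_index": 1024,           # message size
    "transaction": "Checkout",    # use case
    "sequence": "AddToCart",
    "step": "POST /cart",
    "status": "pass",
    "error": None,                # error name and details when status == "fail"
}
print(record["transaction"], record["step"], record["status"])
```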
By default, TestMaker stores the above data into a results directory in the default directory defined in the TestScenario. For example, consider the following TestScenario:
<name>BrewBiz Data-Driven Load Test</name>
<logs showsummary="true" step="true" summaryreport="true"/>
After operating the above TestScenario, look at the TestMaker home directory and then in example_agents/WebRecordPlayback/LoadTest for a Results directory. TestMaker creates a new directory in the Results directory to store the results of each operation of the load and performance test.
See Java Enterprise Application Monitoring with Glassbox for details on performance bottleneck and functional issue monitoring of Java enterprise applications.
TestMaker records log entries for tests authored in one of the integrated record/playback tools and tests authored in a scripting language. For tests authored in TestGen4Web, Selenium, and soapUI, the TestMaker script runner logs the time it takes to process each transaction and each step within the transaction. For example, if your Selenium test has 10 commands (such as load a page, type a value, click a submit button, and more) then TestMaker logs the time it takes to process each of the 10 commands.
For tests you author in Java, Jython, and the other supported scripting languages, TestMaker logs the transaction time for the <usecase>, the time it takes to execute each <run> method, and any user generated log entries.
The following illustrates transaction and step logging for a test written in Jython:
The above setting tells TestMaker to record step timing values when running this use case. For instance, the ScriptRunner will add <step> entries to the transaction log for each recorded GET / POST operation as shown below:
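TestMaker's actual Jython logging calls are not reproduced here. As a conceptual, stdlib-only Python sketch, per-step timing amounts to recording a duration for each step and a total for the enclosing transaction; the step names and sleep calls below are placeholders:

```python
# Conceptual sketch only: TestMaker's real Jython API is not shown here.
# This illustrates the idea of logging a transaction time together with
# the time of each step inside it.
import time

def timed_steps(steps):
    """Run each (name, callable) step; return per-step durations and the total."""
    log = []
    start = time.monotonic()
    for name, action in steps:
        step_start = time.monotonic()
        action()
        log.append((name, time.monotonic() - step_start))
    total = time.monotonic() - start
    return log, total

# Placeholder steps standing in for recorded GET/POST operations
steps = [
    ("GET /index.html", lambda: time.sleep(0.01)),
    ("POST /login",     lambda: time.sleep(0.01)),
]
log, total = timed_steps(steps)
for name, duration in log:
    print(f"<step name={name!r} duration={duration:.3f}s/>")
print(f"transaction total: {total:.3f}s")
```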
Click the + icon in the controller panel to tell TestMaker that you want to add a new chart definition to the current TestScenario. The following dialog box appears, ready for you to enter a new chart definition:
Enter a Chart Name, choose the Y and X Axis settings, graph type, and other options. Then click the Save button to save this chart setting to the current TestScenario. TestMaker will offer to re-run the charts for this TestScenario and the charts will appear in the controller panel.
The Chart Templates repository comes with a dozen or more preformed chart definitions for you to immediately add to your TestScenario. Click one of the Chart Templates, make any desired changes to the settings, and click the Save To TestScenario button.
Use the same Chart Settings dialog to save a chart definition to the TestMaker Chart Templates repository for reuse in any TestScenario. With a chart selected, click the Save To Chart Templates button to store the chart definition to the repository.
PushToTest TestMaker automatically runs the chart definitions from the TestScenario at the end of a load test. TestMaker saves the generated charts in PNG format to the results directory, where they are archived automatically, and the Summary (HTML) report includes them. There will be times, however, when you need to run a new report against existing transaction logs. For this, the PushToTest Tools drop-down menu includes a Results Analysis Chart Generator command.
The Results Analysis Chart Generator presents the Chart Settings user interface. Click the Edit button to choose the Results directory TestMaker will use as the input, then click Create Graphs to generate the results.
TestMaker accepts selection of multiple Results transaction log file directories to compare the results of one test to other tests.
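The kind of run-to-run comparison this enables can be sketched in a few lines. The durations and the percent-change metric below are illustrative assumptions; TestMaker's real comparison charts are produced by the Performance Comparison Utility described earlier:

```python
# Sketch: comparing average transaction duration between two result sets.
# The duration values and log layout are assumptions for illustration.
def average_duration(durations):
    return sum(durations) / len(durations)

def percent_change(baseline, candidate):
    """Positive means the candidate run was slower than the baseline."""
    return (candidate - baseline) / baseline * 100

baseline_run = [0.82, 0.79, 0.91]   # seconds per transaction, run 1
candidate_run = [0.95, 1.02, 0.99]  # seconds per transaction, run 2

change = percent_change(average_duration(baseline_run),
                        average_duration(candidate_run))
print(f"average duration changed by {change:+.1f}%")
```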
The TestMaker Results Analysis engine is capable of delivering hundreds of different charts. Here are recipes for a few of our favorite reports.
Shows the CPU, network, and memory utilization of the application host or one of the TestNodes for the duration of each test use case. In the following chart the CPU of the TestNode (localhost) was pegged at 100% except for 20 seconds into the test.
Y Axis: System Resources Percentage
X Axis: Time
Series: Net, CPU, Mem
Graph Type: Lines
Shows the average time it takes to accomplish each step when spread over 10 equal periods of the test time. Each bar shows the total average time to accomplish the transaction. Each bar is composed of the average step times.
Y Axis: Duration Average
X Axis: Periods
Graph Type: Stacked Bars
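The period-bucketing behind the stacked-bar recipe above can be sketched as follows. The (timestamp, step, duration) tuples are illustrative data, not real TestMaker log output:

```python
# Sketch: bucketing step durations into 10 equal periods of a test run,
# averaging each step's duration within a period (one stacked-bar segment).
def period_averages(samples, test_length, periods=10):
    """samples: (timestamp, step_name, duration) tuples.
    Returns, per period, a dict of step name -> average duration."""
    totals = [{} for _ in range(periods)]
    counts = [{} for _ in range(periods)]
    for ts, step, duration in samples:
        i = min(int(ts / test_length * periods), periods - 1)
        totals[i][step] = totals[i].get(step, 0.0) + duration
        counts[i][step] = counts[i].get(step, 0) + 1
    return [
        {step: total / counts[i][step] for step, total in bucket.items()}
        for i, bucket in enumerate(totals)
    ]

samples = [
    (1.0, "GET /index", 0.20), (2.0, "POST /login", 0.40),
    (55.0, "GET /index", 0.30), (58.0, "POST /login", 0.50),
]
for i, avg in enumerate(period_averages(samples, test_length=60.0)):
    if avg:
        print(f"period {i}: {avg}")
```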
Shows the average total time it takes to complete a use case at each level of concurrently running simulated users (CRs).
Y Axis: Duration Average
X Axis: Periods
Graph Type: Stacked Bars
TestMaker offers a convenient way to save a summary of all the charts from a TestScenario to an HTML-formatted Summary Results Report. Enable the Summary Results Report by adding the following to a TestScenario:
<logs showsummary="true" step="true" summaryreport="true"/>
- summaryreport when set to true will generate the Summary Results Report.
- showsummary when set to true will automatically open the Summary Results Report in your default Web browser after TestMaker executes the TestScenario.
The following is an example of a Summary Results Report.
TestMaker creates the Summary Results Report in the results directory of the default directory. The following illustrates the contents of the results directory for the WebRecordPlayback LoadTest in the example_agents tutorial.
Additional documentation, product downloads, and updates are available at www.PushToTest.com. While the PushToTest TestMaker software is distributed under an open-source license, the documentation remains (c) 2008 PushToTest. All rights reserved. PushToTest is a trademark of the PushToTest company.