Background
Large concurrent test tasks (more than 6 million virtual users) are common in performance testing, and a single pressure-source (load-generation) node can hardly support a test scenario of that scale. Commercial performance-testing tools are expensive, while open-source testing tools have shortcomings when it comes to complex service scenarios: they require manual modification of test data, test scripts and the like, which is laborious and error-prone.
The test tool Jmeter provides a remote-operation function: multiple remote engines can be controlled from a single Jmeter client, so that a large load on the server under test can be simulated. In theory, one Jmeter client instance can control any number of remote Jmeter instances and collect all of their data. The remote-operation function works as follows:
saving the test script on the local machine;
managing multiple Jmeter engines from a single machine;
the client is responsible for sending the content (requests) of the script to all servers;
all nodes run the same test plan, and each node runs the complete plan. Moreover, the remote-operation mode consumes more resources than running the same number of non-GUI tests independently: if too many node instances are used, the Jmeter client may be overloaded. In addition, for parameterized test scenarios, since every node runs the same test plan, the test data files must be prepared under the same path on every server; under that constraint only one remote instance can run on each server, which greatly increases the test-environment requirements.
In Jmeter's remote-operation mode, the Jmeter instance on each node must be started manually in Jmeter server mode before the test executes. For a test plan that references test data, the data file must be split by hand and then placed under the same path on each remote server, and in such a scenario only one remote instance can run on each server. If the number of nodes is too large, the resource and network overhead of the Jmeter client grows substantially, and the test results may be polluted to some extent.
During a Spring Festival red-envelope-grabbing performance test, a high-concurrency scenario (more than 50,000 virtual users) had to be simulated; a single node could not meet the requirement, and each account could be called only once and could not be reused. The approach taken at the time was to split the test data into several pieces by hand, place them under different paths on different servers, make several copies of Jmeter, and place each copy under the corresponding path under a distinct name. Finally, the test script was edited by hand into several near-identical copies (differing only in which test data they referenced), and a tester switched to each server in turn, started the Jmeter program and ran each script independently. The whole process was manual: if the test scenario changed, several test scripts had to be changed; if the test data changed, it had to be re-split by hand and copied again to the nodes on the different servers. A test scenario lasting 20 minutes typically required about half an hour of preparation; the time cost was very high, every script had to be completed by hand and was especially error-prone, and if any one script was wrong the test task failed and had to be restarted from the beginning, placing very high demands on the accuracy of manual editing.
Disclosure of Invention
In view of this, the present invention provides a distributed performance testing method, apparatus and computer-readable storage medium, which avoid overloading the test-program client in scenarios with many remote nodes and reduce the risk of contaminated test results.
Specifically, the invention discloses a distributed performance testing method comprising the following steps: analyzing prestored node distribution information to obtain the configuration information of each slave server, the configuration information comprising a network address and the number of nodes allocated to each slave server; splitting the total test data according to the configuration information of each slave server to obtain the sub-test data corresponding to each node, replacing the data references in the test script with the sub-test data corresponding to each node, and establishing the test script corresponding to each node; pushing the test program, the sub-test data corresponding to each node and the test script to the nodes of each slave server; and sending corresponding commands to the nodes of the slave servers according to the test task, each node executing the task and feeding back its execution result.
Further, the step of splitting the total test data to obtain the sub-test data corresponding to each node further includes: configuring a corresponding node identifier for each piece of sub-test data; and/or the step of replacing the data references in the test script with the sub-test data corresponding to each node further includes: configuring the corresponding node identifier for the test script corresponding to each node.
Further, the step of pushing the test program, the sub-test data corresponding to each node and the test script to the nodes of each slave server includes: copying the test program to the nodes of the slave servers; configuring a folder named with the corresponding node identifier for the test program of each node; and pushing the sub-test data and the test script corresponding to each node to that node.
Further, the test script is in XML format and the nodes searched are XML nodes; the distributed performance testing method further includes: using xmlstarlet to find all XML nodes that reference a test data file, and saving the referenced test data file names in an array to obtain a file array; and traversing the file array and executing the step of replacing the data references in the test script with the sub-test data corresponding to each node.
Further, the configuration information of each slave server is stored in a global array, each value of the array representing the configuration information of one slave server.
Further, the test program is Jmeter, and the distributed performance testing method further includes, after each node executes its task and feeds back the execution result: merging the fed-back results with a shell script, and calling a Jmeter plug-in to analyze the results, compute statistics and plot charts.
Further, the distributed performance testing method further includes, after each node executes its task and feeds back the execution result: storing the summary data of the test results in a CSV file, and writing a Java script that parses the plotted charts and the statistics CSV file and loads them into an HTML template for display.
The invention also discloses a distributed performance testing apparatus, comprising: an analysis unit, configured to analyze prestored node distribution information to obtain the configuration information of each slave server, the configuration information comprising a network address and the number of nodes allocated to each slave server; a configuration unit, configured to split the total test data according to the configuration information of each slave server to obtain the sub-test data corresponding to each node, replace the data references in the test script with the sub-test data corresponding to each node, establish the test script corresponding to each node, and push the test program, the sub-test data corresponding to each node and the test script to the nodes of each slave server; and a test control unit, configured to send corresponding commands to the nodes of the slave servers according to the test task, each node executing the task and feeding back its execution result.
Furthermore, the test script is in XML format and the nodes searched are XML nodes; the configuration unit uses the xmlstarlet tool to find all XML nodes that reference a test data file, save the referenced test data file names in an array to obtain a file array, traverse the file array, and execute the step of replacing the data references in the test script with the sub-test data corresponding to each node.
Further, the distributed performance testing apparatus further includes: a data processing unit, configured to merge the fed-back results with a shell script, and call a Jmeter plug-in to analyze the results, compute statistics and plot charts.
Further, the distributed performance testing apparatus further includes: a display unit, configured to store the summary data of the test results in a CSV file, and to write a Java script that parses the plotted charts and the statistics CSV file and loads them into an HTML template for display.
The invention also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the distributed performance testing method described above.
In the embodiments of the invention, a test task is automatically split and run synchronously on multiple nodes, meeting the requirements of highly concurrent test scenarios; labor cost is reduced and constraints such as working time and place are relaxed; and manual errors in data splitting and script editing are reduced, lowering risk.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
The following describes a distributed performance testing method, apparatus, and computer-readable storage medium in detail, in accordance with embodiments of the present invention, with reference to the accompanying drawings.
The first embodiment is as follows:
referring to fig. 1, a distributed performance testing method includes:
firstly, analyzing prestored node distribution information to obtain the configuration information of each slave server, wherein the configuration information comprises a network address and the number of nodes allocated to each slave server;
secondly, splitting the total test data according to the configuration information of each slave server to obtain the sub-test data corresponding to each node, replacing the data references in the test script with the sub-test data corresponding to each node, and establishing the test script corresponding to each node; and pushing the test program, the sub-test data corresponding to each node and the test script to the nodes of each slave server;
the step of pushing the test program, the sub-test data corresponding to each node, and the test script to the node of each slave server may specifically include:
copying the test program to the nodes of the slave servers;
configuring a folder named with the corresponding node identifier for the test program of each node;
and pushing the sub-test data and the test script corresponding to each node to that node.
And thirdly, sending corresponding commands to the nodes of the slave servers according to the test task, each node executing the task and feeding back its execution result.
Specifically, the step of splitting the total test data to obtain the sub-test data corresponding to each node may further include: configuring a corresponding node identifier for each piece of sub-test data. The step of replacing the data references in the test script with the sub-test data corresponding to each node may further include: configuring the corresponding node identifier for the test script corresponding to each node.
In addition, the configuration information for each slave server may be stored in a global array, with each value of the array representing configuration information for a slave server.
With this embodiment, one test task can be automatically split and run synchronously on multiple nodes, meeting the requirements of highly concurrent test scenarios; labor cost is reduced and constraints such as working time and place are relaxed; and manual errors in data splitting and script editing are reduced, lowering risk.
Example two:
referring to fig. 2, a distributed performance testing method includes:
step 201: splitting the test data file;
specifically: the node distribution information is stored in a file; a shell script reads the file, parses it and stores it in a global array. Each value of the array represents the configuration information of one server, including the IP address and the number of nodes allocated to it, and the total number of nodes is computed at the same time. The test data is then split evenly into that many parts, and a mark is appended to each part indicating which node the data belongs to.
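The step above can be sketched as a pair of shell functions. This is a minimal illustration, not the patented implementation: the `ip:count` line format of the distribution file, the file names and the `testdata_part_` prefix are all assumptions.

```shell
#!/bin/bash
# Sketch of step 201: parse the node distribution file and split the
# test data evenly, one part per node (mark = numeric suffix).

# Read "ip:count" lines into the global NODE_CONF array and total the nodes.
parse_node_conf() {
  NODE_CONF=()
  TOTAL_NODES=0
  local ip count
  while IFS=: read -r ip count; do
    NODE_CONF+=("$ip:$count")
    TOTAL_NODES=$((TOTAL_NODES + count))
  done < "$1"
}

# Split file $1 into $2 parts of roughly equal line counts; split -d
# numbers the outputs (testdata_part_00, _01, ...), which serves as the
# per-node mark described in the text.
split_test_data() {
  local lines per
  lines=$(wc -l < "$1")
  per=$(( (lines + $2 - 1) / $2 ))   # ceiling division
  split -l "$per" -d "$1" testdata_part_
}
```

For example, with a `nodes.conf` of `10.0.0.1:2` and `10.0.0.2:1`, `parse_node_conf` yields three nodes in total, and a nine-line data file is split into three three-line parts.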
Step 203: replacing the test script;
specifically: xmlstarlet is used to find all XML nodes that reference the test data file, and the referenced file names are saved in an array. The file array is traversed, the corresponding data references in the Jmeter test script are replaced with the test data files split in the previous step, and the result is saved as multiple copies of the test script. A mark (in one-to-one correspondence with the marks on the test data files from the previous step) is appended to each copy, indicating which node it is for.
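A sketch of this step, assuming the standard Jmeter `.jmx` layout in which a `CSVDataSet` element holds the data-file path in a `stringProp name="filename"` child; the script and output names are illustrative:

```shell
#!/bin/bash
# Sketch of step 203: locate data-file references with xmlstarlet and
# write one marked script copy per node pointing at that node's part file.

# Collect the referenced data-file names into the DATA_FILES array.
collect_data_refs() {
  mapfile -t DATA_FILES < <(
    xmlstarlet sel -t -v '//CSVDataSet/stringProp[@name="filename"]' -n "$1" \
      | sort -u | sed '/^$/d'
  )
}

# Copy script $1 for node mark $2 and rewrite its data reference to the
# matching part file produced when the data was split.
make_node_script() {
  local src=$1 idx=$2
  local copy="test_plan_node_${idx}.jmx"
  cp "$src" "$copy"
  xmlstarlet ed -L \
    -u '//CSVDataSet/stringProp[@name="filename"]' \
    -v "testdata_part_${idx}" "$copy"
}
```

The mark in the copy's name (`test_plan_node_00.jmx`) matches the suffix of the data part it references, which is the one-to-one correspondence the text describes.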
Step 205: deploying a pressure node;
specifically: the global array of node distribution information is traversed, the local Jmeter program is copied to the nodes of each server with the scp command, and a mark is appended to the folder name; the mark corresponds one-to-one with the marks used when splitting the test data files and replacing the test scripts.
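The deployment loop can be sketched as below. It is written as a dry run that prints the scp commands so they can be inspected (remove the `echo` to execute them); the remote user, paths and the `ip:count` array format are assumptions for illustration.

```shell
#!/bin/bash
# Sketch of step 205: walk the node distribution array and emit one scp
# command per node, appending the node mark to the remote folder name.

deploy_jmeter() {
  local entry ip count i idx=0
  for entry in "${NODE_CONF[@]}"; do
    ip=${entry%%:*}        # server address
    count=${entry##*:}     # nodes allocated to this server
    for ((i = 0; i < count; i++)); do
      # One marked Jmeter copy per node on this server (dry run).
      echo scp -r ./jmeter "user@${ip}:/opt/jmeter_node_${idx}"
      idx=$((idx + 1))
    done
  done
}
```

Because the mark `idx` advances across servers, every Jmeter copy on every server gets a unique folder name matching its data and script marks.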
Step 207: pushing a test script;
specifically: the global array of node distribution information is traversed, and the edited test script copies and the split test data files are distributed to their respective nodes with the scp command.
Step 209: a management control node;
specifically: when the test task is executed, the script traverses the node array and sends the corresponding commands via ssh to control (start/stop) each node. After the program starts successfully, the corresponding Jmeter node's information appears in each server's process list, so whether startup succeeded can be judged by querying and filtering with commands such as ps and grep. Meanwhile, the running state of the program is monitored through distinctive information in the Jmeter threads. All of this can be implemented on the master server with shell scripts.
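The ps/grep liveness check described above can be sketched as follows. The `run_remote` wrapper is a hypothetical helper that falls back to a local shell for `localhost`, so the check can be exercised without a remote host; host names and process patterns are illustrative.

```shell
#!/bin/bash
# Sketch of step 209's monitoring: filter the (remote) process list for a
# pattern identifying the Jmeter node, as the text describes.

run_remote() {            # run_remote <host> <command string>
  local host=$1; shift
  if [ "$host" = "localhost" ]; then
    bash -c "$*"          # local fallback, useful for testing
  else
    ssh "$host" "$@"
  fi
}

# Succeeds if a process matching the pattern is running on the host.
# "grep -v grep" drops the grep pipeline itself from the candidates.
node_is_running() {       # node_is_running <host> <pattern>
  run_remote "$1" "ps -ef | grep -v grep | grep -q '$2'"
}
```

A start command would be sent the same way, e.g. `run_remote "$ip" "jmeter -n -t plan.jmx ..."`, followed by `node_is_running "$ip" jmeter` to confirm startup.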
Step 211: collecting results and performing statistical drawing;
specifically: as each node executes the test, the raw test-result data is stored in a specified directory. After all nodes finish, the results are collected back to the master server. The data is then merged with a shell script, and a Jmeter plug-in is called to analyze it, compute statistics and plot charts. The summary data of the test is stored in a CSV file, and both the file and the charts are stored in a dedicated directory.
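The merge part of this step can be sketched as one function, assuming (as is common for Jmeter CSV result files) that each per-node file begins with a header line that must not be duplicated; file names are illustrative.

```shell
#!/bin/bash
# Sketch of step 211's merge: concatenate per-node CSV result files,
# keeping the header of the first file only, before the merged file is
# handed to the reporting plug-in.

merge_results() {         # merge_results <output> <input...>
  local out=$1; shift
  head -n 1 "$1" > "$out"             # header from the first file
  local f
  for f in "$@"; do
    tail -n +2 "$f" >> "$out"         # data rows from every file
  done
}
```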
Step 213: compiling a test report;
specifically: a Java script is written to parse the statistics CSV file and load it into an HTML template for display, and the plotted charts are loaded into the template as well, for testers to view.
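The text performs this templating with a Java script; the substitution idea itself can be illustrated in shell: render the summary CSV's rows (header included) as an HTML table and splice it into a template at a placeholder. The `@TABLE@` placeholder, template layout and file names are assumptions.

```shell
#!/bin/bash
# Sketch of step 213's idea: turn the statistics CSV into an HTML table
# and substitute it into a report template.

render_report() {   # render_report <summary.csv> <template.html> <out.html>
  local rows="" line
  while IFS= read -r line; do
    # Each CSV line becomes a <tr>; commas become cell boundaries.
    rows+="<tr><td>${line//,/</td><td>}</td></tr>"
  done < "$1"
  # Replace the @TABLE@ placeholder in the template with the rendered rows.
  sed "s|@TABLE@|${rows}|" "$2" > "$3"
}
```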
In this embodiment, the test environment is configured by the user; the script automatically identifies the test-environment requirements and distributes the Jmeter program to the specified servers. The test data is split automatically, and the script automatically searches the test script for keywords and replaces the data-file references. During execution the test task is controllable in real time and reports its running state at any time; and the test results are automatically collected, merged and analyzed, producing an intuitive, readable test report.
Example three:
referring to fig. 3, a distributed performance testing apparatus includes:
an analysis unit, configured to analyze prestored node distribution information to obtain the configuration information of each slave server, wherein the configuration information comprises a network address and the number of nodes allocated to each slave server;
a configuration unit, configured to split the total test data according to the configuration information of each slave server to obtain the sub-test data corresponding to each node, replace the data references in the test script with the sub-test data corresponding to each node, establish the test script corresponding to each node, and push the test program, the sub-test data corresponding to each node and the test script to the nodes of each slave server;
and a test control unit, configured to send corresponding commands to the nodes of the slave servers according to the test task, each node executing the task and feeding back its execution result.
In specific operation, the test script is in XML format and the nodes searched are XML nodes;
the configuration unit uses the xmlstarlet tool to find all XML nodes that reference a test data file, save the referenced test data file names in an array to obtain a file array, traverse the file array, and execute the step of replacing the data references in the test script with the sub-test data corresponding to each node.
Preferably, the distributed performance testing apparatus further includes: a data processing unit, configured to merge the fed-back results with a shell script, and call a Jmeter plug-in to analyze the results, compute statistics and plot charts.
Preferably, the distributed performance testing apparatus further includes: a display unit, configured to store the summary data of the test results in a CSV file, and to write a Java script that parses the plotted charts and the statistics CSV file and loads them into an HTML template for display.
The principle and working process of the distributed performance testing apparatus are briefly described as follows. The apparatus is implemented with shell scripts on a Linux platform. Shell scripts are simple and easy to use, well suited to handling objects such as files and directories, and can deal with otherwise tedious chores quickly and simply. The scheme exploits the flexibility of the shell by combining the small utilities available on a Linux platform.
First, since a Jmeter test script is an XML-format file, the xmlstarlet tool is used to edit the script, and the utilities shipped with the Linux platform, such as wc, head and tail, can be flexibly combined to complete the work.
Second, for tasks such as copying and distributing files, the SSH service is enabled on the Linux platform, and scp is used to deploy the test environment and distribute the test-plan scripts and test data files. The client controls each node (start/stop/monitor) by sending the relevant commands over SSH, thereby managing the Jmeter nodes.
Finally, the collection of results and the writing of the test report are completed by means of a Jmeter plug-in set: a customization-oriented suite independent of Apache Jmeter that offers excellent chart drawing and loading and provides a rich function library. The collected result data is consolidated, summarized and plotted by invoking the plug-in's jar package from a shell script, and the resulting statistics are finally presented in a templated HTML file.
The whole process of this embodiment involves no manual intervention, which reduces labor intensity, saves labor and time costs, and avoids work accidents caused by manual editing. Multiple Jmeter instances can run on one server, greatly reducing the test-environment requirements. Each Jmeter instance runs independently, with no data synchronization during the test run and all nodes as peers, so the problem of Jmeter-client overload with many remote nodes is avoided and the risk of polluted test results is reduced.
The present invention also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program performs the distributed performance testing method described above. The method's technical effects apply correspondingly and are not described again here.
It will be understood by those skilled in the art that all or part of the steps/units/modules for implementing the embodiments may be implemented by hardware associated with program instructions, and the program may be stored in a computer-readable storage medium, and when executed, the program performs the steps corresponding to the units in the embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.