CN108038013B - Distributed performance test method and device and computer readable storage medium

Info

Publication number
CN108038013B
CN108038013B
Authority
CN
China
Prior art keywords
test
node
test data
sub
script
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711236780.XA
Other languages
Chinese (zh)
Other versions
CN108038013A (en)
Inventor
王福山 (Wang Fushan)
徐静 (Xu Jing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Haier Technology Co Ltd
Haier Uplus Intelligent Technology Beijing Co Ltd
Original Assignee
Qingdao Haier Technology Co Ltd
Haier Uplus Intelligent Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Haier Technology Co Ltd, Haier Uplus Intelligent Technology Beijing Co Ltd filed Critical Qingdao Haier Technology Co Ltd
Priority to CN201711236780.XA
Publication of CN108038013A
Application granted
Publication of CN108038013B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/0703 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F 11/0706 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
    • G06F 11/0709 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems

Abstract

The invention discloses a distributed performance testing method, an apparatus, and a computer-readable storage medium. The distributed performance testing method comprises the following steps: parsing prestored node distribution information to obtain the configuration information of each slave server, the configuration information comprising a network address and the number of nodes allocated to each slave server; splitting the total test data according to the configuration information of each slave server to obtain the sub-test data corresponding to each node, replacing the data references in the test script with the sub-test data corresponding to each node, and establishing the test script corresponding to each node; pushing the test program, the sub-test data corresponding to each node, and the test script to the nodes of each slave server; and sending corresponding commands to the nodes of each slave server according to the test task, each node executing the task and feeding back the execution result. Implementing the invention can automatically split a test task so that it is carried out synchronously on a plurality of nodes, thereby meeting the requirements of large-concurrency test scenarios.

Description

Distributed performance test method and device and computer readable storage medium
Technical Field
The present invention relates to the field of testing, and in particular, to a distributed performance testing method and apparatus, and a computer-readable storage medium.
Background
Large concurrent test tasks (more than 60,000 virtual users) are often encountered in performance testing, and a single pressure-source node can hardly support such a scenario. Commercial performance testing tools are expensive, while open-source testing tools have certain shortcomings in realizing complex service scenarios: they require manual modification of test data, test scripts and the like, which is cumbersome, labor-intensive, and error-prone.
The test tool JMeter has a remote-operation function: multiple remote engines can be controlled from a single JMeter client, so that a large load on the server can be simulated. In theory, one JMeter client instance can control any number of remote JMeter instances and collect all data from them. The remote-operation function works as follows:
the test script is saved on the local machine;
multiple JMeter engines are managed from a single machine;
the client is responsible for sending the content (requests) of the script to all servers;
all nodes run the same test plan, and each node runs the complete test plan. Moreover, the remote-operation mode occupies more resources than independently running the same number of non-GUI tests; if too many node instances are used, the JMeter client can be overloaded. In addition, for parameterized test scenarios, since every node runs the same test plan, the test data files must be prepared under the same path on every server. In that situation only one remote instance can be run per server, which greatly increases the demands on the test environment.
In JMeter's remote-operation mode, before a test is executed, the JMeter instance on each node must be started manually in jmeter-server mode. For a test plan that references test data, the data file must be split manually and placed under the same path on each remote server, and in such a scenario only one remote instance can be run on each server. If the number of nodes is too large, the resource overhead and network overhead of the JMeter client increase greatly, and the test results may be polluted to some extent.
During a Spring Festival red-envelope-grabbing performance test task, a high-concurrency scenario (more than 50,000 users) had to be simulated; a single node could not meet the test requirement, and each account could be called only once and not reused. The measure adopted at the time was to split the test data into several pieces manually, place them under different paths on different servers, copy JMeter several times, and place the renamed copies under the corresponding paths. Finally, the test script was manually modified into several similar copies (each referencing different test data); testers manually switched to the different servers, started the JMeter programs, and ran the respective scripts independently. The whole process was manual: if the test scenario changed, several test scripts had to be changed, and if the test data changed, it had to be re-split manually and copied to the nodes of the different servers. A test scenario lasting 20 minutes usually required about half an hour of preparation; the time cost was very high, every script had to be completed manually and was especially error-prone, and if any one script went wrong, the whole test task failed and had to start from the beginning, placing very high demands on the accuracy of manual editing.
Disclosure of Invention
In view of this, the present invention provides a distributed performance testing method, apparatus, and computer-readable storage medium, which avoid overloading the test-program client in scenarios with many remote nodes and reduce the risk of contaminated test results.
Specifically, the invention discloses a distributed performance testing method, which comprises the following steps: parsing prestored node distribution information to obtain the configuration information of each slave server, the configuration information comprising a network address and the number of nodes allocated to each slave server; splitting the total test data according to the configuration information of each slave server to obtain the sub-test data corresponding to each node, replacing the data references in the test script with the sub-test data corresponding to each node, and establishing the test script corresponding to each node; pushing the test program, the sub-test data corresponding to each node, and the test script to the nodes of each slave server; and sending corresponding commands to the nodes of each slave server according to the test task, each node executing the task and feeding back the execution result.
Further, the step of splitting the total test data to obtain the sub-test data corresponding to each node further includes: configuring corresponding node identification for each sub-test data; or/and the step of replacing the data reference in the test script with the sub-test data corresponding to each node further comprises: and configuring the corresponding node identification for the test script corresponding to each node.
Further, the step of pushing the test program, the sub-test data corresponding to each node, and the test script to the nodes of each slave server includes: copying the test program to the nodes of each slave server; configuring a folder with the corresponding node identification for the test program of each node; and pushing the sub-test data and the test script corresponding to each node to the corresponding node.
Further, the test script adopts an XML format whose nodes are XML nodes, and the distributed performance test method further includes: using xmlstarlet to search for all XML nodes that reference the test data file, and saving the referenced test data file names in an array to obtain a file array; and traversing the file array and executing the step of replacing the data references in the test script with the sub-test data corresponding to each node.
Further, the configuration information of each slave server is stored in a global array, each value of the array representing the configuration information of one slave server.
Further, the test program is JMeter, and the distributed performance test method further includes, after each node executes its task and feeds back the execution result: combining the feedback results by using a shell script, calling a JMeter plug-in to analyze and count the feedback results, and drawing charts.
Further, the distributed performance testing method further includes, after each node executes its task and feeds back the execution result: storing the summary data of the test results in a CSV file, writing a Java program, parsing the statistics CSV file, and loading it, together with the drawn charts, into an HTML template for display.
The invention discloses a distributed performance testing device, which comprises: the analysis unit is used for analyzing and obtaining the configuration information of each slave server according to the prestored node distribution information, wherein the configuration information comprises a network address and the number of nodes distributed by each slave server; the configuration unit is used for splitting the total test data according to the configuration information of each slave server to obtain sub-test data corresponding to each node, replacing the data reference in the test script with the sub-test data corresponding to each node, and establishing the test script corresponding to each node; pushing the test program, the sub-test data corresponding to each node and the test script to the nodes of each slave server; and the test control unit is used for sending corresponding commands to the nodes of the slave servers according to the test tasks, executing the tasks by the nodes and feeding back the execution results.
Furthermore, the test script adopts an XML format whose nodes are XML nodes; the configuration unit comprises an xmlstarlet tool for searching all XML nodes that reference the test data file and saving the referenced test data file names in an array to obtain a file array; the file array is traversed, and the step of replacing the data references in the test script with the sub-test data corresponding to each node is executed.
Further, the distributed performance testing apparatus further includes: a data processing unit for merging the feedback results by using a shell script, calling a JMeter plug-in to analyze and count the feedback results, and drawing charts.
Further, the distributed performance testing apparatus further includes: a display unit for storing the summary data of the test results in a CSV file, writing a Java program, parsing the statistics CSV file, and loading it, together with the drawn charts, into an HTML template for display.
The invention also discloses a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the distributed performance testing method described above.
In the embodiments of the invention, a test task is automatically split and performed synchronously on a plurality of nodes, meeting the requirements of large-concurrency test scenarios; labor cost is reduced, and limitations of conditions such as working time and place are eased; and manual errors in data splitting and script editing are reduced, lowering risk.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention. In the drawings, like reference numerals are used to indicate like elements. The drawings in the following description are directed to some, but not all embodiments of the invention. For a person skilled in the art, other figures can be derived from these figures without inventive effort.
Fig. 1 is a flowchart of a distributed performance testing method according to an embodiment of the present invention;
Fig. 2 is a flowchart of another distributed performance testing method according to an embodiment of the present invention;
Fig. 3 is a block diagram of a distributed performance testing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
The following describes a distributed performance testing method, apparatus, and computer-readable storage medium in detail, in accordance with embodiments of the present invention, with reference to the accompanying drawings.
Embodiment one:
Referring to Fig. 1, a distributed performance testing method includes:
Firstly, parsing prestored node distribution information to obtain the configuration information of each slave server, the configuration information comprising a network address and the number of nodes allocated to each slave server;
Secondly, splitting the total test data according to the configuration information of each slave server to obtain the sub-test data corresponding to each node, replacing the data references in the test script with the sub-test data corresponding to each node, and establishing the test script corresponding to each node; and pushing the test program, the sub-test data corresponding to each node, and the test script to the nodes of each slave server;
the step of pushing the test program, the sub-test data corresponding to each node, and the test script to the node of each slave server may specifically include:
copying the test program to the nodes of the slave servers;
configuring a folder with the corresponding node identification for the test program of each node;
and pushing the sub-test data and the test script corresponding to each node to the corresponding node.
Thirdly, according to the test task, sending corresponding commands to the nodes of each slave server, each node executing the task and feeding back the execution result.
Specifically, the step of splitting the total test data to obtain the sub-test data corresponding to each node may further include: configuring a corresponding node identification for each piece of sub-test data. The step of replacing the data references in the test script with the sub-test data corresponding to each node may further include: configuring the corresponding node identification for the test script corresponding to each node.
In addition, the configuration information for each slave server may be stored in a global array, with each value of the array representing configuration information for a slave server.
Through this embodiment, one test task can be automatically split and performed synchronously on a plurality of nodes, meeting the requirements of large-concurrency test scenarios; labor cost is reduced, and limitations of conditions such as working time and place are eased; and manual errors in data splitting and script editing are reduced, lowering risk.
Embodiment two:
Referring to Fig. 2, a distributed performance testing method includes:
step 201: splitting the test data file;
Specifically: the node distribution information is stored in a file; the shell script reads and parses this file and stores the result in a global array. Each value of the array represents the configuration information of one server, including its IP and the number of nodes allocated to it, and the total number of nodes is calculated at the same time. The test data is then split evenly into one piece per node, and a mark is appended to each piece indicating which node it is used by.
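As an illustration only, a minimal shell sketch of this splitting step is shown below; the file names (nodes.conf, testdata.csv) and the one-"ip node_count"-pair-per-line format are assumptions for the sketch, not part of the patent:

```bash
#!/bin/bash
# Sketch of step 201, assuming node distribution info lives in nodes.conf
# as one "ip node_count" pair per line (e.g. "192.168.1.10 4") and the
# test data is a line-oriented file testdata.csv. Both names are assumed.
declare -a SERVERS
total_nodes=0
while read -r ip count; do
    SERVERS+=("$ip $count")              # one array value per slave server
    total_nodes=$((total_nodes + count))
done < nodes.conf

# Split the total test data evenly into one piece per node.
lines=$(wc -l < testdata.csv)
per_node=$(( (lines + total_nodes - 1) / total_nodes ))
split -l "$per_node" -d -a 3 testdata.csv piece_

# Mark each piece with the node it is used by: testdata_node_1.csv, ...
i=1
for f in piece_*; do
    mv "$f" "testdata_node_${i}.csv"
    i=$((i + 1))
done
```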
Step 203: replacing the test script;
Specifically: xmlstarlet is used to search for all XML nodes that reference the test data file, and the referenced test data file names are saved in an array. The file array is traversed, and the corresponding data references in the JMeter test script are replaced with the test data files split in the previous step; the result is saved as a plurality of test script copies, each with a mark appended (corresponding one-to-one with the marks on the test data files of the previous step) indicating which node it is used by.
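A hedged sketch of this replacement step follows; it assumes the data reference lives in JMeter's CSV Data Set Config element (a CSVDataSet/stringProp node in the .jmx file), assumes a single referenced data file for brevity, and reuses total_nodes from the step 201 sketch via a hypothetical env.sh helper:

```bash
#!/bin/bash
# Sketch of step 203: find the data-file references in the JMX script
# and rewrite them per node. The XPath targets JMeter's CSV Data Set
# Config element; adjust it if the plan references data elsewhere.
source ./env.sh   # assumed helper defining total_nodes as in step 201

# Save the referenced test data file names in an array (the file array).
mapfile -t FILES < <(xmlstarlet sel -t -v \
    "//CSVDataSet/stringProp[@name='filename']" -n plan.jmx)

# Traverse the file array and make one marked script copy per node,
# pointing each copy's data reference at that node's split piece.
for ((i = 1; i <= total_nodes; i++)); do
    cp plan.jmx "plan_node_${i}.jmx"
    for f in "${FILES[@]}"; do
        xmlstarlet ed -L -u \
            "//CSVDataSet/stringProp[@name='filename'][.='${f}']" \
            -v "testdata_node_${i}.csv" "plan_node_${i}.jmx"
    done
done
```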
Step 205: deploying a pressure node;
Specifically: the global array of node distribution information is traversed, the scp command is used to copy the local JMeter program to the nodes of each server, and a mark is added to each folder name, in one-to-one correspondence with the marks used when splitting the test data files and replacing the test scripts.
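A minimal sketch of this deployment step, assuming password-free SSH, a tester account, and an /opt install path (all assumptions):

```bash
#!/bin/bash
# Sketch of step 205: copy the local JMeter program to every node,
# appending the node mark to the folder name.
source ./env.sh   # assumed helper defining the SERVERS array from step 201
node_id=1
for entry in "${SERVERS[@]}"; do
    ip=${entry%% *}
    count=${entry##* }
    for ((n = 0; n < count; n++)); do
        scp -r ./apache-jmeter "tester@${ip}:/opt/jmeter_node_${node_id}"
        node_id=$((node_id + 1))
    done
done
```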
Step 207: pushing a test script;
Specifically: the global array of node distribution information is traversed, and the scp command is used to distribute the edited test script copies and the split test data files to the directory of each node.
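The push step can reuse the same traversal; a sketch under the same assumptions as the step 205 sketch:

```bash
#!/bin/bash
# Sketch of step 207: distribute each node's script copy and data slice
# into the marked folder created in step 205.
source ./env.sh
node_id=1
for entry in "${SERVERS[@]}"; do
    ip=${entry%% *}
    count=${entry##* }
    for ((n = 0; n < count; n++)); do
        scp "plan_node_${node_id}.jmx" "testdata_node_${node_id}.csv" \
            "tester@${ip}:/opt/jmeter_node_${node_id}/"
        node_id=$((node_id + 1))
    done
done
```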
Step 209: a management control node;
Specifically: when the test task is executed, the script traverses the array of all nodes and sends corresponding commands via ssh to control (start/stop) each node. After the program starts successfully, a corresponding JMeter process appears on each server, and successful start-up can be judged by querying and filtering with commands such as ps and grep. Meanwhile, the running state of the program is monitored through specific information in the JMeter thread. All of this can be implemented on the master server through shell scripts.
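A sketch of the start/verify part of this step: jmeter's -n/-t/-l flags are its standard non-GUI options, while the account name, paths, and env.sh helper remain assumptions:

```bash
#!/bin/bash
# Sketch of step 209: start each node over SSH in non-GUI mode, then
# verify start-up by filtering the remote process list with ps/grep.
source ./env.sh
node_id=1
for entry in "${SERVERS[@]}"; do
    ip=${entry%% *}
    count=${entry##* }
    for ((n = 0; n < count; n++)); do
        dir="/opt/jmeter_node_${node_id}"
        ssh "tester@${ip}" "cd ${dir} && nohup bin/jmeter -n \
            -t plan_node_${node_id}.jmx -l result_node_${node_id}.jtl \
            > jmeter.log 2>&1 &"
        # The [p] trick keeps grep from matching its own command line.
        if ssh "tester@${ip}" "ps -ef | grep '[p]lan_node_${node_id}.jmx'"; then
            echo "node ${node_id} on ${ip}: started"
        else
            echo "node ${node_id} on ${ip}: start FAILED" >&2
        fi
        node_id=$((node_id + 1))
    done
done
```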
Step 211: collecting results and performing statistical drawing;
Specifically: as each node executes the test, the raw test result data is stored in a specified directory. After all nodes have finished testing, the results are pulled back to the master control server. A shell script then merges the data, calls a JMeter plug-in to analyze and count it, and draws charts. The summary data of the test is stored in a CSV file, and both the file and the charts are stored in a dedicated directory.
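A sketch of collection and statistics: JMeterPluginsCMD is the command-line runner shipped with the jmeter-plugins project, and the plugin types shown are examples from that project; treat the exact tool location, plugin names, and paths as assumptions:

```bash
#!/bin/bash
# Sketch of step 211: pull raw results back, merge them, then call the
# JMeter plug-in's command-line tool for statistics (CSV) and a chart.
source ./env.sh
mkdir -p results
node_id=1
for entry in "${SERVERS[@]}"; do
    ip=${entry%% *}
    count=${entry##* }
    for ((n = 0; n < count; n++)); do
        scp "tester@${ip}:/opt/jmeter_node_${node_id}/result_node_${node_id}.jtl" results/
        node_id=$((node_id + 1))
    done
done

# Merge: keep the CSV header from the first file, append the rest
# (assumes JTL results were written in CSV format with a header line).
head -n 1 results/result_node_1.jtl > results/merged.jtl
for f in results/result_node_*.jtl; do
    tail -n +2 "$f" >> results/merged.jtl
done

# Summary statistics to CSV, response-time chart to PNG.
JMeterPluginsCMD.sh --generate-csv results/summary.csv \
    --input-jtl results/merged.jtl --plugin-type AggregateReport
JMeterPluginsCMD.sh --generate-png results/rt_over_time.png \
    --input-jtl results/merged.jtl --plugin-type ResponseTimesOverTime
```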
Step 213: compiling a test report;
Specifically: a Java program is written to parse the statistics CSV file and load it into an HTML template for display, and the drawn charts are loaded into the same template for testers to view.
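The patent performs this step with a Java program; to keep these sketches in one language, the same templating idea is shown in shell, purely as an illustration of loading the CSV and chart into an HTML page:

```bash
#!/bin/bash
# Sketch of step 213: render the statistics CSV and the drawn chart
# into a simple HTML report for testers to view.
{
    echo "<html><body><h1>Distributed performance test report</h1>"
    echo "<img src=\"rt_over_time.png\" alt=\"response time over time\"/>"
    echo "<table border=\"1\">"
    while IFS=',' read -r -a cols; do
        printf "<tr>"
        for c in "${cols[@]}"; do printf "<td>%s</td>" "$c"; done
        printf "</tr>\n"
    done < results/summary.csv
    echo "</table></body></html>"
} > results/report.html
```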
In this embodiment, the test environment is configured by the user; the script automatically identifies the requirements of the test environment and distributes the JMeter program to the specified servers; the test data is split automatically, and the script automatically searches for keywords and replaces the references to the data files; during execution the test task is controllable in real time and reports its running state at any time; and the test results are automatically retrieved, merged, and counted, and an intuitive, readable test report is output.
Embodiment three:
Referring to Fig. 3, a distributed performance testing apparatus includes:
the analysis unit is used for analyzing and obtaining the configuration information of each slave server according to the prestored node distribution information, wherein the configuration information comprises a network address and the number of nodes distributed by each slave server;
the configuration unit is used for splitting the total test data according to the configuration information of each slave server to obtain sub-test data corresponding to each node, replacing the data reference in the test script with the sub-test data corresponding to each node, and establishing the test script corresponding to each node; pushing the test program, the sub-test data corresponding to each node and the test script to the nodes of each slave server;
and the test control unit is used for sending corresponding commands to the nodes of the slave servers according to the test tasks, executing the tasks by the nodes and feeding back the execution results.
In specific operation, the test script adopts an XML format whose nodes are XML nodes;
the configuration unit comprises an xmlstarlet tool for searching all XML nodes that reference the test data file and saving the referenced test data file names in an array to obtain a file array; the file array is traversed, and the step of replacing the data references in the test script with the sub-test data corresponding to each node is executed.
Preferably, the distributed performance testing apparatus further includes: a data processing unit for merging the feedback results by using a shell script, calling a JMeter plug-in to analyze and count the feedback results, and drawing charts.
Preferably, the distributed performance testing apparatus further includes: a display unit for storing the summary data of the test results in a CSV file, writing a Java program, parsing the statistics CSV file, and loading it, together with the drawn charts, into an HTML template for display.
The principle and working process of the distributed performance testing device are briefly described as follows. The device is implemented with shell scripts on a Linux platform. Shell scripting is simple and easy to use, well suited to handling objects such as files and directories, and can handle complicated matters quickly in a simple way. The scheme is realized by combining the small tools of the Linux platform, exploiting the flexibility of the shell.
First, since the test script of JMeter is a file in XML format, the xmlstarlet tool is used to edit the script. Tools that ship with the Linux platform, such as wc, head, and tail, can be combined flexibly to complete the work.
Secondly, for tasks such as copying and distributing files, the SSH service is opened on the Linux platform, and scp is used to deploy the test environment and distribute the test plan scripts and test data files. The client controls each node (start/stop/monitor) by sending the relevant commands to each node through the SSH service, thereby managing the JMeter nodes.
Finally, the collection of results and the writing of the test report are completed by means of a JMeter plug-in set. This is an independent set of plug-ins that provides customizations for Apache JMeter; it has excellent drawing and loading capabilities and provides a richer function library. A shell script references the plug-in's jar package to integrate, count, and chart the collected result data, and the statistics are finally presented in a templated HTML file.
The whole process of this embodiment involves no manual intervention, which reduces labor intensity, saves labor and time cost, and avoids the mistakes caused by manual editing. Multiple JMeter instances can run on one server, which greatly reduces the demands on the test environment. Each JMeter instance runs independently, with no data synchronization during the test run, and all nodes are peers, so the problem of overloading the JMeter client with many remote nodes is avoided and the risk of polluted test results is reduced.
The present invention also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the above-described distributed performance testing method. The test method has the corresponding technical effect, and is not described in detail herein.
It will be understood by those skilled in the art that all or part of the steps/units/modules in the embodiments may be implemented by program instructions running on related hardware; the program may be stored in a computer-readable storage medium and, when executed, performs the steps corresponding to the units in the embodiments; and the aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. A distributed performance testing method is characterized by comprising the following steps:
analyzing and obtaining configuration information of each slave server according to prestored node distribution information, wherein the configuration information comprises a network address and the number of nodes distributed by each slave server;
splitting the total test data according to the configuration information of each slave server to obtain sub-test data corresponding to each node, replacing the data reference in the test script with the sub-test data corresponding to each node, and establishing the test script corresponding to each node; pushing the test program, the sub-test data corresponding to each node and the test script to the nodes of each slave server;
according to the test task, sending a corresponding command to the nodes of each slave server, executing the task by each node and feeding back an execution result;
the step of splitting the total test data to obtain the sub-test data corresponding to each node further includes:
configuring corresponding node identification for each sub-test data;
or/and,
the step of replacing the data references in the test script with the sub-test data corresponding to each node further comprises:
and configuring the corresponding node identification for the test script corresponding to each node.
2. The distributed performance testing method of claim 1,
the step of pushing the test program, the sub-test data corresponding to each node and the test script to the nodes of each slave server comprises the following steps:
copying the test program to the nodes of the slave servers;
configuring a folder with the corresponding node identification for the test program of each node;
and pushing the sub-test data and the test script corresponding to each node to the corresponding node.
3. The distributed performance testing method of claim 1, wherein the test script adopts an XML format whose nodes are XML nodes, the distributed performance testing method further comprising:
searching, by means of xmlstarlet, for all XML nodes that reference the test data file, and saving the referenced test data file names in an array to obtain a file array;
and traversing the file array, and executing the step of replacing the data reference in the test script with the sub-test data corresponding to each node.
4. The distributed performance testing method of claim 1, wherein the configuration information for each slave server is stored in a global array, each value of the array representing configuration information for a slave server.
5. The distributed performance testing method of any one of claims 1 to 4, wherein the test program is JMeter, and the distributed performance testing method further comprises, after each node executes a task and feeds back an execution result:
combining the feedback results by using a shell script, calling a JMeter plug-in to analyze and count the feedback results, and drawing charts.
6. The distributed performance testing method of claim 5, wherein after each node executes a task and feeds back execution results, the distributed performance testing method further comprises:
storing the summary data of the test results in a CSV file, writing a Java program, parsing the statistics CSV file, and loading it, together with the drawn charts, into an HTML template for display.
7. A distributed performance testing apparatus, comprising:
the analysis unit is used for analyzing and obtaining the configuration information of each slave server according to the prestored node distribution information, wherein the configuration information comprises a network address and the number of nodes distributed by each slave server;
the configuration unit is used for splitting the total test data according to the configuration information of each slave server to obtain sub-test data corresponding to each node, replacing the data reference in the test script with the sub-test data corresponding to each node, and establishing the test script corresponding to each node; pushing the test program, the sub-test data corresponding to each node and the test script to the nodes of each slave server;
the test control unit is used for sending corresponding commands to the nodes of each slave server according to the test task, each node executing the task and feeding back an execution result;
the step of splitting the total test data to obtain the sub-test data corresponding to each node further includes:
configuring corresponding node identification for each sub-test data;
or/and,
the step of replacing the data references in the test script with the sub-test data corresponding to each node further comprises:
and configuring the corresponding node identification for the test script corresponding to each node.
8. The distributed performance testing apparatus of claim 7, wherein the test script adopts an XML format whose nodes are XML nodes;
the configuration unit comprises an xmlstarlet tool for searching all XML nodes that reference the test data file and saving the referenced test data file names in an array to obtain a file array; and the file array is traversed, and the step of replacing the data references in the test script with the sub-test data corresponding to each node is executed.
9. The distributed performance testing apparatus of claim 7 or 8, further comprising:
a data processing unit for merging the feedback results by using a shell script, calling a JMeter plug-in to analyze and count the feedback results, and drawing charts.
10. The distributed performance testing apparatus of claim 9, further comprising:
a display unit for storing the summary data of the test results in a CSV file, writing a Java program, parsing the statistics CSV file, and loading it, together with the drawn charts, into an HTML template for display.
11. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the distributed performance testing method of any one of claims 1 to 6.
CN201711236780.XA 2017-11-30 2017-11-30 Distributed performance test method and device and computer readable storage medium Active CN108038013B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711236780.XA CN108038013B (en) 2017-11-30 2017-11-30 Distributed performance test method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711236780.XA CN108038013B (en) 2017-11-30 2017-11-30 Distributed performance test method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108038013A CN108038013A (en) 2018-05-15
CN108038013B true CN108038013B (en) 2021-07-16

Family

ID=62094702

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711236780.XA Active CN108038013B (en) 2017-11-30 2017-11-30 Distributed performance test method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108038013B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108984417B (en) * 2018-08-15 2022-06-03 北京达佳互联信息技术有限公司 Software testing method, device, terminal and storage medium
CN109359033A (en) * 2018-09-05 2019-02-19 广州神马移动信息科技有限公司 Method for testing pressure, testing service device, management server and system
CN109656791B (en) * 2018-11-01 2022-07-12 奇安信科技集团股份有限公司 gPC performance test method and device based on Jmeter
CN111294250B (en) * 2018-12-07 2023-05-26 三六零科技集团有限公司 Pressure testing method, device and system
CN109753433A (en) * 2018-12-26 2019-05-14 中链科技有限公司 Automated testing method, device and electronic equipment based on block chain
CN110417613B (en) * 2019-06-17 2022-11-29 平安科技(深圳)有限公司 Distributed performance testing method, device, equipment and storage medium based on Jmeter
CN110704326A (en) * 2019-10-10 2020-01-17 浙江中控技术股份有限公司 Test analysis method and device
CN110727570A (en) * 2019-10-11 2020-01-24 重庆紫光华山智安科技有限公司 Concurrent pressure measurement method and related device
CN113821386A (en) * 2020-06-19 2021-12-21 顺丰科技有限公司 Performance test method, device, network equipment and computer readable storage medium
CN111934953B (en) * 2020-08-07 2024-02-02 北京计算机技术及应用研究所 Batch test method based on domestic processor computer platform
CN113055408B (en) * 2021-05-27 2021-08-06 航天中认软件测评科技(北京)有限责任公司 Network security test integrated device
CN114489995B (en) * 2022-02-15 2022-09-30 北京永信至诚科技股份有限公司 Distributed scheduling processing method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101727389A (en) * 2009-11-23 2010-06-09 中兴通讯股份有限公司 Automatic test system and method of distributed integrated service
CN102214139A (en) * 2011-06-01 2011-10-12 北京航空航天大学 Automatic test performance control and debugging method facing distributed system
CN102609352A (en) * 2011-01-19 2012-07-25 阿里巴巴集团控股有限公司 Parallel testing method and parallel testing server
CN104978269A (en) * 2015-06-30 2015-10-14 四川九洲电器集团有限责任公司 Automatic testing method
CN105281978A (en) * 2015-10-23 2016-01-27 小米科技有限责任公司 Performance test method, device and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103793318B (en) * 2012-10-29 2018-06-12 百度在线网络技术(北京)有限公司 The distributed test method and device of a kind of module stability
US9977821B2 (en) * 2014-11-26 2018-05-22 Wipro Limited Method and system for automatically generating a test artifact
CN106815142A (en) * 2015-12-02 2017-06-09 北京奇虎科技有限公司 A kind of method for testing software and system
CN106776309A (en) * 2016-12-06 2017-05-31 郑州云海信息技术有限公司 A kind of testing time optimization method and system

Also Published As

Publication number Publication date
CN108038013A (en) 2018-05-15

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20191212

Address after: 100080 Beijing, Haidian District, Haidian District, Zhichun Road, No. 106, Pacific International Building, room 6, room 601-606

Applicant after: Haier Youjia Intelligent Technology (Beijing) Co., Ltd.

Applicant after: Qingdao Haier Science and Technology Co., Ltd.

Address before: 100080 Beijing, Haidian District, Haidian District, Zhichun Road, No. 106, Pacific International Building, room 6, room 601-606

Applicant before: Haier Youjia Intelligent Technology (Beijing) Co., Ltd.

GR01 Patent grant