CN111367795A - Performance test method based on benchmark service and related equipment

Performance test method based on benchmark service and related equipment

Info

Publication number
CN111367795A
CN111367795A
Authority
CN
China
Prior art keywords
test
performance
message
test message
parameters
Prior art date
Legal status
Pending
Application number
CN202010119599.6A
Other languages
Chinese (zh)
Inventor
贾茜
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date: 2020-02-26
Filing date: 2020-02-26
Publication date: 2020-07-03
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN202010119599.6A
Publication of CN111367795A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3668 Software testing
    • G06F11/3672 Test management
    • G06F11/3688 Test management for test execution, e.g. scheduling of test suites

Abstract

The application relates to the field of research and development management, and discloses a performance testing method based on a benchmark service and related equipment. The method comprises the following steps: acquiring performance test parameters and establishing a performance test scenario according to the performance test parameters, wherein the performance test parameters comprise average response time, average throughput and abnormal rate; acquiring an execution instruction and starting the test of the performance test scenario according to the execution instruction; calling the benchmark service to send test messages and recording the sending time of the test messages; and receiving the feedback results of the test messages, counting the performance test parameters according to the feedback results, and outputting a performance test result according to the statistics. By simulating message requests through the benchmark service and counting various parameters from the request feedback messages, the method obtains a system performance test result, saves server resources, reduces the number of test machines required, and improves both the convenience and the efficiency of system performance testing.

Description

Performance test method based on benchmark service and related equipment
Technical Field
The application relates to the field of research and development management, in particular to a performance testing method based on benchmark service and related equipment.
Background
A Message Queue Service (MQS) is a message-queue product for enterprise-level internet architectures, and is mainly used in distributed systems for scenarios such as decoupling, asynchronous messaging, traffic peak shaving and IoT device communication. Because MQ performance is very high, with throughput reaching tens of thousands or even hundreds of thousands of messages per second, and because many protocols such as TCP and MQTT are supported, traditional performance testing schemes struggle to meet these testing needs.
Traditional performance testing mainly relies on the JMeter tool: a Java request is packaged into the test request and then sent to the application server through a test execution machine. A single test execution machine can usually only support hundreds or thousands of concurrent requests, so testing tens of thousands of concurrent requests requires applying for many test execution machines, which are difficult to control uniformly during the test. In addition, many test scenarios need special parameters that JMeter cannot configure, and the results from multiple test machines are difficult to aggregate and analyse afterwards.
Disclosure of Invention
In view of the defects of the prior art, the application aims to provide a performance testing method based on a benchmark service and related equipment. The benchmark service simulates message requests, and various parameters are counted from the request feedback messages to obtain a system performance test result, which saves server resources, reduces the number of test machines required, and improves both the convenience and the efficiency of system performance testing.
In order to achieve the above purpose, the technical solution of the present application provides a performance testing method based on benchmark service and related devices.
The application discloses a performance testing method based on benchmark service, which comprises the following steps:
acquiring performance test parameters, and establishing a performance test scene according to the performance test parameters, wherein the performance test parameters comprise average response time, average throughput and abnormal rate;
acquiring an execution instruction, starting a test on the performance test scene according to the execution instruction, and generating a test message;
calling a benchmark service to send the test message, and recording the sending time of the test message;
and receiving a feedback result of the server side to the test message, counting the performance test parameters according to the feedback result, and outputting a performance test result according to the statistical result.
Preferably, the obtaining performance test parameters and establishing a performance test scenario according to the performance test parameters includes:
presetting configuration parameters, and creating a script according to the configuration parameters;
and acquiring performance test parameters, and calling the script to establish a performance test scene based on the performance test parameters.
Preferably, the obtaining an execution instruction, starting the test on the performance test scenario according to the execution instruction, and generating a test message includes:
creating a first type of test message and a second type of test message;
acquiring script execution instructions corresponding to the first type of the test message and the second type of the test message;
and starting the test of the performance test scene according to the script execution instruction to generate a first type test message and a second type test message.
Preferably, the calling benchmark service sends the test message, including:
calling benchmark service to sequentially send each test message and wait for receiving a feedback result of each test message;
and after all the test messages are sent, counting the total number of the test messages which are successfully sent.
Preferably, the recording the sending time of the test message includes:
when a first test message is sent, recording the sending time of the first test message;
recording the transmission time consumption of each successfully transmitted test message when one test message is successfully transmitted;
and recording the sending time of the last test message after the last test message is sent.
Preferably, the receiving a feedback result of the server side to the test message, and counting the performance test parameters according to the feedback result, includes:
when a feedback result of the server side for the test message is received, accumulating the count of the feedback results, wherein the feedback results comprise success and abnormality;
and when the accumulated total number of the feedback results reaches the total number of the test messages, stopping receiving the feedback results, and counting the performance test parameters according to the feedback results.
Preferably, the outputting the performance test result according to the statistical result includes:
acquiring the cumulative response time of the successfully sent test messages according to the sending time of the test messages, and acquiring the average response time according to the cumulative response time and the cumulative total number of the feedback results;
acquiring the receiving time of the last feedback result, and acquiring the average throughput according to the sending time of the first test message, the receiving time of the last feedback result and the total number of successfully sent test messages;
obtaining an abnormal rate according to the total number of the received abnormal results and the total number of the feedback results;
and outputting the average response time, the average throughput and the abnormal rate within a preset time interval.
The application also discloses a performance test device based on benchmark service, the device includes:
a creation module: used for acquiring performance test parameters and establishing a performance test scenario according to the performance test parameters, wherein the performance test parameters comprise average response time, average throughput and abnormal rate;
a starting module: used for acquiring an execution instruction, starting the test of the performance test scenario according to the execution instruction, and generating a test message;
a test module: used for calling the benchmark service to send the test message and recording the sending time of the test message;
a statistics module: used for receiving a feedback result of the server side to the test message, counting the performance test parameters according to the feedback result, and outputting a performance test result according to the statistical result.
The application also discloses a computer device, which comprises a memory and a processor, wherein the memory stores computer-readable instructions which, when executed by one or more processors, cause the one or more processors to execute the steps of the test method.
The application also discloses a computer-readable storage medium, which can be read and written by a processor. The storage medium stores computer-readable instructions which, when executed by one or more processors, cause the one or more processors to execute the steps of the test method.
The beneficial effect of this application is: the benchmark service simulates message requests, and various parameters are counted from the request feedback messages to obtain a system performance test result, which saves server resources, reduces the number of test machines required, and improves both the convenience and the efficiency of system performance testing.
Drawings
Fig. 1 is a schematic flowchart of a performance testing method based on benchmark service according to a first embodiment of the present application;
fig. 2 is a schematic flowchart of a performance testing method based on benchmark service according to a second embodiment of the present application;
fig. 3 is a schematic flowchart of a performance testing method based on benchmark service according to a third embodiment of the present application;
fig. 4 is a schematic flowchart of a performance testing method based on benchmark service according to a fourth embodiment of the present application;
fig. 5 is a schematic flowchart of a performance testing method based on benchmark service according to a fifth embodiment of the present application;
fig. 6 is a schematic flowchart of a performance testing method based on benchmark service according to a sixth embodiment of the present application;
fig. 7 is a schematic flowchart of a performance testing method based on benchmark service according to a seventh embodiment of the present application;
fig. 8 is a schematic structural diagram of a performance testing apparatus based on benchmark service according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
A performance testing method based on benchmark service according to a first embodiment of the present application is shown in fig. 1, and the embodiment includes the following steps:
step s101, acquiring performance test parameters, and establishing a performance test scene according to the performance test parameters, wherein the performance test parameters comprise average response time, average throughput and abnormal rate;
specifically, the performance test parameters may generally include average throughput, average response time, and an abnormal rate, where the performance test parameters are used to test the performance of the system, so that before the performance test of the system, a performance test scenario may be determined, the performance test scenario may include an executed script and corresponding parameters, the parameters may include configuration parameters and performance test parameters, the configuration parameters include parameters fixed in the script, and the performance test parameters include parameters that need to be counted in the current test, and are output according to the statistical result.
Step s102, acquiring an execution instruction, starting a test on the performance test scene according to the execution instruction, and generating a test message;
Specifically, the execution instruction includes a script execution instruction, that is, an instruction for starting the performance test. After the execution instruction is obtained, the client starts the corresponding performance test and then generates the corresponding test messages in the system, where the total number and size of the test messages are set in the scenario. A test message generally refers to a request message, such as a request from the client for a server resource or for data. During the test, the specific content of the test message is not of concern; what is tested is whether the client sends the test message to the server successfully, after which the server responds to the test message and sends feedback information to the client.
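For illustration only, one exemplary way to generate the test messages according to the count and size configured in the scenario is shown below (the class and parameter names, such as TestMessageFactory, messageCount and messageSize, are merely exemplary and are not limited by the application):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TestMessageFactory {
    // Builds messageCount payloads of messageSize bytes, as configured in the scenario.
    public static List<byte[]> generate(int messageCount, int messageSize) {
        List<byte[]> messages = new ArrayList<>(messageCount);
        for (int i = 0; i < messageCount; i++) {
            byte[] body = new byte[messageSize];
            Arrays.fill(body, (byte) 'x'); // the content itself is not of concern during the test
            messages.add(body);
        }
        return messages;
    }
}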
Step s103, calling benchmark service to send the test message, and recording the sending time of the test message;
specifically, after a client generates a corresponding test message, a benchmark service is called to send the test message to a server, and when the client sends the test message to the server, the sending time of the test message is recorded, the sending time is used for counting the consumed time of the test message, and in addition, the successful sending proportion of the test message can be recorded; and after the server receives each test message, processing the test message and sending a processing result to the client, wherein the processing comprises the following conditions:
when the master flush-to-disk operation times out, a FLUSH_DISK_TIMEOUT message is returned to the client, and the state is set to WARN, i.e. an abnormal state;
when the slave flush-to-disk operation times out, a FLUSH_SLAVE_TIMEOUT message is returned to the client, and the state is set to WARN, i.e. an abnormal state;
when the slave is not available, a SLAVE_NOT_AVAILABLE message is returned to the client, and the state is set to WARN, i.e. an abnormal state;
if the server processes the message successfully, a SEND_OK message is sent to the client, and the state is set to SUCCESS, i.e. a success state.
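The above states correspond to the send statuses exposed by common message-queue clients (for example, Apache RocketMQ's SendStatus enumeration uses the same names); since the application does not prescribe a specific client, the following exemplary sketch defines its own status enumeration and merely classifies each result into the success and abnormal counters used later for the abnormal rate:

import java.util.concurrent.atomic.AtomicLong;

public class SendResultClassifier {
    // Mirrors the four states described above; not tied to any particular MQ client.
    public enum Status { SEND_OK, FLUSH_DISK_TIMEOUT, FLUSH_SLAVE_TIMEOUT, SLAVE_NOT_AVAILABLE }

    private final AtomicLong successCount = new AtomicLong();
    private final AtomicLong abnormalCount = new AtomicLong();

    public void record(Status status) {
        if (status == Status.SEND_OK) {
            successCount.incrementAndGet();   // SUCCESS state
        } else {
            abnormalCount.incrementAndGet();  // WARN, i.e. abnormal state
        }
    }

    public long successes() { return successCount.get(); }
    public long abnormals() { return abnormalCount.get(); }
}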
And step s104, receiving a feedback result of the server to the test message, counting the performance test parameters according to the feedback result, and outputting a performance test result according to the statistical result.
Specifically, after receiving a test message from the client, the server processes it. The processing does not include analysing the specific content of the test message; instead, a corresponding feedback result is generated once the test message has been received successfully. The feedback result is either WARN or SUCCESS, and a warning can have various causes, such as a flush-to-disk timeout or an unavailable slave. The client receives the feedback result of the server for each test message, and once the feedback results of all test messages have been received, the client counts the performance test parameters according to the recorded time consumption of the test messages, the total number of test messages and the feedback results, and finally obtains the values of the performance test parameters, such as average throughput, average response time and abnormal rate, which are output as the performance test result of this run.
In this embodiment, the benchmark service simulates the message requests, and various parameters are counted from the request feedback messages to obtain a system performance test result, which saves server resources, reduces the number of test machines required, and improves both the convenience and the efficiency of the system performance test.
Fig. 2 is a schematic flowchart of a performance testing method based on benchmark service according to a second embodiment of the present application. As shown in the drawing, step s101 of acquiring performance test parameters and establishing a performance test scenario according to the performance test parameters includes:
step s201, presetting configuration parameters, and creating a script according to the configuration parameters;
specifically, besides the performance test parameters, configuration parameters are also determined, the configuration parameters are fixed parameters in the script, and include parameters such as the size of each test message, the number of threads, the identity ID of the message producer, the identity ID of the message consumer, the total number of the test messages, and the like.
Step s202, obtaining performance test parameters, and calling the script to establish a performance test scene based on the performance test parameters.
Specifically, after the script is created, the parameters to be tested this time, that is, the performance test parameters, are determined; these may include average throughput, average response time and abnormal rate. Once the script and the performance test parameters are determined, the performance test scenario is essentially determined as well.
In this embodiment, the performance test scenario is determined by setting the script and the performance test parameters, so that system performance can be tested and pinpointed more accurately.
Fig. 3 is a schematic flowchart of a performance testing method based on benchmark service according to a third embodiment of the present application. As shown in the drawing, step s102 of acquiring an execution instruction, starting the test of the performance test scenario according to the execution instruction, and generating a test message includes:
step s301, creating a first type of test message and a second type of test message;
specifically, the test messages may be divided into two types, the first type of test message includes a message consumer type, and the second type of test message includes a message producer type.
Step s302, obtaining a script execution instruction corresponding to the first type of the test message and the second type of the test message;
specifically, after the test message type is determined, a script execution instruction corresponding to the first test message type and the second test message type may be obtained, and in practical application, the script execution instruction is as follows:
the instructions for performing the message consumer test are as follows:
./consumer.sh -n nameserver -c cid -t topic -q messageCount
the instructions for execution of the message producer test are as follows:
./producer.sh -n nameserver -t topic -p pid -w ThreadCount -s messageSize -c messageCount
step s303, starting the test of the performance test scenario according to the script execution instruction, and generating a first type test message and a second type test message.
After the client acquires the execution instruction, it starts the performance test and then generates the corresponding test messages, wherein the number and size of the test messages are set in the scenario.
In this embodiment, by classifying the test messages, different types of test messages can be tested, and the flexibility of the test can be improved.
Fig. 4 is a schematic flowchart of a performance testing method based on a benchmark service according to a fourth embodiment of the present application, as shown in the drawing, in step s103, invoking the benchmark service to send the test message includes:
step s401, calling benchmark service to sequentially send each test message, and waiting for receiving a feedback result of each test message;
specifically, when the benchmark service is called to send a test message, the test messages can be sent in sequence, and after each test message is sent, the client side waits to receive a feedback result of each test message.
Step s402, after all the test messages are sent, counting the total number of the test messages which are sent successfully.
Specifically, after all the test messages have been sent, the total number of successfully sent test messages is counted. Although the total number of test messages to be sent is preset in the scenario, a test message may fail to send for network or system reasons, so only the successfully sent test messages are counted towards the total; when the client fails to send a message, the current abnormal information can be recorded and printed to a log.
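One exemplary sketch of this sequential sending loop is given below; the benchmarkSend callback stands in for the actual call into the benchmark service, which the application does not fix, and failed sends are logged and excluded from the success total:

import java.util.List;
import java.util.function.Predicate;
import java.util.logging.Logger;

public class SequentialSender {
    private static final Logger LOG = Logger.getLogger(SequentialSender.class.getName());
    private long successTotal = 0;

    // benchmarkSend returns true when a test message is sent successfully.
    public void sendAll(List<byte[]> messages, Predicate<byte[]> benchmarkSend) {
        for (byte[] message : messages) {
            try {
                if (benchmarkSend.test(message)) {
                    successTotal++;                   // only successful sends are counted
                }
            } catch (Exception e) {
                LOG.warning("send failed: " + e);     // record the abnormal information in the log
            }
        }
        LOG.info("total number of successfully sent test messages: " + successTotal);
    }

    public long successTotal() { return successTotal; }
}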
In the embodiment, the test messages are sequentially sent, and the total number of the test messages which are successfully sent is counted, so that the test messages can be accurately counted, and the accuracy of the performance test is improved.
Fig. 5 is a schematic flowchart of a performance testing method based on benchmark service according to a fifth embodiment of the present application. As shown in the drawing, the step s103 of recording the sending time of the test message includes:
step s501, when a first test message is sent, recording the sending time of the first test message;
specifically, when the first test message is ready to be sent, the time of the first test message is recorded, that is, the sending time is recorded regardless of whether the first test message is sent successfully or fails.
Step s502, recording the transmission time consumption of each successfully transmitted test message when one test message is successfully transmitted;
specifically, after one test message is successfully sent each time, the sending time consumption of the test messages is counted, and after the last test message is sent, the total time consumption of all the successfully sent test messages is accumulated.
And step s503, after the last test message is sent, recording the sending time of the last test message.
Specifically, when the last test message is ready to be sent, the sending time of the last test message may be recorded, or the sending completion may be recorded after the sending of the last test message is completed, where the sending completion includes a successful sending or a failed sending.
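For illustration, the time-recording bookkeeping described in steps s501 to s503 may be expressed as in the following exemplary sketch (millisecond timing and the names used here are merely exemplary):

public class SendTimeRecorder {
    private long firstSendTime = -1;      // sending time of the first test message
    private long lastSendTime = -1;       // sending time of the last test message
    private long cumulativeElapsedMs = 0; // accumulated time consumption of successful sends

    public void onFirstSend() { firstSendTime = System.currentTimeMillis(); }

    // Called once per successfully sent test message with the time that send consumed.
    public void onSuccessfulSend(long elapsedMs) { cumulativeElapsedMs += elapsedMs; }

    public void onLastSend() { lastSendTime = System.currentTimeMillis(); }

    public long firstSendTime() { return firstSendTime; }
    public long lastSendTime() { return lastSendTime; }
    public long cumulativeElapsedMs() { return cumulativeElapsedMs; }
}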
In the embodiment, by recording the total consumed time and the sending time of the test message, the system performance can be accurately counted, and the accuracy of the performance test is improved.
Fig. 6 is a schematic flowchart of a performance testing method based on benchmark service according to a sixth embodiment of the present application. As shown in the drawing, step s104 of receiving the feedback result of the server side to the test message and counting the performance test parameters according to the feedback result includes:
step s601, when receiving a feedback result of the server to the test message, accumulating the number of the feedback result, wherein the feedback result comprises success and exception;
specifically, the client receives the feedback results from the server, and respectively accumulates the number of the feedback results when receiving the feedback results, wherein the number of the feedback results includes the number of successful feedback and the number of abnormal feedback.
Step s602, when the cumulative total of the feedback results reaches the total sending number of the test message, stopping receiving the feedback results, and counting the performance test parameters according to the feedback results.
Specifically, when the cumulative total number of the feedback results is consistent with the total number of the test messages sent, the reception of the feedback results may be stopped, the reception time of the last feedback result may be recorded, and the performance test parameters may be counted according to the feedback results.
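An exemplary sketch of this accumulation and stop condition (all names here are merely exemplary) is:

import java.util.concurrent.atomic.AtomicLong;

public class FeedbackCollector {
    private final long expectedTotal;     // total number of test messages that were sent
    private final AtomicLong successCount = new AtomicLong();
    private final AtomicLong abnormalCount = new AtomicLong();
    private volatile long lastFeedbackTime = -1;

    public FeedbackCollector(long expectedTotal) { this.expectedTotal = expectedTotal; }

    // Returns true once the accumulated total reaches the number of sent test messages.
    public boolean onFeedback(boolean success) {
        if (success) successCount.incrementAndGet(); else abnormalCount.incrementAndGet();
        long received = successCount.get() + abnormalCount.get();
        if (received >= expectedTotal) {
            lastFeedbackTime = System.currentTimeMillis(); // reception time of the last feedback result
            return true;                                   // stop receiving feedback results
        }
        return false;
    }

    public long successCount() { return successCount.get(); }
    public long abnormalCount() { return abnormalCount.get(); }
    public long lastFeedbackTime() { return lastFeedbackTime; }
}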
In this embodiment, whether to stop receiving the feedback result is determined by the number of the feedback results, so that the efficiency of the performance test can be improved, and the efficiency reduction caused by unnecessary waiting is avoided.
Fig. 7 is a schematic flowchart of a performance testing method based on benchmark service according to a seventh embodiment of the present application, as shown in the drawing, the step s104 of outputting a performance testing result according to the statistical result includes:
Step s701, acquiring the cumulative response time of the successfully sent test messages according to the sending time of the test messages, and acquiring the average response time according to the cumulative response time and the cumulative total number of the feedback results;
specifically, the average response time is a ratio of the total response time of all successfully transmitted test messages to the total number of all received feedback results, the total response time of all successfully transmitted test messages can be obtained from the transmission time of the test messages, the response time of each successfully transmitted test message can be obtained first, because the system is synchronous, the transmission time can be recorded after each test message is transmitted, the reception time can be recorded after the feedback results are received, and the time difference between the reception time and the transmission time is the response time of each test message.
Specifically, besides the average response time, a response-time unqualified message rate may also be counted. One or more response time thresholds are preset, for example the number of test messages whose response time exceeds 300 ms, 500 ms or 1000 ms may be counted. The response time of each test message is then obtained and compared with the threshold, and whenever the response time of a test message is greater than the threshold, the message is added to a counter, giving the total number of test messages over the threshold. Finally, dividing this total by the total number of sent test messages gives the response-time unqualified message rate.
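The calculation described here reduces to a count and a division; the following exemplary sketch expresses it for a single threshold (the 300 ms value in the usage comment is simply the example threshold mentioned above):

import java.util.List;

public class ResponseTimeQuality {
    // Fraction of sent messages whose response time exceeds the given threshold (in ms).
    public static double unqualifiedRate(List<Long> responseTimesMs, long thresholdMs, long totalSent) {
        long over = responseTimesMs.stream().filter(t -> t > thresholdMs).count();
        return totalSent == 0 ? 0.0 : (double) over / totalSent;
    }
}

// Example usage: ResponseTimeQuality.unqualifiedRate(times, 300, totalSent) for the 300 ms threshold.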
Step s702, obtaining the receiving time of the last feedback result, and obtaining the average throughput according to the sending time of the first test message, the receiving time of the last feedback result and the total number of successfully sent test messages;
specifically, the average throughput is the total number of successfully transmitted test messages/(the current time-the time when the first test message is transmitted), where the current time is the time when the last feedback result is received.
Step s703, obtaining an abnormal rate according to the total number of the received abnormal results and the total number of the feedback results;
specifically, the anomaly rate is the total number of anomalies/the total number of all received feedback results.
And s704, outputting the average response time, the average throughput and the abnormal rate in a preset time interval.
Specifically, after the performance test parameters are obtained, the statistical result may be output to a log at a predetermined time interval. The statistical result may include indicators such as average throughput, average response time and abnormal rate, and the predetermined time interval may be set in advance, for example to 1 minute.
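As one exemplary way, not prescribed by the application, to output the statistics at a fixed interval such as the 1-minute example above, a scheduled task may print the indicators periodically:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class PeriodicReporter {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    // Prints the current indicators every intervalSeconds seconds (e.g. 60 for one minute).
    public void start(long intervalSeconds, Supplier<String> statsLine) {
        scheduler.scheduleAtFixedRate(
                () -> System.out.println(statsLine.get()),
                intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
    }

    public void stop() { scheduler.shutdownNow(); }
}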
In this embodiment, by counting the different parameters and outputting them at a set time interval, the system performance can be understood more clearly, which helps to improve system performance and system efficiency.
A performance testing apparatus based on benchmark service according to an embodiment of the present application is shown in fig. 8, and includes:
a creating module 801, a starting module 802, a testing module 803 and a statistics module 804; the creating module 801 is connected with the starting module 802, the starting module 802 is connected with the testing module 803, and the testing module 803 is connected with the statistics module 804. The creating module 801 is configured to obtain performance test parameters and create a performance test scenario according to the performance test parameters, where the performance test parameters include average response time, average throughput and abnormal rate; the starting module 802 is configured to obtain an execution instruction, start the test of the performance test scenario according to the execution instruction, and generate test messages; the testing module 803 is configured to invoke the benchmark service to send the test messages and record their sending times; and the statistics module 804 is configured to receive the feedback results of the server side to the test messages, count the performance test parameters according to the feedback results, and output a performance test result according to the statistics.
The embodiment of the application also discloses a computer device, which comprises a memory and a processor, wherein the memory is stored with computer readable instructions, and the computer readable instructions, when executed by one or more processors, cause the one or more processors to execute the steps in the test method in the above embodiments.
The embodiment of the present application further discloses a computer-readable storage medium, which can be read and written by a processor. The storage medium stores computer-readable instructions which, when executed by one or more processors, cause the one or more processors to execute the steps in the test method in the foregoing embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the computer program is executed. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A performance test method based on benchmark service is characterized by comprising the following steps:
acquiring performance test parameters, and establishing a performance test scene according to the performance test parameters, wherein the performance test parameters comprise average response time, average throughput and abnormal rate;
acquiring an execution instruction, starting a test on the performance test scene according to the execution instruction, and generating a test message;
calling a benchmark service to send the test message, and recording the sending time of the test message;
and receiving a feedback result of the server side to the test message, counting the performance test parameters according to the feedback result, and outputting a performance test result according to the statistical result.
2. The performance testing method based on benchmark service as claimed in claim 1, wherein said obtaining performance testing parameters and establishing performance testing scenarios according to said performance testing parameters comprises:
presetting configuration parameters, and creating a script according to the configuration parameters;
and acquiring performance test parameters, and calling the script to establish a performance test scene based on the performance test parameters.
3. The method for performance testing based on benchmark service as claimed in claim 2, wherein said obtaining an execution instruction, starting the testing of the performance testing scenario according to the execution instruction, and generating a testing message comprises:
creating a first type of test message and a second type of test message;
acquiring script execution instructions corresponding to the first type of the test message and the second type of the test message;
and starting the test of the performance test scene according to the script execution instruction to generate a first type test message and a second type test message.
4. The method for performance testing based on benchmark service as claimed in claim 3, wherein said invoking the benchmark service to send said test message comprises:
calling benchmark service to sequentially send each test message and wait for receiving a feedback result of each test message;
and after all the test messages are sent, counting the total number of the test messages which are successfully sent.
5. The method for performance testing based on benchmark service as claimed in claim 4, wherein said recording the sending time of said test message comprises:
when a first test message is sent, recording the sending time of the first test message;
recording the transmission time consumption of each successfully transmitted test message when one test message is successfully transmitted;
and recording the sending time of the last test message after the last test message is sent.
6. The performance testing method based on benchmark service according to claim 5, wherein the receiving a feedback result of the server side to the test message and counting the performance test parameters according to the feedback result comprises:
when a feedback result of the server side for the test message is received, accumulating the count of the feedback results, wherein the feedback results comprise success and abnormality;
and when the accumulated total number of the feedback results reaches the total number of the test messages, stopping receiving the feedback results, and counting the performance test parameters according to the feedback results.
7. The method for performance testing based on benchmark service as claimed in claim 6, wherein said outputting the performance testing result according to the statistical result comprises:
acquiring the cumulative response time of the successfully sent test messages according to the sending time of the test messages, and acquiring the average response time according to the cumulative response time and the cumulative total number of the feedback results;
acquiring the receiving time of the last feedback result, and acquiring the average throughput according to the sending time of the first test message, the receiving time of the last feedback result and the total number of successfully sent test messages;
obtaining an abnormal rate according to the total number of the received abnormal results and the total number of the feedback results;
and outputting the average response time, the average throughput and the abnormal rate within a preset time interval.
8. A performance testing apparatus based on benchmark service, the apparatus comprising:
a creating module: used for acquiring performance test parameters and creating a performance test scenario according to the performance test parameters, wherein the performance test parameters comprise average response time, average throughput and abnormal rate;
a starting module: used for acquiring an execution instruction, starting the test of the performance test scenario according to the execution instruction, and generating a test message;
a test module: used for calling the benchmark service to send the test message and recording the sending time of the test message;
a statistics module: used for receiving a feedback result of the server side to the test message, counting the performance test parameters according to the feedback result, and outputting a performance test result according to the statistical result.
9. A computer device comprising a memory and a processor, the memory having stored therein computer-readable instructions which, when executed by one or more of the processors, cause the one or more processors to carry out the steps of the test method according to any one of claims 1 to 7.
10. A computer-readable storage medium readable by a processor, the storage medium storing computer instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the test method of any one of claims 1 to 7.

Priority Applications (1)

Application Number: CN202010119599.6A
Publication: CN111367795A (en)
Priority Date: 2020-02-26
Filing Date: 2020-02-26
Title: Performance test method based on benchmark service and related equipment

Publications (1)

Publication Number: CN111367795A
Publication Date: 2020-07-03

Family

ID=71211162




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination