WO2019037203A1 - Performance testing method and apparatus for an application, computer device, and storage medium - Google Patents

Performance testing method and apparatus for an application, computer device, and storage medium

Info

Publication number
WO2019037203A1
WO2019037203A1 · PCT/CN2017/104599 · CN2017104599W
Authority
WO
WIPO (PCT)
Prior art keywords
test
node
slave
master node
service scenario
Prior art date
Application number
PCT/CN2017/104599
Other languages
English (en)
French (fr)
Inventor
丁晶晶
柯星
贺满
刘慧众
Original Assignee
上海壹账通金融科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海壹账通金融科技有限公司
Publication of WO2019037203A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3452 Performance evaluation by statistical analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3466 Performance evaluation by tracing or monitoring
    • G06F 11/3476 Data logging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3692 Test management for test results analysis

Definitions

  • The present application relates to the field of computer technology, and in particular to a performance testing method, apparatus, computer device, and storage medium for an application.
  • An application needs to be performance-tested before release to verify whether it meets the expected performance targets. Performance testing includes client-side performance testing and server-side performance testing.
  • For server-side performance testing, traditional test platforms only support testing a single interface of the application.
  • Performance testing of a business scenario, however, usually requires testing multiple interfaces at the same time.
  • Traditional test platforms cannot meet this requirement.
  • According to various embodiments disclosed in the present application, a performance testing method, apparatus, computer device, and storage medium for an application are provided.
  • An application performance testing method includes:
  • receiving, through a master node, a performance test request for a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario;
  • acquiring, by the master node, the corresponding test case according to the configuration information, and generating the corresponding test code from the configuration information;
  • distributing, through the master node, the test code and test case to multiple slave nodes, so that the slave nodes invoke multiple threads to perform performance tests on the multiple interfaces of the service scenario according to the test code and test case; and
  • receiving, through the master node, the test data returned by the multiple slave nodes, and generating a test report from the test data.
  • An application performance testing apparatus includes:
  • a master node configured to receive a performance test request for a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario; to acquire the corresponding test case according to the configuration information and generate the corresponding test code from the configuration information; and to distribute the test code and test cases to multiple slave nodes;
  • a slave node configured to run multiple threads, the threads using the test code and test cases to perform performance tests on the multiple interfaces of the business scenario; and
  • the master node is further configured to receive the test data returned by the multiple slave nodes and to generate a test report from the test data.
  • A computer device includes a memory and one or more processors, the memory storing computer-readable instructions that, when executed by the processors, cause the one or more processors to perform the following steps:
  • receiving, through a master node, a performance test request for a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario;
  • acquiring, by the master node, the corresponding test case according to the configuration information, and generating the corresponding test code from the configuration information;
  • distributing, through the master node, the test code and test cases to multiple slave nodes, so that the slave nodes invoke multiple threads to perform performance tests on the multiple interfaces of the service scenario; and
  • receiving, through the master node, the test data returned by the multiple slave nodes, and generating a test report from the test data.
  • One or more non-volatile readable storage media storing computer-readable instructions are provided, the computer-readable instructions, when executed by one or more processors, causing the one or more processors to perform the following steps:
  • receiving, through a master node, a performance test request for a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario;
  • acquiring, by the master node, the corresponding test case according to the configuration information, and generating the corresponding test code from the configuration information;
  • distributing, through the master node, the test code and test cases to multiple slave nodes, so that the slave nodes invoke multiple threads to perform performance tests on the multiple interfaces of the service scenario; and
  • receiving, through the master node, the test data returned by the multiple slave nodes, and generating a test report from the test data.
  • FIG. 1 is a diagram of an application scenario of a performance testing method for an application in one embodiment;
  • FIG. 2 is a flowchart of a performance testing method for an application in one embodiment;
  • FIG. 3 is a block diagram of a performance testing apparatus for an application in one embodiment;
  • FIG. 4 is a block diagram of a master node in one embodiment.
  • The performance testing method provided by the present application can be applied to the application scenario shown in FIG. 1.
  • The terminal 102 communicates with the server cluster 104 over a network.
  • The server cluster 104 includes one master node and multiple slave nodes.
  • The master node can communicate with the terminal 102.
  • A tester accesses the master node through the terminal 102, and the master node returns to the terminal 102 a service scenario management page for application performance testing. On this page, testers can configure the test requirements of different business scenarios.
  • When the master node obtains the configuration information corresponding to a service scenario, it acquires the corresponding test case according to the configuration information and generates the corresponding test code from the configuration information.
  • The master node can distribute the test cases and test code of the service scenario to multiple slave nodes, and the slave nodes invoke multiple threads that use the test cases and test code to performance-test the multiple interfaces of the scenario.
  • When a thread exits the test, the slave node records the test data and returns it to the master node.
  • The master node aggregates the test data returned by the multiple slave nodes and generates the corresponding performance test report.
  • In one embodiment, as shown in FIG. 2, a performance testing method for an application is provided. It should be understood that although the steps in the flowchart of FIG. 2 are displayed in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIG. 2 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and their execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
  • The method is described below as applied to a server cluster, and includes the following steps.
  • Step 202: the master node receives a performance test request for a service scenario sent by the terminal, the performance test request carrying configuration information corresponding to the service scenario.
  • Step 204: the master node acquires the corresponding test case according to the configuration information and generates the corresponding test code from the configuration information.
  • The server cluster includes multiple nodes: one master node and multiple slave nodes. A test tool for performance-testing applications is installed on both the master node and the slave nodes. The master node controls the multiple slave nodes.
  • The master node can communicate with the terminal. Specifically, a tester accesses the master node through the terminal, and the master node returns to the terminal the service scenario management page for application performance testing. On this page, testers can select different service scenarios and configure the test requirements of each.
  • Service scenarios include single service scenarios, mixed service scenarios, and stability service scenarios.
  • A single service scenario is supported by one or more interfaces.
  • On the management page of a single service scenario, the service scenario name, the name of each interface, the test cases corresponding to each interface, and the test exit conditions can be configured.
  • A mixed service scenario can include multiple single service scenarios. On the management page of a mixed service scenario, the test requirements of each constituent single service scenario can be configured. For a stability service scenario, the test requirements can be configured on the basis of a mixed service scenario.
  • Taking a single service scenario as an example, the tester can use the terminal to enter the service scenario name and the corresponding interface names on the management page.
  • The tester can also use the terminal to select the corresponding test case on the page according to the interface name.
  • For example, the interface name may be "query asset interface",
  • and the test case name "query activity list".
  • The tester can also configure the test exit conditions of an interface on the page through the terminal, for example a response time of 100 milliseconds, an error rate of 0.1%, and the like.
  • The terminal sends the configuration information corresponding to the service scenario to the master node.
  • The configuration information includes the service scenario name, the interface names, the test case names corresponding to the interface names, and the test exit conditions corresponding to the interface names, as sketched below.
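  • As an illustration only, the configuration information for a single service scenario might be represented as follows. The field names and values are assumptions made for this sketch; the source does not define a concrete format.

```python
# Hypothetical layout of the configuration information carried by a
# performance test request for a single service scenario.
single_scenario_config = {
    "scenario_name": "query_assets",             # service scenario name
    "interfaces": [
        {
            "interface_name": "query_asset_interface",
            "test_case_name": "query_activity_list",
            "exit_conditions": {                 # test exit conditions
                "max_response_time_ms": 100,     # e.g. 100 ms response time
                "max_error_rate": 0.001,         # e.g. 0.1% error rate
                "max_threads": 200,              # thread ceiling (second threshold)
            },
            "initial_threads": 10,               # ramp-up parameters used by
            "step_count": 10,                    # the master node (see the
            "step_frequency_s": 5,               # stepping sketch further below)
        }
    ],
}
```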
  • When the master node receives the configuration information corresponding to a single service scenario, it selects the corresponding test case according to the test case name in the configuration information, obtains the assembly rule of the test code for each single interface according to the interface name in the configuration information,
  • and obtains the assembly rule of the test code for the single service scenario according to the service scenario name in the configuration information.
  • Using the assembly rule of the test code for a single interface, the master node fetches the corresponding program modules from the test code library and assembles them according to the rule, generating the test code for that interface.
  • Using the assembly rule of the test code for the single service scenario, the master node assembles the test code of the multiple interfaces into the test code for the single service scenario.
  • Further, a single service scenario may have only one interface, in which case the test code of that interface can be regarded as the test code of the single service scenario.
  • When the master node receives the configuration information corresponding to a mixed service scenario, the test code of each constituent single service scenario is assembled in the manner described above.
  • The master node then obtains the code assembly rule corresponding to the mixed service scenario and uses it to assemble the test code of the multiple single service scenarios into the test code for the mixed service scenario.
  • When the master node receives the configuration information corresponding to a stability service scenario, it can generate the corresponding test code in the same manner as for a mixed service scenario.
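  • A minimal sketch of this assembly process, assuming that program modules are code snippets keyed by name and that assembly rules are ordered lists of module or interface names (both representations are assumptions, not specified by the source):

```python
# Test code library: named program modules (contents are placeholders).
TEST_CODE_LIBRARY = {
    "setup_http_client": "client = HttpClient()",
    "send_request": "resp = client.post(url, payload)",
    "check_response": "assert resp.status == 200",
}

def assemble_interface_code(interface_rule):
    """Assemble the test code of a single interface by concatenating the
    program modules named in its assembly rule, in rule order."""
    return "\n".join(TEST_CODE_LIBRARY[name] for name in interface_rule)

def assemble_scenario_code(scenario_rule, interface_codes):
    """Assemble the test code of a single service scenario from the test
    code of its interfaces, in the order given by the scenario rule."""
    return "\n\n".join(interface_codes[iface] for iface in scenario_rule)

# Usage: one rule per interface, then a scenario-level rule over interfaces.
rule = ["setup_http_client", "send_request", "check_response"]
interface_codes = {"query_asset_interface": assemble_interface_code(rule)}
scenario_code = assemble_scenario_code(["query_asset_interface"], interface_codes)
```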
  • Step 206: the master node distributes the test code and test cases to multiple slave nodes, so that the slave nodes invoke multiple threads to perform performance tests on the multiple interfaces of the service scenario according to the test code and test cases.
  • Step 208: the master node receives the test data returned by the multiple slave nodes and generates a test report from the test data.
  • The server cluster may be a distributed cluster.
  • When the master node obtains the configuration information corresponding to a service scenario, it can generate a corresponding test task.
  • The test task includes performance tests of the multiple interfaces in the business scenario; the performance test of each interface is executed by multiple threads.
  • The interfaces covered by the test task can also be called the interfaces under test.
  • The master node can distribute the test task to multiple slave nodes for execution.
  • Specifically, the slave nodes invoke multiple threads:
  • the master node may distribute the test task, the test cases, and the test code of the service scenario to multiple slave nodes, and each slave node invokes multiple threads.
  • Each thread on each slave node then executes the test task using the test cases and the test code of the service scenario, performing the performance test of the scenario.
  • When the business scenario in the test task is a single service scenario, the master node may distribute the test cases and test code of that scenario to multiple slave nodes.
  • Each slave node invokes multiple threads, and each thread uses the test cases and test code to performance-test the multiple interfaces of the single service scenario.
  • When testing the multiple interfaces of a single service scenario, a thread can operate the interfaces serially according to the business logic.
  • When an interface under test reaches a configured test exit condition, the thread exits the performance test of that interface and proceeds to the performance test of the next interface according to the business logic.
  • For example, if a single service scenario includes three interfaces, interface 1, interface 2, and interface 3, then according to the business logic of the scenario a thread first tests interface 1, then interface 2, and finally interface 3.
  • Testing the interfaces of a single service scenario serially in this way ensures the accuracy of the test data.
  • Each thread records its test data, which the slave node returns to the master node.
  • The master node receives the test data returned by the multiple slave nodes, aggregates it, and uses the statistics to generate the application's performance test report for that service scenario.
  • Further, if the single service scenario includes only one interface, each thread performance-tests that interface according to the test case and test code and exits the test when a configured test exit condition is reached.
  • The threads record their test data, which the slave nodes return to the master node.
  • The master node receives the test data returned by the multiple slave nodes, aggregates it, and uses the statistics to generate the application's performance test report for the scenario.
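  • A minimal sketch of such a worker thread, which exercises a scenario's interfaces serially and records per-interface data. The data structures here are assumptions made for the sketch:

```python
import time

def run_single_scenario(interfaces, record):
    """Worker-thread body: test each interface in business-logic order.

    `interfaces` is a list of (name, call, should_exit) tuples, where
    `call` invokes the interface under test and returns True when the
    expected value comes back, and `should_exit` reports whether a
    configured test exit condition has been reached for that interface."""
    for name, call, should_exit in interfaces:
        while not should_exit():
            start = time.monotonic()
            ok = call()                      # hit the interface under test
            elapsed = time.monotonic() - start
            record(name, ok, elapsed)        # slave node keeps this test data
        # exit condition reached: fall through to the next interface serially
```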
  • When the business scenario in the test task is a mixed service scenario, the master node distributes the test cases and test code of the mixed scenario to multiple slave nodes, and each slave node invokes multiple threads.
  • The master node can split the mixed service scenario into multiple single service scenarios and distribute them to the threads of multiple slave nodes.
  • The master node can distribute one single service scenario to one slave node, whose threads perform the corresponding performance test in the manner of a single service scenario, so that the performance test of the mixed scenario is completed through the cooperation of multiple slave nodes.
  • Alternatively, the master node can distribute every single service scenario to all slave nodes, so that each slave node performance-tests multiple single service scenarios and thereby executes the performance test of the whole mixed scenario.
  • When executing the performance test of a mixed service scenario, multiple slave nodes can test the single service scenarios in parallel, while the interfaces within each single service scenario are tested serially according to the business logic.
  • When the business scenario in the test task is a stability service scenario, its configuration information can be the same as that of a mixed service scenario,
  • except that the number of threads is smaller than the number required for the mixed scenario's performance test.
  • The performance test of a stability service scenario can therefore be executed in the same manner as that of a mixed service scenario.
  • Step 208: the master node receives the test data returned by the multiple slave nodes and generates a test report from the test data.
  • When the test task ends, each thread exits its performance test and the slave node records that thread's test data; the slave node can send the test data of its multiple threads to the master node. The test data includes thread concurrency, throughput, error rate, and server performance.
  • Thread concurrency refers to the number of threads executing at the same time.
  • Throughput refers to the number of responses of an interface within one second.
  • The error rate is the rate at which the interface's return value differs from the expected value within one second. For example, if the thread concurrency is 100, the expected return value is 0, and 3 of the calls return 1, the error rate is 3%.
  • Server performance refers to the performance metrics of the slave nodes, including CPU usage and memory usage in bytes.
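  • A small sketch of how a slave node might aggregate one second of recorded samples into these metrics; the sample layout is an assumption made for the sketch:

```python
def summarize(samples):
    """Aggregate one interface's samples from a one-second window.
    Each sample is a pair (ok, elapsed_s), where `ok` is True when the
    interface returned the expected value."""
    total = len(samples)                          # responses in the window
    errors = sum(1 for ok, _ in samples if not ok)
    return {
        "throughput": total,                      # responses per second
        "error_rate": errors / total if total else 0.0,
    }

# 100 concurrent calls, 3 unexpected return values -> 3% error rate.
assert summarize([(True, 0.01)] * 97 + [(False, 0.01)] * 3)["error_rate"] == 0.03
```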
  • The master node receives the test data returned by the multiple slave nodes and aggregates it to obtain the statistical results of the service scenario's performance test.
  • The master node uses the statistical results to generate the test report for the service scenario.
  • Further, after the performance tests of all business scenarios are completed, the master node can aggregate all of the test data.
  • The master node can use all of the test data to generate the test report of the application's server-side performance test.
  • In this embodiment, when the business scenario of an application needs to be performance-tested, the terminal can send the configuration information corresponding to the service scenario to the master node of the server cluster.
  • The master node acquires the corresponding test case according to the configuration information and generates the corresponding test code.
  • The master node distributes the test cases and test code to multiple slave nodes in parallel.
  • Each slave node invokes multiple threads that use the test code and test cases to performance-test the multiple interfaces of the business scenario.
  • Because the server-side performance test is carried out by a server cluster, the multiple interfaces of the service scenario can be tested simultaneously by multiple threads on different slave nodes.
  • When the performance test ends, each slave node records the test data of each of its threads and returns it to the master node.
  • The master node aggregates the test data returned by the multiple slave nodes and generates the corresponding test report. Because the test data comes from multiple threads on multiple nodes, it reflects the actual behavior of the scenario's performance test more comprehensively and accurately, effectively improving the accuracy of the test.
  • In one embodiment, the method further includes: when the error rate of a single interface reaches a first threshold during the performance test, the threads of the slave nodes exit the performance test of that interface, and the slave nodes record the corresponding test data; or the master node records, according to the thread step quantity and step frequency of a single interface, the number of threads performing the interface's performance test, and when the number of threads corresponding to the interface reaches a second threshold, the threads of the multiple slave nodes exit the performance test of the interface and the slave nodes record the corresponding test data.
  • In this embodiment, the terminal can configure, for each interface in the business scenario, the test exit conditions under which the performance test of that interface is exited.
  • There can be one or more exit conditions; when a thread of a slave node reaches one while testing an interface, the thread exits the performance test of that interface.
  • While a slave node's threads performance-test a single interface, the error rate of the interface can be recorded.
  • When the error rate reaches the first threshold in the configuration information, the test exit condition is deemed reached and the threads exit the performance test of the interface.
  • The master node can also record the number of threads executing the single interface's performance test.
  • Specifically, the configuration information includes the initial thread count, the step quantity, and the step frequency corresponding to the single interface.
  • The master node can distribute the test task of the business scenario to the threads of multiple slave nodes according to the step quantity and step frequency.
  • When the number of threads executing the single-interface test reaches the second threshold in the configuration information, that is, the thread ceiling, the test exit condition is deemed reached.
  • The master node then sends the slave nodes an instruction to exit the single-interface test, and according to that instruction the slave nodes cause their threads to exit the performance test of the interface.
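  • A minimal sketch of the master-side thread stepping, assuming `add_threads` distributes newly started threads across the slave nodes (an assumption; the source does not name such a function):

```python
import time

def ramp_up(initial, step_count, step_frequency_s, ceiling, add_threads):
    """Start `initial` threads, then add `step_count` more every
    `step_frequency_s` seconds until the thread ceiling (the second
    threshold) is reached, at which point the exit condition fires."""
    running = initial
    add_threads(initial)
    while running < ceiling:
        time.sleep(step_frequency_s)
        batch = min(step_count, ceiling - running)
        add_threads(batch)
        running += batch
    return running  # caller then instructs slave nodes to exit the test
```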
  • When the configuration information includes multiple test exit conditions,
  • all of them can be in effect simultaneously.
  • In the performance test of a single interface, the condition reached first is the exit condition of that interface's test; that is, whichever test exit condition is reached first is the one that is executed. Configuring test exit conditions allows the test of a single interface to be controlled effectively, which helps improve the accuracy of the scenario's performance test.
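  • A sketch of a first-reached-wins exit check combining the two conditions described above; the `stats` interface is an assumption made for the sketch:

```python
def make_exit_check(conditions, stats):
    """Return a predicate that becomes true as soon as ANY configured
    exit condition is met. `stats` is assumed to expose the interface's
    current error rate and the number of threads testing it."""
    def should_exit():
        max_err = conditions.get("max_error_rate")
        if max_err is not None and stats.error_rate() >= max_err:
            return True        # first threshold reached: error rate
        max_threads = conditions.get("max_threads")
        if max_threads is not None and stats.thread_count() >= max_threads:
            return True        # second threshold reached: thread ceiling
        return False
    return should_exit
```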
  • In one embodiment, the method further includes: when the business scenario is a mixed service scenario, splitting the mixed service scenario into multiple single service scenarios through the master node; acquiring, through the master node, the test cases and test code corresponding to the multiple single service scenarios; and distributing them to multiple slave nodes through the master node, so that the slave nodes perform the performance tests of the multiple single service scenarios.
  • In this embodiment, a mixed service scenario may include multiple single service scenarios.
  • The master node can split the mixed service scenario into multiple single service scenarios and distribute them to the threads of multiple slave nodes.
  • The number of free threads differs from one slave node to another in the server cluster. To ensure that the single service scenarios obtained by splitting the mixed scenario can be allocated smoothly to multiple slave nodes for performance testing, the master node can read the number of free threads of each slave node and allocate the performance tests of the multiple single service scenarios according to those numbers.
  • In one embodiment, distributing the test cases and test code of the multiple single service scenarios to multiple slave nodes through the master node, so that the slave nodes perform the performance tests, includes: acquiring, through the master node, the number of free threads of each slave node; and distributing, by the master node, the test cases and test code of the multiple single service scenarios to the corresponding slave nodes according to the numbers of free threads, so that each slave node performs the performance tests of multiple single service scenarios.
  • Free-thread counts can be pre-configured with corresponding weighting coefficients, and different counts can be assigned different coefficients. For example, a free-thread count of 70 to 80 may be assigned a weighting coefficient of 1, and a count of 60 to 70 a coefficient of 0.9.
  • The master node distributes the test cases and test code of the single service scenarios to the multiple slave nodes according to the weighting coefficients of their free threads. The higher the weighting coefficient, the more free threads the slave node has, and the more threads on that node receive the test cases and test code of single service scenarios.
  • Allocating the performance tests of the mixed service scenario across slave nodes by weighting coefficient keeps the workloads of the slave nodes as close to equal as possible, effectively achieving load balancing among the multiple slave nodes.
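  • A minimal sketch of such weighted allocation, mirroring the coefficient table above and splitting a scenario's thread budget in proportion to each node's coefficient (the exact allocation policy is not specified by the source):

```python
def weight_for(free_threads):
    """Map a slave node's free-thread count to a weighting coefficient,
    mirroring the example table above; the final tier is an assumption."""
    if free_threads >= 70:
        return 1.0
    if free_threads >= 60:
        return 0.9
    return 0.8  # assumed value for lower counts

def allocate_threads(total_threads, free_threads_by_node):
    """Split a single service scenario's thread budget across slave nodes
    in proportion to their weighting coefficients."""
    weights = {n: weight_for(f) for n, f in free_threads_by_node.items()}
    total_w = sum(weights.values())
    return {n: int(total_threads * w / total_w) for n, w in weights.items()}

# A node with 75 free threads (weight 1.0) receives a larger share than one
# with 65 free threads (weight 0.9); truncation may leave a few threads
# unassigned in this sketch.
print(allocate_threads(100, {"slave-1": 75, "slave-2": 65}))
```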
  • In one embodiment, distributing the test cases and test code of the multiple single service scenarios to the corresponding slave nodes according to the numbers of free threads includes: obtaining the weighting coefficient corresponding to each slave node's free-thread count; when the number of slave nodes is greater than the number of single service scenarios obtained by splitting, selecting, by the master node, one or more slave nodes for each single service scenario according to the number of interfaces of that scenario and the weighting coefficients; and distributing the test cases and test code of each single service scenario to the corresponding slave nodes.
  • The master node can distribute one single service scenario obtained by splitting the mixed scenario to one slave node, which invokes multiple threads to perform the corresponding performance test in the manner of a single service scenario.
  • When the number of slave nodes is greater than the number of single service scenarios obtained by splitting, the master node can also select one or more slave nodes for each single service scenario according to the scenario's interface count and the nodes' weighting coefficients,
  • distribute each scenario's test cases and test code to the selected slave nodes, and have the slave nodes use multiple free threads to perform the scenario's performance test.
  • Multiple slave nodes can test the single service scenarios in parallel while testing the interfaces of each scenario serially according to the business logic, so that the performance test of the mixed service scenario is completed through the cooperation of multiple slave nodes.
  • In one embodiment, the master node distributes the test cases and test code of the multiple single service scenarios to all slave nodes according to their weighting coefficients, and each slave node invokes free threads to performance-test multiple single service scenarios.
  • This allows every slave node to execute the performance test of the mixed service scenario.
  • When executing the performance test of the mixed service scenario, the slave nodes can test the single service scenarios in parallel and the interfaces of each scenario serially according to the business logic. Splitting the mixed scenario into single scenarios and testing them in parallel effectively improves test efficiency, while serial testing of each scenario's interfaces ensures the accuracy of the performance test.
  • Further, when the business scenario is a stability service scenario, the thread ceiling configured in the test exit condition may be a preset proportion of the mixed scenario's thread ceiling, for example 50%.
  • The performance test of a stability service scenario can proceed in the manner of a mixed service scenario; when a test exit condition is reached, the threads exit the corresponding test.
  • In one embodiment, the method further includes: a newly added slave node sending its node identifier to the master node; the master node sending an initialization command to the newly added slave node according to that identifier; and the newly added slave node executing the initialization command and, once initialization is complete, receiving the test cases and test code distributed by the master node and performing the performance test corresponding to the business scenario.
  • In this embodiment, the master node and the slave nodes in the server cluster can communicate through a message queue.
  • When the slave nodes in the cluster cannot meet the demands of the current performance test, the server cluster can be expanded. Specifically, testers can install the test tool on a new slave node.
  • The new slave node sends its node identifier to the master node through the message queue.
  • The master node receives the new node identifier, records it, and sends an initialization command to the new node through the message queue.
  • The new node receives the initialization command and performs the initialization operation accordingly. After completing initialization, it can act as a slave node, using the test tool to invoke threads and receive test tasks distributed by the master node, thereby performing the performance tests corresponding to the application's scenarios.
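  • A sketch of this registration handshake, using an in-process `queue.Queue` to stand in for the message queue (the source does not name a broker) and with assumed message shapes:

```python
import queue

to_master = queue.Queue()    # messages from nodes to the master node
to_new_node = queue.Queue()  # messages from the master to the new node

def new_slave_node(node_id):
    """New slave node: register, then wait for the initialization command."""
    to_master.put({"type": "register", "node_id": node_id})
    cmd = to_new_node.get()
    if cmd["type"] == "initialize":
        # perform initialization here; afterwards the node can receive
        # test tasks distributed by the master node
        pass

def master_step(known_nodes):
    """Master node: record the new identifier and send the init command."""
    msg = to_master.get()
    if msg["type"] == "register":
        known_nodes.add(msg["node_id"])
        to_new_node.put({"type": "initialize"})
```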
  • In one embodiment, as shown in FIG. 3, an application performance testing apparatus is provided, including a master node 302 and a slave node 304, wherein:
  • the master node 302 is configured to receive a performance test request for a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario; to acquire the corresponding test case according to the configuration information and generate the corresponding test code from the configuration information; and to distribute the test code and test cases to multiple slave nodes 304;
  • the slave node 304 is configured to run multiple threads, the threads using the test code and test cases to perform performance tests on the multiple interfaces of the service scenario; and
  • the master node 302 is further configured to receive the test data returned by the multiple slave nodes 304 and to generate a test report from the test data.
  • In one embodiment, when the error rate of a single interface reaches a first threshold during the performance test, the threads of the slave node 304 exit the performance test of that interface, and the slave node 304 also records the corresponding test data.
  • In one embodiment, the master node 302 records, according to the thread step quantity and step frequency of a single interface, the number of threads performing that interface's performance test; when the number of threads corresponding to the interface reaches a second threshold, the threads of the multiple slave nodes 304 exit the performance test of the interface, and the slave nodes 304 record the corresponding test data.
  • In one embodiment, the service scenarios include mixed service scenarios and single service scenarios.
  • When the business scenario is a mixed service scenario, the master node 302 is further configured to split the mixed service scenario into multiple single service scenarios; to acquire the test cases and test code corresponding to the multiple single service scenarios; and to distribute them to multiple slave nodes 304. The slave nodes 304 are further configured to invoke multiple threads to perform the performance tests of the multiple single service scenarios.
  • In one embodiment, the master node 302 is further configured to acquire the number of free threads of each slave node 304 and to distribute the test cases and test code of the multiple single service scenarios to the corresponding slave nodes 304 according to those numbers; the slave nodes 304 are further configured to invoke multiple free threads to perform the performance tests of the single service scenarios.
  • In one embodiment, the master node 302 is further configured to obtain the weighting coefficient corresponding to each slave node's free-thread count; when the number of slave nodes is greater than the number of single service scenarios obtained by splitting, to select one or more slave nodes for each single service scenario according to the scenario's interface count and the weighting coefficients; and to distribute the test cases and test code of each single service scenario to the corresponding slave nodes 304.
  • In one embodiment, the master node 302 is further configured to distribute the test cases and test code of the multiple single service scenarios to all slave nodes 304 according to the weighting coefficients corresponding to the slave nodes 304.
  • In one embodiment, the master node 302 is further configured to receive the node identifier sent by a newly added slave node 304 and to send the newly added slave node 304 an initialization command.
  • The newly added slave node 304 executes the initialization command and, after initialization is complete,
  • receives the test cases and test code distributed by the master node 302 and performs the performance test corresponding to the business scenario.
  • In one embodiment, a server cluster is provided, including a master node and multiple slave nodes.
  • The master node and the slave nodes may be independent servers.
  • As shown in FIG. 4, the master node includes a processor, a memory, and a network interface connected by a system bus.
  • The processor of the master node provides computing and control capabilities.
  • The memory of the master node includes a non-volatile storage medium and an internal memory.
  • The non-volatile storage medium stores an operating system and computer-readable instructions.
  • The internal memory of the master node provides an environment for running the operating system and the computer-readable instructions in the non-volatile storage medium.
  • The network interface of the master node communicates with the external terminal and the multiple slave nodes over a network connection, for example receiving the configuration information of a service scenario sent by the terminal and distributing to the multiple slave nodes the test cases and test code required for the scenario's performance test.
  • In cooperation with the terminal and the slave nodes, the computer-readable instructions of the master node, when executed by the processor, implement a performance testing method for an application. Those skilled in the art will understand that the structure shown in FIG. 4 is only a block diagram of the parts relevant to the present solution and does not limit the server to which the solution is applied; a specific server may include more or fewer components than shown, combine certain components, or arrange the components differently.
  • A computer device includes a memory and one or more processors, the memory storing computer-readable instructions that, when executed by the processors, cause the one or more processors to perform the following steps:
  • receiving, through the master node, a performance test request for a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario;
  • acquiring, by the master node, the corresponding test case according to the configuration information and generating the corresponding test code from the configuration information;
  • distributing, through the master node, the test code and test cases to threads running on multiple slave nodes, so that the threads of each slave node use the test code and test cases to perform performance tests on the multiple interfaces of the service scenario; and
  • receiving, through the master node, the test data returned by the multiple slave nodes, and generating a test report from the test data.
  • In one embodiment, the computer-readable instructions, when executed by the processors, cause the one or more processors to further perform the following steps:
  • when the error rate of a single interface reaches a first threshold during the performance test, the threads of the slave nodes exit the performance test of that interface, and the slave nodes record the corresponding test data; or
  • the master node records, according to the thread step quantity and step frequency of a single interface, the number of threads performing that interface's performance test;
  • when the number of threads corresponding to the interface reaches a second threshold, the threads of the multiple slave nodes exit the performance test of the interface,
  • and the slave nodes record the corresponding test data.
  • In one embodiment, the computer-readable instructions, when executed by the processors, cause the one or more processors to further perform the following steps:
  • when the business scenario is a mixed service scenario, splitting the mixed service scenario into multiple single service scenarios through the master node;
  • acquiring, through the master node, the test cases and test code corresponding to the multiple single service scenarios; and
  • distributing, through the master node, the test cases and test code of the multiple single service scenarios to the threads of multiple slave nodes for performance testing.
  • In one embodiment, the computer-readable instructions, when executed by the processors, cause the one or more processors to further perform the following steps:
  • acquiring, through the master node, the number of free threads of each slave node; and
  • distributing, by the master node, the test cases and test code of the multiple single service scenarios to the corresponding slave nodes according to the numbers of free threads, so that the performance test of each single service scenario is performed by multiple slave nodes.
  • In one embodiment, the computer-readable instructions, when executed by the processors, cause the one or more processors to further perform the following steps:
  • obtaining the weighting coefficient corresponding to each slave node's free-thread count;
  • when the number of slave nodes is greater than the number of single service scenarios obtained by splitting, selecting, by the master node, one or more slave nodes for each single service scenario according to the scenario's interface count and the weighting coefficients; and
  • distributing each single service scenario's test cases and test code to the corresponding slave nodes.
  • In one embodiment, the computer-readable instructions, when executed by the processors, cause the one or more processors to further perform the following steps:
  • a newly added slave node sending its node identifier to the master node;
  • the master node sending an initialization command to the newly added slave node according to that identifier; and
  • the newly added slave node executing the initialization command and, after initialization is complete, receiving the test cases and test code distributed by the master node and performing the performance test corresponding to the service scenario.
  • In one embodiment, one or more non-volatile readable storage media storing computer-readable instructions are provided, the computer-readable instructions, when executed by one or more processors, causing the one or more processors to perform the following steps:
  • receiving, through the master node, a performance test request for a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario;
  • acquiring, by the master node, the corresponding test case according to the configuration information and generating the corresponding test code from the configuration information;
  • distributing, through the master node, the test code and test cases to threads running on multiple slave nodes, so that the threads of each slave node use the test code and test cases to perform performance tests on the multiple interfaces of the service scenario; and
  • receiving, through the master node, the test data returned by the multiple slave nodes, and generating a test report from the test data.
  • In one embodiment, the computer-readable instructions, when executed by one or more processors, cause the one or more processors to further perform the following steps:
  • when the error rate of a single interface reaches a first threshold during the performance test, the threads of the slave nodes exit the performance test of that interface, and the slave nodes record the corresponding test data; or
  • the master node records, according to the thread step quantity and step frequency of a single interface, the number of threads performing that interface's performance test;
  • when the number of threads corresponding to the interface reaches a second threshold, the threads of the multiple slave nodes exit the performance test of the interface,
  • and the slave nodes record the corresponding test data.
  • In one embodiment, the business scenarios include mixed service scenarios and single service scenarios; the computer-readable instructions, when executed by one or more processors, cause the one or more processors to further perform the following steps:
  • when the business scenario is a mixed service scenario, splitting the mixed service scenario into multiple single service scenarios through the master node;
  • acquiring, through the master node, the test cases and test code corresponding to the multiple single service scenarios; and
  • distributing, through the master node, the test cases and test code of the multiple single service scenarios to the threads of multiple slave nodes for performance testing.
  • In one embodiment, the computer-readable instructions, when executed by one or more processors, cause the one or more processors to further perform the following steps:
  • acquiring, through the master node, the number of free threads of each slave node; and
  • distributing, by the master node, the test cases and test code of the multiple single service scenarios to the corresponding slave nodes according to the numbers of free threads, so that the performance test of each single service scenario is performed by multiple slave nodes.
  • In one embodiment, the computer-readable instructions, when executed by one or more processors, cause the one or more processors to further perform the following steps:
  • obtaining the weighting coefficient corresponding to each slave node's free-thread count;
  • when the number of slave nodes is greater than the number of single service scenarios obtained by splitting, selecting, by the master node, one or more slave nodes for each single service scenario according to the scenario's interface count and the weighting coefficients; and
  • distributing each single service scenario's test cases and test code to the corresponding slave nodes.
  • In one embodiment, the computer-readable instructions, when executed by one or more processors, cause the one or more processors to further perform the following steps:
  • a newly added slave node sending its node identifier to the master node;
  • the master node sending an initialization command to the newly added slave node according to that identifier; and
  • the newly added slave node executing the initialization command and, after initialization is complete, receiving the test cases and test code distributed by the master node and performing the performance test corresponding to the service scenario.
  • A person of ordinary skill in the art will understand that all or part of the processes in the above method embodiments can be implemented by computer-readable instructions instructing the relevant hardware. The instructions may be stored in a non-volatile computer-readable storage medium and, when executed, may include the flows of the embodiments of the methods described above.
  • The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A performance testing method for an application, including: receiving, through a master node, a performance test request for a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario (202); acquiring, by the master node, the corresponding test case according to the configuration information and generating the corresponding test code from the configuration information (204); distributing, through the master node, the test code and test case to multiple slave nodes, so that the slave nodes invoke multiple threads to perform performance tests on the multiple interfaces of the service scenario according to the test code and test case (206); and receiving, through the master node, the test data returned by the multiple slave nodes, and generating a test report from the test data (208).

Description

Performance testing method and apparatus for an application, computer device, and storage medium
This application claims priority to Chinese patent application No. 2017107431454, filed with the Chinese Patent Office on August 25, 2017 and entitled "Performance testing method and apparatus for an application, computer device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer technology, and in particular to a performance testing method, apparatus, computer device, and storage medium for an application.
Background
An application needs to be performance-tested before release to verify whether it meets the expected performance targets. Performance testing includes client-side performance testing and server-side performance testing. In server-side performance testing, traditional test platforms only support testing a single interface of the application, whereas the performance test of a business scenario usually requires testing multiple interfaces at the same time; traditional test platforms cannot meet this requirement.
Summary
According to various embodiments disclosed in this application, a performance testing method, apparatus, computer device, and storage medium for an application are provided.
A performance testing method for an application includes:
receiving, through a master node, a performance test request for a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario;
acquiring, by the master node, the corresponding test case according to the configuration information and generating the corresponding test code from the configuration information;
distributing, through the master node, the test code and test case to multiple slave nodes, so that the slave nodes invoke multiple threads to perform performance tests on the multiple interfaces of the service scenario according to the test code and test case; and
receiving, through the master node, the test data returned by the multiple slave nodes, and generating a test report from the test data.
A performance testing apparatus for an application includes:
a master node configured to receive a performance test request for a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario; to acquire the corresponding test case according to the configuration information and generate the corresponding test code from the configuration information; and to distribute the test code and test cases to multiple slave nodes;
a slave node configured to run multiple threads, the threads using the test code and test cases to perform performance tests on the multiple interfaces of the service scenario; and
the master node being further configured to receive the test data returned by the multiple slave nodes and to generate a test report from the test data.
A computer device includes a memory and one or more processors, the memory storing computer-readable instructions that, when executed by the processors, cause the one or more processors to perform the following steps:
receiving, through a master node, a performance test request for a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario;
acquiring, by the master node, the corresponding test case according to the configuration information and generating the corresponding test code from the configuration information;
distributing, through the master node, the test code and test cases to multiple slave nodes, so that the slave nodes invoke multiple threads to perform performance tests on the multiple interfaces of the service scenario according to the test code and test cases; and
receiving, through the master node, the test data returned by the multiple slave nodes, and generating a test report from the test data.
One or more non-volatile readable storage media storing computer-readable instructions are provided, the computer-readable instructions, when executed by one or more processors, causing the one or more processors to perform the following steps:
receiving, through a master node, a performance test request for a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario;
acquiring, by the master node, the corresponding test case according to the configuration information and generating the corresponding test code from the configuration information;
distributing, through the master node, the test code and test cases to multiple slave nodes, so that the slave nodes invoke multiple threads to perform performance tests on the multiple interfaces of the service scenario according to the test code and test cases; and
receiving, through the master node, the test data returned by the multiple slave nodes, and generating a test report from the test data.
The details of one or more embodiments of this application are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of this application will become apparent from the description, the drawings, and the claims.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a diagram of an application scenario of a performance testing method for an application in one embodiment;
FIG. 2 is a flowchart of a performance testing method for an application in one embodiment;
FIG. 3 is a block diagram of a performance testing apparatus for an application in one embodiment;
FIG. 4 is a block diagram of a master node in one embodiment.
Detailed Description
To make the objects, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain this application and not to limit it.
The performance testing method provided by this application can be applied to the application scenario shown in FIG. 1. The terminal 102 communicates with the server cluster 104 over a network. The server cluster 104 includes one master node and multiple slave nodes. The master node can communicate with the terminal 102. A tester accesses the master node through the terminal 102, and the master node returns to the terminal 102 a service scenario management page for application performance testing. On this page, the tester can configure the test requirements of different business scenarios. When the master node obtains the configuration information corresponding to a service scenario, it acquires the corresponding test case according to the configuration information and generates the corresponding test code from the configuration information. The master node can distribute the test cases and test code of the service scenario to multiple slave nodes, and the slave nodes invoke multiple threads that use the test cases and test code to performance-test the multiple interfaces of the scenario. When a thread exits the test, the slave node records the test data and returns it to the master node. The master node aggregates the test data returned by the multiple slave nodes and generates the corresponding performance test report.
In one embodiment, as shown in FIG. 2, a performance testing method for an application is provided. It should be understood that although the steps in the flowchart of FIG. 2 are displayed in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIG. 2 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and their execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps. The method is described below as applied to a server cluster, and includes the following steps.
Step 202: the master node receives a performance test request for a service scenario sent by the terminal, the performance test request carrying configuration information corresponding to the service scenario.
Step 204: the master node acquires the corresponding test case according to the configuration information and generates the corresponding test code from the configuration information.
The server cluster includes multiple nodes: one master node and multiple slave nodes. A test tool for performance-testing applications is installed on both the master node and the slave nodes. The master node controls the multiple slave nodes. The master node can communicate with the terminal. Specifically, a tester accesses the master node through the terminal, and the master node returns to the terminal the service scenario management page for application performance testing. On this page, the tester can use the terminal to select different service scenarios and configure the test requirements of each.
Service scenarios include single service scenarios, mixed service scenarios, stability service scenarios, and the like. A single service scenario is supported by one or more interfaces. On the management page of a single service scenario, the service scenario name, the name of each interface, the test cases corresponding to each interface, the test exit conditions, and the like can be configured. A mixed service scenario can include multiple single service scenarios; on its management page, the test requirements of each constituent single service scenario can be configured. For a stability service scenario, the test requirements can be configured on the basis of a mixed service scenario.
Taking a single service scenario as an example, on the service scenario management page the tester can use the terminal to enter the service scenario name and the corresponding interface names. The tester can also use the terminal to select the corresponding test case on the page according to the interface name; for example, the interface name may be "query asset interface" and the test case name "query activity list". The tester can also configure the test exit conditions of an interface on the page through the terminal, for example a response time of 100 milliseconds, an error rate of 0.1%, and the like.
The terminal sends the configuration information corresponding to the service scenario to the master node. The configuration information includes the service scenario name, the interface names, the test case names corresponding to the interface names, the test exit conditions corresponding to the interface names, and the like.
When the master node receives the configuration information corresponding to a single service scenario, it selects the corresponding test case according to the test case name in the configuration information, obtains the assembly rule of the test code for each single interface according to the interface name in the configuration information, and obtains the assembly rule of the test code for the single service scenario according to the service scenario name in the configuration information. Using the assembly rule of the test code for a single interface, the master node fetches the corresponding program modules from the test code library and assembles them according to the rule, generating the test code for that interface. Using the assembly rule of the test code for the single service scenario, the master node assembles the test code of the multiple interfaces into the test code for the single service scenario. Further, a single service scenario may have only one interface, in which case the test code of that interface can be regarded as the test code of the single service scenario.
When the master node receives the configuration information corresponding to a mixed service scenario, the test code of each constituent single service scenario is assembled in the manner described above. The master node obtains the code assembly rule corresponding to the mixed service scenario and uses it to assemble the test code of the multiple single service scenarios into the test code for the mixed service scenario.
When the master node receives the configuration information corresponding to a stability service scenario, it can generate the corresponding test code in the same manner as for a mixed service scenario.
Step 206: the master node distributes the test code and test cases to multiple slave nodes, so that the slave nodes invoke multiple threads to perform performance tests on the multiple interfaces of the service scenario according to the test code and test cases.
Step 208: the master node receives the test data returned by the multiple slave nodes and generates a test report from the test data.
The server cluster may be a distributed cluster. When the master node obtains the configuration information corresponding to a service scenario, it can generate a corresponding test task. The test task includes performance tests of the multiple interfaces in the business scenario; the performance test of each interface is executed by multiple threads. The interfaces covered by the test task can also be called the interfaces under test.
The master node can distribute the test task to multiple slave nodes for execution. Specifically, the slave nodes invoke multiple threads: the master node may distribute the test task, the test cases, and the test code of the service scenario to multiple slave nodes, and each slave node invokes multiple threads, so that each thread on each slave node executes the test task with the test cases and the scenario's test code, performing the performance test of the business scenario.
When the business scenario in the test task is a single service scenario, the master node can distribute the scenario's test cases and test code to multiple slave nodes. Each slave node invokes multiple threads accordingly, and each thread uses the test cases and test code to performance-test the multiple interfaces of the single service scenario. When testing the multiple interfaces of a single service scenario, a thread can operate the interfaces serially according to the business logic. When an interface under test reaches a configured test exit condition, the thread exits the performance test of that interface and proceeds to the next interface according to the business logic. For example, if a single service scenario includes three interfaces, interface 1, interface 2, and interface 3, then according to the scenario's business logic a thread first tests interface 1, then interface 2, and finally interface 3. Testing the interfaces of a single service scenario serially in this way ensures the accuracy of the test data. Each thread records its test data, which the slave node returns to the master node. The master node receives the test data returned by the multiple slave nodes, aggregates it, and uses the statistics to generate the application's performance test report for that scenario.
Further, if the single service scenario includes only one interface, each thread performance-tests that interface according to the test case and test code and exits when a configured test exit condition is reached. The threads record their test data, which the slave nodes return to the master node; the master node aggregates the data and generates the application's performance test report for the scenario.
When the business scenario in the test task is a mixed service scenario, the master node distributes the mixed scenario's test cases and test code to multiple slave nodes, and each slave node invokes multiple threads. The master node can split the mixed service scenario into multiple single service scenarios and distribute them to the threads of multiple slave nodes. The master node can distribute one single service scenario to one slave node, whose threads perform the corresponding performance test in the manner of a single service scenario, so that the performance test of the mixed scenario is completed through the cooperation of multiple slave nodes. Alternatively, the master node can distribute every single service scenario to all slave nodes, so that each slave node performance-tests multiple single service scenarios and thereby executes the performance test of the whole mixed scenario. When executing the performance test of a mixed service scenario, multiple slave nodes can test the single service scenarios in parallel, while the interfaces within each single service scenario are tested serially according to the business logic.
When the business scenario in the test task is a stability service scenario, its configuration information can be the same as that of a mixed service scenario, except that the number of threads is smaller than the number required for the mixed scenario's performance test. The performance test of a stability service scenario can then be executed in the manner of a mixed service scenario.
Step 208: the master node receives the test data returned by the multiple slave nodes and generates a test report from the test data.
When the test task ends, each thread exits its performance test and the slave node records that thread's test data. The slave node can send the test data of its multiple threads to the master node. The test data includes thread concurrency, throughput, error rate, and server performance. Thread concurrency refers to the number of threads executing at the same time. Throughput refers to the number of responses of an interface within one second. The error rate is the rate at which the interface's return value differs from the expected value within one second; for example, if the thread concurrency is 100, the expected return value is 0, and 3 of the calls return 1, the error rate is 3%. Server performance refers to the performance metrics of the slave nodes, including CPU usage and memory usage in bytes.
The master node receives the test data returned by the multiple slave nodes and aggregates it to obtain the statistical results of the scenario's performance test. The master node uses the statistical results to generate the test report for the scenario.
Further, after the performance tests of all business scenarios are completed, the master node can aggregate all of the test data and use it to generate the test report of the application's server-side performance test.
In this embodiment, when the business scenario of an application needs to be performance-tested, the terminal can send the configuration information corresponding to the scenario to the master node of the server cluster. The master node acquires the corresponding test case according to the configuration information and generates the corresponding test code. The master node distributes the test cases and test code to multiple slave nodes in parallel. Each slave node invokes multiple threads that use the test code and test cases to performance-test the multiple interfaces of the scenario. Because the server-side performance test is carried out by a server cluster, the multiple interfaces of the service scenario can be tested simultaneously by multiple threads on different slave nodes. When the scenario's performance test ends, each slave node records the test data of each of its threads and returns it to the master node. The master node aggregates the test data returned by the multiple slave nodes and generates the corresponding test report. Because the test data comes from multiple threads on multiple nodes, it reflects the actual behavior of the scenario's performance test more comprehensively and accurately, effectively improving the accuracy of the test.
In one embodiment, the method further includes: when the error rate of a single interface reaches a first threshold during the performance test, the threads of the slave nodes exit the performance test of that interface, and the slave nodes record the corresponding test data; or the master node records, according to the thread step quantity and step frequency of a single interface, the number of threads performing the interface's performance test, and when the number of threads corresponding to the interface reaches a second threshold, the threads of the multiple slave nodes exit the interface's performance test and the slave nodes record the corresponding test data.
In this embodiment, the terminal can configure, for each interface in the business scenario, the test exit conditions for exiting its performance test. There can be one or more exit conditions. When a thread of a slave node performance-tests an interface of the business scenario and a test exit condition is reached, the thread exits the performance test of that interface.
While a slave node's threads performance-test a single interface of the business scenario, the interface's error rate can be recorded; when the error rate reaches the first threshold in the configuration information, the test exit condition is deemed reached and the threads exit the interface's performance test.
When slave nodes invoke threads to performance-test a single interface of the business scenario, the master node can also record the number of threads executing that interface's performance test. Specifically, the configuration information includes the initial thread count, step quantity, and step frequency corresponding to the single interface. The master node can distribute the scenario's test task to the threads of multiple slave nodes according to the step quantity and step frequency. When the number of threads executing the single-interface test reaches the second threshold in the configuration information, that is, the thread ceiling, the test exit condition is deemed reached. The master node sends the slave nodes an instruction to exit the single-interface test, and according to that instruction the slave nodes cause their threads to exit the interface's performance test.
When the configuration information includes multiple test exit conditions, all of them can be in effect. In the performance test of a single interface, the condition reached first is the exit condition of that interface's test; that is, whichever test exit condition is reached first is the one that is executed. Configuring test exit conditions allows the test of a single interface to be controlled effectively, which helps improve the accuracy of the scenario's performance test.
In one embodiment, the method further includes: when the business scenario is a mixed service scenario, splitting the mixed service scenario into multiple single service scenarios through the master node; acquiring, through the master node, the test cases and test code corresponding to the multiple single service scenarios; and distributing them to multiple slave nodes through the master node, so that the slave nodes perform the performance tests of the multiple single service scenarios.
In this embodiment, a mixed service scenario may include multiple single service scenarios. The master node can split the mixed service scenario into multiple single service scenarios and distribute them to the threads of multiple slave nodes.
The number of free threads differs from one slave node to another in the server cluster. To ensure that the single service scenarios obtained by splitting the mixed scenario can be allocated smoothly to multiple slave nodes for performance testing, the master node can read the number of free threads of each slave node and allocate the performance tests of the multiple single service scenarios according to those numbers.
In one embodiment, distributing the test cases and test code of the multiple single service scenarios to multiple slave nodes through the master node, so that the slave nodes perform the performance tests, includes: acquiring, through the master node, the number of free threads of each slave node; and distributing, by the master node, the test cases and test code of the multiple single service scenarios to the corresponding slave nodes according to the numbers of free threads, so that each slave node performs the performance tests of multiple single service scenarios.
Free-thread counts can be pre-configured with corresponding weighting coefficients, and different counts can be assigned different coefficients. For example, a free-thread count of 70 to 80 may be assigned a weighting coefficient of 1, and a count of 60 to 70 a coefficient of 0.9. The master node distributes the test cases and test code of the single service scenarios to the multiple slave nodes according to the weighting coefficients of their free threads. The higher the weighting coefficient, the more free threads the slave node has, and the more threads on that node receive the test cases and test code of single service scenarios. Allocating the performance tests of the mixed service scenario across slave nodes by weighting coefficient keeps the workloads of the slave nodes as close to equal as possible, effectively achieving load balancing among them.
In one embodiment, distributing the test cases and test code of the multiple single service scenarios to the corresponding slave nodes according to the numbers of free threads includes: obtaining the weighting coefficient corresponding to each slave node's free-thread count; when the number of slave nodes is greater than the number of single service scenarios obtained by splitting, selecting, by the master node, one or more slave nodes for each single service scenario according to the scenario's interface count and the weighting coefficients; and distributing each single service scenario's test cases and test code to the corresponding slave nodes.
The master node can distribute one single service scenario obtained by splitting the mixed scenario to one slave node, which invokes multiple threads to perform the corresponding performance test in the manner of a single service scenario. When the number of slave nodes is greater than the number of single service scenarios obtained by splitting, the master node can also select one or more slave nodes for each single service scenario according to the scenario's interface count and the nodes' weighting coefficients, distribute each scenario's test cases and test code to the selected slave nodes, and have the slave nodes use multiple free threads to perform the scenario's performance test. Multiple slave nodes can test the single service scenarios in parallel while testing each scenario's interfaces serially according to the business logic, so that the performance test of the mixed service scenario is completed through the cooperation of multiple slave nodes.
In one embodiment, the master node distributes the test cases and test code of the multiple single service scenarios to all slave nodes according to their weighting coefficients, and each slave node invokes free threads to performance-test multiple single service scenarios. This allows every slave node to execute the performance test of the mixed service scenario. When doing so, the slave nodes can test the single service scenarios in parallel and each scenario's interfaces serially according to the business logic.
Splitting the mixed service scenario into multiple single service scenarios and testing them in parallel effectively improves test efficiency, while testing each scenario's interfaces serially according to the business logic ensures the accuracy of the performance test.
Further, when the business scenario is a stability service scenario, the thread ceiling configured in the test exit condition may be a preset proportion of the mixed scenario's thread ceiling, for example 50%. The performance test of a stability service scenario can proceed in the manner of a mixed service scenario; when a test exit condition is reached, the threads exit the corresponding test.
In one embodiment, the method further includes: a newly added slave node sending its node identifier to the master node; the master node sending an initialization command to the newly added slave node according to that identifier; and the newly added slave node executing the initialization command and, once initialization is complete, receiving the test cases and test code distributed by the master node and performing the performance test corresponding to the business scenario.
In this embodiment, the master node and the slave nodes in the server cluster can communicate through a message queue. When the slave nodes in the cluster cannot meet the demands of the current performance test, the cluster can be expanded. Specifically, testers can install the test tool on a new slave node. The new slave node sends its node identifier to the master node through the message queue. The master node receives the identifier, records it, and sends an initialization command to the new node through the message queue. The new node receives the command and performs the initialization operation accordingly. After completing initialization, it can act as a slave node, using the test tool to invoke threads and receive the test tasks distributed by the master node, thereby performing the performance tests corresponding to the application's scenarios.
In one embodiment, as shown in FIG. 3, a performance testing apparatus for an application is provided, including a master node 302 and a slave node 304, wherein:
the master node 302 is configured to receive a performance test request for a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario; to acquire the corresponding test case according to the configuration information and generate the corresponding test code from the configuration information; and to distribute the test code and test cases to multiple slave nodes 304;
the slave node 304 is configured to run multiple threads, the threads using the test code and test cases to perform performance tests on the multiple interfaces of the service scenario; and
the master node 302 is further configured to receive the test data returned by the multiple slave nodes 304 and to generate a test report from the test data.
In one embodiment, when the error rate of a single interface reaches a first threshold during the performance test, the threads of the slave node 304 exit the performance test of that interface, and the slave node 304 is further configured to record the corresponding test data.
In one embodiment, the master node 302 records, according to the thread step quantity and step frequency of a single interface, the number of threads performing that interface's performance test; when the number of threads corresponding to the interface reaches a second threshold, the threads of the multiple slave nodes 304 exit the interface's performance test and the slave nodes 304 record the corresponding test data.
In one embodiment, the business scenarios include mixed service scenarios and single service scenarios. When the business scenario is a mixed service scenario, the master node 302 is further configured to split the mixed service scenario into multiple single service scenarios, to acquire the test cases and test code corresponding to the multiple single service scenarios, and to distribute them to multiple slave nodes 304; the slave nodes 304 are further configured to invoke multiple threads to perform the performance tests of the multiple single service scenarios.
In one embodiment, the master node 302 is further configured to acquire the number of free threads of each slave node 304 and to distribute the test cases and test code of the multiple single service scenarios to the corresponding slave nodes 304 according to those numbers; the slave nodes 304 are further configured to invoke multiple free threads to perform the performance tests of the single service scenarios.
In one embodiment, the master node 302 is further configured to obtain the weighting coefficient corresponding to each slave node's free-thread count; when the number of slave nodes is greater than the number of single service scenarios obtained by splitting, to select one or more slave nodes for each single service scenario according to the scenario's interface count and the weighting coefficients; and to distribute each scenario's test cases and test code to the corresponding slave nodes 304.
In one embodiment, the master node 302 is further configured to distribute the test cases and test code of the multiple single service scenarios to all slave nodes 304 according to the weighting coefficients corresponding to the slave nodes 304.
In one embodiment, the master node 302 is further configured to receive the node identifier sent by a newly added slave node 304 and to send it an initialization command; the newly added slave node 304 executes the initialization command and, after initialization is complete, receives the test cases and test code distributed by the master node 302 and performs the performance test corresponding to the business scenario.
In one embodiment, a server cluster is provided, including a master node and multiple slave nodes. The master node and the slave nodes may be independent servers. As shown in FIG. 4, the master node includes a processor, a memory, and a network interface connected by a system bus. The processor of the master node provides computing and control capabilities. The memory of the master node includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and computer-readable instructions. The internal memory provides an environment for running the operating system and the computer-readable instructions in the non-volatile storage medium. The network interface of the master node communicates with the external terminal and the multiple slave nodes over a network connection, for example receiving the configuration information of a service scenario sent by the terminal and distributing to the multiple slave nodes the test cases and test code required for the scenario's performance test. In cooperation with the terminal and the slave nodes, the computer-readable instructions of the master node, when executed by the processor, implement a performance testing method for an application. Those skilled in the art will understand that the structure shown in FIG. 4 is only a block diagram of the parts relevant to the solution of this application and does not limit the server to which the solution is applied; a specific server may include more or fewer components than shown, combine certain components, or arrange the components differently.
A computer device includes a memory and one or more processors, the memory storing computer-readable instructions that, when executed by the processors, cause the one or more processors to perform the following steps:
receiving, through the master node, a performance test request for a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario;
acquiring, by the master node, the corresponding test case according to the configuration information and generating the corresponding test code from the configuration information;
distributing, through the master node, the test code and test cases to threads running on multiple slave nodes, so that the threads of each slave node use the test code and test cases to perform performance tests on the multiple interfaces of the service scenario; and
receiving, through the master node, the test data returned by the multiple slave nodes, and generating a test report from the test data.
In one embodiment, the computer-readable instructions, when executed by the processors, cause the one or more processors to further perform the following steps:
when the error rate of a single interface reaches a first threshold during the performance test, the threads of the slave nodes exit the performance test of that interface, and the slave nodes record the corresponding test data; or
the master node records, according to the thread step quantity and step frequency of a single interface, the number of threads performing that interface's performance test; when the number of threads corresponding to the interface reaches a second threshold, the threads of the multiple slave nodes exit the interface's performance test, and the slave nodes record the corresponding test data.
In one embodiment, the computer-readable instructions, when executed by the processors, cause the one or more processors to further perform the following steps:
when the business scenario is a mixed service scenario, splitting the mixed service scenario into multiple single service scenarios through the master node;
acquiring, through the master node, the test cases and test code corresponding to the multiple single service scenarios; and
distributing, through the master node, the test cases and test code of the multiple single service scenarios to the threads of multiple slave nodes for performance testing.
In one embodiment, the computer-readable instructions, when executed by the processors, cause the one or more processors to further perform the following steps:
acquiring, through the master node, the number of free threads of each slave node; and
distributing, by the master node, the test cases and test code of the multiple single service scenarios to the corresponding slave nodes according to the numbers of free threads, so that the performance test of each single service scenario is performed by multiple slave nodes.
In one embodiment, the computer-readable instructions, when executed by the processors, cause the one or more processors to further perform the following steps:
obtaining the weighting coefficient corresponding to each slave node's free-thread count;
when the number of slave nodes is greater than the number of single service scenarios obtained by splitting, selecting, by the master node, one or more slave nodes for each single service scenario according to the scenario's interface count and the weighting coefficients; and
distributing each single service scenario's test cases and test code to the corresponding slave nodes.
In one embodiment, the computer-readable instructions, when executed by the processors, cause the one or more processors to further perform the following steps:
a newly added slave node sending its node identifier to the master node;
the master node sending an initialization command to the newly added slave node according to that identifier; and
the newly added slave node executing the initialization command and, after initialization is complete, receiving the test cases and test code distributed by the master node and performing the performance test corresponding to the business scenario.
In one embodiment, one or more non-volatile readable storage media storing computer-readable instructions are provided, the computer-readable instructions, when executed by one or more processors, causing the one or more processors to perform the following steps:
receiving, through the master node, a performance test request for a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario;
acquiring, by the master node, the corresponding test case according to the configuration information and generating the corresponding test code from the configuration information;
distributing, through the master node, the test code and test cases to threads running on multiple slave nodes, so that the threads of each slave node use the test code and test cases to perform performance tests on the multiple interfaces of the service scenario; and
receiving, through the master node, the test data returned by the multiple slave nodes, and generating a test report from the test data.
In one embodiment, the computer-readable instructions, when executed by one or more processors, cause the one or more processors to further perform the following steps:
when the error rate of a single interface reaches a first threshold during the performance test, the threads of the slave nodes exit the performance test of that interface, and the slave nodes record the corresponding test data; or
the master node records, according to the thread step quantity and step frequency of a single interface, the number of threads performing that interface's performance test; when the number of threads corresponding to the interface reaches a second threshold, the threads of the multiple slave nodes exit the interface's performance test, and the slave nodes record the corresponding test data.
In one embodiment, the business scenarios include mixed service scenarios and single service scenarios; the computer-readable instructions, when executed by one or more processors, cause the one or more processors to further perform the following steps:
when the business scenario is a mixed service scenario, splitting the mixed service scenario into multiple single service scenarios through the master node;
acquiring, through the master node, the test cases and test code corresponding to the multiple single service scenarios; and
distributing, through the master node, the test cases and test code of the multiple single service scenarios to the threads of multiple slave nodes for performance testing.
In one embodiment, the computer-readable instructions, when executed by one or more processors, cause the one or more processors to further perform the following steps:
acquiring, through the master node, the number of free threads of each slave node; and
distributing, by the master node, the test cases and test code of the multiple single service scenarios to the corresponding slave nodes according to the numbers of free threads, so that the performance test of each single service scenario is performed by multiple slave nodes.
In one embodiment, the computer-readable instructions, when executed by one or more processors, cause the one or more processors to further perform the following steps:
obtaining the weighting coefficient corresponding to each slave node's free-thread count;
when the number of slave nodes is greater than the number of single service scenarios obtained by splitting, selecting, by the master node, one or more slave nodes for each single service scenario according to the scenario's interface count and the weighting coefficients; and
distributing each single service scenario's test cases and test code to the corresponding slave nodes.
In one embodiment, the computer-readable instructions, when executed by one or more processors, cause the one or more processors to further perform the following steps:
a newly added slave node sending its node identifier to the master node;
the master node sending an initialization command to the newly added slave node according to that identifier; and
the newly added slave node executing the initialization command and, after initialization is complete, receiving the test cases and test code distributed by the master node and performing the performance test corresponding to the business scenario.
A person of ordinary skill in the art will understand that all or part of the processes in the above method embodiments can be implemented by computer-readable instructions instructing the relevant hardware. The computer-readable instructions may be stored in a non-volatile computer-readable storage medium and, when executed, may include the flows of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or the like.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of this application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of this application, all of which fall within the protection scope of this application. The protection scope of this patent shall therefore be subject to the appended claims.

Claims (20)

  1. A performance testing method for an application, comprising:
    receiving, through a master node, a performance test request for a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario;
    acquiring, by the master node, the corresponding test case according to the configuration information and generating the corresponding test code from the configuration information;
    distributing, through the master node, the test code and test case to multiple slave nodes, so that the slave nodes invoke multiple threads to perform performance tests on the multiple interfaces of the service scenario according to the test code and test case; and
    receiving, through the master node, the test data returned by the multiple slave nodes, and generating a test report from the test data.
  2. The method according to claim 1, further comprising:
    when the error rate of a single interface reaches a first threshold during the performance test, the threads of the slave nodes exiting the performance test of the single interface, and the slave nodes recording the corresponding test data; or
    the master node recording, according to the thread step quantity and step frequency of a single interface, the number of threads performing the performance test of the single interface; and when the number of threads corresponding to the single interface reaches a second threshold, the threads of the multiple slave nodes exiting the performance test of the single interface, and the slave nodes recording the corresponding test data.
  3. The method according to claim 1, wherein the service scenarios comprise mixed service scenarios and single service scenarios, and the method further comprises:
    when the business scenario is a mixed service scenario, splitting the mixed service scenario into multiple single service scenarios through the master node;
    acquiring, through the master node, the test cases and test code corresponding to the multiple single service scenarios; and
    distributing, through the master node, the test cases and test code of the multiple single service scenarios to multiple slave nodes, so that the slave nodes perform performance tests on the multiple single service scenarios.
  4. The method according to claim 3, wherein distributing, through the master node, the test cases and test code of the multiple single service scenarios to multiple slave nodes so that the slave nodes perform performance tests on the multiple single service scenarios comprises:
    acquiring, through the master node, the number of free threads of each slave node; and
    distributing, by the master node, the test cases and test code of the multiple single service scenarios to the corresponding slave nodes according to the numbers of free threads, so that each slave node performs performance tests on multiple single service scenarios.
  5. The method according to claim 4, wherein distributing, by the master node, the test cases and test code of the multiple single service scenarios to the corresponding slave nodes according to the numbers of free threads comprises:
    obtaining the weighting coefficient corresponding to each slave node's free-thread count;
    when the number of slave nodes is greater than the number of single service scenarios obtained by splitting, selecting, by the master node, one or more slave nodes for a single service scenario according to the number of interfaces of the single service scenario and the weighting coefficients; and
    distributing the test cases and test code of the single service scenario to the corresponding slave nodes.
  6. The method according to claim 1, further comprising:
    a newly added slave node sending its node identifier to the master node;
    the master node sending an initialization command to the newly added slave node according to the newly added slave node identifier; and
    the newly added slave node executing the initialization command and, after initialization is complete, receiving the test cases and test code distributed by the master node and performing the performance test corresponding to the service scenario.
  7. A performance testing apparatus for an application, comprising:
    a master node configured to receive a performance test request for a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario; to acquire the corresponding test case according to the configuration information and generate the corresponding test code from the configuration information; and to distribute the test code and test cases to multiple slave nodes;
    a slave node configured to run multiple threads, the threads using the test code and test cases to perform performance tests on the multiple interfaces of the service scenario;
    wherein the master node is further configured to receive the test data returned by the multiple slave nodes and to generate a test report from the test data.
  8. The apparatus according to claim 7, wherein the service scenarios comprise mixed service scenarios and single service scenarios; when the business scenario is a mixed service scenario, the master node is further configured to split the mixed service scenario into multiple single service scenarios, to acquire the test cases and test code corresponding to the multiple single service scenarios, and to distribute them to multiple slave nodes; and the slave nodes are further configured to invoke multiple threads to perform performance tests on the multiple single service scenarios.
  9. The apparatus according to claim 7, wherein the master node is further configured to acquire the number of free threads of each slave node and to distribute the test cases and test code of multiple single service scenarios to the corresponding slave nodes according to the numbers of free threads; and the slave nodes are further configured to perform performance tests on the multiple single service scenarios.
  11. A computer device, comprising a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to perform the following steps:
    receiving, through a master node, a performance test request of a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario;
    acquiring, by the master node, a corresponding test case according to the configuration information, and generating corresponding test code by using the configuration information;
    distributing, through the master node, the test code and the test case to a plurality of slave nodes, so that the slave nodes invoke a plurality of threads to perform a performance test on a plurality of interfaces of the service scenario according to the test code and the test case; and
    receiving, through the master node, test data returned by the plurality of slave nodes, and generating a test report by using the test data.
  12. The computer device according to claim 11, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps:
    when an error rate of a single interface in the performance test reaches a first threshold, exiting, by the threads of the slave nodes, the performance test corresponding to the single interface, and recording the corresponding test data by the slave nodes; or
    recording, by the master node, the number of threads performing the performance test on the single interface according to a thread stepping quantity and a stepping frequency corresponding to the single interface, and when the number of threads corresponding to the single interface reaches a second threshold, exiting, by the threads of the plurality of slave nodes, the performance test of the single interface, and recording the corresponding test data by the slave nodes.
  13. The computer device according to claim 11, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps:
    when the service scenario is a mixed service scenario, splitting, through the master node, the mixed service scenario into a plurality of single service scenarios;
    acquiring, through the master node, test cases and test code corresponding to the plurality of single service scenarios; and
    distributing, through the master node, the test cases and test code corresponding to the plurality of single service scenarios to the plurality of slave nodes, so that the slave nodes perform the performance test on the plurality of single service scenarios.
  14. The computer device according to claim 13, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps:
    acquiring, through the master node, the number of idle threads of each slave node; and
    distributing, by the master node, the test cases and test code of the plurality of single service scenarios to the corresponding slave nodes according to the number of idle threads, so that each slave node performs the performance test on the plurality of single service scenarios.
  15. The computer device according to claim 14, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps:
    acquiring a weighting coefficient corresponding to the number of idle threads of each slave node;
    when the number of slave nodes is greater than the number of the single service scenarios obtained by the splitting, selecting, by the master node, one or more slave nodes for each single service scenario according to the number of interfaces corresponding to the single service scenario and the weighting coefficients; and
    distributing the test cases and test code corresponding to the single service scenario to the corresponding slave nodes.
  16. One or more non-volatile readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
    receiving, through a master node, a performance test request of a service scenario sent by a terminal, the performance test request carrying configuration information corresponding to the service scenario;
    acquiring, by the master node, a corresponding test case according to the configuration information, and generating corresponding test code by using the configuration information;
    distributing, through the master node, the test code and the test case to a plurality of slave nodes, so that the slave nodes invoke a plurality of threads to perform a performance test on a plurality of interfaces of the service scenario according to the test code and the test case; and
    receiving, through the master node, test data returned by the plurality of slave nodes, and generating a test report by using the test data.
  17. The storage media according to claim 16, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps:
    when an error rate of a single interface in the performance test reaches a first threshold, exiting, by the threads of the slave nodes, the performance test corresponding to the single interface, and recording the corresponding test data by the slave nodes; or
    recording, by the master node, the number of threads performing the performance test on the single interface according to a thread stepping quantity and a stepping frequency corresponding to the single interface, and when the number of threads corresponding to the single interface reaches a second threshold, exiting, by the threads of the plurality of slave nodes, the performance test of the single interface, and recording the corresponding test data by the slave nodes.
  18. The storage media according to claim 16, wherein the service scenario comprises a mixed service scenario and a single service scenario, and the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps:
    when the service scenario is a mixed service scenario, splitting, through the master node, the mixed service scenario into a plurality of single service scenarios;
    acquiring, through the master node, test cases and test code corresponding to the plurality of single service scenarios; and
    distributing, through the master node, the test cases and test code corresponding to the plurality of single service scenarios to the plurality of slave nodes, so that the slave nodes perform the performance test on the plurality of single service scenarios.
  19. The storage media according to claim 18, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps:
    acquiring, through the master node, the number of idle threads of each slave node; and
    distributing, by the master node, the test cases and test code of the plurality of single service scenarios to the corresponding slave nodes according to the number of idle threads, so that each slave node performs the performance test on the plurality of single service scenarios.
  20. The storage media according to claim 19, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps:
    acquiring a weighting coefficient corresponding to the number of idle threads of each slave node;
    when the number of slave nodes is greater than the number of the single service scenarios obtained by the splitting, selecting, by the master node, one or more slave nodes for each single service scenario according to the number of interfaces corresponding to the single service scenario and the weighting coefficients; and
    distributing the test cases and test code corresponding to the single service scenario to the corresponding slave nodes.
PCT/CN2017/104599 2017-08-25 2017-09-29 Performance testing method and apparatus for application program, computer device and storage medium WO2019037203A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710743145.4A 2017-08-25 2017-08-25 Performance testing method and apparatus for application program, computer device and storage medium
CN201710743145.4 2017-08-25

Publications (1)

Publication Number Publication Date
WO2019037203A1 true WO2019037203A1 (zh) 2019-02-28

Family

ID=61155342

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/104599 WO2019037203A1 (zh) Performance testing method and apparatus for application program, computer device and storage medium

Country Status (2)

Country Link
CN (1) CN107688526A (zh)
WO (1) WO2019037203A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110058990A (zh) * 2019-03-12 2019-07-26 平安普惠企业管理有限公司 Performance testing method and apparatus, computer device and storage medium

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108427613B (zh) * 2018-03-12 2021-02-09 平安普惠企业管理有限公司 Method and apparatus for locating abnormal interfaces, computer device and storage medium
CN108958727A (zh) * 2018-04-13 2018-12-07 北京优帆科技有限公司 Method and system for generating API client code
CN108763089B (zh) * 2018-05-31 2022-04-22 新华三信息安全技术有限公司 Testing method, apparatus and system
CN110727573A (zh) * 2018-07-16 2020-01-24 中移(苏州)软件技术有限公司 Performance testing method, apparatus, device and medium for application programming interfaces
CN108959100A (zh) * 2018-07-20 2018-12-07 中国邮政储蓄银行股份有限公司 Application testing method, apparatus and system
CN109344053B (zh) * 2018-09-03 2023-05-30 平安科技(深圳)有限公司 Interface coverage testing method and system, computer device and storage medium
CN109165165A (zh) * 2018-09-04 2019-01-08 中国平安人寿保险股份有限公司 Interface testing method and apparatus, computer device and storage medium
CN109660421A (zh) * 2018-10-26 2019-04-19 平安科技(深圳)有限公司 Method, apparatus, server and storage medium for elastic resource scheduling
CN109815138A (zh) * 2019-01-03 2019-05-28 深圳壹账通智能科技有限公司 Service information testing method and apparatus, computer device and storage medium
CN110008117A (zh) * 2019-03-12 2019-07-12 深圳壹账通智能科技有限公司 Page testing method and apparatus, computer device and storage medium
CN110008118B (zh) * 2019-03-13 2023-03-10 深圳壹账通智能科技有限公司 Page data testing method and apparatus, computer device and storage medium
CN110046093A (zh) * 2019-03-14 2019-07-23 平安信托有限责任公司 Interface testing method and apparatus, computer device and storage medium
CN110083535A (zh) * 2019-04-22 2019-08-02 网宿科技股份有限公司 Software testing method and apparatus
CN111984510B (zh) * 2019-05-21 2024-05-17 阿里巴巴集团控股有限公司 Performance testing method and apparatus for a scheduling system
CN110417613B (zh) * 2019-06-17 2022-11-29 平安科技(深圳)有限公司 Jmeter-based distributed performance testing method, apparatus, device and storage medium
CN111181800B (zh) * 2019-11-27 2023-09-19 腾讯科技(深圳)有限公司 Test data processing method and apparatus, electronic device and storage medium
CN111104320A (zh) * 2019-12-15 2020-05-05 浪潮电子信息产业股份有限公司 Testing method, apparatus, device and medium
CN111177003A (zh) * 2019-12-30 2020-05-19 北京同邦卓益科技有限公司 Testing method, apparatus, system, electronic device and storage medium
CN111404769B (зh) * 2020-02-28 2022-07-08 北京达佳互联信息技术有限公司 Performance testing method, apparatus, server and storage medium
CN111400192A (zh) * 2020-04-02 2020-07-10 北京达佳互联信息技术有限公司 Performance testing method and apparatus for service programs, electronic device and storage medium
CN112559325B (zh) * 2020-12-02 2024-02-23 海南车智易通信息技术有限公司 Application testing system and method, computing device and readable storage medium
US12001822B2 (en) * 2021-02-01 2024-06-04 Capital One Services, Llc Multi-signature validation of deployment artifacts
CN117835300A (zh) * 2022-09-28 2024-04-05 中国移动通信集团设计院有限公司 Testing device, method and apparatus, and computer-readable storage medium
CN117331836A (zh) * 2023-10-16 2024-01-02 中教畅享(北京)科技有限公司 Evaluation method based on code syntax tree analysis

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102035697A (zh) * 2010-12-31 2011-04-27 中国电子科技集团公司第十五研究所 System and method for testing concurrent-connection performance of a file system
CN102609352A (zh) * 2011-01-19 2012-07-25 阿里巴巴集团控股有限公司 Parallel testing method and parallel testing server
CN102855173A (zh) * 2011-06-27 2013-01-02 北京新媒传信科技有限公司 Software performance testing method and apparatus
CN105281978A (zh) * 2015-10-23 2016-01-27 小米科技有限责任公司 Performance testing method, apparatus and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105117345B (zh) * 2015-09-23 2017-12-19 网易(杭州)网络有限公司 Interface testing method and apparatus for applications
US9600400B1 (en) * 2015-10-29 2017-03-21 Vertafore, Inc. Performance testing of web application components using image differentiation
CN105808428B (zh) * 2016-03-03 2018-09-14 南京大学 Method for unified performance testing of distributed file systems
CN106776280B (zh) * 2016-11-24 2020-10-16 上海携程商务有限公司 Configurable performance testing apparatus

Also Published As

Publication number Publication date
CN107688526A (zh) 2018-02-13

Similar Documents

Publication Publication Date Title
WO2019037203A1 (zh) Performance testing method and apparatus for application program, computer device and storage medium
CN108537543B (zh) Parallel processing method, apparatus, device and storage medium for blockchain data
TWI791389B (zh) Task scheduling method, apparatus and computer-readable storage medium
CN111190810B (zh) Method, apparatus, server and storage medium for executing test tasks
CN107590075B (zh) Software testing method and apparatus
CN108845954B (zh) Stress testing method, system and storage medium
US20060155708A1 (en) System and method for generating virtual networks
CN107241315B (zh) Access method and apparatus for a bank gateway interface, and computer-readable storage medium
US20190199785A1 (en) Determining server level availability and resource allocations based on workload level availability requirements
CN110389903B (zh) Test environment deployment method and apparatus, electronic device and readable storage medium
CN110233802B (zh) Method for constructing a blockchain architecture with one main chain and multiple side chains
CN107566214B (zh) Performance testing method and apparatus
US20150100831A1 (en) Method and system for selecting and executing test scripts
CN113434283B (zh) Service scheduling method and apparatus, server, and computer-readable storage medium
CN112631919B (zh) Comparative testing method and apparatus, computer device and storage medium
CN114327861A (zh) Method, apparatus, system and storage medium for executing EDA tasks
Sundas et al. An introduction of CloudSim simulation tool for modelling and scheduling
CN110874319A (zh) Automated testing method, platform, device and computer-readable storage medium
CN112783778A (zh) Testing method, apparatus, network device and storage medium
CN113485828B (zh) Quartz-based distributed task scheduling system and method
CN115712524A (zh) Data recovery method and apparatus
CN115292176A (zh) Stress testing method, apparatus, device and storage medium
CN113703930A (zh) Task scheduling method, apparatus and system, and computer-readable storage medium
CN113518974A (zh) System and method for discovering and identifying computing nodes in a network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17922594

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.09.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17922594

Country of ref document: EP

Kind code of ref document: A1