CN110417613B - Distributed performance testing method, device, equipment and storage medium based on Jmeter

Info

Publication number: CN110417613B
Application number: CN201910523487.4A
Authority: CN (China)
Prior art keywords: jmeter, node, slave, test, slave node
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN110417613A (en)
Inventor: 李润妮
Current Assignee: Ping An Technology Shenzhen Co Ltd
Original Assignee: Ping An Technology Shenzhen Co Ltd

Events
    • Application filed by Ping An Technology Shenzhen Co Ltd
    • Priority to CN201910523487.4A
    • Publication of CN110417613A
    • Priority to PCT/CN2019/119070 (WO2020253079A1)
    • Application granted
    • Publication of CN110417613B

Classifications

    • G06F8/63: Image based installation; Cloning; Build to order (under G06F8/60 Software deployment, G06F8/61 Installation)
    • G06F8/71: Version control; Configuration management (under G06F8/70 Software maintenance or management)
    • G06F9/45558: Hypervisor-specific management and integration aspects (under G06F9/455 Emulation; Interpretation; Software simulation, G06F9/45533 Hypervisors; Virtual machine monitors)
    • G06F2009/45595: Network integration; Enabling network access in virtual machine instances
    • H04L43/08: Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L67/1095: Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes (under H04L67/10 Protocols in which an application is distributed across nodes in the network)


Abstract

The invention relates to the field of cloud testing, and discloses a Jmeter-based distributed performance testing method, device, equipment and storage medium. The Jmeter-based distributed performance testing method comprises the following steps: deploying a Jmeter image of a preset version to distributed nodes, wherein the distributed nodes comprise a master node and at least one slave node; when the master node receives a test starting instruction, sending the corresponding test script to each slave node through the master node; when each slave node receives its corresponding test script, performing a performance test through each slave node to obtain the corresponding test result; sending the corresponding test results to the master node through each slave node; and counting, through the master node, the test results corresponding to the slave nodes to obtain a performance test result. The invention performs version control on the Jmeter image and deploys the Jmeter image on container-based distributed nodes for performance testing; deployment is simple and fast, and test efficiency is improved.

Description

Distributed performance testing method, device, equipment and storage medium based on Jmeter
Technical Field
The invention relates to the field of cloud testing, in particular to a Jmeter-based distributed performance testing method, device, equipment and storage medium.
Background
The main tools for server-side performance testing are Jmeter and LoadRunner. Both work on the same principle: an intermediate agent monitors and collects the requests sent by concurrent clients, generates scripts and sends them to the application server, and then monitors the results fed back by the server.
When the interface performance of an application program is tested, a Jmeter deployed on a single machine cannot meet the test requirement because of the limits of a single machine's CPU and memory, so distributed Jmeter deployment is required. The traditional way to deploy an application is to install it through plug-ins or scripts; the disadvantage is that the running, configuration, management and entire life cycle of the application are bound to the current operating system, which is not conducive to upgrading or updating the application or rolling back its version.
When Jmeter is deployed by creating virtual machines, the deployment process is complex and inefficient, and portability suffers. A Jmeter cluster built on a large number of virtual machines is also inconvenient to manage: restarting the Jmeter service, for example, requires connecting to each virtual machine and operating on it separately, installation and deployment are cumbersome, and it is likewise inconvenient to package the cluster into a single application that provides services externally.
Disclosure of Invention
The invention mainly aims to solve the problems that a single machine cannot meet the test requirement when Jmeter is deployed, that deploying Jmeter on virtual machines is complex, inefficient and poorly portable, and that version control of test scripts is poorly supported.
In order to achieve the above object, a first aspect of the present invention provides a Jmeter-based distributed performance testing method, including: deploying a Jmeter image of a preset version to distributed nodes, wherein the distributed nodes comprise a master node and at least one slave node; when the master node receives a test starting instruction, sending the corresponding test script to each slave node through the master node; when each slave node receives its corresponding test script, performing a performance test through each slave node to obtain the corresponding test result; sending the test result corresponding to each slave node to the master node; and counting the corresponding test results through the master node to obtain a performance test result.
Optionally, in a first implementation manner of the first aspect of the present invention, deploying the Jmeter image of the preset version to the distributed nodes, where the distributed nodes include a master node and at least one slave node, includes: selecting the Jmeter image of the preset version from an image repository; setting up the distributed nodes through an Elastic Compute Service (ECS) instance, wherein the distributed nodes comprise the master node and the at least one slave node; determining whether the master node and each slave node all have the Jmeter image of the preset version; if the master node and each slave node do not all have the Jmeter image of the preset version, deploying the Jmeter image of the preset version in a preset manner to obtain a deployment result; and if the master node and each slave node all have the Jmeter image of the preset version, determining that the deployment is successful.
Optionally, in a second implementation manner of the first aspect of the present invention, if the master node and each slave node do not all have the Jmeter image of the preset version, deploying the Jmeter image of the preset version in a preset manner to obtain a deployment result includes: if the master node and each slave node do not all have the Jmeter image of the preset version, deploying the Jmeter image of the preset version to the ECS instance to obtain a Jmeter application; running the Jmeter application through a docker container; adding configuration information corresponding to each slave node into a control list of the master node, wherein the configuration information corresponding to each slave node comprises the IP address and port corresponding to that slave node; and restarting the Jmeter application of the master node to obtain the deployment result.
Optionally, in a third implementation manner of the first aspect of the present invention, after the Jmeter image of the preset version is deployed in a preset manner and a deployment result is obtained, the Jmeter-based distributed performance testing method further includes: determining whether the deployment result is a target value, wherein the target value indicates that the Jmeter image of the preset version is deployed successfully; if the deployment result is not the target value, redeploying; and if the deployment result is the target value, determining that the deployment is successful.
Optionally, in a fourth implementation manner of the first aspect of the present invention, when the master node receives a test starting instruction, sending the corresponding test script to each slave node through the master node includes: setting the master node as the client of Remote Method Invocation (RMI); setting each slave node as a server side of RMI; when the master node receives the test starting instruction, executing a test plan through the master node, wherein the test plan comprises total test data and a test script; parsing the control list of the master node to obtain the configuration information corresponding to each slave node; splitting the total test data according to the configuration information corresponding to each slave node to obtain the test data corresponding to each slave node; replacing the keywords in the test script according to the test data corresponding to each slave node to obtain the corresponding test scripts; and pushing the corresponding test script to each slave node through the master node by means of RMI.
Optionally, in a fifth implementation manner of the first aspect of the present invention, when each slave node receives its corresponding test script, performing a performance test through each slave node to obtain the corresponding test result includes: when each slave node receives its corresponding test script, parsing the corresponding test script through each slave node to obtain the corresponding test request; executing the corresponding test request through each slave node to obtain the corresponding test result; and recording the corresponding test results through the slave nodes.
Optionally, in a sixth implementation manner of the first aspect of the present invention, before the Jmeter image of the preset version is deployed to the distributed nodes, where the distributed nodes include a master node and at least one slave node, the Jmeter-based distributed performance testing method further includes: uploading the Jmeter image of the preset version to the image repository; and performing version management on the Jmeter image of the preset version by means of labels.
A second aspect of the present invention provides a Jmeter-based distributed performance testing apparatus, including: a deployment unit, configured to deploy a Jmeter image of a preset version to distributed nodes, where the distributed nodes comprise a master node and at least one slave node; a control unit, configured to send the corresponding test script to each slave node through the master node when the master node receives a test starting instruction; a test unit, configured to perform a performance test through each slave node to obtain the corresponding test result when each slave node receives its corresponding test script; a sending unit, configured to send the corresponding test results to the master node through each slave node; and a statistics unit, configured to count the corresponding test results through the master node to obtain a performance test result.
Optionally, in a first implementation manner of the second aspect of the present invention, the deployment unit further includes: a selecting subunit, configured to select the Jmeter image of the preset version from an image repository; a setting subunit, configured to set up the distributed nodes through an Elastic Compute Service (ECS) instance, where the distributed nodes include the master node and the at least one slave node; a first judging subunit, configured to determine whether the master node and each slave node all have the Jmeter image of the preset version; a deployment subunit, configured to deploy the Jmeter image of the preset version in a preset manner to obtain a deployment result if the master node and each slave node do not all have the Jmeter image of the preset version; and a confirmation subunit, configured to determine that the deployment is successful if the master node and each slave node all have the Jmeter image of the preset version.
Optionally, in a second implementation manner of the second aspect of the present invention, the deployment subunit is specifically configured to: if the master node and each slave node do not all have the Jmeter image of the preset version, deploy the Jmeter image of the preset version to the ECS instance to obtain a Jmeter application; run the Jmeter application through a docker container; add configuration information corresponding to each slave node into a control list of the master node, where the configuration information corresponding to each slave node includes the IP address and port corresponding to that slave node; and restart the Jmeter application of the master node to obtain the deployment result.
Optionally, in a third implementation manner of the second aspect of the present invention, the deployment unit further includes: a second judging subunit, configured to determine whether the deployment result is a target value, where the target value indicates that the Jmeter image of the preset version is deployed successfully; a first processing subunit, configured to redeploy if the deployment result is not the target value; and a second processing subunit, configured to determine that the deployment is successful if the deployment result is the target value.
Optionally, in a fourth implementation manner of the second aspect of the present invention, the control unit is specifically configured to: set the master node as the client of Remote Method Invocation (RMI); set each slave node as a server side of RMI; when the master node receives a test starting instruction, execute a test plan through the master node, where the test plan includes total test data and a test script; parse the control list of the master node to obtain the configuration information corresponding to each slave node; split the total test data according to the configuration information corresponding to each slave node to obtain the test data corresponding to each slave node; replace the keywords in the test script according to the test data corresponding to each slave node to obtain the corresponding test scripts; and push the corresponding test scripts to the slave nodes through the master node by means of RMI.
Optionally, in a fifth implementation manner of the second aspect of the present invention, the test unit is specifically configured to: when each slave node receives its corresponding test script, parse the corresponding test script through each slave node to obtain the corresponding test request; execute the corresponding test request through each slave node to obtain the corresponding test result; and record the corresponding test results through the slave nodes.
Optionally, in a sixth implementation manner of the second aspect of the present invention, the Jmeter-based distributed performance testing apparatus further includes: an uploading unit, configured to upload the Jmeter image of the preset version to the image repository; and a management unit, configured to perform version management on the Jmeter image of the preset version by means of labels.
A third aspect of the present invention provides Jmeter-based distributed performance testing equipment, including: a memory having instructions stored therein and at least one processor, the memory and the at least one processor being interconnected by a line; the at least one processor invokes the instructions in the memory to cause the Jmeter-based distributed performance testing equipment to perform the method of the first aspect.
A fourth aspect of the present invention provides a computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the method of the first aspect.
According to the above technical scheme, the invention has the following advantages:
In the technical scheme provided by the invention, a Jmeter image of a preset version is deployed to distributed nodes, and the distributed nodes comprise a master node and at least one slave node; when the master node receives a test starting instruction, the corresponding test script is sent to each slave node through the master node; when each slave node receives its corresponding test script, a performance test is performed through each slave node to obtain the corresponding test result; the corresponding test results are sent to the master node through each slave node; and the corresponding test results are counted through the master node to obtain a performance test result. In the embodiment of the invention, version control is performed on the Jmeter image through the image repository, and the Jmeter image is deployed on container-based distributed nodes for the performance test; processes in different containers cannot affect each other and computing resources can be isolated, and at the same time the container deployment process is simpler than that of a virtual machine, which improves test efficiency.
Drawings
FIG. 1 is a schematic diagram of an embodiment of the Jmeter-based distributed performance testing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of another embodiment of the Jmeter-based distributed performance testing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an embodiment of the Jmeter-based distributed performance testing apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another embodiment of the Jmeter-based distributed performance testing apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of another embodiment of the Jmeter-based distributed performance testing device according to an embodiment of the present invention.
Detailed Description
The embodiments of the invention provide a Jmeter-based distributed performance testing method, apparatus, equipment and storage medium. Because containers are decoupled from the underlying infrastructure and from the machine file system, they can be migrated between different clouds and different versions of operating systems, which simplifies the deployment process and improves test efficiency.
In order to make those skilled in the technical field of the invention better understand the scheme of the invention, the embodiments of the invention are described below in conjunction with the accompanying drawings.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Moreover, the terms "comprises," "comprising," or "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For convenience of understanding, a specific flow of an embodiment of the present invention is described below. Referring to FIG. 1, an embodiment of the Jmeter-based distributed performance testing method in the embodiment of the present invention includes:
101. Deploying a Jmeter image of a preset version to distributed nodes, wherein the distributed nodes comprise a master node and at least one slave node;
the server deploys the Jmeter image of the predicted version to a distributed node, the distributed node comprises a main node and at least one slave node, and the main node and the slave node both adopt container. The docker is an open-source application container engine, completely uses a sandbox mechanism, does not have any interface, and uses a client-server C/S architecture mode.
It should be noted that the server selects the Jmeter image of the predicted version from the image repository. The server is mirrored through the Jmeter, so that the environment such as the Java virtual machine and the property file does not need to be set, and only the creation of the test script and the test resource such as the data file need to be concentrated. One Jmeter mirror image comprises all scripts in the test task, and a Jmeter mirror image of a certain version is selected as a Jmeter mirror image of a predicted version according to the corresponding relation between the test task and the test script name. The Jmeter image does not filter any command parameters of the Jmeters, allowing multiple modes of operation, and in a container-based distributed mode, the image can be used to build a container cluster. For example, the server deploys the configured template by setting a meter image through configuration management of a Container As A Service (CAAS) platform. The CAAS is an application environment which is managed and protected by IT, applications can be constructed and deployed through the CAAS, and Jerner container distributed clusters can be managed and deployed through container-based virtualization.
102. When the master node receives a test starting instruction, sending the corresponding test script to each slave node through the master node;
and when the main node receives the test starting instruction, the server sends the corresponding test script to each slave node through the main node. Specifically, the server sets the master node as a client for calling RMI by a remote method; the server sets each slave node as a server side of RMI; when the main node receives a test starting instruction, the server executes a test plan through the main node, wherein the test plan comprises total test data and a test script; the server analyzes the control list of the master node to obtain configuration information corresponding to each slave node; the server splits the total test data according to the configuration information corresponding to each slave node to obtain the test data corresponding to each slave node; the server replaces the key words in the test scripts according to the test data corresponding to each slave node to obtain the corresponding test scripts; and the server pushes the corresponding test script to each slave node through the master node in an RMI mode. For example, 700 test request data are sent concurrently, the distributed nodes include a master node a and slave nodes B, C, D and E, the server performs calculation according to the respective configuration information conditions of the slave nodes B, C, D and E, the server allocates 180, 170, 190 and 160 test request data of the slave nodes B, C, D and E, respectively, and the server sends test scripts including the respective corresponding test request data to the slave nodes B, C, D and E through the master node a.
It should be noted that Remote Method Invocation (RMI) is an application programming interface that implements remote procedure calls: it lets a program running on a client invoke objects on a remote server, i.e. program-level objects stored in different address spaces can communicate with each other, and RMI supports distributed operation in a network environment. An RMI setup includes a server running the remote method invocation service and a client that calls it. For example, a master node A containing a Jmeter application is set as the RMI client, slave nodes B, C, D and E containing Jmeter applications are set as RMI servers, and the master node A and the slave nodes B, C, D and E communicate with each other by means of RMI.
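As an illustration of the allocation step in step 102, the following minimal Python sketch splits a total request count across slave nodes. The function name and the proportional weighting rule are assumptions, since the text only says the split follows each slave's configuration information:

```python
# Hypothetical sketch: split a total request count across slave nodes in
# proportion to each node's configured capacity weight. The proportional
# rule is an assumption; the patent does not fix the exact formula.

def split_test_data(total_requests, slaves):
    """slaves: dict mapping node name -> configured capacity weight."""
    total_weight = sum(slaves.values())
    shares = {name: total_requests * w // total_weight for name, w in slaves.items()}
    # Hand any rounding remainder to the first node so the shares sum exactly.
    remainder = total_requests - sum(shares.values())
    first = next(iter(shares))
    shares[first] += remainder
    return shares

# 700 concurrent requests over slaves B, C, D, E, weighted to match the example
print(split_test_data(700, {"B": 18, "C": 17, "D": 19, "E": 16}))
# -> {'B': 180, 'C': 170, 'D': 190, 'E': 160}
```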
103. When each slave node receives its corresponding test script, performing the performance test through each slave node to obtain the corresponding test result;
When each slave node receives its corresponding test script, the server performs the performance test through each slave node to obtain the corresponding test result. Specifically, when each slave node receives its corresponding test script, the server parses the corresponding test script through each slave node to obtain the corresponding test request; the corresponding test request is executed through each slave node to obtain the corresponding test result; and the server records the corresponding test result through each slave node.
It should be noted that the performance test indexes include the concurrency number, the average response time, the request success rate, and the number of requests or transactions per second, i.e. QPS/TPS. QPS/TPS is related to the concurrency number and the average response time: QPS/TPS equals the concurrency number divided by the average response time. For example, suppose employees may log into the check-in system to check in during the 30 minutes before 9 o'clock in the morning, the company has 2000 employees, and the average time for each person to log into the check-in system is 5 minutes. The calculation is as follows: 30 minutes is 1800 seconds, so QPS is 2000 divided by 1800; with an average response time of 3 minutes, i.e. 180 seconds, the calculated concurrency number is 200.
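The relation QPS = concurrency / average response time from the note above can be checked with a short sketch; the figures are those of the check-in example, and the variable names are ours:

```python
# Check-in example from the text: 2000 logins spread over a 30-minute window.
logins = 2000
window_s = 30 * 60             # 1800 s
qps = logins / window_s        # about 1.11 requests per second
avg_response_s = 3 * 60        # 180 s average response time
# Rearranged from QPS = concurrency / average response time:
concurrency = qps * avg_response_s
print(round(qps, 2), round(concurrency))   # 1.11 200
```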
104. Sending the corresponding test result to the master node through each slave node;
The server sends the corresponding test result to the master node through each slave node. Specifically, the server obtains the corresponding test result through each slave node and sends it to the master node through each slave node by means of the RMI service; at the same time, each slave node records its corresponding test result. The test results all come from test requests under the same test plan, but differ from node to node, and each includes normal test data results and abnormal test data results.
105. Counting the corresponding test results through the master node to obtain a performance test result.
The server counts the test results corresponding to each slave node through the master node to obtain the performance test result. Specifically, the server receives the test results of the slave nodes through the master node, and calculates and classifies them according to a preset statistical script to obtain the performance test result.
In the embodiment of the invention, version control of the Jmeter image is performed through the image repository and version rollback is supported; at the same time, container-based distributed nodes are used to deploy the Jmeter image and perform the performance test, processes in different containers cannot affect each other, computing resources can be isolated, and a container can be deployed quickly compared with a virtual machine. Because containers are decoupled from the underlying infrastructure and the machine file system, they can be migrated between different clouds and different versions of operating systems, which simplifies the deployment process and improves test efficiency.
Referring to FIG. 2, another embodiment of the Jmeter-based distributed performance testing method according to the embodiment of the present invention includes:
201. Selecting a Jmeter image of a preset version from an image repository;
the server selects a Jmeter image of a predicted version from an image repository, wherein the image is an elastic cloud server template containing software and necessary configurations, at least comprises an operating system, and also can contain application software and private software, and the application software comprises database software for example. The mirror image is divided into a public mirror image and a private mirror image, the public mirror image is a mirror image provided by default by the system, and the private mirror image is a mirror image created by a user. For example, if the application is a website or Web service, the image may contain a Web server, associated static content, and dynamic page code, and after the elastic cloud server is created by the image, the Web server will boot and the application is ready to accept the request.
It should be noted that the server selects a predicted version of the meter image from the image repository. The server is mirrored through the Jmeter, so that the environment such as the Java virtual machine and the property file does not need to be set, and only the creation of the test script and the test resource such as the data file need to be concentrated. One Jmeter mirror image comprises all scripts in the test task, and a Jmeter mirror image of a certain version is selected as a Jmeter mirror image of a predicted version according to the corresponding relation between the test task and the test script name. The Jmeter image does not filter any command parameters of the Jmeters, allowing multiple modes of operation, and in a container-based distributed mode, the image can be used to build a container cluster.
Optionally, the server uploads the Jmeter mirror image of the predicted version to a mirror image warehouse, wherein the mirror image warehouse is a warehouse for storing docker mirror images, is not limited to Jmeter mirror images and is divided into a public warehouse and a private warehouse; the server carries out version management on the Jmeter mirror images of the predicted versions in a labeling mode, the version management is to label the Jmeter mirror images of different test tasks according to preset rules, different branches and labels can be provided for the same test task according to different test scripts, and version rollback is supported after the Jmeter mirror images are deployed, so that the test deployment is more convenient and faster.
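As a sketch of this label-based version management, assuming a standard docker CLI is available on the server, an image could be tagged per test task and pushed to a repository as follows. The registry address, tag naming scheme and function name are hypothetical:

```python
# Hypothetical sketch of label-based version management using the standard
# docker CLI; the registry address and tag naming scheme are assumptions.
import subprocess

def tag_and_push(local_image, registry, task, version):
    tag = f"{registry}/jmeter-{task}:{version}"   # e.g. .../jmeter-login:j20190123
    subprocess.run(["docker", "tag", local_image, tag], check=True)
    subprocess.run(["docker", "push", tag], check=True)
    return tag

# Example (hypothetical names):
# tag_and_push("jmeter:latest", "registry.example.com", "login", "j20190123")
```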
202. Setting up distributed nodes through an Elastic Compute Service (ECS) instance, wherein the distributed nodes comprise a master node and at least one slave node;
The server sets up the distributed nodes through an Elastic Compute Service (ECS) instance, wherein the distributed nodes comprise a master node and at least one slave node. The distributed nodes form a system of computer nodes that communicate over a network and work together to accomplish a common task. Distributed nodes exist to perform, with inexpensive and ordinary machines, computing and storage tasks that a single computer cannot perform. For example, the server deploys the configured template with the Jmeter image through the configuration management of a Container-as-a-Service (CAAS) platform. CAAS is an IT-managed and secured application environment in which applications can be built and deployed, and the Jmeter container distributed cluster can be managed and deployed through container-based virtualization.
It should be noted that, during actual deployment, the server dynamically adjusts the number of slave nodes according to the concurrency number among the performance test indexes; for example, if the target concurrency number is 1000 and the maximum concurrency number of each slave node is 200, at least 5 slave nodes are required, as the sketch below shows.
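This dynamic adjustment amounts to a ceiling division; a one-function sketch (names are ours):

```python
import math

def slaves_needed(target_concurrency, per_node_max):
    # At least ceil(target / per-node maximum) slave nodes are required.
    return math.ceil(target_concurrency / per_node_max)

print(slaves_needed(1000, 200))  # 5, matching the example above
```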
203. Determining whether the master node and each slave node all have the Jmeter image of the preset version;
The server determines whether the master node and each slave node all have the Jmeter image of the preset version. Specifically, the server checks whether the designated directories of the master node and each slave node contain the target image installation file. If a designated directory does not contain the target image installation file, the Jmeter image of the preset version does not exist on all nodes at the same time. If the designated directories of the master node and each slave node contain the target image installation file, the target image installation file is read to obtain the version number, and the server then determines whether this version number matches the version number of the Jmeter image of the preset version: if it matches, the Jmeter image of the preset version exists; if it does not match, the Jmeter image of the preset version does not exist on all nodes at the same time. For example, on slave node B the version number in the target image installation file is j20190122 while the version number of the Jmeter image of the preset version is j20190123; the version numbers do not match, so the Jmeter image of the preset version does not exist on slave node B.
204. If the master node and each slave node do not all have the Jmeter image of the preset version, deploying the Jmeter image of the preset version in a preset manner to obtain a deployment result;
If the master node and each slave node do not all have the Jmeter image of the preset version, the Jmeter image of the preset version is deployed in a preset manner to obtain a deployment result. Specifically, if the master node and each slave node do not all have the Jmeter image of the preset version, the server deploys the Jmeter image of the preset version to the ECS instance to obtain a Jmeter application; the server runs the Jmeter application through a docker container; the server adds each slave node to the control list of the master node; and the server restarts the Jmeter application of the master node to obtain the deployment result. One ECS instance is equivalent to one virtual machine and comprises basic computing components such as CPU, memory, operating system, network and disk. ECS instances can be divided into multiple specification families according to the business scenario and usage scenario; under the same business scenario, several new and old specification families are available, and within the same specification family the instances can be divided into different specifications according to the CPU and memory configuration. The ECS instance specification defines the CPU and memory configuration of an instance, including the CPU model, clock frequency, and so on.
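In stock Jmeter, the master's control list of slaves is the remote_hosts property in jmeter.properties, each entry being an IP address and port (1099 is Jmeter's default RMI port). A sketch of regenerating that line from the slave configuration follows; the file path and the data shape of the slave list are assumptions:

```python
# Sketch: write the master's control list as Jmeter's standard remote_hosts
# property; the slave list and the properties-file path are assumptions.

def write_control_list(slaves, properties_path="jmeter.properties"):
    # slaves: list of (ip, port) tuples, one per slave node
    entry = "remote_hosts=" + ",".join(f"{ip}:{port}" for ip, port in slaves)
    with open(properties_path, "a") as f:
        f.write(entry + "\n")
    # Per the text, the master's Jmeter application must then be restarted
    # for the new control list to take effect.

# Example (hypothetical addresses, Jmeter's default RMI port 1099):
# write_control_list([("10.0.0.2", 1099), ("10.0.0.3", 1099)])
```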
Optionally, the server determines whether the deployment result is a target value, where the target value indicates that the Jmeter image of the preset version is deployed successfully; if the deployment result is not the target value, the server redeploys the Jmeter image of the preset version; and if the deployment result is the target value, the deployment is determined to be successful. Specifically, the server queries the state of the deployed Jmeter image through the Kubernetes platform and judges accordingly whether the Jmeter image is deployed successfully. Kubernetes is an open-source container orchestration engine that supports automated deployment, large-scale scalability and application containerization management. When an application is deployed in a production environment, multiple instances of the application are typically deployed to load-balance application requests. For example, several containers are created in Kubernetes, a Jmeter application runs in each container, and the Jmeter applications are managed and accessed through the built-in load-balancing strategy.
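Where Kubernetes is used as described above, one way to derive the "target value" deployment result is to compare ready replicas with desired replicas. A sketch assuming the official kubernetes Python client, with a hypothetical deployment name and namespace:

```python
# Sketch using the official kubernetes Python client; the deployment name
# and namespace are hypothetical. Returns True when every Jmeter replica is
# ready, which here stands in for the "target value" deployment result.
from kubernetes import client, config

def jmeter_deployed(name="jmeter", namespace="default"):
    config.load_kube_config()
    dep = client.AppsV1Api().read_namespaced_deployment(name, namespace)
    return (dep.status.ready_replicas or 0) == dep.spec.replicas
```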
205. If the master node and each slave node all have the Jmeter image of the preset version, determining that the deployment is successful;
If the master node and each slave node all have the Jmeter image of the preset version, the server determines that the deployment is successful. Specifically, the server cyclically traverses the master node and each slave node while deploying the Jmeter image of the preset version, and checks the deployment results one by one; once all deployments have succeeded, step 206 is executed.
206. When the master node receives a test starting instruction, sending the corresponding test script to each slave node through the master node;
When the master node receives the test starting instruction, the server sends the corresponding test script to each slave node through the master node. Specifically, the server sets the master node as the client of Remote Method Invocation (RMI); the server sets each slave node as a server side of RMI; when the master node receives the test starting instruction, the server executes a test plan through the master node, wherein the test plan comprises total test data and a test script; the server parses the control list of the master node to obtain the configuration information corresponding to each slave node; the server splits the total test data according to the configuration information corresponding to each slave node to obtain the test data corresponding to each slave node; and the server replaces the keywords in the test script according to the test data corresponding to each slave node to obtain the corresponding test scripts. For example, test scripts 2, 3, 4 and 5 corresponding to the slave nodes are expanded from test script 1; the test tasks executed by test scripts 2, 3, 4 and 5 are the same, but the test data differ, as sketched below. The server then pushes the corresponding test script to each slave node through the master node by means of RMI. The master node communicates with each slave node, but the slave nodes do not communicate with each other. For example, suppose 700 test requests are to be sent concurrently and the distributed nodes include a master node A and slave nodes B, C, D and E; the server performs a calculation according to the configuration information of the slave nodes B, C, D and E, allocates 180, 170, 190 and 160 test requests to them respectively, and sends the corresponding test scripts containing those test requests to the slave nodes B, C, D and E through the master node A.
It should be noted that Remote Method Invocation (RMI) is an application programming interface that implements remote procedure calls: it lets a program running on a client invoke objects on a remote server, i.e. program-level objects stored in different address spaces can communicate with each other, and RMI supports distributed operation in a network environment. An RMI setup includes a server running the remote method invocation service and a client that calls it. For example, the master node A is set as the RMI client, the slave nodes B, C, D and E are set as RMI servers, and the master node A and the slave nodes B, C, D and E communicate with each other by means of RMI.
207. When each slave node receives its corresponding test script, performing the performance test through each slave node to obtain the corresponding test result;
When each slave node receives its corresponding test script, the server performs the performance test through each slave node to obtain the corresponding test result. Specifically, when each slave node receives its corresponding test script, the server parses the corresponding test script through each slave node to obtain the corresponding test request; the corresponding test request is executed through each slave node to obtain the corresponding test result; and the server records the corresponding test result through each slave node, for example in a database or in a file under a designated directory, which is not limited here.
It should be noted that the performance test indexes include the concurrency number, the average response time, the request success rate, and the number of requests or transactions per second, i.e. QPS/TPS. QPS/TPS is related to the concurrency number and the average response time: QPS/TPS equals the concurrency number divided by the average response time. For example, suppose employees may log into the check-in system to check in during the 30 minutes before 9 o'clock in the morning, the company has 2000 employees, and the average time for each person to log into the check-in system is 5 minutes. The calculation is as follows: 30 minutes is 1800 seconds, so QPS is 2000 divided by 1800; with an average response time of 3 minutes, i.e. 180 seconds, the calculated concurrency number is 200.
208. Sending the test result corresponding to each slave node to the master node;
The server sends the corresponding test result to the master node through each slave node. Specifically, the server obtains the corresponding test result through each slave node and sends it to the master node through each slave node by means of the RMI service. The test results all come from test requests under the same test plan but differ from node to node, and each includes normal test data and abnormal test data. For example, if the numbers of test requests of slave nodes B, C, D and E are 180, 170, 190 and 160 respectively, then the numbers of test results of slave nodes B, C, D and E are 180, 170, 190 and 160 respectively.
209. Counting the test results corresponding to each slave node through the master node to obtain a performance test result.
The server counts the test results corresponding to each slave node through the master node to obtain the performance test result. Specifically, the server receives the test results of the slave nodes through the master node, and calculates and classifies them according to a preset statistical script to obtain the performance test result, as sketched below. Further, the server displays the performance test result through the Jmeter application of the master node.
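The statistics step above, classifying each slave's results and computing the overall indexes, could look like the following sketch; the shape of a result record (a success flag plus an elapsed time) is an assumption:

```python
# Sketch of the master's statistics step; the shape of each result record
# (success flag and elapsed time in milliseconds) is an assumption.

def summarize(results):
    """results: list of dicts like {"success": bool, "elapsed_ms": float},
    gathered from every slave node."""
    total = len(results)
    ok = [r for r in results if r["success"]]        # normal test data
    failed = total - len(ok)                          # abnormal test data
    avg_ms = sum(r["elapsed_ms"] for r in ok) / len(ok) if ok else 0.0
    return {
        "requests": total,
        "success_rate": len(ok) / total if total else 0.0,
        "failures": failed,
        "avg_response_ms": avg_ms,
    }
```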
It should be noted that the throughput of each slave node is closely related to how much CPU each user request consumes: the more CPU a single user request consumes, the lower the throughput of the slave node, i.e. the fewer concurrent requests it can process.
In the embodiment of the invention, version control of the Jmeter image is performed through the image repository and version rollback is supported; at the same time, container-based distributed nodes are used to deploy the Jmeter image and perform the performance test, processes in different containers cannot affect each other, computing resources can be isolated, and a container can be deployed quickly compared with a virtual machine. Because containers are decoupled from the underlying infrastructure and the machine file system, they can be migrated between different clouds and different versions of operating systems, which simplifies the deployment process and improves test efficiency.
The Jmeter-based distributed performance testing method in the embodiment of the present invention has been described above; the Jmeter-based distributed performance testing apparatus in the embodiment of the present invention is described below. Referring to FIG. 3, an embodiment of the Jmeter-based distributed performance testing apparatus in the embodiment of the present invention includes:
a deployment unit 301, configured to deploy a Jmeter image of a preset version to distributed nodes, where the distributed nodes comprise a master node and at least one slave node;
a control unit 302, configured to send the corresponding test script to each slave node through the master node when the master node receives a test starting instruction;
a test unit 303, configured to perform a performance test through each slave node to obtain the corresponding test result when each slave node receives its corresponding test script;
a sending unit 304, configured to send the test result corresponding to each slave node to the master node;
a statistics unit 305, configured to count the corresponding test results through the master node to obtain a performance test result.
In the embodiment of the invention, version control of the Jmeter image is performed through the image repository and version rollback is supported; at the same time, container-based distributed nodes are used to deploy the Jmeter image and perform the performance test, processes in different containers cannot affect each other, computing resources can be isolated, and a container can be deployed quickly compared with a virtual machine. Because containers are decoupled from the underlying infrastructure and the machine file system, they can be migrated between different clouds and different versions of operating systems, which simplifies the deployment process and improves test efficiency.
Referring to FIG. 4, another embodiment of the Jmeter-based distributed performance testing apparatus according to the embodiment of the present invention includes:
a deployment unit 301, configured to deploy a Jmeter image of a preset version to distributed nodes, where the distributed nodes comprise a master node and at least one slave node;
a control unit 302, configured to send the corresponding test script to each slave node through the master node when the master node receives a test starting instruction;
a test unit 303, configured to perform a performance test through each slave node to obtain the corresponding test result when each slave node receives its corresponding test script;
a sending unit 304, configured to send the test result corresponding to each slave node to the master node;
a statistics unit 305, configured to count the test results corresponding to each slave node through the master node to obtain a performance test result.
Optionally, the deployment unit 301 may further include:
a selecting subunit 3011, configured to select the Jmeter image of the preset version from an image repository;
a setting subunit 3012, configured to set up the distributed nodes through an Elastic Compute Service (ECS) instance, where the distributed nodes include the master node and the at least one slave node;
a first judging subunit 3013, configured to determine whether the master node and each slave node all have the Jmeter image of the preset version;
a deployment subunit 3014, configured to deploy the Jmeter image of the preset version in a preset manner to obtain a deployment result if the master node and each slave node do not all have the Jmeter image of the preset version;
a confirmation subunit 3015, configured to determine that the deployment is successful if the master node and each slave node all have the Jmeter image of the preset version.
Optionally, the deployment subunit 3014 may further be specifically configured to:
if the master node and each slave node do not all have the Jmeter image of the preset version, deploy the Jmeter image of the preset version to the ECS instance to obtain a Jmeter application;
run the Jmeter application through a docker container;
add configuration information corresponding to each slave node into the control list of the master node, where the configuration information corresponding to each slave node includes the IP address and port corresponding to that slave node;
and restart the Jmeter application of the master node to obtain the deployment result.
Optionally, the deployment unit 301 may further include:
a second judging subunit 3016, configured to determine whether the deployment result is a target value, where the target value indicates that the Jmeter image of the preset version is deployed successfully;
a first processing subunit 3017, configured to redeploy if the deployment result is not the target value;
a second processing subunit 3018, configured to determine that the deployment is successful if the deployment result is the target value.
Optionally, the control unit 302 may further be specifically configured to:
set the master node as the client of Remote Method Invocation (RMI);
set each slave node as a server side of RMI;
when the master node receives a test starting instruction, execute a test plan through the master node, where the test plan includes total test data and a test script;
parse the control list of the master node to obtain the configuration information corresponding to each slave node;
split the total test data according to the configuration information corresponding to each slave node to obtain the test data corresponding to each slave node;
replace the keywords in the test script according to the test data corresponding to each slave node to obtain the corresponding test scripts;
and push the corresponding test script to each slave node through the master node by means of RMI.
Optionally, the test unit 303 may further be specifically configured to:
when each slave node receives its corresponding test script, parse the corresponding test script through each slave node to obtain the corresponding test request;
and execute the corresponding test request through each slave node to obtain the corresponding test result.
Optionally, the Jmeter-based distributed performance testing apparatus may further include:
an uploading unit 306, configured to upload the Jmeter image of the preset version to the image repository;
and a management unit 307, configured to perform version management on the Jmeter image of the preset version by means of labels.
In the embodiment of the invention, version control of the Jmeter image is performed through the image repository and version rollback is supported; at the same time, container-based distributed nodes are used to deploy the Jmeter image and perform the performance test, processes in different containers cannot affect each other, computing resources can be isolated, and a container can be deployed quickly compared with a virtual machine. Because containers are decoupled from the underlying infrastructure and the machine file system, they can be migrated between different clouds and different versions of operating systems, which simplifies the deployment process and improves test efficiency.
FIG. 3 and FIG. 4 describe the Jmeter-based distributed performance testing apparatus in the embodiment of the present invention in detail from the perspective of modular functional entities; the Jmeter-based distributed performance testing device in the embodiment of the present invention is described in detail below from the perspective of hardware processing.
Fig. 5 is a schematic structural diagram of a meter-based distributed performance testing apparatus 500 according to an embodiment of the present invention, where the meter-based distributed performance testing apparatus 500 may have a relatively large difference due to different configurations or performances, and may include one or more processors (CPUs) 501 (e.g., one or more processors) and a memory 509, and one or more storage media 508 (e.g., one or more mass storage devices) for storing applications 509 or data 509. Memory 509 and storage medium 508 may be, among other things, transient storage or persistent storage. The program stored on storage medium 508 may include one or more modules (not shown), each of which may include a sequence of instructions operating on Jmeter-based distributed performance testing. Still further, the processor 501 may be configured to communicate with the storage medium 508 to execute a series of instruction operations in the storage medium 508 on the Jmeter-based distributed performance testing device 500.
The Jmeter-based distributed performance testing device 500 may also include one or more power supplies 502, one or more wired or wireless network interfaces 503, one or more input/output interfaces 504, and/or one or more operating systems 505, such as Windows Server, Mac OS X, Unix, Linux, or FreeBSD. Those skilled in the art will appreciate that the device structure shown in Fig. 5 does not constitute a limitation of the Jmeter-based distributed performance testing device, which may include more or fewer components than shown, combine some components, or arrange the components differently.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only one type of logical functional division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in an electrical, mechanical, or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present invention that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above-mentioned embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features thereof may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (9)

1. A distributed performance testing method based on Jmeter, characterized by comprising the following steps:
deploying a Jmeter image of a predicted version to distributed nodes, wherein the distributed nodes comprise a master node and at least one slave node;
the deploying the Jmeter image of the predicted version to the distributed nodes, the distributed nodes comprising a master node and at least one slave node, comprises:
selecting the Jmeter image of the predicted version from an image repository;
setting the distributed nodes through Elastic Compute Service (ECS) instances, and dynamically adjusting the number of slave nodes according to the concurrency number among the performance test indicators, wherein the distributed nodes comprise the master node and the at least one slave node, and the performance test indicators comprise the concurrency number, the average response time, the request success rate, the requests per second, or the number of transactions;
judging whether the master node and each slave node simultaneously have the Jmeter image of the predicted version;
the judging whether the master node and each slave node simultaneously have the Jmeter image of the predicted version comprises:
judging whether the designated directories of the master node and each slave node contain a target image installation file;
if the designated directories of the master node and each slave node do not contain the target image installation file, determining that the Jmeter image of the predicted version does not exist simultaneously;
if the designated directories of the master node and each slave node contain the target image installation file, reading the target image installation file to obtain a version number;
judging whether the version number matches the version number of the Jmeter image of the predicted version;
if the version number matches the version number of the Jmeter image of the predicted version, determining that the Jmeter image of the predicted version exists;
if the version number does not match the version number of the Jmeter image of the predicted version, determining that the Jmeter image of the predicted version does not exist simultaneously;
if the master node and each slave node do not simultaneously have the Jmeter image of the predicted version, deploying the Jmeter image of the predicted version in a preset manner to obtain a deployment result;
if the master node and each slave node simultaneously have the Jmeter image of the predicted version, determining that the deployment is successful;
when the master node receives a test starting instruction, sending, by the master node, the respective corresponding test scripts to the slave nodes, wherein the master node and the slave nodes communicate by means of remote method invocation;
when each slave node receives the corresponding test script, performing a performance test through each slave node to obtain the corresponding test result;
sending the respective corresponding test results to the master node through each slave node;
and aggregating the respective corresponding test results through the master node to obtain a performance test result.
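A minimal sketch of the per-node image check recited above might look as follows; storing the version number in a marker file named jmeter-image.version under the designated directory is an assumption made for illustration, not a detail from the claim.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ImageCheck {

    // True only if the node's designated directory contains the target image
    // installation file and its recorded version matches the predicted one.
    static boolean hasPredictedVersion(String designatedDir, String predictedVersion)
            throws IOException {
        Path installFile = Paths.get(designatedDir, "jmeter-image.version");
        if (!Files.exists(installFile)) {
            return false; // no target installation file at all
        }
        String version = Files.readString(installFile).trim();
        return version.equals(predictedVersion);
    }
}
```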
2. The Jmeter-based distributed performance testing method according to claim 1, wherein, if the master node and each slave node do not simultaneously have the Jmeter image of the predicted version, the deploying the Jmeter image of the predicted version in a preset manner to obtain a deployment result comprises:
if the master node and each slave node do not simultaneously have the Jmeter image of the predicted version, deploying the Jmeter image of the predicted version to the ECS instances to obtain a Jmeter application;
running the Jmeter application through a Docker container;
adding the configuration information corresponding to each slave node into a control list of the master node, wherein the configuration information corresponding to each slave node comprises the IP address and the port corresponding to each slave node;
and restarting the Jmeter application of the master node to obtain the deployment result.
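Stock Jmeter keeps the master's list of slave addresses in the remote_hosts property of jmeter.properties, which is the natural analogue of the control list recited above. The sketch below rewrites that property with the slaves' IP:port entries; the file path and helper names are illustrative, and the master's Jmeter application would then be restarted to pick up the change.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class ControlList {

    // Rewrite the remote_hosts property with the slaves' "ip:port" entries.
    static void setSlaves(Path jmeterProperties, List<String> ipPortEntries)
            throws IOException {
        List<String> lines = new ArrayList<>(Files.readAllLines(jmeterProperties));
        lines.removeIf(l -> l.startsWith("remote_hosts=")); // drop old entry
        lines.add("remote_hosts=" + String.join(",", ipPortEntries));
        Files.write(jmeterProperties, lines);
    }
}
```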
3. The Jmeter-based distributed performance testing method according to claim 1, wherein, after the deploying the Jmeter image of the predicted version in a preset manner to obtain a deployment result if the master node and each slave node do not simultaneously have the Jmeter image of the predicted version, the method further comprises:
judging whether the deployment result is a target value, wherein the target value is used for indicating that the Jmeter image of the predicted version is deployed successfully;
if the deployment result is not the target value, performing redeployment;
and if the deployment result is the target value, determining that the deployment is successful.
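The check-and-redeploy step recited above can be sketched as a simple bounded retry; the target value 0 and the retry budget are assumptions for illustration, not values from the claim.

```java
import java.util.function.IntSupplier;

public class DeployCheck {

    static final int TARGET_VALUE = 0; // assumed code meaning "deployed successfully"

    // Redeploy until the deployment result equals the target value, up to a
    // bounded number of attempts so a broken node cannot loop forever.
    static boolean deployUntilSuccess(IntSupplier deploy, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (deploy.getAsInt() == TARGET_VALUE) {
                return true;
            }
        }
        return false;
    }
}
```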
4. The Jmeter-based distributed performance testing method according to claim 2, wherein the sending, by the master node, the respective corresponding test scripts to the slave nodes when the master node receives a test starting instruction comprises:
setting the master node as a client of remote method invocation (RMI);
setting each slave node as a server of the RMI;
when the master node receives the test starting instruction, executing a test plan through the master node, wherein the test plan comprises total test data and a test script;
parsing the control list of the master node to obtain the configuration information corresponding to each slave node;
splitting the total test data according to the configuration information corresponding to each slave node to obtain the test data corresponding to each slave node;
replacing the keywords in the test script according to the test data corresponding to each slave node to obtain the test script corresponding to each slave node;
and pushing the corresponding test script to each slave node through the master node by means of the RMI.
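The push step recited above can be illustrated with the plain java.rmi API: the master acts as the RMI client and looks up a stub exported by each slave. The ScriptReceiver interface and the "scriptReceiver" binding name are assumptions for the sketch, not Jmeter's internal protocol; on the slave (server) side, an implementation would be exported with UnicastRemoteObject and bound under the same name.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

// Remote interface assumed for the sketch; each slave exports an
// implementation of it as an RMI server.
interface ScriptReceiver extends Remote {
    void receiveScript(String script) throws RemoteException;
}

public class RmiPush {

    // Master side (RMI client): look up one slave's stub by its IP and
    // registry port from the control list, then push that slave's script.
    public static void push(String slaveIp, int slavePort, String script) throws Exception {
        Registry registry = LocateRegistry.getRegistry(slaveIp, slavePort);
        ScriptReceiver slave = (ScriptReceiver) registry.lookup("scriptReceiver");
        slave.receiveScript(script);
    }
}
```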
5. The Jmeter-based distributed performance testing method according to claim 4, wherein the performing a performance test through each slave node to obtain the corresponding test result when each slave node receives the corresponding test script comprises:
when each slave node receives the corresponding test script, parsing the corresponding test script through each slave node to obtain the corresponding test request;
executing the respective corresponding test request through each slave node to obtain the respective corresponding test result;
and recording the respective corresponding test results through each slave node.
6. The Jmeter-based distributed performance testing method according to any one of claims 1 to 5, wherein, before the deploying the Jmeter image of the predicted version to the distributed nodes, the distributed nodes comprising a master node and at least one slave node, the Jmeter-based distributed performance testing method further comprises:
uploading the Jmeter image of the predicted version to the image repository;
and performing version management on the Jmeter image of the predicted version by means of labeling.
7. A Jmeter-based distributed performance testing apparatus, comprising:
a deployment unit, configured to deploy a Jmeter image of a predicted version to distributed nodes, wherein the distributed nodes comprise a master node and at least one slave node;
the deploying the Jmeter image of the predicted version to the distributed nodes, the distributed nodes comprising a master node and at least one slave node, comprises:
selecting the Jmeter image of the predicted version from an image repository;
setting the distributed nodes through Elastic Compute Service (ECS) instances, and dynamically adjusting the number of slave nodes according to the concurrency number among the performance test indicators, wherein the distributed nodes comprise the master node and the at least one slave node, and the performance test indicators comprise the concurrency number, the average response time, the request success rate, the requests per second, or the number of transactions;
judging whether the master node and each slave node simultaneously have the Jmeter image of the predicted version;
the judging whether the master node and each slave node simultaneously have the Jmeter image of the predicted version comprises:
judging whether the designated directories of the master node and each slave node contain a target image installation file;
if the designated directories of the master node and each slave node do not contain the target image installation file, determining that the Jmeter image of the predicted version does not exist simultaneously;
if the designated directories of the master node and each slave node contain the target image installation file, reading the target image installation file to obtain a version number;
judging whether the version number matches the version number of the Jmeter image of the predicted version;
if the version number matches the version number of the Jmeter image of the predicted version, determining that the Jmeter image of the predicted version exists;
if the version number does not match the version number of the Jmeter image of the predicted version, determining that the Jmeter image of the predicted version does not exist simultaneously;
if the master node and each slave node do not simultaneously have the Jmeter image of the predicted version, deploying the Jmeter image of the predicted version in a preset manner to obtain a deployment result;
if the master node and each slave node simultaneously have the Jmeter image of the predicted version, determining that the deployment is successful;
a control unit, configured to send, when the master node receives a test starting instruction, the respective corresponding test scripts to the slave nodes through the master node, wherein the master node and the slave nodes communicate by means of remote method invocation;
a test unit, configured to perform, when each slave node receives the corresponding test script, a performance test through each slave node to obtain the corresponding test result;
a sending unit, configured to send the respective corresponding test results to the master node through each slave node;
and a statistics unit, configured to aggregate the respective corresponding test results through the master node to obtain a performance test result.
8. A Jmeter-based distributed performance testing device, comprising: a memory having instructions stored therein and at least one processor, the memory and the at least one processor being interconnected through a line;
the at least one processor invokes the instructions in the memory to cause the Jmeter-based distributed performance testing device to perform the method according to any one of claims 1-6.
9. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1-6.
CN201910523487.4A 2019-06-17 2019-06-17 Distributed performance testing method, device, equipment and storage medium based on Jmeter Active CN110417613B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910523487.4A CN110417613B (en) 2019-06-17 2019-06-17 Distributed performance testing method, device, equipment and storage medium based on Jmeter
PCT/CN2019/119070 WO2020253079A1 (en) 2019-06-17 2019-11-18 Jmeter-based distributed performance test method and apparatus, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910523487.4A CN110417613B (en) 2019-06-17 2019-06-17 Distributed performance testing method, device, equipment and storage medium based on Jmeter

Publications (2)

Publication Number Publication Date
CN110417613A CN110417613A (en) 2019-11-05
CN110417613B true CN110417613B (en) 2022-11-29

Family

ID=68359200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910523487.4A Active CN110417613B (en) 2019-06-17 2019-06-17 Distributed performance testing method, device, equipment and storage medium based on Jmeter

Country Status (2)

Country Link
CN (1) CN110417613B (en)
WO (1) WO2020253079A1 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110417613B (en) * 2019-06-17 2022-11-29 平安科技(深圳)有限公司 Distributed performance testing method, device, equipment and storage medium based on Jmeter
CN110727570A (en) * 2019-10-11 2020-01-24 重庆紫光华山智安科技有限公司 Concurrent pressure measurement method and related device
CN111078516A (en) * 2019-11-26 2020-04-28 支付宝(杭州)信息技术有限公司 Distributed performance test method and device and electronic equipment
CN111400192A (en) * 2020-04-02 2020-07-10 北京达佳互联信息技术有限公司 Service program performance testing method and device, electronic equipment and storage medium
CN111817913B (en) * 2020-06-30 2022-05-17 北京红山信息科技研究院有限公司 Distributed network performance test method, system, server and storage medium
CN111858352B (en) * 2020-07-22 2024-04-05 中国平安财产保险股份有限公司 Method, device, equipment and storage medium for automatic test monitoring
CN112346979A (en) * 2020-11-11 2021-02-09 杭州飞致云信息科技有限公司 Software performance testing method, system and readable storage medium
CN112596750B (en) * 2020-12-28 2022-04-26 上海安畅网络科技股份有限公司 Application testing method and device, electronic equipment and computer readable storage medium
CN112817858B (en) * 2021-02-05 2024-04-19 深圳市世强元件网络有限公司 Method and computer equipment for batch generation of test data based on Jmeter
CN113204410B (en) * 2021-05-31 2024-01-30 平安科技(深圳)有限公司 Container type localization deployment method, system, equipment and storage medium
CN113704358A (en) * 2021-09-02 2021-11-26 湖南麒麟信安科技股份有限公司 Distributed task cooperative processing method and device and computer equipment
CN114116487B (en) * 2021-11-29 2024-03-15 北京百度网讯科技有限公司 Pressure testing method and device, electronic equipment and storage medium
CN114546852B (en) * 2022-02-21 2024-04-09 北京百度网讯科技有限公司 Performance test method and device, electronic equipment and storage medium
CN117234827B (en) * 2023-11-14 2024-02-13 武汉凌久微电子有限公司 Multi-platform automatic test method and system based on domestic graphic processor

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105471675A (en) * 2015-11-20 2016-04-06 浪潮电子信息产业股份有限公司 Method and system of testing nodes in batches
CN107688526A (en) * 2017-08-25 2018-02-13 上海壹账通金融科技有限公司 Performance test methods, device, computer equipment and the storage medium of application program
CN108038013A (en) * 2017-11-30 2018-05-15 海尔优家智能科技(北京)有限公司 Distributed performance test method and device and computer-readable recording medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI476586B (en) * 2011-07-13 2015-03-11 Inst Information Industry Cloud-based test system, method and computer readable storage medium storing thereof
CN106936636B (en) * 2017-03-15 2019-08-30 无锡华云数据技术服务有限公司 A kind of implementation method of the cloud computing test platform of rapid deployment containerization
CN109739744B (en) * 2018-12-05 2022-04-22 北京奇艺世纪科技有限公司 Test system and method
CN110417613B (en) * 2019-06-17 2022-11-29 平安科技(深圳)有限公司 Distributed performance testing method, device, equipment and storage medium based on Jmeter

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105471675A (en) * 2015-11-20 2016-04-06 浪潮电子信息产业股份有限公司 Method and system of testing nodes in batches
CN107688526A (en) * 2017-08-25 2018-02-13 上海壹账通金融科技有限公司 Performance test methods, device, computer equipment and the storage medium of application program
CN108038013A (en) * 2017-11-30 2018-05-15 海尔优家智能科技(北京)有限公司 Distributed performance test method and device and computer-readable recording medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Software Testing Optimization Based on Container Technology; Liu Qianchao et al.; Computer Technology and Development; 2018-12-20 (No. 04); Section 3 of the main text *

Also Published As

Publication number Publication date
WO2020253079A1 (en) 2020-12-24
CN110417613A (en) 2019-11-05

Similar Documents

Publication Publication Date Title
CN110417613B (en) Distributed performance testing method, device, equipment and storage medium based on Jmeter
US11797395B2 (en) Application migration between environments
US11593149B2 (en) Unified resource management for containers and virtual machines
US11288053B2 (en) Conversion and restoration of computer environments to container-based implementations
CN109062655B (en) Containerized cloud platform and server
US11663085B2 (en) Application backup and management
EP2633400B1 (en) Stateful applications operating in a stateless cloud computing environment
Beloglazov et al. OpenStack Neat: a framework for dynamic and energy‐efficient consolidation of virtual machines in OpenStack clouds
US9128742B1 (en) Systems and methods for enhancing virtual machine backup image data
US10929048B2 (en) Dynamic multiple proxy deployment
US11016855B2 (en) Fileset storage and management
US10909000B2 (en) Tagging data for automatic transfer during backups
US10860427B1 (en) Data protection in a large-scale cluster environment
CN107533503A (en) The method and apparatus that virtualized environment is selected during deployment
US20210133326A1 (en) Managing software vulnerabilities
CN108009004B (en) Docker-based method for realizing measurement and monitoring of availability of service application
US10735540B1 (en) Automated proxy selection and switchover
US20220182290A1 (en) Status sharing in a resilience framework
CN115129542A (en) Data processing method, data processing device, storage medium and electronic device
US20210067599A1 (en) Cloud resource marketplace
KR102231358B1 (en) Single virtualization method and system for HPC cloud service
Uyar et al. Twister2 Cross‐platform resource scheduler for big data
US20240103980A1 (en) Creating a transactional consistent snapshot copy of a sql server container in kubernetes
US11940884B2 (en) Containerized data mover for data protection workloads
Seuster et al. Context-aware distributed cloud computing using CloudScheduler

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant