CN115982049A - Anomaly detection method and device in performance test and computer equipment

Anomaly detection method and device in performance test and computer equipment

Info

Publication number
CN115982049A
Authority
CN
China
Prior art keywords
test
target
time point
running state
state information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310079120.4A
Other languages
Chinese (zh)
Inventor
段晗
张�浩
傅媛媛
丘士丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202310079120.4A priority Critical patent/CN115982049A/en
Publication of CN115982049A publication Critical patent/CN115982049A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Debugging And Monitoring (AREA)

Abstract

The application relates to an anomaly detection method, an anomaly detection device, a computer apparatus, a storage medium, and a computer program product for performance testing. The method comprises: sending a test message to a target execution device in a service system, instructing the target execution device to acquire a test script based on a test identifier and start a target performance test; comparing the running state information at the current time point of the target performance test with the running state prediction information for the corresponding time point predicted by a target long-short term memory model; and, if the comparison result indicates that the running state information at the current time point is abnormal running state information, collecting and storing the running snapshots corresponding to the target execution device. Compared with the traditional approach of deciding, only after a complete stress test, whether retesting is needed for an anomaly, the method trains a target long-short term memory model, detects an abnormal test state in time at the corresponding time point while the target execution device is executing the test, and promptly captures the corresponding running snapshot, thereby improving anomaly detection efficiency.

Description

Anomaly detection method and device in performance test and computer equipment
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a method and an apparatus for detecting an anomaly in a performance test, a computer device, a storage medium, and a computer program product.
Background
In the development process of a business system, performance tests need to be performed on the code of the business system. During a performance test, the performance of the service system may fall short of expectations; at this point, the running information of the system during the test needs to be detected and collected so that the performance anomaly of the service system can be analyzed and resolved. At present, anomaly detection in a performance test works as follows: after a performance tester runs a stress test on the system according to the developer's requirements, the developer decides whether anomaly detection is needed based on the result of the stress test. However, this approach of first stress testing and then deciding, from the test result, whether to test again for the abnormal information reduces the efficiency of detecting the abnormal information.
Therefore, existing anomaly detection methods in performance testing suffer from low detection efficiency.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a method, an apparatus, a computer device, a computer-readable storage medium, and a computer program product for detecting an abnormality in a performance test, which can improve detection efficiency.
In a first aspect, the present application provides a method for detecting an anomaly in a performance test, the method comprising:
receiving a test instruction aiming at the target performance test, and determining target execution equipment from a plurality of execution equipment contained in the service system;
sending a corresponding test message to the target execution device, where the test message carries a test identifier corresponding to the target performance test, and the test message is used to instruct the target execution device to obtain a corresponding test script based on the test identifier, and execute the target performance test based on the test script;
acquiring the running state information of the current time point in the process of executing the target performance test by the target execution equipment, and comparing the running state information of the current time point with the running state prediction information of the corresponding time point; the running state prediction information corresponding to the time point is obtained based on a target long-term and short-term memory model and model output when the test script is used as model input; the target long-short term memory model is obtained by training according to a plurality of historical test reports corresponding to the test script;
and if the comparison result represents that the running state information of the current time point is abnormal running state information, collecting a running snapshot corresponding to the target execution equipment, and storing the running snapshot.
In one embodiment, the determining a target execution device from a plurality of execution devices included in the service system includes:
acquiring the device state of each execution device in the service system;
and determining an execution device whose device state is non-faulty and idle as the target execution device.
In one embodiment, the sending the corresponding test message to the target execution device includes:
acquiring an equipment identifier corresponding to the target execution equipment, and generating a test message of an execution type according to the equipment identifier and a test identifier corresponding to the target performance test;
and broadcasting the test message of the execution type to each execution device, wherein the test message of the execution type is used for indicating each execution device receiving the test message to compare whether the device identifier is consistent with the self identifier, acquiring a corresponding test script according to the test identifier under the condition of consistency, and executing the target performance test based on the test script.
In one embodiment, the method further comprises:
obtaining a plurality of historical test reports corresponding to the test script; the plurality of historical test reports comprise a plurality of qualified running state information obtained by a plurality of historical time point tests; the plurality of historical time points represent a plurality of time points in a test period corresponding to the test script;
inputting the plurality of historical time points and the qualified running state information corresponding to the historical time points into a long-short term memory model to be trained, and outputting running state prediction information of each historical time point corresponding to the test script by the long-short term memory model to be trained based on the plurality of historical time points and the plurality of qualified running state information corresponding to the plurality of historical time points;
and adjusting the model parameters of the long-short term memory model to be trained according to the similarity between the running state prediction information of each historical time point and the qualified running state information of each time point until the training condition is met, and obtaining the target long-short term memory model.
In one embodiment, the inputting the qualified operating state information corresponding to the plurality of historical time points and each historical time point into the long-short term memory model to be trained includes:
aiming at each historical test report, acquiring at least one of the number of threads, memory information, the number of transmission control protocol connections, response time, the number of service processes in unit time, the number of service process failures and request time consumption corresponding to each historical time point in the historical test report as qualified running state information corresponding to each historical time point;
and inputting qualified running state information corresponding to each historical time point into a long-short term memory model to be trained, extracting the relation between running state characteristics and the time points by the long-short term memory model to be trained based on each historical time point and the qualified running state information corresponding to each historical time point, and outputting running state prediction information of each historical time point corresponding to the test script based on the relation.
In one embodiment, the comparing the operating state information of the current time point with the operating state prediction information of the corresponding time point includes:
obtaining the similarity between the running state information of the current time point and the running state prediction information of the corresponding time point in each time point output by the target long-short term memory model;
and if the similarity corresponding to the current time point is detected to be smaller than a preset similarity threshold, determining that the running state information of the current time point represented by the comparison result is abnormal running state information.
In one embodiment, the acquiring a running snapshot corresponding to the target execution device and saving the running snapshot includes:
sending a file grabbing message to the target execution device; the file capture message comprises capture frequency and address information of a file transfer station, and is used for indicating the target execution equipment to capture a corresponding heap dump file and a thread snapshot file according to the capture frequency and sending the heap dump file and the thread snapshot file to the file transfer station;
wherein, the heap dump file comprises a memory stack snapshot when the target execution device executes the test script; the thread snapshot file comprises an execution stack of each thread corresponding to the processor when the target execution device executes the test script;
the file transfer station is used for storing the heap dump file and the thread snapshot file into a database.
In one embodiment, the method further comprises:
acquiring heartbeat information of each target execution device, and determining whether each target execution device fails or not based on the heartbeat information;
after the sending of the corresponding test message to the target execution device, the method further includes:
if the target execution device is determined to be faulty, generating a corresponding test cancellation message according to the device identifier of the target execution device and a test cancellation instruction;
and sending the test cancellation message to the target execution device, where the test cancellation message is used to instruct the target execution device to stop executing the target performance test when it determines that the device identifier matches its own identifier.
In a second aspect, the present application provides an anomaly detection apparatus in a performance test, the apparatus comprising:
the determining module is used for receiving a test instruction aiming at the target performance test and determining target execution equipment from a plurality of execution equipment contained in the service system;
a sending module, configured to send a corresponding test message to the target execution device, where the test message carries a test identifier corresponding to the target performance test, and the test message is used to instruct the target execution device to obtain a corresponding test script based on the test identifier and execute the target performance test based on the test script;
the detection module is used for acquiring the running state information of the current time point in the process of executing the target performance test by the target execution equipment and comparing the running state information of the current time point with the running state prediction information of the corresponding time point; the running state prediction information corresponding to the time point is obtained based on a target long-term and short-term memory model and model output when the test script is used as model input; the target long-short term memory model is obtained by training according to a plurality of historical test reports corresponding to the test script;
and the acquisition module is used for acquiring the running snapshot corresponding to the target execution equipment and storing the running snapshot if the running state information of the current time point is represented as abnormal running state information by the comparison result.
In a third aspect, the present application provides a computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when the processor executes the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method described above.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of the method described above.
According to the anomaly detection method, apparatus, computer device, storage medium, and computer program product for performance testing described above, a test message is sent to the target execution device in the service system, instructing the target execution device to acquire the test script based on the test identifier and start the target performance test; the running state information at the current time point of the target performance test is compared with the running state prediction information for the corresponding time point predicted by the target long-short term memory model; and if the comparison result indicates that the running state information at the current time point is abnormal running state information, the running snapshot corresponding to the target execution device is collected and saved. Compared with the traditional approach of deciding, only after a complete stress test, whether retesting is needed for an anomaly, this method trains a target long-short term memory model, detects an abnormal test state in time at the corresponding time point while the target execution device is executing the test, and then promptly captures the corresponding running snapshot, thereby improving anomaly detection efficiency.
Drawings
FIG. 1 is a diagram of an exemplary implementation of a method for anomaly detection in performance testing;
FIG. 2 is a flow diagram illustrating a method for anomaly detection in performance testing, according to one embodiment;
FIG. 3 is a flowchart illustrating the test message sending step in one embodiment;
FIG. 4 is a flowchart illustrating the steps performed in the device determination process according to one embodiment;
FIG. 5 is a schematic flow chart of the training steps in one embodiment;
FIG. 6 is a flow chart illustrating an anomaly detection method in a performance test according to another embodiment;
FIG. 7 is a block diagram showing an example of an abnormality detection apparatus in a performance test;
FIG. 8 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The method for detecting the abnormality in the performance test provided by the embodiment of the application can be applied to the application environment shown in fig. 1. Wherein the terminal 102 communicates with the service system 104 via a network. The data storage system may store data that the business system 104 needs to process. The data storage system may be integrated on business system 104 or may be placed on the cloud or other network server. The terminal 102 may receive the test instruction, determine a target execution device in the service system 104, assemble a corresponding test message, and send the test message to the target execution device in the service system 104, so that the target execution device starts a test after acquiring the test script based on the test message, and the terminal 102 may determine whether the target execution device is abnormal based on a comparison result between the running state information of the target execution device at each time point in the test process and the running state prediction information at the corresponding time point, and collect a corresponding snapshot when the target execution device is abnormal, thereby implementing abnormal detection on the target execution device. The terminal 102 may be, but is not limited to, various personal computers and notebook computers. The business system 104 may be implemented as a stand-alone execution device or as a cluster of multiple execution devices.
In one embodiment, as shown in fig. 2, a method for detecting an abnormality in a performance test is provided, which is described by taking the method as an example applied to the terminal in fig. 1, and includes the following steps:
step S202, receiving a test instruction for the target performance test, and determining a target execution device from a plurality of execution devices included in the service system.
The business system may be a system corresponding to banking business, for example, a bank's transaction business system. The business system includes a plurality of execution devices, and each execution device may be a device for executing business, for example, business related to bank transactions. The target performance test may be a test directed at the execution devices in the business system. A performance test uses an automated test tool to simulate normal, peak, and abnormal load conditions and measure the system's performance indicators. Both load tests and stress tests are kinds of performance test. A load test determines the system performance under various workloads, mainly measuring how the performance indicators change as the load gradually increases. A stress test determines the bottleneck or the point of unacceptable performance of a system, thereby obtaining the maximum service level the system can provide.
The terminal may start a target performance test process after receiving the test instruction, that is, perform the target performance test on an execution device in the service system that meets the requirement. The test instruction may be triggered automatically on a schedule or manually by a user. After receiving the test instruction for the target performance test, the terminal may determine the target execution device from the plurality of execution devices included in the service system. The target execution device may be an execution device that meets the test condition.
The terminal can determine the target execution device from the plurality of execution devices based on the state of each execution device. For example, in some embodiments, the terminal may obtain the device state of each execution device in the service system and determine an execution device whose device state is non-faulty and idle as the target execution device. The states of the execution devices include an idle state, a locked state, a faulty state, a non-faulty state, and the like, and the terminal may take an idle, non-faulty execution device as the target execution device. Specifically, the terminal may be configured with applications corresponding to a stress test platform and a task scheduling platform. If the user simply clicks an "execute" button on the stress test platform in the terminal to run the stress test, the terminal obtains all currently non-faulty idle execution devices through the task scheduling platform, selects one of the idle execution devices as the target execution device, assembles a test message in the specified execution-task format, and sends it to the target execution device. In addition, the user may also designate a specific execution device for testing: if the user selects the designated execution device on the stress test platform to run the stress test, the terminal may directly assemble a message in the execution format through the task scheduling platform, send it to that execution device, and run the stress test. It should be noted that there may be multiple target execution devices, the terminal may perform machine management through the task scheduling platform, and the state of an execution device needs to be changed during selection; for example, the selected target execution device is locked so that its state becomes the locked state, which prevents it from being used more than once. When the terminal receives an unlock message sent by the task executor, i.e., the task execution apparatus on the target execution device, the terminal releases the lock applied to the target execution device and rejoins the device into the selectable queue, that is, the device can be selected again as a test target.
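As an informal illustration of the device-selection logic just described, the following sketch shows how a non-faulty, idle execution device could be picked and locked, and later released when an unlock message arrives. All names (DeviceState, select_target_device, release_device) are hypothetical; the patent does not define an API.

```python
# Illustrative sketch of target-execution-device selection; names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeviceState:
    device_id: str
    faulty: bool
    idle: bool
    locked: bool

def select_target_device(devices: list[DeviceState]) -> Optional[DeviceState]:
    """Pick one non-faulty, idle, unlocked execution device and lock it."""
    for dev in devices:
        if not dev.faulty and dev.idle and not dev.locked:
            dev.locked = True   # lock so the same device is not selected twice
            return dev
    return None                 # no device currently satisfies the test condition

def release_device(dev: DeviceState) -> None:
    """Called when the task executor sends an unlock message after the test ends."""
    dev.locked = False
    dev.idle = True
```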
Step S204, sending a corresponding test message to the target execution device, wherein the test message carries a test identifier corresponding to the target performance test, and the test message is used for instructing the target execution device to obtain a corresponding test script based on the test identifier and execute the target performance test based on the test script.
The test message may be a message addressed to the target execution device. Different execution devices carry different device identifiers. Each test also carries a corresponding test identifier; the test identifier corresponds to the test case, which may include the test procedure, information about the device under test, the test script, and so on. The terminal may generate the test message based on the device identifier of the execution device and the test identifier, and send the test message to the corresponding target execution device. The target execution device receives the test message and obtains the corresponding test script based on the test identifier in the message, so that it can execute the target performance test based on that test script. The test script may be stored in a database, and the target execution device may obtain the corresponding test script from the database according to the test identifier.
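A minimal sketch of what an execute-type test message might look like is given below. The patent only states that the message carries a device identifier and a test identifier, so the field names and the JSON encoding are assumptions made for illustration.

```python
# Hypothetical structure of an execute-type test message; field names are assumed.
import json

def build_test_message(device_id: str, test_id: str) -> str:
    """Assemble the message that is broadcast to all execution devices."""
    return json.dumps({
        "type": "EXECUTE",        # message type: execute / cancel / grab
        "device_id": device_id,   # identifier of the target execution device
        "test_id": test_id,       # identifies the case and hence its test script
    })

# The receiving device compares message["device_id"] with its own identifier and,
# only on a match, fetches the test script for message["test_id"] from the database.
example = build_test_message("executor-007", "perf-case-01")
```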
Step S206, acquiring the running state information of the current time point in the process of executing the target performance test by the target executing equipment, and comparing the running state information of the current time point with the running state prediction information of the corresponding time point; the running state prediction information corresponding to the time point is obtained based on the target long-term and short-term memory model and model output when the test script is used as the model input; and training the target long-short term memory model according to a plurality of historical test reports corresponding to the test script.
The target execution device may execute the target performance test based on the test script. For example, the target execution device is configured with a task executor, and through the task executor it may call JMeter to perform the target performance test based on the test script. JMeter is a Java-based stress testing tool developed by the Apache organization for stress testing software. After obtaining the test script, the target execution device can parse and process its jmx structure. A jmx script is an XML document that records information such as the thread groups to be launched and the request information. The target execution device parses the jmx result, performs certain processing on the structure for later use, and then calls JMeter to run the performance test. This processing of the jmx structure may allow the target execution device to display each test step through the terminal while running the performance test based on the test script; that is, after the target execution device has processed the jmx script, the test process of the target performance test can be observed through the terminal. While the target execution device executes the target performance test, running state information is generated for a sequence of time points, one per unit of time. The terminal may obtain the running state information of the target execution device at the current time point in real time from the target execution device and compare it with the running state prediction information for the corresponding time point. For example, the terminal may input the test script into the target long-short term memory model, and the model predicts the running state prediction information for each time point of the test based on the test script, yielding prediction information for a plurality of time points, which may be the time points within one test cycle. The terminal can train the target long-short term memory model on a plurality of historical test reports corresponding to the test script. For example, the terminal inputs the running state feature information at each time point in the historical test reports into a long-short term memory model to be trained, so that the model learns the relation between time and the running state features, finally obtaining a target long-short term memory model that can predict the running state at each time point of a test corresponding to the test script.
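For illustration, the sketch below shows one plausible way a task executor could launch JMeter in non-GUI mode against the jmx test plan obtained from the database; the paths and the result-file location are assumptions, and the patent does not prescribe this exact invocation.

```python
# Minimal sketch of launching a JMeter run from a parsed jmx test plan,
# assuming JMeter is installed on the execution device.
import subprocess

def run_jmeter_test(jmx_path: str, result_path: str) -> int:
    """Run JMeter in non-GUI mode (-n) with the given test plan (-t), logging results (-l)."""
    completed = subprocess.run(
        ["jmeter", "-n", "-t", jmx_path, "-l", result_path],
        capture_output=True, text=True,
    )
    return completed.returncode

# e.g. run_jmeter_test("perf-case-01.jmx", "results.jtl")
```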
And step S208, if the comparison result represents that the running state information of the current time point is abnormal running state information, collecting a running snapshot corresponding to the target execution equipment, and saving the running snapshot.
The terminal obtains a comparison result by comparing the running state information at the current time point with the running state prediction information for the corresponding time point. Based on the comparison result, the terminal may determine whether the running state information of the target execution device at the current time point is abnormal running state information, and when abnormal running state information is detected, start the process of collecting the running snapshot corresponding to the target execution device. The running snapshot may include various files, such as a HeapDump file and a JavaCore file. After the terminal finishes collecting the running snapshot, the obtained snapshot can be saved, for example, to a database. The HeapDump file is a binary file that records the usage of objects in the JVM (Java virtual machine) heap at a given moment; it is a snapshot, or image file, of the Java heap at the specified moment. The JavaCore file mainly records where each thread of the Java application is running at a given moment, that is, which class, which method, and which line the JVM is executing. It is a text file; when opened, the execution stack of each thread can be seen, displayed as a stack trace. Analyzing the JavaCore file makes it possible to determine whether the application is stuck at a certain point and whether a long period of unresponsiveness caused the system to crash. The JVM is the Java virtual machine; it has its own complete virtual hardware architecture, such as a processor, stack, and registers, as well as a corresponding instruction system. It should be noted that, in some embodiments, the terminal may decide to start collecting the running snapshot only when it detects abnormal running state information at a preset number of time points. After the test ends, the terminal can also generate a test report based on the execution process of the target execution device during the test, the running state information at each time point, the running snapshot, and the like, and can display the test report. For example, a user may enter the identifier corresponding to a test report in the terminal, so that the terminal queries the database for the corresponding test report and displays it.
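The patent does not specify how the HeapDump and JavaCore (thread snapshot) files are produced; the sketch below uses the standard JDK tools jmap and jstack as one plausible capture mechanism, with hypothetical file names.

```python
# Illustrative capture of a heap dump and a thread snapshot for a running JVM;
# jmap/jstack are used here only as a plausible example, not as the patented mechanism.
import subprocess

def capture_running_snapshot(jvm_pid: int, tag: str) -> tuple[str, str]:
    heap_file = f"heapdump_{tag}.hprof"
    thread_file = f"threads_{tag}.txt"
    # Heap dump: snapshot of object usage in the JVM heap at this moment.
    subprocess.run(
        ["jmap", f"-dump:live,format=b,file={heap_file}", str(jvm_pid)], check=True
    )
    # Thread snapshot: execution stack of every thread, similar to a JavaCore file.
    with open(thread_file, "w") as out:
        subprocess.run(["jstack", "-l", str(jvm_pid)], stdout=out, check=True)
    return heap_file, thread_file
```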
In the above anomaly detection method for performance testing, a test message is sent to the target execution device in the service system, instructing the target execution device to acquire the test script based on the test identifier and start the target performance test; the running state information at the current time point of the target performance test is compared with the running state prediction information for the corresponding time point predicted by the target long-short term memory model; and if the comparison result indicates that the running state information at the current time point is abnormal running state information, the running snapshot corresponding to the target execution device is collected and saved. Compared with the traditional approach of deciding, only after a complete stress test, whether retesting is needed for an anomaly, this method trains a target long-short term memory model, detects an abnormal test state in time at the corresponding time point while the target execution device is executing the test, and then promptly captures the corresponding running snapshot, thereby improving anomaly detection efficiency.
In one embodiment, sending a corresponding test message to a target execution device includes: acquiring a device identifier corresponding to target execution equipment, and generating a test message of an execution type according to the device identifier and a test identifier corresponding to a target performance test; and broadcasting the test message of the execution type to each execution device, wherein the test message of the execution type is used for indicating each execution device receiving the test message to compare whether the device identifier is consistent with the self identifier, acquiring a corresponding test script according to the test identifier under the condition of consistency, and executing the target performance test based on the test script.
In this embodiment, each execution device in the service system may further have a device identifier. Because there are multiple execution devices in the service system, the terminal needs to route the test message to the corresponding target execution device through the device identifier. For example, as shown in fig. 3, fig. 3 is a flowchart illustrating the test message sending step in one embodiment. The terminal may be configured with a stress test platform, a task running-state determination platform, and so on; a tester may initiate a test request through the stress test platform, and the task running-state determination platform may also send a corresponding request according to the state of the target execution device, for example, a request to capture running snapshot files. The terminal may generate a message of the corresponding type based on these requests and broadcast it to the execution devices described above. The terminal obtains the device identifier of the target execution device, generates an execute-type test message from the device identifier and the test identifier corresponding to the target performance test, and broadcasts the execute-type test message to each execution device. Each execution device in the service system then receives the broadcast test message and compares the device identifier in the message with its own identifier; if they match, the test message is addressed to the local machine, and the target execution device obtains the corresponding test script based on the test identifier in the message and executes the target performance test based on that script.
The execute type indicates that an execution device, upon receiving a test message of this type and determining that the message is addressed to the local machine, starts the test process of the target performance test. The terminal may also generate and broadcast other types of messages to the target execution device. For example, in an embodiment, the terminal may further obtain heartbeat information from each target execution device and determine, based on the heartbeat information, whether each target execution device has failed. If, after sending the corresponding test message to the target execution device, the terminal determines that the target execution device has failed, the terminal may generate a corresponding test cancellation message according to the device identifier of the target execution device and the test cancellation instruction. After receiving the test cancellation message, the target execution device may check whether the device identifier in the message matches its own identifier and, if so, stop executing the target performance test. In addition, the terminal can also generate, based on the device identifier, a message for collecting the running snapshot files, so that the target execution device collects the corresponding running snapshot when it determines that the device identifier matches its own identifier.
Specifically, for the execute-type test message: if the tester simply clicks the execute button in the terminal to run the target performance test, the terminal may obtain all non-faulty idle devices in the current service system, select one of them, use the task scheduling platform to assemble a test message combining the device identifier and the test identifier according to the user's requirement, and broadcast it to the task executors of the execution devices in the service system. If the tester selects a designated target execution device in the terminal to run the target performance test, the terminal may directly assemble the corresponding test message through the task scheduling platform and send it to that target execution device. For the cancel-type test cancellation message, a tester can click a stop-execution button in the terminal, so that the terminal generates a corresponding test cancellation message based on the device identifier of the target execution device to be cancelled and broadcasts it to the target execution device to stop the target performance test. In addition, the terminal can receive state self-check information from the target execution device: the target execution device uploads its running state information every two minutes, and if the terminal detects that the running state information has not been uploaded for two consecutive heartbeat intervals, the terminal may send the test cancellation message through the task scheduling platform, so that the target execution device ends the test process of the target performance test, and the terminal releases the target execution device. For the grab-type message, when the terminal detects abnormal running state information, it can generate a corresponding grab message according to the device identifier of the abnormal target execution device and broadcast it to the service system, so that, after receiving the grab message, the target execution device captures the JavaCore and HeapDump files at a certain frequency through the task executor and sends them to a specified location, preventing the files from being lost when the container is restarted.
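A small sketch of the heartbeat check described above, assuming a two-minute reporting interval and cancellation after two consecutive missed intervals; the constants, function names, and message fields are illustrative assumptions.

```python
# Heartbeat-timeout sketch: cancel the test if the device misses two reporting intervals.
import time

HEARTBEAT_INTERVAL_S = 120   # device reports its running state every two minutes (per the text)
MAX_MISSED_INTERVALS = 2

def is_device_failed(last_heartbeat_ts: float, now: float | None = None) -> bool:
    now = time.time() if now is None else now
    return (now - last_heartbeat_ts) > MAX_MISSED_INTERVALS * HEARTBEAT_INTERVAL_S

def build_cancel_message(device_id: str) -> dict:
    """Cancel-type message: the matching device stops the target performance test."""
    return {"type": "CANCEL", "device_id": device_id}
```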
Specifically, as shown in fig. 4, fig. 4 is a schematic flowchart of the execution device determination step in one embodiment. Each execution device in the business system is configured with a task executor and a cloud container with a JMeter environment. When an execution device receives a message of any type sent by the terminal, it first judges whether the device identifier in the message is the identifier of the local machine; if so, the local machine proceeds to judge the message type and executes the corresponding process according to that type; if the execution device judges that the identifier is not its own, the message is dropped without further processing. If the target execution device detects that the message type is the execute type, it can query the database for the stored parsing result of the test case according to the test identifier in the message, obtain the parsed jmx structure and the test script, and, after certain processing of the jmx structure, the terminal can obtain the process information of the target execution device executing the target performance test. The target execution device then calls JMeter to run the target performance test, and after the test finishes it can send the test report to a specified location, for example to the terminal, where it is displayed. If the message type is cancel, the target execution device may end the currently running JMeter process, register that the task has ended, and send an unlock message to the terminal, so that the terminal releases the target execution device. If the message type is grab, the snapshot files are to be collected, and the target execution device obtains the JavaCore and HeapDump files at a certain frequency. The frequency may be the frequency set when the test plan was created, for example, once per minute.
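The device-side flow of fig. 4 can be summarized in the following sketch: check whether the message is addressed to the local machine, then branch on the message type (execute, cancel, grab). The handler functions are stubs with hypothetical names.

```python
# Device-side dispatch sketch for the flow in FIG. 4; handler names are hypothetical stubs.
def start_target_performance_test(test_id: str) -> None:
    print(f"fetch test script for {test_id}, process jmx, launch JMeter")  # placeholder

def stop_target_performance_test() -> None:
    print("terminate JMeter process, register task end, send unlock message")  # placeholder

def capture_snapshots(frequency_s: int, transfer_addr: str | None) -> None:
    print(f"grab JavaCore/HeapDump every {frequency_s}s -> {transfer_addr}")  # placeholder

def handle_message(msg: dict, my_device_id: str) -> None:
    if msg.get("device_id") != my_device_id:
        return  # not addressed to this machine; ignore
    msg_type = msg.get("type")
    if msg_type == "EXECUTE":
        start_target_performance_test(msg["test_id"])
    elif msg_type == "CANCEL":
        stop_target_performance_test()
    elif msg_type == "GRAB":
        capture_snapshots(msg.get("frequency_s", 60), msg.get("transfer_station_addr"))
```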
Through the embodiment, the terminal can generate different types of messages based on the state of the target execution equipment and broadcast the messages to the target execution equipment, so that the target execution equipment can execute corresponding processes according to the messages received in real time, and the efficiency of performance testing is improved.
In one embodiment, further comprising: obtaining a plurality of historical test reports corresponding to the test script; the plurality of historical test reports comprise a plurality of qualified running state information obtained by a plurality of historical time point tests; the plurality of historical time points represent a plurality of time points in a test period corresponding to the test script; inputting a plurality of historical time points and qualified running state information corresponding to the historical time points into a long-short term memory model to be trained, and outputting running state prediction information of each historical time point corresponding to a test script by the long-short term memory model to be trained based on the plurality of historical time points and the plurality of qualified running state information corresponding to the plurality of historical time points; and adjusting the model parameters of the long-short term memory model to be trained according to the similarity between the running state prediction information of each historical time point and the qualified running state information of each time point until the training condition is met, thereby obtaining the target long-short term memory model.
In this embodiment, the terminal may train a target LSTM (Long Short-Term Memory) model in advance. Each test script of an execution device may correspond to one target long-short term memory model; that is, each execution device may correspond to multiple target long-short term memory models, one for each of its test scripts. For a given test script, the terminal may obtain, during training, a plurality of historical test reports corresponding to that test script. The historical test reports contain a plurality of pieces of qualified running state information obtained from tests at a plurality of historical time points. Qualified running state information indicates that the running state of the target execution device met expectations. The plurality of historical time points represent the time points within one test cycle corresponding to the test script, for example, every time point between the start and the end of the test.
The terminal can obtain a long-short term memory model to be trained, the plurality of historical time points and the qualified running state information corresponding to each historical time point are input into the long-short term memory model to be trained, and the long-short term memory model to be trained outputs the running state prediction information of each historical time point corresponding to the test script based on the plurality of historical time points and the plurality of qualified running state information corresponding to the plurality of historical time points. The terminal can adjust the model parameters of the long-short term memory model to be trained according to the similarity between the running state prediction information of each historical time point output by the model and the corresponding qualified running state information of each time point, and the target long-short term memory model is obtained until the training conditions are met. The operation state information may include various types of operation states, and the various types of operation states may be used as feature information in model prediction. For example, in one embodiment, inputting the qualified operating state information corresponding to a plurality of historical time points and each historical time point into the long-short term memory model to be trained includes: acquiring at least one of the number of threads, the memory information, the number of transmission control protocol connections, the response time, the number of service processes in unit time, the number of service process failures and the request time consumption corresponding to each historical time point in each historical test report as qualified running state information corresponding to each historical time point; and inputting the qualified running state information corresponding to each historical time point into the long-short term memory model to be trained, extracting the relation between the running state characteristics and the time points by the long-short term memory model to be trained based on each historical time point and the qualified running state information corresponding to each historical time point, and outputting the running state prediction information of each historical time point corresponding to the test script based on the relation. In this embodiment, each historical test report may include at least one of a number of threads, memory information, a number of tcp connections, response time, a number of service processes per unit time, a number of service process failures, and a request time consumption corresponding to each historical time point, and the terminal may obtain the at least one operation state as information of a qualified operation state corresponding to each historical time point. Therefore, the terminal can input the qualified running state information of each historical time point into the long-short term memory model to be trained, and train the long-short term memory model to be trained to obtain the target long-short term memory model.
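As a rough sketch of the training step described above (the patent does not prescribe a framework), the following PyTorch code feeds the qualified running-state features of each historical time point into an LSTM that learns to predict the running state at the next time point. The feature count, hidden size, loss, and next-step prediction target are illustrative assumptions.

```python
# Minimal PyTorch sketch: learn the relation between time points and running-state features.
import torch
import torch.nn as nn

N_FEATURES = 7  # e.g. threads, memory, TCP connections, response time, TPS, failures, request time

class RunningStateLSTM(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=N_FEATURES, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_FEATURES)   # predicted running state per time point

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out)

def train(model: nn.Module, reports: torch.Tensor, epochs: int = 50) -> None:
    """reports: (n_reports, n_time_points, N_FEATURES) of qualified running-state info."""
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        pred = model(reports[:, :-1, :])        # predict the state at the next time point
        loss = loss_fn(pred, reports[:, 1:, :])
        optim.zero_grad()
        loss.backward()
        optim.step()
```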
Specifically, for a transaction service system, the peripheral nodes of different transactions differ: for example, different transaction services use different communication modes, so their operating characteristics differ. The terminal therefore accumulates a sufficient number of test reports to ensure there is enough data for accurate feature extraction. For example, completing one transaction requires multiple nodes to cooperate across multiple applications. For a node inside a transaction, its peripheral nodes include the nodes upstream on the link, the nodes downstream on the link, and the components the node itself interacts with, such as databases and message middleware. The peripheral nodes differ somewhat depending on the link along which the transaction is completed. The terminal may obtain a plurality of historical test reports corresponding to a single service, extract features such as the number of waiting threads, memory, the number of TCP (Transmission Control Protocol) connections, response time, TPS (transactions per second), the number of failed transactions, and request time consumption from the reports, input the values of these features and their time sequence into the LSTM to be trained, and extract the relation between the features and time. Based on this relation, the terminal acquires a certain capability to predict the performance of the execution device for the designated transaction node.
Taking the transaction characteristics as an example, the process by which the terminal trains the LSTM is as follows. Step 1: acquire stress test data for different transactions with a data processing tool, including test time, number of threads, duration, connection duration, delay time, response time, and so on, and collect the characteristics of each transaction to construct one data record. For example, transaction 1: (withdrawal, 3, 600s, 0.01s, 0.02s) corresponds to (transaction name, number of threads, duration, connection duration, delay time, response time). Step 2: because the transaction information is not tabular data, it is not easy to analyze and process, so it is converted into structured data. A data dictionary of the transaction information is established and the data is converted into structured form. The details are shown in the following table:
(Table omitted: data dictionary of the transaction information, presented as an image in the original publication.)
Using the data dictionary, the terminal may convert the transaction data of step 1 into: transaction 1: (1, 3, 600s, 0.01s, 0.02s). Step 3: the transaction information obtained through steps 1 and 2 forms a transaction data feature table, and a multidimensional data axis is established from the table data. The dimensions are test time, number of threads, duration, connection duration, delay time, and response time, and a transaction characteristic distribution coordinate system is constructed from the data of each transaction in the table. The transaction data distribution is as follows:
(Figure omitted: transaction characteristic distribution coordinate system, presented as an image in the original publication.)
During data acquisition, the terminal can collect historical data of each transaction, such as transaction name, number of threads, duration, connection duration, delay time, and response time, which reflect how each characteristic of the transaction changes over time, and construct a transaction historical data table:
(Table omitted: transaction historical data, presented as an image in the original publication.)
the terminal may tag the transaction history data with the response time and the remaining features construct training data pairs (X _ train, Y _ train) in ascending date order. The LSTM model of the recurrent neural network is constructed, which is different from other conventional neural networks in that it is not independent for a plurality of input data, and the result and the next input are transmitted to the next calculation unit after each calculation, and can be simply expressed as g (x) = σ (W (x + a) + b) for the hidden layer. Thus, past data can be memorized, and the future can be predicted more accurately. Learning the historical data through the LSTM model reduces the prediction error for response time by passing in transaction historical data. Taking the historical data table data in step 1 as an example, the terminal may construct training data: taking the withdrawal response time as a label Y, wherein the value sequence of Y is as follows: y1, Y2, ·= [0.98,0.975,. ]. Taking the time of the withdrawal transaction, the number of threads, the duration, the connection duration, the delay time and the response time as the characteristic X in ascending order of date, wherein the value sequence of X is as follows: x1, X2, = [ (30,200, 0.02,0.01,1, 2022-aa-bb), (30,200, 0.01,0.02,1,2022-aa- (bb + 1)), ·.
The terminal can thus build the LSTM model and its learning process, as shown in fig. 5; fig. 5 is a flow diagram of the training steps in one embodiment. The LSTM can be expressed simply as g(X) = σ(WX + b); with the (X, Y) data fed in, the input layer to the following network can be expressed as a = σ(WX + b), and the hidden layer is g(X) = σ(W(X + a) + b), where σ stands for the input gate and forget gate parameters of the LSTM, combined here into a single σ for brevity. Through BP back propagation, the parameters σ, W, and b are updated by gradient descent in each learning pass until convergence, at which point the model is built. For new data X whose result Y is to be predicted, feeding in X gives g(X) = σ(WX + b); that is, the predicted result Y = g(X), the future trend of the response time, is obtained, which may also be called the running state prediction information. From the running state prediction information obtained in the previous step, the terminal can generate data that changes over time according to the input, convert the new metadata into a feature matrix with a feature extractor, compute the distance between the two sets of metadata with a cosine similarity algorithm, and normalize the similarity between them to [0, 1]. If the similarity exceeds 90%, the node transaction is considered normal; otherwise it is considered abnormal and an alarm is raised to await handling. The formula is as follows:
similarity = cos(θ) = (x · y) / (||x|| × ||y||) = Σ_i (x_i × y_i) / (√(Σ_i x_i²) × √(Σ_i y_i²))
where x is the feature matrix of transactions that have already occurred, y is the newly defined feature matrix pushed by the configuration center, ||x|| is the Euclidean norm of the feature matrix of the historical transactions, ||y|| is the Euclidean norm of the feature matrix of the current transaction, x_i denotes each row vector of the old feature matrix, and y_i denotes each row vector of the newly defined feature matrix.
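The similarity check defined by the formula above can be sketched as follows; flattening the two feature matrices before taking the dot product is an assumption made for illustration, and the 90% threshold comes from the text.

```python
# Cosine-similarity check between the predicted and the newly observed feature matrices.
import numpy as np

def cosine_similarity(x: np.ndarray, y: np.ndarray) -> float:
    x, y = x.ravel(), y.ravel()
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def is_abnormal(predicted: np.ndarray, observed: np.ndarray, threshold: float = 0.9) -> bool:
    """Below the 90% similarity threshold, the node transaction is treated as abnormal."""
    return cosine_similarity(predicted, observed) < threshold
```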
Through the embodiment, the terminal can extract the characteristics of a plurality of time points in the historical test report corresponding to the test script of a single service and train to obtain the target long-short term memory model, so that the terminal can predict the running state of the target execution equipment based on the target long-short term memory model, and the efficiency of performance test is improved.
In one embodiment, comparing the operation state information of the current time point with the operation state prediction information of the corresponding time point includes: obtaining the similarity between the running state information of the current time point and the running state prediction information of the corresponding time point in each time point output by the target long-short term memory model; and if the similarity corresponding to the current time point is detected to be smaller than a preset similarity threshold, determining that the running state information of the comparison result representing the current time point is abnormal running state information.
In this embodiment, the terminal detects the running state of the target execution device in the process of the target performance test, so as to determine whether the running state of the target execution device is normal. The terminal may input the test script into the target long-term and short-term memory model, and output the operation state prediction information at each time point corresponding to the test script by the target long-term and short-term memory model, and the terminal may obtain a similarity, specifically, a cosine similarity, between the operation state prediction information at each time point and the operation state information at the current time point. If the terminal detects that the similarity between the current time point and the running state information of the corresponding time point is smaller than the preset similarity threshold, the terminal can determine that the comparison result is that the running state information of the current time point is abnormal running state information. In addition, in some embodiments, when detecting that the similarity of each time point in the preset time period is smaller than the preset similarity threshold, the terminal may determine that the comparison result represents that the operation state information of the current time point is the abnormal operation state information, that is, the terminal may obtain the operation state information of a plurality of time points in the target performance test process, and compare the operation state information with the operation state prediction information of the corresponding time point, and when the similarity obtained through comparison is smaller than the preset similarity threshold, determine that the target execution device enters the abnormal state. The similarity may specifically be cosine similarity, and the terminal may determine whether the operation state information is abnormal by detecting the cosine similarity between the operation state information and the operation state prediction information.
Through this embodiment, the terminal can detect, based on the target long-short term memory model, whether the running state is abnormal while the target performance test is being executed on the target execution device, which improves the efficiency of the performance test.
In one embodiment, collecting a running snapshot corresponding to the target execution device and saving the running snapshot includes: sending a file capture message to the target execution device, where the file capture message comprises the capture frequency and the address information of the file transfer station and is used to instruct the target execution device to capture the corresponding heap dump file and thread snapshot file at the capture frequency and send them to the file transfer station; the heap dump file comprises a memory stack snapshot taken while the target execution device executes the test script; the thread snapshot file comprises the execution stack of each thread of the processor while the target execution device executes the test script; and the file transfer station is used for storing the heap dump file and the thread snapshot file into the database.
In this embodiment, when detecting that the target execution device has abnormal running state information, the terminal collects the relevant running snapshot files of the target execution device. When the terminal determines that snapshot files need to be collected, it can send a file capture message to the target execution device. The file capture message can be generated from the device identifier of the target execution device, the capture frequency, and the address information of the file transfer station. After receiving the file capture message, and having determined that the device identifier in the message is its own identifier, the target execution device captures the corresponding heap dump file and thread snapshot file at the specified capture frequency and sends them to the file transfer station according to the address information in the message. The heap dump file may be a heapdump file and includes a memory stack snapshot taken while the target execution device executes the test script; the thread snapshot file may be a javacore file and includes the execution stack of each thread of the processor while the target execution device executes the test script. After receiving these running snapshot files, the file transfer station saves the heap dump file and the thread snapshot file into the database.
Specifically, during the target performance test, the terminal can evaluate the running state information of the target execution device and compare it with the running state prediction information given by the target long-short term memory model. If a large deviation exists, the terminal can initiate a file capture message through the task scheduling platform, requiring the target execution device to capture javacore and heapdump files through its task execution means, and remind the relevant developers by short message and email to follow up in time. The javacore and heapdump files can be sent to the file transfer station. When it receives a message for capturing javacore and heapdump files, the file transfer station can store the pressure test report of the task execution means that completed the pressure test and receive the captured files at the specified frequency, so that the relevant files are not lost when the container is restarted. The files originate from the task execution means in the target execution device.
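The message flow can be sketched as follows; the field names, the threading-based capture loop, and the capture_once/upload hooks are illustrative assumptions, and the actual heap dump and thread snapshot commands depend on the JVM in use (for example jmap or jcmd wrappers):

```python
import threading
import time
from dataclasses import dataclass

@dataclass
class FileCaptureMessage:
    device_id: str              # identifier of the target execution device
    capture_interval_s: int     # capture frequency, e.g. once per minute by default
    transfer_station_url: str   # address information of the file transfer station

def handle_capture_message(msg: FileCaptureMessage, local_device_id: str,
                           capture_once, upload, stop_event: threading.Event):
    # Executor side: act only when the message targets this machine, then
    # capture heap dump / thread snapshot files at the requested frequency
    # and forward them to the file transfer station.
    if msg.device_id != local_device_id:
        return  # the broadcast is meant for another execution device
    while not stop_event.is_set():
        heap_dump, thread_snapshot = capture_once()   # e.g. wrappers around jmap / jcmd
        upload(msg.transfer_station_url, heap_dump, thread_snapshot)
        time.sleep(msg.capture_interval_s)
```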
Through this embodiment, when the target execution device is abnormal, the terminal can use a file capture message to instruct the target execution device to capture its running snapshot files, so that test developers can analyze and improve the system based on those files, which improves the efficiency of the performance test.
In one embodiment, as shown in fig. 6, which is a schematic flow chart of an anomaly detection method in a performance test in another embodiment, the terminal may be provided with a pressure test platform, a task scheduling platform, a running state determination platform, and the like, and each execution device in the business system may be configured with a task execution means. Each tester may be an individual tester. The terminal provides a front-end page of the pressure test platform where testers can log in, giving them a channel for initiating target performance tests. A tester can manage test cases on the platform, including adding cases, deleting cases, copying existing cases, adjusting test case parameters according to actual pressure test requirements, exporting existing case files, and so on. The platform also provides the pressure test report of each pressure test for download, helping testers understand the situation of each pressure test. The platform shows testers all normally running machines; testers can select devices to execute pressure tests according to actual needs, or the task scheduling platform can schedule and select the devices for the pressure test.
Specifically, the pressure test platform comprises four columns: test scripts, test tasks, execution machines, and the index library. The test scripts column presents to the user all scripts within the user's privileges. A tester can add a test case in the terminal: after logging in to the pressure test platform, the tester adds a test plan on the test plan page and enters the version, the application name, and the test plan name. After editing the details, the tester drags in the original jmx script, and the pressure test platform parses the original jmx and displays the parsed result at the front end; for example, after parsing the original jmx file the platform obtains the basic pressure test parameters, such as the number of thread group users, the delayed thread creation time, the pressure test duration, the number of pressure test executions, the pressure test request path, the port, and the related components (a sketch of this parsing step is given after this description). By processing the original jmx, the terminal accomplishes two things. First, testers originally initiated pressure tests in a visual mode, in which the pressure situation could be observed during the test, whereas the task execution means ultimately executes the pressure test in a non-visual mode that cannot be observed normally; by processing the jmx and adding the corresponding components, the new system is guaranteed to provide a visualization service consistent with the original jmeter. Second, through the processing of the jmx, the background can obtain in real time the data used for trend similarity judgment. The tester's test cases are thereby saved to the background.

The terminal can also edit a test case. After logging in to the pressure test platform, a tester clicks to select the test plan, can browse the whole structure of the test plan after clicking details, and can check the configuration of each component; by clicking a component name the tester can view its details, manually enter the modified parameters on the page, and overwrite the test case as a new version after clicking save, with the background storing the parsed result in the database for subsequent use. The tester can also delete test cases: after logging in to the pressure test platform, the tester clicks to select the test plan and clicks the delete button to complete the deletion. Owing to permission control, no one other than the pressure test platform administrator is authorized to modify or delete another person's test plan. A tester can initiate a target performance test through the terminal: after logging in to the pressure test platform, the tester clicks the execute button beside the test plan to initiate the pressure test, at which point the task scheduling platform selects a machine to execute it. The tester can also terminate the target performance test: after logging in to the pressure test platform, if the tester decides to stop the pressure test, the tester can click the stop button on the right side of the test plan, and the task scheduling platform in the terminal sends a stop instruction to the pressure test machines by broadcast to end the pressure test.
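The parsing of basic parameters from the original jmx can be sketched as follows, assuming the standard JMeter ThreadGroup property names; this is only an illustrative fragment, and the platform's actual parser and the parameters it extracts may differ:

```python
import xml.etree.ElementTree as ET

def parse_jmx_basic_params(jmx_path: str) -> dict:
    # Pull basic stress-test parameters out of an original jmx file.
    # Assumes the standard JMeter ThreadGroup property names; properties
    # that are absent are simply omitted from the result.
    wanted = ("ThreadGroup.num_threads",   # number of thread group users
              "ThreadGroup.ramp_time",     # ramp-up time
              "ThreadGroup.duration",      # pressure test duration
              "ThreadGroup.delay")         # delayed thread creation time
    params = {}
    tree = ET.parse(jmx_path)
    for thread_group in tree.iter("ThreadGroup"):
        for prop in thread_group.iter("stringProp"):
            name = prop.get("name", "")
            if name in wanted:
                params[name] = prop.text
    return params
```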
The tester can also browse historical reports in the terminal: the tester can retrieve a specified pressure test report by the time at which the pressure test was initiated, to review the historical pressure test situation. The file capture frequency can also be configured: when the terminal, through the task running state platform, judges during operation that the performance of a node deviates significantly, it sends a message to the task scheduling center to capture the related javacore and heapdump files; a tester can configure the capture frequency as needed, and the default is to collect once per minute.
The terminal can act as a dispatcher. The dispatcher receives the request initiated by the tester in the browser; if the tester specifies the machine that is to run the pressure test, the request is sent directly to the specified machine, and if the tester does not specify one, the dispatcher selects a machine for the pressure test according to the running condition of the machine cluster and writes the selected pressure test machine information into the database for subsequent tracking. The dispatcher can use a broadcast mechanism to send messages to all execution machines, and each execution machine judges from the execution machine information in the message whether it should execute locally. The execution machine can be a target execution device in the business system. When an execution machine starts, it writes its running information into the database so that testers can view it on the front-end page of the terminal. When the execution machine receives the dispatcher's command, it judges whether the selected execution machine is the local machine; if it is to execute, it updates its state to in-use, modifies the test plan (for example the jmx test script), can supplement a back-end listener and a BeanShell script, and starts the new test plan; after the test is completed, it updates its state back to idle and sends the files generated by the test to a specified location, such as the file transfer station, so that testers can conveniently view them at the front end. The states of the execution machine include idle, in-use, and fault. The back-end listener may be an internal function of the aforementioned jmeter and writes the summary data of the test plan into InfluxDB (a time series database); the data include the number of transactions, the number of successful transactions, the number of failed transactions, the response time, and so on. BeanShell is a lightweight scripting language written in the Java language, and certain logical calculations are completed through executable Java code in BeanShell.
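The executor-side handling of a broadcast test message can be sketched as follows; the class and field names, and the state_store / script_store / transfer_station collaborators, are illustrative assumptions standing in for the database, the script repository, and the file transfer station described above:

```python
from dataclasses import dataclass

@dataclass
class TestMessage:
    # Illustrative fields; the description above only requires that the
    # message carries the executor information and the test identifier.
    device_id: str
    test_id: str

class Executor:
    # Skeleton of the execution-machine behaviour described above.
    def __init__(self, local_device_id: str, state_store, script_store, transfer_station):
        self.local_device_id = local_device_id
        self.state_store = state_store            # records idle / in-use / fault states
        self.script_store = script_store          # looks up the test script by test id
        self.transfer_station = transfer_station  # receives files generated by the test

    def on_broadcast(self, msg: TestMessage):
        if msg.device_id != self.local_device_id:
            return  # the broadcast targets another execution machine
        self.state_store.set_state(self.local_device_id, "in-use")
        try:
            script = self.script_store.fetch(msg.test_id)
            result_files = self.run_test(script)        # start the modified test plan
            self.transfer_station.upload(result_files)  # e.g. pressure test report
        finally:
            self.state_store.set_state(self.local_device_id, "idle")

    def run_test(self, script):
        raise NotImplementedError  # wraps the actual jmeter invocation
```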
Through this embodiment, by training the target long-short term memory model, the terminal detects an abnormal test state in time at the corresponding time point while the target execution device executes the test, and then captures the corresponding running snapshot in time, which improves the efficiency of anomaly detection. The threshold of performance testing is also effectively lowered: testers no longer need to look up pressure test information across multiple systems, since the common information of the performance test is integrated in one system, with a clear visual display of the resource usage of the pressure test servers, the number of transactions initiated by the performance test, the success rate, and so on. The testers' own machines are also freed to some extent, so that a pressure test can be initiated while other work continues, avoiding the problem of machine performance affecting work. Moreover, a cloud container is easy to scale and simple to expand, and can better meet the requirements of performance testing than a personal computer.
It should be understood that, although the steps in the flowcharts involved in the above embodiments are displayed in sequence as indicated by the arrows, the steps are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least a part of the steps in the flowcharts involved in the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a part of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiments of the present application also provide an abnormality detection apparatus in a performance test for implementing the above abnormality detection method in a performance test. The implementation scheme for solving the problem provided by the apparatus is similar to the implementation scheme described in the above method, so for the specific limitations in the one or more embodiments of the abnormality detection apparatus in a performance test provided below, reference may be made to the above limitations on the abnormality detection method in a performance test, which are not repeated here.
In one embodiment, as shown in fig. 7, there is provided an abnormality detection apparatus in a performance test, including: a determination module 500, a sending module 502, a detection module 504, and an acquisition module 506, wherein:
the determining module 500 is configured to receive a test instruction for a target performance test, and determine a target execution device from a plurality of execution devices included in the service system.
The sending module 502 is configured to send a corresponding test message to the target execution device, where the test message carries a test identifier corresponding to the target performance test, and the test message is used to instruct the target execution device to obtain a corresponding test script based on the test identifier and execute the target performance test based on the test script.
A detection module 504, configured to obtain running state information at a current time point in a process of executing a target performance test by a target execution device, and compare the running state information at the current time point with running state prediction information at a corresponding time point; the running state prediction information corresponding to the time point is obtained based on the target long-term and short-term memory model and model output when the test script is used as the model input; and the target long-short term memory model is obtained by training according to a plurality of historical test reports corresponding to the test script.
And an acquiring module 506, configured to acquire an operation snapshot corresponding to the target execution device and save the operation snapshot if the comparison result indicates that the operation state information at the current time point is the abnormal operation state information.
In an embodiment, the determining module 500 is specifically configured to obtain a device state of each executing device in the service system; and determining the executing equipment with the equipment state being non-failure and idle state as target executing equipment.
In an embodiment, the sending module 502 is specifically configured to obtain an equipment identifier corresponding to a target execution equipment, and generate a test message of an execution type according to the equipment identifier and a test identifier corresponding to a target performance test; and broadcasting the test message of the execution type to each execution device, wherein the test message of the execution type is used for indicating each execution device receiving the test message to compare whether the device identifier is consistent with the self identifier, acquiring a corresponding test script according to the test identifier under the condition of consistency, and executing the target performance test based on the test script.
In one embodiment, the apparatus further comprises: the training module is used for acquiring a plurality of historical test reports corresponding to the test script; the plurality of historical test reports comprise a plurality of qualified running state information obtained by a plurality of historical time point tests; the plurality of historical time points represent a plurality of time points in a test period corresponding to the test script; inputting the plurality of historical time points and the qualified running state information corresponding to each historical time point into a long-short term memory model to be trained, and outputting running state prediction information of each historical time point corresponding to the test script by the long-short term memory model to be trained based on the plurality of historical time points and the plurality of qualified running state information corresponding to the plurality of historical time points; and adjusting the model parameters of the long-short term memory model to be trained according to the similarity between the running state prediction information of each historical time point and the qualified running state information of each time point until the training condition is met, thereby obtaining the target long-short term memory model.
In an embodiment, the training module is specifically configured to, for each historical test report, obtain at least one of a number of threads, memory information, a number of tcp connections, response time, a number of service processes per unit time, a number of service process failures, and a request time consumption corresponding to each historical time point in the historical test report, as the qualified operation state information corresponding to each historical time point; and inputting the qualified running state information corresponding to each historical time point into the long-short term memory model to be trained, extracting the relation between the running state characteristics and the time points by the long-short term memory model to be trained based on each historical time point and the qualified running state information corresponding to each historical time point, and outputting the running state prediction information of each historical time point corresponding to the test script based on the relation.
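A minimal sketch of such a training setup follows, assuming PyTorch; the seven-feature encoding, the hidden size, and the use of a mean-squared-error objective in place of the similarity-based criterion described above are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Each historical test report is reduced to a sequence of per-time-point
# feature vectors (thread count, memory, TCP connections, response time,
# transactions per unit time, failures, request latency). The feature
# order and the hyper-parameters below are illustrative.
FEATURES = 7

class StatePredictor(nn.Module):
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(FEATURES, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, FEATURES)

    def forward(self, x):          # x: (batch, time_points, FEATURES)
        out, _ = self.lstm(x)
        return self.head(out)      # predicted running state at each time point

def train(model: StatePredictor, reports: torch.Tensor,
          epochs: int = 50, lr: float = 1e-3) -> StatePredictor:
    # reports: qualified running-state sequences from historical test
    # reports, shape (num_reports, time_points, FEATURES).
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        pred = model(reports[:, :-1, :])      # predict the state at t+1 from states up to t
        loss = loss_fn(pred, reports[:, 1:, :])
        loss.backward()
        opt.step()
    return model
```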
In an embodiment, the detection module 504 is specifically configured to obtain the similarity between the running state information of the current time point and the running state prediction information of the corresponding time point among the time points output by the target long-short term memory model; and if the similarity corresponding to the current time point is smaller than a preset similarity threshold, determine that the comparison result represents that the running state information of the current time point is abnormal running state information.
In an embodiment, the acquisition module 506 is specifically configured to send a file capture message to the target execution device; the file capture message comprises capture frequency and address information of the file transfer station, and is used for indicating the target execution equipment to capture a corresponding heap dump file and a thread snapshot file according to the capture frequency and sending the heap dump file and the thread snapshot file to the file transfer station; the heap dump file comprises memory stack snapshots of target execution equipment when executing the test script; the thread snapshot file comprises an execution stack of each thread corresponding to the processor when the target execution device executes the test script; the file transfer station is used for storing the heap dump file and the thread snapshot file into the database.
In one embodiment, the apparatus further comprises: and the fault detection module is used for acquiring heartbeat information of each target execution device and determining whether each target execution device is in fault or not based on the heartbeat information.
In one embodiment, the apparatus further comprises: the test stopping module is used for generating a corresponding test canceling message according to the equipment identifier and the test canceling instruction of the target service execution equipment if the target execution equipment is determined to be in fault; and sending a test canceling message to the target execution equipment, wherein the test canceling message is used for indicating the target execution equipment to stop executing the target performance test under the condition of determining that the equipment identifier is consistent with the self identifier.
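The heartbeat-based fault check and test-cancel broadcast can be sketched as follows; the timeout value, the dictionary message format, and the function names are illustrative assumptions:

```python
import time
from typing import Callable, Optional

HEARTBEAT_TIMEOUT_S = 30  # illustrative threshold; not fixed by this application

def is_faulty(last_heartbeat_ts: float, now: Optional[float] = None) -> bool:
    # A target execution device is treated as faulty when no heartbeat has
    # been received within the timeout window.
    now = time.time() if now is None else now
    return now - last_heartbeat_ts > HEARTBEAT_TIMEOUT_S

def cancel_if_faulty(device_id: str, last_heartbeat_ts: float,
                     broadcast: Callable[[dict], None]) -> None:
    # Broadcast a test-cancel message for a failed device; only the device
    # whose identifier matches its own stops the running target performance test.
    if is_faulty(last_heartbeat_ts):
        broadcast({"type": "cancel_test", "device_id": device_id})
```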
The modules in the abnormality detection apparatus in the performance test may be implemented in whole or in part by software, hardware, or a combination thereof. The modules can be embedded in a hardware form or independent of a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 8. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a method of anomaly detection in a performance test. The display unit of the computer device is used for forming a visual picture and can be a display screen, a projection device or a virtual reality imaging device. The display screen can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on a shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the structure shown in fig. 8 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution of the present application is applied; a particular computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the above-mentioned abnormality detection method in the performance test when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when executed by a processor, implements the above-described method of anomaly detection in a performance test.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the above-described method of anomaly detection in a performance test.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high-density embedded nonvolatile Memory, resistive Random Access Memory (ReRAM), magnetic Random Access Memory (MRAM), ferroelectric Random Access Memory (FRAM), phase Change Memory (PCM), graphene Memory, and the like. Volatile Memory can include Random Access Memory (RAM), external cache Memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others. The databases referred to in various embodiments provided herein may include at least one of relational and non-relational databases. The non-relational database may include, but is not limited to, a block chain based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing based data processing logic devices, etc., without limitation.
All possible combinations of the technical features in the above embodiments may not be described for the sake of brevity, but should be considered as being within the scope of the present disclosure as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these are all within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (12)

1. A method of anomaly detection in a performance test, the method comprising:
receiving a test instruction aiming at the target performance test, and determining target execution equipment from a plurality of execution equipment contained in the service system;
sending a corresponding test message to the target execution device, where the test message carries a test identifier corresponding to the target performance test, and the test message is used to instruct the target execution device to obtain a corresponding test script based on the test identifier, and execute the target performance test based on the test script;
acquiring running state information of the current time point in the process of executing the target performance test by the target execution equipment, and comparing the running state information of the current time point with running state prediction information of a corresponding time point; the running state prediction information corresponding to the time point is obtained based on a target long-term and short-term memory model and model output when the test script is used as model input; the target long-short term memory model is obtained by training according to a plurality of historical test reports corresponding to the test script;
and if the comparison result represents that the running state information of the current time point is abnormal running state information, collecting a running snapshot corresponding to the target execution equipment, and storing the running snapshot.
2. The method according to claim 1, wherein the determining a target execution device from a plurality of execution devices included in the service system comprises:
acquiring the equipment state of each execution equipment in the service system;
and determining the execution equipment with the equipment state being non-failure and idle state as the target execution equipment.
3. The method of claim 1, wherein sending the corresponding test message to the target execution device comprises:
acquiring an equipment identifier corresponding to the target execution equipment, and generating a test message of an execution type according to the equipment identifier and a test identifier corresponding to the target performance test;
and broadcasting the test message of the execution type to each execution device, wherein the test message of the execution type is used for indicating each execution device receiving the test message to compare whether the device identifier is consistent with the self identifier, acquiring a corresponding test script according to the test identifier under the condition of consistency, and executing the target performance test based on the test script.
4. The method of claim 1, further comprising:
obtaining a plurality of historical test reports corresponding to the test script; the plurality of historical test reports comprise a plurality of qualified running state information obtained by a plurality of historical time point tests; the plurality of historical time points represent a plurality of time points in a test period corresponding to the test script;
inputting the plurality of historical time points and the qualified running state information corresponding to the historical time points into a long-short term memory model to be trained, and outputting running state prediction information of each historical time point corresponding to the test script by the long-short term memory model to be trained based on the plurality of historical time points and the plurality of qualified running state information corresponding to the plurality of historical time points;
and adjusting the model parameters of the long-short term memory model to be trained according to the similarity between the running state prediction information of each historical time point and the qualified running state information of each time point until the training condition is met, and obtaining the target long-short term memory model.
5. The method according to claim 4, wherein the inputting the plurality of historical time points and the qualified operating state information corresponding to each historical time point into the long-short term memory model to be trained comprises:
acquiring at least one of the number of threads, the memory information, the number of transmission control protocol connections, the response time, the number of service processes in unit time, the number of service process failures and the request time consumption corresponding to each historical time point in each historical test report as qualified running state information corresponding to each historical time point;
and inputting qualified running state information corresponding to each historical time point into a long-short term memory model to be trained, extracting the relation between running state characteristics and the time points by the long-short term memory model to be trained based on each historical time point and the qualified running state information corresponding to each historical time point, and outputting running state prediction information of each historical time point corresponding to the test script based on the relation.
6. The method of claim 1, wherein comparing the operating state information at the current time point with the operating state prediction information at the corresponding time point comprises:
obtaining the similarity between the running state information of the current time point and the running state prediction information of the corresponding time point in each time point output by the target long-short term memory model;
and if the similarity corresponding to the current time point is detected to be smaller than a preset similarity threshold value, determining that the running state information of the current time point represented by the comparison result is abnormal running state information.
7. The method according to claim 1, wherein the collecting a running snapshot corresponding to the target execution device and saving the running snapshot comprises:
sending a file capture message to the target execution device; the file capture message comprises capture frequency and address information of a file transfer station, and is used for instructing the target execution device to capture a corresponding heap dump file and a thread snapshot file according to the capture frequency and send the heap dump file and the thread snapshot file to the file transfer station;
wherein, the heap dump file comprises a memory stack snapshot when the target execution device executes the test script; the thread snapshot file comprises an execution stack of each thread corresponding to a processor when the target execution device executes the test script;
the file transfer station is used for storing the heap dump file and the thread snapshot file into a database.
8. The method of claim 1, further comprising:
acquiring heartbeat information of each target execution device, and determining whether each target execution device fails or not based on the heartbeat information;
after the sending of the corresponding test message to the target service execution device, the method further includes:
if the target execution equipment is determined to be in fault, generating a corresponding test cancellation message according to the equipment identification and the test cancellation instruction of the target service execution equipment;
and sending the test cancellation message to the target execution equipment, wherein the test cancellation message is used for indicating the target execution equipment to stop executing the target performance test under the condition that the equipment identification is determined to be consistent with the self identification.
9. An anomaly detection apparatus in a performance test, the apparatus comprising:
the determining module is used for receiving a test instruction aiming at the target performance test and determining target execution equipment from a plurality of execution equipment contained in the service system;
a sending module, configured to send a corresponding test message to the target execution device, where the test message carries a test identifier corresponding to the target performance test, and the test message is used to instruct the target execution device to obtain a corresponding test script based on the test identifier and execute the target performance test based on the test script;
the detection module is used for acquiring the running state information of the current time point in the process of executing the target performance test by the target execution equipment and comparing the running state information of the current time point with the running state prediction information of the corresponding time point; the running state prediction information corresponding to the time point is obtained based on a target long-term and short-term memory model and model output when the test script is used as model input; the target long-short term memory model is obtained by training according to a plurality of historical test reports corresponding to the test script;
and the acquisition module is used for acquiring the running snapshot corresponding to the target execution equipment and storing the running snapshot if the running state information of the current time point is represented as abnormal running state information by the comparison result.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
12. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 8 when executed by a processor.
CN202310079120.4A 2023-01-18 2023-01-18 Abnormity detection method and device in performance test and computer equipment Pending CN115982049A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310079120.4A CN115982049A (en) 2023-01-18 2023-01-18 Abnormity detection method and device in performance test and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310079120.4A CN115982049A (en) 2023-01-18 2023-01-18 Abnormity detection method and device in performance test and computer equipment

Publications (1)

Publication Number Publication Date
CN115982049A true CN115982049A (en) 2023-04-18

Family

ID=85959699

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310079120.4A Pending CN115982049A (en) 2023-01-18 2023-01-18 Abnormity detection method and device in performance test and computer equipment

Country Status (1)

Country Link
CN (1) CN115982049A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116340191A (en) * 2023-05-31 2023-06-27 合肥康芯威存储技术有限公司 Method, device, equipment and medium for testing memory firmware
CN116340191B (en) * 2023-05-31 2023-08-08 合肥康芯威存储技术有限公司 Method, device, equipment and medium for testing memory firmware

Similar Documents

Publication Publication Date Title
CN110399293B (en) System test method, device, computer equipment and storage medium
CN110245078B (en) Software pressure testing method and device, storage medium and server
CN112910945B (en) Request link tracking method and service request processing method
Lou et al. Software analytics for incident management of online services: An experience report
CN108959059B (en) Test method and test platform
EP3567496B1 (en) Systems and methods for indexing and searching
CN110768872A (en) Inspection method, system, device, computer equipment and storage medium
JP5989194B1 (en) Test management system and program
US11169910B2 (en) Probabilistic software testing via dynamic graphs
CN109992506A (en) Scheduling tests method, apparatus, computer equipment and storage medium
CN115982049A (en) Abnormity detection method and device in performance test and computer equipment
CN107451056B (en) Method and device for monitoring interface test result
CN111694724B (en) Test method and device of distributed form system, electronic equipment and storage medium
CN116561003A (en) Test data generation method, device, computer equipment and storage medium
US12001920B2 (en) Generating a global snapshot of a quantum computing device
CN110543413A (en) Business system testing method, device, equipment and storage medium
CN111338609B (en) Information acquisition method, device, storage medium and terminal
JP2009181495A (en) Job processing system and job management method
CN114385498A (en) Performance test method, system, computer equipment and readable storage medium
CN116414594A (en) Fault tree updating method, device, computer equipment and storage medium
CN111679924A (en) Component software system reliability simulation method and device and electronic equipment
JP2009181494A (en) Job processing system and job information acquisition method
CN114172823B (en) Micro-service link sampling method, device, equipment and readable storage medium
US11689412B2 (en) Automated monitoring of infrastructure and application on cloud
CN110008114B (en) Configuration information maintenance method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination