CN116909903A - System testing method, device, equipment and storage medium - Google Patents

System testing method, device, equipment and storage medium Download PDF

Info

Publication number: CN116909903A
Authority: CN (China)
Prior art keywords: test, period, time, determining, data
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202310892949.6A
Other languages: Chinese (zh)
Inventor: 孙佳 (Sun Jia)
Current Assignee: Bank of China Ltd
Original Assignee: Bank of China Ltd
Application filed by Bank of China Ltd
Priority to: CN202310892949.6A

Classifications

  • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management (Y: general tagging of cross-sectional technologies; Y02: technologies for mitigation or adaptation against climate change; Y02D: climate change mitigation technologies in information and communication technologies)

Landscapes

  • Test And Diagnosis Of Digital Computers (AREA)

Abstract

The application provides a system testing method, device, equipment and storage medium, which can be used in the field of big data. The method comprises the following steps: acquiring, through a web page, a plurality of test parameters and a script file corresponding to a tested system, wherein the script file comprises a test request and a test result determination method, and the plurality of test parameters and the script file are configured on the web page; testing the tested system in a test period according to the plurality of test parameters and the script file to obtain test data; analyzing the test data to determine a plurality of sub-test periods within the test period; determining a test mode corresponding to each sub-test period to obtain a plurality of test modes; and determining a test result corresponding to each test mode according to the test data. The method comprehensively determines the performance of the tested system during the performance test and improves the efficiency of the testing work.

Description

System testing method, device, equipment and storage medium
Technical Field
The present application relates to the field of big data, and in particular, to a method, an apparatus, a device, and a storage medium for testing a system.
Background
Due to the rapid development of business and technology, financial institution information systems are increasingly complex, and the complexity of system performance testing is greatly increased.
A performance test simulates various normal, peak and abnormal load conditions with an automated testing tool in order to measure the performance indexes of a system. JMeter, currently one of the most popular open-source performance-testing tools, can simulate a huge load on a server, network or object and thereby carry out such a test. However, when JMeter is used for performance tests with large concurrency and complex scenarios, testers must work in command-line mode, which consumes a great deal of time and effort on test configuration and on distributing scripts and parameter files, and yields only relatively crude test data after the test is completed, resulting in low performance-test efficiency.
At present, how to improve the efficiency of performance testing is an urgent problem to be solved.
Disclosure of Invention
The application provides a system testing method, device, equipment and storage medium, which are used for solving the problem of how to improve the efficiency of performance testing.
In a first aspect, the present application provides a method for testing a system, including:
acquiring a plurality of test parameters and script files corresponding to a tested system through a web page, wherein the script files comprise test requests and test result determining methods, and the plurality of test parameters and the script files are configured on the web page;
according to the plurality of test parameters and the script file, testing the tested system in a test period to obtain test data;
analyzing the test data to determine a plurality of sub-test periods within the test period;
determining a test mode corresponding to each sub-test period to obtain a plurality of test modes;
and determining a test result corresponding to each test mode according to the test data.
In a second aspect, the present application provides a test apparatus for a system, comprising:
the acquisition module is used for acquiring a plurality of test parameters and script files corresponding to a tested system through a web page, wherein the script files comprise test requests and test result determination methods, and the plurality of test parameters and the script files are configured on the web page;
the test module is used for testing the tested system in a test period according to the plurality of test parameters and the script file to obtain test data;
the analysis module is used for analyzing the test data to determine a plurality of sub-test periods in the test period;
the determining module is used for determining a test mode corresponding to each sub-test period to obtain a plurality of test modes;
and the determining module is also used for determining a test result corresponding to each test mode according to the test data.
In a third aspect, the present application provides an electronic device, comprising:
a processor and a memory;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to cause the electronic device to perform the method of any one of the first aspects.
In a fourth aspect, the present application provides a computer-readable storage medium having stored therein computer-executable instructions for performing the method of any of the first aspects when executed by a processor.
In a fifth aspect, the application provides a computer program product comprising a computer program which, when executed by a processor, implements a method as in any of the first aspects.
The embodiment provides a testing method, a device, equipment and a storage medium of a system, wherein the method firstly acquires a plurality of testing parameters and script files corresponding to a tested system through web pages; then, testing the tested system in a test period according to the plurality of test parameters and the script file to obtain test data; then, analyzing the test data to determine a plurality of sub-test periods within the test period; then, determining a test mode corresponding to each sub-test period to obtain a plurality of test modes; and finally, determining a test result corresponding to each test mode according to the test data. According to the method, the performance test is configured through the web page, after the test is completed, the test mode is determined according to the test time period, the test result is determined according to the test data, the performance of the tested system is analyzed from multiple dimensions, the performance of the tested system in the performance test is comprehensively determined, and meanwhile, the efficiency of the test work is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a specific application scenario diagram of a test method of a system provided by the present application;
FIG. 2a is a flowchart of a testing method of a system according to an embodiment of the present application;
FIG. 2b is a schematic diagram of a web page according to an embodiment of the present application;
FIG. 3a is a flowchart of a second test method of the system according to the embodiment of the present application;
FIG. 3b is a schematic diagram of characteristic moments of the transaction number TPS completed per unit time according to an embodiment of the present application;
FIG. 4 is a flowchart III of a test method of the system according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a test device of a system according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Specific embodiments of the present application have been shown by way of the above drawings and will be described in more detail below. The drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but rather to illustrate the inventive concepts to those skilled in the art by reference to the specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with related laws and regulations and standards, and provide corresponding operation entries for the user to select authorization or rejection.
It should be noted that the method and apparatus for testing a system of the present application may be used in the big data field, and may also be used in any field other than the big data field; the application field of the method and apparatus for testing a system of the present application is not limited.
Fig. 1 is a specific application scenario diagram of the test method of a system provided by the present application. As shown in fig. 1, in the prior art, when a performance test is performed on a financial institution information system (the tested system) in a large-concurrency, complex scenario using JMeter, a worker needs to enter command lines on a plurality of testers (tester A, tester B and tester C) separately in command-line mode, so as to complete tasks such as test configuration and the distribution of scripts and parameter files. After the performance test is completed, the resulting test data is a file in JTL format.
The JTL file is an output format of JMeter. A JTL file contains the result data collected during test execution. JTL files are stored in plain-text format and can be opened and viewed with any text editor. The data therein is typically arranged in comma-separated format, with each row representing one sample (one sampling result). The data in a JTL file includes the following information: a timestamp, i.e., the time at which the sample was executed; a sample label, i.e., the name or identifier of the sample; a success flag, indicating whether the sample executed successfully; the response time, i.e., the time the server took to respond; the data size, i.e., the amount of data returned by the server; and an error message, i.e., a detailed description of the error if one occurred.
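As an illustration (not part of the claimed method), the comma-separated JTL rows described above can be parsed with a few lines of Python. The column names and the sample rows below are hypothetical — the actual columns written by JMeter are configurable — but the parsing pattern is the same.

```python
import csv
from io import StringIO

# Hypothetical JTL excerpt in CSV form; real column sets are configurable in JMeter.
JTL_TEXT = """timeStamp,label,success,elapsed,bytes,responseMessage
1689580000000,loginRequest,true,120,512,
1689580000500,transferRequest,false,480,256,HTTP 500
"""

def parse_jtl(text):
    """Parse CSV-formatted JTL rows into a list of sample dicts."""
    rows = list(csv.DictReader(StringIO(text)))
    for row in rows:
        row["success"] = row["success"] == "true"  # success flag
        row["elapsed"] = int(row["elapsed"])       # response time in ms
        row["bytes"] = int(row["bytes"])           # bytes returned by the server
    return rows

samples = parse_jtl(JTL_TEXT)
```

Each resulting dict carries the fields enumerated above (timestamp, label, success flag, response time, data size, error message), ready for the aggregation described next.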
Thus, after the JTL files are obtained, further analysis, graphical presentation and performance evaluation are still required. The prior art therefore suffers from complex operation, low efficiency, and the inability to obtain intuitive performance-test results.
The application provides a system testing method, which is characterized in that performance tests are configured through web pages, after the tests are completed, test modes are determined according to test time periods, test results are determined according to test data, the performance of a tested system is analyzed from multiple dimensions, the performance of the tested system in the performance tests is comprehensively determined, and meanwhile, the efficiency of test work is improved.
The application provides a system testing method, which aims to solve the technical problems in the prior art.
The following describes the technical scheme of the present application and how the technical scheme of the present application solves the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2a is a flowchart of a test method of a system according to an embodiment of the present application. As shown in fig. 2a, the method of the present embodiment includes:
s201, acquiring a plurality of test parameters and script files corresponding to a tested system through a web page, wherein the script files comprise test requests and test result determining methods, and the plurality of test parameters and script files are configured on the web page;
the execution body of the embodiment of the application can be electronic equipment or a testing device of a system arranged in the electronic equipment. Alternatively, the test device of the system may be implemented by software, or may be implemented by a combination of software and hardware. The electronic device may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
In this embodiment, the tested system is a financial institution information system, and the JMeter is used to perform performance test on the tested system.
Wherein the performance test includes a load test and a pressure test, which may be performed in combination. The performance of the system under various working loads is determined through load testing, and the aim is to test the change condition of various performance indexes of the system when the load is gradually increased. Stress testing is a test that achieves the maximum level of service a system can provide by determining a bottleneck or unacceptable performance point of the system.
Test parameters refer to parameters related to a test plan, including: script name, script identification, scene execution time, virtual user number, virtual user leveling mode, result file path, fluctuation range parameter, etc. The fluctuation range parameter refers to the fluctuation range of the performance index obtained by the pressure test, so that the value is related to the performance index, and different performance indexes have different fluctuation range parameters and can be set according to service requirements. The script file refers to a JMeter script file, which includes a test request and a test result determination method. The test request is a request for analog services sent to the system under test. For example, when the system under test is a financial institution information system, the test request is to transact a financial related business. The test result determination method is a method of determining a test result corresponding to a test request by receiving a response or other means after transmitting the test request to the system under test.
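To make the parameter list above concrete, the following sketch shows how such a test plan might be represented once collected from the web page. Every field name and value here is illustrative, not the patent's actual schema.

```python
# Hypothetical test-plan configuration as collected from the web page;
# field names and values are illustrative assumptions.
test_parameters = {
    "script_name": "transfer_scenario",
    "script_id": "S-001",
    "duration_seconds": 600,        # scene execution time
    "virtual_users": 200,           # target number of concurrent virtual users
    "ramp_up_seconds": 60,          # virtual user leveling (ramp-up) mode
    "result_file_path": "/tmp/results.jtl",
    # fluctuation range parameters differ per performance index
    "fluctuation_range": {"tps": 0.10, "response_time": 0.15},
}

def validate_parameters(params):
    """Basic sanity checks before dispatching the plan to JMeter workers."""
    assert params["virtual_users"] > 0
    assert 0 < params["fluctuation_range"]["tps"] < 1
    return True
```

Note how the fluctuation range is stored per index, reflecting the statement that different performance indexes have different fluctuation range parameters.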
In this embodiment, a plurality of test parameters and script files corresponding to the tested system are acquired, and a web page mode is adopted. Specifically, fig. 2b is a schematic diagram of a web page provided by the embodiment of the present application, and as shown in fig. 2b, a user may input characters or operate controls to set the characters or operations on the web page according to service requirements. Thereafter, a plurality of test parameters and script files determined by the user are obtained through the web page.
The method has the advantages that when the performance of the tested system is tested, a user can uniformly configure a plurality of JMeter testers on a web page according to service requirements, so that the testing flow is simplified, and the testing working efficiency is improved.
S202, testing a tested system in a test period according to a plurality of test parameters and script files to obtain test data;
in this embodiment, on the basis of acquiring a plurality of test parameters and script files corresponding to a tested system through a web page, the tested system is tested according to the plurality of test parameters and script files, so as to obtain test data. And determining the performance of the tested system according to the test data.
It should be noted that, for subsequent analysis, the moments at which the test starts and ends, i.e., the test period, need to be recorded. Meanwhile, the test data obtained at this point is rough and can only show the maximum, minimum and average values of each index over the entire execution period of the test. In practical applications, users often care more about accurate test result data during the stable-operation period or the performance-bottleneck period, and such demands could previously be met only through a series of complicated and time-consuming operations such as manually estimating the time period, changing the JMeter configuration file in the background, and re-importing the test results.
Specifically, testing the tested system in the test period according to the plurality of test parameters and the script file to obtain test data may be done in various ways. For example: first, the tested system is tested according to the plurality of test parameters and the script file, and the generated test file is stored in a database; then, the database is read at a preset period to obtain the test file; finally, the test file is parsed to obtain the test data. The test file may be a JTL file; by monitoring and parsing the JTL file in real time, the test data of each stage can be analyzed while the test is still running, which improves the efficiency of determining test results. The database here may be an Elasticsearch database. Elasticsearch is a distributed, highly scalable, near-real-time search and data analysis engine that makes it convenient to search, analyze and explore large amounts of data.
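The periodic-read step above can be sketched as an incremental poller that consumes only the lines appended since its last pass, so stage results can be analyzed mid-test. In the patent's flow the records would come from a database such as Elasticsearch; here a plain string buffer stands in for it (a deliberate simplification for illustration).

```python
class JtlPoller:
    """Consume only the JTL lines appended since the previous poll,
    emulating the preset-period read of the result store."""

    def __init__(self):
        self.offset = 0  # how much of the buffer has been consumed

    def poll(self, buffer):
        """Return the complete lines added since the last call."""
        new_text = buffer[self.offset:]
        self.offset = len(buffer)
        return [line for line in new_text.splitlines() if line]

poller = JtlPoller()
buf = "1689580000000,loginRequest,true,120\n"
first = poller.poll(buf)                       # sees the first sample
buf += "1689580000500,transferRequest,false,480\n"
second = poller.poll(buf)                      # sees only the new sample
```

Because each poll returns only fresh samples, stage-level statistics can be updated continuously instead of waiting for the test to finish.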
S203, analyzing the test data to determine a plurality of sub-test periods in the test period;
in this embodiment, a sub-test period refers to a special period determined from the characteristic moments of a performance index. For example, for the success rate, the characteristic moments include the virtual user up time and the virtual user down-start time, which correspond to two sub-test periods: the virtual user rising period and the virtual user descending period.
Virtual Users refer to the concurrent users simulated in a performance test, which are used to simulate the access behavior of real users toward the tested system or application. The virtual user up time, also called the virtual user alignment time, refers to the moment at which the number of simultaneous virtual users reaches the maximum concurrency under peak or high-load conditions.
Determining the maximum number of concurrent virtual users is an important goal of performance testing, as it helps evaluate the performance and ultimate capacity of the tested system under high load. Concurrent users are increased gradually until the tested system reaches its maximum load and performance no longer improves, i.e., the bottleneck of the tested system is reached; that number of concurrent users is the maximum virtual user count. The corresponding characteristic time is the virtual user alignment time, and the period from the test start time to the virtual user alignment time is the virtual user rising period.
The virtual user descent start time (down-start time) is the point in time, in the final stage of the performance test, at which the number of virtual users stops increasing and begins to decrease gradually. It marks the beginning of the return from the high-load state to the normal-load state. The virtual user descending period runs from the virtual user descent start time to the moment the test ends.
S204, determining a test mode corresponding to each sub-test period to obtain a plurality of test modes;
in this embodiment, on the basis of the determined plurality of sub-test periods, the test mode corresponding to each sub-test period is determined, so as to obtain a plurality of test modes. Test modes correspond one-to-one with sub-test periods. The first-segment mode corresponds to the virtual user rising period, i.e., the period after the test starts during which the virtual users gradually ramp up, and is used to analyze how the system behaves while the test pressure is rising. The complete mode corresponds to the complete period and is used to compute index statistics over the entire execution of the test. The tail-segment mode corresponds to the virtual user descending period, i.e., the period before the test ends during which the virtual users gradually return to zero, and its index statistics are used to analyze how the system behaves while the test pressure is falling. The advice mode corresponds to the advice period: through intelligent analysis, the ramp-up period after the start of the test and the ramp-down period before its end are removed, and index statistics are computed over the middle period, thereby excluding the influence of the pressure not yet having reached the intended level while it is rising or falling. The peak mode corresponds to the maximum-peak period, i.e., the intelligently intercepted period during which the test index value reaches its peak; its index statistics are used to analyze the performance bottleneck of the tested software system.
The maximum-stable mode corresponds to the maximum stable period, i.e., the longest continuous period, intelligently intercepted, during which the fluctuation of the test index value stays within the fluctuation range parameter; its index statistics are used to analyze the performance of the tested software system during stable operation.
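The mode-specific index statistics described above amount to slicing the samples by a sub-test period's boundaries and aggregating. The sketch below illustrates this with invented (time, TPS) samples and invented boundaries for an advice-mode-style middle period; it is an illustration, not the patent's algorithm.

```python
# Invented (timestamp, tps) samples: ramp-up, stable plateau, ramp-down.
samples = [(0, 10), (1, 50), (2, 100), (3, 98), (4, 102), (5, 40), (6, 5)]

def mode_stats(samples, start, end):
    """Min/max/average TPS over one sub-test period [start, end]."""
    values = [tps for t, tps in samples if start <= t <= end]
    return {"min": min(values), "max": max(values),
            "avg": sum(values) / len(values)}

# Advice-mode-style slice: drop the ramp-up and ramp-down, keep the middle.
suggested = mode_stats(samples, 2, 4)
# Complete-mode slice: the whole test period.
complete = mode_stats(samples, 0, 6)
```

The middle slice reports the plateau behavior the user actually cares about, while the complete slice is dragged down by the ramp periods — exactly the effect the advice mode is meant to remove.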
The scheme has the advantage that the performance of the tested system is more intuitively known according to the performances of the tested system in different test modes.
S205, determining a test result corresponding to each test mode according to the test data.
In this embodiment, in addition to analyzing the test data with respect to the different performance indexes of the pressure test, the test data may also be analyzed from the angle of the test result. After the test results are obtained, test reports in two formats, an html page and an Excel file, may be generated for the user to view and download.
Specifically, according to the test data, determining the test result corresponding to each test mode may take the following procedure: determining successful transaction data, failed transaction data and all transaction data according to the test data; and determining a test result corresponding to the test mode according to the successful transaction data, the failed transaction data and all the transaction data aiming at any one test mode.
In this embodiment, the test result refers to a result of further analyzing the test data from three dimensions of a successful transaction, a failed transaction, and all transactions. The test time period can be determined, a plurality of sub-test time periods and test results are further determined, and then the test results are classified according to different dimensions, so that the performances of different test modes of the tested system under the condition of different test results are determined.
The method has the advantages that the test data is further analyzed from the successful and failed dimensions of the test result, and the performance of the tested system in the performance test is more comprehensively determined.
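The three analysis dimensions above (successful transactions, failed transactions, all transactions) can be sketched as a simple partition of the samples; the sample dicts below are illustrative stand-ins for parsed JTL rows.

```python
def split_by_outcome(samples):
    """Partition samples into successful, failed, and all transactions —
    the three analysis dimensions for test results."""
    ok = [s for s in samples if s["success"]]
    ko = [s for s in samples if not s["success"]]
    return {"success": ok, "failure": ko, "all": samples}

# Illustrative parsed samples (success flag + response time in ms).
data = [{"success": True, "elapsed": 100},
        {"success": False, "elapsed": 900},
        {"success": True, "elapsed": 120}]

result = split_by_outcome(data)
success_rate = len(result["success"]) / len(result["all"])
```

Any per-mode statistic (min/max/average response time, TPS, and so on) can then be computed three times, once per partition, which is what lets the method report each test mode under each outcome dimension.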
The embodiment provides a testing method of a system, which comprises the steps of firstly, acquiring a plurality of testing parameters and script files corresponding to a tested system through a web page; then, testing the tested system in a test period according to a plurality of test parameters and script files to obtain test data; then, analyzing the test data to determine a plurality of sub-test periods within the test period; then, determining a test mode corresponding to each sub-test period to obtain a plurality of test modes; and finally, determining a test result corresponding to each test mode according to the test data. According to the method, the performance test is configured through the web page, after the test is completed, the test mode is determined according to the test time period, the test result is determined according to the test data, the performance of the tested system is analyzed from multiple dimensions, the performance of the tested system in the performance test is comprehensively determined, and meanwhile, the efficiency of the test work is improved.
Fig. 3a is a flowchart of a second test method of the system according to an embodiment of the present application. As shown in fig. 3a, the method of the present embodiment, based on the embodiment shown in fig. 2a, performs analysis processing on test data to describe in detail a process of determining a plurality of sub-test periods within a test period.
S301, analyzing the test data to obtain a plurality of performance indexes, wherein the plurality of performance indexes comprise at least one of the following: the number of transactions completed in unit time, the response time, the success rate, the error number, the number of transmitted bytes and the number of received bytes;
in this embodiment, the performance index characterizes the performance of the tested system in the performance test, where the performance includes the number of transactions completed in unit time, the response time, the success rate, the error number, the number of bytes sent, and the number of bytes received.
Where the number of transactions completed per unit time (Transactions Per Second, TPS) refers to the number of transactions completed per second for evaluating the performance of a system or application. In performance testing, TPS is an important indicator that measures the number of transactions a system can handle in a given time.
TPS is calculated by dividing the total number of transactions by the time to execute the transaction, typically in seconds. For example, if 1000 transactions have been performed in total for a period of 10 seconds, the calculated TPS is 1000/10=100.
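The calculation above is direct to express in code; this sketch simply restates the worked example from the text.

```python
def tps(total_transactions, duration_seconds):
    """Transactions per second: total transaction count over elapsed time."""
    return total_transactions / duration_seconds

# The worked example from the text: 1000 transactions over 10 seconds.
value = tps(1000, 10)
```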
A high value for TPS generally indicates that the system has higher throughput and response capability, and is able to handle more concurrent transactions. Higher TPS is particularly important for highly loaded systems or applications that need to handle large numbers of transactions because they need to process requests efficiently in a limited time.
Response Time (Response Time) refers to the amount of Time required for a system under test to respond to a test request. It includes the time to send the request, process the request, and return the response.
The success rate refers to the percentage of success of the tested system in processing the test request. The number of errors refers to the number of failures measured by the tested system to process the test request. The number of bytes sent is used to evaluate the efficiency and ability of the system in handling data transmissions. It represents the amount of data sent from a client to a server or from a server to a client over a period of time. The number of received bytes represents the amount of data received from the server over a period of time. The size of the number of bytes received depends on the test scenario and the nature of the system under test.
Because the test data is rough and raw, it must be parsed according to the definitions of the different performance indexes, so as to obtain the time-varying values of each index.
S302, processing a plurality of performance indexes by using a characteristic moment identification algorithm according to the fluctuation range parameters to obtain a plurality of characteristic moments corresponding to each performance index; wherein the plurality of feature moments includes at least one of: virtual user alignment time, virtual user descent start time, wave crest time, wave trough time, ascending inflection point time, descending inflection point time, non-attention disturbance time, test start time and test end time;
in this embodiment, the feature time refers to a time corresponding to a numerical value having a feature meaning among numerical values of the feature index changing with time. Fig. 3b is a schematic diagram of characteristic moments of the number of transactions completed per unit time TPS according to an embodiment of the present application. As shown, the feature moments include: virtual user up timing a, virtual user down start timing J, peak timing G, trough timing H, rising inflection point timing F, falling inflection point timing (not shown in the figure), non-attention disturbance timings C and D, test start timing t1, and test end timing t2. The virtual user up time and the virtual user down start time are described above, and are not described here again.
A peak point (Peak Point) refers to the point in time during the performance test at which the tested system's performance, or the number of virtual users, reaches its highest level. The system typically reaches maximum load and maximum throughput near the peak point. A trough point (Trough Point) refers to the point in time during the performance test at which the tested system's performance, or the number of virtual users, reaches its lowest level. Performance near the trough point is lower, and delays or increased response times may occur.
The rising Inflection Point refers to the point in time at which the performance index, or the number of virtual users, starts to transition from a lower state into rapid growth. It marks that the tested system is beginning to approach its load limit, where performance degradation may occur.
The falling inflection point (Deceleration Point) refers to the point in time at which the performance index, or the number of virtual users, starts to transition from a higher state into slow growth or stability. It marks that the performance of the tested system is slowing down or stabilizing.
Non-concern perturbation points (Non-Critical Perturbation Points) refer to fluctuations that affect the performance of the tested system during the performance test but are not among the primary or important indicators the test focuses on. These perturbations may be caused by the test environment, network factors, or other factors, and neither have a significant impact on the tested system nor represent its real use.
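The peak, trough, and inflection-point definitions above can be sketched with discrete differences over a sampled index series. This is an illustrative reconstruction, not the patent's disclosed algorithm; the function name and the single-sample comparison window are assumptions.

```python
# Illustrative reconstruction (not the patent's disclosed algorithm): locate
# peak, trough, and rising/falling inflection points in a sampled performance
# index series such as TPS over time, using discrete differences.

def find_feature_points(values):
    """Return indices of peaks, troughs, rising and falling inflections."""
    peaks, troughs, rising_infl, falling_infl = [], [], [], []
    for i in range(1, len(values) - 1):
        prev_d = values[i] - values[i - 1]   # backward slope
        next_d = values[i + 1] - values[i]   # forward slope
        if prev_d > 0 and next_d < 0:
            peaks.append(i)                  # local maximum: peak time
        elif prev_d < 0 and next_d > 0:
            troughs.append(i)                # local minimum: trough time
        elif next_d > prev_d > 0:
            rising_infl.append(i)            # growth accelerating
        elif 0 < next_d < prev_d:
            falling_infl.append(i)           # growth slowing toward stable
    return peaks, troughs, rising_infl, falling_infl
```

On real data the series would first be smoothed, so that small perturbations within the tolerated fluctuation band do not register as spurious peaks or troughs.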
In this embodiment, processing the plurality of performance indexes with the feature moment identification algorithm means identifying the feature moments from the time-varying values of the different characteristic indexes in combination with the fluctuation range parameter. Specifically, if the fluctuation range parameter of TPS is 10% and the TPS value at the virtual user up time a is 1000, then a TPS value fluctuating between 900 and 1100 is treated as a non-concern perturbation, and the corresponding time is a non-concern perturbation time; if the TPS value rises above 1100 or falls below 900, the algorithm further identifies whether a peak or trough has been reached.
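The band check in the example above (baseline TPS 1000, fluctuation range parameter 10%, tolerated band [900, 1100]) can be sketched as follows; the function and label names are illustrative assumptions, not taken from the patent.

```python
# Sketch of the band check described above: values inside the tolerated band
# [baseline*(1-r), baseline*(1+r)] are non-concern perturbations; values
# outside it are re-examined as possible peaks or troughs.

def classify_sample(value, baseline, fluctuation_range):
    low = baseline * (1 - fluctuation_range)
    high = baseline * (1 + fluctuation_range)
    if low <= value <= high:
        return "non_concern_perturbation"    # noise inside the tolerated band
    return "peak_or_trough_candidate"        # outside the band: check further
```

With the TPS example, `classify_sample(950, 1000, 0.10)` falls inside [900, 1100] and is treated as noise, while 1150 or 880 would be handed to the peak/trough identification step.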
S303, determining a plurality of sub-test periods corresponding to each performance index in the test period according to a plurality of characteristic moments corresponding to each performance index.
In this embodiment, the plurality of sub-test periods corresponding to each performance index may be determined from the plurality of feature moments; this process is described in detail later and is therefore omitted here.
This embodiment provides a system testing method: first, the test data is parsed to obtain a plurality of performance indexes; then the plurality of performance indexes are processed with a feature moment identification algorithm according to the fluctuation range parameter to obtain a plurality of feature moments corresponding to each performance index; finally, a plurality of sub-test periods corresponding to each performance index within the test period are determined from those feature moments. By processing the test data into performance indexes, then feature moments, and finally sub-test periods, the test data is analyzed comprehensively, making the test results more scientific, more accurate, and better matched to actual requirements.
Fig. 4 is a flowchart of a test method of the system according to an embodiment of the present application. As shown in fig. 4, the method of the present embodiment, based on the embodiment shown in fig. 2a, describes in detail the process of determining a plurality of sub-test periods corresponding to each performance index according to a plurality of feature moments corresponding to each performance index.
S401, determining a virtual user rising period according to the virtual user up time and the test start time;
in this embodiment, as shown in fig. 3b, the virtual user rising period is the period from the test start time t1 to the virtual user up time a; that is, its duration equals a-t1, and the virtual user rising period is [t1, a].
S402, determining a virtual user descending period according to the virtual user down start time and the test end time;
in this embodiment, as shown in fig. 3b, the virtual user descending period is the period from the virtual user down start time J to the test end time t2; that is, its duration equals t2-J, and the virtual user descending period is [J, t2].
S403, determining a maximum peak period according to the peak time and the fluctuation range parameter;
in this embodiment, in addition to the use described above, the fluctuation range parameter also provides a fluctuation value for the maximum peak period. As shown in fig. 3b, the maximum peak period takes the peak time G as its midpoint and extends forwards and backwards by the fluctuation value. Assuming the fluctuation value for TPS is b2, the duration of the maximum peak period is 2*b2, and the maximum peak period is [G-b2, G+b2].
S404, determining a suggested period according to the virtual user up time and the virtual user down start time;
in this embodiment, as shown in fig. 3b, the suggested period is the period from the virtual user up time a to the virtual user down start time J; that is, its duration equals J-a, and the suggested period is [a, J].
S405, determining a complete period according to the test starting time and the test ending time;
in this embodiment, as shown in fig. 3b, the complete period is the period from the test start time t1 to the test end time t2; that is, its duration equals t2-t1, and the complete period is [t1, t2].
S406, determining a plurality of stationary periods according to the virtual user up time, the virtual user down start time, the rising inflection time, the falling inflection time and the non-concern perturbation time;
in this embodiment, a stationary period refers to a period containing no peak time or trough time, in which the value of the performance index stays within a relatively stable range; a stationary period may therefore contain non-concern perturbation times. A test period may contain several stationary periods. In general, the first stationary period in the test period runs from the virtual user up time to the rising (or falling) inflection time, and the last runs from the end of the peak or trough to the virtual user down start time; as shown in fig. 3b, both [a, F] and [I, J] are stationary periods.
S407, among the plurality of stationary periods, determining a stationary period having the longest duration as a maximum stationary period.
In this embodiment, to determine the maximum stationary period, the durations of the stationary periods are compared, and the one with the longest duration is selected as the maximum stationary period. Specifically, as shown in fig. 3b, the duration of [a, F] is compared with that of [I, J]; if [a, F] lasts longer, it is the maximum stationary period; otherwise [I, J] is.
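Steps S401 to S407 can be summarized in a short sketch that maps the feature moments of Fig. 3b to the sub-test periods; variable names mirror the figure labels, and the function itself is an illustrative assumption rather than the patent's code.

```python
# Illustrative sketch (not the patent's code) mapping the feature moments of
# Fig. 3b to the sub-test periods of steps S401-S407. Labels: t1/t2 test
# start/end, a virtual user up time, j virtual user down start time, g peak
# time, b2 fluctuation value; all timestamps in seconds.

def sub_test_periods(t1, t2, a, j, g, b2, stationary):
    """stationary: list of (start, end) candidate stationary periods."""
    periods = {
        "virtual_user_rise": (t1, a),           # S401: [t1, a]
        "virtual_user_fall": (j, t2),           # S402: [J, t2]
        "max_peak":          (g - b2, g + b2),  # S403: peak +/- fluctuation
        "suggested":         (a, j),            # S404: [a, J]
        "complete":          (t1, t2),          # S405: [t1, t2]
    }
    # S407: the stationary period with the longest duration is the maximum one
    periods["max_stationary"] = max(stationary, key=lambda p: p[1] - p[0])
    return periods
```

For example, with t1=0, t2=600, a=60, J=540, G=300 and b2=30, the maximum peak period is [270, 330] and the suggested period is [60, 540].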
This embodiment provides a system test method: the virtual user rising period is determined from the virtual user up time and the test start time; the virtual user descending period from the virtual user down start time and the test end time; the maximum peak period from the peak time and the fluctuation range parameter; the suggested period from the virtual user up time and the virtual user down start time; the complete period from the test start time and the test end time; and a plurality of stationary periods from the virtual user up time, the virtual user down start time, the rising inflection time, the falling inflection time and the non-concern perturbation time. Finally, among the stationary periods, the one with the longest duration is taken as the maximum stationary period. By determining the sub-test periods corresponding to different characteristic indexes from the different feature moments, the test data is analyzed comprehensively, making the test results more scientific, more accurate, and better matched to actual requirements.
Fig. 5 is a schematic structural diagram of a test device of a system according to an embodiment of the present application. The apparatus of this embodiment may be in the form of software and/or hardware. As shown in fig. 5, a test apparatus 500 of a system according to an embodiment of the present application includes an obtaining module 501, a test module 502, a parsing module 503 and a determining module 504.
an obtaining module 501, configured to obtain, through a web page, a plurality of test parameters and script files corresponding to a tested system, where the script files include a test request and a test result determining method, and the plurality of test parameters and script files are configured on the web page;
the test module 502 is configured to test the tested system in a test period according to a plurality of test parameters and script files, so as to obtain test data;
a parsing module 503, configured to parse the test data to determine a plurality of sub-test periods within the test period;
a determining module 504, configured to determine a test mode corresponding to each sub-test period, so as to obtain a plurality of test modes;
the determining module 504 is further configured to determine a test result corresponding to each test mode according to the test data.
In one possible implementation, the parsing module 503 is specifically configured to:
analyzing the test data to obtain a plurality of performance indexes and a plurality of characteristic moments corresponding to each performance index;
and determining a plurality of sub-test periods corresponding to each performance index in the test period according to the plurality of characteristic moments corresponding to each performance index.
In one possible implementation, the parsing module 503 is specifically configured to:
analyzing the test data to obtain a plurality of performance indexes, wherein the plurality of performance indexes comprise at least one of the following: the number of transactions completed in unit time, the response time, the success rate, the error number, the number of transmitted bytes and the number of received bytes;
according to the fluctuation range parameter, a plurality of performance indexes are processed by using a feature moment identification algorithm to obtain a plurality of feature moments corresponding to each performance index; wherein the plurality of feature moments includes at least one of: virtual user up time, virtual user down start time, peak time, trough time, rising inflection time, falling inflection time, non-concern perturbation time, test start time and test end time.
In one possible implementation, the parsing module 503 is specifically configured to:
determining a virtual user rising period according to the virtual user up time and the test start time;
determining a virtual user descending period according to the virtual user down start time and the test end time;
determining a maximum peak period according to the peak time and the fluctuation range parameter;
determining a suggested period according to the virtual user up time and the virtual user down start time;
and determining the complete period according to the test start time and the test end time.
In one possible implementation, the parsing module 503 is specifically configured to:
determining a plurality of stationary periods according to the virtual user up time, the virtual user down start time, the rising inflection time, the falling inflection time and the non-concern perturbation time;
among the plurality of stationary periods, a stationary period having the longest duration is determined as a maximum stationary period.
In one possible implementation, the determining module 504 is specifically configured to:
determining successful transaction data, failed transaction data and all transaction data according to the test data;
and determining a test result corresponding to the test mode according to the successful transaction data, the failed transaction data and all the transaction data aiming at any one test mode.
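As a hedged illustration of the determining module's computation, the sketch below aggregates the successful, failed, and all transaction data falling inside one test mode's sub-test period; the record fields (`time`, `success`, `rt_ms`) are assumptions, since the patent does not specify a data layout.

```python
# Hedged illustration of the determining module: aggregate successful, failed,
# and all transaction data inside one test mode's sub-test period. The record
# fields ("time", "success", "rt_ms") are assumptions.

def mode_result(transactions, start, end):
    in_period = [t for t in transactions if start <= t["time"] <= end]
    ok = [t for t in in_period if t["success"]]
    failed = [t for t in in_period if not t["success"]]
    total = len(in_period)
    return {
        "total": total,                       # all transaction data
        "errors": len(failed),                # failed transaction data
        "success_rate": len(ok) / total if total else 0.0,
        "avg_response_ms": sum(t["rt_ms"] for t in ok) / len(ok) if ok else 0.0,
    }
```

Calling this once per sub-test period yields the per-mode test results (e.g. success rate in the maximum stationary period versus the maximum peak period).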
In one possible implementation, the test module 502 is specifically configured to:
testing the tested system according to a plurality of test parameters and script files, and storing the generated test files into a database;
reading a database according to a preset period to obtain a test file;
and analyzing the test file to obtain test data.
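The test module's store-poll-parse flow can be sketched with SQLite standing in for the unspecified database; the table and column names are illustrative assumptions.

```python
# Sketch of the test module's flow: the load generator stores raw test
# records in a database, and a reader polls it on a preset period and parses
# the rows into test data. Table/column names are illustrative assumptions.
import sqlite3

def store_test_file(conn, records):
    # the generator appends raw (timestamp, tps) records as they are produced
    conn.execute("CREATE TABLE IF NOT EXISTS test_file (ts REAL, tps REAL)")
    conn.executemany("INSERT INTO test_file VALUES (?, ?)", records)
    conn.commit()

def read_test_data(conn):
    # called once per preset period; parses stored rows into (ts, tps) pairs
    return conn.execute("SELECT ts, tps FROM test_file ORDER BY ts").fetchall()

conn = sqlite3.connect(":memory:")
store_test_file(conn, [(0.0, 950.0), (1.0, 1010.0)])
```

In a deployment, `read_test_data` would run on a scheduler with the preset period, and its output would feed the parsing module described above.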
The system test device provided in this embodiment may be used to execute the above method embodiments; its implementation principle and technical effects are similar and are not repeated here.
Referring to fig. 6, the electronic device 20 may include a processor 21 and a memory 22. The processor 21, the memory 22, and the like are illustratively interconnected by a bus 23.
Memory 22 stores computer-executable instructions;
the processor 21 executes computer-executable instructions stored in the memory 22 to cause the electronic device to perform the test method of the system as described above.
It should be understood that the processor 21 may be a central processing unit (Central Processing Unit, CPU), or may be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present application may be embodied directly in a hardware processor for execution, or executed by a combination of hardware and software modules in the processor. The memory 22 may include a high-speed random access memory (Random Access Memory, RAM) and may further include a non-volatile memory (Non-Volatile Memory, NVM), such as at least one magnetic disk memory; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk, or an optical disk.
The embodiment of the application correspondingly provides a computer readable storage medium, wherein computer execution instructions are stored in the computer readable storage medium, and the computer execution instructions are used for realizing the testing method of the system when being executed by the processor.
The embodiment of the application correspondingly provides a computer program product, which comprises a computer program, and the computer program realizes the testing method of the system when being executed by a processor.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method for testing a system, comprising:
acquiring a plurality of test parameters and script files corresponding to a tested system through a web page, wherein the script files comprise test requests and test result determining methods, and the plurality of test parameters and the script files are configured on the web page;
according to the plurality of test parameters and the script file, testing the tested system in a test period to obtain test data;
analyzing the test data to determine a plurality of sub-test periods within the test period;
determining a test mode corresponding to each sub-test period to obtain a plurality of test modes;
and determining a test result corresponding to each test mode according to the test data.
2. The method of claim 1, wherein parsing the test data to determine a plurality of sub-test periods within the test period comprises:
analyzing the test data to obtain a plurality of performance indexes and a plurality of characteristic moments corresponding to each performance index;
and determining a plurality of sub-test periods corresponding to each performance index in the test period according to a plurality of characteristic moments corresponding to each performance index.
3. The method of claim 2, wherein the plurality of test parameters includes a fluctuation range parameter; analyzing the test data to obtain a plurality of performance indexes and a plurality of characteristic moments corresponding to each performance index comprises:
analyzing the test data to obtain the plurality of performance indexes, wherein the plurality of performance indexes comprise at least one of the following: the number of transactions completed in unit time, the response time, the success rate, the error number, the number of transmitted bytes and the number of received bytes;
according to the fluctuation range parameter, processing the performance indexes by using a characteristic moment identification algorithm to obtain a plurality of characteristic moments corresponding to each performance index; wherein the plurality of feature moments includes at least one of: virtual user up time, virtual user down start time, peak time, trough time, rising inflection time, falling inflection time, non-concern perturbation time, test start time and test end time.
4. The method of claim 3, wherein the plurality of sub-test periods include a virtual user rising period, a virtual user descending period, a maximum peak period, a suggested period, and a complete period; according to the feature moments corresponding to each performance index, determining a plurality of sub-test periods corresponding to each performance index comprises:
determining the virtual user rising period according to the virtual user up time and the test start time;
determining the virtual user descending period according to the virtual user descending starting time and the test ending time;
determining the maximum peak period according to the peak moment and the fluctuation range parameter;
determining the suggested period according to the virtual user up time and the virtual user down start time;
and determining the complete period according to the test starting time and the test ending time.
5. The method of claim 3, wherein the plurality of sub-test periods further comprises a maximum stationary period; the method further comprises the steps of:
determining a plurality of stationary periods according to the virtual user up time, the virtual user down start time, the rising inflection time, the falling inflection time, and the non-concern perturbation time;
among the plurality of stationary periods, a stationary period having a longest duration is determined as the maximum stationary period.
6. The method of any one of claims 1-5, wherein determining a test result corresponding to each test pattern based on the test data comprises:
determining successful transaction data, failed transaction data and all transaction data according to the test data;
and determining a test result corresponding to the test mode according to the successful transaction data, the failed transaction data and the all transaction data aiming at any test mode.
7. The method according to any one of claims 1-6, wherein testing the system under test for a test period based on the plurality of test parameters and the script file to obtain test data comprises:
testing the tested system according to the plurality of test parameters and the script file, and storing the generated test file into a database;
reading the database according to a preset period to obtain the test file;
and analyzing the test file to obtain the test data.
8. A test device for a system, comprising:
the system comprises an acquisition module, a test module and a test module, wherein the acquisition module is used for acquiring a plurality of test parameters and script files corresponding to a tested system through a web page, the script files comprise test requests and test result determination methods, and the plurality of test parameters and the script files are configured on the web page;
the test module is used for testing the tested system in a test period according to the plurality of test parameters and the script file to obtain test data;
the analysis module is used for analyzing the test data to determine a plurality of sub-test periods in the test period;
the determining module is used for determining a test mode corresponding to each sub-test period to obtain a plurality of test modes;
and the determining module is also used for determining a test result corresponding to each test mode according to the test data.
9. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the method of any one of claims 1 to 7.
10. A computer readable storage medium having stored therein computer executable instructions which when executed by a processor are adapted to carry out the method of any one of claims 1 to 7.
CN202310892949.6A 2023-07-19 2023-07-19 System testing method, device, equipment and storage medium Pending CN116909903A (en)

Publications (1)

Publication Number Publication Date
CN116909903A true CN116909903A (en) 2023-10-20



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination