CN111124791A - System testing method and device - Google Patents


Info

Publication number
CN111124791A
CN111124791A (application CN201911244071.5A)
Authority
CN
China
Prior art keywords
server
resource utilization
utilization rate
stack
target server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911244071.5A
Other languages
Chinese (zh)
Inventor
安继贤
李晶
晋晓峰
盛勤
Current Assignee
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date
Filing date
Publication date
Application filed by WeBank Co Ltd
Priority to CN201911244071.5A
Publication of CN111124791A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/22Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/26Functional testing

Abstract

The embodiment of the invention provides a system testing method and a system testing device, wherein the method comprises the following steps: a performance testing device performs a performance test on at least one server where the system under test is located, obtains the operating parameters of the at least one server during the performance test, then determines a target server from the at least one server according to the operating parameters, and performs stack collection on the system under test on the target server. Performance problems are analyzed according to the stack collection result, and a performance test result and a performance problem analysis result of the system under test are finally generated. The method addresses the low testing efficiency and untimely fault location of existing system testing.

Description

System testing method and device
Technical Field
The embodiment of the invention relates to the field of financial technology (Fintech), in particular to a system testing method and device.
Background
With the development of computer technology, more and more technologies are applied in the financial field, and the traditional financial industry is gradually shifting toward financial technology. Performance testing technology is no exception: because of the financial industry's requirements for security and real-time behavior, users place ever higher demands on financial systems, which in turn raises the requirements on system performance testing technology. At present, after a performance test is performed on a system to obtain a performance test result, a third-party analysis device is mainly used to analyze the system's performance problems in a separate pass, so testing efficiency is low and system faults are not located in a timely manner.
Therefore, a system testing method and apparatus that can overcome the above problems are desired.
Disclosure of Invention
The embodiment of the invention provides a system testing method and device to solve the problems of low testing efficiency and untimely fault location in existing system testing.
In a first aspect, an embodiment of the present invention provides a system testing method, where the method includes:
the method comprises: performing a performance test on at least one server where the system under test is located; obtaining the operating parameters of the at least one server during the performance test; and determining a target server from the at least one server according to the operating parameters. Stack collection is then performed on the system under test on the target server, performance problems are analyzed according to the stack collection result, and a performance test result and a performance problem analysis result of the system under test are generated.
In the embodiment of the invention, the system corresponding to each server is started for the performance test; the server with the highest resource utilization rate is then determined as the target server from the operating parameters of each server, and stack collection and analysis are performed on the system under test on the target server. This overcomes the problems in the prior art, improves the efficiency of analyzing performance problems of the system under test, and allows the fault location in the system under test to be found quickly.
In one possible design, determining a target server from the at least one server according to the operating parameters includes: for any server, calculating the resource utilization rate of the server according to its operating parameters, and screening out, from the at least one server, a server whose resource utilization rate meets a preset index as the target server.
In the embodiment of the invention, the target server is selected according to the resource utilization rate of each server, ensuring that the selected target server is the most representative one.
In one possible design, the operating parameters include at least one of system load per unit time, CPU usage rate, memory usage rate, or network usage rate; further, the resource utilization rate of the server is calculated from its operating parameters through a preset resource utilization rate formula, where the preset resource utilization rate formula is:
R_U=L_ONE×K1+CPU_U×K2+M_U×K3+Net_U×K4
where R_U represents the resource utilization rate of the server, L_ONE represents the system load per unit time of the server, K1 represents the weight of the system load per unit time, CPU_U represents the CPU usage rate of the server, K2 represents the weight of the CPU usage rate, M_U represents the memory usage rate of the server, K3 represents the weight of the memory usage rate, Net_U represents the network usage rate of the server, and K4 represents the weight of the network usage rate.
In the embodiment of the invention, the resource utilization rate of each server is calculated according to the formula, and the server with the maximum resource utilization rate is taken as the target server. A representative server can thus be selected as the target server, so that the performance test result and performance problem analysis result obtained from testing the system under test on the target server are the most reliable and of the greatest reference value.
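As an illustration, the preset formula can be evaluated as a simple weighted sum. The sketch below assumes equal example weights of 0.25 for K1 through K4; the description leaves the actual weight values to be set according to need.

```python
# Minimal sketch of the preset resource utilization formula
# R_U = L_ONE*K1 + CPU_U*K2 + M_U*K3 + Net_U*K4.
# The default weights below are illustrative, not prescribed.

def resource_utilization(l_one, cpu_u, m_u, net_u,
                         k1=0.25, k2=0.25, k3=0.25, k4=0.25):
    """Weighted resource utilization rate of one server."""
    return l_one * k1 + cpu_u * k2 + m_u * k3 + net_u * k4

# Example: load 0.8, CPU usage 0.6, memory usage 0.5, network usage 0.3
print(round(resource_utilization(0.8, 0.6, 0.5, 0.3), 2))  # 0.55
```

With all inputs normalized to the same 0–1 range, R_U stays comparable across servers regardless of the weight choice, which is what the screening by resource utilization rate relies on.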
In one possible design, the preset index includes: the resource utilization rate of the target server is the maximum among the resource utilization rates of all the servers, on which the system under test is located, that are undergoing the performance test; or the resource utilization rate of the target server is greater than a set threshold.
In the embodiment of the invention, the server with the largest resource utilization rate, or a server whose resource utilization rate reaches the set threshold, is selected as the target server, achieving the aim of screening out the most representative server. The stack collection data gathered from the performance test of the system under test on the target server in the subsequent steps is therefore the most representative and reliable.
In one possible design, performing stack collection on the system under test on the target server and analyzing performance problems according to the stack collection result includes: performing stack collection on the system under test on the target server at a plurality of randomly selected time points, and then comparing and analyzing the stack collection results of the multiple time points to generate the performance problem analysis result.
In the embodiment of the invention, stack collection is performed on the system under test on the target server at a plurality of random time points, and the stack collection results are then compared and analyzed to generate the performance analysis result. This keeps each round of stack data collection fair and random, making the resulting performance analysis more reliable.
In the embodiment of the present invention, the stack analysis result includes at least one of: deadlock information, thread running state statistics, and high-frequency function call statistics for business logic functions, where a high-frequency function is a running function whose call count meets a set condition.
In a second aspect, an embodiment of the present invention provides a system testing apparatus; for its technical effects, reference may be made to the foregoing method embodiments. The apparatus includes:
the processing unit is used for carrying out performance test on at least one server where the tested system is located;
the acquisition unit is used for acquiring the operating parameters of the at least one server in the performance test process;
the processing unit is further used for determining a target server from the at least one server according to the operation parameters; performing stack collection on the system to be tested on the target server, and performing performance problem analysis according to the stack collection result; and generating a performance test result and a performance problem analysis result of the tested system.
In one possible design, the processing unit is specifically configured to: for any server, calculate the resource utilization rate of the server according to the operating parameters of the server;
and screening out the servers with the resource utilization rates meeting preset indexes from at least one server as target servers.
In one possible design, the operating parameters include at least one of system load per unit time, CPU usage rate, memory usage rate, or network usage rate;
the processing unit is further to: calculating the resource utilization rate of the server through a preset resource utilization rate formula according to the operation parameters of the server; the preset resource utilization rate formula is as follows:
R_U=L_ONE×K1+CPU_U×K2+M_U×K3+Net_U×K4;
wherein R_U represents the resource utilization rate of the server, L_ONE represents the system load per unit time of the server, K1 represents the weight of the system load per unit time, CPU_U represents the CPU usage rate of the server, K2 represents the weight of the CPU usage rate, M_U represents the memory usage rate of the server, K3 represents the weight of the memory usage rate, Net_U represents the network usage rate of the server, and K4 represents the weight of the network usage rate.
Further, the processing unit is specifically configured to: and determining a target server from the at least one server according to the resource utilization rate of the server, wherein the resource utilization rate of the target server meets a preset index.
In one possible design, the preset index includes: the resource utilization rate of the target server is the maximum among the resource utilization rates of all servers of the system under test that are undergoing the performance test, or the resource utilization rate of the target server is greater than a set threshold.
In one possible design, the processing unit is further configured to: and carrying out stack collection on the tested system on the target server at a plurality of randomly selected time points. And then, comparing and analyzing the stack acquisition results of the multiple time points to generate a performance problem analysis result.
In one possible design, the stack analysis result includes at least one of: deadlock information, thread running state statistics, and high-frequency function call statistics for business logic functions, where a high-frequency function is a running function whose call count meets a set condition.
in a third aspect, an embodiment of the present invention provides a computing device, which includes at least one processing unit and at least one storage unit, where the storage unit stores a computer program, and when the program is executed by the processing unit, the processing unit is caused to execute the system testing method according to any of the above first aspects.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program executable by a computing device, wherein the program, when executed on the computing device, causes the computing device to execute the system testing method according to any of the first aspects.
These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic view of an apparatus architecture according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a system testing method according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a partial stack collection result according to an embodiment of the present invention;
fig. 4 is a schematic flow chart of a system testing method according to an embodiment of the present invention;
FIG. 5 is a block diagram of a system test according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a system test apparatus according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a computing device for system testing according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 exemplarily shows a system architecture to which the system testing method provided by the embodiment of the present invention applies. The architecture may include a testing device 101 and a device under test 102; when the device under test 102 undergoes a system update or a software or hardware improvement, the testing device 101 is required to perform a system test on it. Specifically, the testing device 101 sends a test task to the device under test 102; the device under test 102 receives and processes the test task; and the testing device 101 generates a performance test result for the device under test 102 from parameters such as the response time for processing the test task, the maximum user concurrency, and the maximum TPS. The user concurrency refers to the number of user requests/transactions the system processes at the same time, and TPS refers to the number of transaction requests the system processes per second.
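As a worked illustration of the TPS metric just defined (the numbers here are made up):

```python
# Illustrative TPS calculation: transactions processed over a test
# window divided by the window length in seconds.

def tps(transactions: int, seconds: float) -> float:
    """Transaction requests processed per second."""
    return transactions / seconds

print(tps(12000, 60))  # 200.0 transactions per second
```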
Based on the above description, fig. 2 exemplarily shows a flow of a system testing method provided by the embodiment of the present invention, the method may be executed by the testing apparatus 101, and the testing apparatus 101 may be disposed in at least one server where the system is located. As shown in fig. 2, the method specifically includes the following steps.
Step 201, performing a performance test on at least one server where the system under test is located.
In a possible embodiment, the administrator triggers the start of the performance test on the system under test by operating the testing device 101 on the server where the system is located.
Illustratively, as shown in FIG. 5, the clearing system may be deployed on multiple servers (e.g., server 1, server 2 and server 3 in FIG. 5), with the default that the clearing systems running on the servers are consistent, i.e., server 1, server 2 and server 3 run the same clearing system. The administrator can operate the testing device 101 on the management platform to send a test instruction to server 1, server 2 and server 3, the test instruction instructing them to start the performance test on the clearing system. The management platform may be disposed on server 1, server 2 or server 3.
Step 202, in the performance test process, obtaining the operation parameters of at least one server.
The operating parameters may include at least one of system load per unit time, CPU usage rate, memory usage rate, or network usage rate.
Specifically, in one possible implementation, to ensure accurate data collection, a list of the servers where the system under test is located may be obtained one minute after the performance test starts; then, according to the server list, the operating parameters of each server over the current minute, such as the system load per unit time, CPU usage rate, memory usage rate and network usage rate, are collected.
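The description does not fix how the raw parameters are read; on a Linux server one common source is the procfs files. The sketch below is an assumption to that effect: it parses /proc/loadavg- and /proc/meminfo-style text, and the helper names are illustrative, not from the patent.

```python
# Hypothetical collection helpers assuming Linux-style /proc text.
# Field layouts follow the procfs format; function names are made up.

def read_load_one(loadavg_text: str) -> float:
    """1-minute load average: first field of /proc/loadavg."""
    return float(loadavg_text.split()[0])

def read_mem_usage(meminfo_text: str) -> float:
    """Memory usage rate (0..1) from /proc/meminfo-style content."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        fields[key.strip()] = int(rest.split()[0])  # values in kB
    total = fields["MemTotal"]
    available = fields.get("MemAvailable", fields["MemFree"])
    return (total - available) / total

sample_loadavg = "0.42 0.36 0.30 1/712 12345"
sample_meminfo = ("MemTotal: 8000000 kB\n"
                  "MemFree: 1000000 kB\n"
                  "MemAvailable: 2000000 kB")
print(read_load_one(sample_loadavg))             # 0.42
print(round(read_mem_usage(sample_meminfo), 2))  # 0.75
```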
Step 203, determining a target server from at least one server according to the operation parameters.
Specifically, for any server, after obtaining the operating parameters, the testing device 101 may calculate the server's resource utilization rate from those parameters, and then screen out, from all the servers, a server whose resource utilization rate meets the preset index as the target server. Optionally, the server with the largest resource utilization rate among the at least one server is selected as the target server, or a server whose resource utilization rate is greater than a set threshold is selected; the target server may be one server or several, which the embodiment of the present application does not limit.
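Both screening rules can be sketched directly, assuming the resource utilization rates have already been computed per server (the server names and the 0.70 threshold below are illustrative):

```python
# Sketch of the two preset indexes for target-server screening:
# maximum resource utilization rate, or rate above a set threshold.

def pick_max(usages: dict) -> list:
    """Server(s) whose R_U equals the maximum."""
    top = max(usages.values())
    return [name for name, r_u in usages.items() if r_u == top]

def pick_over_threshold(usages: dict, threshold: float) -> list:
    """Server(s) whose R_U exceeds the threshold."""
    return [name for name, r_u in usages.items() if r_u > threshold]

usages = {"server1": 0.58, "server2": 0.71, "server3": 0.93}
print(pick_max(usages))                   # ['server3']
print(pick_over_threshold(usages, 0.70))  # ['server2', 'server3']
```

As the description allows, the threshold rule can return several target servers, while the maximum rule usually returns one.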
In a possible implementation, for a first server, which is any one of the at least one server, the resource utilization rate of the first server is calculated according to its operating parameters, where the resource utilization rate satisfies the preset resource utilization rate formula:
R_U=L_ONE×K1+CPU_U×K2+M_U×K3+Net_U×K4;
where R_U represents the resource utilization rate of the first server, L_ONE represents the system load per unit time of the first server, K1 represents the weight of the system load per unit time, CPU_U represents the CPU usage rate of the first server, K2 represents the weight of the CPU usage rate, M_U represents the memory usage rate of the first server, K3 represents the weight of the memory usage rate, Net_U represents the network usage rate of the first server, and K4 represents the weight of the network usage rate. The specific weight values can be set according to actual needs.
And 204, performing stack acquisition on the system to be tested on the target server, and performing performance problem analysis according to stack acquisition results.
Specifically, in one possible implementation, stack collection may be performed on the system under test on the target server at a plurality of randomly selected time points, and the stack collection results of the multiple time points are then compared and analyzed to generate the performance problem analysis result. For example, after the target server is determined, a script may be used to perform Java stack collection on the system under test on the target server, the collection command being mainly the jstack command, a tool bundled with the JDK.
Step 205, generating a performance test result and a performance problem analysis result of the system under test.
Specifically, the stack analysis result may include at least one of: deadlock information, thread running state statistics, and high-frequency function call statistics for business logic functions, where a high-frequency function is a running function whose call count meets a set condition. Further, once the stack analysis result is obtained, the performance test result is also available; the stack analysis result is then stored in the database, and when the administrator triggers the display of the performance test result, the stack analysis result can be displayed alongside it, guiding testers to find application performance problems.
In one possible implementation of step 204 above, a script is run to perform Java stack collection on the system under test on the target server. The collection command in the script mainly uses jstack, a tool bundled with the JDK (Java Development Kit), for stack collection. Optionally, the collection logic of the script is as follows:
1: Log in to the target server with the command ssh $ip "ps -ef | grep $name | grep -v grep | awk '{print \$2}'" to obtain the process ID from the service name, where ip is the target server IP and name is the target service name;
2: Log in to the target server with the command ssh $ip "ps -ef | grep $name | grep -v grep | awk '{print \$8}'" to obtain the java directory from which the system under test was started;
3: Perform stack collection of the system under test with the command ssh $ip "$home/bin/jstack -l $pid > /data/${pid}_1.stack", where home is the java directory obtained in step 2 and pid is the process ID obtained in step 1;
4: Copy the collected stack of the system under test to the local machine with the command scp $ip:/data/${pid}_1.stack /data, and then analyze it.
Stack collection of the system under test is thus achieved by the script.
In a possible embodiment, in order to reflect more truly how the system under test runs during the performance test, at least one time point within a second set time after the target server is determined is obtained in a random, discrete manner, and data collection is performed at each such point. Illustratively, 3 time points within 3 minutes after the performance test starts are selected in a randomly discrete manner.
Illustratively, the three discrete collection time points are calculated as follows:
First collection time point: a random number between 20 and 40 is computed with a random function and used as the first collection time point; when that time arrives, stack collection is performed with the collection method above. For example, if the random number is 30, stack collection is performed on the system under test on the target server 30 seconds after the target server is determined.
Second collection time point: a random number between 80 and 100 is computed with a random function and used as the second collection time point. For example, if the random number is 89, stack collection is performed on the system under test 89 seconds after the target server is determined.
Third collection time point: a random number between 140 and 160 is computed with a random function and used as the third collection time point. For example, if the random number is 155, stack collection is performed on the system under test 155 seconds after the target server is determined.
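The three windows above can be sampled with any standard random function; a minimal sketch follows, with the window bounds taken from the example and treated as inclusive at both ends:

```python
# Sketch of picking one discrete collection time point per window,
# matching the example windows 20-40, 80-100 and 140-160 seconds.

import random

def collection_time_points(windows=((20, 40), (80, 100), (140, 160))):
    """One random collection time (seconds) per window, inclusive."""
    return [random.randint(lo, hi) for lo, hi in windows]

print(collection_time_points())  # e.g. [30, 89, 155]
```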
In a possible implementation of step 205, after stack collection the stack collection result is saved and, at the same time, compared and analyzed to generate the stack analysis result, thereby obtaining the performance test result and the performance problem analysis result, which are then saved in the database corresponding to the management platform. The stack collection result includes at least one of the following:
1. thread name, thread ID and thread count;
2. thread running state and lock state;
3. thread function call stack.
in one possible embodiment, the stack collection results are analyzed and compared, and the following information is mainly extracted:
1. Deadlock information: the stack information is searched for the deadlock keyword; if it is present, a thread deadlock exists, and the relevant deadlock state and thread information are stored in the database.
2. Thread running state information: the running state of each thread is analyzed from the stack collection result, and states such as WAITING and running are counted. Illustratively, the threads in the running state are counted; if this count is comparatively small across all stack collection results, it is judged that the thread configuration is unreasonable or that thread contention exists, and the thread states, the counts and the judgment result are stored in the database.
3. High-frequency function call statistics of business logic functions: the thread stacks in the running state are obtained, the function information in each stack is analyzed, and the function name, function call count, etc. are stored as records.
For the same function name appearing in multiple threads, the call count is accumulated. In this way the call count of each function in the thread stacks is obtained, and the function statistics are then sorted and stored in the database corresponding to the management platform for later analysis. Illustratively, the functions whose call counts rank in the TOP10 meet the set condition; in other words, the TOP10 most-called functions are the high-frequency functions. It should be noted that in practical applications the extracted information is not limited to these three kinds.
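A compressed sketch of the three analyses above over jstack-style text. Real jstack output is richer; the sample dump and class names here are made up, and the running state is taken to be RUNNABLE as jstack reports it.

```python
# Simplified stack analysis: deadlock keyword search, thread-state
# counting, and call counting for functions in RUNNABLE thread stacks.

from collections import Counter

def analyze_stack(dump: str):
    deadlock = "deadlock" in dump.lower()
    states = Counter()
    func_calls = Counter()
    current_state = None
    for line in dump.splitlines():
        line = line.strip()
        if "java.lang.Thread.State:" in line:
            current_state = line.split(":", 1)[1].split()[0]
            states[current_state] += 1
        elif line.startswith("at ") and current_state == "RUNNABLE":
            func_calls[line[3:].split("(")[0]] += 1
    return deadlock, states, func_calls.most_common(10)  # TOP10

dump = """\
"worker-1" #12 prio=5
   java.lang.Thread.State: RUNNABLE
        at com.example.Clearing.settle(Clearing.java:42)
        at com.example.Clearing.run(Clearing.java:10)
"worker-2" #13 prio=5
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
"worker-3" #14 prio=5
   java.lang.Thread.State: RUNNABLE
        at com.example.Clearing.settle(Clearing.java:42)
"""
deadlock, states, top = analyze_stack(dump)
print(deadlock)           # False
print(states["WAITING"])  # 1
print(top[0])             # ('com.example.Clearing.settle', 2)
```

The same function name counted across the two RUNNABLE threads accumulates to 2, matching the cross-thread accumulation rule described above.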
In a possible embodiment, a performance test is performed on the system under test and stack collection is carried out according to the above method, obtaining a partial stack collection result. As shown in fig. 3, the stack collection result includes the two threads shown there; specifically, both threads are in the WAITING state and contain no deadlock keyword, that is, neither thread is in a deadlock condition.
To describe the above system testing method more systematically, an embodiment of the present invention further provides, by way of example, a system testing method flow, as shown in fig. 4, including:
step 401, the testing apparatus 101 starts a performance test for at least one server where the system under test is located.
In a possible embodiment, the administrator starts the performance test on each system under test through the management platform, where each system under test corresponds to at least one server.
In step 402, the testing device 101 selects, from the servers where the system under test is located, the target server with the highest resource occupation.
In a possible embodiment, after a first set time from the start of the performance test, the testing device 101 collects the operating parameters of each server, processes them, and determines the server whose operating parameters meet the preset index as the target server. As shown in fig. 5, after the testing device 101 obtains operating parameters such as the system load, CPU usage rate, memory usage rate and network usage rate of the clearing systems on server 1, server 2 and server 3, it evaluates the resource utilization formula; for example, if server 3 is calculated to have the highest R_U, server 3 is the target server.
In step 403, the test device 101 connects to the target server.
In one possible embodiment, the test device 101 logs in to the target server according to the IP of the target server. As shown in fig. 5, the test apparatus 101 logs in to the server 3.
In step 404, the testing device 101 performs stack collection on the system under test on the target server.
In one possible embodiment, after logging in to the target server, the script is run to perform stack collection on the system under test on the target server. Specifically, after entering the directory of the system under test with a script command, stack collection is performed on the system under test with the corresponding command. As shown in fig. 5, after logging in to server 3, the script is run for the clearing system on server 3 and stack collection is performed using the method in step 204.
In a possible embodiment, in order to reflect more truly how the system under test runs during the performance test, the script obtains, in a random discrete manner, at least one time point within the second set time after the target server is determined, and performs data collection, specifically as described in step 204 above.
In step 405, the testing device 101 saves the stack collection result of the system under test on the target server.
In a possible embodiment, after the test device 101 completes stack collection on the system under test on the target server in step 404, the stack collection result is saved in the database corresponding to the management platform.
As shown in fig. 5, after the testing device 101 performs stack collection on the clearing system on server 3, the stack collection result is stored in the server corresponding to the management platform.
In step 406, the test device 101 performs a comparative analysis on the stack collection results.
In one possible embodiment, the test apparatus 101 performs a comparative analysis on the saved stack acquisition results to generate a stack analysis result, i.e., a performance problem analysis result.
Step 407, displaying the stack analysis result.
In one possible embodiment, the stack analysis results and the performance test results are presented simultaneously by the management platform. Optionally, the management platform outputs the stack analysis result and the performance test result in a text or voice manner.
As shown in fig. 5, the management platform displays to the administrator, via display output, the stack analysis result and the performance test result obtained by the testing device 101 for the clearing system on server 3.
Based on the same technical concept, an embodiment of the invention further provides a performance testing apparatus, which can carry out the foregoing method embodiments. As shown in fig. 6, the apparatus provided by the embodiment of the present invention includes:
the processing unit 601 is configured to perform a performance test on at least one server where the system under test is located;
an acquisition unit 602, configured to acquire an operating parameter of the at least one server in a performance test process;
the processing unit 601 is further configured to determine a target server from the at least one server according to the operation parameter; performing stack collection on the system to be tested on the target server, and performing performance problem analysis according to the stack collection result; and generating a performance test result and a performance problem analysis result of the tested system.
In one possible design, the processing unit 601 is specifically configured to: for any one of the servers, calculate the resource utilization rate of that server according to its operating parameters; and screen out, from the at least one server, the target server whose resource utilization rate meets a preset index.
In one possible design, the operating parameters include at least one of system load, CPU usage, memory usage, or network usage per unit time;
wherein, the processing unit 601 is further configured to: calculating the resource utilization rate of the server through a preset resource utilization rate formula according to the operation parameters of the server;
the preset resource utilization rate formula is as follows:
R_U=L_ONE×K1+CPU_U×K2+M_U×K3+Net_U×K4
wherein R_U represents the resource utilization rate of a server, L_ONE represents the system load of the server per unit time, K1 represents the weight of the system load per unit time, CPU_U represents the CPU utilization rate of the server, K2 represents the weight of the CPU utilization rate, M_U represents the memory utilization rate of the server, K3 represents the weight of the memory utilization rate, Net_U represents the network utilization rate of the server, and K4 represents the weight of the network utilization rate;
further, the processing unit 601 is specifically configured to: and determining a target server from the at least one server according to the resource utilization rate of the server, wherein the resource utilization rate of the target server meets a preset index.
In one possible design, the target server's resource utilization rate meeting the preset index includes: the resource utilization rate of the target server is the maximum among the resource utilization rates of all servers on which the system under test runs during the performance test, or the resource utilization rate of the target server is greater than a set threshold value.
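A minimal Python sketch of this design follows, combining the preset resource utilization formula R_U = L_ONE×K1 + CPU_U×K2 + M_U×K3 + Net_U×K4 with the two screening criteria (maximum R_U, or R_U above a set threshold); the equal weights of 0.25 and all function names are illustrative assumptions, since the patent does not fix the weights.

```python
def resource_usage(load_one, cpu_u, mem_u, net_u,
                   k1=0.25, k2=0.25, k3=0.25, k4=0.25):
    # R_U = L_ONE*K1 + CPU_U*K2 + M_U*K3 + Net_U*K4
    return load_one * k1 + cpu_u * k2 + mem_u * k3 + net_u * k4

def pick_target_servers(servers, threshold=None):
    """servers maps a server name to its (load_one, cpu_u, mem_u, net_u)
    operating parameters.  With a threshold, return every server whose
    R_U exceeds it; otherwise return the single server with maximum R_U."""
    usages = {name: resource_usage(*params) for name, params in servers.items()}
    if threshold is not None:
        return [name for name, u in usages.items() if u > threshold]
    return [max(usages, key=usages.get)]
```

Either criterion singles out the most heavily loaded server(s), which is where stack collection is then performed.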
In one possible design, the processing unit 601 is further configured to: perform stack collection on the system under test on the target server at a plurality of randomly selected time points, and then compare and analyze the stack collection results of the plurality of time points to generate a performance problem analysis result.
In one possible design, the stack analysis results include at least one of: deadlock information, thread running state statistics, and high-frequency function call statistics for business logic functions, where a high-frequency function is a function whose number of calls meets a set condition.
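As a hedged illustration of how such stack analysis results might be derived from the collected samples, the Python sketch below tallies thread running states and flags high-frequency functions, taking "appears in more than half of the samples" as one possible set condition; the data layout, names, and threshold are all assumptions.

```python
from collections import Counter

def summarize_stack_samples(samples):
    """samples is a list of (thread_state, stack_trace) pairs collected
    across all sampling points; stack_trace is a tuple of function names.
    Returns thread-state counts plus the high-frequency functions, using
    'present in more than half of the samples' as the set condition."""
    state_counts = Counter(state for state, _ in samples)
    # Count each function at most once per sample via set(trace).
    func_counts = Counter(f for _, trace in samples for f in set(trace))
    high_freq = [f for f, c in func_counts.items() if c > len(samples) / 2]
    return state_counts, sorted(high_freq)
```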
based on the same technical concept, the embodiment of the present invention provides a computing device, as shown in fig. 7, including at least one processor 701 and a memory 702 connected to the at least one processor, where a specific connection medium between the processor 701 and the memory 702 is not limited in the embodiment of the present invention, and the processor 701 and the memory 702 are connected through a bus in fig. 7 as an example. The bus may be divided into an address bus, a data bus, a control bus, etc.
In the embodiment of the present invention, the memory 702 stores instructions executable by the at least one processor 701, and the at least one processor 701 may execute the steps included in the aforementioned system testing method by executing the instructions stored in the memory 702.
The processor 701 is a control center of the terminal device, and may connect various parts of the terminal device by using various interfaces and lines, and process data by running or executing the instructions stored in the memory 702 and calling the data stored in the memory 702. Optionally, the processor 701 may include one or more processing units, and the processor 701 may integrate an application processor and a modem processor, wherein the application processor mainly handles an operating system, a user interface, an application program, and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 701. In some embodiments, the processor 701 and the memory 702 may be implemented on the same chip, or, in some embodiments, they may be implemented separately on separate chips.
The processor 701 may be a general-purpose processor, such as a Central Processing Unit (CPU), a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof, configured to implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in the processor.
Memory 702, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 702 may include at least one type of storage medium, for example, a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read Only Memory (PROM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disk, and so on. The memory 702 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 702 of embodiments of the present invention may also be circuitry or any other device capable of performing a storage function to store program instructions and/or data.
Based on the same technical concept, embodiments of the present invention provide a computer-readable medium storing a computer program executable by a terminal device, the program causing the terminal device to perform the steps of the system testing method when the program runs on the terminal device.
It should be apparent to those skilled in the art that embodiments of the present invention may be provided as a method or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (14)

1. A method for system testing, comprising:
performing performance test on at least one server where a tested system is located;
in the performance test process, obtaining the operation parameters of the at least one server;
determining a target server from the at least one server according to the operating parameters;
performing stack collection on the system to be tested on the target server, and performing performance problem analysis according to the stack collection result;
and generating a performance test result and a performance problem analysis result of the tested system.
2. The method of claim 1, wherein determining a target server from the at least one server based on the operating parameter comprises:
for any one of the servers, calculating the resource utilization rate of the server according to the operating parameters of the server;
and screening out the target server of which the resource utilization rate meets a preset index from the at least one server.
3. The method of claim 2, wherein the operational parameters include at least one of system load, CPU usage, memory usage, or network usage per unit time;
the calculating the resource utilization rate of the server according to the operating parameters of the server comprises:
calculating the resource utilization rate of the server through a preset resource utilization rate formula according to the operation parameters of the server;
the preset resource utilization rate formula is as follows:
R_U=L_ONE×K1+CPU_U×K2+M_U×K3+Net_U×K4;
wherein R_U represents the resource utilization rate of the server, L_ONE represents the system load of the server per unit time, K1 represents the weight of the system load per unit time, CPU_U represents the CPU utilization rate of the server, K2 represents the weight of the CPU utilization rate, M_U represents the memory utilization rate of the server, K3 represents the weight of the memory utilization rate, Net_U represents the network utilization rate of the server, and K4 represents the weight of the network utilization rate.
4. The method of claim 2, wherein the predetermined criteria comprises:
the resource utilization rate of the target server is the maximum among the resource utilization rates of all servers on which the system under test runs during the performance test, or the resource utilization rate of the target server is greater than a set threshold value.
5. The method of claim 1, wherein performing stack collection on the system under test on the target server and performing performance problem analysis according to the stack collection result comprises:
performing stack collection on the system under test on the target server at a plurality of randomly selected time points;
and comparing and analyzing the stack acquisition results of the plurality of time points to generate a performance problem analysis result.
6. The method of claim 5, wherein the stack collection result comprises: at least one of deadlock information, a thread running state statistical result, and a high-frequency function call statistical result of a business logic function, wherein the high-frequency function is a function whose number of calls meets a set condition.
7. A system test apparatus, the apparatus comprising: the device comprises a collecting unit and a processing unit;
the processing unit is used for carrying out performance test on at least one server where the tested system is located;
the acquisition unit is used for acquiring the operating parameters of the at least one server in the performance test process;
the processing unit is further configured to determine a target server from the at least one server according to the operation parameter; performing stack collection on the system to be tested on the target server, and performing performance problem analysis according to the stack collection result; and generating a performance test result and a performance problem analysis result of the tested system.
8. The apparatus according to claim 7, wherein the processing unit is specifically configured to:
for any one of the servers, calculating the resource utilization rate of the server according to the operating parameters of the server;
and screening out the target server of which the resource utilization rate meets a preset index from the at least one server.
9. The apparatus of claim 8, wherein the operational parameters include at least one of system load, CPU usage, memory usage, or network usage per unit time;
the processing unit is further to: calculating the resource utilization rate of the server through a preset resource utilization rate formula according to the operation parameters of the server;
the preset resource utilization rate formula is as follows:
R_U=L_ONE×K1+CPU_U×K2+M_U×K3+Net_U×K4;
wherein R_U represents the resource utilization rate of the server, L_ONE represents the system load of the server per unit time, K1 represents the weight of the system load per unit time, CPU_U represents the CPU utilization rate of the server, K2 represents the weight of the CPU utilization rate, M_U represents the memory utilization rate of the server, K3 represents the weight of the memory utilization rate, Net_U represents the network utilization rate of the server, and K4 represents the weight of the network utilization rate.
10. The apparatus of claim 8, wherein the predetermined criteria comprises:
the resource utilization rate of the target server is the maximum among the resource utilization rates of all servers on which the system under test runs during the performance test, or the resource utilization rate of the target server is greater than a set threshold value.
11. The apparatus of claim 7, wherein the processing unit is further configured to:
performing stack collection on the system under test on the target server at a plurality of randomly selected time points;
and comparing and analyzing the stack acquisition results of the plurality of time points to generate a performance problem analysis result.
12. The apparatus of claim 11, wherein the stack acquisition result comprises: at least one of deadlock information, a thread running state statistical result and a high-frequency function calling statistical result of a service logic function, wherein the high-frequency function is a running function of which the function calling times meet set conditions.
13. A computing device comprising at least one processing unit and at least one memory unit, wherein the memory unit stores a computer program that, when executed by the processing unit, causes the processing unit to perform the method of any of claims 1-6.
14. A computer-readable storage medium, storing a computer program executable by a computing device, the program, when executed on the computing device, causing the computing device to perform the method of any of claims 1-6.
CN201911244071.5A 2019-12-06 2019-12-06 System testing method and device Pending CN111124791A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911244071.5A CN111124791A (en) 2019-12-06 2019-12-06 System testing method and device


Publications (1)

Publication Number Publication Date
CN111124791A true CN111124791A (en) 2020-05-08

Family

ID=70497730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911244071.5A Pending CN111124791A (en) 2019-12-06 2019-12-06 System testing method and device

Country Status (1)

Country Link
CN (1) CN111124791A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112131086A (en) * 2020-09-18 2020-12-25 浪潮电子信息产业股份有限公司 Performance tuning method, device and equipment of application server
CN113608982A (en) * 2021-07-27 2021-11-05 远景智能国际私人投资有限公司 Function execution performance monitoring method and device, computer equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination