CN114490375B - Performance test method, device, equipment and storage medium of application program


Info

Publication number
CN114490375B
Authority
CN
China
Prior art keywords
data
performance test
test data
application program
running
Prior art date
Legal status
Active
Application number
CN202210079863.7A
Other languages
Chinese (zh)
Other versions
CN114490375A (en)
Inventor
周原
王亚昌
张嘉明
邱宏健
陈洁昌
宋博文
宋天琪
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN202210079863.7A
Publication of CN114490375A
Application granted
Publication of CN114490375B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3604 Software analysis for verifying properties of programs
    • G06F11/3612 Software analysis for verifying properties of programs by runtime analysis
    • G06F11/30 Monitoring
    • G06F11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/302 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a software system
    • G06F11/3089 Monitoring arrangements determined by the means or processing involved in sensing the monitored data, e.g. interfaces, connectors, sensors, probes, agents
    • G06F11/3093 Configuration details thereof, e.g. installation, enabling, spatial arrangement of the probes

Abstract

The application discloses a performance test method, apparatus, device and storage medium for an application program, relating to the technical field of program testing and used to improve the efficiency of locating anomalies in an application program. The method comprises the following steps: acquiring, for a plurality of running objects, first performance test data triggered when the target-version application program runs and second performance test data respectively triggered when the application programs of N historical versions run, where N is greater than or equal to 1; determining, based on the first performance test data and the N pieces of second performance test data corresponding to each running object, the running change data of the corresponding running object; determining an abnormal running object from the plurality of running objects based on the obtained running change data; and generating a performance test result corresponding to the target-version application program based on the first performance test data of the abnormal running object.

Description

Performance test method, device, equipment and storage medium of application program
Technical Field
The present invention relates to the field of computer technologies, and in particular, to the field of program testing technologies, and provides a method, an apparatus, a device, and a storage medium for testing performance of an application program.
Background
In the development of an application program, evaluating the application to find and correct the errors (BUGs) it contains is an indispensable part of the development process.
Taking a game application as an example, the Unreal Engine (UE) is a game engine commonly used for game application development. When the UE engine is used for game development, a game application usually contains a huge amount of code; for example, the code of the engine layer, the business logic layer, the blueprints and so on typically exceeds 100,000 lines, so the workload of evaluating a game application is enormous.
Code review is a systematic way of examining code, usually carried out as software peer review. In the early development stage of a game application, however, the BUGs and performance problems in the code are large in scale, so the code review approach is extremely inefficient and impractical. Other types of applications besides game applications have the same problem.
Disclosure of Invention
The embodiments of the present application provide a performance test method, apparatus, device and storage medium for an application program, which are used to improve the efficiency of locating anomalies in the application program.
In one aspect, a method for testing performance of an application program is provided, the method comprising:
acquiring first performance test data triggered by a plurality of operation objects when the application program of the target version operates, and acquiring second performance test data respectively triggered by the plurality of operation objects when the application programs of the N historical versions operate; wherein N is more than or equal to 1;
determining operation change data of the corresponding operation objects based on the first performance test data and the N second performance test data corresponding to the operation objects respectively;
determining an abnormal operation object from the plurality of operation objects based on the obtained operation change data;
and generating a performance test result corresponding to the application program of the target version based on the first performance test data of the abnormal operation object.
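For illustration only (not part of the claimed subject matter), the four steps above can be organized roughly as in the following minimal C++ sketch, which assumes that each running object is reduced to one numeric metric per version; the names and data model are illustrative assumptions.

```cpp
#include <map>
#include <string>
#include <vector>

struct PerfResult { std::vector<std::string> abnormalObjects; };

// metrics[object] = { N historical-version values ..., target-version value }
PerfResult TestTargetVersion(const std::map<std::string, std::vector<double>>& metrics,
                             double threshold) {
    PerfResult result;
    for (const auto& [object, series] : metrics) {
        if (series.size() < 2) continue;                  // need at least one historical version
        double change = series.back() - series.front();  // simplistic running-change measure
        if (change > threshold) result.abnormalObjects.push_back(object);  // mark abnormal object
    }
    return result;  // the abnormal objects' target-version data would form the test report
}
```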
Optionally, in response to the running operation performed on the application program, running the application program of the target version based on the running engine includes:
determining the number of application programs required to be operated in an operation scene based on the operation scene set by operation, and simulating a plurality of use objects in the operation scene based on the determined number to operate the corresponding number of application programs of the target version;
Determining operation change data of the corresponding operation object based on the first performance test data and the N second performance test data corresponding to the operation objects respectively, including:
for each operation scene, the following operations are respectively executed:
for one operation scene, determining operation change data corresponding to each operation object in the one operation scene based on first performance test data and N second performance test data corresponding to each operation object in the one operation scene;
and determining abnormal operation objects in the operation scene based on operation change data corresponding to each operation object in the operation scene.
In one aspect, there is provided an apparatus for testing performance of an application, the apparatus comprising:
the data collection unit is used for acquiring first performance test data triggered by a plurality of operation objects when the application program of the target version operates, and acquiring second performance test data respectively triggered by the plurality of operation objects when the application programs of the N historical versions operate; wherein N is more than or equal to 1;
the data analysis unit is used for respectively determining the operation change data of the corresponding operation objects based on the first performance test data and the N second performance test data corresponding to the operation objects;
The abnormal positioning unit is used for determining an abnormal operation object from the plurality of operation objects based on the obtained operation change data;
and the result generating unit is used for generating a performance test result corresponding to the application program of the target version based on the first performance test data of the abnormal operation object.
Optionally,
the data analysis unit is specifically configured to perform, for the plurality of running objects, the following operations respectively: for one operation object, performing linear fitting processing based on first performance test data and N second performance test data corresponding to the one operation object to obtain a slope of a straight line obtained by fitting, wherein the slope is used for representing the change rate of the one operation object;
the result generating unit is specifically configured to determine, for the one operation object, that the one operation object is an abnormal operation object if the slope is greater than a set slope threshold.
Optionally, the data collection unit is specifically configured to:
for the plurality of running objects, the following operations are respectively executed:
aiming at one operation object, acquiring first basic operation data triggered by the operation object when an application program of a target version runs for the Mth time;
Respectively acquiring second basic operation data triggered by the one operation object in the previous (M-1) operation of the application program of the target version from the stored basic operation data;
and merging the obtained first basic operation data and (M-1) second basic operation data to obtain first performance test data corresponding to the operation object.
Optionally, the data collection unit is specifically configured to:
and carrying out average value solving processing on the first basic operation data and the (M-1) second basic operation data to obtain first performance test data corresponding to the operation object.
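A minimal sketch of this averaging step, assuming each of the M runs yields one numeric sample per running object (function and variable names are illustrative):

```cpp
#include <numeric>
#include <vector>

// Average the samples from the M runs of the target version to obtain the
// first performance test data for one running object.
double MergeRunsByMean(const std::vector<double>& runSamples) {
    if (runSamples.empty()) return 0.0;
    return std::accumulate(runSamples.begin(), runSamples.end(), 0.0) /
           static_cast<double>(runSamples.size());
}
```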
Optionally, the device further comprises an access operation unit;
the access operation unit is used for responding to the triggering operation of the operation engine corresponding to the application program and integrating a data acquisition plug-in a plug-in package of the operation engine; the data acquisition plug-in comprises a hook function corresponding to each of the plurality of operation objects;
the data collection unit is specifically configured to respond to a running operation performed on the application program, and run the application program of the target version based on the running engine; and triggering the corresponding hook function to collect the first performance test data of the corresponding operation object based on the operation of each operation object.
Optionally, the data acquisition plug-in further comprises a logic segment start function and a logic segment end function;
the access operation unit is further configured to insert the logic segment start function at a start position of each operation stage in the application program and insert the logic segment end function at an end position in response to a trigger operation for performing operation stage division on the application program, so as to divide the application program into a plurality of operation stages;
the data collection unit is specifically configured to trigger to call a corresponding logic segment start function when running to a start position of one of the running phases, and start to collect first performance test data of each running object in the one running phase; and triggering and calling a corresponding logic segment ending function when the operation is carried out to the ending position of the operation stage, and ending the acquisition of the first performance test data of each operation object in the operation stage.
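A minimal sketch of how paired logic-segment functions might look; the function names, the global segment table and the timing metric are assumptions for illustration, not the plug-in's actual interface:

```cpp
#include <chrono>
#include <map>
#include <string>

struct SegmentRecord {
    std::chrono::steady_clock::time_point begin;
    std::map<std::string, double> metrics;  // per-object data collected in this running stage
};

static std::map<std::string, SegmentRecord> g_segments;

void LogicSegmentBegin(const std::string& stageId) {
    g_segments[stageId].begin = std::chrono::steady_clock::now();
    // From here on, collected data are attributed to the running stage stageId.
}

void LogicSegmentEnd(const std::string& stageId) {
    auto& rec = g_segments[stageId];
    auto elapsed = std::chrono::steady_clock::now() - rec.begin;
    rec.metrics["stage_duration_ms"] =
        std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count();
    // The data collected for this running stage would typically be reported here.
}
```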
Optionally, the data analysis unit is specifically configured to:
for each operation stage, the following operations are respectively executed:
determining operation change data corresponding to each operation object in one operation stage based on the first performance test data and N second performance test data corresponding to each operation object in the one operation stage;
The abnormality locating unit is specifically configured to:
and determining abnormal operation objects in the operation stage based on the operation change data corresponding to each operation object in the operation stage.
Optionally,
the data collection unit is specifically configured to: determining the number of application programs required to be operated in an operation scene based on the operation scene set by operation, and simulating a plurality of use objects in the operation scene based on the determined number to operate the corresponding number of application programs of the target version;
the data analysis unit is specifically configured to: for each operation scene, the following operations are respectively executed: determining operation change data corresponding to each operation object in one operation scene based on first performance test data and N second performance test data corresponding to each operation object in the one operation scene;
the abnormality locating unit is specifically configured to: and determining abnormal operation objects in the operation scene based on the operation change data corresponding to each operation object in the operation scene.
Optionally, the device further includes a data warehouse unit, configured to:
acquiring an operation mode adopted when triggering the first performance test data, an operation frequency serial number when triggering the first performance test data and a phase identifier corresponding to the one operation phase;
generating a storage identifier corresponding to the one operation stage based on the target version, the operation mode, the operation frequency serial number and the stage identifier;
and storing the first performance test data corresponding to the operation stage into a database based on the generated storage identification.
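A minimal sketch of composing such a storage identifier; the field order and separator are assumptions, and only the four components (version, run mode, run serial number, stage identifier) come from the description above:

```cpp
#include <string>

std::string MakeStorageKey(const std::string& version, const std::string& runMode,
                           int runSerial, const std::string& stageId) {
    // Concatenate the four components into one key used when writing to the database.
    return version + ":" + runMode + ":" + std::to_string(runSerial) + ":" + stageId;
}
// e.g. MakeStorageKey("v1.4.2", "auto_build", 3, "player_loading") -> "v1.4.2:auto_build:3:player_loading"
```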
Optionally, the apparatus further comprises a parameter determining unit for determining the values of N and M by:
aiming at a plurality of set candidate test schemes, acquiring performance test data triggered by each running object under different versions when each candidate test scheme is adopted; the number of versions or the running times corresponding to any two candidate test schemes are different, and the application programs of different versions are obtained by carrying out pseudo-random modification on the application programs of specified versions;
determining a target test scheme from the plurality of candidate test schemes based on the operation change condition and the real change condition corresponding to each operation object when each candidate test scheme is adopted;
And setting the value of N based on the version number corresponding to the target test scheme, and setting the value of M based on the corresponding running time.
Optionally, the parameter determining unit is specifically configured to:
for the plurality of candidate test schemes, the following operations are respectively executed:
aiming at a candidate test scheme, acquiring basic operation data triggered by each operation object under different versions when the candidate test scheme is adopted;
based on the operation times corresponding to the candidate test scheme, respectively carrying out pseudo-random noise processing on the basic operation data triggered by each operation object under each version;
and acquiring performance test data triggered by each operation object under each version based on the acquired multiple basic operation data triggered by each operation object under each version.
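A minimal sketch of simulating the repeated runs of one version by adding pseudo-random noise to the base data when calibrating N and M; the noise range is an assumption used only for illustration:

```cpp
#include <random>
#include <vector>

// Generate runCount noisy samples around one base value using a deterministic
// pseudo-random source, imitating run-to-run system noise.
std::vector<double> SimulateRuns(double baseValue, int runCount, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> noise(-0.05, 0.05);  // assumed +/-5% jitter
    std::vector<double> runs;
    for (int i = 0; i < runCount; ++i)
        runs.push_back(baseValue * (1.0 + noise(rng)));
    return runs;
}
```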
Optionally, the device further comprises a data display unit, configured to:
responding to a test result display operation aiming at the target version, and presenting a test result display interface corresponding to the target version; the test result display interface comprises test results of the application program in different operation stages;
Responding to a trigger operation for displaying the test result of a target operation stage in the test result display interface, and displaying first performance test data corresponding to each operation object in the target operation stage;
responding to a triggering operation of first performance test data of a target operation object in the operation objects, and displaying a data display interface of the target operation object; the data display interface comprises operation change data corresponding to the target operation object.
In one aspect, a computer device is provided comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of any of the methods described above when the computer program is executed.
In one aspect, there is provided a computer storage medium having stored thereon computer program instructions which, when executed by a processor, perform the steps of any of the methods described above.
In one aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions are read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, cause the computer device to perform the steps of any of the methods described above.
In the embodiments of the present application, an abnormal running object is located by acquiring, for a plurality of running objects, the first performance test data triggered when the target-version application program runs and the second performance test data respectively triggered when the N historical versions of the application program run, and by obtaining the running change data of each running object across the different versions. In this way, fine-grained, running-object-level performance monitoring is carried out over multiple versions and a long period, so that changes of running objects caused by logic code changes between versions can be located; the abnormal running object can then be located quickly from the running change data, which greatly improves the efficiency of anomaly location, helps developers effectively find the BUGs arising during the development of the application program and correct them in time, and thereby improves development efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the related art, the drawings that are required to be used in the embodiments or the related technical descriptions will be briefly described below, and it is apparent that the drawings in the following description are only embodiments of the present application, and other drawings may be obtained according to the provided drawings without inventive effort for a person having ordinary skill in the art.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an iterative test of a game application provided in an embodiment of the present application;
FIG. 3 is a schematic architecture diagram of a performance testing system for an application according to an embodiment of the present disclosure;
fig. 4 is a schematic flow chart of data collection performed by the data collection module according to the embodiment of the present application;
fig. 5a and fig. 5b are schematic diagrams of a data collection flow implementation scenario provided in an embodiment of the present application;
FIG. 6 is a flowchart illustrating a method for testing performance of an application according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a straight line after a linear fitting process according to an embodiment of the present application;
FIG. 8 is a flowchart of obtaining first performance test data according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of the results of a data noise filtering experiment provided in an embodiment of the present application;
FIG. 10 is a schematic diagram of the results of an anomaly display experiment provided in an embodiment of the present application;
FIG. 11 is a schematic flow chart of determining values of M and N based on experimental results provided in the embodiments of the present application;
FIG. 12 is a flow chart of a performance testing method based on logic segment partitioning according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a logic segment partitioning function according to an embodiment of the present disclosure;
Fig. 14 is a schematic operation flow diagram of data presentation according to an embodiment of the present application;
fig. 15a to 15d are schematic views of a data display interface provided in an embodiment of the present application;
FIGS. 16 a-16 h are schematic diagrams illustrating performance comparisons provided in embodiments of the present application;
FIG. 17 is a schematic structural diagram of an apparatus for testing performance of an application according to an embodiment of the present disclosure;
fig. 18 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure. Embodiments and features of embodiments in this application may be combined with each other arbitrarily without conflict. Also, while a logical order is depicted in the flowchart, in some cases, the steps depicted or described may be performed in a different order than presented herein.
In order to facilitate understanding of the technical solutions provided in the embodiments of the present application, some key terms used in the embodiments of the present application are explained here:
application program: the application program of the embodiment of the application program can be any application program, for example, can be a game application program
(1) Running object: each running object is one monitoring dimension. In the embodiments of the present application, every running object is a fine-grained monitoring dimension; here, fine granularity is relative to coarse-grained dimensions in the related art, such as the central processing unit (central processing unit, CPU) consumption and memory consumption of the overall application, and provides monitoring dimensions of program running details such as the class level and the function level, where one object at each such level may be one running object.
For example, for class level, a class may be a running object, which is a monitoring dimension, and in general, there may be tens of thousands of classes in an application, that is, tens of thousands of running objects, which constitute tens of thousands of monitoring dimensions for performing performance tests on the application; alternatively, for a function level, a function may be a running object, which also serves as a monitoring dimension, and in general, there may be hundreds of thousands of functions in an application, that is, hundreds of thousands of running objects, which constitute hundreds of thousands of monitoring dimensions for performing performance tests on the application. Thus, the performance of an application is monitored by the running object, which is very fine-grained.
In this embodiment of the present application, the running object may be any combination of program running details, for example, may be a class in an application program, or may be a function in an application program, or may be a combination of a class and a function. Of course, other program running details may be included, such as content of object creation, time consumption of functions, network performance, etc., and the running object may be set according to actual requirements in actual application, which is not limited in the embodiment of the present application.
(2) Performance test data: for each running object, the performance test data may be data characterizing the performance of that running object, for example the number of calls, the number of creations, the running duration and the resource occupation of one running object. For memory objects, for instance, the performance test data may be the number of object types, the number of created objects, and so on. It should be noted that, since most application programs may use a client-server (Client-Server, CS) architecture, the performance test data collected in the embodiments of the present application may be data collected from the client, data collected from the background server, or a combination of the two, which is not limited in the embodiments of the present application.
(3) Running change data: the running change data characterize the performance change of the same running object between different versions, for example delta data between versions, such as the difference in the number of created objects between the previous version and the current version, or the performance change rate obtained by linear fitting of the performance data of the same running object across multiple versions; other characterizations may of course be used, and this is not limited.
(4) Running stage: an application program may include multiple running stages, each of which is a logical segment that can be used to perform a function or present an effect. For example, a game application can be staged according to the progress of the game, such as an initialization stage, a match creation stage, a player loading stage and an in-game stage, while an instant messaging application can be staged according to page scenarios, such as a loading stage, a chat page stage and a friend-updates sharing page stage. Of course, in practical applications, custom segmentation can be performed according to the actual running logic of the application program; performing the performance test by segment allows data within the same running stage to be strictly compared, providing more accurate data comparison.
The following briefly describes the design concept of the embodiment of the present application.
At present, test evaluation of an application program is an indispensable link in the development process, but the mode of code review is extremely low in efficiency and obviously inapplicable, so that the method can assist in finding BUG in program codes through a performance monitoring tool in running.
Taking a game application as an example, when the game application is developed with the UE engine, performance monitoring can also be carried out with performance monitoring tools such as LLM (Low-Level Memory Tracker, a memory statistics tool provided officially with the UE engine), Stat (a data statistics system provided by the UE engine that can collect and display performance data) and Unreal Insights (a performance analysis tool provided officially for the UE). Such tools typically perform data statistics and monitoring only for the current version; because of the numerous monitoring objects involved in a game application, it is difficult to find anomalies from a single version, and only obvious anomalies can be located by relying on the developer's experience. Secondly, performance monitoring of a single version has no retrospective capability: the previous running condition of each monitoring dimension cannot be known, and if a history is needed, data storage and maintenance are required, which is very difficult and is not conducive to sharing monitoring data.
The core objective of a performance monitoring tool is to locate BUGs accurately and to simplify the BUG locating process. Therefore, to improve the accuracy of BUG locating, the monitoring dimensions need to be set at the fine-grained level of the application program, such as the class level and the function level. In addition, in a fine-grained monitoring scenario there tend to be a large number of monitoring dimensions, and without a reference for comparison it is difficult to locate a BUG accurately. This shortcoming can be made up for through long-period performance comparison: once a performance baseline is available, the tool can clearly find the BUGs appearing in the current version by comparison against the baseline.
In view of this, an embodiment of the present application provides a performance test method for an application program, in which an abnormal running object is located by acquiring, for a plurality of running objects, the first performance test data triggered when the target-version application program runs and the second performance test data respectively triggered when the N historical versions of the application program run, and by obtaining the running change data of each running object across the different versions. Fine-grained, running-object-level performance monitoring is thus carried out over multiple versions and a long period, so that the changes of running objects caused by logic code changes between versions can be located; the abnormal running object can then be located quickly from the running change data, which greatly improves the efficiency of anomaly location, assists developers in effectively finding the BUGs arising during development and correcting them in time, and thereby improves the development efficiency of the application program.
In the embodiments of the present application, in order to find the abnormal monitoring dimensions among the massive number of monitoring dimensions, the difference data of the same running object between two versions can be compared rapidly in a dual-version comparison mode, so as to quickly locate the abnormal running object.
In addition, it is considered that each run of the application program is affected by system noise, such as the order of system scheduling, random factors inside the program and the overall system load, so the performance test data of every run are influenced, and the fluctuation produced by this data noise can distort the comparison. For this reason, a linear fitting scheme over multiple versions is adopted; it also makes it convenient to filter out the monitoring dimensions without anomalies, so that the abnormal dimensions can be determined within a small range, which greatly improves the efficiency of locating the abnormal dimensions.
The following description is made for some simple descriptions of application scenarios applicable to the technical solutions of the embodiments of the present application, and it should be noted that the application scenarios described below are only used for illustrating the embodiments of the present application and are not limiting. In the specific implementation process, the technical scheme provided by the embodiment of the application can be flexibly applied according to actual needs.
The scheme provided by the embodiment of the application can be suitable for performance test scenes of most application programs, such as performance test scenes of game application programs. As shown in fig. 1, an application scenario is schematically provided in an embodiment of the present application, where the scenario may include a test end device 101 and a server 102.
The test-end device 101 may be, for example, a mobile phone, a tablet personal computer (PAD), a notebook computer, a desktop computer, a smart television, an intelligent vehicle-mounted device, an intelligent wearable device, or other devices capable of running the application program under test. The test end device 101 may be provided with an application program to be tested, and the test end device 101 is provided with an operation environment required by the operation of the application program to be tested, for example, for a game application program developed by a UE engine, the UE engine environment needs to be deployed for the game application program, and a data acquisition plug-in of the embodiment of the present application is integrated in the UE engine, so as to acquire performance test data triggered in the operation process of the game application program.
The server 102 may be a background server corresponding to the data collection plug-in, which may implement functions such as storing and analyzing performance test data. For example, it may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data and artificial intelligence platforms, but the present application is not limited thereto.
In the embodiment of the present application, each of the test end device 101 and the server 102 may include one or more processors, a memory, and an I/O interface that interacts with a terminal, etc. The memory of the test end device 101 may store program instructions related to data collection in the performance test method of the application program provided in the embodiment of the present application, where the program instructions, when executed by the processor of the test end device 101, can be used to implement a process of collecting performance test data when the application program runs. Program instructions related to data storage and analysis in the performance test method of the application program provided in the embodiment of the present application may be stored in a memory of the server 102, where the program instructions when executed by a processor of the server 102 can be used to implement a process of storing and analyzing the collected performance test data. In addition, the server 102 may also be configured with a database that may be used to store performance test data, performance test results, and the like.
Taking a game application test scenario as an example, when the test end device 101 runs the game application under test, the data acquisition plug-in is called to collect performance data of each running object in the game application, such as the types and numbers of memory objects. After the test end device 101 collects the performance test data, it can preprocess the data and upload them to the server 102, and the server 102 stores the performance test data. In addition, the server 102 can perform a lateral comparison based on the performance test data of multiple versions, so as to locate the abnormal running object among the running objects based on the running change data of each running object across the multiple versions. Meanwhile, the located abnormal object can be pushed and displayed to the developer, so that the developer can modify and optimize the game application; the modified and optimized game application then enters a new round of iterative testing, that is, the above process is repeated and the performance test is performed again on the new game application.
Specifically, referring to fig. 2, a schematic diagram of an iterative test of a game application is shown. Iterative testing refers to a test method in which each cycle adds new content on the basis of the previous cycle. As shown in fig. 2, when the version 1 game application runs, performance data of the running objects in the game application can be collected, for example the memory object data shown in fig. 2; when the version 2 game application runs, performance data of the running objects are collected in the same way. The performance data of version 1 and of version 2 can then be compared to obtain version delta data for the same object, so that abnormal running objects can be located based on the version delta data. In practical applications, running objects may be added in subsequent versions, so the newly added object data of the new version can also be obtained through version comparison. Furthermore, the differences between version data can drive optimization, that is, a developer can derive a targeted optimization scheme from the differences between version data. In addition, as the number of versions gradually accumulates, the performance data can be fitted linearly across multiple versions, for example across 7 versions, so as to filter out the influence on the test result of the system noise that affects dual-version comparison.
In one embodiment, the processes respectively executed by the test end device 101 and the server 102 may also be implemented by integrating into the same device, that is, the test end device 101 and the server 102 may be implemented by the same device, and the test end device 101 and the server 102 may be different functional modules of the device to implement corresponding functions.
The test end device 101 and the server 102 may be in direct or indirect communication connection via one or more networks 103. The network 103 may be a wired network, or may be a Wireless network, for example, a mobile cellular network, or may be a Wireless-Fidelity (WIFI) network, or may be other possible networks, which are not limited in this embodiment of the present invention.
It should be noted that, in the embodiment of the present application, the number of the test end devices 101 may be one or more, and similarly, the number of the servers 102 may be one or more, that is, the number of the test end devices 101 or the servers 102 is not limited.
Fig. 3 is a schematic architecture diagram of a performance test system for an application program according to an embodiment of the present application, where the architecture includes a data acquisition module, a data service module, and a data display module.
(1) Data acquisition module
The data acquisition module can be deployed in the test end device; by integrating the data acquisition plug-in provided by the embodiments of the present application into the test end device, it collects performance test data while the application program runs. The data acquisition module may include a plug-in package (patch) integrated into the UE engine and a data acquisition plug-in, such as the iteration tracking plug-in (Iteration Trace Plugin) of the embodiments of the present application.
Taking a game application developed with the UE engine as an example, an access operation is required before performance monitoring; the access operation integrates the software development kits (Software Development Kit, SDK) of the patch and the Iteration Trace Plugin into the UE program and the UE engine. The Iteration Trace Plugin provides a custom logic segment splitting function, with which the game application can be divided into a plurality of running stages, and data collection is performed for each running stage.
The data acquisition module adopts a bypass data acquisition mode, which has little impact on the running performance of the application program, and both the client and the dedicated server (Dedicated Server, DS) system can be accessed.
(2) Data service module
The data service module can be deployed in a micro-service manner. It provides the function of storing the received performance test data of a given version into a database (DB), and can filter out small data items from the received performance test data and analyze the data so as to locate abnormal running objects.
(3) Data display module
The data display module provides a display function for the performance test results, so that a developer can obtain anomaly information from the display page of the performance test results and then modify and optimize the code of the application program; the modified and optimized code enters a new iterative test flow as a new version until the performance of the application program meets expectations.
The DB can be considered as an electronic file cabinet, a place for storing electronic files, and a user can perform operations such as adding, inquiring, updating, deleting and the like on data in the files. A "database" is a collection of data stored together in a manner that can be shared with multiple users, with as little redundancy as possible, independent of the application.
The database management system (Database Management System, DBMS) is a computer software system designed for managing databases, and generally has basic functions of storage, interception, security, backup, and the like. The database management system may classify according to the database model it supports, e.g., relational, extensible markup language (Extensible Markup Language, XML); or by the type of computer supported, e.g., server cluster, mobile phone; or by classification according to the query language used, e.g. structured query language (Structured Query Language, SQL), XQuery; or by performance impact emphasis, such as maximum scale, maximum speed of operation; or other classification schemes. Regardless of the manner of classification used, some DBMSs are able to support multiple query languages across categories, for example, simultaneously.
In a possible application scenario, related data, such as performance test data and performance test results, which are related in the embodiments of the present application, may be stored by using a cloud storage (cloud storage) technology. Cloud storage is a new concept which extends and develops in the concept of cloud computing, and a distributed cloud storage system refers to a storage system which integrates a large number of storage devices (or called storage nodes) of different types in a network through application software or application interfaces to cooperatively work and jointly provides data storage and service access functions for the outside through functions of cluster application, grid technology, a distributed storage file system and the like.
At present, the storage method of the storage system is as follows: when creating logical volumes, each logical volume is allocated a physical storage space, which may be a disk composition of a certain storage device or of several storage devices. The client stores data on a certain logical volume, that is, the data is stored on a file system, the file system divides the data into a plurality of parts, each part is an object, the object not only contains the data but also contains additional information such as data identification (IDentity, ID) and the like, the file system writes each object into a physical storage space of the logical volume respectively, and the file system records storage position information of each object, so that when the client requests to access the data, the file system can enable the client to access the data according to the storage position information of each object.
The process by which the storage system allocates physical storage space for the logical volume is specifically as follows: physical storage space is divided in advance into stripes according to the set of capacity estimates for the objects to be stored on the logical volume (these estimates tend to leave a large margin over the capacity of the objects actually stored) and according to the redundant array of independent disks (Redundant Array of Independent Disks, RAID) scheme; a logical volume can be understood as a stripe, whereby physical storage space is allocated for the logical volume.
In one possible application scenario, in order to reduce the communication delay of the search, the server 102 may be deployed in each region, or for load balancing, different servers 102 may be used to serve the test end devices 101 in different regions, for example, the test end device 101 is located at the site a, a communication connection is established with the server 102 serving the site a, the test end device 101 is located at the site b, a communication connection is established with the server 102 serving the site b, and multiple servers 102 form a data sharing system to realize data sharing through a blockchain.
Each server 102 in the data sharing system has a corresponding node identifier, and each server 102 may store the node identifiers of the other servers 102 in the data sharing system, so that a generated block can later be broadcast to the other servers 102 according to their node identifiers. A list of node identifiers may be maintained in each server 102, in which the server 102 name and node identifier are stored. The node identifier may be an Internet Protocol (IP) address or any other information that can be used to identify the node.
Of course, the method provided in the embodiment of the present application is not limited to the application scenario shown in fig. 1 or the architecture of fig. 3, but may be used in other possible application scenarios or system architectures, which is not limited by the embodiment of the present application. The functions that can be implemented by the respective devices shown in fig. 1 or the respective modules shown in fig. 3 will be described together in the subsequent method embodiments.
The method flow provided in the embodiments of the present application may be performed by the server 102 or the test end device 101 in fig. 1, or may be performed jointly by the server 102 and the test end device 101. In the following description, the application program will be mainly described as a game application program, but other types of application programs are also applicable.
In the embodiments of the present application, when performing performance analysis on the target-version application program, the first performance test data of the target version and the second performance test data of the N historical versions before the target version need to be collected as the data basis, so data collection is described first. It should be noted that, although the first performance test data and the second performance test data use different names in the embodiments of the present application, this is only to distinguish the target version from the historical versions and does not indicate an essential difference in the data they contain; the collection processes of the first and second performance test data are similar, so the collection of the first performance test data is described here specifically.
Referring to fig. 4, a flow chart of data acquisition by the data acquisition module is shown.
Step 401: in response to a triggering operation performed on a running engine of the application, integrating a data acquisition plug-in a plug-in package of the running engine.
In the embodiments of the present application, data acquisition can be realized through the data acquisition module. For smooth data acquisition, an access operation is also required; this access operation may be one-off, and after the first access, subsequent versions can continue to use the accessed data acquisition plug-in. When the access is performed, the data acquisition plug-in is integrated into the plug-in package of the running engine in response to the triggering operation, and the data acquisition plug-in can then be used for subsequent data acquisition. Taking the above-mentioned UE engine as an example, the above-mentioned patch and Iteration Trace Plugin may be integrated into the UE program and engine.
In an embodiment of the present application, the data acquisition plug-in may include one or more of the following combinations:
(1) Hook (hook) function
For each operation object needing data acquisition in the application program, hook functions can be set for the operation objects, and when the program runs, the hook functions can be automatically triggered to collect basic data.
(2) Logic segment partitioning function
The logic segment splitting functions specifically include a logic segment start function and a logic segment end function. The required logic segment splitting functions can be called in the business logic layer to split the application program into logic segments, where each logic segment corresponds to one running stage; by calling the logic segment splitting functions, the application program can thus be divided into multiple running stages, and data collection and analysis are performed for each running stage separately.
(3) Custom data collection function
In practical applications, data may need to be reported separately at appropriate locations. To support this, a custom data collection function is further provided, together with an application programming interface (Application Programming Interface, API) that allows the custom data collection function to be called at an appropriate location to report the required data to the data service module.
Of course, the above data acquisition plug-in may further add any possible function according to the actual requirement, which is not limited in this embodiment of the present application.
Step 402: in response to the running operation, the application of the target version is run based on the running engine described above.
In this embodiment of the present application, after the access operation of the data acquisition plug-in is completed, data may be acquired based on the data acquisition plug-in when the application program runs.
Specifically, when the performance test data of the application program need to be collected, a running operation can be performed on the target-version application program; in response to the running operation, the target-version application program is run based on the running engine, and then, while the application program is running, the corresponding hook functions are triggered by the operation of each running object to collect the first performance test data of the corresponding running objects.
In one possible implementation, after compiling the target version of the application based on the program code, the target version of the application may be run. When an application program is started, a command line parameter needs to be configured to start a function of collecting data by the data acquisition plug-in, and the function is used for realizing data reporting and subsequent data storage. The command line parameters may include one or more of the following combinations of parameters:
(1) The reporting IP (ReportIP) address, i.e. the IP address of the server, is used for addressing of subsequent data reporting.
(2) Reporting port (ReportPort), i.e. the data port of the server for data reporting.
(3) Version of application (ReportVersion), i.e., version number of the target version.
(4) The test mode (ReportTestType) of an application, or run scenario.
(5) The run serial number of the application program, used to indicate which run of the current target version this is; if it is not filled in, the test end device or the server defaults to incrementing the previous run serial number by one.
Parameters (1) and (2) specify the service address for data reporting, and parameters (3), (4) and (5) are used by the server for subsequent data management of the reported data. In practical applications, if the command line parameters are not filled in, the functions of the data acquisition plug-in are disabled by default, and the performance of the release version of the application program is not affected.
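A minimal sketch of reading these reporting parameters from the command line; the exact switch syntax (e.g. "-ReportIP=10.0.0.1") is an assumption, and only the parameter names come from the list above:

```cpp
#include <map>
#include <string>
#include <vector>

// Parse switches of the assumed form "-Key=Value" into a parameter map
// (ReportIP, ReportPort, ReportVersion, ReportTestType, run serial number).
std::map<std::string, std::string> ParseReportArgs(const std::vector<std::string>& args) {
    std::map<std::string, std::string> params;
    for (const auto& arg : args) {
        auto eq = arg.find('=');
        if (arg.size() > 1 && arg[0] == '-' && eq != std::string::npos)
            params[arg.substr(1, eq - 1)] = arg.substr(eq + 1);
    }
    return params;  // an empty map means data collection stays disabled by default
}
```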
Step 403: based on the operation of each operation object, triggering the corresponding hook function to collect the first performance test data of the corresponding operation object.
Specifically, after the application program runs, data collection can be performed for each running object of the application program. Taking the game application as an example, after the game application has been integrated with the Iteration Trace Plugin, the engine is functionally enhanced; in effect, the Iteration Trace Plugin deploys hook functions at the core APIs of the application program, so that the first performance test data can be collected automatically when the program runs, for example object allocation data, function running time data and network transceiving data can be collected and counted.
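A minimal sketch of the hook idea: a wrapped creation entry point counts object creations per class before delegating to the original logic. This is plain C++ for illustration only and does not represent the engine-level hook points actually used by the plug-in.

```cpp
#include <map>
#include <memory>
#include <string>
#include <utility>

static std::map<std::string, long long> g_createCount;  // per-class creation counter

// Bypass-style statistics: record the creation, then perform the original behavior.
template <typename T, typename... Args>
std::unique_ptr<T> HookedCreate(const std::string& className, Args&&... args) {
    ++g_createCount[className];
    return std::make_unique<T>(std::forward<Args>(args)...);
}
```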
In addition, when the application program runs, the user-defined data reporting can be performed, namely, the data to be reported can be added at a proper position according to the requirement.
In the embodiment of the present application, the flow of data collection may be implemented in the following several scenarios.
(1) Actively implementing performance testing
Referring to FIG. 5a, a schematic diagram of a pipeline for actively performed performance testing is shown. The pipeline can be designed according to test stages (Stage); for example, FIG. 5a includes Stage-3 to Stage-6, and for each test stage the process to be executed in that stage can be designed. For example, Stage-3 performs preprocessing, which specifically includes three steps: cleaning up the legacy DS, cleaning up the legacy Client, and pulling up the data collection program; the other stages follow in the same manner and are not described again here.
Performance testing is actively performed, as the name suggests, which requires manual triggering of the testing process, so that performance test data of the application may be collected after triggering.
(2) Integration into an automated build flow
Referring to fig. 5b, the performance testing process may be integrated into an automated build process, and further, the performance testing process may be automatically triggered to implement a data collection process, for example, a period of data collection may be set, such as daily test data collection, weekly test data collection, or once per update of a version, so that the performance testing process may be automatically triggered without manual operation.
(3) The flow of data collection is implemented in a test environment.
After collecting the multiple versions of performance test data, then a performance analysis of the current version may be performed based on such data. Referring to fig. 6, a flow chart of a performance testing method of an application program according to an embodiment of the present application is shown.
Step 601: acquiring a plurality of running objects, first performance test data triggered when the application program of the target version runs, and second performance test data respectively triggered when the application programs of the N historical versions run, wherein the first performance test data are triggered when the application programs of the target version run; wherein N is more than or equal to 1.
In the embodiment of the application, the BUG of the current version is positioned by collecting the fine-grained long-period performance test data of the application program.
In practical application, the performance test data can be collected for the application programs of different versions, and the collected performance test data are brought into the database to be managed in a unified way, so when the performance test data of the target version and N historical versions are already stored in the database, the first performance test data corresponding to the target version and the second performance test data corresponding to the N historical versions can be read from the database.
The performance test data in the embodiment of the present application may include, for example, data such as object allocation data, function running time-consuming data, and network transceiver data, but may also include other possible performance data, which is not limited in this embodiment of the present application.
Step 602: and based on the plurality of operation objects, respectively corresponding first performance test data and N second performance test data, respectively determining operation change data of the corresponding operation objects.
In the embodiment of the application, the performance test data of different versions can be analyzed by means of version comparison.
In one embodiment, a dual-version comparison mode may be adopted. For example, for the target version, its performance test data may be compared with the performance test data of the previous version of the target version to obtain delta data; the delta data is the running change data of a running object, and whether the running object is abnormal is then judged based on the delta data, for example, when the delta data is greater than a certain threshold, the running object is marked as abnormal.
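A minimal sketch of this dual-version comparison, where the threshold value is an assumption:

```python
def dual_version_delta(current_value, previous_value, threshold=100.0):
    """Return (delta, is_abnormal) for one running object by comparing the
    target version's value with the previous version's value."""
    delta = current_value - previous_value
    return delta, delta > threshold
```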
In the embodiment of the present application, it is further considered that when an application program runs, system noise, such as the order of system scheduling, random factors in the program, and the overall load of the system, affects the data of each run. Therefore, in order to filter out data noise and make the final test result reach a certain accuracy, the running change data of each running object is determined by multi-version linear fitting, so that anomalies can be located as accurately as possible among a huge number of dimensions.
Since the determination of the running change data is similar for each running object, a running object a is specifically described here as an example.
Specifically, for the operation object a, linear fitting processing is performed based on the first performance test data and the N second performance test data corresponding to the operation object a, so as to obtain a slope of a straight line obtained by fitting, where the slope is used to characterize a change rate of the operation object a.
Referring to fig. 7, a schematic diagram of the line after the linear fitting process is shown. In fig. 7, N is taken as 6, i.e., a total of 7 versions of performance test data are selected for linear fitting each time a performance test is performed. The abscissa of the fitted line shown in fig. 7 is the version number and the ordinate is the value of running object A in the performance test data; thus, after the linear fitting, the slope of the fitted line can be obtained, and this slope is the performance change rate of running object A across the 7 versions.
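Assuming the per-version values of running object A are available, the multi-version fit can be sketched with an ordinary least-squares line (e.g. numpy.polyfit); the slope threshold below is illustrative only.

```python
import numpy as np

def change_rate(values, slope_threshold=0.5):
    """Fit a line through (version index, value) pairs for one running object.

    values: the object's performance test data over the N+1 versions, ordered
            from oldest to newest (e.g. 7 values when N = 6).
    Returns (slope, is_abnormal).
    """
    x = np.arange(len(values))               # abscissa: version number (index)
    slope, _intercept = np.polyfit(x, values, deg=1)
    return slope, slope > slope_threshold

# Example: change_rate([102, 99, 101, 100, 103, 120, 135]) flags a rising trend.
```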
Step 603: based on the obtained respective operation change data, an abnormal operation object is determined from a plurality of operation objects.
In one embodiment, when the dual-version comparison mode is adopted, if the difference data is greater than a certain threshold value, the running object can be determined to be an abnormal running object, and the abnormal running object is marked.
In one embodiment, when the multi-version linear fitting mode is adopted, when the slope of the operation object A is greater than the set slope threshold, the operation object A is determined to be an abnormal operation object, and the abnormal operation object is marked.
In the embodiment of the present application, since fine-grained performance monitoring is performed on the application, the number of monitored dimensions is huge and anomaly localization is difficult. The running change of a given running object can be obtained more accurately by linear fitting, and by comparing it with the baseline change an abnormal running object can be identified easily, so that the abnormal dimensions are confined to a smaller range and the efficiency of anomaly localization is improved.
Step 604: and generating a performance test result corresponding to the application program of the target version based on the first performance test data of the abnormal operation object.
In this embodiment of the present application, for the located abnormal operation objects, the first performance test data of the abnormal operation objects in the target version may be combined to generate the performance test result corresponding to the application program of the target version. In actual application, the abnormal operation object can be marked so as to be different from the normal operation object in subsequent display.
In the embodiment of the present application, it is considered that the application program of each version is inevitably affected by system noise during a single run, which makes the obtained performance test result inaccurate and may eventually cause anomaly localization to fail. Therefore, in order to filter out the influence of system noise, for each version the data obtained from multiple runs may be processed to obtain the performance test data required for the final performance analysis.
Taking the first performance test data corresponding to the target version as an example, fig. 8 is a schematic flow chart for obtaining the first performance test data. Here, the first performance test data corresponding to any one of the running objects a is specifically taken as an example.
Step 6011: and aiming at the running object A, acquiring first basic running data triggered by the running object A when the Mth running of the application program of the target version is performed.
Step 6012: and respectively acquiring the running object A from the stored basic running data, and triggering second basic running data when the application program of the target version runs for the first time (M-1).
It should be noted that the terms first basic running data and second basic running data are only used to distinguish basic running data obtained from different runs; there is no substantial difference between the basic running data and the parameter data included in the performance test data. The performance test data are obtained by processing the basic running data collected from multiple runs under the same conditions; for example, the performance test data may be obtained by averaging the basic running data of the same version.
Step 6013: and merging the obtained first basic operation data and (M-1) second basic operation data to obtain first performance test data corresponding to the operation object A.
Specifically, the first basic running data and the (M-1) pieces of second basic running data can be averaged, so as to obtain the average value of the performance test data of running object A over the M runs.
For example, when the performance test data is the running time consumption, the time consumed by each of the M runs of running object A may be obtained, and the average time consumption over the M runs is calculated as the first performance test data that finally participates in the performance analysis.
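A minimal sketch of the merge in step 6013, assuming the metric is a single numeric value per run:

```python
def merge_runs(mth_run_value, previous_run_values):
    """Average the M-th run with the stored values of the previous (M-1) runs
    to obtain the first performance test data of one running object."""
    all_runs = [mth_run_value] + list(previous_run_values)
    return sum(all_runs) / len(all_runs)

# e.g. merge_runs(12.4, [11.8]) -> 12.1 when M = 2 and the metric is run time.
```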
In this embodiment of the present application, the values of M and N may be, for example, 2 and 6 respectively, that is, the basic running data obtained from running each of 7 versions (the target version plus 6 historical versions) 2 times participate in the performance analysis. In practical applications, the values of M and N can be set based on empirical values or according to experimental results.
In order to determine how many test data should be averaged for each version, a data noise filtering experiment was performed. To obtain the experimental results conveniently and intuitively, a single version can be used to simulate the running of multiple versions, so that theoretically the same version should not show any abnormality. Since the baseline for comparison is the version itself, a linear fit using the true values yields a fitted line with slope 0. Here, 7 noisy test data are generated for each test; these may be randomly generated based on a normal distribution, for example within 2 standard deviations.
Referring to fig. 9, a schematic diagram of the results of the data noise filtering experiment is shown.
Referring to fig. 9, in scheme 1, any one test data is adopted to perform linear fitting, so that the fitted straight line is greatly different from the true value; scheme 2 is to use the average value of any two times of test data to make linear fitting, and the value is very close to the true value; scheme 3 is a linear fit using the average of 7 test data values, and it can be seen that the fit line does not differ significantly from scheme 2.
Therefore, taking into account both the difference between each scheme's result and the true value and the implementation difficulty, performing the linear fit on the average of two tests achieves a better balance between difficulty and effect.
Also, in order to determine how many test data allow each version to reveal an abnormality well, an abnormality display experiment was performed. In the actual experiment, the real performance value of one version is increased, simulating either a normal increase or an abnormality (a BUG) introduced for some reason in that version. Thus, when linear fitting is performed using the true values, the slope of the fitted line is greater than 0. Similarly, 7 noisy test data are generated for each test, and they may be randomly generated according to a normal distribution.
Referring to fig. 10, a schematic diagram showing the results of the experiment for abnormality.
Referring to fig. 10, in scheme 1 a linear fit is made using any single test datum, and the fitted line differs greatly from the true values; in scheme 2 a linear fit is made using the average of any two test data, and it is very close to the true values; in scheme 3 a linear fit is made using the average of 7 test data. The differences from the true values in schemes 2 and 3 are relatively close, so on balance scheme 2 is suitable.
The above two experiments clearly show the effect of linear fitting, and also that this effect varies with the test values; therefore, the accuracy of each test scheme is verified by repeating the experiments many times.
Referring to fig. 11, a flow chart for determining the values of M and N based on the experimental results is shown.
Step 1101: and (3) formulating candidate test schemes, wherein the number of versions or the running times corresponding to any two candidate test schemes are different, and the application programs of different versions are obtained by performing pseudo-random modification on the application programs of specified versions.
The following are examples of several candidate test protocols:
(1) A linear fit using 2 versions, each version using the average of 7 basic running data.
(2) A linear fit using 7 versions, each version using 1 basic running datum.
(3) A linear fit using 7 versions, each version using the average of 2 basic running data.
(4) A linear fit using 7 versions, each version using the average of 7 basic running data.
It should be noted that the application programs of different versions are obtained by performing pseudo-random modification on the application program of a specified version; for example, a normal increase or an abnormal BUG is added on top of a certain version, so as to raise the real performance value of that version.
Step 1102: and aiming at each candidate test scheme, acquiring performance test data triggered by each running object under different versions.
Specifically, the performance test data are generated in the same way as in the above experiments, but each experiment re-randomizes the test data on the basis of the true values before performing the linear fit.
Taking one of the test schemes as an example: for that candidate test scheme, the basic running data triggered by each running object under the different versions are acquired; then, for the number of runs specified in the scheme, pseudo-random noise processing is applied to the basic running data triggered by each running object under each version the corresponding number of times; finally, the performance test data triggered by each running object under each version are obtained based on the multiple noisy basic running data acquired for each running object under each version.
For example, if the scheme is test scheme (1) described above, the basic running data of each running object under each of the 2 versions is obtained; for each version, the basic running data is subjected to pseudo-random noise processing 7 times to obtain 7 pieces of basic running data with data noise, and these 7 pieces are then averaged to obtain the performance test data of that version.
The pseudo-random noise processing may, for example, use the normally distributed random noise described above.
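The noise-filtering and accuracy experiments described above can be reproduced with a small Monte-Carlo simulation; the flat true values, the noise scale and the acceptance criterion (fitted slope magnitude below 0.5) follow the description, while everything else is an assumption.

```python
import random
import numpy as np

def simulate_accuracy(true_values, runs_per_version=2, noise_sd=2.0,
                      trials=10_000, slope_limit=0.5):
    """Estimate how often a candidate test scheme produces the expected fit.

    true_values: one value per version; identical values simulate a version
                 sequence with no anomaly, so the expected slope is 0.
    runs_per_version: how many noisy basic runs are averaged per version.
    """
    x = np.arange(len(true_values))
    hits = 0
    for _ in range(trials):
        observed = [
            np.mean([v + random.gauss(0.0, noise_sd)
                     for _ in range(runs_per_version)])
            for v in true_values
        ]
        slope, _ = np.polyfit(x, observed, deg=1)
        if abs(slope) < slope_limit:          # matches the expected (flat) fit
            hits += 1
    return hits / trials

# e.g. simulate_accuracy([100.0] * 7, runs_per_version=2) approximates the
# accuracy of a scheme using 7 versions with 2 runs each under these assumptions.
```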
Step 1103: and determining a target test scheme from the determined multiple candidate test schemes based on the operation change condition and the real change condition corresponding to each operation object when each candidate test scheme is adopted.
Taking linear fitting as an example, the performance test data of each version obtained by the candidate test schemes can be subjected to linear fitting, the performance change rate of each test scheme is obtained, and the accuracy of each test scheme can be judged by comparing the performance change rate with the real change rate.
In practical applications, the test can be repeated to obtain the accuracy of each test scheme, and a final test scheme is then selected based on the accuracy. For example, the experiment can be repeated 10,000 times to verify the accuracy of each scheme, where a fitted line whose slope is smaller than 0.5 is considered to match the expected fit; the resulting accuracies are shown in Table 1 below.
TABLE 1
It can be seen that scheme (3), which performs linear fitting with 7 versions, each version using the average of 2 data, reaches a fitting accuracy of 78.2, while scheme (4), which performs linear fitting with 7 versions, each version using the average of 7 data, reaches 97.2; both schemes have relatively high accuracy and can therefore serve as the target test scheme.
In practical applications, any candidate test scheme whose accuracy satisfies a certain condition could be implemented. However, different test schemes have different execution complexity, so the final target test scheme may be selected after balancing execution complexity against accuracy. For example, the accuracy of scheme (3) approaches 80% and it offers a better balance between execution complexity and accuracy, so scheme (3) can be selected as the target test scheme for final implementation.
Step 1104: the value of N is set based on the number of versions corresponding to the target test scheme, and the value of M is set based on the corresponding run time.
For example, if the target test solution is solution (3), N may be set to 6 accordingly, i.e. the current version and the previous 6 historical versions are selected, a total of 7 versions are linearly fitted, and M is set to 2 accordingly, i.e. each version is averaged using the basic operation data after two operations and then used as the performance test data of the version.
In the embodiment of the present application, it is considered that the application program may include a plurality of running phases; for example, the game application program may be divided into several logic phases such as an initialization phase, a match creation phase, a player loading phase, and a game phase. In order to strictly compare data from the same logic segment and to use a uniform standard when comparing multiple versions, so as to provide more accurate data comparison and a better comparison effect, the embodiment of the application also provides a logic segment division function.
Referring to fig. 12, a flow chart of a performance testing method based on logic segment segmentation is shown.
Step 1201: in response to a trigger operation for performing run-phase segmentation of the application, the application is divided into a plurality of run phases.
Step 1202: when the operation is started to the starting position of one operation stage in each operation stage, triggering and calling a corresponding logic segment starting function to start collecting the first performance test data of each operation object in the operation stage.
Step 1203: when the operation is carried out to the ending position of the operation stage, triggering and calling a corresponding logic segment ending function to end the collection of the first performance test data of each operation object in the operation stage.
Specifically, based on the description of the foregoing embodiments, the data acquisition plug-in in the embodiments of the present application includes a logic segment division function for dividing an application into a plurality of running phases, each of which is then compared separately.
Specifically, in order to collect data in segments, the service logic layer calls the API of IterationTrace Plugin to set up the statistical service logic segments when the integration work is performed. When setting a logic segment, the API provided by the SDK needs to be called in the application program; this API consists of the logic segment start function and the logic segment end function, which are used to trigger the start and the end of data collection, respectively.
Referring to fig. 13, a schematic diagram of the logic segment division function is shown. When the application runs after the logic segments have been divided, reaching the start position of a logic segment triggers the data acquisition plug-in to start collecting the first performance test data, and reaching the end position of the logic segment triggers the plug-in to stop collecting and to report the collected first performance test data to the data service module.
In one embodiment, when the logic segments are divided, a logic segment start function may be inserted at the start position of each running phase in the application program and a logic segment end function inserted at the end position, in response to the trigger operation for dividing the application program into running phases, so that the application program is divided into a plurality of running phases. Then, when the application runs to the start position of one of the running phases, the corresponding logic segment start function is triggered and called, and collection of the first performance test data of each running object in that running phase begins; when the application runs to the end position of that running phase, the corresponding logic segment end function is triggered and called, and collection of the first performance test data of each running object in that running phase ends.
In another embodiment, when the logic segments are divided, the logic segment start function may be called to record the start position of each running phase and the logic segment end function called to record the end position of each running phase, and the recorded start and end positions are then monitored. When the application runs to the start position of a running phase, the logic segment start function detects that the start position has been reached and triggers the start of collecting data for that running phase; similarly, when the application runs to the end position of a running phase, the logic segment end function detects that the end position has been reached, triggers the end of collecting the first performance test data for that running phase, and the collected first performance test data are reported to the data service module.
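As a sketch, the business logic layer might bracket one running phase with the logic segment start and end functions like this; the function names are placeholders for the SDK API described above.

```python
from contextlib import contextmanager

def logic_segment_begin(stage_name):   # placeholder for the SDK start-collection API
    print(f"[IterationTrace] start collecting for stage {stage_name}")

def logic_segment_end(stage_name):     # placeholder for the SDK end-collection API
    print(f"[IterationTrace] stop collecting for stage {stage_name}")

@contextmanager
def logic_segment(stage_name):
    """Collect first performance test data only between the start and end calls."""
    logic_segment_begin(stage_name)
    try:
        yield
    finally:
        logic_segment_end(stage_name)

# Business logic layer usage:
# with logic_segment("Load"):
#     load_players()
```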
It should be noted that when the adopted test scheme performs linear fitting on data obtained from a single run, the first performance test data is essentially the basic running data; when the adopted test scheme performs linear fitting on data obtained from multiple runs, the data collected in one acquisition can essentially be understood as basic running data, and the first performance test data corresponding to the target version is obtained by merging the basic running data obtained from multiple runs.
In practical application, whether the logic section is required to be divided or not can be judged according to the practical requirement, and if the logic section is not required to be divided, the data acquisition plug-in can acquire first performance test data in the whole running process of the application program.
In the embodiment of the present application, when running phases are divided in practical applications, a call to the logic segment end function indicates that the data acquisition plug-in has completed the data collection work of one running phase; performance analysis is then performed based on the first performance test data collected for that running phase and the second performance test data collected for the same running phase in the historical versions, so as to obtain the corresponding performance test result.
Specifically, in performance analysis, analysis may be performed for each operation stage, and any operation stage S will be described as an example.
For the operation stage S, operation change data corresponding to each operation object in the operation stage S may be determined based on the first performance test data and the N second performance test data corresponding to each operation object in the operation stage S, and further, abnormal operation objects in the operation stage S may be determined based on the operation change data corresponding to each operation object in the operation stage S.
For example, in the operation stage S, the performance test data of each operation object may be linearly fitted to obtain the performance change rate of each operation object, and when the performance change rate is greater than the set threshold, the operation object may be determined as an abnormal operation object.
In this embodiment of the present application, in addition to logic segment division, division by running scenario may also be performed, i.e., according to the above-mentioned ReportTestType. In different running scenarios the running logic differs, so the running data of each running object may differ; therefore, the running scenario needs to be considered during data analysis, and targeted data comparison is performed per running scenario to improve the accuracy of the analysis.
Specifically, before the application is started, the running scenario can be preset; the number of application instances that need to run in that scenario is then determined from the running scenario set by the run operation, and based on the determined number, a plurality of usage objects in the running scenario are simulated by running the corresponding number of application programs of the target version. For example, for a game application, in a 5-person combat scenario and a 2-person combat scenario the numbers of game clients to be run are 10 and 4 respectively, each game client simulating the usage of one player; because the running logic differs, the monitored dimensions and the running data of each dimension may differ, so data acquisition can be performed separately in the different running scenarios.
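A sketch of how the preset running scenario might be mapped to the number of simulated clients; the scenario names and the start_client callback are assumptions that simply restate the 5-person / 2-person example above.

```python
# Illustrative mapping: 5-person combat -> 10 clients, 2-person combat -> 4 clients.
SCENARIO_CLIENTS = {"5v5": 10, "2v2": 4}

def start_clients(scenario, start_client):
    """Simulate the required number of players for the given running scenario."""
    count = SCENARIO_CLIENTS[scenario]
    return [start_client(player_id=i, scenario=scenario) for i in range(count)]
```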
After the performance test data of each running scenario have been collected, the running scenarios can be compared separately to accurately locate anomalies of the game application: for each running scenario, the running change data corresponding to each running object in that scenario are determined based on the first performance test data and the N second performance test data corresponding to each running object in the scenario, and the abnormal running objects in the scenario are then determined from these running change data.
In practical applications, comparison can also be performed by combining running scenario and running phase, i.e., data comparison is performed for one running phase under one running scenario, so as to accurately locate anomalies of the game application.
The following describes in detail the technical solution provided in the embodiment of the present application, taking the system architecture shown in fig. 3 as an example. Taking a game application developed based on a UE engine as an example, referring to fig. 3, the technical solution of the embodiment of the present application may include steps S1 to S10.
S1: and running the program.
When the integration work is ready and a new version of the application has been compiled from the program code, the application can be run to begin data collection. The program can be run by manually starting the application, or the data acquisition plug-in can trigger the application to run.
When the application is started, command line parameters need to be configured to enable the data collection function of the data acquisition plug-in, so that data reporting and subsequent data storage can be realized; for example, ReportIP, ReportPort, ReportVersion, ReportTestType, the run serial number and the like can be carried.
S2: and (5) data collection.
Once the integration work is completed, IterationTrace Plugin enhances the UE engine by setting hook functions at the core APIs, so that basic data are collected automatically while the program runs. Of course, data to be reported can also be added at appropriate positions as required.
S3: logic segment control.
During integration, the service logic layer calls the API of IterationTrace Plugin to set the statistical service logic segments, i.e., it calls the API provided by the SDK in the program, which includes a start-collection API and an end-collection API. During program running, when the logic-segment-end API is called, IterationTrace Plugin completes the data collection work of that stage.
S4: and (5) reporting data.
After the data acquisition is completed, the acquired performance test data can be reported to the data service module.
In practical application, if the operation stages are divided, if the logic segment end function is called, the data acquisition plug-in unit is indicated to complete the data collection work of one operation stage, and the collected first performance test data of the operation stage can be reported to the data service module.
In order to reduce the amount of data transmitted by the network and reduce the network performance impact on the application itself, iterationTrace Plugin may perform a certain preprocessing on the collected first performance test data after the data collection work in one operation phase is completed.
In one embodiment, the first performance test data of the same running object may be combined, for example, the running duration of multiple times of running of one running object is averaged, and the obtained average value is used as the final running duration of the running object.
The preprocessed first performance test data may be formatted according to the JavaScript Object Notation (JSON) format and then reported to the data service module in JSON.
In one embodiment, the running objects can be classified by dimension type, so that they can be aggregated and transmitted by dimension classification, reducing the number of network packets sent and improving packet transmission efficiency.
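A sketch of the per-phase preprocessing and JSON report described above: values of the same running object are merged, objects are grouped by dimension type, and the result is serialized for the data service module; the field names are assumptions.

```python
import json
from collections import defaultdict

def build_report(version, test_type, seq, stage, samples):
    """samples: (dimension_type, object_name, value) tuples collected in one
    running phase. Same-object values are averaged and objects are grouped by
    dimension type to reduce the number of packets sent."""
    merged = defaultdict(list)
    for dim, name, value in samples:
        merged[(dim, name)].append(value)

    grouped = defaultdict(dict)
    for (dim, name), values in merged.items():
        grouped[dim][name] = sum(values) / len(values)

    report = {"version": version, "testtype": test_type,
              "seq": seq, "section": stage, "data": grouped}
    return json.dumps(report)
```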
S5: and (5) data storage.
After the data service module receives the performance test data reported by the data acquisition module, the performance test data can be subjected to warehouse entry processing.
In one embodiment, if the application program has not been divided into logic segments, the performance test data are generated over the entire running process of the application. A storage identifier corresponding to the performance test data can then be generated from the version number of the target version, the running mode adopted when the performance test data were triggered, and the run serial number when the performance test data were triggered, and the performance test data are stored in the database under that storage identifier.
In one embodiment, if the application program has been divided into logic segments, the performance test data are generated during one running phase of the application. In this case, the running mode adopted when the first performance test data were triggered, the run serial number when they were triggered, and the phase identifier corresponding to the running phase can be obtained; a storage identifier corresponding to that running phase is generated from the version number, running mode, run serial number and phase identifier, and the performance test data corresponding to the running phase are stored in the database under the generated storage identifier.
Specifically, taking the command line parameters carried at application startup as an example, after the data are reported, the performance test data are stored in the database with the version, the test mode testtype, the logic segment section, and the run serial number seq of the current run as keys.
In addition, with the version, the test mode testtype and the logic segment section as keys, the performance test data of that version under that test mode and logic segment can be summarized; for example, the data of each running object in the reported data are averaged and stored in the database. That is, if the M-th run of the target version has just been performed, then after the basic running data collected in the M-th run have been stored, they are averaged together with the basic running data of the previous (M-1) runs, and the result is stored in the database as the performance test data used for the performance analysis of the target version.
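A sketch of this storage logic, with the key format as an assumption: the basic running data of the current run are keyed by version, test mode, logic segment and run serial number, and the per-version summary keyed by version, test mode and logic segment is refreshed as the average over the runs stored so far.

```python
def run_key(version, testtype, section, seq):
    # key for the basic running data of one specific run
    return f"{version}|{testtype}|{section}|{seq}"

def summary_key(version, testtype, section):
    # key for the averaged performance test data used in the analysis
    return f"{version}|{testtype}|{section}"

def store_run(db, version, testtype, section, seq, value):
    """db: any dict-like store; value: a numeric metric of one running object."""
    db[run_key(version, testtype, section, seq)] = value
    runs = [v for k, v in db.items()
            if k.startswith(f"{version}|{testtype}|{section}|")]
    db[summary_key(version, testtype, section)] = sum(runs) / len(runs)
```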
S6: and calculating difference data.
After the performance test data of the plurality of versions are collected, the performance test data of the current version and the previous N versions can be summarized and analyzed to obtain the performance test result of the current version.
Specifically, taking test scheme (2) as an example, the performance test data of the 6 versions preceding the current version can be taken out of the database, and for each running object a linear fit is performed on the 7 versions of performance test data; if the slope of the fitted line exceeds the set value, the object is marked to indicate that it is likely to be an abnormal running object, and the abnormal data are inserted into the database used for display.
S7: and showing summary.
The data in the database are presented through the data display module included in the system in order to display the anomalies. The displayed data may include summary information, such as summary information for each version or summary information for each running phase under each version; of course, the detail information of the abnormal running objects and the like can also be displayed.
Specifically, referring to fig. 14, an operation flow diagram of data presentation is shown.
Step 1401: responding to a test result display operation aiming at a target version, and presenting a test result display interface corresponding to the target version; the test result display interface comprises test results of the application program in different operation stages.
Taking the initialization object and the function call monitoring as examples, each running object may be an object or a function.
In one embodiment, when the test result is displayed, the performance test data of a certain version, the test result and other contents can be displayed independently. If the logic segment division is performed, the test result of the target version can be displayed in stages.
Referring to FIG. 15a, a presentation interface for the test results of one version is shown, in which data overview information for each running phase of that version is presented. In fig. 15a, memory object data is shown as an example; the application is divided into an Init phase (DS start procedure), a Preprocess phase (Client connection procedure), a CountDown phase (Client connection procedure), a Load phase (Client connection procedure), and a Game phase (Client entering the game procedure), and the types and numbers of the memory objects of the respective phases are shown.
In practical applications, different types of data overviews can be expanded according to specific requirements.
In one embodiment, the comparison data of the target version with the previous historical version may also be presented.
Referring to fig. 15b, a schematic diagram of a test result presentation interface is shown. A comparison version for the current version can be selected, and the delta data between the current version and the selected comparison version can then be displayed; as shown in fig. 15b, after the comparison version with version number "0.1.0.3070.0_4978_14" is selected, the delta data between the current version and that comparison version are displayed.
Step 1402: and responding to the triggering operation of displaying the test result of the target operation stage in the display interface of the test result, and displaying the first performance test data corresponding to each operation object in the target operation stage.
In the embodiment of the application, performance monitoring data of each operation object in the operation stage can be displayed. Referring to fig. 15c, an interface is shown for data presentation, taking the Load phase as an example of the target operation phase.
In one embodiment, the information of the object creation condition and the function call condition of the current version, such as the number of object creation, the object name and the number of times of function call, and the like, can be displayed in the data display interface.
In practical application, in order to facilitate the browsing of the viewers, the method can also provide operation functions such as screening, searching, sorting and the like.
In one embodiment, a comparison version of the current version may also be selected, and delta data of the current version and the comparison version may be displayed in the data display interface. As shown in fig. 15c, the difference value of the created number is 1313 after comparing the current version with the comparative version for the Object named "Object 1".
In practical applications, in order to let the viewer identify abnormal running objects more intuitively, the abnormal running objects may be specially marked, for example with a color or a marker; for instance, an exclamation mark may be added in fig. 15c to indicate that an abnormality exists, or running objects may be marked to different degrees according to the degree of abnormality. In this way the viewer can intuitively perceive which data are abnormal, check the corresponding code to locate where in the code the abnormality may occur, and analyze it further.
Step 1403: responding to a triggering operation of first performance test data of a target operation object in each operation object, and displaying a data display interface of the target operation object; the data display interface comprises operation change data corresponding to the target operation object.
In the embodiment of the application, detailed information of each running object can be displayed.
In one embodiment, the detailed information of the performance test data of the selected execution object may be presented in the data presentation interface. As shown in FIG. 15d, taking a function as an example, an object called by the function is presented, along with call related performance data, such as time consuming and call times information.
In one embodiment, delta data of the selected running object and the comparison version, such as a call number difference value, a time consumption difference value, and the like, can be displayed in the data display interface.
Of course, the data to be displayed may be set according to specific operation objects and requirements, which is not limited in the embodiment of the present application.
S8: and (5) result feedback.
In the embodiment of the present application, in order to notify the developer of the abnormality information, the abnormality information may be pushed to the developer's terminal device, or displayed to the developer in the information presentation manner described above, so that the developer can obtain the abnormal data from the system and perform localization analysis.
S9: modifying the optimization.
S10: new iterations.
In practical applications, after performing the anomaly localization analysis, the developer fixes the necessary anomalies and submits the updated code to the code repository; the new code then enters a new iterative test process, forming an optimization closed loop in which the application is continuously optimized to achieve the best effect.
In summary, the embodiment of the application provides a tool capable of bypass analysis and horizontal comparison, such as horizontal memory comparison, in a development environment. By performing fine-grained, long-period performance monitoring of the application across iterations, the evolution of the program's performance can be grasped as a whole, and the corresponding performance consumption points can be located quickly by tracing back through historical versions. Problems become apparent under such fine-grained, long-period monitoring because changes to the logic code of different versions directly cause changes in the monitored dimensions at runtime. When a performance anomaly occurs, the tool helps developers locate the problem point quickly and simplifies the localization work, and it is suitable for both developers and testers. With this solution, testers can quickly find abnormal performance points and feed them back to developers, and developers can quickly locate the problem in the code with this information.
Furthermore, fine-grained, long-period monitoring yields a very large number of monitored dimensions. Without a good algorithm to filter out the dimensions with no anomalies, it is difficult to find the anomalies among such a vast number of dimensions. Therefore, the linear fitting algorithm is used to filter out the dimensions without anomalies, so that the abnormal dimensions can be confined to a small range, the difficulty and complexity of finding anomaly points among massive dimensions are reduced, the localization efficiency of abnormal dimensions is greatly improved, and the accuracy is better.
Placing the technical solution provided by the embodiment of the application in a practical application scenario, the BUG can be located accurately. Specifically, the same version of an application was simulated as 7 versions, each version being run twice. Taking the Load phase of the application as an example, according to the statistical data shown in fig. 16a, 3941 types and 22269 objects were created in total; with each type taken as one running object, there are 3941 running objects, all of which need to be monitored.
To compare the effect of the linear fitting scheme, a dual-version comparison scheme was used as the control. With the dual-version comparison scheme, as shown in fig. 16b, 530 abnormal running objects were obtained; with the linear fitting scheme, as shown in fig. 16c, fewer than 20 abnormal running objects were obtained. It can be seen that the linear fitting scheme effectively confines the abnormal dimensions to a small range and greatly improves the localization efficiency of abnormal dimensions.
In addition, in order to verify the accuracy of the anomaly localization, an anomaly was deliberately embedded in version 8.
After running the program and collecting the performance test data, the abnormal running object could be detected; as shown in fig. 16d, the finally displayed abnormal data contains the embedded abnormal object "USGStateBulletInfo", so the technical solution can effectively locate the problem.
In addition, the performance consumption of the IterationTrace Plugin plug-in itself during application running was monitored, as shown in fig. 16e to fig. 16h, where different line types represent performance curves under different experimental conditions. When the plug-in is integrated into the UE engine but switched off, the performance of the application is not affected; under the same conditions, even when the plug-in is switched on, its CPU and memory consumption remain modest, and its performance is comparable to that of the Insights plug-in and clearly better than that of STAT. The plug-in therefore achieves a good monitoring effect without excessively affecting the running of the application.
Referring to fig. 17, based on the same inventive concept, an apparatus 170 for testing performance of an application program is provided, where the apparatus may be, for example, the test end device, the server, or may be partially deployed in the test end device, and partially deployed in the server, and the apparatus includes:
the data collection unit 1701 is configured to obtain first performance test data triggered by a plurality of running objects when the application program of the target version runs, and obtain second performance test data triggered by a plurality of running objects when the application program of the N historical versions runs, respectively; wherein N is more than or equal to 1;
a data analysis unit 1702 configured to determine operation change data of a corresponding operation object based on first performance test data and N second performance test data corresponding to each of the plurality of operation objects, respectively;
an abnormal location unit 1703 for determining an abnormal operation object from a plurality of operation objects based on the obtained respective operation change data;
the result generating unit 1704 is configured to generate a performance test result corresponding to the application program of the target version based on the first performance test data of the abnormal operation object.
Optionally,
the data analysis unit 1702 is specifically configured to perform, for the above-mentioned multiple operation objects, the following operations respectively: for an operation object, performing linear fitting processing based on first performance test data and N second performance test data corresponding to the operation object to obtain a slope of a straight line obtained by fitting, wherein the slope is used for representing the change rate of the operation object;
the result generating unit 1704 is specifically configured to determine, for the operation object, that the operation object is an abnormal operation object if the slope is greater than the set slope threshold.
Optionally, the data collecting unit 1701 is specifically configured to:
for the above-mentioned multiple operation objects, the following operations are executed respectively:
aiming at an operation object, acquiring first basic operation data triggered by the operation object in the Mth operation of an application program of a target version;
respectively acquiring second basic operation data triggered by the operation object in the previous (M-1) operation of the application program of the target version from the stored basic operation data;
and merging the obtained first basic operation data and (M-1) second basic operation data to obtain performance test data corresponding to the operation object.
Optionally, the data collecting unit 1701 is specifically configured to:
and carrying out average value solving processing on the first basic operation data and the (M-1) second basic operation data to obtain performance test data corresponding to the operation object.
Optionally, the apparatus further comprises an access operation unit 1705;
the access operation unit is used for responding to the triggering operation of the operation engine corresponding to the application program and integrating the data acquisition plug-in the plug-in package of the operation engine; the data acquisition plug-in comprises a plurality of hook functions corresponding to each running object;
the data collection unit is specifically configured to run the application program of the target version based on the running engine in response to the running operation performed on the application program; and triggering the corresponding hook function to collect the first performance test data of the corresponding operation object based on the operation of each operation object.
Optionally, the data acquisition plug-in further comprises a logic segment start function and a logic segment end function;
the access operation unit 1705 is further configured to insert a logic segment start function at a start position of each operation stage in the application program and insert a logic segment end function at an end position in response to a trigger operation for performing operation stage division on the application program, so as to divide the application program into a plurality of operation stages;
The data collection unit 1701 is specifically configured to trigger to call a corresponding logic segment start function when running to a start position of one of the running phases, and start to collect first performance test data of each running object in the running phase; and triggering and calling a corresponding logic segment ending function when the operation is carried out to the ending position of the operation stage, and ending the collection of the first performance test data of each operation object in the operation stage.
Optionally,
the data analysis unit 1702 is specifically configured to perform, for each of the above operation phases, the following operations:
for one operation stage, determining operation change data corresponding to each operation object in the operation stage based on the first performance test data and N second performance test data corresponding to each operation object in the operation stage;
the abnormal location unit 1703 is specifically configured to determine an abnormal operation object in the operation stage based on the operation change data corresponding to each operation object in the operation stage.
Optionally,
the data collection unit 1701 is specifically configured to: determining the number of application programs required to be operated in an operation scene based on the operation scene set by the operation, and simulating a plurality of use objects in the operation scene based on the determined number to operate the application programs of a corresponding number of target versions;
The data analysis unit 1702 is specifically configured to: for each operation scene, the following operations are respectively executed: aiming at an operation scene, based on first performance test data and N second performance test data which are respectively corresponding to all operation objects in the operation scene, determining operation change data which are respectively corresponding to all operation objects in the operation scene;
the anomaly locating unit 1703 is specifically configured to: and determining abnormal operation objects in the operation scene based on operation change data corresponding to each operation object in the operation scene.
Optionally, the apparatus further comprises a data entry unit 1706 for:
acquiring an operation mode adopted when triggering the first performance test data, an operation frequency serial number when triggering the first performance test data, and a phase identifier corresponding to the operation phase;
generating a storage identifier corresponding to an operation stage based on the target version, the operation mode, the operation time serial number and the stage identifier;
and storing the first performance test data corresponding to the operation stage into a database based on the generated storage identification.
Optionally, the apparatus further comprises a parameter determination unit 1707 for determining values of N and M by:
Aiming at a plurality of set candidate test schemes, acquiring performance test data triggered by each operation object under different versions when each candidate test scheme is adopted; the number of versions or the running times corresponding to any two candidate test schemes are different, and the application programs of different versions are obtained by carrying out pseudo-random modification on the application programs of specified versions;
determining a target test scheme from a plurality of candidate test schemes based on the operation change condition and the real change condition corresponding to each operation object when each candidate test scheme is adopted;
the value of N is set based on the number of versions corresponding to the target test scheme, and the value of M is set based on the corresponding run time.
Optionally, the parameter determining unit 1707 is specifically configured to:
for a plurality of candidate test schemes, the following operations are respectively executed:
aiming at a candidate test scheme, basic operation data triggered by each operation object under different versions when the candidate test scheme is adopted is obtained;
based on the operation times corresponding to the candidate test scheme, respectively carrying out pseudo-random noise processing of corresponding operation times on basic operation data triggered by each operation object under each version;
And acquiring performance test data triggered by each operation object under each version based on the acquired multiple basic operation data triggered by each operation object under each version.
Optionally, the apparatus further comprises a data display unit 1708 for:
responding to a test result display operation aiming at a target version, and presenting a test result display interface corresponding to the target version; the test result display interface comprises test results of the application program in different operation stages;
responding to a trigger operation for displaying the test result of the target operation stage in the display interface for displaying the test result, and displaying the first performance test data corresponding to each operation object in the target operation stage;
responding to a triggering operation of first performance test data of a target operation object in each operation object, and displaying a data display interface of the target operation object; the data display interface comprises operation change data corresponding to the target operation object.
In this apparatus, the first performance test data triggered by a plurality of running objects when the application program of the target version runs and the second performance test data respectively triggered when the application programs of the N historical versions run are acquired, the running change data of each running object across the different versions are obtained, and the abnormal running objects are located from these data. In this way the application performs fine-grained, running-object-level performance monitoring over multiple versions and a long period, so as to locate the changes of running objects caused by the logic code changes of different versions; the abnormal running objects can then be located quickly from the running change data, which greatly improves the efficiency of anomaly localization and helps developers effectively find BUGs introduced during the development of the application and correct them in time, thereby improving development efficiency.
In addition, in this apparatus, considering that fine-grained, long-period performance monitoring of the application across iterations involves a huge number of running objects and therefore many monitored dimensions, the embodiment of the application uses a linear fitting algorithm to filter out the dimensions without anomalies in order to improve the efficiency of anomaly localization; the abnormal dimensions can thus be confined to a small range, the difficulty and complexity of finding anomaly points among massive dimensions are reduced, the localization efficiency of abnormal dimensions is greatly improved, and the accuracy is good.
The apparatus may be used to perform the methods shown in the embodiments of the present application, so the descriptions of the foregoing embodiments may be referred to for the functions that can be implemented by each functional module of the apparatus, and are not repeated.
Referring to fig. 18, based on the same technical concept, the embodiment of the present application further provides a computer device 180, which may be the terminal device or the server shown in fig. 1. The computer device 180 may include a memory 1801 and a processor 1802.
The memory 1801 is used for storing a computer program executed by the processor 1802. The memory 1801 may mainly include a program storage area and a data storage area; the program storage area may store an operating system, application programs required for at least one function, and the like, and the data storage area may store data created according to the use of the computer device, and the like. The processor 1802 may be a central processing unit (CPU), a digital processing unit, or the like. The specific connection medium between the memory 1801 and the processor 1802 is not limited in the embodiments of the present application. In the embodiment of the present application, the memory 1801 and the processor 1802 are connected through a bus 1803 in fig. 18; the bus 1803 is drawn with a thick line in fig. 18, and this is merely a schematic illustration of how the components are connected, not a limitation. The bus 1803 may be classified into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 18, but this does not mean that there is only one bus or only one type of bus.
The memory 1801 may be a volatile memory, such as a random-access memory (RAM); the memory 1801 may also be a non-volatile memory, such as a read-only memory, a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); or the memory 1801 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 1801 may also be a combination of the above memories.
The processor 1802 is configured to, when invoking the computer program stored in the memory 1801, execute the methods performed by the apparatus in the embodiments of the present application.
In some possible implementations, aspects of the methods provided herein may also be implemented in the form of a program product comprising program code. When the program product runs on a computer device, the program code causes the computer device to carry out the steps of the methods described above according to the various exemplary embodiments of the application; for example, the computer device may carry out the methods performed by the devices in the embodiments of the application.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (14)

1. A method for testing performance of an application, the method comprising:
for a plurality of running objects of an application program, acquiring first performance test data triggered by the running objects when the application program of a target version runs, wherein the running objects comprise: at least one of a class and a function in the application; wherein, for each running object, the first performance test data is obtained by performing the following operations:
acquiring first basic operation data triggered by each operation object in the Mth operation of the application program of the target version;
respectively acquiring second basic operation data triggered by each operation object in the previous (M-1) operation of the application program of the target version from the stored basic operation data;
combining the obtained first basic operation data and (M-1) second basic operation data to obtain first performance test data corresponding to each operation object;
acquiring second performance test data triggered by the plurality of running objects when the application programs of the N historical versions run respectively; wherein N is more than or equal to 1; the N historical versions are historical versions before the target version, and the first performance test data and the second performance test data are acquired in the same acquisition mode;
determining operation change data of the corresponding operation objects based on the first performance test data and the N second performance test data corresponding to the operation objects respectively;
determining an abnormal operation object from the plurality of operation objects based on the obtained operation change data;
and generating a performance test result corresponding to the application program of the target version based on the first performance test data of the abnormal operation object.
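A minimal sketch of the data-collection step of claim 1, assuming the mean-value merge that claim 3 later specifies (all names are illustrative, not taken from the embodiment):

```python
def first_performance_test_data(mth_run, stored_runs):
    """mth_run:     {operation_object: value} from the M-th run of the target version
    stored_runs: list of the (M-1) earlier {operation_object: value} dicts
    Returns the merged first performance test data per operation object."""
    merged = {}
    runs = stored_runs + [mth_run]
    for obj in mth_run:
        samples = [run[obj] for run in runs if obj in run]
        merged[obj] = sum(samples) / len(samples)  # mean over the M runs
    return merged
```

The same collection procedure applied to each of the N historical versions yields the second performance test data against which the target version is compared.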
2. The method of claim 1, wherein determining operational variation data for a respective operational object based on the first performance test data and the N second performance test data for each of the plurality of operational objects, respectively, comprises:
for the plurality of running objects, the following operations are respectively executed: for one operation object, performing linear fitting processing based on first performance test data and N second performance test data corresponding to the one operation object to obtain a slope of a straight line obtained by fitting, wherein the slope is used for representing the change rate of the one operation object;
determining an abnormal operation object from the plurality of operation objects based on the obtained respective operation change data, including:
and determining, for the one operation object, that the one operation object is an abnormal operation object if the slope is greater than a set slope threshold.
3. The method of claim 1, wherein merging the obtained first basic operation data and (M-1) second basic operation data to obtain first performance test data corresponding to the one operation object, comprises:
and carrying out average value solving processing on the first basic operation data and the (M-1) second basic operation data to obtain first performance test data corresponding to the operation object.
4. The method of claim 1, wherein prior to obtaining the first performance test data triggered by the plurality of running objects when the application program of the target version runs, the method further comprises:
responding to triggering operation of an operation engine corresponding to the application program, and integrating a data acquisition plug-in a plug-in package of the operation engine; the data acquisition plug-in comprises a hook function corresponding to each of the plurality of operation objects;
acquiring first performance test data triggered by the plurality of running objects when the application program of the target version runs, wherein the first performance test data comprises:
running the target version of the application program based on the running engine in response to a running operation performed on the application program;
and triggering a corresponding hook function to collect first performance test data of the corresponding operation object based on the operation of each operation object.
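One way to picture the hook-based collection of claim 4 is the following sketch (Python; the decorator form, the collected dictionary and the millisecond timing metric are assumptions, since in the embodiment the hooks live in a data acquisition plug-in integrated into the running engine):

```python
import time
from collections import defaultdict

collected = defaultdict(list)  # {operation_object_name: [elapsed_ms, ...]}

def hook(func):
    """Hook wrapping one operation object (here: a function) so that each
    call records its cost as first performance test data."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        collected[func.__qualname__].append((time.perf_counter() - start) * 1000)
        return result
    return wrapper

@hook
def load_scene():      # example operation object
    time.sleep(0.01)   # stands in for real application logic
```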
5. The method of claim 4, wherein the data acquisition plug-in further comprises a logical segment start function and a logical segment end function; acquiring first performance test data triggered by the plurality of running objects when the application program of the target version runs, wherein the first performance test data comprises:
in response to a trigger operation for performing operation phase segmentation on the application program, inserting the logic segment start function at a start position of each operation phase in the application program, and inserting the logic segment end function at an end position to divide the application program into a plurality of operation phases;
when the operation is carried out to the starting position of one operation stage in each operation stage, triggering and calling a corresponding logic section starting function to start collecting first performance test data of each operation object in the one operation stage;
when the operation is carried out to the ending position of the operation stage, triggering and calling a corresponding logic segment ending function to end the collection of the first performance test data of each operation object in the operation stage.
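The logical segment start and end functions of claim 5 might, purely as a sketch, be organised as below (Python; the names logic_segment_begin and logic_segment_end and the global phase state are assumptions for illustration):

```python
current_phase = None
phase_data = {}  # {phase_name: {operation_object: [samples]}}

def logic_segment_begin(phase_name):
    """Inserted at the start position of a run phase: open collection."""
    global current_phase
    current_phase = phase_name
    phase_data.setdefault(phase_name, {})

def logic_segment_end(phase_name):
    """Inserted at the end position of the same run phase: close collection."""
    global current_phase
    current_phase = None

# inside the application:
logic_segment_begin("loading")
# ... run-phase logic; hooks attribute their samples to phase_data["loading"] ...
logic_segment_end("loading")
```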
6. The method of claim 5, wherein determining operational variation data for a respective operational object based on the first performance test data and the N second performance test data for each of the plurality of operational objects, respectively, comprises:
for each operation stage, the following operations are respectively executed:
determining operation change data corresponding to each operation object in one operation stage based on the first performance test data and N second performance test data corresponding to each operation object in the one operation stage;
determining an abnormal operation object from the plurality of operation objects based on the obtained respective operation change data, including:
and determining abnormal operation objects in the operation stage based on the operation change data corresponding to each operation object in the operation stage.
7. The method of claim 5, wherein upon triggering the invocation of the corresponding logical segment ending function when running to the ending location of the one run phase, ending the collection of the first performance test data for each of the run objects in the one run phase, the method further comprises:
acquiring a scene identifier of the operation scene in which the first performance test data is triggered, a run serial number indicating in which run the first performance test data is triggered, and a phase identifier corresponding to the one operation phase;
generating a storage identifier corresponding to the one operation phase based on the target version, the scene identifier, the run serial number and the phase identifier;
and storing the first performance test data corresponding to the one operation phase into a database based on the generated storage identifier.
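A sketch of the storage identifier of claim 7 (the '|' separator and the key layout are assumptions; any encoding that combines the target version, scene identifier, run serial number and phase identifier would serve):

```python
def storage_key(version, scene_id, run_no, phase_id):
    """Compose the storage identifier used to index one run phase's data."""
    return f"{version}|{scene_id}|{run_no}|{phase_id}"

def store_phase_data(db, version, scene_id, run_no, phase_id, data):
    # db can be any key-value store; a plain dict is used here for illustration
    db[storage_key(version, scene_id, run_no, phase_id)] = data

# e.g. store_phase_data(db, "1.8.3", "battle_scene", 5, "loading", phase_samples)
```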
8. The method according to any one of claims 1 to 7, wherein the values of N and M are determined by:
for a plurality of set candidate test schemes, acquiring performance test data triggered by each running object under different versions when each candidate test scheme is adopted; wherein the number of versions or the number of runs corresponding to any two candidate test schemes is different, and the application programs of the different versions are obtained by performing pseudo-random modification on the application program of a specified version;
determining a target test scheme from the plurality of candidate test schemes based on the operation change condition and the real change condition corresponding to each operation object when each candidate test scheme is adopted;
and setting the value of N based on the number of versions corresponding to the target test scheme, and setting the value of M based on the corresponding number of runs.
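The selection of N and M in claim 8 can be read as a small search over candidate schemes; the sketch below scores each scheme by how closely its detected change matches the pseudo-randomly injected (real) change, with the scoring function being an assumption:

```python
def choose_scheme(candidates, detected_change, real_change):
    """candidates: list of (version_count, run_count) pairs
    detected_change[scheme][obj] and real_change[obj] hold change rates.
    Returns the scheme whose detected changes best match the real ones."""
    def error(scheme):
        diffs = [abs(detected_change[scheme][obj] - real_change[obj])
                 for obj in real_change]
        return sum(diffs) / len(diffs)

    best = min(candidates, key=error)
    n_versions, m_runs = best
    return n_versions, m_runs  # N is set from the version count, M from the run count
```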
9. The method of claim 8, wherein acquiring, for the set plurality of candidate test schemes, the performance test data triggered by each running object under different versions when each candidate test scheme is adopted comprises:
for the plurality of candidate test schemes, the following operations are respectively executed:
for one candidate test scheme, acquiring basic operation data triggered by each operation object under different versions when the candidate test scheme is adopted;
based on the number of runs corresponding to the candidate test scheme, respectively performing pseudo-random noise processing on the basic operation data triggered by each operation object under each version;
and acquiring the performance test data triggered by each operation object under each version based on the obtained plurality of pieces of basic operation data triggered by each operation object under each version.
10. The method of any one of claims 1-7, further comprising:
responding to a test result display operation for the target version, and presenting a test result display interface corresponding to the target version; the test result display interface comprises test results of the application program in different operation stages;
responding to a trigger operation on the test result of a target operation stage in the test result display interface, and displaying first performance test data corresponding to each operation object in the target operation stage;
responding to a trigger operation on the first performance test data of a target operation object among the operation objects, and displaying a data display interface of the target operation object; the data display interface comprises operation change data corresponding to the target operation object.
11. An apparatus for testing performance of an application program, the apparatus comprising:
the data collection unit is used for acquiring first performance test data triggered by a plurality of running objects of the application program when the application program of the target version runs, and the plurality of running objects comprise: at least one of a class and a function in the application; wherein, for each running object, the first performance test data is obtained by performing the following operations: acquiring first basic operation data triggered by each operation object in the Mth operation of the application program of the target version; respectively acquiring second basic operation data triggered by each operation object in the previous (M-1) operation of the application program of the target version from the stored basic operation data; combining the obtained first basic operation data and (M-1) second basic operation data to obtain first performance test data corresponding to each operation object;
the data collection unit is further used for obtaining second performance test data triggered by the plurality of operation objects when the application programs of the N historical versions run respectively; wherein N is more than or equal to 1; the N historical versions are historical versions before the target version, and the first performance test data and the second performance test data are acquired in the same acquisition mode;
the data analysis unit is used for respectively determining the operation change data of the corresponding operation objects based on the first performance test data and the N second performance test data corresponding to the operation objects;
the abnormal positioning unit is used for determining an abnormal operation object from the plurality of operation objects based on the obtained operation change data;
and the result generating unit is used for generating a performance test result corresponding to the application program of the target version based on the first performance test data of the abnormal operation object.
12. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that,
the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 10.
13. A computer storage medium having stored thereon computer program instructions, characterized in that,
which computer program instructions, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 10.
14. A computer program product comprising computer program instructions, characterized in that,
which computer program instructions, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 10.
CN202210079863.7A 2022-01-24 2022-01-24 Performance test method, device, equipment and storage medium of application program Active CN114490375B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210079863.7A CN114490375B (en) 2022-01-24 2022-01-24 Performance test method, device, equipment and storage medium of application program

Publications (2)

Publication Number Publication Date
CN114490375A CN114490375A (en) 2022-05-13
CN114490375B true CN114490375B (en) 2024-03-15

Family

ID=81473754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210079863.7A Active CN114490375B (en) 2022-01-24 2022-01-24 Performance test method, device, equipment and storage medium of application program

Country Status (1)

Country Link
CN (1) CN114490375B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115329155B (en) * 2022-10-11 2023-01-13 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium
CN116701236B (en) * 2023-08-08 2023-10-03 贵州通利数字科技有限公司 APP testing method, system and readable storage medium
CN117234935A (en) * 2023-09-28 2023-12-15 Test method and device based on Unreal Engine, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190294528A1 (en) * 2018-03-26 2019-09-26 Ca, Inc. Automated software deployment and testing

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106201856A (en) * 2015-05-04 2016-12-07 阿里巴巴集团控股有限公司 A kind of multi version performance test methods and device
CN106802856A (en) * 2015-11-26 2017-06-06 腾讯科技(深圳)有限公司 The performance test methods of game application, server and game application client
CN109726100A (en) * 2018-04-19 2019-05-07 平安普惠企业管理有限公司 Application performance test method, apparatus, equipment and computer readable storage medium
CN110362460A (en) * 2019-07-12 2019-10-22 腾讯科技(深圳)有限公司 A kind of application program capacity data processing method, device and storage medium
CN111045927A (en) * 2019-11-07 2020-04-21 平安科技(深圳)有限公司 Performance test evaluation method and device, computer equipment and readable storage medium
CN111611144A (en) * 2020-05-27 2020-09-01 中国工商银行股份有限公司 Method, apparatus, computing device, and medium for processing performance test data
CN115114141A (en) * 2021-03-18 2022-09-27 腾讯科技(深圳)有限公司 Method, device and equipment for testing performance of application program and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yuchen Tan et al. Performance Comparison of Data Classification based on Modern Convolutional Neural Network Architectures. 2020 39th Chinese Control Conference (CCC), 2020, pp. 815-818. *
谷林涛. Research and Implementation of a GUI-based Automated Performance Testing Method for Android. CNKI Outstanding Master's Dissertations Full-text Database, Information Science and Technology Series, 2019, (No. 01), p. I138-1683. *

Also Published As

Publication number Publication date
CN114490375A (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN114490375B (en) Performance test method, device, equipment and storage medium of application program
US20200012962A1 (en) Automated Machine Learning System
US9811527B1 (en) Methods and apparatus for database migration
CN105580032A (en) Method and system for reducing instability when upgrading software
US11809866B2 (en) Software change tracking and analysis
US20180046675A1 (en) Automatic adjustment of an execution plan for a query
CN107622008B (en) Traversal method and device for application page
US11036608B2 (en) Identifying differences in resource usage across different versions of a software application
US10942801B2 (en) Application performance management system with collective learning
CN109783457B (en) CGI interface management method, device, computer equipment and storage medium
CN109471874A (en) Data analysis method, device and storage medium
CN111897707B (en) Optimization method and device for business system, computer system and storage medium
US10848371B2 (en) User interface for an application performance management system
CN110309206B (en) Order information acquisition method and system
US20160294922A1 (en) Cloud models
US11847120B2 (en) Performance of SQL execution sequence in production database instance
CN101661428B (en) Method for evaluating a production rule for a memory management analysis
CN115185998A (en) Target field searching method and device, server and computer readable storage medium
CN115310011A (en) Page display method and system and readable storage medium
CN113868141A (en) Data testing method and device, electronic equipment and storage medium
US10817396B2 (en) Recognition of operational elements by fingerprint in an application performance management system
CN114281549A (en) Data processing method and device
CN113806205A (en) Software performance testing method and device, electronic equipment and readable storage medium
CN114331167B (en) Method, system, medium and equipment for managing champion challenger strategy
CN113987010B (en) Method and device for realizing insight of multi-dimensional data set

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant