CN114490375A - Method, device and equipment for testing performance of application program and storage medium - Google Patents

Method, device and equipment for testing performance of application program and storage medium

Info

Publication number
CN114490375A
CN114490375A (application CN202210079863.7A)
Authority
CN
China
Prior art keywords
data
performance test
running
test data
application program
Prior art date
Legal status
Granted
Application number
CN202210079863.7A
Other languages
Chinese (zh)
Other versions
CN114490375B (en)
Inventor
周原
王亚昌
张嘉明
邱宏健
陈洁昌
宋博文
宋天琪
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210079863.7A priority Critical patent/CN114490375B/en
Publication of CN114490375A publication Critical patent/CN114490375A/en
Application granted granted Critical
Publication of CN114490375B publication Critical patent/CN114490375B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/36 - Preventing errors by testing or debugging software
    • G06F 11/3604 - Software analysis for verifying properties of programs
    • G06F 11/3612 - Software analysis for verifying properties of programs by runtime analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/30 - Monitoring
    • G06F 11/3003 - Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F 11/302 - Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a software system
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/30 - Monitoring
    • G06F 11/3089 - Monitoring arrangements determined by the means or processing involved in sensing the monitored data, e.g. interfaces, connectors, sensors, probes, agents
    • G06F 11/3093 - Configuration details thereof, e.g. installation, enabling, spatial arrangement of the probes

Abstract

The application discloses a performance test method, apparatus, device and storage medium for an application program, relating to the technical field of program testing and used to improve the efficiency of locating anomalies in an application program. The method includes: acquiring first performance test data triggered by a plurality of running objects when a target version of the application program runs, and acquiring second performance test data respectively triggered by the running objects when N historical versions of the application program run, where N ≥ 1; determining running change data of each running object based on the first performance test data and the N pieces of second performance test data corresponding to that running object; determining an abnormal running object from the plurality of running objects based on the obtained running change data; and generating a performance test result corresponding to the target version of the application program based on the first performance test data of the abnormal running object.

Description

Method, device and equipment for testing performance of application program and storage medium
Technical Field
The application relates to the technical field of computers, in particular to the technical field of program testing, and provides a method, a device, equipment and a storage medium for testing the performance of an application program.
Background
During the development of an application program, evaluating the application to find and correct the errors (bugs) it contains is an indispensable part of the development process.
Taking a game application as an example, the Unreal Engine (UE) is a game engine commonly used for game application development. When the UE engine is used for game development, a game application usually contains a huge amount of code; for example, the code at the engine level, the business logic level, blueprints and so on typically exceeds 100,000 lines, so the workload of evaluating a game application is considerable.
Code review is a systematic examination of code, usually carried out as software peer review. In the early development stage of a game application, however, the bugs and performance problems in the code are huge in scale, so the code review approach is extremely inefficient and cannot be applied. Other types of applications besides game applications have the same problem.
Disclosure of Invention
The embodiment of the application provides a performance test method, a performance test device, performance test equipment and a storage medium of an application program, and is used for improving the efficiency of positioning the abnormality of the application program.
In one aspect, a method for testing performance of an application program is provided, where the method includes:
acquiring first performance test data triggered by a plurality of running objects when an application program of a target version runs, and acquiring second performance test data respectively triggered by the running objects when the application program of N historical versions runs; wherein N is more than or equal to 1;
respectively determining operation change data of corresponding operation objects based on first performance test data and N second performance test data corresponding to the operation objects;
determining an abnormal operation object from the plurality of operation objects based on the obtained operation change data;
and generating a performance test result corresponding to the application program of the target version based on the first performance test data of the abnormal operation object.
Optionally, in response to the execution operation performed on the application program, executing the target version of the application program based on the execution engine, including:
determining the number of application programs required to be operated in an operation scene based on the operation scene set by operation, simulating a plurality of using objects in the operation scene based on the determined number, and operating the corresponding number of application programs of the target version;
determining operation change data of the corresponding operation objects respectively based on the first performance test data and the N second performance test data corresponding to the multiple operation objects respectively, wherein the operation change data comprises:
for each operation scene, the following operations are respectively executed:
for one operation scene, determining operation change data corresponding to each operation object in the operation scene based on first performance test data and N second performance test data corresponding to each operation object in the operation scene;
and determining abnormal operation objects in the operation scene based on the operation change data corresponding to each operation object in the operation scene.
In one aspect, a performance testing apparatus for an application is provided, the apparatus including:
the data collection unit is used for acquiring first performance test data triggered by a plurality of running objects when the application program of a target version runs and acquiring second performance test data respectively triggered by the running objects when the application program of N historical versions runs; wherein N is more than or equal to 1;
the data analysis unit is used for respectively determining operation change data of corresponding operation objects on the basis of first performance test data and N second performance test data which correspond to the operation objects;
the abnormal positioning unit is used for determining an abnormal operation object from the plurality of operation objects based on the obtained operation change data;
and the result generating unit is used for generating a performance test result corresponding to the application program of the target version based on the first performance test data of the abnormal operation object.
Optionally,
the data analysis unit is specifically configured to, for the multiple running objects, respectively perform the following operations: aiming at one running object, performing linear fitting processing based on first performance test data and N second performance test data corresponding to the running object to obtain the slope of a straight line obtained through fitting, wherein the slope is used for representing the change rate of the running object;
the result generating unit is specifically configured to determine, for the one running object, that the one running object is an abnormal running object if the slope is greater than a set slope threshold.
Optionally, the data collection unit is specifically configured to:
for the multiple running objects, respectively executing the following operations:
aiming at one running object, acquiring first basic running data triggered by the running object when the running object runs in the Mth time of the application program of the target version;
respectively acquiring second basic operation data triggered by the operation object in the previous (M-1) operation of the target version of the application program from the stored basic operation data;
and merging the obtained first basic operation data and the (M-1) second basic operation data to obtain first performance test data corresponding to the operation object.
Optionally, the data collection unit is specifically configured to:
and carrying out averaging processing on the first basic operation data and the (M-1) second basic operation data to obtain first performance test data corresponding to the operation object.
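As a minimal illustration of this merging step (the metric fields below are assumptions; the disclosure only states that the data of the M runs is averaged), the averaging of the current run with the (M-1) stored runs could look like the following sketch:

```cpp
#include <vector>

// Hypothetical per-run metrics for one running object (e.g. created-object
// count, time consumed); the field names are illustrative only.
struct BasicRunData {
    double ObjectCount = 0.0;
    double TimeConsumedMs = 0.0;
};

// Merge the M-th run of the target version with the stored (M-1) earlier
// runs by averaging, yielding the "first performance test data".
BasicRunData MergeRuns(const BasicRunData& CurrentRun,
                       const std::vector<BasicRunData>& PreviousRuns) {
    BasicRunData Merged = CurrentRun;
    for (const BasicRunData& Run : PreviousRuns) {
        Merged.ObjectCount += Run.ObjectCount;
        Merged.TimeConsumedMs += Run.TimeConsumedMs;
    }
    const double M = static_cast<double>(PreviousRuns.size()) + 1.0;
    Merged.ObjectCount /= M;
    Merged.TimeConsumedMs /= M;
    return Merged;
}
```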
Optionally, the apparatus further includes an access operation unit;
the access operation unit is used for responding to triggering operation performed on an operation engine corresponding to the application program and integrating a data acquisition plug-in a plug-in package of the operation engine; the data acquisition plug-in comprises hook functions corresponding to the multiple running objects respectively;
the data collection unit is specifically configured to run the target version of the application program based on the running engine in response to a running operation performed on the application program; and triggering the corresponding hook function to acquire first performance test data of the corresponding operation object based on the operation of each operation object.
Optionally, the data acquisition plug-in further includes a logic segment start function and a logic segment end function;
the access operation unit is further configured to insert the logical segment start function at a start position of each operation phase in the application program and insert the logical segment end function at an end position in response to a trigger operation for performing operation phase segmentation on the application program, so as to divide the application program into a plurality of operation phases;
the data collection unit is specifically configured to trigger and call a corresponding logic segment start function when the data collection unit runs to a start position of one of the operation stages, and start to collect first performance test data of each operation object in the one operation stage; and when the operation is carried out to the end position of the operation stage, triggering and calling a corresponding logic section end function, and ending the collection of the first performance test data of each operation object in the operation stage.
Optionally, the data analysis unit is specifically configured to:
for each operation stage, the following operations are respectively executed:
for one operation stage, determining operation change data corresponding to each operation object in the one operation stage based on first performance test data and N second performance test data corresponding to each operation object in the one operation stage;
the anomaly locating unit is specifically configured to:
and determining an abnormal operation object in the one operation stage based on the operation change data corresponding to each operation object in the one operation stage.
Optionally,
the data collection unit is specifically configured to: determining the number of application programs required to be operated in an operation scene based on the operation scene set by operation, simulating a plurality of using objects in the operation scene based on the determined number, and operating the corresponding number of application programs of the target version;
the data analysis unit is specifically configured to: for each operation scene, the following operations are respectively executed: for one operation scene, determining operation change data corresponding to each operation object in the one operation scene based on first performance test data and N second performance test data corresponding to each operation object in the one operation scene;
the anomaly locating unit is specifically configured to: and determining abnormal operation objects in the operation scene based on the operation change data corresponding to each operation object in the operation scene.
Optionally, the apparatus further includes a data warehousing unit, configured to:
acquiring an operation mode adopted when the first performance test data is triggered, an operation time serial number when the first performance test data is triggered and a stage identifier corresponding to one operation stage;
generating a storage identifier corresponding to the operation stage based on the target version, the operation mode, the operation time serial number and the stage identifier;
and storing the first performance test data corresponding to the one operation stage into a database based on the generated storage identifier.
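A minimal sketch of how such a storage identifier might be assembled; the separator and field order are assumptions, as the disclosure only states that the version, run mode, run serial number and stage identifier are combined:

```cpp
#include <string>

// Build a storage key for one running stage from the target version, the run
// mode (test scenario), the run serial number and the stage identifier.
// The "version|mode|runNo|stage" layout is purely illustrative.
std::string MakeStorageId(const std::string& Version,
                          const std::string& RunMode,
                          int RunSerialNumber,
                          const std::string& StageId) {
    return Version + "|" + RunMode + "|" +
           std::to_string(RunSerialNumber) + "|" + StageId;
}
```

The first performance test data of the stage would then be written to the database under this key, so that the corresponding data of historical versions and earlier runs can later be retrieved for comparison.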
Optionally, the apparatus further includes a parameter determining unit, configured to determine the values of N and M by:
aiming at multiple set candidate test schemes, acquiring performance test data triggered by each running object under different versions when each candidate test scheme is adopted; the number of versions or the running times corresponding to any two candidate test schemes are different, and the application programs of different versions are obtained by performing pseudo-random modification on the application program of the specified version;
determining a target test scheme from the multiple candidate test schemes based on the operation change condition and the real change condition corresponding to each operation object when each candidate test scheme is adopted;
and setting the value of N based on the number of versions corresponding to the target test scheme, and setting the value of M based on the corresponding running times.
Optionally, the parameter determining unit is specifically configured to:
aiming at the multiple candidate test schemes, the following operations are respectively executed:
aiming at a candidate test scheme, acquiring basic operation data triggered by each operation object under different versions when the candidate test scheme is adopted;
respectively carrying out pseudo-random noise processing on corresponding operation times on basic operation data triggered by each operation object under each version based on the operation times corresponding to the candidate test scheme;
and acquiring performance test data triggered by each running object under each version based on the acquired multiple basic running data triggered by each running object under each version.
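The pseudo-random noise processing above could be simulated along the following lines when evaluating a candidate scheme; the noise model and its amplitude are assumptions made only for illustration:

```cpp
#include <random>
#include <vector>

// Given the base run data of one running object under one version, generate
// "RunCount" noisy copies to simulate repeated runs of that version. A
// relative Gaussian perturbation with a fixed seed (pseudo-random, hence
// reproducible) is assumed here purely for illustration.
std::vector<double> SimulateNoisyRuns(double BaseValue, int RunCount,
                                      unsigned int Seed,
                                      double RelativeNoise = 0.05) {
    std::mt19937 Rng(Seed);
    std::normal_distribution<double> Noise(0.0, RelativeNoise);
    std::vector<double> Runs;
    Runs.reserve(RunCount);
    for (int i = 0; i < RunCount; ++i) {
        Runs.push_back(BaseValue * (1.0 + Noise(Rng)));
    }
    return Runs;
}
```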
Optionally, the apparatus further includes a data presentation unit, configured to:
responding to the test result display operation aiming at the target version, and presenting a test result display interface corresponding to the target version; the test result display interface comprises test results of the application program in different operation stages;
responding to a test result of a target operation stage in the test result display interface to perform display triggering operation, and displaying first performance test data corresponding to each operation object in the target operation stage;
responding to a trigger operation for displaying first performance test data of a target operation object in each operation object, and displaying a data display interface of the target operation object; and the data display interface comprises operation change data corresponding to the target operation object.
In one aspect, a computer device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of any of the above methods when executing the computer program.
In one aspect, a computer storage medium is provided having computer program instructions stored thereon that, when executed by a processor, implement the steps of any of the methods described above.
In one aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions are read by a processor of a computer device from a computer-readable storage medium, and the computer instructions are executed by the processor to cause the computer device to perform the steps of any of the methods described above.
In the embodiments of the present application, first performance test data triggered by a plurality of running objects when the target version of the application program runs and second performance test data respectively triggered by those running objects when N historical versions of the application program run are acquired, and abnormal running objects are located through the running change data of each running object across the different versions. In this way, the application program is monitored at the fine granularity of the running-object level and over a long, multi-version period, so that changes in running objects caused by changes in the logic code of different versions can be located. Abnormal running objects can be located quickly through the running change data, which greatly improves the efficiency of anomaly location, helps developers effectively find the bugs that arise during application development and correct them in time, and thus improves the development efficiency of the application program.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or related technologies, the drawings needed to be used in the description of the embodiments or related technologies are briefly introduced below, it is obvious that the drawings in the following description are only the embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating an iterative testing of a game application according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a performance testing system for an application according to an embodiment of the present disclosure;
fig. 4 is a schematic flow chart illustrating data acquisition performed by the data acquisition module according to the embodiment of the present application;
fig. 5a and fig. 5b are schematic diagrams of an implementation scenario of a data collection process provided in an embodiment of the present application;
fig. 6 is a schematic flowchart of a performance testing method for an application according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a line after linear fitting processing provided by an embodiment of the present application;
FIG. 8 is a schematic flow chart illustrating a process for obtaining first performance test data according to an embodiment of the present disclosure;
FIG. 9 is a graph illustrating the results of a data noise filtering experiment provided by an embodiment of the present application;
FIG. 10 is a graph illustrating the results of an anomaly display experiment provided in an embodiment of the present application;
FIG. 11 is a schematic flow chart for determining values of M and N based on experimental results provided in an embodiment of the present application;
fig. 12 is a schematic flowchart of a performance testing method based on logic segment partitioning according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a logic segment splitting function provided in an embodiment of the present application;
FIG. 14 is a schematic operational flow chart of data presentation provided by an embodiment of the present application;
Figs. 15a to 15d are schematic diagrams of data presentation interfaces provided by embodiments of the present application;
Figs. 16a to 16h are comparative graphs of performance provided by embodiments of the present application;
fig. 17 is a schematic structural diagram of a performance testing apparatus for an application according to an embodiment of the present application;
fig. 18 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. In the present application, the embodiments and features of the embodiments may be arbitrarily combined with each other without conflict. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
For the convenience of understanding the technical solutions provided by the embodiments of the present application, some key terms used in the embodiments of the present application are explained first:
the application program comprises the following steps: the application program of the embodiment of the present application may be any application program, and may be a game application program, for example
(1) Running object: in the embodiments of the present application, each running object is a fine-grained monitoring dimension. Compared with the coarse-grained monitoring of the overall Central Processing Unit (CPU) consumption and memory consumption of an entire application in the related art, fine-grained monitoring provides monitoring dimensions at the level of program running details such as classes and functions, and an object at any such level may be a running object.
For example, for a class level, one class may be a running object, which is used as a monitoring dimension, and in general, there may be tens of thousands of classes in one application, that is, there may be tens of thousands of running objects, and these tens of thousands of running objects constitute tens of thousands of monitoring dimensions for performing a performance test on the application; or, regarding to the function level, one function may be a running object, which is also used as a monitoring dimension, and in general, hundreds of thousands of functions may exist in one application program, that is, hundreds of thousands of running objects may exist, and the hundreds of thousands of running objects constitute hundreds of thousands of monitoring dimensions for performing performance testing on the application program. Thus, the performance monitoring of the application is performed by the running object, and the granularity is extremely fine.
In the embodiment of the present application, the targeted operation object may be a combination of any program operation details, for example, a class in an application program, a function in the application program, or a combination of the class and the function. Of course, other program operation details may also be included, such as contents of object creation, time consumption of functions, network performance, and the like, and the operation object may be set according to actual requirements in actual application, which is not limited in this embodiment of the present application.
(2) Performance test data: for each operation object, the performance test data may be data for characterizing the performance of the operation object, and may be, for example, the number of calls, the number of creation times, the operation duration, and the resource occupation condition of one operation object. For example, for memory objects, the performance test data may include the number of object types and the number of objects created. It should be noted that, most application programs may adopt a Client-server (CS) architecture, and thus, the performance test data collected in the embodiment of the present application may be data collected from a Client, data collected from a backend server, or a combination of data collected from the Client and the backend server, which is not limited in the embodiment of the present application.
(3) Running change data: the running change data is used to characterize the performance change of the same running object between different versions. It may be, for example, difference data between versions, such as the difference in the number of created objects between the previous version and the current version, or a performance change rate obtained by linear fitting of the performance data of the same running object across multiple versions; it may also be represented in other manners, which is not limited here.
(4) Running stage: an application program may comprise a plurality of running stages, each running stage being a logical segment that can be used to perform a function or present an effect. For example, a game application may be divided into stages according to the game process, such as an initialization stage, a single-match creation stage, a player loading stage, and an in-game stage; an instant messaging application may be divided into stages according to page scenes, such as a loading stage, a chat page stage, and a friend-moments sharing page stage. Of course, in practical applications, custom segmentation can be performed according to the actual running logic of the application program. Performing the performance test per segment allows data in the same running stage to be strictly compared, which provides more accurate data comparison.
The following briefly introduces the design concept of the embodiments of the present application.
At present, testing and evaluating an application program is an essential link in the development process, but the code review approach is extremely inefficient and obviously cannot be applied, so a runtime performance monitoring tool can be considered to assist in discovering bugs in the program code.
Taking a game application as an example, performance monitoring during development may also be performed with existing performance monitoring tools; for example, the UE engine provides LLM (Low-Level Memory Tracker, an official UE memory statistics tool), stat (a data statistics system provided by the UE engine that can collect and display performance data) and Insights (an official UE performance analysis tool). Such tools generally perform data statistics and monitoring only on the current version. Since a game application involves a very large number of monitored objects, only obvious anomalies can be located, and only by relying on the developer's experience, so it is very difficult to find an anomaly from a single version. Moreover, performance monitoring of a single version has no backtracking capability: the previous running behaviour of each monitoring dimension cannot be known, and if historical records are required the data has to be stored and maintained by the developers themselves, which is very difficult and does not facilitate sharing of monitoring data.
No matter which performance monitoring tool is considered, the core objective is to locate bugs accurately and to simplify the bug-locating process. Therefore, to improve the accuracy of locating bugs, the monitoring dimension needs to be reduced to a fine-grained level of the application program, such as the class level and the function level. In addition, a fine-grained monitoring scenario necessarily involves a large number of monitoring dimensions, and bugs are difficult to locate accurately without a reference for comparison. This shortcoming can be overcome through long-period performance comparison: after a performance baseline is obtained, the tool can clearly find the bugs appearing in the current version by comparing against the baseline.
In view of this, an embodiment of the present application provides a performance testing method for an application program, in which first performance test data triggered by a plurality of running objects when the target version of the application program runs and second performance test data respectively triggered by those running objects when N historical versions of the application program run are acquired, and abnormal running objects are located through the running change data of each running object across the different versions. In this way, the application program is monitored at the fine granularity of the running-object level and over a long, multi-version period, so that changes in running objects caused by changes in the logic code of different versions can be located. Abnormal running objects can be located quickly through the running change data, which greatly improves the efficiency of anomaly location, helps developers effectively find the bugs that arise during application development and correct them in time, and thus improves the development efficiency of the application program.
In the embodiment of the application, in order to find the monitoring dimension with the abnormality in the mass monitoring dimensions, the difference data of the same running object in the two versions can be quickly compared in a double-version comparison mode, so that the abnormal running object can be quickly positioned.
In addition, every time an application program runs it is affected by system noise, such as the order of system scheduling, random factors in the program and the overall load of the system, all of which influence the performance test data of each run, and the resulting fluctuation from data noise affects the comparison. To reduce the influence of noise, the embodiments of the present application adopt a multi-version linear fitting scheme to filter out data noise, which improves the accuracy of the abnormal monitoring dimensions located among the massive monitoring dimensions. At the same time, the linear fitting scheme also makes it convenient to filter the abnormal monitoring dimensions, so that the abnormal dimensions can be narrowed down to a very small range, greatly improving the efficiency of locating them.
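For a concrete sense of the dual-version comparison just described (the linear-fitting variant is detailed later), the following is a minimal, non-authoritative sketch; the container types, field names and threshold are assumptions made for illustration and are not taken from the disclosed implementation:

```cpp
#include <map>
#include <string>
#include <vector>

// One snapshot maps a running object's name to its measured value in one
// version (e.g. number of created objects). Names are illustrative only.
using VersionSnapshot = std::map<std::string, double>;

// Dual-version comparison: report running objects whose value grew by more
// than "Threshold" from the previous version to the current version.
std::vector<std::string> FindSuspects(const VersionSnapshot& Previous,
                                      const VersionSnapshot& Current,
                                      double Threshold) {
    std::vector<std::string> Suspects;
    for (const auto& [Name, Value] : Current) {
        const auto It = Previous.find(Name);
        // Objects that newly appear in the current version are also reported.
        const double Diff = (It == Previous.end()) ? Value : Value - It->second;
        if (Diff > Threshold) {
            Suspects.push_back(Name);
        }
    }
    return Suspects;
}
```

Objects that exist only in the newer version are treated as newly added and reported as well, matching the version-difference analysis described for the iterative test flow below.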
Some brief descriptions are given below to application scenarios to which the technical solution of the embodiment of the present application can be applied, and it should be noted that the application scenarios described below are only used for describing the embodiment of the present application and are not limited. In a specific implementation process, the technical scheme provided by the embodiment of the application can be flexibly applied according to actual needs.
The scheme provided by the embodiment of the application can be applied to performance test scenes of most application programs, such as performance test scenes of game application programs. As shown in fig. 1, an application scenario schematic diagram provided in the embodiment of the present application may include a test-end device 101 and a server 102.
The test-end device 101 may be, for example, a mobile phone, a tablet computer (PAD), a laptop, a desktop, a smart television, a smart car device, a smart wearable device, and the like, which can run an application under test. The test-end device 101 may be installed with a tested application, and the test-end device 101 has an operating environment required for the running of the tested application, for example, for a game application developed by a UE engine, a UE engine environment needs to be deployed for the game application, and the UE engine is integrated with the data acquisition plug-in of the embodiment of the present application, and is configured to acquire performance test data triggered in the running process of the game application.
The server 102 may be a background server corresponding to the data acquisition plug-in, and may implement functions such as storage and analysis of performance test data. For example, the cloud server may be an independent physical server, a server cluster or a distributed system including a plurality of physical servers, or a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), and a big data and artificial intelligence platform, but is not limited thereto.
In the embodiment of the present application, the test-side device 101 and the server 102 may each include one or more processors, memories, I/O interfaces for interacting with the terminal, and the like. The memory of the test-end device 101 may store program instructions related to data acquisition in the performance testing method of the application program provided in the embodiment of the present application, and when the program instructions are executed by the processor of the test-end device 101, the program instructions can be used to implement a process of acquiring performance testing data when the application program runs. The memory of the server 102 may store program instructions related to data storage and analysis in the performance testing method of the application program provided by the embodiment of the present application, and the program instructions, when executed by the processor of the server 102, can be used to implement a process of storing and analyzing collected performance testing data. In addition, the server 102 may be configured with a database, which may be used to store performance test data and performance test results, etc.
Taking a game application test scenario as an example, when the test-end device 101 runs the game application under test, the data acquisition plug-in is invoked to collect performance data of each running object in the game application, such as the types and numbers of memory objects. After being preprocessed, the performance test data collected by the test-end device 101 can be uploaded to the server 102, which stores it. In addition, the server 102 may perform a horizontal comparison based on the performance test data of multiple versions, so as to locate abnormal running objects among the running objects based on the running change data of each running object across those versions. Meanwhile, the located abnormal objects can be pushed and displayed to developers so that they can modify and optimize the code; the modified and optimized game application then enters a new round of iterative testing, that is, the above process is repeated and the performance of the new game application is tested again.
Specifically, referring to fig. 2, a schematic diagram of the iterative testing of a game application is shown. Iterative testing refers to a continuously repeated program development cycle, where each cycle builds on the previous one. As shown in fig. 2, when the version-1 game application runs, performance data can be collected for the running objects in the game application, for example the memory object data shown in fig. 2; similarly, performance data can be collected when the version-2 game application runs. The performance data of version 1 and version 2 can then be compared to obtain version difference data for the same object, and abnormal running objects can be located based on that difference data. In practical applications, new running objects may also appear in later versions, so the version comparison can also yield the data of objects newly added in the new version. Optimization is then driven by the differences in the version data, that is, developers can produce a targeted optimization scheme based on those differences. Furthermore, as the number of versions gradually accumulates, linear fitting of the performance data can be performed across multiple versions, for example using 7 versions, so as to filter out the influence that system noise has on a two-version comparison.
In an embodiment, the processes executed by the test-side device 101 and the server 102 respectively may also be implemented by being integrated into the same device, that is, the test-side device 101 and the server 102 may be implemented by the same device, and the test-side device 101 and the server 102 may be different functional modules of the device, so as to implement the corresponding functions.
The test end device 101 and the server 102 may be in direct or indirect communication connection via one or more networks 103. The network 103 may be a wired network or a Wireless network, for example, the Wireless network may be a mobile cellular network, or may be a Wireless-Fidelity (WIFI) network, or may also be other possible networks, which is not limited in this embodiment of the present invention.
It should be noted that, in the embodiment of the present application, the number of the test-end device 101 may be one, or may be multiple, and similarly, the number of the server 102 may also be one, or may be multiple, that is, the number of the test-end device 101 or the server 102 is not limited.
Fig. 3 is a schematic diagram of the architecture of a performance testing system for an application program according to an embodiment of the present application; the architecture includes a data collection module, a data service module, and a data presentation module.
(1) Data acquisition module
The data acquisition module can be deployed in the test-end device; performance test data is collected while the application runs by integrating the data acquisition plug-in provided in the embodiments of the present application into the test-end device. The data collection module may include a plug-in package (patch) integrated into the UE engine and a data collection plug-in, such as the IterationTrace plug-in of the embodiments of the present application.
Taking a game application developed with the UE engine as an example, before performance monitoring can be carried out, an access operation is required: the patch and the Software Development Kit (SDK) of the IterationTrace plug-in are integrated into the UE program and the UE engine. The IterationTrace plug-in provides a user-defined logic segment division function, so the game application can be divided into a plurality of running stages and data is collected for each running stage.
The data acquisition module adopts a bypass data acquisition mode, has small influence on the running performance of the application program, and can be accessed by both a client and a Dedicated Server (DS) system.
(2) Data service module
The data service module can be deployed as micro-services. It provides the function of storing the received performance test data of a given version into a database (DB), and it can filter out insignificant data from the received performance test data and perform data analysis to locate abnormal running objects.
(3) Data display module
The data display module is used for providing a display function of the performance test result, so that a developer can acquire abnormal information through a display page of the performance test result, and then modify and optimize the code of the application program, and the modified and optimized code is used as a new version to enter a new iteration test flow until the performance of the application program reaches an expectation.
In short, the DB can be regarded as an electronic file cabinet, where electronic files are stored, and a user can perform operations such as adding, querying, updating, and deleting on data in the files. A "database" is a collection of data that is stored together in a manner that can be shared by multiple users, has as little redundancy as possible, and is independent of the application.
A Database Management System (DBMS) is computer software designed to manage a database, and generally has basic functions such as storage, retrieval, security assurance, and backup. A database management system may be categorized according to the database model it supports, such as relational or Extensible Markup Language (XML); according to the type of computer it supports, e.g., a server cluster or a mobile phone; according to the query language used, such as Structured Query Language (SQL) or XQuery; according to its performance emphasis, e.g., maximum size or maximum operating speed; or by other classification schemes. Regardless of the classification used, some DBMSs can span categories, for example supporting multiple query languages simultaneously.
In a possible application scenario, relevant data related in the embodiment of the present application, such as performance test data and performance test results, may be stored by using a cloud storage (cloud storage) technology. The distributed cloud storage system refers to a storage system which integrates a large number of storage devices (or called storage nodes) of different types in a network through application software or application interfaces to cooperatively work through functions of cluster application, grid technology, distributed storage file systems and the like, and provides data storage and service access functions to the outside.
At present, the storage method of such a storage system is as follows: logical volumes are created, and when a logical volume is created it is allocated physical storage space, which may be composed of the disks of one or several storage devices. A client stores data on a logical volume, that is, the data is stored on a file system. The file system divides the data into a plurality of parts, each part being an object that contains not only the data itself but also additional information such as a data identification (ID). The file system writes each object into the physical storage space of the logical volume and records the storage location of each object, so that when the client requests access to the data, the file system can let the client access it according to the recorded storage location information of each object.
The process of allocating physical storage space for the logical volume by the storage system specifically includes: physical storage space is divided in advance into stripes according to a group of capacity measures of objects stored in a logical volume (the measures often have a large margin with respect to the capacity of the actual objects to be stored) and Redundant Array of Independent Disks (RAID), and one logical volume can be understood as one stripe, thereby allocating physical storage space to the logical volume.
In a possible application scenario, in order to reduce communication delay of retrieval, the servers 102 may be deployed in various regions, or for load balancing, different servers 102 may respectively serve the testing-side devices 101 in different regions, for example, the testing-side device 101 is located at a site a, a communication connection is established with the server 102 of the service site a, the testing-side device 101 is located at a site b, a communication connection is established with the server 102 of the service site b, and a plurality of servers 102 form a data sharing system, and share data through a block chain.
Each server 102 in the data sharing system has a node identifier corresponding to the server 102, and each server 102 in the data sharing system may store node identifiers of other servers 102 in the data sharing system, so that the generated block is broadcast to other servers 102 in the data sharing system according to the node identifiers of other servers 102. Each server 102 may maintain a node identifier list, and store the server 102 name and the node identifier in the node identifier list. The node identifier may be an Internet Protocol (IP) address and any other information that can be used to identify the node.
Of course, the method provided in the embodiment of the present application is not limited to be used in the application scenario shown in fig. 1 or the architecture of fig. 3, and may also be used in other possible application scenarios or system architectures, which is not limited in the embodiment of the present application. The functions that can be realized by each device shown in fig. 1 or each module shown in fig. 3 will be described together in the following method embodiment.
The method flows provided in the embodiments of the present application may be executed by the server 102 or the test end device 101 in fig. 1, or may be executed by both the server 102 and the test end device 101. In the following description, the application is mainly described as a game application, but other types of applications are also applicable.
In the embodiment of the present application, when performing performance analysis on an application program of a target version, first performance test data of the target version and second performance test data of N historical versions before the target version need to be collected as data bases, so data collection is described first here. It should be noted that although the first performance test data and the second performance test data in the embodiment of the present application are named differently, this is only used to distinguish the target version from the historical version, and does not mean that there is a substantial difference between the included data itself, and the collection processes of the first performance test data and the second performance test data are similar, so the collection of the first performance test data is specifically described herein.
Referring to fig. 4, a schematic flow chart of data acquisition performed by the data acquisition module is shown.
Step 401: in response to a triggering operation performed on the running engine corresponding to the application program, integrate the data acquisition plug-in into the plug-in package of the running engine.
In the embodiments of the present application, data acquisition can be realized through the data acquisition module. Before data can be collected, an access operation is needed; this operation can be one-off, and after the first access, subsequent versions can continue to use the accessed data acquisition plug-in. During access, in response to the triggering operation, the data acquisition plug-in can be integrated into the plug-in package of the running engine, and data can subsequently be collected with that plug-in. Taking the UE engine mentioned above as an example, this means integrating the patch and the IterationTrace plug-in into the UE program and the engine.
In the embodiment of the present application, the data acquisition plug-in may include one or more of the following combinations:
(1) hook function
Hook functions can be set for each running object needing data acquisition in the application program, and when the program runs, the hook functions can be automatically triggered to collect basic data.
(2) Logical segment partitioning function
The logic segment dividing function may specifically include a logic segment start function and a logic segment end function, so that the required logic segment dividing function may be called in the service logic layer to perform logic segment division of the application program, each logic segment corresponds to one operation stage, and then the application program may be divided into multiple operation stages through the call of the logic segment dividing function, so as to perform data collection and analysis on each operation stage respectively.
(3) Custom data collection function
In practical applications, there may be a case where data is individually reported at an appropriate location, and therefore, in order to implement individual reporting of data, a custom data collection function is further provided to support an Application Programming Interface (API) that calls the custom data collection function at an appropriate location, and report required data to the data service module.
Of course, the data acquisition plug-in may also include any other function according to actual needs, which is not limited in the embodiments of the present application; an illustrative sketch of the three kinds of interfaces above is given below.
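Purely for illustration, the hook, logic-segment and custom-reporting interfaces described above could take a shape like the following sketch; none of these identifiers come from the actual IterationTrace plug-in, and the reporting transport is omitted:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Illustrative stand-in for the data acquisition plug-in interfaces described
// above. Real plug-in names, signatures and the reporting transport are not
// disclosed in this document and are assumed.
namespace IterTraceSketch {

// Counters collected for the current logic segment (running stage).
static std::map<std::string, std::uint64_t> GSegmentCounters;
static std::string GCurrentSegment;

// (1) Hook function: deployed at a core API so that each call involving a
// monitored running object (a class or function) bumps its counter.
void OnObjectHook(const std::string& RunningObjectName) {
    ++GSegmentCounters[RunningObjectName];
}

// (2) Logic segment division: mark the start of a running stage ...
void BeginLogicSegment(const std::string& StageId) {
    GCurrentSegment = StageId;
    GSegmentCounters.clear();
}

// ... and its end, at which point the collected data would be reported to the
// data service module (reporting/upload is omitted in this sketch).
void EndLogicSegment() {
    // ReportToServer(GCurrentSegment, GSegmentCounters);  // assumed helper
    GCurrentSegment.clear();
}

// (3) Custom data collection: report a single named value from business code.
void ReportCustomValue(const std::string& Key, std::uint64_t Value) {
    GSegmentCounters[Key] += Value;
}

}  // namespace IterTraceSketch
```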
Step 402: in response to the execution operation, the target version of the application is executed based on the execution engine described above.
In the embodiment of the application, after the access work of the data acquisition plug-in is completed, data acquisition can be performed based on the data acquisition plug-in when an application program runs.
Specifically, when the performance test data of the application program needs to be acquired, the running operation may be performed on the application program of the target version, and then the application program of the target version may be run based on the running engine in response to the running operation, and then, when the application program runs, the corresponding hook function may be triggered to acquire the first performance test data of the corresponding running object based on the running of each running object.
In one possible implementation, the target version of the application program may be run after the target version of the application program is compiled based on the program code. When an application program is started, command line parameters need to be configured to start a data collection plug-in to collect data, so that data reporting and subsequent data storage are realized. Wherein the command line parameters may include one or a combination of the following parameters:
(1) the reporting IP (reportip) address, i.e. the IP address of the server, is used for addressing of subsequent data reporting.
(2) A report port (ReportPort), i.e. a data port of a server for data reporting.
(3) The version of the application (ReportVersion), i.e. the version number of the target version described above.
(4) The test mode (ReportTestType), or running scenario, of an application.
(5) The run serial number of the application program, used to indicate which run of the current target version this is; if it is not filled in, the test-end device or the server can by default increment the current run serial number by one.
The parameters (1) and (2) are used for specifying a service address for reporting data, and the parameters (3) and (4) and (5) are used for the subsequent server to perform data management on the reported data. In practical application, if the command line parameters are not filled in, the function of the data acquisition plug-in can be closed by default, and the release version performance of the application program is not affected.
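As an illustration of how such start-up parameters might be consumed (the parsing style and the flag spellings below, in particular the run serial number flag, are assumptions rather than the plug-in's documented interface):

```cpp
#include <cstddef>
#include <cstdlib>
#include <cstring>
#include <string>

// Start-up parameters named above (ReportIP, ReportPort, ReportVersion,
// ReportTestType, run serial number). "-ReportRunIndex" and the defaults
// are hypothetical.
struct ReportConfig {
    std::string ReportIP;
    int ReportPort = 0;
    std::string ReportVersion;
    std::string ReportTestType;
    int RunSerialNumber = -1;  // -1: let the server auto-increment the run number
    bool Enabled() const { return !ReportIP.empty() && ReportPort > 0; }
};

// Parse "-Key=Value" style arguments; if the parameters are absent, the
// configuration stays disabled, matching the default behaviour described above.
ReportConfig ParseReportArgs(int argc, char** argv) {
    ReportConfig Cfg;
    auto Match = [](const char* Arg, const char* Key) -> const char* {
        const std::size_t Len = std::strlen(Key);
        return std::strncmp(Arg, Key, Len) == 0 ? Arg + Len : nullptr;
    };
    for (int i = 1; i < argc; ++i) {
        if (const char* V = Match(argv[i], "-ReportIP=")) Cfg.ReportIP = V;
        else if (const char* V = Match(argv[i], "-ReportPort=")) Cfg.ReportPort = std::atoi(V);
        else if (const char* V = Match(argv[i], "-ReportVersion=")) Cfg.ReportVersion = V;
        else if (const char* V = Match(argv[i], "-ReportTestType=")) Cfg.ReportTestType = V;
        else if (const char* V = Match(argv[i], "-ReportRunIndex=")) Cfg.RunSerialNumber = std::atoi(V);
    }
    return Cfg;
}
```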
Step 403: and triggering the corresponding hook function to acquire first performance test data of the corresponding operation object based on the operation of each operation object.
Specifically, after the application program runs, data can be collected for each running object of the application program. Taking a game application as an example, after the IterationTrace plug-in has been integrated, the engine is functionally enhanced and hook functions are deployed at the core APIs of the application program, so the first performance test data can be collected automatically while the program runs; for example, data such as object allocation data, function running time consumption data and network transceiving data can be collected and counted.
In addition, when the application program runs, the user-defined data reporting can be carried out, namely, the data needing to be reported can be added at a proper position according to the requirement.
In the embodiment of the present application, the flow of data collection may be implemented in the following several scenarios.
(1) Actively conducting performance testing
Referring to fig. 5a, a schematic diagram of a pipeline for actively performing a performance test is shown. A pipeline can be designed according to test stages (Stage); for example, fig. 5a includes Stage-3 to Stage-6, and for each test stage the process to be executed in that stage can be designed. For example, Stage-3 performs preprocessing, which specifically includes three steps: cleaning up the legacy DS, cleaning up the legacy client, and pulling up the data collection program; the other stages follow in the same way and are not described one by one here.
The performance test is performed proactively, which, as the name implies, requires manual triggering of the test process, and can then collect performance test data for the application after triggering.
(2) Integration into an automated build flow
Referring to fig. 5b, the performance testing process may be integrated into an automated construction process, and then the performance testing process may be automatically triggered to implement a data collection process, for example, a data collection period may be set, such as daily test data collection, weekly test data collection, or once per version update, so that the performance testing process may be automatically triggered without manual operation.
(3) The flow of data collection is implemented in a test environment.
After multiple versions of performance test data are collected, a performance analysis of the current version may then be performed based on such data. Fig. 6 is a schematic flow chart of a performance testing method for an application program according to an embodiment of the present application.
Step 601: the method comprises the steps of obtaining a plurality of running objects, first performance test data triggered when an application program of a target version runs, obtaining a plurality of running objects, and second performance test data respectively triggered when the application program of N historical versions runs; wherein N is more than or equal to 1.
In the embodiment of the application, the BUG of the current version is located by collecting fine-grained, long-period performance test data of the application program.
In actual application, performance test data can be collected for application programs of different versions, and the collected performance test data is incorporated into the database for unified management, so that when the performance test data of the target version and the N historical versions are stored in the database, first performance test data corresponding to the target version and second performance test data corresponding to the N historical versions can be read from the database.
The performance test data in the embodiment of the present application may include, for example, data such as object allocation data, function operation time consumption data, and network transceiving data, and of course, may also include other possible performance data, which is not limited in the embodiment of the present application.
Step 602: and respectively determining the operation change data of the corresponding operation object based on the plurality of operation objects and the first performance test data and the N second performance test data which respectively correspond to the plurality of operation objects.
In the embodiment of the application, the performance test data of different versions can be compared in a version comparison mode.
In an embodiment, a dual-version comparison may be performed. For the target version, its performance test data may be compared with the performance test data of the previous version to obtain difference data; the difference data is the running change data of a running object, and whether the running object is abnormal is then determined based on this difference data, for example by marking the running object as abnormal when the difference data is greater than a certain threshold. A minimal sketch of this comparison is given below.
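A minimal sketch of the dual-version comparison described above; the data layout and the threshold value are assumed placeholders.

```python
# Sketch of the dual-version comparison: flag a running object whose metric
# changed by more than an assumed threshold relative to the previous version.
def diff_against_previous(current, previous, threshold):
    """current/previous map running-object name -> performance test value."""
    flagged = {}
    for name, value in current.items():
        delta = value - previous.get(name, 0.0)
        if delta > threshold:
            flagged[name] = delta  # running change data of this object
    return flagged

print(diff_against_previous({"Object1": 1500.0}, {"Object1": 100.0}, threshold=500.0))
# {'Object1': 1400.0}
```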
In the embodiment of the application, it is further considered that when an application program runs, system noise, such as the system scheduling order, random factors in the program and the total system load, affects the data of every run. In order to filter out this data noise so that the final test result reaches a certain accuracy, the running change data of each running object is determined by multi-version linear fitting, so that problems can be located as accurately as possible among the massive dimensions.
Since the running change data is determined in a similar way for every running object, one running object A is taken here as a specific example.
Specifically, for the operation object a, linear fitting processing is performed based on first performance test data and N second performance test data corresponding to the operation object a to obtain a slope of a straight line obtained through fitting, where the slope is used for representing a change rate of the operation object a.
Fig. 7 is a schematic diagram of a straight line obtained by linear fitting. In fig. 7, N is taken as 6, that is, each time a performance test is performed, the performance test data of 7 versions in total is selected for linear fitting. The abscissa of the fitted line shown in fig. 7 is the version number and the ordinate is the value of the performance test data of running object A; after the linear fitting is performed, the slope of the fitted straight line is obtained, and this slope is the performance change rate of running object A across the 7 versions.
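A sketch of the multi-version linear fit for one running object, assuming seven versions of test data and an assumed slope threshold; numpy's polyfit is used here for the least-squares line.

```python
# Sketch: fit value = k * version_index + b over 7 versions and read off k,
# the performance change rate of running object A; the threshold is assumed.
import numpy as np

def change_rate(values):
    """values: performance test data of one running object, oldest version first."""
    x = np.arange(len(values), dtype=float)   # version index as abscissa
    k, _b = np.polyfit(x, np.asarray(values, dtype=float), deg=1)
    return k

values_a = [10.1, 10.3, 10.2, 10.6, 10.9, 11.4, 12.0]  # 6 historical versions + target
slope = change_rate(values_a)
SLOPE_THRESHOLD = 0.2  # assumed value
print(slope > SLOPE_THRESHOLD)  # True -> object A would be marked abnormal
```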
Step 603: and determining an abnormal operation object from the plurality of operation objects based on the obtained operation change data.
In one embodiment, when the dual-version comparison mode is adopted, if the difference data is greater than a certain threshold, the running object may be determined to be an abnormal running object and is marked.
In one embodiment, when the multi-version linear fitting mode is adopted, if the slope of running object A is greater than the set slope threshold, running object A is determined to be an abnormal running object and is marked.
In the embodiment of the application, because fine-grained performance monitoring is performed on the application program, the number of monitored dimensions is very large and anomaly location is therefore very difficult. The running change of a given running object can be known more accurately through linear fitting; by comparing it against the baseline change of that running object, an abnormal running object can be identified easily, so the abnormal dimensions are locked into a small range and the efficiency of anomaly location is improved.
Step 604: and generating a performance test result corresponding to the application program of the target version based on the first performance test data of the abnormal operation object.
In the embodiment of the application, for the located abnormal operation objects, the performance test result corresponding to the application program of the target version may be generated by combining the first performance test data of the abnormal operation objects in the target version. In actual application, the abnormal operation object can be marked, so that the abnormal operation object can be distinguished from the normal operation object in subsequent display.
In the embodiment of the application, it is considered that a single run of each version of the application program is inevitably affected by system noise, which makes the obtained performance test result inaccurate and may ultimately cause anomaly location to fail; therefore, each version may be run multiple times and the collected data merged to suppress this noise.
Taking the first performance test data corresponding to the target version as an example, refer to fig. 8, which is a schematic flow chart for obtaining the first performance test data. Specifically, the first performance test data corresponding to any one of the operation objects a is taken as an example.
Step 6011: and acquiring first basic operation data triggered by the operation object A when the operation object A runs for the Mth time of the application program of the target version.
Step 6012: and respectively acquiring second basic operation data, triggered by the operation object A in the previous (M-1) operation of the target version of the application program, from the stored basic operation data.
It should be noted that the first basic running data and the second basic running data are only used to distinguish the basic running data obtained from different runs; there is no substantial difference between the parameter data contained in the basic running data and in the performance test data. The performance test data is obtained by processing the basic running data collected from multiple runs under the same conditions; for example, the performance test data may be obtained by averaging the basic running data of the same version.
Step 6013: and merging the obtained first basic operation data and the (M-1) second basic operation data to obtain first performance test data corresponding to the operation object A.
Specifically, the averaging process may be performed on the first basic operation data and the (M-1) second basic operation data, so as to obtain the average value of the performance test data of the operation object a that is operated M times.
For example, when the performance test data is running time consumption, the running time consumption of each time the running object a runs M times may be obtained, so as to calculate the average time consumption of M times, as the first performance test data finally participating in the performance analysis.
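A sketch of merging the Mth run with the previous (M-1) runs by averaging; the data layout is an assumption.

```python
# Sketch: average the basic running data of M runs of one running object to
# obtain its first performance test data for the target version.
def merge_runs(first_run_value, previous_run_values):
    all_values = [first_run_value] + list(previous_run_values)
    return sum(all_values) / len(all_values)

# e.g. M = 2: the current run plus one earlier run of the target version
print(merge_runs(13.0, [11.0]))  # 12.0 -> first performance test data of object A
```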
In the embodiment of the present application, the values of M and N may be, for example, 2 and 6 respectively, that is, the target version and the previous 6 historical versions (7 versions in total) participate in the performance analysis, and for each version the basic running data of 2 runs is used. In practical applications, the values of M and N may be set based on empirical values, or may be set according to experimental results.
A data noise filtering experiment was performed to determine how many test data of each version should be averaged. In order to obtain the experimental result conveniently and intuitively, a single version may be used to simulate the running of multiple versions, so that in theory the same version should not show any anomaly. Since the baseline of the comparison is the version itself, when linear fitting is performed on the true values the slope of the fitted line is 0. Here, 7 noisy test data are generated for each test; the test data may, for example, be generated randomly within 2 standard deviations of a normal distribution.
FIG. 9 is a graph showing the results of a data noise filtering experiment.
Referring to fig. 9, in scheme 1, linear fitting is performed with any single test datum, and the fitted straight line differs greatly from the true values; in scheme 2, linear fitting is performed with the average of any two test data, which is very close to the true values; in scheme 3, linear fitting is performed with the average of 7 test data, and the fitted straight line differs little from that of scheme 2.
Therefore, weighing the difference between each scheme's result and the true values against the difficulty of implementing each scheme, performing linear fitting with the average of two tests strikes a good balance between difficulty and effect.
Similarly, in order to determine how many test data of each version are needed to reveal an anomaly clearly, an anomaly display experiment was performed. In this experiment, the true performance value of a certain version is increased, simulating a normal addition or an abnormality such as a BUG introduced for some reason. Thus, when linear fitting is performed on the true values, the slope of the fitted straight line is greater than 0. As before, 7 noisy test data are generated for each test, and the test data may be generated randomly according to a normal distribution.
Fig. 10 is a schematic diagram showing the results of the anomaly display experiment.
Referring to fig. 10, in scheme 1, linear fitting is performed with any single test datum, and the fitted straight line differs greatly from the true values; in scheme 2, linear fitting is performed with the average of any two test data, which is very close to the true values; in scheme 3, linear fitting is performed with the average of 7 test data. Schemes 2 and 3 are similarly close to the true values, so on balance scheme 2 is the more suitable choice.
The two experiments make the effect of linear fitting clearly perceptible; since the test values differ from run to run, the fitting effect also differs, so the accuracy of each test scheme is further verified through repeated experiments in the manner described below.
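A sketch reproducing the spirit of these two experiments, assuming Gaussian noise and a simple least-squares fit; the noise scale and true values are placeholders, so the numbers produced will not match the figures in the original.

```python
# Sketch: simulate 7 versions with identical (flat) or growing true values,
# add Gaussian noise, average `runs_per_version` noisy samples per version,
# and compare the fitted slope with the true slope. Noise scale is assumed.
import numpy as np

rng = np.random.default_rng(0)

def fitted_slope(true_values, runs_per_version, noise_std):
    noisy = [np.mean(rng.normal(v, noise_std, size=runs_per_version))
             for v in true_values]
    x = np.arange(len(true_values), dtype=float)
    k, _ = np.polyfit(x, noisy, deg=1)
    return k

flat = [100.0] * 7                              # same version run as 7 versions: true slope 0
rising = [100.0 + 2.0 * i for i in range(7)]    # version series with an injected anomaly

for runs in (1, 2, 7):
    print(runs, round(fitted_slope(flat, runs, noise_std=5.0), 2),
          round(fitted_slope(rising, runs, noise_std=5.0), 2))
```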
Referring to fig. 11, a schematic flow chart for determining the values of M and N based on the experimental results is shown.
Step 1101: and formulating candidate test schemes, wherein the number of versions or the running times of any two candidate test schemes are different, and the application programs of different versions are obtained by performing pseudo-random modification on the application program of the specified version.
The following are examples of several candidate test scenarios:
(1) Linear fitting with 2 versions, each version using the average of 7 runs of basic running data.
(2) Linear fitting with 7 versions, each version using the basic running data of 1 run.
(3) Linear fitting with 7 versions, each version using the average of 2 runs of basic running data.
(4) Linear fitting with 7 versions, each version using the average of 7 runs of basic running data.
It should be noted that the application programs of different versions are obtained by performing pseudo-random modification on the application program of a specific version, for example, on the basis of a certain version, normal addition or addition of an abnormal BUG is performed, so that the actual performance of the version is increased.
Step 1102: and acquiring performance test data triggered by each running object under different versions aiming at each candidate test scheme.
Specifically, the performance test data is generated similarly to the above experiment, but each experiment will re-generate the test data according to the real value and perform linear fitting.
Taking one of the test schemes as an example: for a candidate test scheme, the basic running data triggered by each running object under different versions is acquired; then, for each version, pseudo-random noise processing is performed on the basic running data triggered by each running object for the number of runs specified in the scheme; and the performance test data triggered by each running object under each version is obtained based on the resulting plurality of basic running data of that running object under that version.
For example, if the test scheme is scheme (1), then for each of the 2 versions the basic running data of each running object under that version is obtained, pseudo-random noise processing is performed 7 times on the basic running data of each version to obtain 7 basic running data carrying data noise, and these 7 noisy basic running data are averaged to obtain the performance test data of that version.
The pseudo random noise processing may be, for example, the above-described normally distributed random noise.
Step 1103: and determining a target test scheme from the multiple candidate test schemes based on the operation change condition and the real change condition corresponding to each operation object when each candidate test scheme is adopted.
Taking linear fitting as an example, linear fitting may be performed on the performance test data of each version obtained by the candidate test scheme, so as to obtain a performance change rate for each test scheme, and the performance change rate is compared with a real change rate, so that the accuracy of each test scheme may be determined.
In practical application, the test can be repeated to obtain the accuracy of each test scheme, and the final test scheme is then selected based on the accuracy. For example, the accuracy of each scheme can be verified by repeating the test 10,000 times, a fit being counted as satisfactory if the slope of the fitted straight line is less than 0.5; the resulting accuracies are shown in table 1 below.
TABLE 1 (accuracy of each candidate test scheme; the table contents are provided as images in the original filing)
It can be seen that in scheme (3), where 7 versions are fitted and each version uses the average of 2 runs of data, the accuracy of the fitting result is 78.2%, and in scheme (4), where 7 versions are fitted and each version uses the average of 7 runs of data, the accuracy is 97.2%; both schemes have relatively high accuracy and can therefore be used as target test schemes.
In practical application, any candidate test scheme whose accuracy meets a certain condition could be adopted, but different test schemes have different execution complexity, so the final target test scheme can be selected after weighing execution complexity against accuracy. For example, the accuracy of scheme (3) approaches 80% and it achieves a good balance between execution complexity and accuracy, so scheme (3) can be selected as the target test scheme for the final implementation.
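A sketch of the repeated-trial accuracy estimate behind table 1, assuming the flat-true-value case where a fitted slope of magnitude below 0.5 counts as a correct (non-anomalous) result. The trial count, noise scale and true value are assumptions, so the printed accuracies will differ from the original experiment.

```python
# Sketch: estimate the accuracy of a candidate scheme by repeating the noisy
# simulation many times and counting fits whose slope stays below 0.5.
import numpy as np

rng = np.random.default_rng(1)

def scheme_accuracy(n_versions, runs_per_version, trials=10_000,
                    true_value=100.0, noise_std=5.0):
    x = np.arange(n_versions, dtype=float)
    hits = 0
    for _ in range(trials):
        noisy = [np.mean(rng.normal(true_value, noise_std, size=runs_per_version))
                 for _ in range(n_versions)]
        k, _ = np.polyfit(x, noisy, deg=1)
        if abs(k) < 0.5:
            hits += 1
    return hits / trials

print(scheme_accuracy(n_versions=7, runs_per_version=2))  # scheme (3)
print(scheme_accuracy(n_versions=7, runs_per_version=7))  # scheme (4)
```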
Step 1104: setting the value of N based on the number of versions corresponding to the target test scheme, and setting the value of M based on the corresponding running times.
For example, if the target test scenario is scenario (3), N may be set to 6 accordingly, that is, the current version and the previous 6 historical versions are selected, a total of 7 versions are subjected to linear fitting, and M is set to 2 accordingly, that is, each version is averaged using the basic operation data after two operations to serve as the performance test data of the version.
In the embodiment of the present application, it is considered that the application program may comprise a plurality of running stages. Taking a game application as an example, it may be divided into several logical stages, such as an initialization stage, a match creation stage, a player loading stage and an in-game stage. In order to strictly compare data of the same logic segment and to perform multi-version comparison against a single standard, thereby providing more accurate data comparison and improving its effect, the embodiment of the application also provides a logic segment division function.
Referring to fig. 12, a flow chart of a performance testing method based on logic segment partitioning is shown.
Step 1201: and in response to the triggering operation of the operation phase segmentation of the application program, dividing the application program into a plurality of operation phases.
Step 1202: when the operation is carried out to the starting position of one operation stage in each operation stage, triggering and calling a corresponding logic section starting function, and starting to acquire first performance test data of each operation object in the operation stage.
Step 1203: and when the operation is carried out to the end position of the operation stage, triggering and calling a corresponding logic segment end function, and ending the collection of the first performance test data of each operation object in the operation stage.
Specifically, based on the introduction of the above embodiments, the data collection plug-in of the embodiment of the present application includes a logic segment division function, which is used to divide the application program into a plurality of operation stages, and then subsequently compare the operation stages with each other.
Specifically, in order to perform data acquisition in segments, during the access work the service logic layer calls the API of the IterationTrace plug-in to set the statistical service logic segments. When a logic segment is set, the APIs provided by the SDK need to be called in the application program; these APIs are the above-mentioned logic segment start function and logic segment end function, which trigger the start and the end of data collection respectively.
Referring to fig. 13, which is a schematic diagram of a logic segment splitting function, after a logic segment is divided, in an application program running process, if the application program runs to a start position of a logic segment, the data acquisition plug-in may be triggered to start collecting first performance test data, and at an end position of the logic segment, the data acquisition plug-in may be triggered to end collecting the first performance test data, and report the collected first performance test data to the data service module.
In one embodiment, when the logic segment is divided, in response to a triggering operation of performing the operation stage division on the application program, a logic segment start function is inserted into a start position of each operation stage in the application program, and a logic segment end function is inserted into an end position of each operation stage, so as to divide the application program into a plurality of operation stages, and then when the application program runs to the start position of one of the operation stages, the corresponding logic segment start function is triggered and called, and the collection of first performance test data of each operation object in the operation stage is started; and when the operation is carried out to the end position of the operation stage, triggering and calling a corresponding logic section end function, and ending the collection of the first performance test data of each operation object in the operation stage.
In another embodiment, during the division of the logic segment, a logic segment start function may be called to record a start position of each operation phase, and a logic segment end function may be called to record an end position of each operation phase, so that the start position and the end position of each record may be monitored, when the start position of one operation phase is reached, the logic segment start function may monitor a trigger to the start position, so as to trigger to start collecting data of the operation phase, and similarly, when the end position of one operation phase is reached, the logic segment end function may monitor a trigger to the end position, so as to trigger to end collecting first performance test data of the operation phase, and report the collected first performance test data to the data service module.
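A sketch of how the business logic layer might bracket a running stage with the logic segment start and end calls; the class and method names here are illustrative stand-ins, not the SDK's actual API.

```python
# Illustrative sketch: hypothetical begin/end logic-segment calls bracketing a
# running stage; the real SDK API names are not specified here.
class IterationTraceSDK:
    def __init__(self):
        self.sections = {}

    def begin_section(self, name):
        self.sections[name] = {"open": True, "data": []}

    def collect(self, name, sample):
        if name in self.sections and self.sections[name]["open"]:
            self.sections[name]["data"].append(sample)

    def end_section(self, name):
        self.sections[name]["open"] = False
        # here the collected first performance test data would be reported

sdk = IterationTraceSDK()
sdk.begin_section("Load")      # logic segment start function at the stage start
sdk.collect("Load", {"Object1": 3})
sdk.end_section("Load")        # logic segment end function at the stage end
```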
It should be noted that, when the adopted test scheme is linear fitting based on data obtained by one-time operation, the first performance test data is substantially basic operation data, and when the adopted test scheme is linear fitting based on data obtained by multiple operations, the first performance test data can be substantially understood as basic operation data obtained by single acquisition, and the first performance test data corresponding to the target version can be obtained by combining the basic operation data obtained by multiple operations.
In practical application, whether the logic section needs to be divided or not can be judged according to practical requirements, and if the logic section does not need to be divided, the data acquisition plug-in can acquire first performance test data in the whole operation process of the application program.
In the embodiment of the application, in actual use, if the running stages have been divided, then whenever the logic segment end function is called it indicates that the data acquisition plug-in has completed the data collection work of one running stage; performance analysis is then performed based on the collected first performance test data of that running stage and the second performance test data of the same running stage in the historical versions, to obtain the corresponding performance test result.
Specifically, when performing the performance analysis, the performance analysis may be performed for each operation stage, and here, any one of the operation stages S is taken as an example for description.
For the operation stage S, the operation change data corresponding to each operation object in the operation stage S may be determined based on the first performance test data and the N second performance test data corresponding to each operation object in the operation stage S, and then the abnormal operation object in the operation stage S may be determined based on the operation change data corresponding to each operation object in the operation stage S.
For example, in the operation stage S, the performance test data of each operation object may be subjected to linear fitting to obtain a performance change rate of each operation object, and when the performance change rate is greater than a set threshold, the operation object may be determined as an abnormal operation object.
In the embodiment of the application, besides dividing logic segments, running scenarios can also be distinguished, namely the ReportTestType mentioned above. In different running scenarios, the running data of each running object may differ because the running logic differs; therefore, the running scenario also needs to be considered during data analysis, and targeted data comparison is performed per running scenario to improve the accuracy of the data analysis.
Specifically, before the application program is started, a running scenario may be preset. The number of application instances that need to run in that scenario is then determined from the running scenario set by the run operation; based on the determined number, a plurality of user objects in the scenario are simulated and the corresponding number of instances of the target version are run. For example, for a game application, in a 5-player battle scenario and a 2-player battle scenario the numbers of game clients to run are 10 and 4 respectively, each game client simulating the usage of one player. Because the running logic differs, the monitored dimensions and the running data of each dimension may also differ, so data collection can be performed in each running scenario; a sketch of this step follows.
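A sketch of the scenario-driven client count, assuming a simple mapping from scenario name to the number of simulated clients (the scenario names and the mapping itself are placeholders for the 5-player and 2-player battle examples above).

```python
# Sketch: decide how many client instances to run for a given running scenario.
# The scenario names ("5v5", "2v2") and the client counts are assumptions
# matching the 10-client and 4-client examples above.
SCENARIO_CLIENTS = {"5v5": 10, "2v2": 4}

def clients_for_scenario(scenario):
    return SCENARIO_CLIENTS[scenario]

def launch_clients(scenario, launch_one):
    """launch_one is whatever actually starts one simulated game client."""
    return [launch_one(i) for i in range(clients_for_scenario(scenario))]

print(clients_for_scenario("5v5"))  # 10 simulated players
```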
After the performance test data of each running scenario has been collected, the data can be compared per running scenario to locate anomalies of the game application accurately, that is, for each running scenario, the running change data corresponding to each running object in that scenario is determined based on the first performance test data and the N second performance test data corresponding to each running object in that scenario. In practical application, the data can also be compared by running scenario and running stage together, that is, comparison is performed for one running stage within one running scenario, so that the anomalies of the game application are located accurately.
Next, taking the system architecture shown in fig. 3 as an example, a technical solution provided by the embodiment of the present application is described in detail. Taking a game application developed based on a UE engine as an example, as shown in fig. 3, the technical solution of the embodiment of the present application may include steps S1 to S10.
S1: and operating the program.
When the access work is ready and a new version of the application has been compiled from the program code, the application can be run to begin data collection. The program may be run manually, or the data acquisition plug-in may trigger the application to run.
When the application program is started, the command-line parameters need to be configured to enable the data collection function of the data acquisition plug-in, so that data reporting and subsequent data storage can be realized; for example, ReportIP, ReportPort, ReportVersion, ReportTestType, the run sequence number and so on can be carried.
S2: and (6) collecting data.
Once the access work is finished, the IterationTrace plug-in enhances the functions of the UE engine and deploys hook functions at the core APIs, so that basic data can be collected automatically while the program runs. Of course, data to be reported can also be added at appropriate positions as required.
S3: and controlling the logic section.
During the access work, the service logic layer calls the API of the IterationTrace plug-in to set the statistical service logic segments, that is, the APIs provided by the SDK, including the API for starting collection and the API for ending collection, are called in the program. When the logic-segment-end API is called during program running, the IterationTrace plug-in finishes the data collection of one stage.
S4: and (6) reporting the data.
After the data collection is completed, the collected performance test data can be reported to the data service module.
In practical application, if the running stages have been divided, then when the logic segment end function is called it indicates that the data acquisition plug-in has completed the data acquisition work of one running stage, and the collected first performance test data of that running stage can be reported to the data service module.
In order to reduce the amount of data transmitted over the network and reduce the impact on the network performance of the application program, after the data collection work of one running stage is finished, the IterationTrace plug-in may perform a certain amount of preprocessing on the collected first performance test data.
In one embodiment, the first performance test data of the same operation object may be merged, for example, averaging the operation durations of multiple operations of one operation object, and the obtained average may be used as the final operation duration of the operation object.
The preprocessed first performance test data may be formatted according to a JavaScript Object Notation (json) format, and then reported to the data service module in the json format.
In an implementation manner, each running object may be further classified according to a dimension type, so as to facilitate subsequent aggregation transmission by dimension classification, reduce the number of network packet transmissions, and improve the efficiency of packet transmission, for example, the running objects may be classified according to dimensions such as a program object class and a function call class.
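A sketch of the preprocessing and JSON reporting step, assuming a simple payload layout grouped by an assumed dimension type; the field names are illustrative only.

```python
# Sketch: merge repeated samples of each running object, group objects by an
# assumed dimension type, and serialise the report as JSON.
import json

def build_report(version, test_type, section, run_seq, samples):
    """samples: list of (dimension, object_name, value) collected in one stage."""
    merged = {}
    for dimension, name, value in samples:
        bucket = merged.setdefault(dimension, {}).setdefault(name, [])
        bucket.append(value)
    payload = {
        "version": version, "testtype": test_type,
        "section": section, "seq": run_seq,
        "dimensions": {
            dim: {name: sum(vals) / len(vals) for name, vals in objs.items()}
            for dim, objs in merged.items()
        },
    }
    return json.dumps(payload)

print(build_report("1.2.3", "5v5", "Load", 2,
                   [("function_call", "SpawnBullet", 1.8),
                    ("function_call", "SpawnBullet", 2.2)]))
```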
S5: and (6) warehousing the data.
After the data service module receives the performance test data reported by the data acquisition module, the performance test data can be put into a warehouse for processing.
In an embodiment, if the application program is not divided into logical segments, the performance test data is generated in the whole running process of the application program, and a storage identifier corresponding to the performance test data can be generated according to the version number of the target version, the running mode adopted when the performance test data is triggered, and the running time serial number when the performance test data is triggered, so that the performance test data is stored in the database by the storage identifier.
In an embodiment, if the application program is divided into logical segments, the performance test data is generated during the operation of an operation phase of the application program, an operation mode used when the first performance test data is triggered, an operation time serial number used when the first performance test data is triggered, and a phase identifier corresponding to an operation phase may be acquired, a storage identifier corresponding to an operation phase is generated based on the version number, the operation mode, the operation time serial number, and the phase identifier, and the performance test data corresponding to the corresponding operation phase is stored in the database based on the generated storage identifier.
Specifically, for example, when the application program is started with the command-line parameters described above, then after a data report is received, the performance test data is stored in the database with the version, the test mode testtype, the logic segment, and the run sequence number seq of this run as keys.
In addition, with the version, the test mode testtype and the logic segment as keys, the performance test data of the corresponding test mode and logic segment of that version can be aggregated; for example, the data of each running object in the reported data is averaged and stored in the database. That is, if the target version is currently on its Mth run, then after the basic running data collected in the Mth run is stored, it is averaged with the basic running data of the previous M-1 runs and stored in the database as the performance test data used in the subsequent performance analysis of the target version; a sketch of this step follows.
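A sketch of the storage-key layout and the averaging into performance test data, with a plain dictionary standing in for the real database; the key structure shown is an assumption based on the description above.

```python
# Sketch: store the Mth run under (version, testtype, section, seq) and keep a
# per-(version, testtype, section) average as the performance test data used
# for analysis. Dicts stand in for the real database.
RAW_DB, PERF_DB = {}, {}

def store_run(version, testtype, section, seq, run_data):
    RAW_DB[(version, testtype, section, seq)] = run_data
    runs = [v for (ver, tt, sec, _), v in RAW_DB.items()
            if (ver, tt, sec) == (version, testtype, section)]
    PERF_DB[(version, testtype, section)] = {
        name: sum(d.get(name, 0.0) for d in runs) / len(runs)
        for name in run_data
    }

store_run("1.2.3", "5v5", "Load", 1, {"Object1": 10.0})
store_run("1.2.3", "5v5", "Load", 2, {"Object1": 14.0})
print(PERF_DB[("1.2.3", "5v5", "Load")])  # {'Object1': 12.0}
```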
S6: and calculating difference data.
After collecting the performance test data of the multiple versions, the performance test data of the current version and the performance test data of the previous N versions may be summarized and analyzed to obtain the performance test result of the current version.
Specifically, taking test scheme (2) as an example, the performance test data of the 6 versions preceding the current version may be taken from the database, and linear fitting may be performed on the 7 versions of performance test data of each running object; if the slope of the fitted straight line exceeds a set value, the running object is flagged as possibly abnormal, and the abnormal data is inserted into the database used for display.
S7: and displaying the summary.
The data in the database is displayed through a data display module included in the system, and abnormal running objects are shown. The displayed data may include summary information, such as summary information of each version or of each running stage under each version; of course, detailed information about abnormal running objects and the like may also be displayed.
Specifically, referring to fig. 14, an operation flow diagram for data display is shown.
Step 1401: responding to the test result display operation aiming at the target version, and presenting a test result display interface corresponding to the target version; and the test result display interface comprises test results of the application program in different operation stages.
For example, taking object initialization and function call monitoring as examples, each running object may be an object or a function.
In one embodiment, when the test result is displayed, the content of the performance test data and the test result of a certain version can be displayed separately. If the logic segment division is performed, the test result of the target version can be displayed in stages.
Referring to fig. 15a, an interface is shown for a version of the test results, in which data overview information for each run phase of the version is shown. Fig. 15a shows memory object data as an example, for example, the application program is divided into an Init phase (DS start-up process), a Preprocess phase (Client connection process), a CountDown phase (Client connection process), a Load phase (Client connection process), and a Game phase (Client enters a Game process), and shows the types and the numbers of the memory objects in the corresponding phases.
In practical application, different types of data overviews can be expanded according to specific requirements.
In one embodiment, comparison data of the target version and the previous historical version may also be presented.
Referring to fig. 15b, a schematic view of the test result display interface is shown. As shown in fig. 15b, after the comparison version with the version number "0.1.0.3070.0_4978_14" is selected, the difference data relative to that comparison version can be displayed.
Step 1402: and responding to the test result of the target operation stage in the test result display interface to perform display triggering operation, and displaying the first performance test data corresponding to each operation object in the target operation stage.
In the embodiment of the application, performance monitoring data of each operation object in the operation stage can be displayed. Referring to fig. 15c, an interface is shown for data taking the target operation phase as a Load phase as an example.
In one embodiment, the data presentation interface may present the object creation status and function call status of the current version, such as the number of object creations, and the object name and number of function calls.
In practical application, the system can provide operation functions such as screening, searching and sorting for facilitating browsing of viewers.
In one embodiment, a comparison version of the current version may also be selected, and the delta data between the current version and the comparison version is displayed in the data display interface. As shown in fig. 15c, for an Object named "Object 1", after comparing the current version with the comparison version, the delta value of the created number is 1313.
In practical applications, in order to let a viewer see abnormal running objects more intuitively, an abnormal running object may be specially marked, for example with a color or a marker; an exclamation mark may be used in fig. 15c to indicate that an abnormality exists, or running objects may be marked to different degrees according to the degree of abnormality. The viewer can then perceive the abnormal data visually and check the corresponding code to locate where the abnormality may appear in the code, for further analysis.
Step 1403: responding to a trigger operation for displaying first performance test data of a target operation object in each operation object, and displaying a data display interface of the target operation object; and the data display interface comprises operation change data corresponding to the target operation object.
In the embodiment of the application, the detailed information of each running object can be displayed.
In one embodiment, detailed information of the performance test data of the selected runtime object may be presented in a data presentation interface. As shown in fig. 15d, taking a function as an example, the object called by the function is shown, and the performance data related to the call, such as information about time consumption and number of calls, is shown.
In one embodiment, delta data of the selected runtime object and the comparison version, such as a call count difference, a time consumption difference, and the like, may be presented in the data presentation interface.
Of course, the data to be displayed may be set according to a specific operation object and a requirement, which is not limited in this application.
S8: and (5) result feedback.
In the embodiment of the application, in order to notify the developer of the application of the abnormal information, the abnormal information can be pushed to the terminal device of the developer in an abnormal information pushing manner, and the abnormal information can also be displayed to the developer in an information displaying manner, so that the developer can obtain abnormal data from the system and perform positioning analysis.
S9: and modifying and optimizing.
S10: a new iteration.
In practical application, after carrying out exception positioning analysis and modifying aiming at necessary exception, developers submit updated codes to a code base, so that new codes can enter a new iterative test process to form an optimized closed loop so as to continuously optimize an application program and achieve the optimal effect.
To sum up, the embodiment of the present application provides a tool capable of performing bypass analysis and lateral comparison in a development environment, such as lateral comparison memory analysis, and by performing fine-grained and long-period performance monitoring on an application program in iteration, a program performance development process can be grasped as a whole, and corresponding performance consumption points can be quickly located by backtracking a historical version. The problem can be obviously found for each fine-grained long-period monitoring, because the change of the logic code of different versions can directly cause the change of the dimension performance of the runtime monitoring. When the performance is abnormal, the method can help a developer to quickly locate a problem point, simplify the work of problem location, and is suitable for developers and testers to use. The tester can quickly find the performance abnormal point by using the scheme and feed the performance abnormal point back to the developer, and the developer can quickly locate the problem in the code after taking the information.
Furthermore, there are very many monitoring dimensions considering long-period fine-grained monitoring. If there is no good algorithm to filter dimensions without anomalies, it is difficult to find outliers in the massive monitoring dimensions. Therefore, the linear fitting algorithm is adopted to filter the dimension without abnormality, the abnormal dimension can be determined in a very small range, the difficulty and the complexity of finding abnormal points in massive dimensions are reduced, the positioning efficiency of the abnormal dimension is greatly improved, and the accuracy is better.
The technical scheme provided by the embodiment of the application has been put into a real application scenario and can locate BUGs accurately. Specifically, the same version of an application is simulated as 7 versions, each version being run twice. Taking the Load stage of the application as an example, according to the statistical data shown in fig. 16a there are 3941 types in total and 22269 objects created altogether; taking each type as a running object, there are 3941 running objects that need to be monitored.
To compare the effects of the linear fit protocol, a two-version comparison protocol was used for comparison. When a scheme of double-version comparison is adopted, as shown in fig. 16b, the number of abnormal operation objects obtained by comparison is 530, and when a scheme of linear fitting is adopted, as shown in fig. 16c, only less than 20 abnormal operation objects are provided, so that the scheme of linear fitting can effectively determine the abnormal dimension in a small range, and the positioning efficiency of the abnormal dimension is greatly improved.
Furthermore, to verify the accuracy of the anomaly location, by embedding an anomaly within the 8 th version, for example, the following anomalies are added:
(The injected anomaly code is shown as an image in the original filing.)
After the program is run and the performance test data is collected, the abnormal running object can be detected; as shown in fig. 16d, the abnormal data finally displayed includes the injected abnormal object "USGStateBulletInfo", which shows that the technical solution can effectively locate the problem.
In addition, the performance consumption of the IterationTrace plug-in itself was monitored during its application. As shown in fig. 16e, after the IterationTrace plug-in is integrated into the UE engine, it consumes no performance when it is turned off and does not affect the running of the program. As shown in fig. 16f, where different line types represent performance curves under different experimental conditions, even when the IterationTrace plug-in is turned on the consumption is below 5% under the same conditions. As shown in fig. 16g, the memory consumption with the plug-in turned on is below 100 MB. As shown in fig. 16h, where different line types identify performance curves under different conditions, compared with other tools the IterationTrace plug-in of the present application performs on a par with Instruments and is clearly better than STAT; overall, the monitoring effect is good and the running of the application program is not affected excessively.
Referring to fig. 17, based on the same inventive concept, an embodiment of the present application further provides an apparatus 170 for testing performance of an application, where the apparatus may be, for example, the test-end device, or the server, or may be partially deployed in the test-end device, and partially deployed in the server, and the apparatus includes:
a data collection unit 1701, configured to obtain first performance test data triggered by multiple running objects when the target version of the application runs, and obtain second performance test data triggered by multiple running objects when the N historical versions of the application run respectively; wherein N is more than or equal to 1;
a data analysis unit 1702, configured to determine operation change data of corresponding operation objects respectively based on first performance test data and N second performance test data corresponding to each of the plurality of operation objects;
an abnormal location unit 1703, configured to determine an abnormal operation object from the multiple operation objects based on the obtained operation change data;
a result generating unit 1704, configured to generate a performance test result corresponding to the application program of the target version based on the first performance test data of the abnormal operation object.
Optionally,
the data analysis unit 1702 is specifically configured to perform the following operations for the multiple running objects: aiming at an operation object, performing linear fitting processing based on first performance test data and N second performance test data corresponding to the operation object to obtain the slope of a straight line obtained by fitting, wherein the slope is used for representing the change rate of the operation object;
the result generating unit 1704 is specifically configured to determine, for the operation target, that the operation target is an abnormal operation target if the slope is greater than the set slope threshold.
Optionally, the data collection unit 1701 is specifically configured to:
for the plurality of running objects, the following operations are respectively executed:
aiming at an operation object, acquiring first basic operation data triggered by the operation object when the operation object runs in the Mth time of an application program of a target version;
respectively acquiring second basic operation data triggered by the operation object in the previous (M-1) operation of the application program of the target version from the stored basic operation data;
and merging the obtained first basic operation data and the (M-1) second basic operation data to obtain performance test data corresponding to the operation object.
Optionally, the data collection unit 1701 is specifically configured to:
and carrying out averaging processing on the first basic operation data and the (M-1) second basic operation data to obtain performance test data corresponding to the operation object.
Optionally, the apparatus further includes an access operation unit 1705;
the access operation unit is used for, in response to the triggering operation performed for the running engine corresponding to the application program, integrating the data acquisition plug-in into the plug-in package of the running engine; the data acquisition plug-in comprises hook functions respectively corresponding to the plurality of running objects;
a data collection unit, specifically configured to run the application program of the target version based on the running engine in response to a running operation performed on the application program; and triggering the corresponding hook function to acquire first performance test data of the corresponding operation object based on the operation of each operation object.
Optionally, the data acquisition plug-in further includes a logic segment start function and a logic segment end function;
the access operation unit 1705 is further configured to insert a logic segment start function at a start position of each operation stage in the application program and insert a logic segment end function at an end position in response to a trigger operation for performing operation stage segmentation on the application program, so as to divide the application program into a plurality of operation stages;
the data collection unit 1701 is specifically configured to, when the data collection unit runs to a start position of one of the operation stages, trigger to call a corresponding logic segment start function, and start to collect first performance test data of each operation object in the operation stage; and when the operation is carried out to the end position of the operation stage, triggering and calling a corresponding logic section end function, and ending the collection of the first performance test data of each operation object in the operation stage.
Optionally,
the data analysis unit 1702 is specifically configured to perform the following operations for each of the above operation stages:
for one operation stage, determining operation change data corresponding to each operation object in the operation stage based on first performance test data and N second performance test data corresponding to each operation object in the operation stage;
the exception positioning unit 1703 is specifically configured to determine an abnormal operation object in the operation phase based on the operation change data corresponding to each operation object in the operation phase.
Optionally,
the data collection unit 1701 is specifically configured to: determining the number of application programs required to be operated in the operation scene based on the operation scene set by the operation, simulating a plurality of using objects in the operation scene based on the determined number, and operating the application programs of the target versions in corresponding number;
the data analysis unit 1702 is specifically configured to: for each operation scene, the following operations are respectively executed: for one operation scene, determining operation change data corresponding to each operation object in the operation scene based on first performance test data and N second performance test data corresponding to each operation object in the operation scene;
the exception location unit 1703 is specifically configured to: and determining abnormal operation objects in the operation scene based on the operation change data corresponding to each operation object in the operation scene.
Optionally, the apparatus further includes a data entering unit 1706, configured to:
acquiring an operation mode adopted when first performance test data is triggered, an operation time serial number when the first performance test data is triggered, and a stage identifier corresponding to the operation stage;
generating a storage identifier corresponding to an operation stage based on the target version, the operation mode, the operation time serial number and the stage identifier;
and storing the first performance test data corresponding to the operation stage into a database based on the generated storage identification.
Optionally, the apparatus further includes a parameter determining unit 1707, configured to determine values of N and M by:
aiming at multiple set candidate test schemes, acquiring performance test data triggered by each running object under different versions when each candidate test scheme is adopted; the number of versions or the running times corresponding to any two candidate test schemes are different, and the application programs of different versions are obtained by performing pseudo-random modification on the application program of the specified version;
determining a target test scheme from a plurality of candidate test schemes based on the operation change condition and the real change condition corresponding to each operation object when each candidate test scheme is adopted;
setting the value of N based on the number of versions corresponding to the target test scheme, and setting the value of M based on the corresponding running times.
Optionally, the parameter determining unit 1707 is specifically configured to:
aiming at a plurality of candidate test schemes, the following operations are respectively carried out:
aiming at a candidate test scheme, acquiring basic operation data triggered by each operation object under different versions when the candidate test scheme is adopted;
respectively carrying out pseudo-random noise processing on the corresponding operation times of the basic operation data triggered by each operation object under each version based on the operation times corresponding to the candidate test scheme;
and acquiring performance test data triggered by each running object under each version based on the acquired multiple basic running data triggered by each running object under each version.
Optionally, the apparatus further includes a data presentation unit 1708, configured to:
responding to the test result display operation aiming at the target version, and presenting a test result display interface corresponding to the target version; the test result display interface comprises test results of the application program in different operation stages;
responding to a test result of a target operation stage in a test result display interface for displaying a trigger operation, and displaying first performance test data corresponding to each operation object in the target operation stage;
responding to a trigger operation for displaying first performance test data of a target operation object in each operation object, and displaying a data display interface of the target operation object; and the data display interface comprises operation change data corresponding to the target operation object.
With the device, first performance test data triggered by a plurality of running objects when the target version of the application program runs and second performance test data respectively triggered by the plurality of running objects when the N historical versions of the application program run are acquired, and abnormal running objects are located through the running change data of each running object across the different versions. In this way, fine-grained, running-object-level, multi-version long-period performance monitoring is performed on the application program, the changes of running objects caused by changes in the logic code of different versions are located, and abnormal running objects can be located quickly through the running change data, which greatly improves the efficiency of anomaly location, helps developers effectively find the BUGs that arise during application development and correct them in time, and improves the development efficiency of the application program.
In addition, since the device performs fine-grained, long-period performance monitoring of the application program across iterations, the number of running objects is large and there are many monitored dimensions. To improve the efficiency of anomaly location, the embodiment of the application uses a linear fitting algorithm to filter out dimensions without anomalies, so that the abnormal dimensions can be confined to a very small range; this reduces the difficulty and complexity of finding abnormal points among massive dimensions, greatly improves the location efficiency of abnormal dimensions, and gives better accuracy.
The apparatus may be configured to execute the method shown in each embodiment of the present application, and therefore, for functions and the like that can be realized by each functional module of the apparatus, reference may be made to the description of the foregoing embodiment, which is not repeated herein.
Referring to fig. 18, based on the same technical concept, an embodiment of the present application further provides a computer device 180, where the computer device 180 may be the terminal device or the server shown in fig. 1, and the computer device 180 may include a memory 1801 and a processor 1802.
The memory 1801 is used for storing computer programs executed by the processor 1802. The memory 1801 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function, and the like, and the data storage area may store data created according to the use of the computer device, and the like. The processor 1802 may be a Central Processing Unit (CPU), a digital processing unit, or the like. The embodiment of the present application does not limit the specific connection medium between the memory 1801 and the processor 1802. In FIG. 18, the memory 1801 and the processor 1802 are connected by a bus 1803, which is represented by a thick line; the connection manner between other components is merely illustrative and is not limiting. The bus 1803 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 18, but this does not indicate that there is only one bus or one type of bus.
The memory 1801 may be a volatile memory, such as a random-access memory (RAM); the memory 1801 may also be a non-volatile memory, such as, but not limited to, a read-only memory (ROM), a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); or the memory 1801 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 1801 may also be a combination of the above memories.
The processor 1802 is configured to, when calling the computer program stored in the memory 1801, execute the method performed by the apparatus in the embodiments of the present application.
In some possible embodiments, various aspects of the methods provided by the present application may also be implemented in the form of a program product including program code. When the program product is run on a computer device, the program code causes the computer device to perform the steps of the methods according to the various exemplary embodiments of the present application described above in this specification; for example, the computer device may perform the methods performed by the devices in the embodiments of the present application.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (15)

1. A performance testing method for an application program, the method comprising:
acquiring first performance test data triggered by a plurality of running objects when an application program of a target version runs, and acquiring second performance test data respectively triggered by the running objects when application programs of N historical versions run, wherein N is greater than or equal to 1;
respectively determining operation change data of the corresponding running objects based on the first performance test data and the N pieces of second performance test data corresponding to the running objects;
determining an abnormal running object from the plurality of running objects based on the obtained operation change data;
and generating a performance test result corresponding to the application program of the target version based on the first performance test data of the abnormal running object.
2. The method of claim 1, wherein the respectively determining operation change data of the corresponding running objects based on the first performance test data and the N pieces of second performance test data corresponding to the running objects comprises:
for the plurality of running objects, respectively executing the following operations: for one running object, performing linear fitting processing based on the first performance test data and the N pieces of second performance test data corresponding to the one running object, to obtain a slope of a straight line obtained through fitting, wherein the slope is used for representing a change rate of the one running object;
and wherein the determining an abnormal running object from the plurality of running objects based on the obtained operation change data comprises:
for the one running object, if the slope is greater than a set slope threshold, determining that the one running object is an abnormal running object.
3. The method of claim 1, wherein the acquiring first performance test data triggered by the plurality of running objects when the application program of the target version runs comprises:
for the plurality of running objects, respectively executing the following operations:
for one running object, acquiring first basic running data triggered by the one running object during the Mth run of the application program of the target version;
respectively acquiring, from stored basic running data, second basic running data triggered by the one running object in the previous (M-1) runs of the application program of the target version;
and merging the obtained first basic running data and the (M-1) pieces of second basic running data to obtain the first performance test data corresponding to the one running object.
4. The method of claim 3, wherein the merging the obtained first basic running data and the (M-1) pieces of second basic running data to obtain the first performance test data corresponding to the one running object comprises:
performing averaging processing on the first basic running data and the (M-1) pieces of second basic running data to obtain the first performance test data corresponding to the one running object.
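A minimal Python sketch of such averaging, assuming each piece of basic running data is a mapping from metric names to values (the metric names here are illustrative), could look as follows.

def merge_runs(first_basic_data, second_basic_data_list):
    # average the Mth run with the stored data of the previous (M-1) runs, per metric
    all_runs = [first_basic_data] + list(second_basic_data_list)
    return {metric: sum(run[metric] for run in all_runs) / len(all_runs)
            for metric in first_basic_data}

# M = 3: the current run plus two stored runs of the target version
current = {"cpu_ms": 4.0, "alloc_kb": 420}
stored = [{"cpu_ms": 3.0, "alloc_kb": 400}, {"cpu_ms": 5.0, "alloc_kb": 410}]
print(merge_runs(current, stored))  # {'cpu_ms': 4.0, 'alloc_kb': 410.0}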
5. The method of claim 1, wherein before the acquiring first performance test data triggered by the plurality of running objects when the application program of the target version runs, the method further comprises:
in response to a trigger operation performed on a running engine corresponding to the application program, integrating a data acquisition plug-in into a plug-in package of the running engine, wherein the data acquisition plug-in comprises hook functions respectively corresponding to the plurality of running objects;
and wherein the acquiring first performance test data triggered by the plurality of running objects when the application program of the target version runs comprises:
in response to a running operation performed on the application program, running the application program of the target version based on the running engine;
and triggering, based on the running of each running object, the corresponding hook function to acquire the first performance test data of the corresponding running object.
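The hook mechanism itself is engine-specific; purely as an illustrative sketch (not the plug-in interface of any particular running engine), the Python decorator below shows the general idea of a hook function that records first performance test data whenever a running object is executed. The object name and the timing metric are assumptions of this sketch.

import functools
import time

collected = []  # first performance test data gathered by the hook functions

def hook(object_name):
    # wrap a running object's entry point so that each call reports its cost
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            collected.append({"object": object_name,
                              "cost_ms": (time.perf_counter() - start) * 1000})
            return result
        return wrapper
    return decorate

@hook("LoadTexture")
def load_texture(path):
    time.sleep(0.002)  # stand-in for the running object's real work
    return path

load_texture("ui/button.png")
print(collected)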
6. The method of claim 5, wherein the data acquisition plug-in further comprises a logical segment start function and a logical segment end function, and the acquiring first performance test data triggered by the plurality of running objects when the application program of the target version runs comprises:
in response to a trigger operation of running stage segmentation performed on the application program, inserting the logical segment start function at the start position of each running stage in the application program, and inserting the logical segment end function at the end position, so as to divide the application program into a plurality of running stages;
when running reaches the start position of one running stage of the running stages, triggering the corresponding logical segment start function to be called, and starting to collect the first performance test data of each running object in the one running stage;
and when running reaches the end position of the one running stage, triggering the corresponding logical segment end function to be called, and ending the collection of the first performance test data of each running object in the one running stage.
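As an illustration only, the pair of functions below sketches in Python how a logical segment start function and a logical segment end function could open and close the collection window of one running stage; the stage identifiers and the report() helper are assumptions of this sketch.

stage_data = {}         # {stage_id: samples collected inside that running stage}
_active_stage = [None]  # currently open running stage, if any

def logical_segment_start(stage_id):
    # called at the start position of a running stage: open collection
    _active_stage[0] = stage_id
    stage_data.setdefault(stage_id, [])

def logical_segment_end(stage_id):
    # called at the end position of the running stage: close collection
    if _active_stage[0] == stage_id:
        _active_stage[0] = None

def report(sample):
    # called by the hook functions; attributes the sample to the open stage
    if _active_stage[0] is not None:
        stage_data[_active_stage[0]].append(sample)

logical_segment_start("stage_login")
report({"object": "LoadTexture", "cpu_ms": 3.3})
logical_segment_end("stage_login")
print(stage_data)  # {'stage_login': [{'object': 'LoadTexture', 'cpu_ms': 3.3}]}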
7. The method of claim 6, wherein the respectively determining operation change data of the corresponding running objects based on the first performance test data and the N pieces of second performance test data corresponding to the running objects comprises:
for each running stage, respectively executing the following operations:
for one running stage, determining the operation change data corresponding to each running object in the one running stage based on the first performance test data and the N pieces of second performance test data corresponding to each running object in the one running stage;
and wherein the determining an abnormal running object from the plurality of running objects based on the obtained operation change data comprises:
determining an abnormal running object in the one running stage based on the operation change data corresponding to each running object in the one running stage.
8. The method of claim 6, wherein after the corresponding logical segment end function is triggered when running reaches the end position of the one running stage and the collection of the first performance test data of each running object in the one running stage is ended, the method further comprises:
acquiring a scene identifier of a running scene in which the first performance test data is triggered, a run sequence number of the run in which the first performance test data is triggered, and a stage identifier corresponding to the one running stage;
generating a storage identifier corresponding to the one running stage based on the target version, the scene identifier, the run sequence number and the stage identifier;
and storing the first performance test data corresponding to the one running stage into a database based on the generated storage identifier.
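The exact format of the storage identifier is not specified; a minimal Python sketch, assuming a simple string concatenation of the four fields and a dictionary standing in for the database, is shown below.

def make_storage_identifier(version, scene_id, run_index, stage_id):
    # separator and field order are illustrative assumptions
    return f"{version}|{scene_id}|run{run_index}|{stage_id}"

def store_stage_data(db, version, scene_id, run_index, stage_id, data):
    db[make_storage_identifier(version, scene_id, run_index, stage_id)] = data

db = {}  # stand-in for the database referred to above
store_stage_data(db, "v2.4.0", "scene_main_city", 3, "stage_login",
                 {"LoadTexture": {"cpu_ms": 3.4}})
print(list(db))  # ['v2.4.0|scene_main_city|run3|stage_login']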
9. The method of any of claims 3 to 8, wherein the values of N and M are determined by:
for multiple set candidate test schemes, acquiring performance test data triggered by each running object under different versions when each candidate test scheme is adopted, wherein the number of versions or the number of runs corresponding to any two candidate test schemes is different, and application programs of the different versions are obtained by performing pseudo-random modification on an application program of a specified version;
determining a target test scheme from the multiple candidate test schemes based on the operation change condition and the real change condition corresponding to each running object when each candidate test scheme is adopted;
and setting the value of N based on the number of versions corresponding to the target test scheme, and setting the value of M based on the corresponding number of runs.
10. The method of claim 9, wherein the acquiring, for the multiple candidate test schemes, performance test data triggered by each running object under different versions when each candidate test scheme is adopted comprises:
for the multiple candidate test schemes, respectively executing the following operations:
for one candidate test scheme, acquiring basic running data triggered by each running object under different versions when the one candidate test scheme is adopted;
based on the number of runs corresponding to the one candidate test scheme, respectively performing pseudo-random noise processing, for the corresponding number of runs, on the basic running data triggered by each running object under each version;
and acquiring, based on the obtained multiple pieces of basic running data triggered by each running object under each version, the performance test data triggered by each running object under each version.
11. The method of any of claims 1 to 8, further comprising:
presenting, in response to a test result display operation for the target version, a test result display interface corresponding to the target version, wherein the test result display interface comprises test results of the application program in different running stages;
displaying, in response to a display trigger operation performed on the test result of a target running stage in the test result display interface, first performance test data corresponding to each running object in the target running stage;
and displaying, in response to a trigger operation performed on first performance test data of a target running object among the running objects, a data display interface of the target running object, wherein the data display interface comprises operation change data corresponding to the target running object.
12. An apparatus for testing performance of an application program, the apparatus comprising:
a data collection unit, configured to acquire first performance test data triggered by a plurality of running objects when an application program of a target version runs, and acquire second performance test data respectively triggered by the running objects when application programs of N historical versions run, wherein N is greater than or equal to 1;
a data analysis unit, configured to respectively determine operation change data of the corresponding running objects based on the first performance test data and the N pieces of second performance test data corresponding to the running objects;
an abnormal positioning unit, configured to determine an abnormal running object from the plurality of running objects based on the obtained operation change data;
and a result generating unit, configured to generate a performance test result corresponding to the application program of the target version based on the first performance test data of the abnormal running object.
13. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein
the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 11.
14. A computer storage medium having computer program instructions stored thereon, wherein,
the computer program instructions, when executed by a processor, implement the steps of the method of any one of claims 1 to 11.
15. A computer program product comprising computer program instructions, characterized in that,
the computer program instructions, when executed by a processor, implement the steps of the method of any one of claims 1 to 11.
CN202210079863.7A 2022-01-24 2022-01-24 Performance test method, device, equipment and storage medium of application program Active CN114490375B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210079863.7A CN114490375B (en) 2022-01-24 2022-01-24 Performance test method, device, equipment and storage medium of application program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210079863.7A CN114490375B (en) 2022-01-24 2022-01-24 Performance test method, device, equipment and storage medium of application program

Publications (2)

Publication Number Publication Date
CN114490375A true CN114490375A (en) 2022-05-13
CN114490375B CN114490375B (en) 2024-03-15

Family

ID=81473754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210079863.7A Active CN114490375B (en) 2022-01-24 2022-01-24 Performance test method, device, equipment and storage medium of application program

Country Status (1)

Country Link
CN (1) CN114490375B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115329155A (en) * 2022-10-11 2022-11-11 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium
CN116701236A (en) * 2023-08-08 2023-09-05 贵州通利数字科技有限公司 APP testing method, system and readable storage medium
CN117234935A (en) * 2023-09-28 2023-12-15 重庆赛力斯新能源汽车设计院有限公司 Test method and device based on illusion engine, electronic equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106201856A (en) * 2015-05-04 2016-12-07 阿里巴巴集团控股有限公司 A kind of multi version performance test methods and device
CN106802856A (en) * 2015-11-26 2017-06-06 腾讯科技(深圳)有限公司 The performance test methods of game application, server and game application client
US20190294528A1 (en) * 2018-03-26 2019-09-26 Ca, Inc. Automated software deployment and testing
CN109726100A (en) * 2018-04-19 2019-05-07 平安普惠企业管理有限公司 Application performance test method, apparatus, equipment and computer readable storage medium
CN110362460A (en) * 2019-07-12 2019-10-22 腾讯科技(深圳)有限公司 A kind of application program capacity data processing method, device and storage medium
CN111045927A (en) * 2019-11-07 2020-04-21 平安科技(深圳)有限公司 Performance test evaluation method and device, computer equipment and readable storage medium
CN111611144A (en) * 2020-05-27 2020-09-01 中国工商银行股份有限公司 Method, apparatus, computing device, and medium for processing performance test data
CN115114141A (en) * 2021-03-18 2022-09-27 腾讯科技(深圳)有限公司 Method, device and equipment for testing performance of application program and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YUCHEN TAN et al.: "Performance Comparison of Data Classification based on Modern Convolutional Neural Network Architectures", 2020 39th Chinese Control Conference (CCC), 9 September 2020 (2020-09-09), pages 815 *
GU LINTAO: "Research and Implementation of a GUI-based Automated Performance Testing Method for Android", CNKI Outstanding Master's Dissertations Full-text Database, Information Science and Technology, no. 01, 15 January 2019 (2019-01-15), pages 138-1683 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115329155A (en) * 2022-10-11 2022-11-11 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium
CN115329155B (en) * 2022-10-11 2023-01-13 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium
CN116701236A (en) * 2023-08-08 2023-09-05 贵州通利数字科技有限公司 APP testing method, system and readable storage medium
CN116701236B (en) * 2023-08-08 2023-10-03 贵州通利数字科技有限公司 APP testing method, system and readable storage medium
CN117234935A (en) * 2023-09-28 2023-12-15 重庆赛力斯新能源汽车设计院有限公司 Test method and device based on illusion engine, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114490375B (en) 2024-03-15

Similar Documents

Publication Publication Date Title
US10733078B2 (en) System and method of handling complex experiments in a distributed system
CN114490375B (en) Performance test method, device, equipment and storage medium of application program
CN107729227B (en) Application program test range determining method, system, server and storage medium
CN110377704B (en) Data consistency detection method and device and computer equipment
US11036608B2 (en) Identifying differences in resource usage across different versions of a software application
CN111309734B (en) Method and system for automatically generating table data
CN111026647B (en) Method and device for acquiring code coverage rate, computer equipment and storage medium
CN110007921B (en) Code publishing method and device
US8850407B2 (en) Test script generation
CN114116422A (en) Hard disk log analysis method, hard disk log analysis device and storage medium
CN111897707B (en) Optimization method and device for business system, computer system and storage medium
CN110309206B (en) Order information acquisition method and system
US10848371B2 (en) User interface for an application performance management system
US11847120B2 (en) Performance of SQL execution sequence in production database instance
CN101661428B (en) Method for evaluating a production rule for a memory management analysis
CN115860709A (en) Software service guarantee system and method
CN115310011A (en) Page display method and system and readable storage medium
CN113868141A (en) Data testing method and device, electronic equipment and storage medium
CN114281549A (en) Data processing method and device
EP3671467A1 (en) Gui application testing using bots
CN112416417A (en) Code amount statistical method and device, electronic equipment and storage medium
CN108763300B (en) Data query method and device
CN116737562A (en) APP response delay test method and device and computer equipment
CN116048975A (en) Database testing method and device, electronic equipment and storage medium
CN114884799A (en) Method, system, equipment and storage medium for testing cluster alarm function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant