CN113608978B - Test method, test device and computer readable storage medium

Info

Publication number
CN113608978B
Authority
CN
China
Prior art keywords
scene
interface
target
virtual
game
Prior art date
Legal status
Active
Application number
CN202110800736.7A
Other languages
Chinese (zh)
Other versions
CN113608978A (en)
Inventor
张涛
Current Assignee
Xi'an Honor Device Co.,Ltd.
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202110800736.7A
Publication of CN113608978A
Application granted
Publication of CN113608978B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/214 Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3466 Performance evaluation by tracing or monitoring
    • G06F11/349 Performance evaluation by tracing or monitoring for interfaces, buses
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 Details of the user interface

Abstract

The application discloses a testing method, a testing device, and a computer-readable storage medium, belonging to the field of network technology. The method comprises the following steps: a first device instructs a second device to display a target interface for displaying a scene picture of a virtual scene, the virtual scene including a plurality of virtual objects. The first device then determines a target viewing angle according to how the virtual objects are aggregated in the virtual scene, the target viewing angle being the viewing angle corresponding to the most complex scene picture in the virtual scene. Finally, the first device instructs the second device to display, in the target interface, the scene picture of the virtual scene at the target viewing angle, so as to test the performance of the second device. Because the target viewing angle corresponds to the most complex scene picture in the virtual scene, the rendering load of the second device is highest while it displays that scene picture in the target interface, which satisfies the performance test requirement of the second device and improves test efficiency.

Description

Test method, test device and computer readable storage medium
Technical Field
The present application relates to the field of network technologies, and in particular, to a testing method, a testing device, and a computer-readable storage medium.
Background
With the development of network technology, mobile games have become an important form of entertainment in daily life. When playing a mobile game, users have very high expectations for smoothness, heat generation, touch latency, network latency, and the like. To improve user satisfaction with a mobile phone, testers therefore need to test the phone's performance during gameplay so that problems can be found and solved in advance.
At present, the performance of a mobile phone is mainly tested manually. Specifically, a tester opens a game application on the phone, opens a game interface in the application, and controls the game characters in that interface to fight one another, so that the most intense confrontation possible is presented on screen. The more intense the confrontation between game characters, the higher the phone's rendering load when displaying the game picture, and the better the picture meets the phone's performance test requirements. After completing a game, the tester analyzes the phone's performance data collected during the game to obtain a test result.
However, in the above test method, the game picture has to be produced manually, which is time-consuming and labor-intensive. Moreover, the displayed game picture is only the picture seen from the viewing angle of the game character controlled by the tester, a single and limited perspective, so it is difficult to present the most intense confrontation possible, and test efficiency suffers.
Disclosure of Invention
The present application provides a testing method, a testing device, and a computer-readable storage medium, which can improve the efficiency of testing device performance. The technical solution is as follows:
In a first aspect, a testing method is provided. In the method, a first device instructs a second device to display a target interface, which is used for displaying a scene picture of a virtual scene. The first device then determines a target viewing angle according to how the virtual objects in the virtual scene are aggregated. Finally, the first device instructs the second device to display, in the target interface, the scene picture of the virtual scene at the target viewing angle, so as to test the performance of the second device.
The virtual scene may include a plurality of virtual objects and may also include a virtual environment. Each virtual object can perform operations in the virtual environment. For example, the virtual scene may be a game scene that includes a plurality of game characters and a game environment; the game environment may include scenery such as roads, stone walls, plants, and defensive buildings, and may also include non-player characters (NPCs) such as monsters and soldiers. Each game character can perform operations in the game environment. For example, any given game character may move along a road in the game environment, attack a defensive building, or interact with an NPC, and so on. In this case, the target interface is a game interface for displaying a game picture of the game scene.
The scene picture displayed in the target interface is the picture of the virtual scene at a certain viewing angle, which may be referred to as the display viewing angle of the virtual scene in the target interface. The more complex the scene picture of the virtual scene, the larger the amount of computation required for rendering, the more memory rendering occupies, and the longer rendering takes; accordingly, the higher the rendering load of the second device when displaying that scene picture, that is, when displaying the target interface. Because the virtual scene includes a plurality of virtual objects that can be manipulated to perform operations, the virtual scene changes constantly. In this application, for a constantly changing virtual scene, the display viewing angle of the virtual scene in the target interface can be continuously adjusted, that is, the scene picture displayed in the target interface is continuously adjusted, so that the target interface always displays the most complex scene picture. The second device is thus kept in its highest rendering-load state while displaying the target interface, which satisfies the performance test requirement of the second device and improves test efficiency.
The plurality of virtual objects in the virtual scene can move, and by moving, any two virtual objects may gather together or disperse to distant locations. Generally, the more virtual objects are gathered in a certain area of the virtual scene, the more complex the scene picture containing that area becomes. The viewing angle corresponding to the most complex scene picture, that is, the target viewing angle, can therefore be determined from how the virtual objects in the virtual scene are aggregated. In other words, the first device may determine the target viewing angle according to the aggregation of virtual objects in the virtual scene, where the target viewing angle is the viewing angle corresponding to the most complex scene picture in the virtual scene; that is, the scene picture of the virtual scene at the target viewing angle has the highest complexity.
Since the target viewing angle corresponds to the most complex scene picture in the virtual scene, after the first device instructs the second device to display, in the target interface, the scene picture of the virtual scene at the target viewing angle, the second device's rendering load when displaying that picture is the highest, which satisfies the performance test requirement of the second device and improves test efficiency.
It should be noted that the virtual scene changes because it includes a plurality of virtual objects that can be manipulated to perform operations. In this application, for a constantly changing virtual scene, the first device can repeatedly determine the target viewing angle of the virtual scene and adjust the scene picture displayed in the target interface accordingly, so that the target interface always displays the most complex scene picture and the second device is always in its highest rendering-load state while displaying the target interface, which satisfies the performance test requirement of the second device and improves test efficiency. That is, the operation of the first device determining the target viewing angle according to the aggregation of virtual objects in the virtual scene, and the operation of the first device instructing the second device to display the scene picture of the virtual scene at the target viewing angle in the target interface, may be performed once every certain period, for example, once every two seconds. In other words, the first device may re-determine the target viewing angle at regular intervals and instruct the second device to display the scene picture of the virtual scene at that viewing angle.
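As an illustration of this periodic adjustment, the following Python sketch shows one way the first device could drive the loop. The helper callables (screen capture, aggregation analysis, and view triggering) are hypothetical placeholders for the steps described in the sections below, and the two-second interval simply matches the example above.

```python
import time
from typing import Callable, Optional, Tuple

def run_view_adjustment_loop(
    capture_screen: Callable[[], bytes],
    determine_target_view: Callable[[bytes], Optional[Tuple[int, int]]],
    trigger_view: Callable[[Tuple[int, int]], None],
    duration_s: float = 600.0,
    interval_s: float = 2.0,
) -> None:
    """Periodically re-determine the target viewing angle and instruct the
    second device to display the corresponding scene picture."""
    deadline = time.time() + duration_s
    while time.time() < deadline:
        screenshot = capture_screen()               # request a screen image
        target = determine_target_view(screenshot)  # aggregation analysis on the scene map
        if target is not None:
            trigger_view(target)                    # e.g. tap or long-press on the scene map
        time.sleep(interval_s)                      # e.g. re-determine every two seconds
```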
Optionally, the operation of the first device instructing the second device to display the target interface may be: the first device starts a human-computer interaction application in the second device and then instructs the human-computer interaction application in the second device to display the target interface.
The human-computer interaction application is the application that provides the virtual scene. For example, the human-computer interaction application may be a game application, the game application provides the game scene, and the target interface is a game interface in the game application.
Optionally, before the first device instructs the human-computer interaction application in the second device to display the target interface, it may also start a test tool in the second device.
The test tool is used to test the performance of the second device. It can measure the second device's average frame rate, jitter rate, number of stutters per hour, maximum number of dropped frames, power consumption, central processing unit (CPU) utilization, CPU temperature, and other performance indicators. The average frame rate, jitter rate, stutters per hour, and maximum dropped frames reflect how much stuttering a user perceives when using an interactive application on the phone: the average frame rate reflects overall fluency, the jitter rate reflects sudden stutters, the stutters per hour quantify how often stuttering occurs, and the maximum number of dropped frames reflects the worst stutter. The CPU temperature can reflect the heat a user feels while using the interactive application, and the power consumption reflects how quickly the battery drains.
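As a rough illustration only, the sketch below derives a few of these indicators from a list of frame timestamps. The stutter criterion and the definition of the jitter rate are assumptions made for illustration, not the formulas used by any particular test tool.

```python
from typing import Dict, List

def frame_metrics(frame_times: List[float], target_fps: float = 60.0) -> Dict[str, float]:
    """frame_times: monotonically increasing frame timestamps in seconds."""
    if len(frame_times) < 2:
        return {}
    intervals = [b - a for a, b in zip(frame_times, frame_times[1:])]
    duration = frame_times[-1] - frame_times[0]
    if duration <= 0:
        return {}
    expected = 1.0 / target_fps
    stutters = [dt for dt in intervals if dt > 2 * expected]        # assumed stutter criterion
    max_dropped = max(max(0, int(dt / expected) - 1) for dt in intervals)
    return {
        "average_fps": len(intervals) / duration,                   # overall fluency
        "jitter_rate": len(stutters) / len(intervals),              # assumed definition
        "stutters_per_hour": len(stutters) / duration * 3600.0,
        "max_dropped_frames": float(max_dropped),                   # worst single stutter
    }
```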
In this application, the first device may start both the human-computer interaction application and the test tool in the second device. Once the test tool is started, it can automatically test the performance of the second device. Because the second device also runs the human-computer interaction application, the performance of the second device while running that application can be tested automatically after the test tool is started.
As an example, after the scene picture in the target interface has finished being displayed, the first device may end the test, or the first device may first instruct the second device to exit the target interface and then end the test. After the first device ends the test, the test tool in the second device can output the performance test result. In this way, the performance test of the second device while it runs the human-computer interaction application is completed.
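For instance, if the first device is connected to the second device over adb, starting the test tool and the human-computer interaction application could look like the sketch below. The component names are hypothetical placeholders, not the names of any real application.

```python
import subprocess

def adb_shell(*args: str) -> str:
    """Run a shell command on the second device over adb and return its output."""
    result = subprocess.run(["adb", "shell", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

def start_activity(component: str) -> None:
    # "am start -n <package>/<activity>" launches an Android activity.
    adb_shell("am", "start", "-n", component)

# Hypothetical component names for illustration only:
# start_activity("com.example.perftool/.MainActivity")   # start the test tool first
# start_activity("com.example.game/.MainActivity")       # then the game application
```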
Generally, the human-computer interaction application displays a main interface after it is started, and a series of options in the application must be triggered to go from the main interface to the target interface. Therefore, in the present application, the first device may send a series of commands to the second device that trigger this series of options, thereby instructing the human-computer interaction application to enter the target interface from the main interface.
In this case, the operation of the first device instructing the human-computer interaction application in the second device to display the target interface may be: setting i = 1 and determining the i-th option image among n option images as the reference option image; sending a first screen-image acquisition command to the second device; after receiving the first screen image sent by the second device, obtaining the position coordinate of the reference option image within the first screen image as the first position coordinate; sending an option trigger command carrying the first position coordinate to the second device, to instruct the second device to trigger the option located at the first position coordinate in the application interface of the human-computer interaction application; and, if i is not equal to n, setting i = i + 1 and re-executing the step of determining the i-th option image among the n option images as the reference option image and the subsequent steps, until i equals n.
Here, n is a positive integer. The n option images correspond one-to-one to n options, and each option image is an image of its corresponding option. The n options are the series of options that must be triggered to go from the main interface of the human-computer interaction application to the target interface. The n option images are ordered, and their order is the trigger order of the corresponding n options, that is, the order in which the n options are triggered on the way from the main interface to the target interface. The option corresponding to the last option image (i.e., the n-th option image) is the one that opens the target interface in the human-computer interaction application; that is, after the option corresponding to the n-th option image is triggered, the human-computer interaction application displays the target interface.
If i is not equal to n, the first device determines that the current reference option image is not the n-th option image, sets i = i + 1, selects the next option image as the reference option image, and continues to instruct the second device to trigger the next option in the application interface of the human-computer interaction application. In this way, starting from the 1st of the n option images, the second device is instructed to trigger, one by one, each of the n options corresponding to the n option images, until the target interface is entered after the last of the n options is triggered.
If i equals n, the first device determines that the current reference option image is the n-th option image and that the option just triggered by the second device is the one corresponding to the n-th option image, that is, the last of the n options, so the first device can conclude that the human-computer interaction application has displayed the target interface.
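This loop can be implemented, for example, with screenshots pulled over adb and OpenCV template matching to locate each option image. The sketch below is written under those assumptions: the screen-image acquisition command and the option trigger command are realized here with `adb exec-out screencap` and `adb shell input tap`, and the match threshold and wait time are illustrative values rather than anything prescribed by the method itself.

```python
import subprocess
import time
from typing import List, Tuple

import cv2
import numpy as np

def grab_screen() -> np.ndarray:
    """Fetch the current screen image of the second device."""
    raw = subprocess.run(["adb", "exec-out", "screencap", "-p"],
                         capture_output=True, check=True).stdout
    return cv2.imdecode(np.frombuffer(raw, np.uint8), cv2.IMREAD_COLOR)

def locate_option(template_path: str, screen: np.ndarray) -> Tuple[float, Tuple[int, int]]:
    """Return the match score and centre coordinate of an option image on screen."""
    tmpl = cv2.imread(template_path, cv2.IMREAD_COLOR)
    result = cv2.matchTemplate(screen, tmpl, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    h, w = tmpl.shape[:2]
    return score, (top_left[0] + w // 2, top_left[1] + h // 2)

def enter_target_interface(option_images: List[str]) -> None:
    """Trigger the n options one by one, in order, until the target interface opens."""
    for path in option_images:                      # the i-th option image is the reference
        score, (x, y) = locate_option(path, grab_screen())
        if score < 0.8:                             # assumed confidence threshold
            raise RuntimeError(f"option image {path} not found on the current screen")
        subprocess.run(["adb", "shell", "input", "tap", str(x), str(y)], check=True)
        time.sleep(2)                               # give the interface time to change
```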
Optionally, the target interface is further configured to display a scene map of the virtual scene, and the scene map includes virtual object identifiers. At least two of the plurality of virtual objects belong to different organizations, and virtual objects of different organizations have an adversarial relationship. Therefore, the more virtual objects of different organizations are gathered in a certain area of the virtual scene, the more complex the operations those virtual objects perform in that area, and thus the more complex the scene picture containing that area.
In this case, the operation of the first device determining the target viewing angle according to the aggregation of virtual objects in the virtual scene may be: the first device sends a second screen-image acquisition command to the second device; after receiving the second screen image sent by the second device, it identifies the scene map within the second screen image; and it determines the target viewing angle according to the degree to which the virtual object identifiers of different organizations are aggregated in the scene map.
Since the scene map of the virtual scene is displayed in the target interface and the second screen image is an image of the target interface, the scene map can be identified within the second screen image.
Optionally, the operation of the first device identifying the scene map in the second screen image may be: the first device determines the position coordinates of a reference icon in the second screen image, determines a second position coordinate range according to those position coordinates and a map position determination rule, and determines the image within the second position coordinate range of the second screen image to be the scene map.
Generally, the target interface contains icons that have a fixed positional relationship with the scene map, and such icons can be used to locate the scene map. In this application, an icon that can locate the scene map may be designated in advance as the reference icon, and a map position determination rule may be set in advance according to the positional relationship between the reference icon and the scene map. The map position determination rule reflects that positional relationship and is used to calculate the position coordinate range of the scene map from the position coordinates of the reference icon. Therefore, after the first device receives the second screen image sent by the second device, it can locate, and thereby identify, the scene map in the second screen image directly from the position coordinates of the reference icon in the second screen image and the map position determination rule.
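One simple form of such a rule is a fixed set of pixel offsets between the reference icon and the scene map, calibrated in advance for a given game and screen resolution. The sketch below assumes that form; the offset values are hypothetical.

```python
from typing import Dict, Tuple

import numpy as np

# Hypothetical map position determination rule: offsets (in pixels) from the
# reference icon centre to the edges of the scene map, calibrated beforehand.
MAP_RULE: Dict[str, int] = {"left": 10, "right": 260, "top": 10, "bottom": 260}

def crop_scene_map(screen: np.ndarray, icon_centre: Tuple[int, int],
                   rule: Dict[str, int] = MAP_RULE) -> np.ndarray:
    """Cut the scene map out of the second screen image using the reference icon position."""
    x, y = icon_centre
    x0, x1 = x + rule["left"], x + rule["right"]
    y0, y1 = y + rule["top"], y + rule["bottom"]
    return screen[y0:y1, x0:x1]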
Virtual object identifiers of different organizations may be distinguished by color or by shape. For example, if the virtual scene includes virtual objects of two organizations, the scene map correspondingly includes virtual object identifiers of two organizations. The identifiers of the two organizations may be distinguished by color; for example, both organizations' identifiers are circular, with one organization's identifiers shown in blue and the other's in red. Alternatively, they may be distinguished by shape, for example one organization's identifiers are circles and the other's are squares.
The first device may identify each organization's virtual object identifiers in the scene map, and then determine the target viewing angle according to the degree to which the identifiers of different organizations are aggregated in the scene map.
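When the identifiers are distinguished by color, as in the blue/red example above, each organization's identifiers can be picked out of the scene map with a color threshold. The HSV ranges below are illustrative assumptions and would need tuning for an actual scene map; note that red hues wrap around 0 and may require a second range.

```python
from typing import List, Tuple

import cv2
import numpy as np

def identifier_centres(map_bgr: np.ndarray,
                       hsv_lo: Tuple[int, int, int],
                       hsv_hi: Tuple[int, int, int]) -> List[Tuple[int, int]]:
    """Return the centre coordinates of all identifiers within one colour range."""
    hsv = cv2.cvtColor(map_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centres = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centres.append((int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))
    return centres

# Illustrative colour ranges for two organizations' identifiers:
# blue_ids = identifier_centres(scene_map, (100, 120, 120), (130, 255, 255))
# red_ids  = identifier_centres(scene_map, (0, 120, 120), (10, 255, 255))
```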
Optionally, the operation of the first device determining the target viewing angle according to the degree of aggregation of the virtual object identifiers of different organizations in the scene map may be: determining a plurality of aggregation areas in the scene map; determining the aggregation area that contains the largest number of virtual object identifiers as the reference area; and determining the viewing angle corresponding to the reference area in the scene map as the target viewing angle.
Each aggregation area contains virtual object identifiers of different organizations; that is, an aggregation area is an area in which virtual object identifiers of different organizations are gathered. The size of an aggregation area may be preset, or it may be determined according to the size of the virtual object identifiers.
Since each of the plurality of aggregation areas is an area in which virtual object identifiers of different organizations are gathered, and the reference area is the one among them that contains the most identifiers, that is, the aggregation area with the highest degree of aggregation, the reference area is the area, among all areas where identifiers of different organizations are gathered, that contains the most virtual object identifiers.
The viewing angle corresponding to the reference area in the scene map is the viewing angle used to display the scene picture at the location in the virtual scene where the reference area lies. Since the reference area is an area in which virtual object identifiers of different organizations are gathered, and it contains the largest number of such identifiers, the scene picture at its location is relatively complex and is the most complex scene picture in the virtual scene, so the viewing angle corresponding to the reference area in the scene map can be determined as the target viewing angle.
Optionally, the operation of the first device determining the plurality of aggregation areas in the scene map may be: the first device determines the target area in which each of the plurality of virtual object identifiers in the scene map is located, obtaining a plurality of target areas; and it determines each target area that contains virtual object identifiers of different organizations as an aggregation area.
For any virtual object identifier, the target area in which it is located is an area whose center coincides with the center of the identifier, whose shape is the same as the shape of the identifier, and whose size is k times the size of the identifier, where k is an integer greater than or equal to 2. For example, if k = 3 and the virtual object identifier is a circle of radius r, the target area in which it is located may be a circle of radius 3r.
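Putting the previous two paragraphs together, the following sketch selects the reference area from the identifier centres of two organizations. Circular identifiers of radius r and k = 3 are assumed as in the example; the sketch returns the centre of the winning target area (averaging the coordinates of the contained identifiers, as described later, would also be possible).

```python
from math import hypot
from typing import List, Optional, Tuple

Point = Tuple[int, int]

def reference_area_centre(org_a: List[Point], org_b: List[Point],
                          r: float = 6.0, k: int = 3) -> Optional[Point]:
    """Return the centre of the aggregation area containing the most identifiers."""
    labelled = [(p, "a") for p in org_a] + [(p, "b") for p in org_b]
    best_centre, best_count = None, 0
    for (cx, cy), _ in labelled:
        # Target area: circle of radius k*r centred on this identifier.
        inside = [(p, org) for p, org in labelled
                  if hypot(p[0] - cx, p[1] - cy) <= k * r]
        orgs = {org for _, org in inside}
        # Aggregation area: contains identifiers of both organizations.
        if len(orgs) == 2 and len(inside) > best_count:
            best_centre, best_count = (cx, cy), len(inside)
    return best_centre   # None if no area mixes the two organizations
```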
Optionally, the operation of the first device instructing the second device to display the scene picture of the virtual scene at the target viewing angle in the target interface may be: acquiring the position coordinate of the reference area in the second screen image as the second position coordinate; and sending a contact trigger command carrying the second position coordinate to the second device, to instruct the second device to trigger a contact point at the second position coordinate in the target interface and then display the scene picture of the virtual scene at the target viewing angle.
The position coordinate of the reference area in the second screen image may be the position coordinate of the center of the reference area in the second screen image, or it may be the average of the position coordinates, in the second screen image, of the centers of all virtual object identifiers contained in the reference area.
The contact trigger command is used to instruct the second device to trigger a contact point at the second position coordinate in the target interface. After the second device triggers a contact point at the second position coordinate, it has in effect triggered a contact point at the reference area in the scene map, so the target interface displays the scene picture of the virtual scene at the target viewing angle. The second device's rendering load when displaying that picture is the highest, which satisfies the performance test requirement of the second device and improves test efficiency.
As an example, after the second device triggers a contact point at the second position coordinate, that is, after it triggers a contact point at the reference area in the scene map, the target interface continues to display the scene picture of the virtual scene at the target viewing angle even if the contact point is not triggered continuously, as long as no other point in the scene map is triggered. For example, in the game interface in the fighting mode, after the second device simulates a click on the reference area in the minimap, the game interface continues to display the game picture of the game scene at the viewing angle corresponding to the most recent click point in the minimap (i.e., the target viewing angle) even if no further clicks are made on that area.
In this case, the contact trigger command carrying the second position coordinate that the first device sends to the second device may be a single trigger command for the contact point at the second position coordinate. For example, the first device may send a click command for the contact point at the second position coordinate, and after receiving the click command, the second device may use the Android system tool sendevent to simulate a click operation on the contact point at the second position coordinate.
As another example, after the second device triggers a contact point at the second position coordinate, that is, after it triggers a contact point at the reference area in the scene map, the contact point needs to be triggered continuously for the target interface to keep displaying the scene picture of the virtual scene at the target viewing angle. For example, in the game interface in the battle mode, the second device needs to simulate a long-press operation on the reference area in the minimap so that the game interface continuously displays the game picture of the game scene at the viewing angle corresponding to the long-press point in the minimap (i.e., the target viewing angle).
In this case, the contact trigger command carrying the second position coordinate that the first device sends to the second device may be a continuous trigger command for the contact point at the second position coordinate. For example, the first device may send a long-press command for the contact point at the second position coordinate, and after receiving the long-press command, the second device may use the Android system tool sendevent to simulate a long-press operation on the contact point at the second position coordinate.
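The two kinds of contact-point trigger command can be simulated in several ways. The sketch below uses `adb shell input` (a tap for the single trigger, and a zero-distance swipe held for a given duration for the long press) instead of the sendevent tool mentioned above; this is an assumed substitute for illustration, and the coordinates would first have to be mapped from the second screen image to the device's actual screen resolution if the two differ.

```python
import subprocess

def tap(x: int, y: int) -> None:
    """Single trigger of the contact point at (x, y)."""
    subprocess.run(["adb", "shell", "input", "tap", str(x), str(y)], check=True)

def long_press(x: int, y: int, duration_ms: int = 1500) -> None:
    """Continuous trigger: a swipe that starts and ends at the same point acts as a long press."""
    subprocess.run(["adb", "shell", "input", "swipe",
                    str(x), str(y), str(x), str(y), str(duration_ms)], check=True)
```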
In a second aspect, a test apparatus is provided, which has the function of implementing the behavior of the test method in the first aspect. The testing device comprises at least one module, and the at least one module is used for realizing the testing method provided by the first aspect.
In a third aspect, a test apparatus is provided, where the structure of the test apparatus includes a processor and a memory, and the memory is used to store a program that supports the test apparatus to execute the test method provided in the first aspect, and to store data used to implement the test method in the first aspect. The processor is configured to execute programs stored in the memory. The test apparatus may further comprise a communication bus for establishing a connection between the processor and the memory.
In a fourth aspect, there is provided a computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the testing method of the first aspect described above.
In a fifth aspect, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the testing method of the first aspect described above.
The technical effects obtained by the second, third, fourth and fifth aspects are similar to the technical effects obtained by the corresponding technical means in the first aspect, and are not described herein again.
Drawings
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 2 is a block diagram of a software system of a terminal according to an embodiment of the present disclosure;
FIG. 3 is a schematic view of a game interface provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of an application interface of a first game application provided in an embodiment of the present application;
FIG. 5 is a schematic view of a game interface in a match mode according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of an application interface of a second game application provided in an embodiment of the present application;
FIG. 7 is a schematic view of a game interface in a spectator mode according to an embodiment of the present application;
FIG. 8 is a schematic illustration of an implementation environment provided by an embodiment of the present application;
FIG. 9 is a schematic illustration of another example implementation environment provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of a test interface of a first test tool provided in an embodiment of the present application;
FIG. 11 is a schematic diagram of a test interface of a second test tool provided in an embodiment of the present application;
FIG. 12 is a flow chart of a testing method provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of an option image set corresponding to a match mode according to an embodiment of the present application;
FIG. 14 is a schematic diagram of an option image set corresponding to a spectator mode according to an embodiment of the present application;
FIG. 15 is a schematic diagram of a minimap provided by an embodiment of the present application;
FIG. 16 is a schematic view of another minimap provided by an embodiment of the present application;
FIG. 17 is a schematic diagram of yet another minimap provided by an embodiment of the present application;
FIG. 18 is a schematic diagram of black and white images of channels of a minimap provided in an embodiment of the present application;
FIG. 19 is a black and white image of a green channel of a minimap according to an embodiment of the present disclosure;
FIG. 20 is a schematic diagram of a target area in a small map provided by an embodiment of the present application;
FIG. 21 is a schematic view of another game interface provided by embodiments of the present application;
FIG. 22 is a schematic diagram of a testing process provided by an embodiment of the present application;
fig. 23 is a schematic structural diagram of a testing apparatus according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that "a plurality" in this application means two or more. In the description of the present application, "/" means "or" unless otherwise stated; for example, A/B may mean A or B. "And/or" herein only describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. In addition, for the convenience of clearly describing the technical solutions of the present application, the terms "first", "second", and the like are used to distinguish identical or similar items having substantially the same functions and effects. Those skilled in the art will appreciate that the terms "first", "second", and the like do not denote any quantity or order of execution, nor any order of importance.
Before explaining the test method provided by the embodiment of the present application in detail, the terminal according to the embodiment of the present application will be explained.
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application. Referring to fig. 1, the terminal 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation to the terminal 100. In other embodiments of the present application, terminal 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the terminal 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces, such as an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C interfaces. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C interfaces. Such as: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, such that the processor 110 and the touch sensor 180K communicate through an I2C interface to implement the touch function of the terminal 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S interfaces. The processor 110 may be coupled to the audio module 170 via an I2S interface to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset.
The UART interface is a universal serial data bus used for asynchronous communications. The UART interface may be a bi-directional communication bus. The UART interface may convert data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. Such as: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of terminal 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the terminal 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the terminal 100, and may also be used to transmit data between the terminal 100 and peripheral devices. And the earphone can also be used for connecting an earphone and playing audio through the earphone. The USB interface 130 may also be used to connect other terminals, such as AR devices, etc.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an exemplary illustration, and does not limit the structure of the terminal 100. In other embodiments of the present application, the terminal 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the terminal 100. The charging management module 140 may also supply power to the terminal 100 through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the terminal 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in terminal 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. Such as: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication and the like applied to the terminal 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to the terminal 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, the antenna 1 of the terminal 100 is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160, so that the terminal 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, among others. GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The terminal 100 implements a display function through the GPU, the display screen 194, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the terminal 100 may include 1 or N display screens 194, where N is an integer greater than 1.
The terminal 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, and the application processor, etc.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter opens, light passes through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also run algorithms to optimize the noise, brightness, and skin tone of the image, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the terminal 100 may include 1 or N cameras 193, N being an integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the terminal 100 selects a frequency point, the digital signal processor is configured to perform fourier transform or the like on the frequency point energy.
Video codecs are used to compress or decompress digital video. The terminal 100 may support one or more video codecs. In this way, the terminal 100 can play or record video in a plurality of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor, which processes input information quickly by referring to a biological neural network structure, for example, by referring to a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU can implement applications such as intelligent recognition of the terminal 100, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the terminal 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving files of music, video, etc. in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the terminal 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (e.g., audio data, a phonebook, etc.) created during use of the terminal 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The terminal 100 can implement audio functions, such as music playing, recording, etc., through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The terminal 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the terminal 100 receives a call or voice information, it can receive voice by bringing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can input a voice signal to the microphone 170C by speaking the user's mouth near the microphone 170C. The terminal 100 may be provided with at least one microphone 170C. In other embodiments, the terminal 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the terminal 100 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording functions, and so on.
The headphone interface 170D is used to connect a wired headphone. The headphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as a resistive pressure sensor, an inductive pressure sensor, and a capacitive pressure sensor. The capacitive pressure sensor may be a sensor comprising at least two parallel plates made of an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The terminal 100 determines the intensity of the pressure according to the change in the capacitance. When a touch operation is applied to the display screen 194, the terminal 100 detects the intensity of the touch operation based on the pressure sensor 180A. The terminal 100 may also calculate the touched position based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that are applied to the same touch position but have different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is smaller than a pressure threshold acts on the short message application icon, an instruction for viewing a short message is executed; when a touch operation whose intensity is greater than or equal to the pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
The gyro sensor 180B may be used to determine the motion attitude of the terminal 100. In some embodiments, the angular velocity of the terminal 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. Illustratively, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the terminal 100, calculates the distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the terminal 100 by a reverse movement, thereby achieving anti-shake. The gyro sensor 180B may also be used in navigation and somatosensory gaming scenarios.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, the terminal 100 calculates an altitude from the barometric pressure measured by the barometric pressure sensor 180C to assist in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The terminal 100 may detect the opening and closing of a flip leather case using the magnetic sensor 180D. In some embodiments, when the terminal 100 is a flip phone, the terminal 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D. Features such as automatic unlocking upon flipping open can then be set according to the detected opening/closing state of the leather case or of the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the terminal 100 in various directions (generally, three axes). The magnitude and direction of gravity can be detected when the terminal 100 is stationary. The acceleration sensor 180E may also be used to identify the posture of the terminal 100, and be applied to horizontal and vertical screen switching, pedometer, and the like.
A distance sensor 180F for measuring a distance. The terminal 100 may measure the distance by infrared or laser. In some embodiments, in a shooting scene, the terminal 100 may utilize the distance sensor 180F to range for fast focus.
The proximity light sensor 180G may include a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The terminal 100 emits infrared light outward through the light emitting diode. The terminal 100 detects infrared light reflected from a nearby object using the photodiode. When sufficient reflected light is detected, the terminal 100 can determine that there is an object near the terminal 100. When insufficient reflected light is detected, the terminal 100 can determine that there is no object near the terminal 100. The terminal 100 can use the proximity light sensor 180G to detect that the user is holding the terminal 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used in a leather case mode or a pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. The terminal 100 may adaptively adjust the brightness of the display 194 according to the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the terminal 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The terminal 100 can utilize the collected fingerprint characteristics to realize fingerprint unlocking, access to an application lock, fingerprint photographing, fingerprint incoming call answering, and the like.
The temperature sensor 180J is used to detect temperature. In some embodiments, the terminal 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the terminal 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the terminal 100 heats the battery 142 when the temperature is below another threshold, to avoid abnormal shutdown of the terminal 100 due to low temperature. In other embodiments, when the temperature is lower than a further threshold, the terminal 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
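Purely as an illustration, the threshold-based temperature processing strategy described above can be sketched as follows; the threshold values and the helper name are hypothetical and are not taken from this embodiment.

```python
# Hypothetical sketch of the threshold-based temperature processing strategy
# described above. Threshold values and names are illustrative only.
THROTTLE_THRESHOLD_C = 45.0   # above this, reduce nearby processor performance
HEAT_THRESHOLD_C = 0.0        # below this, heat the battery
BOOST_THRESHOLD_C = -10.0     # below this, boost the battery output voltage

def apply_thermal_policy(temperature_c: float) -> str:
    """Return the action taken for a temperature reported by sensor 180J."""
    if temperature_c > THROTTLE_THRESHOLD_C:
        return "throttle_processor"      # reduce performance to cut power and heat
    if temperature_c < BOOST_THRESHOLD_C:
        return "boost_battery_voltage"   # avoid abnormal shutdown at very low temperature
    if temperature_c < HEAT_THRESHOLD_C:
        return "heat_battery"            # avoid abnormal shutdown due to low temperature
    return "no_action"
```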
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor 180K may pass the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on the surface of the terminal 100 at a different position than the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the bone mass vibrated by the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive a blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset and integrated into a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal, acquired by the bone conduction sensor 180M, of the bone mass vibrated by the vocal part, so as to implement a voice function. The application processor may parse out heart rate information based on the blood pressure pulsation signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The terminal 100 may receive a key input, and generate a key signal input related to user setting and function control of the terminal 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. Touch operations applied to different areas of the display screen 194 may also correspond to different vibration feedback effects. Different application scenes (such as time reminding, information receiving, alarm clock, games and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be inserted into or pulled out of the SIM card interface 195 to be brought into contact with or separated from the terminal 100. The terminal 100 may support 1 or N SIM card interfaces, where N is an integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The terminal 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the terminal 100 employs an eSIM, namely an embedded SIM card. The eSIM card can be embedded in the terminal 100 and cannot be separated from the terminal 100.
Next, a software system of the terminal 100 will be explained.
The software system of the terminal 100 may adopt a hierarchical architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. In the embodiment of the present application, an Android (Android) system with a layered architecture is taken as an example to exemplarily describe a software system of the terminal 100.
Fig. 2 is a block diagram of a software system of the terminal 100 according to an embodiment of the present disclosure. Referring to fig. 2, the layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and system library layer, and a kernel layer.
The application layer may include a series of application packages. As shown in fig. 2, the application packages may include camera, gallery, calendar, phone, map, navigation, WLAN, bluetooth, music, games, short messages, etc. applications.
The application framework layer provides an application programming interface (API) and a programming framework for the application programs of the application layer. The application framework layer includes a number of predefined functions. As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like. The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, take a screenshot, and the like. The content provider is used to store and retrieve data and make the data accessible to applications; the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, and so on. The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system can be used to construct the display interface of an application program, and the display interface may be composed of one or more views, such as a view for displaying a short message notification icon, a view for displaying text, and a view for displaying pictures. The phone manager is used to provide communication functions of the terminal 100, such as management of call states (including connected, disconnected, etc.). The resource manager provides various resources, such as localized strings, icons, pictures, layout files, and video files, to applications. The notification manager enables an application to display notification information in the status bar; it can be used to convey notification-type messages that disappear automatically after a short stay and do not require user interaction. For example, the notification manager is used to notify download completion, give message alerts, and the like. A notification may also appear in the top status bar of the system in the form of a chart or scroll-bar text, such as a notification of an application running in the background, or appear on the screen in the form of a dialog window, for example a text message prompted in the status bar, a prompt tone, vibration of the electronic device, or flashing of an indicator light.
The Android runtime comprises a core library and a virtual machine, and is responsible for scheduling and managing the Android system. The core library comprises two parts: one part is the function libraries that the Java language needs to call, and the other part is the core library of Android. The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, such as a surface manager, media libraries, three-dimensional graphics processing libraries (e.g., OpenGL ES), and 2D graphics engines (e.g., SGL). The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications. The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media libraries may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG. The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, compositing, layer processing, and the like. The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following describes exemplary workflow of the software and hardware of the terminal 100 in connection with a game application start scenario.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into an original input event (including touch coordinates, a time stamp of the touch operation, and other information). The raw input events are stored at the kernel layer. And the application program framework layer acquires the original input event from the kernel layer and identifies the control corresponding to the original input event. Taking the touch operation as a click operation, and taking the control corresponding to the click operation as the control of the game application icon as an example, the game application calls the interface of the application program framework layer to start the game application, and then calls the kernel layer to start the display driver, and displays the application interface of the game application through the display screen 194.
Next, an application scenario of the test method provided in the embodiment of the present application is described.
With the development of network technology, more and more man-machine interactive applications, such as game applications, appear. When a user uses man-machine interaction application on a terminal, the requirements on smoothness, heating, touch delay, network delay and the like are extremely high. In order to improve the user satisfaction, a tester needs to test the performance of the terminal in the process of running the human-computer interaction application so as to find and solve problems in advance. The testing method provided by the embodiment of the application is applied to a scene in which the performance of the terminal in the running process of the human-computer interaction application needs to be tested. For example, when a game application installed in a mobile phone is used to play a game, the test method provided by the embodiment of the application can be used to test the performance of the mobile phone in the game process.
Next, the man-machine interactive application will be explained.
The human-computer interaction application can provide a virtual scene containing virtual objects for a user, and the user can control the virtual objects in the virtual scene.
The human-computer interaction application generally has a target interface, the target interface is used for displaying a scene picture of a virtual scene provided by the human-computer interaction application, and a user can control a virtual object of the user through the target interface. Besides the virtual objects which can be controlled by the user, the virtual scene also comprises a virtual environment, and the user can control the virtual object to execute operation in the virtual environment.
After the human-computer interaction application is started, the main interface is displayed, and then a series of options can be triggered through specified operations (including but not limited to click operations, sliding operations, gesture operations, voice operations, or body sensing operations) to enter a target interface for displaying a scene picture of a virtual scene from the main interface.
As an example, the human-computer interaction application can be a game application, and a user can manipulate a game character in a game scene provided by the game application. For example, the game application may be a multiplayer online battle arena game (MOBA) application, which provides a game in which multiple user accounts compete in the same game scene. The game play method of the MOBA application is as follows: all game characters in a game scene are divided into a plurality of organizations (also called teams), the game characters in the plurality of organizations compete with each other in the game, and each player operates the own game character to compete with the game characters of other organizations.
The game application is provided with a game interface, the game interface can display a game picture of a game scene provided by the game application, the game picture can contain a game environment and game characters, the game environment can comprise scenes such as roads, stone walls, plants, defense buildings and the like, and the game environment can also comprise NPCs such as wild monsters, soldiers and the like. The user can control the own game role to execute the operation in the game environment through the game interface.
Fig. 3 is a schematic view of a game interface provided in an embodiment of the present application.
Referring to FIG. 3, a game interface 300 displays a game view with game scenes that contain a game environment (including, but not limited to, scenery, NPC, etc.) and game characters 301. The game character 301 is in a game environment, and can perform operations in the game environment.
To facilitate the user's knowledge of the scenes in the game environment, the NPCs, and where the game character 301 is located in the game scene, the game interface 300 may also display a small map 302 of the game scene, as shown in fig. 3. The minimap 302 contains the environment icons of the various scenes and NPCs in the game environment, as well as the character avatar containing the game character 301. The positions of the environment icons in the minimap 302 may indicate where the respective scenes and NPCs in the game environment are located in the game scene, and the positions of the character avatars in the minimap 302 may indicate where the game character 301 is located in the game scene.
Optionally, as shown in fig. 3, the game interface may further display some virtual keys 303 for manipulating the game character 301, and the user may manipulate the game character 301 to perform operations in the game environment through the virtual keys 303. For example, the user may manipulate his or her game character 301 to move on a road in the game environment through the virtual keys 303, manipulate his or her game character 301 to attack a defensive building in the game environment, manipulate his or her game character 301 to interact with an NPC in the game environment, and so on.
The game application may have different game modes, including but not limited to a battle mode and a spectator mode. The battle mode means that, after opening the game interface, the user selects a game character and controls the game character to perform operations in the game environment through the game interface. The spectator mode means that the user opens the game interface of another player and watches that player's game character perform operations in the game environment. After the game application is started, the main interface is displayed first, and the game interfaces in the different game modes can then be entered through different options.
For convenience of description, the following description will be given by taking the terminal as a mobile phone, and taking a manner that a user clicks an option in an application interface of a game application running in the mobile phone to trigger entry into a next interface as an example.
The following describes a process of entering the game interface in the battle mode.
Fig. 4 is a schematic diagram of an application interface of a game application provided in an embodiment of the present application. Fig. 4, a-d, shows the entire process of entering the game interface in the battle mode from the main interface through a series of options in the game application. This will be specifically explained below.
After the game application is started on the mobile phone, a main interface 401 as shown in a diagram a in fig. 4 is displayed, and a user head portrait, a user nickname, a fighting option 4011 and the like are displayed on the main interface 401. Thereafter, the user clicks on the fight option 4011. After detecting that the fighting option 4011 is clicked, the mobile phone enters a next application interface, that is, an interface 402 for selecting a fighting form as shown in b in fig. 4 is entered, and a matching option 4021 and a man-machine option 4022 are displayed in the interface 402. Thereafter, the user clicks on the man-machine option 4022. After detecting that the man-machine option 4022 is clicked, the mobile phone enters a next application interface, that is, an interface 403 for selecting a game character as shown in a diagram c in fig. 4 is entered, wherein character avatars 4031 of a plurality of game characters are displayed in the interface 403, and a determination option 4032 is also displayed. Then, after the user sequentially clicks the character avatar 4031 and the determination option 4032 of one game character, the next application interface is entered, that is, the game interface 404 in the battle mode shown in d in fig. 4 is entered. The game interface 404 displays a game screen of a game scene including a game environment and a game character 4041. Also, the game interface 404 may also display a minimap 4042 of the game scene. The minimap 4042 contains environment icons for each scene and NPC in the game environment, and a character avatar containing the game character 4041. The game interface 404 may also display virtual keys 4043 for manipulating the game character 4041, and the user may manipulate his or her own game character 4041 through the virtual keys 4043 to perform operations in the game environment.
It should be understood that the content shown in fig. 4 is only an example, and the embodiment of the present application only takes the respective application interfaces and the options in the respective application interfaces shown in fig. 4 as an example to describe the process of entering the game interface in the battle mode from the main interface of the game application, and the content shown in fig. 4 does not limit the embodiment of the present application. In practical applications, the game interface in the battle mode may be entered from the main interface of the game application through other application interfaces and other options, which is not limited in the embodiment of the present application.
Next, the display of the game screen on the game interface in the battle mode will be described with reference to fig. 5.
FIG. 5 is a schematic view of a game interface in a match-up mode according to an embodiment of the present disclosure. Referring to fig. 5, a game screen of a game scene is displayed in the game interface 500, and the game screen includes a game environment and a game character 501. Also, the game interface 500 may also display a minimap 502 of the game scene.
In the case where the user does not click on the minimap 502, as shown in a diagram a in fig. 5, a game screen of the game scene in the view of the user's own game character 5011, that is, a game screen corresponding to the place where the user's own game character 5011 is located in the game scene, is displayed in the game interface 500.
In the case that the user clicks the minimap 502, for example clicks point a in the minimap 502 as shown in diagram a in fig. 5, the view angle corresponding to point a in the minimap 502 is taken as the display view angle, and the game screen of the game scene at that display view angle is then displayed in the game interface, as shown in diagram b in fig. 5; that is, the game screen corresponding to the place where point a in the minimap 502 is located in the game scene is displayed.
In this case, if the user holds down a certain point on the minimap 502, the game screen corresponding to the point where the user holds down the minimap 502 is displayed on the game interface. If the user slides on the small map 502, the game interface displays a series of game pictures corresponding to a series of places where the sliding track of the user in the small map 502 is located in the game scene.
If the user raises his hand after clicking the minimap 502 and does not continue to click the minimap 502, the game interface jumps back to display the game scene at the view angle of the user's own game character 5011. For example, when the user clicks point a in the minimap 502, the game screen shown in fig. 5 b is displayed on the game interface, and then, if the user raises his hand and does not continue to click point a in the minimap 502, the game interface will jump back from the game screen shown in fig. 5 b to the game screen shown in fig. 5 a.
The following describes a process of entering the game interface in the spectator mode.
fig. 6 is a schematic diagram of an application interface of a game application provided in an embodiment of the present application. Fig. 6, a-c, shows the overall process of entering the gaming interface in the spectator mode from the main interface through a series of options in the gaming application. This will be specifically explained below.
After the game application is started on the mobile phone, a main interface 601 as shown in diagram a in fig. 6 is displayed, and the main interface 601 displays a user avatar, a user nickname, a battle option, an event option 6011 and the like. Thereafter, the user clicks on the event option 6011. After detecting that the event option 6011 is clicked, the mobile phone enters the next application interface, that is, the interface 602 for selecting an event as shown in diagram b in fig. 6; an event list 6021 is displayed in the interface 602, and the event list 6021 includes play options 6022 for a plurality of events. Thereafter, the user clicks on the play option 6022 of a certain event in the event list 6021. After detecting that the play option 6022 of the event is clicked, the mobile phone enters the next application interface, i.e. enters the game interface 603 in the spectator mode as shown in diagram c in fig. 6. The game interface 603 displays a game screen of a game scene. The game screen includes a game environment and a game character 6031. Also, the game interface 603 may display a minimap 6032 of the game scene. The minimap 6032 contains environment icons of the respective scenes and NPCs in the game environment, as well as a character avatar of the game character 6031.
It should be understood that the content shown in fig. 6 is only an example, and the embodiment of the present application only takes the respective application interfaces and the options in the respective application interfaces shown in fig. 6 as an example to describe the process of entering the game interface in the spectator mode from the main interface of the game application; the content shown in fig. 6 does not limit the embodiment of the present application. In practical applications, the game interface in the spectator mode may also be entered from the main interface of the game application through other application interfaces and other options, which is not limited in the embodiment of the present application.
Next, the display of the game screen on the game interface in the spectator mode will be described with reference to fig. 7.
FIG. 7 is a schematic view of a game interface in a spectator mode according to an embodiment of the present application. Referring to fig. 7, a game screen of a game scene is displayed in the game interface 700, and the game screen includes a game environment and a game character 701. Also, the game interface 700 may also display a small map 702 of the game scene.
When the mobile phone enters the game interface 700 and the user does not click the minimap 702 at the beginning, as shown in a of fig. 7, a game screen of the game scene at the view angle of any one game character 701, that is, a game screen corresponding to the location where the game character 701 is located in the game scene, is displayed in the game interface 700.
In the case that the user clicks the minimap 702, for example clicks point B in the minimap 702 as shown in diagram a in fig. 7, the view angle corresponding to point B in the minimap 702 is taken as the display view angle, and the game screen of the game scene at that display view angle is then displayed in the game interface, as shown in diagram b in fig. 7; that is, the game screen corresponding to the place where point B in the minimap 702 is located in the game scene is displayed.
In this case, if the user holds down a certain point on the minimap 702, the game interface displays the game screen corresponding to the place where the held point in the minimap 702 is located in the game scene. If the user slides on the minimap 702, the game interface displays a series of game screens corresponding to the series of places where the user's sliding track in the minimap 702 is located in the game scene.
If the user raises his hand after clicking the minimap 702 and does not continue to click the minimap 702, the game interface continues to display the game picture corresponding to the place where the last clicked point in the minimap 702 is located in the game scene, until the user clicks the minimap 702 again. At that point, the view angle corresponding to the newly clicked point in the minimap 702 is determined as the new display view angle, and the game picture of the game scene at the new display view angle is displayed in the game interface, that is, the game picture corresponding to the place where the currently clicked point in the minimap 702 is located in the game scene is displayed.
As can be seen from the above description, the game interface in the battle mode is different from the game interface in the spectator mode. Both the game interface in the battle mode and the game interface in the spectator mode display a game picture and a minimap. However, virtual keys for controlling the user's own game character are displayed on the game interface in the battle mode, while no such virtual keys exist on the game interface in the spectator mode. In some embodiments, the game interface in the spectator mode also displays, near the minimap, icons of controls for zooming the minimap in and out, while the game interface in the battle mode does not have such icons.
Next, an implementation environment of the test method provided in the embodiments of the present application is described.
FIG. 8 is a schematic diagram of an implementation environment provided by an embodiment of the application. Referring to fig. 8, the implementation environment includes: a first device 801 and a second device 802.
The first device 801 and the second device 802 may each be a terminal, such as the terminal 100 described above with respect to the embodiments of fig. 1-2. The processing power of the first device 801 may be higher than the processing power of the second device 802. For example, the second device 802 may be a mobile terminal such as a mobile phone or a tablet computer, and the first device 801 may be a terminal with higher processing capability than the second device 802 such as a desktop computer or a notebook computer.
The first device 801 and the second device 802 may communicate through a wired connection or a wireless connection. Referring to fig. 9, the first device 801 is equipped with a first testing tool 8011, and the first testing tool 8011 may control a second device 802, for example, the first testing tool 8011 may be an Airtest tool or other automated testing tool. The second device 802 is equipped with a second test tool 8021 and a human-machine-interactive application 8022.
After the first device 801 establishes a communication connection with the second device 802, the first testing tool 8011 in the first device 801 may control the second device 802 to start the second testing tool 8021 and the human-computer interaction application 8022.
After the second testing tool 8021 and the human-computer interaction application 8022 are run on the second device 802, the second testing tool 8021 may test the performance of the second device 802 in the process that the second device 802 runs the human-computer interaction application 8022. Illustratively, the second testing tool 8021 may test performance indicators such as the average frame rate, the jitter rate, the number of stutters per hour, the maximum number of dropped frames, the power consumption, the CPU utilization, and the CPU temperature.
The first testing tool 8011 may not only control the second device 802 to launch the second testing tool 8021 and the human-computer interaction application 8022, but may also control the operation of the human-computer interaction application 8022. For example, the first testing tool 8011 may control the human-computer interaction application 8022 to enter a target interface for displaying a scene picture of a virtual scene, and after the human-computer interaction application 8022 enters the target interface, the first testing tool 8011 may further control the target interface to display a scene picture with higher complexity, so as to meet a device performance testing requirement and improve device performance testing efficiency.
The first test tool 8011 is described below.
The first test tool 8011 is used to control the second device 802. Alternatively, the code for controlling the second device 802 may be generated in the test interface of the first test tool 8011, and the generated code may be displayed in the test interface of the first test tool 8011. In this manner, control of the second device 802 may be achieved by running code displayed in the test interface of the first test tool 8011.
Fig. 10 is a schematic diagram of a test interface of the first test tool 8011 according to an embodiment of the present disclosure.
As shown in fig. 10, the test interface 1000 of the first test tool 8011 includes an auxiliary window and a code editing window. The auxiliary window displays some commonly used functions, such as a touch (touch operation) function and a swipe (slide operation) function. The code editing window is used for entering code.
In the preparation phase, code for controlling the second device 802 may be entered in the code editing window, using the functions displayed in the auxiliary window, according to how the first test tool 8011 needs to control the second device 802. For example, using the functions displayed in the auxiliary window, code for starting the second test tool 8021 and the human-computer interaction application 8022 is first entered in the code editing window, and then code for controlling the operation of the human-computer interaction application 8022 is entered in the code editing window. In the testing phase, the code displayed in the code editing window is run to control the second device 802. For example, by running the code displayed in the code editing window, the second testing tool 8021 and the human-computer interaction application 8022 can be started, and the operation of the human-computer interaction application 8022 can then be controlled.
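As a rough sketch of the kind of code that might be entered in the code editing window, the example below assumes the first test tool exposes an Airtest-style Python API; the package names and the option image file name are hypothetical placeholders, not values taken from this embodiment.

```python
# Sketch of a control script, assuming the first test tool exposes an
# Airtest-style Python API. Package names and image names are hypothetical.
from airtest.core.api import connect_device, start_app, touch, Template

connect_device("Android:///")        # connect to the device under test over ADB

start_app("com.example.perftest")    # hypothetical package of the second test tool
start_app("com.example.mobagame")    # hypothetical package of the game application

# Trigger an option by matching its option image against the current screen.
touch(Template("battle_option.png"))
```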
The second test tool 8021 is described below.
The second test tool 8021 is used to test the performance of the second device 802. Specifically, after the second testing tool 8021 is started, the performance of the second device 802 may be continuously tested during the operation, and when the second testing tool 8021 stops the operation, the performance test result of the second device 802 may be displayed in the test interface of the second testing tool 8021.
Fig. 11 is a schematic view of a test interface of a second test tool 8021 according to an embodiment of the present disclosure.
As shown in fig. 11, the performance test result of the second device 802 is displayed in the test interface 1100 of the second test tool 8021. The performance test results may include the average frame rate, the jitter rate, the number of stutters per hour, the maximum number of dropped frames, the power consumption, the CPU utilization, the CPU temperature, and the like. Data monitored in real time, such as the CPU utilization and the CPU temperature, can be shown in the form of a graph.
The first device 801 and the second device 802 may execute a testing method described in the embodiment of fig. 12 below to implement a test on the performance of the second device 802 during the process that the second device 802 runs the human-computer interaction application 8022. For convenience of explanation, the first device 801 is hereinafter referred to as a test device, and the second device 802 is hereinafter referred to as a device under test.
Next, the test method provided in the embodiments of the present application will be described.
Fig. 12 is a flowchart of a testing method according to an embodiment of the present application. Referring to fig. 12, the method includes the steps of:
step 1201: the test equipment establishes communication connection with the tested equipment.
In some embodiments, the testing device and the device under test may establish a communication connection through a short-range communication technology. For example, the short-range communication technology may be a USB technology, a Bluetooth technology, a wireless local area network technology, or the like, which is not limited in this embodiment of the present application.
After the communication connection between the test equipment and the tested equipment is established, the test equipment can control the tested equipment so as to realize the performance test of the tested equipment.
Step 1202: the test equipment starts the man-machine interaction application and the second test tool in the tested equipment.
The human-computer interaction application may provide a virtual scene. The virtual scene may include a plurality of virtual objects and may also include a virtual environment. Each virtual object may perform an operation in the virtual environment. For example, the human-computer interaction application may be a game application, the game application providing a game scenario including a plurality of game characters and a game environment, the game environment may include a scene such as a road, a stone wall, a plant, a defensive building, and the like, and may further include an NPC such as a monster, an soldier, and the like. Individual game characters can perform operations in the game environment. For example, for any one game character, the game character may move on the road in the game environment, or the game character may attack a defensive building in the game environment, or the game character may interact with an NPC in the game environment, etc.
The second test tool is a tool for testing the performance of the device under test. The second test tool can test performance indicators of the device under test such as the average frame rate, the jitter rate, the number of stutters per hour, the maximum number of dropped frames, the power consumption, the CPU utilization, and the CPU temperature. The average frame rate, the jitter rate, the number of stutters per hour, and the maximum number of dropped frames can reflect how stuttery the interactive application feels when the user uses it on the mobile phone: the average frame rate reflects the overall fluency, the jitter rate reflects sudden stutters, the number of stutters per hour quantifies how often stutters occur, and the maximum number of dropped frames reflects the worst stutter. The CPU temperature can reflect how hot the terminal feels while the user uses the interactive application, and the power consumption can reflect how quickly power is consumed.
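For intuition only, the frame-based indicators above could be derived from a list of frame timestamps roughly as shown below; the stutter criterion used here (an inter-frame gap longer than twice the nominal frame interval) is an assumption for illustration, not a definition taken from this embodiment.

```python
# Rough sketch: deriving frame-based indicators from frame timestamps (in seconds).
# Assumes at least two timestamps; the stutter criterion is an assumption.
def frame_metrics(timestamps, nominal_fps=60.0):
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    duration = timestamps[-1] - timestamps[0]
    avg_fps = len(intervals) / duration                    # average frame rate
    nominal = 1.0 / nominal_fps
    stutters = sum(1 for dt in intervals if dt > 2 * nominal)
    stutters_per_hour = stutters / (duration / 3600.0)     # number of stutters per hour
    max_dropped = max(int(dt / nominal) - 1 for dt in intervals)  # maximum dropped frames
    return avg_fps, stutters_per_hour, max_dropped
```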
After the testing device establishes communication connection with the tested device, the testing device can start the human-computer interaction application and the second testing tool in the tested device. After the second testing tool is started, the performance of the tested device can be automatically tested. The human-computer interaction application is started by the tested device, so that the performance of the tested device in the process of running the human-computer interaction application can be automatically tested after the second testing tool is started.
The test equipment can start the man-machine interaction application in the tested equipment firstly and then start a second test tool in the tested equipment; or, the test device may start the second test tool in the device under test first, and then start the human-computer interaction application in the device under test; or, the test device may start the human-computer interaction application and the second test tool in the device under test at the same time, which is not limited in this embodiment of the application.
Optionally, the test device may send a first start command carrying the human-computer interaction application identifier to the device under test; after receiving the first start command, the device under test may start the human-computer interaction application identified by the human-computer interaction application identifier. The test device may also send a second start command carrying the test tool identifier to the device under test; after receiving the second start command, the device under test may start the second test tool identified by the test tool identifier.
The first start command and the second start command may be the same start command, and at this time, the test device carries the human-computer interaction application identifier and the test tool identifier in the same start command and sends the same start command to the device under test. Of course, the first start command and the second start command may also be different start commands, and at this time, the test device carries the human-computer interaction application identifier and the test tool identifier in different start commands and sends the different start commands to the device under test. In this case, the test device may send the first start command to the device under test first, and then send the second start command to the device under test; or the test device may send the second start command to the device under test first, and then send the first start command to the device under test.
The man-machine interaction application identifier is used for uniquely identifying the man-machine interaction application, for example, the man-machine interaction application identifier can be a name of the man-machine interaction application. Optionally, after receiving the human-computer interaction application identifier sent by the testing device, the device under test may call a system built-in command (including but not limited to an am start command) to start the human-computer interaction application identified by the human-computer interaction application identifier, with the human-computer interaction application identifier as a parameter.
The test tool identifier is used to uniquely identify the second test tool, for example, the test tool identifier may be the name of the second test tool. Optionally, after receiving the test tool identifier sent by the test device, the device under test may call a system built-in command (including but not limited to an am start command) to start the second test tool identified by the test tool identifier, with the test tool identifier as a parameter.
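As an assumption-based sketch, the start commands could be delivered over an ADB connection and executed on the device under test with the system built-in am start command mentioned above; the component names below are illustrative placeholders, not identifiers from this embodiment.

```python
# Hypothetical sketch: launching applications on the device under test through
# adb, using the system built-in "am start" command mentioned above.
import subprocess

def am_start(component: str) -> None:
    """Run 'am start -n <package>/<activity>' on the connected device."""
    subprocess.run(
        ["adb", "shell", "am", "start", "-n", component],
        check=True,
    )

am_start("com.example.perftest/.MainActivity")      # second test tool (placeholder)
am_start("com.example.mobagame/.LauncherActivity")  # game application (placeholder)
```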
In some embodiments, a human-machine-interaction application and a second test tool in a device under test may be launched by a first test tool in a test device. That is, a start command may be sent by the first test tool to the device under test to instruct the device under test to start the human-machine-interaction application and the second test tool.
For example, in the preparation phase, a code for starting the human-machine interaction application and the second testing tool in the device under test may be input in a code editing window in the testing interface of the first testing tool shown in fig. 10. In the testing stage, the codes can be run to enable the first testing tool to generate and send a starting command to the tested device to instruct the tested device to start the man-machine interaction application and the second testing tool.
Step 1203: the testing equipment indicates the man-machine interaction application in the tested equipment to display a target interface, and the target interface is used for displaying a scene picture of a virtual scene.
For example, in the case where the human-computer interaction application is a game application, the target interface is a game interface in the game application, and the game interface is used for displaying a game screen of a game scene.
Generally, a main interface is displayed after the human-computer interaction application is started, and at this time, a series of options in the human-computer interaction application need to be triggered to enter a target interface from the main interface. Therefore, in the embodiment of the application, the test device may send a series of commands to the device under test to implement triggering of a series of options in the human-computer interaction application, so as to instruct the human-computer interaction application to enter the target interface from the main interface.
In this case, the test device may store an option image set in advance. The option image set includes a plurality of option images, the option images correspond one-to-one to options, and each option image is an image of the option corresponding to it. The plurality of options are the series of options that need to be triggered to enter the target interface from the main interface of the human-computer interaction application. The plurality of option images are ordered, and their order is the triggering order of the corresponding options, that is, the order in which the options are triggered in the process of entering the target interface from the main interface of the human-computer interaction application.
The man-machine interaction application can have a plurality of different modes, and the target interfaces in the different modes can be accessed through different options. In this case, a plurality of option image sets may be stored in the test device in advance, the option image sets correspond to the plurality of modes one to one, and the option images in each option image set are images of a series of options for entering the target interface in the corresponding mode.
For example, for the process of entering the game interface in the battle mode from the main interface of the game application shown in fig. 4, 4 options need to be operated, and the 4 options are: fight options, man-machine options, character head portrait, and determination options. In this case, the option image set corresponding to the fight mode may include an image of each of the 4 options, and the 4 option images in the option image set corresponding to the fight mode may be as shown in fig. 13.
For another example, for the process of entering the game interface in the fighting mode from the main interface of the game application shown in fig. 6, 2 options need to be operated, and the 2 options are: event options, play options. In this case, the option image set corresponding to the viewing mode may include an image of each of the 2 options, and the 2 option images in the option image set corresponding to the viewing mode may be as shown in fig. 14.
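For illustration, the per-mode option image sets could be organized on the test device as ordered lists keyed by mode, for example as in the sketch below; the file names are hypothetical.

```python
# Illustrative layout of the option image sets stored on the test device.
# Each mode maps to an ordered list of option images; the order is the order
# in which the corresponding options must be triggered. File names are hypothetical.
OPTION_IMAGE_SETS = {
    "battle": [
        "battle_option.png",         # fight option on the main interface
        "human_machine_option.png",  # man-machine option
        "character_avatar.png",      # character avatar
        "confirm_option.png",        # last option: enters the game interface
    ],
    "spectator": [
        "event_option.png",          # event option on the main interface
        "play_option.png",           # last option: enters the spectator interface
    ],
}
```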
The test equipment can select a mode in the human-computer interaction application in advance, and sends a series of commands to the tested equipment according to the plurality of option images in the option image set corresponding to the mode to indicate the tested equipment to trigger the plurality of options corresponding to the plurality of option images, so that the human-computer interaction application in the tested equipment enters the target interface from the main interface.
Taking the number of the multiple option images in the option image set as n as an example, the following operation that the test device instructs the human-computer interaction application in the device under test to display the target interface according to the n option images in the option image set is described, and specifically may include the following steps (1) to (11).
It is understood that n is a positive integer. Moreover, due to the sequence of the n option images, the option corresponding to the last option image (i.e., the nth option image) in the n option images is used for entering the target interface in the human-computer interaction application, i.e., after the option corresponding to the nth option image is triggered, the human-computer interaction application displays the target interface.
(1) The test equipment initializes i, i.e., i equals 1.
(2) The test device determines the ith option image in the n option images as the reference option image.
(3) The test equipment sends a first screen image acquisition command to the tested equipment.
The first screen image acquisition command is used for requesting to acquire an image being displayed by a screen of the device under test.
(4) After receiving a first screen image acquisition command sent by the testing equipment, the tested equipment acquires an image displayed on a screen of the tested equipment as a first screen image.
Optionally, the device to be tested may obtain an image being displayed on its own screen as the first screen image in a full screen capture manner. Since the device under test starts the human-computer interaction application, the first screen image is an image of an application interface being displayed by the human-computer interaction application.
(5) The tested device sends the first screen image to the testing device.
(6) After receiving the first screen image sent by the tested device, the testing device acquires the position coordinates of the reference option image in the first screen image as first position coordinates.
The reference option image contained in the first screen image is an image of an option to be triggered in an application interface being displayed by the human-computer interaction application, so that the test equipment can acquire the position coordinate (i.e. a first position coordinate) of the reference option image in the first screen image, wherein the first position coordinate is the position coordinate of the option to be triggered in the application interface being displayed by the human-computer interaction application.
The test equipment can match the reference option image with the first screen image according to the image characteristics of the reference option image and the image characteristics of the first screen image to determine the position coordinates of a part of the first screen image which is most matched (i.e. most similar) with the reference option image, wherein the position coordinates are the position coordinates of the reference option image in the first screen image, namely the first position coordinates.
For example, the test device may use the reference option image as a template image and the first screen image as a source image, and perform template matching on the template image and the source image by using a template matching function matchTemplate () in the OpenCV software library, so as to determine the position coordinates, i.e., the first position coordinates, of a part of the image that most matches the template image in the source image.
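A minimal sketch of this template-matching step with OpenCV is shown below; using the normalized correlation method and returning the center of the best match as the first position coordinate are illustrative choices, not requirements of this embodiment.

```python
# Minimal template-matching sketch with OpenCV: locate the reference option
# image (template) inside the first screen image (source) and return the
# center coordinates of the best match as the first position coordinate.
import cv2

def locate_option(screen_path: str, option_path: str):
    source = cv2.imread(screen_path)
    template = cv2.imread(option_path)
    result = cv2.matchTemplate(source, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)   # max_loc is the top-left corner of the best match
    h, w = template.shape[:2]
    center = (max_loc[0] + w // 2, max_loc[1] + h // 2)
    return center, max_val                           # max_val can serve as a match-confidence score
```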
(7) And the test equipment sends an option trigger command carrying the first position coordinate to the tested equipment.
The option trigger command is used to indicate triggering of an option located at a first location coordinate in a screen of the device under test. Since the tested device starts the human-computer interaction application, an application interface of the human-computer interaction application is displayed on the screen of the tested device, so that the tested device triggers an option located at the first position coordinate in the screen of the tested device, namely, triggers an option located at the first position coordinate in the application interface of the human-computer interaction application. That is, the option trigger command is actually an option at the first location coordinate in the application interface that instructs the device under test to trigger the human-computer interaction application.
(8) After receiving the option triggering command sent by the testing equipment, the tested equipment triggers an option located at the first position coordinate in the screen of the tested equipment, namely, triggers an option located at the first position coordinate in an application interface of the human-computer interaction application.
(9) The test equipment determines whether i is equal to n.
The test equipment judges whether i is equal to n, namely whether the current reference option image is the nth option image.
(10) If i is not equal to n, the test equipment sets i to i + 1, and steps (2) to (9) are executed again until i is equal to n.
If i is not equal to n, the test device may determine that the current reference option image is not the nth option image, and then let i be i +1 to continue to select the next option image as the reference option image to continue to instruct the device under test to trigger the next option in the application interface of the human-computer interaction application. In this way, the tested device is instructed to sequentially trigger each option in the n options corresponding to the n option images in the application interface of the human-computer interaction application from the 1 st option image in the n option images until the tested device enters the target interface after triggering to the last option in the n options.
(11) And under the condition that i is equal to n, the test equipment determines that the human-computer interaction application in the tested equipment displays the target interface.
If i is equal to n, the test device may determine that the current reference option image is the nth option image, and the current option indicating the device to be tested to trigger is an option corresponding to the nth option image in the application interface of the human-computer interaction application, that is, the last option in the n options corresponding to the n option images in the application interface of the human-computer interaction application, so that the test device may determine that the human-computer interaction application has displayed the target interface.
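Putting steps (1) to (11) together, a minimal driver loop might be sketched as follows (Python; capture_screen and tap are hypothetical hooks standing in for the first screen image acquisition command and the option trigger command, and locate_option is the template-matching sketch given above):

def show_target_interface(option_images, capture_screen, tap):
    # option_images: the n option images in order; the n-th one leads to the target interface.
    for reference in option_images:              # steps (1), (2) and (10): i runs from 1 to n
        first_screen = capture_screen()          # steps (3) to (5): obtain the first screen image
        (x, y), _ = locate_option(first_screen, reference)   # step (6): first position coordinate
        tap(x, y)                                # steps (7) and (8): trigger the option at (x, y)
    # steps (9) and (11): once the n-th option has been triggered, the target interface is displayed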
In some embodiments, a human-machine-interaction application in a device under test may be instructed by a first test tool in the test device to display a target interface. That is, the first screen image acquisition command and the option trigger command may be sent to the device under test by the first test tool to instruct the human-computer interaction application in the device under test to display the target interface.
For example, in the preparation phase, the first test tool acquires the n option images, and code for sending the first screen image acquisition command to the device under test and code for sending the option trigger commands for the n option images to the device under test may be input in the code editing window in the test interface of the first test tool shown in fig. 10. In the testing stage, this code can be run so that the first test tool continuously sends the first screen image acquisition command and the option trigger command to the device under test. In this way, the first test tool can continuously acquire the first screen image of the device under test and, according to the acquired first screen image, sequentially trigger the options corresponding to the n option images in the application interface of the human-computer interaction application in the device under test, until the option corresponding to the last option image in the n option images is triggered, at which point the human-computer interaction application displays the target interface.
The target interface displays a scene picture of a virtual scene provided by the man-machine interactive application. The scene displayed in the target interface is a scene of the virtual scene at a certain viewing angle, which may be referred to as a display viewing angle of the virtual scene in the target interface. The more complex the scene picture of the virtual scene is, the larger the calculation amount required during rendering is, the more computer memory is occupied during rendering, the longer the rendering time is, and correspondingly, the higher the rendering load of the tested device when the scene picture of the virtual scene is displayed is, that is, the higher the rendering load of the tested device when the target interface is displayed is. The virtual scene is constantly changing because the virtual scene includes a plurality of virtual objects that can be manipulated to perform operations. In the embodiment of the application, for the virtual scene which changes constantly, the display view angle of the virtual scene in the target interface can be constantly adjusted, that is, the scene picture displayed in the target interface is constantly adjusted, so that the target interface always displays the scene picture with the highest complexity, and therefore the tested equipment can always be in the state with the highest rendering load when the target interface is displayed, the performance test requirement of the tested equipment can be met, and the test efficiency is improved. A specific process of adjusting a scene displayed in the target interface is explained below.
Step 1204: and the test equipment determines a target view angle according to the aggregation condition of the virtual objects in the virtual scene.
The target view is a view corresponding to a scene picture with the highest complexity in the virtual scene. That is, the scene view of the virtual scene at the target viewing angle has the highest complexity. Correspondingly, the rendering load of the subsequent tested equipment is the highest when the scene picture of the virtual scene under the target view angle is displayed.
The plurality of virtual objects in the virtual scene can move, and any two virtual objects can be gathered together or dispersed at two distant places by moving. Generally, the larger the number of virtual objects gathered in a certain area in the virtual scene is, the more complicated the scene picture including the area is, so that the viewing angle corresponding to the scene picture with the highest complexity, that is, the target viewing angle can be determined according to the gathering condition of the virtual objects in the virtual scene.
The target interface can display a scene map of the virtual scene, including the virtual object identifier, in addition to the scene picture of the virtual scene. Each virtual object of the plurality of virtual objects in the virtual scene has a virtual object identification. For any virtual object in the plurality of virtual objects, the virtual object identifier of the virtual object is used to uniquely identify the virtual object, for example, the virtual object identifier of the virtual object may be an avatar, a name, and the like of the virtual object, which is not limited in this embodiment of the application. The position of a virtual object identification in the scene map is used to indicate where the virtual object identified by this virtual object identification is located in the virtual scene. In this way, the aggregation of multiple virtual objects in the virtual scene can be determined according to the virtual object identifiers in the scene map.
At least two virtual objects of the plurality of virtual objects belong to different organizations. Virtual objects of different organizations have a competitive relationship. Therefore, the larger the number of the virtual objects of different organizations gathered in a certain area in the virtual scene is, the more complicated the operations performed by the virtual objects of different organizations in the area are, and thus, the more complicated the scene picture including the area is.
In the above case, the operation of step 1204 may include the following steps (1) to (5).
(1) And the test equipment sends a second screen image acquisition command to the tested equipment.
The second screen image acquisition command is used for requesting to acquire an image being displayed by the screen of the device under test.
In some embodiments, after instructing the human-computer interaction application in the device under test to display the target interface, the test device may wait a preset time and then send a second screen image acquisition command to the device under test. The preset time may be set in advance and may be relatively long, for example, greater than or equal to 10 seconds and less than or equal to 20 seconds.
Generally, during a period of time (namely the preset time) right after the target interface starts to be displayed, the plurality of virtual objects in the virtual scene have only just begun to move from their initial positions, and virtual objects of different organizations are likely to still be dispersed rather than gathered together, so the test equipment does not need to acquire the image being displayed on the screen of the device under test during this period. This saves communication resources and processing resources of the test equipment and the device under test. After this period, the test equipment acquires the image being displayed on the screen of the device under test, that is, after this period, the test equipment sends the second screen image acquisition command to the device under test.
(2) And after receiving a second screen image acquisition command sent by the testing equipment, the tested equipment acquires an image which is displayed on the screen of the tested equipment and serves as a second screen image.
Optionally, the device to be tested may obtain an image being displayed on its own screen as the second screen image in a full screen capture manner. The second screen image is an image of the target interface since the device under test is displaying the target interface.
(3) And the tested device sends the second screen image to the testing device.
(4) And after the test equipment receives the second screen image sent by the tested equipment, identifying the scene map of the virtual scene in the second screen image.
Since the target interface is displayed with the scene map of the virtual scene and the second screen image is an image of the target interface, the scene map can be identified in the second screen image.
The way in which the test equipment identifies the scene map in the second screen image may be various, and two possible ways are described below. Of course, the test device may also identify the scene map in the second screen image in other manners, which is not limited in this embodiment of the application.
A first possible way: the test equipment determines the image in the first position coordinate range in the second screen image as the scene map.
Generally, the scene map is displayed at a fixed position in the target interface, that is, the display position of the scene map in the target interface is unchanged no matter how the scene picture displayed by the target interface changes. For example, as shown in the diagram d in fig. 4 or the diagram c in fig. 6, the small map is fixedly displayed in a rectangular area of a certain size in the upper left corner of the game interface. Therefore, in the embodiment of the present application, the first position coordinate range may be set in advance according to the display position of the scene map fixed in the target interface, that is, the coordinate range of the display position of the scene map in the target interface is set as the first position coordinate range in advance. Therefore, after the test equipment receives the second screen image sent by the tested equipment, the image in the first position coordinate range in the second screen image can be directly determined as the scene map, and the scene map is identified. The identification process is simple and quick, and the identification efficiency of the scene map can be improved.
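A minimal sketch of the first possible way (Python/numpy assumed; the coordinate range below is purely illustrative and would be preset per game and per screen resolution):

FIRST_POSITION_COORDINATE_RANGE = (0, 0, 300, 300)   # (x1, y1, x2, y2), preset for the fixed minimap position

def extract_scene_map_fixed(second_screen):
    # Crop the preset first position coordinate range out of the second screen image.
    x1, y1, x2, y2 = FIRST_POSITION_COORDINATE_RANGE
    return second_screen[y1:y2, x1:x2]           # numpy slicing: rows are y, columns are x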
A second possible way: the test equipment determines the position coordinates of the reference icon in the second screen image, determines a second position coordinate range according to the position coordinates and a map position determination rule, and determines the image in the second position coordinate range in the second screen image as the scene map.
The test device may match the reference icon with the second screen image based on the image features of the reference icon and the image features of the second screen image to determine the position coordinates of a portion of the second screen image that most matches (i.e., is most similar to) the reference icon, i.e., the position coordinates of the reference icon in the second screen image.
For example, the test device may use the reference icon as a template image and the second screen image as a source image, and perform template matching on the template image and the source image by using a template matching function matchTemplate () in the OpenCV software library, so as to determine, in the source image, the position coordinates of a part of the image that most matches the template image, where the position coordinates are the position coordinates of the reference icon in the second screen image.
Generally, icons having a fixed positional relationship with the scene map exist in the target interface, and the icons can be used for positioning the scene map. Illustratively, the scene map contains fixed scene icons, for example, a small map in the game interface shown in the diagram d in fig. 4 or the diagram c in fig. 6 contains an icon of the defensive building 1 positioned at the lower left corner and an icon of the defensive building 2 positioned at the upper right corner. Alternatively, there are some icons for operating the controls of the scene map on the peripheral side of the scene map, for example, there is an icon for enlarging the control 1 of the minimap on the peripheral side of the minimap in the game interface shown by the d-diagram in fig. 4. These icons are all capable of locating the scene map. In the embodiment of the application, the icons capable of positioning the scene map can be set as reference icons in advance, then a map position determination rule is set in advance according to the position relationship between the reference icons and the scene map, the map position determination rule can reflect the position relationship between the reference icons and the scene map, and the map position determination rule is used for calculating the position coordinate range of the scene map according to the position coordinates of the reference icons. Therefore, after the test equipment receives the second screen image sent by the tested equipment, the scene map can be positioned in the second screen image according to the position coordinate of the reference icon in the second screen image and the map position determination rule directly, and the scene map can be identified.
For example, in the small map shown in fig. 15, the icon of the control for enlarging the map located on the peripheral side of the small map in fig. 15 is set as the reference icon in advance. As shown in fig. 15, it is assumed that the position relationship between the reference icon and the small map is: the small map is located within a rectangular area of length a and width b above and to the left of the reference icon. Assuming that the position coordinates of the reference icon are (x, y), the map position determination rule may be set in advance as: determining the position coordinate range of the rectangular area whose four vertexes have the position coordinates (x-a, y-b), (x, y-b), (x-a, y) and (x, y) as the position coordinate range of the scene map, namely the second position coordinate range.
For another example, in the minimap shown in fig. 16, the icon of the defensive building located at the lower left corner in the minimap in fig. 16 is set as the reference icon in advance. As shown in fig. 16, it is assumed that the position relationship between the reference icon and the small map is: the minimap is located in a rectangular area of length a and width b above and to the right of the reference icon. Assuming that the position coordinates of the reference icon are (x, y), the map position determination rule may be set in advance as: and determining the position coordinate range of the rectangular area with the position coordinates of (x, y-b), (x + a, y-b), (x, y) and (x + a, y) of the four vertexes as the position coordinate range of the scene map, namely a second coordinate range.
For another example, in the small map shown in fig. 17, the icon of the control for enlarging the map located on the peripheral side of the small map in fig. 17 is set as the reference icon in advance. As shown in fig. 17, it is assumed that the position relationship between the reference icon and the small map is: the small map is within a square area immediately above and to the left of the reference icon, adjacent to the upper edge of the target interface. Assuming that the position coordinates of the reference icon are (x, y), the map position determination rule may be set in advance as: determining the position coordinate range of the square area whose four vertexes have the position coordinates (x-y, 0), (x, 0), (x-y, y) and (x, y) as the position coordinate range of the scene map, namely the second position coordinate range.
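A sketch of the second possible way, assuming the rule of the fig. 16 example (the minimap occupies the rectangle of length a and width b above and to the right of the reference icon); Python with OpenCV is assumed and the function name is illustrative:

import cv2

def extract_scene_map_by_icon(second_screen, reference_icon, a, b):
    # Locate the reference icon in the second screen image by template matching.
    result = cv2.matchTemplate(second_screen, reference_icon, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(result)      # (x, y): position coordinates of the reference icon
    # Apply the map position determination rule: the scene map lies in the rectangle with
    # vertices (x, y-b), (x+a, y-b), (x, y), (x+a, y), i.e. the second position coordinate range.
    return second_screen[max(y - b, 0):y, x:x + a]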
In some embodiments, there may be multiple icons in the target interface that are capable of locating the scene map, and the icons in the target interface that are capable of locating the scene map in different modes may also be different. For example, the game interface shown in the d diagram in fig. 4 is a game interface in the battle mode, the game interface shown in the c diagram in fig. 6 is a game interface in the battle mode, and the modes of the game interfaces shown in the d diagram in fig. 4 and the c diagram in fig. 6 are different. The icons capable of locating the minimap in the game interface in the battle mode shown in the d-diagram in fig. 4 include: an icon of a defensive building 1 positioned at the lower left corner in the minimap, an icon of a defensive building 2 positioned at the upper right corner in the minimap, and an icon of a control 1 for magnifying the minimap, which is positioned at the peripheral side of the minimap. The icons capable of locating the minimap in the game interface in the fighting mode shown in the diagram c in fig. 6 include: an icon of the defensive building 1 positioned in the lower left corner of the minimap, and an icon of the defensive building 2 positioned in the upper right corner of the minimap.
Therefore, for the target interface in any one of the plurality of modes, one icon can be selected in advance from icons capable of positioning the scene map in the target interface in the mode as a reference icon corresponding to the mode, and then the map position determination rule corresponding to the mode is preset according to the position relationship between the reference icon corresponding to the mode and the scene map in the target interface in the mode. Therefore, after the test equipment receives the second screen image sent by the tested equipment, the reference icon and the map position determination rule corresponding to the mode of the target interface displayed by the tested equipment can be determined, and then the scene map is positioned in the second screen image according to the position coordinate of the reference icon in the second screen image and the map position determination rule, so that the scene map is identified.
(5) And the test equipment determines a target view angle according to the aggregation degree of the virtual object identifications of different organizations in the scene map.
Alternatively, virtual object identifications of different tissues may be distinguished by color or shape. For example, the virtual scene includes two organized virtual objects, and correspondingly, the scene map includes two organized virtual object identifiers. The virtual object identifiers of the two tissues can be distinguished by colors, for example, the virtual object identifiers of the two tissues are both circular in shape, wherein the virtual object identifier of one tissue is blue and the virtual object identifier of the other tissue is red. Alternatively, the virtual object identifiers of the two organizations may be distinguished by shape, such as that one of the virtual object identifiers of the two organizations is a circle and the other of the virtual object identifiers of the two organizations is a square.
The test equipment can firstly identify the virtual object identifications of each organization from the scene map, and then determine the target view angle according to the aggregation degree of the virtual object identifications of different organizations in the scene map.
There are many ways for the test device to identify the virtual object identifier of each organization from the scene map, and several possible ways are described below. Of course, the test device may also identify the virtual object identifier of each organization from the scene map in other manners, which is not limited in this embodiment of the present application.
If the virtual object identifiers of different organizations in the scene map are distinguished by shapes, that is, if the virtual object identifiers of different organizations in the scene map are different in shape, the testing device may identify the virtual object identifier of each organization from the scene map in a first possible manner as follows. In this case, for each of the plurality of tissues, the shape identified by the virtual object of the tissue is referred to as a shape corresponding to the tissue.
A first possible way: for any one of the plurality of tissues, the test equipment acquires, from the scene map, at least one image whose shape is the same as the shape corresponding to the tissue, and takes each image of the at least one image as a virtual object identifier of the tissue.
For example, the scene map includes virtual object identifications of a first organization and a second organization. The shape of the virtual object identifier of the first tissue is a circle and the shape of the virtual object identifier of the second tissue is a square. The test device may detect an image from the scene map that is circular in shape using a function in the OpenCV software library, and determine the detected circular image as a virtual object identification of the first tissue. The test device may detect an image shaped as a square from the scene map using a function in the OpenCV software library, and determine the detected image of the square as a virtual object identification of the second tissue.
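The embodiment does not name the specific OpenCV function for shape-based recognition; one possible realization, sketched here purely as an assumption, is to binarize the map, extract contours, and classify each contour as circle-like or square-like by polygon approximation:

import cv2

def identify_by_shape(scene_map):
    gray = cv2.cvtColor(scene_map, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    circles, squares = [], []
    for contour in contours:
        approx = cv2.approxPolyDP(contour, 0.04 * cv2.arcLength(contour, True), True)
        (x, y), r = cv2.minEnclosingCircle(contour)
        if len(approx) == 4:
            squares.append((int(x), int(y)))            # candidate identifier of the square-shaped organization
        elif len(approx) > 6:
            circles.append((int(x), int(y), int(r)))    # candidate identifier of the circular organization
    return circles, squares

In practice the contours would also be filtered by area so that map decorations are not mistaken for virtual object identifiers.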
In general, the virtual object identifiers of different organizations may be different in shape, but may be uniform in size. For example, the area of the virtual object identifier of the first tissue in a circle shape and the area of the virtual object identifier of the second tissue in a square shape may be the same. In this case, after the test device identifies the virtual object identifiers of the respective organizations from the scene map, the size of the virtual object identifiers may also be determined, so that the aggregation degree of the virtual object identifiers of different organizations in the scene map may be analyzed accordingly.
In some embodiments, the size of the virtual object identifier in the scene map is generally fixed, and assuming that the size of the virtual object identifier in the scene map is a specified size, the technician may store the specified size in the test equipment in advance, so that after the test equipment identifies the virtual object identifiers of the various organizations from the scene map, the test equipment may directly determine the size of the virtual object identifier as the specified size, and the operation is simple and fast.
In other embodiments, after the test equipment identifies the virtual object identifiers of the respective organizations from the scene map, the size of each virtual object identifier identified from the scene map may be determined directly.
If the virtual object identifiers of different organizations in the scene map are distinguished by colors, that is, if the virtual object identifiers of different organizations in the scene map have the same shape but different colors, the test device may identify the virtual object identifiers of each organization from the scene map in the following second possible manner. In this case, it is assumed that the shape of the virtual object identifier of each of the plurality of tissues is a specified shape, and the specified shape may be a circle, a square, or the like. And, for each of the plurality of tissues, the color of the virtual object identification of that tissue is referred to as the color corresponding to that tissue.
A second possible way: for each tissue in the plurality of tissues, the test equipment acquires, from the scene map, at least one image that has the same color as the color corresponding to the tissue and has the specified shape, and takes each image of the at least one image as a virtual object identifier of the tissue.
For example, the scene map includes virtual object identifications of a first organization and a second organization. The shape of the virtual object identifier of the first tissue and the shape of the virtual object identifier of the second tissue are both circular, wherein the virtual object identifier of the first tissue is blue and the virtual object identifier of the second tissue is red. The scene map comprises a blue channel (blue, b), a green channel (green, g), and a red channel (red, r), i.e., the scene map is a three-channel image. The second possible manner may specifically include the following steps one to three:
step one, the test equipment uses the channel splitting function cv2.split() in the OpenCV software library to separate the scene map into a grayscale map of each channel.
The grayscale map of each channel is a single channel image. The gray scale value range of the gray scale map of each channel is 0-255. The gray scale value is 0, which is black, and the gray scale value is 255, which is white.
And step two, the test equipment uses the gray threshold value to carry out binarization on the gray image of each channel to obtain a black-white image of each channel.
The gray threshold can be preset, and can be determined through repeated tests, so that a black-white image with good effect can be obtained through the gray threshold. For example, the grayscale threshold may be 120.
The test equipment uses the gray threshold to carry out binarization on the gray image of each channel, namely, the gray value of the pixel point of which the gray value is less than or equal to the gray threshold in the gray image of each channel is changed into 0, and the gray value of the pixel point of which the gray value is greater than the gray threshold in the gray image of each channel is changed into 255, so that the black-white image of each channel can be obtained. The range of gray values of the black and white map of each channel is 0 or 255.
For example, the test device may binarize the grayscale map of each channel using the function cv2.inRange() in the OpenCV software library to obtain the black-and-white image of each channel. Assuming that the grayscale threshold is 120, the black-and-white image of the red channel is inranger = cv2.inRange(r, 120, 255), where r refers to the grayscale map of the red channel, and cv2.inRange(r, 120, 255) changes the grayscale values less than or equal to 120 in the grayscale map of the red channel to 0 and the grayscale values greater than 120 to 255, so as to obtain the black-and-white image of the red channel. The black-and-white image of the green channel is inrangeg = cv2.inRange(g, 120, 255), where g refers to the grayscale map of the green channel, and cv2.inRange(g, 120, 255) changes the grayscale values less than or equal to 120 in the grayscale map of the green channel to 0 and the grayscale values greater than 120 to 255, so as to obtain the black-and-white image of the green channel. The black-and-white image of the blue channel is inrangeb = cv2.inRange(b, 120, 255), where b refers to the grayscale map of the blue channel, and cv2.inRange(b, 120, 255) changes the grayscale values less than or equal to 120 in the grayscale map of the blue channel to 0 and the grayscale values greater than 120 to 255, so as to obtain the black-and-white image of the blue channel. For example, for the small map shown in fig. 15, the black-and-white image of the blue channel may be as shown in diagram a of fig. 18, the black-and-white image of the green channel may be as shown in diagram b of fig. 18, and the black-and-white image of the red channel may be as shown in diagram c of fig. 18.
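Steps one and two can be sketched as follows (Python; the threshold of 120 follows the example above, and the helper name is illustrative):

import cv2

def binarize_channels(scene_map, threshold=120):
    b, g, r = cv2.split(scene_map)                # step one: per-channel grayscale maps (OpenCV uses BGR order)
    inrangeb = cv2.inRange(b, threshold, 255)     # step two: black-and-white image of the blue channel
    inrangeg = cv2.inRange(g, threshold, 255)     # black-and-white image of the green channel
    inranger = cv2.inRange(r, threshold, 255)     # black-and-white image of the red channel
    return inrangeb, inrangeg, inranger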
And step three, the test equipment recognizes an image with a circular shape from the black-and-white image of the blue channel as a virtual object identifier of the first tissue, and recognizes an image with a circular shape from the black-and-white image of the red channel as a virtual object identifier of the second tissue.
Alternatively, the test device may recognize the image in the shape of a circle directly from the black-and-white image of the blue channel as the virtual object identifier of the first tissue. For example, the test device may detect an image in the shape of a circle in the black-and-white image of the blue channel using the function cv2.HoughCircles() in the OpenCV software library, and take the detected circular image as the virtual object identifier of the first tissue. cv2.HoughCircles() is used to detect circles in an image using the Hough transform algorithm.
Alternatively, in order to improve the recognition accuracy, the test device may first process the black-and-white image of the blue channel, and then recognize the image with the circular shape from the processed black-and-white image of the blue channel as the virtual object identifier of the first tissue. Specifically, the test device may subtract, from the gray value of each pixel point in the black-and-white image of the blue channel, the gray value of the corresponding pixel point in the black-and-white image of the red channel and the gray value of the corresponding pixel point in the black-and-white image of the green channel, that is, the processed black-and-white image of the blue channel is Blue_channel = inrangeb - inranger - inrangeg. In this way, the gray values contributed by non-blue colors in the black-and-white image of the blue channel can be effectively offset, and a more ideal black-and-white image of the blue channel can be obtained. Thereafter, the test device may detect an image in the shape of a circle in the processed black-and-white image of the blue channel using the function cv2.HoughCircles() in the OpenCV software library, and take the detected circular image as the virtual object identifier of the first tissue.
Alternatively, the test device may recognize the image in the shape of a circle directly from the black-and-white image of the red channel as the virtual object identifier of the second tissue. For example, the test device may detect an image in the shape of a circle in the black-and-white image of the red channel using the function cv2.HoughCircles() in the OpenCV software library, and take the detected circular image as the virtual object identifier of the second tissue.
Alternatively, in order to improve the recognition accuracy, the test device may first process the black-and-white image of the red channel, and then recognize the image with the circular shape from the processed black-and-white image of the red channel as the virtual object identifier of the second tissue. Specifically, the test device may subtract, from the gray value of each pixel point in the black-and-white image of the red channel, the gray value of the corresponding pixel point in the black-and-white image of the blue channel and the gray value of the corresponding pixel point in the black-and-white image of the green channel, that is, the processed black-and-white image of the red channel is Red_channel = inranger - inrangeb - inrangeg. In this way, the gray values contributed by non-red colors in the black-and-white image of the red channel can be effectively offset, and a more ideal black-and-white image of the red channel can be obtained. Thereafter, the test device may detect an image in the shape of a circle in the processed black-and-white image of the red channel using the function cv2.HoughCircles() in the OpenCV software library, and take the detected circular image as the virtual object identifier of the second tissue.
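A sketch of the channel subtraction and circle detection for the blue and red channels (Python; cv2.subtract is used here so that the per-pixel subtraction saturates at 0 instead of wrapping around for 8-bit images, and all HoughCircles parameter values are illustrative assumptions):

import cv2

def detect_org_identifiers(inrangeb, inrangeg, inranger):
    blue_channel = cv2.subtract(cv2.subtract(inrangeb, inranger), inrangeg)   # Blue_channel = inrangeb - inranger - inrangeg
    red_channel = cv2.subtract(cv2.subtract(inranger, inrangeb), inrangeg)    # Red_channel = inranger - inrangeb - inrangeg
    first_org = cv2.HoughCircles(blue_channel, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                                 param1=50, param2=15, minRadius=3, maxRadius=30)
    second_org = cv2.HoughCircles(red_channel, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                                  param1=50, param2=15, minRadius=3, maxRadius=30)
    # Each result is None or an array of (x, y, radius) triples in scene-map coordinates.
    return first_org, second_org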
In general, virtual object identifiers of different tissues have the same shape and the same size. For example, the radius of the circular virtual object identifier of the first tissue and the radius of the circular virtual object identifier of the second tissue may be the same. In this case, after the test device identifies the virtual object identifiers of the respective organizations from the scene map, the size of the virtual object identifiers may also be determined, so that the aggregation degree of the virtual object identifiers of different organizations in the scene map may be analyzed accordingly.
In some embodiments, the size of the virtual object identifier in the scene map is generally fixed, and assuming that the size of the virtual object identifier in the scene map is a specified size, the technician may store the specified size in the test equipment in advance, so that after the test equipment identifies the virtual object identifiers of the various organizations from the scene map, the test equipment may directly determine the size of the virtual object identifier as the specified size, and the operation is simple and fast.
In other embodiments, after the test equipment identifies the virtual object identifiers of the respective organizations from the scene map, the size of each virtual object identifier identified from the scene map may be determined directly.
In still other embodiments, the color of the virtual object identifier of the virtual object manipulated by the user may be different from the color of the virtual object identifiers of other virtual objects in the organization in which the virtual object is located. That is, for each of a plurality of virtual objects of the same organization, in the scene map in the target interface displayed on the device for manipulating this virtual object, the color of the virtual object identification of this virtual object is different from the color of the virtual object identifications of the other virtual objects of the same organization. For example, in the game interface in the battle scene as shown in d-diagram in fig. 4, the color of the character avatar of the game character manipulated by the user in the minimap may be green, the color of the character avatar of the other game characters of the same organization in the minimap may be blue, and the colors of the character avatars of the game characters of the other organization in the minimap are all red.
Assuming that one virtual object of the plurality of virtual objects of the first organization is manipulated by the device under test, the color of the virtual object identifier of that virtual object is green in the scene map in the target interface displayed by the device under test. In this case, the test device may process the black-and-white image of the green channel obtained in step two, identify an image with a circular shape from the processed black-and-white image of the green channel as the virtual object identifier of the virtual object being manipulated by the device under test, and then determine the size of that virtual object identifier. Specifically, the test device may subtract, from the gray value of each pixel point in the black-and-white image of the green channel, the gray value of the corresponding pixel point in the black-and-white image of the red channel and the gray value of the corresponding pixel point in the black-and-white image of the blue channel, that is, the processed black-and-white image of the green channel is Green_channel = inrangeg - inranger - inrangeb. For example, subtracting from the gray value of each pixel point in the black-and-white image of the green channel shown in diagram b of fig. 18 the gray value of the corresponding pixel point in the black-and-white image of the blue channel shown in diagram a of fig. 18 and the gray value of the corresponding pixel point in the black-and-white image of the red channel shown in diagram c of fig. 18 gives the black-and-white image of the green channel shown in fig. 19. In this way, the gray values contributed by non-green colors in the black-and-white image of the green channel can be effectively offset, and a more ideal black-and-white image of the green channel can be obtained. Then, the test device may detect an image with a circular shape in the processed black-and-white image of the green channel using the function cv2.HoughCircles() in the OpenCV software library, take the detected circular image as the virtual object identifier of the virtual object being manipulated by the device under test, and obtain the radius of the detected circular image, that is, the size of the virtual object identifier. Because the device under test generally manipulates only one virtual object, the scene map generally contains only one green virtual object identifier, and the processed black-and-white image of the green channel generally contains only one circular image, so the circular image can be detected quickly and accurately, and the size of the virtual object identifier determined from it is also accurate.
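The same processing applied to the green channel, together with reading off the radius as the identifier size, might be sketched as follows (same assumptions as the previous sketch):

import cv2

def identifier_radius_from_green(inrangeb, inrangeg, inranger):
    green_channel = cv2.subtract(cv2.subtract(inrangeg, inranger), inrangeb)  # Green_channel = inrangeg - inranger - inrangeb
    circles = cv2.HoughCircles(green_channel, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                               param1=50, param2=15, minRadius=3, maxRadius=30)
    if circles is None:
        return None
    x, y, radius = circles[0][0]      # normally there is exactly one green identifier in the scene map
    return radius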
After the test equipment identifies the virtual object identifiers of each organization in the scene map, the test equipment may determine the target view angle according to the aggregation degrees of the virtual object identifiers of different organizations in the scene map, and specifically, the operation of determining the target view angle according to the aggregation degrees of the virtual object identifiers of different organizations in the scene map by the test equipment may include the following steps one to three:
step one, the test equipment determines a plurality of gathering areas in the scene map.
Each aggregation area contains virtual object identifications of different organizations, i.e. the aggregation area is an area where virtual object identifications of different organizations are aggregated. The size of the aggregation area may be a preset size, or may be determined according to the size of the virtual object identifier.
Optionally, the test device may determine a target area where each virtual object identifier in the plurality of virtual object identifiers in the scene map is located, so as to obtain a plurality of target areas; and determining a target area containing virtual object identifications of different tissues in the plurality of target areas as an aggregation area.
For any virtual object identifier, the target area where the virtual object identifier is located is an area with the center same as the center of the virtual object identifier, the shape same as the shape of the virtual object identifier and the size k times the size of the virtual object identifier, and k is an integer greater than or equal to 2. For example, assuming that k is 3 and the virtual object identifier is a circle with a radius r, the target area where the virtual object identifier is located may be a circle with a radius 3 r.
For example, in a small map as shown in fig. 20, the testing device may determine a plurality of target areas in the small map, where the determined target areas include: target area 1, target area 2, target area 3, target area 4, target area 5, target area 6. The test device may determine that, of the target areas in the small map shown in fig. 20, the target area 4 and the target area 5 located at the lower right corner each include a character avatar of a different organization, and may determine both the target area 4 and the target area 5 as an aggregation area.
And step two, the test equipment determines the aggregation areas with the maximum number of virtual object identifications contained in the aggregation areas as reference areas.
Since the plurality of aggregation areas are all areas aggregated with virtual object identifiers of different organizations, and the reference area is an area with the most aggregated virtual object identifiers in the plurality of aggregation areas, that is, an aggregation area with the highest aggregation degree in the plurality of aggregation areas, the reference area is an area with the most aggregated virtual object identifiers in the areas aggregated with virtual object identifiers of different organizations.
For example, in the small map shown in fig. 20, the target area 4 and the target area 5 are both aggregation areas, where the target area 4 includes three character avatars, and the target area 5 includes two character avatars, so that the test device may determine the target area 4 as the reference area if the number of character avatars included in the target area 4 is the largest.
And step three, the test equipment determines the corresponding visual angle of the reference area in the scene map as the target visual angle.
The corresponding view angle of the reference area in the scene map is a view angle for displaying a scene picture at a location where the reference area is located in the virtual scene. Since the reference area is an area in which virtual object identifiers of different organizations are aggregated, and the number of the aggregated virtual object identifiers is the largest, a scene picture of the reference area at a location in the virtual scene is relatively complex and is a scene picture with the highest complexity in the virtual scene, and thus a corresponding view angle of the reference area in the scene map can be determined as a target view angle.
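Steps one to three can be sketched as follows (Python; the identifier lists are assumed to be in the (x, y, radius) form of the earlier sketches, and k = 3 is only an example value):

def choose_reference_area(first_org_ids, second_org_ids, radius, k=3):
    # Build one candidate target area per identifier: a circle of k times the identifier size around its center.
    all_ids = [(x, y, "first") for x, y, _ in first_org_ids] + \
              [(x, y, "second") for x, y, _ in second_org_ids]
    best_members, best_count = None, 0
    for cx, cy, _ in all_ids:
        members = [(x, y, org) for x, y, org in all_ids
                   if (x - cx) ** 2 + (y - cy) ** 2 <= (k * radius) ** 2]
        # An aggregation area must contain identifiers of different organizations (step one);
        # the reference area is the aggregation area containing the most identifiers (step two).
        if len({org for _, _, org in members}) > 1 and len(members) > best_count:
            best_members, best_count = members, len(members)
    return best_members    # identifiers inside the reference area, or None if no aggregation area exists

The view angle corresponding to the chosen reference area (step three) is then obtained in step 1205 by triggering a contact at the reference area's position in the scene map.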
In some embodiments, the target perspective may be determined by a first testing tool in the testing device based on an aggregation of virtual objects in the virtual scene. That is, the first testing tool may send a second screen image acquisition command to the device under test, identify the scene map in the acquired second screen image, and determine the target view angle according to the aggregation degree of the virtual object identifiers of different organizations in the scene map.
For example, in the preparation phase, code for sending the second screen image acquisition command to the device under test, and code for identifying the scene map in the acquired second screen image and determining the target view angle according to the aggregation degree of virtual object identifiers of different organizations in the scene map, may be input in the code editing window in the test interface of the first test tool shown in fig. 10. In the testing stage, this code can be run so that the first test tool sends the second screen image acquisition command to the device under test. In this way, the first test tool can acquire the second screen image of the device under test, identify the scene map in the acquired second screen image, and then determine the target view angle according to the aggregation degree of the virtual object identifiers of different organizations in the scene map.
Step 1205: and the test equipment indicates the tested equipment to display a scene picture of the virtual scene under the target visual angle in the target interface so as to test the performance of the tested equipment.
Because the target visual angle is the visual angle corresponding to the scene picture with the highest complexity in the virtual scene, after the testing equipment indicates the tested equipment to display the scene picture of the virtual scene under the target visual angle in the target interface, the rendering load when the tested equipment displays the scene picture is the highest, so that the performance testing requirement of the tested equipment can be met, and the testing efficiency is improved.
Specifically, the test equipment acquires the position coordinate of the reference area in the second screen image as a second position coordinate, and sends a contact point triggering command carrying the second position coordinate to the tested equipment so as to indicate the tested equipment to trigger a contact point at the second position coordinate in the target interface and then display the scene picture of the virtual scene under the target visual angle. After the tested device receives the contact triggering command, triggering a contact at the second position coordinate in the target interface so as to display a scene picture of the virtual scene under the target visual angle in the target interface.
The position coordinates of the reference region in the second screen image may be the position coordinates of the center of the reference region in the second screen image, or may be the average of the position coordinates, in the second screen image, of the centers of all the virtual object identifiers contained in the reference region. For example, if the reference area contains 5 virtual object identifiers, the test device may first determine the position coordinates of the center of each of the 5 virtual object identifiers in the second screen image as (x1, y1), (x2, y2), (x3, y3), (x4, y4), (x5, y5), and then, according to the position coordinates of each of the 5 virtual object identifiers in the second screen image, determine the position coordinates (x_new, y_new) of the reference region in the second screen image by the formulas x_new = (x1 + x2 + x3 + x4 + x5)/5 and y_new = (y1 + y2 + y3 + y4 + y5)/5.
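The averaging above, expressed as a small helper (Python; map_origin is an assumption standing for the top-left corner of the scene map within the second screen image, needed when the identifier centers were measured in scene-map coordinates):

def reference_area_coordinate(members, map_origin=(0, 0)):
    xs = [x for x, _, _ in members]
    ys = [y for _, y, _ in members]
    ox, oy = map_origin
    # Average of the centers of all virtual object identifiers in the reference area,
    # translated into second-screen-image coordinates: the second position coordinate.
    return ox + sum(xs) / len(xs), oy + sum(ys) / len(ys)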
The contact triggering command is used for indicating the tested equipment to trigger the contact at the second position coordinate in the target interface, and after the tested equipment triggers the contact at the second position coordinate, the contact at the reference area is actually triggered in the scene map, so that the target interface can display the scene picture of the virtual scene at the target visual angle, the rendering load when the tested equipment displays the scene picture is the highest, the performance testing requirement of the tested equipment can be met, and the testing efficiency is improved.
For example, the scene picture being displayed in the game interface of the game application of the device under test is the scene picture in the game interface 2100 as shown in a diagram in fig. 21. Then, the device to be tested receives a contact triggering command carrying a second position coordinate sent by the testing device, and the device to be tested triggers a contact at the second position coordinate in the game interface 2100, that is, triggers a point C in the small map 2101 in the game interface 2100, at this time, the game interface 2100 switches to a scene picture displaying the virtual scene at a viewing angle (that is, a target viewing angle) corresponding to the point C in the small map 2101, that is, the game interface 2100 switches to a scene picture displaying the game interface 2100 as shown in a diagram b in fig. 21. The game picture shown in the b picture in fig. 21 has high complexity, so that the rendering load when the tested device displays the scene picture is high, the performance test requirement of the tested device can be met, and the test efficiency is improved.
As an example, after the device under test triggers a touch point at the second position coordinate, that is, after triggering a touch point at the reference area in the scene map, even if the touch point is not continuously triggered, the target interface still continues to display the scene picture of the virtual scene at the target viewing angle as long as other points in the scene map are not triggered. For example, in a game interface in a fighting mode, after the device to be tested simulates the click operation on the reference area in the small map, if the click operation is not continuously performed on the reference area in the small map, the game interface may continuously display a game picture of a game scene at a view angle (i.e., a target view angle) corresponding to the latest click point in the small map.
In this case, the contact point trigger command carrying the second position coordinate sent by the test equipment to the device under test may be a single trigger command for the contact point at the second position coordinate. For example, the test device may send a click command for the contact point at the second position coordinate to the device under test, and after receiving the click command, the device under test may simulate a click operation on the contact point at the second position coordinate by using the Android system tool sendevent.
As another example, after the device under test triggers a touch point at the second position coordinate, that is, after triggering a touch point at the reference area in the scene map, the touch point needs to be continuously triggered, so that the target interface can continuously display the scene picture of the virtual scene at the target viewing angle. For example, in a game interface in a battle mode, after the device to be tested needs to simulate a long-press operation on a reference area in a small map, the game interface can continuously display a game picture of a game scene at a view angle (i.e., a target view angle) corresponding to the long-press point in the small map.
In this case, the contact point trigger command carrying the second position coordinate sent by the test equipment to the device under test may be a continuous trigger command for the contact point at the second position coordinate. For example, the test device may send a long-press command for the contact point at the second position coordinate to the device under test, and after receiving the long-press command, the device under test may simulate a long-press operation on the contact point at the second position coordinate by using the Android system tool sendevent.
The format of the command used by the sendevent tool to simulate a touch screen operation is as follows: sendevent /dev/input/eventX type code value. Wherein /dev/input/eventX is the device node to which the event is written (the event is time-stamped when it is written), type is the event type, code is the event code, and value is the value of the event.
Illustratively, the code for the device under test to simulate a pressing operation using the sendevent tool is as follows:
sendevent /dev/input/event3 3 58 500    # pressure 500
sendevent /dev/input/event3 3 53 400    # x position coordinate 400
sendevent /dev/input/event3 3 54 900    # y position coordinate 900
sendevent /dev/input/event3 1 330 1     # touch event: press down
sendevent /dev/input/event3 0 0 0       # sync event (the simulated press actually takes effect after the sync event, simulating a gesture pressing at position (400, 900))
If the device under test needs to simulate a sliding operation, the following code can be run after the above code:
sendevent /dev/input/event3 3 58 500    # pressure 500
sendevent /dev/input/event3 3 53 400    # x position coordinate 400
sendevent /dev/input/event3 3 54 950    # y position coordinate 950
sendevent /dev/input/event3 0 0 0       # sync event (the simulated gesture slides to position (400, 950) after the sync event)
sendevent /dev/input/event3 3 58 500    # pressure 500
sendevent /dev/input/event3 3 53 400    # x position coordinate 400
sendevent /dev/input/event3 3 54 1000   # y position coordinate 1000
sendevent /dev/input/event3 0 0 0       # sync event (the simulated gesture continues to slide to position (400, 1000) after the sync event)
If the device under test needs to simulate a hand-lifting operation, the following code can be run after the above code:
sendevent /dev/input/event3 1 330 0     # touch event: lift
sendevent /dev/input/event3 0 0 0       # sync event (the simulated hand-lifting actually takes effect after the sync event)
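As a hedged illustration of how the test equipment might drive these commands on the device under test over adb (the device node /dev/input/event3 and the event codes are taken from the example above and are device-specific assumptions):

import subprocess

def send_touch_events(events, node="/dev/input/event3"):
    # events: a list of (type, code, value) triples, e.g. the press sequence above:
    # [(3, 58, 500), (3, 53, 400), (3, 54, 900), (1, 330, 1), (0, 0, 0)]
    for ev_type, code, value in events:
        subprocess.run(["adb", "shell", "sendevent", node, str(ev_type), str(code), str(value)],
                       check=True)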
In some embodiments, the device under test may be instructed by a first test tool in the test equipment to display a scene view of the virtual scene at the target perspective in the target interface. That is, the first test tool may send a contact triggering command carrying the second position coordinate to the device under test, so as to instruct the device under test to display the scene picture of the virtual scene at the target viewing angle after triggering the contact at the second position coordinate in the target interface.
For example, in the preparation stage, code for acquiring the second position coordinate of the reference region in the second screen image and code for sending the contact trigger command carrying the second position coordinate to the device under test may be input in the code editing window in the test interface of the first test tool shown in fig. 10. In the testing stage, this code can be run so that the first test tool acquires the second position coordinate of the reference area in the second screen image and sends the contact trigger command carrying the second position coordinate to the device under test.
It should be noted that the virtual scene is changed since the virtual scene includes a plurality of virtual objects, and the plurality of virtual objects can be manipulated to perform operations. In the embodiment of the application, the target visual angle of the virtual scene can be continuously determined for the continuously changing virtual scene, and the scene picture displayed in the target interface is continuously adjusted according to the target visual angle, so that the target interface always displays the scene picture with the highest complexity, and therefore the tested equipment can always be in the state with the highest rendering load when the target interface is displayed, the performance test requirement of the tested equipment can be met, and the test efficiency is improved. That is to say, the operation of determining the target view angle by the testing device according to the aggregation condition of the virtual objects in the virtual scene in step 1204, and the operation of indicating the scene picture of the virtual scene at the target view angle by the testing device in step 1205 may be executed once every certain time, for example, once every two seconds. That is, the test device may repeatedly perform the above steps 1204 and 1205 at regular intervals to re-determine the target viewing angle and instruct the device under test to display the scene picture of the virtual scene at the target viewing angle.
In this case, the testing device may first determine whether the scene in the target interface is finished being displayed, for example, in a game application, it is determined whether the game is finished. Specifically, after the second screen image of the tested device is acquired, the testing device can judge whether the second screen image has an end mark image; if the second screen image has the ending mark image, the test equipment determines that the scene picture in the target interface is ended to be displayed; and if the second screen image does not have the ending mark image, the test equipment determines that the scene picture in the target interface is not ended to be displayed.
The end mark image is an image of an end mark, and the end mark is a mark displayed when the scene in the target interface ends. The technician may store the end-marker image in the test device in advance.
When determining whether the end mark image exists in the second screen image, the test device may match the end mark image against the second screen image according to the image features of the end mark image and of the second screen image, so as to determine whether a partial image that matches (i.e., is similar to) the end mark image exists in the second screen image. For example, the test device may use the end mark image as a template image and the second screen image as a source image, and perform template matching on the template image and the source image by using the template matching function matchTemplate() in the OpenCV software library, so as to determine whether a partial image matching the template image exists in the source image.
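As an illustration, the following minimal sketch performs this end-of-scene check with OpenCV's matchTemplate(); the file names and the 0.9 threshold are assumptions rather than values from the original method.

import cv2

def find_end_mark(screen_path: str, end_mark_path: str, threshold: float = 0.9):
    source = cv2.imread(screen_path)      # second screen image (source image)
    template = cv2.imread(end_mark_path)  # pre-stored end mark image (template image)
    result = cv2.matchTemplate(source, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None  # no partial image matches the end mark image
    # max_loc is the top-left corner of the matched region; it can also serve as
    # a basis for the third position coordinate used to exit the target interface.
    return max_loc

print(find_end_mark("second_screen.png", "end_mark.png"))

If find_end_mark() returns a coordinate, the scene picture has finished being displayed, and the coordinate can be used when exiting the target interface as described below.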
If the scene picture in the target interface has not finished being displayed, the test device continues to determine the target view angle and instruct the tested device to display the scene picture of the virtual scene at the target view angle. Moreover, if the test device previously sent the tested device a long-press command for the contact at the second position coordinate, then after a new second position coordinate is determined, the test device may send a long-press command for the new second position coordinate to instruct the tested device to simulate the gesture sliding to the contact at the new second position coordinate.
If the scene picture in the target interface has finished being displayed, the test device no longer determines the target view angle. Moreover, if the test device previously sent the tested device a long-press command for the contact at the second position coordinate, the test device sends a hand-lifting command to the tested device to instruct the tested device to simulate lifting the hand.
In some embodiments, if the scene picture in the target interface has finished being displayed, the test device may end the test, or may first instruct the tested device to exit the target interface and then end the test. When instructing the tested device to exit the target interface, the test device can acquire the position coordinate of the end mark image in the second screen image as a third position coordinate, and send a trigger command carrying the third position coordinate to the tested device, so as to instruct the tested device to exit the target interface after triggering the end mark at the third position coordinate in the target interface. After receiving the trigger command, the tested device triggers the end mark at the third position coordinate in the target interface so as to exit the target interface.
After the test device finishes testing, the second test tool in the tested device may output a performance test result, that is, a test result of the performance of the tested device while running the human-computer interaction application. As shown in fig. 11, the performance test result may include an average frame rate, a jitter rate, the number of stutters per hour, the maximum number of dropped frames, power consumption, CPU utilization, CPU temperature, and the like. In this way, the performance of the tested device in the process of running the human-computer interaction application is tested.
The above testing process is illustrated below with reference to fig. 22, taking a game application as an example. Referring to fig. 22, the test process may include the following steps (1) to (3):
(1) early preparation
Start the second test tool, start the game application, and enter the game interface.
(2) Middle test
Take a screenshot within 10-20 seconds after entering the game interface, identify the minimap in the screenshot, and identify the character avatars and the avatar radius in the minimap.
Then take a screenshot every 2 seconds and judge from the screenshot whether the game has ended. If the game has ended, the second test tool outputs the performance test result. If the game has not ended, identify the minimap in the screenshot and the character avatars of the different organizations in the minimap; determine a target area for each avatar, taking the center of that avatar as the circle center and 3 times the avatar radius as the radius; from the determined target areas, take the one that contains the largest number of avatars and contains avatars of different organizations as the reference area; take the average of the position coordinates of the centers of all the avatars in the reference area as the position coordinate to be triggered; and instruct the tested device to trigger that position coordinate in the minimap so that the game interface switches to the game picture at that location (a sketch of this selection is given after step (3) below). Repeat the screenshot operation until the game ends.
(3) Late stage statistics
After the game ends, the second test tool analyzes the collected data to obtain the performance test result, which may include the average frame rate, jitter rate, number of stutters per hour, maximum number of dropped frames, power consumption, CPU utilization, CPU temperature, and the like.
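As noted in step (2), the reference-area selection can be sketched as follows, assuming the character avatars have already been detected in the minimap; the Avatar type, the sample coordinates and the organization labels are hypothetical, and only the geometry (3 times the avatar radius, the largest number of avatars, different organizations, averaged centers) follows the description above.

from dataclasses import dataclass

@dataclass
class Avatar:
    x: float
    y: float
    org: str  # organization (faction) the character belongs to

def pick_trigger_coordinate(avatars: list[Avatar], radius: float, k: float = 3.0):
    best: list[Avatar] = []
    for centre in avatars:
        # Candidate target area: a circle centered on this avatar, with k times
        # the avatar radius as its radius.
        members = [a for a in avatars
                   if (a.x - centre.x) ** 2 + (a.y - centre.y) ** 2 <= (k * radius) ** 2]
        # Keep only areas containing avatars of different organizations and
        # prefer the one containing the most avatars (the reference area).
        if len({a.org for a in members}) >= 2 and len(members) > len(best):
            best = members
    if not best:
        return None  # no aggregation of different organizations found
    # The coordinate to trigger is the average of the member avatars' centers.
    return (sum(a.x for a in best) / len(best), sum(a.y for a in best) / len(best))

minimap_avatars = [Avatar(100, 120, "red"), Avatar(110, 130, "blue"), Avatar(300, 40, "red")]
print(pick_trigger_coordinate(minimap_avatars, radius=8))

With the sample data above the sketch prints (105.0, 125.0), the average of the centers of the two avatars from different organizations.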
In the embodiment of the application, after establishing a communication connection with the tested device, the test device starts the human-computer interaction application and the second test tool in the tested device. Then, the test device instructs the human-computer interaction application in the tested device to display a target interface, and the target interface is used for displaying a scene picture of the virtual scene. The test device then determines a target view angle according to the aggregation condition of the virtual objects in the virtual scene, where the complexity of the scene picture of the virtual scene at the target view angle is the highest. Finally, the test device instructs the tested device to display the scene picture of the virtual scene at the target view angle in the target interface, so as to test the performance of the tested device. Because the target view angle is the view angle corresponding to the scene picture with the highest complexity in the virtual scene, after the tested device displays the scene picture of the virtual scene at the target view angle in the target interface, the rendering load of the tested device when displaying that scene picture is the highest, so that the performance test requirement of the tested device can be met and the test efficiency improved.
Fig. 23 is a schematic structural diagram of a testing apparatus provided in an embodiment of the present application, where the testing apparatus may be implemented by software, hardware, or a combination of the two as part or all of a computer device, and the computer device may be the terminal shown in fig. 1. Referring to fig. 23, the apparatus includes: a first indication module 2301, a determination module 2302, and a second indication module 2303.
A first indicating module 2301, configured to instruct a second device to display a target interface, where the target interface is used to display a scene picture of a virtual scene, and the virtual scene includes a plurality of virtual objects;
a determining module 2302, configured to determine a target view according to an aggregation condition of a virtual object in a virtual scene, where the target view is a view corresponding to a scene picture with the highest complexity in the virtual scene;
a second indicating module 2303, configured to instruct the second device to display a scene picture of the virtual scene in the target view angle in the target interface, so as to test performance of the second device.
Optionally, the first indicating module 2301 is configured to:
starting a human-computer interaction application in the second equipment, wherein the human-computer interaction application is an application for providing a virtual scene;
and instructing the man-machine interaction application in the second equipment to display the target interface.
Optionally, the apparatus further comprises:
and the starting module is used for starting a test tool in the second equipment, and the test tool is used for testing the performance of the second equipment.
Optionally, the first indicating module 2301 is configured to perform the operations performed by the testing device in steps (1) - (11) in step 1203 in the embodiment of fig. 12.
Optionally, at least two virtual objects in the plurality of virtual objects belong to different organizations, the target interface is further configured to display a scene map of the virtual scene, and the determining module 2302 is configured to perform the operations performed by the testing apparatus in steps (1) to (5) in step 1204 in the embodiment of fig. 12.
Optionally, the determining module 2302 is configured to:
determining a plurality of aggregation areas in a scene map, wherein each aggregation area comprises virtual object identifications of different organizations;
determining an aggregation area with the largest number of virtual object identifications contained in the plurality of aggregation areas as a reference area;
and determining the corresponding view angle of the reference area in the scene map as the target view angle.
Optionally, the determining module 2302 is configured to:
determining a target area where each virtual object identifier is located in a plurality of virtual object identifiers in a scene map to obtain a plurality of target areas, wherein the target area where each virtual object identifier is located is an area with the center the same as that of each virtual object identifier, the shape the same as that of each virtual object identifier and the size k times that of each virtual object identifier, and k is an integer greater than or equal to 2;
and determining a target area containing virtual object identifications of different organizations in the plurality of target areas as an aggregation area.
Optionally, the second indicating module 2303 is configured to:
acquiring the position coordinate of the reference area in the second screen image as a second position coordinate;
and sending a contact point triggering command carrying the second position coordinate to the second equipment to indicate the second equipment to trigger a contact point at the second position coordinate in the target interface and then display a scene picture of the virtual scene under the target visual angle.
In the embodiment of the application, the second device is instructed to display the target interface, and the target interface is used for displaying the scene picture of the virtual scene. And then, determining a target visual angle according to the aggregation condition of the virtual objects in the virtual scene, wherein the target visual angle is a visual angle corresponding to a scene picture with the highest complexity in the virtual scene. And finally, instructing the second equipment to display a scene picture of the virtual scene under the target visual angle in the target interface so as to test the performance of the second equipment. Because the target visual angle is the visual angle corresponding to the scene picture with the highest complexity in the virtual scene, the rendering load of the tested equipment is highest when the scene picture of the virtual scene under the target visual angle is displayed in the target interface, so that the performance test requirement of the tested equipment can be met, and the test efficiency is improved.
It should be noted that: in the test device provided in the above embodiment, only the division of the functional modules is exemplified, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
Each functional unit and module in the above embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used to limit the protection scope of the embodiments of the present application.
The test apparatus and the test method provided by the above embodiments belong to the same concept, and the specific working processes and technical effects of the units and modules in the above embodiments can be referred to the method embodiments, which are not described herein again.
In the above embodiments, the implementation may be realized wholly or partly by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., Digital Versatile Disc (DVD)), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The above description is not intended to limit the present application to the particular embodiments disclosed, but rather, the present application is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present application.

Claims (8)

1. A method of testing, applied to a first device, the method comprising:
instructing a second device to display a target interface, wherein the target interface is used for displaying a scene picture and a scene map of a virtual scene, the virtual scene comprises a plurality of virtual objects, at least two virtual objects in the plurality of virtual objects belong to different organizations, and the scene map comprises virtual object identifiers;
sending a second screen image acquisition command to the second equipment every other preset time length;
after receiving a second screen image sent by the second device, identifying the scene map in the second screen image;
determining a plurality of aggregation areas in the scene map, wherein each aggregation area comprises virtual object identifications of different organizations;
determining the aggregation areas with the largest number of virtual object identifications contained in the plurality of aggregation areas as reference areas;
determining a view angle corresponding to the reference area in the scene map as a target view angle, wherein the target view angle is a view angle corresponding to a scene picture with the highest complexity in the virtual scene;
and instructing the second equipment to display a scene picture of the virtual scene under the target visual angle in the target interface so as to test the performance of the second equipment.
2. The method of claim 1, wherein said instructing the second device to display the target interface comprises:
starting a human-computer interaction application in the second device, wherein the human-computer interaction application is an application for providing the virtual scene;
and instructing the human-computer interaction application in the second equipment to display the target interface.
3. The method of claim 2, wherein prior to said instructing the human-machine-interactive application in the second device to display the target interface, further comprising:
and starting a test tool in the second equipment, wherein the test tool is used for testing the performance of the second equipment.
4. The method of claim 2 or 3, wherein said instructing the human-machine-interactive application in the second device to display the target interface comprises:
setting i equal to 1, determining an ith option image in n option images as a reference option image, wherein an option corresponding to the nth option image in the n option images is used for entering the target interface, and n is a positive integer;
sending a first screen image acquisition command to the second device;
after receiving a first screen image sent by the second device, acquiring a position coordinate of the reference option image in the first screen image as a first position coordinate;
sending an option triggering command carrying the first position coordinate to the second device to indicate the second device to trigger an option located at the first position coordinate in an application interface of the human-computer interaction application;
if the i is not equal to the n, making i equal to i +1, and re-executing the step of determining the ith option image in the n option images as the reference option image and the subsequent steps until the i is equal to the n.
5. The method of claim 1, wherein the determining a plurality of aggregate areas in the scene map comprises:
determining a target area where each virtual object identifier in a plurality of virtual object identifiers in the scene map is located to obtain a plurality of target areas, wherein the target area where each virtual object identifier is located is an area with the center the same as that of each virtual object identifier, the shape the same as that of each virtual object identifier, and the size k times that of each virtual object identifier, k being an integer greater than or equal to 2;
and determining a target area containing virtual object identifications of different organizations in the plurality of target areas as the aggregation area.
6. The method of claim 1 or 5, wherein said instructing the second device to display a scene view of the virtual scene at the target perspective in the target interface comprises:
acquiring the position coordinate of the reference area in the second screen image as a second position coordinate;
and sending a touch point triggering command carrying the second position coordinate to the second device to indicate the second device to display a scene picture of the virtual scene under the target visual angle after triggering a touch point at the second position coordinate in the target interface.
7. A test apparatus, the apparatus comprising:
the first indicating module is used for indicating the second equipment to display a target interface and a scene map, wherein the target interface is used for displaying a scene picture of a virtual scene, the virtual scene comprises a plurality of virtual objects, at least two virtual objects in the plurality of virtual objects belong to different organizations, and the scene map comprises virtual object identifiers;
the determining module is used for sending a second screen image acquisition command to the second equipment every other preset time length; after receiving a second screen image sent by the second device, identifying the scene map in the second screen image; determining a plurality of aggregation areas in the scene map, wherein each aggregation area comprises virtual object identifications of different organizations; determining the aggregation areas with the largest number of virtual object identifications contained in the plurality of aggregation areas as reference areas; determining a view angle corresponding to the reference area in the scene map as a target view angle, wherein the target view angle is a view angle corresponding to a scene picture with the highest complexity in the virtual scene;
and the second indicating module is used for indicating the second device to display a scene picture of the virtual scene under the target visual angle in the target interface so as to test the performance of the second device.
8. A computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1-6.
CN202110800736.7A 2021-07-15 2021-07-15 Test method, test device and computer readable storage medium Active CN113608978B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110800736.7A CN113608978B (en) 2021-07-15 2021-07-15 Test method, test device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113608978A CN113608978A (en) 2021-11-05
CN113608978B (en) 2022-08-02

Family

ID=78337625





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220823

Address after: No. 9, Xingyao Road, Chang'an District, Xi'an, Shaanxi Province 710000

Patentee after: Xi'an Honor Device Co.,Ltd.

Address before: Unit 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong 518040

Patentee before: Honor Device Co.,Ltd.

TR01 Transfer of patent right