CN109815150B - Application testing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number: CN109815150B
Authority: CN (China)
Prior art keywords: application, image, scene, scene template, template image
Legal status: Active (granted)
Application number: CN201910087252.5A
Other languages: Chinese (zh)
Other versions: CN109815150A
Inventors: 王洁梅, 周大军, 张力柯, 荆彦青
Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910087252.5A
Publication of CN109815150A
Application granted
Publication of CN109815150B

Landscapes

  • Debugging And Monitoring (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an application testing method and apparatus, an electronic device, and a storage medium, belonging to the field of internet technologies. In the method, the similarity between at least one running interface image and at least one scene template image is determined when a test instruction is received, so that the scene template images exhibited during the running of the application can be rapidly determined based on the similarity. Because the scene template images accurately reflect the scenes actually displayed while the application runs, the actual running condition of the application is obtained accurately and quickly, improving both the efficiency and the accuracy of the application test.

Description

Application testing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of internet technologies, and in particular, to an application test method and apparatus, an electronic device, and a storage medium.
Background
Game AI (Artificial Intelligence) refers to the running logic in game applications that simulates human thinking and intelligent behavior. When the terminal runs a game application, it executes the running logic of the game AI to present the game scenes of the application to the user, thereby realizing interaction between the user and the game application.
The game scene can include a scene image, and the terminal displays the scene image to present the game scene to the user, for example, a scene image for selecting a game scene mode, an arena, or a daily match. However, if the running logic of the game application is wrong, the terminal is likely to display the scene image incorrectly; for example, a scene image may not be displayed at all, so the terminal cannot correctly present the game scene, degrading the user experience. Therefore, a method for testing an application accurately and efficiently is needed to obtain the actual running condition of a game application.
Disclosure of Invention
The embodiments of the present invention provide an application testing method and apparatus, an electronic device, and a storage medium, which can solve the problem that the actual running condition of an application cannot be acquired. The technical solutions are as follows:
in one aspect, an application testing method is provided, and the method includes:
when a test instruction is received, acquiring at least one running interface image of an application, wherein the test instruction is used for testing an interface displayed when the application runs;
determining the similarity between the at least one running interface image and the at least one scene template image according to the at least one running interface image and the at least one scene template image of the application, wherein the at least one scene template image is used for representing target display content of a corresponding scene of the application;
determining a test result of the application based on the similarity between the at least one running interface image and the at least one scene template image, wherein the test result is used for indicating a scene displayed when the application runs.
In another aspect, an application testing apparatus is provided, the apparatus comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring at least one running interface image of an application when receiving a test instruction, and the test instruction is used for testing an interface displayed when the application runs;
a determining module, configured to determine, according to the at least one running interface image and the at least one scene template image of the application, a similarity between the at least one running interface image and the at least one scene template image, where the at least one scene template image is used to represent target display content of a corresponding scene of the application;
the determining module is further configured to determine a test result of the application based on a similarity between the at least one running interface image and the at least one scene template image, where the test result is used to indicate a scene displayed during the running of the application.
In another aspect, an electronic device is provided that includes one or more processors and one or more memories having at least one instruction stored therein, the at least one instruction being loaded and executed by the one or more processors to implement the operations performed by the application testing method as described above.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the operations performed by the application testing method as described above.
The technical solutions provided by the embodiments of the present invention have at least the following beneficial effects:
the similarity between at least one running interface image and at least one scene template image is determined when a test instruction is received, so that the scene template images exhibited during the run of the application can be quickly determined based on the similarity. Because the scene template images accurately reflect the scenes actually displayed while the application runs, the actual running condition of the application is obtained accurately and quickly, improving the efficiency and the accuracy of the application test.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic illustration of an implementation environment provided by an embodiment of the invention;
FIG. 2 is a flow chart of an application testing method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of feature point extraction according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of determining a description vector according to an embodiment of the present invention;
FIG. 5 is a graph illustrating a distance ratio of a nearest neighbor distance to a second nearest neighbor distance provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of a scene template image and a running interface image according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a scene template image and a running interface image according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a scene template image and a running interface image according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a scene template image and a running interface image according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a scene template image and a running interface image according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a scene template image and a running interface image according to an embodiment of the present invention;
fig. 12 is a schematic diagram illustrating feature point extraction of a scene template image and an operation interface image according to an embodiment of the present invention;
fig. 13 is a schematic diagram of a matching process of feature points of a scene template image and an operation interface image according to an embodiment of the present invention;
fig. 14 is a schematic diagram of a matching result between a scene template image and a running interface image feature point according to an embodiment of the present invention;
fig. 15 is a schematic diagram of extracting feature points of a scene template image and an operation interface image according to an embodiment of the present invention;
fig. 16 is a schematic diagram of a scene template image and operation interface image feature point matching process according to an embodiment of the present invention;
fig. 17 is a schematic diagram of a matching result between a scene template image and a running interface image feature point according to an embodiment of the present invention;
FIG. 18 is a flow chart of an application test provided by an embodiment of the present invention;
FIG. 19 is a schematic structural diagram of an application testing apparatus according to an embodiment of the present invention;
fig. 20 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
fig. 21 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present invention. Referring to fig. 1, the implementation environment includes a test device 101 on which a test application is installed. The test application is used for testing the interface displayed while an application runs, so as to obtain the scene actually displayed during the run.
The implementation environment may further include a device 102 installed with an application, and the test device 101 may collect at least one running interface image of the application during running from the device 102. Or, an application may also be installed on the test device 101, and the test device 101 may open the application through the test application and acquire at least one running interface image during running of the application. The testing device 101 tests the interface displayed during the running of the application based on the at least one running interface image and the at least one scene template image of the application to obtain the scene actually displayed during the running of the application.
The test device 101 may be a server or a terminal, and the test application may be an independent application program or a test module installed in other applications, for example, a test plug-in installed in a browser. The application can be a game application, the game application comprises a plurality of game scenes, each game scene can comprise one or more game pictures, and the test application can test an interface displayed during the running of the game application based on the game pictures displayed in real time during the running process of the game application so as to know the game scene actually displayed during the running of the game application.
Fig. 2 is a flowchart of an application testing method according to an embodiment of the present invention. The execution subject of this embodiment is a test device, which may be a terminal or a server; this is not specifically limited in the embodiment of the present invention. Referring to fig. 2, the method includes:
201. when a test instruction is received, the test equipment acquires at least one running interface image of the application.
Wherein, the at least one running interface image refers to an interface image actually displayed in the running process of the application. The test instruction is used for testing the interface displayed during the running of the application so as to acquire the actual displayed scene during the running of the application.
In the embodiment of the invention, when the test instruction is received, the test equipment acquires the running interface image displayed in the running process of the application according to the application identifier of the application to be tested. The test equipment can be provided with the application, and the test equipment can directly acquire an operation interface image when the application runs; or, the test device may not install the application, and the test device acquires the running interface image from the device on which the application is installed. Accordingly, this step can be implemented in the following two ways.
In a first way, the application is installed on the test device. And when the test equipment receives the test instruction, the test equipment starts the application, and in the running process of the application, the test equipment performs screenshot on the application interface of the application to obtain a running interface image of the application.
The test equipment can acquire different running interface images of the application running in different scenes based on the different scenes of the application running at present. In this step, in the running process of the application, the test equipment performs screenshot on the application interface of the application based on the currently displayed first scene to obtain a running interface image of the application in the first scene. When the application runs to the next scene, the test equipment performs screenshot on the application interface of the application again to obtain a running interface image of a second scene. The test equipment is provided with a test application, and the test application is used for testing the running process of an application program. The test device may perform a test based on the triggering of a test instruction in the test application. In a possible implementation manner, an application identifier of a plurality of applications may be provided in the application interface of the test application, and when a user triggers a certain application identifier, the test device receives the test instruction based on the application identifier of the triggered application, and opens the application based on the application identifier.
In a possible implementation manner, an application interface of the test application may be provided with multiple scenario options of the application, when a user triggers a certain application identifier, the test device displays the multiple scenario options of the application in a current display interface, and when the user triggers a certain scenario option, the test device opens the application based on the triggered target scenario option, and starts to execute an execution logic corresponding to the target scenario based on the target scenario option. And in the process of executing the running logic corresponding to the target scene, the test equipment captures the application to obtain a running interface image when the application shows the target scene.
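For illustration, the screenshot acquisition in this first way might look as follows when the application runs on an Android device; this is a minimal sketch assuming the test equipment drives the device with adb, and the device identifier, polling interval, and file names are hypothetical, not part of this embodiment.

```python
# A minimal sketch of step 201 (first way), assuming an Android device
# driven via adb; device id, interval, and paths are hypothetical.
import subprocess
import time

def capture_running_interface(device_id: str, out_path: str) -> None:
    # "adb exec-out screencap -p" writes a PNG of the current screen to stdout.
    png = subprocess.run(
        ["adb", "-s", device_id, "exec-out", "screencap", "-p"],
        check=True, capture_output=True,
    ).stdout
    with open(out_path, "wb") as f:
        f.write(png)

# Capture one running interface image per second while the application runs.
for i in range(10):
    capture_running_interface("emulator-5554", f"running_interface_{i:03d}.png")
    time.sleep(1.0)
```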
In a second way, the application is not installed on the test device. And when the test equipment receives the test instruction, the test equipment acquires the running interface image from the target equipment for installing the application.
When the test device receives the test instruction, the test device may send an acquisition request to the target device, where the acquisition request is used to request to acquire at least one running interface image displayed in real time in the running process of the application. The obtaining request may carry an application identifier of the application. The target device receives the acquisition request, starts the application according to the application identifier carried by the acquisition request, and captures an application interface of the application in the application running process to obtain a running interface image of the application. And the target equipment sends the at least one running interface image to the testing equipment in real time. The test equipment receives at least one running interface image sent by the target equipment. The process of acquiring the operation interface image by the target device is the same as the process of acquiring the operation interface image by the test device in the first mode, and is not repeated here.
The user may also test a target scene in the multiple scenes, and the acquisition request may also carry the target scene option. The target device can start the application, run the running logic corresponding to the target scene of the application, and capture the screenshot of the application by the target device in the process of executing the running logic corresponding to the target scene to obtain the running interface image of the application when the target scene is displayed. And the target equipment sends the running interface image corresponding to the target scene to the test equipment.
It should be noted that, in the embodiment of the present invention, the test device may acquire, in real time, a screenshot image of an application based on an actual running process of the application, and use the screenshot image as an operation interface image of the application, and then test the application based on the actual running process, so as to better understand an actual running condition of the application, and improve accuracy of an application test.
202. The test equipment acquires at least one scene template image of the application.
Wherein the at least one scene template image is used for representing target exhibition content of a corresponding scene of the application. That is, if the application is operating normally, the running interface image of the application should be displayed as the corresponding scene template image. In the embodiment of the invention, the test equipment stores at least one scene template image of the application in advance, and when the test instruction is received, the test equipment acquires the at least one scene template image of the application. In this step, the test device loads the at least one scene template image from the target storage space according to the application identifier of the application.
The test device may load, in real time, a scene template image corresponding to the scene based on the scene currently displayed by the application, and this step may be: in the application running process, the test equipment acquires a scene template image corresponding to a first scene according to the scene identifier of the first scene currently running based on the corresponding relation between the scene identifier and the scene template image. Of course, when the application runs to the next scene, that is, the second scene, the test device loads the scene template image corresponding to the second scene from the target storage space based on the scene identifier of the second scene.
In a possible implementation manner, the test device may further load scene template images of a current scene and a next scene based on a currently displayed scene, and this step may further be: when a test instruction is received, the test equipment loads a scene template image of a first scene and a scene template image of a second scene from a target storage space based on a scene identifier of the first scene which is currently displayed. Wherein the second scene is a scene that is shown sequentially after the first scene in a plurality of scenes included in the application. Of course, the test device may also load scene template images of the current scene and two scenes that are shown in sequence after the current scene based on the currently shown scene, which is not specifically limited in the embodiment of the present invention.
The test equipment can respectively and correspondingly store the applied scene identifiers and the scene template images corresponding to the scene identifiers, so that the test equipment can obtain the scene template images corresponding to the scenes on the basis of the scene identifiers in real time.
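As an illustration of this correspondence between scene identifiers and scene template images, a lookup that also prefetches the next scene's template might be sketched as follows; the scene identifiers, file paths, and scene order are hypothetical assumptions.

```python
# Hypothetical correspondence between scene identifiers and stored
# scene template images (the "target storage space" above).
import cv2

SCENE_TEMPLATES = {
    "mode_select": "templates/mode_select.png",
    "hero_select": "templates/hero_select.png",
    "battle": "templates/battle.png",
}
SCENE_ORDER = ["mode_select", "hero_select", "battle"]

def load_templates(scene_id: str) -> dict:
    """Load the template of the current (first) scene and prefetch the
    template of the scene shown next (the second scene)."""
    i = SCENE_ORDER.index(scene_id)
    wanted = SCENE_ORDER[i:i + 2]  # current scene plus the next scene
    return {s: cv2.imread(SCENE_TEMPLATES[s], cv2.IMREAD_GRAYSCALE)
            for s in wanted}
```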
In this embodiment of the present invention, the scene template image may be a historical application interface of the application or a pre-drawn image; that is, the test device acquires a pre-drawn image, or a historical application interface of the application, as the scene template image. It should be noted that the scene template image may be drawn in advance and stored in the target storage space; for example, when the application first goes online, a developer draws the scene template image and designs the running logic of the application based on it. Alternatively, the scene template image may be a historical running interface image of the application; for example, when the application version is updated after the application goes online, part of the running interface images may remain the same as in the previous version of the application, and the test device may directly use the historical running interface images from the previous release as scene template images.
It should be noted that, in the embodiment of the present invention, the test device may test the operation process of the application based on the scene template image and the operation interface image. In addition, the test equipment can also acquire a scene template image and an operation interface image corresponding to the current scene in real time based on the current displayed scene, so that the scene template image and the operation interface image can be in one-to-one correspondence, and the accuracy of acquiring the scene template image and the operation interface image is improved. In addition, when the scene template image of the current first scene is acquired, the test equipment can also acquire the next scene, namely the scene template image of the second scene in advance, so that the efficiency of acquiring the scene template image is improved. In addition, the test equipment can also acquire a scene template image drawn by a developer, or a historical operation interface image of the previous version is applied as the scene template image, so that the source for acquiring the scene template image is enriched, and the applicability of the invention is improved.
203. The test equipment determines the similarity between the at least one running interface image and the at least one scene template image according to the at least one running interface image and the at least one scene template image of the application.
The similarity represents how similar the running interface image and the scene template image are in image features, and the testing device can determine it from the image features of the two images. In this step, the test equipment extracts the image features of the running interface image and of the scene template image, and determines the similarity between any running interface image and any scene template image from those features. Image features are the visual characteristics of an image as displayed, and may include the graphics, colors, animation effects, gray scales, text information, and the like in the image.
In the step, for any running interface image, the test equipment acquires a first characteristic point of the any running interface image; for any scene template image, the test equipment acquires a second characteristic point of the any scene template image; and the test equipment determines the similarity between any one running interface image and any one scene template image according to the first characteristic point of any one running interface image and the second characteristic point of any one scene template image.
In the embodiment of the invention, the test equipment can extract the feature points of the images to represent the image features, and the test equipment can extract the feature points of all the areas of the running interface images as the image features of the running interface images so as to determine the similarity between the two images; or, the test equipment may further extract feature points of a partial region of the running interface image, and determine the similarity as an image feature of the running interface image. Accordingly, this step can be implemented in the following two ways.
In the first mode, the test equipment extracts feature points of all areas of the running interface image. The test equipment extracts feature points of all the areas of the operation interface image to obtain a first feature point of the operation interface image. The test equipment determines the similarity between any one operation interface image and any one scene template image according to the first characteristic point of the operation interface image and the second characteristic point of the scene template image.
For any first feature point in any running interface image, the test equipment can determine whether any scene template image comprises a second feature point matched with any first feature point. For any one running interface image, the testing equipment determines the similarity between any one running interface image and any one scene template image according to the number of the matched feature points in any one running interface image. The matching feature point refers to a first feature point matched with a second feature point of any scene template image.
The test equipment can determine whether the first feature point is matched with the second feature point according to the description vector of the first feature point and the description vector of the second feature point. In a possible implementation manner, the test equipment acquires a description vector of any first feature point in any one running interface image and a description vector of any second feature point in the scene template image; for any first feature point, the test equipment determines the description distances between the first feature point and any second feature point according to the description vector of the first feature point and the description vector of any second feature point, and obtains a plurality of description distances; the test equipment determines whether the second feature points matched with the first feature points are included in the plurality of second feature points according to the plurality of description distances. Wherein any one of the description distances is a description distance between one first feature point and one second feature point. Wherein the test device may filter out a nearest neighbor distance and a second neighbor distance of the plurality of descriptive distances, the test device determining a distance ratio of the nearest neighbor distance and the second neighbor distance. When the distance ratio is smaller than a threshold value, the test equipment determines that the scene template image comprises a second feature point matched with the first feature point; when the distance ratio is not less than a threshold value, the test equipment determines that the second feature point matched with the first feature point is not included in the scene template image. The threshold may be set based on needs, which is not particularly limited in this embodiment of the present invention, and for example, the threshold may be 0.75, 0.8, and the like.
The test equipment can search a second feature point matched with the first feature point from the plurality of second feature points according to any first feature point in the running interface image to obtain a plurality of point pairs, wherein any point pair comprises a first feature point from the running interface image and a second feature point from the scene template image. And the test equipment determines the similarity between the running interface image and the scene template image according to the number of the first characteristic points in the plurality of point pairs. In a possible implementation manner, when the testing device determines the similarity between any one of the running interface images and any one of the scene template images according to the number of the matched feature points in the running interface image, the testing device may directly use the number of the first feature points in the plurality of point pairs, that is, the number of the matched feature points in the plurality of first feature points, as the similarity between the running interface image and the scene template image. Of course, the test equipment may also determine the ratio of the number of the first feature points in the plurality of point pairs to the total number of the first feature points in the running interface image as the similarity between the running interface image and the scene template image.
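The whole-image matching just described can be sketched with OpenCV's ORB implementation and a brute-force Hamming matcher; the 0.75 ratio follows the threshold discussed below, and taking the fraction of matched first feature points as the similarity follows the second variant above. This is an illustrative sketch under those assumptions, not the patent's reference implementation.

```python
import cv2

def image_similarity(run_img_path: str, template_path: str,
                     ratio: float = 0.75) -> float:
    img1 = cv2.imread(run_img_path, cv2.IMREAD_GRAYSCALE)    # running interface image
    img2 = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)   # scene template image
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)  # first feature points
    kp2, des2 = orb.detectAndCompute(img2, None)  # second feature points
    if des1 is None or des2 is None:
        return 0.0
    # Hamming distance as the description distance between binary descriptors.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = bf.knnMatch(des1, des2, k=2)  # nearest and second-nearest neighbor
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    # Similarity: ratio of matched first feature points to all first feature points.
    return len(good) / max(len(kp1), 1)
```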
The feature points of an image are its relatively salient points, such as contour points of a figure in the image, bright points in darker areas, and dark points in lighter areas. For the extraction of the first feature points and the second feature points, the test equipment can extract a plurality of first feature points from the running interface image and a plurality of second feature points from the scene template image through a target algorithm. The target algorithm may be the ORB (Oriented FAST and Rotated BRIEF) algorithm, among others. The main principle of feature point extraction in the FAST (Features from Accelerated Segment Test) algorithm is as follows: if a pixel p differs sufficiently from enough pixels in its surrounding ring (pixel 1 to pixel 16, as shown in fig. 3), then p may be a corner point, i.e., a feature point. Because feature point extraction by the FAST algorithm is not scale invariant, when the ORB algorithm is used, the test equipment can first construct a Gaussian pyramid and detect feature points on each pyramid layer to achieve scale invariance. Because the FAST algorithm is also not direction invariant, the test equipment can assign each feature point a direction using the gray centroid method: for any feature point p, the moments of the neighborhood pixels of p are defined as

$m_{pq} = \sum_{x,y} x^p y^q I(x,y)$

where I(x, y) is the gray value at pixel (x, y), and the centroid of the neighborhood is

$C = \left( \frac{m_{10}}{m_{00}}, \frac{m_{01}}{m_{00}} \right)$

The angle between the feature point and the centroid is the direction of the feature point: $\theta = \arctan(m_{01}, m_{10})$. Here $m_{pq}$ denotes an image moment, $m_{00}$ the 0th-order moment, and $m_{01}$ and $m_{10}$ the 1st-order moments.
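For concreteness, the gray centroid computation above can be sketched in a few lines of numpy; the square patch and its size are assumptions for illustration.

```python
# Sketch of the gray centroid method: 1st-order moments of a patch
# centered on a feature point, and the direction theta = arctan(m01, m10).
import numpy as np

def feature_orientation(patch: np.ndarray) -> float:
    """patch: square grayscale neighborhood centered on feature point p."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs -= (w - 1) / 2.0   # coordinates relative to the patch center
    ys -= (h - 1) / 2.0
    m10 = (xs * patch).sum()   # 1st-order moment in x
    m01 = (ys * patch).sum()   # 1st-order moment in y
    return float(np.arctan2(m01, m10))  # direction of the feature point
```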
For the determination of a feature point's description vector, the conventional algorithm mainly selects N point pairs around the feature point P (N being a positive integer greater than or equal to 1) and concatenates the comparison results of the N point pairs into a binary string of length N, used as the descriptive feature of the point. Because this calculation is not rotation invariant, the test equipment can adopt the ORB algorithm: when computing the description vector of a feature point, it establishes a two-dimensional coordinate system centered on the feature point P, with the line connecting P and the centroid Q of the sampling area as the X axis. As shown in fig. 4, since the circle center is fixed, PQ serves as the x-axis coordinate, and the perpendicular direction serves as the y-axis coordinate, the point pairs extracted for the same feature point are identical at different rotation angles, so rotational consistency is achieved. In the test device, the description vector can be represented as a binary string.
For determining the description distance between a first feature point and a second feature point, the testing apparatus may use the Hamming distance: the Hamming distance of two equal-length binary strings is the number of positions at which their corresponding characters differ. After screening based on the description distances, the testing device may further screen matching point pairs based on the distance ratio and a threshold. As shown in fig. 5, the testing device may apply the ratio test, in which the parameter ratio rejects feature points whose distance ratio falls outside a certain range. In one possible embodiment, the nearest neighbor may be determined by the shortest Euclidean distance between the description vectors of two feature points. Fig. 5 plots the ratio of the nearest neighbor distance to the second nearest neighbor distance on the horizontal axis, and the PDF (probability density function) of correct and incorrect matches on the vertical axis; the solid line is the PDF of the ratio for correct matches, and the dashed line is the PDF of the ratio for mismatches. As shown in fig. 5, a threshold of 0.75 best separates correct matches from false matches. Therefore, when the distance ratio is less than 0.75, it is determined that the second feature points include a feature point matching the first feature point.
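For ORB descriptors, which OpenCV stores as rows of packed uint8 bytes, the Hamming distance above is simply the number of differing bits; a minimal sketch:

```python
import numpy as np

def hamming_distance(d1: np.ndarray, d2: np.ndarray) -> int:
    # XOR exposes the differing bits; unpackbits counts them bit by bit.
    return int(np.unpackbits(d1 ^ d2).sum())
```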
As shown in fig. 6, some scene template images and running interface images are the same only in a partial region while belonging to clearly different scenes. In fig. 6, the virtual character is the same in the running interface image and the scene template image, but the scene template image is in the game scene selection mode and is a selection interface of the game scene, whereas the running interface image shows a scene for selecting a virtual character and is a selection interface of the virtual character; the two therefore belong to different scenes and should not match. Hence, the test equipment needs to screen the matching results in combination with a feature point matching strategy over the whole image. Fig. 7 is a schematic view of the actual interfaces of the scene template image and the running interface image, in which their actual differences can be seen more clearly.
In the second mode, the test equipment extracts the characteristic points of the partial region of the running interface image. The testing equipment determines a matching area of the operation interface image and obtains a first characteristic point of the operation interface image in the matching area. And the test equipment determines the similarity between any one running interface image and any one scene template image according to the first characteristic point of any one running interface image in the matching area and the second characteristic point of the scene template image.
In this step, the mode of obtaining the first feature point from the matching region by the testing device, and the mode of determining the similarity between the operation interface image and the scene template image by the testing device based on the first feature point and the second feature point are the same as the first mode, and are not described herein again.
The running interface image may include a user information display area and/or a debugging area. The user information display area displays the personal information of the user or of the user's friend users, and the debugging area displays the debugging information of the application during running. Since different users' personal information differs and debugging information can differ between runs, the test equipment can take the area outside the user information display area and the debugging area as the area from which feature points are extracted. Accordingly, the step in which the test device extracts the first feature points from the running interface image may cover the following two cases.
In the first case, for a first operation interface image including a user information display area, the test equipment acquires a first feature point of a first area in the first operation interface image.
The first area is an area except the user information display area in the first operation interface image; the test equipment may extract a first feature point from within the first region.
As shown in fig. 8, in the scene template image, the user information display area contains no user information, while in the running interface image, the user information display area contains the user information of several of the user's friend users. Obviously, the two user information display areas differ, so when the test device determines the scene template image matching the running interface image, it may perform matching based on the area other than the user information display area. The scene template image of the game scene does not show friend user information. If matching were performed based on the first feature points of all areas of the running interface image, the first feature points extracted from the user information display area would find no matching points in the scene template image, reducing the total number of matched feature points between the two images and easily leading to the erroneous conclusion that the running interface image and the scene template image do not belong to the same scene. Fig. 9 is a schematic view of the actual interfaces of the scene template image and the running interface image, in which their difference in the user information display area can be seen more clearly.
In the second case, for a second operation interface image including a debugging area, the test equipment acquires a first feature point of a second area in the second operation interface image. And the second area is an area except the debugging area in the running interface image. The test equipment may extract a first feature point from within the second region.
As shown in fig. 10, in the scene template image, the debugging area does not include debugging information, and in the running interface image, the debugging area includes debugging information generated by the application in the running process, obviously, the debugging area of the scene template image is different from the debugging area of the running interface image, and when the test device determines the scene template image matched with the running interface image, the test device may perform matching based on the area outside the debugging area. As shown in fig. 11, fig. 11 is a schematic view of an actual interface between a scene template image and a running interface image, and the difference between the scene template image and the running interface image in the debugging area can be more clearly shown in fig. 11.
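Both cases amount to passing a mask to the feature detector so that first feature points are only taken from the matching region; a sketch follows, in which the rectangles are hypothetical placements of the user information display area and the debugging area.

```python
import cv2
import numpy as np

img = cv2.imread("running_interface.png", cv2.IMREAD_GRAYSCALE)
mask = np.full(img.shape, 255, dtype=np.uint8)   # 255 = use this pixel
mask[0:120, 0:400] = 0   # exclude an assumed user information display area
mask[-80:, :] = 0        # exclude an assumed debugging area at the bottom

orb = cv2.ORB_create()
# Feature points are only detected where the mask is non-zero,
# i.e., within the first/second region described above.
kp, des = orb.detectAndCompute(img, mask)
```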
In a possible implementation manner, the test device may further combine the first manner and the second manner to determine the similarity between any running interface image and any scene template image. The process may be: the test equipment divides the running interface image into a plurality of areas and extracts the first feature points of each area. Based on the first feature points of an area, the test equipment determines the description distances to the second feature points of a scene template image and, from those distances, the similarity between that area of the running interface image and the scene template image, obtaining a plurality of similarities between the areas of the running interface image and the at least one scene template image. The test equipment then screens out the scene template images whose similarity is greater than a similarity threshold according to the similarities of the plurality of areas, and determines the similarity between the running interface image and the screened scene template images.
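A rough sketch of this combined per-region screening, reusing the matching ideas above; the 2x2 grid and the ratio threshold are illustrative assumptions.

```python
import cv2

def region_similarities(img, template, rows=2, cols=2, ratio=0.75):
    """Similarity between each grid region of the running interface image
    and a scene template image (fraction of matched region feature points)."""
    orb = cv2.ORB_create()
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    kp_t, des_t = orb.detectAndCompute(template, None)  # second feature points
    h, w = img.shape
    sims = []
    for r in range(rows):
        for c in range(cols):
            region = img[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            kp, des = orb.detectAndCompute(region, None)
            if des is None or des_t is None:
                sims.append(0.0)
                continue
            pairs = bf.knnMatch(des, des_t, k=2)
            good = [p for p in pairs
                    if len(p) == 2 and p[0].distance < ratio * p[1].distance]
            sims.append(len(good) / max(len(kp), 1))
    return sims
```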
As shown in fig. 12, the test device extracts the feature points of the running interface image and of the scene template image; for a running interface image that includes a debugging region, the test device may extract first feature points only from the matching region outside the debugging region. As shown in fig. 13, the test device then computes the description distance between each first feature point and each second feature point, obtaining a plurality of description distances. As shown in fig. 14, the test device screens the first and second feature points based on these description distances and filters out the second feature points matching the first feature points, obtaining a plurality of one-to-one point pairs, each containing one first feature point and one second feature point. Fig. 15, 16, and 17 are schematic diagrams of the actual interfaces corresponding to fig. 12, 13, and 14, respectively, in which the matching process between the scene template image and the running interface image can be seen more clearly.
It should be noted that, in the embodiment of the present invention, the feature points are extracted by an algorithm with scale invariance and rotation invariance, rather than by matching images directly on pixel gray values. Gray-value matching compares the gray values of pixels at identical positions one by one, its main premise being that pixels at the same position have equal or close gray values. Because different terminals differ in user interface resolution and rendering, the image size and rendering positions can differ greatly, making such matching inaccurate. The embodiment of the invention instead matches feature points, which are corner points or salient points in the image and are independent of the position and size of the object elements in the image, so the method of the embodiment has stronger applicability. In addition, when extracting feature points and computing their description vectors, the ORB algorithm combines the FAST algorithm with features of the BRIEF (Binary Robust Independent Elementary Features) algorithm, further improved and optimized, and determines the similarity with an algorithm that is both rotation and scale invariant. Therefore, the method in the embodiment of the invention can greatly improve the accuracy and applicability of computing the similarity between the running interface image and the scene template image.
204. The testing device determines a testing result of the application based on the similarity between the at least one running interface image and the at least one scene template image.
In the embodiment of the present invention, the test result is used to indicate a scenario displayed when the application runs. The test equipment can determine the scene template image with the similarity meeting the target screening condition as the scene template image displayed during the application operation according to the similarity between the at least one operation interface image and the at least one scene template image. Wherein, the target screening condition may be that the similarity is greater than a similarity threshold.
The test equipment can screen out scene template images matched with any one operation interface image based on the similarity between any one operation interface image and any one scene template image and based on the target screening condition; and the test equipment determines the scene displayed in the application running process based on the scene template image matched with any running interface image. The similarity between the running interface image and the scene template image may be represented by the number of matching feature points in the running interface image, and the screening process of the test device based on the target screening condition may be: for each operation interface image, the test equipment can screen out the operation interface image and the scene template image corresponding to the operation interface image when the number of the matching feature points is larger than a number threshold value according to the number of the matching feature points in the operation interface image, and determine the scene template image as the scene template image matched with the operation interface image. The number threshold may be set based on needs, and this is not particularly limited in this embodiment of the present invention. For example, the quantity threshold may be 100, 80, etc.
Further, the testing device may determine the application's coverage rate of the scene template images based on the number of scene template images exhibited during the run of the target application and the total number of the at least one scene template image. The coverage rate compares the number of scenes actually exhibited at runtime against the number of scenes the application would exhibit in an ideal run; numerically, it equals the ratio of the number of scene template images exhibited during the actual run to the total number of scene template images of the application.
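Putting the screening condition and the coverage rate together, a hedged sketch follows; the threshold of 100 matched feature points is one of the example values mentioned above, and the data layout is an assumption.

```python
MATCH_THRESHOLD = 100  # example number threshold from the text

def test_result(match_counts: list, template_ids: list):
    """match_counts[i][j]: matched feature points between running
    interface image i and scene template image j."""
    shown = {template_ids[j]
             for counts in match_counts
             for j, n in enumerate(counts) if n > MATCH_THRESHOLD}
    coverage = len(shown) / len(template_ids)  # ratio of exhibited templates
    return shown, coverage
```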
It should be noted that the test equipment may take a scene template image with a larger similarity as the scene template image matching the running interface image, so the scene template images actually exhibited during the run are obtained, the actually displayed scenes are accurately known, and the accuracy of determining the application's running process is improved. Furthermore, the coverage rate accurately represents how completely the run covers the scene template images, so a user can learn the actual running condition of the application from concrete data and quantitatively grasp which scenes are displayed at runtime, improving the user experience.
It should be noted that the test device may learn the application scenes displayed during the run based on the scene template images exhibited while the application runs. Further, the test device may modify the running logic of the application based on the test result through the following step 205.
205. And the test equipment modifies the operation logic of the application according to the scene template image which is not shown in the operation process of the application.
In the embodiment of the invention, the test equipment can obtain the scene template images that are not exhibited when the application runs. The test equipment then repeatedly executes the steps of modifying the running logic of the application according to these unexhibited scene template images and determining the test result of the application based on the similarity between the at least one scene template image and the at least one running interface image corresponding to the modified running logic, until the at least one scene template image of the application is exhibited during the run.
It should be noted that, in the embodiment of the present invention, the test device may further perform real-time optimization on the application operation logic based on the test result, so that the whole test process forms a closed loop, the application operation logic is continuously optimized and perfected, and the practicability of the application test and the accuracy of the operation process are improved.
For a clearer description of the above process, the procedure of steps 201 to 205 is described taking the flow shown in fig. 18 as an example. As shown in fig. 18, the test device loads the scene template images of an application and takes as input an image set composed of at least one running interface image of the application. It traverses each running interface image in turn; during the traversal it matches feature points of partial regions and feature points of all regions between the running interface image and the scene template images, and outputs, based on the matching results, the scene template image matched by the running interface image, i.e., a scene template image covered by the application's run. When all running interface images have been traversed, all scene template images covered by the run are obtained, and the running logic of the application is optimized based on the uncovered scene template images, so that the running logic of the application is further improved.
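Tying the pieces together, the fig. 18 flow might be sketched as below, reusing the image_similarity() helper from the earlier sketch; the similarity threshold is an illustrative assumption.

```python
def run_application_test(frame_paths, scene_templates, sim_threshold=0.3):
    """scene_templates: {scene_id: template image path}. Returns the
    covered and uncovered scene template sets (steps 203-205)."""
    covered = set()
    for frame in frame_paths:                      # traverse each running interface image
        for scene_id, tpl in scene_templates.items():
            if image_similarity(frame, tpl) >= sim_threshold:
                covered.add(scene_id)              # template exhibited at runtime
    uncovered = set(scene_templates) - covered
    # Step 205: optimize the application's running logic based on
    # `uncovered`, then re-run until `uncovered` is empty.
    return covered, uncovered
```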
In the embodiment of the invention, the similarity between at least one operation interface image and at least one scene template image is determined when the test instruction is received, so that the scene template image displayed in the application operation process can be rapidly determined based on the similarity, and the scene actually displayed in the application operation process is accurately reflected based on the scene template image, so that the actual operation condition of the application is accurately and rapidly acquired, and the efficiency and the accuracy of the application test are improved.
Fig. 19 is a schematic structural diagram of an application testing apparatus according to an embodiment of the present invention. Referring to fig. 19, the apparatus includes: an acquisition module 1901 and a determination module 1902.
An obtaining module 1901, configured to obtain at least one running interface image of an application when a test instruction is received, where the test instruction is used to test an interface displayed when the application runs;
a determining module 1902, configured to determine, according to the at least one running interface image and the at least one scene template image of the application, a similarity between the at least one running interface image and the at least one scene template image, where the at least one scene template image is used to represent target display content of a corresponding scene of the application;
the determining module 1902 is further configured to determine a test result of the application based on a similarity between the at least one running interface image and the at least one scene template image, where the test result is indicative of a scene displayed during the running of the application.
Optionally, the determining module 1902 includes:
the acquisition unit is used for acquiring a first characteristic point of any running interface image for any running interface image;
the acquiring unit is further configured to acquire a second feature point of any scene template image;
and the determining unit is used for determining the similarity between any one running interface image and any one scene template image according to the first characteristic point of any one running interface image and the second characteristic point of any one scene template image.
Optionally, the obtaining unit is further configured to obtain, for a first operation interface image including a user information display area, a first feature point of a first area in the first operation interface image, where the first area is an area in the first operation interface image except for the user information display area; and for a second operation interface image comprising a debugging area, acquiring a first characteristic point of a second area in the second operation interface image, wherein the second area is an area except the debugging area in the second operation interface image.
Optionally, the determining unit is further configured to determine, for any first feature point in any one of the running interface images, whether a second feature point matched with any one of the first feature points is included in any one of the scene template images; and for any one running interface image, determining the similarity between the any one running interface image and any one scene template image according to the number of matched feature points in the any one running interface image, wherein the matched feature points refer to first feature points matched with second feature points of the any one scene template image.
Optionally, the determining unit is further configured to determine, for any one of the first feature points, description distances between the any one of the first feature points and any one of the second feature points according to the description vector of the any one of the first feature points and the description vector of the any one of the second feature points, so as to obtain a plurality of description distances; and when the ratio of the nearest neighbor distance to the second nearest neighbor distance in the plurality of description distances is smaller than a threshold value, determining that the any scene template image comprises a second feature point matched with the any first feature point.
Optionally, the obtaining module 1901 is further configured to: start the application when the test instruction is received; and during the running of the application, capture a screenshot of the application interface based on the currently displayed first scene, obtaining a running interface image of the first scene.
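On an Android terminal, for instance, this step could be driven over adb roughly as follows; the package and activity names are placeholders introduced here for illustration:

    import subprocess

    def launch_and_capture(package="com.example.game",
                           activity=".MainActivity",
                           out_path="run_interface.png"):
        # Start the application when the test instruction is received.
        subprocess.run(
            ["adb", "shell", "am", "start", "-n", f"{package}/{activity}"],
            check=True)
        # Screenshot the currently displayed scene as a running interface image.
        with open(out_path, "wb") as f:
            subprocess.run(["adb", "exec-out", "screencap", "-p"],
                           stdout=f, check=True)
        return out_path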
Optionally, the obtaining module 1901 is further configured to: during the running of the application, acquire the scene template image corresponding to the currently displayed first scene according to the scene identifier of the first scene, based on the correspondence between scene identifiers and scene template images.
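A minimal sketch of such a correspondence; the scene identifiers and file paths are illustrative assumptions:

    # Assumed scene identifiers and template paths, for illustration only.
    SCENE_TEMPLATES = {
        "login": "templates/login.png",
        "lobby": "templates/lobby.png",
        "battle": "templates/battle.png",
    }

    def template_for_scene(scene_id):
        # Look up the scene template image for the currently displayed scene.
        return SCENE_TEMPLATES.get(scene_id)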
Optionally, the determining module 1902 is further configured to determine, according to the similarity between the at least one running interface image and the at least one scene template image, a scene template image whose similarity satisfies a target screening condition as the scene template image displayed during the running of the application.
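Assuming the target screening condition takes the form of a fixed similarity threshold (one possible form; the embodiment does not restrict the condition to it), the screening step might look like the following, reusing the similarity helper sketched above:

    SIMILARITY_THRESHOLD = 0.6  # assumed target screening condition

    def displayed_templates(run_image_paths, scene_templates):
        # Collect every scene template whose similarity with some running
        # interface image satisfies the screening condition.
        shown = set()
        for run_path in run_image_paths:
            for scene_id, tpl_path in scene_templates.items():
                if similarity(run_path, tpl_path) >= SIMILARITY_THRESHOLD:
                    shown.add(scene_id)
        return shown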
Optionally, the apparatus further comprises:
the obtaining module 1901 is further configured to obtain a scene template image that is not shown when the application runs;
and a modification module, configured to repeatedly perform the steps of modifying the running logic of the application according to the scene template images not displayed during the running of the application, and determining the test result of the application based on the similarity between the at least one scene template image and the at least one running interface image corresponding to the modified running logic, until the at least one scene template image of the application is displayed during the running of the application.
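The repeat-until-shown loop can then be outlined as follows; run_and_capture and modify_running_logic are hypothetical hooks standing in for the capture step sketched earlier and the (typically manual) logic fix:

    def test_until_all_scenes_shown(scene_templates):
        while True:
            # Hypothetical hook: run the application and capture fresh
            # running interface images under the current running logic.
            run_images = run_and_capture()
            shown = displayed_templates(run_images, scene_templates)
            missing = set(scene_templates) - shown
            if not missing:
                # Every scene template image was displayed: the test passes.
                return "pass"
            # Hypothetical hook: modify the running logic for the scenes
            # whose template images were not shown, then test again.
            modify_running_logic(missing)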
In the embodiments of the present invention, the similarity between at least one running interface image and at least one scene template image is determined when a test instruction is received, so that the scene template images displayed during the running of the application can be determined rapidly based on the similarity. Because the scene template images accurately reflect the scenes actually displayed while the application runs, the actual running condition of the application is obtained accurately and quickly, which improves both the efficiency and the accuracy of the application test.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that the application testing apparatus provided in the above embodiment is illustrated only with the division into the functional modules described above; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the application testing apparatus and the application testing method provided by the above embodiments belong to the same concept; for details of the specific implementation process, refer to the method embodiments, which are not repeated herein.
Fig. 20 is a schematic structural diagram of a terminal according to an embodiment of the present invention. The terminal 2000 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 2000 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, terminal 2000 includes: a processor 2001 and a memory 2002.
The processor 2001 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 2001 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 2001 may also include a main processor and a coprocessor, the main processor being a processor for processing data in an awake state, also called a CPU (Central Processing Unit), and the coprocessor being a low-power processor for processing data in a standby state. In some embodiments, the processor 2001 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 2001 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 2002 may include one or more computer-readable storage media, which may be non-transitory. The memory 2002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 2002 is used to store at least one instruction for execution by processor 2001 to implement the application testing methods provided by the method embodiments herein.
In some embodiments, terminal 2000 may further optionally include: a peripheral interface 2003 and at least one peripheral. The processor 2001, memory 2002 and peripheral interface 2003 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 2003 through a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 2004, a touch display 2005, a camera 2006, an audio circuit 2007, a positioning assembly 2008, and a power supply 2009.
The peripheral interface 2003 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 2001 and the memory 2002. In some embodiments, the processor 2001, memory 2002 and peripheral interface 2003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 2001, the memory 2002, and the peripheral interface 2003 may be implemented on separate chips or circuit boards, which is not limited by this embodiment.
The radio frequency circuit 2004 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 2004 communicates with a communication network and other communication devices via electromagnetic signals, converting an electric signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 2004 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 2004 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 2004 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 2005 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 2005 is a touch display screen, the display screen 2005 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 2001 as a control signal for processing. At this point, the display 2005 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 2005, disposed on the front panel of terminal 2000; in other embodiments, there may be at least two display screens 2005, respectively disposed on different surfaces of the terminal 2000 or in a folded design; in still other embodiments, the display screen 2005 may be a flexible display disposed on a curved surface or a folded surface of terminal 2000. The display screen 2005 may even be arranged in an irregular, non-rectangular shape, that is, a shaped screen. The display screen 2005 may be made of a material such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
Camera assembly 2006 is used to capture images or video. Optionally, camera assembly 2006 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 2006 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 2007 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 2001 for processing or inputting the electric signals to the radio frequency circuit 2004 so as to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different positions of the terminal 2000. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 2001 or the radio frequency circuit 2004 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 2007 may also include a headphone jack.
The positioning component 2008 is configured to determine the current geographic location of the terminal 2000 to implement navigation or LBS (Location Based Service). The positioning component 2008 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 2009 is used to power the various components in terminal 2000. The power supply 2009 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When power supply 2009 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 2000 also includes one or more sensors 2010. The one or more sensors 2010 include, but are not limited to: acceleration sensor 2011, gyro sensor 2012, pressure sensor 2013, fingerprint sensor 2014, optical sensor 2015, and proximity sensor 2016.
The acceleration sensor 2011 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 2000. For example, the acceleration sensor 2011 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 2001 may control the touch display screen 2005 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 2011. The acceleration sensor 2011 may also be used for acquisition of motion data of a game or a user.
The gyroscope sensor 2012 can detect the body direction and the rotation angle of the terminal 2000, and the gyroscope sensor 2012 and the acceleration sensor 2011 can cooperate to acquire the 3D motion of the user on the terminal 2000. The processor 2001 may implement the following functions according to the data collected by the gyro sensor 2012: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 2013 may be disposed on the side bezel of terminal 2000 and/or at a lower layer of the touch display screen 2005. When the pressure sensor 2013 is disposed on the side frame of the terminal 2000, a holding signal of the user on the terminal 2000 can be detected, and the processor 2001 performs left-right hand recognition or a shortcut operation according to the holding signal collected by the pressure sensor 2013. When the pressure sensor 2013 is disposed at a lower layer of the touch display screen 2005, the processor 2001 controls an operability control on the UI according to a pressure operation of the user on the touch display screen 2005. The operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 2014 is used to collect a fingerprint of the user, and the processor 2001 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 2014, or the fingerprint sensor 2014 identifies the identity of the user according to the collected fingerprint. Upon identifying the user's identity as a trusted identity, the processor 2001 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 2014 may be disposed on the front, back, or side of the terminal 2000. When a physical key or vendor Logo is provided on the terminal 2000, the fingerprint sensor 2014 may be integrated with the physical key or vendor Logo.
The optical sensor 2015 is used to collect ambient light intensity. In one embodiment, the processor 2001 may control the display brightness of the touch display 2005 according to the ambient light intensity collected by the optical sensor 2015. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 2005 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 2005 is turned down. In another embodiment, the processor 2001 may also dynamically adjust the shooting parameters of the camera assembly 2006 according to the ambient light intensity collected by the optical sensor 2015.
The proximity sensor 2016, also known as a distance sensor, is typically disposed on the front panel of the terminal 2000. The proximity sensor 2016 is used to collect the distance between the user and the front surface of the terminal 2000. In one embodiment, when the proximity sensor 2016 detects that the distance between the user and the front surface of the terminal 2000 gradually decreases, the processor 2001 controls the touch display 2005 to switch from a bright screen state to a dark screen state; when the proximity sensor 2016 detects that the distance between the user and the front surface of the terminal 2000 gradually increases, the processor 2001 controls the touch display 2005 to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 20 is not intended to be limiting of terminal 2000 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Fig. 21 is a schematic structural diagram of a server according to an embodiment of the present invention. The server 2100 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 2101 and one or more memories 2102, where the memory 2102 stores at least one instruction that is loaded and executed by the processor 2101 to implement the application testing method provided by each of the method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and may further include other components for implementing device functions, which are not described herein again.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, is also provided that includes instructions executable by a processor in an electronic device to perform the application testing method in the above-described embodiments. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (16)

1. An application testing method, the method comprising:
when a test instruction is received, acquiring at least one running interface image of an application, wherein the test instruction is used for testing an interface displayed when the application runs;
determining the similarity between the at least one running interface image and the at least one scene template image according to the at least one running interface image and the at least one scene template image of the application, wherein the at least one scene template image is a running interface image displayed by the application when the application runs normally;
determining, according to the similarity between the at least one running interface image and the at least one scene template image, a scene template image whose similarity meets a target screening condition as a scene template image displayed when the application runs;
acquiring a scene template image which is not shown when the application runs;
and repeatedly performing the steps of modifying the running logic of the application according to the scene template images not displayed during the running of the application, and determining the scene template images displayed during the running of the application based on the similarity between the at least one scene template image and the at least one running interface image corresponding to the modified running logic, until the at least one scene template image of the application is displayed during the running of the application.
2. The method of claim 1, wherein the determining, according to the at least one running interface image and the at least one scene template image of the application, the similarity between the at least one running interface image and the at least one scene template image comprises:
for any running interface image, acquiring a first feature point of that running interface image;
for any scene template image, acquiring a second feature point of that scene template image;
and determining the similarity between the running interface image and the scene template image according to the first feature point of the running interface image and the second feature point of the scene template image.
3. The method according to claim 2, wherein the acquiring, for any running interface image, a first feature point of that running interface image comprises:
for a first running interface image including a user information display area, acquiring a first feature point of a first area in the first running interface image, wherein the first area is the area of the first running interface image other than the user information display area;
and for a second running interface image including a debugging area, acquiring a first feature point of a second area in the second running interface image, wherein the second area is the area of the second running interface image other than the debugging area.
4. The method according to claim 2, wherein the determining the similarity between the running interface image and the scene template image according to the first feature point of the running interface image and the second feature point of the scene template image comprises:
for any first feature point in the running interface image, determining whether the scene template image includes a second feature point matching that first feature point;
and determining the similarity between the running interface image and the scene template image according to the number of matched feature points in the running interface image, wherein a matched feature point is a first feature point matching a second feature point of the scene template image.
5. The method according to claim 4, wherein the determining, for any first feature point in the running interface image, whether the scene template image includes a second feature point matching that first feature point comprises:
for the first feature point, determining, according to the description vector of the first feature point and the description vectors of the second feature points, a description distance between the first feature point and each second feature point, to obtain a plurality of description distances;
and when the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance among the plurality of description distances is smaller than a threshold, determining that the scene template image includes a second feature point matching the first feature point.
6. The method of claim 1, wherein obtaining at least one running interface image of an application upon receiving a test instruction comprises:
when the test instruction is received, starting the application;
and during the running of the application, capturing a screenshot of the application interface of the application based on the currently displayed first scene, to obtain a running interface image of the first scene.
7. The method of claim 1, wherein before determining the similarity between the at least one running interface image and the at least one scene template image of the application based on the at least one running interface image and the at least one scene template image, the method further comprises:
and during the running of the application, acquiring the scene template image corresponding to the currently displayed first scene according to the scene identifier of the first scene, based on the correspondence between scene identifiers and scene template images.
8. An application testing apparatus, the apparatus comprising:
an obtaining module, configured to obtain at least one running interface image of an application when a test instruction is received, wherein the test instruction is used to test an interface displayed when the application runs;
a determining module, configured to determine, according to the at least one running interface image and the at least one scene template image of the application, a similarity between the at least one running interface image and the at least one scene template image, where the at least one scene template image is a running interface image displayed by the application when the application runs normally;
the determining module is further configured to determine, according to the similarity between the at least one running interface image and the at least one scene template image, a scene template image whose similarity satisfies a target screening condition as the scene template image displayed during the application running;
the obtaining module is further configured to obtain a scene template image that is not shown when the application runs;
the device further comprises:
and a modification module, configured to repeatedly perform the steps of modifying the running logic of the application according to the scene template images not displayed during the running of the application, and determining the test result of the application based on the similarity between the at least one scene template image and the at least one running interface image corresponding to the modified running logic, until the at least one scene template image of the application is displayed during the running of the application.
9. The apparatus of claim 8, wherein the determining module comprises:
an acquiring unit, configured to acquire, for any running interface image, a first feature point of that running interface image;
the acquiring unit is further configured to acquire, for any scene template image, a second feature point of that scene template image;
and a determining unit, configured to determine the similarity between the running interface image and the scene template image according to the first feature point of the running interface image and the second feature point of the scene template image.
10. The apparatus of claim 9,
the acquiring unit is further configured to: for a first running interface image including a user information display area, acquire a first feature point of a first area in the first running interface image, wherein the first area is the area of the first running interface image other than the user information display area; and for a second running interface image including a debugging area, acquire a first feature point of a second area in the second running interface image, wherein the second area is the area of the second running interface image other than the debugging area.
11. The apparatus of claim 9,
the determining unit is further configured to: for any first feature point in the running interface image, determine whether the scene template image includes a second feature point matching that first feature point; and determine the similarity between the running interface image and the scene template image according to the number of matched feature points in the running interface image, wherein a matched feature point is a first feature point matching a second feature point of the scene template image.
12. The apparatus according to claim 11, wherein the determining unit is further configured to: for the first feature point, determine, according to the description vector of the first feature point and the description vectors of the second feature points, a description distance between the first feature point and each second feature point, to obtain a plurality of description distances; and when the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance among the plurality of description distances is smaller than a threshold, determine that the scene template image includes a second feature point matching the first feature point.
13. The apparatus of claim 8, wherein the obtaining module is further configured to: start the application when the test instruction is received; and during the running of the application, capture a screenshot of the application interface of the application based on the currently displayed first scene, to obtain a running interface image of the first scene.
14. The apparatus according to claim 8, wherein the obtaining module is further configured to, during the running of the application, obtain, according to a scene identifier of a currently displayed first scene, a scene template image corresponding to the first scene based on a correspondence between the scene identifier and the scene template image.
15. An electronic device, comprising one or more processors and one or more memories having stored therein at least one instruction that is loaded and executed by the one or more processors to perform operations performed by the application testing method of any of claims 1 to 7.
16. A computer-readable storage medium having stored therein at least one instruction which is loaded and executed by a processor to perform operations performed by the application testing method of any one of claims 1 to 7.
CN201910087252.5A 2019-01-29 2019-01-29 Application testing method and device, electronic equipment and storage medium Active CN109815150B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910087252.5A CN109815150B (en) 2019-01-29 2019-01-29 Application testing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910087252.5A CN109815150B (en) 2019-01-29 2019-01-29 Application testing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109815150A CN109815150A (en) 2019-05-28
CN109815150B true CN109815150B (en) 2021-08-06

Family

ID=66605698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910087252.5A Active CN109815150B (en) 2019-01-29 2019-01-29 Application testing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109815150B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110502425A (en) * 2019-06-28 2019-11-26 平安银行股份有限公司 Test data generating method, device, electronic equipment and storage medium
CN112559314B (en) * 2019-09-26 2024-05-31 上海汽车集团股份有限公司 Testing method and testing device for man-machine interaction interface
CN110738185B (en) * 2019-10-23 2023-07-07 腾讯科技(深圳)有限公司 Form object identification method, form object identification device and storage medium
CN111273905B (en) * 2020-01-17 2023-04-18 南京大学 Application retrieval method and device based on interface sketch
CN111522615B (en) * 2020-04-23 2023-08-15 深圳赛安特技术服务有限公司 Method, device, equipment and storage medium for updating command line interface
CN112150464B (en) * 2020-10-23 2024-01-30 腾讯科技(深圳)有限公司 Image detection method and device, electronic equipment and storage medium
CN112559341A (en) * 2020-12-09 2021-03-26 上海米哈游天命科技有限公司 Picture testing method, device, equipment and storage medium
CN112559342A (en) * 2020-12-09 2021-03-26 上海米哈游天命科技有限公司 Method, device and equipment for acquiring picture test image and storage medium
CN113360378B (en) * 2021-06-04 2024-07-19 如你所视(北京)科技有限公司 Regression testing method and device for application program for generating VR scene
CN113782003A (en) * 2021-09-14 2021-12-10 上汽通用五菱汽车股份有限公司 Test method and system


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9251435B2 (en) * 2013-09-20 2016-02-02 Oracle International Corporation Screenshot database for application verification
CN105630686B (en) * 2016-03-24 2018-12-18 厦门美图移动科技有限公司 A kind of application traversal test method, equipment and mobile terminal
CN108304584A (en) * 2018-03-06 2018-07-20 百度在线网络技术(北京)有限公司 Illegal page detection method, apparatus, intruding detection system and storage medium
CN108875797B (en) * 2018-05-29 2023-04-18 腾讯科技(深圳)有限公司 Method for determining image similarity, photo album management method and related equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1658158A (en) * 2004-01-28 2005-08-24 微软公司 Method and system for masking dynamic regions in a user interface to enable testing of user interface consistency
CN105589801A (en) * 2014-10-20 2016-05-18 网易(杭州)网络有限公司 Mobile phone cluster test method and system
CN105868102A (en) * 2016-03-22 2016-08-17 中国科学院软件研究所 Computer vision based mobile terminal application testing system and method

Also Published As

Publication number Publication date
CN109815150A (en) 2019-05-28

Similar Documents

Publication Publication Date Title
CN109815150B (en) Application testing method and device, electronic equipment and storage medium
CN110210571B (en) Image recognition method and device, computer equipment and computer readable storage medium
CN111079576B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
CN110222789B (en) Image recognition method and storage medium
CN110650379B (en) Video abstract generation method and device, electronic equipment and storage medium
CN110807361A (en) Human body recognition method and device, computer equipment and storage medium
CN110110787A (en) Location acquiring method, device, computer equipment and the storage medium of target
CN110059652B (en) Face image processing method, device and storage medium
CN109522863B (en) Ear key point detection method and device and storage medium
CN109360222B (en) Image segmentation method, device and storage medium
CN110570460A (en) Target tracking method and device, computer equipment and computer readable storage medium
US11386586B2 (en) Method and electronic device for adding virtual item
CN108320756B (en) Method and device for detecting whether audio is pure music audio
CN111144365A (en) Living body detection method, living body detection device, computer equipment and storage medium
CN112581358B (en) Training method of image processing model, image processing method and device
CN114170349A (en) Image generation method, image generation device, electronic equipment and storage medium
CN112749613A (en) Video data processing method and device, computer equipment and storage medium
CN112287852A (en) Face image processing method, display method, device and equipment
CN111738365B (en) Image classification model training method and device, computer equipment and storage medium
CN110705438A (en) Gait recognition method, device, equipment and storage medium
CN110705614A (en) Model training method and device, electronic equipment and storage medium
CN111586279A (en) Method, device and equipment for determining shooting state and storage medium
CN109189290A (en) Click on area recognition methods, device and computer readable storage medium
CN110263695B (en) Face position acquisition method and device, electronic equipment and storage medium
CN112053360A (en) Image segmentation method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant