CN113468066A - User interface testing method and device - Google Patents

User interface testing method and device

Info

Publication number
CN113468066A
Authority
CN
China
Prior art keywords
test
image
execution result
result image
test case
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110826907.3A
Other languages
Chinese (zh)
Inventor
沈煜超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Hode Information Technology Co Ltd
Original Assignee
Shanghai Hode Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Hode Information Technology Co Ltd filed Critical Shanghai Hode Information Technology Co Ltd
Priority to CN202110826907.3A priority Critical patent/CN113468066A/en
Publication of CN113468066A publication Critical patent/CN113468066A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00: Error detection; Error correction; Monitoring
    • G06F11/36: Preventing errors by testing or debugging software
    • G06F11/3668: Software testing
    • G06F11/3672: Test management
    • G06F11/3684: Test management for test design, e.g. generating new test cases
    • G06F11/3688: Test management for test execution, e.g. scheduling of test suites
    • G06F11/3692: Test management for test results analysis
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/451: Execution arrangements for user interfaces

Abstract

The application provides a user interface testing method and apparatus. The user interface testing method includes the following steps: executing a test case of an application program and acquiring an execution result image of the test case; comparing the execution result image with a reference image corresponding to the test case to obtain a test result; and, if the test result indicates that the execution result image differs from the reference image while the execution result image still meets the test requirement, updating the reference image with the execution result image. This scheme can improve the reliability of user interface testing.

Description

User interface testing method and device
Technical Field
The application relates to the technical field of user interfaces, and in particular to a user interface testing method. The application also relates to a user interface testing apparatus, a user interface testing system, a computing device, and a computer-readable storage medium.
Background
A User Interface (UI) is the entry point through which a user operates an application program; it implements the human-computer interaction functions of the application. Testing of the user interface is therefore critical to the application.
In the related art, test cases can be executed automatically against an application program on a computing device such as a mobile terminal or tablet computer. The execution result of a test case is displayed on the application's user interface, and an execution result image is obtained by taking a screenshot of that interface. A test case simulates a user's operations on the application. On this basis, the execution result image is compared with a reference image of the application to obtain a test result, and whether the execution result image meets the test requirement is determined from that result. The reference image is an image of the user interface that matches the test requirement corresponding to the test case.
However, the test requirements of an application may change as the application is optimized, while the reference image is preset and fixed. The application's current test standard can therefore fall out of step with the preset reference image, reducing the reliability of the user interface test.
Disclosure of Invention
In view of this, embodiments of the present application provide a user interface testing method. The application also relates to a user interface testing apparatus, a user interface testing system, a computing device, and a computer-readable storage medium, which address the reduced reliability of user interface testing in the prior art.
According to a first aspect of embodiments of the present application, there is provided a user interface testing method, including:
executing a test case of an application program, and acquiring an execution result image of the test case;
comparing the execution result image with the reference image corresponding to the test case to obtain a test result;
and if the execution result image is determined to be different from the reference image according to the test result and the execution result image meets the test requirement, updating the reference image by using the execution result image.
According to a second aspect of embodiments of the present application, there is provided a user interface testing apparatus, including:
the execution result image acquisition module is configured to execute a test case of an application program and acquire an execution result image of the test case;
the test result acquisition module is configured to compare the execution result image with a reference image corresponding to the test case to obtain a test result;
and the reference image updating module is configured to update the reference image by using the execution result image if the execution result image is determined to be different from the reference image according to the test result and the execution result image meets the test requirement.
According to a third aspect of embodiments of the present application, there is provided a user interface testing system, including: a client and a server;
the client is configured to execute a test case of an application program, acquire an execution result image of the test case and send the execution result image to the server;
the server is configured to receive the execution result image, compare the execution result image with a reference image corresponding to the test case, and obtain a test result; and if the execution result image is determined to be different from the reference image according to the test result and the execution result image meets the test requirement, updating the reference image by using the execution result image.
According to a fourth aspect of embodiments herein, there is provided a computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the user interface testing method when executing the instructions.
According to a fifth aspect of embodiments herein, there is provided a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the user interface testing method.
According to the scheme provided by the embodiments of the application, an execution result image of a test case is obtained by executing the test case of an application program; the execution result image is compared with the reference image corresponding to the test case to obtain a test result; and, if the test result shows that the execution result image differs from the reference image while still meeting the test requirement, the reference image is updated with the execution result image. Requiring the execution result image to meet the test requirement ensures that only qualifying images are used to update the reference image. Because the execution result image differs from the reference image, the updated reference image is guaranteed to contain new content absent from the previous reference image, keeping it closely adapted to the current test requirement. Compared with a fixed preset reference image, the automatic updating of the reference image therefore ensures that the reference image used in each user interface test closely matches the current test requirement, improving the reliability of the user interface test.
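The three steps above (execute the case, compare against the reference, conditionally update) can be sketched as one test iteration. This is a minimal illustration with hypothetical helper names, not the patented implementation:

```python
def ui_test_step(execute_case, compare, meets_requirement, reference):
    """One UI-test iteration: run the case, compare, and maybe update the reference.

    execute_case() returns an execution result image; compare() returns a
    test-result dict; meets_requirement() applies the current test requirement.
    """
    result_image = execute_case()
    test_result = compare(result_image, reference)
    if test_result["different"] and meets_requirement(test_result):
        # The result differs from the reference yet meets the requirement:
        # adopt it as the new reference, so the next run tests against it.
        reference = result_image
    return test_result, reference
```

Note that the reference only changes on the "different but qualifying" branch; a passing or failing run leaves it untouched.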
Drawings
FIG. 1 is a flow chart of a method for testing a user interface according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a user interface testing system according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a user interface testing method applied to a user interface testing system according to an embodiment of the present application;
FIG. 4 is an illustration of an annotated image in a method for testing a user interface as provided in another embodiment of the present application;
FIG. 5a is an example of a test result image in a method for testing a user interface according to another embodiment of the present application;
FIG. 5b is a diagram illustrating another example of a test result image in a method for testing a user interface according to another embodiment of the present application;
FIG. 5c is a diagram illustrating another example of a test result image in a method for testing a user interface according to another embodiment of the present application;
FIG. 6 is an example of test result data in a method for testing a user interface according to another embodiment of the present application;
FIG. 7 is a process flow diagram of a method for testing a user interface according to another embodiment of the present application;
FIG. 8 is a schematic structural diagram of a user interface testing apparatus according to an embodiment of the present application;
FIG. 9 is a block diagram of a computing device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the application can be implemented in many ways other than those described herein, and those skilled in the art can make similar adaptations without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used in one or more embodiments of the present application to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first aspect may be termed a second aspect, and, similarly, a second aspect may be termed a first aspect, without departing from the scope of one or more embodiments of the present application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
First, the noun terms to which one or more embodiments of the present application relate are explained.
Automated testing of the user interface: also known as UI automation testing. A method for automatically executing operations that simulate human behavior against an application program on a computing device such as a mobile terminal, tablet computer, or Internet TV: test cases are executed to obtain test results, and the test results are judged against the test requirements.
Rome platform: a platform for acquiring and displaying the test results of UI (user interface) automation tests, comprising a server and a client. The server of the Rome platform may be a virtual module or a computing device whose processing logic is written in Python, and may provide interfaces for test services. The client of the Rome platform may be a virtual module or a computing device whose processing logic is written with Vue, and may provide the browsing pages of the user-facing front end. Here, Python is a high-level programming language that emphasizes simple, readable code, and Vue is a progressive framework for building user interfaces.
UI automation client (Uiauto client): a child node that executes UI automation tests. It connects over Universal Serial Bus (USB) to a computing device on which the application program to be tested is installed, and executes test cases on that device.
Agile Product Development (TAPD) platform: a product research and development collaboration platform that provides one-stop service throughout the agile development lifecycle.
Reference image: an image of the user interface that matches the test requirement corresponding to a test case, used for image comparison with the execution result image.
Perceptual hashing algorithm: a hash algorithm that computes a perceptual hash value for an image; the similarity between similar pictures is then obtained from their perceptual hash values.
Assertion processing: when writing code, assumptions are constantly made, and assertions capture these assumptions in the code. Assertion processing includes: precondition assertions, properties the code must satisfy before it executes; postcondition assertions, properties the code must satisfy after it executes; and invariant assertions, properties that must not change across execution of the code.
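The three assertion kinds listed above can be illustrated with plain Python `assert` statements. This is a generic illustration (the `withdraw` example is invented for the sketch, not taken from the patent):

```python
def withdraw(balance, amount):
    """Withdraw `amount` from `balance`, guarded by the three assertion kinds."""
    # Precondition assertion: properties required before the code executes.
    assert 0 <= amount <= balance, "invalid withdrawal"
    currency = "CNY"  # a value the operation is expected not to change
    new_balance = balance - amount
    # Postcondition assertion: properties required after execution.
    assert new_balance == balance - amount and new_balance >= 0
    # Invariant assertion: properties unchanged before and after execution.
    assert currency == "CNY"
    return new_balance
```

A failing precondition (e.g. withdrawing more than the balance) raises `AssertionError` immediately, capturing the violated assumption at the point it breaks.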
The present application provides a user interface testing method, and the present application also relates to a user interface testing apparatus, a user interface testing system, a computing device, and a computer readable storage medium, which are described in detail in the following embodiments one by one.
Fig. 1 shows a flowchart of a user interface testing method according to an embodiment of the present application, which specifically includes the following steps:
s101, executing the test case of the application program, and acquiring an execution result image of the test case.
In a specific application, a test case is executed automatically against the application program to be tested on a computing device such as a mobile terminal or tablet computer. The execution result of the test case is displayed on the application's user interface, and the execution result image is obtained by taking a screenshot of that interface. A test case simulates a user's operations on the application. For example, a test case may simulate a user logging in to the application; the corresponding execution result image is then an image of the user interface displayed after login, such as an interface showing whether the login succeeded, or the application's home page.
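As a minimal sketch of this flow, the driver object below is a hypothetical stand-in for a real UI-automation framework talking to a device; only the shape of the interaction (simulate an operation, then screenshot) reflects the text:

```python
class FakeDriver:
    """Stand-in for a UI-automation driver controlling an application under test."""
    def __init__(self):
        self.screen = "login page"

    def tap(self, element):
        # Simulate the user tapping a UI element; the screen changes in response.
        if element == "login_button":
            self.screen = "home page"

    def screenshot(self):
        # Capture the current user interface as the execution result image.
        return "image of: " + self.screen

def run_login_case(driver):
    """Test case simulating a user logging in, then capturing the result image."""
    driver.tap("login_button")
    return driver.screenshot()
```

The returned "image" here is just a string; in practice it would be the screenshot bitmap compared against the reference image.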
Moreover, one application program may correspond to one or more test cases, and one test case tests one user-interface-related test task in the application. One test task may correspond to multiple test cases, with the different test cases simulating different scenarios for the same task.
For ease of understanding and reasonable layout, the following describes the case of multiple test cases in an alternative embodiment.
And S102, comparing the execution result image with the reference image corresponding to the test case to obtain a test result.
In a specific application, the execution result image may be compared directly with the reference image corresponding to the test case, or both images may first be preprocessed and the preprocessed images compared. Preprocessing reduces the amount of data involved in the image comparison. The test result can also take various forms; exemplary test results include: the similarity between the execution result image and the corresponding reference image, their degree of difference, or a classification label indicating whether they are similar.
For ease of understanding and reasonable layout, the second comparison mode is described in detail below in the form of an alternative embodiment.
And S103, if the execution result image is determined to be different from the reference image according to the test result and the execution result image meets the test requirement, updating the reference image by using the execution result image.
Specifically, the reference image may be updated with the execution result image in several ways. For example, the current reference image may be directly replaced by the execution result image as the new reference image; this improves the update efficiency of the reference image and hence the efficiency of the user interface test. Alternatively, a notification prompting adjustment of the execution result image may be presented, the adjusted execution result image submitted by the user in response to the notification is received, and the current reference image is replaced by the adjusted image as the new reference image. User adjustment further improves how well the reference image used in each test matches the current test requirement.
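Both update strategies reduce to choosing which image becomes the new reference: the raw execution result image (fast path) or the user-adjusted version. A minimal sketch, with a hypothetical helper name:

```python
def update_reference(result_image, adjusted_image=None):
    """Return the new reference image.

    With no user adjustment, the execution result image directly replaces the
    reference; if the user adjusted the image in response to the notification,
    the adjusted version becomes the reference instead.
    """
    return adjusted_image if adjusted_image is not None else result_image
```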
In a specific application, there are various ways of determining from the test result that the execution result image differs from the reference image. For example, if the test result contains a similarity and the similarity is smaller than a similarity threshold, the execution result image is determined to differ from the reference image; or, if the test result contains a degree of difference and it is larger than a difference threshold, the images are determined to differ; or, if the test result contains a same/different label, the label directly indicates whether they differ. Any manner of determining from the test result that the execution result image and the reference image differ can be used in the present application, and this embodiment does not limit it.
Likewise, there are various ways of determining from the test result that the execution result image meets the test requirement. For example, if the test result includes a similarity, and the similarity falls within a similarity threshold interval or is greater than a test threshold, the execution result image is determined to meet the test requirement. The similarity threshold interval contains the similarity threshold, with the similarity threshold greater than the interval's lower bound, or the interval's upper bound less than or equal to the similarity threshold; the test threshold is less than the similarity threshold. Alternatively, the user may judge from the test result whether the execution result image meets the test requirement. For ease of understanding and reasonable layout, this second way of determining whether the test requirement is met is described in detail in the alternative embodiments of the present application and in the embodiment of fig. 3.
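The two thresholds above induce three outcomes, which can be made concrete as follows (the threshold values 0.95 and 0.80 are assumptions for illustration; the patent only requires test threshold < similarity threshold):

```python
def classify(similarity, similarity_threshold=0.95, test_threshold=0.80):
    """Classify a similarity-based test result into the three possible outcomes."""
    if similarity >= similarity_threshold:
        # The images are considered the same: the test simply passes.
        return "same"
    if similarity >= test_threshold:
        # Different from the reference, yet still meeting the test requirement:
        # this is the case where the reference image should be updated.
        return "different, meets requirement: update reference"
    # Different and below the test requirement: the test fails.
    return "different, fails requirement"
```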
The updated reference image is then used for subsequent user interface tests of the corresponding test case until it is itself replaced by a newer reference image, and so on, realizing iterative updating. In addition, the reference image obtained by each update contains content that differs from the reference image before the update, achieving the effect of updating the content of the reference image incrementally.
According to the scheme provided by this embodiment, requiring the execution result image to meet the test requirement ensures that only qualifying images are used to update the reference image. Because the execution result image differs from the reference image, the updated reference image is guaranteed to contain new content absent from the previous reference image, keeping it closely adapted to the current test requirement. Compared with a fixed preset reference image, the automatic updating of the reference image therefore ensures that the reference image used in each user interface test closely matches the current test requirement, improving the reliability of the user interface test.
In an optional implementation manner, the comparing the execution result image with the reference image corresponding to the test case to obtain the test result may specifically include the following steps:
respectively preprocessing the execution result image and the reference image to obtain a preprocessed execution result image and a preprocessed reference image; wherein the preprocessing includes: at least one of binarization processing and image size adjustment;
acquiring the similarity between the preprocessed execution result image and the preprocessed reference image;
and obtaining a test result by utilizing the similarity.
Binarization processing, i.e., image binarization, adjusts the gray value of every pixel to either 0 or 255, giving the whole image a black-and-white appearance, greatly reducing the image's data volume, and highlighting target contours. Binarization can therefore improve both the efficiency and the accuracy of image comparison. Image resizing may include: reducing or enlarging the image to a preset size. The preset size may be set empirically, or, if a neural network is used to obtain the similarity, it may be the input size the network expects.
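Both preprocessing steps fit in a few lines of pure Python over matrix-style grayscale images (a real pipeline would use an image library; the 128 threshold and nearest-neighbour strategy are assumptions for the sketch):

```python
def binarize(gray_image, threshold=128):
    """Set every pixel to 0 (below threshold) or 255, as in image binarization."""
    return [[255 if px >= threshold else 0 for px in row] for row in gray_image]

def resize_nearest(image, new_h, new_w):
    """Nearest-neighbour resize to a preset size (crude stand-in for real resizing)."""
    h, w = len(image), len(image[0])
    return [[image[i * h // new_h][j * w // new_w] for j in range(new_w)]
            for i in range(new_h)]
```

After both steps, every image fed to the comparison has the same size and only two pixel values, which is what reduces the comparison's data volume.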
The similarity between the preprocessed execution result image and the preprocessed reference image may be acquired in various ways. For example, the two preprocessed images may be input into a pretrained neural network model to obtain the similarity; the model is trained on sample preprocessed execution result images, sample preprocessed reference images, and the corresponding similarity labels. Alternatively, the similarity may be obtained by histogram matching: compute the histogram of each image and take a normalized correlation coefficient of the two histograms as the similarity, where the normalized correlation coefficient may be, for example, the Bhattacharyya distance or the histogram intersection distance. Alternatively, the similarity between the two preprocessed images may be calculated with a perceptual hashing algorithm. Any method that can obtain the similarity between images can be used in the present application, and this embodiment does not limit it.
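One common way to realize the perceptual-hash option is the average hash (aHash): threshold each pixel against the image mean to get a bit string, then turn the Hamming distance between two hashes into a similarity. A minimal pure-Python sketch on small, already-resized grayscale matrices (aHash is one perceptual hash among several; the patent does not name a specific one):

```python
def average_hash(gray_image):
    """Perceptual hash bits: 1 where a pixel is >= the image mean, else 0."""
    pixels = [px for row in gray_image for px in row]
    mean = sum(pixels) / len(pixels)
    return [1 if px >= mean else 0 for px in pixels]

def hash_similarity(img_a, img_b):
    """Similarity in [0, 1]: 1 minus the normalized Hamming distance of the hashes."""
    ha, hb = average_hash(img_a), average_hash(img_b)
    distance = sum(a != b for a, b in zip(ha, hb))
    return 1 - distance / len(ha)
```

Because the hash depends only on each pixel's relation to the mean, small brightness shifts leave it unchanged, which is why perceptual hashes suit screenshot comparison.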
The test result obtained from the similarity can also take various forms. For example, the similarity itself may be taken as the test result. Or the similarity may be annotated on the execution result image, or on the preprocessed execution result image, as the test result. Or the region of the execution result image that differs from the reference image (or the region of the preprocessed execution result image that differs from the preprocessed reference image) may be marked to obtain an annotated image, with the similarity annotated on that image as the test result. Any method that obtains from the similarity a test result capable of indicating that the execution result image differs from the reference image can be used in the present application, and this embodiment does not limit it.
This optional embodiment improves the efficiency and accuracy of image comparison through the binarization step of the preprocessing. Through the image-resizing step, it ensures that the user interface test is not constrained by the display size of the client executing the test case, broadening the scenarios in which the user interface testing method provided by the embodiments of the application can be used.
Corresponding to the above method embodiment, the present application further provides an embodiment of a user interface testing system, and fig. 2 shows a structure diagram of a user interface testing system provided according to an embodiment of the present application, where the system includes: a client 201 and a server 202;
the client 201 is configured to execute a test case of an application program, obtain an execution result image of the test case, and send the execution result image to the server 202;
the server 202 is configured to receive the execution result image, and compare the execution result image with the reference image corresponding to the test case to obtain a test result; and if the execution result image is determined to be different from the reference image according to the test result and the execution result image meets the test requirement, updating the reference image by using the execution result image.
According to the scheme provided by this embodiment, requiring the execution result image to meet the test requirement ensures that only qualifying images are used to update the reference image. Because the execution result image differs from the reference image, the updated reference image is guaranteed to contain new content absent from the previous reference image, keeping it closely adapted to the current test requirement. Compared with a fixed preset reference image, the automatic updating of the reference image therefore ensures that the reference image used in each user interface test closely matches the current test requirement, improving the reliability of the user interface test.
For ease of understanding, a user interface testing method applied to the user interface testing system shown in fig. 2 described above is specifically described below in an exemplary form.
Illustratively, as shown in fig. 3, a flow of a user interface testing method applied to a user interface testing system provided in the embodiment of the present application specifically includes the following steps:
triggering an automatic test task by a triggering tool;
the target client automatically executes the UI test case, generates execution result data and sends the execution result data to a server of the user interface test system;
the server of the user interface test system parses the execution result data, compares the execution result image with the reference image to obtain a test result, integrates and stores the test result with the execution result data, and sends the test result data to the client of the user interface test system;
the client of the user interface test system displays the test result data through the Web front end, receives the confirmation result of the user on the test result data, and sends the confirmation result to the server of the user interface test system;
and the server of the user interface test system updates the reference image according to the confirmation result.
The triggering tool may specifically be Jenkins, which implements the trigger mechanism: notifying the target client to execute the UI test cases. Jenkins may perform this trigger action automatically on a schedule, or a user may input a trigger instruction telling Jenkins to perform it. The target client may specifically be the child node that performs the UI automation test, i.e., a Uiauto client, or a computing device on which the application to be tested is installed. The user interface test system may specifically be the Rome platform.
Moreover, this embodiment is similar to the embodiment of fig. 1 of the present application; the description of the identical parts is omitted here and can be found in the embodiment of fig. 1. The differences are as follows: the execution bodies are specified; the execution result data of this embodiment corresponds to the execution result image in fig. 1; and the execution result data sent to the server of the user interface test system is usually packaged, so the server parses the received data to obtain execution result data usable for image comparison. In addition, for convenience of understanding and reasonable layout, the specific ways of integrating and storing the test results with the execution result data, and of acquiring the user's confirmation of the test result data, are described later in the form of alternative embodiments.
In an optional implementation manner, after the comparing the execution result image with the reference image corresponding to the test case to obtain the test result, the user interface test method provided in the embodiment of the present application may further include the following steps:
determining an area where there is a difference between the execution result image and the reference image;
marking areas with differences in the execution result image to obtain a marked image;
and obtaining a test result image based on the annotation image.
In a specific application, the region where the execution result image differs from the reference image may be determined in various ways. For example, the execution result image may be input into a pre-trained difference recognition model to obtain the regions of the execution result image that differ from the reference image. The difference recognition model is a neural network model trained on reference images, sample execution result images, and labels of the corresponding differing regions. Alternatively, the differing regions may be determined from the coordinates of the differing pixel points in the execution result image and the reference image. For ease of understanding and reasonable layout, the second example is described in detail below in the form of an alternative embodiment.
Moreover, marking the differing regions in the execution result image may specifically include: drawing a bounding box of each differing region in the execution result image according to the region's contour information; or adjusting the color of the differing regions in the execution result image to a specified color. For ease of understanding and reasonable layout, the manner of drawing the bounding box is described in detail below in the form of alternative embodiments.
Moreover, obtaining the test result image based on the annotation image may specifically include: taking the annotated image as the test result image; or stitching the annotated image and the execution result image to obtain the test result image; or stitching the annotated image, the reference image, and the execution result image; and so on. The stitching case is described in detail later in the form of an alternative embodiment.
In this optional embodiment, the test result image is obtained from the annotated image, in which the regions where the reference image and the execution result image differ are marked. This provides more intuitive, easier-to-compare data about the test result, improving both the convenience and the accuracy with which a user analyzes the test result using the test result image.
In an optional implementation manner, the determining the region where the difference exists between the execution result image and the reference image specifically includes the following steps:
determining the coordinates of pixel points with difference between the execution result image and the reference image;
grouping the coordinates according to a preset grouping rule to obtain areas with differences;
correspondingly, the marking the region with the difference in the execution result image to obtain the marked image may specifically include the following steps:
aiming at each region with difference, obtaining contour information of the region according to coordinates of each pixel point on the boundary of the region;
and drawing a boundary frame of each region in the execution result image according to the contour information of each region to obtain an annotated image.
In this optional embodiment, obtaining the contour information of a differing region from the coordinates of the differing pixel points improves the accuracy of the contour information. Compared with marking the differing regions by color, drawing the bounding box of each region in the execution result image according to its contour information annotates the differing regions without changing their content, which improves the convenience and accuracy of subsequently analyzing the execution result through the annotated image.
In a specific application, the coordinates of the pixel points where the execution result image differs from the reference image may be determined in various ways. For example, the execution result image and the reference image may be processed with a Structural Similarity algorithm (SSIM) to obtain the coordinates of the differing pixel points in the two images, that is, a coordinate set of the differing pixels. The structural similarity algorithm measures the similarity of two pictures: in each calculation a window of a preset size is taken from the pictures, the window is slid continuously while comparing luminance, contrast, and structure, and finally the average is taken as the global SSIM. The comparison result of each window can therefore be used to obtain the coordinates of the differing pixel points within that window, and processing the whole image yields the coordinate set. Alternatively, the coordinate set may be obtained by searching for differing points in the execution result image and the reference image with a Normalized Cross Correlation matching algorithm (NCC) and recording their coordinates. The normalized cross-correlation matching algorithm computes the integral images of the two pictures and performs a normalized cross-correlation calculation on them, thereby obtaining the coordinates of the differing pixel points in the two images.
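The windowed comparison described above can be illustrated with a simplified sketch. The code below locates differing pixels between an execution result image and a reference image with a sliding window; it stands in for the full SSIM computation (which also weighs luminance, contrast, and structure), and all names, window sizes, and sample values are illustrative.

```python
# Simplified sketch: collect coordinates of differing pixels between an
# execution-result image and a reference image by sliding a window over
# both grayscale images (lists of rows). This stands in for the windowed
# SSIM comparison; a real SSIM also compares luminance/contrast/structure.

def diff_coordinates(result_img, ref_img, window=2, tol=0):
    """Return the set of (row, col) coordinates whose pixels differ."""
    coords = set()
    rows, cols = len(result_img), len(result_img[0])
    for r0 in range(0, rows, window):          # slide the window over the image
        for c0 in range(0, cols, window):
            for r in range(r0, min(r0 + window, rows)):
                for c in range(c0, min(c0 + window, cols)):
                    if abs(result_img[r][c] - ref_img[r][c]) > tol:
                        coords.add((r, c))
    return coords

reference = [[0, 0, 0, 0],
             [0, 0, 0, 0]]
executed  = [[0, 0, 255, 0],
             [0, 0, 255, 0]]
print(sorted(diff_coordinates(executed, reference)))  # → [(0, 2), (1, 2)]
```

In practice the per-window comparison results would come from the SSIM or NCC computation rather than a raw pixel difference, but the coordinate set they produce plays the same role.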
Moreover, grouping the coordinates of the differing pixel points means that the coordinates are partitioned, and the coordinates assigned to the same group form a region. The coordinates may be grouped according to a preset grouping rule in various specific ways to obtain the differing regions. For example, among the coordinates of the differing pixels, the coordinates belonging to the same service area may be grouped together to obtain the differing regions: if coordinate C1 and coordinate C2 both belong to the same service area, the image display area, then C1 and C2 may be placed in one group. Or, if a sliding window was used to determine the coordinates of the differing pixels, the coordinates belonging to the same sliding window may be grouped together to obtain the differing regions. Or, the pixel points in the execution result image and the reference image may be compared one by one and the coordinates of the differing pixel points recorded. Or, the coordinates that conform to an association rule may be grouped together, where the association rule may include: the coordinate lies on the outermost periphery of a preset area, and the interval between the coordinate and an adjacent coordinate is less than or equal to an interval threshold.
Any method capable of determining the coordinates of the pixel points having the difference between the execution result image and the reference image can be used in the present application, and this embodiment does not limit this.
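The grouping step above can be sketched as a flood fill over the differing coordinates: coordinates whose distance to a neighbour is at most an interval threshold fall into the same region, matching the association rule described earlier. The function and parameter names are illustrative, not taken from the patent.

```python
# Hedged sketch of the grouping step: partition differing-pixel coordinates
# into regions so that coordinates within `interval` of each other (Chebyshev
# distance) land in the same region, as in the association rule above.

def group_coordinates(coords, interval=1):
    """Group coordinates into regions by proximity flood fill."""
    coords = set(coords)
    regions = []
    while coords:
        seed = coords.pop()
        region, frontier = {seed}, [seed]
        while frontier:                      # expand the region over nearby coordinates
            r, c = frontier.pop()
            near = {p for p in coords
                    if abs(p[0] - r) <= interval and abs(p[1] - c) <= interval}
            coords -= near
            region |= near
            frontier.extend(near)
        regions.append(region)
    return regions

pixels = [(0, 0), (0, 1), (1, 1), (5, 5)]
print(len(group_coordinates(pixels)))  # → 2 (one cluster plus one lone pixel)
```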
Moreover, for each region with difference, obtaining the contour information of the region according to the coordinates of each pixel point on the region boundary may include: aiming at each region with difference, taking the coordinates of each pixel point on the region boundary as the contour information of the region; or, for each region with difference, determining coordinates which are separated from coordinates of each pixel point on the region boundary by a preset distance as contour information of the region; or, for each region with difference, according to the coordinates of each pixel point on the boundary of the region, determining a rectangular boundary or a circular boundary which is separated from the region by a preset distance, and determining each coordinate on the rectangular boundary or the circular boundary as the contour information of the region. Wherein, for any region with difference, the preset distance is used for expanding or contracting the boundary of the region.
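One of the contour options above, determining a rectangular boundary a preset distance outside the region, can be sketched as follows; the names and the sample region are illustrative assumptions.

```python
# Minimal sketch: derive a rectangular contour expanded by a preset distance
# from the coordinates of a differing region's pixel points. A negative
# preset_distance would contract the boundary instead.

def rect_contour(region, preset_distance=1):
    """Return (top, left, bottom, right) of the expanded bounding rectangle."""
    rows = [r for r, _ in region]
    cols = [c for _, c in region]
    return (min(rows) - preset_distance, min(cols) - preset_distance,
            max(rows) + preset_distance, max(cols) + preset_distance)

region = {(2, 3), (2, 4), (3, 3)}
print(rect_contour(region))  # → (1, 2, 4, 5)
```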
Furthermore, the specific manner of drawing the bounding box of each region in the execution result image according to the contour information of each region to obtain the annotation image may be various, and the following describes in detail in the form of an alternative embodiment.
In an optional implementation manner, the drawing a bounding box of each region in the execution result image according to the contour information of the region to obtain an annotated image may specifically include the following steps:
and for each region, connecting lines in the execution result image according to the contour information of the region to be used as a boundary frame of the region, so as to obtain an annotated image.
In an optional implementation manner, the drawing a bounding box of each region in the execution result image according to the contour information of the region to obtain an annotated image may specifically include the following steps:
calculating the area of a graph formed by the outline of each area according to the outline information of each area;
and drawing a bounding box of the region with the graph area reaching the preset threshold value in the execution result image to obtain the annotation image.
In a specific application, the area calculation parameters of the figure formed by each region's contour, such as side length or radius, can be determined from the region's contour information, and the area of the figure can then be calculated from these parameters. A figure area reaching the preset threshold may mean that the area is greater than or equal to the preset threshold. This avoids annotating differing regions that are very small or contained within another differing region, thereby reducing redundant annotation and improving the efficiency and intuitiveness of the annotated image. After obtaining the figure areas formed by the contours of the regions, the areas may also be sorted, and the bounding boxes of the top n regions by figure area drawn in the execution result image to obtain the annotated image, where n is an integer greater than or equal to 1 that can be set according to the annotation requirements.
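The filtering and ranking just described can be sketched for the rectangular-contour case; the function name, the rectangle representation, and the sample thresholds are illustrative assumptions.

```python
# Sketch of the area filter above: compute each region's rectangle area from
# its contour, keep regions whose area reaches the preset threshold, and
# optionally keep only the top-n regions by area.

def regions_to_annotate(contours, threshold, top_n=None):
    """contours: list of (top, left, bottom, right) rectangles."""
    kept = [c for c in contours
            if (c[2] - c[0]) * (c[3] - c[1]) >= threshold]   # drop tiny differences
    if top_n is not None:                                    # optionally rank by area
        kept.sort(key=lambda c: (c[2] - c[0]) * (c[3] - c[1]), reverse=True)
        kept = kept[:top_n]
    return kept

boxes = [(0, 0, 10, 10), (0, 0, 1, 1), (5, 5, 9, 9)]
print(regions_to_annotate(boxes, threshold=4))  # keeps the area-100 and area-16 boxes
```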
Illustratively, as shown in fig. 4, the regions in the execution result image whose figure area reaches the preset threshold include: the area where the game icon is located; the area where the service options (movies and tests) are located; the area where the icons in content C2 are located; the area where the reminder message is located; and the area where advertisement A is located. In this example rectangular boxes are used as bounding boxes, so bounding box B1 through bounding box B5 are marked in the execution result interface, yielding the annotated image shown in fig. 4. In a specific application, the line type of the bounding box may be a solid red line, and its shape may be a circle, an irregular shape, and so on; the dashed rectangles shown in fig. 4 are merely an example for ease of understanding, and this embodiment does not limit the line type or shape of the bounding box. In addition, referring to the different service areas shown in fig. 4, the service areas corresponding to the differing regions include a first menu area, a content area, and an advertisement area. Thus, in another implementation, the bounding boxes may include: a bounding box of the first menu area, a bounding box of the content area, and a bounding box of the advertisement area.
In an alternative embodiment, the test result includes a similarity;
correspondingly, the obtaining of the test result image based on the annotation image may specifically include the following steps:
and if the similarity is smaller than the similarity threshold, splicing at least one of the reference image and the execution result image with the marked image to obtain a test result image.
Illustratively, as shown in figs. 5a to 5c: stitching the reference image and the annotated image yields the test result image of fig. 5a; stitching the execution result image and the annotated image yields the test result image of fig. 5b; and stitching the reference image, the execution result image, and the annotated image yields the test result image of fig. 5c. In a specific application, the positions of the stitched images within the test result image can be adjusted as required. For example, in fig. 5a the reference image may be on the left and the annotated image on the right, or the reference image on top and the annotated image below, and so on; this embodiment is merely an example and is not limited thereto.
In the optional embodiment, at least one of the reference image and the execution result image is spliced with the annotation image to obtain the test result image, so that the test result can be directly and intuitively and quickly analyzed through the test result image.
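The conditional stitching above can be sketched with grayscale images represented as row lists; the similarity threshold, image contents, and function name are illustrative assumptions.

```python
# Hedged sketch of the stitching step: when the similarity is below the
# threshold, place the reference image and the annotated image side by side
# to form the test result image.

SIMILARITY_THRESHOLD = 0.9               # illustrative threshold

def stitch_horizontally(left, right):
    """Concatenate two images of equal height row by row."""
    return [lrow + rrow for lrow, rrow in zip(left, right)]

reference = [[0, 0], [0, 0]]
annotated = [[9, 9], [9, 9]]
similarity = 0.765625                    # e.g. the similarity from the comparison

if similarity < SIMILARITY_THRESHOLD:
    test_result_image = stitch_horizontally(reference, annotated)
print(test_result_image)  # → [[0, 0, 9, 9], [0, 0, 9, 9]]
```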
In an optional implementation manner, the user interface testing method provided in the embodiment of the present application may further include the following steps:
determining test result data based on the test result image and the test result;
displaying test result data and option information at a client; the option information is used for the user to select whether the execution result image meets the test requirement or not according to the test result data;
and receiving a selection result of the user on the option information, and determining whether the execution result image meets the test requirement according to the selection result.
In a specific application, determining the test result data based on the test result image and the test result may include: if the test result is a numerical value or a label, marking the test result in the test result image to obtain the test result data; or establishing a correspondence between the test result and the test result image and taking the associated structure as the test result data. Illustratively, as shown in fig. 6, the test result includes a similarity of 0.765625; the test result is therefore marked in the test result image, yielding the test result data shown in fig. 6. In addition, after the determination result of whether the execution result image meets the test requirement is obtained, the determination result may be marked in the test result data, or the test result data and the determination result may be stored in association, to facilitate subsequent viewing, analysis, tracing, and so on.
The optional embodiment can determine more intuitive test result data based on the test result image and the test result. And the test result data and the option information are displayed at the client, and the user selects whether the execution result image meets the test requirement or not for the test result data, so that whether the execution result image meets the test requirement or not can be more accurately determined.
In an optional implementation manner, the application program corresponds to a plurality of test tasks, and each test task includes at least one test case;
correspondingly, the executing the test case of the application program to obtain the execution result image of the test case may specifically include:
respectively executing each test case of each test task of the application program, and capturing a screenshot of a user interface corresponding to the test case in the application program to obtain an execution result image of the test case;
and aiming at any test task, obtaining all execution result images of the test task, and simultaneously comparing each execution result image of the test task with the reference image corresponding to the execution result image to obtain the test result corresponding to each execution result image.
In a specific application, Redis (Remote Dictionary Server), an open-source message queue component also called the remote dictionary service, is an open-source, network-capable key-value database that can run in memory or as a persisted log, provides APIs in multiple languages, and can implement communication between a master process and slave processes. Therefore, each test task of the application program can be regarded as a master process; a queue of picture comparison subtasks covering multiple test tasks is built with the Redis component, and the picture comparison process corresponding to each test case reads comparison subtasks from the queue concurrently, so that each execution result image of a test task is compared with its corresponding reference image concurrently. One test task may correspond to an interactive function of the user interface, such as a search function or a recommendation function performed through the user interface.
The optional embodiment widens the application range of the user interface test method through a plurality of test tasks, thereby improving the reliability of the user interface obtained by optimizing the test result, and improving the efficiency of the user interface test through the comparison of the pictures which are carried out concurrently.
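The concurrent comparison described above can be sketched with Python's standard `queue` and `threading` modules standing in for the Redis queue of picture-comparison subtasks (a real deployment would enqueue into and consume from Redis instead); the case identifiers and the placeholder similarity are illustrative.

```python
# Sketch of concurrent picture comparison: workers drain a shared queue of
# comparison subtasks, one per test case. The stdlib queue stands in for the
# Redis queue; the similarity value is a placeholder for the real comparison.
import queue
import threading

subtasks = queue.Queue()
for case_id in ["case-1", "case-2", "case-3", "case-4"]:
    subtasks.put(case_id)                 # one comparison subtask per test case

results = {}
lock = threading.Lock()

def compare_worker():
    while True:
        try:
            case_id = subtasks.get_nowait()
        except queue.Empty:
            return                        # queue drained: worker exits
        similarity = 1.0                  # placeholder for the real image comparison
        with lock:
            results[case_id] = similarity
        subtasks.task_done()

workers = [threading.Thread(target=compare_worker) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(sorted(results))  # → ['case-1', 'case-2', 'case-3', 'case-4']
```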
In an optional implementation manner, after obtaining the execution result image of the test case by the screenshot of the user interface corresponding to the test case in the application program, the user interface testing method provided in the embodiment of the present application may further include the following steps:
uploading the execution result image of the test case to a storage platform to obtain a request address of the execution result image;
correspondingly storing the task identification of the test task to which the test case belongs, the case identification of the test case and the request address in a database;
generating first aggregation information of each test task according to the corresponding storage of the test task;
correspondingly, the obtaining of the image of all execution results of any test task may specifically include the following steps:
searching request addresses of all execution result images of the test task from a database by using the first aggregation information of the test task;
and acquiring all execution result images of the test task from the storage platform according to the searched request address.
In a specific application, the database may be the local storage space of a client in the user interface test system, to improve lookup efficiency; or it may be a database independent of the user interface test system, to improve the disaster tolerance of the data. Moreover, images usually occupy a large amount of storage space, so this optional embodiment uploads the execution result images to a storage platform separate from the database, which can improve the performance of the client and the server in the user interface test system.
And, the first aggregation information may include: the corresponding relation among the task identification of the test task to which the test case belongs, the case identification of the test case and the request address; or, the storage address corresponding to the stored storage result carries the task identifier of the test task to which the test case belongs and the case identifier of the test case. In addition, the first aggregation information may further include information such as a test time and the number of execution result images.
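The correspondence records and first-aggregation lookup above can be sketched as follows; the record layout, identifiers, and request addresses are illustrative assumptions standing in for the real database and storage platform.

```python
# Minimal sketch of the first aggregation: each record associates a task
# identifier and case identifier with the request address returned by the
# storage platform, and aggregating by task id yields all request addresses
# of that task's execution result images.

database = [
    {"task_id": "T1", "case_id": "C1", "request_address": "https://store/img1.png"},
    {"task_id": "T1", "case_id": "C2", "request_address": "https://store/img2.png"},
    {"task_id": "T2", "case_id": "C3", "request_address": "https://store/img3.png"},
]

def first_aggregation(task_id):
    """Collect the request addresses of all execution result images of a task."""
    return [rec["request_address"] for rec in database if rec["task_id"] == task_id]

print(first_aggregation("T1"))  # → the two addresses stored for task T1
```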
In an alternative embodiment, any test case contains at least one assertion process; correspondingly, the user interface testing method provided by the embodiment of the application can further comprise the following steps:
aiming at each test case, generating a case report of the test case by using the assertion processing result of the test case;
correspondingly, after the user interface screenshot corresponding to the test case in the application program obtains the execution result image of the test case, the user interface testing method provided by the embodiment of the application program may further include the following steps:
correlating the execution result image of the test case with the case report, and storing the correlation result;
aiming at each test task, generating second aggregate information of the test task according to the case report and the associated result of each test case of the test task; the second aggregated information is used to find the execution result of the test task.
In a specific application, for each test case, generating the case report of the test case using its assertion processing results may include: summarizing the assertion processing results of the test case to obtain the case report. The association result may include: the correspondence between the execution result image of the test case and the case report; or the addresses at which the execution result image and the case report are stored in association; or an image formed by stitching the execution result image and the case report; and so on. Similar to the first aggregation information, the second aggregation information may include: the task identifier of the test task to which the test case corresponding to the case report belongs, the case identifier of the test case, and the association result. In addition, the second aggregation information may further include at least one of: the number of test cases meeting the test requirements, the number of test cases not meeting them, and the proportion of test cases meeting the test requirements, that is, the test success rate.
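The case report and second-aggregation step can be sketched as follows; the report structure, field names, and sample assertion results are illustrative assumptions.

```python
# Hedged sketch of the second aggregation: each case report summarizes a
# case's assertion processing results, and the aggregate records how many
# cases meet the test requirement (i.e. the test success rate).

def case_report(case_id, assertion_results):
    """Summarize a case's assertion results into a pass/fail report."""
    return {"case_id": case_id, "passed": all(assertion_results)}

def second_aggregation(task_id, reports):
    passed = sum(1 for r in reports if r["passed"])
    return {"task_id": task_id,
            "passed": passed,
            "failed": len(reports) - passed,
            "success_rate": passed / len(reports)}

reports = [case_report("C1", [True, True]),     # every assertion passed
           case_report("C2", [True, False])]    # one assertion failed
print(second_aggregation("T1", reports))  # success_rate → 0.5
```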
For ease of understanding, the above-described alternative embodiments regarding performing result storage, aggregate information generation, image comparison, and updating of the reference image are collectively described below in an exemplary form.
Exemplarily, as shown in fig. 7. Another embodiment of the present application provides a processing flow of a user interface testing method, including: the method comprises an image inserting sub-process triggered by a target client, an inserting task result sub-process triggered by the target client, an image comparison sub-process triggered by a server and a reference updating sub-process triggered by a user. Wherein:
the picture inserting sub-process triggered by the target client comprises the following steps: inserting information of each picture, generating and inserting aggregated information;
the task result inserting sub-process triggered by the target client comprises the following steps: searching the picture corresponding to each case by using the aggregated information inserted into the picture sub-process, and combining the test report result of each case with the picture to generate aggregated information;
the image comparison sub-process triggered by the server comprises the following steps: searching all execution result pictures by using the aggregated information from the task result inserting sub-process, calculating the similarity of each picture, generating an aggregated picture, and updating the result information of each case;
the user-triggered reference updating sub-process comprises the following steps: displaying the result information from the image comparison sub-process at the client, where the user confirms whether the test result meets the test requirement; if it meets the test requirement, updating the test result and updating the reference; and if it does not meet the test requirement, submitting a defect.
The embodiment of fig. 7 is similar to the above alternative embodiments regarding storing the execution results, generating the aggregation information, comparing the images, and updating the reference image; the common parts are not repeated here, and reference may be made to the descriptions of those alternative embodiments. The difference is that, for brevity, different expressions are used. Specifically: the embodiment of fig. 7 specifies the execution body that triggers each sub-process; the picture inserting sub-process corresponds to uploading the execution result images of the test cases and generating the first aggregation information; the task result inserting sub-process corresponds to generating the second aggregation information; the picture comparison sub-process corresponds to obtaining the test result image and determining the test result data based on the test result image and the test result; and the reference updating sub-process corresponds to receiving the user's selection of the option information, determining from the selection whether the execution result image meets the test requirement, and updating the reference image. The reference in fig. 7 corresponds to the reference image, and updating the test result in the reference updating sub-process corresponds to recording the user's confirmation result in the test result data.
In addition, to facilitate optimizing the interactive functions the application program performs through the user interface, if the execution result image does not meet the test requirement, a defect can be submitted for the interactive function corresponding to the execution result image. For example, the defect may be submitted in the TAPD platform.
Corresponding to the above method embodiment, the present application further provides an embodiment of a user interface testing apparatus, and fig. 8 shows a schematic structural diagram of a user interface testing apparatus provided in an embodiment of the present application. As shown in fig. 8, the apparatus includes:
an execution result image obtaining module 801 configured to execute a test case of an application program and obtain an execution result image of the test case;
a test result obtaining module 802 configured to compare the execution result image with a reference image corresponding to the test case to obtain a test result;
a reference image updating module 803 configured to update the reference image with the execution result image if it is determined according to the test result that the execution result image is different from the reference image and the execution result image meets the test requirement.
According to the scheme provided by the embodiment of the present application, requiring that the execution result image meets the test requirement ensures that the image used to update the reference image is reliable, and requiring that the execution result image differs from the reference image ensures that the updated reference image contains new content absent from the previous reference image, so that the updated reference image is better adapted to the current test requirement. Therefore, compared with a fixed preset reference image, the scheme of the present application ensures, through automatic updating of the reference image, that the reference image used in each user interface test is well adapted to the current test requirement, thereby improving the reliability of the user interface test.
In an optional implementation manner, the test result obtaining module 802 is further configured to:
respectively preprocessing the execution result image and the reference image to obtain a preprocessed execution result image and a preprocessed reference image; wherein the pre-processing comprises: at least one of binarization processing and image size adjustment;
acquiring the similarity between the preprocessed execution result image and the preprocessed reference image;
and obtaining the test result by utilizing the similarity.
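The preprocessing and comparison steps above can be illustrated with a simplified sketch: both grayscale images are binarized against a threshold, and the fraction of matching pixels is taken as the similarity. The binarization threshold and the pixel-match measure are illustrative simplifications, not the patent's prescribed metric.

```python
# Simplified sketch of module 802's preprocessing and comparison: binarize
# the execution result image and the reference image, then use the fraction
# of matching pixels as the similarity. (Image size adjustment, the other
# preprocessing option, is omitted by assuming equal sizes.)

def binarize(img, threshold=128):
    return [[1 if px >= threshold else 0 for px in row] for row in img]

def similarity(result_img, ref_img):
    a, b = binarize(result_img), binarize(ref_img)
    total = sum(len(row) for row in a)
    same = sum(1 for ra, rb in zip(a, b)
                 for pa, pb in zip(ra, rb) if pa == pb)
    return same / total

reference = [[0, 255], [255, 255]]
executed  = [[0, 255], [0, 255]]
print(similarity(executed, reference))  # → 0.75
```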
In an optional implementation manner, the test result obtaining module 802 is further configured to, after the comparing the execution result image with the reference image corresponding to the test case to obtain a test result, determine an area where a difference exists between the execution result image and the reference image;
marking the areas with the difference in the execution result image to obtain a marked image;
and obtaining a test result image based on the annotation image.
In an optional implementation manner, the test result obtaining module 802 is further configured to:
determining the coordinates of pixel points with difference between the execution result image and the reference image;
grouping the coordinates according to a preset grouping rule to obtain the areas with the differences;
aiming at each region with difference, obtaining contour information of the region according to coordinates of each pixel point on the boundary of the region;
and drawing a boundary frame of each region in the execution result image according to the contour information of each region to obtain an annotated image.
In an optional implementation manner, the test result obtaining module 802 is further configured to:
calculating the area of a graph formed by the outline of each area according to the outline information of each area;
and drawing a boundary frame of an area with the graph area reaching a preset threshold value in the execution result image to obtain an annotated image.
In an alternative embodiment, the test results include similarity;
correspondingly, the test result obtaining module 802 is further configured such that the obtaining of the test result image based on the annotation image comprises:
and if the similarity is smaller than a similarity threshold value, splicing at least one of the reference image and the execution result image with the annotation image to obtain a test result image.
In an alternative embodiment, the apparatus further comprises: a presentation module configured to:
determining test result data based on the test result image and the test result;
displaying the test result data and the option information at the client; the option information is used for selecting whether the execution result image meets the test requirement or not by the user according to the test result data;
the test result obtaining module 802 is further configured to receive a selection result of the option information from the user, and determine whether the execution result image meets the test requirement according to the selection result.
In an optional implementation manner, the application program corresponds to a plurality of test tasks, and each test task includes at least one test case;
the test result obtaining module 802 is further configured to:
respectively executing each test case of each test task of an application program, and capturing a screenshot of a user interface corresponding to the test case in the application program to obtain an execution result image of the test case;
and aiming at any test task, obtaining all execution result images of the test task, and simultaneously comparing each execution result image of the test task with the reference image corresponding to the execution result image to obtain the test result corresponding to each execution result image.
In an alternative embodiment, the apparatus further comprises a storage module configured to:
after the screenshot of the user interface corresponding to the test case in the application program is taken to obtain the execution result image of the test case, upload the execution result image of the test case to a storage platform to obtain a request address of the execution result image;
store, in association in a database, the task identifier of the test task to which the test case belongs, the case identifier of the test case, and the request address;
for each test task, generate first aggregation information of the test task according to the stored association for the test task.
The test result acquisition module 802 is further configured to:
look up, in the database, the request addresses of all execution result images of the test task by using the first aggregation information of the test task;
and acquire all execution result images of the test task from the storage platform according to the request addresses found.
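The storage and first-aggregation lookup described above can be sketched with an in-memory SQLite database. The table and column names, and the example addresses, are assumptions for illustration; the embodiment does not specify a schema:

```python
import sqlite3

# Hypothetical schema: the embodiment does not name its tables or columns.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE execution_images (
    task_id TEXT, case_id TEXT, request_address TEXT)""")

def store_result(task_id, case_id, request_address):
    """Store the task identifier, case identifier, and request
    address of one execution result image in association."""
    conn.execute("INSERT INTO execution_images VALUES (?, ?, ?)",
                 (task_id, case_id, request_address))

def addresses_for_task(task_id):
    """First-aggregation lookup: request addresses of all execution
    result images belonging to one test task."""
    rows = conn.execute(
        "SELECT request_address FROM execution_images "
        "WHERE task_id = ? ORDER BY rowid", (task_id,))
    return [addr for (addr,) in rows]
```

The returned addresses would then be used to fetch the images themselves from the storage platform.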
In an alternative embodiment, any test case contains at least one assertion; the apparatus further comprises:
a case report generating module configured to generate, for each test case, a case report from the assertion results of the test case;
the storage module is further configured to:
after the screenshot of the user interface corresponding to the test case in the application program is taken to obtain the execution result image of the test case, associate the execution result image of the test case with its case report and store the association result;
for each test task, generate second aggregation information of the test task according to the case reports and association results of the test cases of the test task, the second aggregation information being used to look up the execution result of the test task.
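The association of execution result images with case reports, and the second aggregation information built from it, can be sketched as follows; the dictionary layout is an assumption for illustration only:

```python
def build_second_aggregation(task_id, case_reports, image_addresses):
    """Associate each test case's execution result image with its case
    report, then aggregate the associations under the task identifier
    so the whole task's execution result can be looked up as one unit."""
    associations = {
        case_id: {"report": case_reports[case_id],
                  "image": image_addresses[case_id]}
        for case_id in case_reports
    }
    return {"task_id": task_id, "cases": associations}
```

Looking up the aggregate by task identifier then yields every case's report together with its screenshot, without separate per-case queries.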
The above is an illustrative scheme of the user interface testing apparatus of this embodiment. It should be noted that the technical solution of the user interface testing apparatus and the technical solution of the user interface testing method described above belong to the same concept; for details not described in the technical solution of the apparatus, refer to the description of the user interface testing method above.
Fig. 9 is a block diagram of a computing device 900 according to an embodiment of the present application. The components of the computing device 900 include, but are not limited to, a memory 910 and a processor 920. The processor 920 is coupled to the memory 910 via a bus 930, and a database 950 stores data.
The computing device 900 also includes an access device 940 that enables the computing device 900 to communicate via one or more networks 960. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. The access device 940 may include one or more network interfaces of any type, wired or wireless, such as a Network Interface Controller (NIC), an IEEE 802.11 Wireless Local Area Network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so on.
In one embodiment of the present application, the above components of the computing device 900, as well as other components not shown in Fig. 9, may also be connected to each other, for example by a bus. It should be understood that the block diagram of Fig. 9 is shown for purposes of example only and does not limit the scope of the present application; those skilled in the art may add or replace components as desired.
The computing device 900 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., a tablet, personal digital assistant, laptop, notebook, or netbook), a mobile phone (e.g., a smartphone), a wearable computing device (e.g., a smart watch or smart glasses), another type of mobile device, or a stationary computing device such as a desktop computer or PC. The computing device 900 may also be a mobile or stationary server.
The processor 920, when executing instructions, implements the steps of the user interface testing method described above.
The above is an illustrative scheme of a computing device of this embodiment. The technical solution of the computing device and that of the user interface testing method described above belong to the same concept; for details not described in the technical solution of the computing device, refer to the description of the user interface testing method above.
An embodiment of the present application further provides a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the user interface testing method described above.
The above is an illustrative scheme of a computer-readable storage medium of this embodiment. The technical solution of the storage medium and that of the user interface testing method belong to the same concept; for details not described in the technical solution of the storage medium, refer to the description of the user interface testing method above.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code: a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
It should be noted that, for simplicity, the above method embodiments are described as a series of acts, but those skilled in the art will appreciate that the present application is not limited by the order of acts described, as some steps may be performed in other orders or simultaneously. Those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that the acts and modules involved are not necessarily required by the present application.
In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, refer to the related descriptions of the other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in explaining the application. The alternative embodiments are not exhaustive and do not limit the invention to the precise forms described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical applications, thereby enabling others skilled in the art to understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.

Claims (14)

1. A method for user interface testing, the method comprising:
executing a test case of an application program, and acquiring an execution result image of the test case;
comparing the execution result image with a reference image corresponding to the test case to obtain a test result;
and if the execution result image is determined, according to the test result, to be different from the reference image and the execution result image meets the test requirement, updating the reference image with the execution result image.
2. The method according to claim 1, wherein after the comparing the execution result image with the reference image corresponding to the test case to obtain a test result, the method further comprises:
determining regions where the execution result image differs from the reference image;
marking the regions where differences exist in the execution result image to obtain an annotated image;
and obtaining a test result image based on the annotated image.
3. The method of claim 2, wherein the determining regions where the execution result image differs from the reference image comprises:
determining the coordinates of the pixel points at which the execution result image differs from the reference image;
and grouping the coordinates according to a preset grouping rule to obtain the regions where differences exist;
the marking the regions where differences exist in the execution result image to obtain an annotated image comprises:
for each region where a difference exists, obtaining contour information of the region according to the coordinates of the pixel points on the boundary of the region;
and drawing a bounding box of each region in the execution result image according to the contour information of the region to obtain an annotated image.
4. The method according to claim 3, wherein the drawing a bounding box of each region in the execution result image according to the contour information of the region to obtain an annotated image comprises:
calculating, for each region, the area of the figure formed by the contour of the region according to the contour information of the region;
and drawing, in the execution result image, a bounding box around each region whose figure area reaches a preset threshold, so as to obtain an annotated image.
5. The method of claim 2, wherein the test result includes a similarity;
the obtaining a test result image based on the annotated image comprises:
if the similarity is smaller than a similarity threshold, stitching at least one of the reference image and the execution result image together with the annotated image to obtain a test result image.
6. The method of claim 2, further comprising:
determining test result data based on the test result image and the test result;
displaying the test result data and option information at a client, the option information enabling a user to select, based on the test result data, whether the execution result image meets the test requirement;
and receiving the user's selection of the option information, and determining whether the execution result image meets the test requirement according to the selection.
7. The method of claim 1, wherein the application program corresponds to a plurality of test tasks, each test task including at least one test case;
the executing a test case of an application program and acquiring an execution result image of the test case comprises:
executing each test case of each test task of the application program, and taking a screenshot of the user interface corresponding to the test case in the application program to obtain an execution result image of the test case;
and for any test task, obtaining all execution result images of the test task, and comparing, in parallel, each execution result image of the test task with its corresponding reference image to obtain the test result corresponding to each execution result image.
8. The method of claim 7, wherein after obtaining the execution result image of the test case from the screenshot of the user interface corresponding to the test case in the application program, the method further comprises:
uploading the execution result image of the test case to a storage platform to obtain a request address of the execution result image;
storing, in association in a database, the task identifier of the test task to which the test case belongs, the case identifier of the test case, and the request address;
for each test task, generating first aggregation information of the test task according to the stored association for the test task;
the obtaining all execution result images of any test task comprises:
looking up, in the database, the request addresses of all execution result images of the test task by using the first aggregation information of the test task;
and acquiring all execution result images of the test task from the storage platform according to the request addresses found.
9. The method of claim 7 or 8, wherein any test case contains at least one assertion; the method further comprises:
for each test case, generating a case report of the test case from the assertion results of the test case;
after the screenshot of the user interface corresponding to the test case in the application program is taken to obtain the execution result image of the test case, the method further comprises:
associating the execution result image of the test case with the case report, and storing the association result;
for each test task, generating second aggregation information of the test task according to the case reports and association results of the test cases of the test task, the second aggregation information being used to look up the execution result of the test task.
10. The method according to claim 1, wherein the comparing the execution result image with the reference image corresponding to the test case to obtain the test result comprises:
preprocessing the execution result image and the reference image respectively to obtain a preprocessed execution result image and a preprocessed reference image, wherein the preprocessing comprises at least one of binarization and image resizing;
acquiring the similarity between the preprocessed execution result image and the preprocessed reference image;
and obtaining the test result by using the similarity.
11. A user interface testing apparatus, comprising:
the execution result image acquisition module is configured to execute a test case of an application program and acquire an execution result image of the test case;
the test result acquisition module is configured to compare the execution result image with a reference image corresponding to the test case to obtain a test result;
and a reference image updating module configured to update the reference image with the execution result image if the execution result image is determined, according to the test result, to be different from the reference image and the execution result image meets the test requirement.
12. A user interface testing system, comprising: a client and a server;
the client is configured to execute a test case of an application program, acquire an execution result image of the test case and send the execution result image to the server;
the server is configured to receive the execution result image and compare the execution result image with a reference image corresponding to the test case to obtain a test result; and if the execution result image is determined, according to the test result, to be different from the reference image and the execution result image meets the test requirement, update the reference image with the execution result image.
13. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1-10 when executing the instructions.
14. A computer-readable storage medium storing computer instructions, which when executed by a processor, perform the steps of the method of any one of claims 1 to 10.
CN202110826907.3A 2021-07-21 2021-07-21 User interface testing method and device Pending CN113468066A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110826907.3A CN113468066A (en) 2021-07-21 2021-07-21 User interface testing method and device

Publications (1)

Publication Number Publication Date
CN113468066A 2021-10-01

Family

ID=77881636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110826907.3A Pending CN113468066A (en) 2021-07-21 2021-07-21 User interface testing method and device

Country Status (1)

Country Link
CN (1) CN113468066A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104978270A (en) * 2015-07-03 2015-10-14 上海触乐信息科技有限公司 Automatic software testing method and apparatus
CN110737599A (en) * 2019-10-18 2020-01-31 付彪 Front-end automatic regression testing system and method based on picture comparison technology
CN111881054A (en) * 2020-08-04 2020-11-03 携程计算机技术(上海)有限公司 User interface automation test method, system, equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115357501A (en) * 2022-08-24 2022-11-18 中国人民解放军32039部队 Automatic testing method and system for space flight measurement and control software
CN115357501B (en) * 2022-08-24 2024-04-05 中国人民解放军32039部队 Automatic testing method and system for aerospace measurement and control software

Similar Documents

Publication Publication Date Title
EP3692438B1 (en) Automatic generation of a graphic user interface (gui) based on a gui screen image
CN108229485B (en) Method and apparatus for testing user interface
WO2021203863A1 (en) Artificial intelligence-based object detection method and apparatus, device, and storage medium
WO2020041399A1 (en) Image processing method and apparatus
CN111160569A (en) Application development method and device based on machine learning model and electronic equipment
KR102002024B1 (en) Method for processing labeling of object and object management server
CN111199541A (en) Image quality evaluation method, image quality evaluation device, electronic device, and storage medium
US20230066504A1 (en) Automated adaptation of video feed relative to presentation content
EP3852007A2 (en) Method, apparatus, electronic device, readable storage medium and program for classifying video
US11948385B2 (en) Zero-footprint image capture by mobile device
CN112203150A (en) Time-consuming acquisition method, device, equipment and computer-readable storage medium
CN115496820A (en) Method and device for generating image and file and computer storage medium
CN108665769B (en) Network teaching method and device based on convolutional neural network
CN110909768A (en) Method and device for acquiring marked data
CN113468066A (en) User interface testing method and device
KR102086600B1 (en) Apparatus and method for providing purchase information of products
CN111598128A (en) Control state identification and control method, device, equipment and medium of user interface
CN115631374A (en) Control operation method, control detection model training method, device and equipment
CN115756461A (en) Annotation template generation method, image identification method and device and electronic equipment
CN115376137A (en) Optical character recognition processing and text recognition model training method and device
CN114842476A (en) Watermark detection method and device and model training method and device
CN113255819A (en) Method and apparatus for identifying information
CN112308074A (en) Method and device for generating thumbnail
US20230260006A1 (en) System and method for searching image of goods
CN116363393A (en) Method for judging similarity of graphical interface and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination