CN111026644B - Operation result labeling method and device, storage medium and electronic equipment - Google Patents

Operation result labeling method and device, storage medium and electronic equipment

Info

Publication number
CN111026644B
CN111026644B
Authority
CN
China
Prior art keywords
target object
target
operation result
result image
information
Prior art date
Legal status
Active
Application number
CN201911143643.0A
Other languages
Chinese (zh)
Other versions
CN111026644A (en)
Inventor
殷坤
纪勇
Current Assignee
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date
Filing date
Publication date
Application filed by Neusoft Corp
Priority to CN201911143643.0A
Publication of CN111026644A
Application granted
Publication of CN111026644B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites
    • G06F 11/3692 Test management for test results analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to the technical field of terminals and provides a method, an apparatus, a storage medium and an electronic device for labeling operation results. The method includes: after a terminal executes a target operation, obtaining an operation result image displayed on a display interface of the terminal; judging whether a target object in the operation result image is wrong; and, if the target object is wrong, labeling the target object in the operation result image. With the method and the device, the wrong target object in the operation result image can be labeled automatically while the terminal is being tested, which effectively increases the amount of information in the operation result, reduces the testing workload, and allows problems to be found quickly and intuitively.

Description

Operation result labeling method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the technical field of terminals, and in particular, to a method and an apparatus for labeling operation results, a storage medium and an electronic device.
Background
With the continuous development of computer technology and software development technology, the functions of terminals have become increasingly diverse, and various user demands can be met by installing different application programs (APPs) on a terminal. During the development of an application program, extensive testing is required, for example to verify that the image displayed on the display interface is correct while the application program runs on the terminal. Typically, after a specified operation is executed, the result image on the display interface is captured as the operation result and used as a reference when testers later perform defect analysis. However, the result image contains little information, so problems are easily missed; finding them depends on the testers' own working experience, and the result image has to be labeled manually at a later stage, which is both inaccurate and inefficient.
Disclosure of Invention
The purpose of the present disclosure is to provide a method, an apparatus, a storage medium and an electronic device for labeling operation results, so as to solve the problems in the prior art that errors in the operation result have to be labeled manually, which is inefficient and error-prone.
To achieve the above object, according to a first aspect of embodiments of the present disclosure, there is provided a method for labeling an operation result, the method including:
after a terminal executes target operation, acquiring an operation result image displayed on a display interface of the terminal;
judging whether a target object in the operation result image is wrong;
and if the target object is wrong, marking the target object in the operation result image.
Optionally, labeling the target object in the operation result image includes:
and generating annotation information in a target position area corresponding to the target object in the operation result image.
Optionally, the target object error includes the target object not being present; in the operation result image, generating annotation information in a target position area corresponding to the target object comprises the following steps:
determining specified position information of the target object, wherein the specified position information is the position information on the display interface when the target object is correctly displayed after the target operation is executed;
taking a preset area corresponding to the specified position information in the operation result image as the target position area;
and generating first labeling information in the target position area.
Optionally, the target object error includes that the target object exists, and a target attribute of the target object is different from a preset attribute; in the operation result image, generating annotation information in a target position area corresponding to the target object comprises the following steps:
determining current position information of the target object in the operation result image;
taking a preset area corresponding to the current position information as the target position area;
and generating second labeling information in the target position area.
Optionally, in the operation result image, generating annotation information in a target position area corresponding to the target object includes:
acquiring a first color of the target position area;
determining a second color different from the first color;
and generating the labeling information of the second color in the target position area.
Optionally, labeling the target object in the operation result image further includes:
and generating annotation information in a target position area corresponding to the target object in the operation result image.
According to a second aspect of embodiments of the present disclosure, there is provided an apparatus for labeling operation results, the apparatus including:
the acquisition module is used for acquiring an operation result image on a display interface of the terminal after the terminal executes target operation;
the judging module is used for judging whether the target object in the operation result image is wrong or not;
and the labeling module is used for labeling the target object in the operation result image if the target object is wrong.
Optionally, the labeling module is configured to:
and generating annotation information in a target position area corresponding to the target object in the operation result image.
Optionally, the target object error includes the target object not being present; the labeling module comprises:
the position determining sub-module is used for determining specified position information of the target object, wherein the specified position information is the position information on the display interface when the target object is correctly displayed after the target operation is executed;
the region determination submodule is used for taking a preset region corresponding to the specified position information in the operation result image as the target position region;
and the first labeling sub-module is used for generating first labeling information in the target position area.
Optionally, the target object error includes that the target object exists, and a target attribute of the target object is different from a preset attribute; the labeling module comprises:
a position determining sub-module, configured to determine current position information of the target object in the operation result image;
the region determination submodule is used for taking a preset region corresponding to the current position information as the target position region;
and the first labeling sub-module is used for generating second labeling information in the target position area.
Optionally, the labeling module includes:
the color determination submodule is used for acquiring a first color of the target position area;
a color determination sub-module further for determining a second color different from the first color;
and the second labeling sub-module is used for generating the labeling information of the second color in the target position area.
Optionally, the labeling module is further configured to:
and generating annotation information in a target position area corresponding to the target object in the operation result image.
According to a third aspect of the disclosed embodiments, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the steps of the method of the first aspect of the disclosed embodiments.
According to a fourth aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of the first aspect of the embodiments of the present disclosure.
With the above technical solution, after the terminal executes the target operation, the operation result image displayed on the display interface of the terminal is first obtained; the target object in the operation result image is then examined to determine whether it is wrong; and, if the target object is wrong, the target object is labeled in the operation result image. In this way, during testing of the terminal, the wrong target object in the operation result image can be labeled automatically, which effectively increases the amount of information contained in the operation result, improves the efficiency and accuracy of the testing work, and allows problems to be found quickly and intuitively.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification. They illustrate the disclosure and, together with the description, serve to explain the disclosure without limiting it. In the drawings:
FIG. 1 is a flowchart illustrating a method of labeling operation results, according to an exemplary embodiment;
FIG. 2a is a flowchart illustrating another method of labeling operation results, according to an example embodiment;
FIG. 2b is a schematic diagram of an operation result image, according to an example embodiment;
FIG. 2c is a schematic diagram of an operation result image, according to an example embodiment;
FIG. 3a is a flowchart illustrating another method of labeling operation results, according to an exemplary embodiment;
FIG. 3b is a schematic diagram of an operation result image, according to an example embodiment;
FIG. 4 is a flowchart illustrating another method of labeling operation results, according to an example embodiment;
FIG. 5 is a block diagram of an apparatus for labeling operation results, according to an exemplary embodiment;
FIG. 6 is a block diagram of another apparatus for labeling operation results, according to an exemplary embodiment;
FIG. 7 is a block diagram of another apparatus for labeling operation results, according to an exemplary embodiment;
FIG. 8 is a block diagram of an electronic device, according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
Before introducing the method, the apparatus, the storage medium and the electronic device for labeling operation results provided by the present disclosure, the application scenario involved in the various embodiments of the present disclosure is first described. The application scenario may be any terminal equipped with a display screen capable of presenting a display interface, for example a mobile terminal such as a smartphone, a tablet computer, a smart television, a smart watch, a PDA (Personal Digital Assistant) or a portable computer, or a fixed terminal such as a desktop computer.
FIG. 1 is a flowchart illustrating a method for labeling operation results according to an exemplary embodiment. As shown in FIG. 1, the method includes the following steps.
step 101, after the terminal executes the target operation, an operation result image displayed on a display interface of the terminal is obtained.
For example, a terminal may have a plurality of applications installed on it in advance to implement different functions. When the target operation is executed on the terminal, a corresponding operation result image is displayed on the display interface of the terminal. The target operation may be an operation performed on any one of the plurality of applications. Taking the testing of an application program as an example, the target operation may be: opening the application, using various functions of the application, selecting various tab interfaces of the application to view, setting attributes of the application, and so on. The target operation may be performed manually by a user of the terminal or automatically by an automated test program preset on the terminal. After the terminal executes the target operation, the current image on the display interface is captured as the operation result image corresponding to the target operation. For example, after the automated test program detects that the target operation has been performed, it takes a screenshot of the current display interface to obtain the operation result image.
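As an illustration only (the disclosure does not prescribe any tooling, and the function names below are assumptions), an automated test script for an Android terminal might capture the operation result image right after the target operation completes, for example with the standard adb command-line tool:

```python
import subprocess
from datetime import datetime

def capture_operation_result(perform_target_operation, device_serial=None):
    """Run the target operation, then screenshot the terminal's display interface."""
    perform_target_operation()  # hypothetical hook, e.g. a tap driven by a test framework
    cmd = ["adb"]
    if device_serial:
        cmd += ["-s", device_serial]
    cmd += ["exec-out", "screencap", "-p"]  # stream the current screen as PNG bytes
    png_bytes = subprocess.run(cmd, check=True, capture_output=True).stdout
    path = f"operation_result_{datetime.now():%Y%m%d_%H%M%S}.png"
    with open(path, "wb") as f:
        f.write(png_bytes)
    return path
```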
Step 102, judging whether the target object in the operation result image is wrong.
Step 103, if the target object is wrong, marking the target object in the operation result image.
For example, the operation result image may include one or more target objects. A target object may be understood as any of various kinds of objects, such as pictures, text, icons or symbols, displayed by the application program on the display interface, and each target object may have one or more corresponding attributes, such as its numerical value, display position, display color or display size. Whether the target object is wrong is judged from the operation result image. If the target object is wrong, the target object is labeled in the operation result image; if it is not wrong, it is not labeled. The target object can be labeled in various ways, for example by framing it with a labeling frame, by marking it with a line, or by using an arrow pointing to the target object.
It should be noted that the target object error may be of two kinds. One is that the target object does not exist, that is, the target object should be displayed in the operation result image but is not displayed. The other is that the target object exists but its target attribute differs from the preset attribute that should be displayed, that is, the target object is shown in the operation result image but its display style or displayed value is inconsistent with the expected style or value.
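The two kinds of error can be captured in a small decision helper; the sketch below uses a hypothetical data model (TargetObject, error_kind and the field names are assumptions, not taken from the disclosure):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TargetObject:
    name: str
    present: bool                       # was the object found in the operation result image?
    target_attr: Optional[str] = None   # attribute actually displayed
    preset_attr: Optional[str] = None   # attribute that should be displayed

def error_kind(obj: TargetObject) -> Optional[str]:
    """Return which of the two error kinds applies, or None if the object is correct."""
    if not obj.present:
        return "object_missing"          # first kind: the target object does not exist
    if obj.preset_attr is not None and obj.target_attr != obj.preset_attr:
        return "attribute_mismatch"      # second kind: target attribute differs from preset
    return None
```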
In summary, after the terminal executes the target operation, the present disclosure first obtains the operation result image displayed on the display interface of the terminal, then examines the target object in the operation result image to determine whether it is wrong, and, if the target object is wrong, labels the target object in the operation result image. In this way, during testing of the terminal, the wrong target object in the operation result image can be labeled automatically, which effectively increases the amount of information contained in the operation result, improves the efficiency and accuracy of the testing work, and allows problems to be found quickly and intuitively.
In a specific application scenario, the implementation manner of step 103 may be:
and generating annotation information in a target position area corresponding to the target object in the operation result image.
For example, the target object may be labeled in the operation result image by finding the target position area corresponding to the target object in the operation result image and generating labeling information in that area. The target position area may be a circular or rectangular area of a preset size centered on the target object, or an area of a preset size at a preset distance (e.g., 1 cm) from the target object. The labeling information may be labeling frames of various shapes (such as circles or rectangles), lines of various types (such as underlines, strikethroughs or wavy lines), arrows pointing to the target object, and the like.
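For illustration, the two ways of deriving a rectangular target position area mentioned above could be computed as follows; the pixel sizes and the 96 dpi figure used to approximate 1 cm are assumptions, and coordinates are (left, top, right, bottom):

```python
def centered_area(center_xy, size=(200, 80)):
    """Target position area of a preset size centered on the target object."""
    cx, cy = center_xy
    w, h = size
    return (cx - w // 2, cy - h // 2, cx + w // 2, cy + h // 2)

def offset_area(object_box, distance_px=38, size=(200, 80)):
    """Area of a preset size placed at a preset distance to the right of the object
    (38 px approximates 1 cm at an assumed 96 dpi)."""
    left, top, right, bottom = object_box
    return (right + distance_px, top, right + distance_px + size[0], top + size[1])
```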
Further, in addition to the labeling information generated in the target position area, annotation information may be generated near that labeling information in the operation result image after the target object has been labeled.
In order to further increase the amount of information contained in the operation result image, so that testers can quickly and intuitively find the problem, annotation information may be generated in the target position area corresponding to the erroneous target object. The annotation information may include, for example, the target operation, the current time and the error type, and may be text or of another type. When the target object does not exist, the error type may include the name of the target object; when the target object exists but its target attribute is wrong, the error type may include the name of the target object and the name of the target attribute. Thus, when testers see the annotation information in the operation result image, they can quickly and intuitively determine which kind of error has occurred.
For example, when a certain application program on the terminal is tested, an operation of opening the application program is performed, and a "search field" is not displayed in the operation result image shown on the display interface of the terminal; that is, the target object is the "search field" and the error type is that the target object does not exist. Labeling information may then be generated in the target position area corresponding to the "search field", for example by framing the position where the "search field" should be displayed with a labeling frame. Corresponding annotation information can then be generated in the labeling frame: "target operation: opening operation; current time: 2019/07/22; error type: the 'search field' object does not exist". In this way the operation result image not only marks the target object in error but can also include annotation information about the error, so that testers can quickly and intuitively find the problem from the annotation information.
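A sketch of how such a labeling frame and annotation text could be rendered with the Pillow imaging library (the library choice, coordinates and strings are assumptions, not part of the disclosure):

```python
from datetime import datetime
from PIL import Image, ImageDraw

def annotate_missing_object(image_path, box, target_operation, object_name, out_path):
    """Frame the expected position of a missing object and write the annotation text."""
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    draw.rectangle(box, outline=(255, 0, 0), width=4)      # labeling frame
    text = (f"target operation: {target_operation}\n"
            f"current time: {datetime.now():%Y/%m/%d}\n"
            f"error type: the '{object_name}' object does not exist")
    draw.multiline_text((box[0] + 8, box[1] + 8), text, fill=(255, 0, 0))
    img.save(out_path)

# e.g. annotate_missing_object("result.png", (40, 120, 680, 260),
#                              "opening operation", "search field", "labeled.png")
```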
FIG. 2a is a flowchart illustrating another method for labeling operation results according to an exemplary embodiment. As shown in FIG. 2a, the target object error includes the target object not being present. In this case, step 103 may include:
step 1031, determining the designated position information of the target object, where the designated position information is the position information on the display interface when the target object is correctly displayed after the target operation is performed.
Step 1032, taking the corresponding preset area of the designated position information in the operation result image as the target position area.
Step 1033, generating first annotation information in the target location area.
For example, when the target object does not exist, the specified position information of the target object may be determined first. The specified position information may be understood as the position information at which the target object should be displayed on the display interface, that is, the expected position of the target object on the display interface if the execution result were correct after the target operation is performed. The specified position information may be a coordinate range (e.g., the coordinate values of the upper-left, lower-left, upper-right and lower-right corners). Next, the preset area corresponding to the specified position information in the operation result image is taken as the target position area. The target position area may be the coordinate range indicated by the specified position information, or an area near that coordinate range. Finally, first labeling information is generated in the target position area. Taking the operation result image obtained in step 101 shown in FIG. 2b as an example, "recently browsed" should be displayed on the right side of "live hotel", but a blank is displayed in the operation result image; that is, the target object is "recently browsed" and it does not exist in the operation result image. It may then be determined that, if "recently browsed" were correctly displayed, its specified position information on the display interface would be the lower-left and upper-right corner coordinates of "recently browsed". The preset area corresponding to this specified position information in the operation result image is taken as the target position area; for example, the rectangular area determined by the lower-left and upper-right corner coordinates of "recently browsed" may be used as the target position area, and the first labeling information is generated in this rectangular area, as shown in FIG. 2c.
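As a sketch (the layout table, object names and margin below are hypothetical), the specified position information of an absent target object could be looked up from a table recorded for a correctly rendered interface, and the corresponding preset area then serves as the target position area for the first labeling information:

```python
# Hypothetical expected-layout table recorded for a correctly rendered interface;
# values are (left, top, right, bottom) pixel coordinates, i.e. specified position information.
EXPECTED_LAYOUT = {
    "recently browsed": (420, 300, 660, 360),
    "search field": (40, 120, 680, 180),
}

def target_area_for_missing(object_name, margin=10):
    """Use the specified position of an absent object as the target position area."""
    left, top, right, bottom = EXPECTED_LAYOUT[object_name]
    # Expand slightly so the first labeling information stays visible even if the
    # real interface is shifted by a few pixels on this terminal model.
    return (left - margin, top - margin, right + margin, bottom + margin)
```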
FIG. 3a is a flowchart of another method for labeling operation results according to an exemplary embodiment. As shown in FIG. 3a, in another implementation the target object error includes that the target object exists and the target attribute of the target object is different from the preset attribute; that is, the operation result image shows the target object, but the target attribute of the displayed target object differs from the preset attribute. Accordingly, step 103 may include the following steps.
step 1034, determining current position information of the target object in the operation result image.
Here, since the display interface sizes of terminals of different models may differ, the display positions of the same application on terminals of different models may also differ. There may therefore be a deviation between the current position information of the target object in the operation result image and the specified position information of the target object (i.e., the expected position information of the target object on the display interface), so the current position information indicates the position of the target object more accurately than the specified position information does. The current position information may be a coordinate range (e.g., the coordinate values of the upper-left, lower-left, upper-right and lower-right corners).
Step 1035, taking the preset area corresponding to the current position information as the target position area.
Step 1036, generating second labeling information in the target position area.
The target position area may be the coordinate range indicated by the current position information, or an area near that coordinate range. Taking the operation result image shown in FIG. 2b as an example, the value of the target object "integral" (i.e., the target attribute) is displayed incorrectly: the correct value (i.e., the preset attribute) should be 5800, but 5747 is displayed in the operation result image. The current position information of "integral" may then be determined first, for example the lower-left and upper-right corner coordinates of "integral"; the rectangular area determined by those coordinates is taken as the target position area, and second labeling information is generated in that rectangular area. Similarly, the display content (i.e., the target attribute) of the target object "common information" is displayed incorrectly: the correct display content (i.e., the preset attribute) should be the information set by the user, but the default content is displayed in the operation result image. The current position information of "common information" is then determined first, for example the upper-left and lower-right corner coordinates of "common information"; the rectangular area determined by those coordinates is taken as the target position area, and second labeling information is generated in that rectangular area. The operation result image with the second labeling information generated is shown in FIG. 3b.
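A corresponding sketch for this second case (the dictionary layout and margin are assumptions): because the object is present, its current position information in the screenshot, rather than any expected position, defines the target position area for the second labeling information.

```python
def target_area_for_mismatch(detected, preset_attr, margin=10):
    """detected = {"name": ..., "bounds": (left, top, right, bottom), "value": ...};
    return the target position area, or None if the attribute is actually correct."""
    if detected["value"] == preset_attr:
        return None                                    # attribute matches the preset attribute
    left, top, right, bottom = detected["bounds"]      # current position information
    return (left - margin, top - margin, right + margin, bottom + margin)

# e.g. target_area_for_mismatch({"name": "integral", "bounds": (520, 80, 600, 110),
#                                "value": "5747"}, preset_attr="5800")
```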
FIG. 4 is a flowchart illustrating another method for labeling operation results according to an exemplary embodiment. As shown in FIG. 4, step 103 may also be implemented by the following steps.
step 1037, a first color of the target location area is acquired.
Step 1038, determining a second color different from the first color.
In step 1039, labeling information of a second color is generated within the target location area.
For example, in order to highlight the labeling information in the operation result image, the first color of the target position area may be determined first; it can be understood as the background color of the target position area. A second color different from the first color is then selected, and finally the labeling information of the second color is generated in the target position area. This avoids the labeling information being unrecognizable or hard to recognize because it is the same as, or similar to, the background color of the target position area.
In addition, even if the first color and the second color are different, the color difference between them may be so small that the human eye still cannot clearly distinguish them, for example white and off-white, or black and dark brown. To address this, in this embodiment the second color may be a color that is clearly distinguishable from the first color. For example, if the first color is red with an RGB value of (255, 0, 0), the second color may be determined as yellow with an RGB value of (255, 255, 0) to highlight the difference between the labeling information and the background color. The second color may be a contrasting color or the complementary color of the first color; alternatively, a color whose color difference from the first color is greater than or equal to a preset color-difference threshold may be used as the second color, so that the human eye can clearly distinguish the two colors.
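One possible way to choose such a second color, sketched under the assumption that plain Euclidean distance in RGB space serves as the color-difference metric and 150 as the preset threshold (the disclosure only requires the difference to meet some preset threshold):

```python
import math

def second_color(first_color, threshold=150):
    """Pick a second color clearly distinguishable from the background first color."""
    r, g, b = first_color
    complement = (255 - r, 255 - g, 255 - b)             # complementary color candidate
    if math.dist(complement, first_color) >= threshold:
        return complement
    # Near-gray backgrounds have near-gray complements; fall back to a saturated default.
    if math.dist((255, 0, 0), first_color) >= threshold:
        return (255, 0, 0)
    return (255, 255, 0)

# e.g. second_color((255, 0, 0)) -> (0, 255, 255); second_color((128, 128, 128)) -> (255, 0, 0)
```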
In summary, after the terminal executes the target operation, the present disclosure first obtains the operation result image displayed on the display interface of the terminal, then examines the target object in the operation result image to determine whether it is wrong, and, if the target object is wrong, labels the target object in the operation result image. In this way, during testing of the terminal, the wrong target object in the operation result image can be labeled automatically, which effectively increases the amount of information contained in the operation result, improves the efficiency and accuracy of the testing work, and allows problems to be found quickly and intuitively.
FIG. 5 is a block diagram of an apparatus for labeling operation results according to an exemplary embodiment. As shown in FIG. 5, the apparatus 200 includes:
An acquiring module 201, configured to acquire the operation result image displayed on the display interface of the terminal after the terminal performs the target operation.
A judging module 202, configured to judge whether the target object in the operation result image is wrong.
A labeling module 203, configured to label the target object in the operation result image if the target object is wrong.
Optionally, the labeling module 203 is configured to perform the following steps:
and generating annotation information in a target position area corresponding to the target object in the operation result image.
FIG. 6 is a block diagram of another apparatus for labeling operation results according to an exemplary embodiment. As shown in FIG. 6, when the target object error includes the target object not being present, the labeling module 203 includes:
the location determining submodule 2031 is configured to determine specified location information of the target object, where the specified location information is location information on the display interface when the target object is correctly displayed after the target operation is performed.
The region determining submodule 2032 is configured to set a preset region corresponding to the specified position information in the operation result image as a target position region.
A first labeling submodule 2033 for generating first labeling information in the target location area.
In another implementation, the target object error includes that the target object exists and the target attribute of the target object is different from the preset attribute. Correspondingly:
the location determining submodule 2031 is configured to determine current location information of the target object in the operation result image.
The area determining submodule 2032 is configured to take a preset area corresponding to the current location information as a target location area.
A first labeling submodule 2033 for generating second labeling information in the target location area.
FIG. 7 is a block diagram of another apparatus for labeling operation results according to an exemplary embodiment. As shown in FIG. 7, the labeling module 203 includes:
the color determination submodule 2034 is configured to obtain a first color of the target location area.
The color determination submodule 2034 is further configured to determine a second color different from the first color.
A second labeling submodule 2035 for generating labeling information of a second color within the target location area.
Optionally, the labeling module 203 may be further configured to:
in the operation result image, annotation information is generated in a target position area corresponding to the target object.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method and will not be repeated here.
In summary, after the terminal executes the target operation, the present disclosure first obtains the operation result image displayed on the display interface of the terminal, then examines the target object in the operation result image to determine whether it is wrong, and, if the target object is wrong, labels the target object in the operation result image. In this way, during testing of the terminal, the wrong target object in the operation result image can be labeled automatically, which effectively increases the amount of information contained in the operation result, improves the efficiency and accuracy of the testing work, and allows problems to be found quickly and intuitively.
Fig. 8 is a block diagram of an electronic device 300, according to an example embodiment. As shown in fig. 8, the electronic device 300 may include: a processor 301, a memory 302. The electronic device 300 may also include one or more of a multimedia component 303, an input/output (I/O) interface 304, and a communication component 305.
The processor 301 is configured to control the overall operation of the electronic device 300 so as to complete all or part of the steps in the above method for labeling operation results. The memory 302 is used to store various types of data to support operation on the electronic device 300; such data may include, for example, instructions for any application or method operating on the electronic device 300, as well as application-related data such as contact data, sent and received messages, pictures, audio, video, and so on. The memory 302 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disk. The multimedia component 303 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used to output and/or input audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signals may be further stored in the memory 302 or transmitted through the communication component 305. The audio component further includes at least one speaker for outputting audio signals. The I/O interface 304 provides an interface between the processor 301 and other interface modules, which may be a keyboard, a mouse, buttons and the like; these buttons may be virtual buttons or physical buttons. The communication component 305 is used for wired or wireless communication between the electronic device 300 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G or 4G, or a combination of one or more of them, and the corresponding communication component 305 may therefore include a Wi-Fi module, a Bluetooth module and an NFC module.
In an exemplary embodiment, the electronic device 300 may be implemented by one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated as ASIC), digital signal processors (Digital Signal Processor, abbreviated as DSP), digital signal processing devices (Digital Signal Processing Device, abbreviated as DSPD), programmable logic devices (Programmable Logic Device, abbreviated as PLD), field programmable gate arrays (Field Programmable Gate Array, abbreviated as FPGA), controllers, microcontrollers, microprocessors, or other electronic components for performing the method of labeling operational results described above.
In another exemplary embodiment, a computer readable storage medium is also provided that includes program instructions that, when executed by a processor, implement the steps of the method of labeling operational results described above. For example, the computer readable storage medium may be the memory 302 including program instructions described above, which are executable by the processor 301 of the electronic device 300 to perform the labeling method of the operation results described above.
In another exemplary embodiment, a computer program product is also provided, comprising a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-described method for labeling operation results when executed by the programmable apparatus.
In summary, after the terminal executes the target operation, the present disclosure first obtains the operation result image displayed on the display interface of the terminal, then examines the target object in the operation result image to determine whether it is wrong, and, if the target object is wrong, labels the target object in the operation result image. In this way, during testing of the terminal, the wrong target object in the operation result image can be labeled automatically, which effectively increases the amount of information contained in the operation result, improves the efficiency and accuracy of the testing work, and allows problems to be found quickly and intuitively.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings. The present disclosure, however, is not limited to the specific details of the above embodiments; various simple modifications may be made to the technical solutions of the present disclosure within the scope of its technical concept, and all such simple modifications fall within the protection scope of the present disclosure.
In addition, the specific features described in the foregoing embodiments may be combined in any suitable manner; to avoid unnecessary repetition, the present disclosure does not further describe the various possible combinations.
Moreover, the various embodiments of the present disclosure may be combined in any manner that does not depart from the spirit of the present disclosure, and such combinations should likewise be regarded as content disclosed by the present disclosure.

Claims (8)

1. A method for labeling an operation result, the method comprising:
after a terminal executes target operation, acquiring an operation result image displayed on a display interface of the terminal;
judging whether a target object in the operation result image is wrong, wherein the target object error comprises the absence of the target object, the target object error also comprises the presence of the target object, and the target attribute of the target object is different from a preset attribute;
if the target object is wrong, marking the target object in the operation result image;
labeling the target object in the operation result image comprises the following steps: and generating annotation information in a target position area corresponding to the target object in the operation result image, wherein the annotation information comprises target operation, current time and error type.
2. The method according to claim 1, wherein generating annotation information in the target location area corresponding to the target object in the operation result image includes:
determining specified position information of the target object, wherein the specified position information is the position information on the display interface when the target object is correctly displayed after the target operation is executed;
taking a preset area corresponding to the specified position information in the operation result image as the target position area;
and generating first labeling information in the target position area.
3. The method according to claim 1, wherein generating annotation information in the target location area corresponding to the target object in the operation result image includes:
determining current position information of the target object in the operation result image;
taking a preset area corresponding to the current position information as the target position area;
and generating second labeling information in the target position area.
4. The method according to claim 1, wherein generating annotation information in the target location area corresponding to the target object in the operation result image includes:
acquiring a first color of the target position area;
determining a second color different from the first color;
and generating the labeling information of the second color in the target position area.
5. An apparatus for labeling an operation result, the apparatus comprising:
the acquisition module is used for acquiring an operation result image displayed on a display interface of the terminal after the terminal executes target operation;
the judging module is used for judging whether a target object in the operation result image is wrong, wherein the target object error comprises the absence of the target object, the target object error also comprises the presence of the target object, and the target attribute of the target object is different from a preset attribute;
the marking module is used for marking the target object in the operation result image if the target object is wrong;
the labeling module is used for: and generating annotation information in a target position area corresponding to the target object in the operation result image, wherein the annotation information comprises target operation, current time and error type.
6. The apparatus of claim 5, wherein the labeling module comprises:
the position determining sub-module is used for determining specified position information of the target object, wherein the specified position information is the position information on the display interface when the target object is correctly displayed after the target operation is executed;
the region determination submodule is used for taking a preset region corresponding to the specified position information in the operation result image as the target position region;
the first labeling sub-module is used for generating first labeling information in the target position area;
or,
the position determining submodule is used for determining current position information of the target object in the operation result image;
the region determining submodule is used for taking a preset region corresponding to the current position information as the target position region;
the first labeling sub-module is used for generating second labeling information in the target position area.
7. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the steps of the method according to any one of claims 1-4.
8. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of any one of claims 1-4.
CN201911143643.0A 2019-11-20 2019-11-20 Operation result labeling method and device, storage medium and electronic equipment Active CN111026644B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911143643.0A CN111026644B (en) 2019-11-20 2019-11-20 Operation result labeling method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911143643.0A CN111026644B (en) 2019-11-20 2019-11-20 Operation result labeling method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111026644A CN111026644A (en) 2020-04-17
CN111026644B (en) 2023-09-26

Family

ID=70205994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911143643.0A Active CN111026644B (en) 2019-11-20 2019-11-20 Operation result labeling method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111026644B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832255B (en) * 2020-06-29 2024-05-14 深圳市万翼数字技术有限公司 Labeling processing method, electronic equipment and related products

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109388532A (en) * 2018-09-26 2019-02-26 Oppo广东移动通信有限公司 Test method, device, electronic equipment and computer-readable storage medium
CN109800153A (en) * 2018-12-14 2019-05-24 深圳壹账通智能科技有限公司 Mobile application test method and device, electronic equipment, storage medium


Also Published As

Publication number Publication date
CN111026644A (en) 2020-04-17

Similar Documents

Publication Publication Date Title
CN110347587B (en) APP compatibility testing method and device, computer equipment and storage medium
CN111026645B (en) User interface automatic test method and device, storage medium and electronic equipment
CN107025174B (en) Method, device and readable storage medium for user interface anomaly test of equipment
CN106502891B (en) Automatic detection method and device for user interface
CN111881019B (en) User interface testing method and device
CN110879777A (en) Control testing method and device for application interface, computer equipment and storage medium
CN108376094B (en) Notification message display method and device, computer equipment and storage medium
KR20140038381A (en) Systems and methods for testing content of mobile communication devices
US9804955B2 (en) Method and apparatus for creating reference images for an automated test of software with a graphical user interface
CN110851299A (en) Automatic flow exception eliminating method, device, equipment and storage medium
CN106776319B (en) Automatic test method and device
CN110765015A (en) Method for testing application to be tested and electronic equipment
CN114490395A (en) Test processing method, device, equipment and medium
CN108845924B (en) Control response area display control method, electronic device, and storage medium
CN112286782B (en) Control shielding detection method, software detection method, device and medium
CN111026644B (en) Operation result labeling method and device, storage medium and electronic equipment
CN109753217B (en) Dynamic keyboard operation method and device, storage medium and electronic equipment
CN111309613A (en) Application testing method, device, equipment and computer readable storage medium
CN109144841B (en) Method and device for identifying advertisement application, computer equipment and storage medium
CN111949510A (en) Test processing method and device, electronic equipment and readable storage medium
CN113900932A (en) Test script generation method, device, medium and electronic equipment
CN115878491A (en) Interface abnormity detection method and device, electronic equipment, storage medium and chip
CN115357488A (en) Method and device for automatic testing, electronic equipment and storage medium
CN112214404A (en) Mobile application testing method and device, storage medium and electronic equipment
CN112612469A (en) Interface element processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant