CN111026644A - Operation result labeling method and device, storage medium and electronic equipment

Operation result labeling method and device, storage medium and electronic equipment

Info

Publication number
CN111026644A
Authority
CN
China
Prior art keywords
target object
target
operation result
result image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911143643.0A
Other languages
Chinese (zh)
Other versions
CN111026644B (en)
Inventor
殷坤
纪勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Corp filed Critical Neusoft Corp
Priority to CN201911143643.0A priority Critical patent/CN111026644B/en
Publication of CN111026644A publication Critical patent/CN111026644A/en
Application granted granted Critical
Publication of CN111026644B publication Critical patent/CN111026644B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3668 Software testing
    • G06F11/3672 Test management
    • G06F11/3688 Test management for test execution, e.g. scheduling of test suites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3668 Software testing
    • G06F11/3672 Test management
    • G06F11/3692 Test management for test results analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to an operation result labeling method and device, a storage medium, and an electronic device, in the field of terminal technologies. The method includes: after a terminal executes a target operation, acquiring an operation result image displayed on a display interface of the terminal; judging whether a target object in the operation result image is erroneous; and, if the target object is erroneous, marking the target object in the operation result image. In the process of testing the terminal, the erroneous target object in the operation result image can be labeled automatically, which effectively increases the amount of information in the operation result, reduces the testing workload, and allows problems to be found quickly and intuitively.

Description

Operation result labeling method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of terminal technologies, and in particular, to a method and an apparatus for labeling an operation result, a storage medium, and an electronic device.
Background
With the continuous development of computer technology and software development technology, terminal functions have become increasingly diversified, and various user requirements can be met by installing different application programs (APPs) on a terminal. During the development of an application program, extensive testing is required, for example, testing whether the image displayed on the display interface is correct when the application program runs on the terminal. Generally, after a specified operation is performed, the result image on the display interface is captured as the operation result and used as a reference for subsequent defect analysis by testers. However, the result image contains little information, which makes it difficult for testers to find problems; testers must rely on their own working experience and label the result image manually at a later stage, so accuracy is low and work efficiency is poor.
Disclosure of Invention
The present disclosure aims to provide an operation result labeling method and device, a storage medium, and an electronic device, to solve the prior-art problems that operation results must be labeled manually, which is inefficient and error-prone.
In order to achieve the above object, according to a first aspect of embodiments of the present disclosure, there is provided an operation result labeling method, including:
after a terminal executes a target operation, acquiring an operation result image displayed on a display interface of the terminal;
judging whether a target object in the operation result image is erroneous;
and, if the target object is erroneous, marking the target object in the operation result image.
Optionally, marking the target object in the operation result image includes:
generating labeling information in a target position area corresponding to the target object in the operation result image.
Optionally, the target object error includes that the target object does not exist; generating the labeling information in the target position area corresponding to the target object in the operation result image includes:
determining designated position information of the target object, where the designated position information is the position information on the display interface at which the target object is correctly displayed after the target operation is executed;
taking a preset area corresponding to the designated position information in the operation result image as the target position area;
and generating first labeling information in the target position area.
Optionally, the target object error includes that the target object exists and a target attribute of the target object is different from a preset attribute; generating the labeling information in the target position area corresponding to the target object in the operation result image includes:
determining current position information of the target object in the operation result image;
taking a preset area corresponding to the current position information as the target position area;
and generating second labeling information in the target position area.
Optionally, generating the labeling information in the target position area corresponding to the target object in the operation result image includes:
acquiring a first color of the target position area;
determining a second color different from the first color;
and generating the labeling information of the second color in the target position area.
Optionally, marking the target object in the operation result image further includes:
generating annotation information in the target position area corresponding to the target object in the operation result image.
According to a second aspect of the embodiments of the present disclosure, there is provided an operation result labeling apparatus, the apparatus including:
an acquisition module, configured to acquire an operation result image displayed on a display interface of a terminal after the terminal executes a target operation;
a judging module, configured to judge whether a target object in the operation result image is erroneous;
and a labeling module, configured to mark the target object in the operation result image if the target object is erroneous.
Optionally, the labeling module is configured to:
generate labeling information in a target position area corresponding to the target object in the operation result image.
Optionally, the target object error includes that the target object does not exist; the labeling module includes:
a position determining submodule, configured to determine designated position information of the target object, where the designated position information is the position information on the display interface at which the target object is correctly displayed after the target operation is executed;
an area determining submodule, configured to take a preset area corresponding to the designated position information in the operation result image as the target position area;
and a first labeling submodule, configured to generate first labeling information in the target position area.
Optionally, the target object error includes that the target object exists and a target attribute of the target object is different from a preset attribute; the labeling module includes:
the position determining submodule, configured to determine current position information of the target object in the operation result image;
the area determining submodule, configured to take a preset area corresponding to the current position information as the target position area;
and the first labeling submodule, configured to generate second labeling information in the target position area.
Optionally, the labeling module includes:
a color determining submodule, configured to acquire a first color of the target position area;
the color determining submodule, further configured to determine a second color different from the first color;
and a second labeling submodule, configured to generate the labeling information of the second color in the target position area.
Optionally, the labeling module is further configured to:
generate annotation information in the target position area corresponding to the target object in the operation result image.
According to a third aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect of embodiments of the present disclosure.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of the first aspect of an embodiment of the disclosure.
According to the above technical solution, after the terminal executes the target operation, the operation result image displayed on the display interface of the terminal is first acquired, and then the target object in the operation result image is examined to determine whether it is erroneous; if the target object is erroneous, it is marked in the operation result image. In this way, during testing of the terminal, the erroneous target object in the operation result image can be labeled automatically, which effectively increases the amount of information contained in the operation result, improves the efficiency and accuracy of the testing work, and allows problems to be found quickly and intuitively.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a flowchart illustrating an operation result labeling method according to an exemplary embodiment;
FIG. 2a is a flowchart illustrating another operation result labeling method according to an exemplary embodiment;
FIG. 2b is a schematic diagram of an operation result image according to an exemplary embodiment;
FIG. 2c is a schematic diagram of an operation result image according to an exemplary embodiment;
FIG. 3a is a flowchart illustrating another operation result labeling method according to an exemplary embodiment;
FIG. 3b is a schematic diagram of an operation result image according to an exemplary embodiment;
FIG. 4 is a flowchart illustrating another operation result labeling method according to an exemplary embodiment;
FIG. 5 is a block diagram illustrating an operation result labeling apparatus according to an exemplary embodiment;
FIG. 6 is a block diagram illustrating an operation result labeling apparatus according to an exemplary embodiment;
FIG. 7 is a block diagram illustrating an operation result labeling apparatus according to an exemplary embodiment;
FIG. 8 is a block diagram illustrating an electronic device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Before describing the operation result labeling method and apparatus, the storage medium, and the electronic device provided by the present disclosure, an application scenario related to the various embodiments of the present disclosure is first described. The application scenario may be any terminal provided with a display screen capable of showing a display interface; for example, the terminal may be a mobile terminal such as a smartphone, a tablet computer, a smart television, a smart watch, a PDA (Personal Digital Assistant), or a portable computer, or a fixed terminal such as a desktop computer.
Fig. 1 is a flowchart illustrating an operation result labeling method according to an exemplary embodiment. As shown in Fig. 1, the method includes:
step 101, after the terminal executes the target operation, acquiring an operation result image displayed on a display interface of the terminal.
For example, a plurality of application programs may be pre-installed on the terminal to implement different functions. When a target operation is executed on the terminal, a corresponding operation result image is displayed on the display interface of the terminal. The target operation may be an operation performed on any one of the plurality of application programs. Taking the testing of an application as an example, the target operation may be: opening the application, using various functions of the application, selecting and viewing various tab interfaces of the application, setting properties of the application, and so on. The target operation may be executed manually by a user of the terminal, or executed automatically by an automated test program preset on the terminal. After the terminal executes the target operation, the current image on the display interface is captured as the operation result image corresponding to the target operation. For example, after the automated test program detects that the target operation is completed, it takes a screenshot of the current display interface to obtain the operation result image.
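As a concrete illustration of step 101 (not part of the patent itself), the following Python sketch shows how an automated test harness might capture the operation result image from an Android test device over adb once the target operation has finished; `perform_target_operation` is a hypothetical callback standing in for the operation under test.

```python
# Sketch only: assumes the `adb` command-line tool is installed and exactly one
# test device is connected; perform_target_operation is a hypothetical callback.
import subprocess

def capture_operation_result(perform_target_operation, out_path="operation_result.png"):
    perform_target_operation()  # e.g. open the application under test
    # "adb exec-out screencap -p" streams the current screen as PNG bytes
    completed = subprocess.run(
        ["adb", "exec-out", "screencap", "-p"],
        check=True,
        capture_output=True,
    )
    with open(out_path, "wb") as f:
        f.write(completed.stdout)
    return out_path
```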
And 102, judging whether the target object in the operation result image is wrong or not.
Step 103, if the target object is wrong, the target object is marked in the operation result image.
For example, the operation result image may include one or more target objects, where a target object can be understood as any of the various types of objects displayed on the display interface by the application program, such as pictures, text, icons, and symbols. Each target object may further have one or more corresponding attributes, for example its value, display position, display color, or display size. Whether the target object is erroneous is judged according to the operation result image. If the target object is erroneous, the target object is marked in the operation result image; if not, the target object is not marked. The target object can be marked in various ways, for example: framing the target object with a labeling frame, underlining the target object with a line, or pointing to the target object with an arrow.
It should be noted that a target object error may be of two types. One is that the target object does not exist, that is, the target object should be displayed in the operation result image but is not. The other is that the target object exists, but a target attribute of the target object differs from the preset attribute that should be displayed, that is, the target object is displayed in the operation result image, but its display style or displayed value is inconsistent with the expected style or value.
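The two error categories can be expressed in code roughly as follows (an illustrative sketch, not the patent's implementation); it assumes the test harness has already extracted the displayed objects and their attributes, for example from a UI hierarchy dump, into a dictionary keyed by object name.

```python
# Sketch only: displayed_objects maps object names to their actual attributes,
# e.g. {"search bar": {"text": "", "position": (120, 300)}}; how it is obtained
# (UI dump, OCR, ...) is outside the scope of this example.
def check_target_object(name, preset_attributes, displayed_objects):
    if name not in displayed_objects:
        return ("missing", None)                   # error type 1: object not displayed
    actual = displayed_objects[name]
    mismatched = {
        attr: (expected, actual.get(attr))
        for attr, expected in preset_attributes.items()
        if actual.get(attr) != expected
    }
    if mismatched:
        return ("attribute_mismatch", mismatched)  # error type 2: wrong target attribute
    return (None, None)                            # no error, no labeling needed
```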
In summary, according to the present disclosure, after the terminal executes the target operation, the operation result image displayed on the display interface of the terminal is first acquired, and then the target object in the operation result image is examined to determine whether it is erroneous; if the target object is erroneous, it is marked in the operation result image. In this way, during testing of the terminal, the erroneous target object in the operation result image can be labeled automatically, which effectively increases the amount of information contained in the operation result, improves the efficiency and accuracy of the testing work, and allows problems to be found quickly and intuitively.
In a specific application scenario, step 103 may be implemented as follows:
generating labeling information in a target position area corresponding to the target object in the operation result image.
For example, the target object may be marked in the operation result image by finding the target position area corresponding to the target object and generating the labeling information in that area. The target position area may be a circular or rectangular area of a preset size centered on the target object, or an area of a preset size at a preset distance (e.g., 1 cm) from the target object. The labeling information may be a labeling frame of various shapes (e.g., a circle or a rectangle), a line of various types (e.g., an underline, a strikethrough, or a wavy line), an arrow pointing to the target object, and so on.
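For instance, a rectangular labeling frame of a preset size centered on the target object could be drawn with the Pillow imaging library roughly as follows (a sketch under the assumption that the object's center coordinates are already known; the patent does not prescribe any particular drawing library or frame size).

```python
# Sketch only: draws a rectangular labeling frame of a preset size centered on
# the target object; Pillow (PIL) is used here purely for illustration.
from PIL import Image, ImageDraw

def frame_target_area(image_path, center, size=(200, 60), color=(255, 0, 0)):
    cx, cy = center
    w, h = size
    box = (cx - w // 2, cy - h // 2, cx + w // 2, cy + h // 2)  # target position area
    img = Image.open(image_path).convert("RGB")
    ImageDraw.Draw(img).rectangle(box, outline=color, width=3)
    img.save(image_path)
    return box

# e.g. frame_target_area("operation_result.png", center=(360, 512))
```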
Furthermore, after the target object is marked, annotation information may be generated in the vicinity of the labeling information in the operation result image.
To further increase the amount of information contained in the operation result image so that testers can find problems quickly and intuitively, annotation information may be generated in the target position area corresponding to the erroneous target object. The annotation information may include, for example: the target operation, the current time, the error type, and so on, and may be text or of another type. When the target object does not exist, the error type may include the name of the target object; when the target object exists but a target attribute is erroneous, the error type may include the name of the target object and the name of the target attribute. In this way, when testers see the annotation information in the operation result image, they can quickly and intuitively determine which error has occurred.
For example, when an application program on the terminal is tested, an operation of opening the application program is executed; if the "search bar" is not displayed in the operation result image shown on the display interface of the terminal, the target object is the "search bar" and the error type is that the target object does not exist. The labeling information may be generated in the target position area corresponding to the "search bar"; for example, the position where the "search bar" should be displayed may be framed with a labeling frame. Corresponding annotation information can then be generated in the labeling frame: "target operation: opening operation; current time: 2019/07/22; error type: 'search bar' object does not exist". In this way, the operation result image not only marks the erroneous target object but can also include annotation information about the error, so that testers can find the problem quickly and intuitively according to the annotation information.
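Continuing the "search bar" example, the annotation text could be composed and written next to the labeling frame roughly as follows (again only a sketch; the field names and layout are illustrative, not mandated by the patent).

```python
# Sketch only: writes textual annotation information (target operation, current
# time, error type) just below the labeling frame drawn for the target object.
from datetime import date
from PIL import Image, ImageDraw

def annotate_error(image_path, box, target_operation, error_type, color=(255, 0, 0)):
    annotation = (
        f"target operation: {target_operation}; "
        f"current time: {date.today()}; "
        f"error type: {error_type}"
    )
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    draw.text((box[0], box[3] + 5), annotation, fill=color)  # text under the frame
    img.save(image_path)

# e.g. annotate_error("operation_result.png", (120, 300, 480, 360),
#                     "opening operation", "'search bar' object does not exist")
```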
FIG. 2a is a flowchart illustrating another operation result labeling method according to an exemplary embodiment. As shown in FIG. 2a, in this implementation the target object error includes that the target object does not exist, and step 103 may include:
and step 1031, determining the designated position information of the target object, wherein the designated position information is the position information on the display interface when the target object is correctly displayed after the target operation is executed.
And step 1032, taking a preset area corresponding to the specified position information in the operation result image as a target position area.
Step 1033, generate the first label information in the target location area.
For example, when the target object does not exist, the designated position information of the target object may first be determined. The designated position information can be understood as the position at which the target object should be displayed on the display interface if the execution result were correct after the target operation is executed, that is, the expected position of the target object on the display interface. The designated position information may be a coordinate range (e.g., the coordinates of the upper-left, lower-left, upper-right, and lower-right corners). Next, the preset area corresponding to the designated position information in the operation result image is taken as the target position area; the target position area may be the coordinate range indicated by the position information, or an area near that coordinate range. Finally, the first labeling information is generated in the target position area. Taking the operation result image acquired in step 101 shown in FIG. 2b as an example, "recently viewed" should be displayed to the right of "hotel was there", but a blank space is displayed in the operation result image; that is, the target object is "recently viewed" and it does not exist in the operation result image. It can first be determined that, if "recently viewed" were displayed correctly, its designated position information on the display interface would be the coordinates of its lower-left and upper-right corners. The preset area corresponding to the designated position information in the operation result image is taken as the target position area, for example the rectangular area determined by the lower-left and upper-right corner coordinates of "recently viewed", and the first labeling information is generated in that rectangular area, as shown in FIG. 2c.
Fig. 3a is a flowchart illustrating another operation result labeling method according to an exemplary embodiment. As shown in Fig. 3a, in another implementation, the target object error includes that the target object exists but a target attribute of the target object differs from the preset attribute, that is, the operation result image displays the target object, but the target attribute of the displayed target object is different from the preset attribute. Accordingly, step 103 may include the following steps:
at step 1034, the current position information of the target object in the operation result image is determined.
Here, because the display interface sizes of terminals of different models may differ, the display positions of the same application on terminals of different models may also differ. Therefore, the current position information of the target object in the operation result image may deviate from its designated position information (i.e., the expected position of the target object on the display interface), and the current position information indicates the position of the target object more accurately than the designated position information. The current position information may be a coordinate range (e.g., the coordinates of the upper-left, lower-left, upper-right, and lower-right corners).
In step 1035, the preset area corresponding to the current position information is used as the target position area.
In step 1036, second label information is generated in the target position area.
The target position area may be the coordinate range indicated by the current position information, or an area near that coordinate range. Taking the operation result image shown in Fig. 2b as an example, suppose the points value (i.e., the target attribute) of the target object "points" is displayed incorrectly: the correct value (i.e., the preset attribute) should be 5800, but 5747 is displayed in the operation result image. The current position information of "points" may then be determined, for example the coordinates of its lower-left and upper-right corners. The rectangular area determined by the lower-left and upper-right corner coordinates of "points" is taken as the target position area, and the second labeling information is generated in that rectangular area. Similarly, suppose the display content (i.e., the target attribute) of the target object "common information" is displayed incorrectly: the correct display content (i.e., the preset attribute) is the information set by the user, but the default content is displayed in the operation result image. The current position information of "common information" is then determined, for example the coordinates of its upper-left and lower-right corners. The rectangular area determined by the upper-left and lower-right corner coordinates of "common information" is taken as the target position area, and the second labeling information is generated in that rectangular area. The operation result image after the second labeling information is generated is shown in Fig. 3b.
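For the attribute-mismatch case, the text of the second labeling information can record both the preset and the displayed attribute values, for example as in this small sketch (the "points" figures reproduce the example above; the wording of the message is illustrative, not prescribed by the patent).

```python
# Sketch only: builds the text of the second labeling information for a target
# object whose target attribute differs from the preset attribute.
def build_second_label(object_name, attribute_name, preset_value, displayed_value):
    return (
        f"'{object_name}' {attribute_name} is wrong: "
        f"expected {preset_value}, displayed {displayed_value}"
    )

# e.g. build_second_label("points", "value", 5800, 5747)
# -> "'points' value is wrong: expected 5800, displayed 5747"
```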
Fig. 4 is a flowchart illustrating another method for labeling operation results according to an exemplary embodiment, and as shown in fig. 4, step 103 can be further implemented by:
at step 1037, a first color of the target location area is obtained.
At step 1038, a second color different from the first color is determined.
In step 1039, label information of the second color is generated in the target location area.
For example, in order to highlight the annotation information in the operation result image, the first color of the target position region may be determined, which may be understood as the background color of the target position region. And then selecting a second color different from the first color, and finally generating the labeling information of the second color in the target position area, so that the problem that the labeling information cannot be identified or is difficult to identify due to the same color or similarity of the labeling information and the background color of the target position area is solved.
In addition, even if the first color and the second color are different, the color difference between them may be small, so that the human eye still cannot clearly distinguish them, for example white and gray-white, or black and dark brown. In this embodiment, the second color may therefore be a color that is clearly distinguished from the first color. For example, if the first color is red with an RGB value of (255, 0, 0), the second color may be determined as yellow with an RGB value of (255, 255, 0) to highlight the difference between the labeling information and the background color. The second color may also be a contrasting color or a complementary color of the first color; alternatively, a color whose color difference from the first color is greater than or equal to a preset color-difference threshold may be used as the second color, so that the human eye can clearly distinguish the two colors.
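One way to realize this color choice (a sketch, not the patent's prescribed algorithm) is to sample the average background color of the target position area and, if a candidate second color is too close to it under a Euclidean RGB distance threshold, fall back to the complementary color of the background.

```python
# Sketch only: picks a second color that is clearly distinguishable from the
# first (background) color of the target position area; the distance threshold
# of 100 is an illustrative value, not taken from the patent.
from PIL import Image, ImageStat

def pick_label_color(image_path, box, candidate=(255, 0, 0), threshold=100):
    region = Image.open(image_path).convert("RGB").crop(box)
    first_color = tuple(int(c) for c in ImageStat.Stat(region).mean)  # average background
    distance = sum((a - b) ** 2 for a, b in zip(candidate, first_color)) ** 0.5
    if distance >= threshold:
        return candidate
    # fall back to the complementary color of the background
    return tuple(255 - c for c in first_color)
```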
In summary, according to the present disclosure, after the terminal executes the target operation, the operation result image displayed on the display interface of the terminal is first acquired, and then the target object in the operation result image is examined to determine whether it is erroneous; if the target object is erroneous, it is marked in the operation result image. In this way, during testing of the terminal, the erroneous target object in the operation result image can be labeled automatically, which effectively increases the amount of information contained in the operation result, improves the efficiency and accuracy of the testing work, and allows problems to be found quickly and intuitively.
Fig. 5 is a block diagram illustrating an operation result labeling apparatus according to an exemplary embodiment. As shown in Fig. 5, the apparatus 200 includes:
the obtaining module 201 is configured to obtain an operation result image displayed on a display interface of the terminal after the terminal executes the target operation.
And the judging module 202 is used for judging whether the target object in the operation result image is wrong.
And the labeling module 203 is used for marking the target object in the operation result image if the target object is wrong.
Optionally, the labeling module 203 is configured to:
generate labeling information in a target position area corresponding to the target object in the operation result image.
Fig. 6 is a block diagram illustrating another operation result labeling apparatus according to an exemplary embodiment. As shown in Fig. 6, the target object error includes that the target object does not exist, and the labeling module 203 includes:
The position determining submodule 2031 is configured to determine the designated position information of the target object, where the designated position information is the position information on the display interface at which the target object is correctly displayed after the target operation is executed.
The area determining submodule 2032 is configured to take a preset area corresponding to the designated position information in the operation result image as the target position area.
The first labeling submodule 2033 is configured to generate first labeling information in the target position area.
In another implementation, the target object error includes that the target object exists and the target attribute of the target object is different from the preset attribute; accordingly:
The position determining submodule 2031 is configured to determine the current position information of the target object in the operation result image.
The area determining submodule 2032 is configured to take a preset area corresponding to the current position information as the target position area.
The first labeling submodule 2033 is configured to generate second labeling information in the target position area.
FIG. 7 is a block diagram illustrating another operation result labeling apparatus according to an exemplary embodiment. As shown in FIG. 7, the labeling module 203 includes:
The color determining submodule 2034 is configured to acquire a first color of the target position area.
The color determining submodule 2034 is further configured to determine a second color different from the first color.
The second labeling submodule 2035 is configured to generate labeling information of the second color in the target position area.
Optionally, the labeling module 203 may further be configured to:
generate annotation information in the target position area corresponding to the target object in the operation result image.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In summary, according to the present disclosure, after the terminal executes the target operation, the operation result image displayed on the display interface of the terminal is first acquired, and then the target object in the operation result image is examined to determine whether it is erroneous; if the target object is erroneous, it is marked in the operation result image. In this way, during testing of the terminal, the erroneous target object in the operation result image can be labeled automatically, which effectively increases the amount of information contained in the operation result, improves the efficiency and accuracy of the testing work, and allows problems to be found quickly and intuitively.
Fig. 8 is a block diagram illustrating an electronic device 300 in accordance with an example embodiment. As shown in fig. 8, the electronic device 300 may include: a processor 301 and a memory 302. The electronic device 300 may also include one or more of a multimedia component 303, an input/output (I/O) interface 304, and a communication component 305.
The processor 301 is configured to control the overall operation of the electronic device 300 so as to complete all or part of the steps of the above operation result labeling method. The memory 302 is used to store various types of data to support operation of the electronic device 300, such as instructions for any application or method operating on the electronic device 300 and application-related data, for example contact data, transmitted and received messages, pictures, audio, and video. The memory 302 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. The multimedia component 303 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may further be stored in the memory 302 or transmitted through the communication component 305. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 304 provides an interface between the processor 301 and other interface modules such as a keyboard, a mouse, or buttons, which may be virtual or physical. The communication component 305 is used for wired or wireless communication between the electronic device 300 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them; accordingly, the communication component 305 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic device 300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above operation result labeling method.
In another exemplary embodiment, there is also provided a computer-readable storage medium including program instructions which, when executed by a processor, implement the steps of the above operation result labeling method. For example, the computer-readable storage medium may be the memory 302 including program instructions executable by the processor 301 of the electronic device 300 to perform the above operation result labeling method.
In another exemplary embodiment, a computer program product is also provided, which includes a computer program executable by a programmable apparatus, the computer program having code portions which, when executed by the programmable apparatus, perform the above operation result labeling method.
In summary, according to the present disclosure, after the terminal executes the target operation, the operation result image displayed on the display interface of the terminal is first acquired, and then the target object in the operation result image is examined to determine whether it is erroneous; if the target object is erroneous, it is marked in the operation result image. In this way, during testing of the terminal, the erroneous target object in the operation result image can be labeled automatically, which effectively increases the amount of information contained in the operation result, improves the efficiency and accuracy of the testing work, and allows problems to be found quickly and intuitively.
The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner; to avoid unnecessary repetition, the possible combinations are not described separately in the present disclosure.
In addition, the various embodiments of the present disclosure may be combined in any way, and such combinations should likewise be regarded as content disclosed by the present disclosure as long as they do not depart from the spirit of the present disclosure.

Claims (10)

1. A method for labeling operation results, the method comprising:
after a terminal executes a target operation, acquiring an operation result image displayed on a display interface of the terminal;
judging whether a target object in the operation result image is erroneous;
and, if the target object is erroneous, marking the target object in the operation result image.
2. The method of claim 1, wherein marking the target object in the operation result image comprises:
generating labeling information in a target position area corresponding to the target object in the operation result image.
3. The method of claim 2, wherein the target object error comprises that the target object does not exist; and generating the labeling information in the target position area corresponding to the target object in the operation result image comprises:
determining designated position information of the target object, wherein the designated position information is the position information on the display interface at which the target object is correctly displayed after the target operation is executed;
taking a preset area corresponding to the designated position information in the operation result image as the target position area;
and generating first labeling information in the target position area.
4. The method of claim 2, wherein the target object error comprises that the target object exists and a target attribute of the target object is different from a preset attribute; and generating the labeling information in the target position area corresponding to the target object in the operation result image comprises:
determining current position information of the target object in the operation result image;
taking a preset area corresponding to the current position information as the target position area;
and generating second labeling information in the target position area.
5. The method of claim 2, wherein generating the labeling information in the target position area corresponding to the target object in the operation result image comprises:
acquiring a first color of the target position area;
determining a second color different from the first color;
and generating the labeling information of the second color in the target position area.
6. An operation result labeling apparatus, the apparatus comprising:
an acquisition module, configured to acquire an operation result image displayed on a display interface of a terminal after the terminal executes a target operation;
a judging module, configured to judge whether a target object in the operation result image is erroneous;
and a labeling module, configured to mark the target object in the operation result image if the target object is erroneous.
7. The apparatus of claim 6, wherein the labeling module is configured to:
generate labeling information in a target position area corresponding to the target object in the operation result image.
8. The apparatus of claim 7, wherein the target object error comprises that the target object does not exist; and the labeling module comprises:
a position determining submodule, configured to determine designated position information of the target object, wherein the designated position information is the position information on the display interface at which the target object is correctly displayed after the target operation is executed;
an area determining submodule, configured to take a preset area corresponding to the designated position information in the operation result image as the target position area;
and a first labeling submodule, configured to generate first labeling information in the target position area;
alternatively,
the target object error comprises that the target object exists and the target attribute of the target object is different from the preset attribute;
the position determining submodule is configured to determine current position information of the target object in the operation result image;
the area determining submodule is configured to take a preset area corresponding to the current position information as the target position area;
and the first labeling submodule is configured to generate second labeling information in the target position area.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
10. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 5.
CN201911143643.0A 2019-11-20 2019-11-20 Operation result labeling method and device, storage medium and electronic equipment Active CN111026644B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911143643.0A CN111026644B (en) 2019-11-20 2019-11-20 Operation result labeling method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111026644A true CN111026644A (en) 2020-04-17
CN111026644B CN111026644B (en) 2023-09-26

Family

ID=70205994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911143643.0A Active CN111026644B (en) 2019-11-20 2019-11-20 Operation result labeling method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111026644B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832255A (en) * 2020-06-29 2020-10-27 万翼科技有限公司 Label processing method, electronic equipment and related product

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109388532A (en) * 2018-09-26 2019-02-26 Oppo广东移动通信有限公司 Test method, device, electronic equipment and computer-readable storage medium
CN109800153A (en) * 2018-12-14 2019-05-24 深圳壹账通智能科技有限公司 Mobile application test method and device, electronic equipment, storage medium

Also Published As

Publication number Publication date
CN111026644B (en) 2023-09-26

Similar Documents

Publication Publication Date Title
CN110347587B (en) APP compatibility testing method and device, computer equipment and storage medium
US20180173614A1 (en) Technologies for device independent automated application testing
US20140304280A1 (en) Text display and selection system
CN110879777A (en) Control testing method and device for application interface, computer equipment and storage medium
KR20140038381A (en) Systems and methods for testing content of mobile communication devices
US10810113B2 (en) Method and apparatus for creating reference images for an automated test of software with a graphical user interface
CN108553894B (en) Display control method and device, electronic equipment and storage medium
CN109471805B (en) Resource testing method and device, storage medium and electronic equipment
US9170714B2 (en) Mixed type text extraction and distribution
CN104823150B (en) information terminal and storage medium
US20160266769A1 (en) Text display and selection system
CN114240882A (en) Defect detection method and device, electronic equipment and storage medium
CN103984626A (en) Method and device for generating test-case script
CN110765015A (en) Method for testing application to be tested and electronic equipment
CN107450912B (en) Page layout method, device and terminal
CN106250374B (en) Word-taking translation method and system
CN111026644B (en) Operation result labeling method and device, storage medium and electronic equipment
CN108845924B (en) Control response area display control method, electronic device, and storage medium
CN112416751A (en) Processing method and device for interface automation test and storage medium
CN114666634B (en) Picture quality detection result display method, device, equipment and storage medium
US10254959B2 (en) Method of inputting a character into a text string using a sliding touch gesture, and electronic device therefor
RU2636673C2 (en) Method and device for line saving
CN115878491A (en) Interface abnormity detection method and device, electronic equipment, storage medium and chip
CN113900932A (en) Test script generation method, device, medium and electronic equipment
CN109753217B (en) Dynamic keyboard operation method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant