CN112987994A - Frame selection annotation method, frame selection annotation device, electronic device, and storage medium


Info

Publication number
CN112987994A
CN112987994A
Authority
CN
China
Prior art keywords
target
coordinate
input
position information
target interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110349584.3A
Other languages
Chinese (zh)
Inventor
余万利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202110349584.3A priority Critical patent/CN112987994A/en
Publication of CN112987994A publication Critical patent/CN112987994A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/04812: Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
    • G06F 3/04845: Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/04883: Interaction techniques using specific features provided by the input device, e.g. a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text

Abstract

The application discloses a frame selection annotation method, a frame selection annotation device, an electronic device, and a storage medium, belonging to the field of communication technology. The frame selection annotation method includes the following steps: receiving a first input while a target interface is displayed; and displaying a first selection box in response to the first input, where the position information of the first selection box is associated with the position information of the first input and with the display position, in the target interface, of the target content associated with the first input.

Description

Frame selection annotation method, frame selection annotation device, electronic device, and storage medium
Technical Field
The application belongs to the field of communication technology and in particular relates to a frame selection annotation method, a frame selection annotation device, an electronic device, and a storage medium.
Background
In the related art, when a user needs to highlight target content in a picture, the user usually draws a selection box around the target content manually to annotate it.
When adjusting the selection box, the user must manually drag its edges, and such manual adjustment rarely lands on the exact position, so the user has to adjust the selection box repeatedly.
Reducing the operation steps a user spends repeatedly adjusting the selection box has therefore become an urgent problem.
Disclosure of Invention
Embodiments of the application aim to provide a frame selection annotation method, a frame selection annotation device, an electronic device, and a storage medium that adjust the edges of a selection box according to the content the user has framed, so that both the position and the size of the box match the framed content. This addresses the technical problem of reducing the operation steps a user spends repeatedly adjusting the selection box when selecting picture content.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides a frame selection annotation method, including: receiving a first input while a target interface is displayed; and displaying a first selection box in response to the first input, where the position information of the first selection box is associated with the position information of the first input and with the display position, in the target interface, of the target content associated with the first input.
In a second aspect, an embodiment of the present application provides a frame selection annotation device, including: a receiving unit configured to receive a first input while a target interface is displayed; and a display unit configured to display a first selection box in response to the first input, where the position information of the first selection box is associated with the position information of the first input and with the display position, in the target interface, of the target content associated with the first input.
In a third aspect, an embodiment of the present application provides an electronic device including a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the frame selection annotation method provided in the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium on which a program or instructions are stored, which, when executed by a processor, implement the steps of the frame selection annotation method provided in the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip including a processor and a communication interface, where the communication interface is coupled to the processor and the processor is configured to execute a program or instructions to implement the steps of the frame selection annotation method provided in the first aspect.
In the embodiments of the application, while the electronic device displays the target interface, the user can operate on the target interface so that the electronic device receives a first input. The first input is a frame selection operation performed by the user on content in the target interface; specifically, it can be a slide, click, or drag operation issued by the user to the electronic device. The electronic device responds to the first input and displays a first selection box accordingly. After receiving the first input, the electronic device determines the position information of the first input, analyzes the content in the target interface, finds the corresponding target content according to the position information of the first input, and associates the position information of the first selection box with the display position of the target content.
Because the first input is a frame selection operation performed by the user on the target content in the target interface, the target content can be located accurately from the first input, and the first selection box is generated automatically from the position information of the target content. The first selection box therefore fits the target content closely, and once the user performs the first input on the target interface, the electronic device can directly display a first selection box that fits the target content.
Compared with the related-art approach, in which the user must drag the selection box manually multiple times, this not only improves how accurately the selection box corresponds to the target content in the target interface, but also spares the user frequent manual adjustment of the box, improving the user experience.
Drawings
FIG. 1 shows the first flowchart of a frame selection annotation method according to an embodiment of the present application;
FIG. 2 shows a schematic diagram of a first input in a target interface according to an embodiment of the present application;
FIG. 3 shows a schematic diagram of a first selection box in a target interface according to an embodiment of the present application;
FIG. 4 shows the second flowchart of a frame selection annotation method according to an embodiment of the present application;
FIG. 5 shows a schematic diagram of a set of labeling areas in a target interface according to an embodiment of the present application;
FIG. 6 shows the third flowchart of a frame selection annotation method according to an embodiment of the present application;
FIG. 7 shows the first schematic diagram of the picture coordinate system in a target picture according to an embodiment of the present application;
FIG. 8 shows the fourth flowchart of a frame selection annotation method according to an embodiment of the present application;
FIG. 9 shows the fifth flowchart of a frame selection annotation method according to an embodiment of the present application;
FIG. 10 shows the second schematic diagram of the picture coordinate system in a target picture according to an embodiment of the present application;
FIG. 11 shows the third schematic diagram of the picture coordinate system in a target picture according to an embodiment of the present application;
FIG. 12 shows the sixth flowchart of a frame selection annotation method according to an embodiment of the present application;
FIG. 13 shows the first schematic diagram of a first selection box and a second selection box according to an embodiment of the present application;
FIG. 14 shows the second schematic diagram of the first selection box and the second selection box according to an embodiment of the present application;
FIG. 15 shows the third schematic diagram of the first selection box and the second selection box according to an embodiment of the present application;
FIG. 16 shows the seventh flowchart of a frame selection annotation method according to an embodiment of the present application;
FIG. 17 shows the first schematic diagram of an electronic device target interface according to an embodiment of the present application;
FIG. 18 shows the second schematic diagram of an electronic device target interface according to an embodiment of the present application;
FIG. 19 shows the first structural block diagram of a frame selection annotation device according to an embodiment of the present application;
FIG. 20 shows the second structural block diagram of a frame selection annotation device according to an embodiment of the present application;
FIG. 21 shows the third structural block diagram of a frame selection annotation device according to an embodiment of the present application;
FIG. 22 shows the fourth structural block diagram of a frame selection annotation device according to an embodiment of the present application;
FIG. 23 shows the fifth structural block diagram of a frame selection annotation device according to an embodiment of the present application;
FIG. 24 shows the first block diagram of an electronic device according to an embodiment of the present application;
FIG. 25 shows the second block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. The described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects and do not necessarily describe a particular order or sequence. It should be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The frame selection annotation method, frame selection annotation device, electronic device, and storage medium provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
In some embodiments of the present application, FIG. 1 shows the first flowchart of a frame selection annotation method according to an embodiment of the present application. Specifically, as shown in FIG. 1, the frame selection annotation method includes the following steps:
Step 102: receiving a first input while a target interface is displayed;
Step 104: in response to the first input, displaying a first selection box.
The position information of the first selection box is associated with the position information of the first input and with the display position, in the target interface, of the target content associated with the first input.
In the embodiments of the application, while the electronic device displays the target interface, the user can operate on the target interface so that the electronic device receives a first input. The first input is a frame selection operation performed by the user on content in the target interface; specifically, it can be a slide, click, or drag operation issued by the user to the electronic device. The electronic device responds to the first input and displays a first selection box accordingly. After receiving the first input, the electronic device determines the position information of the first input, analyzes the content in the target interface, finds the corresponding target content according to the position information of the first input, and associates the position information of the first selection box with the display position of the target content.
Because the first input is a frame selection operation performed by the user on the target content in the target interface, the target content can be located accurately from the first input, and the first selection box is generated automatically from the position information of the target content. The first selection box therefore fits the target content closely, and once the user performs the first input on the target interface, the electronic device can directly display a first selection box that fits the target content.
Compared with the related-art approach, in which the user must drag the selection box manually multiple times, this not only improves how accurately the selection box corresponds to the target content in the target interface, but also spares the user frequent manual adjustment of the box, improving the user experience.
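[Editorial note: to make the geometry concrete, the sketches accompanying this description model both the coverage area of the first input and the labeling areas as axis-aligned rectangles. The Rect type below is an illustrative assumption introduced for these sketches, not a structure defined by the disclosure; Python is used throughout.]

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned rectangle in interface coordinates, origin at the
    top-left corner (matching the coordinate system of FIG. 7)."""
    left: float
    top: float
    right: float
    bottom: float

    @property
    def center_x(self) -> float:
        # The vertical center line (the "second reference line" of a
        # labeling area) lies at this x position.
        return (self.left + self.right) / 2

    @property
    def center_y(self) -> float:
        # The horizontal center line (the "first reference line") lies
        # at this y position.
        return (self.top + self.bottom) / 2
```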
FIG. 2 illustrates the position information of a first input received in a target interface according to an embodiment of the application.
In this embodiment, the user performs a frame selection operation on the target interface, for example by inputting a slide gesture in the target interface of FIG. 2.
FIG. 3 illustrates a first selection box in a target interface according to an embodiment of the present application.
In this embodiment, after the first input is received on the target interface, a first selection box is displayed in the target interface, and the first selection box is associated with the target content in the target interface.
The frame in FIG. 2 represents the position information of the first input received while the target interface is displayed, that is, the position traced by the user's click, drag, or slide operation, while the frame in FIG. 3 is the first selection box.
As shown in FIG. 4, in the foregoing embodiment, the step of displaying the first selection box in response to the first input specifically includes:
Step 202: identifying character information in the target interface to determine the labeling areas in the target interface;
Step 204: determining target position information according to the position information of the first input, and screening a target labeling area from the labeling areas according to the target position information;
Step 206: determining the position information of the first selection box according to the target labeling area, and displaying the first selection box according to the position information of the first selection box.
The labeling areas correspond one-to-one to the characters in the character information.
In an embodiment of the application, after the electronic device receives the first input, the character information in the target interface is identified. The character information in the target picture is recognized through optical character recognition (OCR), and the position of each piece of character information in the target interface is converted into a rectangular color block, with each single character converted into its own rectangular block; all text in the target picture is converted into rectangular blocks in this way, yielding all the labeling areas in the target interface. The position information of the first input is then acquired. Because the first input is the operation the user performs on the target content to be selected in the target interface, the target position information can be determined from the position information of the first input; specifically, the position information of the first input can be used directly as the target position information, and the target labeling area is screened from the labeling areas of the target interface according to it. Finally, the position information of the first selection box is determined from the target labeling area, and the corresponding first selection box is generated and displayed.
As shown in FIG. 5, the character information in the target interface is identified by optical character recognition, each recognized character is marked with a rectangular color block, and all labeling areas in the target interface are thereby determined.
The character information in the target interface is identified, and the target labeling area is screened out according to the position information of the first input. Because the labeling areas in the target interface correspond one-to-one to the characters in the character information, the position information of the first selection box determined from the target labeling area, and hence the first selection box generated and displayed from it, fits the character information more closely.
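[Editorial note: a minimal sketch of this step follows, building one labeling area per recognized character. The disclosure only requires optical character recognition in general; pytesseract is an assumed stand-in here, and the Rect type comes from the earlier sketch.]

```python
from PIL import Image
import pytesseract  # assumed OCR engine; any per-character OCR would do

def recognize_labeling_areas(image_path: str) -> list[Rect]:
    """Convert each recognized character into one rectangular
    labeling area, as described above."""
    img = Image.open(image_path)
    areas = []
    # image_to_boxes yields one line per character:
    # "<char> <x1> <y1> <x2> <y2> <page>", with the origin at the
    # bottom-left of the image.
    for line in pytesseract.image_to_boxes(img).splitlines():
        parts = line.split()
        if len(parts) < 6:
            continue
        x1, y1, x2, y2 = map(float, parts[1:5])
        # Flip y to the top-left-origin interface coordinate system
        # used throughout these sketches.
        areas.append(Rect(left=x1, top=img.height - y2,
                          right=x2, bottom=img.height - y1))
    return areas
```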
In some embodiments, the first input is a drag instruction: the user performs a drag operation on the target interface, and the start point and end point of the drag track are recorded to obtain the position information of the first input. That is, the position information of the first input comprises the positions between the start point and the end point of the drag track.
In other embodiments, the first input is a click instruction: the user clicks to select content in the target interface, and the clicked position is recorded to obtain the position information of the first input. That is, the position information of the first input comprises the position the user clicked on the target interface.
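[Editorial note: a sketch of normalizing either kind of first input into a rectangular coverage area, reusing the Rect type from earlier. The click tolerance is an assumed value introduced for illustration, since a single click point has no area of its own.]

```python
def input_to_rect(start: tuple[float, float],
                  end: tuple[float, float] | None = None,
                  click_tolerance: float = 20.0) -> Rect:
    """Drag: start and end are opposite corners of the track.
    Click: end is None and the point is padded by a tolerance."""
    if end is None:
        x, y = start
        return Rect(x - click_tolerance, y - click_tolerance,
                    x + click_tolerance, y + click_tolerance)
    (x1, y1), (x2, y2) = start, end
    return Rect(min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
```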
As shown in FIG. 6, the step of determining target position information according to the position information of the first input and screening the target labeling area from the labeling areas according to the target position information specifically includes:
Step 302: configuring a coordinate system in the displayed target interface;
Step 304: determining a first coordinate point set according to the position information of the first input, where the first coordinate point set is the target position information;
Step 306: acquiring the coordinate sets of the labeling areas, and screening a second coordinate point set from the coordinate sets according to the first coordinate point set to determine the target labeling area.
In an embodiment of the present application, a coordinate system is configured in the displayed target interface, and a first coordinate point set is determined in this coordinate system from the position information of the first input; the first coordinate point set serves as the base points of the first input's position. Specifically, when the position information of the first input is a rectangular track, the first coordinate point set is the set of coordinates of the four vertices of that rectangle. Because the labeling areas correspond one-to-one to the character information in the target interface, the coordinate set of each labeling area is determined, the coordinate sets are screened according to the first coordinate point set, and the second coordinate point set closest to the first coordinate point set is found; the second coordinate point set contains the same number of coordinate points as the first. The target labeling area among the labeling areas of the target interface can then be determined from the second coordinate point set.
The coordinate points of each labeling area in the target interface are obtained first, yielding the coordinate sets of the labeling areas. The four coordinate points of the first input's track, that is, the coordinates of its four vertices, are then obtained as the first coordinate point set, which determines the position and coverage of the first input within the target interface; the second coordinate point set is screened from the coordinate sets using these four points, thereby determining the target labeling area. Configuring a coordinate system in the target interface and looking up the target labeling area from the first coordinate point set of the first input improves the accuracy of the lookup, and in turn the accuracy of the resulting first selection box.
In some embodiments, the coordinate system of the target interface is pre-stored in a local storage area; when the target interface is displayed, the stored coordinate system is called directly to configure the coordinate system in the target interface.
In other embodiments, the coordinate system of the target interface is established by the electronic device according to parameters such as the size of the target interface.
As shown in FIG. 7, a coordinate system is established in the target interface. Its origin is placed at the upper-left corner of the target interface, and the axes extend in the directions in which the character information is arranged; that is, the abscissa and ordinate axes of the coordinate system are kept parallel to the rectangular color blocks converted from the character information.
Specifically, after the coordinate system is configured on the target interface, reference lines are marked on the labeling areas. Because each labeling area is a converted rectangular color block, it is marked with a first reference line and a second reference line: the first reference line is the extension of the horizontal center line of the labeling area, and the second reference line is the extension of its vertical center line. A first distance between each first reference line and the abscissa axis of the picture coordinate system and a second distance between each second reference line and the ordinate axis are acquired, and the coordinates of each labeling area are determined from the lengths of its four sides together with the first and second distances. The coordinates of the labeling areas can thus be acquired quickly.
The coordinate system established in the target picture of FIG. 7 is exemplary; the manner of establishing the coordinate system is not limited, and the picture coordinate system can be set reasonably according to actual needs.
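[Editorial note: under the assumptions above, recovering a labeling area's four edge coordinates from its reference-line distances and side lengths is straightforward. This sketch takes the first distance d1, the second distance d2, and the block's width and height; the function name is hypothetical.]

```python
def area_from_reference_lines(d1: float, d2: float,
                              w: float, h: float) -> Rect:
    """d1: distance from the first (horizontal) reference line to the
    abscissa axis; d2: distance from the second (vertical) reference
    line to the ordinate axis; w, h: side lengths of the block."""
    return Rect(left=d2 - w / 2, top=d1 - h / 2,
                right=d2 + w / 2, bottom=d1 + h / 2)
```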
As shown in FIG. 8, the step of determining the position information of the first selection box according to the target labeling area specifically includes:
Step 402: acquiring a preset coordinate adjustment value;
Step 404: adjusting the second coordinate point set according to the preset coordinate adjustment value to obtain a third coordinate point set;
Step 406: displaying the first selection box according to the third coordinate point set.
In the embodiments of the application, because the labeling areas are all converted from the character information in the target interface, a preset coordinate adjustment value is acquired to prevent the first selection box from occluding the characters. The preset coordinate adjustment value is smaller than the spacing between adjacent pieces of target character information in the target interface.
For example, if the second coordinate point set comprises the four vertices of the rectangular target labeling area, the coordinate values of the four sides of the rectangle are determined from these four vertex coordinates, and the coordinate values of the four sides of the first selection box, that is, the third coordinate point set, are calculated from the preset coordinate adjustment value and the side coordinates of the target labeling area. The first selection box is then generated and displayed according to the third coordinate point set.
Adjusting the coordinate points of the second coordinate point set, which corresponds to the target labeling area, by the preset coordinate adjustment value to obtain the third coordinate point set ensures that the first selection box still fits the character information closely while avoiding the occlusion that would result from overlapping it.
When calculating the third coordinate point set from the second coordinate point set, it must be ensured that the generated first selection box lies outside the target labeling area.
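[Editorial note: a sketch of this adjustment, with the preset coordinate adjustment value as a placeholder constant; the disclosure only constrains it to be smaller than the spacing between adjacent characters, so 4.0 is an assumed value.]

```python
PRESET_ADJUSTMENT = 4.0  # assumed value; must stay below the spacing
                         # between adjacent pieces of character info

def fit_selection_box(target_area: Rect,
                      pad: float = PRESET_ADJUSTMENT) -> Rect:
    """Move every edge of the target labeling area outward so the
    first selection box lies outside the area and does not occlude
    the characters."""
    return Rect(target_area.left - pad, target_area.top - pad,
                target_area.right + pad, target_area.bottom + pad)
```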
As shown in FIG. 9, the step of screening the second coordinate point set from the coordinate sets according to the first coordinate point set specifically includes:
Step 502: calculating the distance values between the first coordinate point set and each coordinate in the coordinate sets;
Step 504: determining the target coordinates in the coordinate sets according to the magnitudes of the distance values between the first coordinate point set and each coordinate in the coordinate sets.
In this embodiment, the four edges of the coverage area of the first input's track can be determined from the first coordinate point set, and the distance value between each of the four edges and each coordinate is determined by calculating the distance between that edge and the reference line of each labeling area in the coordinate sets. The target coordinates are determined as follows:
As shown in FIG. 10, the distance from the upper edge of the rectangular coverage area to the abscissa axis of the coordinate system is obtained, along with the distance from the first reference line of each labeling area to the abscissa axis; together these determine the distance value from the upper edge to each coordinate.
The distance from the lower edge of the rectangular coverage area to the abscissa axis is then obtained, along with the distance from the first reference line of each labeling area to the abscissa axis, determining the distance value from the lower edge to each coordinate.
As shown in FIG. 11, the distance from the left edge of the rectangular coverage area to the ordinate axis is obtained, along with the distance from the second reference line of each labeling area to the ordinate axis, determining the distance value from the left edge to each coordinate.
Finally, the distance from the right edge of the rectangular coverage area to the ordinate axis is obtained, along with the distance from the second reference line of each labeling area to the ordinate axis, determining the distance value from the right edge to each coordinate.
The target coordinates can be determined in this way; they comprise the coordinate values of the four edges, and the target labeling area can be found from these four coordinate values.
As before, the first reference line is the extension of the horizontal center line of each labeling area, and the second reference line is the extension of its vertical center line.
It should be understood that "upper", "lower", "left", and "right" in the above embodiments are relative to the coordinate systems in FIG. 10 and FIG. 11.
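[Editorial note: the four-edge screening above can be sketched as follows. This is one plausible reading of the step: each edge of the coverage area picks the labeling area whose reference line lies closest, and that area's outer boundary supplies the corresponding target coordinate. The function name is hypothetical.]

```python
def match_target_region(input_rect: Rect, areas: list[Rect]) -> Rect:
    """Snap the four edges of the first input's coverage area to the
    closest labeling areas, yielding the target coordinates.
    Assumes at least one labeling area exists."""
    # Top and bottom edges compare against first (horizontal) reference
    # lines; left and right edges against second (vertical) ones.
    top_a = min(areas, key=lambda a: abs(a.center_y - input_rect.top))
    bot_a = min(areas, key=lambda a: abs(a.center_y - input_rect.bottom))
    left_a = min(areas, key=lambda a: abs(a.center_x - input_rect.left))
    right_a = min(areas, key=lambda a: abs(a.center_x - input_rect.right))
    return Rect(left=left_a.left, top=top_a.top,
                right=right_a.right, bottom=bot_a.bottom)
```

The result of this screening can then be padded by `fit_selection_box` from the previous sketch to produce the displayed first selection box.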
As shown in FIG. 12, before the step of identifying the character information in the target interface to determine the labeling areas, the method further includes:
Step 602: displaying a second selection box and a selection-box adjustment prompt according to the position information of the first input;
Step 604: receiving a second input corresponding to the selection-box adjustment prompt, and executing, according to the second input, the step of identifying the character information in the target interface to determine the labeling areas.
In this embodiment, after the electronic device receives the first input, a second selection box is generated and displayed according to the position information of the first input, and a selection-box adjustment prompt is displayed in the display area near the second selection box. The second selection box shows the user's first input on the target interface, making it easy for the user to check whether the selected position is accurate. The prompt offers two options: confirm adjustment and cancel adjustment. When the user taps confirm, the electronic device receives a second input and continues with the step of identifying the character information in the target interface until the first selection box is generated and displayed. When the user taps cancel, the user can update the position of the second selection box by dragging its vertices, and the adjustment prompt is displayed again each time the position information of the second selection box is updated.
As shown in FIG. 13, FIG. 14, and FIG. 15, the first selection box fits the target annotation content better than the second selection box, so a well-fitted first selection box is obtained without the user repeatedly adjusting the second selection box by hand, which reduces the user's operation steps and improves the user experience.
As shown in FIG. 16, the frame selection annotation method further includes:
Step 702: receiving a third input while the target interface displays the first selection box;
Step 704: in response to the third input, adjusting the first selection box to obtain updated target position information, and returning to the step of screening the target labeling area from the labeling areas according to the target position information.
In this embodiment, while the first selection box is displayed, the user can drag the vertices and/or edges of the first selection box to update it, yielding updated target position information. The method then returns, with the updated target position information, to the step of screening the target labeling area from the labeling areas according to the target position information, and the first selection box is obtained anew.
After the electronic device generates the first selection box from the user's first input, the user can thus adjust it again as needed, causing the first selection box to be regenerated; the user can adjust the first selection box at any time.
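[Editorial note: reusing the earlier sketches, handling the third input is just a second pass through the same screening and fitting steps with the updated rectangle; the function name is hypothetical.]

```python
def on_third_input(updated_rect: Rect, areas: list[Rect]) -> Rect:
    """Re-screen and re-fit after the user drags the first box."""
    return fit_selection_box(match_target_region(updated_rect, areas))
```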
Before the step of receiving the first input while the target interface is displayed, the method further includes: acquiring and displaying the target interface in response to a fourth input from the user.
In this embodiment, the electronic device responds to the user's fourth input by acquiring the interface in its current display state as the target interface.
The fourth input may be a screenshot instruction. The electronic device receives the screenshot instruction from the user and captures the currently displayed interface to obtain the target picture. The captured target picture is displayed in a floating window together with an instruction bar for framing it; the user performs the first input on the target interface by tapping the framing instruction in the instruction bar, that is, by framing content in the target picture to obtain a first picture selection box. The screenshot and the framing of the captured picture are completed through these steps. After annotating with the first picture selection box, the user taps the confirm button to generate a target picture carrying the first picture selection box, or taps the cancel button to delete the first picture selection box from the target picture.
Acquiring the currently displayed target interface directly through a screenshot makes it convenient to carry out the subsequent steps of selecting content in the target interface and adjusting the selection box.
FIG. 17 shows the operation interface for sending a screenshot instruction to the electronic device, and FIG. 18 shows the operation interface for sending a frame selection instruction to the electronic device.
It should be noted that the execution body of the frame selection annotation method provided in the embodiments of the present application may be a frame selection annotation device, or a control module in the frame selection annotation device for executing the method. In the embodiments of the present application, a frame selection annotation device executing the frame selection annotation method is taken as an example to describe the device provided herein.
FIG. 19 shows a structural block diagram of a frame selection annotation device 800 according to an embodiment of the present application. As shown in FIG. 19, the frame selection annotation device 800 includes:
a receiving unit 810 configured to receive a first input while a target interface is displayed; and
a display unit 820 configured to display a first selection box in response to the first input.
The position information of the first selection box is associated with the position information of the first input and with the display position, in the target interface, of the target content associated with the first input.
In the embodiments of the application, while the electronic device displays the target interface, the user can operate on the target interface so that the electronic device receives a first input. The first input is a frame selection operation performed by the user on content in the target interface; specifically, it can be a slide, click, or drag operation issued by the user to the electronic device. The electronic device responds to the first input and displays a first selection box accordingly. After receiving the first input, the electronic device determines the position information of the first input, analyzes the content in the target interface, finds the corresponding target content according to the position information of the first input, and associates the position information of the first selection box with the display position of the target content.
Because the first input is a frame selection operation performed by the user on the target content in the target interface, the target content can be located accurately from the first input, and the first selection box is generated automatically from the position information of the target content. The first selection box therefore fits the target content closely, and once the user performs the first input on the target interface, the electronic device can directly display a first selection box that fits the target content.
Compared with the related-art approach, in which the user must drag the selection box manually multiple times, this not only improves how accurately the selection box corresponds to the target content in the target interface, but also spares the user frequent manual adjustment of the box, improving the user experience.
FIG. 2 illustrates the position information of a first input received in a target interface according to an embodiment of the application.
In this embodiment, the user performs a frame selection operation on the target interface, for example by inputting a slide gesture in the target interface of FIG. 2.
FIG. 3 illustrates a first selection box in a target interface according to an embodiment of the present application.
In this embodiment, after the first input is received on the target interface, a first selection box is displayed in the target interface, and the first selection box is associated with the target content in the target interface.
The frame in FIG. 2 represents the position information of the first input received while the target interface is displayed, that is, the position traced by the user's click, drag, or slide operation, while the frame in FIG. 3 is the first selection box.
FIG. 20 shows a structural block diagram of the display unit 820 according to an embodiment of the present application. As shown in FIG. 20, the display unit 820 specifically includes:
an identifying unit 822 configured to identify character information in the target interface to determine the labeling areas in the target interface;
a screening unit 824 configured to determine target position information according to the position information of the first input, and to screen a target labeling area from the labeling areas according to the target position information; and
an output unit 826 configured to determine the position information of the first selection box according to the target labeling area, and to display the first selection box according to that position information.
The labeling areas correspond one-to-one to the characters in the character information.
In an embodiment of the application, after the electronic device receives the first input, the character information in the target interface is identified. The character information in the target picture is recognized through optical character recognition (OCR), and the position of each piece of character information in the target interface is converted into a rectangular color block, with each single character converted into its own rectangular block; all text in the target picture is converted into rectangular blocks in this way, yielding all the labeling areas in the target interface. The position information of the first input is then acquired. Because the first input is the operation the user performs on the target content to be selected in the target interface, the target position information can be determined from the position information of the first input; specifically, the position information of the first input can be used directly as the target position information, and the target labeling area is screened from the labeling areas of the target interface according to it. Finally, the position information of the first selection box is determined from the target labeling area, and the corresponding first selection box is generated and displayed.
FIG. 21 shows a structural block diagram of the screening unit 824 according to an embodiment of the present application. As shown in FIG. 21, the screening unit 824 specifically includes:
a configuration unit 8242 configured to configure a coordinate system in the displayed target interface;
a determining unit 8244 configured to determine a first coordinate point set according to the position information of the first input, where the first coordinate point set is the target position information; and
a searching unit 8246 configured to acquire the coordinate sets of the labeling areas, and to screen a second coordinate point set from the coordinate sets according to the first coordinate point set to determine the target labeling area.
In an embodiment of the present application, a coordinate system is configured in the displayed target interface, and a first coordinate point set is determined in this coordinate system from the position information of the first input; the first coordinate point set serves as the base points of the first input's position. Specifically, when the position information of the first input is a rectangular track, the first coordinate point set is the set of coordinates of the four vertices of that rectangle. Because the labeling areas correspond one-to-one to the character information in the target interface, the coordinate set of each labeling area is determined, the coordinate sets are screened according to the first coordinate point set, and the second coordinate point set closest to the first coordinate point set is found; the second coordinate point set contains the same number of coordinate points as the first. The target labeling area among the labeling areas of the target interface can then be determined from the second coordinate point set.
The coordinate points of each labeling area in the target interface are obtained first, yielding the coordinate sets of the labeling areas. The four coordinate points of the first input's track, that is, the coordinates of its four vertices, are then obtained as the first coordinate point set, which determines the position and coverage of the first input within the target interface; the second coordinate point set is screened from the coordinate sets using these four points, thereby determining the target labeling area. Configuring a coordinate system in the target interface and looking up the target labeling area from the first coordinate point set of the first input improves the accuracy of the lookup, and in turn the accuracy of the resulting first selection box.
In some embodiments, the coordinate system of the target interface is pre-stored in a local storage area; when the target interface is displayed, the stored coordinate system is called directly to configure the coordinate system in the target interface.
In other embodiments, the coordinate system of the target interface is established by the electronic device according to parameters such as the size of the target interface.
As shown in FIG. 7, a coordinate system is established in the target interface. Its origin is placed at the upper-left corner of the target interface, and the axes extend in the directions in which the character information is arranged; that is, the abscissa and ordinate axes of the coordinate system are kept parallel to the rectangular color blocks converted from the character information.
Specifically, after the coordinate system is configured on the target interface, reference lines are marked on the labeling areas. Because each labeling area is a converted rectangular color block, it is marked with a first reference line and a second reference line: the first reference line is the extension of the horizontal center line of the labeling area, and the second reference line is the extension of its vertical center line. A first distance between each first reference line and the abscissa axis of the picture coordinate system and a second distance between each second reference line and the ordinate axis are acquired, and the coordinates of each labeling area are determined from the lengths of its four sides together with the first and second distances. The coordinates of the labeling areas can thus be acquired quickly.
The coordinate system established in the target picture of FIG. 7 is exemplary; the manner of establishing the coordinate system is not limited, and the picture coordinate system can be set reasonably according to actual needs.
FIG. 22 shows a structural block diagram of the output unit 826 according to an embodiment of the present application. As shown in FIG. 22, the output unit 826 specifically includes:
an obtaining unit 8262 configured to acquire a preset coordinate adjustment value;
an adjusting unit 8264 configured to adjust the second coordinate point set according to the preset coordinate adjustment value to obtain a third coordinate point set; and
a generating unit 8266 configured to display the first selection box according to the third coordinate point set.
In the embodiments of the application, because the labeling areas are all converted from the character information in the target interface, a preset coordinate adjustment value is acquired to prevent the first selection box from occluding the characters. The preset coordinate adjustment value is smaller than the spacing between adjacent pieces of target character information in the target interface.
For example, if the second coordinate point set comprises the four vertices of the rectangular target labeling area, the coordinate values of the four sides of the rectangle are determined from these four vertex coordinates, and the coordinate values of the four sides of the first selection box, that is, the third coordinate point set, are calculated from the preset coordinate adjustment value and the side coordinates of the target labeling area. The first selection box is then generated and displayed according to the third coordinate point set.
Adjusting the coordinate points of the second coordinate point set, which corresponds to the target labeling area, by the preset coordinate adjustment value to obtain the third coordinate point set ensures that the first selection box still fits the character information closely while avoiding the occlusion that would result from overlapping it.
When calculating the third coordinate point set from the second coordinate point set, it must be ensured that the generated first selection box lies outside the target labeling area.
FIG. 23 shows a structural block diagram of the searching unit 8246 according to an embodiment of the application. As shown in FIG. 23, the searching unit 8246 specifically includes:
a calculating unit 82462 configured to calculate the distance values between the first coordinate point set and each coordinate in the coordinate sets; and
a comparing unit 82464 configured to determine the target coordinates in the coordinate sets according to the magnitudes of the distance values between the first coordinate point set and each coordinate in the coordinate sets.
In this embodiment, the four edges of the coverage area of the first input's track can be determined from the first coordinate point set, and the distance value between each of the four edges and each coordinate is determined by calculating the distance between that edge and the reference line of each labeling area in the coordinate sets. The target coordinates are determined as follows:
As shown in FIG. 10, the distance from the upper edge of the rectangular coverage area to the abscissa axis of the coordinate system is obtained, along with the distance from the first reference line of each labeling area to the abscissa axis; together these determine the distance value from the upper edge to each coordinate.
The distance from the lower edge of the rectangular coverage area to the abscissa axis is then obtained, along with the distance from the first reference line of each labeling area to the abscissa axis, determining the distance value from the lower edge to each coordinate.
As shown in FIG. 11, the distance from the left edge of the rectangular coverage area to the ordinate axis is obtained, along with the distance from the second reference line of each labeling area to the ordinate axis, determining the distance value from the left edge to each coordinate.
Finally, the distance from the right edge of the rectangular coverage area to the ordinate axis is obtained, along with the distance from the second reference line of each labeling area to the ordinate axis, determining the distance value from the right edge to each coordinate.
The target coordinates can be determined in this way; they comprise the coordinate values of the four edges, and the target labeling area can be found from these four coordinate values.
As before, the first reference line is the extension of the horizontal center line of each labeling area, and the second reference line is the extension of its vertical center line.
It should be understood that "upper", "lower", "left", and "right" in the above embodiments are relative to the coordinate systems in FIG. 10 and FIG. 11.
The display unit 820 is further configured to display a second selection box and a selection-box adjustment prompt according to the position information of the first input, and to receive a second input corresponding to the selection-box adjustment prompt and execute, according to the second input, the step of identifying the character information in the target interface to determine the labeling areas.
In this embodiment, after the electronic device receives the first input, a second selection box is generated and displayed according to the position information of the first input, and a selection-box adjustment prompt is displayed in the display area near the second selection box. The second selection box shows the user's first input on the target interface, making it easy for the user to check whether the selected position is accurate. The prompt offers two options: confirm adjustment and cancel adjustment. When the user taps confirm, the electronic device receives a second input and continues with the step of identifying the character information in the target interface until the first selection box is generated and displayed. When the user taps cancel, the user can update the position of the second selection box by dragging its vertices, and the adjustment prompt is displayed again each time the position information of the second selection box is updated.
The receiving unit 810 is further configured to receive a third input in the case that the target interface displays the first selection frame;
and the filtering unit 824 is further configured to adjust the first selection frame in response to the third input to obtain updated target position information, and to return to the step of screening the target labeling area in the labeling areas according to the target position information.
In this embodiment of the application, while the first selection frame is displayed, the user can drag the vertices and/or edges of the first selection frame to update it, thereby obtaining updated target position information. The step of screening the target labeling area in the labeling areas according to the target position information is then executed again with the updated target position information, and the first selection frame is obtained anew.
After the electronic device generates the first selection frame according to the user's first input, the user can thus readjust it at any time according to actual needs, causing the first selection frame to be regenerated.
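Continuing the earlier hypothetical sketch, each third input can be modeled as a drag that updates the rectangle and re-runs the screening step; `Rect` and `snap_selection` are the assumed helpers from the sketch following fig. 11.

```python
def adjust_loop(frame: Rect, areas: list[Rect], drags: list[Rect]) -> Rect:
    """Apply a sequence of third inputs (vertex/edge drags) to the frame.

    After each drag, the updated target position information is fed back
    through the same screening step that produced the original frame, so
    the redisplayed first selection frame always snaps to labeling areas.
    """
    for dragged in drags:                       # each drag is one third input
        frame = snap_selection(dragged, areas)  # re-screen and redisplay
    return frame
```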
The frame selection annotation apparatus 800 further includes:
the display unit 820 is further configured to acquire and display a target interface in response to a fourth input by the user.
In this embodiment of the application, the electronic device responds to a fourth input of the user by acquiring the interface currently displayed by the electronic device as the target interface.
Optionally, the fourth input is a screenshot instruction. The electronic device receives the screenshot instruction sent by the user and performs a screenshot operation on the currently displayed interface to obtain a target picture. The target picture is displayed in a floating window, together with an instruction bar for frame-selecting the target picture. By tapping the frame-selection instruction in the instruction bar, the user can perform the first input on the target interface, that is, frame-select the target picture to obtain a first picture selection frame; the screenshot and the frame selection of the resulting target picture are thus completed in these steps. After labeling with the first picture selection frame, the user taps the confirm button to generate a target picture carrying the first picture selection frame, or taps the cancel button to delete the first picture selection frame from the target picture.
The interface currently displayed by the electronic device can thus be obtained directly in screenshot form, which facilitates the subsequent steps of frame-selecting the target interface and adjusting the selection frame.
Fig. 18 shows an operation interface for sending a screenshot command to the electronic device, and fig. 19 shows an operation interface for sending a frame selection command to the electronic device.
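As a loose illustration of the screenshot-then-frame-select flow, the following hypothetical Python sketch models the session state; the class and method names are invented for illustration and do not reflect the actual device software.

```python
from dataclasses import dataclass, field

@dataclass
class ScreenshotSession:
    """Hypothetical model of the screenshot and frame-selection session."""
    target_picture: bytes = b""
    frames: list = field(default_factory=list)

    def take_screenshot(self, current_interface: bytes) -> None:
        # Fourth input: capture the currently displayed interface.
        self.target_picture = current_interface

    def frame_select(self, rect) -> None:
        # User taps the frame-selection command in the instruction bar and
        # draws a first picture selection frame on the floating preview.
        self.frames.append(rect)

    def confirm(self):
        # "Determine" keeps the frames on the target picture.
        return (self.target_picture, list(self.frames))

    def cancel(self) -> bytes:
        # "Cancel" deletes the selection frames from the target picture.
        self.frames.clear()
        return self.target_picture

session = ScreenshotSession()
session.take_screenshot(b"<pixels>")
session.frame_select((40, 40, 200, 90))
picture, frames = session.confirm()
print(frames)  # [(40, 40, 200, 90)]
```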
The frame selection annotation apparatus in the embodiments of the application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not particularly limited.
The frame selection annotation apparatus in the embodiments of the application may be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited.
Optionally, as shown in fig. 24, an electronic device 900 is further provided in this embodiment of the present application, and includes a processor 902, a memory 904, and a program or an instruction stored in the memory 904 and executable on the processor 902, where the program or the instruction is executed by the processor 902 to implement each process of the foregoing frame selection and annotation method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above. Fig. 25 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
As shown in fig. 25, the electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
The user input unit 1007, which is the touch panel 10071 in this embodiment, receives a first input when the target interface is displayed.
The user input unit 1007 also includes other input devices 10072.
The display unit 1006, which is a display screen in this embodiment, is configured to display a first selection frame in response to a first input.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 25 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine some components, or use a different arrangement of components, which is not described again here.
In one embodiment, the memory 1009 stores applications and operating systems, the input unit 1004 may include a graphics processor 10041 and a microphone 10042, and the display unit 1006 includes a display panel 10061.
In one embodiment, processor 1010 is configured to: identifying character information in the target interface to determine a labeling area in the target interface; determining target position information according to the first input position information, and screening a target labeling area in the labeling area according to the target position information; determining the position information of a first selection frame according to the target labeling area, and displaying the first selection frame according to the position information of the first selection frame;
and the labeling area corresponds to each character in the character information one by one.
In one embodiment, the processor 1010 is further configured to: configuring a coordinate system in the displayed target interface; determining a first coordinate point set according to the position information of the first input, wherein the first coordinate point set is the target position information; and acquiring a coordinate set of the labeling areas, and screening a second coordinate point set in the coordinate set according to the first coordinate point set to determine the target labeling area.
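One hypothetical reading of this step is that the first input (for example, a drag gesture) yields a set of touch points whose bounding box serves as the target position information. The sketch below, reusing the assumed `Rect` type from the earlier sketch, illustrates only that assumption.

```python
def bounding_rect(points: list[tuple[float, float]]) -> Rect:
    """Collapse the first coordinate point set into target position info.

    Assumes the first input traces a set of points (e.g. a drag path);
    their bounding box is used as the rectangular coverage area that the
    screening step compares against each labeling area's coordinates.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return Rect(left=min(xs), top=min(ys), right=max(xs), bottom=max(ys))

# e.g. a drag from (14, 18) to (96, 42), sampled at a few points:
print(bounding_rect([(14, 18), (40, 30), (96, 42)]))
# -> Rect(left=14, top=18, right=96, bottom=42)
```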
In one embodiment, the processor 1010 is further configured to: acquiring a preset coordinate adjustment value; adjusting the second coordinate according to the preset coordinate adjustment value to obtain a third coordinate; and displaying the first selection frame through the third coordinate.
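The preset coordinate adjustment value can plausibly be read as a margin applied around the matched coordinates, so the displayed frame does not sit flush against the characters. The sketch below illustrates that reading only; the value 4.0, the helper name, and a y-axis growing downward are all assumptions.

```python
def pad_frame(frame: Rect, adjustment: float = 4.0) -> Rect:
    """Adjust the second coordinate by a preset value to get the third.

    Assumes screen coordinates with y growing downward, so expanding the
    frame moves the top edge up (smaller y) and the bottom edge down.
    """
    return Rect(left=frame.left - adjustment, top=frame.top - adjustment,
                right=frame.right + adjustment, bottom=frame.bottom + adjustment)

print(pad_frame(Rect(10, 10, 60, 24)))
# -> Rect(left=6.0, top=6.0, right=64.0, bottom=28.0)
```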
In one embodiment, the processor 1010 is further configured to: calculating a distance value between the first coordinate and each coordinate in the coordinate set; and determining the target coordinate in the coordinate set according to the magnitude relation among the distance values between the first coordinate and each coordinate in the coordinate set.
In one embodiment, the processor 1010 is further configured to: displaying a second selection frame and selection frame adjustment prompt information according to the first input position information; and receiving a second input corresponding to the selection frame adjustment prompt information, and executing the step of identifying the character information in the target interface according to the second input so as to determine the labeling area in the target interface.
In one embodiment, the processor 1010 is further configured to: receiving a third input under the condition that the first selection frame is displayed on the target interface; and responding to the third input, adjusting the first selection frame to obtain updated target position information, and returning to execute the step of screening the target labeling area in the labeling area according to the target position information.
In one embodiment, the processor 1010 is further configured to: responding to a fourth input of the user, acquiring and displaying the target interface.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the foregoing frame selection and annotation method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. The readable storage medium includes computer-readable storage media, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above-mentioned frame selection and annotation method embodiment, and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a chip system, or a system-on-a-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. A frame selection annotation method, characterized by comprising the following steps:
receiving a first input in the case of displaying a target interface;
displaying a first selection frame in response to the first input;
wherein the position information of the first selection box is associated with the position information of the first input and the display position of the target content in the target interface associated with the first input.
2. The frame selection annotation method according to claim 1, wherein the step of displaying a first selection frame in response to the first input specifically comprises:
identifying character information in the target interface to determine a labeling area in the target interface;
determining target position information according to the position information of the first input, and screening a target labeling area in the labeling areas according to the target position information;
determining the position information of the first selection frame according to the target labeling area, and displaying the first selection frame according to the position information of the first selection frame;
and the labeling area corresponds to each character in the character information one by one.
3. The frame selection annotation method according to claim 2, wherein the step of determining target position information according to the position information of the first input and screening the target labeling area in the labeling areas according to the target position information specifically comprises:
configuring a coordinate system in the displayed target interface;
determining a first coordinate point set according to the position information of the first input, wherein the first coordinate point set is the target position information;
and acquiring a coordinate set of the labeling area, and screening a second coordinate point set in the coordinate set according to the first coordinate point set to determine the target labeling area.
4. The frame selection annotation method according to claim 3, wherein the step of determining the position information of the first selection frame according to the target labeling area specifically comprises:
acquiring a preset coordinate adjustment value;
adjusting the second coordinate according to the preset coordinate adjustment value to obtain a third coordinate;
and displaying the first selection frame through the third coordinate.
5. The frame selection annotation method according to claim 3, wherein the step of screening the second coordinate in the coordinate set according to the first coordinate specifically comprises:
calculating a distance value between the first coordinate and each coordinate in the coordinate set;
and determining the target coordinate in the coordinate set according to the magnitude relation among the distance values between the first coordinate and each coordinate in the coordinate set.
6. The frame selection annotation method according to claim 2, wherein before the step of identifying the character information in the target interface to determine the labeling area in the target interface, the method further comprises:
displaying a second selection frame and selection frame adjustment prompt information according to the position information of the first input;
and receiving a second input corresponding to the selection frame adjustment prompt information, and executing the step of identifying the character information in the target interface according to the second input so as to determine the labeling area in the target interface.
7. The frame selection annotation method according to any one of claims 2 to 6, further comprising:
receiving a third input under the condition that the first selection frame is displayed on the target interface;
and responding to the third input, adjusting the first selection frame to obtain the updated target position information, and returning to execute the step of screening the target labeling area in the labeling area according to the target position information.
8. The frame selection annotation method according to any one of claims 1 to 6, wherein before the step of receiving a first input in the case of displaying a target interface, the method further comprises:
responding to a fourth input of the user, and acquiring and displaying the target interface.
9. A frame selection annotation device, comprising:
the receiving unit is used for receiving a first input under the condition that a target interface is displayed;
a display unit for displaying a first selection frame in response to the first input;
wherein the position information of the first selection box is associated with the position information of the first input and the display position of the target content in the target interface associated with the first input.
10. An electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the frame selection annotation method according to any one of claims 1 to 8.
11. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the frame selection annotation method according to any one of claims 1 to 8.
CN202110349584.3A 2021-03-31 2021-03-31 Frame selection annotation method, frame selection annotation device, electronic equipment and storage medium Pending CN112987994A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110349584.3A CN112987994A (en) 2021-03-31 2021-03-31 Frame selection annotation method, frame selection annotation device, electronic equipment and storage medium

Publications (1)

Publication Number: CN112987994A
Publication Date: 2021-06-18

Family

ID=76338648

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150293690A1 (en) * 2014-04-15 2015-10-15 Acer Incorporated Method for user interface display and electronic device using the same
CN111259878A (en) * 2018-11-30 2020-06-09 中移(杭州)信息技术有限公司 Method and equipment for detecting text
CN109683798A (en) * 2018-12-28 2019-04-26 咪咕音乐有限公司 A kind of text determines method, terminal and computer readable storage medium
CN109977949A (en) * 2019-03-20 2019-07-05 深圳市华付信息技术有限公司 Text positioning method, device, computer equipment and the storage medium of frame fine tuning
CN110689010A (en) * 2019-09-27 2020-01-14 支付宝(杭州)信息技术有限公司 Certificate identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210618