CN115826821A - Object labeling method and device, electronic equipment and storage medium

Info

Publication number
CN115826821A
Authority
CN
China
Prior art keywords
interface
input
content
target
displaying
Prior art date
Legal status
Pending
Application number
CN202211513445.0A
Other languages
Chinese (zh)
Inventor
涂兵兵
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202211513445.0A
Publication of CN115826821A

Abstract

The application discloses an object labeling method and apparatus, an electronic device, and a storage medium, belonging to the technical field of electronic devices. The method includes: displaying a first interface; in response to a first input, displaying annotation content for a target object in the first interface; in response to a second input, storing the annotation content in association with the target object, and storing an access address of the first interface to a note application; and displaying the annotation content when the first interface is accessed through the access address of the first interface.

Description

Object labeling method and device, electronic equipment and storage medium
Technical Field
The application belongs to the technical field of electronic equipment, and particularly relates to an object labeling method and device, electronic equipment and a storage medium.
Background
Currently, when a user browses content such as pictures, videos, or news on an electronic device and wants to annotate a piece of that content, for example, to record some ideas it prompted, the user has to invoke a "notepad" application in the electronic device to make the record.
However, to perform this operation, the user needs to first switch the currently used application to the background, find the application icon of the "notepad" application, open it and make the record, and after recording is completed, switch the application corresponding to the content just browsed back to the foreground to continue browsing. As a result, annotating browsed content is a cumbersome operation.
Disclosure of Invention
The embodiments of the present application aim to provide an object labeling method and apparatus, an electronic device, and a storage medium, which can solve the problem that annotating browsed content requires cumbersome operations.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides an object labeling method, where the method includes: displaying a first interface; in response to a first input, displaying annotation content for a target object in the first interface; in response to a second input, storing the annotation content in association with the target object, and storing an access address of the first interface to a note application; and displaying the annotation content when the first interface is accessed through the access address of the first interface.
In a second aspect, an embodiment of the present application provides an object labeling apparatus, where the apparatus includes a display module and a processing module. The display module is configured to display a first interface and, in response to a first input, display annotation content for a target object in the first interface. The processing module is configured to, in response to a second input, store the annotation content in association with the target object, and store an access address of the first interface to a note application; the annotation content is displayed when the first interface is accessed through the access address of the first interface.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiments of the application, the electronic device may display a first interface; in response to a first input, display annotation content for a target object in the first interface; in response to a second input, store the annotation content in association with the target object and store an access address of the first interface to the note application; and display the annotation content when the first interface is accessed through the access address of the first interface. With this scheme, while the electronic device displays the first interface, the user can directly add annotation content to a target object in the first interface and trigger the electronic device to store the added annotation content in association with the target object. When the user later opens the first interface again, the electronic device can directly display the first interface including the annotation content, which simplifies the operation of annotating browsed content.
Drawings
Fig. 1 is a schematic diagram of an object labeling method according to an embodiment of the present application;
Fig. 2(A) is a schematic diagram of an example of displaying a first interface according to an embodiment of the present application;
Fig. 2(B) is a first schematic diagram of an example of displaying annotation content for a target object according to an embodiment of the present application;
Fig. 3 is a first schematic diagram of an example of an input to an access address of a first interface according to an embodiment of the present application;
Fig. 4(A) is a second schematic diagram of an example of an input to an access address of the first interface according to an embodiment of the present application;
Fig. 4(B) is a second schematic diagram of an example of displaying annotation content for a target object according to an embodiment of the present application;
Fig. 5 is a schematic diagram of an example of an input to mark information according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an object labeling apparatus according to an embodiment of the present application;
Fig. 7 is a first schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
Fig. 8 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are generally of one kind, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The object labeling method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The object labeling method in the embodiment of the application can be applied to a labeling scene of interface display content.
For example, when a user browses news through a "news information" APP and some ideas about one of the news items occur to the user that the user wants to record, the user has to switch the "news information" APP to the background, start the "notepad" APP, make the record in a newly created page of the "notepad" APP, and after recording is completed, switch the "news information" APP back to the foreground to continue browsing. The whole process involves cumbersome steps and takes a long time.
In the embodiments of the application, when a user browsing news through a "news information" APP wants to record some ideas about a certain news item, the user may directly add annotation content to that news item in the current interface, that is, input the annotation content in the current interface, and trigger the electronic device to store the added annotation content in association with that news item and to store the access address corresponding to the news browsing interface into a note application. When the user later opens the news browsing interface again, the electronic device can directly display the news browsing interface including the annotation content. The operation of annotating browsed content is thereby simplified.
An embodiment of the present application provides an object labeling method, and fig. 1 shows a flowchart of the object labeling method provided in the embodiment of the present application, where the method can be applied to an electronic device. As shown in fig. 1, an object labeling method provided in an embodiment of the present application may include steps 201 to 205 described below.
Step 201, displaying a first interface.
Optionally, in this embodiment of the application, the user may open an application corresponding to the first interface so that the electronic device displays the first interface.
Illustratively, the electronic device may display the first interface in a preset window.
Illustratively, the first interface may be any one interface in the electronic device, for example: a file management interface, an image management interface, a video playing interface, a friend dynamic interface, a news reading interface, a novel reading interface, and the like.
Step 202, receiving a first input.
Optionally, in this embodiment of the present application, the first input is used to add annotation content to the target object in the first interface, so that after receiving the first input, the electronic device may display the annotation content for the target object in the first interface.
Illustratively, the first input may be an input performed by the user in the first interface.
Illustratively, the first input may be an input by which the user enters the annotation content in the first interface.
Illustratively, the first input may be an input performed by the user in the first interface with a stylus.
Step 203, in response to the first input, displaying the annotation content for the target object in the first interface.
It is understood that the user can input annotation content for the target object in the first interface to trigger the electronic device to display that annotation content.
Optionally, in this embodiment of the application, the target object may be any content in the first interface, for example: pictures, text, etc.
For example, if the first interface is an image management interface, the target object may be a picture; if the first interface is a video playing interface, the target object can be a video frame image; if the first interface is a news reading details interface, for example: and a detail interface of an article, the target object can be the article.
Optionally, in this embodiment of the application, the annotation content may be any content added by the user, for example: characters, numbers, letters, marks, doodles, and the like.
Optionally, in this embodiment of the present application, before the step 202, the object labeling method provided in this embodiment of the present application further includes the following steps 301 and 302.
Step 301, receiving a third input to the first interface.
Optionally, in this embodiment of the application, the third input is used to invoke the transparent cover layer, so that after the electronic device receives the third input, the transparent cover layer may be displayed in an overlapping manner on the first interface.
Illustratively, the third input may be a touch input such as a click input, a drag input, a slide input, or the like, or other feasible input. The embodiment of the present application does not limit this.
Illustratively, the third input may be an input of a Near-field communication (NFC) sensing area of the electronic device by a user when the electronic device displays the first interface.
Illustratively, the third input may be an input of the NFC sensing area by a user through a tip of a stylus pen.
Illustratively, the tip of the stylus may include a signal emitting device for the NFC sensing region to respond to user input.
Optionally, in this embodiment of the present application, when the electronic device displays the first interface, for example, an interface in which a video is playing, if the user performs the third input on the first interface, the electronic device may stop playing the video and display the transparent cover layer over the first interface.
Step 302, in response to the third input to the first interface, displaying a transparent cover layer over the first interface.
It can be understood that, while the electronic device displays the first interface, the user may perform an input on the first interface to trigger the electronic device to display the transparent cover layer over it, so that the user can input annotation content for the target object on the transparent cover layer.
Optionally, in this embodiment of the application, the transparent cover layer is a view container; the user makes annotations on the transparent cover layer without triggering any function of the first interface.
Illustratively, the transparency of the transparent cover layer may be a system default of the electronic device or preset by the user.
For example: the transparency of the transparent cover layer may be zero, i.e. completely transparent.
For example, the user may make a sliding input on the screen to trigger the electronic device to cancel displaying the transparent cover layer.
Optionally, in this embodiment of the application, the electronic device displaying the transparent cover layer over the first interface may be understood as the electronic device entering a tablet (handwriting) mode while displaying the first interface, so that the user can input the annotation content for the target object on the first interface.
Therefore, the user does not need to make the electronic device exit the current application and record the annotation content for the target object through another application, which simplifies the steps of annotating content.
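By way of illustration only, the following Kotlin sketch shows one way such a transparent cover layer could be realized as a view container on the Android operating system (which the embodiments mention later). The AnnotationOverlay class and its stroke handling are assumptions made for this sketch, not the patented implementation.

```kotlin
import android.content.Context
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.graphics.Path
import android.view.MotionEvent
import android.view.View

// Hypothetical transparent cover layer: a view overlaid on the first interface
// that records strokes as annotation content without triggering the interface
// underneath.
class AnnotationOverlay(context: Context) : View(context) {

    private val path = Path()
    private val paint = Paint().apply {
        color = Color.BLACK
        style = Paint.Style.STROKE
        strokeWidth = 4f
        isAntiAlias = true
    }

    init {
        // Zero transparency in the document's sense: the layer is invisible and
        // the first interface below remains fully visible.
        setBackgroundColor(Color.TRANSPARENT)
    }

    // Consuming touch events here means strokes become annotation input instead
    // of activating controls on the first interface below.
    override fun onTouchEvent(event: MotionEvent): Boolean {
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN -> path.moveTo(event.x, event.y)
            MotionEvent.ACTION_MOVE -> path.lineTo(event.x, event.y)
        }
        invalidate()
        return true
    }

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        canvas.drawPath(path, paint) // render the annotation content
    }
}
```

In use, an activity would attach such an overlay in a full-screen layout when the third input is received and detach it again on the dismiss gesture described above.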
Alternatively, in this embodiment of the application, the step 203 may be specifically implemented by the step 203a described below.
Step 203a, in response to the first input on the transparent cover layer, displaying the annotation content for the target object on the transparent cover layer.
Optionally, in this embodiment of the application, the first input may be an input performed by the user on the transparent cover layer.
Illustratively, the first input may be an input by which the user enters the annotation content on the transparent cover layer.
Illustratively, the first input may be an input performed by the user on the transparent cover layer with a stylus.
It is understood that the user can input the annotation content for the target object on the transparent cover layer to trigger the electronic device to display the annotation content on the transparent cover layer.
For example, as shown in fig. 2(A), the electronic device displays a first interface, such as a picture management interface, which includes a target object, for example, picture 11. With a completely transparent cover layer displayed over the picture management interface, the user may input annotation content for picture 11, for example: "October 6". As shown in fig. 2(B), the electronic device displays the annotation content for picture 11, "October 6", so that after receiving the second input, the electronic device can store the annotation content "October 6" in association with picture 11.
Optionally, in this embodiment of the application, while inputting annotation content on the transparent cover layer, the user may make a sliding input on the screen to trigger the electronic device to cancel displaying the transparent cover layer.
It can be understood that after inputting part of the content on the transparent cover layer, if the user does not want to continue inputting and does not need to save what has already been input, the user can directly make a sliding input on the screen to trigger the electronic device to cancel displaying the transparent cover layer.
Therefore, the user does not need to take a screenshot of the first interface, find the screenshot in the album, and annotate it through inputs on multiple controls (for example, to add text), nor exit the current application and record annotation content related to the first interface in a notepad application; this simplifies the steps of annotating content.
Step 204, receiving a second input.
In an embodiment of the application, the electronic device may receive a second input.
Illustratively, the second input may be an input performed by the user on the first interface or on the transparent cover layer.
For example, the second input may be a touch input such as a click input, a drag input, a slide input, or the like, or other feasible input. The embodiment of the present application does not limit this.
Illustratively, the second input may be input to the first interface or the transparent cover layer by the user through a stylus.
For example, the second input may be an input of the NFC sensing area by a user when the electronic device displays the first interface or when a transparent cover layer is displayed on the first interface in an overlapping manner.
Illustratively, the second input may be an input of the NFC sensing area by a user through a tip of a stylus.
Step 205, in response to the second input, storing the annotation content in association with the target object, and storing the access address of the first interface to the note application.
The annotation content is displayed when the first interface is accessed through the access address of the first interface.
Optionally, in this embodiment of the application, after the electronic device displays the annotation content added by the user in the first interface, if a second input is received, the electronic device may, in response to the second input, extract keywords or key information from the annotation content and from the target object, then establish an association relationship between the annotation content and the target object through those keywords, and store the association relationship.
Illustratively, the electronic device may extract keywords or key information from the annotation content and from the first interface, and then establish and store an association relationship between the annotation content and the target object through those keywords.
For example, after the electronic device displays the annotation content added by the user on the transparent cover layer, if a second input is received, the electronic device may, in response to the second input, extract keywords or key information from the annotation content added on the transparent cover layer and from the target object, then establish, through those keywords, an association relationship between the transparent cover layer containing the annotation content and the target object, or between the annotation content and the target object, and store the association relationship.
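As a reference, the sketch below shows one plausible shape for such an association record; the AnnotationRecord fields and the naive whitespace-based extractKeywords() helper are illustrative assumptions, since the embodiment fixes neither the keyword-extraction algorithm nor the storage format.

```kotlin
// Hypothetical record linking annotation content to its target object and to
// the access address of the first interface (all field names are assumptions).
data class AnnotationRecord(
    val targetKeywords: List<String>, // key information extracted from the target object
    val annotationText: String,       // the annotation content the user added
    val accessAddress: String         // access address of the first interface, e.g. a URI
)

fun buildRecord(annotation: String, targetInfo: String, address: String): AnnotationRecord {
    // Stand-in for whatever keyword/key-information extraction the device performs.
    fun extractKeywords(text: String): List<String> =
        text.split(Regex("\\s+")).filter { it.length > 1 }

    return AnnotationRecord(
        targetKeywords = extractKeywords(targetInfo) + extractKeywords(annotation),
        annotationText = annotation,
        accessAddress = address
    )
}
```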
Optionally, in this embodiment, the electronic device may obtain the access address of the first interface and store the access address to the note application.
For example, the electronic device may save the access address of the first interface into the favorites (collection) of the application corresponding to the first interface.
By way of example, the note application described above may be understood as a native application, such as: an atomic notes application, a notepad application, a file management application, a notes application, and the like.
It can be understood that, because the electronic device stores the annotation content in association with the target object in the first interface and stores the access address of the first interface, when the user wants to access the first interface again, the stored access address can trigger the electronic device to jump directly to the first interface for display, with the annotation content added by the user shown in the first interface.
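Assuming the access address is saved as a deep-link URI (the embodiment does not fix the address format), the jump back and the annotation lookup might look like the following sketch, which reuses the AnnotationRecord type assumed above:

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri

// Jump back to the first interface through its saved access address.
fun openSavedInterface(context: Context, record: AnnotationRecord) {
    context.startActivity(Intent(Intent.ACTION_VIEW, Uri.parse(record.accessAddress)))
}

// On arrival, the interface matches its own key information against the stored
// records to find the annotation content to display over the target object.
fun findAnnotationsFor(
    interfaceKeywords: List<String>,
    records: List<AnnotationRecord>
): List<AnnotationRecord> =
    records.filter { record -> record.targetKeywords.any { it in interfaceKeywords } }
```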
Optionally, in this embodiment of the application, in response to the second input, the electronic device may further synthesize the annotation content and the target object to obtain the target content and store the target content.
Exemplarily, the electronic device may perform a cutout (matting) process on the annotation content input by the user and fuse the cut-out annotation content with the target object; alternatively, the transparent cover layer containing the annotation content may be fused with the first interface to obtain the target content.
For example, when a user browses friends' posts on the electronic device and is interested in a certain post, with the post-browsing interface displayed, that is, one interface containing multiple posts, the user may tap the NFC sensing area with the tail end of the stylus to trigger the electronic device to display a transparent cover layer over the interface, and then make inputs on the transparent cover layer. After the operation is finished, the user may tap the NFC sensing area with the tail end of the stylus again to trigger the electronic device to fuse and save a screenshot of the post-browsing interface and the transparent cover layer containing the user's input.
For example, when the electronic device displays a news details interface, the user may tap the NFC sensing area with the tail end of the stylus to trigger the electronic device to display a transparent cover layer over the current interface, and then make inputs on the transparent cover layer. After the operation is finished, the user may tap the NFC sensing area with the tail end of the stylus again to trigger the electronic device to fuse and save the content corresponding to the news item and the content input by the user.
Illustratively, after obtaining the target content, the electronic device may display a save menu.
Illustratively, the save menu may include at least one of: "Do not save", "Save to application", and "Save locally".
Illustratively, the user may click "Save to application" to trigger the electronic device to invoke the favorites function of the application corresponding to the first interface and save the target content to that application's favorites.
Illustratively, the user may click "Save locally" to trigger the electronic device to save the target content to the note application.
Illustratively, after the user clicks "Save locally", the electronic device may be triggered to display at least one save address option: default folder, new folder, and select folder.
Optionally, in this embodiment of the application, after the annotation content is displayed on the transparent cover layer overlaid on the first interface, for example, a video playing interface, if the user performs the second input, the electronic device may store the annotation content in association with the target object and resume playing the video.
Optionally, in this embodiment of the application, when the first interface is a video playing interface of a first video file and the target object is a target video frame in the first video file, the electronic device, in response to the second input, synthesizes the annotation content with the target video frame to obtain the target content, saves the target content, and then displays a save menu when playback of the first video file ends.
Illustratively, the save menu includes at least one of: "Do not save", "Save content only", and "Save video".
Illustratively, the user may click "Save video" to trigger the electronic device to download the first video file; after the download completes, the original video frame in the first video file, that is, the target video frame, is automatically replaced with the target content to obtain a second video file, and the second video file is saved to the note application, thereby integrating the user's edits with the original video.
For example, when saving the second video file, the electronic device may apply a target mark to the original video file name, that is, the first video file name, and store it, where the target mark characterizes the second video file as a video file that includes annotation content.
Therefore, the user can play the video file containing the annotation content directly in the note application, and when viewing saved content can tell from the target mark that the second video file is the one with modified video frames, which improves the flexibility of file saving on the electronic device.
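A hedged sketch of this "Save video" branch follows. downloadVideo(), replaceFrame(), and saveToNoteApp() are hypothetical placeholders (real frame replacement would need a decode/re-encode pass, for example via MediaCodec), and the "_annotated" file-name suffix merely illustrates one possible form the target mark could take.

```kotlin
import android.graphics.Bitmap

data class VideoRef(val name: String, val url: String, val targetFrameIndex: Int)

// Hypothetical helpers, not real library calls.
fun downloadVideo(video: VideoRef): ByteArray = TODO("fetch the first video file")
fun replaceFrame(data: ByteArray, frameIndex: Int, frame: Bitmap): ByteArray =
    TODO("decode, swap the target video frame for the synthesized target content, re-encode")
fun saveToNoteApp(data: ByteArray, fileName: String): Unit =
    TODO("write the file into the note application's storage")

// "Save video" branch: build the second video file and mark its name so the
// note application can tell that it contains annotation content.
fun onSaveVideoChosen(video: VideoRef, targetContent: Bitmap) {
    val original = downloadVideo(video)
    val secondVideo = replaceFrame(original, video.targetFrameIndex, targetContent)
    saveToNoteApp(secondVideo, fileName = "${video.name}_annotated") // target mark
}
```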
The embodiment of the application provides an object labeling method: an electronic device can display a first interface; in response to a first input, display annotation content for a target object in the first interface; in response to a second input, store the annotation content in association with the target object and store the access address of the first interface to a note application; and display the annotation content when the first interface is accessed through the access address of the first interface. With this scheme, while the electronic device displays the first interface, the user can directly add annotation content to a target object in the first interface and trigger the electronic device to store the added annotation content in association with the target object, so that when the user later opens the first interface again, the electronic device can directly display the first interface including the annotation content, which simplifies the operation of annotating browsed content.
Optionally, in this embodiment of the present application, the object labeling method provided in this embodiment of the present application may further include the following steps 401 to 403.
Step 401, displaying an application interface of the note application.
In the embodiment of the application, the application interface includes the access address of the first interface.
optionally, in this embodiment of the application, the user may input an application icon of the note application to trigger the electronic device to display an application interface of the note application.
Optionally, in this embodiment of the application, the application interface includes target content generated according to the target object and the annotation content for the target object.
Step 402, receiving a fourth input to the access address of the first interface.
In an embodiment of the application, the electronic device may receive a fourth input to the access address of the first interface.
Illustratively, the fourth input may be a touch input such as a click input, a drag input, a slide input, or the like, or other feasible input. The embodiment of the present application does not limit this.
Step 403, in response to the fourth input to the access address of the first interface, displaying the first interface.
In this embodiment of the present application, the first interface includes the annotation content for the target object.
It can be understood that after the user performs an input on the access address of the first interface, the electronic device can jump to the first interface for display and show the annotation content associated with the target object in the first interface.
Optionally, in this embodiment of the application, after the user performs the input on the access address of the first interface, the electronic device may jump to the first interface for display and at the same time acquire key information of the first interface or of the target object in it, then determine, according to that key information, the annotation content associated with the target object, and display the annotation content in the first interface.
For example, as shown in fig. 3, the electronic device displays an application interface 12 of the note application, where the application interface 12 includes the access address 13 of a first interface, such as a picture management interface. The user can click the access address 13, that is, the fourth input, so that the electronic device displays the picture management interface shown in fig. 2(B), which includes the annotation content: "October 6".
Therefore, when the user opens the first interface again, the previously added annotation content can be viewed directly in the first interface, which improves the flexibility of displaying annotation content on the electronic device.
Optionally, in this embodiment of the application, the first interface is an image-text interface, and the target object is a first image in the image-text interface; in this case, steps 402 and 403 may be specifically implemented by steps 402a and 402b described below.
Step 402a, receiving a fourth input to the access address of the first interface.
In an embodiment of the application, the electronic device may receive a fourth input to the access address of the first interface.
Illustratively, the fourth input may be a touch input such as a click input, a drag input, a slide input, or the like, or other feasible input. The embodiment of the present application does not limit this.
Illustratively, the fourth input may be an input performed by the user on the access address of the first interface with a stylus.
Step 402b, in response to the fourth input to the access address of the first interface, displaying the image-text interface.
In this embodiment, the display priority of the first image is higher than that of a second image, where the second image is an image in the image-text interface that does not contain annotation content.
It can be understood that, with an application interface of the note application displayed, the user may perform an input on the access address of the image-text interface so that the electronic device displays the image-text interface; when the image-text interface includes multiple images, the electronic device preferentially displays the first image, which contains the annotation content.
Therefore, when the user opens the first interface again through its stored access address, the annotation content is shown preferentially, without page-turning or sliding through the first interface to search for it, which improves the flexibility of displaying annotation content on the electronic device.
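This display-priority rule reduces to a simple stable sort that puts images carrying annotation content first; a minimal sketch, with an illustrative Pic type:

```kotlin
data class Pic(val id: Long, val hasAnnotationContent: Boolean)

// Annotated images sort ahead of un-annotated ones, so the first image is
// visible without page-turning or sliding.
fun orderForDisplay(images: List<Pic>): List<Pic> =
    images.sortedByDescending { it.hasAnnotationContent }
```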
Optionally, in this embodiment of the application, the first interface is a video playing interface of a target video, and the target object is a target video frame of the target video; the object labeling method provided by the embodiment of the application may further include the following step 501.
Step 501, displaying the annotation content when the target video frame of the target video is played.
It can be understood that, since the electronic device stores the target video frame in association with its annotation content, when the electronic device plays the target video again and reaches the target video frame, it can display the annotation content.
For example, as shown in fig. 4(A), the electronic device displays an application interface 12 of the note application, where the application interface 12 includes the access address 14 of a target video, such as video 1. The user can click the access address 14, that is, the fourth input mentioned above, so that the electronic device plays video 1; when playback reaches the target video frame 15 of video 1, as shown in fig. 4(B), the annotation content is displayed, for example: "Keep it up".
Therefore, when the electronic device again plays the target video frame, that is, the video frame whose content the user annotated, the corresponding annotation content can be displayed, which improves the flexibility of displaying annotation content while playing a video.
Optionally, in this embodiment of the application, the first interface is a video playing interface of a target video, and the target object is a target video frame of the target video; the step 403 can be specifically realized by the step 403b described below.
Step 403b, in response to the fourth input to the access address of the first interface, displaying the video playing interface.
In the embodiment of the application, the video playing interface includes at least one piece of mark information, and each piece of mark information indicates that a target video frame with annotation content exists in the target video; the at least one piece of mark information includes target mark information.
Optionally, in this embodiment of the application, the at least one piece of mark information may be displayed in a floating manner in the video playing interface.
Illustratively, the at least one piece of mark information may be displayed in the video playing interface with a preset transparency.
Illustratively, a piece of mark information may be at least one of: a thumbnail of the corresponding video frame, a keyword of the annotation content, and the like.
Optionally, in this embodiment of the application, the target mark information is the mark information corresponding to the target video frame.
It can be understood that, with an application interface of the note application displayed, the user may perform an input on the access address of the first interface, so that the electronic device displays the video playing interface.
Optionally, in this embodiment of the present application, the object labeling method provided in this embodiment of the present application may further include step 601 and step 602 described below.
Step 601, receiving a fifth input on the target mark information from the user.
In this embodiment, the electronic device may receive a fifth input on the target mark information from the user.
Illustratively, the fifth input may be a touch input such as a click input, a drag input, a slide input, or the like, or other feasible input. The embodiment of the present application does not limit this.
Step 602, in response to the fifth input, displaying the target video frame corresponding to the target mark information and the annotation content for the target video frame.
Optionally, in this embodiment of the application, after receiving the fifth input, the electronic device may determine, according to the target mark information, the target video frame corresponding to it, jump to that target video frame to start playing, and display the annotation content for the target video frame at the same time.
For example, as shown in fig. 5, the electronic device displays the video playing interface 16 of a target video, such as video 1, where the video playing interface 16 includes mark information 1 and mark information 2, and mark information 2 corresponds to the target video frame 15 of video 1. The user can click mark information 2, that is, the fifth input, so that the electronic device displays the target video frame 15 corresponding to mark information 2, as shown in fig. 4(B), where the target video frame 15 includes the annotation content: "Keep it up".
Illustratively, after the electronic device plays or jumps to the target video frame and displays the annotation content for it, the annotation content may be displayed for a predetermined length of time, for example, 5 seconds.
Therefore, while the electronic device plays the video, the user can jump directly to the corresponding video frame through an input on the mark information, and the corresponding annotation content is displayed, which improves the flexibility of video playback and the convenience of displaying annotation content on the electronic device.
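A sketch of this jump-and-show behaviour using the stock Android MediaPlayer API; the MarkInfo shape and the overlay TextView are illustrative assumptions, and the 5-second display window is taken from the example above.

```kotlin
import android.media.MediaPlayer
import android.view.View
import android.widget.TextView

data class MarkInfo(val positionMs: Int, val annotationText: String)

// Fifth-input handler: seek to the marked target video frame and show its
// annotation content for a fixed interval.
fun onMarkInfoTapped(player: MediaPlayer, mark: MarkInfo, overlay: TextView) {
    player.seekTo(mark.positionMs)   // jump straight to the target video frame
    overlay.text = mark.annotationText
    overlay.visibility = View.VISIBLE
    overlay.postDelayed({ overlay.visibility = View.GONE }, 5_000L) // hide after 5 s
}
```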
It should be noted that, in the object labeling method provided in the embodiment of the present application, the execution subject may be an object labeling apparatus, or an electronic device, and may also be a functional module or an entity in the electronic device. In the embodiment of the present application, an object labeling apparatus executes an object labeling method as an example, and the object labeling apparatus provided in the embodiment of the present application is described.
Fig. 6 shows a schematic diagram of a possible structure of an object labeling apparatus related to the embodiment of the present application. As shown in fig. 6, the object labeling apparatus 70 may include: a display module 71 and a processing module 72.
The display module 71 is configured to display a first interface and, in response to a first input, display annotation content for a target object in the first interface. The processing module 72 is configured to, in response to a second input, store the annotation content in association with the target object, and store the access address of the first interface to the note application; the annotation content is displayed when the first interface is accessed through the access address of the first interface.
The embodiment of the application provides an object labeling apparatus. When the apparatus displays a first interface, the user can directly add annotation content to a target object in the first interface and trigger the apparatus to store the added annotation content in association with the target object, so that when the user later opens the first interface again, the apparatus can directly display the first interface including the annotation content, which simplifies the operation of annotating browsed content.
In a possible implementation manner, the display module 71 is configured to, before the annotation content for the target object is displayed in the first interface in response to the first input, display a transparent cover layer over the first interface in response to a third input to the first interface. The display module 71 is specifically configured to display, in response to a first input on the transparent cover layer, the annotation content for the target object on the transparent cover layer.
In a possible implementation manner, the display module 71 is further configured to display an application interface of the note application, where the application interface includes the access address of the first interface; and to display the first interface in response to a fourth input to the access address of the first interface, where the first interface includes the annotation content for the target object.
In a possible implementation manner, the application interface includes the access address of the first interface, and target content generated according to the target object and the annotation content for the target object.
In a possible implementation manner, the first interface is an image-text interface, and the target object is a first image in the image-text interface; the display module 71 is specifically configured to display the image-text interface in response to a fourth input to the access address of the first interface, where the display priority of the first image is higher than that of a second image, and the second image is an image in the image-text interface that does not contain annotation content.
In a possible implementation manner, the first interface is a video playing interface of a target video, and the target object is a target video frame of the target video; the display module 71 is further configured to display the annotation content when the target video frame of the target video is played.
In a possible implementation manner, the first interface is a video playing interface of a target video, and the target object is a target video frame of the target video; the object labeling apparatus provided in the embodiment of the present application further includes a receiving module. The display module 71 is specifically configured to display the video playing interface in response to a fourth input to the access address of the first interface, where the video playing interface includes at least one piece of mark information, each piece of mark information indicates that a target video frame with annotation content exists in the target video, and the at least one piece of mark information includes target mark information. The receiving module is configured to receive a fifth input on the target mark information from the user. The display module 71 is further configured to, in response to the fifth input, display the target video frame corresponding to the target mark information and the annotation content for the target video frame.
The object labeling apparatus in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in an electronic device. The device may be a mobile electronic device or a non-mobile electronic device. The mobile electronic device may be, for example, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, and the embodiments of the present application are not specifically limited in this regard.
The object labeling apparatus in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiments of the present application are not specifically limited in this regard.
The object labeling device provided in the embodiment of the present application can implement each process implemented by the method embodiment, and is not described here again to avoid repetition.
Optionally, as shown in fig. 7, an electronic device 900 is further provided in this embodiment of the present application, and includes a processor 901 and a memory 902, where the memory 902 stores a program or an instruction that can be executed on the processor 901, and when the program or the instruction is executed by the processor 901, the steps of the foregoing method embodiment are implemented, and the same technical effect can be achieved, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, and processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 110 through a power management system, so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or have a different arrangement of components; details are omitted here.
The display unit 106 is configured to display a first interface and, in response to a first input, display annotation content for a target object in the first interface.
The processor 110 is configured to, in response to a second input, store the annotation content in association with the target object, and store the access address of the first interface to the note application; the annotation content is displayed when the first interface is accessed through the access address of the first interface.
The embodiment of the application provides an electronic device. When the electronic device displays a first interface, the user can directly add annotation content to a target object in the first interface and trigger the electronic device to store the added annotation content in association with the target object, so that when the user later opens the first interface again, the electronic device can directly display the first interface including the annotation content, which simplifies the operation of annotating browsed content.
Optionally, the display unit 106 is configured to, before the annotation content for the target object is displayed in the first interface in response to the first input, display the transparent cover layer over the first interface in response to a third input to the first interface.
The display unit 106 is specifically configured to display, in response to a first input on the transparent cover layer, the annotation content for the target object on the transparent cover layer.
Optionally, the display unit 106 is further configured to display an application interface of the note application, where the application interface includes the access address of the first interface; and to display the first interface in response to a fourth input to the access address of the first interface, where the first interface includes the annotation content for the target object.
Optionally, the first interface is an image-text interface, and the target object is a first image in the image-text interface; the display unit 106 is specifically configured to display the image-text interface in response to a fourth input to the access address of the first interface, where the display priority of the first image is higher than that of a second image, and the second image is an image in the image-text interface that does not contain annotation content.
Optionally, the first interface is a video playing interface of a target video, and the target object is a target video frame of the target video; the display unit 106 is further configured to display the annotation content when the target video frame of the target video is played.
Optionally, the first interface is a video playing interface of a target video, and the target object is a target video frame of the target video; the display unit 106 is specifically configured to display the video playing interface in response to a fourth input to the access address of the first interface, where the video playing interface includes at least one piece of mark information, each piece of mark information indicates that a target video frame with annotation content exists in the target video, and the at least one piece of mark information includes target mark information.
A user input unit 107, configured to receive a fifth input of the target mark information by the user.
And the display unit 106 is further configured to, in response to the fifth input, display a target video frame corresponding to the target mark information and annotation content of the target video frame.
The electronic device provided by the embodiment of the application can realize each process realized by the method embodiment, and can achieve the same technical effect, and for avoiding repetition, the details are not repeated here.
The beneficial effects of the various implementation manners in this embodiment may specifically refer to the beneficial effects of the corresponding implementation manners in the above method embodiments, and are not described herein again to avoid repetition.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, where the first storage area may store an operating system and application programs or instructions required for at least one function (such as a sound playing function and an image playing function). Further, the memory 109 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchronous link DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 109 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 110 may include one or more processing units; optionally, the processor 110 integrates an application processor, which mainly handles operations related to the operating system, user interface, application programs, etc., and a modem processor, which mainly handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements the processes of the foregoing method embodiments, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the foregoing method embodiments, and can achieve the same technical effect, and in order to avoid repetition, the details are not repeated here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing method embodiments, and achieve the same technical effects, and in order to avoid repetition, details are not described here again.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a/an ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may also include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present application may be embodied in the form of a computer software product that is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to perform the methods according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An object labeling method, characterized in that the method comprises:
displaying a first interface;
in response to a first input, displaying annotation content of a target object in the first interface;
in response to a second input, storing the annotation content in association with the target object, and storing an access address of the first interface to a note application; and
displaying the annotation content in a case that the first interface is accessed through the access address of the first interface.
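For concreteness, the following minimal, framework-free Kotlin sketch models the flow of claim 1: annotation content is stored in association with the target object, the access address of the first interface is saved to a note application, and the stored annotations are returned for display when the interface is later opened through that address. All identifiers and the "app://" address scheme are assumptions introduced for this example, not taken from the patent.

```kotlin
// Hypothetical data model; names are illustrative only.
data class ObjectAnnotation(val objectId: String, val content: String)

class AnnotationStore {
    // annotation content stored in association with its target object
    private val byObject = mutableMapOf<String, ObjectAnnotation>()
    // the note application keeps the access address of the first interface
    private val noteEntries = mutableListOf<String>()

    // second input: persist the association and save the access address
    fun save(interfaceAddress: String, objectId: String, content: String) {
        byObject[objectId] = ObjectAnnotation(objectId, content)
        noteEntries.add(interfaceAddress)
    }

    // when the first interface is accessed through its access address,
    // return every stored annotation for the objects on that interface
    fun onInterfaceOpened(address: String, objectIds: List<String>): List<ObjectAnnotation> =
        if (address in noteEntries) objectIds.mapNotNull { byObject[it] } else emptyList()
}

fun main() {
    val store = AnnotationStore()
    store.save("app://news/article/42", objectId = "img-1", content = "check this chart")
    println(store.onInterfaceOpened("app://news/article/42", listOf("img-1", "img-2")))
}
```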
2. The method of claim 1, wherein before the displaying annotation content of a target object in the first interface in response to the first input, the method further comprises:
in response to a third input to the first interface, displaying a transparent cover layer over the first interface;
wherein the displaying annotation content of a target object in the first interface in response to the first input comprises:
in response to the first input to the transparent cover layer, displaying the annotation content of the target object on the transparent cover layer.
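A rough Kotlin sketch of the transparent cover layer in claim 2, under the same illustrative assumptions: the third input shows the cover layer over the first interface, and the first input then lands on the layer rather than on the content beneath it, so the annotation can be captured without leaving the interface. The class and callback names are hypothetical.

```kotlin
data class Point(val x: Float, val y: Float)

// Hypothetical cover-layer model; the callback name is an assumption.
class TransparentCoverLayer(private val onAnnotate: (Point, String) -> Unit) {
    var visible = false
        private set

    // third input to the first interface: display the transparent cover layer
    fun show() { visible = true }

    // first input to the cover layer: record annotation content at a position,
    // rendered on the layer itself rather than on the interface below
    fun handleFirstInput(at: Point, content: String) {
        if (visible) onAnnotate(at, content)
    }
}

fun main() {
    val layer = TransparentCoverLayer { p, text -> println("annotation \"$text\" at $p") }
    layer.show()
    layer.handleFirstInput(Point(120f, 340f), "remember this scene")
}
```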
3. The method of claim 1, further comprising:
displaying an application interface of the note application, wherein the application interface comprises the access address of the first interface;
displaying the first interface in response to a fourth input to the access address of the first interface, wherein the first interface comprises the annotation content of the target object.
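The round trip of claim 3 could look like the sketch below: the note application lists stored access addresses, and a fourth input on an entry reopens the first interface with its annotation content shown. The entry structure and opening callback are hypothetical.

```kotlin
// Hypothetical note-app entry pointing back at the annotated interface.
data class NoteEntry(val title: String, val accessAddress: String)

class NoteApp(private val openInterface: (String) -> Unit) {
    private val entries = mutableListOf<NoteEntry>()

    fun addEntry(entry: NoteEntry) { entries.add(entry) }

    // fourth input: activating an access address in the note application
    // reopens the first interface, which then displays the annotation content
    fun onEntryActivated(index: Int) = openInterface(entries[index].accessAddress)
}

fun main() {
    val app = NoteApp { address -> println("opening $address with annotations visible") }
    app.addEntry(NoteEntry("news article", "app://news/article/42"))
    app.onEntryActivated(0)
}
```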
4. The method according to claim 3, wherein the application interface comprises the access address of the first interface and target content generated according to the target object and the annotation content of the target object.
5. The method of claim 3, wherein the first interface is an image-text interface, and the target object is a first image in the image-text interface;
the displaying the first interface in response to a fourth input to the access address of the first interface comprises:
displaying the image-text interface in response to the fourth input to the access address of the first interface, wherein a display priority of the first image is greater than a display priority of a second image, and the second image is an image in the image-text interface that does not contain annotation content.
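One plausible reading of the display-priority rule in claim 5 is a stable reordering that places annotated images ahead of unannotated ones when the image-text interface is reopened, as in this sketch; the data model is an assumption.

```kotlin
data class PageImage(val id: String, val hasAnnotation: Boolean)

// Stable sort: annotated images come first, and images with equal priority
// keep their original relative order within the image-text interface.
fun displayOrder(images: List<PageImage>): List<PageImage> =
    images.sortedByDescending { it.hasAnnotation }

fun main() {
    val page = listOf(
        PageImage("img-1", hasAnnotation = false),
        PageImage("img-2", hasAnnotation = true), // the annotated "first image"
        PageImage("img-3", hasAnnotation = false),
    )
    println(displayOrder(page).map { it.id }) // [img-2, img-1, img-3]
}
```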
6. The method according to claim 3, wherein the first interface is a video playing interface of a target video, and the target object is a target video frame of the target video;
the method further comprises the following steps:
displaying the annotation content in a case that playback reaches the target video frame of the target video.
7. The method according to claim 3, wherein the first interface is a video playing interface of a target video, and the target object is a target video frame of the target video;
the displaying the first interface in response to a fourth input to the access address of the first interface comprises:
in response to the fourth input to the access address of the first interface, displaying the video playing interface, wherein the video playing interface comprises at least one piece of mark information, each piece of mark information indicates that a target video frame with annotation content exists in the target video, and the at least one piece of mark information comprises target mark information;
the method further comprises the following steps:
receiving a fifth input by a user on the target mark information; and
in response to the fifth input, displaying a target video frame corresponding to the target mark information and the annotation content of the target video frame.
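Claims 6 and 7 could be modeled as below: each piece of mark information maps a video frame timestamp to its annotation content, the annotation surfaces when playback reaches that frame, and a fifth input on a marker jumps straight to the annotated frame. Timestamps and names are illustrative assumptions.

```kotlin
// Hypothetical marker: one per annotated target video frame.
data class FrameMarker(val timestampMs: Long, val annotation: String)

class AnnotatedVideo(private val markers: List<FrameMarker>) {
    // claim 6: while playing, show the annotation when its frame is reached
    fun onPlaybackTick(positionMs: Long): String? =
        markers.firstOrNull { it.timestampMs == positionMs }?.annotation

    // claim 7: fifth input on the target mark information jumps playback to
    // the corresponding frame and returns its annotation content for display
    fun onMarkerSelected(marker: FrameMarker): Pair<Long, String> =
        marker.timestampMs to marker.annotation
}

fun main() {
    val video = AnnotatedVideo(listOf(FrameMarker(42_000, "great shot")))
    println(video.onPlaybackTick(42_000))                              // great shot
    println(video.onMarkerSelected(FrameMarker(42_000, "great shot"))) // (42000, great shot)
}
```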
8. An object labeling apparatus, comprising a display module and a processing module, wherein:
the display module is configured to display a first interface, and display, in response to a first input, annotation content of a target object in the first interface;
the processing module is configured to, in response to a second input, store the annotation content in association with the target object, and store an access address of the first interface to a note application; and
the display module is further configured to display the annotation content in a case that the first interface is accessed through the access address of the first interface.
9. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the object labeling method according to any one of claims 1 to 7.
10. A readable storage medium, on which a program or instructions are stored, wherein the program or instructions, when executed by a processor, implement the steps of the object labeling method according to any one of claims 1 to 7.
CN202211513445.0A 2022-11-29 2022-11-29 Object labeling method and device, electronic equipment and storage medium Pending CN115826821A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211513445.0A CN115826821A (en) 2022-11-29 2022-11-29 Object labeling method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211513445.0A CN115826821A (en) 2022-11-29 2022-11-29 Object labeling method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115826821A (en) 2023-03-21

Family

ID=85532772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211513445.0A Pending CN115826821A (en) 2022-11-29 2022-11-29 Object labeling method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115826821A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination