CN114282031A - Information labeling method and device, computer equipment and storage medium - Google Patents

Info

Publication number: CN114282031A
Application number: CN202111037637.4A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 阳萍
Current Assignee: Tencent Technology Shenzhen Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202111037637.4A
Publication of CN114282031A
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Prior art keywords: information, live broadcast, tag, entity article, target

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to an information labeling method and device, computer equipment and a storage medium, and relates to the technical field of live broadcasting. The method comprises the following steps: displaying a live broadcast interface; displaying a live broadcast picture in the live broadcast interface, the live broadcast picture showing a live broadcast scene of an entity article; and, in response to a target entity article being present in the live broadcast picture, displaying an information tag of the target entity article overlaid on the live broadcast picture, the information tag indicating the specified attribute information of the target entity article. This method simplifies the process of displaying the specified attribute information in the live broadcast picture and improves the display efficiency of the specified attribute information; displaying the specified attribute information directly in the live broadcast picture also improves its display effect.

Description

Information labeling method and device, computer equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of live broadcast, in particular to an information labeling method and device, computer equipment and a storage medium.
Background
Live-streaming sales has become a convenient shopping mode that connects merchants with consumers; by showing commodities in a live broadcast, consumers can perceive more intuitively how a commodity is used, how it performs, and so on.
In the related art, in order to enable a user watching a live broadcast to obtain specified attribute information of a commodity, such as price information, auxiliary material for displaying the specified attribute information is usually placed in the scene in advance, for example a price tag showing the price information, so as to prompt the viewer about the specified attribute information.
However, in the related art, this manner of displaying the specified attribute information of a commodity is inflexible: it depends on material prepared before the live broadcast and cannot change with the real-time explanation of, or changes to, the commodity, so the display effect of the specified attribute information in the live broadcast picture is poor.
Disclosure of Invention
The embodiment of the application provides an information labeling method, an information labeling device, computer equipment and a storage medium, which can simplify the process of displaying specified attribute information in a live broadcast picture, improve the display efficiency of the specified attribute information and improve the display effect of the specified attribute information. The technical scheme is as follows:
in one aspect, an information labeling method is provided, and the method includes:
displaying a live broadcast interface;
displaying a live broadcast picture in the live broadcast interface, wherein the live broadcast picture is a picture for displaying a live broadcast scene of an entity article;
responding to the existence of a target entity article in the live broadcast picture, and overlaying and displaying an information tag of the target entity article on the live broadcast picture; the information tag is used for indicating the designated attribute information of the target entity article.
In another aspect, an information labeling method is provided, and the method includes:
acquiring a target entity article in a live broadcast picture; the live broadcast picture is a picture for displaying a live broadcast scene of an entity article;
generating an information tag of the target entity article based on the target entity article;
instructing a target terminal to display the live broadcast picture based on the information tag, wherein the information tag of the target entity article is displayed on the live broadcast picture in an overlapped mode; the information tag is used for indicating the designated attribute information of the target entity article, and the target terminal is a terminal for displaying the live broadcast picture.
In another aspect, an information labeling apparatus is provided, the apparatus including:
the interface display module is used for displaying a live broadcast interface;
the picture display module is used for displaying a live broadcast picture in the live broadcast interface, wherein the live broadcast picture is a picture for displaying a live broadcast scene of an entity article;
the tag display module is used for responding to the existence of a target entity article in the live broadcast picture and overlaying and displaying an information tag of the target entity article on the live broadcast picture; the information tag is used for indicating the designated attribute information of the target entity article.
In a possible implementation manner, the tag display module is configured to, in response to that a target entity article exists in the live view, display, on the live view, an information tag of the target entity article in an overlapping manner, corresponding to a display position of the target entity article.
In one possible implementation, the apparatus further includes:
and the first label updating module is used for updating the information label of the target entity article, which is displayed on the live broadcast picture in a superposition manner, in response to the change of the specified attribute information of the target entity article.
In a possible implementation manner, the updated information tag of the target entity article includes first specified attribute information and second specified attribute information, where the first specified attribute information is the specified attribute information before being changed, and the second specified attribute information is the specified attribute information after being changed.
In a possible implementation manner, in response to that the target entity article is an entity article explained in the live broadcast scene, an interactive control is displayed in an information tag of the target entity article, and the interactive control is configured to jump the live broadcast picture to a detailed information page of the target entity article after receiving a selection operation.
In a possible implementation manner, the live view includes at least two target physical objects, and the apparatus further includes:
a second tag updating module, configured to update an information tag of a first entity article that is displayed in a superimposed manner on the live view in response to a change in an explanation state of the first entity article in the live view, where the first entity article is one of at least two target entity articles; the change of the explanation state includes a change from an explained state to an unexplained state and a change from an unexplained state to an explained state.
In a possible implementation manner, the second tag updating module is configured to update at least one of a display size, a display position, and a display content of an information tag of a first entity article, which is displayed in an overlaid manner on the live view, in response to a change in an explanation state of the first entity article in the live view.
In another aspect, an information labeling apparatus is provided, the apparatus including:
the article acquisition module is used for acquiring a target entity article in a live broadcast picture; the live broadcast picture is a picture for displaying a live broadcast scene of an entity article;
the label generating module is used for generating an information label of the target entity article based on the target entity article;
the indicating module is used for indicating a target terminal to display the live broadcast picture based on the information tag, and the information tag of the target entity article is superposed and displayed on the live broadcast picture; the information tag is used for indicating the designated attribute information of the target entity article, and the target terminal is a terminal for displaying the live broadcast picture.
In one possible implementation, the item acquisition module includes:
the image recognition sub-module is used for carrying out image recognition on the live broadcast picture to obtain at least one entity article in the live broadcast picture;
and the image matching sub-module is used for obtaining, from the at least one entity article, the target entity article that is successfully matched with a template image in a template image set.
In one possible implementation manner, the tag generation module includes:
the attribute information acquisition sub-module is used for acquiring the specified attribute information of the first entity article based on the first template image; the first physical object is one of the target physical objects, and the first template image is a template image in the set of template images that matches the first physical object;
and the label generating submodule is used for generating an information label of the first entity article based on the specified attribute information of the first entity article.
In one possible implementation manner, the tag generation sub-module includes:
the information acquisition unit is used for acquiring first area information and first position information of the first entity article in the live broadcast picture;
a type determining unit, configured to determine a tag type of an information tag of the first entity item based on the first area information and the first location information of the first entity item;
a first tag generating unit, configured to generate an information tag of the first entity item based on the specified attribute information of the first entity item and the tag type.
In a possible implementation manner, the type determining unit is configured to determine that an information type of an information tag of the first physical item is a first information type in response to the first area information and the first location information indicating that the first physical item is an explained physical item;
in response to the first area information and the first location information indicating that the first physical item is an unexplained physical item, determining that an information type of an information tag of the first physical item is a second information type;
the information amount contained in the information tag of the first information type is larger than the information amount contained in the information tag of the second information type.
In one possible implementation manner, the tag generation sub-module includes:
the information acquisition unit is used for acquiring first area information and first position information of the first entity article in the live broadcast picture;
a size information determination unit configured to determine size information of an information tag of the first physical item based on the first area information and the first position information of the first physical item;
a second tag generating unit, configured to generate an information tag of the first entity item based on the specified attribute information of the first entity item and the size information.
In another aspect, a computer device is provided, the computer device comprising a processor and a memory, the memory storing at least one computer program, the at least one computer program being loaded and executed by the processor to implement the information annotation method provided in the various alternative implementations described above.
In another aspect, a computer-readable storage medium is provided, in which at least one computer program is stored, the computer program being loaded and executed by a processor to implement the information annotation method provided in the various alternative implementations described above.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the information labeling method provided in the above-mentioned various optional implementation modes.
The technical scheme provided by the application can comprise the following beneficial effects:
when the target entity article exists in the live broadcast picture, the information label for indicating the designated attribute information of the target entity article is automatically displayed on the live broadcast picture in an overlapping manner so as to display the designated attribute information of the target entity article, thereby simplifying the process of displaying the designated attribute information in the live broadcast picture and improving the display efficiency of the designated attribute information; the display effect of the specified attribute information is improved by directly displaying the specified attribute information in the live broadcast picture.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic structural diagram illustrating an information annotation system according to an exemplary embodiment of the present application;
FIG. 2 is a flowchart illustrating an information annotation method provided by an exemplary embodiment of the present application;
FIG. 3 is a flow chart illustrating an information annotation methodology according to an exemplary embodiment of the present application;
FIG. 4 is a flow chart illustrating an information annotation methodology according to an exemplary embodiment of the present application;
FIG. 5 illustrates a schematic diagram of information tags of different tag types shown in an exemplary embodiment of the present application;
FIG. 6 illustrates a schematic diagram showing a change in tag type in an exemplary embodiment of the present application;
FIG. 7 illustrates a schematic diagram of an information tag shown in an exemplary embodiment of the present application;
FIG. 8 is a diagram illustrating an updated information tag specifying attribute information provided by an exemplary embodiment of the present application;
FIG. 9 illustrates a schematic diagram of an information tag shown in an exemplary embodiment of the present application;
FIG. 10 is an interaction diagram illustrating an information annotation process according to an exemplary embodiment of the present application;
FIG. 11 is a block diagram of an information annotation device provided in an exemplary embodiment of the present application;
FIG. 12 is a block diagram of an information annotation device provided in an exemplary embodiment of the present application;
FIG. 13 is a block diagram illustrating the structure of a computer device in accordance with an exemplary embodiment;
FIG. 14 is a block diagram illustrating the structure of a computer device according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The embodiment of the application provides an information labeling method, which can simplify the process of displaying the specified attribute information in a live broadcast picture and improve the display efficiency and the display effect of the specified attribute information. Embodiments of the present application relate to Artificial Intelligence (AI) technology. AI is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce intelligent machines that can react in a manner similar to human intelligence, giving such machines capabilities such as perception, reasoning and decision making.
AI technology is a comprehensive discipline covering a wide range of fields, with both hardware-level and software-level technologies. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems and mechatronics. Artificial intelligence software technology mainly includes Computer Vision (CV), speech processing, natural language processing and Machine Learning (ML)/deep learning. Computer vision is the science of how to make machines "see"; specifically, it uses cameras and computers instead of human eyes to identify, track and measure targets, and further performs graphics processing so that the result becomes an image more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR (Optical Character Recognition), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D (three-dimensional) technologies, virtual reality, augmented reality, and simultaneous localization and mapping, and also include common biometric technologies such as face recognition and fingerprint recognition. The present application mainly relates to image recognition technology in computer vision; image recognition refers to the technology of using a computer device to process, analyze and understand an image in order to recognize targets and objects of various patterns.
Fig. 1 is a schematic structural diagram of an information annotation system according to an exemplary embodiment of the present application, and as shown in fig. 1, the information annotation system includes a server 110, a terminal 120, and a terminal 130.
The server 110 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a CDN (Content Delivery Network), a big data and artificial intelligence platform, and the like. The server 110 may be a server having live data streaming functionality.
The terminal 120 has an application program supporting the live broadcast function, and each user can access a server of the application program supporting the live broadcast function through the terminal 120 to realize the live broadcast function; further, the terminal 120 may be a terminal device having a network connection function and an interface display function, for example, the terminal 120 may be a smart phone, a tablet computer, an e-book reader, smart glasses, a smart watch, a smart television, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a laptop computer, a desktop computer, and the like.
The terminal 130 is provided with an application program supporting the live broadcast watching function, and each user can access a server of the application program supporting the live broadcast watching function through the terminal 130 so as to watch live broadcasts; further, the terminal 130 may be a terminal device having a network connection function and an interface display function, for example, the terminal 130 may be a smart phone, a tablet computer, an e-book reader, smart glasses, a smart watch, a smart television, an MP3 player, an MP4 player, a laptop computer, a desktop computer, and the like.
The same terminal may be provided with both an application program supporting the live broadcast function and an application program supporting the live broadcast watching function; optionally, the application program supporting the live broadcast function and the application program supporting the live broadcast watching function may be the same application program. Alternatively, they may be different application programs, which is not limited in this application.
The system includes one or more servers 110, one or more terminals 120, and one or more terminals 130. The number of the server 110, the terminal 120, and the terminal 130 is not limited in the embodiment of the present application.
The terminal and the server are connected through a communication network. Optionally, the communication network is a wired network or a wireless network.
Optionally, the system may further include a management device (not shown in fig. 1), which is connected to the server 110 through a communication network. Optionally, the communication network is a wired network or a wireless network.
Optionally, the wireless network or wired network described above uses standard communication techniques and/or protocols. The Network is typically the Internet, but may be any Network including, but not limited to, a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a mobile, wireline or wireless Network, a private Network, or any combination of virtual private networks. In some embodiments, data exchanged over a network is represented using techniques and/or formats including Hypertext Mark-up Language (HTML), Extensible Markup Language (XML), and the like. All or some of the links may also be encrypted using conventional encryption techniques such as Secure Socket Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN), Internet Protocol Security (IPsec). In other embodiments, custom and/or dedicated data communication techniques may also be used in place of, or in addition to, the data communication techniques described above. The application is not limited thereto.
Fig. 2 shows a flowchart of an information annotation method provided in an exemplary embodiment of the present application, where the information annotation method may be executed by a terminal, which may be the terminal 120 or the terminal 130 shown in fig. 1, and as shown in fig. 2, the information annotation method may include the following steps:
and step 210, displaying a live broadcast interface.
In this embodiment of the application, the live interface may be an interface in a client (anchor side) corresponding to an anchor when the anchor performs live broadcasting, or the live interface may be an interface in a client (viewer side) corresponding to a viewer watching live broadcasting.
Step 220, displaying a live broadcast picture in the live broadcast interface, wherein the live broadcast picture is a picture for displaying a live broadcast scene of the entity article.
In the embodiment of the present application, the live broadcast picture is a picture obtained by shooting the live broadcast scene in which the anchor shows an entity article; the live broadcast scene is a real, physical scene, and the live broadcast picture may include at least one entity article.
Step 230, in response to the existence of the target entity article in the live broadcast picture, overlaying and displaying an information tag of the target entity article on the live broadcast picture; the information tag is used for indicating the designated attribute information of the target entity article.
In this embodiment of the application, the target entity item may be an entity item having specified attribute information in the live broadcast scene. The specified attribute information of the target entity item may be preset by the anchor or a related person; illustratively, the anchor may enter the template image and the specified attribute information of the target entity item in a setting interface of the anchor side, so as to upload them to a server for storage. The number of target entity articles can be one or more.
Optionally, when the template image and the specified attribute information of the target entity item are entered by the anchor, other information of the target entity item may also be entered at the same time, including but not limited to the name, profile, detail link, price information, etc. of the target entity item. The specified attribute information of the target entity item may be implemented as one of the entered information of the target entity item, for example, the specified attribute information may be price information of the target entity item.
Illustratively, in response to the existence of a plurality of target entity articles in the live broadcast picture, information tags corresponding to the target entity articles are automatically displayed in an overlaying manner on the live broadcast picture.
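For illustration only, the following is a minimal sketch of the terminal-side overlay described in step 230, written in Python with OpenCV; the class and function names are hypothetical and not part of the disclosure, and the rendering style is an assumption.

```python
# Minimal sketch of overlaying info tags on a live frame (hypothetical names,
# not the patent's implementation). Requires: pip install opencv-python numpy
from dataclasses import dataclass
import cv2
import numpy as np

@dataclass
class InfoTag:
    text: str          # specified attribute info, e.g. "Mug - 12.99"
    x: int             # top-left corner of the detected item's bounding box
    y: int

def overlay_info_tags(frame: np.ndarray, tags: list[InfoTag]) -> np.ndarray:
    """Draw each tag slightly above its item's bounding box."""
    out = frame.copy()
    for tag in tags:
        org = (tag.x, max(tag.y - 10, 20))
        cv2.putText(out, tag.text, org, cv2.FONT_HERSHEY_SIMPLEX,
                    0.6, (255, 255, 255), 2)
    return out

if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a live frame
    tags = [InfoTag("Mug - 12.99", 100, 120), InfoTag("Lamp - 45.00", 400, 300)]
    composited = overlay_info_tags(frame, tags)
    print(composited.shape)
```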
To sum up, according to the information labeling method provided by the embodiment of the application, when a target entity article exists in a live broadcast picture, an information label for indicating the designated attribute information of the target entity article is automatically displayed in a superimposed manner on the live broadcast picture so as to display the designated attribute information of the target entity article, so that the process of displaying the designated attribute information in the live broadcast picture is simplified, and the display efficiency of the designated attribute information is improved; the display effect of the specified attribute information is improved by directly displaying the specified attribute information in the live broadcast picture.
Fig. 3 shows a flowchart of an information annotation method according to an exemplary embodiment of the present application, where the information annotation method may be executed by a server, which may be the server 110 shown in fig. 1, and as shown in fig. 3, the information annotation method may include the following steps:
step 310, acquiring a target entity article in a live broadcast picture; the live view is a view showing a live scene of the physical object.
In the embodiment of the application, a plurality of entity articles are displayed in the live broadcast picture, and the server acquires the entity articles in the live broadcast picture by identifying the live broadcast picture and screens the entity articles to acquire the target entity articles in the live broadcast picture.
Step 320, generating an information tag of the target entity article based on the target entity article; the information tag is used for indicating the designated attribute information of the target entity article.
And 330, indicating a target terminal to display a live broadcast picture based on the information tag, wherein the information tag of the target entity article is superposed and displayed on the live broadcast picture, and the target terminal is a terminal for displaying the live broadcast picture.
In the embodiment of the application, the server can send the generated information tag of the target entity article to the target terminal so as to indicate the target terminal to display the information tag of the target entity article on a live broadcast picture in a superimposed manner when the live broadcast picture is displayed; or, the server may combine the information tag of the target entity article with the live data stream, and send the combined live data stream to the target terminal, so that the target terminal displays a live view on which the information tag of the target entity article is displayed in a superimposed manner, which is not limited in this application.
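As a rough sketch of the server-side flow of steps 310 to 330 (Python, with hypothetical helper names; the detection and delivery details are assumptions and are elaborated in later sketches), either strategy from the text is possible: sending the tags alongside the live data stream, or compositing them into the frames before forwarding.

```python
# Sketch of the server-side flow: detect target items, generate tags,
# notify the target terminal. All names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TagMessage:
    item_id: str
    attribute_text: str   # specified attribute info, e.g. a price
    x: int                # display position matching the item in the frame
    y: int

def detect_target_items(frame) -> list[dict]:
    """Placeholder for image recognition + template matching (see later sketches)."""
    return [{"item_id": "item-1", "x": 120, "y": 80}]

def generate_tag(item: dict, catalog: dict[str, str]) -> TagMessage:
    return TagMessage(item["item_id"], catalog[item["item_id"]], item["x"], item["y"])

def process_frame(frame, catalog: dict[str, str], send):
    """send(tag) notifies the target terminal; the server could composite instead."""
    for item in detect_target_items(frame):
        send(generate_tag(item, catalog))

if __name__ == "__main__":
    catalog = {"item-1": "Ceramic mug - 12.99"}
    process_frame(frame=None, catalog=catalog, send=print)
```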
To sum up, according to the information labeling method provided by the embodiment of the application, when a target entity article exists in a live broadcast picture, an information label for indicating the designated attribute information of the target entity article is automatically displayed in a superimposed manner on the live broadcast picture so as to display the designated attribute information of the target entity article, so that the process of displaying the designated attribute information in the live broadcast picture is simplified, and the display efficiency of the designated attribute information is improved; the display effect of the specified attribute information is improved by directly displaying the specified attribute information in the live broadcast picture.
Fig. 4 is a flowchart illustrating an information annotation method according to an exemplary embodiment of the present application, where the information annotation method may be performed by a server and a terminal interactively, and the server and the terminal may be implemented as the server and the terminal in the system illustrated in fig. 1, and as illustrated in fig. 4, the information annotation method may include the following steps:
in step 410, the anchor starts live broadcasting.
Before starting the live broadcast, the anchor may enter related information of the target entity article in advance through the anchor side, including the name, a brief introduction, a detail link, a template image, and price information of the target entity article. The anchor side sends this related information to the server for storage, so that the server can confirm, based on the stored template image of the target entity article, whether the target entity article exists in the live broadcast picture and, when it does, generate an information tag of the target entity article based on the specified attribute information of the target entity article, where the specified attribute information is at least one item of the entered information of the target entity article.
And step 420, the target terminal displays a live broadcast interface, and a live broadcast picture is displayed in the live broadcast interface.
In the embodiment of the application, the target terminal may be the anchor terminal or a viewer terminal; taking the target terminal being a viewer terminal as an example, the live broadcast picture is a picture transmitted by the anchor side that shows the live broadcast scene of the entity article.
Step 430, the server obtains the target entity article in the live broadcast picture.
The server acquires a live broadcast picture in real time, and performs image recognition on the live broadcast picture to determine whether a target entity article exists in the live broadcast picture, wherein the process of acquiring the target entity article can be realized as follows:
carrying out image recognition on the live broadcast picture to obtain at least one entity article in the live broadcast picture;
and obtaining, from the at least one entity article, the target entity article that is successfully matched with a template image in a template image set.
The template image set is an image set generated based on template images of target entity items uploaded by the anchor terminal, and the template image set can contain template images corresponding to different target entity items. The entity articles of the live broadcast picture acquired by the server through image recognition can include target entity articles and other entity articles, the other entity articles can be entity articles without specified attribute information in the live broadcast picture, and schematically, the other entity articles can be decoration articles in a live broadcast scene; therefore, the server needs to obtain the target entity item by screening from the acquired at least one entity item.
In order to improve the accuracy of image recognition and further improve the display effect of the information tag, in the embodiment of the application, the server can call the image recognition model to perform image recognition on a live broadcast picture so as to determine at least one entity article in the live broadcast picture;
the image recognition model can be obtained based on sample images and entity article label training corresponding to the sample images.
Step 440, the server generates an information tag of the target entity article based on the target entity article; the information tag is used for indicating the designated attribute information of the target entity article.
Taking the generation of the information tag of the first entity article in the target entity article as an example, the process of generating the information tag of the target entity article may be implemented as follows:
acquiring appointed attribute information of a first entity article based on a first template image; the first physical object is one of the target physical objects, and the first template image is a template image in the set of template images that matches the first physical object;
and generating an information tag of the first entity article based on the specified attribute information of the first entity article.
Since the anchor terminal associates the pieces of information of a target entity article with each other when uploading the related information of the target entity article, the server stores these pieces of information in an associated manner. Therefore, when the server acquires one piece of the related information of the target entity article, the other related information can be acquired by querying based on the acquired piece. Illustratively, after the server acquires the template image (first template image) of the first entity article, the specified attribute information of the first entity article can be acquired by querying based on that template image.
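A minimal sketch of this associated storage and lookup is shown below (Python; the record fields follow the related information listed in step 410, but the names and API are hypothetical).

```python
# Sketch of the associated storage: every field entered by the anchor is stored
# under one item record, so a match on any field (here the template image id)
# can be used to look up the rest. Names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ItemRecord:
    name: str
    profile: str
    detail_link: str
    template_id: str
    price: str            # the specified attribute info in this example

ITEMS_BY_TEMPLATE: dict[str, ItemRecord] = {}

def register_item(record: ItemRecord) -> None:
    """Called when the anchor uploads an item's related information."""
    ITEMS_BY_TEMPLATE[record.template_id] = record

def generate_info_tag(template_id: str) -> dict:
    """Build the info tag for the item matched by `template_id`."""
    record = ITEMS_BY_TEMPLATE[template_id]
    return {"name": record.name, "specified_attribute": record.price}

register_item(ItemRecord("Ceramic mug", "Hand-made mug",
                         "https://example.com/mug", "tpl-001", "12.99"))
print(generate_info_tag("tpl-001"))   # {'name': 'Ceramic mug', 'specified_attribute': '12.99'}
```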
Generally speaking, when the anchor explains an entity article, the article is shown at close range so that the audience can better obtain its detailed information; that is, the article is moved to the central area of the live broadcast picture and displayed in an enlarged manner. Therefore, the display area and display position of an entity article in the live broadcast picture can indicate its explanation state. In the embodiment of the application, the current explanation state of the target entity article, such as whether it is being explained, can be determined based on the area the target entity article occupies in the live broadcast picture and its display position in the live broadcast picture; the server can obtain the area information and position information of the target entity article in the live broadcast picture through image recognition.
The live broadcast picture can contain at least two target entity articles, and in order to improve the display effect of the information tags, the information tags of different tag types can be displayed based on different explanation states. Taking the first physical object as an example, the process may be implemented as:
acquiring first area information and first position information of a first entity article in a live broadcast picture;
determining a tag type of an information tag of a first entity article based on first area information and first position information of the first entity article;
and generating an information tag of the first entity article based on the specified attribute information of the first entity article and the tag type.
Wherein, in response to the first area information and the first position information indicating that the first physical object is an explained physical object, determining that the information type of the information tag of the first physical object is a first information type;
in response to the first area information and the first position information indicating that the first physical item is an unexplained physical item, determining that the information type of the information tag of the first physical item is a second information type;
the amount of information contained in the information tag of the first information type is greater than the amount of information contained in the information tag of the second type.
Illustratively, in this embodiment of the application, when the first position information indicates that the first entity article is located in the middle area of the live broadcast picture, and/or the first area information indicates that the first entity article occupies the largest area in the live broadcast picture, the first entity article is determined to be an explained entity article, and an information tag of the first tag type is displayed corresponding to the first entity article; otherwise, the first entity article is determined to be an unexplained entity article, and an information tag of the second tag type is displayed corresponding to it. Fig. 5 is a schematic diagram illustrating information tags of different tag types according to an exemplary embodiment of the present application. As shown in fig. 5, the live broadcast picture contains a target entity article 510, a target entity article 520 and a target entity article 530. The target entity article 510 is located in the middle area of the live broadcast picture and its display area is larger than those of the other target entity articles, so the server confirms that the target entity article 510 is in the explained state and that its information tag is of the first tag type. The target entity article 520 and the target entity article 530 are not located in the middle area of the live broadcast picture and their display areas are smaller than that of the target entity article 510, so the server confirms that they are in the unexplained state and that their information tags are of the second tag type, and instructs the target terminal to display, in the live broadcast picture, the information tag of the corresponding tag type for each target entity article. As shown in fig. 5, the information tag 540 of the first tag type corresponding to the target entity article 510 includes the name and the price information of the target entity article 510, while the information tags 550 of the second tag type corresponding to the target entity article 520 and the target entity article 530 include only their price information. It should be noted that the information contents of the information tags of the first tag type and the second tag type shown in fig. 5 are illustrative, and the present application does not limit the information contents of information tags of different tag types.
Further, in response to the change of the interpretation state of the first physical object, updating the tag type of the information tag of the first physical object to update the information tag of the first physical object:
In response to the area information of the first entity article in the live broadcast picture changing to second area information and the position information of the first entity article in the live broadcast picture changing to second position information, updating the tag type of the information tag of the first entity article based on the second area information and the second position information;
and updating the information tag of the first entity article based on the specified attribute information of the first entity article and the updated tag type. Illustratively, when the first entity article changes from the explained state to the unexplained state, the tag type of its information tag correspondingly changes from the first tag type to the second tag type; when the first entity article changes from the unexplained state to the explained state, the tag type of its information tag correspondingly changes from the second tag type to the first tag type. Fig. 6 is a schematic diagram illustrating a tag type change according to an exemplary embodiment of the present application. As shown in fig. 6, when the target entity article 610 changes from the explained state to the unexplained state, its information tag changes from the information tag 611 (an information tag of the first tag type) to the information tag 612 (an information tag of the second tag type); at the same time, the target entity article 620 changes from the unexplained state to the explained state, and its information tag changes from the information tag 621 (an information tag of the second tag type) to the information tag 622 (an information tag of the first tag type). The change in the explanation state of a target entity article is reflected in the change of its display area and display position.
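A simple sketch of this update on the terminal side is shown below (Python; the tag contents and state labels are hypothetical).

```python
# Sketch of updating an information tag when the explanation state flips
# (cf. Fig. 6): keep the last known state per item and swap the tag content
# when the state changes. Names and tag contents are illustrative assumptions.
EXPLAINED, UNEXPLAINED = "explained", "unexplained"

def build_tag(name: str, price: str, state: str) -> dict:
    # Explained items get the fuller tag (name + price), others the brief one.
    return {"name": name, "price": price} if state == EXPLAINED else {"price": price}

def on_state_change(tags: dict[str, dict], name: str, price: str,
                    old_state: str, new_state: str) -> None:
    if old_state != new_state:
        tags[name] = build_tag(name, price, new_state)

tags: dict[str, dict] = {"mug": build_tag("mug", "12.99", EXPLAINED)}
on_state_change(tags, "mug", "12.99", EXPLAINED, UNEXPLAINED)   # tag shrinks
print(tags["mug"])   # {'price': '12.99'}
```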
In this embodiment of the application, to further improve the display effect of the information tag, the display size of the information tag may also be adjusted based on the explanation state of the entity commodity, so that information tags of different display sizes are displayed for different explanation states. Taking the first entity article as an example, this process may be implemented as follows:
acquiring first area information and first position information of a first entity article in a live broadcast picture;
determining size information of an information tag of a first entity article based on first area information and first position information of the first entity article;
and generating an information tag of the first entity article based on the specified attribute information of the first entity article and the size information.
In response to the first area information and the first position information indicating that the first physical item is the explained physical item, determining size information of an information tag of the first physical item as a first display size;
determining the size information of the information tag of the first physical item to be a second display size in response to the first area information and the first position information indicating that the first physical item is an unexplained physical item;
wherein the first display size is larger than the second display size.
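A compact sketch of the size selection (Python; the pixel sizes are illustrative assumptions, not values from the embodiment):

```python
# Sketch of choosing the tag's display size from the explanation state.
FIRST_SIZE = (220, 72)    # width, height for an item being explained
SECOND_SIZE = (140, 40)   # smaller size for unexplained items

def tag_size(in_centre: bool, has_largest_area: bool) -> tuple[int, int]:
    explained = in_centre or has_largest_area
    return FIRST_SIZE if explained else SECOND_SIZE

print(tag_size(in_centre=True, has_largest_area=False))    # (220, 72)
print(tag_size(in_centre=False, has_largest_area=False))   # (140, 40)
```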
Schematically, fig. 7 shows a schematic diagram of an information tag according to an exemplary embodiment of the present application, as shown in fig. 7, when each target entity article in a live view is in an unexplained state, a display size of the information tag corresponding to each target entity article is a second display size, and when an explained target entity article 710 exists in the live view, a display size of an information tag 711 corresponding to the target entity article 710 is changed to a first display size.
It should be noted that, the above scheme of adjusting the display size of the information tag and adjusting the tag type of the information tag based on the explanation state of the physical commodity may be combined, that is, when the target physical object in the live broadcast picture is in an unexplained state, the information tag corresponding to the target physical object is set as the information tag of the second tag type with the second display size; when the target entity article in the live broadcast picture is in an explained state, the information tag corresponding to the target entity article is set to be the information tag of the first tag type with the first display size.
That is, for the target terminal, in the live broadcast screen of the target terminal, as the interpretation status of the target physical object changes, the corresponding information tag also changes, taking the first physical object as an example, the first physical object is one of at least two target physical objects, and the process is expressed as follows:
and in response to the change of the explanation state of the first entity article in the live scene, updating an information tag of the first entity article displayed on the live scene in an overlapping mode, wherein the change of the explanation state comprises a change from an explained state to an unexplained state and a change from the unexplained state to the explained state.
The change of the information tag of the first physical object may be represented by at least one of a display size of the information tag, a display position of the information tag, and a display content of the information tag, that is, at least one of the display size, the display position, and the display content of the information tag of the first physical object displayed on the live view in a superimposed manner is updated in response to a change of an explanation state of the first physical object in the live view.
And step 450, the server instructs the target terminal to display a live broadcast picture based on the information tag, and the information tag of the target entity article is superposed and displayed on the live broadcast picture.
In the embodiment of the present application, in order to make the display of the information tag correspond to the target entity article in the live broadcast picture, in response to the presence of the target entity article in the live broadcast picture, the target terminal displays the information tag of the target entity article in an overlapping manner on the live broadcast picture at a position corresponding to the display position of the target entity article. To achieve this effect, the server performs image recognition on the live broadcast picture to obtain the position information of the target entity article in the live broadcast picture, and instructs the target terminal to display the live broadcast picture based on the information tag of the target entity article and that position information.
In the embodiment of the application, in response to a change of the specified attribute information of the target entity article, the server updates the information tag of the target entity article based on the updated specified attribute information and instructs the target terminal to display the live broadcast picture based on the updated information tag; correspondingly, the target terminal, in response to the change of the specified attribute information of the target entity article, updates the information tag of the target entity article that is displayed in an overlapping manner on the live broadcast picture.
When updating the information tag of the target entity article, the server may keep the specified attribute information of the original information tag, or may obtain, on the basis of the specified attribute information of the original information tag, the attribute information difference between the values before and after the update and display that difference in the updated information tag. Correspondingly, the updated information tag of the target entity article displayed in the target terminal includes either the first specified attribute information and the second specified attribute information, or the second specified attribute information and the attribute information difference, where the attribute information difference is the difference between the first specified attribute information and the second specified attribute information, the first specified attribute information is the specified attribute information before the change, and the second specified attribute information is the specified attribute information after the change. Illustratively, when the specified attribute information is price information, the updated information tag may include the updated price information and the price information before the update, or the updated price information and the price difference before and after the update; the price difference may be expressed as a difference value or as a difference magnitude, such as a discount value. Fig. 8 is a schematic diagram illustrating an information tag after the specified attribute information is updated, according to an exemplary embodiment of the present application. As shown in fig. 8, taking the specified attribute information as price information as an example, the information tag 811 corresponding to the target entity article 810 includes both the first price information, 1488, before the change and the second price information, 1288, after the change. Optionally, in order to distinguish the first specified attribute information from the second specified attribute information, they are displayed in different display manners; as shown in fig. 8, the first price information is displayed with a strikethrough to distinguish it from the second price information.
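The two updated-tag forms described above can be sketched as follows (Python; the field names and formatting are assumptions).

```python
# Sketch of rebuilding a tag after the specified attribute (here the price)
# changes, in the two forms the text describes: old + new price, or new price
# plus the difference. Field names are illustrative assumptions.
def updated_price_tag(old_price: float, new_price: float, show_diff: bool) -> dict:
    if show_diff:
        return {"price": new_price, "difference": round(new_price - old_price, 2)}
    # Old price kept but marked so the terminal can render it struck through.
    return {"price": new_price, "old_price": old_price, "old_price_struck": True}

print(updated_price_tag(1488.0, 1288.0, show_diff=False))
# {'price': 1288.0, 'old_price': 1488.0, 'old_price_struck': True}
print(updated_price_tag(1488.0, 1288.0, show_diff=True))
# {'price': 1288.0, 'difference': -200.0}
```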
In order to reduce manual background operations by the target user and improve the effect of updating the information tag content, the information tag content may also be updated in combination with at least one of voice information and a user gesture in the live broadcast scene. Optionally, in this embodiment of the application, voice information in the live broadcast scene may be acquired and speech recognition performed on it; in response to the acquired voice information indicating that the specified attribute information is changed, a specified attribute information change instruction is generated, and the specified attribute information in the information tag of the target entity article is updated based on that instruction, where the updated content of the specified attribute information is also obtained through speech recognition of the voice information in the live broadcast scene. The target entity article corresponding to the specified attribute information change instruction is the entity article in the explained state. Illustratively, in a live broadcast scene, after the anchor says during the explanation that the price is reduced by 300 yuan from the original price, the server obtains "price reduction" and "300 yuan" through speech recognition, and accordingly updates the original price information in the information tag of the target entity article in the explained state to the price information after the 300-yuan reduction.
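As an illustrative sketch of the voice-driven update (Python; the phrasing and the regular expression are assumptions, and the speech recognition itself is outside the sketch):

```python
# Sketch of the voice-driven update: scan the transcript of the anchor's speech
# for a price-change phrase and the reduction amount, then update the tag of
# the item currently being explained. The phrase pattern is an assumption.
import re
from typing import Optional

def parse_price_reduction(transcript: str) -> Optional[float]:
    """Return the reduction amount if the transcript announces a price cut."""
    m = re.search(r"price (?:is )?reduced by (\d+(?:\.\d+)?) yuan", transcript)
    return float(m.group(1)) if m else None

def apply_voice_update(tag: dict, transcript: str) -> dict:
    reduction = parse_price_reduction(transcript)
    if reduction is not None:
        tag = {**tag, "price": tag["price"] - reduction}
    return tag

tag = {"name": "mug", "price": 1588.0}
print(apply_voice_update(tag, "the price is reduced by 300 yuan from the original price"))
# {'name': 'mug', 'price': 1288.0}
```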
Optionally, in this embodiment of the application, a user gesture of the target user in the live broadcast scene may also be acquired and image recognition performed on it; in response to the acquired user gesture indicating that the specified attribute information is changed, a specified attribute information change instruction is generated, and the specified attribute information in the information tag of the target entity article is updated based on that instruction. The target user is the anchor in the live broadcast scene; the user gesture may be an update gesture, and both the correspondence between an update gesture and a specified attribute information update and the update content itself may be preset. The target entity article corresponding to the specified attribute information change instruction is the entity article in the explained state. Illustratively, the anchor may preset "index finger pointing down" as an update gesture and preset the corresponding update content as a reduction of 100 yuan; then, in the live broadcast scene, after recognizing the update gesture through image recognition of the user gesture, the server updates the original price information in the information tag of the target entity article in the explained state to the price information after the 100-yuan reduction.
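A corresponding sketch of the gesture-driven update (Python; the gesture label and the preset amount are assumptions, and gesture recognition itself is assumed to be handled elsewhere):

```python
# Sketch of the gesture-driven update: the anchor pre-registers a mapping from
# a recognised gesture label to an update amount, which the server applies to
# the tag of the item being explained. Labels and amounts are illustrative.
GESTURE_UPDATES: dict[str, float] = {
    "index_finger_down": -100.0,   # preset: reduce the price by 100 yuan
}

def apply_gesture_update(tag: dict, gesture_label: str) -> dict:
    delta = GESTURE_UPDATES.get(gesture_label)
    if delta is not None:
        tag = {**tag, "price": tag["price"] + delta}
    return tag

print(apply_gesture_update({"name": "mug", "price": 1288.0}, "index_finger_down"))
# {'name': 'mug', 'price': 1188.0}
```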
In order to improve the interaction effect of the live broadcast picture, in the embodiment of the application, in response to the target entity article being the entity article currently explained in the live broadcast scene, an interactive control is displayed in the information tag of the target entity article; the interactive control is used to jump the live broadcast picture to the detail information page of the target entity article after a selection operation is received, which reduces the operations a user needs to perform to open the detail information page and improves the interaction efficiency and interaction effect of the live broadcast picture. As shown in fig. 8, the information tag 811 corresponding to the target entity article 810 being explained in the live broadcast picture includes an interactive control 812, which opens the detail information page corresponding to the target entity article 810 when a selection operation of the user is received.
It should be noted that, after the server generates the information tag corresponding to the target entity article based on the specified attribute information of the target entity article, or updates the information tag corresponding to the target entity article, the server may send the live broadcast data stream of the anchor terminal and the information tag corresponding to the target entity article to the viewer terminal separately, so that the viewer terminal displays the information tag corresponding to the target entity article in an overlaid manner on the live broadcast picture; alternatively, the server may send a synthesized live broadcast data stream to the viewer terminal, so that the viewer terminal displays a live broadcast picture on which the information tag corresponding to the target entity article is already displayed in an overlaid manner.
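As an illustration of the two delivery modes just described, the sketch below shows hypothetical payload structures for sending the tag metadata alongside the stream versus sending an already-composited stream; the TagPayload and ViewerMessage names and fields are assumptions for this example, not part of the embodiment.

```python
# Hypothetical payload structures for the two delivery modes (field names assumed):
# (a) the stream and the tag metadata are sent separately and overlaid at the viewer
#     terminal; (b) the server sends a stream with the tags already composited.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TagPayload:
    item_id: str
    text: str                 # rendered tag content, e.g. name and price
    x: float                  # normalized display position in the frame
    y: float

@dataclass
class ViewerMessage:
    video_chunk: bytes                                    # live data stream segment
    tags: List[TagPayload] = field(default_factory=list)  # empty when composited
    composited: bool = False                              # True for mode (b)

# Mode (a): the viewer terminal overlays the tag itself.
separate = ViewerMessage(b"...", tags=[TagPayload("item-001", "Phone X 999 yuan", 0.7, 0.2)])
# Mode (b): the tag is already burned into the frames.
merged = ViewerMessage(b"...", composited=True)
print(len(separate.tags), merged.composited)  # 1 True
```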
In one possible implementation manner, in response to the second entity article being the entity article being explained in the live broadcast scene, the tag information of the second entity article is displayed in an overlaid manner at a specified position on the live broadcast picture, and the tag information may be implemented as a tag card for displaying the related information of the second entity article. The specified position may be set by related personnel, or the user corresponding to the viewer terminal may drag the tag card based on actual requirements to determine its display position, which is not limited in the application. Fig. 9 is a schematic diagram of an information tag according to an exemplary embodiment of the application; as shown in fig. 9, the second entity article 910 is in the explained state, and its information tag is implemented as a tag card 920 displayed at the specified position on the live broadcast picture. Optionally, a virtual ticket related to the second entity article may be provided in the tag card 920; in response to receiving a selection operation of the user on the virtual ticket, the virtual ticket is confirmed as claimed and issued to the account corresponding to the user, where the virtual ticket is used to deduct part of the resources when a resource transfer is performed based on the second entity article.
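As a rough illustration of the virtual ticket flow described above, the following sketch assumes a hypothetical Account structure and a fixed deduction value; it is not a definitive implementation of the embodiment.

```python
# Illustrative sketch of the virtual ticket flow in the tag card: on a selection
# operation, the ticket is issued to the user's account and later deducted when a
# resource transfer is performed for the article. All names and values are assumed.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Account:
    user_id: str
    tickets: List[float] = field(default_factory=list)  # deduction values, in yuan

def claim_ticket(account: Account, deduction: float) -> None:
    """Handle the selection operation on the virtual ticket in the tag card."""
    account.tickets.append(deduction)

def amount_due(price: float, account: Account) -> float:
    """Deduct claimed tickets when resource transfer is performed for the article."""
    return max(price - sum(account.tickets), 0.0)

acct = Account("viewer-42")
claim_ticket(acct, 50.0)        # the user selects a hypothetical 50-yuan virtual ticket
print(amount_due(999.0, acct))  # 949.0
```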
In this embodiment of the application, the anchor terminal may set the display manner of the tag information in the live broadcast picture. Illustratively, the anchor terminal may set the tag information of the target entity article being explained to be displayed according to the display position of the target entity article, or to be displayed in the form of a tag card at a specified position of the live broadcast picture; the tag information of a target entity article that is not being explained may be set to be displayed according to the display position of that target entity article, or not to be displayed at all. The display manners corresponding to the different explanation states may be combined in pairs, which is not limited in the application.
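The display-manner configuration described above can be pictured as a small mapping from explanation state to display mode. The sketch below is a minimal example under assumed names (DisplayMode, display_config); the particular pairing shown is only one of the combinations the embodiment allows.

```python
# Illustrative sketch of a display-manner configuration set at the anchor terminal,
# pairing an explanation state with a display mode. Names and values are assumed.
from enum import Enum

class DisplayMode(Enum):
    FOLLOW_ITEM_POSITION = "follow_item_position"  # displayed at the item's display position
    TAG_CARD_AT_FIXED_POSITION = "tag_card"        # tag card at a specified position
    HIDDEN = "hidden"                              # not displayed

display_config = {
    "explained": DisplayMode.TAG_CARD_AT_FIXED_POSITION,
    "unexplained": DisplayMode.FOLLOW_ITEM_POSITION,  # could also be DisplayMode.HIDDEN
}

def mode_for(state: str) -> DisplayMode:
    return display_config.get(state, DisplayMode.HIDDEN)

print(mode_for("explained").value)  # tag_card
```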
To sum up, in the information labeling method provided by the embodiment of the application, when a target entity article exists in the live broadcast picture, an information tag indicating the specified attribute information of the target entity article is automatically displayed in an overlaid manner on the live broadcast picture, so that the process of displaying the specified attribute information in the live broadcast picture is simplified and the display efficiency of the specified attribute information is improved; displaying the specified attribute information directly in the live broadcast picture also improves its display effect.
Meanwhile, the explanation state of the target entity article can be determined based on the area information and the position information of the target entity article in the live broadcast picture, the tag type and/or display size of the tag information can be determined based on the explanation state of the target entity article, and the information tag can be updated in real time as the anchor terminal adjusts the specified attribute information of the target entity article in real time, thereby improving the display effect of the information tag.
With reference to the embodiment shown in any one of fig. 2 to 4, taking as an example that virtual labels are displayed in an overlaid manner on the live broadcast picture in a live-streaming e-commerce scenario, fig. 10 shows an interaction diagram of an information annotation method according to an exemplary embodiment of the application. The method is executed interactively by an anchor terminal, a live broadcast system, and a viewer terminal. As shown in fig. 10, the information annotation method includes the following steps (an illustrative code sketch of the matching and labeling flow is given after step 1015):
step 1001, the anchor terminal uploads commodity information to the live broadcast system, and correspondingly, the live broadcast system receives the commodity information.
The commodity information may be entered by the anchor through a commodity information entry interface of the anchor terminal; the commodity information may include a commodity image and related information of at least one commodity, and the related information may include the name, brief introduction, price, detail link, and the like of the commodity.
Step 1002, the live system stores the commodity information.
Step 1003, the anchor terminal starts live broadcasting.
Step 1004, the live broadcast system acquires a live broadcast picture and performs image recognition on the live broadcast picture.
Step 1005, the live broadcast system matches the commodity images in the acquired live broadcast picture against the images in the background commodity library.
Step 1006, the live broadcast system judges whether a commodity image in the live broadcast picture matches a background commodity image; if so, step 1007 is executed; if not, step 1009 is executed.
Step 1007, the live system generates a virtual label.
Step 1008, the live broadcast system instructs the viewer terminal to display virtual labels around the commodities in the live broadcast picture.
The live broadcast system instructs the viewer terminal to display each virtual label based on the commodity's display position in the live broadcast picture.
Step 1009, the live broadcast system instructs the viewer terminal to display only the commodity in the live broadcast picture, without a virtual label.
Step 1010, the anchor starts to explain the first commodity in detail.
Step 1011, the live broadcast system monitors the position and area of each commodity in real time, sets the virtual label of the commodity occupying the largest area in the picture to the largest display size, and adds a purchase button to it.
Step 1012, the viewer terminal displays the purchase button on the virtual label of the first commodity.
Step 1013, the anchor terminal adjusts the commodity price in real time.
Step 1014, the live broadcast system synchronizes the price information to the virtual label and updates the virtual label.
Step 1015, the viewer terminal displays two pieces of price information in the same virtual label, namely the commodity price before adjustment and the commodity price after adjustment.
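The following is a minimal, non-authoritative sketch of the matching and labeling flow in steps 1004 to 1015, assuming hypothetical helper types DetectedItem and VirtualLabel and an already-completed image-recognition stage; it is not the live broadcast system's actual implementation.

```python
# Minimal sketch of steps 1004-1015: attach virtual labels to commodities matched
# against the background commodity library, enlarge the label (and add a purchase
# button) for the commodity with the largest area, and keep both prices after an
# adjustment. DetectedItem and VirtualLabel are hypothetical structures.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class DetectedItem:
    bbox: Tuple[float, float, float, float]  # x, y, width, height (normalized)
    library_id: Optional[str]                # None if no background image matched

@dataclass
class VirtualLabel:
    library_id: str
    position: Tuple[float, float]            # displayed near the commodity position
    enlarged: bool = False                   # step 1011: largest-area commodity
    purchase_button: bool = False            # steps 1011-1012: purchase button
    prices: Optional[List[float]] = None     # [old, new] after a price adjustment

def area(item: DetectedItem) -> float:
    return item.bbox[2] * item.bbox[3]

def build_labels(items: List[DetectedItem]) -> List[VirtualLabel]:
    matched = [i for i in items if i.library_id]       # step 1006: matched commodities only
    labels = [VirtualLabel(i.library_id, (i.bbox[0], i.bbox[1])) for i in matched]
    if matched:
        largest = max(matched, key=area)               # step 1011
        for item, label in zip(matched, labels):
            if item is largest:
                label.enlarged = True
                label.purchase_button = True
    return labels

def sync_price(label: VirtualLabel, old_price: float, new_price: float) -> None:
    """Steps 1013-1015: keep both the pre- and post-adjustment prices in the label."""
    label.prices = [old_price, new_price]

items = [DetectedItem((0.1, 0.1, 0.2, 0.3), "item-001"),
         DetectedItem((0.5, 0.4, 0.4, 0.5), "item-002"),
         DetectedItem((0.8, 0.8, 0.1, 0.1), None)]     # unmatched: no label (step 1009)
labels = build_labels(items)
sync_price(labels[1], 1299.0, 999.0)
print([(l.library_id, l.enlarged, l.prices) for l in labels])
# [('item-001', False, None), ('item-002', True, [1299.0, 999.0])]
```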
To sum up, in the information labeling method provided by the embodiment of the application, when a target entity article exists in the live broadcast picture, an information tag indicating the specified attribute information of the target entity article is automatically displayed in an overlaid manner on the live broadcast picture, so that the process of displaying the specified attribute information in the live broadcast picture is simplified and the display efficiency of the specified attribute information is improved; displaying the specified attribute information directly in the live broadcast picture also improves its display effect.
Meanwhile, the explanation state of the target entity article can be determined based on the area information and the position information of the target entity article in the live broadcast picture, the tag type and/or display size of the tag information can be determined based on the explanation state of the target entity article, and the information tag can be updated in real time as the anchor terminal adjusts the specified attribute information of the target entity article in real time, thereby improving the display effect of the information tag.
Fig. 11 is a block diagram of an information annotation apparatus according to an exemplary embodiment of the present application, and as shown in fig. 11, the information annotation apparatus may include:
an interface display module 1110, configured to display a live interface;
a frame display module 1120, configured to display a live frame in the live interface, where the live frame is a frame showing a live scene of an entity article;
a tag display module 1130, configured to, in response to that a target entity item exists in the live view, superimpose and display an information tag of the target entity item on the live view; the information tag is used for indicating the designated attribute information of the target entity article.
In a possible implementation manner, the tag display module 1130 is configured to, in response to a target entity item existing in the live view, display an information tag of the target entity item in an overlaid manner on the live view at a position corresponding to the display position of the target entity item.
In one possible implementation, the apparatus further includes:
and the first label updating module is used for updating the information label of the target entity article, which is displayed on the live broadcast picture in a superposition manner, in response to the change of the specified attribute information of the target entity article.
In a possible implementation manner, the updated information tag of the target entity article includes first specified attribute information and second specified attribute information, where the first specified attribute information is the specified attribute information before being changed, and the second specified attribute information is the specified attribute information after being changed.
In a possible implementation manner, in response to that the target entity article is an entity article explained in the live broadcast scene, an interactive control is displayed in an information tag of the target entity article, and the interactive control is configured to jump the live broadcast picture to a detailed information page of the target entity article after receiving a selection operation.
In a possible implementation manner, the live view includes at least two target physical objects, and the apparatus further includes:
a second tag updating module, configured to update an information tag of a first entity article that is displayed in a superimposed manner on the live view in response to a change in an explanation state of the first entity article in the live view, where the first entity article is one of at least two target entity articles; the change of the explanation state includes a change from an explained state to an unexplained state and a change from an unexplained state to an explained state.
In a possible implementation manner, the second tag updating module is configured to update at least one of a display size, a display position, and a display content of an information tag of a first entity article, which is displayed in an overlaid manner on the live view, in response to a change in an explanation state of the first entity article in the live view.
To sum up, the information labeling device provided in the embodiment of the present application automatically displays, by superimposing and displaying an information tag indicating the specified attribute information of the target entity article on the live broadcast image when the target entity article exists in the live broadcast image, the specified attribute information of the target entity article is displayed, so that the process of displaying the specified attribute information in the live broadcast image is simplified, and the display efficiency of the specified attribute information is improved; the display effect of the specified attribute information is improved by directly displaying the specified attribute information in the live broadcast picture.
Fig. 12 is a block diagram of an information annotation apparatus according to an exemplary embodiment of the present application, and as shown in fig. 12, the information annotation apparatus may include:
an article obtaining module 1210, configured to obtain a target entity article in a live broadcast frame; the live broadcast picture is a picture for displaying a live broadcast scene of an entity article;
a tag generating module 1220, configured to generate an information tag of the target entity item based on the target entity item;
an indicating module 1230, configured to indicate a target terminal to display the live broadcast picture based on the information tag, where the information tag of the target entity item is displayed in a superimposed manner on the live broadcast picture; the information tag is used for indicating the designated attribute information of the target entity article, and the target terminal is a terminal for displaying the live broadcast picture.
In one possible implementation, the article obtaining module 1210 includes:
the image recognition sub-module is used for carrying out image recognition on the live broadcast picture to obtain at least one entity article in the live broadcast picture;
and the image matching sub-module is used for obtaining, from the at least one entity article, the entity article successfully matched with a template image in the template image set as the target entity article.
In one possible implementation manner, the tag generating module 1220 includes:
the attribute information acquisition sub-module is used for acquiring the specified attribute information of the first entity article based on the first template image; the first entity article is one of the target entity articles, and the first template image is the template image in the template image set that matches the first entity article;
and the label generating submodule is used for generating an information label of the first entity article based on the specified attribute information of the first entity article.
In one possible implementation manner, the tag generation sub-module includes:
the information acquisition unit is used for acquiring first area information and first position information of the first entity article in the live broadcast picture;
a type determining unit, configured to determine a tag type of an information tag of the first entity item based on the first area information and the first location information of the first entity item;
a first tag generating unit, configured to generate an information tag of the first entity item based on the specified attribute information of the first entity item and the tag type.
In a possible implementation manner, the type determining unit is configured to determine that the tag type of the information tag of the first entity item is a first tag type in response to the first area information and the first location information indicating that the first entity item is an explained entity item;
in response to the first area information and the first location information indicating that the first entity item is an unexplained entity item, determine that the tag type of the information tag of the first entity item is a second tag type;
the amount of information contained in an information tag of the first tag type is greater than the amount of information contained in an information tag of the second tag type.
In one possible implementation manner, the tag generation sub-module includes:
the information acquisition unit is used for acquiring first area information and first position information of the first entity article in the live broadcast picture;
a size information determining unit, configured to determine size information of the information tag of the first entity item based on the first area information and the first position information of the first entity item;
a second tag generating unit, configured to generate an information tag of the first entity item based on the specified attribute information of the first entity item and the size information.
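As an illustration of how the first area information and first position information might drive the tag type and size decisions described above, the sketch below uses an assumed area threshold, an assumed frame-center rule, and assumed pixel sizes; none of these values are specified by the embodiment.

```python
# Illustrative sketch: deriving a tag type and a display size from the first area
# information (area ratio in the frame) and first position information (normalized
# center). Thresholds, the center rule, and sizes are assumptions for this example.
from typing import Tuple

def infer_explained(area_ratio: float, center: Tuple[float, float]) -> bool:
    """Treat a large item near the middle of the picture as the one being explained."""
    cx, cy = center
    near_middle = 0.3 <= cx <= 0.7 and 0.3 <= cy <= 0.7
    return area_ratio >= 0.15 and near_middle

def tag_type_and_size(area_ratio: float, center: Tuple[float, float]) -> Tuple[str, Tuple[int, int]]:
    if infer_explained(area_ratio, center):
        # first tag type: more information (e.g. name, price, detail link), rendered larger
        return "first_tag_type", (320, 180)
    # second tag type: less information (e.g. name and price only), rendered smaller
    return "second_tag_type", (160, 60)

print(tag_type_and_size(0.25, (0.5, 0.5)))  # ('first_tag_type', (320, 180))
print(tag_type_and_size(0.05, (0.1, 0.8)))  # ('second_tag_type', (160, 60))
```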
To sum up, the information labeling device provided in the embodiment of the present application automatically displays, by superimposing and displaying an information tag indicating the specified attribute information of the target entity article on the live broadcast image when the target entity article exists in the live broadcast image, the specified attribute information of the target entity article is displayed, so that the process of displaying the specified attribute information in the live broadcast image is simplified, and the display efficiency of the specified attribute information is improved; the display effect of the specified attribute information is improved by directly displaying the specified attribute information in the live broadcast picture.
Fig. 13 is a block diagram illustrating the structure of a computer device 1300 according to an example embodiment. The computer device 1300 may be a terminal in the information annotation system shown in fig. 1.
Generally, computer device 1300 includes: a processor 1301 and a memory 1302.
Processor 1301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 1301 may further include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
Memory 1302 may include one or more computer-readable storage media, which may be non-transitory. The memory 1302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1302 is used to store at least one instruction for execution by processor 1301 to implement the methods provided by the method embodiments herein.
In some embodiments, computer device 1300 may also optionally include: a peripheral interface 1303 and at least one peripheral. Processor 1301, memory 1302, and peripheral interface 1303 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral device interface 1303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, display screen 1305, camera assembly 1306, audio circuitry 1307, positioning assembly 1308, and power supply 1309.
In some embodiments, computer device 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyro sensor 1312, pressure sensor 1313, fingerprint sensor 1314, optical sensor 1315, and proximity sensor 1316.
Those skilled in the art will appreciate that the architecture shown in FIG. 13 is not intended to be limiting of the computer device 1300, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
FIG. 14 is a block diagram illustrating the structure of a computer device 1400 in accordance with an exemplary embodiment. The computer device may be implemented as a server in the above-mentioned aspects of the present application.
The computer apparatus 1400 includes a Central Processing Unit (CPU) 1401, a system Memory 1404 including a Random Access Memory (RAM) 1402 and a Read-Only Memory (ROM) 1403, and a system bus 1405 connecting the system Memory 1404 and the Central Processing Unit 1401. The computer device 1400 also includes a basic Input/Output system (I/O system) 1406 that facilitates transfer of information between devices within the computer, and a mass storage device 1407 for storing an operating system 1413, application programs 1414, and other program modules 1415.
The basic input/output system 1406 includes a display 1408 for displaying information and an input device 1409, such as a mouse, keyboard, etc., for user input of information. Wherein the display 1408 and input device 1409 are both connected to the central processing unit 1401 via an input-output controller 1410 connected to the system bus 1405. The basic input/output system 1406 may also include an input/output controller 1410 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input-output controller 1410 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1407 is connected to the central processing unit 1401 through a mass storage controller (not shown) connected to the system bus 1405. The mass storage device 1407 and its associated computer-readable media provide non-volatile storage for the computer device 1400. That is, the mass storage device 1407 may include a computer-readable medium (not shown) such as a hard disk or Compact Disc Read-Only Memory (CD-ROM) drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash Memory or other solid state Memory technology, CD-ROM, Digital Versatile Disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 1404 and mass storage device 1407 described above may collectively be referred to as memory.
According to various embodiments of the present disclosure, the computer device 1400 may also run by being connected, through a network such as the Internet, to a remote computer on the network. That is, the computer device 1400 may be connected to the network 1412 through the network interface unit 1411 connected to the system bus 1405, or may be connected to other types of networks or remote computer systems (not shown) using the network interface unit 1411.
The memory further stores at least one instruction, at least one program, a code set, or an instruction set, and the central processing unit 1401 implements all or part of the steps of the information labeling method shown in the above embodiments by executing the at least one instruction, the at least one program, the code set, or the instruction set.
In an exemplary embodiment, a non-transitory computer readable storage medium including instructions, such as a memory including at least one instruction, at least one program, set of codes, or set of instructions, executable by a processor to perform all or part of the steps of the method shown in any of fig. 2, fig. 3, fig. 4, or fig. 10 described above is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, which comprises computer instructions, which are stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to perform all or part of the steps of the method described in any of the embodiments of fig. 2, fig. 3, fig. 4 or fig. 10.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (18)

1. An information labeling method, characterized in that the method comprises:
displaying a live broadcast interface;
displaying a live broadcast picture in the live broadcast interface, wherein the live broadcast picture is a picture for displaying a live broadcast scene of an entity article;
responding to the existence of a target entity article in the live broadcast picture, and overlaying and displaying an information tag of the target entity article on the live broadcast picture; the information tag is used for indicating the designated attribute information of the target entity article.
2. The method according to claim 1, wherein the displaying, in response to the existence of the target entity article in the live broadcast picture, the information tag of the target entity article on the live broadcast picture in an overlaid manner comprises:
in response to the existence of the target entity article in the live broadcast picture, displaying the information tag of the target entity article in an overlaid manner on the live broadcast picture at a position corresponding to the display position of the target entity article.
3. The method of claim 1, further comprising:
and updating the information label of the target entity article displayed on the live broadcast picture in an overlapped mode in response to the change of the specified attribute information of the target entity article.
4. The method according to claim 3, wherein the updated information tag of the target entity item includes first specified attribute information and second specified attribute information, the first specified attribute information is the specified attribute information before being changed, and the second specified attribute information is the specified attribute information after being changed.
5. The method according to claim 1, wherein in response to the target entity item being an entity item being explained in the live scene, an interactive control is displayed in an information tag of the target entity item, and the interactive control is configured to jump the live screen to a detailed information page of the target entity item after receiving a selection operation.
6. The method of claim 1, wherein the live view includes at least two of the target physical items, and wherein the method further comprises:
updating an information tag of a first entity article displayed in an overlapped mode on the live broadcast picture in response to the fact that the explanation state of the first entity article in the live broadcast scene changes, wherein the first entity article is one of at least two target entity articles; the change of the explanation state includes a change from an explained state to an unexplained state and a change from an unexplained state to an explained state.
7. The method of claim 6, wherein the updating the information tag of the first physical item displayed in an overlaid manner on the live view in response to the change of the explanation state of the first physical item in the live scene comprises:
and updating at least one of a display size, a display position and a display content of an information tag of the first entity article, which is superimposed and displayed on the live broadcast picture, in response to the change of the explanation state of the first entity article in the live broadcast scene.
8. An information labeling method, characterized in that the method comprises:
acquiring a target entity article in a live broadcast picture; the live broadcast picture is a picture for displaying a live broadcast scene of an entity article;
generating an information tag of the target entity article based on the target entity article; the information tag is used for indicating the designated attribute information of the target entity article;
instructing a target terminal to display the live broadcast picture based on the information tag, wherein the information tag of the target entity article is displayed on the live broadcast picture in an overlapped mode; the target terminal is a terminal displaying the live broadcast picture.
9. The method of claim 8, wherein the obtaining of the target physical object in the live view comprises:
performing image recognition on the live broadcast picture to obtain at least one entity article in the live broadcast picture;
and obtaining the entity article successfully matched with the template image in the template image set from the at least one entity article as the target entity article.
10. The method according to claim 9, wherein the generating an information tag of the target physical item based on the target physical item comprises:
acquiring specified attribute information of a first entity article based on a first template image; the first entity article is one of the target entity articles, and the first template image is a template image in the template image set that matches the first entity article;
generating an information tag of the first entity item based on the specified attribute information of the first entity item.
11. The method of claim 10, wherein the generating an information tag for the first physical item based on the specified attribute information of the first physical item comprises:
acquiring first area information and first position information of the first entity article in the live broadcast picture;
determining a tag type of an information tag of the first physical item based on the first area information and the first location information of the first physical item;
generating an information tag of the first entity item based on the specified attribute information of the first entity item and the tag type.
12. The method of claim 11, wherein the determining a tag type of an information tag of the first physical item based on the first area information and the first location information of the first physical item comprises:
in response to the first area information and the first location information indicating that the first physical item is an explained physical item, determining that the tag type of the information tag of the first physical item is a first tag type;
in response to the first area information and the first location information indicating that the first physical item is an unexplained physical item, determining that the tag type of the information tag of the first physical item is a second tag type;
the information amount contained in the information tag of the first tag type is greater than the information amount contained in the information tag of the second tag type.
13. The method of claim 10, wherein the generating an information tag for the first physical item based on the specified attribute information of the first physical item comprises:
acquiring first area information and first position information of the first entity article in the live broadcast picture;
determining size information of an information tag of the first physical item based on the first area information and the first location information of the first physical item;
generating an information tag of the first entity item based on the specified attribute information of the first entity item and the size information.
14. An information labeling apparatus, comprising:
the interface display module is used for displaying a live broadcast interface;
the picture display module is used for displaying a live broadcast picture in the live broadcast interface, wherein the live broadcast picture is a picture for displaying a live broadcast scene of an entity article;
the tag display module is used for responding to the existence of a target entity article in the live broadcast picture and overlaying and displaying an information tag of the target entity article on the live broadcast picture; the information tag is used for indicating the designated attribute information of the target entity article.
15. An information labeling apparatus, comprising:
the article acquisition module is used for acquiring a target entity article in a live broadcast picture; the live broadcast picture is a picture for displaying a live broadcast scene of an entity article;
the label generating module is used for generating an information label of the target entity article based on the target entity article;
the indicating module is used for indicating a target terminal to display the live broadcast picture based on the information tag, and the information tag of the target entity article is superposed and displayed on the live broadcast picture; the information tag is used for indicating the designated attribute information of the target entity article, and the target terminal is a terminal for displaying the live broadcast picture.
16. A computer device comprising a processor and a memory, the memory storing at least one computer program that is loaded and executed by the processor to implement the information annotation method according to any one of claims 1 to 13.
17. A computer-readable storage medium, in which at least one computer program is stored, which is loaded and executed by a processor to implement the information labeling method according to any one of claims 1 to 13.
18. A computer program product, characterized in that it comprises at least one computer program which is loaded and executed by a processor to implement the information annotation method according to any one of claims 1 to 13.
CN202111037637.4A 2021-09-06 2021-09-06 Information labeling method and device, computer equipment and storage medium Pending CN114282031A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111037637.4A CN114282031A (en) 2021-09-06 2021-09-06 Information labeling method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111037637.4A CN114282031A (en) 2021-09-06 2021-09-06 Information labeling method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114282031A true CN114282031A (en) 2022-04-05

Family

ID=80868477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111037637.4A Pending CN114282031A (en) 2021-09-06 2021-09-06 Information labeling method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114282031A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115098764A (en) * 2022-05-18 2022-09-23 北京达佳互联信息技术有限公司 Multimedia processing method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
WO2021047396A1 (en) Image processing method and apparatus, electronic device and computer-readable storage medium
US10026229B1 (en) Auxiliary device as augmented reality platform
US20190333478A1 (en) Adaptive fiducials for image match recognition and tracking
CN111787242B (en) Method and apparatus for virtual fitting
US20160050465A1 (en) Dynamically targeted ad augmentation in video
CN107633441A (en) Commodity in track identification video image and the method and apparatus for showing merchandise news
US20150070347A1 (en) Computer-vision based augmented reality system
US20140079281A1 (en) Augmented reality creation and consumption
WO2021213067A1 (en) Object display method and apparatus, device and storage medium
EP3425483B1 (en) Intelligent object recognizer
CN109740571A (en) The method of Image Acquisition, the method, apparatus of image procossing and electronic equipment
US20140078174A1 (en) Augmented reality creation and consumption
WO2022016915A1 (en) Advertisement information positioning method and corresponding apparatus therefor, advertisement information display method and corresponding apparatus therefor, device, and medium
CN111491187B (en) Video recommendation method, device, equipment and storage medium
TWI795762B (en) Method and electronic equipment for superimposing live broadcast character images in real scenes
US20170214980A1 (en) Method and system for presenting media content in environment
KR20120071444A (en) Method for providing advertisement by using augmented reality, system, apparatus, server and terminal therefor
CN112017300A (en) Processing method, device and equipment for mixed reality image
CN113766296A (en) Live broadcast picture display method and device
CN111308707A (en) Picture display adjusting method and device, storage medium and augmented reality display equipment
CN111724231A (en) Commodity information display method and device
CN111598996A (en) Article 3D model display method and system based on AR technology
CN114282031A (en) Information labeling method and device, computer equipment and storage medium
CN105760420B (en) Realize the method and device with multimedia file content interaction
US11361523B2 (en) Integrated rendering method for various extended reality modes and device having thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination