CN111246232A - Live broadcast interaction method and device, electronic equipment and storage medium - Google Patents

Info

Publication number
CN111246232A
CN111246232A (application CN202010052310.3A)
Authority
CN
China
Prior art keywords
live
target area
virtual resource
virtual
live broadcast
Prior art date
Legal status
Pending
Application number
CN202010052310.3A
Other languages
Chinese (zh)
Inventor
马晓昆
Current Assignee
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Huaduo Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Huaduo Network Technology Co Ltd
Priority to CN202010052310.3A
Publication of CN111246232A
Legal status: Pending

Classifications

    • H04N21/4316: Generation of visual interfaces for content selection or interaction; content or additional data rendering involving specific graphical features for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N21/2187: Live feed
    • H04N21/44222: Analytics of user selections, e.g. selection of programs or purchase activity
    • H04N21/4532: Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • H04N21/4784: Supplemental services, e.g. shopping applications, receiving rewards
    • H04N21/4788: Supplemental services communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a live broadcast interaction method and apparatus, an electronic device, and a storage medium, relating to the technical field of live broadcast. The method includes: acquiring a selected target area in a live broadcast interface; acquiring a virtual resource corresponding to the target area; and when the virtual resource is presented successfully, generating a composite live broadcast interface comprising the live broadcast interface and the virtual resource, wherein the virtual resource is composited into the target area of the live broadcast interface. With this method and apparatus, a corresponding virtual resource can be dynamically configured according to the target area selected in the live interface, and once the virtual resource is presented successfully it is rendered into that target area, thereby improving interactivity among users.

Description

Live broadcast interaction method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of live broadcast technologies, and in particular, to a live broadcast interaction method and apparatus, an electronic device, and a storage medium.
Background
With the development of network technology, real-time video communication such as webcast and video chat rooms has become an increasingly popular form of entertainment. During real-time video communication, interactivity among users can be increased by giving gifts and showing special effects. For example, in a live scene, the anchor user streams in a live room, and viewer users watch the anchor's live stream at a viewer client. To increase interactivity between the anchor user and the viewer users, a viewer user can select a specific special-effect gift to present to the anchor user, so that a corresponding special effect is shown at a specific position of the live interface. However, the current live interaction mode between users is limited, and interactivity between users is low.
Disclosure of Invention
The embodiments of the present application provide a live broadcast interaction method and apparatus, an electronic device, and a storage medium, which can improve live broadcast interaction among users.
In a first aspect, an embodiment of the present application provides a live broadcast interaction method, where the method includes: acquiring a selected target area in a live broadcast interface; acquiring virtual resources corresponding to the target area; and when the virtual resource is presented successfully, generating a composite live broadcast interface comprising the live broadcast interface and the virtual resource, wherein the virtual resource is composited in the target area in the live broadcast interface.
In a second aspect, an embodiment of the present application provides a live broadcast interaction apparatus, including: the system comprises a target acquisition module, a resource acquisition module and a resource synthesis module. The target acquisition module is used for acquiring a selected target area in the live broadcast interface; the resource acquisition module is used for acquiring virtual resources corresponding to the target area; the resource synthesis module is used for generating a synthesis live broadcast interface containing the live broadcast interface and the virtual resource when the virtual resource is presented successfully, wherein the virtual resource is synthesized in the target area in the live broadcast interface.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory; one or more processors coupled with the memory; one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the live interaction method provided by the first aspect above.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a program code is stored in the computer-readable storage medium, and the program code may be called by a processor to execute the live broadcast interaction method provided in the first aspect.
According to the live broadcast interaction method and apparatus, electronic device, and storage medium described above, the selected target area in the live interface is obtained, the virtual resource corresponding to the target area is obtained, and when the virtual resource is presented successfully, a composite live interface comprising the live interface and the virtual resource is generated, wherein the virtual resource is composited into the selected target area of the live interface. In this way, by selecting different areas in the live interface, the virtual resources corresponding to those areas can be dynamically configured; when a virtual resource is presented successfully, it is rendered into the corresponding selected area of the live interface. Thus, whenever an area is selected in the live interface, the corresponding virtual resource can be rendered in that area, which enriches the live interaction modes among users, improves the rendering effect of virtual resources, and increases the interest of the interaction.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
Fig. 1 shows an application scene diagram of a live broadcast interaction method provided in an embodiment of the present application.
Fig. 2 shows a flowchart of a live interaction method according to an embodiment of the present application.
Fig. 3 shows an interface schematic diagram provided in an embodiment of the present application.
Fig. 4 shows another interface schematic diagram provided in the embodiment of the present application.
Fig. 5 shows a schematic view of another interface provided in the embodiment of the present application.
Fig. 6 shows a schematic view of still another interface provided in the embodiment of the present application.
Fig. 7 shows a schematic view of still another interface provided in an embodiment of the present application.
Fig. 8 is a flowchart illustrating a live interaction method according to another embodiment of the present application.
Fig. 9 shows a schematic flow chart of step S220 in fig. 8.
Fig. 10 shows a schematic flow chart of step S221 in fig. 9.
Fig. 11 shows a schematic flow chart of step S222 in fig. 9.
Fig. 12 shows a device interaction flow diagram applicable to the live broadcast interaction method provided in the embodiment of the present application.
Fig. 13 shows a schematic flow chart of step S230 in fig. 8.
Fig. 14 is a flowchart illustrating a live interaction method according to yet another embodiment of the present application.
Fig. 15 is a flowchart illustrating a live interaction method according to still another embodiment of the present application.
Fig. 16 is a schematic overall flow chart of a live broadcast interaction method provided by an embodiment of the present application.
Fig. 17 shows a block diagram of a live interaction device according to an embodiment of the present application.
Fig. 18 shows a block diagram of an electronic device according to an embodiment of the present application.
Fig. 19 illustrates a storage unit for storing or carrying program codes for implementing a live interaction method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Referring to fig. 1, fig. 1 is a schematic diagram of an application scenario of the live broadcast interaction method provided in an embodiment of the present application. The scenario includes a live interaction system 10, which comprises a terminal device 100 and a server 200. The terminal device 100 and the server 200 are connected over a wireless or wired network and can exchange data. In some embodiments, there may be multiple terminal devices 100: the server 200 may be communicatively connected to each of them, and the terminal devices 100 may also be communicatively connected to one another through the internet, using the server 200 as a transmission medium to exchange data with each other.
In this embodiment, the terminal device 100 may be a mobile phone, a smart phone, a notebook computer, a desktop computer, a tablet computer, a Personal Digital Assistant (PDA), a media player, a smart television, a wearable electronic device, or the like; the specific type of terminal device is not limited in this embodiment. The server 200 may be a single server, a server cluster, a local server, or a cloud server; the specific type of server is likewise not limited in the embodiments of the present application.
In some embodiments, a client may be installed in the terminal device 100. The client may be a computer Application (APP) installed on the terminal device 100, or may be a Web client, which may refer to an Application developed based on a Web architecture. In some embodiments, a user logs in through an account at a client, and all information corresponding to the account can be stored in the storage space of the server 200. The information corresponding to the account includes information input by the user through the client, information received by the user through the client, and the like.
In some implementations, the client may be an application of a real-time video communication platform. As one way, the client may be an application of a live platform, and live content may be displayed on its live interface. Clients can be divided into an anchor client used by the anchor user and a viewer client used by viewer users. The anchor uploads video captured by the local camera to the server 200 through the anchor client, and the server 200 forwards the live video to the clients of all viewers in the same channel (or live room) as the anchor, so that the live video is displayed on the live interface of each viewer client.
It should be noted that a user can switch between the two roles of anchor and viewer: the user acts as an anchor when streaming in his or her own live room, and as a viewer when watching a live stream in another user's live room.
Further, the client may also receive a trigger event (e.g., a click event, a touch event) input by the user, where the trigger event may act on a manipulation object displayed on the live interface. On receiving the trigger event, the client can execute the operation corresponding to the manipulation object on which the trigger event acts. The manipulation object may be the whole picture displayed on the live interface or particular display content within the picture.
As one way, the manipulation object may be a virtual gift displayed in the picture, and the viewer client may trigger a corresponding operation when it receives a trigger event acting on the virtual gift. For example, interaction data is generated and sent to the server 200; the server 200 forwards the interaction data to the anchor client of the same channel (or live room); the anchor client then composites the virtual gift into a video frame according to the interaction data, puts the video frame containing the virtual gift into the live video stream, and sends the stream to the server 200; finally, the server 200 transmits the live video stream to the viewer clients, which display the virtual gift. In this way, when watching a live stream in a live room, a user can present a virtual gift to the anchor of that room and can also see the virtual gifts presented by other users, realizing interaction both between viewers and the anchor and among viewers.
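The relay flow described above (viewer client to server to the other clients in the same channel) can be sketched as follows. The class and field names here are illustrative assumptions, not part of the patent:

```python
class Server:
    """Relays interaction data among clients of the same channel (live room)."""

    def __init__(self):
        self.channels = {}  # channel_id -> list of joined clients

    def join(self, channel_id, client):
        self.channels.setdefault(channel_id, []).append(client)

    def forward(self, channel_id, sender, message):
        # Forward interaction data to every other client in the same channel.
        for client in self.channels.get(channel_id, []):
            if client is not sender:
                client.receive(message)


class Client:
    def __init__(self, name):
        self.name = name
        self.inbox = []  # interaction messages received from the server

    def receive(self, message):
        self.inbox.append(message)


server = Server()
anchor = Client("anchor")
viewer_a = Client("viewer_a")
viewer_b = Client("viewer_b")
for c in (anchor, viewer_a, viewer_b):
    server.join("room_1", c)

# viewer_a presents a virtual gift; the server relays the interaction data
# to the anchor client (which would composite it) and to the other viewers.
server.forward("room_1", viewer_a, {"type": "gift", "gift_id": 7})
```

In a real platform the "message" would be an encoded protocol packet rather than a dict, but the fan-out pattern is the same.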
However, on current live platforms the interaction with virtual gifts composited into the live video stream is limited: most users simply open the virtual gift panel to give a virtual gift, the anchor client composites the gift directly onto the live video stream, and the gift merely follows the face in the video (for example, rotating along with the nose). This single gift-giving interaction mode limits how gifts can be given, limits the rendering effect of the virtual gift, and results in low interactivity between users.
To address the above drawbacks, the inventor, after long-term research, proposes the live broadcast interaction method and apparatus, electronic device, and storage medium of the embodiments of the present application, which enrich the live interaction manner and improve the live interaction effect. This is described in detail below through specific embodiments.
Referring to fig. 2, fig. 2 is a schematic flowchart of a live broadcast interaction method provided in an embodiment of the present application. The method is applicable to an electronic device, where the electronic device may be the terminal device or the server described above. The terminal device may be the terminal device corresponding to the anchor client or the one corresponding to the viewer client; the specific electronic device is not limited here. In a specific embodiment, the live interaction method can also be applied to the live interaction apparatus 600 shown in fig. 17 and the electronic device 800 shown in fig. 18.
The flow shown in fig. 2 will be described in detail below. The live broadcast interaction method can comprise the following steps:
step S110: and acquiring the selected target area in the live broadcast interface.
In the embodiments of the present application, a user can watch the anchor's live video through the live interface of the client. The live video may be live content recorded by the anchor, such as a talent performance like singing, dancing, or gaming, and may include the anchor himself or herself. When the user wants to donate a virtual resource to the anchor, the user may select a certain area in the live interface so that the donated virtual resource is shown in that area. A virtual resource can be a virtual gift, and may include virtual wearables such as virtual earrings, virtual glasses, virtual hats, and virtual hair; virtual special effects such as virtual rockets, virtual fireworks, virtual cloud-and-fog animations, and virtual makeup; or virtual points, virtual coins, and the like. The specific virtual resource is not limited in this application.
Specifically, when the client receives a trigger event (such as a click event or a touch event) of a user acting on the live interface, the client may acquire the selected target area in the live interface. The target area may be identified by the coordinate value corresponding to the trigger event in the live interface. As an embodiment, the coordinate value may be represented by coordinates (x, y) in a display coordinate system, x being the abscissa and y the ordinate. In one example, the display coordinate system may take the top-left corner of the video frame image of the live video as its origin, with the positive x-axis pointing horizontally right and the positive y-axis pointing vertically down. In another example, the display coordinate system may instead take the top-left corner of the live interface as its origin, with the same axis directions. Of course, the present application does not limit the manner in which the display coordinate system is constructed. In some embodiments, the target area may also be a recognized area identifier, for example a background area or portrait area identifier, or a face area or hair area identifier.
In one example, when the client is installed on a computer, a user can click and select any region in the live interface through a mouse, so that the client can acquire the selected target region in the live interface. In another example, when the client is installed on a mobile phone, a user can select any area in the live interface through finger touch, so that the client can acquire the selected target area in the live interface. For example, referring to fig. 3, when the user clicks the hair of the anchor 111 in the live interface 10 with a finger, the client may obtain the coordinate value of the click position 20 in the current live video frame, e.g., (80, 40).
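The coordinate-based identification of a target area can be illustrated with a small sketch. Assuming (hypothetically) that the live video occupies a sub-region of the interface, a click given in interface coordinates can be translated into the video-frame coordinate system described above; the offset values below are illustrative:

```python
def interface_to_frame(click_x, click_y, frame_left, frame_top):
    """Translate a click from interface coordinates into video-frame
    coordinates (origin at the frame's top-left, x right, y down)."""
    return (click_x - frame_left, click_y - frame_top)


# Suppose the live video's top-left corner sits at (0, 120) in the
# interface; a click at interface position (80, 160) then maps to frame
# position (80, 40), matching the example coordinate in the text.
print(interface_to_frame(80, 160, 0, 120))  # (80, 40)
```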
In some embodiments, live videos of at least two anchors may also be displayed in the live interface. For example, referring to fig. 4, when anchor A and anchor B are in a competition (PK) interaction, the two anchors can connect with each other by video, so that the live interface 10 of the client simultaneously displays the live content of both anchors: the first area 11 displays the live content of anchor A, and the second area 12 displays the live content of anchor B. When the user wants to give a virtual resource to anchor A, the user can select a certain area within anchor A's live video area in the live interface, so that the donated virtual resource is shown in that area of anchor A's live video. Thus, when the live videos of at least two anchors are displayed in the live interface, the client can determine the specific recipient of the virtual resource from the specific position of the target area selected by the user.
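Determining the recipient anchor from the position of the target area amounts to a simple hit test over the layout regions; the region boundaries below are illustrative assumptions, not values from the patent:

```python
def region_of(click_x, regions):
    """Return the anchor whose live-video region contains the click
    (regions are split along the x-axis in this sketch)."""
    for name, (left, right) in regions.items():
        if left <= click_x < right:
            return name
    return None


# In a PK layout the interface is split into a first area (anchor A) and
# a second area (anchor B); the widths here are made up for illustration.
regions = {"anchor_A": (0, 360), "anchor_B": (360, 720)}
print(region_of(100, regions))  # anchor_A
```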
In some embodiments, the live video may occupy the entire live interface, or only a partial area of it. The specific layout of the live interface can be set according to user requirements and is not limited here. Optionally, the selected target area may lie within the live video or in another area outside it. For example, referring to fig. 5, fig. 5 is a schematic diagram of a live interface in which the live video area 13 occupies a partial area of the live interface 10, while the other areas 14 may be used to display interactive information, such as messages in which viewers send gifts and chat messages among viewers. The user may click on any area in the live video area 13, or on any area in the other areas 14.
In some embodiments, after the client acquires the target area, the client may also send the target area to the server, so that the server may acquire the selected target area in the live interface.
Step S120: and acquiring the virtual resource corresponding to the target area.
In the embodiment of the application, when the selected target area in the live interface is obtained, the virtual resource corresponding to the target area can be obtained.
In some embodiments, the virtual resource may be correlated with the target area, i.e., the rendering effect of the virtual resource may match the target area. Different target areas may correspond to different virtual resources, so that when a user clicks different areas in the live interface, different virtual resources can be donated. For example, when the anchor's hair is selected in the live interface, the corresponding virtual resource may be a virtual hair clip, virtual hat, virtual hair band, or other virtual headwear. For another example, when a blank area above the anchor's head is selected in the live interface, the corresponding virtual resource may be a virtual special effect such as a virtual firework or a virtual airplane.
In some embodiments, a database may be established to store the correspondence between regions and virtual resources, so that when the target area is obtained, the virtual resource corresponding to it can be looked up in the database. As one way, the database may be stored on the terminal device where the viewer client is located, so that when the viewer client obtains the target area it can obtain the corresponding virtual resource directly from the locally stored database. As another way, the database may be stored on the server; the client then sends the acquired target area to the server, the server finds the corresponding virtual resource in the database, and returns it to the client.
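The region-to-resource correspondence can be sketched as a simple lookup table; the region identifiers and resource names below are illustrative, not taken from the patent:

```python
# Hypothetical correspondence table between recognized region identifiers
# and the virtual resources configured for them.
RESOURCE_DB = {
    "hair": ["virtual hat", "virtual hair clip", "virtual hair band"],
    "face": ["virtual glasses", "virtual makeup"],
    "background": ["virtual firework", "virtual airplane"],
}


def resources_for(region_id):
    """Look up the virtual resources configured for a recognized region;
    an unknown region yields an empty candidate list."""
    return RESOURCE_DB.get(region_id, [])


print(resources_for("hair"))  # ['virtual hat', 'virtual hair clip', 'virtual hair band']
```

Whether this table lives on the viewer's terminal device or on the server only changes where the lookup runs, as the two storage options above describe.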
In some embodiments, when the client acquires the virtual resource corresponding to the target area, the virtual resource may be displayed in the live interface, so that the user may preview the virtual resource first and then determine whether to give the virtual resource. For example, referring to fig. 3 and 6, when a user clicks the hair of the anchor 111 in the live interface 10 with a finger, a virtual resource window 30 corresponding to the hair may be displayed on the live interface 10, and one or more virtual resources may be included in the virtual resource window 30.
Step S130: and when the virtual resource is presented successfully, generating a composite live broadcast interface comprising the live broadcast interface and the virtual resource, wherein the virtual resource is composited in the target area in the live broadcast interface.
In the embodiments of the present application, when the user successfully gives the virtual resource corresponding to the target area, a composite live interface comprising the live interface and the virtual resource may be generated, wherein the virtual resource is composited into the target area of the live interface. Further, the client can display the composite live interface, so that the user can watch the display effect of the virtual resource through it.
For example, referring to figs. 3, 6 and 7, when a user clicks the head of the anchor 111 in the live interface 10, the virtual resource window 30 pops up on the live interface 10. After the user selects gift 1 in the virtual resource window 30 and clicks the "give away" button, and the donation succeeds, the user can view through the client the display effect of gift 1 superimposed on the head of the anchor 111 in the live interface 10. Here, assuming gift 1 is a virtual hat 31, the virtual hat 31 is displayed superimposed on the head of the anchor 111 and can rotate along with the rotation of the anchor's head.
In some embodiments, the process of compositing the virtual resource with the live video may be performed by the anchor client. As one way, the viewer may send a presentation instruction for the virtual resource to the server through the viewer client, and the server then forwards the instruction to the anchor client. When the anchor client receives the presentation instruction, it can extract a video frame image from the live video and process the image according to the virtual resource to extract the information needed for compositing, such as the rendering position of the virtual resource in the video frame image. The anchor client then composites the virtual resource to the target position of the video frame image according to the rendering position to obtain a composited live video, and encodes and packages it into a composite video stream. After the server forwards the composite video stream to the viewer client, the viewer client can generate the composite live interface from it. The target position at which the virtual resource is composited into the video frame image corresponds to the target area.
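The compositing step itself, overlaying the virtual resource at a rendering position inside a video frame, can be sketched as follows. Frames are modeled here as 2D character grids rather than real pixel buffers, purely for illustration:

```python
def composite(frame, resource, top, left):
    """Overlay the resource grid onto the frame at (top, left),
    returning a new frame and leaving the source frame unchanged."""
    out = [row[:] for row in frame]  # copy each row of the frame
    for r, row in enumerate(resource):
        for c, pixel in enumerate(row):
            out[top + r][left + c] = pixel
    return out


# A 4x6 "frame" of background pixels and a 1x2 "virtual hat" resource.
frame = [["." for _ in range(6)] for _ in range(4)]
hat = [["#", "#"]]

# Composite the hat at rendering position (row 0, column 2).
composited = composite(frame, hat, 0, 2)
print(composited[0])  # ['.', '.', '#', '#', '.', '.']
```

A real implementation would blend pixel buffers (and handle alpha, scaling, and rotation with the head pose), but the placement logic follows this pattern.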
In other embodiments, the viewer client may perform the composition of the virtual resource with the live video. In one approach, after the anchor client identifies the rendering position of the virtual resource in the video frame image, it can encode and package the rendering position together with the live video to form a live video stream and send it to the server, so that the rendering position is forwarded to the audience client together with the live video via the server. After receiving the live video stream, the audience client can extract the video frame image and the rendering position of the virtual resource from the live video stream, and then synthesize the virtual resource and the video frame image according to the rendering position to generate a composite live interface, where the composite live interface can include the live interface and the virtual resource synthesized into the target area in the live interface. The video frame images in the live video stream can be used to generate the live interface.
As another way, the virtual resource may be preset with a corresponding composite position, and when the virtual resource is presented successfully, the viewer client may extract a video frame image from the received live video stream according to the preset composite position, so as to determine a rendering position of the virtual resource in the video frame image according to the preset composite position. And then the audience client side can synthesize the virtual resources and the video frame images according to the rendering position so as to generate a synthesized live broadcast interface.
In still other embodiments, the server may perform the composition of the virtual resource and the live video. The user can send a presentation instruction for the virtual resource to the server through the client, and when the server receives the live video stream sent by the anchor client, the server can synthesize the virtual resource into the live video stream to obtain a composite video stream. After the composite video stream is forwarded to the viewer client by the server, the viewer client can generate a composite live interface according to the composite video stream.
It should be understood that the above-mentioned combination of the virtual resource and the live video is only an example, and is not limited in this application.
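As a concrete illustration of the composition step described in the foregoing modes, the following minimal sketch overlays a virtual resource onto a video frame at a given rendering position. It is a simplified model (2D grayscale pixel grids and made-up values), not the actual implementation of any client or server in this application.

```python
# Minimal sketch (not the patent's implementation): compositing a virtual
# resource onto a video frame at a given rendering position. Frames are
# modeled as 2D grayscale pixel grids; a real client would operate on
# decoded RGB frames with alpha blending.

def composite_frame(frame, resource, render_pos, alpha=1.0):
    """Overlay `resource` onto `frame` with its top-left at `render_pos`.

    frame      -- list of rows of pixel values (0-255)
    resource   -- smaller list of rows of pixel values
    render_pos -- (row, col) target position, e.g. derived from the target area
    alpha      -- opacity of the resource (1.0 = fully opaque)
    """
    out = [row[:] for row in frame]          # copy; leave the original frame intact
    r0, c0 = render_pos
    for r, res_row in enumerate(resource):
        for c, val in enumerate(res_row):
            rr, cc = r0 + r, c0 + c
            if 0 <= rr < len(out) and 0 <= cc < len(out[0]):  # clip at frame edges
                out[rr][cc] = round(alpha * val + (1 - alpha) * out[rr][cc])
    return out

if __name__ == "__main__":
    frame = [[0] * 6 for _ in range(4)]      # blank 6x4 "video frame"
    hat = [[255, 255], [255, 255]]           # 2x2 "virtual hat"
    composed = composite_frame(frame, hat, (1, 2))
    print(composed[1])  # → [0, 0, 255, 255, 0, 0]: hat pixels at columns 2-3
```

In practice this per-frame overlay would be applied to every frame of the live video before re-encoding, which is why the composition can run on the anchor client, the audience client, or the server interchangeably.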
In some embodiments, when at least two anchor live videos are displayed in the live interface, the client may determine a specific donation object of the virtual resource through a specific position of the target area selected by the user. Optionally, the client may obtain identification information corresponding to the presentation object, where the identification information may be a client identifier, a live broadcast room number, an anchor ID account registered on the client, an anchor nickname, and the like, and is not limited herein. One piece of identification information can uniquely correspond to one donation object, and different donation objects can be distinguished according to the identification information.
As one mode, when the audience client or the server performs the composition of the virtual resource and the live video, it may determine, from the at least two anchor live videos, the live video of the target anchor into which the virtual resource is to be synthesized according to the identification information corresponding to the presentation object, and then synthesize the virtual resource into the live video of the target anchor. As another mode, when the anchor client performs the composition of the virtual resource and the live video, the audience client sends a presentation instruction for the virtual resource to the server, where the presentation instruction may include the identification information corresponding to the presentation object, so that the server can accurately forward the presentation instruction to the anchor client of the target anchor according to the identification information, and the anchor client of the target anchor can synthesize the virtual resource into the live video of the target anchor.
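The selection of the target anchor's live video by identification information can be sketched as a simple lookup; the room numbers and stream labels below are hypothetical placeholders, not actual identifiers from this application.

```python
# Illustrative sketch: selecting the target anchor's live video among
# several anchors by identification info (here a live room number).
# All keys and values are made-up placeholders.

LIVE_VIDEOS = {
    "room_1001": "live video stream of anchor A",
    "room_1002": "live video stream of anchor B",
}

def select_target_video(identification):
    """Pick the live video into which the virtual resource is composited."""
    return LIVE_VIDEOS.get(identification)

print(select_target_video("room_1002"))  # → live video stream of anchor B
```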
It should be noted that the present application is not only applicable to live scenes, but also to other real-time video communication scenes. For example, during a WeChat video chat, either party can click any area in the other party's video, and a corresponding virtual resource window then pops up for selecting a virtual resource to be superimposed and synthesized into the other party's video. The specific application scenario is not limited herein.
It can be understood that, in the embodiment of the present application, the above steps may be performed locally by a client of the terminal device, may also be performed in the server, and may also be performed by the terminal device and the server separately, and according to different actual application scenarios, tasks may be allocated according to requirements, so as to implement an optimized live broadcast interactive experience, which is not limited herein.
According to the live broadcast interaction method, the selected target area in the live broadcast interface is obtained, the virtual resource corresponding to the target area is obtained, and when the virtual resource is presented successfully, a composite live broadcast interface comprising the live broadcast interface and the virtual resource is generated, wherein the virtual resource is synthesized into the selected target area in the live broadcast interface. Therefore, by selecting different areas in the live broadcast interface, the virtual resources corresponding to the selected areas can be dynamically configured, and when a virtual resource is presented successfully, it can be rendered into the corresponding selected area in the live broadcast interface. In this way, whenever an area is selected in the live broadcast interface, the corresponding virtual resource can be rendered in that area, which enriches the live broadcast interaction modes among users, improves the rendering effect of the virtual resources, and increases the interest of the interaction.
Referring to fig. 8, fig. 8 is a flowchart illustrating a live broadcast interaction method according to another embodiment of the present application, which is applicable to the electronic device, and the live broadcast interaction method includes:
step S210: and acquiring the selected target area in the live broadcast interface.
Step S220: and acquiring the virtual resource corresponding to the target area.
In the embodiment of the present application, step S210 and step S220 may refer to the foregoing embodiments, and are not described herein again.
In some embodiments, a certain body part of the anchor in the live broadcast interface can be selected to obtain virtual resources strongly related to that body part for presentation, which improves the interactivity between the audience and the anchor, increases the interest of the interaction, and improves the retention rate of the live broadcast room. Therefore, as one mode, when the selected target area in the live broadcast interface is acquired, the target area can be identified to determine whether it is a human body part. Specifically, referring to fig. 9, step S220 may include:
step S221: and identifying the target area to obtain the human body part information corresponding to the target area.
In some embodiments, when the client acquires a selected target area in the live interface, the client may identify the target area to obtain the human body part information corresponding to the target area. As one mode, when the client detects the selection operation on the target area, the client may capture a screenshot of the current live broadcast interface and obtain the coordinate value identifier of the target area. The client can then determine, from the screenshot and the target area, the area in the screenshot where the coordinate value identifier is located, and identify that area to obtain the corresponding human body part information. The human body part information may indicate a part such as the head, an ear, an eye, the nose, or the neck, and is not limited herein, as long as it indicates a human body part.
In some embodiments, the identification of the target area may also be performed by the server. As one approach, the server may comprise a human body recognition library server. When the client acquires a selected target area in the live broadcast interface, it can send the target area to the human body recognition library server; the human body recognition library server identifies the target area, and when the human body part information corresponding to the target area is identified, returns the human body part information to the client, so that the client obtains the human body part information corresponding to the target area. As one mode, the client may generate an identification request instruction according to the coordinate value identifier corresponding to the target area, and then send the identification request instruction to the human body recognition library server, where the identification request instruction may carry the coordinate value identifier of the target area. In some embodiments, a screenshot of the live interface may also be included in the identification request instruction.
In some embodiments, since selecting certain human body parts may constitute a sensitive violation, violation verification may be performed on the target area to prevent violations. Specifically, referring to fig. 10, step S221 may include:
step S2211: and detecting whether the target area is illegal.
In some embodiments, detecting whether the target area is illegal may be determining whether the target area is a preset area, where the preset area is an area corresponding to a sensitive body part, such as a private part of the body. As one mode, it may be detected whether the coordinate value corresponding to the target area is located in the preset area; when the coordinate value is located in the preset area, the target area may be determined to be in violation, and when it is not, the target area may be determined to be non-violating.
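The preset-area check described above can be sketched as follows. The rectangle coordinates and region name are hypothetical placeholders, not actual sensitive-area definitions from this application.

```python
# Hedged sketch of the coordinate check described above: preset (sensitive)
# areas are modeled as axis-aligned rectangles in live-interface coordinates.
# Region names and values are illustrative only.

PRESET_AREAS = [
    {"name": "restricted_region_1", "x": 100, "y": 300, "w": 80, "h": 60},
]

def is_target_area_violating(coord, preset_areas=PRESET_AREAS):
    """Return True if the selected coordinate falls inside any preset area."""
    x, y = coord
    return any(
        a["x"] <= x < a["x"] + a["w"] and a["y"] <= y < a["y"] + a["h"]
        for a in preset_areas
    )

print(is_target_area_violating((120, 320)))  # True  -> inside the preset area
print(is_target_area_violating((10, 10)))    # False -> selection is allowed
```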
In some embodiments, violation detection may be performed by the server. Alternatively, the server may comprise a violation database server. As a mode, when the client acquires a selected target area in the live broadcast interface, the target area may be sent to the violation database server, and the violation database server detects the target area to detect whether the target area violates rules. As another mode, the human body recognition library server accesses the violation database server, and when the human body recognition library server obtains a target area uploaded by the client, the target area may be sent to the violation database server for violation detection.
Step S2212: and when the target area is not violated, identifying the target area to obtain the human body part information corresponding to the target area.
In some embodiments, when it is detected that the target area is not in violation, the target area may be identified to obtain the human body part information corresponding to the target area. When a violation of the target area is detected, the violation detection result can be returned to the client to prompt the user that the selected target area violates the rules.
As one approach, when the violation database server detects that the target area is not violating, the detection result of non-violating may be returned to the client. And when receiving a detection result without violation, the client sends the target area to the human body recognition library server for recognition so as to obtain human body part information corresponding to the target area.
As another mode, the human body recognition library server may access the violation database server, and when the human body recognition library server acquires a target area uploaded by the client, it may first send the target area to the violation database server for violation detection. When the violation database server detects that the target area does not violate the rules, it may return a detection result of no violation to the human body recognition library server. Upon receiving the detection result of no violation, the human body recognition library server can identify the target area to obtain the human body part information corresponding to the target area, and then return the human body part information to the client. When the violation database server detects a violation of the target area, it may return the violation detection result to the human body recognition library server, and the human body recognition library server can in turn return the violation detection result to the client to prompt the user that the selected target area violates the rules.
Step S222: and acquiring virtual resources matched with the human body part information.
In some embodiments, when the human body part information corresponding to the target area is obtained, a virtual resource matching the human body part information may be acquired. For example, the virtual resource matched with the face may be a virtual mask or the like, or may be a prank effect such as a swollen face or a palm print.
In some embodiments, the server may perform a corresponding virtual resource query according to the human body part information. In one approach, the server may comprise a service server that may implement interactive services of multiple service types. When the client obtains the identified human body part information, a resource query instruction can be generated according to the human body part information, and the resource query instruction is sent to the service server. Wherein, the resource inquiry command can carry the information of the human body part. After receiving the resource query instruction, the service server can search the virtual resource matched with the human body part information, and when the virtual resource is searched, the virtual resource matched with the human body part information can be returned to the client.
In some embodiments, the database of the service server may store virtual resources corresponding to the human body part information in a key-value pair manner. For example: { key = "hair", value = "headwear wearable device data set" }, { key = "ear", value = "earring wearable device data set" }.
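The key-value lookup can be sketched as follows; the part names and resource lists are illustrative placeholders rather than the actual database contents of the service server.

```python
# Illustrative sketch of the key-value storage described above. All entries
# are made-up placeholders, not the service server's real data sets.

VIRTUAL_RESOURCES = {
    "hair": ["virtual hat", "virtual hair clip", "virtual hair"],
    "ear":  ["virtual earring"],
    "eye":  ["virtual glasses"],
}

def query_resources(body_part):
    """Look up the virtual resource data set matching the body part info."""
    return VIRTUAL_RESOURCES.get(body_part, [])

print(query_resources("ear"))   # → ['virtual earring']
print(query_resources("nose"))  # → [] : no matching resource configured
```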
In some embodiments, when the human body part information corresponding to the target area is identified by the human body identification library server, the human body part information may be returned to the client. Then the client side can send the human body part information to the service server for virtual resource matching, so that the client side can obtain the virtual resource matched with the human body part information returned by the service server. In other embodiments, the human body recognition library server may access the service server, and when the human body recognition library server recognizes the human body part information, the human body part information may also be directly sent to the service server for virtual resource matching. And are not limited thereto.
In some embodiments, when it is detected that the target area is not in violation, the human body recognition library server may return the identified human body part information to the client. However, data may be tampered with during transmission. If the returned human body part information is tampered with (for example, changed to information about a sensitive, prohibited part), then when the client sends the tampered human body part information to the service server for virtual resource matching, a vulnerability may ultimately appear in the rendering and composition of the virtual resource. For example, no matching virtual resource may be found; or a virtual resource may be found and rendered and composited onto a private part of the human body. Therefore, in order to improve information security and data tamper resistance, a second violation detection can be performed. Specifically, referring to fig. 11, step S222 may include:
step S2221: and detecting whether the human body part information violates rules or not.
In some embodiments, when the identified human body part information is acquired, violation detection may be performed on the human body part information to determine whether it refers to a sensitive, prohibited part of the human body. As one way, a database may be established in which each sensitive, prohibited part of the human body is pre-stored; when violation detection is performed, it may be determined whether the identified human body part information exists in the database: if so, the human body part information violates the rules, and if not, it does not.
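A minimal sketch of this membership check, with placeholder entries standing in for the pre-stored sensitive parts:

```python
# Sketch of the second violation check: identified body part names are
# tested against a pre-stored set of sensitive parts. The set contents
# here are hypothetical placeholders.

SENSITIVE_PARTS = {"sensitive_part_a", "sensitive_part_b"}

def is_part_info_violating(part_info):
    """Return True if the identified part is in the sensitive-part database."""
    return part_info in SENSITIVE_PARTS

print(is_part_info_violating("head"))  # False -> safe to query virtual resources
```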
In some embodiments, violation detection of the human body part information may be performed by a server. Optionally, the server may be the violation database server, that is, the violation database server may perform two violation detections, where the first violation detection is to detect whether the target area is violated, and the second violation detection is to detect whether the human body part information is violated.
As one mode, when the client acquires the human body part information, the human body part information may be sent to the violation database server, and the violation database server detects the human body part information to detect whether the human body part information is violated.
As another mode, the service server may access the violation database server, when receiving a resource query instruction of the human body part information sent by the client, the service server may first send the human body part information to the violation database server for violation detection, and when no violation is detected, the service server may then query a virtual resource matched with the human body part information.
Step S2222: and when the human body part information does not violate rules, acquiring virtual resources matched with the human body part information.
In some embodiments, when it is detected that the human body part information is not violated, a virtual resource matching the human body part information may be queried. When the human body part information violation is detected, a violation result can be returned to the client to prompt the user to violate the rule.
As one mode, when the service server accesses the violation database server, if the violation database server detects that there is no violation, the service server may query a virtual resource matching the human body part information, and return the queried virtual resource to the client, so that the client may obtain the virtual resource matching the human body part information corresponding to the selected area.
For example, please refer to fig. 12; fig. 12 is a schematic diagram illustrating device interaction applicable to the live broadcast interaction method provided by an embodiment of the present application. Specifically, when the client of the terminal device 100 sends the target area to the human body recognition server 201 for human body part recognition, the human body recognition server 201 may send a sensitive-information recognition instruction to the violation database server 202 to perform a first sensitive-violation verification operation on the target area. When the first verification finds no violation, the human body recognition server 201 may return the identified human body part information to the client. The client then sends the human body part information to the service server 203; when performing the virtual resource query matched with the human body part, the service server 203 can also send a sensitive-information recognition instruction to the violation database server 202 to perform a second sensitive-violation verification operation on the human body part information. When the second verification finds no violation, the service server 203 may return the queried virtual resource matching the human body part to the client.
Step S230: and when the virtual resource is presented successfully, generating a composite live broadcast interface comprising the live broadcast interface and the virtual resource, wherein the virtual resource is composited in the target area in the live broadcast interface.
In the embodiment of the present application, step S230 may refer to the foregoing embodiments, and is not described herein again.
In some embodiments, when there are multiple virtual resources matching the target area, the user may select one of the virtual resources to be donated. Specifically, step S230 may include: and when a target virtual resource in the plurality of virtual resources is donated successfully, generating a composite live broadcast interface containing the live broadcast interface and the target virtual resource.
In some embodiments, the client may receive one or more virtual resources (e.g., a wearable device data set such as a virtual hat, a virtual hair clip, or virtual hair corresponding to the hair) returned by the service server that match the target area, and may then generate a virtual resource window according to the one or more virtual resources, so that the user can select a corresponding target virtual resource from the virtual resource window. When receiving the selected target virtual resource, the client can generate a presentation instruction according to the target virtual resource and send the presentation instruction to the server, where the presentation instruction may carry the target virtual resource, and the server may be the aforementioned service server. In this way, a set of virtual resources can be designed for the anchor and displayed on the client of each audience member, which increases the interest of gift giving and the possibility of more vivid communication between the gift giver and the anchor.
Optionally, the virtual resource may be paid, and when the client receives an instruction for successful payment, the giving instruction of the virtual resource may be sent to the server.
In some embodiments, referring to fig. 13, step S230 may include:
step S231: and when a presentation request of the virtual resource initiated by a client is received, acquiring a rendering position of the virtual resource in a live video stream, wherein the client is used for displaying a live interface according to the live video stream.
The live video stream can be transmitted to the server in real time after the anchor client encodes and packages the recorded live video, so that the server can forward the live video stream to all audience clients in the same channel (or live broadcast room) as the anchor. The audience clients can generate a live interface when receiving the live video stream, and the live video can be displayed in the live interface.
In some embodiments, when the server receives a donation request of a virtual resource sent by the client, the server may obtain a rendering position of the virtual resource in the live video stream. In one approach, the server may extract video frame images from the received live video stream, and process the video frame images according to the virtual resources to extract relevant information for synthesizing the virtual resources, such as rendering positions of the virtual resources in the video frame images. Optionally, the rendering position may be represented by one or more key points, where each key point has a unique coordinate value identifier in the video frame image, and the target position where the virtual resource is added in the video frame image may be obtained according to the coordinate value identifier of the one or more key points. The target position of the virtual resource added in the video frame image can correspond to the target area.
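One plausible way to derive a single target position from the one or more key points is to take the centroid of their coordinate value identifiers, sketched below. The centroid choice and the key point values are illustrative assumptions, not necessarily the method of this application.

```python
# Sketch: deriving the target position for the virtual resource from one
# or more key points, each carrying a coordinate value identifier in the
# video frame image. Using the centroid is one plausible choice.

def target_position(key_points):
    """key_points: list of (x, y) coordinates; returns their centroid."""
    xs = [p[0] for p in key_points]
    ys = [p[1] for p in key_points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Hypothetical key points outlining, e.g., the anchor's head region:
print(target_position([(100, 40), (140, 40), (120, 80)]))
```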
In some embodiments, the gifting request for the virtual resource may include identification information of the virtual resource, which may be a combination of numbers, English letters, and the like. The server can thus call the display data of the virtual resource according to its identification information to perform the rendering and composition of the virtual resource. The display data may be colors, pixel coordinates, and the like required for displaying the virtual resource, and is not limited herein.
Step S232: and synthesizing the virtual resources and the live video stream based on the rendering position to obtain a synthesized video stream.
Step S233: and sending the composite video stream to a client, wherein the client is used for displaying a composite live broadcast interface containing the live broadcast interface and the virtual resource according to the composite video stream.
In some embodiments, the server may synthesize the virtual resource and the video frame images according to the rendering position of the virtual resource to obtain a synthesized live video, where the synthesized live video may be composed of consecutive synthesized video frame images. The server may then re-encode and package the synthesized live video to form a composite video stream and forward the composite video stream to all viewer clients in the same channel (or live broadcast room) as the anchor. The viewer client may then generate a composite live interface based on the composite video stream, and the synthesized live video can be displayed in the composite live interface.
In some embodiments, the server may retrieve rendering data corresponding to the virtual resource according to the presentation instruction. The rendering data may include display data of the virtual resource, a rendering duration of the virtual resource, and the like. The server may then render the virtual resource into the live video stream according to the rendering data. The rendering duration can be understood as the duration of the synthesized live video.
In some embodiments, when the rendering data includes the rendering duration, please refer to fig. 14; the live interaction method of the present application may further include:
step S234: and acquiring the rendering time of the virtual resource in the live video stream.
Step S235: and when the duration of the composite video stream reaches the rendering duration, canceling the composition of the virtual resource and the live video stream.
In some embodiments, when the server performs rendering and synthesizing processing on the virtual resource and the live video stream, the server may determine the rendering duration of the virtual resource directly according to rendering data corresponding to the called virtual resource. And then synthesizing the virtual resources into the live video stream according to the rendering duration to obtain a synthesized live stream. And when the duration of the composite video stream reaches the rendering duration, canceling the composition of the virtual resource and the live video stream. Thereby ending the display effect of the virtual resources in the live video.
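The duration-limited composition can be sketched as follows, assuming a fixed frame rate of 30 fps (an illustrative assumption); frames and resources are represented by simple placeholders rather than real video data.

```python
# Sketch of duration-limited compositing: the virtual resource is
# composited only while the composite stream's duration is below the
# rendering duration, after which composition is canceled.

FPS = 30  # assumed fixed frame rate

def composite_stream(frames, resource_tag, render_duration_s):
    """Tag each frame with the resource while within the rendering duration."""
    max_frames = int(render_duration_s * FPS)
    out = []
    for i, frame in enumerate(frames):
        if i < max_frames:                     # still within rendering duration
            out.append((frame, resource_tag))  # composite the virtual resource
        else:                                  # duration reached: cancel composition
            out.append((frame, None))
    return out

stream = composite_stream(list(range(90)), "virtual hat", render_duration_s=2)
print(stream[0])    # → (0, 'virtual hat') : resource rendered
print(stream[75])   # → (75, None) : composition canceled after 60 frames
```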
In other embodiments, the server may send rendering data corresponding to the called virtual resource to the anchor client, so that the anchor client may perform rendering and composition processing of the virtual resource and the live video stream according to the rendering data. Therefore, the anchor client can render the virtual resources to the live broadcast video stream according to different virtual resources given by the user, and can realize a stronger virtual makeup visual effect.
In still other embodiments, the viewer client may also locally store rendering data of the virtual resource, so that the viewer client may also perform rendering and composition processing of the virtual resource and the live video stream according to the rendering data.
In some embodiments, when virtual resource presentation requests are received from multiple viewers, a queue may be provided so that the requests wait to be processed in turn. Specifically, referring to fig. 15, the live broadcast interaction method of the present application may further include:
step S236: when a plurality of comp requests are received, a request queue is generated for storing the plurality of comp requests to be processed.
Step S237: and synthesizing the virtual resources corresponding to each presentation request in the plurality of presentation requests with the live video stream in sequence according to the sequence of the request queue.
In some embodiments, when the server receives a plurality of presentation requests, the request queue may be generated according to the order in which the presentation requests are received. The request queue may follow a first-in-first-out principle, i.e., the earlier-received presentation request is processed first. When the server completes the rendering and composition of the virtual resource and the live video stream for the current presentation request, it can obtain the next presentation request from the request queue, call the rendering data of the corresponding virtual resource according to that request, and perform the rendering and composition of the virtual resource and the live video stream according to the rendering data. In this way, the server can sequentially synthesize the virtual resource corresponding to each of the plurality of presentation requests with the live video stream according to the order of the request queue.
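The first-in-first-out queue described above can be sketched with `collections.deque`; the request strings are placeholders, and the "composition" step is reduced to a label for illustration.

```python
# Minimal sketch of the FIFO request queue: earlier presentation requests
# are composited into the live video stream first. Request contents are
# made-up placeholders.

from collections import deque

request_queue = deque()

def enqueue(request):
    request_queue.append(request)        # newest request goes to the back

def process_next():
    """Pop and 'composite' the oldest pending presentation request."""
    if not request_queue:
        return None                      # no pending requests
    request = request_queue.popleft()    # first in, first out
    return f"composited {request}"

enqueue("gift_1 from viewer_A")
enqueue("gift_2 from viewer_B")
print(process_next())  # → composited gift_1 from viewer_A (received first)
print(process_next())  # → composited gift_2 from viewer_B
```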
In some embodiments, when the rendering and composition of the virtual resource and the live video stream is performed by the anchor client, the server may send the received plurality of presentation requests to the anchor client. The anchor client may generate a request queue upon receiving the plurality of presentation requests sent by the server, and then sequentially synthesize the virtual resource corresponding to each presentation request with the live video stream according to the order of the request queue.
In some embodiments, when a plurality of presentation requests are received, the presentation request of one of the users may also be selected for processing by granting permission. As one mode, audiences in the live broadcast room can compete for the permission to present virtual resources; a user who competes successfully can then present the virtual resource successfully and obtain the display effect of the virtual resource synthesized into the live video.
Referring to fig. 16, fig. 16 is a schematic overall flow chart of a live interaction method. The method specifically comprises the following steps:
step S301: the viewer client sends coordinate instructions to the body recognition server. The coordinate instruction can be a coordinate instruction of clicking the selected human body part area in the live broadcast interface by the user.
Step S302: the human body recognition server recognizes the coordinate command and returns a corresponding data set. Wherein the data set may be identified body part information data (e.g., head, ears, eyes, nose, etc.).
Step S303: and the audience client sends an instruction to the service server according to the data set returned by the human body recognition server, and the service server returns a corresponding virtual resource data set. The virtual resource data set may be a virtual earring corresponding to an ear, virtual glasses corresponding to an eye, a virtual hat corresponding to a hair, a virtual hair, and other virtual wearable device data sets.
Step S304: the viewer client displays a gifting popup according to the virtual resource data set returned by the service server, and sends a presentation instruction to the service server according to the selected target virtual resource and the anchor ID.
Step S305: on receiving the presentation instruction sent by the viewer client, the service server sends the rendering data of the virtual resource to the anchor client, and the anchor client renders the virtual resource data set onto the live video stream.
Step S306: after subscribing to the live video stream, the viewer client displays the latest composited video stream picture on the live interface.
Subscribing to the live video stream may mean that the viewer account has entered the anchor's live broadcast room, or that the viewer account follows the anchor's live broadcast room, so that the server can forward the live video stream to all viewer clients in the room. In this way, when one viewer presents a virtual resource, the other subscribed viewers can also see the composite effect of that virtual resource on the live interface.
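The message flow of steps S301-S306 can be sketched with in-process stand-ins for the body recognition server and the service server. The coordinate-to-part and part-to-gift mappings below are purely illustrative assumptions, not data from the patent.

```python
# Assumed recognition results and gift catalog for illustration only.
BODY_PARTS = {(120, 80): "ear", (100, 60): "eye"}
GIFT_CATALOG = {
    "ear": ["virtual earring"],
    "eye": ["virtual glasses"],
    "hair": ["virtual hat", "virtual hair"],
}

def recognize(coords):                  # S301/S302: coordinates -> body part
    return BODY_PARTS.get(coords, "unknown")

def fetch_resources(part):              # S303: body part -> gift options
    return GIFT_CATALOG.get(part, [])

def present(anchor_id, resource):       # S304/S305: presentation instruction
    # The anchor client would render this overlay into the stream (S306).
    return {"anchor": anchor_id, "overlay": resource}

part = recognize((120, 80))             # viewer clicked near the ear
options = fetch_resources(part)         # contents of the gifting popup
frame_data = present("anchor-42", options[0])
```

In the real system each of these calls is a network round trip (viewer client to body recognition server, then to the service server, then to the anchor client), which is why the patent leaves the allocation of steps between terminal and server open.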
It can be understood that, in this embodiment of the present application, each of the above steps may be performed locally by the terminal device, performed in the server, or split between the terminal device and the server; depending on the actual application scenario, tasks may be allocated as required to achieve an optimized live interaction experience, which is not limited herein.
In the live interaction method described above, the selected target area in the live interface is acquired, and violation detection is performed on the target area to prevent a non-compliant selection. When the target area is compliant, the target area is recognized to obtain the human body part information corresponding to it. A second violation detection is then performed on the human body part information, which improves tamper resistance and further prevents a non-compliant selection. When the human body part information is compliant, the virtual resource matched with it can be acquired, so that when the virtual resource is presented successfully, a composite live interface containing the live interface and the virtual resource is generated, the virtual resource being composited in the selected target area. In this way, virtual resources related to body parts can be dynamically configured by selecting different body parts of the anchor in the live interface and, upon successful presentation, rendered to the corresponding selected area. This improves users' sense of participation, enriches the modes of live interaction among users, improves the rendering effect of the virtual resources, and increases the fun of interaction.
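The two-stage violation detection summarized above can be sketched as a short pipeline: check the raw selection first, then re-check the recognized body part. The frame size, the allowed-part set, and the out-of-frame rule below are assumptions for illustration; the patent does not specify concrete compliance rules.

```python
FRAME_W, FRAME_H = 1280, 720                       # assumed stream dimensions
ALLOWED_PARTS = {"head", "ear", "eye", "nose", "hair"}

def region_violates(area):
    # Assumed rule: a selection outside the video frame is non-compliant.
    x, y, w, h = area
    return x < 0 or y < 0 or x + w > FRAME_W or y + h > FRAME_H

def part_violates(part):
    # Second check on the recognized part, for tamper resistance.
    return part not in ALLOWED_PARTS

def resolve_gifts(area, recognize_fn, catalog):
    if region_violates(area):                      # first violation check
        return None
    part = recognize_fn(area)
    if part_violates(part):                        # second violation check
        return None
    return catalog.get(part, [])
```

Running both checks means a tampered coordinate instruction that slips past the geometric check is still rejected if recognition maps it to a disallowed part.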
Referring to fig. 17, fig. 17 is a block diagram illustrating a structure of a live interactive apparatus 600 according to an embodiment of the present application, where the live interactive apparatus 600 is applied to an electronic device. The live interaction apparatus 600 may include: a target acquisition module 610, a resource acquisition module 620, and a resource composition module 630. The target obtaining module 610 is configured to obtain a selected target area in the live interface; the resource obtaining module 620 is configured to obtain a virtual resource corresponding to the target area; the resource composition module 630 is configured to generate a composite live broadcast interface including the live broadcast interface and the virtual resource when the virtual resource is presented successfully, where the virtual resource is composited in the target area in the live broadcast interface.
In some embodiments, the resource obtaining module 620 may include: a part identification unit and a part resource unit. The part identification unit is used for identifying the target area to obtain human body part information corresponding to the target area; and the part resource unit is used for acquiring virtual resources matched with the human body part information.
In some embodiments, the part identification unit may be specifically configured to: detect whether the target area violates rules; and when the target area does not violate rules, recognize the target area to obtain the human body part information corresponding to the target area.
In some embodiments, the above-mentioned part resource unit may be specifically configured to: detecting whether the human body part information violates rules or not; and when the human body part information does not violate rules, acquiring virtual resources matched with the human body part information.
In some embodiments, there are a plurality of virtual resources, and the resource composition module 630 may be specifically configured to: when a target virtual resource among the plurality of virtual resources is presented successfully, generate a composite live interface containing the live interface and the target virtual resource.
In some embodiments, the resource composition module 630 may be specifically configured to: when a presentation request for the virtual resource initiated by a client is received, acquire the rendering position of the virtual resource in the live video stream, where the client is configured to display a live interface according to the live video stream; composite the virtual resource with the live video stream based on the rendering position to obtain a composite video stream; and send the composite video stream to the client, where the client is configured to display, according to the composite video stream, a composite live interface containing the live interface and the virtual resource.
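The compositing step can be illustrated with a toy frame model: paste the virtual resource's pixel block into the frame at the rendering position. Nested lists stand in for decoded video frames; real systems would blend with alpha on GPU, so this overwrite-only sketch is an assumption.

```python
def composite(frame, overlay, pos):
    """Paste `overlay` into a copy of `frame` at position `pos`."""
    x, y = pos
    out = [row[:] for row in frame]        # copy, leave the input intact
    for dy, row in enumerate(overlay):
        for dx, pixel in enumerate(row):
            out[y + dy][x + dx] = pixel    # overwrite (no alpha blending)
    return out

frame = [[0] * 4 for _ in range(4)]        # 4x4 "video frame"
overlay = [[9, 9], [9, 9]]                 # 2x2 "virtual resource"
stamped = composite(frame, overlay, (1, 1))
```

Copying the frame before stamping mirrors the fact that the original live stream is left untouched; only the composite video stream sent to clients carries the overlay.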
In some embodiments, the live interaction apparatus 600 may further include a duration acquisition module and a composition canceling module. The duration acquisition module is configured to acquire the rendering duration of the virtual resource in the live video stream; the composition canceling module is configured to cancel the composition of the virtual resource and the live video stream when the duration of the composite video stream reaches the rendering duration.
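The duration-acquisition and composition-canceling modules amount to a per-overlay timer: keep compositing only until the rendering duration elapses. The per-frame tick interface below is an assumed design, not the patent's.

```python
class TimedOverlay:
    """Composite an overlay only for its configured rendering duration."""

    def __init__(self, resource, rendering_duration_s):
        self.resource = resource
        self.duration = rendering_duration_s   # from the duration module
        self.elapsed = 0.0

    def tick(self, frame_interval_s):
        # Advance the timer once per composited frame.
        self.elapsed += frame_interval_s

    def still_composited(self):
        # Once elapsed time reaches the rendering duration, the
        # composition with the live stream is canceled.
        return self.elapsed < self.duration
```

A compositing loop would call `tick` each frame and drop the overlay as soon as `still_composited` returns `False`, which is exactly the cancel condition the module describes.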
In some embodiments, the live interaction apparatus 600 may further include a queue generating module and a sequence composition module. The queue generating module is configured to generate a request queue when a plurality of presentation requests are received, the request queue being used to store the presentation requests to be processed; the sequence composition module is configured to composite the virtual resource corresponding to each of the plurality of presentation requests with the live video stream in the order of the request queue.
The live broadcast interaction device provided by the embodiment of the application is used for realizing the corresponding live broadcast interaction method in the method embodiment, has the beneficial effects of the corresponding method embodiment, and is not repeated herein.
Referring to fig. 18, fig. 18 is a block diagram illustrating a structure of an electronic device according to an embodiment of the present disclosure. The electronic device 800 may be the terminal device or the server, where the terminal device may be a user device capable of running an application, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, and a wearable terminal device. The electronic device 800 in the present application may include one or more of the following components: a processor 810, a memory 820, and one or more applications, wherein the one or more applications may be stored in the memory 820 and configured to be executed by the one or more processors 810, the one or more applications configured to perform the methods described in the above method embodiments applied to the client, and also configured to perform the methods described in the above method embodiments applied to the server.
Processor 810 may include one or more processing cores. The processor 810 connects the various parts of the electronic device 800 using various interfaces and lines, and performs the functions of the electronic device 800 and processes its data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 820 and by invoking data stored in the memory 820. Optionally, the processor 810 may be implemented in hardware in the form of at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 810 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like, where the CPU mainly handles the operating system, user interface, applications, and so on; the GPU renders and draws display content; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 810 and may instead be implemented by a separate communication chip.
The memory 820 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 820 may be used to store instructions, programs, code sets, or instruction sets. The memory 820 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image display function), instructions for implementing the method embodiments described above, and the like. The data storage area may also store data created during use of the electronic device 800, and so on.
Those skilled in the art will appreciate that the configuration shown in fig. 18 is a block diagram of only the portion of the configuration relevant to the present application and does not constitute a limitation on the electronic device to which the present application is applied; a particular electronic device may include more or fewer components than shown in the drawings, combine certain components, or have a different arrangement of components.
In summary, according to the live interaction method and apparatus, electronic device, and storage medium provided in the embodiments of the present application, the selected target area in the live interface is acquired and the virtual resource corresponding to the target area is acquired, so that when the virtual resource is presented successfully, a composite live interface containing the live interface and the virtual resource is generated, the virtual resource being composited in the selected target area of the live interface. In this way, virtual resources corresponding to selected areas can be dynamically configured by selecting different areas in the live interface, and when a virtual resource is presented successfully, it can be rendered to the corresponding selected area. This enriches the modes of live interaction among users, improves the rendering effect of virtual resources, and increases the fun of interaction.
Referring to fig. 19, a block diagram of a computer-readable storage medium according to an embodiment of the present disclosure is shown. The computer-readable storage medium 900 stores program codes, which can be called by a processor to execute the methods described in the above embodiments of the method applied to the client, and can also be called by a processor to execute the methods described in the above embodiments of the method applied to the server.
The computer-readable storage medium 900 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 900 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 900 has storage space for program code 910 for performing any of the method steps described above. The program code can be read from or written to one or more computer program products. The program code 910 may be, for example, compressed in a suitable form.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application and not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features thereof may be equivalently replaced; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (11)

1. A live interaction method, comprising:
acquiring a selected target area in a live broadcast interface;
acquiring virtual resources corresponding to the target area;
and when the virtual resource is presented successfully, generating a composite live broadcast interface comprising the live broadcast interface and the virtual resource, wherein the virtual resource is composited in the target area in the live broadcast interface.
2. The method of claim 1, wherein the obtaining the virtual resource corresponding to the target area comprises:
identifying the target area to obtain human body part information corresponding to the target area;
and acquiring virtual resources matched with the human body part information.
3. The method according to claim 2, wherein the identifying the target area and obtaining the human body part information corresponding to the target area comprises:
detecting whether the target area is illegal;
and when the target area is not violated, identifying the target area to obtain the human body part information corresponding to the target area.
4. The method according to claim 3, wherein the obtaining of the virtual resource matched with the human body part information comprises:
detecting whether the human body part information violates rules or not;
and when the human body part information does not violate rules, acquiring virtual resources matched with the human body part information.
5. The method according to any one of claims 1-4, wherein there are a plurality of virtual resources, and the generating a composite live broadcast interface containing the live broadcast interface and the virtual resource when the virtual resource is presented successfully comprises:
when a target virtual resource among the plurality of virtual resources is presented successfully, generating a composite live broadcast interface containing the live broadcast interface and the target virtual resource.
6. The method according to any one of claims 1-4, wherein the displaying a composite live broadcast interface containing the live broadcast interface and the virtual resource when the virtual resource is presented successfully comprises:
when a presentation request of the virtual resource initiated by a client is received, acquiring a rendering position of the virtual resource in a live video stream, wherein the client is used for displaying a live interface according to the live video stream;
synthesizing the virtual resource with the live video stream based on the rendering position to obtain a composite video stream;
and sending the composite video stream to a client, wherein the client is used for displaying a composite live broadcast interface containing the live broadcast interface and the virtual resource according to the composite video stream.
7. The method of claim 6, further comprising:
acquiring the rendering duration of the virtual resource in the live video stream;
and when the duration of the composite video stream reaches the rendering duration, canceling the composition of the virtual resource and the live video stream.
8. The method of claim 7, further comprising:
when a plurality of presentation requests are received, generating a request queue, wherein the request queue is used for storing the presentation requests to be processed;
and compositing the virtual resource corresponding to each of the plurality of presentation requests with the live video stream in sequence according to the order of the request queue.
9. A live interaction device, the device comprising:
the target acquisition module is used for acquiring a selected target area in the live broadcast interface;
a resource obtaining module, configured to obtain a virtual resource corresponding to the target area;
and the resource synthesis module is used for generating a synthesis live broadcast interface containing the live broadcast interface and the virtual resources when the virtual resources are presented successfully, wherein the virtual resources are synthesized in the target area in the live broadcast interface.
10. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the method of any of claims 1-8.
11. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 8.
CN202010052310.3A 2020-01-17 2020-01-17 Live broadcast interaction method and device, electronic equipment and storage medium Pending CN111246232A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010052310.3A CN111246232A (en) 2020-01-17 2020-01-17 Live broadcast interaction method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010052310.3A CN111246232A (en) 2020-01-17 2020-01-17 Live broadcast interaction method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111246232A true CN111246232A (en) 2020-06-05

Family

ID=70871257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010052310.3A Pending CN111246232A (en) 2020-01-17 2020-01-17 Live broadcast interaction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111246232A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020170056A1 (en) * 1999-10-18 2002-11-14 Ryuhei Akiyama Television broadcasting method, television broadcasting device, receiving device and medium
CN105068748A (en) * 2015-08-12 2015-11-18 上海影随网络科技有限公司 User interface interaction method in camera real-time picture of intelligent touch screen equipment
US20160345052A1 (en) * 2015-05-19 2016-11-24 Lemobile Information Technology (Beijing) Co., Ltd. Method and device for previewing video files
CN106303733A (en) * 2016-08-11 2017-01-04 腾讯科技(深圳)有限公司 The method and apparatus playing live special-effect information
CN107438200A (en) * 2017-09-08 2017-12-05 广州酷狗计算机科技有限公司 The method and apparatus of direct broadcasting room present displaying
CN108900858A (en) * 2018-08-09 2018-11-27 广州酷狗计算机科技有限公司 A kind of method and apparatus for giving virtual present

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111711831A (en) * 2020-06-28 2020-09-25 腾讯科技(深圳)有限公司 Data processing method and device based on interactive behavior and storage medium
CN111726687A (en) * 2020-06-30 2020-09-29 北京百度网讯科技有限公司 Method and apparatus for generating display data
CN112423110A (en) * 2020-08-04 2021-02-26 上海哔哩哔哩科技有限公司 Live video data generation method and device and live video playing method and device
US11863801B2 (en) 2020-08-04 2024-01-02 Shanghai Bilibili Technology Co., Ltd. Method and device for generating live streaming video data and method and device for playing live streaming video
CN112261433A (en) * 2020-10-22 2021-01-22 广州繁星互娱信息科技有限公司 Virtual gift sending method, virtual gift display device, terminal and storage medium
CN112565806A (en) * 2020-12-02 2021-03-26 广州繁星互娱信息科技有限公司 Virtual gift presenting method, device, computer equipment and medium
CN112565806B (en) * 2020-12-02 2023-08-29 广州繁星互娱信息科技有限公司 Virtual gift giving method, device, computer equipment and medium
CN112616091A (en) * 2020-12-18 2021-04-06 北京达佳互联信息技术有限公司 Virtual article sending method and device, computer equipment and storage medium
CN112929680B (en) * 2021-01-19 2023-09-05 广州虎牙科技有限公司 Live broadcasting room image rendering method and device, computer equipment and storage medium
CN112929681A (en) * 2021-01-19 2021-06-08 广州虎牙科技有限公司 Video stream image rendering method and device, computer equipment and storage medium
CN112929681B (en) * 2021-01-19 2023-09-05 广州虎牙科技有限公司 Video stream image rendering method, device, computer equipment and storage medium
CN112929680A (en) * 2021-01-19 2021-06-08 广州虎牙科技有限公司 Live broadcast room image rendering method and device, computer equipment and storage medium
CN113518240A (en) * 2021-07-20 2021-10-19 北京达佳互联信息技术有限公司 Live broadcast interaction method, virtual resource configuration method, virtual resource processing method and device
CN113518240B (en) * 2021-07-20 2023-08-08 北京达佳互联信息技术有限公司 Live interaction, virtual resource configuration and virtual resource processing method and device
WO2023000652A1 (en) * 2021-07-20 2023-01-26 北京达佳互联信息技术有限公司 Live streaming interaction and virtual resource configuration methods
CN113596561A (en) * 2021-07-29 2021-11-02 北京达佳互联信息技术有限公司 Video stream playing method and device, electronic equipment and computer readable storage medium
CN113852839A (en) * 2021-09-26 2021-12-28 游艺星际(北京)科技有限公司 Virtual resource allocation method and device and electronic equipment
CN113852839B (en) * 2021-09-26 2024-01-26 游艺星际(北京)科技有限公司 Virtual resource allocation method and device and electronic equipment
WO2023088461A1 (en) * 2021-11-22 2023-05-25 北京字跳网络技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN114237792A (en) * 2021-12-13 2022-03-25 广州繁星互娱信息科技有限公司 Virtual object display method and device, storage medium and electronic equipment
CN114449305A (en) * 2022-01-29 2022-05-06 上海哔哩哔哩科技有限公司 Gift animation playing method and device in live broadcast room
CN114845129A (en) * 2022-04-26 2022-08-02 北京达佳互联信息技术有限公司 Interaction method, device, terminal and storage medium in virtual space

Similar Documents

Publication Publication Date Title
CN111246232A (en) Live broadcast interaction method and device, electronic equipment and storage medium
WO2021109652A1 (en) Method and apparatus for giving character virtual gift, device, and storage medium
CN112383786B (en) Live broadcast interaction method, device, system, terminal and storage medium
US10898809B2 (en) Overlaying content within live streaming video
CN107911736B (en) Live broadcast interaction method and system
WO2023071443A1 (en) Virtual object control method and apparatus, electronic device, and readable storage medium
WO2014107681A1 (en) System and method for providing augmented reality on mobile devices
WO2019114328A1 (en) Augmented reality-based video processing method and device thereof
US11451858B2 (en) Method and system of processing information flow and method of displaying comment information
CN111970532A (en) Video playing method, device and equipment
CN114257875B (en) Data transmission method, device, electronic equipment and storage medium
CN112516589A (en) Game commodity interaction method and device in live broadcast, computer equipment and storage medium
US20220270302A1 (en) Content distribution system, content distribution method, and content distribution program
CN112351327A (en) Face image processing method and device, terminal and storage medium
US20230209125A1 (en) Method for displaying information and computer device
CN113573090A (en) Content display method, device and system in game live broadcast and storage medium
CN113485617A (en) Animation display method and device, electronic equipment and storage medium
CN114697703B (en) Video data generation method and device, electronic equipment and storage medium
CN113101633B (en) Cloud game simulation operation method and device and electronic equipment
CN113938696A (en) Live broadcast interaction method and system based on user-defined virtual gift and computer equipment
JP6609078B1 (en) Content distribution system, content distribution method, and content distribution program
WO2020194973A1 (en) Content distribution system, content distribution method, and content distribution program
CN112333460B (en) Live broadcast management method, computer equipment and readable storage medium
CN112929685B (en) Interaction method and device for VR live broadcast room, electronic device and storage medium
JP7291106B2 (en) Content delivery system, content delivery method, and content delivery program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210119

Address after: 511442 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 511400 24th floor, building B-1, North District, Wanda Commercial Plaza, Wanbo business district, No.79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou, Guangdong Province

Applicant before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20200605
