CN109874021B - Live broadcast interaction method, device and system - Google Patents


Publication number: CN109874021B (granted publication of application CN201711260540.3A; published as CN109874021A)
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 黄小凤, 陈阳
Original and current assignee: Tencent Technology (Shenzhen) Co., Ltd.
Legal status: Active (granted)
Prior art keywords: live broadcast, anchor, user, live, avatar

Classifications

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention relates to a live broadcast interaction method, device, and system. The method comprises the following steps: an anchor client acquires an anchor avatar corresponding to an anchor user; a live view including the anchor avatar is displayed in at least one live client that has accessed the live broadcast room; while the live view including the anchor avatar is playing, live authorization is granted to a designated user to allow that user to participate in the live interaction; a viewer avatar corresponding to the designated user is acquired according to the designated user's live authorization; and a live view including both the viewer avatar and the anchor avatar is displayed in the at least one live client. The invention solves the prior-art problem that live content takes only a single form.

Description

Live broadcast interaction method, device and system
Technical Field
The invention relates to the field of computer technology, and in particular to a live broadcast interaction method, device, and system.
Background
With the development of computer technology, live broadcasting, which shares data in real time using Internet and streaming media technologies, has become a popular form of interactive communication.
Specifically, an anchor client may establish a live broadcast room online and share a live video with the live clients that access the room; each live client then displays the live content, such as the live video, in the live view played in the room.
To increase live activity, the anchor user and viewer users can send interactive resources to one another, such as virtual gifts, virtual emoticons, and electronic red packets. Alternatively, the anchor user can configure an avatar and synchronize it with his or her live form, so that the broadcast is carried by an avatar displayed dynamically in the live view and viewer users get an experience beyond reality.
However, the above live broadcast process is limited by the participation of the anchor user alone, so the form of the live content is monotonous.
Disclosure of Invention
In order to solve the above technical problems, an object of the present invention is to provide a live broadcast interaction method, device and system.
The technical scheme adopted by the invention is as follows:
a live interaction method, comprising: the anchor client acquires an anchor virtual image corresponding to an anchor user; displaying a live broadcast picture including a main broadcast avatar in at least one live broadcast client accessed to a live broadcast room; when a live broadcast picture comprising a main broadcast virtual image is played, carrying out live broadcast authorization for a specified user so as to allow the specified user to participate in live broadcast interaction; acquiring an audience virtual image corresponding to the designated user according to the live broadcast authorization of the designated user; displaying a live view including a spectator avatar and a anchor avatar in the at least one live client.
A live interactive system, comprising: the anchor client is used for acquiring an anchor virtual image corresponding to an anchor user; at least one live broadcast client end accessed to the live broadcast room for displaying live broadcast pictures including the anchor virtual image; the anchor client is also used for carrying out live broadcast authorization for a specified user when a live broadcast picture comprising an anchor virtual image is played so as to allow the specified user to participate in live broadcast interaction; the anchor client is also used for acquiring the audience virtual image corresponding to the specified user according to the live broadcast authorization of the specified user; the at least one live client is further configured to display a live view including the audience avatar and the anchor avatar.
A live interactive device comprises a processor and a memory, wherein the memory is stored with computer readable instructions, and the computer readable instructions are executed by the processor to realize the live interactive method.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a live interaction method as described above.
In the technical scheme, when a live broadcast picture comprising a main broadcast avatar corresponding to a main broadcast user is played in at least one live broadcast client end accessed to a live broadcast room, live broadcast authorization is carried out for a specified user to allow the specified user to participate in live broadcast interaction, further audience avatar corresponding to the specified user is obtained through the live broadcast authorization of the specified user, and the live broadcast picture displayed in the at least one live broadcast client end simultaneously comprises the main broadcast avatar and the audience avatar, namely, in the live broadcast process, the specified user does not depend on the participation degree of the main broadcast user any more, and also participates in the live broadcast interaction through the audience avatar, so that the form of live broadcast content simultaneously depends on the participation degree of the specified user, and the problem of single live broadcast content form in the prior art is solved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic illustration of an implementation environment in accordance with the present invention.
Fig. 2 is a block diagram illustrating a hardware structure of a user terminal according to an exemplary embodiment.
Fig. 3 is a block diagram illustrating another hardware configuration of a user terminal according to an example embodiment.
Fig. 4 is a flow diagram illustrating a method of live interaction in accordance with an example embodiment.
Fig. 5 is a flow diagram illustrating a method of live interaction in accordance with an example embodiment.
Fig. 6 is a flow diagram illustrating a live broadcast process carried out in a live room through an anchor avatar configured by an anchor user, according to an exemplary embodiment.
FIG. 7 is a flow chart of one embodiment of step 310 in the corresponding embodiment of FIG. 5.
Fig. 8 is a flow diagram illustrating a live interaction request initiation process in accordance with an exemplary embodiment.
Fig. 9 is a flow diagram illustrating another live interaction method in accordance with an example embodiment.
Fig. 10 is a flow diagram illustrating another live interaction method in accordance with an example embodiment.
Fig. 11 is a flow diagram illustrating another live interaction method in accordance with an example embodiment.
Fig. 12 is a schematic diagram of a specific implementation of a live broadcast interaction method in an application scenario.
Fig. 13 is a diagram of a live view dynamically displaying an anchor avatar and a viewer avatar in an application scene.
Fig. 14 is a diagram of a live view dynamically displaying an anchor avatar and a viewer avatar in an application scene.
Fig. 15 is a block diagram illustrating a live interaction device, according to an example embodiment.
Fig. 16 is a block diagram illustrating another live interaction device, according to an example embodiment.
Fig. 17 is a block diagram of an embodiment of the authorization instruction generation module 1010 in the corresponding embodiment of fig. 15.
Fig. 18 is a block diagram illustrating another live interaction device, according to an example embodiment.
Fig. 19 is a block diagram illustrating another live interaction device, according to an example embodiment.
Fig. 20 is a block diagram of one embodiment of a live authorization acquisition module 1310 according to the corresponding embodiment of fig. 19.
Fig. 21 is a block diagram of an embodiment of a request initiation unit 1311 in a corresponding embodiment of fig. 20.
Fig. 22 is a block diagram illustrating another live interaction device, according to an example embodiment.
While specific embodiments of the invention have been shown by way of example in the drawings and will be described in detail hereinafter, such drawings and description are not intended to limit the scope of the inventive concepts in any way, but rather to explain the inventive concepts to those skilled in the art by reference to the particular embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
As described above, during a live broadcast the form of the live content is monotonous because it depends on the anchor user's participation alone. For example, when a live room broadcasts through the avatar configured by the anchor user, the avatar displayed in the live view changes only in step with the anchor user's live form, so the form of the live content depends entirely on the anchor user's participation; furthermore, communication and interaction between the anchor user and other users (such as other anchor users or viewer users) remain relatively poor.
To address this, a live room may be configured so that multiple anchor users broadcast together; if those anchor users are in different locations, the live content is presented to viewer users with one live window per anchor user. In view of the user's visual experience, if each live window is displayed full screen, viewer users must manually switch back and forth among several windows, which is cumbersome, and they still cannot watch all anchor users at once; the drawback that the live content takes a single form remains.
The invention therefore provides a live interaction method that effectively solves the problem of the monotonous form of live content.
Fig. 1 is a schematic diagram of an implementation environment involved in a live interaction method. The implementation environment includes a client 100 and a server 200.
The client 100 is divided into an anchor client and live clients according to the user. The electronic device running the client may be a desktop computer, notebook computer, tablet computer, smartphone, or any other electronic device providing video and network-connection functions, and the client running on that device may be an application client or a web client; neither is limited here.
The server 200 may be a single server, a server cluster composed of multiple servers, or a cloud computing center, that is, a virtual computing platform formed from a whole server cluster.
The client 100 and the server 200 establish a communication connection through a wireless network or a wired network, and thus data is shared between the anchor client and the live client in real time.
Specifically, the anchor client may establish a live broadcast room online, upload the live video captured by a local camera to the server 200 through the room, and have the server 200 forward that video to the live clients.
Based on the live broadcast room established by the anchor client, a plurality of live broadcast clients, such as millions of live broadcast clients or tens of millions of live broadcast clients, can be accessed simultaneously.
Therefore, any live client can upload the interactive data to the server 200, and the server 200 forwards the interactive data to the anchor client and the other live clients.
Correspondingly, each live client can receive the live video and the interactive data sent by the server 200 and display the live video and the interactive data in a live picture played in a live room.
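The fan-out described above can be sketched as follows. This is an illustrative model only; the class and method names are not from the patent, and a real implementation would use network transport rather than in-memory inboxes.

```python
# Minimal sketch of the forwarding role of server 200: data uploaded by one
# client in a room is relayed to every other client in that room.
class RelayServer:
    def __init__(self):
        self.clients = {}  # client_id -> inbox of messages received so far

    def connect(self, client_id):
        self.clients[client_id] = []

    def forward(self, sender, message):
        # Everyone in the room except the sender receives the message.
        for cid, inbox in self.clients.items():
            if cid != sender:
                inbox.append(message)

server = RelayServer()
for cid in ("anchor", "viewer_1", "viewer_2"):
    server.connect(cid)
server.forward("viewer_1", "gift")       # a live client uploads interactive data
print(server.clients["anchor"])          # → ['gift']
```

The same relay path carries both the live video from the anchor client and the interactive data from any live client.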
Referring to fig. 2, fig. 2 is a block diagram illustrating an electronic device according to an example embodiment.
As shown in fig. 2, the electronic device 100 includes a memory 101, a memory controller 103, one or more processors 105 (only one shown in fig. 2), a peripheral interface 107, a radio frequency module 109, a positioning module 111, a camera module 113, an audio module 115, a touch screen 117, and a key module 119. These components communicate with each other via one or more communication buses/signal lines 121.
The memory 101 may be used to store software programs and modules, such as the program instructions and modules corresponding to the live interaction method and device in the exemplary embodiments of the invention; the processor 105 performs various functions and data processing by executing the program instructions stored in the memory 101, thereby completing the live interaction method.
The memory 101, as a carrier of resource storage, may be random access memory, e.g. high-speed random access memory, or non-volatile memory such as one or more magnetic storage devices, flash memory, or other solid-state memory. Storage may be transient or permanent.
The peripheral interface 107 may include at least one wired or wireless network interface, at least one serial-to-parallel conversion interface, at least one input/output interface, at least one USB interface, and the like, for coupling various external input/output devices to the memory 101 and the processor 105, so as to realize communication with various external input/output devices.
The rf module 109 is configured to receive and transmit electromagnetic waves, and achieve interconversion between the electromagnetic waves and electrical signals, so as to communicate with other devices through a communication network. Communication networks include cellular telephone networks, wireless local area networks, or metropolitan area networks, which may use various communication standards, protocols, and technologies.
The positioning module 111 is used for acquiring the current geographic position of the electronic device 100. Examples of the positioning module 111 include, but are not limited to, a global positioning satellite system (GPS), a wireless local area network-based positioning technology, or a mobile communication network-based positioning technology.
The camera module 113 is attached to a camera and is used for taking pictures or videos. The shot pictures or videos can be stored in the memory 101 and also can be sent to an upper computer through the radio frequency module 109.
The audio module 115 provides the user with an audio interface, which may include one or more microphone interfaces, one or more speaker interfaces, and one or more headphone interfaces. Audio data is exchanged with other devices through the audio interface; it may be stored in the memory 101 and may also be transmitted through the radio frequency module 109.
The touch screen 117 provides an input-output interface between the electronic device 100 and a user. Specifically, the user may perform an input operation, such as a gesture operation, e.g., clicking, touching, sliding, etc., through the touch screen 117, so that the electronic apparatus 100 responds to the input operation. The electronic device 100 displays and outputs output contents formed by any one or combination of text, pictures or videos to the user through the touch screen 117.
The key module 119 includes at least one key for providing an interface for a user to input to the electronic device 100, and the user can cause the electronic device 100 to perform different functions by pressing different keys. For example, the sound adjustment key may allow a user to effect an adjustment of the volume of sound played by the electronic device 100.
Fig. 3 is another block diagram illustrating a hardware configuration of an electronic device 100 according to an example embodiment.
The hardware structure of the electronic device 100 may vary considerably with configuration or performance. As shown in fig. 3, the electronic device 100 includes: a power source 110, an interface 130, at least one memory 150, and at least one central processing unit (CPU) 170.
The power supply 110 is used to provide operating voltages for the hardware devices on the electronic device 100.
The interface 130 includes at least one wired or wireless network interface 131, at least one serial-to-parallel conversion interface 133, at least one input/output interface 135, and at least one USB interface 137, etc. for communicating with external devices.
The memory 150, as a carrier of resource storage, may be read-only memory, random access memory, a magnetic disk, an optical disk, or the like; the resources stored on it include an operating system 151, applications 153, and data 155, and storage may be transient or permanent. The operating system 151 manages and controls the hardware devices and applications 153 on the electronic device 100, realizing the central processing unit 170's computation and processing of the mass data 155; it may be Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like. An application 153 is a computer program that performs at least one specific task on the operating system 151 and may comprise at least one module (not shown in fig. 3), each of which may contain a sequence of computer-readable instructions for the electronic device 100. The data 155 may be photographs, pictures, and the like stored on disk.
The central processor 170 may include one or more processors and is arranged to communicate with the memory 150 via a bus for computing and processing the mass data 155 in the memory 150.
As described in detail above, in the electronic device 100 to which the invention applies, the live interaction method is completed by the central processing unit 170 reading a series of computer-readable instructions stored in the memory 150.
It should be noted that the electronic device 100 shown in fig. 2 or fig. 3 is only an example adapted to the present invention, and should not be considered as providing any limitation to the scope of the present invention. The electronic device 100 also cannot be interpreted as requiring reliance on, or necessity of, one or more components of the exemplary electronic device 100 illustrated in fig. 2 or 3.
It is to be understood that the structures shown in fig. 2 or fig. 3 are merely illustrative, and that electronic device 100 may also include more or fewer components than shown in fig. 2 or fig. 3, or have different components than shown in fig. 2 or fig. 3. The components shown in fig. 2 or 3 may be implemented in hardware, software, or a combination thereof.
Referring to fig. 4, in an exemplary embodiment, a live interaction method may include the following steps:
step 810, the anchor client obtains an anchor avatar corresponding to the anchor user.
Step 830, displaying a live view including the anchor avatar in at least one live client that has accessed the live broadcast room.
Step 850, while the live view including the anchor avatar is playing, granting live authorization to a designated user to allow that user to participate in the live interaction.
Step 870, acquiring the viewer avatar corresponding to the designated user according to the designated user's live authorization.
Step 890, displaying a live view including the viewer avatar and the anchor avatar in the at least one live client.
In this live interaction process, based on the live broadcast room established by the anchor client, at least one live client that has accessed the room displays a live view including the anchor avatar. As the broadcast proceeds, the anchor client can grant live authorization to a designated user, acquire the corresponding viewer avatar accordingly, and cause the at least one live client to display a live view that includes both the viewer avatar and the anchor avatar.
That is, the form of the live content no longer depends on the anchor user's participation alone; the designated user also joins the live interaction through the viewer avatar, so the form of the live content depends on the designated user's participation as well, solving the prior-art problem of live content taking only a single form.
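The five steps above can be sketched as a minimal state machine. This is an illustrative model, not the patent's implementation; the class, method, and avatar names are assumptions, and the avatars stand in for the rendered figures in the live view.

```python
# Sketch of steps 810-890: the anchor avatar is shown first, a designated
# user is then authorized, and only then may that user's viewer avatar join.
class LiveRoom:
    def __init__(self, anchor_user):
        self.anchor_user = anchor_user
        self.avatars = {}        # user -> avatar currently shown in the live view
        self.authorized = set()  # users holding live authorization

    def start(self, anchor_avatar):
        # Steps 810/830: obtain the anchor avatar and display it to all clients.
        self.avatars[self.anchor_user] = anchor_avatar

    def authorize(self, user):
        # Step 850: grant live authorization to a designated user.
        self.authorized.add(user)

    def join(self, user, viewer_avatar):
        # Steps 870/890: with authorization, the viewer avatar enters the view.
        if user not in self.authorized:
            raise PermissionError(f"{user} has no live authorization")
        self.avatars[user] = viewer_avatar

room = LiveRoom("anchor")
room.start("anchor_avatar")
room.authorize("viewer_1")
room.join("viewer_1", "viewer_avatar")
print(sorted(room.avatars))  # → ['anchor', 'viewer_1']
```

An unauthorized user attempting `join` is rejected, which mirrors the role of the live authorization instruction described later.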
The live interaction process is described below in terms of the actions performed by the anchor client and the live client, and the interaction of each with the server.
Referring to fig. 5, in an exemplary embodiment, a live interaction method is applied to an anchor client of the implementation environment shown in fig. 1, and the structure of the anchor client may be as shown in fig. 2 or fig. 3.
The live broadcast interaction method can be executed by a main broadcast client, and can comprise the following steps:
Step 310, playing a live view incorporating the anchor avatar, according to the anchor avatar configured by the anchor user.
First, live broadcasting in the live room through the anchor avatar configured by the anchor user is described as follows.
In a specific implementation of an embodiment, as shown in fig. 6, a process of performing live broadcast through a anchor avatar configured by an anchor user in a live broadcast room may include the following steps:
Step 410, establishing, according to a triggered broadcast-start operation, a live broadcast room that broadcasts through the anchor avatar configured by the anchor user.
The broadcast-start operation refers to the operations triggered by the anchor user to establish a live room, for example, selecting a background interface for the room, selecting an anchor avatar, and confirming the start of the broadcast.
For example, a selection entry for the anchor avatar, such as a button for each selectable avatar, is added to the live room; when an avatar is to be selected, a click triggered on that entry is detected and treated as the avatar selection operation.
On this basis, after the anchor user completes the above series of operations, the live room is established, and the anchor avatar configured by the anchor user is displayed in the live view played in the room.
It should be noted that the anchor avatar is displayed in the live view via a two-dimensional model: the live view played in the room is formed by superimposing a foreground layer containing the anchor avatar on a background layer containing the room's background interface. The avatar's display position in the live view can therefore be adjusted through this superimposition; for example, the anchor avatar may be displayed centered in the live view.
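The two-layer superimposition can be illustrated with a toy compositor. This is a hypothetical sketch, not the patent's rendering code: grids of characters stand in for pixel layers, and a space character stands in for a transparent foreground pixel.

```python
# Superimpose a foreground (avatar) layer on a background layer at a chosen
# position; non-transparent foreground pixels overwrite the background.
def compose(background, avatar, top, left):
    frame = [row[:] for row in background]      # copy the background layer
    for r, row in enumerate(avatar):
        for c, pixel in enumerate(row):
            if pixel != " ":                    # skip transparent pixels
                frame[top + r][left + c] = pixel
    return frame

background = [["."] * 7 for _ in range(3)]      # 3x7 background interface
avatar = [["A"]]                                # 1x1 avatar stand-in
# Adjust the display position via the superimposition: center the avatar.
frame = compose(background, avatar, top=1, left=3)
print("".join(frame[1]))  # → "...A..."
```

Changing `top` and `left` is the sketch's analogue of adjusting where the anchor avatar appears in the live view.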
Further, the anchor client synchronizes the configured avatar with the anchor user's live form, so that the anchor avatar displayed in the live view changes in step with the anchor user; in other words, the anchor avatar is displayed dynamically during the broadcast, or equivalently, a live view including the anchor avatar is played in the live room. The anchor user's live form includes, but is not limited to, head movements, facial expressions, and the like.
In other words, configuring the anchor avatar involves two actions: first, selecting the avatar; second, configuring avatar synchronization to keep the avatar in step with the anchor user's live form.
Step 430, acquiring the corresponding anchor-user face data for the anchor avatar dynamically displayed in the live view.
The anchor-user face data represents the anchor user's form during the broadcast and can be obtained by capturing the anchor user's face. It includes, but is not limited to, rotation and movement of the face as a whole and the changing expressions of the facial features during the broadcast.
Capturing the anchor user's face comprises: starting a face capture device, such as a camera, acquiring image data containing the anchor user's face, and performing face recognition on that image data to obtain the anchor-user face data.
Because the anchor avatar is synchronized with the anchor user's live form, as the anchor user's face is tracked and recognized, the face data changes with the user's live form and the avatar changes with the face data, so that a dynamic anchor avatar is presented in the live view.
It should be noted that face tracking and recognition algorithms operating on consumer-grade camera video streams include, but are not limited to: geometric-feature algorithms, local feature analysis, eigenfaces, elastic models, and deep learning algorithms.
Any of these algorithms can recognize the anchor user's face in the continuously changing image data, including expression changes of the facial features and whole-face changes such as rotation and movement, forming a continuously changing stream of anchor-user face data that provides a sufficient basis for dynamically displaying the anchor avatar in the live view.
On this basis, by acquiring the anchor-user face data, a live view dynamically displaying the anchor avatar can be generated for the live room.
Step 450, requesting the server to forward the anchor-user face data to the live clients that can access the live room, so that the live view incorporating the anchor avatar plays synchronously in the anchor client and the live clients.
After receiving the continuously changing anchor-user face data forwarded by the server, a live client that has accessed the room can render the anchor avatar in the live view according to that data: the head actions and/or facial expressions indicated by the face data are mapped onto the anchor avatar, and the rendered avatar is displayed dynamically in the live view, realizing real-time data sharing between the anchor client and the live client.
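The client-side rendering step, mapping received face data onto the locally stored avatar template, can be sketched as follows. The field names (`yaw`, `mouth_open`, etc.) are assumptions for illustration; the patent does not specify a face-data schema.

```python
# Drive a neutral avatar template with one frame of received face data, so
# the avatar mirrors the anchor's head pose and expression frame by frame.
def render_avatar(avatar_template, face_data):
    pose = dict(avatar_template)                  # copy the neutral template
    pose["head_yaw"] = face_data["yaw"]           # map the head action
    pose["mouth_open"] = face_data["mouth_open"]  # map the facial expression
    return pose

template = {"model": "fox", "head_yaw": 0.0, "mouth_open": 0.0}
frame_pose = render_avatar(template, {"yaw": 12.0, "mouth_open": 0.7})
print(frame_pose["head_yaw"])  # → 12.0
```

Running this once per received face-data record yields the dynamically displayed avatar; the template itself never leaves the client.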
After the anchor client or a live client is installed on an electronic device, it stores the avatar templates in the device's configured storage space, so as to provide users with the various anchor and viewer avatars. In this process, the anchor client therefore uploads only the anchor-user face data, rather than live video (which would contain both the face data and the anchor avatar), to the server. This greatly reduces the volume of data transmitted during the broadcast, effectively lowers the bandwidth required, and relieves the processing load between the interacting ends.
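The bandwidth argument can be made concrete with a rough, illustrative comparison; the face-data fields and the assumed frame size below are made-up numbers, not figures from the patent.

```python
# A face-data record is a handful of coefficients, so its serialized form is
# orders of magnitude smaller than even one compressed video frame.
import json

face_data = {"yaw": 5.0, "pitch": -2.0, "roll": 0.3,
             "mouth_open": 0.6, "left_eye": 0.9, "right_eye": 0.9}
payload = json.dumps(face_data).encode("utf-8")

TYPICAL_VIDEO_FRAME_BYTES = 30_000  # assumed size of one compressed frame
print(len(payload) < TYPICAL_VIDEO_FRAME_BYTES)  # → True
```

Since only records like `payload` cross the network while the avatar templates stay on each client, per-frame upload cost is roughly the size of the face data rather than the size of a video frame.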
And step 330, in the live broadcast by the live broadcast picture playing integrated with the anchor virtual image, generating a live broadcast authorization instruction for the specified user to carry out live broadcast authorization.
As can be seen from the above, in the live broadcast room, live broadcast is performed through the anchor avatar configured by the anchor user, and in the live broadcast process, as described above, the live broadcast is limited by the participation degree of the anchor user, and the form of the live broadcast content is too single. Therefore, in the embodiment, live broadcast authorization is performed on the specified user through generation of the live broadcast authorization instruction, and the form of live broadcast content is enriched.
Performing live broadcast authorization for the specified user may mean allowing the audience avatar configured by the specified user to join the live broadcast, or actively inviting the specified user to configure an audience avatar and join the live broadcast. Here, the specified user may be another anchor user or an audience user who has entered the live broadcast room, and the anchor client can determine, according to a triggered user selection operation, whether the specified user being authorized is another anchor user or an audience user, which is not limited herein.
It should be noted that the audience avatar is configured by the specified user for himself or herself, as distinguished from the anchor avatar that the anchor user configures for himself or herself.
For example, a live broadcast authorization entry is provided in the live broadcast room for the anchor user to perform live broadcast authorization for the specified user. When the specified user initiates a request to join the live broadcast with a configured audience avatar, the live broadcast authorization entry is activated, for example, a confirmation dialog box pops up in the live broadcast picture. A confirmation operation triggered on the activated live broadcast authorization entry is then detected, and a live broadcast authorization instruction is generated according to the confirmation operation, performing live broadcast authorization for the specified user.
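The authorization-entry flow just described can be sketched as a small state holder on the anchor client; the class, method and field names are assumptions for illustration only:

```python
# Minimal sketch of the live broadcast authorization entry, assuming a
# pending-request table keyed by user. Names are illustrative, not from
# this disclosure.
class AuthorizationEntry:
    def __init__(self):
        self.pending = {}          # user_id -> avatar_id awaiting approval

    def on_join_request(self, user_id, avatar_id):
        """A specified user applies to join with a configured avatar:
        this activates the entry (e.g. a confirmation dialog pops up)."""
        self.pending[user_id] = avatar_id

    def on_confirm(self, user_id):
        """The anchor confirms: generate a live broadcast authorization
        instruction for that user, or None if no request is pending."""
        if user_id not in self.pending:
            return None
        return {"type": "live_auth", "user": user_id,
                "avatar": self.pending.pop(user_id)}

entry = AuthorizationEntry()
entry.on_join_request("viewer_42", "avatar_cat")
instruction = entry.on_confirm("viewer_42")
```

A second confirmation for the same user yields nothing, since the pending request is consumed when the instruction is generated.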
For the specified user, once the anchor user generates the live broadcast authorization instruction, the specified user obtains the anchor user's live broadcast authorization, so that the configured audience avatar can join the live broadcast and live broadcast interaction is carried out between the anchor user and the audience user.
Step 350, acquiring the audience avatar configured by the specified user through the live broadcast authorization instruction.

After the live broadcast authorization instruction is obtained, it is known that the anchor user has performed live broadcast authorization for the specified user, that is, the audience avatar configured by the specified user needs to be merged into the live broadcast picture.
For the anchor client, to acquire the specified user face data, the live broadcast authorization instruction is transmitted to the other client corresponding to the specified user, so that the other client is controlled by the live broadcast authorization instruction to capture the specified user's face and generate the specified user face data, which is then obtained via forwarding by the server.

For the other client corresponding to the specified user, the specified user face data is obtained accordingly through the face capture of the specified user controlled by the anchor client.
It should be noted that during the above face capture of the specified user, the specified user also configures an audience avatar for himself or herself, the specified user face data is associated with the configured audience avatar, and the server is notified accordingly, so that the audience avatar configured by the specified user is subsequently rendered in the live broadcast picture according to the specified user face data.
Step 370, playing the live broadcast picture merged with the audience avatar and the anchor avatar in the live broadcast room.
It should be understood that the data shared in real time through the live room is consistent for all clients accessing the live room, including the anchor client, the designated client, and other live clients, i.e., the live content presented in the live view is consistent.
Accordingly, the specified user face data is forwarded by the server; for all clients, after the specified user face data issued by the server is received, the audience avatar is rendered in the live broadcast picture according to that face data, so that data is shared in real time through the live broadcast room.
Rendering here means mapping between the specified user face data and the audience avatar in the live broadcast picture. Specifically, the audience avatar stored in the client is obtained; the overall rotation and movement of the specified user's face, as well as changes of the facial features (eyes, eyebrows, mouth and the like) and expressions, are mapped to the audience avatar as indicated by the specified user face data; and the mapped audience avatar is merged into the live broadcast picture containing the anchor avatar, so that the audience avatar and the anchor avatar are displayed in the live broadcast room at the same time.
Further, similarly to the anchor avatar, the audience avatar is displayed in the live broadcast picture by overlaying a foreground layer on a background layer. On this basis, when the audience avatar is merged into the live broadcast picture, the display positions of the anchor avatar and the audience avatar can be adjusted during the overlay. For example, when the live broadcast pictures are merged, the anchor avatar is adjusted from a centered display to a left-biased display, and accordingly the audience avatar is given a right-biased display.
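The centered-to-left adjustment can be sketched as a slot computation over the foreground layer, using normalized horizontal coordinates; the even-spacing rule is a simplifying assumption:

```python
# Normalized horizontal centers for the avatars on the foreground layer.
# With one avatar (the anchor alone) it is centered; with two, the anchor
# shifts to the left slot and the audience avatar takes the right slot.
def foreground_slots(avatar_count):
    if avatar_count < 1:
        raise ValueError("at least one avatar is required")
    return [(i + 0.5) / avatar_count for i in range(avatar_count)]

print(foreground_slots(1))   # anchor centered
print(foreground_slots(2))   # anchor left-biased, audience right-biased
```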
It is worth mentioning that the audience avatars, anchor avatars and live broadcast room background interfaces are stored in the client when the application client or web page client providing the video function is installed, and the user can update those stored in the client, for example by downloading, so that they are available for selection during user configuration.
Through the above process, live broadcast interaction between the anchor user and the specified user is realized, so that the form of the live content no longer depends on the participation of the anchor user alone, but also on the participation of the specified user.
In addition, regardless of whether the specified user and the anchor user are in the same region, the audience avatar is merged into the live broadcast picture containing the anchor avatar, realizing same-stage interaction between the anchor user and the specified user. Audience users therefore no longer need to manually switch back and forth among multiple live broadcast windows, which greatly simplifies operation, allows all users participating in the live broadcast to be watched directly and simultaneously, and further effectively enriches the form of the live content.
Referring to fig. 7, in an exemplary embodiment, step 330 may include the following steps:
Step 331, receiving a live broadcast interaction request initiated by the specified user to apply for joining the live broadcast with the configured audience avatar.

In this embodiment, performing live broadcast authorization for the specified user means allowing the audience avatar configured by the specified user to join the live broadcast; that is, the live broadcast authorization instruction is generated in response to a live broadcast interaction request initiated by the specified user applying to join the live broadcast with the configured audience avatar.
For other clients where the specified user is located, hereinafter referred to as the specified client, as shown in fig. 7, the initiating process of the live interaction request may include the following steps:
Step 3311, when it is detected that the interaction request control in the live broadcast room is triggered, displaying an audience avatar selection interface in the live broadcast picture and performing face capture of the specified user.
A control refers to a text, picture, chart, button, switch, slider, input box or the like contained in an interface, among which controls such as buttons, switches, sliders and input boxes can be triggered to realize human-machine interaction. The interaction request control is therefore any one of the above triggerable controls; for example, if the interaction request control is a button displayed in the live broadcast picture, then when the specified user clicks the button, it is detected that the interaction request control in the live broadcast room has been triggered.
When the interaction request control is triggered, the specified client knows that the specified user wants to carry out live broadcast interaction with the anchor user, and therefore first enters the configuration of the audience avatar for the specified user, so that the configured audience avatar can subsequently join the live broadcast.

The configuration of the audience avatar includes: selection of the audience avatar and its association with the specified user.

Specifically, an audience avatar selection interface is displayed in the live broadcast picture, and an audience avatar is selected for the specified user according to a triggered audience avatar selection operation.
A face capture device, such as a camera, is then started to acquire image data containing the specified user's face, and face recognition is performed on the image data to obtain the specified user face data, thereby completing the face capture of the specified user.
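The capture-then-recognize step can be sketched as follows, where `detect_face` is a stand-in for any real face tracking and recognition routine and frames are modeled as plain dictionaries for illustration:

```python
# Sketch of the face-capture step on the specified client. detect_face is
# a placeholder for a real face tracking/recognition algorithm.
def detect_face(frame):
    return frame.get("face")          # None when no face is visible

def capture_face_data(frame):
    face = detect_face(frame)
    if face is None:
        # Capture failed: prompt the user to aim the face at the camera.
        return {"status": "prompt", "message": "Please face the camera"}
    return {"status": "ok", "face_data": face}

good = capture_face_data({"face": {"yaw": -12.0, "pitch": 3.1, "roll": 0.5}})
bad = capture_face_data({})
```

The failure branch corresponds to the capture prompt message described for the case where face capture fails.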
The acquired specified user face data is associated with the selected audience avatar, so that the specified client is notified to render the audience avatar in the live broadcast picture according to the specified user face data, the rendering being specific to the selected audience avatar.

Further, during the image data acquisition, if the face capture of the specified user fails, the specified client generates and displays a capture prompt message prompting the specified user to aim his or her face at the face capture device, so as to achieve accurate face capture. Preferably, the capture prompt message is displayed in the audience avatar selection interface so as not to affect the ongoing live broadcast.
Step 3313, associating the obtained specified user face data with the audience avatar selected in the audience avatar selection interface.

Step 3315, initiating, according to the specified user face data now associated with the audience avatar, a live broadcast interaction request to the anchor client, the live broadcast interaction request applying for the audience avatar to join the live broadcast.
For the server, after receiving the live broadcast interaction request, the live broadcast interaction request carrying the face data of the specified user is forwarded to the anchor client, so that the anchor client can respond to the live broadcast interaction request and perform live broadcast authorization for the specified user.
Step 333, generating a live broadcast authorization instruction for the anchor user allowing the audience avatar to join the live broadcast according to the live broadcast interaction request.

Step 335, marking, in the server, the start of the live broadcast interaction between the anchor user and the specified user according to the live broadcast authorization instruction.
When the anchor user allows the audience avatar to join the live broadcast, a live broadcast authorization instruction is generated and transmitted by the server to the specified client, so that the specified client can subsequently play the live broadcast picture merged with the audience avatar and the anchor avatar.

For the server, the live broadcast interaction between the anchor user and the specified user is known from the live broadcast authorization instruction; at this point, the server marks the start of the live broadcast interaction between the anchor user and the specified user, so as to perform avatar synchronization of live broadcast pictures in the live broadcast room subsequently.

That is, once the live broadcast interaction between the anchor user and the specified user has started, the server needs to forward both the anchor user face data and the specified user face data to the live broadcast clients accessing the live broadcast room, so as to ensure that the anchor avatar and the audience avatar in the live broadcast picture remain synchronized with the live broadcast forms of the anchor user and the specified user respectively.
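The server-side forwarding rule can be sketched as follows; the class and method names are assumptions, and client inboxes stand in for network pushes:

```python
# Minimal sketch of the server fan-out: while an interaction is open,
# both users' face data are forwarded to every client in the room;
# otherwise only the anchor's data is forwarded.
class RoomSync:
    def __init__(self, clients):
        self.clients = clients            # each client is a list (inbox)
        self.interaction_open = False

    def forward(self, anchor_data, guest_data=None):
        messages = [anchor_data]
        if self.interaction_open and guest_data is not None:
            messages.append(guest_data)
        for inbox in self.clients:        # stand-in for a network push
            inbox.extend(messages)

a, b = [], []
room = RoomSync([a, b])
room.forward("anchor#1", "guest#1")       # interaction not yet authorized
room.interaction_open = True
room.forward("anchor#2", "guest#2")       # now both users' data go out
```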
Accordingly, in an exemplary embodiment, step 350 may include the steps of:
The live broadcast authorization instruction is executed to extract the face data carried in the live broadcast interaction request, obtaining the specified user face data.

As described above, the live broadcast interaction request is initiated by the specified client to the anchor client according to the specified user face data, that is, the live broadcast interaction request carries at least the specified user face data.

Therefore, after the anchor user performs live broadcast authorization for the specified user, the specified user face data can be extracted from the live broadcast interaction request, providing a sufficient basis for rendering the audience avatar in the live broadcast picture.
Referring to fig. 9, in an exemplary embodiment, after step 370, the method as described above may further include the steps of:
and step 510, receiving the avatar synchronization data pushed by the server.
And the virtual image synchronization data is used for respectively synchronizing the audience virtual image, the anchor virtual image and the direct broadcasting forms of the appointed user and the anchor user. Accordingly, the avatar synchronization data includes designated user face data and/or anchor user face data.
The face data of the designated user or the anchor user is obtained by tracking and identifying the face. That is, no matter the anchor client or the designated client, when the corresponding avatar is configured to be synchronized with the live broadcast form of the corresponding user, the face of the corresponding user is tracked by the started face capture device to continuously obtain the image data of the corresponding user, and then the face data of the corresponding user is obtained by face recognition of the image data, and is reported to the server, so that the server is requested to generate the avatar synchronization data according to the face data of the corresponding user.
For the server, the avatar synchronization data is delivered to all clients accessing the live broadcast room, so that avatar synchronization in the live broadcast picture is performed in the live broadcast room and the clients accessing the live broadcast room are guaranteed to share data in real time.
Step 530, the avatar synchronization data is mapped to the viewer avatar and/or the anchor avatar in the live view.
For example, in the face tracking and recognition of the specified user, any face tracking and recognition algorithm outputs, as its most basic parameters, the position parameters of the face box and rotation parameters such as pitch (the pitch angle around the X axis), yaw (the yaw angle around the Y axis) and roll (the roll angle around the Z axis). The geometric center of the face box is then calculated from the position parameters and compared with the geometric center of the rectangular picture captured by the camera, yielding the position offset parameters (Δx, Δy) of the face as a whole relative to the geometric center of the camera picture. Thus, the specified user face data includes: the position offset parameters, and rotation parameters such as pitch, yaw and roll.
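A worked instance of the offset computation, assuming an (x, y, w, h) face-box format and a 1280x720 camera picture (both assumptions for illustration):

```python
# Position offset (dx, dy): geometric center of the face box minus the
# geometric center of the camera frame.
def position_offset(face_box, frame_w, frame_h):
    x, y, w, h = face_box                      # top-left corner + size
    face_cx, face_cy = x + w / 2.0, y + h / 2.0
    return face_cx - frame_w / 2.0, face_cy - frame_h / 2.0

# Face box centered horizontally, slightly above the frame center.
dx, dy = position_offset((600, 300, 80, 80), 1280, 720)
print(dx, dy)
```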
Based on this, when the specified user face data is mapped to the audience avatar, the position offset parameters give the audience avatar the same degree of offset from its own geometric center, while the rotation parameters pitch, yaw and roll transmit the specified user's head pitching, turning, tilting and similar information to the audience avatar, thereby synchronizing the head motion of the audience avatar with that of the specified user during live broadcast.
Further, the face tracking and recognition algorithm also places a number of facial feature points on the face, for example: feature points describing the face contour, feature points describing the left and right eye contours, feature points describing the left and right eyebrow contours, feature points describing the mouth contour, and feature points describing the nose contour. The number of facial feature points can be flexibly adjusted according to the accuracy required of the tracking and recognition, and is not limited herein. Thus, the specified user face data further includes: facial feature values corresponding to the plurality of facial feature points, the facial feature values representing the positions of the facial feature points on the face.
Accordingly, the audience avatar can be provided with a corresponding number of facial feature points. In the mapping process, the facial feature points of the audience avatar are adjusted according to the facial feature values in the specified user face data, transmitting the specified user's facial expression, mouth shape and similar information to the audience avatar, thereby synchronizing the facial expression of the audience avatar with that of the specified user during live broadcast.
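Under the simplifying assumption of a one-to-one correspondence between the facial feature points of the face and those of the avatar, the transfer can be sketched as:

```python
# Drive each avatar feature point from the corresponding facial feature
# value. Here a feature value is an (x, y) position on the face, and the
# avatar point is blended toward it; the blend weight is an assumption.
def map_feature_points(avatar_points, face_values, weight=1.0):
    assert len(avatar_points) == len(face_values)
    return [
        (ax + weight * (fx - ax), ay + weight * (fy - ay))
        for (ax, ay), (fx, fy) in zip(avatar_points, face_values)
    ]

# With weight 1.0, two mouth-corner points follow the face exactly.
mapped = map_feature_points([(10.0, 20.0), (30.0, 20.0)],
                            [(12.0, 21.0), (28.0, 21.0)])
```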
Further, if the audience avatar wears a pendant, such as a hair accessory or a hat, the pendant may be deformed according to the rotation parameters in the specified user face data when the audience avatar mapping is performed. Deformations include, but are not limited to, curling, flapping, stretching, tilting up, sinking and the like. The degree of deformation varies with the rotation parameter; specific correlation functions include, but are not limited to, linear, exponential or logarithmic increase or decrease with the rotation parameter. The correlation function can be flexibly adjusted according to the actual application scenario, and the functions may be used alone or in combination.
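Three of the correlation families mentioned above might look as follows, mapping a rotation parameter (in degrees) to a deformation degree; the coefficients are illustrative assumptions and would be tuned per pendant in practice:

```python
import math

# Illustrative correlation functions between a rotation parameter and the
# pendant deformation degree: linear, exponential and logarithmic growth.
def linear_deform(angle, k=0.5):
    return k * abs(angle)

def exponential_deform(angle, k=0.1):
    return math.exp(k * abs(angle)) - 1.0

def logarithmic_deform(angle, k=2.0):
    return k * math.log1p(abs(angle))
```

All three grow monotonically with the magnitude of the rotation, but at different rates, so a fast head turn can make a pendant flap sharply (exponential) or only sway gently (logarithmic).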
In an exemplary embodiment, after step 370, the method as described above may further include the steps of:
and when the appointed user requests to quit the live broadcast interaction, clearing the audience virtual image displayed in the live broadcast picture, and requesting the server end to close the live broadcast interaction between the anchor user and the appointed user.
And clearing the audience virtual image dynamically displayed in the live broadcast picture along with the exit of the specified user from the live broadcast interaction. For example, for a live view that dynamically displays a anchor avatar and a viewer avatar, the viewer avatar is moved out of the live view, thereby causing the anchor avatar to resume a centered presentation in the live view.
In addition, a request for closing the live broadcast interaction between the anchor user and the designated user is sent to the server, so that the server is informed of stopping the synchronization of the audience avatars in the live broadcast pictures for the live broadcast room. That is, for the server, after knowing that the anchor user and the specified user stop the live broadcast interaction, only the face data of the anchor user needs to be forwarded to the live broadcast client accessing the live broadcast room, so as to ensure the synchronization of the anchor virtual image in the live broadcast picture and the live broadcast form of the anchor user.
Referring to fig. 10, in an exemplary embodiment, a live interaction method is applied to a live client of the implementation environment shown in fig. 1, and the structure of the live client may be as shown in fig. 2 or 3.
The live broadcast interaction method can be executed by the live broadcast client, and can comprise the following steps:
Step 610, acquiring the anchor avatar configured by the anchor user, and playing the live broadcast picture merged with the anchor avatar in the live broadcast room.

Step 630, during the live broadcast performed by playing the live broadcast picture merged with the anchor avatar, obtaining the anchor user's live broadcast authorization for the specified user.

Step 650, acquiring the audience avatar configured by the specified user according to the anchor user's live broadcast authorization for the specified user.
Step 670, playing the live broadcast picture merged with the audience virtual image and the anchor virtual image in the live broadcast room.
Further, in another exemplary embodiment, step 630 may include the steps of:
A live broadcast authorization instruction generated by the anchor user inviting the specified user to configure an audience avatar and join the live broadcast is received.

According to the live broadcast authorization instruction, live broadcast authorization allowing the audience avatar to join the live broadcast is obtained for the specified user.
Referring to fig. 11, in an exemplary embodiment, the method as described above may further include the steps of:
and step 710, tracking and identifying the face of the appointed user by configuring virtual image synchronization.
And step 730, requesting the server to send virtual image synchronization data according to the obtained specified user face data so as to synchronize the virtual image in the live broadcast picture in the live broadcast room.
Fig. 12 to fig. 14 are schematic diagrams of specific implementations of a live broadcast interaction method in an application scenario.
In this application scenario, the specified user is, by way of example, an audience user who has entered the live broadcast room. The application scenario includes an anchor client, a server and two live broadcast clients.

After the anchor client establishes the live broadcast room online, the server can forward the anchor user face data, so that the two live broadcast clients access the live broadcast room and share the live broadcast picture in real time.

One of the live broadcast clients, having configured an audience avatar, initiates a live broadcast interaction request to the anchor client applying for the audience avatar to join the live broadcast, and waits for the anchor user's live broadcast authorization for that audience user.

When that audience user obtains the anchor user's live broadcast authorization, the specified user face data is acquired to map the audience avatar in the live broadcast picture, and the audience avatar and the anchor avatar are then merged into the live broadcast picture played in the live broadcast room, as shown in fig. 13, realizing live broadcast interaction between that audience user and the anchor user.
During the above live broadcast, the server receives the anchor user face data reported by the anchor client and the specified user face data reported by the specified client, and completes the avatar synchronization in the live broadcast picture, so that the audience avatar and the anchor avatar remain synchronized in real time with the live broadcast states of that audience user and the anchor user, as shown in fig. 14.
When that audience user requests that the audience avatar quit the live broadcast, the audience avatar in the live broadcast picture is cleared and the server is notified to close the live broadcast interaction between the anchor user and that audience user, so that the live broadcast continues in the live broadcast room through the anchor avatar.
A specific flow of the live broadcast performed in the live broadcast room with the live broadcast picture merged with the audience avatar and the anchor avatar is shown in fig. 12.

It can thus be seen that an audience user entering the live broadcast room, regardless of whether he or she is in the same region as the anchor user, can actively request same-stage live broadcast interaction with the anchor user, merging the configured audience avatar into the live broadcast picture containing the anchor avatar for other audience users to watch. This effectively enriches the form of the live content, is simple to operate, and is highly interactive and entertaining.
The following is an embodiment of the apparatus of the present invention, which can be used to execute the live broadcast interaction method according to the present invention. For details not disclosed in the embodiment of the apparatus of the present invention, please refer to the embodiment of the live broadcast interaction method according to the present invention.
Referring to fig. 15, in an exemplary embodiment, a live interactive apparatus 1000 is applied to an anchor client, where the apparatus 1000 includes, but is not limited to: a live broadcast module 1010, an authorization instruction generation module 1030, an avatar acquisition module 1050, and an avatar fusion module 1070.
The live broadcast module 1010 is configured to play a live broadcast frame merged into the anchor avatar according to the anchor avatar configured by the anchor user.
The authorization instruction generating module 1030 is configured to generate a live broadcast authorization instruction for authorizing a specified user to perform live broadcast in live broadcast performed by live broadcast screen playback integrated with the anchor avatar.
The avatar obtaining module 1050 is configured to obtain the audience avatar configured by the specified user through the live broadcast authorization instruction.
The avatar blending module 1070 is configured to perform mapping of the audience avatar in the live broadcast frame according to the face data of the specified user, and play the live broadcast frame blended with the audience avatar and the anchor avatar in the live broadcast room.
Referring to fig. 16, in an exemplary embodiment, the apparatus 1000 further includes, but is not limited to: a live broadcast room establishing module 1110, a second face data acquiring module 1130, and a face data forwarding module 1150.
The live broadcast room establishing module 1110 is configured to establish, according to a triggered live broadcast start operation, a live broadcast room in which live broadcast is performed through the anchor avatar configured by the anchor user.
The second face data acquiring module 1130 is configured to acquire corresponding anchor user face data for an anchor avatar dynamically displayed in a live broadcast frame.
The face data forwarding module 1150 is configured to request the server to forward the anchor user face data to the live broadcast client that can access the live broadcast room, so as to play the live broadcast frame merged with the anchor avatar in the live broadcast client that can access the live broadcast room.
Referring to FIG. 17, in an exemplary embodiment, the authorization instruction generation module 1030 includes, but is not limited to: an interaction request receiving unit 1031 and a live authorization unit 1033.
The interaction request receiving unit 1031 is configured to receive a live interaction request initiated by a specified user for applying for joining a live broadcast for the configured audience avatar.
The live broadcast authorization unit 1033 is configured to generate a live broadcast authorization instruction for allowing the audience avatar to join the live broadcast for the anchor user according to the live broadcast interaction request.
In an exemplary embodiment, the avatar obtaining module 1050 includes, but is not limited to: a first face data acquisition unit.

The first face data acquisition unit is used to execute the live broadcast authorization instruction to extract the face data carried in the live broadcast interaction request, obtaining the specified user face data.
In an exemplary embodiment, the authorization instruction generation module 1030 includes, but is not limited to: a specified user confirmation unit and a second live broadcast authorization unit.

The specified user confirmation unit is used to confirm the specified user for live broadcast authorization according to a triggered user selection operation.

The second live broadcast authorization unit is used to generate a live broadcast authorization instruction for the anchor user inviting the specified user to configure an audience avatar and join the live broadcast.
In an exemplary embodiment, the avatar obtaining module 1050 includes, but is not limited to: an instruction transmission unit, a face capture control unit and a face data receiving unit.
The instruction transmitting unit is used for transmitting the live broadcast authorization instruction to other clients corresponding to the specified user.
The face capture control unit is used for controlling other clients to capture faces of the specified users through the transmitted live broadcast authorization instruction to obtain face data of the specified users.
The face data receiving unit is used for receiving face data of the specified user fed back by other clients.
Further, in an exemplary embodiment, the authorization instruction generation module 1030 further includes, but is not limited to: and a live broadcast interaction starting module.
The live broadcast interaction starting module is used to mark, in the server, the start of the live broadcast interaction between the anchor user and the specified user according to the live broadcast authorization instruction.
Referring to fig. 18, in an exemplary embodiment, the apparatus 1000 as described above further includes, but is not limited to: a synchronized data receiving module 1210 and a synchronized data mapping module 1230.
The synchronization data receiving module 1210 is configured to receive avatar synchronization data pushed by a server.
The synchronization data mapping module 1230 is configured to map the avatar synchronization data to the audience avatar and/or the anchor avatar in the live broadcast picture, keeping the audience avatar and the anchor avatar synchronized with the specified user and the anchor user respectively during live broadcast through the mapping.
In an exemplary embodiment, the apparatus 1000 as described above further includes, but is not limited to: and an avatar clearing module.
The avatar clearing module is used to clear the audience avatar displayed in the live broadcast picture when the specified user requests to quit the live broadcast interaction, and to request the server to close the live broadcast interaction between the anchor user and the specified user.
Referring to fig. 19, in an exemplary embodiment, a live interaction device 1300 includes, but is not limited to: a live broadcast module 1310, a live broadcast authorization obtaining module 1330, an avatar obtaining module 1350 and an avatar merging module 1370.
The live broadcasting module 1310 is configured to obtain the anchor avatar configured by the anchor user, and to play a live broadcast picture merged with the anchor avatar in the live broadcast room.
The live broadcast authorization obtaining module 1330 is configured to obtain the anchor user's live broadcast authorization for the specified user while the live broadcast picture merged with the anchor avatar is being played.
The avatar acquisition module 1350 is configured to acquire the audience avatar configured by the specified user according to the live authorization of the anchor user for the specified user.
The avatar fusion module 1370 is configured to play a live broadcast frame fused with the audience avatar and the anchor avatar in a live broadcast room.
Referring to fig. 20, in an exemplary embodiment, the live authorization acquisition module 1330 includes, but is not limited to: a request initiation unit 1331 and a live authorization unit 1333.
The request initiating unit 1331 is configured to request, from the anchor client, that the audience avatar configured by the specified user be allowed to join the live broadcast.
The live authorization unit 1333 is configured to obtain, through the request, the anchor user's live broadcast authorization for the specified user, which allows the audience avatar to join the live broadcast.
Referring to FIG. 21, in an exemplary embodiment, a request initiation unit 1331 includes, but is not limited to: a control triggering unit 1331a, a data association unit 1331c and an application adding unit 1331 e.
The control triggering unit 1331a is configured to display an audience avatar selection interface in a live broadcast picture and perform face capture of a specified user when detecting that an interaction request control in a live broadcast room is triggered.
The data associating unit 1331c is configured to associate the obtained specified user face data with the audience avatar selected in the audience avatar selection interface.
The application adding unit 1331e is configured to initiate a live broadcast interaction request to the anchor client according to the specified user face data associated with the audience avatar, and to apply, through the live broadcast interaction request, for the audience avatar to join the live broadcast.
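The three units above describe a single client-side flow: select an avatar, capture the face, associate the two, and apply to join. A minimal sketch of that flow follows; the function names and the request shape are illustrative assumptions, not the patent's actual protocol.

```python
# Hypothetical client-side flow for applying to join the live broadcast
# with an audience avatar. All names and fields are illustrative.

def apply_to_join(selected_avatar_id, captured_face_data, send_request):
    """Associate captured face data with the chosen audience avatar and
    initiate a live broadcast interaction request toward the anchor client."""
    application = {
        "type": "live_interaction_request",
        "avatar_id": selected_avatar_id,   # avatar chosen in the selection interface
        "face_data": captured_face_data,   # result of the face capture step
    }
    send_request(application)  # e.g. forwarded to the anchor client via the server
    return application
```

The anchor client would then accept or reject this request, which is the live broadcast authorization step described earlier.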
Referring to fig. 22, in an exemplary embodiment, the apparatus 1300 as described above further includes, but is not limited to: a tracking identification module 1410 and an avatar synchronization module 1430.
The tracking and identifying module 1410 is configured to track and identify the specified user's face when avatar synchronization is configured.
The avatar synchronization module 1430 is configured to request that the server send avatar synchronization data according to the obtained specified user face data, so as to synchronize the avatar in the live broadcast picture of the live broadcast room.
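The capture-and-forward behavior of these two modules can be sketched as a simple loop: track the face in each camera frame and, when tracking succeeds, ask the server to distribute the result as synchronization data. The `track_face` callable and the message shape are stand-ins for whatever face-tracking component and push channel a real client would use; they are assumptions, not part of the patent.

```python
# Hypothetical capture-and-forward loop for avatar synchronization.
# track_face and push_to_server are injected stand-ins for a real
# face-tracking SDK and a real server push channel.

def sync_loop(track_face, push_to_server, frames):
    """For each captured camera frame, extract face data and request
    that the server distribute it as avatar synchronization data."""
    sent = []
    for frame in frames:
        face = track_face(frame)  # e.g. head pose + expression weights, or None
        if face is None:
            continue              # face lost this frame; skip and keep tracking
        push_to_server({"type": "avatar_sync", "face": face})
        sent.append(face)
    return sent
```

Skipping frames where tracking fails, rather than aborting, matches the requirement that the live broadcast picture keep playing even when the face briefly leaves the camera view.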
It should be noted that, when the live broadcast interaction apparatus provided in the foregoing embodiments performs live broadcast interaction, the division into the above function modules is given only as an example; in practical applications, the functions may be distributed among different function modules as needed, that is, the internal structure of the live broadcast interaction apparatus may be divided into different function modules to complete all or part of the functions described above.
In addition, the embodiments of the live broadcast interaction apparatus and the live broadcast interaction method provided by the above embodiments belong to the same concept, and the specific manner in which each module executes operations has been described in detail in the method embodiments, and is not described herein again.
In an exemplary embodiment, a live interaction system includes, but is not limited to: an anchor client and a live client.
The anchor client is used for acquiring an anchor virtual image corresponding to an anchor user.
At least one live client accessed to the live broadcast room is used for displaying a live broadcast picture including the anchor avatar.
The anchor client is also used for performing live broadcast authorization for the specified user when the live broadcast picture including the anchor avatar is played, so as to allow the specified user to participate in the live broadcast interaction.
The anchor client is also used for acquiring the audience avatar corresponding to the specified user according to the specified user's live broadcast authorization.
The at least one live client is further used for displaying a live broadcast picture including the audience avatar and the anchor avatar.
In an exemplary embodiment, the anchor client includes, but is not limited to: a request receiving unit and a live authorization unit.
The request receiving unit is used for receiving a live broadcast interaction request initiated by the specified user requesting to participate in the live broadcast interaction.
The live broadcast authorization unit is used for allowing the specified user to participate in the live broadcast interaction according to the live broadcast interaction request.
In an exemplary embodiment, the anchor client includes, but is not limited to: the system comprises a data receiving unit, a data mapping unit and an image merging unit.
The data receiving unit is used for receiving the face data of the specified user in the live broadcast authorization.
A data mapping unit for mapping the head motion and/or facial expression of the specified user indicated by the specified user face data to the audience avatar.
The image merging unit is used for adjusting the display position of the anchor avatar in the live broadcast picture to merge in the audience avatar, and for displaying the audience avatar on the live broadcast picture in which the display position of the anchor avatar has been adjusted.
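The position adjustment performed by the image merging unit can be illustrated with a simple layout computation. The half-and-half split below is purely an illustrative choice; the description only specifies that the anchor avatar's display position is adjusted to make room for the audience avatar, not how the frame is divided.

```python
# Hypothetical layout adjustment when an audience avatar joins the frame.
# The side-by-side split is an illustrative assumption.

def merge_layout(frame_width, frame_height):
    """Return display rectangles (x, y, w, h) for the anchor and audience
    avatars after the audience avatar is merged into the picture."""
    half = frame_width // 2
    anchor_rect = (0, 0, half, frame_height)                      # anchor shifts to the left half
    audience_rect = (half, 0, frame_width - half, frame_height)   # audience fills the right half
    return anchor_rect, audience_rect
```

When the audience avatar is later cleared, the anchor rectangle would simply be restored to the full frame.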
In an exemplary embodiment, a live interactive device includes a processor and a memory.
Wherein the memory has stored thereon computer readable instructions which, when executed by the processor, implement the live interaction method in the embodiments as described above.
In an exemplary embodiment, a computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements a live interaction method in embodiments as described above.
The above-mentioned embodiments are merely preferred examples of the present invention, and are not intended to limit the embodiments of the present invention, and those skilled in the art can easily make various changes and modifications according to the main concept and spirit of the present invention, so that the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (14)

1. A live interaction method is characterized by comprising the following steps:
the method comprises the steps that an anchor client acquires an anchor virtual image corresponding to an anchor user, wherein the anchor virtual image is used for being displayed in a live broadcast picture of at least one live broadcast client accessed to a live broadcast room;
sending the anchor user face data of an anchor user when a live broadcast room is established to the at least one live broadcast client, so that the at least one live broadcast client maps the head action and/or facial expression of the anchor user indicated by the anchor user face data to the anchor avatar, and synchronously displaying a live broadcast picture comprising the anchor avatar with the anchor client;
when a live broadcast picture comprising the anchor avatar is played, performing live broadcast authorization for a specified user to allow the specified user to participate in live broadcast interaction;
and acquiring the audience virtual image corresponding to the specified user according to the live broadcast authorization of the specified user, wherein the audience virtual image is used for being displayed in a live broadcast picture of at least one live broadcast client accessed to the live broadcast room.
2. The method of claim 1, wherein the performing live authorization for a given user comprises:
receiving a live broadcast interaction request initiated by the appointed user requesting to participate in the live broadcast interaction;
and allowing the specified user to participate in the live interaction according to the live interaction request.
3. The method of claim 1, wherein the performing live authorization for a given user comprises:
determining, according to a triggered user selection operation, the specified user for whom live broadcast authorization is performed;
and initiating an invitation to the specified user to participate in the live broadcast interaction.
4. A method as claimed in claim 2 or 3, wherein said performing live authorization for a given user further comprises:
and identifying the start of the live broadcast interaction between the anchor user and the specified user according to the live broadcast authorization request.
5. The method of claim 1, wherein obtaining the viewer avatar corresponding to the specified user based on the specified user's live authorization comprises:
receiving the face data of the specified user in the live broadcast authorization;
mapping specified user head movements and/or facial expressions indicated by the specified user face data to the audience avatar;
displaying in the at least one live client a live view including the audience avatar and the anchor avatar, comprising:
adjusting the display position of the anchor avatar in the live broadcast picture to merge in the audience avatar, and displaying the audience avatar on the live broadcast picture in which the display position of the anchor avatar has been adjusted.
6. The method of claim 5, wherein prior to receiving the specified user face data of the specified user in the live authorization, the method further comprises:
when the specified user requests to participate in live broadcast interaction, the live broadcast client of the specified user displays an audience virtual image selection interface in the live broadcast picture, and performs face capture of the specified user;
associating the obtained face data of the specified user with the selected audience virtual image in the audience virtual image selection interface;
and transmitting the face data of the specified user with the associated audience virtual image.
7. The method of claim 1, wherein after displaying a live view comprising a viewer avatar and an anchor avatar in the at least one live client, the method further comprises:
receiving virtual image synchronization data;
and mapping the virtual image synchronization data to the audience virtual image and/or the anchor virtual image in the live broadcast picture, and keeping, through the mapping, the audience virtual image and the anchor virtual image synchronized with the live broadcast forms of the specified user and the anchor user respectively.
8. The method of claim 7, wherein prior to said receiving avatar synchronization data, said method further comprises:
tracking and identifying the face of the anchor user or the face of the specified user when virtual image synchronization is configured;
and requesting the sending of virtual image synchronization data according to the obtained anchor user face data or the obtained specified user face data.
9. The method of claim 1, wherein after displaying a live view comprising a viewer avatar and an anchor avatar in the at least one live client, the method further comprises:
when the specified user requests to quit the live broadcast interaction, the anchor client removes the audience virtual image displayed in the live broadcast picture, and closes the live broadcast interaction between the anchor user and the specified user.
10. A live interactive system, comprising:
the anchor client is used for acquiring an anchor virtual image corresponding to an anchor user;
the system comprises at least one live broadcast client end accessed to a live broadcast room and a live broadcast client end, wherein the live broadcast client end is used for receiving the face data of a main broadcast user when the main broadcast user establishes the live broadcast room; mapping the anchor user head movements and/or facial expressions indicated by the anchor user face data to the anchor avatar; displaying a live broadcast picture including a anchor avatar in synchronization with the anchor client;
the anchor client is also used for carrying out live broadcast authorization for a specified user when a live broadcast picture comprising an anchor virtual image is played so as to allow the specified user to participate in live broadcast interaction;
the anchor client is also used for acquiring the audience virtual image corresponding to the specified user according to the live broadcast authorization of the specified user;
the at least one live client is further configured to display a live view including the audience avatar and the anchor avatar.
11. The system of claim 10, wherein the anchor client comprises:
a request receiving unit, configured to receive a live broadcast interaction request initiated by the designated user requesting to participate in live broadcast interaction;
and the live broadcast authorization unit is used for allowing the specified user to participate in live broadcast interaction according to the live broadcast interaction request.
12. The system of claim 10, wherein the anchor client comprises:
the data receiving unit is used for receiving the face data of the specified user in the live broadcast authorization;
a data mapping unit for mapping a specified user's head movements and/or facial expressions indicated by the specified user's face data to the audience avatar;
and the image fusion unit is used for adjusting the fusion of the audience virtual image, the anchor virtual image is positioned at the display position in the live broadcast picture, and the audience virtual image is displayed on the live broadcast picture of which the display position of the anchor virtual image is adjusted.
13. A live interaction device, comprising:
a processor; and
a memory having computer readable instructions stored thereon that, when executed by the processor, implement a live interaction method as recited in any of claims 1-9.
14. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the live interaction method as claimed in any one of claims 1 to 9.
CN201711260540.3A 2017-12-04 2017-12-04 Live broadcast interaction method, device and system Active CN109874021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711260540.3A CN109874021B (en) 2017-12-04 2017-12-04 Live broadcast interaction method, device and system

Publications (2)

Publication Number Publication Date
CN109874021A CN109874021A (en) 2019-06-11
CN109874021B true CN109874021B (en) 2021-05-11

Family

ID=66915563

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711260540.3A Active CN109874021B (en) 2017-12-04 2017-12-04 Live broadcast interaction method, device and system

Country Status (1)

Country Link
CN (1) CN109874021B (en)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110446090A (en) * 2019-07-25 2019-11-12 天脉聚源(杭州)传媒科技有限公司 A kind of virtual auditorium spectators bus connection method, system, device and storage medium
CN110427110B (en) * 2019-08-01 2023-04-18 广州方硅信息技术有限公司 Live broadcast method and device and live broadcast server
CN110493628A (en) * 2019-08-29 2019-11-22 广州创幻数码科技有限公司 A kind of the main broadcaster's system and implementation method of the same virtual scene interaction of polygonal color
CN110602517B (en) * 2019-09-17 2021-05-11 腾讯科技(深圳)有限公司 Live broadcast method, device and system based on virtual environment
CN110719533A (en) * 2019-10-18 2020-01-21 广州虎牙科技有限公司 Live virtual image broadcasting method and device, server and storage medium
CN110856032B (en) * 2019-11-27 2022-10-04 广州虎牙科技有限公司 Live broadcast method, device, equipment and storage medium
CN111083509B (en) * 2019-12-16 2021-02-09 腾讯科技(深圳)有限公司 Interactive task execution method and device, storage medium and computer equipment
CN110971930B (en) * 2019-12-19 2023-03-10 广州酷狗计算机科技有限公司 Live virtual image broadcasting method, device, terminal and storage medium
CN111277845B (en) * 2020-01-15 2022-07-12 网易(杭州)网络有限公司 Game live broadcast control method and device, computer storage medium and electronic equipment
CN111312240A (en) * 2020-02-10 2020-06-19 北京达佳互联信息技术有限公司 Data control method and device, electronic equipment and storage medium
CN111263178A (en) * 2020-02-20 2020-06-09 广州虎牙科技有限公司 Live broadcast method, device, user side and storage medium
CN113301412B (en) * 2020-04-26 2023-04-18 阿里巴巴集团控股有限公司 Information display method, information processing method, device and system
CN112153400B (en) * 2020-09-22 2022-12-06 北京达佳互联信息技术有限公司 Live broadcast interaction method and device, electronic equipment and storage medium
CN114449297B (en) * 2020-11-04 2024-08-30 阿里巴巴集团控股有限公司 Multimedia information processing method, computing device and storage medium
CN112672175A (en) * 2020-12-11 2021-04-16 北京字跳网络技术有限公司 Live broadcast interaction method and device, electronic equipment and storage medium
CN112995692B (en) * 2021-03-04 2023-05-02 广州虎牙科技有限公司 Interactive data processing method, device, equipment and medium
CN113095206A (en) * 2021-04-07 2021-07-09 广州华多网络科技有限公司 Virtual anchor generation method and device and terminal equipment
CN113099298B (en) * 2021-04-08 2022-07-12 广州华多网络科技有限公司 Method and device for changing virtual image and terminal equipment
CN115239916A (en) * 2021-04-22 2022-10-25 北京字节跳动网络技术有限公司 Interaction method, device and equipment of virtual image
CN113259451B (en) * 2021-05-31 2021-09-21 长沙鹏阳信息技术有限公司 Cluster processing architecture and method for intelligent analysis of large-scale monitoring nodes
CN113645472B (en) * 2021-07-05 2023-04-28 北京达佳互联信息技术有限公司 Interaction method and device based on play object, electronic equipment and storage medium
CN113613048A (en) * 2021-07-30 2021-11-05 武汉微派网络科技有限公司 Virtual image expression driving method and system
CN113660503B (en) * 2021-08-17 2024-04-26 广州博冠信息科技有限公司 Same-screen interaction control method and device, electronic equipment and storage medium
CN113852839B (en) * 2021-09-26 2024-01-26 游艺星际(北京)科技有限公司 Virtual resource allocation method and device and electronic equipment
WO2023178640A1 (en) * 2022-03-25 2023-09-28 云智联网络科技(北京)有限公司 Method and system for realizing live-streaming interaction between virtual characters
CN114885199B (en) * 2022-04-18 2024-02-23 北京达佳互联信息技术有限公司 Real-time interaction method, device, electronic equipment, storage medium and system
WO2023236045A1 (en) * 2022-06-07 2023-12-14 云智联网络科技(北京)有限公司 System and method for realizing mixed video chat between virtual character and real person
CN115379269A (en) * 2022-08-17 2022-11-22 咪咕文化科技有限公司 Live broadcast interaction method of virtual image, computing equipment and storage medium
CN115494962A (en) * 2022-11-18 2022-12-20 清华大学深圳国际研究生院 Virtual human real-time interaction system and method
CN118264822A (en) * 2022-12-28 2024-06-28 华为技术有限公司 Live broadcast picture synthesis method and equipment
CN116437137B (en) * 2023-06-09 2024-01-09 北京达佳互联信息技术有限公司 Live broadcast processing method and device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014036642A1 (en) * 2012-09-06 2014-03-13 Decision-Plus M.C. Inc. System and method for broadcasting interactive content
CN106162369A (en) * 2016-06-29 2016-11-23 腾讯科技(深圳)有限公司 A kind of realize in virtual scene interactive method, Apparatus and system
CN106303555A (en) * 2016-08-05 2017-01-04 深圳市豆娱科技有限公司 A kind of live broadcasting method based on mixed reality, device and system
CN106454537A (en) * 2016-10-14 2017-02-22 广州华多网络科技有限公司 Live video streaming method and relevant equipment
CN106789991A (en) * 2016-12-09 2017-05-31 福建星网视易信息系统有限公司 A kind of multi-person interactive method and system based on virtual scene
CN106937154A (en) * 2017-03-17 2017-07-07 北京蜜枝科技有限公司 Process the method and device of virtual image
CN106954100A (en) * 2017-03-13 2017-07-14 网宿科技股份有限公司 Live broadcasting method and system, and mic-connect (lianmai) management server
CN107248195A (en) * 2017-05-31 2017-10-13 珠海金山网络游戏科技有限公司 A kind of main broadcaster methods, devices and systems of augmented reality

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102402691A (en) * 2010-09-08 2012-04-04 中国科学院自动化研究所 Method for tracking gestures and actions of human face
US9268406B2 (en) * 2011-09-30 2016-02-23 Microsoft Technology Licensing, Llc Virtual spectator experience with a personal audio/visual apparatus

Also Published As

Publication number Publication date
CN109874021A (en) 2019-06-11

Similar Documents

Publication Publication Date Title
CN109874021B (en) Live broadcast interaction method, device and system
TWI650675B (en) Method and system for group video session, terminal, virtual reality device and network device
US11563779B2 (en) Multiuser asymmetric immersive teleconferencing
US10699482B2 (en) Real-time immersive mediated reality experiences
JP7498209B2 (en) Information processing device, information processing method, and computer program
US10602121B2 (en) Method, system and apparatus for capture-based immersive telepresence in virtual environment
WO2020090786A1 (en) Avatar display system in virtual space, avatar display method in virtual space, and computer program
US20200053318A1 (en) Communication processing method, terminal, and storage medium
US11048464B2 (en) Synchronization and streaming of workspace contents with audio for collaborative virtual, augmented, and mixed reality (xR) applications
US11924397B2 (en) Generation and distribution of immersive media content from streams captured via distributed mobile devices
CN111343476A (en) Video sharing method and device, electronic equipment and storage medium
US20230035243A1 (en) Interaction method, apparatus, device, and storage medium based on live streaming application
WO2022048651A1 (en) Cooperative photographing method and apparatus, electronic device, and computer-readable storage medium
CN109407821A (en) With the collaborative interactive of virtual reality video
US20230405475A1 (en) Shooting method, apparatus, device and medium based on virtual reality space
US20240015264A1 (en) System for broadcasting volumetric videoconferences in 3d animated virtual environment with audio information, and procedure for operating said device
WO2015139562A1 (en) Method for implementing video conference, synthesis device, and system
CN113518198B (en) Session interface display method, conference interface display method, device and electronic equipment
JP2000090288A (en) Face image control method for three-dimensional shared virtual space communication service, equipment for three-dimensional shared virtual space communication and program recording medium therefor
WO2024051540A1 (en) Special effect processing method and apparatus, electronic device, and storage medium
CN108320331B (en) Method and equipment for generating augmented reality video information of user scene
CN114827686A (en) Recording data processing method and device and electronic equipment
KR20200080041A (en) Method and apparatus for generating multi channel images using mobile terminal
US20240022688A1 (en) Multiuser teleconferencing with spotlight feature
JP2023092729A (en) Communication device, communication system, display method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant