WO2023142756A1 - Live broadcast interaction method, device and system - Google Patents

Live broadcast interaction method, device and system

Info

Publication number
WO2023142756A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
anchor
live
live broadcast
terminal device
Prior art date
Application number
PCT/CN2022/139298
Other languages
English (en)
Chinese (zh)
Inventor
陈曦 (Chen Xi)
Original Assignee
华为云计算技术有限公司 (Huawei Cloud Computing Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为云计算技术有限公司 (Huawei Cloud Computing Technologies Co., Ltd.)
Publication of WO2023142756A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; operations thereof
    • H04N 21/218: Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187: Live feed
    • H04N 21/23: Processing of content or additional data; elementary server operations; server middleware
    • H04N 21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/23424: Splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N 21/239: Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N 21/2393: Handling client requests
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; elementary client operations; client middleware
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N 21/437: Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream
    • H04N 21/44016: Splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N 21/47: End-user applications
    • H04N 21/4788: Supplemental services communicating with other users, e.g. chatting
    • H04N 21/60: Network structure or processes for video distribution between server and client; control signalling between clients, server and network components; transmission of management data between server and client
    • H04N 21/65: Transmission of management data between client and server
    • H04N 21/658: Transmission by the client directed to the server
    • H04N 21/6587: Control parameters, e.g. trick play commands, viewpoint selection

Definitions

  • the present application relates to the field of live broadcast technology, and in particular to a method, device and system for live broadcast interaction.
  • In existing live broadcast scenarios, the interaction between the anchor and the audience is relatively simple: the anchor can only introduce real-life items to the audience, and the audience can only interact with the anchor by sending bullet chats, likes, comments, or virtual gifts (such as virtual rockets or virtual flowers), so the interactive experience between the anchor and the audience is not ideal.
  • Therefore, the present application provides a live broadcast interaction method, device and system, which can optimize the live broadcast interaction effect, provide a variety of interaction methods for the anchor and the audience, and improve the interactive experience between them.
  • According to a first aspect, a live broadcast interaction method is provided, which is applied to a cloud computing platform. The method includes: first, fusing the image of the anchor with a virtual object to obtain a first live broadcast picture; then, sending the first live broadcast picture to the terminal device of the audience and the terminal device of the anchor; after that, receiving a first operation instruction on the anchor's image or the virtual object in the first live broadcast picture; and finally, obtaining a second live broadcast picture according to the first operation instruction and the first live broadcast picture, and sending the second live broadcast picture to the terminal device of the audience and the terminal device of the anchor.
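  • The following is a minimal Python sketch of the cloud-side flow just described. The scene representation, render_frame, apply_instruction and the terminal objects are stand-ins invented for the sketch, not names from this application:

```python
def render_frame(scene):
    # Stand-in for the rendering step (rasterization and/or ray tracing):
    # here it simply snapshots the scene state as the "picture".
    return {"objects": {name: dict(state) for name, state in scene.items()}}

def apply_instruction(scene, instruction):
    # e.g. instruction = {"target": "chess_piece_3", "params": {"position": (4, 2)}}
    scene[instruction["target"]].update(instruction["params"])
    return scene

def live_interaction_loop(scene, terminals, instructions):
    frame = render_frame(scene)               # first live broadcast picture
    for t in terminals:                       # audience and anchor terminals
        t.send(frame)
    for instruction in instructions:          # sent by viewer or anchor terminals
        scene = apply_instruction(scene, instruction)
        frame = render_frame(scene)           # second (third, ...) live picture
        for t in terminals:
            t.send(frame)
```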
  • The anchor is a virtual person or a real person; that is to say, the image of the anchor can be a virtual character image or a real-person image of the anchor. When the anchor is a virtual person, the image of the anchor can be a virtual 3D character model. Virtual objects can be any virtual 3D models, such as virtual buildings, virtual stages, virtual animals, virtual plants, virtual tables, virtual chessboards, or virtual golf balls, and can be used to implement interactive games between the anchor and the audience, such as chess and card games or ball games; no specific limitation is made here.
  • The first operation instruction is an instruction obtained by processing a first operation, where the first operation is the viewer's operation on the anchor's image or the virtual object in the first live picture, or the anchor's operation on the anchor's image or the virtual object in the first live picture. It should be understood that when the first operation is performed by the viewer, the first operation instruction is obtained by the viewer's terminal device processing the first operation, and after obtaining it, the viewer's terminal device sends the first operation instruction to the cloud computing platform; when the first operation is performed by the anchor, the first operation instruction is obtained by the anchor's terminal device processing the first operation, and the anchor's terminal device sends the first operation instruction to the cloud computing platform.
  • The first operation instruction may be an instruction for changing the image of the anchor in the first live broadcast picture; or an instruction for changing one or more of the coordinate value, moving speed, acceleration, offset angle, moving direction, and color of a point on the virtual object in the first live broadcast picture; or an instruction for adding the image of a real object to the first live broadcast picture; no specific limitation is made here.
  • When the first operation instruction is used to change the image of the anchor, it can specifically be used to add decorations (such as hairpins or glasses) to the anchor's image, change the anchor's hairstyle (such as changing straight hair to curly hair), or change the anchor's clothing (such as changing a dress into a suit). The above-mentioned decorations, hairstyles or clothing can be virtual objects originally included in the first live broadcast picture.
  • The first operation instruction can also be used to change the coordinate value, moving speed, acceleration, offset angle, moving direction, or color of points on the anchor's image. For example, if the anchor's image is located in the middle of the picture, the first operation instruction may be an instruction for moving the anchor's image to the lower left corner of the first live picture; or the first operation instruction may be an instruction for making the anchor's image in the first live picture sit down.
  • When the first operation instruction is used to change a point on the virtual object, it can specifically be used to change the position of the virtual object in the first live picture, change the moving speed of the virtual object in the first live picture, or change the color the virtual object displays in the first live picture. For example, when the virtual object includes chess pieces, the first operation instruction may be an instruction for changing the position of a chess piece; when the virtual object is a virtual golf ball, the first operation instruction may be an instruction for making the golf ball fly at a certain angle and speed. When the first operation instruction is used to add an image of a real object to the first live picture, it may specifically be used to add an image of a real hat, a real plant, a real animal, and so on; no specific limitation is made here.
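  • The three instruction families above can be pictured as one small data structure. The following Python sketch is purely illustrative; the field names and the kind values are assumptions, not taken from this application:

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class OperationInstruction:
    # Which of the three instruction families this instruction belongs to.
    kind: str       # "change_anchor_image" | "change_virtual_object" | "add_real_object"
    target: str     # e.g. "anchor", "chess_piece_3", "golf_ball"
    # Optional changes to a point on the target:
    position: Optional[Tuple[float, float]] = None    # coordinate value
    speed: Optional[float] = None                     # moving speed
    acceleration: Optional[float] = None
    offset_angle: Optional[float] = None
    direction: Optional[Tuple[float, float]] = None   # moving direction
    color: Optional[Tuple[int, int, int]] = None      # RGB color
    payload: dict = field(default_factory=dict)       # e.g. scan data of a real object

# e.g. an instruction that moves the anchor's image to the lower left corner:
move_anchor = OperationInstruction(kind="change_anchor_image",
                                   target="anchor", position=(0.1, 0.9))
```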
  • The live broadcast interaction method provided by this application allows the audience or the anchor to perform various interactive operations on the anchor's image or the virtual objects in the live picture, and updates the live picture based on those operations. That is to say, with this method, the live pictures seen by the audience and the anchor change with the interactive operations they perform, which enriches the ways the anchor and the audience can interact and optimizes their interactive experience.
  • In addition, in this method the image of the anchor and the virtual object are fused together to obtain the live picture. Compared with the prior-art live picture obtained by simply superimposing two two-dimensional images, the three-dimensional display effect of the live picture is better, and its content is more natural and coordinated.
  • In a possible implementation, the live broadcast interaction method provided in the first aspect further includes the following steps: first, receiving a second operation instruction on the anchor's image or the virtual object in the second live picture; then, obtaining a third live picture according to the second operation instruction and the second live picture; and finally, sending the third live picture to the terminal devices of the audience and the anchor.
  • The second operation instruction is an instruction obtained by processing a second operation. When the first operation is an operation performed by the audience, the second operation is the anchor's operation on the anchor's image or the virtual object in the second live picture; in this case the second operation instruction is obtained by the anchor's terminal device processing the second operation, and after obtaining it, the anchor's terminal device sends the second operation instruction to the cloud computing platform. When the first operation is an operation performed by the anchor, the second operation is the viewer's operation on the anchor's image or the virtual object in the second live picture; in this case the second operation instruction is obtained by the viewer's terminal device processing the second operation, and the viewer's terminal device then sends it to the cloud computing platform.
  • The second operation instruction may be an instruction for changing the image of the anchor in the second live picture; or an instruction for changing one or more of the coordinate value, moving speed, acceleration, offset angle, moving direction, and color of a point on a virtual object in the second live picture; or an instruction for adding the image of a real object to the second live picture; no specific limitation is made here.
  • The above implementation allows the anchor and the audience, after seeing the live picture updated based on the other party's operation, to operate again on the anchor's image or the virtual objects in the updated picture, so that the live picture is updated once more. In other words, this implementation enables the anchor and the audience to interact by operating on the anchor's image or the virtual objects in the live picture multiple times.
  • In a possible implementation, the first live picture can be obtained by fusing the image of the anchor with the virtual object in the following manner: the three-dimensional data of the anchor and the three-dimensional data of the virtual object are fused, and the fused three-dimensional data is then processed to obtain the two-dimensional first live picture. Here, the 3D data of the anchor refers to the data constituting the 3D model of the anchor, and the 3D data of the virtual object refers to the data constituting the 3D model of the virtual object.
  • Virtual objects can be divided into virtual scenes and virtual props. The virtual scene can serve as the scene where the anchor's image is located during the live broadcast, for example a virtual game scene, a virtual life scene or a virtual work scene; virtual props can serve as props for the anchor to interact with the audience during the live broadcast, such as a virtual chessboard, virtual chess pieces, virtual golf balls, virtual footballs, or virtual decorations. The fusion of the anchor's three-dimensional data and the virtual object's three-dimensional data can therefore be understood as placing the 3D models corresponding to the anchor's image and the virtual props at suitable positions in the virtual scene.
  • The fused 3D data can then be rendered to obtain the two-dimensional first live picture. The rendering method can be a rasterization rendering method, a ray tracing rendering method, or a combination of the two; no specific limitation is made here.
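  • As a toy illustration of this fusion step, the following Python/NumPy sketch expresses the anchor's 3D data and a prop's 3D data in the virtual scene's coordinate system; the vertex arrays and poses are invented for the example:

```python
import numpy as np

def place(vertices, position, scale=1.0):
    # Uniform scale plus translation: express a model in scene coordinates.
    return vertices * scale + np.asarray(position, dtype=float)

# Toy Nx3 vertex arrays standing in for the anchor's and a prop's 3D data.
anchor_mesh = np.array([[0.0, 0.0, 0.0], [0.0, 1.7, 0.0]])
prop_mesh   = np.array([[-0.5, 0.0, -0.5], [0.5, 0.0, 0.5]])

# "Fusion": both 3D models placed at suitable positions in the virtual scene.
fused_vertices = np.concatenate([
    place(anchor_mesh, position=(0.0, 0.0, -1.0)),  # anchor behind the prop
    place(prop_mesh,   position=(0.0, 0.8,  0.0)),  # prop in front, raised
])
# fused_vertices would then be rendered (rasterization and/or ray tracing)
# into the two-dimensional first live broadcast picture.
```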
  • In this way, the image of the anchor and the virtual object are fused together. Compared with the prior-art live picture obtained by simply superimposing two two-dimensional images, the three-dimensional display effect of the anchor's image and the virtual object is better, and the live picture is more natural and harmonious.
  • In a possible implementation, the image of the anchor and the virtual object can be fused in the following manner to obtain the first live picture: first, a two-dimensional image of the anchor is obtained, and then the two-dimensional image of the anchor and the two-dimensional image of the virtual object are fused to obtain the two-dimensional first live picture.
  • The two-dimensional image of the anchor can come from the live video uploaded by the real anchor in real time, which includes multiple frames of continuous images; person recognition is performed on each frame, and after the real anchor is identified, the real anchor is extracted from each frame.
  • The two-dimensional image of the virtual object can be obtained by rendering the three-dimensional data of the virtual object. The rendering method can be a rasterization rendering method, a ray tracing rendering method, or a mixture of the two; no specific limitation is made here.
  • The two-dimensional image of the anchor and the two-dimensional image of the virtual object can then be fused through augmented reality technology to obtain the first live picture.
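  • A minimal sketch of that virtual-real fusion, assuming the person-recognition step already produced a segmentation mask (the mask and image shapes are assumptions for the example):

```python
import numpy as np

def compose(anchor_rgb, anchor_mask, virtual_rgb):
    """Overlay the extracted anchor onto the rendered virtual-object image.

    anchor_rgb  : HxWx3 frame from the anchor's live video
    anchor_mask : HxW values in [0, 1]; 1 where the anchor was detected
    virtual_rgb : HxWx3 two-dimensional image rendered from the virtual object
    """
    m = anchor_mask[..., None]   # broadcast the mask over the color channels
    return (m * anchor_rgb + (1.0 - m) * virtual_rgb).astype(anchor_rgb.dtype)
```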
  • In this way, too, the image of the anchor and the virtual object are fused together. Compared with the prior-art live picture obtained by simply superimposing two two-dimensional images, the three-dimensional display effect of the anchor's image and the virtual object is better, and the picture is more natural and coordinated.
  • According to a second aspect, a live broadcast interaction method is provided, which is applied to a terminal device of a viewer or a terminal device of an anchor. The method includes: first, displaying a first live picture, the first live picture including the image of the anchor and a virtual object; and then displaying a second live picture, the second live picture being obtained according to the first live picture and a first operation instruction on the anchor's image or the virtual object in the first live picture.
  • In a possible implementation, the live interaction method provided by the second aspect further includes: displaying a third live picture, the third live picture being obtained according to the second live picture and a second operation instruction on the anchor's image or the virtual object in the second live picture.
  • The first operation instruction is an instruction obtained by processing a first operation, and the second operation instruction is an instruction obtained by processing a second operation. Either the first operation is the viewer's operation on the anchor's image or the virtual object in the first live picture and the second operation is the anchor's operation on the anchor's image or the virtual object in the second live picture; or, the first operation is the anchor's operation on the anchor's image or the virtual object in the first live picture and the second operation is the viewer's operation on the anchor's image or the virtual object in the second live picture.
  • the anchor is a real person or a virtual person.
  • the first live broadcast image is a two-dimensional image obtained by fusing the three-dimensional data of the anchor and the three-dimensional data of the virtual object, and processing the fused three-dimensional data.
  • the first live broadcast image is obtained by fusing the two-dimensional image of the anchor and the two-dimensional image of the virtual object.
  • The first operation instruction is used to change the image of the anchor; or, the first operation instruction is used to change one or more of the coordinate value, moving speed, acceleration, offset angle, moving direction, and color of a point on the virtual object; or, the first operation instruction is used to add an image of a real object to the first live picture.
  • The first operation instruction is used to add decorations to, change the hairstyle of, or change the clothing of the anchor's image.
  • the virtual object is used to implement an interactive game between the audience and the host, such as a board game or a ball game.
  • According to a third aspect, a live broadcast interaction method is provided, which is applied to a live broadcast system. The live broadcast system includes a cloud computing platform, a terminal device of an audience, and a terminal device of an anchor. The method includes: the cloud computing platform fuses the image of the anchor with a virtual object to obtain a first live picture and sends the first live picture to the terminal devices of the audience and the anchor for display; then, the terminal device of the audience or of the anchor sends to the cloud computing platform a first operation instruction on the anchor's image or the virtual object in the first live picture; after receiving the first operation instruction, the cloud computing platform obtains a second live picture according to the first operation instruction and the first live picture; and finally, the cloud computing platform sends the second live picture to the terminal devices of the audience and the anchor for display.
  • According to a fourth aspect, a live broadcast interaction device is provided, which is applied to a cloud computing platform. The device includes modules for executing the method provided in the first aspect or any possible implementation thereof.
  • According to a fifth aspect, a live broadcast interaction device is provided, which is applied to a terminal device of an anchor or of a viewer. The device includes modules for executing the method provided in the second aspect or any possible implementation thereof.
  • According to a sixth aspect, a live broadcast system is provided, which includes the live broadcast interaction device of the fourth aspect and the live broadcast interaction device of the fifth aspect.
  • According to a seventh aspect, a cloud computing platform is provided, which includes one or more computing devices, each computing device including a processor and a memory. The processors of the one or more computing devices execute the instructions stored in their memories, causing the one or more computing devices to perform the following steps: first, fuse the image of the anchor with the virtual object to obtain a first live picture; then, send the first live picture to the terminal devices of the audience and the anchor; after that, receive a first operation instruction on the anchor's image or the virtual object in the first live picture; and finally, obtain a second live picture according to the first operation instruction and the first live picture, and send the second live picture to the terminal devices of the audience and the anchor.
  • According to an eighth aspect, a terminal device is provided, which includes a processor and a memory. The processor executes instructions stored in the memory, causing the terminal device to perform the following steps: first, display a first live picture that includes the image of the anchor and a virtual object; then, display a second live picture, which is obtained according to the first live picture and a first operation instruction on the anchor's image or the virtual object in the first live picture.
  • According to a ninth aspect, a computer-readable storage medium is provided, in which instructions are stored; the instructions are used to implement the method provided in any possible implementation of the first to third aspects.
  • According to a tenth aspect, a computer program product including a computer program is provided. When the computer program is read and executed by a computing device, the computing device executes the method provided in any possible implementation of the first to third aspects.
  • Fig. 1 is a schematic structural diagram of a live broadcast system involved in the present application;
  • Fig. 2 is a schematic diagram of a live broadcast picture involved in the present application;
  • Fig. 3 is an interaction diagram of a live broadcast interaction method provided by the present application;
  • Fig. 4 is a schematic diagram of the fusion of the anchor's 3D data and a virtual object's 3D data provided by the present application;
  • Fig. 5 is a schematic flowchart of a rasterization rendering method provided by the present application;
  • Fig. 6 is a schematic diagram of the transformation process of a vertex shader provided by the present application;
  • Fig. 7 is a schematic diagram of the tessellation technique provided by the present application;
  • Fig. 8A is a schematic diagram of a live broadcast picture provided by the present application;
  • Fig. 8B is a schematic diagram of another live broadcast picture provided by the present application;
  • Fig. 9 is a schematic structural diagram of a live broadcast interaction device provided by the present application;
  • Fig. 10 is a schematic structural diagram of another live broadcast interaction device provided by the present application;
  • Fig. 11 is a schematic structural diagram of a terminal device provided by the present application;
  • Fig. 12 is a schematic structural diagram of a cloud computing platform provided by the present application;
  • Fig. 13 is a schematic structural diagram of a computing device provided by the present application.
  • Virtual object: a virtual object refers to an object that does not exist in the real world. In this application, a virtual object is a virtual three-dimensional model created in advance in the virtual world that can be used to reflect objects of the real world in the virtual world.
  • Virtual scene: a virtual scene can be a virtual reality scene simulated by a computer using 3D virtual reality scene technology, a semi-simulated and semi-fictional 3D environment scene, or a purely fictional 3D environment scene. 3D virtual reality scene technology is a computer simulation technology that can create and experience a virtual world: a computer generates a 3D simulation of a real scene, producing an interactive 3D dynamic scene, with simulated entity behavior, that integrates multi-source information. Virtual scenes can reproduce any actual scene that exists in real life, including any scene that can be perceived through senses such as vision and hearing, simulated by computer technology.
  • Augmented reality (AR): a technology that ingeniously integrates virtual information with the real world. It makes wide use of technical means such as multimedia, 3D modeling, real-time tracking and registration, intelligent interaction, and sensing: computer-generated virtual information such as text, images, 3D models, music and video is simulated and then applied to the real world, where the two kinds of information complement each other, thereby "augmenting" the real world with richer information. Through augmented reality technology, the image of a real scene and the image of a virtual object can be fused, and the two-dimensional image obtained after fusion contains both the content of the real scene and the virtual object.
  • Rendering refers to the process of using software to generate an image from a model, where a model is a description of a three-dimensional object in a strictly defined language or data structure, including geometry, viewpoint, texture, and lighting information.
  • the image is a digital image or a bitmap image.
  • "Rendering" is a term used in analogy to "an artist's rendering of a scene", and is also used to describe the process of computing the effects in a video editing file to produce the final video output.
  • Fig. 1 is a schematic structural diagram of a live broadcast system involved in this application. The anchor can broadcast live through the live broadcast system 100, the audience can watch the content of the anchor's live broadcast through the live broadcast system 100, and the anchor can interact with the audience through the live broadcast system 100.
  • The live broadcast system 100 includes a terminal device 110, a network device 120 and a cloud computing platform 130.
  • The terminal device 110 may also be called a mobile terminal or a user terminal, and may be an electronic device installed with a live broadcast application, such as a personal computer, a smartphone, a tablet computer, a notebook computer, a handheld computer, a mobile Internet device (MID), or a wearable device (such as a smart watch, smart bracelet or pedometer); no specific limitation is made here. Through the live broadcast application, the terminal device 110 can support the anchor in broadcasting live and support viewers in watching the live broadcast.
  • In addition to the live broadcast application, client applications such as a shopping application, a search application, and an audio playback application may also be installed on the terminal device 110.
  • In Fig. 1, the terminal device 110A represents the anchor's terminal device and the terminal device 110B represents the audience's terminal device. It should be noted that the anchor using terminal device 110A can also act as an audience member and watch other anchors' live broadcasts through terminal device 110A, and the audience member using terminal device 110B can also act as an anchor and broadcast live through terminal device 110B.
  • The network device 120 is used to transmit data between the terminal device 110 and the cloud computing platform 130 through a communication network of any communication mechanism or communication standard. The communication network may be a wide area network, a local area network, a point-to-point connection, or any combination thereof.
  • The cloud computing platform 130 can be an independent server, a server cluster or distributed system composed of multiple servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks, and big data and artificial intelligence platforms. Fig. 1 takes the cloud computing platform 130 being a cloud server as an example.
  • The cloud computing platform 130 may include multiple cloud computing nodes, and each cloud computing node includes, from bottom to top, hardware, virtualization services, and a live application server. Among them, the hardware includes computing resources, storage resources, and network resources.
  • Computing resources can adopt a heterogeneous computing architecture, for example a central processing unit (CPU) + graphics processing unit (GPU) architecture, a CPU + AI chip architecture, or a CPU + GPU + AI chip architecture; no specific limitation is made here. Storage resources may include memory, among others.
  • In specific implementations, computing resources may be divided into multiple computing unit resources, storage resources into multiple storage unit resources, and network resources into multiple network unit resources. The platform can therefore freely combine unit resources according to the user's resource requirements, so as to provide resources on demand. For example, if computing resources are divided into computing unit resources of 5u each and storage resources into storage unit resources of 10G each, the combinations of computing and storage resources can be: 5u+10G, 5u+20G, 5u+30G, ..., 10u+10G, 10u+20G, 10u+30G, ....
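  • The unit-resource combinations in this example can be enumerated mechanically; the following toy Python snippet assumes the 5u/10G granularities quoted above and is only illustrative:

```python
from itertools import product

# Multiples of the unit resources from the example above.
compute_units = [5 * n for n in range(1, 4)]    # 5u, 10u, 15u
storage_units = [10 * n for n in range(1, 4)]   # 10G, 20G, 30G

for cu, su in product(compute_units, storage_units):
    print(f"{cu}u+{su}G")    # e.g. 5u+10G, 5u+20G, ..., 15u+30G
```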
  • A virtualization service is a service that builds the resources of multiple physical hosts into a unified resource pool through virtualization technology and flexibly isolates mutually independent resources according to user needs to run user applications. The virtualization service may include a virtual machine (VM) service, a bare metal server (BMS) service, and a container service.
  • The VM service may be a service that virtualizes VM resource pools on multiple physical hosts through virtualization technology to provide VMs for users on demand. The BMS service is a service that virtualizes BMS resource pools on multiple physical hosts to provide users with BMSs on demand. The container service is a service that virtualizes container resource pools on multiple physical hosts to provide users with containers on demand.
  • A VM is a simulated virtual computer, that is, a logical computer. A BMS is an elastically scalable high-performance computing service. A container is a kernel virtualization technology that provides lightweight virtualization to isolate user space, processes and resources. It should be understood that the VM, BMS and container services above are only specific examples; in actual applications, the virtualization service can also be another lightweight or heavyweight virtualization service, which is not specifically limited here.
  • The live application server can be used to call the hardware to implement live broadcast services, such as providing live video recording, ingestion and transcoding services for the anchor, and providing live video distribution services for viewers. For example, the live application server can receive, through the network device 120, the live video sent by the live application client on the anchor's terminal device 110A, and then transcode and store the live video; when the live application server receives, through the network device 120, a viewing request sent by the audience's terminal device 110B, it searches for the corresponding live video according to the request and sends the found live video through the network device 120 to the audience's terminal device 110B, where the live application client displays it to the audience.
  • The live application client acts as the intermediary between the user (the anchor or a viewer) and the live application server; the live application client and the live application server are collectively referred to as a rendering application.
  • The live application server and the live application client may be provided by the live application provider. For example, the live application developer installs the live application server on the cloud computing platform 130 provided by a cloud service provider, and provides the live application client for users to download through the Internet and install on their terminal devices 110.
  • Alternatively, the live application server and the live application client may be provided by a cloud service provider. In this case, the cloud service provider can offer the live broadcast service provided by the cloud computing platform 130 as a cloud service: anchors can use this cloud service for live broadcasting, viewers who have registered accounts on the live platform can use it to watch live broadcasts, and the anchor and the viewers can use it to interact.
  • It should be understood that the live broadcast system 100 shown in Fig. 1 is only a specific example. In practical applications, the live broadcast system 100 may include any number of terminal devices 110, network devices 120 and cloud computing platforms 130, which is not specifically limited here.
  • In existing live broadcast scenarios, the interaction between the anchor and the audience is relatively simple: the anchor can only introduce real-life items to the audience, and the audience can only interact with the anchor by sending bullet chats, likes, comments or virtual gifts, where a virtual gift is merely a two-dimensional image material.
  • As shown in Fig. 2, audience A sends a bullet chat "Hello!" to interact with the anchor Lisa, and audience B sends a bullet chat "Your live broadcast is very good!" to interact with the anchor Lisa.
  • In order to solve the above problem, this application provides a live broadcast interaction method, device and system, which can optimize the live broadcast interaction effect and provide a variety of interaction methods for the anchor and the audience, thereby optimizing their interactive experience.
  • Fig. 3 is an interaction diagram of a live broadcast interaction method provided by the present application. The method can be applied to the live broadcast system 100 shown in Fig. 1. As shown in Fig. 3, the live broadcast system 100 executes the live broadcast interaction method provided by this application, which may include the following steps:
  • S301 The cloud computing platform 130 fuses the image of the anchor with the virtual object to obtain a first live broadcast image.
  • The anchor is a virtual person or a real person; that is, the image of the anchor can be a virtual character image or a real-person image of the anchor. When the anchor is a virtual person, the image of the anchor can be a virtual 3D character model. Virtual objects can be any virtual 3D models, such as virtual buildings, virtual stages, virtual animals, virtual plants, virtual tables, virtual chessboards or virtual golf balls, and can be used to implement interactive games between the anchor and the audience, such as chess and card games or ball games; no specific limitation is made here.
  • More specifically, virtual objects can be divided into virtual scenes and virtual props: the virtual scene can serve as the scene where the anchor's image is located during the live broadcast, for example a virtual game scene, a virtual life scene or a virtual work scene; virtual props can serve as props for the anchor to interact with the audience during the live broadcast, such as a virtual chessboard, virtual chess pieces, virtual golf balls, virtual footballs or virtual decorations.
  • In specific implementations, the cloud computing platform 130 can fuse the anchor's image with the virtual object through either of the following methods to obtain the first live broadcast picture:
  • In the first method, the cloud computing platform 130 fuses the three-dimensional data of the anchor and the three-dimensional data of the virtual object to obtain fused three-dimensional data, and then processes the fused three-dimensional data to obtain the two-dimensional first live picture.
  • Among them, the 3D data of the anchor refers to the data constituting the 3D model of the anchor, and the 3D data of the virtual object refers to the data constituting the 3D model of the virtual object. As mentioned above, virtual objects include virtual scenes and virtual props, so the fusion of the anchor's 3D data and the virtual object's 3D data can be understood as placing the 3D model corresponding to the anchor's image and the 3D models corresponding to the virtual props at suitable positions in the virtual scene.
  • For example, if the virtual scene is a virtual room where chess can be played and the virtual props include a virtual chessboard and virtual chess pieces, then the image of the anchor, the virtual chessboard and the virtual chess pieces can be placed in the middle of the virtual room, with the anchor's image behind the virtual chessboard and the virtual chess pieces on the virtual chessboard, as shown in Fig. 4.
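  • The placement in Fig. 4 can be pictured as a small declarative scene description; the model names and coordinates below are invented for illustration:

```python
# Declarative sketch of the Fig. 4 layout: 3D models placed at suitable
# positions in the virtual scene. Names and coordinates are illustrative.
chess_room_scene = {
    "scene":  {"model": "virtual_room", "origin": (0.0, 0.0, 0.0)},
    "anchor": {"model": "anchor_3d_model", "position": (0.0, 0.0, -0.6)},  # behind the board
    "props": [
        {"model": "virtual_chessboard",  "position": (0.0, 0.75, 0.0)},   # middle of the room
        {"model": "virtual_chess_piece", "position": (0.05, 0.78, 0.05)}, # on the board
    ],
}
```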
  • Afterwards, the cloud computing platform 130 can render the fused three-dimensional data to obtain the two-dimensional first live picture. The rendering method can be a rasterization rendering method, a ray tracing rendering method, or a mixture of the two; no specific limitation is made here.
  • Fig. 5 is a schematic diagram of the principle of a rasterization rendering method provided in this application. As shown in Fig. 5, the rasterization rendering method generally includes an application stage, a geometry stage, and a rasterization stage.
  • Application stage: this stage has three main tasks: (1) preparing the scene data, such as the virtual scene, the information of the 3D models in it, and the lighting information; (2) performing coarse-grained culling to improve rendering performance, removing objects that are invisible in the scene so that they need not be handed to the geometry stage; (3) setting the rendering state of each 3D model, such as its material and texture. The output of the application stage is the geometric information required for rendering, that is, the rendering primitives; each rendering primitive contains all the vertex data of the corresponding primitive. Rendering primitives can be points, lines, triangles and so on, and are passed on to the geometry stage.
  • Geometry stage: this stage usually includes multiple sub-stages such as vertex specification, vertex shading, tessellation, geometry shading, vertex post-processing, and primitive assembly.
  • Vertex specification is used to obtain the vertex data, which is generated from the virtual scene and the 3D models in it. The vertex data includes the 3D coordinates of each vertex and may also include the vertex's normal vector, color, and so on. A vertex may be a point on a 3D model, for example where two edges of a polygon in the model meet, i.e. a common endpoint of two edges of the model.
  • The vertex shader is typically used to transform the 3D coordinates of vertices from object space to screen/image space. The transformation process can be: transform from model space to world space, then from world space to view space, then from view space to normalized projection space, and finally from the normalized projection space to screen space. The view space includes the view frustum: the space inside the frustum is what can be seen from the user's perspective, and the space outside it cannot be seen.
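  • This transform chain can be sketched as follows in Python/NumPy, assuming 4x4 model, view and projection matrices are given (the function and variable names are not from this application):

```python
import numpy as np

def vertex_shader(v_model, M_model, M_view, M_proj, width, height):
    v = np.append(v_model, 1.0)          # homogeneous coordinates
    v_world = M_model @ v                # model space -> world space
    v_view  = M_view  @ v_world          # world space -> view space
    v_clip  = M_proj  @ v_view           # view space  -> projection space
    v_ndc   = v_clip[:3] / v_clip[3]     # perspective divide, into [-1, 1]^3
    x = (v_ndc[0] + 1.0) * 0.5 * width   # normalized projection -> screen
    y = (1.0 - v_ndc[1]) * 0.5 * height  # flip y: screen origin is top-left
    return np.array([x, y, v_ndc[2]])    # keep depth for the later depth test
```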
  • Tessellation is used to substantially increase the number of vertices in a 3D model. Suppose a 3D model consists of three vertices forming a triangle; after tessellation, the number of vertices can grow, for example, from three to six. Before tessellation the 3D model appears rough and stiff; after tessellation it appears realistic and vivid.
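  • As a toy illustration of how subdivision grows the vertex count from three to six, the following sketch splits one triangle at its edge midpoints (midpoint subdivision is only one possible tessellation scheme):

```python
import numpy as np

def tessellate(a, b, c):
    # Midpoint subdivision: 1 triangle (3 vertices) becomes 4 triangles
    # sharing 6 distinct vertices: a, b, c plus the three edge midpoints.
    ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

tri = [np.array(p, dtype=float) for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]]
print(len(tessellate(*tri)))   # 4 smaller triangles over 6 distinct vertices
```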
  • The geometry shader is used to transform one or more vertices in the 3D model into a completely different primitive, thereby generating more vertices.
  • Vertex post-processing is used to clip primitives: if a primitive lies partly outside and partly inside the view frustum, the part outside the frustum is clipped away and only the part inside is kept.
  • Primitive assembly is usually used to assemble the vertices of the 3D model into geometric primitives; this stage produces a series of triangles, line segments and points. The assembled line segments may include independent segments, segments connected end-to-end but ultimately open, segments connected end-to-end and finally closed, and so on; assembled triangles may include independent triangles, linear strips of consecutive triangles, fan-shaped strips of consecutive triangles, and so on.
  • At this stage culling can also be performed, i.e. removing invisible objects from the scene; culling may include frustum culling, viewport culling, and occlusion culling.
  • The rasterization stage includes rasterization, fragment shading, and per-sample operations.
  • Rasterization is the process of converting vertex data into fragments; it converts each geometric primitive into a raster image in which each element corresponds to a pixel in the frame buffer. The first part of rasterization decides which integer grid regions in window coordinates are occupied by a primitive; the second part assigns a color value and a depth value to each such region. The rasterization process thus produces fragments: each point on the two-dimensional image carries color, depth and texture data, and such a point together with its related information is called a fragment.
  • The fragment shader is used to calculate the final color output of each pixel.
  • Per-sample (pixel-by-pixel) processing includes depth testing and transparency handling. Understandably, if a closer object is drawn first and a farther object is drawn afterwards, the farther object will cover the closer one simply because it is drawn later, which is not the desired effect. The depth test records, for each pixel drawn on the screen, its distance from the camera in the 3D world: the larger the depth value (Z value) stored in the depth buffer for a pixel, the farther that pixel is from the camera. With a depth buffer, therefore, the order in which objects are drawn is no longer important, and they are displayed correctly according to their distance (Z value).
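  • A minimal sketch of such a depth test (a toy z-buffer in Python/NumPy; here a smaller z means closer to the camera):

```python
import numpy as np

def draw_fragment(color_buf, depth_buf, x, y, z, rgb):
    # Keep the fragment closest to the camera, whatever the draw order.
    if z < depth_buf[y, x]:      # new fragment is nearer than the stored one
        depth_buf[y, x] = z      # record its distance from the camera
        color_buf[y, x] = rgb    # and let its color win the pixel

h, w = 2, 2
zbuf = np.full((h, w), np.inf)   # initialise every pixel to "infinitely far"
cbuf = np.zeros((h, w, 3))
draw_fragment(cbuf, zbuf, 0, 0, z=5.0, rgb=(1.0, 0.0, 0.0))  # far object first
draw_fragment(cbuf, zbuf, 0, 0, z=2.0, rgb=(0.0, 1.0, 0.0))  # near object wins
```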
  • It should be noted that the above takes the following processing order of the rasterization rendering method as an example: vertex shader, tessellation, geometry shader, vertex post-processing (including clipping), primitive assembly (including culling), rasterization, fragment shader, and per-pixel processing. In specific implementations, the processing order may change, which is not specifically limited here.
  • In this way, a two-dimensional first live picture can be obtained in which the three-dimensional display effect of the anchor's image and the virtual object is better, and the picture is more natural and coordinated.
  • In the second method, the cloud computing platform 130 fuses the two-dimensional image of the virtual object with the two-dimensional image of the anchor, so as to obtain the two-dimensional first live picture.
  • The two-dimensional image of the anchor can come from the live video uploaded in real time by the real anchor, which the cloud computing platform 130 receives and which includes multiple frames of continuous images; person recognition is then performed on each frame.
  • That is, whenever the cloud computing platform 130 receives a frame uploaded by the real anchor, it can extract the anchor's two-dimensional image from that frame, and then fuse the extracted two-dimensional image of the anchor with the two-dimensional image of the virtual object to obtain the two-dimensional first live picture.
  • In this way, the three-dimensional display effect of the real anchor's image and the virtual object is better, and the picture is more natural and coordinated.
  • Among them, the cloud computing platform 130 can render the three-dimensional data of the virtual object to obtain the two-dimensional image of the virtual object; the rendering method can be a rasterization rendering method, a ray tracing rendering method, or a mixture of the two.
  • Then, the cloud computing platform 130 can use augmented reality technology to perform virtual-real fusion on the two-dimensional image of the real anchor and the two-dimensional image of the virtual object to obtain the first live picture.
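  • Putting the pieces of this second method together, a hedged per-frame sketch: segment_person stands in for the person-recognition step and compose for the alpha blend shown earlier; both are injected as parameters, so nothing here pretends to be a real API:

```python
def fuse_live_video(frames, virtual_rgb, segment_person, compose):
    # For each uploaded frame of the anchor's live video: recognize the real
    # anchor, extract a mask, and fuse with the rendered 2D virtual-object
    # image to yield one two-dimensional live picture per frame.
    for frame in frames:                        # multi-frame continuous images
        mask = segment_person(frame)            # HxW person mask for this frame
        yield compose(frame, mask, virtual_rgb) # two-dimensional live picture
```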
  • In this way, the image of the anchor and the virtual props are integrated into the virtual scene; compared with the prior-art live picture obtained by simply superimposing two two-dimensional images, the three-dimensional display effect of the picture is better, and the picture is more natural and coordinated.
  • S302 The cloud computing platform 130 sends the first live broadcast picture to the terminal device 110B of the audience.
  • In specific implementations, the live application server on the cloud computing platform 130 may send the first live picture through the network to the live application client on the audience's terminal device 110B, and the live application client displays the first live picture to the audience.
  • S303 The cloud computing platform 130 sends the first live broadcast picture to the anchor's terminal device 110A.
  • In specific implementations, the live application server on the cloud computing platform 130 may send the first live picture through the network to the live application client on the anchor's terminal device 110A, and the live application client displays the first live picture to the anchor.
  • S304 The terminal device 110B of the viewer displays the first live broadcast image.
  • S305 The host's terminal device 110A displays the first live broadcast image.
  • the terminal device 110B of the viewer or the terminal device 110A of the anchor receives the first operation performed by the corresponding user on the image or virtual object of the anchor in the first live image.
  • The first operation is an operation for changing the image of the anchor in the first live picture; or an operation for changing one or more of the coordinate value, moving speed, acceleration, offset angle, moving direction, and color of a point on the virtual object in the first live picture; or an operation for adding the image of a real object to the first live picture; no specific limitation is made here.
  • Fig. 3 takes as an example the case where the first operation is the viewer's operation and the terminal device receiving the first operation is the viewer's terminal device 110B. It should be understood that Fig. 3 is only an example and should not be regarded as a specific limitation.
  • When the first operation is an operation for changing the anchor's image in the first live picture, the first operation can specifically be used to add decorations (such as hairpins or glasses) to the anchor's image, change the anchor's hairstyle (such as changing straight hair to curly hair), or change the anchor's clothing (such as changing a dress into a suit). The above-mentioned decorations, hairstyles or clothing can be virtual objects originally included in the first live picture.
  • The first operation can also be used to change the coordinate value, moving speed, acceleration, offset angle, moving direction, or color of points on the anchor's image. For example, if the anchor's image is located in the middle of the picture, the first operation may be an operation for moving it to the lower left corner of the first live picture; or, if the anchor's image in the first live picture is in a standing posture, the first operation may be an operation for making it sit down.
  • the first operation is an operation for changing one or more of the coordinate value, moving speed, acceleration, offset angle, moving direction, and color of a point on the virtual object in the first live screen
  • the first operation specifically It can be used to change the position of the virtual object in the first live screen, change the moving speed of the virtual object in the first live screen, or change the color displayed by the virtual object in the first live screen.
• Taking a board game as an example, the first operation may be an operation for changing the position of a chess piece; taking a golf game as an example, the first operation may be an operation for making the golf ball fly off at a certain angle and speed.
• When the first operation is an operation for adding an image of a real object to the first live image, the first operation may specifically be used to add an image of a real hat, an image of a real plant, or an image of a real animal to the first live image, etc., which is not specifically limited here.
• The viewer/anchor can input the first operation to the viewer's terminal device 110B/the anchor's terminal device 110A through an input component, where the input component includes but is not limited to: a keyboard, a mouse, a touch screen, a touch panel, an audio input device, etc. Taking a touch screen as an example, the first operation may be a sliding operation performed by the viewer on the touch screen of the viewer's terminal device 110B using a finger or a stylus, and the sliding operation may be used to change the position of the virtual object in the first live image, as in the sketch below.
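For illustration only, the following minimal Python sketch shows how a terminal might encode such a slide gesture as a first operation instruction (previewing the processing step described next). The JSON field names, the "move" action, and the helper function are assumptions made for this example; the method does not prescribe a concrete instruction format.

```python
# Hypothetical sketch: encode a touch-screen slide on a virtual object as a
# "first operation instruction" message. All field names are assumptions.
import json
import uuid

def slide_to_instruction(object_id: str, start_xy: tuple, end_xy: tuple) -> str:
    """Map a slide gesture on a virtual object to an operation instruction."""
    dx = end_xy[0] - start_xy[0]
    dy = end_xy[1] - start_xy[1]
    instruction = {
        "instruction_id": str(uuid.uuid4()),
        "target": {"kind": "virtual_object", "id": object_id},
        "action": "move",                        # change the object's position
        "params": {"offset": {"dx": dx, "dy": dy}},
    }
    return json.dumps(instruction)

# Example: the viewer drags chess piece "8001" across the touch screen.
print(slide_to_instruction("8001", (120, 80), (360, 240)))
```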
• S307: The viewer's terminal device 110B or the anchor's terminal device 110A that received the first operation processes the first operation into a first operation instruction on the anchor's image or the virtual object in the first live image.
• FIG. 3 takes the case where the viewer's terminal device 110B processes the first operation into the first operation instruction as an example. It should be understood that FIG. 3 is only an example and should not be regarded as a specific limitation.
• Since the first operation instruction is obtained by processing the first operation, the first operation instruction corresponds to the first operation.
• If the first operation is an operation for changing the anchor's image in the first live image, the first operation instruction is an instruction for changing the anchor's image in the first live image; if the first operation is used to change one or more of the coordinate values, moving speed, acceleration, offset angle, moving direction, and color of points on the virtual object in the first live image, the first operation instruction is used to change one or more of the coordinate values, moving speed, acceleration, offset angle, moving direction, and color of points on the virtual object in the first live image; if the first operation is an operation for adding an image of a real object, the first operation instruction is an instruction for adding an image of a real object to the first live image.
• Specifically, the viewer or the anchor can scan the real object in advance to obtain scan data of the real object, and then input the scan data into the corresponding terminal device; when the image of the real object needs to be added to the first live image, the first operation performed by the viewer or the anchor may be to click on the scan data of the real object stored in the terminal device, and after receiving the first operation, the viewer's terminal device 110B or the anchor's terminal device 110A processes it to obtain a first operation instruction for adding the image of the real object to the first live image, where the first operation instruction can carry the scan data of the real object. Alternatively, the viewer or the anchor can pre-build a three-dimensional model of the real object and input the three-dimensional data of the real object (referring to the data constituting the three-dimensional model of the real object) into the corresponding terminal device; when the image of the real object needs to be added to the first live image, the first operation performed by the viewer or the anchor may be to click on the three-dimensional data of the real object stored in the terminal device, and the resulting first operation instruction can carry the three-dimensional data of the real object. A sketch of both variants follows.
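As a hedged sketch of the two variants just described, the helper below packages an "add real object" instruction whose payload is either raw scan data (for the cloud computing platform 130 to reconstruct into three-dimensional data) or a pre-built three-dimensional model; the field names and encoding are assumptions for illustration.

```python
# Hypothetical sketch: build the "add real object" first operation instruction
# in its two variants (scan data vs. pre-built 3D model). Field names assumed.
import base64
import json

def add_real_object_instruction(payload: bytes, prebuilt_model: bool) -> str:
    kind = "model_3d" if prebuilt_model else "scan_data"
    return json.dumps({
        "action": "add_real_object",
        "payload_kind": kind,   # tells the cloud whether reconstruction is needed
        "payload": base64.b64encode(payload).decode("ascii"),
    })

# Example: the viewer clicks a stored scan of a real hat.
instr = add_real_object_instruction(b"<scan bytes>", prebuilt_model=False)
```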
• S308: The viewer's terminal device 110B or the anchor's terminal device 110A that obtained the first operation instruction sends the first operation instruction to the cloud computing platform 130.
  • the live broadcast application client on the terminal device 110B of the viewer or the terminal device 110A of the host sends the first operation instruction to the cloud computing platform 130 through the network.
• FIG. 3 takes the case where the viewer's terminal device 110B sends the first operation instruction to the cloud computing platform 130 as an example. It should be understood that FIG. 3 is only an example and should not be regarded as a specific limitation.
• S309: The cloud computing platform 130 obtains the second live image according to the first operation instruction and the first live image.
• Specifically, the cloud computing platform 130 may obtain the physical information of the anchor's image or the virtual object affected by the first operation instruction in the first live image, and then update the first live image according to the acquired physical information, thereby obtaining the second live image. If the first operation instruction is used to add the image of a real object and carries the scan data of the real object, the cloud computing platform 130 can construct the three-dimensional data of the real object based on the scan data and then update the first live image according to that three-dimensional data to obtain the second live image; if the first operation instruction carries the three-dimensional data of the real object, the cloud computing platform 130 can directly update the first live image according to the three-dimensional data of the real object.
• The physical information of the anchor's image or the virtual object affected by the first operation instruction may include any one or more of the following: the coordinate values of points on the anchor's image or the virtual object, the moving speed of the points, the acceleration of the points, the offset angle of the points, the moving direction of the points, the color of the points, etc. It should be understood that the physical information listed above is merely an example and should not be regarded as a specific limitation.
• Taking the case where the first operation instruction is used to move a virtual chess piece in the first live image as an example, the physical information of the virtual object affected by the first operation instruction is the coordinate value of each point on the virtual chess piece after being moved; taking the case where the first operation instruction is used to make the virtual golf ball in the first live image fly off at a certain offset angle and speed as an example, the physical information of the virtual object affected by the first operation instruction is the offset angle and speed of the flying points; taking the case where the first operation instruction is used to change the color of the clothes of the anchor's image in the first live image as an example, the physical information of the anchor's image affected by the first operation instruction is the color information of each point on the clothes of the anchor's image. One possible data layout for such information is sketched below.
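One possible in-memory layout for that per-point physical information, mirroring the quantities listed above; the class name, fields, and defaults are illustrative assumptions (Python 3.9+ is assumed for the tuple annotations).

```python
# Illustrative container for the physical information of one affected point.
from dataclasses import dataclass

@dataclass
class PointPhysics:
    coord: tuple[float, float, float]              # coordinate value of the point
    speed: float = 0.0                             # speed of point movement
    acceleration: float = 0.0                      # acceleration of point movement
    offset_angle: float = 0.0                      # offset angle of point movement
    direction: tuple[float, float, float] = (0.0, 0.0, 0.0)  # moving direction
    color: tuple[int, int, int] = (255, 255, 255)  # color of the point

# E.g. one point of a golf ball launched at a given offset angle and speed:
ball_point = PointPhysics(coord=(0.0, 0.0, 0.0), speed=40.0, offset_angle=12.5)
```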
• For example, FIG. 8A shows the first live image 800A and FIG. 8B shows the second live image 800B, where the first operation instruction is used to move the chess piece 8001 in the first live image 800A from the second grid in the first row of the chessboard to the fifth grid in the third row. It can be seen that in the second live image 800B, the chess piece 8001 has moved to a position that meets the user's needs.
• Updating the first live image according to the acquired physical information to obtain the second live image may include updating the anchor's image or the virtual object in the virtual scene corresponding to the first live image according to the acquired physical information, and then rendering the three-dimensional data of the virtual scene, the three-dimensional data of the virtual objects in the virtual scene, and the anchor's image, so as to obtain the two-dimensional second live image. Similarly, updating the first live image according to the three-dimensional data of a real object to obtain the second live image may include adding the three-dimensional data of the real object to the virtual scene corresponding to the first live image, and then rendering the three-dimensional data of the virtual scene, the three-dimensional data of the virtual objects in the virtual scene, the anchor's image in the virtual scene, and the three-dimensional data of the real object added to the virtual scene, so as to obtain the two-dimensional second live image. The sketch below outlines this update-then-render flow.
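Tying these steps together, the sketch below outlines S309 under the same assumptions: apply the acquired physical information to the affected points in the virtual scene, then render the fused scene back into a two-dimensional frame. The `Scene` interface is a hypothetical stand-in for a real 3D engine, and `PointPhysics` refers to the earlier sketch.

```python
# Hypothetical sketch of S309: update the virtual scene with the acquired
# physical information, then render the fused scene to a 2D live image.
class Scene:
    """Stand-in for the 3D virtual scene holding the virtual objects and the
    anchor's image as point data (see the PointPhysics sketch above)."""

    def __init__(self):
        self.points = {}  # point_id -> PointPhysics

    def apply_physics(self, physics_by_point: dict) -> None:
        # Overwrite only the points affected by the operation instruction.
        self.points.update(physics_by_point)

    def render_to_2d(self):
        # Stand-in for rasterizing the fused 3D scene into a 2D live frame;
        # here it merely flattens the point set for illustration.
        return [(pid, p.coord, p.color) for pid, p in self.points.items()]

def obtain_second_live_image(scene: Scene, physics_by_point: dict):
    """S309 in miniature: update the scene, then render it back to 2D."""
    scene.apply_physics(physics_by_point)
    return scene.render_to_2d()
```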
  • S310 The cloud computing platform 130 sends the second live broadcast picture to the terminal device 110B of the audience.
  • S311 The cloud computing platform 130 sends the second live broadcast picture to the anchor's terminal device 110A.
  • S312 The terminal device 110B of the viewer displays the second live broadcast image.
  • S313 The anchor's terminal device 110A displays the second live broadcast image.
• It should be noted that S302 and S303 may be executed in parallel or successively in any order; the same is true for S304 and S305, for S310 and S311, and for S312 and S313, which is not specifically limited here.
• In some embodiments, after the audience's terminal device 110B and the anchor's terminal device 110A display the second live image, the anchor's terminal device 110A can receive the anchor's second operation on the anchor's image or the virtual object in the second live image; the anchor's terminal device 110A can then process the second operation into a second operation instruction on the anchor's image or the virtual object in the second live image and send the second operation instruction to the cloud computing platform 130, and the cloud computing platform 130 obtains the third live image according to the second operation instruction and the second live image; after the cloud computing platform 130 obtains the third live image, it can send the third live image to the audience's terminal device 110B and the anchor's terminal device 110A for display.
• Alternatively, after the audience's terminal device 110B and the anchor's terminal device 110A display the second live image, the audience's terminal device 110B can receive the viewer's second operation on the anchor's image or the virtual object in the second live image; then, the audience's terminal device 110B can process the second operation into a second operation instruction on the anchor's image or the virtual object in the second live image and send the second operation instruction to the cloud computing platform 130; the cloud computing platform 130 obtains the third live image according to the second operation instruction and the second live image, and after obtaining the third live image, it can send the third live image to the audience's terminal device 110B and the anchor's terminal device 110A for display.
• The definition of the above-mentioned second operation is similar to that of the first operation in the embodiment shown in FIG. 3, and the definition of the second operation instruction is similar to that of the first operation instruction in the embodiment shown in FIG. 3; the process by which the cloud computing platform 130 obtains the third live image according to the second operation instruction and the second live image is similar to the process by which the cloud computing platform 130 obtains the second live image in step S309 of the embodiment shown in FIG. 3. For the sake of brevity, no further description is given here; for details, please refer to the relevant content in the embodiment shown in FIG. 3.
• With the live broadcast interaction method provided by this application, after the anchor and the viewer see the live image updated based on the other party's operation, they can operate again on the anchor's image or the virtual object in the updated live image, so that the live image is updated once more; that is to say, the live broadcast interaction method provided by this application can realize interaction between the anchor and the audience through multiple operations on the anchor's image or the virtual objects in the live image.
• In some embodiments, when the anchor's terminal device 110A receives the anchor's first operation, the anchor's terminal device 110A can also obtain the anchor's body movement information when performing the first operation; optionally, the anchor's facial expression information when performing the first operation can also be obtained. Then, when the anchor's terminal device 110A sends the first operation instruction to the cloud computing platform 130, it can also send the anchor's body movement information and facial expression information when performing the first operation to the cloud computing platform 130, and the cloud computing platform 130 obtains the second live image according to the first operation instruction, the anchor's body movement information and facial expression information when performing the first operation, and the first live image.
• Similarly, when the anchor's terminal device 110A receives the anchor's second operation, the anchor's terminal device 110A can also obtain the body movement information when the anchor performs the second operation; optionally, the facial expression information when the anchor performs the second operation can also be obtained. Then, when the anchor's terminal device 110A sends the second operation instruction to the cloud computing platform 130, it can also send the anchor's body movement information and facial expression information when performing the second operation to the cloud computing platform 130, and the cloud computing platform 130 obtains the third live image according to the second operation instruction, the anchor's body movement information and facial expression information when performing the second operation, and the second live image.
• That is, the cloud computing platform 130 can update the first live image according to the first operation instruction and the body movement information and facial expression information when the anchor performs the first operation, so as to obtain the second live image; or update the second live image according to the second operation instruction and the body movement information and facial expression information when the anchor performs the second operation, thereby obtaining the third live image. When the cloud computing platform 130 updates the first live image/second live image according to the anchor's body movement information and facial expression information, it uses the body movement information and facial expression information to drive the anchor's image in the first live image/second live image; that is, compared with the live image before the update, the body movements and facial expressions of the anchor's image in the updated live image will change.
• Specifically, the anchor's terminal device 110A can capture images of the anchor through a camera, where the images include the anchor's body movement information and facial expression information; or, when the anchor wears a wearable device that can collect body movement information, the wearable device can also be used to collect the anchor's body movement information, which is not specifically limited here.
• In some embodiments, the anchor's terminal device 110A can also collect the anchor's body movement information and facial expression information in real time and send the collected information to the cloud computing platform 130 in real time, so that the cloud computing platform 130 updates the body movements and facial expressions of the anchor's image in the live image in real time according to the anchor's body movement information and facial expression information, which is not specifically limited here; a loop of this real-time path is sketched below.
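For concreteness, the following loop shows what that real-time capture-and-send path could look like on the anchor's terminal. The `capture` and `transport` objects, their method names, and the sampling rate are hypothetical stand-ins; the method does not specify a capture API.

```python
# Illustrative real-time loop: sample the anchor's body movement and facial
# expression information and stream it to the cloud computing platform 130.
import time

def stream_motion(capture, transport, fps: int = 30) -> None:
    interval = 1.0 / fps
    while capture.is_live():
        sample = {
            "timestamp": time.time(),
            "body_movement": capture.read_pose(),      # camera or wearable device
            "facial_expression": capture.read_face(),  # camera-based, optional
        }
        transport.send(sample)                         # to the cloud platform
        time.sleep(interval)
```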
• When the cloud computing platform 130 sends live images to the audience's terminal device 110B and the anchor's terminal device 110A, it may use a real-time transport protocol such as the real time streaming protocol (RTSP) or the web real-time communication (WebRTC) protocol for transmission; a sketch of selecting between the two follows.
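As a hedged illustration only, the shells below stand in for a real RTSP server or WebRTC stack; the small factory merely shows that either protocol can sit behind a common frame-sending interface. None of these class or method names come from the described system.

```python
# Illustrative protocol selection for pushing live images to terminal devices.
class RtspSender:
    def send_frame(self, frame) -> None:
        ...  # hand the frame to an RTSP media server (not implemented here)

class WebRtcSender:
    def send_frame(self, frame) -> None:
        ...  # hand the frame to a WebRTC peer connection (not implemented here)

def make_sender(protocol: str):
    senders = {"rtsp": RtspSender, "webrtc": WebRtcSender}
    return senders[protocol.lower()]()  # e.g. make_sender("webrtc")
```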
• In summary, the live broadcast interaction method provided by this application enables viewers or anchors to perform various interactive operations on the anchor's image or the virtual objects in the live image and updates the live image based on those interactive operations; that is to say, through this method, the live images seen by the audience and the anchor change with the various interactive operations performed by the audience or the anchor. The above solution can therefore provide a variety of interaction modes for the anchor and the audience and optimize the interactive experience between the anchor and the audience.
• In addition, the live broadcast interaction method provided by this application fuses the anchor's image and the virtual objects together to obtain the live image, and when the cloud computing platform 130 updates the live image according to an operation instruction, the anchor's image in the updated live image is still fused with the virtual objects. Compared with a live image obtained by simply superimposing two two-dimensional images in the prior art, the three-dimensional display effect of such a live image is better, and the displayed content is more natural and coordinated.
• The live interaction method provided by this application has been described in detail above. Based on the same inventive concept, the live interaction device provided by this application is introduced next.
• This application provides two live interaction devices: one can be applied to the terminal device 110 shown in FIG. 1 (such as 110A and 110B), and the other can be applied to the cloud computing platform 130 shown in FIG. 1. When the live interaction device provided by this application is applied to the terminal device 110 and/or the cloud computing platform 130, the live broadcast system 100 shown in FIG. 1 can optimize the live interaction effect between the anchor and the audience, provide a variety of interaction modes, and optimize the interactive experience between the anchor and the audience.
• The internal unit modules of the live interaction device provided by this application can be divided in multiple ways, and each module can be a software module, a hardware module, or partly a software module and partly a hardware module, which is not limited by this application.
  • FIG. 9 is a schematic structural diagram of a live broadcast interactive device 900 applied to a terminal device 110 exemplarily shown in this application.
  • the device 900 includes: a display module 910 , a receiving module 920 , a processing module 930 and a sending module 940 .
  • the display module 910 is configured to display the first live broadcast picture, wherein the first live broadcast picture includes the image of the anchor and the virtual object.
  • the receiving module 920 is configured to receive the first operation on the anchor's image or virtual object in the first live broadcast picture.
  • the processing module 930 is configured to process the first operation as a first operation instruction to the anchor's image or virtual object in the first live image.
  • the sending module 940 is configured to send the first operation instruction to the cloud computing platform 130, and the cloud computing platform 130 obtains the second live image according to the first operation instruction and the first live image.
  • the receiving module 920 is configured to receive the second live image sent by the cloud computing platform 130 .
  • the display module 910 is configured to display the second live image.
• In some embodiments, the receiving module 920 is further configured to receive a second operation on the anchor's image or the virtual object in the second live image; the processing module 930 is configured to process the second operation into a second operation instruction on the anchor's image or the virtual object in the second live image; the sending module 940 is used to send the second operation instruction to the cloud computing platform 130, so that the cloud computing platform 130 obtains the third live image according to the second operation instruction and the second live image; the receiving module 920 is used to receive the third live image sent by the cloud computing platform 130; and the display module 910 is used to display the third live image. A skeleton of how these modules could be wired together is sketched below.
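A minimal skeleton of that module wiring follows, assuming hypothetical interfaces (`show`, `wait_for_operation`, `to_instruction`, `send`, `wait_for_image`) that the application itself does not define.

```python
# Illustrative wiring of the four modules of the live interaction device 900.
class LiveInteractionClient:
    def __init__(self, display, receiver, processor, sender):
        self.display = display      # display module 910
        self.receiver = receiver    # receiving module 920
        self.processor = processor  # processing module 930
        self.sender = sender        # sending module 940

    def one_interaction_round(self, live_image):
        self.display.show(live_image)                    # show first live image
        op = self.receiver.wait_for_operation()          # first operation
        instruction = self.processor.to_instruction(op)  # first operation instruction
        self.sender.send(instruction)                    # to cloud computing platform 130
        updated = self.receiver.wait_for_image()         # second live image
        self.display.show(updated)
        return updated
```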
• In one case, the first operation is an operation of the viewer and the second operation is an operation of the anchor; in another case, the first operation is an operation of the anchor and the second operation is an operation of the viewer.
  • the anchor is a real person or a virtual person.
  • the first live broadcast image is a two-dimensional live broadcast image obtained by fusing the 3D data of the anchor and the 3D data of the virtual object, and processing the fused 3D data.
  • the first live broadcast image is obtained by fusing the two-dimensional image of the anchor and the two-dimensional image of the virtual object.
• In some embodiments, the first operation instruction is used to change the anchor's image; or, the first operation instruction is used to change one or more of the coordinate values, moving speed, acceleration, offset angle, moving direction, and color of points on the virtual object in the first live image; or, the first operation instruction is used to add an image of a real object to the first live image.
  • the first operation instruction is used to add decorations, change hairstyles, or change clothing to the anchor's image.
  • the virtual object is used to implement an interactive game between the audience and the host, such as a board game or a ball game.
  • FIG. 10 is a structural diagram of a live interactive device 1000 applied to a cloud computing platform 130 exemplarily shown in the present application.
  • the device 1000 includes: a processing module 1010 , a sending module 1020 and a receiving module 1030 .
  • the processing module 1010 is configured to fuse the image of the anchor with the virtual object to obtain the first live broadcast image.
  • the sending module 1020 is configured to send the first live broadcast picture to the terminal device 110B of the audience and the terminal device 110A of the host.
  • the receiving module 1030 is configured to receive a first operation instruction on the anchor's image or virtual object in the first live broadcast picture.
  • the processing module 1010 is configured to obtain a second live image according to the first operation instruction and the first live image.
• The sending module 1020 is configured to send the second live image to the audience's terminal device 110B and the anchor's terminal device 110A.
• In some embodiments, the receiving module 1030 is configured to receive a second operation instruction on the anchor's image or the virtual object in the second live image; the processing module 1010 is configured to obtain the third live image according to the second operation instruction and the second live image; and the sending module 1020 is configured to send the third live image to the audience's terminal device 110B and the anchor's terminal device 110A.
• The first operation instruction is an instruction obtained by processing the first operation, and the second operation instruction is an instruction obtained by processing the second operation, where the first operation is the viewer's operation on the anchor's image or the virtual object in the first live image and the second operation is the anchor's operation on the anchor's image or the virtual object in the second live image; or, the first operation is the anchor's operation on the anchor's image or the virtual object in the first live image and the second operation is the viewer's operation on the anchor's image or the virtual object in the second live image.
  • the anchor is a real person or a virtual person.
• In some embodiments, the processing module 1010 can realize the fusion of the anchor's image and the virtual object to obtain the first live image in the following manner: after fusing the three-dimensional data of the anchor and the three-dimensional data of the virtual object, the fused three-dimensional data is processed to obtain the two-dimensional first live image. In other embodiments, the processing module 1010 can realize the fusion in this manner: first acquire the two-dimensional image of the anchor, and then fuse the two-dimensional image of the anchor with the two-dimensional image of the virtual object to obtain the two-dimensional first live image. Both strategies are sketched below.
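A compact sketch contrasting the two fusion strategies; the `renderer` and `compositor` objects and their method names are illustrative assumptions rather than part of the described system.

```python
# Strategy 1: fuse 3D data first, then project the fused data to a 2D image.
def fuse_3d_then_render(anchor_3d: dict, object_3d: dict, renderer):
    fused = {**anchor_3d, **object_3d}   # merge the two sets of 3D data
    return renderer.render_to_2d(fused)  # rasterize to the first live image

# Strategy 2: composite the anchor's 2D image with the virtual object's
# 2D image to obtain the 2D first live image directly.
def fuse_2d_images(anchor_2d, object_2d, compositor):
    return compositor.blend(anchor_2d, object_2d)
```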
• In some embodiments, the first operation instruction is used to change the anchor's image; or, the first operation instruction is used to change one or more of the coordinate values, moving speed, acceleration, offset angle, moving direction, and color of points on the virtual object in the first live image; or, the first operation instruction is used to add an image of a real object to the first live image.
  • the first operation instruction is used to add decorations, change hairstyles, or change clothing to the host's image.
  • the virtual object is used to implement an interactive game between the audience and the host, such as a board game or a ball game.
• For the specific implementation of the various operations performed by the above live interaction device 900, reference may be made to the relevant descriptions of the steps performed by the anchor's terminal device 110A or the audience's terminal device 110B in the above embodiment of the live interaction method; for the specific implementation of the various operations performed by the above live interaction device 1000, reference may be made to the relevant descriptions of the steps performed by the cloud computing platform 130 in the above embodiment of the live interaction method. For the sake of brevity, details are not repeated here.
• The live interaction devices provided by this application (the device 900 shown in FIG. 9 and the device 1000 shown in FIG. 10) enable the viewer or the anchor to perform various interactive operations on the anchor's image or the virtual objects in the live image and update the live image based on those interactive operations; that is to say, the live image seen by the audience and the anchor changes with the various interactive operations performed by the audience or the anchor. Therefore, the above solution can provide a variety of interaction modes for the anchor and the audience and optimize the interactive experience between the anchor and the audience.
• In addition, the live interaction device 1000 provided by this application fuses the anchor's image and the virtual objects together to obtain the live image, and when the live interaction device 1000 updates the live image according to an operation instruction, the anchor's image and the virtual objects in the updated live image remain fused together. Compared with a live image obtained by simply superimposing two two-dimensional images in the prior art, the three-dimensional display effect of such a live image is better, and the displayed content is more natural and coordinated.
  • FIG. 11 is a schematic structural diagram of a terminal device 110 provided in the present application.
  • the terminal device 110 includes: a processor 1110, a memory 1120 and a communication interface 1130 , where the processor 1110 , the memory 1120 , and the communication interface 1130 may be connected to each other through a bus 1140 .
• The processor 1110 can read the program code (including instructions) stored in the memory 1120 and execute it, so that the terminal device 110 performs the steps performed by the anchor's terminal device 110A or the audience's terminal device 110B in the live broadcast interaction method provided by this application shown in FIG. 3, or so that the terminal device 110 deploys the live interaction device 900.
  • the processor 1110 may have various specific implementation forms, such as a CPU, or a combination of a CPU and a hardware chip.
• The aforementioned hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
• The above-mentioned PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a generic array logic (GAL), or any combination thereof.
  • the processor 1110 executes various types of digitally stored instructions, such as software or firmware programs stored in the memory 1120, which enable the terminal device 110 to provide various services.
  • the memory 1120 is used to store program codes, which are executed under the control of the processor 1110 .
• The program code may include one or more software modules, and the one or more software modules may be the software modules provided in the embodiment in FIG. 9, namely the display module 910, the receiving module 920, the processing module 930, and the sending module 940.
• The memory 1120 may include a volatile memory, such as a random access memory (RAM); the memory 1120 may also include a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 1120 may also include a combination of the above types.
• The communication interface 1130 may be a wired interface (such as an Ethernet interface, a fiber optic interface, or another type of interface such as an InfiniBand interface) or a wireless interface (such as a cellular network interface or a wireless local area network interface) for communicating with other computing devices or apparatuses.
• The communication interface 1130 may adopt a protocol family above the transmission control protocol/internet protocol (TCP/IP), for example, the remote function call (RFC) protocol, the simple object access protocol (SOAP), the simple network management protocol (SNMP), the common object request broker architecture (CORBA) protocol, distributed protocols, etc.
• The bus 1140 may be a peripheral component interconnect express (PCIe) bus, an extended industry standard architecture (EISA) bus, a unified bus (Ubus or UB), a compute express link (CXL), a cache coherent interconnect for accelerators (CCIX), etc.
• The bus 1140 can be divided into an address bus, a data bus, a control bus, and the like. In addition to a data bus, the bus 1140 may also include a power bus, a control bus, a status signal bus, and the like, but for clarity of description, the various buses are all labeled as bus 1140 in the figure. For ease of representation, only one thick line is used in FIG. 11, but this does not mean that there is only one bus or only one type of bus.
• The above terminal device 110 is used to execute the steps performed by the terminal device 110A or the terminal device 110B in the live broadcast interaction method provided by this application shown in FIG. 3. It should be understood that the terminal device 110 is only an example provided by the embodiment of this application, and the terminal device 110 may have more or fewer components than those shown in FIG. 11, may combine two or more components, or may be implemented with different configurations of components.
• The present application also provides a cloud computing platform 130. The cloud computing platform 130 can be implemented by a cluster of one or more computing devices 1200 shown in FIG. 12, so as to execute the steps performed by the cloud computing platform 130 in the live interaction method shown in FIG. 3. When the cloud computing platform 130 includes only one computing device 1200, all the modules in the live interaction device 1000 shown in FIG. 10 can be deployed on this one computing device 1200: the processing module 1010, the sending module 1020, and the receiving module 1030.
• When the cloud computing platform 130 includes multiple computing devices 1200, each of the multiple computing devices 1200 can be used to deploy some of the modules in the live interaction device 1000 shown in FIG. 10, or two or more of the computing devices 1200 can jointly deploy one or more modules in the live interaction device 1000 shown in FIG. 10. For example, the computing device 1200A can be used to deploy the processing module 1010 and the computing device 1200B can be used to deploy the sending module 1020 and the receiving module 1030; or, the computing device 1200A can be used to deploy the sending module 1020 and the receiving module 1030 while deploying the processing module 1010 together with the computing device 1200B.
• Each computing device 1200 in the cloud computing platform 130 can include a processor 1210, a memory 1220, a communication interface 1230, and the like. The memory 1220 in one or more computing devices 1200 in the cloud computing platform 130 can store the same code (also referred to as instructions or program instructions) corresponding to the method executed by the cloud computing platform 130 in the live interaction method provided by this application; the processor 1210 can read the code from the memory 1220 and execute it to implement the method executed by the cloud computing platform 130 in the live interaction method provided by this application, and the communication interface 1230 can be used for communication between each computing device 1200 and other devices.
  • each computing device 1200 in the cloud computing platform 130 may also communicate with other devices through a network connection.
  • the network may be a wide area network or a local area network or the like.
• Taking the case where all the modules of the live interaction device 1000 are deployed on one computing device 1200 as an example, the computing device 1200 provided by this application is described in detail below with reference to FIG. 13.
  • a computing device 1200 includes: a processor 1210 , a memory 1220 and a communication interface 1230 , where the processor 1210 , the memory 1220 and the communication interface 1230 may be connected to each other through a bus 1240 .
• The processor 1210 can read and execute the program code stored in the memory 1220, so that the cloud computing platform 130 executes the steps performed by the cloud computing platform 130 in the live interaction method provided by this application shown in FIG. 3, or so that the computing device 1200 deploys the live interaction device 1000.
  • the processor 1210 may have multiple specific implementation forms, for example, the processor 1210 may be a CPU or a GPU, and the processor 1210 may also be a single-core processor or a multi-core processor.
  • the processor 1210 may be a combination of a CPU and a hardware chip.
  • the aforementioned hardware chip may be an ASIC, a PLD or a combination thereof.
  • the aforementioned PLD may be CPLD, FPGA, GAL or any combination thereof.
  • the processor 1210 may also be implemented solely by a logic device with built-in processing logic, such as FPGA or DSP.
  • the memory 1220 is used to store program codes, which are executed under the control of the processor 1210 .
  • the program code may include the software modules provided in the embodiment in FIG. 10 : a processing module 1010 , a sending module 1020 and a receiving module 1030 .
  • the memory 1220 may be a non-volatile memory, such as ROM, PROM, EPROM, EEPROM or flash memory.
  • Memory 1220 can also be volatile memory, which can be RAM, which acts as an external cache.
  • the communication interface 1230 can be a wired interface (such as an Ethernet interface) or a wireless interface (such as a cellular network interface or using a wireless local area network interface) for communicating with other computing nodes or devices.
  • the communication interface 1230 may adopt a protocol suite above TCP/IP, for example, RFC protocol, SOAP protocol, SNMP protocol, CORBA protocol, distributed protocol, and the like.
  • the bus 1240 may be a PCIe bus, an EISA bus, UB, CXL, CCIX, etc.
• The bus 1240 can be divided into an address bus, a data bus, a control bus, and the like. In addition to a data bus, the bus 1240 may also include a power bus, a control bus, a status signal bus, and the like, but for clarity of description, the various buses are all labeled as bus 1240 in the figure. For ease of representation, only one thick line is used in FIG. 13, but this does not mean that there is only one bus or only one type of bus.
• The above computing device 1200 is used to execute the steps performed by the cloud computing platform 130 in the live broadcast interaction method provided by this application shown in FIG. 3. It should be understood that the computing device 1200 is only one example provided by this application, and the computing device 1200 may have more or fewer components than those shown in FIG. 13, may combine two or more components, or may be implemented with different configurations of components.
  • the present application also provides a live broadcast system, which may include the live interactive device 900 shown in FIG. 9 and the live interactive device 1000 shown in FIG. 10 , or include the terminal device 110 shown in FIG. 11 and the cloud computing platform 130.
  • the present application also provides a computer-readable storage medium, and the computer-readable storage medium stores instructions, and when the instructions are executed, some or all steps of the live interaction method described in the above-mentioned embodiments can be implemented.
• In the above embodiments, the implementation may be realized in whole or in part by software, hardware, or any combination thereof. When software is used, the implementation may take the form of a computer program product in whole or in part. The computer program product includes one or more computer instructions; when the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of this application are generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
• The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, or DSL) or wireless (e.g., infrared, radio, or microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (such as a floppy disk, a hard disk, or a magnetic tape), an optical medium, or a semiconductor medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application relates to a live broadcast interaction method, device, and system. The method includes: a cloud computing platform fuses an anchor's image with a virtual object to obtain a first live image, and sends the first live image to a viewer's terminal device and the anchor's terminal device for display; then, when the viewer's terminal device or the anchor's terminal device receives a first operation performed by a user on the virtual object or the anchor's image in the first live image, the first operation is processed into a first operation instruction, and the instruction is sent to the cloud computing platform; the cloud computing platform obtains a second live image according to the first operation instruction and the first live image; and after obtaining the second live image, the cloud computing platform sends it to the viewer's terminal device and the anchor's terminal device for display. This method can improve the live interaction effect, provide a variety of interaction modes for the anchor and the viewer, and improve the interactive experience between the anchor and the viewer.
PCT/CN2022/139298 2022-01-27 2022-12-15 Procédé d'interaction de retransmission en direct, dispositif et système WO2023142756A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210102164.X 2022-01-27
CN202210102164.XA CN116567274A (zh) 2022-01-27 2022-01-27 直播互动方法、装置以及系统

Publications (1)

Publication Number Publication Date
WO2023142756A1 true WO2023142756A1 (fr) 2023-08-03

Family

ID=87470375

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/139298 WO2023142756A1 (fr) 2022-01-27 2022-12-15 Procédé d'interaction de retransmission en direct, dispositif et système

Country Status (2)

Country Link
CN (1) CN116567274A (fr)
WO (1) WO2023142756A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106375775A (zh) * 2016-09-26 2017-02-01 广州华多网络科技有限公司 虚拟礼物展示方法及装置
CN110519611A (zh) * 2019-08-23 2019-11-29 腾讯科技(深圳)有限公司 直播互动方法、装置、电子设备及存储介质
CN113395533A (zh) * 2021-05-24 2021-09-14 广州博冠信息科技有限公司 虚拟礼物特效显示方法、装置、计算机设备及存储介质
US20210291061A1 (en) * 2020-03-20 2021-09-23 Amazon Technologies, Inc. Video Game Player, Spectator and Audience Interaction
CN113965812A (zh) * 2021-12-21 2022-01-21 广州虎牙信息科技有限公司 直播方法、系统及直播设备


Also Published As

Publication number Publication date
CN116567274A (zh) 2023-08-08

Similar Documents

Publication Publication Date Title
US11620800B2 (en) Three dimensional reconstruction of objects based on geolocation and image data
CN108619720B (zh) 动画的播放方法和装置、存储介质、电子装置
CN107852573B (zh) 混合现实社交交互
TWI543108B (zh) 群眾外包式(crowd-sourced)視訊顯像系統
JP6181917B2 (ja) 描画システム、描画サーバ、その制御方法、プログラム、及び記録媒体
US9928637B1 (en) Managing rendering targets for graphics processing units
CN113689537A (zh) 用于基于体素的三维建模的系统、方法和设备
US8363051B2 (en) Non-real-time enhanced image snapshot in a virtual world system
WO2022252547A1 (fr) Procédé, dispositif et système de rendu
US9588651B1 (en) Multiple virtual environments
CN112053370A (zh) 基于增强现实的显示方法、设备及存储介质
WO2023045637A1 (fr) Procédé et appareil de génération de données vidéo, dispositif électronique et support de stockage lisible
CN111142967B (zh) 一种增强现实显示的方法、装置、电子设备和存储介质
CN113936086B (zh) 毛发模型的生成方法、装置、电子设备以及存储介质
CN115082607A (zh) 虚拟角色头发渲染方法、装置、电子设备和存储介质
KR100632535B1 (ko) 이동통신단말기용 삼차원 그래픽 엔진 및 그래픽 제공 방법
WO2023142756A1 (fr) Procédé d'interaction de retransmission en direct, dispositif et système
CN116958344A (zh) 虚拟形象的动画生成方法、装置、计算机设备及存储介质
CN116958390A (zh) 一种图像渲染方法、装置、设备、存储介质及程序产品
KR20180104915A (ko) 3차원 가상공간 애니메이션 구현 시스템
CN113192173A (zh) 三维场景的图像处理方法、装置及电子设备
WO2022135050A1 (fr) Procédé de rendu, dispositif et système
WO2023169089A1 (fr) Procédé et appareil de lecture de vidéo, dispositif électronique, support et produit de programme
RU2810701C2 (ru) Гибридный рендеринг
Wang Construction of Internet Game Development Environment based on OpenGL and Augmented Reality

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22923542

Country of ref document: EP

Kind code of ref document: A1