CN112188223A - Live video playing method, device, equipment and medium
- Publication number: CN112188223A
- Application number: CN202011038362.1A
- Authority: CN (China)
- Prior art keywords: live, portrait, video, virtual room, user
- Legal status: Granted
Classifications
- H04N21/2187: Live feed
- H04N21/2541: Rights management
- H04N21/25875: Management of end-user data involving end-user authentication
- H04N21/431: Generation of visual interfaces for content selection or interaction; content or additional data rendering
- H04N21/485: End-user interface for client configuration
- H04N21/4882: Data services, e.g. news ticker, for displaying messages, e.g. warnings, reminders
Abstract
The application relates to the technical field of artificial intelligence and provides a live video playing method, device, equipment and medium. The method comprises the following steps: when a current user account joins a live virtual room, acquiring a live image corresponding to the current user account; acquiring from the live image a portrait live image corresponding to the current account; acquiring the portrait live images of the other user accounts in the live virtual room; and arranging and displaying the portrait live image of the current account and the portrait live images of the other user accounts in the background image of the live virtual room. In this way, any client that joins the live broadcast can display, during the live broadcast, the portrait live images corresponding to all accounts that have joined, which improves the degree of interaction.
Description
Technical Field
The application relates to the technical field of computers, in particular to the technical field of artificial intelligence, and provides a live video playing method, device, equipment and medium.
Background
With the continuous development of video technology, a user can remotely interact with relatives and friends through a video-based dance game, so as to strengthen the emotional bond between them.
Currently, a video-based dance game works as follows: each user's device separately collects that user's dance video, the server obtains the dance videos corresponding to the users and scores them, finally producing a score for each user's dance video, and a user may afterwards send his or her dance video to other users. However, in this scheme no user can see the other users while dancing, i.e. the degree of interaction is low.
Disclosure of Invention
The embodiment of the application provides a live video playing method, a live video playing device, live video playing equipment and a live video playing medium, which are used for improving the video interaction degree.
In one aspect, a live video playing method is provided, including:
when a current user account is added into a live broadcast virtual room, acquiring a live broadcast image corresponding to the current user account;
acquiring a portrait live broadcast image corresponding to the current account from the live broadcast image;
acquiring a portrait live broadcast image of other user accounts in the live broadcast virtual room;
and arranging and displaying the live portrait images of the current account and the live portrait images of the other user accounts in the background image of the live virtual room.
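Taken together, the four steps map onto a small client-side loop. The following Python sketch is illustrative only; every helper in it is a hypothetical stand-in for the camera, portrait segmentation, networking and rendering layers, none of which are named in the disclosure.

```python
# Illustrative sketch of the claimed steps; all helpers are hypothetical stand-ins.

def capture_live_image(account):           # step 1: camera frame for this account
    return {"account": account, "pixels": b"..."}

def extract_portrait(live_image):          # step 2: separate the person from the frame
    return {"account": live_image["account"], "portrait": live_image["pixels"]}

def fetch_other_portraits(room_id):        # step 3: portraits of the other accounts,
    return []                              # as received from the server

def render(background, portraits):         # step 4: arrange portraits on the background
    for slot, p in enumerate(portraits):
        print(f"{background}: display position {slot} -> {p['account']}")

def play_live_video(account, room_id):
    mine = extract_portrait(capture_live_image(account))
    render("room_background.png", [mine] + fetch_other_portraits(room_id))

play_live_video("current_user_a", "room_42")
```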
An embodiment of the application provides a live video playing device, including:
the first acquisition module is used for acquiring a live broadcast image corresponding to a current user account when the current user account is added into a live broadcast virtual room;
the second acquisition module is used for acquiring a portrait live broadcast image corresponding to the current account from the live broadcast image;
the third acquisition module is used for acquiring the portrait live broadcast images of other user accounts in the live broadcast virtual room;
and the display module is used for displaying the live portrait images of the current account and the live portrait images of the other user accounts in a background image of the live virtual room in an arrayed manner.
In one possible embodiment, the display module is further configured to:
responding to a room creating operation and displaying a live invitation interface, where the live invitation interface displays a live virtual room identifier and an invitation control; responding to an operation triggering the invitation control and displaying a contact list; and responding to a confirmation operation of selecting participating contacts from the contact list, displaying that a live invitation is sent to each participating contact, and displaying information of the contacts who join the live virtual room according to the live invitation.
In a possible embodiment, the apparatus further comprises a determination module, wherein:
the display module is further used for displaying a configuration parameter selection interface in response to a parameter setting operation for the live virtual room; the configuration parameter selection interface comprises a parameter selection control for selecting each configuration parameter, and the configuration parameters comprise one or both of background music and a queue arrangement mode;
the determining module is used for responding to the triggering operation aiming at the parameter selection control and determining the selected configuration parameters;
the display module is further configured to control display of the live portrait images of the user accounts in the live virtual room according to the selected configuration parameters when the live portrait images of the current account and the live portrait images of the other user accounts are arranged and displayed in the background image of the live virtual room.
In a possible embodiment, an example video playing selection control is further disposed in the configuration parameter selection interface or the live virtual room, and the display module is further configured to:
and responding to the triggering operation of the example video playing selection control, showing an example window in the live virtual room, and playing an example video in the example window.
In one possible embodiment, when the selected configuration parameters comprise a queue arrangement mode:
the display module is also used for responding to the operation aiming at the queue arrangement mode and displaying each display position included in the queue arrangement mode;
the determining module is further used for determining the corresponding display position of each user account in the live virtual room according to the display position selected by each user account;
the display module is specifically configured to display the portrait live broadcast image of each user account on a corresponding display position according to the display position corresponding to each user account in the live broadcast virtual room.
In a possible embodiment, the determining module is specifically configured to:
receiving, from the server, the display positions corresponding to the other user accounts that have joined the live virtual room, and obtaining the display position corresponding to the current user account in response to a display position selection operation; or, alternatively,
and determining the corresponding display position of each user account in the live virtual room according to the sequence of adding each user account into the live virtual room.
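As a sketch of the second option, assuming join timestamps are available (the values below are illustrative), earlier joiners can simply receive lower-numbered display positions:

```python
# Assign display positions by the order in which accounts joined the room.
# join_times maps each user account to its (illustrative) join timestamp.
join_times = {"user_a": 1.0, "user_b": 2.5, "user_c": 1.8}

positions = {account: slot
             for slot, account in enumerate(sorted(join_times, key=join_times.get))}
print(positions)  # {'user_a': 0, 'user_c': 1, 'user_b': 2}
```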
In a possible embodiment, the third obtaining module is specifically configured to:
performing video coding on a person video composed of the portrait live images to obtain video data;
sending the video data to a server; and receiving a video data packet sent by the server, and obtaining from the received video data packet the portrait live image of each user account in the live virtual room.
In a possible embodiment, the apparatus further includes a prompt module, where the prompt module is specifically configured to: if it is detected that no portrait exists in the live image, send out prompt information.
In one possible embodiment, the video data further comprises time information for each person video frame; the display module is specifically configured to:
synchronize the portrait live images corresponding to the user accounts according to the display position corresponding to each user account in the live virtual room and the time information of the portrait live images in the video data packet, and overlay the portrait live images corresponding to the user accounts on the background image of the live virtual room.
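A sketch of this synchronization, assuming (purely for illustration) a packet that carries, per account, a timestamp-sorted list of portrait frames: for a given playback time, pick each account's latest frame not newer than that time and place it at the account's display position.

```python
from bisect import bisect_right

# Illustrative packet: per-account (timestamp, portrait_frame) lists, sorted by time.
packet = {
    "user_a": [(0.00, "A0"), (0.04, "A1"), (0.08, "A2")],
    "user_b": [(0.00, "B0"), (0.05, "B1"), (0.09, "B2")],
}
positions = {"user_a": 0, "user_b": 1}

def frame_at(frames, t):
    """Latest frame whose time information is not later than playback time t."""
    i = bisect_right([ts for ts, _ in frames], t) - 1
    return frames[max(i, 0)][1]

def compose(t):
    """Overlay each account's time-matched portrait frame on the background."""
    canvas = {"background": "room_background.png"}
    for account, frames in packet.items():
        canvas[positions[account]] = frame_at(frames, t)
    return canvas

print(compose(0.05))  # {'background': 'room_background.png', 0: 'A1', 1: 'B1'}
```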
In one possible embodiment, the display module is further configured to: in response to an ending operation for the live video playing, or when the live video playing time reaches a preset duration, display in the live virtual room the scoring result of each user account during the live video playing.
In a possible embodiment, the scoring result of each user account in the process of playing the live video is obtained by any one of the following methods:
receiving the scoring result of each user account from a server; or, alternatively,
determining the grade of each user account at each moment according to the matching degree between the live image of the portrait of each user account at each moment and the video frame of the target character at the corresponding moment in the selected example video;
and obtaining a grading result of each user account in the video live broadcast process according to the grading of each user account at each moment.
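A sketch of the second option, where matching_degree() is a stand-in for whatever image or pose comparison the implementation actually uses; the disclosure only requires some per-moment matching degree aggregated into one result.

```python
def matching_degree(portrait_frame, example_frame):
    # Placeholder comparison; a real system might compare extracted poses here.
    return 1.0 if portrait_frame == example_frame else 0.5

def score_account(portrait_frames, example_frames):
    """Per-moment scores aggregated into one scoring result (here: the mean)."""
    per_moment = [matching_degree(p, e)
                  for p, e in zip(portrait_frames, example_frames)]
    return sum(per_moment) / len(per_moment)

print(score_account(["up", "down", "left"], ["up", "down", "right"]))  # ~0.83
```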
An embodiment of the present application provides a computer device, including:
at least one processor, and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the at least one processor implementing the method of any one of the aspects by executing the instructions stored by the memory.
Embodiments of the present application provide a storage medium storing computer instructions, which when executed on a computer, cause the computer to perform the method according to any one of the aspects.
Due to the adoption of the technical scheme, the embodiment of the application has at least the following technical effects:
in the embodiment of the application, during the live broadcast the client can display the portrait live images separated from the live images corresponding to the accounts, so that users participating in the live broadcast can view the portrait live images corresponding to all accounts in the live virtual room, which improves the real-time performance and the degree of interaction. Moreover, not the complete live images but only the portrait live images extracted from them need to be transmitted among the clients and displayed, which reduces the amount of data transmitted during the live broadcast and improves live broadcast efficiency. In addition, during the live broadcast the background of the environment in which a participating user is located need not be displayed; only the portrait live image corresponding to the user is displayed, which improves the overall display effect of the live broadcast.
Drawings
Fig. 1 is a schematic view of an application scenario of a live video playing method provided in an embodiment of the present application;
fig. 2 is a diagram of an interaction process between devices in fig. 1 according to an embodiment of the present disclosure;
FIG. 3 is an exemplary diagram of a main interface of a client provided in an embodiment of the present application;
FIG. 4 is a diagram illustrating an example process for determining a target contact according to an embodiment of the present application;
fig. 5 is an exemplary diagram of a live invited interface provided in an embodiment of the present application;
fig. 6 is a diagram illustrating an example of a process for determining to join a live virtual room according to an embodiment of the present application;
fig. 7 is an exemplary diagram of an updated live invitation interface provided in an embodiment of the present application;
FIG. 8 is an exemplary diagram of a configuration parameter selection interface provided by an embodiment of the present application;
FIG. 9 is a diagram illustrating an example process for determining a display position according to an embodiment of the present application;
fig. 10 is a flowchart of generating video data according to an embodiment of the present application;
fig. 11 is a diagram illustrating an example of a process for obtaining video data according to an embodiment of the present application;
fig. 12 is a first flowchart of displaying a live video according to an embodiment of the present application;
fig. 13 is a second flowchart for displaying a live video according to an embodiment of the present application;
fig. 14 is a diagram illustrating an example process of displaying a live video according to an embodiment of the present application;
FIG. 15 is a flow chart of scoring results provided in an embodiment of the present application;
FIG. 16 is an exemplary diagram illustrating scoring results provided by an embodiment of the present application;
fig. 17 is a flowchart of a live video playing method according to an embodiment of the present application;
fig. 18 is a schematic structural diagram of a live video playing device according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to better understand the technical solutions provided by the embodiments of the present application, the following detailed description is made with reference to the drawings and specific embodiments.
To facilitate better understanding of the technical solutions of the present application for those skilled in the art, the following terms related to the present application are introduced.
Artificial Intelligence (AI): the method is a theory, method, technology and application system for simulating, extending and expanding human intelligence by using a digital computer or a machine controlled by the digital computer, sensing the environment, acquiring knowledge and obtaining the best result by using the knowledge. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
Artificial intelligence is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technology. The basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technology mainly comprises computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer Vision technology (Computer Vision, CV): computer vision is a science that studies how to make a machine "see"; more specifically, it uses a camera and a computer instead of human eyes to identify, track and measure targets and perform other machine vision tasks, with further graphic processing so that the result becomes an image more suitable for human eyes to observe or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technology generally includes image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, as well as common biometric technologies such as face recognition and fingerprint recognition. The technologies related to image recognition used in the embodiments of the present application are described below.
Open Source Computer Vision Library (OpenCV): a cross-platform computer vision library. OpenCV was initiated and developed by Intel Corporation and is released under a BSD license, free for commercial and research use. OpenCV may be used to develop real-time image processing, computer vision and pattern recognition programs. The separation of the person video from the live video in the embodiments of the present application may be implemented with OpenCV, as described in detail below.
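The disclosure does not specify which OpenCV routine performs the separation; the following minimal sketch uses background subtraction as one illustrative way to pull a portrait out of a live camera frame, and is an assumption rather than the claimed method.

```python
# Minimal OpenCV sketch: separate the portrait from live camera frames by
# background subtraction (illustrative only, not the method of the disclosure).
import cv2

cap = cv2.VideoCapture(0)                          # live camera
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                 # foreground (person) mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,  # suppress small noise blobs
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    portrait = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow("portrait live image", portrait)
    if cv2.waitKey(1) == 27:                       # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```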
Application: generally refers to a program that can be used to implement certain functions; in the embodiments of the present application it refers to an application that can implement the interactive control functions described herein. The client is the carrier of the application in each terminal, and may be a web-page client, a client pre-installed in the terminal, or a client embedded in a third-party application.
Live virtual room: a created virtual room; each live virtual room corresponds to a room identifier. Each user account can join the live virtual room, and the user of any account in the live virtual room can see the live image of every user in the room. After a user account joins the live virtual room, the client logged in with that account is regarded as having joined the live virtual room.
Background music: music played during the live broadcast; for ease of distinction, the music determined to be played during the live broadcast may also be referred to as the target background music. The target background music may be a default or determined according to a user operation.
Example video: an example video played during the live broadcast; for ease of distinction, the example video determined to be played during the live broadcast may also be referred to as the target example video. The target example video may be a default or determined according to a user operation.
Queue arrangement mode: the relative positions of the display positions in a live virtual room, specifically including the number of display positions and the position of each display position relative to the others.
Portrait live image: a whole-body image of a person obtained from a live image, i.e. the portrait live image comprises not only the face of the person but also the body and so on.
The following is a description of the design concept of the embodiments of the present application.
In the related art, each device separately collects its user's dance video and sends it to a server; after the dancing is finished, any user can share his or her dance video with other users through the server. However, in this way the dance videos cannot be shared in real time while the users are dancing, and the degree of interaction between users is low.
In view of this, the embodiment of the present application provides a live video playing method in which each client can display, in the background image of the live virtual room, the portrait live image of every user account that has joined the room. The scheme thus achieves real-time sharing of each user's portrait live image, improving both the real-time performance of live video playing and the degree of interaction between users. Moreover, each client displays the portrait live image of each user in the live virtual room rather than the live image directly captured by a client; since the transmission volume of a full live image is inevitably greater than that of the portrait live image extracted from it, displaying only the portrait live images reduces the amount of data transmitted during live video playing. Displaying only the portrait live image corresponding to each user account also avoids showing unnecessary background elements of the live image, improving the overall display effect of the live video.
Based on the above design concept, an application scenario of the live video playing method according to the embodiment of the present application is introduced below.
Referring to fig. 1, a scene of a live video playing method is shown, where the scene includes a plurality of terminals and a server that can communicate with each other through a communication network. Each terminal corresponds to one user, a client is arranged in each terminal, and each client correspondingly logs in a corresponding account. Each terminal may be configured with a camera, which may be part of the terminal or may be located independently of the terminal, e.g., a first terminal 110-1 configured with a first camera 112-1, a second terminal 110-2 configured with a second camera 112-2, and a third terminal 110-3 configured with a third camera 112-3. The number of the terminals and the servers may be arbitrary, and the present application is not particularly limited. The specific form of the client can refer to the content discussed above, and is not described herein again.
For convenience of description, the users corresponding to the first terminal 110-1, the second terminal 110-2 and the third terminal 110-3 are referred to as the current user A, the second user B and the third user C, respectively; the clients corresponding to the first terminal 110-1, the second terminal 110-2 and the third terminal 110-3 are referred to as the first client 111-1, the second client 111-2 and the third client 111-3, respectively. The accounts logged in by the first client 111-1, the second client 111-2 and the third client 111-3 are called the current account, the second account and the third account, respectively.
The terminal can be, for example, a mobile phone, a personal computer, a portable tablet computer, a smart television, a television box of a smart television, or a game device. The types of the first terminal 110-1, the second terminal 110-2, and the third terminal 110-3 in fig. 1 may be the same or different. Referring to fig. 1, the first terminal 110-1 is, for example, a smart television, the second terminal 110-2 is, for example, a personal computer, and the third terminal 110-3 is, for example, a mobile phone. The server 120 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, network service, cloud communication, middleware service, domain name service, security service, content distribution, big data and an artificial intelligence platform.
Before the live video is played, the current client creates a live virtual room according to the operation of the current user, and other users can join in the live virtual room through the client. The process of creating and joining a live virtual room will be described below.
When live video playing starts, the camera corresponding to each client collects the corresponding live image; the client captures the portrait live image in the live image in real time, the portrait live images at successive moments form a person video, the person video is further packaged into video data, and the video data is sent to the server 120. In this way, the server 120 obtains the video data corresponding to all accounts in the live virtual room. The generation of the video data will be described below.
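As a sketch of this packaging step, each portrait frame can be paired with its capture time so that portraits from different accounts can later be synchronized. The JSON/base64 envelope below is an assumption for illustration; a real client would use a proper video codec.

```python
import base64, json, time

def make_video_data(account_id, portrait_frames):
    """Pack (capture_time, jpeg_bytes) portrait frames into an upload payload."""
    return json.dumps({
        "account": account_id,
        "frames": [{"t": t, "jpeg": base64.b64encode(jpeg).decode("ascii")}
                   for t, jpeg in portrait_frames],
    })

frames = [(time.time(), b"\xff\xd8...jpeg bytes...")]
payload = make_video_data("current_user_a", frames)  # sent to the server next
```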
In a first possible scenario: the server 120 receives the video data corresponding to all accounts in the live virtual room, encapsulates the video data into video data packets, and then sends the video data packets to the client corresponding to each user account in the live virtual room. After receiving the video data packet, a client renders and plays the live video according to the video data corresponding to all clients in the live virtual room. Playing the live video according to the video data will be described below.
In a second possible scenario: after receiving the video data sent by the clients of all user accounts in the live virtual room, the server 120 performs rendering and synthesis based on the video data to obtain the live video, then sends the related data of the live video to each client, and each client can receive and play the live video.
Further, after the live broadcast is finished, the server 120 may generate a scoring result corresponding to each client according to video data of each client in the live broadcast video playing process, and issue the scoring result corresponding to each client, so that each client displays the scoring result of the live broadcast.
The following describes an example implementation architecture of a client according to an embodiment of the present application, with reference to an application scenario discussed in fig. 1:
The client can be built on the Model-View-ViewModel (MVVM) pattern and can be divided into a presentation layer, a business logic layer and a data layer.
Presentation layer: used for displaying various interfaces, for example displaying the live video or the target queue arrangement mode.
Business logic layer: used for executing various information processing, for example responding to various user operations and executing corresponding events, such as recording the time at which a user joins the live virtual room, or scoring the user during live video playing.
Data layer: used for transmitting various types of data, such as acquiring live images and transmitting the portrait live images obtained from them. The specific way the client implements the various functions will be described in detail below.
Based on the discussion of fig. 1, in a first possible case, the following describes a live video playing method in an embodiment of the present application with reference to an interaction process between devices in fig. 1 shown in fig. 2:
s201, the first client 111-1 responds to the room creating operation and generates a room creating request.
When the current user wants to start a live broadcast, the first client 111-1 may be started, and the first client 111-1 displays the main interface in response to the start operation. The main interface may include a create room control, and may include other operation controls besides it, which is not limited herein.
The trigger operation on the create room control may be, for example, a manual click on the control, or a click performed by moving the focus of a remote controller. In response to the trigger operation, the first client 111-1 generates a room creation request for requesting creation of a live virtual room. The room creation request may carry the account identifier corresponding to the first client 111-1.
In one possible embodiment, when performing the room creating operation, the current user may also set the preset number of accounts allowed to join the live virtual room, and the first client 111-1 determines this preset number in response to the input operation. Further, the preset number may be carried in the room creation request.
For example, referring to FIG. 3, which is an exemplary diagram of a main interface of the first client 111-1, the first client 111-1 generates a room creation request in response to a current user clicking on the create room control 300 shown in FIG. 3.
S202, the first client 111-1 sends a room creation request to the server 120.
S203, the server 120 generates live virtual room information.
After receiving the room creation request, the server 120 determines that the first client 111-1 needs to create a live virtual room, and therefore generates live virtual room information for the first client 111-1. The live virtual room information includes a unique live virtual room identifier, which identifies the live virtual room.
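A server-side sketch of this step follows; the uuid4-based identifier scheme is an assumption, since the disclosure only requires that the identifier be unique.

```python
import uuid

def create_live_virtual_room(creator_account):
    """Generate live virtual room information with a unique room identifier."""
    return {
        "room_id": uuid.uuid4().hex,   # unique live virtual room identifier
        "members": [creator_account],  # the creator joins by default
    }

room = create_live_virtual_room("current_user_a")
```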
S204, the server 120 sends the live virtual room information to the first client 111-1.
S205, the first client 111-1 displays a live invitation interface.
After receiving the live virtual room information, the first client 111-1 generates, according to the live virtual room identifier, a live invitation interface used to invite other users to join the live virtual room. The live invitation interface can also display the live virtual room identifier so that the current user can learn it. In addition, the live invitation interface includes an invitation control, and can further include the account identifier of the current user, who joins the live virtual room by default, as well as a cancel-room-creation control.
For example, after the current user clicks on create room control 300 as shown in fig. 3, a live invitation interface is displayed as shown in (1) in fig. 4, which includes an invitation control 410, an account identification 420 of the current user, a cancel create room control 430, and a live virtual room identification 440.
S206, the first client 111-1 responds to the triggering operation aiming at the invitation control and displays the contact list on the live invitation interface.
After the live invitation interface is displayed, the current user may perform a trigger operation on an invitation control in the live invitation interface, where the trigger operation may be a manual click operation or a click operation performed by moving a focus of a remote controller. The first client 111-1 displays the contact list in response to a triggering operation for the invitation control. The contact list includes at least one contact, each contact being associated with the current user, e.g., being a friend of the current user, etc. It should be noted that each contact corresponds to an account, and after the current user selects a contact, the client may obtain the account corresponding to the contact according to the contact selected by the current user.
The contact list may be a contact list associated with the current user in the first client 111-1, or may be a contact list acquired by the first client 111-1 from another application, for example, a contact list acquired from an instant messaging application through an interface.
In another case, the first client 111-1 may cancel the creation of the live virtual room in response to a triggering operation of the current user with respect to the cancel creation room control.
S207, the first client 111-1 responds to the contact selection operation for the contact list to generate an invitation request.
The first client 111-1 may determine, in response to a contact selection operation of the current user on one or more contacts in the contact list, the target contacts that the current user needs to invite, and generate an invitation request. The invitation request carries the account identifiers of the target contacts and is used to invite them to join the live virtual room.
S208, the first client 111-1 sends an invitation request to each client through the server 120.
After the current user selects the contact, for example, the current user selects the second user and the third user, the first client 111-1 determines to send an invitation request to the account corresponding to the second user and the account corresponding to the third user, where the invitation request may carry the account id corresponding to the second user and the account id corresponding to the third user. After receiving the invitation request, the server 120 sends the invitation request to the second client 111-2 corresponding to the second user and the third client 111-3 corresponding to the third user according to the account id carried in the invitation request.
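A sketch of this forwarding step on the server, where clients_by_account and send stand in for the server's connection registry and transport; both are assumptions, since the disclosure does not detail them.

```python
def forward_invitation(invitation, clients_by_account, send):
    """Forward an invitation request to the clients of the carried account ids."""
    for account_id in invitation["invitees"]:
        client = clients_by_account.get(account_id)
        if client is not None:         # deliver only to currently connected clients
            send(client, {"type": "live_invite", "room_id": invitation["room_id"]})

forward_invitation({"invitees": ["user_b", "user_c"], "room_id": "room_42"},
                   {"user_b": "conn_b", "user_c": "conn_c"},
                   lambda conn, msg: print(conn, msg))
```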
Alternatively, the first client 111-1 generates the invitation request in response to a triggering operation performed on the invitation control. The invitation request carries the identification of the live virtual room, and the invitation request is used for inviting the user to join the live virtual room. After generating the invitation request, the first client 111-1 displays a contact list, the current user may further select to send the invitation request to one or more contacts, the first client 111-1 determines a target contact that the current user wants to invite in response to a contact selection operation of the current user, and the first client 111-1 may send the invitation request to a client corresponding to the target contact.
As an embodiment, S206-S208 are optional steps. For example, after the first client 111-1 generates the invitation request, the current user need not select contacts; the server 120 may broadcast the invitation request to the other clients, or selectively send it to clients that are currently logged in online and idle, where idle means the client has not joined another live virtual room.
For example, with continued reference to fig. 4, when the current user clicks the invite control 410 shown in (1) of fig. 4, the first client 111-1 displays the contact list 450 as shown in (2) of fig. 4, the interface further includes a scroll bar 460, and the first client 111-1 responds to the current user's pull-down operation on the scroll bar 460 to display more contacts. The current user may select one or more contacts in the contact list 450, and the first client 111-1 determines the target contact that the current user wants to invite according to the contact selection operation of the current user.
And S209, displaying a live broadcast invited interface by each client.
For example, after the server 120 sends the invitation request to the second client 111-2 and the third client 111-3, the second client 111-2 and the third client 111-3 respectively display the live invited interfaces according to the invitation request. The live invited interface is used for prompting the user whether to receive the live invitation. The live invited interface comprises a live virtual room identifier, a confirmation control for confirming joining in the live virtual room, and a cancellation control for confirming not joining in the live virtual room.
For example, referring to fig. 5, a live invited interface displayed by the second client 111-2 is shown; the interface includes prompt information, a confirmation control 510 and a denial control 520. The prompt information carries the live virtual room identifier.
S210, the second client 111-2 responds to the invitation accepting operation and generates joining room information.
For example, the second user may perform an invitation accepting operation on the live broadcast invited interface, specifically, the second user clicks a confirmation control in the live broadcast invited interface, and the second client 111-2 generates room joining information according to the invitation accepting operation, where the room joining information is used to indicate that the second user confirms to join the live broadcast virtual room.
If, for example, the second user is busy or does not want to join the live virtual room, the second user may perform an invitation rejection operation on the live invited interface, specifically, for example, clicking the denial control in the interface; the second client 111-2 then determines from the rejection operation that the second user does not participate in the live broadcast.
For example, referring to fig. 5 again, when the user clicks the confirmation control 510 in fig. 5, which is equivalent to performing an invitation receiving operation, the second client 111-2 generates room joining information according to the invitation receiving operation.
As an embodiment, the steps S207 to S210 are optional.
In another possible embodiment, after the current user obtains the live virtual room information, the live virtual room identifier may be directly notified to other users in any manner, the other users may directly perform live virtual room identifier input operation in the corresponding clients, and the clients of the other users respond to the live virtual room identifier input operation to generate room joining information.
For example, referring to (1) in fig. 6, the second client 111-2 displays the live virtual room identifier input box 620 shown in (2) in fig. 6 in response to the second user clicking the join room control 610, and generates joining room information in response to the live virtual room identifier entered by the second user in the input box 620.
S211, the second client 111-2 may transmit the joined room information to the first client 111-1 through the server 120.
S212, the third client 111-3 responds to the invitation accepting operation and generates joining room information.
Similarly, the third client 111-3 may generate the room joining information in response to the invitation accepting operation performed by the third user. The content of the invitation operation and the room joining information may refer to the content discussed above, and will not be described herein again.
S213, the third client 111-3 may transmit the joining room information to the first client 111-1 through the server 120.
For ease of description, a contact that confirms participation in a live broadcast is referred to as a participating contact, which includes the current user creating the live virtual room, as well as other contacts that confirm joining the live virtual room.
S214, the first client 111-1 updates the live invitation interface according to the room joining information.
The first client 111-1 may obtain, through the server 120, the joining room information of the other clients and update the live invitation interface accordingly.
For example, the first client 111-1 updates the live invitation interface after receiving the joining room information sent by the second client 111-2 and the third client 111-3. Specifically, for example, an account id corresponding to the second user who has confirmed to join the live virtual room and an account id corresponding to the third user are displayed in the live invitation interface. The account id is, for example, an account or a corresponding avatar of the user.
For example, please continue to refer to (1) in fig. 4, after the current user invites the second user and the third user, the first client 111-1 determines that the second user and the third user have confirmed to join the live virtual room, the first client 111-1 updates the live invitation interface shown in (1) in fig. 4 to the live invitation interface shown in fig. 7, and the updated live invitation interface newly displays the account id B and the account id C of the second user and the third user. In addition, the live invitation interface may further include a stop invitation control 710, where the stop invitation control 710 is configured to instruct to stop continuing to invite the user to join the live virtual room.
S215, the first client 111-1 generates room creation success information.
The first client 111-1 may generate the room creation success information upon determining that a preset condition is satisfied. The preset condition is, for example, that the current user performs a stop-invitation operation for the live virtual room, or that the number of accounts that have joined the live virtual room reaches a preset number, or that a stop-invitation instruction is received from the server 120. For example, after receiving stop-invitation information sent by another user's client, the server 120 may send a stop-invitation instruction to all clients. The preset number may be, for example, the maximum number of accounts the live virtual room can accommodate, or a number set by the current user when creating the live virtual room.
After determining that the preset condition is satisfied, the first client 111-1 generates room creation success information. The room creation success information is used for representing that the room creation is successful, and includes account identifiers corresponding to all users who are confirmed to join the live broadcast virtual room. Alternatively, the server 120 may generate the room creation success information after determining that the preset condition is satisfied, and the preset condition may refer to the content discussed above and will not be described herein.
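A sketch of this preset-condition check; the three disjuncts mirror the examples above, and the function shape is an assumption for illustration.

```python
def room_creation_complete(room, preset_count, stop_requested, server_stop):
    """Preset condition: creator stopped inviting, the room is full, or the
    server issued a stop-invitation instruction."""
    return (stop_requested
            or len(room["members"]) >= preset_count
            or server_stop)

print(room_creation_complete({"members": ["a", "b", "c"]}, 3, False, False))  # True
```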
It should be noted that, in fig. 2, the first client creates a live virtual room and generates the room creation success information as an example, and in fact, any client may create a live virtual room and generate the room creation success information and the like in any manner described above.
In a possible embodiment, after the room creation success information is generated, or after the current user creates the live virtual room, interaction between users in the live virtual room may be performed, for example, each client may share the currently captured live image with other clients in the live virtual room through the server 120, so as to implement an earlier interaction process.
S216, the first client 111-1 sends the room creation success information to other clients participating in the live virtual room through the server 120.
For example, the first client 111-1 may transmit the room creation success information to the second client 111-2 and the third client 111-3 through the server 120.
And S217, each client displays a configuration parameter selection interface.
After receiving the room creation success information, each client can display a configuration parameter selection interface; a parameter setting control is provided in the configuration parameter selection interface or the live virtual room. Alternatively, a parameter setting control is arranged on the live invitation interface or the main interface, and any client displays a configuration parameter selection interface in response to the user's trigger operation on the parameter setting control, i.e. the parameter setting operation.
The configuration parameter selection interface comprises a parameter selection control for selecting various configuration parameters. The parameter selection controls include one or more of a background music selection control, an example video selection control, and a queue arrangement mode selection control. The parameter selection controls may also include an example video play selection control.
Specifically, the background music selection control is used for selecting background music played in the process of playing the live video; the example video selection control is used for selecting an example video in the process of playing the live video; the queue arrangement mode selection control is used for selecting the arrangement of the relative display positions of the live images of the portraits in the process of playing the live video; the example video play selection control is used for selecting whether to show the example window in the process of playing the live video. An example video may be presented in an example window.
For example, with continued reference to fig. 7, after the current user clicks the stop invitation control 710 shown in fig. 7, the first client 111-1 generates the room creation success information according to the operation and sends it to the other clients through the server 120; each client may then display the configuration parameter selection interface illustrated in fig. 8, which includes a background music selection control 810, an example video selection control 820, a queue arrangement mode selection control 830, and an example video play selection control 840.
S218, the third client 111-3 obtains the target configuration parameters according to the triggering operation of the parameter selection control.
The parameter selection control specifically includes different controls, and the obtained target configuration parameters are also different, and the manner of obtaining the target configuration parameters is exemplified below by taking the third client 111-3 as an example.
The third client 111-3 may perform at least one of the following to obtain the corresponding target configuration parameters:
a1: the third client 111-3 displays a selectable plurality of background music in response to a triggering operation of the third user with respect to the background music selection control. The third client 111-3 may obtain the target background music in response to a background music selection operation for the third user with respect to any one of the plurality of background music, and play the target background music during the playing of the live video.
In one possible embodiment, the target background music may be associated with a corresponding background image, and the background image associated with the target background music may be an image or a background video. After the client determines the target background music, the background image associated with the target background music may be obtained from the server 120. Or, if there is no associated background image for some target background music, the client may use the default background image as the background image in the process of playing the live video.
In one possible embodiment, the target background music may be associated with a corresponding target example video, and after the client determines the target background music, the client determines the example video associated with the target background music as the target example video without selecting the target example video.
A2: the third client 111-3 may display the selectable plurality of example videos in response to a triggering operation of the third user with respect to the example video selection control, and the third client 111-3 may obtain the target example video in response to an example video selection operation of the third user with respect to any of the plurality of example videos.
In one possible embodiment, the target example video may be associated with corresponding target background music, and after the client determines the target example video, the client may determine the background music associated with the target example video as the target background music without selecting the target background music.
In one possible embodiment, the target example video may be associated with a corresponding background image, which may be an image or a background video. After the client determines the target example video, the background image associated with it may be obtained from the server 120. Or, if a target example video has no associated background image, the default background image can be used while the live video is playing.
Because different target example videos are differently difficult for users to imitate (for example, the actions involved in some example videos are difficult), after the client determines the target example video, it can display at least one difficulty mode associated with the target example video, and determine the target difficulty mode for the live video playing in response to the user's selection of any difficulty mode.
A3: the third client 111-3 responds to the triggering operation of the third user for the queue arrangement mode, displays a plurality of selectable queue arrangement modes, and the third client 111-3 responds to the selection operation of the third user for any queue arrangement mode in the plurality of queue arrangement modes to obtain a target queue arrangement mode.
In one possible embodiment, the client may select a target queue arrangement mode based on the number of accounts currently confirmed to join the live virtual room: for example, a mode whose number of display positions equals the number of accounts, or one with more display positions than accounts. Because one account's camera may capture more than one person, choosing a mode with more display positions than accounts helps ensure that every portrait live image gets a corresponding display position, as in the sketch below.
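The following is a minimal sketch of this auto-selection rule, assuming a small fixed list of formations; the mode names, field names, and counts are illustrative assumptions rather than anything mandated by this embodiment:

```python
# A minimal sketch of the formation auto-selection rule above. The
# formation list and names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class QueueArrangementMode:
    name: str
    display_positions: int  # number of display positions in the formation

AVAILABLE_MODES = [
    QueueArrangementMode("duo", 2),
    QueueArrangementMode("trio", 3),
    QueueArrangementMode("quintet", 5),
]

def pick_mode(num_portraits: int) -> QueueArrangementMode:
    """Pick the smallest formation that can seat every portrait.

    num_portraits may exceed the number of accounts, because one
    account's camera can capture more than one person.
    """
    for mode in sorted(AVAILABLE_MODES, key=lambda m: m.display_positions):
        if mode.display_positions >= num_portraits:
            return mode
    return max(AVAILABLE_MODES, key=lambda m: m.display_positions)

print(pick_mode(3).name)  # -> "trio"
```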
In one possible embodiment:
For A1 to A3 above, once any client that has joined the live virtual room determines a target configuration parameter, the other clients need not repeat the selection. Other clients may nevertheless change any target configuration parameter among A1 to A3 that has already been determined; the procedure for changing it is similar to A1 to A3 and is not repeated here.
In another possible embodiment:
the user corresponding to each account may separately select one or both of a target example video and target background music for live video playback. Each client determines its own configuration parameters in response to the corresponding selection operations; that is, each client may perform the processes of A1 and A2 independently.
A4: in response to a triggering operation of the third user on the play-example-video selection control, the third client 111-3 displays the selectable options of playing or not playing the example window. In response to the third user selecting to play the example window, the third client 111-3 determines to display an example window while the portrait live images are displayed, and plays the target example video in that window.
Of course, in response to an operation of the third user selecting not to play the example window, the third client may determine not to display the example window while the portrait live images are displayed.
Similarly, the client corresponding to each user account in the live virtual room may execute the process in A4.
It should be noted that S217 to S218 are optional steps. For example, if no user joining the live virtual room selects configuration parameters, each client may display according to default configuration parameters while the live video is playing.
For example, with continued reference to fig. 8, after the third user clicks the queue arrangement mode selection control 830, the third client displays a plurality of queue arrangement modes 910 as shown in (1) in fig. 9 and, in response to the third user selecting the three-position queue arrangement mode, determines the three-position formation in (1) in fig. 9 as the target queue arrangement mode. The interface in fig. 9 also includes a preview view 920 showing the target queue arrangement mode, as well as a next control 930 and a return control 940. The third client 111-3 may display the next selectable configuration parameter in response to an operation on the next control 930, or redisplay the previous interface in response to an operation on the return control 940.
S218 is described above using the third client as an example; in practice, S218 may be executed by any client.
S219, the third client 111-3 sends the target configuration parameters to other clients joining the live virtual room through the server 120.
For example, the third client 111-3 may send the target configuration parameters to the first client 111-1 and the second client 111-2 through the server 120, so that the other clients obtain the determined target configuration parameters.
Alternatively, in another possible embodiment, S219 is an optional step: each client only records the target configuration parameters selected by its own user and need not notify the other clients.
Further, after any client determines the selected target queue arrangement mode, it may display that mode. Each user can then pick a display position in the target queue arrangement mode through their own client, and the client determines the user's target display position from the display position selection operation. For example, a user may move the focus of a remote controller to a target display position in the formation, or click a display position directly. While positions are being chosen, the client may synchronize the user's selected target display position to the server 120 in real time, and the server 120 synchronizes it to the other clients so that no two users select the same target display position; a server-side sketch of this bookkeeping follows.
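A hedged sketch of that server-side bookkeeping, assuming accounts are identified by strings and positions by integer indices; the class and method names are hypothetical:

```python
# Sketch: the server records each account's chosen display position and
# rejects positions that are already taken, so no two users can end up
# on the same display position.
from typing import Dict

class PositionRegistry:
    def __init__(self, num_positions: int):
        self.num_positions = num_positions
        self.taken: Dict[int, str] = {}  # position index -> account id

    def select(self, account: str, position: int) -> bool:
        """Try to reserve a display position; returns False if taken."""
        if not (0 <= position < self.num_positions) or position in self.taken:
            return False
        # Drop the account's previous choice, if any, so a user can move.
        self.taken = {p: a for p, a in self.taken.items() if a != account}
        self.taken[position] = account
        return True  # caller would now broadcast the update to the room
```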
In another possible embodiment, users may skip selecting a position, and the client determines each user account's display position in the target queue arrangement mode according to the order in which the accounts joined the live virtual room.
In another possible embodiment, the server 120 may record the order in which each account joined the live virtual room and determine the display position in the target queue arrangement mode corresponding to each user account.
After the target display positions corresponding to the accounts are determined, each client can obtain the account corresponding to each display position in the target formation, that is, the display position of the portrait live image corresponding to each user account.
For example, with continued reference to (1) in fig. 9: after the third client 111-3 selects the target formation, it may display the position selection interface shown in (2) in fig. 9, either in response to an operation on the next control in (1) in fig. 9 or in response to a click on the selected target arrangement formation. The third user may select any one of the three display positions 1, 2, and 3 shown in (2) in fig. 9, and the third client 111-3 obtains the target display position in response to the third user's display position selection operation.
S220, the first client 111-1 generates a start instruction in response to an operation of starting live video playback.
A start instruction may be generated when the current user performs the start operation for live video playback; or after all configuration parameters have been selected; or when the current user performs a stop-invitation operation on the live invitation interface; or when the number of contacts that have joined the live virtual room reaches a preset number; or in response to a join operation on a live invitation.
Of course, fig. 2 takes the first client 111-1 generating the start instruction as an example; in practice, any other client may execute S220.
Alternatively, the server 120 may generate a start instruction after determining that all the interactive configuration parameters have been selected, and issue the start instruction to each client.
S221, the first client 111-1 sends a start instruction to the other clients joining the live virtual room through the server 120.
The other clients here are the clients in the live virtual room corresponding to user accounts other than the current account of the first client 111-1.
S222, each client generates video data.
The video data is obtained by video-coding the portrait live images captured by the client corresponding to a user account. Each client in the live virtual room generates its own video data, and every client generates it in the same way. The following describes the process of generating video data, taking the first client 111-1 as an example:
the process of the first client 111-1 obtaining video data is described below with reference to the flowchart shown in fig. 10:
S1001, the first client 111-1 collects live images through the first camera 112-1.
For example, the first camera 112-1 captures live images, and the first client 111-1 acquires them through a camera interface.
Since the current user may not be facing the camera, a live image may contain no portrait live image. When the first client 111-1 determines that no portrait live image exists in the live image, it may issue prompt information asking the current user to adjust their position relative to the client, so that after viewing the prompt the user can move and the first camera 112-1 can capture a live image that contains a portrait live image.
S1002, the first client 111-1 obtains the portrait live broadcast image in the live broadcast image.
Having acquired a live image, the first client 111-1 can process it to obtain the portrait live image in it.
Specifically, the camera interface may feed the live image to an image capture module in the first client 111-1, which captures the portrait live image corresponding to each live image in real time. The image capture module may capture portrait live images at a first preset frame rate, for example 30 frames per second, which means it extracts 30 portrait live images from the live images every second.
For example, the first client 111-1 may extract the portrait live image with OpenCV, removing everything in the live image except the portrait live image. Alternatively, the image capture module may detect the portrait region in the live image and acquire the image inside that region in real time, as in the sketch below.
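A sketch of that capture step using OpenCV's stock HOG person detector; the detector choice, the prompt text, and taking only the first detection are illustrative assumptions, since the embodiment does not fix a particular segmentation method:

```python
# Sketch: grab a frame, detect the portrait region, and crop away the
# background. A real client might use a segmentation model instead.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def grab_portrait(capture: cv2.VideoCapture):
    ok, frame = capture.read()
    if not ok:
        return None
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(rects) == 0:
        # Corresponds to the prompt information described above.
        print("Please adjust your position so the camera can see you")
        return None
    x, y, w, h = rects[0]           # first detected portrait region
    return frame[y:y + h, x:x + w]  # portrait live image, background cropped
```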
S1003, the first client 111-1 carries out video coding on the live image of the portrait to obtain video data.
After obtaining the portrait live image for each moment, the first client 111-1 composes the portrait live images into a character video and video-codes the character video to obtain the video data. The second preset frame rate at which the character video is composed may be anything; this application does not specifically limit it. The second preset frame rate may be less than or equal to the first preset frame rate; for example, if the first client 111-1 captured 30 frames in the current second, it may take every third frame, keeping 10, to compose the character video.
As an embodiment, the first client 111-1 may screen the captured portrait live images for those meeting a specific condition and compose only those into the character video. The specific condition may be, for example, that the sharpness of the portrait live image is higher than a threshold; a sketch of both the down-sampling and this screening follows.
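A minimal sketch of the down-sampling from 30 to 10 frames per second and the sharpness screening, using Laplacian variance as the sharpness measure; the threshold value and the choice of measure are assumptions for illustration:

```python
# Sketch: keep every third captured portrait (30 fps -> 10 fps) and drop
# frames whose Laplacian variance falls below a sharpness threshold.
import cv2

SHARPNESS_THRESHOLD = 100.0  # illustrative value, tuned in practice

def is_sharp(portrait) -> bool:
    gray = cv2.cvtColor(portrait, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() > SHARPNESS_THRESHOLD

def build_character_frames(captured_portraits):
    sampled = captured_portraits[::3]           # second preset frame rate
    return [p for p in sampled if is_sharp(p)]  # screen out blurry frames
```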
In a possible embodiment, a live image captured by the first client 111-1 may contain multiple portraits. The first client 111-1 may then capture each of the portrait live images in the live image at the same moment, obtain a character video for each of the portraits, and video-code the several character videos to obtain the video data. Alternatively, the first client 111-1 may video-code each portrait's live images separately, obtaining video data for each portrait.
Further, while composing the character video, time information can be recorded for each portrait live image in it; the time information may be obtained when the portrait live image is captured or when the character video is composed. For example, the time information of the first portrait live image may be seconds 0 to 1 of the character video.
For example, please refer to fig. 11, which illustrates separating portrait live images from live images. At time t1 the first client 111-1 captures live image one, shown in (1) in fig. 11, and separates from it the portrait live image shown as a in fig. 11. At time t2 it captures live image two, shown in (2) in fig. 11, and separates from it the portrait live image shown as b in fig. 11. The first client 111-1 may video-code the portrait live images a and b in order of capture time to obtain the video data.
Fig. 10 takes the first client 111-1 as the example of generating video data; the other clients may generate their video data in the same way, which is not repeated here.
S223, each client transmits the video data to the server 120.
After obtaining the respective video data, each client may send the obtained video data to the server 120, so that the server 120 obtains the video data sent by all clients in the live virtual room.
S224, the server 120 generates a video data packet.
The server 120 may package the video data of all clients that joined the live virtual room to obtain video data packets. Packaging can be understood as processing the video data of all clients in the live virtual room into one file packet; specifically, for example, the server 120 may construct Real-time Transport Protocol (RTP)/RTP Control Protocol (RTCP) video data packets from the video data, as in the header sketch below.
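A hedged sketch of what building a bare RTP packet looks like, following the fixed header layout of RFC 3550; the payload type, SSRC, and timestamp clock are placeholder assumptions, and a real deployment would normally use an RTP library rather than hand-packing bytes:

```python
# Sketch: wrap an encoded video payload in a 12-byte RTP fixed header.
import struct

def rtp_packet(payload: bytes, seq: int, timestamp: int,
               ssrc: int = 0x1234, payload_type: int = 96) -> bytes:
    version, padding, extension, csrc_count, marker = 2, 0, 0, 0, 0
    byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
    byte1 = (marker << 7) | payload_type
    header = struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc)
    return header + payload
```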
In a possible embodiment, since each client already has the video data of its own user, the server 120 may generate a different video data packet for each client: the packet for a given client contains the video data of every client in the live virtual room except that client itself.
For example, the server 120 may build the video data packet for the first client 111-1 from the video data of the second client 111-2 and the third client 111-3, and send that packet only to the first client 111-1, which relatively reduces the amount of data transmitted; see the fan-out sketch below.
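A small sketch of that per-client fan-out; the dictionary shapes and names are assumptions used only to make the exclusion rule concrete:

```python
# Sketch: each client's packet bundles only the other clients' video
# data, so a client's own stream is never echoed back to it.
def build_packets(room_video_data: dict) -> dict:
    """room_video_data maps client id -> that client's encoded video."""
    return {
        client: {peer: data for peer, data in room_video_data.items()
                 if peer != client}
        for client in room_video_data
    }
```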
S225, the server 120 sends the video data packet to each client.
After obtaining the video data packets, the server 120 sends the video data packets to the respective clients.
As an embodiment, the server 120 and the clients may communicate in real time using Web Real-Time Communication (WebRTC) technology, sending the video data packets to each client in real time and reducing the transmission delay between the server 120 and the clients.
And S226, each client plays the live video according to the video data packet.
Each client generates and displays the live video in the same way. The following takes the first client 111-1 as an example and illustrates how the live video is generated, with reference to the flowchart shown in fig. 12:
S1201, the first client 111-1 decodes the video data packet.
After receiving a video data packet, the first client 111-1 decodes it, thereby obtaining the portrait live image of each client.
S1202, the first client 111-1 renders the decoded portrait live images into the live virtual room.
The first client 111-1 may render the portrait live image for each moment in each character video to its corresponding display position in the live virtual room, according to the display position corresponding to each portrait live image, thereby generating the live video.
Further, since the target background music or the example video may be associated with a background image, when generating the live video the first client 111-1 may overlay the portrait live image for each moment in each character video onto the background image of the live virtual room, at the display position associated with that character video, generating a live video frame for each moment and thereby obtaining the live video.
To more clearly illustrate the process of generating live video in the embodiment of the present application, the following description is made with reference to the flowchart shown in fig. 13:
S1301, the first client 111-1 places each portrait live image according to the display position corresponding to its character video.
S1302, the first client 111-1 determines whether the server has an associated background image.
The first client 111-1 may obtain the background image from the server. If neither the target background music nor the example video is associated with a background image, the server has no associated background image, and the first client 111-1 performs S1303, using a default background image, which may be set by the current user or set by default by the first client 111-1. If the target background music or the example video is associated with a background image, the server has an associated background image, and S1304 is performed: the background image is obtained from the server.
S1305, the first client 111-1 overlays the live portrait image on the background image.
After the background image is obtained: if it is a still image, the portrait live image for each moment is simply overlaid on it; if it is a background video, the portrait live image for each moment is overlaid on the background image for that moment, i.e., the background video frame corresponding to that moment, to obtain the live video frame. A compositing sketch follows.
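A compositing sketch under stated assumptions: each portrait arrives as a BGRA array whose alpha channel marks the separated person, display positions are pixel offsets, and the background is a BGR frame (a still image, or the background video's frame for the current moment); the function and variable names are illustrative:

```python
# Sketch: overlay each account's portrait on the background at its
# display position, using the alpha channel as the person mask.
import numpy as np

def compose_frame(background: np.ndarray, portraits: dict, positions: dict):
    frame = background.copy()
    for account, portrait in portraits.items():
        x, y = positions[account]            # top-left pixel offset
        h, w = portrait.shape[:2]            # assume it fits in the frame
        alpha = portrait[:, :, 3:4] / 255.0  # person mask, shape (h, w, 1)
        region = frame[y:y + h, x:x + w]
        blended = alpha * portrait[:, :, :3] + (1.0 - alpha) * region
        region[:] = blended.astype(frame.dtype)
    return frame
```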
Continuing the example of fig. 11: the background image corresponding to time t1 is shown as a in fig. 14, and the portrait live images of the current user A, the second user B, and the third user C at time t1 are shown as b in fig. 14. According to the display positions associated with each user's portrait live images, the first client 111-1 may overlay the t1 portrait live images on background image a, obtaining the live video frame shown as c in fig. 14.
In a possible embodiment, the live virtual room also includes a play-example-video selection control. While the live video is playing, any user can trigger this control, and the client determines from the trigger operation whether to display an example window in the live virtual room.
For example, any user may trigger the play-example-video selection control; according to the selection made with the control, the client either determines to play the example video during the live broadcast, or determines not to play the example video during live video playback.
Further, to keep the example video and the live video synchronized, the client plays the example video according to the live video's elapsed play time, as in the sketch below.
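A tiny sketch of that synchronization idea: when the example window is opened mid-performance, the example video is sought to the live video's elapsed play time rather than started from zero; the player API here is a hypothetical stand-in:

```python
# Sketch: align the example video with the live timeline on open.
def open_example_window(example_player, live_elapsed_ms: int) -> None:
    example_player.seek(live_elapsed_ms)  # jump to the live position
    example_player.play()
```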
S227, the server 120 generates a scoring result.
The server 120 determines that live video playback has finished when the playback reaches a preset duration, for example the total duration of the target background music or the total duration of the example video. Alternatively, when any user performs an end operation on the live video playback, that user's client determines that playback has finished, generates an end instruction, and sends it to the server 120 or to the other clients.
The server 120 may determine each user's scoring result in the live video from that user's portrait live images at each moment, and send the scoring results of the multiple users to each of their clients, so that each client displays every user's scoring result.
How the server 120 determines each user's scoring result is illustrated by the following example:
when the character video is the dance video corresponding to the user, the scoring of the character video substantially determines the fit degree of the character motion and the standard motion in the character video frame, and the higher the fit degree is, the more standard the dance motion of the user is, the higher the corresponding score is. Therefore, in the embodiment of the present application, the server 120 may determine the score of each user at each time according to the matching degree between the live portrait image of each user at each time and the video frame of the target person at the corresponding time in the example video. Specifically, for example, the server 120 may directly use the matching degree as the score of each time, or multiply the matching degree by a fixed value to obtain the score of each time.
The server 120 may determine the score of each portrait live image in real time while the live video plays, or calculate the scores after playback has finished.
After the scores of each user at each moment of live video playback are obtained, the user's scoring result for the playback is derived from those per-moment scores: for example, they may be summed, or weighted and summed.
As an embodiment, since the difficulty modes selected by different users may be the same or different, the server 120 may also consider the user's target difficulty mode when determining the scoring result, deriving the final result from the target difficulty mode and the user's score. For example, the server 120 may store each difficulty mode with a corresponding weight coefficient in advance, and multiply the weight coefficient of the target difficulty mode by the user's score to obtain the final scoring result, as in the sketch below.
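A minimal sketch of that aggregation, assuming the per-moment scores are summed and the difficulty weights are stored in a table; the weight values are illustrative assumptions:

```python
# Sketch: sum the per-moment scores, then scale by the weight stored
# for the user's chosen difficulty mode.
DIFFICULTY_WEIGHTS = {"easy": 0.8, "normal": 1.0, "hard": 1.2}

def final_score(per_moment_scores, difficulty: str) -> float:
    return DIFFICULTY_WEIGHTS[difficulty] * sum(per_moment_scores)

print(final_score([80, 90, 100], "hard"))  # -> 324.0
```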
To illustrate more clearly how the scoring result is determined, the following walks through the flowchart shown in fig. 15:
S1501, live video playback starts.
S1502, the server 120 obtains a live portrait image.
S1503, the server 120 detects the core key points in the portrait live image.
After obtaining the video data, the server 120 can obtain each user's portrait live image for each moment, and detect and locate the core key points in the portrait live image using artificial intelligence. The core key points are the key points used to locate human body movements, for example, 22 human body key points.
S1504, the server 120 determines the matching degree between the core key points in each portrait live image and the target character video frame at the corresponding moment of the example video.
After the core key points are obtained, they may be compared with the core key points in the target character video frame at the corresponding moment to obtain the score of the character video frame. Specifically, for example, the Euclidean distance between each core key point in the character video frame and its counterpart in the target character video frame is determined and used as the matching measure, as in the sketch below.
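A hedged sketch of that comparison for the 22 core key points. The patent takes Euclidean distance as the matching measure; mapping the mean distance through a decaying exponential so that closer poses score higher is an illustrative choice, not something the text mandates:

```python
# Sketch: score one portrait live image against the target character
# video frame by comparing their 22 core key points.
import numpy as np

def frame_score(user_kps: np.ndarray, target_kps: np.ndarray) -> float:
    """Both inputs have shape (22, 2), in normalized image coordinates."""
    distances = np.linalg.norm(user_kps - target_kps, axis=1)
    return float(100.0 * np.exp(-distances.mean()))  # 100 = perfect overlap
```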
S1505, the server 120 accumulates the scores.
That is, the scores of the user's portrait live images at all moments are accumulated.
S1506, the server 120 determines whether playing the live video is finished.
If the server 120 determines that live video playback has finished, S1507 is executed, i.e., the user's scoring result is obtained. If the server 120 determines that playback has not finished, steps S1502 to S1506 are repeated until the live video ends.
In one possible embodiment, the scoring result includes one or both of each user's score during live video playback and each user's score ranking.
In a possible embodiment, if the display positions selected by the users differ, the movements a user needs to make at a given moment may also differ. Therefore, in this embodiment of the application, when scoring, the score of a portrait live image may be obtained from the portrait live image together with the target character video frame of the example video at the corresponding moment and the corresponding display position.
S228, the server 120 sends the scoring result to each client.
The server 120 may send the scoring result of each user to each client after obtaining the scoring result of each user.
S229, each client displays the scoring result.
Each client receives the scoring results sent by the server 120 and displays each user's scoring result; or the client displays each user's scoring result in response to a score-display operation performed by the user.
In a possible embodiment, the scoring results may be directly presented in the live virtual room, or the scoring results of the respective users may be presented in the live virtual room in the form of a sub-interface.
Referring to fig. 16, the first client 111-1 presents a scoring result 1500, which includes the ranking of each user participating in the live virtual room, with each user's score shown in the form "No.1 A 98 points; No.2 B 97 points; No.3 C 96 points".
In another possible embodiment, S227 to S228 are optional steps. For example, each client may determine the scoring result of its own corresponding user; the way a client determines the scoring result can follow the way the server 120 determines it, which is not repeated here. After obtaining its scoring result, each client can share it with the other clients.
The live video playing method of the embodiment of the present application is described below for the second possible scenario discussed with respect to fig. 1.
Unlike the interaction process discussed with respect to fig. 2, in the second possible scenario the related data of the live video is rendered directly by the server 120.
The server 120 obtains the video data of each client in the live virtual room; for how the video data is obtained and generated, refer to the content discussed with respect to fig. 2, which is not repeated here. The server 120 may render the portrait live image for each moment in each character video at its corresponding display position, thereby generating the related data of the live video, and send that data to each client so that each client receives and directly displays the live video.
To more clearly describe the live video playing method according to the embodiment of the present application, the following description is made with reference to the live video playing method shown in fig. 17:
S1701, the first client 111-1 confirms joining the live virtual room in response to a live virtual room identification input operation.
Fig. 17 takes as an example the current user corresponding to the first client 111-1 joining a live virtual room.
S1702, the first client 111-1 responds to the background music selection operation to obtain the target background music.
S1703, the first client 111-1 confirms whether the display positions are determined in a default order.
If the display positions in the queue arrangement mode are not determined in the default order, S1704 is executed: the target display position is obtained in response to a display position selection operation. If the display positions are determined in the default order, S1705 is executed: the target display position is determined by the order in which the current user joined the live virtual room.
S1706, whether to play the target example video is confirmed according to the selection operation for playing the target example video.
If the user selects to play the target example video, S1707 is executed, i.e., the target example video is played. If the user selects not to play it, S1708 is executed, i.e., the target example video is not played.
S1709, a target difficulty mode is determined in response to a difficulty mode selection operation.
S1710, live video playback starts and the live video is displayed.
S1711, the live video ends and the scoring result is displayed.
The obtaining manner of the scoring result can refer to the content discussed above, and is not described herein again.
Based on the same inventive concept, an embodiment of the present application provides a live video playing device. Referring to fig. 18, the live video playing device includes:
a first obtaining module 1801, configured to obtain a live broadcast image corresponding to a current user account when the current user account joins a live broadcast virtual room;
a second obtaining module 1802, configured to obtain a portrait live broadcast image corresponding to a current account from a live broadcast image;
a third obtaining module 1803, configured to obtain a live portrait image of another user account in a live virtual room;
a display module 1804, configured to arrange and display the live portrait images of the current account and the live portrait images of the other user accounts in the background image of the live virtual room.
In a possible embodiment, the display module 1804 is further configured to:
responding to a room creation operation, displaying a live invitation interface, on which the live virtual room identification and an invitation control are displayed; responding to a triggering operation on the invitation control, displaying a contact list; and responding to a confirmation operation of selecting contacts to participate from the contact list, displaying the live invitations sent to the selected contacts, and displaying information of the contacts that join the live virtual room according to the live invitations.
In a possible embodiment, the apparatus further comprises a determining module 1805, wherein:
the display module 1804 is further configured to respond to a parameter setting operation for the live virtual room and display a configuration parameter selection interface; the configuration parameter selection interface comprises a parameter selection control for selecting each configuration parameter, and the configuration parameters comprise one or both of background music and a queue arrangement mode;
a determining module 1805, configured to determine a selected configuration parameter in response to a triggering operation for the parameter selection control;
the display module 1804 is further configured to control display of live portrait images of user accounts in the live virtual room according to the selected configuration parameters when live portrait images of the current account and live portrait images of other user accounts are arranged and displayed in the background image of the live virtual room.
In a possible embodiment, an example video playing selection control is further disposed in the configuration parameter selection interface or the live virtual room, and the display module 1804 is further configured to:
in response to a triggering operation for the example video play selection control, an example window is presented in the live virtual room and an example video is played in the example window.
In one possible embodiment, when the selected configuration parameter comprises a queue arrangement mode:
a display module 1804, further configured to respond to the operation directed to the queue arrangement mode, and display each display bit included in the queue arrangement mode;
the determining module 1805 is further configured to determine, according to the display position selected by each user account, a corresponding display position of each user account in the live broadcast virtual room;
the display module 1804 is specifically configured to display the portrait live broadcast image of each user account on a corresponding display position according to the display position corresponding to each user account in the live broadcast virtual room.
In a possible embodiment, the determining module 1805 is specifically configured to:
receiving the display positions, sent by the server, corresponding to the other user accounts that have joined the live virtual room, and obtaining the display position corresponding to the current user account in response to a display position selection operation; or,
and determining the corresponding display position of each user account in the live broadcast virtual room according to the sequence of adding each user account into the live broadcast virtual room.
In a possible embodiment, the third obtaining module 1803 is specifically configured to:
carrying out video coding on a character video consisting of live images of characters to obtain video data;
sending the video data to a server; and receiving a video data packet sent by the server, and obtaining the live image of the portrait of each user account in the live virtual room from the received video data packet.
In a possible embodiment, the apparatus further includes a prompting module 1806, where the prompting module 1806 is specifically configured to: send out prompt information if it is detected that no portrait live image exists in the live image.
In one possible embodiment, the video data further includes time information for each character video frame; the display module 1804 is specifically configured to:
synchronizing the portrait live images corresponding to the user accounts according to the display position corresponding to each user account in the live virtual room and the time information of each portrait live image in the video data packet, and overlaying the portrait live images corresponding to the user accounts on the background image of the live virtual room.

In one possible embodiment, the display module is further configured to: in response to an end operation for live video playback, or when the live video playback duration reaches a preset duration, display on the live virtual room the scoring result of each user account during live video playback.
In a possible embodiment, the scoring result of each user account in the process of playing the live video is obtained by any one of the following methods:
receiving the scoring result of each user account from the server; or,
determining the score of each user account at each moment according to the matching degree between the portrait live image of each user account at each moment and the target character video frame at the corresponding moment in the selected example video; and
obtaining the scoring result of each user account in the live video process according to the score of each user account at each moment.
Based on the same inventive concept, an embodiment of the present application provides a computer device 1900. Referring to fig. 19, it includes a processor 1901 and a memory 1902.
The processor 1901 may be a central processing unit (CPU), a digital processing unit, or the like. The specific connection medium between the memory 1902 and the processor 1901 is not limited in this embodiment of the application. In fig. 19, the memory 1902 and the processor 1901 are connected by a bus 1903, drawn as a thick line; the way the other components are connected is merely illustrative and not limiting. The bus 1903 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is drawn in fig. 19, but this does not mean there is only one bus or one type of bus.
The memory 1902 may be a volatile memory, such as a random-access memory (RAM); or a non-volatile memory, such as, but not limited to, a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 1902 may also be a combination of the above.
The processor 1901, when calling the computer program stored in the memory 1902, is configured to execute the live video playing method as discussed in any of the foregoing, and may also be configured to implement the functions of the terminal or the server or the apparatus shown in fig. 18 discussed in the foregoing.
Based on the same inventive concept, embodiments of the present application provide a storage medium storing computer instructions, which when executed on a computer, cause the computer to perform any one of the live video playing methods discussed above.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Based on the same inventive concept, the embodiments of the present application provide a computer program product, which includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to enable the computer device to execute any one of the live video playing methods described above.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (13)
1. A live video playing method is characterized by comprising the following steps:
when a current user account is added into a live broadcast virtual room, acquiring a live broadcast image corresponding to the current user account;
acquiring a portrait live broadcast image corresponding to the current account from the live broadcast image;
acquiring a portrait live broadcast image of other user accounts in the live broadcast virtual room;
and arranging and displaying the live portrait images of the current account and the live portrait images of the other user accounts in the background image of the live virtual room.
2. The method of claim 1, wherein the method further comprises:
responding to a parameter setting operation for the live virtual room, and displaying a configuration parameter selection interface; the configuration parameter selection interface comprises a parameter selection control for selecting each configuration parameter, and the configuration parameters comprise one or both of background music and a queue arrangement mode;
responding to the triggering operation for the parameter selection control, and determining the selected configuration parameters; and,
and when the live portrait images of the current account and the live portrait images of the other user accounts are arranged and displayed in the background image of the live virtual room, controlling the display of the live portrait images of the user accounts in the live virtual room according to the selected configuration parameters.
3. The method of claim 2, wherein an example video play selection control is further disposed in the configuration parameter selection interface or the live virtual room, the method further comprising:
and responding to the triggering operation of the example video playing selection control, showing an example window in the live virtual room, and playing an example video in the example window.
4. The method of claim 2, wherein when the selected configuration parameter comprises a queue arrangement mode, the method further comprises:
responding to the operation aiming at the queue arrangement mode, and displaying each display bit included by the queue arrangement mode;
determining the corresponding display position of each user account in the live virtual room according to the display position selected by each user account; and
the arranging and displaying the portrait live broadcast image of the current account and the portrait live broadcast images of the other user accounts in the background image of the live broadcast virtual room specifically includes:
and displaying the portrait live broadcast image of each user account on the corresponding display position according to the display position corresponding to each user account in the live broadcast virtual room.
5. The method of claim 4, wherein the determining, according to the display position selected by each user account, the corresponding display position of each user account in the live virtual room specifically comprises:
receiving the display positions, sent by the server, corresponding to the other user accounts that have joined the live virtual room, and obtaining the display position corresponding to the current user account in response to a display position selection operation; or,
and determining the corresponding display position of each user account in the live virtual room according to the sequence of adding each user account into the live virtual room.
6. The method according to any one of claims 1 to 5, wherein the acquiring of the portrait live images of the other user accounts in the live virtual room specifically includes:
carrying out video coding on a character video consisting of live images of characters to obtain video data;
sending the video data to a server; and
and receiving a video data packet sent by the server, and obtaining the live portrait images of the user accounts in the live virtual room from the received video data packet.
7. The method of claim 6, wherein prior to obtaining the live image of the portrait from the live image, the method further comprises:
if it is detected that no portrait live image exists in the live image, sending out prompt information.
8. The method of claim 6, wherein the video data further includes time information of each portrait live image; the arranging and displaying the portrait live broadcast image of the current account and the portrait live broadcast images of the other user accounts in the background image of the live broadcast virtual room specifically includes:
and synchronizing the portrait live broadcast images corresponding to the user accounts according to the corresponding display positions of the user accounts in the live broadcast virtual room and the time information of the portrait live broadcast images in the video data packet, and covering the portrait live broadcast images corresponding to the user accounts on the background images of the live broadcast virtual room.
9. The method of any one of claims 1 to 5, further comprising:
in response to an end operation for live video playback, or when the live video playback duration reaches a preset duration, displaying on the live virtual room the scoring result of each user account during live video playback.
10. The method of claim 9, wherein the scoring result of each user account in the process of playing the live video is obtained by any one of the following methods:
receiving the scoring result of each user account from the server; or,
determining the score of each user account at each moment according to the matching degree between the portrait live image of each user account at each moment and the target character video frame at the corresponding moment in the selected example video; and
obtaining the scoring result of each user account in the live video process according to the score of each user account at each moment.
11. A live video playback device, comprising:
the first acquisition module is used for acquiring a live broadcast image corresponding to a current user account when the current user account is added into a live broadcast virtual room;
the second acquisition module is used for acquiring a portrait live broadcast image corresponding to the current account from the live broadcast image;
the third acquisition module is used for acquiring the portrait live broadcast images of other user accounts in the live broadcast virtual room;
and the display module is used for displaying the live portrait images of the current account and the live portrait images of the other user accounts in a background image of the live virtual room in an arrayed manner.
12. A computer device, comprising:
at least one processor, and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the at least one processor implementing the method of any one of claims 1-10 by executing the instructions stored by the memory.
13. A storage medium storing computer instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 10.