CN112188223B - Live video playing method, device, equipment and medium

Info

Publication number
CN112188223B
Authority
CN
China
Prior art keywords
live
virtual room
image
video
user account
Prior art date
Legal status
Active
Application number
CN202011038362.1A
Other languages
Chinese (zh)
Other versions
CN112188223A (en)
Inventor
孙千柱
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202011038362.1A
Publication of CN112188223A
Application granted
Publication of CN112188223B

Classifications

    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD], including:
    • H04N21/2187 Live feed
    • H04N21/2541 Rights management (management at additional data server, e.g. shopping server, rights management server)
    • H04N21/25875 Management of end-user data involving end-user authentication
    • H04N21/431 Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N21/485 End-user interface for client configuration
    • H04N21/4882 Data services, e.g. news ticker, for displaying messages, e.g. warnings, reminders

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application relates to the technical field of artificial intelligence, and provides a live video playing method, device, equipment and medium. The method comprises the following steps: when a current user account joins a live virtual room, acquiring a live image corresponding to the current user account; acquiring a portrait live image corresponding to the current account from the live image; acquiring the portrait live images of the other user accounts in the live virtual room; and arranging and displaying the portrait live image of the current account and the portrait live images of the other user accounts in the background image of the live virtual room. With this method, any client that has joined the live broadcast can display, during the live broadcast, the portrait live images corresponding to all accounts that have joined, so the degree of interaction is improved.

Description

Live video playing method, device, equipment and medium
Technical Field
The application relates to the technical field of computers, in particular to the technical field of artificial intelligence, and provides a live video playing method, device, equipment and medium.
Background
With the continuous development of video technology, users can interact remotely with relatives and friends through video-based dance games, strengthening the emotional bonds between them.
Currently, a video-based dance game works as follows: each device separately collects the dance video of its user; the server obtains each user's dance video and scores it, finally obtaining the score corresponding to each user's dance video; a user may subsequently send his or her dance video to other users. However, in this scheme, users cannot see one another while dancing, i.e., the degree of interaction is low.
Disclosure of Invention
The embodiment of the application provides a live video playing method, device, equipment and medium, which are used to improve the degree of video interaction.
In one aspect, a live video playing method is provided, including:
when a current user account joins in a live virtual room, acquiring a live image corresponding to the current user account;
acquiring a portrait live image corresponding to the current account from the live image;
acquiring the portrait live images of the other user accounts in the live virtual room;
and arranging and displaying the portrait live image of the current account and the portrait live images of the other user accounts in the background image of the live virtual room.
The embodiment of the application provides a live video playing device, which comprises:
the first acquisition module is used for acquiring a live image corresponding to the current user account when the current user account joins the live virtual room;
the second acquisition module is used for acquiring a portrait live image corresponding to the current account from the live image;
the third acquisition module is used for acquiring the portrait live images of the other user accounts in the live virtual room;
and the display module is used for arranging and displaying the portrait live image of the current account and the portrait live images of the other user accounts in the background image of the live virtual room.
In a possible embodiment, the display module is further configured to:
responding to a room creation operation, and displaying a live invitation interface, wherein the live invitation interface displays a live virtual room identifier and an invitation control; responding to a triggering operation on the invitation control, and displaying a contact list; and responding to a confirmation operation of selecting participating contacts on the contact list, displaying the information of the participating contacts who join the live virtual room according to the live invitation, and sending the live invitation to each participating contact.
In a possible embodiment, the apparatus further comprises a determination module, wherein:
the display module is further used for displaying a configuration parameter selection interface in response to a parameter setting operation for the live virtual room, wherein the configuration parameter selection interface comprises parameter selection controls for selecting various configuration parameters, and the configuration parameters comprise one or both of background music and a queue arrangement mode;
the determining module is used for determining the selected configuration parameters in response to a triggering operation on a parameter selection control;
and the display module is further used for controlling the display of the portrait live image of each user account in the live virtual room according to the selected configuration parameters when the portrait live image of the current account and the portrait live images of the other user accounts are arranged and displayed in the background image of the live virtual room.
In a possible embodiment, the configuration parameter selection interface or the live virtual room is further provided with an example video play selection control, and the display module is further configured to:
and responding to the triggering operation of the example video playing selection control, displaying an example window in the live virtual room, and playing the example video in the example window.
In one possible embodiment, when the selected configuration parameters comprise a queue arrangement mode:
the display module is further used for displaying each display position included in the queue arrangement mode in response to an operation on the queue arrangement mode;
the determining module is further used for determining the display position corresponding to each user account in the live virtual room according to the display position selected by each user account;
the display module is specifically configured to display the portrait live image of each user account on its corresponding display position, according to the display position corresponding to each user account in the live virtual room.
In a possible embodiment, the determining module is specifically configured to:
receiving the display positions, sent by a server, corresponding to the other user accounts that have joined the live virtual room, and obtaining the display position corresponding to the current user account in response to a display position selection operation; or alternatively,
determining the display position corresponding to each user account in the live virtual room according to the order in which the user accounts joined the live virtual room.
In a possible embodiment, the third obtaining module is specifically configured to:
performing video coding on a person video formed from the portrait live images, to obtain video data;
transmitting the video data to a server; and receiving the video data packet sent by the server, and obtaining the portrait live image of each user account in the live virtual room from the received video data packet.
In a possible embodiment, the device further comprises a prompting module, and the prompting module is specifically configured to: send out prompt information if it is detected that no portrait is present in the live image.
In one possible embodiment, the video data further includes time information of each person video frame; the display module is specifically used for:
synchronizing the portrait live image corresponding to each user account according to the display position corresponding to each user account in the live virtual room and the time information of the portrait live images in the video data packet, and overlaying the portrait live image corresponding to each user account on the background image of the live virtual room.
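The patent gives no code for this step; the following is a minimal sketch, assuming NumPy image arrays and illustrative data layouts (the names portrait_frames, positions, and the frame tuple format are assumptions, not part of the disclosure), of how timestamp-synchronized portrait frames could be overlaid on the room background:

```python
import numpy as np

def compose_room_frame(background, portrait_frames, positions, timestamp):
    """Overlay each account's portrait frame onto the room background.

    portrait_frames: account id -> list of (timestamp, image, mask) tuples,
    where mask marks the person pixels; positions: account id -> (x, y)
    top-left display position. All layouts are illustrative assumptions.
    """
    canvas = background.copy()
    for account, frames in portrait_frames.items():
        # Pick the frame whose time information is closest to `timestamp`,
        # so the portraits shown together stay synchronized.
        _ts, image, mask = min(frames, key=lambda f: abs(f[0] - timestamp))
        x, y = positions[account]
        h, w = image.shape[:2]
        region = canvas[y:y + h, x:x + w]
        # Copy only the person pixels; the room background shows elsewhere.
        region[mask > 0] = image[mask > 0]
    return canvas
```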
In a possible embodiment, the display module is further configured to: display, in the live virtual room, the scoring result of each user account during the live video playing, in response to an ending operation of the live video playing or when the duration of the live video playing reaches a preset duration.
In one possible embodiment, the scoring result of each user account in the live video playing process is obtained by any one of the following modes:
receiving the scoring result of each user account from a server; or alternatively,
determining the score of each user account at each moment according to the degree of matching between the portrait live image of each user account at each moment and the target person video frame at the corresponding moment in the selected example video;
and obtaining the scoring result of each user account during the live video playing according to the score of each user account at each moment.
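No concrete scoring procedure is disclosed; the following is a minimal sketch of the second mode, assuming a caller-supplied similarity function as a stand-in for whatever matching-degree measure (for example, pose comparison) is actually used:

```python
def score_account(portrait_frames, example_frames, similarity):
    """Score one account by matching each portrait live frame against the
    example-video frame at the corresponding moment, then aggregating.

    `similarity(a, b)` is a hypothetical placeholder returning a numeric
    matching degree; the frame lists are assumed to be time-aligned.
    The patent fixes neither the measure nor the aggregation.
    """
    per_moment = [similarity(p, e)
                  for p, e in zip(portrait_frames, example_frames)]
    # Averaging is one plausible aggregation; the disclosure leaves it open.
    return sum(per_moment) / len(per_moment) if per_moment else 0.0
```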
An embodiment of the present application provides a computer apparatus including:
at least one processor, and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the at least one processor implementing the method of any one of the aspects by executing the memory stored instructions.
An embodiment of the present application provides a storage medium storing computer instructions that, when run on a computer, cause the computer to perform a method according to any one of the aspects.
Due to the adoption of the technical scheme, the embodiment of the application has at least the following technical effects:
In the embodiment of the application, during live broadcast, the client displays the portrait live images separated from the live images corresponding to each account, so users participating in the live broadcast can view the portrait live images corresponding to all accounts in the live virtual room, which improves the real-time performance of the interaction and the degree of interaction. Moreover, since only the portrait live images within the live images, rather than the complete live images, need to be transmitted and displayed between the clients, the amount of data transmitted during the live broadcast is reduced and the live broadcast efficiency is improved. In addition, only the portrait live image corresponding to each user is displayed, without the background of the environment in which that user is located, which improves the overall display effect of the live broadcast.
Drawings
FIG. 1 is a schematic diagram of an application scenario of a live video playing method provided by an embodiment of the present application;
FIG. 2 is a diagram illustrating an interaction process between the devices in FIG. 1 according to an embodiment of the present application;
FIG. 3 is an exemplary diagram of a main interface of a client according to an embodiment of the present application;
FIG. 4 is an exemplary diagram of a process for determining a target contact according to an embodiment of the present application;
FIG. 5 is an exemplary diagram of a live invited interface provided by an embodiment of the present application;
FIG. 6 is an exemplary diagram of a process for determining to join a live virtual room provided by an embodiment of the present application;
FIG. 7 is an exemplary diagram of an updated live invitation interface provided by an embodiment of the present application;
FIG. 8 is an exemplary diagram of a configuration parameter selection interface provided by an embodiment of the present application;
FIG. 9 is an exemplary diagram of a process for determining a display position according to an embodiment of the present application;
FIG. 10 is a flowchart of generating video data according to an embodiment of the present application;
FIG. 11 is an exemplary diagram of a process for obtaining video data according to an embodiment of the present application;
FIG. 12 is a first flowchart of displaying live video according to an embodiment of the present application;
FIG. 13 is a second flowchart of displaying live video according to an embodiment of the present application;
FIG. 14 is an exemplary diagram of a process for displaying live video according to an embodiment of the present application;
FIG. 15 is a flowchart of determining a score according to an embodiment of the present application;
FIG. 16 is an exemplary diagram of displayed scoring results provided by an embodiment of the present application;
FIG. 17 is a flowchart of a live video playing method according to an embodiment of the present application;
FIG. 18 is a schematic structural diagram of a live video playing device according to an embodiment of the present application;
FIG. 19 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to better understand the technical solutions provided by the embodiments of the present application, the following detailed description will be given with reference to the accompanying drawings and specific embodiments.
To facilitate a better understanding of the technical solutions of the present application, terms related to the present application are first explained below.
Artificial intelligence (Artificial Intelligence, AI): a theory, method, technology, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use that knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence is the study of the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer Vision technology (CV): computer vision is the science of studying how to make machines "see"; more specifically, it uses cameras and computers instead of human eyes to identify, track, and measure targets, and further performs graphics processing so that the results become images more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition. The technologies related to image recognition in the embodiments of the present application are described below.
Open source computer vision platform (Open Source Computer Vision Library, OpenCV): a cross-platform computer vision library. OpenCV was initiated and co-developed by Intel Corporation, is released under a BSD license, and is freely available for commercial and research use. OpenCV can be used to develop real-time image processing, computer vision, and pattern recognition programs. The separation of the person video from the live video in the embodiment of the application can be realized through OpenCV, as described below.
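The patent names OpenCV but no specific routine; the following is a minimal sketch of one way a portrait could be separated from a live frame, assuming GrabCut as the segmentation method and a person bounding box from some upstream detector (both are assumptions, not part of the disclosure):

```python
import cv2
import numpy as np

def extract_portrait(frame, rect):
    """Separate the person region from a live frame using GrabCut.

    `rect` is a hypothetical (x, y, w, h) box around the person, e.g.
    from a person detector; the patent does not specify the algorithm.
    """
    mask = np.zeros(frame.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame, mask, rect, bgd_model, fgd_model, 5,
                cv2.GC_INIT_WITH_RECT)
    # Sure/probable foreground pixels form the portrait mask.
    fg_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                       255, 0).astype(np.uint8)
    portrait = cv2.bitwise_and(frame, frame, mask=fg_mask)
    return portrait, fg_mask
```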
Application: generally refers to a program that can be used to implement certain functions; in the embodiments of the present application, it generally refers to an application that can implement the interactive control functions of the embodiments. The client refers to the carrier of the application in each terminal; it may be a web-page client, a client preinstalled in the terminal, or a client embedded in a third-party application.
Live virtual room: refers to a created virtual room; each live virtual room corresponds to a room identifier. Each user account can join a live virtual room, and the user corresponding to any user account in the live virtual room can see the portrait live image of every user in the room. After a user account joins a live virtual room, the client logged into that user account is considered to have joined the live virtual room.
Background music: refers to music played during live broadcast; for ease of distinction, music that has been determined to be played during live broadcast may also be referred to as target background music. The target background music may be a default or may be determined according to a user operation.
Example video: refers to an example video played during live broadcast; for ease of distinction, an example video that has been determined to be played during live broadcast may also be referred to as a target example video. The target example video may be a default or may be determined according to a user operation.
Queue arrangement mode: refers to the relative positions of the display positions in the live virtual room, specifically including the number of display positions and the position of each display position relative to the others.
Portrait live image: a whole-body image of a person extracted from the live image; that is, the portrait live image includes not only the face but also the body and so on.
The following describes the design concept of the embodiment of the present application.
In the related art, each device separately collects its user's dance video and sends it to a server; after the dance is finished, any user can share his or her dance video with the other users through the server. Dance videos are thus shared, but they cannot be shared in real time while dancing, so the degree of interaction among users is low.
In view of this, the embodiment of the application provides a live video playing method in which each client can display, in the background image of the live virtual room, the portrait live image of each user account that has joined the room. That is, the scheme realizes real-time sharing of each user's portrait live image, improves the real-time performance of live video playing, and increases the degree of interaction among users. In addition, what is displayed in the live virtual room is the portrait live image of each user who has joined, not the live image directly collected by each client; since the transmission amount of a complete live image is necessarily larger than that of the portrait live image within it, the amount of data transmitted during live video playing is reduced. Displaying only the portrait live image corresponding to each user account also avoids unnecessary background elements in the live image, improving the overall display effect of the live video.
Based on the design concept, the application scenario of the live video playing method according to the embodiment of the application is described below.
Referring to fig. 1, a schematic view of a scenario of a live video playing method is shown, where the scenario includes a plurality of terminals and a server that can communicate with each terminal through a communication network. Each terminal corresponds to a user, a client is arranged in each terminal, and each client is correspondingly logged in with a corresponding account. Each terminal may be configured with a camera, which may be part of the terminals or may be provided independently of the terminals, e.g. the first terminal 110-1 is configured with a first camera 112-1, the second terminal 110-2 is configured with a second camera 112-2, and the third terminal 110-3 is configured with a third camera 112-3. The number of terminals and servers may be arbitrary, and the present application is not particularly limited. The specific form of the client may refer to the content discussed above, and will not be described here again.
For convenience of description, the users corresponding to the first terminal 110-1, the second terminal 110-2 and the third terminal 110-3 are the current user A, the second user B and the third user C, respectively; the clients corresponding to the first terminal 110-1, the second terminal 110-2 and the third terminal 110-3 are the first client 111-1, the second client 111-2 and the third client 111-3, respectively. The accounts logged into the first client 111-1, the second client 111-2, and the third client 111-3 are referred to as the current account, the second account, and the third account, respectively.
The terminal may be, for example, a mobile phone, a personal computer, a portable tablet computer, a smart television, a television box of a smart television, a game device, or the like. The types of the first terminal 110-1, the second terminal 110-2, and the third terminal 110-3 in fig. 1 may be the same or different. With continued reference to fig. 1, the first terminal 110-1 is, for example, a smart television, the second terminal 110-2 is, for example, a personal computer, and the third terminal 110-3 is, for example, a mobile phone. The server 120 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content distribution, basic cloud computing services such as big data and an artificial intelligence platform.
Before live video playing, the current client creates a live virtual room according to the operation of the current user, and other users can join the live virtual room through the client. The process of creating and joining a live virtual room is described below.
When live video playing starts, the camera corresponding to each client collects the corresponding live images, and the client captures the portrait live images from them in real time; the portrait live images at successive moments form a person video, which is further packaged into video data and sent to the server 120. In this way, the server 120 obtains the video data corresponding to all accounts in the live virtual room. Two possible cases are described below.
In the first possible case: the server 120 receives the video data corresponding to all accounts in the live virtual room, encapsulates it into a video data packet, and then distributes the packet to the clients corresponding to all user accounts in the live virtual room. After receiving the video data packet, each client renders and plays the live video according to the video data corresponding to all clients in the live virtual room. The details of playing live video according to the video data are described below.
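A minimal sketch of this first case, assuming an in-memory mapping of submissions and a send callback standing in for the real transport (all names and the packet format are illustrative assumptions):

```python
def bundle_and_distribute(room_id, submissions, send):
    """Package the video data received from every account in the room into
    one video data packet and distribute it to every client in the room.

    submissions: account id -> encoded person-video data (bytes);
    send(account, packet): hypothetical transport callback.
    """
    packet = {
        "room": room_id,
        "streams": dict(submissions),  # one stream per joined account
    }
    for account in submissions:
        send(account, packet)
```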
In the second possible case: after receiving the video data sent by the clients of all user accounts in the live virtual room, the server 120 performs rendering and synthesis based on the video data to obtain the live video, and then issues the relevant data of the live video to each client; each client receives and plays the live video.
Further, after the live broadcast is finished, the server 120 may generate a scoring result corresponding to each client according to the video data of each client in the live broadcast video playing process, and send the scoring result corresponding to each client, so that each client displays the scoring result of the live broadcast.
The following describes an example of an implementation architecture of a client according to an embodiment of the present application in an application scenario discussed in conjunction with fig. 1:
Clients may be built based on the Model-View-ViewModel (MVVM) pattern, and may be divided into a presentation layer, a business logic layer, and a data layer.
Presentation layer: used for displaying various interfaces, such as the live video or the target queue arrangement mode.
Business logic layer: used for various kinds of information processing; for example, it may execute corresponding events in response to various user operations, such as recording the time a user joins the live virtual room or scoring the user during live video playing.
Data layer: used for transmitting various data; for example, it may collect live images and transmit the portrait live images obtained from them. The ways in which the client implements various functions are described in detail below.
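A minimal sketch of this three-layer split, with illustrative class and method names that are assumptions rather than part of the disclosure:

```python
import time

class DataLayer:
    """Transmits data: collects live images, hands off portrait frames."""
    def collect_frame(self, camera):
        return camera.read()  # placeholder for the real capture call

class BusinessLogicLayer:
    """Processes information, e.g. records when an account joins a room."""
    def on_join(self, room_state, account):
        room_state.setdefault("join_times", {})[account] = time.time()

class PresentationLayer:
    """Displays interfaces, e.g. the composed live video frame."""
    def show(self, frame):
        pass  # hand the frame to whatever UI toolkit is in use
```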
Based on the first possible scenario discussed in fig. 1, the following describes a live video playing method in the embodiment of the present application in conjunction with the interaction procedure between the devices in fig. 1 shown in fig. 2:
S201, the first client 111-1 generates a room creation request in response to the room creation operation.
When the current user wants to start a live broadcast, the first client 111-1 may be opened, and the first client 111-1 displays the main interface in response to the opening operation. The main interface may include a create room control, and may include other operational controls as well, which is not limited here.
The current user then performs a triggering operation on the create room control, which may be, for example, a manual clicking operation on the control, or a clicking operation performed by moving the focus of a remote controller onto it. The first client 111-1 generates a room creation request for requesting creation of a live virtual room in response to the triggering operation. The room creation request may carry the account identifier corresponding to the account logged into the first client 111-1.
In one possible embodiment, when performing the room creation operation, the current user may set a preset number of accounts allowed to join the live virtual room, and the first client 111-1 determines this preset number in response to the number input operation. Further, the preset number may be carried in the room creation request.
For example, referring to FIG. 3, which is an exemplary diagram of a main interface of the first client 111-1, the first client 111-1 generates a room creation request in response to a current user clicking operation on the create room control 300 shown in FIG. 3.
S202, the first client 111-1 transmits a room creation request to the server 120.
S203, the server 120 generates live virtual room information.
Upon receiving the room creation request, the server 120 determines that the first client 111-1 needs to create a live virtual room, and therefore generates live virtual room information for the first client 111-1. The live virtual room information includes a unique live virtual room identifier, which is used to identify the live virtual room.
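A minimal sketch of S203, assuming an in-memory room registry and UUID-based identifiers, neither of which is specified in the disclosure:

```python
import uuid

def create_live_virtual_room(creator_account, rooms):
    """Generate live virtual room information with a unique identifier.

    `rooms` is a hypothetical in-memory registry; the creator joins the
    room by default, matching the invitation interface described below.
    """
    room_id = uuid.uuid4().hex  # unique live virtual room identifier
    rooms[room_id] = {"members": [creator_account]}
    return {"room_id": room_id}
```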
S204, the server 120 transmits the live virtual room information to the first client 111-1.
S205, the first client 111-1 displays a live invitation interface.
After receiving the live virtual room information, the first client 111-1 generates a live invitation interface according to the live virtual room identifier; the live invitation interface is used to invite other users to join the live virtual room. The live invitation interface may also display the live virtual room identifier so that the current user knows it. In addition, the live invitation interface includes an invitation control, and may also include the account identifier of the current user, who has joined the live virtual room by default, as well as a cancel room creation control.
For example, after the current user clicks on the create room control 300 as shown in FIG. 3, a live invitation interface is displayed as shown in FIG. 4 (1) including an invitation control 410, an account number identification 420 of the current user, a cancel create room control 430, and a live virtual room identification 440.
S206, the first client 111-1 responds to the triggering operation for the invitation control to display a contact list on the live invitation interface.
After the live invitation interface is displayed, the current user can perform triggering operation on the invitation control in the live invitation interface, wherein the triggering operation can be a manual clicking operation or a clicking operation performed by moving a focus of a remote controller. The first client 111-1 displays a contact list in response to a triggering operation for the invitation control. The contact list includes at least one contact, each contact associated with a current user, such as a friend of the current user, and the like. It should be noted that, each contact corresponds to an account, and after the current user selects a contact, the client may obtain the account corresponding to the contact according to the contact selected by the current user.
The contact list may be a contact list associated with the current user in the first client 111-1, or may be a contact list acquired by the first client 111-1 from another application, for example, a contact list acquired from an instant messaging application through an interface.
In another case, the first client 111-1 may cancel creation of the live virtual room in response to a trigger operation of the current user to cancel creation of the room control.
S207, the first client 111-1 generates an invitation request in response to a contact selection operation for the contact list.
The first client 111-1 may determine the target contacts that the current user wants to invite in response to a contact selection operation on one or more contacts in the contact list, and generate an invitation request. The invitation request carries the account identifiers of the target contacts to be invited and is used to invite the target contacts to join the live virtual room.
S208, the first client 111-1 sends the invite request to each client through the server 120.
After the current user selects the contact, for example, the current user selects the second user and the third user, the first client 111-1 determines to send the invitation request to the account corresponding to the second user and the account corresponding to the third user, where the invitation request may carry the account identifier corresponding to the second user and the account identifier corresponding to the third user. After receiving the invitation request, the server 120 sends the invitation request to the second client 111-2 corresponding to the second user and the third client 111-3 corresponding to the third user according to the account identifier carried in the invitation request.
Alternatively, the first client 111-1 generates the invitation request in response to a triggering operation for the invitation control. The invite request carries an identity of the live virtual room, the invite request being for inviting a user to join the live virtual room. After generating the invite request, the first client 111-1 displays a contact list, the current user may further select to send the invite request to one or more contacts, the first client 111-1 determines a target contact that the current user wants to invite in response to a contact selection operation of the current user, and the first client 111-1 may send the invite request to a client corresponding to the target contact.
As an example, S206 to S208 are optional parts. For example, after the first client 111-1 generates the invite request, the current user may not need to select the corresponding contact, and the server 120 may broadcast the invite request to various other clients, or alternatively send to clients that are currently logged on and idle. Idle means that the client is not currently joining other live virtual rooms.
For example, referring to fig. 4, when the current user clicks the invite control 410 shown in fig. 4 (1), the first client 111-1 displays the contact list 450 shown in fig. 4 (2), the interface further includes a scroll bar 460, and the first client 111-1 responds to the pull-down operation performed by the current user on the scroll bar 460 to display more contacts. The current user may select one or more contacts in the contact list 450, and the first client 111-1 determines a target contact that the current user wants to invite according to the contact selection operation of the current user.
S209, each client displays a live invited interface.
For example, after the server 120 sends the invite request to the second client 111-2 and the third client 111-3, the second client 111-2 and the third client 111-3 display live invited interfaces, respectively, according to the invite request. The live invited interface is used to prompt the user whether to receive live invitations. The live invited interface includes a live virtual room identification, a confirmation control to confirm joining the live virtual room, and a cancel control to confirm not joining the live virtual room.
For example, referring to FIG. 5, which shows the live invited interface displayed by the second client 111-2, the live invited interface includes prompt information, a confirmation control 510, and a denial control 520. The prompt information carries the live virtual room identifier.
S210, the second client 111-2 generates joining room information in response to accepting the invitation operation.
For example, the second user may perform an invitation acceptance operation on the live invited interface, specifically, for example, the second user clicks a confirmation control in the live invited interface, and the second client 111-2 generates joining room information according to the invitation acceptance operation, where the joining room information is used to indicate that the second user confirms joining the live virtual room.
If, on the other hand, the second user is busy or does not want to join the live virtual room, the second user may perform an invitation rejection operation on the live invited interface, for example by clicking the denial control in the live invited interface; the second client 111-2 then determines from the invitation rejection operation that the second user will not participate in the live broadcast.
For example, referring to FIG. 5, when the user clicks the confirmation control 510 in FIG. 5, the second client 111-2 generates the joining room information according to the invitation acceptance operation.
As an embodiment, the steps S207 to S210 are optional steps.
In another possible embodiment, after the current user obtains the live virtual room information, the live virtual room identifier may be directly notified to other users in any manner, and the other users may directly perform live virtual room identifier input operation in the corresponding clients, and the clients of the other users generate joining room information in response to the live virtual room identifier input operation.
For example, referring to fig. 6 (1), in response to the clicking operation of the second user on the join room control 610, the second client 111-2 displays a live virtual room identifier input box 620 as shown in fig. 6 (2), and the second client 111-2 generates the join room information in response to the live virtual room identifier input by the second user in the live virtual room identifier input box 620.
S211, the second client 111-2 may transmit the joining room information to the first client 111-1 through the server 120.
S212, the third client 111-3 generates joining room information in response to accepting the invitation operation.
Similarly, the third client 111-3 may generate joining room information in response to the invitation accepting operation by the third user. The contents of receiving the invitation operation and joining the room information may refer to the contents discussed above, and will not be described herein.
S213, the third client 111-3 may transmit the joining room information to the first client 111-1 through the server 120.
For ease of description, contacts who have confirmed participation in the live broadcast will be referred to as participating contacts; they include the current user who created the live virtual room as well as the other contacts who have confirmed joining it.
S214, the first client 111-1 updates the live invitation interface according to the joining room information.
The first client 111-1 may obtain the joining room information of the other clients through the server 120 and update the live invitation interface accordingly.
For example, the first client 111-1 updates the live invitation interface after receiving the joining room information sent by the second client 111-2 and the third client 111-3; specifically, the account identifiers of the second user and the third user, who have confirmed joining the live virtual room, are displayed in the live invitation interface. An account identifier is, for example, an account number or the user's avatar.
For example, referring to (1) of FIG. 4, after the current user invites the second user and the third user, the first client 111-1 determines that the second user and the third user have confirmed joining the live virtual room, and updates the live invitation interface shown in (1) of FIG. 4 to the live invitation interface shown in FIG. 7, in which the account identifier B of the second user and the account identifier C of the third user are newly displayed. In addition, the live invitation interface may also include a stop invitation control 710, which is used to stop inviting further users to join the live virtual room.
S215, the first client 111-1 generates room creation success information.
The first client 111-1 may generate room creation success information when it determines that a preset condition is satisfied. The preset condition is, for example, that the current user performs a stop-invitation operation on the live virtual room, that the number of accounts that have joined the live virtual room reaches a preset number, or that a stop-invitation instruction issued by the server 120 is received. For example, the server 120 may receive stop-invitation information sent by another user's client and then send a stop-invitation instruction to all clients. The preset number may be, for example, the maximum number of accounts the live virtual room can accommodate, or may be set by the current user when creating the live virtual room.
After determining that the preset condition is satisfied, the first client 111-1 generates room creation success information. The room creation success information is used for indicating that the room creation is successful, and comprises account numbers corresponding to all users who join in the live virtual room. Alternatively, the server 120 may generate the room creation success information after determining that the preset condition is satisfied, and the preset condition may refer to the content discussed above, which is not described herein.
It should be noted that FIG. 2 takes the first client creating the live virtual room and generating the room creation success information as an example; in practice, any client may create a live virtual room, generate room creation success information, and so on in any of the above manners.
In one possible embodiment, after the room creation success information is generated, or after the current user creates the live virtual room, the users in the live virtual room may interact with one another; for example, each client may share the currently collected live images with the other clients in the live virtual room through the server 120, thereby enabling interaction before the live broadcast begins.
S216, the first client 111-1 sends room creation success information to other clients participating in the live virtual room through the server 120.
For example, the first client 111-1 may transmit room creation success information to the second client 111-2 and the third client 111-3 through the server 120.
S217, each client displays a configuration parameter selection interface respectively.
After each client receives the room creation success information, it may display a configuration parameter selection interface; a parameter setting control is set on the configuration parameter selection interface or in the live virtual room. Alternatively, a parameter setting control is arranged on the live invitation interface or the main interface, and any client displays the configuration parameter selection interface in response to the user's triggering operation on the parameter setting control, which is equivalent to a parameter setting operation.
Wherein the configuration parameter selection interface includes parameter selection controls for selecting various configuration parameters. The parameter selection controls include one or more of a background music selection control, an example video selection control, and a queue ranking mode selection control. The parameter selection controls may also include an example video play selection control.
Specifically, the background music selection control is used to select the background music played during live video playing; the example video selection control is used to select the example video during live video playing; the queue arrangement mode selection control is used to select the arrangement of the relative display positions of the portrait live images during live video playing; and the example video play selection control is used to select whether an example window is presented during live video playing. The example video may be presented in the example window.
For example, with continued reference to FIG. 7, after the current user clicks the stop invitation control 710 shown in FIG. 7, the first client 111-1 generates room creation success information according to this operation and issues it to the other clients through the server 120; each client may then display the configuration parameter selection interface, an example of which is shown in FIG. 8. The configuration parameter selection interface includes a background music selection control 810, an example video selection control 820, a queue arrangement mode selection control 830, and an example video play selection control 840.
S218, the third client 111-3 obtains the target configuration parameters in response to a triggering operation on a parameter selection control.
The parameter selection controls comprise different specific controls, and the target configuration parameters obtained differ accordingly. The manner of obtaining the target configuration parameters is illustrated below, taking the third client 111-3 as an example.
The third client 111-3 may perform at least one of the following to obtain the corresponding target configuration parameters:
a1: the third client 111-3 displays a selectable plurality of background music in response to a trigger operation of the third user for the background music selection control. The third client 111-3 may obtain target background music in response to a background music selection operation for a third user for any one of a plurality of background music, and play the target background music during the live video playing.
In one possible embodiment, the target background music may be associated with a corresponding background image, and the background image associated with the target background music may be an image or a background video. After the client determines the target background music, the background image associated with the target background music may be acquired from the server 120. Alternatively, if some target background music does not have an associated background image, then the client may use the default background image as the background image during the live video playing process.
In one possible embodiment, the target background music may be associated with its corresponding target example video, and after the client determines the target background music, the user need not select the target example video, and the client determines the example video associated with the target background music as the target example video.
A2: the third client 111-3 may display a selectable plurality of example videos in response to a triggering operation of the third user for an example video selection control, and the third client 111-3 may obtain a target example video in response to an example video selection operation of the third user for any one of the plurality of example videos.
In one possible embodiment, the target example video may be associated with corresponding target background music, and after the client determines the target example video, the user may not need to select the target background music, and the client determines the background music associated with the target example video as the target background music.
In one possible embodiment, the target example video may be associated with a corresponding background image, which may be an image or a background video. After the client determines the target example video, the background image associated with it may be obtained from the server 120. Alternatively, if a target example video has no associated background image, a default background image may be used during live video playing.
Because the difficulty of imitation differs among users (for example, the actions in some example videos are difficult), after the client determines the target example video, it may display at least one difficulty mode associated with the target example video, and determine the target difficulty mode for live video playing in response to the user's mode selection operation on any difficulty mode.
A3: the third client 111-3 displays a plurality of selectable queuing modes in response to a triggering operation of the third user for the queuing mode, and the third client 111-3 obtains a target queuing mode in response to a selecting operation of the third user for any one of the plurality of queuing modes.
In one possible embodiment, the client may select a target queuing mode associated with the number of accounts that have been currently determined to join the live virtual room. The client may be, for example, selecting a queuing mode in which the number of display bits is the same as the number of accounts, or may be selecting a queuing mode in which the number of display bits is greater than the number of accounts. Because the number of people which can be acquired by each account is more than one, a queue arrangement mode which is larger than the number of the accounts can be selected, so that the live image of each person can be ensured to have a corresponding display position.
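A minimal sketch of this selection rule, with an illustrative list-of-dicts representation of the candidate arrangements (an assumption; the disclosure does not define a data format):

```python
def pick_queue_arrangement(arrangements, account_count):
    """Pick a queue arrangement mode with at least as many display
    positions as joined accounts (>= rather than ==, since one account
    may contribute more than one person).

    arrangements: list of dicts like {"name": ..., "positions": [(x, y), ...]}.
    """
    fitting = [a for a in arrangements
               if len(a["positions"]) >= account_count]
    # Prefer the tightest fit so the layout is not needlessly sparse.
    return min(fitting, key=lambda a: len(a["positions"])) if fitting else None
```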
In one possible embodiment:
For A1 to A3 above, once any client that has joined the live virtual room determines a target configuration parameter, the other clients need not repeat the selection. Other clients may change a target configuration parameter already determined by a client for any of A1 to A3; the changing manner is similar to the processes of A1 to A3 and is not repeated here.
In another possible embodiment:
The user corresponding to each account may separately select one or both of the target example video and the target background music for live video playing; each client responds to the corresponding selection operation and determines its own configuration parameters for live video playing, i.e., each client may perform the processes of A1 and A2.
A4: the third client 111-3 displays a selectable play example window or no play example window in response to a trigger operation of the third user for the play example video selection control, and the third client 111-3 determines to display the example window when the live portrait image is displayed in response to an operation of selecting the play example window for the third user, and plays the target example video in the example window.
Of course, the third user may also determine that the example window is not played in displaying the live portrait image for the operation that the third user selects not to play the example window.
Similarly, each client corresponding to each user account in the live virtual room may execute the process in A4.
It should be noted that S217 to S218 are optional steps. For example, if no user joining the live virtual room selects configuration parameters, each client may display the live video using default configuration parameters.
For example, referring to fig. 8, after the third user clicks the queuing mode selection control 830, the third client displays a plurality of queuing modes 910 as shown in (1) of fig. 9. In response to the third user selecting the three-position queuing mode, the queuing form with three positions in (1) of fig. 9 is determined as the target queuing mode. The interface shown in fig. 9 further includes a preview 920 that displays the target queuing mode, as well as a next control 930 and a return control 940. The third client 111-3 may display the next selectable configuration parameter in response to an operation on the next control 930, or redisplay the previous interface in response to an operation on the return control 940.
S218 above is described taking the third client as an example; in practice, S218 may be performed by any client.
S219, the third client 111-3 sends the target configuration parameters to other clients joining the live virtual room through the server 120.
For example, the third client 111-3 may send the target configuration parameters to the first client 111-1 and the second client 111-2 through the server 120 so that other clients may obtain the determined target configuration parameters.
Alternatively, in another possible embodiment, S219 is an optional step; for example, each client only needs to record the target configuration parameters selected by its corresponding user, without notifying the other clients.
Further, after any client determines the selected target queuing mode, the target queuing mode may be displayed. Each user can select his or her own display position in the target queue arrangement mode through the respective client, and the client determines the target display position corresponding to that user according to the user's display position selection operation. For example, the user may move the focus of a remote controller to the target display position in the target queue arrangement mode, or directly click a display position. During position selection, the client may synchronize the target display position selected by its user to the server 120 in real time, and the server 120 synchronizes it to the other clients, so that other users do not select a display position that has already been taken.
In another possible embodiment, the user need not select a position; the client determines the display position corresponding to each user account in the target queue arrangement mode according to the order in which the accounts joined the live virtual room.
In another possible embodiment, the server 120 may record the order in which each account joins the live virtual room and determine the display position of each user account in the target queue arrangement mode.
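As a minimal sketch of this join-order assignment (the function name and data shapes below are illustrative assumptions, not taken from the patent), the server or client could simply pair accounts, sorted by join time, with the display positions of the target queue arrangement mode:

```python
# Hypothetical sketch: assign display positions by join order.
# `joins` maps account id -> join timestamp; `positions` lists the
# display positions of the target queue arrangement mode in order.
def assign_positions(joins: dict, positions: list) -> dict:
    ordered = sorted(joins, key=joins.get)   # earliest join first
    return dict(zip(ordered, positions))     # account -> display position

# Example: three accounts joining at times 10, 12 and 11.
print(assign_positions({"A": 10, "B": 12, "C": 11}, [1, 2, 3]))
# {'A': 1, 'C': 2, 'B': 3}
```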
After the target display positions corresponding to the accounts are determined, each client can learn which account occupies each display position in the target queue arrangement, that is, which user account each portrait live image belongs to.
For example, with continued reference to (1) of fig. 9, after the third client 111-3 selects the target queuing form, it may display the position selection interface shown in (2) of fig. 9 in response to an operation on the next control in (1) of fig. 9, or in response to a click on the selected target queuing mode. The third user may then select any one of the three display positions 1, 2 and 3 shown in (2) of fig. 9, and the third client 111-3 obtains the target display position in response to the third user's display position selection operation.
S220, the first client 111-1 responds to the start of the live video playing operation to generate a start instruction.
The start instruction may be generated when the current user initiates the live video playing operation; after all configuration parameters have been selected; when the current user performs the stop-inviting operation on the live invitation interface; when the number of contacts joining the live virtual room reaches a preset number and the start control displayed on the live invitation interface is operated; or in response to a joining operation for the live invitation.
Of course, fig. 2 takes the first client 111-1 generating the start instruction as an example; in practice, any other client may perform step S220.
Alternatively, the server 120 may generate a start instruction after determining that all the interactive configuration parameters have been selected, and issue the start instruction to each client.
S221, the first client 111-1 sends a start instruction to other clients joining the live virtual room through the server 120.
Other clients may be understood herein as clients corresponding to other user accounts in the live virtual room, except for the first client 111-1 corresponding to the current account.
S222, each client generates video data.
The video data is obtained by video-encoding the portrait live images collected by the client corresponding to a user account. Each client in the live virtual room generates its video data in the same way. The process is described below taking the first client 111-1 as an example:
the process of obtaining video data by the first client 111-1 will be described with reference to the flowchart shown in fig. 10:
S1001, the first client 111-1 collects live images through the first camera 112-1.
For example, the first camera 112-1 captures live images, and the first client 111-1 acquires a live image through a camera interface.
Because the current user may not be facing the camera, the live image may contain no portrait. When the first client 111-1 determines that no portrait exists in the live image, it may issue a prompt message. The prompt instructs the current user to adjust his or her position relative to the client, so that after viewing the prompt the user can move and the first camera 112-1 can collect a live image that includes the user's portrait live image.
S1002, the first client 111-1 acquires the portrait live image from the live image.
The first client 111-1 obtains a live image and may process it to obtain the portrait live image within the live image.
Specifically, the camera interface may send the live image to an image capturing module in the first client 111-1, which captures the portrait live image corresponding to each live image in real time. The image capturing module may capture portrait live images at a first preset frame rate, for example 30 frames per second, meaning that the module captures 30 portrait live images from the live images within one second.
For example, the first client 111-1 may extract the portrait using OpenCV, removing everything in the live image except the portrait. Alternatively, the image capturing module may detect a portrait region in the live image and capture the portrait live image within that region in real time.
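The patent names OpenCV but not a concrete algorithm, so the following is only a hedged sketch of one plausible approach: seed grabCut with a rough body rectangle derived from a Haar-cascade face detection. The box expansion ratios and the function name are assumptions for illustration.

```python
# Hypothetical portrait extraction with OpenCV (cv2); grabCut seeded by
# face detection is one plausible realization, not the patented method.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_portrait(frame):
    """Return `frame` with the background zeroed out, or None if no
    portrait is found (which would trigger the prompt mentioned above)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # Grow the face box into a rough full-body rectangle (assumed ratios).
    x0, y0 = max(x - w, 0), max(y - h // 2, 0)
    x1 = min(x + 2 * w, frame.shape[1])
    y1 = min(y + 8 * h, frame.shape[0])
    mask = np.zeros(frame.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame, mask, (x0, y0, x1 - x0, y1 - y0),
                bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    keep = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return frame * keep[:, :, np.newaxis].astype(frame.dtype)
```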
S1003, the first client 111-1 performs video encoding on the live image of the portrait to obtain video data.
After obtaining the portrait live images at each moment, the first client 111-1 composes them into a person video and video-encodes the person video to obtain video data. The second preset frame rate at which the first client 111-1 composes the person video may be arbitrary and is not specifically limited in this application. The second preset frame rate may be less than or equal to the first preset frame rate; for example, if the first client 111-1 captures 30 frames within the current second, it may keep 10 of them at regular intervals to compose the person video.
As an embodiment, the first client 111-1 may screen out portrait live images meeting a specific condition from the captured portrait live images and compose only those into the person video. The specific condition may be, for example, that the sharpness of the portrait live image is above a threshold.
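A minimal sketch of the down-sampling and screening just described, assuming sharpness is measured as the variance of the Laplacian (a common heuristic that the patent does not prescribe) and that the threshold value is illustrative:

```python
import cv2

# Keep roughly every third captured frame (30 fps -> 10 fps) and drop
# frames whose Laplacian variance (a sharpness proxy) is below threshold.
def select_frames(portrait_frames, step=3, sharpness_threshold=100.0):
    selected = []
    for i, frame in enumerate(portrait_frames):
        if i % step != 0:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if cv2.Laplacian(gray, cv2.CV_64F).var() >= sharpness_threshold:
            selected.append(frame)
    return selected
```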
In a possible embodiment, the live image collected by the first client 111-1 may include a plurality of portraits. The first client 111-1 may then capture the portrait live image of each of the plurality of portraits simultaneously, obtain a person video for each portrait, and video-encode the plurality of person videos together to obtain video data. Alternatively, the first client 111-1 may video-encode the person video of each portrait separately, obtaining video data corresponding to each portrait.
Further, while the person video is being composed, time information for each portrait live image in the person video may be obtained; it may be recorded when the portrait live image is captured or when the person video is composed. For example, the time information of the first portrait live image may be from second 0 to second 1 of the person video.
For example, please refer to fig. 11, which is an example of separating a portrait live image from a live image. The first client 111-1 collects live image one, shown in (1) of fig. 11, at time t1 and separates from it the portrait live image shown in (a) of fig. 11. It collects live image two, shown in (2) of fig. 11, at time t2 and separates from it the portrait live image shown in (b) of fig. 11. The first client 111-1 may then video-encode the portrait live images shown in (a) and (b) in order of collection time to obtain video data.
Fig. 10 takes the video data generated by the first client 111-1 as an example; the other clients may generate their video data using the method shown in fig. 10, which is not repeated here.
S223, each client transmits the video data to the server 120.
After each client obtains its video data, it may transmit the video data to the server 120, so that the server 120 obtains the video data sent by all clients in the live virtual room.
S224, the server 120 generates a video data packet.
The server 120 may package the video data of all clients that joined the live virtual room to obtain a video data packet. Packaging may be understood as processing the video data of all clients of the live virtual room into one file package; specifically, for example, the server 120 may construct Real-time Transport Protocol (RTP) / Real-time Transport Control Protocol (RTCP) packets from the video data to obtain the video data packet.
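As a hedged illustration of what constructing an RTP packet involves (a real deployment would use an RTP/WebRTC library rather than packing headers by hand; the dynamic payload type 96 is an assumption), the 12-byte fixed header of RFC 3550 can be packed around an encoded frame like this:

```python
import struct

def make_rtp_packet(payload: bytes, seq: int, timestamp: int,
                    ssrc: int, payload_type: int = 96) -> bytes:
    # First byte: version=2, padding=0, extension=0, CSRC count=0 -> 0x80.
    # Second byte: marker=0 plus the payload type (dynamic types >= 96).
    header = struct.pack("!BBHII",
                         0x80,
                         payload_type & 0x7F,
                         seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF,
                         ssrc & 0xFFFFFFFF)
    return header + payload
```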
In one possible embodiment, since each client already holds its own user's video data, the server 120 may generate a different video data packet for each client: the packet for a given client includes the video data of the other clients in the live virtual room, excluding that client's own.
For example, the server 120 may generate the video data packet for the first client 111-1 from the video data of the second client 111-2 and the third client 111-3; the server 120 then only needs to send that packet to the first client 111-1, which reduces the amount of data transmitted.
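A small sketch of this per-client packaging strategy; the names and the dict-of-dicts shape are illustrative, and the actual wire format would be the RTP framing sketched above:

```python
# For each client, bundle only the other clients' video data.
def build_client_packets(videos_by_client: dict) -> dict:
    return {
        cid: {other: data
              for other, data in videos_by_client.items() if other != cid}
        for cid in videos_by_client
    }
```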
S225, the server 120 transmits the video data packet to each client.
After obtaining the video data packet, the server 120 transmits the video data packet to each client.
As an embodiment, Web Real-Time Communication (WebRTC) may be used between the server 120 and the multiple clients to send the video data packets to each client in real time, reducing the transmission delay between the server 120 and each client.
S226, each client plays the live video according to the video data packet.
Each client generates and displays the live video in the same manner. The generation of the live video is described below taking the first client 111-1 as an example, with reference to the flowchart shown in fig. 12:
S1201, the first client 111-1 decodes the video data packet.
After receiving the video data packet, the first client 111-1 decodes it, thereby obtaining the portrait live images of each client.
S1202, the first client 111-1 renders the decoded portrait live images into the live virtual room.
The first client 111-1 may render the portrait live image corresponding to each moment in each person video to its corresponding display position in the live virtual room, according to the display position associated with each portrait live image, thereby generating the live video.
Further, since the target background music or the example video may be associated with a corresponding background image, when generating the live video the first client 111-1 may overlay the portrait live image of each moment in each person video onto the background image of the live virtual room at the display position associated with that person video, generating a live video frame for each moment and thus the live video.
To more clearly illustrate the process of generating live video in an embodiment of the present application, the following description is provided in connection with the flowchart shown in fig. 13:
S1301, the first client 111-1 places the portrait live images according to the display positions corresponding to the person videos.
S1302, the first client 111-1 determines whether an associated background image exists on the server.
The first client 111-1 may obtain the background image from the server. If the target background music or the example video has no associated background image, the server holds none, and the first client 111-1 performs S1303, using a default background image, which may be set by the current user or defaulted by the first client 111-1. If the target background music or the example video is associated with a background image, the server holds one, and S1304 is performed, that is, the background image is obtained from the server.
S1305, the first client 111-1 overlays the live portrait image on the background image.
After the background image is obtained, if it is a still image, the portrait live images corresponding to the respective moments are simply overlaid onto it. If the background image is a background video, the portrait live image of each moment is overlaid onto the background video frame of the corresponding moment in the background video, producing the live video frames.
Continuing with the example of obtaining the current user's portrait live image shown in fig. 11: the background image at time t1 is shown in (a) of fig. 14, and the portrait live images of the current user A, the second user B and the third user C at time t1 are shown in (b) of fig. 14. According to the display positions associated with the portrait live images of the respective users, the first client 111-1 may overlay the portrait live images at time t1 onto the background image shown in (a), obtaining the live video frame shown in (c) of fig. 14.
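A minimal compositing sketch under stated assumptions: each extracted portrait frame has a zeroed-out background (as in the extraction sketch above), `positions` maps an account to the top-left corner of its display position, and every portrait fits inside the background at that position. None of these data shapes are prescribed by the patent; for a background video, `background` would simply be the background frame of the corresponding moment.

```python
import numpy as np

def compose_live_frame(background, portraits, positions):
    """Overlay each account's portrait frame onto `background` at its
    display position; non-zero pixels are treated as foreground."""
    frame = background.copy()
    for account, portrait in portraits.items():
        x, y = positions[account]
        h, w = portrait.shape[:2]
        roi = frame[y:y + h, x:x + w]        # view into the output frame
        mask = portrait.sum(axis=2) > 0      # assumes zeroed-out background
        roi[mask] = portrait[mask]
    return frame
```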
In a possible embodiment, the live virtual room further includes a play example video selection control. During live video playback, any user may trigger this control, and the client determines, according to the triggering operation, whether to display the example window in the live virtual room.
For example, any user may trigger the example video selection control; if the selection indicates playing, the client determines that the example video is played during the live broadcast, and if the selection indicates not playing, the client determines that the example video is not played during live video playback.
Further, to keep the example video synchronized with the live video, the client plays the example video in step with the elapsed playing time of the live video.
S227, the server 120 generates a scoring result.
Upon determining that live video playback has reached a preset duration, for example the total duration of the target background music or the total duration of the example video, the server 120 determines that live video playback has ended. Alternatively, when any user performs an ending operation on the live video playback, the client corresponding to that user determines that playback has ended, generates an end instruction, and sends it to the server 120 or to the other clients.
The server 120 may determine each user's scoring result during live video playback from that user's portrait live images at each moment, and send the scoring results of all users to each user's client, so that every client displays the scoring results of all users.
How the server 120 determines the scoring result for each user is exemplified as follows:
when the character video is a dance video corresponding to the user, the character video is scored, and the character video is essentially scored, namely, the degree of agreement between the character action and the standard action in the character video frame is determined, and the higher the degree of agreement is, the more standard the dance action of the user is, and the higher the corresponding score is. Thus, in embodiments of the present application, server 120 may determine a score for each user at each time based on the degree of matching between the live image of the person of each user at each time and the target person video frame of the example video at the corresponding time. Specifically, for example, the server 120 may directly use the matching degree as the score at each time, or multiply the matching degree by a fixed value to obtain the score at each time.
The server 120 may determine the score of each portrait live image in real time during live video playback, or may calculate the scores after playback ends.
After the scores of each user at each moment are obtained, the user's scoring result for the live video playback is derived from those per-moment scores; for example, the per-moment scores may be summed, or summed with weights, to obtain the scoring result.
As an embodiment, because the difficulty modes selected by the users may be the same or different, the server 120 may also take the user's target difficulty mode into account when determining the scoring result, determining the final result from the target difficulty mode together with the user's score. For example, the server 120 may pre-store a weight coefficient for each difficulty mode and, after obtaining the target difficulty mode, multiply the corresponding weight coefficient by the user's score to obtain the final scoring result.
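A sketch of this aggregation; the weight coefficients below are invented for illustration, since the patent gives no concrete values:

```python
# Assumed per-difficulty weight coefficients (not from the patent).
DIFFICULTY_WEIGHTS = {"easy": 0.8, "normal": 1.0, "hard": 1.2}

def final_score(per_moment_scores, difficulty="normal"):
    # Sum the per-moment scores, then apply the difficulty weight.
    return DIFFICULTY_WEIGHTS[difficulty] * sum(per_moment_scores)
```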
In order to more clearly describe the manner in which the scoring results are determined, the manner in which the scoring results are determined according to the embodiment of the present application is described by way of example with reference to the flowchart shown in fig. 15:
S1501, live video playback starts.
S1502, the server 120 acquires a live portrait image.
S1503, the server 120 detects core key points in the portrait live image.
After the server 120 obtains the video data, it can obtain the portrait live image corresponding to each moment and locate the core key points in the portrait live image using artificial-intelligence detection. Core key points are key points used to locate human motion, for example 22 human-body key points.
S1504, the server 120 determines the matching degree between the core key points in the portrait live image and those in the target person video frame at the corresponding moment of the example video.
After the core key points are obtained, they may be compared with the core key points in the target person video frame at the corresponding moment to score the person video frame. Specifically, for example, the Euclidean distance between each core key point in the person video frame and the corresponding core key point in the target person video frame is determined, and the distance is used as the matching degree, with a smaller distance indicating a closer match.
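A hedged sketch of this comparison, assuming 22 normalized (x, y) key points per frame; mapping the mean distance to a 0-100 score is an illustrative assumption, since the patent only states that the Euclidean distance serves as the matching degree:

```python
import numpy as np

def frame_score(live_kpts, target_kpts):
    """Score one portrait live image against the example video frame at
    the same moment from per-keypoint Euclidean distances."""
    live = np.asarray(live_kpts, dtype=float)      # shape (22, 2)
    target = np.asarray(target_kpts, dtype=float)  # shape (22, 2)
    dists = np.linalg.norm(live - target, axis=1)  # distance per keypoint
    return max(0.0, 100.0 * (1.0 - dists.mean()))  # closer -> higher score
```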
S1505, the server 120 accumulates the scores.
The scores of the user's portrait live images at all moments are accumulated.
S1506, the server 120 determines whether playing the live video ends.
If the server 120 determines that live video playback has ended, S1507 is performed, that is, the user's scoring result is obtained. If the server 120 determines that playback has not ended, steps S1502 to S1506 are repeated until the live video ends.
In one possible embodiment, the scoring results include one or both of each user's score during live video playback and each user's score ranking.
In one possible embodiment, since each user selects a different display position, the action required of the user at a given moment may differ by position. Therefore, in the embodiment of the present application, the scoring of a portrait live image may be obtained by comparing it with the target person video frame of the example video at the corresponding moment and for the corresponding display position.
S228, the server 120 transmits the scoring result to each client.
After obtaining the scoring result for each user, the server 120 may send the scoring result for each user to the respective clients.
And S229, each client displays the grading result.
Each client receives the scoring results sent by the server 120 and displays the scoring result of each user, or displays the scoring results in response to a score display operation performed by the user.
In one possible embodiment, the scoring results may be presented directly in the live virtual room, or the scoring results of the individual users may be presented in the form of a sub-interface in the live virtual room.
Referring to fig. 16, a scoring result 1500 is shown on the first client 111-1. The scoring result 1500 includes the ranking of each user who joined the live virtual room; in fig. 16, the scoring results are shown as "No.1 A, 98 points; No.2 B, 97 points; No.3 B, 96 points".
In another possible embodiment, S227 to S228 are optional steps; for example, each client may determine the scoring result of its own user, in the same manner as the server 120 determines scoring results, which is not repeated here. After obtaining its scoring result, each client may share it with the other clients.
In a second possible scenario, discussed based on fig. 1, a live video playing method according to an embodiment of the present application is described below.
Unlike the interactive process discussed in fig. 2, in the second possible scenario, relevant data for obtaining live video is rendered directly by the server 120.
After the server 120 obtains the video data of each client in the live virtual room (the processes by which the clients generate and the server obtains the video data are as discussed for fig. 2 and are not repeated here), the server 120 may render the portrait live image corresponding to each moment in each person video onto its corresponding display position, generating the data of the live video. The server 120 sends this data to each client, so that each client receives and directly displays the live video.
In order to more clearly illustrate the live video playing method according to the embodiment of the present application, the following description is provided with reference to the flowchart shown in fig. 17:
S1701, the first client 111-1 confirms joining the live virtual room in response to a live virtual room identification input operation.
Fig. 17 illustrates an example in which the current user corresponding to the first client 111-1 is to join the live virtual room.
S1702, the first client 111-1 obtains target background music in response to the background music selection operation.
S1703, the first client 111-1 confirms whether the display positions are determined in a default order.
If the display positions in the queue arrangement mode are not determined in the default order, S1704 is performed: the target display position is obtained in response to a display position selection operation. If the display positions in the queue arrangement mode are determined in the default order, S1705 is performed: the target display position is determined by the order in which the current user joined the live virtual room.
S1706, whether the target example video is played is confirmed according to the selection operation for playing the target example video.
If the user chooses to play the target example video, S1707 is performed, that is, the target example video is played. If the user chooses not to play it, S1708 is performed, that is, the target example video is not played.
S1709, in response to the difficulty mode selection operation, determining a target difficulty mode.
S1710, starting to play the live video and displaying the live video.
S1711, ending the live video, and displaying the grading result.
The scoring result may be obtained by referring to the foregoing discussion, and will not be described herein.
Based on the same inventive concept, an embodiment of the present application provides a live video playing device, please refer to fig. 18, which includes:
the first obtaining module 1801 is configured to obtain a live image corresponding to a current user account when the current user account joins a live virtual room;
a second obtaining module 1802, configured to obtain a live image of a portrait corresponding to the current account from the live image;
a third obtaining module 1803, configured to obtain live images of other user accounts in the live virtual room;
the display module 1804 is configured to display the live portrait image of the current account and the live portrait images of other user accounts in a background image of the live virtual room in an aligned manner.
In one possible embodiment, the display module 1804 is further configured to:
responding to a room creation operation, displaying a live invitation interface, on which a live virtual room identifier and an invitation control are displayed; responding to an operation triggering the invitation control, displaying a contact list; and responding to a confirmation operation of selecting participating contacts on the contact list, displaying information of the participating contacts who join the live virtual room according to the live invitation, and sending the live invitation to each participating contact.
In one possible embodiment, the apparatus further comprises a determination module 1805, wherein:
a display module 1804 further configured to display a configuration parameter selection interface in response to a parameter setting operation for the live virtual room; the configuration parameter selection interface includes parameter selection controls for selecting various configuration parameters, where the configuration parameters include one or both of background music and a queue arrangement mode;
a determining module 1805, configured to determine a selected configuration parameter in response to a triggering operation for a parameter selection control;
the display module 1804 is further configured to control, when the live portrait image of the current account and live portrait images of other user accounts are displayed in the background image of the live virtual room in an aligned manner, display of live portrait images of the user accounts in the live virtual room according to the selected configuration parameters.
In one possible embodiment, an example video play selection control is also provided in the configuration parameter selection interface or in the live virtual room, and the display module 1804 is further configured to:
in response to a triggering operation for the example video play selection control, an example window is presented in the live virtual room and the example video is played in the example window.
In one possible embodiment, when the selected configuration parameters include a queue arrangement mode:
a display module 1804, further configured to display each display bit included in the queuing mode in response to operation with respect to the queuing mode;
the determining module 1805 is further configured to determine, according to the display positions selected by each user account, a display position corresponding to each user account in the live virtual room;
the display module 1804 is specifically configured to display the portrait live image of each user account on the corresponding display location according to the display location corresponding to each user account in the live virtual room.
In one possible embodiment, the determining module 1805 is specifically configured to:
receiving display positions corresponding to the other user accounts that have joined the live virtual room, sent by the server, and obtaining the display position corresponding to the current user account in response to a display position selection operation; or,
and determining the corresponding display positions of the user accounts in the live virtual rooms according to the order of joining the live virtual rooms by the user accounts.
In one possible embodiment, the third obtaining module 1803 is specifically configured to:
performing video coding on a person video composed of the portrait live images to obtain video data;
transmitting the video data to a server; and receiving the video data packet sent by the server, and obtaining the portrait live image of each user account in the live virtual room from the received video data packet.
In a possible embodiment, the apparatus further includes a prompting module 1806, where the prompting module 1806 is specifically configured to: issue a prompt message if it is detected that no portrait exists in the live image.
In one possible embodiment, the video data further includes time information for each person video frame; the display module 1804 is specifically configured to:
synchronize the portrait live image corresponding to each user account according to the display position corresponding to each user account in the live virtual room and the time information of the portrait live images in the video data packet, and overlay the portrait live image corresponding to each user account onto the background image of the live virtual room.
In a possible embodiment, the display module is further configured to: display, on the live virtual room, the scoring result of each user account during live video playback, in response to an ending operation on the live video playback or when the playing duration of the live video reaches a preset duration.
In one possible embodiment, the scoring result of each user account in the live video playing process is obtained by any one of the following modes:
receiving the scoring result of each user account from a server; or,
determining the score of each user account at each moment according to the matching degree between the live image of the person of each user account at each moment and the target person video frame at the corresponding moment in the selected example video;
and obtaining a scoring result of each user account in the video live broadcast playing process according to the scoring of each user account at each moment.
Based on the same inventive concept, an embodiment of the present application provides a computer device 1900, please refer to fig. 19, which includes a processor 1901 and a memory 1902.
The processor 1901 may be a central processing unit (CPU), a digital processing unit, or the like. The specific connection medium between the memory 1902 and the processor 1901 is not limited in the embodiments of the application. In fig. 19, the memory 1902 and the processor 1901 are connected through the bus 1903, which is shown with a thick line; the connection manner between other components is only schematically illustrated and is not limiting. The bus 1903 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 19, but this does not mean there is only one bus or one type of bus.
The memory 1902 may be a volatile memory, such as a random-access memory (RAM); the memory 1902 may also be a non-volatile memory, such as a read-only memory, a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 1902 may also be a combination of the above.
The processor 1901, when invoking the computer program stored in the memory 1902, performs any of the live video playing methods discussed above, and may also be used to implement the functions of the terminal or server discussed above, or of the apparatus shown in fig. 18.
Based on the same inventive concept, an embodiment of the present application provides a storage medium storing computer instructions that, when executed on a computer, cause the computer to perform any of the live video playback methods discussed above.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Based on the same inventive concept, embodiments of the present application provide a computer program product comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform any of the live video playback methods described above.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware associated with program instructions, where the foregoing program may be stored in a computer readable storage medium, and when executed, the program performs steps including the above method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk or an optical disk, or the like, which can store program codes.
Alternatively, the above-described integrated units of the present application may be stored in a computer-readable storage medium if implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in essence or a part contributing to the prior art in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, ROM, RAM, magnetic or optical disk, or other medium capable of storing program code.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (13)

1. A live video playing method, applied to a first client, the method comprising:
when a current user account joins in a live virtual room, acquiring a live image corresponding to the current user account;
in response to an operation on a queue arrangement mode, displaying each display position included in the queue arrangement mode, wherein the queue arrangement mode refers to the relative positions of the display positions in a live virtual room;
determining a display position corresponding to each user account in the live virtual room according to the display position selected by each user account, wherein the determining the display position corresponding to each user account in the live virtual room according to the display position selected by each user account specifically comprises: receiving display positions corresponding to other user accounts added into the live virtual room, which are sent by a server, and responding to display position selection operation to obtain display positions corresponding to the current user account; or determining corresponding display positions of each user account in the live virtual room according to the order of adding each user account into the live virtual room;
Acquiring a portrait live image corresponding to the current user account from the live image;
acquiring the portrait live images of other user accounts in the live virtual room;
arranging and displaying the live portrait images of the current user account and the live portrait images of the other user accounts in a background image of the live virtual room;
in response to an ending operation on the live video playing, or when the duration of the live video playing reaches a preset duration, displaying on the live virtual room the scoring results of the user accounts during the live video playing;
the step of displaying the live image of the current account and the live image of the other user accounts in the background image of the live virtual room in an aligned manner specifically includes: displaying the portrait live image of each user account on its corresponding display position according to the display position corresponding to each user account in the live virtual room.
2. The method of claim 1, wherein the method further comprises:
responding to a parameter setting operation for the live virtual room, displaying a configuration parameter selection interface; the configuration parameter selection interface includes parameter selection controls for selecting various configuration parameters, where the configuration parameters include one or both of background music and the queue arrangement mode;
determining the selected configuration parameters in response to a triggering operation on the parameter selection controls; and
when the portrait live images of the current account and the portrait live images of the other user accounts are displayed in the background image of the live virtual room in an arranged manner, controlling the display of the portrait live images of the user accounts in the live virtual room according to the selected configuration parameters.
3. The method of claim 2, wherein an example video play selection control is also provided in the configuration parameter selection interface or the live virtual room, the method further comprising:
and responding to the triggering operation of the example video playing selection control, displaying an example window in the live virtual room, and playing the example video in the example window.
4. A method as claimed in any one of claims 1 to 3, wherein said obtaining a live portrait image of other user accounts in said live virtual room comprises:
performing video coding on a person video composed of the portrait live images to obtain video data;
transmitting the video data to a server; and
and receiving the video data packet sent by the server, and obtaining the portrait live broadcast image of each user account in the live broadcast virtual room from the received video data packet.
5. The method of claim 4, wherein prior to the acquiring the live image of the portrait from the live image, the method further comprises:
if it is detected that no portrait exists in the live image, sending out a prompt message.
6. The method of claim 4, wherein the video data further includes time information of each portrait live image; the step of displaying the portrait live image of the current account and the portrait live images of the other user accounts in the background image of the live virtual room in an arranged manner specifically includes:
synchronizing the portrait live image corresponding to each user account according to the display position corresponding to each user account in the live virtual room and the time information of the portrait live images in the video data packet, and overlaying the portrait live image corresponding to each user account onto the background image of the live virtual room.
7. The method of claim 1, wherein the scoring result of each user account during live video playing is obtained by any one of the following means:
receiving the scoring result of each user account from a server; or,
Determining the score of each user account at each moment according to the matching degree between the live image of the person of each user account at each moment and the target person video frame at the corresponding moment in the selected example video;
and obtaining a scoring result of each user account in the video live broadcast playing process according to the scoring of each user account at each moment.
8. A live video playback device, the device being applied to a first client, the device comprising:
the first acquisition module is used for acquiring a live image corresponding to the current user account when the current user account joins the live virtual room;
the display module is used for responding to the operation aiming at the queue arrangement mode, and displaying each display position included in the queue arrangement mode, wherein the queue arrangement mode refers to the relative position of each display position in the live virtual room;
the determining module is configured to determine, according to the display positions selected by the user accounts, a display position corresponding to the user accounts in the live virtual room, where the determining, according to the display positions selected by the user accounts, a display position corresponding to the user accounts in the live virtual room specifically includes: receiving display positions corresponding to other user accounts added into the live virtual room, which are sent by a server, and responding to display position selection operation to obtain display positions corresponding to the current user account; or determining corresponding display positions of each user account in the live virtual room according to the order of adding each user account into the live virtual room;
The second acquisition module is used for acquiring a portrait live image corresponding to the current user account from the live image;
the third acquisition module is used for acquiring the portrait live broadcast images of other user accounts in the live broadcast virtual room;
the display module is configured to display, in an arranged manner, the portrait live image of the current user account and the portrait live images of the other user accounts in a background image of the live virtual room, and to display, on the live virtual room, the scoring result of each user account during the live video playing in response to an ending operation on the live video playing or when the duration of the live video playing reaches a preset duration;
the display module is specifically configured to display, according to display positions corresponding to the user accounts in the live virtual room, a portrait live image of each user account on the corresponding display position.
9. The apparatus of claim 8, wherein the third acquisition module is specifically configured to:
performing video coding on a person video composed of the portrait live images to obtain video data;
transmitting the video data to a server; and receiving the video data packet sent by the server, and obtaining the portrait live broadcast image of each user account in the live broadcast virtual room from the received video data packet.
10. The apparatus of claim 9, wherein the third acquisition module is specifically configured to:
if it is detected that no portrait exists in the live image, sending out a prompt message.
11. The apparatus of claim 9, wherein the video data further includes time information of each portrait live image; the display module is specifically configured to:
synchronize the portrait live image corresponding to each user account according to the display position corresponding to each user account in the live virtual room and the time information of the portrait live images in the video data packet, and overlay the portrait live image corresponding to each user account onto the background image of the live virtual room.
12. A computer device, comprising:
at least one processor, and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, and the at least one processor implements the method of any one of claims 1 to 8 by executing the instructions stored in the memory.
13. A storage medium storing computer instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 8.
CN202011038362.1A 2020-09-28 2020-09-28 Live video playing method, device, equipment and medium Active CN112188223B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011038362.1A CN112188223B (en) 2020-09-28 2020-09-28 Live video playing method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN112188223A CN112188223A (en) 2021-01-05
CN112188223B true CN112188223B (en) 2023-12-01

Family

ID=73945189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011038362.1A Active CN112188223B (en) 2020-09-28 2020-09-28 Live video playing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN112188223B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113596353A (en) * 2021-08-10 2021-11-02 广州艾美网络科技有限公司 Somatosensory interaction data processing method and device and somatosensory interaction equipment
CN116170618B (en) * 2022-12-29 2023-11-14 北京奇树有鱼文化传媒有限公司 Method and device for calculating play quantity, electronic equipment and readable storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016058861A (en) * 2014-09-09 2016-04-21 みこらった株式会社 Sports game live watching system, and image collection/distribution facility device and watcher terminal for sports game live watching system
CN106789991A (en) * 2016-12-09 2017-05-31 福建星网视易信息系统有限公司 A kind of multi-person interactive method and system based on virtual scene
CN106803966A (en) * 2016-12-31 2017-06-06 北京星辰美豆文化传播有限公司 A kind of many people's live network broadcast methods, device and its electronic equipment
WO2018095129A1 (en) * 2016-11-26 2018-05-31 广州华多网络科技有限公司 Method and device for playing live video
CN110418155A (en) * 2019-08-08 2019-11-05 腾讯科技(深圳)有限公司 Living broadcast interactive method, apparatus, computer readable storage medium and computer equipment
CN111083516A (en) * 2019-12-31 2020-04-28 广州酷狗计算机科技有限公司 Live broadcast processing method and device
WO2020134841A1 (en) * 2018-12-28 2020-07-02 广州市百果园信息技术有限公司 Live broadcast interaction method and apparatus, and system, device and storage medium
CN111405340A (en) * 2020-03-12 2020-07-10 钟杰东 5G video webpage technology management system and method
CN111654713A (en) * 2020-04-20 2020-09-11 视联动力信息技术股份有限公司 Live broadcast interaction method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106254311B (en) * 2016-07-15 2020-12-08 腾讯科技(深圳)有限公司 Live broadcast method and device and live broadcast data stream display method and device

Also Published As

Publication number Publication date
CN112188223A (en) 2021-01-05

Similar Documents

Publication Publication Date Title
US20220410007A1 (en) Virtual character interaction method and apparatus, computer device, and storage medium
CN110536725A (en) Personalized user interface based on behavior in application program
CN112073299B (en) Plot chat method
CN108986192B (en) Data processing method and device for live broadcast
CN113209632B (en) Cloud game processing method, device, equipment and storage medium
CN110472099B (en) Interactive video generation method and device and storage medium
CN111835531B (en) Session processing method, device, computer equipment and storage medium
CN112188223B (en) Live video playing method, device, equipment and medium
CN111314714B (en) Game live broadcast method and device
CN109754329B (en) Electronic resource processing method, terminal, server and storage medium
CN110677685B (en) Network live broadcast display method and device
CN114501104B (en) Interaction method, device, equipment, storage medium and product based on live video
WO2022078167A1 (en) Interactive video creation method and apparatus, device, and readable storage medium
US20230356082A1 (en) Method and apparatus for displaying event pop-ups, device, medium and program product
CN113996053A (en) Information synchronization method, device, computer equipment, storage medium and program product
WO2022267729A1 (en) Virtual scene-based interaction method and apparatus, device, medium, and program product
CN112752153A (en) Video playing processing method, intelligent device and storage medium
CN112423143B (en) Live broadcast message interaction method, device and storage medium
CN111479119A (en) Method, device and system for collecting feedback information in live broadcast and storage medium
CN113824983A (en) Data matching method, device, equipment and computer readable storage medium
CN114549744A (en) Method for constructing virtual three-dimensional conference scene, server and AR (augmented reality) equipment
CN113392690A (en) Video semantic annotation method, device, equipment and storage medium
CN114430494B (en) Interface display method, device, equipment and storage medium
CN114139491A (en) Data processing method, device and storage medium
KR20220159968A (en) Conference handling method and system using avatars

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant