CN110971930A - Live virtual image broadcasting method, device, terminal and storage medium - Google Patents


Info

Publication number
CN110971930A
CN110971930A
Authority
CN
China
Prior art keywords
avatar
anchor
target
identification
live video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911320650.3A
Other languages
Chinese (zh)
Other versions
CN110971930B (en)
Inventor
汤伯超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201911320650.3A priority Critical patent/CN110971930B/en
Publication of CN110971930A publication Critical patent/CN110971930A/en
Application granted granted Critical
Publication of CN110971930B publication Critical patent/CN110971930B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/61Network physical structure; Signal processing
    • H04N21/6106Network physical structure; Signal processing specially adapted to the downstream path of the transmission network
    • H04N21/6125Network physical structure; Signal processing specially adapted to the downstream path of the transmission network involving transmission via Internet

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application discloses a live avatar broadcasting method, which belongs to the technical field of live broadcasting. The method comprises the following steps: the anchor application program acquires a target avatar identifier; the anchor application program sends the target avatar identifier to the avatar rendering application program; the avatar rendering application program sends an avatar model acquisition request carrying the target avatar identifier to the data server; the data server sends the target avatar model corresponding to the target avatar identifier to the avatar rendering application program; the avatar rendering application program acquires an anchor live video, generates an avatar live video based on the target avatar model and the anchor live video, and sends the avatar live video to the anchor application program. Because generation of the avatar live video and live broadcasting are completed by two independent application programs, the stuttering problem of the live video in the anchor application program is avoided.

Description

Live virtual image broadcasting method, device, terminal and storage medium
Technical Field
The present application relates to the field of live broadcast technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for live broadcast of an avatar.
Background
With the development of live broadcast technology, besides live broadcasts of real people, live broadcasts of avatars have also begun to enter people's field of vision.
Currently, live avatar broadcasting is mainly realized through a live broadcast application program. When an anchor chooses to use an avatar for live broadcasting, the live broadcast application program obtains an avatar model from a background server. The live broadcast application program then acquires the anchor's live video and, through a built-in three-dimensional rendering plug-in, renders the avatar model according to the live video to generate the avatar live video.
In the process of implementing the present application, the inventor found that the prior art has at least the following problem:
the three-dimensional rendering plug-in built into the live broadcast application program occupies the application program's resources when it runs, which affects the live broadcast function of the application program and causes the live video in the anchor application program to stutter.
Disclosure of Invention
The embodiment of the application provides a live avatar broadcasting method, which can solve the problem of stuttering live video. The technical scheme is as follows:
in a first aspect, a method for live broadcasting of an avatar is provided, the method comprising:
the anchor application program acquires a target virtual image identifier;
the anchor application sends the target avatar identification to an avatar rendering application;
the virtual image rendering application program sends a virtual image model obtaining request to a data server, wherein the virtual image model obtaining request carries the target virtual image identifier;
the data server sends a target avatar model corresponding to the target avatar identification to the avatar rendering application program;
the avatar rendering application obtains an anchor live video, generates an avatar live video based on the target avatar model and the anchor live video, and sends the avatar live video to the anchor application.
In one possible implementation, the anchor application obtains a target avatar identification, including:
the anchor application program sends an avatar identifier query request to the data server, wherein the avatar identifier query request carries the identifier of the target anchor account logged in to by the anchor application program;
and the data server sends the target avatar identifier corresponding to the identifier of the target anchor account to the anchor application program according to the stored correspondence between the identifier of the anchor account and the avatar identifier.
In one possible implementation, the anchor application sending the target avatar identification to an avatar rendering application, including:
the anchor application program sends the target avatar identification and the identification of the target anchor account to an avatar rendering application program;
the virtual image model acquisition request also carries an identifier of the target anchor account;
the data server sending a target avatar model corresponding to a target avatar identification to the avatar rendering application, comprising:
and if the data server determines that the target avatar identification and the identification of the target anchor account are correspondingly stored in the corresponding relation between the identification of the anchor account and the avatar identification, sending a target avatar model corresponding to the target avatar identification to the avatar rendering application program.
In one possible implementation, the method further includes:
if the data server determines that the target avatar identification and the identification of the target anchor account are not correspondingly stored in the corresponding relation between the identification of the anchor account and the avatar identification, a request failure message is sent to the avatar rendering application program;
the avatar rendering application sending an avatar barring message to the anchor application;
the anchor application displays an avatar barring alert message.
In one possible implementation, the avatar rendering application obtains an anchor live video and generates an avatar live video based on the target avatar model and the anchor live video, including:
the avatar rendering application program acquires an anchor live video, performs facial expression recognition on the anchor in the video frames of the anchor live video, and determines the expression information of the anchor in each video frame;
and the avatar rendering application program controls the avatar model to perform the expressions corresponding to the expression information of the anchor in each video frame, so as to generate the avatar live video.
In one possible implementation, the avatar rendering application obtains an anchor live video and generates an avatar live video based on the target avatar model and the anchor live video, including:
the avatar rendering application program acquires an anchor live video, performs action recognition on the anchor in the video frames of the anchor live video, and determines the action information of the anchor in each video frame;
and the avatar rendering application program controls the avatar model to perform the actions corresponding to the action information of the anchor in each video frame, so as to generate the avatar live video.
In a second aspect, a method for live broadcasting of an avatar is provided, the method comprising:
receiving a target avatar identifier sent by an anchor application program;
sending an avatar model acquisition request to a data server, wherein the avatar model acquisition request carries the target avatar identifier;
receiving the target avatar model corresponding to the target avatar identifier sent by the data server;
acquiring an anchor live video, generating an avatar live video based on the target avatar model and the anchor live video, and sending the avatar live video to the anchor application program.
In one possible implementation, the receiving a target avatar identifier sent by an anchor application program includes:
receiving the target avatar identifier sent by the anchor application program and the identifier of the target anchor account logged in to by the anchor application program;
the avatar model acquisition request also carries the identifier of the target anchor account;
the receiving the target avatar model corresponding to the target avatar identifier sent by the data server comprises:
receiving the target avatar model corresponding to the target avatar identifier sent by the data server when it is determined that the target avatar identifier and the identifier of the target anchor account are correspondingly stored in the correspondence between the identifier of the anchor account and the avatar identifier.
In one possible implementation, the method further includes:
receiving a request failure message sent by the data server, wherein the request failure message is sent to the avatar rendering application program by the data server when the data server determines that the target avatar identifier and the identifier of the target anchor account are not correspondingly stored in the corresponding relationship between the identifier of the anchor account and the avatar identifier;
sending an avatar barring message to the anchor application to cause the anchor application to display an avatar barring alert message.
In one possible implementation, the obtaining an anchor live video and generating an avatar live video based on the target avatar model and the anchor live video includes:
acquiring an anchor live video, performing facial expression recognition on the anchor in the video frames of the anchor live video, and determining the expression information of the anchor in each video frame;
and controlling the avatar model to perform the expressions corresponding to the expression information of the anchor in each video frame, so as to generate the avatar live video.
In one possible implementation, the obtaining an anchor live video and generating an avatar live video based on the target avatar model and the anchor live video includes:
acquiring an anchor live video, performing action recognition on the anchor in the video frames of the anchor live video, and determining the action information of the anchor in each video frame;
and controlling the avatar model to perform the actions corresponding to the action information of the anchor in each video frame, so as to generate the avatar live video.
In a third aspect, there is provided an apparatus for live avatar, the apparatus comprising:
the first receiving module is used for receiving a target virtual image identifier sent by the anchor application program;
the acquiring module is used for sending an avatar model acquiring request to the data server, wherein the avatar model acquiring request carries the target avatar identification;
the second receiving module is used for receiving the target virtual image model corresponding to the target virtual image identifier sent by the data server;
and the generation module is used for acquiring an anchor live video, generating an avatar live video based on the target avatar model and the anchor live video, and sending the avatar live video to the anchor application program.
In a possible implementation manner, the first receiving module is configured to:
receiving the target avatar identification sent by the anchor application program and the identification of a target anchor account logged by the anchor application program;
the virtual image model acquisition request also carries an identifier of the target anchor account;
the second receiving module is configured to:
and receiving a target avatar model corresponding to the target avatar identification sent by the data server when the target avatar identification and the identification of the target anchor account are determined to be correspondingly stored in the corresponding relation between the identification of the anchor account and the avatar identification.
In a possible implementation manner, the second receiving module is further configured to:
receiving a request failure message sent by the data server, wherein the request failure message is sent to the avatar rendering application program by the data server when the data server determines that the target avatar identifier and the identifier of the target anchor account are not correspondingly stored in the corresponding relationship between the identifier of the anchor account and the avatar identifier;
sending an avatar barring message to the anchor application to cause the anchor application to display an avatar barring alert message.
In one possible implementation manner, the generating module is configured to:
acquire an anchor live video, perform facial expression recognition on the anchor in the video frames of the anchor live video, and determine the expression information of the anchor in each video frame;
and control the avatar model to perform the expressions corresponding to the expression information of the anchor in each video frame, so as to generate the avatar live video.
In one possible implementation manner, the generating module is configured to:
acquire an anchor live video, perform action recognition on the anchor in the video frames of the anchor live video, and determine the action information of the anchor in each video frame;
and control the avatar model to perform the actions corresponding to the action information of the anchor in each video frame, so as to generate the avatar live video.
In a fourth aspect, a terminal is provided, where the terminal includes a processor and a memory, where the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the operation performed by the method for live avatar broadcast according to the first aspect.
In a fifth aspect, a computer-readable storage medium is provided, wherein at least one instruction is stored in the computer-readable storage medium, and the at least one instruction is loaded and executed by a processor to implement the operations performed by the method for live avatar broadcast according to the first aspect.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
according to the embodiment of the application, after the avatar rendering application program receives the target avatar identification sent by the anchor application program, the data server obtains the target avatar model corresponding to the target avatar identification. After the target avatar model is obtained, the avatar rendering application program obtains the anchor live video, generates an avatar live video according to the anchor live video and the target avatar model, and sends the avatar live video to the anchor application program for live broadcast. Therefore, the generation of the live video of the virtual image and the live broadcast are performed in two independent application programs, so that the live broadcast function of the anchor application program cannot be influenced in the process of generating the live video of the virtual image, and the problems that the live video in the anchor application program is unsmooth and unsmooth are solved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
fig. 2 is a flowchart of a live virtual image broadcasting method provided in an embodiment of the present application;
fig. 3 is a flowchart of a live virtual image broadcasting method provided in an embodiment of the present application;
FIG. 4 is an interface diagram of an anchor application provided by an embodiment of the present application;
FIG. 5 is an interface diagram of an anchor application provided by an embodiment of the present application;
fig. 6 is a flowchart of a live virtual image method provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of an apparatus for live broadcasting of an avatar according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application. Referring to fig. 1, the implementation environment may include an anchor terminal, a data server, a viewer terminal, and a live server.
The anchor terminal may be provided with an avatar rendering application and an anchor application. The avatar rendering application may have a three-dimensional rendering function for rendering the avatar model to generate an avatar live video. The anchor application may have a live broadcast function for live broadcasting of the avatar live video or the anchor live video. The data server is used for storing the avatar models corresponding to the anchors. The viewer terminal is used for displaying the avatar live video or the anchor live video for viewers to watch. The live server is used for forwarding the avatar live video or the anchor live video sent by the anchor terminal to each viewer terminal.
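The message flow among these components can be summarized in a short sketch. The component and message names below are illustrative assumptions made for this summary; the patent does not prescribe any concrete interface.

```python
# Illustrative summary of the data flow in the Fig. 1 environment.
# Each tuple is (sender, receiver, payload); all names are assumptions
# made for this sketch, not terms defined by the patent.
PIPELINE = [
    ("anchor application", "avatar rendering application", "target avatar identifier"),
    ("avatar rendering application", "data server", "avatar model acquisition request"),
    ("data server", "avatar rendering application", "target avatar model"),
    ("avatar rendering application", "anchor application", "avatar live video"),
    ("anchor application", "live server", "avatar live video"),
    ("live server", "viewer terminal", "avatar live video"),
]

# The rendering work happens entirely on the avatar rendering
# application's side, so the anchor application only ever forwards
# finished avatar video frames.
senders = [hop[0] for hop in PIPELINE]
```

Note how the anchor application appears as a sender only for the identifier and the already-rendered video, which is the separation of concerns the patent relies on.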
Fig. 2 is a flowchart of a live avatar broadcast method according to an embodiment of the present application, where the method may be implemented by an avatar rendering application installed in an anchor terminal. Referring to fig. 2, the process flow of the method may include the following steps:
Step 201, receiving the target avatar identifier sent by the anchor application program.
Step 202, sending an avatar model acquisition request to the data server, wherein the avatar model acquisition request carries a target avatar identifier.
Step 203, receiving the target avatar model corresponding to the target avatar identifier sent by the data server.
Step 204, acquiring the anchor live video, generating the avatar live video based on the target avatar model and the anchor live video, and sending the avatar live video to the anchor application program.
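Steps 201 to 204 can be sketched as a minimal, self-contained program. All class and function names here are hypothetical; the patent does not specify how the request or the rendering is implemented.

```python
# Hypothetical sketch of steps 201-204 from the avatar rendering
# application's point of view; every name below is an assumption.

class DataServer:
    """Stand-in for the data server that stores avatar models."""
    def __init__(self, models):
        self.models = models  # maps avatar identifier -> avatar model

    def get_avatar_model(self, request):
        # Steps 202-203: the request carries the target avatar
        # identifier and the server returns the corresponding model.
        return self.models[request["avatar_id"]]

def generate_avatar_video(avatar_model, anchor_video):
    # Placeholder for rendering the model frame by frame (step 204).
    return [f"{avatar_model}:{frame}" for frame in anchor_video]

def run_rendering_app(target_avatar_id, data_server, anchor_video):
    # Step 201: the target avatar identifier has been received from the
    # anchor application. Step 202: request the model from the server.
    model = data_server.get_avatar_model({"avatar_id": target_avatar_id})
    # Step 204: generate the avatar live video to send back.
    return generate_avatar_video(model, anchor_video)

server = DataServer({"avatar-1": "panda-model"})
avatar_video = run_rendering_app("avatar-1", server, ["frame0", "frame1"])
```

The point of the sketch is that the anchor application never touches the model: it supplies the identifier and receives finished avatar frames.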
Fig. 3 is a flowchart of a live avatar broadcast method according to an embodiment of the present application, where the method may be implemented jointly by an avatar rendering application installed in an anchor terminal, an anchor application, and a data server. Referring to fig. 3, the process flow of the method may include the following steps:
step 301, the anchor application obtains a target avatar identification.
In implementation, after each anchor registers an anchor account, an avatar model may be selected through the anchor application for subsequent use in live avatar broadcasting. The avatar model may be a three-dimensional model of an animal, an animation character, or the like.
After the anchor selects an avatar model through the anchor application, the anchor application may send an avatar registration message to the data server, where the avatar registration message may include an identifier of the anchor account currently logged in by the anchor application and an avatar identifier of the avatar model selected by the anchor. And after receiving the virtual image registration message, the data server acquires the identifier of the anchor account and the identifier of the virtual image, and correspondingly stores the identifiers.
The anchor application may locally store the avatar identifier of the avatar model selected by the anchor. When the anchor wants to perform live avatar broadcasting, the anchor application program can acquire the avatar identifier locally. The main interface of the anchor application program can display a live function option, and when the anchor selects the live function option, live mode options can be displayed. Fig. 4 shows an interface diagram of the anchor application displaying the live mode options in the form of a floating window.
The live mode options may include an anchor live broadcast option and an avatar live broadcast option. If the anchor wants to broadcast using his or her actual image, the anchor live broadcast option can be selected; if the anchor wants to broadcast using the avatar model, the avatar live broadcast option can be selected. After the anchor selects the avatar live broadcast option, the anchor application may obtain the avatar identifier corresponding to the currently logged-in anchor account.
In one possible implementation manner, in order to prevent the anchor from modifying the avatar identifier and reduce the data storage amount of the anchor terminal, the anchor application may not store the avatar identifier corresponding to the avatar model selected by the anchor, and accordingly, the process of step 301 may be as follows: and the anchor application program sends an avatar identification query request to the data server, wherein the avatar identification query request carries an identification of a target anchor account logged by the anchor application program.
In an implementation, the data server may store the identifier of the anchor account and the avatar identifier in the form of a correspondence table. Of course, the data server may also store them in other storage forms, which is not limited in this embodiment of the present application. Table 1 below shows a correspondence table between the identifier of the anchor account and the avatar identifier.
TABLE 1

Identifier of anchor account | Avatar identifier
Anchor 1                     | Avatar 1
Anchor 2                     | Avatar 2
Anchor 3                     | Avatar 3
……                           | ……
After the anchor selects the avatar live broadcast option, the anchor application may send an avatar identifier query request to the data server. After receiving the query request, the data server acquires the identifier of the target anchor account carried in it. The data server may then query, according to the identifier of the target anchor account, the corresponding target avatar identifier from the stored correspondence between the identifier of the anchor account and the avatar identifier, and return the target avatar identifier to the anchor application.
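Under the assumption that the Table 1 correspondence is held as a simple mapping, the data server's query handling might look like the following sketch (all names hypothetical):

```python
# Hypothetical lookup on the data server for the avatar identifier
# query request. The mapping mirrors Table 1; names are illustrative.
ANCHOR_TO_AVATAR = {
    "anchor-1": "avatar-1",
    "anchor-2": "avatar-2",
    "anchor-3": "avatar-3",
}

def handle_avatar_id_query(target_anchor_account_id):
    # Return the avatar identifier registered for the anchor account,
    # or None when no avatar has been registered for it.
    return ANCHOR_TO_AVATAR.get(target_anchor_account_id)
```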
Step 302, the anchor application sends the target avatar identification to the avatar rendering application.
In an implementation, after the anchor selects the avatar live broadcast option, the anchor application may wake up the avatar rendering application in the system background and then send the obtained target avatar identifier to it.
Step 303, the avatar rendering application sends an avatar model acquisition request to the data server, where the avatar model acquisition request carries the target avatar identifier.
In implementation, after the avatar rendering application acquires the target avatar identifier sent by the anchor application, it may acquire the avatar model corresponding to the target avatar identifier from the data server by sending an avatar model acquisition request to the data server.
Step 304, the data server sends the target avatar model corresponding to the target avatar identification to the avatar rendering application.
In implementation, after receiving the avatar model acquisition request, the data server may acquire the avatar identifier carried therein, and return the avatar model corresponding to the avatar identifier to the avatar rendering application.
In one possible implementation, to prevent an anchor from stealing the avatar models of others, the data server may verify the avatar model acquisition request sent by the avatar rendering application, to ensure that the acquired avatar model is the one the anchor applied for. Accordingly, in step 302, when sending the target avatar identifier to the avatar rendering application, the anchor application may also send the identifier of the currently logged-in target anchor account. In step 303, the avatar rendering application may also carry the identifier of the target anchor account in the avatar model acquisition request sent to the data server. In step 304, the data server sends the target avatar model corresponding to the target avatar identifier to the avatar rendering application only if it determines that the target avatar identifier and the identifier of the target anchor account are correspondingly stored in the correspondence between the identifier of the anchor account and the avatar identifier.
In implementation, the data server obtains the target avatar identifier and the identifier of the target anchor account carried in the avatar model acquisition request. It may then determine whether the target avatar identifier and the identifier of the target anchor account are correspondingly stored in the correspondence between the identifier of the anchor account and the avatar identifier. The determination may be made in either of the following ways.
In the first method, the data server may query, in the stored correspondence between the identifier of the anchor account and the avatar identifier, whether the avatar identifier corresponding to the identifier of the target anchor account is the target avatar identifier. If so, it is determined that the target avatar identifier and the identifier of the target anchor account are correspondingly stored in the correspondence; if not, it is determined that they are not correspondingly stored.
In the second method, the data server may query whether the identifier of the anchor account corresponding to the target avatar identifier is the identifier of the target anchor account. If so, it is determined that the target avatar identifier and the identifier of the target anchor account are correspondingly stored in the correspondence; if not, it is determined that they are not correspondingly stored.
If the data server determines, through either of the above methods, that the target avatar identifier and the identifier of the target anchor account are correspondingly stored in the correspondence between the identifier of the anchor account and the avatar identifier, it may return the target avatar model corresponding to the target avatar identifier to the avatar rendering application program.
If the data server determines that they are not correspondingly stored, it does not return the target avatar model corresponding to the target avatar identifier to the avatar rendering application program; instead, it returns a request failure message to the avatar rendering application.
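The verification described above (using the first query method) can be sketched as follows; the in-memory stores and message shapes are assumptions for illustration, not part of the patent:

```python
# Hypothetical verification on the data server: the model is returned
# only when the requested avatar identifier matches the one stored for
# the requesting anchor account; otherwise a failure message is sent.
ANCHOR_TO_AVATAR = {"anchor-1": "avatar-1"}
AVATAR_MODELS = {"avatar-1": "model-1"}

def handle_model_request(target_anchor_account_id, target_avatar_id):
    if ANCHOR_TO_AVATAR.get(target_anchor_account_id) == target_avatar_id:
        # The identifiers are correspondingly stored: return the model.
        return {"ok": True, "model": AVATAR_MODELS[target_avatar_id]}
    # The identifiers are not correspondingly stored: request fails.
    return {"ok": False, "error": "request failure"}
```

A rendering application receiving `{"ok": False, ...}` would then forward an avatar use prohibition message to the anchor application rather than attempt rendering.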
Step 305, the avatar rendering application program acquires the anchor live video and generates the avatar live video based on the target avatar model and the anchor live video.
In implementation, if the avatar rendering application receives the target avatar model sent by the data server, it acquires the anchor live video through the camera of the anchor terminal. Each video frame of the anchor live video is then input into a pre-trained expression recognition model, which can output the expression information of the anchor in each video frame. Here, the expression information indicates the expression of the anchor in a given video frame, such as laughing, anger, crying, and the like. For each video frame, after the expression information of the anchor in the frame is recognized, the target avatar model is controlled to perform the expression corresponding to the expression information, so that the avatar live video corresponding to the anchor live video can be generated.
In addition, the avatar rendering application may also input each video frame of the anchor live video into a pre-trained motion recognition model, which can output the action information of the anchor in each video frame. Here, the action information indicates the action of the anchor in a given video frame, such as raising a hand, clapping, nodding, and the like. For each video frame, after the action information of the anchor in the frame is recognized, the target avatar model is controlled to perform the action corresponding to the action information, so that the avatar live video corresponding to the anchor live video can be generated.
Of course, the avatar rendering application may use both the expression recognition model and the motion recognition model, and control the target avatar model to perform both the corresponding expression and the corresponding action.
It should be further noted that the expression recognition model and the motion recognition model may be machine learning models, such as CNN (Convolutional Neural Networks), RNN (Recurrent Neural Networks), and so on.
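The per-frame drive logic described above can be sketched as follows. This is a minimal illustration: `recognize_expression` and `recognize_action` are trivial stubs standing in for the pre-trained CNN/RNN recognition models, and `AvatarModel` only records which expression and motion it is asked to execute:

```python
# Illustrative sketch of driving the target avatar model per video frame.
# The stubs below stand in for the pre-trained recognition models.

def recognize_expression(frame):
    return frame.get("expression", "neutral")  # e.g. "laughing", "crying"

def recognize_action(frame):
    return frame.get("action", "idle")         # e.g. "raising the hand", "nodding"

class AvatarModel:
    def __init__(self):
        self.timeline = []

    def perform(self, expression, action):
        # Render one avatar frame executing the recognized expression and action.
        self.timeline.append((expression, action))

def generate_avatar_video(anchor_frames, avatar):
    """For each anchor video frame, make the avatar execute the recognized
    expression and action; the recorded timeline is the avatar live video."""
    for frame in anchor_frames:
        avatar.perform(recognize_expression(frame), recognize_action(frame))
    return avatar.timeline
```

In a real implementation the frames would be camera images and the recognizers would be the trained models; the control flow per frame is the same.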
If the avatar rendering application program receives a request failure message sent by the data server, it does not acquire the anchor live video; instead, it sends an avatar use prohibition message to the anchor application program. Accordingly, the anchor application program displays an avatar use prohibition prompt message. As shown in fig. 5, after receiving the avatar use prohibition message, the anchor application program may display the avatar use prohibition prompt message in the form of a pop-up window, such as "avatar is illegal, please use the applied avatar".
Step 306, the avatar rendering application sends the avatar live video to the anchor application.
In implementation, after the anchor selects the avatar live broadcast option, the anchor application program wakes up the avatar rendering application program in the background and notifies it to start the virtual camera. Therefore, after the avatar rendering application program generates the avatar live video, it can send the avatar live video to the anchor application program through the virtual camera, and the anchor application program can display the avatar live video. Meanwhile, the anchor application program can also send the avatar live video to a live broadcast server, and the live broadcast server sends the avatar live video to the viewer terminals watching the live broadcast. After receiving the avatar live video, each viewer terminal can display it.
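The virtual-camera handoff can be sketched as a simple producer/consumer queue; `VirtualCamera` and its `write`/`read` methods are illustrative assumptions, not the actual virtual camera interface of any particular platform:

```python
# Toy sketch of the handoff: the avatar rendering application writes generated
# avatar frames into a virtual camera, and the anchor application reads them
# as if they came from a real camera.

from queue import Queue

class VirtualCamera:
    def __init__(self):
        self._frames = Queue()

    def write(self, frame):
        # Called by the avatar rendering application for each generated frame.
        self._frames.put(frame)

    def read(self):
        # Called by the anchor application to obtain the next frame.
        return self._frames.get()

cam = VirtualCamera()
cam.write("avatar_frame_0")
cam.write("avatar_frame_1")
received = [cam.read(), cam.read()]  # frames the anchor application displays/uploads
```

Because the queue decouples the two sides, the anchor application's live broadcast loop never blocks on the rendering work, which is the point of the two-application design.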
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
In the embodiment of the application, after receiving the target avatar identifier sent by the anchor application program, the avatar rendering application program obtains the target avatar model corresponding to the target avatar identifier from the data server. After obtaining the target avatar model, the avatar rendering application program acquires the anchor live video, generates the avatar live video according to the anchor live video and the target avatar model, and sends the avatar live video to the anchor application program for live broadcast. Because the generation of the avatar live video and the live broadcast are performed in two independent application programs, the live broadcast function of the anchor application program is not affected while the avatar live video is generated, which avoids stuttering and lag of the live video in the anchor application program.
Fig. 6 is a flowchart of an avatar live broadcast method according to an embodiment of the present application. The method may be implemented jointly by an avatar rendering application and an anchor application installed in an anchor terminal, together with a data server. Referring to fig. 6, the processing flow of the method may include the following steps:
Step 601, the anchor application program sends an avatar identification query request to the data server, wherein the avatar identification query request carries an identification of a target anchor account logged in by the anchor application program.
Step 602, the data server sends the target avatar identification corresponding to the identification of the target anchor account to the anchor application program according to the stored correspondence between the identification of the anchor account and the avatar identification.
Step 603, the anchor application sends the target avatar identification and the identification of the target anchor account to the avatar rendering application.
Step 604, the avatar rendering application sends an avatar model acquisition request to the data server, wherein the avatar model acquisition request carries the target avatar identifier and the identifier of the target anchor account.
Step 605, the data server determines whether the target avatar identifier and the identifier of the target anchor account are stored in the corresponding relationship between the identifier of the anchor account and the avatar identifier.
Step 606, if the data server determines that the target avatar identifier and the identifier of the target anchor account are correspondingly stored in the corresponding relationship between the identifier of the anchor account and the avatar identifier, the data server sends the target avatar model corresponding to the target avatar identifier to the avatar rendering application program. And if the target avatar identification and the identification of the target anchor account are determined not to be correspondingly stored in the corresponding relation between the identification of the anchor account and the avatar identification, sending a request failure message to the avatar rendering application program.
Step 607, if the avatar rendering application program receives the target avatar model, it acquires the anchor live video, generates the avatar live video based on the target avatar model and the anchor live video, and sends the avatar live video to the anchor application program. If it receives a request failure message, it sends an avatar use prohibition message to the anchor application program.
Step 608, if the anchor application program receives the avatar live video, it live broadcasts the avatar live video. If it receives the avatar use prohibition message, it displays the avatar use prohibition prompt message.
It should be noted that the specific implementation of the above steps 601 to 608 is the same as or similar to the specific implementation of the above steps 301 to 306, and the detailed implementation of the steps 601 to 608 is not repeated herein.
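Steps 601 to 608 can be sketched end to end with the three parties as plain functions; all identifiers, the in-memory tables, and the return shapes are illustrative assumptions, not the embodiment's actual interfaces:

```python
# End-to-end toy sketch of the flow of steps 601-608.

CORRESPONDENCE = {"anchor_1": "avatar_A"}   # anchor account id -> avatar id
MODELS = {"avatar_A": "model_A"}            # avatar id -> avatar model

def data_server_query(anchor_id):
    # Steps 601-602: return the avatar identification for the anchor account.
    return CORRESPONDENCE[anchor_id]

def data_server_model(anchor_id, avatar_id):
    # Steps 604-606: return the model only if the pair is correspondingly stored.
    if CORRESPONDENCE.get(anchor_id) == avatar_id:
        return {"model": MODELS[avatar_id]}
    return {"error": "request failure"}

def avatar_rendering_app(anchor_id, avatar_id, anchor_video):
    # Step 607: on success, pair each anchor frame with the avatar model;
    # on failure, report the avatar use prohibition.
    reply = data_server_model(anchor_id, avatar_id)
    if "model" in reply:
        return [(reply["model"], frame) for frame in anchor_video]
    return "avatar use prohibition"

def anchor_app(anchor_id, anchor_video):
    # Steps 601, 603, 608: query the avatar id, then hand off to the renderer.
    avatar_id = data_server_query(anchor_id)
    return avatar_rendering_app(anchor_id, avatar_id, anchor_video)
```

The success path yields avatar frames for live broadcast; a mismatched avatar identification yields the prohibition message, mirroring steps 606 to 608.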
In the embodiment of the application, after receiving the target avatar identifier sent by the anchor application program, the avatar rendering application program obtains the target avatar model corresponding to the target avatar identifier from the data server. After obtaining the target avatar model, the avatar rendering application program acquires the anchor live video, generates the avatar live video according to the anchor live video and the target avatar model, and sends the avatar live video to the anchor application program for live broadcast. Because the generation of the avatar live video and the live broadcast are performed in two independent application programs, the live broadcast function of the anchor application program is not affected while the avatar live video is generated, which avoids stuttering and lag of the live video in the anchor application program.
Based on the same technical concept, an embodiment of the present application further provides an apparatus for avatar live broadcast. As shown in fig. 7, the apparatus includes: a first receiving module 710, an obtaining module 720, a second receiving module 730, and a generating module 740.
A first receiving module 710, configured to receive a target avatar identifier sent by an anchor application;
an obtaining module 720, configured to send an avatar model obtaining request to a data server, where the avatar model obtaining request carries the target avatar identifier;
a second receiving module 730, configured to receive the target avatar model corresponding to the target avatar identifier sent by the data server;
and a generating module 740, configured to obtain an anchor live video, generate an avatar live video based on the target avatar model and the anchor live video, and send the avatar live video to the anchor application.
In a possible implementation manner, the first receiving module 710 is configured to:
receiving the target avatar identification sent by the anchor application program and the identification of the target anchor account logged in on the anchor application program;
the avatar model acquisition request further carries the identification of the target anchor account;
the second receiving module 730 is configured to:
and receiving a target avatar model corresponding to the target avatar identification sent by the data server when the target avatar identification and the identification of the target anchor account are determined to be correspondingly stored in the corresponding relation between the identification of the anchor account and the avatar identification.
In a possible implementation manner, the second receiving module 730 is further configured to:
receiving a request failure message sent by the data server, wherein the request failure message is sent to the avatar rendering application program by the data server when the data server determines that the target avatar identifier and the identifier of the target anchor account are not correspondingly stored in the corresponding relationship between the identifier of the anchor account and the avatar identifier;
sending an avatar use prohibition message to the anchor application, so that the anchor application displays an avatar use prohibition prompt message.
In a possible implementation manner, the generating module 740 is configured to:
acquiring an anchor live video, performing facial expression recognition on the anchor in video frames of the anchor live video, and determining the expression information of the anchor in each video frame;
and controlling the avatar model to execute the expressions corresponding to the expression information of the anchor in each video frame, so as to generate the avatar live video.
In a possible implementation manner, the generating module 740 is configured to:
acquiring an anchor live video, recognizing the action of the anchor in video frames of the anchor live video, and determining the action information of the anchor in each video frame;
and controlling the avatar model to execute the actions corresponding to the action information of the anchor in each video frame, so as to generate the avatar live video.
It should be noted that the apparatus for avatar live broadcast provided in the above embodiment is illustrated only with the division of the above functional modules. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the anchor terminal is divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for avatar live broadcast provided in the above embodiment belongs to the same concept as the method embodiment for avatar live broadcast; its specific implementation process is described in detail in the method embodiment and is not repeated here.
Fig. 8 shows a block diagram of a terminal 800 according to an exemplary embodiment of the present application. The terminal 800 may be the above-described terminal in which the avatar rendering application and the anchor application are installed. The terminal 800 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 800 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, etc.
In general, the terminal 800 includes: a processor 801 and a memory 802.
The processor 801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 801 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 801 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 801 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 801 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 802 may include one or more computer-readable storage media, which may be non-transitory. Memory 802 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 802 is used to store at least one instruction for execution by processor 801 to implement the avatar live method provided by the method embodiments herein.
In some embodiments, the terminal 800 may further include: a peripheral interface 803 and at least one peripheral. The processor 801, memory 802 and peripheral interface 803 may be connected by bus or signal lines. Various peripheral devices may be connected to peripheral interface 803 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 804, a touch screen display 805, a camera 806, an audio circuit 807, a positioning component 808, and a power supply 809.
The peripheral interface 803 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 801 and the memory 802. In some embodiments, the processor 801, memory 802, and peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802, and the peripheral interface 803 may be implemented on separate chips or circuit boards, which are not limited by this embodiment.
The radio frequency circuit 804 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 804 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 804 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 804 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 804 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 805 is a touch display, the display 805 also has the ability to capture touch signals on or above its surface. The touch signal may be input to the processor 801 as a control signal for processing. At this point, the display 805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 805, providing the front panel of the terminal 800; in other embodiments, there may be at least two displays 805, respectively disposed on different surfaces of the terminal 800 or in a folded design; in still other embodiments, the display 805 may be a flexible display disposed on a curved surface or a folded surface of the terminal 800. The display 805 may even be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display 805 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 806 is used to capture images or video. Optionally, camera assembly 806 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 806 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 807 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 801 for processing or inputting the electric signals to the radio frequency circuit 804 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 800. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 807 may also include a headphone jack.
The positioning component 808 is used to locate the current geographic position of the terminal 800 for navigation or LBS (Location Based Service). The positioning component 808 may be a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 809 is used to provide power to various components in terminal 800. The power supply 809 can be ac, dc, disposable or rechargeable. When the power source 809 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 800 also includes one or more sensors 810. The one or more sensors 810 include, but are not limited to: acceleration sensor 811, gyro sensor 812, pressure sensor 813, fingerprint sensor 814, optical sensor 815 and proximity sensor 816.
The acceleration sensor 811 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 800. For example, the acceleration sensor 811 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 801 may control the touch screen 805 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 811. The acceleration sensor 811 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 812 may detect a body direction and a rotation angle of the terminal 800, and the gyro sensor 812 may cooperate with the acceleration sensor 811 to acquire a 3D motion of the user with respect to the terminal 800. From the data collected by the gyro sensor 812, the processor 801 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 813 may be disposed on the side bezel of terminal 800 and/or underneath touch display 805. When the pressure sensor 813 is disposed on the side frame of the terminal 800, the holding signal of the user to the terminal 800 can be detected, and the processor 801 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 813. When the pressure sensor 813 is disposed at a lower layer of the touch display screen 805, the processor 801 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 805. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 814 is used to collect the user's fingerprint, and the processor 801 identifies the user according to the fingerprint collected by the fingerprint sensor 814, or the fingerprint sensor 814 itself identifies the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 801 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, etc. The fingerprint sensor 814 may be disposed on the front, back, or side of the terminal 800. When a physical button or a vendor logo is provided on the terminal 800, the fingerprint sensor 814 may be integrated with the physical button or the vendor logo.
The optical sensor 815 is used to collect the ambient light intensity. In one embodiment, the processor 801 may control the display brightness of the touch screen 805 based on the ambient light intensity collected by the optical sensor 815. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 805 is increased; when the ambient light intensity is low, the display brightness of the touch display 805 is turned down. In another embodiment, the processor 801 may also dynamically adjust the shooting parameters of the camera assembly 806 based on the ambient light intensity collected by the optical sensor 815.
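As a toy illustration of the brightness adjustment just described (the thresholds, step size, and function name are assumptions, not values from the embodiment):

```python
# Hypothetical sketch of ambient-light-driven display brightness adjustment:
# turn brightness up in bright environments and down in dim ones.

def adjust_brightness(ambient_lux, current):
    if ambient_lux > 500:          # bright environment: increase brightness
        return min(current + 10, 100)
    if ambient_lux < 50:           # dim environment: decrease brightness
        return max(current - 10, 0)
    return current                 # moderate light: leave brightness unchanged
```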
A proximity sensor 816, also known as a distance sensor, is typically provided on the front panel of the terminal 800. The proximity sensor 816 is used to collect the distance between the user and the front surface of the terminal 800. In one embodiment, when the proximity sensor 816 detects that the distance between the user and the front surface of the terminal 800 gradually decreases, the processor 801 controls the touch display 805 to switch from the bright screen state to the dark screen state; when the proximity sensor 816 detects that the distance between the user and the front surface of the terminal 800 gradually increases, the processor 801 controls the touch display 805 to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 8 is not intended to be limiting of terminal 800 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, there is also provided a computer-readable storage medium, such as a memory, comprising instructions executable by a processor in a terminal to perform the avatar live broadcast method in the above embodiments. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (18)

1. A method of avatar live broadcasting, the method comprising:
the anchor application program acquires a target avatar identification;
the anchor application program sends the target avatar identification to an avatar rendering application program;
the avatar rendering application program sends an avatar model acquisition request to a data server, wherein the avatar model acquisition request carries the target avatar identification;
the data server sends a target avatar model corresponding to the target avatar identification to the avatar rendering application program;
the avatar rendering application program acquires an anchor live video, generates an avatar live video based on the target avatar model and the anchor live video, and sends the avatar live video to the anchor application program.
2. The method of claim 1, wherein the anchor application obtains a target avatar identification, comprising:
the anchor application program sends an avatar identification query request to the data server, wherein the avatar identification query request carries the identification of the target anchor account logged in on the anchor application program;
and the data server sends a target avatar identification corresponding to the identification of the target anchor account to the anchor application program according to the stored corresponding relation between the identification of the anchor account and the avatar identification.
3. The method of claim 2, wherein the anchor application sending the target avatar identification to an avatar rendering application, comprises:
the anchor application program sends the target avatar identification and the identification of the target anchor account to an avatar rendering application program;
the avatar model acquisition request further carries the identification of the target anchor account;
the data server sending a target avatar model corresponding to a target avatar identification to the avatar rendering application, comprising:
and if the data server determines that the target avatar identification and the identification of the target anchor account are correspondingly stored in the corresponding relation between the identification of the anchor account and the avatar identification, the data server sends a target avatar model corresponding to the target avatar identification to the avatar rendering application program.
4. The method of claim 3, further comprising:
if the data server determines that the target avatar identification and the identification of the target anchor account are not correspondingly stored in the corresponding relation between the identification of the anchor account and the avatar identification, the data server sends a request failure message to the avatar rendering application program;
the avatar rendering application sending an avatar use prohibition message to the anchor application;
the anchor application displays an avatar use prohibition prompt message.
5. The method of any of claims 1-4, wherein the avatar rendering application program obtaining an anchor live video and generating an avatar live video based on the target avatar model and the anchor live video comprises:
the avatar rendering application program acquires an anchor live video, performs facial expression recognition on the anchor in video frames of the anchor live video, and determines the expression information of the anchor in each video frame;
and the avatar rendering application program controls the avatar model to execute the expressions corresponding to the expression information of the anchor in each video frame, so as to generate the avatar live video.
6. The method of any of claims 1-4, wherein the avatar rendering application program obtaining an anchor live video and generating an avatar live video based on the target avatar model and the anchor live video comprises:
the avatar rendering application program acquires an anchor live video, performs action recognition on the anchor in video frames of the anchor live video, and determines the action information of the anchor in each video frame;
and the avatar rendering application program controls the avatar model to execute the actions corresponding to the action information of the anchor in each video frame, so as to generate the avatar live video.
7. A method for live rendering of an avatar, the method being applied to an avatar rendering application, the method comprising:
receiving a target avatar identification sent by an anchor application program;
sending an avatar model acquisition request to a data server, wherein the avatar model acquisition request carries the target avatar identification;
receiving a target avatar model corresponding to the target avatar identification sent by the data server;
acquiring an anchor live video, generating an avatar live video based on the target avatar model and the anchor live video, and sending the avatar live video to the anchor application program.
8. The method of claim 7, wherein said receiving a target avatar identification sent by an anchor application comprises:
receiving the target avatar identification sent by the anchor application program and the identification of the target anchor account logged in on the anchor application program;
the avatar model acquisition request further carries the identification of the target anchor account;
the receiving of the target avatar model corresponding to the target avatar identification sent by the data server comprises:
and receiving a target avatar model corresponding to the target avatar identification sent by the data server when the target avatar identification and the identification of the target anchor account are determined to be correspondingly stored in the corresponding relation between the identification of the anchor account and the avatar identification.
9. The method of claim 8, further comprising:
receiving a request failure message sent by the data server, wherein the request failure message is sent by the data server to the avatar rendering application program when the data server determines that the target avatar identification and the identification of the target anchor account are not correspondingly stored in the correspondence between anchor account identifications and avatar identifications;
sending an avatar barring message to the anchor application program, so that the anchor application program displays an avatar barring alert message.
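The authorization check of claims 8-9 reduces to a lookup in the stored correspondence between anchor accounts and avatars. The sketch below is hypothetical; the identifiers and the failure-message string are illustrative, not defined by the patent.

```python
# Hypothetical sketch of the claim-8/9 check: the data server returns a model
# only when the target avatar identification is stored against the target
# anchor account; otherwise it answers with a request failure message.

def handle_model_request(correspondence, models, anchor_account, avatar_id):
    """Return (model, None) on success, or (None, failure message)."""
    if correspondence.get(anchor_account) == avatar_id:
        return models[avatar_id], None
    # No stored correspondence: the rendering application would forward
    # this failure to the anchor application as an avatar barring alert.
    return None, "request_failed"

correspondence = {"anchor42": "cat-01"}   # anchor account id -> avatar id
models = {"cat-01": "cat_model"}

# Authorized pair: the model is returned.
model, err = handle_model_request(correspondence, models, "anchor42", "cat-01")
# Unauthorized pair: a failure message is returned instead.
_, err2 = handle_model_request(correspondence, models, "anchor42", "dog-07")
```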
10. The method of any of claims 7-9, wherein the acquiring an anchor live video and generating an avatar live video based on the target avatar model and the anchor live video comprises:
acquiring an anchor live video, performing facial expression recognition on the anchor in the video frames of the anchor live video, and determining the expression information of the anchor in each video frame;
and controlling the avatar model to perform the expressions corresponding to the anchor's expression information in each video frame, so as to generate the avatar live video.
11. The method of any of claims 7-9, wherein the acquiring an anchor live video and generating an avatar live video based on the target avatar model and the anchor live video comprises:
acquiring an anchor live video, performing action recognition on the anchor in the video frames of the anchor live video, and determining the action information of the anchor in each video frame;
and controlling the avatar model to perform the actions corresponding to the anchor's action information in each video frame, so as to generate the avatar live video.
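Claims 10 and 11 share one per-frame pattern: recognize the anchor's expression or action in each video frame, then have the avatar model perform the counterpart. A stand-in sketch, with a placeholder recognizer (a real system would run face or pose estimation, which the claims do not specify):

```python
# Illustrative sketch of claims 10-11: per-frame recognition of the anchor's
# expression (or, identically, action) drives the avatar model.

def recognize_expression(frame):
    # Stand-in recognizer: each frame here already carries its label.
    # A real implementation would infer this from the frame pixels.
    return frame["expression"]

def drive_avatar(model, frames, recognize):
    # For every video frame, determine the anchor's expression/action
    # information and have the avatar model perform the counterpart.
    return [{"model": model, "performs": recognize(f)} for f in frames]

frames = [{"expression": "smile"}, {"expression": "surprise"}]
avatar_frames = drive_avatar("cat_model", frames, recognize_expression)
```

Substituting an action recognizer for `recognize_expression` gives the claim-11 variant without changing the driving loop.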
12. An apparatus for avatar live broadcast, the apparatus comprising:
a first receiving module, configured to receive a target avatar identification sent by an anchor application program;
an acquisition module, configured to send an avatar model acquisition request to a data server, wherein the avatar model acquisition request carries the target avatar identification;
a second receiving module, configured to receive, from the data server, a target avatar model corresponding to the target avatar identification;
and a generation module, configured to acquire an anchor live video, generate an avatar live video based on the target avatar model and the anchor live video, and send the avatar live video to the anchor application program.
13. The apparatus of claim 12, wherein the first receiving module is configured to:
receive the target avatar identification sent by the anchor application program and the identification of a target anchor account logged in on the anchor application program;
the avatar model acquisition request further carries the identification of the target anchor account;
and the second receiving module is configured to:
receive a target avatar model corresponding to the target avatar identification, sent by the data server when the data server determines that the target avatar identification and the identification of the target anchor account are correspondingly stored in the correspondence between anchor account identifications and avatar identifications.
14. The apparatus of claim 13, wherein the second receiving module is further configured to:
receive a request failure message sent by the data server, wherein the request failure message is sent by the data server to the avatar rendering application program when the data server determines that the target avatar identification and the identification of the target anchor account are not correspondingly stored in the correspondence between anchor account identifications and avatar identifications;
and send an avatar barring message to the anchor application program, so that the anchor application program displays an avatar barring alert message.
15. The apparatus of any of claims 12-14, wherein the generation module is configured to:
acquire an anchor live video, perform facial expression recognition on the anchor in the video frames of the anchor live video, and determine the expression information of the anchor in each video frame;
and control the avatar model to perform the expressions corresponding to the anchor's expression information in each video frame, so as to generate the avatar live video.
16. The apparatus of any of claims 12-14, wherein the generation module is configured to:
acquire an anchor live video, perform action recognition on the anchor in the video frames of the anchor live video, and determine the action information of the anchor in each video frame;
and control the avatar model to perform the actions corresponding to the anchor's action information in each video frame, so as to generate the avatar live video.
17. A terminal, comprising a processor and a memory, wherein at least one instruction is stored in the memory, and the instruction is loaded and executed by the processor to implement the operations performed in the avatar live broadcast method of any of claims 7-11.
18. A computer-readable storage medium, wherein at least one instruction is stored in the storage medium, and the instruction is loaded and executed by a processor to implement the operations performed in the avatar live broadcast method of any of claims 7-11.
CN201911320650.3A 2019-12-19 2019-12-19 Live virtual image broadcasting method, device, terminal and storage medium Active CN110971930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911320650.3A CN110971930B (en) 2019-12-19 2019-12-19 Live virtual image broadcasting method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN110971930A true CN110971930A (en) 2020-04-07
CN110971930B CN110971930B (en) 2023-03-10

Family

ID=70035279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911320650.3A Active CN110971930B (en) 2019-12-19 2019-12-19 Live virtual image broadcasting method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110971930B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111540055A (en) * 2020-04-16 2020-08-14 广州虎牙科技有限公司 Three-dimensional model driving method, device, electronic device and storage medium
CN111741326A (en) * 2020-06-30 2020-10-02 腾讯科技(深圳)有限公司 Video synthesis method, device, equipment and storage medium
CN111970521A (en) * 2020-07-16 2020-11-20 深圳追一科技有限公司 Live broadcast method and device of virtual anchor, computer equipment and storage medium
CN112598785A (en) * 2020-12-25 2021-04-02 游艺星际(北京)科技有限公司 Method, device and equipment for generating three-dimensional model of virtual image and storage medium
CN112601098A (en) * 2020-11-09 2021-04-02 北京达佳互联信息技术有限公司 Live broadcast interaction method and content recommendation method and device
CN112653898A (en) * 2020-12-15 2021-04-13 北京百度网讯科技有限公司 User image generation method, related device and computer program product
CN113014471A (en) * 2021-01-18 2021-06-22 腾讯科技(深圳)有限公司 Session processing method, device, terminal and storage medium
CN113099298A (en) * 2021-04-08 2021-07-09 广州华多网络科技有限公司 Method and device for changing virtual image and terminal equipment
CN113507621A (en) * 2021-07-07 2021-10-15 上海商汤智能科技有限公司 Live broadcast method, device, system, computer equipment and storage medium
CN113542801A (en) * 2021-06-29 2021-10-22 北京百度网讯科技有限公司 Method, device, equipment, storage medium and program product for generating anchor identification
CN113766119A (en) * 2021-05-11 2021-12-07 腾讯科技(深圳)有限公司 Virtual image display method, device, terminal and storage medium
CN114007091A (en) * 2021-10-27 2022-02-01 北京市商汤科技开发有限公司 Video processing method and device, electronic equipment and storage medium
CN114286021A (en) * 2021-12-24 2022-04-05 北京达佳互联信息技术有限公司 Rendering method, rendering apparatus, server, storage medium, and program product
CN114422647A (en) * 2021-12-24 2022-04-29 上海浦东发展银行股份有限公司 Digital person-based agent service method, apparatus, device, medium, and product
CN114422862A (en) * 2021-12-24 2022-04-29 上海浦东发展银行股份有限公司 Service video generation method, device, equipment, storage medium and program product
CN114827652A (en) * 2022-05-18 2022-07-29 上海哔哩哔哩科技有限公司 Virtual image playing method and device
CN115314728A (en) * 2022-07-29 2022-11-08 北京达佳互联信息技术有限公司 Information display method, system, device, electronic equipment and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101079874A (en) * 2006-09-13 2007-11-28 腾讯科技(深圳)有限公司 A method and system for customizing virtual image
DE60318770D1 (en) * 2002-06-12 2008-03-13 Medison Co Ltd Method and apparatus for producing three-dimensional ultrasound images in quasi-real time
CN105099860A (en) * 2014-05-19 2015-11-25 腾讯科技(深圳)有限公司 Method and system for performing real-time interaction in instant messaging and client
CN107027046A (en) * 2017-04-13 2017-08-08 广州华多网络科技有限公司 Auxiliary live audio/video processing method and device
CN108200446A (en) * 2018-01-12 2018-06-22 北京蜜枝科技有限公司 Multimedia interactive system and method on the line of virtual image
CN108322832A (en) * 2018-01-22 2018-07-24 广州市动景计算机科技有限公司 Comment on method, apparatus and electronic equipment
CN109697060A (en) * 2018-12-29 2019-04-30 广州华多网络科技有限公司 Special video effect software and its generation method, device, equipment and storage medium
CN109874021A (en) * 2017-12-04 2019-06-11 腾讯科技(深圳)有限公司 Living broadcast interactive method, apparatus and system
WO2019128787A1 (en) * 2017-12-26 2019-07-04 阿里巴巴集团控股有限公司 Network video live broadcast method and apparatus, and electronic device
CN110297684A (en) * 2019-06-28 2019-10-01 腾讯科技(深圳)有限公司 Theme display methods, device and storage medium based on virtual portrait
US20190371082A1 (en) * 2017-08-17 2019-12-05 Tencent Technology (Shenzhen) Company Limited Three-dimensional virtual image display method and apparatus, terminal, and storage medium
CN110557625A (en) * 2019-09-17 2019-12-10 北京达佳互联信息技术有限公司 live virtual image broadcasting method, terminal, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Shuai (张帅): "Prospect Analysis of AI Anchors: Reflections Based on Xinhua News Agency's 'AI Synthesized Anchors'", China Journalist (《中国记者》) *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111540055A (en) * 2020-04-16 2020-08-14 广州虎牙科技有限公司 Three-dimensional model driving method, device, electronic device and storage medium
CN111540055B (en) * 2020-04-16 2024-03-08 广州虎牙科技有限公司 Three-dimensional model driving method, three-dimensional model driving device, electronic equipment and storage medium
WO2021209042A1 (en) * 2020-04-16 2021-10-21 广州虎牙科技有限公司 Three-dimensional model driving method and apparatus, electronic device, and storage medium
CN111741326A (en) * 2020-06-30 2020-10-02 腾讯科技(深圳)有限公司 Video synthesis method, device, equipment and storage medium
CN111741326B (en) * 2020-06-30 2023-08-18 腾讯科技(深圳)有限公司 Video synthesis method, device, equipment and storage medium
CN111970521A (en) * 2020-07-16 2020-11-20 深圳追一科技有限公司 Live broadcast method and device of virtual anchor, computer equipment and storage medium
CN111970521B (en) * 2020-07-16 2022-03-11 深圳追一科技有限公司 Live broadcast method and device of virtual anchor, computer equipment and storage medium
CN112601098A (en) * 2020-11-09 2021-04-02 北京达佳互联信息技术有限公司 Live broadcast interaction method and content recommendation method and device
CN112653898B (en) * 2020-12-15 2023-03-21 北京百度网讯科技有限公司 User image generation method, related device and computer program product
CN112653898A (en) * 2020-12-15 2021-04-13 北京百度网讯科技有限公司 User image generation method, related device and computer program product
CN112598785A (en) * 2020-12-25 2021-04-02 游艺星际(北京)科技有限公司 Method, device and equipment for generating three-dimensional model of virtual image and storage medium
CN112598785B (en) * 2020-12-25 2022-03-25 游艺星际(北京)科技有限公司 Method, device and equipment for generating three-dimensional model of virtual image and storage medium
CN113014471A (en) * 2021-01-18 2021-06-22 腾讯科技(深圳)有限公司 Session processing method, device, terminal and storage medium
CN113099298B (en) * 2021-04-08 2022-07-12 广州华多网络科技有限公司 Method and device for changing virtual image and terminal equipment
CN113099298A (en) * 2021-04-08 2021-07-09 广州华多网络科技有限公司 Method and device for changing virtual image and terminal equipment
CN113766119A (en) * 2021-05-11 2021-12-07 腾讯科技(深圳)有限公司 Virtual image display method, device, terminal and storage medium
CN113766119B (en) * 2021-05-11 2023-12-05 腾讯科技(深圳)有限公司 Virtual image display method, device, terminal and storage medium
CN113542801A (en) * 2021-06-29 2021-10-22 北京百度网讯科技有限公司 Method, device, equipment, storage medium and program product for generating anchor identification
CN113542801B (en) * 2021-06-29 2023-06-06 北京百度网讯科技有限公司 Method, device, equipment, storage medium and program product for generating anchor identification
CN113507621A (en) * 2021-07-07 2021-10-15 上海商汤智能科技有限公司 Live broadcast method, device, system, computer equipment and storage medium
CN114007091A (en) * 2021-10-27 2022-02-01 北京市商汤科技开发有限公司 Video processing method and device, electronic equipment and storage medium
CN114286021A (en) * 2021-12-24 2022-04-05 北京达佳互联信息技术有限公司 Rendering method, rendering apparatus, server, storage medium, and program product
CN114422647A (en) * 2021-12-24 2022-04-29 上海浦东发展银行股份有限公司 Digital person-based agent service method, apparatus, device, medium, and product
CN114422862A (en) * 2021-12-24 2022-04-29 上海浦东发展银行股份有限公司 Service video generation method, device, equipment, storage medium and program product
CN114827652A (en) * 2022-05-18 2022-07-29 上海哔哩哔哩科技有限公司 Virtual image playing method and device
CN115314728A (en) * 2022-07-29 2022-11-08 北京达佳互联信息技术有限公司 Information display method, system, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110971930B (en) 2023-03-10

Similar Documents

Publication Publication Date Title
CN110971930B (en) Live virtual image broadcasting method, device, terminal and storage medium
CN108401124B (en) Video recording method and device
CN110992493B (en) Image processing method, device, electronic equipment and storage medium
US20190267037A1 (en) Method, apparatus and terminal for controlling video playing
CN110278464B (en) Method and device for displaying list
CN108965922B (en) Video cover generation method and device and storage medium
CN109327608B (en) Song sharing method, terminal, server and system
CN110740340B (en) Video live broadcast method and device and storage medium
CN109922356B (en) Video recommendation method and device and computer-readable storage medium
CN110533585B (en) Image face changing method, device, system, equipment and storage medium
CN110288689B (en) Method and device for rendering electronic map
CN112565806B (en) Virtual gift giving method, device, computer equipment and medium
CN110839174A (en) Image processing method and device, computer equipment and storage medium
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN110677713B (en) Video image processing method and device and storage medium
CN109783176B (en) Page switching method and device
CN111897465A (en) Popup display method, device, equipment and storage medium
CN111083554A (en) Method and device for displaying live gift
CN110769120A (en) Method, device, equipment and storage medium for message reminding
CN111192072A (en) User grouping method and device and storage medium
CN110933454B (en) Method, device, equipment and storage medium for processing live broadcast budding gift
CN112118353A (en) Information display method, device, terminal and computer readable storage medium
CN113485596B (en) Virtual model processing method and device, electronic equipment and storage medium
CN111464829B (en) Method, device and equipment for switching media data and storage medium
CN114594885A (en) Application icon management method, device and equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant