CN114286021B - Rendering method, rendering device, server, storage medium, and program product

Info

Publication number
CN114286021B
Authority
CN
China
Prior art keywords
rendering
data
client
live broadcast
pictures
Prior art date
Legal status
Active
Application number
CN202111600010.5A
Other languages
Chinese (zh)
Other versions
CN114286021A (en)
Inventor
胡小华
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202111600010.5A
Publication of CN114286021A
Application granted
Publication of CN114286021B

Landscapes

  • Information Transfer Between Computers (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure relates to a rendering method, apparatus, server, storage medium, and program product. The method includes: receiving rendering data respectively sent by at least one client; generating respective corresponding virtual images based on the rendering data respectively corresponding to the at least one client; obtaining, based on the at least one virtual image corresponding to the at least one client, rendering pictures respectively corresponding to the at least one client; and sending the rendering pictures respectively corresponding to the at least one client. The technical solution of the embodiments of the present disclosure addresses the poor rendering effect that arises when rendering a picture containing multiple anchor avatars.

Description

Rendering method, rendering device, server, storage medium, and program product
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a rendering method, apparatus, server, storage medium, and program product.
Background
With the rapid development of the Internet, live and recorded broadcasting have become increasingly widespread, carrying many business scenarios such as content entertainment and social interaction. Taking a live scenario as an example, the broadcasting party, also called the anchor user, usually interacts through an avatar instead of appearing in person: the live end collects the anchor user's live data, obtains rendering data from it, renders that data to obtain a rendering picture (i.e., an avatar picture), and outputs the picture on the live interface. To liven up the live atmosphere, multiple anchors typically interact in the same live scenario, such as the same live room, each using their own avatar.
In the conventional scheme, each live end in the same live scenario sends the rendering data of its anchor user to the other live ends in that scenario, so that every live end renders with the rendering data of all anchor users to obtain and display its own rendering picture. The resulting rendering effect is poor.
Disclosure of Invention
The present disclosure provides a rendering method, apparatus, server, storage medium, and program product to at least solve the problem in the related art of a poor rendering effect when rendering a picture that contains multiple anchor avatars. The technical solution of the present disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided a rendering method, applied to a server, including:
receiving rendering data respectively sent by at least one client;
generating respective corresponding virtual images based on rendering data respectively corresponding to the at least one client;
based on at least one virtual image corresponding to the at least one client, obtaining rendering pictures corresponding to the at least one client respectively;
and transmitting the rendering pictures respectively corresponding to the at least one client.
Optionally, the rendering data includes audio information;
the method further comprises the steps of:
generating sound information corresponding to the at least one client based on at least one rendering data corresponding to the at least one client;
the sending the rendering frames respectively corresponding to the at least one client includes:
and transmitting the rendering picture and the sound information respectively corresponding to the at least one client side so that the at least one client side synchronously outputs the rendering picture and the sound information.
Optionally, the rendering data further includes action information;
the generating the respective corresponding avatar based on the rendering data respectively corresponding to the at least one client includes:
correcting the action information by utilizing the audio information respectively corresponding to the at least one client to obtain action information matched with the audio information;
and generating respective corresponding virtual images based on the action information which is respectively corresponding to the at least one client and is matched with the audio information.
Optionally, the audio information includes a timestamp;
the correcting the action information by using the audio information respectively corresponding to the at least one client to obtain the action information matched with the audio information includes:
correcting the action information by utilizing the time stamps of the audio information respectively corresponding to the at least one client to obtain the action information matched with the audio information.
Optionally, the generating the respective corresponding avatar based on the rendering data respectively corresponding to the at least one client includes:
determining at least one action model corresponding to the audio information respectively corresponding to the at least one client according to the corresponding relation between the preset audio information and the action model;
and generating respective corresponding avatars based on the at least one action model respectively corresponding to the at least one client.
Optionally, the client includes a live client, and the method further includes:
based on at least one virtual image corresponding to the at least one live broadcast end, a rendering picture corresponding to the audience end is obtained;
transmitting the rendering picture corresponding to the audience terminal so that the audience terminal can display the rendering picture in a live broadcast viewing interface;
the sending the rendering frames respectively corresponding to the at least one client includes:
and transmitting the rendering pictures corresponding to the at least one live broadcast end and the rendering pictures corresponding to the audience end to the at least one live broadcast end so that the at least one live broadcast end displays the rendering pictures corresponding to the at least one live broadcast end and the rendering pictures corresponding to the audience end in a live broadcast interface.
Optionally, the method further comprises:
receiving first interaction data and second interaction data sent by an audience terminal;
determining first animation data corresponding to the first interaction data based on a corresponding relation between preset first interaction data and first animation data, and determining second animation data corresponding to the second interaction data based on a corresponding relation between preset second interaction data and second animation data;
the generating the respective corresponding avatar based on the rendering data respectively corresponding to the at least one client includes:
generating respective corresponding virtual images based on rendering data and the first animation data respectively corresponding to the at least one live broadcast end;
the obtaining a rendering picture corresponding to the viewer end based on the at least one avatar corresponding to the at least one live end includes:
based on at least one virtual image corresponding to the at least one live broadcast end and the second animation data, a rendering picture corresponding to the audience end is obtained;
the obtaining, based on the at least one avatar corresponding to the at least one client, rendering pictures corresponding to the at least one client respectively includes:
and obtaining rendering pictures corresponding to the at least one live broadcast end respectively based on the at least one virtual image corresponding to the at least one live broadcast end and the second animation data.
Optionally, the method further comprises:
receiving first interaction data and second interaction data sent by an audience terminal;
determining first animation data corresponding to first interaction data based on a corresponding relation between the preset first interaction data and the first animation data;
the generating the respective corresponding avatar based on the rendering data respectively corresponding to the at least one client includes:
generating respective corresponding virtual images based on rendering data and the first animation data respectively corresponding to the at least one live broadcast end;
the obtaining a rendering picture corresponding to the viewer end based on the at least one avatar corresponding to the at least one live end includes:
based on at least one virtual image corresponding to the at least one live broadcast end, a first rendering picture corresponding to the audience end is obtained;
the sending the rendering picture corresponding to the audience terminal so that the audience terminal can display the rendering picture in a live broadcast viewing interface comprises the following steps:
transmitting the first rendering picture and the second interaction data corresponding to the audience terminal, so that the audience terminal determines second animation data corresponding to the second interaction data based on the corresponding relation between the preset second interaction data and the second animation data, obtains a corresponding second rendering picture based on the corresponding first rendering picture and the second animation data, and displays the corresponding second rendering picture in a live broadcast watching interface;
the obtaining, based on the at least one avatar corresponding to the at least one client, rendering pictures corresponding to the at least one client respectively includes:
based on at least one virtual image corresponding to the at least one live broadcast end, obtaining first rendering pictures corresponding to the at least one live broadcast end respectively;
The step of sending the rendering pictures respectively corresponding to the at least one live broadcast end and the rendering pictures corresponding to the audience end to the at least one live broadcast end so that the at least one live broadcast end displays the rendering pictures respectively corresponding to the live broadcast end and the rendering pictures corresponding to the audience end in a live broadcast interface comprises the following steps:
and sending the second interaction data and the first rendering pictures corresponding to the audience end to the at least one live end so that the at least one live end can determine second animation data corresponding to the second interaction data based on the corresponding relation between the second interaction data and the second animation data preset by the at least one live end, and obtain the second rendering pictures corresponding to the second interaction data based on the first rendering pictures corresponding to the second interaction data and the second animation data, and display the second rendering pictures corresponding to the audience end and the first rendering pictures corresponding to the audience end in a live interface.
Optionally, the client includes a recording end;
the method further comprises the steps of:
storing rendering pictures respectively corresponding to the at least one client;
receiving modification requests which are respectively sent by the at least one client and are aimed at the rendering pictures corresponding to the at least one client;
and modifying the at least one rendering picture based on the modification request, and sending the modified rendering picture corresponding to the at least one client so that the at least one client can display the modified rendering picture corresponding to the at least one client within a preset time.
According to a second aspect of embodiments of the present disclosure, there is provided a rendering method, applied to a client, including:
collecting rendering data corresponding to a target object;
sending the rendering data to a server, so that the server generates respective corresponding virtual images based on rendering data corresponding to at least one client respectively, obtains rendering pictures corresponding to the at least one client respectively based on at least one virtual image corresponding to the at least one client, and sends the rendering pictures corresponding to the at least one client respectively to the at least one client;
receiving a corresponding rendering picture sent by the server;
and displaying the corresponding rendering picture.
Optionally, before sending the rendering data to the server, the method further includes:
based on the rendering data, a preliminary rendering picture is obtained.
Optionally, the receiving the corresponding rendering frame sent by the server includes:
receiving a corresponding first rendering picture and at least one second interaction data sent by the server, wherein the first rendering picture is obtained by the server based on at least one virtual image corresponding to the at least one client, the virtual image is generated by the server based on the rendering data and the first animation data respectively corresponding to the at least one client, the first animation data is determined by the server based on at least one first interaction data and a corresponding relation between preset first interaction data and first animation data, and the at least one first interaction data and the second interaction data are sent to the server by an audience terminal;
the method further comprises the steps of:
determining second animation data corresponding to the at least one second interaction data based on a corresponding relation between the preset second interaction data and the second animation data;
obtaining a corresponding second rendering picture based on the corresponding first rendering picture and the second animation data;
the rendering picture corresponding to the display comprises:
and displaying the corresponding second rendering picture.
According to a third aspect of embodiments of the present disclosure, there is provided a rendering apparatus including:
the first receiving module is configured to receive rendering data respectively sent by at least one client;
A first generation module configured to generate respective corresponding avatars based on rendering data respectively corresponding to the at least one client;
a first rendering module configured to obtain rendering pictures respectively corresponding to the at least one client based on at least one avatar corresponding to the at least one client;
and the first sending module is configured to send the rendering frames corresponding to the at least one client respectively.
Optionally, the rendering data includes audio information; the apparatus further comprises:
a second generation module configured to generate sound information corresponding to the at least one client, respectively, based on at least one rendering data corresponding to the at least one client;
the first sending module is specifically configured to send the rendering frames and the sound information corresponding to the at least one client side respectively to the at least one client side, so that the at least one client side synchronously outputs the rendering frames and the sound information.
Optionally, the rendering data further includes action information; the first generation module is specifically configured to correct the action information by using the audio information respectively corresponding to the at least one client to obtain action information matched with the audio information; and generating respective corresponding virtual images based on the action information which is respectively corresponding to the at least one client and is matched with the audio information.
Optionally, the audio information includes a timestamp; the first generation module is specifically configured to correct the action information by using the time stamps of the audio information corresponding to the at least one client respectively, so as to obtain action information matched with the audio information; and generating respective corresponding virtual images based on the action information which is respectively corresponding to the at least one client and is matched with the audio information.
Optionally, the first generating module is specifically configured to determine at least one action model corresponding to the audio information respectively corresponding to the at least one client according to a corresponding relation between preset audio information and action models; and generating respective corresponding avatars based on the at least one action model respectively corresponding to the at least one client.
Optionally, the client comprises a live broadcast end; the apparatus further comprises:
A second rendering module configured to obtain a rendering screen corresponding to the viewer end based on at least one avatar corresponding to the at least one live end;
the second sending module is configured to send the rendering picture corresponding to the audience terminal so that the audience terminal can display the rendering picture in a live broadcast viewing interface;
The first sending module is specifically configured to send the rendering pictures corresponding to the at least one live broadcast terminal and the rendering pictures corresponding to the audience terminal to the at least one live broadcast terminal, so that the at least one live broadcast terminal displays the rendering pictures corresponding to the live broadcast terminal and the rendering pictures corresponding to the audience terminal in a live broadcast interface.
Optionally, the apparatus further includes:
The second receiving module is configured to receive the first interaction data and the second interaction data sent by the audience terminal;
A first determining module configured to determine first animation data corresponding to first interaction data based on a corresponding relation between the first interaction data and the first animation data, and determine second animation data corresponding to second interaction data based on a corresponding relation between the second interaction data and the second animation data;
The first generation module is specifically configured to generate respective corresponding virtual images based on rendering data and the first animation data respectively corresponding to the at least one live broadcast end;
The second rendering module is specifically configured to obtain a rendering picture corresponding to the audience terminal based on at least one virtual image corresponding to the at least one live terminal and the second animation data;
The first rendering module is specifically configured to obtain rendering pictures corresponding to the at least one live broadcast end respectively based on the at least one virtual image corresponding to the at least one live broadcast end and the second animation data.
Optionally, the apparatus further includes:
The second receiving module is configured to receive the first interaction data and the second interaction data sent by the audience terminal;
the second determining module is configured to determine first animation data corresponding to the first interaction data based on a corresponding relation between preset first interaction data and the first animation data;
The first generation module is specifically configured to generate respective corresponding virtual images based on rendering data and the first animation data respectively corresponding to the at least one live broadcast end;
The second rendering module is specifically configured to obtain a first rendering picture corresponding to the audience terminal based on at least one avatar corresponding to the at least one live broadcast terminal;
The second sending module is specifically configured to send the first rendering frame and the second interaction data corresponding to the audience terminal, so that the audience terminal determines second animation data corresponding to the second interaction data based on a corresponding relation between the preset second interaction data and the second animation data, obtains the corresponding second rendering frame based on the corresponding first rendering frame and the second animation data, and displays the corresponding second rendering frame in a live broadcast watching interface;
The first rendering module is specifically configured to obtain first rendering pictures corresponding to the at least one live broadcast end respectively based on at least one virtual image corresponding to the at least one live broadcast end;
The first sending module is specifically configured to send the second interaction data and the first rendering frames corresponding to the viewer end to the at least one live broadcast end, so that the at least one live broadcast end can determine second animation data corresponding to the second interaction data based on the corresponding relation between the second interaction data and the second animation data, and obtain the second rendering frames corresponding to the second interaction data based on the first rendering frames and the second animation data, and display the second rendering frames corresponding to the viewer end and the first rendering frames corresponding to the viewer end in a live broadcast interface.
Optionally, the client includes a recording end; the apparatus further comprises:
a storage module configured to store rendering pictures respectively corresponding to the at least one client;
The third receiving module is configured to receive modification requests for the rendering frames corresponding to the at least one client respectively;
The modification module is configured to modify the at least one rendering picture based on the modification request, and send the modified rendering picture corresponding to the at least one client, so that the at least one client can display the respective modified rendering picture in a preset time.
According to a fourth aspect of embodiments of the present disclosure, there is provided a rendering apparatus including:
The acquisition module is configured to acquire rendering data corresponding to the target object;
A third sending module, configured to send the rendering data to a server, so that the server generates respective corresponding avatars based on rendering data corresponding to at least one client, obtains rendering frames corresponding to the at least one client based on at least one avatar corresponding to the at least one client, and sends the rendering frames corresponding to the at least one client;
the fourth receiving module is configured to receive the corresponding rendering picture sent by the server;
And the display module is configured to display the corresponding rendering picture.
Optionally, the apparatus further includes:
and a third rendering module configured to obtain a preliminary rendering screen based on the rendering data.
Optionally, the fourth receiving module is specifically configured to receive a corresponding first rendering frame and at least one second interaction data sent by the server, where the first rendering frame is obtained by the server based on at least one avatar corresponding to the at least one client, the avatar is generated by the server based on rendering data and first animation data respectively corresponding to the at least one client, the first animation data is determined by the server based on at least one interaction data and a preset correspondence between the first interaction data and the first animation data, and the at least one first interaction data and the second interaction data are sent by the viewer to the server;
The apparatus further comprises:
A third determining module configured to determine second animation data corresponding to the at least one second interaction data based on a correspondence of preset second interaction data and second animation data;
A fourth rendering module configured to obtain a corresponding second rendering picture based on the corresponding first rendering picture and the second animation data;
The display module is specifically configured to display the corresponding second rendering picture.
According to a fifth aspect of embodiments of the present disclosure, there is provided a server comprising:
A processor;
a memory for storing the processor-executable instructions;
Wherein the processor is configured to execute the instructions to implement the rendering method of the first aspect.
According to a sixth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions which, when executed by a processor of a server, enable the server to perform the rendering method of any one of the first or second aspects.
According to a seventh aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer instructions which, when executed by a processor, implement the rendering method of any one of the first or second aspects.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
Any one of the at least one client sends its corresponding rendering data only to the server, without sending it to the other clients, so that the server can generate the respective corresponding avatars based on the rendering data respectively corresponding to the at least one client, render the rendering pictures respectively corresponding to the at least one client, and send each picture to its corresponding client. Having the server receive the clients' rendering data avoids the performance bottleneck in the rendering process caused by sending data between clients, which improves rendering efficiency, and performing unified rendering with the server's computing power improves the rendering effect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a flow chart illustrating a rendering method according to an exemplary embodiment.
Fig. 2-1 is a schematic diagram of a rendered screen according to an example embodiment.
Fig. 2-2 are schematic diagrams illustrating a rendered screen according to another exemplary embodiment.
Fig. 2-3 are schematic diagrams illustrating a rendered screen according to yet another exemplary embodiment.
Fig. 3 is a flowchart illustrating a rendering method according to another exemplary embodiment.
Fig. 4 is a flowchart illustrating a rendering method according to still another exemplary embodiment.
Fig. 5 is a flowchart illustrating a rendering method according to still another exemplary embodiment.
Fig. 6 is a flowchart illustrating a rendering method according to still another exemplary embodiment.
FIG. 7 is a schematic diagram illustrating a rendered scene according to an example embodiment.
Fig. 8 is a block diagram illustrating a rendering apparatus according to an exemplary embodiment.
Fig. 9 is a block diagram of a rendering apparatus according to another exemplary embodiment.
Fig. 10 is a block diagram of a server, according to an example embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
Embodiments of the present disclosure may be applied to picture rendering in live or recorded-broadcast scenarios, in particular avatar picture rendering. Taking a live scenario as an example, the broadcasting party, also called the anchor user, usually interacts through an avatar instead of appearing in person: the live end collects the anchor user's live data, obtains rendering data from it, renders that data to obtain a rendering picture (i.e., an avatar picture), and outputs the picture on the live interface. To liven up the live atmosphere, multiple anchors typically interact in the same live scenario, such as the same live room, each using their own avatar.
In the conventional scheme, the multiple live ends in the same live scenario connect to the same server, and each live end sends the rendering data of its anchor user to the other live ends in the scenario, for example by sending it to the server first, which forwards it to the other live ends, or by sending it directly to the other live ends. Each live end then renders with the rendering data of all anchor users to obtain and display a rendering picture containing every anchor's avatar. With this approach, each live end's rendering data must be delivered to all other live ends; when many anchors participate in the interaction, this inter-client traffic bottlenecks the rendering process and hurts rendering efficiency, and because a live end's computing power is limited, rendering multiple anchor avatars consumes substantial resources and yields a poor rendering effect.
To solve the above technical problems, the inventors realized that since the multiple live ends already connect to the same server, sending the rendering data corresponding to each anchor user directly to the server and letting the server perform unified rendering would avoid sending data between live ends. Moreover, the server's computing power exceeds that of a live end, so a better rendering effect can be obtained. Through a series of such considerations and experiments, the technical solution of the present disclosure was arrived at: a rendering method that includes receiving rendering data respectively sent by at least one client; generating respective corresponding virtual images based on the rendering data respectively corresponding to the at least one client; obtaining, based on the at least one virtual image corresponding to the at least one client, rendering pictures respectively corresponding to the at least one client; and sending the rendering pictures respectively corresponding to the at least one client. Having the server receive the clients' rendering data avoids the performance bottleneck in the rendering process caused by sending data between clients, which improves rendering efficiency, and performing unified rendering with the server's computing power improves the rendering effect.
The following description of the technical solutions in the embodiments of the present disclosure will be made clearly and completely with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, not all embodiments. Based on the embodiments in this disclosure, all other embodiments that a person of ordinary skill in the art would obtain without making any inventive effort are within the scope of the disclosure.
Fig. 1 is a flowchart illustrating a rendering method according to an exemplary embodiment, and as shown in fig. 1, the rendering method may be applied to a server, and may specifically include the following steps.
In step S11, rendering data respectively transmitted by at least one client is received.
Here, rendering data refers to data used to render the avatar corresponding to a user, and may be obtained from collected data corresponding to that user. For example, in a live scenario the client may be a live end and the corresponding user an anchor user; the at least one live end may include one or more live ends belonging to the same live scenario, such as the same live room, and each live end may collect the live data of its anchor user and obtain the corresponding rendering data from it. Likewise, in a recording scenario the client may be a recording end and the corresponding user a recording user; the at least one recording end may include one or more recording ends belonging to the same recording scenario, such as the same recording room, and each recording end may collect the recording data of its user and obtain the corresponding rendering data from it. The specific process by which the client collects user data and derives rendering data is described in subsequent embodiments and not repeated here.
The at least one client may send its corresponding rendering data to the server, so that the server can render based on the rendering data corresponding to the at least one client. To improve efficiency, the client may send the rendering data to the server, and otherwise communicate with it, over a low-latency network protocol such as the User Datagram Protocol (UDP).
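For illustration only (code is not part of the disclosed embodiments), a minimal Python sketch of this client-to-server path over UDP might look as follows; the JSON field layout, the endpoint address, and the function name are assumptions rather than anything specified by the disclosure:

```python
# Sketch: a client packaging one frame of rendering data into a UDP datagram.
import json
import socket
import time

SERVER_ADDR = ("127.0.0.1", 9000)  # hypothetical server endpoint

def send_rendering_data(client_id: str, audio_chunk: bytes, motion: dict) -> None:
    """Package one frame of rendering data and send it as a single UDP datagram."""
    payload = json.dumps({
        "client_id": client_id,
        "timestamp_ms": int(time.time() * 1000),
        "audio": audio_chunk.hex(),   # audio information, hex-encoded to fit JSON
        "motion": motion,             # e.g. expression / limb / finger information
    }).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, SERVER_ADDR)

send_rendering_data("client-a", b"\x01\x02", {"mouth": "open"})
```

A real implementation would add sequencing information so that the server-side packet-loss recovery mentioned later in this description has something to work with.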
In step S12, respective corresponding avatars are generated based on rendering data respectively corresponding to at least one client.
In step S13, rendering screens respectively corresponding to at least one client are obtained based on at least one avatar corresponding to at least one client.
From the received rendering data corresponding to the at least one client, the server may generate the respective corresponding avatars and obtain a rendering picture based on the at least one avatar. The rendering picture may contain the avatars of the users corresponding to the at least one client. Taking two clients as an example, the server may generate the avatar of user a from the rendering data received from client A, generate the avatar of user b from the rendering data received from client B, and then render a picture containing both the avatar of user a and the avatar of user b.
Optionally, in addition to the user avatars, the rendering picture may include other content such as a game picture or a chat background, which is not limited here.
When the server performs rendering, rendering pictures corresponding to at least one client can be obtained. Specifically, the rendering screen corresponding to the client may refer to a rendering screen corresponding to a user viewing angle corresponding to the client, such as a rendering screen corresponding to a hosting user viewing angle or a rendering screen corresponding to a recording user viewing angle.
The rendering picture corresponding to a user's viewing angle can be realized in various ways. As one optional implementation, in the rendering picture corresponding to a given user's viewing angle, that user's own avatar may occupy a larger proportion of the picture than the other users' avatars. For ease of understanding, taking two clients as an example, fig. 2-1 shows a schematic diagram of the rendering picture corresponding to user a's viewing angle on client A: the picture contains the avatars of user a and user b, and user a's avatar occupies the larger proportion. Fig. 2-2 shows the rendering picture corresponding to user b's viewing angle on client B, in which user b's avatar occupies the larger proportion.
As another optional implementation, in the rendering picture corresponding to a given user's viewing angle, that user's avatar may be placed in the middle of the picture while the other users' avatars sit toward the edges, and so on; these variants are not illustrated one by one.
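For illustration only, a sketch of such per-viewing-angle composition, assuming simple scale and centering rules that the disclosure leaves open:

```python
# Sketch: laying out all avatars for the picture seen from one user's viewpoint.
from dataclasses import dataclass

@dataclass
class AvatarPlacement:
    user_id: str
    scale: float     # relative size of the avatar in the picture
    centered: bool   # whether the avatar sits in the middle of the picture

def compose_viewpoint(owner_id: str, user_ids: list[str]) -> list[AvatarPlacement]:
    """Lay out all avatars for the picture rendered for owner_id's viewing angle."""
    return [
        AvatarPlacement(uid,
                        scale=1.0 if uid == owner_id else 0.7,
                        centered=(uid == owner_id))
        for uid in user_ids
    ]

# Picture for user a's viewing angle: a is larger and centered, b smaller.
print(compose_viewpoint("a", ["a", "b"]))
```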
In order to further improve the rendering effect, in some embodiments, the server may pre-process at least one rendering data corresponding to at least one client before rendering, for example, may include packet loss recovery, voice noise reduction, and so on.
In step S14, the rendering frames corresponding to the at least one client are transmitted to the at least one client.
After the server side renders and obtains the rendering picture corresponding to at least one client side, the rendering picture can be sent to the corresponding client side so that the client side can output the rendering picture. For example, in a live scene, the server sends a rendering screen to a corresponding client, which may present the rendering screen on a live interface. For another example, in the recording and playing scene, the server side sends the rendering picture to the corresponding client side, and the client side can output the rendering picture after a certain preset time interval, wherein the preset time interval can be set according to the actual application scene without limitation.
Specifically, the server may encode the rendering picture and send the encoded picture to the corresponding client in the form of a live stream. Optionally, the server may send the encoded live stream to the client over the UDP protocol.
In this embodiment, any client among the at least one client sends its corresponding rendering data only to the server, without sending it to the other clients, so that the server can render the rendering pictures respectively corresponding to the at least one client from the received rendering data and send each picture to its corresponding client. Having the server receive the clients' rendering data avoids the performance bottleneck in the rendering process caused by sending data between clients, which improves rendering efficiency, and performing unified rendering with the server's computing power improves the rendering effect.
In practical application, when the server performs rendering, a rendering picture corresponding to other viewing angles, such as a rendering picture corresponding to a viewing angle of a viewer user, a rendering picture corresponding to a VIP viewing angle of a viewer, etc., may also be obtained. Taking a live scene as an example, a viewer user may refer to a user who views live broadcast of the anchor users. Taking 2 clients as an example, fig. 2-3 show schematic diagrams of a rendered screen corresponding to the viewing angle of the viewer user, where the screen includes avatars of the anchor user a and the anchor user b, and the proportion of the avatar of the anchor user a in the screen is the same as that of the anchor user b.
Optionally, the server may send the rendering picture corresponding to the audience user's viewing angle together with the rendering picture corresponding to the anchor user's viewing angle to the corresponding live end, so that the live end displays them in the live interface. Further, the corresponding anchor may make adjustments based on the rendered pictures, for example adjusting their own actions.
After rendering each picture, the server can store it. Taking the recorded-broadcast scenario as an example, after the server sends the rendering pictures respectively corresponding to the at least one recording end, each recording end outputs them after a certain preset time interval; within that interval, the recording user may modify the rendering pictures, for example deleting a certain frame or swapping the order of two frames. The server then makes the corresponding modification to the stored pictures.
Specifically, the server may receive the modification requests, each directed at its corresponding rendering pictures, sent by the at least one recording end, modify the at least one rendering picture based on the modification request, and send the modified rendering pictures respectively corresponding to the at least one recording end back to the corresponding recording ends, so that each recording end displays its modified pictures within the preset time.
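For illustration only, a sketch of how such modification requests might be applied to the stored frame sequence; the request shape ("delete" by index, "swap" by index pair) is an assumption covering the two examples above:

```python
# Sketch: applying one viewer-requested modification to a stored frame list.
def apply_modification(frames: list, request: dict) -> list:
    """Apply one modification request to a stored frame sequence."""
    if request["op"] == "delete":
        del frames[request["index"]]
    elif request["op"] == "swap":
        i, j = request["indices"]
        frames[i], frames[j] = frames[j], frames[i]
    return frames

frames = ["f0", "f1", "f2", "f3"]
apply_modification(frames, {"op": "delete", "index": 1})       # drop one frame
apply_modification(frames, {"op": "swap", "indices": (0, 2)})  # reorder two frames
print(frames)  # -> ['f3', 'f2', 'f0']
```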
In practical applications, when at least one user interacts in the same scene, the users typically communicate by voice. The rendering data may therefore include audio information, which may be obtained from the audio data in the collected data corresponding to the user; the specific process is described in a subsequent embodiment and not repeated here.
Thus, in some embodiments, when the server performs rendering, it may also generate sound information respectively corresponding to the at least one client based on the rendering data corresponding to the at least one client. The sound information may include the voices of the users corresponding to the at least one client. Optionally, in the sound information corresponding to a given client, the voice of that client's own user may be louder than those of the other users.
The server may send the rendering picture and the sound information corresponding to the at least one client together to the at least one client, so that each client outputs both. To improve the rendering effect and keep the audio and video synchronized, the server can render the picture and the sound information synchronously from the rendering data, and the client outputs them synchronously. The specific rendering process is described in the following embodiments.
In practical applications, when at least one user interacts in the same scene, they may interact through actions as well as voice. In that case, the rendering data may include both audio information and motion information, and the avatar corresponding to a user may be generated by rendering with the audio information and the motion information. This rendering process is described below with the embodiment shown in fig. 3, a flowchart of a rendering method according to another exemplary embodiment; the method is again applied to a server and may include the following steps.
In step S31, rendering data respectively transmitted by at least one client is received, the rendering data including audio information and motion information.
In this embodiment, the rendering data may include audio information and motion information, and the motion information may be obtained based on motion data in the acquired data corresponding to the user.
Alternatively, the motion information may include at least one of expression information, limb information, and finger information. The expression information may include a mouth shape state, an eye state, and the like, may be obtained based on facial expression data in the acquired data corresponding to the user, the limb information may include a limb motion, a shoulder motion, a neck motion, and the like, may be obtained based on limb motion data in the acquired data corresponding to the user, and the finger information may include a finger state and the like, may be obtained based on finger motion data in the acquired data corresponding to the user, and the specific implementation process will be described in the following embodiments, and will not be repeated here.
The implementation process of the server receiving the rendering data sent by the client may refer to the specific implementation process of step S11 in the embodiment shown in fig. 1, which is not described herein again.
In step S32, the action information is corrected by using the audio information corresponding to each of the at least one client to obtain action information matching with the audio information.
In practical applications, the audio information and the motion information generally correspond to each other. Where the motion information includes expression information, the expression information may include a mouth-shape state: for example, when the audio information is "hello", the mouth shape goes from closed to open, while for a closed-mouth sound such as "mm" the mouth stays closed. The action information can therefore be corrected with the audio information to obtain action information matched to it, that is, to synchronize the audio information and the action information.
Alternatively, the audio information may include a time stamp, and the action information may be corrected using the time stamp of the audio information to obtain action information matching the audio information.
Optionally, the action information may also include a timestamp, and the action information may be corrected based on the timestamp of the audio information and the timestamp of the action information, to obtain the action information matched with the audio information.
The above manner of correcting the motion information by using the audio information may be set according to an actual application scenario, which is not limited in the present disclosure.
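For illustration only, a sketch of timestamp-based correction under one plausible reading: each motion sample is re-stamped onto the nearest audio timestamp so that mouth shapes line up with speech. Nearest-neighbour matching is an assumption, since the disclosure leaves the correction manner open:

```python
# Sketch: snapping motion samples onto the nearest audio timestamps (in ms).
import bisect

def align_motion_to_audio(audio_ts: list[int], motion: list[tuple[int, dict]]):
    """Re-stamp each motion sample onto the nearest audio timestamp."""
    aligned = []
    for ts, info in motion:
        i = bisect.bisect_left(audio_ts, ts)
        left = audio_ts[max(i - 1, 0)]
        right = audio_ts[min(i, len(audio_ts) - 1)]
        nearest = left if abs(left - ts) <= abs(right - ts) else right
        aligned.append((nearest, info))
    return aligned

print(align_motion_to_audio(
    [0, 40, 80],                                        # audio timestamps
    [(3, {"mouth": "open"}), (77, {"mouth": "closed"})]))
# -> [(0, {'mouth': 'open'}), (80, {'mouth': 'closed'})]
```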
In step S33, respective corresponding avatars are generated based on the action information matching the audio information, which is respectively corresponding to the at least one client, and sound information, which is respectively corresponding to the at least one client, is generated based on the audio information.
In step S34, rendering screens respectively corresponding to at least one client are obtained based on at least one avatar corresponding to at least one client.
The server may generate an avatar corresponding to each client based on the motion information matched with the audio information. At this time, when a rendering screen is obtained based on avatar rendering corresponding to at least one client, the rendering screen is matched with sound information generated based on audio information.
Specifically, the server can drive the rendering using multimodal techniques, for example expression driving, voice driving, limb driving, and finger driving, to obtain matched rendering pictures and sound information. Optionally, automatic speech recognition (ASR), natural language processing (NLP), text-to-speech (TTS), expression and body-motion analysis, and similar techniques may also be used for rendering, which the present disclosure does not limit.
Optionally, the method for obtaining the rendered frames corresponding to the at least one client may include:
rendering and generating avatar pictures corresponding to the at least one client based on the at least one action information corresponding to the at least one client.
In step S35, the rendering frame and the sound information corresponding to the at least one client are sent to the at least one client, so that the at least one client can synchronously output the corresponding rendering frame and sound information.
The server can encode the rendering picture and the corresponding sound information and send them to the corresponding client in the form of a live stream, optionally over the UDP protocol.
In this embodiment, any client among the at least one client may send its corresponding rendering data, including audio information and motion information, to the server. The server performs unified rendering based on the rendering data corresponding to each client: it corrects the motion information with the audio information to synchronize the two, renders the motion information to obtain the rendering pictures respectively corresponding to the clients, generates sound information matched to those pictures, and sends the pictures and sound information to the corresponding clients. This avoids the performance bottleneck in the rendering process caused by sending data between clients and improves rendering efficiency; on the basis of unified rendering with the server's computing power, the rendering pictures and the sound information are rendered synchronously, further improving the rendering effect.
Optionally, during rendering the user's motion information may be incomplete; for example, if the user does not face the camera for a certain preset time, the user's expression information cannot be collected during that time. In that case, a preset action model may be used for rendering. The preset action models may include a smiling action model, a speaking action model, and the like. For example, while the user is not facing the camera, the smiling action model may be used for rendering, yielding a rendering picture in which the user's avatar is smiling.
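For illustration only, a sketch of this fallback, with the model names assumed:

```python
# Sketch: substituting a preset action model when expression data is missing.
PRESET_MODELS = {"idle": "smile_model", "speaking": "talking_model"}  # assumed names

def pick_motion(captured_expression, is_speaking: bool):
    """Use captured expression data when available, else fall back to a preset model."""
    if captured_expression is not None:
        return captured_expression
    return PRESET_MODELS["speaking" if is_speaking else "idle"]

print(pick_motion(None, is_speaking=False))  # -> 'smile_model'
```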
In practice, when at least one user interacts in the same scene, poor network conditions or other reasons may limit the users to voice-only interaction, in which case the rendering data may include only audio information. This rendering process is described below with the embodiment shown in fig. 4, a flowchart of a rendering method according to still another exemplary embodiment; the method is again applied to a server and may include the following steps.
In step S41, rendering data respectively transmitted by at least one client is received, the rendering data including audio information.
The implementation process of step S41 may refer to the specific implementation process of step S11 in the embodiment shown in fig. 1, and will not be described herein.
In step S42, at least one action model corresponding to the audio information corresponding to the at least one client is determined according to the preset correspondence between the audio information and the action model.
In this embodiment, since the motion data of the user cannot be obtained, a motion model, such as a waving motion model, a singing motion model, a dancing motion model, and the like, may be preset, so as to generate a corresponding avatar by using the audio information and the motion model.
Optionally, correspondences between motion models and audio information may also be set: for example, the waving motion model may correspond to audio information containing greetings such as "hello" or "hi", the singing motion model to audio information containing content such as "sing", and the dancing motion model to audio information containing content such as "dance". The motion models and their correspondences with the audio information can be set according to the actual application scenario, which the present disclosure does not specifically limit.
The server may store the preset correspondence between motion models and audio information in a database. When rendering, the corresponding motion model can be determined from the database according to the received audio information and the correspondence. For example, if the received audio information is a greeting such as "hello, everyone", the waving motion model may be determined, and rendering performed using the audio information and the waving motion model.
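For illustration only, a sketch of such a lookup with keyword matching over recognised speech text; the keyword table and the matching rule are assumptions:

```python
# Sketch: selecting a preset action model by trigger keywords in recognised speech.
KEYWORD_TO_MODEL = {          # illustrative keyword table
    "hello": "waving_model",
    "hi": "waving_model",
    "sing": "singing_model",
    "dance": "dancing_model",
}

def model_for_audio(recognised_text: str):
    """Return the first action model whose trigger keyword appears in the text."""
    text = recognised_text.lower()
    for keyword, model in KEYWORD_TO_MODEL.items():
        if keyword in text:
            return model
    return None  # no preset model matched; a default pose could be used instead

print(model_for_audio("Hi everyone!"))  # -> 'waving_model'
```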
In step S43, respective corresponding avatars are generated based on at least one motion model respectively corresponding to at least one client, and sound information respectively corresponding to at least one client is generated based on the audio information.
In step S44, rendering screens respectively corresponding to at least one client are obtained based on at least one avatar corresponding to at least one client.
Based on the determined at least one action model, the server may generate the avatar corresponding to each client and render the rendering pictures corresponding to the at least one client from the at least one avatar. Specifically, the server can use multimodal techniques to correct the action model with the audio information and drive the rendering to obtain synchronized picture and sound information; the details were described in the embodiment shown in fig. 3 and are not repeated here.
In step S45, the rendering frame and the sound information corresponding to the at least one client are sent to the at least one client, so that the at least one client can synchronously output the corresponding rendering frame and sound information.
The implementation process of step S45 may refer to the specific implementation process of step S35 in the embodiment shown in fig. 3, which is not described herein.
In this embodiment, any one of the at least one client may send its corresponding rendering data, including audio information, to the server. The server determines the action model corresponding to the audio information according to a preset correspondence between audio information and action models, renders uniformly based on the audio information and action model corresponding to each client, corrects the action model with the audio information, and drives the rendering to obtain matched rendering pictures and sound information, achieving audio-video synchronization, before sending the rendering pictures and sound information to the corresponding clients. This avoids the performance bottleneck caused by sending data between clients and improves rendering efficiency. Even when multiple users interact by voice only, their avatar pictures can still be rendered through the preset action models and their correspondence with audio information, enhancing the interaction atmosphere and improving the rendering effect; and since the server's computing power is used for unified rendering, the rendering pictures and audio information are rendered synchronously, further improving the rendering effect.
In practical applications, taking a live scene as an example, when multiple anchor users interact in the same live room, a viewer watching the live interface of any anchor user can interact with that anchor user, for example by commenting on the live content or sending the anchor user a reward. To further improve the rendering effect, the server can also render based on the interaction of audience users, as described below with the embodiment shown in fig. 5. As shown in fig. 5, a flowchart of a rendering method according to still another exemplary embodiment, the method is likewise applied to a server side and may include the following steps.
In step S51, rendering data respectively transmitted by at least one live side is received.
The implementation process of step S51 may refer to the specific implementation process of step S11 in the embodiment shown in fig. 1, and will not be described herein.
In step S52, interaction data sent by the viewer end is received.

The interaction data may include comment data, reward data and the like, obtained by the viewer end in response to the interaction operations of an audience user. For example, in response to an audience user entering a comment at the live interface of any one of the at least one anchor, the corresponding viewer end may generate comment data containing the comment content.

In this embodiment, when an audience user views the live interface of any anchor, the corresponding viewer end may connect to the server and send that audience user's interaction data to it, so that the server can render based on the interaction data. Optionally, the viewer end may communicate with the server and send the interaction data through the UDP protocol.
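A minimal sketch of the viewer end pushing one interaction event to the server over UDP is given below; the message schema and server address are illustrative assumptions, not values taken from the disclosure:

```python
import json
import socket

SERVER_ADDR = ("render-server.example.com", 9000)  # hypothetical endpoint

def send_interaction(sock: socket.socket, kind: str, payload: dict) -> None:
    """Serialize one interaction event and send it as a single UDP datagram."""
    message = json.dumps({"type": kind, "data": payload}).encode("utf-8")
    sock.sendto(message, SERVER_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_interaction(sock, "comment", {"text": "great show!"})      # comment data
send_interaction(sock, "reward", {"gift": "love", "count": 1})  # reward data
```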
In step S53, animation data corresponding to the interaction data is determined based on a preset correspondence between interaction data and animation data.

In practical applications, the server usually reflects the interaction of audience users in animation form. Specifically, animation data such as a smile animation and a clapping animation may be preset, together with a correspondence between interaction data and animation data, for example a correspondence between the smile animation and interaction data containing praise content, and a correspondence between the clapping animation and interaction data containing reward content. The server may store the preset animation data and the correspondence in a database; when rendering, the corresponding animation data can be determined from the database according to the received interaction data and the correspondence. For example, if the received interaction data is reward data for sending a love gift, the corresponding clapping animation can be determined and combined into the rendering.
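Assuming the correspondence is held as an in-memory table (the disclosure only says it is stored in a database), the server-side lookup might be sketched as:

```python
# Hedged sketch of the interaction-data-to-animation-data lookup; the
# animation names are illustrative placeholders.

INTERACTION_TO_ANIMATION = {
    "comment": "smile",  # praise in a comment -> smile animation
    "reward": "clap",    # a gift/reward       -> clapping animation
}

def animation_for(event: dict):
    """Return the preset animation for an interaction event, or None."""
    return INTERACTION_TO_ANIMATION.get(event.get("type"))

print(animation_for({"type": "reward", "data": {"gift": "love"}}))  # -> "clap"
```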
Specifically, the server uses the animation data when rendering the anchor's avatar, and may also combine animation data when rendering the picture around the avatar. Optionally, the interaction data may include first interaction data and second interaction data. First animation data corresponding to the first interaction data may be determined based on a preset correspondence between first interaction data and first animation data, and second animation data corresponding to the second interaction data based on a preset correspondence between second interaction data and second animation data. The first animation data is used to generate the anchor's avatar and may include, for example, a clapping animation; the second animation data is used to obtain the rendering picture and may include, for example, a love animation. Both can be set according to the actual application scene and are not specifically limited here.
In step S54, respective corresponding avatars are generated based on the rendering data and the first animation data respectively corresponding to the at least one live end, and rendering pictures respectively corresponding to the at least one live end are obtained based on the at least one avatar and the second animation data.

Taking a clapping animation as the first animation data and a love animation as the second animation data, the server can combine the rendering data and first animation data respectively corresponding to the at least one live end to generate the corresponding avatars, where each avatar may perform a clapping action. Then, based on the at least one avatar and the second animation data, rendering pictures respectively corresponding to the at least one live end can be obtained, where each rendering picture may include the love animation.
In step S55, the rendering frames corresponding to the at least one live broadcast end are sent to the at least one live broadcast end.
The implementation process of step S55 may refer to the specific implementation process of step S13 in the embodiment shown in fig. 1, and will not be described herein.
In this embodiment, any one of the at least one live end may send its corresponding rendering data to the server while the viewer end sends its corresponding interaction data. The server may first determine the first animation data corresponding to the first interaction data and the second animation data corresponding to the second interaction data, generate the avatars respectively corresponding to the at least one live end based on the rendering data and first animation data, and render the rendering pictures respectively corresponding to the at least one live end based on the at least one avatar and the second animation data. This avoids the performance bottleneck caused by sending data between live ends, improves rendering efficiency, and, by combining the server's computing power with the anchor users' rendering data and the audience users' interaction data, improves the rendering effect.
In practical applications, the server may further render a rendering picture corresponding to the viewer end based on the at least one avatar and the second animation data corresponding to the at least one client, and send that rendering picture to the viewer end.

The server can encode the rendering picture corresponding to the viewer end and send it as a live stream for audience users to watch. Optionally, the encoded live stream may be distributed to each viewer end through a content delivery network (Content Delivery Network, CDN).
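A hedged sketch of the encode-and-push step is shown below, using ffmpeg as a stand-in encoder since the disclosure names no codec, container, or ingest protocol; the ingest URL and frame geometry are assumptions:

```python
import subprocess

WIDTH, HEIGHT, FPS = 1280, 720, 30
INGEST_URL = "rtmp://cdn-ingest.example.com/live/room42"  # placeholder URL

# Pipe raw RGB frames into ffmpeg, which encodes H.264 and pushes an FLV
# stream toward a CDN ingest point for distribution to viewer ends.
encoder = subprocess.Popen(
    ["ffmpeg", "-f", "rawvideo", "-pix_fmt", "rgb24",
     "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS), "-i", "-",
     "-c:v", "libx264", "-f", "flv", INGEST_URL],
    stdin=subprocess.PIPE,
)

def push_frame(rgb_bytes: bytes) -> None:
    """Write one WIDTH x HEIGHT RGB24 rendered frame to the encoder."""
    encoder.stdin.write(rgb_bytes)
```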
In practical applications, to relieve the computing pressure on the server, part of the audience users' interaction data, such as the second interaction data, may instead be processed by the live end or the viewer end.
The server may determine the first animation data corresponding to the first interaction data based on the preset correspondence between first interaction data and first animation data, generate the respective corresponding avatars based on the rendering data and first animation data respectively corresponding to the at least one live end, and, based on the at least one avatar, obtain first rendering pictures respectively corresponding to the at least one live end and a first rendering picture corresponding to the viewer end. It then sends each live end its own first rendering picture together with the first rendering picture corresponding to the viewer end, and sends the viewer end its first rendering picture.

Optionally, the server may further send the second interaction data to the corresponding live end. The live end determines the second animation data corresponding to the second interaction data based on its locally preset correspondence between second interaction data and second animation data, and obtains its corresponding second rendering picture from the first rendering picture sent by the server and the second animation data, so as to display its own second rendering picture and the first rendering picture corresponding to the viewer end in the live interface. For example, if the second interaction data is love-gift reward data, the live end may determine the corresponding love animation data based on its preset correspondence and render a second rendering picture with a love effect, as the final rendering picture to display on the live interface.

Optionally, the server may instead send the second interaction data to the viewer end, which determines the second animation data corresponding to the second interaction data based on its preset correspondence between second interaction data and second animation data, and obtains its corresponding second rendering picture from the first rendering picture sent by the server and the second animation data, so as to display it in the live viewing interface. The detailed implementation is not repeated here.
Sending part of the audience users' interaction data to each live end or viewer end for rendering relieves the computing pressure on the server, and each live end or viewer end can render according to its own personalized settings, further improving the rendering effect.
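A minimal sketch of this second rendering pass on the live end or viewer end is given below, assuming the second animation arrives as transparent overlay frames (Pillow is used purely for illustration; the file names are hypothetical):

```python
from PIL import Image

def second_render(first_frame_path: str, effect_frame_path: str) -> Image.Image:
    """Composite one frame of the second animation over the first rendering picture."""
    base = Image.open(first_frame_path).convert("RGBA")
    effect = Image.open(effect_frame_path).convert("RGBA").resize(base.size)
    return Image.alpha_composite(base, effect)

# e.g. overlay a love effect determined from the second interaction data
final_frame = second_render("first_render.png", "love_effect.png")
final_frame.save("final_render.png")  # the picture shown in the live interface
```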
Fig. 6 is a flowchart of a rendering method according to still another exemplary embodiment. As shown in fig. 6, the method may be applied to a client and may include the following steps.
In step S61, rendering data corresponding to the target object is collected.
In this embodiment, the method is applied to a client, which may be any one of the at least one client in the same live broadcast or recording scene. The client can use its capture components to capture the user's actions, sounds and so on during live broadcast or recording, obtaining captured data such as action data and audio data. For example, a video capture component such as a camera may capture the user's facial expressions, limb movements and finger movements, and an audio capture component may capture the user's voice.
Based on the acquired data, the avatar generating component may be utilized to obtain rendering data for generating an avatar corresponding to the user, and the specific avatar generating component may be set according to an actual application scenario, which is not limited in the present disclosure.
In practical applications, to improve rendering efficiency, the rendering data may be implemented as structured data. Specifically, structural feature extraction may be performed on the captured data to obtain corresponding structured data, for example by using a neural network model.
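What such structured rendering data might look like after feature extraction is sketched below; the field names are assumptions, since the disclosure only requires that the captured data be reduced to structured features:

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class RenderingData:
    client_id: str
    timestamp_ms: int
    face_blendshapes: list = field(default_factory=list)  # expression features
    skeleton_pose: list = field(default_factory=list)     # limb/finger joints
    audio_chunk_b64: str = ""                             # encoded audio slice

    def to_wire(self) -> bytes:
        """Serialize the structured features for transmission to the server."""
        return json.dumps(asdict(self)).encode("utf-8")

packet = RenderingData("anchor-x", int(time.time() * 1000), [0.1, 0.7], [0.0] * 16)
payload = packet.to_wire()
```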
In step S62, the rendering data is sent to the server, so that the server generates respective corresponding avatars based on the rendering data corresponding to the at least one client, obtains rendering frames corresponding to the at least one client based on the at least one avatar corresponding to the at least one client, and sends the rendering frames corresponding to the at least one client.
In step S63, a corresponding rendering screen sent by the server is received.
Wherein, the rendering picture can comprise the virtual image of the user corresponding to at least one client. The rendering frame may be generated by a server connected to the at least one client, and the specific generating process is described in detail in the foregoing embodiments, which is not described herein.
In step S64, a corresponding rendered screen is displayed.
In practical applications, the client may display the corresponding rendering frame.
In this embodiment, any one of the at least one client may send its corresponding rendering data to the server without sending it to the other clients, so that the server obtains rendering pictures respectively corresponding to the at least one client based on the rendering data and sends them to the corresponding clients. Having the server receive the clients' rendering data avoids the performance bottleneck caused by sending data between clients, improves rendering efficiency, and, by using the server's computing power for unified rendering, improves the rendering effect.
In practical applications, to ensure that the user's rendering data collected by the client is correct, in some embodiments the above rendering method may further include, before sending the rendering data to the server:
And performing pre-rendering by using the rendering data to obtain a preliminary rendering picture.
The preliminary rendering picture may be a rendering picture containing the avatar of the user corresponding to the client. The client's rendering based on the rendering data can follow the server's implementation described above; for example, the action information can be corrected with the audio information to obtain action information matched with the audio information, and a multi-modal technique used to drive the rendering and obtain a rendering picture matched with the sound information. Details are not repeated here.

Further, the client may output the preliminary rendering picture. Based on it, the user can make adjustments, for example checking whether limb actions are reasonable and facial expressions appropriate, which further improves the rendering effect.
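A minimal sketch of this pre-rendering check, with render_locally(), display() and send() standing in for the client's real rendering, UI, and transport components (all hypothetical names):

```python
def preview_before_send(rendering_data, render_locally, display, send):
    preliminary = render_locally(rendering_data)  # local preliminary rendering picture
    display(preliminary)                          # user self-checks pose and expression
    send(rendering_data)                          # then forward the data to the server

preview_before_send(
    {"pose": [0.1, 0.2]},
    render_locally=lambda d: f"<frame for {d}>",
    display=print,
    send=lambda d: None,
)
```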
In practical applications, in order to alleviate the computational pressure of the server, the client may also perform a partial rendering operation. In some embodiments, the client may receive the corresponding first rendered frame and the at least one second interaction data sent by the server. The implementation process of the second interaction data and the server rendering to obtain the first rendering frame is described in detail in the above embodiment, and will not be described herein.
After receiving the second interaction data, the client determines the second animation data corresponding to the at least one second interaction data based on a preset correspondence between second interaction data and second animation data, renders again by combining the second animation data with the received first rendering picture to obtain a second rendering picture as the final rendering picture, and displays it. The implementation details have been described in the above embodiments and are not repeated.
For ease of understanding, the rendering process is described below with reference to the embodiment shown in fig. 7, taking a live scene as an example. FIG. 7 is a schematic diagram of a rendered scene as shown in an exemplary embodiment.
The live end X of anchor user X, the live end Y of anchor user Y, and the viewer end of an audience user are connected to the same server. Anchor user X and anchor user Y interact in the same live room with their respective avatars, and the audience user can enter the live room to watch and participate in the two anchors' interaction.

Live end X collects the rendering data of anchor user X and sends it to the server; live end Y likewise sends the rendering data of anchor user Y; and the viewer end collects the audience user's interaction data and sends it to the server. The server renders based on each anchor user's rendering data and the audience user's interaction data, obtaining rendering pictures containing the avatars of both anchor user X and anchor user Y.

The rendering pictures can include multiple viewpoints, such as anchor user X's viewpoint, anchor user Y's viewpoint, and the audience viewpoint. The server can send the pictures of anchor user X's viewpoint and the audience viewpoint to live end X for display in its live interface, send the pictures of anchor user Y's viewpoint and the audience viewpoint to live end Y for display in its live interface, and send the picture of the audience viewpoint to the viewer end.
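The routing just described can be sketched as a per-viewpoint render-and-dispatch loop; render_view() and send_to() are hypothetical stand-ins for the server's rendering and transport layers:

```python
def dispatch_views(avatars, render_view, send_to):
    views = {
        "anchor_x": render_view(avatars, viewpoint="anchor_x"),
        "anchor_y": render_view(avatars, viewpoint="anchor_y"),
        "audience": render_view(avatars, viewpoint="audience"),
    }
    send_to("live_end_x", [views["anchor_x"], views["audience"]])  # fig. 7 routing
    send_to("live_end_y", [views["anchor_y"], views["audience"]])
    send_to("viewer_end", [views["audience"]])

dispatch_views(
    avatars=["avatar_x", "avatar_y"],
    render_view=lambda a, viewpoint: f"<{viewpoint} view of {a}>",
    send_to=lambda endpoint, frames: print(endpoint, frames),
)
```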
Fig. 8 is a block diagram of a rendering device, according to an example embodiment. Referring to fig. 8, the apparatus includes a first receiving module 801, a first generating module 802, a first rendering module 803, and a first transmitting module 804.
The first receiving module 801 is configured to receive rendering data respectively transmitted by at least one client.
The first generation module 802 is configured to generate respective corresponding avatars based on rendering data respectively corresponding to at least one client.
The first rendering module 803 is configured to obtain rendering pictures respectively corresponding to the at least one client based on at least one avatar corresponding to the at least one client.
The first sending module 804 is configured to send rendered frames corresponding to at least one client, respectively.
In this embodiment, any one of the at least one client may send its corresponding rendering data to the server without sending it to the other clients, so that the server can render rendering pictures respectively corresponding to the at least one client and send them to the corresponding clients. Having the server receive the clients' rendering data avoids the performance bottleneck caused by sending data between clients, improves rendering efficiency, and, by using the server's computing power for unified rendering, also improves the rendering effect.
In some embodiments, the rendering apparatus may further include a second generation module configured to generate sound information corresponding to the at least one client, respectively, based on the at least one rendering data corresponding to the at least one client.
The first sending module 804 may be specifically configured to send the rendered frame and the sound information corresponding to the at least one client.
In some embodiments, the first generating module 802 may be specifically configured to correct the action information by using the audio information corresponding to the at least one client respectively, to obtain action information matched with the audio information; and generating respective corresponding virtual images based on the action information matched with the audio information and respectively corresponding to the at least one client.
In some embodiments, the first generating module 802 may be specifically configured to correct the action information by using the time stamp of the audio information corresponding to each of the at least one client to obtain the action information matched with the audio information; and generating respective corresponding virtual images based on the action information matched with the audio information and respectively corresponding to the at least one client.
In some embodiments, the first generating module 802 may be specifically configured to determine at least one action model corresponding to the audio information respectively corresponding to the at least one client based on a preset correspondence between the audio information and the action model; and generating respective corresponding virtual images based on at least one action model respectively corresponding to the at least one client.
In some embodiments, the apparatus may further include a second rendering module configured to obtain a rendering picture corresponding to the viewer end based on at least one avatar corresponding to the at least one live end;
the system also comprises a second sending module, a first receiving module and a second sending module, wherein the second sending module is configured to send a rendering picture corresponding to the audience terminal so that the audience terminal can display the rendering picture in a live broadcast watching interface;
The first sending module 804 may be specifically configured to send the rendered frames corresponding to the at least one live terminal and the rendered frames corresponding to the viewer terminal to the at least one live terminal, so that the at least one live terminal displays the rendered frames corresponding to the at least one live terminal and the rendered frames corresponding to the viewer terminal in the live interface.
In some embodiments, the apparatus may further include a second receiving module configured to receive the first interaction data and the second interaction data sent by the viewer end;
the first determining module is configured to determine first animation data corresponding to the first interaction data based on a corresponding relation between the preset first interaction data and the first animation data, and determine second animation data corresponding to the second interaction data based on a corresponding relation between the preset second interaction data and the second animation data;
The first generation module may be specifically configured to generate respective corresponding avatars based on rendering data and first animation data respectively corresponding to at least one live end;
The second rendering module may be specifically configured to obtain a rendered screen corresponding to the viewer end based on at least one avatar corresponding to the at least one live end and second animation data;
the first rendering module may be specifically configured to obtain rendered pictures respectively corresponding to at least one live end based on at least one avatar and second animation data corresponding to at least one live end.
In some embodiments, the apparatus may further include a second determining module configured to determine first animation data corresponding to the first interaction data based on a preset correspondence between first interaction data and first animation data;
the second rendering module may be specifically configured to obtain a first rendered screen corresponding to the viewer end based on at least one avatar corresponding to the at least one live end;
The second sending module may be specifically configured to send the first rendered frame and the second interactive data corresponding to the viewer end, so that the viewer end determines the second animation data corresponding to the second interactive data based on the corresponding relation between the preset second interactive data and the second animation data, obtains the corresponding second rendered frame based on the corresponding first rendered frame and the second animation data, and displays the corresponding second rendered frame in the live broadcast viewing interface;
the first rendering module may be specifically configured to obtain, based on at least one avatar corresponding to at least one live end, first rendering pictures respectively corresponding to at least one live end;
The first sending module may be specifically configured to send the first rendering pictures respectively corresponding to the at least one live end, the second interaction data, and the first rendering picture corresponding to the viewer end to the at least one live end, so that the at least one live end determines the second animation data corresponding to the second interaction data based on its preset correspondence between second interaction data and second animation data, obtains its corresponding second rendering picture based on its corresponding first rendering picture and the second animation data, and displays its own second rendering picture and the first rendering picture corresponding to the viewer end in the live interface.
In some embodiments, the apparatus may further include a storage module configured to store rendering pictures respectively corresponding to the at least one client;
The third receiving module is configured to receive modification requests for the rendering pictures respectively sent by at least one client;
The modification module is configured to modify at least one rendering picture based on the modification request, and send the modified rendering picture corresponding to at least one client so that the at least one client can display the modified rendering picture corresponding to the at least one client in a preset time.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the method embodiments and will not be elaborated here.
Fig. 9 is a block diagram of a rendering apparatus according to another exemplary embodiment. Referring to fig. 9, the apparatus includes an acquisition module 901, a third transmission module 902, a fourth reception module 903, and a presentation module 904.
The acquisition module 901 is configured to acquire rendering data corresponding to a target object.
The third sending module 902 is configured to send rendering data to the server, so that the server generates respective corresponding avatars based on rendering data corresponding to at least one client, obtains rendering frames corresponding to at least one client based on at least one avatar corresponding to at least one client, and sends the rendering frames corresponding to at least one client.
The fourth receiving module 903 is configured to receive a corresponding rendered frame sent by the server.
The display module 904 is configured to display the corresponding rendered screen.
In some embodiments, the rendering apparatus may further include a third rendering module configured to obtain a preliminary rendered screen based on the rendering data.
In some embodiments, the fourth receiving module 903 may be specifically configured to receive a corresponding first rendered frame and at least one second interactive data sent by the server, where the first rendered frame is obtained by the server based on at least one avatar corresponding to at least one client, the avatar is generated by the server based on rendering data and first animation data respectively corresponding to at least one client, the first animation data is determined by the server based on at least one interactive data and a preset correspondence between the first interactive data and the first animation data, and the at least one first interactive data and the second interactive data are sent by the viewer to the server;
The method can further comprise a third determining module, which is configured to determine second animation data corresponding to at least one second interaction data based on the corresponding relation between the preset second interaction data and the second animation data;
A fourth rendering module configured to obtain a corresponding second rendering picture based on the corresponding first rendering picture and the second animation data;
the presentation module 904 may be specifically configured to present the corresponding second rendered screen.
Fig. 10 is a block diagram of a server according to an exemplary embodiment, including a processor 1002 and a memory 1001 for storing instructions executable by the processor;
Wherein the processor 1002 is configured to execute the instructions to implement the rendering method of any of the embodiments of fig. 1-6.
In an exemplary embodiment, there is also provided a computer-readable storage medium including instructions that, when executed by a processor of the above server, enable the server to perform the rendering method of any of the embodiments of fig. 1-6. Optionally, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided, comprising computer instructions which, when executed by the above-mentioned processor, implement the rendering method of any of the embodiments of fig. 1-6.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (25)

1. A rendering method, applied to a server, comprising:
receiving rendering data respectively sent by at least one client;
generating respective corresponding virtual images based on rendering data respectively corresponding to the at least one client;
Based on at least one virtual image corresponding to the at least one client, obtaining rendering pictures corresponding to the at least one client respectively;
transmitting rendering pictures respectively corresponding to the at least one client;
the client comprises a live broadcast end, and the method further comprises the following steps:
Based on at least one virtual image corresponding to the at least one live broadcast end, a rendering picture corresponding to the audience end is obtained; the at least one live broadcast terminal comprises a plurality of live broadcast terminals corresponding to the same live broadcast scene;
transmitting the rendering picture corresponding to the audience terminal so that the audience terminal can display the rendering picture in a live broadcast viewing interface;
the sending the rendering frames respectively corresponding to the at least one client includes:
and transmitting the rendering pictures corresponding to the live broadcasting terminals and the rendering pictures corresponding to the audience terminals to the live broadcasting terminals so that the live broadcasting terminals can display the rendering pictures corresponding to the live broadcasting terminals and the rendering pictures corresponding to the audience terminals in a live broadcasting interface.
2. The rendering method of claim 1, wherein the rendering data includes audio information;
the method further comprises the steps of:
generating sound information corresponding to the at least one client based on at least one rendering data corresponding to the at least one client;
the sending the rendering frames respectively corresponding to the at least one client includes:
And transmitting the rendering picture and the sound information respectively corresponding to the at least one client side so that the at least one client side synchronously outputs the rendering picture and the sound information.
3. The rendering method of claim 2, wherein the rendering data further includes action information;
the generating the respective corresponding avatar based on the rendering data respectively corresponding to the at least one client includes:
correcting the action information by utilizing the audio information respectively corresponding to the at least one client to obtain action information matched with the audio information;
And generating respective corresponding virtual images based on the action information which is respectively corresponding to the at least one client and is matched with the audio information.
4. A rendering method according to claim 3, wherein the audio information comprises a time stamp;
The correcting the action information by using the audio information respectively corresponding to the at least one client, and obtaining the action information matched with the audio information includes:
And correcting the action information by utilizing the time stamps of the audio information respectively corresponding to the at least one client to obtain the action information matched with the audio information.
5. The rendering method of claim 2, wherein the generating respective avatars based on the respective rendering data of the at least one client comprises:
Determining at least one action model corresponding to the audio information respectively corresponding to the at least one client according to the corresponding relation between the preset audio information and the action model;
and generating respective corresponding avatars based on the at least one action model respectively corresponding to the at least one client.
6. The rendering method of claim 1, wherein the method further comprises:
receiving first interaction data and second interaction data sent by a spectator terminal;
Determining first animation data corresponding to first interaction data based on a corresponding relation between preset first interaction data and first animation data, and determining second animation data corresponding to second interaction data based on a corresponding relation between preset second interaction data and second animation data;
the generating the respective corresponding avatar based on the rendering data respectively corresponding to the at least one client includes:
generating respective corresponding virtual images based on rendering data and the first animation data respectively corresponding to the at least one live broadcast end;
the obtaining a rendering picture corresponding to the viewer end based on the at least one avatar corresponding to the at least one live end includes:
based on at least one virtual image corresponding to the at least one live broadcast end and the second animation data, a rendering picture corresponding to the audience end is obtained;
the obtaining, based on the at least one avatar corresponding to the at least one client, rendering pictures corresponding to the at least one client respectively includes:
And obtaining rendering pictures corresponding to the at least one live broadcast end respectively based on the at least one virtual image corresponding to the at least one live broadcast end and the second animation data.
7. The rendering method of claim 1, wherein the method further comprises:
receiving first interaction data and second interaction data sent by a spectator terminal;
determining first animation data corresponding to first interaction data based on a corresponding relation between the preset first interaction data and the first animation data;
the generating the respective corresponding avatar based on the rendering data respectively corresponding to the at least one client includes:
generating respective corresponding virtual images based on rendering data and the first animation data respectively corresponding to the at least one live broadcast end;
the obtaining a rendering picture corresponding to the viewer end based on the at least one avatar corresponding to the at least one live end includes:
Based on at least one virtual image corresponding to the at least one live broadcast end, a first rendering picture corresponding to the audience end is obtained;
The sending the rendering picture corresponding to the audience terminal, so that the audience terminal can display the rendering picture in a live broadcast viewing interface comprises the following steps:
Transmitting the first rendering picture and the second interactive data corresponding to the audience terminal, so that the audience terminal determines second animation data corresponding to the second interactive data based on the corresponding relation between the preset second interactive data and the second animation data, obtains a corresponding second rendering picture based on the corresponding first rendering picture and the second animation data, and displays the corresponding second rendering picture in a live broadcast watching interface;
the obtaining, based on the at least one avatar corresponding to the at least one client, rendering pictures corresponding to the at least one client respectively includes:
based on at least one virtual image corresponding to the at least one live broadcast end, obtaining first rendering pictures corresponding to the at least one live broadcast end respectively;
The step of sending the rendering pictures respectively corresponding to the at least one live broadcast end and the rendering pictures corresponding to the audience end to the at least one live broadcast end so that the at least one live broadcast end displays the rendering pictures respectively corresponding to the live broadcast end and the rendering pictures corresponding to the audience end in a live broadcast interface comprises the following steps:
and sending the first rendering pictures respectively corresponding to the at least one live end, the second interaction data, and the first rendering picture corresponding to the audience end to the at least one live end, so that the at least one live end determines the second animation data corresponding to the second interaction data based on its preset correspondence between second interaction data and second animation data, obtains its corresponding second rendering picture based on its corresponding first rendering picture and the second animation data, and displays its corresponding second rendering picture and the first rendering picture corresponding to the audience end in a live interface.
8. The rendering method of claim 1, wherein the client comprises a recording end;
the method further comprises the steps of:
Storing rendering pictures respectively corresponding to the at least one client;
receiving modification requests which are respectively sent by the at least one client and are aimed at the rendering pictures corresponding to the at least one client;
And modifying the at least one rendering picture based on the modification request, and sending the modified rendering picture corresponding to the at least one client so that the at least one client can display the modified rendering picture corresponding to the at least one client in a preset time.
9. A rendering method, applied to a client, comprising:
Collecting rendering data corresponding to a target object;
The rendering data is sent to a server, so that the server generates respective corresponding virtual images based on rendering data corresponding to at least one client respectively, obtains rendering pictures corresponding to the at least one client respectively based on at least one virtual image corresponding to the at least one client, and sends the rendering pictures corresponding to the at least one client respectively to the at least one client;
receiving a corresponding rendering picture sent by the server;
displaying the corresponding rendering picture;
The client comprises live broadcast ends, and at least one live broadcast end comprises a plurality of live broadcast ends corresponding to the same live broadcast scene;
The rendering data is sent to a server, specifically, the server generates respective corresponding virtual images based on the rendering data respectively corresponding to the live broadcasting ends, obtains rendering pictures respectively corresponding to the live broadcasting ends and rendering pictures corresponding to the audience, sends the rendering pictures respectively corresponding to the live broadcasting ends and the rendering pictures corresponding to the audience to the live broadcasting ends, enables the live broadcasting ends to display the respective rendering pictures and the rendering pictures corresponding to the audience in a live broadcasting interface, and sends the rendering pictures corresponding to the audience to display the rendering pictures in a live broadcasting watching interface.
10. The rendering method of claim 9, wherein before sending the rendering data to a server, the method further comprises:
Based on the rendering data, a preliminary rendering picture is obtained.
11. The rendering method according to claim 9, wherein the receiving the corresponding rendered frame sent by the server includes:
receiving a corresponding first rendering picture and at least one second interaction data sent by the server, wherein the first rendering picture is obtained by the server based on at least one virtual image corresponding to the at least one client, the virtual image is generated by the server based on the rendering data and the first animation data respectively corresponding to the at least one client, the first animation data is determined by the server based on at least one interaction data and a corresponding relation between preset first interaction data and first animation data, and the at least one first interaction data and the second interaction data are sent to the server by a spectator;
the method further comprises the steps of:
Determining second animation data corresponding to the at least one second interaction data based on a corresponding relation between the preset second interaction data and the second animation data;
obtaining a corresponding second rendering picture based on the corresponding first rendering picture and the second animation data;
the rendering picture corresponding to the display comprises:
and displaying the corresponding second rendering picture.
12. A rendering apparatus, comprising:
the first receiving module is configured to receive rendering data respectively sent by at least one client;
A first generation module configured to generate respective corresponding avatars based on rendering data respectively corresponding to the at least one client;
a first rendering module configured to obtain rendering pictures respectively corresponding to the at least one client based on at least one avatar corresponding to the at least one client;
the first sending module is configured to send rendering pictures corresponding to the at least one client respectively to the at least one client;
the client comprises a live broadcast end; the apparatus further comprises:
A second rendering module configured to obtain a rendering screen corresponding to the viewer end based on at least one avatar corresponding to the at least one live end; the at least one live broadcast terminal comprises a plurality of live broadcast terminals corresponding to the same live broadcast scene;
the second sending module is configured to send the rendering picture corresponding to the audience terminal so that the audience terminal can display the rendering picture in a live broadcast viewing interface;
The first sending module is specifically configured to send the rendering pictures corresponding to the at least one live broadcast terminal and the rendering pictures corresponding to the audience terminal to the at least one live broadcast terminal, so that the at least one live broadcast terminal displays the rendering pictures corresponding to the live broadcast terminal and the rendering pictures corresponding to the audience terminal in a live broadcast interface.
13. The apparatus of claim 12, wherein the rendering data comprises audio information; the apparatus further comprises:
a second generation module configured to generate sound information corresponding to the at least one client, respectively, based on at least one rendering data corresponding to the at least one client;
the first sending module is specifically configured to send the rendering frames and the sound information corresponding to the at least one client side respectively to the at least one client side, so that the at least one client side synchronously outputs the rendering frames and the sound information.
14. The apparatus of claim 13, wherein the rendering data further comprises action information; the first generation module is specifically configured to correct the action information by using the audio information respectively corresponding to the at least one client to obtain action information matched with the audio information; and generating respective corresponding virtual images based on the action information which is respectively corresponding to the at least one client and is matched with the audio information.
15. The apparatus of claim 14, wherein the audio information comprises a timestamp; the first generation module is specifically configured to correct the action information by using the time stamps of the audio information corresponding to the at least one client respectively, so as to obtain action information matched with the audio information; and generating respective corresponding virtual images based on the action information which is respectively corresponding to the at least one client and is matched with the audio information.
16. The apparatus according to claim 13, wherein the first generating module is specifically configured to determine at least one action model corresponding to the audio information respectively corresponding to the at least one client according to a preset correspondence between the audio information and the action model; and generating respective corresponding avatars based on the at least one action model respectively corresponding to the at least one client.
17. The apparatus of claim 12, wherein the apparatus further comprises:
The second receiving module is configured to receive the first interaction data and the second interaction data sent by the audience terminal;
A first determining module configured to determine first animation data corresponding to first interaction data based on a corresponding relation between the first interaction data and the first animation data, and determine second animation data corresponding to second interaction data based on a corresponding relation between the second interaction data and the second animation data;
The first generation module is specifically configured to generate respective corresponding virtual images based on rendering data and the first animation data respectively corresponding to the at least one live broadcast end;
The second rendering module is specifically configured to obtain a rendering picture corresponding to the audience terminal based on at least one virtual image corresponding to the at least one live terminal and the second animation data;
The first rendering module is specifically configured to obtain rendering pictures corresponding to the at least one live broadcast end respectively based on the at least one virtual image corresponding to the at least one live broadcast end and the second animation data.
18. The apparatus of claim 12, wherein the apparatus further comprises:
The second receiving module is configured to receive the first interaction data and the second interaction data sent by the audience terminal;
the second determining module is configured to determine first animation data corresponding to the first interaction data based on a corresponding relation between preset first interaction data and the first animation data;
The first generation module is specifically configured to generate respective corresponding virtual images based on rendering data and the first animation data respectively corresponding to the at least one live broadcast end;
The second rendering module is specifically configured to obtain a first rendering picture corresponding to the audience terminal based on at least one avatar corresponding to the at least one live broadcast terminal;
The second sending module is specifically configured to send the first rendering frame and the second interaction data corresponding to the audience terminal, so that the audience terminal determines second animation data corresponding to the second interaction data based on a corresponding relation between the preset second interaction data and the second animation data, obtains the corresponding second rendering frame based on the corresponding first rendering frame and the second animation data, and displays the corresponding second rendering frame in a live broadcast watching interface;
The first rendering module is specifically configured to obtain first rendering pictures corresponding to the at least one live broadcast end respectively based on at least one virtual image corresponding to the at least one live broadcast end;
The first sending module is specifically configured to send the first rendering pictures respectively corresponding to the at least one live broadcast end, the second interaction data, and the first rendering picture corresponding to the audience terminal to the at least one live broadcast end, so that the at least one live broadcast end determines the second animation data corresponding to the second interaction data based on its preset correspondence between second interaction data and second animation data, obtains its corresponding second rendering picture based on its corresponding first rendering picture and the second animation data, and displays its corresponding second rendering picture and the first rendering picture corresponding to the audience terminal in a live broadcast interface.
19. The apparatus of claim 12, wherein the client comprises a recording end; the apparatus further comprises:
a storage module configured to store rendering pictures respectively corresponding to the at least one client;
The third receiving module is configured to receive modification requests for the rendering frames corresponding to the at least one client respectively;
The modification module is configured to modify the at least one rendering picture based on the modification request, and send the modified rendering picture corresponding to the at least one client, so that the at least one client can display the respective modified rendering picture in a preset time.
20. A rendering apparatus, comprising:
The acquisition module is configured to acquire rendering data corresponding to the target object;
A third sending module, configured to send the rendering data to a server, so that the server generates respective corresponding avatars based on rendering data corresponding to at least one client, obtains rendering frames corresponding to the at least one client based on at least one avatar corresponding to the at least one client, and sends the rendering frames corresponding to the at least one client;
the fourth receiving module is configured to receive the corresponding rendering picture sent by the server;
the display module is configured to display the corresponding rendering picture;
The client comprises live broadcast ends, and at least one live broadcast end comprises a plurality of live broadcast ends corresponding to the same live broadcast scene;
The third sending module is specifically configured to send the rendering data to a server, so that the server generates respective corresponding virtual images based on rendering data corresponding to the live broadcast ends, obtains rendering pictures corresponding to the live broadcast ends and rendering pictures corresponding to a viewer end based on the virtual images corresponding to the live broadcast ends, sends the rendering pictures corresponding to the live broadcast ends and the rendering pictures corresponding to the viewer end to the live broadcast ends, so that the live broadcast ends display the rendering pictures corresponding to the live broadcast ends and the rendering pictures corresponding to the viewer end in a live broadcast interface, and sends the rendering pictures corresponding to the viewer end so that the viewer end displays the rendering pictures in a live broadcast viewing interface.
21. The apparatus of claim 20, wherein the apparatus further comprises:
and a third rendering module configured to obtain a preliminary rendering screen based on the rendering data.
22. The apparatus of claim 20, wherein the fourth receiving module is specifically configured to receive a corresponding first rendering picture and at least one piece of second interaction data sent by the server, wherein the first rendering picture is obtained by the server based on the at least one virtual image corresponding to the at least one client, the virtual images are generated by the server based on the rendering data and first animation data respectively corresponding to the at least one client, the first animation data is determined by the server based on at least one piece of first interaction data and a preset correspondence between first interaction data and first animation data, and the at least one piece of first interaction data and the second interaction data are sent to the server by a viewer end;
the apparatus further comprises:
a third determining module configured to determine second animation data corresponding to the at least one piece of second interaction data based on a preset correspondence between second interaction data and second animation data; and
a fourth rendering module configured to obtain a corresponding second rendering picture based on the corresponding first rendering picture and the second animation data;
wherein the display module is specifically configured to display the corresponding second rendering picture.
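Claim 22 thus splits the work in two: the server bakes the first animation data (driven by first interaction data) into the first rendering picture, while the client maps the second interaction data to second animation data and overlays it locally. A hedged Python sketch of the client half follows, assuming interaction data are event strings and animation data are named clips; the claim fixes neither representation, so both are assumptions.

# Hypothetical preset correspondence between second interaction data
# and second animation data.
SECOND_ANIMATION_TABLE = {
    "like": "heart_burst",
    "gift": "confetti",
}


def determine_second_animation(interaction_events: list) -> list:
    # Third determining module: look up each piece of second interaction
    # data in the preset correspondence table.
    return [SECOND_ANIMATION_TABLE[e] for e in interaction_events
            if e in SECOND_ANIMATION_TABLE]


def render_second_picture(first_picture: dict, animations: list) -> dict:
    # Fourth rendering module: overlay the clips on the first rendering
    # picture received from the server.
    second = dict(first_picture)
    second["overlays"] = second.get("overlays", []) + animations
    return second


# Example: viewer-end "like" and "gift" events arriving with frame 42.
first = {"frame_id": 42, "avatars": ["anchor_a", "anchor_b"]}
print(render_second_picture(first, determine_second_animation(["like", "gift"])))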
23. A server, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the rendering method of any one of claims 1 to 11.
24. A computer readable storage medium, characterized in that instructions in the computer readable storage medium, when executed by a processor of a server, enable the server to perform the rendering method of any one of claims 1 to 11.
25. A computer program product comprising computer instructions which, when executed by a processor, implement the rendering method of any one of claims 1 to 11.
CN202111600010.5A 2021-12-24 2021-12-24 Rendering method, rendering device, server, storage medium, and program product Active CN114286021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111600010.5A CN114286021B (en) 2021-12-24 2021-12-24 Rendering method, rendering device, server, storage medium, and program product

Publications (2)

Publication Number Publication Date
CN114286021A (en) 2022-04-05
CN114286021B (en) 2024-05-28

Family

ID=80875079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111600010.5A Active CN114286021B (en) 2021-12-24 2021-12-24 Rendering method, rendering device, server, storage medium, and program product

Country Status (1)

Country Link
CN (1) CN114286021B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115174954A (en) * 2022-08-03 2022-10-11 抖音视界有限公司 Video live broadcast method and device, electronic equipment and storage medium
CN116668796B (en) * 2023-07-03 2024-01-23 佛山市炫新智能科技有限公司 Interactive artificial live broadcast information management system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106254899A (en) * 2016-08-16 2016-12-21 网宿科技股份有限公司 Control method and system for live co-streaming (Lianmai)
CN109660818A (en) * 2018-12-30 2019-04-19 广东彼雍德云教育科技有限公司 Virtual interactive live broadcast system
CN110557625A (en) * 2019-09-17 2019-12-10 北京达佳互联信息技术有限公司 Live virtual image broadcasting method, terminal, computer equipment and storage medium
CN110971930A (en) * 2019-12-19 2020-04-07 广州酷狗计算机科技有限公司 Live virtual image broadcasting method, device, terminal and storage medium
CN112135156A (en) * 2020-09-16 2020-12-25 广州华多网络科技有限公司 Live broadcast method, education live broadcast method, system, equipment and storage medium
CN112235585A (en) * 2020-08-31 2021-01-15 江苏视博云信息技术有限公司 Live broadcast method, device and system of virtual scene
CN112529992A (en) * 2019-08-30 2021-03-19 阿里巴巴集团控股有限公司 Dialogue processing method, device, equipment and storage medium of virtual image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant