CN114283232A - Picture display method and device, computer equipment and storage medium

Picture display method and device, computer equipment and storage medium

Info

Publication number: CN114283232A
Application number: CN202111402674.0A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 王骁玮
Assignee (current and original): Beijing Zitiao Network Technology Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Prior art keywords: information, dimensional scene, character, target
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202111402674.0A
Publication of CN114283232A

Abstract

The present disclosure provides a method, an apparatus, a computer device, and a storage medium for displaying a real-time picture of a three-dimensional scene space, where the three-dimensional scene space contains a virtual character driven by control information generated from captured behavior data of a virtual character control object. The method includes: acquiring target text information to be presented in the three-dimensional scene space; determining presentation attribute information of the target text information in the three-dimensional scene space, the presentation attribute information including three-dimensional position information; generating updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target text information and the current three-dimensional scene data of the three-dimensional scene space; and rendering the updated three-dimensional scene data through a three-dimensional rendering engine to obtain a real-time rendering picture containing the target text information.

Description

Picture display method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for displaying a picture, a computer device, and a storage medium.
Background
In scenarios such as virtual live broadcasting, a real person controls a virtual character to perform actions, and the resulting picture is played on the user side. In some cases, auxiliary elements such as text need to be added to the played picture to explain the current action or speech of the virtual character. If the text is simply added on top of the display picture, it blends poorly into the scene as a whole.
Disclosure of Invention
The embodiments of the present disclosure provide at least a picture display method and apparatus, a computer device, and a computer storage medium.
In a first aspect, an embodiment of the present disclosure provides a picture display method for displaying a real-time picture of a three-dimensional scene space, where the three-dimensional scene space contains a virtual character driven by control information, and the control information is generated based on captured behavior data of a virtual character control object. The method includes: acquiring target text information to be presented in the three-dimensional scene space; determining presentation attribute information of the target text information in the three-dimensional scene space, the presentation attribute information including three-dimensional position information; generating updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target text information and the current three-dimensional scene data of the three-dimensional scene space; and rendering the updated three-dimensional scene data through a three-dimensional rendering engine to obtain a real-time rendering picture containing the target text information.
In an optional implementation manner, after the real-time rendering picture is obtained, the method further includes: converting the real-time rendering picture into live video data; and transmitting the live video data to at least one user side, so that a live picture corresponding to the real-time rendering picture is displayed at the user side.
In an optional embodiment, the target text information includes at least one of the following: lyric information corresponding to a singing performance performed by the virtual character; bullet screen information input by live viewers; and text information obtained by performing semantic recognition on audio data of the virtual character control object.
In an optional implementation manner, the presentation attribute information of the target text information further includes: font type, font color, font size, animation display form.
In an optional implementation manner, determining the presentation attribute information of the target text information in the three-dimensional scene space includes: acquiring the presentation attribute information indicated in the target text information; and/or acquiring the presentation attribute information set in the three-dimensional rendering engine.
In an alternative embodiment, the behavior data includes motion data of the virtual character control object captured by a motion capture device; the three-dimensional position information of the target text information is associated with three-dimensional position information of a target part of the virtual character, and the three-dimensional position information of the target part changes under the driving of control information generated based on the motion data; or the three-dimensional position information of the target text information is associated with three-dimensional position information of a target virtual object in the three-dimensional scene space, where the display state of the target virtual object in the three-dimensional scene space changes dynamically.
In an optional implementation manner, the target text information is lyrics, the lyrics are displayed around a target part of the virtual character, and the decorative features of the target part are less important than the decorative features of other parts of the virtual character.
In an optional embodiment, generating the updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target text information and the current three-dimensional scene data of the three-dimensional scene space includes: generating scene special effect data based on the presentation attribute information of the target text information; and generating the updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target text information, the scene special effect data, and the current three-dimensional scene data of the three-dimensional scene space.
In an optional embodiment, when the target text information includes bullet screen information input by a live viewer, generating the updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target text information and the current three-dimensional scene data of the three-dimensional scene space includes: determining bullet screen special effect information corresponding to the bullet screen information based on keywords in the bullet screen information indicated in the presentation attribute information, where the bullet screen special effect information includes information indicating a dynamic display mode of the bullet screen information; and generating the updated three-dimensional scene data of the three-dimensional scene space based on the bullet screen special effect information and the current three-dimensional scene data of the three-dimensional scene space.
In an optional implementation manner, determining the bullet screen special effect information corresponding to the bullet screen information based on the keywords in the bullet screen information indicated in the presentation attribute information includes: extracting at least one keyword from the bullet screen information; when the extracted keywords include a preset action keyword, determining a dynamic display mode of the bullet screen information matched with the action keyword; and when the extracted keywords include virtual object description information, determining image information and state information of a virtual object matched with the virtual object description information.
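As a rough sketch of this keyword-to-effect matching (in Python; the already-extracted keywords are passed in, since the disclosure does not specify a concrete extractor, and the two lookup tables are illustrative stand-ins for the preset action keywords and the virtual object description library):

    ACTION_EFFECTS = {"jump": "bounce across the scene",
                      "fly": "fly past the virtual character"}
    OBJECT_LIBRARY = {"kitten": {"image": "kitten_model", "state": "idle"}}

    def bullet_effects(keywords):
        """Match extracted keywords to bullet screen special effect information."""
        effect = {}
        for kw in keywords:
            if kw in ACTION_EFFECTS:    # preset action keyword -> dynamic display mode
                effect["dynamic_display_mode"] = ACTION_EFFECTS[kw]
            if kw in OBJECT_LIBRARY:    # description -> image and state information
                effect["virtual_object"] = OBJECT_LIBRARY[kw]
        return effect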
In an optional implementation manner, when the bullet screen special effect information includes the virtual object information, the method further includes: updating the state information of the virtual object in response to control information for the virtual object in bullet screen information input by live viewers.
In an optional implementation manner, determining the presentation attribute information of the target text information in the three-dimensional scene space includes: determining the three-dimensional position information of the target text information in the three-dimensional scene space according to current shooting parameter information of a virtual camera, where the three-dimensional position information of the target text information lies within the shooting range of the virtual camera.
In an optional implementation manner, determining the presentation attribute information of the target text information in the three-dimensional scene space includes: determining display color information of the target text information in the three-dimensional scene space according to scene color information of the three-dimensional scene space, where the display color information of the target text information differs from the scene color information.
In an optional implementation manner, acquiring the target text information to be presented in the three-dimensional scene space includes: acquiring the target text information sent by a text sender to a text receiver. Determining the presentation attribute information of the target text information in the three-dimensional scene space includes at least one of the following: determining first three-dimensional position information of the target text information in the three-dimensional scene space for the text sender according to first shooting parameter information of a virtual camera corresponding to the text sender; and determining second three-dimensional position information of the target text information in the three-dimensional scene space for the text receiver according to second shooting parameter information of a virtual camera corresponding to the text receiver.
In a second aspect, an embodiment of the present disclosure further provides a picture display apparatus, including: an acquisition module, configured to acquire target text information to be presented in the three-dimensional scene space; a determining module, configured to determine presentation attribute information of the target text information in the three-dimensional scene space, the presentation attribute information including three-dimensional position information; a generation module, configured to generate updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target text information and current three-dimensional scene data of the three-dimensional scene space; and a processing module, configured to render the updated three-dimensional scene data through a three-dimensional rendering engine to obtain a real-time rendering picture containing the target text information.
In an optional embodiment, after the real-time rendering picture is obtained, the processing module is further configured to: convert the real-time rendering picture into live video data; and transmit the live video data to at least one user side, so that a live picture corresponding to the real-time rendering picture is displayed at the user side.
In an optional embodiment, the target text information includes at least one of the following: lyric information corresponding to a singing performance performed by the virtual character; bullet screen information input by live viewers; and text information obtained by performing semantic recognition on audio data of the virtual character control object.
In an optional implementation manner, the presentation attribute information of the target text information further includes: font type, font color, font size, animation display form.
In an optional implementation manner, when determining the presentation attribute information of the target text information in the three-dimensional scene space, the determining module is configured to: acquire the presentation attribute information indicated in the target text information; and/or acquire the presentation attribute information set in the three-dimensional rendering engine.
In an alternative embodiment, the behavior data includes motion data of the virtual character control object captured by a motion capture device; the three-dimensional position information of the target text information is associated with three-dimensional position information of a target part of the virtual character, and the three-dimensional position information of the target part changes under the driving of control information generated based on the motion data; or the three-dimensional position information of the target text information is associated with three-dimensional position information of a target virtual object in the three-dimensional scene space, where the display state of the target virtual object in the three-dimensional scene space changes dynamically.
In an optional implementation manner, the target text information is lyrics, the lyrics are displayed around a target part of the virtual character, and the decorative features of the target part are less important than the decorative features of other parts of the virtual character.
In an optional embodiment, when generating the updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target text information and the current three-dimensional scene data of the three-dimensional scene space, the generation module is configured to: generate scene special effect data based on the presentation attribute information of the target text information; and generate the updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target text information, the scene special effect data, and the current three-dimensional scene data of the three-dimensional scene space.
In an optional implementation manner, when the target text information includes bullet screen information input by a live viewer, the generation module, when generating the updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target text information and the current three-dimensional scene data of the three-dimensional scene space, is configured to: determine bullet screen special effect information corresponding to the bullet screen information based on keywords in the bullet screen information indicated in the presentation attribute information, where the bullet screen special effect information includes information indicating a dynamic display mode of the bullet screen information; and generate the updated three-dimensional scene data of the three-dimensional scene space based on the bullet screen special effect information and the current three-dimensional scene data of the three-dimensional scene space.
In an optional implementation manner, when determining the bullet screen special effect information corresponding to the bullet screen information based on the keywords in the bullet screen information indicated in the presentation attribute information, the generation module is configured to: extract at least one keyword from the bullet screen information; when the extracted keywords include a preset action keyword, determine a dynamic display mode of the bullet screen information matched with the action keyword; and when the extracted keywords include virtual object description information, determine image information and state information of a virtual object matched with the virtual object description information.
In an optional implementation manner, when the bullet screen special effect information includes the virtual object information, the generation module is further configured to: update the state information of the virtual object in response to control information for the virtual object in bullet screen information input by live viewers.
In an optional implementation manner, when determining the presentation attribute information of the target text information in the three-dimensional scene space, the determining module is configured to: determine the three-dimensional position information of the target text information in the three-dimensional scene space according to current shooting parameter information of a virtual camera, where the three-dimensional position information of the target text information lies within the shooting range of the virtual camera.
In an optional implementation manner, when determining the presentation attribute information of the target text information in the three-dimensional scene space, the determining module is configured to: determine display color information of the target text information in the three-dimensional scene space according to scene color information of the three-dimensional scene space, where the display color information of the target text information differs from the scene color information.
In an optional embodiment, when acquiring the target text information to be presented in the three-dimensional scene space, the acquisition module is configured to: acquire the target text information sent by a text sender to a text receiver. When determining the presentation attribute information of the target text information in the three-dimensional scene space, the determining module is configured to perform at least one of the following: determining first three-dimensional position information of the target text information in the three-dimensional scene space for the text sender according to first shooting parameter information of a virtual camera corresponding to the text sender; and determining second three-dimensional position information of the target text information in the three-dimensional scene space for the text receiver according to second shooting parameter information of a virtual camera corresponding to the text receiver.
In a third aspect, the present disclosure further provides a computer device, including a processor and a memory, where the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the processor performs the steps in the first aspect or in any possible implementation of the first aspect.
In a fourth aspect, the present disclosure further provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when run, performs the steps in the first aspect or in any possible implementation of the first aspect.
For a description of the effects of the above picture display apparatus, computer device, and computer-readable storage medium, reference is made to the description of the picture display method above, which is not repeated here.
The picture display method provided by the embodiments of the present disclosure can display real-time pictures of a three-dimensional scene space. Because live broadcasting happens in real time, existing schemes generally cannot post-process the live content; at most, some preset content is added at preset positions of the live picture. The embodiments of the present disclosure can blend target text in real time into a three-dimensional scene space whose content is updated in real time. Specifically, presentation attribute information, including the three-dimensional position information of the target text in the three-dimensional scene space, is determined for the target text information to be presented, and updated three-dimensional scene data of the three-dimensional scene space is generated from it. As a result, the real-time rendering picture obtained by rendering the updated three-dimensional scene data through the three-dimensional rendering engine shows target text information fully blended into the three-dimensional scene space; the display of text information is thus more flexible, and the text can be deployed and presented anywhere in the three-dimensional scene space according to actual scene requirements.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings here are incorporated into and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. The following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a flowchart of a picture display method according to an embodiment of the disclosure;
fig. 2 is a schematic diagram illustrating the display of target text information according to an embodiment of the disclosure;
fig. 3 is a schematic diagram of a specific application scenario provided by an embodiment of the present disclosure;
fig. 4 is another schematic diagram illustrating the display of target text information according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a picture display apparatus according to an embodiment of the disclosure;
fig. 6 is a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only some of the embodiments of the present disclosure, not all of them. The components of the embodiments of the present disclosure, as generally described and illustrated here, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the disclosure as claimed, but merely represents selected embodiments of the disclosure. All other embodiments obtained by those skilled in the art from the embodiments of the disclosure without creative effort shall fall within the protection scope of the disclosure.
In a scene where a real person controls a virtual character to perform behaviors and a picture of the virtual character performing those behaviors is displayed, various kinds of text may appear, such as a sentence spoken by the virtual character, the lyrics while singing, or interactive messages (e.g., bullet screen messages) sent by viewers. If such text is directly added at preset positions of the playing picture, for example, a sentence spoken by the virtual character displayed in the middle position below the picture and messages sent by watching users displayed within a preset area above it, the text blends poorly into the overall scene and the display mode is monotonous.
Based on the above research, the embodiments of the present disclosure provide a picture display method that can be used to display a real-time picture of a three-dimensional scene space, where the three-dimensional scene space contains a virtual character driven by control information. After target text information to be presented in the three-dimensional scene space is acquired, its presentation attribute information in the three-dimensional scene space is determined, including the three-dimensional position information of the target text in the three-dimensional scene space, and updated three-dimensional scene data of the three-dimensional scene space is generated from it. The real-time rendering picture obtained by rendering the updated three-dimensional scene data through the three-dimensional rendering engine therefore shows, in real time, target text information fully blended into the three-dimensional scene space; the display of text information is more flexible and can be deployed and presented anywhere in the three-dimensional scene space according to actual scene requirements.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, the picture display method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the picture display method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability. In some possible implementations, the picture display method may be implemented by a processor calling computer-readable instructions stored in a memory.
The picture display method provided by the embodiments of the present disclosure is described in detail below.
As shown in fig. 1, which is a flowchart of the picture display method provided in an embodiment of the present disclosure, the method is applied to scenarios such as live broadcasting and mainly includes the following steps S101 to S104:
S101: acquiring target text information to be presented in the three-dimensional scene space;
S102: determining presentation attribute information of the target text information in the three-dimensional scene space, the presentation attribute information including three-dimensional position information;
S103: generating updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target text information and the current three-dimensional scene data of the three-dimensional scene space;
S104: rendering the updated three-dimensional scene data through a three-dimensional rendering engine to obtain a real-time rendering picture containing the target text information.
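Before each step is discussed, the following minimal sketch (in Python) shows how S101 to S104 chain together in one update. Every name in it (PresentationAttributes, Scene, show_text, and the injected render callable) is a hypothetical stand-in for the patent's abstract steps, not an actual engine API.

    from dataclasses import dataclass, field

    @dataclass
    class PresentationAttributes:
        position: tuple              # three-dimensional position information (x, y, z)
        font: str = "SimSun"
        color: str = "#FFFFFF"
        size: int = 24
        animation: str = "none"      # e.g. "fly", "jump", "zoom-in"

    @dataclass
    class Scene:
        entries: list = field(default_factory=list)  # current three-dimensional scene data

    def show_text(scene: Scene, text: str, render) -> None:
        # S101: `text` is the target text information acquired by the caller.
        # S102: determine its presentation attribute information, including a 3D position.
        attrs = PresentationAttributes(position=(0.0, 1.6, 0.5))
        # S103: generate updated three-dimensional scene data.
        scene.entries.append((text, attrs))
        # S104: hand the updated scene to the three-dimensional rendering engine.
        render(scene)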
First, an application scenario of the picture display method provided in the embodiments of the present disclosure is explained.
In specific implementations, application scenarios of the picture display method provided by the embodiments of the present disclosure include, but are not limited to, live broadcast scenarios, such as game live streaming. A possible application scenario is described first.
In live broadcast scenarios and the like, the three-dimensional scene space is, for example, a pre-built three-dimensional virtual space that contains a virtual character driven by control information. The control information is generated based on captured behavior data of the virtual character control object. Here, the virtual character control object is a real person in a real scene. The behavior data may include motion data, audio data (i.e., the utterances produced by the virtual character control object), and the like.
When the behavior data includes motion data of the virtual character control object captured by a motion capture device, the motion capture device captures the motion of the virtual character control object and generates corresponding motion data. This motion data can be converted into control information, such as control signals, that drives the virtual character to exhibit the same or similar motion as the virtual character control object.
Here, the motion capture device may be a sensor device that senses the motion of parts of the body, such as motion capture gloves, a motion capture helmet (for capturing facial expressions), and sound capture devices (such as a microphone that captures mouth sounds or a throat microphone that captures vocal activity).
When the behavior data includes audio data, an audio acquisition device may be used to capture the sound emitted by the virtual character control object, and the resulting audio signal may be converted into control information for controlling the virtual character.
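As a rough illustration of how captured behavior data might become control information, the sketch below maps joint rotations onto a pose and an audio loudness estimate onto a mouth blend shape. The field names and the loudness heuristic are assumptions made for illustration only; the disclosure does not prescribe a specific mapping.

    def behavior_to_control(behavior: dict) -> dict:
        """Map captured behavior data to control information (illustrative only)."""
        control = {}
        if "joints" in behavior:
            # motion data path: retarget each captured joint rotation onto the
            # virtual character's skeleton so it mirrors the control object
            control["pose"] = dict(behavior["joints"])
        if "audio" in behavior:
            # audio data path: drive a mouth blend shape from signal loudness;
            # behavior["audio"] is assumed to be a list of sample values
            samples = behavior["audio"]
            loudness = sum(abs(s) for s in samples) / max(len(samples), 1)
            control["mouth_open"] = min(1.0, loudness * 4.0)
        return control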
For example, in a live broadcast scenario, the virtual character control object may perform singing, dancing, and other performance actions, such as emulating a talk-show style program; this may be determined according to the actual situation and is not limited here. Accordingly, the virtual character displayed in the three-dimensional scene space performs the action corresponding to the virtual character control object, such as singing or interview interaction. In addition, the virtual character control object can make its virtual character interact and communicate with virtual characters controlled by other virtual character control objects.
Next, the steps shown in fig. 1 are described in detail.
For S101 above, the target text information includes at least one of the following: lyric information corresponding to a singing performance performed by the virtual character; bullet screen information input by live viewers; and text information obtained by performing semantic recognition on audio data of the virtual character control object.
When the target text information includes lyric information corresponding to a singing performance performed by the virtual character, predetermined lyric information may be acquired according to the song currently being performed. For example, the track for the singing performance may be predetermined for the virtual character, or determined in real time in response to the virtual character control object selecting a track or performing lyrics, and the lyric information is obtained according to the determined track. During lyric display, the displayed lyrics can be updated synchronously with the progress of the song being performed.
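A minimal sketch of this synchronous lyric update, assuming the lyric information is available as (start time, line) pairs for the determined track:

    def current_lyric(lyrics, song_time):
        """Return the lyric line for the current playback position.

        `lyrics` is a list of (start_time_seconds, line) pairs sorted by time,
        as might be loaded from a timed-lyric file for the determined track.
        """
        shown = ""
        for start, line in lyrics:
            if start <= song_time:
                shown = line   # latest line whose start time has passed
            else:
                break
        return shown

    # current_lyric([(0.0, "line 1"), (12.5, "line 2")], 13.0) -> "line 2"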
When the target text information includes bullet screen information input by live viewers, the bullet screen information input by a live viewer may be received as the target text information, where the bullet screen information may include, for example, symbols and/or text (in any of a plurality of languages). In addition, in one possible case, identification information of the live viewer who sent the bullet screen, and the like, may be used as target text information together with the bullet screen information; in another possible case, image information such as the live viewer's avatar may be displayed correspondingly as part of the target text information.
When the target text information includes text information obtained by performing semantic recognition on the audio data of the virtual character control object, Automatic Speech Recognition (ASR) may be used to semantically recognize the audio data of the virtual character control object, convert it into text information, and use that text information as the target text information.
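A minimal sketch of this step, with the ASR backend injected as a callable since the disclosure does not name a specific recognition engine:

    from typing import Callable

    def audio_to_target_text(audio_frames: bytes,
                             recognize: Callable[[bytes], str]) -> str:
        """Convert captured speech into target text information.

        `recognize` is whatever ASR backend the implementation actually uses;
        it is injected rather than named because no specific engine is specified.
        """
        return recognize(audio_frames).strip()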
For S102 above, once the target text information is determined, its presentation attribute information in the three-dimensional scene space must also be determined so that the target text information can be rendered and displayed in the three-dimensional scene space.
The presentation attribute information corresponding to the target text information may include at least one of the following items (a), (B), and (C), for example:
(A): Font type, font color, font size, and animation display form.
The presentation attributes of the target text information are the attribute characteristics with which its text content, for example specific lyric content, is presented.
The font types included in the presentation attribute information may include basic font types, such as bold, regular, and thin, or custom font types uploaded by live viewers.
The font color included in the presentation attribute information may be a solid color, such as black or white; a multicolor scheme, such as giving different colors to different characters displayed at the same time; or a gradient color determined per character. This makes the displayed target text information more attractive and more artistic.
For the font size included in the presentation attribute information, different font sizes may be determined for different text content: for example, a larger font size for target text information associated with the virtual character or the virtual character control object, a medium font size for bullet screen content that appears frequently in the bullet screen information, and a smaller font size for the user identifiers of live viewers. Different font sizes thus distinguish different types of target text information, and giving more important target text information a larger font size highlights the key points. Different font sizes can also guide live viewers or users to pay more attention to the virtual character or to the bullet screen information.
The animation display form included in the presentation attribute information may include, for example, at least one of: rotating around a character, flying or jumping through the three-dimensional scene space, zooming in, or zooming out. As with the font size above, different animation display forms may be determined for different target text information; different types of animation display forms diversify the display of the target text information.
For example, fig. 2 is a schematic diagram of displaying target text information provided by an embodiment of the present disclosure, in which only the target text information is shown. The text content is "running to you", the font type is SimSun, the font color and font size of each character differ, and the corresponding animation display forms are zoom-in and flight through the three-dimensional scene space. Fig. 2 (a) shows the target text information displayed at one moment, and fig. 2 (b) shows it after a preset animation time has elapsed. In this example, through the font color and the animation display form, the displayed target text information also expresses the meaning of the text content "running to you", imitating movement toward a position, so that the target text information can flexibly and vividly express the action indicated by the actual text content.
Combining and displaying different target text information with the different types of presentation attribute information above therefore makes the target text information determined from the presentation attribute information more diverse and better looking.
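As an illustration of an animation display form like the one in fig. 2, the sketch below staggers a per-character "fly and zoom-in" animation; all coordinates, delays, and scale values are made-up illustrative numbers:

    def animate_characters(text, t, start=(-1.0, 1.0, 0.0), end=(1.0, 1.5, 0.0)):
        """Per-character 'fly and zoom-in' animation in the spirit of fig. 2.

        `t` runs from 0.0 to 1.0 over the preset animation time; each character
        starts slightly later than the previous one, so the line appears to run
        toward its target position.
        """
        frames = []
        for i, ch in enumerate(text):
            ti = max(0.0, min(1.0, t - 0.08 * i))          # staggered start
            pos = tuple(s + (e - s) * ti for s, e in zip(start, end))
            scale = 0.5 + 0.5 * ti                         # zoom from 50% to 100%
            frames.append((ch, pos, scale))
        return frames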
Another embodiment of the present disclosure further provides specific manners of determining the presentation attribute information of the target text information in the three-dimensional scene space, which may include, but are not limited to, the following two manners (A1) and (A2):
(A1): Acquiring the presentation attribute information indicated in the target text information.
In this case, the target text information may include both text content and presentation attribute information indicating how that text content is displayed, so that the text content can be displayed according to the presentation attributes indicated by the target text information.
For example, when the target text information includes bullet screen information, if the presentation attribute information indicated for the bullet screen content "this really good look" includes "blue and jump", it can be determined that the presentation attribute information of that text content includes: font color "blue" and animation display form "jump".
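A minimal sketch of manner (A1), reading the indicated attributes out of a bullet screen message; the field names are assumptions, since the disclosure does not fix a message format:

    def parse_indicated_attributes(bullet: dict) -> dict:
        """Read presentation attributes carried inside a bullet screen message."""
        attrs = {"color": "white", "animation": "none"}    # engine-side defaults
        attrs.update({k: v for k, v in bullet.items()
                      if k in ("color", "animation", "font", "size")})
        return attrs

    # parse_indicated_attributes({"text": "this really good look",
    #                             "color": "blue", "animation": "jump"})
    # -> {"color": "blue", "animation": "jump"}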
(A2): Acquiring the presentation attribute information set in the three-dimensional rendering engine.
In this case, the three-dimensional rendering engine may, for example, preset corresponding presentation attribute information for different target text information. In the live broadcast scenario described above, there are many kinds of target text information, such as lyric information, caption information when the virtual character speaks, and bullet screen information; moreover, since there are many live viewers, there is also a large amount of bullet screen information. Setting corresponding presentation attribute information for each type of target text information also helps distinguish the different target text information.
(B): Three-dimensional position information.
In specific implementations, the three-dimensional position information included in the presentation attribute information, i.e., the three-dimensional position of the target text information in the three-dimensional scene space, may be determined, for example, but not limited to, in the following manners:
(B1): The three-dimensional position information of the target text information is associated with the three-dimensional position information of a target part of the virtual character.
In this case, the target text information is, for example, lyrics displayed around a target part of the virtual character, where the decorative features of the target part are less important than the decorative features of other parts of the virtual character.
Here, when the target text information is the lyrics, the lyrics can be automatically and synchronously read and displayed according to the playing progress of the current song.
For example, the importance of the different parts of the virtual character can be predetermined. A virtual character giving a song and dance performance wears costume pieces on its body; to prevent important costume parts from being blocked, target text information is not displayed near those parts. Conversely, if parts such as the neck and wrist carry no or few decorations, so that additionally displaying target text information there does not affect the original look, the importance of these parts can be set lower.
In one example, the target part of the virtual character may include the wrist. When the virtual character performs a singing action, the determined target text information containing the lyrics can be displayed wrapped around the wrist.
Alternatively, the target part may be specified based on the motion of the virtual character. In one possible case, control information for controlling the virtual character's action is generated based on the motion data of the virtual character control object captured by the motion capture device, and this drives the virtual character's action. The three-dimensional position information of the target part of the virtual character therefore changes under the driving of the control information generated from the motion data.
Here, driving the virtual character's action may change the overall position of the virtual character or a local position of it; for example, the virtual character may move as a whole, or only its head, limbs, and so on may move.
In this case, since the three-dimensional position information of the target text information is associated with that of the target part, when the three-dimensional position of the target part changes, the display position of the target text information displayed relative to the target part changes with it, producing the effect of the target text moving as the target part moves.
For example, when the virtual character's extended palm slides across the front of its body, the palm can be used as the target part and the target text information moves with the sliding palm, so the virtual character appears to conjure the text from nothing, as if by magic. The display mode can be selected according to the actual situation and is not limited here.
In this way, on the one hand, compared with parts carrying specific decorative features, the target text information can correspondingly enrich the look of the target part; on the other hand, as the virtual character dances, the target part moves and the position of the target text information changes with it, so the target text information is displayed more flexibly, and the dynamic display draws attention to the target part.
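A minimal sketch of manner (B1): re-evaluating the text position from the target part's position every frame makes the text follow the part. The offset value is an illustrative assumption:

    def text_anchor(part_position, offset=(0.0, 0.08, 0.0)):
        """Anchor the text to a target part of the virtual character.

        Re-evaluated every frame: as the control information moves the part
        (e.g. the wrist or palm), the text's three-dimensional position moves
        with it; the offset keeps the text hovering just beside the part.
        """
        return tuple(p + o for p, o in zip(part_position, offset))

    # each frame: lyric_position = text_anchor(wrist_world_position)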
(B2): The three-dimensional position information of the target text information is associated with the three-dimensional position information of a target virtual object in the three-dimensional scene space, where the display state of the target virtual object in the three-dimensional scene space changes dynamically.
In this case, this can be applied, for example, to the following scenario: the target virtual object includes several wishing slips on a wishing tree in the three-dimensional scene space, and the wishing slips simulate being blown about by wind, so their positions move within the three-dimensional scene space. Fig. 3 is a schematic diagram of this specific application scenario provided by an embodiment of the present disclosure. Fig. 3 (a) shows two wishing slips on the wishing tree, wishing slip A and wishing slip B, with a virtual character standing next to the tree. Different target text information, such as wish text, can be displayed on the two wishing slips. To simulate the real-scene experience of reading the wish text on a slip, the three-dimensional position information of the target text information can be determined from the three-dimensional position information of the corresponding wishing slip, so that the target text information is displayed attached to the slip. In this example, fig. 3 (b) shows how the display of the target text information changes correspondingly after the simulated wind changes the position of wishing slip B.
Associating the three-dimensional position information of the target text information with that of the target virtual object therefore makes the display position of the target text information easier to determine, and the follow-along display achieves a better display effect.
In addition, in the wishing tree example above, the shape of a wishing slip may change; for example, to simulate the effect of wind, the rectangular wishing slip B in fig. 3 (a) may bend. In this case, the corresponding target text information can be deformed to conform to the surface of the wishing slip, so that it looks more realistic when displayed on the slip.
(B3): Determining the three-dimensional position information of the target text information in the three-dimensional scene space according to the current shooting parameter information of the virtual camera, where the three-dimensional position information of the target text information lies within the shooting range of the virtual camera.
Here, a virtual camera can be understood as a camera for shooting the virtual scene; setting or adjusting the shooting parameter information of the virtual camera adjusts the presentation range, presentation angle, and so on of the virtual scene.
For example, the shooting parameter information may include at least one of: the distance of the virtual camera from the virtual object, its position information in the three-dimensional scene, the angle of view when shooting the virtual object, and the shooting height. Because the virtual camera determines the viewing angle at which the virtual object is displayed when it is shot and the corresponding three-dimensional scene data is rendered, a suitable three-dimensional position for the target text information can be selected within the virtual camera's shooting range based on the camera's current shooting parameter information. When the target text information is rendered, it then sits visually at a comfortable viewing position, and the defect of some characters being left out beyond the virtual camera's shooting range is avoided. In addition, because the current shooting parameter information used when shooting the virtual character is consulted when determining the three-dimensional position for the target text information, a stable relative position relationship is kept between the displayed target text information and the virtual character, so the finally rendered real-time picture containing the target text information tends to be consistent in presentation and looks more harmonious.
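A minimal sketch of the check behind manner (B3), testing a candidate text position against the virtual camera's shooting range; it assumes a unit-length forward vector and, for brevity, tests only a single angular field of view:

    import math

    def inside_shooting_range(point, cam_pos, cam_forward, fov_deg=60.0):
        """Test a candidate text position against the camera's shooting range.

        `cam_forward` is assumed to be a unit vector; a real engine would also
        test the vertical field of view and the near/far planes.
        """
        to_point = tuple(p - c for p, c in zip(point, cam_pos))
        norm = math.sqrt(sum(v * v for v in to_point)) or 1.0
        cos_angle = sum((v / norm) * f for v, f in zip(to_point, cam_forward))
        return cos_angle >= math.cos(math.radians(fov_deg / 2.0))

    # candidate positions failing this test would leave characters clipped
    # outside the rendered picture, so they are rejected.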
(B4): Determining first three-dimensional position information of the target text information in the three-dimensional scene space for the text sender according to first shooting parameter information of the virtual camera corresponding to the text sender; and/or determining second three-dimensional position information of the target text information in the three-dimensional scene space for the text receiver according to second shooting parameter information of the virtual camera corresponding to the text receiver.
Here, acquiring the target text information to be presented in the three-dimensional scene space includes: acquiring the target text information sent by a text sender to a text receiver. This embodiment suits scenarios in which several virtual characters exchange text under the control of different virtual character control objects (i.e., users); for example, two users far apart can converse in the same conference scene through the virtual characters they each control, and the resulting conversation content between the virtual characters serves as the target text information in that scene. Specifically, for two virtual characters, when one speaks and the other listens, the speaking virtual character corresponds to the text sender and sends the target text information, while the listening virtual character corresponds to the text receiver and receives the target text information sent by the text sender.
Different virtual characters correspond to different virtual cameras, and because the heights, current postures, and so on of different virtual characters can differ, the shooting parameter information of their corresponding virtual cameras also differs. For example, if two virtual characters stand facing each other across the same plane, the shooting poses of their virtual cameras mirror each other, that is, the shooting parameter information of the two different virtual cameras differs.
In this way, when the presentation attribute information of the target text information in the three-dimensional scene space is determined, the first three-dimensional position information of the target text information for the text sender can be determined, for example, using the first shooting parameter information of the virtual camera corresponding to the text sender, so that the target text information is presented to the text sender at a normal reading angle. Correspondingly, the second three-dimensional position information of the target text information for the text receiver can be determined in a similar manner.
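A minimal sketch of manner (B4), deriving one position per party from that party's own camera parameters; the camera dict layout and the fixed placement distance are assumptions for illustration:

    def positions_for_parties(sender_cam, receiver_cam, distance=1.5):
        """Derive the first and second three-dimensional positions of the same
        target text from each party's own camera parameters.

        Each camera dict is assumed to hold a world-space "position" and a unit
        "forward" vector; the text is placed a fixed distance ahead of each
        camera, at that camera's own shooting height, so each party reads it
        head-on at a normal reading angle.
        """
        def place(cam):
            return tuple(p + f * distance
                         for p, f in zip(cam["position"], cam["forward"]))
        return place(sender_cam), place(receiver_cam)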
Fig. 4 provides another schematic diagram of displaying the target text information in this scenario. In fig. 4 (a), virtual character A and virtual character B stand facing the same plane, where virtual character A is taller than virtual character B. The plane L is a virtual plane used only to indicate that virtual character A and virtual character B face it; the side of the virtual plane L facing virtual character A is denoted the L-A side, and the side facing virtual character B the L-B side. For convenience of description, the two virtual characters are at the same distance from the virtual plane L.
In fig. 4 (b), when virtual character A faces the plane from the L-A side, a virtual camera shooting from behind the virtual character captures the L-A side of the plane. If the first three-dimensional position of the target text information is determined to be at the center of the plane at the same height as the virtual character, the display of the target text information shown in fig. 4 (b) results. In this example, the text content of the target text information is "yes". Since virtual character A stands at a distance from the plane on the L-A side, under the perspective projection relationship shown in fig. 4 (a) the target text "yes" also appears at a distance from the top of virtual character A's head.
Similarly, fig. 4 (c) shows virtual character B facing the plane from the L-B side. For the same target text information, because virtual character B is shorter, the shooting height in the second shooting parameter information of virtual character B's virtual camera is lower, and the height value in the determined second three-dimensional position of the target text information is correspondingly lower, so the target text information displayed on the L-B side of the plane is also lower. Under the perspective projection relationship, the distance between the target text "yes" and the top of virtual character B's head is the same as the distance between the target text and the top of virtual character A's head.
On the other hand, since the L-A side and the L-B side face in two different directions, the virtual plane L shows the target text information at different display positions for different virtual characters. As the example in fig. 4 also shows, determining the display position of the target text information per virtual character in this way makes the target text information more readable, because the position fits each virtual character's own viewing angle.
In addition, in the above embodiment, the shooting parameters of the virtual camera may be determined according to the captured line-of-sight direction of its corresponding virtual character control object, and may change to follow changes in that line-of-sight direction.
When the shooting parameters of the virtual camera are determined according to the line-of-sight direction of the virtual character control object, the shooting direction of the virtual camera can be kept consistent with that line-of-sight direction. The shooting range of the virtual camera may match the visual field of the corresponding virtual character control object (in which case the virtual character corresponding to the control object does not appear in the shot picture), or the shooting range may be enlarged so that the virtual character corresponding to the control object falls within it (in which case that virtual character is displayed in the shot picture).
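A minimal sketch of such gaze-following camera control follows; the `VirtualCamera` fields and the specific field-of-view values are illustrative assumptions rather than values taken from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    yaw_deg: float = 0.0
    pitch_deg: float = 0.0
    fov_deg: float = 60.0

def follow_gaze(cam: VirtualCamera, gaze_yaw_deg: float, gaze_pitch_deg: float,
                keep_avatar_in_shot: bool = False) -> None:
    """Keep the shooting direction consistent with the captured line-of-sight
    direction of the control object; optionally widen the shooting range so
    that the controlled virtual character itself stays inside the picture."""
    cam.yaw_deg = gaze_yaw_deg
    cam.pitch_deg = gaze_pitch_deg
    cam.fov_deg = 75.0 if keep_avatar_in_shot else 60.0

cam = VirtualCamera()
follow_gaze(cam, gaze_yaw_deg=30.0, gaze_pitch_deg=-5.0, keep_avatar_in_shot=True)
print(cam)   # VirtualCamera(yaw_deg=30.0, pitch_deg=-5.0, fov_deg=75.0)
```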
In the above embodiment, regarding the shot picture presented to the character sender: if the virtual character corresponding to the character sender is within the shooting range of the virtual camera corresponding to the character sender, the target character information sent by the character sender can be presented at the preset relative position of that virtual character. If it is not within the shooting range, the target character information can instead be presented in the character sender's shot picture at a position close to the screen edge lying in the direction of the virtual character corresponding to the character receiver.
When the shooting direction of the virtual camera matches the line-of-sight direction of the corresponding virtual character control object, a change in that line-of-sight direction also changes the shooting direction. In this case, regarding the shot picture presented to the character receiver: if the virtual character corresponding to the character sender is within the shooting range of the virtual camera corresponding to the character receiver, the target character information sent by the character sender may be presented at the preset relative position of the character sender's virtual character. If the character receiver's line of sight changes so that the character sender's virtual character is no longer within the shooting range of the virtual camera whose shooting direction follows that line of sight, the target character information can be presented at an edge position of the picture shot by the virtual camera, namely the edge position where the character sender's virtual character was located just before it left the shot picture.
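To make the in-shot versus screen-edge fallback concrete, the following sketch (an assumed simplification: horizontal angles only, normalized screen coordinates, a hypothetical `screen_position_for_text` helper) maps the bearing from the receiver's camera to the sender's virtual character onto the screen, clamping to the edge once the character leaves the shooting range:

```python
def screen_position_for_text(cam_yaw_deg: float, fov_deg: float,
                             char_bearing_deg: float):
    """Map the bearing from the camera to the sender's virtual character
    onto a normalized horizontal screen coordinate in [0, 1]. When the
    character leaves the shooting range, the value is clamped to the
    nearest screen edge, which is where the text is then presented."""
    half_fov = fov_deg / 2.0
    # signed angle between shooting direction and character, in (-180, 180]
    delta = (char_bearing_deg - cam_yaw_deg + 180.0) % 360.0 - 180.0
    x = 0.5 + delta / fov_deg          # 0.5 = screen centre
    in_shot = abs(delta) <= half_fov
    return max(0.0, min(1.0, x)), in_shot

x, visible = screen_position_for_text(cam_yaw_deg=0.0, fov_deg=60.0,
                                      char_bearing_deg=50.0)
print(x, visible)   # -> 1.0 False: text pinned to the right screen edge
```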
(C): Presentation color information.
In a specific implementation, the presentation color information of the target character information in the three-dimensional scene space may be determined, for example, in the following manner: determining the presentation color information of the target character information in the three-dimensional scene space according to the scene color information of the three-dimensional scene space, where the presentation color information of the target character information differs from the scene color information.
The presentation color of the target character information may differ from the scene color in the color itself, or in the specific hue and saturation used.
For example, when the scene color information reflects a single dominant color in the three-dimensional scene space, say the space appears dark blue, white presentation color information may be set for the target character information. In this way, the target character information stands out visually in the three-dimensional scene space, avoiding the situation where the presentation color of the target character information is so close to the scene color that the information becomes difficult to distinguish and read.
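One simple way such a contrasting color could be chosen is sketched below; the relative-luminance heuristic (Rec. 709 weights) is an assumed stand-in for whatever rule a concrete implementation adopts:

```python
def presentation_color(scene_rgb):
    """Pick white or black text so that the presentation color differs
    clearly from the dominant scene color of the three-dimensional scene."""
    r, g, b = scene_rgb
    luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec. 709 weights
    return (255, 255, 255) if luminance < 128 else (0, 0, 0)

print(presentation_color((10, 20, 90)))   # dark blue scene -> white text
```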
In addition, in one possible case, unlike the font color in the presentation attribute information described above, the presentation color information of the target character information is set for the target character information as a whole rather than for individual characters; its purpose is to distinguish the target character information from the three-dimensional scene space so that it remains readable, whereas the font color is set mainly to give the displayed target character information a more artistic effect. The presentation attribute information of the target character information may therefore include both the font color and the presentation color information without conflict.
For the above step S103, once the presentation attribute information of the target character information has been determined, the updated three-dimensional scene data of the three-dimensional scene space may be generated using that presentation attribute information together with the current three-dimensional scene data of the three-dimensional scene space.
In a specific implementation, for example, the following may be used: generating scene special effect data based on the presentation attribute information of the target character information; and generating updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target character information, the scene special effect data and the current three-dimensional scene data of the three-dimensional scene space.
Specifically, once the presentation attribute information of the target character information has been determined, the corresponding scene special effect data can be determined for the target character information from that attribute information. The three-dimensional scene space has its own current three-dimensional scene data, so the scene special effect data corresponding to the target character information can be combined into that scene data, for example by superimposed rendering or another suitable method, to produce the updated three-dimensional scene data. That is, the updated three-dimensional scene data carries both the scene characteristics displayed in the three-dimensional scene space and the character characteristics of the target character information.
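As a possible illustration of this combining step (the dict-based scene layout and the names `update_scene_data`, `nodes`, and `text3d` are assumptions for the sketch, not a real engine's scene-graph API), new text and effect entries can be merged into a copy of the current scene data:

```python
import copy

def update_scene_data(scene_data: dict, text: str, attrs: dict,
                      effects: list) -> dict:
    """Combine the target character information (with its presentation
    attributes) and any generated scene special effect entries into a copy
    of the current scene data, yielding the updated three-dimensional scene
    data handed to the renderer."""
    updated = copy.deepcopy(scene_data)        # keep the current frame intact
    updated["nodes"].append({"type": "text3d", "content": text, **attrs})
    updated["nodes"].extend(effects)
    return updated

scene = {"nodes": [{"type": "avatar", "id": "A"}]}
updated = update_scene_data(
    scene, "yes", {"position": (0.0, 1.8, 2.0), "color": (255, 255, 255)},
    effects=[{"type": "spotlight", "target": "text3d"}])
print(len(updated["nodes"]))  # 3: avatar, text, and a follow-spot effect
```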
In addition, when generating the scene special effect data, a stage lighting special effect, a sound special effect, or the like may be added. The stage lighting special effect may include, for example, a follow spot on the target character information, and the sound special effect may include, for example, simulated audience applause or cheering. Compared with visual display alone, this combination of audio and visual elements is more varied and can provide a better viewing experience.
Another embodiment of the present disclosure further provides a specific manner of generating the updated three-dimensional scene data of the three-dimensional scene space when the target character information includes bullet screen information input by a live viewer. Specifically, the following manner may be adopted: determining bullet screen special effect information corresponding to the bullet screen information based on the keywords in the bullet screen information indicated in the presentation attribute information, where the bullet screen special effect information includes information indicating a dynamic display mode for the bullet screen information; and generating the updated three-dimensional scene data of the three-dimensional scene space based on the bullet screen special effect information and the current three-dimensional scene data of the three-dimensional scene space.
The bullet screen special effect information includes information indicating a dynamic display mode for the bullet screen information, which may include at least one of: the bullet screen information flying into view, landing from the air, or bouncing as it is displayed.
In a specific implementation, the presentation attribute information may include, for example, the character content of the bullet screen information. In one possible case, the corresponding bullet screen special effect information may be determined from the bullet screen information, for example, in the following manner: extracting at least one keyword from the bullet screen information; when the extracted keywords include a preset action keyword, determining the dynamic display mode of the bullet screen information matched with that action keyword; and when the extracted keywords include virtual object description information, determining the image information and state information of the virtual object matched with that description information.
The keywords extracted from the bullet screen information may be predetermined objects, such as "cat", "dog", "chicken leg", and "throw pillow"; or presentation manners such as "deliver", "run", and "fly". In one possible case, the extracted keywords may also include other words semantically similar to the preset keywords. The specifics can be determined according to the actual situation and are not limited here.
In a specific implementation, for different keywords, the two cases shown in (D1) and (D2) below may apply, though the possibilities are not limited to these:
(D1): When the extracted at least one keyword includes a preset action keyword, determining the dynamic display mode of the bullet screen information matched with that action keyword.
For example, the preset action keywords may include "deliver", "run", "fly", and "drop". If the action keywords extracted from the bullet screen information include "drop", it may be determined that the bullet screen information is dynamically displayed as falling from the air.
(D2): When the extracted at least one keyword includes virtual object description information, determining the image information and state information of the virtual object matched with that description information.
For example, the virtual objects may include "cat", "dog", "chicken leg", and so on. If the keywords extracted from the bullet screen information include description information corresponding to a chicken leg, such as "fried chicken leg", it may be determined that the image information matched with the virtual object is the appearance of a chicken leg, and the state information indicates that the chicken leg is being fried.
Where applicable, the bullet screen special effect information may also be determined by combining (D1) and (D2) above. For example, the bullet screen special effect information may indicate that the chicken leg is fried after falling to the ground.
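The following sketch illustrates how (D1) and (D2) could be combined; the keyword tables, the naive substring matching, and the names `ACTION_KEYWORDS` and `barrage_effect` are assumptions for illustration only:

```python
ACTION_KEYWORDS = {            # assumed mapping; the disclosure only gives examples
    "drop": "fall_from_air",
    "fly": "fly_across",
    "run": "run_in",
}
VIRTUAL_OBJECTS = {"chicken leg", "cat", "dog"}

def barrage_effect(barrage_text: str) -> dict:
    """Extract keywords from a bullet-screen message and build the
    corresponding special effect description: a dynamic display mode for
    action keywords ((D1) above) plus image/state information for any
    described virtual object ((D2) above)."""
    words = barrage_text.lower()
    effect = {}
    for kw, mode in ACTION_KEYWORDS.items():
        if kw in words:
            effect["display_mode"] = mode
            break
    for obj in VIRTUAL_OBJECTS:
        if obj in words:
            effect["object"] = {"image": obj,
                                "state": "frying" if "fried" in words else "idle"}
            break
    return effect

print(barrage_effect("drop a fried chicken leg"))
# {'display_mode': 'fall_from_air',
#  'object': {'image': 'chicken leg', 'state': 'frying'}}
```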
In addition, in another embodiment of the present disclosure, the state information of the virtual object may be updated in response to control information for the virtual object contained in bullet screen information input by a live viewer.
For example, for bullet screen special effect information indicating that the chicken leg is fried after falling to the ground, a picture of the chicken leg being fried may be displayed continuously in the three-dimensional scene space. If new bullet screen information then appears containing control information for the virtual object, for example an instruction to take the chicken leg out of the fryer, the state information of the chicken leg can be updated in response to that control information, and the chicken leg is taken out of the fryer.
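A minimal sketch of such an interactive state update follows, assuming a deliberately naive substring match and a hypothetical `handle_barrage_control` helper; a real implementation would use whatever intent parsing the system already performs on bullet screen information:

```python
object_state = {"chicken leg": "frying"}   # state set by the earlier effect

def handle_barrage_control(message: str) -> None:
    """Update a virtual object's state when a new bullet-screen message
    carries control information for it (e.g. taking the chicken leg out
    of the fryer)."""
    text = message.lower()
    if "chicken leg" in text and "take" in text and "out" in text:
        object_state["chicken leg"] = "out_of_fryer"

handle_barrage_control("take the chicken leg out of the fryer")
print(object_state)   # {'chicken leg': 'out_of_fryer'}
```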
In this way, the current three-dimensional scene data of the three-dimensional scene space can be updated using the bullet screen special effect information to generate the updated three-dimensional scene data. Because the bullet screen special effects are flexible and varied, and can respond to control information for virtual objects contained in bullet screen information input by live viewers, interactivity with the live audience is stronger.
For the above step S104, once the updated three-dimensional scene data has been determined, it may be rendered by the three-dimensional rendering engine to obtain a real-time rendering picture containing the target character information.
In a specific implementation, for the live broadcast scene described above, live viewers actually watch on their own user sides. Therefore, after the real-time rendering picture is obtained, it may, for example, be converted into live video data, and the live video data may be pushed to at least one user side so that a live picture corresponding to the real-time rendering picture is displayed there.
In such a scene, converting the real-time rendering picture into live video data may mean, for example, converting the data corresponding to the real-time rendering picture into a live video stream format, which is then transmitted to the live viewers for watching.
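One common way (among many) to perform this conversion is to pipe the raw rendered frames into an external encoder; the sketch below assumes the ffmpeg binary is installed and uses a placeholder RTMP endpoint, and is not the disclosure's own mechanism:

```python
import subprocess

WIDTH, HEIGHT, FPS = 1280, 720, 30
RTMP_URL = "rtmp://example.com/live/stream-key"   # placeholder endpoint

# ffmpeg reads raw RGB frames from stdin, encodes them as H.264, and
# pushes an FLV stream to the live server.
ffmpeg = subprocess.Popen(
    ["ffmpeg", "-f", "rawvideo", "-pix_fmt", "rgb24",
     "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS), "-i", "-",
     "-c:v", "libx264", "-preset", "veryfast", "-f", "flv", RTMP_URL],
    stdin=subprocess.PIPE)

def push_frame(rgb_bytes: bytes) -> None:
    """Send one rendered frame (WIDTH*HEIGHT*3 bytes of RGB) downstream."""
    ffmpeg.stdin.write(rgb_bytes)

# e.g. push a black test frame:
push_frame(bytes(WIDTH * HEIGHT * 3))
```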
In addition, a user may also watch the rendered three-dimensional scene picture directly, in which case the obtained real-time rendering picture containing the target character information can be used for display as it is.
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, an embodiment of the present disclosure further provides a picture display apparatus corresponding to the picture display method. Since the principle by which the apparatus solves the problem is similar to that of the picture display method in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Fig. 5 shows a schematic view of a picture display apparatus provided in an embodiment of the present disclosure. The apparatus includes an acquisition module 51, a determination module 52, a generation module 53, and a processing module 54; wherein:
an obtaining module 51, configured to obtain target text information to be presented in the three-dimensional scene space;
a determining module 52, configured to determine corresponding presentation attribute information of the target text information in the three-dimensional scene space; the presentation attribute information comprises three-dimensional position information;
a generating module 53, configured to generate updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target text information and current three-dimensional scene data of the three-dimensional scene space;
and the processing module 54 is configured to render the updated three-dimensional scene data through the three-dimensional rendering engine, so as to obtain a real-time rendering picture including the target text information.
In an optional embodiment, after obtaining the real-time rendering screen, the processing module 54 is further configured to: converting the real-time rendering picture into live video data; and transmitting the live video data to at least one user side so as to display a live picture corresponding to the real-time rendering picture at the user side.
In an optional embodiment, the target text information includes at least one of the following: lyric information corresponding to a singing performance by the virtual character; bullet screen information input by live audiences; and text information obtained by performing semantic recognition on audio data of the virtual character control object.
In an optional implementation manner, the presentation attribute information of the target text information further includes: font type, font color, font size, animation display form.
In an optional implementation manner, when determining the corresponding presentation attribute information of the target text information in the three-dimensional scene space, the determining module 52 is configured to: acquiring the presentation attribute information indicated in the target character information; and/or acquiring the presentation attribute information set in the three-dimensional rendering engine.
In an alternative embodiment, the behavior data includes motion data based on a virtual character control object captured by a motion capture device; the three-dimensional position information of the target character information is associated with three-dimensional position information of a target part of the virtual character, and the three-dimensional position information of the target part of the virtual character is changed under the driving of control information generated based on the motion data; or the three-dimensional position information of the target character information is associated with the three-dimensional position information of a target virtual object in the three-dimensional scene space, and the display state of the target virtual object in the three-dimensional scene space is dynamically changed.
In an optional implementation manner, the target character information is lyrics, the lyrics are displayed around a target portion of the virtual character, and the importance of the anthropomorphic features of the target portion is lower than that of the anthropomorphic features of other portions of the virtual character.
In an optional embodiment, the generating module 53, when generating updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target text information and the current three-dimensional scene data of the three-dimensional scene space, is configured to: generating scene special effect data based on the presentation attribute information of the target character information; and generating updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target character information, the scene special effect data and the current three-dimensional scene data of the three-dimensional scene space.
In an optional implementation manner, in a case that the target text information includes bullet screen information input by a live viewer, the generating module 53, when generating updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target text information and current three-dimensional scene data of the three-dimensional scene space, is configured to: determining bullet screen special effect information corresponding to the bullet screen information based on the keywords in the bullet screen information indicated in the presentation attribute information; the bullet screen special effect information comprises information used for indicating a bullet screen information dynamic display mode; and generating updated three-dimensional scene data of the three-dimensional scene space based on the bullet screen special effect information and the current three-dimensional scene data of the three-dimensional scene space.
In an optional implementation manner, when determining, based on the keyword in the bullet screen information indicated in the attribute information, the bullet screen special effect information corresponding to the bullet screen information, the generating module 53 is configured to: extracting at least one keyword from the bullet screen information; under the condition that the extracted at least one keyword comprises a preset action keyword, determining a bullet screen information dynamic display mode matched with the action keyword; and in the case that the extracted at least one keyword comprises virtual object description information, determining the image information and the state information of the virtual object matched with the virtual object description information.
In an optional implementation manner, in a case that the bullet screen special effect information includes the virtual object information, the generating module 53 is further configured to: and updating the state information of the virtual object in response to control information aiming at the virtual object in the bullet screen information input by the live audience.
In an optional implementation, when determining the corresponding presentation attribute information of the target text information in the three-dimensional scene space, the determining module 52 is configured to: determining three-dimensional position information of the target character information in the three-dimensional scene space according to the current shooting parameter information of the virtual camera; and the three-dimensional position information of the target character information is positioned in the shooting range of the virtual camera.
In an optional implementation, when determining the corresponding presentation attribute information of the target text information in the three-dimensional scene space, the determining module 52 is configured to: determining corresponding display color information of the target character information in the three-dimensional scene space according to the scene color information of the three-dimensional scene space; and the display color information of the target character information is different from the scene color information.
In an optional embodiment, the obtaining module 51, when obtaining the target text information to be presented in the three-dimensional scene space, is configured to: acquiring the target character information sent to a character receiver by a character sender; when determining the corresponding presentation attribute information of the target text information in the three-dimensional scene space, the determining module 52 is configured to at least one of: determining first three-dimensional position information corresponding to the target character information in the three-dimensional scene space for the character sender according to first shooting parameter information of a virtual camera corresponding to the character sender; and determining second three-dimensional position information corresponding to the target character information in the three-dimensional scene space for the character receiver according to second shooting parameter information of the virtual camera corresponding to the character receiver.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
An embodiment of the present disclosure further provides a computer device, as shown in fig. 6, which is a schematic structural diagram of the computer device provided in the embodiment of the present disclosure, and the computer device includes:
a processor 10 and a memory 20, where the memory 20 stores machine-readable instructions executable by the processor 10, and the processor 10 is configured to execute the machine-readable instructions stored in the memory 20; when the machine-readable instructions are executed by the processor 10, the processor 10 performs the following steps:
acquiring target character information to be presented in the three-dimensional scene space; determining corresponding presentation attribute information of the target character information in the three-dimensional scene space; the presentation attribute information comprises three-dimensional position information; generating updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target character information and the current three-dimensional scene data of the three-dimensional scene space; rendering the updated three-dimensional scene data through the three-dimensional rendering engine to obtain a real-time rendering picture containing the target character information.
The memory 20 includes an internal memory 210 and an external memory 220. The internal memory 210 temporarily stores operation data for the processor 10 as well as data exchanged with the external memory 220, such as a hard disk; the processor 10 exchanges data with the external memory 220 through the internal memory 210.
The specific execution process of the instruction may refer to the steps of the picture display method described in the embodiments of the present disclosure, and details are not described here.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the picture display method in the foregoing method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The present disclosure also provides a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the picture displaying method in the foregoing method embodiments, which may be referred to specifically for the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, it is embodied in a software product, such as a software development kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a logical division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through certain communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art can still modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions of some of their technical features within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and should be covered by it. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (17)

1. A picture presentation method for presenting a real-time picture of a three-dimensional scene space including a virtual character driven by control information generated based on behavior data of a captured virtual character control object, the method comprising:
acquiring target character information to be presented in the three-dimensional scene space;
determining corresponding presentation attribute information of the target character information in the three-dimensional scene space; the presentation attribute information comprises three-dimensional position information;
generating updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target character information and the current three-dimensional scene data of the three-dimensional scene space;
rendering the updated three-dimensional scene data through the three-dimensional rendering engine to obtain a real-time rendering picture containing the target character information.
2. The method of claim 1, wherein after obtaining the real-time rendered picture, further comprising:
converting the real-time rendering picture into live video data;
and transmitting the live video data to at least one user side so as to display a live picture corresponding to the real-time rendering picture at the user side.
3. The method of claim 1, wherein the target text information comprises at least one of:
lyric information corresponding to a singing performance by the virtual character;
bullet screen information input by live audiences; and
text information obtained by performing semantic recognition on audio data of the virtual character control object.
4. The method of claim 1, wherein the presentation attribute information of the target text information further comprises: font type, font color, font size, animation display form.
5. The method of claim 1, wherein determining corresponding presentation attribute information of the target text information in the three-dimensional scene space comprises:
acquiring the presentation attribute information indicated in the target character information; and/or,
acquiring the presentation attribute information set in the three-dimensional rendering engine.
6. The method of claim 1, wherein the behavior data comprises motion data based on a virtual character control object captured by a motion capture device; the three-dimensional position information of the target character information is associated with three-dimensional position information of a target part of the virtual character, and the three-dimensional position information of the target part of the virtual character is changed under the driving of control information generated based on the motion data;
or the three-dimensional position information of the target character information is associated with the three-dimensional position information of a target virtual object in the three-dimensional scene space, and the display state of the target virtual object in the three-dimensional scene space is dynamically changed.
7. The method according to claim 6, wherein the target character information is lyrics, the lyrics are displayed around a target portion of the virtual character, and the importance of the anthropomorphic features of the target portion is lower than that of the anthropomorphic features of other portions of the virtual character.
8. The method of claim 1, wherein generating updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target text information and current three-dimensional scene data of the three-dimensional scene space comprises:
generating scene special effect data based on the presentation attribute information of the target character information;
and generating updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target character information, the scene special effect data and the current three-dimensional scene data of the three-dimensional scene space.
9. The method of claim 3, wherein in the case that the target text information includes bullet screen information input by a live viewer, the generating updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target text information and current three-dimensional scene data of the three-dimensional scene space comprises:
determining bullet screen special effect information corresponding to the bullet screen information based on the keywords in the bullet screen information indicated in the presentation attribute information; the bullet screen special effect information comprises information used for indicating a bullet screen information dynamic display mode;
and generating updated three-dimensional scene data of the three-dimensional scene space based on the bullet screen special effect information and the current three-dimensional scene data of the three-dimensional scene space.
10. The method according to claim 9, wherein determining bullet screen special effect information corresponding to the bullet screen information based on the keyword in the bullet screen information indicated in the attribute information comprises:
extracting at least one keyword from the bullet screen information;
under the condition that the extracted at least one keyword comprises a preset action keyword, determining a bullet screen information dynamic display mode matched with the action keyword;
and in the case that the extracted at least one keyword comprises virtual object description information, determining the image information and the state information of the virtual object matched with the virtual object description information.
11. The method according to claim 10, wherein in a case where the bullet screen effect information includes the virtual object information, the method further comprises:
and updating the state information of the virtual object in response to control information aiming at the virtual object in the bullet screen information input by the live audience.
12. The method of claim 1, wherein determining corresponding presentation attribute information of the target text information in the three-dimensional scene space comprises:
determining three-dimensional position information of the target character information in the three-dimensional scene space according to the current shooting parameter information of the virtual camera; and the three-dimensional position information of the target character information is positioned in the shooting range of the virtual camera.
13. The method of claim 1, wherein determining corresponding presentation attribute information of the target text information in the three-dimensional scene space comprises:
determining corresponding display color information of the target character information in the three-dimensional scene space according to the scene color information of the three-dimensional scene space; and the display color information of the target character information is different from the scene color information.
14. The method of claim 1, wherein the obtaining target text information to be presented in the three-dimensional scene space comprises: acquiring the target character information sent to a character receiver by a character sender;
the determining of the corresponding presentation attribute information of the target text information in the three-dimensional scene space includes at least one of the following:
determining first three-dimensional position information corresponding to the target character information in the three-dimensional scene space for the character sender according to first shooting parameter information of a virtual camera corresponding to the character sender;
and determining second three-dimensional position information corresponding to the target character information in the three-dimensional scene space for the character receiver according to second shooting parameter information of the virtual camera corresponding to the character receiver.
15. A picture display apparatus, comprising:
the acquisition module is used for acquiring target character information to be presented in the three-dimensional scene space;
the determining module is used for determining corresponding presentation attribute information of the target character information in the three-dimensional scene space; the presentation attribute information comprises three-dimensional position information;
a generation module, configured to generate updated three-dimensional scene data of the three-dimensional scene space based on the presentation attribute information of the target text information and current three-dimensional scene data of the three-dimensional scene space;
and the processing module is used for rendering the updated three-dimensional scene data through the three-dimensional rendering engine to obtain a real-time rendering picture containing the target character information.
16. A computer device, comprising: a processor and a memory, the memory storing machine-readable instructions executable by the processor; when the machine-readable instructions are executed by the processor, the processor performs the steps of the picture presentation method according to any one of claims 1 to 14.
17. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when executed by a computer device, performs the steps of the screen presentation method according to any one of claims 1 to 14.
CN202111402674.0A 2021-11-19 2021-11-19 Picture display method and device, computer equipment and storage medium Pending CN114283232A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111402674.0A CN114283232A (en) 2021-11-19 2021-11-19 Picture display method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114283232A true CN114283232A (en) 2022-04-05

Family

ID=80869709

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111402674.0A Pending CN114283232A (en) 2021-11-19 2021-11-19 Picture display method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114283232A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110800310A (en) * 2018-12-29 2020-02-14 深圳市大疆创新科技有限公司 Subtitle processing method and director system for sports game video
CN111464827A (en) * 2020-04-20 2020-07-28 玉环智寻信息技术有限公司 Data processing method and device, computing equipment and storage medium
US20210118235A1 (en) * 2019-10-15 2021-04-22 Beijing Sensetime Technology Development Co., Ltd. Method and apparatus for presenting augmented reality data, electronic device and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination