CN116320520A - Model animation rendering method and device, computer storage medium and electronic equipment - Google Patents

Model animation rendering method and device, computer storage medium and electronic equipment

Info

Publication number
CN116320520A
CN116320520A
Authority
CN
China
Prior art keywords: model, target, attribute, value, audio frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310294006.3A
Other languages
Chinese (zh)
Inventor
林哲生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Boguan Information Technology Co Ltd
Original Assignee
Guangzhou Boguan Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Boguan Information Technology Co Ltd filed Critical Guangzhou Boguan Information Technology Co Ltd
Priority to CN202310294006.3A priority Critical patent/CN116320520A/en
Publication of CN116320520A publication Critical patent/CN116320520A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/439 Processing of audio elementary streams
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/485 End-user interface for client configuration
    • H04N 21/4852 End-user interface for client configuration for modifying audio parameters, e.g. switching between mono and stereo
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 Monomedia components thereof
    • H04N 21/8146 Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure belongs to the technical field of model animation rendering, and relates to a model animation rendering method and device, a computer storage medium, and electronic equipment. The method comprises the following steps: obtaining model resource data of a target three-dimensional model, and parsing from the model resource data a target audio frequency band corresponding to a model attribute of the target three-dimensional model; collecting an audio stream corresponding to a target anchor in a live broadcast room, and analyzing the audio stream to obtain model driving values corresponding to different audio frequency bands; and determining, from the model driving values, a target driving value corresponding to the target audio frequency band, and rendering a model animation corresponding to the target three-dimensional model in the live broadcast room based on the target driving value and the model resource data. Because the model animation is rendered based on the target driving value and the model resource data, the rendered animation correlates with the voice of the target anchor, which improves the rendering effect of model animation in the voice live broadcast room and enhances the visual experience of the audience.

Description

Model animation rendering method and device, computer storage medium and electronic equipment
Technical Field
The present disclosure relates to the technical field of model animation rendering, and in particular, to a model animation rendering method, a model animation rendering device, a computer-readable storage medium, and an electronic apparatus.
Background
With the development of live broadcasting technology, voice live broadcasting has become a mainstream form of live broadcast. However, during a voice live broadcast, the visual experience of the viewer is often neglected.
In the related art, a specific voice file is generally used to drive the display of an animation. However, an animation displayed in this way has no correlation with the anchor, and the displayed effect is poor, so the visual experience of the audience in a voice live broadcast room cannot be improved.
In view of this, there is a need in the art to develop a new model animation rendering method and apparatus.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure is directed to a model animation rendering method, a model animation rendering device, a computer-readable storage medium, and an electronic apparatus, so as to overcome, at least to some extent, the problem of the poor visual experience of viewers in a voice live broadcast room caused by the limitations of the related art.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to a first aspect of an embodiment of the present invention, there is provided a model animation rendering method, the method including: obtaining model resource data of a target three-dimensional model, and parsing from the model resource data a target audio frequency band corresponding to a model attribute of the target three-dimensional model; collecting an audio stream corresponding to a target anchor in a live broadcast room, and analyzing the audio stream to obtain model driving values corresponding to different audio frequency bands; and determining, from the model driving values, a target driving value corresponding to the target audio frequency band, and rendering a model animation corresponding to the target three-dimensional model in the live broadcast room based on the target driving value and the model resource data.
According to a second aspect of an embodiment of the present invention, there is provided a model resource data generation method, the method including: importing a target three-dimensional model, and displaying a configuration interface corresponding to the target three-dimensional model, where the configuration interface includes a first configuration area corresponding to a model attribute of the target three-dimensional model and a second configuration area corresponding to a target audio frequency band associated with the model attribute; responding to a first selection operation acting in the first configuration area to obtain the model attribute corresponding to the first selection operation; responding to a second selection operation acting in the second configuration area to obtain the target audio frequency band corresponding to the second selection operation; and generating model resource data corresponding to the target three-dimensional model based on the model attribute and the target audio frequency band.
According to a third aspect of an embodiment of the present invention, there is provided a model animation rendering apparatus, the apparatus including: a parsing module configured to obtain model resource data of a target three-dimensional model, and parse from the model resource data a target audio frequency band corresponding to a model attribute of the target three-dimensional model; an acquisition module configured to collect an audio stream corresponding to a target anchor in a live broadcast room, and analyze the audio stream to obtain model driving values corresponding to different audio frequency bands; and a rendering module configured to determine, from the model driving values, a target driving value corresponding to the target audio frequency band, and render a model animation corresponding to the target three-dimensional model in the live broadcast room based on the target driving value and the model resource data.
According to a fourth aspect of an embodiment of the present invention, there is provided a model resource data generating apparatus, the apparatus including: an importing module configured to import a target three-dimensional model and display a configuration interface corresponding to the target three-dimensional model, where the configuration interface includes a first configuration area corresponding to a model attribute of the target three-dimensional model and a second configuration area corresponding to a target audio frequency band associated with the model attribute; a first response module configured to respond to a first selection operation acting in the first configuration area to obtain the model attribute corresponding to the first selection operation; a second response module configured to respond to a second selection operation acting in the second configuration area to obtain the target audio frequency band corresponding to the second selection operation; and a generation module configured to generate model resource data corresponding to the target three-dimensional model based on the model attribute and the target audio frequency band.
According to a fifth aspect of an embodiment of the present invention, there is provided an electronic apparatus including: a processor and a memory; wherein the memory has stored thereon computer readable instructions which, when executed by the processor, implement the method of any of the above-described exemplary embodiments.
According to a sixth aspect of embodiments of the present invention, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any of the above-described exemplary embodiments.
As can be seen from the above technical solutions, the model animation rendering method, the model animation rendering device, the computer storage medium, and the electronic device in the exemplary embodiments of the present invention have at least the following advantages and positive effects:
In the method and the device provided by the exemplary embodiments of the present disclosure, a model animation corresponding to the target three-dimensional model is rendered in the live broadcast room based on the target driving value and the model resource data. On the one hand, because the target driving value is obtained by analyzing the audio stream of the target anchor, a correlation exists between the rendered model animation and the voice of the target anchor; on the other hand, the rendered model animation is an animation of the target three-dimensional model, so the rendering effect of model animation in the voice live broadcast room is improved and the visual experience of the audience is enhanced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
FIG. 1 schematically illustrates a flow diagram of a model animation rendering method in an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of analyzing an audio stream to obtain model driving values corresponding to different audio segments in a model animation rendering method according to an embodiment of the disclosure;
fig. 3 schematically illustrates a flow chart of performing frequency domain division on an audio stream to obtain initial amplitude data in a model animation rendering method in an embodiment of the disclosure;
FIG. 4 schematically illustrates a flow chart of calculating target amplitude data in an updated amplitude data queue to obtain a model driving value in a model animation rendering method according to an embodiment of the disclosure;
FIG. 5 schematically illustrates a flow chart of rendering a model animation corresponding to the target three-dimensional model in the live broadcast room based on the target driving value and the model resource data in a model animation rendering method according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a flow chart of rendering a model animation corresponding to the target three-dimensional model in the live broadcast room in a model animation rendering method in an embodiment of the present disclosure;
FIG. 7 schematically illustrates a flow diagram of a model resource data generation method in an embodiment of the present disclosure;
FIG. 8 schematically illustrates a flow chart of modifying model resource data in a model resource data generation method in an embodiment of the disclosure;
FIG. 9 schematically illustrates a structural diagram of a model animation rendering apparatus in an embodiment of the present disclosure;
FIG. 10 schematically illustrates a structural diagram of a model resource data generating apparatus in an embodiment of the present disclosure;
FIG. 11 schematically illustrates an electronic device in an embodiment of the disclosure;
fig. 12 schematically illustrates a computer-readable storage medium in an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
The terms "a," "an," "the," and "said" are used in this specification to denote the presence of one or more elements/components/etc.; the terms "comprising" and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. in addition to the listed elements/components/etc.; the terms "first" and "second" and the like are used merely as labels, and are not intended to limit the number of their objects.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities.
Aiming at the problems in the related art, the present disclosure proposes a model animation rendering method. Fig. 1 shows a flow diagram of a model animation rendering method, as shown in fig. 1, which at least comprises the following steps:
s110, obtaining model resource data of the target three-dimensional model, and analyzing a target audio frequency band corresponding to the model attribute of the target three-dimensional model from the model resource data.
S120, collecting an audio stream corresponding to a target anchor in a live broadcast room, and analyzing the audio stream to obtain model driving values corresponding to different audio frequency bands.
S130, determining, from the model driving values, a target driving value corresponding to the target audio frequency band, and rendering a model animation corresponding to the target three-dimensional model in the live broadcast room based on the target driving value and the model resource data.
In the method and the device provided by the exemplary embodiments of the present disclosure, a model animation corresponding to the target three-dimensional model is rendered in the live broadcast room based on the target driving value and the model resource data. On the one hand, because the target driving value is obtained by analyzing the audio stream of the target anchor, a correlation exists between the rendered model animation and the voice of the target anchor; on the other hand, the rendered model animation is an animation of the target three-dimensional model, so the rendering effect of model animation in the live broadcast room is improved and the visual experience of the audience is enhanced.
The steps of the model animation rendering method are described in detail below.
In step S110, model resource data of the target three-dimensional model is acquired, and a target audio frequency band corresponding to the model attribute of the target three-dimensional model is analyzed from the model resource data.
In an exemplary embodiment of the present disclosure, the target three-dimensional model is a model that needs to be rendered in the live broadcast room; unlike a planar model, it has a stereoscopic rendering effect. The target three-dimensional model may be a preset three-dimensional model, or may be a three-dimensional model having a one-to-one mapping relationship with the anchor, which is not particularly limited in the present exemplary embodiment.
The model resource data refers to the data necessary for rendering the target three-dimensional model. Specifically, the model resource data may include the target three-dimensional model saved in a graphics language transmission format (e.g., glTF), a texture file (used for determining the textures of the target three-dimensional model), and an animation parameter configuration file. The animation parameter configuration file contains the model attributes of the target three-dimensional model and the target audio frequency bands to be used when calculating the attribute values of those model attributes.
Based on the above, after the model resource data of the target three-dimensional model is obtained, the model resource data can be analyzed, and then the model attribute of the target three-dimensional model included in the model resource data and the target audio frequency band required to be used for determining the attribute value of the model attribute are determined.
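As a concrete illustration of this parsing step, the following is a minimal Python sketch, assuming the animation parameter configuration file is stored as JSON; the schema and every field name in it (animation_parameters, attribute, target_band) are hypothetical, since the embodiment does not specify a concrete file format.

    import json

    def parse_animation_config(config_path: str) -> dict:
        """Map each configured model attribute to its target audio frequency band."""
        with open(config_path, encoding="utf-8") as f:
            config = json.load(f)
        # e.g. returns {"node_K.rotation": "high", "node_K.scale": "medium"}
        return {entry["attribute"]: entry["target_band"]
                for entry in config["animation_parameters"]}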
In the present exemplary embodiment, on the one hand, a target three-dimensional model with a stereoscopic rendering effect is obtained, which is conducive to subsequent rendering of a model animation with a better visual effect; on the other hand, the model resource data is analyzed to obtain a target audio frequency band corresponding to the model attribute of the target three-dimensional model, which lays a foundation for the subsequent determination of a target driving value and the rendering of the model animation corresponding to the target three-dimensional model according to the target driving value and the model resource data.
In step S120, an audio stream corresponding to a target anchor in the live broadcast room is collected, and the audio stream is analyzed to obtain model driving values corresponding to different audio frequency bands.
In the exemplary embodiment of the present disclosure, whether the broadcast is a voice live broadcast or a video live broadcast, the target anchor speaks during the broadcast, and the terminal can collect this speech to obtain an audio stream corresponding to the target anchor.
By analyzing the audio stream, the changes in the intensity of the target anchor's voice can be determined; different audio frequency bands correspond to different sound intensities. For example, the target anchor A makes a sound in the voice live broadcast room, and the sound of the target anchor A is collected to obtain an audio stream L-1 corresponding to the target anchor A. The audio stream L-1 is analyzed to obtain the sound frequencies corresponding to the audio stream L-1 and the frequency amplitudes corresponding to those frequencies. On this basis, the frequency amplitude values belonging to the high, medium, and low audio frequency bands can be determined among the frequency amplitudes.
The frequency amplitude values belonging to the low, medium, and high audio frequency bands are then calculated separately to obtain the model driving value. The model driving value specifically consists of three values: the first is the calculation result for the frequency amplitude values belonging to the low audio frequency band, the second is the result for the medium audio frequency band, and the third is the result for the high audio frequency band.
In an alternative embodiment, fig. 2 is a schematic flow chart of analyzing the audio stream to obtain model driving values corresponding to different audio frequency bands in the model animation rendering method. As shown in fig. 2, the method at least includes the following steps: in step S210, the audio stream is divided into frequency bands to obtain initial amplitude data, where the initial amplitude data includes initial amplitude values in different audio frequency bands.
For example, 30 segments of the audio stream are collected every second, and the 30 segments are divided into frequency bands in the order in which they were collected, so that 30 sets of initial amplitude data are obtained in succession. Each set of initial amplitude data includes a plurality of initial amplitude values in different audio frequency bands; in particular, a set of initial amplitude data may consist of, for example, 45 initial amplitude values in different audio frequency bands.
In step S220, the initial amplitude values belonging to the same audio frequency band in the initial amplitude data are calculated to obtain target amplitude data, and the target amplitude data is added to the amplitude data queue.
The initial amplitude data is composed of initial amplitude values in different audio frequency bands; for example, it includes initial amplitude values in the low, medium, and high audio frequency bands. The initial amplitude values in the low audio frequency band are averaged to obtain a value N-1, those in the medium audio frequency band are averaged to obtain a value N-2, and those in the high audio frequency band are averaged to obtain a value N-3. At this point, the target amplitude data consists of the values N-1, N-2, and N-3. After the target amplitude data is calculated, it is added to the amplitude data queue.
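As a minimal sketch of this per-band averaging (step S220), the following snippet averages illustrative amplitude values into a target amplitude triple [N-1, N-2, N-3]; the concrete numbers are assumptions for illustration only.

    from statistics import mean

    def to_target_amplitude(initial: dict) -> list:
        """Average each band's initial amplitudes into [N-1, N-2, N-3]."""
        return [mean(initial["low"]), mean(initial["medium"]), mean(initial["high"])]

    amplitude_queue = []
    amplitude_queue.append(to_target_amplitude(
        {"low": [180.0, 220.0], "medium": [90.0, 110.0], "high": [30.0, 50.0]}))
    # amplitude_queue is now [[200.0, 100.0, 40.0]]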
In step S230, interpolation processing is performed on the target amplitude data in the amplitude data queue, and an updated amplitude data queue is obtained.
After the target amplitude data is added to the amplitude data queue, interpolation processing is required. Interpolation is needed because Z-1 segments of the audio stream are generally obtained per second, so Z-1 pieces of target amplitude data per second are added to the amplitude data queue; however, in the subsequent rendering of the model animation corresponding to the target three-dimensional model, the rendering speed is Z-2 frames per second. Since Z-2 differs from Z-1, the rendering effect of the model animation cannot otherwise be guaranteed.
Based on this, it is necessary to perform interpolation processing on the target amplitude data to ensure that there are Z-2 pieces of target amplitude data per second in the amplitude data queue.
Specifically, the interpolation processing of the target amplitude data is performed as follows: each time a new piece of target amplitude data C-0 is obtained, it is calculated together with the last piece of target amplitude data in the amplitude data queue to generate target amplitude data C-1. The target amplitude data C-1 and C-0 are then added to the amplitude data queue in sequence to obtain the updated amplitude data queue.
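A minimal sketch of this interpolation step follows; combining C-0 with the last queued triple by midpoint averaging is an assumption, since the text only states that the two pieces of data are calculated together.

    def interpolate_and_enqueue(queue: list, c0: list) -> None:
        """Insert an interpolated triple C-1 before appending the new triple C-0."""
        if queue:
            last = queue[-1]
            c1 = [(a + b) / 2.0 for a, b in zip(last, c0)]  # interpolated C-1
            queue.append(c1)
        queue.append(c0)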
In step S240, the target amplitude data in the updated amplitude data queue is calculated to obtain model driving values corresponding to different audio frequency bands.
The target amplitude data includes a target amplitude value, and the target amplitude value and an amplitude threshold value corresponding to the target amplitude value are calculated to obtain model driving values corresponding to different audio frequency bands.
In the present exemplary embodiment, on the one hand, the audio stream corresponding to the target anchor is divided into frequency bands, so that initial amplitude values in different audio frequency bands can be obtained and, in turn, model driving values corresponding to the different audio frequency bands for the subsequent rendering of the model animation; this establishes a connection between the voice of the target anchor and the model rendering and enhances the visual experience of the audience in the live broadcast room. On the other hand, interpolation processing is performed on the target amplitude data in the amplitude data queue, so that the updated amplitude data queue meets the requirements of model animation rendering and the rendering effect is guaranteed.
In an alternative embodiment, fig. 3 shows a schematic flow chart of dividing the audio stream in the frequency domain to obtain initial amplitude data in the model animation rendering method. As shown in fig. 3, the method at least includes the following steps: in step S310, the audio stream is subjected to frequency domain analysis to obtain a plurality of frequencies corresponding to the audio stream and a plurality of frequency amplitudes corresponding to the plurality of frequencies.
In order to divide the audio stream into frequency bands, it needs to be subjected to frequency domain analysis to convert it from the time domain to the frequency domain, thereby obtaining the frequencies corresponding to the audio stream and the frequency amplitudes corresponding to those frequencies. Specifically, the audio stream may be subjected to a fast Fourier transform (FFT) for the frequency domain analysis, or the analysis may be performed in other ways, which is not particularly limited in the present exemplary embodiment.
In step S320, frequency band division is performed on the plurality of frequency amplitudes to obtain initial amplitude data; the initial amplitude data includes initial amplitude values at different audio frequency bands.
After obtaining the plurality of frequency amplitudes, the plurality of frequency amplitudes may be subjected to frequency band division to obtain an initial amplitude value in a low audio frequency band, an initial amplitude value in a medium audio frequency band, and an initial amplitude value in a high audio frequency band. These initial amplitude values at different audio frequency bands constitute initial amplitude data.
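A minimal sketch of steps S310 and S320 using a fast Fourier transform is given below; the 250 Hz and 2 kHz band edges are assumptions chosen for illustration, since the embodiment does not fix where the low, medium, and high bands are divided.

    import numpy as np

    def band_amplitudes(frame: np.ndarray, sample_rate: int) -> dict:
        """FFT one frame of the audio stream and split its magnitudes into bands."""
        spectrum = np.abs(np.fft.rfft(frame))                  # frequency amplitudes
        freqs = np.fft.rfftfreq(len(frame), 1.0 / sample_rate)
        return {
            "low":    spectrum[freqs < 250].tolist(),
            "medium": spectrum[(freqs >= 250) & (freqs < 2000)].tolist(),
            "high":   spectrum[freqs >= 2000].tolist(),
        }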
In the present exemplary embodiment, on the one hand, frequency domain analysis of the audio stream corresponding to the target anchor helps obtain the frequencies of the target anchor's voice and the frequency amplitudes corresponding to them; on the other hand, dividing the frequency amplitudes into bands yields the initial amplitude data, from which the model driving values corresponding to the target anchor's audio stream can be obtained, establishing the relationship between the model driving values and the target anchor's voice and improving the effect of the subsequently rendered model animation corresponding to the target three-dimensional model.
In an alternative embodiment, fig. 4 shows a flow chart of calculating target amplitude data in an updated amplitude data queue to obtain a model driving value in a model animation rendering method, where the target amplitude data includes a target amplitude value, and as shown in fig. 4, the method at least includes the following steps: in step S410, an amplitude threshold corresponding to the target amplitude data in the updated amplitude data queue is determined.
The target amplitude data is composed of target amplitude values in different audio frequency bands, and the amplitude threshold refers to the critical value against which a target amplitude value is judged.
In step S420, if the target amplitude value in the target amplitude data is greater than or equal to the amplitude threshold value, the target amplitude value is replaced with the first model driving value.
The first model driving value is a preset value; specifically, it may be 1. For example, suppose the amplitude threshold is 200 and the target amplitude data is [D-1, D-2, D-3]. Since D-1 and D-2 are greater than 200, both are replaced with 1.
In step S430, if the target amplitude value in the target amplitude data is smaller than the amplitude threshold value, the target amplitude value and the amplitude threshold value are calculated to obtain a second model driving value.
When the target amplitude value in the target amplitude data is smaller than the amplitude threshold, the target amplitude value is divided by the amplitude threshold to obtain the second model driving value.
For example, since D-3 is smaller than 200, D-3 is divided by 200; if the result is 0.5, D-3 is replaced with 0.5, and the target amplitude data obtained at this point is [1, 1, 0.5].
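A minimal sketch of steps S410 to S430, reproducing the example above with an amplitude threshold of 200; the concrete amplitudes 350, 280, and 100 are illustrative assumptions consistent with D-1 and D-2 exceeding the threshold and D-3 being half of it.

    def to_driving_values(target: list, threshold: float = 200.0) -> list:
        """Amplitudes at or above the threshold become 1; others are divided by it."""
        return [1.0 if v >= threshold else v / threshold for v in target]

    print(to_driving_values([350.0, 280.0, 100.0]))  # [1.0, 1.0, 0.5]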
In the present exemplary embodiment, by comparing the target amplitude value with the amplitude threshold, the model driving value can be obtained, which makes it more convenient to subsequently use the model driving value to calculate the attribute values of the model attributes.
In step S130, a target driving value corresponding to the target audio frequency band is determined from the model driving values, and a model animation corresponding to the target three-dimensional model is rendered in the live broadcast room based on the target driving value and the model resource data.
In an exemplary embodiment of the present disclosure, the model driving values specifically include three values, which are a driving value corresponding to a low audio frequency band, a driving value corresponding to a medium audio frequency band, and a driving value corresponding to a high audio frequency band. If the target audio frequency band is the high audio frequency band, the driving value corresponding to the high audio frequency band in the model driving values can be determined as the target driving value.
After the target driving value is determined, the attribute value corresponding to the model attribute of the target three-dimensional model is calculated using the target driving value, and the model animation corresponding to the target three-dimensional model is then rendered in the live broadcast room based on the obtained calculation result and the model resource data.
It should be noted that the target three-dimensional model may be composed of a plurality of model nodes, and each model node has corresponding model attributes. Specifically, the model attributes may include a rotation attribute of the model node, a scaling attribute of the model node, an offset attribute of the model node, a scaling attribute of the map at the model node, a rotation attribute of the map at the model node, an offset attribute of the map at the model node, a self-illumination (emissive) intensity attribute of the material at the model node, and a transparency attribute of the material at the model node, which is not particularly limited in this exemplary embodiment.
In an alternative embodiment, fig. 5 shows a schematic flow chart of rendering a model animation corresponding to the target three-dimensional model in the live broadcast room based on the target driving value and the model resource data, where the model resource data includes a driving value adjustment factor. As shown in fig. 5, the method at least includes the following steps: in step S510, a first calculation formula is determined among the current attribute value of the model attribute corresponding to the current frame, the driving value adjustment factor, the target driving value, and the first attribute value of the model attribute corresponding to the first frame; the current frame and the first frame differ by one frame interval.
The model resource data also includes a driving value adjustment factor, which is used to determine the degree of influence of the target driving value on the attribute value corresponding to the model attribute.
It should be noted that every time a model node or the map at a model node rotates 360 degrees, it returns to its initial position; therefore, the rotation attribute of the model node and the rotation attribute of the map at the model node may be classified as L-1 type attributes. Similarly, the offset attribute of the model node and the offset attribute of the map at the model node share this characteristic and also belong to the L-1 type attributes.
Other model attributes do not have the above characteristic and can therefore be classified as first type attributes. Whether an attribute belongs to the L-1 type or the first type, its current attribute value can be calculated by determining the corresponding calculation formula.
For example, for the rotation attribute of the model node, the determined first calculation formula is shown in formula (1).
rotation angle value of the model node at the current frame = rotation angle value of the model node at the first frame + driving value adjustment factor × target driving value (1)
Here, the rotation angle value of the model node at the current frame is the current attribute value, and the rotation angle value of the model node at the first frame is the first attribute value. Assuming the model driving value is [1, 1, 0.5], if the target audio frequency band corresponding to the rotation attribute of the model node is the low audio frequency band, the target driving value in formula (1) is 1.
In step S520, the driving value adjustment factor, the target driving value, and the first attribute value are calculated based on the first calculation formula, to obtain the current attribute value.
After the first calculation formula is determined, the driving value adjustment factor, the target driving value and the first attribute value are substituted into the first calculation formula, so that the current attribute value of the model attribute corresponding to the current frame can be calculated.
For example, if the determined target driving value is 1, the first attribute value is 30 degrees, and the driving value adjustment factor is a value a, then based on the first calculation formula the rotation angle value of the model node at the current frame is 30 + a degrees.
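A minimal sketch of formula (1) follows; the wrap-around at 360 degrees reflects the earlier observation that a node returns to its initial position after a full rotation, and the factor value 5.0 is an illustrative assumption.

    def next_rotation(first_angle: float, adjust_factor: float,
                      target_drive: float) -> float:
        """Formula (1): advance the rotation angle by adjustment factor x drive."""
        return (first_angle + adjust_factor * target_drive) % 360.0

    print(next_rotation(30.0, adjust_factor=5.0, target_drive=1.0))  # 35.0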
In step S530, a model animation corresponding to the target three-dimensional model is rendered in the live broadcast room based on the current attribute value and the model resource data.
That is, after the current attribute value is determined, the model animation corresponding to the target three-dimensional model is rendered in the live broadcast room according to the current attribute value and the model resource data.
For example, the determined current attribute values include a rotation angle of 60 degrees and a scaling value of 1.2 for the current frame. If the target three-dimensional model is a three-dimensional flower, the model animation rendered in the live broadcast room is the three-dimensional flower rotated by 60 degrees and enlarged by a factor of 1.2.
In the present exemplary embodiment, the first attribute value, the target driving value, and the driving value adjustment factor are calculated based on the first calculation formula to obtain the current attribute value, so that the model animation is rendered in the live broadcast room based on the current attribute value, improving the visual experience of the audience.
In an alternative embodiment, fig. 6 shows a schematic flow chart of rendering a model animation corresponding to the target three-dimensional model in the live broadcast room, where the model resource data includes a first driving value adjustment factor and the model attributes include a first type attribute. As shown in fig. 6, the method at least includes the following steps: in step S610, a first attribute threshold and a second attribute threshold corresponding to the first type attribute are determined, and the threshold difference between the first attribute threshold and the second attribute threshold is determined.
For a first type attribute, the attribute value must lie within a preset range. The upper limit of the preset range is the first attribute threshold, and the lower limit is the second attribute threshold. The threshold difference is the result of calculating the difference between the first attribute threshold and the second attribute threshold.
In step S620, a second calculation formula is determined among the first attribute threshold, the target driving value, the threshold difference, the first current attribute value of the first type attribute corresponding to the current frame, the second attribute value of the first type attribute corresponding to the second frame, and the first driving value adjustment factor; the current frame and the second frame differ by one frame interval.
When calculating a first type attribute, the formula used is the second calculation formula. For example, if the first type attribute is the scaling attribute of a model node, the second calculation formula is determined as shown in formula (2).
scaling value of the model node at the current frame = minimum scaling value of the model node + scaling range of the model node × (target driving value × first driving value adjustment factor + (scaling value of the model node at the last frame − minimum scaling value of the model node) / scaling range of the model node × (1 − first driving value adjustment factor)) (2)
Here, the scaling value of the model node at the current frame is the first current attribute value, the minimum scaling value of the model node is the second attribute threshold, and the scaling range of the model node is the threshold difference.
In step S630, based on the second calculation formula, the first attribute threshold, the target driving value, the threshold difference, the second attribute value, and the first driving value adjustment factor are calculated to obtain the first current attribute value.
That is, the first attribute threshold, the target driving value, the threshold difference, the second attribute value, and the first driving value adjustment factor are calculated according to the second calculation formula to obtain the first current attribute value.
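A minimal sketch of formula (2) follows; note that with an adjustment factor between 0 and 1 and a driving value between 0 and 1, the blended term stays between 0 and 1, so the result stays inside the preset range. All numeric values are illustrative assumptions.

    def next_scale(prev_scale: float, min_scale: float, max_scale: float,
                   target_drive: float, adjust_factor: float) -> float:
        """Formula (2): blend the drive with the previous normalized scaling value."""
        scale_range = max_scale - min_scale            # the threshold difference
        blended = (target_drive * adjust_factor
                   + (prev_scale - min_scale) / scale_range * (1.0 - adjust_factor))
        return min_scale + scale_range * blended

    print(next_scale(1.0, 0.8, 1.6, target_drive=0.5, adjust_factor=0.5))  # 1.1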
In step S640, a model animation corresponding to the target three-dimensional model is rendered in the live broadcast room based on the first current attribute value and the model resource data.
That is, after the first current attribute value is calculated, the model animation corresponding to the target three-dimensional model is rendered in the live broadcast room according to the first current attribute value.
In the present exemplary embodiment, the first current attribute value is calculated based on the second calculation formula, so that the model animation is subsequently rendered in the live broadcast room based on the first current attribute value, improving the visual experience of the viewer.
In the present exemplary embodiment, a model animation corresponding to the target three-dimensional model is rendered in the live broadcast room based on the target driving value and the model resource data. On the one hand, because the target driving value is obtained by analyzing the audio stream of the target anchor, a correlation exists between the rendered model animation and the voice of the target anchor; on the other hand, the rendered model animation is an animation of the target three-dimensional model, so the rendering effect of model animation in the voice live broadcast room is improved and the visual experience of the audience is enhanced.
Aiming at the problems in the related art, the present disclosure proposes a model resource data generation method. Fig. 7 shows a flow chart of a model resource data generating method, as shown in fig. 7, the model resource data generating method at least includes the following steps:
s710, importing a target three-dimensional model, and displaying a configuration interface corresponding to the target three-dimensional model; the configuration interface comprises a first configuration area corresponding to the model attribute of the target three-dimensional model and a second configuration area corresponding to the model attribute of the target audio frequency band.
And S720, responding to a first selection operation acted on the first configuration area, and obtaining model attributes corresponding to the first selection operation.
And S730, responding to a second selection operation acted on the second configuration area, and obtaining a target audio frequency band corresponding to the second selection operation.
And S740, generating model resource data corresponding to the target three-dimensional model based on the model attribute and the target audio frequency band.
In the method and the device provided by the exemplary embodiments of the present disclosure, a configuration interface corresponding to the target three-dimensional model is displayed, and the configuration interface contains a first configuration area corresponding to the model attribute and a second configuration area corresponding to the target audio frequency band, where the target audio frequency band corresponds to the model attribute. Therefore, when the model animation needs to be adjusted, a designer is no longer required; the rendering effect of the model animation can be adjusted by performing the corresponding configuration in the configuration interface, which improves the convenience and flexibility of adjusting the rendering effect of the model animation.
The steps of the model resource data generation method are described in detail below.
In step S710, a target three-dimensional model is imported, and a configuration interface corresponding to the target three-dimensional model is displayed; the configuration interface includes a first configuration area corresponding to a model attribute of the target three-dimensional model and a second configuration area corresponding to a target audio frequency band associated with the model attribute.
The configuration interface is used for configuring model attributes corresponding to the target three-dimensional model and target audio frequency bands corresponding to the model attributes.
It should be noted that, there is a first configuration area for configuring the model attribute in the configuration interface, and there is a second configuration area for configuring the target audio frequency band corresponding to the model attribute in the configuration interface.
In step S720, in response to the first selection operation acting on the first configuration region, a model attribute corresponding to the first selection operation is obtained.
The first selection operation, that is, the operation of selecting the model attribute to be configured, may be a click operation, a double-click operation, a long-press operation, or any other touch operation, which is not limited in this exemplary embodiment.
When the first selection operation is performed in the first configuration area, the model attribute corresponding to the first selection operation is obtained. For example, when a click operation is performed in the first configuration area on the model attribute that is the rotation angle of model node K, that model attribute is obtained.
In step S730, in response to the second selection operation acting in the second configuration region, a target audio frequency band corresponding to the second selection operation is obtained.
When the second selection operation is performed in the second configuration area, a target audio frequency band is obtained. For example, when the clicking operation is performed on the high audio frequency band in the second configuration area, the obtained target audio frequency band is the high audio frequency band, and at this time, there is a correspondence between the high audio frequency band and the model attribute, which is the rotation angle of the model node K.
In step S740, model resource data corresponding to the target three-dimensional model is generated based on the model attribute and the target audio frequency band.
After determining the model attribute and the target audio frequency band, generating model resource data corresponding to the target three-dimensional model, so as to render the target three-dimensional model in the live broadcast room according to the model resource data.
In an alternative embodiment, fig. 8 shows a schematic flow chart of modifying the model resource data in the model resource data generation method. As shown in fig. 8, the method at least includes the following steps: in step S810, in response to a first modification operation acting in the first configuration area, the modified model attribute corresponding to the first modification operation is obtained.
The first modification operation refers to an operation of modifying the selected model attribute; when the first modification operation is performed in the first configuration area, the model attribute corresponding to it is the modified model attribute.
In step S820, in response to a second modification operation acting in the second configuration area, the modified target audio frequency band corresponding to the second modification operation is obtained.
Both the model attribute and the target audio frequency band can be changed. When the second modification operation is performed in the second configuration area, the target audio frequency band corresponding to the second modification operation is obtained as the modified target audio frequency band.
In step S830, the model resource data is modified based on the modified model attribute and the modified target audio frequency band.
That is, the model resource data is modified according to the modified model attribute and the modified target audio frequency band.
For example, a click operation is performed in the second configuration area; the modified target audio frequency band corresponding to the click operation is the low audio frequency band, and the model resource data is modified based on the model attribute that is the rotation angle of model node K and the low audio frequency band.
In the present exemplary embodiment, a configuration interface corresponding to the target three-dimensional model is displayed, and the configuration interface contains a first configuration area corresponding to the model attribute and a second configuration area corresponding to the target audio frequency band, where the target audio frequency band corresponds to the model attribute. Therefore, when the model animation needs to be adjusted, a designer is no longer required; the rendering effect of the model animation can be adjusted through the corresponding configuration in the configuration interface, increasing the convenience and flexibility of adjusting the rendering effect of the model animation.
The following describes a model animation rendering method in the embodiment of the present disclosure in detail in connection with an application scenario.
The model resource data corresponding to the three-dimensional model of a stereoscopic ball is obtained and parsed, yielding the model attribute S-1 and the model attribute S-2 of the model, together with the target audio frequency band corresponding to the model attribute S-1 (specifically, the low audio frequency band) and the target audio frequency band corresponding to the model attribute S-2 (namely, the medium audio frequency band).
An audio stream corresponding to the anchor G in the live broadcast room is collected and analyzed to obtain the model driving value [1, 0.2, 0.5], where 1 corresponds to the low audio frequency band, 0.2 to the medium audio frequency band, and 0.5 to the high audio frequency band.
Because the target audio frequency band of the model attribute S-2 is the medium audio frequency band, its target driving value is determined to be 0.2, and the model animation corresponding to the stereoscopic ball is rendered in the live broadcast room based on the target driving value 0.2 and the model resource data.
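Tying the scenario together, a minimal sketch of selecting each attribute's target driving value from the model driving value triple:

    drive = {"low": 1.0, "medium": 0.2, "high": 0.5}   # model driving value [1, 0.2, 0.5]
    attribute_bands = {"S-1": "low", "S-2": "medium"}  # bands parsed from the resource data
    target_drives = {attr: drive[band] for attr, band in attribute_bands.items()}
    print(target_drives)  # {'S-1': 1.0, 'S-2': 0.2}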
In this application scenario, a model animation corresponding to the target three-dimensional model is rendered in the live broadcast room based on the target driving value and the model resource data. Because the target driving value is obtained by analyzing the audio stream of the target anchor, on the one hand, a correlation exists between the rendered model animation and the anchor's voice; on the other hand, the rendered model animation is an animation of the target three-dimensional model, so the rendering effect of model animation in the voice live broadcast room is improved and the visual experience of the audience is enhanced.
In addition, in an exemplary embodiment of the present disclosure, a model animation rendering apparatus is also provided. Fig. 9 shows a schematic structural diagram of a model animation rendering device, and as shown in fig. 9, a model animation rendering device 900 may include: the parsing module 910, the acquisition module 920, and the rendering module 930. Wherein:
The parsing module 910 is configured to obtain model resource data of the target three-dimensional model, and parse from the model resource data a target audio frequency band corresponding to a model attribute of the target three-dimensional model; the acquisition module 920 is configured to collect an audio stream corresponding to a target anchor in the live broadcast room, and analyze the audio stream to obtain model driving values corresponding to different audio frequency bands; and the rendering module 930 is configured to determine, from the model driving values, a target driving value corresponding to the target audio frequency band, and render a model animation corresponding to the target three-dimensional model in the live broadcast room based on the target driving value and the model resource data.
In an exemplary embodiment of the present disclosure, a model resource data generating apparatus is also provided. Fig. 10 shows a schematic structural diagram of a model resource data generating apparatus, and as shown in fig. 10, the model resource data generating apparatus 1000 may include: an import module 1010, a first response module 1020, a second response module 1030, and a generation module 1040. Wherein:
an importing module 1010 configured to import a target three-dimensional model and display a configuration interface corresponding to the target three-dimensional model, where the configuration interface includes a first configuration area corresponding to a model attribute of the target three-dimensional model and a second configuration area corresponding to a target audio frequency band associated with the model attribute; a first response module 1020 configured to respond to a first selection operation acting in the first configuration area to obtain the model attribute corresponding to the first selection operation; a second response module 1030 configured to respond to a second selection operation acting in the second configuration area to obtain the target audio frequency band corresponding to the second selection operation; and a generating module 1040 configured to generate model resource data corresponding to the target three-dimensional model based on the model attribute and the target audio frequency band.
The specific details of the model animation rendering device 900 and the model resource data generating device 1000 have already been described in detail in the corresponding method embodiments, and are therefore not repeated here.
It should be noted that although several modules or units of the model animation rendering device 900 and the model resource data generating device 1000 are mentioned in the above detailed description, such division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
An electronic device 1100 according to such an embodiment of the present invention is described below with reference to Fig. 11. The electronic device 1100 shown in Fig. 11 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in Fig. 11, the electronic device 1100 is embodied in the form of a general-purpose computing device. Components of the electronic device 1100 may include, but are not limited to: at least one processing unit 1110, at least one storage unit 1120, a bus 1130 connecting the different system components (including the storage unit 1120 and the processing unit 1110), and a display unit 1140.
The storage unit 1120 stores program code that can be executed by the processing unit 1110, such that the processing unit 1110 performs the steps according to various exemplary embodiments of the present invention described in the above "exemplary methods" section of this specification.
The storage unit 1120 may include a readable medium in the form of a volatile storage unit, such as a Random Access Memory (RAM) 1121 and/or a cache memory 1122, and may further include a Read Only Memory (ROM) 1123.
The storage unit 1120 may also include a program/utility 1124 having a set (at least one) of program modules 1125, such program modules 1125 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each, or some combination, of these examples may include an implementation of a network environment.
The bus 1130 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
The electronic device 1100 may also communicate with one or more external devices 1170 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 1100, and/or any device (e.g., router, modem, etc.) that enables the electronic device 1100 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1150. Also, electronic device 1100 can communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 1160. As shown, network adapter 1160 communicates with other modules of electronic device 1100 via bus 1130. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 1100, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software combined with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal device, a network device, or the like) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the invention as described in the "exemplary methods" section of this specification, when said program product is run on the terminal device.
Referring to Fig. 12, a program product 1200 for implementing the above-described method according to an embodiment of the present invention is described; it may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electrical, electromagnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, via the Internet using an Internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.

Claims (12)

1. A method of model animation rendering, the method comprising:
obtaining model resource data of a target three-dimensional model, and analyzing, from the model resource data, a target audio frequency band corresponding to a model attribute of the target three-dimensional model;
collecting an audio stream corresponding to a target anchor in a live broadcasting room, and analyzing the audio stream to obtain model driving values corresponding to different audio frequency bands;
and determining, from the model driving values, a target driving value corresponding to the target audio frequency band, and rendering a model animation corresponding to the target three-dimensional model in the live broadcasting room based on the target driving value and the model resource data.
2. The model animation rendering method of claim 1, wherein the analyzing the audio stream to obtain model driving values corresponding to different audio frequency bands comprises:
performing frequency band division on the audio stream to obtain initial amplitude data; the initial amplitude data comprises initial amplitude values in different audio frequency bands;
calculating the initial amplitude values belonging to the same audio frequency band in the initial amplitude data to obtain target amplitude data, and adding the target amplitude data into an amplitude data queue;
performing interpolation processing on the target amplitude data in the amplitude data queue to obtain an updated amplitude data queue;
and calculating the target amplitude data in the updated amplitude data queue to obtain model driving values corresponding to the different audio frequency bands.
3. The model animation rendering method of claim 2, wherein the performing frequency band division on the audio stream to obtain initial amplitude data comprises:
performing frequency domain analysis on the audio stream to obtain a plurality of frequencies corresponding to the audio stream and a plurality of frequency amplitudes corresponding to the frequencies respectively;
performing frequency band division on the plurality of frequency amplitudes to obtain initial amplitude data; the initial amplitude data includes initial amplitude values at different audio frequency bands.
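For illustration, claims 2 and 3 together resemble a short-time spectrum pipeline. The Python sketch below is one plausible reading, assuming an FFT for the frequency-domain analysis, a per-band mean as the same-band calculation, linear interpolation between successive queue entries, and peak normalization as the final drive-value calculation; none of these concrete choices is fixed by the claims themselves.

import numpy as np

BANDS = {"low": (20, 250), "mid": (250, 2000), "high": (2000, 8000)}
SAMPLE_RATE = 44100

def band_amplitudes(frame):
    # Frequency-domain analysis: frequencies and per-frequency amplitudes.
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    # Frequency band division: aggregate the initial amplitude values that
    # belong to the same audio frequency band into one target amplitude.
    return {
        name: float(spectrum[(freqs >= lo) & (freqs < hi)].mean())
        for name, (lo, hi) in BANDS.items()
    }

def interpolate(queue, steps=4):
    # Interpolate between the two newest queue entries so drive values
    # change smoothly between audio analysis frames.
    if len(queue) < 2:
        return list(queue)
    prev, cur = queue[-2], queue[-1]
    return [
        {b: prev[b] + (cur[b] - prev[b]) * t / steps for b in cur}
        for t in range(1, steps + 1)
    ]

amplitude_queue = []
for _ in range(3):                        # stand-in for stream frames
    frame = np.random.randn(1024)         # placeholder audio samples
    amplitude_queue.append(band_amplitudes(frame))

for entry in interpolate(amplitude_queue):
    peak = max(entry.values()) or 1.0
    print({b: v / peak for b, v in entry.items()})  # drive values in [0, 1]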
4. The model animation rendering method of claim 3, wherein the target amplitude data comprises a target amplitude value;
the calculating the target amplitude data in the updated amplitude data queue to obtain model driving values corresponding to the different audio frequency bands includes:
determining an amplitude threshold corresponding to the target amplitude data in the updated amplitude data queue;
if the target amplitude value in the target amplitude data is greater than or equal to the amplitude threshold value, replacing the target amplitude value with a first model driving value;
and if the target amplitude value in the target amplitude data is smaller than the amplitude threshold value, calculating the target amplitude value and the amplitude threshold value to obtain a second model driving value.
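A minimal sketch of claim 4's thresholding, assuming the first model driving value is a fixed ceiling of 1.0 and the below-threshold calculation is a simple ratio of amplitude to threshold; the claim itself fixes neither choice.

def drive_value(target_amplitude, amplitude_threshold, ceiling=1.0):
    # At or above the threshold, the amplitude is replaced with the
    # first model driving value (assumed here: a fixed ceiling).
    if target_amplitude >= amplitude_threshold:
        return ceiling
    # Below the threshold, the second model driving value is computed
    # from the amplitude and the threshold (assumed here: their ratio).
    return target_amplitude / amplitude_threshold

print(drive_value(0.9, 0.5))   # -> 1.0, clamped
print(drive_value(0.1, 0.5))   # -> 0.2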
5. The model animation rendering method of claim 1, wherein the model resource data includes a driving value adjustment factor;
the rendering of a model animation corresponding to the target three-dimensional model in the live broadcast room based on the target driving value and the model resource data comprises:
determining a first calculation formula among a current attribute value of the model attribute corresponding to the current frame, the driving value adjustment factor, the target driving value, and a first attribute value of the model attribute corresponding to a first frame; the current frame and the first frame differ by one frame interval;
calculating the driving value adjustment factor, the target driving value and the first attribute value based on the first calculation formula to obtain the current attribute value;
and rendering a model animation corresponding to the target three-dimensional model in the live broadcasting room based on the current attribute value and the model resource data.
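Claim 5 does not publish the first calculation formula itself. One natural candidate, assumed in the sketch below, is per-frame exponential smoothing: the current attribute value moves from the previous frame's attribute value toward the target driving value at a rate set by the driving value adjustment factor.

def current_attribute_value(first_attribute_value, target_driving_value,
                            adjustment_factor):
    # Assumed first calculation formula (exponential smoothing).
    return (first_attribute_value
            + adjustment_factor * (target_driving_value - first_attribute_value))

value = 0.0
for frame in range(5):                          # consecutive rendered frames
    value = current_attribute_value(value, target_driving_value=0.2,
                                    adjustment_factor=0.5)
    print(f"frame {frame}: {value:.3f}")        # converges toward 0.2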
6. The model animation rendering method of claim 5, wherein the driving value adjustment factor further comprises a first driving value adjustment factor; the model attributes include a first type of attribute;
the method further comprises the steps of:
determining a first attribute threshold value and a second attribute threshold value corresponding to the first type attribute, and determining a threshold difference between the first attribute threshold value and the second attribute threshold value;
determining a second calculation formula among the first attribute threshold value, the target drive value, the threshold difference, a first current attribute value of the first type attribute corresponding to a current frame, a second attribute value of the first type attribute corresponding to a second frame and the first drive value adjustment factor; the current frame and the second frame differ by one frame interval;
calculating the first attribute threshold value, the target driving value, the threshold difference, the second attribute value and the first driving value adjustment factor based on the second calculation formula to obtain the first current attribute value;
and rendering a model animation corresponding to the target three-dimensional model in the live broadcasting room based on the first current attribute value and the model resource data.
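Claim 6 likewise leaves the second calculation formula unstated. The sketch below assumes one consistent reading: the threshold difference rescales the target driving value into the range bounded by the two attribute thresholds, and the first-type attribute is then eased toward that rescaled target by the first driving value adjustment factor.

def first_type_attribute(first_threshold, threshold_difference,
                         target_driving_value, previous_value, k1):
    # Assumed second calculation formula: map the driving value into
    # [first_threshold - threshold_difference, first_threshold], then
    # ease toward it by the first driving value adjustment factor k1.
    target = first_threshold - threshold_difference * (1.0 - target_driving_value)
    return previous_value + k1 * (target - previous_value)

# e.g. an attribute bounded between 1.0 and 2.0 (threshold difference 1.0)
value = 1.0
for frame in range(4):
    value = first_type_attribute(2.0, 1.0, 0.2, value, k1=0.5)
    print(f"frame {frame}: {value:.3f}")        # eases toward 1.2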
7. A method for generating model resource data, the method comprising:
importing a target three-dimensional model, and displaying a configuration interface corresponding to the target three-dimensional model; the configuration interface comprises a first configuration area corresponding to the model attribute of the target three-dimensional model and a second configuration area corresponding to the target audio frequency band;
responding to a first selection operation acted on the first configuration area, and obtaining the model attribute corresponding to the first selection operation;
responding to a second selection operation acted in the second configuration area, and obtaining the target audio frequency band corresponding to the second selection operation;
and generating model resource data corresponding to the target three-dimensional model based on the model attribute and the target audio frequency band.
8. The model resource data generation method according to claim 7, characterized in that the method further comprises:
responding to a first change operation acting in the first configuration area, to obtain a changed model attribute corresponding to the first change operation; or
responding to a second change operation acting in the second configuration area, to obtain a changed target audio frequency band corresponding to the second change operation;
and modifying the model resource data based on the changed model attribute and the changed target audio frequency band.
9. A model animation rendering device, comprising:
the analysis module is configured to acquire model resource data of a target three-dimensional model, and analyze, from the model resource data, a target audio frequency band corresponding to a model attribute of the target three-dimensional model;
the acquisition module is configured to collect an audio stream corresponding to a target anchor in a live broadcasting room, and analyze the audio stream to obtain model driving values corresponding to different audio frequency bands;
and the rendering module is configured to determine a target driving value corresponding to the target audio frequency band from the model driving values, and render model animation corresponding to the target three-dimensional model in the live broadcasting room based on the target driving value and the model resource data.
10. A model resource data generating apparatus, comprising:
the system comprises an importing module, a configuration interface and a display module, wherein the importing module is configured to import a target three-dimensional model and display a configuration interface corresponding to the target three-dimensional model; the configuration interface comprises a first configuration area corresponding to the model attribute of the target three-dimensional model and a second configuration area corresponding to the model attribute of the target audio frequency band;
a first response module configured to respond to a first selection operation acting in the first configuration area, and obtain the model attribute corresponding to the first selection operation;
a second response module configured to respond to a second selection operation acting in the second configuration area, and obtain the target audio frequency band corresponding to the second selection operation;
and the generation module is configured to generate model resource data corresponding to the target three-dimensional model based on the model attribute and the target audio frequency band.
11. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1-8 via execution of the executable instructions.
12. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method of any one of claims 1-8.
CN202310294006.3A 2023-03-22 2023-03-22 Model animation rendering method and device, computer storage medium and electronic equipment Pending CN116320520A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310294006.3A CN116320520A (en) 2023-03-22 2023-03-22 Model animation rendering method and device, computer storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN116320520A (en) 2023-06-23

Family

ID=86832197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310294006.3A Pending CN116320520A (en) 2023-03-22 2023-03-22 Model animation rendering method and device, computer storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116320520A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination