CN114053696B - Image rendering processing method and device and electronic equipment - Google Patents

Image rendering processing method and device and electronic equipment

Info

Publication number
CN114053696B
CN114053696B (application CN202111350504.2A)
Authority
CN
China
Prior art keywords
asset
hair
cloth
resolving
frame
Prior art date
Legal status
Active
Application number
CN202111350504.2A
Other languages
Chinese (zh)
Other versions
CN114053696A (en)
Inventor
张斌
莫友三
Current Assignee
Perfect World Beijing Software Technology Development Co Ltd
Original Assignee
Perfect World Beijing Software Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Perfect World Beijing Software Technology Development Co Ltd filed Critical Perfect World Beijing Software Technology Development Co Ltd
Priority to CN202211572548.4A priority Critical patent CN116115995A
Priority to CN202111350504.2A priority patent CN114053696B
Publication of CN114053696A publication Critical
Application granted Critical
Publication of CN114053696B publication Critical
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/53 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing
    • A63F2300/538 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing for performing operations on behalf of the game client, e.g. rendering
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6615 Methods for processing data by generating or executing the game program for rendering three dimensional images using models with different levels of detail [LOD]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an image rendering processing method and device and an electronic device, relating to the field of computer technology. The method comprises: first, acquiring a cloth resolving asset of a cloth and an unresolved hair asset of hair of a preset specification; performing asset processing on the cloth resolving asset to obtain skeleton information; creating a binding file of the hair asset and the skeleton according to the skeleton information; configuring specified information of the cloth resolving asset, the specified information comprising the hair asset and the binding file; and, in response to an image rendering instruction, performing rendering-effect simulation on the hair asset and integrating the simulated rendering effect of the hair asset with the image rendering effect of the cloth resolving asset according to the specified information. The method integrates the externally solved cloth result with the real-time simulated hair result during real-time rendering, so that cloth and hair are rendered in real time and a good dynamic effect is achieved while rendering quality and rendering speed are guaranteed.

Description

Image rendering processing method and device and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image rendering method and apparatus, and an electronic device.
Background
As players' aesthetic expectations for games rise, their demands on a game's artistic expression and rendering effect keep increasing. In particular, because the material of the clothes worn by game characters contributes more and more to a game's overall visual quality, players' expectations for the rendering of clothed game characters are rising accordingly.
The hairs (fur) on the cloth of a garment constitute a very large asset. At present, solving hair dynamics takes a great deal of time, which reduces the efficiency of real-time image rendering; moreover, such a large hair asset places relatively high demands on device hardware, which increases hardware cost.
Disclosure of Invention
In view of this, the present application provides an image rendering processing method and apparatus and an electronic device, mainly aiming to solve the technical problem that, in current real-time image rendering, the need to solve a large hair asset not only reduces the efficiency of real-time rendering but also raises the demands on device hardware.
According to an aspect of the present application, there is provided an image rendering processing method including:
acquiring a cloth resolving asset of a cloth and an unresolved hair asset of hair of a preset specification;
performing asset processing on the cloth resolving asset to obtain skeleton information;
creating a binding file of the hair asset and a skeleton according to the skeleton information;
configuring specified information of the cloth resolving asset, wherein the specified information comprises the hair asset and the binding file;
and responding to an image rendering instruction, performing rendering effect simulation on the hair asset, and integrating the simulated rendering effect of the hair asset and the image rendering effect of the cloth resolving asset according to the specified information.
Optionally, the performing asset processing on the cloth resolving asset to obtain skeleton information includes:
importing the cloth resolving assets through point cache to obtain point cache data;
converting the point cache data into a form of a mixed space;
and recording key frames frame by frame for the point cache data in the form of the mixed space, creating skeleton points of a model of the cloth resolving asset, and exporting the cloth resolving asset in a preset file format.
Optionally, the converting the point cache data into a form of a mixed space specifically includes:
copying the models of frame 0 and frame 1 in the point cache data, and performing mixed-space processing on the two models of frame 0 and frame 1;
copying the model of frame 2 in the point cache data, and performing mixed-space processing on the two models of frame 2 and frame 1;
and so on, processing each subsequent frame in turn until the last frame of the point cache data, so as to obtain the point cache data in mixed-space form.
Optionally, the key frames in the point cache data in mixed-space form correspond one to one to the key frames on the morph-target animation curve.
Optionally, the creating a binding file of the hair asset and the bone according to the bone information includes:
creating the binding file based on the skeleton point, wherein the binding file comprises the hair asset and a skeleton required to be bound with the hair asset;
and assigning the binding file to the binding (constraint) asset slot of the hair asset in the scene.
Optionally, the configuring specified information of the cloth resolving asset includes:
adding a component to the cloth resolving asset;
and using the component to assign the hair asset to the corresponding slot of the cloth.
Optionally, the performing rendering effect simulation on the hair asset and integrating the simulated rendering effect of the hair asset with the image rendering effect of the cloth resolving asset according to the specified information includes:
performing hair simulation on the hair asset by using a preset Unreal Engine;
and, according to the binding information of the hair asset and the skeleton in the binding file, making the simulated rendering effect of the hair asset move along with the image rendering effect of the cloth resolving asset.
Optionally, the preset specification hair is hair with a length smaller than a preset length threshold and/or a hardness larger than a preset hardness threshold.
According to another aspect of the present application, there is provided an image rendering processing apparatus including:
the acquisition module is used for acquiring cloth resolving assets of cloth and unresolved hair assets of preset specification hair;
the processing module is used for carrying out asset processing on the cloth resolving asset to obtain skeleton information;
a creating module for creating a binding file of the hair asset and the skeleton according to the skeleton information;
the configuration module is used for configuring specified information of the cloth resolving asset, and the specified information comprises the hair asset and the binding file;
and the rendering module is used for responding to an image rendering instruction, performing rendering effect simulation on the hair asset, and integrating the simulated rendering effect of the hair asset and the image rendering effect of the cloth resolving asset according to the specified information.
According to yet another aspect of the present application, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the image rendering processing method described above.
According to still another aspect of the present application, there is provided an electronic device including a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, the processor implementing the image rendering processing method when executing the computer program.
By means of the above technical solution, compared with the prior art, the image rendering processing method and apparatus and the electronic device provided by the present application submit the hair asset of preset-specification hair, which has not been dynamically solved, to the engine for simulation to obtain a simulated rendering effect. On the premise of guaranteeing a certain rendering quality, this saves a great deal of time, improves the efficiency of real-time image rendering, removes the dependence on hardware to a certain degree, and lowers the demands on device hardware. Specifically, asset processing is first performed on the cloth resolving asset to obtain skeleton information; then, according to the skeleton information, a binding file of the skeleton and the unresolved hair asset of preset-specification hair is created, and specified information of the cloth resolving asset is configured, the specified information comprising the hair asset and the binding file; in this way, when image rendering is required, rendering-effect simulation can be performed on the unresolved hair asset, and the simulated rendering effect of the hair asset can be integrated with the image rendering effect of the cloth resolving asset according to the specified information. By applying the technical solution of the present application, the externally solved cloth result and the real-time simulated hair result are integrated during real-time rendering, and the cloth and the hair are rendered in real time, so that a good dynamic effect is achieved while rendering quality and rendering speed are guaranteed.
The foregoing description is only an overview of the technical solution of the present application. In order that the technical means of the present application may be understood more clearly and implemented according to the contents of the specification, and in order to make the above and other objects, features and advantages of the present application more readily apparent, particular embodiments of the present application are set forth below.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart illustrating an image rendering processing method according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating another image rendering processing method according to an embodiment of the present disclosure;
FIG. 3 is a flow chart illustrating an example application provided by an embodiment of the present application;
fig. 4 is a schematic diagram illustrating a model of a mink pile sweater according to an embodiment of the present application;
fig. 5 shows a schematic structural diagram of an image rendering processing apparatus according to an embodiment of the present application.
Detailed Description
The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
In order to solve the technical problem that, in current real-time image rendering, the need to solve a large hair asset reduces the efficiency of real-time rendering and raises the demands on device hardware, the present embodiment provides an image rendering processing method. As shown in fig. 1, the method includes:
Step 101, acquiring a cloth resolving asset of the cloth and an unresolved hair asset of hair of a preset specification.
The preset specification of the hair can be set according to actual requirements. For this embodiment, the preset specification is used to identify hair whose physical behaviour is not very complex, for which, weighing rendering effect and dynamics against the time invested, the hair can be simulated entirely by the engine, thereby saving a great deal of time while still achieving a good effect.
In this embodiment, the required hair asset and cloth resolving asset are produced in advance; the hair asset does not need dynamic solving and is a static asset, while the cloth is a solved asset. For example, the required hair asset and cloth resolving asset are produced in Digital Content Creation (DCC) software, and the cloth is solved in the external DCC software. The consideration here is that the solver software implements a more faithful physical system: the computation is heavy and time-consuming, but the resulting effect is relatively realistic. Solving the cloth directly in Unreal Engine gives only an approximate simulation, so the final cloth dynamics would differ greatly from the DCC solve, and the DCC solving time is entirely acceptable when quality is taken into account.
Step 102, performing asset processing on the cloth resolving asset to obtain skeleton information.
The skeleton information may include the skeleton points created for the entire cloth resolving asset. For this embodiment, the hair asset needs to be attached to the skeleton so that, in real-time image rendering, the hair moves along with the cloth and a good dynamic effect is achieved while rendering quality and rendering speed are guaranteed; therefore, asset processing needs to be performed on the cloth resolving asset to obtain the corresponding skeleton information.
Step 103, creating a binding file of the hair asset and the skeleton according to the skeleton information.
The binding file may contain the unresolved hair asset of preset-specification hair and the skeleton to which the hair asset needs to be bound.
Step 104, configuring specified information of the cloth resolving asset.
The specified information comprises the unresolved hair asset of preset-specification hair and the binding file; that is, the hair asset and the binding file are specified on the cloth resolving asset.
Step 105, in response to the image rendering instruction, performing rendering-effect simulation on the hair asset, and integrating the simulated rendering effect of the hair asset with the image rendering effect of the cloth resolving asset according to the specified information of the cloth resolving asset.
In this embodiment, the hair asset can be subjected to rendering-effect simulation by Unreal Engine to obtain the simulated rendering effect; on the premise of guaranteeing a certain rendering quality, this saves a great deal of time, improves the efficiency of real-time image rendering, removes the dependence on hardware to a certain degree, and lowers the demands on device hardware. The simulated rendering effect of the hair asset and the image rendering effect of the cloth resolving asset are then integrated according to the specified information of the cloth resolving asset, so that, provided the hair has its binding file in the scene, the hair can move along with the cloth.
At present, both cloth solving and hair solving are brought into a real-time rendering engine by means of point caches. However, point caches are relatively difficult to control, the asset volume is large, loading such assets places certain demands on hardware, and the hair cannot be solved dynamically in real time along with the point cache. Compared with the prior art, the image rendering processing method provided by this embodiment hands the hair asset of preset-specification hair, which has not been dynamically solved, to the engine for simulation, thereby obtaining offline-rendering-level dynamic simulation in real-time rendering. The externally solved cloth result and the real-time simulated hair result are integrated during real-time rendering, and the cloth and the hair are rendered in real time, so that a good dynamic effect is achieved while rendering quality and rendering speed are guaranteed.
Further, as a refinement and an extension of the specific implementation of the above embodiment, in order to fully illustrate the implementation of the embodiment, another image rendering processing method is provided, as shown in fig. 2, and the method includes:
step 201, acquiring a cloth resolving asset of the cloth and a hair asset with a preset specification and unresolved hair.
Optionally, the preset specification hair is hair with a length smaller than a preset length threshold and/or a hardness larger than a preset hardness threshold.
The preset length threshold and the preset hardness threshold can be set in advance according to actual requirements and are used to identify hair whose physics is not very complex, such as short hair and stiff hair. Compared with the conventional approach, in which dynamically solving such hair consumes a great deal of time and the huge asset volume places relatively high demands on hardware, this embodiment can have the hair asset simulated by Unreal Engine (UE), thereby achieving a good dynamic effect while guaranteeing rendering quality and rendering speed.
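Purely as an illustration of this criterion (the threshold values and field names below are hypothetical, not values given by the patent), the check can be expressed as a simple predicate:

```python
# Hypothetical sketch of the "preset specification" check described above; the
# threshold values and the HairAssetInfo fields are illustrative assumptions,
# not names or numbers defined by the patent.
from dataclasses import dataclass

PRESET_LENGTH_THRESHOLD = 3.0    # e.g. centimetres (assumed value)
PRESET_HARDNESS_THRESHOLD = 0.7  # normalised stiffness (assumed value)

@dataclass
class HairAssetInfo:
    length: float    # strand length
    hardness: float  # strand stiffness

def fits_preset_specification(hair: HairAssetInfo) -> bool:
    """Short and/or stiff hair is physically simple enough to be simulated
    in-engine instead of being solved offline in DCC software."""
    return (hair.length < PRESET_LENGTH_THRESHOLD
            or hair.hardness > PRESET_HARDNESS_THRESHOLD)
```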
Step 202, importing the cloth resolving asset by way of a point cache to obtain point cache data.
In order to obtain accurate skeleton information for the cloth resolving asset, this embodiment can perform the asset processing on the cloth resolving asset by means of external three-dimensional modelling and animation software. For example, the cloth resolving asset is imported into Maya by way of a point cache (.abc).
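As an illustration only, this import step could be scripted in Maya's Python API roughly as follows; the file path is a placeholder, and the availability of the bundled Alembic import plug-in is an assumption:

```python
# Minimal sketch: bring an externally solved cloth result (Alembic point cache)
# into Maya. The path is a placeholder; assumes Maya's bundled Alembic import
# plug-in (AbcImport) is available.
import maya.cmds as cmds

cmds.loadPlugin('AbcImport', quiet=True)        # Alembic import plug-in shipped with Maya
cloth_cache = '/path/to/cloth_solution.abc'     # placeholder path to the solved cloth cache
cmds.AbcImport(cloth_cache, mode='import')      # imports the cached cloth mesh into the scene
```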
Step 203, converting the point cache data into a form of a mixed space.
Optionally, step 203 may specifically include: copying the models of frame 0 and frame 1 in the point cache data of the cloth resolving asset, and performing mixed-space (blend space) processing, i.e. blend deformation processing, on the two models of frame 0 and frame 1; then copying the model of frame 2 in the point cache data and performing mixed-space processing on the two models of frame 2 and frame 1; and so on, until the last frame of the point cache data has been processed, so as to obtain the point cache data in mixed-space form.
For example, the models of frame 0 and frame 1 of the point cache are copied (there are two models at this point) and a BlendSpace is made from the two; then the model of frame 2 of the point cache is copied and a BlendSpace is made from it and the model of frame 1; this continues in turn until the last frame, giving the complete BlendSpace. The process can be automated by a plug-in, for example along the lines of the sketch below.
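The following Maya Python sketch shows one way such a plug-in could automate the per-frame duplication. The mesh name and frame range are assumptions, and the sketch uses Maya's blendShape deformer to realize the blend-deformation step; for brevity it collects every per-frame snapshot as a target of a single blend-shape node rather than chaining pairwise blends as described above.

```python
# Sketch of the frame-by-frame "mixed space" conversion. 'clothMesh' and the
# frame range are assumptions; a real plug-in may chain pairwise blends as the
# text describes, whereas this simplified sketch gathers all per-frame
# snapshots as targets of one blend-shape node.
import maya.cmds as cmds

base = 'clothMesh'      # mesh driven by the imported point cache (assumed name)
start, end = 0, 100     # cached frame range (assumed)

targets = []
for f in range(start, end + 1):
    cmds.currentTime(f)                                        # step the cache to frame f
    dup = cmds.duplicate(base, name=f'{base}_f{f:04d}')[0]     # freeze that frame's shape
    targets.append(dup)

# One blend-shape node whose targets are the per-frame snapshots of the cloth.
blend_node = cmds.blendShape(*(targets + [base]), name='clothFrameBlend')[0]
```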
Step 204, recording key frames frame by frame for the point cache data in mixed-space form, creating the skeleton points of the model of the cloth resolving asset, and exporting the cloth resolving asset in a preset file format.
Every frame of the point cache data in mixed-space form is a key frame, and the preset file format may be the FBX format. For example, after the cloth resolving asset has been imported into Maya by way of the point cache and the plug-in has completed the blend deformation and keyframed the blend-space weights, a root skeleton point is inserted directly and the whole model is bound to it, so that a skeleton point is created for the whole asset; finally the cloth resolving asset is exported in .fbx format, as sketched below.
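Continuing the sketch from step 203 (the names base, start, end and blend_node carry over, and the output path is a placeholder), keyframing the blend weights, inserting the root joint and exporting to FBX might look like this:

```python
# Continuation of the previous sketch: key each blend-shape weight so exactly
# one per-frame target is active on its frame, insert a root joint, bind the
# cloth mesh to it, and export the result as FBX. Paths and names are placeholders.
import maya.cmds as cmds

# Key the weights frame by frame: weight i is 1 on frame i and 0 on its
# neighbours, so each animation frame shows the corresponding cached shape.
for i, f in enumerate(range(start, end + 1)):
    attr = f'{blend_node}.weight[{i}]'
    cmds.setKeyframe(attr, time=f, value=1.0)
    if f > start:
        cmds.setKeyframe(attr, time=f - 1, value=0.0)
    if f < end:
        cmds.setKeyframe(attr, time=f + 1, value=0.0)

# Single root joint; binding the whole model to it gives the engine a (trivial)
# skeleton that the hair binding can later attach to.
cmds.select(clear=True)
root_joint = cmds.joint(name='cloth_root', position=(0.0, 0.0, 0.0))
cmds.skinCluster(root_joint, base, toSelectedBones=True)

# Export the processed cloth resolving asset as FBX.
cmds.loadPlugin('fbxmaya', quiet=True)
cmds.select(base, root_joint)
cmds.file('/path/to/cloth_solution.fbx', force=True, exportSelected=True,
          type='FBX export')
```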
In this embodiment, the purpose of recording key frames frame by frame for the point cache data in mixed-space form is to determine the model used by each frame of the animation; in addition, the key frames can be made to correspond to the morph-target (MorphTarget) curve in Unreal Engine, so that the effect is the same as with the point cache. Therefore, optionally, the key frames in the point cache data in mixed-space form correspond one to one to the key frames on the morph-target animation curve. For example, the key frames on the MorphTarget animation curve correspond to the key frames of the blend space in Maya, and a one-to-one correspondence is required so as to ensure that the converted point cache data, once imported from Maya into Unreal Engine, reproduces the original point cache effect.
Step 205, create a binding file of hair assets and bones based on the skeletal points.
The binding file contains the dynamically unresolved hair asset as well as the skeleton to which the hair asset needs to be bound. For example, since the hair needs to follow the movement of the cloth, the hair must be bound to bones; the binding file therefore records the hair together with the bones to which it is bound.
Step 206, assigning the binding file to the binding (constraint) asset slot of the hair asset in the scene.
For example, a hair (groom) asset is selected in Unreal Engine, a binding file is created, the hair and the bones to be bound are selected in the binding file, and the binding file is assigned to the Binding Asset slot of the hair in the scene. Since the hair was authored against the model position in Maya, the position of the hair on the model is the same as it was in Maya. A scripted sketch of this step is given below.
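For illustration, the binding step might be scripted through the Unreal editor's Python API as below. Every class, property and asset path here (GroomBindingAsset, GroomBindingFactory, groom, target_skeletal_mesh, the /Game/... paths) is an assumption tied to the Groom/HairStrands plug-in of recent engine versions and is not confirmed by the patent; treat it as a sketch rather than the engine's definitive API.

```python
# Hedged sketch: create a groom binding asset in the Unreal editor and point it
# at the hair (groom) asset and the skeletal mesh exported from the cloth FBX.
# Class names, property names and asset paths are assumptions that depend on the
# engine version and the project's content layout.
import unreal

asset_tools = unreal.AssetToolsHelpers.get_asset_tools()

groom_asset = unreal.EditorAssetLibrary.load_asset('/Game/Hair/MinkFur')            # placeholder path
cloth_skeletal_mesh = unreal.EditorAssetLibrary.load_asset('/Game/Cloth/Sweater')   # placeholder path

binding = asset_tools.create_asset(
    asset_name='MinkFur_Binding',
    package_path='/Game/Hair',
    asset_class=unreal.GroomBindingAsset,      # assumed class exposed by the Groom plug-in
    factory=unreal.GroomBindingFactory())      # assumed factory exposed by the Groom plug-in

binding.set_editor_property('groom', groom_asset)                          # assumed property name
binding.set_editor_property('target_skeletal_mesh', cloth_skeletal_mesh)   # assumed property name
unreal.EditorAssetLibrary.save_loaded_asset(binding)
```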
Step 207, configuring specified information of the cloth resolving asset.
Optionally, step 207 may specifically include: adding a component to the cloth resolving asset, and then using the component to assign the hair asset to the corresponding slot of the cloth. For example, a component is added directly under the cloth, and the hair is then placed directly onto the corresponding slot, for instance as in the sketch below.
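At scene level, filling the slots could be sketched as follows, reusing groom_asset and binding from the previous sketch; the actor lookup and the groom_asset / binding_asset property names are again assumptions rather than confirmed API:

```python
# Hedged sketch: locate the actor that carries the cloth skeletal mesh (here
# simply the first selected actor), find its groom component, and fill the
# groom / binding slots. Property names and the lookup are assumptions.
import unreal

cloth_actor = unreal.EditorLevelLibrary.get_selected_level_actors()[0]       # placeholder actor lookup
groom_component = cloth_actor.get_component_by_class(unreal.GroomComponent)  # component added under the cloth

groom_component.set_editor_property('groom_asset', groom_asset)   # assumed property name
groom_component.set_editor_property('binding_asset', binding)     # assumed property name
```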
Step 208, in response to the image rendering instruction, performing hair simulation on the unresolved hair asset by using a preset Unreal Engine.
Step 209, according to the binding information of the hair asset and the skeleton in the binding file, making the simulated rendering effect of the hair asset move along with the image rendering effect of the cloth resolving asset.
For example, the hair is simulated in real time in Unreal Engine: solving the hair in DCC software would be computationally huge and time-consuming, and the current engine version does not support bringing hair solved in DCC into the engine through a point cache (.abc), so the hair is instead simulated and computed inside Unreal Engine. The engine performs the hair simulation with its built-in physics, specifically as follows: "Enable Simulation" is ticked in the "Physics" options of the hair asset, after which the simulation mode can be chosen under "Niagara Solver"; in the scene, provided the hair has its binding file, the hair will follow the cloth as it moves.
This embodiment integrates the externally solved cloth result and the real-time simulated hair result during real-time rendering; a specific example of the integration flow is shown in fig. 3. The hair asset (not dynamically solved) produced in the DCC tool and the cloth resolving asset produced in the DCC tool are both imported into Unreal Engine and integrated there, so that the hair moves along with the cloth according to the binding file.
In order to illustrate the specific implementation process of the above embodiments, the following application examples are given, but not limited thereto:
For example, fig. 4 shows a model of a mink-fur sweater worn by a game character as the character moves. At present, to achieve a dynamic effect in image rendering, both the cloth solve and the hair solve are brought into the real-time rendering engine as point caches; however, point caches are hard to control, the mink-fur hair asset is huge, loading such assets places certain demands on hardware, and the mink fur cannot be solved dynamically in real time along with the point cache.
To solve the above technical problem, the image rendering processing method provided by this embodiment can be applied: since the physics of mink fur, being short and stiff hair, is not very complex, it can be simulated by Unreal Engine, so that a good dynamic effect is achieved while rendering quality and rendering speed are guaranteed.
Specifically, a pre-made mink-fur hair asset and a sweater-cloth resolving asset are obtained; the hair asset does not need dynamic solving and is a static asset, while the sweater cloth is a solved asset. The cloth resolving asset is imported into Maya through a point cache to obtain point cache data, which is converted into mixed-space form. Key frames are then recorded frame by frame for the point cache data in mixed-space form, and the skeleton points of the model of the cloth resolving asset are created, so that the sweater-cloth resolving asset can be exported in FBX format. Recording key frames frame by frame serves to determine the model used by each frame of the animation, and in addition the key frames can be made to correspond to the morph-target curve in Unreal Engine, ensuring the effect is the same as with the point cache.
After the processing of the sweater-cloth resolving asset is finished, a binding file of the mink-fur hair asset and the skeleton is created based on the skeleton points, the binding file is assigned to the binding asset slot of the mink-fur hair asset in the scene, and the hair asset and the binding file are specified on the sweater-cloth resolving asset. Finally, at image rendering time, Unreal Engine performs the mink-fur simulation on the unresolved hair asset and, according to the binding information of the hair asset and the skeleton in the binding file, makes the simulated rendering effect of the mink fur move along with the image rendering effect of the sweater-cloth resolving asset, so that the mink fur follows the sweater cloth (the mink-fur sweater moves consistently with the game character) and a good dynamic rendering effect of the game image is achieved.
Compared with the prior art, the image rendering processing method provided by this embodiment hands the hair asset of preset-specification hair, which has not been dynamically solved, to the engine for simulation, thereby obtaining offline-rendering-level dynamic simulation in real-time rendering. The externally solved cloth result and the real-time simulated hair result are integrated during real-time rendering, and the cloth and the hair are rendered in real time, so that a good dynamic effect is achieved while rendering quality and rendering speed are guaranteed.
Further, as a specific implementation of the method shown in fig. 1 to fig. 2, the present embodiment provides an image rendering processing apparatus, as shown in fig. 5. The apparatus includes an acquisition module 31, a processing module 32, a creation module 33, a configuration module 34 and a rendering module 35.
The acquisition module 31 is used for acquiring a cloth resolving asset of a cloth and a hair asset with a preset specification and unresolved hair;
the processing module 32 is used for performing asset processing on the cloth resolving asset to obtain skeleton information;
a creating module 33, configured to create a binding file of the hair asset and the bone according to the bone information;
a configuration module 34 configured to configure specified information of the cloth solution asset, the specified information including the hair asset and the binding file;
and the rendering module 35 is configured to perform rendering effect simulation on the hair asset in response to an image rendering instruction, and integrate the simulated rendering effect of the hair asset and the image rendering effect of the cloth calculation asset according to the specified information.
In a specific application scenario, the processing module 32 is specifically configured to import the cloth resolving asset through a point cache to obtain point cache data; convert the point cache data into mixed-space form; and record key frames frame by frame for the point cache data in mixed-space form, create the skeleton points of the model of the cloth resolving asset, and export the cloth resolving asset in a preset file format.
In a specific application scenario, the processing module 32 is further configured to copy the models of frame 0 and frame 1 in the point cache data and perform mixed-space processing on the two models of frame 0 and frame 1; copy the model of frame 2 in the point cache data and perform mixed-space processing on the two models of frame 2 and frame 1; and so on, until the last frame of the point cache data has been processed, so as to obtain the point cache data in mixed-space form.
In a specific application scenario, the key frames in the point cache data in mixed-space form correspond one to one to the key frames on the morph-target animation curve.
In a specific application scenario, the creating module 33 is specifically configured to create the binding file based on the bone point, where the binding file includes the hair asset and a bone to be bound to the hair asset; and giving the binding file to a constraint asset slot of the hair asset in the scene.
In a specific application scenario, the configuration module 34 is specifically configured to add a component to the cloth resolving asset, and to use the component to assign the hair asset to the corresponding slot of the cloth.
In a specific application scenario, the rendering module 35 is specifically configured to perform hair simulation on the hair asset by using a preset Unreal Engine, and, according to the binding information of the hair asset and the skeleton in the binding file, to make the simulated rendering effect of the hair asset move along with the image rendering effect of the cloth resolving asset.
In a specific application scenario, the preset specification hair is hair with a length smaller than a preset length threshold and/or a hardness larger than a preset hardness threshold.
It should be noted that other corresponding descriptions of the functional units related to the image rendering processing apparatus provided in this embodiment may refer to the corresponding descriptions in fig. 1 to fig. 2, and are not repeated herein.
Based on the method shown in fig. 1 to 2, correspondingly, the present embodiment further provides a storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the image rendering processing method shown in fig. 1 to 2.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method of the embodiments of the present application.
Based on the method shown in fig. 1 to fig. 2 and the virtual device embodiment shown in fig. 5, in order to achieve the above object, an embodiment of the present application further provides an electronic device, which may be a personal computer, a notebook computer, a smart phone, a server, or other network devices, and the device includes a storage medium and a processor; a storage medium for storing a computer program; a processor for executing a computer program to implement the image rendering processing method as shown in fig. 1 to 2.
Optionally, the entity device may further include a user interface, a network interface, a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WI-FI module, and the like. The user interface may include a Display screen (Display), an input unit such as a keypad (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), etc.
It will be understood by those skilled in the art that the above-described physical device structure provided in the present embodiment is not limited to the physical device, and may include more or less components, or combine some components, or arrange different components.
The storage medium may further include an operating system and a network communication module. The operating system is a program that manages the hardware and software resources of the above-described physical devices, and supports the operation of the information processing program as well as other software and/or programs. The network communication module is used for realizing communication among components in the storage medium and communication with other hardware and software in the information processing entity device.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus a necessary general-purpose hardware platform, or by hardware. By applying the solution of this embodiment, the hair asset of preset-specification hair, which has not been dynamically solved, is handed to the engine for simulation, thereby obtaining offline-rendering-level dynamic simulation in real-time rendering. The externally solved cloth result and the real-time simulated hair result are integrated during real-time rendering, and the cloth and the hair are rendered in real time, so that a good dynamic effect is achieved while rendering quality and rendering speed are guaranteed.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application. Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above application serial numbers are for description purposes only and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure is only a few specific implementation scenarios of the present application, but the present application is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present application.

Claims (11)

1. An image rendering processing method, comprising:
acquiring a cloth resolving asset of a cloth and an unresolved hair asset of hair of a preset specification;
performing asset processing on the cloth resolving asset to obtain skeleton information, wherein key frames are recorded frame by frame for point cache data in mixed-space form and skeleton points of a model of the cloth resolving asset are created to obtain the skeleton information, the point cache data in mixed-space form being obtained by importing the cloth resolving asset through a point cache and performing mixed-space conversion on the imported data;
creating a binding file of the hair asset and bone according to the bone information;
configuring specified information of the cloth resolving asset, wherein the specified information comprises the hair asset and the binding file;
and responding to an image rendering instruction, performing rendering effect simulation on the hair asset, and integrating the simulated rendering effect of the hair asset and the image rendering effect of the cloth resolving asset according to the specified information.
2. The method of claim 1, wherein the asset processing of the cloth solution asset to obtain skeletal information comprises:
importing the cloth resolving assets through point cache to obtain point cache data;
converting the point cache data into a form of a mixed space;
and recording key frames frame by frame for the point cache data in the form of the mixed space, creating skeleton points of a model of the cloth resolving asset, and exporting the cloth resolving asset in a preset file format.
3. The method according to claim 2, wherein the converting the point cache data into a form of a mixture space specifically comprises:
copying the models of the 0 th frame and the 1 st frame in the point cache data, and performing mixed space processing on the two models of the 0 th frame and the 1 st frame;
copying a model of a 2 nd frame in the point cache data, and performing mixed space processing on the two models of the 2 nd frame and the 1 st frame;
and so on, processing each subsequent frame in turn until the last frame of the point cache data, so as to obtain the point cache data in mixed-space form.
4. The method according to claim 2, wherein the key frames in the point buffer data in the mixed space form correspond to the key frames on the morphing target animation curve in a one-to-one manner.
5. The method of claim 2, wherein said creating a binding file of said hair asset and bone from said bone information comprises:
creating the binding file based on the bone point, wherein the binding file comprises the hair asset and a bone to which the hair asset is bound;
and giving the binding file to a constraint asset slot of the hair asset in the scene.
6. The method of claim 5, wherein the configuring specified information of the cloth resolving asset comprises:
adding a component to the cloth resolving asset;
and using the component to assign the hair asset to the corresponding slot of the cloth.
7. The method according to claim 5, wherein the performing rendering effect simulation on the hair asset and integrating the simulated rendering effect of the hair asset with the image rendering effect of the cloth resolving asset according to the specified information comprises:
performing hair simulation on the hair asset by using a preset Unreal Engine;
and, according to the binding information of the hair asset and the skeleton in the binding file, making the simulated rendering effect of the hair asset move along with the image rendering effect of the cloth resolving asset.
8. The method according to any one of claims 1 to 7, wherein the preset specification hair is hair having a length less than a preset length threshold and/or a hardness greater than a preset hardness threshold.
9. An image rendering processing apparatus characterized by comprising:
the acquisition module is used for acquiring cloth resolving assets of cloth and unresolved hair assets of preset specification hair;
the processing module is used for performing asset processing on the cloth resolving asset to obtain skeleton information, wherein key frames are recorded frame by frame for point cache data in mixed-space form and skeleton points of a model of the cloth resolving asset are created to obtain the skeleton information, the point cache data in mixed-space form being obtained by importing the cloth resolving asset through a point cache and performing mixed-space conversion on the imported data;
a creating module for creating a binding file of the hair asset and the skeleton according to the skeleton information;
the configuration module is used for configuring specified information of the cloth resolving asset, and the specified information comprises the hair asset and the binding file;
and the rendering module is used for responding to an image rendering instruction, performing rendering effect simulation on the hair asset, and integrating the simulated rendering effect of the hair asset and the image rendering effect of the cloth resolving asset according to the specified information.
10. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any of claims 1 to 8.
11. An electronic device comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, wherein the processor implements the method of any one of claims 1 to 8 when executing the computer program.
CN202111350504.2A 2021-11-15 2021-11-15 Image rendering processing method and device and electronic equipment Active CN114053696B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211572548.4A CN116115995A (en) 2021-11-15 2021-11-15 Image rendering processing method and device and electronic equipment
CN202111350504.2A CN114053696B (en) 2021-11-15 2021-11-15 Image rendering processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111350504.2A CN114053696B (en) 2021-11-15 2021-11-15 Image rendering processing method and device and electronic equipment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202211572548.4A Division CN116115995A (en) 2021-11-15 2021-11-15 Image rendering processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN114053696A (en) 2022-02-18
CN114053696B (en) 2023-01-10

Family

ID=80272314

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111350504.2A Active CN114053696B (en) 2021-11-15 2021-11-15 Image rendering processing method and device and electronic equipment
CN202211572548.4A Pending CN116115995A (en) 2021-11-15 2021-11-15 Image rendering processing method and device and electronic equipment

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202211572548.4A Pending CN116115995A (en) 2021-11-15 2021-11-15 Image rendering processing method and device and electronic equipment

Country Status (1)

Country Link
CN (2) CN114053696B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309981A (en) * 2023-04-06 2023-06-23 北京优酷科技有限公司 Animation processing method and computing device
CN117727303A (en) * 2024-02-08 2024-03-19 翌东寰球(深圳)数字科技有限公司 Audio and video generation method, device, equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8988422B1 (en) * 2010-12-17 2015-03-24 Disney Enterprises, Inc. System and method for augmenting hand animation with three-dimensional secondary motion
CN105617655A (en) * 2016-01-14 2016-06-01 网易(杭州)网络有限公司 Physical effect display method and device as well as game system
CN110264552A (en) * 2019-06-24 2019-09-20 网易(杭州)网络有限公司 It is a kind of to simulate pilomotor method, apparatus, electronic equipment and storage medium
CN111028320A (en) * 2019-12-11 2020-04-17 腾讯科技(深圳)有限公司 Cloth animation generation method and device and computer readable storage medium
CN111462313A (en) * 2020-04-02 2020-07-28 网易(杭州)网络有限公司 Implementation method, device and terminal of fluff effect
CN111773719A (en) * 2020-06-23 2020-10-16 完美世界(北京)软件科技发展有限公司 Rendering method and device of virtual object, storage medium and electronic device
CN112767522A (en) * 2020-11-27 2021-05-07 成都完美时空网络技术有限公司 Virtual object wind animation rendering method and device, storage medium and electronic device
CN112767521A (en) * 2021-01-27 2021-05-07 北京达佳互联信息技术有限公司 Special effect implementation method and device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200375749A1 (en) * 2019-06-03 2020-12-03 Michael J. Yaremchuk One-Stage CAD/CAM Facial Skeletal Rearrangement and Refinement

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8988422B1 (en) * 2010-12-17 2015-03-24 Disney Enterprises, Inc. System and method for augmenting hand animation with three-dimensional secondary motion
CN105617655A (en) * 2016-01-14 2016-06-01 网易(杭州)网络有限公司 Physical effect display method and device as well as game system
CN110264552A (en) * 2019-06-24 2019-09-20 网易(杭州)网络有限公司 It is a kind of to simulate pilomotor method, apparatus, electronic equipment and storage medium
CN111028320A (en) * 2019-12-11 2020-04-17 腾讯科技(深圳)有限公司 Cloth animation generation method and device and computer readable storage medium
CN111462313A (en) * 2020-04-02 2020-07-28 网易(杭州)网络有限公司 Implementation method, device and terminal of fluff effect
CN111773719A (en) * 2020-06-23 2020-10-16 完美世界(北京)软件科技发展有限公司 Rendering method and device of virtual object, storage medium and electronic device
CN112767522A (en) * 2020-11-27 2021-05-07 成都完美时空网络技术有限公司 Virtual object wind animation rendering method and device, storage medium and electronic device
CN112767521A (en) * 2021-01-27 2021-05-07 北京达佳互联信息技术有限公司 Special effect implementation method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Unity中的布料系统";_Jin_;《https://blog.csdn.net/qq_34307432/article/details/80009672》;20180419;全文 *
"虚幻学习4---制作实时逼真的毛发【笔记】";packdge_black;《packdge_black,https://blog.csdn.net/packdge_black/article/details/118518422》;20210707;全文 *

Also Published As

Publication number Publication date
CN114053696A (en) 2022-02-18
CN116115995A (en) 2023-05-16

Similar Documents

Publication Publication Date Title
US12017145B2 (en) Method and system of automatic animation generation
CN114053696B (en) Image rendering processing method and device and electronic equipment
WO2018095273A1 (en) Image synthesis method and device, and matching implementation method and device
CN110766776B (en) Method and device for generating expression animation
JP2020510262A (en) Expression animation generating method and apparatus, storage medium, and electronic device
CN109964255B (en) 3D printing using 3D video data
JP2024522287A (en) 3D human body reconstruction method, apparatus, device and storage medium
CN110689604A (en) Personalized face model display method, device, equipment and storage medium
CN109408001A (en) 3D printing method, apparatus, 3D printing equipment and the storage medium of multi-model
CN106447756A (en) Method and system for generating a user-customized computer-generated animation
CN110930484B (en) Animation configuration method and device, storage medium and electronic device
JP2017111719A (en) Video processing device, video processing method and video processing program
KR101845535B1 (en) Story-telling system for changing 3 dimension character into 3 dimension avatar
CN112843704B (en) Animation model processing method, device, equipment and storage medium
CN115115752A (en) Virtual garment deformation prediction method and device, storage medium and electronic equipment
CN116843809A (en) Virtual character processing method and device
CN109816744B (en) Neural network-based two-dimensional special effect picture generation method and device
CN111489426A (en) Expression generation method, device, equipment and storage medium
CN113209625B (en) Data processing method and device
CN113209626B (en) Game picture rendering method and device
CN114255312A (en) Processing method and device of vegetation image and electronic equipment
CN114596394A (en) Method, device, system and storage medium for generating bone animation
CN115239856A (en) Animation generation method and device for 3D virtual object, terminal device and medium
CN114219888A (en) Method and device for generating dynamic silhouette effect of three-dimensional character and storage medium
CN110827303B (en) Image editing method and device for virtual scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant