CN116485972A - Rendering method and device of model and electronic equipment


Info

Publication number
CN116485972A
CN116485972A
Authority
CN
China
Prior art keywords
model
baking
target
vertex
texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310345825.6A
Other languages
Chinese (zh)
Inventor
周宇明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Neteasy Brilliant Network Technology Co ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202310345825.6A
Publication of CN116485972A
Legal status: Pending

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The invention provides a model rendering method and apparatus and an electronic device. The method includes: determining a model position and a camera position in a virtual scene; baking a target model based on the model position, the camera position, a time parameter and a preset vertex animation map to obtain baking map data of the target model; setting a patch model at the model position and adjusting the orientation of the patch model based on the model position and the camera position; and rendering the baking map data onto the adjusted patch model to obtain the rendering effect of the target model in the virtual scene. With this method, the texture map of the target model required at the current viewing angle can be baked efficiently in real time, a dynamic target model can be rendered, effects such as the model's appearance and actions are enriched, production resources are saved, performance consumption is reduced, and the realism and flexibility of model rendering are improved.

Description

Rendering method and device of model and electronic equipment
Technical Field
The present invention relates to the field of model rendering technologies, and in particular, to a method and an apparatus for rendering a model, and an electronic device.
Background
In virtual scenes such as sporting events and concerts, there are typically a large number of spectators. If each spectator is represented by a full model animated with skeletal skinning, a large amount of art resources must be produced and the runtime performance cost is high, which makes this approach difficult to apply on resource-limited devices.
In the related art, a spectator model can be prefabricated; images of the model from multiple angles are captured by a virtual camera and baked into a texture map. While the virtual scene is running, a patch model is placed in the scene, the orientation of the patch model is adjusted according to the position of the virtual camera, and the texture with the appropriate shooting angle is selected from the texture map and rendered onto the patch model, so that a two-dimensional patch simulates the visual effect of a three-dimensional spectator. However, spectators rendered in this way are static, and the number of displayed angles, postures, appearances and other effects is limited, so the realism and flexibility of the rendered spectators are poor.
Disclosure of Invention
Accordingly, the present invention is directed to a model rendering method, a model rendering apparatus, and an electronic device, so as to render dynamic spectators with low art-production cost and low performance consumption, enrich the appearance and actions of the spectators, and improve the realism and flexibility of the rendered spectators.
In a first aspect, an embodiment of the present invention provides a model rendering method, the method including: determining a model position and a camera position in a virtual scene, where the model position is the rendering position of a target model in the virtual scene, and the camera position is the position, in the virtual scene, of the virtual camera that captures the virtual scene; baking the target model based on the model position, the camera position, a time parameter, and a preset vertex animation map to obtain baking map data of the target model, where the vertex animation map includes vertex offsets of the model vertices of the target model, the vertex offsets varying with the time parameter; setting a patch model at the model position and adjusting the orientation of the patch model based on the model position and the camera position; and rendering the baking map data onto the adjusted patch model to obtain the rendering effect of the target model in the virtual scene.
In a second aspect, an embodiment of the present invention further provides a rendering apparatus for a model, where the apparatus includes:
a first determining module, configured to determine a model position and a camera position in a virtual scene, where the model position is the rendering position of a target model in the virtual scene, and the camera position is the position, in the virtual scene, of the virtual camera that captures the virtual scene; a baking module, configured to bake the target model based on the model position, the camera position, a time parameter, and a preset vertex animation map to obtain baking map data of the target model, where the vertex animation map includes vertex offsets of the model vertices of the target model, the vertex offsets varying with the time parameter; an adjusting module, configured to set a patch model at the model position and adjust the orientation of the patch model based on the model position and the camera position; and a rendering module, configured to render the baking map data onto the adjusted patch model to obtain the rendering effect of the target model in the virtual scene.
In a third aspect, an embodiment of the present invention provides an electronic device including a processor and a memory, where the memory stores machine-executable instructions executable by the processor, and the processor executes the machine-executable instructions to implement the above model rendering method.
In a fourth aspect, embodiments of the present invention provide a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the above model rendering method.
The embodiment of the invention has the following beneficial effects:
the rendering method of the model comprises the steps of determining a model position and a camera position in a virtual scene; wherein, the model position is: rendering positions of the target models in the virtual scene; the camera positions are: shooting the position of a virtual camera of a virtual scene in the virtual scene; based on the model position, the camera position, the time parameter and the preset vertex animation mapping, baking the target model to obtain baking mapping data of the target model; wherein, the vertex animation map comprises: a vertex offset of a model vertex of the target model, the vertex offset varying with a time parameter; setting a patch model at a model position, and adjusting an orientation of the patch model based on the model position and the camera position; and rendering the baking map data onto the adjusted patch model to obtain the rendering effect of the target model in the virtual scene.
In the method, vertex animation mapping is preset, firstly, baking is conducted on a target model according to a model position and a camera position, then time parameters are combined, baking mapping data are obtained, then the orientation of a surface patch model located at the model position in a virtual scene is adjusted according to the model position and the camera position, and finally the baking mapping data are rendered to the adjusted surface patch model, so that the rendering effect of the target model in the virtual scene is obtained. By adopting the method, the texture map of the target model required under the current view angle can be baked in real time and efficiently, so that the dynamic target model is rendered on the surface patch model, the appearance effects of the model, actions and the like are enriched, the manufacturing resources are saved, the performance consumption is reduced, and the reality and the flexibility of model rendering are improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are some embodiments of the invention and that other drawings may be obtained from these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for rendering a model according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a hemispherical space according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a texture map, a model position and a camera position according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of object model information stored in a texture region according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating transformation of mapping data into a target texture region according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a preset dynamic texture map according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of sampling baking map data according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a rendering apparatus of a model according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
First, terms of art related to the embodiments of the present invention will be explained.
1. VAT
Vertex Animation Texture: a vertex animation map that stores, for each frame of an animation, the offset of every model vertex relative to its original position.
2. Imposter
A commonly used rendering technique in computer graphics that uses a 2D patch model to simulate a 3D object.
3. Texture2DArray
A texture data type commonly used in modern graphics APIs (Application Programming Interfaces); it consists of several two-dimensional texture maps (Texture2D), each of which is referred to as a slice.
Currently, in virtual scenes such as sporting events and concerts, in order to restore the scene atmosphere as realistically as possible, the appearance and actions of spectators need to be simulated. In the related art, the following three modes are mainly used to render spectators.
In the first mode, a patch model is placed in the virtual scene, a set of sequence-frame animations of spectators is produced and stored in a texture map, and while the virtual scene is running, different animation frames from the map are drawn directly onto the patch model according to the running time, so that spectators in virtual scenes such as sporting events or concerts can be simulated simply. With this approach, the art resources are easy to produce and the performance cost is low, but the overall effect is poor: the spectators are presented as two-dimensional patches with fixed positions and orientations and lack a sense of depth, and when the angle between the virtual camera and the normal vector of the patch model approaches a right angle, a very obvious visual flaw appears because the patch is revealed to be flat.
In the second mode, a static image technique is used: for a prefabricated spectator model, images of the model from multiple angles are captured in advance by a virtual camera and baked into a texture map; a patch model is then placed in the virtual scene, and while the scene is running, the orientation of the patch model is adjusted so that it always faces the virtual camera, and the texture with the appropriate shooting angle is selected from the texture map according to the relative position between the virtual camera and the center point of the patch model and rendered onto the patch model, so that a two-dimensional patch simulates the visual effect of a three-dimensional spectator. With this approach, the production cost of the art resources is moderate and the sense of depth is improved, but because the map is pre-baked, the data for each viewing angle is fixed, so the spectators are completely static while the virtual scene is running. Moreover, if spectators with different appearances are desired, the only option is to pre-bake different maps, so flexibility is poor.
In the third mode, physical spectators are used: real spectator models are placed directly in the virtual scene and animated with skeletal skinning. Although this approach gives a better effect than the previous two, it requires a large amount of art resources, the production cost is high, the performance cost is also high, and it is difficult to apply on resource-limited devices.
Based on the above, the model rendering method, apparatus and electronic device provided by the embodiments of the present invention can be applied to model rendering scenarios, for example, rendering spectator models in a sporting event.
For the sake of understanding the present embodiment, first, a detailed description will be given of a model rendering method disclosed in the present embodiment, as shown in fig. 1, where the method includes the following steps:
Step S102: determining a model position and a camera position in a virtual scene, where the model position is the rendering position of the target model in the virtual scene, and the camera position is the position, in the virtual scene, of the virtual camera that captures the virtual scene;
in virtual scenes, a variety of scene models are typically included, which can provide a user with an important visual or interactive experience. In actual implementation, virtual characters, scene props, and the like can be built by creating a three-dimensional model in a space where the virtual scene is located. At this time, the three-dimensional model is directly rendered and displayed in a space where the virtual scene is located.
The object model may be a character model in a virtual scene, or another object model capable of changing parameters such as shape, posture, and position. The target model may be located in a three-dimensional space of the virtual scene, or may be located in another three-dimensional space other than the virtual scene. In one example, a target model is built in a three-dimensional space outside the virtual scene, then the target model is baked in the three-dimensional space outside the virtual scene to obtain baked map data, and the baked map data is rendered to a specific model in the virtual scene to obtain a display effect of the target model in the virtual scene. In addition, a virtual camera is also arranged in the virtual scene, and the virtual camera can be used for shooting the scene in the virtual scene to generate a scene picture; the scene picture may include a display effect of the object model.
Before rendering the object model, the model position and camera position first need to be determined in the virtual scene. The model position is the rendering position of the target model in the virtual scene, wherein the target model can be a spectator model in the virtual scene or other virtual character models. It should be noted that if the target model is located in a three-dimensional space outside the virtual scene, the model position is the display position of the target model in the virtual scene, and the target model itself is not in the virtual scene; if the target model is located in the three-dimensional space of the virtual scene, the target model may or may not be located at the model position; the target model is not directly rendered and displayed, but baking map data is obtained by baking the target model, and the baking map data is rendered to a specific model in the virtual scene, wherein the specific model is positioned at the position of the model, so that the display effect of the target model is realized. The camera position is the position of the virtual camera shooting the virtual scene in the virtual scene.
Step S104: baking the target model based on the model position, the camera position, a time parameter and a preset vertex animation map to obtain baking map data of the target model, where the vertex animation map includes vertex offsets of the model vertices of the target model, the vertex offsets varying with the time parameter;
the above time parameter may include a running time of the virtual scene, and the preset vertex animation map may include a multi-frame animation, in which offset information of all vertices of the target model with respect to an original position of each frame in the animation frame, that is, a vertex offset of a model vertex of the target model, may be stored. It should be noted that the vertex offset may vary with time parameters, i.e., the vertex offset may be different for different time parameters.
Specifically, the target model can be baked according to the model position, the camera position, the time parameter and the preset vertex animation map to obtain the baking map data of the target model. The baking process requires a baking camera, which may be the virtual camera in the virtual scene described above or another virtual camera. For example, when the target model is located in the virtual scene, the baking camera may be the virtual camera in the virtual scene or another camera in the scene; when the target model is located in a three-dimensional space outside the virtual scene, the baking camera may be a virtual camera in that three-dimensional space, i.e., the baking camera is in the same three-dimensional space as the target model. From the aforementioned model position and camera position, the baking position of the baking camera can be determined.
Different time parameters correspond to different animation frames in a preset vertex animation map, and the offset of the model vertex of the target model in the different animation frames is also different, so that when the method is actually realized, offset information of the model vertex of the target model can be acquired firstly based on the time parameters and the preset dynamic texture map, offset processing is carried out, and then baking processing is carried out on the target model according to the baking position, so that baking map data can be obtained.
Step S106, setting a patch model at the model position, and adjusting the orientation of the patch model based on the model position and the camera position;
the above-mentioned surface patch model can be the quadrilateral surface patch model of two-dimentional, can be in the model position when actually realizing, namely the rendering position of target model in virtual scene, set up the surface patch model, then according to model position and camera position adjustment surface patch model's orientation, make this surface patch model just right virtual camera all the time.
And step S108, rendering the baking map data onto the adjusted patch model to obtain the rendering effect of the target model in the virtual scene.
After the orientation of the patch model is adjusted, required relevant data such as normal vector, depth and the like can be sampled from baking mapping data and rendered onto the adjusted patch model, so that the rendering effect of the target model in the virtual scene can be obtained, and the rendering effect has more stereoscopic impression and dynamic impression.
The model rendering method includes: determining a model position and a camera position in a virtual scene, where the model position is the rendering position of the target model in the virtual scene, and the camera position is the position, in the virtual scene, of the virtual camera that captures the virtual scene; baking the target model based on the model position, the camera position, a time parameter, and a preset vertex animation map to obtain baking map data of the target model, where the vertex animation map includes vertex offsets of the model vertices of the target model, the vertex offsets varying with the time parameter; setting a patch model at the model position and adjusting the orientation of the patch model based on the model position and the camera position; and rendering the baking map data onto the adjusted patch model to obtain the rendering effect of the target model in the virtual scene.
In this method, a vertex animation map is preset; the target model is first baked according to the model position, the camera position and the time parameter to obtain baking map data; the orientation of the patch model located at the model position in the virtual scene is then adjusted according to the model position and the camera position; and finally the baking map data is rendered onto the adjusted patch model to obtain the rendering effect of the target model in the virtual scene. With this method, the texture map of the target model required at the current viewing angle can be baked efficiently in real time, so that a dynamic target model is rendered on the patch model, effects such as the model's appearance and actions are enriched, production resources are saved, performance consumption is reduced, and the realism and flexibility of model rendering are improved.
In a specific implementation, when baking the target model, the baking position needs to be determined first. Specifically, the baking position of the target model is determined based on the model position and the camera position, where the baking position is the position at which the virtual camera is located when the target model is baked; the target model is then baked based on the baking position, the time parameter, and the preset vertex animation map to obtain the baking map data of the target model.
Baking is a technique of rendering model information into a map and then applying the baked map back into the virtual scene. The baking camera described above can be used to bake textures or scenes in game engines such as Unity or Unreal; this baking camera is located in the three-dimensional space in which the target model is located, i.e., the baking camera and the target model are in the same space.
In actual implementation, according to the model position and the camera position, a relative position relation between the virtual camera and a rendering position of the target model in the virtual scene can be determined, according to the relative position relation, information such as a baking angle, a baking direction and the like of the target model can be obtained, further, according to the information, a baking position of the baking camera relative to the target model can be determined, and then the baking camera can perform baking treatment on the target model at the baking position.
The following examples provide a specific implementation of determining the baking position of a target model.
Determining a baking angle of the target model based on the model position and the camera position; determining a baking direction of the target model based on the baking angle; a baking position of the target model is determined based on the baking direction.
In actual implementation, the baking position of the baking camera in the model space of the target model may be determined according to the baking direction. Specifically, the baking angle of the target model is determined according to the model position and the camera position, namely, the baking angle can be determined according to the rendering position of the target model in the virtual scene and the position of the virtual camera in the virtual scene. Since the patch model is provided at the model position, the baking angle can be determined from the position of the patch model and the position of the virtual camera in the virtual scene.
The roasting direction of the roasting camera relative to the target model may then be determined from the roasting angle. Specifically, the direction vector in the space of the virtual camera relative to the patch model may be mapped to a two-dimensional space in which a preset texture map is located, where the preset texture map may include a plurality of texture regions, each texture region may correspond to a different baking angle, and a texture map region corresponding to the baking angle is determined in the preset texture map, and further, the baking direction of the baking camera relative to the target model is determined according to the texture map region. Finally, a baking position of the baking camera in a model space of the target model may be determined from the baking direction.
In a specific embodiment, a difference vector between the camera position and the model position is calculated; and carrying out transformation processing on the difference vector through a rotation transformation matrix corresponding to the model position to obtain the baking angle of the target model.
Specifically, the difference vector between the camera position and the model position may be calculated first. In actual implementation, the rotation of the patch model placed at the model position, i.e., its orientation when it is placed in the virtual scene, needs to be taken into account, so the difference vector is transformed by the rotation transformation matrix corresponding to the model position to obtain the baking angle of the target model. The baking angle can be calculated as vi = Ri^(-1) × (c0 - pi), where vi is the baking angle of the target model, Ri is the rotation transformation matrix corresponding to the model position, c0 is the camera position, and pi is the model position.
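As an illustrative sketch only (not part of the original patent text), the baking-angle calculation above can be written as follows; numpy and the parameter names rotation_matrix, camera_pos and model_pos are assumptions for illustration.

```python
import numpy as np

def baking_angle(rotation_matrix, camera_pos, model_pos):
    """Compute vi = Ri^(-1) * (c0 - pi): the camera-to-model difference vector
    expressed in the local frame of the patch model placed at the model position."""
    diff = np.asarray(camera_pos, dtype=float) - np.asarray(model_pos, dtype=float)
    return np.linalg.inv(np.asarray(rotation_matrix, dtype=float)) @ diff
```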
The following examples provide a specific implementation of determining a roasting direction of a roasting camera relative to a target model based on a roasting angle.
Determining a target texture area corresponding to the baking angle from a preset texture map based on the baking angle; the texture map in the initial state is in a blank state, and comprises a plurality of texture areas, wherein the texture areas are used for: storing a baking result of the target model under a corresponding baking angle; the baking direction of the target model is determined based on the position of the target texture region in the texture map.
The preset texture map may specifically be a large image map, in an initial state, the texture map may be in a blank state, the texture map may be divided into a plurality of texture regions, in order to describe the position of each texture region in the texture map conveniently, a coordinate system may be established, the position of each texture region in the texture map may be represented by texture coordinates, and a baking result of the target model under a corresponding baking angle may be stored in each texture region, where the baking result may include information such as a baking direction. When the baking angle is determined, the baking direction of the baking camera relative to the target model can be determined according to the position of the target texture area corresponding to the baking angle in the texture map.
The following embodiments provide specific embodiments for determining a target texture region corresponding to a baking angle.
Mapping the baking angle from the hemispherical space to the two-dimensional space in which the texture map is located to obtain a two-dimensional vector, where the hemispherical space is the three-dimensional space of directions of the virtual camera relative to the patch model, this space being hemispherical; and determining the region position coordinates of the target texture region in the texture map based on the two-dimensional vector and the number of regions of the texture map.
The hemispherical space is the three-dimensional space in which the direction vectors of the virtual camera relative to the patch model lie; each direction vector may specifically be a unit vector, and in a three-dimensional rectangular coordinate system all such unit vectors form a spherical surface, as shown in fig. 2. Taking the virtual scene of a sporting event as an example, the virtual camera rarely faces the audience from below, so only the upper half of all the direction vectors, i.e., the upper half of the sphere, needs to be recorded. The two-dimensional space can be [-1.0, +1.0]²; the texture map can also be regarded as a two-dimensional rectangle, and points in the two rectangles have a one-to-one mapping relationship.
In one example, the baking angle is mapped from the hemispherical space to the two-dimensional space in which the texture map is located, giving a corresponding two-dimensional vector H0 = (x + y, x - y), where x = vi.x / (|pi.x| + |pi.y| + |pi.z|) and y = vi.z / (|pi.x| + |pi.y| + |pi.z|); vi.x and vi.z are the components of the baking angle in the x and z directions, and pi.x, pi.y, pi.z are the components of the model position in the x, y and z directions, respectively.
Then, the region position coordinates of the target texture region in the texture map can be determined from the two-dimensional vector and the number of regions of the texture map. The number of regions depends on the value of n used when dividing the texture map; as shown in fig. 3, the number of regions is n × n, and the coordinates in both directions are counted from 0, so the maximum value of the abscissa and ordinate of a texture region's position coordinates is n - 1. The region position coordinates HH of the target texture region can be calculated as HH = floor((H0 + 1.0) × q + 0.5), where H0 is the two-dimensional vector above and q = (n - 1)/2, which maps [-1.0, +1.0]² onto the index grid {0, 1, …, n-1} × {0, 1, …, n-1}; floor() is the round-down function, i.e., the result is rounded down to obtain the region position coordinates of the target texture region in the texture map.
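A minimal sketch of this region mapping is shown below (not part of the original text). One assumption to note: the sum-of-absolute-values normalisation here is applied to the components of the baking-angle vector vi itself, whereas the formula above writes the denominators with the model-position components.

```python
import numpy as np

def baking_angle_to_region(vi, n):
    """Map a baking-angle vector to region coordinates HH on the n x n texture atlas,
    following H0 = (x + y, x - y) and HH = floor((H0 + 1.0) * q + 0.5) with q = (n - 1) / 2."""
    vi = np.asarray(vi, dtype=float)
    s = np.abs(vi).sum()                     # normalisation term (sum of absolute components)
    x, y = vi[0] / s, vi[2] / s              # x and z components of the baking angle
    h0 = np.array([x + y, x - y])            # two-dimensional vector H0 in [-1, +1]^2
    q = (n - 1) / 2.0
    return np.floor((h0 + 1.0) * q + 0.5).astype(int)   # region coordinates HH
```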
Referring to the left part of fig. 3, the texture map may be a two-dimensional rectangle; taking n = 4 as an example, the texture map is divided into 4 × 4 texture regions, and the position of each texture region in the texture map can be represented by region position coordinates. Since the region position coordinates start from (0, 0), the maximum value of the abscissa and ordinate of a texture region's position coordinates is n - 1. In addition, taking the target model as an audience model and referring to the right part of the figure, the position of the quadrilateral patch in the right part is the rendering position of the audience model in the virtual scene, i.e., the model position; the baking angle vi can be determined from the model position and the camera position, and once the baking angle is determined, the target texture region corresponding to it can be located in the texture map on the left.
Further, the texture map includes a plurality of texture regions, and each texture region corresponds to a model position in the virtual scene. Specifically, the texture map may be one large atlas divided into a plurality of texture regions; the position of each texture region in the texture map can be represented by its region position coordinates, and each texture region corresponds to a model position in the virtual scene. Each texture region can store the information of the target model in the direction corresponding to that region, and that direction can be determined from the point in the two-dimensional space of the texture map onto which the baking angle in the hemispherical space is mapped. Taking the target model as an audience model, fig. 4 illustrates the baking effect of the map over all angles of the audience model. It should be noted that, after the baking angle of the audience model in the current animation frame is determined from the model position and the camera position, it is known which texture regions of the texture map need to be baked, and only the angles obtained by the above calculation need to be baked.
The following examples provide a specific implementation for determining viewing angle conversion parameters.
Determining a view angle transformation parameter for transforming map data of a specified size into a target texture region based on region position coordinates of the target texture region in the texture map and the number of regions in the texture map; wherein the transformation parameters include a size scaling parameter and an offset parameter.
A texture is typically defined within a rectangular region in two dimensions, referred to as texture space. Specifically, the parameters can be defined over a unit square, so determining the viewing-angle transformation parameters that transform map data of a specified size into the target texture region can be understood as determining the transformation from the unit square [0, 1]² in texture space to the target texture region within the two-dimensional rectangle of the texture map.
Specifically, the viewing-angle transformation parameters (OffsetX, OffsetY, ScaleX, ScaleY) for transforming map data of a specified size into the target texture region can be determined from the region position coordinates HH of the target texture region in the texture map and the number of regions n in the texture map, where OffsetX and OffsetY are offset parameters and ScaleX and ScaleY are scaling parameters: OffsetX = HH.x / n, OffsetY = HH.y / n, ScaleX = 1/n, ScaleY = 1/n. HH.x and HH.y are the components of the region position coordinates in the x and y directions, and the values of OffsetX and OffsetY also lie between 0 and 1.
Fig. 5 illustrates an example: the rectangle containing the large smiley face on the left is the unit rectangle of texture space, and the large rectangle on the right corresponds to the texture map, in which the position of the small smiley face is the target texture region. By determining the viewing-angle transformation parameters, the linear transformation from the unit rectangle of texture space to the rectangle corresponding to the target texture region can be implemented. Specifically, the unit rectangle is first scaled, i.e., its length is multiplied by the scaling factor ScaleX in the x direction and its width by ScaleY in the y direction. Since the length and width of the unit rectangle are 1 and the texture map is divided into n × n regions, the length and width of each small rectangle in the large right rectangle, i.e., of the target texture region, are 1/n, so the scaling factors ScaleX and ScaleY needed to scale the left unit rectangle down to a small rectangle on the right are both 1/n.
Then, the scaled unit rectangle is moved to the target texture region in the right rectangle, i.e., the position of the small smiley face, moving from (0, 0) to (HH.x, HH.y). There are (HH.x - 0) regions spanned in the x direction and (HH.y - 0) regions spanned in the y direction, and the length and width of each region are 1/n, so the offset in the x direction is OffsetX = (HH.x - 0) × 1/n = HH.x / n and the offset in the y direction is OffsetY = (HH.y - 0) × 1/n = HH.y / n.
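The scale-then-offset step can be sketched as follows (illustrative only; the function and parameter names are assumptions):

```python
def region_viewport_params(hh, n):
    """OffsetX/OffsetY/ScaleX/ScaleY that map the unit square [0, 1]^2 onto region HH
    of an n x n texture atlas."""
    offset_x, offset_y = hh[0] / n, hh[1] / n    # OffsetX = HH.x / n, OffsetY = HH.y / n
    scale_x, scale_y = 1.0 / n, 1.0 / n          # ScaleX = ScaleY = 1 / n
    return offset_x, offset_y, scale_x, scale_y

def to_region(u, v, params):
    """Apply the scale-then-offset step described for fig. 5 to a point of the unit square."""
    offset_x, offset_y, scale_x, scale_y = params
    return u * scale_x + offset_x, v * scale_y + offset_y
```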
The following examples provide a specific implementation of determining the baking direction of a baking camera relative to a target model.
Mapping the region position coordinates of the target texture region in the texture map to a hemispherical space to obtain an intermediate result; the hemispherical space is a three-dimensional space of the virtual camera relative to the patch model, and the three-dimensional space is a hemispherical space; the baking direction of the target model is determined based on the intermediate result.
Specifically, the region position coordinates of the target texture region in the map are obtained, the region position coordinates are two-dimensional coordinates, and then the two-dimensional coordinates are mapped into the hemispherical space, so that an intermediate result can be obtained. The hemispherical space can be referred to as fig. 2, and the hemispherical space is a three-dimensional space of the virtual camera relative to the patch model, and the three-dimensional space is hemispherical. Further, the baking direction of the baking camera relative to the target model is determined according to the intermediate result.
In one example, the intermediate results are computed from the region position coordinates of the target texture region as H' = (HH / (n - 1)) × 2 - 1 and T' = (H'.x + H'.y, H'.x - H'.y) × 0.5, where H' and T' are the intermediate results described above. The baking direction is then computed from the intermediate results as Dir = normalize(T'.x, 1 - |T'.x| - |T'.y|, T'.y), where Dir is the baking direction and normalize() is a normalization function.
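An illustrative sketch of this inverse mapping (not part of the original text; names are assumptions):

```python
import numpy as np

def region_to_baking_direction(hh, n):
    """Invert the region mapping: H' = (HH / (n - 1)) * 2 - 1,
    T' = (H'.x + H'.y, H'.x - H'.y) * 0.5, Dir = normalize(T'.x, 1 - |T'.x| - |T'.y|, T'.y)."""
    hh = np.asarray(hh, dtype=float)
    h = (hh / (n - 1)) * 2.0 - 1.0
    t = np.array([h[0] + h[1], h[0] - h[1]]) * 0.5
    d = np.array([t[0], 1.0 - abs(t[0]) - abs(t[1]), t[1]])
    return d / np.linalg.norm(d)
```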
The following examples provide a specific implementation of determining the roasting position of a roasting camera based on the roasting direction.
Determining the world coordinates of the center point of the bounding sphere of the target model and the sphere radius of the bounding sphere; and adding the product of the baking direction and the sphere radius to the world coordinates of the center point to obtain the baking position of the target model.
For the virtual models, each virtual model has a corresponding model space, and the model space is a space coordinate system established by taking a certain position point on the virtual model as an origin. Meanwhile, the virtual model is also located in the world space, and the world space is a three-dimensional virtual space in which a plurality of virtual models are located and corresponds to a world space coordinate system. In world space, all objects share an absolute position, which can be expressed by world coordinates. The bounding sphere is the smallest sphere that encloses the complete object model.
In one example, after determining the coordinates of the center point of the bounding sphere of the target model in world space, i.e., the center-point world coordinates mentioned above, and the sphere radius of the bounding sphere, the product of the baking direction and the sphere radius is added to the center-point world coordinates to obtain the baking position of the baking camera in the model space of the target model. The calculation formula is eye = center + Dir × radius, where eye is the baking position, center is the world coordinate of the center point of the bounding sphere, Dir is the baking direction, and radius is the sphere radius of the bounding sphere.
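A corresponding one-line sketch of the baking-position formula (illustrative, with assumed names):

```python
import numpy as np

def baking_position(center, direction, radius):
    """eye = center + Dir * radius: place the baking camera on the bounding sphere."""
    return np.asarray(center, dtype=float) + np.asarray(direction, dtype=float) * radius
```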
The following examples provide specific embodiments for obtaining baking map data.
Determining a baking transformation matrix based on the baking position; obtaining the vertex offset of the model vertex of the target model from a preset vertex animation map based on the time parameter; and baking the target model based on the baking transformation matrix and the vertex offset to obtain baking mapping data.
The above-described baking transformation matrix may be used to transform the position of the baking camera in world space to the corresponding position in the model space of the target model, and it may be determined from the baking position. The time parameter may include the running time of the virtual scene, and the preset vertex animation map VAT (Vertex Animation Texture) records the displacement data of each model vertex of the target model in a texture; this texture is then fed into the world-position input of the vertex shader so that each frame of the animation is reconstructed by the GPU (Graphics Processing Unit).
Specifically, the preset vertex animation map may have multiple frames of animations, different time parameters correspond to different animation frames in the preset vertex animation map, and model vertex offsets of the target models in different animation frames may be the same or different. Accordingly, the vertex offset of the model vertices of the target model may be determined from the preset vertex animation map according to the time parameter. Then, baking treatment can be carried out on the target model according to the baking transformation matrix and the vertex offset, so as to obtain baking map data.
The following examples provide a specific implementation of generating a baking transformation matrix.
Determining a first direction, and determining a direction from a baking position to a center point of a surrounding sphere of the target model as a second direction; constructing a viewing angle matrix based on the first direction and the second direction; constructing an orthogonal projection matrix based on the spherical radius of the surrounding sphere of the target model; a baked transformation matrix is generated based on the perspective matrix and the orthogonal projection matrix.
The first direction may be determined in the world space, and in actual implementation, the directly upper direction may be selected as the first direction in the world space according to actual conditions, or other suitable directions. Then, the direction from the baking position to the center point of the enclosing sphere of the target model is taken as a second direction. According to the first direction and the second direction, a View matrix View can be constructed.
Specifically, coordinate-system conversion is performed first: coordinates are converted from the default three-dimensional rectangular coordinate system of world space to the three-dimensional rectangular coordinate system of the camera view space. The default world-space coordinate system can be determined by the straight-up direction (0, 1, 0) and the handedness of the coordinate system, which may be right-handed or left-handed; it can also be determined with (0, 0, 1) as the up direction, and can be adjusted according to the actual situation. The three-dimensional rectangular coordinate system of the camera view space can be determined by the forward direction of the camera and the handedness of the coordinate system. In the default world-space coordinate system, the direction from the baking position to the center point of the target model's bounding sphere is the forward direction of the camera.
Then, a displacement transformation is performed: the origin is moved from the origin of the default world-space coordinate system to the origin of the camera view-space coordinate system. Multiplying the matrices obtained in these two steps gives the view matrix. Taking a right-handed coordinate system as an example, the view matrix is constructed from the following quantities:
In the formula, r is the right direction vector, with rx, ry and rz its components along the x, y and z axes; u is the straight-up direction vector, with ux, uy and uz its components; f is the forward direction vector, with fx, fy and fz its components; the dot() function denotes the dot product (inner product) of two vectors.
Further, the orthographic projection matrix Proj can be constructed according to the sphere radius of the bounding sphere of the target model, and the baking transformation matrix can then be generated from the view matrix and the orthographic projection matrix. Specifically, a near-plane parameter value Z_near and a far-plane parameter value Z_far of the view frustum of the baking camera are set first; the near-plane and far-plane parameter values represent the distances of the near plane and the far plane, respectively, from the origin of the coordinate system. The viewing-angle width W and the viewing-angle height H also need to be set. These parameters can be adjusted flexibly as needed, as long as the view frustum completely encloses the whole target model.
In one example, the near-plane parameter value Z_near is set to 0.01, the far-plane parameter value Z_far is set to radius × 2, i.e., twice the sphere radius of the target model's bounding sphere, and the viewing-angle width W and height H are also set to radius × 2. Still taking the right-handed coordinate system as an example, the orthographic projection matrix Proj is constructed from these parameters.
finally, a baked transformation matrix VP can be generated according to the above-mentioned view angle matrix and orthogonal projection matrix, and the calculation formula of the baked transformation matrix VP is as follows: vp=view×proj.
In a specific implementation manner, the vertex animation map includes a plurality of map tiles, and each map tile corresponds to a baking result of the target model under a certain baking angle; the mapping and slicing comprises the following steps: offset of model vertices in the target model in a plurality of animation frames; and determining a target animation frame based on the time parameter, and acquiring the vertex offset of the model vertex in the target animation frame from the vertex animation map.
Specifically, the vertex animation map may be divided into a plurality of map tiles, where each map tile corresponds to a baking result of the target model at a certain baking angle. The initial position of the vertex in the object model may be shifted by an offset, which may be stored in the map tile described above. In addition, when the virtual scene is running, the animation can be played at a certain speed, and different animation frames are corresponding to different time parameters, so that the offset of the model vertex in the target model in a plurality of animation frames can be stored in the mapping fragment.
FIG. 6 illustrates, as an example, the vertex offsets of all model vertices of the target model in each frame of the animation, stored in the preset dynamic texture map. The abscissa of the map is the vertex number v(0), v(1), v(2), …, v(n); the ordinate is the frame number Frame(0), Frame(1), Frame(2), …, Frame(n). Each column of the map corresponds to the offsets of model vertex v(i) of the target model in each frame, and each row corresponds to the offsets of every vertex in animation frame Frame(i). In the figure, Frame(0).Offset is the offset of a vertex in the first frame of the animation sequence.
In actual implementation, the target animation frame may be determined according to the running time of the virtual scene, that is, the time parameter, and the target animation frame may be understood as an animation frame played at the time parameter, and then, the offset of the model vertex in the target animation frame is obtained from the vertex animation map.
By adopting the step, different vertex animation maps can be replaced in real time, so that the action of modifying the audience model in real time is realized.
Further, determining a target vertex from the target model, and determining an initial texture coordinate where the offset of the target vertex in a first frame of the animation frames is located; and determining target texture coordinates of the target animation frames corresponding to the time parameters based on the animation playing speed, the total frame number of the animation frames and the initial texture coordinates, and acquiring vertex offsets from the mapping positions indicated by the target texture coordinates.
The target vertex may be any vertex in the target model. After the target vertex is determined from the target model, the initial texture coordinate at which the offset of the target vertex in the first frame of the animation, i.e., Frame(0), is stored is determined; this initial texture coordinate may be denoted UV2. Further, the target texture coordinate of the target animation frame corresponding to the time parameter t may be determined from the animation playing speed, the total frame count frames of the animation, and the initial texture coordinate; it may be calculated as UV2 + (0, ceil(t × speed) / frames + 1 / frames), where ceil() is the ceiling function, returning the smallest integer greater than or equal to the specified expression, and frac() returns the fractional part of its argument. The vertex offset may then be obtained from the map location indicated by the target texture coordinate.
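A sketch of this texture-coordinate lookup (illustrative only). Where exactly frac() is applied is not spelled out above, so applying it to the whole v offset for looping playback is an assumption:

```python
import math

def vat_frame_uv(uv2, t, speed, frames):
    """Shift the initial coordinate UV2 to the row of the animation frame played at time t,
    following UV2 + (0, ceil(t * speed) / frames + 1 / frames)."""
    v = math.ceil(t * speed) / frames + 1.0 / frames
    v -= math.floor(v)                        # frac(): keep the fractional part so playback loops
    return (uv2[0], uv2[1] + v)
```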
The following examples continue to describe embodiments of obtaining baking map data.
Obtaining vertex coordinates of model vertices of the target model; performing offset processing on the vertex coordinates based on the vertex offset to obtain an offset result; transforming the offset result based on the baking transformation matrix to obtain a vertex transformation result; based on the visual angle transformation parameters, transforming the vertex transformation result to obtain a visual angle transformation result; performing vertex coloring on the target model based on the visual angle transformation result to obtain a vertex coloring result; based on the vertex coloring result, texture coordinates, depth information and world space normal vectors of the target model are obtained, and baking map data are obtained; and saving the baking map data to a target texture area in a preset texture map.
First, the vertex coordinates p[local] of the model vertices of the target model are obtained; these are the position coordinates of the model vertices in model space. The vertex offset Offset can then be obtained from the map tile corresponding to the target model, and the vertex coordinates are offset by this amount, giving the offset result p[local] + Offset. The offset result is then transformed by the baking transformation matrix to obtain the vertex transformation result p[homogeneous], where "homogeneous" denotes homogeneous coordinates; the vertex transformation result is obtained by multiplying the baking transformation matrix by the offset result, i.e., p[homogeneous] = VP × (p[local] + Offset).
Secondly, the vertex transformation result may be transformed according to the visual angle transformation parameters to obtain a visual angle transformation result p[homogeneous]'. In actual implementation, the visual angle transformation result may be calculated by the following formulas:

p[homogeneous]'.xy = ((((p[homogeneous].xy / p[homogeneous].w) + 1.0) × 0.5) × ScaleXY + OffsetXY) × 2.0 − 1.0) × p[homogeneous].w, with the parentheses grouping as (((((p[homogeneous].xy / p[homogeneous].w) + 1.0) × 0.5) × ScaleXY + OffsetXY) × 2.0 − 1.0) × p[homogeneous].w,
p[homogeneous]'.zw = p[homogeneous].zw.

It should be noted that homogeneous coordinates are one way to represent N-dimensional coordinates using N+1 numbers, where x, y, z are the three components in the Cartesian coordinate system and w is an additionally added variable, and that

ScaleXY = (ScaleX, ScaleY), OffsetXY = (OffsetX, OffsetY),
p[homogeneous].xy = (p[homogeneous].x, p[homogeneous].y),
p[homogeneous].xy / p[homogeneous].w = (p[homogeneous].x / p[homogeneous].w, p[homogeneous].y / p[homogeneous].w).
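By way of non-limiting illustration, a minimal sketch (Python with NumPy) of the per-vertex baking transform described above is given below. The choice ScaleXY = 1/N and OffsetXY = (region position coordinates)/N for an N×N texture map is an assumption of the sketch, consistent with squeezing the baked result into the target texture region, and is not the only possible choice.

import numpy as np

def bake_vertex(p_local, offset, vp, scale_xy, offset_xy):
    # p_local   -- vertex coordinates in model space, p[local]
    # offset    -- vertex offset sampled from the vertex animation map, Offset
    # vp        -- 4x4 baking transformation matrix VP
    # scale_xy  -- (ScaleX, ScaleY), visual angle transformation scale
    # offset_xy -- (OffsetX, OffsetY), visual angle transformation offset
    p = np.append(np.asarray(p_local, dtype=float) + np.asarray(offset, dtype=float), 1.0)
    p_h = np.asarray(vp, dtype=float) @ p                    # p[homogeneous] = VP x (p[local] + Offset)
    ndc_xy = p_h[:2] / p_h[3]                                # p[homogeneous].xy / p[homogeneous].w
    remapped = ((((ndc_xy + 1.0) * 0.5) * np.asarray(scale_xy, dtype=float)
                 + np.asarray(offset_xy, dtype=float)) * 2.0 - 1.0) * p_h[3]
    return np.array([remapped[0], remapped[1], p_h[2], p_h[3]])  # p[homogeneous]'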
Then, vertex coloring can be performed on the target model according to the visual angle transformation result to obtain a vertex coloring result, where the vertex coloring result may include information such as the texture coordinates, depth information and world space normal vector of the target model. Finally, according to the vertex coloring result, the texture coordinates, depth information, world space normal vector and other information of the target model are obtained as the baking map data, and the baking map data are saved to the target texture region in the preset texture map.
In this embodiment, when the preset texture map is generated, only the angle required for the current viewing angle needs to be baked, so that the pressure of the vertex shader in the baking rendering stage is reduced.
The following examples provide specific embodiments for adjusting the orientation of the patch model.
Acquiring a baking angle of the target model; mapping the baking angle to the space where the virtual scene is located based on the rotation transformation matrix of the patch model to obtain a space vector corresponding to the baking angle; determining a reference direction, and constructing a local coordinate system based on the reference direction and the space vector; the model vertex position of the patch model is determined based on the center point position of the patch model, the local coordinates of the model vertices of the patch model relative to the center point, and the local coordinate system to adjust the orientation of the patch model by adjusting the model vertex position of the patch model.
Specifically, a baking angle of the target model is first obtained. It should be noted that the baking angle is the baking angle v(bake) corresponding to the current scene camera view angle, and may be calculated by the following formula: v(bake) = R^(−1) × (c1 − p[world]), where R is the rotation transformation matrix of the patch model, c1 is the position of the scene camera (the same as the position c0 of the virtual camera), p[world] is the vertex coordinate of the patch model in world space in the virtual scene, and the center point position pi of the patch model is the rendering position of the target model in the virtual scene.
The orientation of the patch model may then be adjusted according to the view angle between the patch model and the scene camera. Specifically, the baking angle v(bake) is mapped to the space in which the virtual scene is located according to the rotation transformation matrix R of the patch model, that is, the baking angle is rotated back into world space to obtain the corresponding space vector V' = R × v(bake). A suitable direction is then selected as the reference direction, for example (0.0, 1.0, 0.0), and a local coordinate system is constructed from the reference direction and the space vector V'; specifically, (0.0, 1.0, 0.0) is taken as the directly upward direction and V' as the directly forward direction, from which the local coordinate system {Forward, Up, Right} is constructed.
Further, the adjusted model vertex position p[world]' of the patch model may be calculated from the center point position pi of the patch model, the local coordinates TCi of the model vertices of the patch model relative to the center point, and the local coordinate system, according to the following formula: p[world]' = pi + TCi.x × Right + TCi.y × Up. Finally, the orientation of the patch model may be adjusted by adjusting the vertex positions of the patch model.
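As a non-limiting sketch only (Python with NumPy), the orientation adjustment of the patch model described above may look as follows. The embodiment only states that a local coordinate system {Forward, Up, Right} is built from the reference direction and V'; the concrete construction of Right and Up via cross products, and the use of the center point position as p[world] when computing the baking angle, are assumptions of this sketch.

import numpy as np

def orient_patch(pi, corners_local, rotation, cam_pos):
    # pi            -- center point position of the patch model (rendering position of the target model)
    # corners_local -- local coordinates TCi of the patch vertices relative to the center point, shape (4, 2)
    # rotation      -- rotation transformation matrix R of the patch model, 3x3
    # cam_pos       -- scene camera position c1
    pi = np.asarray(pi, dtype=float)
    cam_pos = np.asarray(cam_pos, dtype=float)
    rotation = np.asarray(rotation, dtype=float)
    v_bake = np.linalg.inv(rotation) @ (cam_pos - pi)    # baking angle v(bake) = R^(-1) x (c1 - p[world])
    forward = rotation @ v_bake                          # V' = R x v(bake), rotated back to world space
    forward = forward / np.linalg.norm(forward)
    up_ref = np.array([0.0, 1.0, 0.0])                   # reference direction
    right = np.cross(up_ref, forward)
    right = right / np.linalg.norm(right)
    up = np.cross(forward, right)
    # p[world]' = pi + TCi.x x Right + TCi.y x Up for every vertex of the patch
    return np.array([pi + tc[0] * right + tc[1] * up for tc in np.asarray(corners_local, dtype=float)])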
The following examples provide a specific implementation of rendering a patch model.
The method comprises the steps of obtaining normal vector sampling data from baking mapping data, adjusting the normal vector sampling data based on a rotation transformation matrix of a patch model to obtain an actual normal vector, and rendering a normal effect of the patch model based on the actual normal vector; obtaining depth sampling data from baking map data, and adjusting the depth sampling data based on the spherical radius of a surrounding sphere of a target model and the pixel linear depth of a patch model to obtain actual linear depth; the depth effect of the patch model is rendered based on the actual linear depth.
The normal vector sampling data is obtained from the baked map data, and the normal vector sampling data may then be adjusted according to the rotation transformation matrix of the patch model to obtain an actual normal vector N', which may be calculated by the following formula: N' = normal(R × N), where R is the rotation transformation matrix of the patch model and N is the normal vector sampling data. After the actual normal vector is obtained through this adjustment, the normal effect of the patch model may be rendered on the patch model according to the actual normal vector.
In addition, depth sampling data may be obtained from the baked map data, and the depth may then be adjusted according to the bounding sphere radius of the target model and the pixel linear depth of the patch model to obtain an actual linear depth LinearDepth', which may be calculated by the following formula: LinearDepth' = LinearDepth + (Depth − Radius), where LinearDepth is the pixel linear depth of the patch model, Depth is the depth sampling data, and Radius is the bounding sphere radius of the target model. After the actual linear depth is obtained through this adjustment, the depth effect of the patch model may be rendered according to the actual linear depth.
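For illustration only, a minimal sketch (Python with NumPy) of the two per-pixel adjustments above, taken directly from the formulas N' = normal(R × N) and LinearDepth' = LinearDepth + (Depth − Radius):

import numpy as np

def shade_patch_pixel(n_sample, depth_sample, rotation, pixel_linear_depth, radius):
    # n_sample           -- world space normal N sampled from the baking map data
    # depth_sample       -- Depth sampled from the baking map data
    # rotation           -- rotation transformation matrix R of the patch model, 3x3
    # pixel_linear_depth -- pixel linear depth of the patch model, LinearDepth
    # radius             -- bounding sphere radius of the target model, Radius
    n_actual = np.asarray(rotation, dtype=float) @ np.asarray(n_sample, dtype=float)
    n_actual = n_actual / np.linalg.norm(n_actual)                      # N' = normal(R x N)
    linear_depth_actual = pixel_linear_depth + (depth_sample - radius)  # LinearDepth'
    return n_actual, linear_depth_actual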
The sampling data may be obtained in the vertex shading stage using the region position coordinates of the texture map and the texture coordinates of the patch model. Fig. 7 is a schematic diagram illustrating the region position coordinates of the texture map and the texture coordinates of the patch model, where the left solid rectangle is the preset texture map and the right solid rectangle is the texture map of the patch model. Referring to the left solid rectangle in the figure, the texture map is divided into 4×4 texture regions (N=4), the side length of each texture region is 1, and the region position coordinate of the upper-left texture region is (0, 0). If the region position coordinate of the target texture region is (3, 1), namely the shaded region in the figure, the corresponding texture coordinate is (0.25, 0.75); for ease of understanding, refer to the position of the black dot in the right solid rectangle. The dashed lines are provided to clarify the relative relationship between the texture coordinates of the patch model and the region positions of the texture map. In actual implementation, when the region position coordinate of the target texture region is (3, 1) and the corresponding texture coordinate of the patch model is (0.25, 0.75), sampling may be performed as shown in fig. 7 to obtain the above sampling data.
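The mapping from region position coordinates to atlas texture coordinates in the example above may be sketched as follows (Python). The (row, column) ordering of the region position coordinates and the texture coordinate origin at the upper-left region are assumptions inferred from the (3, 1) → (0.25, 0.75) example of fig. 7.

def region_to_uv(region_coords, n=4):
    # region_coords -- (row, column) of the target texture region, upper-left region being (0, 0)
    # n             -- number of texture regions per side of the texture map
    row, col = region_coords
    return (col / n, row / n)

def sample_uv(region_coords, patch_uv, n=4):
    # Combine the corner of the target texture region with the patch model's own
    # texture coordinate to get the final sampling location inside the texture map.
    base_u, base_v = region_to_uv(region_coords, n)
    return (base_u + patch_uv[0] / n, base_v + patch_uv[1] / n)

print(region_to_uv((3, 1)))            # (0.25, 0.75), matching the example above
print(sample_uv((3, 1), (0.5, 0.5)))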
This embodiment provides a mobile-side rendering scheme, based on VAT and image-based techniques, for audiences in a virtual scene, so that rendering of the audience in the virtual scene can be simulated as fully as possible under the limited performance and power consumption of the mobile terminal, improving the realism of the atmosphere in the virtual scene. In addition, in this embodiment only the angle required under the current view angle is baked when the map is generated, which reduces the pressure on the vertex shader in the baking rendering stage. The actions of the audience model may also be modified in real time by swapping in different VATs. Rendering of a large number of viewers can be achieved with fewer draw calls, and each viewer only needs four vertices, i.e. two triangles, to be rendered. The rendered audience also appears more three-dimensional and dynamic.
Corresponding to the above embodiment of the model rendering method, fig. 8 shows a schematic structural diagram of a rendering device of a model, where the device includes:
a first determining module 80 for determining a model position and a camera position in the virtual scene; wherein, the model position is: rendering positions of the target models in the virtual scene; the camera positions are: shooting the position of a virtual camera of a virtual scene in the virtual scene;
The baking module 82 is configured to perform baking processing on the target model based on the model position, the camera position, the time parameter and a preset vertex animation map, so as to obtain baking map data of the target model; wherein, the vertex animation map comprises: a vertex offset of a model vertex of the target model, the vertex offset varying with a time parameter;
an adjustment module 84 for setting the patch model at the model position and adjusting the orientation of the patch model based on the model position and the camera position;
the rendering module 86 is configured to render the baked map data onto the adjusted patch model, so as to obtain a rendering effect of the target model in the virtual scene.
The rendering device of the model determines the position of the model and the position of the camera in the virtual scene; wherein, the model position is: rendering positions of the target models in the virtual scene; the camera positions are: shooting the position of a virtual camera of a virtual scene in the virtual scene; based on the model position, the camera position, the time parameter and the preset vertex animation mapping, baking the target model to obtain baking mapping data of the target model; wherein, the vertex animation map comprises: a vertex offset of a model vertex of the target model, the vertex offset varying with a time parameter; setting a patch model at a model position, and adjusting an orientation of the patch model based on the model position and the camera position; and rendering the baking map data onto the adjusted patch model to obtain the rendering effect of the target model in the virtual scene.
In the method, vertex animation mapping is preset, firstly, baking is conducted on a target model according to a model position and a camera position, then time parameters are combined, baking mapping data are obtained, then the orientation of a surface patch model located at the model position in a virtual scene is adjusted according to the model position and the camera position, and finally the baking mapping data are rendered to the adjusted surface patch model, so that the rendering effect of the target model in the virtual scene is obtained. By adopting the method, the texture map of the target model required under the current view angle can be baked in real time and efficiently, so that the dynamic target model is rendered on the surface patch model, the appearance effects of the model, actions and the like are enriched, the manufacturing resources are saved, the performance consumption is reduced, and the reality and the flexibility of model rendering are improved.
The above-mentioned baking module is further used for: determining a baking position of the target model based on the model position and the camera position; wherein, the baking position is: when baking is carried out on the target model, the position of the virtual camera is located; and baking the target model based on the baking position, the time parameter and the preset vertex animation map to obtain baking map data of the target model.
The above baking module is also for: determining a baking angle of the target model based on the model position and the camera position; determining a baking direction of the target model based on the baking angle; a baking position of the target model is determined based on the baking direction.
The above baking module is also for: calculating a difference vector between the camera position and the model position; and carrying out transformation processing on the difference vector through a rotation transformation matrix corresponding to the model position to obtain the baking angle of the target model.
The above baking module is also for: determining a target texture area corresponding to the baking angle from a preset texture map based on the baking angle; the texture map in the initial state is in a blank state, and comprises a plurality of texture areas, wherein the texture areas are used for: storing a baking result of the target model under a corresponding baking angle; the baking direction of the target model is determined based on the position of the target texture region in the texture map.
The above baking module is also for: mapping the baking angle from the hemispherical space to a two-dimensional space where the texture map is located to obtain a two-dimensional vector; wherein, hemisphere space is: the virtual camera is relative to the three-dimensional space of the patch model, and the three-dimensional space is a hemispherical space; and determining the region position coordinates of the target texture region in the texture map based on the two-dimensional vector and the region number of the texture map.
The texture map includes a plurality of texture regions, each texture region corresponding to a model position in the virtual scene.
The apparatus further comprises a second determining module configured to: determining a view angle transformation parameter for transforming map data of a specified size into a target texture region based on region position coordinates of the target texture region in the texture map and the number of regions in the texture map; wherein the transformation parameters include a size scaling parameter and an offset parameter.
The above baking module is also for: mapping the region position coordinates of the target texture region in the texture map to a hemispherical space to obtain an intermediate result; the hemispherical space is a three-dimensional space of the virtual camera relative to the patch model, and the three-dimensional space is a hemispherical space; the baking direction of the target model is determined based on the intermediate result.
The above baking module is also for: determining the world coordinates of a central point surrounding the central point of the ball of the target model and the radius of the ball surrounding the ball; and adding the product of the baking direction and the sphere radius to the world coordinate of the central point to obtain the baking position of the target model.
The above baking module is also for: determining a baking transformation matrix based on the baking position; obtaining the vertex offset of the model vertex of the target model from a preset vertex animation map based on the time parameter; and baking the target model based on the baking transformation matrix and the vertex offset to obtain baking mapping data.
The above baking module is also for: determining a first direction, and determining a direction from a baking position to a center point of a surrounding sphere of the target model as a second direction; constructing a viewing angle matrix based on the first direction and the second direction; constructing an orthogonal projection matrix based on the spherical radius of the surrounding sphere of the target model; a baked transformation matrix is generated based on the perspective matrix and the orthogonal projection matrix.
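As a non-limiting sketch only (Python with NumPy), the construction of the baking transformation matrix described by this module might look as follows. The right-handed look-at convention, the use of (0, 1, 0) as the first direction, and the symmetric orthographic volume of half-extent equal to the sphere radius are assumptions of the sketch; the embodiment only specifies that the view matrix is built from the two directions and the projection matrix from the sphere radius.

import numpy as np

def bake_transform(bake_pos, sphere_center, radius, first_dir=(0.0, 1.0, 0.0)):
    # bake_pos      -- baking position (virtual camera position during baking)
    # sphere_center -- center point of the bounding sphere of the target model
    # radius        -- bounding sphere radius
    # first_dir     -- first direction, assumed here to serve as the up reference (0, 1, 0)
    bake_pos = np.asarray(bake_pos, dtype=float)
    forward = np.asarray(sphere_center, dtype=float) - bake_pos   # second direction
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, np.asarray(first_dir, dtype=float))
    right = right / np.linalg.norm(right)
    up = np.cross(right, forward)

    view = np.eye(4)                                  # viewing angle matrix
    view[0, :3], view[1, :3], view[2, :3] = right, up, -forward
    view[:3, 3] = -view[:3, :3] @ bake_pos

    near, far = 0.0, 2.0 * radius                     # depth range covering the bounding sphere
    ortho = np.diag([1.0 / radius, 1.0 / radius, -2.0 / (far - near), 1.0])
    ortho[2, 3] = -(far + near) / (far - near)        # orthogonal projection matrix

    return ortho @ view                               # baking transformation matrix VP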
The vertex animation mapping comprises a plurality of mapping fragments, and each mapping fragment corresponds to a baking result of the target model under a certain baking angle; the mapping and slicing comprises the following steps: offset of model vertices in the target model in a plurality of animation frames; the above baking module is also for: and determining a target animation frame based on the time parameter, and acquiring the vertex offset of the model vertex in the target animation frame from the vertex animation map.
The above baking module is also for: determining a target vertex from a target model, and determining an initial texture coordinate where the offset of the target vertex in a first frame in an animation frame is located; and determining target texture coordinates of the target animation frames corresponding to the time parameters based on the animation playing speed, the total frame number of the animation frames and the initial texture coordinates, and acquiring vertex offsets from the mapping positions indicated by the target texture coordinates.
The above baking module is also for: obtaining vertex coordinates of model vertices of the target model; performing offset processing on the vertex coordinates based on the vertex offset to obtain an offset result; transforming the offset result based on the baking transformation matrix to obtain a vertex transformation result; based on the visual angle transformation parameters, transforming the vertex transformation result to obtain a visual angle transformation result; performing vertex coloring on the target model based on the visual angle transformation result to obtain a vertex coloring result; based on the vertex coloring result, texture coordinates, depth information and world space normal vectors of the target model are obtained, and baking map data are obtained; and saving the baking map data to a target texture area in a preset texture map.
The adjusting module is also used for: acquiring a baking angle of the target model; mapping the baking angle to the space where the virtual scene is located based on the rotation transformation matrix of the patch model to obtain a space vector corresponding to the baking angle; determining a reference direction, and constructing a local coordinate system based on the reference direction and the space vector; the model vertex position of the patch model is determined based on the center point position of the patch model, the local coordinates of the model vertices of the patch model relative to the center point, and the local coordinate system to adjust the orientation of the patch model by adjusting the model vertex position of the patch model.
The rendering module is further configured to: the method comprises the steps of obtaining normal vector sampling data from baking mapping data, adjusting the normal vector sampling data based on a rotation transformation matrix of a patch model to obtain an actual normal vector, and rendering a normal effect of the patch model based on the actual normal vector; obtaining depth sampling data from baking map data, and adjusting the depth sampling data based on the spherical radius of a surrounding sphere of a target model and the pixel linear depth of a patch model to obtain actual linear depth; the depth effect of the patch model is rendered based on the actual linear depth.
The present embodiment also provides an electronic device including a processor and a memory, the memory storing machine executable instructions executable by the processor, the processor executing the machine executable instructions to implement the rendering method of the model. The electronic device may be a server or a terminal device.
Referring to fig. 9, the electronic device includes a processor 100 and a memory 101, the memory 101 storing machine executable instructions that can be executed by the processor 100, the processor 100 executing the machine executable instructions to implement the above-described model rendering method.
Further, the electronic device shown in fig. 9 further includes a bus 102 and a communication interface 103, and the processor 100, the communication interface 103, and the memory 101 are connected through the bus 102.
The memory 101 may include a high-speed random access memory (RAM, Random Access Memory), and may further include a non-volatile memory, such as at least one magnetic disk memory. The communication connection between the system network element and at least one other network element is implemented via at least one communication interface 103 (which may be wired or wireless), and may use the Internet, a wide area network, a local network, a metropolitan area network, etc. The bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like, and the buses may be classified as address buses, data buses, control buses, etc. For ease of illustration, only one bi-directional arrow is shown in fig. 9, but this does not mean that there is only one bus or only one type of bus.
The processor 100 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 100 or by instructions in the form of software. The processor 100 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; but also digital signal processors (Digital Signal Processor, DSP for short), application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), field-programmable gate arrays (Field-Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components. The disclosed methods, steps, and logic blocks in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in the execution of a hardware decoding processor, or in the execution of a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101 and, in combination with its hardware, performs the steps of the method of the previous embodiment.
The processor in the electronic device may implement the following operations in the rendering method of the model by executing machine executable instructions:
determining a model position and a camera position in the virtual scene; wherein, the model position is: rendering positions of the target models in the virtual scene; the camera positions are: shooting the position of a virtual camera of a virtual scene in the virtual scene; based on the model position, the camera position, the time parameter and the preset vertex animation mapping, baking the target model to obtain baking mapping data of the target model; wherein, the vertex animation map comprises: a vertex offset of a model vertex of the target model, the vertex offset varying with a time parameter; setting a patch model at a model position, and adjusting an orientation of the patch model based on the model position and the camera position; and rendering the baking map data onto the adjusted patch model to obtain the rendering effect of the target model in the virtual scene.
In the method, vertex animation mapping is preset, firstly, baking is conducted on a target model according to a model position and a camera position, then time parameters are combined, baking mapping data are obtained, then the orientation of a surface patch model located at the model position in a virtual scene is adjusted according to the model position and the camera position, and finally the baking mapping data are rendered to the adjusted surface patch model, so that the rendering effect of the target model in the virtual scene is obtained. By adopting the method, the texture map of the target model required under the current view angle can be baked in real time and efficiently, so that the dynamic target model is rendered on the surface patch model, the appearance effects of the model, actions and the like are enriched, the manufacturing resources are saved, the performance consumption is reduced, and the reality and the flexibility of model rendering are improved.
Determining a baking position of the target model based on the model position and the camera position; wherein, the baking position is: when baking is carried out on the target model, the position of the virtual camera is located; and baking the target model based on the baking position, the time parameter and the preset vertex animation mapping, so as to obtain baking mapping data of the target model.
Determining a baking angle of the target model based on the model position and the camera position; determining a baking direction of the target model based on the baking angle; a baking position of the target model is determined based on the baking direction.
Calculating a difference vector between the camera position and the model position; and carrying out transformation processing on the difference vector through a rotation transformation matrix corresponding to the model position to obtain the baking angle of the target model.
Determining a target texture area corresponding to the baking angle from a preset texture map based on the baking angle; the texture map in the initial state is in a blank state, and comprises a plurality of texture areas, wherein the texture areas are used for: storing a baking result of the target model under a corresponding baking angle; the baking direction of the target model is determined based on the position of the target texture region in the texture map.
Mapping the baking angle from the hemispherical space to a two-dimensional space where the texture map is located to obtain a two-dimensional vector; wherein, hemisphere space is: the virtual camera is relative to the three-dimensional space of the patch model, and the three-dimensional space is a hemispherical space; and determining the region position coordinates of the target texture region in the texture map based on the two-dimensional vector and the region number of the texture map.
The texture map includes a plurality of texture regions, each texture region corresponding to a model position in the virtual scene.
Determining a view angle transformation parameter for transforming map data of a specified size into a target texture region based on region position coordinates of the target texture region in the texture map and the number of regions in the texture map; wherein the transformation parameters include a size scaling parameter and an offset parameter.
Mapping the region position coordinates of the target texture region in the texture map to a hemispherical space to obtain an intermediate result; the hemispherical space is a three-dimensional space of the virtual camera relative to the patch model, and the three-dimensional space is a hemispherical space; the baking direction of the target model is determined based on the intermediate result.
Determining the world coordinates of a central point surrounding the central point of the ball of the target model and the radius of the ball surrounding the ball; and adding the product of the baking direction and the sphere radius to the world coordinate of the central point to obtain the baking position of the baking camera in the model space of the target model.
Determining a baking transformation matrix based on the baking position; obtaining the vertex offset of the model vertex of the target model from a preset vertex animation map based on the time parameter; and baking the target model based on the baking transformation matrix and the vertex offset to obtain baking mapping data.
Determining a first direction, and determining a direction from a baking position to a center point of a surrounding sphere of the target model as a second direction; constructing a viewing angle matrix based on the first direction and the second direction; constructing an orthogonal projection matrix based on the spherical radius of the surrounding sphere of the target model; a baked transformation matrix is generated based on the perspective matrix and the orthogonal projection matrix.
The vertex animation mapping comprises a plurality of mapping fragments, and each mapping fragment corresponds to a baking result of the target model under a certain baking angle; the mapping and slicing comprises the following steps: offset of model vertices in the target model in a plurality of animation frames; and determining a target animation frame based on the time parameter, and acquiring the vertex offset of the model vertex in the target animation frame from the vertex animation map.
Determining a target vertex from a target model, and determining an initial texture coordinate where the offset of the target vertex in a first frame in an animation frame is located; and determining target texture coordinates of the target animation frames corresponding to the time parameters based on the animation playing speed, the total frame number of the animation frames and the initial texture coordinates, and acquiring vertex offsets from the mapping positions indicated by the target texture coordinates.
Obtaining vertex coordinates of model vertices of the target model; performing offset processing on the vertex coordinates based on the vertex offset to obtain an offset result; transforming the offset result based on the baking transformation matrix to obtain a vertex transformation result; based on the visual angle transformation parameters, transforming the vertex transformation result to obtain a visual angle transformation result; performing vertex coloring on the target model based on the visual angle transformation result to obtain a vertex coloring result; based on the vertex coloring result, texture coordinates, depth information and world space normal vectors of the target model are obtained, and baking map data are obtained; and saving the baking map data to a target texture area in a preset texture map.
In this embodiment, when the preset texture map is generated, only the angle required for the current viewing angle needs to be baked, so that the pressure of the vertex shader in the baking rendering stage is reduced.
Acquiring a baking angle of the target model; mapping the baking angle to the space where the virtual scene is located based on the rotation transformation matrix of the patch model to obtain a space vector corresponding to the baking angle; determining a reference direction, and constructing a local coordinate system based on the reference direction and the space vector; the model vertex position of the patch model is determined based on the center point position of the patch model, the local coordinates of the model vertices of the patch model relative to the center point, and the local coordinate system to adjust the orientation of the patch model by adjusting the model vertex position of the patch model.
The method comprises the steps of obtaining normal vector sampling data from baking mapping data, adjusting the normal vector sampling data based on a rotation transformation matrix of a patch model to obtain an actual normal vector, and rendering a normal effect of the patch model based on the actual normal vector; obtaining depth sampling data from baking map data, and adjusting the depth sampling data based on the spherical radius of a surrounding sphere of a target model and the pixel linear depth of a patch model to obtain actual linear depth; the depth effect of the patch model is rendered based on the actual linear depth.
The present embodiment also provides a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement a rendering method of the above model.
The machine-executable instructions stored on the machine-readable storage medium may implement the following operations in the method for rendering the model:
determining a model position and a camera position in the virtual scene; wherein, the model position is: rendering positions of the target models in the virtual scene; the camera positions are: shooting the position of a virtual camera of a virtual scene in the virtual scene; based on the model position, the camera position, the time parameter and the preset vertex animation mapping, baking the target model to obtain baking mapping data of the target model; wherein, the vertex animation map comprises: a vertex offset of a model vertex of the target model, the vertex offset varying with a time parameter; setting a patch model at a model position, and adjusting an orientation of the patch model based on the model position and the camera position; and rendering the baking map data onto the adjusted patch model to obtain the rendering effect of the target model in the virtual scene.
In the method, vertex animation mapping is preset, firstly, baking is conducted on a target model according to a model position and a camera position, then time parameters are combined, baking mapping data are obtained, then the orientation of a surface patch model located at the model position in a virtual scene is adjusted according to the model position and the camera position, and finally the baking mapping data are rendered to the adjusted surface patch model, so that the rendering effect of the target model in the virtual scene is obtained. By adopting the method, the texture map of the target model required under the current view angle can be baked in real time and efficiently, so that the dynamic target model is rendered on the surface patch model, the appearance effects of the model, actions and the like are enriched, the manufacturing resources are saved, the performance consumption is reduced, and the reality and the flexibility of model rendering are improved.
Determining a baking position of the target model based on the model position and the camera position; wherein, the baking position is: when baking is carried out on the target model, the position of the virtual camera is located; and baking the target model based on the baking position, the time parameter and the preset vertex animation mapping, so as to obtain baking mapping data of the target model.
Determining a baking angle of the target model based on the model position and the camera position; determining a baking direction of the target model based on the baking angle; a baking position of the target model is determined based on the baking direction.
Calculating a difference vector between the camera position and the model position; and carrying out transformation processing on the difference vector through a rotation transformation matrix corresponding to the model position to obtain the baking angle of the target model.
Determining a target texture area corresponding to the baking angle from a preset texture map based on the baking angle; the texture map in the initial state is in a blank state, and comprises a plurality of texture areas, wherein the texture areas are used for: storing a baking result of the target model under a corresponding baking angle; the baking direction of the target model is determined based on the position of the target texture region in the texture map.
Mapping the baking angle from the hemispherical space to a two-dimensional space where the texture map is located to obtain a two-dimensional vector; wherein, hemisphere space is: the virtual camera is relative to the three-dimensional space of the patch model, and the three-dimensional space is a hemispherical space; and determining the region position coordinates of the target texture region in the texture map based on the two-dimensional vector and the region number of the texture map.
The texture map includes a plurality of texture regions, each texture region corresponding to a model position in the virtual scene.
Determining a view angle transformation parameter for transforming map data of a specified size into a target texture region based on region position coordinates of the target texture region in the texture map and the number of regions in the texture map; wherein the transformation parameters include a size scaling parameter and an offset parameter.
Mapping the region position coordinates of the target texture region in the texture map to a hemispherical space to obtain an intermediate result; the hemispherical space is a three-dimensional space of the virtual camera relative to the patch model, and the three-dimensional space is a hemispherical space; the baking direction of the target model is determined based on the intermediate result.
Determining the world coordinates of a central point surrounding the central point of the ball of the target model and the radius of the ball surrounding the ball; and adding the product of the baking direction and the sphere radius to the world coordinate of the central point to obtain the baking position of the baking camera in the model space of the target model.
Determining a baking transformation matrix based on the baking position; obtaining the vertex offset of the model vertex of the target model from a preset vertex animation map based on the time parameter; and baking the target model based on the baking transformation matrix and the vertex offset to obtain baking mapping data.
Determining a first direction, and determining a direction from a baking position to a center point of a surrounding sphere of the target model as a second direction; constructing a viewing angle matrix based on the first direction and the second direction; constructing an orthogonal projection matrix based on the spherical radius of the surrounding sphere of the target model; a baked transformation matrix is generated based on the perspective matrix and the orthogonal projection matrix.
The vertex animation mapping comprises a plurality of mapping fragments, and each mapping fragment corresponds to a baking result of the target model under a certain baking angle; the mapping and slicing comprises the following steps: offset of model vertices in the target model in a plurality of animation frames; and determining a target animation frame based on the time parameter, and acquiring the vertex offset of the model vertex in the target animation frame from the vertex animation map.
Determining a target vertex from a target model, and determining an initial texture coordinate where the offset of the target vertex in a first frame in an animation frame is located; and determining target texture coordinates of the target animation frames corresponding to the time parameters based on the animation playing speed, the total frame number of the animation frames and the initial texture coordinates, and acquiring vertex offsets from the mapping positions indicated by the target texture coordinates.
Obtaining vertex coordinates of model vertices of the target model; performing offset processing on the vertex coordinates based on the vertex offset to obtain an offset result; transforming the offset result based on the baking transformation matrix to obtain a vertex transformation result; based on the visual angle transformation parameters, transforming the vertex transformation result to obtain a visual angle transformation result; performing vertex coloring on the target model based on the visual angle transformation result to obtain a vertex coloring result; based on the vertex coloring result, texture coordinates, depth information and world space normal vectors of the target model are obtained, and baking map data are obtained; and saving the baking map data to a target texture area in a preset texture map.
In this embodiment, when the preset texture map is generated, only the angle required for the current viewing angle needs to be baked, so that the pressure of the vertex shader in the baking rendering stage is reduced.
Acquiring a baking angle of the target model; mapping the baking angle to the space where the virtual scene is located based on the rotation transformation matrix of the patch model to obtain a space vector corresponding to the baking angle; determining a reference direction, and constructing a local coordinate system based on the reference direction and the space vector; the model vertex position of the patch model is determined based on the center point position of the patch model, the local coordinates of the model vertices of the patch model relative to the center point, and the local coordinate system to adjust the orientation of the patch model by adjusting the model vertex position of the patch model.
The method comprises the steps of obtaining normal vector sampling data from baking mapping data, adjusting the normal vector sampling data based on a rotation transformation matrix of a patch model to obtain an actual normal vector, and rendering a normal effect of the patch model based on the actual normal vector; obtaining depth sampling data from baking map data, and adjusting the depth sampling data based on the spherical radius of a surrounding sphere of a target model and the pixel linear depth of a patch model to obtain actual linear depth; the depth effect of the patch model is rendered based on the actual linear depth.
The rendering method and apparatus of a model and the computer program product of the electronic device provided in the embodiments of the present invention include a machine-readable storage medium storing program codes, and the instructions included in the program codes may be used to execute the methods described in the foregoing method embodiments; for specific implementation, reference may be made to the method embodiments, which are not repeated herein.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In addition, in the description of embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood by those skilled in the art in specific cases.
The functions, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored on a machine-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that: the above examples are only specific embodiments of the present invention for illustrating the technical solution of the present invention, but not for limiting the scope of the present invention, and although the present invention has been described in detail with reference to the foregoing examples, it will be understood by those skilled in the art that the present invention is not limited thereto: any person skilled in the art may modify or easily conceive of the technical solution described in the foregoing embodiments, or perform equivalent substitution of some of the technical features, while remaining within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (20)

1. A method of rendering a model, the method comprising:
determining a model position and a camera position in the virtual scene; wherein, the model position is: rendering positions of the target models in the virtual scene; the camera position is: shooting the position of a virtual camera of the virtual scene in the virtual scene;
based on the model position, the camera position, the time parameter and a preset vertex animation map, baking the target model to obtain baking map data of the target model; wherein, the vertex animation map comprises: the vertex offset of the model vertex of the target model, the vertex offset varying with the time parameter;
setting a patch model at the model position, and adjusting an orientation of the patch model based on the model position and the camera position;
and rendering the baking map data onto the adjusted patch model to obtain the rendering effect of the target model in the virtual scene.
2. The method of claim 1, wherein the step of baking the target model based on the model position, the camera position, the time parameter, and a preset vertex animation map to obtain baking map data of the target model comprises:
Determining a baking position of the target model based on the model position and the camera position; wherein, the baking position is: when baking is carried out on the target model, the position of the virtual camera is located;
and baking the target model based on the baking position, the time parameter and the preset vertex animation map to obtain baking map data of the target model.
3. The method of claim 2, wherein determining the baking position of the target model based on the model position and the camera position comprises:
determining a baking angle of the target model based on the model position and the camera position;
determining a baking direction of the target model based on the baking angle;
a baking position of the target model is determined based on the baking direction.
4. A method according to claim 3, wherein the step of determining the baking angle of the target model based on the model position and the camera position comprises:
calculating a difference vector of the camera position and the model position;
and carrying out transformation processing on the difference vector through a rotation transformation matrix corresponding to the model position to obtain the baking angle of the target model.
5. A method according to claim 3, wherein the step of determining the baking direction of the target model based on the baking angle comprises:
determining a target texture area corresponding to the baking angle from a preset texture map based on the baking angle; the texture map in the initial state is in a blank state, and the texture map comprises a plurality of texture areas, wherein the texture areas are used for: storing a baking result of the target model at a corresponding baking angle;
a baking direction of the target model is determined based on a position of the target texture region in the texture map.
6. The method of claim 5, wherein determining a target texture region corresponding to the baking angle from a preset texture map based on the baking angle comprises:
mapping the baking angle from a hemispherical space to a two-dimensional space in which the texture map is positioned, so as to obtain a two-dimensional vector; wherein, hemisphere space is: the virtual camera is relative to the three-dimensional space of the patch model, and the three-dimensional space is a hemispherical space;
and determining the region position coordinates of the target texture region on the texture map based on the two-dimensional vector and the region number of the texture map.
7. The method of claim 6, wherein the texture map includes a plurality of texture regions, each texture region corresponding to a model location in the virtual scene.
8. The method of claim 5, wherein after the step of determining a target texture region corresponding to the baking angle from a preset texture map based on the baking angle, the method further comprises:
determining a view transformation parameter for transforming map data of a specified size into the target texture region based on region position coordinates of the target texture region in the texture map and the number of regions in the texture map;
wherein the transformation parameters include a size scaling parameter and an offset parameter.
9. The method of claim 5, wherein determining the baking direction of the target model based on the location of the target texture region in the texture map comprises:
mapping the region position coordinates of the target texture region in the texture map to a hemispherical space to obtain an intermediate result; the hemispherical space is a three-dimensional space of the virtual camera relative to the patch model, and the three-dimensional space is a hemispherical space;
A baking direction of the target model is determined based on the intermediate result.
10. A method according to claim 3, wherein the step of determining the baking position of the target model based on the baking direction comprises:
determining the world coordinates of a center point of the object model surrounding a center point of the sphere and the radius of the sphere surrounding the sphere;
and adding the product of the baking direction and the sphere radius to the world coordinate of the central point to obtain the baking position of the target model.
11. The method of claim 2, wherein the step of baking the target model based on the baking position, the time parameter, and a preset vertex animation map to obtain baking map data of the target model comprises:
determining a baking transformation matrix based on the baking position;
acquiring the vertex offset of the model vertex of the target model from a preset vertex animation map based on the time parameter;
and based on the baking transformation matrix and the vertex offset, baking the target model to obtain baking map data.
12. The method of claim 11, wherein the step of determining a baking transformation matrix based on the baking position comprises:
Determining a first direction, and determining a direction from the baking position to a center point of a surrounding sphere of the target model as a second direction;
constructing a viewing angle matrix based on the first direction and the second direction; constructing an orthogonal projection matrix based on the spherical radius of the surrounding sphere of the target model;
a baked transformation matrix is generated based on the perspective matrix and the orthogonal projection matrix.
13. The method of claim 11, wherein the vertex animation map comprises a plurality of map tiles, each of the map tiles corresponding to a baking result of the target model at a baking angle; the mapping and slicing comprises the following steps: the offset of model vertexes in the target model in a plurality of animation frames;
the step of obtaining the vertex offset of the model vertex of the target model from a preset vertex animation map based on the time parameter comprises the following steps:
and determining a target animation frame based on the time parameter, and acquiring the vertex offset of the model vertex in the target animation frame from the vertex animation map.
14. The method of claim 13, wherein the step of obtaining the vertex offset for the model vertices in the target animation frame from the vertex animation map comprises:
Determining a target vertex from the target model, and determining an initial texture coordinate where the offset of the target vertex in a first frame in the animation frame is located;
and determining target texture coordinates of a target animation frame corresponding to the time parameter based on the animation playing speed, the total frame number of the animation frame and the initial texture coordinates, and acquiring vertex offset from a map position indicated by the target texture coordinates.
15. The method of claim 11, wherein the step of baking the target model based on the baked transformation matrix and the vertex offsets to obtain baked map data comprises:
obtaining vertex coordinates of model vertices of the target model;
performing offset processing on the vertex coordinates based on the vertex offset to obtain an offset result; transforming the offset result based on the baking transformation matrix to obtain a vertex transformation result;
based on the visual angle transformation parameters, transforming the vertex transformation result to obtain a visual angle transformation result;
performing vertex coloring on the target model based on the visual angle transformation result to obtain a vertex coloring result;
Based on the vertex coloring result, texture coordinates, depth information and world space normal vectors of the target model are obtained, and baking map data are obtained;
and storing the baking map data into a target texture area in a preset texture map.
16. The method of claim 1, wherein the step of adjusting the orientation of the patch model based on the model position and the camera position comprises:
acquiring a baking angle of the target model;
mapping the baking angle to the space where the virtual scene is located based on the rotation transformation matrix of the patch model, and obtaining a space vector corresponding to the baking angle;
determining a reference direction, and constructing a local coordinate system based on the reference direction and the space vector;
and determining the model vertex position of the patch model based on the center point position of the patch model, the local coordinates of the model vertex of the patch model relative to the center point and the local coordinate system, so as to adjust the orientation of the patch model by adjusting the model vertex position of the patch model.
17. The method of claim 1, wherein the step of rendering the baked map data onto the adjusted patch model to obtain a rendering effect of the target model in the virtual scene comprises:
Acquiring normal vector sampling data from the baking mapping data, adjusting the normal vector sampling data based on a rotation transformation matrix of the patch model to obtain an actual normal vector, and rendering a normal effect of the patch model based on the actual normal vector;
obtaining depth sampling data from the baking map data, and adjusting the depth sampling data based on the spherical radius of the surrounding sphere of the target model and the pixel linear depth of the patch model to obtain the actual linear depth; rendering a depth effect of the patch model based on the actual linear depth.
18. A rendering device of a model, the device comprising:
a first determining module for determining a model position and a camera position in a virtual scene; wherein the model position is: a rendering position of the target model in the virtual scene; and the camera position is: a position, in the virtual scene, of a virtual camera that shoots the virtual scene;
a baking module for baking the target model based on the model position, the camera position, a time parameter and a preset vertex animation map to obtain baking map data of the target model; wherein the vertex animation map comprises: a vertex offset of a model vertex of the target model, the vertex offset varying with the time parameter;
an adjustment module for setting a patch model at the model position and adjusting an orientation of the patch model based on the model position and the camera position;
and a rendering module for rendering the baking map data onto the adjusted patch model to obtain a rendering effect of the target model in the virtual scene.
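For orientation only, the four modules of claim 18 above might be wired together per frame roughly as below; the class and method names are purely hypothetical:

```python
class ModelRenderer:
    def __init__(self, determiner, baker, adjuster, renderer):
        self.determiner = determiner   # first determining module
        self.baker = baker             # baking module
        self.adjuster = adjuster       # adjustment module
        self.renderer = renderer       # rendering module

    def render_frame(self, scene, time):
        model_pos, camera_pos = self.determiner.determine(scene)
        bake_data = self.baker.bake(model_pos, camera_pos, time, scene.vertex_animation_map)
        patch = self.adjuster.place_patch(model_pos, camera_pos)
        return self.renderer.render(bake_data, patch)
```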
19. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor, the processor executing the machine executable instructions to implement the method of rendering a model of any one of claims 1-17.
20. A machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement a method of rendering a model according to any one of claims 1-17.
CN202310345825.6A 2023-03-28 2023-03-28 Rendering method and device of model and electronic equipment Pending CN116485972A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310345825.6A CN116485972A (en) 2023-03-28 2023-03-28 Rendering method and device of model and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310345825.6A CN116485972A (en) 2023-03-28 2023-03-28 Rendering method and device of model and electronic equipment

Publications (1)

Publication Number Publication Date
CN116485972A true CN116485972A (en) 2023-07-25

Family

ID=87220428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310345825.6A Pending CN116485972A (en) 2023-03-28 2023-03-28 Rendering method and device of model and electronic equipment

Country Status (1)

Country Link
CN (1) CN116485972A (en)

Similar Documents

Publication Publication Date Title
US5841441A (en) High-speed three-dimensional texture mapping systems and methods
US7629972B2 (en) Image-based protruded displacement mapping method and bi-layered displacement mapping method using the same
CN113838176B (en) Model training method, three-dimensional face image generation method and three-dimensional face image generation equipment
CN109325990B (en) Image processing method, image processing apparatus, and storage medium
US20020118217A1 (en) Apparatus, method, program code, and storage medium for image processing
US20070030266A1 (en) Scheme for providing wrinkled look in computer simulation of materials
CN108805971B (en) Ambient light shielding method
US8854392B2 (en) Circular scratch shader
CN112184873B (en) Fractal graph creation method, fractal graph creation device, electronic equipment and storage medium
CN110570500B (en) Character drawing method, device, equipment and computer readable storage medium
CN114693856B (en) Object generation method and device, computer equipment and storage medium
CN111494945A (en) Virtual object processing method and device, storage medium and electronic equipment
CN111583398A (en) Image display method and device, electronic equipment and computer readable storage medium
US5793372A (en) Methods and apparatus for rapidly rendering photo-realistic surfaces on 3-dimensional wire frames automatically using user defined points
CN115103134A (en) LED virtual shooting cutting synthesis method
US10909752B2 (en) All-around spherical light field rendering method
CN113230659A (en) Game display control method and device
CN108230430B (en) Cloud layer mask image processing method and device
WO2017174006A1 (en) Image processing method and device
CN108986228B (en) Method and device for displaying interface in virtual reality
CN116485972A (en) Rendering method and device of model and electronic equipment
CN113034350B (en) Vegetation model processing method and device
US11120606B1 (en) Systems and methods for image texture uniformization for multiview object capture
CN112002019B (en) Method for simulating character shadow based on MR mixed reality
CN114832375A (en) Ambient light shielding processing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230818

Address after: Room 2047, Floor 2, No. 24, Lane 315, Fenggu Road, Xuhui District, Shanghai, 200000

Applicant after: Shanghai NetEasy Brilliant Network Technology Co.,Ltd.

Address before: 310000 7 storeys, Building No. 599, Changhe Street Network Business Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: NETEASE (HANGZHOU) NETWORK Co.,Ltd.
