CN114494548A - Virtual model generation method and device and electronic equipment - Google Patents

Virtual model generation method and device and electronic equipment Download PDF

Info

Publication number
CN114494548A
CN114494548A (application CN202111603106.7A)
Authority
CN
China
Prior art keywords
model
rendered
virtual
generating
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111603106.7A
Other languages
Chinese (zh)
Inventor
李德哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202111603106.7A priority Critical patent/CN114494548A/en
Publication of CN114494548A publication Critical patent/CN114494548A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a virtual model generation method and device and an electronic device. The method comprises the following steps: obtaining at least one model, wherein the at least one model is used for generating a target model in a combined manner; determining a sight direction between a virtual camera and the at least one model; rendering the at least one model according to the sight direction to obtain at least one rendered model; and combining the rendered at least one model to generate the target model, so that illumination information on the surface of the target model is adjusted based on the position change of the virtual camera in the scene. The invention solves the technical problem in the prior art that the complex processing methods used to produce a jelly effect on a virtual model incur high system resource overhead.

Description

Virtual model generation method and device and electronic equipment
Technical Field
The invention relates to the field of computer vision, and in particular to a virtual model generation method and device and an electronic device.
Background
At present, effects that exist only for a short time are common in mobile games, for example the flash of a bullet. To present a realistic picture, a jelly effect is often added to a virtual model that moves at high speed or vibrates rapidly relative to the camera. However, the processing methods used to produce these effects are complex, which not only results in poor cost-effectiveness but also incurs a large amount of system resource overhead.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a virtual model generation method and device and an electronic device, so as to at least solve the technical problem in the prior art that the complex processing methods used to produce a jelly effect on a virtual model incur high system resource overhead.
According to an aspect of the embodiments of the present invention, there is provided a method for generating a virtual model, including: obtaining at least one model, wherein the at least one model is used for generating a target model in a combined manner; determining a sight direction between the virtual camera and the at least one model; rendering the at least one model according to the sight direction to obtain at least one rendered model; and combining the rendered at least one model to generate the target model, so as to adjust illumination information on the surface of the target model based on the position change of the virtual camera in the scene.
Further, the at least one model includes at least a first model and a second model, and the first model and the second model form the target model in a nested manner, wherein the shape of the first model is similar to the shape of the second model, and the volume of the first model is larger than the volume of the second model.
Further, the method for generating the virtual model further comprises the following steps: determining a camera position of a virtual camera in a scene; and obtaining a first sight line direction corresponding to the first model according to the direction vector between the camera position and each vertex of the first model.
Further, the method for generating the virtual model further comprises the following steps: determining a first normal direction corresponding to each vertex in the first model; determining a first reflection result corresponding to the first model according to the first sight line direction and the first normal line direction; carrying out inversion processing on the first reflection result to obtain a second reflection result; obtaining a first transparency value of the first model from the second reflection result; and performing blurring processing on the edge of the first model based on the first transparency value, and generating the texture of the first model based on a preset texture map to obtain the rendered first model.
Further, the method for generating the virtual model further comprises the following steps: determining a camera position of a virtual camera in a scene; and obtaining a second sight line direction corresponding to the second model according to the direction vector between the camera position and each vertex of the second model.
Further, the method for generating the virtual model further comprises the following steps: determining a second normal direction corresponding to each vertex in the second model; determining a third reflection result corresponding to the second model according to the second sight line direction and the second normal line direction; obtaining a second transparency value of the second model from the third reflection result; and performing blurring processing on the edge of the second model based on the second transparency value, and generating texture of the second model based on a preset texture map to obtain the rendered second model.
Further, the method for generating the virtual model further comprises the following steps: combining the at least one model according to the nesting sequence of the at least one model to obtain an initial model; acquiring a preset highlight map; and carrying out highlight processing on the initial model based on the highlight map to obtain a target model.
According to another aspect of the embodiments of the present invention, there is also provided a device for generating a virtual model, including: an acquisition module configured to acquire at least one model, wherein the at least one model is used for generating a target model in a combined manner; a determination module configured to determine a sight direction between the virtual camera and the at least one model; a rendering module configured to render the at least one model according to the sight direction to obtain at least one rendered model; and a combination module configured to combine the rendered at least one model to generate the target model, so as to adjust illumination information on the surface of the target model based on the position change of the virtual camera in the scene.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the above-mentioned method for generating a virtual model when running.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement a method for running a program, wherein the program is arranged to perform the above-described method for generating a virtual model when running.
In the embodiments of the invention, the target model is obtained by combining models that have been rendered according to the sight direction: at least one model is obtained, the sight direction between the virtual camera and the at least one model is determined, the at least one model is rendered according to the sight direction to obtain at least one rendered model, and the rendered at least one model is then combined to generate the target model, so that the illumination information on the surface of the target model is adjusted based on the position change of the virtual camera in the scene. The at least one model is used for generating the target model in a combined manner.
In this process, the target model is composed of a plurality of models, and each model is rendered according to the sight direction, which avoids the high system overhead caused in the prior art by producing the jelly effect with a map. Furthermore, the rendered models are combined to generate the target model, that is, a model with a jelly effect, which effectively avoids the high system overhead caused by producing the jelly effect with a complex processing method such as the RayMarching technique. In addition, because each model is rendered according to the sight direction, each model can present a realistic effect during movement, thereby improving the user experience.
Therefore, the purpose of obtaining the target model by combining models rendered according to the sight direction is achieved, the technical effect of producing a jelly effect at low cost is realized, and the technical problem that the complex processing methods used in the prior art to produce a jelly effect on a virtual model incur high system resource overhead is solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of an alternative method of generating a virtual model according to an embodiment of the invention;
FIG. 2 is a schematic illustration of an alternative first model or second model according to embodiments of the invention;
FIG. 3 is a schematic illustration of an alternative nesting of first and second models in accordance with embodiments of the present invention;
FIG. 4 is a schematic illustration of an alternative lake surface reflection and refraction according to an embodiment of the invention;
FIG. 5 is a schematic view of an alternative line of sight and normal in accordance with embodiments of the invention;
FIG. 6 is a schematic diagram of an alternative rendered second model in accordance with embodiments of the present invention;
FIG. 7 is a schematic diagram of an alternative initial model according to an embodiment of the invention;
FIG. 8 is a schematic diagram of an alternative virtual model generation apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the prior art, the jelly effect of the virtual model is generally achieved in the following manner:
(1) Using a particle system to emit a single image or a sequence of jelly images to simulate the effect of a jelly bullet.
(2) Using the RayMarching (ray marching) technique to realize the jelly effect.
In the first technique, the jelly effect is simulated with a single-frame map or a sequence of maps, so the illumination of the jelly effect is rigid and not vivid, and the illumination information on the model cannot change with the movement of the lens, which makes the effect feel stiff. In the second technique, although the dynamics of the jelly effect simulated with the RayMarching technique are more natural, the cost is high, the technique is not suitable for mobile terminals, and for an effect that exists only briefly the resource overhead is high and the cost-effectiveness is low. In addition, using PBR (Physically Based Rendering) or a more complex shader also leads to high system resource overhead, low yield and low cost-effectiveness. Therefore, in order to solve the above problems, the present application provides a method for generating a virtual model.
In accordance with an embodiment of the present invention, an embodiment of a method for generating a virtual model is provided. It should be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system, for example as a set of computer-executable instructions, and, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from the one here.
Fig. 1 is a schematic diagram of an alternative virtual model generation method according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
step S102, at least one model is obtained, wherein the at least one model is used for combining and generating a target model.
In step S102, the at least one model may be obtained by an electronic device, an application system, a computing device, or the like; in this embodiment, the at least one model is obtained by a computing device. Before obtaining the at least one model, the computing device may build the at least one model based on 3ds Max or other modeling software.
Optionally, the computing device may combine the models to generate the target model by nesting the models one inside another, by covering the outside of one model with a plurality of models, by covering the outside of a plurality of models with one model, or in other manners.
It should be noted that, by composing the target model of a plurality of models, that is, by giving the target model a multilayer structure, the layered visual appearance of the jelly effect can be matched effectively, which facilitates the subsequent realization of the jelly effect.
Step S104, determining a sight line direction between the virtual camera and the at least one model.
In step S104, the line of sight between the virtual camera and at least one model may be the line of sight between the virtual camera and each vertex on the model, or may be the line of sight between the virtual camera and a part of the vertices on the model.
It should be noted that, in practical applications, different sight directions require the model to exhibit different effects; therefore, the sight direction between the virtual camera and the at least one model is determined so that the model can be rendered accordingly and the effect it exhibits becomes more realistic.
And S106, rendering at least one model according to the sight direction to obtain at least one rendered model.
In step S106, rendering the at least one model may mean setting the transparency of the model, setting a local color of the model, setting a local brightness of the model, or setting other factors that influence the rendering effect of the model. The computing device may render the models in the same way or in different ways, and at least one of the rendered models differs from the others.
It should be noted that, by rendering at least one model according to the sight direction, each obtained model can present a real effect during movement, thereby improving user experience.
And S108, performing combined processing on the rendered at least one model to generate a target model so as to adjust illumination information of the surface of the target model based on the position change of the virtual camera in the scene.
In step S108, the computing device may perform a combination process on the rendered model based on the aforementioned one-to-one nesting or one-to-many/many-to-one wrapping or other manners, so as to generate a target model with a multi-level structure. In one aspect, the computing device may place a relatively transparent model on the outside and a model with local brightness or local color on the inside of the relatively transparent model to achieve a jelly effect. On the other hand, the computing device may also nest two relatively transparent models to achieve a jelly effect. In addition, the computing device may also combine the rendered models in other ways to achieve the jelly effect.
Further, since there is a model that is rendered based on the viewing direction, when the position of the virtual camera is changed, the jelly effect presented by the target model is also changed accordingly.
It should be noted that the rendered models are combined to generate the target model, that is, a model with a jelly effect. On the one hand, this avoids the high system overhead caused in the prior art by producing the jelly effect with a map or with a complex processing method such as the RayMarching technique; on the other hand, the illumination information on the model can change as the lens moves, which improves the realism of the jelly effect.
Based on the solutions defined in steps S102 to S108, in the embodiment of the present invention the target model is obtained by combining models rendered according to the sight direction: at least one model is obtained, the sight direction between the virtual camera and the at least one model is determined, the at least one model is rendered according to the sight direction to obtain at least one rendered model, and the rendered at least one model is then combined to generate the target model, so that the illumination information on the surface of the target model is adjusted based on the position change of the virtual camera in the scene. The at least one model is used for generating the target model in a combined manner.
It is easy to note that, in the above process, since the target model is composed of a plurality of models and each model is rendered according to the sight direction, the high system overhead caused in the prior art by producing the jelly effect with a map is avoided. Furthermore, the rendered models are combined to generate the target model, that is, a model with a jelly effect, which effectively avoids the high system overhead caused by producing the jelly effect with a complex processing method such as the RayMarching technique. In addition, because each model is rendered according to the sight direction, each model can present a realistic effect during movement, thereby improving the user experience.
Therefore, the purpose of obtaining the target model by combining models rendered according to the sight direction is achieved, the technical effect of producing a jelly effect at low cost is realized, and the technical problem that the complex processing methods used in the prior art to produce a jelly effect on a virtual model incur high system resource overhead is solved.
In an alternative embodiment, the at least one model comprises at least a first model and a second model, and the first model and the second model form the target model in a nested manner, wherein the shape of the first model is similar to the shape of the second model, and the volume of the first model is larger than the volume of the second model.
Optionally, in this embodiment, the computing device produces a jelly effect for a bullet. The computing device may create a first model and a second model in advance and nest the rendered first model and second model to form a bullet model with a jelly effect (i.e., the target model).
Specifically, the computing device may first create a cylinder in 3ds Max and then adjust its shape with an FFD deformer to obtain the first model or the second model, where the first model and the second model may be the virtual model shown in fig. 2. The computing device can then obtain the second model by copying and shrinking the first model, or obtain the first model by copying and enlarging the second model. Next, the computing device nests the first model, which has the larger volume, around the second model; the nesting effect is shown in fig. 3. Default mapping coordinates are used, and the two models are grouped together and imported into the engine for rendering the first model and the second model. The nesting relationship between the first model and the second model may be coaxial nesting (that is, their axes coincide) or non-coaxial nesting.
It should be noted that forming the target model by nesting the first model and the second model makes it convenient to produce the jelly effect while effectively reducing the amount of calculation and, in turn, the resource occupation.
In an alternative embodiment, the computing device may first determine a camera position of the virtual camera in the scene, and then obtain a first gaze direction corresponding to the first model according to a direction vector between the camera position and each vertex of the first model to determine a gaze direction between the virtual camera and the first model.
Alternatively, the computing device may determine the camera position of the virtual camera by acquiring a relative or absolute position of the virtual camera in the scene. Likewise, the computing device may determine the model position of the first model by obtaining a relative or absolute position of the first model in the scene, and then determine a vertex position of each vertex on the first model based on the model position.
Further, by connecting the camera position with each vertex position on the first model, a direction vector between the camera position and each vertex of the first model can be determined, and the obtained direction vectors are the first sight line direction between the virtual camera and the first model.
It should be noted that the first sight line direction corresponding to the first model is obtained according to the direction vector between the camera position and each vertex of the first model, so that the first sight line direction is determined quickly and accurately.
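As an illustrative sketch only (not the exact code of this application), the per-vertex sight direction described above could be computed in an HLSL-style vertex stage as follows; the names modelMatrix, worldCameraPos and SightDirection are assumptions introduced here for illustration:
float4x4 modelMatrix;      // object-to-world transform supplied by the engine (assumed name)
float3 worldCameraPos;     // camera position of the virtual camera in the scene (world space)
float3 SightDirection(float3 positionOS)
{
    // transform the vertex from object space to world space
    float3 worldPos = mul(modelMatrix, float4(positionOS, 1.0)).xyz;
    // direction vector from the vertex towards the camera position:
    // this is the per-vertex first sight direction V used later for the Fresnel term
    return normalize(worldCameraPos - worldPos);
}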
In an alternative embodiment, the computing device may render the first model according to the sight direction to obtain the rendered first model in the following manner.
Optionally, the computing device determines a first normal direction corresponding to each vertex in the first model, then determines a first reflection result corresponding to the first model according to the first sight line direction and the first normal direction, then performs inversion processing on the first reflection result to obtain a second reflection result, obtains a first transparency value of the first model from the second reflection result, performs blurring processing on an edge of the first model based on the first transparency value, and generates a texture of the first model based on a preset texture map, thereby obtaining the rendered first model.
First, the principle of determining the first reflection result corresponding to the first model based on the first sight direction and the first normal direction is described. The core principle is the Fresnel effect: when light propagates from a medium with refractive index n1 into another medium with refractive index n2, reflection and refraction of the light can occur simultaneously at the interface between the two. Fig. 4 is a schematic diagram of alternative lake surface reflection and refraction according to an embodiment of the present invention. As shown in fig. 4, taking a phenomenon from everyday life as an example, when walking around a lake and observing the same lake surface from different positions, the observed effect is always different: nearby, the water looks clear and the bottom of the lake can be seen, while farther away the surface mainly shows reflections. This happens because the reflection is weak when the line of sight is perpendicular to the interface, and when the line of sight forms an angle with the interface, the smaller that angle, the more pronounced the reflection. The Fresnel effect expresses the relationship between this angle and the reflected light as a mathematical formula (refraction is affected as well). The Fresnel equation (Schlick approximation) is as follows:
R(θ) = R(0) + (1 - R(0)) * (1 - cos θ)^5
R(0) = ((n1 - n2) / (n1 + n2))^2
In the above equations, θ generally refers to the angle between the half-angle vector (obtained by adding the light source direction, i.e. the vertex-to-light-source vector, and the camera direction, i.e. the vertex-to-camera vector) and the camera direction vector. When the influence of light rays is not considered, the half-angle vector can be replaced by the model normal, so θ becomes the angle between the camera direction vector and the normal vector. R(0) represents the Fresnel reflectance at an angle of 0. n1 is the refractive index of the medium on the incident side, which is generally air or vacuum, and n2 is the refractive index of the surface material; the refractive index differs from material to material, for example: vacuum 1.0, air 1.000293, water 1.333333 and diamond 2.417. In the shader that implements the Fresnel effect, the computing device currently ignores the influence of the refractive indices, that is, n1 is set equal to n2, so that R(0) = 0 and R(θ) = (1 - cos θ)^5, which reduces the amount of computation. From the above it is easy to see that the Fresnel effect depends on the angle between the line of sight and the surface, and the smaller the angle with the surface, the stronger the reflection; applied to this formula, this can be restated as: the larger the angle between the line of sight and the normal, the stronger the reflection. This conclusion is verified as follows. Fig. 5 is a schematic diagram of an alternative line of sight and normal according to an embodiment of the present invention. As shown in fig. 5, when point A transitions to point B, the angle between the line of sight and the normal becomes larger and larger. Assuming that the angle between AP and the normal NA at point A is 0, and the angle between BP and the normal NB at point B is 120 degrees (2π/3), substituting these into the formula gives R(0) = 0 and R(2π/3) = (1 + 0.5)^5 ≈ 7.6, which proves that the reflection term grows as the angle between the line of sight and the normal increases, and thus explains the rationality of determining the first reflection result corresponding to the first model based on the first sight direction and the first normal direction.
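For illustration only, the Schlick form of the Fresnel equation above could be written in an HLSL-style shader as follows; the function and parameter names are assumptions, and the closing comment shows the simplification adopted here when the refractive indices are ignored:
float SchlickFresnel(float3 V, float3 N, float n1, float n2)
{
    // R(0): reflectance at an angle of 0, computed from the two refractive indices
    float r0 = (n1 - n2) / (n1 + n2);
    r0 = r0 * r0;
    float cosTheta = saturate(dot(normalize(V), normalize(N)));
    // Schlick approximation: R(theta) = R(0) + (1 - R(0)) * (1 - cos(theta))^5
    return r0 + (1.0 - r0) * pow(1.0 - cosTheta, 5.0);
}
// When n1 is set equal to n2 (refraction ignored, as described above), r0 becomes 0
// and the expression reduces to pow(1.0 - dot(V, N), 5.0).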
Specifically, the computing device may compute the dot product of each vertex normal (i.e., the first normal direction) and the corresponding sight direction (i.e., the first sight direction) of the first model in the scene to obtain the first reflection result. Optionally, the computing device may implement the calculation of the first reflection result in the nx2 software with the following code:
float fresnel = dot(V, N);   // cosine of the angle between the sight direction and the normal
return fresnel;              // first reflection result
where V denotes a first line of sight direction and N denotes a first normal direction.
Further, the computing device may implement the inversion of the first reflection result in the nx2 software to obtain the second reflection result as follows:
float fresnel = 1 - dot(V, N);   // inverted reflection result, later used as the alpha (transparency) value
the computing device then outputs the second reflection result directly as an alpha value as a first transparency value for the first model to blur the edges of the first model based on the first transparency value. Optionally, the computing device may further perform power calculation in the foregoing process to control the strength of blurring the edge of the first model.
The computing device may then obtain the preset texture map from a database or other storage device to determine at least the texture of the first model, and may also determine the color of the first model based on the preset texture map, thereby obtaining the rendered first model.
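A minimal fragment-stage sketch of the rendering of the first model described above, assuming HLSL-style syntax (baseMap, edgePower and the function name are illustrative and not taken from this application), could look as follows:
sampler2D baseMap;   // preset texture map
float edgePower;     // assumed tuning parameter for the strength of the edge blurring
float4 FragOuter(float3 V, float3 N, float2 uv) : COLOR
{
    // second reflection result: inverted Fresnel term
    float fresnel = 1.0 - dot(normalize(V), normalize(N));
    // first transparency value; the power controls how strongly the edge is blurred
    float alpha = pow(saturate(fresnel), edgePower);
    // texture (and colour) of the first model from the preset texture map
    float3 color = tex2D(baseMap, uv).rgb;
    return float4(color, alpha);
}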
It should be noted that, by determining the first reflection result corresponding to the first model according to the first sight line direction and the first normal line direction, accurate calculation of the first reflection result is achieved, and further, the first transparency value determined based on the first reflection result is more suitable for an actual application scenario. Furthermore, the edge of the first model is subjected to blurring processing according to the first transparency value, so that the first model is subjected to transparency processing, and the rendered first model can well show the characteristic of a jelly effect.
In an alternative embodiment, the computing device may first determine a camera position of the virtual camera in the scene, and then obtain a second gaze direction corresponding to the second model according to a direction vector between the camera position and each vertex of the second model to determine a gaze direction between the virtual camera and the second model.
Alternatively, the computing device may determine the camera position of the virtual camera by acquiring a relative or absolute position of the virtual camera in the scene; likewise, the computing device may determine the model position of the second model by obtaining a relative or absolute position of the second model in the scene, and then determine the vertex position of each vertex on the second model based on the model position.
Further, by connecting the camera position with each vertex position on the second model, a direction vector between the camera position and each vertex of the second model can be determined, and a plurality of direction vectors, namely, a second sight line direction between the virtual camera and the second model, are obtained.
It should be noted that the second sight line direction corresponding to the second model is obtained according to the direction vector between the camera position and each vertex of the second model, so that the second sight line direction is determined quickly and accurately.
In an alternative embodiment, the computing device may render the second model according to the sight direction to obtain the rendered second model in the following manner.
Optionally, the computing device determines a second normal direction corresponding to each vertex in the second model, determines a third reflection result corresponding to the second model according to the second sight line direction and the second normal direction, obtains a second transparency value of the second model from the third reflection result, performs blurring processing on an edge of the second model based on the second transparency value, and generates a texture of the second model based on a preset texture map, so as to obtain the rendered second model.
Specifically, the computing device may compute the dot product of each vertex normal (i.e., the second normal direction) and the corresponding sight direction (i.e., the second sight direction) of the second model in the scene to obtain the third reflection result. Optionally, the computing device may implement the calculation of the third reflection result in the nx2 software with the following code:
float fresnel = dot(V, N);   // cosine of the angle between the sight direction and the normal
return fresnel;              // third reflection result
where V denotes the second line of sight direction and N denotes the second normal direction.
The computing device then outputs the third reflection result directly as the alpha value, i.e. as the second transparency value of the second model, so that the edge of the second model is blurred based on the second transparency value. Optionally, the computing device may additionally apply a power (pow) operation in the above process to control the strength of the blurring of the edge of the second model.
Then, the computing device may obtain the preset texture map from a database or other storage device to determine at least the texture of the second model, and may also determine the color of the second model based on the preset texture map, thereby obtaining the rendered second model; a schematic diagram of the rendered second model is shown in fig. 6.
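Similarly, a minimal sketch of the fragment stage of the second (inner) model, again with assumed HLSL-style names and reusing the baseMap and edgePower declarations from the earlier sketch, differs from the outer model only in that the Fresnel term is used directly, without inversion:
float4 FragInner(float3 V, float3 N, float2 uv) : COLOR
{
    // third reflection result, used directly as the second transparency value
    float fresnel = dot(normalize(V), normalize(N));
    float alpha = pow(saturate(fresnel), edgePower);     // optional power for edge control
    float3 color = tex2D(baseMap, uv).rgb;               // texture from the preset texture map
    return float4(color, alpha);
}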
It should be noted that, by determining the third reflection result corresponding to the second model according to the second sight line direction and the second normal line direction, accurate calculation of the third reflection result is achieved, and further the second transparency value determined based on the third reflection result is more in line with the actual application scenario. Furthermore, the edge of the second model is subjected to blurring processing according to the second transparency value, so that the second model is subjected to transparency processing, and the rendered second model can well show the characteristic of a jelly effect.
In an alternative embodiment, after obtaining the rendered first model and the rendered second model, the computing device may combine the at least one model according to the nesting order of the at least one model to obtain an initial model; a schematic diagram of the initial model is shown in fig. 7. A preset highlight map is then obtained, and finally highlight processing is performed on the initial model based on the highlight map to obtain the target model.
Specifically, in this embodiment, the computing device nests the rendered second model within the rendered first model, thereby obtaining an initial model with a jelly effect. Optionally, the computing device may obtain a preset highlight map from a database or other storage device, and emit a single highlight map or a sequence of highlight maps through the particle system to simulate highlights, so as to obtain a target model with a jelly effect and highlights.
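One possible way to realize the highlight superposition, shown only as a sketch under the assumption that the particle-emitted highlight map is drawn with additive blending over the initial model (highlightMap and intensity are illustrative names, not from the source):
sampler2D highlightMap;   // preset highlight map (a single map or one frame of a sequence)
float intensity;          // assumed brightness factor for the highlight
float4 FragHighlight(float2 uv) : COLOR
{
    float4 h = tex2D(highlightMap, uv);
    // with additive blending, the highlight simply brightens the surface of the initial model
    return float4(h.rgb * intensity, h.a);
}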
Optionally, because the bullet special effect exists for a very short time, performing the highlight processing on the initial model based on the highlight map can effectively reduce the amount of calculation of the computing device, so that the realism of the model is improved while the working efficiency of the computing device is guaranteed.
It should be noted that, in this method, an inner model and an outer model are built: the outer model uses a blend of the Fresnel effect and superposed highlights, and the inner model uses one minus the Fresnel term as its transparency channel. By adopting a few simple production techniques combined with a simple shader, a jelly effect that meets the functional requirements of a project is achieved at low cost. On the one hand, this application achieves a better effect than a traditional map-based approach; on the other hand, it performs better than an implementation based on a complex physical material.
Therefore, the purpose of obtaining the target model by combining models rendered according to the sight direction is achieved, the technical effect of producing a jelly effect at low cost is realized, and the technical problem that the complex processing methods used in the prior art to produce a jelly effect on a virtual model incur high system resource overhead is solved.
According to an embodiment of the present invention, an embodiment of a virtual model generating apparatus is provided, where fig. 8 is a schematic diagram of an alternative virtual model generating apparatus according to an embodiment of the present invention, and as shown in fig. 8, the apparatus includes:
an obtaining module 802, configured to obtain at least one model, where the at least one model is used for generating a target model in a combined manner;
a determining module 804 for determining a gaze direction between the virtual camera and the at least one model;
a rendering module 806, configured to respectively render the at least one model according to the sight direction, so as to obtain at least one rendered model;
and the combining module 808 is configured to perform combining processing on the rendered at least one model to generate a target model, so as to adjust illumination information of a surface of the target model based on a position change of the virtual camera in the scene.
It should be noted that the obtaining module 802, the determining module 804, the rendering module 806 and the combining module 808 correspond to steps S102 to S108 in the foregoing embodiment; the four modules implement the same examples and application scenarios as the corresponding steps, but are not limited to the disclosure of the foregoing method embodiment.
Optionally, the at least one model includes at least a first model and a second model, the first model and the second model being nested to form the target model, wherein the shape of the first model is similar to the shape of the second model, and the volume of the first model is larger than the volume of the second model.
Optionally, the determining module 804 further includes: a first sub-determination module to determine a camera position of a virtual camera in a scene; and the first processing module is used for obtaining a first sight line direction corresponding to the first model according to the direction vector between the camera position and each vertex of the first model.
Optionally, the rendering module 806 further includes: the second sub-determination module is used for determining a first normal direction corresponding to each vertex in the first model; the third sub-determination module is used for determining a first reflection result corresponding to the first model according to the first sight line direction and the first normal line direction; the second processing module is used for carrying out reversal processing on the first reflection result to obtain a second reflection result; the first sub-acquisition module is used for acquiring a first transparency value of the first model from the second reflection result; and the third processing module is used for carrying out blurring processing on the edge of the first model based on the first transparency value, generating the texture of the first model based on a preset texture map, and obtaining the rendered first model.
Optionally, the determining module 804 further includes: a fourth sub-determination module for determining a camera position of the virtual camera in the scene; and the fourth processing module is used for obtaining a second sight line direction corresponding to the second model according to the direction vector between the camera position and each vertex of the second model.
Optionally, the rendering module 806 further includes: a fifth sub-determining module, configured to determine a second normal direction corresponding to each vertex in the second model; the sixth sub-determination module is used for determining a third reflection result corresponding to the second model according to the second sight line direction and the second normal line direction; the fifth processing module is used for acquiring a second transparency value of the second model from the third reflection result; and the sixth processing module is used for performing blurring processing on the edge of the second model based on the second transparency value, generating texture of the second model based on a preset texture map, and obtaining the rendered second model.
Optionally, the combining module 808 further includes: the seventh processing module is used for performing combined processing on at least one model according to the nesting sequence of the at least one model to obtain an initial model; the second sub-acquisition module is used for acquiring a preset highlight map; and the eighth processing module is used for performing highlight processing on the initial model based on the highlight map to obtain the target model.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the above-mentioned method for generating a virtual model when running.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement a method for running a program, wherein the program is arranged to perform the above-described method for generating a virtual model when running.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described in detail in a certain embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or may not be executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A method for generating a virtual model, comprising:
obtaining at least one model, wherein the at least one model is used for generating a target model in a combined mode;
determining a sight direction between a virtual camera and the at least one model;
rendering the at least one model according to the sight direction to obtain at least one rendered model;
and performing combined processing on the rendered at least one model to generate the target model so as to adjust illumination information of the surface of the target model based on the position change of the virtual camera in the scene.
2. The method of claim 1, wherein the at least one model comprises at least a first model and a second model, the first model and the second model being nested to form the object model, wherein the first model is shaped similarly to the second model, and wherein the first model has a volume that is larger than the volume of the second model.
3. The method of claim 2, wherein determining a direction of line of sight between a virtual camera and the at least one model comprises:
determining a camera position of the virtual camera in a scene;
and obtaining a first sight line direction corresponding to the first model according to the direction vector between the camera position and each vertex of the first model.
4. The method of claim 3, wherein rendering the at least one model according to the viewing direction to obtain at least one rendered model comprises:
determining a first normal direction corresponding to each vertex in the first model;
determining a first reflection result corresponding to the first model according to the first sight line direction and the first normal line direction;
carrying out inversion processing on the first reflection result to obtain a second reflection result;
obtaining a first transparency value of the first model from the second reflection result;
blurring the edge of the first model based on the first transparency value, and generating the texture of the first model based on a preset texture map to obtain a rendered first model.
5. The method of claim 2, wherein determining a direction of line of sight between a virtual camera and the at least one model comprises:
determining a camera position of the virtual camera in a scene;
and obtaining a second sight line direction corresponding to the second model according to the direction vector between the camera position and each vertex of the second model.
6. The method of claim 5, wherein rendering the at least one model according to the viewing direction to obtain at least one rendered model comprises:
determining a second normal direction corresponding to each vertex in the second model;
determining a third reflection result corresponding to the second model according to the second sight line direction and the second normal line direction;
obtaining a second transparency value of the second model from the third reflection result;
and performing blurring processing on the edge of the second model based on the second transparency value, and generating texture of the second model based on a preset texture mapping to obtain a rendered second model.
7. The method of claim 2, wherein combining the rendered at least one model to generate the target model comprises:
combining the at least one model according to the nesting sequence of the at least one model to obtain an initial model;
acquiring a preset highlight map;
and carrying out highlight processing on the initial model based on the highlight map to obtain the target model.
8. An apparatus for generating a virtual model, comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring at least one model, and the at least one model is used for generating a target model in a combined mode;
a determination module to determine a sight direction between a virtual camera and the at least one model;
the rendering module is used for rendering the at least one model according to the sight line direction to obtain at least one rendered model;
and the combination module is used for performing combination processing on the rendered at least one model to generate the target model so as to adjust the illumination information of the surface of the target model based on the position change of the virtual camera in the scene.
9. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to execute the method for generating a virtual model according to any one of claims 1 to 7 when running.
10. An electronic device, wherein the electronic device comprises one or more processors; storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement a method for running a program, wherein the program is arranged to perform the method for generating a virtual model of any of claims 1 to 7 when run.
CN202111603106.7A 2021-12-24 2021-12-24 Virtual model generation method and device and electronic equipment Pending CN114494548A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111603106.7A CN114494548A (en) 2021-12-24 2021-12-24 Virtual model generation method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111603106.7A CN114494548A (en) 2021-12-24 2021-12-24 Virtual model generation method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114494548A true CN114494548A (en) 2022-05-13

Family

ID=81496685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111603106.7A Pending CN114494548A (en) 2021-12-24 2021-12-24 Virtual model generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114494548A (en)

Similar Documents

Publication Publication Date Title
CN111369655B (en) Rendering method, rendering device and terminal equipment
CN106534835B (en) A kind of image processing method and device
CN112316420A (en) Model rendering method, device, equipment and storage medium
CN112184873B (en) Fractal graph creation method, fractal graph creation device, electronic equipment and storage medium
US20230230311A1 (en) Rendering Method and Apparatus, and Device
CN108043027B (en) Storage medium, electronic device, game screen display method and device
CN110458924B (en) Three-dimensional face model establishing method and device and electronic equipment
CN104157000B (en) The computational methods of model surface normal
CN112446943A (en) Image rendering method and device and computer readable storage medium
CN112150598A (en) Cloud layer rendering method, device, equipment and storage medium
CN111915710A (en) Building rendering method based on real-time rendering technology
WO2014170757A2 (en) 3d rendering for training computer vision recognition
CN116883607B (en) Virtual reality scene generation system based on radiation transmission
CN115115747A (en) Illumination rendering method and device, electronic equipment and storage medium
CN112634456A (en) Real-time high-reality drawing method of complex three-dimensional model based on deep learning
US20140306953A1 (en) 3D Rendering for Training Computer Vision Recognition
CN116894922A (en) Night vision image generation method based on real-time graphic engine
CN114494548A (en) Virtual model generation method and device and electronic equipment
CN115761105A (en) Illumination rendering method and device, electronic equipment and storage medium
CN115063330A (en) Hair rendering method and device, electronic equipment and storage medium
González et al. based ambient occlusion
CN117745915B (en) Model rendering method, device, equipment and storage medium
CN116681814B (en) Image rendering method and electronic equipment
US11501493B2 (en) System for procedural generation of braid representations in a computer image generation system
CN114972647A (en) Model rendering method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination