CN113822981A - Image rendering method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113822981A
Authority
CN
China
Prior art keywords
fluff
vector
highlight
rendering
orientation
Prior art date
Legal status
Granted
Application number
CN202010567818.7A
Other languages
Chinese (zh)
Other versions
CN113822981B (en)
Inventor
王东烁
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202010567818.7A
Publication of CN113822981A
Application granted
Publication of CN113822981B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/506 Illumination models

Abstract

The disclosure relates to an image rendering method and apparatus, an electronic device, and a storage medium in the field of image processing. It addresses the technical problem that renderings of fluff material rarely exhibit realistic, natural highlights, and improves the image-processing effect for fluff material. The method comprises the following steps: acquiring orientation vectors of fluff attached to an image model, where an orientation vector represents the growth direction of the fluff after adjustment based on the normal vector of the fluff's corresponding vertex on the image model; obtaining highlight parameters of the fluff according to the orientation vector of the fluff and the half-angle vector of the fluff, where the half-angle vector is obtained by normalizing the sum of the light vector and the view vector in the rendering scene of the image model; and rendering the fluff according to its highlight parameters.

Description

Image rendering method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of multimedia technologies, and in particular, to an image rendering method and apparatus, an electronic device, and a storage medium.
Background
With the rapid development of computer graphics, virtual objects are increasingly used in the production of games, movies, and animation. To make objects in a user interface look realistic, special rendering effects are needed, for example image rendering techniques for fluffy materials such as animal fur, fabrics, plush toys, and ornaments.
For mobile devices with limited hardware, a multi-layer shell-extrusion rendering scheme (multiple rendering passes, or Passes) is usually adopted to simulate a fluff effect: a hair-like visual artifact is created by gradually offsetting and overlaying a noise map, and the light-dark transitions of the fluff material are expressed with the diffuse-reflection principle. This conveys a convincing sense of fluff while avoiding the performance overhead of a complex hair model and full illumination computation, ensuring a high-frame-rate experience on mobile devices. On top of diffuse reflection, classical illumination models such as Phong and Blinn-Phong can be used to add a highlight effect, giving the fluff material an approximate bright-part hierarchy and improving overall light-dark contrast.
However, in the above technique the highlight is computed from the model's normal direction, so the highlight shape depends only on the structure of the model itself and has no relation to dimensions such as the fluff's growth direction or form. It merely adds brightness information to the fluff material visually; it does not match the highlight behavior produced by the bundled, volumetric nature of real fluff, and it is difficult to express the realism and naturalness of the fluff material.
Disclosure of Invention
The present disclosure provides an image rendering method and apparatus, an electronic device, and a storage medium, which at least solve the problem that fluff-material rendering techniques have difficulty exhibiting realistic, natural highlights, thereby effectively improving the image-processing effect for fluff material. The technical scheme of the disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided an image rendering method, the method including: acquiring orientation vectors of fluff attached to an image model, where an orientation vector represents the growth direction of the fluff after adjustment based on the normal vector of the fluff's corresponding vertex on the image model; obtaining highlight parameters of the fluff according to the orientation vector of the fluff and the half-angle vector of the fluff, where the half-angle vector is obtained by normalizing the sum of the light vector and the view vector in the rendering scene of the image model; and rendering the fluff according to its highlight parameters.
In this technical scheme, a simple illumination model is designed and an algorithm is provided that derives the fluff's highlight brightness from its orientation vector and half-angle vector. The algorithm has low complexity and can be applied on mobile devices with limited image-processing capacity. Moreover, by perturbing the orientation vector, the fluff can be deflected and bent according to design requirements, which in turn changes the highlight brightness through the illumination model. The result is a highlight state consistent with fluff in a real scene, a more natural and realistic fluff image, and an optimized display effect.
In a possible embodiment, deriving the highlight parameter of the fluff from its orientation vector and half-angle vector comprises: the highlight parameter S of the fluff satisfies

S = (√(1 − dotHG²))^Gloss

where dotHG = dot(normalize(V + L), G); H = normalize(V + L) denotes the half-angle vector of the fluff, G the orientation vector of the fluff, Gloss the glossiness information of the fluff, V the view vector, and L the light vector; the dot function denotes the dot-product operation, and normalize denotes the normalization function. In this possible implementation, the highlight brightness of the fluff is obtained from the perturbed orientation vector, the half-angle vector, and the glossiness of the fluff, and a fluff highlight consistent with the optical scene is obtained from a simplified fluff illumination model.
In one possible embodiment, obtaining the perturbed orientation vector of the fluff comprises: adding a physical force in the vertex-shader stage of the fluff to obtain an offset vector relative to the normal vector of the image-model vertex from which the fluff grows, where the offset vector is computed from the direction information of the physical force in the vertex shader, the strength information of the physical force, and the level information of the fluff rendering pass; and obtaining the perturbed orientation vector of the fluff from this offset vector and the normal vector of the image model on which the fluff grows. In this possible implementation, a physical force shifts the growth at the vertex from which the fluff grows, so the perturbed orientation vector can be designed flexibly, and the fluff's highlight brightness is then derived from it. Effects such as fluff bending under gravity, swaying in the wind, or being flattened by an external force can thus be achieved, making image processing of fluff material more flexible and the display effect more natural and realistic.
In one possible embodiment, obtaining the orientation vector of the fluff attached to the image model includes: adding a physical force in the vertex shader of the fluff to obtain an offset vector relative to the normal vector of the image-model vertex from which the fluff grows, where the offset vector is computed from the direction information of the physical force in the vertex shader, the strength information of the physical force, and the level information of the fluff rendering pass; the orientation vector G satisfies G = normalize(Force * Level * Power + N), where Force denotes the direction information of the physical force, Level the level information of the fluff rendering pass, Power the strength information of the physical force, N the normal vector of the image model where the fluff is located, and normalize the normalization function. In this possible implementation, the perturbation shifts the growth, so the perturbed orientation vector can be designed flexibly and the fluff's highlight brightness derived from it; the growth direction of fluff on the model can thus be customized to achieve a more natural and realistic growth tendency, making image processing of fluff material more flexible and the display effect more natural and realistic.
In a possible embodiment, obtaining the orientation vector of the fluff attached to the image model further includes: offsetting the texture coordinates used to sample the fluff noise map based on the vector-field information in the fluff's texture map; and computing orientation vectors in which different parts of the fluff are offset to different degrees according to the texture information in the fluff's texture map. In this possible implementation, perturbing the fluff texture map shifts the growth, so the perturbed orientation vector can be designed flexibly and the fluff's highlight brightness derived from it; the growth direction of fluff on the model can be customized to achieve a more natural and realistic growth tendency, making image processing of fluff material more flexible and the display effect more natural and realistic.
In a possible implementation, before rendering the fluff according to its highlight parameter, the method further includes: adjusting the brightness information of the fluff according to the highlight parameter of the fluff, the level information corresponding to the levels of the fluff's rendering passes, and a corresponding preset adjustment parameter. In this possible implementation, the otherwise continuous highlight brightness values can be discretized, expressing the contrast between the bright and dark parts of the shell-extruded model, reasonably conveying the tufted, granular feel of the fluff, and further optimizing the display effect.
In a possible implementation, before rendering according to the highlight parameter of the fluff, the method further includes: adjusting the offset strength of the fluff's orientation vector according to the fluff's noise information, so that individual tufts differ in orientation. In this possible implementation, introducing noise to perturb the offset strength of the orientation vector produces a staggered, broken-up highlight and enhances the expressive detail of the fluff.
In a possible implementation, before rendering the fluff according to its highlight parameter, the method further includes: obtaining a surface highlight parameter and a sub-surface highlight parameter of the fluff by simulation, where the surface highlight parameter is computed from the highlight parameter of the fluff, the glossiness information of the surface highlight, and a first adjustment parameter, and the sub-surface highlight parameter is computed from the highlight parameter of the fluff, the color information of the fluff, the glossiness information of the sub-surface highlight, and a second adjustment parameter. Rendering according to the highlight parameter of the fluff then specifically includes: performing highlight rendering of the fluff according to its surface highlight brightness and sub-surface highlight brightness. In this possible implementation, the relatively continuous highlight brightness can be discretized, and the two highlight layers express the fluff's sense of depth, reasonably presenting both the smoother and the rougher aspects of the fluff and further optimizing its presentation.
According to a second aspect of embodiments of the present disclosure, there is provided an image rendering apparatus including: an orientation-vector calculation module configured to obtain an orientation vector of fluff attached to an image model, where the orientation vector represents a growth direction adjusted based on the normal vector of the fluff's corresponding vertex on the image model; a highlight parameter processing module configured to obtain a highlight parameter of the fluff according to the orientation vector of the fluff and the half-angle vector of the fluff, where the half-angle vector is obtained by normalizing the sum of the light vector and the view vector in the rendering scene of the image model; and an image rendering module configured to render the fluff according to its highlight parameters.
In a possible embodiment, the highlight parameter processing module is specifically configured such that the highlight parameter S of the fluff satisfies

S = (√(1 − dotHG²))^Gloss

where dotHG = dot(normalize(V + L), G); H = normalize(V + L) denotes the half-angle vector of the fluff, G the orientation vector of the fluff, Gloss the glossiness information of the fluff, V the view vector, and L the light vector; the dot function denotes the dot-product operation, and normalize denotes the normalization function.
In a possible implementation, the orientation-vector calculation module is specifically configured to: add a physical force in the vertex shader of the fluff to obtain an offset vector relative to the normal vector of the image-model vertex from which the fluff grows, where the offset vector is computed from the direction information of the physical force in the vertex shader, the strength information of the physical force, and the level information of the fluff rendering pass; the orientation vector G satisfies G = normalize(Force * Level * Power + N), where Force denotes the direction information of the physical force, Level the level information of the fluff rendering pass, Power the strength information of the physical force, N the normal vector of the image model where the fluff is located, and normalize the normalization function.
In a possible implementation, the orientation-vector calculation module is specifically configured to: offset the texture coordinates used to sample the fluff noise map based on the vector-field information in the fluff's texture map; and compute orientation vectors in which different parts of the fluff are offset to different degrees according to the texture information in the fluff's texture map.
In a possible implementation, the highlight parameter processing module is further configured to: adjust the brightness information of the fluff highlight according to the highlight parameter of the fluff, the level information corresponding to the levels of the fluff's rendering passes, and a corresponding preset adjustment parameter.
In a possible implementation, the highlight parameter processing module is further configured to: adjust the offset strength of the fluff's orientation vector according to the fluff's noise information, so that individual tufts differ in orientation.
In a possible implementation, the highlight parameter processing module is further configured to: obtain a surface highlight parameter and a sub-surface highlight parameter of the fluff by simulation, where the surface highlight parameter is computed from the highlight parameter of the fluff, the glossiness information of the surface highlight, and a first adjustment parameter, and the sub-surface highlight parameter is computed from the highlight parameter of the fluff, the color information of the fluff, the glossiness information of the sub-surface highlight, and a second adjustment parameter. The apparatus further comprises: a rendering processing module configured to perform highlight rendering of the fluff according to its surface highlight parameter and sub-surface highlight parameter.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; and a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the image rendering method of any of the first aspects above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium having instructions that, when executed by a processor of an electronic device/server, enable the electronic device to perform the image rendering method according to any one of the first aspect described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product which, when run on a computer, causes the computer to perform the image rendering method as defined in any one of the above first aspects.
The technical scheme provided by the embodiments of the disclosure brings at least the following beneficial effects: a simple illumination calculation yields a more realistic and natural fluff highlight; at the same time, the highlight is not tied to a fixed fluff direction, so the fluff form can be designed freely and the highlight is generated from that form.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a schematic diagram illustrating an electronic device in accordance with an exemplary embodiment.
FIG. 2 is a schematic diagram illustrating a pile image model and an illumination model according to an exemplary embodiment.
FIG. 3 is a flow chart illustrating a method of image rendering according to an example embodiment.
Fig. 4 is a first schematic diagram illustrating an image rendering effect based on a fluff material according to an exemplary embodiment.
Fig. 5 is a schematic diagram illustrating a second image rendering effect based on a pile material according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating an image rendering apparatus according to an exemplary embodiment.
Fig. 7 is a block diagram showing an apparatus (general structure of an electronic device) according to an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Before the embodiments of the present application are explained in detail, application scenarios of the embodiments of the present application and related prior art will be explained.
The image processing method for fluff material provided by the embodiments of the present application can be applied to computer graphics scenes such as movies, animation, games, VR (Virtual Reality), or AR (Augmented Reality), and in particular to scenes that render virtual objects containing fluff material, so as to simulate the material and increase the realism of the virtual objects. The virtual object may be a human, an animal, a bionic organism, or the like.
Rendering in computer graphics refers to the process of generating images from models with graphics software. A model is a description of a three-dimensional object in a strictly defined language or data structure that includes geometry, viewpoint, texture, and lighting information. A model in a three-dimensional scene is rendered according to the configured environment, lights, materials, and rendering parameters. After the renderer acquires the range to be rendered through the camera, it computes the influence of each light source added to the scene on the objects. The renderer also computes the surface color of an object according to its material; different material types, attributes, and textures produce different effects.
For example, in an animation scene, after a three-dimensional model of an animal is built, the fluff on the model can be rendered with the image processing method for fluff material provided by the embodiments of the present application, simulating a realistic, natural fluff texture on the surface of the animal's skin.
For a mobile device, low latency and high efficiency of fluff rendering are essential for a good user experience. If, for example, every strand of fluff were geometrically modeled and then rendered, millions of modeling and rendering operations would be required, since a furry creature typically has millions of strands. This would seriously degrade rendering efficiency on a mobile device, hurt the user experience, and place high demands on the hardware of the electronic device.
Real-time rendering refers to an image processing technique that draws three-dimensional graphics data into a two-dimensional bitmap according to a graphics algorithm and displays the bitmap data in real time; it is common in virtual reality, three-dimensional games, animation production, and related fields.
An illumination model, also called a shading model, computes the color value at a point on a three-dimensional object; there are physically based theoretical models and empirically based illumination models.
Rendering pass (Pass): a single rendering of the geometry, that is, one invocation of the rendering API with a complete set of rendering properties.
The Blinn-Phong model is a simple highlight model proposed by Jim Blinn that computes the highlight from the normal direction and supports fast real-time rendering.
Anisotropic highlight is an illumination model that computes the highlight in a tangential direction; it is often used to express the highlight characteristics of materials such as hair and brushed metal.
A flow map (Flowmap) is a texture that stores vector-field data; it can be used to control texture flow and offset, and is often used to express the flowing effect of liquid materials.
Multi-layer fluff rendering is a material rendering approach that expresses the volumetric feel of hair by extruding the model's vertices multiple times and combining the result with a noise map.
In the prior art, classical illumination models such as Phong or Blinn-Phong are generally used in rendering fluff material to add a highlight, giving the material a certain brightness hierarchy and improving overall light-dark contrast. For example, simulating highlight brightness with the Blinn-Phong illumination model computes the highlight from the normal direction of the model on which the fluff sits; the highlight shape is therefore related only to the structure of that model, not to the fluff's growth direction or bending form. It merely adds brightness information to the fluff material visually, and the highlight expression does not follow objective behavior.
In addition, an anisotropic illumination model for materials such as hair or brushed metal can simulate the highlight of a hair material from the tangent or bitangent direction of the hair model. The result is strongly tied to the hair's growth direction and yields the characteristic band-shaped hair highlight, which looks more realistic and follows objective behavior. However, computing the highlight from the tangent or bitangent forces the generated fluff's growth direction to align strictly with the U or V pixel-coordinate direction of the fluff map; the direction and curvature of the fluff cannot be controlled freely, and seams between regions of differently oriented fluff look unnatural. This scheme therefore suits only very smooth, artificial-looking fluff material and struggles to express highlight texture on realistic, natural fluff forms.
Based on the above, the embodiments of the present application provide a simple, efficient image rendering method for fluff material that is particularly suitable for mobile devices. A simple illumination calculation yields a more realistic and natural fluff highlight; at the same time, the highlight is not tied to a fixed fluff direction, so the fluff form can be designed freely and the highlight is generated from that form. The scheme also places low demands on the hardware performance of the electronic device, suits devices with modest hardware, and meets real-time rendering requirements.
The image rendering method provided by the embodiments of the present application applies to electronic devices with an image processing function. The electronic device may be a mobile phone, tablet computer, laptop, desktop computer, or wearable device, or a VR or AR device with an integrated image processing function.
Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 1, the electronic device includes a Central Processing Unit (CPU) 10 and a Graphics Processing Unit (GPU) 20. The CPU 10 performs image preprocessing before rendering, and the GPU performs the image rendering, for example sequentially rendering the fluff portions of the image with the illumination model according to the fluff's distribution positions.
Next, the image-processing principle for fluff material in the embodiments of the present application is briefly described. As shown in fig. 2, fluff grows on an image model and can be approximated as thin cylinders standing on the model; the image model may, for example, be the sphere shown in fig. 2, and the cross-section of a fluff tuft in each rendering pass (Pass) of the fluff material approximates one segment of such a cylinder. The G vector in fig. 2 represents the fluff's growth direction, defined in this application as the orientation vector. The N vector is the normal vector of the image model where the fluff sits, perpendicular to the plane of the vertex from which the fluff grows. The L vector is the light vector produced by the light source provided for the image model in the rendered scene, and the V vector is the view vector along which the image reaches the eye.
In conjunction with fig. 2, the Blinn-Phong illumination model and a solid-geometry analysis of the cylinder show that the closer the half-angle vector H is to perpendicular to the cylinder's orientation vector G, the brighter the highlight on the cylinder; conversely, the closer H is to parallel with G, the darker the highlight. The half-angle vector, also called the half vector, is the direction that bisects the angle between the light vector L and the view vector V shown in fig. 2.
As shown in fig. 2, the fluff in fluff rendering may consist of multiple layers of pixel patches; a strand itself has no volume, an accurate normal-vector distribution is hard to obtain, and fluff usually forms fine, dense tufts. The brightest part of each cylinder segment can therefore be taken as approximately the overall brightness of a single tuft cross-section in each pass. The embodiments of this application accordingly compute the fluff highlight from the fluff's orientation vector G and its half-angle vector Half. On this basis the growth direction G can be designed freely, for example changing the fluff's direction or curvature through perturbation, and the adjustment parameters for the highlight are decomposed along several dimensions in combination with the fluff's illumination characteristics, achieving a fluff effect closer to nature and reality. Further optimization can also express the tufted, staggered character of fluff highlights, making the generated fluff display more realistic and natural.
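Stated as a worked relation (a restatement of the principle above, not an additional claim): if θ is the angle between the unit vectors H and G, the per-tuft brightness is modeled as proportional to sin θ = √(1 − (H·G)²), sharpened by a glossiness exponent; this is exactly the quantity that the formula in step S32 below raises to the power Gloss.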
The image rendering method provided by the embodiment of the present application will be described in detail below.
Fig. 3 is a flowchart of an image rendering method provided in an embodiment of the present application, where the method is applied to an electronic device with an image processing function, and as shown in fig. 3, the method includes the following steps:
In step S31, the orientation vector of the fluff attached to the image model is acquired.
Here the orientation vector represents the adjusted growth direction of the fluff. As described above, the fluff may be formed from multiple layers of pixel patches, rendered in particular from triangle meshes of the image model. Without changing the vertex extrusion direction, the vertices from which the fluff grows are extruded perpendicular to the surface, along the normal of the image model, so the original growth vector of the fluff is the normal vector of the model on which it grows. If the orientation is not perturbed and offset to some degree, the rendered fluff looks too regular and rigid, far from the natural bending and varied light scattering of fluff in a real scene. The perturbed orientation vector is therefore obtained through a parameter-adjustment process; that is, the orientation vector indicates the fluff's growth direction after adjustment of the normal vector at its corresponding vertex on the image model.
Specifically, the perturbed orientation of the fluff can be obtained through several different embodiments, for example by applying a force to the extruded model vertices so that they shift, or by offsetting the fluff map. These are described in detail below and not expanded here.
In step S32, a highlight parameter of the pile is obtained according to the orientation vector of the pile and the half-angle vector of the pile.
The half-angle vector of the fluff is the vector midway between its light vector and view vector, that is, the direction at half the angle between them. In other words, the half-angle vector is obtained by normalizing the sum of the light vector and the view vector in the rendering scene of the image model.
In one embodiment, the highlight brightness of the fluff is obtained from the perturbed orientation vector, the half-angle vector, and the glossiness information of the fluff. The brightness decreases as the magnitude of the dot product between the perturbed orientation vector and the half-angle vector increases.
Illustratively, in the shader code used for image processing, the highlight parameter S can be obtained with the following operation:

S = pow(sqrt(1.0 - dotHG * dotHG), Gloss)

that is, S = (√(1 − dotHG²))^Gloss,

where dotHG = dot(normalize(V + L), Grow_Dir); Gloss denotes the preset glossiness information of the fluff; Grow_Dir denotes the orientation vector of the fluff; the pow(x, y) function returns x raised to the power y, i.e., pow(x, y) = x^y; dotHG is the dot product of the H and G vectors, dotHG = H · G = |H| × |G| × cos θ, where θ is the angle between H and G; and normalize is the normalization function, scaling a vector to unit length.
It should be noted that this highlight computation operates per pixel, with many pixels processed in parallel. Parameters such as the Gloss value, the light vector L, and the view vector V may be customized or adjusted in the 3D model; those skilled in the art can simulate or set them from experience according to physical principles, and this application does not specifically limit them.
In one embodiment, after the highlight parameter of the fluff is obtained, the highlight of the shadowed part can be removed; that is, the highlight on the fluff's dark side is darkened, since otherwise the backlit fluff would appear to glow unrealistically.

Illustratively, the highlight parameter S of the fluff may be multiplied by a lighting mask to exclude the highlight on the backlit surface.

The masked highlight parameter is then S1 = S * clamp(max(0.0, dot(N, L)) * 3.0, 0.0, 1.0), where the clamp function limits a value to a given interval, and the max function returns the larger of its two arguments.
In step S33, the fluff is rendered according to its highlight parameter.
Based on the above embodiment, rendering according to the highlight parameter of the fluff may include excluding the highlight brightness on the fluff's backlit side using the lighting mask that distinguishes its bright and dark surfaces.
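Putting steps S31 to S33 together, a minimal GLSL fragment-stage sketch of this highlight computation might look as follows. The sharpening factor 3.0 in the mask follows the formula above; the uniform and function names (lightDir, viewDir, gloss, fluffHighlight) and the rounding guard are illustrative assumptions rather than code mandated by the disclosure:

// Hedged sketch of the fluff highlight of steps S31-S33; names are illustrative.
uniform vec3 lightDir;   // L: normalized light vector
uniform vec3 viewDir;    // V: normalized view vector
uniform float gloss;     // Gloss: glossiness of the fluff

// growDir: perturbed orientation vector G from step S31; n: model normal N.
float fluffHighlight(vec3 growDir, vec3 n)
{
    vec3 H = normalize(viewDir + lightDir);                         // half-angle vector
    float dotHG = dot(H, normalize(growDir));                       // H . G
    float S = pow(sqrt(max(0.0, 1.0 - dotHG * dotHG)), gloss);      // (sin theta)^Gloss; max() guards against rounding
    float mask = clamp(max(0.0, dot(n, lightDir)) * 3.0, 0.0, 1.0); // darken the backlit side (step S33)
    return S * mask;                                                // S1 in the text
}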
According to the embodiments of this application, a simple illumination model is designed and an algorithm derives the fluff's highlight brightness from its orientation vector and half-angle vector; the complexity is low, and the method can run on mobile devices with limited image-processing capacity. Moreover, perturbing the orientation vector lets the fluff deflect and bend according to design requirements, which changes the highlight brightness through the illumination model, producing a highlight state consistent with fluff in a real scene, a more natural and realistic fluff image, and an optimized display effect.
In one embodiment, the orientation vector of the fluff attached to the image model in step S31 can be obtained in the following three ways.
Method one: vertex-based growth offset. The steps are as follows:
step 1: and adding physical acting force to a vertex shader of the villus to obtain an offset vector of a normal vector of the image model vertex in which the villus grows relative to the villus.
Adding a physical force in the vertex-shader stage of the fluff shifts the corresponding extruded model vertices away from their original positions, and the offset vector of the fluff relative to the model's vertex normal is computed from this force. After the force is applied, the extruded vertices deviate from their original positions, producing effects such as fluff bending under gravity or swaying in the wind. The offset of each rendering-pass layer relative to the extrusion of the previous layer then defines the fluff's orientation.

The offset vector can therefore be obtained from the direction information of the physical force acting on each fluff layer, the strength information of the physical force, and the level information of the fluff rendering pass.
Illustratively, let Force be the direction information of the physical force and Power its strength information. A uniform variable is introduced to control the extrusion amount of the different fluff passes, and its value is normalized as Level. A uniform is a parameter passed in through the API interface, commonly used to hold data the shader needs. The normalized Level lies in the range 0 to 1 (inclusive): the outermost fluff layer has Level 1 and the innermost has Level 0. The fluff's offset vector relative to the image-model normal is then obtained by:
Offset=Force*Level*Power
step 2: and obtaining the orientation quantity of the fluff after disturbance according to the offset vector of the fluff and the normal vector of the image model for the growth of the fluff.
As shown in fig. 2, the fluff's normal vector at the image-model vertex is N, the default growth direction of the fluff. From the offset vector obtained in Step 1 and the normal vector of the model on which the fluff grows, the perturbed orientation vector Grow_Dir of the fluff is:
Grow_Dir = normalize(Offset + N) = normalize(Force * Level * Power + N)
Through this embodiment, the vertices of each extruded level are offset by a physical force, so the perturbed orientation vector can be designed flexibly and the fluff's highlight brightness derived from it; the fluff highlight thus changes flexibly with the fluff's form, making image processing of fluff material more flexible. For example, the highlight effect based on perturbing the fluff vertices can be as shown in fig. 4: the processed fluff material carries highlight information that matches the fluff's form, and the display effect is more natural and realistic.
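A minimal GLSL vertex-stage sketch of this vertex-based offset might read as follows; the attribute and uniform names, and the shell-extrusion distance uShellDist, are assumptions introduced for illustration rather than values specified by the disclosure:

// Hedged sketch of Steps 1-2: shell extrusion per pass with a physical-force offset.
attribute vec3 aPosition;   // model-space vertex position
attribute vec3 aNormal;     // N: vertex normal, the default growth direction
uniform vec3 uForce;        // Force: direction of the physical force
uniform float uPower;       // Power: strength of the physical force
uniform float uLevel;       // Level: normalized pass level in [0, 1]
uniform float uShellDist;   // total extrusion distance (illustrative)
uniform mat4 uMVP;
varying vec3 vGrowDir;      // perturbed orientation G, passed to the fragment stage

void main()
{
    vec3 offset = uForce * uLevel * uPower;    // Offset = Force * Level * Power
    vGrowDir = normalize(offset + aNormal);    // Grow_Dir = normalize(Offset + N)
    // Extrude this shell layer along the perturbed growth direction.
    vec3 p = aPosition + vGrowDir * uShellDist * uLevel;
    gl_Position = uMVP * vec4(p, 1.0);
}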
Method two: growth offset based on a Flowmap.
Texture mapping, the technique of drawing (mapping) an image onto a model surface, can significantly increase the detail and realism of a rendered scene. During fluff rendering, the fluff's offset direction can therefore also be perturbed through the fluff's texture map.
With the flow-map (Flowmap) technique, the vector-field information carried in the texture lets the designer customize the growth direction of the fluff on the model to the greatest extent. A Flowmap perturbs the coordinate mapping from the texture onto the model, moving certain pixel information to a new position on the model for display.
The method specifically comprises the following steps:
step 1: and shifting the texture coordinates of the sampled fluff noise map based on the vector field information of the texture map of the fluff.
For example, let the texture map of the fluff be Flowmap, the base noise map be Noisemap, and Flow be the vector field that perturbs the pixel coordinates u-v of the texture map Flowmap. A perturbed u-v coordinate offset (uv_offset) can be computed with the following formulas, and the noise map sampled with it:
Flow = texture2D(Flowmap, uv).xy * 2.0 - 1.0;                    // remap sampled values from [0, 1] to [-1, 1]
uv_offset = vec2(uv.x * Flow.x * Level, uv.y * Flow.y * Level);  // per-level u-v offset
Noise = texture2D(Noisemap, uv_offset);                          // sample the noise map at the perturbed coordinates
the texture2D (Flowmap, uv) xy function represents sampling of the Flowmap of the texture map, and Red (Red, R) and Green (Green, G) color value channels of pixel points of the Flowmap are obtained. Therefore, the R-channel color value of the sampled Flow vector with the X-axis of 0-1 controls the Flow around the pile, and the G-channel color value of the Flow vector with the Y-axis of 0-1 controls the Flow above and below the pile.
This amounts to sampling the Flowmap to control the offset in both directions; since the pixel coordinates u-v lie in the interval [0, 1], the further * 2.0 - 1.0 operation remaps the Flow vector to the interval [-1, 1] so it can offset in either direction.
The vec2 function then builds the perturbed u-v offset vector uv_offset, with which the noise map is sampled according to the fluff's perturbed pixel coordinates. The noise map is a transparency-control map that creates the visual impression of fluff, usually as black-and-white noise grains. The texture2D(Noisemap, uv_offset) call samples the noise map at the offset coordinates, perturbing the noise grains so that the visual orientation of the fluff changes and the strands appear to bend in varying directions.
Step 2: and calculating orientation vectors obtained by deviating different parts of the fluff to different degrees according to the texture information in the fluff texture map.
The perturbed orientation can then be derived from the fluff's texture map Flowmap to compute the corresponding highlight. The Flow vector-field information can be negated and treated as tangent-space normal-map information (the normal information in the map is the orientation vector), then transformed into world space with a transformation matrix (TangentToWorld) for the illumination calculation.
Illustratively, taking u-v coordinates in the Open Graphics Library (OpenGL) as an example, the approximate formula is as follows:
Grow_Dir.xy=vec2(-Flow.x,-Flow.y)*amount_1;
Grow_Dir.z=sqrt(1.0-Grow_Dir.x*Grow_Dir.x-Grow_Dir.y*Grow_Dir.y);
Grow_Dir=normalize(TangentToWorld*Grow_Dir);
Here amount_1 is a preset adjustment parameter for the perturbation of the u-v vector field. In the equations above, the offset strength of the fluff's orientation vector obtained from the Flowmap is controlled by the color information of the texture map rather than computed from the fluff's vertex extrusion direction, so it can be adjusted flexibly through a preset parameter such as amount_1. Those skilled in the art can set and adjust it flexibly according to design requirements; this application does not specifically limit it.
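For the TangentToWorld matrix used above, one common construction is the usual TBN basis. This is a sketch under the assumption that world-space tangent, bitangent, and normal vectors are supplied per vertex; the disclosure does not prescribe how the matrix is built:

// Hedged sketch: building TangentToWorld from per-vertex frame vectors (vertex stage).
attribute vec3 aTangent;    // world-space tangent (assumed supplied by the application)
attribute vec3 aBitangent;  // world-space bitangent (assumed)
attribute vec3 aNormal;     // world-space normal

mat3 makeTangentToWorld()
{
    // Columns are the tangent-space basis expressed in world space, so
    // TangentToWorld * v maps a tangent-space vector v into world space.
    return mat3(normalize(aTangent), normalize(aBitangent), normalize(aNormal));
}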
Through this embodiment, perturbing the fluff texture map shifts the growth, so the perturbed orientation vector can be designed flexibly and the fluff's highlight brightness derived from it. The growth direction of fluff on the model can thus be customized, achieving a more natural and realistic growth tendency, more flexible image processing of fluff material, and a more natural, realistic display effect.
Method three: a combination of the two offset methods.
The vertex-based orientation-offset calculation with a physical force from method one can be combined with the Flowmap-based orientation from method two to obtain a new perturbed orientation for the fluff.
Illustratively, the N vector, the natural growth direction in method one, is replaced by the orientation vector Flow_Dir obtained from the Flowmap texture-map offset in method two, giving:
Grow_Dir=normalize(Flow_Dir+Force*Level*Power)。
Through this embodiment, the two fluff-perturbation offset methods are combined, greatly improving the flexibility of adjusting the fluff's growth tendency and further optimizing the display effect. For example, the effect of combining the two offset methods can be as shown in fig. 5: the fluff texture, combined with the highlight rendering calculation, appears more natural and closer to reality.
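As a sketch, the combined orientation of method three can be written as a single GLSL helper; Flow_Dir stands for the Flowmap-derived orientation from method two, and all names are illustrative assumptions:

// Hedged sketch of method three: combine the Flowmap orientation with the force offset.
vec3 combinedGrowDir(vec3 flowDir, vec3 force, float level, float power)
{
    // Grow_Dir = normalize(Flow_Dir + Force * Level * Power);
    // flowDir replaces the raw normal N used in method one.
    return normalize(flowDir + force * level * power);
}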
In a specific embodiment, the illumination model abstracts the fluff as a cylinder, takes the fluff's highlight brightness to be the brightest part of the cylinder's cross-section, and ignores brightness variation across that horizontal section. The resulting highlight is continuous and insufficient to convey the tufted, staggered character of fluff, so the highlight brightness can be discretized to reasonably adjust and express effects such as tuft separation, staggering, and layering.
Specifically, discretizing the fluff's highlight brightness can be done through several different embodiments, such as adjusting the brightness information by the fluff's extrusion level, or introducing noise information to perturb the offset of the fluff map. These are described below and not expanded here.
The fluff's highlight brightness can be further optimized with the following processing methods.
Method one: adjust the fluff's brightness information according to its highlight parameter and preset adjustment parameters corresponding to the levels of the fluff's multiple rendering passes.
That is, the highlight brightness of the fluff's dark parts is adjusted by setting different adjustment parameters for the multiple extruded Pass levels. The levels can be normalized so that Level lies in the interval 0 to 1 (inclusive), with the outermost layer at 1 and the innermost at 0. The closer the fluff is to the innermost level, the smaller its highlight brightness; the closer to the outermost level, the larger. This is nearer to real-world fluff illumination and brings out the tufted, granular feel of the fluff.
Illustratively, the highlight brightness can be further optimized with the following operation:
Specular=S*pow(Level,amount)。
Here amount is the preset adjustment parameter; the larger the value of pow(Level, amount), the more sharply the individual tufts are separated and the stiffer the fluff texture appears. Those skilled in the art can set and adjust the value of amount according to design requirements.
With this processing, the continuous highlight brightness values are discretized, expressing the bright-dark contrast of the shell-extruded model, reasonably conveying the tufted, granular feel of the fluff, and further optimizing the display effect.
Method two: adjust the offset strength of the fluff's orientation according to the fluff's noise information, giving different strands finer orientation differences.
Building on method two above for obtaining the perturbed orientation vector, the offset strength of the orientation vector can be further perturbed by introducing information such as noise and multiplying it into the Flowmap.
For example, this step can be computed as follows, before Grow_Dir.z is solved in method two above:
Flow.xy=Flow.xy*mix(1.0,Dir_Noise,amount_2)。
Here amount_2 is an adjustment parameter for the intensity of the highlight staggering. Those skilled in the art can define and adjust it according to design requirements; this application does not specifically limit it.
With this processing, introducing noise to perturb the offset strength of the fluff's orientation vector produces a staggered, broken-up highlight and enhances the fluff's expressive detail.
Method three: obtain the fluff's surface highlight parameter and sub-surface highlight parameter by simulation, and render the fluff's highlight from both.
The surface highlight brightness and sub-surface highlight brightness of the fluff can be simulated separately to improve the layering of the fluff highlight. The surface and outer-layer parts of the fluff are expressed with a smoother highlight whose gloss color leans toward the light source; the inner parts, which undergo more multiple reflection and transmission of light, are expressed with a rougher highlight whose gloss color leans toward the fluff's own color.
For example, let the surface highlight brightness be Main_spec, the sub-surface highlight brightness Sec_spec, the highlight value before the glossiness calculation fore_spec, and the fluff color Pixel. The simplified formulas can be designed as:
Main_spec=pow(fore_spec,gloss1)*amount1;
Sec_spec=pow(fore_spec,gloss2)*amount2*Pixel;
Specular=Main_spec+Sec_spec。
therein, gloss1 and gloss2 are expressed as gloss parameters of the pile.
Illustratively, according to the gloss design requirements above, the surface-highlight glossiness gloss1 can be set larger than the sub-surface glossiness gloss2 to meet the effect described. amount1 and amount2 are parameters introduced to adjust the surface and sub-surface highlight brightness.
With this processing, the relatively continuous highlight brightness is discretized, and the two highlight layers bring out the fluff's layering, reasonably presenting both its smoother and rougher aspects and further optimizing the fluff's expression.
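A minimal GLSL sketch of this two-layer highlight, reusing the pre-glossiness highlight value fore_spec from the earlier computation; the uniform names mirror the formulas above, and their values are assumed to be tuned by the designer:

// Hedged sketch of the surface + sub-surface highlight split.
uniform float gloss1;    // surface glossiness (set larger: tighter, smoother highlight)
uniform float gloss2;    // sub-surface glossiness (smaller: broader, rougher highlight)
uniform float amount1;   // surface highlight intensity
uniform float amount2;   // sub-surface highlight intensity

vec3 twoLayerSpecular(float fore_spec, vec3 fluffColor)
{
    float mainSpec = pow(fore_spec, gloss1) * amount1;             // Main_spec
    vec3 secSpec = pow(fore_spec, gloss2) * amount2 * fluffColor;  // Sec_spec, tinted by the fluff color (Pixel)
    return vec3(mainSpec) + secSpec;                               // Specular
}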
It should be noted that the three optimization methods above may be applied individually or in combination; those skilled in the art can choose according to design requirements, and this application does not specifically limit the choice.
In another embodiment, the information carried by the Flowmap can be baked into the fluff's vertex colors. On one hand, this reduces the number of texture-map samples, lowering data-processing complexity and improving graphics-rendering performance on mobile devices. On the other hand, the orientation vector provided by the Flowmap can then be obtained through vertex computation, which is a more uniform and controllable calculation path.
FIG. 6 is a block diagram illustrating an image rendering apparatus according to an exemplary embodiment. Referring to FIG. 6, the apparatus 600 includes an orientation vector calculation module 601, a highlight parameter processing module 602, and an image rendering module 603.
The orientation vector calculation module 601 may be configured to obtain an orientation vector of the fluff attached to an image model, where the orientation vector represents the growth direction of the fluff after adjustment based on the normal vector of the corresponding vertex of the fluff on the image model.
The highlight parameter processing module 602 may be configured to obtain the highlight parameter of the fluff according to the orientation vector of the fluff and the half-angle vector of the fluff, where the half-angle vector of the fluff is obtained by normalizing the sum of the light vector and the view vector in the rendering scene of the image model.
The image rendering module 603 may be configured to render the fluff according to the highlight parameter of the fluff.
In one embodiment, the highlight parameter processing module 602 may be specifically configured such that the highlight parameter S of the fluff satisfies:
S = pow(sqrt(1 - dotHG * dotHG), Gloss)
where dotHG = dot(normalize(V + L), G), H denotes the half-angle vector of the fluff, G denotes the orientation vector of the fluff, Gloss denotes the glossiness information of the fluff, V denotes the view vector, L denotes the light vector, the dot function denotes the dot-product operation, and normalize denotes the normalization function.
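Illustratively, this term can be sketched in GLSL. Note that the formula above is a reconstruction that assumes the standard Kajiya-Kay form of an orientation-based highlight; the function name and the max() clamp against rounding error are additional illustrative choices:

float fluffHighlight(vec3 V, vec3 L, vec3 G, float Gloss) {
    vec3 H = normalize(V + L);                          // half-angle vector
    float dotHG = dot(H, G);                            // alignment of H with the orientation
    float sinHG = sqrt(max(0.0, 1.0 - dotHG * dotHG));  // clamp guards against rounding
    return pow(sinHG, Gloss);                           // highlight parameter S
}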
In one embodiment, the orientation vector calculation module 601 is specifically configured to: add a physical force in the vertex shader of the fluff to obtain an offset of the orientation vector of the fluff relative to the normal vector of the image-model vertex from which the fluff grows, where the offset is obtained according to the direction information of the physical force, the strength information of the physical force, and the level information of the fluff rendering channel in the vertex shader. The orientation vector G satisfies: G = normalize(Force * Level * Power + N), where Force represents the direction information of the physical force, Level represents the level information of the fluff rendering channel, Power represents the strength information of the physical force, N represents the normal vector of the image model where the fluff is located, and normalize represents the normalization function.
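Illustratively, this can be sketched as a GLSL vertex-shader helper implementing the formula above; the uniform names are assumptions, and Level would typically come from the index of the current fluff rendering channel:

uniform vec3 Force;    // direction of the physical force (e.g., wind or gravity)
uniform float Power;   // strength of the physical force

vec3 fluffOrientation(vec3 N, float Level) {
    // Outer layers (larger Level) bend further away from the normal N.
    return normalize(Force * Level * Power + N);
}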
In one embodiment, the orientation vector calculation module 601 is specifically configured to: shift the texture coordinates used to sample the fluff noise map based on the vector-field information of the texture map of the fluff; and calculate orientation vectors in which different parts of the fluff are offset to different degrees according to the texture information in the texture map of the fluff.
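Illustratively, the coordinate shift can be sketched as follows; the sampler names, the [0, 1] to [-1, 1] decode, and scaling the shift by Level are assumptions for illustration:

uniform sampler2D u_FlowTex;    // texture map carrying the vector-field (Flowmap) information
uniform sampler2D u_NoiseTex;   // fluff noise map

float sampleFluffNoise(vec2 uv, float Level) {
    vec2 flow = texture(u_FlowTex, uv).xy * 2.0 - 1.0;  // decode direction to [-1, 1]
    return texture(u_NoiseTex, uv + flow * Level).r;    // outer layers shift further
}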
In one embodiment, the highlight parameter processing module 602 may be further configured to: adjust the brightness information of the highlight of the fluff according to the highlight parameter of the fluff, the level information corresponding to the level of the fluff rendering channel, and the corresponding preset adjustment parameter.
In one embodiment, the highlight parameter processing module 602 may be further configured to: adjust the offset strength of the orientation vector of the fluff according to the noise information of the fluff, so that different parts of the fluff are distinguished in orientation.
In one embodiment, the highlight parameter processing module 602 may be further configured to: obtain a surface highlight parameter and a sub-surface highlight parameter of the fluff through simulation, where the surface highlight parameter of the fluff is calculated according to the highlight parameter of the fluff, the glossiness information of the surface highlight of the fluff, and a first adjustment parameter, and the sub-surface highlight parameter of the fluff is calculated according to the highlight parameter of the fluff, the color information of the fluff, the glossiness information of the sub-surface highlight of the fluff, and a second adjustment parameter.
The image rendering module 603 may then be specifically configured to perform highlight rendering on the fluff according to the surface highlight parameter of the fluff and the sub-surface highlight parameter of the fluff.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 7 is a block diagram illustrating an electronic device, such as an apparatus 700, according to an exemplary embodiment, where the apparatus 700 may be used to generate an image according to the above-described embodiments. As shown in fig. 7, the apparatus 700 may include at least one processor 701, a communication line 702, and a memory 703.
The processor 701 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs according to the present disclosure.
Communication link 702 may include a path to transfer information between the aforementioned components, such as a bus.
The memory 703 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be separate and coupled to the processor via the communication line 702, or may be integrated with the processor. The memory provided by the embodiments of the present disclosure is generally non-volatile. The memory 703 is used for storing the computer-executable instructions for carrying out the present disclosure, and execution is controlled by the processor 701. The processor 701 is configured to execute the computer-executable instructions stored in the memory 703 to implement the methods provided by the embodiments of the present disclosure.
Optionally, the computer-executable instructions in the embodiments of the present disclosure may also be referred to as application program codes, which are not specifically limited in the embodiments of the present disclosure.
In a specific implementation, as one embodiment, the processor 701 may include one or more CPUs, such as CPU0 and CPU1 in FIG. 7.
In a specific implementation, as one embodiment, the apparatus 700 may include multiple processors, such as the processor 701 and the processor 707 in FIG. 7. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor here may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In particular implementations, apparatus 700 may also include a communication interface 704, as one embodiment. The communication interface 704 may be any device, such as a transceiver, for communicating with other devices or communication networks, such as an ethernet interface, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), etc.
In particular implementations, apparatus 700 may also include an output device 705 and an input device 706 as an example. An output device 705 is in communication with the processor 701 and may display information in a variety of ways. For example, the output device 705 may be a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display device, a Cathode Ray Tube (CRT) display device, a projector (projector), or the like. The input device 706 is in communication with the processor 701 and may receive user input in a variety of ways. For example, the input device 706 may be a mouse, a keyboard, a touch screen device, or a sensing device, among others.
In a specific implementation, the apparatus 700 may be a desktop, a laptop, a web server, a Personal Digital Assistant (PDA), a mobile phone, a tablet, a wireless terminal device, an embedded device, or a device with a similar structure as in fig. 7. The disclosed embodiments do not limit the type of device 700.
In some embodiments, the processor 701 in FIG. 7 may cause the apparatus 700 to perform the methods in the above method embodiments by calling the computer-executable instructions stored in the memory 703.
Illustratively, the functions/implementation processes of the orientation vector calculation module 601, the highlight parameter processing module 602, and the image rendering module 603 in FIG. 6 may be implemented by the processor 701 in FIG. 7 calling the computer-executable instructions stored in the memory 703.
In an exemplary embodiment, there is also provided a storage medium comprising instructions, such as the memory 703 comprising instructions, which are executable by the processor 701 of the apparatus 700 to perform the method described above.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented using a software program, the embodiments may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method of image rendering, the method comprising:
acquiring orientation vectors of fluff attached to an image model, wherein the orientation vectors represent growth directions of the fluff after adjustment based on normal vectors of corresponding vertexes of the fluff on the image model;
obtaining highlight parameters of the fluff according to the orientation vector of the fluff and the half-angle vector of the fluff, wherein the half-angle vector of the fluff is obtained by performing normalization processing according to the sum of a light vector and a view vector in a rendering scene of the image model;
and rendering the fluff according to the highlight parameters of the fluff.
2. The method according to claim 1, wherein the obtaining the highlight parameters of the fluff according to the orientation vector of the fluff and the half-angle vector of the fluff comprises:
the highlight parameter S of the fluff satisfies:
S = pow(sqrt(1 - dotHG * dotHG), Gloss)
wherein dotHG = dot(normalize(V + L), G), H denotes the half-angle vector of the fluff, G denotes the orientation vector of the fluff, Gloss denotes the glossiness information of the fluff, V denotes the view vector, L denotes the light vector, the dot function denotes the dot-product operation, and normalize denotes the normalization function.
3. The method according to claim 1 or 2, wherein the obtaining the orientation vector of the fluff attached to the image model comprises:
adding a physical force in a vertex shader of the fluff to obtain an offset of the orientation vector of the fluff relative to the normal vector of the image-model vertex from which the fluff grows, wherein the offset is obtained according to direction information of the physical force, strength information of the physical force, and level information of a fluff rendering channel in the vertex shader;
wherein the orientation vector G satisfies: G = normalize(Force * Level * Power + N), wherein Force represents the direction information of the physical force, Level represents the level information of the fluff rendering channel, Power represents the strength information of the physical force, and N represents the normal vector of the image model where the fluff is located.
4. The method according to claim 1 or 2, wherein the obtaining the orientation vector of the fluff attached to the image model comprises:
shifting the texture coordinates used to sample the fluff noise map based on vector-field information of the texture map of the fluff;
and calculating orientation vectors in which different parts of the fluff are offset to different degrees according to texture information in the texture map of the fluff.
5. The method according to claim 1 or 2, wherein before the rendering the fluff according to the highlight parameters of the fluff, the method further comprises:
adjusting the brightness information of the highlight of the fluff according to the highlight parameter of the fluff, the level information corresponding to the level of the fluff rendering channel, and the corresponding preset adjustment parameter.
6. The method according to claim 1 or 2, wherein before the rendering the fluff according to the highlight parameters of the fluff, the method further comprises:
adjusting the offset strength of the orientation vector of the fluff according to the noise information of the fluff, so that different parts of the fluff are distinguished in orientation.
7. The method according to claim 1 or 2, wherein before the rendering the fluff according to the highlight parameter of the fluff, the method further comprises:
obtaining a surface highlight parameter and a sub-surface highlight parameter of the fluff through simulation, wherein the surface highlight parameter of the fluff is calculated according to the highlight parameter of the fluff, the glossiness information of the surface highlight of the fluff, and a first adjustment parameter, and the sub-surface highlight parameter of the fluff is calculated according to the highlight parameter of the fluff, the color information of the fluff, the glossiness information of the sub-surface highlight of the fluff, and a second adjustment parameter;
the rendering the fluff according to the highlight parameter of the fluff then comprises:
performing highlight rendering on the fluff according to the surface highlight parameter of the fluff and the sub-surface highlight parameter of the fluff.
8. An image rendering apparatus, characterized in that the apparatus comprises:
an orientation vector calculation module configured to obtain an orientation vector of the fluff attached to an image model, wherein the orientation vector represents the growth direction of the fluff after adjustment based on the normal vector of the corresponding vertex of the fluff on the image model;
a highlight parameter processing module configured to obtain a highlight parameter of the fluff according to the orientation vector of the fluff and the half-angle vector of the fluff, wherein the half-angle vector of the fluff is obtained by normalizing the sum of the light vector and the view vector in the rendering scene of the image model;
and an image rendering module configured to render the fluff according to the highlight parameter of the fluff.
9. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image rendering method of any of claims 1 to 7.
10. A storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the image rendering method of any one of claims 1 to 7.
CN202010567818.7A 2020-06-19 2020-06-19 Image rendering method and device, electronic equipment and storage medium Active CN113822981B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010567818.7A CN113822981B (en) 2020-06-19 2020-06-19 Image rendering method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113822981A true CN113822981A (en) 2021-12-21
CN113822981B CN113822981B (en) 2023-12-12


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023125064A1 (en) * 2021-12-29 2023-07-06 北京字跳网络技术有限公司 Fluff rendering method, apparatus, device, and medium
WO2023125071A1 (en) * 2021-12-28 2023-07-06 北京字跳网络技术有限公司 Virtual fluff generation method and apparatus, device, medium and product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101281655A (en) * 2008-05-27 2008-10-08 中国科学院软件研究所 Method for drafting pelage-shaped pattern in profile region with accelerative GPU
CN102982575A (en) * 2012-11-29 2013-03-20 杭州挪云科技有限公司 Hair rendering method based on ray tracking
CN107221024A (en) * 2017-05-27 2017-09-29 网易(杭州)网络有限公司 Virtual objects hair treatment method and device, storage medium, electronic equipment
CN108961373A (en) * 2018-05-23 2018-12-07 福建天晴在线互动科技有限公司 A kind of method and terminal of fur rendering
CN110060321A (en) * 2018-10-15 2019-07-26 叠境数字科技(上海)有限公司 The quick real-time rendering method of hair based on true material


Also Published As

Publication number Publication date
CN113822981B (en) 2023-12-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant