CN114998505A - Model rendering method and device, computer equipment and storage medium


Info

Publication number
CN114998505A
Authority
CN
China
Prior art keywords
determining
map
color
face
vertex
Prior art date
Legal status
Pending
Application number
CN202210608281.3A
Other languages
Chinese (zh)
Inventor
冷晨 (Leng Chen)
Current Assignee
Beijing Datianmian White Sugar Technology Co ltd
Original Assignee
Beijing Datianmian White Sugar Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Datianmian White Sugar Technology Co ltd
Priority to CN202210608281.3A
Publication of CN114998505A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G06T 15/50 Lighting effects
    • G06T 15/506 Illumination models
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4023 Scaling based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 7/00 Image analysis
    • G06T 7/40 Analysis of texture
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a model rendering method, apparatus, computer device, and storage medium. The method includes: acquiring a three-dimensional model of a target face and a first map representing the face skin details; determining, based on illumination information of the target face, a second map representing the lighting effect on the face; determining, based on the first map and the second map, facial effect information reflecting the appearance of the target face under the illumination information; and rendering the three-dimensional model based on the facial effect information to obtain a target rendering model.

Description

Model rendering method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a model rendering method, apparatus, computer device, and storage medium.
Background
When the face of an object such as a super-realistic character is rendered, the facial features are usually designed to match a real human face so that the rendered character looks more realistic. However, this approach only brings the character's facial features close to those of a real face; the detail features that facial skin exhibits in a real scene are missing, so the rendered face of the object still lacks realism.
Disclosure of Invention
Embodiments of the present disclosure provide at least a model rendering method, a model rendering apparatus, a computer device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a model rendering method, including: acquiring a three-dimensional model of a target face and a first map representing the face skin details; determining, based on illumination information of the target face, a second map representing the lighting effect on the face; determining, based on the first map and the second map, facial effect information reflecting the appearance of the target face under the illumination information; and rendering the three-dimensional model based on the facial effect information to obtain a target rendering model.
In this way, for the three-dimensional model of the target face, the facial effect information reflecting the appearance of the target face under the illumination information can be obtained by acquiring the first map representing the face skin details and the second map representing the lighting effect on the face under the illumination information of the target face. Rendering the three-dimensional model with this facial effect information allows the rendered model to show both skin details and the lighting effect under illumination, and fusing the skin details with a realistic lighting effect improves the realism of the rendered target face.
In an optional embodiment, determining, based on the illumination information of the target face, the second map representing the lighting effect on the face includes: determining, based on the illumination information of the target face, a third map representing the distribution of the facial grease regions and a fourth map representing the facial sub-surface scattering effect; and fusing the third map and the fourth map to obtain the second map.
In this way, when determining the lighting effect on the face, the highlight effect presented by the grease layer regions under illumination and the sub-surface scattering effect of different facial regions can be considered at the same time, improving the realism of the rendered three-dimensional model of the face.
In an optional embodiment, the illumination information includes an illumination color and an illumination direction; the three-dimensional model includes a plurality of vertices, and patches formed by interconnecting the vertices form the surface of the three-dimensional model. Determining the fourth map representing the facial sub-surface scattering effect based on the illumination information of the target face includes: determining a sub-surface scattering region of the three-dimensional model based on the illumination direction; determining, based on the illumination color and the illumination direction, a sub-surface scattering color corresponding to each vertex in the sub-surface scattering region; and determining the fourth map based on the sub-surface scattering region and the sub-surface scattering color corresponding to each vertex in the sub-surface scattering region.
In this way, simulating the effect with a map rather than handing it to pipeline rendering reduces computing power consumption while still achieving a good sub-surface scattering effect for the target face under illumination, improving the realism of the target face under illumination.
In an optional embodiment, determining the sub-surface scattering region of the three-dimensional model based on the illumination direction includes: determining a first candidate region of the three-dimensional model based on the illumination direction and the normal direction corresponding to each vertex; determining a shooting view angle of the three-dimensional model based on the current shooting pose of the three-dimensional model, and determining a boundary region of the three-dimensional model as a second candidate region based on the shooting view angle; performing first region interpolation processing on the first candidate region and the second candidate region to obtain a first merged region between them; and taking the first merged region as the sub-surface scattering region.
In this way, each vertex in the first merged region satisfies the conditions under which the sub-surface scattering phenomenon occurs for the current illumination direction and shooting view angle; after the first merged region is taken as the sub-surface scattering region and the sub-surface scattering effect is rendered within it, the rendered target rendering model appears more realistic thanks to the sub-surface scattering effect.
In an optional embodiment, before determining the sub-surface scattering region based on the first candidate region and the second candidate region, the method further includes: acquiring a third candidate region, where the third candidate region represents the skin distribution region of a target type of skin of the target face. Determining the sub-surface scattering region based on the first candidate region and the second candidate region then includes: performing second region interpolation processing on the first candidate region, the second candidate region, and the third candidate region to obtain a second merged region among them; and taking the second merged region as the sub-surface scattering region.
In this way, the resulting second merged region not only covers the facial regions where sub-surface scattering occurs under the current illumination direction and shooting view angle, but also accounts for the light and shadow effect that special skin may produce under illumination, further supplementing the lighting effect on the face and making the rendered target rendering model more realistic in how it presents that effect.
In an optional embodiment, determining, based on the illumination color and the illumination direction, the sub-surface scattering color corresponding to each vertex in the sub-surface scattering region includes: determining a preset skin color of the target face, and superimposing the illumination color and the preset skin color to obtain a reference color of the sub-surface scattering color; determining the color tendency of each vertex under sub-surface scattering based on the illumination direction and the normal direction corresponding to each vertex; and, for any vertex, adjusting the reference color based on the color tendency corresponding to that vertex to obtain the sub-surface scattering color of the vertex.
In an optional embodiment, before determining the color tendency of each vertex under sub-surface scattering based on the illumination direction and the normal direction corresponding to each vertex, the method further includes: acquiring a color tendency map, where the color tendency map represents the color tendency corresponding to each vertex under different display light intensities. Determining the color tendency of each vertex under sub-surface scattering based on the illumination direction and the normal direction corresponding to each vertex then includes: determining the display light intensity corresponding to each vertex based on the illumination direction and the normal direction corresponding to each vertex; and determining the corresponding color tendency of each vertex from the color tendency map based on the display light intensity corresponding to that vertex.
In this way, the different positions of the vertices on the three-dimensional model are taken into account and different sub-surface scattering colors are determined according to those positions, which better matches the appearance in a real scene, so the rendered target rendering model has a higher degree of realism.
In an optional embodiment, determining the display light intensity corresponding to each vertex based on the illumination direction and the normal direction corresponding to each vertex includes: determining a highlight range angle for the three-dimensional model based on the illumination direction and the current shooting pose of the target face, where the highlight range angle indicates the angle between the illumination direction and the shooting view angle of the three-dimensional model in the current shooting pose; and determining the display light intensity corresponding to each vertex based on the highlight range angle and the normal direction of each vertex.
In an optional embodiment, the first map is generated by: acquiring a first texture normal map and a second texture normal map, where the first texture normal map represents texture information corresponding to the grease regions in the face skin details and the second texture normal map represents texture information corresponding to the non-grease regions in the face skin details; adjusting the sharpness of the first texture normal map and the second texture normal map based on the current shooting pose of the target face; and generating the first map based on the sharpness-adjusted first and second texture normal maps.
In this way, the first map expresses the texture information of different regions of the facial skin, so rendering with the first map gives different facial regions of the three-dimensional model finer texture effects and different appearances under illumination, making the resulting target rendering model more realistic.
In a second aspect, an embodiment of the present disclosure further provides a model rendering apparatus, including: a processing module configured to acquire a three-dimensional model of a target face and a first map representing the face skin details, and to determine, based on illumination information of the target face, a second map representing the lighting effect on the face; a determining module configured to determine, based on the first map and the second map, facial effect information reflecting the appearance of the target face under the illumination information; and a rendering module configured to render the three-dimensional model based on the facial effect information to obtain a target rendering model.
In an optional embodiment, when determining the second map representing the lighting effect on the face based on the illumination information of the target face, the processing module is configured to: determine, based on the illumination information of the target face, a third map representing the distribution of the facial grease regions and a fourth map representing the facial sub-surface scattering effect; and fuse the third map and the fourth map to obtain the second map.
In an optional embodiment, the illumination information includes an illumination color and an illumination direction; the three-dimensional model includes a plurality of vertices, and patches formed by interconnecting the vertices form the surface of the three-dimensional model. When determining the fourth map representing the facial sub-surface scattering effect based on the illumination information of the target face, the processing module is configured to: determine a sub-surface scattering region of the three-dimensional model based on the illumination direction; determine, based on the illumination color and the illumination direction, a sub-surface scattering color corresponding to each vertex in the sub-surface scattering region; and determine the fourth map based on the sub-surface scattering region and the sub-surface scattering color corresponding to each vertex in the sub-surface scattering region.
In an optional embodiment, when determining the sub-surface scattering region of the three-dimensional model based on the illumination direction, the processing module is configured to: determine a first candidate region of the three-dimensional model based on the illumination direction and the normal direction corresponding to each vertex; determine a shooting view angle of the three-dimensional model based on the current shooting pose of the three-dimensional model, and determine a boundary region of the three-dimensional model as a second candidate region based on the shooting view angle; perform first region interpolation processing on the first candidate region and the second candidate region to obtain a first merged region between them; and take the first merged region as the sub-surface scattering region.
In an optional embodiment, before determining the sub-surface scattering region based on the first candidate region and the second candidate region, the processing module is further configured to acquire a third candidate region, where the third candidate region represents the skin distribution region of a target type of skin of the target face. When determining the sub-surface scattering region based on the first candidate region and the second candidate region, the processing module is configured to: perform second region interpolation processing on the first candidate region, the second candidate region, and the third candidate region to obtain a second merged region among them; and take the second merged region as the sub-surface scattering region.
In an optional embodiment, when determining the sub-surface scattering color corresponding to each vertex in the sub-surface scattering region based on the illumination color and the illumination direction, the processing module is configured to: determine a preset skin color of the target face, and superimpose the illumination color and the preset skin color to obtain a reference color of the sub-surface scattering color; determine the color tendency of each vertex under sub-surface scattering based on the illumination direction and the normal direction corresponding to each vertex; and, for any vertex, adjust the reference color based on the color tendency corresponding to that vertex to obtain the sub-surface scattering color of the vertex.
In an optional embodiment, before determining the color tendency of each vertex under sub-surface scattering based on the illumination direction and the normal direction corresponding to each vertex, the processing module is further configured to acquire a color tendency map, where the color tendency map represents the color tendency corresponding to each vertex under different display light intensities. When determining the color tendency of each vertex under sub-surface scattering based on the illumination direction and the normal direction corresponding to each vertex, the processing module is configured to: determine the display light intensity corresponding to each vertex based on the illumination direction and the normal direction corresponding to each vertex; and determine the corresponding color tendency of each vertex from the color tendency map based on the display light intensity corresponding to that vertex.
In an optional embodiment, when determining the display light intensity corresponding to each vertex based on the illumination direction and the normal direction corresponding to each vertex, the processing module is configured to: determine a highlight range angle for the three-dimensional model based on the illumination direction and the current shooting pose of the target face, where the highlight range angle indicates the angle between the illumination direction and the shooting view angle of the three-dimensional model in the current shooting pose; and determine the display light intensity corresponding to each vertex based on the highlight range angle and the normal direction of each vertex.
In an optional embodiment, the first map is generated by: acquiring a first texture normal map and a second texture normal map, where the first texture normal map represents texture information corresponding to the grease regions in the face skin details and the second texture normal map represents texture information corresponding to the non-grease regions in the face skin details; adjusting the sharpness of the first texture normal map and the second texture normal map based on the current shooting pose of the target face; and generating the first map based on the sharpness-adjusted first and second texture normal maps.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including a processor and a memory, where the memory stores machine-readable instructions executable by the processor and the processor is configured to execute the machine-readable instructions stored in the memory; when executed by the processor, the machine-readable instructions perform the steps of the first aspect or any one of its possible implementations.
In a fourth aspect, optional implementations of the present disclosure further provide a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed, performs the steps of the first aspect or any one of its possible implementations.
For the description of the effects of the model rendering apparatus, the computer device, and the computer-readable storage medium, reference is made to the description of the model rendering method, which is not repeated herein.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required by the embodiments are briefly described below. The drawings are incorporated into and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
FIG. 1 illustrates a flow chart of a model rendering method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating a three-dimensional model of a target face provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a sub-surface scattering effect of skin under the influence of light provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating a highlight effect of a facial grease layer region on light reflection according to an embodiment of the present disclosure;
FIG. 5 illustrates a schematic diagram of a color tendency map provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating a model rendering apparatus provided by an embodiment of the present disclosure;
FIG. 7 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of embodiments of the present disclosure, as generally described and illustrated herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
When the face of an object such as a super-realistic character is rendered, realism is usually improved by designing facial features that match a real face. However, detail features that skin exhibits in a real scene, such as wrinkle textures and the lighting effect presented under illumination, are easily overlooked, so the rendered face of the character appears overly uniform and the rendered face of the object lacks realism.
Based on this research, the present disclosure provides a model rendering method: for a three-dimensional model of a target face, a first map representing the face skin details and a second map representing the lighting effect on the face under the illumination information of the target face are obtained, and facial effect information reflecting the appearance of the target face under the illumination information is derived from them. Rendering the three-dimensional model with this facial effect information allows the rendered model to show both skin details and the lighting effect under illumination, and fusing the skin details with a realistic lighting effect improves the realism of the rendered target face.
The drawbacks described above were identified by the inventor through practice and careful study; the discovery of these problems and the solutions proposed below should therefore be regarded as the inventor's contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the embodiment, a detailed description is first given of a model rendering method disclosed in the embodiments of the present disclosure, and an execution subject of the model rendering method provided in the embodiments of the present disclosure is generally a computer device with certain computing power, where the computer device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle mounted device, a wearable device, or a server or other processing device. In some possible implementations, the model rendering method may be implemented by a processor calling computer readable instructions stored in a memory.
The model rendering method provided by the embodiments of the present disclosure is described below. The method can be used to determine a target rendering model of a super-realistic character, and can specifically be applied to rendering the face skin details and the facial lighting effect of the super-realistic character. The super-realistic character described here may include character images, game characters, and the like that simulate and closely resemble a real human face, so the model rendering method provided by the embodiments of the present disclosure can be applied in different fields such as the production or generation of game images and the production of animated films. After the target rendering model is determined with the model rendering method provided by the embodiments of the present disclosure, it can further be used for image rendering of the super-realistic character, so that the rendered facial skin texture and the lighting effect of different regions make the super-realistic character more realistic and plausible.
Referring to fig. 1, a flowchart of a model rendering method provided by an embodiment of the present disclosure is shown. The method includes steps S101 to S103:
S101: acquiring a three-dimensional model of a target face and a first map representing the face skin details, and determining, based on illumination information of the target face, a second map representing the lighting effect on the face;
S102: determining, based on the first map and the second map, facial effect information reflecting the appearance of the target face under the illumination information;
S103: rendering the three-dimensional model based on the facial effect information to obtain a target rendering model.
The following describes details of S101 to S103.
In S101 above, the three-dimensional model of the target face is described first. The target face is specifically the face of a super-realistic character. Since the super-realistic character is a virtual object that does not actually exist, its facial characteristics, such as the positions and sizes of the facial features, can be determined by constructing a three-dimensional model corresponding to the face of the super-realistic character; the determined three-dimensional model simulates the facial appearance the super-realistic character is intended to present in the real world. For example, referring to fig. 2, a schematic diagram of a three-dimensional model of a target face provided by an embodiment of the present disclosure is shown; the three-dimensional model is a manually constructed virtual model. In addition, different three-dimensional models may be determined for different target faces.
The three-dimensional model typically includes a plurality of vertices located on its surface, and the surface of the model is composed of patches (a mesh) formed by the interconnections between these vertices.
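For illustration only, the data described above can be held in a few arrays, as in the following minimal Python sketch; the class and field names are assumptions of this description, not part of the disclosed method.

    import numpy as np

    class FaceModel:
        """Minimal face model: vertex positions, per-vertex normals, and triangular patches."""
        def __init__(self, positions, normals, faces):
            self.positions = np.asarray(positions, dtype=np.float32)  # (N, 3) vertex positions
            self.normals = np.asarray(normals, dtype=np.float32)      # (N, 3) unit normals per vertex
            self.faces = np.asarray(faces, dtype=np.int32)            # (M, 3) vertex indices per patch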
After the three-dimensional model of the target face is obtained, the first map representing the face skin details can be acquired, and the second map representing the lighting effect on the face can be determined based on the illumination information of the target face.
Next, the manner of determining the first map and the second map is described.
First, the first map is explained. The first map can represent the face skin details, in particular the texture information of different regions of the facial skin, for example the deeper, sparser wrinkles distributed in grease regions, such as the nasolabial folds around the nasal wings, and the shallower, denser fine lines distributed in non-grease regions, such as those on the cheeks. Rendering with the first map therefore gives different facial regions of the three-dimensional model finer texture effects and different appearances under illumination, making the resulting target rendering model more realistic.
In a specific implementation, the first map may be generated, for example, as follows: acquiring a first texture normal map and a second texture normal map, where the first texture normal map represents texture information corresponding to the grease regions in the face skin details and the second texture normal map represents texture information corresponding to the non-grease regions; adjusting the sharpness of the first texture normal map and the second texture normal map based on the current shooting pose of the target face; and generating the first map based on the sharpness-adjusted first and second texture normal maps.
A texture normal map consists of a number of texels used to represent the texture features of the corresponding vertices in the three-dimensional model. In one possible case, the distribution of grease regions and non-grease regions on the facial skin can be determined, and the corresponding texture details of a real face at those positions can be drawn accordingly, yielding a first texture normal map expressing the texture information of the grease regions and a second texture normal map expressing the texture information of the non-grease regions.
The first texture normal map represents the texture information of the grease regions, where sparse, deep wrinkles such as the nasolabial folds near the nasal wings are generally found, while the non-grease regions generally carry dense, shallow wrinkles such as the fine lines on the cheeks. Therefore, when generating the first map from the first and second texture normal maps, the two texture normal maps can be adjusted with reference to the current shooting pose of the target face to obtain the first map for that shooting pose.
Specifically, the distance to the target face can be obtained from the current shooting pose. As follows from the description above, at a larger distance the deeper wrinkles on the face remain more prominent, so the sharpness of the first texture normal map, which expresses that kind of texture, can be adjusted so that, for example, the wrinkles it expresses are shown more clearly. Conversely, because shallow fine lines on the face are hard to see from far away, the sharpness of the second texture normal map, which expresses that kind of texture, can be adjusted in the opposite direction, for example so that the fine lines it expresses are shown less clearly. In one possible case, when the distance exceeds a certain range, for example 5 meters, it can be assumed that the fine lines on the face cannot be seen at all, and the first texture normal map can be used directly as the first map.
In contrast to the large-distance case above, at a small distance the first map can likewise be determined by adjusting the sharpness of the first and second texture normal maps, for example so that the wrinkles expressed in the first texture normal map are shown less clearly while the fine lines expressed in the second texture normal map are shown clearly. In one possible case, when the distance is below a certain range, for example less than 5 centimeters, it can be assumed that only the fine lines on the face are visible at that distance, and the second texture normal map can be used directly as the first map.
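A minimal sketch of the distance-driven behaviour described above is given below, assuming the sharpness adjustment is approximated by a single blend weight derived from the camera distance; the 5 m and 5 cm thresholds follow the examples in the text, while the blending formula itself is an assumption of this description, not the patent's exact procedure.

    import numpy as np

    def build_first_map(oil_normal_map, non_oil_normal_map, camera_distance_m):
        """Blend the two texture normal maps into the first map based on camera distance.

        oil_normal_map:     first texture normal map (grease regions, deep sparse wrinkles), (H, W, 3)
        non_oil_normal_map: second texture normal map (non-grease regions, shallow fine lines), (H, W, 3)
        """
        if camera_distance_m >= 5.0:        # far away: fine lines invisible, keep only deep wrinkles
            return oil_normal_map
        if camera_distance_m <= 0.05:       # very close: only fine lines matter
            return non_oil_normal_map
        # In between, weight the deep-wrinkle map more as the camera moves away
        # and the fine-line map more as it moves closer (assumed linear weight).
        w = (camera_distance_m - 0.05) / (5.0 - 0.05)
        blended = w * oil_normal_map + (1.0 - w) * non_oil_normal_map
        # Re-normalize so the result remains a valid normal map.
        return blended / np.linalg.norm(blended, axis=-1, keepdims=True)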
In addition, the shooting orientation of the target face can be obtained from the current shooting pose. With the shooting orientation, it is also possible to determine, for example, which area of the face is of primary interest in the current shooting pose; for example, the current shooting pose may indicate that the cheek or the left eye is being shot. In this case, one purpose of choosing the current shooting pose is to highlight a specific position of the face, so when adjusting the sharpness of the first and second texture normal maps, the non-attention regions can be weakened to focus attention on the region of interest.
Next, the second map is explained. The second map is determined based on the illumination information of the target face and can specifically represent the lighting effect on the face. In a real scene, the effect of illumination on a real face mainly includes the sub-surface scattering effect of light in the skin, as shown in fig. 3, and the highlight effect produced by light reflecting off the grease layer regions of the face, as shown in fig. 4. Therefore, in the embodiments of the present disclosure these two effects are rendered for the target face; other illumination-induced appearance effects that may occur in a real scene can also be rendered on the three-dimensional model by determining corresponding maps in a similar manner, and all of these fall within the protection scope of the embodiments of the present disclosure.
In a specific implementation, the second map representing the lighting effect on the face can be determined based on the illumination information of the target face as follows: determining, based on the illumination information of the target face, a third map representing the distribution of the facial grease regions and a fourth map representing the facial sub-surface scattering effect; and fusing the third map and the fourth map to obtain the second map.
The illumination information may specifically include an illumination color and an illumination direction. For example, the illumination color may be the white or yellow of white or yellow natural light, or a specific color such as red, green or blue produced by an artificial light source. When acquiring the illumination direction for the target face, note that the three-dimensional model of the target face does not really exist but is created in a virtual three-dimensional space; the illumination direction can therefore be determined, for example, from the position of a virtual light source in that three-dimensional space. Specifically, the positions of the virtual light source and the three-dimensional model in the three-dimensional space can be determined to obtain their relative positional relationship, from which the illumination direction onto the three-dimensional model is determined. The shooting direction can be determined from the shooting parameters during real-time shooting, which is not described in detail here.
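For example, with a virtual point light placed in the same three-dimensional space as the model, a per-vertex illumination direction can be derived from the relative positions, as in the sketch below (a directional light would simply use a fixed vector); the function name and the point-light assumption are illustrative only.

    import numpy as np

    def illumination_direction(light_position, vertex_positions):
        """Unit direction from each vertex toward the virtual light source.

        light_position:   (3,) position of the virtual light source
        vertex_positions: (N, 3) vertex positions of the three-dimensional model
        """
        to_light = np.asarray(light_position) - np.asarray(vertex_positions)  # (N, 3)
        return to_light / np.linalg.norm(to_light, axis=-1, keepdims=True)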
The third map and the fourth map for determining the second map are specifically described below.
First, the third map is explained. The third map represents the distribution of the facial grease regions. Specifically, the grease distribution differs across the face: for example, more grease is distributed on the nose, around the lips and on the forehead, and less on the cheeks and around the eyes. Regions with more grease show a stronger highlight effect under illumination. Therefore, when generating the second map, the grease distribution of the area in which each pixel lies can be expressed through the pixel value of that pixel.
The pixel value of each pixel in the third map can thus represent the grease distribution of the corresponding facial region of the three-dimensional model.
Besides the internal cause, namely that the distribution of the facial grease layer makes the highlight effect differ between regions of the face, the highlight effect of the face is also strongly influenced by the external cause of illumination. Specifically, the highlight effect at different parts of the face differs with the illumination direction: for example, when the left side of the face is lit, the highlight effect on the left side is stronger, while the right side, even though it also has a grease layer, receives little light and therefore does not show a strong highlight effect.
In addition, the face that is finally presented is determined by the shooting view angle. According to the principle of light reflection, for illumination from a given direction the light reflected by the grease layer can only be seen at the angle of reflection. Therefore, when actually rendering highlights on the face, the shooting direction of the face can also be determined, which improves the realism of the rendered highlight display.
Therefore, in a specific implementation, the highlight range angle for the three-dimensional model can further be determined based on the illumination direction and the shooting direction, and the display light intensity of each vertex in the facial grease region can be determined based on the highlight range angle and the normal direction of each vertex corresponding to the facial grease region in the three-dimensional model.
With the illumination direction and the shooting direction of the target face determined, the highlight range angle for the three-dimensional model can be determined from them. In one possible case, the smaller the angle between the illumination direction and the shooting direction, the stronger the highlight effect that the highlight range angle indicates can be displayed in that shooting direction. In another possible case, the smaller the angle between the illumination direction and the shooting direction, the weaker the highlight effect that the highlight range angle indicates can be displayed in that shooting direction.
The three-dimensional model specifically includes a plurality of vertices, and a corresponding normal direction can be determined for each of them. In one possible case, the normal direction of a vertex can be treated as the direction of maximum reflected light intensity when light strikes the face position corresponding to that vertex; that is, if light arrives along the normal direction of the vertex, the highest-intensity highlight is rendered there. The display light intensity of each vertex is therefore related to its normal direction.
Specifically, the display light intensity of each vertex corresponding to the facial grease region can be determined based on the highlight range angle and the normal direction of each vertex. In one possible case, the highlight range angle can be represented by the sum of the vector corresponding to the illumination direction and the vector corresponding to the shooting direction; to obtain the display light intensity of a vertex, a vector dot product can be computed between this sum vector and the normal direction of the vertex, and the display light intensity is determined from the result of the dot product. The result of the dot product can serve as the basic light intensity information of the vertex: the larger its value, the larger the determined display light intensity; the smaller its value, the smaller the display light intensity.
In addition, the pixel value of each pixel in the third map can be limited by a display light intensity threshold. Specifically, under illumination the facial grease layer reflects light to a certain degree, producing the highlight effect, but it cannot reflect all of the light; that is, the display light intensity does not exceed the intensity of the illumination. Moreover, in regions with different grease layer distributions, the display light intensity varies with different strengths. Based on this, a display light intensity threshold can be set, representing the highest display light intensity that can be shown at the corresponding vertex, so that a display light intensity threshold is determined for each vertex according to its pixel value; in other words, the grease layer distribution of different regions is reflected in the variation of the display light intensity threshold.
Limiting the pixel value of each pixel in the third map with the display light intensity threshold therefore makes the rendered three-dimensional model show the realistic reflection behaviour in which light intensity is weakened when the grease layer reflects light. In this case, when determining the display light intensity of each vertex, the display light intensity of each vertex in the facial grease region can specifically be determined based on the display light intensity threshold corresponding to each vertex in the facial grease region, the highlight range angle, and the normal direction corresponding to each vertex in the three-dimensional model.
When generating the third map, the dot product of the highlight range angle (the sum vector) with the normal direction of each vertex in the three-dimensional model first gives the basic light intensity information of that vertex; this basic light intensity information is then adjusted with the display light intensity threshold corresponding to each vertex to obtain the display light intensity of each vertex in the facial grease region, which determines the pixel value corresponding to that vertex in the third map.
In this way, the third map representing the distribution of the facial grease regions can be determined.
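The sketch below puts the pieces above together: the sum vector of the illumination and shooting directions plays the role of the highlight range angle, its dot product with each vertex normal gives the basic light intensity, and a per-vertex display light intensity threshold taken from the grease distribution clamps the result. The exact weighting is not fixed by the text, so this is only one plausible reading; all names are illustrative.

    import numpy as np

    def third_map_intensity(normals, light_dir, view_dir, intensity_threshold):
        """Per-vertex display light intensity for the facial grease regions.

        normals:             (N, 3) unit vertex normals
        light_dir, view_dir: (3,) unit vectors for the illumination and shooting directions
        intensity_threshold: (N,) per-vertex maximum display light intensity, encoding the
                             grease distribution (higher where more grease is present)
        """
        # Sum vector of the illumination and shooting directions (the "highlight range angle"),
        # similar in spirit to a Blinn-Phong half vector.
        sum_vec = light_dir + view_dir
        sum_vec = sum_vec / np.linalg.norm(sum_vec)
        # Basic light intensity: dot product of the sum vector with each vertex normal.
        base_intensity = np.clip(normals @ sum_vec, 0.0, 1.0)
        # Limit by the per-vertex display light intensity threshold so the grease layer
        # never reflects more light than it receives.
        return np.minimum(base_intensity, intensity_threshold)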
Next, the fourth map is explained. The fourth map represents the sub-surface scattering effect of the face. Sub-surface scattering is a complex phenomenon occurring in media with a high scattering coefficient: after light enters such a medium, for example after light strikes facial skin, it is scattered many times inside the medium before finally exiting, giving the medium a characteristic semi-translucent appearance. Directly simulating the multiple scattering of light inside the skin in the rendering pipeline consumes excessive computing power and is hard to deploy on mobile devices, so the effect can instead be simulated with a map, which reduces computing cost while still giving the target face a good sub-surface scattering effect under illumination and improving the realism of the target face under illumination.
In a specific implementation, since illumination affects the sub-surface scattering effect of a real face in a real scene, the illumination information is also determined first when determining the fourth map representing the facial sub-surface scattering effect; the illumination information specifically includes the illumination color and the illumination direction described above.
Specifically, the fourth map can be determined based on the illumination information in the following manner: determining a sub-surface scattering region of the three-dimensional model based on the illumination direction; determining, based on the illumination color and the illumination direction, a sub-surface scattering color corresponding to each vertex in the sub-surface scattering region; and determining the fourth map based on the sub-surface scattering region and the sub-surface scattering color corresponding to each vertex in the sub-surface scattering region.
The manner in which the sub-surface scattering region is determined is described first. In a specific implementation, the sub-surface scattering region of the three-dimensional model can be determined based on the illumination direction as follows: determining a first candidate region of the three-dimensional model based on the illumination direction and the normal direction corresponding to each vertex; determining a shooting view angle of the three-dimensional model based on the current shooting pose of the three-dimensional model, and determining a boundary region of the three-dimensional model as a second candidate region based on the shooting view angle; performing first region interpolation processing on the first candidate region and the second candidate region to obtain a first merged region between them; and taking the first merged region as the sub-surface scattering region.
When determining the sub-surface scattering region of the three-dimensional model based on the illumination direction, the first candidate region can be determined according to how facial skin appears under illumination in a real scene. From the appearance of a real face under illumination, it can be seen that skin facing the light does not present an obvious sub-surface scattering effect, while skin facing away from the light does. To simulate this phenomenon, the first candidate region is determined from the illumination direction and the vertex normal directions as the candidate region of the three-dimensional model in which the sub-surface scattering effect can occur; for each vertex in the first candidate region, the normal direction of the vertex is opposite to the illumination direction.
Then, based on a characteristic of sub-surface scattering, namely that for objects exhibiting the effect the edge portions present it more strongly than the central portions, a second candidate region for rendering the sub-surface scattering effect can be determined. When determining the edge portion of the three-dimensional model, the visible edge differs under different shooting poses, so the boundary region of the three-dimensional model under its current shooting view angle can be determined from the current shooting pose of the three-dimensional model.
After the first candidate region and the second candidate region are determined, first region interpolation processing can be performed on them to obtain a first merged region between the first candidate region and the second candidate region. Each vertex in the first merged region satisfies the conditions under which the sub-surface scattering phenomenon occurs for the current illumination direction and shooting view angle, so after the first merged region is taken as the sub-surface scattering region and the sub-surface scattering effect is rendered within it, the rendered target rendering model appears more realistic thanks to the sub-surface scattering effect.
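A sketch of the region construction just described is given below, assuming the backlit test and the view-boundary (rim) test are implemented with dot products and the region interpolation is a simple smooth blend; the smoothstep-style merge and the optional thin-skin mask argument are assumptions of this description, since the exact interpolation formula is not fixed by the text.

    import numpy as np

    def smoothstep(edge0, edge1, x):
        t = np.clip((x - edge0) / (edge1 - edge0), 0.0, 1.0)
        return t * t * (3.0 - 2.0 * t)

    def subsurface_scattering_mask(normals, light_dir, view_dir, thin_skin_mask=None):
        """Per-vertex mask (0..1) marking the sub-surface scattering region.

        normals:   (N, 3) unit vertex normals
        light_dir: (3,) unit vector from the surface toward the light
        view_dir:  (3,) unit vector from the surface toward the camera
        thin_skin_mask: optional (N,) mask of thin "target type" skin (e.g. nasal wings, earlobes)
        """
        backlit = smoothstep(0.0, 1.0, -(normals @ light_dir))        # first candidate region (faces away from light)
        rim = smoothstep(0.0, 1.0, 1.0 - np.abs(normals @ view_dir))  # second candidate region (boundary under view angle)
        merged = backlit * rim                                        # first region interpolation (one plausible choice)
        if thin_skin_mask is not None:
            merged = np.maximum(merged, thin_skin_mask)               # second region interpolation with the third candidate region
        return merged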
In addition, a third candidate region can be determined by considering the skin type characteristics of the face, and the sub-surface scattering region can then be determined using this third candidate region. Specifically, the skin of a real face in a real scene is supported by bone, soft tissue and the like, which gives the skin different curvatures and thicknesses at different positions; skin at positions with a supporting framework, such as the nasal wings and earlobes, is a thinner and smoother target type of skin. For this target type of skin, the strength of the effect under sub-surface scattering is also affected: even if some vertices of the three-dimensional model do not satisfy the conditions for the sub-surface scattering phenomenon under the current illumination direction and shooting view angle, a slight light-transmission effect can still occur in the region of the target type of skin, and it can be rendered as a sub-surface scattering effect. The three-dimensional model can therefore also be used to determine the region corresponding to the target type of skin, yielding the third candidate region. In one possible case, the third candidate region can be provided as a mask map for rendering.
In this case, when determining the sub-surface scattering region, second region interpolation processing can be performed on the first candidate region, the second candidate region, and the third candidate region to obtain a second merged region among them, and the second merged region can be taken as the sub-surface scattering region. Here, since the third candidate region may be provided as a map, the interpolation with the first and second candidate regions is performed on that map, so this second region interpolation differs from the first region interpolation used to determine the first merged region described above. The second merged region obtained in this way not only covers the facial regions where sub-surface scattering occurs under the current illumination direction and shooting view angle, but also accounts for the light and shadow effect that special skin may produce under illumination, further supplementing the lighting effect on the face and making the rendered target rendering model more realistic in how it presents that effect.
Next, the manner of determining the sub-surface scattering color corresponding to each vertex in the sub-surface scattering region is described. In a real scene, the sub-surface scattering color presented by a real face is affected by both the skin color and the illumination color. For example, if the skin color of the real face is dark, the resulting sub-surface scattering effect is also dark and dull; if the skin color is fair, the sub-surface scattering effect is bright and yellowish. Similarly, if the illumination color is bright, such as white or warm yellow, the sub-surface effect presented by the face after illumination is bright, for example bright yellow; if the illumination color is dark, such as a deep yellow, the sub-surface effect presented by the face after illumination is dark, for example close to black.
Since the sub-surface scattering color in a real scene is affected by the skin color of the face and the illumination color, in order to simulate this characteristic, the reference color of the sub-surface scattering may be determined from a preset skin color of the target face and the illumination color. Specifically, the preset skin color of the target face may be determined, for example, in response to a selection operation by a user, or the skin color decided during the art design of the target face may be taken as the preset skin color. The illumination color can be determined from the illumination information explained above. The illumination color and the preset skin color are then superposed to obtain the reference color of the sub-surface scattering color.
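A minimal sketch of this superposition is given below; treating the superposition as a per-channel multiplication of linear RGB colors is an assumption of the sketch, chosen so that a dark light darkens the result and a bright light brightens it, as described above.

    import numpy as np

    def subsurface_reference_color(preset_skin_color, illumination_color):
        # Both colors are linear RGB triples in [0, 1]; the product serves
        # as the reference (average) color of the sub-surface scattering.
        skin = np.asarray(preset_skin_color, dtype=np.float64)
        light = np.asarray(illumination_color, dtype=np.float64)
        return np.clip(skin * light, 0.0, 1.0)

    # Example: a warm, bright light over a fair skin tone gives a bright,
    # yellowish reference color, matching the behaviour described above.
    reference = subsurface_reference_color([0.92, 0.76, 0.65], [1.0, 0.95, 0.80])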
The reference color of the sub-surface scattering can be regarded as its average color. Observation of the sub-surface scattering phenomenon in real scenes shows that the sub-surface scattering color gradually attenuates from the outer edge inward: the outer edge presents a yellowish, brighter color, while the inner side presents a blackish, darker color. This is because light continuously loses color as it is refracted inside the tissue.
Specifically, when determining the sub-surface scattering color presented by each vertex of the three-dimensional model after color attenuation, the color tendency of each vertex under sub-surface scattering can be determined based on the illumination direction and the normal direction corresponding to each vertex; then, for any vertex, the reference color is color-adjusted based on the color tendency corresponding to that vertex to obtain the sub-surface scattering color corresponding to that vertex.
In order to simulate how the color changes under the sub-surface scattering effect, the color attenuation process can be baked into a Look-Up Table (LUT), referred to in the embodiments of the present disclosure as a color tendency map. The color tendency map represents a mapping relationship; specifically, in the embodiments of the present disclosure it stores the color tendency corresponding to each vertex under different display light intensities. Illustratively, referring to fig. 5, a schematic diagram of a color tendency map provided by an embodiment of the present disclosure is shown. In the color tendency map, the colors on the left side are yellowish and bright, and the colors on the right side are blackish and dark. Below the color tendency map, the display light intensity is marked with an arrowed coordinate axis. According to the mapping relationship in the color tendency map, a vertex with a higher display light intensity corresponds to a sub-surface scattering color that is more yellow and brighter, while a vertex with a lower display light intensity corresponds to a sub-surface scattering color that is darker and closer to black.
That is, the color tendency indicates how the rendering color of a target vertex under the sub-surface scattering effect tends to shift in terms of color hue, lightness, chroma and so on. For example, for a target vertex at the cheek edge, under the sub-surface scattering effect the color in the red hue channel may tend towards white and the lightness may tend towards brighter values. In other words, the color tendency can be used to determine the rendering color of the target vertex under sub-surface scattering.
Specifically, the display light intensity corresponding to each vertex may be determined based on the illumination direction and the normal direction corresponding to each vertex; reference may be made to the above description of determining the third map representing the facial grease area distribution, and details are not repeated here.
After the display light intensity corresponding to each vertex is determined, the corresponding color tendency can be looked up for each vertex through the mapping relationship in the color tendency map, and then the reference color can be color-adjusted based on the color tendency corresponding to the vertex to obtain the sub-surface scattering color corresponding to the vertex. Specifically, the color tendency may be used as a factor acting on the reference color, for example by multiplying it with the reference color to obtain the sub-surface scattering color corresponding to the vertex. In this way, the different positions of the vertices on the three-dimensional model are taken into account and different sub-surface scattering colors are determined accordingly, which better matches the appearance in a real scene, so that the rendered and displayed target rendering model has a higher degree of realism.
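The per-vertex flow described above can be sketched as follows; the clamped dot product standing in for the display light intensity (the disclosure derives it from the highlight range angle, as discussed later) and the one-dimensional LUT layout are assumptions of this sketch.

    import numpy as np

    def display_light_intensity(normals, light_dir):
        # Simple stand-in: clamped dot product between normal and light direction.
        n = np.asarray(normals, dtype=np.float64)
        n = n / np.linalg.norm(n, axis=-1, keepdims=True)
        return np.clip(n @ np.asarray(light_dir, dtype=np.float64), 0.0, 1.0)

    def sample_color_tendency(color_tendency_map, intensity):
        # color_tendency_map: (K, 3) 1-D LUT, dark entries at low intensity,
        # yellowish/bright entries at high intensity.
        k = len(color_tendency_map)
        idx = np.clip((intensity * (k - 1)).astype(int), 0, k - 1)
        return color_tendency_map[idx]

    def subsurface_scattering_color(normals, light_dir, color_tendency_map, reference_color):
        tendency = sample_color_tendency(color_tendency_map,
                                         display_light_intensity(normals, light_dir))
        # The color tendency acts as a per-vertex factor on the reference color.
        return tendency * np.asarray(reference_color, dtype=np.float64)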
In this way, by determining the region of the three-dimensional model in which the sub-surface scattering effect is to be rendered and determining the corresponding sub-surface scattering color for each vertex in that region, a fourth map representing the sub-surface scattering effect of the human face can be obtained.
After the third map representing the distribution of the facial grease area and the fourth map representing the facial sub-surface scattering effect are determined, both maps reflect how the face appears under the illumination information, so the third map and the fourth map are fused to obtain the second map representing the light receiving effect of the face. When fusing the third map and the fourth map, note that both are determined from the same three-dimensional model, so their sizes are consistent; that is, any pixel in the third map has a pixel at the unique corresponding position in the fourth map. Therefore, during fusion, interpolation processing may be performed, for example, on the pixels at corresponding positions in the third map and the fourth map to obtain the second map representing the light receiving effect of the face.
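Because the two maps are pixel-aligned, the fusion can be sketched as a straightforward per-pixel interpolation; the fixed interpolation weight is an illustrative assumption.

    import numpy as np

    def fuse_maps(third_map, fourth_map, weight=0.5):
        # third_map:  facial grease-area distribution, (H, W, C)
        # fourth_map: facial sub-surface scattering effect, same size
        # Returns the second map representing the light receiving effect.
        assert third_map.shape == fourth_map.shape
        return (1.0 - weight) * third_map + weight * fourth_map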
For S102 and S103 above, after the first map and the second map are obtained, the facial effect information reflecting how the target face appears under the illumination information may be determined. Specifically, the first map and the second map may be fused by interpolation processing, and the resulting new map may be used as a map expressing the facial effect information presented under the illumination information. Alternatively, in another possible case, the rendering effect of each vertex of the three-dimensional model may be determined vertex by vertex using the first map and the second map, yielding the facial effect information corresponding to each vertex of the three-dimensional model.
The three-dimensional model can then be rendered using the facial effect information to obtain the target rendering model. When rendering the three-dimensional model with the determined facial effect information, a rendering pipeline may be used, for example, so as to obtain a target rendering model with a high degree of realism in both the expression of human-face skin detail and the expression of the light receiving effect. The target rendering model can further be used in different application scenarios such as animation production and game production, where the detailed presentation of the skin gives the corresponding character a higher degree of realism.
It will be understood by those skilled in the art that, in the above method of the present embodiment, the order in which the steps are written does not imply a strict order of execution and does not limit the implementation; the order of execution of the steps should be determined by their function and possible internal logic.
Based on the same inventive concept, a model rendering device corresponding to the model rendering method is also provided in the embodiments of the present disclosure, and because the principle of solving the problem of the device in the embodiments of the present disclosure is similar to the model rendering method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 6, a schematic diagram of a model rendering apparatus provided in an embodiment of the present disclosure is shown. The apparatus includes: a processing module 61, a determining module 62 and a rendering module 63; wherein:
the processing module 61 is configured to acquire a three-dimensional model of a target face and a first map for representing the details of the face skin, and to determine a second map for representing the light receiving effect of the face based on the illumination information of the target face;
a determining module 62, configured to determine, based on the first map and the second map, facial effect information reflecting how the face appears under the illumination information;
and a rendering module 63, configured to render the three-dimensional model based on the facial effect information to obtain a target rendering model.
In an alternative embodiment, the processing module 61, when determining the second map for representing the light receiving effect of the face based on the illumination information of the target face, is configured to: determine a third map for representing the distribution of the facial grease area and a fourth map for representing the facial sub-surface scattering effect based on the illumination information of the target face; and perform fusion processing on the third map and the fourth map to obtain the second map.
In an alternative embodiment, the illumination information includes an illumination color and an illumination direction; the three-dimensional model includes: a plurality of vertices; a surface patch formed by connecting a plurality of vertexes with each other forms the surface of the three-dimensional model; the processing module 61, when determining the fourth map for characterizing the face sub-surface scattering effect based on the illumination information of the target face, is configured to: determining a subsurface scattering region of the three-dimensional model based on the illumination direction; determining a sub-surface scattering color corresponding to each vertex in the sub-surface scattering area based on the illumination color and the illumination direction; and determining the fourth map based on the sub-surface scattering region and the sub-surface scattering color corresponding to each vertex in the sub-surface scattering region.
In an alternative embodiment, the processing module 61, when determining the sub-surface scattering region of the three-dimensional model based on the illumination direction, is configured to: determine a first candidate region of the three-dimensional model based on the illumination direction and the normal direction corresponding to each vertex; determine a shooting view angle of the three-dimensional model based on the current shooting pose of the three-dimensional model, and determine a boundary region of the three-dimensional model as a second candidate region based on the shooting view angle; perform first region interpolation processing on the first candidate region and the second candidate region to obtain a first combined region between the first candidate region and the second candidate region; and take the first combined region as the sub-surface scattering region.
In an optional embodiment, the processing module 61 is further configured to, before determining the sub-surface scattering region based on the first candidate region and the second candidate region: acquire a third candidate region, the third candidate region being used for representing a skin distribution region of a target type of skin of the target face. The processing module 61, when determining the sub-surface scattering region based on the first candidate region and the second candidate region, is configured to: perform second region interpolation processing on the first candidate region, the second candidate region and the third candidate region to obtain a second combined region among the first candidate region, the second candidate region and the third candidate region; and take the second combined region as the sub-surface scattering region.
In an optional embodiment, when determining the sub-surface scattering color corresponding to each vertex in the sub-surface scattering area based on the illumination color and the illumination direction, the processing module 61 is configured to: determining a preset skin color of the target face, and superposing the illumination color and the preset skin color to obtain a reference color of the sub-surface scattering color; determining the color tendency of each vertex under the scattering of the sub-surface based on the illumination direction and the normal direction corresponding to each vertex; and aiming at any vertex, carrying out color adjustment on the reference color based on the color tendency corresponding to the vertex to obtain the sub-surface scattering color corresponding to the vertex.
In an alternative embodiment, before determining the color tendency of each vertex under sub-surface scattering based on the illumination direction and the normal direction corresponding to each vertex, the processing module 61 is further configured to: acquire a color tendency map, the color tendency map being used for representing the color tendency corresponding to each vertex under different display light intensities. When determining the color tendency of each vertex under sub-surface scattering based on the illumination direction and the normal direction corresponding to each vertex, the processing module 61 is configured to: determine the display light intensity corresponding to each vertex based on the illumination direction and the normal direction corresponding to each vertex; and determine the corresponding color tendency for each vertex in the color tendency map based on the display light intensity corresponding to that vertex.
In an alternative embodiment, when determining the display light intensity corresponding to each vertex based on the illumination direction and the normal direction corresponding to each vertex, the processing module 61 is configured to: determine a highlight range angle for the three-dimensional model based on the illumination direction and the current shooting pose of the target face, the highlight range angle indicating the angle between the illumination direction and the shooting view angle of the three-dimensional model under the current shooting pose; and determine the display light intensity corresponding to each vertex based on the highlight range angle and the normal direction of each vertex.
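For illustration only, the highlight range angle and the resulting display light intensity might be computed as in the following sketch; representing the angle between the illumination direction and the shooting view angle with a Blinn-style half vector, and sharpening the response with a shininess exponent, are assumptions of this sketch.

    import numpy as np

    def normalize(v):
        v = np.asarray(v, dtype=np.float64)
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    def highlight_half_vector(light_dir, view_dir):
        # Encodes the highlight range angle between the illumination
        # direction and the shooting view angle under the current pose.
        return normalize(normalize(light_dir) + normalize(view_dir))

    def display_light_intensity_from_highlight(normals, light_dir, view_dir, shininess=16.0):
        h = highlight_half_vector(light_dir, view_dir)
        n = normalize(normals)
        # Vertices whose normals fall inside the highlight range light up strongly.
        return np.clip(n @ h, 0.0, 1.0) ** shininess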
In an alternative embodiment, the first map is generated by: acquiring a first texture normal map and a second texture normal map; the first texture normal map is used for representing texture information corresponding to a grease area in the human face skin details, and the second texture normal map is used for representing texture information corresponding to a non-grease area in the human face skin details; and respectively performing definition adjustment on the first texture normal map and the second texture normal map based on the current shooting pose of the target face, and generating the first map based on the first texture normal map and the second texture normal map after definition adjustment.
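A possible sketch of this first-map generation is shown below; the neighbour-averaging blur used for the definition (sharpness) adjustment, the camera-distance heuristic and the mask-based blend are assumptions of this sketch.

    import numpy as np

    def adjust_definition(normal_map, camera_distance, ref_distance=1.0):
        # Soften a texture normal map as the camera moves away from the face.
        steps = max(0, int(round(camera_distance / ref_distance)) - 1)
        out = np.asarray(normal_map, dtype=np.float64)
        for _ in range(steps):
            out = 0.25 * (np.roll(out, 1, axis=0) + np.roll(out, -1, axis=0) +
                          np.roll(out, 1, axis=1) + np.roll(out, -1, axis=1))
        return out

    def build_first_map(oil_normal_map, non_oil_normal_map, oil_mask, camera_distance):
        # oil_normal_map / non_oil_normal_map: (H, W, 3) texture normal maps for
        # the greasy and non-greasy facial areas; oil_mask: (H, W) in [0, 1].
        a = adjust_definition(oil_normal_map, camera_distance)
        b = adjust_definition(non_oil_normal_map, camera_distance)
        m = np.asarray(oil_mask, dtype=np.float64)[..., None]
        return m * a + (1.0 - m) * b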
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
An embodiment of the present disclosure further provides a computer device, as shown in fig. 7, which is a schematic structural diagram of the computer device provided in the embodiment of the present disclosure, and includes:
a processor 10 and a memory 20; the memory 20 stores machine-readable instructions executable by the processor 10, the processor 10 being configured to execute the machine-readable instructions stored in the memory 20, the processor 10 performing the following steps when the machine-readable instructions are executed by the processor 10:
acquiring a three-dimensional model of a target face and a first map for representing the details of the face skin; determining a second map for representing the light receiving effect of the face based on the illumination information of the target face; determining facial effect information reflecting how the target face appears under the illumination information based on the first map and the second map; and rendering the three-dimensional model based on the facial effect information to obtain a target rendering model.
The memory 20 includes an internal memory 210 and an external memory 220; the internal memory 210 temporarily stores operation data of the processor 10 and data exchanged with the external memory 220, such as a hard disk, and the processor 10 exchanges data with the external memory 220 through the internal memory 210.
For the specific execution process of the instruction, reference may be made to the steps of the model rendering method described in the embodiment of the present disclosure, and details are not described here.
Embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the model rendering method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure further provide a computer program product, where the computer program product carries program codes, and instructions included in the program codes may be used to execute the steps of the model rendering method in the foregoing method embodiments, which may be specifically referred to the foregoing method embodiments and are not described herein again.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described apparatus embodiments are merely illustrative; for example, the division of the units is only one logical division, and there may be other divisions in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the scope of protection of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that modifications or changes may still be made to the technical solutions recorded in the foregoing embodiments, or equivalent substitutions may be made for some of the technical features, within the technical scope of the present disclosure; such modifications, changes or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present disclosure, and shall all fall within the scope of protection of the present disclosure. Therefore, the scope of protection of the present disclosure shall be subject to the scope of protection of the claims.

Claims (12)

1. A method of model rendering, comprising:
acquiring a three-dimensional model of a target face and a first map for representing the details of the face skin; determining a second map used for representing the light receiving effect of the face based on the illumination information of the target face;
determining face effect information reflecting the expression of the target face under the illumination information based on the first map and the second map;
rendering the three-dimensional model based on the facial effect information to obtain a target rendering model.
2. The method according to claim 1, wherein the determining a second map for representing the light receiving effect of the face based on the illumination information of the target face comprises:
determining a third map for representing the distribution of the facial grease area and a fourth map for representing the facial sub-surface scattering effect based on the illumination information of the target face;
and performing fusion processing on the third map and the fourth map to obtain the second map.
3. The method according to claim 2, wherein the illumination information includes illumination color and illumination direction; the three-dimensional model includes: a plurality of vertices; a surface patch formed by connecting a plurality of vertexes with each other forms the surface of the three-dimensional model;
the determining a fourth map for characterizing a sub-surface scattering effect of the face based on the illumination information of the target face comprises:
determining a sub-surface scattering region of the three-dimensional model based on the illumination direction; and
determining a sub-surface scattering color corresponding to each vertex in the sub-surface scattering area based on the illumination color and the illumination direction;
and determining the fourth map based on the sub-surface scattering region and the sub-surface scattering color corresponding to each vertex in the sub-surface scattering region.
4. The method of claim 3, wherein determining the sub-surface scattering region of the three-dimensional model based on the illumination direction comprises:
determining a first candidate area of the three-dimensional model based on the illumination direction and the normal direction corresponding to each vertex;
determining a shooting visual angle of the three-dimensional model based on the current shooting pose of the three-dimensional model, and determining a boundary area of the three-dimensional model as a second candidate area based on the shooting visual angle;
performing first region interpolation processing on the first candidate region and the second candidate region to obtain a first combined region between the first candidate region and the second candidate region;
and taking the first combined region as the sub-surface scattering region.
5. The method of claim 4, further comprising, prior to determining the sub-surface scattering region based on the first candidate region and the second candidate region:
acquiring a third candidate region; the third candidate region is used for representing a skin distribution region of a target type of skin of the target face;
the determining the sub-surface scattering region based on the first candidate region and the second candidate region comprises:
performing second region interpolation processing on the first candidate region, the second candidate region and the third candidate region to obtain a second combined region among the first candidate region, the second candidate region and the third candidate region;
and taking the second combined region as the sub-surface scattering region.
6. The method according to any one of claims 3-5, wherein said determining a sub-surface scattering color corresponding to each vertex in the sub-surface scattering region based on the illumination color and the illumination direction comprises:
determining a preset skin color of the target face, and superposing the illumination color and the preset skin color to obtain a reference color of the sub-surface scattering color;
determining the color tendency of each vertex under the scattering of the sub-surface based on the illumination direction and the normal direction corresponding to each vertex;
and aiming at any vertex, carrying out color adjustment on the reference color based on the color tendency corresponding to the vertex to obtain the sub-surface scattering color corresponding to the vertex.
7. The method of claim 6, further comprising, prior to determining the color tendency of each vertex under sub-surface scattering based on the illumination direction and the normal direction corresponding to each vertex: acquiring a color tendency map; the color tendency map is used for representing the color tendency corresponding to each vertex under different display light intensities;
determining the color tendency of each vertex under the scattering of the subsurface based on the illumination direction and the normal direction corresponding to each vertex, including:
determining the display light intensity corresponding to each vertex based on the illumination direction and the normal direction corresponding to each vertex;
and determining the corresponding color tendency for each vertex in the color tendency map based on the display light intensity corresponding to the vertex.
8. The method of claim 7, wherein the determining the display intensity corresponding to each vertex based on the illumination direction and the normal direction corresponding to each vertex comprises:
determining a highlight range angle to the three-dimensional model based on the illumination direction and the current shooting pose of the target face; the highlight range angle is used for indicating an angle between the illumination direction and a shooting visual angle of the three-dimensional model under the current shooting pose;
and determining the display light intensity corresponding to each vertex based on the highlight range angle and the normal direction of each vertex.
9. The method according to any of claims 1-8, wherein the first map is generated by:
acquiring a first texture normal map and a second texture normal map; the first texture normal map is used for representing texture information corresponding to a grease area in the human face skin details, and the second texture normal map is used for representing texture information corresponding to a non-grease area in the human face skin details;
and respectively performing definition adjustment on the first texture normal map and the second texture normal map based on the current shooting pose of the target face, and generating the first map based on the first texture normal map and the second texture normal map after definition adjustment.
10. A model rendering apparatus, comprising:
the processing module is used for acquiring a three-dimensional model of a target face and a first map used for representing the details of the face skin; determining a second map for representing the light receiving effect of the face based on the illumination information of the target face;
the determining module is used for determining face effect information reflecting the face expression under the illumination information based on the first map and the second map;
and the rendering module is used for rendering the three-dimensional model based on the facial effect information to obtain a target rendering model.
11. A computer device, comprising: a processor, a memory storing machine-readable instructions executable by the processor, the processor for executing the machine-readable instructions stored in the memory, the processor performing the steps of the model rendering method of any of claims 1 to 9 when the machine-readable instructions are executed by the processor.
12. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when executed by a computer device, performs the steps of the model rendering method according to any one of claims 1 to 9.
CN202210608281.3A 2022-05-31 2022-05-31 Model rendering method and device, computer equipment and storage medium Pending CN114998505A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210608281.3A CN114998505A (en) 2022-05-31 2022-05-31 Model rendering method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114998505A (en) 2022-09-02

Family

ID=83030371


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination