WO2024082927A1 - Hair rendering method, apparatus, device, storage medium and computer program product - Google Patents


Publication number
WO2024082927A1
Authority
WO
WIPO (PCT)
Prior art keywords
light path
longitudinal
scattering amount
information
pixel point
Prior art date
Application number
PCT/CN2023/121023
Other languages
English (en)
French (fr)
Inventor
潘晓宇 (Pan Xiaoyu)
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Publication of WO2024082927A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G06T15/06 Ray-tracing
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level

Definitions

  • the present application relates to the field of computer technology, and in particular to a hair rendering method, apparatus, computer equipment, storage medium and computer program product.
  • the Kajiya-Kay lighting model is often used to shade and render the hair of virtual characters; that is, the hair is modeled as an opaque cylinder and shaded according to the reflection principle.
  • a hair rendering method comprising:
  • obtaining main light source information of a virtual main light source set in a virtual environment, wherein the main light source information includes a light source position and a light source direction that change over time, and the virtual main light source acts on a hair area of the virtual image;
  • each light path of the virtual main light source includes a reflection light path, a transmission light path and a transflection light path;
  • a longitudinal scattering amount is determined based on the longitudinal angle corresponding to the pixel point and the illumination information
  • an azimuthal scattering amount is determined based on the corresponding azimuthal angle
  • a scattering amount of the pixel point corresponding to the light path is determined according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information
  • the scattering amount of each light path corresponding to the pixel point is integrated to obtain the shading information of the virtual main light source corresponding to the pixel point, and the shading information is used to render the hair of the virtual image.
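The per-path steps above (a longitudinal scattering amount from the longitudinal angle and illumination information, an azimuthal scattering amount from the azimuthal angle, a per-path scattering amount from both plus the color information, then fusion across paths) can be sketched in Python. This is an illustrative reading only, not the patent's implementation: the Gaussian longitudinal lobe, the cosine azimuthal lobe, and additive fusion are assumptions borrowed from standard single-scattering hair shading.

```python
import math

def longitudinal_scattering(theta_h, offset_mean, variance_width):
    """Gaussian lobe over the longitudinal half-angle theta_h (radians):
    offset_mean shifts the halo along the hair, variance_width widens it."""
    d = theta_h - offset_mean
    return math.exp(-d * d / (2.0 * variance_width ** 2)) / (
        variance_width * math.sqrt(2.0 * math.pi))

def azimuthal_scattering(phi):
    """Simple cosine lobe over the relative azimuth phi = phi_r - phi_i."""
    return max(math.cos(phi / 2.0), 0.0)

def shade_pixel(paths):
    """Fuse the scattering amounts of all light paths (R, TT, TRT)
    into per-channel shading information for one pixel."""
    shading = [0.0, 0.0, 0.0]
    for p in paths:
        m = longitudinal_scattering(p["theta_h"], p["offset_mean"], p["variance_width"])
        n = azimuthal_scattering(p["phi"])
        for c in range(3):
            shading[c] += m * n * p["color"][c]
    return shading
```

Each entry of `paths` bundles one light path's angles, illumination parameters and color; summation is one plausible reading of "fusing" the per-path scattering amounts.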
  • a hair rendering device comprising:
  • a first determination module is used to obtain main light source information of a virtual main light source set in a virtual environment, wherein the main light source information includes a light source position and a light source direction that change over time, and the virtual main light source acts on a hair area of the virtual image;
  • each light path of the virtual main light source includes a reflection light path, a transmission light path and a transflection light path;
  • a second determination module is configured to determine, for each light path, a longitudinal scattering amount based on the longitudinal angle corresponding to the pixel point and the illumination information, determine an azimuthal scattering amount based on the corresponding azimuthal angle, and determine a scattering amount of the pixel point corresponding to the light path according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information;
  • a fusion module is used to fuse the scattering amount of each light path corresponding to the pixel point to obtain the coloring information of the virtual main light source corresponding to the pixel point, and the coloring information is used to render the hair of the virtual image.
  • a computer device includes a memory and a processor, wherein the memory stores computer-readable instructions, and the processor implements the following steps when executing the computer-readable instructions: obtaining main light source information of a virtual main light source set in a virtual environment, wherein the main light source information includes a light source position and a light source direction that change with time;
  • a virtual main light source acts on the hair area of a virtual image; the longitudinal angle, azimuth angle, illumination information and color information of each light path of the virtual main light source corresponding to a pixel point in the hair area are obtained;
  • each light path of the virtual main light source includes a reflection light path, a transmission light path and a transflective light path; for each light path, a longitudinal scattering amount is determined based on the longitudinal angle and illumination information corresponding to the pixel point, an azimuth scattering amount is determined based on the corresponding azimuth angle, and a scattering amount of the light path corresponding to the pixel point is determined according to the longitudinal scattering amount, the azimuth scattering amount and the corresponding color information; and the scattering amount of each light path corresponding to the pixel point is fused to obtain the coloring information of the pixel point corresponding to the virtual main light source, the coloring information being used to render the hair of the virtual image.
  • a non-volatile computer-readable storage medium stores computer-readable instructions, which implement the following steps when executed by a processor: obtaining main light source information of a virtual main light source set in a virtual environment, the main light source information including a light source position and a light source direction that change with time, the virtual main light source acting on the hair area of a virtual image; obtaining the longitudinal angle, azimuth angle, illumination information and color information of each light path of the virtual main light source corresponding to a pixel point in the hair area; each light path of the virtual main light source includes a reflection light path, a transmission light path and a transflection light path; for each light path, determining a longitudinal scattering amount based on the longitudinal angle and illumination information corresponding to the pixel point, determining an azimuth scattering amount based on the corresponding azimuth angle, and determining a scattering amount corresponding to the light path of the pixel point according to the longitudinal scattering amount, the azimuth scattering amount and the corresponding color information; and fusing the scattering amount of each light path corresponding to the pixel point to obtain the coloring information of the pixel point corresponding to the virtual main light source, the coloring information being used to render the hair of the virtual image.
  • the computer program product includes computer-readable instructions, which implement the following steps when executed by a processor: determining a virtual main light source acting on a hair region of a virtual image; obtaining main light source information of a virtual main light source set in a virtual environment, the main light source information including a light source position and a light source direction that change with time, the virtual main light source acting on the hair region of the virtual image; obtaining longitudinal angles, azimuth angles, illumination information and color information of each light path corresponding to the virtual main light source of a pixel point in the hair region; each light path of the virtual main light source includes a reflection light path, a transmission light path and a transflection light path; for each light path, determining a longitudinal scattering amount based on the longitudinal angle and illumination information corresponding to the pixel point, determining an azimuth scattering amount based on the corresponding azimuth angle, and determining a scattering amount corresponding to the light path of the pixel point according to the longitudinal scattering amount, the azimuth scattering amount and the corresponding color information; and fusing the scattering amount of each light path corresponding to the pixel point to obtain the coloring information of the pixel point corresponding to the virtual main light source, the coloring information being used to render the hair of the virtual image.
  • FIG1 is a diagram of an application environment of a hair rendering method according to an embodiment
  • FIG2 is a schematic diagram of a flow chart of a hair rendering method in one embodiment
  • FIG3 is a schematic diagram of symbols of a scattering geometry in one embodiment
  • FIG4 is a schematic diagram of illumination information under different light paths in one embodiment
  • FIG5 is a schematic diagram of the geometric shape of a circular cross-section scattering in one embodiment
  • FIG6 is a schematic diagram of a hair dyeing interface in one embodiment
  • FIG7 is a schematic diagram of the hair effect when the virtual image faces away from the virtual main light source in one embodiment
  • FIG8 is a schematic diagram of the transmission effect of the hair of a virtual character in one embodiment
  • FIG9 is a schematic diagram of hair rendering effects under different light paths in one embodiment
  • FIG10 is a schematic diagram of various light effects of different hair colors in one embodiment
  • FIG11 is a schematic diagram showing a comparison between the rendering effect of the Kajiya-Kay lighting model and real hair in one embodiment
  • FIG12 is a schematic diagram of hair rendering effect in one embodiment
  • FIG13 is a schematic diagram of a secondary design of a virtual image in one embodiment
  • FIG14 is a schematic diagram of a virtual image's clothing that matches the color of the hair in one embodiment
  • FIG15 is a block diagram of a hair rendering device according to an embodiment
  • FIG. 16 is a diagram showing the internal structure of a computer device in one embodiment.
  • the tangent direction refers to the direction of the hair.
  • in some embodiments, the tangent direction is the direction from the hair tip to the hair root.
  • in other embodiments, the tangent direction is the direction from the hair root to the hair tip.
  • Scattering An optical phenomenon in which light is scattered in all directions due to the inhomogeneity of the propagation medium.
  • Reflection An optical phenomenon in which light changes its direction at the interface between two substances and returns to the original substance. Light will be reflected when it encounters the surface of water, glass, and many other objects. The phenomenon of changing the direction of light at the interface between two substances and returning to the original substance is called reflection of light.
  • Refraction An optical phenomenon. When light is incident obliquely from one transparent medium to another transparent medium, the propagation direction generally changes. This phenomenon is called refraction of light.
  • Transmission An optical phenomenon. When light is incident on the surface of a transparent or translucent material, part of it is reflected, part of it is absorbed, and part of it can be transmitted. At this time, transmission is the phenomenon of the incident light refracting through the object.
  • the object being transmitted is a transparent or translucent body, such as glass, color filter, etc.
  • Illumination model A computer model that simulates the physical process of light illumination in nature based on the relevant laws of optics.
  • the illumination models in computer graphics are divided into local illumination models and global illumination models.
  • the local illumination model ignores the effect of the surrounding environment on the object and only considers the direct illumination effect of the light source on the surface of the object. This is only an ideal situation, and the result obtained is somewhat different from the real situation in nature.
  • the global illumination model takes into account the influence of the surrounding environment on the surface of the scene.
  • Light source An object that emits electromagnetic waves within a certain wavelength range (including visible light and invisible light such as ultraviolet light, infrared light and X-rays). It usually refers to an object that can emit visible light. Light sources are divided into natural light sources (i.e. natural light sources, such as the sun) and artificial light sources (such as electric lights).
  • Light path The path along which light travels. In this application, it simulates the path along which light travels on hair.
  • R Reflection, the reflection light path; in the present application, a light path that simulates the reflection of light on the surface of the hair.
  • TT Transmission-Transmission, the transmission light path.
  • TRT Transmission-Reflection-Transmission, the transflection light path.
  • Longitudinal angle The angle between a vector and its projection onto the normal plane of the hair.
  • Azimuthal angle The angle between a vector projected on the normal plane and a reference vector on the same plane.
  • Anisotropy refers to the fact that all or part of the chemical, physical and other properties of a substance change with the change of direction, showing different properties in different directions. Anisotropy is a common property in materials and media, and it varies greatly in scale. From crystals to various materials in daily life to the earth's media, all have anisotropy. It is worth noting that anisotropy and inhomogeneity are descriptions of matter from two different perspectives and are not the same.
  • the hair in the process of rendering mobile games, in order to color the hair of the virtual character, the hair is modeled into an opaque cylinder and colored by the reflection principle.
  • This coloring method is implemented by the Kajiya-Kay lighting model. Since the model is not based on the structure of real hair, the rendered hair looks stiff and the lit areas look unnaturally shiny. The traditional Kajiya-Kay lighting model therefore cannot produce a realistic hair rendering effect.
  • the hair rendering method provided by the embodiment of the present application obtains the main light source information of the virtual main light source set in the virtual environment, the main light source information includes the light source position and light source direction that change with time, the virtual main light source acts on the virtual image's hair area, and the light path of the virtual main light source includes a reflection light path, a transmission light path, and a transflection light path.
  • the longitudinal scattering amount is determined based on the corresponding longitudinal angle and illumination information, and the scattering condition of the pixel point in the longitudinal scattering process can be accurately estimated.
  • the azimuth scattering amount can be directly determined based on the corresponding azimuth angle, thereby accurately estimating the scattering condition of the pixel point in the azimuth scattering process.
  • according to the longitudinal scattering amount, the azimuth scattering amount and the corresponding color information, the scattering amount of the light path corresponding to the pixel point is determined.
  • the coloring information of the pixel corresponding to the virtual main light source is obtained, thereby achieving realistic and efficient rendering of the virtual character's hair and improving the hair rendering effect.
  • the hair rendering method provided in the embodiment of the present application can be applied in the application environment as shown in FIG. 1 .
  • the terminal 102 communicates with the server 104 through a network.
  • the data storage system can store data that the server 104 needs to process.
  • the data storage system can be integrated on the server 104, or it can be placed on the cloud or other servers.
  • Both the terminal 102 and the server 104 can be used alone to execute the hair rendering method provided in the embodiment of the present application.
  • the terminal 102 and the server 104 can also be used in conjunction to execute the hair rendering method provided in the embodiment of the present application.
  • a client can be run in the terminal 102
  • the server 104 is a background server that can provide computing services and storage services for the client.
  • the server 104 obtains main light source information of a virtual main light source set in a virtual environment, the main light source information includes a light source position and a light source direction that change with time, and the virtual main light source acts on the hair area of the virtual image.
  • the server 104 obtains the longitudinal angle, azimuth angle, illumination information, and color information of each light path of the virtual main light source corresponding to the pixel point in the hair area, and each light path of the virtual main light source includes a reflection light path, a transmission light path, and a transflective light path.
  • the server 104 determines the longitudinal scattering amount based on the longitudinal angle and illumination information corresponding to the pixel point, determines the azimuth scattering amount based on the corresponding azimuth angle, and determines the scattering amount of the light path corresponding to the pixel point according to the longitudinal scattering amount, the azimuth scattering amount, and the corresponding color information.
  • the server 104 fuses the scattering amount of each light path corresponding to the pixel point to obtain the coloring information of the pixel point corresponding to the virtual main light source, and the coloring information is used to render the hair of the virtual image.
  • the server 104 can also generate hair rendering data based on the coloring information.
  • the server 104 sends the rendering data to the terminal 102, and the terminal 102 renders and displays the hair of the virtual image according to the rendering data through the running client.
  • the server 104 can also send the rendering data to the terminal 102 in advance, and the terminal 102 stores it locally.
  • the hair of the virtual image is rendered and displayed according to the locally stored rendering data.
  • the terminal 102 needs to render the virtual image in the game screen, needs to provide the hair parts of the virtual image, or needs to generate the virtual image, the hair of the virtual image can be rendered and displayed according to the locally stored rendering data.
  • the client running on the terminal 102 may be a client having a function of displaying a virtual image, such as a game application, a video application, a social application, an instant messaging application, a navigation application, a music application, a shopping application, an electronic map application, a browser, etc.
  • the client may be an independent application or a sub-application integrated in a certain client (for example, a social client and a travel client, etc.), which is not limited here.
  • the terminal 102 may be, but is not limited to, various personal computers, laptops, smart phones, tablet computers, IoT devices, and portable wearable devices.
  • the IoT devices may be smart TVs, smart car-mounted devices, etc.
  • the portable wearable devices may be smart watches, smart bracelets, head-mounted devices, etc.
  • the server 104 may be implemented as an independent server or a server cluster consisting of multiple servers.
  • a hair rendering method is provided, and the method is applied to a computer device (such as the terminal 102 or the server 104 in FIG. 1 ), comprising the following steps:
  • Step 202 obtaining main light source information of a virtual main light source set in the virtual environment, the main light source information including a light source position and a light source direction that change with time, and the virtual main light source acts on the hair area of the virtual image.
  • the virtual environment is a virtual environment generated by a computer, which can be a three-dimensional environment that simulates real events, a completely fictitious three-dimensional environment, or a semi-fictitious and semi-real three-dimensional scene.
  • the virtual main light source is a light source mainly used for emitting light in a virtual environment.
  • the main light source is a light source that simulates the principle of sunlight. It can be understood that the light emitted by the virtual main light source is parallel light. Therefore, the virtual main light source can be regarded as sunlight in a virtual environment.
  • there is a light source in the virtual environment namely the virtual main light source, which illuminates the virtual environment through the virtual main light source.
  • there can be multiple light sources in the virtual environment such as virtual main light sources, street lights, vehicle lights, etc.
  • the main light source information is used to simulate the virtual main light source in the virtual environment.
  • the position of the virtual main light source is set by simulating the position of the sun at a certain moment.
  • the position of the virtual main light source can be the east of the virtual environment.
  • the position of the virtual main light source can be determined in combination with the time set in the virtual environment. For example, when the time set in the virtual environment is morning, or the set time period is morning, the virtual main light source is set in the east of the virtual environment. When the time set in the virtual environment is noon, the virtual main light source is set in the south of the virtual environment. When the time set in the virtual environment is evening, the virtual main light source is set in the west of the virtual environment.
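As a toy illustration of a light source position and direction that change with the time set in the virtual environment (the east/south/west sweep follows the example above; the elevation curve and axis conventions are invented for this sketch):

```python
import math

def sun_direction(hours):
    """Approximate direction of the virtual main light source for an
    in-game hour (0-24): east at 6h, high in the south at 12h, west at
    18h. Returns a unit vector with x = east, y = up, z = south."""
    t = min(max((hours - 6.0) / 12.0, 0.0), 1.0)     # 0 at sunrise, 1 at sunset
    azimuth = math.pi * t                            # 0 = east, pi/2 = south, pi = west
    elevation = math.sin(math.pi * t) * math.radians(70.0)
    return (math.cos(azimuth) * math.cos(elevation),
            math.sin(elevation),
            math.sin(azimuth) * math.cos(elevation))
```

The returned vector can drive both the light source position (placed far away along it) and the parallel-light direction.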
  • the light source direction refers to the direction in which the virtual main light source emits light. For example, the light source direction is the direction facing the face of the virtual image.
  • the light source direction of the virtual main light source is also set by simulating the light direction of the sun at a certain moment.
  • a virtual image is a non-real, software-generated three-dimensional model used in a virtual environment.
  • a virtual image can be a virtual character in a game.
  • a virtual image can be a virtual character in an animation or movie.
  • the hair of the virtual object needs to be rendered.
  • the present application can also be applied to virtual animals.
  • that is, the embodiment of the present application can render an animal's fur by treating the fur as the hair of the virtual object.
  • the virtual image is displayed through pixels; the area occupied by the pixels that display the avatar's hair is regarded as the hair area of the avatar.
  • the computer device obtains main light source information of a virtual main light source set in the virtual environment, the main light source information including a light source position and a light source direction that change with time, and the computer device simulates the virtual main light source in real time in the virtual environment according to the light source position and the light source direction.
  • the computer device obtains the main light source information at the current moment, and simulates a virtual main light source in the virtual environment according to the main light source information.
  • the virtual main light source acts on the hair area of the virtual image.
  • Step 204 obtaining the longitudinal angle, azimuth angle, illumination information and color information of each light path of the virtual main light source corresponding to the pixel point in the hair area; each light path of the virtual main light source includes a reflection light path, a transmission light path and a transflective light path.
  • the longitudinal angle is the angle between a vector and the projection vector of the vector projected on the normal plane of the hair.
  • the azimuth angle is the angle between the vector projected on the normal plane and a reference vector on the normal plane.
  • the normal plane refers to the plane passing through the tangent point of the space curve and perpendicular to the tangent, that is, the plane perpendicular to the tangent at the point.
  • Figure 3 is a schematic diagram of the symbols of the scattering geometry.
  • the normal plane of the hair is the plane formed by the w axis and the v axis
  • u is the tangent vector of the hair
  • the direction is from the root to the tip of the hair.
  • the tangent vector is the tangent vector of the curve at a point, which can be understood as the vector along the tangent direction of the curve at that point.
  • Vector v and vector w form a right-handed orthogonal basis, and the vw plane is used as the normal plane.
  • ω i is the vector in the illumination direction
  • ω r is the vector in the scattering direction.
  • θ i and θ r are both longitudinal angles
  • θ i is the angle between ω i and the vector projected by ω i on the normal plane, that is, the inclination angle between the illumination direction and the normal plane, which can be understood as the incident longitudinal angle
  • θ r is the angle between ω r and the vector projected by ω r on the normal plane, that is, the inclination angle between the scattering direction and the normal plane, which can be understood as the scattering longitudinal angle.
  • φ i and φ r are both azimuth angles.
  • φ i is the angle between the vector projected by ω i on the normal plane and vector v, which can be understood as the incident azimuth angle
  • φ r is the angle between the vector projected by ω r on the normal plane and vector v, which can be understood as the scattering azimuth angle.
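Given the frame of Figure 3 (tangent u, with v and w spanning the normal plane), the longitudinal and azimuthal angles of a direction vector can be computed as below. This is a minimal sketch under assumed conventions: θ is measured from the normal plane and φ from v, matching the definitions above; the sign conventions are otherwise arbitrary.

```python
import math

def scattering_angles(omega, u, v, w):
    """Return (theta, phi) for a unit direction omega in the hair's
    orthonormal frame: u = tangent (root to tip), v and w span the
    normal plane. theta is the angle between omega and its projection
    on the normal plane; phi is the azimuth of that projection,
    measured from v."""
    du = sum(a * b for a, b in zip(omega, u))
    dv = sum(a * b for a, b in zip(omega, v))
    dw = sum(a * b for a, b in zip(omega, w))
    theta = math.asin(max(-1.0, min(1.0, du)))  # inclination from the normal plane
    phi = math.atan2(dw, dv)                    # azimuth within the v-w plane
    return theta, phi
```

Applying this once to the illumination direction and once to the scattering direction yields (θ i, φ i) and (θ r, φ r).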
  • the illumination information is used to reflect the light perception of the pixel, that is, the light that the human eye can perceive.
  • the illumination information includes two parameters: the offset mean and the variance width term.
  • the offset mean reflects the position of the halo on the hair.
  • the halo can be understood as the highlight of the hair, representing the luster of the hair.
  • the offset mean mainly affects the illumination position, and the position of the halo on the hair can be changed by using different values of the offset mean.
  • the variance width term reflects the width of the halo on the pixel, that is, the width of the illumination. The larger the value of the variance width term, the wider the halo; and the wider the halo, the dimmer the scattered brightness and the rougher the hair looks.
  • Figure 4 is a schematic diagram of illumination information under different light paths; it illustrates the illumination information of the reflection light path and the transflection light path.
  • Line 1 in Figure 4 is the offset mean of the reflection light path
  • box 1 is the variance width term of the reflection light path
  • line 2 is the offset mean of the transflection light path
  • box 2 is the variance width term of the transflection light path.
  • the offset mean affects the position of the scattered light
  • the variance width term affects the scattering range.
  • Color information is used to reflect the color of the pixel.
  • the color information includes the chromaticity information, brightness information and saturation information of the pixel.
  • the chromaticity information reflects the hue of the color of the pixel.
  • the brightness information reflects the brightness of the color of the pixel, and the saturation information reflects the vividness of the color of the pixel.
  • the optical path of the virtual main light source at the pixel includes a reflection optical path, a transmission optical path and a transflective optical path.
  • FIG5 is a schematic diagram of the scattering geometry of a circular cross-section.
  • the incident light strikes the circular cross-section with an incident energy dh and an incident angle γ i, and the height difference between the incident light and the center of the circle is h.
  • for the reflection light path R, the outgoing light is emitted with an outgoing energy dΦ R and an outgoing angle γ i.
  • the reflected optical path does not involve propagation inside the hair, so the number of segments P of the path inside the hair is 0.
  • the number of segments of the internal path refers to the number of times the optical path changes inside the hair.
  • the transmission light path TT is the path along which light enters the circular cross-section and passes out through the other side.
  • the light enters the hair at an incident angle of γ i, is transmitted inside at a refraction angle γ t, and then exits with an exit energy dΦ TT and an exit angle γ i.
  • the number of segments P of the internal path is 1.
  • the transflection light path TRT is the path along which light enters the circular cross-section, is reflected once off its inner surface, and then exits through the cross-section.
  • the light enters the hair at an incident angle of γ i, is transmitted inside at a refraction angle of γ t, and after one internal reflection at an angle of γ t, is emitted with an emission energy dΦ TRT and an emission angle of γ i.
  • the number of segments P of the internal path is 2.
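The three light paths differ only in how many internal segments they traverse, and the refraction angle inside the fibre follows Snell's law. A minimal sketch (the refractive index value 1.55 is an assumption, a figure commonly quoted for human hair; it is not taken from this text):

```python
import math

HAIR_IOR = 1.55  # assumed refractive index of a hair fibre (not given in the text)

def refraction_angle(angle_in):
    """Snell's law at the fibre surface: sin(angle_in) = HAIR_IOR * sin(angle_out).
    Returns the refraction angle inside the hair, in radians."""
    return math.asin(math.sin(angle_in) / HAIR_IOR)

# Number of internal path segments P for each light path, as described above.
INTERNAL_SEGMENTS = {"R": 0, "TT": 1, "TRT": 2}
```

Since HAIR_IOR > 1, the refracted ray always bends toward the fibre axis, so the refraction angle is smaller than the incident angle.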
  • the computer device obtains the longitudinal angle, azimuth angle and color information of each light path of the virtual main light source corresponding to the pixel point in the hair area.
  • the computer device obtains the input lighting information of each light path.
  • the computer device determines the lighting information of each light path based on at least one of the virtual image and the virtual scene.
  • for each optical path, the computer device calculates the longitudinal angle and azimuth angle of that optical path after the virtual main light source is incident at a preset incident angle.
  • the computer device obtains the input illumination information of each optical path in response to the input operation of the operator.
  • the computer device allows the offset mean and variance width terms to be set manually; the operator inputs the value of the offset mean within the offset value range and increases the value of the variance width term.
  • the computer device obtains the illumination information of each optical path corresponding to the virtual main light source of the pixel point in the hair area in response to the input operation of the operator.
  • the computer device obtains at least one saturation corresponding to the virtual image, and determines a saturation from the at least one saturation as the color information of the reflection optical path, that is, the color information of the reflection optical path does not involve brightness information and chromaticity information, but only involves saturation information. At this time, half of the saturation can also be used as the color information of the reflection optical path.
  • the input color information of the transflection optical path is obtained.
  • the computer device directly uses the color of the virtual image's hair as the color information of the transmission light path, or, in response to the operator's operation on the color information under the transmission light path, obtains the input color information of the transmission light path.
  • FIG. 6 is a schematic diagram of a hair dyeing interface.
  • the hair dyeing interface is provided with three selectable dyeing types, namely, a normal dyeing type, a free dyeing type, and a hairstyle dyeing type.
  • when the operator selects the hairstyle dyeing type, the operator can select the color of the hair and confirm the dyeing by consuming a preset number of virtual resources (for example, selecting hair color 1 consumes 10 virtual resources).
  • the computer device determines the color of the target hair in response to the operator's selection operation, and obtains the color information corresponding to the color of the target hair.
  • the dyeing interface is also provided with an icon for resetting color information, an icon for revoking the determined color information, an icon for restoring the revoked color information, an icon for caching color information, and an icon for downloading color information.
  • the bright part of the hair is made controllable by opening up the value range of the offset mean, and the effect of multiple scattering is approximated by increasing the value of the distribution width term (i.e., the variance width term).
  • the effect of multiple scattering can still be achieved by setting the open color information, ensuring that the color of the hair is natural and there are no obvious dark parts visually.
  • the setting of open color information can also flexibly customize various new hair colors and support a variety of stylized effects to meet the artistic requirements of fine arts.
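The role of the two tunable terms described above can be illustrated with a small numeric sketch. This is not the patented implementation; the `gaussian` helper and the width values are illustrative assumptions. It shows why widening the distribution-width term softens the highlight, which is what approximates the look of multiple scattering:

```python
import math

def gaussian(x, mean, width):
    """Normalized Gaussian: a common choice for the longitudinal highlight profile."""
    return math.exp(-((x - mean) ** 2) / (2 * width ** 2)) / (width * math.sqrt(2 * math.pi))

# Increasing the variance-width term lowers and spreads the highlight peak,
# approximating the softening effect of multiple scattering without simulating it.
narrow_peak = gaussian(0.0, 0.0, 0.05)   # sharp, bright highlight
wide_peak = gaussian(0.0, 0.0, 0.20)     # widened, softer highlight
assert wide_peak < narrow_peak
```

The offset mean plays the complementary role: shifting it moves the highlight along the hair without changing its width.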
  • the computer device calculates the longitudinal angle and azimuth angle corresponding to the corresponding optical path.
  • the computer device determines the lighting information according to the personal attributes of the virtual image, which include clothing styles and hairstyles.
  • the computer device determines the lighting information according to the environmental parameters of the virtual environment, which include the scene type of the virtual environment and the time of the virtual environment.
  • the computer device determines the lighting information corresponding to the virtual image in the virtual environment according to the motion trajectory of the virtual image in the virtual environment and the action of the virtual image.
  • the computer device determines the color information of the reflection optical path according to the half-saturation of the color of the hair of the virtual image, and obtains the input color information of the reflection optical path in response to the operation of the color information under the reflection optical path input by the operator.
  • the computer device directly uses the color of the hair of the virtual image as the color information of the transmission optical path, or obtains the input color information of the transmission optical path in response to the operation of the color information under the transmission optical path input by the operator.
  • Step 206: for each light path, determine the longitudinal scattering amount based on the longitudinal angle corresponding to the pixel point and the illumination information, determine the azimuthal scattering amount based on the corresponding azimuthal angle, and determine the scattering amount of the light path corresponding to the pixel point according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information.
  • a beam of light hitting the hair will be scattered onto a conical surface centered on the incident point and along the axis of the hair.
  • the scattered light on the conical surface is obtained by propagating each light path.
  • the complex spatial relationship can be split into two two-dimensional planes, namely the cross-section and vertical section of the hair (normal plane, such as the w-v plane in Figure 3).
  • the longitudinal scattering amount represents the distribution of the longitudinal scattering of the pixel points on the cross-section, which can be understood as the proportion of light emitted on the cross-section after the emission direction is given. Longitudinal scattering is scattering along the direction of hair growth.
  • the azimuthal scattering amount represents the distribution of the azimuthal scattering of the pixel points on the vertical section, that is, after the emission direction is given, the proportion of light emitted on the vertical section.
  • the computer device performs longitudinal scattering calculation according to the longitudinal angle and illumination information corresponding to the corresponding light path to obtain the longitudinal scattering amount of the corresponding light path.
  • the computer device performs azimuth scattering calculation according to the azimuth angle corresponding to the corresponding light path to obtain the azimuth scattering amount of the corresponding light path.
  • the computer device determines the scattering amount of the light path corresponding to the pixel point according to the longitudinal scattering amount, the azimuth scattering amount and the corresponding color information.
  • the illumination information of the reflected light path includes longitudinal illumination information.
  • the longitudinal scattering amount is determined based on the longitudinal angle corresponding to the pixel point and the illumination information
  • the azimuthal scattering amount is determined based on the corresponding azimuthal angle
  • the scattering amount of the light path corresponding to the pixel point is determined according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information, including: for the reflected light path, the longitudinal scattering amount is determined based on the longitudinal angle corresponding to the pixel point and the longitudinal illumination information, the azimuthal scattering amount is determined based on the cosine value of the corresponding azimuthal angle, and the scattering amount of the reflected light path corresponding to the pixel point is determined according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information.
  • the longitudinal illumination information of the reflected light path represents the illumination information in the longitudinal scattering process when propagating through the reflected light path.
  • the computer device obtains the longitudinal scattering amount through Gaussian calculation based on the longitudinal angle and longitudinal illumination information of the reflected light path.
  • the computer device determines the cosine value of the azimuth angle corresponding to the pixel point based on the azimuth angle of the reflected light path, and determines the azimuth scattering amount according to the cosine value.
  • the computer device determines the scattering amount of the reflected light path corresponding to the pixel point based on the longitudinal scattering amount, the azimuth scattering amount, and the corresponding color information.
  • the longitudinal angle of the reflected light path includes an incident longitudinal angle and a scattered longitudinal angle.
  • the computer device calculates the half angle of the incident longitudinal angle and the scattered longitudinal angle of the reflected light path, and obtains the longitudinal scattering amount through Gaussian calculation based on the half angle and the longitudinal illumination information.
  • the computer device determines the cosine value of the corresponding azimuth angle according to the azimuth angle of the reflected light path, and determines the azimuth scattering amount according to the cosine value.
  • the longitudinal scattering amount is determined based on the corresponding longitudinal angle and the longitudinal illumination information, so that the proportion of the longitudinal scattered light to the incident light after passing through the reflected light path can be accurately reflected.
  • the azimuthal scattering amount is determined based on the cosine value of the corresponding azimuthal angle, and no additional loop iteration is required to calculate the azimuthal scattering amount, which simplifies the calculation steps of the azimuthal scattering amount. In this way, according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information, the scattering amount of the reflected light path with color expression can be accurately and quickly obtained.
  • the longitudinal angle includes an incident longitudinal angle and a scattered longitudinal angle;
  • the method comprises: calculating the average difference between the longitudinal scattering angle and the longitudinal incident angle of the reflected light path; calculating the product of the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information; and calculating the scattering amount of the reflected light path corresponding to the pixel point according to the cosine value of the average difference and the product, wherein the scattering amount of the reflected light path is proportional to the product and inversely proportional to the square of the cosine value of the average difference.
  • the computer device calculates the average difference between the incident longitudinal angle and the scattered longitudinal angle of the reflected light path, and calculates the product of the longitudinal scattering amount, the azimuthal scattering amount and the color information of the reflected light path.
  • the computer device calculates the square of the cosine value of the average difference, and uses the square of the cosine value of the average difference as the denominator and the product as the numerator to obtain the scattering amount of the reflected light path corresponding to the pixel point.
  • calculating the average difference between the incident longitudinal angle and the scattered longitudinal angle of the reflected light path includes: calculating the difference obtained by subtracting the incident longitudinal angle from the scattered longitudinal angle of the reflected light path, and determining half of the difference as the average difference.
  • the scattering amount SR of the reflected light path corresponding to the pixel point is calculated by the following formula: SR = MR(θh) · NR · A1 / cos²θd, where MR(θh) is the longitudinal scattering amount, NR is the azimuthal scattering amount, A1 is the color information, cos θd is the cosine value of the average difference, MR(θh) · NR · A1 is the product, and cos²θd is the square of the cosine value of the average difference.
  • the correction of the solid angle (i.e., the scattered beam) projection is achieved by dividing the product of the longitudinal scattering amount, the azimuthal scattering amount, and the color information by the square of the cosine value of the average difference.
  • the scattering information with color expression can be quickly and accurately obtained, and through the cosine value and product of the mean difference, the proportion of the scattered light with color expression to the incident light can be directly obtained, that is, the scattering amount of the reflected light path with rich color expression is obtained, which simplifies the hair color adjustment steps.
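The reflected-path computation described above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes a normalized Gaussian for the longitudinal term MR (with the operator-tunable offset mean and width the text describes) and a plain cosine for the azimuthal term NR; the function names and default values are illustrative:

```python
import math

def gaussian(x, mean, width):
    """Normalized Gaussian used as the longitudinal scattering term MR."""
    return math.exp(-((x - mean) ** 2) / (2 * width ** 2)) / (width * math.sqrt(2 * math.pi))

def scattering_R(theta_i, theta_r, phi_i, phi_r, color, mean=0.0, width=0.1):
    """Scattering amount of the reflection (R) path for one pixel.

    theta_*: longitudinal angles (radians); phi_*: azimuth angles (radians).
    `mean` is the offset mean and `width` the variance-width term.
    """
    theta_h = 0.5 * (theta_i + theta_r)      # half angle of the longitudinal angles
    theta_d = 0.5 * (theta_r - theta_i)      # average difference of the longitudinal angles
    M_R = gaussian(theta_h, mean, width)     # longitudinal scattering amount
    N_R = math.cos(phi_r - phi_i)            # azimuthal scattering amount (single cosine)
    # Proportional to the product, inversely proportional to cos^2 of the average difference.
    return M_R * N_R * color / math.cos(theta_d) ** 2
```

Because NR is a single cosine, no loop iteration is needed for the azimuthal term, matching the cost argument made in the text.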
  • the azimuth angle includes an incident azimuth angle and a scattering azimuth angle
  • the azimuth scattering amount is determined based on the cosine value of the corresponding azimuth angle, including: calculating the difference between the scattering azimuth angle and the incident azimuth angle; and using the cosine value of the difference as the azimuth scattering amount of the reflected light path corresponding to the pixel point.
  • the computer device calculates the difference between the scattering azimuth and the incident azimuth, and determines the cosine value of the difference.
  • the computer device directly uses the cosine value of the difference as the azimuth scattering amount of the reflected light path corresponding to the pixel point.
  • the computer device obtains the incident azimuth angle φi and the scattering azimuth angle φr of the reflected light path, and calculates the difference Δφ between the scattering azimuth angle φr and the incident azimuth angle φi.
  • the difference Δφ can be obtained by subtracting the scattering azimuth angle φr from the incident azimuth angle φi, or by subtracting the incident azimuth angle φi from the scattering azimuth angle φr; the order is not specifically limited, since the cosine of the difference is the same either way.
  • the azimuthal scattering amount is directly determined by the cosine function, which reduces the influence of the azimuthal scattering amount on the scattering amount of the reflected light path. No additional loop iteration is required to calculate the azimuthal scattering amount, which greatly simplifies the calculation steps of the azimuthal scattering amount. Thus, the cost of subsequent hair rendering of the virtual image is reduced, and popularization on mobile terminals is achieved.
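The claim that the subtraction order does not matter follows from the cosine being an even function; a two-line sketch (illustrative, not the patented code) makes this concrete:

```python
import math

def azimuthal_scattering_R(phi_i, phi_r):
    """Azimuthal scattering amount of the reflection path: a single cosine,
    so no loop iteration is required and either subtraction order works."""
    return math.cos(phi_r - phi_i)
```

For example, `azimuthal_scattering_R(0.2, 0.5)` equals `azimuthal_scattering_R(0.5, 0.2)` because cos(x) = cos(-x).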
  • the light path of the virtual main light source also includes a transmission light path
  • the illumination information of the transmission light path includes longitudinal illumination information and azimuthal illumination information
  • a longitudinal scattering amount is determined based on a corresponding longitudinal angle and illumination information
  • an azimuthal scattering amount is determined based on an azimuthal angle corresponding to a pixel point
  • a scattering amount of the light path corresponding to the pixel point is determined according to the longitudinal scattering amount, the azimuthal scattering amount, and the corresponding color information, including: for the transmission light path, a longitudinal scattering amount is determined based on a corresponding longitudinal angle of the pixel point and longitudinal illumination information, and an azimuthal scattering amount is determined based on a corresponding azimuthal angle and azimuthal illumination information; and
  • the longitudinal scattering amount, azimuthal scattering amount and corresponding color information determine the scattering amount of the transmission light path corresponding to the pixel point.
  • the longitudinal illumination information of the transmission light path represents the illumination information in the longitudinal scattering process when propagating through the transmission light path
  • the azimuthal illumination information of the transmission light path represents the illumination information in the azimuthal scattering process when propagating through the transmission light path
  • the computer device obtains the longitudinal scattering amount through Gaussian calculation based on the longitudinal angle and longitudinal illumination information of the transmission light path, and obtains the azimuthal scattering amount through Gaussian calculation based on the azimuthal angle and azimuthal illumination information of the transmission light path.
  • the computer device determines the scattering amount of the transmission light path corresponding to the pixel point according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information.
  • the longitudinal angle of the transmission light path includes the incident longitudinal angle and the scattered longitudinal angle
  • the azimuth angle of the transmission light path includes the incident azimuth angle and the scattered azimuth angle.
  • the computer device calculates the half angle of the incident longitudinal angle and the scattered longitudinal angle of the transmission light path, and obtains the longitudinal scattering amount through Gaussian calculation based on the half angle and the longitudinal illumination information.
  • the computer device calculates the difference between the incident azimuth angle and the scattered azimuth angle of the transmission light path, and obtains the azimuth scattering amount through Gaussian calculation based on the difference and the azimuth illumination information.
  • the computer device determines the scattering amount of the transmission light path corresponding to the pixel point based on the longitudinal scattering amount, the azimuth scattering amount, and the corresponding color information.
  • if the incident longitudinal angles of the optical paths are the same and the scattered longitudinal angles are the same, then the half angles of the optical paths are the same, and the average differences between the scattered longitudinal angles and the incident longitudinal angles are the same. Likewise, if the incident azimuth angles of the optical paths are the same and the scattered azimuth angles are the same, then the differences between the scattered azimuth angles and the incident azimuth angles are the same.
  • the scattering amount of the transmission light path is small, that is, the influence of the transmission light path on the hair coloring is small.
  • the scattering amount of the transmission light path of the pixel points is large, so that the light transmission effect can be presented at the edge of the hair.
  • Figure 7 is a schematic diagram of the hair effect when the virtual image faces away from the virtual main light source; the hair at the edge of the hair area in Figure 7 shows a clearly visible light transmission effect.
  • the computer device obtains the incident azimuth angle φi and the scattering azimuth angle φr of the transmission light path, and calculates the difference Δφ between the incident azimuth angle φi and the scattering azimuth angle φr.
  • the computer device determines the scattering amount of the transmission light path corresponding to the pixel point according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information.
  • the longitudinal scattering amount is determined based on the corresponding longitudinal angle and longitudinal illumination information, so that the proportion of the longitudinal scattered light to the incident light after passing through the transmission light path can be accurately reflected, and the azimuthal scattering amount is determined based on the corresponding azimuthal angle and azimuthal illumination information, without the need for additional cyclic iteration to calculate the azimuthal scattering amount, thus simplifying the calculation steps of the azimuthal scattering amount.
  • the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information the scattering amount of the transmission light path with color expression can be accurately and quickly obtained.
  • the scattering amount of the transmission light path corresponding to the pixel is determined according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information, including: obtaining a shadow map corresponding to the hair area, the shadow map including the shadow degree of each pixel, the shadow degree indicating whether there is a shadow at the pixel; and determining the scattering amount of the transmission light path corresponding to the pixel according to the product of the shadow degree of the pixel, the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information.
  • the computer device obtains a shadow map corresponding to the hair area of the avatar. For each pixel in the hair area, the computer device determines the scattering amount of the transmission light path corresponding to the pixel based on the product of the shadow degree, longitudinal scattering amount, azimuthal scattering amount and corresponding color information of the pixel.
  • when there is no shadow at the pixel point, the shadow degree of the pixel point is a positive integer; when there is a shadow at the pixel point, the shadow degree of the pixel point is regarded as zero, so that the scattering amount of the transmission light path corresponding to the pixel point is 0.
  • the shadow map can be used to superimpose the shadow of the virtual image to create the effect of light blocking in the thick hair.
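The shadow-modulated transmission computation can be sketched as below. This is an illustrative approximation, not the patented implementation: both the longitudinal and azimuthal terms are assumed Gaussian (the azimuthal one peaking near a phase difference of π, i.e., light passing straight through the fiber), and the shadow degree multiplies the product so that shadowed pixels contribute nothing:

```python
import math

def gaussian(x, mean, width):
    """Normalized Gaussian used for both scattering terms of the TT path."""
    return math.exp(-((x - mean) ** 2) / (2 * width ** 2)) / (width * math.sqrt(2 * math.pi))

def scattering_TT(theta_i, theta_r, phi_i, phi_r, color, shadow,
                  m_mean=0.0, m_width=0.05, n_mean=math.pi, n_width=0.3):
    """Scattering amount of the transmission (TT) path, modulated by the
    shadow degree sampled from the shadow map (0 where the pixel is shadowed).
    All default parameter values are illustrative assumptions."""
    theta_h = 0.5 * (theta_i + theta_r)
    theta_d = 0.5 * (theta_r - theta_i)
    M_TT = gaussian(theta_h, m_mean, m_width)        # longitudinal scattering amount
    N_TT = gaussian(phi_r - phi_i, n_mean, n_width)  # azimuthal scattering amount
    # Product of shadow degree, longitudinal term, azimuthal term, and color.
    return shadow * M_TT * N_TT * color / math.cos(theta_d) ** 2
```

With `shadow = 0` the result is exactly 0, reproducing the light-blocking effect in thick hair; at hair edges, where the shadow degree is positive, the transmission term produces the rim glow described for backlit hair.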
  • Figure 8 is a schematic diagram of the transmission effect of the hair of the virtual image. In order to show the effect of the transmission light path, the virtual image is set to face the virtual main light source.
  • the computer device determines the coloring information only through the scattering amount of the transmission light path through the debug mode (debugging tool).
  • the composite effect of the hair shadow is increased in combination with the shadow map.
  • the debug mode provides a function for viewing the color rendering of the hair; parameters can be customized, and the rendering effect determined by the scattering amount of at least one light path corresponding to at least one light source can be viewed.
  • in addition to the above-mentioned view of the hair rendering effect determined only by the transmitted light path, as shown in Figure 9, the rendering effect determined only by the reflected light path, the rendering effect determined only by the transflective light path, and the rendering effect after superposition of multiple light paths can also be viewed.
  • the rendering effect after superposition of multiple light paths can better reflect the layers and light perception of the hair, and the rendering is more realistic.
  • the longitudinal angle of the transmitted light path includes the incident longitudinal angle and the scattered longitudinal angle.
  • the average difference between the scattered longitudinal angle and the incident longitudinal angle of the transmitted light path is calculated.
  • the scattering amount of the transmitted light path corresponding to the pixel point is calculated based on the product corresponding to the transmitted light path and the square of the cosine value of the average difference.
  • the scattering amount of the transmitted light path is proportional to the product and inversely proportional to the square of the cosine value.
  • the scattering amount S TT of the transmission light path is as follows:
  • the shadow map corresponding to the hair area is obtained, which can accurately reflect whether each pixel has a shadow.
  • according to the product of the shadow degree, longitudinal scattering amount, azimuthal scattering amount and corresponding color information of the pixel, a scattering amount with color expression and capable of reflecting the shadow effect can be obtained.
  • the light path of the virtual main light source also includes a transflective light path
  • the illumination information of the transflective light path includes longitudinal illumination information
  • the longitudinal scattering amount is determined based on the corresponding longitudinal angle of the pixel point and the illumination information
  • the azimuthal scattering amount is determined based on the corresponding azimuthal angle
  • the scattering amount of the light path corresponding to the pixel point is determined according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information, including: for the transflective light path, the longitudinal scattering amount is determined based on the corresponding longitudinal angle and the longitudinal illumination information, the azimuthal scattering amount is determined based on the cosine value of the azimuthal angle corresponding to the pixel point, and the scattering amount of the transflective light path corresponding to the pixel point is determined according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information.
  • the longitudinal illumination information of the transflective light path represents the illumination information in the longitudinal scattering process.
  • the computer device obtains the longitudinal scattering amount through Gaussian calculation based on the longitudinal angle and longitudinal illumination information of the transflective light path.
  • the computer device determines the cosine value of the corresponding azimuth angle based on the azimuth angle of the transflective light path, and determines the azimuth scattering amount according to the cosine value.
  • the computer device determines the scattering amount of the transflective light path corresponding to the pixel point according to the longitudinal scattering amount, the azimuth scattering amount, and the corresponding color information.
  • the longitudinal angle of the transflective light path includes an incident longitudinal angle and a scattered longitudinal angle.
  • the computer device calculates the half angle of the incident longitudinal angle and the scattered longitudinal angle of the transflective light path, and obtains the longitudinal scattering amount through Gaussian calculation based on the half angle and the longitudinal illumination information.
  • the computer device determines the cosine value of the corresponding azimuth angle according to the azimuth angle of the transflective light path, and determines the azimuth scattering amount according to the cosine value.
  • the azimuth angle of the transflective light path includes an incident azimuth angle and a scattered azimuth angle.
  • the computer device calculates the difference between the scattered azimuth angle and the incident azimuth angle of the transflective light path, and determines the azimuth scattering amount of the transflective light path according to the cosine value of the difference corresponding to the transflective light path.
  • the longitudinal scattering amount is determined based on the corresponding longitudinal angle and longitudinal illumination information, so that the proportion of longitudinal scattered light to incident light after passing through the transflective light path can be accurately reflected, and the azimuthal scattering amount is determined based on the cosine value of the corresponding azimuthal angle, without the need for additional cyclic iteration to calculate the azimuthal scattering amount, thereby simplifying the calculation steps of the azimuthal scattering amount.
  • the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information the scattering amount of the transflective light path with color expression can be accurately and quickly obtained.
  • the longitudinal angle includes an incident longitudinal angle and a scattered longitudinal angle.
  • the scattering amount of the transflective light path corresponding to the pixel point is determined according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information, including: calculating the average difference between the scattering longitudinal angle of the transflective light path and the incident longitudinal angle; calculating the product of the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information; calculating the scattering amount of the transflective light path corresponding to the pixel point according to the cosine value of the average difference and the product, the scattering amount of the transflective light path is proportional to the product and inversely proportional to the square of the cosine value of the average difference.
  • the computer device calculates the average difference between the scattering longitudinal angle and the incident longitudinal angle of the transflective light path, and calculates the product of the longitudinal scattering amount, the azimuthal scattering amount, and the color information of the transflective light path.
  • the computer device calculates the square of the cosine value of the average difference, and uses the square of the cosine value of the average difference as the denominator and the product as the numerator to obtain the scattering amount of the transflective light path corresponding to the pixel point.
  • the scattering amount S TRT of the transflective light path corresponding to the pixel point is calculated by the following formula:
  • the scattering information with color expression can be quickly and accurately obtained, and through the cosine value and product of the average difference, the proportion of scattered light with color expression to the incident light can be directly obtained, that is, the scattering amount of the transflective light path with rich color expression is obtained, which simplifies the hair color adjustment steps.
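The TRT computation described above has the same structure as the reflection path: a Gaussian longitudinal term, a cosine azimuthal term, and the same cos²θd correction, differing only in its own color term and highlight placement. The sketch below is illustrative (the shifted default mean is an assumption, reflecting that the TRT highlight sits at a different longitudinal offset than R):

```python
import math

def gaussian(x, mean, width):
    """Normalized Gaussian used as the longitudinal scattering term of TRT."""
    return math.exp(-((x - mean) ** 2) / (2 * width ** 2)) / (width * math.sqrt(2 * math.pi))

def scattering_TRT(theta_i, theta_r, phi_i, phi_r, color, mean=0.2, width=0.15):
    """Scattering amount of the transflective (TRT) path for one pixel;
    structurally identical to the reflection path but with its own color
    information and longitudinal offset. Default values are illustrative."""
    theta_h = 0.5 * (theta_i + theta_r)      # half angle
    theta_d = 0.5 * (theta_r - theta_i)      # average difference
    M_TRT = gaussian(theta_h, mean, width)   # longitudinal scattering amount
    N_TRT = math.cos(phi_r - phi_i)          # azimuthal scattering amount (cosine)
    return M_TRT * N_TRT * color / math.cos(theta_d) ** 2
```

Reusing the cosine azimuthal term here keeps the TRT path as cheap as the reflection path, which is what makes the mobile-terminal cost argument hold for all three lobes.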
  • Step 208: the scattering amounts of the light paths corresponding to the pixel point are fused to obtain the coloring information of the virtual main light source corresponding to the pixel point, and the coloring information is used to render the hair of the virtual image.
  • the coloring information can be understood as the scattering amount of a pixel point, which can be the scattering amount of one light path or the scattering amount obtained by fusing the scattering amounts of multiple light paths.
  • the computer device fuses the scattering amounts of the reflected light path, the transmitted light path and the transflective light path corresponding to the same pixel point to obtain shading information of the virtual main light source corresponding to the pixel point, and the shading information is used to render the hair of the virtual image.
  • for example, for each pixel, the computer device superimposes the scattering amount of the reflected light path, the scattering amount of the transmitted light path, and the scattering amount of the transflective light path of the pixel to obtain a total scattering amount, and determines the total scattering amount as the shading information of the virtual main light source corresponding to the pixel.
  • the computer device determines the weight of each light path, and weights the scattering amount of the reflected light path, the scattering amount of the transmitted light path, and the scattering amount of the transflective light path of the pixel according to the weight of each light path to obtain a total scattering amount, and determines the total scattering amount as the shading information of the virtual main light source corresponding to the pixel.
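The two fusion variants just described, plain superposition and weighted superposition, can be sketched in one function (names and weights are illustrative, not from the patent):

```python
def shading_info(s_r, s_tt, s_trt, weights=(1.0, 1.0, 1.0)):
    """Fuse the per-path scattering amounts of one pixel into its shading
    information. Unit weights give the plain superposition; other weights
    give the weighted variant where each light path's contribution is scaled."""
    w_r, w_tt, w_trt = weights
    return w_r * s_r + w_tt * s_tt + w_trt * s_trt
```

For instance, `shading_info(1.0, 2.0, 3.0)` superimposes the three paths directly, while `shading_info(1.0, 2.0, 3.0, weights=(0.5, 1.0, 0.0))` down-weights the reflection path and drops the transflective path entirely, which is the kind of per-path control the debug mode exposes.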
  • the main light source information includes the light source position and light source direction that change with time, the virtual main light source acts on the virtual image's hair area, and the light path of the virtual main light source includes a reflection light path, a transmission light path, and a transflection light path; by obtaining the longitudinal angle, azimuth angle, illumination information, and color information of each light path of the virtual main light source corresponding to the pixel point in the hair area, for each light path, the longitudinal scattering amount is determined based on the corresponding longitudinal angle and illumination information, and the scattering of the pixel point in the longitudinal scattering process can be accurately estimated.
  • the azimuth scattering amount can be directly determined based on the corresponding azimuth angle, thereby accurately estimating the scattering of the pixel point in the azimuth scattering process.
  • based on the longitudinal scattering amount, the azimuth scattering amount, and the corresponding color information, the scattering amount of the light path corresponding to the pixel point is determined.
  • the coloring information of the pixel corresponding to the virtual main light source is obtained, thereby achieving realistic and efficient rendering of the virtual character's hair and improving the hair rendering effect.
  • the method further includes: obtaining backlight source information of a virtual backlight source set in a virtual environment, the backlight source information including a light source position and a lighting direction that change with time, the virtual backlight source being a virtual backlight source acting on a hair region of a virtual image, and the horizontal projection direction of the lighting direction of the virtual backlight source being opposite to the horizontal projection direction of the lighting direction of the virtual main light source; obtaining the longitudinal angle, lighting information and color information of each light path of the virtual backlight source corresponding to a pixel point in the hair region; each light path of the virtual backlight source includes a reflective light path and a transflective light path; for each light path corresponding to the virtual backlight source, determining a longitudinal scattering amount corresponding to the virtual backlight source based on a longitudinal angle corresponding to the pixel point and the lighting information, and determining a scattering amount of the light path corresponding to the pixel point according to the longitudinal scattering amount corresponding to the virtual backlight source and the corresponding color information;
  • the virtual backlight source is a light source arranged on the back of the virtual image, which is used to fill in the back of the virtual image.
  • the virtual backlight source is simulated in the virtual environment through the backlight source information.
  • the virtual backlight source may not change with time, that is, no matter how time changes, the position and light source direction of the virtual backlight source in the virtual environment are the same.
  • the computer device determines the lighting direction of the virtual backlight source according to the lighting direction of the virtual main light source, and the lighting direction of the virtual backlight source is opposite to the lighting direction of the virtual main light source.
  • the computer device determines the back of the virtual image as the light source position of the virtual backlight source, and simulates the virtual backlight source acting on the hair area of the virtual image in the simulation environment according to the light source position and lighting direction of the virtual backlight source, or the computer device uses the camera light source as the virtual backlight source.
  • the longitudinal scattering amount corresponding to the virtual backlight source is determined based on the corresponding longitudinal angle and the illumination information.
  • the computer device determines the scattering amount of the light path corresponding to the pixel point according to the product of the longitudinal scattering amount and the corresponding color information, and obtains the coloring information of the virtual backlight source corresponding to the pixel point by superimposing the scattering amounts of each light path corresponding to the same pixel point.
  • the computer device determines the azimuthal scattering amount based on the corresponding azimuth angle, and determines the scattering amount of the light path corresponding to the pixel point according to the product of the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information, and obtains the coloring information of the virtual backlight source corresponding to the pixel point by superimposing the scattering amounts of each light path corresponding to the pixel point.
  • the illumination information of the reflected light path corresponding to the virtual backlight source includes longitudinal illumination information.
  • the longitudinal scattering amount corresponding to the virtual backlight source is determined based on the longitudinal angle corresponding to the pixel point and the illumination information.
  • the scattering amount of the light path corresponding to the pixel point is determined, including: for the reflected light path of the virtual backlight source, according to the longitudinal angle corresponding to the pixel point and the longitudinal illumination information, the longitudinal scattering amount corresponding to the virtual backlight source is obtained by Gaussian calculation; according to the product of the longitudinal scattering amount corresponding to the virtual backlight source and the corresponding color information, the scattering amount of the reflected light path corresponding to the pixel point is determined.
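The "Gaussian calculation" of the longitudinal scattering amount mentioned above might be sketched as a Gaussian lobe evaluated at the longitudinal half-angle. The shift/width parameterization of the longitudinal illumination information and all names below are assumptions of this sketch:

```python
import math

# Longitudinal scattering term M for one light path: a Gaussian over the
# longitudinal half-angle, whose mean (shift) and standard deviation (width)
# come from the longitudinal illumination information of that light path.
def longitudinal_scattering(theta_i, theta_r, shift, width):
    """theta_i, theta_r: incident / scattering longitudinal angles (radians).
    shift, width: assumed longitudinal illumination parameters."""
    theta_h = (theta_i + theta_r) / 2.0  # longitudinal half-angle
    x = theta_h - shift
    return math.exp(-x * x / (2.0 * width * width)) / (
        width * math.sqrt(2.0 * math.pi))
```

The lobe peaks where the half-angle equals the shift, matching the intuition that each light path's highlight sits at a path-specific longitudinal offset.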
  • the longitudinal scattering amount is determined based on the corresponding longitudinal angle and the longitudinal illumination information, so that the proportion of the longitudinal scattered light to the incident light after passing through the reflection light path can be accurately reflected.
  • the azimuthal scattering amount is determined based on the cosine value of the corresponding azimuthal angle, and no additional loop iteration is required to calculate the azimuthal scattering amount, which simplifies the calculation steps of the azimuthal scattering amount. In this way, according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information, the scattering amount of the reflection light path with color expression can be accurately and quickly obtained.
  • the longitudinal angle includes an incident longitudinal angle and a scattering longitudinal angle.
  • the scattering amount of the reflected light path corresponding to the pixel point is determined according to the product of the longitudinal scattering amount corresponding to the virtual backlight source and the corresponding color information, including: calculating the average difference between the scattering longitudinal angle and the incident longitudinal angle of the reflected light path of the virtual backlight source; determining the scattering amount of the reflected light path corresponding to the pixel point according to the cosine value of the average difference and the product, the scattering amount of the reflected light path is proportional to the product, and inversely proportional to the square of the cosine value of the average difference.
  • the illumination information of the reflective light path includes longitudinal illumination information
  • the longitudinal angle includes an incident longitudinal angle and a scattering longitudinal angle.
  • the average difference between the scattering longitudinal angle and the incident longitudinal angle of the reflective light path is calculated; based on the cosine value of the average difference and the product, the scattering amount of the reflective light path corresponding to the pixel point is calculated, and the scattering amount of the reflective light path is proportional to the product and inversely proportional to the square of the cosine value of the average difference.
  • the step of determining the directional scattering amount of the reflected light path includes: determining the azimuthal scattering amount according to the azimuth angle corresponding to the reflected light path.
  • the scattering amount S_R′ of the reflected light path of the virtual backlight source is as follows: S_R′ = (M_R′ · A_1′) / cos²(θ_h′), where θ_h′ = (θ_i′ + θ_r′)/2;
  • M_R′ is the longitudinal scattering amount of the reflected light path of the virtual backlight source obtained by Gaussian calculation, and A_1′ is the color information corresponding to the reflected light path of the virtual backlight source.
  • the scattering information with color expression can be quickly and accurately obtained, and through the cosine value and product of the mean difference, the proportion of the scattered light with color expression to the incident light can be directly obtained, that is, the scattering amount of the reflected light path with rich color expression is obtained, which simplifies the hair color adjustment steps.
  • the light path of the virtual backlight source also includes a transflective light path
  • the illumination information of the transflective light path includes longitudinal illumination information
  • the longitudinal scattering amount corresponding to the virtual backlight source is determined based on the longitudinal angle corresponding to the pixel point and the illumination information, and the scattering amount of the light path corresponding to the pixel point is determined according to the longitudinal scattering amount corresponding to the virtual backlight source and the corresponding color information, including: for the transflective light path of the virtual backlight source, according to the longitudinal angle corresponding to the pixel point and the longitudinal illumination information, the longitudinal scattering amount corresponding to the virtual backlight source is obtained by Gaussian calculation; according to the product of the longitudinal scattering amount and the corresponding color information, the scattering amount of the transflective light path corresponding to the pixel point is determined.
  • the illumination information of the transflective light path includes longitudinal illumination information
  • the longitudinal angle includes an incident longitudinal angle and a scattering longitudinal angle.
  • the average difference between the scattering longitudinal angle and the incident longitudinal angle of the transflective light path is calculated; based on the cosine value of the average difference and the product, the scattering amount of the transflective light path corresponding to the pixel point is calculated, and the scattering amount of the transflective light path is proportional to the product and inversely proportional to the square of the cosine value of the average difference.
  • the step of determining the longitudinal scattering amount of the transflective light path includes: obtaining the longitudinal scattering amount by Gaussian calculation according to the longitudinal angle and longitudinal illumination information of the transflective light path.
  • the step of determining the directional scattering amount of the transflective light path includes: determining the azimuth scattering amount according to the azimuth angle corresponding to the transflective light path.
  • the corresponding color information A_3′ is obtained, and the scattering amount S_TRT′ of the transflective light path corresponding to the pixel point is calculated by the following formula: S_TRT′ = (M_TRT′ · A_3′) / cos²(θ_h′), where θ_h′ = (θ_i′ + θ_r′)/2 and M_TRT′ is the longitudinal scattering amount of the transflective light path.
  • the transmission light path can clearly reflect the contour effect of the front of the virtual image.
  • the virtual backlight is used to fill in the back of the virtual image, so there is no need to calculate the transmission light path.
  • the azimuth scattering corresponding to each light path is very weak and can be ignored.
  • the coloring information of the virtual backlight corresponding to the pixel point can be directly determined based on the product of the longitudinal scattering and the corresponding color information, which greatly optimizes the processing steps of the virtual backlight.
  • the scattering information with color expression can be quickly and accurately obtained, and through the cosine value and product of the mean difference, the proportion of the scattered light with color expression to the incident light can be directly obtained, that is, the scattering amount of the transflective light path with rich color expression is obtained, which simplifies the color adjustment step of the hair.
  • a virtual backlight source acting on the hair area of the virtual image is determined to fill in the back of the virtual image.
  • the longitudinal scattering amount is determined based on the corresponding longitudinal angle and illumination information, and the scattering amount of the virtual backlight source under each light path is obtained according to the longitudinal scattering amount and the corresponding color information.
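The simplified backlight evaluation summarized above (R and TRT paths only, no TT path, azimuthal term ignored, each path's scattering proportional to the product of the longitudinal scattering amount and the color information and inversely proportional to cos² of the longitudinal half-angle) might be sketched as follows. All function and parameter names are hypothetical:

```python
import math

# Scattering of one backlight path: S = M * A / cos^2(theta_h), with
# theta_h the average of incident and scattering longitudinal angles.
def backlight_path_scattering(m_long, color, theta_i, theta_r):
    theta_h = (theta_i + theta_r) / 2.0
    c = math.cos(theta_h)
    return tuple(m_long * a / (c * c) for a in color)

# Backlight shading: superimpose only the R and TRT paths (the TT path is
# skipped because the backlight merely fills in the back of the image).
def backlight_shading(m_r, color_r, m_trt, color_trt, theta_i, theta_r):
    s_r = backlight_path_scattering(m_r, color_r, theta_i, theta_r)
    s_trt = backlight_path_scattering(m_trt, color_trt, theta_i, theta_r)
    return tuple(a + b for a, b in zip(s_r, s_trt))
```

Dropping the TT path and the azimuthal term is exactly the optimization the passage describes, so the backlight costs roughly half of a main-light evaluation.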
  • the method also includes: obtaining light source information of a camera light source set in a virtual environment, the light source information including a light source position and a light source direction that change with time, the camera light source acting on a hair area of a virtual image; obtaining longitudinal angles, illumination information, and color information of each light path of the camera light source corresponding to a pixel point in the hair area; each light path of the camera light source includes a reflection light path and a transflective light path; for each light path corresponding to the camera light source, determining a longitudinal scattering amount corresponding to the camera light source based on a longitudinal angle corresponding to the pixel point and the illumination information, and determining a scattering amount of the light path corresponding to the pixel point according to the longitudinal scattering amount corresponding to the camera light source and the corresponding color information; fusing the scattering amount of each light path corresponding to the pixel point to obtain coloring information of the camera light source corresponding to the pixel point, and the coloring information is used to render the hair of the virtual image.
  • the camera light source is a point light source emitted from the camera position, which can be understood as a light source located at the position of the user's eye, and is used to supplement the anisotropic highlights.
  • the camera light source is simulated in the virtual environment through the light source information.
  • the camera light source may not change with time, that is, no matter how time changes, the position and direction of the camera light source in the virtual environment are the same.
  • the computer device simulates the camera light source acting on the hair area of the virtual image in the virtual environment according to the light source information of the camera light source. For each light path corresponding to the camera light source, the computer device determines the longitudinal scattering amount corresponding to the camera light source based on the corresponding longitudinal angle and the illumination information, and determines the scattering amount of the light path corresponding to the pixel point according to the product of the longitudinal scattering amount corresponding to the camera light source and the corresponding color information. The computer device superimposes the scattering amounts of each light path corresponding to the same pixel point to obtain the coloring information of the camera light source corresponding to the pixel point.
  • the computer device determines the azimuthal scattering amount based on the corresponding azimuth angle, and determines the scattering amount of the light path corresponding to the pixel point according to the product of the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information, and obtains the coloring information of the camera light source corresponding to the pixel point by superimposing the scattering amounts of each light path corresponding to the pixel point.
  • the illumination information of the reflected light path corresponding to the camera light source includes longitudinal illumination information.
  • the longitudinal scattering amount corresponding to the camera light source is determined based on the longitudinal angle corresponding to the pixel point and the illumination information.
  • the scattering amount of the light path corresponding to the pixel point is determined, including: for the reflected light path of the camera light source, according to the longitudinal angle corresponding to the pixel point and the longitudinal illumination information, the longitudinal scattering amount corresponding to the camera light source is obtained by Gaussian calculation; according to the product of the longitudinal scattering amount corresponding to the camera light source and the corresponding color information, the scattering amount of the reflected light path corresponding to the pixel point is determined.
  • the illumination information of the reflected light path includes longitudinal illumination information
  • the longitudinal angle includes an incident longitudinal angle and a scattered longitudinal angle.
  • the average difference between the scattered longitudinal angle and the incident longitudinal angle of the reflected light path is calculated; based on the cosine value of the average difference and the product, the scattering amount of the reflected light path corresponding to the pixel point is calculated, and the scattering amount of the reflected light path is proportional to the product and inversely proportional to the square of the cosine value of the average difference.
  • the step of determining the longitudinal scattering amount of the reflection light path includes: obtaining the longitudinal scattering amount by Gaussian calculation according to the longitudinal angle and longitudinal illumination information of the reflection light path.
  • the step of determining the directional scattering amount of the reflected light path includes: determining the azimuth scattering amount according to the azimuth angle corresponding to the reflected light path.
  • the illumination information of the transflective light path includes longitudinal illumination information
  • the longitudinal angle includes an incident longitudinal angle and a scattering longitudinal angle.
  • the average difference between the scattering longitudinal angle and the incident longitudinal angle of the transflective light path is calculated; based on the cosine value of the average difference and the product, the scattering amount of the transflective light path corresponding to the pixel point is calculated, and the scattering amount of the transflective light path is proportional to the product and inversely proportional to the square of the cosine value of the average difference.
  • the scattering amount S_R″ of the reflected light path of the camera light source is as follows: S_R″ = (M_R″ · A_1″) / cos²(θ_h″), where θ_h″ = (θ_i″ + θ_r″)/2;
  • M_R″ is the longitudinal scattering amount of the reflected light path of the camera light source obtained by Gaussian calculation, and A_1″ is the color information corresponding to the reflected light path of the camera light source.
  • the scattering information with color expression can be quickly and accurately obtained, and through the cosine value and product of the mean difference, the proportion of the scattered light with color expression to the incident light can be directly obtained, that is, the scattering amount of the reflected light path with rich color expression is obtained, which simplifies the hair color adjustment steps.
  • the optical path of the camera light source includes a transflective optical path
  • the illumination information of the transflective optical path includes longitudinal illumination information
  • the longitudinal scattering amount corresponding to the camera light source is determined based on the longitudinal angle corresponding to the pixel point and the illumination information
  • the scattering amount of the optical path corresponding to the pixel point is determined according to the longitudinal scattering amount corresponding to the camera light source and the corresponding color information, including: for the transflective optical path of the camera light source, according to the longitudinal angle corresponding to the pixel point and the longitudinal illumination information, the longitudinal scattering amount corresponding to the camera light source is obtained by Gaussian calculation; according to the longitudinal scattering amount corresponding to the camera light source and the corresponding color information, the scattering amount of the transflective optical path corresponding to the pixel point is determined.
  • the illumination information of the transflective light path includes longitudinal illumination information
  • the longitudinal angle includes an incident longitudinal angle and a scattering longitudinal angle.
  • the average difference between the scattering longitudinal angle and the incident longitudinal angle of the transflective light path is calculated; based on the cosine value of the average difference and the product, the scattering amount of the transflective light path corresponding to the camera light source is calculated, and the scattering amount of the transflective light path is proportional to the product and inversely proportional to the square of the cosine value of the average difference.
  • the step of determining the directional scattering amount of the transflective light path includes: determining the azimuthal scattering amount according to the azimuth angle corresponding to the transflective light path.
  • the corresponding color information A_3″ is obtained, and the scattering amount S_TRT″ of the transflective light path corresponding to the pixel point is calculated by the following formula: S_TRT″ = (M_TRT″ · A_3″) / cos²(θ_h″), where M_TRT″ is the longitudinal scattering amount of the transflective light path;
  • θ_h″ = (θ_i″ + θ_r″)/2.
  • the scattering information with color expression can be quickly and accurately obtained, and through the cosine value and product of the average difference, the proportion of the scattered light with color expression to the incident light can be directly obtained, that is, the scattering amount of the transflective light path with rich color expression is obtained, which simplifies the color adjustment step of the hair.
  • the camera light source acting on the hair area of the virtual image is determined to supplement the anisotropic highlights of the hair area of the virtual image.
  • the longitudinal scattering amount is determined based on the corresponding longitudinal angle and illumination information, and the scattering amount of the camera light source under each light path is obtained according to the longitudinal scattering amount and the corresponding color information.
  • the virtual light source acting on the same pixel also includes a virtual backlight source and a camera light source
  • the method also includes: fusing the shading information of the virtual main light source, the shading information of the virtual backlight source and the shading information of the camera light source corresponding to the same pixel to obtain the target shading information of the pixel, and the target shading information is used to render the virtual image's hair.
  • the computer device fuses the shading information of the virtual main light source, the shading information of the virtual backlight source and the shading information of the camera light source corresponding to the same pixel point to obtain the target shading information of the pixel point, and the target shading information is used to render the virtual image's hair.
  • the computer device determines the shading weights corresponding to each light source, and for each pixel in the hair area, the computer device determines the target shading information of the pixel through weighted calculation based on the shading weights corresponding to each light source and the shading information of each light source corresponding to the pixel.
  • the shading weights of the light sources may be the same or different, and are not specifically limited.
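The weighted fusion of the three light sources' shading described above can be sketched as follows; the function name and the per-channel tuple representation are assumptions:

```python
# Target shading of a pixel: weighted fusion of the shading information of
# the virtual main light source, the virtual backlight source, and the
# camera light source. Equal weights reduce to plain superposition.
def target_shading(main, back, camera, weights=(1.0, 1.0, 1.0)):
    w_m, w_b, w_c = weights
    return tuple(
        w_m * m + w_b * b + w_c * c
        for m, b, c in zip(main, back, camera)
    )
```

As the passage notes, the shading weights may be the same or different; art direction can tune them per scene.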
  • FIG. 10 is a schematic diagram of various light effects under different hair colors.
  • FIG. 10 provides ten sub-images, each of which obtains its target shading information through customized color information and illumination information under different light paths, and each sub-image is then rendered based on that target shading information.
  • FIG. 11 is a schematic diagram comparing the rendering effect of the Kajiya-Kay lighting model with real hair.
  • the hair rendered by the Kajiya-Kay lighting model is relatively stiff and overly shiny, and cannot fit the real light distribution of hair.
  • FIG. 12 is a schematic diagram of the rendering effect of the present application.
  • the virtual image's hair rendered by the present application according to the target coloring information has a rich sense of light and a realistic hair effect; that is, the quality of the rendered hair is high.
  • the computer device matches the color of the rendered hair with the color of the clothes of the avatar to obtain the image of the avatar.
  • the image of the avatar is imported into the target terminal, and the image is redesigned through the client on the target terminal, and the redesigned image is uploaded to the social application client of the target terminal for display.
  • the image in FIG. 13 is an image uploaded to the social application client by the user after the redesign.
  • the color of the hair and the color of the clothes after the rendering in the image are both color 1.
  • the color of the hair is obtained based on the target coloring information, and the user matches the clothes of the avatar independently, such as matching the type of clothes and the color of clothes according to the color of the hair.
  • FIG. 14 is a schematic diagram of the avatar's clothes matching the color of the hair;
  • sub-images 1 and 2 in FIG. 14 are images obtained by matching the clothes of the avatar according to the color of the avatar's rendered hair;
  • sub-image 3 is the purchase information of the avatar in the display interface of the game client, and the purchase information includes the color of the hair and the clothes of the avatar.
  • the game client can sell advanced suits, hair colors and hairstyles as purchase information.
  • the light effect of the pixel can be further enriched to ensure realistic and efficient rendering of the virtual character's hair. In this way, the hair rendering function is greatly optimized.
  • the present application also provides an application scenario, which applies the above-mentioned hair rendering method.
  • the application of the hair rendering method in the application scenario is, for example, as follows: In the scenario of a game client, in order to achieve efficient and realistic hair rendering on the client, color information is determined according to actual art requirements, and a rich light perception design is performed through open lighting information to achieve high-performance rendering of the virtual image's hair.
  • main light source information of a virtual main light source set in a virtual environment is obtained, the main light source information includes a light source position and a light source direction that change with time, and the virtual main light source acts on a hair area of the virtual image; the longitudinal angle, azimuth angle, illumination information and color information of each light path of the virtual main light source corresponding to the pixel point in the hair area are obtained; each light path of the virtual main light source includes a reflection light path, a transmission light path and a transflective light path; for each light path, a longitudinal scattering amount is determined based on the longitudinal angle and illumination information corresponding to the pixel point, an azimuth scattering amount is determined based on the corresponding azimuth angle, and a scattering amount of the light path corresponding to the pixel point is determined according to the longitudinal scattering amount, the azimuth scattering amount and the corresponding color information; the scattering amount of each light path corresponding to the pixel point is fused to obtain coloring information of the virtual main light source corresponding to the pixel point, and the coloring information is used to render the hair of the virtual image.
  • the hair rendering method provided by the present application can also be applied in other application scenarios.
  • multimedia scenarios it often involves rendering virtual images in various multimedia, such as virtual images in animation videos and virtual images used for promotion in promotional videos.
  • the hair rendering method of the present application can be used to efficiently render the hair of the virtual image to obtain a realistic virtual image.
  • a hair rendering method which can be executed by a computer device, and optionally: main light source information of a virtual main light source set in a virtual environment is obtained, the main light source information includes a light source position and a light source direction that change with time, and the virtual main light source acts on the hair area of the virtual image.
  • the longitudinal angle, azimuth angle, illumination information and color information of each light path of the virtual main light source corresponding to the pixel point in the hair area are obtained; each light path of the virtual main light source includes a reflection light path, a transmission light path and a transflection light path.
  • the illumination information of the reflection light path includes longitudinal illumination information.
  • the longitudinal angle includes an incident longitudinal angle and a scattering longitudinal angle.
  • the longitudinal scattering amount is determined based on the longitudinal angle corresponding to the pixel point and the longitudinal illumination information.
  • the difference between the scattering azimuth angle and the incident azimuth angle is calculated; and the cosine value of the difference is used as the azimuth scattering amount of the reflection light path corresponding to the pixel point.
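The azimuth scattering amount of the reflection light path, taken directly as the cosine of the azimuth difference with no loop iteration, can be sketched as:

```python
import math

# Azimuthal scattering amount N_R of the reflection light path: the cosine
# of the difference between the scattering azimuth angle and the incident
# azimuth angle, computed in closed form (no iterative root finding).
def azimuthal_scattering_r(phi_i, phi_r):
    return math.cos(phi_r - phi_i)
```

This closed-form term is what lets the method avoid the extra loop iterations that a full azimuthal lobe evaluation would require.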
  • the average difference between the scattering longitudinal angle and the incident longitudinal angle of the reflection light path is calculated.
  • the product of the longitudinal scattering amount, the azimuth scattering amount and the corresponding color information is calculated; according to the cosine value of the average difference and the product, the scattering amount of the reflected light path corresponding to the pixel point is calculated.
  • the scattering amount of the reflected light path is proportional to the product and inversely proportional to the square of the cosine value of the average difference.
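The reflection-path computation described above can be sketched as follows. This is a hedged illustration in the style of Marschner-type hair shading, not the patent's actual implementation: the Gaussian form of the longitudinal term and all parameter names (`shift` for the offset mean, `width` for the variance width term) are assumptions.

```python
import math

def r_lobe(theta_i, theta_r, phi_i, phi_r, shift, width, color):
    """Sketch of the reflection (R) path scattering amount for one pixel.

    Longitudinal scattering amount: assumed Gaussian over the average of the
    incident and scattering longitudinal angles, shifted by the offset mean
    (`shift`) with the variance width term (`width`).
    Azimuthal scattering amount: cosine of (phi_r - phi_i).
    The result is proportional to the product of both terms and the color,
    and inversely proportional to the square of the cosine of the average
    difference theta_d = (theta_r - theta_i) / 2, as stated above.
    """
    theta_h = (theta_i + theta_r) / 2.0   # average of the longitudinal angles
    theta_d = (theta_r - theta_i) / 2.0   # average difference
    m_r = math.exp(-((theta_h - shift) ** 2) / (2.0 * width ** 2))
    n_r = math.cos(phi_r - phi_i)
    return m_r * n_r * color / (math.cos(theta_d) ** 2)
```

With zero angles and a matching shift, both lobes evaluate to 1 and the result reduces to the color value, which makes the proportionality easy to check.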
  • the illumination information of the transmission light path includes longitudinal illumination information and azimuthal illumination information.
  • the longitudinal scattering amount is determined based on the corresponding longitudinal angle and the longitudinal illumination information
  • the azimuthal scattering amount is determined based on the corresponding azimuthal angle and the azimuthal illumination information.
  • the shadow map corresponding to the hair area is obtained.
  • the shadow map includes the shadow degree of each pixel point, and the shadow degree indicates whether there is a shadow at the pixel point.
  • according to the shadow degree of the pixel point, the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information, the scattering amount of the transmission light path corresponding to the pixel point is determined.
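A minimal sketch of the transmission (TT) amount with the shadow-map modulation described above. The multiplication order and the 0-to-1 shadow convention are assumptions, not quoted from the patent.

```python
def tt_scattering(shadow_degree, m_tt, n_tt, color):
    # Transmission (TT) path amount for one pixel: the product of the
    # shadow degree, the longitudinal scattering amount (m_tt), the
    # azimuthal scattering amount (n_tt) and the color information.
    # Convention assumed here: shadow_degree = 0 fully shadowed, 1 fully lit.
    return shadow_degree * m_tt * n_tt * color
```

A fully shadowed pixel contributes nothing through the TT path, which is the intended effect of folding the shadow map into the product.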
  • the illumination information of the transflective light path includes longitudinal illumination information.
  • the longitudinal scattering amount is determined based on the corresponding longitudinal angle and longitudinal illumination information, and the azimuthal scattering amount is determined based on the cosine value of the corresponding azimuthal angle.
  • the average difference between the scattering longitudinal angle and the incident longitudinal angle of the transflective light path is calculated; the product of the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information is calculated; the scattering amount of the transflective light path corresponding to the pixel point is calculated based on the cosine value of the average difference and the product.
  • the scattering amount of the transflective light path is proportional to the product and inversely proportional to the square of the cosine value of the average difference.
  • the scattering amounts of the pixel points corresponding to the reflective light path, the transmission light path and the transflective light path are merged to obtain the coloring information of the virtual main light source corresponding to the pixel point.
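The merge of the three path amounts can be sketched as below. The patent text only says the amounts are "merged"; a component-wise sum, as in the classic S = S_R + S_TT + S_TRT decomposition, is assumed here.

```python
def fuse_main_light(s_r, s_tt, s_trt):
    # Fuse the per-path RGB scattering amounts of the reflection (R),
    # transmission (TT) and transflection (TRT) light paths into the
    # shading information of the virtual main light source for one pixel.
    return tuple(a + b + c for a, b, c in zip(s_r, s_tt, s_trt))
```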
  • the virtual light source acting on the same pixel also includes a virtual backlight source and a camera light source.
  • the backlight source information of the virtual backlight source set in the virtual environment is obtained.
  • the backlight source information includes the light source position and illumination direction that change with time.
  • the virtual backlight source acts on the virtual image's hair area.
  • the horizontal projection direction of the illumination direction of the virtual backlight source is opposite to the horizontal projection direction of the illumination direction of the virtual main light source.
  • the longitudinal angle, illumination information and color information of each light path of the virtual backlight source corresponding to the pixel in the hair area are obtained; each light path of the virtual backlight source includes a reflection light path and a transflection light path.
  • the longitudinal scattering amount corresponding to the virtual backlight source is determined based on the longitudinal angle corresponding to the pixel and the illumination information, and the scattering amount of the light path corresponding to the pixel is determined according to the longitudinal scattering amount corresponding to the virtual backlight source and the corresponding color information.
  • the scattering amounts of the reflection light path and the transflection light path corresponding to the pixel are merged to obtain the coloring information of the virtual backlight source corresponding to the pixel.
  • the coloring information is used to render the hair of the virtual image.
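The backlight computation above, which uses only a longitudinal term for its R and TRT paths, can be sketched as follows. The Gaussian form and the summation used as the merge are assumptions; no names come from the patent.

```python
import math

def backlight_shading(lobes):
    # Shading contribution of the virtual backlight source for one pixel.
    # Its reflection and transflection paths use only a longitudinal
    # Gaussian (no azimuthal term), each scaled by its color information.
    # `lobes`: iterable of (theta_h, shift, width, color), one per path.
    total = 0.0
    for theta_h, shift, width, color in lobes:
        m = math.exp(-((theta_h - shift) ** 2) / (2.0 * width ** 2))
        total += m * color
    return total
```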
  • the light source information of the camera light source set in the virtual environment is obtained.
  • the light source information includes the light source position and light source direction that change with time.
  • the camera light source acts on the virtual image's hair area.
  • each optical path of the camera light source includes a reflection optical path and a transflective optical path.
  • the longitudinal scattering amount corresponding to the camera light source is determined based on the longitudinal angle corresponding to the pixel point and the illumination information, and the scattering amount of the optical path corresponding to the pixel point is determined according to the longitudinal scattering amount corresponding to the camera light source and the corresponding color information.
  • the scattering amounts of the reflection optical path and the transflective optical path corresponding to the pixel point are merged to obtain the shading information of the camera light source corresponding to the pixel point, and the shading information is used to render the hair of the virtual image.
  • the shading information of the virtual main light source, the shading information of the virtual backlight source and the shading information of the camera light source corresponding to the same pixel point are merged to obtain the target shading information of the pixel point, and the target shading information is used to render the hair of the virtual image.
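The final merge of the three light sources can be sketched as below. A clamped component-wise sum is one plausible choice; the patent only specifies that the three per-light shading results are fused.

```python
def target_shading(main_rgb, back_rgb, camera_rgb):
    # Merge the per-pixel shading of the virtual main light source, the
    # virtual backlight source and the camera light source into the
    # target shading information, clamping each channel to [0, 1].
    return tuple(min(1.0, m + b + c)
                 for m, b, c in zip(main_rgb, back_rgb, camera_rgb))
```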
  • by acquiring the main light source information of the virtual main light source set in the virtual environment, where the main light source information includes a light source position and a light source direction that change with time, the virtual main light source acts on the hair area of the virtual image, and the light paths of the virtual main light source include a reflection light path, a transmission light path and a transflective light path; and by obtaining the longitudinal angle, azimuth angle, illumination information and color information of each light path of the virtual main light source corresponding to the pixel point in the hair area, the longitudinal scattering amount for each light path is determined based on the corresponding longitudinal angle and illumination information, so the scattering of the pixel point in the longitudinal scattering process can be accurately estimated.
  • the azimuth scattering amount can be directly determined based on the corresponding azimuth angle, thereby accurately estimating the scattering of the pixel point in the azimuth scattering process.
  • the scattering amount of the light path corresponding to the pixel point is determined according to the longitudinal scattering amount, the azimuth scattering amount, and the corresponding color information. In this way, the light perception of the pixel point under each light path can be truly simulated by combining the longitudinal scattering, azimuth scattering, and color information.
  • the coloring information of the pixel point corresponding to the virtual main light source is obtained, so that the hair of the virtual character can be rendered realistically and efficiently, and the hair rendering effect is improved.
  • the steps in the flowcharts involved in the above embodiments can include multiple sub-steps or stages; these sub-steps or stages are not necessarily executed at the same moment and can be executed at different moments, and their execution order is not necessarily sequential; they can be executed in turn or alternately with other steps or with at least part of the sub-steps or stages in other steps.
  • the embodiment of the present application also provides a hair rendering device for implementing the hair rendering method involved above.
  • the solution the device provides to the problem is similar to that described in the method above, so for the specific limitations of the one or more hair rendering device embodiments provided below, reference can be made to the limitations of the hair rendering method above; details are not repeated here.
  • a hair rendering device including: a first determination module 1502 , an acquisition module 1504 , a second determination module 1506 and a fusion module 1508 , wherein:
  • the first determination module 1502 is used to obtain main light source information of a virtual main light source set in the virtual environment, where the main light source information includes a light source position and a light source direction that change over time, and the virtual main light source acts on a hair area of the virtual image.
  • the acquisition module 1504 is used to acquire the longitudinal angle, azimuth angle, illumination information and color information of each light path of the virtual main light source corresponding to the pixel point in the hair area; each light path of the virtual main light source includes a reflection light path, a transmission light path and a transflection light path.
  • the second determination module 1506 is used to determine the longitudinal scattering amount for each light path based on the longitudinal angle corresponding to the pixel point and the illumination information, determine the azimuthal scattering amount based on the corresponding azimuthal angle, and determine the scattering amount of the light path corresponding to the pixel point according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information.
  • the fusion module 1508 is used to fuse the scattering amount of each light path corresponding to the pixel point to obtain the coloring information of the virtual main light source corresponding to the pixel point, and the coloring information is used to render the hair of the virtual image.
  • the illumination information of the reflected light path includes longitudinal illumination information.
  • the second determination module is used to determine the longitudinal scattering amount for the reflected light path based on the longitudinal angle corresponding to the pixel point and the longitudinal illumination information, determine the azimuthal scattering amount based on the cosine value of the corresponding azimuthal angle, and determine the scattering amount of the reflected light path corresponding to the pixel point according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information.
  • the longitudinal angle includes an incident longitudinal angle and a scattered longitudinal angle.
  • the second determination module is used to calculate the average difference between the scattered longitudinal angle and the incident longitudinal angle of the reflected light path; calculate the product of the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information; and calculate the scattering amount of the reflected light path corresponding to the pixel point based on the cosine value of the average difference and the product.
  • the scattering amount of the reflected light path is proportional to the product and inversely proportional to the square of the cosine value of the average difference.
  • the azimuth angle includes an incident azimuth angle and a scattered azimuth angle.
  • the second determination module is used to calculate the difference between the scattered azimuth angle and the incident azimuth angle; and use the cosine value of the difference as the azimuth scattering amount of the reflected light path corresponding to the pixel point.
  • the light path of the virtual main light source also includes a transmitted light path
  • the illumination information of the transmitted light path includes longitudinal illumination information and azimuthal illumination information.
  • the second determination module is used to determine the longitudinal scattering amount for the transmitted light path based on the corresponding longitudinal angle and the longitudinal illumination information, and to determine the azimuthal scattering amount based on the corresponding azimuthal angle and the azimuthal illumination information of the pixel point.
  • the scattering amount of the transmitted light path corresponding to the pixel point is determined according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information.
  • the second determination module is used to obtain a shadow map corresponding to the hair area, where the shadow map includes a shadow degree of each pixel, and the shadow degree represents whether there is a shadow at the pixel; the scattering amount of the transmission light path corresponding to the pixel is determined according to the product of the shadow degree of the pixel, the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information.
  • the light path of the virtual main light source also includes a transflective light path
  • the illumination information of the transflective light path includes longitudinal illumination information
  • a second determination module is used to determine the longitudinal scattering amount for the transflective light path based on the corresponding longitudinal angle and the longitudinal illumination information, and to determine the azimuthal scattering amount based on the cosine value of the azimuthal angle corresponding to the pixel point, and to determine the scattering amount of the transflective light path corresponding to the pixel point according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information.
  • the longitudinal angle includes an incident longitudinal angle and a scattered longitudinal angle.
  • the second determination module is used to calculate the average difference between the scattered longitudinal angle and the incident longitudinal angle of the transflective light path; calculate the product of the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information; and calculate the scattering amount of the transflective light path corresponding to the pixel point based on the cosine value of the average difference and the product.
  • the scattering amount of the transflective light path is proportional to the product and inversely proportional to the square of the cosine value of the average difference.
  • the first determination module is further used to obtain backlight source information of a virtual backlight source set in a virtual environment, the backlight source information including a light source position and a lighting direction that change with time, the virtual backlight source acts on a hair region of a virtual image, and a horizontal projection direction of the lighting direction of the virtual backlight source is opposite to a horizontal projection direction of the lighting direction of the virtual main light source;
  • the acquisition module is further used to obtain a longitudinal angle, lighting information and color information of each light path of the virtual backlight source corresponding to a pixel point in the hair region; each light path of the virtual backlight source includes a reflective light path and a transflective light path;
  • the second determination module is further used to determine, for each light path corresponding to the virtual backlight source, a longitudinal scattering amount corresponding to the virtual backlight source based on a longitudinal angle corresponding to a pixel point and lighting information, and determine a scattering amount of the light path corresponding to the pixel point according to the longitudinal scattering amount corresponding to the virtual backlight source and the corresponding color information.
  • the illumination information of the reflected light path corresponding to the virtual backlight source includes longitudinal illumination information.
  • the second determination module is further used to obtain the longitudinal scattering amount corresponding to the virtual backlight source through Gaussian calculation according to the longitudinal angle and longitudinal illumination information corresponding to the pixel point for the reflected light path of the virtual backlight source; and to determine the scattering amount of the reflected light path corresponding to the pixel point according to the product of the longitudinal scattering amount corresponding to the virtual backlight source and the corresponding color information.
  • the longitudinal angle includes an incident longitudinal angle and a scattered longitudinal angle.
  • the second determination module is further used to calculate the average difference between the scattered longitudinal angle and the incident longitudinal angle of the reflected light path of the virtual backlight source; according to the cosine value and the product of the average difference, the scattering amount of the reflected light path corresponding to the pixel point is determined, and the scattering amount of the reflected light path is proportional to the product and inversely proportional to the square of the cosine value of the average difference.
  • the light path of the virtual backlight source also includes a transflective light path
  • the illumination information of the transflective light path includes longitudinal illumination information.
  • the second determination module is further used to obtain the longitudinal scattering amount corresponding to the virtual backlight source through Gaussian calculation according to the longitudinal angle and longitudinal illumination information corresponding to the pixel point for the transflective light path of the virtual backlight source; and determine the scattering amount of the transflective light path corresponding to the pixel point according to the product of the longitudinal scattering amount and the corresponding color information.
  • the first determination module is further used to obtain light source information of a camera light source set in the virtual environment, the light source information including the light source position and light source direction that change with time, and the camera light source acting on the hair area of the virtual image; the acquisition module is further used to acquire the longitudinal angle, illumination information and color information of each light path of the camera light source corresponding to the pixel point in the hair area, where the light paths of the camera light source include a reflection light path and a transflection light path; the second determination module is further used to determine, for each light path corresponding to the camera light source, the longitudinal scattering amount corresponding to the camera light source based on the longitudinal angle corresponding to the pixel point and the illumination information, and determine the scattering amount of the light path corresponding to the pixel point according to the longitudinal scattering amount corresponding to the camera light source and the corresponding color information; and the fusion module is further used to fuse the scattering amount of each light path corresponding to the pixel point to obtain the coloring information of the camera light source corresponding to the pixel point.
  • the illumination information of the reflected light path corresponding to the camera light source includes longitudinal illumination information.
  • the second determination module is further used to obtain the longitudinal scattering amount corresponding to the camera light source through Gaussian calculation according to the longitudinal angle corresponding to the pixel point and the longitudinal illumination information of the reflected light path of the camera light source; and determine the scattering amount of the reflected light path corresponding to the pixel point according to the product of the longitudinal scattering amount corresponding to the camera light source and the corresponding color information.
  • the optical path of the camera light source includes a transflective light path
  • the illumination information of the transflective light path includes longitudinal illumination information
  • the second determination module is used to obtain the longitudinal scattering amount corresponding to the camera light source through Gaussian calculation according to the longitudinal angle and longitudinal illumination information corresponding to the pixel point for the transflective light path of the camera light source; and determine the scattering amount of the transflective light path corresponding to the pixel point according to the longitudinal scattering amount and the corresponding color information corresponding to the camera light source.
  • the virtual light source acting on the same pixel also includes a virtual backlight source and a camera light source.
  • the fusion module is also used to fuse the shading information of the virtual main light source, the shading information of the virtual backlight source and the shading information of the camera light source corresponding to the same pixel to obtain the target shading information of the pixel.
  • the target shading information is used to render the virtual image's hair.
  • Each module in the hair rendering device can be implemented in whole or in part by software, hardware, or a combination thereof.
  • Each module can be embedded in or independent of a processor in a computer device in the form of hardware, or can be stored in a memory in a computer device in the form of software, so that the processor can call and execute operations corresponding to each module.
  • a computer device is provided, which may be a server or a terminal; its internal structure diagram may be as shown in Figure 16.
  • the computer device includes a processor, a memory, an input/output interface (Input/Output, referred to as I/O) and a communication interface.
  • the processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer-readable instructions and a database.
  • the internal memory provides an environment for the operation of the operating system and the computer-readable instructions in the non-volatile storage medium.
  • the input/output interface of the computer device is used to exchange information between the processor and an external device.
  • the communication interface of the computer device is used to communicate with an external terminal through a network connection.
  • FIG. 16 is merely a block diagram of a partial structure related to the scheme of the present application, and does not constitute a limitation on the computer device to which the scheme of the present application is applied.
  • the specific computer device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
  • a computer device including a memory and a processor, wherein the memory stores computer-readable instructions, and the processor implements the steps in the above-mentioned method embodiments when executing the computer-readable instructions.
  • a computer-readable storage medium on which computer-readable instructions are stored; when the computer-readable instructions are executed by a processor, the steps in the above-mentioned method embodiments are implemented.
  • a computer program product comprising computer-readable instructions, which implement the steps in the above-mentioned method embodiments when executed by a processor.
  • user information involved in this application, including but not limited to user device information and user personal information, etc.
  • data including but not limited to data used for analysis, stored data, displayed data, etc.
  • any reference to the memory, database or other medium used in the embodiments provided in this application can include at least one of non-volatile and volatile memory.
  • Non-volatile memory can include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, etc.
  • Volatile memory can include random access memory (RAM) or external cache memory, etc.
  • RAM can be in various forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
  • the database involved in each embodiment provided in this application may include at least one of a relational database and a non-relational database.
  • Non-relational databases may include distributed databases based on blockchain, etc., but are not limited to this.
  • the processor involved in each embodiment provided in this application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, etc., but are not limited to this.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Generation (AREA)

Abstract

The present application relates to a hair rendering method, apparatus, computer device, storage medium and computer program product. The method includes: a virtual main light source acts on the hair area of a virtual image (202); the longitudinal angle, azimuth angle, illumination information and color information of each light path of the virtual main light source corresponding to a pixel point in the hair area are obtained, where the light paths of the virtual main light source include a reflection light path, a transmission light path and a transflection light path (204); for each light path, a longitudinal scattering amount is determined based on the longitudinal angle and illumination information corresponding to the pixel point, an azimuthal scattering amount is determined based on the corresponding azimuth angle, and the scattering amount of the light path corresponding to the pixel point is determined according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information (206); and the scattering amounts of the pixel point for the reflection, transmission and transflection light paths are fused to obtain the shading information of the pixel point for the virtual main light source, the shading information being used to render the hair of the virtual image and improving the hair rendering effect (208).

Description

Hair rendering method, apparatus, device, storage medium and computer program product
This application claims priority to Chinese patent application No. 202211271651.5, filed on October 18, 2022 and entitled "Hair rendering method, apparatus, device, storage medium and computer program product", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer technology, and in particular to a hair rendering method, apparatus, computer device, storage medium and computer program product.
Background
With the development of image processing technology, the picture quality of games keeps improving. To make the hair of virtual characters in mobile games look realistic, the Kajiya-Kay lighting model is often used to shade and render the hair of virtual characters; that is, the hair is modeled as an opaque cylinder and shaded according to the principle of reflection.
However, real hair is a translucent cylinder. After virtual hair is shaded with the Kajiya-Kay lighting model, the result therefore looks rather stiff and oily and cannot simulate the actual light perception of real hair; in other words, the rendering effect for the virtual character's hair is poor.
Summary
A hair rendering method. The method includes:
obtaining main light source information of a virtual main light source set in a virtual environment, the main light source information including a light source position and a light source direction that change with time, the virtual main light source acting on a hair area of a virtual image;
obtaining the longitudinal angle, azimuth angle, illumination information and color information of each light path of the virtual main light source corresponding to a pixel point in the hair area, the light paths of the virtual main light source including a reflection light path, a transmission light path and a transflection light path;
for each light path, determining a longitudinal scattering amount based on the longitudinal angle and illumination information corresponding to the pixel point, determining an azimuthal scattering amount based on the corresponding azimuth angle, and determining the scattering amount of the light path corresponding to the pixel point according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information; and
fusing the scattering amounts of the pixel point for each light path to obtain the shading information of the pixel point for the virtual main light source, the shading information being used to render the hair of the virtual image.
A hair rendering apparatus. The apparatus includes:
a first determination module, configured to obtain main light source information of a virtual main light source set in a virtual environment, the main light source information including a light source position and a light source direction that change with time, the virtual main light source acting on a hair area of a virtual image;
an acquisition module, configured to obtain the longitudinal angle, azimuth angle, illumination information and color information of each light path of the virtual main light source corresponding to a pixel point in the hair area, the light paths of the virtual main light source including a reflection light path, a transmission light path and a transflection light path;
a second determination module, configured to, for each light path, determine a longitudinal scattering amount based on the longitudinal angle and illumination information corresponding to the pixel point, determine an azimuthal scattering amount based on the corresponding azimuth angle, and determine the scattering amount of the light path corresponding to the pixel point according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information; and
a fusion module, configured to fuse the scattering amounts of the pixel point for each light path to obtain the shading information of the pixel point for the virtual main light source, the shading information being used to render the hair of the virtual image.
A computer device. The computer device includes a memory and a processor, the memory storing computer-readable instructions; when executing the computer-readable instructions, the processor implements the following steps: obtaining main light source information of a virtual main light source set in a virtual environment, the main light source information including a light source position and a light source direction that change with time, the virtual main light source acting on a hair area of a virtual image; obtaining the longitudinal angle, azimuth angle, illumination information and color information of each light path of the virtual main light source corresponding to a pixel point in the hair area, the light paths of the virtual main light source including a reflection light path, a transmission light path and a transflection light path; for each light path, determining a longitudinal scattering amount based on the longitudinal angle and illumination information corresponding to the pixel point, determining an azimuthal scattering amount based on the corresponding azimuth angle, and determining the scattering amount of the light path corresponding to the pixel point according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information; and fusing the scattering amounts of the pixel point for each light path to obtain the shading information of the pixel point for the virtual main light source, the shading information being used to render the hair of the virtual image.
A non-volatile computer-readable storage medium storing computer-readable instructions; when the computer-readable instructions are executed by a processor, the following steps are implemented: obtaining main light source information of a virtual main light source set in a virtual environment, the main light source information including a light source position and a light source direction that change with time, the virtual main light source acting on a hair area of a virtual image; obtaining the longitudinal angle, azimuth angle, illumination information and color information of each light path of the virtual main light source corresponding to a pixel point in the hair area, the light paths of the virtual main light source including a reflection light path, a transmission light path and a transflection light path; for each light path, determining a longitudinal scattering amount based on the longitudinal angle and illumination information corresponding to the pixel point, determining an azimuthal scattering amount based on the corresponding azimuth angle, and determining the scattering amount of the light path corresponding to the pixel point according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information; and fusing the scattering amounts of the pixel point for each light path to obtain the shading information of the pixel point for the virtual main light source, the shading information being used to render the hair of the virtual image.
A computer program product including computer-readable instructions; when the computer-readable instructions are executed by a processor, the following steps are implemented: determining a virtual main light source acting on a hair area of a virtual image; obtaining main light source information of the virtual main light source set in a virtual environment, the main light source information including a light source position and a light source direction that change with time, the virtual main light source acting on the hair area of the virtual image; obtaining the longitudinal angle, azimuth angle, illumination information and color information of each light path of the virtual main light source corresponding to a pixel point in the hair area, the light paths of the virtual main light source including a reflection light path, a transmission light path and a transflection light path; for each light path, determining a longitudinal scattering amount based on the longitudinal angle and illumination information corresponding to the pixel point, determining an azimuthal scattering amount based on the corresponding azimuth angle, and determining the scattering amount of the light path corresponding to the pixel point according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information; and fusing the scattering amounts of the pixel point for each light path to obtain the shading information of the pixel point for the virtual main light source, the shading information being used to render the hair of the virtual image.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application or in the conventional technology more clearly, the accompanying drawings needed in the description of the embodiments or the conventional technology are briefly introduced below. Obviously, the drawings in the following description show merely embodiments of the present application, and a person of ordinary skill in the art can obtain other drawings from the disclosed drawings without creative effort.
Figure 1 is a diagram of the application environment of the hair rendering method in one embodiment;
Figure 2 is a schematic flowchart of the hair rendering method in one embodiment;
Figure 3 is a schematic diagram of the notation of the scattering geometry in one embodiment;
Figure 4 is a schematic diagram of the illumination information under different light paths in one embodiment;
Figure 5 is a schematic diagram of the geometry of scattering from a circular cross-section in one embodiment;
Figure 6 is a schematic diagram of a hair-dyeing interface in one embodiment;
Figure 7 is a schematic diagram of the hair effect when the virtual image faces away from the virtual main light source in one embodiment;
Figure 8 is a schematic diagram of the transmission effect of the virtual image's hair in one embodiment;
Figure 9 is a schematic diagram of hair rendering effects under different light paths in one embodiment;
Figure 10 is a schematic diagram of multiple light-perception effects for various hair colors in one embodiment;
Figure 11 is a schematic comparison between the rendering effect of the Kajiya-Kay lighting model and real hair in one embodiment;
Figure 12 is a schematic diagram of a hair rendering effect in one embodiment;
Figure 13 is a schematic diagram of a secondary design of a virtual image in one embodiment;
Figure 14 is a schematic diagram of virtual-image clothing matching the hair color in one embodiment;
Figure 15 is a structural block diagram of a hair rendering apparatus in one embodiment;
Figure 16 is an internal structure diagram of a computer device in one embodiment.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the scope of protection of the present application.
For ease of understanding, the relevant concepts are explained first.
Tangent: here the tangent direction refers to the direction of a hair strand; for example, the tangent direction is the direction from the hair tip to the hair root, or, as another example, from the hair root to the hair tip.
Scattering: an optical phenomenon in which light is sent off in all directions, caused by the non-uniformity of the propagation medium.
Reflection: an optical phenomenon in which light, upon propagating to a different substance, changes its propagation direction at the interface and returns to the original substance. Reflection occurs when light meets the surface of water, glass and many other objects. The phenomenon in which light changes its propagation direction at the interface between two substances and returns to the original substance is called reflection of light.
Refraction: an optical phenomenon in which the propagation direction of light generally changes when it passes obliquely from one transparent medium into another; this phenomenon is called refraction of light.
Transmission: an optical phenomenon. When light is incident on the surface of a transparent or translucent material, part of it is reflected, part is absorbed, and part can pass through; here, transmission is the emergence of the incident light after it has been refracted through the object. An object that transmits light is a transparent or translucent body, such as glass or a color filter.
Lighting model: a computer model that, according to the relevant laws of optics, simulates the physical process of illumination in nature. Lighting models in computer graphics are divided into local lighting models and global lighting models. A local lighting model ignores the effect of the surrounding environment on an object and considers only the direct illumination of the object's surface by the light source; this is only an idealized situation, and the result differs somewhat from the real situation in nature. A global lighting model also considers the influence of the surrounding environment on the surfaces of the scene.
Light source: an object that emits electromagnetic waves in a certain wavelength range (including visible light as well as invisible light such as ultraviolet, infrared and X-rays), usually referring to an object that can emit visible light. Light sources are divided into natural light sources (i.e., naturally occurring ones, for example the sun) and artificial light sources (for example, electric lamps).
Light path: the propagation path of light; in the present application it is the simulated propagation path of light on the hair.
R (Reflection) light path: the reflection path of light; in the present application it can be understood as the light path that simulates light being reflected on the hair.
TT (Transmission-Transmission) light path: the path of light that penetrates an object and then emerges from inside it; in the present application it refers to the light path in which, when the propagation of light on hair is simulated, the light first transmits into the hair and then transmits again (i.e., out of the interior of the hair). In this case the light undergoes two transmission events.
TRT (Transmission-Reflection-Transmission) light path: the process in which light penetrates into an object, is reflected internally, and is then transmitted out of the object; in the present application it refers to the light path in which, when the propagation of light on hair is simulated, the light first transmits into the hair, is then reflected once inside the hair (the light is still inside the hair at this point), and then transmits out again (i.e., out of the interior of the hair). In this case the light undergoes two transmissions and one reflection: the first transmission is the light entering the hair from the air, the reflection is the light propagating inside the hair, and the second transmission is the light leaving the hair into the air.
Longitudinal angle: the angle between a vector and the projection of that vector onto the normal plane, i.e., the inclination of the vector to the normal plane.
Azimuthal angle: the angle between the projection of a vector onto the normal plane and a reference vector lying in the same plane.
Anisotropy: the property whereby all or some of the chemical, physical and other properties of a substance vary with direction, showing differences in different directions. Anisotropy is a common property of materials and media, appearing at very different scales, from crystals through the various materials of everyday life to the Earth's interior. It is worth noting that anisotropy and inhomogeneity describe matter from two different angles and must not be equated.
In the related art, in the process of rendering mobile games, the hair of a virtual character is shaded by modeling it as an opaque cylinder and using the principle of reflection. This shading method is implemented through the Kajiya-Kay lighting model. Since this model does not render based on the structure of real hair, the rendered hair looks stiff and the highlights appear oily. The traditional Kajiya-Kay lighting model therefore cannot produce a realistic hair rendering effect.
The hair rendering method provided in the embodiments of the present application obtains main light source information of a virtual main light source set in a virtual environment; the main light source information includes a light source position and a light source direction that change with time, the virtual main light source acts on the hair area of a virtual image, and the light paths of the virtual main light source include a reflection light path, a transmission light path and a transflection light path. By obtaining the longitudinal angle, azimuth angle, illumination information and color information of each light path of the virtual main light source corresponding to a pixel point in the hair area, and determining, for each light path, the longitudinal scattering amount based on the corresponding longitudinal angle and illumination information, the scattering of the pixel point in the longitudinal scattering process can be accurately estimated. The azimuthal scattering amount can be determined directly from the corresponding azimuth angle, so the scattering of the pixel point in the azimuthal scattering process is also accurately estimated. The scattering amount of the light path corresponding to the pixel point is then determined from the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information; combining longitudinal scattering, azimuthal scattering and color information in this way makes it possible to realistically simulate the light perception of the pixel point under each light path. Finally, by fusing the scattering amounts of the pixel point for each light path, the shading information of the pixel point for the virtual main light source is obtained, so the hair of the virtual character is rendered realistically and efficiently and the hair rendering effect is improved.
The hair rendering method provided in the embodiments of the present application can be applied in the application environment shown in Figure 1, in which a terminal 102 communicates with a server 104 over a network. A data storage system can store the data that the server 104 needs to process; it can be integrated on the server 104, or placed on the cloud or on another server. The terminal 102 and the server 104 can each be used alone to execute the hair rendering method provided in the embodiments of the present application, and they can also cooperate to execute it. Taking the cooperative case as an example, a client can run on the terminal 102, and the server 104 is a background server that can provide computing and storage services for the client.
In some embodiments, the server 104 obtains main light source information of a virtual main light source set in the virtual environment, where the main light source information includes a light source position and a light source direction that change with time, and the virtual main light source acts on the hair area of a virtual image. The server 104 obtains the longitudinal angle, azimuth angle, illumination information and color information of each light path of the virtual main light source corresponding to a pixel point in the hair area, where the light paths of the virtual main light source include a reflection light path, a transmission light path and a transflection light path. For each light path, the server 104 determines a longitudinal scattering amount based on the longitudinal angle and illumination information corresponding to the pixel point, determines an azimuthal scattering amount based on the corresponding azimuth angle, and determines the scattering amount of the light path corresponding to the pixel point according to the longitudinal scattering amount, the azimuthal scattering amount and the corresponding color information. The server 104 fuses the scattering amounts of the pixel point for each light path to obtain the shading information of the pixel point for the virtual main light source; the shading information is used to render the hair of the virtual image. Optionally, the server 104 can also generate hair rendering data based on the shading information; when the terminal 102 needs to render hair, the server 104 delivers the rendering data to the terminal 102, and the client running on the terminal 102 renders and displays the hair of the virtual image according to the rendering data. Optionally, the server 104 can also deliver the rendering data to the terminal 102 in advance for local storage, and when hair rendering is needed the terminal 102 renders and displays the hair of the virtual image according to the locally stored rendering data. For example, when the terminal 102 needs to render a virtual image in a game picture, provide a hair part of a virtual image, or generate a virtual image, it can render and display the hair of the virtual image according to the locally stored rendering data.
其中,终端102上运行的客户端可以为游戏应用、视频应用、社交应用、即时通信应用、导航应用、音乐应用、购物应用、电子地图应用、浏览器等具有显示虚拟形象的功能的客户端。其中,该客户端可以为独立的应用程序,也可以为集成在某客户端(例如,社交客户端以及出行客户端等)中的子应用程序,在此不做限定。
其中,终端102可以但不限于是各种个人计算机、笔记本电脑、智能手机、平板电脑、物联网设备和便携式可穿戴设备,物联网设备可为智能电视、智能车载设备等。便携式可穿戴设备可为智能手表、智能手环、头戴设备等。服务器104可以用独立的服务器或者是多个服务器组成的服务器集群来实现。
在一些实施例中,如图2所示,提供了一种头发渲染方法,以该方法应用于计算机设备(例如图1中的终端102或服务器104),包括以下步骤:
步骤202,获取设定于虚拟环境中的虚拟主光源的主光源信息,主光源信息包括随时间变化的光源位置和光源方向,虚拟主光源作用于虚拟形象的头发区域。
其中,虚拟环境是计算机生成的一种虚拟的环境。该虚拟环境可以是对真实事件的仿真的三维环境,也可以是完全虚构的三维环境,也可以是半虚构半真实的三维场景。
虚拟主光源为在虚拟环境中主要用于发光的光源,该主光源是一种模拟太阳发光原理的光源,可以理解的是,该虚拟主光源所发出的光是平行光。因此,虚拟主光源可以视为虚拟环境中的太阳光。示例性地,虚拟环境中存在一种光源,即虚拟主光源,通过虚拟主光源对虚拟环境进行照射。示例性地,虚拟环境中可以存在多种光源,比如包括虚拟主光源、路灯、车辆的车灯等。
随着时间的变化,虚拟主光源在虚拟环境中的位置和光源方向会发生变化。因此,通过主光源信息来实现在虚拟环境中模拟出虚拟主光源。
在一些实施例中,该虚拟主光源的位置是通过模拟太阳在某个时刻的位置设定的,可选地,虚拟主光源的位置可以是虚拟环境的东方,可选地,虚拟主光源的位置可以结合虚拟环境所设定的时间确定,例如,当虚拟环境设定的时间为早上,或设定的时间段为上午时段时,虚拟主光源设定在虚拟环境中的东方。当虚拟环境设定的时间为中午时,虚拟主光源设定在虚拟环境中的南方。当虚拟环境设定的时间为傍晚时,虚拟主光源设定在虚拟环境中的西方。光源方向是指虚拟主光源发射光线的方向,例如,光源方向为正对着虚拟形象的脸部方向。该虚拟主光源的光源方向也是通过模拟太阳在某个时刻的光照方向设定的。
虚拟形象为虚拟环境中使用的非真实、软件制作的三维模型。在游戏场景中,虚拟形象可以是游戏中的虚拟角色。在多媒体场景中,虚拟形象可以是动画或电影中的虚拟人物。为了让虚拟形象更加逼真,需要对虚拟形象的头发进行渲染。当然,本申请还可以应用于动物,此时,本申请实施例可以对动物的毛发进行渲染,即将虚拟对象的头发视为是动物的毛发。在展示界面中,通过像素点来展示虚拟形象,其中,将展示虚拟形象的头发的像素点所在的区域视为虚拟形象的头发区域。
可选地,计算机设备获取设定于虚拟环境中的虚拟主光源的主光源信息,该主光源信息包括随时间变化的光源位置和光源方向,计算机设备根据光源位置和光源方向在虚拟环境中实时模拟虚拟主光源。
示例性地,计算机设备获取当前时刻下的主光源信息,并根据主光源信息,在虚拟环境中模拟虚拟主光源。该虚拟主光源作用于虚拟形象的头发区域。
步骤204,获取头发区域中的像素点对应虚拟主光源的各光路的纵向角、方位角、光照信息与颜色信息;虚拟主光源的各光路包括反射光路、透射光路与透反射光路。
其中,纵向角为一向量与该向量投影在头发的法向平面上的投影向量的夹角。方位角为投影在法向平面上的向量相对于某一同在法向平面上的参照向量的夹角。其中,法向平面是指过空间曲线的切点,且与切线垂直的平面,即垂直于该点切线的平面。如图3所示,为散射几何体的符号的示意图,头发的法向平面为w轴和v轴构成的平面,u为头发的切向向量,方向为发根指向发梢,切向向量是曲线在一点处的切向量,可以理解为沿曲线该点处切线方向的向量。向量v和向量w组成了右手正交基底,将v-w平面作为法向平面。ωi为光照方向的向量,ωr为散射方向的向量。其中,θi和θr均为纵向角,θi为ωi投影在法向平面上的向量与ωi的夹角,即光照方向与法向平面的倾斜角,可理解为入射纵向角,θr为ωr投影在法向平面上的向量与ωr的夹角,即散射方向与法向平面的倾斜角,可理解为散射纵向角。φi和φr均为方位角。φi为ωi投影在法向平面上的向量与向量v之间的夹角,可理解为入射方位角,φr为ωr投影在法向平面上的向量与向量v之间的夹角,可理解为散射方位角。
光照信息用于反映像素点的光感,即人眼能够感知到的光亮。该光照信息包括偏移均值和方差宽度项这两个参数。偏移均值反映的是光环在头发上的位置。光环可以理解为头发的高光,代表着头发的光泽。该偏移均值主要影响光照位置,通过不同数值的偏移均值能够使得光环在头发上的位置发生变化。方差宽度项反映了像素点上光环的宽度,即光照的宽度。该方差宽度项的数值越大,光环的宽度越宽;宽度越宽,则散射的亮度越暗,此时头发看起来越粗糙。如图4所示,为不同光路下光照信息的示意图,图4中示例性给出了反射光路和透反射光路的光照信息。图4中的线1为反射光路的偏移均值,框1为反射光路的方差宽度项,线2为透反射光路的偏移均值,框2为透反射光路的方差宽度项。由图4可知,偏移均值影响的是散射光的位置,方差宽度项影响的是散射范围。
颜色信息用于反映像素点的色彩。该颜色信息包含该像素点的色度信息、亮度信息和饱和度信息。色度信息反映了该像素点的颜色的色调。亮度信息反映了该像素点的颜色的明暗程度,饱和度信息反映了该像素点的颜色的鲜艳程度。虚拟主光源在该像素点的光路包括反射光路、透射光路和透反射光路。如图5所示,为圆形截面散射的几何形状示意图。入射光以入射能量dh、入射角度为γi入射到圆形物体上,该入射光与圆心之间的高度差为h。现在存在三种光路。对于反射光路R,出射光线以出射能量dφR、出射角度γi出射。此时,反射光路不涉及到在头发内部进行传播,因此,在头发内部路径的段数P为0。内部路径的段数是指光路在头发内部的路径的变化次数。对于透射光路TT,该透射光路为光线穿透圆形物体,又从圆形物体内部透出,此时,光路在以入射角度为γi入射到头发内部后,光线在内部以折射角度γt传输,再以出射能量dφTT、出射角度γi出射。此时,内部路径的段数P为1。对于透反射光路TRT,该透反射光路为光线穿透圆形物体,然后进入圆形物体的光线在物体内部反射,又从圆形物体内部透出,此时,光路在以入射角度为γi入射到头发内部后,光线在内部以折射角度γt传输,且在头发内部以γt角度进行一次反射后,再以出射能量dφTRT、出射角度γi出射。此时,内部路径的段数P为2。
可选地,计算机设备获取头发区域中像素点对应虚拟主光源的各光路的纵向角、方位角和颜色信息。计算机设备响应于对光照信息的触发操作,获取输入的各光路的光照信息。或者,计算机设备基于虚拟形象和虚拟场景中的至少一种,确定各光路的光照信息。
示例性地,对于每个光路,计算机设备计算虚拟主光源在相应的光路中以预设入射角入射后,与相应的光路对应的纵向角和方位角。计算机设备响应于操作人员的输入操作,获取输入的各光路的光照信息。例如,计算机设备允许人为设定偏移均值和方差宽度项,操作人员输入处于偏移值域中的偏移均值的数值,并增加方差宽度项的数值。计算机设备响应于操作人员的输入操作,获取头发区域中像素点对应虚拟主光源的各光路的光照信息。示例性地,计算机设备获取虚拟形象对应的至少一个饱和度,从至少一个饱和度中确定一个饱和度,作为反射光路的颜色信息,即反射光路的颜色信息不涉及到亮度信息和色度信息,仅涉及到饱和度信息。此时,也可以将饱和度的一半作为反射光路的颜色信息。响应于操作人员对透反射光路下颜色信息的设置操作,获取输入的透反射光路的颜色信息。计算机设备直接将虚拟形象的头发的颜色作为透射光路的颜色信息,或者,响应于操作人员对透射光路下颜色信息的操作,获取输入的透射光路的颜色信息。例如,如图6所示,为染发界面的示意图。该染发界面上设置三种可选择的染色类型,分别是普通染色类型、随心染色类型和发型染色类型。当操作人员选择发型染色类型时,操作人员可以选择头发的颜色,并通过消耗预设数量的虚拟资源来确认染色(比如,若选择了头发的颜色1,则需要消耗10个虚拟资源)。计算机设备响应于操作人员的选择操作确定目标头发的颜色,并获取与目标头发的颜色对应的颜色信息。该染色界面上还设置有重置颜色信息的图标、撤销已确定的颜色信息的图标、恢复已撤销的颜色信息的图标、缓存颜色信息的图标和下载颜色信息的图标。
需要说明的是,对于颜色信息的获取,通过开放偏移均值的值域实现头发亮处可控,同时,通过增大分布宽度项(即方差宽度项)的数值来近似多重散射的效果。这样,在不通过昂贵的多重散射进行渲染的情况下,通过开放颜色信息的设置仍能够达到多重散射的效果,确保了头发的颜色自然,在视觉上不会有明显的暗部。此外,通过开放颜色信息的设置还能灵活定制各种新的头发的颜色,支持多种风格化的效果,以满足美术的艺术要求。
可选地,对于每个光路,虚拟主光源在相应的光路中以预设入射角入射后,计算机设备计算与相应的光路对应的纵向角和方位角。计算机设备根据虚拟形象的个人属性确定光照信息,该个人属性包括服装款式和发型。或者,计算机设备根据虚拟环境的环境参数确定光照信息,该环境参数包括虚拟环境的场景类型和虚拟环境的时间。或者,计算机设备根据虚拟环境下虚拟形象的运动轨迹和虚拟形象的动作,确定与处于虚拟环境中虚拟形象对应的光照信息。计算机设备根据虚拟形象的头发的颜色的半饱和度确定反射光路的颜色信息,响应于操作人员输入的透反射光路下颜色信息的操作,获取输入的透反射光路的颜色信息。计算机设备直接将虚拟形象的头发的颜色作为透射光路的颜色信息,或者,响应于操作人员输入的透射光路下颜色信息的操作,获取输入的透射光路的颜色信息。
步骤206,对于每个光路,基于像素点相应的纵向角与光照信息确定纵向散射量,基于相应的方位角确定方位散射量,根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应光路的散射量。
其中,一束光打到发丝上的光线会被分散到以入射点为中心,沿着发丝轴向的一个圆锥面上,在圆锥面上的散射光线是每个光路传播得到的。对于每个光路,可以将复杂的空间关系拆分到两个二维平面上,分别是发丝的横切面和垂切面(法平面,如图3中的w-v平面)。纵向散射量表征在横切面上像素点的纵向散射的分布情况,可以理解为在给定了出射方向之后,该横切面上有多少比例的光线射出。纵向散射是沿着头发生长方向的散射。方位散射量表征在垂切面上像素点的方位散射的分布情况,即,在给定了出射方向之后,该垂切面上有多少比例的光线射出。
可选地,对于每个光路,计算机设备根据相应光路所对应的纵向角和光照信息,进行纵向散射计算,得到相应光路的纵向散射量。计算机设备根据相应光路所对应的方位角,进行方位散射计算,得到相应光路的方位散射量。计算机设备根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应光路的散射量。
在一些实施例中,反射光路的光照信息包括纵向光照信息,对于每个光路,基于像素点相应的纵向角与光照信息确定纵向散射量,基于相应的方位角确定方位散射量,根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应光路的散射量,包括:对于反射光路,基于像素点相应的纵向角与纵向光照信息确定纵向散射量,基于相应的方位角的余弦值确定方位散射量,根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应反射光路的散射量。
其中,反射光路的纵向光照信息表征在通过反射光路进行传播时,在纵向散射过程中的光照信息。
可选地,对于反射光路,计算机设备基于该反射光路的纵向角和纵向光照信息,通过高斯计算,得到纵向散射量。计算机设备基于该反射光路的方位角,确定像素点相应的方位角的余弦值,并根据该余弦值确定方位散射量。计算机设备根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应反射光路的散射量。
示例性地,反射光路的纵向角包括入射纵向角和散射纵向角,对于反射光路,计算机设备计算该反射光路的入射纵向角和散射纵向角的半角,并根据该半角和纵向光照信息,通过高斯计算,得到纵向散射量。计算机设备根据该反射光路的方位角,确定相应的方位角的余弦值,并根据该余弦值确定方位散射量。例如,在确定纵向散射量的过程中,计算机设备获取该反射光路的入射纵向角θi和散射纵向角θr,并将入射纵向角和散射纵向角的平均值作为反射光路的半角,即通过下述公式确定反射光路的半角θh
θh=(θi+θr)/2
计算机设备获取反射光路中偏移均值αR1和方差宽度项βR1,并根据反射光路的半角θh、偏移均值αR1和方差宽度项βR1,通过高斯计算得到反射光路的纵向散射量MR(θh)=g(βR1;θh-αR1),其中,g()为高斯函数。
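上述半角与高斯计算可以用如下最小Python草图示意。其中,函数名gaussian、longitudinal_scattering及其参数名均为说明性假设,高斯函数采用归一化形式也仅为一种近似选择,并非本申请限定的实现:

```python
import math

def gaussian(beta, x):
    # 一维高斯函数 g(beta; x):beta 为方差宽度项(相当于标准差),x 为偏离偏移均值的量
    return math.exp(-x * x / (2.0 * beta * beta)) / (beta * math.sqrt(2.0 * math.pi))

def longitudinal_scattering(theta_i, theta_r, alpha, beta):
    # 半角 theta_h = (theta_i + theta_r) / 2
    theta_h = (theta_i + theta_r) / 2.0
    # 纵向散射量 M(theta_h) = g(beta; theta_h - alpha),alpha 为偏移均值
    return gaussian(beta, theta_h - alpha)
```

可以看到,偏移均值alpha决定散射峰(即光环)出现的位置,方差宽度项beta越大则峰值越低、分布越宽,与前文所述光照信息中两个参数的作用一致。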
在本实施例中,对于反射光路,基于相应的纵向角与纵向光照信息确定纵向散射量,从而,能准确反映经过反射光路后纵向散射光占入射光的比例。基于相应的方位角的余弦值确定方位散射量,无需额外的循环迭代计算方位散射量,简化了方位散射量的计算步骤。这样,根据纵向散射量、方位散射量以及相应的颜色信息,能够准确且迅速的得到具有色彩表现的反射光路的散射量。
在一些实施例中,纵向角包括入射纵向角和散射纵向角;根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应反射光路的散射量,包括:计算反射光路的散射纵向角与入射纵向角的平均差;计算纵向散射量、方位散射量与相应的颜色信息的乘积;根据平均差的余弦值与乘积,计算像素点对应反射光路的散射量,反射光路的散射量与乘积成正比,且与平均差的余弦值的平方成反比。
可选地,计算机设备计算反射光路的入射纵向角和散射纵向角的平均差,并计算反射光路的纵向散射量、方位散射量和颜色信息的乘积。计算机设备计算平均差的余弦值的平方,并将该平均差的余弦值的平方作为分母、且将该乘积作为分子,得到像素点对应反射光路的散射量。示例性地,计算反射光路的入射纵向角和散射纵向角的平均差,包括:计算反射光路的散射纵向角减去入射纵向角得到的差值,将差值的一半确定为平均差。或者,计算反射光路的入射纵向角减去散射纵向角得到的差值,将差值的一半确定为平均差。
例如,计算机设备获取该反射光路的入射纵向角θi和散射纵向角θr,通过下述公式确定反射光路的平均差θd
θd=(θr-θi)/2
并通过下述公式计算像素点对应反射光路的散射量SR
SR=MR(θh)·NR·A1/cos²θd
其中,MR(θh)为纵向散射量,NR为方位散射量,A1为颜色信息,cosθd为平均差的余弦值,MR(θh)·NR·A1为乘积,cos²θd为平均差的余弦值的平方。其中,将纵向散射量、方位散射量和颜色信息的乘积除以平均差的余弦值的平方,来实现立体角(即散射光束)投影的修正。
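作为示意,反射光路散射量SR的上述计算可以写成如下Python草图,其中方位散射量按后文所述取方位角差值的余弦;函数名与参数顺序均为示例性假设:

```python
import math

def reflection_scattering(m_r, phi_i, phi_r, color, theta_i, theta_r):
    # 方位散射量 N_R = cos(phi),phi 为散射方位角与入射方位角的差值
    n_r = math.cos(phi_r - phi_i)
    # 平均差 theta_d = (theta_r - theta_i) / 2
    theta_d = (theta_r - theta_i) / 2.0
    # S_R = M_R · N_R · A1 / cos²(theta_d):与乘积成正比,与平均差余弦的平方成反比
    return m_r * n_r * color / (math.cos(theta_d) ** 2)
```

由于余弦为偶函数,平均差取(θri)/2或(θir)/2均不影响cos²θd的结果。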
在本实施例中,通过计算纵向散射量、方位散射量和相应的颜色信息的乘积,能够迅速且准确的得到具有色彩表现的散射信息,通过平均差的余弦值和乘积,能够直接得到具有色彩表现的散射光线占入射光线的比例,即,得到具有丰富色彩表现的反射光路的散射量,简化了头发的调色步骤。
在一些实施例中,方位角包括入射方位角和散射方位角,基于相应的方位角的余弦值确定方位散射量,包括:计算散射方位角与入射方位角的差值;将差值的余弦值,作为像素点对应反射光路的方位散射量。
可选地,计算机设备计算散射方位角和入射方位角的差值,并确定差值的余弦值。计算机设备将差值的余弦值直接作为像素点对应反射光路的方位散射量。
例如,计算机设备获取该反射光路的入射方位角φi和散射方位角φr,并计算散射方位角φr和入射方位角φi的差值φ。计算机设备对该差值φ进行余弦函数计算,得到方位散射量NR,即NR=cosφ。该差值φ可以是入射方位角φi减去散射方位角φr得到的,也可以是散射方位角φr减去入射方位角φi得到的,具体不作限定。
在本实施例中,通过余弦函数直接确定方位散射量,降低了方位散射量对反射光路的散射量的影响,无需额外的循环迭代计算方位散射量,极大地简化了方位散射量的计算步骤,从而,降低了后续进行虚拟形象的头发渲染的成本,实现了在移动端的普及。
在一些实施例中,虚拟主光源的光路还包括透射光路,透射光路的光照信息包括纵向光照信息与方位光照信息;对于每个光路,基于相应的纵向角与光照信息确定纵向散射量,基于像素点相应的方位角确定方位散射量,根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应光路的散射量,包括:对于透射光路,基于像素点相应的纵向角与纵向光照信息确定纵向散射量,基于相应的方位角与方位光照信息确定方位散射量,根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应透射光路的散射量。
其中,透射光路的纵向光照信息表征在通过透射光路进行传播时,在纵向散射过程中的光照信息,透射光路的方位光照信息表征在通过透射光路进行传播时,在方位散射过程中的光照信息。
可选地,对于透射光路,计算机设备基于透射光路的纵向角与纵向光照信息,通过高斯计算,得到纵向散射量,并基于透射光路的方位角与方位光照信息,通过高斯计算,得到方位散射量。计算机设备根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应透射光路的散射量。
示例性地,透射光路的纵向角包括入射纵向角和散射纵向角,透射光路的方位角包括入射方位角和散射方位角,对于透射光路,计算机设备计算该透射光路的入射纵向角和散射纵向角的半角,并根据该半角和纵向光照信息,通过高斯计算,得到纵向散射量。计算机设备计算透射光路的入射方位角和散射方位角的差值,并根据该差值和方位光照信息,通过高斯计算,得到方位散射量。计算机设备根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应透射光路的散射量。
需要说明的是,如图5所示,虚拟主光源以一定的入射角入射后,涉及到三种光路,即反射光路、透射光路和透反射光路。不管以何种光路传播,散射光的角度相同,由此可知,各个光路的入射纵向角相同,散射纵向角相同,则各个光路的半角相同,各个光路分别对应的散射纵向角和入射纵向角的平均差相同。各个光路中的入射方位角相同,散射方位角相同,则各个光路分别对应的散射方位角和入射方位角的差值相同。
对于透射光路来说,在虚拟形象正对虚拟主光源的情况下,透射光路的散射量较小,即透射光路对头发着色的影响较小。而虚拟形象背对虚拟主光源的情况下,对于头发区域中处于边缘的像素点,该像素点的透射光路的散射量较大,从而,能够在头发边缘处呈现透光的效果。如图7所示,为虚拟形象背向虚拟主光源时的头发效果示意图,很明显,图7中位于头发区域边缘处的头发具有很明显的透光效果。
例如,在确定透射光路的纵向散射量的过程中,计算机设备获取该透射光路的入射纵向角θi和散射纵向角θr,通过下述公式确定透射光路的半角θh
θh=(θi+θr)/2
计算机设备获取透射光路中与纵向散射对应的偏移均值αTT1和方差宽度项βTT1,并根据透射光路的半角θh、偏移均值αTT1和方差宽度项βTT1,通过高斯计算得到透射光路的纵向散射量MTT(θh)=g(βTT1;θh-αTT1),其中,g()为高斯函数。在确定透射光路的方位散射量的过程中,计算机设备获取透射光路的入射方位角φi和散射方位角φr,并计算入射方位角φi和散射方位角φr的差值φ。计算机设备获取透射光路中与方位散射对应的偏移均值αTT2和方差宽度项βTT2,并根据透射光路的差值φ、偏移均值αTT2和方差宽度项βTT2,通过高斯计算得到透射光路的方位散射量NTT(φ)=g(βTT2;φ-αTT2)。计算机设备根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应透射光路的散射量。
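上述透射光路的纵向散射量MTT与方位散射量NTT的高斯计算可以用如下Python草图示意。其中,g()的归一化形式与各参数名均为说明性假设:

```python
import math

def g(beta, x):
    # 一维高斯函数,beta 为方差宽度项,x 为偏离偏移均值的量
    return math.exp(-x * x / (2.0 * beta * beta)) / (beta * math.sqrt(2.0 * math.pi))

def tt_terms(theta_i, theta_r, phi_i, phi_r,
             alpha_tt1, beta_tt1, alpha_tt2, beta_tt2):
    # 纵向:M_TT(theta_h) = g(beta_TT1; theta_h - alpha_TT1),theta_h 为半角
    theta_h = (theta_i + theta_r) / 2.0
    m_tt = g(beta_tt1, theta_h - alpha_tt1)
    # 方位:N_TT(phi) = g(beta_TT2; phi - alpha_TT2),phi 为方位角差值
    phi = phi_r - phi_i
    n_tt = g(beta_tt2, phi - alpha_tt2)
    return m_tt, n_tt
```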
在本实施例中,对于透射光路,基于相应的纵向角与纵向光照信息确定纵向散射量,从而,能准确反映经过透射光路后纵向散射光占入射光的比例,基于相应的方位角和方位光照信息确定方位散射量,无需额外的循环迭代计算方位散射量,简化了方位散射量的计算步骤。这样,根据纵向散射量、方位散射量以及相应的颜色信息,能够准确且迅速的得到具有色彩表现的透射光路的散射量。
在一些实施例中,根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应透射光路的散射量,包括:获取头发区域对应的阴影贴图,阴影贴图包括各个像素点的阴影度,阴影度表征像素点处是否存在阴影;根据像素点的阴影度、纵向散射量、方位散射量以及相应的颜色信息的乘积,确定像素点对应透射光路的散射量。
可选地,计算机设备获取与虚拟形象的头发区域对应的阴影贴图。对于头发区域的每个像素点,计算机设备根据该像素点的阴影度、纵向散射量、方位散射量以及相应的颜色信息的乘积,确定该像素点对应的透射光路的散射量。
其中,对于处于头发区域边缘处的像素点,该像素点的阴影度为正整数,对于不处于头发区域边缘处的像素点,该像素点的阴影度视为零,则该像素点对应的透射光路的散射量为0。通过阴影贴图能够叠加虚拟形象阴影,营造头发厚处光线遮挡的效果。如图8所示,为虚拟形象的头发的透射效果示意图,为显示透射光路的效果,在设置虚拟形象背向虚拟主光源的情况下,计算机设备通过debug模式(调试工具),仅通过透射光路的散射量确定着色信息,可选地,结合阴影贴图增加头发阴影处的复合效果,如图8所示,在虚拟形象的头发区域的边缘处有很明显的透射效果,但头发无法体现出光感和颜色,此时,在叠加了三种光路的散射量之后,得到图8中多光路叠加后的效果,不仅能够体现丰富的光感和头发的颜色,还能如实反映虚拟形象背向虚拟主光源时的透射效果。其中,debug模式提供了头发的颜色渲染的查看功能,即可以自定义参数,也可以查看由至少一种光源所对应的至少一种光路的散射量确定的渲染效果。除了上述提及的可以查看仅通过透射光路确定的头发渲染效果以外,如图9所示的不同光路下头发渲染效果的示意图,还可以查看仅通过反射光路确定渲染效果、仅通过透反射光路确定渲染效果和多光路叠加后的渲染效果。相较于仅通过反射光路确定渲染效果、仅通过透反射光路确定渲染效果,多光路叠加后的渲染效果更能体现出头发的层次和光感,渲染更加真实。
可选地,透射光路的纵向角包括入射纵向角和散射纵向角,在得到该像素点的阴影度、纵向散射量、方位散射量以及相应的颜色信息的乘积之后,计算透射光路的散射纵向角与入射纵向角的平均差,根据该透射光路对应的乘积与平均差的余弦值的平方,计算像素点对应透射光路的散射量,透射光路的散射量与乘积成正比,与余弦值的平方成反比。
例如,如前所述,对于每个像素点,在确定了透射光路中阴影度Y、纵向散射量MTT(θh)、方位散射量NTT(φ)、颜色信息A2、平均差的余弦值cosθd之后,该透射光路的散射量STT如下式:
STT=Y·MTT(θh)·NTT(φ)·A2/cos²θd
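结合阴影度Y的STT计算可以示意如下。这里假设Y已从阴影贴图中采样得到,函数名与参数名均为说明性假设:

```python
import math

def scatter_TT(shadow_y, m_tt, n_tt, color_a2, theta_d):
    # S_TT = Y · M_TT(theta_h) · N_TT(phi) · A2 / cos²(theta_d)
    # shadow_y 为该像素点的阴影度;非边缘像素的 Y 视为 0,此时 S_TT 为 0
    return shadow_y * m_tt * n_tt * color_a2 / (math.cos(theta_d) ** 2)
```

阴影度为零时透射贡献被直接抹掉,只有头发边缘像素保留透光效果,与上文的描述一致。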
在本实施例中,对于透射光路,获取头发区域对应的阴影贴图,能够准确反映各个像素点是否存在阴影。这样,根据像素点的阴影度、纵向散射量、方位散射量以及相应的颜色信息的乘积,能够得到具有色彩表现和能够反映阴影效果的散射量。
在一些实施例中,虚拟主光源的光路还包括透反射光路,透反射光路的光照信息包括纵向光照信息;对于每个光路,基于像素点相应的纵向角与光照信息确定纵向散射量,基于相应的方位角确定方位散射量,根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应光路的散射量,包括:对于透反射光路,基于相应的纵向角与纵向光照信息确定纵向散射量,基于像素点相应的方位角的余弦值确定方位散射量,根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应透反射光路的散射量。
其中,透反射光路的纵向光照信息表征在纵向散射过程中的光照信息。
可选地,对于透反射光路,计算机设备基于该透反射光路的纵向角和纵向光照信息,通过高斯计算,得到纵向散射量。计算机设备基于该透反射光路的方位角,确定相应的方位角的余弦值,并根据该余弦值确定方位散射量。计算机设备根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应透反射光路的散射量。
可选地,透反射光路的纵向角包括入射纵向角和散射纵向角,对于透反射光路,计算机设备计算该透反射光路的入射纵向角和散射纵向角的半角,并根据该半角和纵向光照信息,通过高斯计算,得到纵向散射量。计算机设备根据该透反射光路的方位角,确定相应的方位角的余弦值,并根据该余弦值确定方位散射量。进一步,透反射光路的方位角包括入射方位角和散射方位角,计算机设备计算透反射光路的散射方位角和入射方位角的差值,并根据该透反射光路对应的差值的余弦值确定透反射光路的方位散射量。
例如,在确定了透反射光路的半角θh之后,计算机设备获取透反射光路中偏移均值αTRT1和方差宽度项βTRT1,并根据透反射光路的半角θh、偏移均值αTRT1和方差宽度项βTRT1,通过高斯计算得到透反射光路的纵向散射量MTRT(θh)=g(βTRT1;θh-αTRT1),其中,g()为高斯函数。
在本实施例中,对于透反射光路,基于相应的纵向角与纵向光照信息确定纵向散射量,从而,能准确反映经过透反射光路后纵向散射光占入射光的比例,基于相应的方位角的余弦值确定方位散射量,无需额外的循环迭代计算方位散射量,简化了方位散射量的计算步骤。这样,根据纵向散射量、方位散射量以及相应的颜色信息,能够准确且迅速的得到具有色彩表现的透反射光路的散射量。
在一些实施例中,纵向角包括入射纵向角和散射纵向角,根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应透反射光路的散射量,包括:计算透反射光路的散射纵向角与入射纵向角的平均差;计算纵向散射量、方位散射量与相应的颜色信息的乘积;根据平均差的余弦值与乘积,计算像素点对应透反射光路的散射量,透反射光路的散射量与乘积成正比、且与平均差的余弦值的平方成反比。
可选地,计算机设备计算透反射光路的散射纵向角与入射纵向角的平均差,并计算透反射光路的纵向散射量、方位散射量和颜色信息的乘积。计算机设备计算平均差的余弦值的平方,并将该平均差的余弦值的平方作为分母、且将该乘积作为分子,得到像素点对应透反射光路的散射量。
示例性地,在确定了透反射光路的散射纵向角与入射纵向角的平均差θd(即得到了平均差的余弦值cosθd)、纵向散射量MTRT(θh)、方位散射量NTRT、颜色信息A3之后,通过下述公式计算像素点对应透反射光路的散射量STRT
STRT=MTRT(θh)·NTRT·A3/cos²θd
在本实施例中,对于透反射光路,通过计算纵向散射量、方位散射量和相应的颜色信息的乘积,能够迅速且准确的得到具有色彩表现的散射信息,通过平均差的余弦值和乘积,能够直接得到具有色彩表现的散射光线占入射光线的比例,即,得到具有丰富色彩表现的透反射光路的散射量,简化了头发的调色步骤。
步骤208,融合像素点对应每个光路的散射量,得到像素点对应虚拟主光源的着色信息,着色信息用于渲染虚拟形象的头发。
其中,着色信息可理解为像素点的散射量,可以是一个光路的散射量,也可以是根据多个光路的散射量融合得到的散射量。
可选地,计算机设备将相同像素点所对应的反射光路、透射光路与透反射光路的散射量进行融合,得到像素点对应的虚拟主光源的着色信息,着色信息用于渲染虚拟形象的头发。
例如,对于每个像素点,计算机设备将该像素点的反射光路的散射量、透射光路的散射量和透反射光路的散射量进行叠加,得到总散射量,将总散射量确定为该像素点对应的虚拟主光源的着色信息。
又例如,对于每个像素点,计算机设备确定各光路的权重,按各光路的权重,将该像素点的反射光路的散射量、透射光路的散射量和透反射光路的散射量进行加权,得到总散射量,将总散射量确定为该像素点对应的虚拟主光源的着色信息。
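上述直接叠加与按权重加权两种融合方式,可以统一成如下Python草图(权重全为1时即退化为直接叠加;函数名为示例性假设):

```python
def fuse_light_paths(s_r, s_tt, s_trt, weights=(1.0, 1.0, 1.0)):
    # 按各光路权重加权叠加反射、透射与透反射光路的散射量,
    # 叠加结果即该像素点对应虚拟主光源的着色信息
    w_r, w_tt, w_trt = weights
    return w_r * s_r + w_tt * s_tt + w_trt * s_trt
```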
上述头发渲染的方法中,通过获取设定于虚拟环境中的虚拟主光源的主光源信息,主光源信息包括随时间变化的光源位置和光源方向,虚拟主光源作用于虚拟形象的头发区域,虚拟主光源的光路包括反射光路、透射光路与透反射光路;通过获取头发区域中的像素点对应虚拟主光源的各光路的纵向角、方位角、光照信息与颜色信息,对于每个光路,基于相应的纵向角与光照信息确定纵向散射量,能够准确预估像素点在纵向散射过程中的散射情况。基于相应的方位角能够直接确定方位散射量,从而,准确预估像素点在方位散射过程中的散射情况。根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应光路的散射量。这样,结合纵向散射、方位散射和颜色信息,能够真实模拟出该像素点在每个光路下的光感,最后通过融合像素点对应每个光路的散射量,得到像素点对应所述虚拟主光源的着色信息,实现了对虚拟角色的头发进行真实且高效的渲染,提高了头发渲染效果。
在一些实施例中,方法还包括:获取设定于虚拟环境中的虚拟背光源的背光源信息,所述背光源信息包括随时间变化的光源位置和光照方向,所述虚拟背光源作用于虚拟形象的头发区域,虚拟背光源的光照方向的水平投影方向与虚拟主光源的光照方向的水平投影方向相反;获取头发区域中像素点对应的虚拟背光源的各光路的纵向角、光照信息与颜色信息;虚拟背光源的各光路包括反射光路和透反射光路;对于虚拟背光源对应的每个光路,基于像素点相应的纵向角与光照信息确定虚拟背光源对应的纵向散射量,根据虚拟背光源对应的纵向散射量和相应的颜色信息,确定像素点对应光路的散射量;融合像素点对应每个光路的散射量,得到像素点对应虚拟背光源的着色信息,着色信息用于渲染虚拟形象的头发。
其中,虚拟背光源是设置在虚拟形象背面的光源,用于对虚拟形象背面进行补光。同样地,随着时间的变化,虚拟背光源在虚拟环境中的位置和光源方向也会发生变化。因此,通过背光源信息来实现在虚拟环境中模拟出虚拟背光源。当然,在另一些实施例中,虚拟背光源也可以不随时间变化,即不管时间如何变化,虚拟背光源在虚拟环境中的位置和光源方向都是一样的。
可选地,计算机设备根据虚拟主光源的光照方向,确定虚拟背光源的光照方向,虚拟背光源的光照方向和虚拟主光源的光照方向相反。计算机设备将虚拟形象的背面确定为虚拟背光源的光源位置,根据虚拟背光源的光源位置和光照方向,在虚拟环境中模拟作用于虚拟形象的头发区域的虚拟背光源,或者,计算机设备将相机光源作为虚拟背光源。对于虚拟背光源对应的每个光路,基于相应的纵向角与光照信息确定虚拟背光源对应的纵向散射量。计算机设备根据纵向散射量和相应的颜色信息的乘积,确定像素点对应的光路的散射量,通过叠加相同像素点对应的各光路的散射量,得到像素点对应的虚拟背光源的着色信息。
或者,计算机设备基于相应的方位角确定方位散射量,并根据纵向散射量、方位散射量和相应的颜色信息的乘积,确定像素点对应的光路的散射量,通过叠加像素点对应的各光路的散射量,得到像素点对应的虚拟背光源的着色信息。
在一些实施例中,虚拟背光源对应的反射光路的光照信息包括纵向光照信息,对于虚拟背光源对应的每个光路,基于像素点相应的纵向角与光照信息确定虚拟背光源对应的纵向散射量,根据虚拟背光源对应的纵向散射量和相应的颜色信息,确定像素点对应光路的散射量,包括:对于虚拟背光源的反射光路,根据像素点相应的纵向角和纵向光照信息,通过高斯计算,得到虚拟背光源对应的纵向散射量;根据虚拟背光源对应的纵向散射量和相应的颜色信息的乘积,确定像素点对应反射光路的散射量。
示例性地,对于虚拟背光源的反射光路,获取对应的入射纵向角θi′和散射纵向角θr′,并将入射纵向角和散射纵向角的平均值作为反射光路的半角,即通过下述公式确定反射光路的半角θh′:
θh′=(θi′+θr′)/2
计算机设备获取反射光路中偏移均值αR1′和方差宽度项βR1′,并根据反射光路的半角θh′、偏移均值αR1′和方差宽度项βR1′,通过高斯计算得到反射光路的纵向散射量MR′(θh′)=g(βR1′;θh′-αR1′),其中,g()为高斯函数。
在本实施例中,对于虚拟背光源的反射光路,基于相应的纵向角与纵向光照信息确定纵向散射量,从而,能准确反映经过反射光路后纵向散射光占入射光的比例。基于相应的方位角的余弦值确定方位散射量,无需额外的循环迭代计算方位散射量,简化了方位散射量的计算步骤。这样,根据纵向散射量、方位散射量以及相应的颜色信息,能够准确且迅速的得到具有色彩表现的反射光路的散射量。
在一些实施例中,纵向角包括入射纵向角和散射纵向角,根据虚拟背光源对应的纵向散射量和相应的颜色信息的乘积,确定像素点对应反射光路的散射量,包括:计算虚拟背光源的反射光路的散射纵向角与入射纵向角的平均差;根据平均差的余弦值与乘积,确定像素点对应反射光路的散射量,反射光路的散射量与乘积成正比,且与平均差的余弦值的平方成反比。
可选地,对于虚拟背光源对应的反射光路,该反射光路的光照信息包括纵向光照信息,该纵向角包括入射纵向角和散射纵向角,在确定了纵向散射量和相应的颜色信息的乘积之后,对于该反射光路,计算反射光路的散射纵向角与入射纵向角的平均差;根据平均差的余弦值与乘积,计算像素点对应反射光路的散射量,反射光路的散射量与乘积成正比,且与平均差的余弦值的平方成反比。
或者,在确定了纵向散射量、方位散射量和相应的颜色信息的乘积之后,对于该反射光路,计算反射光路的散射纵向角与入射纵向角的平均差;根据平均差的余弦值与该乘积,计算像素点对应反射光路的散射量,反射光路的散射量与乘积成正比,且与平均差的余弦值的平方成反比。对于虚拟背光源对应的反射光路,该反射光路的方位散射量的确定步骤,包括:根据该反射光路所对应的方位角,确定方位散射量。
示例性地,计算机设备获取虚拟背光源的反射光路的入射纵向角θi′和散射纵向角θr′,通过下述公式确定反射光路的平均差θd′:
θd′=(θr′-θi′)/2
此时,虚拟背光源的反射光路的散射量SR′如下:
SR′=MR′(θh′)·A1′/cos²θd
上述公式是在不计算方位散射量NR′的情况下得到的。A1′为虚拟背光源的反射光路对应的颜色信息。
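忽略方位散射量时,背光源反射光路散射量的计算可以示意如下(函数名与参数名为说明性假设):

```python
import math

def backlight_reflection(m_r_prime, color_a1_prime, theta_d_prime):
    # 背光源补光时方位散射量十分微弱,可以忽略,
    # 因此 S_R' = M_R'(theta_h') · A1' / cos²(theta_d')
    return m_r_prime * color_a1_prime / (math.cos(theta_d_prime) ** 2)
```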
在本实施例中,通过计算纵向散射量和相应的颜色信息的乘积,能够迅速且准确的得到具有色彩表现的散射信息,通过平均差的余弦值和乘积,能够直接得到具有色彩表现的散射光线占入射光线的比例,即,得到具有丰富色彩表现的反射光路的散射量,简化了头发的调色步骤。
在一些实施例中,虚拟背光源的光路还包括透反射光路,透反射光路的光照信息包括纵向光照信息;对于虚拟背光源对应的每个光路,基于像素点相应的纵向角与光照信息确定虚拟背光源对应的纵向散射量,根据虚拟背光源对应的纵向散射量和相应的颜色信息,确定像素点对应光路的散射量,包括:对于虚拟背光源的透反射光路,根据像素点相应的纵向角和纵向光照信息,通过高斯计算,得到虚拟背光源对应的纵向散射量;根据纵向散射量和相应的颜色信息的乘积,确定像素点对应透反射光路的散射量。
可选地,对于虚拟背光源对应的透反射光路,该透反射光路的光照信息包括纵向光照信息,该纵向角包括入射纵向角和散射纵向角,在确定了纵向散射量和相应的颜色信息的乘积之后,对于该透反射光路,计算透反射光路的散射纵向角与入射纵向角的平均差;根据平均差的余弦值与乘积,计算像素点对应透反射光路的散射量,透反射光路的散射量与乘积成正比,且与平均差的余弦值的平方成反比。
或者,在确定了纵向散射量、方位散射量和相应的颜色信息的乘积之后,对于该透反射光路,计算透反射光路的散射纵向角与入射纵向角的平均差;根据平均差的余弦值与该乘积,计算像素点对应透反射光路的散射量,透反射光路的散射量与乘积成正比,且与平均差的余弦值的平方成反比。
进一步地,对于虚拟背光源对应的透反射光路,该透反射光路的纵向散射量的确定步骤,包括:根据该透反射光路的纵向角和纵向光照信息,通过高斯计算,得到纵向散射量。
对于虚拟背光源对应的透反射光路,该透反射光路的方位散射量的确定步骤,包括:根据该透反射光路所对应的方位角,确定方位散射量。
示例性地,在确定了透反射光路的半角θh′、偏移均值αTRT1′和方差宽度项βTRT1′之后,根据半角θh′、偏移均值αTRT1′和方差宽度项βTRT1′,通过高斯计算得到透反射光路的纵向散射量MTRT(θh′)=g(βTRT1′;θh′-αTRT1′),其中,g()为高斯函数。获取对应的颜色信息A3′,通过下述公式计算像素点对应透反射光路的散射量STRT′:
STRT′=MTRT(θh′)·A3′/cos²θd
上述半角θh′的计算为:θh′=(θi′+θr′)/2
需要说明的是,透射光路能够清晰反映虚拟形象正面的轮廓效果,然而,在确定虚拟背光源的着色信息的过程中,虚拟背光源是对虚拟形象的背面进行补光,因此,无需进行透射光路的计算。进一步地,虚拟背光源在进行背面补光的过程中,各光路分别对应的方位散射量十分微弱,可以忽略,此时,可以直接根据纵向散射量和相应的颜色信息的乘积,确定像素点对应的虚拟背光源的着色信息,极大地优化了虚拟背光源的处理步骤。
在本实施例中,对于虚拟背光源的透反射光路,通过计算纵向散射量和相应的颜色信息的乘积,能够迅速且准确地得到具有色彩表现的散射信息,通过平均差的余弦值和乘积,能够直接得到具有色彩表现的散射光线占入射光线的比例,即,得到具有丰富色彩表现的透反射光路的散射量,简化了头发的调色步骤。
在本实施例中,通过确定作用于虚拟形象的头发区域的虚拟背光源,以对虚拟形象的背面进行补光。对于虚拟背光源对应的每个光路,基于相应的纵向角与光照信息确定纵向散射量,并根据纵向散射量和相应的颜色信息,得到虚拟背光源在各个光路下的散射量,通过融合像素点所对应的各个光路的散射量,进一步增强了头发区域像素点的光感效果,从而,得到模拟真实头发在背光下的着色信息,有利于后续对虚拟角色的头发进行真实且高效的渲染,以提高头发渲染效果。
在一些实施例中,方法还包括:获取设定于虚拟环境中的相机光源的光源信息,光源信息包括随时间变化的光源位置和光源方向,相机光源作用于虚拟形象的头发区域;获取头发区域中像素点对应相机光源的各光路的纵向角、光照信息与颜色信息;相机光源的各光路包括反射光路和透反射光路;对于相机光源对应的每个光路,基于像素点相应的纵向角与光照信息确定相机光源对应的纵向散射量,根据相机光源对应的纵向散射量和相应的颜色信息,确定像素点对应光路的散射量;融合像素点对应每个光路的散射量,得到像素点对应相机光源的着色信息,着色信息用于渲染虚拟形象的头发。
其中,相机光源为一种由相机位置发出的点光源,可理解为用户人眼发出的点光源,用于补充各向异性高光。同样地,随着时间的变化,相机光源在虚拟环境中的位置和光源方向也会发生变化。因此,通过光源信息来实现在虚拟环境中模拟出相机光源。当然,在另一些实施例中,相机光源也可以不随时间变化,即不管时间如何变化,相机光源在虚拟环境中的位置和光源方向都是一样的。
可选地,计算机设备根据相机光源的光源信息,在虚拟环境中模拟作用于虚拟形象的头发区域的相机光源。对于相机光源对应的每个光路,计算机设备基于相应的纵向角与光照信息确定相机光源对应的纵向散射量,并根据相机光源对应的纵向散射量和相应的颜色信息的乘积,确定像素点对应光路的散射量。计算机设备将相同像素点对应的各光路的散射量进行叠加,得到像素点对应的相机光源的着色信息。
或者,计算机设备基于相应的方位角确定方位散射量,并根据纵向散射量、方位散射量和相应的颜色信息的乘积,确定像素点对应的光路的散射量,通过叠加像素点对应的各光路的散射量,得到像素点对应的相机光源的着色信息。
在一些实施例中,相机光源对应的反射光路的光照信息包括纵向光照信息,对于相机光源对应的每个光路,基于像素点相应的纵向角与光照信息确定相机光源对应的纵向散射量,根据相机光源对应的纵向散射量和相应的颜色信息,确定像素点对应光路的散射量,包括:对于相机光源的反射光路,根据像素点相应的纵向角和纵向光照信息,通过高斯计算,得到相机光源对应的纵向散射量;根据相机光源对应的纵向散射量和相应的颜色信息的乘积,确定像素点对应反射光路的散射量。
可选地,对于相机光源对应的反射光路,该反射光路的光照信息包括纵向光照信息,该纵向角包括入射纵向角和散射纵向角,在确定了纵向散射量和相应的颜色信息的乘积之后,对于该反射光路,计算反射光路的散射纵向角与入射纵向角的平均差;根据平均差的余弦值与乘积,计算像素点对应反射光路的散射量,反射光路的散射量与乘积成正比,且与平均差的余弦值的平方成反比。
或者,在确定了纵向散射量、方位散射量和相应的颜色信息的乘积之后,对于该反射光路,计算反射光路的散射纵向角与入射纵向角的平均差;根据平均差的余弦值与该乘积,计算像素点对应反射光路的散射量,反射光路的散射量与乘积成正比,且与平均差的余弦值的平方成反比。
进一步地,对于相机光源对应的反射光路,该反射光路的纵向散射量的确定步骤,包括:根据该反射光路的纵向角和纵向光照信息,通过高斯计算,得到纵向散射量。
对于相机光源对应的反射光路,该反射光路的方位散射量的确定步骤,包括:根据该反射光路所对应的方位角,确定方位散射量。
可选地,对于相机光源对应的透反射光路,该透反射光路的光照信息包括纵向光照信息,该纵向角包括入射纵向角和散射纵向角,在确定了纵向散射量和相应的颜色信息的乘积之后,对于该透反射光路,计算透反射光路的散射纵向角与入射纵向角的平均差;根据平均差的余弦值与乘积,计算像素点对应透反射光路的散射量,透反射光路的散射量与乘积成正比,且与平均差的余弦值的平方成反比。
或者,在确定了纵向散射量、方位散射量和相应的颜色信息的乘积之后,对于该透反射光路,计算透反射光路的散射纵向角与入射纵向角的平均差;根据平均差的余弦值与该乘积,计算像素点对应透反射光路的散射量,透反射光路的散射量与乘积成正比,且与平均差的余弦值的平方成反比。
示例性地,计算机设备获取相机光源的反射光路的入射纵向角θi″和散射纵向角θr″,通过下述公式确定反射光路的平均差θd″:
θd″=(θr″-θi″)/2
此时,相机光源的反射光路的散射量SR″如下:
SR″=MR″(θh″)·A1″/cos²θd
上述公式是在不计算方位散射量NR″的情况下得到的。A1″为相机光源的反射光路对应的颜色信息。
在本实施例中,通过计算纵向散射量和相应的颜色信息的乘积,能够迅速且准确的得到具有色彩表现的散射信息,通过平均差的余弦值和乘积,能够直接得到具有色彩表现的散射光线占入射光线的比例,即,得到具有丰富色彩表现的反射光路的散射量,简化了头发的调色步骤。
在一些实施例中,相机光源的光路包括透反射光路,透反射光路的光照信息包括纵向光照信息;对于相机光源对应的每个光路,基于像素点相应的纵向角与光照信息确定相机光源对应的纵向散射量,根据相机光源对应的纵向散射量和相应的颜色信息,确定像素点对应光路的散射量,包括:对于相机光源的透反射光路,根据像素点相应的纵向角和纵向光照信息,通过高斯计算,得到相机光源对应的纵向散射量;根据相机光源对应的纵向散射量和相应的颜色信息,确定像素点对应透反射光路的散射量。
可选地,对于相机光源对应的透反射光路,该透反射光路的光照信息包括纵向光照信息,该纵向角包括入射纵向角和散射纵向角,在确定了纵向散射量和相应的颜色信息的乘积之后,对于该透反射光路,计算透反射光路的散射纵向角与入射纵向角的平均差;根据平均差的余弦值与乘积,计算相机光源对应的透反射光路的散射量,透反射光路的散射量与乘积成正比,且与平均差的余弦值的平方成反比。
或者,在确定了纵向散射量、方位散射量和相应的颜色信息的乘积之后,对于该透反射光路,计算透反射光路的散射纵向角与入射纵向角的平均差;根据平均差的余弦值与该乘积,计算相机光源对应的透反射光路的散射量,透反射光路的散射量与乘积成正比,且与平均差的余弦值的平方成反比。对于相机光源对应的透反射光路,该透反射光路的方位散射量的确定步骤,包括:根据该透反射光路所对应的方位角,确定方位散射量。
示例性地,在确定了相机光源对应的透反射光路的半角θh″、偏移均值αTRT1″和方差宽度项βTRT1″之后,根据半角θh″、偏移均值αTRT1″和方差宽度项βTRT1″,通过高斯计算得到透反射光路的纵向散射量MTRT(θh″)=g(βTRT1″;θh″-αTRT1″),其中,g()为高斯函数。获取对应的颜色信息A3″,通过下述公式计算像素点对应透反射光路的散射量STRT″:
STRT″=MTRT(θh″)·A3″/cos²θd
上述半角θh″的计算为:θh″=(θi″+θr″)/2。
在本实施例中,对于相机光源的透反射光路,通过计算纵向散射量和相应的颜色信息的乘积,能够迅速且准确地得到具有色彩表现的散射信息,通过平均差的余弦值和乘积,能够直接得到具有色彩表现的散射光线占入射光线的比例,即,得到具有丰富色彩表现的透反射光路的散射量,简化了头发的调色步骤。
在本实施例中,通过确定作用于虚拟形象的头发区域的相机光源,以对虚拟形象的头发区域进行各向异性高光的补充。对于相机光源对应的每个光路,基于相应的纵向角与光照信息确定纵向散射量,并根据纵向散射量和相应的颜色信息,得到相机光源在各个光路下的散射量,通过融合像素点所对应的各个光路的散射量,进一步增强了头发区域像素点的光感效果,从而,得到模拟真实头发在相机光源下的着色信息,有利于后续对虚拟角色的头发进行真实且高效的渲染,以提高头发渲染效果。
在一些实施例中,作用于同一像素点的虚拟光源还包括虚拟背光源和相机光源,方法还包括:融合相同像素点所对应的虚拟主光源的着色信息、虚拟背光源的着色信息和相机光源的着色信息,得到像素点的目标着色信息,目标着色信息用于渲染虚拟形象的头发。
可选地,在确定各个光源分别对应的着色信息之后,计算机设备融合相同像素点所对应的虚拟主光源的着色信息、虚拟背光源的着色信息和相机光源的着色信息得到像素点的目标着色信息,目标着色信息用于渲染虚拟形象的头发。
可选地,计算机设备确定各光源分别对应的着色权重,对于头发区域的每个像素点,计算机设备根据各光源分别对应的着色权重和该像素点所对应的各光源的着色信息,通过加权计算,确定该像素点的目标着色信息。其中,各光源的着色权重可以相同,也可以不同,具体不作限定。例如,如图10所示,为各个头发的颜色下多种光感效果示意图,图10提供了10张子图,每张子图均是通过不同光路下自定义的颜色信息和光照信息得到目标着色信息,然后根据每个目标着色信息渲染生成一张子图。通过设置三个光源下各光路分别对应的颜色信息,能够得到丰富的色彩支持,即子图1对应颜色1,…,子图10对应颜色10,并通过开放光照信息的设置,能够得到丰富的光感效果,即子图中显示有不同的大小和分布的高光,从而,能够得到渲染效果好且渲染效率高的渲染后的头发。
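按各光源着色权重融合目标着色信息的过程,可以用如下逐通道加权的Python草图示意。这里以RGB三元组表示每个光源的着色信息,仅为说明性假设:

```python
def target_shading(main_c, back_c, camera_c, weights=(1.0, 1.0, 1.0)):
    # 按虚拟主光源、虚拟背光源、相机光源各自的着色权重,
    # 逐颜色通道加权融合三个光源的着色信息,得到像素点的目标着色信息
    w1, w2, w3 = weights
    return tuple(w1 * m + w2 * b + w3 * c
                 for m, b, c in zip(main_c, back_c, camera_c))
```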
需要说明的是,如背景技术所述,根据传统的Kajiya-kay光照模型渲染得到的头发效果,如图11所示,为Kajiya-kay光照模型的渲染效果与真实头发的对比示意图,很明显,Kajiya-kay光照模型所渲染的头发比较生硬、油亮、无法贴合发丝光照分布。而通过本申请的方法得到的渲染效果,如图12所示,为本申请的渲染效果的示意图。本申请根据目标着色信息所确定的虚拟形象渲染后的头发具有丰富的光感和真实的头发效果,即渲染后的头发的质量高。
进一步地,计算机设备将渲染后的头发的颜色与虚拟形象的服装颜色进行匹配,得到虚拟形象的图像。响应于用户的导出操作,将虚拟形象的图像导入目标终端,并通过目标终端上的客户端对图像进行二次设计,并将二次设计后的图像上传至目标终端的社交应用客户端进行展示。如图13所示,图13中的图像为用户经过二次设计后上传至社交应用客户端展示的图像。图像中渲染后的头发的颜色和服装颜色均为颜色1。此外,基于目标着色信息得到头发的颜色,用户自主搭配虚拟形象的服装,如,根据头发的颜色搭配服装类型和服装颜色。如图14所示,为与头发的颜色匹配的虚拟形象服装的示意图,图14中子图1和子图2为用户根据虚拟形象的渲染后的头发的颜色,对虚拟形象的服装进行匹配得到的图像,子图3为游戏客户端显示界面中虚拟形象的购买信息,购买信息包括虚拟形象的头发的颜色和服装。在实际使用中,游戏客户端可以将高级套装、头发的颜色连同发型均作为购买信息进行售卖。
在本实施例中,通过融合相同像素点所对应的各光源的着色信息,能够进一步丰富像素点的光感效果,以确保真实且高效地渲染虚拟形象的头发。这样,极大地优化了头发渲染的功能。
本申请还提供一种应用场景,该应用场景应用上述的头发渲染方法。可选地,该头发渲染方法在该应用场景的应用例如如下所述:在游戏客户端的场景中,为了在客户端实现高效且真实的头发渲染,根据实际的美术需求确定颜色信息,并通过开放的光照信息进行丰富的光感设计,以实现对虚拟形象的头发进行高性能渲染。可选地,获取设定于虚拟环境中的虚拟主光源的主光源信息,所述主光源信息包括随时间变化的光源位置和光源方向,所述虚拟主光源作用于虚拟形象的头发区域;获取头发区域中的像素点对应虚拟主光源的各光路的纵向角、方位角、光照信息与颜色信息;虚拟主光源的各光路包括反射光路、透射光路与透反射光路;对于每个光路,基于像素点相应的纵向角与光照信息确定纵向散射量,基于相应的方位角确定方位散射量,根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应光路的散射量;融合像素点对应每个光路的散射量,得到像素点对应虚拟主光源的着色信息,着色信息用于渲染虚拟形象的头发。
当然并不局限于此,本申请提供的头发渲染方法还可以应用在其他应用场景中,例如,多媒体场景中,常常涉及到对各种多媒体中虚拟形象进行渲染,比如,动画视频中的虚拟形象、推广视频中用于推广的虚拟形象。为了在多媒体中呈现逼真的虚拟形象,可以通过本申请的头发渲染方法实现对虚拟形象的头发进行高效渲染,以得到真实的虚拟形象。
上述应用场景仅为示意性的说明,可以理解,本申请各实施例所提供的头发渲染方法的应用不局限于上述场景。
在一个具体的实施例中,提供了一种头发渲染方法,该方法可以由计算机设备执行,可选地:获取设定于虚拟环境中的虚拟主光源的主光源信息,主光源信息包括随时间变化的光源位置和光源方向,虚拟主光源作用于虚拟形象的头发区域。获取头发区域中的像素点对应虚拟主光源的各光路的纵向角、方位角、光照信息与颜色信息;虚拟主光源的各光路包括反射光路、透射光路与透反射光路。其中,反射光路的光照信息包括纵向光照信息。纵向角包括入射纵向角和散射纵向角。对于反射光路,基于像素点相应的纵向角与纵向光照信息确定纵向散射量。计算散射方位角与入射方位角的差值;将差值的余弦值,作为像素点对应反射光路的方位散射量。计算反射光路的散射纵向角与入射纵向角的平均差。计算纵向散射量、方位散射量与相应的颜色信息的乘积。根据平均差的余弦值与乘积,计算像素点对应反射光路的散射量,反射光路的散射量与乘积成正比,且与平均差的余弦值的平方成反比。透射光路的光照信息包括纵向光照信息与方位光照信息。对于透射光路,基于相应的纵向角与纵向光照信息确定纵向散射量,基于相应的方位角与方位光照信息确定方位散射量。获取头发区域对应的阴影贴图,阴影贴图包括各个像素点的阴影度,阴影度表征像素点处是否存在阴影。根据像素点的阴影度、纵向散射量、方位散射量以及相应的颜色信息的乘积,确定像素点对应透射光路的散射量。透反射光路的光照信息包括纵向光照信息。对于透反射光路,基于相应的纵向角与纵向光照信息确定纵向散射量,基于相应的方位角的余弦值确定方位散射量,计算透反射光路的散射纵向角与入射纵向角的平均差;计算纵向散射量、方位散射量与相应的颜色信息的乘积;根据平均差的余弦值与乘积,计算像素点对应透反射光路的散射量,透反射光路的散射量与乘积成正比、且与平均差的余弦值的平方成反比。将像素点分别对应反射光路、透射光路与透反射光路的散射量进行融合,得到该像素点对应该虚拟主光源的着色信息。
作用于同一像素点的虚拟光源还包括虚拟背光源和相机光源。获取设定于虚拟环境中的虚拟背光源的背光源信息,背光源信息包括随时间变化的光源位置和光照方向,虚拟背光源作用于虚拟形象的头发区域,虚拟背光源的光照方向的水平投影方向与虚拟主光源的光照方向的水平投影方向相反。获取头发区域中像素点对应的虚拟背光源的各光路的纵向角、光照信息与颜色信息;虚拟背光源的各光路包括反射光路和透反射光路。对于虚拟背光源对应的每个光路,基于像素点相应的纵向角与光照信息确定虚拟背光源对应的纵向散射量,根据虚拟背光源对应的纵向散射量和相应的颜色信息,确定像素点对应光路的散射量。将像素点分别对应反射光路与透反射光路的散射量进行融合,得到像素点对应的虚拟背光源的着色信息,着色信息用于渲染虚拟形象的头发。获取设定于虚拟环境中的相机光源的光源信息,光源信息包括随时间变化的光源位置和光源方向,相机光源作用于虚拟形象的头发区域。获取头发区域中像素点对应相机光源的各光路的纵向角、光照信息与颜色信息;相机光源的各光路包括反射光路和透反射光路。对于相机光源对应的每个光路,基于像素点相应的纵向角与光照信息确定相机光源对应的纵向散射量,根据相机光源对应的纵向散射量和相应的颜色信息,确定像素点对应光路的散射量。将像素点分别对应反射光路与透反射光路的散射量进行融合,得到像素点对应的相机光源的着色信息,着色信息用于渲染虚拟形象的头发。融合相同像素点所对应的虚拟主光源的着色信息、虚拟背光源的着色信息和相机光源的着色信息,得到像素点的目标着色信息,目标着色信息用于渲染虚拟形象的头发。
在本实施例中,通过获取设定于虚拟环境中的虚拟主光源的主光源信息,主光源信息包括随时间变化的光源位置和光源方向,虚拟主光源作用于虚拟形象的头发区域,虚拟主光源的光路包括反射光路、透射光路与透反射光路;通过获取头发区域中的像素点对应虚拟主光源的各光路的纵向角、方位角、光照信息与颜色信息,对于每个光路,基于相应的纵向角与光照信息确定纵向散射量,能够准确预估像素点在纵向散射过程中的散射情况。基于相应的方位角能够直接确定方位散射量,从而,准确预估像素点在方位散射过程中的散射情况。根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应光路的散射量。这样,结合纵向散射、方位散射和颜色信息,能够真实模拟出该像素点在每个光路下的光感,最后通过将像素点分别对应反射光路、透射光路与透反射光路的散射量进行融合,得到像素点对应虚拟主光源的着色信息,实现了对虚拟角色的头发进行真实且高效的渲染,提高了头发渲染效果。
应该理解的是,虽然如上所述的各实施例所涉及的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,如上所述的各实施例所涉及的流程图中的至少一部分步骤可以包括多个步骤或者多个阶段,这些步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤中的步骤或者阶段的至少一部分轮流或者交替地执行。
基于同样的发明构思,本申请实施例还提供了一种用于实现上述所涉及的头发渲染方法的头发渲染装置。该装置所提供的解决问题的实现方案与上述方法中所记载的实现方案相似,故下面所提供的一个或多个头发渲染装置实施例中的具体限定可以参见上文中对于头发渲染方法的限定,在此不再赘述。
在一些实施例中,如图15所示,提供了一种头发渲染装置,包括:第一确定模块1502、获取模块1504、第二确定模块1506和融合模块1508,其中:
第一确定模块1502,用于获取设定于虚拟环境中的虚拟主光源的主光源信息,主光源信息包括随时间变化的光源位置和光源方向,虚拟主光源作用于虚拟形象的头发区域。
获取模块1504,用于获取头发区域中的像素点对应虚拟主光源的各光路的纵向角、方位角、光照信息与颜色信息;虚拟主光源的各光路包括反射光路、透射光路与透反射光路。
第二确定模块1506,用于对于每个光路,基于像素点相应的纵向角与光照信息确定纵向散射量,基于相应的方位角确定方位散射量,根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应光路的散射量。
融合模块1508,用于融合像素点对应每个光路的散射量,得到像素点对应虚拟主光源的着色信息,着色信息用于渲染虚拟形象的头发。
在一些实施例中,反射光路的光照信息包括纵向光照信息,第二确定模块,用于对于反射光路,基于像素点相应的纵向角与纵向光照信息确定纵向散射量,基于相应的方位角的余弦值确定方位散射量,根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应反射光路的散射量。
在一些实施例中,纵向角包括入射纵向角和散射纵向角,第二确定模块,用于计算反射光路的散射纵向角与入射纵向角的平均差;计算纵向散射量、方位散射量与相应的颜色信息的乘积;根据平均差的余弦值与乘积,计算像素点对应反射光路的散射量,反射光路的散射量与乘积成正比,且与平均差的余弦值的平方成反比。
在一些实施例中,方位角包括入射方位角和散射方位角,第二确定模块,用于计算散射方位角与入射方位角的差值;将差值的余弦值,作为像素点对应反射光路的方位散射量。
在一些实施例中,虚拟主光源的光路还包括透射光路,透射光路的光照信息包括纵向光照信息与方位光照信息,第二确定模块,用于对于透射光路,基于相应的纵向角与纵向光照信息确定纵向散射量,基于像素点相应的方位角与方位光照信息确定方位散射量,根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应透射光路的散射量。
在一些实施例中,第二确定模块,用于获取头发区域对应的阴影贴图,阴影贴图包括各个像素点的阴影度,阴影度表征像素点处是否存在阴影;根据像素点的阴影度、纵向散射量、方位散射量以及相应的颜色信息的乘积,确定像素点对应透射光路的散射量。
在一些实施例中,虚拟主光源的光路还包括透反射光路,透反射光路的光照信息包括纵向光照信息;第二确定模块,用于对于透反射光路,基于相应的纵向角与纵向光照信息确定纵向散射量,基于像素点相应的方位角的余弦值确定方位散射量,根据纵向散射量、方位散射量以及相应的颜色信息,确定像素点对应透反射光路的散射量。
在一些实施例中,纵向角包括入射纵向角和散射纵向角,第二确定模块,用于计算透反射光路的散射纵向角与入射纵向角的平均差;计算纵向散射量、方位散射量与相应的颜色信息的乘积;根据平均差的余弦值与乘积,计算像素点对应透反射光路的散射量,透反射光路的散射量与乘积成正比、且与平均差的余弦值的平方成反比。
在一些实施例中,第一确定模块,还用于获取设定于虚拟环境中的虚拟背光源的背光源信息,背光源信息包括随时间变化的光源位置和光照方向,虚拟背光源作用于虚拟形象的头发区域,虚拟背光源的光照方向的水平投影方向与虚拟主光源的光照方向的水平投影方向相反;获取模块,还用于获取头发区域中像素点对应的虚拟背光源的各光路的纵向角、光照信息与颜色信息;虚拟背光源的各光路包括反射光路和透反射光路;第二确定模块,还用于对于虚拟背光源对应的每个光路,基于像素点相应的纵向角与光照信息确定虚拟背光源对应的纵向散射量,根据虚拟背光源对应的纵向散射量和相应的颜色信息,确定像素点对应光路的散射量;融合模块,还用于融合像素点对应每个光路的散射量,得到像素点对应虚拟背光源的着色信息,着色信息用于渲染虚拟形象的头发。
在一些实施例中,虚拟背光源对应的反射光路的光照信息包括纵向光照信息,第二确定模块,还用于对于虚拟背光源的反射光路,根据像素点相应的纵向角和纵向光照信息,通过高斯计算,得到虚拟背光源对应的纵向散射量;根据虚拟背光源对应的纵向散射量和相应的颜色信息的乘积,确定像素点对应反射光路的散射量。
在一些实施例中,纵向角包括入射纵向角和散射纵向角,第二确定模块,还用于计算虚拟背光源的反射光路的散射纵向角与入射纵向角的平均差;根据平均差的余弦值与乘积,确定像素点对应反射光路的散射量,反射光路的散射量与乘积成正比,且与平均差的余弦值的平方成反比。
在一些实施例中,虚拟背光源的光路还包括透反射光路,透反射光路的光照信息包括纵向光照信息,第二确定模块,还用于对于虚拟背光源的透反射光路,根据像素点相应的纵向角和纵向光照信息,通过高斯计算,得到虚拟背光源对应的纵向散射量;根据纵向散射量和相应的颜色信息的乘积,确定像素点对应透反射光路的散射量。
在一些实施例中,第一确定模块,还用于获取设定于虚拟环境中的相机光源的光源信息,光源信息包括随时间变化的光源位置和光源方向,相机光源作用于虚拟形象的头发区域;获取模块,还用于获取头发区域中像素点对应相机光源的各光路的纵向角、光照信息与颜色信息;相机光源的各光路包括反射光路和透反射光路;第二确定模块,还用于对于相机光源对应的每个光路,基于像素点相应的纵向角与光照信息确定相机光源对应的纵向散射量,根据相机光源对应的纵向散射量和相应的颜色信息,确定像素点对应光路的散射量;融合模块,还用于融合像素点对应每个光路的散射量,得到像素点对应相机光源的着色信息,着色信息用于渲染虚拟形象的头发。
在一些实施例中,相机光源对应的反射光路的光照信息包括纵向光照信息,第二确定模块,还用于对于相机光源的反射光路,根据像素点相应的纵向角和纵向光照信息,通过高斯计算,得到相机光源对应的纵向散射量;根据相机光源对应的纵向散射量和相应的颜色信息的乘积,确定像素点对应反射光路的散射量。
在一些实施例中,相机光源的光路包括透反射光路,透反射光路的光照信息包括纵向光照信息;第二确定模块,用于对于相机光源的透反射光路,根据像素点相应的纵向角和纵向光照信息,通过高斯计算,得到相机光源对应的纵向散射量;根据相机光源对应的纵向散射量和相应的颜色信息,确定像素点对应透反射光路的散射量。
在一些实施例中,作用于同一像素点的虚拟光源还包括虚拟背光源和相机光源,融合模块,还用于融合相同像素点所对应的虚拟主光源的着色信息、虚拟背光源的着色信息和相机光源的着色信息,得到像素点的目标着色信息,目标着色信息用于渲染虚拟形象的头发。
上述头发渲染装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中,提供了一种计算机设备,该计算机设备可以是服务器,也可以是终端,其内部结构图可以如图16所示。该计算机设备包括处理器、存储器、输入/输出接口(Input/Output,简称I/O)和通信接口。其中,处理器、存储器和输入/输出接口通过系统总线连接,通信接口通过输入/输出接口连接到系统总线。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质和内存储器。该非易失性存储介质存储有操作系统、计算机可读指令和数据库。该内存储器为非易失性存储介质中的操作系统和计算机可读指令的运行提供环境。该计算机设备的输入/输出接口用于处理器与外部设备之间交换信息。该计算机设备的通信接口用于与外部的终端通过网络连接通信。该计算机可读指令被处理器执行时以实现一种头发渲染方法。
本领域技术人员可以理解,图16中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,还提供了一种计算机设备,包括存储器和处理器,存储器中存储有计算机可读指令,该处理器执行计算机可读指令时实现上述各方法实施例中的步骤。
在一个实施例中,提供了一种计算机可读存储介质,其上存储有计算机可读指令,该计算机可读指令被处理器执行时实现上述各方法实施例中的步骤。
在一个实施例中,提供了一种计算机程序产品,包括计算机可读指令,该计算机可读指令被处理器执行时实现上述各方法实施例中的步骤。
需要说明的是,本申请所涉及的用户信息(包括但不限于用户设备信息、用户个人信息等)和数据(包括但不限于用于分析的数据、存储的数据、展示的数据等),均为经用户授权或者经过各方充分授权的信息和数据,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,所述的计算机可读指令可存储于一非易失性计算机可读取存储介质中,该计算机可读指令在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-Only Memory,ROM)、磁带、软盘、闪存、光存储器、高密度嵌入式非易失性存储器、阻变存储器(ReRAM)、磁变存储器(Magnetoresistive Random Access Memory,MRAM)、铁电存储器(Ferroelectric Random Access Memory,FRAM)、相变存储器(Phase Change Memory,PCM)、石墨烯存储器等。易失性存储器可包括随机存取存储器(Random Access Memory,RAM)或外部高速缓冲存储器等。作为说明而非局限,RAM可以是多种形式,比如静态随机存取存储器(Static Random Access Memory,SRAM)或动态随机存取存储器(Dynamic Random Access Memory,DRAM)等。本申请所提供的各实施例中所涉及的数据库可包括关系型数据库和非关系型数据库中至少一种。非关系型数据库可包括基于区块链的分布式数据库等,不限于此。本申请所提供的各实施例中所涉及的处理器可为通用处理器、中央处理器、图形处理器、数字信号处理器、可编程逻辑器、基于量子计算的数据处理逻辑器等,不限于此。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请的保护范围应以所附权利要求为准。

Claims (20)

  1. 一种头发渲染方法,由计算机设备执行,所述方法包括:
    获取设定于虚拟环境中的虚拟主光源的主光源信息,所述主光源信息包括随时间变化的光源位置和光源方向,所述虚拟主光源作用于虚拟形象的头发区域;
    获取所述头发区域中的像素点对应所述虚拟主光源的各光路的纵向角、方位角、光照信息与颜色信息;所述虚拟主光源的各光路包括反射光路、透射光路与透反射光路;
    对于每个光路,基于所述像素点相应的纵向角与光照信息确定纵向散射量,基于相应的方位角确定方位散射量,根据所述纵向散射量、所述方位散射量以及相应的颜色信息,确定所述像素点对应所述光路的散射量;及
    融合所述像素点对应每个光路的散射量,得到所述像素点对应所述虚拟主光源的着色信息,所述着色信息用于渲染所述虚拟形象的头发。
  2. 根据权利要求1所述的方法,所述反射光路的光照信息包括纵向光照信息;所述对于每个光路,基于所述像素点相应的纵向角与光照信息确定纵向散射量,基于相应的方位角确定方位散射量,根据所述纵向散射量、所述方位散射量以及相应的颜色信息,确定所述像素点对应所述光路的散射量,包括:
    对于所述反射光路,基于所述像素点相应的纵向角与所述纵向光照信息确定纵向散射量,基于相应的方位角的余弦值确定方位散射量,根据所述纵向散射量、所述方位散射量以及相应的颜色信息,确定所述像素点对应所述反射光路的散射量。
  3. 根据权利要求1或2所述的方法,所述纵向角包括入射纵向角和散射纵向角;所述根据所述纵向散射量、所述方位散射量以及相应的颜色信息,确定所述像素点对应所述反射光路的散射量,包括:
    计算所述反射光路的所述散射纵向角与所述入射纵向角的平均差;
    计算所述纵向散射量、所述方位散射量与相应的颜色信息的乘积;
    根据所述平均差的余弦值与所述乘积,计算所述像素点对应所述反射光路的散射量,所述反射光路的散射量与所述乘积成正比,且与所述平均差的余弦值的平方成反比。
  4. 根据权利要求2所述的方法,所述方位角包括入射方位角和散射方位角,所述基于相应的方位角的余弦值确定方位散射量,包括:
    计算所述散射方位角与所述入射方位角的差值;
    将所述差值的余弦值,作为所述像素点对应所述反射光路的方位散射量。
  5. 根据权利要求1至4任一项所述的方法,所述虚拟主光源的光路还包括透射光路,所述透射光路的光照信息包括纵向光照信息与方位光照信息;所述对于每个光路,基于所述像素点相应的纵向角与光照信息确定纵向散射量,基于相应的方位角确定方位散射量,根据所述纵向散射量、所述方位散射量以及相应的颜色信息,确定所述像素点对应所述光路的散射量,包括:
    对于所述透射光路,基于所述像素点相应的纵向角与所述纵向光照信息确定纵向散射量,基于相应的方位角与所述方位光照信息确定方位散射量,根据所述纵向散射量、所述方位散射量以及相应的颜色信息,确定所述像素点对应所述透射光路的散射量。
  6. 根据权利要求5所述的方法,所述根据所述纵向散射量、所述方位散射量以及相应的颜色信息,确定所述像素点对应所述透射光路的散射量,包括:
    获取所述头发区域对应的阴影贴图,所述阴影贴图包括各个像素点的阴影度,所述阴影度表征所述像素点处是否存在阴影;
    根据所述像素点的阴影度、所述纵向散射量、所述方位散射量以及相应的颜色信息的乘积,确定所述像素点对应所述透射光路的散射量。
  7. 根据权利要求1至4任一项所述的方法,所述虚拟主光源的光路还包括透反射光路,所述透反射光路的光照信息包括纵向光照信息;所述对于每个光路,基于所述像素点相应的纵向角与光照信息确定纵向散射量,基于相应的方位角确定方位散射量,根据所述纵向散射量、所述方位散射量以及相应的颜色信息,确定所述像素点对应所述光路的散射量,包括:
    对于所述透反射光路,基于所述像素点相应的纵向角与所述纵向光照信息确定纵向散射量,基于相应的方位角的余弦值确定方位散射量,根据所述纵向散射量、所述方位散射量以及相应的颜色信息,确定所述像素点对应所述透反射光路的散射量。
  8. 根据权利要求7所述的方法,所述纵向角包括入射纵向角和散射纵向角,所述根据所述纵向散射量、所述方位散射量以及相应的颜色信息,确定所述像素点对应所述透反射光路的散射量,包括:
    计算所述透反射光路的所述散射纵向角与所述入射纵向角的平均差;
    计算所述纵向散射量、所述方位散射量与相应的颜色信息的乘积;
    根据所述平均差的余弦值与所述乘积,计算所述像素点对应所述透反射光路的散射量,所述透反射光路的散射量与所述乘积成正比、且与所述平均差的余弦值的平方成反比。
  9. 根据权利要求1至4任一项所述的方法,所述方法还包括:
    获取设定于虚拟环境中的虚拟背光源的背光源信息,所述背光源信息包括随时间变化的光源位置和光照方向,所述虚拟背光源作用于所述虚拟形象的头发区域,所述虚拟背光源的光照方向的水平投影方向与所述虚拟主光源的光照方向的水平投影方向相反;
    获取所述头发区域中像素点对应的所述虚拟背光源的各光路的纵向角、光照信息与颜色信息;所述虚拟背光源的各光路包括反射光路和透反射光路;
    对于所述虚拟背光源对应的每个光路,基于所述像素点相应的纵向角与光照信息确定所述虚拟背光源对应的纵向散射量,根据所述虚拟背光源对应的纵向散射量和相应的颜色信息,确定所述像素点对应所述光路的散射量;
    融合所述像素点对应每个光路的散射量,得到所述像素点对应所述虚拟背光源的着色信息,所述着色信息用于渲染所述虚拟形象的头发。
  10. 根据权利要求9所述的方法,所述虚拟背光源对应的反射光路的光照信息包括纵向光照信息,所述对于所述虚拟背光源对应的每个光路,基于所述像素点相应的纵向角与光照信息确定所述虚拟背光源对应的纵向散射量,根据所述虚拟背光源对应的纵向散射量和相应的颜色信息,确定所述像素点对应所述光路的散射量,包括:
    对于所述虚拟背光源的反射光路,根据所述像素点相应的纵向角和所述纵向光照信息,通过高斯计算,得到所述虚拟背光源对应的纵向散射量;
    根据所述虚拟背光源对应的纵向散射量和相应的颜色信息的乘积,确定所述像素点对应所述反射光路的散射量。
  11. 根据权利要求10所述的方法,所述纵向角包括入射纵向角和散射纵向角,所述根据所述虚拟背光源对应的纵向散射量和相应的颜色信息的乘积,确定所述像素点对应所述反射光路的散射量,包括:
    计算所述虚拟背光源的反射光路的散射纵向角与所述入射纵向角的平均差;
    根据所述平均差的余弦值与所述乘积,确定所述像素点对应所述反射光路的散射量,所述反射光路的散射量与所述乘积成正比,且与所述平均差的余弦值的平方成反比。
  12. 根据权利要求9-11任一项所述的方法,所述虚拟背光源的光路还包括透反射光路,所述透反射光路的光照信息包括纵向光照信息;所述对于所述虚拟背光源对应的每个光路,基于所述像素点相应的纵向角与光照信息确定所述虚拟背光源对应的纵向散射量,根据所述虚拟背光源对应的纵向散射量和相应的颜色信息,确定所述像素点对应所述光路的散射量,包括:
    对于所述虚拟背光源的透反射光路,根据所述像素点相应的纵向角和纵向光照信息,通过高斯计算,得到所述虚拟背光源对应的纵向散射量;
    根据所述纵向散射量和相应的颜色信息的乘积,确定所述像素点对应所述透反射光路的散射量。
  13. 根据权利要求1-4任一项所述的方法,所述方法还包括:
    获取设定于虚拟环境中的相机光源的光源信息,所述光源信息包括随时间变化的光源位置和光源方向,所述相机光源作用于所述虚拟形象的头发区域;
    获取所述头发区域中像素点对应所述相机光源的各光路的纵向角、光照信息与颜色信息;所述相机光源的各光路包括反射光路和透反射光路;
    对于所述相机光源对应的每个光路,基于所述像素点相应的纵向角与光照信息确定所述相机光源对应的纵向散射量,根据所述相机光源对应的纵向散射量和相应的颜色信息,确定所述像素点对应所述光路的散射量;
    融合所述像素点对应每个光路的散射量,得到所述像素点对应所述相机光源的着色信息,所述着色信息用于渲染所述虚拟形象的头发。
  14. 根据权利要求13所述的方法,所述相机光源对应的反射光路的光照信息包括纵向光照信息,所述对于所述相机光源对应的每个光路,基于所述像素点相应的纵向角与光照信息确定所述相机光源对应的纵向散射量,根据所述相机光源对应的纵向散射量和相应的颜色信息,确定所述像素点对应所述光路的散射量,包括:
    对于所述相机光源的反射光路,根据所述像素点相应的纵向角和纵向光照信息,通过高斯计算,得到所述相机光源对应的纵向散射量;
    根据所述相机光源对应的纵向散射量和相应的颜色信息的乘积,确定所述像素点对应所述反射光路的散射量。
  15. 根据权利要求13-14任一项所述的方法,所述相机光源的光路包括透反射光路,所述透反射光路的光照信息包括纵向光照信息;所述对于所述相机光源对应的每个光路,基于所述像素点相应的纵向角与光照信息确定所述相机光源对应的纵向散射量,根据所述相机光源对应的纵向散射量和相应的颜色信息,确定所述像素点对应所述光路的散射量,包括:
    对于所述相机光源的透反射光路,根据所述像素点相应的纵向角和所述纵向光照信息,通过高斯计算,得到所述相机光源对应的纵向散射量;
    根据所述相机光源对应的纵向散射量和相应的颜色信息,确定所述像素点对应所述透反射光路的散射量。
  16. 根据权利要求1至15任一项所述的方法,作用于同一像素点的虚拟光源还包括虚拟背光源和相机光源,所述方法还包括:
    融合相同像素点所对应的所述虚拟主光源的着色信息、所述虚拟背光源的着色信息和所述相机光源的着色信息,得到所述像素点的目标着色信息,所述目标着色信息用于渲染所述虚拟形象的头发。
  17. 一种头发渲染装置,所述装置包括:
    第一确定模块,用于获取设定于虚拟环境中的虚拟主光源的主光源信息,所述主光源信息包括随时间变化的光源位置和光源方向,所述虚拟主光源作用于虚拟形象的头发区域;
    获取模块,用于获取所述头发区域中的像素点对应所述虚拟主光源的各光路的纵向角、方位角、光照信息与颜色信息;所述虚拟主光源的各光路包括反射光路、透射光路与透反射光路;
    第二确定模块,用于对于每个光路,基于所述像素点相应的纵向角与光照信息确定纵向散射量,基于相应的方位角确定方位散射量,根据所述纵向散射量、所述方位散射量以及相应的颜色信息,确定所述像素点对应所述光路的散射量;
    融合模块,用于融合所述像素点对应每个光路的散射量,得到所述像素点对应所述虚拟主光源的着色信息,所述着色信息用于渲染所述虚拟形象的头发。
  18. 一种计算机设备,包括存储器和处理器,所述存储器存储有计算机可读指令,所述处理器执行所述计算机可读指令时实现权利要求1至16中任一项所述的方法的步骤。
  19. 一种计算机可读存储介质,其上存储有计算机可读指令,所述计算机可读指令被处理器执行时实现权利要求1至16中任一项所述的方法的步骤。
  20. A computer program product, comprising computer-readable instructions which, when executed by a processor, implement the steps of the method according to any one of claims 1 to 16.
PCT/CN2023/121023 2022-10-18 2023-09-25 Hair rendering method and apparatus, device, storage medium and computer program product WO2024082927A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211271651.5 2022-10-18
CN202211271651.5A CN117011438A (zh) 2022-10-18 2022-10-18 Hair rendering method and apparatus, device, storage medium and computer program product

Publications (1)

Publication Number Publication Date
WO2024082927A1 2024-04-25

Family

ID=88562496

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/121023 WO2024082927A1 (zh) 2022-10-18 2023-09-25 Hair rendering method and apparatus, device, storage medium and computer program product

Country Status (2)

Country Link
CN (1) CN117011438A (zh)
WO (1) WO2024082927A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130215139A1 (en) * 2012-01-17 2013-08-22 Pacific Data Images Llc Ishair: importance sampling for hair scattering
CN113379885A (zh) * 2021-06-22 2021-09-10 网易(杭州)网络有限公司 虚拟头发的处理方法及装置、可读存储介质及电子设备
CN113610955A (zh) * 2021-08-11 2021-11-05 北京果仁互动科技有限公司 一种对象渲染方法、装置及着色器
US20210407154A1 (en) * 2020-06-30 2021-12-30 Beijing Dajia Internet Information Technology Co., Ltd. Method and electronic device for processing images
CN113888398A (zh) * 2021-10-21 2022-01-04 北京百度网讯科技有限公司 头发渲染方法、装置及电子设备
CN115082607A (zh) * 2022-05-26 2022-09-20 网易(杭州)网络有限公司 虚拟角色头发渲染方法、装置、电子设备和存储介质

Also Published As

Publication number Publication date
CN117011438A (zh) 2023-11-07
