CN117078849A - Normal map generation method and device

Normal map generation method and device

Info

Publication number: CN117078849A
Application number: CN202310962889.0A
Authority: CN (China)
Prior art keywords: image, feature extraction, pseudo-height image
Other languages: Chinese (zh)
Inventor: 陈明翔
Current assignee (also original assignee): Ant Blockchain Technology Shanghai Co Ltd
Application filed by Ant Blockchain Technology Shanghai Co Ltd
Priority date / filing date: 2023-08-01
Publication date: 2023-11-17
Legal status: Pending

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
                • G06T 19/00 Manipulating 3D models or images for computer graphics
                    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 10/00 Arrangements for image or video recognition or understanding
                    • G06V 10/40 Extraction of image or video features
                        • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
                    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
                        • G06V 10/86 Using syntactic or structural representations of the image or video pattern, e.g. symbolic string recognition; using graph matching

Abstract

Embodiments of this specification provide a normal map generation method and device. The method includes: acquiring a first image, where the first image is a surface image of a target model; inputting the first image into a pre-trained feature extraction network to obtain a plurality of feature images output by a plurality of specified feature extraction layers in the feature extraction network; determining a pseudo-height image using the plurality of feature images, where the pseudo-height image contains the gray value of each pixel; and generating a normal map of the target model based on the pseudo-height image.

Description

Normal map generation method and device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for generating a normal map.
Background
A tangent space normal map (Tangent Space Normal Map) is a two-dimensional image in which each pixel contains the normal vector information of the corresponding point on a model surface; because the normal vectors are expressed relative to tangent space, the map is called a tangent space normal map. During rendering, a normal map is applied to the model surface to enhance the details of the model surface, making it possible to present high-precision surface details on a low-polygon model and thereby improve the realism and quality of the model.
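For illustration only (this is not part of the patent text): by the common convention, a normal map stores each unit normal with its components remapped from [-1, 1] into the [0, 255] RGB range. A minimal Python sketch of that decoding, assuming this standard encoding:

```python
import numpy as np

def decode_normal_map(rgb: np.ndarray) -> np.ndarray:
    """Map an (H, W, 3) uint8 normal map back to unit vectors in [-1, 1]."""
    n = rgb.astype(np.float32) / 255.0 * 2.0 - 1.0
    # Re-normalize to undo quantization error.
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```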
To obtain a better-quality (i.e., better-displaying) three-dimensional model, it is important to obtain a better-quality tangent space normal map.
Disclosure of Invention
One or more embodiments of the present disclosure provide a method and an apparatus for generating a normal map, so as to obtain a better-quality tangent space normal map.
According to a first aspect, there is provided a normal map generation method, including:
acquiring a first image, wherein the first image is a surface image of a target model;
inputting the first image into a pre-trained feature extraction network to obtain a plurality of feature images output by a plurality of specified feature extraction layers in the feature extraction network;
determining a pseudo-height image using the plurality of feature images, wherein the pseudo-height image comprises the gray value of each pixel;
and generating a normal map of the target model based on the pseudo-height image.
According to a second aspect, there is provided a normal map generating apparatus comprising:
the acquisition module is configured to acquire a first image, wherein the first image is a surface image of a target model;
the input module is configured to input the first image into a pre-trained feature extraction network to obtain a plurality of feature images output by a plurality of specified feature extraction layers in the feature extraction network;
a determining module configured to determine a pseudo-height image using the plurality of feature images, wherein the pseudo-height image includes the gray value of each pixel;
and the generation module is configured to generate a normal map of the target model based on the pseudo-height image.
According to a third aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first aspect.
According to a fourth aspect, there is provided a computing device comprising a memory and a processor, wherein the memory has executable code stored therein, and wherein the processor, when executing the executable code, implements the method of the first aspect.
According to the normal map generation method and device provided by the embodiments of this specification, after the surface image of the target model (i.e., the first image) is obtained, the first image is input into the pre-trained feature extraction network to obtain a plurality of feature images output by a plurality of specified feature extraction layers in the network; a pseudo-height image containing the gray value of each pixel is then determined using the plurality of feature images. The gray value of each pixel in the pseudo-height image is determined based on the changes in information importance and/or information density of that pixel as represented by the plurality of feature images. The pseudo-height image therefore fuses the local and global information of the image, taking into account both the shape of the target model surface in the image and the details of that surface, so that the gradient transitions between pixels are more natural. Generating the normal map from such a pseudo-height image makes the normal map more accurate and of better quality: the undulation of the target model surface represented by the normal map changes more naturally, the model surface rendered with the normal map better matches human visual perception, no gaps appear at the edges of the model surface, and the rendered surface effect of the model is closer to the real appearance.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. It is evident that the drawings in the following description are only some embodiments of the present invention, and that a person of ordinary skill in the art may derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an implementation framework of one embodiment of the disclosure;
FIG. 2 is a schematic flow chart of a normal map generating method according to an embodiment;
FIG. 3A shows a perspective view of a garment (half-sleeve) model (three-dimensional model);
FIG. 3B shows a schematic representation of a three-dimensional unfolded view (i.e., a first image) of the surface of the garment model shown in FIG. 3A;
fig. 4 is a schematic block diagram of a normal map generating apparatus according to an embodiment.
Detailed Description
The technical solutions of the embodiments of the present specification will be described in detail below with reference to the accompanying drawings.
The embodiments of this specification disclose a normal map generation method and device. The application scenario and technical concept of the method are introduced first, as follows.
as described above, in the above process, during the rendering of the model, the normal map is applied to the model surface to enhance the details of the model surface, thereby making it possible to present high-precision surface details on the low-polygonal model to improve the realism and quality of the corresponding model. In order to obtain a better quality (i.e. better mapping effect) three-dimensional model, it is important how to obtain a better quality tangent space normal mapping.
In view of this, the inventor proposes a normal map generation method and apparatus. FIG. 1 shows a schematic implementation scenario according to an embodiment disclosed in this specification. In this implementation scenario, a pre-trained feature extraction network may be pre-stored in a preset storage space corresponding to the electronic device. The feature extraction network includes a plurality of feature extraction layers and can perform feature extraction on an image. Among these layers are a number of specified feature extraction layers, whose outputs (i.e., the output feature images) are designated for use in generating the normal map. As shown in FIG. 1, the feature extraction network includes M feature extraction layers, among which are specified feature extraction layer 1, specified feature extraction layer 2, ..., specified feature extraction layer N, where N is less than M. In yet another implementation, N may also be equal to M.
In the normal map generation process, the electronic device may acquire a surface image of a target model, referred to as a first image. The target model may be a two-dimensional model or a three-dimensional model; when the target model is a three-dimensional model, the first image is a three-dimensional unfolded view of the surface of the three-dimensional model, which may include the texture of that surface, where the texture can represent the color and brightness of the corresponding surface. The electronic device then inputs the first image into the pre-trained feature extraction network, which performs feature extraction on the first image through its M feature extraction layers, yielding the feature images output by the N specified feature extraction layers, i.e., one feature image per specified feature extraction layer. The electronic device then uses the acquired feature images to determine a pseudo-height image, which contains the gray value of each pixel, and generates a normal map of the target model based on the pseudo-height image.
In this process, the plurality of feature images are produced by feature extraction layers at different depths of the feature extraction network, so together they contain both local and global information of the image. Accordingly, in the pseudo-height image determined from these feature images, the gray value of each pixel is determined based on the changes in information importance and/or information density of that pixel as represented across the feature images. The pseudo-height image thus fuses the local and global information of the image well, taking into account both the shape of the target model surface in the image and the details of that surface, so that the gradient transitions between pixels are more natural. A normal map generated from this pseudo-height image is therefore more accurate and of better quality: the undulation of the target model surface represented by the normal map changes more naturally, the model surface rendered with the normal map better matches human visual perception, no gaps appear at the edges of the model surface, and the rendered surface effect of the model is closer to the real appearance.
The method and apparatus for generating a normal map provided in the present specification are described in detail below with reference to specific embodiments.
FIG. 2 illustrates a flow diagram of a normal map generation method in one embodiment of this specification. The method is performed by an electronic device, which may be implemented by any apparatus, device, platform, or device cluster with computing and processing capabilities. As shown in FIG. 2, the method includes the following steps S210 to S240:
in step S210, a first image is acquired.
The first image is a surface image of a target model. The target model may be a two-dimensional model or a three-dimensional model; when the target model is a three-dimensional model, the first image is a three-dimensional unfolded view of the surface of the three-dimensional model, which may include the texture of that surface, where the texture can represent the color and brightness of the corresponding surface. The first image is a color image, such as an RGB (Red-Green-Blue) image. The three-dimensional model may be any type of three-dimensional model, such as a vehicle model, a building model (e.g., a house model or a wall model), or a model of a prop in a game; the two-dimensional model may be, for example, a wall model. FIG. 3A shows a perspective view of a garment (half-sleeve) model, and FIG. 3B shows a three-dimensional unfolded view (i.e., a first image) of the surface of the garment model shown in FIG. 3A. The three-dimensional unfolded view of the garment model surface may be obtained by unfolding the surface with specific software based on a specific viewing angle, or by unfolding the surface manually based on that viewing angle.
After the electronic device acquires the first image, in step S220, the first image is input into a pre-trained feature extraction network, so as to obtain a plurality of feature images output by a plurality of designated feature extraction layers in the feature extraction network.
In one implementation, a pre-trained feature extraction network may be pre-stored in a preset storage space corresponding to the electronic device, where the feature extraction network includes a plurality of feature extraction layers that can perform feature extraction on an input image. The plurality of feature extraction layers includes a number of pre-specified feature extraction layers. After acquiring the first image, the electronic device inputs it into the pre-trained feature extraction network, which performs feature extraction on the first image through its feature extraction layers (the input of the first feature extraction layer is the first image; the input of every subsequent feature extraction layer is the output of the previous layer). This yields the feature images output by the specified feature extraction layers, where each feature image corresponds to one specified feature extraction layer.
It will be appreciated that the feature extraction network may be any type of deep learning model capable of performing feature extraction on images, for example, but not limited to, a VGG model (e.g., the VGG16 or VGG19 model).
The specified feature extraction layers in the feature extraction network may be set according to actual requirements. For example, to help obtain a better-quality normal map, the specified feature extraction layers may include both shallow layers (early in the network) and deep layers (late in the network). Specifically, in one embodiment, the plurality of specified feature extraction layers corresponding to the plurality of feature images may include feature extraction layers whose layer number in the feature extraction network is below a first layer-number threshold and feature extraction layers whose layer number is above a second layer-number threshold, where the second layer-number threshold may be greater than or equal to the first layer-number threshold.
In one case, where the feature extraction network is a VGG19 model, the specified feature extraction layers may include all of the convolution layers in the VGG19 model, namely layers 1, 3, 6, 8, 11, 13, 15, 17, 20, 22, 24, 26, 29, 31, 33, and 35 of the VGG19 model.
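As an illustration only, the following sketch collects the outputs of those convolution layers with torchvision's pretrained VGG19; the 0-based indices below refer to torchvision's `vgg19().features` sequence (the patent counts the same layers 1-based), and the choice of pretrained weights and preprocessing is an assumption, not the patent's specification:

```python
import torch
from torchvision.models import vgg19

# 0-based indices of the 16 convolution layers in torchvision's vgg19().features
# (these correspond to the patent's 1-based layers 1, 3, 6, ..., 35).
CONV_IDX = {0, 2, 5, 7, 10, 12, 14, 16, 19, 21, 23, 25, 28, 30, 32, 34}

def extract_feature_images(first_image: torch.Tensor) -> list[torch.Tensor]:
    """Collect each specified (conv) layer's output for a (1, 3, H, W) input."""
    layers = vgg19(weights="DEFAULT").features.eval()
    feature_images, x = [], first_image
    with torch.no_grad():
        for i, layer in enumerate(layers):
            x = layer(x)  # each layer's input is the previous layer's output
            if i in CONV_IDX:
                feature_images.append(x)  # shape (1, C_i, H_i, W_i)
    return feature_images
```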
After the electronic device obtains the plurality of feature images, in step S230, a pseudo-height image is determined using the plurality of feature images, where the pseudo-height image includes gray values of the pixels. In this step, the electronic device may process the plurality of feature images to obtain a pseudo-height image, where the pseudo-height image is a gray scale image.
In one embodiment, step S230 may include the following steps 11-13:
and 11, reducing the dimensions of the characteristic images to obtain gray level images corresponding to the appointed characteristic extraction layers. In this step, the electronic device may utilize a preset dimension reduction method to reduce dimensions of the output of each specified feature extraction layer, that is, the feature image corresponding to each specified feature extraction layer, for example, to 1 dimension, so as to obtain a gray scale map corresponding to each specified feature extraction layer. In one implementation, the preset dimension reduction method may include, but is not limited to, PCA (Principal Components Analysis, principal component analysis) dimension reduction method.
Next, in step 12, a preset interpolation algorithm is used to adjust the gray maps corresponding to the specified feature extraction layers to gray maps of a uniform size, which serve as the target gray maps corresponding to the specified feature extraction layers. Considering that the gray maps (feature images) corresponding to the specified feature extraction layers differ in size, their sizes need to be unified. Accordingly, the electronic device may interpolate the gray map corresponding to each specified feature extraction layer using the preset interpolation algorithm, adjusting each to a certain preset size (for example, 512x512) to obtain the target gray map corresponding to each specified feature extraction layer. In one case, the preset size may be the same as the size of the first image, to facilitate subsequently mapping the result onto the target model.
In one implementation, the preset interpolation algorithm may include, but is not limited to, bilinear interpolation.
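A one-line sketch of the resizing step, using OpenCV's bilinear mode and the 512x512 size mentioned above as an assumed target:

```python
import cv2
import numpy as np

def to_target_gray(gray: np.ndarray, size: tuple[int, int] = (512, 512)) -> np.ndarray:
    """Resize one layer's gray map to the unified target size."""
    return cv2.resize(gray.astype(np.float32), size, interpolation=cv2.INTER_LINEAR)
```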
Thereafter, in step 13, a pseudo-height image is determined based on the respective target gray scale maps. In this step, the electronic device may determine the pseudo-height image based on the respective target gray maps using a weighted average algorithm.
Specifically, in one embodiment, step 13 may include: determining the pseudo-height image based on each target gray map and the weight value corresponding to each target gray map, where the weight value corresponding to each target gray map is determined based on the layer number, within the feature extraction network, of the specified feature extraction layer corresponding to that target gray map's feature image.
In one implementation, the weight value corresponding to each target gray map (i.e., to each specified feature extraction layer) may be set according to actual requirements. In one case, if the normal map needs to pay more attention to global information, a larger weight value is set for specified feature extraction layers with larger layer numbers, i.e., the corresponding target gray maps receive larger weights. If the normal map needs to pay more attention to local information (i.e., details), a smaller weight value is set for specified feature extraction layers with larger layer numbers, i.e., the corresponding target gray maps receive smaller weights.
In one case, the electronic device may determine the sum of the products of each target gray map and its corresponding weight value as the pseudo-height image.
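Combining steps 11-13 with the helpers sketched above (the uniform weights here are a placeholder assumption; the patent leaves the weight values to actual requirements):

```python
import numpy as np

def pseudo_height_image(feature_images: list[np.ndarray],
                        weights: list[float] | None = None) -> np.ndarray:
    """Pseudo-height image as the weighted sum of the target gray maps."""
    # Each f is a (C, H, W) array, e.g. a detached torch feature map as NumPy.
    target_grays = [to_target_gray(feature_to_gray(f)) for f in feature_images]
    if weights is None:
        weights = [1.0 / len(target_grays)] * len(target_grays)  # placeholder
    return np.sum([w * g for w, g in zip(weights, target_grays)], axis=0)
```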
After the electronic device obtains the pseudo-height image, in step S240, a normal map of the target model is generated based on the pseudo-height image. In this step, the electronic device determines the normal vector (i.e., the normal) corresponding to each pixel based on the gray value of each pixel in the pseudo-height image, thereby obtaining the normal map of the target model. The normal map enables the surface of the target model to show a concave-convex effect, making the target model more realistic.
Specifically, in one embodiment, in step S240, the following steps 21-22 may be included:
in step 21, according to the gray scale value of each pixel in the pseudo-height image, the horizontal axis gradient value and the vertical axis gradient value corresponding to each pixel in the pseudo-height image are determined. In this step, the electronic device may determine, based on a preset gradient algorithm, a horizontal axis gradient value and a vertical axis gradient value corresponding to each pixel in the pseudo-height image according to a gray value of each pixel in the pseudo-height image. In one implementation, the preset gradient algorithm may include, but is not limited to, a center difference method.
Next, in step 22, a normal line corresponding to each pixel in the pseudo-height image is determined based on the horizontal axis gradient value and the vertical axis gradient value corresponding to each pixel in the pseudo-height image to obtain a normal line map.
In one implementation, the electronic device may use a preset normal determination formula to determine, based on the horizontal-axis and vertical-axis gradient values corresponding to each pixel in the pseudo-height image, the normal in tangent space corresponding to that pixel, thereby obtaining a tangent space normal map; this ensures that the normal corresponding to each pixel remains consistent when the target model rotates. In one case, the preset normal determination formula may be represented by the following formula (1):
normal(r,c)_1 = ((h[r,c+1] - h[r,c-1])/2, (h[r+1,c] - h[r-1,c])/2, -1) / normalize();  (1)
where normal(r,c)_1 represents the normal in tangent space corresponding to the pixel at position (r, c) in the pseudo-height image (hereinafter referred to as the current pixel); r denotes the row, which may correspond to the vertical axis, and c denotes the column, which may correspond to the horizontal axis; h[r,c+1] represents the height value of the pixel immediately to the right of the current pixel (i.e., the gray value of the pixel at position (r, c+1) in the pseudo-height image); h[r,c-1] represents the height value of the pixel immediately to the left of the current pixel (i.e., the gray value of the pixel at position (r, c-1)); h[r+1,c] represents the height value of the pixel immediately above the current pixel (i.e., the gray value of the pixel at position (r+1, c)); h[r-1,c] represents the height value of the pixel immediately below the current pixel (i.e., the gray value of the pixel at position (r-1, c)); and normalize() denotes the Euclidean length of the three-component vector, so that dividing by it yields a unit normal.
In one case, the preset normal determination formula may also be represented by the following formula (2):
normal(r,c)_2 = ((h[r,c+1] - h[r,c-1])/2, (h[r+1,c] - h[r-1,c])/2, 1) / normalize();  (2)
where normal(r,c)_2 represents the normal in tangent space corresponding to the pixel at position (r, c) in the pseudo-height image (hereinafter referred to as the current pixel), and the remaining symbols have the same meanings as in formula (1).
In yet another case, the preset normal determination formula may also be represented by the following formula (3):
normal(r,c)_3 = (h[r,c+1] - h[r,c], h[r+1,c] - h[r,c], -1) / normalize();  (3)
where normal(r,c)_3 represents the normal in tangent space corresponding to the pixel at position (r, c) in the pseudo-height image (hereinafter referred to as the current pixel); h[r,c+1] represents the height value of the pixel immediately to the right of the current pixel (i.e., the gray value of the pixel at position (r, c+1) in the pseudo-height image); h[r,c] represents the height value (i.e., the gray value) of the current pixel; and h[r+1,c] represents the height value of the pixel immediately above the current pixel (i.e., the gray value of the pixel at position (r+1, c)).
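A sketch that applies formula (1) per pixel and packs the unit normals into an RGB image; the [-1, 1] to [0, 255] encoding is the common normal map convention, assumed here rather than stated in the patent:

```python
import numpy as np

def normal_map_from_height(h: np.ndarray) -> np.ndarray:
    """Tangent-space normal map from a pseudo-height image, per formula (1)."""
    dy, dx = np.gradient(h.astype(np.float32))          # central differences
    n = np.stack([dx, dy, -np.ones_like(dx)], axis=-1)  # (dh/dc, dh/dr, -1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)      # the normalize() step
    return ((n + 1.0) * 0.5 * 255.0).astype(np.uint8)   # encode [-1, 1] -> [0, 255]
```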
In yet another implementation, the electronic device may first determine, based on the horizontal-axis and vertical-axis gradient values corresponding to each pixel in the pseudo-height image, the normal corresponding to each pixel in the model coordinate system of the target model, and then convert each such normal into tangent space based on a preset conversion formula, obtaining the normal corresponding to each pixel in tangent space and thus the tangent space normal map of the target model.
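The patent does not spell out its conversion formula; the conventional construction uses a TBN matrix whose rows are the surface tangent, bitangent, and normal, as in this hypothetical sketch:

```python
import numpy as np

def model_to_tangent_space(n_model: np.ndarray, tangent: np.ndarray,
                           bitangent: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """Express a model-space normal in the tangent space spanned by (T, B, N)."""
    tbn = np.stack([tangent, bitangent, normal])  # rows: the tangent-space basis
    n_tangent = tbn @ n_model                     # orthonormal basis: TBN^-1 = TBN^T
    return n_tangent / np.linalg.norm(n_tangent)
```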
After the electronic device obtains the normal map from the first image in the above manner, in one embodiment, it may further obtain the correspondence between the surface nodes included in the surface of the target model and the pixels of the normal map, and then map the target model based on the normal map and this correspondence, obtaining a mapped target model. This increases the detail of the target model surface and improves the realism and quality of the target model.
In this embodiment, the plurality of feature images are produced by feature extraction layers at different depths of the feature extraction network, so together they contain both local and global information of the image. Accordingly, in the pseudo-height image determined from these feature images, the gray value of each pixel is determined based on the changes in information importance and/or information density of that pixel as represented across the feature images, so the pseudo-height image fuses the local and global information of the image well, taking into account both the shape of the target model surface in the image and the details of that surface, and the gradient transitions between pixels are more natural. The normal map generated from this pseudo-height image is therefore more accurate and of better quality: the undulation of the target model surface represented by the normal map changes more naturally, the model surface rendered with the normal map better matches human visual perception, no gaps appear at the edges of the model surface, and the rendered surface effect of the model is closer to the real appearance.
The foregoing describes certain embodiments of the present disclosure, other embodiments being within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. Furthermore, the processes depicted in the accompanying figures are not necessarily required to achieve the desired result in the particular order shown, or in a sequential order. In some embodiments, multitasking and parallel processing are also possible, or may be advantageous.
Corresponding to the above method embodiments, the present embodiment provides a normal map generating apparatus 400, a schematic block diagram of which is shown in fig. 4, including:
an acquisition module 410 configured to acquire a first image, the first image being a surface image of a target model;
the input module 420 is configured to input the first image into a pre-trained feature extraction network to obtain a plurality of feature images output by a plurality of designated feature extraction layers in the feature extraction network;
a determining module 430 configured to determine a pseudo-height image using the plurality of feature images, wherein the pseudo-height image includes gray values for each pixel;
a generation module 440 is configured to generate a normal map of the target model based on the pseudo-altitude image.
In an alternative embodiment, the determining module 430 includes:
a dimension reduction unit (not shown in the figure) configured to reduce dimensions of the plurality of feature images to obtain gray level images corresponding to each specified feature extraction layer;
a size adjustment unit (not shown in the figure) configured to adjust the gray-scale image corresponding to each of the specified feature extraction layers to a uniform-size gray-scale image using a preset interpolation algorithm, as a target gray-scale image corresponding to each of the specified feature extraction layers;
a determining unit (not shown in the figure) configured to determine the pseudo-height image based on the respective target gradation maps.
In an optional implementation, the determining unit is specifically configured to determine the pseudo-height image based on each target gray map and the weight value corresponding to each target gray map, where the weight value corresponding to each target gray map is determined based on the layer number, within the feature extraction network, of the specified feature extraction layer corresponding to that target gray map's feature image.
In an optional implementation manner, the plurality of designated feature extraction layers corresponding to the plurality of feature images include a feature extraction layer with a layer number lower than a first layer number threshold value and a feature extraction layer with a layer number higher than a second layer number threshold value in the feature extraction network.
In an optional implementation manner, the generating module 440 is specifically configured to determine a horizontal axis gradient value and a vertical axis gradient value corresponding to each pixel in the pseudo-height image according to the gray scale value of each pixel in the pseudo-height image;
and determining the normal corresponding to each pixel in the pseudo-height image based on the horizontal axis gradient value and the vertical axis gradient value corresponding to each pixel in the pseudo-height image so as to obtain the normal map.
In an alternative embodiment, the apparatus further comprises:
a mapping module (not shown in the figure) is configured to map the target model based on the normal map and correspondence between a plurality of surface nodes included in the target model surface and pixels of the normal map, respectively.
The foregoing apparatus embodiments correspond to the method embodiments, and specific descriptions may be referred to descriptions of method embodiment portions, which are not repeated herein. The device embodiments are obtained based on corresponding method embodiments, and have the same technical effects as the corresponding method embodiments, and specific description can be found in the corresponding method embodiments.
The embodiments of the present specification also provide a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the normal map generation method provided in the present specification.
The embodiment of the specification also provides a computing device, which comprises a memory and a processor, wherein executable codes are stored in the memory, and the processor realizes the normal map generation method provided by the specification when executing the executable codes.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for storage media and computing device embodiments, since they are substantially similar to method embodiments, the description is relatively simple, with reference to the description of method embodiments in part.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
The foregoing detailed description of the embodiments of the present invention further details the objects, technical solutions and advantageous effects of the embodiments of the present invention. It should be understood that the foregoing description is only specific to the embodiments of the present invention and is not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements, etc. made on the basis of the technical solutions of the present invention should be included in the scope of the present invention.

Claims (13)

1. A method of generating a normal map, comprising:
acquiring a first image, wherein the first image is a surface image of a target model;
inputting the first image into a pre-trained feature extraction network to obtain a plurality of feature images output by a plurality of specified feature extraction layers in the feature extraction network;
determining a pseudo-height image using the plurality of feature images, wherein the pseudo-height image comprises the gray value of each pixel;
and generating a normal map of the target model based on the pseudo-height image.
2. The method of claim 1, wherein said determining a pseudo-height image using said plurality of feature images comprises:
performing dimension reduction on the plurality of feature images to obtain a gray map corresponding to each specified feature extraction layer;
adjusting, using a preset interpolation algorithm, the gray maps corresponding to the specified feature extraction layers to gray maps of a uniform size, which serve as the target gray maps corresponding to the specified feature extraction layers;
the pseudo-height image is determined based on the respective target gray scale maps.
3. The method of claim 2, wherein the determining the pseudo-height image based on the respective target gray scale map comprises:
and determining the pseudo-height image based on each target gray scale image and the weight value corresponding to each target gray scale image, wherein the weight value corresponding to each target gray scale image is determined based on the layer number of the designated feature extraction layer corresponding to the feature image in the feature extraction network.
4. The method of claim 1, wherein the plurality of specified feature extraction layers corresponding to the plurality of feature images comprises a feature extraction layer in the feature extraction network whose layer number is below a first layer-number threshold and a feature extraction layer whose layer number is above a second layer-number threshold.
5. The method of claim 1, wherein the generating a normal map of the target model based on the pseudo-height image comprises:
according to the gray scale value of each pixel in the pseudo-height image, determining a horizontal axis gradient value and a vertical axis gradient value corresponding to each pixel in the pseudo-height image;
and determining the normal corresponding to each pixel in the pseudo-height image based on the horizontal axis gradient value and the vertical axis gradient value corresponding to each pixel in the pseudo-height image so as to obtain the normal map.
6. The method of any one of claims 1-5, further comprising:
mapping the target model based on the normal map and the correspondence between a plurality of surface nodes included in the target model surface and pixels of the normal map.
7. A normal map generation apparatus comprising:
the acquisition module is configured to acquire a first image, wherein the first image is a surface image of a target model;
the input module is configured to input the first image into a pre-trained feature extraction network to obtain a plurality of feature images output by a plurality of specified feature extraction layers in the feature extraction network;
a determining module configured to determine a pseudo-height image using the plurality of feature images, wherein the pseudo-height image includes the gray value of each pixel;
and the generation module is configured to generate a normal map of the target model based on the pseudo-height image.
8. The apparatus of claim 7, wherein the means for determining comprises:
the dimension reduction unit is configured to perform dimension reduction on the plurality of feature images to obtain a gray map corresponding to each specified feature extraction layer;
the size adjusting unit is configured to adjust the gray level images corresponding to the specified feature extraction layers into uniform-size gray level images serving as target gray level images corresponding to the specified feature extraction layers by using a preset interpolation algorithm;
and a determining unit configured to determine the pseudo-height image based on the respective target gradation maps.
9. The apparatus of claim 8, wherein the determining unit is specifically configured to determine the pseudo-height image based on each target gray map and the weight value corresponding to each target gray map, where the weight value corresponding to each target gray map is determined based on the layer number, within the feature extraction network, of the specified feature extraction layer corresponding to that target gray map's feature image.
10. The apparatus of claim 7, wherein the plurality of specified feature extraction layers corresponding to the plurality of feature images comprises a feature extraction layer in the feature extraction network whose layer number is below a first layer-number threshold and a feature extraction layer whose layer number is above a second layer-number threshold.
11. The apparatus of claim 7, wherein the generating module is specifically configured to determine a horizontal axis gradient value and a vertical axis gradient value corresponding to each pixel in the pseudo-height image according to a gray value of each pixel in the pseudo-height image;
and determining the normal corresponding to each pixel in the pseudo-height image based on the horizontal axis gradient value and the vertical axis gradient value corresponding to each pixel in the pseudo-height image so as to obtain the normal map.
12. The apparatus of any of claims 7-11, further comprising:
and the mapping module is configured to map the target model based on the normal map and the correspondence between a plurality of surface nodes included in the target model surface and pixels of the normal map.
13. A computing device comprising a memory and a processor, wherein the memory has executable code stored therein, which when executed by the processor, implements the method of any of claims 1-6.
CN202310962889.0A 2023-08-01 2023-08-01 Normal map generation method and device Pending CN117078849A (en)

Priority Applications (1)

Application Number: CN202310962889.0A | Priority Date / Filing Date: 2023-08-01 | Title: Normal map generation method and device

Applications Claiming Priority (1)

Application Number: CN202310962889.0A | Priority Date / Filing Date: 2023-08-01 | Title: Normal map generation method and device

Publications (1)

Publication Number: CN117078849A | Publication Date: 2023-11-17

Family

ID=88707075

Family Applications (1)

Application Number: CN202310962889.0A | Publication: CN117078849A (en) | Title: Normal map generation method and device

Country Status (1)

Country Link
CN (1) CN117078849A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination