CN113012271B - Finger three-dimensional model texture mapping method based on UV mapping - Google Patents

Finger three-dimensional model texture mapping method based on UV mapping

Info

Publication number
CN113012271B
CN113012271B (application CN202110306178.9A)
Authority
CN
China
Prior art keywords
finger
camera
coordinates
dimensional
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110306178.9A
Other languages
Chinese (zh)
Other versions
CN113012271A
Inventor
王林丰
杨伟力
康文雄
邓飞其
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT
Priority to CN202110306178.9A
Publication of CN113012271A
Application granted
Publication of CN113012271B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/04: Texture mapping
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation

Abstract

The invention provides a finger three-dimensional model texture mapping method based on UV mapping, which comprises the following steps: acquiring the coordinate set and the triangular patch set of all vertices forming the finger contour from the finger three-dimensional mesh model; selecting the coordinate-system origins of any three cameras to form a spatial plane, and solving the spatial plane equation in which all the cameras lie; mapping the triangular patch textures of the model from three dimensions to two dimensions, and performing interpolation and gray-average processing; and writing the obtained two-dimensional texture information back to the texture-free finger model by means of UV mapping to obtain a finger three-dimensional mesh model with texture information. The finger three-dimensional mesh model reconstructed by the method essentially retains the complete finger texture, fits the actual human finger, loses little finger feature information, effectively mitigates the influence of finger pose and position on finger recognition, and contributes greatly to improving finger recognition accuracy.

Description

Finger three-dimensional model texture mapping method based on UV mapping
Technical Field
The invention relates to the technical field of biometric recognition, in particular to a finger three-dimensional model texture mapping method based on UV mapping.
Background
Three-dimensional reconstruction refers to the technique of building, for a three-dimensional object, a mathematical model suitable for computer representation and processing, so as to acquire the three-dimensional structure and information of the object; it has been applied with great success in many fields. Meanwhile, with the development of technology, people are increasingly conscious of protecting their property and information, and more and more occasions require identity authentication.
Biometric identification has been widely used as an identification technique, and finger-based authentication has its own unique advantages. Conventional finger recognition methods today generally capture a two-dimensional image (such as a fingerprint) with a monocular camera and then process the two-dimensional image through a series of methods to obtain a recognition result. However, this technical solution has certain disadvantages: first, the finger image information acquired by a single camera is limited; second, traditional fingerprint identification is strongly affected by the pose and position of the finger and is easy to counterfeit.
Disclosure of Invention
In order to overcome the above defects in the prior art, the invention aims to provide a finger three-dimensional model texture mapping method based on UV mapping. The finger three-dimensional mesh model reconstructed by the method essentially retains the complete finger texture, fits the actual human finger, loses little finger feature information, effectively mitigates the influence of finger pose and position on finger recognition, and contributes greatly to improving finger recognition accuracy.
In order to achieve this purpose, the invention is realized by the following technical scheme: a finger three-dimensional model texture mapping method based on UV mapping, comprising the following steps:
Step S1, obtaining the coordinate set $\{V_i=(x_i,y_i,z_i)\mid i\in[1,n]\}$ of all the vertices forming the finger contour from the finger three-dimensional mesh model, where $(x_i,y_i,z_i)$ are the coordinates of vertex $V_i$ in the three-dimensional world coordinate system and $n$ is the number of vertices; solving the center point $O(x,y,z)$ of all the vertices;
Step S2, obtaining all triangular patches from the finger three-dimensional mesh model and the three vertices $V_i$ corresponding to each triangular patch, thereby obtaining the set $f$ of all triangular patches forming the finger contour;
Step S3, using the conversion relation between the camera coordinate system and the world coordinate system to find the coordinates of each camera $A_j$ in the world coordinate system, where $j=1,2,\dots,m$, $m$ is the number of cameras, and $m>3$;
Step S4, selecting the coordinate-system origins of any three cameras to construct a spatial plane $\alpha$, and solving the spatial plane equation in which all the cameras lie;
Step S5, from the coordinate set $\{V_i=(x_i,y_i,z_i)\mid i\in[1,n]\}$ of all the vertices obtained in step S1, determining the shooting camera of each vertex $V_i$ respectively;
Step S6, according to the triangular patch set $f$ obtained in step S2 and the shooting camera of each vertex $V_i$ obtained in step S5, determining the shooting camera of each triangular patch respectively;
Step S7, according to the triangular patches and shooting-camera data obtained in step S6, solving the pixel coordinates of the three vertices of each triangular patch in the picture shot by the corresponding camera, thereby mapping the vertex three-dimensional world coordinates $(X_w,Y_w,Z_w)$ to two-dimensional pixel coordinates $(u,v)$ in the image coordinate system and obtaining the triangular area corresponding to each triangular patch on the picture shot by its camera, so as to obtain the UV Map picture and the correspondence between the vertex coordinates of the triangular patches and the UV coordinates on the UV Map picture;
Step S8, according to the UV Map picture obtained in step S7, selecting a reference gray value and unifying the gray averages of the UV Map picture, so that the texture information from different cameras in the UV Map picture has the same average gray value;
Step S9, resizing the UV Map picture obtained in step S8 into a target UV Map image of pixel size $w'\times h'$ using an interpolation algorithm;
Step S10, writing the UV coordinates into the texture-free finger three-dimensional mesh model according to the correspondence, obtained in step S7, between the vertex coordinates of the triangular patches and the UV coordinates on the UV Map picture; and obtaining the texture values of the triangular-patch vertices on the UV Map picture from the target UV Map image obtained in step S9, thereby obtaining a finger three-dimensional mesh model with texture information and completing the texture mapping.
Preferably, in step S1, the solving method for the coordinates of the center point O (x, y, z) is as follows:
$$x=\frac{1}{n}\sum_{i=1}^{n}x_i,\qquad y=\frac{1}{n}\sum_{i=1}^{n}y_i,\qquad z=\frac{1}{n}\sum_{i=1}^{n}z_i$$
Preferably, in step S3, the conversion relation between the camera coordinate system and the world coordinate system is:
$$\begin{bmatrix}X_c\\Y_c\\Z_c\end{bmatrix}=R\begin{bmatrix}X_w\\Y_w\\Z_w\end{bmatrix}+t$$
where $(X_c,Y_c,Z_c)$ are the coordinates of an arbitrarily selected point in the camera coordinate system, $R$ is a 3x3 orthogonal rotation matrix, $t$ is a three-dimensional translation vector, and $(X_w,Y_w,Z_w)$ are the coordinates of the same point in the world coordinate system. Taking each camera as the origin of its own camera coordinate system, the coordinates of the camera in the world coordinate system can be obtained.
Preferably, in step S4, the solution method of the spatial plane equation is: assuming that the spatial plane equation is ax + by + cz + d is 0, and the world coordinates of the origin points of the three camera coordinate systems are (x1, y1, z1), (x2, y2, z2), (x3, y3, and z3), respectively, the solution formula of the plane equation parameter [ a, b, c, d ] is:
Figure BDA0002987797420000032
Preferably, in step S5, determining the shooting camera of each vertex comprises the following steps:
Step S51, projecting the vertex $V_i$ onto the spatial plane $\alpha$ to obtain $V_i'$; projecting the center point $O(x,y,z)$ obtained in step S1 onto the spatial plane $\alpha$ to obtain $O'$; and projecting each camera $A_j$ onto the spatial plane $\alpha$ to obtain $A_j'$;
Step S52, calculating in sequence the included angle between the vector $\overrightarrow{O'V_i'}$ and each vector $\overrightarrow{O'A_j'}$;
Step S53, taking the camera corresponding to the vector $\overrightarrow{O'A_j'}$ with the smallest included angle obtained in step S52 as the shooting camera of vertex $V_i$.
Preferably, in step S6, determining the shooting camera of each triangular patch means: if two or more of the three vertices of the triangular patch come from the same camera, that camera is taken as the shooting camera of the triangular patch.
Preferably, the step S7 includes the following steps:
Step S71, let the three vertices of a triangular patch be $J$, $K$, $L$, and let the shooting camera corresponding to the triangular patch be $A_c$, $c\in[1,m]$;
Step S72, substituting the three-dimensional world coordinates of $J$, $K$, $L$ and the intrinsic and extrinsic parameters of camera $A_c$ into the mapping formula to obtain the coordinate points $J'$, $K'$, $L'$ of $J$, $K$, $L$ on the picture shot by camera $A_c$;
the mapping formula is:
$$Z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=M_1M_2\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix}$$
where $[u,v]^T$ are two-dimensional pixel coordinates with homogeneous coordinates $[u,v,1]^T$; $[X_w,Y_w,Z_w]^T$ are three-dimensional world point coordinates with homogeneous coordinates $[X_w,Y_w,Z_w,1]^T$; $Z_c$ is the scale factor from the world coordinate system to the image coordinate system; $M_1$ is the camera intrinsic matrix, which depends only on the internal structure of the camera; $M_2$ is the camera extrinsic matrix, determined by the orientation of the camera relative to the world coordinate system;
Step S73, cutting out the triangular area formed by the coordinate points $J'$, $K'$, $L'$ and saving it onto a blank png-format picture to form the UV Map picture;
Step S74, recording the pixel coordinates of the three vertices of the triangular area on the UV Map picture, normalizing them to obtain the UV coordinates, and recording the correspondence between the vertex coordinates of the triangular patch and the UV coordinates on the UV Map picture in dictionary form.
Preferably, the step S8 includes the following steps:
Step S81, calculating respectively the average gray values of the texture information corresponding to cameras $A_1,A_2,A_3,\dots,A_m$, denoted $a_1,a_2,a_3,\dots,a_m$:
$$a_j=\frac{1}{N_j}\sum_{k=1}^{N_j}I_j(k),\qquad j=1,2,\dots,m$$
where $I_j(k)$ is the gray value of the $k$-th pixel of the texture information from camera $A_j$ and $N_j$ is the number of such pixels;
Step S82, selecting the median of $a_1,a_2,a_3,\dots,a_m$ as the reference gray value, denoted $target$;
Step S83, averaging the texture gray values of all cameras to $target$: first calculating the gray coefficient of each camera,
$$coef_j=\frac{target}{a_j},$$
then multiplying the gray value of every pixel in the UV Map picture by the gray coefficient $coef_j$ of its corresponding shooting camera $A_j$.
Preferably, step S9 is as follows:
setting the original size of the UV Map picture as $w\times h$ and its pixel matrix as $I$, the position $(u,v)$ in the original picture corresponding to a pixel point $(x,y)$ of the target UV Map image ($w'\times h'$) is
$$u=x\cdot\frac{w}{w'},\qquad v=y\cdot\frac{h}{h'}.$$
Solving the pixel value $p$ at $(u,v)$ in the original picture: assuming the four-neighborhood pixel coordinates of $(u,v)$ are $(x_1,y_1)$, $(x_2,y_1)$, $(x_1,y_2)$, $(x_2,y_2)$, with corresponding pixel values $p_1$, $p_2$, $p_3$, $p_4$ respectively, then
$$p=p_1(x_2-u)(y_2-v)+p_2(u-x_1)(y_2-v)+p_3(x_2-u)(v-y_1)+p_4(u-x_1)(v-y_1),$$
and the obtained pixel value $p$ is assigned to the pixel point $(x,y)$ of the target UV Map image.
Compared with the prior art, the invention has the following advantages and beneficial effects: the method uses a multi-camera system to simultaneously capture two-dimensional texture information from different directions of the finger surface, obtains the two-dimensional texture information corresponding to all three-dimensional regions of the texture-free three-dimensional model through data acquisition, algorithmic processing, interpolation, normalization and other operations, and then writes the two-dimensional texture information back to the three-dimensional model by means of UV mapping to obtain a finger three-dimensional model with essentially complete texture. The method runs fast; the reconstructed finger three-dimensional mesh model essentially retains the complete finger texture, fits the actual human finger, loses little finger feature information, effectively mitigates the influence of finger pose and position on finger recognition, and contributes greatly to improving finger recognition accuracy.
Drawings
FIG. 1 is a flow chart of a UV mapping-based finger three-dimensional model texture mapping method of the present invention;
FIG. 2 is a schematic diagram of a finger three-dimensional mesh model without texture in the finger three-dimensional model texture mapping method based on UV mapping according to the present invention;
FIG. 3 is the camera plane of the six cameras in the world coordinate system in the finger three-dimensional model texture mapping method based on UV mapping of the invention;
FIG. 4 is a schematic diagram of a finger three-dimensional mesh model with texture in a front view angle in the finger three-dimensional model texture mapping method based on UV mapping of the present invention;
FIG. 5 is a schematic diagram of a finger three-dimensional mesh model with texture in a side view according to the finger three-dimensional model texture mapping method based on UV mapping.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Examples
The flow of the method for mapping the texture of the finger three-dimensional model based on the UV map is shown in fig. 1, and the method comprises the following steps:
Step S1, obtaining the coordinate set $\{V_i=(x_i,y_i,z_i)\mid i\in[1,n]\}$ of all the vertices forming the finger contour from the finger three-dimensional mesh model, where $(x_i,y_i,z_i)$ are the coordinates of vertex $V_i$ in the three-dimensional world coordinate system and $n$ is the number of vertices, and solving the center point $O(x,y,z)$ of all the vertices; at this stage the finger three-dimensional mesh model may be one without fingerprint texture, as shown in fig. 2.
The coordinates of the center point $O(x,y,z)$ are solved as:
$$x=\frac{1}{n}\sum_{i=1}^{n}x_i,\qquad y=\frac{1}{n}\sum_{i=1}^{n}y_i,\qquad z=\frac{1}{n}\sum_{i=1}^{n}z_i$$
Step S2, obtaining all triangular patches from the finger three-dimensional mesh model and the three vertices $V_i$ corresponding to each triangular patch, to obtain the set $f$ of all triangular patches forming the finger contour. Every triangular patch is composed of three vertices $V_i$, and the vertices of all triangular patches can be read from the obj file that stores the finger three-dimensional mesh model, as in the sketch below.
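As an illustrative sketch only (not code from the patent), steps S1 and S2 can be realized in Python with numpy; the file name finger_mesh.obj is hypothetical:

```python
import numpy as np

def load_obj(path):
    """Parse vertex coordinates and triangular faces from a Wavefront .obj file."""
    vertices, faces = [], []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":                      # vertex record: v x y z
                vertices.append([float(c) for c in parts[1:4]])
            elif parts[0] == "f":                    # face record: f i j k (1-based)
                faces.append([int(p.split("/")[0]) - 1 for p in parts[1:4]])
    return np.array(vertices), np.array(faces)

V, F = load_obj("finger_mesh.obj")   # hypothetical file name
O = V.mean(axis=0)                   # center point O(x, y, z) of all vertices (step S1)
```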
Step S3, determining the coordinates of each camera $A_j$ in the world coordinate system using the conversion relation between the camera coordinate system and the world coordinate system, where $j=1,2,\dots,m$, $m$ is the number of cameras, and $m>3$. In this embodiment the number of cameras is 6, and the camera plane of the cameras in the world coordinate system is shown in fig. 3.
The conversion relation between the camera coordinate system and the world coordinate system is as follows:
$$\begin{bmatrix}X_c\\Y_c\\Z_c\end{bmatrix}=R\begin{bmatrix}X_w\\Y_w\\Z_w\end{bmatrix}+t$$
where $(X_c,Y_c,Z_c)$ are the coordinates of an arbitrarily selected point in the camera coordinate system, $R$ is a 3x3 orthogonal rotation matrix, $t$ is a three-dimensional translation vector, and $(X_w,Y_w,Z_w)$ are the coordinates of the same point in the world coordinate system. Taking each camera as the origin of its own camera coordinate system, the coordinates of the camera in the world coordinate system can be obtained.
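The world coordinates of each camera follow by putting $(X_c,Y_c,Z_c)=(0,0,0)$ into this relation and solving for the world point, giving $C=-R^{T}t$; a minimal sketch, assuming the calibrated extrinsics $(R_j,t_j)$ of each camera are available:

```python
import numpy as np

def camera_center_world(R, t):
    """World coordinates of the camera origin: solve 0 = R @ C + t for C."""
    return -R.T @ t

# world coordinates of every camera A_j, given its extrinsics (R_j, t_j):
# A = np.array([camera_center_world(R_j, t_j) for R_j, t_j in extrinsics])
```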
Step S4, selecting the coordinate-system origins of any three cameras to construct a spatial plane $\alpha$, and solving the spatial plane equation in which all the cameras lie.
The spatial plane equation is solved as follows: assume the spatial plane equation is $ax+by+cz+d=0$ and the world coordinates of the origins of the three camera coordinate systems are $(x_1,y_1,z_1)$, $(x_2,y_2,z_2)$, $(x_3,y_3,z_3)$; then the plane-equation parameters $[a,b,c,d]$ are given by
$$[a,b,c]=(x_2-x_1,\,y_2-y_1,\,z_2-z_1)\times(x_3-x_1,\,y_3-y_1,\,z_3-z_1),\qquad d=-(ax_1+by_1+cz_1)$$
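A minimal sketch of this step; the cross-product form below is an equivalent way to obtain $[a,b,c,d]$ from the three camera origins:

```python
import numpy as np

def plane_through(p1, p2, p3):
    """Parameters [a, b, c, d] of the plane ax + by + cz + d = 0 through three points."""
    normal = np.cross(p2 - p1, p3 - p1)   # (a, b, c) is normal to the plane
    d = -normal @ p1                      # substitute p1 to fix the offset d
    return np.append(normal, d)
```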
Step S5, from the coordinate set $\{V_i=(x_i,y_i,z_i)\mid i\in[1,n]\}$ of all the vertices obtained in step S1, determining the shooting camera of each vertex $V_i$ respectively.
Determining the shooting camera of each vertex comprises the following steps:
Step S51, projecting the vertex $V_i$ onto the spatial plane $\alpha$ to obtain $V_i'$; projecting the center point $O(x,y,z)$ obtained in step S1 onto the spatial plane $\alpha$ to obtain $O'$; and projecting each camera $A_j$ onto the spatial plane $\alpha$ to obtain $A_j'$;
Step S52, calculating in sequence the included angle between the vector $\overrightarrow{O'V_i'}$ and each vector $\overrightarrow{O'A_j'}$;
Step S53, taking the camera corresponding to the vector $\overrightarrow{O'A_j'}$ with the smallest included angle obtained in step S52 as the shooting camera of vertex $V_i$.
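A sketch of steps S51 to S53, assuming `V` (n x 3 vertex array), `O` (center point), `A` (m x 3 camera origins) and the plane parameters `abcd` from step S4 (all names are illustrative); maximizing the cosine is equivalent to minimizing the included angle:

```python
import numpy as np

def project_to_plane(p, abcd):
    """Orthogonal projection of point(s) p onto the plane ax + by + cz + d = 0."""
    n, d = abcd[:3], abcd[3]
    dist = (p @ n + d) / (n @ n)                      # offset along n, in units of |n|^2
    return p - np.outer(np.atleast_1d(dist), n).reshape(p.shape)

def assign_cameras(V, O, A, abcd):
    """For each vertex V_i, pick the camera A_j whose projected direction O'A_j'
    makes the smallest angle with the projected vertex direction O'V_i'."""
    Vp = project_to_plane(V, abcd)
    Op = project_to_plane(O, abcd)
    Ap = project_to_plane(A, abcd)
    v_dir = Vp - Op                                   # vectors O'V_i'
    a_dir = Ap - Op                                   # vectors O'A_j'
    v_dir /= np.linalg.norm(v_dir, axis=1, keepdims=True)
    a_dir /= np.linalg.norm(a_dir, axis=1, keepdims=True)
    cosines = v_dir @ a_dir.T                         # cosine of each included angle, (n, m)
    return cosines.argmax(axis=1)                     # max cosine = min angle
```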
S6, obtaining a triangular patch set f according to the step S2 and each vertex V obtained in the step S5iThe camera of (2) determines the camera of each triangular patch, respectively.
The shooting camera for judging each triangular patch is as follows: if two or more of the three vertices of the triangular patch are from the same camera, the camera is determined to be the camera of the triangular patch.
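The per-patch decision then reduces to a majority vote over the three vertex labels; a sketch under the same assumed names (the patent only defines the two-or-more case, so the all-distinct tie falls back to the first vertex here):

```python
import numpy as np

def patch_cameras(F, vertex_cam):
    """Shooting camera of each triangular patch: the label shared by at least
    two of its three vertices."""
    labels = vertex_cam[F]                   # (num_faces, 3) camera label per vertex
    out = np.empty(len(F), dtype=int)
    for i, (a, b, c) in enumerate(labels):
        out[i] = a if a in (b, c) else (b if b == c else a)   # tie: fall back to a
    return out
```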
Step S7, according to the triangular patches and shooting-camera data obtained in step S6, solving the pixel coordinates of the three vertices of each triangular patch in the picture shot by the corresponding camera, thereby mapping the vertex three-dimensional world coordinates $(X_w,Y_w,Z_w)$ to two-dimensional pixel coordinates $(u,v)$ in the image coordinate system and obtaining the triangular area corresponding to each triangular patch on the picture shot by its camera, so as to obtain the UV Map picture and the correspondence between the vertex coordinates of the triangular patches and the UV coordinates on the UV Map picture.
The step S7 includes the following steps:
Step S71, let the three vertices of a triangular patch be $J$, $K$, $L$, and let the shooting camera corresponding to the triangular patch be $A_c$, $c\in[1,m]$;
Step S72, substituting the three-dimensional world coordinates of $J$, $K$, $L$ and the intrinsic and extrinsic parameters of camera $A_c$ into the mapping formula to obtain the coordinate points $J'$, $K'$, $L'$ of $J$, $K$, $L$ on the picture shot by camera $A_c$;
the mapping formula is:
$$Z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=M_1M_2\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix}$$
where $[u,v]^T$ are two-dimensional pixel coordinates with homogeneous coordinates $[u,v,1]^T$; $[X_w,Y_w,Z_w]^T$ are three-dimensional world point coordinates with homogeneous coordinates $[X_w,Y_w,Z_w,1]^T$; $Z_c$ is the scale factor from the world coordinate system to the image coordinate system; $M_1$ is the camera intrinsic matrix, which depends only on the internal structure of the camera; $M_2$ is the camera extrinsic matrix, determined by the orientation of the camera relative to the world coordinate system;
Step S73, cutting out the triangular area formed by the coordinate points $J'$, $K'$, $L'$ and saving it onto a blank png-format picture to form the UV Map picture;
Step S74, recording the pixel coordinates of the three vertices of the triangular area on the UV Map picture, normalizing them to obtain the UV coordinates, and recording the correspondence between the vertex coordinates of the triangular patch and the UV coordinates on the UV Map picture in dictionary form.
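A sketch of the projection in step S72, taking $M_1$ as a 3x3 intrinsic matrix and $M_2=[R\mid t]$ as a 3x4 extrinsic matrix (these shapes are assumed, the patent does not fix them):

```python
import numpy as np

def world_to_pixel(Xw, M1, M2):
    """Zc * [u, v, 1]^T = M1 @ M2 @ [Xw, Yw, Zw, 1]^T; returns the pixel (u, v)."""
    zc_uv1 = M1 @ M2 @ np.append(Xw, 1.0)   # 3-vector (Zc*u, Zc*v, Zc)
    return zc_uv1[:2] / zc_uv1[2]           # divide out the scale factor Zc

# pixel coordinates of the three patch vertices on camera A_c's picture:
# J_, K_, L_ = (world_to_pixel(P, M1, M2) for P in (J, K, L))
```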
Step S8, according to the UV Map picture obtained in step S7, selecting a reference gray value and unifying the gray averages of the UV Map picture, so that the texture information from different cameras in the UV Map picture has the same average gray value.
The step S8 includes the following steps:
Step S81, calculating respectively the average gray values of the texture information corresponding to cameras $A_1,A_2,A_3,\dots,A_m$, denoted $a_1,a_2,a_3,\dots,a_m$:
$$a_j=\frac{1}{N_j}\sum_{k=1}^{N_j}I_j(k),\qquad j=1,2,\dots,m$$
where $I_j(k)$ is the gray value of the $k$-th pixel of the texture information from camera $A_j$ and $N_j$ is the number of such pixels;
Step S82, selecting the median of $a_1,a_2,a_3,\dots,a_m$ as the reference gray value, denoted $target$;
Step S83, averaging the texture gray values of all cameras to $target$: first calculating the gray coefficient of each camera,
$$coef_j=\frac{target}{a_j},$$
then multiplying the gray value of every pixel in the UV Map picture by the gray coefficient $coef_j$ of its corresponding shooting camera $A_j$.
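A sketch of steps S81 to S83, assuming a single-channel UV Map array `uv_map` and an integer array `labels` of the same shape recording which camera contributed each pixel (both names are illustrative):

```python
import numpy as np

def normalize_gray(uv_map, labels, m):
    """Scale each camera's texture so that all cameras share the same mean gray value."""
    means = np.array([uv_map[labels == j].mean() for j in range(m)])   # a_1 .. a_m
    target = np.median(means)                                          # reference gray value
    out = uv_map.astype(np.float64)
    for j in range(m):
        out[labels == j] *= target / means[j]                          # coef_j = target / a_j
    return np.clip(out, 0, 255).astype(np.uint8)
```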
Step S9, resizing the UV Map picture obtained in step S8 into a target UV Map image of pixel size $w'\times h'$ using an interpolation algorithm.
Step S9 is as follows:
setting the original size of the UV Map picture as $w\times h$ and its pixel matrix as $I$, the position $(u,v)$ in the original picture corresponding to a pixel point $(x,y)$ of the target UV Map image ($w'\times h'$) is
$$u=x\cdot\frac{w}{w'},\qquad v=y\cdot\frac{h}{h'}.$$
Solving the pixel value $p$ at $(u,v)$ in the original picture: assuming the four-neighborhood pixel coordinates of $(u,v)$ are $(x_1,y_1)$, $(x_2,y_1)$, $(x_1,y_2)$, $(x_2,y_2)$, with corresponding pixel values $p_1$, $p_2$, $p_3$, $p_4$ respectively, then
$$p=p_1(x_2-u)(y_2-v)+p_2(u-x_1)(y_2-v)+p_3(x_2-u)(v-y_1)+p_4(u-x_1)(v-y_1),$$
and the obtained pixel value $p$ is assigned to the pixel point $(x,y)$ of the target UV Map image.
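A vectorized sketch of this interpolation, equivalent to the formulas above (neighbor coordinates are clamped at the image border):

```python
import numpy as np

def bilinear_resize(I, w2, h2):
    """Resize a gray image I of shape (h, w) to (h2, w2) by bilinear interpolation."""
    h, w = I.shape
    y, x = np.mgrid[0:h2, 0:w2]
    u = x * (w / w2)                                  # position in the original image
    v = y * (h / h2)
    x1, y1 = np.floor(u).astype(int), np.floor(v).astype(int)
    x2 = np.clip(x1 + 1, 0, w - 1)                    # clamp the 4-neighborhood
    y2 = np.clip(y1 + 1, 0, h - 1)
    du, dv = u - x1, v - y1
    p = (I[y1, x1] * (1 - du) * (1 - dv) + I[y1, x2] * du * (1 - dv) +
         I[y2, x1] * (1 - du) * dv + I[y2, x2] * du * dv)
    return p.astype(I.dtype)
```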
S10, writing the UV coordinates into a finger three-dimensional mesh model without textures according to the corresponding relation between the vertex coordinates of the triangular patch obtained in the S7 and the UV coordinates on the UV Map picture; and according to the target UV Map image obtained in the step S9, obtaining texture values of the vertexes of the triangular patch on the UV Map image, thereby obtaining a finger three-dimensional mesh model with texture information, and completing texture mapping as shown in FIGS. 4 and 5.
The method uses a multi-camera system to simultaneously capture two-dimensional texture information from different directions of the finger surface, obtains the two-dimensional texture information corresponding to all three-dimensional regions of the texture-free three-dimensional model through data acquisition, algorithmic processing, interpolation, normalization and other operations, and then writes the two-dimensional texture information back to the three-dimensional model by means of UV mapping to obtain a finger three-dimensional model with essentially complete texture. The method runs fast; the reconstructed finger three-dimensional mesh model essentially retains the complete finger texture, fits the actual human finger, loses little finger feature information, effectively mitigates the influence of finger pose and position on finger recognition, and contributes greatly to improving finger recognition accuracy.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (7)

1. A finger three-dimensional model texture mapping method based on UV mapping is characterized in that: the method comprises the following steps:
step S1, obtaining the coordinate set $\{V_i=(x_i,y_i,z_i)\mid i\in[1,n]\}$ of all the vertices forming the finger contour from the finger three-dimensional mesh model, where $(x_i,y_i,z_i)$ are the coordinates of vertex $V_i$ in the three-dimensional world coordinate system and $n$ is the number of vertices, and solving the center point $O(x,y,z)$ of all the vertices;
step S2, obtaining all triangular patches from the finger three-dimensional mesh model and the three vertices $V_i$ corresponding to each triangular patch, to obtain the set $f$ of all triangular patches forming the finger contour;
step S3, using the conversion relation between the camera coordinate system and the world coordinate system to find the coordinates of each camera $A_j$ in the world coordinate system, where $j=1,2,\dots,m$, $m$ is the number of cameras, and $m>3$;
step S4, selecting the coordinate-system origins of any three cameras to construct a spatial plane $\alpha$, and solving the spatial plane equation in which all the cameras lie;
step S5, from the coordinate set $\{V_i=(x_i,y_i,z_i)\mid i\in[1,n]\}$ of all the vertices obtained in step S1, determining the shooting camera of each vertex $V_i$ respectively;
step S6, according to the triangular patch set $f$ obtained in step S2 and the shooting camera of each vertex $V_i$ obtained in step S5, determining the shooting camera of each triangular patch respectively;
step S7, according to the triangular patches and shooting-camera data obtained in step S6, solving the pixel coordinates of the three vertices of each triangular patch in the picture shot by the corresponding camera, thereby mapping the vertex three-dimensional world coordinates $(X_w,Y_w,Z_w)$ to two-dimensional pixel coordinates $(u,v)$ in the image coordinate system and obtaining the triangular area corresponding to each triangular patch on the picture shot by its camera, so as to obtain the UV Map picture and the correspondence between the vertex coordinates of the triangular patches and the UV coordinates on the UV Map picture;
step S8, according to the UV Map picture obtained in step S7, selecting a reference gray value and unifying the gray averages of the UV Map picture, so that the texture information from different cameras in the UV Map picture has the same average gray value;
step S9, resizing the UV Map picture obtained in step S8 into a target UV Map image of pixel size $w'\times h'$ using an interpolation algorithm;
step S10, writing the UV coordinates into the texture-free finger three-dimensional mesh model according to the correspondence, obtained in step S7, between the vertex coordinates of the triangular patches and the UV coordinates on the UV Map picture, and obtaining the texture values of the triangular-patch vertices on the UV Map picture from the target UV Map image obtained in step S9, thereby obtaining a finger three-dimensional mesh model with texture information and completing the texture mapping;
in step S5, determining the shooting camera of each vertex comprises the following steps:
step S51, projecting the vertex $V_i$ onto the spatial plane $\alpha$ to obtain $V_i'$; projecting the center point $O(x,y,z)$ obtained in step S1 onto the spatial plane $\alpha$ to obtain $O'$; and projecting each camera $A_j$ onto the spatial plane $\alpha$ to obtain $A_j'$;
step S52, calculating in sequence the included angle between the vector $\overrightarrow{O'V_i'}$ and each vector $\overrightarrow{O'A_j'}$;
step S53, taking the camera corresponding to the vector $\overrightarrow{O'A_j'}$ with the smallest included angle obtained in step S52 as the shooting camera of vertex $V_i$;
in step S6, determining the shooting camera of each triangular patch means: if two or more of the three vertices of the triangular patch come from the same camera, that camera is taken as the shooting camera of the triangular patch.
2. The UV map-based finger three-dimensional model texture mapping method according to claim 1, characterized in that: in the step S1, the solving method of the coordinate of the central point O (x, y, z) is as follows:
$$x=\frac{1}{n}\sum_{i=1}^{n}x_i,\qquad y=\frac{1}{n}\sum_{i=1}^{n}y_i,\qquad z=\frac{1}{n}\sum_{i=1}^{n}z_i$$
3. the UV map-based finger three-dimensional model texture mapping method according to claim 1, characterized in that: in the step S3, the conversion relationship between the camera coordinate system and the world coordinate system is:
$$\begin{bmatrix}X_c\\Y_c\\Z_c\end{bmatrix}=R\begin{bmatrix}X_w\\Y_w\\Z_w\end{bmatrix}+t$$
where $(X_c,Y_c,Z_c)$ are the coordinates of an arbitrarily selected point in the camera coordinate system, $R$ is a 3x3 orthogonal rotation matrix, $t$ is a three-dimensional translation vector, and $(X_w,Y_w,Z_w)$ are the coordinates of the same point in the world coordinate system.
4. The UV map-based finger three-dimensional model texture mapping method according to claim 1, characterized in that: in step S4, the spatial plane equation is solved as follows: assume the spatial plane equation is $ax+by+cz+d=0$ and the world coordinates of the origins of the three camera coordinate systems are $(x_1,y_1,z_1)$, $(x_2,y_2,z_2)$, $(x_3,y_3,z_3)$; then the plane-equation parameters $[a,b,c,d]$ are given by
$$[a,b,c]=(x_2-x_1,\,y_2-y_1,\,z_2-z_1)\times(x_3-x_1,\,y_3-y_1,\,z_3-z_1),\qquad d=-(ax_1+by_1+cz_1)$$
5. the UV map-based finger three-dimensional model texture mapping method according to claim 1, characterized in that: the step S7 includes the following steps:
step S71, let the three vertices of a triangular patch be $J$, $K$, $L$, and let the shooting camera corresponding to the triangular patch be $A_c$, $c\in[1,m]$;
step S72, substituting the three-dimensional world coordinates of $J$, $K$, $L$ and the intrinsic and extrinsic parameters of camera $A_c$ into the mapping formula to obtain the coordinate points $J'$, $K'$, $L'$ of $J$, $K$, $L$ on the picture shot by camera $A_c$;
the mapping formula is:
$$Z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=M_1M_2\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix}$$
where $[u,v]^T$ are two-dimensional pixel coordinates with homogeneous coordinates $[u,v,1]^T$; $[X_w,Y_w,Z_w]^T$ are three-dimensional world point coordinates with homogeneous coordinates $[X_w,Y_w,Z_w,1]^T$; $Z_c$ is the scale factor from the world coordinate system to the image coordinate system; $M_1$ is the camera intrinsic matrix; $M_2$ is the camera extrinsic matrix;
step S73, cutting out the triangular area formed by the coordinate points $J'$, $K'$, $L'$ and saving it onto a blank picture to form the UV Map picture;
step S74, recording the pixel coordinates of the three vertices of the triangular area on the UV Map picture, normalizing them to obtain the UV coordinates, and recording the correspondence between the vertex coordinates of the triangular patch and the UV coordinates on the UV Map picture in dictionary form.
6. The UV map-based finger three-dimensional model texture mapping method according to claim 1, characterized in that: the step S8 includes the following steps:
step S81, calculating respectively the average gray values of the texture information corresponding to cameras $A_1,A_2,A_3,\dots,A_m$, denoted $a_1,a_2,a_3,\dots,a_m$:
$$a_j=\frac{1}{N_j}\sum_{k=1}^{N_j}I_j(k),\qquad j=1,2,\dots,m$$
where $I_j(k)$ is the gray value of the $k$-th pixel of the texture information from camera $A_j$ and $N_j$ is the number of such pixels;
step S82, selecting the median of $a_1,a_2,a_3,\dots,a_m$ as the reference gray value, denoted $target$;
step S83, averaging the texture gray values of all cameras to $target$: first calculating the gray coefficient of each camera,
$$coef_j=\frac{target}{a_j},$$
then multiplying the gray value of every pixel in the UV Map picture by the gray coefficient $coef_j$ of its corresponding shooting camera $A_j$.
7. The UV map-based finger three-dimensional model texture mapping method according to claim 1, characterized in that: step S9 is as follows:
setting the original size of the UV Map picture as $w\times h$ and its pixel matrix as $I$, the position $(u,v)$ in the original picture corresponding to a pixel point $(x,y)$ of the target UV Map image ($w'\times h'$) is
$$u=x\cdot\frac{w}{w'},\qquad v=y\cdot\frac{h}{h'};$$
solving the pixel value $p$ at $(u,v)$ in the original picture: assuming the four-neighborhood pixel coordinates of $(u,v)$ are $(x_1,y_1)$, $(x_2,y_1)$, $(x_1,y_2)$, $(x_2,y_2)$, with corresponding pixel values $p_1$, $p_2$, $p_3$, $p_4$ respectively, then
$$p=p_1(x_2-u)(y_2-v)+p_2(u-x_1)(y_2-v)+p_3(x_2-u)(v-y_1)+p_4(u-x_1)(v-y_1),$$
and the obtained pixel value $p$ is assigned to the pixel point $(x,y)$ of the target UV Map image.
CN202110306178.9A 2021-03-23 2021-03-23 Finger three-dimensional model texture mapping method based on UV mapping Active CN113012271B

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110306178.9A 2021-03-23 2021-03-23 Finger three-dimensional model texture mapping method based on UV mapping

Publications (2)

Publication Number Publication Date
CN113012271A 2021-06-22
CN113012271B 2022-05-24

Family

ID=76405020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110306178.9A 2021-03-23 2021-03-23 Finger three-dimensional model texture mapping method based on UV mapping Active CN113012271B

Country Status (1)

Country Link
CN (1) CN113012271B

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781622A (en) * 2021-08-31 2021-12-10 咪咕文化科技有限公司 Three-dimensional model texture mapping conversion method, device, equipment and medium
CN117058299A (en) * 2023-08-21 2023-11-14 云创展汇科技(深圳)有限公司 Method for realizing rapid mapping based on rectangular length and width in ray detection model

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550992A (en) * 2015-12-30 2016-05-04 四川川大智胜软件股份有限公司 High fidelity full face texture fusing method of three-dimensional full face camera

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106919941B (en) * 2017-04-26 2018-10-09 华南理工大学 A kind of three-dimensional finger vein identification method and system
CN108062784B (en) * 2018-02-05 2022-04-29 深圳市易尚展示股份有限公司 Three-dimensional model texture mapping conversion method and device
CN109543535B (en) * 2018-10-23 2021-12-21 华南理工大学 Three-dimensional finger vein feature extraction method and matching method thereof
CN111009007B (en) * 2019-11-20 2023-07-14 广州光达创新科技有限公司 Finger multi-feature comprehensive three-dimensional reconstruction method
CN112002014B (en) * 2020-08-31 2023-12-15 中国科学院自动化研究所 Fine structure-oriented three-dimensional face reconstruction method, system and device
CN112288850A (en) * 2020-10-23 2021-01-29 深圳市金牌珠宝科技有限公司 Mapping system based on UV coordinate


Also Published As

Publication number Publication date
CN113012271A 2021-06-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant