CN113076921B - Multispectral texture synchronous mapping method of three-dimensional finger biological feature model - Google Patents


Info

Publication number
CN113076921B
CN113076921B (application CN202110428854.XA)
Authority
CN
China
Prior art keywords
dimensional
vein
skin
gray
texture
Prior art date
Legal status
Active
Application number
CN202110428854.XA
Other languages
Chinese (zh)
Other versions
CN113076921A (en)
Inventor
杨伟力
王林丰
康文雄
邓飞其
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202110428854.XA priority Critical patent/CN113076921B/en
Publication of CN113076921A publication Critical patent/CN113076921A/en
Application granted granted Critical
Publication of CN113076921B publication Critical patent/CN113076921B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/13 Sensors therefor
    • G06V40/14 Vascular patterns

Abstract

The invention provides a multispectral texture synchronous mapping method for a three-dimensional finger biological feature model. The method first reads the vertex information of a three-dimensional finger network model, then determines the shooting camera of each vertex, maps the three-dimensional vertex coordinates to two-dimensional pixel coordinates on the image plane, obtains the skin texture gray value and the vein texture gray value at those pixel coordinates by bilinear interpolation, and finally writes the skin and vein texture gray values back, in one-to-one correspondence with the three-dimensional vertices, to the texture-free three-dimensional finger network model, yielding a three-dimensional finger biological feature model carrying multispectral texture information. The method has a clear flow and high processing efficiency; the reconstructed finger model has rich texture, an ideal effect, and fits the actual human finger.

Description

Multispectral texture synchronous mapping method of three-dimensional finger biological feature model
Technical Field
The invention relates to the technical field of three-dimensional texture mapping, in particular to a multispectral texture synchronous mapping method of a three-dimensional finger biological feature model.
Background
With the development of science and technology, people are increasingly aware of the need to protect their property and information, and more and more occasions require identity authentication. Biometric identification is widely used for this purpose, and finger-based authentication has unique advantages.
The conventional finger recognition approach captures a two-dimensional image (such as a fingerprint) with a monocular camera and then processes it through a series of methods to obtain a recognition result. This approach has notable drawbacks: the finger image information obtained by a single camera is limited, and traditional fingerprint recognition is strongly affected by finger pose and position and is easy to counterfeit, all of which hinder improvements in finger recognition accuracy. Three-dimensional finger features address these problems: they carry richer texture information, including all of the finger's texture features as well as its geometric features, and are more robust to finger pose, so they have attracted wide attention in both academia and industry. It is therefore necessary to solve the texture mapping problem for the three-dimensional finger biometric model, and in particular to provide a multispectral texture synchronous mapping technique that handles both the finger's surface skin texture and its internal vein texture.
Disclosure of Invention
To overcome the defects and shortcomings of the prior art, the invention aims to provide a multispectral texture synchronous mapping method for a three-dimensional finger biological feature model. The method has a clear flow and high processing efficiency; the reconstructed model has rich texture, an ideal effect, and fits the actual human finger; little finger feature information is lost, which greatly benefits finger recognition accuracy.
To achieve this objective, the invention is realized by the following technical scheme: a multispectral texture synchronous mapping method of a three-dimensional finger biological feature model, characterized by comprising the following steps:
S1. Read the three-dimensional finger network model; obtain the coordinate set {V_i = (x_i, y_i, z_i) | i ∈ [1, n]} of all vertices in the model, where (x_i, y_i, z_i) are the coordinates of vertex V_i in the three-dimensional world coordinate system and n is the number of vertices; take the average of all vertex coordinate values as the center point O(x, y, z);
S2. Using the transformation between the camera coordinate system and the world coordinate system, solve for the coordinates of each camera A_j in the world coordinate system, where j = 1, 2, …, m, m is the number of cameras, and m > 3;
S3. Select the coordinate-system origins of any three cameras to establish a three-dimensional reference plane α, and solve the plane equation ax + by + cz + d = 0 of α using the coordinate values of the three selected camera origins;
S4. Classify each vertex V_i in the coordinate set {V_i = (x_i, y_i, z_i) | i ∈ [1, n]} obtained in step S1 into vertices within a single camera's view and vertices within the overlapping view of two cameras, and store the shooting camera(s) corresponding to each vertex V_i;
S5. For a vertex V_i within a single camera's view, calculate its two-dimensional pixel coordinates (u, v) in the picture taken by its shooting camera;
For a vertex V_i within the overlapping view of two cameras, calculate its two-dimensional pixel coordinates (u, v) in the pictures taken by both shooting cameras;
S6. For a vertex V_i within a single camera's view, acquire the pixel information of the two-dimensional skin picture and the two-dimensional vein picture of the corresponding shooting camera and store it in the two-dimensional pixel matrices I_skin and I_vein, respectively; according to the two-dimensional pixel coordinates of vertex V_i, obtain from I_skin the skin texture gray value gray_skin corresponding to the two-dimensional skin picture, and from I_vein the vein texture gray value gray_vein corresponding to the two-dimensional vein picture;
For a vertex V_i within the overlapping view of two cameras, acquire the pixel information of the two-dimensional skin pictures and two-dimensional vein pictures of both shooting cameras; store the skin-picture pixel information of the two cameras in two matrices I_skin and the vein-picture pixel information in two matrices I_vein; according to the two-dimensional pixel coordinates of V_i, obtain from the two matrices I_skin the two skin texture gray values gray_skin corresponding to the two skin pictures, and from the two matrices I_vein the two vein texture gray values gray_vein corresponding to the two vein pictures; integrate the two skin texture gray values into the final gray_skin, and the two vein texture gray values into the final gray_vein;
S7. Write the skin texture gray value gray_skin and vein texture gray value gray_vein corresponding to each vertex coordinate (x_i, y_i, z_i) into the vertices of the texture-free three-dimensional finger network model, obtaining the three-dimensional finger biological feature model with multispectral texture information and completing the texture mapping process.
Preferably, in the step S1, the solving method of the coordinates of the central point O (x, y, z) is as follows:
Figure GDA0003833614730000031
Preferably, in step S2, the transformation between the camera coordinate system and the world coordinate system is:
[X_c, Y_c, Z_c, 1]^T = [[R, t], [0^T, 1]] · [X_w, Y_w, Z_w, 1]^T
where (X_w, Y_w, Z_w, 1) are the homogeneous coordinates of an arbitrarily selected point in the world coordinate system, (X_c, Y_c, Z_c, 1) are the homogeneous coordinates of the same point in the camera coordinate system, R is the 3×3 rotation matrix between the two coordinate systems, and t is the three-dimensional translation vector between the two coordinate systems.
Preferably, in the step S3, the solution method of the three-dimensional reference plane equation includes: the world coordinates of the origin of the coordinate systems of the three cameras are (x 1, y1, z 1), (x 2, y2, z 2), (x 3, y3, z 3), respectively, and then the solution formula of the plane equation parameters [ a, b, c, d ] is:
Figure GDA0003833614730000033
Preferably, in step S4, the method for classifying each vertex V_i comprises the following steps:
S41. Project vertex V_i onto the three-dimensional reference plane α to obtain V_i'; project the center point O(x, y, z) obtained in step S1 onto α to obtain O'; project each camera A_j onto α to obtain A_j';
S42. Calculate in turn the angle between the vector O'V_i' and each vector O'A_j';
S43. If the minimum of the angles calculated in step S42 is less than 20°, classify vertex V_i as a vertex within a single camera's view, and set the camera corresponding to the minimum angle as its shooting camera; if the minimum angle is greater than or equal to 20°, classify V_i as a vertex within the overlapping view of two cameras, and set the two cameras corresponding to the two smallest angles as its shooting cameras.
Preferably, in step S5, the two-dimensional pixel coordinates (u, v) are calculated by:
Z_c · [u, v, 1]^T = M_1 · M_2 · [X_w, Y_w, Z_w, 1]^T
where [u, v]^T is the two-dimensional coordinate in the image coordinate system, with homogeneous coordinates [u, v, 1]^T; [X_w, Y_w, Z_w]^T is the vertex's three-dimensional coordinate in the world coordinate system, with homogeneous coordinates [X_w, Y_w, Z_w, 1]^T; Z_c is the scale factor from the world coordinate system to the image coordinate system; M_1 is the camera intrinsic matrix; and M_2 is the camera extrinsic matrix.
Preferably, in step S6, the skin texture gray value gray_skin and vein texture gray value gray_vein are obtained as follows:
Determine whether u and v in the two-dimensional pixel coordinates (u, v) of vertex V_i are integers:
If u and v are both integers, extract the gray values directly from the two-dimensional pixel matrices I_skin and I_vein:
gray_skin = I_skin[v-1][u-1]
gray_vein = I_vein[v-1][u-1]
If one or both of u and v are non-integer, interpolate the gray values of the pixels adjacent to (u, v) to obtain gray_skin and gray_vein.
Preferably, if one or both of u and v are non-integer, the skin texture gray value gray skin And vein texture gray value gray vein The calculation is divided into three cases as follows:
when u is an integer and v is a non-integer, the two nearest integers of v are y1 and y2, then:
Figure GDA0003833614730000042
Figure GDA0003833614730000043
when u is a non-integer and v is an integer, the two nearest integers of u are x1 and x2, then:
Figure GDA0003833614730000051
Figure GDA0003833614730000052
when u and v are non-integers, the coordinates of four pixel points adjacent to the vertex are (x 1, y 1), (x 2, y 1), (x 1, y 2), (x 2, y 2) respectively from left to right and from top to bottom; firstly, performing horizontal linear interpolation to obtain:
Figure GDA0003833614730000053
Figure GDA0003833614730000054
then linear interpolation is carried out in the longitudinal direction to obtain:
Figure GDA0003833614730000055
Figure GDA0003833614730000056
preferably, in the step S6, two skin texture gray values gray are obtained skin Integration into the final skin texture gray value gray skin (ii) a Gray value gray of two vein textures vein Integration into the final vein texture gray value gray vein The method comprises the following steps:
setting the vertex V i The two shooting cameras are respectively A a And A b (ii) a Two skin texture gray values gray skin Middle, camera A a The corresponding gray value of skin texture is gray1 skin Video camera A b The corresponding skin texture gray value is gray2 skin (ii) a Two vein texture gray values gray vein Middle, camera A a Corresponding vein texture gray value gray1 vein Video camera A b Corresponding vein texture gray value gray2 vein
gray skin =μ×gray1 skin +(1-μ)gray2 skin
gray vein =μ×gray1 vein +(1-μ)gray2 vein
Wherein the mu is a weight, and the weight is,
Figure GDA0003833614730000057
q is the vertex V i Projecting V on a three-dimensional reference plane a i ' and vector
Figure GDA0003833614730000058
Q = Q + (V' i And vector
Figure GDA0003833614730000061
The distance therebetween).
Compared with the prior art, the invention has the following advantages and beneficial effects:
the invention discloses a multispectral texture synchronous mapping method, which comprises the steps of utilizing a plurality of cameras to surround finger surface skin pictures and finger internal vein pictures obtained by shooting of fingers, and carrying out mapping of two sets of textures on a three-dimensional finger part network model without textures to obtain a three-dimensional finger part biological characteristic model with skin textures and vein textures; the method has the advantages of clear process, high processing efficiency, rich texture of the reconstructed model, ideal effect, fitting with actual human fingers, less loss of finger characteristic information and great effect on improvement of finger identification precision.
Drawings
FIG. 1 is a flow chart of a multi-spectral texture synchronous mapping method of the present invention;
FIG. 2 is a schematic representation of a three-dimensional finger network model without texture for the multi-spectral texture synchronization mapping method of the present invention;
FIG. 3 shows the camera planes of the six cameras in the world coordinate system in the multispectral texture synchronization mapping method of the present invention;
fig. 4 (a) and fig. 4 (b) are three-dimensional finger biometric models with surface skin texture obtained by the multispectral texture synchronous mapping method of the present invention;
fig. 5 (a) and 5 (b) are three-dimensional finger biometric models with internal vein textures obtained by the multispectral texture synchronous mapping method of the invention, respectively.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Examples
The multispectral texture synchronous mapping method of a three-dimensional finger biological feature model adds both the surface skin texture information and the internal vein texture information of a finger to a texture-free three-dimensional finger network model during three-dimensional reconstruction. The method mainly comprises: first reading the vertex information of the three-dimensional finger network model, then determining the shooting camera of each vertex, then mapping the three-dimensional vertex coordinates to pixel coordinates on the two-dimensional image plane, then obtaining the gray values at those two-dimensional pixel coordinates by bilinear interpolation, and finally writing the two sets of gray values back, in one-to-one correspondence with the three-dimensional vertices, to the texture-free three-dimensional finger network model, obtaining the three-dimensional finger biological feature model with two sets of texture information.
The process is shown in fig. 1, and comprises the following steps:
S1. Read the three-dimensional finger network model; obtain the coordinate set {V_i = (x_i, y_i, z_i) | i ∈ [1, n]} of all vertices in the model, where (x_i, y_i, z_i) are the coordinates of vertex V_i in the three-dimensional world coordinate system and n is the number of vertices; take the average of all vertex coordinate values as the center point O(x, y, z). Here the three-dimensional finger network model may be a texture-free three-dimensional finger network model, as shown in fig. 2.
The solving method of the coordinates of the central point O (x, y, z) is as follows:
x = (1/n)·Σ_{i=1}^{n} x_i,  y = (1/n)·Σ_{i=1}^{n} y_i,  z = (1/n)·Σ_{i=1}^{n} z_i
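The center point of step S1 is a plain average over the vertex list. A minimal sketch in Python (function and variable names are illustrative, not from the patent):

```python
def center_point(vertices):
    """Average all vertex coordinates to obtain the center point O(x, y, z)."""
    n = len(vertices)
    x = sum(v[0] for v in vertices) / n
    y = sum(v[1] for v in vertices) / n
    z = sum(v[2] for v in vertices) / n
    return (x, y, z)
```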
s2, solving each camera A by utilizing the conversion relation between the camera coordinate system and the world coordinate system j Coordinates in a world coordinate system, where j =1, 2.., m, m being the number of cameras, m > 3; in this embodiment, the number of cameras is 6, and the camera plane of each camera in the world coordinate system is shown in fig. 3.
The conversion relation between the camera coordinate system and the world coordinate system is as follows:
[X_c, Y_c, Z_c, 1]^T = [[R, t], [0^T, 1]] · [X_w, Y_w, Z_w, 1]^T
where (X_w, Y_w, Z_w, 1) are the homogeneous coordinates of an arbitrarily selected point in the world coordinate system, (X_c, Y_c, Z_c, 1) are the homogeneous coordinates of the same point in the camera coordinate system, R is the 3×3 rotation matrix between the two coordinate systems, and t is the three-dimensional translation vector between the two coordinate systems.
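Setting (X_c, Y_c, Z_c) = (0, 0, 0) in this transformation gives the camera origin in world coordinates as -R^T·t, which is one way to carry out step S2 once R and t are known from calibration. A sketch in pure Python (names are illustrative):

```python
def camera_origin_world(R, t):
    """World coordinates of the camera origin: solve 0 = R·Xw + t, i.e. Xw = -R^T·t.

    R is a 3x3 rotation matrix (nested lists); t is a length-3 translation vector.
    """
    # Row i of R^T is column i of R, so (R^T·t)[i] = sum_k R[k][i]*t[k].
    return tuple(-sum(R[k][i] * t[k] for k in range(3)) for i in range(3))
```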
S3. Select the coordinate-system origins of any three cameras to establish a three-dimensional reference plane α, and solve the plane equation ax + by + cz + d = 0 of α using the coordinate values of the three selected camera origins;
The three-dimensional reference plane equation is solved as follows: let the world coordinates of the origins of the three camera coordinate systems be (x_1, y_1, z_1), (x_2, y_2, z_2), (x_3, y_3, z_3); then the plane equation parameters [a, b, c, d] are solved by:
a = (y_2 - y_1)(z_3 - z_1) - (z_2 - z_1)(y_3 - y_1)
b = (z_2 - z_1)(x_3 - x_1) - (x_2 - x_1)(z_3 - z_1)
c = (x_2 - x_1)(y_3 - y_1) - (y_2 - y_1)(x_3 - x_1)
d = -(a·x_1 + b·y_1 + c·z_1)
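Solving [a, b, c, d] from three camera origins is a cross product of two in-plane vectors; a sketch assuming the origins are given as 3-tuples (names are illustrative):

```python
def plane_from_points(p1, p2, p3):
    """Parameters [a, b, c, d] of the plane ax+by+cz+d=0 through three points.

    The normal (a, b, c) is the cross product (p2 - p1) x (p3 - p1).
    """
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    a = u[1] * v[2] - u[2] * v[1]
    b = u[2] * v[0] - u[0] * v[2]
    c = u[0] * v[1] - u[1] * v[0]
    d = -(a * p1[0] + b * p1[1] + c * p1[2])
    return a, b, c, d
```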
S4. Classify each vertex V_i in the coordinate set {V_i = (x_i, y_i, z_i) | i ∈ [1, n]} obtained in step S1 into vertices within a single camera's view and vertices within the overlapping view of two cameras, and store the shooting camera(s) corresponding to each vertex V_i;
The method for classifying each vertex V_i comprises the following steps:
S41. Project vertex V_i onto the three-dimensional reference plane α to obtain V_i'; project the center point O(x, y, z) obtained in step S1 onto α to obtain O'; project each camera A_j onto α to obtain A_j';
Taking V_i' as an example, the projection coordinates (x_i', y_i', z_i') are solved by:
k = (a·x_i + b·y_i + c·z_i + d) / (a^2 + b^2 + c^2)
x_i' = x_i - a·k,  y_i' = y_i - b·k,  z_i' = z_i - c·k
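Projecting a point onto the reference plane α, as step S41 requires, moves the point along the plane normal by its signed distance to the plane; a sketch (illustrative names):

```python
def project_to_plane(p, plane):
    """Orthogonal projection of point p = (x, y, z) onto the plane ax+by+cz+d=0."""
    a, b, c, d = plane
    k = (a * p[0] + b * p[1] + c * p[2] + d) / (a * a + b * b + c * c)
    return (p[0] - a * k, p[1] - b * k, p[2] - c * k)
```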
S42. Calculate in turn the angle between the vector O'V_i' and each vector O'A_j';
S43. If the minimum of the angles calculated in step S42 is less than 20°, classify vertex V_i as a vertex within a single camera's view, and set the camera corresponding to the minimum angle as its shooting camera; if the minimum angle is greater than or equal to 20°, classify V_i as a vertex within the overlapping view of two cameras, and set the two cameras corresponding to the two smallest angles as its shooting cameras.
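Steps S41 to S43 can be sketched as follows; the 20° threshold and the two-smallest-angles rule follow the text, while the helper names are illustrative:

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two vectors of equal dimension."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def classify_vertex(o_proj, v_proj, cam_projs, threshold_deg=20.0):
    """Return the indices of the shooting camera(s) for one projected vertex.

    o_proj:    projection O' of the center point onto the reference plane
    v_proj:    projection V_i' of the vertex
    cam_projs: projections A_j' of the camera origins
    """
    dim = len(o_proj)
    ov = [v_proj[i] - o_proj[i] for i in range(dim)]
    angles = sorted(
        (angle_between(ov, [c[i] - o_proj[i] for i in range(dim)]), j)
        for j, c in enumerate(cam_projs))
    if angles[0][0] < threshold_deg:
        return [angles[0][1]]            # vertex in a single camera's view
    return [angles[0][1], angles[1][1]]  # vertex in two overlapping views
```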
S5. For a vertex V_i within a single camera's view, calculate its two-dimensional pixel coordinates (u, v) in the picture taken by its shooting camera;
For a vertex V_i within the overlapping view of two cameras, calculate its two-dimensional pixel coordinates (u, v) in the pictures taken by both shooting cameras;
the two-dimensional pixel coordinate (u, v) is calculated by the following method:
Z_c · [u, v, 1]^T = M_1 · M_2 · [X_w, Y_w, Z_w, 1]^T
where [u, v]^T is the two-dimensional coordinate in the image coordinate system, with homogeneous coordinates [u, v, 1]^T; [X_w, Y_w, Z_w]^T is the vertex's three-dimensional coordinate in the world coordinate system, with homogeneous coordinates [X_w, Y_w, Z_w, 1]^T; Z_c is the scale factor from the world coordinate system to the image coordinate system; M_1 is the camera intrinsic matrix, which depends only on the camera's internal structure; and M_2 is the camera extrinsic matrix, determined by the camera's orientation relative to the world coordinate system.
S6. For a vertex V_i within a single camera's view, acquire the pixel information of the two-dimensional skin picture and the two-dimensional vein picture of the corresponding shooting camera and store it in the two-dimensional pixel matrices I_skin and I_vein, respectively; according to the two-dimensional pixel coordinates of vertex V_i, obtain from I_skin the skin texture gray value gray_skin corresponding to the two-dimensional skin picture, and from I_vein the vein texture gray value gray_vein corresponding to the two-dimensional vein picture. Because the two-dimensional skin picture and the two-dimensional vein picture correspond to each other pixel by pixel, the two-dimensional image coordinates (u, v) index the textures of both pictures simultaneously.
For a vertex V_i within the overlapping view of two cameras, acquire the pixel information of the two-dimensional skin pictures and two-dimensional vein pictures of both shooting cameras; store the skin-picture pixel information of the two cameras in two matrices I_skin and the vein-picture pixel information in two matrices I_vein; according to the two-dimensional pixel coordinates of V_i, obtain from the two matrices I_skin the two skin texture gray values gray_skin corresponding to the two skin pictures, and from the two matrices I_vein the two vein texture gray values gray_vein corresponding to the two vein pictures; integrate the two skin texture gray values into the final gray_skin, and the two vein texture gray values into the final gray_vein.
Specifically, the skin texture gray value gray_skin and vein texture gray value gray_vein are obtained as follows:
Determine whether u and v in the two-dimensional pixel coordinates (u, v) of vertex V_i are integers:
If u and v are both integers, extract the gray values directly from the two-dimensional pixel matrices I_skin and I_vein:
gray_skin = I_skin[v-1][u-1]
gray_vein = I_vein[v-1][u-1]
If one or both of u and v are non-integer, interpolate the gray values of the pixels adjacent to (u, v) to obtain gray_skin and gray_vein.
Preferably, if one or both of u and v are non-integer, the calculation of gray_skin and gray_vein is divided into three cases:
When u is an integer and v is a non-integer, with y_1 and y_2 the two integers nearest to v:
gray_skin = (y_2 - v)·I_skin[y_1-1][u-1] + (v - y_1)·I_skin[y_2-1][u-1]
gray_vein = (y_2 - v)·I_vein[y_1-1][u-1] + (v - y_1)·I_vein[y_2-1][u-1]
When u is a non-integer and v is an integer, with x_1 and x_2 the two integers nearest to u:
gray_skin = (x_2 - u)·I_skin[v-1][x_1-1] + (u - x_1)·I_skin[v-1][x_2-1]
gray_vein = (x_2 - u)·I_vein[v-1][x_1-1] + (u - x_1)·I_vein[v-1][x_2-1]
When u and v are both non-integers, the four pixels adjacent to the vertex are (x_1, y_1), (x_2, y_1), (x_1, y_2), (x_2, y_2), ordered left to right and top to bottom; first interpolate linearly in the horizontal direction:
gray_skin(u, y_1) = (x_2 - u)·I_skin[y_1-1][x_1-1] + (u - x_1)·I_skin[y_1-1][x_2-1]
gray_skin(u, y_2) = (x_2 - u)·I_skin[y_2-1][x_1-1] + (u - x_1)·I_skin[y_2-1][x_2-1]
and likewise for gray_vein from I_vein; then interpolate linearly in the vertical direction:
gray_skin = (y_2 - v)·gray_skin(u, y_1) + (v - y_1)·gray_skin(u, y_2)
gray_vein = (y_2 - v)·gray_vein(u, y_1) + (v - y_1)·gray_vein(u, y_2)
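The three interpolation cases collapse into standard bilinear sampling (since x_2 - x_1 = y_2 - y_1 = 1, the divisions vanish). A sketch over one row-major gray image, following the patent's img[v-1][u-1] indexing convention (function name is illustrative):

```python
import math

def sample_gray(img, u, v):
    """Sample a row-major gray image at (possibly fractional) pixel coordinates (u, v)."""
    x1, y1 = math.floor(u), math.floor(v)
    x2, y2 = x1 + 1, y1 + 1
    if u == x1 and v == y1:        # both integer: direct lookup
        return img[y1 - 1][x1 - 1]
    if u == x1:                    # u integer, v fractional: interpolate vertically
        return (y2 - v) * img[y1 - 1][x1 - 1] + (v - y1) * img[y2 - 1][x1 - 1]
    if v == y1:                    # v integer, u fractional: interpolate horizontally
        return (x2 - u) * img[y1 - 1][x1 - 1] + (u - x1) * img[y1 - 1][x2 - 1]
    # general case: horizontal interpolation on both rows, then vertical
    top = (x2 - u) * img[y1 - 1][x1 - 1] + (u - x1) * img[y1 - 1][x2 - 1]
    bot = (x2 - u) * img[y2 - 1][x1 - 1] + (u - x1) * img[y2 - 1][x2 - 1]
    return (y2 - v) * top + (v - y1) * bot
```

The same routine serves both I_skin and I_vein, since the skin and vein pictures are sampled at the same (u, v).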
two skin texture gray values gray skin Integration into the final skin texture gray value gray skin (ii) a Two vein texture gray values gray vein Integration into the final vein texture gray value gray vein The method comprises the following steps:
set vertex V i Respectively two shooting camerasIs A a And A b (ii) a Two skin texture gray values gray skin Middle, camera A a The corresponding skin texture gray value is gray1 skin Video camera A b The corresponding gray value of skin texture is gray2 skin (ii) a Two vein texture gray values gray vein Middle, camera A a Corresponding vein texture gray value gray1 vein Video camera A b Corresponding vein texture gray value gray2 vein
gray skin =μ×gray1 skin +(1-μ)gray2 skin
gray vein =μ×gray1 vein +(1-μ)gray2 vein
Wherein the mu is a weight, and the weight is,
Figure GDA0003833614730000111
q is vertex V i Projecting V on a three-dimensional reference plane a i ' and vector
Figure GDA0003833614730000112
Q = Q + (V' i And vector
Figure GDA0003833614730000113
The distance therebetween).
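The integration step is a convex combination of the two per-camera values. Note the reading of q as the distance to the other camera's vector O'A_b' (so that a vertex nearer camera A_a weights gray1 more heavily) is a reconstruction, since the original formula is garbled in this copy. A sketch (illustrative names):

```python
def blend_grays(gray1, gray2, dist_to_b, dist_to_a):
    """Blend per-camera gray values with weight mu = q / Q.

    gray1, gray2: gray values from cameras A_a and A_b.
    dist_to_b:    q, the distance from V_i' to the vector O'A_b' (assumed reading).
    dist_to_a:    the distance from V_i' to the vector O'A_a'; Q = q + dist_to_a.
    """
    q = dist_to_b
    Q = q + dist_to_a
    mu = q / Q
    return mu * gray1 + (1.0 - mu) * gray2
```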
S7. Write the skin texture gray value gray_skin and vein texture gray value gray_vein corresponding to each vertex coordinate (x_i, y_i, z_i) into the vertices of the texture-free three-dimensional finger network model, obtaining the three-dimensional finger biological feature model with multispectral texture information and completing the texture mapping process, as shown in fig. 4 (a) and 4 (b) and fig. 5 (a) and 5 (b); the multispectral texture information includes the surface skin texture and the internal vein texture.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to them; any change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be regarded as an equivalent replacement and falls within the protection scope of the present invention.

Claims (6)

1. A multispectral texture synchronous mapping method of a three-dimensional finger biological feature model is characterized in that: the method comprises the following steps:
s1, reading a three-dimensional finger network model; obtaining a set of coordinates { V ] of all vertices in a three-dimensional finger network model i =(x i ,y i ,z i |i∈[1,n]) In which (x) i ,y i ,z i ) Is a vertex V i Coordinates under a three-dimensional world coordinate system, wherein n is the number of vertexes; solving the average value of all vertex coordinate values and determining the average value as a central point O (x, y, z);
s2, solving each camera A by utilizing the conversion relation between the camera coordinate system and the world coordinate system j Coordinates in the world coordinate system, wherein j =1,2, \8230;, m, m is the number of cameras, m > 3;
s3, selecting the origin of the coordinate system of any three cameras to construct a three-dimensional reference plane alpha, and solving a three-dimensional reference plane equation ax + by + cz + d =0 of the three-dimensional reference plane alpha by using the coordinate values of the origins of the three selected cameras;
s4, collecting the coordinates { V ] obtained in the step S1 i =(x i ,y i ,z i |i∈[1,n]) In the vertex V, each vertex V i Classified into a vertex located within a single camera view and a vertex located within an overlapping view of two cameras, and each vertex V is stored i A corresponding shooting camera;
s5, for a vertex V_i located within a single camera's view angle, calculating the two-dimensional pixel coordinates (u, v) of V_i in the picture taken by its shooting camera;

for a vertex V_i located within the overlapping view angles of two cameras, separately calculating the two-dimensional pixel coordinates (u, v) of V_i in the pictures taken by the two shooting cameras;
s6, for a vertex V_i located within a single camera's view angle, acquiring the pixel information of the two-dimensional skin picture and the two-dimensional vein picture of the corresponding shooting camera and storing it in two-dimensional pixel matrices I_skin and I_vein; according to the two-dimensional pixel coordinates (u, v) of vertex V_i, obtaining from the two-dimensional pixel matrix I_skin the skin texture gray value gray_skin corresponding to the two-dimensional skin picture, and from the two-dimensional pixel matrix I_vein the vein texture gray value gray_vein corresponding to the two-dimensional vein picture;
for a vertex V_i located within the overlapping view angles of two cameras, acquiring the pixel information of the two-dimensional skin pictures and two-dimensional vein pictures of the two corresponding shooting cameras, storing the pixel information of the two cameras' two-dimensional skin pictures in two two-dimensional pixel matrices I_skin, and storing the pixel information of the two cameras' two-dimensional vein pictures in two two-dimensional pixel matrices I_vein; according to the two-dimensional pixel coordinates (u, v) of vertex V_i, obtaining from the two-dimensional pixel matrices I_skin the two skin texture gray values gray_skin corresponding to the two cameras' two-dimensional skin pictures, and from the two-dimensional pixel matrices I_vein the two vein texture gray values gray_vein corresponding to the two cameras' two-dimensional vein pictures; integrating the two skin texture gray values gray_skin into a final skin texture gray value gray_skin, and integrating the two vein texture gray values gray_vein into a final vein texture gray value gray_vein;
S7, according to the coordinates (x_i, y_i, z_i) of each vertex V_i, writing the corresponding skin texture gray value gray_skin and vein texture gray value gray_vein into the vertices of the texture-free three-dimensional finger network model to obtain a three-dimensional finger biometric model with multispectral texture information, completing the texture mapping process;
in the step S6, the skin texture gray value gray_skin and the vein texture gray value gray_vein are obtained as follows:

determining whether u and v in the two-dimensional pixel coordinates (u, v) corresponding to vertex V_i are integers:
if u and v are both integers, the skin texture gray value gray_skin and the vein texture gray value gray_vein are extracted directly from the two-dimensional pixel matrices I_skin and I_vein:

gray_skin = I_skin[v-1][u-1]

gray_vein = I_vein[v-1][u-1]
if one or both of u and v are non-integers, the skin texture gray value gray_skin and the vein texture gray value gray_vein are obtained by interpolating the gray values of the pixels adjacent to the two-dimensional pixel coordinates (u, v), in the following three cases:
when u is an integer and v is a non-integer, with y1 and y2 the two integers nearest to v:

gray_skin = (y2 − v)·I_skin[y1−1][u−1] + (v − y1)·I_skin[y2−1][u−1]

gray_vein = (y2 − v)·I_vein[y1−1][u−1] + (v − y1)·I_vein[y2−1][u−1]
when u is a non-integer and v is an integer, with x1 and x2 the two integers nearest to u:

gray_skin = (x2 − u)·I_skin[v−1][x1−1] + (u − x1)·I_skin[v−1][x2−1]

gray_vein = (x2 − u)·I_vein[v−1][x1−1] + (u − x1)·I_vein[v−1][x2−1]
when u and v are both non-integers, the four pixels adjacent to the vertex, from left to right and from top to bottom, have coordinates (x1, y1), (x2, y1), (x1, y2) and (x2, y2); first, interpolating linearly in the horizontal direction (with I standing for I_skin and I_vein in turn):

gray1 = (x2 − u)·I[y1−1][x1−1] + (u − x1)·I[y1−1][x2−1]

gray2 = (x2 − u)·I[y2−1][x1−1] + (u − x1)·I[y2−1][x2−1]

then interpolating linearly in the vertical direction:

gray_skin = (y2 − v)·gray1 + (v − y1)·gray2  (I = I_skin)

gray_vein = (y2 − v)·gray1 + (v − y1)·gray2  (I = I_vein)
in the step S6, the two skin texture gray values gray_skin are integrated into the final skin texture gray value gray_skin, and the two vein texture gray values gray_vein are integrated into the final vein texture gray value gray_vein, as follows:

let the two shooting cameras of vertex V_i be A_a and A_b; of the two skin texture gray values gray_skin, the one from camera A_a is gray1_skin and the one from camera A_b is gray2_skin; of the two vein texture gray values gray_vein, the one from camera A_a is gray1_vein and the one from camera A_b is gray2_vein; then:
gray_skin = μ·gray1_skin + (1 − μ)·gray2_skin

gray_vein = μ·gray1_vein + (1 − μ)·gray2_vein
wherein μ is a weighting coefficient:

μ = q_b / (q_a + q_b)

where q_a is the distance between V_i′, the projection of vertex V_i on the three-dimensional reference plane α, and the vector O′A_a′, and q_b is the distance between V_i′ and the vector O′A_b′.
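The interpolation and fusion of step S6 can be sketched in pure Python. The three interpolation cases collapse into one bilinear sampler, since the horizontal (or vertical) blend weight degenerates to zero when u (or v) is an integer; pixel coordinates are 1-based as in the claim. The fusion weight μ = q_b / (q_a + q_b) is an assumed reconstruction (the nearer camera dominates), since the source garbles the weight formula.

```python
import math

def sample_gray(I, u, v):
    # Bilinear sampling of matrix I at (possibly fractional) 1-based pixel
    # coordinates (u, v); reduces to the claim's three cases automatically.
    x1 = int(math.floor(u)); x2 = min(x1 + 1, len(I[0]))
    y1 = int(math.floor(v)); y2 = min(y1 + 1, len(I))
    fu, fv = u - x1, v - y1
    gray1 = (1 - fu) * I[y1 - 1][x1 - 1] + fu * I[y1 - 1][x2 - 1]  # horizontal, row y1
    gray2 = (1 - fu) * I[y2 - 1][x1 - 1] + fu * I[y2 - 1][x2 - 1]  # horizontal, row y2
    return (1 - fv) * gray1 + fv * gray2                           # vertical blend

def fuse_gray(gray1, gray2, q_a, q_b):
    # Integrate the two gray values of an overlap vertex; mu = q_b / (q_a + q_b)
    # is an assumption, chosen so the camera with the smaller distance dominates.
    mu = q_b / (q_a + q_b)
    return mu * gray1 + (1 - mu) * gray2
```

The same sampler serves both I_skin and I_vein, which is what keeps the two spectral textures synchronously mapped: one (u, v) per vertex, two lookups.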
2. The method of multi-spectral texture synchronous mapping of a three-dimensional finger biometric model according to claim 1, wherein: in the step S1, the coordinates of the central point O(x, y, z) are solved as:

x = (1/n)·Σ_{i=1}^{n} x_i,  y = (1/n)·Σ_{i=1}^{n} y_i,  z = (1/n)·Σ_{i=1}^{n} z_i
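The centre-point formula of claim 2 is a coordinate-wise mean over all vertices; a minimal sketch, assuming the vertex set is a list of (x, y, z) tuples:

```python
def center_point(vertices):
    # Step S1 / claim 2: O is the mean of all vertex coordinates.
    n = len(vertices)
    return tuple(sum(v[k] for v in vertices) / n for k in range(3))
```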
3. The method of multi-spectral texture synchronous mapping of a three-dimensional finger biometric model according to claim 1, wherein: in the step S2, the conversion relationship between the camera coordinate system and the world coordinate system is:

[X_c, Y_c, Z_c, 1]^T = [R t; 0 1] · [X_w, Y_w, Z_w, 1]^T

wherein (X_w, Y_w, Z_w, 1) is the homogeneous coordinate of an arbitrary point in the world coordinate system, (X_c, Y_c, Z_c, 1) is the homogeneous coordinate of the same point in the camera coordinate system, R is the 3x3 rotation matrix between the two coordinate systems, and t is the three-dimensional translation vector between the two coordinate systems.
4. The method of multi-spectral texture synchronous mapping of a three-dimensional finger biometric model according to claim 1, wherein: in the step S3, the three-dimensional reference plane equation is solved as follows: the world coordinates of the origins of the three camera coordinate systems being (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3), the plane equation parameters [a, b, c, d] are:

[a, b, c] = (x2 − x1, y2 − y1, z2 − z1) × (x3 − x1, y3 − y1, z3 − z1)

d = −(a·x1 + b·y1 + c·z1)
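The plane parameters of claim 4 follow the standard cross-product construction (a sketch under that assumption): the normal is the cross product of two edge vectors, and d is fixed by substituting the first origin.

```python
def reference_plane(p1, p2, p3):
    # Plane alpha through three camera origins: normal (a, b, c) is
    # (p2 - p1) x (p3 - p1); d makes p1 satisfy a*x + b*y + c*z + d = 0.
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    a = u[1] * v[2] - u[2] * v[1]
    b = u[2] * v[0] - u[0] * v[2]
    c = u[0] * v[1] - u[1] * v[0]
    d = -(a * p1[0] + b * p1[1] + c * p1[2])
    return a, b, c, d
```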
5. The method of multi-spectral texture synchronous mapping of a three-dimensional finger biometric model according to claim 1, wherein: in the step S4, each vertex V_i is classified by the following steps:

s41, projecting vertex V_i onto the three-dimensional reference plane α to obtain V_i′; projecting the central point O(x, y, z) obtained in the step S1 onto the three-dimensional reference plane α to obtain O′; projecting each camera A_j onto the three-dimensional reference plane α to obtain A_j′;
s42, sequentially calculating the included angle between the vector O′V_i′ and each vector O′A_j′;
s43, if the minimum of the included angles calculated in the step S42 is less than 20°, classifying vertex V_i as a vertex within a single camera's view angle and setting the camera corresponding to the minimum included angle as the shooting camera; if the minimum included angle is greater than or equal to 20°, classifying vertex V_i as a vertex within the overlapping view angles of two cameras and setting the two cameras corresponding to the two smallest included angles as the shooting cameras.
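Steps S42–S43 can be sketched as an angle test on the projected direction vectors (pure Python; `v_dir` stands for O′V_i′ and `cam_dirs` for the vectors O′A_j′, names chosen here for illustration):

```python
import math

def angle_deg(a, b):
    # Included angle between two 3-D vectors, in degrees.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))

def shooting_cameras(v_dir, cam_dirs, threshold=20.0):
    # Step S43: one camera index if the smallest angle is below the 20-degree
    # threshold, otherwise the two cameras with the smallest angles.
    order = sorted(range(len(cam_dirs)), key=lambda j: angle_deg(v_dir, cam_dirs[j]))
    if angle_deg(v_dir, cam_dirs[order[0]]) < threshold:
        return order[:1]
    return order[:2]
```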
6. The method of multi-spectral texture synchronous mapping of a three-dimensional finger biometric model according to claim 1, wherein: in the step S5, the two-dimensional pixel coordinate (u, v) is calculated by:
Z_C · [u, v, 1]^T = M_1 · M_2 · [X_w, Y_w, Z_w, 1]^T

wherein [u, v]^T is the two-dimensional coordinate in the image coordinate system, with homogeneous coordinate [u, v, 1]^T; [X_w, Y_w, Z_w]^T is the three-dimensional coordinate of the vertex in the world coordinate system, with homogeneous coordinate [X_w, Y_w, Z_w, 1]^T; Z_C is the scale factor from the world coordinate system to the image coordinate system; M_1 is the camera intrinsic parameter matrix; and M_2 is the camera extrinsic parameter matrix.
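Claim 6's projection can be sketched with plain nested lists (a sketch assuming M_1 is the 3x3 intrinsic matrix and M_2 the 3x4 extrinsic matrix [R | t]; Z_C falls out as the third homogeneous component):

```python
def project_vertex(M1, M2, Xw):
    # Z_C * [u, v, 1]^T = M1 * M2 * [X_w, Y_w, Z_w, 1]^T
    Xh = list(Xw) + [1.0]
    cam = [sum(M2[r][k] * Xh[k] for k in range(4)) for r in range(3)]  # world -> camera
    pix = [sum(M1[r][k] * cam[k] for k in range(3)) for r in range(3)]  # camera -> image
    return pix[0] / pix[2], pix[1] / pix[2]  # divide out Z_C
```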
CN202110428854.XA 2021-04-21 2021-04-21 Multispectral texture synchronous mapping method of three-dimensional finger biological feature model Active CN113076921B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110428854.XA CN113076921B (en) 2021-04-21 2021-04-21 Multispectral texture synchronous mapping method of three-dimensional finger biological feature model

Publications (2)

Publication Number Publication Date
CN113076921A CN113076921A (en) 2021-07-06
CN113076921B true CN113076921B (en) 2022-11-18

Family

ID=76618210


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085802A (en) * 2020-07-24 2020-12-15 浙江工业大学 Method for acquiring three-dimensional finger vein image based on binocular camera

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001236505A (en) * 2000-02-22 2001-08-31 Atsushi Kuroda Method, device and system for estimating coordinate
US10762366B2 (en) * 2015-10-10 2020-09-01 Zkteco Co., Ltd. Finger vein identification method and device
CN106919941B (en) * 2017-04-26 2018-10-09 华南理工大学 A kind of three-dimensional finger vein identification method and system
US10776469B2 (en) * 2017-07-18 2020-09-15 Samsung Electronics Co., Ltd. Method for generating 3D biometric model of body part of user and electronic device thereof
CN109190554A (en) * 2018-08-30 2019-01-11 深圳大学 It is a kind of based on fingerprint and to refer to the 3D identifying system and method for vein
CN112084840A (en) * 2020-07-24 2020-12-15 浙江工业大学 Finger vein identification method based on three-dimensional NMI

Similar Documents

Publication Publication Date Title
Beymer et al. Example based image analysis and synthesis
Yamany et al. Free-form surface registration using surface signatures
RU2215326C2 (en) Image-based hierarchic presentation of motionless and animated three-dimensional object, method and device for using this presentation to visualize the object
Bartoli et al. Generalized thin-plate spline warps
WO2016175150A1 (en) Template creation device and template creation method
CN106919944A (en) A kind of wide-angle image method for quickly identifying based on ORB algorithms
CN105279789A (en) A three-dimensional reconstruction method based on image sequences
CN108776989A (en) Low texture plane scene reconstruction method based on sparse SLAM frames
CN106934824B (en) Global non-rigid registration and reconstruction method for deformable object
CN114529605A (en) Human body three-dimensional attitude estimation method based on multi-view fusion
CN112330813A (en) Wearing three-dimensional human body model reconstruction method based on monocular depth camera
CN116958437A (en) Multi-view reconstruction method and system integrating attention mechanism
CN113538569A (en) Weak texture object pose estimation method and system
CN110111292A (en) A kind of infrared and visible light image fusion method
CN111325828B (en) Three-dimensional face acquisition method and device based on three-dimensional camera
CN113012271B (en) Finger three-dimensional model texture mapping method based on UV (ultraviolet) mapping
CN115393519A (en) Three-dimensional reconstruction method based on infrared and visible light fusion image
CN114996814A (en) Furniture design system based on deep learning and three-dimensional reconstruction
CN113076921B (en) Multispectral texture synchronous mapping method of three-dimensional finger biological feature model
CN111768476A (en) Expression animation redirection method and system based on grid deformation
Aganj et al. Multi-view texturing of imprecise mesh
CN110728296A (en) Two-step random sampling consistency method and system for accelerating feature point matching
Shibayama et al. Reconstruction of 3D surface and restoration of flat document image from monocular image sequence
CN115330935A (en) Three-dimensional reconstruction method and system based on deep learning
CN108429889A (en) A kind of 1,000,000,000 pixel video generation method of EO-1 hyperion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant