CN107480654B - Three-dimensional vein recognition device applied to wearable equipment - Google Patents

Three-dimensional vein recognition device applied to wearable equipment

Publication number
CN107480654B
CN107480654B (application CN201710777503.3A)
Authority
CN
China
Prior art keywords: annular, unit, main body, vein, outer ring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710777503.3A
Other languages
Chinese (zh)
Other versions
CN107480654A (en)
Inventor
王东方
苑文楼
顾哲豪
许远航
刘欢
刘欣
殷志富
杨旭
吴越
王昕�
Current Assignee
Jilin University
Original Assignee
Jilin University
Priority date
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN201710777503.3A priority Critical patent/CN107480654B/en
Publication of CN107480654A publication Critical patent/CN107480654A/en
Application granted granted Critical
Publication of CN107480654B publication Critical patent/CN107480654B/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 - Fingerprints or palmprints
    • G06V40/13 - Sensors therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E - REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E60/00 - Enabling technologies; Technologies with a potential or indirect contribution to GHG emissions mitigation
    • Y02E60/10 - Energy storage using batteries


Abstract

The invention relates to a three-dimensional vein recognition device applied to wearable equipment, and belongs to the field of biometric recognition. The device comprises an intelligent control unit, an annular inner ring unit, an image sensor unit, a circular array type infrared light source unit, an annular outer ring unit and a drive control unit. By adopting a brand-new annular structure, the image sensor unit acquires information over a full 360 degrees during vein image acquisition, so more vein information is captured and recognition is more accurate, reliable and robust, with a low false rejection rate. The device keeps user identity information secure, supports human-machine interaction, and is simple in structure, convenient to control and easy to popularize.

Description

Three-dimensional vein recognition device applied to wearable equipment
Technical Field
The invention belongs to the field of biometric recognition, and particularly relates to a three-dimensional vein recognition device applied to wearable equipment.
Background
Vein recognition is an emerging biometric technology that identifies individuals by the characteristics of their vein pattern, and it has very broad application prospects. Medical studies have shown that vein distribution differs between individuals and, once formed, remains essentially stable over time. Veins therefore exhibit the universality, distinctiveness, permanence and collectability that a biometric trait should possess. In addition, vein recognition has unique advantages that other biometric methods lack, such as liveness detection, resistance to forgery and contactless acquisition, which make it one of the more promising biometric methods for the future.
However, existing vein recognition technology has significant limitations and has not been widely adopted. The main reason is that it is mostly simple two-dimensional recognition, whose result is easily affected by the user's static posture and motion: even a slight movement changes the image acquisition angle, so during recognition the user must position fingers, wrist and palm exactly as in ideal laboratory conditions, without bending or deflection. This greatly harms the user experience and raises the false rejection rate. In addition, two-dimensional vein images are easy to imitate, so security cannot be guaranteed.
Some works have proposed a three-dimensional vein recognition device based on binocular vision, which to some extent alleviates the sensitivity of two-dimensional recognition to human posture and motion, but the method does not achieve true three-dimensional vein reconstruction. Because infrared light cannot fully penetrate the human body, the vein images obtained by binocular vision contain vein information only within a limited angular range; vein information on the side facing away from the cameras cannot be acquired. Moreover, binocular three-dimensional reconstruction requires two cameras to shoot simultaneously, which is difficult to control, and the resulting equipment is bulky, complex and expensive, so the approach does not lend itself to wearable smart devices.
In summary, no prior technique, device or method simultaneously satisfies the following requirements for vein recognition: 1. high recognition accuracy, high reliability and strong robustness; 2. a recognition process insensitive to the user's posture and motion, with a low false rejection rate and good user experience, applicable to wearable equipment; 3. the ability to discriminate against fake vein images, high security of user identity information, support for human-machine interaction, a simple structure, convenient control, and suitability for large-scale popularization.
Disclosure of Invention
The invention provides a three-dimensional vein recognition device applied to wearable equipment, which aims to solve the following problems in existing vein recognition technology: most existing systems perform simple two-dimensional recognition, whose result is easily affected by the user's static posture and motion; fingers, wrist and palm must not bend or deflect during recognition, which greatly harms the user experience and increases the false rejection rate; two-dimensional vein images are easy to forge, so security cannot be guaranteed; and three-dimensional vein recognition devices based on binocular vision can only acquire vein information within a limited angle, while their requirement for two cameras shooting simultaneously makes them difficult to control, bulky, complex and expensive, so they cannot be readily applied to wearable smart devices.
The technical scheme adopted by the invention is as follows: the intelligent control unit is fixed to the annular outer ring unit by an interference fit between a protrusion on its rear cover and a groove on the annular outer ring unit; three annular-array grooves are formed in the inner end face of each drive control unit, and three bosses are provided on each of the two end faces of the annular outer ring main body of the annular outer ring unit, the grooves and bosses forming an interference fit so that the two drive control units are each fixedly connected to the annular outer ring unit; the edge protrusion of the main body of the annular inner ring unit is in clearance fit with the annular groove of the annular outer ring main body, so that the annular inner ring unit can move in a circle about the center of the device inside the annular outer ring unit; the image sensor unit is in clearance fit with the inner track groove of the annular outer ring main body through the convex sliding rail of its carrying device unit, and the two drive control units are connected by leads to the two ends of the image sensor unit, so that the image sensor unit moves in a circle about the center of the device in the inner track groove of the annular outer ring main body; the upper end of the image sensor unit is clamped in the non-hollowed groove of the annular inner ring main body, so that when the image sensor unit moves, the annular inner ring main body moves with it; the circular array type infrared light source unit is embedded in the circular-array grooves of the annular outer ring unit.
The intelligent control unit comprises a rear cover, a rechargeable battery, a micro-sensing antenna mounting groove, a micro-sensing antenna I, a host, a display screen, a plug and a protrusion on the rear cover; the micro-sensing antenna I is mounted in the micro-sensing antenna mounting groove and inserted into the host, the rechargeable battery is arranged in the host, the display screen and the plug are arranged on the host, and the rear cover is fixedly connected with the host.
The structure of the annular outer ring unit is as follows: the annular outer ring main body is provided with three bosses on two end faces respectively, the circular array type grooves are used for embedding circular array type infrared light sources, the inner side of the annular outer ring main body is provided with annular grooves and track grooves, and the outer side of the annular outer ring main body is provided with grooves.
The structure of the annular inner ring unit is as follows: the protective layer, the filter layer and the annular inner ring main body are fixedly connected in sequence from inside to outside; the annular protrusion at the edge of the annular inner ring main body is in clearance fit with the annular track groove on the inner side of the annular outer ring main body; the annular inner ring main body and the filter layer are both provided with annular grooves spanning an angle of 162 degrees; the annular inner ring main body is provided with a non-hollowed groove for clamping the image sensor unit; and the protective layer is annular, sealed and light-transmitting.
The structure of the image sensor unit is as follows: the upper part of the carrying device unit is clamped in the non-hollowed groove of the annular inner ring main body, one end of each of the two main shafts is in interference fit with the groove on the CCD camera main body, the other end of each of the two main shafts is in transition fit with the groove on the carrying device unit, the CCD camera main body is arranged on the carrying device unit through the main shafts, and the carrying device unit is in clearance fit with the annular track groove on the inner side of the annular outer ring main body through the convex sliding rail; the CCD camera body is internally provided with a storage chip and a micro-sensing antenna II.
The structure of the drive control unit is as follows: comprises two identical parts, wherein the structure of one part is as follows: the annular battery unit is arranged on the annular main body, three annular array grooves are formed in the inner end face of the annular main body, the motors are respectively embedded in the side faces of the annular main body, and the motors are connected with one end of the image sensor unit through wires.
The annular battery unit is composed of a plurality of annular rechargeable batteries; it is connected with the intelligent control unit through a wire and is charged through the plug of the intelligent control unit, thereby powering the motor, and the motor drives the image sensor unit in a circular motion through the wire running in the rotating groove arranged in the annular outer ring main body.
The circular array type infrared light source consists of infrared diodes with a wavelength of 850 nm and provides the infrared illumination;
the filter layer in the annular inner ring unit consists of a high-transmission filter for 800 nm to 1100 nm and a neutral gray filter;
the material used for the protective layer in the annular inner ring unit is polyurethane rubber.
The vein identification method is characterized by comprising the following steps:
The first step: acquire a plurality of two-dimensional vein images from different angles; these are vein images obtained by rotating 360 degrees around the central axis of the user's vein-bearing body part, such as the wrist, palm or a finger. During acquisition a transmission-type image sensor and an array-type near-infrared light source are positioned on opposite sides of the body part;
The second step: the image processing unit built into the intelligent control unit batch-preprocesses the acquired vein images. The processing steps include graying, Gaussian low-pass filtering and segmentation, which remove noise and spurious points from the images, extract the essential vein information, reduce the computation required for three-dimensional reconstruction, and shorten the reconstruction time;
Key points are then extracted with the Scale-Invariant Feature Transform (SIFT) and assigned local descriptors; by pairwise comparison of the feature points of the two images, multiple pairs of mutually matching feature points are found, forming the feature-point matches between images;
The third step: perform three-dimensional modeling. SIFT feature-point matching is carried out between every pair of images; two images are selected for the initial reconstruction, the remaining images are added in turn, and the feature points they share with the already-reconstructed images are incorporated into the reconstruction;
The fourth step: upload the three-dimensional vein feature point cloud to the cloud database through the micro-sensing antenna of the intelligent control unit, and finally receive the matching result returned by the database, thereby verifying the user's identity.
The invention has the following advantages. First, a brand-new annular structure is adopted: the image sensor unit acquires information over a full 360 degrees during vein image acquisition, so a large amount of vein information is captured and finally formed into a three-dimensional feature point cloud, making recognition more accurate, reliable and robust. Second, the recognition process is unaffected by the user's motion and posture, the false rejection rate is low, and the device can be applied to wearable equipment. Third, because vein images are collected over 360 degrees, the device can discriminate against fake vein images. Fourth, thanks to the intelligent control unit, user identity information is kept secure, human-machine interaction is supported, and the device is simple in structure, convenient to control and easy to popularize. Fifth, the structural design is ingenious: the edge protrusion of the annular inner ring main body is in clearance fit with the annular groove of the annular outer ring main body, so the annular inner ring unit can move in a circle about the center of the device inside the annular outer ring unit; the image sensor unit is in clearance fit with the inner track groove of the annular outer ring main body through the convex sliding rail of the carrying device unit, so it can move in a circle about the center of the device in the inner track groove together with the annular inner ring unit; meanwhile, the CCD camera body can rotate about its main shafts through a certain angle, so that during the motion the image sensor can acquire information over the full 360-degree range and as much vein information as possible in the longitudinal direction.
Drawings
FIG. 1 is a schematic diagram of the structure of the present invention;
FIG. 2 is a schematic illustration of the present invention;
FIG. 3 is a schematic diagram of the connection of the intelligent control unit and the annular outer ring unit of the present invention;
FIG. 4 is a schematic illustration of the connection of the drive control unit and the annular outer ring unit of the present invention;
FIG. 5 is a schematic illustration of the connection of the inner ring unit and the annular outer ring unit, the image sensor unit and the annular outer ring unit in the present invention;
FIG. 6 is a schematic diagram of the connection of an image sensor unit to an annular inner ring unit in the present invention;
FIG. 7 is a schematic diagram of an intelligent control unit of the present invention;
FIG. 8 is a schematic view of an annular inner ring unit and an annular outer ring unit of the present invention;
FIG. 9 is a schematic diagram of an image sensor unit of the present invention;
FIG. 10 is a schematic diagram of a portion of a drive control unit of the present invention;
FIG. 11 is a schematic diagram of the connection of the drive control unit and the image sensor unit of the present invention;
FIG. 12 is a general flowchart of an identification process of the present invention;
FIG. 13 is a flow chart of an image processing procedure of the present invention;
FIG. 14 is a SIFT feature point matching result of a frontal vein image and a back vein image of a wrist obtained by the method of the present invention;
FIG. 15 is a schematic diagram of the equivalent three-dimensional venous feature point cloud finally obtained by the invention.
Detailed Description
The intelligent control device comprises an intelligent control unit 1, an annular inner ring unit 2, an image sensor unit 3, a circular array type infrared light source unit 4, an annular outer ring unit 5 and a drive control unit 6, wherein a protrusion 108 on the rear cover of the intelligent control unit 1 is in interference fit with a groove 506 on the annular outer ring unit 5, three annular array grooves 601 are fixedly arranged on the inner end surface of the drive control unit 6, three bosses 502 are respectively arranged on two end surfaces of an annular outer ring main body 501 of the annular outer ring unit 5, and the grooves 601 are in interference fit with the bosses 502 to realize that two drive control units 6 are fixedly connected with the annular outer ring unit 5 respectively; the edge protrusion 203 of the main body 201 of the annular inner ring unit 2 is in clearance fit with the annular groove 504 of the annular outer ring main body 501, so that the annular inner ring unit 2 performs circular motion around the center of the device in the annular outer ring unit 5; the image sensor unit 3 is in clearance fit with the inner side track groove 505 of the annular outer ring main body 501 through the convex sliding rail 308 of the carrying device unit 301, and the two driving control units 6 are respectively connected with the two ends of the image sensor unit 3 through leads, so that the image sensor unit 3 performs circular motion around the center of the device in the inner side track groove 505 of the annular outer ring main body 501; the upper end of the image sensor unit 3 is clamped in the non-hollowed-out groove 202 of the annular inner ring main body 201, and when the image sensor unit 3 moves, the annular inner ring main body 201 moves along with the image sensor unit; the circular array type infrared light source unit 4 is embedded in the circular array type groove 503 of the annular outer ring unit 5.
The intelligent control unit 1 comprises a rear cover 101, a rechargeable battery 102, a micro-induction antenna mounting groove 103, a micro-induction antenna I104, a host 105, a display screen 106, a plug 107 and a protrusion 108 on the rear cover, wherein the micro-induction antenna 104 is mounted in the micro-induction antenna mounting groove 103 and inserted into the host 105, the rechargeable battery 102 is arranged in the host, the host is provided with the display screen 106 and the plug 107, and the rear cover 101 is fixedly connected with the host 105.
The host can realize the basic functions (communication, short message, internet surfing) of a general smart phone and can realize the rapid charging of the rechargeable battery 102; the opening and closing of the circular array type infrared light source unit 4 can be controlled by manual operation on the display screen 106; the driving unit 6 can be controlled by a manual operation on the display screen 106 to control the operation of the image sensor unit 3; the first micro-sensing antenna 104 is used for receiving the vein image transmitted by the second micro-sensing antenna 303 so as to perform the next processing; the intelligent control unit 1 is internally provided with an image processing unit which is mainly used for judging the number of acquired images, preprocessing the acquired images and reconstructing the images in three dimensions; the intelligent control unit 1 can be connected to the cloud database through the micro-sensing antenna 104, upload the vein three-dimensional characteristic point cloud processed by the image processing unit, and finally receive the judgment information of the identity of the holder returned by the cloud database.
The structure of the annular outer ring unit 5 is as follows: three bosses 502 are respectively arranged on two end faces of the annular outer ring main body 501, a circular array type groove 503 is used for embedding a circular array type infrared light source 4, an annular groove 504 and a track groove 505 are arranged on the inner side of the annular outer ring main body 501, and a groove 506 is arranged on the outer side of the annular outer ring main body 501.
The structure of the annular inner ring unit 2 is as follows: the protective layer 206, the filter layer 204 and the annular inner ring main body 201 are fixedly connected from inside to outside, and the annular bulge 203 at the edge of the annular inner ring main body 201 is in clearance fit with the annular track groove 504 at the inner side of the annular outer ring main body, namely the annular inner ring main body 201 can realize the function of rotating around the center of the annular inner ring main body; the annular inner ring main body 201 and the optical filter layer 204 are both provided with annular grooves 205 with an angle of 162 degrees, so that infrared light emitted by a plurality of array infrared light sources can penetrate, and the interference of other array infrared light sources on high-quality vein image acquisition is reduced; the annular inner ring main body 201 is provided with a non-hollowed groove 202 for clamping the image sensor unit 3, namely, when the image sensor unit 3 moves circularly around the center of the device, the whole annular inner ring unit 2 rotates along with the circular movement; the protection layer 206 is ring-shaped and sealed, and has a light-transmitting function.
The structure of the image sensor unit 3 is as follows: the upper part of the carrier unit 301 is clamped in the non-hollowed groove 202 of the annular inner ring main body 201, one end of each of two main shafts 304 is in interference fit with a groove 305 on the CCD camera main body 302, the other end of each main shaft is in transition fit with a groove 307 on the carrier unit 301, the CCD camera main body 302 is arranged on the carrier unit 301 through the main shafts 304, and the carrier unit 301 is in clearance fit with an annular track groove 505 on the inner side of the annular outer ring main body 501 through a convex sliding rail 308; the CCD camera body 302 has a memory chip 306 and a second micro-sensing antenna 303.
I.e. when the image sensor unit 3 moves circularly around the center of the device, the annular inner ring unit 2 will also move with it; the CCD camera main body 302 mainly refers to a USB image acquisition device using CMOS as a photosensitive element, two ends of the carrier unit 301 are driven by two motors 604 respectively, and perform circular motion along two directions along a sliding rail with the center of the annular outer ring main body 501 as the center of a circle; the two main shafts 304 are controlled by the intelligent control unit 1, so that the CCD camera main body 302 can rotate around the main shafts 304 by a certain angle, and the image sensor 3 can acquire information within a 360-degree range and venous information within a larger range as much as possible in the longitudinal direction in the moving process; the memory chip 306 is used for temporarily storing vein image information captured by the CCD camera body 302, and the second micro-sensing antenna 303 is in butt joint with the first micro-sensing antenna 104 of the intelligent control unit 1 to transmit the obtained vein image.
The drive control unit 6 has a structure that: comprises two identical parts, wherein the structure of one part is as follows: the annular battery unit 603 is installed on the annular main body 602, three annular array grooves 601 are formed in the inner end face of the annular main body, motors 604 are respectively embedded in the side faces of the annular main body 602, and the motors 604 are connected with one end of the image sensor unit 3 through leads 605.
The annular battery unit 603 is composed of a plurality of annular rechargeable batteries; it is connected with the intelligent control unit 1 through a wire and is charged through the plug 107, thereby powering the motors, and each motor drives the image sensor unit 3 in a circular motion through the wire running in the rotating groove arranged in the annular outer ring main body 501.
The circular array type infrared light source 4 consists of infrared diodes with a wavelength of about 850 nm and provides the infrared illumination;
the filter layer 204 in the annular inner ring unit 2 consists of a high-transmission filter for 800 nm to 1100 nm and a neutral gray filter;
the material used for the protective layer 206 in the annular inner ring unit 2 is polyurethane rubber.
When the three-dimensional vein recognition device operates, it is worn on the wrist. The intelligent control unit 1 first switches on the circular array type infrared light source unit 4 and then triggers the drive control unit 6, so that the two motors 604 drive the image sensor unit 3 in a circular motion about the center of the device; the image sensor unit 3 carries the annular inner ring unit 2 with it on this circular path. While the image sensor unit 3 moves, the CCD camera main body 302, which is connected to the housing of the carrying device unit 301 through the two main shafts 304 controlled by the intelligent control unit 1, rotates axially within a certain angular range to capture vein image information over a wider area; in one full revolution the unit pauses several times and captures a number of images. After a period of time the image sensor unit 3 has captured a series of wrist vein images, which are sent by the second micro-sensing antenna 303 built into the CCD camera main body 302 and received by the first micro-sensing antenna 104 built into the intelligent control unit 1. After processing by the image processing unit built into the intelligent control unit 1, a three-dimensional vein feature point cloud is obtained. The intelligent control unit 1 then connects to the cloud database through the micro-sensing antenna 104 and matches the point cloud against the three-dimensional vein feature point clouds in the database; the matching process inside the database is not described here. Finally, the micro-sensing antenna 104 in the intelligent control unit 1 receives the judgment result returned by the cloud database, and the identity of the holder is verified.
Using the three-dimensional vein recognition device described above as a carrier, the vein recognition method comprises the following steps:
The first step: acquire a plurality of two-dimensional vein images from different angles using the annular structure described above. The vein images are acquired as follows: when the palm or another vein-bearing part is irradiated with the array near-infrared light source, the hemoglobin in the venous blood, having lost its oxygen, is in the reduced state and absorbs near-infrared light in this band; relatively little light is therefore transmitted through the vessels, which appear as darker regions in the image. Tissues that contain no hemoglobin, such as interstitial fluid, do not absorb in this way: the transmitted light is normal and these regions appear brighter. The position and shape of the veins can thus be determined.
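The light/dark contrast described above follows Beer-Lambert attenuation. The sketch below only illustrates the effect; the function name and the absorption coefficients are illustrative assumptions, not values given by the patent.

```python
import math

def transmitted_intensity(i0, mu_per_cm, depth_cm):
    """Beer-Lambert law: I = I0 * exp(-mu * d).

    i0        : incident near-infrared intensity
    mu_per_cm : absorption coefficient of the tissue at ~850 nm (illustrative)
    depth_cm  : optical path length through the tissue
    """
    return i0 * math.exp(-mu_per_cm * depth_cm)

# A vein (higher absorption, reduced hemoglobin) transmits less light than
# surrounding tissue of the same thickness, so it images as a darker pixel.
vein_pixel = transmitted_intensity(1.0, mu_per_cm=0.8, depth_cm=0.5)
tissue_pixel = transmitted_intensity(1.0, mu_per_cm=0.3, depth_cm=0.5)
```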
The images from different angles are obtained by rotating 360 degrees around the central axis of the user's vein-bearing part (such as the wrist, palm or a finger); during acquisition the image sensor and the array near-infrared light source are positioned on opposite sides of the part;
The second step: the image processing unit built into the intelligent control unit 1 batch-preprocesses the acquired vein images. The processing steps include graying, Gaussian low-pass filtering and segmentation, which remove noise and spurious points from the images, extract the essential vein information, reduce the computation required for three-dimensional reconstruction, and shorten the reconstruction time;
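The graying, Gaussian low-pass filtering and segmentation steps can be sketched as follows. This is a minimal, dependency-free NumPy sketch rather than the patent's implementation; the function names and the below-mean threshold rule are assumptions (a production pipeline would more likely use an image library and an adaptive threshold).

```python
import numpy as np

def to_gray(rgb):
    """Luminance-weighted grayscale conversion (ITU-R BT.601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel for low-pass filtering."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def gaussian_filter(img, size=5, sigma=1.0):
    """Direct 2-D convolution with edge padding (slow but dependency-free)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out

def segment(img, thresh=None):
    """Binary segmentation: veins appear darker, so keep below-threshold pixels."""
    if thresh is None:
        thresh = img.mean()  # crude global threshold; an assumption
    return (img < thresh).astype(np.uint8)
```

Chaining `to_gray`, `gaussian_filter` and `segment` over a batch of frames reproduces the preprocessing order the patent lists.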
Key points are then extracted with the scale-invariant feature transform (SIFT) method and local features are attached to them; pairwise comparison of the feature points of two images finds multiple pairs of mutually matched feature points, establishing feature point matching between the images.
The third step: perform three-dimensional modeling based on structure from motion (SfM). SIFT feature points are matched between every pair of images, two images are selected for an initial reconstruction, the remaining pictures are added one by one, and their feature points that also lie on already-reconstructed images are fed into the reconstruction process.
The fourth step: the micro-sensing antenna 104 of the intelligent control unit 1 uploads the three-dimensional vein feature point cloud to the cloud database and finally receives the matching judgment result from the cloud database, thereby verifying the identity of the user.
The following description of the embodiments of the present disclosure will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present disclosure. Based on the embodiments in this disclosure, all other embodiments that a person of ordinary skill in the art would obtain without making any inventive effort are within the scope of protection of this disclosure.
Numerous specific details are set forth in the following detailed description in order to provide a thorough understanding of the present invention, however, it will be appreciated by one skilled in the art that the present disclosure may be practiced without such specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure embodiments.
Fig. 12 is a general flow chart of a three-dimensional vein recognition apparatus applied to a wearable device.
The first step is to acquire a plurality of two-dimensional vein images over 360 degrees. This series of images is obtained by the image sensor 3, which travels around the outside of the part containing the vein tissue while also rotating back and forth about its own axis by a certain angle. In operation, the three-dimensional vein recognition device is worn on the wrist. The circular array type infrared light source unit 4 is activated by the intelligent control unit 1, which then activates the driving control unit 6 so that the motor 605 moves the image sensor unit 3. The image sensor unit 3, carrying the annular inner ring unit 2, performs a circular motion centered on the center of the annular outer ring main body 501. While the image sensor unit 3 circles, the CCD camera body 302, connected to the housing of the carrier 301 through two main shafts 304 that the intelligent control unit 1 can rotate by fixed angles, performs an axial rotation within a certain angular range, thereby obtaining vein images over a larger range. After a period of time, the image sensor unit has captured a series of wrist vein images.
The second step is batch preprocessing of the vein images: graying, Gaussian low-pass filtering, and segmentation are applied to the acquired images in turn.
Fig. 13 shows the specific steps of image processing. Graying changes the image pixel values by a specified linear function. The vein images obtained in the first step are all RGB images. To simplify the image information, the three-dimensional image matrix is converted into a two-dimensional matrix, which reduces the data volume and speeds up processing; the RGB image should therefore be grayed, removing the parts of the image irrelevant to feature recognition, such as the color information. The invention adopts a weighted average method for graying: the three components R, G, and B are assigned different weights according to their importance and a weighted average is computed. Empirically, the color sensitivity of the human eye is ordered green > red > blue, so the weight coefficients are ordered accordingly: W_G > W_R > W_B. The grayscale calculation formula is therefore:
Gray = W_R·R + W_G·G + W_B·B

In the invention, the coefficients were chosen according to experimental results as W_R = 0.2989, W_G = 0.5870, W_B = 0.1140, so the grayscale calculation formula becomes:

Gray = 0.2989R + 0.5870G + 0.1140B
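As a minimal sketch of the weighted-average graying above, the formula can be applied directly in NumPy (the function name to_gray and the example pixel are illustrative, not from the patent):

```python
import numpy as np

def to_gray(rgb):
    """Weighted-average graying of an RGB image (H x W x 3, values 0-255).

    Uses the coefficients given above: W_R = 0.2989, W_G = 0.5870,
    W_B = 0.1140, with green weighted highest to match the eye's
    sensitivity order G > R > B.
    """
    weights = np.array([0.2989, 0.5870, 0.1140])
    return (rgb.astype(np.float64) @ weights).astype(np.uint8)

# A pure-green pixel maps to floor(0.5870 * 255) = 149.
pixel = np.array([[[0, 255, 0]]], dtype=np.uint8)
print(to_gray(pixel)[0, 0])  # 149
```

Casting through float64 before the weighted sum avoids uint8 overflow; the final cast back to uint8 truncates toward zero.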
Gaussian low-pass filtering is applied because, after graying, the vein information in the grayscale image is not obvious and the background noise is too strong, so the image must be filtered. Since veins occupy the high-gray-level part of the acquired image while most background noise occupies the low-gray-level part, the high-gray-level part may be enhanced before filtering; this widens the gray-level gap between the region of interest and the noise, making subsequent processing easier. From the gray-level histogram of the vein image we can see that the dominant frequencies are essentially low-frequency components. Therefore, for the filtering stage we adopt a Gaussian low-pass filter. Gaussian low-pass filtering is a frequency-domain image enhancement method whose main theoretical basis is the two-dimensional Gaussian distribution formula:
G(x,y) = (1/(2πσ²))·exp(−(x²+y²)/(2σ²))
where σ is the standard deviation of the Gaussian curve and (x, y) are spatial coordinates. When the standard deviation grows, the Gaussian becomes wider in the frequency domain and correspondingly narrower in the spatial domain.
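A frequency-domain Gaussian low-pass filter built from this formula can be sketched as follows. This is a generic illustration, not the patent's implementation; the transfer function H(u,v) = exp(−D(u,v)²/(2σ²)) is the standard frequency-domain form, with σ acting as the cutoff parameter:

```python
import numpy as np

def gaussian_lowpass(img, sigma):
    """Frequency-domain Gaussian low-pass filter.

    Builds H(u, v) = exp(-D(u,v)^2 / (2*sigma^2)), where D is the
    distance from the centre of the shifted spectrum, multiplies it
    with the centred FFT of the image, and transforms back. A larger
    sigma passes more frequencies, i.e. smooths less.
    """
    h, w = img.shape
    u = np.arange(h) - h // 2
    v = np.arange(w) - w // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    H = np.exp(-D2 / (2.0 * sigma ** 2))
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

# High-frequency noise is attenuated: the filtered variance drops,
# while the mean (the DC component, where H = 1) is preserved.
rng = np.random.default_rng(0)
noisy = rng.normal(128, 30, size=(64, 64))
smooth = gaussian_lowpass(noisy, sigma=8)
print(smooth.var() < noisy.var())  # True
```

Because H equals 1 at the spectrum centre, the image mean survives filtering unchanged, which is the expected behaviour of any low-pass filter.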
The third step is three-dimensional reconstruction of the vein feature point cloud based on a monocular structure-from-motion method:
The first stage is image matching based on the SIFT algorithm:
(1) Extracting key points: key points are feature points detected in different scale spaces of the image, carrying three attributes: scale, direction, and size. An object is accurately represented only at an appropriate scale. The main idea of scale-space theory is to apply scale transformations to the original image, obtain the scale-space representation sequence of the image at multiple scales, and extract the main contours from this sequence. To extract key points, a scale space is therefore constructed first. The Gaussian kernel is the only kernel that can generate a multi-scale space; the scale space L(x, y, σ) of an image is defined as the convolution of the original image I(x, y) with a variable-scale two-dimensional Gaussian function G(x, y, σ);
two-dimensional gaussian function of variable scale:
G(x,y,σ) = (1/(2πσ²))·exp(−(x²+y²)/(2σ²))
a scale space of one image:
L(x,y,σ)=G(x,y,σ)*I(x,y)
where (x, y) are spatial coordinates and σ is the scale coordinate. We represent the scale space with a Gaussian pyramid, whose construction has two steps: first, Gaussian-smooth the image; second, downsample it. With Gaussian filtering, one image generates several octaves, each containing several layers, so that the continuity of the image scale is maintained.
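The two-step construction (Gaussian smoothing, then downsampling) can be sketched with plain NumPy. The octave and layer counts and the base scale σ₀ = 1.6 are conventional SIFT defaults assumed here, not values stated in the patent:

```python
import numpy as np

def _gauss_blur(img, sigma):
    """Separable Gaussian blur with edge padding (NumPy only)."""
    radius = max(1, int(3 * sigma + 0.5))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()

    def conv1d(a):
        return np.convolve(np.pad(a, radius, mode="edge"), k, mode="valid")

    tmp = np.apply_along_axis(conv1d, 1, img)   # blur rows
    return np.apply_along_axis(conv1d, 0, tmp)  # then columns

def gaussian_pyramid(img, n_octaves=3, n_scales=4, sigma0=1.6):
    """Gaussian pyramid: n_octaves groups of n_scales layers, blurred
    with geometrically increasing sigma (k = 2^(1/(n_scales-1))).
    Each new octave starts from the previous octave's last layer
    downsampled by 2, preserving scale continuity."""
    k = 2.0 ** (1.0 / (n_scales - 1))
    octaves, base = [], img.astype(np.float64)
    for _ in range(n_octaves):
        layers = [_gauss_blur(base, sigma0 * k ** i) for i in range(n_scales)]
        octaves.append(layers)
        base = layers[-1][::2, ::2]  # downsample for the next octave
    return octaves

pyr = gaussian_pyramid(np.random.default_rng(1).random((64, 64)))
print(len(pyr), len(pyr[0]), pyr[1][0].shape)  # 3 4 (32, 32)
```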
Next, to efficiently detect stable key points in the scale space, the Gaussian difference scale space (DoG scale space) is used. Since the DoG value is sensitive to noise and edges, the key points are then precisely located by fitting the DoG function; a detector similar to the Harris corner detector is used here. The Taylor expansion of the DoG function in scale space is:
D(x) = D + (∂Dᵀ/∂x)·x + (1/2)·xᵀ·(∂²D/∂x²)·x
Differentiating the above formula and setting the derivative to zero gives the exact position, i.e. the extreme point:
x̂ = −(∂²D/∂x²)⁻¹·(∂D/∂x)
Among the detected feature points, those with low contrast must be removed. Substituting the extreme point back into the Taylor expansion of the DoG function and keeping only the first two terms gives:
D(x̂) = D + (1/2)·(∂Dᵀ/∂x)·x̂
Unstable, low-contrast extreme points are removed with this equation: experiments show that all extreme points with |D(x̂)| smaller than 0.03 can be discarded.
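A sketch of extremum detection with the 0.03 contrast cutoff quoted above; the 26-neighbour comparison is the standard DoG test, and the sub-pixel Taylor refinement is omitted for brevity, so treat this as a simplified illustration rather than the full procedure:

```python
import numpy as np

def dog_extrema(dog, contrast_thresh=0.03):
    """Find scale-space extrema in a DoG stack, dropping low-contrast ones.

    dog: 3-D array (scale, y, x) of difference-of-Gaussian images with
    values normalised to [0, 1]. A point is kept if |D| >= 0.03 and it
    is the max or min of its 3x3x3 neighbourhood (26 neighbours).
    """
    s, h, w = dog.shape
    keypoints = []
    for i in range(1, s - 1):
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                v = dog[i, y, x]
                if abs(v) < contrast_thresh:
                    continue  # low-contrast: discard
                patch = dog[i - 1:i + 2, y - 1:y + 2, x - 1:x + 2]
                if v == patch.max() or v == patch.min():
                    keypoints.append((i, y, x))
    return keypoints

dog = np.zeros((3, 5, 5))
dog[1, 2, 2] = 0.5  # one strong, well-contrasted response
print(dog_extrema(dog))  # [(1, 2, 2)]
```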
(2) Removing edge response. A poorly defined difference-of-Gaussian extremum has a large principal curvature across the edge and a small one perpendicular to it. The principal curvatures of the difference-of-Gaussian operator are found with a 2×2 Hessian matrix H, the derivatives being estimated by differences in the neighborhood of the sample point:
H = [ D_xx  D_xy ]
    [ D_yx  D_yy ]
where D_xx is the second derivative in the x direction of an image at a certain scale in the DoG pyramid; D_xy is the derivative taken first in the x direction and then in the y direction; D_yx is the derivative taken first in the y direction and then in the x direction; and D_yy is the second derivative in the y direction.
The principal curvatures of D are proportional to the eigenvalues of H. To avoid computing the eigenvalues directly, only their ratio is considered. Let α be the larger eigenvalue and β the smaller; then:
Tr(H) = D_xx + D_yy = α + β
Det(H) = D_xx·D_yy − (D_xy)² = α·β
let α=r·β, then:
Tr(H)²/Det(H) = (α+β)²/(α·β) = (r·β+β)²/(r·β²) = (r+1)²/r
The quantity (r+1)²/r depends only on the ratio of the two eigenvalues; it reaches its minimum when they are equal and increases with increasing r. Therefore, to check whether the ratio of principal curvatures is below some threshold r, it suffices to test:
Tr(H)²/Det(H) ≥ (r+1)²/r

If this inequality holds, the key point is eliminated; otherwise it is kept. Here, r = 10 is taken.
Adding local features to the key points: after the feature points in each image are determined in the previous step, the stable direction of the local structure is obtained with an image gradient method. For a key point detected in the DoG pyramid, the gradient and direction distribution of the pixels in a 3σ-neighborhood window of the Gaussian pyramid image at the key point's scale are collected. The modulus and direction of the gradient at (x, y) are:
m(x,y) = sqrt[(L(x+1,y) − L(x−1,y))² + (L(x,y+1) − L(x,y−1))²]

θ(x,y) = arctan[(L(x,y+1) − L(x,y−1)) / (L(x+1,y) − L(x−1,y))]
where the scale used for L is the scale of the respective key point. At this point the image key points are detected, each with three pieces of information (position, scale, direction) that determine a SIFT feature region. After the key-point gradients are computed, a histogram collects the gradients and directions of the neighborhood pixels. The gradient histogram divides the 0-360 degree range into 36 bins, and the peak bin of the histogram gives the main direction of the key point. To enhance matching robustness, any direction whose peak exceeds 80% of the main-direction peak is also kept as an auxiliary direction of the key point. The key points with detected position, scale, and direction are the SIFT feature points of the image;
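The 36-bin orientation histogram and the 80% auxiliary-peak rule can be sketched like this; Gaussian weighting of the magnitudes and parabolic peak interpolation are omitted, so treat it as a simplified illustration:

```python
import numpy as np

def keypoint_orientations(mags, angles, peak_ratio=0.8):
    """Assign main and auxiliary orientations from neighbourhood gradients.

    mags/angles: gradient magnitudes and directions (degrees, 0-360) of
    the pixels in a keypoint's neighbourhood. Builds the 36-bin
    histogram; the peak bin gives the main direction, and any bin at or
    above 80% of the peak is kept as an auxiliary direction.
    """
    hist = np.zeros(36)
    for m, a in zip(mags, angles):
        hist[int(a // 10) % 36] += m  # 10-degree bins
    peak = hist.max()
    return [i * 10 for i in range(36) if hist[i] >= peak_ratio * peak]

# Dominant direction near 90 deg, secondary near 200 deg at 85% strength:
# both survive the 80% rule.
print(keypoint_orientations([1.0, 0.85], [95.0, 205.0]))  # [90, 200]
```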
Next, a descriptor is built for each key point in the form of a set of vectors. This descriptor does not change under transformations such as illumination changes and viewpoint changes; it is the SIFT descriptor.
Lowe's experimental results show that a descriptor characterized by a 4×4×8 = 128-dimensional vector gives the best overall effect. The 128-dimensional key-point descriptor is generated as follows:
(1) First the coordinate axes are rotated to the direction of the key point to ensure rotation invariance. An 8×8 window is centered on the key point. Each cell represents one pixel in the scale space of the key point's neighborhood; the gradient magnitude and direction of each pixel are obtained with the formulas above, the arrow direction representing the pixel's gradient direction and the arrow length its gradient modulus. A Gaussian window then weights the gradient moduli, so pixels closer to the key point contribute more gradient direction information.
(2) Gradient direction histograms with 8 directions are then computed on each 4×4 block, and the accumulated value of each gradient direction is drawn to form a seed point. A key point is thus composed of 2×2 = 4 seed points, each carrying 8 direction-vector components. This pooling of neighborhood direction information strengthens the algorithm's noise immunity and also gives feature matching better tolerance to positioning errors.
(3) Finally, the gradient of every pixel in a 16×16 window around the key point is computed, and a Gaussian weighting function reduces the weights away from the center. In each of the sixteen 4×4 sub-regions, a gradient direction histogram is computed by adding each weighted gradient value to one of the histogram's 8 direction bins.
Thus, a 4×4×8 = 128-dimensional descriptor is formed for each feature, each dimension representing the scale/orientation of one cell of the 4×4 grid; normalizing the vector further removes the influence of illumination.
(3) Matching SIFT feature points between every two images:
The Euclidean distance between key-point feature vectors is adopted as the similarity measure for key points in the two images. Take a certain key point in the left part of Fig. 14 and find the two key points with the smallest Euclidean distance in the right part of Fig. 14; if the closest distance divided by the second-closest distance is less than a certain ratio threshold, the pair of matching points is accepted. Lowering this ratio threshold reduces the number of SIFT matching points but makes them more stable. To exclude key points that have no match because of image occlusion and background clutter, Lowe proposed comparing the nearest-neighbor distance with the second-nearest-neighbor distance: a distance ratio smaller than a certain threshold is considered a correct match. For a false match, the high dimensionality of the feature space means there may be many other false matches at similar distances, so its ratio value is relatively high. Lowe's recommended ratio threshold is 0.8. However, after matching a large number of image pairs with arbitrary scale, rotation, and brightness changes, the results show the ratio is optimal between 0.4 and 0.6: below 0.4 there are too few matching points, above 0.6 there are many false matches. The SIFT feature points of the two images are thus matched.
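Lowe's ratio test described above can be sketched with a brute-force nearest-neighbour search in NumPy (real systems usually use a k-d tree); the 0.6 threshold follows the optimal range reported above:

```python
import numpy as np

def ratio_test_match(desc1, desc2, ratio=0.6):
    """Match two sets of descriptors with Lowe's ratio test.

    For each descriptor in desc1, find its two nearest neighbours in
    desc2 by Euclidean distance and accept the match only if
    nearest / second-nearest < ratio.
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j, k = np.argsort(dists)[:2]  # nearest and second-nearest
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches

# Toy 2-D "descriptors": only the unambiguous candidate is accepted.
desc1 = np.array([[0.0, 0.0]])
desc2 = np.array([[0.1, 0.0], [5.0, 5.0], [1.0, 0.0]])
print(ratio_test_match(desc1, desc2))  # [(0, 0)]
```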
The second stage is reconstructing the vein three-dimensional feature point cloud with an SfM algorithm, comprising the following steps:
(1) Two images are selected for reconstruction. The selection criterion is a pair with enough matching feature points and a sufficient baseline. Therefore, the homography between each pair of images is first estimated with RANSAC to obtain the inliers, and the image pair with the fewest inliers but no fewer than 100 matches is selected as the initial input. After the initial pair is chosen, the extrinsic parameters of the two images and the intrinsic parameters of the camera are obtained with the five-point method and then optimized by bundle adjustment; this optimization idea runs through the whole reconstruction process. The space point coordinates are then solved from the obtained parameters and the matching relation of the two images. (Because points on an image correspond one-to-many to space points, at least two images are required to find the coordinates of a space point.)
(2) Adding other pictures. The requirement is that the added image contain the most matches to the already reconstructed three-dimensional points. This reduces to the calibration problem of estimating camera intrinsic and extrinsic parameters from known three-dimensional points and their corresponding points on the two-dimensional image. The camera's intrinsic and extrinsic parameters are first estimated with the KLT method and then optimized by bundle adjustment.
Feature points of the new image that also lie on already reconstructed images are added to the reconstruction process. After the reconstruction is finished, a global bundle adjustment is carried out, finally yielding the vein three-dimensional feature point cloud.
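The core of step (1) — recovering a space-point coordinate from two views once the camera parameters are known, which is why at least two images are required — can be illustrated with linear (DLT) triangulation. The camera matrices in the example are synthetic, and the patent's actual pipeline (five-point method plus bundle adjustment) is not reproduced here:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2: 3x4 camera projection matrices of the chosen image pair;
    x1, x2: the matched 2-D feature point in each image. Each view
    contributes two linear constraints; the homogeneous system is
    solved by SVD (the null-space vector of A).
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenise

# Two axis-aligned cameras a unit baseline apart observing (0, 0, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.0, 0.0, 5.0])
x1 = (P1 @ np.append(X_true, 1)); x1 = x1[:2] / x1[2]
x2 = (P2 @ np.append(X_true, 1)); x2 = x2[:2] / x2[2]
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # True
```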
The fourth step is matching and identification of the three-dimensional vein feature point cloud: the point cloud is uploaded to the cloud database through the micro-sensing antenna 104 of the intelligent control unit 1. The specific matching process inside the cloud database is not described here. Finally, the micro-sensing antenna 104 of the intelligent control unit receives the matching judgment information returned by the cloud database, which is used to verify the identity of the user.
Fig. 12 is a flowchart of the control system of the three-dimensional vein recognition method applied to a wearable device. The motion structure of the image sensor 3 is first driven to a specific position and the position information is stored. The image sensor unit 3 is then started to acquire vein information. Next, the vein image is transmitted to the intelligent control unit 1 by pairing the micro-sensing antenna 303 of the image sensor unit 3 with the micro-sensing antenna 104 of the intelligent control unit. The image processing unit built into the intelligent control unit 1 then checks whether the total rotation angle has reached 360 degrees; if not, the motion structure of the image sensor 3 is driven to the next position and acquisition continues. Once acquisition is complete, image processing is performed. The resulting three-dimensional vein feature point cloud is then compared with the information in the cloud database, and it is judged whether the operation is an information registration or requires matching. For registration, the vein and position information is stored directly. For matching after registration, the original registration information is retrieved for comparison: authentication succeeds if the requirements are met and fails otherwise.
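The acquisition loop described above can be sketched as follows. The 30-degree stop interval and the drive_to/capture interfaces are hypothetical placeholders; the patent only states that the sensor stops several times per revolution:

```python
STEP_DEG = 30  # hypothetical stop interval; the patent does not fix a value

def acquire_revolution(drive_to, capture):
    """Control-loop sketch for one acquisition revolution.

    drive_to(angle) positions the sensor; capture() returns one vein
    image. Images are collected until the total rotation reaches 360
    degrees, mirroring the flow described above. Both callables stand
    in for the device's motor and camera interfaces.
    """
    images, angle = [], 0
    while angle < 360:
        drive_to(angle)                 # move sensor, store position
        images.append((angle, capture()))
        angle += STEP_DEG               # advance to the next stop
    return images

shots = acquire_revolution(lambda a: None, lambda: "frame")
print(len(shots))  # 12
```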
Simulation verification
The purpose of the experiment: to verify the feasibility of the three-dimensional vein structure and its recognition method.
Experimental scenario and equipment: the experimental scene is selected from a laboratory with normal light and normal temperature; the experimental equipment comprises an experimental platform, a simplified three-dimensional vein recognition device and a computer.
The experimental steps are as follows: 1. an experiment platform is built; 2. connecting the simplified three-dimensional vein recognition device with a computer so that the information of the image can be directly observed on the computer in real time; 3. the experimenter stretches the wrist into the experimental device to perform a series of vein image acquisition.
Experimental results: Fig. 14 shows two images of the front and back of the wrist obtained by the method of the present invention. The images were grayed and Gaussian low-pass filtered, and SIFT-based feature point extraction and matching were performed. 631 feature points were extracted from the vein image of the front of the wrist (left side of Fig. 14) and 4247 from the vein image of the back of the wrist (right side of Fig. 14); only 2 pairs of feature points were finally matched. This proves that the number and distribution of feature points extracted from the front and back of the wrist differ markedly, i.e. the two images provide two completely different sets of information for building the three-dimensional vein feature point cloud. It further illustrates that the method provided by the present invention obtains more vein information, which benefits the construction and matching of the three-dimensional vein feature point cloud and finally yields a more accurate matching result. Fig. 15 is a schematic diagram of an equivalent three-dimensional vein feature point cloud obtained by the three-dimensional vein recognition device.
Conclusion of the experiment: the three-dimensional vein structure and its recognition method are feasible; the device and method obtain more vein information, greatly improving the accuracy of identification and judgment; and the device is low-cost and structurally simple, with very good prospects for popularization and application.

Claims (7)

1. Be applied to three-dimensional vein recognition device of wearable equipment, its characterized in that: the intelligent control unit is in interference fit with grooves on the annular outer ring unit by virtue of protrusions on a rear cover of the intelligent control unit, three annular array grooves are inherently formed in the inner end face of the drive control unit, three bosses are respectively arranged on two end faces of an annular outer ring main body of the annular outer ring unit, the grooves are in interference fit with the bosses, and the two drive control units are respectively fixedly connected with the annular outer ring unit; the edge bulge of the main body of the annular inner ring unit is in clearance fit with the annular groove of the annular outer ring main body, so that the annular inner ring unit performs circular motion around the center of the device in the annular outer ring unit; the image sensor unit is in clearance fit with the inner side track groove of the annular outer ring main body through the convex sliding rail of the carrying device unit, and the two driving control units are respectively connected with the two ends of the image sensor unit through leads, so that the image sensor unit performs circular motion around the center of the device in the inner side track groove of the annular outer ring main body; the upper end of the image sensor unit is clamped in the non-hollowed-out groove of the annular inner ring main body, and when the image sensor unit moves, the annular inner ring main body moves along with the image sensor unit; the circular array type infrared light source unit is embedded in the circular array type groove of the annular outer ring unit;
The structure of the annular outer ring unit is as follows: three bosses are respectively arranged on two end faces of the annular outer ring main body, the circular array type grooves are used for embedding circular array type infrared light sources, the inner side of the annular outer ring main body is provided with an annular groove and a track groove, and the outer side of the annular outer ring main body is provided with a groove;
the structure of the annular inner ring unit is as follows: the device comprises a protective layer, a filter layer and an annular inner ring main body which are fixedly connected from inside to outside, wherein annular protrusions at the edge of the annular inner ring main body are in clearance fit with annular track grooves at the inner side of an annular outer ring main body, annular grooves with an angle of 162 DEG are formed in the annular inner ring main body and the filter layer, a non-hollowed-out groove is formed in the annular inner ring main body and used for clamping an image sensor unit, and the protective layer is annular and sealed and has a light transmission function;
the structure of the image sensor unit is as follows: the upper part of the carrying device unit is clamped in the non-hollowed groove of the annular inner ring main body, one end of each of the two main shafts is in interference fit with the groove on the CCD camera main body, the other end of each of the two main shafts is in transition fit with the groove on the carrying device unit, the CCD camera main body is arranged on the carrying device unit through the main shafts, and the carrying device unit is in clearance fit with the annular track groove on the inner side of the annular outer ring main body through the convex sliding rail; the CCD camera body is internally provided with a storage chip and a micro-sensing antenna II.
2. A three-dimensional vein recognition apparatus for use in a wearable device as defined in claim 1, wherein: the intelligent control unit comprises a rear cover, a rechargeable battery, a micro-sensing antenna mounting groove, a micro-sensing antenna I, a host, a display screen, a plug and a bulge on the rear cover, wherein the micro-sensing antenna is mounted in the micro-sensing antenna mounting groove and inserted into the host, the rechargeable battery is arranged in the host, the display screen and the plug are arranged on the host, and the rear cover is fixedly connected with the host.
3. A three-dimensional vein recognition apparatus for use in a wearable device as defined in claim 1, wherein: the filter layer in the annular inner ring unit is a high-transmission filter with the wavelength of 800 nm-1100 nm and a neutral gray filter.
4. A three-dimensional vein recognition apparatus for use in a wearable device as defined in claim 1, wherein: the material used for the protective layer in the annular inner ring unit is polyurethane rubber.
5. A three-dimensional vein recognition apparatus for use in a wearable device as defined in claim 1, wherein: the structure of the drive control unit is as follows: comprises two identical parts, wherein the structure of one part is as follows: the annular battery unit is arranged on the annular main body, three annular array grooves are formed in the inner end face of the annular main body, and the motors are respectively embedded on the side faces of the annular main body and are connected with one end of the image sensor unit through leads;
The annular battery unit is composed of a plurality of annular rechargeable batteries, and is connected with the intelligent control unit through a lead wire and charged through a plug of the intelligent control unit, so that the motor is powered.
6. A three-dimensional vein recognition apparatus for use in a wearable device as defined in claim 1, wherein: the circular array type infrared light source is an infrared diode with the wavelength of 850nm and is used for providing infrared light source irradiation.
7. A vein recognition method employing a three-dimensional vein recognition apparatus applied to a wearable device as claimed in claim 1, characterized by comprising the steps of:
the first step: acquiring a plurality of two-dimensional vein images with different angles, wherein the different angles are vein images obtained by rotating 360 degrees around the central axis of a vein tissue part of a user, such as a wrist, a palm or a finger, and the acquisition method is that a transmission type image sensor and an array type near infrared light source are respectively positioned at two sides of the vein tissue part;
and a second step of: the image processing unit arranged in the intelligent control unit is used for carrying out batch pretreatment on the acquired vein images, and the processing steps comprise: graying, gaussian low-pass filtering and segmentation, removing noise and miscellaneous points in an image, extracting the most important vein information, reducing the operand required by three-dimensional reconstruction and shortening the three-dimensional reconstruction time;
Extracting key points by using a Scale Invariant Feature Transform (SIFT), adding local features to the key points, and finding out a plurality of pairs of feature points matched with each other through pairwise comparison of the feature points of the two sides to form feature point matching between the images;
and a third step of: performing three-dimensional modeling, performing SIFT feature point matching between every two images, selecting two images for reconstruction, adding other pictures, and adding other feature points on the images, which are also on the reconstructed images, into a reconstruction process for reconstruction;
fourth step: and uploading the venous three-dimensional characteristic point cloud to a cloud database through a micro-sensing antenna of the intelligent control unit, and finally receiving the matching judgment result information of the cloud database, thereby verifying the identity of the user.
CN201710777503.3A 2017-08-31 2017-08-31 Be applied to three-dimensional vein recognition device of wearable equipment Active CN107480654B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710777503.3A CN107480654B (en) 2017-08-31 2017-08-31 Be applied to three-dimensional vein recognition device of wearable equipment

Publications (2)

Publication Number Publication Date
CN107480654A CN107480654A (en) 2017-12-15
CN107480654B true CN107480654B (en) 2023-05-23

Family

ID=60603457


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SG10201808304WA (en) * 2018-09-24 2020-04-29 Nat Univ Hospital Singapore Pte Ltd System and method for identifying veins
CN109740561A (en) * 2019-01-11 2019-05-10 重庆工商大学 Three-dimensional finger vein imaging system based on monocular camera
CN111160247B (en) * 2019-12-28 2023-05-12 智冠一掌通科技(深圳)有限公司 Method for three-dimensional modeling and identification by scanning palm vein
CN113194229B (en) * 2021-04-21 2022-02-08 深圳市天擎数字有限责任公司 Dynamic video acquisition device for acquiring new media materials
CN113709434B (en) * 2021-08-31 2024-06-28 维沃移动通信有限公司 Projection bracelet and projection control method and device thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0897754A (en) * 1994-09-22 1996-04-12 Dainippon Printing Co Ltd Signal transmitter
WO2013157236A1 (en) * 2012-04-18 2013-10-24 Fujifilm Corporation 3D model data generation device, method and program

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7327872B2 (en) * 2004-10-13 2008-02-05 General Electric Company Method and system for registering 3D models of anatomical regions with projection images of the same
US9095285B2 (en) * 2013-04-11 2015-08-04 Yaroslav Ryabov Portable biometric identification device using a dorsal hand vein pattern
CN207337420U (en) * 2017-08-31 2018-05-08 Jilin University Three-dimensional vein identification device applied to a wearable device


Also Published As

Publication number Publication date
CN107480654A (en) 2017-12-15

Similar Documents

Publication Publication Date Title
CN107480654B (en) Three-dimensional vein recognition device applied to wearable equipment
Lu et al. Soft tissue feature tracking based on deep matching network
CN106919941B (en) Three-dimensional finger vein identification method and system
Haque et al. Recurrent attention models for depth-based person identification
CN109558764B (en) Face recognition method and device and computer equipment
Zhou et al. Recent advances on singlemodal and multimodal face recognition: a survey
CN108182397B (en) Multi-pose multi-scale human face verification method
CN106897675A (en) Face liveness detection method combining binocular vision depth features with appearance features
WO2016010721A1 (en) Multispectral eye analysis for identity authentication
WO2016010720A1 (en) Multispectral eye analysis for identity authentication
Chen et al. Invariant description and retrieval of planar shapes using radon composite features
Garagad et al. A novel technique of iris identification for biometric systems
Soh et al. A review: Personal identification based on palm vein infrared pattern
CN100495427C (en) Human ear detection under complex backgrounds and method for fusing multiple information sources
Bastias et al. A method for 3D iris reconstruction from multiple 2D near-infrared images
CN113674395B (en) 3D hand lightweight real-time capturing and reconstructing system based on monocular RGB camera
Glandon et al. 3d skeleton estimation and human identity recognition using lidar full motion video
Yang et al. A novel system and experimental study for 3D finger multibiometrics
Taha et al. Iris features extraction and recognition based on the local binary pattern technique
CN110866235B (en) Identity recognition method and device for simultaneously capturing human pulse and vein images
CN101617338B (en) Object shape generating method, object shape generating device and program
Benalcazar et al. Iris recognition: comparing visible-light lateral and frontal illumination to NIR frontal illumination
Gayathri et al. Low cost hand vein authentication system on embedded linux platform
Agarwal et al. A review on vein biometric recognition using geometric pattern matching techniques
Huang et al. Optimizing features quality: a normalized covariance fusion framework for skeleton action recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant