CN111738076A - Non-contact palm print and palm vein identification method and device - Google Patents


Info

Publication number
CN111738076A
CN111738076A (application CN202010419832.2A)
Authority
CN
China
Prior art keywords: palm; image; distance; palm print; characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010419832.2A
Other languages
Chinese (zh)
Inventor
刘治
曹艳坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202010419832.2A priority Critical patent/CN111738076A/en
Publication of CN111738076A publication Critical patent/CN111738076A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/14 Vascular patterns

Abstract

The invention belongs to the field of palm print and palm vein recognition, and in particular relates to a non-contact palm print and palm vein recognition method and device. The method comprises: measuring the distance between the subject and an image acquisition module; when that distance falls within a preset range, acquiring a palm print image and a palm vein image, and extracting palm print features and palm vein features from them respectively; and fusing the palm print features and the palm vein features into a fused feature, which is then matched against the features in a preset database to obtain a recognition result.

Description

Non-contact palm print and palm vein identification method and device
Technical Field
The invention belongs to the field of palm print and palm vein recognition, and particularly relates to a non-contact palm print and palm vein recognition method and device.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Because the palm print is unique, it can be used for identity recognition. Palm print recognition has emerged in recent years as an important complement to existing biometric technologies. However, it also has drawbacks: palm print features are exposed on the body surface and are easy to capture and forge illicitly; palm print recognition provides no liveness verification; and, to guarantee accurate cropping of the palm feature region, palm prints (like fingerprints) are usually captured with contact devices, which poses a hygiene risk when palm print recognition is used in public places.
Besides palm prints, palm veins are also used for individual identification. In vein recognition, the palm is illuminated by a near-infrared light source; because hemoglobin in the blood strongly absorbs near-infrared light, the subcutaneous veins appear as darker lines in the captured image, and these vein features can be used for identity recognition. Image acquisition is non-invasive, and the vein pattern cannot be captured under visible light, giving the technique strong concealment and anti-counterfeiting properties as well as an inherent liveness verification function (only a living palm exhibits vein features).
The inventors found that palm vein acquisition often ignores the influence of the distance between the subject and the image acquisition module on recognition accuracy. When the subject is too far from the module, the acquired palm vein image is blurred, which degrades accuracy; when the subject is too close, only a partial palm print and palm vein image can be acquired, important features may be missed, and accuracy again suffers.
Disclosure of Invention
In order to solve the above problems, the present invention provides a non-contact palm print and palm vein recognition method and device that take the distance between the subject and the image acquisition module into account: palm print and palm vein images are acquired only when that distance falls within a preset range, which avoids blurred or incomplete images and improves the accuracy of image recognition.
To this end, the invention adopts the following technical scheme:
the first aspect of the present invention provides a non-contact palm print and palm vein recognition method, which includes:
measuring the distance between the subject and the image acquisition module;
when the distance falls within a preset distance range, acquiring a palm print image and a palm vein image, and extracting palm print features and palm vein features from them respectively;
and fusing the palm print features and the palm vein features to obtain a fused feature, then matching the fused feature against the features in a preset database to obtain a recognition result.
As an embodiment, it is determined whether the distance between the subject and the image acquisition module falls within the preset distance range:
if so, an image-acquisition start command is sent to the image acquisition module; otherwise, an alert prompts the subject to adjust the distance until it falls within the preset distance range.
As an embodiment, the preset distance range is set as follows:
measuring a series of distances between the subject and the image acquisition module, from near to far;
acquiring a palm print image and a palm vein image at each measured distance;
training a preset neural network with the distance as the label and the palm print and palm vein images as the training set, and recording the recognition accuracy;
and comparing the accuracy at each distance, taking the range of distances whose recognition accuracy is not less than a preset threshold as the preset distance range.
This has the advantage that, by training the preset neural network with the distance as the label and the palm print and palm vein images as the training set, the distances that achieve the preset recognition accuracy can be determined, which improves the completeness and clarity of image acquisition and thus the accuracy of image recognition.
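The selection step above can be sketched as follows. This is an illustrative reading of the patent's calibration procedure, not its implementation: the per-distance accuracies, the threshold, and the function name are all assumptions.

```python
# Hypothetical sketch of the preset-distance-range selection described above:
# given recognition accuracy measured at each calibration distance, keep the
# distances whose accuracy is not less than a preset threshold.
# All names and numbers here are illustrative, not from the patent.

def select_distance_range(accuracy_by_distance, threshold):
    """Return (min_distance, max_distance) over distances whose
    recognition accuracy is not less than `threshold`, or None."""
    good = [d for d, acc in sorted(accuracy_by_distance.items()) if acc >= threshold]
    if not good:
        return None
    return (good[0], good[-1])

# Example: accuracies measured from near to far (distance in cm -> accuracy)
measured = {10: 0.62, 20: 0.91, 30: 0.97, 40: 0.95, 50: 0.78}
print(select_distance_range(measured, 0.90))  # -> (20, 40)
```

In this sketch the acceptable range is the span of calibration distances that clear the threshold; a production system would likely also check that the span is contiguous.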
In one embodiment, a parallel convolutional neural network is adopted to extract the palm print features and the palm vein features from the palm print image and the palm vein image, respectively.
This has the advantage that processing the palm print image and the palm vein image in parallel branches speeds up image processing, keeps the two images from influencing each other, and improves recognition efficiency.
As an embodiment, a weight fusion network is used to fuse the palm print features and the palm vein features into a fused feature. The weight fusion network is trained by weighting the palm print and palm vein features with weight matrices that are learned so as to minimize the overall loss.
This has the advantage that fusing the palm print and palm vein features through the weight fusion network combines the strengths of both features and improves the fusion effect and recognition accuracy.
A second aspect of the present invention provides a non-contact palm print and palm vein recognition system, including:
a distance measurement module for measuring the distance between the subject and the image acquisition module;
a data processing center module for acquiring a palm print image and a palm vein image when the distance between the subject and the image acquisition module falls within a preset distance range, and extracting palm print features and palm vein features from them respectively;
and a feature fusion module for fusing the palm print features and the palm vein features into a fused feature and matching it against the features in a preset database to obtain a recognition result.
As an embodiment, in the distance measurement module, it is determined whether the distance between the subject and the image acquisition module falls within the preset distance range:
if so, an image-acquisition start command is sent to the image acquisition module; otherwise, an alert prompts the subject to adjust the distance until it falls within the preset distance range.
As an embodiment, in the distance measurement module, the preset distance range is set as follows:
measuring a series of distances between the subject and the image acquisition module, from near to far;
acquiring a palm print image and a palm vein image at each measured distance;
training a preset neural network with the distance as the label and the palm print and palm vein images as the training set, and recording the recognition accuracy;
and comparing the accuracy at each distance, taking the range of distances whose recognition accuracy is not less than a preset threshold as the preset distance range.
This has the advantage that, by training the preset neural network with the distance as the label and the palm print and palm vein images as the training set, the distances that achieve the preset recognition accuracy can be determined, which improves the completeness and clarity of image acquisition and thus the accuracy of image recognition.
In one embodiment, in the data processing center module, a parallel convolutional neural network is adopted to extract the palm print features and the palm vein features from the palm print image and the palm vein image, respectively.
This has the advantage that processing the palm print image and the palm vein image in parallel branches speeds up image processing, keeps the two images from influencing each other, and improves recognition efficiency.
As an implementation, in the data processing center module, a weight fusion network is used to fuse the palm print features and the palm vein features into a fused feature. The weight fusion network is trained by weighting the palm print and palm vein features with weight matrices that are learned so as to minimize the overall loss.
This has the advantage that fusing the palm print and palm vein features through the weight fusion network combines the strengths of both features and improves the fusion effect and recognition accuracy.
As an embodiment, the non-contact palm print and palm vein recognition system further includes:
a voice prompt module for distance prompts and recognition result prompts.
As an embodiment, the non-contact palm print and palm vein recognition system further includes:
a recognition result module for displaying and broadcasting the recognition result.
A third aspect of the invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
measuring the distance between the subject and the image acquisition module;
when the distance falls within a preset distance range, acquiring a palm print image and a palm vein image, and extracting palm print features and palm vein features from them respectively;
and fusing the palm print features and the palm vein features to obtain a fused feature, then matching the fused feature against the features in a preset database to obtain a recognition result.
Compared with the prior art, the invention has the following beneficial effects:
(1) acquisition and recognition require no contact with the user, making the device more convenient, safe and hygienic to use;
(2) the distance between the subject and the image acquisition module is taken into account: palm print and palm vein images are acquired only when that distance falls within the preset range, which avoids blurred or incomplete images and improves the accuracy of image recognition;
(3) combining palm print information with palm vein information greatly improves the system's recognition accuracy, liveness verification and anti-counterfeiting capability.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
Fig. 1 is a flowchart of a non-contact palm print and palm vein recognition method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a distance determination between a subject and an image capture module according to an embodiment of the present invention;
FIG. 3 is a flow chart of palm print image and palm vein image recognition according to an embodiment of the present invention;
fig. 4 is a schematic view of a non-contact palm print palm vein recognition system according to an embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
In the present invention, terms such as "upper", "lower", "left", "right", "front", "rear", "vertical", "horizontal", "side", "bottom", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only terms of relationships determined for convenience of describing structural relationships of the parts or elements of the present invention, and are not intended to refer to any parts or elements of the present invention, and are not to be construed as limiting the present invention.
In the present invention, terms such as "fixedly connected" and "connected" are to be understood in a broad sense, meaning a fixed, integral or detachable connection, which may be direct or indirect through an intermediary. The specific meanings of these terms can be determined by persons skilled in the relevant field according to the specific situation, and are not to be construed as limiting the present invention.
Example one
Fig. 1 shows a flowchart of the non-contact palm print and palm vein recognition method of the present embodiment. As shown in fig. 1, the method includes:
S101: measuring the distance between the subject and the image acquisition module.
The measured distance is compared with the preset distance range to judge whether it falls within that range:
if so, an image-acquisition start command is sent to the image acquisition module; otherwise, an alert prompts the subject to adjust the distance until it falls within the preset distance range;
specifically, the preset distance range is set as follows:
measuring a series of distances between the subject and the image acquisition module, from near to far;
acquiring a palm print image and a palm vein image at each measured distance;
training a preset neural network with the distance as the label and the palm print and palm vein images as the training set, and recording the recognition accuracy;
and comparing the accuracy at each distance, taking the range of distances whose recognition accuracy is not less than a preset threshold as the preset distance range.
The distance between the subject and the image acquisition module is calculated as follows:
if t is the time required for light to travel once back and forth through the air between points A and B at speed c, then the distance D between A and B is
D = c·t/2 (1)
where D is the distance between points A and B, c is the speed of light in the atmosphere, and t is the time for one round trip between A and B.
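A quick numeric check of formula (1): since the laser pulse traverses the path twice, the one-way distance is half the round-trip path. The function name and example timing below are illustrative.

```python
# Worked example of formula (1), D = c*t/2: the pulse travels to the target
# and back, so the one-way distance is half the round-trip path length.
C = 299_792_458.0  # speed of light in vacuum, m/s (in air it is ~0.03% lower)

def tof_distance(round_trip_seconds):
    """Distance to the target given the measured round-trip time."""
    return C * round_trip_seconds / 2.0

# A round trip of 2 nanoseconds corresponds to roughly 0.3 m
print(round(tof_distance(2e-9), 3))
```

At the centimeter scales relevant here, the round-trip times are well under a nanosecond, which is why phase-shift or triangulation rangefinders are common in practice.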
Training the preset neural network with the distance as the label and the palm print and palm vein images as the training set yields the distances that achieve the preset recognition accuracy, which improves the completeness and clarity of image acquisition and thus the accuracy of image recognition.
S102: acquiring a palm print image and a palm vein image, and extracting palm print features and palm vein features from them respectively.
In the present embodiment, a parallel convolutional neural network is adopted to extract the palm print features and the palm vein features from the palm print image and the palm vein image, respectively, as shown in fig. 2.
Processing the palm print image and the palm vein image in parallel branches speeds up image processing, keeps the two images from influencing each other, and improves recognition efficiency.
It is understood that in other embodiments, other parallel neural network models may be employed to extract the palm print features and the palm vein features.
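The two-branch idea can be sketched as below. The patent's branches are convolutional networks; here each branch is replaced by a simple stand-in feature function (an assumption made so the sketch stays self-contained), and the two branches run independently so neither influences the other.

```python
# Minimal sketch of the "parallel" two-branch extraction: the palm print
# image and palm vein image are processed by independent branches whose
# outputs do not affect each other. Real branches would be CNNs; the
# feature functions below are illustrative stand-ins.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def print_branch(img):
    # stand-in for a palm print branch: mean intensity and gradient energy
    gx, gy = np.gradient(img.astype(float))
    return np.array([img.mean(), np.abs(gx).mean(), np.abs(gy).mean()])

def vein_branch(img):
    # stand-in for a palm vein branch: dark-line (low-intensity) statistics
    return np.array([img.min(), np.percentile(img, 10), img.std()])

def extract_features(print_img, vein_img):
    # the two branches run concurrently and independently
    with ThreadPoolExecutor(max_workers=2) as ex:
        fp = ex.submit(print_branch, print_img)
        fv = ex.submit(vein_branch, vein_img)
        return fp.result(), fv.result()

palm_print = np.random.rand(64, 64)
palm_vein = np.random.rand(64, 64)
x, y = extract_features(palm_print, palm_vein)
print(x.shape, y.shape)  # (3,) (3,)
```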
S103: fusing the palm print features and the palm vein features to obtain a fused feature, then matching the fused feature against the features in the preset database; if a consistent feature exists, matching succeeds, otherwise it fails, as shown in fig. 3.
Specifically, a weight fusion network is used to fuse the palm print features and the palm vein features into a fused feature. The weight fusion network is trained by weighting the palm print and palm vein features with weight matrices that are learned so as to minimize the overall loss.
The specific operation can be described by the following formula:
F = wx·X + wy·Y (2)
where wx is the palm print feature weight matrix, X is the palm print feature matrix, wy is the palm vein feature weight matrix, and Y is the palm vein feature matrix. This fusion algorithm helps improve the fusion effect and recognition accuracy.
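The weighted fusion just described can be illustrated numerically. The shapes, values, and fixed weights below are stand-ins: in the patent the weight matrices are learned by minimizing the overall loss, whereas here they are constants so the arithmetic is visible.

```python
# Numeric illustration of the weighted feature fusion: the fused feature
# is a weighted combination of the palm print feature matrix X and the
# palm vein feature matrix Y. Shapes and weights are illustrative only;
# in the patent the weights are trained to minimize the overall loss.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((4, 8))       # palm print feature matrix (illustrative shape)
Y = rng.random((4, 8))       # palm vein feature matrix
w_x = np.full_like(X, 0.6)   # stand-in for the learned palm print weights
w_y = np.full_like(Y, 0.4)   # stand-in for the learned palm vein weights

F = w_x * X + w_y * Y        # element-wise weighted fusion
print(F.shape)  # (4, 8)
```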
This embodiment fuses the palm print and palm vein features through the weight fusion network, combining the strengths of both features and improving the fusion effect and recognition accuracy.
Acquisition and recognition require no contact with the user, making the device more convenient, safe and hygienic to use. Because the distance between the subject and the image acquisition module is taken into account, and palm print and palm vein images are acquired only when that distance falls within the preset range, blurred or incomplete images are avoided and recognition accuracy is improved.
Example two
As shown in fig. 4, the non-contact palm print palm vein recognition system of the present embodiment includes a distance measurement module, an image acquisition module, a data processing center module, a voice prompt module, and a recognition result module.
The distance measurement module measures the distance between the subject and the image acquisition module and compares it with the preset distance range to judge whether it falls within that range: if so, an image-acquisition start command is sent to the image acquisition module; otherwise, an alert prompts the subject to adjust the distance until it falls within the preset distance range.
The distance measurement module may be a laser ranging device, an instrument that accurately measures the distance to a target using laser light. In operation, it emits a thin laser beam at the target; a photoelectric element receives the beam reflected back from the target, a timer measures the time from emission to reception, and the distance from the instrument to the target is calculated from that time.
The laser ranging device is connected with the voice broadcast device. A distance threshold range is first set, within which the acquired image information is most accurate. The ranging device then measures the distance between the subject and the device and compares it with the set range: if the distance is below the range, the subject is prompted to move backward; if above it, the subject is prompted to move forward; and if within it, the subject is told that information is being acquired and asked to hold still.
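The prompt logic above reduces to a three-way comparison, which can be sketched as follows. The function name, units, and message strings are illustrative, not from the patent.

```python
# Hypothetical sketch of the distance-prompt logic: compare the measured
# distance with the preset range and tell the subject to move backward,
# move forward, or hold still. Messages and units are illustrative.

def distance_prompt(distance_cm, lo_cm, hi_cm):
    if distance_cm < lo_cm:
        return "too close: please move backward"
    if distance_cm > hi_cm:
        return "too far: please move forward"
    return "acquiring, please hold still"

print(distance_prompt(15, 20, 40))  # too close: please move backward
print(distance_prompt(30, 20, 40))  # acquiring, please hold still
```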
Specifically, the preset distance range is set as follows:
measuring a series of distances between the subject and the image acquisition module, from near to far;
acquiring a palm print image and a palm vein image at each measured distance;
training a preset neural network with the distance as the label and the palm print and palm vein images as the training set, and recording the recognition accuracy;
and comparing the accuracy at each distance, taking the range of distances whose recognition accuracy is not less than a preset threshold as the preset distance range.
The distance between the subject and the image acquisition module is calculated as follows:
if t is the time required for light to travel once back and forth through the air between points A and B at speed c, then the distance D between A and B is
D = c·t/2 (1)
where D is the distance between points A and B, c is the speed of light in the atmosphere, and t is the time for one round trip between A and B.
Training the preset neural network with the distance as the label and the palm print and palm vein images as the training set yields the distances that achieve the preset recognition accuracy, which improves the completeness and clarity of image acquisition and thus the accuracy of image recognition.
The data processing center module acquires the palm print image and the palm vein image and extracts palm print features and palm vein features from them respectively;
the palm print features and palm vein features are then fused into a fused feature, which is matched against the features in the preset database; if a consistent feature exists, matching succeeds, otherwise it fails.
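The matching step can be sketched as a nearest-neighbor search with a similarity threshold. Cosine similarity and the threshold value are assumptions for illustration; the patent only requires that a consistent feature exist in the database.

```python
# Sketch of the matching step: compare the fused feature against each
# enrolled feature; if the best similarity clears a threshold, matching
# succeeds. Cosine similarity and the threshold are assumptions here.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(fused, database, threshold=0.95):
    """database: dict mapping identity -> enrolled fused feature vector."""
    best_id, best_sim = None, -1.0
    for identity, feat in database.items():
        sim = cosine(fused, feat)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return (best_id, best_sim) if best_sim >= threshold else (None, best_sim)

# Illustrative enrolled database and probe feature
db = {"alice": np.array([1.0, 0.0, 0.2]), "bob": np.array([0.0, 1.0, 0.1])}
probe = np.array([0.98, 0.05, 0.21])
print(match(probe, db))  # matches "alice"
```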
In the present embodiment, a parallel convolutional neural network is adopted to extract the palm print features and the palm vein features from the palm print image and the palm vein image, respectively, as shown in fig. 2.
Processing the palm print image and the palm vein image in parallel branches speeds up image processing, keeps the two images from influencing each other, and improves recognition efficiency.
It is understood that in other embodiments, other parallel neural network models may be employed to extract the palm print features and the palm vein features.
Specifically, a weight fusion network is used to fuse the palm print features and the palm vein features into a fused feature. The weight fusion network is trained by weighting the palm print and palm vein features with weight matrices that are learned so as to minimize the overall loss.
The specific operation can be described by the following formula:
F = wx·X + wy·Y (2)
where wx is the palm print feature weight matrix, X is the palm print feature matrix, wy is the palm vein feature weight matrix, and Y is the palm vein feature matrix. This fusion algorithm helps improve the fusion effect and recognition accuracy.
This embodiment fuses the palm print and palm vein features through the weight fusion network, combining the strengths of both features and improving the fusion effect and recognition accuracy.
The voice prompt module is used for distance prompt and recognition result prompt.
The voice prompt module comprises a distance prompt module and a result prompt module, and the distance prompt module is connected with the distance measuring device to prompt the tested person to move back and forth as shown in the step. And the result prompting module is connected with the data processing center and used for carrying out voice broadcast on the recognition result.
The image acquisition module of this embodiment comprises a palm print acquisition module for collecting palm print image information and a palm vein acquisition module for collecting palm vein image information. The image acquisition device uses a multispectral sensor that can emit both visible and near-infrared light: the visible light source is used to capture palm print images and the near-infrared light to capture palm vein images, so the palm print image and the palm vein image can be acquired simultaneously.
The identification result module of this embodiment comprises an identification result display module and a result broadcast module. After the identification result is received from the data processing center module, the display module shows it on the display screen and the broadcast module announces it by voice.
Because no contact is required during acquisition and identification, this embodiment is freer, more elegant, safer, and more hygienic to use. It also takes the distance between the subject and the image acquisition module into account: palm print and palm vein image acquisition starts only when that distance falls within the preset range, which avoids blurred or incomplete images and thus improves recognition accuracy.
EXAMPLE III
The present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
obtaining the distance between the person to be detected and the image acquisition module, comparing the distance with a preset distance range, and judging whether the distance falls into the preset distance range:
if so, sending an image acquisition starting command to the image acquisition module, otherwise, sending an alarm for adjusting the distance between the person to be detected and the image acquisition module until the distance between the person to be detected and the image acquisition module falls into a preset distance range;
acquiring a palm print image and a palm vein image, and respectively extracting a palm print characteristic and a palm vein characteristic in the palm print image and the palm vein image;
fusing the palm print features and the palm vein features to obtain fused features, further matching the fused features with the features in the preset database, and if consistent features exist, matching is successful; otherwise, the matching fails.
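The final matching step above can be sketched as a nearest-neighbor search over the preset database. The similarity measure and the threshold value (0.95) are assumptions for illustration; the patent only requires that matching succeed when a consistent feature exists.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match(fused, database, threshold=0.95):
    # Return the identity whose stored feature is most similar to the
    # fused feature, or None (matching fails) if no similarity reaches
    # the threshold.
    best_id, best_sim = None, threshold
    for identity, feat in database.items():
        sim = cosine(fused, feat)
        if sim >= best_sim:
            best_id, best_sim = identity, sim
    return best_id

# db = {"alice": [1, 0, 0], "bob": [0, 1, 0]}
# match([0.99, 0.1, 0.0], db) → "alice"; match([1, 1, 1], db) → None
```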
Specifically, the preset distance range is set by the following method:
measuring different distances between the person to be measured and the image acquisition module from near to far;
respectively acquiring a palm print image and a palm vein image which are measured at different distances;
taking the distance between a person to be detected and the image acquisition module as a label, taking the palm print image and the palm vein image as a training set, and training a preset neural network to obtain the identification accuracy rate;
and comparing the accuracy at each distance, and taking the distance range with the identification precision not less than a preset threshold value as a set distance range.
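The range-selection step above — compare the accuracy obtained at each measured distance and keep the distances whose accuracy is at least the preset threshold — can be sketched as follows. The distance/accuracy values are made up for illustration; in the patent they come from training the preset neural network with distance labels.

```python
def select_distance_range(accuracy_by_distance, threshold):
    # accuracy_by_distance maps a measured distance (e.g. in cm) to the
    # recognition accuracy obtained at that distance during training.
    # Keep distances whose accuracy is at least the threshold and return
    # the (min, max) of that set as the preset distance range.
    ok = [d for d, acc in sorted(accuracy_by_distance.items()) if acc >= threshold]
    if not ok:
        return None  # no distance reaches the required accuracy
    return min(ok), max(ok)

# select_distance_range({10: 0.80, 20: 0.97, 30: 0.99, 40: 0.96, 50: 0.70}, 0.95)
# → (20, 40)
```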
The specific implementation of the distance calculation between the person to be measured and the image acquisition module is as follows:
if the time required for light to travel back and forth once in air at speed c between points A, B is t, then the distance D between points A, B can be expressed as follows.
D=c*t/2 (1)
In the formula: d is the distance between two points of the measuring station A, B; c is the speed of light propagation in the atmosphere; t the time required for the light to make one round trip A, B.
The distance between the person to be detected and the image acquisition module is used as a label, the palm print image and the palm vein image are used as a training set to train the preset neural network, and the distance corresponding to the preset identification precision can be obtained, so that the comprehensiveness and the definition of image acquisition are improved, and the accuracy of image identification is improved.
In the present embodiment, a parallel convolutional neural network is adopted to extract the palm print features and the palm vein features in the palm print image and the palm vein image, respectively, as shown in fig. 2.
Processing the palm print image and the palm vein image with parallel convolutional neural networks speeds up image processing, keeps the two images from influencing each other, and improves identification efficiency.
It is understood that in other embodiments, other parallel neural network models may be used to extract the palm print features and the palm vein features from the palm print image and the palm vein image, respectively.
Specifically, a weight fusion network is used to fuse the palm print feature and the palm vein feature into a fused feature. The weight fusion network is trained by weighting the palm print feature and the palm vein feature and learning the weight matrices that minimize the overall loss.
The specific operation can be described by the following formula:

F = wx·X + wy·Y

where wx is the palm print feature weight matrix, X is the palm print feature matrix, wy is the palm vein feature weight matrix, and Y is the palm vein feature matrix. This fusion algorithm helps improve the fusion effect and the identification accuracy.
This embodiment fuses the palm print feature and the palm vein feature with a weight fusion network, so the strengths of both features are combined, which helps improve the fusion effect and the identification accuracy.
Because no contact is required during acquisition and identification, this embodiment is freer, more elegant, safer, and more hygienic to use. It also takes the distance between the subject and the image acquisition module into account: palm print and palm vein image acquisition starts only when that distance falls within the preset range, which avoids blurred or incomplete images and thus improves recognition accuracy.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A non-contact palm print palm vein recognition method is characterized by comprising the following steps:
collecting the distance between a person to be measured and the image collecting module;
when the distance between the person to be detected and the image acquisition module falls into a preset distance range, acquiring a palm print image and a palm vein image, and respectively extracting a palm print characteristic and a palm vein characteristic in the palm print image and the palm vein image;
and fusing the palm print features and the palm vein features to obtain fused features, and matching the fused features with the features in the preset database to obtain a recognition result.
2. The non-contact palm print and palm vein recognition method according to claim 1, wherein it is determined whether the distance between the person to be measured and the image acquisition module falls within a preset distance range:
and if so, sending an image acquisition starting command to the image acquisition module, otherwise, sending an alarm for adjusting the distance between the person to be detected and the image acquisition module until the distance between the person to be detected and the image acquisition module falls into a preset distance range.
3. The non-contact palm print palm vein recognition method according to claim 2, wherein the preset distance range is set by:
measuring different distances between the person to be measured and the image acquisition module from near to far;
respectively acquiring a palm print image and a palm vein image which are measured at different distances;
taking the distance between a person to be detected and the image acquisition module as a label, taking the palm print image and the palm vein image as a training set, and training a preset neural network to obtain the identification accuracy rate;
and comparing the accuracy at each distance, and taking the distance range with the identification precision not less than a preset threshold value as a set distance range.
4. The non-contact palm print and palm vein identification method according to claim 1, characterized in that a parallel convolution neural network is adopted to extract the palm print features and the palm vein features in the palm print image and the palm vein image respectively;
or
Fusing the palm print characteristic and the palm vein characteristic by using a weight fusion network to obtain a fusion characteristic; the training process of the weight fusion network comprises the following steps: and weighting the palm print characteristic and the palm vein characteristic, and then training to obtain a weight matrix, and minimizing the overall loss.
5. A non-contact palm print palm vein recognition system, comprising:
the distance measurement module is used for acquiring the distance between the person to be measured and the image acquisition module;
the data processing center module is used for acquiring a palm print image and a palm vein image when the distance between the person to be detected and the image acquisition module falls into a preset distance range, and extracting a palm print characteristic and a palm vein characteristic in the palm print image and the palm vein image respectively;
and the characteristic fusion module is used for fusing the palm print characteristic and the palm vein characteristic to obtain a fusion characteristic, and then matching the fusion characteristic with the characteristic in the preset database to obtain an identification result.
6. The system of claim 5, wherein the distance measuring module determines whether the distance between the subject and the image capturing module falls within a predetermined distance range:
and if so, sending an image acquisition starting command to the image acquisition module, otherwise, sending an alarm for adjusting the distance between the person to be detected and the image acquisition module until the distance between the person to be detected and the image acquisition module falls into a preset distance range.
7. The system of claim 5, wherein the distance measuring module is configured to set a predetermined distance range by:
measuring different distances between the person to be measured and the image acquisition module from near to far;
respectively acquiring a palm print image and a palm vein image which are measured at different distances;
taking the distance between a person to be detected and the image acquisition module as a label, taking the palm print image and the palm vein image as a training set, and training a preset neural network to obtain the identification accuracy rate;
and comparing the accuracy at each distance, and taking the distance range with the identification precision not less than a preset threshold value as a set distance range.
8. The system according to claim 5, wherein in the data processing center module, a parallel convolutional neural network is adopted to extract the palm print feature and the palm vein feature in the palm print image and the palm vein image respectively;
or
In the data processing center module, fusing the palm print characteristic and the palm vein characteristic by using a weight fusion network to obtain a fusion characteristic; the training process of the weight fusion network comprises the following steps: and weighting the palm print characteristic and the palm vein characteristic, and then training to obtain a weight matrix, and minimizing the overall loss.
9. The system of claim 5, further comprising:
the voice prompt module is used for distance prompt and recognition result prompt;
or
The non-contact palm print palm vein recognition system further comprises:
and the identification result module is used for displaying the identification result and broadcasting the identification result.
10. A computer-readable storage medium, on which a computer program is stored, which program, when executed by a processor, carries out the steps of:
acquiring the distance between a person to be detected and the image acquisition module;
when the distance between the person to be detected and the image acquisition module falls into a preset distance range, acquiring a palm print image and a palm vein image, and respectively extracting a palm print characteristic and a palm vein characteristic in the palm print image and the palm vein image;
and fusing the palm print features and the palm vein features to obtain fused features, and matching the fused features with the features in the preset database to obtain a recognition result.
CN202010419832.2A 2020-05-18 2020-05-18 Non-contact palm print and palm vein identification method and device Pending CN111738076A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010419832.2A CN111738076A (en) 2020-05-18 2020-05-18 Non-contact palm print and palm vein identification method and device

Publications (1)

Publication Number Publication Date
CN111738076A true CN111738076A (en) 2020-10-02

Family

ID=72647377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010419832.2A Pending CN111738076A (en) 2020-05-18 2020-05-18 Non-contact palm print and palm vein identification method and device

Country Status (1)

Country Link
CN (1) CN111738076A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426843A (en) * 2015-11-19 2016-03-23 安徽大学 Single-lens palm vein and palmprint image acquisition apparatus and image enhancement and segmentation method
CN107195124A (en) * 2017-07-20 2017-09-22 长江大学 The self-service book borrowing method in library and system based on palmmprint and vena metacarpea
CN209930357U (en) * 2019-05-24 2020-01-10 广州麦仑信息科技有限公司 Non-contact palm vein image snapshot system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113901940A (en) * 2021-10-21 2022-01-07 华南理工大学 Palm print and palm vein dynamic fusion identification method and system based on palm temperature information
CN113901940B (en) * 2021-10-21 2024-03-19 华南理工大学 Palm print and palm vein dynamic fusion identification method and system based on palm temperature information
CN115631514A (en) * 2022-10-12 2023-01-20 中海银河科技(北京)有限公司 Palm vein fingerprint-based user identification method, device, equipment and medium
CN115631514B (en) * 2022-10-12 2023-09-12 中海银河科技(北京)有限公司 User identification method, device, equipment and medium based on palm vein fingerprint

Similar Documents

Publication Publication Date Title
CN107609383B (en) 3D face identity authentication method and device
CN106037651B (en) A kind of heart rate detection method and system
WO2017152649A1 (en) Method and system for automatically prompting distance from human eyes to screen
AU2007284299B2 (en) A system for iris detection, tracking and recognition at a distance
CN107748869A (en) 3D face identity authentications and device
CN107633165A (en) 3D face identity authentications and device
CN104598797B (en) A kind ofly adopt face recognition, authenticate device that facial vena identification combines with finger vena identification and authentication method
US20060280340A1 (en) Conjunctival scans for personal identification
US20090208064A1 (en) System and method for animal identification using iris images
WO2001029769A9 (en) Method and apparatus for aligning and comparing images of the face and body from different imagers
JPH06500177A (en) A method for identifying individuals by analyzing basic shapes derived from biosensor data
CN106446779A (en) Method and apparatus for identifying identity
CN111738076A (en) Non-contact palm print and palm vein identification method and device
CN111344703A (en) User authentication device and method based on iris recognition
Drahansky et al. Liveness detection based on fine movements of the fingertip surface
CN110069965A (en) A kind of robot personal identification method
CN104636731B (en) Authentication device combining finger vein recognition with wrist vein recognition and fingernail recognition
CN111081375A (en) Early warning method and system for health monitoring
CN115409774A (en) Eye detection method based on deep learning and strabismus screening system
CN108710841A (en) A kind of face living body detection device and method based on MEMs infrared sensor arrays
CN114894337A (en) Temperature measurement method and device for outdoor face recognition
CN113011544B (en) Face biological information identification method, system, terminal and medium based on two-dimensional code
CN112716468A (en) Non-contact heart rate measuring method and device based on three-dimensional convolution network
EP3770780B1 (en) Identification system, method, and program
CN111310717A (en) Intelligent screening and identity recognition device for non-sensible body temperature of sports people

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination