CN107949863B - Authentication device and authentication method using biometric information


Info

Publication number
CN107949863B
Authority
CN (China)
Prior art keywords
finger, image, unit, authentication, distance
Legal status
Active
Application number
CN201680051560.7A
Other languages
Chinese (zh)
Other versions
CN107949863A
Inventor
三浦直人
松田友辅
长坂晃朗
宫武孝文
Current Assignee
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date
Filing date
Publication date
Application filed by Hitachi Ltd
Publication of CN107949863A (application)
Application granted
Publication of CN107949863B (grant)

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/117: Identification of persons
    • A61B 5/1171: Identification of persons based on the shapes or appearances of their bodies or parts thereof
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing
    • G06T 7/00: Image analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Surgery (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Input (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

A compact, highly accurate biometric authentication device that is easy to operate is realized. An authentication device for authenticating a person using features of a living body comprises: a placement area over which the living body is presented; a light source that irradiates the living body presented over the placement area; an imaging unit that captures the light from the light source reflected by the living body; an image processing unit that processes the image captured by the imaging unit; a feature extraction unit that extracts biometric features of the living body from the image; a storage unit that stores the biometric features; and a comparison unit that compares the similarity of biometric features. The imaging device and a plurality of light sources are arranged at positions facing the living body, the plurality of light sources emit different wavelengths, and the imaging device captures images at the plurality of wavelengths. The device further comprises means for measuring the distance between the living body and the imaging device; it determines the posture of the living body, detects the optimal presentation position and guides the living body to that position, and corrects the biometric features based on the posture of the living body.

Description

Authentication device and authentication method using biometric information
Technical Field
The present invention relates to an authentication system for authenticating an individual using a living body, and more particularly to a small, highly convenient, and highly accurate authentication technique.
Background
Among various biometric authentication techniques, finger vein authentication is known to enable highly accurate authentication. Because it uses the blood vessel pattern inside the finger, finger vein authentication achieves excellent authentication accuracy, and because forgery and tampering are more difficult than with fingerprint authentication, it achieves high security.
In recent years, biometric authentication devices have been mounted in mobile terminals such as mobile phones, notebook PCs (Personal Computers), smartphones, and tablet terminals, as well as in devices such as lockers, safes, and printers, and examples of their use to secure each device have increased. Beyond entrance and exit management, attendance management, and computer login, biometric authentication has recently also been applied to fields such as payment. In particular, biometric authentication devices used in public settings must realize reliable personal authentication. In view of the recent spread of tablet-type portable terminals and the trend toward wearable computing, miniaturizing the device while maintaining convenience is an important requirement. It is also important that the authentication device offer high operability so that users can operate it easily.
Patent document 1 discloses a technique related to downsizing of an authentication device that performs personal authentication based on the shape of a blood vessel.
Patent document 2 discloses a technique in which, while the palm is held over the device in a non-contact state, a color camera captures a reflected image of the palm that includes a red visible light source, and the veins and palm print of the palm are extracted and used for authentication.
Prior art documents
Patent document
Patent document 1: Japanese Patent Application Laid-Open No. 2013-33555
Patent document 2: International Publication No. WO 2013/136553
Disclosure of Invention
Problems to be solved by the invention
In order to realize a personal authentication device that is small, convenient to use, and highly accurate, it is important to capture a wide range of biometric information with a sensor of small area, and, when the living body is presented, to effectively correct positional deviation or to guide the user back to the correct position when such deviation occurs.
In the biometric authentication device described in patent document 1, the opening of the sensor is made narrower than the finger in order to reduce the device size; however, because the finger is presented touching the opening, the finger outline cannot be imaged, which makes position correction difficult when the finger is misplaced, and no means of solving this problem is mentioned. Furthermore, since the finger must be placed at a predetermined position on the device, a wide range of biometric information cannot be captured, and because a finger rest must be provided, miniaturization of the device is difficult.
Patent document 2 describes a technique of performing authentication by capturing a palm vein pattern and a palm print simultaneously, using a color camera fitted as standard in a smartphone or tablet together with ambient light or the light of the liquid crystal screen. This technique has the advantage that palm biometric information can be captured over a wide area without touching the device. However, it discloses no guidance for having the user hold the living body still at an appropriate position.
Therefore, an object of the present invention is to realize a personal authentication device that is small and easy to use and that achieves highly accurate personal authentication by acquiring biometric information rich in personal features while maintaining a high degree of freedom in how the living body is presented.
Means for solving the problems
An authentication device for authenticating a person using features of a living body comprises: a placement area over which the living body is presented; a light source that irradiates the living body presented over the placement area; an imaging unit that captures the light from the light source reflected by the living body; an image processing unit that processes the image captured by the imaging unit; a feature extraction unit that extracts biometric features of the living body from the image; a storage unit that stores the biometric features; and a comparison unit that compares the similarity of biometric features. The imaging device and a plurality of light sources are arranged at positions facing the living body, the plurality of light sources emit different wavelengths, and the imaging device captures images at the plurality of wavelengths. The device further comprises means for measuring the distance between the living body and the imaging device; it determines the posture of the living body, detects the optimal presentation position and guides the living body to that position, and corrects the biometric features based on the posture of the living body.
Effects of the invention
According to the present invention, in a biometric authentication device using a finger, the device housing is small, a wide area of the living body can be imaged, various biometric features can be obtained from it for highly accurate authentication, and positional deviation of the finger can be detected and corrected when it occurs. A small authentication device with high authentication accuracy and excellent convenience can therefore be provided.
Drawings
Fig. 1 is a diagram showing the overall configuration of a biometric authentication system according to a first embodiment.
Fig. 2 is a diagram illustrating an apparatus configuration of the biometric authentication system according to the first embodiment.
Fig. 3 is an example of a process flow for performing the reflective non-contact authentication using the input device.
Fig. 4 shows example images of veins, fingerprints, and fat lines captured by emitting infrared light and green light simultaneously.
Fig. 5 shows an example of spectral sensitivity characteristics of RGB elements of a color camera.
Fig. 6 is an explanatory diagram of a method of detecting the presentation position of the finger.
Fig. 7 is an explanatory diagram of a method of detecting the finger contour and the position of the fingertip.
Fig. 8 is an explanatory diagram showing a method of detecting the posture of a finger.
Fig. 9 is an explanatory diagram showing a method of detecting the roll rotation angle of the finger.
Fig. 10 is an embodiment of a guidance screen for guiding a user to a position or a posture of a finger suitable for authentication.
Fig. 11 is an explanatory diagram of the finger cutout processing and the stereo correction processing.
Fig. 12 is an explanatory diagram of the configuration and operation method of the authentication device capable of easily guiding the finger position.
Fig. 13 is an explanatory diagram of an operation method of the authentication device for easily guiding a position by tapping a finger.
Fig. 14 shows an example of a reflective non-contact biometric authentication device including an infrared light source and one infrared camera as a second embodiment.
Fig. 15 shows an example of a process flow in the second embodiment.
Fig. 16 is an explanatory diagram of a method of detecting the tilt angle of the finger in the second embodiment.
Fig. 17 shows an embodiment of the configuration of the authentication apparatus for determining that the distance to the subject is short by the light-shielded area of the infrared light source.
Fig. 18 is a diagram illustrating an apparatus configuration of an authentication apparatus according to a third embodiment.
Fig. 19 is a diagram illustrating an apparatus configuration of an authentication apparatus according to the fourth embodiment.
Fig. 20 is an explanatory diagram of a method of guiding a photographing finger in the fourth embodiment.
Fig. 21 is an explanatory diagram of a guidance method for capturing a finger by one-handed operation in the fourth embodiment.
Fig. 22 is an explanatory diagram of another example of the guiding method of the photographing finger in the fourth embodiment.
Fig. 23 is an explanatory diagram of the configuration of the authentication device in the fifth embodiment.
Fig. 24 is an explanatory diagram of the configuration of the authentication device in the sixth embodiment.
Fig. 25 is an explanatory diagram of another device configuration of the authentication device in the sixth embodiment.
Fig. 26 is an explanatory diagram of the configuration of the authentication device in the seventh embodiment.
Detailed Description
Example 1
Fig. 1 is a diagram showing the overall configuration of a biometric authentication system using the blood vessels of a finger according to the first embodiment. It is to be understood that the present invention may be configured not as a system but as a device in which all or part of the configuration is mounted in a single housing. The device may be a personal authentication device that includes the authentication processing, or a blood vessel image acquisition device or blood vessel image extraction device dedicated to acquiring blood vessel images, with the authentication processing performed outside the device. Further, as described later, the present invention may be implemented as a terminal.
The authentication system of embodiment 1 includes: an input device 2, an authentication processing unit 10, a storage device 14, a display unit 15, an input unit 16, a speaker 17, and an image input unit 18.
The input device 2 includes a light source 3 provided in its housing and an imaging device 9 provided inside the housing. A part of the image processing function of the authentication processing unit 10, possibly together with the image input unit 18, may be referred to as an image processing unit. In short, the authentication processing unit 10 is a generic term for the processing units that execute authentication-related processing, and includes: a determination unit that determines, from an image, the distance between the living body (finger) and the system and the posture of the living body (finger); a state control unit that issues instructions, on the display unit or the like, to correct that distance or posture; an unnecessary-information removing unit that removes unnecessary information (wrinkles, background, and the like) from the captured image; a feature extraction unit that extracts feature information from the captured image; and a collation unit that collates the extracted feature information with registered data stored in advance in the storage device.
The light source 3 is, for example, a light emitting element such as an LED (Light Emitting Diode), and irradiates the finger 1 presented above the input device 2 with light. The imaging device 9 captures an image of the finger 1 presented over the input device 2; there may be more than one finger 1. The image input unit 18 acquires the image captured by the imaging device 9 of the input device 2 and inputs it to the authentication processing unit 10.
The authentication processing unit 10 includes: a Central Processing Unit (CPU) 11, a memory 12, and various Interfaces (IF) 13.
The CPU11 executes programs stored in the memory 12 to perform various processes. The memory 12 stores programs executed by the CPU. Further, the memory 12 temporarily stores the image input from the image input section 18.
The interface 13 connects the authentication processing unit 10 to an external device. Specifically, the interface 13 is connected to the input device 2, the storage device 14, the display unit 15, the input unit 16, the speaker 17, the image input unit 18, and the like.
The storage device 14 stores registered data of a user in advance. The registration data is information for collating the user, and is, for example, an image of a finger vein pattern. In general, an image of a finger vein pattern is an image of a pattern in which blood vessels (finger veins) mainly distributed under the skin on the palm side of a finger are photographed as a dark shadow.
The display unit 15 is, for example, a liquid crystal display, and is an output device that displays information received from the authentication processing unit 10.
The input unit 16 is, for example, a keyboard, and transmits information input from a user to the authentication processing unit 10. The speaker 17 is an output device that transmits information received from the authentication processing unit 10 by an acoustic signal (for example, voice).
Here, the display unit 15 and the speaker 17 are examples of a device (instruction unit) for instructing a user using the authentication system to correct the distance between the living body (finger) and the system and the posture of the living body (finger), and the present invention is not limited to this device.
Note that the processing units described above may all be handled by one CPU, or a separate CPU may be used for each processing unit.
Fig. 2 is a diagram illustrating a configuration of an input device of the biometric authentication system according to the first embodiment.
The input device 2 captures images of biological features distributed on the surface of or under the skin of a finger. The input device 2 is enclosed by the device case 21; two imaging devices 9 are arranged inside it, and a plurality of infrared light sources 31 and visible light sources 32 are arranged alternately in a ring around the imaging devices 9 so that the finger 1 can be irradiated uniformly through the opening. The infrared light sources 31 emit infrared light and the visible light sources 32 emit visible light. The wavelength of the visible light sources 32 can be chosen arbitrarily in the range of approximately 450 nm to 570 nm, i.e. from blue to green, and the light sources 31 and 32 can each emit at an arbitrary intensity. As an example of specific wavelengths, 850 nm infrared light is selected for the light sources 31 and 550 nm green light for the light sources 32. Light emitting elements of the respective wavelengths may also be integrated into one package. An acrylic plate 22 is fitted into the opening; it prevents dust and the like from entering the device and physically protects the components inside. Since the subject is obscured where the light emitted from the light sources 31 and 32 is reflected by the acrylic plate 22, the light sources can be turned on one after another in time series while the subject is captured continuously, and the images can be combined by an HDR (High Dynamic Range) technique to obtain a clear image of the subject without reflection components; a sketch of such fusion follows. The light sources 31 and 32 may also be arranged, not around the imaging devices 9, but in a ring or lattice between the two imaging devices 9, or in a ring or lattice surrounding both imaging devices 9.
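For illustration only (not part of the patent), the following minimal Python sketch shows one way sequential lighting and exposure fusion could suppress the specular reflections from the acrylic plate 22. It assumes OpenCV and NumPy; capture_frame is a hypothetical device hook.

    import cv2
    import numpy as np

    def capture_frame(light_source_id):
        """Hypothetical device hook: light only the given source and grab a uint8 BGR frame."""
        raise NotImplementedError

    def fuse_without_reflections(num_sources):
        # Each source is lit in turn, so its specular spot lands at a different
        # place in each frame; Mertens exposure fusion favors well-exposed,
        # unsaturated pixels and so suppresses the reflection highlights.
        frames = [capture_frame(i) for i in range(num_sources)]
        fused = cv2.createMergeMertens().process(frames)   # float32, roughly [0, 1]
        return np.clip(fused * 255, 0, 255).astype(np.uint8)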
The imaging device 9 is a color camera and has a plurality of light receiving elements with sensitivity in the visible and infrared bands. The imaging device 9 includes, for example, three types of CMOS or CCD elements with light receiving sensitivity in blue (B), green (G), and red (R), arranged in the lattice known as a Bayer array. Each of the RGB elements also has sensitivity to near infrared light. The light receiving sensitivity of each element peaks, for example, near 450 nm for blue, near 550 nm for green, and near 620 nm for red. The captured color image is assumed to be an RGB color image from which the individual RGB color planes can be obtained independently. It goes without saying that the imaging device 9 may also be a multispectral camera with light receiving elements for more than three wavelengths. The imaging device 9 is fitted with a band-pass filter 33 that transmits all the wavelengths output by the light source 3 and blocks the other bands, thereby blocking unnecessary stray light and improving image quality. In the present embodiment, only light of 550 nm and 850 nm is transmitted.
By irradiating with multiple visible wavelengths, the input device 2 can capture the various biological features present in the skin of a fingertip: for example, fingerprints, epidermal wrinkles, joint wrinkles, veins, melanin patterns such as freckles and moles, blood patterns, and the pattern of fat lobules in the subcutaneous tissue (referred to here as fat lines).
Fig. 3 is an embodiment of a process flow for performing authentication while estimating the posture of a living body by using a stereo camera.
First, the system prompts the user to present a finger (S401). Next, infrared light and green light are emitted and imaging is started with the two color cameras synchronized (S402). If luminance saturation is observed due to external light or the like, exposure adjustment may be performed and an exposure time at which the saturation disappears may be set. Next, the distance and three-dimensional shape of the subject are measured from the captured images (S403). It is then determined whether a subject is present within a preset distance range from the authentication device (S404); if not, it is judged that no living body is present, and the process returns to the beginning (S401) to wait for one. If a subject is present within the preset distance range, it is assumed to be a living body and finger posture detection is performed (S405). Next, finger guidance is performed using the posture determination result (S406). If the position or angle of the finger deviates from what is expected, or the finger is inappropriately bent, the user is guided by a screen display, a guidance LED, a buzzer, or the like, and posture determination and guidance are repeated until an appropriate imaging state is reached. When the posture is judged appropriate (S407), the finger image is cropped based on the finger posture information (S408). The posture information includes the positions of the fingertip and the finger base, and an image of each of the one or more fingers to be authenticated is extracted using its position information. The magnification of each cropped finger image is then corrected according to the position, orientation, and distance of the finger, normalizing the finger posture (S409). Next, biometric features are extracted from the posture-normalized finger image (S410). Then the similarity to the registered data is computed by collating the biometric features (S411), and it is determined whether the user is a registered person (S412). A rough control-flow sketch of these steps is given below.
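The following Python sketch is illustrative only: every hook on the hypothetical dev and db objects stands in for the corresponding step of Fig. 3, and the threshold is an arbitrary placeholder.

    def authenticate(dev, db, threshold=0.8):
        """One authentication session following steps S401-S412."""
        while True:
            dev.prompt_user()                              # S401: ask for a finger
            left, right = dev.capture_synchronized()       # S402: IR + green lit together
            depth = dev.measure_distance(left, right)      # S403: stereo distance/shape
            if not dev.subject_in_range(depth):            # S404: anything close enough?
                continue                                   # no living body yet; keep waiting
            posture = dev.detect_posture(depth)            # S405: fingertip/joint/angles
            dev.guide_user(posture)                        # S406: arrows, icons, buzzer
            if not dev.posture_ok(posture):                # S407: inside tolerance bands?
                continue                                   # re-guide and retry
            crops = dev.crop_fingers(left, right, posture) # S408: per-finger cut-outs
            crops = dev.normalize(crops, posture)          # S409: scale/rotation correction
            feats = dev.extract_features(crops)            # S410: veins + epidermal patterns
            score = db.best_similarity(feats)              # S411: compare with enrolment
            return score >= threshold                      # S412: accept or reject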
Here, each processing module will be described in detail.
First, the distance and shape measurement of the subject (S403) will be described in detail. In the present embodiment, the two color cameras capture reflected images of near infrared light and green light, and distance is measured by stereoscopic vision (stereo matching) using their parallax. The intrinsic and extrinsic parameters for transforming coordinates between the two cameras are assumed known, and image capture can be synchronized between the cameras. Under these conditions, by standard stereoscopic methods, pixels of the left camera are associated with pixels of the right camera, and the distance to the subject can be obtained per pixel. Pixels can be associated between the two cameras by template matching with partial images, by correspondence of feature points based on SIFT (Scale-Invariant Feature Transform), or the like. In this way, distance or shape information of the subject seen by both cameras is obtained, and the three-dimensional structure of the subject can be grasped; a sketch of disparity-based depth computation follows.
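As a rough illustration, assuming OpenCV and a calibrated, rectified camera pair (the block-matching parameters and the focal length and baseline are placeholders), per-pixel distance can be derived from disparity as follows.

    import cv2
    import numpy as np

    def depth_map(left_gray, right_gray, focal_px=700.0, baseline_mm=20.0):
        """Distance per pixel from a rectified stereo pair via semi-global matching."""
        sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
        disp = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0  # SGBM output is scaled by 16
        disp[disp <= 0] = np.nan              # pixels with no reliable correspondence
        return focal_px * baseline_mm / disp  # Z = f * B / d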
In general stereo matching, when the subject has no strong edge information it is difficult to associate pixels between the two cameras, and accurate distance information is hard to obtain. In particular, the reflected infrared image of a living body such as a finger generally has weak edges, making corresponding points hard to acquire. Therefore, in the present embodiment, infrared light and green light are emitted simultaneously and their reflections are captured together, obtaining at the same time the veins and similar structures visible under infrared light and the fine, edge-rich fingerprints and fat lines visible under green light.
Fig. 4 shows example images of veins, fingerprints, and fat lines captured by emitting infrared light and green light simultaneously. In the reflected infrared image 61, the finger veins 62 appear faintly, fingerprints and other epidermal features are barely visible, and feature points such as edges are relatively few. In the reflected green image 63, by contrast, the finger veins 62 are hardly observed, but the fingerprint 64, the joint wrinkles 65, and the fat lines 66 can be observed. Since these features have strong edges, feature points can easily be obtained by a standard feature point extraction method such as SIFT, improving the accuracy of the coordinate correspondence between the two images in stereo matching.
However, fingerprints, joint wrinkles, and fat lines often resemble edges at other positions within a local area. Consequently, when feature points and their descriptors are obtained by standard image processing, if the local patch from which a descriptor is extracted is small, it may be indistinguishable from the descriptor of another feature point, and a false corresponding point may be detected. Therefore, for line features such as fingerprints and joint wrinkles, only line endings and branch points are extracted, while for granular features such as fat lines, the descriptor of a point is extracted from a somewhat wider patch containing several edges of the surrounding fingerprint or fat lines. This suppresses the aperture problem, which is well known as a cause of feature point mismatching.
Here, an embodiment in which the infrared and green reflected images are obtained simultaneously by simultaneously irradiating the infrared light and the green light will be described.
When infrared light and green light are emitted simultaneously while a living body is held over the device, the light is absorbed or reflected at or under the surface of the living body, and the reflected light reaches the imaging device. The reflected light intensity distribution near the surface of the living body is thereby visualized. Because the infrared light and green light are emitted simultaneously, their intensities are superimposed on each of the RGB light receiving elements of the color camera.
Fig. 5 shows an example of the spectral sensitivity characteristics of the RGB elements of the color camera. As the figure shows, in the infrared region at wavelengths of about 850 nm and longer, the RGB elements have substantially the same sensitivity distribution, while for green light only the G element has high sensitivity. The relationship between the amounts of light received by the three elements (Ir, Ig, Ib for R, G, B respectively) and the light of the two LED wavelengths is given by the following simultaneous equations.
[Numerical formula 1]
Ir = I550 * Wr(550) + I850 * Wr(850)    (Formula 1)
[Numerical formula 2]
Ig = I550 * Wg(550) + I850 * Wg(850)    (Formula 2)
[Numerical formula 3]
Ib = I550 * Wb(550) + I850 * Wb(850)    (Formula 3)
Here, I550 and I850 denote the reflected light components for green light alone and infrared light alone, respectively, and Wr(λ), Wg(λ), and Wb(λ) denote the sensitivities of the R, G, and B elements at wavelength λ. From these equations, the reflected light components I550 and I850 of green and infrared light can be recovered. Since only two unknowns are to be found from three equations, the system is overdetermined; the errors contained in the solutions can be averaged out by averaging the three results obtained by solving the equations two at a time. A sketch of this separation follows.
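For illustration, the sketch below recovers I550 and I850 per pixel. The sensitivity matrix is a made-up placeholder (real values come from curves such as Fig. 5), and the overdetermined system is solved here jointly by least squares, which averages the error in much the same way as the pairwise scheme described above.

    import numpy as np

    # Illustrative sensitivities W_c(wavelength) of the R, G, B elements
    # (rows R, G, B; columns 550 nm, 850 nm) -- placeholder values only.
    W = np.array([[0.05, 0.30],
                  [0.90, 0.30],
                  [0.10, 0.30]])

    def separate_wavelengths(rgb_img):
        """rgb_img: float (H, W, 3) holding (Ir, Ig, Ib) per pixel.
        Returns the green (I550) and infrared (I850) reflection components."""
        h, w, _ = rgb_img.shape
        obs = rgb_img.reshape(-1, 3).T                   # 3 x N observation vectors
        sol, *_ = np.linalg.lstsq(W, obs, rcond=None)    # solve W @ [I550; I850] = obs
        return sol[0].reshape(h, w), sol[1].reshape(h, w)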
As another example, the infrared and green light could be lit alternately and the respective images used, but compared with simultaneous irradiation this halves the frame rate, and the subject may shift slightly between the images of the two wavelengths. With the present embodiment, by contrast, the infrared image and the fat lines can be captured at the same instant, so the subject can be captured at high speed and high-accuracy stereo matching can be realized without positional deviation of the subject between frames.
The stereoscopic distance measurements obtained as above are computed per pixel, and image noise can make individual pixels inaccurate. Noise removal may therefore be performed, for example by pruning small isolated disparity patterns, applying a speckle filter, and spatial averaging.
Next, an example of the living body posture detection performed in step S405 will be described. Various ways of presenting the finger must be assumed. For example, there are situations related to finger posture, such as only one finger being held over the device, all five fingers being held open, or the fingers being bent, and situations related to the presentation position, such as the finger not being at a position properly captured by the camera. Therefore, the posture of the finger, such as the position of each fingertip and finger base and the presentation angle of the finger, is estimated using the result of the subject distance measurement, and based on the estimate, information is obtained for guiding the user to an appropriate presentation position and posture, or for correcting the image.
Fig. 6 shows an embodiment of a method for detecting the presentation position of a finger. First, as shown in Fig. 6(a), a projection luminance distribution 101 is obtained by integrating the distance image 100 onto the x-axis and taking the average. Here, the luminance value of the distance image is assumed to be higher the shorter the distance; the rightward direction of the image is defined as the x-axis, the downward direction as the y-axis, and the direction away from the camera (into the page) as the z-axis. The projected luminance is high where a finger is observed and low where no subject is present, such as in the gaps between fingers. Since several fingers may appear in the image, the projected luminance may have several peaks. By finding the peaks of the projected luminance as shown in Fig. 6(b), the x-coordinates of the fingers can be obtained.
Once the approximate positions of the fingers are known, a region of interest 102 centered on each finger is defined. As shown in Fig. 6(b), the region of interest 102 of a finger is obtained by setting a threshold on the projected luminance distribution 101, taking each maximum exceeding the threshold as the center in the x direction, and setting the width arbitrarily but sufficiently wider than a finger. The height of the region is set equal to the height of the image 100. In the present embodiment, three fingers are imaged simultaneously, so three regions of interest are set.
Moreover, instead of making all captured fingers the target of authentication processing, only one specific finger may be processed. Fig. 6(c) shows an embodiment of detecting one specific finger. Since lens distortion is smallest at the image center and the user holds the finger directly above the authentication device, the finger appearing at the center of the image can be considered the most appropriate processing target, so only the finger nearest the image center is taken. First, the projection distribution 101 on the x-axis is multiplied by a weight coefficient 103 whose value decreases with distance from the image center, so that the projection values of fingers near the center are emphasized. Then the maximum 104 of the weighted projection distribution is found, and its x-coordinate is taken as the x position of the finger closest to the image center. In this way the finger appearing at the center of the image, rather than one at the edge, is detected. A sketch of the peak detection and the center weighting follows.
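A minimal NumPy sketch of the projection profile, its peaks, and the center weighting (the threshold and the weighting shape are illustrative):

    import numpy as np

    def finger_x_positions(depth_img, thresh):
        """x-centres of the fingers: local maxima of the column-wise mean
        of the distance image (brighter = closer) that exceed a threshold."""
        profile = depth_img.mean(axis=0)        # projection onto the x-axis
        peaks = [x for x in range(1, len(profile) - 1)
                 if profile[x] > thresh
                 and profile[x] >= profile[x - 1]
                 and profile[x] >= profile[x + 1]]
        return profile, peaks

    def center_finger_x(profile):
        """Pick the finger nearest the image centre by down-weighting the edges."""
        n = len(profile)
        x = np.arange(n)
        weight = 1.0 - np.abs(x - n / 2.0) / (n / 2.0)   # 1 at centre, 0 at the edges
        return int(np.argmax(profile * weight))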
Next, contour detection is performed for each region of interest. Fig. 7 illustrates an embodiment of detecting the finger contour and the fingertip. The finger contour may be detected by separating the pixels of each region of interest into finger and background, for example by binarization based on discriminant analysis or by graph cuts, and extracting the boundary, or by linking or tracking strong edges. Since an adjacent finger may also fall within the region of interest, a linking method may be adopted that weights edges more highly the closer they are to the image center, maximizing the total edge strength while minimizing the length and curvature of the contour. This reduces erroneous extraction of an adjacent finger's contour and thus reduces authentication errors. By this processing, the finger contour 120 shown in Fig. 7 is obtained.
Once the finger contour 120 of each region of interest has been detected, the center line 121 of the finger is obtained. It may be obtained by linear approximation of the finger contour 120; alternatively, the interior of the extracted contour may be taken as the finger region and principal component analysis applied with each pixel of that region as an element, the direction of the first principal component giving the center line. Next, the fingertip 122 is found using the finger contour 120 and the center line 121. It may be defined as the intersection of the center line with the contour; alternatively, a projection distribution along the y-axis may be computed over the region of interest 102, the luminance values examined from the bottom of the image upward, and the first y-coordinate that falls below a threshold judged to be background taken as the fingertip. With the latter method the approximate fingertip position is determined without detecting the finger contour, so the processing is simpler and faster; a sketch follows.
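The contour-free fingertip search of the last variant could look as follows (the background threshold is illustrative):

    import numpy as np

    def fingertip_y(roi_depth, background_thresh):
        """Scan the row-wise projection of the region of interest from the
        bottom of the image upward; the first row whose mean falls below the
        background threshold is taken as the fingertip's y-coordinate."""
        profile = roi_depth.mean(axis=1)            # projection onto the y-axis
        for y in range(len(profile) - 1, -1, -1):   # bottom row towards top row
            if profile[y] < background_thresh:
                return y
        return 0                                    # finger fills the whole region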
By the above processing, the presentation position of each finger in x, y, and z can be detected, and the finger contour and fingertip position can be extracted.
Next, the posture of the finger is detected using the measured distance and three-dimensional shape of the finger. The posture includes not only the x, y, z coordinates of the finger but also its bending, warping, and rotation about the x, y, and z axes, and these states must be detected. The finger is divided into three segments by its two joints, and a skeletal structure in which the central axis of each segment is approximated by a straight line is analyzed. In the coordinate system of Fig. 6(a), rotations about the x, y, and z axes are called pitch, roll, and yaw, respectively.
Fig. 8 illustrates an embodiment of detecting the posture of a finger. As shown in Fig. 8(a), in this example the finger 1 is captured with its two joints 150 inside the region of interest 102 of the distance image; the finger is gently bent and also rolled. Depending on how the finger is presented, and unlike the illustrated case, only one joint or no joint at all may be included. The brighter a region inside the finger contour 120, the closer it is to the camera. First, as shown in Fig. 8(b), the distance image is partially differentiated to compute a normal vector 140 at each coordinate. Since the finger 1 is roughly an elliptic cylinder, the normals are distributed radially. Next, as shown in Fig. 8(c), the normal vectors of the pixels along the central axis 121 of the finger are examined; they are shown in Fig. 8(d). These normals point in various directions because of the bulge of the fingertip, the middle segments of the finger, and so on. Because a joint 150 is concave, the normals on either side of it lean toward each other, so the angle each normal makes with the central axis 121 reverses direction across the joint; the position where it reverses is detected as the estimated joint position 151. Finally, for each segment delimited by the estimated joint positions 151, the direction most nearly orthogonal to all its normal vectors is found by principal component analysis and taken as the central axis 152 of the segment. By this processing the skeletal structure of each finger segment is obtained. The pitch of the finger can be obtained as the tilt angle of a linear approximation of the distance values of the finger region along the central axis 152 of a segment, and the yaw can be derived from the inclination of the central axis 121 of the finger. A sketch of the normal computation and joint detection follows.
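An illustrative sketch of the normal computation and the sign-reversal joint search (coordinate conventions simplified; the fingertip bulge can also produce a reversal, so a real implementation would filter the candidates):

    import numpy as np

    def surface_normals(depth):
        """Unit normal per pixel from the partial derivatives of the distance image."""
        dzdy, dzdx = np.gradient(depth.astype(np.float32))
        n = np.dstack([-dzdx, -dzdy, np.ones_like(depth, dtype=np.float32)])
        return n / np.linalg.norm(n, axis=2, keepdims=True)

    def joint_candidates(normals_on_axis, axis_dir_xy):
        """normals_on_axis: (N, 3) normals sampled along the finger centre line;
        axis_dir_xy: unit 2-vector of the centre line in the image plane.
        Joints are where the normals' lean along the axis flips sign."""
        lean = normals_on_axis[:, :2] @ axis_dir_xy
        flips = np.where(np.sign(lean[:-1]) != np.sign(lean[1:]))[0]
        return flips            # indices along the centre line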
As for the roll of the finger, the finger cross-section can be regarded as an ellipse; the major and minor axes of the ellipse are estimated from the surface coordinates of the observable portion, and the roll angle is obtained from the angle between the major axis and the x-axis. Specifically, as shown in Fig. 9(a), when a cross-section 160 is taken through a pixel of interest 163 perpendicular to the segment's central axis 152 obtained above, the curve where the cross-section meets the finger surface is an ellipse 161. Only part of this ellipse is observed, namely the portion where the normal vectors 140 obtained from the image are distributed; for that portion, the x, y, z coordinates of the finger surface are known. By applying, for example, a Hough transform to this partial curve, the lengths and orientations of the major axis 162 and the minor axis of the ellipse 161 can be obtained. By moving the pixel of interest 163 to various positions along the central axis 152 of the finger segment, many major axes 162 are obtained, and the average of the angles between all major axes 162 and the x-axis gives an estimate of the average roll angle of that segment. Finally, estimating and averaging the average roll angles of all segments gives the average roll angle of the finger. A sketch using a direct ellipse fit in place of the Hough transform follows.
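The sketch below substitutes a direct least-squares ellipse fit (OpenCV's fitEllipse) for the Hough transform named above; it needs at least five observed boundary points per cross-section, and the averaging over slices and segments follows the text.

    import cv2
    import numpy as np

    def roll_angle_of_slice(points_xy):
        """points_xy: (N, 2), N >= 5, observed surface points of one cross-section
        projected onto the cutting plane. Returns the major-axis angle in degrees."""
        pts = np.asarray(points_xy, dtype=np.float32).reshape(-1, 1, 2)
        (_, _), (ax1, ax2), angle = cv2.fitEllipse(pts)
        if ax1 < ax2:                      # make sure the angle refers to the major axis
            angle = (angle + 90.0) % 180.0
        return angle

    def average_roll(slices):
        """Average the per-slice angles over one segment (and then over segments)."""
        return float(np.mean([roll_angle_of_slice(s) for s in slices]))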
As another example of obtaining the above finger posture information, the posture may be estimated from an unknown distance image by machine learning such as Random Forests, using as training data a large number of finger images whose posture information is known.
As described above, the positions of the x, y, and z axes of the fingers, and the state of bending, warping, and rotation of the fingers can be grasped.
Further, the x, y, z positions of each finger and its bending and rotation angles may be accumulated frame by frame in time series, and the velocity or acceleration of the finger's position or posture computed from them and used in the posture determination. For example, since the image blurs when the finger is detected moving at high speed, the user can be guided not to move the finger quickly; conversely, requiring some motion guards against accidental capture when a finger happens to be held almost still over the device, and prompting the user to move the finger lets the user express the intention to authenticate. Further, by permitting capture only while the finger is, for example, moving away from the camera, images whose subject-blur tendency is roughly uniform can be captured. This unifies image quality between registration and input and improves authentication accuracy.
Next, an embodiment of the display for guiding the finger (S406) shown in fig. 3 will be described in detail. If the position and posture of the finger can be detected as described above, the finger can be guided to an appropriate position.
Fig. 10 shows an embodiment of a guidance screen for guiding the user to a finger position and posture suitable for authentication. Fig. 10(a) shows the state in which a finger outline guide 180 is displayed at a position suitable for authentication, together with a guide message 181 prompting the user to present a finger. On this screen the camera image is displayed superimposed and mirrored left to right, so that when the user moves the finger to the right the finger in the image also moves to the right, which makes the operation intuitive. The displayed image may be a distance image combining the images of the left and right cameras, or the reflected infrared or green image of either camera. The finger region may also be filled in so as to hide the biometric information.
The user holds the finger over the device according to the display. However, since the displayed outline cannot fit every person's fingers, the presented finger will not always conform to it. Moreover, not only two-dimensional alignment but also an appropriate camera distance is required, so even a finger held exactly over the displayed finger outline guide is not necessarily at the appropriate presentation position. Aligning several fingers at once is also not easy, and not all fingers will necessarily come to positions suitable for authentication. It is therefore necessary to guide the user while changing the guidance content step by step.
Fig. 10(b) shows an example where three fingers are held over the device. First, a finger movement instruction arrow 182 pointing left is displayed so that the middle of the three fingers detected by the processing described above comes to the center of the image. This aligns the fingers as a whole. Next, as shown in Fig. 10(c), even with the middle finger moved to the image center the left and right fingers are still displaced, so the user is notified to close the fingers slightly, and a finger movement instruction arrow 182 pointing right or left is displayed near each finger to guide its position. Likewise, when the fingers are unnaturally closed, a display prompting the user to open them slightly may be shown.
Fig. 10(d) shows a state in which the finger is shifted to the right, rolled, too close to the camera, and bent at the joints. First an arrow to move the finger left is displayed, then a camera distance instruction icon 183 to increase the distance from the camera, then a rotation instruction icon 184 to undo the roll of the finger, and finally an instruction to straighten the finger joints. If the fingertip then falls below the bottom of the screen, a message to that effect can be displayed.
In this way, by first guiding the vertical and horizontal positions of the finger and the camera distance and then guiding the rotation such as the pitching and the flexion and extension of the finger, the guidance can be performed in the order from the rough presentation position to the fine adjustment, and the guidance efficiency can be improved.
As described above, based on the posture detection result, the presentation position and posture of the finger can be accurately guided.
In the embodiment above, three fingers were assumed to be the appropriate number to image, but any number from one to five may be captured. When there are many fingers, guiding so that the midpoint of the leftmost and rightmost fingers is at the image center lets all the presented fingers fit within the angle of view in a balanced way. Alternatively, the number of fingers currently presented may be detected in real time and the number of finger outline guides 180 displayed may be switched accordingly to show appropriate positions. These processes allow several fingers to be presented efficiently, realizing more accurate multi-finger authentication.
Next, step S407, which determines whether the finger is at an appropriate position, will be described. Although the finger position and posture best suited to authentication can be determined uniquely, allowing no deviation from them would make presenting the finger time-consuming and reduce convenience. Therefore, tolerance ranges are set for parameters such as the number of imaged fingers, the x, y, z positions of the fingers, their rotation, and their flexion state, and the posture is judged appropriate when all parameters fall within their ranges. For example, when three imaged fingers is the appropriate number, the user is guided continuously at registration until three fingers appear in the image, to secure the amount of data, but a single finger may be allowed at authentication. This is because even one matching finger can be statistically sufficient to judge the user to be the registered person, so relaxing the constraints on finger posture at authentication realizes a highly convenient operation. A sketch of such a tolerance check follows.
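A tolerance check of this kind might look as follows; every bound here is an illustrative placeholder, and the difference in the required finger count between registration and authentication follows the text.

    def posture_ok(p, enrolling=False):
        """p: dict of measured posture parameters. True when all are in range."""
        bounds = {                       # illustrative tolerance bands
            "z_mm":      (40.0, 80.0),   # distance from the camera
            "roll_deg":  (-20.0, 20.0),
            "pitch_deg": (-15.0, 15.0),
            "bend_deg":  (0.0, 25.0),    # joint flexion
        }
        if any(not (lo <= p[k] <= hi) for k, (lo, hi) in bounds.items()):
            return False
        # Registration insists on three fingers to secure data volume;
        # authentication tolerates a single finger.
        return p["num_fingers"] >= (3 if enrolling else 1)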
Next, an embodiment of the finger clipping processing step S408 and the stereo correction processing step S409 will be described.
Even with the finger guidance described above, it is difficult, without burdening the user, to capture images at the ideal position every time, so the finger posture will vary. Deformation of the biometric features accompanying such posture changes yields feature values different from the registered pattern. To cope with this, matching must either use features robust to deformation, such as SIFT features, or correct the deformation of the original image. In the present embodiment, assuming matching based on template matching, whose processing cost is low, the captured two-dimensional biometric image is geometrically transformed into a normalized state using the detected finger posture information (here, the three-dimensional finger shape annotated with the fingertip, joint, and base positions).
First, as shown in Fig. 11(a), a cut-out image 201 of the finger, based on the fingertip position obtained above, is generated from the reflected infrared and green images 200, for the region of interest 102 determined on the distance image 100, as shown in Fig. 11(b). The fingertip touches the upper edge of the cut-out image 201, which corrects positional deviation in x and y. The posture information obtained above, such as the fingertip position, finger contour, and central axis of each finger segment, is defined on the distance image 100, whose coordinate system deviates from those of the reflected images captured by the left and right cameras. It is assumed here that this deviation has been calibrated and can be resolved by coordinate transformation. If the coordinate transformation leaves an error, the total edge strength along the finger contour may be measured while the contour obtained from the distance image is translated over the reflected image, and the position re-corrected to where this total is largest. The reflected image 200 may be the image of either camera or a composite of both; coordinate transformation is possible in either case.
Next, as shown in Fig. 11(c), the cut-out image 201 is enlarged or reduced so that the distance value of the finger obtained from the distance image becomes constant. The magnification is the ratio of the optimal distance value to the current distance value of the finger. Thus the size of the finger is kept constant even when it deviates in the z direction. Enlarging or reducing the image locally, so that the distance value of each part of the finger becomes constant, also removes the distance non-uniformity caused by the pitch angle of the finger.
Then, as shown in Fig. 11(d), rotation correction is applied so that the central axis 152 of each finger segment points upward in the image. Since the central axes of the individual segments may have different inclinations, they may be averaged and the rotation corrected so that the average points upward.
Finally, as shown in Fig. 11(e), using the cross-sectional ellipse 161 of the finger and its major axis 162 obtained above, the coordinates of the two-dimensional plane are projectively transformed so that the major axis becomes parallel to the x-axis of the image. The transformed axis is labeled x' in the figure but is referred to as the x-axis below. In this way, even if the two-dimensional appearance of the finger is deformed by roll, the image is converted into a projection with the palmar side of the finger facing the front, and a distortion-free image is obtained.
By the above processing, the x, y, z coordinates of the finger and its rotations can be corrected. The corrections described here may also be applied after the feature extraction described below, with similar effect. A simplified sketch of the normalization follows.
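A simplified sketch of the scale and rotation steps of Fig. 11 (the roll-compensating projective warp of Fig. 11(e) is omitted; the canonical distance is a placeholder):

    import cv2
    import numpy as np

    def normalize_finger(img, finger_dist_map, axis_angle_deg, optimal_dist_mm=60.0):
        """Scale the crop so the finger sits at a canonical distance, then rotate
        so the segment axis points up in the image (Fig. 11(c) and 11(d))."""
        scale = optimal_dist_mm / float(np.nanmedian(finger_dist_map))
        img = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
        h, w = img.shape[:2]
        rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), axis_angle_deg, 1.0)
        return cv2.warpAffine(img, rot, (w, h))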
Finally, the feature extraction (S410), matching (S411), and determination (S412) processes will be described. Feature extraction extracts feature values from the reflected infrared image and the reflected green image separately. Veins and similar information can be confirmed in the infrared image, and fingerprints, joint wrinkles, fat lines, and the like in the green image, so standard vein pattern extraction and texture pattern extraction are applied to obtain the biometric features. To sharpen the line patterns, unsharp masking may be applied as preprocessing. Although there are two images, one from each camera, it is possible to use the image of only one camera, or to extract features from both images separately and either merge the resulting patterns or keep them as independent patterns. In any case, since the two images have parallax and thus carry different information, using the images of multiple cameras can be expected to improve accuracy.
In the matching process, the similarity of the extracted patterns is determined by a standard template matching or feature point matching method, and authentication succeeds when the obtained similarity exceeds a predetermined threshold. Since several biometric features are obtained, from the infrared image, the green image, and the images of multiple cameras, the authentication decision may take all of them into account, for example by accepting when a plurality of comparison results show high similarity. Computing multiple similarities adds redundancy to the data, which helps improve accuracy. A sketch follows.
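An illustrative score computation and fused decision: normalized cross-correlation stands in for the "standard template matching method", and the threshold and max-fusion rule are placeholders.

    import cv2

    def similarity(probe, template):
        """Peak normalized cross-correlation; the enrolled template is assumed
        cropped smaller than the probe so a small search window absorbs
        residual misalignment."""
        res = cv2.matchTemplate(probe, template, cv2.TM_CCOEFF_NORMED)
        return float(res.max())

    def decide(probe_feats, enrolled_feats, threshold=0.8):
        """Compare each feature image (IR, green, per camera) with its enrolled
        counterpart and accept if the best score clears the threshold."""
        scores = [similarity(p, t) for p, t in zip(probe_feats, enrolled_feats)]
        return max(scores) >= threshold, scores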
Next, the difference between guidance at registration and at authentication will be described in detail. At registration it is desirable to image the living body in as stable a state as possible. Therefore, registration data is generated after continuously guiding the living body to the ideal presentation position so that it is imaged well: for example, the finger is guided so that it is projected as close to the center as possible, imaged over as wide a range as possible, rotated as little as possible and facing the camera, within the camera's depth of field, and not bent.
However, it is difficult to align the finger precisely in mid-air, and a user who cannot hold still at the ideal presentation position will find the device less convenient. The guidance therefore needs a certain margin.
Even when the captured image is geometrically corrected as described above, an image of the same quality cannot always be captured when the position of the finger actually changes. Therefore, a plurality of pieces of registration data covering various finger postures are obtained; at the time of matching, the input is compared with all pieces of registration data to obtain a plurality of similarities, and the final authentication determination is made from these similarities. Since the registration data then contains various changes in finger posture, this method is effective for authentication that is robust against posture change. However, if the finger is always guided to the same criteria at the time of registration, the finger posture is fixed at every capture, and it is difficult to include posture variation in the registration data.
Therefore, the first image is captured at the position with the highest image quality; next, an image is captured at a higher magnification with the finger slightly closer to the camera; finally, an image is captured at a position where the finger is farther from the camera. When these three images are registered, the similarity of at least one of them remains high at the time of authentication even if the presented finger moves closer to or farther from the camera relative to the standard position. As a result, matching that is robust against variation in magnification can be realized.
Similarly, various combinations can of course be made with respect to the posture of the finger, for example, three captures with the finger guided to the standard position, shifted to the left of it, and shifted to the right of it. Considering which finger postures produce the largest biometric variation, which depends on the characteristics of the camera and the shape of the device, it is more effective to guide the finger into the postures with large variation. By such guidance, authentication that is robust against changes in finger posture can be achieved.
On the other hand, if strict alignment were required at the time of authentication, the operation time would increase and alignment would become cumbersome, lowering convenience. Therefore, the allowable amount of positional deviation during guidance is made larger than at the time of registration, and the deviation is absorbed by correction based on the posture information of the finger. If the system is configured to capture and match images in real time, authentication succeeds as long as the finger passes through a posture close to the registration data at some position while the hand is presented, so authentication can be performed accurately even when a large positional deviation is allowed. The process flow may therefore be changed so that the authentication process is performed even when the finger is not at the proper position, while a message indicating the positional deviation is displayed to the user on the screen.
As described above, a high-quality image of the living body can be captured at the time of registration, and at the time of authentication the processing can be performed with a highly convenient operation.
Fig. 12 is a diagram illustrating an embodiment of the structure and operation method of an authentication device that easily guides the position of a finger. In non-contact imaging, since the finger is held in the air, it is difficult to know which part the camera is capturing without checking the screen, and it is also difficult to judge the appropriate distance from the camera. Therefore, as shown in fig. 12(a), a fingertip guide mark 220 and finger-base guide marks 221 are provided on the device case 21. Each guide mark is drawn as a triangle whose apex points toward the fingertip side. This shows the relationship between the device and the orientation of the finger. Note that these marks only need to indicate the position and orientation of the finger, and their shape is not limited to a triangle.
As shown in fig. 12(b), the user places the tip of the finger 1 on the fingertip guide mark 220 and places the base of the finger between the two base guide marks 221. The finger is then moved away from the device as shown in fig. 12(c). When the finger first touches the device case 21 and is then moved away in this manner, the probability that it passes through the center of the camera's optical axis at the optimum distance from the camera increases, so higher-quality imaging of the living body can be realized. Although the guide marks are disposed at the center of the housing in the present embodiment, they may be disposed so that the finger easily moves to an appropriate imaging position, according to the position of the camera, the number of fingers to be imaged, and the like. The guide mark may also be a depression formed in the device case 21 to match the shape of a finger, in which the finger is placed.
An example of the processing flow of the non-contact authentication device with guide marks on the device is described below. First, the system guides the user with a message such as "please put a finger on the device". Then, the light source is turned on, and the device waits for a living body to be presented while continuously capturing images. When a finger is placed on the device, the image saturates due to the strong reflected light of the light source. Since whether the image saturates is determined by the distance between the light source and the finger, placement of the finger can be detected by measuring in advance the average luminance value when a finger rests on the device case 21 and applying threshold processing to that value. When the user places a finger on the guide marks, this finger-placement determination process detects the placement.
The system then guides the user with "please move the finger away". When the user moves the finger away from the device as shown in fig. 12(c), the intensity of the reflected light decreases, and the finger is recognized as having separated from the device. When the distance measurement processing described above detects that the finger is at an appropriate distance, the biometric feature is extracted from the image at that moment. In the registration process the biometric feature is registered, and in the authentication process it is compared with the registration data.
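A minimal sketch of this place-then-lift detection is shown below, assuming thresholds calibrated in advance as described; the numeric values are illustrative placeholders.

```python
import numpy as np

PLACED_THRESHOLD = 200.0  # calibrated mean luminance with a finger on the case (assumed)
LEFT_THRESHOLD = 120.0    # calibrated mean luminance once the finger moves away (assumed)

def finger_placed(frame: np.ndarray) -> bool:
    # Strong reflected light saturates the image while the finger rests on the marks.
    return float(frame.mean()) >= PLACED_THRESHOLD

def finger_left(frame: np.ndarray) -> bool:
    # Reflected light weakens as the finger recedes from the light source.
    return float(frame.mean()) <= LEFT_THRESHOLD
```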
According to the present embodiment, since placing the finger on the guide marks allows the living body to be imaged without the finger deviating greatly from the angle of view or from the appropriate imaging distance, the best image can be selected within a single series of motions, without the user having to hold the finger still in the air.
As described in the embodiment of the guidance method at the time of registration, the threshold of the optimum imaging distance may be changed according to the number of registration attempts when obtaining a plurality of pieces of registration data, so that the registration data includes captures at different distances. This makes it possible to maintain similarity with the registration data even if the distance of the finger changes during authentication.
Another operation method for easily guiding the finger position is tapping the device. The operation of moving the finger away from the device is difficult to perform unless the device is fixed or held with the other hand. Therefore, instead of the operation of moving the finger away, the user holds the device in one hand and taps with a fingertip of the same hand, moving the fingertip a certain distance away from the predetermined position in the vertical direction and then returning it. Thus, a living-body image at the optimum camera distance can be captured with a one-handed operation alone.
As shown in fig. 13(a), the user first holds the device case 21 and places, for example, the index finger 240 on the fingertip guide mark. When the system instructs the user to tap the finger, the user starts tapping. Since the finger is inclined during tapping as shown in fig. 13(b), the captured finger image is also captured in a tilted state. This can nevertheless be used for authentication as long as the whole finger fits within the angle of view at an appropriate angle and the registration data is captured in the same posture. Therefore, the finger is moved up and down several times, and several images are captured at the moments when the finger fits within the angle of view. Feature data of the living body is then generated from these images and registered. If the same operation is performed during authentication, the similarity with the registration data rises at the instant the finger posture comes close to the registered state, and the authentication succeeds.
In this way, the tapping operation confines the presentation position of the finger to a certain range, improving reproducibility with respect to the registration data; the finger can be imaged easily without aligning its position in the air; and authentication can be performed while holding the device in one hand, improving the convenience of the device.
Example 2
Fig. 14 shows an example of a reflective non-contact biometric authentication device including a light source that emits infrared light and one infrared camera as a second embodiment.
A plurality of infrared light sources 31 are arranged around the imaging device 9, and a band-pass filter 33 that passes only light near the wavelength emitted by the infrared light sources 31 is mounted on the lens unit of the imaging device 9. The imaging device 9 is an infrared camera and can capture light reflected by the finger 1.
Fig. 15 shows an example of the processing flow of this embodiment. It is substantially the same as the processing flow shown in fig. 3 of embodiment 1. However, it differs from embodiment 1 in the following respects: one infrared camera is used instead of two color cameras; distance measurement based on stereoscopic vision is not performed; and a single-wavelength image is used instead of multiple wavelengths. The outline of the processing flow is described below.
First, the system prompts the user to present a finger (S501). Then, infrared light is irradiated and imaging is started with the single infrared camera (S502); exposure adjustment is performed when external light is strong. Next, the distance and the three-dimensional shape of the subject are estimated from the captured image (S503). It is determined whether a subject is present within a predetermined distance range from the authentication device (S504); if not, it is determined that no living body is present, and the device waits until one is presented. If a subject is present within the predetermined distance range, it is treated as a living body and the posture detection processing of the finger is performed (S505). Next, guidance processing of the finger is performed using the posture determination result (S506). When the position or angle of the finger deviates from the assumed range, or the finger is inappropriately bent, the user is guided by a screen display, a guidance LED, a buzzer, or the like, and posture determination and guidance are repeated until an appropriate imaging state is reached. When the posture is determined to be appropriate (S507), the finger image is cut out based on the posture information of the finger (S508). The posture information includes the positions of the fingertip and the finger base, and an image of each of the one or more fingers to be authenticated is extracted using this position information. Then, the magnification of the cut-out finger image is corrected according to the position, orientation, and distance of the finger, normalizing the finger posture (S509). Next, biometric features are extracted from the normalized finger image (S510). Finally, the biometric features are compared (S511), the similarity with the registration data is calculated, and it is determined whether the user is a registered person (S512).
Here, the processing modules that differ between fig. 15 of the present embodiment and fig. 3 of the first embodiment will be described in detail.
First, the irradiation of infrared light and the imaging process by the camera (S502) will be described. In a scene where the biometric authentication device is actually used, various objects exist around the imaging device. Therefore, when capturing an infrared reflection image, unnecessary objects other than the finger are often captured as well. Since such objects become noise for accurate finger detection, they must be removed to perform authentication with higher accuracy. One method, as shown in fig. 14, is to mount a band-pass filter 33 that transmits only the wavelength of the light source 31. However, when reflected light in the same wavelength region exists in the environment, that component cannot be removed. Therefore, in the present embodiment, background removal is performed by taking the difference between images captured with the light source on and off.
First, an image is captured with the light source 31 turned off, and an image with the light source 31 turned on is captured immediately afterwards. When the light-off image is subtracted from the light-on image, unnecessary subject components that appear bright regardless of the light source 31 are removed, while the image of the presented finger remains. This enables accurate finger extraction regardless of the influence of external light. Further, since any external light component falling on the finger surface is also cancelled by the subtraction, malfunctions caused by external light in the posture detection processing described later can be reduced.
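A minimal sketch of this on/off difference, assuming two aligned 8-bit grayscale captures:

```python
import numpy as np

def background_subtract(lit: np.ndarray, unlit: np.ndarray) -> np.ndarray:
    """Light-on frame minus light-off frame, keeping only the light-source
    contribution (the presented finger)."""
    # Subtract in a wider type so regions brighter in the unlit frame clamp
    # to zero instead of wrapping around.
    diff = lit.astype(np.int16) - unlit.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```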
Further, when the image with the light source turned off is extremely bright, it is known that strong external light is present. In that case, by irradiating the light source more strongly according to the brightness and controlling the camera to shorten the exposure time, measurement of the living body can be realized even in an environment with strong external light. When the exposure is adjusted, the optimum parameters required by the finger posture detection processing and the like described below change, so the optimum parameters for each exposure time are obtained in advance for each process.
In this way, in the present embodiment, the unnecessary background is removed by the background difference based on blinking the light source, improving the finger detection accuracy. However, since this method halves the frame rate of imaging, when the frame rate is not sufficiently high, the unnecessary background may instead be removed by the band-pass filter 33 alone, or a method may be applied in which a large number of infrared reflection images with the finger and background already separated are obtained in advance and the finger and background are separated by machine learning such as a random forest.
Next, the measurement of the distance and the three-dimensional shape of the subject (S503) will be described in detail. Since the present embodiment cannot perform distance measurement based on stereoscopic vision as in embodiment 1, distance measurement must be performed with one infrared camera.
One method of estimating the distance with one infrared camera uses the inverse square law of the intensity of the irradiated and reflected light. Since the light sources are arranged around the camera, they can be regarded as a point light source, and the light emitted toward the finger spreads spherically. The surface area of the sphere grows in proportion to the square of its radius, so the light energy per unit area is inversely proportional to the square of the radius. Therefore, the brightness of the subject is determined by the reflectance and absorptance of the subject and the distance between the subject and the camera.
Let Ii be the amount of irradiated light, μ the reflection/absorption coefficient of the skin, D the distance to the subject, and Io the amount of received light. If the reflection from the finger surface is assumed to be Lambertian, i.e. isotropically diffused, the following approximation holds.
[ numerical formula 4]
Figure BDA0001589330480000221
Here, since the irradiation light intensity Ii and the average reflection intensity μ of the skin can be measured in advance, solving for D gives the distance of each pixel in the image. However, if Ii is measured directly as a luminance value, the luminance normally saturates; therefore, the image is captured while the irradiation light amount Ii is physically attenuated by a filter with a known attenuation factor, and calibration that accounts for the attenuation of Ii is performed. Alternatively, a mirror or the like that directly reflects the irradiation light Ii may be mounted in the device, and Ii may be observed directly through the filter with the known attenuation factor. By this method, a distance image can be obtained from the luminance values of the presented finger.
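A minimal sketch of this per-pixel estimate, directly rearranging formula 4 into D = sqrt(μ·Ii/Io); Ii and μ are assumed to be pre-calibrated values.

```python
import numpy as np

def distance_map(Io: np.ndarray, Ii: float, mu: float) -> np.ndarray:
    """Per-pixel distance from the received light image Io via formula 4."""
    Io = np.maximum(Io.astype(np.float64), 1e-6)  # avoid division by zero
    return np.sqrt(mu * Ii / Io)
```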
Alternatively, a camera capable of accurately measuring distance may be used together with the imaging device of the present embodiment to capture finger images, a large amount of learning data in which ground-truth distance labels are assigned to the respective pixels may be collected, and a distance image of the finger may then be obtained from the infrared reflection image by machine learning such as a random forest.
In practice, however, an image brighter than assumed may be captured due to the influence of external light or the like, in which case the absolute value of the distance D contains an error. Therefore, by obtaining a relative distance distribution in which the distance of the subject at an arbitrary position in the image is set to 1, the three-dimensional structure of the subject can still be grasped. In addition, by generating in advance a rough conversion table between image luminance values and distance values, reading the distance value for the pixel of the subject at that arbitrary position from its luminance, and multiplying the relative distance values of the entire image by it, a rough distance image can be obtained.
Next, the differences from embodiment 1 in the posture detection of the finger (S505) will be described. The same processing as in embodiment 1 may be performed using the method of obtaining a distance image described above, but as a simpler method, an example of detecting the finger posture without converting the infrared reflection image into a distance image is shown here.
First, the infrared reflection image is regarded as a distance image, and the region of interest 102 is obtained by the method shown in fig. 6 of embodiment 1: the higher the luminance value of the reflection image, the closer the distance.
The position of the finger in the x-axis direction can be obtained from the x-coordinate found when the region of interest is determined with the infrared reflection image regarded as a distance image, as shown in fig. 6 of embodiment 1. The position of the fingertip in the y-axis direction can be obtained by the same method as shown in fig. 7. As for the distance from the camera (position detection on the z-axis), the inverse square law holds between the luminance of the reflection image and the light-source distance, so the two are correlated; by examining in advance the relationship between the average luminance of the finger and the camera distance at a given light-emission value, a conversion table from average luminance to camera distance can be generated. When the contour of the input finger is detected, the average luminance inside the finger is obtained, and the camera distance is calculated from it using the conversion table. In this way, it can be detected that the finger is too close to the camera when the luminance is too bright, and too far when it is too dark. Although this value changes under the influence of external light, if the brightness due to external light is grasped by capturing an image with the light source turned off, its influence can be reduced by subtracting it.
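The z-axis estimate can be sketched as a lookup into such a calibration table; the calibration pairs below are hypothetical placeholders, not values from the patent.

```python
import numpy as np

# (average luminance inside the finger, camera distance in mm); hypothetical
# calibration measured in advance at a fixed light-emission value.
CALIBRATION = np.array([[230.0, 20.0], [180.0, 40.0], [120.0, 60.0], [70.0, 80.0]])

def camera_distance(mean_finger_luminance: float) -> float:
    lum, dist = CALIBRATION[:, 0], CALIBRATION[:, 1]
    # np.interp requires increasing x, so flip the (decreasing) luminance axis.
    return float(np.interp(mean_finger_luminance, lum[::-1], dist[::-1]))
```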
As for the detection of yaw, the center axis 121 of the finger is obtained and rotation correction is performed so that the axis becomes parallel to the y-axis of the image, in the same manner as the method shown in fig. 7.
Next, regarding the detection of pitch, an embodiment of the detection method is shown in fig. 16. In this example, the fingertip is tilted away from the camera. First, the average luminance projection distribution 301 along the y-axis is calculated for the cut-out image 300 of the finger captured in the infrared reflection image; the average luminance projection at coordinate y is denoted Iave(y). Because the fingertip is far from the camera, Iave(y) takes low values near the fingertip. Since the reflection image is affected by the fine irregularities of the finger and by luminance changes due to the presence or absence of blood vessels, fine fluctuation is observed in Iave(y). Next, the relative distance Dr(y) defined by formulas 5 to 7 below is calculated.
[ numerical formula 5]
Linear distance ratio value D' (y) Wa (1/Sqrt { lave (y)), (formula 5)
[ numerical formula 6]
Vertical distance ratio D (y) ═ D' (y) ^ 2-Wb (y-yc) ^ 2, · (formula 6)
[ number formula 7]
Relative distance Dr (y) D (yc) c (formula 7)
Here, yc is the center coordinate on the y-axis of the cut-out image 300 of the finger, Wa is an arbitrary scaling coefficient, Wb is a calibration coefficient described later, and Sqrt{x} is the square root of x. D'(y) is a value proportional to the oblique distance from the light source to coordinate y, and D(y) is a value corresponding to the vertical distance from the height of the light source to coordinate y (see fig. 16(b)). The relative distance Dr(y) is the relative distance at each coordinate y when the vertical distance at coordinate yc is defined as zero.
A coordinate y with a smaller Iave(y) is estimated to be farther away, that is, to have a larger D(y) or D'(y). However, as shown in fig. 16(b), when a flat plate is placed horizontally in front of the camera, the distance calculated from the average luminance projection distribution 302 of the plate should be constant regardless of y; yet the luminance darkens at y coordinates farther from the light source, so D'(y) varies with y. Since a distance that changes with the y coordinate is inconvenient, D'(y) is converted into D(y) by formula 6. Formula 6 contains the parameter Wb, which is determined in advance so that D(y) becomes a fixed value regardless of y when a horizontal flat plate is presented in front of the camera.
Fig. 16(c) shows the flow of the calculation. First, D'(y) is obtained from Iave(y) by formula 5 (the vertical axis is omitted in the figure). The result is then converted into Dr(y) using formulas 6 and 7. The obtained Dr(y) is linearly approximated by the least squares method to obtain the estimated pitch line 303 of the finger, and finally the inclination of this line is calculated. This inclination is the estimated pitch angle; when it exceeds a fixed value, a warning about the pitch deviation is issued, or the infrared reflection image is normalized by the pitch correction method detailed in embodiment 1.
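The following sketch strings formulas 5 to 7 together as described; Wa and Wb are the calibration constants of the text, with illustrative values assumed here.

```python
import numpy as np

def estimate_pitch(finger_img: np.ndarray, Wa: float = 100.0, Wb: float = 0.02) -> float:
    """Estimate the pitch inclination of a cut-out grayscale finger image."""
    Iave = np.maximum(finger_img.astype(np.float64).mean(axis=1), 1e-6)  # Iave(y)
    y = np.arange(len(Iave), dtype=np.float64)
    yc = len(Iave) // 2
    D_oblique = Wa * (1.0 / np.sqrt(Iave))        # formula 5
    D_vert = D_oblique**2 - Wb * (y - yc) ** 2    # formula 6
    Dr = D_vert - D_vert[yc]                      # formula 7
    slope, _ = np.polyfit(y, Dr, 1)               # least-squares line (line 303)
    return float(slope)  # the text treats this inclination as the estimated pitch
```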
Finally, regarding roll, the presence or absence of roll and its rotation direction are estimated from the orientation of the finger joints and the shape of the contour. Specifically, a large number of finger images with known roll angles are collected and learned by, for example, a convolutional neural network, after which the roll angle of an unknown image is estimated. Using the result, a warning is issued when the roll angle exceeds a fixed value, or the infrared reflection image is normalized by the roll correction method detailed in embodiment 1.
With the above method, the posture of the finger can be estimated without obtaining a distance image of the object, the amount of deviation from the standard posture can be obtained, and the user can be guided or the pattern can be corrected using the result.
Fig. 17 shows an example of the configuration of an authentication device that determines, from the region shaded from the infrared light sources, that the distance to the subject is too short. A light-shielding plate 320 is disposed above the imaging device 9 mounted on the input device 2 so as to surround its outer periphery. The light-shielding plate 320 hides the inner side of the infrared light sources 31 arranged on the circumference and blocks part of the infrared light. When the finger 1 approaches the input device 2, a region 321 that the light does not reach is produced, as shown in the figure. In the image observed at this time, the vicinity of the center of the imaging device 9 is dark and its periphery is bright. Therefore, two concentric circles with different radii are defined around the image center, and this situation is detected by comparing the average luminance of the inner circle with that of the annular region between the inner and outer circles. When it is detected, it can be determined that the finger is too close, and the user can be warned. Furthermore, the distance between the finger and the camera can be estimated by measuring the area or radius of the dark central region.
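A minimal sketch of this inner-circle versus annulus comparison; the radii and brightness ratio are illustrative assumptions that would depend on the device geometry.

```python
import numpy as np

def finger_too_close(img: np.ndarray, r_in: int = 40, r_out: int = 120,
                     ratio: float = 0.5) -> bool:
    """Detect the shadow of the light-shielding plate: a dark center with a
    bright surrounding annulus indicates the unlit region 321."""
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    d2 = (yy - h / 2) ** 2 + (xx - w / 2) ** 2
    inner = img[d2 <= r_in**2].mean()
    annulus = img[(d2 > r_in**2) & (d2 <= r_out**2)].mean()
    return inner < ratio * annulus
```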
Next, the feature extraction process (S510) of embodiment 2 will be described in detail. In the present embodiment, since only a reflection image at the infrared wavelength is obtained, image processing over multiple wavelengths as in embodiment 1 cannot be performed. Therefore, the vein pattern, the joint wrinkle pattern, and the finger contour shape, which readily appear in the near-infrared reflection image, are extracted as feature amounts. First, in order to extract veins, an unsharp mask that strongly enhances lines along the long direction of the finger while suppressing noise is applied. This enhances the vein pattern, which tends to run in the longitudinal direction of the finger, while blurring other information. By applying a finger vein pattern extraction process commonly used in the related art to this image, an effective and stable finger vein pattern can be extracted from the infrared reflection image. Next, an unsharp mask that strongly enhances lines orthogonal to the long direction of the finger while suppressing noise is applied so that the joint wrinkle pattern is enhanced; extracting the pattern in the same manner then yields a line pattern consisting mainly of joint wrinkles. Since the shape of this line pattern and the distance between the first and second joints are individual to each person, they can be used effectively as authentication information. The finger contour information can likewise be obtained by the finger contour detection processing shown in embodiment 1. By this processing, biometric features of a plurality of modalities (veins, joint wrinkles, and finger contour) can be obtained.
These pieces of information are acquired and registered as a plurality of biometric patterns; matching is performed either independently for each or while constraining the positional relationships between them, and finally the matching results of the respective modalities are fused by a general multimodal biometric authentication technique to obtain the final authentication result. Thus, highly accurate authentication can be realized even with a single-wavelength image.
As another embodiment of subject distance measurement, a method of measuring the three-dimensional structure of the subject while continuously imaging it may be implemented. When imaging starts, the system guides the user to move the finger in various directions over the input device and captures images continuously. By detecting how the subject moves in the captured images with template matching or feature point matching such as SIFT, the three-dimensional shape of the subject can be measured. This technique is generally called Structure from Motion (SfM), and the three-dimensional structure of the subject surface can be obtained by it. However, when the finger bends and stretches, correspondence methods such as feature point matching that handle non-rigid bodies are required.
In this method of moving the living body, even biometric features on the rear surface that cannot be observed in a given posture can be obtained as stereoscopic biometric features from the movement amount and the three-dimensional structure, provided the living body is moved so that they can be imaged; thus, matching that is robust to movement of the living body can be realized.
Example 3
Fig. 18 shows an example of a reflective non-contact biometric authentication device including a light source that emits visible light and a color camera as a third embodiment.
A plurality of visible light sources 341 are arranged around the color camera 340, and a low-pass filter 342 for blocking infrared light is mounted on the lens of the camera. The wavelength of the visible light source 341 has a peak of emission intensity at around 450nm for blue, around 550nm for green, and around 620nm for red, for example, and the emission intensity of each wavelength can be adjusted individually.
The authentication processing procedure is the same as in embodiment 2. The differences from embodiment 2 are as follows: a color camera is used instead of an infrared camera, and the captured images are visible-light reflection images at multiple wavelengths rather than an infrared reflection image at a single wavelength. Here, the processing specific to the present embodiment will be described in detail.
In the present embodiment, unlike the foregoing embodiments, the subject can be imaged by an RGB color camera. Unlike infrared light, visible light has difficulty reaching deep parts below the surface of a living body, but on the other hand it provides multiple wavelengths, so a large amount of biological information can be captured from the optical characteristics of the living tissue at the skin surface.
Skin tissue contains various biometric features such as fingerprints, epidermal wrinkles, finger joint wrinkles, melanin, blood in the capillaries, arteries, veins, and subcutaneous fat, whose light absorption and reflectance differ from one another. Therefore, by examining the reflection characteristics of each tissue in advance and observing the living body while irradiating it with the plural visible wavelengths, the information of each tissue can be extracted independently. Methods such as separating melanin and blood by independent component analysis of the three RGB images, or multilayer biometric analysis techniques, can be used. Further, since veins are easily observed in the red image while finger joints are easily observed in the green or blue image, the joint component can be removed from the red image by taking the difference between the two images or dividing them at an appropriate ratio, yielding a purer vein image, so the biometric features can be measured more clearly.
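A minimal sketch of the difference-based variant: the joint-wrinkle component visible in the green plane is subtracted from the red plane at a mixing ratio alpha, which is an assumed, device-dependent calibration value.

```python
import numpy as np

def isolate_veins(red: np.ndarray, green: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """Remove the joint-wrinkle component (strong in green) from the red plane."""
    diff = red.astype(np.float64) - alpha * green.astype(np.float64)
    return np.clip(diff, 0, 255).astype(np.uint8)
```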
In this way, in reflective non-contact measurement using the three-wavelength light source and the color camera, biological information that is difficult to observe with infrared light can be obtained, with the effect that the authentication accuracy can be improved.
When a camera capable of capturing infrared light as well as visible light is available, an infrared light source can of course be added to the RGB light sources for imaging. This makes it possible to capture information such as veins lying deeper below the surface of the living body, increasing the biometric features usable for authentication and thereby improving authentication accuracy.
The distance measurement of the subject based on the inverse square law shown in embodiment 2 can also be applied in the present embodiment. In this embodiment, however, the light source has three wavelengths, so a distance can be calculated independently for each. For example, the average reflection characteristics of the skin are obtained in advance for the three irradiated wavelengths, and the respective reflected lights are captured simultaneously by the color camera. As shown in formula 4, the relationship between the intensity of the light radiated at each wavelength and the intensity observed is described by the attenuation due to reflection and the attenuation due to subject distance. If the average attenuation of light by reflection at the finger surface is known, the subject distance remains the only unknown, so the subject distance is obtained for each of the three wavelengths. The three distance measurement results differ because of external light, measurement error, and the like, but all are expected to lie near the true value. To average out the errors, the three distance values are averaged to obtain the final result. This reduces the influence of errors compared with a distance estimated from infrared light alone as described above, realizing more accurate distance measurement.
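A minimal sketch of this per-wavelength estimation and averaging, reusing the inverse-square relation of formula 4; the per-channel irradiation intensities Ii and skin reflectances mu are assumed pre-measured calibration values.

```python
import numpy as np

def fused_distance(Io_rgb: dict[str, np.ndarray],
                   Ii: dict[str, float],
                   mu: dict[str, float]) -> np.ndarray:
    """Average the three independent inverse-square-law distance maps."""
    estimates = [
        np.sqrt(mu[c] * Ii[c] / np.maximum(Io_rgb[c].astype(np.float64), 1e-6))
        for c in ("r", "g", "b")
    ]
    return np.mean(estimates, axis=0)  # averaging reduces per-wavelength error
```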
Example 4
Fig. 19 shows an example of the configuration of a reflective non-contact biometric authentication device in a smartphone or tablet as a fourth embodiment. The smartphone 360 includes an inner camera 362 above its color liquid crystal panel 361 and an outer camera 363 on its back surface, and a white light source 364 for flash shooting is mounted near the outer camera 363. The inner camera 362 and the outer camera 363 are color cameras capable of capturing visible light, with optical filters that block infrared light mounted inside. Focus adjustment and exposure adjustment functions are also provided.
The authentication processing procedure in this embodiment is basically the same as in embodiment 2. The differences from embodiment 2 are as follows: there are situations where a reflective light source cannot be used, and the sensitivity characteristics of the camera and the wavelength spectrum of the light source may not be known in advance. Here, the processing specific to the present embodiment will be described in detail.
First, the imaging processing, distance measurement, and finger posture detection of the subject will be described. For the distance measurement, posture detection, and biometric feature extraction of the subject, the method of improving distance measurement accuracy with a color camera and the method of extracting multiple biometric features inside the skin described in embodiment 3 can be used. In addition, the color of the palm side of the finger may be registered separately in advance and used effectively in detecting the finger region.
On the other hand, when the living body is imaged by the outer camera 363, which can use the white flash light source, if the emission spectrum of the white light source is grasped in advance, the emission intensity of each wavelength component can be used for calibration in distance measurement and in measuring the biometric feature amounts; this improves the accuracy of finger region detection, distance measurement, biometric feature extraction, and the like.
When the inner camera 362 is used, the liquid crystal panel 361 faces the presented living body and can therefore serve as the light source for imaging. Distance measurement and biometric measurement of the subject can then be realized by the same methods as the device with RGB visible light sources described in embodiment 3. Further, since the liquid crystal panel can control planar light emission, the light source position can be set arbitrarily, for example capturing one image with only the upper half of the panel lit and another with only the lower half lit. Since the position of the light emission is known, the three-dimensional shape of the subject can be obtained by the photometric stereo technique in the configuration of the present invention.
However, the light intensity of a liquid crystal panel is generally weaker than that of a light-emitting element such as an LED, and it is conceivable that the subject cannot be illuminated with sufficient intensity. In that case, imaging must rely on the ambient light around the authentication device. Since the position of the ambient light source cannot be determined, distance measurement based on image luminance intensity is difficult; and since the spectral distribution of the light varies with the environment, it is also difficult to obtain biological information such as veins, melanin, and fat lines from color information alone. Therefore, the captured finger image is decomposed into its RGB color planes, the average luminance of each plane is obtained, any plane darker than a predetermined threshold is discarded and not used, and the enhancement filtering of veins, finger joints, and the like described above is applied to the planes judged sufficiently bright, so that a plurality of biometric features can still be obtained.
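A minimal sketch of this plane-gating step using OpenCV; the brightness threshold is an illustrative assumption.

```python
import cv2
import numpy as np

def usable_planes(bgr: np.ndarray, min_mean: float = 60.0) -> dict[str, np.ndarray]:
    """Split a BGR capture into color planes and keep only those bright
    enough under the current ambient light to be reliable."""
    planes = dict(zip(("b", "g", "r"), cv2.split(bgr)))
    return {name: p for name, p in planes.items() if p.mean() >= min_mean}
```

The surviving planes would then each receive the appropriate vein or joint-wrinkle enhancement filtering.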
In addition, when the finger joints can be detected, the color of hemoglobin can be observed at the joint positions, so the hemoglobin color can be extracted by taking the difference between the color at the finger joint and the color of its surroundings. For this, the color of typical hemoglobin is observed in advance under various kinds of ambient light and expressed as color space information such as RGB or HSV values. Then, the hemoglobin color space information obtained from the current image is compared with the various prepared sets, and the ambient light associated with the most similar set can be estimated as the spectrum of the current ambient light. This method estimates the ambient light and is effective for the finger region detection and biometric measurement described above.
In embodiment 2, the finger position could be estimated on the premise that reflected light is emitted from a position close to the camera; in the present embodiment, particularly when imaging with the inner camera 362, there is no such light source, so that premise cannot be used. Therefore, as an embodiment of finger posture detection, a method is used in which the finger region captured in the image is detected using the color information and edge information of the image.
First, as learning data, the finger is captured with the smartphone camera of this embodiment together with a distance sensor or the like capable of computing distance information and posture information of the finger. In this way, ground-truth values of the distance and posture information are attached to the images captured by the smartphone camera. At this time, as many finger types, posture types, and images as possible are collected. Next, using general machine learning, parameters are learned for estimating the distance and posture information of the finger from the smartphone image alone. For example, to obtain the distance and posture information of a pixel at a certain coordinate, the strength of the skin-color component of the pixel value and the amount of nearby image edges are used as evaluation values, and learning is performed with a random forest. A smartphone image is then passed through the trained random forest, each pixel is classified as background or finger interior, and, for pixels inside a finger, it is further determined whether they belong to the fingertip, the base, a joint, and so on. The distance value is also estimated. Since the estimated distance value varies from pixel to pixel due to noise and the like, processing to spatially smooth the distance values may be applied.
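A hedged sketch of the per-pixel classification stage with scikit-learn is shown below. The feature design (skin-color strength plus local edge amount) and the label set follow the text, but the training data here is a random placeholder; in practice it would come from smartphone images annotated with a distance sensor as described.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# features: [skin_color_strength, local_edge_amount]
# labels: 0=background, 1=finger interior, 2=fingertip, 3=finger base, 4=joint
X_train = np.random.rand(1000, 2)         # placeholder features
y_train = np.random.randint(0, 5, 1000)   # placeholder ground-truth labels

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

def classify_pixels(features: np.ndarray) -> np.ndarray:
    """features: (n_pixels, 2) array; returns one region label per pixel."""
    return clf.predict(features)
```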
If the interior region of the hand and the background region can be separated in this way, the background portion can be blacked out, and the method of detecting the finger posture from the luminance values of the reflection image without distance measurement can then be applied as in embodiment 2. Therefore, by combining this with the biometric feature extraction methods described in embodiment 3, the authentication process can be realized.
Fig. 20 shows an example of a guidance method for imaging a finger with the outer camera in the present embodiment. The user grips the smartphone with the hand 380 holding one side of the device and images the finger 1 of the other hand. A finger outline guide 381 is drawn on the liquid crystal panel 361, and the user images the finger so as to match the guide. The system computes the posture information of the finger in real time, captures the image at the moment the finger enters the allowable range, extracts biometric features from it, and performs authentication. If the position of the finger base is clearly indicated in the finger outline guide 381 as shown in the figure, it becomes easy to adjust the rough distance between the camera and the finger. Such a display can be applied to any of the embodiments of the present invention.
When a positional deviation occurs, the direction of the deviation may be indicated visually, for example with an arrow, as in the above embodiments, or the deviation of the finger may be notified by sound or vibration using the speaker or vibration element mounted on the smartphone. Since it is particularly difficult to guide the distance between the camera and the living body on the screen, alignment can be made intuitive by, for example, increasing the vibration amount or the volume of the guidance sound from the speaker as the positional deviation grows.
Fig. 21 shows an example of a guidance method for imaging a finger with the outer camera that can be performed with a one-handed operation. The user holds the smartphone in one hand as shown. First the system instructs the user to cover the outer camera 363 with a fingertip, and the user covers the outer camera 363 with the finger 1. When the authentication system detects this, the finger outline guide 381 is displayed on the screen, instructing the user to bend the finger backward so that it moves away from the camera for imaging. The user holds the smartphone 360 as shown in fig. 21 and bends the finger backward, away from the outer camera 363, following the guide.
At this time, as shown on the liquid crystal panel 361, the finger 1 appears in a tilted state. When this state is detected, the system performs the imaging and authentication processing. By applying this guidance method, the position of the finger can easily be aligned even with one hand, and the biometric authentication described in the present invention can be realized on a smartphone.
Further, the user may be guided to shake the smartphone 360 gently while keeping the finger 1 bent backward. When continuous images are captured during this motion, the presented finger 1 is observed at the same position while the unnecessary background 400 reflected in the image moves with the shaking. Therefore, by determining the presence or absence of movement for each pixel, treating motionless regions as the finger region and the rest as background, the finger region can be detected. With this method, the finger region can be detected accurately by simple image processing, improving authentication accuracy.
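A minimal sketch of this motion-based segmentation over a short frame sequence; the motion threshold is an illustrative assumption.

```python
import numpy as np

def finger_mask(frames: list[np.ndarray], motion_thresh: float = 12.0) -> np.ndarray:
    """Pixels that stay still across the sequence are treated as the presented
    finger; pixels that move with the shaking are treated as background."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    motion = stack.std(axis=0)        # per-pixel temporal variation
    return motion < motion_thresh     # True where the finger is
```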
Fig. 22 shows an example of a guidance method when the inner camera is used. The inner camera is generally set up to image faces easily, and it is difficult for it to image a finger near the liquid crystal panel. Moreover, a finger held at a distance in the air hides the liquid crystal screen, so it is difficult for the user to check the display. Therefore, the guidance first has the user place the finger at a given position on the liquid crystal panel and then move it into the air.
First, when authentication starts, a guide 420 prompting finger placement and a finger position guide 421 indicating where to place the finger are displayed on the screen, as shown in fig. 22(a). In the present embodiment, the finger is oriented diagonally up and to the left, but the orientation may be changed to suit the habits of the user's hand, or the finger may be presented fully sideways. The user then aligns the finger with the indicated position. Since the liquid crystal panel 361 is a touch panel, when it detects that the finger has been placed at the predetermined position, a guide 422 prompting the user to lift the finger into the air is displayed as shown in fig. 22(b). Because the finger is placed diagonally up and to the left, this guide 422 is displayed at the lower left of the screen where it is easy to see; however, since the screen may still be hidden by the finger, voice guidance may be used at the same time. The user moves the finger into the air according to the guide, and the finger enters the camera's angle of view as shown in fig. 22(c). When the system detects this, imaging is performed.
Starting with the finger placed on the guide 421 has the effect of leading the finger to a position from which it easily enters the angle of view. When a plurality of fingers are to be imaged, the guide 421 may be a finger outline of the plural fingers; guiding the fingers to open at an appropriate angle then makes it easy to capture multiple fingers simultaneously. With such a guidance method, the authentication processing of the above embodiments can be realized even when the inner camera is used.
As another guidance method, the fingertip may be repeatedly tapped on the inner camera or on the liquid crystal panel near it, with the taps detected through the touch panel or through changes in the luminance of the inner camera. In this case as well, images are captured in consecutive frames, the moment the finger reaches an appropriate distance is detected by a distance measuring means such as measuring the width of the finger outline, and authentication is performed by extracting the biometric features from the finger image captured at that appropriate distance.
This tap-based guidance is a comparatively simple operation and has the following advantages as well: the position of the finger can be guided easily; the finger need not be held still in the air, making alignment easy; and since the distance between the living body and the camera changes constantly, the living body can be imaged in proper focus at the moment it reaches a distance suitable for imaging.
The present embodiment has assumed the inner and outer cameras widely installed on current smartphones. Needless to say, as long as a camera capable of imaging a finger near the liquid crystal panel, an infrared camera, a multispectral camera, a distance camera, or the like is mounted on the smartphone or tablet, distance measurement, finger posture detection, and authentication processing can be performed as in the above embodiments by making effective use of it. The present embodiment is not limited to smartphones and can be implemented likewise on tablets and portable game machines.
Further, for users who find alignment in the air difficult, a finger rest on which a finger can be placed at a predetermined distance from the camera may be provided and fitted to a stand for the smartphone or tablet. For example, a fixed tablet may be installed as an easily operated terminal for general users at an administrative service counter, a bank counter, or the like. Since portability is not a concern in such cases, providing the stand and the tablet together enables authentication by a simple operation without requiring a dedicated authentication device.
Example 5
Fig. 23 shows an embodiment in which the authentication apparatuses shown in the first to third embodiments described above are applied to a video game.
Fig. 23(a) shows a motion capture device 441 for detecting the posture of the player's body and operating the game, placed on top of a television 440 and incorporating the input device 2 for authentication. It images the finger 1 of the user and performs authentication. The user operates the game with the motion capture device 441 or a separately provided game controller, while the input device 2 detects the user's finger and performs authentication. At the time of authentication, guidance may be displayed on the television 440 or given by voice through a speaker or the like.
Fig. 23(b) shows an example in which the input device 2 is mounted on a controller 442 of a video game. The user can perform game operations while holding the controller 442, and the attached input device 2 for authentication can be used for game control and for settling charges for items as described below.
Fig. 23(c) shows an example of a portable game machine. The portable game machine 443 includes the input device 2 for authentication at its central portion. The authentication method is the same as in the smartphone example described in embodiment 4.
Each of the game machines described above can implement the following authentication applications. First, it becomes easy to identify the user playing the game, automatically read out saved data, control access to content on the game machine or the network, and manage or limit the user's play time. Various parameters of a game character can also be changed according to the authentication result, and charges for purchased items and the like can be settled automatically. In particular, characteristics of the game character, such as movement speed or attack power, can be controlled based on the similarity between the registration data and the input data, using it as an interface for game control.
Further, when a plurality of fingers are registered and only one finger is presented at the time of authentication, functions can be assigned according to which finger is presented. For example, in an action RPG, presenting the index finger triggers a striking attack, presenting the middle finger an attack by magic, and presenting the ring finger recovery of stamina by magic. Conventionally, such operations are switched by selecting from a menu screen or entering complicated commands with multiple input keys; this scheme simplifies them. Further, if the biometric information around the circumference of the finger, along its center axis, is registered separately for each roll angle, the user can present the finger on the input device and roll it about the axis, and the registration data corresponding to the presented roll angle will match. Since the roll angle of the finger can then be detected from which registration data matched, this operation can be used effectively as a game interface, for example changing the simulated engine speed in a racing game according to the roll angle of the finger.
Example 6
Fig. 24 shows an embodiment in which the present invention is applied to a walk-through gate, where authentication is performed simply by presenting a finger while walking.
When the user approaches the authentication gate 460, the user brings the finger 1 close to the input device 2 as shown in the figure. The input device 2 captures images continuously, detects the posture of the finger, and performs authentication at high speed. If the authentication succeeds, the shutter 461 opens and the user can pass. Finger guidance can be given through the liquid crystal screen 462 or the speaker 463. The liquid crystal screen 462 can display not only the finger guidance but also the authentication result, the result of billing performed at entry or exit, and the like.
Registration may be performed at the authentication gate 460 itself, with a CPU and memory installed in the gate to store the registration data, or a registration terminal equipped with the input device 2 may be installed elsewhere and the registration data transferred to the authentication gate 460 via a network. The registration data may also be stored on a separately provided server. Likewise, authentication may run inside the authentication gate, or the image may be transferred to a server and matched there. The authentication gate 460 may also be equipped with a wireless communication device so that a user carrying a wireless IC tag or the like can have the registration data stored on the tag transferred to the gate for authentication when passing through.
According to the configuration of the present embodiment, since the device is a reflective, non-contact authentication device, authentication can be performed even when the user only roughly holds the finger over it while walking, improving gate throughput. Further, by applying the difference-image method based on blinking the light source and the exposure adjustment function of the above embodiments, imaging is possible even outdoors, widening the device's range of application.
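A minimal sketch of the difference-image idea referred to here, assuming the ambient light is approximately constant between two consecutive frames (function and variable names are illustrative):

```python
import numpy as np

def difference_image(frame_on: np.ndarray, frame_off: np.ndarray) -> np.ndarray:
    """Subtract a frame taken with the light source off from one taken
    with it on; quasi-static ambient light (e.g. sunlight) cancels out,
    leaving only the component lit by the device's own source."""
    diff = frame_on.astype(np.int16) - frame_off.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```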
In cases where a relatively large installation is acceptable, such as access control for a large-scale facility, or where security must be strictly managed, as in critical facilities, a hybrid authentication device combining the reflected-light device of the present invention with a conventional transmitted-light device may also be used.
Fig. 25 shows an example of a transmission/reflection dual-mode device structure. Fig. 25(a) shows a configuration in which a transmitted-light source 481 is arranged vertically in an access-control authentication device 480. When a user holds the finger 1 over the device from the right side of the drawing, the transmitted light source 481 and the reflected light source 482 emit light simultaneously. The transmitted light source 481 emits infrared light, and the reflected light source 482 emits green and red visible light; the wavelengths of green, red, and infrared are set to 550 nm, 620 nm, and 850 nm, respectively. The imaging device 9 is an RGB color camera with infrared sensitivity and receives both transmitted and reflected light. In this case, from the spectral sensitivity characteristics of the camera, the luminance components at the three wavelengths can be written as follows.
[Numerical formula 8]
Ir = I550 * Wr(550) + I620 * Wr(620) + I850 * Wr(850)   ... (Formula 8)
[Numerical formula 9]
Ig = I550 * Wg(550) + I620 * Wg(620) + I850 * Wg(850)   ... (Formula 9)
[Numerical formula 10]
Ib = I550 * Wb(550) + I620 * Wb(620) + I850 * Wb(850)   ... (Formula 10)
The symbols are defined as in Formulas 1 to 3 of Example 1. By solving these simultaneous equations, the reflected-light components I550 and I620 and the transmitted-light component I850 can be obtained, and authentication can be performed using each image. It would also be possible to mount one camera for each so that transmitted and reflected light are imaged separately, or to switch the light sources in a time-sharing manner; however, the former increases cost and the latter lowers the frame rate. Capturing light of different wavelengths with one camera and separating the components, as in the present embodiment, therefore allows imaging that is both low-cost and fast. In addition, since the place where the finger is presented is open in this embodiment, even a first-time user can operate the device easily.
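In matrix form, Formulas 8 to 10 are a 3x3 linear system per pixel, so the separation reduces to a single matrix inversion; the sketch below assumes illustrative spectral sensitivity values for W, which in practice would come from the camera's calibration data:

```python
import numpy as np

# Rows: R, G, B channel sensitivities; columns: 550 nm, 620 nm, 850 nm.
# These numbers are illustrative only, not taken from the patent.
W = np.array([[0.10, 0.80, 0.30],
              [0.75, 0.15, 0.25],
              [0.20, 0.05, 0.20]])

def separate_wavelengths(rgb: np.ndarray) -> np.ndarray:
    """rgb: (H, W, 3) captured image -> (H, W, 3) array whose last axis
    holds (I550, I620, I850), i.e. the two reflected components and
    the transmitted infrared component."""
    w_inv = np.linalg.inv(W)
    h, w, _ = rgb.shape
    flat = rgb.reshape(-1, 3).astype(np.float64)  # one (Ir, Ig, Ib) per row
    return (flat @ w_inv.T).reshape(h, w, 3)
```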
Fig. 25(b) shows a mode in which the finger is presented in a space inside the device. The finger 1 is placed between the infrared light source 481 and the imaging device 9; transmitted light is irradiated from above and reflected light from below, so both a transmitted image and a reflected image can be captured. The finger may be inserted from the right side of the figure toward the left, but it may also enter from the side of the device as shown in the lower part of the figure; the sides of the device are left open so as not to impede finger movement. A plurality of photosensors 483 are provided at slightly offset positions on the side from which input is made. Each photosensor responds when the finger passes directly beneath it, so the finger's moving speed, angle, and so on can be sensed from the offsets in the response times of the four sensors shown and the distances between them. From this, the moment at which the finger will pass the center of the camera's field of view is predicted instantaneously, and the imaging device 9 captures the finger at the appropriate timing, as sketched below. The exposure time is made extremely short so that no image blur arises from the finger's movement, and a very strong light pulse is emitted for a correspondingly short time matched to the exposure. With this method, instantaneous authentication is possible simply by passing a finger through at high speed.
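The trigger-timing prediction could be sketched as follows, fitting a straight line through the (time, position) pairs of the four sensor responses; the sensor spacing and camera position in the example are assumptions:

```python
import numpy as np

def predict_capture_time(timestamps, sensor_pos, camera_pos):
    """timestamps: seconds at which each photosensor fired;
    sensor_pos: sensor positions in metres along the insertion path.
    Returns the predicted time at which the finger crosses the camera
    axis, plus the estimated finger speed (least-squares fit)."""
    speed, x0 = np.polyfit(np.asarray(timestamps), np.asarray(sensor_pos), 1)
    return (camera_pos - x0) / speed, speed

# Example: four sensors 2 cm apart, camera axis 5 cm past the first sensor.
t_cap, v = predict_capture_time([0.000, 0.010, 0.020, 0.030],
                                [0.00, 0.02, 0.04, 0.06],
                                camera_pos=0.05)
# -> v = 2.0 m/s; trigger the short-exposure flash at t = 0.025 s.
```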
In either configuration, biological features from both transmitted and reflected light can be obtained, so improved accuracy and stronger forgery prevention can be expected. Moreover, clear finger veins can be obtained from the transmitted light, and even when the finger is held at a position that the transmitted light cannot reach, authentication can still be achieved stably from the reflected light, making the device easier to use.
Example 7
Fig. 26 shows an embodiment in which the authentication device of the present invention is applied to a mouse. A left button 501 and a right button 502 are mounted on the mouse 500, and a scroll wheel 503 is mounted at its center. The input device 2 for authentication is provided near the wheel 503 at the center of the mouse 500. The shape of the mouse 500 is curved to fit the hand, so the slope from the fingertip side toward the mouse center is opposite to the slope from the mouse center toward the palm side. While the mouse 500 is held, the finger 1 is close to the input device 2, and thanks to the mouse's inclination, simply extending the finger 1 straight as shown in the figure readily secures the camera distance needed for imaging, so a sufficient area can be captured. The figure shows the index finger extended, but images can of course also be captured by extending another finger, such as the middle finger. The input device 2 may instead be mounted on the left button 501 or the right button 502.
Since a finger can easily be extended while the mouse is held, authentication fits into natural PC operation without impairing the user's convenience in a variety of situations, for example confirming the identity of the logged-in user at the time of an online settlement, or checking the user's access rights for each application.
All of the authentication devices above have been described using fingers as an example, but they may be applied to the skin at any position on a living body, such as the face, ear, palm, back of the hand, wrist, arm, head, or foot. In that case, posture detection processing optimal for each part is required, which can be realized, for example, by collecting a large amount of training data and applying machine learning.
Description of the reference numerals
1, 2: input device; 3: light source; 9: camera; 10: authentication processing section; 11: central processing section; 12: memory; 13: interface; 14: storage device; 15: display section; 16: input section; 17: loudspeaker; 18: image input section; 21: device housing; 31: infrared light source; 32: visible light source; 33: bandpass filter; 61: infrared reflection image; 62: vein; 63: reflection image; 64: fingerprint; 65: joint wrinkle; 66: fat; 100: distance image; 101: brightness distribution; 102: projection of brightness distribution; 121: finger center line; 122: fingertip; 140: finger normal vector; 150: joint; 151: estimated joint location; 152: finger segment center axis; 160: finger cross-section; 161: ellipse; 162: ellipse long diameter; 163: focus pixel; 180: finger profile guide; 181: guide message; 182: finger movement indication arrow; 183: camera distance indication icon; 184: rotation indication icon; 200: reflection image; 201: cut-out image; 220: finger guide mark; 221: finger root guide mark; 240: finger image; 300: finger image; 301: finger average projection brightness distribution; 320: finger average projection brightness distribution; 321: color camera; 341: visible light source; 342: low-pass filter; 360: smartphone; 361: color liquid crystal panel; 362: internal camera; 363: external camera; 364: white light source for flash; 380: hand holding the device; 381: finger profile guide; 400: unnecessary background; 420: finger setup guide; 421: finger position guide; 422: finger lift-off guide; 440, 441: motion-sensing game device; 442: controller; 443: portable game machine; 460: authentication gate; 461: shutter; 462: liquid crystal screen; 463: speaker; 480: door-access authentication device; 481: transmitted light source; 482: reflected light source; 483: photosensor; 500: mouse; 501: left button; 502: right button; 503: wheel.

Claims (10)

1. An authentication device is characterized by comprising:
a storage unit that holds characteristic information relating to a finger;
an instruction unit that instructs correction of a distance from the image pickup unit to the finger and a posture of the finger;
a light source that irradiates light to the finger;
an imaging unit that images light reflected by the finger;
a determination unit that calculates distance information from the imaging unit to the finger based on a brightness value of the image captured by the imaging unit, generates an image centered on the finger, detects a contour of the finger based on a pixel value in the generated image, specifies a center axis of the finger based on the detected contour, detects an estimated position of a joint based on a normal vector of a pixel through which the center axis of the finger passes, specifies a center axis of a segment for each segment of the finger divided by the estimated position of the joint, obtains a skeletal structure of each segment of the finger, and determines a posture of the finger by analyzing the skeletal structure of each segment;
a state control unit that performs a correction instruction of the distance between the imaging unit and the finger and the posture of the finger on the instruction unit based on the determination;
a feature extraction unit that extracts the feature information from the image; and
a comparison unit that compares the similarity between the extracted feature information and the feature information stored in the storage unit.
2. The authentication device of claim 1,
the state control unit performs a correction instruction of the distance between the imaging unit and the finger and the posture of the finger to the instruction unit based on the amount of light irradiated from the light source and the amount of light received by the imaging unit.
3. The authentication apparatus according to claim 2,
the state control unit obtains a relative distance distribution in which a distance at an arbitrary position in the image is 1, and thereby performs a correction instruction of the posture of the finger on the instruction unit.
4. The authentication device according to any one of claims 1 to 3,
further provided with: an unnecessary information removing unit that removes unnecessary information from the image.
5. The authentication device of claim 4,
the unnecessary information removing unit removes the unnecessary information by comparing an image captured by turning on the light source with an image captured by turning off the light source.
6. The authentication device of claim 4,
the light source has a plurality of light sources that irradiate light of respectively different frequencies,
the unnecessary information removing unit removes the unnecessary information by comparing images captured at different frequencies.
7. The authentication device according to claim 5 or 6,
the determination unit, the state control unit, the feature extraction unit, the comparison unit, and the unnecessary information removal unit are executed by one processing device.
8. The authentication apparatus according to claim 7,
the instruction unit is a display unit that instructs correction of the distance between the image pickup unit and the finger and the posture of the finger by an image, or a sound generation unit that instructs correction of the distance between the image pickup unit and the finger and the posture of the finger by a sound.
9. The authentication apparatus according to claim 8,
the characteristic information is information related to veins.
10. An authentication method is characterized by comprising:
an irradiation step of irradiating a finger with light;
a photographing step of photographing light reflected by the finger;
a determination step of calculating distance information from an imaging unit to the finger based on a brightness value of the image captured in the imaging step, generating an image centered on the finger, detecting a contour of the finger based on a pixel value in the generated image, determining a center axis of the finger based on the detected contour, detecting an estimated position of a joint based on a normal vector of a pixel through which the center axis of the finger passes, determining a center axis of a segment for each segment of the finger divided by the estimated position of the joint, obtaining a skeletal structure of each segment of the finger, and determining a posture of the finger by analyzing the skeletal structure of each segment;
a step of instructing a user of the authentication apparatus to correct the distance between the imaging unit and the finger and the posture of the finger based on the determination;
a feature extraction step of extracting feature information from the image; and
a comparison step of comparing the similarity between the extracted feature information and feature information held in advance.
CN201680051560.7A 2015-11-10 2016-10-31 Authentication device and authentication method using biometric information Active CN107949863B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2015-220027 2015-11-10
JP2015220027A JP6523925B2 (en) 2015-11-10 2015-11-10 Authentication apparatus using biometric information and authentication method
PCT/JP2016/082252 WO2017082100A1 (en) 2015-11-10 2016-10-31 Authentication device and authentication method employing biometric information

Publications (2)

Publication Number Publication Date
CN107949863A CN107949863A (en) 2018-04-20
CN107949863B true CN107949863B (en) 2022-01-18

Family

ID=58695250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680051560.7A Active CN107949863B (en) 2015-11-10 2016-10-31 Authentication device and authentication method using biometric information

Country Status (3)

Country Link
JP (1) JP6523925B2 (en)
CN (1) CN107949863B (en)
WO (1) WO2017082100A1 (en)

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7269874B2 (en) * 2016-08-12 2023-05-09 スリーエム イノベイティブ プロパティズ カンパニー How to process multiple regions of interest independently
US11450140B2 (en) 2016-08-12 2022-09-20 3M Innovative Properties Company Independently processing plurality of regions of interest
JP6955369B2 (en) * 2017-05-22 2021-10-27 キヤノン株式会社 Information processing equipment, control methods and programs for information processing equipment
CN107341900B (en) * 2017-07-04 2019-10-22 广州市银科电子有限公司 A kind of bill anti-counterfeit discrimination method based on infrared printing ink mark intelligent recognition
JP6945391B2 (en) * 2017-08-25 2021-10-06 株式会社熊平製作所 Biometric device
KR102459853B1 (en) * 2017-11-23 2022-10-27 삼성전자주식회사 Method and device to estimate disparity
JP6846330B2 (en) * 2017-11-27 2021-03-24 株式会社日立製作所 Biometric device and biometric system
JP7002348B2 (en) * 2018-01-22 2022-02-10 株式会社 日立産業制御ソリューションズ Biometric device
CN108171858A (en) * 2018-02-13 2018-06-15 南京东屋电气有限公司 A kind of automobile door lock with separate type infrared image acquisition device
JP7184538B2 (en) * 2018-05-24 2022-12-06 株式会社日立製作所 Blood vessel imaging device and personal authentication system
CN111970971B (en) * 2018-05-30 2024-08-27 松下知识产权经营株式会社 Identification device and identification method
JP7015216B2 (en) * 2018-06-25 2022-02-02 株式会社日立製作所 Biometric program, biometric method
CN109196523A (en) * 2018-09-03 2019-01-11 深圳市汇顶科技股份有限公司 Multiple light courcess Fingerprint enhancement and synthetic method and related fingerprint sensor
JP7269711B2 (en) 2018-10-03 2023-05-09 株式会社日立製作所 Biometric authentication system, biometric authentication method and program
US10678342B2 (en) * 2018-10-21 2020-06-09 XRSpace CO., LTD. Method of virtual user interface interaction based on gesture recognition and related device
JP7534841B2 (en) * 2019-01-29 2024-08-15 株式会社 日立産業制御ソリューションズ Biometric authentication device, biometric authentication method, and computer program
JP7079742B2 (en) * 2019-02-08 2022-06-02 株式会社日立製作所 Computer system
JP6887167B2 (en) * 2019-04-01 2021-06-16 株式会社マーケットヴィジョン Image processing system
CN112037399B (en) * 2019-05-17 2023-04-07 杭州海康威视数字技术股份有限公司 Control method of gate passing indication, gate equipment and system
CN111723630B (en) * 2019-05-28 2023-11-14 电装智能科技(上海)有限公司 Finger vein authentication device, finger vein authentication system, and authentication method thereof
CN112348899A (en) * 2019-08-07 2021-02-09 虹软科技股份有限公司 Calibration parameter obtaining method and device, processor and electronic equipment
CN110533010A (en) * 2019-10-10 2019-12-03 中国计量大学 Adaptive light source finger vein image acquisition system
JP7469871B2 (en) * 2019-12-04 2024-04-17 株式会社日立製作所 Photography and authentication devices
CN111160253A (en) * 2019-12-30 2020-05-15 业成科技(成都)有限公司 Biological identification module, wearable equipment and mobile terminal
WO2021234926A1 (en) 2020-05-21 2021-11-25 富士通株式会社 Authentication method, authentication program, and authentication device
JP7519871B2 (en) 2020-10-21 2024-07-22 株式会社日立製作所 Biometric authentication device and method
JP7507659B2 (en) 2020-11-06 2024-06-28 株式会社日立製作所 Photographing device, authentication device, and biometric photographing method
CN114468994B (en) * 2021-02-11 2023-02-28 先阳科技有限公司 Tissue component measuring method and device and wearable equipment
JP7470069B2 (en) 2021-02-17 2024-04-17 株式会社日立製作所 Pointing object detection device, pointing object detection method, and pointing object detection system
US20220300593A1 (en) * 2021-03-16 2022-09-22 Silk Id Systems Inc. System and method of biometric identification of a subject
WO2022219929A1 (en) * 2021-04-13 2022-10-20 ソニーグループ株式会社 Authentication system, authentication device, and authentication method
CN115082969A (en) * 2022-05-19 2022-09-20 深圳市联谛信息无障碍有限责任公司 Biological identification method and device and electronic equipment
TWI821096B (en) * 2023-01-03 2023-11-01 大陸商北京集創北方科技股份有限公司 Fingerprint identification method, fingerprint identification device, and information processing device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3728386B2 (en) * 1999-03-08 2005-12-21 沖電気工業株式会社 Iris recognition device
KR100999989B1 (en) * 2006-06-08 2010-12-10 후지쯔 가부시끼가이샤 Guidance device and method
JP2009301333A (en) * 2008-06-13 2009-12-24 Olympus Corp Imaging device and personal authentication device
JP5816682B2 (en) * 2011-04-22 2015-11-18 株式会社日立製作所 Blood vessel imaging device and biometric authentication device
JP2015001843A (en) * 2013-06-17 2015-01-05 日立オムロンターミナルソリューションズ株式会社 Biometric authentication device
CN104102913B (en) * 2014-07-15 2018-10-16 无锡优辰电子信息科技有限公司 Wrist vena identification system

Also Published As

Publication number Publication date
WO2017082100A1 (en) 2017-05-18
CN107949863A (en) 2018-04-20
JP2017091186A (en) 2017-05-25
JP6523925B2 (en) 2019-06-05

Similar Documents

Publication Publication Date Title
CN107949863B (en) Authentication device and authentication method using biometric information
JP6353927B2 (en) Blood vessel imaging device and personal authentication system
KR102587193B1 (en) System and method for performing fingerprint-based user authentication using images captured using a mobile device
JP7242528B2 (en) Systems and methods for performing fingerprint user authentication using images captured using mobile devices
US8855376B2 (en) Finger vein authentication device
EP2848196B1 (en) Apparatus for blood vessel imaging
JP5295848B2 (en) Personal authentication device
CN112639871B (en) Biometric authentication system, biometric authentication method, and recording medium
KR102679397B1 (en) Biometric authentication apparatus and biometric authentication method
JP2014180435A (en) Biological information input device, biological information input program, and biological information input method
GB2598016A (en) Biometric authentication apparatus and biometric authentication method
KR20150139183A (en) Wrist-type wearable device for vein recognition
US8977009B2 (en) Biometric authentication device, biometric authentication program, and biometric authentication method
JP2012248079A (en) Biological information processing apparatus, biological information processing method and biological information processing program
CN106933341B (en) Method and device for determining region of finger in image and wrist type equipment
JP2013225324A (en) Personal authentication device, image processor, terminal and system
JP2013205931A (en) Biological information acquisition device, biological information acquisition method, biological information acquisition control program
JP6759142B2 (en) Biometric device and method
US20230359717A1 (en) Biometric authentication system, authentication terminal, and authentication method
JP2018081469A (en) Blood vessel image pickup apparatus and personal authentication system
KR20220037473A (en) Shooting device and authentication device
JP5528172B2 (en) Face image processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant