WO2018219012A1 - Three-dimensional vein recognition device and method, switch, and mobile terminal - Google Patents

Three-dimensional vein recognition device and method, switch, and mobile terminal

Info

Publication number
WO2018219012A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
vein
dimensional vein
image
vein image
Prior art date
Application number
PCT/CN2018/078993
Other languages
English (en)
French (fr)
Inventor
李向明
Original Assignee
燕南国创科技(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 燕南国创科技(北京)有限公司
Publication of WO2018219012A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/14 - Vascular patterns

Definitions

  • the present invention relates to the field of security authentication technologies, and in particular, to a three-dimensional vein identification device and method, a switch, and a mobile terminal.
  • biometric recognition technologies such as fingerprints, face, iris, voiceprint and finger vein have been widely used.
  • finger vein recognition technology offers strong security and convenience. Because finger veins are living tissue inside the human body, they are difficult to forge or steal. In addition, finger veins are located inside the body and are not affected by hot or cold environments, dry or wet fingers, or scratches. Each person's finger vein structure is different, and this difference does not change with age. Finger vein recognition technology therefore stands out among the many biometric technologies.
  • the existing finger vein recognition technology uses a two-dimensional recognition method, that is, a way of recognizing the planar projection of the finger vein.
  • the structure of the finger vein is three-dimensional, and its projection onto a two-dimensional plane is affected by factors such as the posture of the finger and its distance from the camera, which directly causes the finger vein image captured in each shot to differ considerably, thereby causing the problem that the user cannot be identified accurately.
  • as users' requirements for recognition accuracy rise, the existing two-dimensional finger vein identification method cannot meet these high requirements.
  • the embodiment of the invention provides a three-dimensional vein recognition device and method, a switch, and a mobile terminal.
  • a three-dimensional vein recognition device comprising: a light source, a microlens array, an image sensor, a three-dimensional vein image processor, and a three-dimensional vein recognizer, the microlens array comprising an outer surface, an inner surface, and an edge portion, the light source being arranged on one side of the edge portion, the image sensor being arranged on one side of the inner surface, the image sensor being connected to the three-dimensional vein image processor, and the three-dimensional vein image processor being connected to the three-dimensional vein recognizer, wherein:
  • the light source is configured to provide light
  • the microlens array is configured to receive the reflected light formed when the light strikes the veins of the recognition object on the outer-surface side, and to generate two or more two-dimensional vein images from the reflected light;
  • the image sensor is configured to receive the two or more two-dimensional vein images, and to transmit the received two-dimensional vein images to the three-dimensional vein image processor;
  • the three-dimensional vein image processor is configured to perform angle and depth conversion processing on the two-dimensional vein images to obtain vein depths, and to convert two or more two-dimensional vein images into one three-dimensional vein image based on two or more vein depths;
  • the three-dimensional vein recognizer is configured to match the three-dimensional vein image with the three-dimensional vein image template, and when the three-dimensional vein image is successfully matched with the three-dimensional vein image template, a recognition success signal is issued.
  • a three-dimensional vein recognition method includes the following steps:
  • the three-dimensional vein image processor is used to perform angle and depth conversion on the two-dimensional vein images to obtain vein depths, and two or more two-dimensional vein images are converted into one three-dimensional vein image based on two or more vein depths;
  • the three-dimensional vein image is matched with the three-dimensional vein image template by using the three-dimensional vein recognizer, and when the three-dimensional vein image is successfully matched with the three-dimensional vein image template, a recognition success signal is issued.
  • a three-dimensional vein recognition device includes:
  • a memory for storing a program, and a processor for executing the program stored in the memory, the program causing the processor to perform the method of the second aspect described above.
  • a computer readable storage medium stores instructions that, when executed on a computer, cause the computer to perform the methods described in the various aspects above.
  • the switch comprises the device of the first aspect.
  • the mobile device includes the devices of the first, third and fifth aspects.
  • the user's finger can be illuminated by the light source and reflected, and the reflected light illuminates the microlens array to generate two or more two-dimensional vein images, and the two-dimensional vein image received by the image sensor is transmitted to the three-dimensional vein image processor.
  • the three-dimensional vein image processor performs angle and depth transformation on the two-dimensional vein image to obtain the vein depth, and converts two or more two-dimensional vein images into one three-dimensional vein image based on two or more vein depths. Then, the three-dimensional vein image is matched with the three-dimensional vein image template by using the three-dimensional vein recognizer, thereby realizing high-accuracy three-dimensional vein recognition.
  • FIG. 1 is a schematic structural view of a three-dimensional vein recognition device according to an embodiment of the present invention.
  • FIG. 2 is a schematic structural view of a three-dimensional vein recognition device according to another embodiment of the present invention.
  • FIG. 3 is a schematic structural view of a three-dimensional vein recognition device according to still another embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a three-dimensional vein image processing optical path according to an embodiment of the present invention.
  • FIG. 5 is a schematic flow chart of a three-dimensional vein recognition method according to an embodiment of the present invention.
  • Fig. 6 is a schematic framework diagram of a three-dimensional vein recognition device according to an embodiment of the present invention.
  • FIG. 1 is a schematic structural view of a three-dimensional vein recognition device according to an embodiment of the present invention.
  • the application scenario of this embodiment may be: using the device to collect vein information of a user's finger, wrist, palm, eyelid, cheek, etc., thereby identifying the identity of the user.
  • the device can be used as a smart switch of a user-wearable smart device (such as an Augmented Reality (AR) helmet, a smart watch, or a smart bracelet): when the user's identity is recognized successfully (i.e., the collected three-dimensional vein matches the pre-stored three-dimensional vein), the wearable smart device is turned on.
  • the device can also act as a smart switch for other electronic devices such as smart phones, televisions, and vehicles.
  • the three-dimensional vein recognition device may include the following components: a light source 1, a microlens array 2, an image sensor 3, a three-dimensional vein image processor 4, and a three-dimensional vein recognizer 5.
  • the structure, position, and connection relationships of the components in the three-dimensional vein recognition device can be as follows:
  • the microlens array 2 may include an outer surface and an inner surface.
  • the image sensor 3 is placed on one side of the inner surface.
  • the image sensor 3 is connected to the three-dimensional vein image processor 4.
  • the three-dimensional vein image processor 4 is connected to the three-dimensional vein recognizer 5.
  • the light source 1 can be configured to provide light; the microlens array 2 can be configured to receive the reflected light formed when the light from the light source strikes the veins of the recognition object on the outer-surface side, and to generate two or more two-dimensional vein images from the reflected light;
  • the image sensor 3 may be configured to receive the two or more two-dimensional vein images, and to transmit the received two-dimensional vein images to the three-dimensional vein image processor;
  • the three-dimensional vein image processor 4 may be configured to perform angle and depth conversion processing on the two-dimensional vein images to obtain vein depths, and to convert two or more two-dimensional vein images into one three-dimensional vein image based on two or more vein depths;
  • the three-dimensional vein recognizer 5 may be configured to match the three-dimensional vein image with the three-dimensional vein image template and, when the three-dimensional vein image is successfully matched with the three-dimensional vein image template, to issue a recognition success signal.
  • the light source 1 may be a near-infrared light emitter, such as an infrared diode or an infrared light bulb. The number of light sources 1 may be one, two, or more, and two or more light sources 1 may be disposed on one side of the edge portion or on two or more sides.
  • for example, a plurality of light sources may all be disposed on one side of the microlens array 2.
  • alternatively, a plurality of light sources may be distributed around the microlens array 2. The arrangement can be set flexibly according to the actual situation.
  • a preferred placement of the light source 1 is one in which the emitted light fully illuminates the user's finger and the reflected light falls on the microlens array 2.
  • the microlens array 2 may be an array of several (e.g., 10,000) microlenses.
  • the clear aperture and relief depth of the microlens can reach the micron level.
  • the microlens array 2 can have not only basic functions such as focusing and imaging of a conventional lens, but also features such as small size and high integration.
  • the image sensor 3 may be a Charge-coupled Device (CCD).
  • the image sensor 3 can convert the optical signal into a charge signal and transmit the converted charge signal to the three-dimensional vein image processor 4.
  • the three-dimensional vein image processor 4 may be a Micro Control Unit (MCU).
  • the three-dimensional vein recognizer 5 can match the three-dimensional vein image processed by the three-dimensional vein image processor 4 against the three-dimensional vein image template (a pre-stored three-dimensional vein image of the user) and, when the match succeeds, issue a recognition success signal.
  • the recognition success signal may be a signal to turn on a device, such as the user's wearable smart device.
  • the device's implementation of identifying the user's identity can be as follows:
  • the light source emits near-infrared light, illuminates the user's finger, and the reflected light is projected onto the image sensor 3 through the microlens array 2.
  • the image sensor 3 records the r-to-θ information conversion performed by the microlens array.
  • the three-dimensional vein image processor 4 receives the information transmitted by the image sensor 3 and performs the inverse θ-to-r conversion to restore the three-dimensional image of the veins.
  • r can be the length along the z-axis direction in the three-dimensional image.
  • θ can be the angle of the image relative to the microlens.
  • the three-dimensional vein recognizer 5 can perform feature extraction on the three-dimensional vein image restored by the three-dimensional vein image processor 4, and compare it with the pre-stored three-dimensional vein image template to give a recognition result. Among them, the image processing process of the three-dimensional vein image processor 4 will be described in further detail below.
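For orientation, the workflow just described can be outlined in code. The sketch below is an illustration only: every callable (capture_lens_images, angles_to_depths, build_3d_image, extract_features, match_score) and the acceptance threshold are hypothetical stand-ins, not components named in the patent.

    MATCH_THRESHOLD = 0.9  # illustrative acceptance threshold; not specified in the patent


    def recognize(capture_lens_images, angles_to_depths, build_3d_image,
                  extract_features, match_score, template_features):
        """Illustrative outline of the recognition flow described above.

        Every callable is a hypothetical stand-in:
          capture_lens_images() -> list of 2D vein images, one per microlens view
          angles_to_depths(images) -> list of vein depths (the theta-to-r inverse conversion)
          build_3d_image(images, depths) -> one 3D vein image
          extract_features(volume) -> feature vector
          match_score(features, template_features) -> similarity in [0, 1]
        """
        images = capture_lens_images()           # reflected NIR light recorded through the lens array
        depths = angles_to_depths(images)        # angle and depth conversion per microlens
        volume = build_3d_image(images, depths)  # two or more 2D images -> one 3D vein image
        score = match_score(extract_features(volume), template_features)
        return score >= MATCH_THRESHOLD          # True -> issue the recognition-success signal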
  • thus, in the embodiment of the present invention, the light provided by the light source illuminates the user's finger and is reflected; the reflected light strikes the microlens array to generate two or more two-dimensional vein images, which the image sensor receives and transmits to the three-dimensional vein image processor;
  • the three-dimensional vein image processor performs angle and depth conversion on the two-dimensional vein images to obtain vein depths and converts two or more two-dimensional vein images into one three-dimensional vein image based on two or more vein depths;
  • the three-dimensional vein image is then matched with the three-dimensional vein image template by the three-dimensional vein recognizer, thereby realizing high-accuracy three-dimensional vein recognition.
  • the embodiments of the present invention can be designed as a compact device that can be carried around and meets the needs of wearable smart devices.
  • FIG. 2 is a schematic structural view of a three-dimensional vein recognition device according to another embodiment of the present invention.
  • the application scenario of the embodiment of Fig. 2 is that the device recognizes the identity of the user by acquiring the vein inside the finger 6 of the user.
  • the difference between the embodiment of the present invention and the embodiment shown in FIG. 1 is that the embodiment of the present invention adds a light source 1 and a vein collection table 7 on the basis of the embodiment of FIG. 1.
  • the light source 1 and the vein collection table 7 may be disposed on the outer-surface side of the microlens array 2, that is, above the microlens array 2 shown in FIG. 2.
  • the venous collection table 7 can be made of a light transmissive material, such as transparent plastic or glass, which is very thin and negligible.
  • the vein collection table 7 carries the user's finger 6. The infrared light emitted from the light source 1 is transmitted through the vein collecting table 7 to the finger 6 of the user, and is reflected on the microlens array 2.
  • by providing the vein collection table 7, the user's finger 6 can be positioned so that its position and distance relative to the microlens array 2 are fixed, and a relatively stable two-dimensional vein image can be obtained, ensuring accurate acquisition of the vein depth later on and further improving the recognition accuracy of the subsequent three-dimensional vein.
  • FIG. 3 is a schematic structural view of a three-dimensional vein recognition device according to still another embodiment of the present invention.
  • the difference between the embodiment of the present invention and the embodiment shown in FIG. 2 is that the three-dimensional vein image processor 4 is split into a depth calculation component 41 and an image conversion component 42.
  • a light source intensity controller 8 is added.
  • the depth calculation element 41 is connected to the image sensor 3 and the image conversion element 42, respectively.
  • the image conversion element 42 is connected to the three-dimensional vein recognizer 5.
  • the depth calculation component 41 is configured to convert the boundary angle difference between the upper boundary and the lower boundary of the recognition object, relative to a microlens, into a vein depth, based on the microlens parameters of the microlenses in the microlens array;
  • the image conversion component 42 is configured to construct a three-dimensional vein image based on two or more vein depths and two or more two-dimensional vein images. The construction of the three-dimensional vein image is also described further below.
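As a rough illustration of what such an image conversion element might do, the sketch below places each two-dimensional vein image at its recovered vein depth to form a simple point cloud. The point-cloud representation, the pixel pitch, and the binarization threshold are assumptions made for illustration; the patent does not specify the internal data format.

    import numpy as np


    def build_vein_point_cloud(images, depths, pixel_pitch=1.0, intensity_threshold=0.5):
        """Stack two or more 2D vein images at their recovered vein depths.

        images:  list of 2D arrays (normalized vein images, one per view)
        depths:  list of vein depths H, one per image, from the depth calculation element
        Returns an (N, 4) array of (x, y, z, intensity) points for pixels that look like veins.
        """
        points = []
        for img, depth in zip(images, depths):
            img = np.asarray(img, dtype=float)
            ys, xs = np.nonzero(img > intensity_threshold)   # keep likely vein pixels
            zs = np.full(xs.shape, depth, dtype=float)       # place the slice at its depth
            points.append(np.column_stack([xs * pixel_pitch, ys * pixel_pitch, zs, img[ys, xs]]))
        return np.vstack(points)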
  • the light source intensity controller 8 is connected to the light source 1.
  • the light source intensity controller 8 can be configured to control the intensity of the current input to the light source 1 and/or the intensity of the voltage.
  • the term "and/or" in this context is merely an association describing the associated object, indicating that there may be three relationships, for example, A and / or B, which may indicate that A exists separately, and both A and B exist, respectively. B these three situations.
  • the embodiment of the present invention can control the intensity of the current and/or the voltage fed to the light source 1 through the light source intensity controller 8, so that the light emitted by the light source 1 can be adjusted; with a better light intensity, a two-dimensional vein image of better clarity can be obtained, ensuring accurate acquisition of the vein depth later on and further improving the recognition accuracy of the subsequent three-dimensional vein.
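One plausible software-side counterpart of such a controller is a simple feedback loop that nudges the drive current until the captured image reaches a target brightness. The function, the current limits, and the gain below are illustrative assumptions; the patent only states that the controller adjusts the current and/or voltage fed to the light source.

    def adjust_source_current(current_ma, mean_brightness, target=128.0,
                              gain=0.05, min_ma=5.0, max_ma=50.0):
        """Proportional adjustment of the NIR source drive current (illustrative only).

        current_ma:       present drive current in milliamps (assumed range)
        mean_brightness:  mean pixel value of the latest 2D vein image (0-255)
        Returns the next drive current, clamped to the assumed safe range.
        """
        error = target - mean_brightness
        next_ma = current_ma + gain * error
        return max(min_ma, min(max_ma, next_ma))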
  • the depth calculation component is further configured to obtain the vein depth H based on the microlens diameter R, the microlens number N, the object distance L, and the object boundary angle difference Δα; the vein depth H is equal to the ratio of a first product to a second product, where the first product equals the square of L multiplied by Δα, the second product equals N multiplied by R, N is a natural number, and N is less than or equal to the number of microlenses in the microlens array.
  • the microlens parameters may include: a microlens diameter R, a microlens number N, an object distance L, and an object boundary angle difference Δα, where N is a natural number and N is less than or equal to the number of microlenses in the microlens array 2.
  • the vein depth H is equal to the ratio of the first product to the second product, wherein the first product is equal to the square of L multiplied by ⁇ , and the second product is equal to N times R.
  • the venous depth H aspect will be further described below.
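In code, the stated relation H = (L² × Δα) / (N × R) is a one-line computation. The sketch below assumes Δα is given in radians and that all lengths share one unit; those conventions are not spelled out in the patent.

    def vein_depth(lens_diameter_r, lens_index_n, object_distance_l, boundary_angle_diff_rad):
        """Vein depth H = (L**2 * delta_alpha) / (N * R), as stated above.

        lens_index_n must be a positive integer no larger than the number of
        microlenses in the array; delta_alpha is the boundary angle difference in radians.
        """
        if lens_index_n <= 0:
            raise ValueError("microlens number N must be a positive integer")
        return (object_distance_l ** 2 * boundary_angle_diff_rad) / (lens_index_n * lens_diameter_r)


    # Example with assumed values: L = 10 mm, R = 0.05 mm, N = 20, delta_alpha = 0.001 rad
    # gives H = (100 * 0.001) / (20 * 0.05) = 0.1 mm.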
  • FIG. 4 is a schematic diagram of a three-dimensional vein image processing optical path according to an embodiment of the present invention.
  • This embodiment will take the imaging of the user's finger 6 with respect to one microlens in the microlens array 2 as an example, and specifically explain the mutual transformation relationship between the vein depth H and the object boundary angle difference ⁇ . It can be understood by those skilled in the art that the imaging of several other microlenses can be implemented by referring to this method. For the sake of brevity, the details are not described herein.
  • the user's finger 6 is placed on the vein collecting table 7, and the thickness of the user's finger 6 can be expressed by the vein depth H.
  • the distance between the venous collection table 7 and the microlens array 2 can be expressed by the object distance L.
  • the reference numerals of the lenses in the microlens array 2 can be represented by a microlens number N. Where N is a natural number and N is less than or equal to the number of microlenses in the microlens array 2.
  • the microlens parallel to the Z axis is numbered 0, and the uppermost microlens is numbered n.
  • the microlens diameter can be represented by R.
  • the angle of the upper boundary of the user's finger 6 with respect to the microlens numbered n may be α'n, and the angle of the lower boundary with respect to the microlens numbered n may be αn.
  • the angular difference between ⁇ 'n and ⁇ n can be expressed by the object boundary angle difference ⁇ .
  • in the embodiment of the present invention, H << L and R << L, so tan αn ≈ αn and tan α'n ≈ α'n.
  • it follows that Δα is related only to the vein depth H and the microlens number n (i.e., the position of the microlens that the upper-boundary ray reaches). Therefore, the vein depth H of the three-dimensional user's finger 6 is converted by the microlens array 2 into two-dimensional angle information.
  • thus, based on the microlens parameters (n, H, L, etc.) of the microlenses in the microlens array, the boundary angle difference Δα of the upper and lower boundaries of the recognition object with respect to a microlens can be converted to and from the vein depth H.
  • the vein depth H is much larger than the focal length F of the microlens, so the projected images at different depth points on the three-dimensional object will be approximately imaged on the back focal plane of the microlens.
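For reference, the geometry of FIG. 4 yields the stated relation as follows; this is a reconstruction consistent with the approximations H much smaller than L and R much smaller than L used above, not a formula quoted verbatim from the publication:

    \tan\alpha_n = \frac{nR}{L}, \qquad \tan\alpha'_n = \frac{nR}{L+H}

    \Delta\alpha = \alpha_n - \alpha'_n \approx \frac{nR}{L} - \frac{nR}{L+H}
                 = \frac{nRH}{L(L+H)} \approx \frac{nRH}{L^{2}}
    \quad\Rightarrow\quad H \approx \frac{L^{2}\,\Delta\alpha}{nR}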
  • the three-dimensional vein image processor 4 is further configured to rotate the three-dimensional vein image and to match the rotated three-dimensional vein image with the three-dimensional vein image template; when the three-dimensional vein image is successfully matched with the three-dimensional vein image template, a recognition success signal is issued.
  • thus, the three-dimensional vein image can be rigidly rotated by a certain angle (for example, 2 degrees in the forward direction) and features can then be extracted from the rotated three-dimensional vein image, so that the user can still be recognized accurately even if the finger is slightly deflected.
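A minimal sketch of such rotation-tolerant matching follows, assuming the three-dimensional vein image is held as an (N, 3) point cloud and that a hypothetical match_score function compares two clouds; the set of trial angles and the scoring are illustrative choices, not details from the patent.

    import numpy as np


    def rotation_z(degrees):
        """Rigid rotation about the z axis by the given angle."""
        t = np.radians(degrees)
        return np.array([[np.cos(t), -np.sin(t), 0.0],
                         [np.sin(t),  np.cos(t), 0.0],
                         [0.0,        0.0,       1.0]])


    def best_match_over_rotations(points, template_points, match_score,
                                  angles=(-4, -2, 0, 2, 4)):
        """Rotate the captured 3D vein points by a few small angles (e.g. +2 degrees)
        and keep the best similarity against the stored template."""
        return max(match_score(points @ rotation_z(a).T, template_points) for a in angles)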
  • FIG. 5 is a schematic flow chart of a three-dimensional vein recognition method according to an embodiment of the present invention.
  • the method may include the following steps: S510, receiving, by the microlens array, the reflected light formed when light from the light source strikes the veins of the recognition object on the outer-surface side of the microlens array, and generating two or more two-dimensional vein images from the reflected light;
  • S520, receiving the two-dimensional vein images by the image sensor and transmitting them to the three-dimensional vein image processor; S530, performing angle and depth conversion processing on the two-dimensional vein images by the three-dimensional vein image processor to obtain vein depths, and converting two or more two-dimensional vein images into one three-dimensional vein image based on two or more vein depths; S540, matching the three-dimensional vein image with the three-dimensional vein image template by the three-dimensional vein recognizer, and issuing a recognition success signal when the match succeeds.
  • the above step S530 may include the following sub-steps: S531, converting the boundary angle difference of the upper and lower boundaries of the recognition object with respect to a microlens into a vein depth, based on the microlens parameters of the microlenses in the microlens array; S532, constructing one three-dimensional vein image based on two or more vein depths and two or more two-dimensional vein images.
  • the microlens parameters may include: a microlens diameter R, a microlens number N, an object distance L, and an object boundary angle difference Δα, where N is a natural number and N is less than or equal to the number of microlenses in the microlens array 2.
  • the vein depth H is equal to the ratio of the first product to the second product, wherein the first product is equal to the square of L multiplied by ⁇ , and the second product is equal to N times R.
  • the method may further include, based on the various embodiments described above, controlling the intensity of the current input to the light source and/or the intensity of the voltage.
  • the method may further include: rotating the three-dimensional vein image and matching the rotated three-dimensional vein image with the three-dimensional vein image template; when the three-dimensional vein image is successfully matched with the three-dimensional vein image template, a recognition success signal is issued.
  • a three-dimensional vein recognition device can include a memory and a processor, wherein the memory is configured to store a program and the processor is configured to execute the program stored in the memory, the program causing the processor to perform any of the three-dimensional vein recognition methods described above.
  • Fig. 6 is a schematic framework diagram of a three-dimensional vein recognition device according to an embodiment of the present invention.
  • the framework may include a central processing unit (CPU) 601, which can perform the various operations of the embodiment of FIG. 5 according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603.
  • the RAM 603 also stores various programs and data required for the operation of the system architecture.
  • the CPU 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
  • An input/output (I/O) interface 605 is also coupled to bus 604.
  • the following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, etc.; an output portion 607 including, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), and the like, and a storage portion 608 including a hard disk or the like. And a communication portion 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the Internet.
  • Driver 610 is also coupled to I/O interface 605 as needed.
  • a removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory or the like, is mounted on the drive 610 as needed so that a computer program read therefrom is installed into the storage portion 608 as needed.
  • an embodiment of the invention includes a computer program product comprising a computer program tangibly embodied on a machine readable medium, the computer program comprising program code for executing the method illustrated in the flowchart.
  • the computer program can be downloaded and installed from the network via communication portion 609, and/or installed from removable media 611.
  • the present invention also provides a mobile terminal, which may include any of the above-described three-dimensional vein recognition devices.
  • the mobile terminal can be a smart phone.
  • the user's vein information can be collected on the back of the smartphone to authenticate the user.
  • the above embodiments may be implemented in whole or in part by software, hardware, or the like.
  • when implemented in software, they may be implemented in whole or in part in the form of a computer program product.
  • the device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Input (AREA)

Abstract

A three-dimensional vein recognition device and method, a switch, and a mobile terminal are disclosed. The device includes: a light source, a microlens array, an image sensor, a three-dimensional vein image processor, and a three-dimensional vein recognizer. The microlens array includes an outer surface, an inner surface, and an edge portion; the light source is disposed on one side of the edge portion; the image sensor is disposed on one side of the inner surface; the image sensor is connected to the three-dimensional vein image processor; and the three-dimensional vein image processor is connected to the three-dimensional vein recognizer. In embodiments, light from the light source illuminates the user's finger and is reflected; the reflected light strikes the microlens array to generate two or more two-dimensional vein images; the three-dimensional vein image processor performs angle and depth conversion on the two-dimensional vein images to obtain vein depths and, based on two or more vein depths, converts two or more two-dimensional vein images into one three-dimensional vein image, thereby realizing high-accuracy three-dimensional vein recognition.

Description

Three-dimensional vein recognition device and method, switch, and mobile terminal
Technical Field
The present invention relates to the field of security authentication technologies, and in particular to a three-dimensional vein recognition device and method, a switch, and a mobile terminal.
Background Art
With the rapid development of security authentication technology, biometric recognition technologies such as fingerprint, face, iris, voiceprint, and finger vein recognition have been widely used. Among them, finger vein recognition offers strong security and convenience. Because finger veins are living tissue inside the human body, they are difficult to forge or steal. In addition, finger veins are located inside the body and are not affected by hot or cold environments, dry or wet fingers, or scratches. Each person's finger vein structure is different, and this difference does not change with age. Finger vein recognition therefore stands out among the many biometric technologies.
The applicant has been devoted to the study of vein recognition technology and found during this research that the existing finger vein recognition technology uses a two-dimensional recognition method, i.e., a method of recognizing the planar projection of the finger veins. However, the finger vein structure is three-dimensional, and its projection onto a two-dimensional plane is affected by factors such as the posture of the finger and its distance from the camera, which directly causes the finger vein images captured each time to differ considerably and, in turn, causes the problem that the user's identity cannot be recognized accurately. As users' requirements for recognition accuracy rise, the existing two-dimensional finger vein recognition method cannot meet these high requirements.
How to improve the vein recognition rate has become a technical problem to be solved urgently.
Summary of the Invention
In order to solve the problem of a relatively low vein recognition rate, embodiments of the present invention provide a three-dimensional vein recognition device and method, a switch, and a mobile terminal.
A three-dimensional vein recognition device. The device includes: a light source, a microlens array, an image sensor, a three-dimensional vein image processor, and a three-dimensional vein recognizer. The microlens array includes an outer surface, an inner surface, and an edge portion; the light source is disposed on one side of the edge portion; the image sensor is disposed on one side of the inner surface; the image sensor is connected to the three-dimensional vein image processor; and the three-dimensional vein image processor is connected to the three-dimensional vein recognizer, wherein:
the light source is configured to provide light;
the microlens array is configured to receive the reflected light formed when the light strikes the veins of the recognition object on the outer-surface side, and to generate two or more two-dimensional vein images from the reflected light;
the image sensor is configured to receive the two or more two-dimensional vein images and transmit the received two-dimensional vein images to the three-dimensional vein image processor;
the three-dimensional vein image processor is configured to perform angle and depth conversion processing on the two-dimensional vein images to obtain vein depths, and to convert two or more two-dimensional vein images into one three-dimensional vein image based on two or more vein depths;
the three-dimensional vein recognizer is configured to match the three-dimensional vein image with a three-dimensional vein image template and, when the three-dimensional vein image is successfully matched with the three-dimensional vein image template, to issue a recognition success signal.
A three-dimensional vein recognition method. The method includes the following steps:
receiving, by a microlens array, the reflected light formed when light from a light source strikes the veins of a recognition object on the outer-surface side of the microlens array, and generating two or more two-dimensional vein images from the reflected light;
receiving the two-dimensional vein images by an image sensor and transmitting them to a three-dimensional vein image processor;
performing angle and depth conversion processing on the two-dimensional vein images by the three-dimensional vein image processor to obtain vein depths, and converting two or more two-dimensional vein images into one three-dimensional vein image based on two or more vein depths;
matching the three-dimensional vein image with a three-dimensional vein image template by a three-dimensional vein recognizer and, when the three-dimensional vein image is successfully matched with the three-dimensional vein image template, issuing a recognition success signal.
A three-dimensional vein recognition device. The device includes:
a memory for storing a program;
a processor for executing the program stored in the memory, the program causing the processor to perform the method of the second aspect described above.
A computer-readable storage medium. The computer-readable storage medium stores instructions that, when run on a computer, cause the computer to perform the methods described in the above aspects.
A switch for a smart device. The switch includes the device of the first aspect.
A mobile terminal. The mobile terminal includes the devices described in the first, third, and fifth aspects.
In embodiments of the present invention, light from the light source illuminates the user's finger and is reflected; the reflected light strikes the microlens array to generate two or more two-dimensional vein images, which the image sensor receives and transmits to the three-dimensional vein image processor; the three-dimensional vein image processor performs angle and depth conversion on the two-dimensional vein images to obtain vein depths and converts two or more two-dimensional vein images into one three-dimensional vein image based on two or more vein depths; the three-dimensional vein recognizer then matches the three-dimensional vein image with the three-dimensional vein image template, thereby realizing high-accuracy three-dimensional vein recognition.
Brief Description of the Drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the specification; together with the embodiments of the present invention, they serve to explain the invention and do not limit it.
FIG. 1 is a schematic structural diagram of a three-dimensional vein recognition device according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a three-dimensional vein recognition device according to another embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a three-dimensional vein recognition device according to yet another embodiment of the present invention;
FIG. 4 is a schematic diagram of a three-dimensional vein image processing optical path according to an embodiment of the present invention;
FIG. 5 is a schematic flowchart of a three-dimensional vein recognition method according to an embodiment of the present invention;
FIG. 6 is a schematic framework diagram of a three-dimensional vein recognition device according to an embodiment of the present invention.
Detailed Description of the Embodiments
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described here are intended only to illustrate and explain the present invention and are not intended to limit it.
FIG. 1 is a schematic structural diagram of a three-dimensional vein recognition device according to an embodiment of the present invention.
An application scenario of this embodiment may be: using the device to collect vein information from the user's finger, wrist, palm, eyelid, cheek, or other body parts, so as to identify the user. For brevity, the user's finger is used below as an example; it can be understood that vein information from the wrist, palm, eyelid, cheek, and so on is equally applicable to the device. The device can serve as a smart switch of a user-wearable smart device (for example, an Augmented Reality (AR) helmet, a smart watch, or a smart bracelet): when the user's identity is recognized successfully (i.e., the collected three-dimensional vein matches the pre-stored three-dimensional vein), the wearable smart device is turned on. Those skilled in the art will understand that the device can also serve as a smart switch for other electronic devices, such as smartphones, televisions, and vehicles.
As shown in FIG. 1, the three-dimensional vein recognition device may include the following components: a light source 1, a microlens array 2, an image sensor 3, a three-dimensional vein image processor 4, and a three-dimensional vein recognizer 5. The structure, position, and connection relationships of the components in the three-dimensional vein recognition device can be as follows:
The microlens array 2 may include an outer surface and an inner surface. The image sensor 3 is disposed on one side of the inner surface. The image sensor 3 is connected to the three-dimensional vein image processor 4. The three-dimensional vein image processor 4 is connected to the three-dimensional vein recognizer 5.
The light source 1 may be configured to provide light; the microlens array 2 may be configured to receive the reflected light formed when the light from the light source strikes the veins of the recognition object on the outer-surface side, and to generate two or more two-dimensional vein images from the reflected light; the image sensor 3 may be configured to receive the two or more two-dimensional vein images and transmit them to the three-dimensional vein image processor; the three-dimensional vein image processor 4 may be configured to perform angle and depth conversion processing on the two-dimensional vein images to obtain vein depths, and to convert two or more two-dimensional vein images into one three-dimensional vein image based on two or more vein depths; the three-dimensional vein recognizer 5 may be configured to match the three-dimensional vein image with a three-dimensional vein image template and, when the match succeeds, to issue a recognition success signal.
Specifically, the light source 1 may be a near-infrared light emitter, such as an infrared diode or an infrared light bulb. The number of light sources 1 may be one, two, or more, and two or more light sources 1 may be disposed on one side of the edge portion or on two or more sides. For example, a plurality of light sources may all be disposed on one side of the microlens array 2; alternatively, a plurality of light sources may be distributed around the microlens array 2. The arrangement can be set flexibly according to the actual situation. A preferred placement of the light source 1 is one in which the emitted light fully illuminates the user's finger and the reflected light falls on the microlens array 2.
The microlens array 2 may be an array of several (e.g., 10,000) microlenses. The clear aperture and relief depth of a microlens can reach the micron level. The microlens array 2 not only provides the basic functions of a conventional lens, such as focusing and imaging, but also offers small size and high integration.
The image sensor 3 may be a charge-coupled device (CCD). The image sensor 3 can convert the optical signal into a charge signal and send the converted charge signal to the three-dimensional vein image processor 4.
The three-dimensional vein image processor 4 may be a micro control unit (MCU).
The three-dimensional vein recognizer 5 can match the three-dimensional vein image processed by the three-dimensional vein image processor 4 against the three-dimensional vein image template (a pre-stored three-dimensional vein image of the user) and, when the match succeeds, issue a recognition success signal. The recognition success signal may be a signal to turn on a device (for example, the user's wearable smart device).
In some embodiments, the device can identify the user's identity as follows:
The light source emits near-infrared light that illuminates the user's finger, and the reflected light passes through the microlens array 2 and is projected onto the image sensor 3. The image sensor 3 records the r-θ information conversion performed by the microlens array. The three-dimensional vein image processor 4 receives the information sent by the image sensor 3 and performs the inverse θ-r conversion to restore the three-dimensional image of the veins, where r can be the length along the z-axis direction in the three-dimensional image and θ can be the angle of the image relative to the microlens. The three-dimensional vein recognizer 5 can extract features from the three-dimensional vein image restored by the three-dimensional vein image processor 4, compare them with the pre-stored three-dimensional vein image template, and give a recognition result. The image processing performed by the three-dimensional vein image processor 4 is described in further detail below.
Thus, in the embodiment of the present invention, light provided by the light source illuminates the user's finger and is reflected; the reflected light strikes the microlens array to generate two or more two-dimensional vein images, which the image sensor receives and transmits to the three-dimensional vein image processor; the three-dimensional vein image processor performs angle and depth conversion on the two-dimensional vein images to obtain vein depths and converts two or more two-dimensional vein images into one three-dimensional vein image based on two or more vein depths; the three-dimensional vein recognizer then matches the three-dimensional vein image with the three-dimensional vein image template, thereby realizing high-accuracy three-dimensional vein recognition.
In addition, embodiments of the present invention can be designed as a compact device that can be carried around, meeting the needs of wearable smart devices.
FIG. 2 is a schematic structural diagram of a three-dimensional vein recognition device according to another embodiment of the present invention.
The application scenario of the embodiment of FIG. 2 is that the device recognizes the user's identity by acquiring the veins inside the user's finger 6.
As shown in FIG. 2, the difference between this embodiment and the embodiment shown in FIG. 1 is that this embodiment adds, on the basis of the embodiment of FIG. 1, a light source 1 and a vein collection table 7.
The light source 1 and the vein collection table 7 may be disposed on the outer-surface side of the microlens array 2, i.e., above the microlens array 2 shown in FIG. 2. The vein collection table 7 may be made of a light-transmitting material, such as transparent plastic or glass, whose thickness is very thin and negligible. The vein collection table 7 carries the user's finger 6. The infrared light emitted by the light source 1 passes through the vein collection table 7 to the user's finger 6 and is reflected onto the microlens array 2.
Thus, by providing the vein collection table 7, this embodiment can position the user's finger 6 so that its position and distance relative to the microlens array 2 are fixed, and a relatively stable two-dimensional vein image can be obtained, ensuring accurate acquisition of the vein depth later on and further improving the recognition accuracy of the subsequent three-dimensional vein.
FIG. 3 is a schematic structural diagram of a three-dimensional vein recognition device according to yet another embodiment of the present invention.
As shown in FIG. 3, the difference between this embodiment and the embodiment shown in FIG. 2 is that the three-dimensional vein image processor 4 is split into a depth calculation element 41 and an image conversion element, and a light source intensity controller 8 is added on the basis of the embodiment shown in FIG. 2.
The depth calculation element 41 is connected to the image sensor 3 and the image conversion element 42, respectively. The image conversion element 42 is connected to the three-dimensional vein recognizer 5. The depth calculation element 41 is configured to convert the boundary angle difference of the upper and lower boundaries of the recognition object with respect to a microlens into a vein depth, based on the microlens parameters of the microlenses in the microlens array; the image conversion element 42 is configured to construct one three-dimensional vein image based on two or more vein depths and two or more two-dimensional vein images. The construction of the three-dimensional vein image is described further below.
The light source intensity controller 8 is connected to the light source 1. The light source intensity controller 8 may be configured to control the intensity of the current and/or the voltage fed to the light source 1. The term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, that both A and B exist, or that B exists alone.
Thus, this embodiment can control the intensity of the current and/or voltage fed to the light source 1 through the light source intensity controller 8, so that the light emitted by the light source 1 can be adjusted; with a better light intensity, a two-dimensional vein image of better clarity can be obtained, ensuring accurate acquisition of the vein depth later on and further improving the recognition accuracy of the subsequent three-dimensional vein.
In some embodiments, the depth calculation element is further configured to obtain the vein depth H based on the microlens diameter R, the microlens number N, the object distance L, and the object boundary angle difference Δα; the vein depth H is equal to the ratio of a first product to a second product, where the first product equals the square of L multiplied by Δα, the second product equals N multiplied by R, N is a natural number, and N is less than or equal to the number of microlenses in the microlens array.
In some embodiments, the microlens parameters may include: the microlens diameter R, the microlens number N, the object distance L, and the object boundary angle difference Δα, where N is a natural number and N is less than or equal to the number of microlenses in the microlens array 2.
In some embodiments, the vein depth H is equal to the ratio of the first product to the second product, where the first product equals the square of L multiplied by Δα and the second product equals N multiplied by R. The vein depth H is described further below.
FIG. 4 is a schematic diagram of a three-dimensional vein image processing optical path according to an embodiment of the present invention.
This embodiment takes the imaging of the user's finger 6 with respect to one microlens in the microlens array 2 as an example to explain the conversion relationship between the vein depth H and the object boundary angle difference Δα. Those skilled in the art will understand that the imaging of the other microlenses can be implemented by reference to this method; for brevity, the details are not repeated here.
As shown in FIG. 4, the user's finger 6 is placed on the vein collection table 7, and the thickness of the user's finger 6 can be expressed by the vein depth H. The distance between the vein collection table 7 and the microlens array 2 can be expressed by the object distance L. The lenses in the microlens array 2 can be labeled by a microlens number N, where N is a natural number and N is less than or equal to the number of microlenses in the microlens array 2. For example, the microlens parallel to the Z axis is numbered 0 and the uppermost microlens is numbered n. The microlens diameter can be denoted R. The angle of the upper boundary of the user's finger 6 with respect to the microlens numbered n can be α'n, and the angle of the lower boundary with respect to the microlens numbered n can be αn. The angular difference between α'n and αn can be expressed by the object boundary angle difference Δα.
From the geometry of the triangles in FIG. 4:
tan αn = nR / L    (1)
tan α'n = nR / (L + H)    (2)
In the embodiment of the present invention, H << L and R << L.
Therefore, tan αn ≈ αn and tan α'n ≈ α'n    (3)
Δα = αn − α'n ≈ nR/L − nR/(L + H) = nRH / (L(L + H)) ≈ nRH / L²    (4)
It can be seen from formula (4) that Δα is related only to the vein depth H and the microlens number n (i.e., the position of the microlens that the upper-boundary ray reaches). Therefore, the vein depth H of the three-dimensional user's finger 6 is converted by the microlens array 2 into two-dimensional angle information. Thus, based on the microlens parameters (n, H, L, etc.) of the microlenses in the microlens array, the boundary angle difference Δα of the upper and lower boundaries of the recognition object with respect to a microlens can be converted to and from the vein depth H.
In this embodiment, the vein depth H is much larger than the focal length F of the microlens, so the projected images of points at different depths on the three-dimensional object will all be imaged approximately on the back focal plane of the microlens.
In some embodiments, the three-dimensional vein image processor 4 may be further configured to rotate the three-dimensional vein image and match the rotated three-dimensional vein image with the three-dimensional vein image template; when the match succeeds, a recognition success signal is issued.
Thus, in this embodiment the three-dimensional vein image can be rigidly rotated by a certain angle (for example, 2 degrees in the forward direction), and features can then be extracted from the rotated three-dimensional vein image, so that the user can still be recognized accurately even if the finger is slightly deflected.
FIG. 5 is a schematic flowchart of a three-dimensional vein recognition method according to an embodiment of the present invention.
As shown in FIG. 5, the method may include the following steps: S510, receiving, by the microlens array, the reflected light formed when light from the light source strikes the veins of the recognition object on the outer-surface side of the microlens array, and generating two or more two-dimensional vein images from the reflected light; S520, receiving the two-dimensional vein images by the image sensor and transmitting them to the three-dimensional vein image processor; S530, performing angle and depth conversion processing on the two-dimensional vein images by the three-dimensional vein image processor to obtain vein depths, and converting two or more two-dimensional vein images into one three-dimensional vein image based on two or more vein depths; S540, matching the three-dimensional vein image with the three-dimensional vein image template by the three-dimensional vein recognizer and, when the match succeeds, issuing a recognition success signal.
In some embodiments, step S530 may include the following sub-steps: S531, converting the boundary angle difference of the upper and lower boundaries of the recognition object with respect to a microlens into a vein depth, based on the microlens parameters of the microlenses in the microlens array; S532, constructing one three-dimensional vein image based on two or more vein depths and two or more two-dimensional vein images.
In some embodiments, the microlens parameters may include: the microlens diameter R, the microlens number N, the object distance L, and the object boundary angle difference Δα, where N is a natural number and N is less than or equal to the number of microlenses in the microlens array 2.
In some embodiments, the vein depth H is equal to the ratio of the first product to the second product, where the first product equals the square of L multiplied by Δα and the second product equals N multiplied by R.
In some embodiments, on the basis of the above embodiments, the method may further include: controlling the intensity of the current and/or the voltage fed to the light source.
In some embodiments, on the basis of the above embodiments, the method may further include: rotating the three-dimensional vein image and matching the rotated three-dimensional vein image with the three-dimensional vein image template; when the match succeeds, a recognition success signal is issued.
In some embodiments, a three-dimensional vein recognition device may include a memory and a processor, where the memory is configured to store a program and the processor is configured to execute the program stored in the memory, the program causing the processor to perform any of the three-dimensional vein recognition methods described above.
It should be noted that, where there is no conflict, those skilled in the art can flexibly adjust the order of the above operation steps or flexibly combine the steps as actually needed. For brevity, the various implementations are not enumerated here. In addition, the contents of the embodiments may be cross-referenced.
FIG. 6 is a schematic framework diagram of a three-dimensional vein recognition device according to an embodiment of the present invention.
As shown in FIG. 6, the framework may include a central processing unit (CPU) 601, which can perform the various operations of the embodiment of FIG. 5 according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system architecture. The CPU 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including, for example, a cathode ray tube (CRT) or liquid crystal display (LCD) and a speaker; a storage portion 608 including a hard disk and the like; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed so that a computer program read from it can be installed into the storage portion 608 as needed.
In particular, according to an embodiment of the present invention, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present invention includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 609 and/or installed from the removable medium 611.
The present invention also provides a mobile terminal, which may include any of the three-dimensional vein recognition devices described above. The mobile terminal may be a smartphone. The user's vein information can be collected on the back of the smartphone, so as to authenticate the user's identity.
It should be noted that the devices of the above embodiments can serve as the execution bodies of the methods of the embodiments, can implement the corresponding processes in the methods, and achieve the same technical effects; for brevity, this is not described again here.
In the above embodiments, implementation may be entirely or partly by software, hardware, and the like. When implemented in software, implementation may be entirely or partly in the form of a computer program product. The device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
It should be understood that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalents may be substituted for some of their technical features; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (16)

  1. A three-dimensional vein recognition device, comprising: a light source, a microlens array, an image sensor, a three-dimensional vein image processor, and a three-dimensional vein recognizer, wherein the microlens array comprises an outer surface, an inner surface, and an edge portion, the light source is disposed on one side of the edge portion, the image sensor is disposed on one side of the inner surface, the image sensor is connected to the three-dimensional vein image processor, and the three-dimensional vein image processor is connected to the three-dimensional vein recognizer, wherein:
    the light source is configured to provide light;
    the microlens array is configured to receive the reflected light formed when the light strikes the veins of a recognition object on the outer-surface side, and to generate two or more two-dimensional vein images from the reflected light;
    the image sensor is configured to receive the two or more two-dimensional vein images and transmit the received two-dimensional vein images to the three-dimensional vein image processor;
    the three-dimensional vein image processor is configured to perform angle and depth conversion processing on the two-dimensional vein images to obtain vein depths, and to convert two or more of the two-dimensional vein images into one three-dimensional vein image based on two or more of the vein depths;
    the three-dimensional vein recognizer is configured to match the three-dimensional vein image with a three-dimensional vein image template and, when the three-dimensional vein image is successfully matched with the three-dimensional vein image template, to issue a recognition success signal.
  2. The device according to claim 1, wherein the three-dimensional vein image processor comprises a depth calculation element and an image conversion element, the depth calculation element is connected to the image sensor and the image conversion element, respectively, and the image conversion element is connected to the three-dimensional vein recognizer, wherein:
    the depth calculation element is configured to convert a boundary angle difference of an upper boundary of the recognition object and a lower boundary of the recognition object with respect to a microlens into the vein depth, based on microlens parameters of the microlenses in the microlens array;
    the image conversion element is configured to construct one three-dimensional vein image based on two or more of the vein depths and two or more of the two-dimensional vein images.
  3. The device according to claim 2, wherein:
    the depth calculation element is further configured to obtain the vein depth H based on a microlens diameter R, a microlens number N, an object distance L, and an object boundary angle difference Δα, the vein depth H being equal to a ratio of a first product to a second product, wherein the first product equals the square of L multiplied by Δα, the second product equals N multiplied by R, N is a natural number, and N is less than or equal to the number of microlenses in the microlens array.
  4. The device according to claim 1, wherein the number of the light sources is two or more, and the two or more light sources are disposed on one side of the edge portion or on two or more sides.
  5. The device according to claim 1, wherein the light source is a near-infrared light emitter.
  6. The device according to claim 1, further comprising: a light source intensity controller connected to the light source,
    wherein the light source intensity controller is configured to control the intensity of a current and/or a voltage fed to the light source.
  7. The device according to any one of claims 1-6, further comprising: a vein collection table,
    wherein the vein collection table is disposed on the outer-surface side and is configured to carry the recognition object.
  8. The device according to any one of claims 1-6, wherein the recognition object comprises one or more of the following: a finger, a wrist, a palm, an eyelid, a cheek.
  9. The device according to claim 7, wherein the three-dimensional vein image processor is further configured to rotate the three-dimensional vein image and to match the rotated three-dimensional vein image with the three-dimensional vein image template and, when the three-dimensional vein image is successfully matched with the three-dimensional vein image template, to issue a recognition success signal.
  10. A three-dimensional vein recognition method, comprising the following steps:
    receiving, by a microlens array, reflected light formed when light from a light source strikes the veins of a recognition object on an outer-surface side of the microlens array, and generating two or more two-dimensional vein images from the reflected light;
    receiving the two-dimensional vein images by an image sensor and transmitting the two-dimensional vein images to a three-dimensional vein image processor;
    performing angle and depth conversion processing on the two-dimensional vein images by the three-dimensional vein image processor to obtain vein depths, and converting two or more of the two-dimensional vein images into one three-dimensional vein image based on two or more of the vein depths;
    matching the three-dimensional vein image with a three-dimensional vein image template by a three-dimensional vein recognizer and, when the three-dimensional vein image is successfully matched with the three-dimensional vein image template, issuing a recognition success signal.
  11. The method according to claim 10, wherein performing the angle and depth conversion processing on the two-dimensional vein images by the three-dimensional vein image processor to obtain the vein depths, and converting two or more of the two-dimensional vein images into one three-dimensional vein image based on two or more of the vein depths, comprises:
    converting a boundary angle difference of an upper boundary of the recognition object and a lower boundary of the recognition object with respect to a microlens into the vein depth, based on microlens parameters of the microlenses in the microlens array;
    constructing one three-dimensional vein image based on two or more of the vein depths and two or more of the two-dimensional vein images.
  12. The method according to claim 11, wherein converting the boundary angle difference of the upper boundary of the recognition object and the lower boundary of the recognition object with respect to the microlens into the vein depth, based on the microlens parameters of the microlenses in the microlens array, comprises:
    obtaining the vein depth H based on a microlens diameter R, a microlens number N, an object distance L, and an object boundary angle difference Δα, the vein depth H being equal to a ratio of a first product to a second product, wherein the first product equals the square of L multiplied by Δα, the second product equals N multiplied by R, N is a natural number, and N is less than or equal to the number of microlenses in the microlens array.
  13. The method according to claim 10, further comprising:
    controlling the intensity of a current and/or a voltage fed to the light source.
  14. The method according to any one of claims 11-13, wherein, after performing the angle and depth conversion processing on the two-dimensional vein images by the three-dimensional vein image processor to obtain the vein depths, and converting two or more of the two-dimensional vein images into one three-dimensional vein image based on two or more of the vein depths, the method further comprises:
    rotating the three-dimensional vein image and matching the rotated three-dimensional vein image with the three-dimensional vein image template, and issuing a recognition success signal when the three-dimensional vein image is successfully matched with the three-dimensional vein image template.
  15. A switch for a smart device, comprising:
    the device according to any one of claims 1-9.
  16. A mobile terminal, comprising:
    the device according to any one of claims 1-9;
    or,
    the switch according to claim 15.
PCT/CN2018/078993 2017-06-01 2018-03-14 Three-dimensional vein recognition device and method, switch, and mobile terminal WO2018219012A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710404655.9 2017-06-01
CN201710404655.9A CN107392088A (zh) Three-dimensional vein recognition device and method, switch, and mobile terminal

Publications (1)

Publication Number Publication Date
WO2018219012A1 true WO2018219012A1 (zh) 2018-12-06

Family

ID=60333008

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/078993 WO2018219012A1 (zh) 2017-06-01 2018-03-14 Three-dimensional vein recognition device and method, switch, and mobile terminal

Country Status (2)

Country Link
CN (1) CN107392088A (zh)
WO (1) WO2018219012A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392088A (zh) * 2017-06-01 2017-11-24 燕南国创科技(北京)有限公司 三维静脉识别装置和方法、开关、移动终端
CN108470373B (zh) * 2018-02-14 2019-06-04 天目爱视(北京)科技有限公司 一种基于红外的3d四维数据采集方法及装置
CN109543591A (zh) * 2018-11-19 2019-03-29 珠海格力电器股份有限公司 一种三维手指静脉采集的方法及设备
CN109657630B (zh) * 2018-12-25 2021-06-18 上海天马微电子有限公司 显示面板、显示面板的触摸识别方法和显示装置
CN112069864A (zh) * 2019-06-11 2020-12-11 杭州萤石软件有限公司 3d静脉图像确定方法、装置及系统
CN112990160B (zh) * 2021-05-17 2021-11-09 北京圣点云信息技术有限公司 一种基于光声成像技术的面部静脉识别方法及识别装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101488184A (zh) * 2008-01-18 2009-07-22 索尼株式会社 Biometric recognition system
JP2010009157A (ja) * 2008-06-25 2010-01-14 Hitachi Media Electoronics Co Ltd Finger vein authentication device and information processing device
CN104055489A (zh) * 2014-07-01 2014-09-24 李栋 Blood vessel imaging device
CN106022210A (zh) * 2016-05-04 2016-10-12 成都指码科技有限公司 Identity recognition method and device based on three-dimensional point cloud matching of vein contours
CN106580265A (zh) * 2017-01-24 2017-04-26 青岛大学 Three-dimensional imaging device for detecting the ultrastructure of human microvessels
CN107392088A (zh) * 2017-06-01 2017-11-24 燕南国创科技(北京)有限公司 Three-dimensional vein recognition device and method, switch, and mobile terminal

Also Published As

Publication number Publication date
CN107392088A (zh) 2017-11-24

Similar Documents

Publication Publication Date Title
WO2018219012A1 (zh) Three-dimensional vein recognition device and method, switch, and mobile terminal
US11188734B2 (en) Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices
US10339362B2 (en) Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices
US11263432B2 (en) Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices
US10922512B2 (en) Contactless fingerprint recognition method using smartphone
KR20190129826A (ko) 생체 검측 방법 및 장치, 시스템, 전자 기기, 저장 매체
Das et al. Recent advances in biometric technology for mobile devices
CN108664783A (zh) 基于虹膜识别的识别方法和支持该方法的电子设备
CN112232155B (zh) 非接触指纹识别的方法、装置、终端及存储介质
CN109766876A (zh) 非接触式指纹采集装置和方法
CN112232163B (zh) 指纹采集方法及装置、指纹比对方法及装置、设备
US10430644B2 (en) Blended iris and facial biometric system
US11544966B2 (en) Image acquisition system for off-axis eye images
CN112016525A (zh) 非接触式指纹采集方法和装置
CN112232159B (zh) 指纹识别的方法、装置、终端及存储介质
WO2022068931A1 (zh) 非接触指纹识别方法、装置、终端及存储介质
CN112232157B (zh) 指纹区域检测方法、装置、设备、存储介质
US20200394289A1 (en) Biometric verification framework that utilizes a convolutional neural network for feature matching
CN112651270A (zh) 一种注视信息确定方法、装置、终端设备及展示对象
CN112232152B (zh) 非接触式指纹识别方法、装置、终端和存储介质
CN212569821U (zh) 非接触式指纹采集装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18809641

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 18.05.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18809641

Country of ref document: EP

Kind code of ref document: A1