WO2022148382A1 - Biometric feature collection and identification system and method, and terminal device - Google Patents

Biometric feature collection and identification system and method, and terminal device

Info

Publication number
WO2022148382A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
biometric
display
acquisition
target area
Prior art date
Application number
PCT/CN2022/070355
Other languages
English (en)
French (fr)
Inventor
张亮亮
刘鸿
韩东成
范超
Original Assignee
安徽省东超科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202110013200.0A external-priority patent/CN112668540B/zh
Priority claimed from CN202110013199.1A external-priority patent/CN112668539A/zh
Application filed by 安徽省东超科技有限公司 filed Critical 安徽省东超科技有限公司
Priority to KR1020237026629A priority Critical patent/KR20230136613A/ko
Priority to EP22736544.2A priority patent/EP4276682A1/en
Publication of WO2022148382A1 publication Critical patent/WO2022148382A1/zh

Classifications

    • G06V 40/1312: Fingerprints or palmprints; sensors therefor; direct reading, e.g. contactless acquisition
    • G06V 40/1318: Fingerprints or palmprints; sensors therefor using electro-optical elements or layers, e.g. electroluminescent sensing
    • G06V 40/1347: Fingerprints or palmprints; preprocessing; feature extraction
    • G06V 40/1365: Fingerprints or palmprints; matching; classification
    • G06F 21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • H04L 63/0861: Network security; authentication of entities using biometric features, e.g. fingerprint, retina scan

Definitions

  • the present disclosure relates to the technical field of imaging and identification, and in particular, to a biometric feature collection and identification system, a terminal device and a biometric feature collection and identification method.
  • a fingerprint identification system adopts a contact sensor (e.g., a contact optical sensor or a contact capacitive sensor) to collect a fingerprint image, and uses a minutiae-based matching algorithm to perform fingerprint matching.
  • the matching algorithm based on minutiae is very sensitive to the quality of the fingerprint image.
  • the fingerprint image acquisition device based on the contact sensor cannot guarantee the image quality, and has shortcomings such as small fingerprint area, low resolution, and insufficient feature points.
  • because this approach relies on physical contact between the finger and the fingerprint sensing device, the requirement of placing the finger on the scanner raises user concerns about hygiene.
  • an object of the present disclosure is to provide a biometric feature collection and identification system, which is more convenient in operation and can avoid the risk of a user touching the device during operation.
  • the second purpose of the present disclosure is to provide a terminal device.
  • the third purpose of the present disclosure is to propose a method for collecting and identifying biometric features.
  • the biometric acquisition and identification system includes an imaging subsystem comprising: an imaging module for imaging and displaying a guide screen for biometric acquisition and identification in an aerial target area; and a detection module for sending an acquisition trigger signal when it detects that a target object is present in the aerial target area, the target object interacts with the guide screen, and the posture of the target object conforms to the guide posture in the guide screen.
  • the system further includes an acquisition and identification subsystem comprising: an image acquisition module for acquiring image information of the target object in the aerial target area in response to the acquisition trigger signal; an image storage module for storing biometric information; and an image processing module connected to the image acquisition module and the image storage module, for performing biometric processing according to the image information and either storing the processed biometric information in the image storage module or comparing it with the biometric information stored in the image storage module to identify the user.
  • the guide screen for biometric acquisition and identification is imaged in the aerial target area by the imaging module; that is, the aerial target area serves as the reference plane for user operation, so the user can operate according to the presented guide screen. When the detection module detects that the posture of the target object conforms to the guide posture in the guide screen, it sends an acquisition trigger signal to the image acquisition module; the image acquisition module captures the image of the target object in the aerial target area, and the image processing module performs biometric processing according to the image information, so as to store the user's biometric features or identify the user, achieving non-contact acquisition and identification of the user's biometric features.
  • touching the guide screen is enough to trigger the image acquisition module to collect and identify image information: no additional restriction device is needed to guide the user's operation, and the user never touches the device body during acquisition and identification, making the non-contact biometric acquisition and identification operations safer and more efficient.
  • in some embodiments, the imaging module includes: a casing formed with a display window and an accommodating cavity inside; a display disposed in the accommodating cavity for displaying the guide screen for biometric acquisition and identification; an optical assembly disposed in the accommodating cavity for converging and imaging the light of the guide screen displayed by the display onto the aerial target area, with the display arranged on the light source side of the optical assembly and the display window on the imaging side of the optical assembly; and a main control unit disposed in the accommodating cavity for controlling the display.
  • in some embodiments, the imaging module further includes a data processing module connected to the main control unit. The data processing module is configured to issue guidance prompt information when it detects a discrepancy between the posture of the target object and the guide posture in the guide screen; the main control unit controls the display to display the guidance prompt information, and the optical assembly converges the light of the displayed guidance prompt information into the aerial target area.
  • the image acquisition module includes: at least one image acquisition unit for acquiring image information of the target object in the aerial target area; and a control unit connected to each image acquisition unit, for controlling the image acquisition unit to start in response to the acquisition trigger signal.
  • the image acquisition unit is disposed on the imaging side of the optical assembly, and the optical axis of the image acquisition unit forms a preset angle with the normal of the imaging plane of the aerial target area.
  • the image acquisition unit is arranged on the imaging side of the optical assembly, and a beam splitter is arranged on the imaging-side surface of the optical assembly; the beam splitter reflects the image information of the target object in the aerial target area so as to transmit that image information to the image acquisition unit.
  • in some embodiments, the optical axis of the image acquisition unit is perpendicular to the normal of the imaging plane of the aerial target area, and the beam splitter is a transflective (half-transmitting, half-reflecting) beam splitter for visible light; or the optical axis of the image acquisition unit is perpendicular to the normal of the imaging plane of the aerial target area, the image acquisition unit is an infrared image acquisition unit, and the beam splitter transmits visible light and reflects infrared light.
  • the image acquisition module further includes: at least one total reflection unit configured to perform total reflection on the image information of the target object in the air target area reflected by the beam splitter , to transmit the image information to the image acquisition unit.
  • the image capturing unit is disposed on the light source side of the optical component, and the optical axis of the image capturing unit and the plane where the aerial target area is located are at a preset angle.
  • a through hole is provided on the optical component corresponding to a position where the optical axis of the image capturing unit passes through.
  • the image acquisition module further includes: an illumination unit, the illumination unit is connected to the control unit, and is configured to activate illumination in response to the acquisition trigger signal.
  • the illumination unit is a backlight assembly of the display, and the main control unit is further configured to control the backlight assembly of the display to emit illumination light in a preset mode in response to the acquisition trigger signal; or
  • the lighting unit is arranged on the imaging side of the optical assembly, and the illumination surface of the lighting unit faces the air target area.
  • the lighting unit is disposed on the light source side of the optical assembly, and is disposed opposite to the display.
  • the surface of the display is provided with a diffuse reflection layer, and the illuminating surface of the lighting unit faces the surface of the display; or the illuminating surface of the lighting unit faces the aerial target area.
  • Embodiments of the second aspect of the present disclosure provide a terminal device, including: a device body; and the biometric feature collection and identification system according to the above embodiment, where the biometric feature collection and identification system is provided on the device body.
  • for the terminal device, collecting and identifying the user's biometric information with the biometric acquisition and identification system of the above embodiments avoids the risk of the user touching the device during operation and requires no additional restriction device to guide the user's operation, so the non-contact fingerprint collection operation is safer and more efficient.
  • An embodiment of a third aspect of the present disclosure provides a method for biometric acquisition and identification, including: providing an aerial target area; imaging a guide screen for biometric acquisition and identification in the aerial target area; when it is detected that a target object exists in the aerial target area, the target object interacts with the guide screen, and the posture of the target object conforms to the guide posture in the guide screen, collecting image information of the target object in the aerial target area; and performing biometric processing according to the image information, then either storing the processed biometric information or comparing it with stored biometric information to identify the user.
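For orientation only, the sketch below walks through the claimed steps in Python; every function and constant in it is a stand-in invented for this illustration, not an API from the disclosure.

```python
"""Runnable sketch of the claimed method flow; all names are hypothetical."""
import random

def show_guide_screen():
    pass                                    # imaging module: guide screen in the air

def detect_target():
    # detection module stand-in: pretend a hand appears with a random offset
    return {"offset": random.uniform(0.0, 0.05)}

def acquire_and_process(mode="identify"):
    show_guide_screen()
    for _ in range(100):                    # periodic detection loop
        target = detect_target()
        if target["offset"] > 0.02:         # posture does not conform to the guide
            continue                        # (a real system would show a prompt)
        images = "captured frames"          # trigger signal -> image acquisition
        features = hash(images)             # stand-in for biometric processing
        if mode == "enroll":
            return ("stored", features)     # save to the image storage module
        return ("compared", features)       # compare with stored features
    return ("no conforming posture", None)

print(acquire_and_process("enroll"))
```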
  • the user by imaging the guidance screen of biometric acquisition and identification in the aerial target area, that is, using the aerial target area as the reference plane for user operation, the user can follow the guidance presented at the aerial target area.
  • the screen is operated, and then when it is detected that the guidance posture conforming to the guidance screen is detected, the image information of the target object in the air target area is collected, and biometric processing is performed according to the image information, so that the processed biometric information is processed.
  • the user can touch the guidance screen of the target area in the air, and when it conforms to the guidance posture in the guidance screen, the user can collect
  • the operation mode of image information is also more convenient and intuitive, and there is no need to set additional restriction devices to guide the user's operation, avoiding the risk of the user touching the device body during operation, thereby making the user's non-contact identification operation safer and more efficient.
  • performing biometric processing according to the image information includes: acquiring a biometric region of interest in the image information; preprocessing the biometric region of interest to obtain a preprocessed image; extracting the feature points in the preprocessed image; and performing similarity matching on the feature points to determine the target biometric feature.
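The four claimed steps map naturally onto a conventional image pipeline. The following hedged sketch uses OpenCV, with ORB keypoints standing in for the feature points the disclosure does not specify; all parameter values are illustrative.

```python
"""Sketch of the four claimed steps: ROI -> preprocess -> features -> match."""
import cv2

def extract_roi(gray, thresh=30):
    # step 1: region of interest = bounding box of lit pixels in a grayscale frame
    mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)[1]
    x, y, w, h = cv2.boundingRect(mask)
    return gray[y:y + h, x:x + w]

def preprocess(roi):
    # step 2: local contrast enhancement of the region of interest
    return cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(roi)

def extract_features(pre):
    # step 3: feature points in the preprocessed image (ORB as a stand-in)
    return cv2.ORB_create(nfeatures=500).detectAndCompute(pre, None)

def match_score(desc_a, desc_b, max_dist=40):
    # step 4: similarity matching between stored and live descriptor sets
    if desc_a is None or desc_b is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_a, desc_b)
    good = [m for m in matches if m.distance < max_dist]
    return len(good) / max(len(matches), 1)
```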
  • in some embodiments, before acquiring the biometric region of interest in the image information, the method further includes: acquiring a three-dimensional biometric image of the target object according to image information of the target object in the aerial target area collected from different directions; and expanding the three-dimensional biometric image into an equivalent two-dimensional biometric image.
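As a toy illustration of the "expand into an equivalent two-dimensional image" step, the sketch below maps surface points onto a (theta, z) grid under the simplifying assumption that the finger is roughly a cylinder whose axis lies along z; the disclosure does not specify the unwrapping method.

```python
import numpy as np

def unwrap_cylinder(points, values, n_theta=256, n_z=256):
    """Map 3D finger-surface points (x, y, z) with intensity values onto a
    (theta, z) grid, a toy 'equivalent 2D' expansion of a cylindrical finger."""
    x, y, z = points.T
    theta = np.arctan2(y, x)                       # angle around the finger axis
    u = ((theta + np.pi) / (2 * np.pi) * (n_theta - 1)).astype(int)
    zn = (z - z.min()) / max(float(np.ptp(z)), 1e-9)   # normalized height
    v = (zn * (n_z - 1)).astype(int)
    img = np.zeros((n_z, n_theta), dtype=np.float32)
    img[v, u] = values         # scaling u by the finger radius would give true arc length
    return img
```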
  • in some embodiments, the method further includes: when it is detected that the posture of the target object does not match the guide posture in the guide screen, sending guidance prompt information; and controlling the display to display the guidance prompt information, with the light of the displayed guidance prompt information converged and imaged on the aerial target area.
  • the method for collecting and identifying biometric features further includes: detecting an interaction between the target object in the aerial target area and the guide screen; providing a biometric interactive screen according to that interaction; and imaging the biometric interactive screen on the aerial target area.
  • FIG. 1 is a structural block diagram of a biometric feature collection and identification system according to an embodiment of the present disclosure
  • FIG. 2 is a schematic structural diagram of a biometric feature collection and identification system according to an embodiment of the present disclosure
  • FIG. 3 is a schematic structural diagram of human-computer interaction according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of an optical assembly according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of a first optical waveguide array and a second optical waveguide array according to an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of a front structure of an optical assembly along a thickness direction according to an embodiment of the present disclosure
  • FIG. 7 is a schematic partial structure diagram of a first optical waveguide array and a second optical waveguide array according to an embodiment of the present disclosure
  • FIG. 8 is a schematic diagram of an optical path of an optical assembly according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of a biometric feature collection and identification system according to another embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of image information collection performed by three image collection units according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of a feature extraction method of three image acquisition units according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic diagram of a feature extraction manner of three image acquisition units according to another embodiment of the present disclosure.
  • FIG. 13 is a schematic diagram illustrating that the optical axis of the image acquisition unit forms a preset angle with the normal of the imaging plane of the aerial target area according to an embodiment of the present disclosure
  • FIG. 14 is a schematic diagram illustrating that the optical axis of the image acquisition unit is perpendicular to the normal of the imaging plane of the aerial target area according to an embodiment of the present disclosure
  • FIG. 15 is a schematic diagram of image information acquisition using a total reflection unit according to an embodiment of the present disclosure.
  • FIG. 16 is a schematic diagram of an image acquisition unit disposed on a light source side of an optical assembly according to an embodiment of the present disclosure.
  • FIG. 17 is a schematic diagram of a lighting unit performing diffuse reflection through a display surface according to an embodiment of the present disclosure.
  • FIG. 18 is a schematic diagram of an illumination surface of a lighting unit facing an aerial target area according to an embodiment of the present disclosure
  • FIG. 19 is a schematic diagram of a lighting unit disposed on a display according to an embodiment of the present disclosure.
  • FIG. 20 is a structural block diagram of a terminal device according to an embodiment of the present disclosure.
  • FIG. 21 is a flowchart of a method for biometric acquisition and identification according to an embodiment of the present disclosure.
  • FIG. 22 is a flowchart of biometric processing based on image information according to an embodiment of the present disclosure.
  • FIG. 1 is a structural block diagram of a biometric feature collection and identification system 1000 according to an embodiment of the present disclosure.
  • the biometric feature collection and identification system 1000 according to an embodiment of the present disclosure includes an imaging subsystem 100 and a collection and identification subsystem 200 .
  • the biological features referred to in the embodiments of the present disclosure may be physiological features common to the human body, such as fingerprints, faces, palm prints, and irises.
  • the imaging subsystem 100 includes an imaging module 110 and a detection module 120 .
  • the imaging module 110 is used to image and display the guidance picture of the biometric collection and identification in the air target area;
  • the detection module 120 is used to detect that there is a target object in the air target area and the target object interacts with the guidance picture, and the posture of the target object conforms to the guidance picture.
  • the acquisition trigger signal is sent.
  • the acquisition and identification subsystem 200 includes an image acquisition module 210 , an image processing module 220 and an image storage module 230 .
  • the image acquisition module 210 is used for collecting the image information of the target object in the aerial target area in response to the acquisition trigger signal, and the acquisition area of the image acquisition module 210 covers the three-dimensional space where the aerial target area is located;
  • the image storage module 230 is used for storing biometric information;
  • the image processing module 220 is connected to the image acquisition module 210 and the image storage module 230, and is used to perform biometric processing according to the collected image information and to either store the processed biometric information in the image storage module 230 or compare it with the biometric information stored in the image storage module 230 to identify the user.
  • in operation, the imaging module 110 images the guide screen for biometric acquisition and identification in the aerial target area; the detection module 120 sends an acquisition trigger signal to the image acquisition module 210 when it detects that the posture of the target object conforms to the guide posture in the guide screen; the image acquisition module 210, in response to the acquisition trigger signal, acquires image information of the target object in the aerial target area; the image processing module 220 performs biometric processing according to the image information; and the image storage module 230 stores the biometric information processed by the image processing module 220.
  • the biometric feature collection and identification system 1000 of the embodiment of the present disclosure performs biometric feature collection and identification by using a combination of an interactive aerial imaging technology and a non-contact biometric feature collection and identification technology.
  • the biometric identification in this embodiment may be fingerprint identification, and the biometric information may be fingerprint information.
  • the position of the guidance picture displayed by the imaging subsystem 100 in the aerial target area is relatively fixed, so that the user can directly interact with the floating real image, that is, the user can perform actual operations according to the guidance picture presented in the aerial target area.
  • during collection, the user places a finger on the aerial target area according to the guide screen; the image acquisition module 210 collects the user's fingerprint information; and the image processing module 220 processes the captured image information and stores it in the image storage module 230. At this point, the collection of the user's fingerprint information is complete.
  • during identification, the user places the finger on the aerial target area according to the guide screen; the image acquisition module 210 collects the user's fingerprint information; and the image processing module 220 processes the captured image information, compares the processed fingerprint information with the fingerprint information stored in the image storage module 230, and determines the user's identity according to the identification result.
  • in this way, the acquisition and identification subsystem 200 is triggered to perform image acquisition and identification by touching the aerial target area; no additional restriction device is needed to guide the user's operation, and the user does not need to touch the device body during acquisition and identification, making fingerprint acquisition and identification safer, more convenient, and more efficient.
  • the imaging module 110 forms a floating real image at a certain position in the air, i.e., the guide screen; the three-dimensional space covering the floating real image is the aerial target area. The imaging module 110 displays relevant prompt information in the aerial target area to guide the current user's actions and complete the collection and identification of the current user's biometric information. Because the user interacts directly with the floating real image, no additional restriction mechanism is needed to guide the operation, which reduces the risk of the user contacting the device body and improves the practical effect of the non-contact acquisition and identification of the biometric acquisition and identification system 1000 of the present disclosure.
  • the detection module 120 is used to detect the user's operation on the floating real image, and when it is detected that there is a target object in the air target area and the posture of the target object conforms to the guidance posture in the guidance screen, it sends an acquisition trigger signal to the image acquisition module 210, and the image acquisition module 210 receives the acquisition trigger signal, acquires image information of the target object in the air target area, and then the image processing module 220 performs biometric processing according to the image information to realize biometric acquisition or identification.
  • the operation mode in which the user triggers the image acquisition module 210 to perform acquisition and identification by touching the target area in the air is more convenient and intuitive.
  • the detection module 120 may periodically detect the interaction between the user and the floating real image. For example, in the process of fingerprint collection and identification, the interaction includes interaction position, palm orientation, and the like.
  • when the interaction conforms to the guide screen, the detection module 120 sends an acquisition trigger signal; the image acquisition module 210 receives the acquisition trigger signal and captures an image of the user's hand in the floating real image area; and the image processing module 220 processes the image of the hand to obtain fingerprint information and compares it with the stored fingerprint information to determine the user's identity.
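A conformance test of this kind could be as simple as the following sketch; the pose representation and the tolerance values are invented for the example.

```python
"""Illustrative check that a detected palm pose conforms to the guide posture."""

def posture_conforms(detected, guide, pos_tol=0.02, angle_tol=15.0):
    # detected/guide: (x, y, z) position in metres and palm yaw in degrees
    dx = detected["pos"][0] - guide["pos"][0]
    dy = detected["pos"][1] - guide["pos"][1]
    dz = detected["pos"][2] - guide["pos"][2]
    close = (dx * dx + dy * dy + dz * dz) ** 0.5 <= pos_tol
    aligned = abs(detected["yaw"] - guide["yaw"]) <= angle_tol
    return close and aligned

guide = {"pos": (0.0, 0.0, 0.30), "yaw": 0.0}        # pose drawn by the guide screen
sample = {"pos": (0.01, -0.005, 0.302), "yaw": 6.0}  # pose reported by the detector
print(posture_conforms(sample, guide))               # -> True: send the trigger signal
```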
  • the detection module 120 may be an optical sensor, and its sensing form may include, but is not limited to, far and near infrared, ultrasonic wave, laser interference, grating, encoder, optical fiber type or CCD (Charge-coupled Device, charge-coupled device) and the like.
  • the sensing area of the detection module 120 is located on the same plane as the guide screen and includes the three-dimensional space where the guide screen is located.
  • the best sensing form can be selected according to the installation space, viewing angle and use environment, so as to facilitate the user to operate in the air target area with the best attitude and improve the user experience.
  • the acquisition area of the image acquisition module 210 covers the location where the guide screen is located, that is, the aerial target area, and this area constitutes the biometric acquisition area.
  • the display position of the guide screen is relatively fixed. When the detection module 120 detects that the user directly interacts with the guide screen, it sends an acquisition trigger signal to trigger the image acquisition module 210 to capture images of the user's biometric features at the position of the guide screen; the image processing module 220 processes the biometric images and stores the processed biometric information in the image storage module 230 to realize biometric acquisition; or it compares the processed biometric information with the stored biometric information to identify the user's identity, achieving non-contact biometric acquisition and identification of the user.
  • when the image acquisition module 210 performs image acquisition, the target object's placement posture is accurately fitted to the floating real image without any physical guide; the module can quickly search for the contour of the target object, fit its position, accurately extract the center position of the target object in the image, and extract main features within a suitable range around that center. This reduces the influence of scale, translation, and rotation on the image during acquisition, the unreliability introduced by the image algorithm, and the algorithm error of biometric feature extraction and matching.
  • the aperture of the image acquisition module 210 can be set as large as the actual situation allows, to increase the amount of incoming light scattered by the target object during acquisition and thereby obtain a clearer image of the target object.
  • the image acquisition module 210 can be used to acquire multiple kinds of biometric information of users in the aerial target area; the acquisition method is not limited to structured light, stereo vision, and time-of-flight (TOF) methods.
  • for example, two high-speed cameras can be used to collect fingerprint images and obtain fingerprint depth information from at least two disparity maps corresponding to different parts of the finger, so as to splice together a 3D fingerprint image of the finger surface and then expand it into an equivalent 2D fingerprint image. The result is fingerprint information compatible with the large archives of fingerprint data currently acquired by other methods, such as contact acquisition. It is also feasible, if compatibility with planar fingerprint information is not required, to identify and verify the user's identity from the 3D fingerprint image alone.
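The depth step follows the standard rectified-stereo relation Z = f·B/d. A minimal sketch, with invented camera parameters:

```python
import numpy as np

# Rectified stereo: depth Z = f * B / d, with focal length f in pixels,
# baseline B in metres, and disparity d in pixels (parameters invented).
F_PX, BASELINE_M = 1400.0, 0.06

def disparity_to_depth(disparity):
    d = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(d, np.inf)
    valid = d > 0                                   # zero disparity = no match
    depth[valid] = F_PX * BASELINE_M / d[valid]
    return depth

print(disparity_to_depth([[280.0, 300.0]]))         # ~0.30 m and 0.28 m
```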
  • the image processing module 220 is used to perform biometric processing on the image information collected by the image acquisition module 210, so as to complete the collection and identification of the biometrics.
  • the processing of the image information includes extraction of the region of interest, grayscale conversion, image enhancement, binarization and thinning, extraction of feature points, and matching of feature points.
  • through this series of image preprocessing and feature extraction operations, the feature data of key points are recorded and stored in the image storage module 230, achieving enrollment of the user's identity.
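One plausible rendering of the named chain (grayscale, enhancement, binarization, thinning) with OpenCV and scikit-image, parameter values illustrative only:

```python
"""Sketch of the preprocessing chain: grayscale -> enhance -> binarize -> thin."""
import cv2
import numpy as np
from skimage.morphology import skeletonize

def preprocess_fingerprint(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)                 # grayscale
    enhanced = cv2.createCLAHE(2.0, (8, 8)).apply(gray)          # enhancement
    binary = cv2.adaptiveThreshold(enhanced, 255,
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 15, 4) # binarization
    thin = skeletonize(binary > 0)                               # one-pixel ridges
    return thin.astype(np.uint8) * 255
```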
  • the image storage module 230 may be a storage device pre-integrated in the system, a cloud server with storage capability connected remotely via Wi-Fi, Bluetooth, etc., or a detachable portable device such as an SD card or a hard disk, which is not limited.
  • the biometric information of the user is stored by the image storage module 230 to facilitate subsequent extraction for identifying the user's identity.
  • to sum up, the guide screen for biometric acquisition and identification is imaged in the aerial target area through the imaging module 110, i.e., the aerial target area is used as the reference plane for user operation, and the user operates according to it. The detection module 120 sends an acquisition trigger signal to the image acquisition module 210 when it detects that the posture of the target object conforms to the guide posture in the guide screen; the image acquisition module 210 images the target object in the aerial target area; and the image processing module 220 performs biometric processing according to the captured image information to complete the collection of the user's biometric features or the identification of the user's identity, achieving non-contact biometric acquisition and identification.
  • the user can trigger the image acquisition module 210 to collect and identify image information simply by touching the target area in the air; no additional restriction device is needed to guide the operation, and the device body need not be touched during acquisition and identification, so the user's non-contact biometric acquisition and identification operations are safer and more efficient.
  • the imaging module 110 of the embodiment of the present disclosure includes a housing 10 , a display 20 , an optical assembly 30 , and a main control unit 40 .
  • the housing 10 is formed with a display window 1 and an accommodating cavity 2 inside. The display 20 is arranged in the accommodating cavity 2 and displays the guide screen for biometric acquisition and identification. The optical assembly 30 is arranged in the accommodating cavity 2 and converges and images the light of the guide screen displayed by the display 20 onto the aerial target area 11; the display 20 is arranged on the light source side of the optical assembly 30, and the display window 1 is on the imaging side of the optical assembly 30 and transmits the light refracted by the optical assembly 30. Specifically, the optical assembly 30 can be disposed at the display window 1; it refracts the light emitted by the display 20, and the refracted light passes through the display window 1 and converges to form the aerial target area 11. The main control unit 40 is arranged in the accommodating cavity 2 and controls the display 20.
  • the display 20 is placed on one side of the optical assembly 30 , that is, the light source side, and the display 20 is controlled to display a guide image.
  • the three-dimensional space of the guidance screen is the air target area 11 .
  • the detection module 120 detects the interactive operation between the user and the guide screen and feeds the detected operation signal back to the main control unit 40. The main control unit 40 triggers the image acquisition module 210 to perform image acquisition, and the acquired image information is biometrically processed by the image processing module 220 to collect biometric information or identify the user.
  • the imaging mode of the display 20 may include RGB (red, green, blue) light-emitting diode (LED) arrays, LCD (liquid crystal display), LCOS (liquid crystal on silicon) devices, OLED (organic light-emitting diode) arrays, projection, laser, laser diode, or any other suitable display or stereoscopic display, which is not limited.
  • the display 20 can provide a clear, bright and high-contrast dynamic image light source.
  • the main control unit 40 controls the display 20 to display a guide screen, and the optical assembly 30 converges the image to present a clear floating real image at the aerial target area, facilitating user operation.
  • the brightness of the display 20 can be set to be not lower than 500 cd/m², so as to reduce the influence of brightness loss along the optical path.
  • the display brightness of the display 20 can be adjusted according to the brightness of the ambient light.
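A trivial ambient-light mapping of the kind implied here might look as follows; the 500 cd/m² floor comes from the text, while the lux range and the linear ramp are assumptions.

```python
"""Toy ambient-light-to-brightness mapping; ceiling and lux scale are invented."""

def display_brightness(ambient_lux, floor=500.0, ceiling=1200.0, max_lux=10000.0):
    # Linear ramp from the floor in darkness up to the ceiling in bright light.
    frac = min(max(ambient_lux / max_lux, 0.0), 1.0)
    return floor + (ceiling - floor) * frac

print(display_brightness(200))    # dim room    -> near 500 cd/m^2
print(display_brightness(8000))   # bright hall -> near the ceiling
```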
  • viewing-angle control processing can be applied to the display surface of the display 20 to reduce afterimages at the aerial target area 11, improve picture quality, and at the same time prevent peeping by others, which makes the system suitable for wide use as an input device in applications that require protection of private information.
  • the structure of the optical assembly 30 according to the embodiment of the present disclosure and its imaging principle are described below.
  • the optical assembly 30 may use a flat lens, and the flat lens is fixed on the housing 10 .
  • as shown in FIG. 4, the flat lens includes a first optical waveguide array 6 and a second optical waveguide array 7.
  • the first optical waveguide array 6 and the second optical waveguide array 7 are closely attached on the same plane and are arranged orthogonally.
  • the thicknesses of the first optical waveguide array 6 and the second optical waveguide array 7 are the same, which is convenient for design and production.
  • in some embodiments, the flat lens includes, in order from the display 20 side to the aerial target area 11 side, a first transparent substrate 8, the first optical waveguide array 6, the second optical waveguide array 7, and a second transparent substrate 8.
  • the first transparent substrate 8 and the second transparent substrate 8 both have two optical surfaces, and the transparent substrate 8 has a transmittance of 90%-100% for light with wavelengths between 390 nm and 760 nm.
  • the material of the transparent substrate 8 can be at least one of glass, plastic, polymer, and acrylic resin, and it protects the optical waveguide arrays and filters out excess light. It should be noted that if the strength of the first optical waveguide array 6 and the second optical waveguide array 7 after being closely and orthogonally bonded is sufficient, or the installation environment restricts the thickness, only one transparent substrate 8 may be provided, or the transparent substrate 8 may be omitted altogether.
  • the principle by which the optical assembly 30 achieves aerial imaging is as follows. The first optical waveguide array 6 and the second optical waveguide array 7 are each composed of a plurality of reflection units 9 with rectangular cross-sections, whose lengths differ, limited by the peripheral size of the optical waveguide array. As shown in FIG. 5, the extension direction of the reflection units 9 in the first optical waveguide array 6 is X, the extension direction of the reflection units 9 in the second optical waveguide array 7 is Y, and Z is the thickness direction of the optical waveguide arrays. The extension directions (optical waveguide array directions) of the reflection units 9 in the two arrays are perpendicular to each other; that is, viewed along the Z (thickness) direction, the first optical waveguide array 6 and the second optical waveguide array 7 are arranged orthogonally, so that two light beams in orthogonal directions converge at one point and the object and image planes (the light source side and the imaging side) are symmetric with respect to the flat lens, producing an equivalent negative refraction phenomenon that achieves aerial imaging.
  • the first optical waveguide array 6 or the second optical waveguide array 7 is composed of a plurality of parallel reflection units 9 arranged obliquely, deflected 45° from the user's viewing angle. For example, the first optical waveguide array 6 can be composed of reflection units 9 with rectangular cross-sections aligned at 45° toward the lower left, and the second optical waveguide array 7 of reflection units 9 with rectangular cross-sections aligned at 45° toward the lower right; the arrangement directions of the reflection units 9 in the two arrays can be interchanged.
  • in that case, the extension direction of the reflection units 9 in the first optical waveguide array 6 is Y, the extension direction of the reflection units 9 in the second optical waveguide array 7 is X, and Z is the thickness direction of the optical waveguide arrays. The two arrays remain orthogonal, so the two orthogonal light beams converge at one point and the object and image planes (light source side and imaging side) remain symmetric with respect to the flat lens, again producing the equivalent negative refraction phenomenon and realizing aerial imaging.
  • the optical waveguide material has an optical refractive index n1; in some embodiments n1 > 1.4, for example 1.5, 1.8, or 2.0.
  • the cross section of the reflection unit 9 may be rectangular, and the reflection film 12 is provided on one or both sides along the arrangement direction of the reflection unit 9 .
  • both sides of each reflective unit 9 are coated with a reflective film 12, and the material of the reflective film 12 can be a metal material such as aluminum, silver, or other non-metallic compound material that realizes total reflection.
  • the function of the reflective film 12 is to prevent light that escapes total reflection from entering adjacent waveguides and forming stray light that would affect imaging.
  • a dielectric film may also be added on the reflective film 12 of each reflection unit 9 to improve the light reflectivity.
  • Large-scale requirements can be achieved by splicing multiple optical waveguide arrays when displaying on a large screen.
  • the overall shape of the optical waveguide array is set according to the application scenario.
  • the two groups of optical waveguide arrays have a rectangular structure as a whole, the two diagonal reflection units 9 are triangular, and the middle reflection unit 9 is a trapezoidal structure.
  • the length of each reflection unit 9 is not equal, the reflection unit 9 located on the diagonal of the rectangle has the longest length, and the reflection units 9 at both ends have the shortest length.
  • the flat lens may further include an anti-reflection component and a viewing angle control component.
  • the anti-reflection component can improve the overall transmittance of the flat lens and improve the clarity and brightness of the guide image imaged in the aerial target area 11 .
  • the viewing angle control component can be used to eliminate the afterimage of the guide screen imaged in the aerial target area 11, reduce the dizziness of the observer, prevent the observer from peeping into the device from other angles, and improve the overall aesthetics of the device.
  • the anti-reflection component and the viewing angle control component may be combined, or may be independently disposed between the transparent substrate 8 and the waveguide array, between two layers of the waveguide array, or on the outer layer of the transparent substrate 8 .
  • the imaging principle of the flat lens is described below with reference to FIG. 8 , and the specific content is as follows.
  • the orthogonal decomposition of any optical signal is performed using mutually orthogonal double-layer waveguide array structures.
  • the original signal is projected onto the first optical waveguide array 6, and a rectangular coordinate system is established with the projection point as the origin and the direction perpendicular to the first optical waveguide array 6 as the x-axis; the original signal is decomposed into a signal X on the x-axis and a signal Y on the y-axis, which are mutually orthogonal. When the signal X passes through the first optical waveguide array 6, it is totally reflected at the surface of the reflective film 12 with a reflection angle equal to the incidence angle; meanwhile, the signal Y remains parallel to the first optical waveguide array 6 and passes through it, and is then totally reflected at the reflective film 12 of the second optical waveguide array 7 with a reflection angle equal to the incidence angle. The reflected optical signal composed of the reflected signals X and Y is mirror-symmetric to the original optical signal.
  • light in any direction thus achieves mirror symmetry through the flat lens, and the divergent light of any light source re-converges into a floating real image at the symmetric position; that is, the guide screen is formed at the aerial target area 11. The imaging distance equals the distance from the flat lens to the image source, i.e., the display 20 (equidistant imaging), and the floating real image is located in the air without any physical carrier: the real image is presented directly in the air, so the image the user sees in space is the image emitted by the display 20.
  • the above process occurs on the flat lens when the light emitted by the light source of the display 20 passes through the flat lens.
  • the incident angles of the light after convergent imaging are β1, β2, β3, and so on, and the viewing angle θ of the floating real image is twice the largest of them: θ = 2·max(β).
  • if the size of the optical waveguide array is small, the image can only be seen within a certain distance from the imaging side of the array; if the array is made larger, a larger imaging distance can be achieved, thereby increasing the field of view.
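Two properties stated above, equidistant (mirror-symmetric) imaging and θ = 2·max(β), are easy to compute; the numbers below are illustrative.

```python
"""Equidistant imaging and viewing angle of the flat lens (illustrative values)."""

def image_position(source_pos, lens_plane_x=0.0):
    # Mirror symmetry about the lens plane (x = lens_plane_x): the image is
    # as far in front of the lens as the source is behind it.
    x, y = source_pos
    return (2 * lens_plane_x - x, y)

def viewing_angle(betas_deg):
    return 2 * max(betas_deg)                 # theta = 2 * max(beta)

print(image_position((-0.20, 0.05)))          # source 0.20 m behind -> image 0.20 m in front
print(viewing_angle([12.0, 18.0, 25.0]))      # -> 50 degrees
```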
  • the angle between the flat lens and the display 20 is set to be in the range of 45° ⁇ 5°, so that the size of the flat lens can be effectively utilized, the image quality can be improved and the effect of afterimages can be reduced.
  • other angles can also be selected under the condition of sacrificing part of the imaging quality.
  • the size and position of the flat lens can also be freely adjusted according to the actual display screen, which is not limited.
  • the above mainly describes the imaging principle of a flat lens with a double-layer optical waveguide array structure. A single-layer structure is also possible, in which cube-column reflection units 9 are arranged in an array along both the X and Y directions, i.e., the two waveguide layers are merged into one; its imaging principle is the same as that of the double-layer structure, and it can likewise serve as the structure of the flat lens.
  • in some embodiments, the thicknesses of the first optical waveguide array 6 and the second optical waveguide array 7 are the same, which simplifies their structure, reduces manufacturing difficulty, improves production efficiency, and lowers production cost. "The same thickness" here is a relative range rather than an absolute equality: for the purpose of improving production efficiency, a certain thickness difference between the optical waveguide arrays is permissible as long as it does not affect the quality of the aerial imaging.
  • the main control unit 40 and the detection module 120 can be connected in a wired or wireless manner to transmit digital or analog signals, so that the volume of the overall device can be flexibly controlled and the electrical stability of the biometric acquisition and identification system 1000 enhanced.
  • the imaging module 110 in the embodiment of the present disclosure further includes a data processing module 111 .
  • the data processing module 111 is connected to the main control unit 40 and issues guidance prompt information when the detection module 120 detects a discrepancy between the posture of the target object and the guide posture. The main control unit 40 controls the display 20 to display the guidance prompt information, whose light is then converged and imaged in the aerial target area 11 through the optical assembly 30, guiding the user to adjust the interactive posture so as to better complete the interaction and the collection and identification of the user's biometric information.
  • in some embodiments, a light-absorbing layer is provided on the inner wall of the accommodating cavity 2; that is, the parts of the housing 10 other than the display surface of the display 20 are given a black light-absorbing treatment, such as spraying light-absorbing paint or applying a light-absorbing film, to eliminate diffuse reflection of light by the internal components of the housing 10 and improve the display effect of the floating real image.
  • the image acquisition module 210 in the embodiment of the present disclosure includes at least one image acquisition unit 21 and a control unit 22 .
  • the image acquisition unit 21 is used for acquiring image information of the target object in the aerial target area 11 ;
  • the control unit 22 is connected to each image acquisition unit 21 , and the control unit 22 is connected to the main control unit 40 .
  • the control unit 22 is configured to control the image acquisition unit 21 to start up in response to the acquisition trigger signal.
  • the image acquisition unit 21 is configured to acquire the image information of the target object in the aerial target area in response to the acquisition trigger signal.
  • the image acquisition unit 21 may be a single or multiple high-speed CMOS cameras. As shown in FIG. 10 , the image acquisition unit 21 includes three cameras, and the acquisition area of each camera covers the area where the guide screen is located. The focal plane position of each camera is set as the air target area 11, so that the image information of different parts of the target object can be clearly captured. For example, when the user's palm is located in the air target area 11, each camera can clearly capture fingerprint images of different parts of at least one finger.
  • the image acquisition unit 21 can use a fixed-focus, large-aperture camera, so that focusing on the position of the target object can be omitted when capturing an image, improving the speed, success rate, and reliability of image acquisition and recognition; the large aperture also ensures sufficient light transmission, improving the clarity and brightness of the captured image.
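The fixed-focus choice works because the depth of field around the aerial plane can comfortably cover a finger. A back-of-envelope check under the thin-lens approximation DOF ≈ 2u²Nc/f², with invented parameters:

```python
"""Depth of field near the aerial target plane, thin-lens approximation
DOF ~ 2*u^2*N*c/f^2 (valid when u is far below the hyperfocal distance).
All numbers are invented for illustration."""

def depth_of_field_mm(u_mm, f_mm, f_number, coc_mm):
    return 2.0 * u_mm ** 2 * f_number * coc_mm / f_mm ** 2

# Fixed-focus camera aimed at a guide screen ~300 mm away.
print(depth_of_field_mm(u_mm=300.0, f_mm=8.0, f_number=2.0, coc_mm=0.003))
# ~17 mm of sharp range around the aerial plane; a wider aperture (smaller
# f-number) trades sharp range for more light.
```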
  • the image acquisition unit 21 needs to retain the necessary focusing function.
  • the image acquisition unit 21 can obtain the visible light image of the target object, and can also obtain the infrared image of the target object.
  • the image acquisition unit 21 can also add filters for light in corresponding wavelength bands to exclude the influence of ambient light.
  • there are many ways for the image acquisition unit 21 to extract the main features within a suitable range around the center of the target object, which is not limited. Two feature extraction methods are described below, taking an image acquisition unit 21 comprising three high-speed CMOS cameras as an example.
  • in the first method, as shown in FIG. 11, taking the collection and identification of fingerprint information as an example, three high-speed CMOS cameras with different orientations obtain images of the target object in different orientations, corresponding to acquisition channel 1, acquisition channel 2, and acquisition channel 3. The image processing module 220 performs feature extraction and matching on the image of each acquisition channel separately, and fuses the three matching results through a mean fusion algorithm to obtain the final comparison result.
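A mean-fusion decision of the kind described reduces to a few lines; the scores and the threshold below are invented.

```python
"""Mean fusion of per-channel match scores (three-camera scheme sketch)."""

def fuse_and_decide(channel_scores, threshold=0.6):
    fused = sum(channel_scores) / len(channel_scores)   # mean fusion
    return fused, fused >= threshold

scores = [0.72, 0.65, 0.58]          # matching results of channels 1..3
print(fuse_and_decide(scores))       # -> (0.65, True)
```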
  • in the second method, three high-speed CMOS cameras with different orientations obtain images of the target object in different orientations; the image processing module 220 obtains the depth information of the target object from at least two disparity maps corresponding to different parts of the target object, splices together a 3D image of the target object's surface, and expands it into an equivalent 2D image, on which feature extraction and matching are performed to obtain the final comparison result.
  • the working process of the biometric acquisition and identification system 1000 of the embodiment of the present disclosure is described below with reference to FIG. 9, taking the identification of user fingerprint information as an example.
  • first, the main control unit 40 makes the display 20 show a guide screen, which is imaged and displayed at the aerial target area 11 through the optical assembly 30; for example, a palm-shaped guide pattern is imaged into the air on the other side of the flat lens, i.e., the aerial target area 11, to guide the user to perform fingerprint collection and identification in the correct area. The detection module 120, such as an optical sensor, periodically detects the user's interactive operations, including the interaction position and palm orientation, and the user places the palm according to the displayed guide screen.
  • when the detection module 120 detects that the user's palm touches the aerial target area and its position and direction are correct, it sends an acquisition trigger signal to the main control unit 40. The main control unit 40 sends a control signal to the control unit 22, which controls the image acquisition unit 21 to start collecting images of the user's palm fingerprints; the image information is transmitted to the image processing module 220 for processing and analysis and compared with the internal fingerprint database stored in the image storage module 230 to verify whether the user's identity passes.
  • the data processing module 111 analyzes the failure reasons, such as the wrong palm orientation, too fast movement or position deviation of the user, etc., and generates guidance prompt information and sends it to the main control unit 40, where the main The control unit 40 controls the display 20 to display the guidance prompt information, so as to guide the palm movement of the user, correctly complete the collection of fingerprint information, and realize the identification of the user identity.
  • In addition, the detection module 120 can also detect other operations of the user, including clicking, sliding, and so on, and transmit the interaction information to the main control unit 40. The main control unit 40 determines the user's specific operation according to an internal instruction set, for example selecting the fingerprint enrollment mode or viewing fingerprint information, and simultaneously transmits UI (User Interface) operation elements, such as relevant control buttons and settings, to the display 20, so that they are imaged at the air target area to guide user operations.
  • It can be understood that FIG. 9 is only an example of the biometric acquisition and identification system 1000 according to the embodiment of the present disclosure, in which the image processing module 220 may be directly integrated with the data processing module 111 in the imaging module 110, or the two may be set up separately; this is not limited, but both can be used to generate guidance prompt information when no valid biometric feature is identified in the image information.
  • The main control unit 40 and the control unit 22 proposed in the embodiments of the present disclosure may be integrated or set up separately, which is not limited.
  • In an embodiment, the main control unit 40 may be directly integrated with the display 20, or the main control unit 40 and the display 20 may be provided separately.
  • The content of the control instructions of the main control unit 40 can also be transmitted to other external devices, to process or control them, such as controlling a fingerprint lock, a time clock, and the like.
  • Furthermore, the image acquisition unit 21, the control unit 22, and the image processing module 220 in the embodiment of the present disclosure can also be controlled by an external device without going through the main control unit 40.
  • The image acquisition unit 21 in the embodiment of the present disclosure is set facing the direction of the air target area 11; for example, when the image acquisition unit 21 is a high-speed CMOS camera, the optical axis of the camera should be perpendicular to the plane of the guide picture. In actual use, however, because of the presence of the optical assembly 30, the optical axis of the image acquisition unit 21 has an oblique angle with respect to the normal of the guide picture, so the optical axis cannot be perpendicular to the plane of the guide picture, and the captured image is distorted.
  • In this regard, the embodiment of the present disclosure proposes various arrangements of the image acquisition unit 21 to reduce image distortion, as well as countermeasures when distortion occurs. Several preferred arrangements of the embodiments of the present disclosure are described below.
  • In some embodiments, as shown in FIG. 13, the image acquisition unit 21 is disposed on the imaging side of the optical assembly 30, and the optical axis of the image acquisition unit 21 forms a preset angle θ (0° < θ < 90°) with the normal of the imaging plane of the air target area 11. That is, the image acquisition unit 21 is arranged above the optical assembly 30, on the same side of the optical assembly 30 as the guide picture. In this arrangement, since the image acquisition unit 21 must avoid the optical assembly 30, its optical axis has a definite angle θ with the normal of the guide picture; therefore, when processing the image in the embodiment of the present disclosure, the deformation and distortion caused by the angle θ can be corrected to obtain clear image information, as sketched below.
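For illustration, the keystone distortion introduced by a known angle θ can be removed with a planar homography once the four corners of the guide picture have been located in the captured frame; the corner coordinates below are assumed calibration data, not values from the disclosure.

```python
# Minimal perspective-correction sketch for the tilted-axis arrangement.
import cv2
import numpy as np

img = cv2.imread("captured.png")
# Guide-picture corners as seen in the tilted image (assumed calibration data).
src = np.float32([[102, 80], [530, 95], [515, 470], [90, 440]])
# Target corners in an undistorted, fronto-parallel view.
dst = np.float32([[0, 0], [480, 0], [480, 400], [0, 400]])

H = cv2.getPerspectiveTransform(src, dst)
corrected = cv2.warpPerspective(img, H, (480, 400))  # theta-induced distortion removed
```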
  • In other embodiments, as shown in FIG. 14, the image acquisition unit 21 is disposed on the imaging side of the optical assembly 30, and a beam splitter 31 is disposed on the upper surface of the optical assembly 30, that is, the imaging side. Specifically, the beam splitter 31 transmits part of the passing light and reflects the rest, so the image information of the target object in the air target area 11 can be reflected by the beam splitter 31 and passed to the image acquisition unit 21, yielding clear image information. Two preferred schemes with a beam splitter 31 on the imaging-side surface of the optical assembly are illustrated below with reference to FIG. 14.
  • In some embodiments, as shown in FIG. 14, the optical axis of the image acquisition unit 21 is perpendicular to the normal of the imaging plane of the air target area 11, and the image acquisition unit 21 is a visible-light image acquisition unit. The beam splitter 31 is transflective for visible light, that is, it has 50% transmittance and 50% reflectance for visible light. The image acquisition unit 21 is arranged so that its optical axis, after reflection by the beam splitter 31, is perpendicular to the plane of the guide picture. Thus, when a visible-light image of the target object is needed, the image acquisition unit 21 can obtain an undeformed, undistorted target object image through the reflection of the beam splitter 31.
  • In other embodiments, as shown in FIG. 14, the optical axis of the image acquisition unit 21 is perpendicular to the normal of the imaging plane of the air target area 11, and the image acquisition unit 21 is an infrared image acquisition unit. The beam splitter 31 transmits visible light and reflects infrared light. Because such a beam splitter 31 transmits the visible band well, it avoids reducing the brightness of the floating real image; and because it reflects infrared light completely, there is essentially no loss of luminous flux when capturing an infrared image, and the image acquisition unit 21 can obtain a clear image of the target object.
  • In some embodiments, as shown in FIG. 14, a filter member 24 capable of filtering out visible light is provided on the light-incident side of the infrared image acquisition unit, to further avoid interference from visible light.
  • The beam splitter 31 of the embodiments of the present disclosure may be sized to cover the entire optical assembly 30, or its size may be set freely according to the actual image acquisition requirements. For example, as shown in FIG. 14, the beam splitter 31 completely covers the imaging-side surface of the optical assembly 30.
  • In some embodiments, as shown in FIG. 15, the image acquisition module 210 of the embodiment of the present disclosure further includes at least one total reflection unit 25, such as a total reflection mirror, used to totally reflect the image information of the target object in the air target area 11 that has been reflected by the beam splitter 31, thereby passing the image information to the image acquisition unit 21. For example, in FIG. 15 a single total reflection mirror is used, so that the light scattered by the target object at the air target area 11 undergoes multiple total reflections before finally entering the image acquisition unit 21. Using the total reflection unit 25 gives more freedom in placing the image acquisition unit 21; for example, the height of the device can be reduced to save space, and the total reflection unit 25 does not change the angular relation between the final optical axis and the guide picture.
  • In other embodiments, as shown in FIG. 16, the image acquisition unit 21 is arranged on the light-source side of the optical assembly 30, and its optical axis may be perpendicular to the plane of the air target area 11 or at another angle. For example, the image acquisition unit 21 is arranged below the optical assembly 30, on the same side as the display 20, with its optical axis perpendicular to the plane of the air target area 11; since the optical assembly 30 has a certain light transmittance, an undeformed, undistorted image of the target object can be obtained. It can be understood that the optical axis of the image acquisition unit 21 may also form a preset angle θ with the plane of the air target area 11, in which case the deformation and distortion caused by the angle θ are corrected in post-processing to obtain clear image information.
  • When the image acquisition unit 21 is located on the light-source side of the optical assembly 30, the image information is acquired by exploiting the light transmittance of the optical assembly 30. However, since the optical assembly 30 contains a large number of microstructures, the light is easily disturbed by them, and these disturbances need to be removed during image processing. Two preferred solutions are given below.
  • In one solution, the image acquisition unit 21 is an infrared image acquisition unit for capturing the image of the target object, and a filter member 24 that filters out visible light is provided on its light-incident side to avoid interference from the microstructures.
  • In the other solution, a through hole is provided in the optical assembly 30 at the position where the optical axis of the image acquisition unit 21 passes, so that the image acquisition unit 21 can photograph the target object directly through the through hole, thereby reducing the interference of the microstructures of the optical assembly 30.
  • In some embodiments, the image acquisition module 210 of the embodiment of the present disclosure further includes an illumination unit 23, which is connected to the control unit 22 and configured to activate illumination in response to the acquisition trigger signal.
  • The control unit 22 is used to switch the image acquisition unit 21 and the illumination unit 23 on or off. It can be understood that the control unit 22 may switch both at the same time, preventing the illumination unit 23 from staying on and reducing energy consumption, or it may control the image acquisition unit 21 and the illumination unit 23 independently; this is not limited.
  • By using the illumination unit 23 to illuminate the target object uniformly, the contrast and clarity of the ridges in the target object image can be enhanced.
  • The embodiment of the present disclosure can flexibly configure the position of the illumination unit 23 according to the lighting requirements, which is not limited. It should be noted that, since the direction of the light source directly determines the direction of the shadows of the target object's ridge lines, under illumination from different directions the ridge lines in the collected target object image can shift by as much as 2 to 3 ridge-line widths. The embodiments of the present disclosure therefore improve captured image quality by designing various arrangements of the illumination unit. Several preferred arrangements of the embodiments of the present disclosure are described below.
  • In some embodiments, the illumination unit 23 is the backlight assembly of the display 20, and the main control unit 40 is further configured to control the backlight assembly of the display 20 to emit illumination light in a preset mode in response to the acquisition trigger signal.
  • That is, when the detection module 120 detects that a target object exists in the air target area and that its interaction posture conforms to the guidance posture in the guide picture, it sends an acquisition trigger signal; in response, the main control unit 40 controls the backlight assembly of the display 20 to emit illumination light in the preset mode, and the light emitted by the display 20 is converged by the optical assembly 30 onto the air target area to illuminate the target object, which enhances the contrast and clarity of the ridges in the target object image and improves captured image quality.
  • The imaging mode of the display 20 may include RGB light-emitting diodes, LCOS devices, OLED arrays, projection, laser, laser diodes, or any other suitable display or stereoscopic display, and the brightness of the display 20 is not lower than 500 cd/m²; therefore, in the embodiment of the present disclosure, the display 20 itself can serve as the illumination unit 23.
  • For example, when the user operates at the air target area, the detection module 120 senses the presence of the target object and sends an acquisition trigger signal to the main control unit 40.
  • The main control unit 40 sends a control command to the display 20, controlling the display 20, for example its backlight assembly, to emit a single high-brightness blue strobe flash; after passing through the optical assembly 30, this flash likewise converges at the position of the guide picture and forms uniform diffuse reflection on the surface of the target object.
  • The control unit 22 synchronously controls the image acquisition unit 21 to immediately photograph the shadows of the target object's ridge and valley lines, so that the target object image is collected and transmitted to the image processing module 220. The image processing module 220 performs biometric processing on the target object image information, either storing the processed biometric information to complete feature collection, or comparing it with the biometric information stored in the image storage module 230 to identify the user, thereby achieving non-contact collection and identification of the user's biometric features. A control-flow sketch of this sequence follows.
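Purely as a control-flow illustration, the trigger/flash/capture sequence described above might look like the following; the objects and every method name here are hypothetical stand-ins for the detection module 120, display 20, image acquisition unit 21, and image processing module 220, not an actual API.

```python
# Hypothetical sketch of one acquisition cycle (all interfaces assumed).
def acquisition_cycle(detector, display, camera, processor, store):
    event = detector.poll()                       # periodic interaction check
    if event is None or not event.pose_matches_guide:
        return None                               # no valid posture: do nothing
    display.flash_backlight(mode="blue_strobe")   # single high-brightness flash
    image = camera.capture()                      # photograph ridge/valley shadows
    features = processor.extract(image)           # biometric processing
    return store.compare(features)                # identify, or store to enroll
```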
  • In some embodiments, as shown in FIGS. 17 and 18, the illumination unit 23 of the embodiment of the present disclosure is arranged below the optical assembly 30, that is, on the light-source side of the optical assembly 30 and opposite the display 20. In this case, the illumination unit 23 may be arranged in the following three preferred ways.
  • In a first arrangement, as shown in FIG. 17, the surface of the display 20 is provided with a diffuse reflection layer 26, and the illumination surface of the illumination unit 23 faces the surface of the display 20. That is, in the embodiment of the present disclosure the illumination unit 23 is placed on the side opposite the display 20, and the surface of the display 20 is given a diffuse reflection treatment, for example by applying a transparent diffuse reflection film to form the diffuse reflection layer 26; on this basis, the light emitted by the illumination unit 23 can be scattered without affecting the transmission of the display light. When the control unit 22 triggers the illumination unit 23 to light up, the light emitted by the illumination unit 23 is scattered by the diffuse reflection layer 26 on the surface of the display 20 and reflected by the optical assembly 30, then converges at the position of the guide picture to form a uniform illumination plane that lights the ridges of the target object, yielding a high-contrast target object image.
  • In a second arrangement, as shown in FIG. 18, the illumination surface of the illumination unit 23 faces the air target area 11. That is, the illumination unit 23 is placed on the side opposite the display 20, and no diffuse reflection treatment of the display surface is needed; since the optical assembly 30 transmits light, the light source of the illumination unit 23 can pass directly through the optical assembly 30 to illuminate the target object.
  • In some embodiments, the illumination unit 23 includes a ring-shaped or circular light source, or it includes a plurality of light sources whose illumination surfaces face the air target area 11, arranged at preset angular intervals. In the arrangement of illumination units 23 shown in FIG. 18, the number of illumination units 23 may be one or more, which is not limited. The positions of multiple illumination units 23 may be arranged to illuminate different parts of at least one target object from different angles, such as the left side, right side, and front of a fingerprint, and the illumination areas of the illumination units 23 may overlap; this is not limited.
  • The form of the light source provided by the illumination unit 23 is not limited: it may be the display 20's own light source, an internal light source of the device, an external light source, or the like.
  • The light source can be a visible-light source, preferably a blue LED source with a wavelength of 450 nm to 470 nm, under which the target object yields a high-contrast image. The light source can also be an infrared-band source, to avoid interfering with the visible display of the guide picture.
  • The embodiment of the present disclosure preferably uses a ring light source to obtain a clear target object image, and optical components such as lenses or light-homogenizing plates can be added in front of the light source to improve the lighting effect.
  • In some embodiments, as shown in FIG. 19, the illumination unit 23 is disposed on the display 20; for example, the illumination unit 23 includes a ring light source arranged around the display 20, or the illumination unit 23 is integrated with the backlight assembly of the display 20.
  • For example, the illumination unit 23 uses a ring-shaped LED light source surrounding the display 20; when the illumination unit 23 is lit, the light emitted by the ring-shaped LED source is reflected by the optical assembly 30 to the position of the guide picture.
  • Alternatively, the illumination unit 23 can be integrated directly with the backlight assembly of the display 20; for instance, an LCD display can be controlled to let all the light pass, so that the light emitted by the illumination unit 23 passes through the LCD display and, after being reflected by the optical assembly 30, illuminates the position of the guide picture.
  • In other embodiments, the illumination unit 23 may be arranged on the imaging side of the optical assembly 30, with the illumination surface of the illumination unit 23 facing the air target area 11. That is, the illumination unit 23 is arranged above the optical assembly 30, on the same side of the optical assembly 30 as the guide picture, as shown in FIG. 16. In this arrangement the placement of the illumination units 23 is freer; for example, a ring-shaped blue LED source can evenly surround the plurality of image acquisition units 21.
  • For example, three image acquisition units 21 and three illumination units 23 are arranged synchronously to illuminate and photograph different parts of the target object at different angles, with the three image acquisition units 21 placed in the same plane at an included angle of 45°, so as to ensure that the collected images completely cover the effective area of the target object. The illumination unit 23 can also be separated from the image acquisition unit 21 for independent lighting; the number may be one or more, preferably a ring LED source, to effectively cover different parts of the target object.
  • With the biometric acquisition and identification system 1000, by combining interactive aerial imaging technology with non-contact acquisition and identification technology, and based on the fixed location where the imaging subsystem 100 displays the guide picture in the air, the three-dimensional space covering the guide picture, that is, the air target area, serves as the reference plane for image acquisition by the acquisition and identification subsystem 200. When the user touches the guide picture, the detection module 120 sends an acquisition trigger signal that triggers the image acquisition unit 21 to capture the user's biometric information, so no additional restraining device is needed to guide the user's operation and the user does not touch the device itself during operation, making non-contact biometric acquisition and identification more convenient, safe, and efficient.
  • It should be noted that, provided the aperture of the image acquisition unit 21 keeps the depth of field covering the position of the real image, the aperture can be set as large as practical according to the actual situation, so as to increase the amount of scattered light from the target object entering during acquisition; this yields a clearer target object image and also reduces the required brightness of the light source of the illumination unit 23. A back-of-envelope depth-of-field check is sketched below.
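As a back-of-envelope check (standard thin-lens depth-of-field formulas, not part of the disclosure), one can verify that a candidate f-number still brackets the floating-image distance; all numeric values below are illustrative assumptions.

```python
# Depth-of-field sketch: does the chosen aperture still cover the real image?
def depth_of_field(f_mm, n_stop, subject_mm, coc_mm=0.015):
    hyperfocal = f_mm ** 2 / (n_stop * coc_mm) + f_mm
    near = hyperfocal * subject_mm / (hyperfocal + (subject_mm - f_mm))
    far = (hyperfocal * subject_mm / (hyperfocal - (subject_mm - f_mm))
           if subject_mm < hyperfocal else float("inf"))
    return near, far

near, far = depth_of_field(f_mm=16, n_stop=2.8, subject_mm=300)
print(f"depth of field: {near:.0f} mm to {far:.0f} mm")  # must bracket 300 mm here
```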
  • the embodiment of the second aspect of the present disclosure provides a terminal device.
  • the terminal device 2000 in the embodiment of the present disclosure includes a device body 300 and the biometric feature collection and identification system 1000 provided by the above embodiments.
  • the biometric feature collection and identification system 1000 is provided on the device body 300 .
  • With the terminal device 2000, by using the biometric acquisition and identification system 1000 provided by the above embodiments to collect and identify the user's biometric information, the risk of the user touching the device during operation is avoided, and no additional restraining device is needed to guide the user's operation, making non-contact fingerprint collection safer and more efficient.
  • the embodiment of the third aspect of the present disclosure provides a method for collecting and identifying biological features. As shown in FIG. 21 , the method of the embodiment of the present disclosure includes at least steps S1-S4.
  • Step S1: provide an aerial target area.
  • Specifically, the embodiments of the present disclosure combine interactive aerial imaging technology with non-contact biometric identification technology to complete the aerial fingerprint acquisition and identification process efficiently. The method of the embodiment of the present disclosure can form a floating real image at a certain position in the air, and since the position of the floating real image is relatively fixed, the three-dimensional space covered by the floating real image can serve as the air target area.
  • Step S2: image the guide picture for biometric acquisition and identification at the air target area.
  • Specifically, the display is controlled to show a guide picture for biometric acquisition and identification, and the guide picture is imaged at the air target area through an optical assembly. The guide picture can be understood as a floating real image used to guide the user's operation; by adopting interactive aerial imaging technology, the guide picture for biometric acquisition and identification can be imaged and displayed in the air target area, so the user can be guided without any additional restraining mechanism, avoiding the risk of the user coming into contact with the device body.
  • Step S3: upon detecting that a target object exists in the air target area, that the target object interacts with the guide picture, and that the posture of the target object conforms to the guidance posture in the guide picture, collect image information of the target object in the air target area.
  • In some embodiments, the backlight assembly of the display is controlled to emit illumination light in a preset mode while the image information of the target object in the air target area is collected. By controlling the backlight assembly of the display to generate illumination light in a preset mode to illuminate the target object, the contrast and clarity of the ridges in the target object image can be enhanced and the quality of the captured image improved.
  • Specifically, the user can operate according to the guide picture so that the image capture area coincides with the air target area, and the image information of the target object in the air target area is then collected. For example, during fingerprint acquisition and identification, the user places the palm at the position of the guide picture; when it is detected that the user's palm touches the floating real image area with the correct position and direction, the acquisition action is triggered to obtain the user's fingerprint information.
  • During image acquisition, the placement posture of the target object can be fitted accurately from the medium-free guided floating real image; at the same time, the contour of the target object can be searched quickly and its position fitted, so that the center of the target object in the image is extracted accurately and the main features are extracted within a suitable range around that center. This reduces the influence of scale, translation, and rotation on the image during acquisition, the unreliability introduced by the image algorithms, and the potential algorithmic errors of biometric feature extraction and matching.
  • In an embodiment, the imaging mode of the display may include RGB light-emitting diodes, LCOS devices, OLED arrays, projection, laser, laser diodes, or any other suitable display or stereoscopic display, with a brightness of not less than 500 cd/m²; the display itself can therefore serve as the illumination unit. For example, when the user operates at the air target area, the detection module senses the presence of the target object and sends an acquisition trigger signal to the main control unit. The main control unit sends a control command to the display, whose backlight assembly emits a single high-brightness blue strobe flash; after passing through the optical assembly, the flash converges at the position of the guide picture, forming uniform diffuse reflection on the surface of the target object. The control unit synchronously controls the image acquisition unit to immediately photograph the shadows of the target object's ridge and valley lines, so that the target object image is collected and transmitted to the image processing module, which performs biometric processing on it, either storing the processed biometric information to complete feature collection, or comparing it with the biometric information stored in the image storage module to identify the user, thereby achieving non-contact collection and identification of the user's biometric features.
  • Step S4: perform biometric processing according to the image information, and store the processed biometric information, or compare the processed biometric information with the stored biometric information to identify the user.
  • At this point, the collection of the user's biometric features is complete. When the user's biometric information needs to be identified, upon detecting direct interaction between the user and the guide picture, the user's biometric features are imaged, and the captured image information is processed and compared with the stored biometric information to verify whether the user's identity passes, achieving non-contact collection and identification of the user's biometric features.
  • According to the biometric acquisition and identification method of the embodiments of the present disclosure, by imaging the guide picture for biometric acquisition and identification at the air target area, that is, using the air target area as the reference plane for user operation, the user can operate according to the guide picture presented there; when a posture conforming to the guidance posture in the guide picture is detected, the image information of the target object in the air target area is collected, and biometric processing is performed according to that image information to store the processed biometric information or identify the user, achieving non-contact collection and identification of the user's biometric features. This method makes the operation of collecting image information more convenient and intuitive, requires no additional restraining device to guide the user's operation, and avoids the risk of the user touching the device body during operation, making non-contact identification more natural and safe.
  • In some embodiments, performing biometric processing according to the image information includes at least steps S5 to S8 of the method of the embodiment of the present disclosure, as shown in FIG. 22.
  • Step S5: acquire the biometric region of interest in the image information.
  • the biometric area of interest can be understood as a biometric image area selected from the image information. By delineating the biometric area of interest for further processing, processing time can be reduced and accuracy can be increased.
  • In some embodiments, the non-biometric background in the image information is first eliminated; for example, the biometric features can be extracted from the image information through a color space, where the color space may be the HSV model (Hue, Saturation, Value), the YCbCr model (Y is the luma component, Cb the blue-difference chroma component, Cr the red-difference chroma component), or the like.
  • For example, typical skin values in HSV are 26 ≤ H ≤ 34, 43 ≤ S ≤ 255, and 46 ≤ V ≤ 255, where H is hue, S is saturation, and V is value.
  • Thus the non-biometric background can be eliminated through the HSV color space. Taking fingerprints as an example, using the three parameters H, S, and V of the image information, the parts of the image that satisfy the finger color requirements are retained and the background is removed, completing the extraction of the fingerprint features.
  • Further, contour detection can be used, and contours with smaller areas can be culled by computing the area inside each closed contour. A contour can be understood as a curve connecting consecutive points of the same color or gray level. Before contour detection, thresholding can be performed or the Canny edge detection operator applied for boundary detection, making the contour detection more accurate; a sketch follows.
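A minimal sketch of this region-of-interest extraction, assuming OpenCV and the HSV skin range quoted above; the 500-pixel area threshold and the Canny thresholds are assumptions for the example.

```python
# Sketch: HSV skin thresholding, small-contour culling, Canny boundary check.
import cv2
import numpy as np

img = cv2.imread("hand.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (26, 43, 46), (34, 255, 255))   # skin range from the text

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
big = [c for c in contours if cv2.contourArea(c) > 500.0]  # cull small-area contours
clean = np.zeros_like(mask)
cv2.drawContours(clean, big, -1, 255, thickness=cv2.FILLED)

roi = cv2.bitwise_and(img, img, mask=clean)   # biometric region of interest
edges = cv2.Canny(clean, 50, 150)             # sharper boundary, per the text
```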
  • In other embodiments, the biometric region of interest is extracted through depth information. For example, the optical sensor periodically detects the user's interactive operations, including the interaction position and the direction of the target object, so the collected depth of the target object is a fixed value; by retaining only the image information at the required depth, the biometric region of interest can be obtained.
  • For example, the image information obtained by a depth camera has four channels: the RGB color channels and the depth channel D, that is, image information combined into RGB-D. The RGB values of pixels whose depth meets the requirement are retained in the image information as the biometric region of interest, and pixels that do not meet the required depth are set to black, as sketched below.
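A minimal sketch of the depth-gated selection, assuming the RGB-D frame is a NumPy array and that the fixed interaction depth falls in an assumed 295 to 310 mm band:

```python
# Sketch: keep RGB only where depth matches the guide-picture plane.
import numpy as np

def depth_roi(rgbd, z_min=295.0, z_max=310.0):
    """rgbd: H x W x 4 array with channels R, G, B, D (depth in mm, assumed)."""
    rgb, depth = rgbd[..., :3].copy(), rgbd[..., 3]
    rgb[(depth < z_min) | (depth > z_max)] = 0   # off-plane pixels set to black
    return rgb
```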
  • Step S6: preprocess the biometric region of interest to obtain a preprocessed image.
  • Specifically, preprocessing is performed on the biometric region of interest, including operations such as grayscale conversion, image enhancement, and binarization and thinning, to obtain a preprocessed image.
  • In grayscale conversion, the color image is converted into a grayscale image, that is, the three RGB channels are reduced to one channel, which lowers the computation load, increases the execution speed of the system, and reduces processing delay, ensuring real-time performance. Graying an image makes the R, G, and B color components of each pixel equal; since color values range over [0, 255], there are only 256 gray levels, so a grayscale image can represent 256 shades of gray. The grayscale conversion may use the component method, the maximum method, the average method, the weighted average method, or the like, which is not limited.
  • Image enhancement makes the ridge lines of the target object more distinct and the spacing between ridge lines clearer. The enhancement can use Gabor feature extraction: Gabor is a linear filter used for edge extraction whose frequency and orientation representations are similar to the human visual system, providing good orientation and scale selectivity for image enhancement; Gabor is also insensitive to illumination changes and well suited to texture analysis.
  • Further, the enhanced grayscale image can be binarized and thinned. Binarization maps the 0 to 255 values of each channel in the grayscale image to 0 or 1: a threshold range is set, pixel values within the range become 1 (for example the white part) and pixel values outside it become 0 (for example the black part), distinguishing the biometric region of interest to be identified; a 2 x 2 rectangle is then used as a template to thin the binarized image. A sketch of this preprocessing chain follows.
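A minimal sketch of this preprocessing chain with OpenCV and scikit-image; the single-orientation Gabor kernel and its parameters are simplifying assumptions (a fuller pipeline would combine several orientations).

```python
# Sketch: grayscale -> Gabor enhancement -> Otsu binarization -> thinning.
import cv2
import numpy as np
from skimage.morphology import skeletonize

img = cv2.imread("roi.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # weighted average of R, G, B

kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=0.0,
                            lambd=10.0, gamma=0.5, psi=0.0)
enhanced = cv2.filter2D(gray, -1, kernel)             # one orientation only (assumed)

_, binary = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
thin = skeletonize(binary > 0).astype(np.uint8) * 255  # one-pixel-wide ridges
```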
  • Step S7: extract feature points in the preprocessed image.
  • Specifically, feature point extraction searches the preprocessed image for feature points that can describe the biometric feature in the biometric region of interest, for example the feature points of a fingerprint. The feature points are distributed along the ridge lines and include ridge midpoints, endings, bifurcations, crossings, and the like. For example, the SIFT algorithm can be used to extract SIFT descriptors of the biometric region of interest, and a sliding window can be used to characterize the ridges, for example scanning the entire image with a 3 x 3 window and analyzing the values, positions, and number of pixels in the window to determine whether there is a ridge ending or a bifurcation point, as sketched below.
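For illustration, SIFT descriptors and a simple 3 x 3 neighbor count over the thinned image could be combined as below; running SIFT directly on the thinned image and the crossing-number simplification are assumptions of this sketch.

```python
# Sketch: SIFT descriptors plus a 3x3 scan for ridge endings / bifurcations.
import cv2
import numpy as np

thin = cv2.imread("thinned.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(thin, None)

def minutiae(skeleton):
    """skeleton: binary image with ridge pixels > 0."""
    s = (skeleton > 0).astype(np.uint8)
    ends, forks = [], []
    for y in range(1, s.shape[0] - 1):
        for x in range(1, s.shape[1] - 1):
            if s[y, x]:
                n = int(s[y - 1:y + 2, x - 1:x + 2].sum()) - 1  # ridge neighbors
                if n == 1:
                    ends.append((x, y))      # ridge ending
                elif n >= 3:
                    forks.append((x, y))     # bifurcation point
    return ends, forks
```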
  • Step S8: perform similarity matching on the feature points to determine the target biometric feature.
  • Specifically, after the above preprocessing and feature extraction operations on the image, the embodiment of the present disclosure records the feature data of the feature points and compares it with another set of feature data, determines the similarity through an algorithm, and finally judges the degree to which the biometric features match. For example, the feature points extracted by SIFT from the preprocessed image carry orientation, scale, and position information and are invariant to translation, scaling, and rotation; therefore, by comparing the similarity of the feature points of the preprocessed image and of the stored image of the target object, it can be judged whether they belong to the same person, as sketched below.
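A minimal matching sketch using brute-force SIFT matching with Lowe's ratio test; the score definition and the 0.25 acceptance threshold are illustrative assumptions, not the disclosed criterion.

```python
# Sketch: similarity of two SIFT descriptor sets via the ratio test.
import cv2

def match_score(desc_probe, desc_stored, ratio=0.75):
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = 0
    for pair in matcher.knnMatch(desc_probe, desc_stored, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good / max(len(desc_probe), 1)   # fraction of reliable matches

# Usage (descriptors from the previous sketch): same person if score > 0.25.
# same_person = match_score(descriptors_a, descriptors_b) > 0.25
```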
  • In some embodiments, multiple parts of the target object are cropped into separate regional images by algorithms such as the centroid-distance method or curvature analysis, so that feature extraction and matching are performed on each part separately.
  • In some embodiments, before acquiring the biometric region of interest in the image information, performing biometric processing according to the image information further includes: obtaining a three-dimensional biometric image of the target object according to the image information of the target object in the air target area collected from different directions, and unrolling the three-dimensional biometric image into an equivalent two-dimensional biometric image, so as to obtain biometric information compatible with the large databases of target objects currently acquired and archived by other means.
  • In some embodiments, the method of the embodiment of the present disclosure further includes: upon detecting that the posture of the target object does not match the guidance posture in the guide picture, sending guidance prompt information, controlling the display to show the guidance prompt information, and converging the light of the guidance prompt information shown on the display to form an image at the air target area. That is, when image information collection fails, the failure reason is analyzed, such as a wrong palm orientation, too-fast movement, or a position offset; guidance prompt information is generated and imaged at the air target area to guide the user's action so that the image information is collected correctly.
  • In some embodiments, the method of the embodiment of the present disclosure further includes: detecting the interaction between the target object and the guide picture in the air target area, providing a biometric interaction picture according to that interaction, and imaging the biometric interaction picture at the air target area.
  • For example, by detecting the user's interactive actions, including operations such as clicking and sliding, the user's specific operation is determined according to the correspondence between the interactive action and an internal instruction set, such as selecting the fingerprint enrollment mode or viewing fingerprint information, and the corresponding biometric interaction picture, such as a UI operation interface with relevant control buttons and settings, is displayed at the air target area.
  • Embodiments of a fourth aspect of the present disclosure provide a storage medium on which a computer program is stored, wherein, when the computer program is executed by a processor, the method for collecting and identifying biometric features of the foregoing embodiments is implemented.
  • Any description of a process or method in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process; and the scope of the preferred embodiments of the present disclosure includes alternative implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present disclosure belong.
  • a "computer-readable medium” can be any device that can contain, store, communicate, propagate, or transport the program for use by or in conjunction with an instruction execution system, apparatus, or apparatus.
  • More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection with one or more wires (an electronic device), a portable computer diskette (a magnetic device), random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM).
  • The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the paper or other medium can, for example, be optically scanned and then edited, interpreted, or otherwise processed in a suitable manner when necessary to obtain the program electronically, which is then stored in computer memory.
  • portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof.
  • Various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one of the following technologies known in the art, or a combination thereof, may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules. If the integrated modules are implemented in the form of software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
  • the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, and the like.


Abstract

A biometric acquisition and identification system (1000) is provided. The biometric acquisition and identification system (1000) includes an imaging subsystem (100) and an acquisition and identification subsystem (200). The imaging subsystem (100) includes: an imaging module (110) that images and displays a guide picture for biometric acquisition and identification at an air target area (11); and a detection module (120) that sends an acquisition trigger signal upon detecting that a target object exists in the air target area (11) and that its posture conforms to the guidance posture in the guide picture. The acquisition and identification subsystem (200) includes: an image acquisition module (210) that, in response to the acquisition trigger signal, collects image information of the target object in the air target area (11); an image storage module (230) that stores biometric information; and an image processing module (220) connected to the image acquisition module (210) that performs biometric processing according to the image information. An identification method and a terminal device (2000) are also disclosed.

Description

Biometric acquisition and identification system and method, and terminal device

Technical Field

The present disclosure relates to the technical field of imaging and identification, and in particular to a biometric acquisition and identification system, a terminal device, and a biometric acquisition and identification method.

Background Art

In the related art, fingerprint identification systems use contact sensors (for example, contact optical sensors or contact capacitive sensors) to collect fingerprint images, and perform fingerprint matching with minutiae-based matching algorithms. Minutiae-based matching algorithms are highly sensitive to fingerprint image quality; however, acquisition devices based on contact sensors cannot guarantee image quality and suffer from drawbacks such as a small captured fingerprint area, low resolution, and insufficient feature points. In addition, because this approach relies on physical contact between the finger and the fingerprint sensing device, the requirement to place the finger on a scanner raises users' hygiene concerns.
Summary of the Invention

The present disclosure aims to solve at least one of the technical problems existing in the prior art. To this end, one object of the present disclosure is to propose a biometric acquisition and identification system that is more convenient to operate and avoids the risk of the user touching the device during operation.

A second object of the present disclosure is to propose a terminal device.

A third object of the present disclosure is to propose a biometric acquisition and identification method.

To solve the above problems, the biometric acquisition and identification system of the embodiment of the first aspect of the present disclosure includes an imaging subsystem and an acquisition and identification subsystem. The imaging subsystem includes: an imaging module for imaging and displaying a guide picture for biometric acquisition and identification at an air target area; and a detection module for sending an acquisition trigger signal upon detecting that a target object exists in the air target area, that the target object interacts with the guide picture, and that the posture of the target object conforms to the guidance posture in the guide picture. The acquisition and identification subsystem includes: an image acquisition module for collecting, in response to the acquisition trigger signal, image information of the target object in the air target area; an image storage module for storing biometric information; and an image processing module, connected to the image acquisition module and the image storage module, for performing biometric processing according to the image information and storing the processed biometric information in the image storage module, or comparing the processed biometric information with the biometric information stored in the image storage module to identify the user.

According to the biometric acquisition and identification system of the embodiments of the present disclosure, the imaging module images the guide picture for biometric acquisition and identification at the air target area, that is, the air target area serves as the reference plane for user operation, so the user can operate according to the guide picture presented there. When the detection module detects that the posture of the target object conforms to the guidance posture in the guide picture, it sends an acquisition trigger signal to the image acquisition module; the image acquisition module captures images of the target object in the air target area, and the image processing module performs biometric processing according to the image information to store or identify the user's biometric features, achieving non-contact collection and identification of the user's biometric features. In the embodiments of the present disclosure, the user merely touches the guide picture at the air target area to trigger the image acquisition module to collect and identify image information; no additional restraining device is needed to guide the user's operation, and the device body is not touched during acquisition and identification, making non-contact biometric acquisition and identification safer and more efficient.
In some embodiments, the imaging module includes: a housing formed with a display window and an accommodating cavity inside; a display disposed in the accommodating cavity for displaying the guide picture for biometric acquisition and identification; an optical assembly disposed in the accommodating cavity for converging the light of the guide picture shown on the display to form an image at the air target area, the display being disposed on the light-source side of the optical assembly and the display window on the imaging side of the optical assembly; and a main control unit disposed in the accommodating cavity for controlling the display.

In some embodiments, the imaging module further includes: a data processing module connected to the main control unit, used to issue guidance prompt information upon detecting that the posture of the target object does not match the guidance posture in the guide picture; the main control unit controls the display to show the guidance prompt information, and the optical assembly converges the light of the guidance prompt information shown on the display to form an image at the air target area.

In some embodiments, the image acquisition module includes: at least one image acquisition unit for collecting image information of the target object in the air target area; and a control unit, connected to each image acquisition unit, for controlling the image acquisition unit to start in response to the acquisition trigger signal.

In some embodiments, the image acquisition unit is disposed on the imaging side of the optical assembly, and the optical axis of the image acquisition unit forms a preset angle with the normal of the imaging plane of the air target area.

In some embodiments, the image acquisition unit is disposed on the imaging side of the optical assembly, and a beam splitter is disposed on the imaging-side surface of the optical assembly; the beam splitter is used to reflect the image information of the target object in the air target area so as to pass the image information to the image acquisition unit.

In some embodiments, the optical axis of the image acquisition unit is perpendicular to the normal of the imaging plane of the air target area, and the beam splitter is transflective for visible light; or the optical axis of the image acquisition unit is perpendicular to the normal of the imaging plane of the air target area, the image acquisition unit is an infrared image acquisition unit, and the beam splitter transmits visible light and reflects infrared light.

In some embodiments, the image acquisition module further includes: at least one total reflection unit for totally reflecting the image information of the target object in the air target area reflected by the beam splitter, so as to pass the image information to the image acquisition unit.

In some embodiments, the image acquisition unit is disposed on the light-source side of the optical assembly, and the optical axis of the image acquisition unit forms a preset angle with the plane of the air target area.

In some embodiments, a through hole is provided in the optical assembly at the position where the optical axis of the image acquisition unit passes.

In some embodiments, the image acquisition module further includes: an illumination unit connected to the control unit and used to activate illumination in response to the acquisition trigger signal.

In some embodiments, the illumination unit is the backlight assembly of the display, and the main control unit is further configured to control the backlight assembly of the display to emit illumination light in a preset mode in response to the acquisition trigger signal; or the illumination unit is disposed on the imaging side of the optical assembly with its illumination surface facing the air target area.

In some embodiments, the illumination unit is disposed on the light-source side of the optical assembly, opposite the display.

In some embodiments, the surface of the display is provided with a diffuse reflection layer, and the illumination surface of the illumination unit faces the surface of the display; or the illumination surface of the illumination unit faces the air target area.
An embodiment of the second aspect of the present disclosure provides a terminal device, including: a device body; and the biometric acquisition and identification system of the above embodiments, the biometric acquisition and identification system being disposed on the device body.

According to the terminal device of the embodiments of the present disclosure, by using the biometric acquisition and identification system provided by the above embodiments to collect and identify the user's biometric information, the risk of the user touching the device during operation is avoided, and no additional restraining device is needed to guide the user's operation, making non-contact fingerprint collection safer and more efficient.

An embodiment of the third aspect of the present disclosure provides a biometric acquisition and identification method, including: providing an air target area; imaging a guide picture for biometric acquisition and identification at the air target area; upon detecting that a target object exists in the air target area, that the target object interacts with the guide picture, and that the posture of the target object conforms to the guidance posture in the guide picture, collecting image information of the target object in the air target area; and performing biometric processing according to the image information and storing the processed biometric information, or comparing the processed biometric information with stored biometric information to identify the user.

According to the biometric acquisition and identification method of the embodiments of the present disclosure, by imaging the guide picture for biometric acquisition and identification at the air target area, that is, using the air target area as the reference plane for user operation, the user can operate according to the guide picture presented there; upon detecting conformity with the guidance posture in the guide picture, the image information of the target object in the air target area is collected and biometric processing is performed according to it, to store the processed biometric information or identify the user, achieving non-contact collection and identification of the user's biometric features. Moreover, in the embodiments of the present disclosure, the operation of collecting image information once the user touches the guide picture at the air target area and conforms to its guidance posture is more convenient and intuitive; no additional restraining device is needed to guide the user's operation, the risk of the user touching the device body is avoided, and non-contact identification becomes safer and more efficient.

In some embodiments, performing biometric processing according to the image information includes: acquiring a biometric region of interest in the image information; preprocessing the biometric region of interest to obtain a preprocessed image; extracting feature points in the preprocessed image; and performing similarity matching on the feature points to determine the target biometric feature.

In some embodiments, before acquiring the biometric region of interest in the image information, the method further includes: obtaining a three-dimensional biometric image of the target object according to image information of the target object in the air target area collected from different directions; and unrolling the three-dimensional biometric image into an equivalent two-dimensional biometric image.

In some embodiments, after performing similarity matching on the feature points, the method further includes: upon detecting that the posture of the target object does not match the guidance posture in the guide picture, sending guidance prompt information; and controlling the display to show the guidance prompt information, converging the light of the guidance prompt information shown on the display to form an image at the air target area.

In some embodiments, the biometric acquisition and identification method further includes: detecting the interaction between the target object and the guide picture in the air target area; providing a biometric interaction picture according to the interaction between the target object and the guide picture; and imaging the biometric interaction picture at the air target area.

Additional aspects and advantages of the present disclosure will be given in part in the following description, and in part will become apparent from the following description or be learned through practice of the present disclosure.
Brief Description of the Drawings

The above and/or additional aspects and advantages of the present disclosure will become apparent and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a structural block diagram of a biometric acquisition and identification system according to an embodiment of the present disclosure;

FIG. 2 is a schematic structural diagram of a biometric acquisition and identification system according to an embodiment of the present disclosure;

FIG. 3 is a schematic structural diagram of human-computer interaction according to an embodiment of the present disclosure;

FIG. 4 is a schematic structural diagram of an optical assembly according to an embodiment of the present disclosure;

FIG. 5 is a schematic diagram of a first optical waveguide array and a second optical waveguide array according to an embodiment of the present disclosure;

FIG. 6 is a schematic front view of the optical assembly along the thickness direction according to an embodiment of the present disclosure;

FIG. 7 is a partial structural diagram of the first and second optical waveguide arrays according to an embodiment of the present disclosure;

FIG. 8 is a schematic diagram of the optical path of the optical assembly according to an embodiment of the present disclosure;

FIG. 9 is a schematic structural diagram of a biometric acquisition and identification system according to another embodiment of the present disclosure;

FIG. 10 is a schematic diagram of image information collection by three image acquisition units according to an embodiment of the present disclosure;

FIG. 11 is a schematic diagram of a feature extraction mode of three image acquisition units according to an embodiment of the present disclosure;

FIG. 12 is a schematic diagram of a feature extraction mode of three image acquisition units according to another embodiment of the present disclosure;

FIG. 13 is a schematic diagram in which the optical axis of the image acquisition unit forms a preset angle with the normal of the imaging plane of the air target area according to an embodiment of the present disclosure;

FIG. 14 is a schematic diagram in which the optical axis of the image acquisition unit is perpendicular to the normal of the imaging plane of the air target area according to an embodiment of the present disclosure;

FIG. 15 is a schematic diagram of image information collection using a total reflection unit according to an embodiment of the present disclosure;

FIG. 16 is a schematic diagram of the image acquisition unit disposed on the light-source side of the optical assembly according to an embodiment of the present disclosure;

FIG. 17 is a schematic diagram of the illumination unit diffusely reflecting off the display surface according to an embodiment of the present disclosure;

FIG. 18 is a schematic diagram of the illumination surface of the illumination unit facing the air target area according to an embodiment of the present disclosure;

FIG. 19 is a schematic diagram of the illumination unit disposed on the display according to an embodiment of the present disclosure;

FIG. 20 is a structural block diagram of a terminal device according to an embodiment of the present disclosure;

FIG. 21 is a flowchart of a biometric acquisition and identification method according to an embodiment of the present disclosure;

FIG. 22 is a flowchart of performing biometric processing according to image information according to an embodiment of the present disclosure.
Reference numerals:

terminal device 2000; biometric acquisition and identification system 1000; device body 300; imaging subsystem 100; acquisition and identification subsystem 200; imaging module 110; detection module 120; image acquisition module 210; image processing module 220; image storage module 230; air target area 11; data processing module 111; housing 10; display 20; optical assembly 30; main control unit 40; display window 1; accommodating cavity 2; image acquisition unit 21; control unit 22; beam splitter 31; illumination unit 23; filter member 24; total reflection unit 25; diffuse reflection layer 26; first optical waveguide array 6; second optical waveguide array 7; transparent substrate 8; reflection unit 9; reflection film 12; adhesive 13.
Detailed Description

Embodiments of the present disclosure are described in detail below; the embodiments described with reference to the drawings are exemplary.

To solve the above problems, a biometric acquisition and identification system provided according to the embodiment of the first aspect of the present disclosure is described below with reference to the drawings. This biometric acquisition and identification system is more convenient to operate and avoids the risk of the user touching the device during operation.

FIG. 1 is a structural block diagram of the biometric acquisition and identification system 1000 of the embodiment of the present disclosure. The biometric acquisition and identification system 1000 of the embodiment of the present disclosure includes an imaging subsystem 100 and an acquisition and identification subsystem 200. The biometric features referred to in the embodiments of the present disclosure may be physiological features common to the human body, such as fingerprints, faces, palm prints, and irises.
The imaging subsystem 100 includes an imaging module 110 and a detection module 120. The imaging module 110 is used to image and display a guide picture for biometric acquisition and identification at the air target area; the detection module 120 is used to send an acquisition trigger signal upon detecting that a target object exists in the air target area, that the target object interacts with the guide picture, and that the posture of the target object conforms to the guidance posture in the guide picture.

The acquisition and identification subsystem 200 includes an image acquisition module 210, an image processing module 220, and an image storage module 230. The image acquisition module 210 collects, in response to the acquisition trigger signal, the image information of the target object in the air target area, and its acquisition region covers the three-dimensional space in which the air target area lies; the image storage module 230 stores biometric information; the image processing module 220 is connected to the image acquisition module 210 and the image storage module 230 and performs biometric processing according to the collected image information, storing the processed biometric information in the image storage module 230, or comparing it with the biometric information stored in the image storage module 230 to identify the user.

In the embodiment of the present disclosure, the imaging module 110 images the guide picture for biometric acquisition and identification at the air target area; when the detection module 120 detects that the posture of the target object conforms to the guidance posture in the guide picture, it sends an acquisition trigger signal to the image acquisition module 210, which collects the image information of the target object in the air target area in response; the image processing module 220 performs biometric processing according to the image information, and the image storage module 230 stores the processed biometric information. That is, the biometric acquisition and identification system 1000 of the embodiment of the present disclosure performs biometric acquisition and identification by combining interactive aerial imaging technology with non-contact biometric acquisition and identification technology.

Preferably, the biometric acquisition and identification in this embodiment may be fingerprint acquisition and identification, and the biometric information may be fingerprint information. The position of the guide picture that the imaging subsystem 100 images and displays at the air target area is relatively fixed, so that the user interacts directly with the floating real image; that is, the user can operate according to the guide picture presented at the air target area.

For example, when the user's fingerprint information needs to be collected, the user places the finger at the air target area according to the guide picture; when the finger is placed at the air target area and conforms to the guidance posture in the guide picture, the image acquisition module 210 captures images of the user's fingerprint information, and the image processing module 220 processes the captured image information and stores it in the image storage module 230. At this point, the collection of the user's fingerprint information is complete.

When the user's fingerprint information needs to be identified, the user places the finger at the air target area according to the guide picture; when the finger is placed at the air target area and conforms to the guidance posture in the guide picture, the image acquisition module 210 captures images of the user's fingerprint information, the image processing module 220 processes the captured image information and compares the processed fingerprint information with the fingerprint information stored in the image storage module 230, and the user's identity is determined according to the identification result.

In the present disclosure, touching the air target area triggers the acquisition and identification subsystem 200 to perform image acquisition and identification; no additional restraining device is needed to guide the user's operation, and the user does not touch the device body during acquisition and identification, making fingerprint acquisition and identification safer, more convenient, and more efficient.
Specifically, the imaging module 110 forms a floating real image, that is, the guide picture, at a fixed position in the air, and the three-dimensional space covering the floating real image is the air target area; the imaging module 110 thus displays relevant prompt information at the air target area to guide the current user's actions and complete the collection and identification of the current user's biometric information. Because the user interacts directly with the floating real image, no additional restraining mechanism is needed to guide the operation, which reduces the risk of the user contacting the device body and improves the effectiveness of the non-contact acquisition and identification performed by the biometric acquisition and identification system 1000 of the present disclosure.

The detection module 120 detects the user's operations on the floating real image. Upon detecting that a target object exists in the air target area and that the posture of the target object conforms to the guidance posture in the guide picture, it sends an acquisition trigger signal to the image acquisition module 210; on receiving the signal, the image acquisition module 210 collects the image information of the target object in the air target area, and the image processing module 220 then performs biometric processing according to the image information to achieve collection or identification of the biometric features. Triggering the image acquisition module 210 by touching the air target area makes the operation more convenient and intuitive.

In an embodiment, the detection module 120 may periodically detect the user's interactions with the floating real image; for example, during fingerprint acquisition and identification, the interactions include the interaction position, palm direction, and so on. Upon detecting that the user's hand touches the floating real image area and that the touch position and hand direction correspond to the guide picture, the detection module 120 sends an acquisition trigger signal; on receiving it, the image acquisition module 210 captures images of the user's hand in the floating real image area, and the image processing module 220 processes the hand images to obtain the fingerprint information and compares it with the stored fingerprint information to determine the user's identity.

In an embodiment, the detection module 120 may be an optical sensor, whose sensing modality may include, but is not limited to, near or far infrared, ultrasonic, laser interference, grating, encoder, fiber-optic, or CCD (Charge-Coupled Device) sensing. The sensing region of the detection module 120 is coplanar with the guide picture and contains the three-dimensional space in which the guide picture lies. In practical applications, the most suitable sensing modality can be chosen according to the installation space, viewing angle, and usage environment, so that the user can operate at the air target area in the most comfortable posture, improving the user experience.
The acquisition region of the image acquisition module 210 covers the location of the guide picture, that is, the air target area, which constitutes the biometric acquisition region. Specifically, during biometric acquisition and identification, the display position of the guide picture is relatively fixed; when the detection module 120 detects the user interacting directly with the guide picture, it sends an acquisition trigger signal to trigger the image acquisition module 210 to capture images of the user's biometric features at the position of the guide picture. The image processing module 220 processes the biometric images and stores the processed biometric information in the image storage module 230 to achieve biometric collection, or compares the processed biometric information with the stored biometric information to identify the user, achieving non-contact biometric acquisition and identification.

During image acquisition, the image acquisition module 210 can accurately fit the placement posture of the target object according to the medium-free guided floating real image, quickly search the contour of the target object, and fit its position, so as to accurately extract the center of the target object in the image and extract the main features within a suitable range around that center. This reduces the influence of scale, translation, and rotation on the image during acquisition, the unreliability introduced by image algorithms, and the potential algorithmic errors of biometric feature extraction and matching.

In addition, since the position of the floating real image is fixed, provided the acquisition region of the image acquisition module 210 covers the floating real image position, the aperture of the image acquisition module 210 can be set as large as practical according to the actual situation, increasing the amount of scattered light from the target object entering during acquisition and yielding a clearer target object image.

In some embodiments, the image acquisition module 210 may be used to obtain multiple pieces of the user's biometric information in the air target area, and its acquisition method is not limited to structured light, stereo vision, or time of flight (TOF). For example, fingerprint information can be collected by two high-speed cameras set at a predetermined baseline (camera spacing) distance. Using stereo vision, fingerprint images of different parts of at least one finger of the user's palm are obtained, and fingerprint depth information is derived from at least two disparity maps corresponding to different finger parts; a 3D fingerprint image of the finger surface is then stitched and constructed and unrolled into an equivalent 2D fingerprint image, thereby obtaining fingerprint information compatible with the large fingerprint databases currently acquired and archived by other means, for example fingerprint information obtained by contact methods. It can be understood that, if compatibility with other flat fingerprint information is not a concern, identifying and verifying the user's identity solely from the obtained 3D fingerprint image is also feasible.
The image processing module 220 performs biometric processing on the image information collected by the image acquisition module 210 to complete biometric acquisition and identification. Processing the image information includes extraction of the region of interest, grayscale conversion, image enhancement, binarization and thinning, feature point extraction, and feature point matching. Through a series of image preprocessing and feature extraction operations, the feature data of key points is recorded and stored in the image storage module 230, achieving the purpose of enrolling the user's identity. Alternatively, the feature data of the key points is compared with the feature data already stored in the image storage module 230, the similarity is judged by an algorithm, and the degree of biometric matching is finally determined to decide whether the user's biometric features pass, achieving the purpose of verifying the user's identity.

The image storage module 230 may be a storage device pre-integrated in the system, a cloud server with storage capability connected remotely via Wi-Fi, Bluetooth, or the like, or a removable portable device such as an SD card or a hard disk; this is not limited. Storing the user's biometric information in the image storage module 230 facilitates later retrieval for identifying the user.

According to the biometric acquisition and identification system 1000 of the embodiments of the present disclosure, the imaging module 110 images the guide picture for biometric acquisition and identification at the air target area, that is, the air target area serves as the reference plane for user operation, and the user can operate according to the guide picture presented there; when the detection module 120 detects that the posture of the target object conforms to the guidance posture in the guide picture, it sends an acquisition trigger signal to the image acquisition module 210; the image acquisition module 210 captures images of the target object in the air target area, and the image processing module 220 performs biometric processing according to the captured image information to complete the collection of the user's biometric features or the identification of the user, achieving non-contact biometric acquisition and identification. In the embodiments of the present disclosure, the user merely touches the air target area to trigger the image acquisition module 210 to collect and identify image information; no additional restraining device is needed to guide the user's operation, and the device body is not touched during acquisition and identification, making non-contact biometric acquisition and identification safer and more efficient.
In some embodiments, as shown in FIG. 2, the imaging module 110 of the embodiment of the present disclosure includes a housing 10, a display 20, an optical assembly 30, and a main control unit 40.

As shown in FIG. 2, the housing 10 is formed with a display window 1 and an accommodating cavity 2 inside. The display 20 is disposed in the accommodating cavity 2 and displays the guide picture for biometric acquisition and identification. The optical assembly 30 is disposed in the accommodating cavity 2 and converges the light of the guide picture shown on the display 20 to form an image at the air target area 11; the display 20 is disposed on the light-source side of the optical assembly 30, and the display window 1 is on the imaging side of the optical assembly 30 and lets out the light refracted by the optical assembly 30. Specifically, the optical assembly 30 may be disposed at the display window 1; it refracts the light emitted by the display 20, and the refracted light passes through the display window 1 and converges to form an image at the air target area 11. The main control unit 40 is disposed in the accommodating cavity 2 and controls the display 20.

Specifically, as shown in FIG. 3, the display 20 is placed on one side of the optical assembly 30, namely the light-source side, and is controlled to show the guide picture; the light of the guide picture shown on the display 20 is imaged and displayed at the air target area 11 through the optical assembly 30. The three-dimensional space of the guide picture is the air target area 11. The detection module 120 detects the user's interaction with the guide picture and feeds the detected operation signal back to the main control unit 40; the main control unit 40 triggers the image acquisition module 210 to perform image acquisition, and the image processing module 220 performs biometric processing on the collected image information for collecting biometric information or identifying the user.

In an embodiment, the imaging mode of the display 20 may include RGB (red, green, blue) light-emitting diodes (LED), LCD (Liquid Crystal Display), LCOS (Liquid Crystal on Silicon) devices, OLED (Organic Light-Emitting Diode) arrays, projection, laser, laser diodes, or any other suitable display or stereoscopic display; this is not limited. The display 20 can provide a clear, bright, high-contrast dynamic image source; the main control unit 40 controls the display 20 to show the guide picture, which is converged and imaged by the optical assembly 30 to present a clear floating real image at the air target area, facilitating user operation.

In an embodiment, the brightness of the display 20 may be set to not less than 500 cd/m², reducing the impact of brightness loss along the optical path. Of course, in practice the display brightness can be adjusted according to the ambient light level.

In an embodiment, the display surface of the display 20 can be given a viewing-angle control treatment to reduce ghosting at the air target area 11 and improve picture quality, while also preventing others from peeking, which facilitates wide application to other input devices requiring privacy protection.
The structure of the optical assembly 30 of the embodiment of the present disclosure and the principle by which it achieves imaging are described below.

In some embodiments, the optical assembly 30 may be a flat lens fixed on the housing 10. As shown in FIG. 4, the flat lens includes two transparent substrates 8, and a first optical waveguide array 6 and a second optical waveguide array 7 placed between the two transparent substrates 8. The first optical waveguide array 6 and the second optical waveguide array 7 are closely attached in the same plane and arranged orthogonally. Preferably, the two arrays have the same thickness, facilitating design and production.

Specifically, as shown in FIG. 4, from the display 20 side to the air target area 11 side, the flat lens includes, in order, a first transparent substrate 8, the first optical waveguide array 6, the second optical waveguide array 7, and a second transparent substrate 8.

The first and second transparent substrates 8 each have two optical faces, and the transparent substrate 8 has a transmittance of 90% to 100% for light with wavelengths between 390 nm and 760 nm. The material of the transparent substrate 8 may be at least one of glass, plastic, polymer, and acrylic resin, serving to protect the optical waveguide arrays and filter out stray light. It should be noted that if the strength after closely and orthogonally bonding the first and second optical waveguide arrays 6 and 7 is sufficient, or the installation environment imposes a thickness limit, only one transparent substrate 8 may be provided, or none at all.

The principle by which the optical assembly 30 achieves aerial imaging is as follows. The first and second optical waveguide arrays 6 and 7 consist of a plurality of reflection units 9 of rectangular cross-section, whose lengths vary because they are bounded by the outer dimensions of the arrays. As shown in FIG. 5, the extension direction of the reflection units 9 in the first optical waveguide array 6 is X, that of the reflection units 9 in the second optical waveguide array 7 is Y, and Z is the thickness direction of the arrays. The extension directions of the reflection units 9 in the two arrays (the waveguide-array directions) are mutually perpendicular; that is, viewed along Z (the thickness direction), the first and second optical waveguide arrays 6 and 7 are arranged orthogonally, so that two light beams in orthogonal directions converge at one point and the object and image planes (light-source side and imaging side) are symmetric with respect to the flat lens, producing an equivalent negative refraction phenomenon and achieving aerial imaging.
In some embodiments, as shown in FIG. 6, the first optical waveguide array 6 or the second optical waveguide array 7 consists of a plurality of parallel reflection units 9 arranged obliquely at 45° to the user's viewing angle. Specifically, the first optical waveguide array 6 may consist of rectangular-cross-section reflection units 9 arranged side by side at 45° toward the lower left, and the second optical waveguide array 7 of rectangular-cross-section reflection units 9 arranged side by side at 45° toward the lower right; the arrangement directions of the two groups may be interchanged. For example, the extension direction of the reflection units 9 in the first array is Y and that in the second array is X, with Z the thickness direction; viewed along Z, the two arrays are arranged orthogonally, so that two beams in orthogonal directions converge at one point and the object and image planes (light-source side and imaging side) are symmetric with respect to the flat lens, producing equivalent negative refraction and achieving aerial imaging. The waveguide material has an optical refractive index n1; in some embodiments n1 > 1.4, for example n1 = 1.5, 1.8, or 2.0.

As shown in FIG. 7, in the first and second optical waveguide arrays 6 and 7, each reflection unit 9 has two interfaces with its neighboring reflection units 9, and the interfaces are joined by an adhesive 13 of good light transmittance. Preferably, the adhesive 13 may be a photosensitive or thermosetting adhesive with a thickness T1 satisfying T1 > 0.001 mm, for example T1 = 0.002 mm, T1 = 0.003 mm, or T1 = 0.0015 mm; the specific thickness can be set as needed. Adhesive 13 is also provided between adjacent optical waveguide arrays in the flat lens and between the arrays and the transparent substrates 8, increasing robustness.

In some embodiments, the cross-section of the reflection unit 9 may be rectangular, and a reflection film 12 is provided on one or both side faces along the arrangement direction of the reflection units 9. Specifically, along the arrangement direction of the waveguide array, both sides of each reflection unit 9 are coated with a reflection film 12, whose material may be a metal achieving total reflection, such as aluminum or silver, or another non-metallic compound material. The reflection film 12 prevents light that is not totally reflected from entering adjacent waveguides and forming stray light that degrades the image. Alternatively, a dielectric film may be added on the reflection film 12 of each reflection unit 9 to increase the reflectance.

The cross-sectional width a and length b of a single reflection unit 9 satisfy 0.1 mm ≤ a ≤ 5 mm and 0.1 mm ≤ b ≤ 5 mm, for example a = 2 mm and b = 4 mm, or a = 3 mm and b = 5 mm. For large-screen display, multiple optical waveguide arrays can be spliced to meet large-size requirements. The overall shape of the waveguide array is set according to the application scenario; in this embodiment, the two groups of arrays are rectangular overall, the reflection units 9 at the two diagonal corners are triangular, and the middle reflection units 9 are trapezoidal. The reflection units 9 differ in length: those on the diagonal of the rectangle are the longest, and those at the two ends are the shortest.

In addition, the flat lens may further include an antireflection component and a viewing-angle control component. The antireflection component can increase the overall transmittance of the flat lens, improving the clarity and brightness of the guide picture imaged at the air target area 11. The viewing-angle control component can eliminate ghost images of the guide picture imaged at the air target area 11, reduce the observer's sense of vertigo, prevent observers from peering into the device from other angles, and improve the overall appearance of the device. The antireflection and viewing-angle control components may be combined, or each may be disposed independently between the transparent substrate 8 and a waveguide array, between the two waveguide arrays, or on the outer layer of the transparent substrate 8.
The imaging principle of the flat lens is described below with reference to FIG. 8, as follows.

At the micrometer scale, a pair of mutually orthogonal waveguide-array layers is used to orthogonally decompose an arbitrary light signal. The original signal is projected onto the first optical waveguide array 6; taking the projection point as the origin and the direction perpendicular to the first optical waveguide array 6 as the x-axis, a rectangular coordinate system is established in which the original signal is decomposed into two mutually orthogonal signals: signal X on the x-axis and signal Y on the y-axis. When passing through the first optical waveguide array 6, signal X is totally reflected at the surface of the reflection film 12 with a reflection angle equal to its incidence angle; signal Y, meanwhile, remains parallel to the first optical waveguide array 6, passes through it, and is then totally reflected at the reflection film 12 of the second optical waveguide array 7 with a reflection angle equal to its incidence angle. The reflected light signal composed of the reflected signal Y and signal X is then mirror-symmetric to the original light signal. Light from any direction therefore achieves mirror symmetry through this flat lens, and the divergent light of any light source re-converges at the symmetric position into a floating real image; that is, the guide picture is imaged at the air target area 11. The imaging distance of the floating real image equals the distance from the flat lens to the image source, the display 20 (equidistant imaging), and the floating real image is located in the air without any physical carrier, presenting a real image directly in the air. Hence the image the user sees in space is the image emitted by the display 20.
In the embodiment of the present disclosure, the light emitted by the source of the display 20 undergoes the above process when passing through the flat lens. Specifically, as shown in FIG. 8, the incidence angles of the light on the first optical waveguide array 6 are α1, α2 and α3, and the corresponding reflection angles are β1, β2 and β3, with α1 = β1, α2 = β2 and α3 = β3; after reflection by the first optical waveguide array 6, the incidence angles on the second optical waveguide array 7 are γ1, γ2 and γ3, and the corresponding reflection angles are δ1, δ2 and δ3, with γ1 = δ1, γ2 = δ2 and γ3 = δ3.

Further, the incidence angles after convergence are α1, α2, α3, ..., αn; if the distance between the light source of the display 20 and the flat lens is L, the imaging position of the floating real image is also at distance L from the flat lens, and the viewing angle ε of the floating real image is twice max(α), as summarized below.
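For reference, the relations above can be restated compactly; this summary is editorial and introduces nothing beyond the text:

```latex
% Flat-lens imaging relations (i indexes individual rays).
\begin{aligned}
  \alpha_i &= \beta_i  &&\text{(first optical waveguide array)}\\
  \gamma_i &= \delta_i &&\text{(second optical waveguide array)}\\
  d_{\mathrm{image}} &= d_{\mathrm{source}} = L &&\text{(equidistant imaging)}\\
  \varepsilon &= 2\,\max_i \alpha_i &&\text{(viewing angle of the real image)}
\end{aligned}
```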
It can be understood that if the optical waveguide array is small, the image is visible only at a certain distance from the imaging side of the array; if the array is made larger, a larger imaging distance can be achieved, increasing the field of view.

Preferably, the included angle between the flat lens and the display 20 is set in the range of 45° ± 5°, which makes effective use of the size of the flat lens, improves imaging quality, and reduces ghosting. If there are other requirements for the imaging position, other angles may also be chosen at the cost of some imaging quality. Preferably, the flat lens is sized to display the entire floating real image presented by the display 20; but if only part of the display 20's picture needs to be seen in actual use, the size and position of the flat lens can be adjusted freely according to the actually displayed picture, without limitation.

In addition, the above mainly describes the imaging principle of a flat lens with a two-layer optical waveguide array structure. In other embodiments, if a plurality of cubic columnar reflection units 9 with reflection films 12 on all four side faces are arranged in an array along the X and Y directions within a single waveguide-array layer, that is, the two layers are merged into one, the imaging principle is the same as that of the two-layer structure, and this can also serve as the structure of the flat lens.

In an embodiment, the first and second optical waveguide arrays 6 and 7 have the same thickness, which simplifies the structure of the two arrays, reduces their manufacturing difficulty, improves their production efficiency, and lowers their production cost. Note that "the same thickness" here denotes a relative range rather than absolute identity; that is, for the purpose of production efficiency, a certain thickness difference between the arrays is allowed provided the aerial imaging quality is not affected.

In an embodiment, the main control unit 40 and the detection module 120 may be connected by wire or wirelessly to transmit digital or analog signals, allowing flexible control of the overall device volume and enhancing the electrical stability of the biometric acquisition and identification system 1000.
As shown in FIG. 9, the imaging module 110 of the embodiment of the present disclosure further includes a data processing module 111 connected to the main control unit 40. The data processing module 111 issues guidance prompt information when the detection module 120 detects that the posture of the target object does not match the guidance posture in the guide picture; the main control unit 40 controls the display 20 to show the guidance prompt information, and the optical assembly 30 converges the light of the guidance prompt information shown on the display 20 to form an image at the air target area 11. The user adjusts the interaction posture according to the guidance prompt information, achieving better interaction and completing the collection and identification of the user's biometric information.

In the embodiment of the present disclosure, a light-absorbing layer is provided on the inner wall of the accommodating cavity 2; that is, all parts inside the housing 10 except the display surface of the display 20 are given a black light-absorbing treatment, such as spraying light-absorbing paint or applying a light-absorbing film, to eliminate diffuse reflection of light by the internal components of the housing 10 and improve the display quality of the floating real image.

As shown in FIG. 9, the image acquisition module 210 of the embodiment of the present disclosure includes at least one image acquisition unit 21 and a control unit 22. The image acquisition unit 21 collects image information of the target object in the air target area 11; the control unit 22 is connected to each image acquisition unit 21 and to the main control unit 40. The control unit 22 controls the image acquisition unit 21 to start in response to the acquisition trigger signal; specifically, the image acquisition unit 21 collects image information of the target object in the air target area in response to the acquisition trigger signal.

The image acquisition unit 21 may be one or more high-speed CMOS cameras. As shown in FIG. 10, the image acquisition unit 21 includes three cameras, each of whose acquisition region covers the region of the guide picture. The focal plane of each camera is set at the air target area 11, so that image information of different parts of the target object can be photographed clearly. For example, when the user's palm is at the air target area 11, each camera can clearly photograph fingerprint images of different parts of at least one finger. Moreover, provided the above conditions are met, the image acquisition unit 21 preferably uses fixed-focus, large-aperture cameras, which eliminates the step of focusing on the target object position when shooting and improves the speed, success rate, and reliability of image acquisition and identification; the large aperture also guarantees sufficient light throughput, improving the clarity and brightness of the shots.
需要注意的是,当图像采集单元21单个焦段的采集区域无法覆盖整个浮空实像所在区域,或即使覆盖整个浮空实像所在区域,但对目标物体进行图像采集时,无法一次性获取所有需要的图像信息时,图像采集单元21需要保留必要的调焦功能。同时,由于以空中目标区域11作为基准面,考虑此调焦范围较小的情况,因此图像采集单元21可以获取目标物体的可见光图像,也可以获取目标物体的红外图像。此外,图像采集单元21也可增加相应波段光线的滤光片,以排除环境光的影响。
图像采集单元21在围绕目标物体中心的合适范围内提取主要特征的方式有多种,对此不作限制。下面列举两种特征提取方式,以图像采集单元21包括三个高速CMOS相机为例,具体如下。
如图11所示,以采集识别指纹信息为例,通过设置三个不同方位的高速CMOS相机,来获得不同方位的目标物体的图像,对应采集通道1、采集通道2和采集通道3,并通过图像处理模块220分别对每个采集通道的图像进行特征提取和匹配,将三组匹配的结果通过均值融合算法进行融合,得到最终的比对结果。或者,如图12所示,通过设置三个不同方位的高速CMOS相机,来获得不同方位的目标物体的图像,图像处理模块220根据目标物体不同部位对应的至少两个视差图获得目标物体深度信息,并进行拼接以构建出目标物体表面的3D图像,再展开为等效的2D目标物体图像,从而在2D图像的基础上进行特征提取和特征匹配,得到 最终比对的结果。
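A minimal sketch of the first scheme (per-channel matching followed by mean fusion). The per-channel similarity scores are assumed to come from whatever matcher each acquisition channel runs; the threshold is illustrative.

```python
import numpy as np

def fuse_channel_scores(channel_scores, threshold=0.5):
    """Mean-fusion of per-channel match scores from the three cameras.

    channel_scores: iterable of similarity scores in [0, 1], one per
    acquisition channel (channels 1/2/3 in FIG. 11).
    Returns (fused_score, accepted).
    """
    fused = float(np.mean(channel_scores))
    return fused, fused >= threshold

# Example: three channels report 0.81, 0.74 and 0.90 against the template
print(fuse_channel_scores([0.81, 0.74, 0.90]))   # -> (0.8166..., True)
```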
The biometric acquisition and recognition system 1000 of the disclosed embodiments is described below with reference to FIG. 9, taking the recognition of a user's fingerprint information as an example.
In the initial state, the main control unit 40 makes the display 20 show the guide picture, which the optical assembly 30 images at the aerial target region 11. As shown in FIG. 3, the optical assembly 30, e.g. the flat-panel lens, images the palm-shaped pattern on the display 20 into the air on the other side of the lens, i.e. at the aerial target region 11, to guide the user to perform fingerprint acquisition and recognition in the correct area; meanwhile the detection module 120, e.g. an optical sensor, periodically detects the user's interaction, including interaction position and palm orientation. The user places the palm according to the displayed guide picture. When the detection module 120 detects that the palm touches the aerial target region with correct position and orientation, it sends an acquisition trigger signal to the main control unit 40, which sends a control signal to the control unit 22; the control unit 22 directs the image acquisition unit 21 to start capturing images of the fingerprints of the user's palm and passes the image information to the image processing module 220 for processing and analysis, and for comparison against the internal fingerprint library stored in the image storage module 230, to verify the user's identity. If the image acquisition unit 21 fails to capture the image information, the data processing module 111 analyses the cause of failure, such as wrong palm orientation, moving too fast or positional offset, generates guidance prompt information and sends it to the main control unit 40, which controls the display 20 to show it, guiding the user's palm movement so that the fingerprint information is acquired correctly and the user's identity is recognized.
The detection module 120, e.g. the optical sensor, can also detect other user operations such as tapping and swiping, and pass the interaction information to the main control unit 40. Based on its internal instruction set, the main control unit 40 determines the specific operation, e.g. selecting a fingerprint enrolment mode or viewing fingerprint information, and sends the relevant control buttons, settings and other UI (User Interface) screens to the display 20 for image display at the aerial target region, to guide the user's operation.
Understandably, FIG. 9 is only one example of the biometric acquisition and recognition system 1000 of the disclosed embodiments. The image processing module 220 may be integrated with the data processing module 111 of the imaging module 110, or the two may be arranged separately, without limitation; either can generate guidance prompt information when no valid biometric feature is recognized in the image information. Likewise, the main control unit 40 and the control unit 22 proposed in the disclosed embodiments may be integrated or arranged separately, without limitation.
In an embodiment, the main control unit 40 may be integrated directly with the display 20 or arranged separately from it. The contents of its control instructions may also be passed to external devices for processing or control of those devices, e.g. to control a fingerprint lock or a time-attendance machine. It is also understood that the image acquisition unit 21, the control unit 22 and the image processing module 220 of the disclosed embodiments may be controlled by an external device without going through the main control unit 40.
The image acquisition unit 21 of the disclosed embodiments is ideally aimed straight at the aerial target region 11; for example, when the image acquisition unit 21 is a high-speed CMOS camera, the optical axis of the camera should be perpendicular to the plane of the guide picture. In practice, however, the presence of the optical assembly 30 introduces a tilt between the optical axis of the image acquisition unit 21 and the normal of the guide picture, so that the axis cannot be perpendicular to the picture plane and the captured images are distorted. The disclosed embodiments therefore propose several arrangements of the image acquisition unit 21 that reduce image distortion, together with remedies when distortion occurs. Several preferred arrangements are described below.
In some embodiments, as shown in FIG. 13, the image acquisition unit 21 is arranged on the imaging side of the optical assembly 30, with its optical axis at a preset angle θ (0° < θ < 90°) to the normal of the imaging plane of the aerial target region 11. That is, the image acquisition unit 21 is placed above the optical assembly 30, on the same side of the optical assembly 30 as the guide picture. In this arrangement, because the angle θ between the optical axis of the image acquisition unit 21 and the normal of the guide picture is fixed once the unit clears the optical assembly 30, the distortion introduced by θ can be corrected during image processing to obtain clear image information.
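One way to correct the known-angle distortion is a perspective warp, sketched below with OpenCV under the assumption that the four corners of the guide-picture plane have been located in the captured frame (for example by a one-off calibration); the corner coordinates and file name are placeholders, not values from the disclosure.

```python
import cv2
import numpy as np

def correct_tilt(frame, plane_corners_px, out_size=(640, 640)):
    """Warp an obliquely captured view of the guide-picture plane back to a
    fronto-parallel view, removing the distortion caused by the angle theta
    between the camera axis and the plane normal."""
    w, h = out_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(np.float32(plane_corners_px), dst)
    return cv2.warpPerspective(frame, H, out_size)

# Placeholder corners (top-left, top-right, bottom-right, bottom-left):
corners = [(102, 88), (530, 75), (560, 470), (80, 455)]
frame = cv2.imread("capture.png")   # assumed input image
if frame is not None:
    flat = correct_tilt(frame, corners)
```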
In other embodiments, as shown in FIG. 14, the image acquisition unit 21 is arranged on the imaging side of the optical assembly 30, and a beam splitter 31 is placed on the upper, imaging-side surface of the optical assembly 30. Specifically, the beam splitter 31 transmits part of the incident light and reflects the rest, so that the image information of the target object in the aerial target region 11 is reflected by the beam splitter 31 toward the image acquisition unit 21, yielding clear image information.
Two preferred schemes with a beam splitter 31 on the imaging-side surface of the optical assembly are illustrated below with reference to FIG. 14.
In some embodiments, as shown in FIG. 14, the optical axis of the image acquisition unit 21 is perpendicular to the normal of the imaging plane of the aerial target region 11, and the unit is a visible-light image acquisition unit. The beam splitter 31 is half-transmissive and half-reflective for visible light, i.e. it has 50% transmittance and 50% reflectance in the visible band. The image acquisition unit 21 is arranged so that, after reflection by the beam splitter 31, its optical axis is perpendicular to the plane of the guide picture; when a visible-light image of the target object is needed, the reflection off the beam splitter 31 thus provides an undistorted image of the target object.
In other embodiments, as shown in FIG. 14, the optical axis of the image acquisition unit 21 is perpendicular to the normal of the imaging plane of the aerial target region 11, and the unit is an infrared image acquisition unit. The beam splitter 31 transmits visible light and reflects infrared light. Because such a splitter passes the visible band well, it avoids dimming the floating real image; and because it reflects infrared light completely, essentially no luminous flux is lost when capturing infrared images, so the image acquisition unit 21 obtains a clear image of the target object.
In some embodiments, as shown in FIG. 14, a filter component 24 that blocks visible light is placed on the light-entry side of the infrared image acquisition unit to further avoid visible-light interference.
In some embodiments, the beam splitter 31 may be sized to cover the whole optical assembly 30, or sized freely according to the actual image-capture needs. For example, as shown in FIG. 14, the beam splitter 31 completely covers the imaging-side surface of the optical assembly 30.
In some embodiments, as shown in FIG. 15, the image acquisition module 210 of the disclosed embodiments further includes at least one total-reflection unit 25, such as a total-reflection mirror, which totally reflects the image information of the target object in the aerial target region 11 reflected by the beam splitter 31 and relays that image information to the image acquisition unit 21. For example, FIG. 15 uses a single total-reflection mirror: light scattered by the target object at the aerial target region 11 undergoes several total reflections before finally entering the image acquisition unit 21. The total-reflection unit 25 gives more freedom in placing the image acquisition unit 21 — the device can, for instance, be made lower to save space — without changing the final angle between the optical axis and the guide picture.
In still other embodiments, as shown in FIG. 16, the image acquisition unit 21 is arranged on the light-source side of the optical assembly 30, and its optical axis may be perpendicular to, or at another angle to, the plane of the aerial target region 11. For example, the image acquisition unit 21 is placed below the optical assembly 30, on the same side as the display 20, with its optical axis perpendicular to the plane of the aerial target region 11; by exploiting the partial transparency of the optical assembly 30, an undistorted image of the target object is obtained. Understandably, the optical axis of the image acquisition unit 21 may instead be at a preset angle θ to that plane, with the distortion introduced by θ corrected in post-processing to obtain clear image information.
When the image acquisition unit 21 is on the light-source side of the optical assembly 30, image information is acquired through the transparency of the optical assembly 30. But since the optical assembly 30 contains a large number of microstructures that easily disturb the light, this interference must be removed during image processing. Two preferred solutions proposed by the disclosed embodiments are given below.
In some embodiments, the image acquisition unit 21 is an infrared image acquisition unit that photographs the target object, with a visible-light-blocking filter component 24 on its light-entry side to avoid the interference of the microstructures.
In other embodiments, a through-hole is provided in the optical assembly 30 at the position where the optical axis of the image acquisition unit 21 passes, so that the unit photographs the target object directly through the hole, reducing the interference of the microstructures of the optical assembly 30.
In some embodiments, as shown in FIG. 9, the image acquisition module 210 of the disclosed embodiments further includes an illumination unit 23 connected to the control unit 22 and switched on in response to the acquisition trigger signal. Specifically, the control unit 22 turns the image acquisition unit 21 and the illumination unit 23 on or off; it may control both together — so that the illumination unit 23 is not left on permanently, reducing energy consumption — or control them independently, without limitation. Uniformly illuminating the target object with the illumination unit 23 enhances the contrast and clarity of the ridge lines in the captured image of the target object.
In an embodiment, the position of the illumination unit 23 can be configured flexibly according to lighting needs, without limitation. Note that the direction of the light source directly determines the direction of the ridge-line shadows of the target object: under light from different directions, the ridges in the captured image can shift by as much as two to three ridge widths. The disclosed embodiments therefore design several illumination layouts to improve the quality of the captured images; the preferred layouts are described below.
In some embodiments, the illumination unit 23 is the backlight assembly of the display 20, and the main control unit 40 is further configured to control the backlight assembly to emit illumination light in a preset mode in response to the acquisition trigger signal. For example, when the detection module 120 detects a target object in the aerial target region whose interaction pose matches the guide pose in the guide picture, it sends the acquisition trigger signal; the main control unit 40 responds by driving the backlight assembly of the display 20 in the preset mode, and the illumination emitted by the display 20 is converged by the optical assembly 30 onto the aerial target region to light the target object, enhancing the contrast and clarity of the ridge lines in the image and improving the quality of the captured image.
Specifically, since the imaging mode of the display 20 may be RGB light-emitting diodes, an LCOS device, an OLED array, projection, laser, laser diodes or any other suitable display or stereoscopic display, with a brightness of at least 500 cd/m², the display 20 itself can serve as the illumination unit 23. When the user operates at the aerial target region 11, the detection module 120 senses the presence of the target object and sends the acquisition trigger signal to the main control unit 40, which sends a control command to the display 20 to make it — e.g. its backlight assembly — emit one high-brightness blue strobe. The strobe light likewise converges through the optical assembly 30 at the guide-picture position and forms uniform diffuse reflection on the surface of the target object. At that moment the control unit 22 synchronously directs the image acquisition unit 21 to photograph the shadows of the ridges and valleys of the target object immediately; the captured image is passed to the image processing module 220, which performs biometric processing on the image information and either stores the processed biometric information, completing feature acquisition, or compares it with the biometric information stored in the image storage module 230 to identify the user, achieving contactless acquisition and recognition of the user's biometric features.
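The trigger-strobe-capture sequence can be summarized as the following control-flow sketch; Display, Camera and their methods are hypothetical stand-ins for the real hardware interfaces, which the disclosure does not specify.

```python
class Display:
    """Hypothetical backlight interface (not specified by the disclosure)."""
    def flash_blue(self, duration_s=0.01):
        print(f"backlight: high-brightness blue strobe for {duration_s} s")

class Camera:
    """Hypothetical fixed-focus high-speed CMOS camera interface."""
    def capture(self):
        print("camera: frame captured")
        return b"raw-frame"

def on_acquisition_trigger(display, cameras):
    """On the detection module's trigger: strobe once and capture in sync,
    then hand the frames to the image processing module."""
    display.flash_blue()                        # one blue strobe
    return [cam.capture() for cam in cameras]   # synchronized capture

frames = on_acquisition_trigger(Display(), [Camera() for _ in range(3)])
```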
In some embodiments, as shown in FIG. 17, the illumination unit 23 is placed below the optical assembly 30, i.e. on the light-source side of the optical assembly 30, opposite the display 20.
Note that for the arrangement with the illumination unit 23 on the light-source side, since everything inside the housing 10 except the display surface of the display 20 has been given a light-absorbing treatment, the illumination unit 23 may adopt the following three preferred layouts to improve the quality of the target-object image.
In some embodiments, as shown in FIG. 17, a diffuse-reflection layer 26 is provided on the surface of the display 20, and the illuminating face of the illumination unit 23 faces that surface. That is, the illumination unit 23 is placed opposite the display 20, whose surface is given a diffuse-reflection treatment — for example a transparent diffusing film is applied to form the diffuse-reflection layer 26 — which scatters the light emitted by the illumination unit 23 without hindering the transmission of the display light. When the control unit 22 triggers the illumination unit 23, its light is scattered by the diffuse-reflection layer 26 on the display surface and, after reflection by the optical assembly 30, likewise converges at the guide-picture position into a uniform lighting plane that lights the ridges of the target object, yielding a high-contrast image of the target object.
In other embodiments, as shown in FIG. 18, the illuminating face of the illumination unit 23 faces the aerial target region 11. The illumination unit 23 is again placed opposite the display 20, but no diffuse-reflection treatment of the display surface is needed: the illuminating face of the light source points at the position of the guide picture and, exploiting the transmissive property of the optical assembly 30, the light of the illumination unit 23 passes directly through the optical assembly 30 to provide illumination for the target object.
In an embodiment, the illumination unit 23 comprises one annular or circular light source, or several light sources whose illuminating faces face the aerial target region 11, arranged at preset angular intervals. That is, for the layout of FIG. 18 the number of illumination units 23 may be one or several, without limitation; multiple illumination units 23 can be positioned to light different parts of at least one target object from different angles, e.g. the left, right and front of a fingerprint, and their illumination areas may overlap, without limitation.
In an embodiment, the form of the light source of the illumination unit 23 is not limited to the light source of the display 20 itself, a light source inside the device or one outside the device. A visible light source may be chosen, preferably a blue LED source of 450–470 nm, under which the target object yields high-contrast images; alternatively, an infrared source may be used to avoid visible light disturbing the displayed guide picture. Note that an omnidirectional strong light source weakens the shadows formed by the ridges of the target object and thus the clarity of the image, so the disclosed embodiments prefer an annular source to obtain clear images of the target object. Preferably, optical parts such as lenses or light-homogenizing plates may also be added in front of the light source to improve the illumination.
In other embodiments, the illumination unit 23 is mounted on the display 20: for example, the illumination unit 23 comprises an annular light source surrounding the display 20, or is integrated with the backlight assembly of the display 20.
As shown in FIG. 19, in the disclosed embodiments an annular LED source of the illumination unit 23 encircles the display 20; when the illumination unit 23 is lit, the light of the annular LED source is reflected by the optical assembly 30 to illuminate the position of the guide picture. Alternatively, the illumination unit 23 is integrated directly with the backlight assembly of the display 20: taking an LCD display as an example, when the LED illumination is triggered the LCD can be driven to pass all light, so that the light of the illumination unit 23 passes through the LCD display and, after reflection by the optical assembly 30, lights the position of the guide picture.
For the arrangement with the illumination unit 23 on the imaging side, in some embodiments the illumination unit 23 is arranged on the imaging side of the optical assembly 30 with its illuminating face toward the aerial target region 11 — i.e. above the optical assembly 30, on the same side of the optical assembly 30 as the guide picture, as shown in FIG. 16. This layout gives great freedom: for example, an annular blue LED source can evenly surround several image acquisition units 21; or three image acquisition units 21 and three illumination units 23 are arranged together at different angles to light and photograph different parts of the target object, the three cameras placed in one plane at 45° to each other, so that the captured images fully cover the effective area of the target object. Understandably, the illumination units 23 may also light independently of the image acquisition units 21, in one or several numbers, preferably annular LED sources, to cover the different parts of the target object effectively.
Thus, in the biometric acquisition and recognition system 1000 of the disclosed embodiments, interactive aerial imaging is combined with contactless acquisition and recognition: since the position at which the imaging subsystem 100 displays the guide picture in the air is fixed, the three-dimensional space covering the guide picture — the aerial target region — serves as the reference plane for image acquisition by the acquisition and recognition subsystem 200. When the user touches the guide picture, the detection module 120 sends the acquisition trigger signal to trigger the image acquisition unit 21 to acquire the user's biometric information. No additional constraining device is needed to guide the user's operation, and the user runs no risk of touching the device body, making contactless biometric acquisition and recognition more convenient, safe and efficient. Furthermore, provided the aperture of the image acquisition unit 21 keeps the real-image position within the depth of field, the aperture can be opened as wide as circumstances allow, increasing the light collected from the target object during acquisition, yielding sharper images of the target object and reducing the brightness demanded of the light source of the illumination unit 23.
A second-aspect embodiment of the present disclosure provides a terminal device. As shown in FIG. 20, the terminal device 2000 of the disclosed embodiments includes a device body 300 and the biometric acquisition and recognition system 1000 of the above embodiments, the biometric acquisition and recognition system 1000 being arranged on the device body 300.
By using the biometric acquisition and recognition system 1000 of the above embodiments to acquire and recognize the user's biometric information, the terminal device 2000 of the disclosed embodiments avoids the risk of the user touching the device during operation and needs no additional constraining device to guide the user, making contactless fingerprint acquisition safer and more efficient.
A third-aspect embodiment of the present disclosure provides a biometric acquisition and recognition method. As shown in FIG. 21, the method of the disclosed embodiments includes at least steps S1–S4.
Step S1: provide an aerial target region.
To address the operational inconvenience of existing contactless fingerprint recognition and the associated hygiene concerns, the disclosed embodiments combine interactive aerial imaging with contactless biometric acquisition and recognition to complete the acquisition and recognition of aerial fingerprints efficiently.
In an embodiment, by means of interactive aerial imaging the method forms a floating real image at a determined position in the air; since the position of the floating real image is relatively fixed in the air, the three-dimensional space covering the floating real image can serve as the aerial target region.
Step S2: image the guide picture for biometric acquisition and recognition in the aerial target region.
Optionally, a display is controlled to show the guide picture for biometric acquisition and recognition, and an optical assembly images the guide picture in the aerial target region.
In an embodiment, the guide picture is understood as the floating real image that guides the user's operation; by means of interactive aerial imaging, the guide picture for biometric acquisition and recognition is imaged and displayed in the aerial target region. Through direct interaction with the floating real image, no additional constraining mechanism is needed to guide the user's operation, and the risk of the user contacting the device body is avoided.
Step S3: upon detecting that a target object is present in the aerial target region, that the target object interacts with the guide picture, and that the pose of the target object matches the guide pose in the guide picture, acquire image information of the target object within the aerial target region.
Further, upon detecting a target object in the aerial target region and determining that it interacts with the guide picture and that its pose matches the guide pose in the guide picture, the backlight assembly of the display is controlled to emit illumination light in a preset mode while the image information of the target object within the aerial target region is acquired. Illuminating the target object this way enhances the contrast and clarity of the ridge lines in the image of the target object and improves the quality of the captured image.
In an embodiment, because the position of the guide picture generated by interactive aerial imaging is relatively fixed in the air, the user can operate according to the guide picture and the acquisition area for image capture is set to the aerial target region. When a target object is detected in the aerial target region with a pose matching the guide pose in the guide picture, the image information of the target object within the region is acquired. For example, in fingerprint acquisition and recognition, the user places a palm at the position of the guide picture; when the palm is detected touching the floating-image region with correct position and orientation, the acquisition action is triggered and the user's fingerprint information is obtained.
In an embodiment, during image acquisition the placement pose of the target object is fitted accurately against the medium-free floating real image, the contour of the target object is searched quickly and its position fitted, so that the centre of the target object in the image is extracted accurately and the main features are extracted within a suitable range around that centre. This reduces the effects of scale, translation and rotation on the image during acquisition, reduces the unreliability introduced by image algorithms, and lowers the potential algorithmic errors of biometric feature extraction and matching.
Specifically, since the imaging mode of the display may be RGB light-emitting diodes, an LCOS device, an OLED array, projection, laser, laser diodes or any other suitable display or stereoscopic display, with a brightness of at least 500 cd/m², the display itself can serve as the illumination unit in the disclosed embodiments. For example, when the user operates at the aerial target region, the detection module senses the presence of the target object and sends the acquisition trigger signal to the main control unit, which sends a control command to the display to make its backlight assembly emit one high-brightness blue strobe; the strobe light converges through the optical assembly at the guide-picture position and forms uniform diffuse reflection on the surface of the target object. At that moment the control unit synchronously directs the image acquisition unit to photograph the shadows of the ridges and valleys of the target object immediately; the captured image is passed to the image processing module, which performs biometric processing on the image information and either stores the processed biometric information, completing feature acquisition, or compares it with the biometric information stored in the image storage module to identify the user, achieving contactless acquisition and recognition of the user's biometric features.
Step S4: perform biometric processing according to the image information, and store the processed biometric information, or compare the processed biometric information with stored biometric information to identify the user.
In an embodiment, the acquired image information is processed and analysed and the processed image information stored, completing the acquisition of the user's biometric features.
When the user's biometric information is to be recognized, upon detecting the user's direct interaction with the guide picture, the user's biometric features are image-captured, the captured image information is processed and compared with the stored biometric information to verify the user's identity, achieving contactless acquisition and recognition of the user's biometric features.
According to the biometric acquisition and recognition method of the disclosed embodiments, the guide picture for biometric acquisition and recognition is imaged in the aerial target region, which thus serves as the reference plane of the user's operation. The user operates according to the guide picture presented at the aerial target region; upon detecting conformity with the guide pose in the guide picture, the image information of the target object within the region is acquired, biometric processing is performed on it, and the processed biometric information is stored or used to identify the user, achieving contactless acquisition and recognition of the user's biometric features. The method makes the acquisition of image information more convenient and intuitive, requires no additional constraining device to guide the user's operation, and avoids the risk of the user touching the device body, so the contactless recognition operation is more natural and safe.
In some embodiments, performing biometric processing according to the image information comprises, as shown in FIG. 22, at least steps S5–S8.
Step S5: obtain the biometric region of interest in the image information.
The biometric region of interest is understood as the biometric image area selected from the image information; delimiting the biometric region of interest for further processing reduces processing time and increases accuracy.
In an embodiment, the non-biometric background is first removed from the image information, e.g. by extracting the biometric feature from the image information via a colour space, where the colour space may be the HSV model (Hue, Saturation, Value) or the YCbCr model (Y the luma component, Cb the blue-difference chroma component, Cr the red-difference chroma component), among others.
For example, in the HSV colour space skin takes the values 26 < H < 34, 43 < S < 255 and 46 < V < 255, where H is hue, S is saturation and V is value. The HSV colour space can thus remove the non-biometric background: taking a fingerprint as an example, the parts of the image meeting the finger-colour requirements on the three parameters H, S and V are kept and the background removed, completing the extraction of the fingertip features.
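A minimal OpenCV sketch of the HSV threshold just quoted (OpenCV's H channel spans 0–179, matching the 26–34 range given above); the function name is illustrative.

```python
import cv2
import numpy as np

def skin_mask_hsv(bgr_image):
    """Keep pixels whose HSV values fall in the skin range
    26<H<34, 43<S<255, 46<V<255; everything else is background."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([26, 43, 46], dtype=np.uint8)
    upper = np.array([34, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)        # 255 where skin-coloured
    return cv2.bitwise_and(bgr_image, bgr_image, mask=mask)
```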
Next, for backgrounds close to skin colour that were not removed, contour detection can be used: the area of each closed contour is computed and contours of small area are discarded. A contour is understood as the curve joining continuous points of the same colour or grey level. Before searching for contours, thresholding or Canny edge detection may be applied so that the contour detection is more accurate.
Further, the biometric region of interest is extracted via depth information. As shown in FIG. 3, the optical sensor periodically detects the user's interaction, including interaction position and target-object direction, so the depth of the acquired target object is a fixed value; discarding data outside a certain depth range yields the biometric region of interest. For example, a depth camera yields image information with four channels — the RGB colour channels plus a depth channel D, combined as RGB-D. Traversing the rows and columns of the image information, the depth of each pixel is judged: if the depth value lies within 20 cm to 40 cm, it satisfies the target-object requirement and its RGB value is kept as part of the biometric region of interest, while pixels whose depth values do not satisfy the requirement are set to black.
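A minimal sketch of the depth gate described above, assuming an RGB-D frame whose depth channel is in centimetres; the 20–40 cm band is the example from the text.

```python
import numpy as np

def depth_roi(rgb, depth_cm, near=20.0, far=40.0):
    """Blacken every pixel whose depth lies outside [near, far] cm,
    keeping only the RGB of points at the aerial target region."""
    keep = (depth_cm >= near) & (depth_cm <= far)   # HxW boolean mask
    roi = np.where(keep[..., None], rgb, 0)         # broadcast over channels
    return roi.astype(rgb.dtype)

# Example with synthetic data
rgb = np.full((4, 4, 3), 200, dtype=np.uint8)
depth = np.linspace(10, 50, 16).reshape(4, 4)
print(depth_roi(rgb, depth)[..., 0])   # rows outside 20-40 cm are zeroed
```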
Step S6: preprocess the biometric region of interest to obtain a preprocessed image.
In an embodiment, preprocessing the biometric region of interest includes operations such as grayscale conversion, image enhancement, binarization and thinning, yielding the preprocessed image.
For grayscale conversion: colour images are computationally expensive, so the colour image is converted to grayscale, i.e. the three RGB channels become one, reducing computation, speeding up system execution, cutting processing latency and keeping the system real-time. Graying makes the three colour components R, G and B of the image equal; since colour values range over [0, 255], there are only 256 grey levels, i.e. a grayscale image can express only 256 grey shades. Conversion may use the component method, the maximum method, the average method or the weighted-average method, among others, without limitation. Taking the average method as an example, the average of the R, G and B colour components is used, and each channel value is computed as I(x, y) = 1/3·I_R(x, y) + 1/3·I_G(x, y) + 1/3·I_B(x, y).
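The averaging formula, as a short NumPy sketch (real systems may prefer the weighted-average method mentioned above):

```python
import numpy as np

def to_gray_average(rgb):
    """I(x, y) = (R + G + B) / 3, yielding one of 256 grey levels."""
    return rgb.mean(axis=2).astype(np.uint8)
```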
Image enhancement makes the ridge lines of the target object more distinct and the spacing between ridges clearer. For example, the enhancement may use Gabor feature extraction: the Gabor filter is a linear filter for edge extraction whose frequency and orientation representations resemble those of the human visual system; it offers good orientation and scale selectivity for image enhancement, is insensitive to illumination changes, and suits texture analysis.
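A minimal Gabor-enhancement sketch with OpenCV; the kernel parameters are illustrative and would be tuned to the ridge frequency of the actual images.

```python
import cv2
import numpy as np

def gabor_enhance(gray, orientations=8):
    """Filter the grayscale image with a bank of Gabor kernels at several
    orientations and keep the strongest response per pixel, which
    emphasizes ridge lines while staying robust to illumination."""
    responses = []
    for k in range(orientations):
        theta = k * np.pi / orientations
        kern = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kern))
    return cv2.convertScaleAbs(np.max(responses, axis=0))
```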
To further reduce computation, the enhanced grayscale image is binarized and thinned. Binarization maps each 0–255 channel value of the grayscale image to 0 or 1: a threshold range is set, pixel values inside the range become 1 (e.g. the white part) and those outside become 0 (e.g. the black part), distinguishing the biometric region of interest to be recognized; the binarized image is then thinned using a 2×2 rectangular template.
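A sketch of the binarization and thinning step; note that cv2.ximgproc.thinning requires the opencv-contrib build, which is an assumption here — any skeletonization routine would serve.

```python
import cv2

def binarize_and_thin(enhanced_gray):
    """Threshold the enhanced image to {0, 255} and thin the ridges to
    one-pixel-wide skeletons for feature-point extraction."""
    _, binary = cv2.threshold(enhanced_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    skeleton = cv2.ximgproc.thinning(binary)   # contrib module assumed
    return binary, skeleton
```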
Step S7: extract feature points from the preprocessed image.
Specifically, feature-point extraction searches the biometric region of interest of the preprocessed image for points that can describe the biometric feature, e.g. the feature points of a fingerprint. In an embodiment, the feature points are organized around the ridge lines and include ridge midpoints, endings, bifurcations and crossings. To describe the feature points better, the SIFT feature-extraction algorithm can extract SIFT descriptors over the biometric region of interest; at the same time, a sliding window specialized for describing ridge characteristics can be used — e.g. a 3×3 window scanning the whole image, analysing the values, positions and number of the pixels inside the window to decide whether there is an ending or a bifurcation.
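A sketch of both extraction routes named above: SIFT descriptors over the region of interest, and a 3×3 crossing-number scan of the ridge skeleton for endings and bifurcations; the skeleton is assumed binarized to {0, 1}.

```python
import cv2
import numpy as np

def sift_features(gray_roi):
    """SIFT keypoints and descriptors over the biometric region of interest."""
    sift = cv2.SIFT_create()
    return sift.detectAndCompute(gray_roi, None)

def minutiae(skeleton01):
    """Classify ridge-skeleton pixels by the crossing number of their 3x3
    neighbourhood: CN == 1 is a ridge ending, CN == 3 a bifurcation."""
    endings, bifurcations = [], []
    h, w = skeleton01.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if skeleton01[y, x] != 1:
                continue
            # 8-neighbours in circular order around (x, y)
            p = [int(skeleton01[y - 1, x]), int(skeleton01[y - 1, x + 1]),
                 int(skeleton01[y, x + 1]), int(skeleton01[y + 1, x + 1]),
                 int(skeleton01[y + 1, x]), int(skeleton01[y + 1, x - 1]),
                 int(skeleton01[y, x - 1]), int(skeleton01[y - 1, x - 1])]
            cn = sum(abs(p[i] - p[(i + 1) % 8]) for i in range(8)) // 2
            if cn == 1:
                endings.append((x, y))
            elif cn == 3:
                bifurcations.append((x, y))
    return endings, bifurcations
```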
Step S8: perform similarity matching on the feature points to determine the target biometric feature.
In an embodiment, after the above preprocessing and feature-extraction operations, the feature data of the feature points are recorded and compared against another set of feature data; an algorithm judges the similarity and finally determines the degree of match of the biometric features.
For example, the SIFT feature points extracted from the preprocessed image all carry orientation, scale and position information, and at this stage are invariant to translation, scaling and rotation; comparing the similarity of the target-object feature points between the preprocessed image and a stored image thus decides whether they belong to the same person. For images of different parts of the target object, algorithms such as the centroid-distance method or curvature analysis crop the several parts of the target object into separate sub-images, so that feature extraction and matching run on each part individually.
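A minimal matching sketch using a brute-force matcher with Lowe's ratio test over SIFT descriptors; the ratio and the acceptance threshold are illustrative, not values from the disclosure.

```python
import cv2

def match_score(desc_probe, desc_enrolled, ratio=0.75):
    """Fraction of probe descriptors with an unambiguous nearest neighbour
    in the enrolled template (higher means more similar)."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(desc_probe, desc_enrolled, k=2)
    pairs = [p for p in pairs if len(p) == 2]
    good = [m for m, n in pairs if m.distance < ratio * n.distance]
    return len(good) / max(len(desc_probe), 1)

def same_biometric(desc_probe, desc_enrolled, threshold=0.25):
    """Illustrative acceptance rule on top of the similarity score."""
    return match_score(desc_probe, desc_enrolled) >= threshold
```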
In some embodiments, before obtaining the biometric region of interest in the image information, performing biometric processing according to the image information further includes obtaining a three-dimensional biometric image of the target object from image information of the target object within the aerial target region acquired from different orientations, and unrolling the three-dimensional biometric image into an equivalent two-dimensional biometric image, thereby obtaining biometric information compatible with the large target-object databases currently acquired and archived by other means.
In some embodiments, the method of the disclosed embodiments further includes: upon detecting that the pose of the target object does not match the guide pose in the guide picture, sending guidance prompt information, controlling the display to show the guidance prompt information, and converging the light of the displayed prompt into an image in the aerial target region. That is, when the acquisition of image information fails, the cause of failure — e.g. wrong palm orientation, moving too fast or positional offset — is analysed to generate the guidance prompt information, which is imaged in the aerial target region to guide the user's movement so that the image information is acquired correctly.
The method of the disclosed embodiments further includes detecting the interaction of the target object with the guide picture within the aerial target region, providing a biometric interaction picture according to that interaction, and imaging the biometric interaction picture in the aerial target region. For example, the user's interactions — including taps, swipes and other operations — are detected; the specific operation, e.g. selecting a fingerprint enrolment mode or viewing fingerprint information, is determined from the correspondence between the interaction and the internal instruction set; and the corresponding biometric interaction picture, such as a UI screen with the relevant control buttons and settings, is displayed as an image at the aerial target region.
A fourth-aspect embodiment of the present disclosure provides a storage medium storing a computer program which, when executed by a processor, implements the biometric acquisition and recognition method of the above embodiments.
In the description of this specification, any process or method described in a flowchart or otherwise may be understood as representing a module, segment or portion of code comprising one or more executable instructions for implementing custom logical functions or steps of the process; and the scope of the preferred embodiments of the present disclosure includes additional implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present disclosure belong.
The logic and/or steps represented in the flowcharts or otherwise described herein — for example, an ordered listing of executable instructions for implementing logical functions — may be embodied in any computer-readable medium for use by, or in connection with, an instruction-execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from the instruction-execution system, apparatus or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transport the program for use by, or in connection with, the instruction-execution system, apparatus or device. More specific (non-exhaustive) examples of the computer-readable medium include: an electrical connection (electronic device) with one or more wirings, a portable computer disk cartridge (magnetic device), a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical-fibre device, and a portable compact-disc read-only memory (CDROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it as necessary, and then stored in a computer memory.
It should be understood that parts of the present disclosure may be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction-execution system. For example, if implemented in hardware, as in another embodiment, any of the following techniques known in the art, or their combination, may be used: a discrete logic circuit with logic gates implementing logical functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those of ordinary skill in the art will appreciate that all or part of the steps carried by the method of the above embodiments can be completed by instructing the relevant hardware through a program, which may be stored in a computer-readable storage medium and which, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the various embodiments of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module; if implemented as a software functional module and sold or used as an independent product, the integrated module may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although embodiments of the present disclosure have been shown and described above, it is understood that the above embodiments are exemplary and are not to be construed as limiting the present disclosure; those of ordinary skill in the art may vary, modify, substitute and transform the above embodiments within the scope of the present disclosure.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "illustrative embodiment", "example", "specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, illustrative uses of these terms do not necessarily refer to the same embodiment or example.
Although embodiments of the present disclosure have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principle and spirit of the present disclosure, the scope of which is defined by the claims and their equivalents.

Claims (20)

  1. A biometric acquisition and recognition system, characterized by comprising:
    an imaging subsystem, comprising:
    an imaging module configured to image and display a guide picture for biometric acquisition and recognition in an aerial target region;
    a detection module configured to send an acquisition trigger signal upon detecting that a target object is present in the aerial target region, that the target object interacts with the guide picture, and that the pose of the target object matches the guide pose in the guide picture;
    an acquisition and recognition subsystem, comprising:
    an image acquisition module configured to acquire, in response to the acquisition trigger signal, image information of the target object within the aerial target region;
    an image storage module configured to store biometric information;
    an image processing module connected to the image acquisition module and the image storage module and configured to perform biometric processing according to the image information, and to store the processed biometric information in the image storage module, or to compare the processed biometric information with the biometric information stored in the image storage module to identify a user.
  2. The biometric acquisition and recognition system according to claim 1, characterized in that the imaging module comprises:
    a housing formed with a display window and with an accommodating cavity inside;
    a display arranged in the accommodating cavity and configured to display the guide picture for biometric acquisition and recognition;
    an optical assembly arranged in the accommodating cavity and configured to converge the light of the guide picture displayed by the display into an image in the aerial target region, the display being arranged on the light-source side of the optical assembly and the display window being on the imaging side of the optical assembly;
    a main control unit arranged in the accommodating cavity and configured to control the display.
  3. The biometric acquisition and recognition system according to claim 2, characterized in that the imaging module further comprises:
    a data processing module connected to the main control unit and configured to issue guidance prompt information when the detection module detects that the pose of the target object does not match the guide pose in the guide picture;
    the main control unit controls the display to show the guidance prompt information;
    the optical assembly converges the light of the guidance prompt information shown by the display into an image in the aerial target region.
  4. The biometric acquisition and recognition system according to claim 2 or 3, characterized in that the image acquisition module comprises:
    at least one image acquisition unit configured to acquire image information of the target object within the aerial target region;
    a control unit connected to each image acquisition unit and configured to start the image acquisition unit in response to the acquisition trigger signal.
  5. The biometric acquisition and recognition system according to claim 4, characterized in that the image acquisition unit is arranged on the imaging side of the optical assembly, the optical axis of the image acquisition unit being at a preset angle to the normal of the imaging plane of the aerial target region.
  6. The biometric acquisition and recognition system according to claim 4, characterized in that
    the image acquisition unit is arranged on the imaging side of the optical assembly;
    a beam splitter is provided on the imaging-side surface of the optical assembly, the beam splitter being configured to reflect the image information of the target object in the aerial target region so as to pass the image information to the image acquisition unit.
  7. The biometric acquisition and recognition system according to claim 6, characterized in that
    the optical axis of the image acquisition unit is perpendicular to the normal of the imaging plane of the aerial target region, and the beam splitter is half-transmissive and half-reflective for visible light; or
    the optical axis of the image acquisition unit is perpendicular to the normal of the imaging plane of the aerial target region, the image acquisition unit is an infrared image acquisition unit, and the beam splitter transmits visible light and reflects infrared light.
  8. The biometric acquisition and recognition system according to claim 6 or 7, characterized in that the image acquisition module further comprises:
    at least one total-reflection unit configured to totally reflect the image information of the target object in the aerial target region reflected by the beam splitter, so as to pass the image information to the image acquisition unit.
  9. The biometric acquisition and recognition system according to claim 4, characterized in that
    the image acquisition unit is arranged on the light-source side of the optical assembly, the optical axis of the image acquisition unit being at a preset angle to the plane of the aerial target region.
  10. The biometric acquisition and recognition system according to claim 9, characterized in that a through-hole is provided in the optical assembly at the position where the optical axis of the image acquisition unit passes.
  11. The biometric acquisition and recognition system according to any one of claims 4-10, characterized in that the image acquisition module further comprises:
    an illumination unit connected to the control unit and configured to start illumination in response to the acquisition trigger signal.
  12. The biometric acquisition and recognition system according to claim 11, characterized in that
    the illumination unit is the backlight assembly of the display, and the main control unit is further configured to control the backlight assembly of the display to emit illumination light in a preset mode in response to the acquisition trigger signal; or
    the illumination unit is arranged on the imaging side of the optical assembly, the illuminating face of the illumination unit facing the aerial target region.
  13. The biometric acquisition and recognition system according to claim 11, characterized in that the illumination unit is arranged on the light-source side of the optical assembly, opposite the display.
  14. The biometric acquisition and recognition system according to claim 13, characterized in that
    a diffuse-reflection layer is provided on the surface of the display and the illuminating face of the illumination unit faces the surface of the display; or
    the illuminating face of the illumination unit faces the aerial target region.
  15. A terminal device, characterized by comprising:
    a device body;
    the biometric acquisition and recognition system according to any one of claims 1-14, the biometric acquisition and recognition system being arranged on the device body.
  16. A biometric acquisition and recognition method, characterized by comprising:
    providing an aerial target region;
    imaging a guide picture for biometric acquisition and recognition in the aerial target region;
    upon detecting that a target object is present in the aerial target region, that the target object interacts with the guide picture, and that the pose of the target object matches the guide pose in the guide picture, acquiring image information of the target object within the aerial target region;
    performing biometric processing according to the image information, and storing the processed biometric information, or comparing the processed biometric information with stored biometric information to identify a user.
  17. The biometric acquisition and recognition method according to claim 16, characterized in that performing biometric processing according to the image information comprises:
    obtaining a biometric region of interest in the image information;
    preprocessing the biometric region of interest to obtain a preprocessed image;
    extracting feature points from the preprocessed image;
    performing similarity matching on the feature points to determine a target biometric feature.
  18. The biometric acquisition and recognition method according to claim 17, characterized in that, before obtaining the biometric region of interest in the image information, the method further comprises:
    obtaining a three-dimensional biometric image of the target object from image information of the target object within the aerial target region acquired from different orientations;
    unrolling the three-dimensional biometric image into an equivalent two-dimensional biometric image.
  19. The biometric acquisition and recognition method according to any one of claims 16-18, characterized in that the method further comprises:
    upon detecting that the pose of the target object does not match the guide pose in the guide picture, sending guidance prompt information;
    controlling a display to show the guidance prompt information, and converging the light of the guidance prompt information shown by the display into an image in the aerial target region.
  20. The biometric acquisition and recognition method according to any one of claims 16-19, characterized in that the biometric acquisition and recognition method further comprises:
    detecting an interaction of the target object within the aerial target region with the guide picture;
    providing a biometric interaction picture according to the interaction of the target object with the guide picture;
    imaging the biometric interaction picture in the aerial target region.