WO2017132903A1 - Biometric composite imaging system and method multiplexed with visible light - Google Patents

Biometric composite imaging system and method multiplexed with visible light

Info

Publication number
WO2017132903A1
Authority
WO
WIPO (PCT)
Prior art keywords
infrared light
image
imaging
visible light
region
Prior art date
Application number
PCT/CN2016/073356
Other languages
English (en)
French (fr)
Inventor
徐鹤菲
Original Assignee
徐鹤菲
Priority date
Filing date
Publication date
Application filed by 徐鹤菲 filed Critical 徐鹤菲
Priority to CN201680081007.8A priority Critical patent/CN109074438A/zh
Priority to PCT/CN2016/073356 priority patent/WO2017132903A1/zh
Priority to US16/074,560 priority patent/US10579871B2/en
Publication of WO2017132903A1 publication Critical patent/WO2017132903A1/zh



Classifications

    • G06F 21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06F 21/602: Providing cryptographic facilities or services
    • G06F 21/6245: Protecting personal data, e.g. for financial or medical purposes
    • G06V 10/143: Sensing or illuminating at different wavelengths
    • G06V 10/147: Details of sensors, e.g. sensor lenses
    • G06V 40/19: Eye characteristics, e.g. of the iris; Sensors therefor
    • G06V 40/193: Eye characteristics; Preprocessing; Feature extraction
    • G06V 40/45: Spoof detection; Detection of the body part being alive
    • H04N 23/11: Cameras or camera modules generating image signals from visible and infrared light wavelengths
    • H04N 23/45: Generating image signals from two or more image sensors of different type or operating in different modes
    • H04N 23/54: Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • H04N 23/58: Means for changing the camera field of view without moving the camera body
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/64: Computer-aided capture of images, e.g. check of taken image quality
    • H04N 23/667: Camera operation mode switching, e.g. between still and video, or high- and low-resolution modes
    • H04N 23/673: Focus control based on contrast or high frequency components of image signals, e.g. hill climbing method
    • H04N 23/74: Compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • G06V 10/60: Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • H04N 23/55: Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H04N 23/56: Cameras or camera modules provided with illuminating means
    • H04N 23/67: Focus control based on electronic image sensor signals

Definitions

  • the present invention relates to the field of image processing, biometrics, and optical imaging technologies, and more particularly to a visible light multiplexed biometric composite imaging technique.
  • Biometrics is an emerging identity recognition technology.
  • Common biometric technologies include face recognition, iris recognition and 3D face recognition.
  • Near-infrared imaging technology uses an infrared light source to actively illuminate the biological feature and a camera to receive the infrared light reflected from it.
  • The digitized near-infrared image of the biometric feature is then used for recognition.
  • Iris recognition is an emerging biometric technology that is expanding in the field of identity recognition. Secure and convenient identification is a key challenge for mobile terminal business services. At present, identity confirmation on mobile terminals relies mainly on passwords and cards, which are hard to remember, easy to steal, and offer low security. Among the many identification technologies, iris recognition has the highest security and accuracy, and has the advantages of being unique, requiring no memorization, being difficult to steal, and providing a high level of security.
  • When an iris recognition function is added to a mobile terminal (such as a mobile phone), a near-infrared camera module must be added to the front of the phone, separate from the front visible-light camera module used for self-photographing. That is, the front panel of the phone needs two openings, one for the selfie camera and one for iris imaging, which complicates the industrial design and makes the appearance unattractive.
  • Iris imaging implemented with a near-infrared camera module generally uses infrared light in the 760 nm-880 nm spectral band; an additional infrared light source (between 760 nm and 880 nm) is required for fill illumination, and the near-infrared camera module must be able to receive energy in the infrared band.
  • Iris recognition on a mobile phone is mainly used for recognizing the user's own identity, and for a good user experience the user generally needs to enroll in advance.
  • The existing front selfie camera of a mobile phone cannot receive infrared light, or receives it only with large attenuation, because of its own coating and filtering. Therefore, prior-art iris recognition requires a separate near-infrared camera for imaging the iris, which cannot be multiplexed with the existing visible-light camera used for photographic imaging (spectral range 380-760 nm), such as the front color camera of a smartphone. All of this increases the volume and cost of the iris imaging system, complicates the design, degrades the user experience, and makes it impossible to miniaturize and integrate the system into more demanding mobile terminals.
  • To implement two-in-one multiplexing with a single camera on a mobile device, satisfying both the normal front-facing selfie visible-light imaging function of the phone and the infrared-light biometric recognition function, near-infrared and visible-light dual-band imaging is mainly achieved by:
  • adding a mechanically switchable infrared filter to the imaging system (see Chinese patent CN201420067432.X);
  • using a filter having infrared and visible dual-band transmission spectra (see US Pat. No. 8,408,821 and Chinese Patent Application No. CN104394306A); or
  • using an image sensor that integrates both visible-light and infrared-light detecting pixels (see US Pat. No. 7,915,562 and Chinese Patent Application No. CN104284179A).
  • The filter in Chinese patent application CN104394306A includes a first region and a second region: the first region has a dual-channel coating that transmits both visible and infrared light, and the second region has a single-channel coating that passes only infrared light of a specific wavelength.
  • This approach is not feasible for engineering implementation on a smartphone, because such a design lets infrared light enter through the first region and make the selfie reddish, and the infrared signal passing through the second region cannot fully correct this reddish cast: the two regions image different content in the same frame, so a factor that perfectly corrects the reddish selfie cannot be extracted stably, especially when the background is green.
  • The selfie quality therefore deteriorates, which is unacceptable given how demanding users are about front selfie quality on a phone; and when the first region is used for the infrared biometric image, visible light can also pass through, so that the iris, for example, is exposed to complex external visible light and is obscured by bright reflection spots from glasses, lamps, and windows.
  • Such multi-region dual-spectrum imaging designs therefore degrade either the selfie quality or the infrared biometric recognition quality, and cannot serve both well.
  • The biometric image is affected by visible light from the complex environment, resulting in poor image quality that seriously degrades the recognition accuracy of the back-end algorithm and the user experience.
  • Moreover, the axial chromatic aberration between visible-light and infrared-light imaging is not taken into account, so it is difficult to also serve the selfie use case, which degrades the selfie image; the prior art only mentions using a voice-coil motor to adjust focus, without describing how autofocus is achieved for dual-spectrum imaging.
  • US Pat. No. 7,915,562 relates to an image sensor design for dual-spectrum (visible and infrared) imaging, but does not provide the design of the entire imaging system.
  • the sub-regional multi-spectral filter used in Chinese patent application CN104284179A is a Color Filter Array attached to the surface of the image sensor, rather than a separate optical filter.
  • the design does not consider the axial chromatic aberration of visible light and infrared light imaging, and it is difficult to collect a near-infrared iris image that satisfies the identification requirements at the normal use distance of the mobile device.
  • What is needed is an imaging scheme that takes into account both the needs of visible-light self-photographing and infrared-light biometric recognition.
  • the biometric imaging system and method of the invention can satisfy the ordinary photographing function of the user, and can also be used for collecting the near-infrared image of the biological feature (for example, iris), and can realize the fast focusing of the biological feature.
  • The invention adopts a sub-region, sub-band filter combination design that splits the incident multi-spectral light into two sets of optical paths, which are received by the image sensor.
  • Filter A in the filter combination structure has a coating that transmits in the visible band and reflects in the infrared band, while the other filter B has a coating that transmits in the infrared band and reflects in the visible band.
  • The two filters A and B are preferably placed between the image sensor chip and the optical lens, and the two do not overlap along the path of the incident light. When the camera is working, the incident light passing through the filter combination structure is split into a visible portion and an infrared portion, each received by the corresponding partition of the same image sensor.
  • According to the sizes of the two filters A and B and their distance from the image sensor, the area of the image sensor is correspondingly divided into three different imaging regions: a visible-light imaging region, an infrared-light imaging region, and a transition region in which visible and infrared imaging overlap. Because the filters are located close to the image sensor in the module structure design, the transition region where visible and infrared imaging overlap is small and mainly affects the visible image; after software processing, the visible-light image with the transition region removed can be output.
  • In the self-photographing visible-light mode, the image sensor can preferably output only the selfie visible-light image of the visible-light imaging region with the transition region removed, so the captured selfie is not affected by infrared light and does not appear reddish.
  • In the infrared-light mode, the image sensor can preferably output only the infrared-light image of the infrared-light imaging region, so the collected biometric image is not affected by visible light from complex environments, such as visible-light reflection spots formed on the surface of the eye by window reflections, or reflective speckle formed on a user's glasses by various ambient lights, which would otherwise obscure the iris information and degrade the recognition rate and user experience.
  • The output images of these two modes do not interfere with each other, which is the most preferable two-in-one solution, as illustrated in the sketch below.
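For illustration, a minimal Python sketch of how the three-region output could be handled in software is given below; all dimensions, row boundaries, and function names are assumptions made for the example, not values defined by the patent.

```python
import numpy as np

# Hypothetical sensor layout (in rows), chosen only for illustration: the real
# boundaries depend on the filter sizes and their distance to the sensor.
SENSOR_ROWS, SENSOR_COLS = 3456, 4680
VISIBLE_END = 2100        # last row of the visible-light imaging region
TRANSITION_END = 2300     # last row of the visible/infrared transition region

def crop_for_mode(raw_frame: np.ndarray, mode: str) -> np.ndarray:
    """Return only the sub-image that belongs to the active imaging mode.

    In visible-light (selfie) mode the transition region is discarded so the
    output image is not tinged red by infrared leakage; in infrared mode only
    the infrared imaging region is returned, so environmental visible light
    does not contaminate the biometric image.
    """
    if mode == "visible":
        return raw_frame[:VISIBLE_END, :]
    if mode == "infrared":
        return raw_frame[TRANSITION_END:, :]
    raise ValueError(f"unknown imaging mode: {mode}")

# Example: a dummy full-resolution frame split per mode.
frame = np.zeros((SENSOR_ROWS, SENSOR_COLS), dtype=np.uint16)
selfie = crop_for_mode(frame, "visible")     # transition region removed
iris = crop_for_mode(frame, "infrared")      # infrared region only
```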
  • Thus, the present invention can realize multiplexed imaging of visible and infrared light without physically switching the position of a filter and without moving parts for the filter, which ensures the stability of the system and structure within a mobile terminal.
  • the present invention provides a biometric composite imaging system and method for multiplexing visible light imaging with infrared light imaging.
  • the present invention provides an improved system and method for dual-mode composite imaging of visible light (self-photographing) and infrared light (biometric imaging) using a camera module.
  • The system and method of the present invention include different image signal processor (ISP) parameter configurations and system application flows corresponding to the two working modes, and in particular an improved algorithmic flow for biometric data in the infrared imaging mode, from the ISP output through encrypted data transmission, handshaking, pre-processing, and comparison.
  • The present invention also proposes a method that exploits the advantages of the dual mode to perform biometric liveness detection more effectively.
  • The composite imaging system of the present invention innovatively employs an improved image sensor for sub-regional imaging (one region has a color filter and the other region has no color filter) together with a dual-band filter design to achieve single-lens visible and infrared composite imaging.
  • Removing the color filter enhances the absorption of light energy in the 760-880 nm near-infrared band in the corresponding image sensor region, thereby reducing the power consumption of the active illumination source and achieving a low-power design for near-infrared biometric imaging on mobile terminal devices.
  • An imaging-function-multiplexed biometric composite imaging system comprises: a lens assembly for receiving light from a region of interest; and a filter assembly for filtering the received light so that only light in the allowed bands is imaged, the filter assembly comprising at least a visible-light bandpass region and an infrared-light bandpass region, the visible-light bandpass region allowing only visible light to pass through the filter assembly and the infrared-light bandpass region allowing only infrared light to pass through the filter assembly;
  • The image sensor includes a visible-light imaging region, an infrared-light imaging region, and a transition region between the two regions, and operates in one of a visible-light imaging mode and an infrared-light imaging mode, wherein the visible-light imaging region images visible light passing through the visible-light bandpass region in the visible-light imaging mode, and the infrared-light imaging region images infrared light passing through the infrared-light bandpass region in the infrared-light imaging mode.
  • An imaging-function-multiplexed biometric composite imaging method comprises: receiving light from a region of interest; selecting one of at least two imaging modes based on a user input, the imaging modes including a visible-light imaging mode and an infrared-light imaging mode; filtering the received light in the selected imaging mode, wherein visible light is passed in the visible-light imaging mode and infrared light is passed in the infrared-light imaging mode; and imaging the filtered light on a corresponding area of the image sensor, wherein the passed visible light is imaged and output in the visible-light imaging mode and the passed infrared light is imaged and output in the infrared-light imaging mode, the infrared light being from a biometric; wherein, in the infrared-light imaging mode, autofocusing on the biometric in the region of interest is achieved based on a specific physical property of the biometric used as image quality information.
  • A mobile terminal for biometric composite imaging comprises: an infrared light source for emitting infrared light to the biometric; a screen for displaying an image and providing an eye image preview window for guiding a user to cooperate in acquiring a biometric image, the eye image preview window being located in an upper portion of the screen area along the length direction of the mobile terminal; and a composite imaging camera module, the composite imaging camera module further comprising: a lens assembly for receiving light from the region of interest; a filter assembly for filtering the received light so that only light in the allowed bands is imaged, the filter assembly comprising at least a visible-light bandpass region and an infrared-light bandpass region, the visible-light bandpass region allowing only visible light to pass through the filter assembly and the infrared-light bandpass region allowing only infrared light to pass through the filter assembly; and an image sensor including a visible-light imaging region, an infrared-light imaging region, and a transition region between the two regions.
  • FIG. 1a and 1b are schematic views of a composite imaging system for biometrics in accordance with the present invention.
  • Figure 1c is a more detailed schematic diagram of a composite imaging system for biometrics in accordance with the present invention.
  • FIG. 2a is a front elevational view showing the relative positions of the image sensor, the filter, and the lens assembly;
  • FIG. 2b is a schematic cross-sectional view of the relative position of the image sensor, the filter, and the lens assembly;
  • FIG. 3 is a schematic view showing the position of the filter assembly placed on the surface of the image sensor;
  • Figure 4 is a schematic front view of an image sensor
  • Figure 5 is a schematic side view of an image sensor
  • Figure 6 is a schematic elevational view of the filter assembly
  • Figure 7 is a schematic side view of a filter assembly
  • FIG. 8 is a spectral characteristic diagram of a visible light band pass filter in a filter assembly in a visible light band (for example, 380-760 nm);
  • Figure 9 shows the spectral characteristics of the infrared light bandpass filter in the filter assembly in the infrared band (for example, 780-880 nm);
  • Figure 10 is a schematic illustration of an autofocus imaging unit of a composite imaging system in accordance with the present invention.
  • Figure 11 is a flow chart of a composite imaging method in accordance with the present invention.
  • FIG. 12 is a flow chart of an autofocus algorithm in an infrared light imaging mode, in accordance with an embodiment of the present invention.
  • Figure 13 is a flow chart of a dual-spectrum in vivo detection process in a composite imaging system
  • FIG. 14 is a schematic diagram showing the architecture of a uniformly programmable biometric recognition software in accordance with the present invention.
  • FIG. 15 is a schematic diagram showing a data flow of the software architecture shown in FIG. 14;
  • FIGS. 16a to 16c illustrate a preferred embodiment of a mobile terminal including a composite imaging system, wherein FIGS. 16a and 16b show a structural configuration of the mobile terminal, and FIG. 16c shows a user experience diagram of the mobile terminal when in use; and
  • FIGS. 17a to 17c illustrate another preferred embodiment of a mobile terminal including a composite imaging system, in which FIGS. 17a and 17b illustrate a structural configuration of the mobile terminal, and FIG. 17c illustrates a user experience of the mobile terminal in use.
  • A filter combination structure including two filter regions A and B (for example, two filters) is preferably placed between the image sensor chip and the optical lens, and the two do not overlap in the path of the incident light.
  • the incident light is filtered into a visible portion and an infrared portion after passing through the filter combining structure, and is received by a corresponding partition portion of the same image sensor.
  • The image sensor area is correspondingly divided into three different imaging regions, namely the visible-light imaging region, the infrared-light imaging region, and the transition region where visible and infrared imaging overlap.
  • Because the filter is located close to the image sensor in the module structure design, the area of the transition region where visible and infrared imaging overlap is small and mainly affects the visible-light image; after software processing, the visible-light image with the transition region removed can be output.
  • FIGS. 1a and 1b are schematic illustrations of a composite imaging system for biometrics in accordance with the present invention.
  • the composite imaging system 100 includes an image sensor 110, a filter assembly 120, a lens assembly 130, and a micro-motor actuator 140.
  • Figure 1c is a more detailed schematic diagram of a composite imaging system for biometrics in accordance with the present invention, in which, in addition to the image sensor 110, filter assembly 120, lens assembly 130, and micro-motor actuator 140 shown in Figures 1a and 1b, the light source 150 is further illustrated, as well as the positional relationship of the field of view of the composite imaging device with the region of interest and the biometric.
  • It is to be noted that the light source 150 in Fig. 1c is merely illustrative, and the invention is not limited to the schematic configuration of the light source 150 shown in Fig. 1c.
  • FIG. 2a, 2b to 7 show the specific structure and relative position of the various components in the composite imaging system.
  • FIG. 2a is a front view showing the relative positions of the image sensor 110, the filter assembly 120, and the lens assembly 130
  • FIG. 2b is a schematic cross-sectional view showing the relative positions of the image sensor, the filter, and the lens assembly.
  • the lens assembly 130 has a certain field of view and receives light from a multi-spectral light source.
  • Incident light from the multi-spectral light source passes through the lens assembly and reaches the filter assembly 120.
  • The filter assembly is a sub-region, sub-band bandpass filter combination that includes a visible-light bandpass region allowing only light in the visible band to pass and an infrared-light bandpass region allowing only light in the infrared band to pass.
  • the visible light band pass region and the infrared light band pass region may be a visible light band pass filter 121 and an infrared light band pass filter 122, respectively.
  • the incident multi-spectral light source is split into light in two bands and received by the image sensor.
  • The visible-light bandpass filter 121 in the filter assembly 120 has a coating that transmits in the visible band and reflects in the infrared band, and the infrared-light bandpass filter 122 has a coating that transmits in the infrared band and reflects in the visible band.
  • the image sensor 110 is a complete image sensor pixel combination array that transmits digital image pixel data collected by the image sensor to a backend encryption chip or processor through a data transmission interface (such as a MIPI interface).
  • The filter assembly 120 is placed in the front-end path of the optical path reaching the image sensor 110, so that light reaching the image sensor 110 has already been filtered by the filter assembly 120; that is, the multispectral light incident through the lens assembly 130 is filtered, and the filter assembly 120 splits it into visible and infrared wavelengths that are received by different portions of the image sensor.
  • Since the visible-light bandpass filter 121 and the infrared-light bandpass filter 122 occupy different areas in the optical path of the filter assembly 120, the image sensor 110 includes a visible-light imaging region and an infrared-light imaging region corresponding to the visible-light bandpass region (e.g., the visible-light bandpass filter 121) and the infrared-light bandpass region, and these regions can be divided by software.
  • image sensor 110 may also include a transition region that has overlapping imaging of visible and infrared light.
  • the visible light imaging region of the image sensor 110 images visible light passing through the visible light bandpass region on its corresponding optical path, and the infrared light imaging region passes through the infrared light bandpass region on its corresponding optical path Infrared light is imaged.
  • The area of the filter assembly 120 is greater than the area of the image sensor 110. Because current mobile phones require the imaging module to be ever thinner, the spacing between the filter assembly and the image sensor needs to be small, generally less than 2 mm, so the error introduced by wide-angle spreading of the light over this spacing is small and can be ignored.
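To illustrate why a small filter-to-sensor gap keeps the overlap error small, the following sketch computes the lateral displacement of a ray crossing the gap; the chief-ray angle used here is an assumed example value, not a figure from the patent.

```python
import math

def lateral_spread_mm(gap_mm: float, chief_ray_angle_deg: float) -> float:
    """Lateral displacement of a ray crossing the filter-to-sensor gap.

    A ray arriving at the filter boundary with the given chief-ray angle
    lands on the sensor displaced by gap * tan(angle); this bounds how wide
    the visible/infrared transition region can grow with the gap.
    """
    return gap_mm * math.tan(math.radians(chief_ray_angle_deg))

# Illustrative numbers only: a 30-degree chief-ray angle is an assumption.
for gap in (0.5, 1.0, 2.0):
    print(f"gap {gap} mm -> spread {lateral_spread_mm(gap, 30.0):.2f} mm")
```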
  • Image sensors used in mobile terminals such as mobile phones and tablet computers generally consist of a matrix of pixels, where each pixel includes a micro lens and a color filter.
  • Adding the color filter optimizes the color visible-light image and corrects reddening of the color image, while removing the color filter enhances the pixel's absorption of light energy in the 760-880 nm band.
  • The composite imaging system of the present invention innovatively employs an improved image sensor for sub-regional imaging (where the visible-light imaging region has a color filter and the color filter is removed over the infrared-light imaging region) together with the dual-band filter design, achieving single-lens visible and infrared composite imaging.
  • The removal of the color filter enhances the spectral sensitivity of the corresponding image sensor region in the infrared range and enhances the absorption of light energy in the 760-880 nm near-infrared band, thereby optimizing and enhancing the image quality of that region for iris imaging.
  • As a result, even a lower-energy infrared (IR) LED source can maintain strong reception of infrared spectral energy, enabling imaging of richer iris detail, which reduces the power consumption required of the active illumination source and realizes a low-power near-infrared biometric imaging design for mobile terminal devices.
  • In the composite imaging system 100, the visible-light imaging region of the image sensor 110 has a color filter, while the color filter corresponding to the infrared-light imaging region of the image sensor 110 is removed.
  • In the self-photographing visible-light mode, the image sensor can preferably output only the selfie visible-light image of the visible-light imaging region with the transition region removed. Since the software removes the transition region when the image is output, the acquired selfie is not reddened by infrared light; in the infrared-light mode, the image sensor can preferably output only the infrared-light image of the infrared-light imaging region.
  • The acquired biometric image is thus not affected by visible light from various complex environments, such as visible-light reflection spots formed on the surface of the eyes by window reflections, or bright spots reflected from a user's glasses by various ambient lights, which would otherwise block the iris information and degrade the recognition rate and user experience. These two output images do not interfere with each other, which is the most preferable two-in-one solution.
  • Thus, the present invention can realize multiplexed imaging of visible and infrared light without physically switching the position of a filter and without moving parts for the filter, which ensures the stability of the system and structure within a mobile terminal.
  • For mobile phones and tablet computers, the focusing range of visible-light imaging is generally far, while the infrared imaging focusing range for biometric recognition is relatively close. Therefore, if the focal position of the imaging system is fixed, clear imaging cannot be achieved in both modes.
  • The composite imaging system 100 of the present invention may therefore further include a micro-motor actuator 140 that controls a moving component (not shown) to move the lens assembly 130, so that in each imaging mode the lens assembly 130 enters the corresponding focus mode, accommodating the different focal-length requirements of the visible and infrared imaging modes.
  • the use of the moving parts can solve the problem of axial focus chromatic aberration of visible light and infrared light imaging.
  • the moving part is used to adjust the movement of the lens assembly.
  • The micro-motor actuator 140 is configured to adjust the focus separately for the different working modes (visible mode or infrared mode): generally, the front visible-light mode of the mobile terminal is mainly used for selfies, while the infrared-light imaging mode requires focusing at a closer imaging distance, so the lens sits at a different position relative to the image sensor than in the visible-light mode.
  • FIG. 3 is a schematic view showing the position of the filter assembly 120 placed on the surface of the image sensor 110.
  • The filter assembly 120 can be a component separate from the image sensor 110 (i.e., the filter assembly 120 is slightly larger than the image sensor 110), or it can be packaged over the surface of the image sensor wafer by a packaging process, as shown in Figure 3.
  • FIG. 4 is a schematic front view of image sensor 110
  • FIG. 5 is a schematic side view of image sensor 110.
  • the image sensor 110 according to the present invention includes a region for visible light imaging and a region for infrared light imaging, and a transition region between the two regions.
  • the area of the image sensor 110 for visible light imaging and the area for infrared light imaging, and the transition area between the two areas are shown in FIGS. 4 and 5.
  • To meet accuracy and minimum-resolution requirements, the image sensor 110 in the composite imaging system 100 of the present invention may be a high-pixel-count image sensor. Taking an ordinary person's iris with an average diameter of 11 mm as an example, the ISO standard requires the outer diameter of a single iris in the image to span 120 pixels. If the iris is to be recognized with a lens having a horizontal FOV of 60 degrees at a normal use distance (30 cm), the image sensor needs at least 3773 pixels in the horizontal direction.
  • For example, a CMOS image sensor with a resolution of 4680 (W) x 3456 (H) pixels, i.e. on the order of 13M pixels or more, can be used.
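The horizontal resolution figure can be checked from the stated assumptions (11 mm iris, 120 pixels across the iris per the ISO requirement, 60 degree horizontal FOV, 30 cm working distance); the short sketch below redoes the arithmetic.

```python
import math

def min_horizontal_pixels(iris_diameter_mm: float = 11.0,
                          required_iris_pixels: int = 120,
                          horizontal_fov_deg: float = 60.0,
                          distance_mm: float = 300.0) -> float:
    """Minimum horizontal sensor resolution for ISO-compliant iris imaging."""
    # Width of the scene covered by the lens at the working distance.
    field_width_mm = 2.0 * distance_mm * math.tan(math.radians(horizontal_fov_deg) / 2.0)
    # Pixel density needed so the iris spans the required number of pixels.
    pixels_per_mm = required_iris_pixels / iris_diameter_mm
    return field_width_mm * pixels_per_mm

print(round(min_horizontal_pixels()))  # about 3779, consistent with the ~3773 pixels cited above
```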
  • FIG. 6 is a schematic front view of the filter assembly 120 and FIG. 7 is a schematic side view of the filter assembly 120.
  • the visible light bandpass filter 121 (for visible light) and the infrared light bandpass filter 122 (for infrared light) of the filter assembly 120 correspond to the region of the image sensor 110 for visible light imaging and for infrared, respectively. The area of light imaging.
  • Figures 8 and 9 illustrate the spectral characteristics of the filter assembly 120 in the visible (e.g., 380-760 nm) band and the spectral characteristics of the filter assembly in the infrared (e.g., 780-880 nm), respectively.
  • When switching between the visible-light imaging mode and the infrared-light imaging mode, the composite imaging system 100 achieves fast focusing in both imaging modes under software control, using a pre-calculated step-size lookup table corresponding to the two modes. Because the focusing distance of infrared imaging is closer than that of visible-light imaging, the lens position in the infrared-light imaging mode is axially closer to the human eye than in the visible-light mode.
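A minimal sketch of software-controlled mode switching with a pre-computed step lookup table is shown below; the step values and the LensDriver interface are placeholders for illustration, not parameters defined by the patent.

```python
# Pre-computed lens (voice-coil) positions for the two working modes.
# The actual step values would be calibrated per module; these are placeholders.
MODE_FOCUS_STEPS = {
    "visible": 180,   # farther focus for ordinary selfie distances
    "infrared": 420,  # closer focus for near-infrared iris imaging
}

class LensDriver:
    """Minimal stand-in for the micro-motor actuator interface (assumed API)."""
    def __init__(self) -> None:
        self.position = 0
    def move_to(self, step: int) -> None:
        self.position = step

def switch_mode(driver: LensDriver, mode: str) -> None:
    """Jump straight to the pre-computed focus position for the new mode,
    so the system is already near focus before fine autofocus starts."""
    driver.move_to(MODE_FOCUS_STEPS[mode])

driver = LensDriver()
switch_mode(driver, "infrared")   # lens moves toward the eye, as described
```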
  • composite imaging system 100 is particularly capable of achieving fast focus in an infrared light imaging mode.
  • Figure 10 is a schematic illustration of an autofocus unit of a composite imaging system in accordance with the present invention.
  • the autofocus unit includes an autofocus algorithm module that can be implemented by or included in a processor (not shown).
  • The autofocus algorithm module uses specific physical properties of the biometric, obtained from the image sensor, as image quality information to control the micro-motor actuator, thereby autofocusing on the biometric.
  • the micro-motor actuator is used to control the moving component to move the lens assembly to achieve auto-focusing of the biometrics of the region of interest.
  • The specific physical property may include the pupil spacing between the irises of the two eyes.
  • The specific physical property may include the outer diameter of the iris of a single eye.
  • The processor in the composite imaging system can be further configured to compute the specific physical property of each frame generated by the image sensor in real time and map it through a pre-computed step-size lookup table, thereby achieving fast focusing on the biometric.
  • The composite imaging system thus employs a new approach: a specific physical property of the biometric that has a relatively objective, constant value is measured in the acquired electronic image, the attribute value of that property is used as the image quality information of the electronic image, and the lens assembly is adjusted according to the attribute value to implement autofocus control on the biometric in the region of interest.
  • This ensures fast, software-controlled focusing when switching from the visible-light imaging mode (far focus) to the infrared-light imaging mode (near focus), improving the user experience and the quality of the acquired biometric image, and thereby the recognition accuracy. Specific details of the fast focusing of the composite imaging system of the present invention in the infrared-light imaging mode are further described in the method embodiments below.
  • the present invention also provides a composite imaging method that can be implemented by the composite imaging system described above.
  • Figure 11 shows a flow chart of a composite imaging method in accordance with the present invention.
  • The method includes receiving light from a region of interest (S1110); selecting one of at least two imaging modes based on a user input, the imaging modes including a visible-light imaging mode and an infrared-light imaging mode (S1120); filtering the received light in the selected imaging mode (S1130); and imaging the filtered light on the corresponding area of the image sensor (S1140), wherein the passed visible light is imaged for output in the visible-light imaging mode, the passed infrared light is imaged for output in the infrared-light imaging mode, and the infrared light is from a biometric.
  • the image sensor may further output image data corresponding to a corresponding area of the image sensor pixel array to a cryptographic chip or processor for further processing through a data transfer interface, such as a MIPI interface.
  • autofocusing of the biometrics of the region of interest is achieved based on the specific physical property of the biometric as image quality information (S1150).
  • The composite imaging method of the present invention further includes computing the specific physical attribute of each frame generated by the image sensor and mapping it through a pre-computed step-size lookup table to achieve fast focusing on the biometric.
  • the user switches the camera into a visible light imaging mode or an infrared light imaging mode through software control.
  • the image sensor (CMOS/CCD) chip includes a visible light imaging area and an infrared light imaging area in accordance with the corresponding design area size of the visible light band pass filter 121 and the infrared light band pass filter 122 in the filter assembly.
  • Under software control, the image signal processor (ISP) selects the corresponding visible-light imaging region to work and calls the corresponding visible-light ISP parameter settings to optimize the visible-light imaging effect.
  • The micro-motor actuator can be used to control the moving parts to move the lens assembly into the visible-light focus mode. Autofocus is performed using a conventional focusing method based on image quality evaluation (such as contrast focusing, sketched below), and an image with the resolution and output format corresponding to the position of the visible-light imaging region is output.
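As an illustration of the conventional contrast-based focusing mentioned above, the following sketch implements a simple hill-climbing search over lens steps; the capture and move_to callables and the step range are assumed interfaces, not part of the patent text.

```python
import numpy as np

def sharpness(image: np.ndarray) -> float:
    """Contrast/sharpness score: mean squared gradient magnitude."""
    gy, gx = np.gradient(image.astype(np.float64))
    return float(np.mean(gx * gx + gy * gy))

def contrast_autofocus(capture, move_to, start: int, stop: int, coarse: int = 20) -> int:
    """Simple hill-climbing focus search.

    `capture()` returns the current frame and `move_to(step)` drives the
    lens motor; both are assumed interfaces for this sketch.
    """
    best_step, best_score = start, -1.0
    for step in range(start, stop + 1, coarse):          # coarse sweep
        move_to(step)
        score = sharpness(capture())
        if score > best_score:
            best_step, best_score = step, score
    for step in range(max(start, best_step - coarse),    # fine sweep around the peak
                      min(stop, best_step + coarse) + 1):
        move_to(step)
        score = sharpness(capture())
        if score > best_score:
            best_step, best_score = step, score
    move_to(best_step)
    return best_step
```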
  • the ISP selects the corresponding infrared light imaging area to work, and the corresponding ISP parameter setting of the infrared light imaging is called to optimize the effect of the infrared light imaging.
  • The micro-motor actuator can be used to control the moving component to move the lens assembly into the infrared-light focus mode, and an image with the resolution and output format corresponding to the position of the infrared-light imaging region is output.
  • The autofocus process in the infrared-light mode is as follows: the imaging system analyzes the image of the biometric in the region of interest captured through the lens assembly, and from the biometric portion of the image calculates the corresponding biometric image quality information, including but not limited to image sharpness, contrast, average gray level, image information entropy, pupil spacing, pupil diameter, iris outer diameter, horizontal eye-corner width, and other specific physical properties.
  • The specific physical property may include the pupil spacing between the irises of the two eyes.
  • The specific physical property may include the outer diameter of the iris of a single eye.
  • the calculated biometric image quality information may be a set of image quality values or a single image quality indicator.
  • The system controls the micro-motor to change the position of the lens assembly so that the image quality calculated from the acquired image is optimized, thereby completing autofocus control on the biometric in the region of interest.
  • the position of the lens will be closer to the human eye in the axial direction than the visible light mode.
  • The current frame image is acquired from the image sensor (S1210).
  • The specific physical properties of the biometric (e.g., binocular pupil spacing or iris outer diameter) are computed from the frame (S1220).
  • The computed specific physical attribute is mapped through the pre-computed lookup table (S1230) to obtain the optimal imaging object distance corresponding to the focal length of the current optical system (S1240).
  • The direction and step size by which the lens assembly needs to move are calculated, and the micro-motor actuator is driven to move the lens assembly to the designated position (S1260).
  • The above steps are repeated until the optimal imaging object distance and depth of field are satisfied, thereby completing the autofocus process (S1270), as sketched below.
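A minimal sketch of this S1210-S1270 loop is given below, assuming the binocular pupil spacing as the specific physical property; the lookup-table values, tolerances, and interfaces are illustrative placeholders, not calibration data from the patent.

```python
import bisect

# Pre-computed calibration: measured binocular pupil spacing (pixels) versus
# the lens motor step that focuses that object distance (placeholder values).
PUPIL_PX = [160, 200, 240, 280, 320, 360]
LENS_STEP = [300, 340, 380, 420, 460, 500]

def step_for_pupil_spacing(pupil_px: float) -> int:
    """Map the measured pupil spacing to a lens step via the lookup table,
    interpolating linearly between calibrated points."""
    i = bisect.bisect_left(PUPIL_PX, pupil_px)
    if i <= 0:
        return LENS_STEP[0]
    if i >= len(PUPIL_PX):
        return LENS_STEP[-1]
    x0, x1 = PUPIL_PX[i - 1], PUPIL_PX[i]
    y0, y1 = LENS_STEP[i - 1], LENS_STEP[i]
    return round(y0 + (y1 - y0) * (pupil_px - x0) / (x1 - x0))

def focus_loop(capture, measure_pupil_px, move_to, current_step: int,
               tolerance: int = 2, max_iter: int = 10) -> int:
    """S1210-S1270 style loop: measure the physical property on each frame,
    look up the target lens step, and drive the motor until it converges."""
    for _ in range(max_iter):
        frame = capture()                                          # S1210
        target = step_for_pupil_spacing(measure_pupil_px(frame))   # S1220-S1240
        if abs(target - current_step) <= tolerance:                # S1270
            break
        move_to(target)                                            # S1260
        current_step = target
    return current_step
```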
  • The composite imaging system also subtly exploits its dual-spectrum imaging capability to implement a liveness detection function.
  • The liveness detection function can be controlled and implemented by a processor (not shown), or a functional module implementing liveness detection can be included in the processor.
  • The composite imaging system includes a liveness detection unit that detects whether the obtained iris image comes from a real person or from a forged iris.
  • The basic principle of the liveness detection unit is to judge liveness using the different optical reflection characteristics of normal human biological tissue and of iris-forgery materials under visible and infrared light.
  • the composite imaging system can sequentially enter the visible light imaging mode and the infrared light imaging mode through a camera module during imaging, and sequentially acquire visible light images and near-infrared images of the current human eye.
  • Based on these two images, an improved liveness recognition scheme can be obtained, as follows.
  • For an object surface point $p$ illuminated by light of wavelength $\lambda$, the intensity of the reflected light can be written as $I_{r,\lambda}(p) = \rho_{\lambda}(p)\, I_{s,\lambda}(p)\, \cos\theta(p)$, where $I_{r,\lambda}(p)$ is the intensity of the reflected light of wavelength $\lambda$, $\rho_{\lambda}(p)$ is the reflectance of the material at wavelength $\lambda$, $I_{s,\lambda}(p)$ is the intensity of the incident light source of wavelength $\lambda$, and $\theta(p)$ is the angle between the surface normal at point $p$ and the vector pointing from $p$ to the camera.
  • Letting $\lambda_1$ be the wavelength of visible light and $\lambda_2$ the wavelength of infrared light, the reflectance ratio of the object surface at point $p$ is obtained as $R(p) = \frac{\rho_{\lambda_1}(p)}{\rho_{\lambda_2}(p)} = \frac{I_{r,\lambda_1}(p)\, I_{s,\lambda_2}(p)}{I_{r,\lambda_2}(p)\, I_{s,\lambda_1}(p)}$, the angular term $\cos\theta(p)$ cancelling out.
  • Assuming that the incident visible and infrared light intensities are uniformly distributed over the surface of the imaged object, the visible incident intensity $I_{s,\lambda_1}$ can be approximated by the reading of the ambient light sensor attached to the mobile device, and the near-infrared incident intensity $I_{s,\lambda_2}$ can be obtained from the illumination parameters of the near-infrared illuminator on the mobile device; the reflectance ratio then simplifies to $R(p) = k\, \frac{I_{r,\lambda_1}(p)}{I_{r,\lambda_2}(p)}$, with $k = I_{s,\lambda_2}/I_{s,\lambda_1}$ a positive constant.
  • The reflected intensities $I_{r,\lambda_1}(p)$ and $I_{r,\lambda_2}(p)$ can be obtained from the pixel gray values corresponding to point $p$ in the acquired visible-light and infrared-light images, so the corresponding reflectance ratio image can be calculated element-wise from the acquired images as $R = k\, P / Q$, where $R$ is the reflectance ratio image, $k$ is a positive constant, $P$ is the acquired visible-light image, and $Q$ is the acquired infrared-light image.
  • Figure 13 is a flow diagram of the dual-spectrum liveness detection process in the composite imaging system. Taking iris recognition as an example, the steps of the liveness detection process are as follows.
  • The liveness detection unit controls the imaging system to enter the visible-light imaging mode and the infrared-light imaging mode in sequence (S1310), sequentially acquiring a visible-light image and a near-infrared image of the current human eye.
  • The human-machine interface design guides the user to keep the acquisition distance and acquisition angle as fixed and stable as possible.
  • An image processing algorithm is used to automatically register the dual-spectrum images (S1320) and to automatically detect the pupil and iris positions in the registered images (S1330), and the same region to be processed is segmented around the detected pupil and iris (S1340). The corresponding reflectance ratio image is then calculated from the segmented dual-spectrum images (S1350), and distribution characteristics of the reflectance ratio, such as the histogram, gradient, and variance, are analyzed (S1360).
  • If the distribution characteristic parameter of the reflectance ratio is within a preset range, the current eye is determined to be a prosthesis or a forged iris; otherwise, the current eye is determined to be live (S1370).
  • The above method combines information from the two spectra of visible and infrared light in a composite calculation for liveness detection, achieving more stable and robust liveness detection and recognition for different ethnic groups (light and dark irises) in a variety of complex environments.
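The computation of the reflectance ratio image and its distribution features could be sketched as follows; the feature set and the decision thresholds are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def reflectance_ratio_image(visible: np.ndarray, infrared: np.ndarray,
                            k: float = 1.0, eps: float = 1e-6) -> np.ndarray:
    """R = k * P / Q computed element-wise on registered, same-sized images."""
    p = visible.astype(np.float64)
    q = infrared.astype(np.float64)
    return k * p / (q + eps)          # eps avoids division by zero

def liveness_features(ratio: np.ndarray) -> dict:
    """Distribution characteristics of the reflectance ratio (histogram,
    gradient, variance), corresponding to step S1360."""
    hist, _ = np.histogram(ratio, bins=32, density=True)
    gy, gx = np.gradient(ratio)
    return {
        "mean": float(ratio.mean()),
        "variance": float(ratio.var()),
        "mean_gradient": float(np.mean(np.hypot(gx, gy))),
        "histogram": hist,
    }

def is_live(ratio: np.ndarray, forged_var_range=(0.0, 0.02)) -> bool:
    """Decision rule following the text: if the distribution parameter falls
    inside a preset 'forgery' range, report a fake; otherwise a live eye.
    The range values here are illustrative placeholders."""
    v = liveness_features(ratio)["variance"]
    return not (forged_var_range[0] <= v <= forged_var_range[1])
```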
  • the composite imaging system further includes an image encryption unit to provide a function of encrypting the acquired biometric image.
  • The image encryption unit may be implemented by a processor (not shown), included in the processor, or included as a separate modular unit in the image sensor or in a module of the composite imaging system. It is to be noted that the image encryption unit may be implemented in the same processor as the autofocus unit described above, or separately in different processors.
  • the image encryption unit works as follows: after the software-controlled composite imaging system enters the infrared light imaging mode and acquires the infrared image, the image encryption function is activated, the obtained biometric image is encrypted, and the encrypted data is output for Further processing.
  • In the visible-light imaging mode, the image encryption unit is not started; the obtained visible-light image is not encrypted and is output directly.
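A minimal sketch of the mode-dependent behaviour of such an image encryption unit is shown below; the patent does not specify a cipher, so AES-GCM (via the Python cryptography package) is used purely as an example, and the key handling is simplified.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative only: in practice the key would be held by a secure element or
# back-end encryption chip rather than generated in application code.
_KEY = AESGCM.generate_key(bit_length=256)

def output_frame(frame_bytes: bytes, mode: str) -> bytes:
    """Encrypt infrared (biometric) frames before output; pass visible frames
    through unchanged, mirroring the behaviour of the image encryption unit."""
    if mode != "infrared":
        return frame_bytes                      # visible selfie: no encryption
    aesgcm = AESGCM(_KEY)
    nonce = os.urandom(12)                      # unique per frame
    ciphertext = aesgcm.encrypt(nonce, frame_bytes, b"iris-frame")
    return nonce + ciphertext                   # receiver splits nonce/ciphertext
```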
  • FIG. 14 is a schematic diagram showing a unified programmable biometric software architecture in accordance with the present invention, which may be embodied in a composite imaging system in accordance with the present invention.
  • the unified programmable biometric software architecture is based on the following objectives:
  • Unified software architecture makes it easier to integrate sensors from different manufacturers
  • a unified interface allows application developers to ignore the interaction between biometric algorithms and iris collectors
  • The software architecture shown in FIG. 14 includes: a biometric interface manager for providing a biometric recognition interface to third-party developers; a biometric algorithm processor for performing operations on biometric data; and a biometric collection device for collecting biometric information.
  • the software architecture can interact with other applications.
  • Figure 15 is a diagram showing the data flow of the software architecture shown in Figure 14. Specifically, the transfer of data frames and commands between the biometric interface manager, the biometric algorithm processor, the biometric collection device, and the application is illustrated in FIG. 15.
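The three roles of the architecture could be expressed as interfaces roughly as follows; all class and method names are illustrative assumptions, not an API defined by the patent.

```python
from abc import ABC, abstractmethod

class BiometricCollectionDevice(ABC):
    """Wraps a concrete sensor (e.g. the composite imaging camera module)."""
    @abstractmethod
    def capture_frame(self) -> bytes: ...

class BiometricAlgorithmProcessor(ABC):
    """Performs the biometric-data operations (pre-processing, matching)."""
    @abstractmethod
    def enroll(self, frame: bytes) -> bytes: ...
    @abstractmethod
    def verify(self, frame: bytes, template: bytes) -> bool: ...

class BiometricInterfaceManager:
    """Single entry point exposed to third-party applications, so that apps
    never talk to the algorithm or the collector directly."""
    def __init__(self, device: BiometricCollectionDevice,
                 processor: BiometricAlgorithmProcessor) -> None:
        self._device = device
        self._processor = processor

    def authenticate(self, template: bytes) -> bool:
        frame = self._device.capture_frame()             # command flows down
        return self._processor.verify(frame, template)   # result flows back up
```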
  • 16a-16c and 17a-17c are schematic illustrations of two innovative implementations of a mobile terminal including a composite imaging system in accordance with the present invention.
  • the mobile terminal utilizes one or more infrared LEDs in the 780-880 nm band as a light source and includes a composite imaging camera module coupled to the composite imaging system in accordance with the present invention.
  • FIG. 16a to 16c show a preferred embodiment of a mobile terminal including the composite imaging system of the present invention, wherein Figs. 16a and 16b show the structural configuration of the mobile terminal, and Fig. 16c shows the mobile terminal in use.
  • the composite imaging system implemented as the composite imaging camera module 100 is disposed on one side of the front side of the screen of the mobile terminal (such as the top of the screen or the bottom of the screen, which is the top in this embodiment).
  • The infrared light source 150 (for example, an infrared light-emitting diode (LED)) and the composite imaging camera module 100 are disposed on the same side of the front surface of the screen of the mobile terminal, where the horizontal distance between the center of the infrared light source 150 and the center of the composite imaging camera module 100 is in the range of 2-8 cm, which helps eliminate reflected spots when the user wears glasses.
  • the infrared source 150 can be composed of one or more infrared LEDs having a central spectral range of 780-880 nm.
  • The filter assembly and the image sensor are configured such that the visible-light bandpass filter 121 is placed above the infrared-light bandpass filter 122 (N direction), and the corresponding visible-light imaging region of the image sensor is placed above the infrared-light imaging region (N direction).
  • The area of the visible-light imaging region is greater than 50% of the image sensor area, the area of the infrared-light imaging region is less than 50% of the image sensor area, and the transition region, located between the visible-light and infrared-light imaging regions, occupies less than 15% of the image sensor area; a sketch of such a layout follows.
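A small sketch of how such area fractions translate into sensor row bands is given below; the specific fractions used are example values chosen only to satisfy the stated constraints.

```python
def region_rows(total_rows: int, visible_frac: float = 0.60,
                transition_frac: float = 0.10) -> dict:
    """Split sensor rows into visible / transition / infrared bands along the
    length of the device. The fractions are example values satisfying the
    constraints: visible > 50%, transition < 15%, infrared < 50%."""
    infrared_frac = 1.0 - visible_frac - transition_frac
    assert visible_frac > 0.50 and transition_frac < 0.15 and infrared_frac < 0.50
    visible_end = round(total_rows * visible_frac)
    transition_end = visible_end + round(total_rows * transition_frac)
    return {
        "visible": (0, visible_end),
        "transition": (visible_end, transition_end),
        "infrared": (transition_end, total_rows),
    }

print(region_rows(3456))  # e.g. {'visible': (0, 2074), 'transition': (2074, 2420), 'infrared': (2420, 3456)}
```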
  • The above configuration may also be described as follows: the composite imaging camera module 100 and the infrared light source 150 are located at the top of the screen along the length direction of the mobile terminal; the visible-light bandpass filter 121 is placed above the infrared-light bandpass filter 122 along the length direction of the mobile terminal; and the visible-light imaging region of the image sensor is placed above the infrared-light imaging region along the length of the mobile terminal, with the transition region located between the visible-light imaging region and the infrared-light imaging region. Additionally, when the infrared-light imaging mode is activated, an eye image preview window 160 can be provided in the screen.
  • The eye image preview window 160 outputs only the image of the corresponding infrared light imaging region (i.e., the biometric image), to guide the user in cooperating with the acquisition of the biometric image.
  • In use, the eye image preview window 160 may be placed near the upper or the lower side of the mobile terminal's screen.
  • In this embodiment, the eye image preview window 160 is located at the upper portion of the screen region (i.e., toward the N direction) along the length direction of the mobile terminal, that is, near the side of the composite imaging camera module 100.
  • Images of the user's eyes can be output in the eye image preview window 160 so that a biometric image (e.g., an iris image) can be acquired for subsequent preprocessing or encryption and recognition processes.
  • The composite imaging camera module 100 captures visible and infrared light from the biometric feature. Because lens imaging inverts the scene, infrared light from the biometric feature enters the interior of the mobile terminal through the composite imaging camera module 100 and passes through the infrared light bandpass filter 122, located toward the lower end along the length of the mobile terminal, to reach the likewise lower infrared light imaging region of the image sensor, where the biometric feature is imaged in infrared light.
  • Placing the infrared light source 150 at the upper end of the mobile terminal also helps illuminate the biometric feature more fully when the upper portion of the terminal is tilted toward the user, so that during biometric identification the energy of the infrared light source 150 mainly illuminates the user's biometric features (such as the iris).
  • Figures 17a through 17c illustrate another preferred embodiment of a mobile terminal including the composite imaging system of the present invention.
  • The infrared light source 150 is located below the screen along the length direction of the mobile terminal (i.e., toward the S direction), the visible light bandpass filter 121 is placed below the infrared light bandpass filter 122 along the length direction of the mobile terminal, and the visible light imaging region of the image sensor is placed below the infrared light imaging region along the length direction of the mobile terminal, with the transition region located between the visible light imaging region and the infrared light imaging region.
  • In use, when the mobile terminal enters the infrared light imaging mode to perform infrared imaging of the biometric feature (e.g., enters the iris recognition mode), an eye image preview window 160 is provided at the upper portion of the screen area of the mobile terminal (i.e., toward the N direction); the preview window shows a preview of the infrared light imaging region of the image sensor to guide the user, as shown in Figure 17a.
  • The eye image preview window 160 is located at the upper portion of the screen region (i.e., toward the N direction) along the length direction of the mobile terminal, that is, near the side of the composite imaging camera module 100, which helps direct the user's gaze toward the composite imaging camera during iris recognition and thereby reduces occlusion of the iris texture by the upper eyelid and eyelashes.
  • As shown in FIG. 17c, the user can tilt the upper portion of the mobile terminal (i.e., the side containing the composite imaging camera module 100) away from the user; while the user looks at the eye image preview window 160, software-based panning control ensures that the preview image in the window is a preview of the infrared light imaging region of the image sensor, so that images of the user's two eyes are output in the eye image preview window 160 and a biometric image (e.g., an iris image) can be acquired for subsequent preprocessing or encryption and recognition processes.
  • If the preview window were placed at the lower portion of the screen area of the mobile terminal, the iris texture would be occluded by the upper eyelid and eyelashes during use, so the present invention does not favor that configuration.
  • Placing the infrared light source 150 at the lower end of the mobile terminal helps ensure that, when the upper portion of the terminal is tilted away from the user while the user holds the terminal for biometric identification, the biometric feature is illuminated more fully, so that the energy of the infrared light source 150 mainly illuminates the user's biometric features (e.g., the iris) during identification.
  • the image sensor employs a 13M CMOS image sensor (4680(W) x 3456(H)).
  • The visible light portion height is greater than 50% of the height of the entire image sensor (i.e., more than 1728 pixels), and the infrared light portion height is less than 50% (i.e., fewer than 1728 pixels).
  • The filter assembly is placed between the optical lens and the image sensor; its width and height are slightly larger than those of the image sensor, and the visible light and infrared light pass regions of the filter can be slightly larger than the corresponding regions of the image sensor to ensure that those regions are fully covered.
  • According to a preferred embodiment, the image sensor is a high-resolution CMOS image sensor with a horizontal image resolution of more than 2400 pixels.
  • For the front camera of a mobile terminal, the self-portrait trend is toward a large field of view (the diagonal field of view of the lens is generally around 70-80 degrees), whereas iris recognition images a local region and generally requires a small field of view, so the two lens requirements often cannot be combined.
  • To multiplex a single wide-field lens for both self-portraits and iris recognition, the overall area of the imaging chip must be increased. According to the ISO standard, the outer circle of a single iris needs a diameter of 120 pixels in the image; to perform iris recognition at 30 cm with a lens whose horizontal field of view is about 59 degrees, the imaging chip needs at least 3200 pixels in the horizontal direction, corresponding to an 8M CMOS sensor. With a 13M CMOS sensor, the pressure on the FOV is even smaller.
  • the image sensor employs a 13M CMOS image sensor (4680(W) x 3456(H)).
  • The visible light and infrared light imaging regions of the image sensor may preferably be designed such that the visible light imaging region height is 80% of the height of the entire image sensor (2756 pixels) and the infrared light imaging region height is 20% of that height (700 pixels).
  • An auxiliary infrared light source can be added to the display screen on the front of the mobile terminal, so that the screen itself can emit infrared light.
  • When the mobile terminal is in the infrared light imaging mode, the auxiliary infrared light source can provide supplementary infrared illumination of the biometric feature, thereby saving power of the infrared light source provided on the mobile terminal.
  • Under software control, the local infrared portion of the screen is lit to illuminate the iris of the human eye.
  • Subsequently, an OLED light source can be used to illuminate the screen.
  • In addition, if the camera is located above the screen, the image sensor can be divided such that the infrared light portion is above the visible light portion.
  • The composite imaging system and method of the present invention ensure normal use of the front camera of the mobile device, such as for self-portraits, while acquiring, at the distance at which the user normally operates the device (e.g., 20-50 cm), infrared biometric images that satisfy the recognition requirements for biometric features (e.g., the iris), without affecting the user experience.
  • Region-partitioned infrared imaging requires only part of the image sensor, rather than the entire sensor, to receive the illumination of the infrared source, thereby reducing the total power the image sensor demands of the infrared illumination source; that is, an infrared LED source with less energy and a smaller emission angle can still maintain sufficient absorption of infrared spectral energy in the infrared region, yielding a biometric image rich in texture detail.
  • The composite imaging system and method of the present invention use a fast, effective near-infrared autofocus algorithm based on biometric information, which ensures rapid focal-length correction when software switches from the visible light far-focus mode to the infrared near-focus mode and thus helps improve the quality of the acquired near-infrared biometric images.
  • The composite imaging system and method of the present invention can image both the visible light portion and the infrared light portion through a single camera module, and by analyzing the differences between the visible light and infrared light images, an improved liveness detection scheme can be obtained.
  • the composite imaging system and method of the present invention can encrypt the acquired biometric image to ensure the security of the user's personal sensitive information.
  • the encryption method in the present invention performs selective image encryption by image quality judgment, thereby effectively reducing the requirement for data processing throughput of the encryption chip, and ensuring real-time image encryption.
  • In addition, the encryption method of the present invention outputs a downsampled image for preview, which correctly guides the user during acquisition of the biometric image without affecting the user experience; because the downsampled preview image does not contain sufficiently rich biometric information, it does not leak the user's personal sensitive information.
  • The present invention uses iris recognition as an example to illustrate its composite imaging system and method with multiplexed imaging functions.
  • Aspects of the present invention are not limited to recognition of the human iris and can also be applied to other biological features usable for identification, such as the white of the eye (sclera), fingerprints, the retina, the nose, the face (two-dimensional or three-dimensional), eye prints, lip prints, and veins.

Abstract

A biometric composite imaging system and method with multiplexed imaging functions, and a mobile terminal including the composite imaging system. The composite imaging system includes: a lens assembly (130); a filter assembly (120) containing at least a visible light bandpass region and an infrared light bandpass region; and an image sensor (110) including a visible light imaging region, an infrared light imaging region, and a transition region between the two regions. The image sensor (110) operates in one of a visible light imaging mode and an infrared light imaging mode. In the infrared light imaging mode, autofocus on the biometric feature in the region of interest is achieved by using a specific physical attribute of the biometric feature as image quality information.

Description

与可见光复用的生物特征复合成像系统和方法 技术领域
本发明涉及图像处理、生物识别和光学成像技术领域,尤其涉及一种可见光复用的生物特征复合成像技术。
背景技术
生物识别技术是新兴的身份识别技术。为了实现稳定识别,大部分的生物识别技术(包括人脸识别、虹膜识别和三维人脸识别)采用近红外成像技术,即使用红外光源对生物特征进行主动照明,并用摄像头技术接收红外光源反射得到的生物特征的数字化近红外图像进行识别。
虹膜识别是一种新兴的生物识别技术,在身份识别领域应用不断扩大。安全便捷的身份识别是开展面向移动终端业务服务的难点。目前用移动终端作为身份确认的手段主要依赖密码和卡,存在难记忆、易被窃取,安全性低等问题。在众多身份识别技术中,虹膜识别的安全性和精确度最高,具有个体唯一、不需要记忆、不能被窃取,安全级别高等优点。
在当前技术中,在移动终端(比如手机)上加入了虹膜识别的功能,需要在手机的正面增加一颗近红外摄像头模组,和用于自拍的前置可见光摄像头模组是独立存在的。也就是手机的前面板需要开两个孔,一个用于自拍,一个用于虹膜成像,工业设计上复杂而且外观并不美观。
在现有技术中,针对黄、褐色或黑色眼睛的人种,近红外摄像头模组所实现的虹膜成像设计一般采用760nm-880nm频谱波段红外光成像;需要额外的红外光源(760nm-880nm之间)进行补光照明,并且近红外摄像头模组需要能够接收该红外波段的能量。
手机上采用虹膜识别主要用于用户自身的身份识别,用户体验一般需要前置使用。而手机现有的前置自拍摄像头因本身有镀膜过滤无法接受该红外波段的光源或接收到的衰减很大。所以现有技术的虹膜识别需要单独的近红外摄像头进行对虹膜成像,不能够与现有的可见 光成像(光谱频率在380-760nm)的摄像头进行复用(比如智能手机现有的前置彩色摄像头)。这些均导致虹膜成像系统的体积大大增加,成本增加,设计复杂,用户体验差,无法微型化集成应用到需求量更广的移动终端。
所以如何能够将使用一颗摄像头进行二合一,既能满足手机正常的前置自拍的可见光成像功能,又能满足红外光成像的生物识别功能,是当前的一个技术瓶颈。
当前利用单摄像头在移动设备上实现进行二合一复用,既能满足手机正常的前置自拍的可见光成像功能,又能满足红外光成像的生物识别功能,其近红外和可见光双波段成像主要有以下几种实现方法。包括在成像系统中增加一个机械式可切换的红外光滤光片(参见中国专利CN201420067432.X),使用具有红外和可见光双波段透射光谱的滤光片(参见美国专利US8408821,中国专利申请CN104394306A),和同时集成了可见光和红外光检测像素的图像传感器(参见美国专利US7915652,中国专利申请CN104284179A)。
机械式可切换的红外光滤光片(参见中国专利CN201420067432.X)由于自身体积相对较大,难以在移动设备上广泛使用。美国专利US8408821中公开了用一个双带通的滤光片允许可见光和红外光同时透过,但这会使的可见光的部分受到红外光的干扰而使图像偏红,同时也会使得红外光的部分受到可见光的波段的影响而影响生物识别的精度;同时也并没有用到分区域分组滤光片带通的设计。中国专利申请CN104394306A中的滤光片包括第一区域和第二区域,所述第一区域为双通道镀膜,能够同时透过可见光和红外光,所述第二区域为单通道镀膜,仅能够通过特定波长的红外光。这种方法的在智能手机上的工程实现方面是不可行的,因为,这样的设计会导致第一区域有红外光进入而使得自拍偏红,而且通过第二区域的红外光的信号无法完全校正偏红的效果,因为这两个区域的成像的区域在同一帧是不一致的,感光内容不一样所以无法稳定的提取可以完美校正的因子来矫正自拍的偏红问题,特别是在背景色为绿色的环境下,使得自拍效果变差,这在当下手机前置自拍要求如此高的情况下用户是不能接受的;同时第一区域用作红外光生物识别成像时,由于可见光也能够通过,会导致比如虹膜特征受到外界环境复杂可见光反射光 斑的影响,比如眼镜、灯光、窗户反射亮斑。该多区域双光谱成像设备的设计中要么对自拍效果有影响,要么对红外光生物识别效果有影响,不能兼顾两者。特别是会使得生物特征成像受到外界复杂环境可见光的影响而导致采集到的生物特征图像质量变差,严重影响后端算法的识别精度和用户体验。由于没有考虑到可见光和红外光成像轴向色差问题,很难兼顾自拍,会对自拍的图像效果有影响,并且其只提到用音圈马达来调整焦点,而没有提供针对双光谱成像的自动变焦的实现方法。涉及红外光的美国专利US7915652中只涉及到双光谱成像的图像传感器设计,而没有提供整个成像系统设计。中国专利申请CN104284179A中所用的分区域多光谱滤光片是附着在图像传感器表面的色彩滤镜矩阵(Color Filter Array),而不是独立的光学滤光片。此外,该设计没有考虑可见光和红外光成像轴向色差问题,很难在移动设备正常使用距离上采集到满足识别要求的近红外虹膜图像。
发明内容
本发明的目的在于提供一种生物特征成像系统和方法,其能够使用一组图像传感器进行成像功能复用,从而既能满足移动终端正常的前置自拍的可见光成像功能,又能满足红外光成像的生物识别功能,同时兼顾可见光自拍和红外光生物特征识别的需求。
本发明的目的在于提供通过使用一组镜头组件来实现红外光和可见光成像功能复用的生物特征复合成像系统和方法,以及包括该符合成像系统的移动终端。
本发明的生物特征成像系统和方法既可以满足用户的普通拍照功能,又可以用于生物特征(例如虹膜)近红外图像的采集,同时能够实现对生物特征的快速对焦。
本发明通过一组分区域分波段的滤光片组合的设计,将入射的多光谱光源分光成为两组波段光路的光被图像传感器所接收。其中该滤光片组合结构中的一块滤光片A具有有利于对红外光波段反射、可见光波段透过的镀膜,另外一块滤光片B具有有利于对红外光波段透过、可见光波段反射的镀膜。在整个摄像头模组结构中,A、B两块滤光片组合结构优选的放置在图像传感器芯片和光学镜头之间,两者在入射光路的路径上没有重叠。该摄像头工作时,入射光在通过该滤光片组 合结构后被过滤分成可见光部分和红外光部分,被同一个图像传感器的对应的分区部分接收;根据A,B两块滤光片的面积大小的不同以及滤光片距离图像传感器位置的不同,该图像传感器的区域被对应的划分为三个不同的成像区域,即,可见光成像区域和红外光成像区域,以及对可见光和红外光有重叠成像的过渡区域。因为在模组结构设计时,滤光片距离图像传感器位置很近,所以可见光和红外光有重叠成像的过渡区域面积很小,主要对可见光成像图像有影响。后续在软件切换流程中可以对去除过渡区域后的可见光图像进行输出。
利用这样的结构设计,在自拍可见光模式下,图像传感器可优选的仅输出去除过渡区域的可见光成像区域大小的自拍可见光图像,所采集的自拍图像不会受红外光的影响而偏红;在红外光模式下,图像传感器可优选的仅输出红外光成像区域大小的红外光图像,所采集的生物特征图像不会受各种复杂环境可见光的影响而产生环境噪声,比如眼睛表面因窗户反射所形成的可见光反射光斑,或者戴眼镜的用户因各种环境灯的反射所形成的反射亮斑从而遮挡了虹膜信息,进而影响识别率和用户体验。这样两个模式的输出图像互不相干扰,是最优选的二合一方案。
和现有技术不同,本发明不需要对滤光片进行物理上的位置切换即可实现可见光与红外光的复用成像,并且无需针对滤光片的运动部件,以及能够在移动终端的跌落环境下增强系统和结构的稳定性。
本发明提供了一种将可见光成像与红外光成像复用的生物识别复合成像系统和方法。尤其是,本发明提供了可以利用一个摄像头模组进行可见光(自拍)和红外光(生物特征成像)双模式复合成像的改进的系统和方法。本发明的系统和方法包括对应两种不同工作模式下图像信号处理ISP的不同参数配置设计以及系统应用流程,特别是针对红外光成像模式下生物特征数据处理从传感器图像处理器(ISP)输出到数据加密传输、握手信号、预处理和比对的改进型算法流程。同时,本发明还提出了一种结合本发明创新点下的双模式的优势来更优化地进行生物特征活体检测的方法。
本发明的复合成像系统创新地采用了改进型分区域成像的图像传感器(一个区域有色彩滤镜,另一个区域没有色彩滤镜)和双波段滤光片设计配合来实现单镜头可见光和红外光复合成像;其中,去除色 彩滤镜的设计提升了其对应的图像传感器区域近红外波段下对760-880nm波段的光能量的吸收,从而降低了主动照明光源的功耗,实现了针对移动终端设备近红外生物特征成像低功耗的设计。
根据本发明的一个方面,提供了一种成像功能复用的生物特征复合成像系统,包括:镜头组件,用于接收来自感兴趣区域的光;滤光片组件,用于对所接收的光进行过滤,以实现对允许通过波段的光进行成像,所述滤光片组件包含至少可见光带通区域和红外光带通区域,所述可见光带通区域仅允许可见光透过所述滤光片组件,以及所述红外光带通区域仅允许红外光透过所述滤光片组件;图像传感器,包括可见光成像区域和红外光成像区域以及两个区域之间的过渡区域,所述图像传感器在可见光成像模式和红外光成像模式之一下进行操作,其中所述可见光成像区域在所述可见光成像模式下对通过所述可见光带通区域的可见光进行成像,以及所述红外光成像区域在所述红外光成像模式下对通过所述红外光带通区域的红外光进行成像,其中所述红外光来自生物特征;其中,在所述红外光成像模式下,基于所述生物特征的特定物理属性作为图像质量信息来实现对所述感兴趣区域的生物特征的自动对焦。
根据本发明的另一个方面,提供了一种成像功能复用的生物特征复合成像方法,包括:接收来自感兴趣区域的光;基于用户输入选择至少两个成像模式中的一个,所述成像模式包括可见光成像模式和红外光成像模式;在所选择的成像模式下对所接收的光进行过滤,其中在所述可见光成像模式下使可见光通过,以及在所述红外光成像模式下使红外光通过;以及在图像传感器的对应区域上对过滤后的光进行成像,其中在所述可见光成像模式下对通过的可见光进行成像输出,以及在所述红外光成像模式下对通过的红外光进行成像输出,其中所述红外光来自生物特征;其中,在所述红外光成像模式下,基于所述生物特征的特定物理属性作为图像质量信息来实现对所述感兴趣区域的生物特征的自动对焦。
根据本发明的又一个方面,提供了一种用于生物特征复合成像的移动终端,包括:红外光源,用于向所述生物特征发射红外光;屏幕,用于显示图像以及提供用于引导用户配合采集生物特征图像的眼部图像预览窗口,所述眼部图像预览窗口沿所述移动终端的长度方向位于 所述屏幕的区域的上部;复合成像摄像头模组,所述复合成像摄像头模组进一步包括:镜头组件,用于接收来自感兴趣区域的光;滤光片组件,用于对所接收的光进行过滤,以实现对允许通过波段的光进行成像,所述滤光片组件包含至少可见光带通区域和红外光带通区域,所述可见光带通区域仅允许可见光透过所述滤光片组件,以及所述红外光带通区域仅允许红外光透过所述滤光片组件;图像传感器,包括可见光成像区域和红外光成像区域以及两个区域之间的过渡区域,所述图像传感器在可见光成像模式和红外光成像模式之一下进行操作,其中所述可见光成像区域在所述可见光成像模式下对通过所述可见光带通区域的可见光进行成像,以及所述红外光成像区域在所述红外光成像模式下对通过所述红外光带通区域的红外光进行成像,其中所述红外光来自生物特征;其中,所述眼部图像预览窗口对所述图像传感器的红外光成像区域进行预览输出并且仅输出所述红外光成像区域的生物特征图像。
附图说明
通过阅读参照以下附图所作的对非限制性实施例所作的详细描述,本发明的其它特征、目的和优点将会变得更明显:
图1a和图1b是根据本发明的用于生物特征的复合成像系统的示意图;
图1c是根据本发明的用于生物特征的复合成像系统的更为详细的示意图;
图2a是图像传感器、滤光片和镜头组件相对位置的正面示意图;
图2b是图像传感器、滤光片和镜头组件相对位置的剖面示意图；
图3是滤光片组件放置于图像传感器表面位置示意图；
图4是图像传感器的示意性正视图;
图5是图像传感器的示意性侧视图;
图6是滤光片组件的示意性正视图;
图7是滤光片组件的示意性侧视图;
图8是滤光片组件中的可见光带通滤光片在可见光波段(例如,380-760nm)的光谱特性图;
图9是滤光片组件中的红外光带通滤光片在红外光波段(例如, 780-880nm)的光谱特性图;
图10是根据本发明的复合成像系统的自动对焦成像单元的示意图;
图11是根据本发明的复合成像方法的流程图;
图12是根据本发明的实施例的红外光成像模式下的自动对焦算法流程图;
图13是复合成像系统中的双光谱活体检测过程的流程图;
图14是示出了根据本发明的统一可编程的生物特征识别软件架构示意图;
图15是示出了图14所示的软件架构的数据流程的示意图;
图16a至图16c示出了包括复合成像系统的移动终端的一个优选实施例,其中图16a和16b示出了移动终端的结构配置,以及图16c示出了移动终端在使用时的用户体验图;以及
图17a至图17c示出了包括复合成像系统的移动终端的另一个优选实施例,其中图17a和17b示出了移动终端的结构配置,以及图17c示出了移动终端在使用时的用户体验图。
在附图中,相同或相似的附图标记代表相同或相似的部件。
具体实施方式
本领域的技术人员应当明白本发明可以以脱离这些具体细节的其它实现方式来实现。而且为了不模糊本发明,在当前的说明中省略了已知的功能和结构的并非必要的细节。
下面结合附图对本发明作进一步详细描述。
在本发明的复合成像系统的结构中,包含A、B两个滤光区域(例如,两块滤光片)的滤光片组合结构被优选地放置在图像传感器芯片和光学镜头之间,两者在入射光路的路径上没有重叠。该复合成成像工作时,入射光在通过该滤光片组合结构后被过滤分成可见光部分和红外光部分,被同一个图像传感器的对应的分区部分接收。根据A、B两块滤光片的面积大小的不同以及滤光片距离图像传感器的位置的不同,该图像传感器区域被对应的划分为三个不同的成像区域,即,可见光成像区域和红外光成像区域,以及对可见光和红外光有重叠成像的过渡区域。因为在模组结构设计时,滤光片距离图像传感器位置很 近,所以可见光和红外光有重叠成像的过渡区域面积很小,主要对可见光成像图像有影响。后续在软件切换流程中可以对去除过渡区域后的可见光图像进行输出。
图1a和1b是根据本发明的用于生物特征的复合成像系统的示意图。如图1a和1b中所示,该复合成像系统100包括图像传感器110、滤光片组件120、镜头组件130、微电机致动器140。
图1c是根据本发明的用于生物特征的复合成像系统的更为详细的示意图,其中除了示出了图1a和1b中的图像传感器110、滤光片组件120、镜头组件130、微电机致动器140的示意性位置关系外,还进一步示出了光源150,以及示出了复合成像设备的视场角与感兴趣区域和生物特征的位置关系。要注意的是,图1c中的光源150仅为示意性的,本发明并不限于图1c中光源150的示意性配置。
图2a、2b至图7示出了复合成像系统中的各个部件的具体结构和相对位置。其中,图2a是图像传感器110、滤光片组件120和镜头组件130相对位置的正面示意图,以及图2b是图像传感器、滤光片和镜头组件相对位置的剖面示意图。在根据本发明的复合成像系统100中,镜头组件130具有一定的视场角并且接收来自多光谱光源的光。
来自多光谱光源的入射光在通过所述镜头组件后,到达滤光片组件120。滤光片组件是一组分区域、分波段的带通滤光片组合的设计,其包括仅允许可见光波段下的光通过的可见光带通区域和仅允许红外光波段下的光通过的红外光带通区域。优选地,如图2b中所示,可见光带通区域和红外光带通区域可以分别是可见光带通滤光片121和红外光带通滤光片122。之后,入射的多光谱光源被分光成两个波段下的光并且被图像传感器所接收。优选地,该滤光片组件120中的可见光带通滤光片121具有有利于对红外光波段反射、可见光波段透过的镀膜,而另外一块红外光带通滤光片122具有有利于对红外光波段透过、可见光波段反射的镀膜。图像传感器110为一个完整的图像传感器像素组合阵列,其通过数据传输接口(比如MIPI接口),将图像传感器采集的数字图像像素数据传输到后端加密芯片或者处理器。在本发明的复合成像系统的结构中,滤光片组件120被放置在图像传感器110所获取的光路的前端路径上,使得到达图像传感器110的光已经被滤光片组件120所过滤,即,通过镜头组件130所入射的多光谱光被滤 光片组件120分光成为可见光和红外光两个波段下的光并且被图像传感器的不同部分所接收。由于滤光片组件120中可见光带通滤光片121和红外光带通滤光片122在光学路径上所占的面积是不同的,所以图像传感器110可以包括分别与可见光带通区域(例如可见光带通滤光片121)和红外光带通区域(红外光带通滤光片122)相对应的可见光成像区域和红外光成像区域,这两个区域可以利用软件来划分出。此外,图像传感器110还可以包括对可见光和红外光有重叠成像的过渡区域。图像传感器110的可见光成像区域对其所对应的光学路径上的通过可见光带通区域的可见光进行成像,以及所述红外光成像区域对其所对应的光学路径上通过所述红外光带通区域的红外光进行成像。
为了确保在成像光路上,图像传感器110能够被完全被滤光片组件120所覆盖,在本发明的复合成像系统中,滤光片组件120的面积大于图像传感器110的面积。因为目前手机要求成像模组的高度是越来越薄的,所以滤光片组件到图像传感器的间距需要很小,一般小于2mm,所以这个间距范围内光按照广角扩散传播的误差是很小的,其可以被忽略。
目前手机、平板电脑等移动终端所使用的图像传感器一般由像素矩阵组成。在生产过程中,一般会在图像传感器硅基上添加微型透镜(Micro lens)和色彩滤镜(Color Filter)这两种部件。通常,需要在图像传感器的像素区域上提供对应的色彩滤镜(Color Filter)来对色彩进行滤除。增加色彩滤镜会对彩色可见光图像具有优化以及矫正彩色图像偏红的效果;而去除色彩滤镜能提升该像素对760-880nm波段的光能量的吸收。
本发明的复合成像系统创新地采用了改进型分区域成像的图像传感器(其中,可见光成像区域具有色彩滤镜,而在红外光成像区域上去除色彩滤镜)和双波段滤光片设计配合来实现单镜头可见光和红外光复合成像。其中,去除色彩滤镜的设计提升了对应的图像传感器区域对于红外光谱范围的相应的频谱敏感度,并且提升了在近红外波段下对760-880nm波段的光能量的吸收,从而优化增强该区域对于虹膜成像的图像效果。即,用较小能量的红外(IR)LED光源也可以保持较强红外光谱能量的接收,能够对更丰富的虹膜细节进行成像,从而降低了本发明系统对主动照明光源的功耗需求,并且实现了针对移动 终端设备近红外生物特征成像低功耗的设计。
根据本发明的实施例,复合成像系统100包括与图像传感器110的可见光成像区域对应的色彩滤镜,而去除与图像传感器110的红外光成像区域对应的色彩滤镜。
利用这样的结构设计,在自拍可见光模式下,图像传感器可优选地仅输出去除过渡区域后的可见光成像区域大小的自拍可见光图像。因为在图像输出时软件去除了过渡区域,所采集的自拍图像不会受红外光的影响而偏红;在红外光模式下,图像传感器可优选地仅输出红外光成像区域大小的红外光图像,所采集的生物特征图像不会受各种复杂环境可见光的影响而产生环境噪声,比如眼睛表面因窗户反射所形成的可见光反射光斑,或者戴眼镜的用户因各种环境灯的反射所形成的反射亮斑从而遮挡了虹膜信息,进而影响识别率和用户体验。这样两个输出图像互不相干扰,是最优选的二合一方案。
和现有技术不同,本发明不需要对滤光片进行物理上的位置切换即可实现可见光与红外光的复用成像,并且无需针对滤光片的运动部件,以及能够在移动终端的跌落环境下增强系统和结构的稳定性。
目前手机、平板电脑对可见光成像的对焦范围一般较远,而生物识别的红外成像对焦范围较近。所以如果成像系统的焦距是固定的,则无法兼顾两种模式同时实现清晰成像。
如图1中所示的,本发明的复合成像系统100还可以包括微电机致动器140,其控制运动部件(未示出)移动镜头组件130来使得镜头组件130在相应的成像模式下进入对焦模式来适应可见光和红外光成像两种模式的不同焦距需求。所述运动部件的使用能够解决可见光和红外光成像轴向焦距色差问题。具体地,运动部件用于调节所述镜头组件的移动。微电机致动器140用于基于不同的工作模式(可见光模式或者红外模式)来分别调整焦距实现不同的工作模式(一般地,移动终端的前置可见光模式主要用于自拍,而红外光成像模式主要对生物特征(比如虹膜特征)成像,因为虹膜的特征精细,所以要求对焦的成像距离也相对可见光距离图像传感器的相对位置更近一些)。或者,获取所述电子图像的图像质量信息,根据所述电子图像的图像质量信息控制微电机调节所述镜头组件以实现对所述感兴趣区域的生物特征进行自动对焦控制。
图3是滤光片组件120放置于图像传感器110表面位置示意图。滤光片组件120可以是独立于图像传感器110以外单独的一个元件(即,滤光片组件120比图像传感器110的面积略大),也可以通过封装工艺封装在图像传感器的硅片的表面上方,如图3所示。
图4是图像传感器110的示意性正视图,以及图5是图像传感器110的示意性侧视图。根据本发明的图像传感器110包括用于可见光成像的区域和用于红外光成像的区域,以及两个区域之间的过渡区域。图像传感器110的用于可见光成像的区域和用于红外光成像的区域,以及两个区域之间的过渡区域在图4和图5中被示出。为了同时满足可见光模式成像应用(比如自拍)对比较大的成像范围(对应比较大的视场角)的要求和红外光成像应用(比如虹膜成像)对图像分辨率(单位面积内的像素数)的精度和最小分辨率要求,本发明的复合成像系统100中的图像传感器110可以使用大像素数的图像传感器。以平均直径为11毫米的普通人的虹膜为例,按照ISO标准,图像中单眼虹膜外圆直径需要有120个像素。若在正常使用距离上(30CM)能够用水平FOV为60度的镜头进行虹膜识别,这需要图像传感器水平方向至少具备3773个像素,按照16∶9的图像宽高比,垂直方向需要有2120个像素,这对应着总共的像素数量是8M。考虑实际图像传感器在水平和垂直方向上的像素数,优选使用含8M以上的比如13M的CMOS图像传感器(4680(W)x 3456(H))。
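As a quick check of the sizing arithmetic in the preceding paragraph, the sketch below (a minimal Python illustration, not part of the original disclosure) plugs in the values quoted above — an 11 mm average iris diameter, the ISO requirement of 120 pixels across the iris, a 30 cm working distance and a 60-degree horizontal field of view — and arrives at roughly the same ~3.8k-pixel horizontal resolution and ~8M total pixel count; the function name and the 16:9 aspect assumption are illustrative only.

```python
import math

def required_horizontal_pixels(iris_px: int = 120, iris_mm: float = 11.0,
                               distance_mm: float = 300.0,
                               hfov_deg: float = 60.0) -> int:
    """Horizontal sensor resolution needed so that one iris of physical
    diameter `iris_mm`, imaged at `distance_mm` with a lens whose horizontal
    field of view is `hfov_deg`, still spans at least `iris_px` pixels."""
    scene_width_mm = 2.0 * distance_mm * math.tan(math.radians(hfov_deg) / 2.0)
    pixels_per_mm = iris_px / iris_mm      # ISO sampling density for the iris
    return math.ceil(scene_width_mm * pixels_per_mm)

if __name__ == "__main__":
    px_w = required_horizontal_pixels()    # ~3.8k pixels for 60 deg at 30 cm
    px_h = math.ceil(px_w * 9 / 16)        # 16:9 aspect ratio, as in the text
    print(px_w, px_h, px_w * px_h / 1e6)   # roughly an 8M-class sensor
```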
图6是滤光片组件120的示意性正视图以及图7是滤光片组件120的示意性侧视图。滤光片组件120的可见光带通滤光片121(用于可见光)和红外光带通滤光片122(用于红外光)分别对应于图像传感器110的用于可见光成像的区域和用于红外光成像的区域。
图8和图9分别示出了滤光片组件120在可见光(例如,380-760nm)波段光谱特性以及滤光片组件在红外光波段(例如,780-880nm)光谱特性。
根据本发明的一个方面,复合成像系统100能够在可见光成像模式和红外光成像模式之间的切换时,通过软件控制对应于两个不同的模式预先计算的步长查找表进行映射来实现复合成像的两个成像模式下的快速对焦。因为红外成像的焦距比可见光成像的焦距更近,所以在红外光成像模式下其镜头的位置相比可见光模式在轴向上更靠近人 眼。
根据本发明的一个方面,复合成像系统100尤其能够在红外光成像模式下实现快速对焦。图10是根据本发明的复合成像系统的自动对焦单元的示意图。除了前文所述的图像传感器、镜头组件和微电机致动器外,该自动对焦单元包括自动对焦算法模块,其可以由处理器(未示出)来实现或者包括在处理器当中。具体地,当本发明的复合成像系统处于红外光成像模式下,自动对焦算法模块基于来自图像传感器的生物特征的特定物理属性作为图像质量信息来控制该微电机致动器,从而对感兴趣区域的生物特征进行自动对焦。更具体地,基于该图像质量信息,利用微电机致动器控制运动部件来移动镜头组件,从而实现对感兴趣区域的生物特征的自动对焦。优选地,当所述生物特征包括双眼时,所述特定物理属性可以包括所述双眼的虹膜的瞳孔间距。另外优选地,当所述生物特征包括单眼时,所述特定物理属性可以包括所述单眼的虹膜外圆直径。另外,复合成像系统中的处理器还可以被配置成通过实时计算由所述图像传感器所生成每一帧图像的所述特定物理属性并且对应于预先计算的步长查找表进行映射来实现对所述生物特征的快速对焦。
根据本发明的一个方面,复合成像系统采用了新的基于所采集的电子图像中所述生物特征的具有相对客观恒定数值的特定物理属性,获取所述特定物理属性在所述电子图像中的属性值作为所述电子图像的图像质量信息,根据所述属性值调节镜头组件以实现对感兴趣区域的生物特征进行自动对焦控制,从而保证了从可见光成像模式(焦距较远)到红外光成像模式(焦距较近)切换的过程中的通过软件控制来迅速对焦,提升了用户体验和所采集的生物特征的图像质量,从而提高了识别精度。关于本发明的复合成像系统在红外光成像模式下实现快速对焦的具体细节将在下文的方法实施例中进一步描述。
本发明还提供了一种复合成像方法,该方法可以通过上文描述的复合成像系统来实现。图11示出了根据本发明的复合成像方法的流程图。所述方法包括:接收来自感兴趣区域的光(S1110);基于用户输入选择至少两个成像模式中的一个,所述成像模式包括可见光成像模式和红外光成像模式(S1120);在所选择的成像模式下对所接收的光进行滤光(S1130);以及在图像传感器的对应区域上对滤光后的光进 行成像(S1140),其中在所述可见光成像模式下对通过的可见光进行成像输出,以及在所述红外光成像模式下对通过的红外光进行成像输出,以及其中所述红外光来自生物特征。图像传感器可以进一步通过数据传输接口(比如MIPI接口)输出与图像传感器像素阵列的相应面积相对应的图像数据到加密芯片或者处理器以用于进一步处理。进一步地,在所述红外光成像模式下,基于所述生物特征的特定物理属性作为图像质量信息来实现对所述感兴趣区域的生物特征的自动对焦(S1150)。另外,本发明的复合成像方法还包括通过实时计算由所述图像传感器所生成每一帧图像的所述特定物理属性并且对应于预先计算的步长查找表进行映射来实现对所述生物特征的快速对焦。
具体地,用户通过软件控制来切换该摄像头进入可见光成像模式或者红外光成像模式。图像传感器(CMOS/CCD)芯片按照滤光片组件中的可见光带通滤光片121和红外光带通滤光片122相应的设计规格面积尺寸而包括可见光成像区域和红外光成像区域。在可见光成像模式下,软件控制图像信号处理器ISP(Image Signal Processor)选择对应的可见光成像区域工作,调用相应的可见光成像的ISP参数设置使得可见光成像的效果优化。特别的,针对虹膜识别,因为有主动红外照明而且照明光源稳定,需要修改ISP参数相应降低图像传感器CMOS的增益,增大图像传感器CMOS的对比度,降低图像传感器CMOS的噪声,增大图像传感器CMOS的信噪比,从而有利于提高虹膜成像质量。如果该模组具有变焦功能,可以利用微电机致动器控制运动部件来移动镜头组件进入可见光对焦模式。利用常规的基于图像质量评估的对焦方法(比如反差对焦)完成自动对焦,并输出可见光成像区域所对应位置的分辨率大小的图像和输出格式。如果在红外光成像模式下,则ISP选择对应的红外光成像区域工作,调用相应的红外光成像的ISP参数设置使得红外光成像的效果优化。如果该模组具有变焦功能,可以利用微电机致动器控制运动部件来移动镜头组件进入红外光对焦模式,同时输出红外光成像区域所对应位置的分辨率大小的图像和输出格式。
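A hypothetical illustration of the mode switch described above is sketched below: each imaging mode loads its own ISP parameter profile and selects the matching sensor sub-region. The register names, numeric values and the `ISP_PROFILES`/`switch_mode` identifiers are assumptions introduced only for clarity; the text specifies the direction of each adjustment (lower gain, higher contrast, lower noise and higher SNR under stable active infrared illumination) but no concrete settings.

```python
# Mode-dependent ISP profiles; all names and values below are placeholders.
ISP_PROFILES = {
    "visible": {"analog_gain": 4.0, "contrast": 1.0, "denoise": "normal",
                "readout_region": "visible", "focus_mode": "far"},
    "infrared": {"analog_gain": 1.5, "contrast": 1.4, "denoise": "strong",
                 "readout_region": "infrared", "focus_mode": "near"},
}

def switch_mode(isp, mode: str) -> None:
    """Select the sensor sub-region and load the matching ISP parameters."""
    profile = ISP_PROFILES[mode]
    isp.select_region(profile["readout_region"])   # crop to the active sub-array
    isp.configure(gain=profile["analog_gain"],
                  contrast=profile["contrast"],
                  denoise=profile["denoise"])
    isp.set_focus_mode(profile["focus_mode"])      # triggers the focus step mapping
```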
在具有变焦功能的情况下,在红外光模式下的自动对焦过程如下所述:成像系统在获取通过镜头组件捕获的感兴趣区域的生物特征的图像之后,结合图像中生物特征的部分,计算出相应的生物特征图像 质量信息,包括但不限于图像的清晰度、对比度、平均灰度、图像信息熵、瞳孔间距、瞳孔直径、虹膜外圆直径、水平眼角宽度等特定物理属性。优选地,当所述生物特征包括双眼时,所述特定物理属性可以包括所述双眼的虹膜的瞳孔间距。另外优选地,当所述生物特征包括单眼时,所述特定物理属性可以包括所述单眼的虹膜外圆直径。计算得到的生物特征图像质量信息可以是一组图像质量数值,也可以是单一的图像质量指标。根据得到的所述图像的图像质量信息,系统控制微电机改变所述镜头组件的位置,使得从获取图像计算出的图像质量最优化,从而完成对所述感兴趣区域的生物特征进行的自动对焦控制。在完成红外自动对焦时,镜头的位置会比可见光模式在轴向更靠近人眼。
图12是根据本发明的实施例的红外光成像模式下的自动对焦算法流程图。首先,从图像传感器获取当前一帧图像(S1210)。接着,利用图像处理算法自动检测出生物特征的特定物理属性(例如,双眼瞳孔间距或虹膜外圆直径)(S1220)。然后,基于所检测出的特定物理属性计算当前物距(S1230),将当前物距与当前光学系统的焦距所对应的最佳成像物距进行比较(S1240)。若所述当前物距处于当前最佳成像物距景深范围内,则自动对焦过程完成(S1250)。否则,计算镜头组件需要移动的方向和步长,并驱动微电机致动器来将镜头组件移动到指定位置(S1260)。重复以上步骤,直到满足最佳成像物距和景深,从而完成自动对焦过程(S1270)。
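The autofocus loop S1210–S1270 can be summarized in the following sketch, written under stated assumptions rather than as the patent's actual implementation: the attribute detector, the pinhole-style distance model with its focal constant, the depth-of-field value and the step lookup table interface are all placeholders standing in for the components named in the flow.

```python
from dataclasses import dataclass

@dataclass
class FocusStep:
    direction: int    # +1 / -1; sign convention assumed
    steps: int        # micro-motor actuator steps

def estimate_object_distance_mm(iris_outer_px: float,
                                focal_px: float = 2800.0,
                                iris_outer_mm: float = 11.0) -> float:
    """Pinhole approximation: the smaller the iris appears, the farther the eye.
    `focal_px` is an assumed calibration constant, not a value from the text."""
    return focal_px * iris_outer_mm / iris_outer_px

def autofocus_nir(sensor, actuator, step_lut, detect_iris_px,
                  dof_mm: float = 25.0, max_iters: int = 10) -> bool:
    """S1210-S1270: grab a frame, measure the biometric attribute, estimate the
    object distance, stop when it falls inside the depth of field of the current
    lens position, otherwise look up a motor step and move the lens."""
    for _ in range(max_iters):
        frame = sensor.capture_ir_frame()             # S1210, IR region only
        iris_px = detect_iris_px(frame)               # S1220, outer iris diameter
        if iris_px is None:
            continue                                  # no eye found in this frame
        distance = estimate_object_distance_mm(iris_px)           # S1230
        error = distance - actuator.best_object_distance_mm()     # S1240
        if abs(error) <= dof_mm:
            return True                               # S1250, focus achieved
        step: FocusStep = step_lut.lookup(error)      # S1260, precomputed table
        actuator.move(step.direction, step.steps)
    return False
```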
活体检测是生物识别的一个重要需求也是难题。以虹膜识别为例,目前虹膜识别一般在红外光下实现,所以虹膜识别下的活体识别也对应地在红外光环境下来进行。在各种复杂环境下要实现不同人种(浅色瞳孔和深色瞳孔)稳定的活体识别是难度较大的。根据本发明的一个方面,复合成像系统巧妙地利用了本系统所具有的双光谱成像的特性来实现生物识别的活体检测功能。活体检测功能可以通过处理器(未示出)进行控制和实现,或者实现活体检测的功能模块可以被包括在处理器中。优选地,复合成像系统包括活体检测单元,其提供了检测所获得的虹膜图像来自真人还是伪造的虹膜的功能。活体检测单元的基本原理是利用正常人生物组织和伪造虹膜的物质在可见光和红外光下呈现的不同的光学反射特性来进行活体判断。
复合成像系统在成像时通过一个摄像头模组可以顺序进入可见光成像模式和红外光成像模式,并先后获取当前人眼的可见光图像和近红外图像。通过分析当前人眼的双光谱图像的差异,可得到改进型的活体识别的方案。具体地,复合成像系统的成像过程可以用Lambertian反射模型来描述。根据Lambertian反射定律,在波长为λ的光源照射下,成像物体表面某一点p=[x,y]T的反射光强度可以表示为:
$$I_{r,\lambda}(p)=\alpha_{\lambda}(p)\,I_{s,\lambda}(p)\cos\theta(p)$$
此处 $I_{r,\lambda}(p)$ 代表波长为 $\lambda$ 的反射光强度，$\alpha_{\lambda}(p)$ 是物质在波长为 $\lambda$ 时的反射率，$I_{s,\lambda}(p)$ 代表波长为 $\lambda$ 的入射光源强度，$\theta(p)$ 是在 $p$ 点物体表面法向量和从 $p$ 点指向摄像头的向量的夹角。
在双光谱（即，可见光和红外光）成像模式下，可以得到物体表面点 $p$ 的反射率比值 $R(p)$ 如下：
$$R(p)=\frac{\alpha_{\lambda_1}(p)}{\alpha_{\lambda_2}(p)}=\frac{I_{r,\lambda_1}(p)\,I_{s,\lambda_2}(p)}{I_{r,\lambda_2}(p)\,I_{s,\lambda_1}(p)}$$
这里 $\lambda_1$ 为可见光波长，$\lambda_2$ 为红外光波长。
假设可见光和红外光入射光源强度在成像物体表面均为均匀分布：可见光入射光源强度 $I_{s,\lambda_1}$ 可以近似为移动设备上自带的光线传感器测得的光源强度，近红外入射光源强度 $I_{s,\lambda_2}$ 可以从移动设备上自带近红外发光装置的发光参数得到。这样，反射率比值 $R(p)$ 可以简化为
$$R(p)=k\,\frac{I_{r,\lambda_1}(p)}{I_{r,\lambda_2}(p)}$$
其中 $k$ 为一个正的常量。
$p$ 点的可见光和红外光反射光强度 $I_{r,\lambda_1}(p)$ 和 $I_{r,\lambda_2}(p)$ 可以从采集的可见光和红外光图像中 $p$ 点对应的像素灰度值得到。这样，可以从采集到的可见光和红外光图像中计算出相对应的反射率比值图像：
$$R=k\,\frac{P}{Q}$$
其中 $R$ 为反射率比值图像，$k$ 为一个正的常量，$P$ 为采集到的可见光图像，$Q$ 为采集到的红外光图像。
图13是复合成像系统中的双光谱活体检测过程的流程图。具体地,以虹膜识别为例,活体检测过程的具体步骤可以描述如下:活体检测 单元控制成像系统顺序进入可见光成像模式和红外光成像模式(S1310),并顺序获取当前人眼的可见光图像和近红外图像。其中,在图像采集过程中,通过用户人机界面设计尽量保持固定的采集距离和采集角度的稳定。在得到同一人眼的可见光和红外光图像以后,利用图像处理算法,自动配准双光谱图像(S1320),利用图像处理算法在配准后的图像中自动检测瞳孔和虹膜位置(S1330),并以检测到的瞳孔和虹膜为中心,分割出相同的待处理区域(S1340)。之后,利用分割后的双光谱图像计算出对应的反射率比值图像(S1350),并分析反射率比值的分布特征(S1360),比如直方图,梯度,方差等参数。如果反射率比值的分布特征参数在预设范围内,则判断当前人眼为假体或伪造虹膜,否则判断当前人眼为活体(S1370)。上述方法结合了可见光和红外光两种光谱的信息量,提出了创造性的复合型计算的活体检测算法,在各种复杂环境下能够实现不同人种(浅色瞳孔和深色瞳孔)的稳定的、具有更强鲁棒性的活体检测和识别。
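A minimal numerical sketch of the ratio-image test in steps S1310–S1370 is given below: registered, equally sized visible (P) and near-infrared (Q) eye crops yield R = k·P/Q, and simple distribution statistics of R decide live versus fake. The constant k, the choice of mean and variance as the statistics, and their threshold ranges are assumptions; the text only requires that samples whose distribution feature parameters fall within a preset range be rejected as prosthetic or forged.

```python
import numpy as np

def reflectance_ratio(visible: np.ndarray, infrared: np.ndarray,
                      k: float = 1.0, eps: float = 1e-6) -> np.ndarray:
    """Element-wise ratio image R = k * P / Q on the registered eye region."""
    p = visible.astype(np.float64)
    q = infrared.astype(np.float64)
    return k * p / (q + eps)

def is_live(visible_roi: np.ndarray, infrared_roi: np.ndarray,
            var_range=(0.02, 0.8), mean_range=(0.2, 3.0)) -> bool:
    """Reject the sample as a fake when the ratio statistics fall inside the
    preset 'prosthetic/forged' range; otherwise accept it as live tissue."""
    r = reflectance_ratio(visible_roi, infrared_roi)
    in_fake_range = (var_range[0] <= r.var() <= var_range[1] and
                     mean_range[0] <= r.mean() <= mean_range[1])
    return not in_fake_range
```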
根据本发明的一个方面,复合成像系统还包括图像加密单元,以便提供对获取的生物特征图像进行加密的功能。该图像加密单元可以由处理器(未示出)来实现或者包括在处理器当中,或者以独立模块单元的方式被包括在图像传感器或者复合成像系统的模组中。要注意的是,图像加密单元可以与上文所描述的自动对焦单元一起实现在相同的处理器中,或者分别由不同的处理器来实现。图像加密单元的工作方式如下:在软件控制复合成像系统进入红外光成像模式,并获取了红外图像之后,启动图像加密功能,对得到的生物特征图像进行加密,并输出加密后的数据,以供进一步处理。在软件控制复合成像系统进入可见光成像模式时,图像加密单元并不会启动,也不对获得的可见光图像进行加密,而是直接输出获得的可见光图像。
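Combining the encryption unit described above with the quality-gated encryption and downsampled preview discussed among the advantages, one possible sketch is shown below; the sharpness metric, the AES-GCM cipher choice, the downsampling factor and all identifiers are assumptions rather than requirements of the text.

```python
import os
import numpy as np
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def sharpness(gray: np.ndarray) -> float:
    """Cheap gradient-energy quality score used to gate encryption."""
    gy, gx = np.gradient(gray.astype(np.float64))
    return float(np.mean(gx * gx + gy * gy))

def downsample_for_preview(gray: np.ndarray, factor: int = 8) -> np.ndarray:
    return gray[::factor, ::factor]          # too coarse to expose iris texture

def process_ir_frame(gray: np.ndarray, key: bytes, quality_threshold: float):
    preview = downsample_for_preview(gray)   # always shown to guide the user
    if sharpness(gray) < quality_threshold:
        return preview, None                 # skip encryption: saves chip throughput
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, gray.tobytes(), None)
    return preview, (nonce, ciphertext)      # only good frames leave encrypted

# key = AESGCM.generate_key(bit_length=256)  # provisioned in a secure element
```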
图14是示出了根据本发明的统一可编程的生物特征识别软件架构示意图,其可以被体现在根据本发明的复合成像系统中。以虹膜识别为例,该统一可编程的生物特征识别软件架构基于以下目的:
1.统一的软件架构可以更容易集成不同厂家的传感器;
2.统一的接口可以使应用程序开发者不用考虑生物特征识别算法以及虹膜采集器之间的交互;
3.跨平台支持不同的操作系统(windows、Android、iOS或其他 系统);
4.可以通过扩展接口很容易满足应用程序的特殊需求。
以虹膜识别为例,图14所示出的软件架构包括:生物特征接口管理器,用于向第三方开发者提供生物特征识别接口;生物特征算法处理器,用于处理生物特征信息相关的数据运算;生物特征采集设备,用于生物特征信息的采集。该软件架构能够与其他应用软件进行交互。图15是示出了图14所示的软件架构的数据流程的示意图。具体地,在图15中示出了生物特征接口管理器、生物特征算法处理器、生物特征采集设备以及应用程序之间的数据帧和命令的传递。
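To make the three layers and the frame/command flow of FIGS. 14 and 15 concrete, a thin interface sketch follows; every class and method name in it is an illustrative assumption and not the patent's actual API.

```python
from abc import ABC, abstractmethod

class BiometricCollectionDevice(ABC):
    @abstractmethod
    def capture_frame(self) -> bytes: ...          # raw NIR frame from the sensor

class BiometricAlgorithmProcessor(ABC):
    @abstractmethod
    def process(self, frame: bytes) -> dict: ...   # quality check, template, match score

class BiometricInterfaceManager:
    """Single entry point exposed to third-party applications; it forwards
    commands downward and returns results upward, hiding device and algorithm
    details (the cross-platform goal stated for the architecture)."""
    def __init__(self, device: BiometricCollectionDevice,
                 processor: BiometricAlgorithmProcessor):
        self._device = device
        self._processor = processor

    def identify(self) -> dict:
        frame = self._device.capture_frame()       # command down, data frame up
        return self._processor.process(frame)
```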
图16a至图16c和图17a至图17c是包括根据本发明的复合成像系统的移动终端的两种创新实现方式的示意图。该移动终端利用780~880nm波段范围内的一颗或多颗红外LED作为光源,并且包括复合成像摄像头模组,其与根据本发明的复合成像系统相耦合。
图16a至图16c示出了包括本发明的复合成像系统的移动终端的一个优选实施例,其中图16a和16b示出了移动终端的结构配置,以及图16c示出了移动终端在使用时的用户体验图。在该实施例中,被实现为复合成像摄像头模组100的复合成像系统被布置于移动终端屏幕正面的一侧(比如屏幕顶部或者屏幕底部,这本实施方式中为顶部)。在本实施例中,红外光源150(例如,红外发光二极管(LED))和复合成像摄像头模组100被布置于移动终端屏幕正面的同一侧,其中红外光源150的位置与复合成像摄像头模组100的中心的水平距离在2-8厘米范围内,这有利于佩戴眼镜用户使用时的反射光斑的消除。红外光源150可以由中心光谱范围在780~880nm内的一颗或多颗红外LED组成。若以复合成像摄像头模组100置于移动终端屏幕上方为参考坐标方位,即模组100在屏幕的N方向,屏幕在模组的与N方向相反的S方向上,则在本实施例中,滤光片组件和图像传感器按如下方式进行配置:其中的可见光带通滤光片121置于红外光带通滤光片122的上方(N方向),对应的图像传感器的可见光成像区域置于红外光成像区域的上方(N方向)。在本实施例中,可见光成像区域的面积大于图像传感器的面积的50%,以及红外光成像区域的面积小于图像传感器的面积的50%,过渡区域位于可见光成像区域和红外光成像区域之间,其面积小于图像传感器的面积的15%。由于移动终端的立体结构包括 长度方向、宽度方向和厚度方向,因此,上述配置也可以被描述为:复合成像摄像头模组100和红外光源150沿移动终端的长度方向位于屏幕的上方;可见光带通滤光片121沿移动终端的长度方向置于红外光带通滤光片122的上方;以及图像传感器的可见光成像区域沿移动终端的长度方向置于红外光成像区域的上方,过渡区域位于可见光成像区域和红外光成像区域之间。此外,当红外光成像模式被激活时,可以在屏幕中提供眼部图像预览窗口160。该眼部图像预览窗口160仅输出对应的红外光成像区域(即生物特征成像)的图像,用于引导用户配合采集生物特征图像。在使用时,眼部图像预览窗口160的位置可以在移动终端的屏幕上被置于靠近屏幕的上侧或下侧。在本实施例中,如图16a所示,该眼部图像预览窗口160沿移动终端的长度方向位于屏幕区域的上部(即,N方向),也就是靠近复合成像摄像头模组100的一侧,这有利于在使用时,使用户的眼睛注视复合成像摄像头的方向,从而降低人眼上眼皮和睫毛对虹膜纹理特征的遮挡,以便得到更优、更丰富的虹膜图像以有利于进行识别。
在图16a至16c的实施例中,在上述配置的情况下,当使用时,在移动终端进入红外光成像模式对生物特征进行红外光成像(例如进入虹膜识别模式)时,在移动终端的屏幕的上部提供眼部图像预览窗口160,该图像预览窗口160对图像传感器的红外光成像区域进行预览输出来引导客户,如图16a所示。此时,用户可以如图16c所示的使移动终端的上部(即包含复合成像摄像头模组100的一侧)朝向用户侧倾斜靠近,使得用户在注视眼部图像预览窗口160时,其双眼的图像能够输出在该眼部图像预览窗口160中,从而能够采集到生物特征图像(例如,虹膜图像)以用于后续的预处理或加密识别过程。具体地,复合成像摄像头模组100采集来自生物特征的可见光和红外光。由于透镜成像倒立的原理,来自生物特征的红外光经过复合成像摄像头模组100进入移动终端内部,并且经过沿移动终端长度方向位于下方的红外光带通滤光片122而到达图像传感器的同样位于下方的红外光成像区域,从而对生物特征进行红外光成像。在本实施例的配置中,将红外光源150置于移动终端上方还有助于当移动终端上部朝向用户倾斜靠近时更充分地对生物特征进行照明,使得红外光源150的能量能够在生物特征识别时主要照明用户的生物特征(例如虹膜)。
图17a至图17c示出了包括本发明的复合成像系统的移动终端的另一个优选实施例。与图16a至16c的实施例相比,在图17a至图17c的实施例中,红外光源150沿移动终端的长度方向位于屏幕的下方(即,S方向),可见光带通滤光片121沿移动终端的长度方向置于红外光带通滤光片122的下方;以及图像传感器的可见光成像区域沿移动终端的长度方向置于红外光成像区域的下方,过渡区域位于可见光成像区域和红外光成像区域之间。与图16a至16c的实施例相类似地,当使用时,在移动终端进入红外光成像模式对生物特征进行红外光成像(例如进入虹膜识别模式)时,在移动终端的屏幕区域的上部(即N方向)提供眼部图像预览窗口160,该图像预览窗口160对图像传感器的红外光成像区域的进行预览输出来引导客户,如图17a所示。该眼部图像预览窗口160沿移动终端的长度方向位于屏幕区域的上部(即,N方向),也就是靠近复合成像摄像头模组100的一侧,这有利于在虹膜识别时,使用户的眼睛注视复合成像摄像头的方向,从而降低人眼上眼皮和睫毛对虹膜纹理特征的遮挡,以便得到更优、更丰富的虹膜图像以有利于进行识别。此时,用户可以如图17c所示的使移动终端的上部(即包含复合成像摄像头模组100的一侧)远离用户侧倾斜,使得用户在注视眼部图像预览窗口160时,通过软件进行平移控制来确保该眼部图像预览窗口160中的预览图像为图像传感器的红外光成像区域的预览,使得用户双眼的图像能够输出在该眼部图像预览窗口160中。从而能够采集到生物特征图像(例如,虹膜图像)以用于后续的预处理或加密识别过程。在本实施例的配置中,如果预览窗口置于移动终端的屏幕区域的下部,则会导致用户在使用过程中虹膜纹理被上眼皮和睫毛遮挡,所以本发明不倾向于这种配置。在本实施例的配置中,将红外光源150置于移动终端下方有助于当移动终端上部远离用户倾斜,用户手持移动终端进行生物识别时可以使生物特征被更充分地照明,使得红外光源150的能量能够在生物特征识别时主要照明用户的生物特征(例如虹膜)。
在本发明的复合成像系统或移动终端中,如果图像传感器采用13M的CMOS图像传感器(4680(W)x 3456(H))。其中,可见光部分高度大于整个图像传感器高度的50%(1728个像素),红外光部分高度小于整个图像传感器高度的50%(<1728个像素)。
滤光片放置于光学镜头和图像传感器之间,宽度和高度比图像传感器略大,滤光片的可见光和红外光可通过区域比图像传感器中的对应区域略大,以保证充分覆盖图像传感器中的对应区域。
根据优选实施例,图像传感器采用大分辨率的CMOS图像传感器,水平方向的图像分辨率在2400个像素以上。因为对于移动终端(尤其是手机)而言,前置摄像头的一个重要功能是自拍,自拍流行趋势是大视场角(镜头对角线视场角一般在70-80度左右),而虹膜识别要求对局部成像,一般要求小视场角,所以往往两个LENS不能结合。为了能够用复用一颗视场角较大的LENS即满足自拍又满足虹膜识别功能,需要增加整个成像芯片面积。比如按照ISO标准,单眼虹膜外圆直径需要有120个像素,若要在30CM能够用FOV_D(水平方向视场角)为59度左右的镜头进行虹膜识别,需要成像芯片水平方向至少是3200个像素,对应着8M的CMOS。如果用13M的CMOS,对FOV的压力就更小。因此,在进一步的优选实施例中,图像传感器采用13M的CMOS图像传感器(4680(W)x 3456(H))。图像传感器中可见光和红外光成像区域可以优选被设计为:可见光光成像区域高度为整个图像传感器高度的80%(2756个像素),红外光光成像区域高度为整个图像传感器高度的20%(700个像素)。
此外,可以在移动终端前置的显示的屏幕中加入辅助红外光源,其可以产生红外光源。当移动终端处于红外光成像模式下时,所述辅助红外光源能够对所述生物特征进行辅助红外光照明,从而节省移动终端上设置的红外光源的功率。随着软件的切换而点亮屏幕局部的红外屏幕部分对人眼虹膜进行照明。后续可用OLED光源对屏幕进行照明。此外,如果摄像头在屏幕的上方,则该图像传感器可被划分为:红外光部分在可见光部分的上方。
本发明的复合成像系统和方法可以具有如下优点:
1)单摄像头可见光和红外光复合成像,简化了硬件设计,没有运动部件来切换滤光片,大幅度提高了稳定性,并利用纯软件可实现可见光和红外光成像之间的切换
2)本发明的复合成像系统和方法可以保证移动设备前置摄像头的正常使用,比如自拍,而且可以在用户正常使用的距离上(比如20-50厘米)采集到满足生物特征(例如,虹膜)识别要求的红外生物特征 图像,不影响用户体验。
3)分区域红外光成像只需要图像传感器的一部分区域接收到红外光源的照明,而不是整个图像传感器,从而降低了图像传感器对红外照明光源的总功耗需求,即用较小能量和较小发射角的红外LED光源也可以保持该红外区域足够的红外光谱能量的吸收,以得到纹理细节丰富的生物特征图像。
4)本发明的复合成像系统和方法利用基于生物特征信息的近红外自动快速有效的对焦算法,保证了从可见光远焦距模式软件切换到红外光近焦距模式时的快速焦距校正,有利于提升获取的近红外生物特征图像的质量。
5)本发明的复合成像系统和方法在成像时可以通过一个摄像头模组进行可见光部分和红外光部分的成像。并且通过分析可见光和红外光图像的差异,可得到改进型的活体识别的方案。
6)本发明的复合成像系统和方法可以对获取的生物特征图像进行加密,保证用户的个人敏感信息的安全性。本发明中的加密方法通过图像质量判断进行有选择的图像加密,从而有效地降低了对加密芯片数据处理吞吐量的要求,保证了图像加密的实时性。另外,本发明中的加密方法通过降采样处理后输出的预览用图像,能够正确引导用户生物特征识别图像的采集,不影响用户体验。同时由于降采样处理后的预览图像并没有足够丰富的生物特征信息,所以不会造成用户个人敏感信息的泄露。
本发明以虹膜识别为例来说明本发明的成像功能复用的复合成像系统和方法。然而本发明的各方面并不局限于对人眼虹膜的识别,还可以应用到能够用于身份识别的其他生物特征,例如,眼白、指纹、视网膜、鼻子、人脸(二维或三维)、眼纹、唇纹以及静脉。
在此所使用的术语仅用于描述特定实施例的目的,而并非意欲限制本发明。如在此所使用的那样,单数形式的“一个”、“这个”意欲同样包括复数形式,除非上下文清楚地另有所指。还应当理解,当在此使用时,术语“包括”指定出现所声明的特征、整体、步骤、操作、元件和/或组件,但并不排除出现或添加一个或多个其他特征、整 体、步骤、操作、元件、组件和/或其群组。
除非另外定义,否则在此所使用的术语(包括技术术语和科学术语)具有与本发明所属领域的普通技术人员所共同理解的相同意义。在此所使用的术语应解释为具有与其在该说明书的上下文以及有关领域中的意义一致的意义,而不能以理想化的或过于正式的意义来解释,除非在此特意如此定义。
尽管上述已详细描述了一些实施例,但其他的修改是可能的。例如,为了实现期望的结果,在图中所描绘的逻辑流程不需要所示出的特定顺序,或连续的顺序。可提供其他的步骤,或者可从所描述的流程消除某些步骤,并且可以对所描述的系统增加其他的部件或从所描述的系统中去除组件。其他的实施例可以在之后的权利要求的范围内。

Claims (24)

  1. 一种成像功能复用的生物特征复合成像系统,包括:
    镜头组件,用于接收来自感兴趣区域的光;
    滤光片组件,用于对所接收的光进行过滤,以实现对允许通过波段的光进行成像,所述滤光片组件包含至少可见光带通区域和红外光带通区域,所述可见光带通区域仅允许可见光透过所述滤光片组件,以及所述红外光带通区域仅允许红外光透过所述滤光片组件;
    图像传感器,包括可见光成像区域和红外光成像区域以及两个区域之间的过渡区域,所述图像传感器在可见光成像模式和红外光成像模式之一下进行操作,其中所述可见光成像区域在所述可见光成像模式下对通过所述可见光带通区域的可见光进行成像,以及所述红外光成像区域在所述红外光成像模式下对通过所述红外光带通区域的红外光进行成像,其中所述红外光来自生物特征;
    其中,在所述红外光成像模式下,基于所述生物特征的特定物理属性作为图像质量信息来实现对所述感兴趣区域的生物特征的自动对焦。
  2. 如权利要求1所述的复合成像系统,进一步包括:
    运动部件,用于调节所述镜头组件的移动,以及
    微电机致动器,用于在任一成像模式下控制所述运动部件来移动所述镜头组件以便调整所述镜头组件的焦距
    其中,所述自动对焦包括基于所述生物特征的特定物理属性作为图像质量信息来利用所述微电机致动器控制所述运动部件移动所述镜头组件以实现对所述感兴趣区域的生物特征的自动对焦。
  3. 如权利要求1所述的复合成像系统,进一步包括:
    图像加密单元,用于对由所述图像传感器所生成的图像进行加密。
  4. 如权利要求3所述的复合成像系统,其中:
    当在所述红外光成像模式下由所述图像传感器生成红外光图像之后,所述图像加密单元对得到的生物特征的红外光图像进行加密并且输出加密后的图像以供进一步处理;以及
    在所述可见光成像模式下时,所述图像加密单元不对所述图像传 感器生成的可见光图像加密而是直接输出所生成的可见光图像。
  5. 如权利要求1所述的复合成像系统,其中:
    所述滤光片组件的面积大于所述图像传感器的面积。
  6. 如权利要求1所述的复合成像系统,其中:
    所述图像传感器是大分辨率CMOS图像传感器,所述大分辨率CMOS图像传感器在水平方向的图像分辨率大于2400个像素。
  7. 如权利要求1所述的复合成像系统,其中:
    所述复合成像系统包括与所述图像传感器的可见光成像区域对应的色彩滤镜,而不包括与所述图像传感器的红外光成像区域对应的色彩滤镜。
  8. 如权利要求1所述的复合成像系统,其中:
    当所述生物特征包括双眼时,所述特定物理属性包括所述双眼的虹膜的瞳孔间距。
  9. 如权利要求1所述的复合成像系统,其中:
    当所述生物特征包括单眼时,所述特定物理属性包括所述单眼的虹膜外圆直径。
  10. 如权利要求1所述的复合成像系统,其中:
    在通过软件控制所述可见光成像模式和红外光成像模式之间的切换时,对应于所述可见光成像模式和红外光成像模式预先计算的步长查找表进行映射来实现对所述可见光成像模式和红外光成像模式的快速对焦,其中,在所述红外光成像模式下所述镜头组件的位置相比在所述可见光模式下在轴向上更靠近人眼。
    所述红外光成像模式下的自动对焦进一步包括通过实时计算由所述图像传感器所生成每一帧图像的所述特定物理属性并且对应于预先计算的步长查找表进行映射来实现对所述生物特征的快速对焦。
  11. 如权利要求1所述的复合成像系统,进一步包括:
    活体检测单元,其被配置成执行活体检测,所述活体检测包括检测由所述图像传感器所生成的图像是来自真人还是来自伪造的生物特征。
  12. 如权利要求11所述的复合成像系统,其中,
    所述活体检测进一步包括对所述图像传感器生成的可见光图像和红外光图像进行计算并得到反射比率值,以及如果所述反射比率值在 预设范围内,则判断所生成的图像是来自假体或伪造的生物特征,否则,判断所生成的图像是来自真人或活体的生物特征。
  13. 一种成像功能复用的生物特征复合成像方法,包括:
    接收来自感兴趣区域的光;
    基于用户输入选择至少两个成像模式中的一个,所述成像模式包括可见光成像模式和红外光成像模式;
    在所选择的成像模式下对所接收的光进行过滤,其中在所述可见光成像模式下使可见光通过,以及在所述红外光成像模式下使红外光通过;以及
    在图像传感器的对应区域上对过滤后的光进行成像,其中在所述可见光成像模式下对通过的可见光进行成像输出,以及在所述红外光成像模式下对通过的红外光进行成像输出,其中所述红外光来自生物特征;
    其中,在所述红外光成像模式下,基于所述生物特征的特定物理属性作为图像质量信息来实现对所述感兴趣区域的生物特征的自动对焦。
  14. 如权利要求13所述的复合成像方法,其中:
    当所述生物特征包括双眼时,所述特定物理属性包括所述双眼的虹膜的瞳孔间距。
  15. 如权利要求13所述的复合成像方法,其中:
    当所述生物特征包括单眼时,所述特定物理属性包括所述单眼的虹膜外圆直径。
  16. 如权利要求13所述的复合成像方法,其中:
    在通过软件控制所述可见光成像模式和红外光成像模式之间的切换时,对应于所述可见光成像模式和红外光成像模式预先计算的步长查找表进行映射来实现对所述可见光成像模式和红外光成像模式的快速对焦;
    所述红外光模式下的自动对焦进一步包括通过实时计算由所述图像传感器所生成每一帧图像的所述特定物理属性并且对应于预先计算的步长查找表进行映射来实现对所述生物特征的快速对焦。
  17. 如权利要求13所述的复合成像方法,进一步包括:
    对由所述图像传感器所生成的图像进行加密。
  18. 如权利要求17所述的复合成像方法,其中所述加密进一步包括:
    当在所述红外光成像模式下由所述图像传感器生成红外光图像之后,对得到的生物特征的红外光图像进行加密并且输出加密后的图像以供进一步处理;以及
    在所述可见光成像模式下时,不对所述图像传感器生成的可见光图像加密而是直接输出所生成的可见光图像。
  19. 如权利要求13所述的复合成像方法,进一步包括:
    对由所述图像传感器所生成的图像进行活体检测,所述活体检测包括检测由所述图像传感器所生成的图像是来自真人还是来自伪造的生物特征。
  20. 如权利要求19所述的复合成像方法,其中,
    所述活体检测进一步包括对所述图像传感器生成的可见光图像和红外光图像进行计算并得到反射比率值,以及如果所述反射比率值在预设范围内,则判断所生成的图像是来自假体或伪造的生物特征,否则,判断所生成的图像是来自真人或活体的生物特征。
  21. 一种用于生物特征复合成像的移动终端,包括:
    红外光源,用于向所述生物特征发射红外光;
    屏幕,用于显示图像以及提供用于引导用户配合采集生物特征图像的眼部图像预览窗口,所述眼部图像预览窗口沿所述移动终端的长度方向位于所述屏幕的区域的上部;
    复合成像摄像头模组,所述复合成像摄像头模组进一步包括:
    镜头组件,用于接收来自感兴趣区域的光;
    滤光片组件,用于对所接收的光进行过滤,以实现对允许通过波段的光进行成像,所述滤光片组件包含至少可见光带通区域和红外光带通区域,所述可见光带通区域仅允许可见光透过所述滤光片组件,以及所述红外光带通区域仅允许红外光透过所述滤光片组件;
    图像传感器,包括可见光成像区域和红外光成像区域以及两个区域之间的过渡区域,所述图像传感器在可见光成像模式和红外光成像模式之一下进行操作,其中所述可见光成像区域在所述可见光成像模式下对通过所述可见光带通区域的可见光进行成像,以及所述红外光成像区域在所述红外光成像模式下对通过所述红外光带通区域的红外 光进行成像,其中所述红外光来自生物特征;
    其中,所述眼部图像预览窗口对所述图像传感器的红外光成像区域进行预览输出并且仅输出所述红外光成像区域的生物特征图像。
  22. 如权利要求21所述的移动终端,其中:
    所述复合成像摄像头模组和所述红外光源沿所述移动终端的长度方向位于所述屏幕的上方,其中所述可见光带通区域沿所述移动终端的长度方向置于所述红外光带通区域的上方,以及所述图像传感器的所述可见光成像区域沿所述移动终端的长度方向置于所述红外光成像区域的上方,使得所述移动终端在使用时,通过使所述移动终端的包含所述复合成像摄像头模组的一侧朝向用户侧倾斜靠近来将所述生物特征图像输出在所述眼部图像预览窗口中。
  23. 如权利要求21所述的移动终端,其中:
    所述复合成像摄像头模组和所述红外光源分别沿所述移动终端的长度方向位于所述屏幕的上方和下方,其中所述可见光带通区域沿所述移动终端的长度方向置于所述红外光带通区域的下方,以及所述图像传感器的所述可见光成像区域沿所述移动终端的长度方向置于所述红外光成像区域的下方,使得所述移动终端在使用时,通过使所述移动终端的包含所述复合成像摄像头模组的一侧远离用户侧倾斜并且通过平移控制确保所述眼部图像预览窗口中的预览图像为来自所述图像传感器的红外光成像区域的预览来将所述生物特征图像输出在所述眼部图像预览窗口中。
  24. 如权利要求21所述的移动终端,其中:
    所述屏幕进一步包括辅助红外光源,以及当所述复合成像摄像头模组处于红外光成像模式下时,所述屏幕的辅助红外光源对所述生物特征进行辅助红外光照明。
PCT/CN2016/073356 2016-02-03 2016-02-03 与可见光复用的生物特征复合成像系统和方法 WO2017132903A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201680081007.8A CN109074438A (zh) 2016-02-03 2016-02-03 与可见光复用的生物特征复合成像系统和方法
PCT/CN2016/073356 WO2017132903A1 (zh) 2016-02-03 2016-02-03 与可见光复用的生物特征复合成像系统和方法
US16/074,560 US10579871B2 (en) 2016-02-03 2016-02-03 Biometric composite imaging system and method reusable with visible light

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/073356 WO2017132903A1 (zh) 2016-02-03 2016-02-03 与可见光复用的生物特征复合成像系统和方法

Publications (1)

Publication Number Publication Date
WO2017132903A1 true WO2017132903A1 (zh) 2017-08-10

Family

ID=59500541

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/073356 WO2017132903A1 (zh) 2016-02-03 2016-02-03 与可见光复用的生物特征复合成像系统和方法

Country Status (3)

Country Link
US (1) US10579871B2 (zh)
CN (1) CN109074438A (zh)
WO (1) WO2017132903A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111263057A (zh) * 2018-12-03 2020-06-09 佳能株式会社 摄像设备、摄像设备的控制方法、计算方法和存储介质
CN113014747A (zh) * 2019-12-18 2021-06-22 中移物联网有限公司 屏下摄像头模组、图像处理方法及终端
EP3805983A4 (en) * 2018-06-08 2021-07-21 Vivo Mobile Communication Co., Ltd. SET AND TERMINAL FOR OPTICAL FINGERPRINT RECOGNITION
CN113505672A (zh) * 2021-06-30 2021-10-15 上海聚虹光电科技有限公司 虹膜采集装置、虹膜采集方法、电子设备和可读介质

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102649782B1 (ko) * 2016-04-13 2024-03-21 소니그룹주식회사 신호 처리 장치 및 촬상 장치
KR101786553B1 (ko) * 2017-03-02 2017-10-17 주식회사 에스카 시정상태의 변화에 강인한 복합 필터링 기반의 오토포커싱 기능을 갖는 감시카메라 및 그것이 적용된 영상감시시스템
CN107480589B (zh) * 2017-07-07 2020-08-04 Oppo广东移动通信有限公司 红外光源组件及电子装置
CN109002796B (zh) * 2018-07-16 2020-08-04 阿里巴巴集团控股有限公司 一种图像采集方法、装置和系统以及电子设备
US11023756B2 (en) * 2018-10-26 2021-06-01 Advanced New Technologies Co., Ltd. Spoof detection using iris images
CN111381599A (zh) * 2018-12-28 2020-07-07 中强光电股份有限公司 无人机避障系统及其控制方法
CN110245627B (zh) * 2019-06-19 2022-04-29 京东方科技集团股份有限公司 一种显示面板及显示装置
CN111447423A (zh) * 2020-03-25 2020-07-24 浙江大华技术股份有限公司 图像传感器、摄像装置及图像处理方法
US11321838B2 (en) * 2020-08-31 2022-05-03 Facebook Technologies, Llc. Distributed sensor module for eye-tracking
CN112200842B (zh) * 2020-09-11 2023-12-01 深圳市优必选科技股份有限公司 一种图像配准方法、装置、终端设备及存储介质
CN112906529A (zh) * 2021-02-05 2021-06-04 深圳前海微众银行股份有限公司 人脸识别补光方法、装置、人脸识别设备及其系统
CN113225485B (zh) * 2021-03-19 2023-02-28 浙江大华技术股份有限公司 图像采集组件、融合方法、电子设备及存储介质
CN114143427A (zh) * 2021-11-23 2022-03-04 歌尔科技有限公司 摄像头组件、移动终端和基于摄像头的体温测量方法
US20230277064A1 (en) * 2022-03-01 2023-09-07 Mimosa Diagnostics Inc. Releasable portable imaging device for multispectral mobile tissue assessment
US20230412907A1 (en) * 2022-05-20 2023-12-21 Sony Interactive Entertainment Inc. Near infrared (nir) transparent organic light emitting diode (oled) display
US20230388620A1 (en) * 2022-05-26 2023-11-30 Motorola Mobility Llc Visual Feature Based Video Effects
US20240080552A1 (en) * 2022-09-06 2024-03-07 Qualcomm Incorporated Systems and methods of imaging with multi-domain image sensor

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014014153A1 (ko) * 2012-07-16 2014-01-23 아이리텍 인크 일반촬영 및 홍채인식촬영모드를 갖는 홍채인식겸용 카메라
CN103593647A (zh) * 2013-10-21 2014-02-19 王晓鹏 一种生物特征成像的方法与设备
WO2014205021A1 (en) * 2013-06-18 2014-12-24 Delta ID Inc. Multiple mode image acquisition for iris imaging
CN104394306A (zh) * 2014-11-24 2015-03-04 北京中科虹霸科技有限公司 用于虹膜识别的多通道多区域镀膜的摄像头模组及设备
CN204362181U (zh) * 2014-12-05 2015-05-27 北京蚁视科技有限公司 同时采集红外光图像和可见光图像的图像采集装置
CN105100567A (zh) * 2015-07-13 2015-11-25 南昌欧菲光电技术有限公司 成像装置及移动终端

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103152517B (zh) * 2013-02-06 2018-06-22 北京中科虹霸科技有限公司 用于移动虹膜识别设备的成像模组及移动设备
CN203734738U (zh) * 2014-02-28 2014-07-23 北京中科虹霸科技有限公司 一种用于移动终端的虹膜识别摄像头模组
CN107257433B (zh) * 2017-06-16 2020-01-17 Oppo广东移动通信有限公司 对焦方法、装置、终端和计算机可读存储介质
US10740431B2 (en) * 2017-11-13 2020-08-11 Samsung Electronics Co., Ltd Apparatus and method of five dimensional (5D) video stabilization with camera and gyroscope fusion
US11006038B2 (en) * 2018-05-02 2021-05-11 Qualcomm Incorporated Subject priority based image capture

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014014153A1 (ko) * 2012-07-16 2014-01-23 아이리텍 인크 일반촬영 및 홍채인식촬영모드를 갖는 홍채인식겸용 카메라
WO2014205021A1 (en) * 2013-06-18 2014-12-24 Delta ID Inc. Multiple mode image acquisition for iris imaging
CN103593647A (zh) * 2013-10-21 2014-02-19 王晓鹏 一种生物特征成像的方法与设备
CN104394306A (zh) * 2014-11-24 2015-03-04 北京中科虹霸科技有限公司 用于虹膜识别的多通道多区域镀膜的摄像头模组及设备
CN204362181U (zh) * 2014-12-05 2015-05-27 北京蚁视科技有限公司 同时采集红外光图像和可见光图像的图像采集装置
CN105100567A (zh) * 2015-07-13 2015-11-25 南昌欧菲光电技术有限公司 成像装置及移动终端

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3805983A4 (en) * 2018-06-08 2021-07-21 Vivo Mobile Communication Co., Ltd. SET AND TERMINAL FOR OPTICAL FINGERPRINT RECOGNITION
US11430251B2 (en) 2018-06-08 2022-08-30 Vivo Mobile Communication Co., Ltd. Optical fingerprint identification assembly and terminal
CN111263057A (zh) * 2018-12-03 2020-06-09 佳能株式会社 摄像设备、摄像设备的控制方法、计算方法和存储介质
CN113014747A (zh) * 2019-12-18 2021-06-22 中移物联网有限公司 屏下摄像头模组、图像处理方法及终端
CN113014747B (zh) * 2019-12-18 2023-04-28 中移物联网有限公司 屏下摄像头模组、图像处理方法及终端
CN113505672A (zh) * 2021-06-30 2021-10-15 上海聚虹光电科技有限公司 虹膜采集装置、虹膜采集方法、电子设备和可读介质
CN113505672B (zh) * 2021-06-30 2024-03-12 上海聚虹光电科技有限公司 虹膜采集装置、虹膜采集方法、电子设备和可读介质

Also Published As

Publication number Publication date
CN109074438A (zh) 2018-12-21
US10579871B2 (en) 2020-03-03
US20190065845A1 (en) 2019-02-28

Similar Documents

Publication Publication Date Title
WO2017132903A1 (zh) 与可见光复用的生物特征复合成像系统和方法
US10311298B2 (en) Biometric camera
CA3084546C (en) Enhancing the performance of near-to-eye vision systems
WO2017049923A1 (zh) 一种多功能移动图像处理装置、处理方法及用途
WO2017049922A1 (zh) 一种图像信息采集装置、图像采集方法及其用途
WO2016070781A1 (zh) 移动终端可见光和生物识别组合光电成像系统及方法
CN205666883U (zh) 支持近红外光与可见光成像的复合成像系统和移动终端
WO2017161520A1 (zh) 支持近红外光与可见光成像的复合成像系统和移动终端
EP3011495B1 (en) Multiple mode image acquisition for iris imaging
CN103024338B (zh) 具有图像捕获和分析模块的显示设备
WO2015172514A1 (zh) 图像采集装置和方法
US20170061210A1 (en) Infrared lamp control for use with iris recognition authentication
CN206370880U (zh) 一种双摄像头成像系统和移动终端
JP6564271B2 (ja) 撮像装置及び画像処理方法、プログラム、並びに記憶媒体
US9300858B2 (en) Control device and storage medium for controlling capture of images
JP2001005948A (ja) 虹彩撮像装置
Thavalengal et al. Proof-of-concept and evaluation of a dual function visible/NIR camera for iris authentication in smartphones
EP4156082A1 (en) Image transformation method and apparatus
KR20140050603A (ko) 이동 식별 플랫폼
CN106934349B (zh) 双摄像头成像及虹膜采集识别一体化设备
KR102506363B1 (ko) 정확히 2개의 카메라를 갖는 디바이스 및 이 디바이스를 사용하여 2개의 이미지를 생성하는 방법
JP2000201289A (ja) 映像入出力装置及び映像取得方法
JP2007058507A (ja) 視線検出装置
CN107430276A (zh) 头戴式显示设备
JP2007236668A (ja) 撮影装置および認証装置ならびに撮影方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16888728

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16888728

Country of ref document: EP

Kind code of ref document: A1